mirror of https://github.com/rclone/rclone.git synced 2025-12-06 00:03:32 +00:00

Compare commits


57 Commits

Author SHA1 Message Date
Nick Craig-Wood
bc279ec4b0 zip: backend to read or create zip files FIXME WIP
Use as `:zip:remote:path/to/file.zip` for reading or writing.

- reading zip files works - can mount zip files
- writing works
- unknowns in writing like end?
- lots of bodges
2023-08-08 11:24:17 +01:00
Nick Craig-Wood
0cccda61db docs: add some more docs on making your own backend 2023-08-05 04:25:27 +01:00
Nick Craig-Wood
de185de215 accounting: show server side stats in own lines and not as bytes transferred
Before this change we showed both server side moves and server side
copies as bytes transferred.

This made a nice, easy to use stats display, but it also caused
confusion for users who saw unrealistic transfer times. It also caused
a problem with --max-transfer and chunker, which renames each chunk
after uploading; each rename was counted as transferred bytes.

This patch instead accounts the server side move and copy statistics
as separate lines in the stats display, which will only appear if
there are any server side moves / copies. This is also output in the
rc.

This gives users something to look at when transfers are running,
which was the point of the original change, but it now means that
transferred bytes represent data transferred through this rclone
instance only.

Fixes #7183
2023-08-05 03:54:01 +01:00
Nick Craig-Wood
d362db2e08 rclone test info: add --check-base32768 flag to check the remote can store all base32768 characters
Fixes #7208
2023-08-05 02:59:58 +01:00
Nick Craig-Wood
db2a49e384 Add Raymond Berger to contributors 2023-08-05 02:59:58 +01:00
Kaloyan Raev
d63fcc6e44 storj: performance improvement for large file uploads
storj.io/uplink v1.11.0 comes with improved logic for uploading large
files, where file segments are uploaded concurrently instead of
serially. This allows rclone to fully utilize the network connection
during the entire upload process.

This change enables the new upload logic.
2023-08-04 17:40:03 +09:00
kapitainsky
4444037f5c docs: box client_id creation
See: https://forum.rclone.org/t/box-getting-403-errors-when-using-chunker-working-fine-without-it/40388/22
2023-08-03 14:51:49 +01:00
Raymond Berger
a555513c26 docs: add missing comma to overview webdav footnote 2023-08-03 14:50:04 +01:00
Nick Craig-Wood
039c260216 build: update to go1.21rc4 2023-08-03 13:53:43 +01:00
Nick Craig-Wood
4577c08e05 Add Julian Lepinski to contributors 2023-08-03 13:53:43 +01:00
Nick Craig-Wood
c9ed691919 docs: add minio as a sponsor 2023-08-03 08:39:15 +01:00
Julian Lepinski
9f96c0d4ea swift: fix HEADing 0-length objects when --swift-no-large-objects set
The Swift backend does not always respect the flag telling it to skip
HEADing zero-length objects. This commit fixes that for ls/lsl/lsf.

Swift returns zero length for dynamic large object files when they're
included in a files lookup, which means that determining their size
requires HEADing each file that returns a size of zero. rclone's
--swift-no-large-objects instructs rclone that no large objects are
present and accordingly rclone should not HEAD files that return zero
length.

When rclone is performing an ls / lsf / lsl type lookup, however, it
continues to HEAD any zero length objects it encounters, even with
this flag set. Accordingly, this change causes rclone to respect the
flag in these situations.

NB: this will cause rclone to incorrectly report zero length for any
dynamic large objects encountered while the --swift-no-large-objects
flag is set.
2023-08-03 08:38:39 +01:00
Nick Craig-Wood
91d095f468 docs: update command docs to new style 2023-08-02 12:53:09 +01:00
Nick Craig-Wood
bff702a6f1 docs: group the global flags and make them appear on command and flags pages
This adds an additional parameter to the creation of each flag. This
specifies one or more flag groups. This **must** be set for global
flags and **must not** be set for local flags.

This causes flags.md to be built with sections to aid comprehension
and it causes the documentation pages for each command (and the
`--help`) to be built showing the flags groups as specified in the
`groups` annotation on the command.

See: https://forum.rclone.org/t/make-docs-for-mortals-not-only-rclone-gurus/39476/
2023-08-02 12:53:09 +01:00
Nick Craig-Wood
a1d6bbd31f Add rclone completion powershell - basic implementation only 2023-08-02 12:53:09 +01:00
Nick Craig-Wood
fb6a9dfbf3 docs: fix rclone config edit docs 2023-08-02 12:53:09 +01:00
Nick Craig-Wood
3f3c5f3ff4 build: remove unused package cmd/serve/http/data
This was superseded by lib/http/template.go
2023-08-02 12:53:09 +01:00
Nick Craig-Wood
89196cb353 Add nielash to contributors 2023-08-02 12:53:09 +01:00
Nick Craig-Wood
9284506b86 Add Zach to contributors 2023-08-02 12:53:09 +01:00
yuudi
88c72d1f4d http: fix webdav OPTIONS response (#6433) 2023-08-01 11:48:41 +09:00
Paul
5e3bf50b2e webdav: nextcloud: fix segment violation in low-level retry
Fix https://github.com/rclone/rclone/issues/7168

Co-authored-by: ncw <nick@craig-wood.com>
Co-authored-by: Paul <devnoname120@gmail.com>
2023-08-01 11:15:33 +09:00
nielash
982f76b4df sftp: support dynamic --sftp-path-override
Before this change, rclone always expected --sftp-path-override to be
the absolute SSH path to remote:path/subpath which effectively made it
unusable for wrapped remotes (for example, when used with a crypt
remote, the user would need to provide the full decrypted path.)

After this change, the old behavior remains the default, but dynamic
paths are now also supported, if the user adds '@' as the first
character of --sftp-path-override. Rclone will ignore the '@' and
treat the rest of the string as the path to the SFTP remote's root.
Rclone will then add any relative subpaths automatically (including
unwrapping/decrypting remotes as necessary).

In other words, the path_override config parameter can now be used to
specify the difference between the SSH and SFTP paths. Once specified
in the config, it is no longer necessary to re-specify for each
command.

See: https://forum.rclone.org/t/sftp-path-override-breaks-on-wrapped-remotes/40025
2023-07-30 03:12:07 +01:00
Zach
347812d1d3 ftp,sftp: add socks_proxy support for SOCKS5 proxies
Fixes #3558
2023-07-30 03:02:08 +01:00
yuudi
f4449440f8 http: CORS should not be sent if not set (#6433) 2023-07-29 16:12:31 +09:00
kapitainsky
e66675d346 docs: rclone backend restore 2023-07-29 11:31:16 +09:00
Nick Craig-Wood
45228e2f18 build: update dependencies
This does not update bazil/fuse because it does not build on freebsd

https://github.com/bazil/fuse/issues/295

This partially updates the prometheus library as the latest no longer compiles with plan9

https://github.com/prometheus/procfs/issues/554
2023-07-29 01:57:23 +01:00
Nick Craig-Wood
b866850fdd Add yuudi to contributors 2023-07-29 01:57:23 +01:00
yuudi
5b63b9534f rc: add execute-id for job-id 2023-07-28 18:35:14 +09:00
Nick Craig-Wood
10449c86a4 sftp: add --sftp-ssh to specify an external ssh binary to use
This allows using an external ssh binary instead of the built in ssh
library for making SFTP connections.

This adds another integration test target, TestSFTPRcloneSSH.

Fixes #7012
2023-07-28 10:29:02 +01:00
Nick Craig-Wood
26a9a9fed2 Add Niklas Hambüchen to contributors 2023-07-28 10:29:02 +01:00
Chun-Hung Tseng
602e42d334 protondrive: fix a bug in parsing User metadata (#7174) 2023-07-28 11:03:23 +02:00
Niklas Hambüchen
4c5a21703e docs: dropbox: Explain that Teams needs "Full Dropbox" 2023-07-28 17:52:29 +09:00
Nick Craig-Wood
f2ee949eff fichier: implement DirMove
See: https://forum.rclone.org/t/1fichier-rclone-does-not-allow-to-rename-files-and-folders-when-you-mount-a-1fichier-disk-drive/24726/
2023-07-28 01:25:42 +01:00
kapitainsky
3ad255172c docs: b2 versions names caveat 2023-07-28 09:23:34 +09:00
Nick Craig-Wood
29b1751d0e serve webdav: fix error: Expecting fs.Object or fs.Directory, got <nil>
Before this change rclone serve webdav would sometimes give this error

    Expecting fs.Object or fs.Directory, got <nil>

It turns out that when a file is being updated it doesn't have a
DirEntry and it is allowed to be <nil> so in this case we create the
mime type from the extension.

See: https://forum.rclone.org/t/webdav-union-of-onedrive-expecting-fs-object-or-fs-directory-got-nil/40298
2023-07-28 00:54:45 +01:00
kapitainsky
363da9aa82 docs: s3 versions names caveat 2023-07-27 12:36:50 +09:00
yuudi
6c8148ef39 http servers: allow CORS to be set with --allow-origin flag - fixes #5078
Some changes to the test cases:
Because MiddlewareCORS returns early on OPTIONS requests, this
middleware should only be added once, in the NewServer function.
Test cases should pass the AllowOrigin config instead of adding
this middleware again.

A new test case was added to test a CORS preflight request with
an authenticator. Preflight requests should always return 200 OK
regardless of authentication.

Co-authored-by: yuudi <yuudi@users.noreply.github.com>
2023-07-26 10:15:54 +01:00
Nick Craig-Wood
3ed4a2e963 sftp: stop uploads re-using the same ssh connection to improve performance
Before this change we released the ssh connection back to the pool
before the upload was finished.

This meant that uploads were re-using the same ssh connection which
reduces throughput.

This releases the ssh connection back to the pool only after the
upload has finished, or on error state.

See: https://forum.rclone.org/t/sftp-backend-opens-less-connection-than-expected/40245
2023-07-25 13:05:37 +01:00
Anagh Kumar Baranwal
aaadb48d48 vfs: keep virtual directory status accurate and reduce deadlock potential
This changes hasVirtual to an atomic struct variable that's updated on
add or delete from the virtual map.

This keeps it up to date and avoids deadlocks.

Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-07-25 09:08:16 +01:00
Anagh Kumar Baranwal
52e25c43b9 vfs: Added cache cleaner for directories to reduce memory usage
This empties the directory cache after twice the directory cache
period to release memory.

Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-07-25 09:08:16 +01:00
Nick Craig-Wood
9a66563fc6 Add Edwin Mackenzie-Owen to contributors 2023-07-25 09:08:16 +01:00
Nick Craig-Wood
6ca670d66a Add Tiago Boeing to contributors 2023-07-25 09:08:16 +01:00
Nick Craig-Wood
809653055d Add gabriel-suela to contributors 2023-07-25 09:08:16 +01:00
Nick Craig-Wood
61325ce507 Add Ricardo D'O. Albanus to contributors 2023-07-25 09:08:16 +01:00
Edwin Mackenzie-Owen
c3989d1906 smb: implement multi-threaded writes for copies to smb
smb2.File implements the WriterAtCloser interface defined in
fs/types.go. Expose it via a OpenWriterAt method on
the fs struct to support multi-threaded writes.
2023-07-25 08:31:36 +01:00
Tiago Boeing
a79887171c docs: mega: update with solution when receiving killed on process 2023-07-25 04:21:37 +01:00
Chun-Hung Tseng
f29e284c90 protondrive: fix download signature verification bug (#7169) 2023-07-24 14:54:39 +02:00
Chun-Hung Tseng
9a66086fa0 protondrive: fix bug in digests parsing (#7164) 2023-07-24 09:00:18 +02:00
Chun-Hung Tseng
1845c261c6 protondrive: fix missing file sha1 and appstring issues (#7163) 2023-07-24 08:56:21 +02:00
Chun-Hung Tseng
70cbcef624 Add Chun-Hung Tseng to Maintainer (#7162) 2023-07-23 16:29:24 +02:00
gabriel-suela
9169b2b5ab cmd: fix log message typo 2023-07-23 08:43:03 +09:00
Ricardo D'O. Albanus
0957c8fb74 chunker: Update documentation to mention issue with small files
See: https://forum.rclone.org/t/chunker-not-deactivating-for-small-files-and-wasting-api-calls/40122
2023-07-23 00:40:50 +01:00
Anagh Kumar Baranwal
bb0cd76a5f fix: mount parsing for linux 2023-07-22 17:29:20 +05:30
Nick Craig-Wood
08240c8cf5 Add Chun-Hung Tseng to contributors 2023-07-22 10:54:21 +01:00
Chun-Hung Tseng
014acc902d protondrive: add protondrive backend - fixes #6072 2023-07-22 10:46:21 +01:00
Benjamin
33fec9c835 doc: Fix Leviia block 2023-07-18 19:58:19 +01:00
kapitainsky
3a5ffc7839 docs: mention Box as base32768 compatible
As many people suddenly move to Box (another "unlimited" cloud storage migration saga), there are frequent questions about which file name encoding to use with crypt.

Box is base32768 friendly.

It has been tested with:

https://pub.rclone.org/base32768.zip

and:

rclone test info --check-length boxremote:

maxFileLength = 255 // for 1 byte unicode characters
maxFileLength = 255 // for 2 byte unicode characters
maxFileLength = 255 // for 3 byte unicode characters
maxFileLength = -1 // for 4 byte unicode characters
2023-07-18 19:55:54 +01:00
213 changed files with 7232 additions and 1835 deletions


@@ -32,7 +32,7 @@ jobs:
include:
- job_name: linux
os: ubuntu-latest
go: '1.21.0-rc.3'
go: '1.21.0-rc.4'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@@ -43,14 +43,14 @@ jobs:
- job_name: linux_386
os: ubuntu-latest
go: '1.21.0-rc.3'
go: '1.21.0-rc.4'
goarch: 386
gotags: cmount
quicktest: true
- job_name: mac_amd64
os: macos-11
go: '1.21.0-rc.3'
go: '1.21.0-rc.4'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
@@ -59,14 +59,14 @@ jobs:
- job_name: mac_arm64
os: macos-11
go: '1.21.0-rc.3'
go: '1.21.0-rc.4'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows
os: windows-latest
go: '1.21.0-rc.3'
go: '1.21.0-rc.4'
gotags: cmount
cgo: '0'
build_flags: '-include "^windows/"'
@@ -76,7 +76,7 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '1.21.0-rc.3'
go: '1.21.0-rc.4'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true
deploy: true
@@ -244,7 +244,7 @@ jobs:
- name: Install Go
uses: actions/setup-go@v4
with:
go-version: '1.21.0-rc.3'
go-version: '1.21.0-rc.4'
check-latest: true
- name: Install govulncheck
@@ -269,7 +269,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v4
with:
go-version: '1.21.0-rc.3'
go-version: '1.21.0-rc.4'
- name: Go module cache
uses: actions/cache@v3


@@ -419,7 +419,7 @@ remote or an fs.
Research
* Look at the interfaces defined in `fs/fs.go`
* Look at the interfaces defined in `fs/types.go`
* Study one or more of the existing remotes
Getting going
@@ -428,14 +428,19 @@ Getting going
* box is a good one to start from if you have a directory-based remote
* b2 is a good one to start from if you have a bucket-based remote
* Add your remote to the imports in `backend/all/all.go`
* HTTP based remotes are easiest to maintain if they use rclone's rest module, but if there is a really good go SDK then use that instead.
* HTTP based remotes are easiest to maintain if they use rclone's [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) module, but if there is a really good go SDK then use that instead.
* Try to implement as many optional methods as possible as it makes the remote more usable.
* Use lib/encoder to make sure we can encode any path name and `rclone info` to help determine the encodings needed
* Use [lib/encoder](https://pkg.go.dev/github.com/rclone/rclone/lib/encoder) to make sure we can encode any path name and `rclone info` to help determine the encodings needed
* `rclone purge -v TestRemote:rclone-info`
* `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
* `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
* open `remote.csv` in a spreadsheet and examine
Important:
* Please use [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) if you are implementing a REST like backend and parsing XML/JSON in the backend. It makes maintenance much easier.
* If your backend is HTTP based then please use rclone's Client or Transport from [fs/fshttp](https://pkg.go.dev/github.com/rclone/rclone/fs/fshttp) - this adds features like `--dump bodies`, `--tpslimit`, `--user-agent` without you having to code anything!
Unit tests
* Create a config entry called `TestRemote` for the unit tests to use


@@ -18,6 +18,7 @@ Current active maintainers of rclone are:
| Caleb Case | @calebcase | storj backend |
| wiserain | @wiserain | pikpak backend |
| albertony | @albertony | |
| Chun-Hung Tseng | @henrybear327 | Proton Drive Backend |
**This is a work in progress Draft**


@@ -38,6 +38,7 @@ import (
_ "github.com/rclone/rclone/backend/pcloud"
_ "github.com/rclone/rclone/backend/pikpak"
_ "github.com/rclone/rclone/backend/premiumizeme"
_ "github.com/rclone/rclone/backend/protondrive"
_ "github.com/rclone/rclone/backend/putio"
_ "github.com/rclone/rclone/backend/qingstor"
_ "github.com/rclone/rclone/backend/s3"
@@ -53,5 +54,6 @@ import (
_ "github.com/rclone/rclone/backend/uptobox"
_ "github.com/rclone/rclone/backend/webdav"
_ "github.com/rclone/rclone/backend/yandex"
_ "github.com/rclone/rclone/backend/zip"
_ "github.com/rclone/rclone/backend/zoho"
)


@@ -408,6 +408,32 @@ func (f *Fs) moveFile(ctx context.Context, url string, folderID int, rename stri
return response, nil
}
func (f *Fs) moveDir(ctx context.Context, folderID int, newLeaf string, destinationFolderID int) (response *MoveDirResponse, err error) {
request := &MoveDirRequest{
FolderID: folderID,
DestinationFolderID: destinationFolderID,
Rename: newLeaf,
// DestinationUser: destinationUser,
}
opts := rest.Opts{
Method: "POST",
Path: "/folder/mv.cgi",
}
response = &MoveDirResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("couldn't move dir: %w", err)
}
return response, nil
}
func (f *Fs) copyFile(ctx context.Context, url string, folderID int, rename string) (response *CopyFileResponse, err error) {
request := &CopyFileRequest{
URLs: []string{url},


@@ -488,6 +488,51 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return dstObj, nil
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove.
//
// If destination exists then return fs.ErrorDirExists.
//
// This is complicated by the fact that we can't use moveDir to move
// to a different directory AND rename at the same time as it can
// overwrite files in the source directory.
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error {
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
srcID, _, _, dstDirectoryID, dstLeaf, err := f.dirCache.DirMove(ctx, srcFs.dirCache, srcFs.root, srcRemote, f.root, dstRemote)
if err != nil {
return err
}
srcIDnumeric, err := strconv.Atoi(srcID)
if err != nil {
return err
}
dstDirectoryIDnumeric, err := strconv.Atoi(dstDirectoryID)
if err != nil {
return err
}
var resp *MoveDirResponse
resp, err = f.moveDir(ctx, srcIDnumeric, dstLeaf, dstDirectoryIDnumeric)
if err != nil {
return fmt.Errorf("couldn't rename leaf: %w", err)
}
if resp.Status != "OK" {
return fmt.Errorf("couldn't rename leaf: %s", resp.Message)
}
srcFs.dirCache.FlushDir(srcRemote)
return nil
}
// Copy src to this remote using server side copy operations.
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
@@ -561,6 +606,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)


@@ -70,6 +70,22 @@ type MoveFileResponse struct {
URLs []string `json:"urls"`
}
// MoveDirRequest is the request structure of the corresponding request
type MoveDirRequest struct {
FolderID int `json:"folder_id"`
DestinationFolderID int `json:"destination_folder_id,omitempty"`
DestinationUser string `json:"destination_user"`
Rename string `json:"rename,omitempty"`
}
// MoveDirResponse is the response structure of the corresponding request
type MoveDirResponse struct {
Status string `json:"status"`
Message string `json:"message"`
OldName string `json:"old_name"`
NewName string `json:"new_name"`
}
// CopyFileRequest is the request structure of the corresponding request
type CopyFileRequest struct {
URLs []string `json:"urls"`


@@ -28,6 +28,7 @@ import (
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/env"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/proxy"
"github.com/rclone/rclone/lib/readers"
)
@@ -174,6 +175,18 @@ Enabled by default. Use 0 to disable.`,
If this is set and no password is supplied then rclone will ask for a password
`,
Advanced: true,
}, {
Name: "socks_proxy",
Default: "",
Help: `Socks 5 proxy host.
Supports the format user:pass@host:port, user@host:port, host:port.
Example:
myUser:myPass@localhost:9005
`,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -218,6 +231,7 @@ type Options struct {
ShutTimeout fs.Duration `config:"shut_timeout"`
AskPassword bool `config:"ask_password"`
Enc encoder.MultiEncoder `config:"encoding"`
SocksProxy string `config:"socks_proxy"`
}
// Fs represents a remote FTP server
@@ -359,7 +373,12 @@ func (f *Fs) ftpConnection(ctx context.Context) (c *ftp.ServerConn, err error) {
defer func() {
fs.Debugf(f, "> dial: conn=%T, err=%v", conn, err)
}()
conn, err = fshttp.NewDialer(ctx).Dial(network, address)
baseDialer := fshttp.NewDialer(ctx)
if f.opt.SocksProxy != "" {
conn, err = proxy.SOCKS5Dial(network, address, f.opt.SocksProxy, baseDialer)
} else {
conn, err = baseDialer.Dial(network, address)
}
if err != nil {
return nil, err
}

File diff suppressed because it is too large.


@@ -0,0 +1,16 @@
package protondrive_test
import (
"testing"
"github.com/rclone/rclone/backend/protondrive"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestProtonDrive:",
NilObject: (*protondrive.Object)(nil),
})
}


@@ -4421,17 +4421,17 @@ to normal storage.
Usage Examples:
rclone backend restore s3:bucket/path/to/object [-o priority=PRIORITY] [-o lifetime=DAYS]
rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS]
rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS]
rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags
rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard
rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
All the objects shown will be marked for restore, then
rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard
rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
It returns a list of status dictionaries with Remote and Status
keys. The Status will be OK if it was successful or an error message


@@ -27,7 +27,6 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/env"
"github.com/rclone/rclone/lib/pacer"
@@ -169,7 +168,19 @@ E.g. if shared folders can be found in directories representing volumes:
E.g. if home directory can be found in a shared folder called "home":
rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory`,
rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory
To specify only the path to the SFTP remote's root, and allow rclone to add any relative subpaths automatically (including unwrapping/decrypting remotes as necessary), add the '@' character to the beginning of the path.
E.g. the first example above could be rewritten as:
rclone sync /home/local/directory remote:/directory --sftp-path-override @/volume2
Note that when using this method with Synology "home" folders, the full "/homes/USER" path should be specified instead of "/home".
E.g. the second example above should be rewritten as:
rclone sync /home/local/directory remote:/homes/USER/directory --sftp-path-override @/volume1`,
Advanced: true,
}, {
Name: "set_modtime",
@@ -388,6 +399,47 @@ Example:
ssh-ed25519 ssh-rsa ssh-dss
`,
Advanced: true,
}, {
Name: "ssh",
Default: fs.SpaceSepList{},
Help: `Path and arguments to external ssh binary.
Normally rclone will use its internal ssh library to connect to the
SFTP server. However it does not implement all possible ssh options so
it may be desirable to use an external ssh binary.
Rclone ignores all the internal config if you use this option and
expects you to configure the ssh binary with the user/host/port and
any other options you need.
**Important** The ssh command must log in without asking for a
password so needs to be configured with keys or certificates.
Rclone will run the command supplied either with the additional
arguments "-s sftp" to access the SFTP subsystem or with commands such
as "md5sum /path/to/file" appended to read checksums.
Any arguments with spaces in should be surrounded by "double quotes".
An example setting might be:
ssh -o ServerAliveInterval=20 user@example.com
Note that when using an external ssh binary rclone makes a new ssh
connection for every hash it calculates.
`,
}, {
Name: "socks_proxy",
Default: "",
Help: `Socks 5 proxy host.
Supports the format user:pass@host:port, user@host:port, host:port.
Example:
myUser:myPass@localhost:9005
`,
Advanced: true,
}},
}
fs.Register(fsi)
@@ -427,6 +479,8 @@ type Options struct {
KeyExchange fs.SpaceSepList `config:"key_exchange"`
MACs fs.SpaceSepList `config:"macs"`
HostKeyAlgorithms fs.SpaceSepList `config:"host_key_algorithms"`
SSH fs.SpaceSepList `config:"ssh"`
SocksProxy string `config:"socks_proxy"`
}
// Fs stores the interface to the remote SFTP files
@@ -463,41 +517,16 @@ type Object struct {
sha1sum *string // Cached SHA1 checksum
}
// dial starts a client connection to the given SSH server. It is a
// convenience function that connects to the given network address,
// initiates the SSH handshake, and then sets up a Client.
func (f *Fs) dial(ctx context.Context, network, addr string, sshConfig *ssh.ClientConfig) (*ssh.Client, error) {
dialer := fshttp.NewDialer(ctx)
conn, err := dialer.Dial(network, addr)
if err != nil {
return nil, err
}
c, chans, reqs, err := ssh.NewClientConn(conn, addr, sshConfig)
if err != nil {
return nil, err
}
fs.Debugf(f, "New connection %s->%s to %q", c.LocalAddr(), c.RemoteAddr(), c.ServerVersion())
return ssh.NewClient(c, chans, reqs), nil
}
// conn encapsulates an ssh client and corresponding sftp client
type conn struct {
sshClient *ssh.Client
sshClient sshClient
sftpClient *sftp.Client
err chan error
}
// Wait for connection to close
func (c *conn) wait() {
c.err <- c.sshClient.Conn.Wait()
}
// Send a keepalive over the ssh connection
func (c *conn) sendKeepAlive() {
_, _, err := c.sshClient.SendRequest("keepalive@openssh.com", true, nil)
if err != nil {
fs.Debugf(nil, "Failed to send keep alive: %v", err)
}
c.err <- c.sshClient.Wait()
}
// Send keepalives every interval over the ssh connection until done is closed
@@ -509,7 +538,7 @@ func (c *conn) sendKeepAlives(interval time.Duration) (done chan struct{}) {
for {
select {
case <-t.C:
c.sendKeepAlive()
c.sshClient.SendKeepAlive()
case <-done:
return
}
@@ -561,7 +590,11 @@ func (f *Fs) sftpConnection(ctx context.Context) (c *conn, err error) {
c = &conn{
err: make(chan error, 1),
}
c.sshClient, err = f.dial(ctx, "tcp", f.opt.Host+":"+f.opt.Port, f.config)
if len(f.opt.SSH) == 0 {
c.sshClient, err = f.newSSHClientInternal(ctx, "tcp", f.opt.Host+":"+f.opt.Port, f.config)
} else {
c.sshClient, err = f.newSSHClientExternal()
}
if err != nil {
return nil, fmt.Errorf("couldn't connect SSH: %w", err)
}
@@ -575,7 +608,7 @@ func (f *Fs) sftpConnection(ctx context.Context) (c *conn, err error) {
}
// Set any environment variables on the ssh.Session
func (f *Fs) setEnv(s *ssh.Session) error {
func (f *Fs) setEnv(s sshSession) error {
for _, env := range f.opt.SetEnv {
equal := strings.IndexRune(env, '=')
if equal < 0 {
@@ -592,8 +625,8 @@ func (f *Fs) setEnv(s *ssh.Session) error {
// Creates a new SFTP client on conn, using the specified subsystem
// or sftp server, and zero or more option functions
func (f *Fs) newSftpClient(conn *ssh.Client, opts ...sftp.ClientOption) (*sftp.Client, error) {
s, err := conn.NewSession()
func (f *Fs) newSftpClient(client sshClient, opts ...sftp.ClientOption) (*sftp.Client, error) {
s, err := client.NewSession()
if err != nil {
return nil, err
}
@@ -666,6 +699,9 @@ func (f *Fs) getSftpConnection(ctx context.Context) (c *conn, err error) {
// Getwd request
func (f *Fs) putSftpConnection(pc **conn, err error) {
c := *pc
if !c.sshClient.CanReuse() {
return
}
*pc = nil
if err != nil {
// work out if this is an expected error
@@ -744,6 +780,10 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, err
}
if len(opt.SSH) != 0 && (opt.User != "" || opt.Host != "" || opt.Port != "") {
fs.Logf(name, "--sftp-ssh is in use - ignoring user/host/port from config - set in the parameters to --sftp-ssh (remove them from the config to silence this warning)")
}
if opt.User == "" {
opt.User = currentUser
}
@@ -1016,8 +1056,8 @@ func NewFsWithConnection(ctx context.Context, f *Fs, name string, root string, m
fs.Debugf(f, "Failed to get shell session for shell type detection command: %v", err)
} else {
var stdout, stderr bytes.Buffer
session.Stdout = &stdout
session.Stderr = &stderr
session.SetStdout(&stdout)
session.SetStderr(&stderr)
shellCmd := "echo ${ShellId}%ComSpec%"
fs.Debugf(f, "Running shell type detection remote command: %s", shellCmd)
err = session.Run(shellCmd)
@@ -1427,8 +1467,8 @@ func (f *Fs) run(ctx context.Context, cmd string) ([]byte, error) {
}()
var stdout, stderr bytes.Buffer
session.Stdout = &stdout
session.Stderr = &stderr
session.SetStdout(&stdout)
session.SetStderr(&stderr)
fs.Debugf(f, "Running remote command: %s", cmd)
err = session.Run(cmd)
@@ -1735,6 +1775,9 @@ func (f *Fs) remotePath(remote string) string {
func (f *Fs) remoteShellPath(remote string) string {
if f.opt.PathOverride != "" {
shellPath := path.Join(f.opt.PathOverride, remote)
if f.opt.PathOverride[0] == '@' {
shellPath = path.Join(strings.TrimPrefix(f.opt.PathOverride, "@"), f.absRoot, remote)
}
fs.Debugf(f, "Shell path redirected to %q with option path_override", shellPath)
return shellPath
}
@@ -1992,9 +2035,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err != nil {
return fmt.Errorf("Update: %w", err)
}
// Hang on to the connection for the whole upload so it doesn't get re-used while we are uploading
file, err := c.sftpClient.OpenFile(o.path(), os.O_WRONLY|os.O_CREATE|os.O_TRUNC)
o.fs.putSftpConnection(&c, err)
if err != nil {
o.fs.putSftpConnection(&c, err)
return fmt.Errorf("Update Create failed: %w", err)
}
// remove the file if upload failed
@@ -2014,14 +2058,18 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
_, err = file.ReadFrom(&sizeReader{Reader: in, size: src.Size()})
if err != nil {
o.fs.putSftpConnection(&c, err)
remove()
return fmt.Errorf("Update ReadFrom failed: %w", err)
}
err = file.Close()
if err != nil {
o.fs.putSftpConnection(&c, err)
remove()
return fmt.Errorf("Update Close failed: %w", err)
}
// Release connection only when upload has finished so we don't upload multiple files on the same connection
o.fs.putSftpConnection(&c, err)
// Set the mod time - this stats the object if o.fs.opt.SetModTime == true
err = o.SetModTime(ctx, src.ModTime(ctx))


@@ -30,3 +30,13 @@ func TestIntegration2(t *testing.T) {
NilObject: (*sftp.Object)(nil),
})
}
func TestIntegration3(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: "TestSFTPRcloneSSH:",
NilObject: (*sftp.Object)(nil),
})
}

backend/sftp/ssh.go Normal file (73 lines)

@@ -0,0 +1,73 @@
//go:build !plan9
// +build !plan9
package sftp
import "io"
// Interfaces for ssh client and session implemented in ssh_internal.go and ssh_external.go
// An interface for an ssh client to abstract over internal ssh library and external binary
type sshClient interface {
// Wait blocks until the connection has shut down, and returns the
// error causing the shutdown.
Wait() error
// SendKeepAlive sends a keepalive message to keep the connection open
SendKeepAlive()
// Close the connection
Close() error
// NewSession opens a new sshSession for this sshClient. (A
// session is a remote execution of a program.)
NewSession() (sshSession, error)
// CanReuse indicates if this client can be reused
CanReuse() bool
}
// An interface for an ssh session to abstract over internal ssh library and external binary
type sshSession interface {
// Setenv sets an environment variable that will be applied to any
// command executed by Shell or Run.
Setenv(name, value string) error
// Start runs cmd on the remote host. Typically, the remote
// server passes cmd to the shell for interpretation.
// A Session only accepts one call to Run, Start or Shell.
Start(cmd string) error
// StdinPipe returns a pipe that will be connected to the
// remote command's standard input when the command starts.
StdinPipe() (io.WriteCloser, error)
// StdoutPipe returns a pipe that will be connected to the
// remote command's standard output when the command starts.
// There is a fixed amount of buffering that is shared between
// stdout and stderr streams. If the StdoutPipe reader is
// not serviced fast enough it may eventually cause the
// remote command to block.
StdoutPipe() (io.Reader, error)
// RequestSubsystem requests the association of a subsystem
// with the session on the remote host. A subsystem is a
// predefined command that runs in the background when the ssh
// session is initiated
RequestSubsystem(subsystem string) error
// Run runs cmd on the remote host. Typically, the remote
// server passes cmd to the shell for interpretation.
// A Session only accepts one call to Run, Start, Shell, Output,
// or CombinedOutput.
Run(cmd string) error
// Close the session
Close() error
// Set the stdout
SetStdout(io.Writer)
// Set the stderr
SetStderr(io.Writer)
}


@@ -0,0 +1,223 @@
//go:build !plan9
// +build !plan9
package sftp
import (
"context"
"errors"
"fmt"
"io"
"os/exec"
"strings"
"github.com/rclone/rclone/fs"
)
// Implement the sshClient interface for external ssh programs
type sshClientExternal struct {
f *Fs
session *sshSessionExternal
}
func (f *Fs) newSSHClientExternal() (sshClient, error) {
return &sshClientExternal{f: f}, nil
}
// Wait for connection to close
func (s *sshClientExternal) Wait() error {
if s.session == nil {
return nil
}
return s.session.Wait()
}
// Send a keepalive over the ssh connection
func (s *sshClientExternal) SendKeepAlive() {
// Up to the user to configure -o ServerAliveInterval=20 on their ssh connections
}
// Close the connection
func (s *sshClientExternal) Close() error {
if s.session == nil {
return nil
}
return s.session.Close()
}
// NewSession makes a new external SSH connection
func (s *sshClientExternal) NewSession() (sshSession, error) {
session := s.f.newSshSessionExternal()
if s.session == nil {
fs.Debugf(s.f, "ssh external: creating additional session")
}
return session, nil
}
// CanReuse indicates if this client can be reused
func (s *sshClientExternal) CanReuse() bool {
if s.session == nil {
return true
}
exited := s.session.exited()
canReuse := !exited && s.session.runningSFTP
// fs.Debugf(s.f, "ssh external: CanReuse %v, exited=%v runningSFTP=%v", canReuse, exited, s.session.runningSFTP)
return canReuse
}
// Check interfaces
var _ sshClient = &sshClientExternal{}
// implement the sshSession interface for external ssh binary
type sshSessionExternal struct {
f *Fs
cmd *exec.Cmd
cancel func()
startCalled bool
runningSFTP bool
}
func (f *Fs) newSshSessionExternal() *sshSessionExternal {
s := &sshSessionExternal{
f: f,
}
// Make a cancellation function for this to call in Close()
ctx, cancel := context.WithCancel(context.Background())
s.cancel = cancel
// Connect to a remote host and request the sftp subsystem via
// the 'ssh' command. This assumes that passwordless login is
// correctly configured.
ssh := append([]string(nil), s.f.opt.SSH...)
s.cmd = exec.CommandContext(ctx, ssh[0], ssh[1:]...)
// Allow the command a short time only to shut down
// FIXME enable when we get rid of go1.19
// s.cmd.WaitDelay = time.Second
return s
}
// Setenv sets an environment variable that will be applied to any
// command executed by Shell or Run.
func (s *sshSessionExternal) Setenv(name, value string) error {
return errors.New("ssh external: can't set environment variables")
}
const requestSubsystem = "***Subsystem***:"
// Start runs cmd on the remote host. Typically, the remote
// server passes cmd to the shell for interpretation.
// A Session only accepts one call to Run, Start or Shell.
func (s *sshSessionExternal) Start(cmd string) error {
if s.startCalled {
return errors.New("internal error: ssh external: command already running")
}
s.startCalled = true
// Adjust the args
if strings.HasPrefix(cmd, requestSubsystem) {
s.cmd.Args = append(s.cmd.Args, "-s", cmd[len(requestSubsystem):])
s.runningSFTP = true
} else {
s.cmd.Args = append(s.cmd.Args, cmd)
s.runningSFTP = false
}
fs.Debugf(s.f, "ssh external: running: %v", fs.SpaceSepList(s.cmd.Args))
// start the process
err := s.cmd.Start()
if err != nil {
return fmt.Errorf("ssh external: start process: %w", err)
}
return nil
}
// RequestSubsystem requests the association of a subsystem
// with the session on the remote host. A subsystem is a
// predefined command that runs in the background when the ssh
// session is initiated
func (s *sshSessionExternal) RequestSubsystem(subsystem string) error {
return s.Start(requestSubsystem + subsystem)
}
// StdinPipe returns a pipe that will be connected to the
// remote command's standard input when the command starts.
func (s *sshSessionExternal) StdinPipe() (io.WriteCloser, error) {
rd, err := s.cmd.StdinPipe()
if err != nil {
return nil, fmt.Errorf("ssh external: stdin pipe: %w", err)
}
return rd, nil
}
// StdoutPipe returns a pipe that will be connected to the
// remote command's standard output when the command starts.
// There is a fixed amount of buffering that is shared between
// stdout and stderr streams. If the StdoutPipe reader is
// not serviced fast enough it may eventually cause the
// remote command to block.
func (s *sshSessionExternal) StdoutPipe() (io.Reader, error) {
wr, err := s.cmd.StdoutPipe()
if err != nil {
return nil, fmt.Errorf("ssh external: stdout pipe: %w", err)
}
return wr, nil
}
// Return whether the command has finished or not
func (s *sshSessionExternal) exited() bool {
return s.cmd.ProcessState != nil
}
// Wait for the command to exit
func (s *sshSessionExternal) Wait() error {
if s.exited() {
return nil
}
err := s.cmd.Wait()
if err == nil {
fs.Debugf(s.f, "ssh external: command exited OK")
} else {
fs.Debugf(s.f, "ssh external: command exited with error: %v", err)
}
return err
}
// Run runs cmd on the remote host. Typically, the remote
// server passes cmd to the shell for interpretation.
// A Session only accepts one call to Run, Start, Shell, Output,
// or CombinedOutput.
func (s *sshSessionExternal) Run(cmd string) error {
err := s.Start(cmd)
if err != nil {
return err
}
return s.Wait()
}
// Close the external ssh
func (s *sshSessionExternal) Close() error {
fs.Debugf(s.f, "ssh external: close")
// Cancel the context which kills the process
s.cancel()
// Wait for it to finish
_ = s.Wait()
return nil
}
// Set the stdout
func (s *sshSessionExternal) SetStdout(wr io.Writer) {
s.cmd.Stdout = wr
}
// Set the stderr
func (s *sshSessionExternal) SetStderr(wr io.Writer) {
s.cmd.Stderr = wr
}
// Check interfaces
var _ sshSession = &sshSessionExternal{}


@@ -0,0 +1,101 @@
//go:build !plan9
// +build !plan9
package sftp
import (
"context"
"io"
"net"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/lib/proxy"
"golang.org/x/crypto/ssh"
)
// Internal ssh connections with "golang.org/x/crypto/ssh"
type sshClientInternal struct {
srv *ssh.Client
}
// newSSHClientInternal starts a client connection to the given SSH server. It is a
// convenience function that connects to the given network address,
// initiates the SSH handshake, and then sets up a Client.
func (f *Fs) newSSHClientInternal(ctx context.Context, network, addr string, sshConfig *ssh.ClientConfig) (sshClient, error) {
baseDialer := fshttp.NewDialer(ctx)
var (
conn net.Conn
err error
)
if f.opt.SocksProxy != "" {
conn, err = proxy.SOCKS5Dial(network, addr, f.opt.SocksProxy, baseDialer)
} else {
conn, err = baseDialer.Dial(network, addr)
}
if err != nil {
return nil, err
}
c, chans, reqs, err := ssh.NewClientConn(conn, addr, sshConfig)
if err != nil {
return nil, err
}
fs.Debugf(f, "New connection %s->%s to %q", c.LocalAddr(), c.RemoteAddr(), c.ServerVersion())
srv := ssh.NewClient(c, chans, reqs)
return sshClientInternal{srv}, nil
}
// Wait for connection to close
func (s sshClientInternal) Wait() error {
return s.srv.Conn.Wait()
}
// Send a keepalive over the ssh connection
func (s sshClientInternal) SendKeepAlive() {
_, _, err := s.srv.SendRequest("keepalive@openssh.com", true, nil)
if err != nil {
fs.Debugf(nil, "Failed to send keep alive: %v", err)
}
}
// Close the connection
func (s sshClientInternal) Close() error {
return s.srv.Close()
}
// CanReuse indicates if this client can be reused
func (s sshClientInternal) CanReuse() bool {
return true
}
// Check interfaces
var _ sshClient = sshClientInternal{}
// Thin wrapper for *ssh.Session to implement sshSession interface
type sshSessionInternal struct {
*ssh.Session
}
// Set the stdout
func (s sshSessionInternal) SetStdout(wr io.Writer) {
s.Session.Stdout = wr
}
// Set the stderr
func (s sshSessionInternal) SetStderr(wr io.Writer) {
s.Session.Stderr = wr
}
// NewSession makes an sshSession from an sshClient
func (s sshClientInternal) NewSession() (sshSession, error) {
session, err := s.srv.NewSession()
if err != nil {
return nil, err
}
return sshSessionInternal{Session: session}, nil
}
// Check interfaces
var _ sshSession = sshSessionInternal{}


@@ -475,6 +475,45 @@ func (f *Fs) About(ctx context.Context) (_ *fs.Usage, err error) {
return usage, nil
}
// OpenWriterAt opens with a handle for random access writes
//
// Pass in the remote desired and the size if known.
//
// It truncates any existing object
func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) {
var err error
o := &Object{
fs: f,
remote: remote,
}
share, filename := o.split()
if share == "" || filename == "" {
return nil, fs.ErrorIsDir
}
err = o.fs.ensureDirectory(ctx, share, filename)
if err != nil {
return nil, fmt.Errorf("failed to make parent directories: %w", err)
}
filename = o.fs.toSambaPath(filename)
o.fs.addSession() // Show session in use
defer o.fs.removeSession()
cn, err := o.fs.getConnection(ctx, share)
if err != nil {
return nil, err
}
fl, err := cn.smbShare.OpenFile(filename, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o644)
if err != nil {
return nil, fmt.Errorf("failed to open: %w", err)
}
return fl, nil
}
// Shutdown the backend, closing any background tasks and any
// cached connections.
func (f *Fs) Shutdown(ctx context.Context) error {


@@ -24,6 +24,7 @@ import (
"storj.io/uplink"
"storj.io/uplink/edge"
"storj.io/uplink/private/testuplink"
)
const (
@@ -276,6 +277,8 @@ func (f *Fs) connect(ctx context.Context) (project *uplink.Project, err error) {
UserAgent: "rclone",
}
ctx = testuplink.WithConcurrentSegmentUploadsDefaultConfig(ctx)
project, err = cfg.OpenProject(ctx, f.access)
if err != nil {
return nil, fmt.Errorf("storj: project: %w", err)


@@ -561,7 +561,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *swift.O
// returned as 0 bytes in the listing. Correct this here by
// making sure we read the full metadata for all 0 byte files.
// We don't read the metadata for directory marker objects.
if info != nil && info.Bytes == 0 && info.ContentType != "application/directory" {
if info != nil && info.Bytes == 0 && info.ContentType != "application/directory" && !o.fs.opt.NoLargeObjects {
err := o.readMetaData(ctx) // reads info and headers, returning an error
if err == fs.ErrorObjectNotFound {
// We have a dangling large object here so just return the original metadata


@@ -121,7 +121,7 @@ func (o *Object) uploadChunks(ctx context.Context, in0 io.Reader, size int64, pa
getBody := func() (io.ReadCloser, error) {
// RepeatableReader{} plays well with accounting so rewinding doesn't make the progress buggy
if _, err := in.Seek(0, io.SeekStart); err == nil {
if _, err := in.Seek(0, io.SeekStart); err != nil {
return nil, err
}
@@ -203,7 +203,7 @@ func (o *Object) purgeUploadedChunks(ctx context.Context, uploadDir string) erro
resp, err := o.fs.srv.CallXML(ctx, &opts, nil, nil)
// directory doesn't exist, no need to purge
if resp.StatusCode == http.StatusNotFound {
if resp != nil && resp.StatusCode == http.StatusNotFound {
return false, nil
}

backend/zip/zip.go Normal file (572 lines)

@@ -0,0 +1,572 @@
// Package zip provides wrappers for Fs and Object which read and
// write a zip file.
//
// Use like :zip:remote:path/to/file.zip
package zip
/*
FIXME could maybe make an append to zip file with specially named zip
files having more info in?
So have base zip file file.zip then file.extra000.zip etc.
We read the objects sequentially but can transfer them in parallel.
FIXME what happens when we want to copy a file out of the zip?
So rclone copy remote:file.zip/file/inside.zip?
Maybe rclone copy remote:file.zip#file/inside.zip?
Or just look for first .zip file?
FIXME what happens when writing - need to know the end... Don't have a
backend Shutdown call.
Could have a read only Feature flag so we turn that on when we have a
zip. Then if placed in dest position could error. Might be useful for
http also.
FIXME this will perform poorly for unpacking as the VFS Reader is bad
at multiple streams.
FIXME not writing directories
cf zipinfo
drwxr-xr-x 3.0 unx 0 bx stor 19-Oct-05 12:14 rclone-v1.49.5-linux-amd64/
FIXME can probably check directories better than trailing /
FIXME enormous atexit bodge
FIXME crc32 skip on write bodge
Bodge bodge bodge
*/
import (
"archive/zip"
"context"
"errors"
"fmt"
"io"
"io/ioutil"
"os"
"path"
"strings"
"sync"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/dirtree"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/vfs"
)
// Globals
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "zip",
Description: "Read or write a zip file",
NewFs: NewFs,
Options: []fs.Option{},
})
}
// NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
// Parse the root which should be remote:path/to/file.zip
parent, leaf, err := fspath.Split(root)
if err != nil {
return nil, fmt.Errorf("failed to parse root %q: %w", root, err)
}
if leaf == "" {
return nil, fmt.Errorf("failed to parse root %q: not pointing to a file", root)
}
root = strings.TrimRight(parent, "/")
// Create the remote from the cache
wrappedFs, err := cache.Get(ctx, root)
if err != nil {
return nil, fmt.Errorf("failed to make %q to wrap: %w", root, err)
}
// FIXME vfs cache?
// FIXME could factor out ReadFileHandle and just use that rather than the full VFS
VFS := vfs.New(wrappedFs, nil) // vfs.New does not return an error
node, err := VFS.Stat(leaf)
if err != nil && !os.IsNotExist(err) {
return nil, fmt.Errorf("failed to find %q object: %w", leaf, err)
}
if node != nil && node.IsDir() {
return nil, fmt.Errorf("can't unzip a directory %q", leaf)
}
f := &Fs{
f: wrappedFs,
name: name,
root: root,
opt: *opt,
vfs: VFS,
node: node,
leaf: leaf,
}
// FIXME Maybe delay this until first read size if writing we don't need it?
if node != nil {
err = f.readZip()
if err != nil {
return nil, fmt.Errorf("failed to open zip file: %w", err)
}
}
// FIXME
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
//
// FIXME some of these need to be forced on - CanHaveEmptyDirectories
f.features = (&fs.Features{
CaseInsensitive: false,
DuplicateFiles: false,
ReadMimeType: false, // MimeTypes not supported with zip
WriteMimeType: false,
BucketBased: false,
CanHaveEmptyDirectories: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
return f, nil
}
// Options defines the configuration for this backend
type Options struct {
}
// Fs represents a wrapped fs.Fs
type Fs struct {
f fs.Fs
wrapper fs.Fs
name string
root string
opt Options
features *fs.Features // optional features
vfs *vfs.VFS
node vfs.Node // zip file object - set if reading
leaf string // leaf name of the zip file object
dt dirtree.DirTree // read from zipfile
wrmu sync.Mutex // writing mutex protects the below
wh vfs.Handle // write handle
zw *zip.Writer
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// String returns a description of the FS
func (f *Fs) String() string {
return fmt.Sprintf("Zip '%s:%s'", f.name, f.root)
}
// readZip the zip file into f
func (f *Fs) readZip() (err error) {
if f.node == nil {
return fs.ErrorDirNotFound
}
size := f.node.Size()
if size < 0 {
return errors.New("can't read from zip file with unknown size")
}
r, err := f.node.Open(os.O_RDONLY)
if err != nil {
return fmt.Errorf("failed to open zip file: %w", err)
}
zr, err := zip.NewReader(r, size)
if err != nil {
return fmt.Errorf("failed to read zip file: %w", err)
}
dt := dirtree.New()
for _, file := range zr.File {
remote := strings.Trim(path.Clean(file.Name), "/")
if remote == "." {
remote = ""
}
if strings.HasSuffix(file.Name, "/") {
dir := fs.NewDir(remote, file.Modified)
dt.AddDir(dir)
} else {
o := &Object{
f: f,
remote: remote,
fh: &file.FileHeader,
file: file,
}
dt.Add(o)
}
}
dt.CheckParents("")
dt.Sort()
f.dt = dt
fs.Debugf(nil, "dt = %#v", dt)
return nil
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
defer log.Trace(f, "dir=%q", dir)("entries=%v, err=%v", &entries, &err)
entries, ok := f.dt[dir]
if !ok {
return nil, fs.ErrorDirNotFound
}
fs.Debugf(f, "dir=%q, entries=%v", dir, entries)
return entries, nil
}
// ListR lists the objects and directories of the Fs starting
// from dir recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
//
// Don't implement this unless you have a more efficient way
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
for _, entries := range f.dt {
err = callback(entries)
if err != nil {
return err
}
}
return nil
}
// NewObject finds the Object at remote.
func (f *Fs) NewObject(ctx context.Context, remote string) (o fs.Object, err error) {
defer log.Trace(f, "remote=%q", remote)("obj=%v, err=%v", &o, &err)
if f.dt == nil {
return nil, fs.ErrorObjectNotFound
}
_, entry := f.dt.Find(remote)
if entry == nil {
return nil, fs.ErrorObjectNotFound
}
o, ok := entry.(*Object)
if !ok {
return nil, fs.ErrorNotAFile
}
return o, nil
}
// Precision of the ModTimes in this Fs
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Mkdir makes the directory (container, bucket)
//
// Shouldn't return an error if it already exists
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
// FIXME should this create a "directory" entry if not already created?
return nil
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return errors.New("can't remove directories from zip file")
}
// Wrapper for src to make it into os.FileInfo
type fileInfo struct {
src fs.ObjectInfo
}
// Name - base name of the file (actually full name for zip)
func (fi fileInfo) Name() string {
return fi.src.Remote()
}
// Size - length in bytes for regular files; system-dependent for others
func (fi fileInfo) Size() int64 {
return fi.src.Size()
}
// Mode - file mode bits
func (fi fileInfo) Mode() os.FileMode {
return 0777
}
// ModTime - modification time
func (fi fileInfo) ModTime() time.Time {
return fi.src.ModTime(context.Background())
}
// IsDir - abbreviation for Mode().IsDir()
func (fi fileInfo) IsDir() bool {
return false
}
// Sys - underlying data source (can return nil)
func (fi fileInfo) Sys() interface{} {
return nil
}
// check type
var _ os.FileInfo = fileInfo{}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (o fs.Object, err error) {
defer log.Trace(f, "src=%v", src)("obj=%v, err=%v", &o, &err)
f.wrmu.Lock()
defer f.wrmu.Unlock()
size := src.Size()
if size < 0 {
return nil, errors.New("can't zip unknown sized objects")
}
if f.zw == nil {
wh, err := f.vfs.OpenFile(f.leaf, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0777)
if err != nil {
return nil, fmt.Errorf("zip put: failed to open output file: %w", err)
}
f.wh = wh
f.zw = zip.NewWriter(f.wh) // zip.NewWriter cannot fail so needs no error check
// FIXME enormous bodge to work around no lifecycle for backends
atexit.Register(func() {
err = f.zw.Close()
if err != nil {
fs.Errorf(f, "Error closing zip writer: %v", err)
}
err = f.wh.Close()
if err != nil {
fs.Errorf(f, "Error closing zip file: %v", err)
}
})
}
// FIXME not putting in directories?
fh, err := zip.FileInfoHeader(fileInfo{src})
if err != nil {
return nil, fmt.Errorf("zip put: failed to create file header: %w", err)
}
// FIXME just a guess!
// should probably try a better heuristic?
// Note: Method must be set before CreateHeader or it has no effect
if size < 10 {
fh.Method = zip.Store
} else {
fh.Method = zip.Deflate
}
w, err := f.zw.CreateHeader(fh)
if err != nil {
return nil, fmt.Errorf("zip put: failed to open file: %w", err)
}
_, err = io.CopyN(w, in, size)
if err != nil {
return nil, fmt.Errorf("zip put: failed to copy file: %w", err)
}
// err = w.Close()
// if err != nil {
// return nil,fmt.Errorf("zip put: failed to close file: %w", err)
// }
o = &Object{
f: f,
fh: fh,
remote: src.Remote(),
}
return o, nil
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
	return hash.Set(hash.CRC32)
}

// UnWrap returns the Fs that this Fs is wrapping
func (f *Fs) UnWrap() fs.Fs {
	return f.f
}

// WrapFs returns the Fs that is wrapping this Fs
func (f *Fs) WrapFs() fs.Fs {
	return f.wrapper
}

// SetWrapper sets the Fs that is wrapping this Fs
func (f *Fs) SetWrapper(wrapper fs.Fs) {
	f.wrapper = wrapper
}
// Object describes an object to be read from the raw zip file
type Object struct {
	f      *Fs
	remote string
	fh     *zip.FileHeader
	file   *zip.File
}

// Fs returns read only access to the Fs that this object is part of
func (o *Object) Fs() fs.Info {
	return o.f
}

// Return a string version
func (o *Object) String() string {
	if o == nil {
		return "<nil>"
	}
	return o.Remote()
}

// Remote returns the remote path
func (o *Object) Remote() string {
	return o.remote
}

// Size returns the size of the file
func (o *Object) Size() int64 {
	return int64(o.fh.UncompressedSize64)
}

// ModTime returns the modification time of the object
//
// This is read from the object's entry in the zip file header
func (o *Object) ModTime(ctx context.Context) time.Time {
	return o.fh.Modified
}

// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
	fs.Debugf(o, "Can't set mod time - ignoring")
	return nil
}

// Storable returns a boolean indicating if this object is storable
func (o *Object) Storable() bool {
	return true
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) {
	if ht == hash.CRC32 {
		// FIXME return empty CRC if writing
		if o.f.dt == nil {
			return "", nil
		}
		return fmt.Sprintf("%08x", o.fh.CRC32), nil
	}
	return "", hash.ErrUnsupported
}

// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) {
	var offset, limit int64 = 0, -1
	for _, option := range options {
		switch x := option.(type) {
		case *fs.SeekOption:
			offset = x.Offset
		case *fs.RangeOption:
			offset, limit = x.Decode(o.Size())
		default:
			if option.Mandatory() {
				fs.Logf(o, "Unsupported mandatory option: %v", option)
			}
		}
	}
	rc, err = o.file.Open()
	if err != nil {
		return nil, err
	}
	// discard data from start as necessary
	if offset > 0 {
		_, err = io.CopyN(io.Discard, rc, offset)
		if err != nil {
			return nil, err
		}
	}
	// If limited then don't return everything
	if limit >= 0 {
		// limit as decoded above is already a count relative to offset
		return readers.NewLimitedReadCloser(rc, limit), nil
	}
	return rc, nil
}
// Update in to the object with the modTime given of the given size
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
	return errors.New("can't Update zip files")
}

// Remove an object
func (o *Object) Remove(ctx context.Context) error {
	return errors.New("can't Remove zip file objects")
}

// Check the interfaces are satisfied
var (
	_ fs.Fs        = (*Fs)(nil)
	_ fs.UnWrapper = (*Fs)(nil)
	_ fs.ListRer   = (*Fs)(nil)
	// _ fs.Abouter = (*Fs)(nil) - FIXME can implement
	_ fs.Wrapper = (*Fs)(nil)
	_ fs.Object  = (*Object)(nil)
)
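
For reference, Object.Open above is a thin layer over archive/zip's per-entry reader plus rclone option handling; a minimal stdlib-only sketch of the underlying read path (the archive name is illustrative):

package main

import (
	"archive/zip"
	"fmt"
	"io"
	"os"
)

func main() {
	zr, err := zip.OpenReader("archive.zip")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer func() { _ = zr.Close() }()
	for _, f := range zr.File {
		// f.Open is the same call Object.Open makes on its *zip.File
		rc, err := f.Open()
		if err != nil {
			fmt.Println(err)
			return
		}
		_, _ = io.Copy(os.Stdout, rc)
		_ = rc.Close()
	}
}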


@@ -66,6 +66,7 @@ docs = [
"pcloud.md",
"pikpak.md",
"premiumizeme.md",
"protondrive.md",
"putio.md",
"seafile.md",
"sftp.md",


@@ -22,8 +22,8 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &jsonOutput, "json", "", false, "Format output as JSON")
flags.BoolVarP(cmdFlags, &fullOutput, "full", "", false, "Full numbers instead of human-readable")
flags.BoolVarP(cmdFlags, &jsonOutput, "json", "", false, "Format output as JSON", "")
flags.BoolVarP(cmdFlags, &fullOutput, "full", "", false, "Full numbers instead of human-readable", "")
}
// printValue formats uv to be output
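
The recurring edit in this and the following hunks is mechanical: rclone's flag helpers gain a trailing groups argument, "" for plain command flags and a group name (for example "Config", "Logging" or "Mount") for flags that should appear under grouped help. A sketch of the new call shape, assuming the fs/config/flags signature exactly as it appears in this diff:

package main

import (
	"github.com/rclone/rclone/fs/config/flags"
	"github.com/spf13/pflag"
)

var jsonOutput bool

func main() {
	cmdFlags := pflag.NewFlagSet("example", pflag.ContinueOnError)
	// The trailing argument names the flag group: "" leaves the flag
	// ungrouped, while e.g. "Logging" files it under that group.
	flags.BoolVarP(cmdFlags, &jsonOutput, "json", "", false, "Format output as JSON", "")
	_ = cmdFlags.Parse([]string{"--json"})
}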
@@ -95,6 +95,7 @@ see complete list in [documentation](https://rclone.org/overview/#optional-featu
`,
Annotations: map[string]string{
"versionIntroduced": "v1.41",
// "groups": "",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)


@@ -18,8 +18,8 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &noAutoBrowser, "auth-no-open-browser", "", false, "Do not automatically open auth link in default browser")
flags.StringVarP(cmdFlags, &template, "template", "", "", "The path to a custom Go template for generating HTML responses")
flags.BoolVarP(cmdFlags, &noAutoBrowser, "auth-no-open-browser", "", false, "Do not automatically open auth link in default browser", "")
flags.StringVarP(cmdFlags, &template, "template", "", "", "The path to a custom Go template for generating HTML responses", "")
}
var commandDefinition = &cobra.Command{
@@ -36,6 +36,7 @@ link in default browser automatically.
Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.`,
Annotations: map[string]string{
"versionIntroduced": "v1.27",
// "groups": "",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 3, command, args)


@@ -24,8 +24,8 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.StringArrayVarP(cmdFlags, &options, "option", "o", options, "Option in the form name=value or name")
flags.BoolVarP(cmdFlags, &useJSON, "json", "", useJSON, "Always output in JSON format")
flags.StringArrayVarP(cmdFlags, &options, "option", "o", options, "Option in the form name=value or name", "")
flags.BoolVarP(cmdFlags, &useJSON, "json", "", useJSON, "Always output in JSON format", "")
}
var commandDefinition = &cobra.Command{
@@ -60,6 +60,7 @@ Note to run these commands on a running backend then see
`,
Annotations: map[string]string{
"versionIntroduced": "v1.52",
"groups": "Important",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(2, 1e6, command, args)


@@ -98,16 +98,16 @@ var Opt Options
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &Opt.Resync, "resync", "1", Opt.Resync, "Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first.")
flags.BoolVarP(cmdFlags, &Opt.CheckAccess, "check-access", "", Opt.CheckAccess, makeHelp("Ensure expected {CHECKFILE} files are found on both Path1 and Path2 filesystems, else abort."))
flags.StringVarP(cmdFlags, &Opt.CheckFilename, "check-filename", "", Opt.CheckFilename, makeHelp("Filename for --check-access (default: {CHECKFILE})"))
flags.BoolVarP(cmdFlags, &Opt.Force, "force", "", Opt.Force, "Bypass --max-delete safety check and run the sync. Consider using with --verbose")
flags.FVarP(cmdFlags, &Opt.CheckSync, "check-sync", "", "Controls comparison of final listings: true|false|only (default: true)")
flags.BoolVarP(cmdFlags, &Opt.RemoveEmptyDirs, "remove-empty-dirs", "", Opt.RemoveEmptyDirs, "Remove empty directories at the final cleanup step.")
flags.StringVarP(cmdFlags, &Opt.FiltersFile, "filters-file", "", Opt.FiltersFile, "Read filtering patterns from a file")
flags.StringVarP(cmdFlags, &Opt.Workdir, "workdir", "", Opt.Workdir, makeHelp("Use custom working dir - useful for testing. (default: {WORKDIR})"))
flags.BoolVarP(cmdFlags, &tzLocal, "localtime", "", tzLocal, "Use local time in listings (default: UTC)")
flags.BoolVarP(cmdFlags, &Opt.NoCleanup, "no-cleanup", "", Opt.NoCleanup, "Retain working files (useful for troubleshooting and testing).")
flags.BoolVarP(cmdFlags, &Opt.Resync, "resync", "1", Opt.Resync, "Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first.", "")
flags.BoolVarP(cmdFlags, &Opt.CheckAccess, "check-access", "", Opt.CheckAccess, makeHelp("Ensure expected {CHECKFILE} files are found on both Path1 and Path2 filesystems, else abort."), "")
flags.StringVarP(cmdFlags, &Opt.CheckFilename, "check-filename", "", Opt.CheckFilename, makeHelp("Filename for --check-access (default: {CHECKFILE})"), "")
flags.BoolVarP(cmdFlags, &Opt.Force, "force", "", Opt.Force, "Bypass --max-delete safety check and run the sync. Consider using with --verbose", "")
flags.FVarP(cmdFlags, &Opt.CheckSync, "check-sync", "", "Controls comparison of final listings: true|false|only (default: true)", "")
flags.BoolVarP(cmdFlags, &Opt.RemoveEmptyDirs, "remove-empty-dirs", "", Opt.RemoveEmptyDirs, "Remove empty directories at the final cleanup step.", "")
flags.StringVarP(cmdFlags, &Opt.FiltersFile, "filters-file", "", Opt.FiltersFile, "Read filtering patterns from a file", "")
flags.StringVarP(cmdFlags, &Opt.Workdir, "workdir", "", Opt.Workdir, makeHelp("Use custom working dir - useful for testing. (default: {WORKDIR})"), "")
flags.BoolVarP(cmdFlags, &tzLocal, "localtime", "", tzLocal, "Use local time in listings (default: UTC)", "")
flags.BoolVarP(cmdFlags, &Opt.NoCleanup, "no-cleanup", "", Opt.NoCleanup, "Retain working files (useful for troubleshooting and testing).", "")
}
// bisync command definition
@@ -117,6 +117,7 @@ var commandDefinition = &cobra.Command{
Long: longHelp,
Annotations: map[string]string{
"versionIntroduced": "v1.58",
"groups": "Filter,Copy,Important",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(2, 2, command, args)


@@ -27,12 +27,12 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.Int64VarP(cmdFlags, &head, "head", "", head, "Only print the first N characters")
flags.Int64VarP(cmdFlags, &tail, "tail", "", tail, "Only print the last N characters")
flags.Int64VarP(cmdFlags, &offset, "offset", "", offset, "Start printing at offset N (or from end if -ve)")
flags.Int64VarP(cmdFlags, &count, "count", "", count, "Only print N characters")
flags.BoolVarP(cmdFlags, &discard, "discard", "", discard, "Discard the output instead of printing")
flags.StringVarP(cmdFlags, &separator, "separator", "", separator, "Separator to use between objects when printing multiple files")
flags.Int64VarP(cmdFlags, &head, "head", "", head, "Only print the first N characters", "")
flags.Int64VarP(cmdFlags, &tail, "tail", "", tail, "Only print the last N characters", "")
flags.Int64VarP(cmdFlags, &offset, "offset", "", offset, "Start printing at offset N (or from end if -ve)", "")
flags.Int64VarP(cmdFlags, &count, "count", "", count, "Only print N characters", "")
flags.BoolVarP(cmdFlags, &discard, "discard", "", discard, "Discard the output instead of printing", "")
flags.StringVarP(cmdFlags, &separator, "separator", "", separator, "Separator to use between objects when printing multiple files", "")
}
var commandDefinition = &cobra.Command{
@@ -73,6 +73,7 @@ files, use:
`, "|", "`"),
Annotations: map[string]string{
"versionIntroduced": "v1.33",
"groups": "Filter,Listing",
},
Run: func(command *cobra.Command, args []string) {
usedOffset := offset != 0 || count >= 0


@@ -33,20 +33,20 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &download, "download", "", download, "Check by downloading rather than with hash")
flags.StringVarP(cmdFlags, &checkFileHashType, "checkfile", "C", checkFileHashType, "Treat source:path as a SUM file with hashes of given type")
flags.BoolVarP(cmdFlags, &download, "download", "", download, "Check by downloading rather than with hash", "")
flags.StringVarP(cmdFlags, &checkFileHashType, "checkfile", "C", checkFileHashType, "Treat source:path as a SUM file with hashes of given type", "")
AddFlags(cmdFlags)
}
// AddFlags adds the check flags to the cmdFlags command
func AddFlags(cmdFlags *pflag.FlagSet) {
flags.BoolVarP(cmdFlags, &oneway, "one-way", "", oneway, "Check one way only, source files must exist on remote")
flags.StringVarP(cmdFlags, &combined, "combined", "", combined, "Make a combined report of changes to this file")
flags.StringVarP(cmdFlags, &missingOnSrc, "missing-on-src", "", missingOnSrc, "Report all files missing from the source to this file")
flags.StringVarP(cmdFlags, &missingOnDst, "missing-on-dst", "", missingOnDst, "Report all files missing from the destination to this file")
flags.StringVarP(cmdFlags, &match, "match", "", match, "Report all matching files to this file")
flags.StringVarP(cmdFlags, &differ, "differ", "", differ, "Report all non-matching files to this file")
flags.StringVarP(cmdFlags, &errFile, "error", "", errFile, "Report all files with errors (hashing or reading) to this file")
flags.BoolVarP(cmdFlags, &oneway, "one-way", "", oneway, "Check one way only, source files must exist on remote", "")
flags.StringVarP(cmdFlags, &combined, "combined", "", combined, "Make a combined report of changes to this file", "")
flags.StringVarP(cmdFlags, &missingOnSrc, "missing-on-src", "", missingOnSrc, "Report all files missing from the source to this file", "")
flags.StringVarP(cmdFlags, &missingOnDst, "missing-on-dst", "", missingOnDst, "Report all files missing from the destination to this file", "")
flags.StringVarP(cmdFlags, &match, "match", "", match, "Report all matching files to this file", "")
flags.StringVarP(cmdFlags, &differ, "differ", "", differ, "Report all non-matching files to this file", "")
flags.StringVarP(cmdFlags, &errFile, "error", "", errFile, "Report all files with errors (hashing or reading) to this file", "")
}
// FlagsHelp describes the flags for the help
@@ -158,6 +158,9 @@ to check all the data.
If you supply the |--checkfile HASH| flag with a valid hash name,
the |source:path| must point to a text file in the SUM format.
`, "|", "`") + FlagsHelp,
Annotations: map[string]string{
"groups": "Filter,Listing,Check",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(2, 2, command, args)
var (


@@ -19,7 +19,7 @@ var download = false
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &download, "download", "", download, "Check by hashing the contents")
flags.BoolVarP(cmdFlags, &download, "download", "", download, "Check by hashing the contents", "")
check.AddFlags(cmdFlags)
}
@@ -39,6 +39,7 @@ Note that hash values in the SUM file are treated as case insensitive.
`, "|", "`") + check.FlagsHelp,
Annotations: map[string]string{
"versionIntroduced": "v1.56",
"groups": "Filter,Listing",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(3, 3, command, args)


@@ -22,6 +22,7 @@ versions. Not supported by all remotes.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.31",
"groups": "Important",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)


@@ -48,13 +48,13 @@ import (
// Globals
var (
// Flags
cpuProfile = flags.StringP("cpuprofile", "", "", "Write cpu profile to file")
memProfile = flags.StringP("memprofile", "", "", "Write memory profile to file")
statsInterval = flags.DurationP("stats", "", time.Minute*1, "Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable)")
dataRateUnit = flags.StringP("stats-unit", "", "bytes", "Show data rate in stats as either 'bits' or 'bytes' per second")
cpuProfile = flags.StringP("cpuprofile", "", "", "Write cpu profile to file", "Debugging")
memProfile = flags.StringP("memprofile", "", "", "Write memory profile to file", "Debugging")
statsInterval = flags.DurationP("stats", "", time.Minute*1, "Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable)", "Logging")
dataRateUnit = flags.StringP("stats-unit", "", "bytes", "Show data rate in stats as either 'bits' or 'bytes' per second", "Logging")
version bool
retries = flags.IntP("retries", "", 3, "Retry operations this many times if they fail")
retriesInterval = flags.DurationP("retries-sleep", "", 0, "Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable)")
retries = flags.IntP("retries", "", 3, "Retry operations this many times if they fail", "Config")
retriesInterval = flags.DurationP("retries-sleep", "", 0, "Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable)", "Config")
// Errors
errorCommandNotFound = errors.New("command not found")
errorUncategorized = errors.New("uncategorized error")


@@ -61,7 +61,10 @@ var configEditCommand = &cobra.Command{
Annotations: map[string]string{
"versionIntroduced": "v1.39",
},
Run: configCommand.Run,
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 0, command, args)
return config.EditConfig(context.Background())
},
}
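
This hunk replaces the borrowed Run with a RunE so errors from config.EditConfig propagate to cobra rather than being dropped; the general shape of that pattern:

package main

import (
	"errors"

	"github.com/spf13/cobra"
)

func main() {
	cmd := &cobra.Command{
		Use: "edit",
		// RunE returns an error for cobra to report and convert into
		// a non-zero exit status, which plain Run cannot do.
		RunE: func(cmd *cobra.Command, args []string) error {
			return errors.New("editing failed")
		},
	}
	_ = cmd.Execute() // cobra prints: Error: editing failed
}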
var configFileCommand = &cobra.Command{
@@ -318,13 +321,13 @@ func doConfig(name string, in rc.Params, do func(config.UpdateRemoteOpt) (*fs.Co
func init() {
for _, cmdFlags := range []*pflag.FlagSet{configCreateCommand.Flags(), configUpdateCommand.Flags()} {
flags.BoolVarP(cmdFlags, &updateRemoteOpt.Obscure, "obscure", "", false, "Force any passwords to be obscured")
flags.BoolVarP(cmdFlags, &updateRemoteOpt.NoObscure, "no-obscure", "", false, "Force any passwords not to be obscured")
flags.BoolVarP(cmdFlags, &updateRemoteOpt.NonInteractive, "non-interactive", "", false, "Don't interact with user and return questions")
flags.BoolVarP(cmdFlags, &updateRemoteOpt.Continue, "continue", "", false, "Continue the configuration process with an answer")
flags.BoolVarP(cmdFlags, &updateRemoteOpt.All, "all", "", false, "Ask the full set of config questions")
flags.StringVarP(cmdFlags, &updateRemoteOpt.State, "state", "", "", "State - use with --continue")
flags.StringVarP(cmdFlags, &updateRemoteOpt.Result, "result", "", "", "Result - use with --continue")
flags.BoolVarP(cmdFlags, &updateRemoteOpt.Obscure, "obscure", "", false, "Force any passwords to be obscured", "Config")
flags.BoolVarP(cmdFlags, &updateRemoteOpt.NoObscure, "no-obscure", "", false, "Force any passwords not to be obscured", "Config")
flags.BoolVarP(cmdFlags, &updateRemoteOpt.NonInteractive, "non-interactive", "", false, "Don't interact with user and return questions", "Config")
flags.BoolVarP(cmdFlags, &updateRemoteOpt.Continue, "continue", "", false, "Continue the configuration process with an answer", "Config")
flags.BoolVarP(cmdFlags, &updateRemoteOpt.All, "all", "", false, "Ask the full set of config questions", "Config")
flags.StringVarP(cmdFlags, &updateRemoteOpt.State, "state", "", "", "State - use with --continue", "Config")
flags.StringVarP(cmdFlags, &updateRemoteOpt.Result, "result", "", "", "Result - use with --continue", "Config")
}
}
@@ -480,7 +483,7 @@ var (
)
func init() {
flags.BoolVarP(configUserInfoCommand.Flags(), &jsonOutput, "json", "", false, "Format output as JSON")
flags.BoolVarP(configUserInfoCommand.Flags(), &jsonOutput, "json", "", false, "Format output as JSON", "")
}
var configUserInfoCommand = &cobra.Command{


@@ -19,7 +19,7 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &createEmptySrcDirs, "create-empty-src-dirs", "", createEmptySrcDirs, "Create empty source dirs on destination after copy")
flags.BoolVarP(cmdFlags, &createEmptySrcDirs, "create-empty-src-dirs", "", createEmptySrcDirs, "Create empty source dirs on destination after copy", "")
}
var commandDefinition = &cobra.Command{
@@ -83,6 +83,9 @@ recently very efficiently like this:
**Note**: Use the |--dry-run| or the |--interactive|/|-i| flag to test without copying anything.
`, "|", "`"),
Annotations: map[string]string{
"groups": "Copy,Filter,Listing,Important",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)


@@ -48,6 +48,7 @@ the destination.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.35",
"groups": "Copy,Filter,Listing,Important",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)


@@ -25,11 +25,11 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &autoFilename, "auto-filename", "a", autoFilename, "Get the file name from the URL and use it for destination file path")
flags.BoolVarP(cmdFlags, &headerFilename, "header-filename", "", headerFilename, "Get the file name from the Content-Disposition header")
flags.BoolVarP(cmdFlags, &printFilename, "print-filename", "p", printFilename, "Print the resulting name from --auto-filename")
flags.BoolVarP(cmdFlags, &noClobber, "no-clobber", "", noClobber, "Prevent overwriting file with same name")
flags.BoolVarP(cmdFlags, &stdout, "stdout", "", stdout, "Write the output to stdout rather than a file")
flags.BoolVarP(cmdFlags, &autoFilename, "auto-filename", "a", autoFilename, "Get the file name from the URL and use it for destination file path", "")
flags.BoolVarP(cmdFlags, &headerFilename, "header-filename", "", headerFilename, "Get the file name from the Content-Disposition header", "")
flags.BoolVarP(cmdFlags, &printFilename, "print-filename", "p", printFilename, "Print the resulting name from --auto-filename", "")
flags.BoolVarP(cmdFlags, &noClobber, "no-clobber", "", noClobber, "Prevent overwriting file with same name", "")
flags.BoolVarP(cmdFlags, &stdout, "stdout", "", stdout, "Write the output to stdout rather than a file", "")
}
var commandDefinition = &cobra.Command{
@@ -53,6 +53,7 @@ will cause the output to be written to standard output.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.43",
"groups": "Important",
},
RunE: func(command *cobra.Command, args []string) (err error) {
cmd.CheckArgs(1, 2, command, args)


@@ -49,6 +49,7 @@ After it has run it will log the status of the encryptedremote:.
` + check.FlagsHelp,
Annotations: map[string]string{
"versionIntroduced": "v1.36",
"groups": "Filter,Listing,Check",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)


@@ -20,7 +20,7 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &Reverse, "reverse", "", Reverse, "Reverse cryptdecode, encrypts filenames")
flags.BoolVarP(cmdFlags, &Reverse, "reverse", "", Reverse, "Reverse cryptdecode, encrypts filenames", "")
}
var commandDefinition = &cobra.Command{


@@ -20,8 +20,8 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlag := commandDefinition.Flags()
flags.FVarP(cmdFlag, &dedupeMode, "dedupe-mode", "", "Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename")
flags.BoolVarP(cmdFlag, &byHash, "by-hash", "", false, "Find identical hashes rather than names")
flags.FVarP(cmdFlag, &dedupeMode, "dedupe-mode", "", "Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename", "")
flags.BoolVarP(cmdFlag, &byHash, "by-hash", "", false, "Find identical hashes rather than names", "")
}
var commandDefinition = &cobra.Command{
@@ -137,6 +137,7 @@ Or
`,
Annotations: map[string]string{
"versionIntroduced": "v1.27",
"groups": "Important",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 2, command, args)


@@ -18,7 +18,7 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &rmdirs, "rmdirs", "", rmdirs, "rmdirs removes empty directories but leaves root intact")
flags.BoolVarP(cmdFlags, &rmdirs, "rmdirs", "", rmdirs, "rmdirs removes empty directories but leaves root intact", "")
}
var commandDefinition = &cobra.Command{
@@ -55,6 +55,7 @@ delete all files bigger than 100 MiB.
`, "|", "`"),
Annotations: map[string]string{
"versionIntroduced": "v1.27",
"groups": "Important,Filter,Listing",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)


@@ -25,6 +25,7 @@ it will always be removed.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.42",
"groups": "Important",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)


@@ -0,0 +1,44 @@
package genautocomplete
import (
"log"
"os"
"github.com/rclone/rclone/cmd"
"github.com/spf13/cobra"
)
func init() {
completionDefinition.AddCommand(powershellCommandDefinition)
}
var powershellCommandDefinition = &cobra.Command{
Use: "powershell [output_file]",
Short: `Output powershell completion script for rclone.`,
Long: `
Generate the autocompletion script for powershell.
To load completions in your current shell session:
rclone completion powershell | Out-String | Invoke-Expression
To load completions for every new session, add the output of the above command
to your powershell profile.
If output_file is "-" or missing, then the output will be written to stdout.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 1, command, args)
if len(args) == 0 || (len(args) > 0 && args[0] == "-") {
err := cmd.Root.GenPowerShellCompletion(os.Stdout)
if err != nil {
log.Fatal(err)
}
return
}
err := cmd.Root.GenPowerShellCompletionFile(args[0])
if err != nil {
log.Fatal(err)
}
},
}


@@ -3,6 +3,7 @@ package gendocs
import (
"bytes"
"fmt"
"log"
"os"
"path"
@@ -13,10 +14,10 @@ import (
"time"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/lib/file"
"github.com/spf13/cobra"
"github.com/spf13/cobra/doc"
"github.com/spf13/pflag"
)
func init() {
@@ -88,6 +89,7 @@ rclone.org website.`,
Annotations map[string]string
}
var commands = map[string]commandDetails{}
var aliases []string
var addCommandDetails func(root *cobra.Command)
addCommandDetails = func(root *cobra.Command) {
name := strings.ReplaceAll(root.CommandPath(), " ", "_") + ".md"
@@ -95,6 +97,7 @@ rclone.org website.`,
Short: root.Short,
Annotations: root.Annotations,
}
aliases = append(aliases, root.Aliases...)
for _, c := range root.Commands() {
addCommandDetails(c)
}
@@ -126,10 +129,6 @@ rclone.org website.`,
return "/commands/" + strings.ToLower(base) + "/"
}
// Hide all of the root entries flags
cmd.Root.Flags().VisitAll(func(flag *pflag.Flag) {
flag.Hidden = true
})
err = doc.GenMarkdownTreeCustom(cmd.Root, out, prepender, linkHandler)
if err != nil {
return err
@@ -143,15 +142,50 @@ rclone.org website.`,
return err
}
if !info.IsDir() {
name := filepath.Base(path)
cmd, ok := commands[name]
if !ok {
// Avoid man pages which are for aliases. This is a bit messy!
for _, alias := range aliases {
if strings.Contains(name, alias) {
return nil
}
}
return fmt.Errorf("didn't find command for %q", name)
}
b, err := os.ReadFile(path)
if err != nil {
return err
}
doc := string(b)
doc = strings.Replace(doc, "\n### SEE ALSO", `
See the [global flags page](/flags/) for global options not listed here.
### SEE ALSO`, 1)
var out strings.Builder
if groupsString := cmd.Annotations["groups"]; groupsString != "" {
groups := flags.All.Include(groupsString)
for _, group := range groups.Groups {
if group.Flags.HasFlags() {
_, _ = fmt.Fprintf(&out, "\n### %s Options\n\n", group.Name)
_, _ = fmt.Fprintf(&out, "%s\n\n", group.Help)
_, _ = fmt.Fprintln(&out, "```")
_, _ = out.WriteString(group.Flags.FlagUsages())
_, _ = fmt.Fprintln(&out, "```")
}
}
}
_, _ = out.WriteString(`
See the [global flags page](/flags/) for global options not listed here.
`)
startCut := strings.Index(doc, `### Options inherited from parent commands`)
endCut := strings.Index(doc, `## SEE ALSO`)
if startCut < 0 || endCut < 0 {
if name == "rclone.md" {
return nil
}
return fmt.Errorf("internal error: failed to find cut points: startCut = %d, endCut = %d", startCut, endCut)
}
doc = doc[:startCut] + out.String() + doc[endCut:]
// outdent all the titles by one
doc = outdentTitle.ReplaceAllString(doc, `$1`)
err = os.WriteFile(path, []byte(doc), 0777)


@@ -31,10 +31,10 @@ func init() {
// AddHashsumFlags is a convenience function to add the command flags OutputBase64 and DownloadFlag to hashsum, md5sum, sha1sum
func AddHashsumFlags(cmdFlags *pflag.FlagSet) {
flags.BoolVarP(cmdFlags, &OutputBase64, "base64", "", OutputBase64, "Output base64 encoded hashsum")
flags.StringVarP(cmdFlags, &HashsumOutfile, "output-file", "", HashsumOutfile, "Output hashsums to a file rather than the terminal")
flags.StringVarP(cmdFlags, &ChecksumFile, "checkfile", "C", ChecksumFile, "Validate hashes against a given SUM file instead of printing them")
flags.BoolVarP(cmdFlags, &DownloadFlag, "download", "", DownloadFlag, "Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote")
flags.BoolVarP(cmdFlags, &OutputBase64, "base64", "", OutputBase64, "Output base64 encoded hashsum", "")
flags.StringVarP(cmdFlags, &HashsumOutfile, "output-file", "", HashsumOutfile, "Output hashsums to a file rather than the terminal", "")
flags.StringVarP(cmdFlags, &ChecksumFile, "checkfile", "C", ChecksumFile, "Validate hashes against a given SUM file instead of printing them", "")
flags.BoolVarP(cmdFlags, &DownloadFlag, "download", "", DownloadFlag, "Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote", "")
}
// GetHashsumOutput opens and closes the output file when using the output-file flag
@@ -114,6 +114,7 @@ Note that hash names are case insensitive and values are output in lower case.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.41",
"groups": "Filter,Listing",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 2, command, args)


@@ -11,6 +11,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configflags"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/filter/filterflags"
"github.com/rclone/rclone/fs/log/logflags"
"github.com/rclone/rclone/fs/rc/rcflags"
@@ -113,7 +114,7 @@ var helpFlags = &cobra.Command{
Short: "Show the global flags for rclone",
Run: func(command *cobra.Command, args []string) {
if len(args) > 0 {
re, err := regexp.Compile(args[0])
re, err := regexp.Compile(`(?i)` + args[0])
if err != nil {
log.Fatalf("Failed to compile flags regexp: %v", err)
}
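
The only change here is prefixing the user's pattern with (?i), making rclone help flags match flag names case-insensitively. For illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	re, err := regexp.Compile(`(?i)` + "Drive")
	if err != nil {
		fmt.Println(err)
		return
	}
	// (?i) makes the whole pattern case-insensitive.
	fmt.Println(re.MatchString("drive-chunk-size")) // true
}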
@@ -181,7 +182,7 @@ func setupRootCommand(rootCmd *cobra.Command) {
Root.Flags().BoolVarP(&version, "version", "V", false, "Print the version number")
cobra.AddTemplateFunc("showGlobalFlags", func(cmd *cobra.Command) bool {
return cmd.CalledAs() == "flags"
return cmd.CalledAs() == "flags" || cmd.Annotations["groups"] != ""
})
cobra.AddTemplateFunc("showCommands", func(cmd *cobra.Command) bool {
return cmd.CalledAs() != "flags"
@@ -191,15 +192,21 @@ func setupRootCommand(rootCmd *cobra.Command) {
// "rclone help" (which shows the global help)
return cmd.CalledAs() != "rclone" && cmd.CalledAs() != ""
})
cobra.AddTemplateFunc("backendFlags", func(cmd *cobra.Command, include bool) *pflag.FlagSet {
backendFlagSet := pflag.NewFlagSet("Backend Flags", pflag.ExitOnError)
cobra.AddTemplateFunc("flagGroups", func(cmd *cobra.Command) []*flags.Group {
// Add the backend flags and check all flags
backendGroup := flags.All.NewGroup("Backend", "Backend only flags. These can be set in the config file also.")
allRegistered := flags.All.AllRegistered()
cmd.InheritedFlags().VisitAll(func(flag *pflag.Flag) {
matched := flagsRe == nil || flagsRe.MatchString(flag.Name)
if _, ok := backendFlags[flag.Name]; matched && ok == include {
backendFlagSet.AddFlag(flag)
if _, ok := backendFlags[flag.Name]; ok {
backendGroup.Add(flag)
} else if _, ok := allRegistered[flag]; ok {
// flag is in a group already
} else {
fs.Errorf(nil, "Flag --%s is unknown", flag.Name)
}
})
return backendFlagSet
groups := flags.All.Filter(flagsRe).Include(cmd.Annotations["groups"])
return groups.Groups
})
rootCmd.SetUsageTemplate(usageTemplate)
// rootCmd.SetHelpTemplate(helpTemplate)
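
flagGroups replaces the old backendFlags template helper: instead of splitting inherited flags into exactly two sets, it files backend flags into a synthetic "Backend" group and returns whichever groups the command's annotations and the regexp filter select. rclone's flags.Group API is internal, so here is a hedged pflag-only sketch of the same partitioning idea (flag names illustrative):

package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	all := pflag.NewFlagSet("all", pflag.ContinueOnError)
	all.Bool("drive-skip-gdocs", false, "backend flag")
	all.Int("retries", 3, "global flag")

	backendFlags := map[string]struct{}{"drive-skip-gdocs": {}}
	backendGroup := pflag.NewFlagSet("Backend", pflag.ContinueOnError)

	// Partition flags by name into their group, as flagGroups does.
	all.VisitAll(func(flag *pflag.Flag) {
		if _, ok := backendFlags[flag.Name]; ok {
			backendGroup.AddFlag(flag)
		}
	})
	fmt.Print(backendGroup.FlagUsages())
}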
@@ -233,11 +240,13 @@ Available Commands:{{range .Commands}}{{if (or .IsAvailableCommand (eq .Name "he
Flags:
{{.LocalFlags.FlagUsages | trimTrailingWhitespaces}}{{end}}{{if and (showGlobalFlags .) .HasAvailableInheritedFlags}}
Global Flags:
{{(backendFlags . false).FlagUsages | trimTrailingWhitespaces}}
Backend Flags:
{{(backendFlags . true).FlagUsages | trimTrailingWhitespaces}}{{end}}{{if .HasHelpSubCommands}}
{{ range flagGroups . }}{{ if .Flags.HasFlags }}
# {{ .Name }} Flags
{{ .Help }}
{{ .Flags.FlagUsages | trimTrailingWhitespaces}}
{{ end }}{{ end }}
Additional help topics:{{range .Commands}}{{if .IsAdditionalHelpTopicCommand}}
{{rpad .CommandPath .CommandPathPadding}} {{.Short}}{{end}}{{end}}{{end}}
@@ -255,24 +264,18 @@ description: "Rclone Global Flags"
# Global Flags
This describes the global flags available to every rclone command
split into two groups, non backend and backend flags.
## Non Backend Flags
These flags are available for every command.
` + "```" + `
{{(backendFlags . false).FlagUsages | trimTrailingWhitespaces}}
` + "```" + `
## Backend Flags
These flags are available for every command. They control the backends
and may be set in the config file.
` + "```" + `
{{(backendFlags . true).FlagUsages | trimTrailingWhitespaces}}
` + "```" + `
split into groups.
{{ range flagGroups . }}{{ if .Flags.HasFlags }}
## {{ .Name }}
{{ .Help }}
` + "```" + `
{{ .Flags.FlagUsages | trimTrailingWhitespaces}}
` + "```" + `
{{ end }}{{ end }}
`
// show all the backends


@@ -20,8 +20,8 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.FVarP(cmdFlags, &expire, "expire", "", "The amount of time that the link will be valid")
flags.BoolVarP(cmdFlags, &unlink, "unlink", "", unlink, "Remove existing public link to file/folder")
flags.FVarP(cmdFlags, &expire, "expire", "", "The amount of time that the link will be valid", "")
flags.BoolVarP(cmdFlags, &unlink, "unlink", "", unlink, "Remove existing public link to file/folder", "")
}
var commandDefinition = &cobra.Command{


@@ -19,7 +19,7 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &listLong, "long", "", listLong, "Show the type as well as names")
flags.BoolVarP(cmdFlags, &listLong, "long", "", listLong, "Show the type as well as names", "")
}
var commandDefinition = &cobra.Command{


@@ -31,6 +31,9 @@ Eg
37600 fubuwic
` + lshelp.Help,
Annotations: map[string]string{
"groups": "Filter,Listing",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)


@@ -20,7 +20,7 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &recurse, "recursive", "R", false, "Recurse into the listing")
flags.BoolVarP(cmdFlags, &recurse, "recursive", "R", false, "Recurse into the listing", "")
}
var commandDefinition = &cobra.Command{
@@ -49,6 +49,9 @@ Or
If you just want the directory names use ` + "`rclone lsf --dirs-only`" + `.
` + lshelp.Help,
Annotations: map[string]string{
"groups": "Filter,Listing",
},
Run: func(command *cobra.Command, args []string) {
ci := fs.GetConfig(context.Background())
cmd.CheckArgs(1, 1, command, args)


@@ -31,15 +31,15 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.StringVarP(cmdFlags, &format, "format", "F", "p", "Output format - see help for details")
flags.StringVarP(cmdFlags, &separator, "separator", "s", ";", "Separator for the items in the format")
flags.BoolVarP(cmdFlags, &dirSlash, "dir-slash", "d", true, "Append a slash to directory names")
flags.FVarP(cmdFlags, &hashType, "hash", "", "Use this hash when `h` is used in the format MD5|SHA-1|DropboxHash")
flags.BoolVarP(cmdFlags, &filesOnly, "files-only", "", false, "Only list files")
flags.BoolVarP(cmdFlags, &dirsOnly, "dirs-only", "", false, "Only list directories")
flags.BoolVarP(cmdFlags, &csv, "csv", "", false, "Output in CSV format")
flags.BoolVarP(cmdFlags, &absolute, "absolute", "", false, "Put a leading / in front of path names")
flags.BoolVarP(cmdFlags, &recurse, "recursive", "R", false, "Recurse into the listing")
flags.StringVarP(cmdFlags, &format, "format", "F", "p", "Output format - see help for details", "")
flags.StringVarP(cmdFlags, &separator, "separator", "s", ";", "Separator for the items in the format", "")
flags.BoolVarP(cmdFlags, &dirSlash, "dir-slash", "d", true, "Append a slash to directory names", "")
flags.FVarP(cmdFlags, &hashType, "hash", "", "Use this hash when `h` is used in the format MD5|SHA-1|DropboxHash", "")
flags.BoolVarP(cmdFlags, &filesOnly, "files-only", "", false, "Only list files", "")
flags.BoolVarP(cmdFlags, &dirsOnly, "dirs-only", "", false, "Only list directories", "")
flags.BoolVarP(cmdFlags, &csv, "csv", "", false, "Output in CSV format", "")
flags.BoolVarP(cmdFlags, &absolute, "absolute", "", false, "Put a leading / in front of path names", "")
flags.BoolVarP(cmdFlags, &recurse, "recursive", "R", false, "Recurse into the listing", "")
}
var commandDefinition = &cobra.Command{
@@ -144,6 +144,7 @@ those only (without traversing the whole directory structure):
` + lshelp.Help,
Annotations: map[string]string{
"versionIntroduced": "v1.40",
"groups": "Filter,Listing",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)


@@ -23,17 +23,17 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &opt.Recurse, "recursive", "R", false, "Recurse into the listing")
flags.BoolVarP(cmdFlags, &opt.ShowHash, "hash", "", false, "Include hashes in the output (may take longer)")
flags.BoolVarP(cmdFlags, &opt.NoModTime, "no-modtime", "", false, "Don't read the modification time (can speed things up)")
flags.BoolVarP(cmdFlags, &opt.NoMimeType, "no-mimetype", "", false, "Don't read the mime type (can speed things up)")
flags.BoolVarP(cmdFlags, &opt.ShowEncrypted, "encrypted", "", false, "Show the encrypted names")
flags.BoolVarP(cmdFlags, &opt.ShowOrigIDs, "original", "", false, "Show the ID of the underlying Object")
flags.BoolVarP(cmdFlags, &opt.FilesOnly, "files-only", "", false, "Show only files in the listing")
flags.BoolVarP(cmdFlags, &opt.DirsOnly, "dirs-only", "", false, "Show only directories in the listing")
flags.BoolVarP(cmdFlags, &opt.Metadata, "metadata", "M", false, "Add metadata to the listing")
flags.StringArrayVarP(cmdFlags, &opt.HashTypes, "hash-type", "", nil, "Show only this hash type (may be repeated)")
flags.BoolVarP(cmdFlags, &statOnly, "stat", "", false, "Just return the info for the pointed to file")
flags.BoolVarP(cmdFlags, &opt.Recurse, "recursive", "R", false, "Recurse into the listing", "")
flags.BoolVarP(cmdFlags, &opt.ShowHash, "hash", "", false, "Include hashes in the output (may take longer)", "")
flags.BoolVarP(cmdFlags, &opt.NoModTime, "no-modtime", "", false, "Don't read the modification time (can speed things up)", "")
flags.BoolVarP(cmdFlags, &opt.NoMimeType, "no-mimetype", "", false, "Don't read the mime type (can speed things up)", "")
flags.BoolVarP(cmdFlags, &opt.ShowEncrypted, "encrypted", "", false, "Show the encrypted names", "")
flags.BoolVarP(cmdFlags, &opt.ShowOrigIDs, "original", "", false, "Show the ID of the underlying Object", "")
flags.BoolVarP(cmdFlags, &opt.FilesOnly, "files-only", "", false, "Show only files in the listing", "")
flags.BoolVarP(cmdFlags, &opt.DirsOnly, "dirs-only", "", false, "Show only directories in the listing", "")
flags.BoolVarP(cmdFlags, &opt.Metadata, "metadata", "M", false, "Add metadata to the listing", "")
flags.StringArrayVarP(cmdFlags, &opt.HashTypes, "hash-type", "", nil, "Show only this hash type (may be repeated)", "")
flags.BoolVarP(cmdFlags, &statOnly, "stat", "", false, "Just return the info for the pointed to file", "")
}
var commandDefinition = &cobra.Command{
@@ -114,6 +114,7 @@ can be processed line by line as each item is written one to a line.
` + lshelp.Help,
Annotations: map[string]string{
"versionIntroduced": "v1.37",
"groups": "Filter,Listing",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)


@@ -33,6 +33,7 @@ Eg
` + lshelp.Help,
Annotations: map[string]string{
"versionIntroduced": "v1.02",
"groups": "Filter,Listing",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)


@@ -40,6 +40,7 @@ as a relative path).
`,
Annotations: map[string]string{
"versionIntroduced": "v1.02",
"groups": "Filter,Listing",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 1, command, args)


@@ -18,6 +18,9 @@ func init() {
var commandDefinition = &cobra.Command{
Use: "mkdir remote:path",
Short: `Make the path if it doesn't already exist.`,
Annotations: map[string]string{
"groups": "Important",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fdst := cmd.NewFsDir(args)


@@ -4,22 +4,20 @@
package mountlib
import (
"errors"
"fmt"
"path/filepath"
"strings"
"time"
"github.com/artyom/mtab"
"github.com/moby/sys/mountinfo"
)
const (
mtabPath = "/proc/mounts"
pollInterval = 100 * time.Millisecond
)
// CheckMountEmpty checks if folder is not already a mountpoint.
// On Linux we use the OS-specific /proc/mount API so the check won't access the path.
// On Linux we use the OS-specific /proc/self/mountinfo API so the check won't access the path.
// Directories marked as "mounted" by autofs are considered not mounted.
func CheckMountEmpty(mountpoint string) error {
const msg = "directory already mounted, use --allow-non-empty to mount anyway: %s"
@@ -29,43 +27,48 @@ func CheckMountEmpty(mountpoint string) error {
return fmt.Errorf("cannot get absolute path: %s: %w", mountpoint, err)
}
entries, err := mtab.Entries(mtabPath)
infos, err := mountinfo.GetMounts(mountinfo.SingleEntryFilter(mountpointAbs))
if err != nil {
return fmt.Errorf("cannot read %s: %w", mtabPath, err)
return fmt.Errorf("cannot get mounts: %w", err)
}
foundAutofs := false
for _, entry := range entries {
if entry.Dir == mountpointAbs {
if entry.Type != "autofs" {
return fmt.Errorf(msg, mountpointAbs)
}
foundAutofs = true
for _, info := range infos {
if info.FSType != "autofs" {
return fmt.Errorf(msg, mountpointAbs)
}
foundAutofs = true
}
// It isn't safe to list an autofs in the middle of mounting
if foundAutofs {
return nil
}
return checkMountEmpty(mountpoint)
}
// CheckMountReady checks whether mountpoint is mounted by rclone.
// Only mounts with type "rclone" or "fuse.rclone" count.
func CheckMountReady(mountpoint string) error {
const msg = "mount not ready: %s"
mountpointAbs, err := filepath.Abs(mountpoint)
if err != nil {
return fmt.Errorf("cannot get absolute path: %s: %w", mountpoint, err)
}
entries, err := mtab.Entries(mtabPath)
infos, err := mountinfo.GetMounts(mountinfo.SingleEntryFilter(mountpointAbs))
if err != nil {
return fmt.Errorf("cannot read %s: %w", mtabPath, err)
return fmt.Errorf("cannot get mounts: %w", err)
}
for _, entry := range entries {
if entry.Dir == mountpointAbs && strings.Contains(entry.Type, "rclone") {
for _, info := range infos {
if strings.Contains(info.FSType, "rclone") {
return nil
}
}
return errors.New("mount not ready")
return fmt.Errorf(msg, mountpointAbs)
}
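
Both checks now go through github.com/moby/sys/mountinfo instead of parsing an mtab file by hand, so nothing ever needs to stat the mountpoint itself. A minimal standalone sketch of the calls involved, assuming the mountinfo API used above (the mountpoint path is illustrative):

package main

import (
	"fmt"

	"github.com/moby/sys/mountinfo"
)

func main() {
	// SingleEntryFilter restricts results to the one mountpoint.
	infos, err := mountinfo.GetMounts(mountinfo.SingleEntryFilter("/mnt/rclone"))
	if err != nil {
		fmt.Println("cannot get mounts:", err)
		return
	}
	for _, info := range infos {
		fmt.Println(info.Mountpoint, info.FSType)
	}
}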
// WaitMountReady waits until mountpoint is mounted by rclone.


@@ -125,31 +125,31 @@ var Opt Options
// AddFlags adds the non filing system specific flags to the command
func AddFlags(flagSet *pflag.FlagSet) {
rc.AddOption("mount", &Opt)
flags.BoolVarP(flagSet, &Opt.DebugFUSE, "debug-fuse", "", Opt.DebugFUSE, "Debug the FUSE internals - needs -v")
flags.DurationVarP(flagSet, &Opt.AttrTimeout, "attr-timeout", "", Opt.AttrTimeout, "Time for which file/directory attributes are cached")
flags.StringArrayVarP(flagSet, &Opt.ExtraOptions, "option", "o", []string{}, "Option for libfuse/WinFsp (repeat if required)")
flags.StringArrayVarP(flagSet, &Opt.ExtraFlags, "fuse-flag", "", []string{}, "Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)")
flags.BoolVarP(flagSet, &Opt.DebugFUSE, "debug-fuse", "", Opt.DebugFUSE, "Debug the FUSE internals - needs -v", "Mount")
flags.DurationVarP(flagSet, &Opt.AttrTimeout, "attr-timeout", "", Opt.AttrTimeout, "Time for which file/directory attributes are cached", "Mount")
flags.StringArrayVarP(flagSet, &Opt.ExtraOptions, "option", "o", []string{}, "Option for libfuse/WinFsp (repeat if required)", "Mount")
flags.StringArrayVarP(flagSet, &Opt.ExtraFlags, "fuse-flag", "", []string{}, "Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)", "Mount")
// Non-Windows only
flags.BoolVarP(flagSet, &Opt.Daemon, "daemon", "", Opt.Daemon, "Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,... to monitor) (not supported on Windows)")
flags.DurationVarP(flagSet, &Opt.DaemonTimeout, "daemon-timeout", "", Opt.DaemonTimeout, "Time limit for rclone to respond to kernel (not supported on Windows)")
flags.BoolVarP(flagSet, &Opt.DefaultPermissions, "default-permissions", "", Opt.DefaultPermissions, "Makes kernel enforce access control based on the file mode (not supported on Windows)")
flags.BoolVarP(flagSet, &Opt.AllowNonEmpty, "allow-non-empty", "", Opt.AllowNonEmpty, "Allow mounting over a non-empty directory (not supported on Windows)")
flags.BoolVarP(flagSet, &Opt.AllowRoot, "allow-root", "", Opt.AllowRoot, "Allow access to root user (not supported on Windows)")
flags.BoolVarP(flagSet, &Opt.AllowOther, "allow-other", "", Opt.AllowOther, "Allow access to other users (not supported on Windows)")
flags.BoolVarP(flagSet, &Opt.AsyncRead, "async-read", "", Opt.AsyncRead, "Use asynchronous reads (not supported on Windows)")
flags.FVarP(flagSet, &Opt.MaxReadAhead, "max-read-ahead", "", "The number of bytes that can be prefetched for sequential reads (not supported on Windows)")
flags.BoolVarP(flagSet, &Opt.WritebackCache, "write-back-cache", "", Opt.WritebackCache, "Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)")
flags.StringVarP(flagSet, &Opt.DeviceName, "devname", "", Opt.DeviceName, "Set the device name - default is remote:path")
flags.FVarP(flagSet, &Opt.CaseInsensitive, "mount-case-insensitive", "", "Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto)")
flags.BoolVarP(flagSet, &Opt.Daemon, "daemon", "", Opt.Daemon, "Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,... to monitor) (not supported on Windows)", "Mount")
flags.DurationVarP(flagSet, &Opt.DaemonTimeout, "daemon-timeout", "", Opt.DaemonTimeout, "Time limit for rclone to respond to kernel (not supported on Windows)", "Mount")
flags.BoolVarP(flagSet, &Opt.DefaultPermissions, "default-permissions", "", Opt.DefaultPermissions, "Makes kernel enforce access control based on the file mode (not supported on Windows)", "Mount")
flags.BoolVarP(flagSet, &Opt.AllowNonEmpty, "allow-non-empty", "", Opt.AllowNonEmpty, "Allow mounting over a non-empty directory (not supported on Windows)", "Mount")
flags.BoolVarP(flagSet, &Opt.AllowRoot, "allow-root", "", Opt.AllowRoot, "Allow access to root user (not supported on Windows)", "Mount")
flags.BoolVarP(flagSet, &Opt.AllowOther, "allow-other", "", Opt.AllowOther, "Allow access to other users (not supported on Windows)", "Mount")
flags.BoolVarP(flagSet, &Opt.AsyncRead, "async-read", "", Opt.AsyncRead, "Use asynchronous reads (not supported on Windows)", "Mount")
flags.FVarP(flagSet, &Opt.MaxReadAhead, "max-read-ahead", "", "The number of bytes that can be prefetched for sequential reads (not supported on Windows)", "Mount")
flags.BoolVarP(flagSet, &Opt.WritebackCache, "write-back-cache", "", Opt.WritebackCache, "Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)", "Mount")
flags.StringVarP(flagSet, &Opt.DeviceName, "devname", "", Opt.DeviceName, "Set the device name - default is remote:path", "Mount")
flags.FVarP(flagSet, &Opt.CaseInsensitive, "mount-case-insensitive", "", "Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto)", "Mount")
// Windows and OSX
flags.StringVarP(flagSet, &Opt.VolumeName, "volname", "", Opt.VolumeName, "Set the volume name (supported on Windows and OSX only)")
flags.StringVarP(flagSet, &Opt.VolumeName, "volname", "", Opt.VolumeName, "Set the volume name (supported on Windows and OSX only)", "Mount")
// OSX only
flags.BoolVarP(flagSet, &Opt.NoAppleDouble, "noappledouble", "", Opt.NoAppleDouble, "Ignore Apple Double (._) and .DS_Store files (supported on OSX only)")
flags.BoolVarP(flagSet, &Opt.NoAppleXattr, "noapplexattr", "", Opt.NoAppleXattr, "Ignore all \"com.apple.*\" extended attributes (supported on OSX only)")
flags.BoolVarP(flagSet, &Opt.NoAppleDouble, "noappledouble", "", Opt.NoAppleDouble, "Ignore Apple Double (._) and .DS_Store files (supported on OSX only)", "Mount")
flags.BoolVarP(flagSet, &Opt.NoAppleXattr, "noapplexattr", "", Opt.NoAppleXattr, "Ignore all \"com.apple.*\" extended attributes (supported on OSX only)", "Mount")
// Windows only
flags.BoolVarP(flagSet, &Opt.NetworkMode, "network-mode", "", Opt.NetworkMode, "Mount as remote network drive, instead of fixed disk drive (supported on Windows only)")
flags.BoolVarP(flagSet, &Opt.NetworkMode, "network-mode", "", Opt.NetworkMode, "Mount as remote network drive, instead of fixed disk drive (supported on Windows only)", "Mount")
// Unix only
flags.DurationVarP(flagSet, &Opt.DaemonWait, "daemon-wait", "", Opt.DaemonWait, "Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows)")
flags.DurationVarP(flagSet, &Opt.DaemonWait, "daemon-wait", "", Opt.DaemonWait, "Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows)", "Mount")
}
// NewMountCommand makes a mount command with the given name and Mount function
@@ -161,6 +161,7 @@ func NewMountCommand(commandName string, hidden bool, mount MountFn) *cobra.Comm
Long: strings.ReplaceAll(strings.ReplaceAll(mountHelp, "|", "`"), "@", commandName) + vfs.Help,
Annotations: map[string]string{
"versionIntroduced": "v1.33",
"groups": "Filter",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)


@@ -21,8 +21,8 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &deleteEmptySrcDirs, "delete-empty-src-dirs", "", deleteEmptySrcDirs, "Delete empty source dirs after move")
flags.BoolVarP(cmdFlags, &createEmptySrcDirs, "create-empty-src-dirs", "", createEmptySrcDirs, "Create empty source dirs on destination after move")
flags.BoolVarP(cmdFlags, &deleteEmptySrcDirs, "delete-empty-src-dirs", "", deleteEmptySrcDirs, "Delete empty source dirs after move", "")
flags.BoolVarP(cmdFlags, &createEmptySrcDirs, "create-empty-src-dirs", "", createEmptySrcDirs, "Create empty source dirs on destination after move", "")
}
var commandDefinition = &cobra.Command{
@@ -62,6 +62,7 @@ can speed transfers up greatly.
`, "|", "`"),
Annotations: map[string]string{
"versionIntroduced": "v1.19",
"groups": "Filter,Listing,Important,Copy",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)


@@ -51,6 +51,7 @@ successful transfer.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.35",
"groups": "Filter,Listing,Important,Copy",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)


@@ -77,6 +77,7 @@ the remote you can also use the [size](/commands/rclone_size/) command.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.37",
"groups": "Filter,Listing",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)


@@ -26,6 +26,9 @@ delete files. To delete empty directories only, use command
**Important**: Since this can cause data loss, test first with the
` + "`--dry-run` or the `--interactive`/`-i`" + ` flag.
`,
Annotations: map[string]string{
"groups": "Important",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fdst := cmd.NewFsDir(args)


@@ -36,14 +36,14 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &noOutput, "no-output", "", noOutput, "If set, don't output the JSON result")
flags.StringVarP(cmdFlags, &url, "url", "", url, "URL to connect to rclone remote control")
flags.StringVarP(cmdFlags, &jsonInput, "json", "", jsonInput, "Input JSON - use instead of key=value args")
flags.StringVarP(cmdFlags, &authUser, "user", "", "", "Username to use to rclone remote control")
flags.StringVarP(cmdFlags, &authPass, "pass", "", "", "Password to use to connect to rclone remote control")
flags.BoolVarP(cmdFlags, &loopback, "loopback", "", false, "If set connect to this rclone instance not via HTTP")
flags.StringArrayVarP(cmdFlags, &options, "opt", "o", options, "Option in the form name=value or name placed in the \"opt\" array")
flags.StringArrayVarP(cmdFlags, &arguments, "arg", "a", arguments, "Argument placed in the \"arg\" array")
flags.BoolVarP(cmdFlags, &noOutput, "no-output", "", noOutput, "If set, don't output the JSON result", "")
flags.StringVarP(cmdFlags, &url, "url", "", url, "URL to connect to rclone remote control", "")
flags.StringVarP(cmdFlags, &jsonInput, "json", "", jsonInput, "Input JSON - use instead of key=value args", "")
flags.StringVarP(cmdFlags, &authUser, "user", "", "", "Username to use to rclone remote control", "")
flags.StringVarP(cmdFlags, &authPass, "pass", "", "", "Password to use to connect to rclone remote control", "")
flags.BoolVarP(cmdFlags, &loopback, "loopback", "", false, "If set connect to this rclone instance not via HTTP", "")
flags.StringArrayVarP(cmdFlags, &options, "opt", "o", options, "Option in the form name=value or name placed in the \"opt\" array", "")
flags.StringArrayVarP(cmdFlags, &arguments, "arg", "a", arguments, "Argument placed in the \"arg\" array", "")
}
var commandDefinition = &cobra.Command{


@@ -20,7 +20,7 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.Int64VarP(cmdFlags, &size, "size", "", size, "File size hint to preallocate")
flags.Int64VarP(cmdFlags, &size, "size", "", size, "File size hint to preallocate", "")
}
var commandDefinition = &cobra.Command{
@@ -59,6 +59,7 @@ off caching it locally and then ` + "`rclone move`" + ` it to the
destination which can use retries.`,
Annotations: map[string]string{
"versionIntroduced": "v1.38",
"groups": "Important",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)


@@ -35,6 +35,7 @@ See the [rc documentation](/rc/) for more info on the rc flags.
` + libhttp.Help(rcflags.FlagPrefix) + libhttp.TemplateHelp(rcflags.FlagPrefix) + libhttp.AuthHelp(rcflags.FlagPrefix),
Annotations: map[string]string{
"versionIntroduced": "v1.45",
"groups": "RC",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 1, command, args)


@@ -24,6 +24,9 @@ with option ` + "`--rmdirs`" + `) to do that.
To delete a path and any objects in it, use [purge](/commands/rclone_purge/) command.
`,
Annotations: map[string]string{
"groups": "Important",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fdst := cmd.NewFsDir(args)


@@ -40,6 +40,7 @@ command.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.35",
"groups": "Important",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)


@@ -51,12 +51,12 @@ var Opt = Options{}
func init() {
cmd.Root.AddCommand(cmdSelfUpdate)
cmdFlags := cmdSelfUpdate.Flags()
flags.BoolVarP(cmdFlags, &Opt.Check, "check", "", Opt.Check, "Check for latest release, do not download")
flags.StringVarP(cmdFlags, &Opt.Output, "output", "", Opt.Output, "Save the downloaded binary at a given path (default: replace running binary)")
flags.BoolVarP(cmdFlags, &Opt.Stable, "stable", "", Opt.Stable, "Install stable release (this is the default)")
flags.BoolVarP(cmdFlags, &Opt.Beta, "beta", "", Opt.Beta, "Install beta release")
flags.StringVarP(cmdFlags, &Opt.Version, "version", "", Opt.Version, "Install the given rclone version (default: latest)")
flags.StringVarP(cmdFlags, &Opt.Package, "package", "", Opt.Package, "Package format: zip|deb|rpm (default: zip)")
flags.BoolVarP(cmdFlags, &Opt.Check, "check", "", Opt.Check, "Check for latest release, do not download", "")
flags.StringVarP(cmdFlags, &Opt.Output, "output", "", Opt.Output, "Save the downloaded binary at a given path (default: replace running binary)", "")
flags.BoolVarP(cmdFlags, &Opt.Stable, "stable", "", Opt.Stable, "Install stable release (this is the default)", "")
flags.BoolVarP(cmdFlags, &Opt.Beta, "beta", "", Opt.Beta, "Install beta release", "")
flags.StringVarP(cmdFlags, &Opt.Version, "version", "", Opt.Version, "Install the given rclone version (default: latest)", "")
flags.StringVarP(cmdFlags, &Opt.Package, "package", "", Opt.Package, "Package format: zip|deb|rpm (default: zip)", "")
}
var cmdSelfUpdate = &cobra.Command{

View File

@@ -50,6 +50,7 @@ files that they are not able to play back correctly.
` + dlnaflags.Help + vfs.Help,
Annotations: map[string]string{
"versionIntroduced": "v1.46",
"groups": "Filter",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)

View File

@@ -49,11 +49,11 @@ var (
func addFlagsPrefix(flagSet *pflag.FlagSet, prefix string, Opt *Options) {
rc.AddOption("dlna", &Opt)
flags.StringVarP(flagSet, &Opt.ListenAddr, prefix+"addr", "", Opt.ListenAddr, "The ip:port or :port to bind the DLNA http server to")
flags.StringVarP(flagSet, &Opt.FriendlyName, prefix+"name", "", Opt.FriendlyName, "Name of DLNA server")
flags.BoolVarP(flagSet, &Opt.LogTrace, prefix+"log-trace", "", Opt.LogTrace, "Enable trace logging of SOAP traffic")
flags.StringArrayVarP(flagSet, &Opt.InterfaceNames, prefix+"interface", "", Opt.InterfaceNames, "The interface to use for SSDP (repeat as necessary)")
flags.DurationVarP(flagSet, &Opt.AnnounceInterval, prefix+"announce-interval", "", Opt.AnnounceInterval, "The interval between SSDP announcements")
flags.StringVarP(flagSet, &Opt.ListenAddr, prefix+"addr", "", Opt.ListenAddr, "The ip:port or :port to bind the DLNA http server to", prefix)
flags.StringVarP(flagSet, &Opt.FriendlyName, prefix+"name", "", Opt.FriendlyName, "Name of DLNA server", prefix)
flags.BoolVarP(flagSet, &Opt.LogTrace, prefix+"log-trace", "", Opt.LogTrace, "Enable trace logging of SOAP traffic", prefix)
flags.StringArrayVarP(flagSet, &Opt.InterfaceNames, prefix+"interface", "", Opt.InterfaceNames, "The interface to use for SSDP (repeat as necessary)", prefix)
flags.DurationVarP(flagSet, &Opt.AnnounceInterval, prefix+"announce-interval", "", Opt.AnnounceInterval, "The interval between SSDP announcements", prefix)
}
// AddFlags add the command line flags for DLNA serving.

View File

@@ -33,11 +33,11 @@ var (
func init() {
cmdFlags := Command.Flags()
// Add command specific flags
flags.StringVarP(cmdFlags, &baseDir, "base-dir", "", baseDir, "Base directory for volumes")
flags.StringVarP(cmdFlags, &socketAddr, "socket-addr", "", socketAddr, "Address <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)")
flags.IntVarP(cmdFlags, &socketGid, "socket-gid", "", socketGid, "GID for unix socket (default: current process GID)")
flags.BoolVarP(cmdFlags, &forgetState, "forget-state", "", forgetState, "Skip restoring previous state")
flags.BoolVarP(cmdFlags, &noSpec, "no-spec", "", noSpec, "Do not write spec file")
flags.StringVarP(cmdFlags, &baseDir, "base-dir", "", baseDir, "Base directory for volumes", "")
flags.StringVarP(cmdFlags, &socketAddr, "socket-addr", "", socketAddr, "Address <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)", "")
flags.IntVarP(cmdFlags, &socketGid, "socket-gid", "", socketGid, "GID for unix socket (default: current process GID)", "")
flags.BoolVarP(cmdFlags, &forgetState, "forget-state", "", forgetState, "Skip restoring previous state", "")
flags.BoolVarP(cmdFlags, &noSpec, "no-spec", "", noSpec, "Do not write spec file", "")
// Add common mount/vfs flags
mountlib.AddFlags(cmdFlags)
vfsflags.AddFlags(cmdFlags)
@@ -50,6 +50,7 @@ var Command = &cobra.Command{
Long: strings.ReplaceAll(longHelp, "|", "`") + vfs.Help,
Annotations: map[string]string{
"versionIntroduced": "v1.56",
"groups": "Filter",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 0, command, args)

View File

@@ -59,13 +59,13 @@ var Opt = DefaultOpt
// AddFlags adds flags for ftp
func AddFlags(flagSet *pflag.FlagSet) {
rc.AddOption("ftp", &Opt)
flags.StringVarP(flagSet, &Opt.ListenAddr, "addr", "", Opt.ListenAddr, "IPaddress:Port or :Port to bind server to")
flags.StringVarP(flagSet, &Opt.PublicIP, "public-ip", "", Opt.PublicIP, "Public IP address to advertise for passive connections")
flags.StringVarP(flagSet, &Opt.PassivePorts, "passive-port", "", Opt.PassivePorts, "Passive port range to use")
flags.StringVarP(flagSet, &Opt.BasicUser, "user", "", Opt.BasicUser, "User name for authentication")
flags.StringVarP(flagSet, &Opt.BasicPass, "pass", "", Opt.BasicPass, "Password for authentication (empty value allow every password)")
flags.StringVarP(flagSet, &Opt.TLSCert, "cert", "", Opt.TLSCert, "TLS PEM key (concatenation of certificate and CA certificate)")
flags.StringVarP(flagSet, &Opt.TLSKey, "key", "", Opt.TLSKey, "TLS PEM Private key")
flags.StringVarP(flagSet, &Opt.ListenAddr, "addr", "", Opt.ListenAddr, "IPaddress:Port or :Port to bind server to", "")
flags.StringVarP(flagSet, &Opt.PublicIP, "public-ip", "", Opt.PublicIP, "Public IP address to advertise for passive connections", "")
flags.StringVarP(flagSet, &Opt.PassivePorts, "passive-port", "", Opt.PassivePorts, "Passive port range to use", "")
flags.StringVarP(flagSet, &Opt.BasicUser, "user", "", Opt.BasicUser, "User name for authentication", "")
flags.StringVarP(flagSet, &Opt.BasicPass, "pass", "", Opt.BasicPass, "Password for authentication (empty value allow every password)", "")
flags.StringVarP(flagSet, &Opt.TLSCert, "cert", "", Opt.TLSCert, "TLS PEM key (concatenation of certificate and CA certificate)", "")
flags.StringVarP(flagSet, &Opt.TLSKey, "key", "", Opt.TLSKey, "TLS PEM Private key", "")
}
func init() {
@@ -101,6 +101,7 @@ You can set a single username and password with the --user and --pass flags.
` + vfs.Help + proxy.Help,
Annotations: map[string]string{
"versionIntroduced": "v1.44",
"groups": "Filter",
},
Run: func(command *cobra.Command, args []string) {
var f fs.Fs

View File

@@ -1,23 +0,0 @@
//go:build ignore
// +build ignore
package main
import (
"log"
"net/http"
"github.com/shurcooL/vfsgen"
)
func main() {
var AssetDir http.FileSystem = http.Dir("./templates")
err := vfsgen.Generate(AssetDir, vfsgen.Options{
PackageName: "data",
BuildTags: "!dev",
VariableName: "Assets",
})
if err != nil {
log.Fatalln(err)
}
}

File diff suppressed because one or more lines are too long

View File

@@ -1,99 +0,0 @@
// Package data provides common functionality for http servers
// The "go:generate" directive compiles static assets by running assets_generate.go
//
//go:generate go run assets_generate.go
package data
import (
"fmt"
"html/template"
"io"
"os"
"time"
"github.com/spf13/pflag"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
)
// Help describes the options for the serve package
var Help = `
#### Template
` + "`--template`" + ` allows a user to specify a custom markup template for HTTP
and WebDAV serve functions. The server exports the following markup
to be used within the template to server pages:
| Parameter | Description |
| :---------- | :---------- |
| .Name | The full path of a file/directory. |
| .Title | Directory listing of .Name |
| .Sort | The current sort used. This is changeable via ?sort= parameter |
| | Sort Options: namedirfirst,name,size,time (default namedirfirst) |
| .Order | The current ordering used. This is changeable via ?order= parameter |
| | Order Options: asc,desc (default asc) |
| .Query | Currently unused. |
| .Breadcrumb | Allows for creating a relative navigation |
|-- .Link | The relative to the root link of the Text. |
|-- .Text | The Name of the directory. |
| .Entries | Information about a specific file/directory. |
|-- .URL | The 'url' of an entry. |
|-- .Leaf | Currently same as 'URL' but intended to be 'just' the name. |
|-- .IsDir | Boolean for if an entry is a directory or not. |
|-- .Size | Size in Bytes of the entry. |
|-- .ModTime | The UTC timestamp of an entry. |
`
// Options for the templating functionality
type Options struct {
Template string
}
// AddFlags for the templating functionality
func AddFlags(flagSet *pflag.FlagSet, prefix string, Opt *Options) {
flags.StringVarP(flagSet, &Opt.Template, prefix+"template", "", Opt.Template, "User-specified template")
}
// AfterEpoch returns the time since the epoch for the given time
func AfterEpoch(t time.Time) bool {
return t.After(time.Time{})
}
// GetTemplate returns the HTML template for serving directories via HTTP/WebDAV
func GetTemplate(tmpl string) (tpl *template.Template, err error) {
var templateString string
if tmpl == "" {
templateFile, err := Assets.Open("index.html")
if err != nil {
return nil, fmt.Errorf("get template open: %w", err)
}
defer fs.CheckClose(templateFile, &err)
templateBytes, err := io.ReadAll(templateFile)
if err != nil {
return nil, fmt.Errorf("get template read: %w", err)
}
templateString = string(templateBytes)
} else {
templateFile, err := os.ReadFile(tmpl)
if err != nil {
return nil, fmt.Errorf("get template open: %w", err)
}
templateString = string(templateFile)
}
funcMap := template.FuncMap{
"afterEpoch": AfterEpoch,
}
tpl, err = template.New("index").Funcs(funcMap).Parse(templateString)
if err != nil {
return nil, fmt.Errorf("get template parse: %w", err)
}
return
}

View File

@@ -1,389 +0,0 @@
<!--
// Copyright 2015 Matthew Holt and The Caddy Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
Modifications: Adapted to rclone markup -->
<!DOCTYPE html>
<html>
<head>
<title>{{html .Name}}</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="google" content="notranslate">
<style>
* { padding: 0; margin: 0; }
body {
font-family: sans-serif;
text-rendering: optimizespeed;
background-color: #ffffff;
}
a {
color: #006ed3;
text-decoration: none;
}
a:hover,
h1 a:hover {
color: #319cff;
}
header,
#summary {
padding-left: 5%;
padding-right: 5%;
}
th:first-child,
td:first-child {
width: 1%; /* tighter for mobile */
}
th:last-child,
td:last-child {
width: 1%; /* tighter for mobile */
}
header {
padding-top: 25px;
padding-bottom: 15px;
background-color: #f2f2f2;
}
h1 {
font-size: 20px;
font-weight: normal;
white-space: nowrap;
overflow-x: hidden;
text-overflow: ellipsis;
color: #999;
}
h1 a {
color: #000;
margin: 0 4px;
}
h1 a:hover {
text-decoration: underline;
}
h1 a:first-child {
margin: 0;
}
main {
display: block;
}
.meta {
font-size: 12px;
font-family: Verdana, sans-serif;
border-bottom: 1px solid #9C9C9C;
padding-top: 10px;
padding-bottom: 10px;
}
.meta-item {
margin-right: 1em;
}
#filter {
padding: 4px;
border: 1px solid #CCC;
}
table {
width: 100%;
border-collapse: collapse;
}
tr {
border-bottom: 1px dashed #dadada;
}
tbody tr:hover {
background-color: #ffffec;
}
th,
td {
text-align: left;
padding: 10px 2px; /* Changed from 0 to 2 */
}
th {
padding-top: 10px;
padding-bottom: 10px;
font-size: 16px;
white-space: nowrap;
}
th a {
color: black;
}
th svg {
vertical-align: middle;
}
td {
white-space: nowrap;
font-size: 14px;
}
td:nth-child(2) {
width: 80%;
}
td:nth-child(3) {
padding: 0 20px 0 20px;
}
th:nth-child(4),
td:nth-child(4) {
text-align: right;
}
td:nth-child(2) svg {
position: absolute;
}
td .name,
td .goup {
margin-left: 1.75em;
word-break: break-all;
overflow-wrap: break-word;
white-space: pre-wrap;
}
.icon {
margin-right: 5px;
}
.icon.sort {
display: inline-block;
width: 1em;
height: 1em;
position: relative;
top: .2em;
}
.icon.sort .top {
position: absolute;
left: 0;
top: -1px;
}
.icon.sort .bottom {
position: absolute;
bottom: -1px;
left: 0;
}
footer {
padding: 40px 20px;
font-size: 12px;
text-align: center;
}
@media (max-width: 600px) {
/* .hideable {
display: none;
} removed to keep dates on mobile */
td:nth-child(2) {
width: auto;
}
th:nth-child(3),
td:nth-child(3) {
padding-right: 5%;
text-align: right;
}
h1 {
color: #000;
}
h1 a {
margin: 0;
}
#filter {
max-width: 100px;
}
}
</style>
</head>
<body onload='filter();toggle("order");changeSize()'>
<svg version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" height="0" width="0" style="position: absolute;">
<defs>
<!-- Folder -->
<g id="folder" fill-rule="nonzero" fill="none">
<path d="M285.22 37.55h-142.6L110.9 0H31.7C14.25 0 0 16.9 0 37.55v75.1h316.92V75.1c0-20.65-14.26-37.55-31.7-37.55z" fill="#FFA000"/>
<path d="M285.22 36H31.7C14.25 36 0 50.28 0 67.74v158.7c0 17.47 14.26 31.75 31.7 31.75H285.2c17.44 0 31.7-14.3 31.7-31.75V67.75c0-17.47-14.26-31.75-31.7-31.75z" fill="#FFCA28"/>
</g>
<g id="folder-shortcut" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
<g id="folder-shortcut-group" fill-rule="nonzero">
<g id="folder-shortcut-shape">
<path d="M285.224876,37.5486902 L142.612438,37.5486902 L110.920785,0 L31.6916529,0 C14.2612438,0 0,16.8969106 0,37.5486902 L0,112.646071 L316.916529,112.646071 L316.916529,75.0973805 C316.916529,54.4456008 302.655285,37.5486902 285.224876,37.5486902 Z" id="Shape" fill="#FFA000"></path>
<path d="M285.224876,36 L31.6916529,36 C14.2612438,36 0,50.2838568 0,67.7419039 L0,226.451424 C0,243.909471 14.2612438,258.193328 31.6916529,258.193328 L285.224876,258.193328 C302.655285,258.193328 316.916529,243.909471 316.916529,226.451424 L316.916529,67.7419039 C316.916529,50.2838568 302.655285,36 285.224876,36 Z" id="Shape" fill="#FFCA28"></path>
</g>
<path d="M126.154134,250.559184 C126.850974,251.883673 127.300549,253.006122 127.772602,254.106122 C128.469442,255.206122 128.919016,256.104082 129.638335,257.002041 C130.559962,258.326531 131.728855,259 133.100057,259 C134.493737,259 135.415364,258.55102 136.112204,257.67551 C136.809044,257.002041 137.258619,255.902041 137.258619,254.577551 C137.258619,253.904082 137.258619,252.804082 137.033832,251.457143 C136.786566,249.908163 136.561779,249.032653 136.561779,248.583673 C136.089726,242.814286 135.864939,237.920408 135.864939,233.273469 C135.864939,225.057143 136.786566,217.514286 138.180246,210.846939 C139.798713,204.202041 141.889234,198.634694 144.429328,193.763265 C147.216689,188.869388 150.678411,184.873469 154.836973,181.326531 C158.995535,177.779592 163.626149,174.883673 168.481552,172.661224 C173.336954,170.438776 179.113983,168.665306 185.587852,167.340816 C192.061722,166.218367 198.760378,165.342857 205.481514,164.669388 C212.18017,164.220408 219.598146,163.995918 228.162535,163.995918 L246.055591,163.995918 L246.055591,195.514286 C246.055591,197.736735 246.752431,199.510204 248.370899,201.059184 C250.214153,202.608163 252.079886,203.506122 254.372715,203.506122 C256.463236,203.506122 258.531277,202.608163 260.172223,201.059184 L326.102289,137.797959 C327.720757,136.24898 328.642384,134.47551 328.642384,132.253061 C328.642384,130.030612 327.720757,128.257143 326.102289,126.708163 L260.172223,63.4469388 C258.553756,61.8979592 256.463236,61 254.395194,61 C252.079886,61 250.236632,61.8979592 248.393377,63.4469388 C246.77491,64.9959184 246.07807,66.7693878 246.07807,68.9918367 L246.07807,100.510204 L228.162535,100.510204 C166.863084,100.510204 129.166282,117.167347 115.274437,150.459184 C110.666301,161.54898 108.350993,175.310204 108.350993,191.742857 C108.350993,205.279592 113.903236,223.912245 124.760454,247.438776 C125.00772,248.112245 125.457294,249.010204 126.154134,250.559184 Z" id="Shape" fill="#FFFFFF" transform="translate(218.496689, 160.000000) scale(-1, 1) translate(-218.496689, -160.000000) "></path>
</g>
</g>
<!-- File -->
<g id="file" stroke="#000" stroke-width="25" fill="#FFF" fill-rule="evenodd" stroke-linecap="round" stroke-linejoin="round">
<path d="M13 24.12v274.76c0 6.16 5.87 11.12 13.17 11.12H239c7.3 0 13.17-4.96 13.17-11.12V136.15S132.6 13 128.37 13H26.17C18.87 13 13 17.96 13 24.12z"/>
<path d="M129.37 13L129 113.9c0 10.58 7.26 19.1 16.27 19.1H249L129.37 13z"/>
</g>
<g id="file-shortcut" stroke="none" stroke-width="1" fill="none" fill-rule="evenodd">
<g id="file-shortcut-group" transform="translate(13.000000, 13.000000)">
<g id="file-shortcut-shape" stroke="#000000" stroke-width="25" fill="#FFFFFF" stroke-linecap="round" stroke-linejoin="round">
<path d="M0,11.1214886 L0,285.878477 C0,292.039924 5.87498876,296.999983 13.1728373,296.999983 L225.997983,296.999983 C233.295974,296.999983 239.17082,292.039942 239.17082,285.878477 L239.17082,123.145388 C239.17082,123.145388 119.58541,2.84217094e-14 115.369423,2.84217094e-14 L13.1728576,2.84217094e-14 C5.87500907,-1.71479982e-05 0,4.96022995 0,11.1214886 Z" id="rect1171"></path>
<path d="M116.37005,0 L116,100.904964 C116,111.483663 123.258008,120 132.273377,120 L236,120 L116.37005,0 L116.37005,0 Z" id="rect1794"></path>
</g>
<path d="M47.803141,294.093878 C48.4999811,295.177551 48.9495553,296.095918 49.4216083,296.995918 C50.1184484,297.895918 50.5680227,298.630612 51.2873415,299.365306 C52.2089688,300.44898 53.3778619,301 54.7490634,301 C56.1427436,301 57.0643709,300.632653 57.761211,299.916327 C58.4580511,299.365306 58.9076254,298.465306 58.9076254,297.381633 C58.9076254,296.830612 58.9076254,295.930612 58.6828382,294.828571 C58.4355724,293.561224 58.2107852,292.844898 58.2107852,292.477551 C57.7387323,287.757143 57.5139451,283.753061 57.5139451,279.95102 C57.5139451,273.228571 58.4355724,267.057143 59.8292526,261.602041 C61.44772,256.165306 63.5382403,251.610204 66.0783349,247.62449 C68.8656954,243.620408 72.3274172,240.35102 76.4859792,237.44898 C80.6445412,234.546939 85.2751561,232.177551 90.1305582,230.359184 C94.9859603,228.540816 100.76299,227.089796 107.236859,226.006122 C113.710728,225.087755 120.409385,224.371429 127.13052,223.820408 C133.829177,223.453061 141.247152,223.269388 149.811542,223.269388 L167.704598,223.269388 L167.704598,249.057143 C167.704598,250.87551 168.401438,252.326531 170.019905,253.593878 C171.86316,254.861224 173.728893,255.595918 176.021722,255.595918 C178.112242,255.595918 180.180284,254.861224 181.82123,253.593878 L247.751296,201.834694 C249.369763,200.567347 250.291391,199.116327 250.291391,197.297959 C250.291391,195.479592 249.369763,194.028571 247.751296,192.761224 L181.82123,141.002041 C180.202763,139.734694 178.112242,139 176.044201,139 C173.728893,139 171.885639,139.734694 170.042384,141.002041 C168.423917,142.269388 167.727077,143.720408 167.727077,145.538776 L167.727077,171.326531 L149.811542,171.326531 C88.5120908,171.326531 50.8152886,184.955102 36.9234437,212.193878 C32.3153075,221.267347 30,232.526531 30,245.971429 C30,257.046939 35.5522422,272.291837 46.4094607,291.540816 C46.6567266,292.091837 47.1063009,292.826531 47.803141,294.093878 Z" id="Shape-Copy" fill="#000000" fill-rule="nonzero" transform="translate(140.145695, 220.000000) scale(-1, 1) translate(-140.145695, -220.000000) "></path>
</g>
</g>
<!-- Up arrow -->
<g id="up-arrow" transform="translate(-279.22 -208.12)">
<path transform="matrix(.22413 0 0 .12089 335.67 164.35)" stroke-width="0" d="m-194.17 412.01h-28.827-28.827l14.414-24.965 14.414-24.965 14.414 24.965z"/>
</g>
<!-- Down arrow -->
<g id="down-arrow" transform="translate(-279.22 -208.12)">
<path transform="matrix(.22413 0 0 -.12089 335.67 257.93)" stroke-width="0" d="m-194.17 412.01h-28.827-28.827l14.414-24.965 14.414-24.965 14.414 24.965z"/>
</g>
</defs>
</svg>
<header>
<h1>
{{range $i, $crumb := .Breadcrumb}}<a href="{{html $crumb.Link}}">{{html $crumb.Text}}</a>{{if ne $i 0}}/{{end}}{{end}}
</h1>
</header>
<main>
<div class="meta">
<div id="summary">
<span class="meta-item"><input type="text" placeholder="filter" id="filter" onkeyup='filter()'></span>
</div>
</div>
<div class="listing">
<table aria-describedby="summary">
<thead>
<tr>
<th></th>
<th>
<a href="?sort=namedirfirst&order=asc" class="icon sort order"><svg class="top" width="1em" height=".5em" version="1.1" viewBox="0 0 12.922194 6.0358899"><use xlink:href="#up-arrow"></use></svg><svg class="bottom" width="1em" height=".5em" version="1.1" viewBox="0 0 12.922194 6.0358899"><use xlink:href="#down-arrow"></use></svg></a>
<a href="?sort=name&order=asc" class="order">Name</a>
</th>
<th>
<a href="?sort=size&order=asc" class="order">Size</a>
</th>
<th class="hideable">
<a href="?sort=time&order=asc" class="order">Modified</a>
</th>
<th class="hideable"></th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>
<a href="..">
<span class="goup">Go up</span>
</a>
</td>
<td>&mdash;</td>
<td class="hideable">&mdash;</td>
<td class="hideable"></td>
</tr>
{{- range .Entries}}
<tr class="file">
<td>
</td>
<td>
{{- if .IsDir}}
<svg width="1.5em" height="1em" version="1.1" viewBox="0 0 317 259"><use xlink:href="#folder"></use></svg>
{{- else}}
<svg width="1.5em" height="1em" version="1.1" viewBox="0 0 265 323"><use xlink:href="#file"></use></svg>
{{- end}}
<span class="name"><a href="{{html .URL}}">{{html .Leaf}}</a></span>
</td>
{{- if .IsDir}}
<td data-order="-1">&mdash;</td>
{{- else}}
<td data-order="{{.Size}}"><size>{{.Size}}</size></td>
{{- end}}
{{- if .ModTime | afterEpoch }}
<td class="hideable"><time datetime="{{.ModTime }}">{{.ModTime }}</time></td>
{{- else}}
<td class="hideable"></td>
{{- end}}
<td class="hideable"></td>
</tr>
{{- end}}
</tbody>
</table>
</div>
</main>
<script>
var filterEl = document.getElementById('filter');
filterEl.focus();
function filter() {
var q = filterEl.value.trim().toLowerCase();
var elems = document.querySelectorAll('tr.file');
elems.forEach(function(el) {
if (!q) {
el.style.display = '';
return;
}
var nameEl = el.querySelector('.name');
var nameVal = nameEl.textContent.trim().toLowerCase();
if (nameVal.indexOf(q) !== -1) {
el.style.display = '';
} else {
el.style.display = 'none';
}
});
}
function localizeDatetime(e, index, ar) {
if (e.textContent === undefined) {
return;
}
var d = new Date(e.getAttribute('datetime'));
if (isNaN(d)) {
d = new Date(e.textContent);
if (isNaN(d)) {
return;
}
}
e.textContent = d.toLocaleString([], {day: "2-digit", month: "2-digit", year: "numeric", hour: "2-digit", minute: "2-digit", second: "2-digit"});
}
var timeList = Array.prototype.slice.call(document.getElementsByTagName("time"));
timeList.forEach(localizeDatetime);
var getUrlParameter = function getUrlParameter(sParam) {
var sPageURL = window.location.search.substring(1),
sURLVariables = sPageURL.split('&'),
sParameterName,
i;
for (i = 0; i < sURLVariables.length; i++) {
sParameterName = sURLVariables[i].split('=');
if (sParameterName[0] === sParam) {
return sParameterName[1] === undefined ? true : decodeURIComponent(sParameterName[1]);
}
}
};
function toggle(className){
var order = getUrlParameter('order');
var elements = document.getElementsByClassName(className);
for(var i = 0, length = elements.length; i < length; i++) {
var currHref = elements[i].href;
if(order=='desc'){
var chg = currHref.replace('desc', 'asc');
elements[i].href = chg;
}
if(order=='asc'){
var chg = currHref.replace('asc', 'desc');
elements[i].href = chg;
}
}
};
function readableFileSize(size) {
var units = ['B', 'KiB', 'MiB', 'GiB', 'TiB', 'PiB', 'EiB', 'ZiB', 'YiB'];
var i = 0;
while(size >= 1024) {
size /= 1024;
++i;
}
return parseFloat(size).toFixed(2) + ' ' + units[i];
}
function changeSize() {
var sizes = document.getElementsByTagName("size");
for (var i = 0; i < sizes.length; i++) {
humanSize = readableFileSize(sizes[i].innerHTML);
sizes[i].innerHTML = humanSize
}
}
</script>
</body>
</html>

View File

@@ -75,6 +75,7 @@ control the stats printing.
` + libhttp.Help(flagPrefix) + libhttp.TemplateHelp(flagPrefix) + libhttp.AuthHelp(flagPrefix) + vfs.Help + proxy.Help,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
"groups": "Filter",
},
Run: func(command *cobra.Command, args []string) {
var f fs.Fs

View File

@@ -14,5 +14,5 @@ var (
// AddFlags adds the non filing system specific flags to the command
func AddFlags(flagSet *pflag.FlagSet) {
flags.StringVarP(flagSet, &Opt.AuthProxy, "auth-proxy", "", Opt.AuthProxy, "A program to use to create the backend from the auth")
flags.StringVarP(flagSet, &Opt.AuthProxy, "auth-proxy", "", Opt.AuthProxy, "A program to use to create the backend from the auth", "")
}

View File

@@ -57,10 +57,10 @@ func init() {
flagSet := Command.Flags()
libhttp.AddAuthFlagsPrefix(flagSet, flagPrefix, &Opt.Auth)
libhttp.AddHTTPFlagsPrefix(flagSet, flagPrefix, &Opt.HTTP)
flags.BoolVarP(flagSet, &Opt.Stdio, "stdio", "", false, "Run an HTTP2 server on stdin/stdout")
flags.BoolVarP(flagSet, &Opt.AppendOnly, "append-only", "", false, "Disallow deletion of repository data")
flags.BoolVarP(flagSet, &Opt.PrivateRepos, "private-repos", "", false, "Users can only access their private repo")
flags.BoolVarP(flagSet, &Opt.CacheObjects, "cache-objects", "", true, "Cache listed objects")
flags.BoolVarP(flagSet, &Opt.Stdio, "stdio", "", false, "Run an HTTP2 server on stdin/stdout", "")
flags.BoolVarP(flagSet, &Opt.AppendOnly, "append-only", "", false, "Disallow deletion of repository data", "")
flags.BoolVarP(flagSet, &Opt.PrivateRepos, "private-repos", "", false, "Users can only access their private repo", "")
flags.BoolVarP(flagSet, &Opt.CacheObjects, "cache-objects", "", true, "Cache listed objects", "")
}
// Command definition for cobra

View File

@@ -42,13 +42,13 @@ var Opt = DefaultOpt
// AddFlags adds flags for the sftp
func AddFlags(flagSet *pflag.FlagSet, Opt *Options) {
rc.AddOption("sftp", &Opt)
flags.StringVarP(flagSet, &Opt.ListenAddr, "addr", "", Opt.ListenAddr, "IPaddress:Port or :Port to bind server to")
flags.StringArrayVarP(flagSet, &Opt.HostKeys, "key", "", Opt.HostKeys, "SSH private host key file (Can be multi-valued, leave blank to auto generate)")
flags.StringVarP(flagSet, &Opt.AuthorizedKeys, "authorized-keys", "", Opt.AuthorizedKeys, "Authorized keys file")
flags.StringVarP(flagSet, &Opt.User, "user", "", Opt.User, "User name for authentication")
flags.StringVarP(flagSet, &Opt.Pass, "pass", "", Opt.Pass, "Password for authentication")
flags.BoolVarP(flagSet, &Opt.NoAuth, "no-auth", "", Opt.NoAuth, "Allow connections with no authentication if set")
flags.BoolVarP(flagSet, &Opt.Stdio, "stdio", "", Opt.Stdio, "Run an sftp server on stdin/stdout")
flags.StringVarP(flagSet, &Opt.ListenAddr, "addr", "", Opt.ListenAddr, "IPaddress:Port or :Port to bind server to", "")
flags.StringArrayVarP(flagSet, &Opt.HostKeys, "key", "", Opt.HostKeys, "SSH private host key file (Can be multi-valued, leave blank to auto generate)", "")
flags.StringVarP(flagSet, &Opt.AuthorizedKeys, "authorized-keys", "", Opt.AuthorizedKeys, "Authorized keys file", "")
flags.StringVarP(flagSet, &Opt.User, "user", "", Opt.User, "User name for authentication", "")
flags.StringVarP(flagSet, &Opt.Pass, "pass", "", Opt.Pass, "Password for authentication", "")
flags.BoolVarP(flagSet, &Opt.NoAuth, "no-auth", "", Opt.NoAuth, "Allow connections with no authentication if set", "")
flags.BoolVarP(flagSet, &Opt.Stdio, "stdio", "", Opt.Stdio, "Run an sftp server on stdin/stdout", "")
}
func init() {
@@ -116,6 +116,7 @@ provided by OpenSSH in this case.
` + vfs.Help + proxy.Help,
Annotations: map[string]string{
"versionIntroduced": "v1.48",
"groups": "Filter",
},
Run: func(command *cobra.Command, args []string) {
var f fs.Fs

View File

@@ -6,8 +6,10 @@ import (
"encoding/xml"
"errors"
"fmt"
"mime"
"net/http"
"os"
"path"
"strconv"
"strings"
"time"
@@ -61,8 +63,8 @@ func init() {
libhttp.AddTemplateFlagsPrefix(flagSet, "", &Opt.Template)
vfsflags.AddFlags(flagSet)
proxyflags.AddFlags(flagSet)
flags.StringVarP(flagSet, &Opt.HashName, "etag-hash", "", "", "Which hash to use for the ETag, or auto or blank for off")
flags.BoolVarP(flagSet, &Opt.DisableGETDir, "disable-dir-list", "", false, "Disable HTML directory list on GET request for a directory")
flags.StringVarP(flagSet, &Opt.HashName, "etag-hash", "", "", "Which hash to use for the ETag, or auto or blank for off", "")
flags.BoolVarP(flagSet, &Opt.DisableGETDir, "disable-dir-list", "", false, "Disable HTML directory list on GET request for a directory", "")
}
// Command definition for cobra
@@ -112,6 +114,7 @@ https://learn.microsoft.com/en-us/office/troubleshoot/powerpoint/office-opens-bl
` + libhttp.Help(flagPrefix) + libhttp.TemplateHelp(flagPrefix) + libhttp.AuthHelp(flagPrefix) + vfs.Help + proxy.Help,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
"groups": "Filter",
},
RunE: func(command *cobra.Command, args []string) error {
var f fs.Fs
@@ -580,12 +583,14 @@ func (fi FileInfo) ContentType(ctx context.Context) (contentType string, err err
fs.Errorf(fi, "Expecting vfs.Node, got %T", fi.FileInfo)
return "application/octet-stream", nil
}
entry := node.DirEntry()
entry := node.DirEntry() // can be nil
switch x := entry.(type) {
case fs.Object:
return fs.MimeType(ctx, x), nil
case fs.Directory:
return "inode/directory", nil
case nil:
return mime.TypeByExtension(path.Ext(node.Name())), nil
}
fs.Errorf(fi, "Expecting fs.Object or fs.Directory, got %T", entry)
return "application/octet-stream", nil

View File

@@ -43,6 +43,7 @@ a remote:path.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.27",
"groups": "Filter,Listing",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 1, command, args)

View File

@@ -19,7 +19,7 @@ var jsonOutput bool
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &jsonOutput, "json", "", false, "Format output as JSON")
flags.BoolVarP(cmdFlags, &jsonOutput, "json", "", false, "Format output as JSON", "")
}
var commandDefinition = &cobra.Command{
@@ -46,6 +46,7 @@ of the size command.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.23",
"groups": "Filter,Listing",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)

View File

@@ -18,7 +18,7 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &createEmptySrcDirs, "create-empty-src-dirs", "", createEmptySrcDirs, "Create empty source dirs on destination after sync")
flags.BoolVarP(cmdFlags, &createEmptySrcDirs, "create-empty-src-dirs", "", createEmptySrcDirs, "Create empty source dirs on destination after sync", "")
}
var commandDefinition = &cobra.Command{
@@ -60,6 +60,9 @@ destination that is inside the source directory.
**Note**: Use the ` + "`rclone dedupe`" + ` command to deal with "Duplicate object/directory found in source/destination - ignoring" errors.
See [this forum post](https://forum.rclone.org/t/sync-not-clearing-duplicates/14372) for more info.
`,
Annotations: map[string]string{
"groups": "Sync,Copy,Filter,Listing,Important",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsrc, srcFileName, fdst := cmd.NewFsSrcFileDst(args)

View File

@@ -20,7 +20,7 @@ var (
func init() {
test.Command.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.DurationVarP(cmdFlags, &pollInterval, "poll-interval", "", pollInterval, "Time to wait between polling for changes")
flags.DurationVarP(cmdFlags, &pollInterval, "poll-interval", "", pollInterval, "Time to wait between polling for changes", "")
}
var commandDefinition = &cobra.Command{

View File

@@ -0,0 +1,94 @@
package info
// Create files with all possible base 32768 file names
import (
"context"
"fmt"
"log"
"os"
"path/filepath"
"strings"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/sync"
)
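// Each rune in safeAlphabet below is the first code point of a block of 32
// consecutive runes; checkBase32768 expands each block into one file name,
// so the generated files collectively exercise the base32768 alphabet.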
const safeAlphabet = "ƀɀɠʀҠԀڀڠݠހ߀ကႠᄀᄠᅀᆀᇠሀሠበዠጠᎠᏀᐠᑀᑠᒀᒠᓀᓠᔀᔠᕀᕠᖀᖠᗀᗠᘀᘠᙀᚠᛀកᠠᡀᣀᦀ᧠ᨠᯀᰀᴀ⇠⋀⍀⍠⎀⎠⏀␀─┠╀╠▀■◀◠☀☠♀♠⚀⚠⛀⛠✀✠❀➀➠⠀⠠⡀⡠⢀⢠⣀⣠⤀⤠⥀⥠⦠⨠⩀⪀⪠⫠⬀⬠⭀ⰀⲀⲠⳀⴀⵀ⺠⻀㇀㐀㐠㑀㑠㒀㒠㓀㓠㔀㔠㕀㕠㖀㖠㗀㗠㘀㘠㙀㙠㚀㚠㛀㛠㜀㜠㝀㝠㞀㞠㟀㟠㠀㠠㡀㡠㢀㢠㣀㣠㤀㤠㥀㥠㦀㦠㧀㧠㨀㨠㩀㩠㪀㪠㫀㫠㬀㬠㭀㭠㮀㮠㯀㯠㰀㰠㱀㱠㲀㲠㳀㳠㴀㴠㵀㵠㶀㶠㷀㷠㸀㸠㹀㹠㺀㺠㻀㻠㼀㼠㽀㽠㾀㾠㿀㿠䀀䀠䁀䁠䂀䂠䃀䃠䄀䄠䅀䅠䆀䆠䇀䇠䈀䈠䉀䉠䊀䊠䋀䋠䌀䌠䍀䍠䎀䎠䏀䏠䐀䐠䑀䑠䒀䒠䓀䓠䔀䔠䕀䕠䖀䖠䗀䗠䘀䘠䙀䙠䚀䚠䛀䛠䜀䜠䝀䝠䞀䞠䟀䟠䠀䠠䡀䡠䢀䢠䣀䣠䤀䤠䥀䥠䦀䦠䧀䧠䨀䨠䩀䩠䪀䪠䫀䫠䬀䬠䭀䭠䮀䮠䯀䯠䰀䰠䱀䱠䲀䲠䳀䳠䴀䴠䵀䵠䶀䷀䷠一丠乀习亀亠什仠伀传佀你侀侠俀俠倀倠偀偠傀傠僀僠儀儠兀兠冀冠净几刀删剀剠劀加勀勠匀匠區占厀厠叀叠吀吠呀呠咀咠哀哠唀唠啀啠喀喠嗀嗠嘀嘠噀噠嚀嚠囀因圀圠址坠垀垠埀埠堀堠塀塠墀墠壀壠夀夠奀奠妀妠姀姠娀娠婀婠媀媠嫀嫠嬀嬠孀孠宀宠寀寠尀尠局屠岀岠峀峠崀崠嵀嵠嶀嶠巀巠帀帠幀幠庀庠廀廠开张彀彠往徠忀忠怀怠恀恠悀悠惀惠愀愠慀慠憀憠懀懠戀戠所扠技抠拀拠挀挠捀捠掀掠揀揠搀搠摀摠撀撠擀擠攀攠敀敠斀斠旀无昀映晀晠暀暠曀曠最朠杀杠枀枠柀柠栀栠桀桠梀梠检棠椀椠楀楠榀榠槀槠樀樠橀橠檀檠櫀櫠欀欠歀歠殀殠毀毠氀氠汀池沀沠泀泠洀洠浀浠涀涠淀淠渀渠湀湠満溠滀滠漀漠潀潠澀澠激濠瀀瀠灀灠炀炠烀烠焀焠煀煠熀熠燀燠爀爠牀牠犀犠狀狠猀猠獀獠玀玠珀珠琀琠瑀瑠璀璠瓀瓠甀甠畀畠疀疠痀痠瘀瘠癀癠皀皠盀盠眀眠着睠瞀瞠矀矠砀砠础硠碀碠磀磠礀礠祀祠禀禠秀秠稀稠穀穠窀窠竀章笀笠筀筠简箠節篠簀簠籀籠粀粠糀糠紀素絀絠綀綠緀締縀縠繀繠纀纠绀绠缀缠罀罠羀羠翀翠耀耠聀聠肀肠胀胠脀脠腀腠膀膠臀臠舀舠艀艠芀芠苀苠茀茠荀荠莀莠菀菠萀萠葀葠蒀蒠蓀蓠蔀蔠蕀蕠薀薠藀藠蘀蘠虀虠蚀蚠蛀蛠蜀蜠蝀蝠螀螠蟀蟠蠀蠠血衠袀袠裀裠褀褠襀襠覀覠觀觠言訠詀詠誀誠諀諠謀謠譀譠讀讠诀诠谀谠豀豠貀負賀賠贀贠赀赠趀趠跀跠踀踠蹀蹠躀躠軀軠輀輠轀轠辀辠迀迠退造遀遠邀邠郀郠鄀鄠酀酠醀醠釀釠鈀鈠鉀鉠銀銠鋀鋠錀錠鍀鍠鎀鎠鏀鏠鐀鐠鑀鑠钀钠铀铠销锠镀镠門閠闀闠阀阠陀陠隀隠雀雠需霠靀靠鞀鞠韀韠頀頠顀顠颀颠飀飠餀餠饀饠馀馠駀駠騀騠驀驠骀骠髀髠鬀鬠魀魠鮀鮠鯀鯠鰀鰠鱀鱠鲀鲠鳀鳠鴀鴠鵀鵠鶀鶠鷀鷠鸀鸠鹀鹠麀麠黀黠鼀鼠齀齠龀龠ꀀꀠꁀꁠꂀꂠꃀꃠꄀꄠꅀꅠꆀꆠꇀꇠꈀꈠꉀꉠꊀꊠꋀꋠꌀꌠꍀꍠꎀꎠꏀꏠꐀꐠꑀꑠ꒠ꔀꔠꕀꕠꖀꖠꗀꗠꙀꚠꛀ꜀꜠ꝀꞀꡀ"
func (r *results) checkBase32768() {
r.canBase32768 = false
ctx := context.Background()
n := 0
dir, err := os.MkdirTemp("", "rclone-base32768-files")
if err != nil {
log.Printf("Failed to make temp dir: %v", err)
return
}
defer func() {
_ = os.RemoveAll(dir)
}()
// Create test files
for _, c := range safeAlphabet {
var out strings.Builder
for i := rune(0); i < 32; i++ {
out.WriteRune(c + i)
}
fileName := filepath.Join(dir, fmt.Sprintf("%04d-%s.txt", n, out.String()))
err = os.WriteFile(fileName, []byte(fileName), 0666)
if err != nil {
log.Printf("write %q failed: %v", fileName, err)
return
}
n++
}
// Make a local fs
fLocal, err := fs.NewFs(ctx, dir)
if err != nil {
log.Printf("Failed to make local fs: %v", err)
return
}
testDir := "test-base32768"
// Make a remote fs
s := fs.ConfigStringFull(r.f)
s = fspath.JoinRootPath(s, testDir)
fRemote, err := fs.NewFs(ctx, s)
if err != nil {
log.Printf("Failed to make remote fs: %v", err)
return
}
defer func() {
err := operations.Purge(ctx, r.f, testDir)
if err != nil {
log.Printf("Failed to purge test directory: %v", err)
return
}
}()
// Sync local to remote
err = sync.Sync(ctx, fRemote, fLocal, false)
if err != nil {
log.Printf("Failed to sync remote fs: %v", err)
return
}
// Check local to remote
err = operations.Check(ctx, &operations.CheckOpt{
Fdst: fRemote,
Fsrc: fLocal,
})
if err != nil {
log.Printf("Failed to check remote fs: %v", err)
return
}
r.canBase32768 = true
}

View File

@@ -37,6 +37,7 @@ var (
checkControl bool
checkLength bool
checkStreaming bool
checkBase32768 bool
all bool
uploadWait time.Duration
positionLeftRe = regexp.MustCompile(`(?s)^(.*)-position-left-([[:xdigit:]]+)$`)
@@ -47,13 +48,14 @@ var (
func init() {
test.Command.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.StringVarP(cmdFlags, &writeJSON, "write-json", "", "", "Write results to file")
flags.BoolVarP(cmdFlags, &checkNormalization, "check-normalization", "", false, "Check UTF-8 Normalization")
flags.BoolVarP(cmdFlags, &checkControl, "check-control", "", false, "Check control characters")
flags.DurationVarP(cmdFlags, &uploadWait, "upload-wait", "", 0, "Wait after writing a file")
flags.BoolVarP(cmdFlags, &checkLength, "check-length", "", false, "Check max filename length")
flags.BoolVarP(cmdFlags, &checkStreaming, "check-streaming", "", false, "Check uploads with indeterminate file size")
flags.BoolVarP(cmdFlags, &all, "all", "", false, "Run all tests")
flags.StringVarP(cmdFlags, &writeJSON, "write-json", "", "", "Write results to file", "")
flags.BoolVarP(cmdFlags, &checkNormalization, "check-normalization", "", false, "Check UTF-8 Normalization", "")
flags.BoolVarP(cmdFlags, &checkControl, "check-control", "", false, "Check control characters", "")
flags.DurationVarP(cmdFlags, &uploadWait, "upload-wait", "", 0, "Wait after writing a file", "")
flags.BoolVarP(cmdFlags, &checkLength, "check-length", "", false, "Check max filename length", "")
flags.BoolVarP(cmdFlags, &checkStreaming, "check-streaming", "", false, "Check uploads with indeterminate file size", "")
flags.BoolVarP(cmdFlags, &checkBase32768, "check-base32768", "", false, "Check can store all possible base32768 characters", "")
flags.BoolVarP(cmdFlags, &all, "all", "", false, "Run all tests", "")
}
var commandDefinition = &cobra.Command{
@@ -71,14 +73,15 @@ a bit of go code for each one.
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1e6, command, args)
if !checkNormalization && !checkControl && !checkLength && !checkStreaming && !all {
log.Fatalf("no tests selected - select a test or use -all")
if !checkNormalization && !checkControl && !checkLength && !checkStreaming && !checkBase32768 && !all {
log.Fatalf("no tests selected - select a test or use --all")
}
if all {
checkNormalization = true
checkControl = true
checkLength = true
checkStreaming = true
checkBase32768 = true
}
for i := range args {
f := cmd.NewFsDir(args[i : i+1])
@@ -100,6 +103,7 @@ type results struct {
canReadUnnormalized bool
canReadRenormalized bool
canStream bool
canBase32768 bool
}
func newResults(ctx context.Context, f fs.Fs) *results {
@@ -141,6 +145,9 @@ func (r *results) Print() {
if checkStreaming {
fmt.Printf("canStream = %v\n", r.canStream)
}
if checkBase32768 {
fmt.Printf("base32768isOK = %v // make sure maxFileLength for 2 byte unicode chars is the same as for 1 byte characters\n", r.canBase32768)
}
}
// WriteJSON writes the results to a JSON file when requested
@@ -483,6 +490,9 @@ func readInfo(ctx context.Context, f fs.Fs) error {
if checkStreaming {
r.checkStreaming()
}
if checkBase32768 {
r.checkBase32768()
}
r.Print()
r.WriteJSON()
return nil
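For reference, a hedged usage sketch of the new check (the remote path is
hypothetical):
```
rclone test info --check-base32768 remote:dir
```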

View File

@@ -49,25 +49,25 @@ var (
func init() {
test.Command.AddCommand(makefilesCmd)
makefilesFlags := makefilesCmd.Flags()
flags.IntVarP(makefilesFlags, &numberOfFiles, "files", "", numberOfFiles, "Number of files to create")
flags.IntVarP(makefilesFlags, &averageFilesPerDirectory, "files-per-directory", "", averageFilesPerDirectory, "Average number of files per directory")
flags.IntVarP(makefilesFlags, &maxDepth, "max-depth", "", maxDepth, "Maximum depth of directory hierarchy")
flags.FVarP(makefilesFlags, &minFileSize, "min-file-size", "", "Minimum size of file to create")
flags.FVarP(makefilesFlags, &maxFileSize, "max-file-size", "", "Maximum size of files to create")
flags.IntVarP(makefilesFlags, &minFileNameLength, "min-name-length", "", minFileNameLength, "Minimum size of file names")
flags.IntVarP(makefilesFlags, &maxFileNameLength, "max-name-length", "", maxFileNameLength, "Maximum size of file names")
flags.IntVarP(makefilesFlags, &numberOfFiles, "files", "", numberOfFiles, "Number of files to create", "")
flags.IntVarP(makefilesFlags, &averageFilesPerDirectory, "files-per-directory", "", averageFilesPerDirectory, "Average number of files per directory", "")
flags.IntVarP(makefilesFlags, &maxDepth, "max-depth", "", maxDepth, "Maximum depth of directory hierarchy", "")
flags.FVarP(makefilesFlags, &minFileSize, "min-file-size", "", "Minimum size of file to create", "")
flags.FVarP(makefilesFlags, &maxFileSize, "max-file-size", "", "Maximum size of files to create", "")
flags.IntVarP(makefilesFlags, &minFileNameLength, "min-name-length", "", minFileNameLength, "Minimum size of file names", "")
flags.IntVarP(makefilesFlags, &maxFileNameLength, "max-name-length", "", maxFileNameLength, "Maximum size of file names", "")
test.Command.AddCommand(makefileCmd)
makefileFlags := makefileCmd.Flags()
// Common flags to makefiles and makefile
for _, f := range []*pflag.FlagSet{makefilesFlags, makefileFlags} {
flags.Int64VarP(f, &seed, "seed", "", seed, "Seed for the random number generator (0 for random)")
flags.BoolVarP(f, &zero, "zero", "", zero, "Fill files with ASCII 0x00")
flags.BoolVarP(f, &sparse, "sparse", "", sparse, "Make the files sparse (appear to be filled with ASCII 0x00)")
flags.BoolVarP(f, &ascii, "ascii", "", ascii, "Fill files with random ASCII printable bytes only")
flags.BoolVarP(f, &pattern, "pattern", "", pattern, "Fill files with a periodic pattern")
flags.BoolVarP(f, &chargen, "chargen", "", chargen, "Fill files with a ASCII chargen pattern")
flags.Int64VarP(f, &seed, "seed", "", seed, "Seed for the random number generator (0 for random)", "")
flags.BoolVarP(f, &zero, "zero", "", zero, "Fill files with ASCII 0x00", "")
flags.BoolVarP(f, &sparse, "sparse", "", sparse, "Make the files sparse (appear to be filled with ASCII 0x00)", "")
flags.BoolVarP(f, &ascii, "ascii", "", ascii, "Fill files with random ASCII printable bytes only", "")
flags.BoolVarP(f, &pattern, "pattern", "", pattern, "Fill files with a periodic pattern", "")
flags.BoolVarP(f, &chargen, "chargen", "", chargen, "Fill files with a ASCII chargen pattern", "")
}
}

View File

@@ -34,10 +34,10 @@ const (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &notCreateNewFile, "no-create", "C", false, "Do not create the file if it does not exist (implied with --recursive)")
flags.StringVarP(cmdFlags, &timeAsArgument, "timestamp", "t", "", "Use specified time instead of the current time of day")
flags.BoolVarP(cmdFlags, &localTime, "localtime", "", false, "Use localtime for timestamp, not UTC")
flags.BoolVarP(cmdFlags, &recursive, "recursive", "R", false, "Recursively touch all files")
flags.BoolVarP(cmdFlags, &notCreateNewFile, "no-create", "C", false, "Do not create the file if it does not exist (implied with --recursive)", "")
flags.StringVarP(cmdFlags, &timeAsArgument, "timestamp", "t", "", "Use specified time instead of the current time of day", "")
flags.BoolVarP(cmdFlags, &localTime, "localtime", "", false, "Use localtime for timestamp, not UTC", "")
flags.BoolVarP(cmdFlags, &recursive, "recursive", "R", false, "Recursively touch all files", "")
}
var commandDefinition = &cobra.Command{
@@ -66,6 +66,7 @@ then add the ` + "`--localtime`" + ` flag.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.39",
"groups": "Filter,Listing,Important",
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)

View File

@@ -35,35 +35,35 @@ func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
// List
flags.BoolVarP(cmdFlags, &opts.All, "all", "a", false, "All files are listed (list . files too)")
flags.BoolVarP(cmdFlags, &opts.DirsOnly, "dirs-only", "d", false, "List directories only")
flags.BoolVarP(cmdFlags, &opts.FullPath, "full-path", "", false, "Print the full path prefix for each file")
//flags.BoolVarP(cmdFlags, &opts.IgnoreCase, "ignore-case", "", false, "Ignore case when pattern matching")
flags.BoolVarP(cmdFlags, &noReport, "noreport", "", false, "Turn off file/directory count at end of tree listing")
// flags.BoolVarP(cmdFlags, &opts.FollowLink, "follow", "l", false, "Follow symbolic links like directories")
flags.IntVarP(cmdFlags, &opts.DeepLevel, "level", "", 0, "Descend only level directories deep")
flags.BoolVarP(cmdFlags, &opts.All, "all", "a", false, "All files are listed (list . files too)", "")
flags.BoolVarP(cmdFlags, &opts.DirsOnly, "dirs-only", "d", false, "List directories only", "")
flags.BoolVarP(cmdFlags, &opts.FullPath, "full-path", "", false, "Print the full path prefix for each file", "")
//flags.BoolVarP(cmdFlags, &opts.IgnoreCase, "ignore-case", "", false, "Ignore case when pattern matching", "")
flags.BoolVarP(cmdFlags, &noReport, "noreport", "", false, "Turn off file/directory count at end of tree listing", "")
// flags.BoolVarP(cmdFlags, &opts.FollowLink, "follow", "l", false, "Follow symbolic links like directories","")
flags.IntVarP(cmdFlags, &opts.DeepLevel, "level", "", 0, "Descend only level directories deep", "")
// flags.StringVarP(cmdFlags, &opts.Pattern, "pattern", "P", "", "List only those files that match the pattern given")
// flags.StringVarP(cmdFlags, &opts.IPattern, "exclude", "", "", "Do not list files that match the given pattern")
flags.StringVarP(cmdFlags, &outFileName, "output", "o", "", "Output to file instead of stdout")
flags.StringVarP(cmdFlags, &outFileName, "output", "o", "", "Output to file instead of stdout", "")
// Files
flags.BoolVarP(cmdFlags, &opts.ByteSize, "size", "s", false, "Print the size in bytes of each file.")
flags.BoolVarP(cmdFlags, &opts.FileMode, "protections", "p", false, "Print the protections for each file.")
flags.BoolVarP(cmdFlags, &opts.ByteSize, "size", "s", false, "Print the size in bytes of each file.", "")
flags.BoolVarP(cmdFlags, &opts.FileMode, "protections", "p", false, "Print the protections for each file.", "")
// flags.BoolVarP(cmdFlags, &opts.ShowUid, "uid", "", false, "Displays file owner or UID number.")
// flags.BoolVarP(cmdFlags, &opts.ShowGid, "gid", "", false, "Displays file group owner or GID number.")
flags.BoolVarP(cmdFlags, &opts.Quotes, "quote", "Q", false, "Quote filenames with double quotes.")
flags.BoolVarP(cmdFlags, &opts.LastMod, "modtime", "D", false, "Print the date of last modification.")
flags.BoolVarP(cmdFlags, &opts.Quotes, "quote", "Q", false, "Quote filenames with double quotes.", "")
flags.BoolVarP(cmdFlags, &opts.LastMod, "modtime", "D", false, "Print the date of last modification.", "")
// flags.BoolVarP(cmdFlags, &opts.Inodes, "inodes", "", false, "Print inode number of each file.")
// flags.BoolVarP(cmdFlags, &opts.Device, "device", "", false, "Print device ID number to which each file belongs.")
// Sort
flags.BoolVarP(cmdFlags, &opts.NoSort, "unsorted", "U", false, "Leave files unsorted")
flags.BoolVarP(cmdFlags, &opts.VerSort, "version", "", false, "Sort files alphanumerically by version")
flags.BoolVarP(cmdFlags, &opts.ModSort, "sort-modtime", "t", false, "Sort files by last modification time")
flags.BoolVarP(cmdFlags, &opts.CTimeSort, "sort-ctime", "", false, "Sort files by last status change time")
flags.BoolVarP(cmdFlags, &opts.ReverSort, "sort-reverse", "r", false, "Reverse the order of the sort")
flags.BoolVarP(cmdFlags, &opts.DirSort, "dirsfirst", "", false, "List directories before files (-U disables)")
flags.StringVarP(cmdFlags, &sort, "sort", "", "", "Select sort: name,version,size,mtime,ctime")
flags.BoolVarP(cmdFlags, &opts.NoSort, "unsorted", "U", false, "Leave files unsorted", "")
flags.BoolVarP(cmdFlags, &opts.VerSort, "version", "", false, "Sort files alphanumerically by version", "")
flags.BoolVarP(cmdFlags, &opts.ModSort, "sort-modtime", "t", false, "Sort files by last modification time", "")
flags.BoolVarP(cmdFlags, &opts.CTimeSort, "sort-ctime", "", false, "Sort files by last status change time", "")
flags.BoolVarP(cmdFlags, &opts.ReverSort, "sort-reverse", "r", false, "Reverse the order of the sort", "")
flags.BoolVarP(cmdFlags, &opts.DirSort, "dirsfirst", "", false, "List directories before files (-U disables)", "")
flags.StringVarP(cmdFlags, &sort, "sort", "", "", "Select sort: name,version,size,mtime,ctime", "")
// Graphics
flags.BoolVarP(cmdFlags, &opts.NoIndent, "noindent", "", false, "Don't print indentation lines")
flags.BoolVarP(cmdFlags, &opts.NoIndent, "noindent", "", false, "Don't print indentation lines", "")
}
var commandDefinition = &cobra.Command{
@@ -99,6 +99,7 @@ For a more interactive navigation of the remote see the
`,
Annotations: map[string]string{
"versionIntroduced": "v1.38",
"groups": "Filter,Listing",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)

View File

@@ -25,7 +25,7 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &check, "check", "", false, "Check for new version")
flags.BoolVarP(cmdFlags, &check, "check", "", false, "Check for new version", "")
}
var commandDefinition = &cobra.Command{

View File

@@ -746,3 +746,14 @@ put them back in again.` >}}
* Vladislav Vorobev <x.miere@gmail.com>
* darix <darix@users.noreply.github.com>
* Benjamin <36415086+bbenjamin-sys@users.noreply.github.com>
* Chun-Hung Tseng <henrybear327@users.noreply.github.com>
* Ricardo D'O. Albanus <rdalbanus@users.noreply.github.com>
* gabriel-suela <gscsuela@gmail.com>
* Tiago Boeing <contato@tiagoboeing.com>
* Edwin Mackenzie-Owen <edwin.mowen@gmail.com>
* Niklas Hambüchen <mail@nh2.me>
* yuudi <yuudi@users.noreply.github.com>
* Zach <github@prozach.org>
* nielash <31582349+nielash@users.noreply.github.com>
* Julian Lepinski <lepinsk@users.noreply.github.com>
* Raymond Berger <RayBB@users.noreply.github.com>

View File

@@ -231,6 +231,19 @@ $ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt
```
#### Versions naming caveat
When using the `--b2-versions` flag, rclone relies on the file name
to work out whether the objects are versions or not. Version names
are created by inserting a timestamp between the file name and its extension.
```
9 file.txt
8 file-v2023-07-17-161032-000.txt
16 file-v2023-06-15-141003-000.txt
```
If there are real files present with the same names as versions, then
the behaviour of `--b2-versions` can be unpredictable.
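As an illustration of that naming rule, here is a minimal Go sketch (not
rclone's actual implementation; the UTC/millisecond handling is an assumption
for demonstration) that derives a version name from a file name and a time:
```
package main

import (
	"fmt"
	"path"
	"strings"
	"time"
)

// versionName inserts a "-vYYYY-MM-DD-HHMMSS-mmm" timestamp between the
// file name and its extension, matching the listing shown above.
func versionName(name string, t time.Time) string {
	ext := path.Ext(name)
	base := strings.TrimSuffix(name, ext)
	millis := t.Nanosecond() / 1000000 // three-digit millisecond suffix
	return fmt.Sprintf("%s-v%s-%03d%s", base, t.UTC().Format("2006-01-02-150405"), millis, ext)
}

func main() {
	fmt.Println(versionName("file.txt", time.Date(2023, 7, 17, 16, 10, 32, 0, time.UTC)))
	// prints: file-v2023-07-17-161032-000.txt
}
```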
### Data usage
It is useful to know how many requests are sent to the server in different scenarios.

View File

@@ -473,3 +473,27 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
## Get your own Box App ID
Here is how to create your own Box App ID for rclone:
1. Go to the [Box Developer Console](https://app.box.com/developers/console)
and log in, then click `My Apps` on the sidebar. Click `Create New App`
and select `Custom App`.
2. On the first screen of the dialog that pops up, most fields can be
filled in however you like and the `App Name` can be anything. For
`Purpose` choose automation to avoid having to fill out anything else.
Click `Next`.
3. On the second screen, select `User Authentication (OAuth 2.0)`.
Then click `Create App`.
4. You should now be on the `Configuration` tab of your new app. If not,
click on it at the top of the webpage. Copy down the `Client ID`
and `Client Secret`; you'll need those for rclone (see the config
sketch after this list).
5. Under "OAuth 2.0 Redirect URI", add `http://127.0.0.1:53682/`.
6. For `Application Scopes`, select `Read all files and folders stored in Box`
and `Write all files and folders stored in Box` (assuming you want to do both).
Leave the others unchecked. Click `Save Changes` at the top right.
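With those values in hand, the resulting remote section in `rclone.conf`
might look something like this (the remote name and credential values are
placeholders; the OAuth `token` entry is filled in automatically when you
run `rclone config` and complete the browser login):
```
[mybox]
type = box
client_id = YOUR_BOX_CLIENT_ID
client_secret = YOUR_BOX_CLIENT_SECRET
```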

View File

@@ -97,7 +97,7 @@ will put files in a directory called `name` in the current directory.
When rclone starts a file upload, chunker checks the file size. If it
doesn't exceed the configured chunk size, chunker will just pass the file
to the wrapped remote. If a file is large, chunker will transparently cut
to the wrapped remote (however, see caveat below). If a file is large, chunker will transparently cut
data in pieces with temporary names and stream them one by one, on the fly.
Each data chunk will contain the specified number of bytes, except for the
last one which may have less data. If file size is unknown in advance
@@ -129,6 +129,14 @@ proceed with current command.
You can set the `--chunker-fail-hard` flag to have commands abort with
error message in such cases.
**Caveat**: As it is now, chunker will always create a temporary file in the
backend and then rename it, even if the file is below the chunk threshold.
This results in unnecessary API calls and can severely restrict throughput
when a transfer consists mostly of small files on some backends (e.g. Box).
A workaround is to use chunker only for files above the chunk threshold
via `--min-size` and then perform a separate call without chunker on the
remaining files, as sketched below.
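A rough sketch of that two-pass workaround, assuming a chunker remote named
`chunked:` wrapping a remote named `box:` and a 2 GiB chunk size (note that
`--min-size` and `--max-size` are both inclusive, so a file of exactly the
boundary size may be picked up by both passes):
```
# Pass 1: large files go through the chunker remote
rclone copy --min-size 2G /data chunked:backup

# Pass 2: remaining small files go straight to the wrapped remote
rclone copy --max-size 2G /data box:backup
```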
#### Chunk names

View File

@@ -5,11 +5,11 @@ slug: rclone
url: /commands/rclone/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/ and as part of making a release run "make commanddocs"
---
# rclone
## rclone
Show help for rclone commands, flags and backends.
## Synopsis
### Synopsis
Rclone syncs files to and from cloud storage providers as well as
@@ -24,15 +24,788 @@ documentation, changelog and configuration walkthroughs.
rclone [flags]
```
## Options
### Options
```
-h, --help help for rclone
--acd-auth-url string Auth server URL
--acd-client-id string OAuth Client Id
--acd-client-secret string OAuth Client Secret
--acd-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi)
--acd-token string OAuth Access Token as a JSON blob
--acd-token-url string Token server url
--acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s)
--alias-remote string Remote or path to alias
--ask-password Allow prompt for password for encrypted configuration (default true)
--auto-confirm If enabled, do not request console confirmation
--azureblob-access-tier string Access tier of blob: hot, cool or archive
--azureblob-account string Azure Storage Account Name
--azureblob-archive-tier-delete Delete archive tier blobs before overwriting
--azureblob-chunk-size SizeSuffix Upload chunk size (default 4Mi)
--azureblob-client-certificate-password string Password for the certificate file (optional) (obscured)
--azureblob-client-certificate-path string Path to a PEM or PKCS12 certificate file including the private key
--azureblob-client-id string The ID of the client in use
--azureblob-client-secret string One of the service principal's client secrets
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
--azureblob-disable-checksum Don't store MD5 checksum with object metadata
--azureblob-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
--azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI)
--azureblob-key string Storage Account Shared Key
--azureblob-list-chunk int Size of blob list (default 5000)
--azureblob-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--azureblob-msi-client-id string Object ID of the user-assigned MSI to use, if any
--azureblob-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
--azureblob-msi-object-id string Object ID of the user-assigned MSI to use, if any
--azureblob-no-check-container If set, don't attempt to check the container exists or create it
--azureblob-no-head-object If set, do not do HEAD before GET when getting objects
--azureblob-password string The user's password (obscured)
--azureblob-public-access string Public access level of a container: blob or container
--azureblob-sas-url string SAS URL for container level access only
--azureblob-service-principal-file string Path to file containing credentials for use with a service principal
--azureblob-tenant string ID of the service principal's tenant. Also called its directory ID
--azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size (default 96Mi)
--b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
--b2-download-url string Custom endpoint for downloads
--b2-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--b2-endpoint string Endpoint for the service
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files
--b2-key string Application Key
--b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--b2-version-at Time Show file versions as they were at the specified time (default off)
--b2-versions Include old versions in directory listings
--backup-dir string Make backups into hierarchy based in DIR
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name
--box-access-token string Box App Primary Access Token
--box-auth-url string Auth server URL
--box-box-config-file string Box App config.json location
--box-box-sub-type string (default "user")
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
--box-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--box-list-chunk int Size of listing chunk 1-1000 (default 1000)
--box-owned-by string Only show items owned by the login (email address) passed in
--box-root-folder-id string Fill in for rclone to use a non root folder as its starting point
--box-token string OAuth Access Token as a JSON blob
--box-token-url string Token server url
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50 MiB) (default 50Mi)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
--bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
--bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
--ca-cert stringArray CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
--cache-chunk-path string Directory to cache chunk files (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data) (default 5Mi)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk (default 10Gi)
--cache-db-path string Directory to store file structure metadata DB (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times, etc.) (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verification when connecting to the Plex server
--cache-plex-password string The password of the Plex user (obscured)
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage (default 10)
--cache-remote string Remote to cache
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Cache file data on writes through the FS
--check-first Do all the checks before starting transfers
--checkers int Number of checkers to run in parallel (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks (default 2Gi)
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default "md5")
--chunker-remote string Remote to chunk/unchunk
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--color string When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default "AUTO")
--combine-upstreams SpaceSepList Upstreams for combining
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--compress-level int GZIP compression level (-2 to 9) (default -1)
--compress-mode string Compression mode (default "gzip")
--compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi)
--compress-remote string Remote to compress
--config string Config file (default "$HOME/.config/rclone/rclone.conf")
--contimeout Duration Connect timeout (default 1m0s)
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
-L, --copy-links Follow symlinks and copy the pointed to item
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact (default true)
--crypt-filename-encoding string How to encode the encrypted filename to text string (default "base32")
--crypt-filename-encryption string How to encrypt the filenames (default "standard")
--crypt-no-data-encryption Option to either encrypt file data or leave it unencrypted
--crypt-pass-bad-blocks If set this will pass bad blocks through as all 0
--crypt-password string Password or pass phrase for encryption (obscured)
--crypt-password2 string Password or pass phrase for salt (obscured)
--crypt-remote string Remote to encrypt/decrypt
--crypt-server-side-across-configs Deprecated: use --server-side-across-configs instead
--crypt-show-mapping For all files listed show how the names encrypt
--crypt-suffix string If this is set it will override the default suffix of ".bin" (default ".bin")
--cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features (use --disable help to see a list)
--disable-http-keep-alives Disable HTTP keep-alives and use each connection once
--disable-http2 Disable HTTP/2 in the global transport
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs
--drive-auth-owner-only Only consider files owned by the authenticated user
--drive-auth-url string Auth server URL
--drive-chunk-size SizeSuffix Upload chunk size (default 8Mi)
--drive-client-id string Google Application Client Id
--drive-client-secret string OAuth Client Secret
--drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
--drive-disable-http2 Disable drive using http2 (default true)
--drive-encoding MultiEncoder The encoding for the backend (default InvalidUtf8)
--drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars)
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: See export_formats
--drive-impersonate string Impersonate this user when using a service account
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs
--drive-keep-revision-forever Keep new head revision of each file forever
--drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms)
--drive-resource-key string Resource key for accessing a link-shared file
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive
--drive-server-side-across-configs Deprecated: use --server-side-across-configs instead
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
--drive-size-as-quota Show sizes as storage quota usage, not actual size
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only
--drive-skip-dangling-shortcuts If set skip dangling shortcut files
--drive-skip-gdocs Skip google documents in all listings
--drive-skip-shortcuts If set skip shortcut files
--drive-starred-only Only show files that are starred
--drive-stop-on-download-limit Make download limit errors be fatal
--drive-stop-on-upload-limit Make upload limit errors be fatal
--drive-team-drive string ID of the Shared Drive (Team Drive)
--drive-token string OAuth Access Token as a JSON blob
--drive-token-url string Token server url
--drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8Mi)
--drive-use-created-date Use file created date instead of modified date
--drive-use-shared-date Use date file was shared instead of modified date
--drive-use-trash Send files to the trash instead of deleting permanently (default true)
--drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download (default off)
--dropbox-auth-url string Auth server URL
--dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--dropbox-batch-mode string Upload file batching sync|async|off (default "sync")
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
--dropbox-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--dropbox-shared-files Instructs rclone to work on individual shared files
--dropbox-shared-folders Instructs rclone to work on shared folders
--dropbox-token string OAuth Access Token as a JSON blob
--dropbox-token-url string Token server url
-n, --dry-run Do a trial run with no permanent changes
--dscp string Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
--exclude-if-present stringArray Exclude directories if filename is present
--expect-continue-timeout Duration Timeout when using expect / 100-continue in HTTP (default 1s)
--fast-list Use recursive list if available; uses more memory but fewer transactions
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
--fichier-cdn Set if you wish to use CDN download links
--fichier-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
--fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
--fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
--filefabric-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token
--filefabric-root-folder-id string ID of the root folder
--filefabric-token string Session Token
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
--files-from stringArray Read list of source-file names from file (use - to read from stdin)
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--fs-cache-expire-duration Duration Cache remotes for this long (0 to disable caching) (default 5m0s)
--fs-cache-expire-interval Duration Interval to check for expired remotes (default 1m0s)
--ftp-ask-password Allow asking for FTP password when needed
--ftp-close-timeout Duration Maximum time to wait for a response to close (default 1m0s)
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-disable-epsv Disable using EPSV even if server advertises support
--ftp-disable-mlsd Disable using MLSD even if server advertises support
--ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
--ftp-disable-utf8 Disable using UTF-8 even if server advertises support
--ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
--ftp-pass string FTP password (obscured)
--ftp-port int FTP port number (default 21)
--ftp-shut-timeout Duration Maximum time to wait for data connection closing status (default 1m0s)
--ftp-socks-proxy string Socks 5 proxy host
--ftp-tls Use Implicit FTPS (FTP over TLS)
--ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32)
--ftp-user string FTP username (default "$USER")
--ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk)
--gcs-anonymous Access public buckets and objects without credentials
--gcs-auth-url string Auth server URL
--gcs-bucket-acl string Access Control List for new buckets
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies
--gcs-client-id string OAuth Client Id
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
--gcs-object-acl string Access Control List for new objects
--gcs-project-number string Project number
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage
--gcs-token string OAuth Access Token as a JSON blob
--gcs-token-url string Token server url
--gcs-user-project string User project
--gphotos-auth-url string Auth server URL
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
--gphotos-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
--gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
--gphotos-token string OAuth Access Token as a JSON blob
--gphotos-token-url string Token server url
--hasher-auto-size SizeSuffix Auto-update checksum for files smaller than this size (disabled by default)
--hasher-hashes CommaSepList Comma separated list of supported checksum types (default md5,sha1)
--hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off)
--hasher-remote string Remote to cache checksums for (e.g. myRemote:path)
--hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy
--hdfs-encoding MultiEncoder The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
--hdfs-namenode string Hadoop name node and port
--hdfs-service-principal-name string Kerberos service principal name for the namenode
--hdfs-username string Hadoop user name
--header stringArray Set HTTP header for all transactions
--header-download stringArray Set HTTP header for download transactions
--header-upload stringArray Set HTTP header for upload transactions
-h, --help help for rclone
--hidrive-auth-url string Auth server URL
--hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi)
--hidrive-client-id string OAuth Client Id
--hidrive-client-secret string OAuth Client Secret
--hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary
--hidrive-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
--hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1")
--hidrive-root-prefix string The root/parent folder for all paths (default "/")
--hidrive-scope-access string Access permissions that rclone should use when requesting access from HiDrive (default "rw")
--hidrive-scope-role string User-level that rclone should use when requesting access from HiDrive (default "user")
--hidrive-token string OAuth Access Token as a JSON blob
--hidrive-token-url string Token server url
--hidrive-upload-concurrency int Concurrency for chunked uploads (default 4)
--hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi)
--http-headers CommaSepList Set HTTP headers for all transactions
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
--human-readable Print numbers in a human-readable format, sizes with suffix Ki|Mi|Gi|Ti|Pi
--ignore-case Ignore case in filters (case insensitive)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
--inplace Download directly to destination file instead of atomic download to temp/rename
-i, --interactive Enable interactive mode
--internetarchive-access-key-id string IAS3 Access Key
--internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true)
--internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
--internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org")
--internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org")
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting for the server's processing tasks (specifically archive and book_op) to finish (default 0s)
--jottacloud-auth-url string Auth server URL
--jottacloud-client-id string OAuth Client Id
--jottacloud-client-secret string OAuth Client Secret
--jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
--jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them
--jottacloud-token string OAuth Access Token as a JSON blob
--jottacloud-token-url string Token server url
--jottacloud-trashed-only Only show files that are in the trash
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails (default 10Mi)
--koofr-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use
--koofr-mountid string Mount ID of the mount to use
--koofr-password string Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
--koofr-provider string Choose your storage provider
--koofr-setmtime Does the backend support setting modification time (default true)
--koofr-user string Your user name
--kv-lock-time Duration Maximum time to keep key-value database locked by process (default 1s)
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
--local-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
--local-no-check-updated Don't check to see if the files change during upload
--local-no-preallocate Disable preallocation of disk space for transferred files
--local-no-set-modtime Disable setting modtime
--local-no-sparse Disable sparse files for multi-thread downloads
--local-nounc Disable UNC (long path names) conversion on Windows
--local-unicode-normalization Apply unicode NFC normalization to paths and filenames
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--log-systemd Activate systemd integration for the logger
--low-level-retries int Number of low level retries to do (default 10)
--mailru-auth-url string Auth server URL
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
--mailru-client-id string OAuth Client Id
--mailru-client-secret string OAuth Client Secret
--mailru-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
--mailru-speedup-enable Skip full upload if there is another file with same data hash (default true)
--mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
--mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3Gi)
--mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk (default 32Mi)
--mailru-token string OAuth Access Token as a JSON blob
--mailru-token-url string Token server url
--mailru-user string User name (usually email)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
--max-depth int If set limits the recursion depth to this (default -1)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
--max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
--mega-debug Output more debug from Mega
--mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured)
--mega-use-https Use HTTPS for transfers
--mega-user string User name
--memprofile string Write memory profile to file
-M, --metadata If set, preserve metadata when copying objects
--metadata-exclude stringArray Exclude metadata matching pattern
--metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
--metadata-filter stringArray Add a metadata filtering rule
--metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
--metadata-include stringArray Include metadata matching pattern
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--metadata-set stringArray Add metadata key=value when uploading
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
--modify-window Duration Max time diff to be considered the same (default 1ns)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi)
--multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--netstorage-account string Set the NetStorage account name
--netstorage-host string Domain+path of NetStorage host to connect to
--netstorage-protocol string Select between HTTP or HTTPS protocol (default "https")
--netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured)
--no-check-certificate Do not verify the server SSL certificate (insecure)
--no-check-dest Don't check the destination, copy regardless
--no-console Hide console window (supported on Windows only)
--no-gzip-encoding Don't set Accept-Encoding: gzip
--no-traverse Don't traverse destination file system on copy
--no-unicode-normalization Don't normalize unicode characters in filenames
--no-update-modtime Don't update destination mod-time if files identical
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only)
--onedrive-access-scopes SpaceSepList Set scopes to be requested by rclone (default Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access)
--onedrive-auth-url string Auth server URL
--onedrive-av-override Allows download of files the server thinks have a virus
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
--onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
--onedrive-hash-type string Specify the hash in use for the backend (default "auto")
--onedrive-link-password string Set the password for links created by the link command
--onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous")
--onedrive-link-type string Set the type of the links created by the link command (default "view")
--onedrive-list-chunk int Size of listing chunk (default 1000)
--onedrive-no-versions Remove all versions on modifying operations
--onedrive-region string Choose national cloud region for OneDrive (default "global")
--onedrive-root-folder-id string ID of the root folder
--onedrive-server-side-across-configs Deprecated: use --server-side-across-configs instead
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
--oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--oos-compartment string Object storage compartment OCID
--oos-config-file string Path to OCI config file (default "~/.oci/config")
--oos-config-profile string Profile name inside the oci config file (default "Default")
--oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--oos-copy-timeout Duration Timeout for copy (default 1m0s)
--oos-disable-checksum Don't store MD5 checksum with object metadata
--oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
--oos-endpoint string Endpoint for Object storage API
--oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts in Object Storage for manual recovery
--oos-namespace string Object storage namespace
--oos-no-check-bucket If set, don't attempt to check the bucket exists or create it
--oos-provider string Choose your Auth Provider (default "env_auth")
--oos-region string Object storage Region
--oos-sse-customer-algorithm string If using SSE-C, the optional header that specifies "AES256" as the encryption algorithm
--oos-sse-customer-key string To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to
--oos-sse-customer-key-file string To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated
--oos-sse-customer-key-sha256 string If using SSE-C, the optional header that specifies the base64-encoded SHA256 hash of the encryption
--oos-sse-kms-key-id string If using your own master key in vault, this header specifies the
--oos-storage-tier string The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm (default "Standard")
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--opendrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password (obscured)
--opendrive-username string Username
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
--password-command SpaceSepList Command for supplying password for encrypted configuration
--pcloud-auth-url string Auth server URL
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
--pcloud-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--pcloud-hostname string Hostname to connect to (default "api.pcloud.com")
--pcloud-password string Your pcloud password (obscured)
--pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0")
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
--pikpak-auth-url string Auth server URL
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
--pikpak-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
--pikpak-token string OAuth Access Token as a JSON blob
--pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
--premiumizeme-auth-url string Auth server URL
--premiumizeme-client-id string OAuth Client Id
--premiumizeme-client-secret string OAuth Client Secret
--premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--premiumizeme-token string OAuth Access Token as a JSON blob
--premiumizeme-token-url string Token server url
-P, --progress Show progress during transfer
--progress-terminal-title Show progress on the terminal title (requires -P/--progress)
--protondrive-2fa string The 2FA code
--protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone")
--protondrive-enable-caching Cache file and folder metadata to reduce API calls (default true)
--protondrive-encoding MultiEncoder The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
--protondrive-original-file-size Return the file size before encryption (default true)
--protondrive-password string The password of your proton drive account (obscured)
--protondrive-replace-existing-draft Create a new revision when a filename conflict is detected
--protondrive-username string The username of your proton drive account
--putio-auth-url string Auth server URL
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
--putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-token string OAuth Access Token as a JSON blob
--putio-token-url string Token server url
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi)
--qingstor-connection-retries int Number of connection retries (default 3)
--qingstor-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API
--qingstor-env-auth Get QingStor credentials from runtime
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-upload-concurrency int Concurrency for multipart uploads (default 1)
--qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--qingstor-zone string Zone to connect to
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default [localhost:5572])
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-enable-metrics Enable prometheus metrics on /metrics
--rc-files string Path to local files to serve on the HTTP server
--rc-htpasswd string A htpasswd file - if not provided no authentication is done
--rc-job-expire-duration Duration Expire finished async jobs older than this value (default 1m0s)
--rc-job-expire-interval Duration Interval to check for expired async jobs (default 10s)
--rc-key string TLS PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--rc-no-auth Don't require auth for certain methods
--rc-pass string Password for authentication
--rc-realm string Realm for authentication
--rc-salt string Password hashing salt (default "dlPL2MqE")
--rc-serve Enable the serving of remote objects
--rc-server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
--rc-user string User name for authentication
--rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui
--rc-web-gui-no-open-browser Don't open the browser automatically
--rc-web-gui-update Check and update to latest version of web gui
--refresh-times Refresh the modtime of remote files
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep Duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s)
--s3-access-key-id string AWS Access Key ID
--s3-acl string Canned ACL used when creating buckets and storing or copying objects
--s3-bucket-acl string Canned ACL used when creating buckets
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-decompress If set this will decompress gzip encoded objects
--s3-directory-markers Upload an empty object with a trailing slash when a new directory is created
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads
--s3-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access, if false use virtual hosted style (default true)
--s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
--s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request) (default 1000)
--s3-list-url-encode Tristate Whether to url encode listings: true/false/unset (default unset)
--s3-list-version int Version of ListObjects to use: 1,2 or 0 for auto
--s3-location-constraint string Location constraint - must be set to match the Region
--s3-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
--s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
--s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
--s3-might-gzip Tristate Set this if the backend might gzip objects (default unset)
--s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
--s3-no-head If set, don't HEAD uploaded objects to check integrity
--s3-no-head-object If set, do not do HEAD before GET when getting objects
--s3-no-system-metadata Suppress setting and reading of system metadata
--s3-profile string Profile to use in the shared credentials file
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to
--s3-requester-pays Enables requester pays option when interacting with S3 bucket
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
--s3-sse-customer-key string To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data
--s3-sse-customer-key-base64 string If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data
--s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
--s3-storage-class string The storage class to use when storing new objects in S3
--s3-sts-endpoint string Endpoint for STS
--s3-upload-concurrency int Concurrency for multipart uploads (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
--s3-use-accept-encoding-gzip Tristate Whether to send Accept-Encoding: gzip header (default unset)
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off)
--s3-versions Include old versions in directory listings
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
--seafile-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
--seafile-library string Name of the library
--seafile-library-key string Library password (for encrypted libraries only) (obscured)
--seafile-pass string Password (obscured)
--seafile-url string URL of seafile host to connect to
--seafile-user string User name (usually email address)
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
--sftp-ask-password Allow asking for SFTP password when needed
--sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki)
--sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference
--sftp-concurrency int The maximum number of outstanding requests for one file (default 64)
--sftp-disable-concurrent-reads If set don't use concurrent reads
--sftp-disable-concurrent-writes If set don't use concurrent writes
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
--sftp-host string SSH host to connect to
--sftp-host-key-algorithms SpaceSepList Space separated list of host key algorithms, ordered by preference
--sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--sftp-key-exchange SpaceSepList Space separated list of key exchange algorithms, ordered by preference
--sftp-key-file string Path to PEM-encoded private key file
--sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file (obscured)
--sftp-key-pem string Raw PEM-encoded private key
--sftp-key-use-agent When set forces the usage of the ssh-agent
--sftp-known-hosts-file string Optional path to known_hosts file
--sftp-macs SpaceSepList Space separated list of MACs (message authentication code) algorithms, ordered by preference
--sftp-md5sum-command string The command used to read md5 hashes
--sftp-pass string SSH password, leave blank to use ssh-agent (obscured)
--sftp-path-override string Override path used by SSH shell commands
--sftp-port int SSH port number (default 22)
--sftp-pubkey-file string Optional path to public key file
--sftp-server-command string Specifies the path or command to run an sftp server on the remote host
--sftp-set-env SpaceSepList Environment variables to pass to sftp and commands
--sftp-set-modtime Set the modified time on the remote if set (default true)
--sftp-sha1sum-command string The command used to read sha1 hashes
--sftp-shell-type string The type of SSH shell on remote server, if any
--sftp-skip-links Set to skip any symlinks and any other non regular files
--sftp-socks-proxy string Socks 5 proxy host
--sftp-ssh SpaceSepList Path and arguments to external ssh binary
--sftp-subsystem string Specifies the SSH2 subsystem on the remote host (default "sftp")
--sftp-use-fstat If set use fstat instead of stat
--sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods
--sftp-user string SSH username (default "$USER")
--sharefile-auth-url string Auth server URL
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-client-id string OAuth Client Id
--sharefile-client-secret string OAuth Client Secret
--sharefile-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls
--sharefile-root-folder-id string ID of the root folder
--sharefile-token string OAuth Access Token as a JSON blob
--sharefile-token-url string Token server url
--sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi)
--sia-api-password string Sia Daemon API Password (obscured)
--sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980")
--sia-encoding MultiEncoder The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default "Sia-Agent")
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks
--smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
--smb-domain string Domain name for NTLM authentication (default "WORKGROUP")
--smb-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
--smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true)
--smb-host string SMB server hostname to connect to
--smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--smb-pass string SMB password (obscured)
--smb-port int SMB port number (default 445)
--smb-spn string Service principal name
--smb-user string SMB username (default "$USER")
--stats Duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line
--stats-one-line-date Enable --stats-one-line and add current date/time prefix
--stats-one-line-date-format string Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes ("), see https://golang.org/pkg/time/#Time.Format
--stats-unit string Show data rate in stats as either 'bits' or 'bytes' per second (default "bytes")
--storj-access-grant string Access grant
--storj-api-key string API key
--storj-passphrase string Encryption passphrase
--storj-provider string Choose an authentication method (default "existing")
--storj-satellite-address string Satellite address (default "us1.storj.io")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
--suffix string Suffix to add to changed files
--suffix-keep-extension Preserve the extension when using --suffix
--sugarsync-access-key-id string Sugarsync Access Key ID
--sugarsync-app-id string Sugarsync App ID
--sugarsync-authorization string Sugarsync authorization
--sugarsync-authorization-expiry string Sugarsync authorization expiry
--sugarsync-deleted-id string Sugarsync deleted folder id
--sugarsync-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
--sugarsync-hard-delete Permanently delete files if true
--sugarsync-private-access-key string Sugarsync Private Access Key
--sugarsync-refresh-token string Sugarsync refresh token
--sugarsync-root-id string Sugarsync root id
--sugarsync-user string Sugarsync user
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-auth string Authentication URL for server (OS_AUTH_URL)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form
--swift-key string API key or password (OS_PASSWORD)
--swift-leave-parts-on-error If true avoid calling abort upload on a failure
--swift-no-chunk Don't chunk files during streaming upload
--swift-no-large-objects Disable support for static and dynamic large objects
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, e.g. KERN,USER,... (default "DAEMON")
--temp-dir string Directory rclone will use for temporary files (default "/tmp")
--timeout Duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--track-renames When synchronizing, track file renames and do a server-side move if possible
--track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash")
--transfers int Number of file transfers to run in parallel (default 4)
--union-action-policy string Policy to choose upstream on ACTION category (default "epall")
--union-cache-time int Cache time of usage and free space (in seconds) (default 120)
--union-create-policy string Policy to choose upstream on CREATE category (default "epmfs")
--union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi)
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams
-u, --update Skip files that are newer on the destination
--uptobox-access-token string Your access token
--uptobox-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private
--use-cookies Enable session cookiejar
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0-beta.7196.08e40f21b.fix-flag-groups")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-encoding string The encoding for the backend
--webdav-headers CommaSepList Set HTTP headers for all transactions
--webdav-nextcloud-chunk-size SizeSuffix Nextcloud upload chunk size (default 10Mi)
--webdav-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--webdav-pass string Password (obscured)
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the WebDAV site/service/software you are using
--yandex-auth-url string Auth server URL
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
--yandex-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--yandex-hard-delete Delete files permanently rather than putting them into the trash
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-encoding MultiEncoder The encoding for the backend (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
--zoho-token-url string Token server url
```
See the [global flags page](/flags/) for global options not listed here.
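Any of the backend flags above can also be supplied as environment variables (prefix `RCLONE_`, uppercase, dashes become underscores) or stored in the config file, where the backend prefix is dropped. A minimal sketch, assuming a hypothetical `gdrive:` remote of type drive:
```
# On the command line, for a single invocation
rclone copy /local/photos gdrive:photos --drive-chunk-size 64M

# The same flag as an environment variable
export RCLONE_DRIVE_CHUNK_SIZE=64M
rclone copy /local/photos gdrive:photos

# Or persistently in rclone.conf, with the backend prefix dropped:
# [gdrive]
# type = drive
# chunk_size = 64M
```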
## SEE ALSO
* [rclone about](/commands/rclone_about/) - Get quota information from the remote.
* [rclone authorize](/commands/rclone_authorize/) - Remote authorization.

View File

@@ -72,9 +72,10 @@ rclone about remote: [flags]
--json Format output as JSON
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
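As a quick illustration of the `--json` flag shown in the hunk above, the output can be piped to other tools; a sketch assuming a configured `remote:` and `jq` installed:
```
# Print quota information as JSON and pull out the free space in bytes
rclone about remote: --json | jq .free
```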

View File

@@ -34,9 +34,10 @@ rclone authorize [flags]
--template string The path to a custom Go template for generating HTML responses
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
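`rclone authorize` is normally run on a machine with a web browser to obtain an OAuth token for a headless machine; a sketch, with the backend name as an example:
```
# On the machine with a browser
rclone authorize "drive"
# Paste the token it prints into "rclone config" on the headless machine
```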

View File

@@ -3,6 +3,7 @@ title: "rclone backend"
description: "Run a backend-specific command."
slug: rclone_backend
url: /commands/rclone_backend/
groups: Important
versionIntroduced: v1.52
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/backend/ and as part of making a release run "make commanddocs"
---
@@ -52,9 +53,20 @@ rclone backend <command> remote:path [opts] <args> [flags]
-o, --option stringArray Option in the form name=value or name
```
## Important Options
Important flags useful for most commands.
```
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
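To make the `-o name=value` syntax from the hunk above concrete, here is a sketch; the `remote:` name is a placeholder, while `features` and `noop` are built-in backend commands:
```
# Show the optional features and options of the backend behind remote:
rclone backend features remote:

# Pass options to a backend command in name=value or bare-name form
rclone backend noop remote: -o echo=yes -o blue
```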

View File

@@ -3,6 +3,7 @@ title: "rclone bisync"
description: "Perform bidirectional synchronization between two paths."
slug: rclone_bisync
url: /commands/rclone_bisync/
groups: Filter,Copy,Important
versionIntroduced: v1.58
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/bisync/ and as part of making a release run "make commanddocs"
---
@@ -45,9 +46,85 @@ rclone bisync remote1:path1 remote2:path2 [flags]
--workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync)
```
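Before the option groups below, a minimal usage sketch (paths are placeholders): the first run needs `--resync` to establish the baseline listings, after which plain runs propagate changes in both directions.
```
# First run: build the initial listings for both sides
rclone bisync remote1:path1 remote2:path2 --resync

# Later runs: synchronize both ways; preview with --dry-run
rclone bisync remote1:path1 remote2:path2 --dry-run
```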
## Copy Options
Flags for anything which can Copy a file.
```
--check-first Do all the checks before starting transfers
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
--cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
-M, --metadata If set, preserve metadata when copying objects
--modify-window Duration Max time diff to be considered the same (default 1ns)
--multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi)
--multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4)
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
--no-update-modtime Don't update destination mod-time if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
--size-only Skip based on size only, not mod-time or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
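For instance, to cap how much data a single bisync run may transfer and stop gracefully once the limit is reached, the flags above can be combined like this (remote names are placeholders):
```
# Stop scheduling new transfers once 10 GiB have been moved.
rclone bisync remote1:path1 remote2:path2 --max-transfer 10G --cutoff-mode SOFT
```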
## Important Options
Important flags useful for most commands.
```
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
```
## Filter Options
Flags for filtering directory listings.
```
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
--exclude-if-present stringArray Exclude directories if filename is present
--files-from stringArray Read list of source-file names from file (use - to read from stdin)
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-depth int If set limits the recursion depth to this (default -1)
--max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
--metadata-exclude stringArray Exclude metadatas matching pattern
--metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
--metadata-filter stringArray Add a metadata filtering rule
--metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
--metadata-include stringArray Include metadatas matching pattern
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
```
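As a sketch, the filter flags compose in the usual rclone way, e.g. to skip editor backups and only consider files modified in the last 30 days, trialling the result first:
```
# --max-age 30d limits the sync to files younger than 30 days.
rclone bisync remote1:path1 remote2:path2 --exclude "*.bak" --max-age 30d --dry-run
```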
See the [global flags page](/flags/) for global options not listed here.
# SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.


@@ -3,6 +3,7 @@ title: "rclone cat"
description: "Concatenates any files and sends them to stdout."
slug: rclone_cat
url: /commands/rclone_cat/
groups: Filter,Listing
versionIntroduced: v1.33
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/cat/ and as part of making a release run "make commanddocs"
---
@@ -61,9 +62,48 @@ rclone cat remote:path [flags]
--tail int Only print the last N characters
```
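For example, to inspect just the end of a remote file (the path is a placeholder):
```
# Print only the final 200 characters of a remote log file.
rclone cat remote:path/to/app.log --tail 200
```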
## Filter Options
Flags for filtering directory listings.
```
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
--exclude-if-present stringArray Exclude directories if filename is present
--files-from stringArray Read list of source-file names from file (use - to read from stdin)
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-depth int If set limits the recursion depth to this (default -1)
--max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
--metadata-exclude stringArray Exclude metadatas matching pattern
--metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
--metadata-filter stringArray Add a metadata filtering rule
--metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
--metadata-include stringArray Include metadatas matching pattern
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
```
## Listing Options
Flags for listing directories.
```
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
```
See the [global flags page](/flags/) for global options not listed here.
# SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.


@@ -3,6 +3,7 @@ title: "rclone check"
description: "Checks the files in the source and destination match."
slug: rclone_check
url: /commands/rclone_check/
groups: Filter,Listing,Check
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/check/ and as part of making a release run "make commanddocs"
---
# rclone check
@@ -75,9 +76,56 @@ rclone check source:path dest:path [flags]
--one-way Check one way only, source files must exist on remote
```
## Check Options
Flags used for `rclone check`.
```
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
```
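A typical invocation, with placeholder remote names:
```
# Verify that every source file exists and matches on the destination,
# without requiring the destination to be an exact mirror of the source.
rclone check source:path dest:path --one-way
```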
## Filter Options
Flags for filtering directory listings.
```
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
--exclude-if-present stringArray Exclude directories if filename is present
--files-from stringArray Read list of source-file names from file (use - to read from stdin)
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-depth int If set limits the recursion depth to this (default -1)
--max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
--metadata-exclude stringArray Exclude metadatas matching pattern
--metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
--metadata-filter stringArray Add a metadata filtering rule
--metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
--metadata-include stringArray Include metadatas matching pattern
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
```
## Listing Options
Flags for listing directories.
```
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
```
See the [global flags page](/flags/) for global options not listed here.
# SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.


@@ -3,6 +3,7 @@ title: "rclone checksum"
description: "Checks the files in the source against a SUM file."
slug: rclone_checksum
url: /commands/rclone_checksum/
groups: Filter,Listing
versionIntroduced: v1.56
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/checksum/ and as part of making a release run "make commanddocs"
---
@@ -66,9 +67,48 @@ rclone checksum <hash> sumfile src:path [flags]
--one-way Check one way only, source files must exist on remote
```
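A sketch, assuming a local `SHA1SUMS` file in the standard sha1sum format and a placeholder remote:
```
# Check the files under remote:path against the sums listed in SHA1SUMS.
rclone checksum sha1 SHA1SUMS remote:path
```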
## Filter Options
Flags for filtering directory listings.
```
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
--exclude-if-present stringArray Exclude directories if filename is present
--files-from stringArray Read list of source-file names from file (use - to read from stdin)
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file filtering rule
--filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
--ignore-case Ignore case in filters (case insensitive)
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-depth int If set limits the recursion depth to this (default -1)
--max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
--metadata-exclude stringArray Exclude metadatas matching pattern
--metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
--metadata-filter stringArray Add a metadata filtering rule
--metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
--metadata-include stringArray Include metadatas matching pattern
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
```
## Listing Options
Flags for listing directories.
```
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
```
See the [global flags page](/flags/) for global options not listed here.
# SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

Some files were not shown because too many files have changed in this diff.