mirror of https://github.com/rclone/rclone.git synced 2026-01-20 09:23:21 +00:00

Compare commits

...

62 Commits

Author SHA1 Message Date
Nick Craig-Wood
ad7078c365 drive: add experimental --drive-fast-list-grouping flag #3114 2019-04-16 12:08:01 +01:00
Nick Craig-Wood
16d8014cbb build: drop support for go1.8 2019-04-15 21:49:58 +01:00
Nick Craig-Wood
613a9bb86b vendor: update all dependencies 2019-04-15 20:12:56 +01:00
calisro
8190a81201 lsjson: added EncryptedPath to output - fixes #3094 2019-04-15 18:12:09 +01:00
Nick Craig-Wood
f5795db6d2 build: fix fetch_binaries not to fetch test binaries 2019-04-13 13:08:53 +01:00
Nick Craig-Wood
e2a2eb349f Start v1.47.0-DEV development 2019-04-13 13:08:37 +01:00
Nick Craig-Wood
a0d4fdb2fa Version v1.47.0 2019-04-13 11:01:58 +01:00
Nick Craig-Wood
a28239f005 filter: Make --files-from traverse as before unless --no-traverse is set
In c5ac96e9e7 we made --files-from only read the objects specified and
not scan directories.

This caused problems with Google Drive (very, very slow) and B2
(excessive API consumption), so it was decided to make the old
behaviour (traversing the directories) the default with --files-from
and to use the existing --no-traverse flag (which has exactly the right
semantics) to enable the new non-scanning behaviour.

See: https://forum.rclone.org/t/using-files-from-with-drive-hammers-the-api/8726

Fixes #3102 Fixes #3095
2019-04-12 17:16:49 +01:00
Nick Craig-Wood
b05da61c82 build: move linter build tags into Makefile to fix golangci-lint 2019-04-12 15:48:36 +01:00
Nick Craig-Wood
41f01da625 docs: add link to Go report card 2019-04-12 15:28:37 +01:00
Nick Craig-Wood
901811bb26 docs: Remove references to Google+ 2019-04-12 15:25:17 +01:00
Oliver Heyme
0d4a3520ad jottacloud: add device registration
jottacloud: Updated documentation
2019-04-11 16:31:27 +01:00
calistri
5855714474 lsjson: added --files-only and --dirs-only flags
Factored common code from lsf/lsjson into operations.ListJSON
2019-04-11 11:43:25 +01:00
Nick Craig-Wood
120de505a9 Add Manu to contributors 2019-04-11 10:22:17 +01:00
Manu
6e86526c9d s3: add support for "Glacier Deep Archive" storage class - fixes #3088 2019-04-11 10:21:41 +01:00
Nick Craig-Wood
0862dc9b2b build: update to Xenial in travis build to fix link errors 2019-04-10 15:22:21 +01:00
Nick Craig-Wood
1c301f9f7a drive: Fix creation of duplicates with server side copy - fixes #3067 2019-03-29 16:58:19 +00:00
Nick Craig-Wood
9f6b09dfaf Add Ben Boeckel to contributors 2019-03-28 15:13:17 +00:00
Ben Boeckel
3d424c6e08 docs: fix various typos 2019-03-28 15:12:51 +00:00
Nick Craig-Wood
6fb1c8f51c b2: ignore malformed src_last_modified_millis
This fixes rclone returning `listing failed: strconv.ParseInt` errors
when listing files which have a malformed `src_last_modified_millis`.
This is uploaded by the client so care is needed in interpreting it as
it can be malformed.

Fixes #3065
2019-03-25 15:51:45 +00:00
Nick Craig-Wood
626f0d1886 copy: account for server side copy bytes and obey --max-transfer 2019-03-25 15:36:38 +00:00
Nick Craig-Wood
9ee9fe3885 swift: obey Retry-After to fix OVH restore from cold storage
In as many methods as possible we attempt to obey the Retry-After
header where it is provided.

This means that when objects are being requested from OVH cold storage
rclone will sleep the correct amount of time before retrying.

If the sleep is short it is done immediately; if it is long, an
ErrorRetryAfter is returned, which causes the outer retry to sleep
before retrying.

Fixes #3041
2019-03-25 13:41:34 +00:00
Nick Craig-Wood
b0380aad95 vendor: update github.com/ncw/swift to help with #3041 2019-03-25 13:41:34 +00:00
Nick Craig-Wood
2065e73d0b cmd: implement RetryAfter errors which cause a sleep before a retry
Use NewErrorRetryAfter to return an error which will cause a high-level
retry after the specified delay.
2019-03-25 13:41:34 +00:00
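The new helpers appear in the swift and cmd diffs further down. As a rough sketch only (everything except the fserrors calls is assumed for illustration), a backend could hand back a delayed retry like this:

    package example

    import (
        "time"

        "github.com/ncw/rclone/fs/fserrors"
    )

    // doCall is a hypothetical operation which asks the caller to retry
    // in two minutes rather than immediately.
    func doCall() error {
        return fserrors.NewErrorRetryAfter(2 * time.Minute)
    }

    // retryTime shows how a caller can detect the error and read the
    // absolute time after which the retry should happen.
    func retryTime(err error) time.Time {
        if fserrors.IsRetryAfterError(err) {
            return err.(fserrors.RetryAfter).RetryAfter()
        }
        return time.Time{}
    }

cmd.Run (see its diff below) sleeps until that time before starting the next high-level retry.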
Nick Craig-Wood
d3e3bbedf3 docs: Fix typo - fixes #3071 2019-03-25 13:18:41 +00:00
Cnly
8d29d69ade onedrive: Implement graceful cancel 2019-03-19 15:54:51 +08:00
Nick Craig-Wood
6e70d88f54 swift: work around token expiry on CEPH
This implements the Expiry interface so token expiry works properly

It also makes sure that the following change from the swift library
works correctly with rclone's custom authenticator:

> Renew the token 60s before the expiry time
>
> The v2 and v3 auth schemes both return the expiry time of the token,
> so instead of waiting for a 401 error, renew the token 60s before this
> time.
>
> This makes transfers more efficient and also works around a bug in
> CEPH which returns 403 instead of 401 when the token expires.
>
> http://tracker.ceph.com/issues/22223
2019-03-18 13:30:59 +00:00
Nick Craig-Wood
595fea757d vendor: update github.com/ncw/swift to bring in Expires changes 2019-03-18 13:30:59 +00:00
Nick Craig-Wood
bb80586473 bin/get-github-release: fetch the most recent not the least recent 2019-03-18 11:29:37 +00:00
Nick Craig-Wood
0d475958c7 Fix errors discovered with go vet nilness tool 2019-03-18 11:23:00 +00:00
Nick Craig-Wood
2728948fb0 Add xopez to contributors 2019-03-18 11:04:10 +00:00
Nick Craig-Wood
3756f211b5 Add Danil Semelenov to contributors 2019-03-18 11:04:10 +00:00
xopez
2faf2aed80 docs: Update Copyright to current Year 2019-03-18 11:03:45 +00:00
Nick Craig-Wood
1bd8183af1 build: use matrix build for travis
This makes the build more efficient, the .travis.yml file more
comprehensible and reduces the Makefile spaghetti.

Windows support is commented out for the moment as it isn't very
reliable yet.
2019-03-17 14:58:18 +00:00
Nick Craig-Wood
5aa706831f b2: ignore already_hidden error on remove
Sometimes (possibly due to eventual consistency) b2 returns an
already_hidden error on a delete. Ignore this since it is harmless.
2019-03-17 14:56:17 +00:00
Nick Craig-Wood
ac7e1dbf62 test_all: add the vfs tests to the integration tests
Fix failing tests for some remotes
2019-03-17 14:56:17 +00:00
Nick Craig-Wood
14ef4437e5 dedupe: fix bug introduced when converting to use walk.ListR #2902
Before the fix we were only de-duping the ListR batches.

Afterwards we dedupe everything.

This will have the consequence that rclone uses more memory as it will
build a map of all the directory names, not just the names in a given
directory.
2019-03-17 11:01:20 +00:00
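To make the memory trade-off concrete, here is an illustrative sketch only (not the dedupe code; the function and its input are invented) of grouping every directory entry from a complete listing by name:

    package example

    import (
        "fmt"

        "github.com/ncw/rclone/fs"
    )

    // reportDuplicateDirs groups every directory entry from a complete
    // listing by name. byName ends up holding one key per directory on
    // the remote, which is where the extra memory mentioned above goes.
    func reportDuplicateDirs(allEntries fs.DirEntries) {
        byName := map[string][]fs.Directory{}
        for _, entry := range allEntries {
            if d, ok := entry.(fs.Directory); ok {
                byName[d.Remote()] = append(byName[d.Remote()], d)
            }
        }
        for name, dirs := range byName {
            if len(dirs) > 1 {
                fmt.Printf("%q appears %d times\n", name, len(dirs))
            }
        }
    }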
Danil Semelenov
a0d2ab5b4f cmd: Fix autocompletion of remote paths with spaces - fixes #3047 2019-03-17 10:15:20 +00:00
Nick Craig-Wood
3bfde5f52a ftp: add --ftp-concurrency to limit maximum number of connections
Fixes #2166
2019-03-17 09:57:14 +00:00
Nick Craig-Wood
2b05bd9a08 rc: implement operations/publiclink the equivalent of rclone link
Fixes #3042
2019-03-17 09:41:31 +00:00
Nick Craig-Wood
1318be3b0a vendor: update github.com/goftp/server to fix hang while reading a file from the server
See: https://forum.rclone.org/t/minor-issue-with-linux-ftp-client-and-rclone-ftp-access-denied/8959
2019-03-17 09:30:57 +00:00
Nick Craig-Wood
f4a754a36b drive: add --skip-checksum-gphotos to ignore incorrect checksums on Google Photos
First implementation by @jammin84, re-written by @ncw

Fixes #2207
2019-03-17 09:10:51 +00:00
Nick Craig-Wood
fef73763aa lib/atexit: add SIGTERM to signals which run the exit handlers on unix 2019-03-16 17:47:02 +00:00
Nick Craig-Wood
7267d19ad8 fstest: Use walk.ListR for listing 2019-03-16 17:41:12 +00:00
Nick Craig-Wood
47099466c0 cache: Use walk.ListR for listing the temporary Fs. 2019-03-16 17:41:12 +00:00
Nick Craig-Wood
4376019062 dedupe: Use walk.ListR for listing commands.
This dramatically increases the speed (7x in my tests) of the de-dupe,
since Google Drive supports ListR directly and dedupe did not
previously work with `--fast-list`.

Fixes #2902
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
e5f4210b09 serve restic: use walk.ListR for listing
This is effectively what the old code did anyway so this should not
make any functional changes.
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
d5f2df2f3d Use walk.ListR for listing operations
This will increase speed for backends which support ListR and will not
have the memory overhead of using --fast-list.

It also means that errors are queued until the end, so that as much of
the remote as possible is listed before an error is returned.

Commands affected are:
- lsf
- ls
- lsl
- lsjson
- lsd
- md5sum/sha1sum/hashsum
- size
- delete
- cat
- settier
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
efd720b533 walk: Implement walk.ListR which will use ListR if at all possible
It otherwise has nearly the same interface as walk.Walk, which it
falls back to if it can't use ListR.

Using walk.ListR speeds up file system operations by default; compared
to supplying --fast-list it uses much less memory and starts
immediately.
2019-03-16 17:41:12 +00:00
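The call sites changed in the cache backend diff below give the shape of the new function. A minimal sketch based on those call sites (the countObjects wrapper is invented; walk.ListR, walk.ListObjects and the argument order come straight from the diffs, with -1 meaning unlimited recursion depth):

    package example

    import (
        "github.com/ncw/rclone/fs"
        "github.com/ncw/rclone/fs/walk"
    )

    // countObjects recursively lists dir, asking only for objects, and
    // counts them. ListR is used on the backend when available, with a
    // fallback to a normal directory walk otherwise.
    func countObjects(f fs.Fs, dir string) (n int, err error) {
        err = walk.ListR(f, dir, true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
            // called with each batch of entries as the listing proceeds
            for _, entry := range entries {
                if _, ok := entry.(fs.Object); ok {
                    n++
                }
            }
            return nil
        })
        return n, err
    }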
Nick Craig-Wood
047f00a411 filter: Add BoundedRecursion method
This indicates that the filter set could be satisfied by a bounded
directory recursion.
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
bb5ac8efbe http: fix socket leak on 404 errors 2019-03-15 17:04:28 +00:00
Nick Craig-Wood
e62bbf761b http: add --http-no-slash for websites with directories with no slashes #3053
See: https://forum.rclone.org/t/is-there-a-way-to-log-into-an-htpp-server/8484
2019-03-15 17:04:06 +00:00
Nick Craig-Wood
54a2e99d97 http: remove duplicates from listings 2019-03-15 16:59:36 +00:00
Nick Craig-Wood
28230d93b4 sync: Implement --suffix-keep-extension for use with --suffix - fixes #3032 2019-03-15 14:21:39 +00:00
Florian Gamböck
3c4407442d cmd: fix completion of remotes
The previous behavior of the remotes completion was that only
alphanumeric characters were allowed in a remote name. This limitation
has been lifted somewhat by #2985, which also allowed an underscore.

With the new implementation introduced in this commit, the completion of
the remote name has been simplified: if there is no colon (":") in the
current word, complete the remote name; otherwise, complete the path
inside the specified remote. This allows correct completion of all
remote names that are allowed by the config (including - and _). In
fact it matches even more than that, including remote names that are
not allowed by the config, but in that case there would already be an
invalid identifier in the configuration file.

With this simpler string comparison, we can get rid of the regular
expression, which makes the completion multiple times faster. For a
sample benchmark, try the following:

     # Old way
     $ time bash -c 'for _ in {1..1000000}; do
         [[ remote:path =~ ^[[:alnum:]]*$ ]]; done'

     real    0m15,637s
     user    0m15,613s
     sys     0m0,024s

     # New way
     $ time bash -c 'for _ in {1..1000000}; do
         [[ remote:path != *:* ]]; done'

     real    0m1,324s
     user    0m1,304s
     sys     0m0,020s
2019-03-15 13:16:42 +00:00
Dan Walters
caf318d499 dlna: add connection manager service description
The UPnP MediaServer spec says that the ConnectionManager service is
required, and adding it was enough to get dlna support working on my
other TV (LG webOS 2.2.1).
2019-03-15 13:14:31 +00:00
Nick Craig-Wood
2fbb504b66 webdav: fix About/df when reading the available/total returns 0
Some WebDAV servers return an empty Available and Used which parses as 0.

This caused About to return the Total as 0, which can confuse mounted
file systems.

After this change we ignore the result if Available and Used are both 0.

See: https://forum.rclone.org/t/windows-mounted-webdav-drive-has-no-free-space/8938
2019-03-15 12:03:04 +00:00
Alex Chen
2b58d1a46f docs: onedrive: Add guide to refreshing token after MFA is enabled 2019-03-14 00:21:05 +08:00
Cnly
1582a21408 onedrive: Always add trailing colon to path when addressing items - #2720, #3039 2019-03-13 11:30:15 +08:00
Nick Craig-Wood
229898dcee Add Dan Walters to contributors 2019-03-11 17:31:46 +00:00
Dan Walters
95194adfd5 dlna: fix root XML service descriptor
The SCPD URL was being set after marshalling the XML, and thus coming
out blank. It now works on my Samsung TV, and likely fixes some issues
reported by others in #2648.
2019-03-11 17:31:32 +00:00
Nick Craig-Wood
4827496234 webdav: fix race when creating directories - fixes #3035
Before this change a race condition existed in mkdir:
- an attempt was made to create the directory
- the parent didn't exist so it failed
- the parent was created
- the directory was created again

The last step failed because the directory had already been created in a
different thread.

This was fixed by checking the error messages of MKCOL for both
directory creations, rather than only the first.
2019-03-11 16:20:05 +00:00
646 changed files with 73264 additions and 19586 deletions

View File

@@ -1,9 +1,5 @@
# golangci-lint configuration options
run:
build-tags:
- cmount
linters:
enable:
- deadcode

View File

@@ -1,27 +1,33 @@
---
language: go
sudo: required
dist: trusty
dist: xenial
os:
- linux
go:
- 1.8.x
- 1.9.x
- 1.10.x
- 1.11.x
- 1.12.x
- tip
- linux
go_import_path: github.com/ncw/rclone
before_install:
- if [[ $TRAVIS_OS_NAME == linux ]]; then sudo modprobe fuse ; sudo chmod 666 /dev/fuse ; sudo chown root:$USER /etc/fuse.conf ; fi
- if [[ $TRAVIS_OS_NAME == osx ]]; then brew update && brew tap caskroom/cask && brew cask install osxfuse ; fi
- git fetch --unshallow --tags
- |
if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then
sudo modprobe fuse
sudo chmod 666 /dev/fuse
sudo chown root:$USER /etc/fuse.conf
fi
if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
brew update
brew tap caskroom/cask
brew cask install osxfuse
fi
if [[ "$TRAVIS_OS_NAME" == "windows" ]]; then
choco install -y winfsp zip make
cd ../.. # fix crlf in git checkout
mv $TRAVIS_REPO_SLUG _old
git config --global core.autocrlf false
git clone _old $TRAVIS_REPO_SLUG
cd $TRAVIS_REPO_SLUG
fi
install:
- git fetch --unshallow --tags
- make vars
- make build_dep
script:
- make check
- make quicktest
- make compile_all
- make vars
env:
global:
- GOTAGS=cmount
@@ -32,23 +38,63 @@ env:
addons:
apt:
packages:
- fuse
- libfuse-dev
- rpm
- pkg-config
- fuse
- libfuse-dev
- rpm
- pkg-config
cache:
directories:
- $HOME/.cache/go-build
matrix:
allow_failures:
- go: tip
- go: tip
include:
- os: osx
go: 1.12.x
env: GOTAGS=""
cache:
directories:
- $HOME/Library/Caches/go-build
- go: 1.9.x
script:
- make quicktest
- go: 1.10.x
script:
- make quicktest
- go: 1.11.x
script:
- make quicktest
- go: 1.12.x
env:
- GOTAGS=cmount
script:
- make build_dep
- make check
- make quicktest
- make racequicktest
- make compile_all
- os: osx
go: 1.12.x
env:
- GOTAGS= # cmount doesn't work on osx travis for some reason
cache:
directories:
- $HOME/Library/Caches/go-build
script:
- make
- make quicktest
- make racequicktest
# - os: windows
# go: 1.12.x
# env:
# - GOTAGS=cmount
# - CPATH='C:\Program Files (x86)\WinFsp\inc\fuse'
# #filter_secrets: false # works around a problem with secrets under windows
# cache:
# directories:
# - ${LocalAppData}/go-build
# script:
# - make
# - make quicktest
# - make racequicktest
- go: tip
script:
- make quicktest
deploy:
provider: script
script: make travis_beta
@@ -57,4 +103,4 @@ deploy:
repo: ncw/rclone
all_branches: true
go: 1.12.x
condition: $TRAVIS_PULL_REQUEST == false
condition: $TRAVIS_PULL_REQUEST == false && $TRAVIS_OS_NAME != "windows"

View File

@@ -135,7 +135,7 @@ then change into the project root and run
make test
This command is run daily on the the integration test server. You can
This command is run daily on the integration test server. You can
find the results at https://pub.rclone.org/integration-tests/
## Code Organisation ##

File diff suppressed because it is too large

692 MANUAL.md

File diff suppressed because it is too large

3775 MANUAL.txt

File diff suppressed because it is too large

View File

@@ -17,8 +17,6 @@ ifneq ($(TAG),$(LAST_TAG))
endif
GO_VERSION := $(shell go version)
GO_FILES := $(shell go list ./... | grep -v /vendor/ )
# Run full tests if go >= go1.12
FULL_TESTS := $(shell go version | perl -lne 'print "go$$1.$$2" if /go(\d+)\.(\d+)/ && ($$1 > 1 || $$2 >= 12)')
BETA_PATH := $(BRANCH_PATH)$(TAG)
BETA_URL := https://beta.rclone.org/$(BETA_PATH)/
BETA_UPLOAD_ROOT := memstore:beta-rclone-org
@@ -26,6 +24,7 @@ BETA_UPLOAD := $(BETA_UPLOAD_ROOT)/$(BETA_PATH)
# Pass in GOTAGS=xyz on the make command line to set build tags
ifdef GOTAGS
BUILDTAGS=-tags "$(GOTAGS)"
LINTTAGS=--build-tags "$(GOTAGS)"
endif
.PHONY: rclone vars version
@@ -42,7 +41,6 @@ vars:
@echo LAST_TAG="'$(LAST_TAG)'"
@echo NEW_TAG="'$(NEW_TAG)'"
@echo GO_VERSION="'$(GO_VERSION)'"
@echo FULL_TESTS="'$(FULL_TESTS)'"
@echo BETA_URL="'$(BETA_URL)'"
version:
@@ -57,28 +55,22 @@ test: rclone
# Quick test
quicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) $(GO_FILES)
ifdef FULL_TESTS
racequicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race $(GO_FILES)
endif
# Do source code quality checks
check: rclone
ifdef FULL_TESTS
@# we still run go vet for -printfuncs which golangci-lint doesn't do yet
@# see: https://github.com/golangci/golangci-lint/issues/204
@echo "-- START CODE QUALITY REPORT -------------------------------"
@go vet $(BUILDTAGS) -printfuncs Debugf,Infof,Logf,Errorf ./...
@golangci-lint run ./...
@golangci-lint run $(LINTTAGS) ./...
@echo "-- END CODE QUALITY REPORT ---------------------------------"
else
@echo Skipping source quality tests as version of go too old
endif
# Get the build dependencies
build_dep:
ifdef FULL_TESTS
go run bin/get-github-release.go -extract golangci-lint golangci/golangci-lint 'golangci-lint-.*\.tar\.gz'
endif
# Get the release dependencies
release_dep:
@@ -162,11 +154,7 @@ log_since_last_release:
git log $(LAST_TAG)..
compile_all:
ifdef FULL_TESTS
go run bin/cross-compile.go -parallel 8 -compile-only $(BUILDTAGS) $(TAG)
else
@echo Skipping compile all as version of go too old
endif
appveyor_upload:
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
@@ -186,6 +174,11 @@ BUILD_FLAGS := -exclude "^(windows|darwin)/"
ifeq ($(TRAVIS_OS_NAME),osx)
BUILD_FLAGS := -include "^darwin/" -cgo
endif
ifeq ($(TRAVIS_OS_NAME),windows)
# BUILD_FLAGS := -include "^windows/" -cgo
# 386 doesn't build yet
BUILD_FLAGS := -include "^windows/amd64" -cgo
endif
travis_beta:
ifeq ($(TRAVIS_OS_NAME),linux)
@@ -201,7 +194,7 @@ endif
# Fetch the binary builds from travis and appveyor
fetch_binaries:
rclone -P sync $(BETA_UPLOAD) build/
rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server -v -w

View File

@@ -7,11 +7,11 @@
[Changelog](https://rclone.org/changelog/) |
[Installation](https://rclone.org/install/) |
[Forum](https://forum.rclone.org/) |
[G+](https://google.com/+RcloneOrg)
[![Build Status](https://travis-ci.org/ncw/rclone.svg?branch=master)](https://travis-ci.org/ncw/rclone)
[![Windows Build Status](https://ci.appveyor.com/api/projects/status/github/ncw/rclone?branch=master&passingText=windows%20-%20ok&svg=true)](https://ci.appveyor.com/project/ncw/rclone)
[![CircleCI](https://circleci.com/gh/ncw/rclone/tree/master.svg?style=svg)](https://circleci.com/gh/ncw/rclone/tree/master)
[![Go Report Card](https://goreportcard.com/badge/github.com/ncw/rclone)](https://goreportcard.com/report/github.com/ncw/rclone)
[![GoDoc](https://godoc.org/github.com/ncw/rclone?status.svg)](https://godoc.org/github.com/ncw/rclone)
# Rclone

View File

@@ -11,7 +11,7 @@ Making a release
* edit docs/content/changelog.md
* make doc
* git status - to check for new man pages - git add them
* git commit -a -v -m "Version v1.XX"
* git commit -a -v -m "Version v1.XX.0"
* make retag
* git push --tags origin master
* # Wait for the appveyor and travis builds to complete then...
@@ -27,6 +27,7 @@ Making a release
Early in the next release cycle update the vendored dependencies
* Review any pinned packages in go.mod and remove if possible
* GO111MODULE=on go get -u github.com/spf13/cobra@master
* make update
* git status
* git add new files

View File

@@ -1,6 +1,6 @@
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
// +build !plan9,!solaris,go1.8
// +build !plan9,!solaris
package azureblob

View File

@@ -1,4 +1,4 @@
// +build !plan9,!solaris,go1.8
// +build !plan9,!solaris
package azureblob

View File

@@ -1,6 +1,6 @@
// Test AzureBlob filesystem interface
// +build !plan9,!solaris,go1.8
// +build !plan9,!solaris
package azureblob

View File

@@ -1,6 +1,6 @@
// Build for azureblob for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build plan9 solaris !go1.8
// +build plan9 solaris
package azureblob

View File

@@ -952,6 +952,13 @@ func (f *Fs) hide(Name string) error {
return f.shouldRetry(resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok {
if apiErr.Code == "already_hidden" {
// sometimes eventual consistency causes this, so
// ignore this error since it is harmless
return nil
}
}
return errors.Wrapf(err, "failed to hide %q", Name)
}
return nil
@@ -1206,7 +1213,7 @@ func (o *Object) parseTimeString(timeString string) (err error) {
unixMilliseconds, err := strconv.ParseInt(timeString, 10, 64)
if err != nil {
fs.Debugf(o, "Failed to parse mod time string %q: %v", timeString, err)
return err
return nil
}
o.modTime = time.Unix(unixMilliseconds/1E3, (unixMilliseconds%1E3)*1E6).UTC()
return nil

View File

@@ -1191,7 +1191,7 @@ func (f *Fs) Rmdir(dir string) error {
}
var queuedEntries []*Object
err = walk.Walk(f.tempFs, dir, true, -1, func(path string, entries fs.DirEntries, err error) error {
err = walk.ListR(f.tempFs, dir, true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, o := range entries {
if oo, ok := o.(fs.Object); ok {
co := ObjectFromOriginal(f, oo)
@@ -1287,7 +1287,7 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
}
var queuedEntries []*Object
err := walk.Walk(f.tempFs, srcRemote, true, -1, func(path string, entries fs.DirEntries, err error) error {
err := walk.ListR(f.tempFs, srcRemote, true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, o := range entries {
if oo, ok := o.(fs.Object); ok {
co := ObjectFromOriginal(f, oo)

View File

@@ -1023,7 +1023,7 @@ func (b *Persistent) ReconcileTempUploads(cacheFs *Fs) error {
}
var queuedEntries []fs.Object
err = walk.Walk(cacheFs.tempFs, "", true, -1, func(path string, entries fs.DirEntries, err error) error {
err = walk.ListR(cacheFs.tempFs, "", true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, o := range entries {
if oo, ok := o.(fs.Object); ok {
queuedEntries = append(queuedEntries, oo)

View File

@@ -1,7 +1,4 @@
// Package drive interfaces with the Google Drive object storage system
// +build go1.9
package drive
// FIXME need to deal with some corner cases
@@ -240,6 +237,22 @@ func init() {
Default: false,
Help: "Skip google documents in all listings.\nIf given, gdocs practically become invisible to rclone.",
Advanced: true,
}, {
Name: "skip_checksum_gphotos",
Default: false,
Help: `Skip MD5 checksum on Google photos and videos only.
Use this if you get checksum errors when transferring Google photos or
videos.
Setting this flag will cause Google photos and videos to return a
blank MD5 checksum.
Google photos are identifed by being in the "photos" space.
Corrupted checksums are caused by Google modifying the image/video but
not updating the checksum.`,
Advanced: true,
}, {
Name: "shared_with_me",
Default: false,
@@ -369,6 +382,11 @@ will download it anyway.`,
Default: defaultBurst,
Help: "Number of API calls to allow without sleeping.",
Advanced: true,
}, {
Name: "fast_list_grouping",
Default: 50,
Help: "Groups of IDs to request with --fast-list.",
Advanced: true,
}},
})
@@ -396,6 +414,7 @@ type Options struct {
AuthOwnerOnly bool `config:"auth_owner_only"`
UseTrash bool `config:"use_trash"`
SkipGdocs bool `config:"skip_gdocs"`
SkipChecksumGphotos bool `config:"skip_checksum_gphotos"`
SharedWithMe bool `config:"shared_with_me"`
TrashedOnly bool `config:"trashed_only"`
Extensions string `config:"formats"`
@@ -413,6 +432,7 @@ type Options struct {
V2DownloadMinSize fs.SizeSuffix `config:"v2_download_min_size"`
PacerMinSleep fs.Duration `config:"pacer_min_sleep"`
PacerBurst int `config:"pacer_burst"`
FastListGrouping int `config:"fast_list_grouping"`
}
// Fs represents a remote drive server
@@ -615,6 +635,9 @@ func (f *Fs) list(dirIDs []string, title string, directoriesOnly, filesOnly, inc
if f.opt.AuthOwnerOnly {
fields += ",owners"
}
if f.opt.SkipChecksumGphotos {
fields += ",spaces"
}
fields = fmt.Sprintf("files(%s),nextPageToken", fields)
@@ -1002,6 +1025,15 @@ func (f *Fs) newBaseObject(remote string, info *drive.File) baseObject {
// newRegularObject creates a fs.Object for a normal drive.File
func (f *Fs) newRegularObject(remote string, info *drive.File) fs.Object {
// wipe checksum if SkipChecksumGphotos and file is type Photo or Video
if f.opt.SkipChecksumGphotos {
for _, space := range info.Spaces {
if space == "photos" {
info.Md5Checksum = ""
break
}
}
}
return &Object{
baseObject: f.newBaseObject(remote, info),
url: fmt.Sprintf("%sfiles/%s?alt=media", f.svc.BasePath, info.Id),
@@ -1457,7 +1489,6 @@ func (f *Fs) listRRunner(wg *sync.WaitGroup, in <-chan listREntry, out chan<- er
// of listing recursively that doing a directory traversal.
func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
const (
grouping = 50
inputBuffer = 1000
)
@@ -1509,7 +1540,7 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
in <- listREntry{directoryID, dir}
for i := 0; i < fs.Config.Checkers; i++ {
go f.listRRunner(&wg, in, out, cb, grouping)
go f.listRRunner(&wg, in, out, cb, f.opt.FastListGrouping)
}
go func() {
// wait until the all directories are processed
@@ -1843,6 +1874,9 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
remote = remote[:len(remote)-len(ext)]
}
// Look to see if there is an existing object
existingObject, _ := f.NewObject(remote)
createInfo, err := f.createFileInfo(remote, src.ModTime())
if err != nil {
return nil, err
@@ -1860,7 +1894,17 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
if err != nil {
return nil, err
}
return f.newObjectWithInfo(remote, info)
newObject, err := f.newObjectWithInfo(remote, info)
if err != nil {
return nil, err
}
if existingObject != nil {
err = existingObject.Remove()
if err != nil {
fs.Errorf(existingObject, "Failed to remove existing object after copy: %v", err)
}
}
return newObject, nil
}
// Purge deletes all the files and the container

View File

@@ -1,5 +1,3 @@
// +build go1.9
package drive
import (

View File

@@ -1,7 +1,5 @@
// Test Drive filesystem interface
// +build go1.9
package drive
import (

View File

@@ -1,6 +0,0 @@
// Build for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build !go1.9
package drive

View File

@@ -8,8 +8,6 @@
//
// This contains code adapted from google.golang.org/api (C) the GO AUTHORS
// +build go1.9
package drive
import (

View File

@@ -15,6 +15,7 @@ import (
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
)
@@ -45,6 +46,11 @@ func init() {
Help: "FTP password",
IsPassword: true,
Required: true,
}, {
Name: "concurrency",
Help: "Maximum number of FTP simultaneous connections, 0 for unlimited",
Default: 0,
Advanced: true,
},
},
})
@@ -52,10 +58,11 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
Concurrency int `config:"concurrency"`
}
// Fs represents a remote FTP server
@@ -70,6 +77,7 @@ type Fs struct {
dialAddr string
poolMu sync.Mutex
pool []*ftp.ServerConn
tokens *pacer.TokenDispenser
}
// Object describes an FTP file
@@ -128,6 +136,9 @@ func (f *Fs) ftpConnection() (*ftp.ServerConn, error) {
// Get an FTP connection from the pool, or open a new one
func (f *Fs) getFtpConnection() (c *ftp.ServerConn, err error) {
if f.opt.Concurrency > 0 {
f.tokens.Get()
}
f.poolMu.Lock()
if len(f.pool) > 0 {
c = f.pool[0]
@@ -147,6 +158,9 @@ func (f *Fs) getFtpConnection() (c *ftp.ServerConn, err error) {
// if err is not nil then it checks the connection is alive using a
// NOOP request
func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
if f.opt.Concurrency > 0 {
defer f.tokens.Put()
}
c := *pc
*pc = nil
if err != nil {
@@ -198,6 +212,7 @@ func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
user: user,
pass: pass,
dialAddr: dialAddr,
tokens: pacer.NewTokenDispenser(opt.Concurrency),
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,

View File

@@ -1,7 +1,4 @@
// Package googlecloudstorage provides an interface to Google Cloud Storage
// +build go1.9
package googlecloudstorage
/*

View File

@@ -1,7 +1,5 @@
// Test GoogleCloudStorage filesystem interface
// +build go1.9
package googlecloudstorage_test
import (

View File

@@ -1,6 +0,0 @@
// Build for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build !go1.9
package googlecloudstorage

View File

@@ -6,6 +6,7 @@ package http
import (
"io"
"mime"
"net/http"
"net/url"
"path"
@@ -44,6 +45,22 @@ func init() {
Value: "https://user:pass@example.com",
Help: "Connect to example.com using a username and password",
}},
}, {
Name: "no_slash",
Help: `Set this if the site doesn't end directories with /
Use this if your target website does not use / on the end of
directories.
A / on the end of a path is how rclone normally tells the difference
between files and directories. If this flag is set, then rclone will
treat all files with Content-Type: text/html as directories and read
URLs from them rather than downloading them.
Note that this may cause rclone to confuse genuine HTML files with
directories.`,
Default: false,
Advanced: true,
}},
}
fs.Register(fsi)
@@ -52,6 +69,7 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
Endpoint string `config:"url"`
NoSlash bool `config:"no_slash"`
}
// Fs stores the interface to the remote HTTP files
@@ -270,14 +288,20 @@ func parse(base *url.URL, in io.Reader) (names []string, err error) {
if err != nil {
return nil, err
}
var walk func(*html.Node)
var (
walk func(*html.Node)
seen = make(map[string]struct{})
)
walk = func(n *html.Node) {
if n.Type == html.ElementNode && n.Data == "a" {
for _, a := range n.Attr {
if a.Key == "href" {
name, err := parseName(base, a.Val)
if err == nil {
names = append(names, name)
if _, found := seen[name]; !found {
names = append(names, name)
seen[name] = struct{}{}
}
}
break
}
@@ -302,14 +326,16 @@ func (f *Fs) readDir(dir string) (names []string, err error) {
return nil, errors.Errorf("internal error: readDir URL %q didn't end in /", URL)
}
res, err := f.httpClient.Get(URL)
if err == nil && res.StatusCode == http.StatusNotFound {
return nil, fs.ErrorDirNotFound
if err == nil {
defer fs.CheckClose(res.Body, &err)
if res.StatusCode == http.StatusNotFound {
return nil, fs.ErrorDirNotFound
}
}
err = statusError(res, err)
if err != nil {
return nil, errors.Wrap(err, "failed to readDir")
}
defer fs.CheckClose(res.Body, &err)
contentType := strings.SplitN(res.Header.Get("Content-Type"), ";", 2)[0]
switch contentType {
@@ -353,11 +379,16 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
fs: f,
remote: remote,
}
if err = file.stat(); err != nil {
switch err = file.stat(); err {
case nil:
entries = append(entries, file)
case fs.ErrorNotAFile:
// ...found a directory not a file
dir := fs.NewDir(remote, timeUnset)
entries = append(entries, dir)
default:
fs.Debugf(remote, "skipping because of error: %v", err)
continue
}
entries = append(entries, file)
}
}
return entries, nil
@@ -433,6 +464,16 @@ func (o *Object) stat() error {
o.size = parseInt64(res.Header.Get("Content-Length"), -1)
o.modTime = t
o.contentType = res.Header.Get("Content-Type")
// If NoSlash is set then check ContentType to see if it is a directory
if o.fs.opt.NoSlash {
mediaType, _, err := mime.ParseMediaType(o.contentType)
if err != nil {
return errors.Wrapf(err, "failed to parse Content-Type: %q", o.contentType)
}
if mediaType == "text/html" {
return fs.ErrorNotAFile
}
}
return nil
}

View File

@@ -1,5 +1,3 @@
// +build go1.8
package http
import (
@@ -65,7 +63,7 @@ func prepare(t *testing.T) (fs.Fs, func()) {
return f, tidy
}
func testListRoot(t *testing.T, f fs.Fs) {
func testListRoot(t *testing.T, f fs.Fs, noSlash bool) {
entries, err := f.List("")
require.NoError(t, err)
@@ -93,15 +91,29 @@ func testListRoot(t *testing.T, f fs.Fs) {
e = entries[3]
assert.Equal(t, "two.html", e.Remote())
assert.Equal(t, int64(7), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
if noSlash {
assert.Equal(t, int64(-1), e.Size())
_, ok = e.(fs.Directory)
assert.True(t, ok)
} else {
assert.Equal(t, int64(41), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
}
}
func TestListRoot(t *testing.T) {
f, tidy := prepare(t)
defer tidy()
testListRoot(t, f)
testListRoot(t, f, false)
}
func TestListRootNoSlash(t *testing.T) {
f, tidy := prepare(t)
f.(*Fs).opt.NoSlash = true
defer tidy()
testListRoot(t, f, true)
}
func TestListSubDir(t *testing.T) {
@@ -194,7 +206,7 @@ func TestIsAFileRoot(t *testing.T) {
f, err := NewFs(remoteName, "one%.txt", m)
assert.Equal(t, err, fs.ErrorIsFile)
testListRoot(t, f)
testListRoot(t, f, false)
}
func TestIsAFileSubDir(t *testing.T) {

View File

@@ -1 +1 @@
potato
<a href="two.html/file.txt">file.txt</a>

View File

@@ -314,3 +314,9 @@ type UploadResponse struct {
Deleted interface{} `json:"deleted"`
Mime string `json:"mime"`
}
// DeviceRegistrationResponse is the response to registering a device
type DeviceRegistrationResponse struct {
ClientID string `json:"client_id"`
ClientSecret string `json:"client_secret"`
}

View File

@@ -8,6 +8,7 @@ import (
"io"
"io/ioutil"
"log"
"math/rand"
"net/http"
"net/url"
"os"
@@ -45,10 +46,14 @@ const (
apiURL = "https://api.jottacloud.com/files/v1/"
baseURL = "https://www.jottacloud.com/"
tokenURL = "https://api.jottacloud.com/auth/v1/token"
registerURL = "https://api.jottacloud.com/auth/v1/register"
cachePrefix = "rclone-jcmd5-"
rcloneClientID = "nibfk8biu12ju7hpqomr8b1e40"
rcloneEncryptedClientSecret = "Vp8eAv7eVElMnQwN-kgU9cbhgApNDaMqWdlDi5qFydlQoji4JBxrGMF2"
configUsername = "user"
configClientID = "client_id"
configClientSecret = "client_secret"
charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
)
var (
@@ -58,9 +63,7 @@ var (
AuthURL: tokenURL,
TokenURL: tokenURL,
},
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
RedirectURL: oauthutil.RedirectLocalhostURL,
}
)
@@ -79,6 +82,55 @@ func init() {
}
}
srv := rest.NewClient(fshttp.NewClient(fs.Config))
// ask if we should create a device specifc token: https://github.com/ncw/rclone/issues/2995
fmt.Printf("\nDo you want to create a machine specific API key?\n\nRclone has it's own Jottacloud API KEY which works fine as long as one only uses rclone on a single machine. When you want to use rclone with this account on more than one machine it's recommended to create a machine specific API key. These keys can NOT be shared between machines.\n\n")
if config.Confirm() {
// random generator to generate random device names
seededRand := rand.New(rand.NewSource(time.Now().UnixNano()))
randonDeviceNamePartLength := 21
randomDeviceNamePart := make([]byte, randonDeviceNamePartLength)
for i := range randomDeviceNamePart {
randomDeviceNamePart[i] = charset[seededRand.Intn(len(charset))]
}
randomDeviceName := "rclone-" + string(randomDeviceNamePart)
fs.Debugf(nil, "Trying to register device '%s'", randomDeviceName)
values := url.Values{}
values.Set("device_id", randomDeviceName)
// all information comes from https://github.com/ttyridal/aiojotta/wiki/Jotta-protocol-3.-Authentication#token-authentication
opts := rest.Opts{
Method: "POST",
RootURL: registerURL,
ContentType: "application/x-www-form-urlencoded",
ExtraHeaders: map[string]string{"Authorization": "Bearer c2xrZmpoYWRsZmFramhkc2xma2phaHNkbGZramhhc2xkZmtqaGFzZGxrZmpobGtq"},
Parameters: values,
}
var deviceRegistration api.DeviceRegistrationResponse
_, err := srv.CallJSON(&opts, nil, &deviceRegistration)
if err != nil {
log.Fatalf("Failed to register device: %v", err)
}
m.Set(configClientID, deviceRegistration.ClientID)
m.Set(configClientSecret, obscure.MustObscure(deviceRegistration.ClientSecret))
fs.Debugf(nil, "Got clientID '%s' and clientSecret '%s'", deviceRegistration.ClientID, deviceRegistration.ClientSecret)
}
clientID, ok := m.Get(configClientID)
if !ok {
clientID = rcloneClientID
}
clientSecret, ok := m.Get(configClientSecret)
if !ok {
clientSecret = rcloneEncryptedClientSecret
}
oauthConfig.ClientID = clientID
oauthConfig.ClientSecret = obscure.MustReveal(clientSecret)
username, ok := m.Get(configUsername)
if !ok {
log.Fatalf("No username defined")
@@ -86,7 +138,6 @@ func init() {
password := config.GetPassword("Your Jottacloud password is only required during config and will not be stored.")
// prepare out token request with username and password
srv := rest.NewClient(fshttp.NewClient(fs.Config))
values := url.Values{}
values.Set("grant_type", "PASSWORD")
values.Set("password", password)
@@ -381,6 +432,17 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
rootIsDir := strings.HasSuffix(root, "/")
root = parsePath(root)
clientID, ok := m.Get(configClientID)
if !ok {
clientID = rcloneClientID
}
clientSecret, ok := m.Get(configClientSecret)
if !ok {
clientSecret = rcloneEncryptedClientSecret
}
oauthConfig.ClientID = clientID
oauthConfig.ClientSecret = obscure.MustReveal(clientSecret)
// add jottacloud to the long list of sites that don't follow the oauth spec correctly
oauth2.RegisterBrokenAuthHeaderProvider("https://www.jottacloud.com/")

View File

@@ -14,6 +14,8 @@ import (
"strings"
"time"
"github.com/ncw/rclone/lib/atexit"
"github.com/ncw/rclone/backend/onedrive/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
@@ -335,8 +337,13 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
// readMetaDataForPathRelativeToID reads the metadata for a path relative to an item that is addressed by its normalized ID.
// if `relPath` == "", it reads the metadata for the item with that ID.
//
// We address items using the pattern `drives/driveID/items/itemID:/relativePath`
// instead of simply using `drives/driveID/root:/itemPath` because it works for
// "shared with me" folders in OneDrive Personal (See #2536, #2778)
// This path pattern comes from https://github.com/OneDrive/onedrive-api-docs/issues/908#issuecomment-417488480
func (f *Fs) readMetaDataForPathRelativeToID(normalizedID string, relPath string) (info *api.Item, resp *http.Response, err error) {
opts := newOptsCall(normalizedID, "GET", ":/"+rest.URLPathEscape(replaceReservedChars(relPath)))
opts := newOptsCall(normalizedID, "GET", ":/"+withTrailingColon(rest.URLPathEscape(replaceReservedChars(relPath))))
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &info)
return shouldRetry(resp, err)
@@ -703,9 +710,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
id := info.GetID()
f.dirCache.Put(remote, id)
d := fs.NewDir(remote, time.Time(info.GetLastModifiedDateTime())).SetID(id)
if folder != nil {
d.SetItems(folder.ChildCount)
}
d.SetItems(folder.ChildCount)
entries = append(entries, d)
} else {
o, err := f.newObjectWithInfo(remote, info)
@@ -819,9 +824,6 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
return err
}
f.dirCache.FlushDir(dir)
if err != nil {
return err
}
return nil
}
@@ -1340,12 +1342,12 @@ func (o *Object) setModTime(modTime time.Time) (*api.Item, error) {
opts = rest.Opts{
Method: "PATCH",
RootURL: rootURL,
Path: "/" + drive + "/items/" + trueDirID + ":/" + rest.URLPathEscape(leaf),
Path: "/" + drive + "/items/" + trueDirID + ":/" + withTrailingColon(rest.URLPathEscape(leaf)),
}
} else {
opts = rest.Opts{
Method: "PATCH",
Path: "/root:/" + rest.URLPathEscape(o.srvPath()),
Path: "/root:/" + withTrailingColon(rest.URLPathEscape(o.srvPath())),
}
}
update := api.SetFileSystemInfo{
@@ -1491,23 +1493,40 @@ func (o *Object) uploadMultipart(in io.Reader, size int64, modTime time.Time) (i
return nil, errors.New("unknown-sized upload not supported")
}
uploadURLChan := make(chan string, 1)
gracefulCancel := func() {
uploadURL, ok := <-uploadURLChan
// Reading from uploadURLChan blocks the atexit process until
// we are able to use uploadURL to cancel the upload
if !ok { // createUploadSession failed - no need to cancel upload
return
}
fs.Debugf(o, "Cancelling multipart upload")
cancelErr := o.cancelUploadSession(uploadURL)
if cancelErr != nil {
fs.Logf(o, "Failed to cancel multipart upload: %v", cancelErr)
}
}
cancelFuncHandle := atexit.Register(gracefulCancel)
// Create upload session
fs.Debugf(o, "Starting multipart upload")
session, err := o.createUploadSession(modTime)
if err != nil {
close(uploadURLChan)
atexit.Unregister(cancelFuncHandle)
return nil, err
}
uploadURL := session.UploadURL
uploadURLChan <- uploadURL
// Cancel the session if something went wrong
defer func() {
if err != nil {
fs.Debugf(o, "Cancelling multipart upload: %v", err)
cancelErr := o.cancelUploadSession(uploadURL)
if cancelErr != nil {
fs.Logf(o, "Failed to cancel multipart upload: %v", err)
}
fs.Debugf(o, "Error encountered during upload: %v", err)
gracefulCancel()
}
atexit.Unregister(cancelFuncHandle)
}()
// Upload the chunks
@@ -1668,6 +1687,21 @@ func getRelativePathInsideBase(base, target string) (string, bool) {
return "", false
}
// Adds a ":" at the end of `remotePath` in a proper manner.
// If `remotePath` already ends with "/", change it to ":/"
// If `remotePath` is "", return "".
// A workaround for #2720 and #3039
func withTrailingColon(remotePath string) string {
if remotePath == "" {
return ""
}
if strings.HasSuffix(remotePath, "/") {
return remotePath[:len(remotePath)-1] + ":/"
}
return remotePath + ":"
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)

View File

@@ -287,9 +287,6 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
return err
}
f.dirCache.FlushDir(dir)
if err != nil {
return err
}
return nil
}

View File

@@ -645,6 +645,9 @@ isn't set then "acl" is used instead.`,
}, {
Value: "GLACIER",
Help: "Glacier storage class",
}, {
Value: "DEEP_ARCHIVE",
Help: "Glacier Deep Archive storage class",
}},
}, {
// Mapping from here: https://www.alibabacloud.com/help/doc-detail/64919.htm

View File

@@ -1,6 +1,6 @@
// Package sftp provides a filesystem interface using github.com/pkg/sftp
// +build !plan9,go1.9
// +build !plan9
package sftp

View File

@@ -1,4 +1,4 @@
// +build !plan9,go1.9
// +build !plan9
package sftp

View File

@@ -1,6 +1,6 @@
// Test Sftp filesystem interface
// +build !plan9,go1.9
// +build !plan9
package sftp_test

View File

@@ -1,6 +1,6 @@
// Build for sftp for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build plan9 !go1.9
// +build plan9
package sftp

View File

@@ -1,4 +1,4 @@
// +build !plan9,go1.9
// +build !plan9
package sftp

View File

@@ -1,4 +1,4 @@
// +build !plan9,go1.9
// +build !plan9
package sftp

View File

@@ -2,6 +2,7 @@ package swift
import (
"net/http"
"time"
"github.com/ncw/swift"
)
@@ -65,6 +66,14 @@ func (a *auth) Token() string {
return a.parentAuth.Token()
}
// Expires returns the time the token expires if known or Zero if not.
func (a *auth) Expires() (t time.Time) {
if do, ok := a.parentAuth.(swift.Expireser); ok {
t = do.Expires()
}
return t
}
// The CDN url if available
func (a *auth) CdnUrl() string { // nolint
if a.parentAuth == nil {
@@ -74,4 +83,7 @@ func (a *auth) CdnUrl() string { // nolint
}
// Check the interfaces are satisfied
var _ swift.Authenticator = (*auth)(nil)
var (
_ swift.Authenticator = (*auth)(nil)
_ swift.Expireser = (*auth)(nil)
)

View File

@@ -286,6 +286,31 @@ func shouldRetry(err error) (bool, error) {
return fserrors.ShouldRetry(err), err
}
// shouldRetryHeaders returns a boolean as to whether this err
// deserves to be retried. It reads the headers passed in looking for
// `Retry-After`. It returns the err as a convenience
func shouldRetryHeaders(headers swift.Headers, err error) (bool, error) {
if swiftError, ok := err.(*swift.Error); ok && swiftError.StatusCode == 429 {
if value := headers["Retry-After"]; value != "" {
retryAfter, parseErr := strconv.Atoi(value)
if parseErr != nil {
fs.Errorf(nil, "Failed to parse Retry-After: %q: %v", value, parseErr)
} else {
duration := time.Second * time.Duration(retryAfter)
if duration <= 60*time.Second {
// Do a short sleep immediately
fs.Debugf(nil, "Sleeping for %v to obey Retry-After", duration)
time.Sleep(duration)
return true, err
}
// Delay a long sleep for a retry
return false, fserrors.NewErrorRetryAfter(duration)
}
}
}
return shouldRetry(err)
}
// Pattern to match a swift path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
@@ -413,8 +438,9 @@ func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, n
// Check to see if the object exists - ignoring directory markers
var info swift.Object
err = f.pacer.Call(func() (bool, error) {
info, _, err = f.c.Object(container, directory)
return shouldRetry(err)
var rxHeaders swift.Headers
info, rxHeaders, err = f.c.Object(container, directory)
return shouldRetryHeaders(rxHeaders, err)
})
if err == nil && info.ContentType != directoryMarkerContentType {
f.root = path.Dir(directory)
@@ -723,8 +749,9 @@ func (f *Fs) Mkdir(dir string) error {
var err error = swift.ContainerNotFound
if !f.noCheckContainer {
err = f.pacer.Call(func() (bool, error) {
_, _, err = f.c.Container(f.container)
return shouldRetry(err)
var rxHeaders swift.Headers
_, rxHeaders, err = f.c.Container(f.container)
return shouldRetryHeaders(rxHeaders, err)
})
}
if err == swift.ContainerNotFound {
@@ -816,8 +843,9 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
}
srcFs := srcObj.fs
err = f.pacer.Call(func() (bool, error) {
_, err = f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil)
return shouldRetry(err)
var rxHeaders swift.Headers
rxHeaders, err = f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil)
return shouldRetryHeaders(rxHeaders, err)
})
if err != nil {
return nil, err
@@ -927,7 +955,7 @@ func (o *Object) readMetaData() (err error) {
var h swift.Headers
err = o.fs.pacer.Call(func() (bool, error) {
info, h, err = o.fs.c.Object(o.fs.container, o.fs.root+o.remote)
return shouldRetry(err)
return shouldRetryHeaders(h, err)
})
if err != nil {
if err == swift.ObjectNotFound {
@@ -1002,8 +1030,9 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
headers := fs.OpenOptionHeaders(options)
_, isRanging := headers["Range"]
err = o.fs.pacer.Call(func() (bool, error) {
in, _, err = o.fs.c.ObjectOpen(o.fs.container, o.fs.root+o.remote, !isRanging, headers)
return shouldRetry(err)
var rxHeaders swift.Headers
in, rxHeaders, err = o.fs.c.ObjectOpen(o.fs.container, o.fs.root+o.remote, !isRanging, headers)
return shouldRetryHeaders(rxHeaders, err)
})
return
}
@@ -1074,8 +1103,9 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
// Create the segmentsContainer if it doesn't exist
var err error
err = o.fs.pacer.Call(func() (bool, error) {
_, _, err = o.fs.c.Container(o.fs.segmentsContainer)
return shouldRetry(err)
var rxHeaders swift.Headers
_, rxHeaders, err = o.fs.c.Container(o.fs.segmentsContainer)
return shouldRetryHeaders(rxHeaders, err)
})
if err == swift.ContainerNotFound {
headers := swift.Headers{}
@@ -1115,8 +1145,9 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
segmentPath := fmt.Sprintf("%s/%08d", segmentsPath, i)
fs.Debugf(o, "Uploading segment file %q into %q", segmentPath, o.fs.segmentsContainer)
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
_, err = o.fs.c.ObjectPut(o.fs.segmentsContainer, segmentPath, segmentReader, true, "", "", headers)
return shouldRetry(err)
var rxHeaders swift.Headers
rxHeaders, err = o.fs.c.ObjectPut(o.fs.segmentsContainer, segmentPath, segmentReader, true, "", "", headers)
return shouldRetryHeaders(rxHeaders, err)
})
if err != nil {
return "", err
@@ -1129,8 +1160,9 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
emptyReader := bytes.NewReader(nil)
manifestName := o.fs.root + o.remote
err = o.fs.pacer.Call(func() (bool, error) {
_, err = o.fs.c.ObjectPut(o.fs.container, manifestName, emptyReader, true, "", contentType, headers)
return shouldRetry(err)
var rxHeaders swift.Headers
rxHeaders, err = o.fs.c.ObjectPut(o.fs.container, manifestName, emptyReader, true, "", contentType, headers)
return shouldRetryHeaders(rxHeaders, err)
})
return uniquePrefix + "/", err
}
@@ -1174,7 +1206,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
var rxHeaders swift.Headers
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
rxHeaders, err = o.fs.c.ObjectPut(o.fs.container, o.fs.root+o.remote, in, true, "", contentType, headers)
return shouldRetry(err)
return shouldRetryHeaders(rxHeaders, err)
})
if err != nil {
return err

View File

@@ -1,6 +1,13 @@
package swift
import "testing"
import (
"testing"
"time"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/swift"
"github.com/stretchr/testify/assert"
)
func TestInternalUrlEncode(t *testing.T) {
for _, test := range []struct {
@@ -23,3 +30,37 @@ func TestInternalUrlEncode(t *testing.T) {
}
}
}
func TestInternalShouldRetryHeaders(t *testing.T) {
headers := swift.Headers{
"Content-Length": "64",
"Content-Type": "text/html; charset=UTF-8",
"Date": "Mon: 18 Mar 2019 12:11:23 GMT",
"Retry-After": "1",
}
err := &swift.Error{
StatusCode: 429,
Text: "Too Many Requests",
}
// Short sleep should just do the sleep
start := time.Now()
retry, gotErr := shouldRetryHeaders(headers, err)
dt := time.Since(start)
assert.True(t, retry)
assert.Equal(t, err, gotErr)
assert.True(t, dt > time.Second/2)
// Long sleep should return RetryError
headers["Retry-After"] = "3600"
start = time.Now()
retry, gotErr = shouldRetryHeaders(headers, err)
dt = time.Since(start)
assert.True(t, dt < time.Second)
assert.False(t, retry)
assert.Equal(t, true, fserrors.IsRetryAfterError(gotErr))
after := gotErr.(fserrors.RetryAfter).RetryAfter()
dt = after.Sub(start)
assert.True(t, dt >= time.Hour-time.Second && dt <= time.Hour+time.Second)
}

View File

@@ -644,10 +644,18 @@ func (f *Fs) _mkdir(dirPath string) error {
Path: dirPath,
NoResponse: true,
}
return f.pacer.Call(func() (bool, error) {
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(&opts)
return shouldRetry(resp, err)
})
if apiErr, ok := err.(*api.Error); ok {
// already exists
// owncloud returns 423/StatusLocked if the create is already in progress
if apiErr.StatusCode == http.StatusMethodNotAllowed || apiErr.StatusCode == http.StatusNotAcceptable || apiErr.StatusCode == http.StatusLocked {
return nil
}
}
return err
}
// mkdir makes the directory and parents using native paths
@@ -655,12 +663,7 @@ func (f *Fs) mkdir(dirPath string) error {
// defer log.Trace(dirPath, "")("")
err := f._mkdir(dirPath)
if apiErr, ok := err.(*api.Error); ok {
// already exists
// owncloud returns 423/StatusLocked if the create is already in progress
if apiErr.StatusCode == http.StatusMethodNotAllowed || apiErr.StatusCode == http.StatusNotAcceptable || apiErr.StatusCode == http.StatusLocked {
return nil
}
// parent does not exist
// parent does not exist so create it first then try again
if apiErr.StatusCode == http.StatusConflict {
err = f.mkParentDir(dirPath)
if err == nil {
@@ -913,11 +916,13 @@ func (f *Fs) About() (*fs.Usage, error) {
return nil, errors.Wrap(err, "about call failed")
}
usage := &fs.Usage{}
if q.Available >= 0 && q.Used >= 0 {
usage.Total = fs.NewUsageValue(q.Available + q.Used)
}
if q.Used >= 0 {
usage.Used = fs.NewUsageValue(q.Used)
if q.Available != 0 || q.Used != 0 {
if q.Available >= 0 && q.Used >= 0 {
usage.Total = fs.NewUsageValue(q.Available + q.Used)
}
if q.Used >= 0 {
usage.Used = fs.NewUsageValue(q.Used)
}
}
return usage, nil
}

View File

@@ -244,8 +244,10 @@ func getAssetFromReleasesPage(project string, matchName *regexp.Regexp) (assetUR
if a.Key == "href" {
if name := path.Base(a.Val); matchName.MatchString(name) && isOurOsArch(name) {
if u, err := rest.URLJoin(base, a.Val); err == nil {
assetName = name
assetURL = u.String()
if assetName == "" {
assetName = name
assetURL = u.String()
}
}
}
break

View File

@@ -230,22 +230,31 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
SigInfoHandler()
for try := 1; try <= *retries; try++ {
err = f()
if !Retry || (err == nil && !accounting.Stats.Errored()) {
fs.CountError(err)
if !Retry || !accounting.Stats.Errored() {
if try > 1 {
fs.Errorf(nil, "Attempt %d/%d succeeded", try, *retries)
}
break
}
if fserrors.IsFatalError(err) || accounting.Stats.HadFatalError() {
if accounting.Stats.HadFatalError() {
fs.Errorf(nil, "Fatal error received - not attempting retries")
break
}
if fserrors.IsNoRetryError(err) || (accounting.Stats.Errored() && !accounting.Stats.HadRetryError()) {
if accounting.Stats.Errored() && !accounting.Stats.HadRetryError() {
fs.Errorf(nil, "Can't retry this error - not attempting retries")
break
}
if err != nil {
fs.Errorf(nil, "Attempt %d/%d failed with %d errors and: %v", try, *retries, accounting.Stats.GetErrors(), err)
if retryAfter := accounting.Stats.RetryAfter(); !retryAfter.IsZero() {
d := retryAfter.Sub(time.Now())
if d > 0 {
fs.Logf(nil, "Received retry after error - sleeping until %s (%v)", retryAfter.Format(time.RFC3339Nano), d)
time.Sleep(d)
}
}
lastErr := accounting.Stats.GetLastError()
if lastErr != nil {
fs.Errorf(nil, "Attempt %d/%d failed with %d errors and: %v", try, *retries, accounting.Stats.GetErrors(), lastErr)
} else {
fs.Errorf(nil, "Attempt %d/%d failed with %d errors", try, *retries, accounting.Stats.GetErrors())
}

View File

@@ -45,7 +45,7 @@ __rclone_custom_func() {
else
__rclone_init_completion -n : || return
fi
if [[ $cur =~ ^[[:alnum:]_]*$ ]]; then
if [[ $cur != *:* ]]; then
local remote
while IFS= read -r remote; do
[[ $remote != $cur* ]] || COMPREPLY+=("$remote")
@@ -54,10 +54,10 @@ __rclone_custom_func() {
local paths=("$cur"*)
[[ ! -f ${paths[0]} ]] || COMPREPLY+=("${paths[@]}")
fi
elif [[ $cur =~ ^[[:alnum:]_]+: ]]; then
else
local path=${cur#*:}
if [[ $path == */* ]]; then
local prefix=${path%/*}
local prefix=$(eval printf '%s' "${path%/*}")
else
local prefix=
fi
@@ -66,6 +66,7 @@ __rclone_custom_func() {
local reply=${prefix:+$prefix/}$line
[[ $reply != $path* ]] || COMPREPLY+=("$reply")
done < <(rclone lsf "${cur%%:*}:$prefix" 2>/dev/null)
[[ ! ${COMPREPLY[@]} ]] || compopt -o filenames
fi
[[ ! ${COMPREPLY[@]} ]] || compopt -o nospace
fi

View File

@@ -164,6 +164,8 @@ func Lsf(fsrc fs.Fs, out io.Writer) error {
list.SetAbsolute(absolute)
var opt = operations.ListJSONOpt{
NoModTime: true,
DirsOnly: dirsOnly,
FilesOnly: filesOnly,
Recurse: recurse,
}
@@ -195,15 +197,6 @@ func Lsf(fsrc fs.Fs, out io.Writer) error {
}
return operations.ListJSON(fsrc, "", &opt, func(item *operations.ListJSONItem) error {
if item.IsDir {
if filesOnly {
return nil
}
} else {
if dirsOnly {
return nil
}
}
_, _ = fmt.Fprintln(out, list.Format(item))
return nil
})

View File

@@ -23,6 +23,8 @@ func init() {
commandDefintion.Flags().BoolVarP(&opt.NoModTime, "no-modtime", "", false, "Don't read the modification time (can speed things up).")
commandDefintion.Flags().BoolVarP(&opt.ShowEncrypted, "encrypted", "M", false, "Show the encrypted names.")
commandDefintion.Flags().BoolVarP(&opt.ShowOrigIDs, "original", "", false, "Show the ID of the underlying Object.")
commandDefintion.Flags().BoolVarP(&opt.FilesOnly, "files-only", "", false, "Show only files in the listing.")
commandDefintion.Flags().BoolVarP(&opt.DirsOnly, "dirs-only", "", false, "Show only directories in the listing.")
}
var commandDefintion = &cobra.Command{
@@ -55,6 +57,10 @@ If --no-modtime is specified then ModTime will be blank.
If --encrypted is not specified the Encrypted won't be emitted.
If --dirs-only is not specified files in addition to directories are returned
If --files-only is not specified directories in addition to the files will be returned.
The Path field will only show folders below the remote path being listed.
If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt"
will be "subfolder/file.txt", not "remote:path/subfolder/file.txt".

View File

@@ -0,0 +1,184 @@
package dlna
const connectionManagerServiceDescription = `<?xml version="1.0" encoding="UTF-8"?>
<scpd xmlns="urn:schemas-upnp-org:service-1-0">
<specVersion>
<major>1</major>
<minor>0</minor>
</specVersion>
<actionList>
<action>
<name>GetProtocolInfo</name>
<argumentList>
<argument>
<name>Source</name>
<direction>out</direction>
<relatedStateVariable>SourceProtocolInfo</relatedStateVariable>
</argument>
<argument>
<name>Sink</name>
<direction>out</direction>
<relatedStateVariable>SinkProtocolInfo</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>PrepareForConnection</name>
<argumentList>
<argument>
<name>RemoteProtocolInfo</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ProtocolInfo</relatedStateVariable>
</argument>
<argument>
<name>PeerConnectionManager</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionManager</relatedStateVariable>
</argument>
<argument>
<name>PeerConnectionID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
<argument>
<name>Direction</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Direction</relatedStateVariable>
</argument>
<argument>
<name>ConnectionID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
<argument>
<name>AVTransportID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_AVTransportID</relatedStateVariable>
</argument>
<argument>
<name>RcsID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_RcsID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>ConnectionComplete</name>
<argumentList>
<argument>
<name>ConnectionID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetCurrentConnectionIDs</name>
<argumentList>
<argument>
<name>ConnectionIDs</name>
<direction>out</direction>
<relatedStateVariable>CurrentConnectionIDs</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetCurrentConnectionInfo</name>
<argumentList>
<argument>
<name>ConnectionID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
<argument>
<name>RcsID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_RcsID</relatedStateVariable>
</argument>
<argument>
<name>AVTransportID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_AVTransportID</relatedStateVariable>
</argument>
<argument>
<name>ProtocolInfo</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ProtocolInfo</relatedStateVariable>
</argument>
<argument>
<name>PeerConnectionManager</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionManager</relatedStateVariable>
</argument>
<argument>
<name>PeerConnectionID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
<argument>
<name>Direction</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Direction</relatedStateVariable>
</argument>
<argument>
<name>Status</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionStatus</relatedStateVariable>
</argument>
</argumentList>
</action>
</actionList>
<serviceStateTable>
<stateVariable sendEvents="yes">
<name>SourceProtocolInfo</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>SinkProtocolInfo</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>CurrentConnectionIDs</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ConnectionStatus</name>
<dataType>string</dataType>
<allowedValueList>
<allowedValue>OK</allowedValue>
<allowedValue>ContentFormatMismatch</allowedValue>
<allowedValue>InsufficientBandwidth</allowedValue>
<allowedValue>UnreliableChannel</allowedValue>
<allowedValue>Unknown</allowedValue>
</allowedValueList>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ConnectionManager</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Direction</name>
<dataType>string</dataType>
<allowedValueList>
<allowedValue>Input</allowedValue>
<allowedValue>Output</allowedValue>
</allowedValueList>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ProtocolInfo</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ConnectionID</name>
<dataType>i4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_AVTransportID</name>
<dataType>i4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_RcsID</name>
<dataType>i4</dataType>
</stateVariable>
</serviceStateTable>
</scpd>`

View File

@@ -84,6 +84,21 @@ var services = []*service{
},
SCPD: contentDirectoryServiceDescription,
},
{
Service: upnp.Service{
ServiceType: "urn:schemas-upnp-org:service:ConnectionManager:1",
ServiceId: "urn:upnp-org:serviceId:ConnectionManager",
ControlURL: serviceControlURL,
},
SCPD: connectionManagerServiceDescription,
},
}
func init() {
for _, s := range services {
p := path.Join("/scpd", s.ServiceId)
s.SCPDURL = p
}
}
func devices() []string {
@@ -250,9 +265,6 @@ func (s *server) initMux(mux *http.ServeMux) {
// Install handlers to serve SCPD for each UPnP service.
for _, s := range services {
p := path.Join("/scpd", s.ServiceId)
s.SCPDURL = p
mux.HandleFunc(s.SCPDURL, func(serviceDesc string) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("content-type", `text/xml; charset="utf-8"`)

View File

@@ -1,5 +1,3 @@
// +build go1.8
package dlna
import (
@@ -59,6 +57,11 @@ func TestRootSCPD(t *testing.T) {
// Make sure that the SCPD contains a CDS service.
require.Contains(t, string(body),
"<serviceType>urn:schemas-upnp-org:service:ContentDirectory:1</serviceType>")
// Make sure that the SCPD contains a CM service.
require.Contains(t, string(body),
"<serviceType>urn:schemas-upnp-org:service:ConnectionManager:1</serviceType>")
// Ensure that the SCPD url is configured.
require.Regexp(t, "<SCPDURL>/.*</SCPDURL>", string(body))
}
// Make sure that it serves content from the remote.

View File

@@ -1,5 +1,3 @@
// +build go1.8
package http
import (

View File

@@ -1,21 +0,0 @@
// HTTP parts go1.8+
//+build go1.8
package httplib
import (
"net/http"
"time"
)
// Initialise the http.Server for post go1.8
func initServer(s *http.Server) {
s.ReadHeaderTimeout = 10 * time.Second // time to send the headers
s.IdleTimeout = 60 * time.Second // time to keep idle connections open
}
// closeServer closes the server in a non graceful way
func closeServer(s *http.Server) error {
return s.Close()
}

View File

@@ -1,18 +0,0 @@
// HTTP parts pre go1.8
//+build !go1.8
package httplib
import (
"net/http"
)
// Initialise the http.Server for pre go1.8
func initServer(s *http.Server) {
}
// closeServer closes the server in a non graceful way
func closeServer(s *http.Server) error {
return nil
}

View File

@@ -180,17 +180,17 @@ func NewServer(handler http.Handler, opt *Options) *Server {
// FIXME make a transport?
s.httpServer = &http.Server{
Addr: s.Opt.ListenAddr,
Handler: handler,
ReadTimeout: s.Opt.ServerReadTimeout,
WriteTimeout: s.Opt.ServerWriteTimeout,
MaxHeaderBytes: s.Opt.MaxHeaderBytes,
Addr: s.Opt.ListenAddr,
Handler: handler,
ReadTimeout: s.Opt.ServerReadTimeout,
WriteTimeout: s.Opt.ServerWriteTimeout,
MaxHeaderBytes: s.Opt.MaxHeaderBytes,
ReadHeaderTimeout: 10 * time.Second, // time to send the headers
IdleTimeout: 60 * time.Second, // time to keep idle connections open
TLSConfig: &tls.Config{
MinVersion: tls.VersionTLS10, // disable SSL v3.0 and earlier
},
}
// go version specific initialisation
initServer(s.httpServer)
if s.Opt.ClientCA != "" {
if !s.useSSL {
@@ -267,7 +267,7 @@ func (s *Server) Wait() {
// Close shuts the running server down
func (s *Server) Close() {
err := closeServer(s.httpServer)
err := s.httpServer.Close()
if err != nil {
log.Printf("Error on closing HTTP server: %v", err)
return
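For reference, a self-contained sketch of the consolidated construction: with go1.8 as the minimum supported version, the header and idle timeouts go straight into the `http.Server` literal and shutdown is a plain `Close()`, with no per-version shim. The timeout and header-size values here are illustrative placeholders, not necessarily rclone's defaults.

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"time"
)

// newServer builds the server in one literal, as the hunk above does.
func newServer(addr string, handler http.Handler) *http.Server {
	return &http.Server{
		Addr:              addr,
		Handler:           handler,
		ReadTimeout:       time.Hour,
		WriteTimeout:      time.Hour,
		MaxHeaderBytes:    4096,
		ReadHeaderTimeout: 10 * time.Second, // time to send the headers
		IdleTimeout:       60 * time.Second, // time to keep idle connections open
		TLSConfig: &tls.Config{
			MinVersion: tls.VersionTLS10, // disable SSL v3.0 and earlier
		},
	}
}

func main() {
	srv := newServer("127.0.0.1:8080", http.NotFoundHandler())
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Printf("serve: %v", err)
		}
	}()
	time.Sleep(100 * time.Millisecond)
	// Non-graceful shutdown, matching the direct Close() call in the hunk.
	if err := srv.Close(); err != nil {
		log.Printf("Error on closing HTTP server: %v", err)
	}
}
```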

View File

@@ -330,25 +330,12 @@ func (s *server) listObjects(w http.ResponseWriter, r *http.Request, remote stri
ls := listItems{}
// if remote supports ListR use that directly, otherwise use recursive Walk
var err error
if ListR := s.f.Features().ListR; ListR != nil {
err = ListR(remote, func(entries fs.DirEntries) error {
for _, entry := range entries {
ls.add(entry)
}
return nil
})
} else {
err = walk.Walk(s.f, remote, true, -1, func(path string, entries fs.DirEntries, err error) error {
if err == nil {
for _, entry := range entries {
ls.add(entry)
}
}
return err
})
}
err := walk.ListR(s.f, remote, true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, entry := range entries {
ls.add(entry)
}
return nil
})
if err != nil {
_, err = fserrors.Cause(err)
if err != fs.ErrorDirNotFound {
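A minimal sketch of calling the consolidated helper, with the argument meanings inferred from the call in the hunk above: `walk.ListR` uses the backend's ListR feature when available and falls back to a recursive walk otherwise, so the caller no longer branches on `Features().ListR` itself. The import paths and the exact semantics of the boolean are assumptions.

```go
package main

import (
	"fmt"
	"log"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/walk"
)

// listAll prints every object below remote, letting walk.ListR decide whether
// to use the backend's ListR feature or a recursive walk.
func listAll(f fs.Fs, remote string) error {
	// Arguments mirror the call above: includeAll=true, unlimited depth (-1),
	// objects only.
	return walk.ListR(f, remote, true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
		for _, entry := range entries {
			fmt.Println(entry.Remote())
		}
		return nil
	})
}

func main() {
	f, err := fs.NewFs("remote:path") // placeholder remote
	if err != nil {
		log.Fatal(err)
	}
	if err := listAll(f, ""); err != nil {
		log.Fatal(err)
	}
}
```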

View File

@@ -71,5 +71,4 @@ Links
* <i class="fa fa-home"></i> [Home page](https://rclone.org/)
* <i class="fa fa-github"></i> [GitHub project page for source and bug tracker](https://github.com/ncw/rclone)
* <i class="fa fa-comments"></i> [Rclone Forum](https://forum.rclone.org)
* <i class="fa fa-google-plus"></i> <a href="https://google.com/+RcloneOrg" rel="publisher">Google+ page</a>
* <i class="fa fa-cloud-download"></i>[Downloads](/downloads/)

View File

@@ -15,7 +15,7 @@ eg `remote:directory/subdirectory` or `/directory/subdirectory`.
During the initial setup with `rclone config` you will specify the target
remote. The target remote can either be a local path or another remote.
Subfolders can be used in target remote. Asume a alias remote named `backup`
Subfolders can be used in target remote. Assume an alias remote named `backup`
with the target `mydrive:private/backup`. Invoking `rclone mkdir backup:desktop`
is exactly the same as invoking `rclone mkdir mydrive:private/backup/desktop`.

View File

@@ -245,3 +245,8 @@ Contributors
* marcintustin <marcintustin@users.noreply.github.com>
* jaKa Močnik <jaka@koofr.net>
* Fionera <fionera@fionera.de>
* Dan Walters <dan@walters.io>
* Danil Semelenov <sgtpep@users.noreply.github.com>
* xopez <28950736+xopez@users.noreply.github.com>
* Ben Boeckel <mathstuf@gmail.com>
* Manu <manu@snapdragon.cc>

View File

@@ -270,7 +270,7 @@ start and finish the upload) and another 2 requests for each chunk:
#### Versions ####
Versions can be viewd with the `--b2-versions` flag. When it is set
Versions can be viewed with the `--b2-versions` flag. When it is set
rclone will show and act on older versions of files. For example
Listing without `--b2-versions`
@@ -409,5 +409,18 @@ Disable checksums for large (> upload cutoff) files
- Type: bool
- Default: false
#### --b2-download-url
Custom endpoint for downloads.
This is usually set to a Cloudflare CDN URL as Backblaze offers
free egress for data downloaded through the Cloudflare network.
Leave blank if you want to use the endpoint provided by Backblaze.
- Config: download_url
- Env Var: RCLONE_B2_DOWNLOAD_URL
- Type: string
- Default: ""
<!--- autogenerated options stop -->

View File

@@ -262,7 +262,7 @@ There is an issue with wrapping the remotes in this order:
During testing, I experienced a lot of bans with the remotes in this order.
I suspect it might be related to how crypt opens files on the cloud provider
which makes it think we're downloading the full file instead of small chunks.
Organizing the remotes in this order yelds better results:
Organizing the remotes in this order yields better results:
<span style="color:green">**cloud remote** -> **cache** -> **crypt**</span>
#### absolute remote paths ####

View File

@@ -1,11 +1,97 @@
---
title: "Documentation"
description: "Rclone Changelog"
date: "2019-02-09"
date: "2019-04-13"
---
# Changelog
## v1.47.0 - 2019-04-13
* New backends
* Backend for Koofr cloud storage service. (jaKa)
* New Features
* Resume downloads if the reader fails in copy (Nick Craig-Wood)
* this means rclone will restart transfers if the source has an error
* this is most useful for downloads or cloud to cloud copies
* Use `--fast-list` for listing operations where it won't use more memory (Nick Craig-Wood)
* this should speed up the following operations on remotes which support `ListR`
* `dedupe`, `serve restic`, `lsf`, `ls`, `lsl`, `lsjson`, `lsd`, `md5sum`, `sha1sum`, `hashsum`, `size`, `delete`, `cat`, `settier`
* use `--disable ListR` to get old behaviour if required
* Make `--files-from` traverse the destination unless `--no-traverse` is set (Nick Craig-Wood)
* this fixes `--files-from` with Google drive and excessive API use in general.
* Make server side copy account bytes and obey `--max-transfer` (Nick Craig-Wood)
* Add `--create-empty-src-dirs` flag and default to not creating empty dirs (ishuah)
* Add client side TLS/SSL flags `--ca-cert`/`--client-cert`/`--client-key` (Nick Craig-Wood)
* Implement `--suffix-keep-extension` for use with `--suffix` (Nick Craig-Wood)
* build:
* Switch to semver compliant version tags to be go modules compliant (Nick Craig-Wood)
* Update to use go1.12.x for the build (Nick Craig-Wood)
* serve dlna: Add connection manager service description to improve compatibility (Dan Walters)
* lsf: Add 'e' format to show encrypted names and 'o' for original IDs (Nick Craig-Wood)
* lsjson: Added `--files-only` and `--dirs-only` flags (calistri)
* rc: Implement operations/publiclink the equivalent of `rclone link` (Nick Craig-Wood)
* Bug Fixes
* accounting: Fix total ETA when `--stats-unit bits` is in effect (Nick Craig-Wood)
* Bash TAB completion
* Use private custom func to fix clash between rclone and kubectl (Nick Craig-Wood)
* Fix for remotes with underscores in their names (Six)
* Fix completion of remotes (Florian Gamböck)
* Fix autocompletion of remote paths with spaces (Danil Semelenov)
* serve dlna: Fix root XML service descriptor (Dan Walters)
* ncdu: Fix display corruption with Chinese characters (Nick Craig-Wood)
* Add SIGTERM to signals which run the exit handlers on unix (Nick Craig-Wood)
* rc: Reload filter when the options are set via the rc (Nick Craig-Wood)
* VFS / Mount
* Fix FreeBSD: Ignore Truncate if called with no readers and already the correct size (Nick Craig-Wood)
* Read directory and check for a file before mkdir (Nick Craig-Wood)
* Shorten the locking window for vfs/refresh (Nick Craig-Wood)
* Azure Blob
* Enable MD5 checksums when uploading files bigger than the "Cutoff" (Dr.Rx)
* Fix SAS URL support (Nick Craig-Wood)
* B2
* Allow manual configuration of backblaze downloadUrl (Vince)
* Ignore already_hidden error on remove (Nick Craig-Wood)
* Ignore malformed `src_last_modified_millis` (Nick Craig-Wood)
* Drive
* Add `--skip-checksum-gphotos` to ignore incorrect checksums on Google Photos (Nick Craig-Wood)
* Allow server side move/copy between different remotes. (Fionera)
* Add docs on team drives and `--fast-list` eventual consistency (Nestar47)
* Fix imports of text files (Nick Craig-Wood)
* Fix range requests on 0 length files (Nick Craig-Wood)
* Fix creation of duplicates with server side copy (Nick Craig-Wood)
* Dropbox
* Retry blank errors to fix long listings (Nick Craig-Wood)
* FTP
* Add `--ftp-concurrency` to limit maximum number of connections (Nick Craig-Wood)
* Google Cloud Storage
* Fall back to default application credentials (marcintustin)
* Allow bucket policy only buckets (Nick Craig-Wood)
* HTTP
* Add `--http-no-slash` for websites with directories with no slashes (Nick Craig-Wood)
* Remove duplicates from listings (Nick Craig-Wood)
* Fix socket leak on 404 errors (Nick Craig-Wood)
* Jottacloud
* Fix token refresh (Sebastian Bünger)
* Add device registration (Oliver Heyme)
* Onedrive
* Implement graceful cancel of multipart uploads if rclone is interrupted (Cnly)
* Always add trailing colon to path when addressing items (Cnly)
* Return errors instead of panic for invalid uploads (Fabian Möller)
* S3
* Add support for "Glacier Deep Archive" storage class (Manu)
* Update Dreamhost endpoint (Nick Craig-Wood)
* Note incompatibility with CEPH Jewel (Nick Craig-Wood)
* SFTP
* Allow custom ssh client config (Alexandru Bumbacea)
* Swift
* Obey Retry-After to enable OVH restore from cold storage (Nick Craig-Wood)
* Work around token expiry on CEPH (Nick Craig-Wood)
* WebDAV
* Allow IsCollection property to be integer or boolean (Nick Craig-Wood)
* Fix race when creating directories (Nick Craig-Wood)
* Fix About/df when reading the available/total returns 0 (Nick Craig-Wood)
## v1.46 - 2019-02-09
* New backends

View File

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone"
slug: rclone
url: /commands/rclone/
@@ -46,6 +46,7 @@ rclone [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -60,6 +61,7 @@ rclone [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -83,6 +85,8 @@ rclone [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -118,6 +122,7 @@ rclone [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -140,11 +145,13 @@ rclone [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -153,6 +160,7 @@ rclone [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
-h, --help help for rclone
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -173,6 +181,10 @@ rclone [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -278,6 +290,7 @@ rclone [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -310,7 +323,7 @@ rclone [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
@@ -373,4 +386,4 @@ rclone [flags]
* [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion.
* [rclone version](/commands/rclone_version/) - Show the version number.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019

View File

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone about"
slug: rclone_about
url: /commands/rclone_about/
@@ -89,6 +89,7 @@ rclone about remote: [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -103,6 +104,7 @@ rclone about remote: [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -126,6 +128,8 @@ rclone about remote: [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -161,6 +165,7 @@ rclone about remote: [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -183,11 +188,13 @@ rclone about remote: [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -195,6 +202,7 @@ rclone about remote: [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -215,6 +223,10 @@ rclone about remote: [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -320,6 +332,7 @@ rclone about remote: [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -352,7 +365,7 @@ rclone about remote: [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -368,4 +381,4 @@ rclone about remote: [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019

View File

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
@@ -48,6 +48,7 @@ rclone authorize [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -62,6 +63,7 @@ rclone authorize [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -85,6 +87,8 @@ rclone authorize [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -120,6 +124,7 @@ rclone authorize [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -142,11 +147,13 @@ rclone authorize [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -154,6 +161,7 @@ rclone authorize [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -174,6 +182,10 @@ rclone authorize [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -279,6 +291,7 @@ rclone authorize [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -311,7 +324,7 @@ rclone authorize [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -327,4 +340,4 @@ rclone authorize [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019

View File

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone cachestats"
slug: rclone_cachestats
url: /commands/rclone_cachestats/
@@ -47,6 +47,7 @@ rclone cachestats source: [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -61,6 +62,7 @@ rclone cachestats source: [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -84,6 +86,8 @@ rclone cachestats source: [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -119,6 +123,7 @@ rclone cachestats source: [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -141,11 +146,13 @@ rclone cachestats source: [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -153,6 +160,7 @@ rclone cachestats source: [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -173,6 +181,10 @@ rclone cachestats source: [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -278,6 +290,7 @@ rclone cachestats source: [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -310,7 +323,7 @@ rclone cachestats source: [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -326,4 +339,4 @@ rclone cachestats source: [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019

View File

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone cat"
slug: rclone_cat
url: /commands/rclone_cat/
@@ -69,6 +69,7 @@ rclone cat remote:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -83,6 +84,7 @@ rclone cat remote:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -106,6 +108,8 @@ rclone cat remote:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -141,6 +145,7 @@ rclone cat remote:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -163,11 +168,13 @@ rclone cat remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -175,6 +182,7 @@ rclone cat remote:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -195,6 +203,10 @@ rclone cat remote:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -300,6 +312,7 @@ rclone cat remote:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -332,7 +345,7 @@ rclone cat remote:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -348,4 +361,4 @@ rclone cat remote:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019

View File

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
@@ -63,6 +63,7 @@ rclone check source:path dest:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -77,6 +78,7 @@ rclone check source:path dest:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -100,6 +102,8 @@ rclone check source:path dest:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -135,6 +139,7 @@ rclone check source:path dest:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -157,11 +162,13 @@ rclone check source:path dest:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -169,6 +176,7 @@ rclone check source:path dest:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -189,6 +197,10 @@ rclone check source:path dest:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -294,6 +306,7 @@ rclone check source:path dest:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -326,7 +339,7 @@ rclone check source:path dest:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -342,4 +355,4 @@ rclone check source:path dest:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019

View File

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
@@ -48,6 +48,7 @@ rclone cleanup remote:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -62,6 +63,7 @@ rclone cleanup remote:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -85,6 +87,8 @@ rclone cleanup remote:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -120,6 +124,7 @@ rclone cleanup remote:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -142,11 +147,13 @@ rclone cleanup remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -154,6 +161,7 @@ rclone cleanup remote:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -174,6 +182,10 @@ rclone cleanup remote:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -279,6 +291,7 @@ rclone cleanup remote:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -311,7 +324,7 @@ rclone cleanup remote:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -327,4 +340,4 @@ rclone cleanup remote:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
@@ -48,6 +48,7 @@ rclone config [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -62,6 +63,7 @@ rclone config [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -85,6 +87,8 @@ rclone config [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -120,6 +124,7 @@ rclone config [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -142,11 +147,13 @@ rclone config [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -154,6 +161,7 @@ rclone config [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -174,6 +182,10 @@ rclone config [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -279,6 +291,7 @@ rclone config [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -311,7 +324,7 @@ rclone config [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -336,4 +349,4 @@ rclone config [flags]
* [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.
* [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone config create"
slug: rclone_config_create
url: /commands/rclone_config_create/
@@ -62,6 +62,7 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -76,6 +77,7 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -99,6 +101,8 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -134,6 +138,7 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -156,11 +161,13 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -168,6 +175,7 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -188,6 +196,10 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -293,6 +305,7 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -325,7 +338,7 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -341,4 +354,4 @@ rclone config create <name> <type> [<key> <value>]* [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019
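For reference, the non-interactive form shown in the hunk headers above (`rclone config create <name> <type> [<key> <value>]*`) can be used to script remote creation. A minimal sketch with an illustrative remote name and backend; the values are placeholders and not part of this diff:

    # create a Swift remote named "myremote" that authenticates from environment variables
    rclone config create myremote swift env_auth true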


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone config delete"
slug: rclone_config_delete
url: /commands/rclone_config_delete/
@@ -45,6 +45,7 @@ rclone config delete <name> [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -59,6 +60,7 @@ rclone config delete <name> [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -82,6 +84,8 @@ rclone config delete <name> [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -117,6 +121,7 @@ rclone config delete <name> [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -139,11 +144,13 @@ rclone config delete <name> [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -151,6 +158,7 @@ rclone config delete <name> [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -171,6 +179,10 @@ rclone config delete <name> [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -276,6 +288,7 @@ rclone config delete <name> [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -308,7 +321,7 @@ rclone config delete <name> [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -324,4 +337,4 @@ rclone config delete <name> [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone config dump"
slug: rclone_config_dump
url: /commands/rclone_config_dump/
@@ -45,6 +45,7 @@ rclone config dump [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -59,6 +60,7 @@ rclone config dump [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -82,6 +84,8 @@ rclone config dump [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -117,6 +121,7 @@ rclone config dump [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -139,11 +144,13 @@ rclone config dump [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -151,6 +158,7 @@ rclone config dump [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -171,6 +179,10 @@ rclone config dump [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -276,6 +288,7 @@ rclone config dump [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -308,7 +321,7 @@ rclone config dump [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -324,4 +337,4 @@ rclone config dump [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone config edit"
slug: rclone_config_edit
url: /commands/rclone_config_edit/
@@ -48,6 +48,7 @@ rclone config edit [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -62,6 +63,7 @@ rclone config edit [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -85,6 +87,8 @@ rclone config edit [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -120,6 +124,7 @@ rclone config edit [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -142,11 +147,13 @@ rclone config edit [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -154,6 +161,7 @@ rclone config edit [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -174,6 +182,10 @@ rclone config edit [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -279,6 +291,7 @@ rclone config edit [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -311,7 +324,7 @@ rclone config edit [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -327,4 +340,4 @@ rclone config edit [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone config file"
slug: rclone_config_file
url: /commands/rclone_config_file/
@@ -45,6 +45,7 @@ rclone config file [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -59,6 +60,7 @@ rclone config file [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -82,6 +84,8 @@ rclone config file [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -117,6 +121,7 @@ rclone config file [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -139,11 +144,13 @@ rclone config file [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -151,6 +158,7 @@ rclone config file [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -171,6 +179,10 @@ rclone config file [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -276,6 +288,7 @@ rclone config file [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -308,7 +321,7 @@ rclone config file [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -324,4 +337,4 @@ rclone config file [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone config password"
slug: rclone_config_password
url: /commands/rclone_config_password/
@@ -52,6 +52,7 @@ rclone config password <name> [<key> <value>]+ [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -66,6 +67,7 @@ rclone config password <name> [<key> <value>]+ [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -89,6 +91,8 @@ rclone config password <name> [<key> <value>]+ [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -124,6 +128,7 @@ rclone config password <name> [<key> <value>]+ [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -146,11 +151,13 @@ rclone config password <name> [<key> <value>]+ [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -158,6 +165,7 @@ rclone config password <name> [<key> <value>]+ [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -178,6 +186,10 @@ rclone config password <name> [<key> <value>]+ [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -283,6 +295,7 @@ rclone config password <name> [<key> <value>]+ [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -315,7 +328,7 @@ rclone config password <name> [<key> <value>]+ [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -331,4 +344,4 @@ rclone config password <name> [<key> <value>]+ [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019
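Similarly, the `rclone config password <name> [<key> <value>]+` form shown above sets password-type fields from the command line. A minimal sketch with placeholder remote, field, and value:

    # set the field "fieldname" of the remote "myremote" to the given password
    rclone config password myremote fieldname mypassword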


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone config providers"
slug: rclone_config_providers
url: /commands/rclone_config_providers/
@@ -45,6 +45,7 @@ rclone config providers [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -59,6 +60,7 @@ rclone config providers [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -82,6 +84,8 @@ rclone config providers [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -117,6 +121,7 @@ rclone config providers [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -139,11 +144,13 @@ rclone config providers [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -151,6 +158,7 @@ rclone config providers [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -171,6 +179,10 @@ rclone config providers [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -276,6 +288,7 @@ rclone config providers [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -308,7 +321,7 @@ rclone config providers [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -324,4 +337,4 @@ rclone config providers [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone config show"
slug: rclone_config_show
url: /commands/rclone_config_show/
@@ -45,6 +45,7 @@ rclone config show [<remote>] [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -59,6 +60,7 @@ rclone config show [<remote>] [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -82,6 +84,8 @@ rclone config show [<remote>] [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -117,6 +121,7 @@ rclone config show [<remote>] [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -139,11 +144,13 @@ rclone config show [<remote>] [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -151,6 +158,7 @@ rclone config show [<remote>] [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -171,6 +179,10 @@ rclone config show [<remote>] [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -276,6 +288,7 @@ rclone config show [<remote>] [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -308,7 +321,7 @@ rclone config show [<remote>] [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -324,4 +337,4 @@ rclone config show [<remote>] [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019
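
Among the inherited flags added in this release are three new TLS options (`--ca-cert`, `--client-cert` and `--client-key`). A minimal sketch of how they might be combined on any command, assuming a WebDAV remote named `mywebdav:` and PEM files at the paths shown; the remote name and the paths are placeholders, not taken from the diff:

```
# Illustrative only: talk to a server that requires mutual TLS.
# "mywebdav:" and the certificate/key paths are assumed for this sketch.
rclone lsd mywebdav: \
  --ca-cert /etc/ssl/internal-ca.pem \
  --client-cert /etc/ssl/rclone-client.pem \
  --client-key /etc/ssl/rclone-client.key
```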


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone config update"
slug: rclone_config_update
url: /commands/rclone_config_update/
@@ -57,6 +57,7 @@ rclone config update <name> [<key> <value>]+ [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -71,6 +72,7 @@ rclone config update <name> [<key> <value>]+ [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -94,6 +96,8 @@ rclone config update <name> [<key> <value>]+ [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -129,6 +133,7 @@ rclone config update <name> [<key> <value>]+ [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -151,11 +156,13 @@ rclone config update <name> [<key> <value>]+ [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -163,6 +170,7 @@ rclone config update <name> [<key> <value>]+ [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -183,6 +191,10 @@ rclone config update <name> [<key> <value>]+ [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -288,6 +300,7 @@ rclone config update <name> [<key> <value>]+ [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -320,7 +333,7 @@ rclone config update <name> [<key> <value>]+ [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -336,4 +349,4 @@ rclone config update <name> [<key> <value>]+ [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019
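
The newly documented `--koofr-*` options above can also be stored in the config with this command. A hedged sketch, assuming a remote named `koofr` already exists and that the backend key names mirror the flag names without the `--koofr-` prefix (`user`, `mountid`); all values shown are placeholders:

```
# Illustrative only: update two settings on an existing Koofr remote.
# The remote name "koofr", the user name and the mount ID are assumptions for this sketch.
rclone config update koofr user alice@example.com mountid 1234abcd
```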


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
@@ -68,7 +68,8 @@ rclone copy source:path dest:path [flags]
### Options
```
-h, --help help for copy
--create-empty-src-dirs Create empty source dirs on destination after copy
-h, --help help for copy
```
### Options inherited from parent commands
@@ -94,6 +95,7 @@ rclone copy source:path dest:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -108,6 +110,7 @@ rclone copy source:path dest:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -131,6 +134,8 @@ rclone copy source:path dest:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -166,6 +171,7 @@ rclone copy source:path dest:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -188,11 +194,13 @@ rclone copy source:path dest:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -200,6 +208,7 @@ rclone copy source:path dest:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -220,6 +229,10 @@ rclone copy source:path dest:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -325,6 +338,7 @@ rclone copy source:path dest:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -357,7 +371,7 @@ rclone copy source:path dest:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -373,4 +387,4 @@ rclone copy source:path dest:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019
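
The only change to the command's own options is the new `--create-empty-src-dirs` flag shown above. A minimal sketch of its use, keeping the placeholder remotes from the usage line:

```
# Copy source to destination, also recreating directories that are empty on the source.
# source:path and dest:path are placeholders; substitute real remotes or local paths.
rclone copy source:path dest:path --create-empty-src-dirs
```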


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone copyto"
slug: rclone_copyto
url: /commands/rclone_copyto/
@@ -73,6 +73,7 @@ rclone copyto source:path dest:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -87,6 +88,7 @@ rclone copyto source:path dest:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -110,6 +112,8 @@ rclone copyto source:path dest:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -145,6 +149,7 @@ rclone copyto source:path dest:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -167,11 +172,13 @@ rclone copyto source:path dest:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -179,6 +186,7 @@ rclone copyto source:path dest:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -199,6 +207,10 @@ rclone copyto source:path dest:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -304,6 +316,7 @@ rclone copyto source:path dest:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -336,7 +349,7 @@ rclone copyto source:path dest:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -352,4 +365,4 @@ rclone copyto source:path dest:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019
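
Among the inherited flags above, `--suffix-keep-extension` is new and changes how `--suffix` names the files moved aside by `--backup-dir`. A hedged sketch; the remote name, paths and suffix value are all placeholders:

```
# Overwritten files are moved to remote:old; with --suffix-keep-extension a file such as
# report.pdf is kept there as report.bak.pdf rather than report.pdf.bak.
# "remote:" and every path here are assumptions for this sketch.
rclone copyto /local/report.pdf remote:docs/report.pdf \
  --backup-dir remote:old --suffix .bak --suffix-keep-extension
```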


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone copyurl"
slug: rclone_copyurl
url: /commands/rclone_copyurl/
@@ -48,6 +48,7 @@ rclone copyurl https://example.com dest:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -62,6 +63,7 @@ rclone copyurl https://example.com dest:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -85,6 +87,8 @@ rclone copyurl https://example.com dest:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -120,6 +124,7 @@ rclone copyurl https://example.com dest:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -142,11 +147,13 @@ rclone copyurl https://example.com dest:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -154,6 +161,7 @@ rclone copyurl https://example.com dest:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -174,6 +182,10 @@ rclone copyurl https://example.com dest:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -279,6 +291,7 @@ rclone copyurl https://example.com dest:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -311,7 +324,7 @@ rclone copyurl https://example.com dest:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -327,4 +340,4 @@ rclone copyurl https://example.com dest:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019
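
`--b2-download-url` also appears in the inherited flag list above. A hedged sketch of what it might look like in practice, assuming a B2 remote named `b2:`, a bucket, and a custom download endpoint (for example a CDN placed in front of B2); every name and URL here is a placeholder:

```
# Illustrative only: download from a B2 bucket through a custom download endpoint.
# The bucket, local path and URL are assumptions for this sketch.
rclone copy b2:my-bucket/backups /restore/backups \
  --b2-download-url https://files.example.com
```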


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone cryptcheck"
slug: rclone_cryptcheck
url: /commands/rclone_cryptcheck/
@@ -73,6 +73,7 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -87,6 +88,7 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -110,6 +112,8 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -145,6 +149,7 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -167,11 +172,13 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -179,6 +186,7 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -199,6 +207,10 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -304,6 +316,7 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -336,7 +349,7 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -352,4 +365,4 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone cryptdecode"
slug: rclone_cryptdecode
url: /commands/rclone_cryptdecode/
@@ -57,6 +57,7 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -71,6 +72,7 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -94,6 +96,8 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -129,6 +133,7 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -151,11 +156,13 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -163,6 +170,7 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -183,6 +191,10 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -288,6 +300,7 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -320,7 +333,7 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -336,4 +349,4 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone dbhashsum"
slug: rclone_dbhashsum
url: /commands/rclone_dbhashsum/
@@ -50,6 +50,7 @@ rclone dbhashsum remote:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -64,6 +65,7 @@ rclone dbhashsum remote:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -87,6 +89,8 @@ rclone dbhashsum remote:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -122,6 +126,7 @@ rclone dbhashsum remote:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -144,11 +149,13 @@ rclone dbhashsum remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -156,6 +163,7 @@ rclone dbhashsum remote:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -176,6 +184,10 @@ rclone dbhashsum remote:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -281,6 +293,7 @@ rclone dbhashsum remote:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -313,7 +326,7 @@ rclone dbhashsum remote:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -329,4 +342,4 @@ rclone dbhashsum remote:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone dedupe"
slug: rclone_dedupe
url: /commands/rclone_dedupe/
@@ -126,6 +126,7 @@ rclone dedupe [mode] remote:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -140,6 +141,7 @@ rclone dedupe [mode] remote:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -163,6 +165,8 @@ rclone dedupe [mode] remote:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -198,6 +202,7 @@ rclone dedupe [mode] remote:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -220,11 +225,13 @@ rclone dedupe [mode] remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -232,6 +239,7 @@ rclone dedupe [mode] remote:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -252,6 +260,10 @@ rclone dedupe [mode] remote:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -357,6 +369,7 @@ rclone dedupe [mode] remote:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -389,7 +402,7 @@ rclone dedupe [mode] remote:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -405,4 +418,4 @@ rclone dedupe [mode] remote:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone delete"
slug: rclone_delete
url: /commands/rclone_delete/
@@ -66,6 +66,7 @@ rclone delete remote:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -80,6 +81,7 @@ rclone delete remote:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -103,6 +105,8 @@ rclone delete remote:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -138,6 +142,7 @@ rclone delete remote:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -160,11 +165,13 @@ rclone delete remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -172,6 +179,7 @@ rclone delete remote:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -192,6 +200,10 @@ rclone delete remote:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -297,6 +309,7 @@ rclone delete remote:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -329,7 +342,7 @@ rclone delete remote:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -345,4 +358,4 @@ rclone delete remote:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone deletefile"
slug: rclone_deletefile
url: /commands/rclone_deletefile/
@@ -49,6 +49,7 @@ rclone deletefile remote:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -63,6 +64,7 @@ rclone deletefile remote:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -86,6 +88,8 @@ rclone deletefile remote:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -121,6 +125,7 @@ rclone deletefile remote:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -143,11 +148,13 @@ rclone deletefile remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -155,6 +162,7 @@ rclone deletefile remote:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -175,6 +183,10 @@ rclone deletefile remote:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -280,6 +292,7 @@ rclone deletefile remote:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -312,7 +325,7 @@ rclone deletefile remote:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -328,4 +341,4 @@ rclone deletefile remote:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone genautocomplete"
slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/
@@ -44,6 +44,7 @@ Run with --help to list the supported shells.
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -58,6 +59,7 @@ Run with --help to list the supported shells.
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -81,6 +83,8 @@ Run with --help to list the supported shells.
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -116,6 +120,7 @@ Run with --help to list the supported shells.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -138,11 +143,13 @@ Run with --help to list the supported shells.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -150,6 +157,7 @@ Run with --help to list the supported shells.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -170,6 +178,10 @@ Run with --help to list the supported shells.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -275,6 +287,7 @@ Run with --help to list the supported shells.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -307,7 +320,7 @@ Run with --help to list the supported shells.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -325,4 +338,4 @@ Run with --help to list the supported shells.
* [rclone genautocomplete bash](/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone.
* [rclone genautocomplete zsh](/commands/rclone_genautocomplete_zsh/) - Output zsh completion script for rclone.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone genautocomplete bash"
slug: rclone_genautocomplete_bash
url: /commands/rclone_genautocomplete_bash/
@@ -60,6 +60,7 @@ rclone genautocomplete bash [output_file] [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -74,6 +75,7 @@ rclone genautocomplete bash [output_file] [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -97,6 +99,8 @@ rclone genautocomplete bash [output_file] [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -132,6 +136,7 @@ rclone genautocomplete bash [output_file] [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -154,11 +159,13 @@ rclone genautocomplete bash [output_file] [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -166,6 +173,7 @@ rclone genautocomplete bash [output_file] [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -186,6 +194,10 @@ rclone genautocomplete bash [output_file] [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -291,6 +303,7 @@ rclone genautocomplete bash [output_file] [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -323,7 +336,7 @@ rclone genautocomplete bash [output_file] [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -339,4 +352,4 @@ rclone genautocomplete bash [output_file] [flags]
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone genautocomplete zsh"
slug: rclone_genautocomplete_zsh
url: /commands/rclone_genautocomplete_zsh/
@@ -60,6 +60,7 @@ rclone genautocomplete zsh [output_file] [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -74,6 +75,7 @@ rclone genautocomplete zsh [output_file] [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -97,6 +99,8 @@ rclone genautocomplete zsh [output_file] [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -132,6 +136,7 @@ rclone genautocomplete zsh [output_file] [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -154,11 +159,13 @@ rclone genautocomplete zsh [output_file] [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -166,6 +173,7 @@ rclone genautocomplete zsh [output_file] [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -186,6 +194,10 @@ rclone genautocomplete zsh [output_file] [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -291,6 +303,7 @@ rclone genautocomplete zsh [output_file] [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -323,7 +336,7 @@ rclone genautocomplete zsh [output_file] [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -339,4 +352,4 @@ rclone genautocomplete zsh [output_file] [flags]
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone gendocs"
slug: rclone_gendocs
url: /commands/rclone_gendocs/
@@ -48,6 +48,7 @@ rclone gendocs output_directory [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -62,6 +63,7 @@ rclone gendocs output_directory [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -85,6 +87,8 @@ rclone gendocs output_directory [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -120,6 +124,7 @@ rclone gendocs output_directory [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -142,11 +147,13 @@ rclone gendocs output_directory [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -154,6 +161,7 @@ rclone gendocs output_directory [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -174,6 +182,10 @@ rclone gendocs output_directory [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -279,6 +291,7 @@ rclone gendocs output_directory [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -311,7 +324,7 @@ rclone gendocs output_directory [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -327,4 +340,4 @@ rclone gendocs output_directory [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone hashsum"
slug: rclone_hashsum
url: /commands/rclone_hashsum/
@@ -62,6 +62,7 @@ rclone hashsum <hash> remote:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -76,6 +77,7 @@ rclone hashsum <hash> remote:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -99,6 +101,8 @@ rclone hashsum <hash> remote:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -134,6 +138,7 @@ rclone hashsum <hash> remote:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -156,11 +161,13 @@ rclone hashsum <hash> remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -168,6 +175,7 @@ rclone hashsum <hash> remote:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -188,6 +196,10 @@ rclone hashsum <hash> remote:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -293,6 +305,7 @@ rclone hashsum <hash> remote:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -325,7 +338,7 @@ rclone hashsum <hash> remote:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -341,4 +354,4 @@ rclone hashsum <hash> remote:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone link"
slug: rclone_link
url: /commands/rclone_link/
@@ -55,6 +55,7 @@ rclone link remote:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -69,6 +70,7 @@ rclone link remote:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -92,6 +94,8 @@ rclone link remote:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -127,6 +131,7 @@ rclone link remote:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -149,11 +154,13 @@ rclone link remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -161,6 +168,7 @@ rclone link remote:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -181,6 +189,10 @@ rclone link remote:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -286,6 +298,7 @@ rclone link remote:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -318,7 +331,7 @@ rclone link remote:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -334,4 +347,4 @@ rclone link remote:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone listremotes"
slug: rclone_listremotes
url: /commands/rclone_listremotes/
@@ -50,6 +50,7 @@ rclone listremotes [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -64,6 +65,7 @@ rclone listremotes [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -87,6 +89,8 @@ rclone listremotes [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -122,6 +126,7 @@ rclone listremotes [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -144,11 +149,13 @@ rclone listremotes [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -156,6 +163,7 @@ rclone listremotes [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -176,6 +184,10 @@ rclone listremotes [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -281,6 +293,7 @@ rclone listremotes [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -313,7 +326,7 @@ rclone listremotes [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -329,4 +342,4 @@ rclone listremotes [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone ls"
slug: rclone_ls
url: /commands/rclone_ls/
@@ -79,6 +79,7 @@ rclone ls remote:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -93,6 +94,7 @@ rclone ls remote:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -116,6 +118,8 @@ rclone ls remote:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -151,6 +155,7 @@ rclone ls remote:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -173,11 +178,13 @@ rclone ls remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -185,6 +192,7 @@ rclone ls remote:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -205,6 +213,10 @@ rclone ls remote:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -310,6 +322,7 @@ rclone ls remote:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -342,7 +355,7 @@ rclone ls remote:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -358,4 +371,4 @@ rclone ls remote:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone lsd"
slug: rclone_lsd
url: /commands/rclone_lsd/
@@ -90,6 +90,7 @@ rclone lsd remote:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -104,6 +105,7 @@ rclone lsd remote:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -127,6 +129,8 @@ rclone lsd remote:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -162,6 +166,7 @@ rclone lsd remote:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -184,11 +189,13 @@ rclone lsd remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -196,6 +203,7 @@ rclone lsd remote:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -216,6 +224,10 @@ rclone lsd remote:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -321,6 +333,7 @@ rclone lsd remote:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -353,7 +366,7 @@ rclone lsd remote:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -369,4 +382,4 @@ rclone lsd remote:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone lsf"
slug: rclone_lsf
url: /commands/rclone_lsf/
@@ -33,8 +33,10 @@ output:
s - size
t - modification time
h - hash
i - ID of object if known
i - ID of object
o - Original ID of underlying object
m - MimeType of object if known
e - encrypted name
So if you wanted the path, size and modification time, you would use
--format "pst", or maybe --format "tsp" to put the path last.
@@ -168,6 +170,7 @@ rclone lsf remote:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -182,6 +185,7 @@ rclone lsf remote:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -205,6 +209,8 @@ rclone lsf remote:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -240,6 +246,7 @@ rclone lsf remote:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -262,11 +269,13 @@ rclone lsf remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -274,6 +283,7 @@ rclone lsf remote:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -294,6 +304,10 @@ rclone lsf remote:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -399,6 +413,7 @@ rclone lsf remote:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -431,7 +446,7 @@ rclone lsf remote:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -447,4 +462,4 @@ rclone lsf remote:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone lsjson"
slug: rclone_lsjson
url: /commands/rclone_lsjson/
@@ -37,6 +37,10 @@ If --no-modtime is specified then ModTime will be blank.
If --encrypted is not specified, the Encrypted field won't be emitted.
If --dirs-only is not specified, files are returned in addition to directories.
If --files-only is not specified, directories are returned in addition to files.
The Path field will only show folders below the remote path being listed.
If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt"
will be "subfolder/file.txt", not "remote:path/subfolder/file.txt".
@@ -83,7 +87,9 @@ rclone lsjson remote:path [flags]
### Options
```
--dirs-only Show only directories in the listing.
-M, --encrypted Show the encrypted names.
--files-only Show only files in the listing.
--hash Include hashes in the output (may take longer).
-h, --help help for lsjson
--no-modtime Don't read the modification time (can speed things up).
@@ -114,6 +120,7 @@ rclone lsjson remote:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -128,6 +135,7 @@ rclone lsjson remote:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -151,6 +159,8 @@ rclone lsjson remote:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -186,6 +196,7 @@ rclone lsjson remote:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -208,11 +219,13 @@ rclone lsjson remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -220,6 +233,7 @@ rclone lsjson remote:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -240,6 +254,10 @@ rclone lsjson remote:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -345,6 +363,7 @@ rclone lsjson remote:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -377,7 +396,7 @@ rclone lsjson remote:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -393,4 +412,4 @@ rclone lsjson remote:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone lsl"
slug: rclone_lsl
url: /commands/rclone_lsl/
@@ -79,6 +79,7 @@ rclone lsl remote:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -93,6 +94,7 @@ rclone lsl remote:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -116,6 +118,8 @@ rclone lsl remote:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -151,6 +155,7 @@ rclone lsl remote:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -173,11 +178,13 @@ rclone lsl remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -185,6 +192,7 @@ rclone lsl remote:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -205,6 +213,10 @@ rclone lsl remote:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -310,6 +322,7 @@ rclone lsl remote:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -342,7 +355,7 @@ rclone lsl remote:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -358,4 +371,4 @@ rclone lsl remote:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019

Some files were not shown because too many files have changed in this diff.