mirror of https://github.com/rclone/rclone.git synced 2026-01-23 21:03:24 +00:00

Compare commits


91 Commits

Author SHA1 Message Date
Nick Craig-Wood
8fd2577694 mount,cmount: run Release asynchronously
Before this change the mount and cmount would run Release
synchronously.  This would mean that it would wait for files to be
closed (eg uploaded) before returning to the kernel.

However, Release is already run asynchronously from userspace, so
this commit makes the work of Release asynchronous too.

This should fix libfuse blocking when Release is active and it is
asked to do something else with a file.

Forum: https://forum.rclone.org/t/vfs-cache-mode-writes-upload-expected-behaviour/8014
2019-05-11 10:21:43 +01:00
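
For illustration only (this is not the actual mount code), the shape of
the change is to return from Release at once and let the close, which
may trigger the upload, finish in a goroutine:

    package mountsketch

    import (
        "io"
        "log"
    )

    // file stands in for a VFS file handle; the real types live in the
    // mount and cmount packages.
    type file struct {
        handle io.Closer
    }

    // Release returns to the kernel immediately; the close (which may
    // upload the file) completes asynchronously.
    func (f *file) Release() error {
        go func() {
            if err := f.handle.Close(); err != nil {
                log.Printf("Release: close failed: %v", err)
            }
        }()
        return nil
    }
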
calistri
f865280afa Adds a public IP flag for ftp. Closes #3158
Fixed variable names
2019-05-09 22:52:21 +01:00
Nick Craig-Wood
8beab1aaf2 build: more pre go1.8 workarounds removed 2019-05-08 15:14:51 +01:00
Fionera
b9e16b36e5 Fix Multipart upload check
In the Documentation it states:
// If (opts.MultipartParams or opts.MultipartContentName) and
// opts.Body are set then CallJSON will do a multipart upload with a
// file attached.
2019-05-02 16:22:17 +01:00
Nick Craig-Wood
b68c3ce74d s3: support S3 Accelerated endpoints with --s3-use-accelerate-endpoint
Fixes #3123
2019-05-02 14:00:00 +01:00
Fabian Möller
d04b0b856a fserrors: use errors.Walk for the wrapped error types 2019-05-01 16:56:08 +01:00
Gary Kim
d0ff07bdb0 mega: add cleanup support
Fixes #3138
2019-05-01 16:32:34 +01:00
Nick Craig-Wood
577fda059d rc: fix race in tests 2019-05-01 16:09:50 +01:00
Nick Craig-Wood
49d2ab512d test_all: run restic integration tests against local backend 2019-05-01 16:09:50 +01:00
Nick Craig-Wood
9df322e889 tests: make test servers choose a random port to make tests more reliable
Tests have been randomly failing with messages like

    listen tcp 127.0.0.1:51778: bind: address already in use

Rework all the test servers so they choose a random free port on
startup and use that for the tests, avoiding these clashes.
2019-05-01 16:09:50 +01:00
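
The standard way to do this in Go, sketched below assuming the test
servers bind TCP, is to listen on port 0 so the kernel assigns a free
ephemeral port:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Port 0 asks the OS for any free port, so parallel test runs
        // never collide on a hard-coded one.
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            panic(err)
        }
        defer func() { _ = ln.Close() }()
        fmt.Println("test server listening on", ln.Addr())
    }
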
Nick Craig-Wood
8f89b03d7b vendor: update github.com/t3rm1n4l/go-mega and dependencies
This is to fix a crash reported in #3140
2019-05-01 16:09:50 +01:00
Fabian Möller
48c09608ea fix spelling 2019-04-30 14:12:18 +02:00
Nick Craig-Wood
7963320a29 lib/errors: don't panic on uncomparable errors #3123
Same fix as c5775cf73d
2019-04-26 09:56:20 +01:00
Nick Craig-Wood
81f8a5e0d9 Use golangci-lint to check everything
Now that this issue is fixed: https://github.com/golangci/golangci-lint/issues/204

We can use golangci-lint to check the printfuncs too.
2019-04-25 15:58:49 +01:00
Nick Craig-Wood
2763598bd1 Add Gary Kim to contributors 2019-04-25 10:51:47 +01:00
Gary Kim
49d7b0d278 sftp: add About support - fixes #3107
This adds support for using About with SFTP remotes. This works by calling the df command remotely.
2019-04-25 10:51:15 +01:00
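
As a sketch (rclone's actual parser differs), the usage numbers can be
read from the second line of "df -k" output; the helper name below is
illustrative:

    package sftpsketch

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // parseDfOutput reads "df -k" output of the form
    //   Filesystem 1K-blocks   Used Available Use% Mounted on
    //   /dev/sda1   61252420  39076  58105924   1% /
    // and returns total, used and available space in bytes.
    func parseDfOutput(out string) (total, used, avail int64, err error) {
        lines := strings.Split(strings.TrimSpace(out), "\n")
        if len(lines) < 2 {
            return 0, 0, 0, fmt.Errorf("unexpected df output: %q", out)
        }
        fields := strings.Fields(lines[1])
        if len(fields) < 4 {
            return 0, 0, 0, fmt.Errorf("unexpected df line: %q", lines[1])
        }
        if total, err = strconv.ParseInt(fields[1], 10, 64); err != nil {
            return 0, 0, 0, err
        }
        if used, err = strconv.ParseInt(fields[2], 10, 64); err != nil {
            return 0, 0, 0, err
        }
        if avail, err = strconv.ParseInt(fields[3], 10, 64); err != nil {
            return 0, 0, 0, err
        }
        return total * 1024, used * 1024, avail * 1024, nil
    }
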
Animosity022
3d475dc0ee mount: Fix poll interval documentation 2019-04-24 18:21:04 +01:00
Fionera
2657d70567 drive: fix move and copy from TeamDrive to GDrive 2019-04-24 18:11:34 +01:00
Nick Craig-Wood
45df37f55f Add Kyle E. Mitchell to contributors 2019-04-24 18:09:40 +01:00
Kyle E. Mitchell
81007c10cb doc: Fix "conververt" typo 2019-04-24 18:09:24 +01:00
Nick Craig-Wood
aba15f11d8 cache: note unsupported optional methods 2019-04-16 13:34:06 +01:00
Nick Craig-Wood
a57756a05c lsjson, lsf: support showing the Tier of the object 2019-04-16 13:34:06 +01:00
Nick Craig-Wood
eeab7a0a43 crypt: Implement Optional methods SetTier, GetTier - fixes #2895
This implements optional methods on Object
- ID
- SetTier
- GetTier

And declares that it will not implement MimeType for the FsCheck test.
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
ac8d1db8d3 crypt: support PublicLink (rclone link) of underlying backend - fixes #3042 2019-04-16 13:33:10 +01:00
Nick Craig-Wood
cd0d43fffb fs: add missing PublicLink to mask
This enables wrapping file systems to declare that they don't support
PublicLink if the underlying fs doesn't.
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
cdf12b1fc8 crypt: Fix wrapping of ChangeNotify to decrypt directories properly
Also change the way it is added to make the FsCheckWrap test pass
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
7981e450a4 crypt: make rclone dedupe work through crypt
Implement these optional methods:

- WrapFs
- SetWrapper
- MergeDirs
- DirCacheFlush

Fixes #2233 Fixes #2689
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
e7fc3dcd31 fs: copy the ID too when we copy a Directory object
This means that crypt which wraps directory objects will retain the ID
of the underlying object.
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
2386c5adc1 hubic: fix tests for optional methods 2019-04-16 13:33:10 +01:00
Nick Craig-Wood
2f21aa86b4 fstest: add tests for coverage of optional methods for wrapping Fs 2019-04-16 13:33:10 +01:00
Nick Craig-Wood
16d8014cbb build: drop support for go1.8 2019-04-15 21:49:58 +01:00
Nick Craig-Wood
613a9bb86b vendor: update all dependencies 2019-04-15 20:12:56 +01:00
calisro
8190a81201 lsjson: added EncryptedPath to output - fixes #3094 2019-04-15 18:12:09 +01:00
Nick Craig-Wood
f5795db6d2 build: fix fetch_binaries not to fetch test binaries 2019-04-13 13:08:53 +01:00
Nick Craig-Wood
e2a2eb349f Start v1.47.0-DEV development 2019-04-13 13:08:37 +01:00
Nick Craig-Wood
a0d4fdb2fa Version v1.47.0 2019-04-13 11:01:58 +01:00
Nick Craig-Wood
a28239f005 filter: Make --files-from traverse as before unless --no-traverse is set
In c5ac96e9e7 we made --files-from only read the objects specified and
don't scan directories.

This caused problems with Google drive (very very slow) and B2
(excessive API consumption) so it was decided to make the old
behaviour (traversing the directories) the default with --files-from
and use the existing --no-traverse flag (which has exactly the right
semantics) to enable the new non scanning behaviour.

See: https://forum.rclone.org/t/using-files-from-with-drive-hammers-the-api/8726

Fixes #3102 Fixes #3095
2019-04-12 17:16:49 +01:00
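
Usage implied by the entry above (remote and file names are
placeholders):

    # default (restored) behaviour: directories are traversed as before
    rclone copy --files-from files.txt src: dst:

    # opt in to the new non-scanning behaviour
    rclone copy --files-from files.txt --no-traverse src: dst:
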
Nick Craig-Wood
b05da61c82 build: move linter build tags into Makefile to fix golangci-lint 2019-04-12 15:48:36 +01:00
Nick Craig-Wood
41f01da625 docs: add link to Go report card 2019-04-12 15:28:37 +01:00
Nick Craig-Wood
901811bb26 docs: Remove references to Google+ 2019-04-12 15:25:17 +01:00
Oliver Heyme
0d4a3520ad jottacloud: add device registration
jottacloud: Updated documentation
2019-04-11 16:31:27 +01:00
calistri
5855714474 lsjson: added --files-only and --dirs-only flags
Factored common code from lsf/lsjson into operations.ListJSON
2019-04-11 11:43:25 +01:00
Nick Craig-Wood
120de505a9 Add Manu to contributors 2019-04-11 10:22:17 +01:00
Manu
6e86526c9d s3: add support for "Glacier Deep Archive" storage class - fixes #3088 2019-04-11 10:21:41 +01:00
Nick Craig-Wood
0862dc9b2b build: update to Xenial in travis build to fix link errors 2019-04-10 15:22:21 +01:00
Nick Craig-Wood
1c301f9f7a drive: Fix creation of duplicates with server side copy - fixes #3067 2019-03-29 16:58:19 +00:00
Nick Craig-Wood
9f6b09dfaf Add Ben Boeckel to contributors 2019-03-28 15:13:17 +00:00
Ben Boeckel
3d424c6e08 docs: fix various typos 2019-03-28 15:12:51 +00:00
Nick Craig-Wood
6fb1c8f51c b2: ignore malformed src_last_modified_millis
This fixes rclone returning `listing failed: strconv.ParseInt` errors
when listing files which have a malformed `src_last_modified_millis`.
This is uploaded by the client so care is needed in interpreting it as
it can be malformed.

Fixes #3065
2019-03-25 15:51:45 +00:00
Nick Craig-Wood
626f0d1886 copy: account for server side copy bytes and obey --max-transfer 2019-03-25 15:36:38 +00:00
Nick Craig-Wood
9ee9fe3885 swift: obey Retry-After to fix OVH restore from cold storage
In as many methods as possible we attempt to obey the Retry-After
header where it is provided.

This means that when objects are being requested from OVH cold storage
rclone will sleep the correct amount of time before retrying.

If the sleeps are short it does them immediately; if they are long it
returns an ErrorRetryAfter, which causes the outer retry to sleep
before retrying.

Fixes #3041
2019-03-25 13:41:34 +00:00
Nick Craig-Wood
b0380aad95 vendor: update github.com/ncw/swift to help with #3041 2019-03-25 13:41:34 +00:00
Nick Craig-Wood
2065e73d0b cmd: implement RetryAfter errors which cause a sleep before a retry
Use NewRetryAfterError to return an error which will cause a high
level retry after the delay specified.
2019-03-25 13:41:34 +00:00
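
A hedged sketch of that pattern follows; the helper name is
illustrative and only the seconds form of Retry-After is handled:

    package retrysketch

    import (
        "fmt"
        "net/http"
        "strconv"
        "time"
    )

    // retryAfter honours a Retry-After header given in seconds: short
    // waits are served inline, long ones are surfaced as an error so the
    // outer retry loop sleeps before retrying.
    func retryAfter(resp *http.Response) error {
        secs, err := strconv.Atoi(resp.Header.Get("Retry-After"))
        if err != nil {
            return nil // no usable header; let normal retry logic run
        }
        d := time.Duration(secs) * time.Second
        if d <= 10*time.Second {
            time.Sleep(d) // short: sleep now and retry immediately
            return nil
        }
        return fmt.Errorf("retry after %v", d) // long: defer to outer retry
    }
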
Nick Craig-Wood
d3e3bbedf3 docs: Fix typo - fixes #3071 2019-03-25 13:18:41 +00:00
Cnly
8d29d69ade onedrive: Implement graceful cancel 2019-03-19 15:54:51 +08:00
Nick Craig-Wood
6e70d88f54 swift: work around token expiry on CEPH
This implements the Expiry interface so token expiry works properly

This change makes sure that this change from the swift library works
correctly with rclone's custom authenticator.

> Renew the token 60s before the expiry time
>
> The v2 and v3 auth schemes both return the expiry time of the token,
> so instead of waiting for a 401 error, renew the token 60s before this
> time.
>
> This makes transfers more efficient and also works around a bug in
> CEPH which returns 403 instead of 401 when the token expires.
>
> http://tracker.ceph.com/issues/22223
2019-03-18 13:30:59 +00:00
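
The quoted renewal rule reduces to a one-line check, sketched here:

    package swiftsketch

    import "time"

    // needsRenewal treats a token as expired 60 seconds before its stated
    // expiry, so it is renewed proactively and CEPH's 403-instead-of-401
    // on expiry is never hit.
    func needsRenewal(expiresAt time.Time) bool {
        return time.Until(expiresAt) < 60*time.Second
    }
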
Nick Craig-Wood
595fea757d vendor: update github.com/ncw/swift to bring in Expires changes 2019-03-18 13:30:59 +00:00
Nick Craig-Wood
bb80586473 bin/get-github-release: fetch the most recent not the least recent 2019-03-18 11:29:37 +00:00
Nick Craig-Wood
0d475958c7 Fix errors discovered with go vet nilness tool 2019-03-18 11:23:00 +00:00
Nick Craig-Wood
2728948fb0 Add xopez to contributors 2019-03-18 11:04:10 +00:00
Nick Craig-Wood
3756f211b5 Add Danil Semelenov to contributors 2019-03-18 11:04:10 +00:00
xopez
2faf2aed80 docs: Update Copyright to current Year 2019-03-18 11:03:45 +00:00
Nick Craig-Wood
1bd8183af1 build: use matrix build for travis
This makes the build more efficient, makes the .travis.yml file more
comprehensible, and reduces the Makefile spaghetti.

Windows support is commented out for the moment as it isn't very
reliable yet.
2019-03-17 14:58:18 +00:00
Nick Craig-Wood
5aa706831f b2: ignore already_hidden error on remove
Sometimes (possibly through eventual consistency) b2 returns an
already_hidden error on a delete.  Ignore this since it is harmless.
2019-03-17 14:56:17 +00:00
Nick Craig-Wood
ac7e1dbf62 test_all: add the vfs tests to the integration tests
Fix failing tests for some remotes
2019-03-17 14:56:17 +00:00
Nick Craig-Wood
14ef4437e5 dedupe: fix bug introduced when converting to use walk.ListR #2902
Before the fix we were only de-duping the ListR batches.

Afterwards we dedupe everything.

This will have the consequence that rclone uses more memory as it will
build a map of all the directory names, not just the names in a given
directory.
2019-03-17 11:01:20 +00:00
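
Illustratively, the global pass amounts to one map built over the whole
listing, which is where the extra memory goes:

    package dedupesketch

    // groupByName counts every directory name seen across all ListR
    // batches, so duplicates are found even when they arrive in different
    // batches. Memory grows with the total number of directories.
    func groupByName(names []string) map[string]int {
        seen := make(map[string]int)
        for _, name := range names {
            seen[name]++
        }
        return seen
    }
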
Danil Semelenov
a0d2ab5b4f cmd: Fix autocompletion of remote paths with spaces - fixes #3047 2019-03-17 10:15:20 +00:00
Nick Craig-Wood
3bfde5f52a ftp: add --ftp-concurrency to limit maximum number of connections
Fixes #2166
2019-03-17 09:57:14 +00:00
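
rclone implements the limit with pacer.TokenDispenser (see the ftp diff
below); the idea is the same as a buffered-channel semaphore, sketched
here:

    package ftpsketch

    // limiter is a counting semaphore: acquire blocks once n connections
    // are in flight, release frees a slot. n == 0 means no limit.
    type limiter chan struct{}

    func newLimiter(n int) limiter {
        if n <= 0 {
            return nil // nil channel: acquire/release become no-ops below
        }
        return make(limiter, n)
    }

    func (l limiter) acquire() {
        if l != nil {
            l <- struct{}{}
        }
    }

    func (l limiter) release() {
        if l != nil {
            <-l
        }
    }
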
Nick Craig-Wood
2b05bd9a08 rc: implement operations/publiclink the equivalent of rclone link
Fixes #3042
2019-03-17 09:41:31 +00:00
Nick Craig-Wood
1318be3b0a vendor: update github.com/goftp/server to fix hang while reading a file from the server
See: https://forum.rclone.org/t/minor-issue-with-linux-ftp-client-and-rclone-ftp-access-denied/8959
2019-03-17 09:30:57 +00:00
Nick Craig-Wood
f4a754a36b drive: add --skip-checksum-gphotos to ignore incorrect checksums on Google Photos
First implementation by @jammin84, re-written by @ncw

Fixes #2207
2019-03-17 09:10:51 +00:00
Nick Craig-Wood
fef73763aa lib/atexit: add SIGTERM to signals which run the exit handlers on unix 2019-03-16 17:47:02 +00:00
Nick Craig-Wood
7267d19ad8 fstest: Use walk.ListR for listing 2019-03-16 17:41:12 +00:00
Nick Craig-Wood
47099466c0 cache: Use walk.ListR for listing the temporary Fs. 2019-03-16 17:41:12 +00:00
Nick Craig-Wood
4376019062 dedupe: Use walk.ListR for listing commands.
This dramatically increases the speed (7x in my tests) of the de-dupe
as google drive supports ListR directly and dedupe did not work with
`--fast-list`.

Fixes #2902
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
e5f4210b09 serve restic: use walk.ListR for listing
This is effectively what the old code did anyway so this should not
make any functional changes.
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
d5f2df2f3d Use walk.ListR for listing operations
This will increase speed for backends which support ListR and will not
have the memory overhead of using --fast-list.

It also means that errors are queued until the end, so that as much
of the remote as possible is listed before returning an error.

Commands affected are:
- lsf
- ls
- lsl
- lsjson
- lsd
- md5sum/sha1sum/hashsum
- size
- delete
- cat
- settier
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
efd720b533 walk: Implement walk.ListR which will use ListR if at all possible
It otherwise has nearly the same interface as walk.Walk, which it
will fall back to if it can't use ListR.

Using walk.ListR speeds up file system operations by default; compared
to supplying --fast-list it uses much less memory and starts
immediately.
2019-03-16 17:41:12 +00:00
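
Call sites in this changeset follow the pattern below; the ListR
signature matches the diffs, while the wrapper function is illustrative:

    package walksketch

    import (
        "fmt"

        "github.com/ncw/rclone/fs"
        "github.com/ncw/rclone/fs/walk"
    )

    // listAll prints every object under dir, using ListR when the backend
    // supports it and falling back to a directory walk otherwise.
    func listAll(f fs.Fs, dir string) error {
        return walk.ListR(f, dir, true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
            for _, entry := range entries {
                fmt.Println(entry.Remote())
            }
            return nil
        })
    }
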
Nick Craig-Wood
047f00a411 filter: Add BoundedRecursion method
This indicates that the filter set could be satisfied by a bounded
directory recursion.
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
bb5ac8efbe http: fix socket leak on 404 errors 2019-03-15 17:04:28 +00:00
Nick Craig-Wood
e62bbf761b http: add --http-no-slash for websites with directories with no slashes #3053
See: https://forum.rclone.org/t/is-there-a-way-to-log-into-an-htpp-server/8484
2019-03-15 17:04:06 +00:00
Nick Craig-Wood
54a2e99d97 http: remove duplicates from listings 2019-03-15 16:59:36 +00:00
Nick Craig-Wood
28230d93b4 sync: Implement --suffix-keep-extension for use with --suffix - fixes #3032 2019-03-15 14:21:39 +00:00
Florian Gamböck
3c4407442d cmd: fix completion of remotes
The previous behavior of the remotes completion was that only
alphanumeric characters were allowed in a remote name. This limitation
has been lifted somewhat by #2985, which also allowed an underscore.

With the new implementation introduced in this commit, the completion of
the remote name has been simplified: If there is no colon (":") in the
current word, then complete remote name. Otherwise, complete the path
inside the specified remote. This allows correct completion of all
remote names that are allowed by the config (including - and _).
In fact it matches much more than that, even remote names that are not
allowed by the config, but in such a case there would already be an
invalid identifier in the configuration file.

With this simpler string comparison, we can get rid of the regular
expression, which makes the completion multiple times faster. For a
sample benchmark, try the following:

     # Old way
     $ time bash -c 'for _ in {1..1000000}; do
         [[ remote:path =~ ^[[:alnum:]]*$ ]]; done'

     real    0m15,637s
     user    0m15,613s
     sys     0m0,024s

     # New way
     $ time bash -c 'for _ in {1..1000000}; do
         [[ remote:path != *:* ]]; done'

     real    0m1,324s
     user    0m1,304s
     sys     0m0,020s
2019-03-15 13:16:42 +00:00
Dan Walters
caf318d499 dlna: add connection manager service description
The UPnP MediaServer spec says that the ConnectionManager service is
required, and adding it was enough to get dlna support working on my
other TV (LG webOS 2.2.1).
2019-03-15 13:14:31 +00:00
Nick Craig-Wood
2fbb504b66 webdav: fix About/df when reading the available/total returns 0
Some WebDAV servers return an empty Available and Used which parses as 0.

This caused About to return the Total as 0 which can confuse mounted
file systems.

After this change we ignore the result if Available and Used are both 0.

See: https://forum.rclone.org/t/windows-mounted-webdav-drive-has-no-free-space/8938
2019-03-15 12:03:04 +00:00
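
The guard reduces to a check like this sketch (field names are
illustrative):

    package webdavsketch

    // quotaKnown reports whether a parsed WebDAV quota reply is usable.
    // Some servers return empty Available and Used, which parse as 0; in
    // that case the reply is ignored rather than reported as Total == 0.
    func quotaKnown(available, used int64) bool {
        return available != 0 || used != 0
    }
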
Alex Chen
2b58d1a46f docs: onedrive: Add guide to refreshing token after MFA is enabled 2019-03-14 00:21:05 +08:00
Cnly
1582a21408 onedrive: Always add trailing colon to path when addressing items - #2720, #3039 2019-03-13 11:30:15 +08:00
Nick Craig-Wood
229898dcee Add Dan Walters to contributors 2019-03-11 17:31:46 +00:00
Dan Walters
95194adfd5 dlna: fix root XML service descriptor
The SCPD URL was being set after marshalling the XML, and thus coming
out blank.  Now works on my Samsung TV, and likely fixes some issues
reported by others in #2648.
2019-03-11 17:31:32 +00:00
Nick Craig-Wood
4827496234 webdav: fix race when creating directories - fixes #3035
Before this change a race condition existed in mkdir
- the directory was attempted to be created
- the parent didn't exist so it failed
- the parent was created
- the directory was created again

The last step failed as the directory was created in a different thread.

This was fixed by checking the error messages of MKCOL for both
directory creations, rather than only the first.
2019-03-11 16:20:05 +00:00
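
In outline, as a sketch with isAlreadyExists standing in for the MKCOL
error check the fix adds:

    package mkdirsketch

    // mkdirAll creates dir, creating its parent first if needed. The race
    // fix: "already exists" counts as success for *both* creation
    // attempts, since another thread may create the directory in between.
    func mkdirAll(mkcol func(path string) error, isAlreadyExists func(error) bool, parent, dir string) error {
        if err := mkcol(dir); err == nil || isAlreadyExists(err) {
            return nil
        }
        if err := mkcol(parent); err != nil && !isAlreadyExists(err) {
            return err
        }
        if err := mkcol(dir); err != nil && !isAlreadyExists(err) {
            return err
        }
        return nil
    }
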
760 changed files with 101808 additions and 20428 deletions

View File

@@ -1,9 +1,5 @@
# golangci-lint configuration options
run:
build-tags:
- cmount
linters:
enable:
- deadcode

View File

@@ -1,27 +1,33 @@
---
language: go
sudo: required
dist: trusty
dist: xenial
os:
- linux
go:
- 1.8.x
- 1.9.x
- 1.10.x
- 1.11.x
- 1.12.x
- tip
- linux
go_import_path: github.com/ncw/rclone
before_install:
- if [[ $TRAVIS_OS_NAME == linux ]]; then sudo modprobe fuse ; sudo chmod 666 /dev/fuse ; sudo chown root:$USER /etc/fuse.conf ; fi
- if [[ $TRAVIS_OS_NAME == osx ]]; then brew update && brew tap caskroom/cask && brew cask install osxfuse ; fi
- git fetch --unshallow --tags
- |
if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then
sudo modprobe fuse
sudo chmod 666 /dev/fuse
sudo chown root:$USER /etc/fuse.conf
fi
if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
brew update
brew tap caskroom/cask
brew cask install osxfuse
fi
if [[ "$TRAVIS_OS_NAME" == "windows" ]]; then
choco install -y winfsp zip make
cd ../.. # fix crlf in git checkout
mv $TRAVIS_REPO_SLUG _old
git config --global core.autocrlf false
git clone _old $TRAVIS_REPO_SLUG
cd $TRAVIS_REPO_SLUG
fi
install:
- git fetch --unshallow --tags
- make vars
- make build_dep
script:
- make check
- make quicktest
- make compile_all
- make vars
env:
global:
- GOTAGS=cmount
@@ -32,23 +38,63 @@ env:
addons:
apt:
packages:
- fuse
- libfuse-dev
- rpm
- pkg-config
- fuse
- libfuse-dev
- rpm
- pkg-config
cache:
directories:
- $HOME/.cache/go-build
matrix:
allow_failures:
- go: tip
- go: tip
include:
- os: osx
go: 1.12.x
env: GOTAGS=""
cache:
directories:
- $HOME/Library/Caches/go-build
- go: 1.9.x
script:
- make quicktest
- go: 1.10.x
script:
- make quicktest
- go: 1.11.x
script:
- make quicktest
- go: 1.12.x
env:
- GOTAGS=cmount
script:
- make build_dep
- make check
- make quicktest
- make racequicktest
- make compile_all
- os: osx
go: 1.12.x
env:
- GOTAGS= # cmount doesn't work on osx travis for some reason
cache:
directories:
- $HOME/Library/Caches/go-build
script:
- make
- make quicktest
- make racequicktest
# - os: windows
# go: 1.12.x
# env:
# - GOTAGS=cmount
# - CPATH='C:\Program Files (x86)\WinFsp\inc\fuse'
# #filter_secrets: false # works around a problem with secrets under windows
# cache:
# directories:
# - ${LocalAppData}/go-build
# script:
# - make
# - make quicktest
# - make racequicktest
- go: tip
script:
- make quicktest
deploy:
provider: script
script: make travis_beta
@@ -57,4 +103,4 @@ deploy:
repo: ncw/rclone
all_branches: true
go: 1.12.x
condition: $TRAVIS_PULL_REQUEST == false
condition: $TRAVIS_PULL_REQUEST == false && $TRAVIS_OS_NAME != "windows"

View File

@@ -135,7 +135,7 @@ then change into the project root and run
make test
This command is run daily on the the integration test server. You can
This command is run daily on the integration test server. You can
find the results at https://pub.rclone.org/integration-tests/
## Code Organisation ##

File diff suppressed because it is too large

692 MANUAL.md

File diff suppressed because it is too large

3775 MANUAL.txt

File diff suppressed because it is too large

View File

@@ -17,8 +17,6 @@ ifneq ($(TAG),$(LAST_TAG))
endif
GO_VERSION := $(shell go version)
GO_FILES := $(shell go list ./... | grep -v /vendor/ )
# Run full tests if go >= go1.12
FULL_TESTS := $(shell go version | perl -lne 'print "go$$1.$$2" if /go(\d+)\.(\d+)/ && ($$1 > 1 || $$2 >= 12)')
BETA_PATH := $(BRANCH_PATH)$(TAG)
BETA_URL := https://beta.rclone.org/$(BETA_PATH)/
BETA_UPLOAD_ROOT := memstore:beta-rclone-org
@@ -26,6 +24,7 @@ BETA_UPLOAD := $(BETA_UPLOAD_ROOT)/$(BETA_PATH)
# Pass in GOTAGS=xyz on the make command line to set build tags
ifdef GOTAGS
BUILDTAGS=-tags "$(GOTAGS)"
LINTTAGS=--build-tags "$(GOTAGS)"
endif
.PHONY: rclone vars version
@@ -42,7 +41,6 @@ vars:
@echo LAST_TAG="'$(LAST_TAG)'"
@echo NEW_TAG="'$(NEW_TAG)'"
@echo GO_VERSION="'$(GO_VERSION)'"
@echo FULL_TESTS="'$(FULL_TESTS)'"
@echo BETA_URL="'$(BETA_URL)'"
version:
@@ -57,28 +55,19 @@ test: rclone
# Quick test
quicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) $(GO_FILES)
ifdef FULL_TESTS
racequicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race $(GO_FILES)
endif
# Do source code quality checks
check: rclone
ifdef FULL_TESTS
@# we still run go vet for -printfuncs which golangci-lint doesn't do yet
@# see: https://github.com/golangci/golangci-lint/issues/204
@echo "-- START CODE QUALITY REPORT -------------------------------"
@go vet $(BUILDTAGS) -printfuncs Debugf,Infof,Logf,Errorf ./...
@golangci-lint run ./...
@golangci-lint run $(LINTTAGS) ./...
@echo "-- END CODE QUALITY REPORT ---------------------------------"
else
@echo Skipping source quality tests as version of go too old
endif
# Get the build dependencies
build_dep:
ifdef FULL_TESTS
go run bin/get-github-release.go -extract golangci-lint golangci/golangci-lint 'golangci-lint-.*\.tar\.gz'
endif
# Get the release dependencies
release_dep:
@@ -162,11 +151,7 @@ log_since_last_release:
git log $(LAST_TAG)..
compile_all:
ifdef FULL_TESTS
go run bin/cross-compile.go -parallel 8 -compile-only $(BUILDTAGS) $(TAG)
else
@echo Skipping compile all as version of go too old
endif
appveyor_upload:
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
@@ -186,6 +171,11 @@ BUILD_FLAGS := -exclude "^(windows|darwin)/"
ifeq ($(TRAVIS_OS_NAME),osx)
BUILD_FLAGS := -include "^darwin/" -cgo
endif
ifeq ($(TRAVIS_OS_NAME),windows)
# BUILD_FLAGS := -include "^windows/" -cgo
# 386 doesn't build yet
BUILD_FLAGS := -include "^windows/amd64" -cgo
endif
travis_beta:
ifeq ($(TRAVIS_OS_NAME),linux)
@@ -201,7 +191,7 @@ endif
# Fetch the binary builds from travis and appveyor
fetch_binaries:
rclone -P sync $(BETA_UPLOAD) build/
rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server -v -w

View File

@@ -7,11 +7,11 @@
[Changelog](https://rclone.org/changelog/) |
[Installation](https://rclone.org/install/) |
[Forum](https://forum.rclone.org/) |
[G+](https://google.com/+RcloneOrg)
[![Build Status](https://travis-ci.org/ncw/rclone.svg?branch=master)](https://travis-ci.org/ncw/rclone)
[![Windows Build Status](https://ci.appveyor.com/api/projects/status/github/ncw/rclone?branch=master&passingText=windows%20-%20ok&svg=true)](https://ci.appveyor.com/project/ncw/rclone)
[![CircleCI](https://circleci.com/gh/ncw/rclone/tree/master.svg?style=svg)](https://circleci.com/gh/ncw/rclone/tree/master)
[![Go Report Card](https://goreportcard.com/badge/github.com/ncw/rclone)](https://goreportcard.com/report/github.com/ncw/rclone)
[![GoDoc](https://godoc.org/github.com/ncw/rclone?status.svg)](https://godoc.org/github.com/ncw/rclone)
# Rclone

View File

@@ -11,7 +11,7 @@ Making a release
* edit docs/content/changelog.md
* make doc
* git status - to check for new man pages - git add them
* git commit -a -v -m "Version v1.XX"
* git commit -a -v -m "Version v1.XX.0"
* make retag
* git push --tags origin master
* # Wait for the appveyor and travis builds to complete then...
@@ -27,6 +27,7 @@ Making a release
Early in the next release cycle update the vendored dependencies
* Review any pinned packages in go.mod and remove if possible
* GO111MODULE=on go get -u github.com/spf13/cobra@master
* make update
* git status
* git add new files

View File

@@ -32,7 +32,6 @@ import (
"github.com/ncw/rclone/lib/dircache"
"github.com/ncw/rclone/lib/oauthutil"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/rest"
"github.com/pkg/errors"
"golang.org/x/oauth2"
)
@@ -1093,7 +1092,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
if !bigObject {
in, resp, err = file.OpenHeaders(headers)
} else {
in, resp, err = file.OpenTempURLHeaders(rest.ClientWithHeaderReset(o.fs.noAuthClient, headers), headers)
in, resp, err = file.OpenTempURLHeaders(o.fs.noAuthClient, headers)
}
return o.fs.shouldRetry(resp, err)
})

View File

@@ -1,6 +1,6 @@
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
// +build !plan9,!solaris,go1.8
// +build !plan9,!solaris
package azureblob

View File

@@ -1,4 +1,4 @@
// +build !plan9,!solaris,go1.8
// +build !plan9,!solaris
package azureblob

View File

@@ -1,6 +1,6 @@
// Test AzureBlob filesystem interface
// +build !plan9,!solaris,go1.8
// +build !plan9,!solaris
package azureblob

View File

@@ -1,6 +1,6 @@
// Build for azureblob for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build plan9 solaris !go1.8
// +build plan9 solaris
package azureblob

View File

@@ -952,6 +952,13 @@ func (f *Fs) hide(Name string) error {
return f.shouldRetry(resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok {
if apiErr.Code == "already_hidden" {
// sometimes eventual consistency causes this, so
// ignore this error since it is harmless
return nil
}
}
return errors.Wrapf(err, "failed to hide %q", Name)
}
return nil
@@ -1206,7 +1213,7 @@ func (o *Object) parseTimeString(timeString string) (err error) {
unixMilliseconds, err := strconv.ParseInt(timeString, 10, 64)
if err != nil {
fs.Debugf(o, "Failed to parse mod time string %q: %v", timeString, err)
return err
return nil
}
o.modTime = time.Unix(unixMilliseconds/1E3, (unixMilliseconds%1E3)*1E6).UTC()
return nil

View File

@@ -1191,7 +1191,7 @@ func (f *Fs) Rmdir(dir string) error {
}
var queuedEntries []*Object
err = walk.Walk(f.tempFs, dir, true, -1, func(path string, entries fs.DirEntries, err error) error {
err = walk.ListR(f.tempFs, dir, true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, o := range entries {
if oo, ok := o.(fs.Object); ok {
co := ObjectFromOriginal(f, oo)
@@ -1287,7 +1287,7 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
}
var queuedEntries []*Object
err := walk.Walk(f.tempFs, srcRemote, true, -1, func(path string, entries fs.DirEntries, err error) error {
err := walk.ListR(f.tempFs, srcRemote, true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, o := range entries {
if oo, ok := o.(fs.Object); ok {
co := ObjectFromOriginal(f, oo)

View File

@@ -15,7 +15,9 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "MergeDirs"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier"},
})
}

View File

@@ -1023,7 +1023,7 @@ func (b *Persistent) ReconcileTempUploads(cacheFs *Fs) error {
}
var queuedEntries []fs.Object
err = walk.Walk(cacheFs.tempFs, "", true, -1, func(path string, entries fs.DirEntries, err error) error {
err = walk.ListR(cacheFs.tempFs, "", true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, o := range entries {
if oo, ok := o.(fs.Object); ok {
queuedEntries = append(queuedEntries, oo)

View File

@@ -169,23 +169,10 @@ func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) {
WriteMimeType: false,
BucketBased: true,
CanHaveEmptyDirectories: true,
SetTier: true,
GetTier: true,
}).Fill(f).Mask(wrappedFs).WrapsFs(f, wrappedFs)
doChangeNotify := wrappedFs.Features().ChangeNotify
if doChangeNotify != nil {
f.features.ChangeNotify = func(notifyFunc func(string, fs.EntryType), pollInterval <-chan time.Duration) {
wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
decrypted, err := f.DecryptFileName(path)
if err != nil {
fs.Logf(f, "ChangeNotify was unable to decrypt %q: %s", path, err)
return
}
notifyFunc(decrypted, entryType)
}
doChangeNotify(wrappedNotifyFunc, pollInterval)
}
}
return f, err
}
@@ -202,6 +189,7 @@ type Options struct {
// Fs represents a wrapped fs.Fs
type Fs struct {
fs.Fs
wrapper fs.Fs
name string
root string
opt Options
@@ -544,6 +532,16 @@ func (f *Fs) UnWrap() fs.Fs {
return f.Fs
}
// WrapFs returns the Fs that is wrapping this Fs
func (f *Fs) WrapFs() fs.Fs {
return f.wrapper
}
// SetWrapper sets the Fs that is wrapping this Fs
func (f *Fs) SetWrapper(wrapper fs.Fs) {
f.wrapper = wrapper
}
// EncryptFileName returns an encrypted file name
func (f *Fs) EncryptFileName(fileName string) string {
return f.cipher.EncryptFileName(fileName)
@@ -616,6 +614,75 @@ func (f *Fs) ComputeHash(o *Object, src fs.Object, hashType hash.Type) (hashStr
return m.Sums()[hashType], nil
}
// MergeDirs merges the contents of all the directories passed
// in into the first one and rmdirs the other directories.
func (f *Fs) MergeDirs(dirs []fs.Directory) error {
do := f.Fs.Features().MergeDirs
if do == nil {
return errors.New("MergeDirs not supported")
}
out := make([]fs.Directory, len(dirs))
for i, dir := range dirs {
out[i] = fs.NewDirCopy(dir).SetRemote(f.cipher.EncryptDirName(dir.Remote()))
}
return do(out)
}
// DirCacheFlush resets the directory cache - used in testing
// as an optional interface
func (f *Fs) DirCacheFlush() {
do := f.Fs.Features().DirCacheFlush
if do != nil {
do()
}
}
// PublicLink generates a public link to the remote path (usually readable by anyone)
func (f *Fs) PublicLink(remote string) (string, error) {
do := f.Fs.Features().PublicLink
if do == nil {
return "", errors.New("PublicLink not supported")
}
o, err := f.NewObject(remote)
if err != nil {
// assume it is a directory
return do(f.cipher.EncryptDirName(remote))
}
return do(o.(*Object).Object.Remote())
}
// ChangeNotify calls the passed function with a path
// that has had changes. If the implementation
// uses polling, it should adhere to the given interval.
func (f *Fs) ChangeNotify(notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) {
do := f.Fs.Features().ChangeNotify
if do == nil {
return
}
wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
fs.Logf(f, "path %q entryType %d", path, entryType)
var (
err error
decrypted string
)
switch entryType {
case fs.EntryDirectory:
decrypted, err = f.cipher.DecryptDirName(path)
case fs.EntryObject:
decrypted, err = f.cipher.DecryptFileName(path)
default:
fs.Errorf(path, "crypt ChangeNotify: ignoring unknown EntryType %d", entryType)
return
}
if err != nil {
fs.Logf(f, "ChangeNotify was unable to decrypt %q: %s", path, err)
return
}
notifyFunc(decrypted, entryType)
}
do(wrappedNotifyFunc, pollIntervalChan)
}
// Object describes a wrapped Object for being read from the Fs
//
// This decrypts the remote name and decrypts the data
@@ -774,6 +841,34 @@ func (o *ObjectInfo) Hash(hash hash.Type) (string, error) {
return "", nil
}
// ID returns the ID of the Object if known, or "" if not
func (o *Object) ID() string {
do, ok := o.Object.(fs.IDer)
if !ok {
return ""
}
return do.ID()
}
// SetTier performs changing storage tier of the Object if
// multiple storage classes supported
func (o *Object) SetTier(tier string) error {
do, ok := o.Object.(fs.SetTierer)
if !ok {
return errors.New("crypt: underlying remote does not support SetTier")
}
return do.SetTier(tier)
}
// GetTier returns storage tier or class of the Object
func (o *Object) GetTier() string {
do, ok := o.Object.(fs.GetTierer)
if !ok {
return ""
}
return do.GetTier()
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
@@ -787,7 +882,15 @@ var (
_ fs.UnWrapper = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.ObjectInfo = (*ObjectInfo)(nil)
_ fs.Object = (*Object)(nil)
_ fs.ObjectUnWrapper = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
_ fs.SetTierer = (*Object)(nil)
_ fs.GetTierer = (*Object)(nil)
)

View File

@@ -21,8 +21,9 @@ func TestIntegration(t *testing.T) {
t.Skip("Skipping as -remote not set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*crypt.Object)(nil),
RemoteName: *fstest.RemoteName,
NilObject: (*crypt.Object)(nil),
UnimplementableObjectMethods: []string{"MimeType"},
})
}
@@ -42,6 +43,7 @@ func TestStandard(t *testing.T) {
{Name: name, Key: "password", Value: obscure.MustObscure("potato")},
{Name: name, Key: "filename_encryption", Value: "standard"},
},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
@@ -61,6 +63,7 @@ func TestOff(t *testing.T) {
{Name: name, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name, Key: "filename_encryption", Value: "off"},
},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
@@ -80,6 +83,7 @@ func TestObfuscate(t *testing.T) {
{Name: name, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name, Key: "filename_encryption", Value: "obfuscate"},
},
SkipBadWindowsCharacters: true,
SkipBadWindowsCharacters: true,
UnimplementableObjectMethods: []string{"MimeType"},
})
}

View File

@@ -1,7 +1,4 @@
// Package drive interfaces with the Google Drive object storage system
// +build go1.9
package drive
// FIXME need to deal with some corner cases
@@ -240,6 +237,22 @@ func init() {
Default: false,
Help: "Skip google documents in all listings.\nIf given, gdocs practically become invisible to rclone.",
Advanced: true,
}, {
Name: "skip_checksum_gphotos",
Default: false,
Help: `Skip MD5 checksum on Google photos and videos only.
Use this if you get checksum errors when transferring Google photos or
videos.
Setting this flag will cause Google photos and videos to return a
blank MD5 checksum.
Google photos are identified by being in the "photos" space.
Corrupted checksums are caused by Google modifying the image/video but
not updating the checksum.`,
Advanced: true,
}, {
Name: "shared_with_me",
Default: false,
@@ -396,6 +409,7 @@ type Options struct {
AuthOwnerOnly bool `config:"auth_owner_only"`
UseTrash bool `config:"use_trash"`
SkipGdocs bool `config:"skip_gdocs"`
SkipChecksumGphotos bool `config:"skip_checksum_gphotos"`
SharedWithMe bool `config:"shared_with_me"`
TrashedOnly bool `config:"trashed_only"`
Extensions string `config:"formats"`
@@ -615,6 +629,9 @@ func (f *Fs) list(dirIDs []string, title string, directoriesOnly, filesOnly, inc
if f.opt.AuthOwnerOnly {
fields += ",owners"
}
if f.opt.SkipChecksumGphotos {
fields += ",spaces"
}
fields = fmt.Sprintf("files(%s),nextPageToken", fields)
@@ -1002,6 +1019,15 @@ func (f *Fs) newBaseObject(remote string, info *drive.File) baseObject {
// newRegularObject creates a fs.Object for a normal drive.File
func (f *Fs) newRegularObject(remote string, info *drive.File) fs.Object {
// wipe checksum if SkipChecksumGphotos and file is type Photo or Video
if f.opt.SkipChecksumGphotos {
for _, space := range info.Spaces {
if space == "photos" {
info.Md5Checksum = ""
break
}
}
}
return &Object{
baseObject: f.newBaseObject(remote, info),
url: fmt.Sprintf("%sfiles/%s?alt=media", f.svc.BasePath, info.Id),
@@ -1843,16 +1869,24 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
remote = remote[:len(remote)-len(ext)]
}
// Look to see if there is an existing object
existingObject, _ := f.NewObject(remote)
createInfo, err := f.createFileInfo(remote, src.ModTime())
if err != nil {
return nil, err
}
supportTeamDrives, err := f.ShouldSupportTeamDrives(src)
if err != nil {
return nil, err
}
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Copy(srcObj.id, createInfo).
Fields(partialFields).
SupportsTeamDrives(f.isTeamDrive).
SupportsTeamDrives(supportTeamDrives).
KeepRevisionForever(f.opt.KeepRevisionForever).
Do()
return shouldRetry(err)
@@ -1860,7 +1894,17 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
if err != nil {
return nil, err
}
return f.newObjectWithInfo(remote, info)
newObject, err := f.newObjectWithInfo(remote, info)
if err != nil {
return nil, err
}
if existingObject != nil {
err = existingObject.Remove()
if err != nil {
fs.Errorf(existingObject, "Failed to remove existing object after copy: %v", err)
}
}
return newObject, nil
}
// Purge deletes all the files and the container
@@ -1986,6 +2030,11 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
dstParents := strings.Join(dstInfo.Parents, ",")
dstInfo.Parents = nil
supportTeamDrives, err := f.ShouldSupportTeamDrives(src)
if err != nil {
return nil, err
}
// Do the move
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
@@ -1993,7 +2042,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
RemoveParents(srcParentID).
AddParents(dstParents).
Fields(partialFields).
SupportsTeamDrives(f.isTeamDrive).
SupportsTeamDrives(supportTeamDrives).
Do()
return shouldRetry(err)
})
@@ -2004,6 +2053,20 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, info)
}
// ShouldSupportTeamDrives returns whether the request should support TeamDrives
func (f *Fs) ShouldSupportTeamDrives(src fs.Object) (bool, error) {
srcIsTeamDrive := false
if srcFs, ok := src.Fs().(*Fs); ok {
srcIsTeamDrive = srcFs.isTeamDrive
}
if f.isTeamDrive {
return true, nil
}
return srcIsTeamDrive, nil
}
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(remote string) (link string, err error) {
id, err := f.dirCache.FindDir(remote, false)

View File

@@ -1,5 +1,3 @@
// +build go1.9
package drive
import (

View File

@@ -1,7 +1,5 @@
// Test Drive filesystem interface
// +build go1.9
package drive
import (

View File

@@ -1,6 +0,0 @@
// Build for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build !go1.9
package drive

View File

@@ -8,8 +8,6 @@
//
// This contains code adapted from google.golang.org/api (C) the GO AUTHORS
// +build go1.9
package drive
import (

View File

@@ -15,6 +15,7 @@ import (
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
)
@@ -45,6 +46,11 @@ func init() {
Help: "FTP password",
IsPassword: true,
Required: true,
}, {
Name: "concurrency",
Help: "Maximum number of FTP simultaneous connections, 0 for unlimited",
Default: 0,
Advanced: true,
},
},
})
@@ -52,10 +58,11 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
Concurrency int `config:"concurrency"`
}
// Fs represents a remote FTP server
@@ -70,6 +77,7 @@ type Fs struct {
dialAddr string
poolMu sync.Mutex
pool []*ftp.ServerConn
tokens *pacer.TokenDispenser
}
// Object describes an FTP file
@@ -128,6 +136,9 @@ func (f *Fs) ftpConnection() (*ftp.ServerConn, error) {
// Get an FTP connection from the pool, or open a new one
func (f *Fs) getFtpConnection() (c *ftp.ServerConn, err error) {
if f.opt.Concurrency > 0 {
f.tokens.Get()
}
f.poolMu.Lock()
if len(f.pool) > 0 {
c = f.pool[0]
@@ -147,6 +158,9 @@ func (f *Fs) getFtpConnection() (c *ftp.ServerConn, err error) {
// if err is not nil then it checks the connection is alive using a
// NOOP request
func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
if f.opt.Concurrency > 0 {
defer f.tokens.Put()
}
c := *pc
*pc = nil
if err != nil {
@@ -198,6 +212,7 @@ func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
user: user,
pass: pass,
dialAddr: dialAddr,
tokens: pacer.NewTokenDispenser(opt.Concurrency),
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,

View File

@@ -1,7 +1,4 @@
// Package googlecloudstorage provides an interface to Google Cloud Storage
// +build go1.9
package googlecloudstorage
/*

View File

@@ -1,7 +1,5 @@
// Test GoogleCloudStorage filesystem interface
// +build go1.9
package googlecloudstorage_test
import (

View File

@@ -1,6 +0,0 @@
// Build for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build !go1.9
package googlecloudstorage

View File

@@ -6,6 +6,7 @@ package http
import (
"io"
"mime"
"net/http"
"net/url"
"path"
@@ -44,6 +45,22 @@ func init() {
Value: "https://user:pass@example.com",
Help: "Connect to example.com using a username and password",
}},
}, {
Name: "no_slash",
Help: `Set this if the site doesn't end directories with /
Use this if your target website does not use / on the end of
directories.
A / on the end of a path is how rclone normally tells the difference
between files and directories. If this flag is set, then rclone will
treat all files with Content-Type: text/html as directories and read
URLs from them rather than downloading them.
Note that this may cause rclone to confuse genuine HTML files with
directories.`,
Default: false,
Advanced: true,
}},
}
fs.Register(fsi)
@@ -52,6 +69,7 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
Endpoint string `config:"url"`
NoSlash bool `config:"no_slash"`
}
// Fs stores the interface to the remote HTTP files
@@ -270,14 +288,20 @@ func parse(base *url.URL, in io.Reader) (names []string, err error) {
if err != nil {
return nil, err
}
var walk func(*html.Node)
var (
walk func(*html.Node)
seen = make(map[string]struct{})
)
walk = func(n *html.Node) {
if n.Type == html.ElementNode && n.Data == "a" {
for _, a := range n.Attr {
if a.Key == "href" {
name, err := parseName(base, a.Val)
if err == nil {
names = append(names, name)
if _, found := seen[name]; !found {
names = append(names, name)
seen[name] = struct{}{}
}
}
break
}
@@ -302,14 +326,16 @@ func (f *Fs) readDir(dir string) (names []string, err error) {
return nil, errors.Errorf("internal error: readDir URL %q didn't end in /", URL)
}
res, err := f.httpClient.Get(URL)
if err == nil && res.StatusCode == http.StatusNotFound {
return nil, fs.ErrorDirNotFound
if err == nil {
defer fs.CheckClose(res.Body, &err)
if res.StatusCode == http.StatusNotFound {
return nil, fs.ErrorDirNotFound
}
}
err = statusError(res, err)
if err != nil {
return nil, errors.Wrap(err, "failed to readDir")
}
defer fs.CheckClose(res.Body, &err)
contentType := strings.SplitN(res.Header.Get("Content-Type"), ";", 2)[0]
switch contentType {
@@ -353,11 +379,16 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
fs: f,
remote: remote,
}
if err = file.stat(); err != nil {
switch err = file.stat(); err {
case nil:
entries = append(entries, file)
case fs.ErrorNotAFile:
// ...found a directory not a file
dir := fs.NewDir(remote, timeUnset)
entries = append(entries, dir)
default:
fs.Debugf(remote, "skipping because of error: %v", err)
continue
}
entries = append(entries, file)
}
}
return entries, nil
@@ -433,6 +464,16 @@ func (o *Object) stat() error {
o.size = parseInt64(res.Header.Get("Content-Length"), -1)
o.modTime = t
o.contentType = res.Header.Get("Content-Type")
// If NoSlash is set then check ContentType to see if it is a directory
if o.fs.opt.NoSlash {
mediaType, _, err := mime.ParseMediaType(o.contentType)
if err != nil {
return errors.Wrapf(err, "failed to parse Content-Type: %q", o.contentType)
}
if mediaType == "text/html" {
return fs.ErrorNotAFile
}
}
return nil
}

View File

@@ -1,5 +1,3 @@
// +build go1.8
package http
import (
@@ -65,7 +63,7 @@ func prepare(t *testing.T) (fs.Fs, func()) {
return f, tidy
}
func testListRoot(t *testing.T, f fs.Fs) {
func testListRoot(t *testing.T, f fs.Fs, noSlash bool) {
entries, err := f.List("")
require.NoError(t, err)
@@ -93,15 +91,29 @@ func testListRoot(t *testing.T, f fs.Fs) {
e = entries[3]
assert.Equal(t, "two.html", e.Remote())
assert.Equal(t, int64(7), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
if noSlash {
assert.Equal(t, int64(-1), e.Size())
_, ok = e.(fs.Directory)
assert.True(t, ok)
} else {
assert.Equal(t, int64(41), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
}
}
func TestListRoot(t *testing.T) {
f, tidy := prepare(t)
defer tidy()
testListRoot(t, f)
testListRoot(t, f, false)
}
func TestListRootNoSlash(t *testing.T) {
f, tidy := prepare(t)
f.(*Fs).opt.NoSlash = true
defer tidy()
testListRoot(t, f, true)
}
func TestListSubDir(t *testing.T) {
@@ -194,7 +206,7 @@ func TestIsAFileRoot(t *testing.T) {
f, err := NewFs(remoteName, "one%.txt", m)
assert.Equal(t, err, fs.ErrorIsFile)
testListRoot(t, f)
testListRoot(t, f, false)
}
func TestIsAFileSubDir(t *testing.T) {

View File

@@ -1 +1 @@
potato
<a href="two.html/file.txt">file.txt</a>

View File

@@ -11,7 +11,9 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestHubic:",
NilObject: (*hubic.Object)(nil),
RemoteName: "TestHubic:",
NilObject: (*hubic.Object)(nil),
SkipFsCheckWrap: true,
SkipObjectCheckWrap: true,
})
}

View File

@@ -314,3 +314,9 @@ type UploadResponse struct {
Deleted interface{} `json:"deleted"`
Mime string `json:"mime"`
}
// DeviceRegistrationResponse is the response to registering a device
type DeviceRegistrationResponse struct {
ClientID string `json:"client_id"`
ClientSecret string `json:"client_secret"`
}

View File

@@ -8,6 +8,7 @@ import (
"io"
"io/ioutil"
"log"
"math/rand"
"net/http"
"net/url"
"os"
@@ -45,10 +46,14 @@ const (
apiURL = "https://api.jottacloud.com/files/v1/"
baseURL = "https://www.jottacloud.com/"
tokenURL = "https://api.jottacloud.com/auth/v1/token"
registerURL = "https://api.jottacloud.com/auth/v1/register"
cachePrefix = "rclone-jcmd5-"
rcloneClientID = "nibfk8biu12ju7hpqomr8b1e40"
rcloneEncryptedClientSecret = "Vp8eAv7eVElMnQwN-kgU9cbhgApNDaMqWdlDi5qFydlQoji4JBxrGMF2"
configUsername = "user"
configClientID = "client_id"
configClientSecret = "client_secret"
charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
)
var (
@@ -58,9 +63,7 @@ var (
AuthURL: tokenURL,
TokenURL: tokenURL,
},
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
RedirectURL: oauthutil.RedirectLocalhostURL,
}
)
@@ -79,6 +82,55 @@ func init() {
}
}
srv := rest.NewClient(fshttp.NewClient(fs.Config))
// ask if we should create a device specific token: https://github.com/ncw/rclone/issues/2995
fmt.Printf("\nDo you want to create a machine specific API key?\n\nRclone has its own Jottacloud API KEY which works fine as long as one only uses rclone on a single machine. When you want to use rclone with this account on more than one machine it's recommended to create a machine specific API key. These keys can NOT be shared between machines.\n\n")
if config.Confirm() {
// random generator to generate random device names
seededRand := rand.New(rand.NewSource(time.Now().UnixNano()))
randomDeviceNamePartLength := 21
randomDeviceNamePart := make([]byte, randomDeviceNamePartLength)
for i := range randomDeviceNamePart {
randomDeviceNamePart[i] = charset[seededRand.Intn(len(charset))]
}
randomDeviceName := "rclone-" + string(randomDeviceNamePart)
fs.Debugf(nil, "Trying to register device '%s'", randomDeviceName)
values := url.Values{}
values.Set("device_id", randomDeviceName)
// all information comes from https://github.com/ttyridal/aiojotta/wiki/Jotta-protocol-3.-Authentication#token-authentication
opts := rest.Opts{
Method: "POST",
RootURL: registerURL,
ContentType: "application/x-www-form-urlencoded",
ExtraHeaders: map[string]string{"Authorization": "Bearer c2xrZmpoYWRsZmFramhkc2xma2phaHNkbGZramhhc2xkZmtqaGFzZGxrZmpobGtq"},
Parameters: values,
}
var deviceRegistration api.DeviceRegistrationResponse
_, err := srv.CallJSON(&opts, nil, &deviceRegistration)
if err != nil {
log.Fatalf("Failed to register device: %v", err)
}
m.Set(configClientID, deviceRegistration.ClientID)
m.Set(configClientSecret, obscure.MustObscure(deviceRegistration.ClientSecret))
fs.Debugf(nil, "Got clientID '%s' and clientSecret '%s'", deviceRegistration.ClientID, deviceRegistration.ClientSecret)
}
clientID, ok := m.Get(configClientID)
if !ok {
clientID = rcloneClientID
}
clientSecret, ok := m.Get(configClientSecret)
if !ok {
clientSecret = rcloneEncryptedClientSecret
}
oauthConfig.ClientID = clientID
oauthConfig.ClientSecret = obscure.MustReveal(clientSecret)
username, ok := m.Get(configUsername)
if !ok {
log.Fatalf("No username defined")
@@ -86,7 +138,6 @@ func init() {
password := config.GetPassword("Your Jottacloud password is only required during config and will not be stored.")
// prepare out token request with username and password
srv := rest.NewClient(fshttp.NewClient(fs.Config))
values := url.Values{}
values.Set("grant_type", "PASSWORD")
values.Set("password", password)
@@ -381,6 +432,17 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
rootIsDir := strings.HasSuffix(root, "/")
root = parsePath(root)
clientID, ok := m.Get(configClientID)
if !ok {
clientID = rcloneClientID
}
clientSecret, ok := m.Get(configClientSecret)
if !ok {
clientSecret = rcloneEncryptedClientSecret
}
oauthConfig.ClientID = clientID
oauthConfig.ClientSecret = obscure.MustReveal(clientSecret)
// add jottacloud to the long list of sites that don't follow the oauth spec correctly
oauth2.RegisterBrokenAuthHeaderProvider("https://www.jottacloud.com/")

View File

@@ -402,6 +402,27 @@ func (f *Fs) clearRoot() {
//log.Printf("cleared root directory")
}
// CleanUp deletes all files currently in trash
func (f *Fs) CleanUp() (err error) {
trash := f.srv.FS.GetTrash()
items := []*mega.Node{}
_, err = f.list(trash, func(item *mega.Node) bool {
items = append(items, item)
return false
})
if err != nil {
return errors.Wrap(err, "CleanUp failed to list items in trash")
}
// similar to f.deleteNode(trash) but with HardDelete as true
for _, item := range items {
err = f.pacer.Call(func() (bool, error) {
err = f.srv.Delete(item, true)
return shouldRetry(err)
})
}
return err
}
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.

View File

@@ -14,6 +14,8 @@ import (
"strings"
"time"
"github.com/ncw/rclone/lib/atexit"
"github.com/ncw/rclone/backend/onedrive/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
@@ -335,8 +337,13 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
// readMetaDataForPathRelativeToID reads the metadata for a path relative to an item that is addressed by its normalized ID.
// if `relPath` == "", it reads the metadata for the item with that ID.
//
// We address items using the pattern `drives/driveID/items/itemID:/relativePath`
// instead of simply using `drives/driveID/root:/itemPath` because it works for
// "shared with me" folders in OneDrive Personal (See #2536, #2778)
// This path pattern comes from https://github.com/OneDrive/onedrive-api-docs/issues/908#issuecomment-417488480
func (f *Fs) readMetaDataForPathRelativeToID(normalizedID string, relPath string) (info *api.Item, resp *http.Response, err error) {
opts := newOptsCall(normalizedID, "GET", ":/"+rest.URLPathEscape(replaceReservedChars(relPath)))
opts := newOptsCall(normalizedID, "GET", ":/"+withTrailingColon(rest.URLPathEscape(replaceReservedChars(relPath))))
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &info)
return shouldRetry(resp, err)
@@ -703,9 +710,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
id := info.GetID()
f.dirCache.Put(remote, id)
d := fs.NewDir(remote, time.Time(info.GetLastModifiedDateTime())).SetID(id)
if folder != nil {
d.SetItems(folder.ChildCount)
}
d.SetItems(folder.ChildCount)
entries = append(entries, d)
} else {
o, err := f.newObjectWithInfo(remote, info)
@@ -819,9 +824,6 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
return err
}
f.dirCache.FlushDir(dir)
if err != nil {
return err
}
return nil
}
@@ -1340,12 +1342,12 @@ func (o *Object) setModTime(modTime time.Time) (*api.Item, error) {
opts = rest.Opts{
Method: "PATCH",
RootURL: rootURL,
Path: "/" + drive + "/items/" + trueDirID + ":/" + rest.URLPathEscape(leaf),
Path: "/" + drive + "/items/" + trueDirID + ":/" + withTrailingColon(rest.URLPathEscape(leaf)),
}
} else {
opts = rest.Opts{
Method: "PATCH",
Path: "/root:/" + rest.URLPathEscape(o.srvPath()),
Path: "/root:/" + withTrailingColon(rest.URLPathEscape(o.srvPath())),
}
}
update := api.SetFileSystemInfo{
@@ -1491,23 +1493,40 @@ func (o *Object) uploadMultipart(in io.Reader, size int64, modTime time.Time) (i
return nil, errors.New("unknown-sized upload not supported")
}
uploadURLChan := make(chan string, 1)
gracefulCancel := func() {
uploadURL, ok := <-uploadURLChan
// Reading from uploadURLChan blocks the atexit process until
// we are able to use uploadURL to cancel the upload
if !ok { // createUploadSession failed - no need to cancel upload
return
}
fs.Debugf(o, "Cancelling multipart upload")
cancelErr := o.cancelUploadSession(uploadURL)
if cancelErr != nil {
fs.Logf(o, "Failed to cancel multipart upload: %v", cancelErr)
}
}
cancelFuncHandle := atexit.Register(gracefulCancel)
// Create upload session
fs.Debugf(o, "Starting multipart upload")
session, err := o.createUploadSession(modTime)
if err != nil {
close(uploadURLChan)
atexit.Unregister(cancelFuncHandle)
return nil, err
}
uploadURL := session.UploadURL
uploadURLChan <- uploadURL
// Cancel the session if something went wrong
defer func() {
if err != nil {
fs.Debugf(o, "Cancelling multipart upload: %v", err)
cancelErr := o.cancelUploadSession(uploadURL)
if cancelErr != nil {
fs.Logf(o, "Failed to cancel multipart upload: %v", err)
}
fs.Debugf(o, "Error encountered during upload: %v", err)
gracefulCancel()
}
atexit.Unregister(cancelFuncHandle)
}()
// Upload the chunks
@@ -1668,6 +1687,21 @@ func getRelativePathInsideBase(base, target string) (string, bool) {
return "", false
}
// Adds a ":" at the end of `remotePath` in a proper manner.
// If `remotePath` already ends with "/", change it to ":/"
// If `remotePath` is "", return "".
// A workaround for #2720 and #3039
func withTrailingColon(remotePath string) string {
if remotePath == "" {
return ""
}
if strings.HasSuffix(remotePath, "/") {
return remotePath[:len(remotePath)-1] + ":/"
}
return remotePath + ":"
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)

View File

@@ -287,9 +287,6 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
return err
}
f.dirCache.FlushDir(dir)
if err != nil {
return err
}
return nil
}

View File

@@ -645,6 +645,9 @@ isn't set then "acl" is used instead.`,
}, {
Value: "GLACIER",
Help: "Glacier storage class",
}, {
Value: "DEEP_ARCHIVE",
Help: "Glacier Deep Archive storage class",
}},
}, {
// Mapping from here: https://www.alibabacloud.com/help/doc-detail/64919.htm
@@ -729,6 +732,14 @@ If it is set then rclone will use v2 authentication.
Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.`,
Default: false,
Advanced: true,
}, {
Name: "use_accelerate_endpoint",
Provider: "AWS",
Help: `If true use the AWS S3 accelerated endpoint.
See: [AWS S3 Transfer acceleration](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html)`,
Default: false,
Advanced: true,
}},
})
}
@@ -749,25 +760,26 @@ const (
// Options defines the configuration for this backend
type Options struct {
Provider string `config:"provider"`
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Region string `config:"region"`
Endpoint string `config:"endpoint"`
LocationConstraint string `config:"location_constraint"`
ACL string `config:"acl"`
BucketACL string `config:"bucket_acl"`
ServerSideEncryption string `config:"server_side_encryption"`
SSEKMSKeyID string `config:"sse_kms_key_id"`
StorageClass string `config:"storage_class"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DisableChecksum bool `config:"disable_checksum"`
SessionToken string `config:"session_token"`
UploadConcurrency int `config:"upload_concurrency"`
ForcePathStyle bool `config:"force_path_style"`
V2Auth bool `config:"v2_auth"`
Provider string `config:"provider"`
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Region string `config:"region"`
Endpoint string `config:"endpoint"`
LocationConstraint string `config:"location_constraint"`
ACL string `config:"acl"`
BucketACL string `config:"bucket_acl"`
ServerSideEncryption string `config:"server_side_encryption"`
SSEKMSKeyID string `config:"sse_kms_key_id"`
StorageClass string `config:"storage_class"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DisableChecksum bool `config:"disable_checksum"`
SessionToken string `config:"session_token"`
UploadConcurrency int `config:"upload_concurrency"`
ForcePathStyle bool `config:"force_path_style"`
V2Auth bool `config:"v2_auth"`
UseAccelerateEndpoint bool `config:"use_accelerate_endpoint"`
}
// Fs represents a remote s3 server
@@ -941,14 +953,15 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
if opt.Region == "" {
opt.Region = "us-east-1"
}
if opt.Provider == "Alibaba" || opt.Provider == "Netease" {
if opt.Provider == "Alibaba" || opt.Provider == "Netease" || opt.UseAccelerateEndpoint {
opt.ForcePathStyle = false
}
awsConfig := aws.NewConfig().
WithMaxRetries(maxRetries).
WithCredentials(cred).
WithHTTPClient(fshttp.NewClient(fs.Config)).
WithS3ForcePathStyle(opt.ForcePathStyle)
WithS3ForcePathStyle(opt.ForcePathStyle).
WithS3UseAccelerate(opt.UseAccelerateEndpoint)
if opt.Region != "" {
awsConfig.WithRegion(opt.Region)
}


@@ -1,6 +1,6 @@
// Package sftp provides a filesystem interface using github.com/pkg/sftp
// +build !plan9,go1.9
// +build !plan9
package sftp
@@ -14,6 +14,7 @@ import (
"os/user"
"path"
"regexp"
"strconv"
"strings"
"sync"
"time"
@@ -813,6 +814,47 @@ func (f *Fs) Hashes() hash.Set {
return set
}
// About gets usage stats
func (f *Fs) About() (*fs.Usage, error) {
c, err := f.getSftpConnection()
if err != nil {
return nil, errors.Wrap(err, "About get SFTP connection")
}
session, err := c.sshClient.NewSession()
f.putSftpConnection(&c, err)
if err != nil {
return nil, errors.Wrap(err, "About put SFTP connection")
}
var stdout, stderr bytes.Buffer
session.Stdout = &stdout
session.Stderr = &stderr
escapedPath := shellEscape(f.root)
if f.opt.PathOverride != "" {
escapedPath = shellEscape(path.Join(f.opt.PathOverride, f.root))
}
if len(escapedPath) == 0 {
escapedPath = "/"
}
err = session.Run("df -k " + escapedPath)
if err != nil {
_ = session.Close()
return nil, errors.Wrap(err, "About invocation of df failed. Your remote may not support about.")
}
_ = session.Close()
usageTotal, usageUsed, usageAvail := parseUsage(stdout.Bytes())
if usageTotal < 0 || usageUsed < 0 || usageAvail < 0 {
return nil, errors.New("About failed to parse information")
}
usage := &fs.Usage{
Total: fs.NewUsageValue(usageTotal),
Used: fs.NewUsageValue(usageUsed),
Free: fs.NewUsageValue(usageAvail),
}
return usage, nil
}
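A hedged sketch of consuming the new About result (the logUsage helper is hypothetical; fs.Usage fields are *int64 pointers created via fs.NewUsageValue, so they must be nil-checked):
func logUsage(f *Fs) error {
	usage, err := f.About()
	if err != nil {
		return err
	}
	if usage.Total != nil && usage.Used != nil && usage.Free != nil {
		fs.Infof(f, "total %d, used %d, free %d bytes", *usage.Total, *usage.Used, *usage.Free)
	}
	return nil
}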
// Fs is the filesystem this remote sftp file object is located within
func (o *Object) Fs() fs.Info {
return o.fs
@@ -903,6 +945,34 @@ func parseHash(bytes []byte) string {
return strings.Split(string(bytes), " ")[0] // Split at hash / filename separator
}
// Parses the byte array output from the SSH session
// returned by an invocation of df into
// the disk size, used space, and available space on the disk, in that order.
// Only works when `df` has output info on only one disk
func parseUsage(bytes []byte) (int64, int64, int64) {
lines := strings.Split(string(bytes), "\n")
if len(lines) < 2 {
return -1, -1, -1
}
split := strings.Fields(lines[1])
if len(split) < 6 {
return -1, -1, -1
}
spaceTotal, err := strconv.ParseInt(split[1], 10, 64)
if err != nil {
return -1, -1, -1
}
spaceUsed, err := strconv.ParseInt(split[2], 10, 64)
if err != nil {
return -1, -1, -1
}
spaceAvail, err := strconv.ParseInt(split[3], 10, 64)
if err != nil {
return -1, -1, -1
}
return spaceTotal * 1024, spaceUsed * 1024, spaceAvail * 1024
}
// Size returns the size in bytes of the remote sftp file
func (o *Object) Size() int64 {
return o.size


@@ -1,4 +1,4 @@
// +build !plan9,go1.9
// +build !plan9
package sftp
@@ -35,3 +35,17 @@ func TestParseHash(t *testing.T) {
assert.Equal(t, test.checksum, got, fmt.Sprintf("Test %d sshOutput = %q", i, test.sshOutput))
}
}
func TestParseUsage(t *testing.T) {
for i, test := range []struct {
sshOutput string
usage [3]int64
}{
{"Filesystem 1K-blocks Used Available Use% Mounted on\n/dev/root 91283092 81111888 10154820 89% /", [3]int64{93473886208, 83058573312, 10398535680}},
{"Filesystem 1K-blocks Used Available Use% Mounted on\ntmpfs 818256 1636 816620 1% /run", [3]int64{837894144, 1675264, 836218880}},
{"Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted on\n/dev/disk0s2 244277768 94454848 149566920 39% 997820 4293969459 0% /", [3]int64{250140434432, 96721764352, 153156526080}},
} {
gotSpaceTotal, gotSpaceUsed, gotSpaceAvail := parseUsage([]byte(test.sshOutput))
assert.Equal(t, test.usage, [3]int64{gotSpaceTotal, gotSpaceUsed, gotSpaceAvail}, fmt.Sprintf("Test %d sshOutput = %q", i, test.sshOutput))
}
}


@@ -1,6 +1,6 @@
// Test Sftp filesystem interface
// +build !plan9,go1.9
// +build !plan9
package sftp_test


@@ -1,6 +1,6 @@
// Build for sftp for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build plan9 !go1.9
// +build plan9
package sftp


@@ -1,4 +1,4 @@
// +build !plan9,go1.9
// +build !plan9
package sftp


@@ -1,4 +1,4 @@
// +build !plan9,go1.9
// +build !plan9
package sftp


@@ -2,6 +2,7 @@ package swift
import (
"net/http"
"time"
"github.com/ncw/swift"
)
@@ -65,6 +66,14 @@ func (a *auth) Token() string {
return a.parentAuth.Token()
}
// Expires returns the time the token expires if known or Zero if not.
func (a *auth) Expires() (t time.Time) {
if do, ok := a.parentAuth.(swift.Expireser); ok {
t = do.Expires()
}
return t
}
// The CDN url if available
func (a *auth) CdnUrl() string { // nolint
if a.parentAuth == nil {
@@ -74,4 +83,7 @@ func (a *auth) CdnUrl() string { // nolint
}
// Check the interfaces are satisfied
var _ swift.Authenticator = (*auth)(nil)
var (
_ swift.Authenticator = (*auth)(nil)
_ swift.Expireser = (*auth)(nil)
)


@@ -286,6 +286,31 @@ func shouldRetry(err error) (bool, error) {
return fserrors.ShouldRetry(err), err
}
// shouldRetryHeaders returns a boolean as to whether this err
// deserves to be retried. It reads the headers passed in looking for
// `Retry-After`. It returns the err as a convenience
func shouldRetryHeaders(headers swift.Headers, err error) (bool, error) {
if swiftError, ok := err.(*swift.Error); ok && swiftError.StatusCode == 429 {
if value := headers["Retry-After"]; value != "" {
retryAfter, parseErr := strconv.Atoi(value)
if parseErr != nil {
fs.Errorf(nil, "Failed to parse Retry-After: %q: %v", value, parseErr)
} else {
duration := time.Second * time.Duration(retryAfter)
if duration <= 60*time.Second {
// Do a short sleep immediately
fs.Debugf(nil, "Sleeping for %v to obey Retry-After", duration)
time.Sleep(duration)
return true, err
}
// Delay a long sleep for a retry
return false, fserrors.NewErrorRetryAfter(duration)
}
}
}
return shouldRetry(err)
}
// Pattern to match a swift path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
@@ -413,8 +438,9 @@ func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, n
// Check to see if the object exists - ignoring directory markers
var info swift.Object
err = f.pacer.Call(func() (bool, error) {
info, _, err = f.c.Object(container, directory)
return shouldRetry(err)
var rxHeaders swift.Headers
info, rxHeaders, err = f.c.Object(container, directory)
return shouldRetryHeaders(rxHeaders, err)
})
if err == nil && info.ContentType != directoryMarkerContentType {
f.root = path.Dir(directory)
@@ -723,8 +749,9 @@ func (f *Fs) Mkdir(dir string) error {
var err error = swift.ContainerNotFound
if !f.noCheckContainer {
err = f.pacer.Call(func() (bool, error) {
_, _, err = f.c.Container(f.container)
return shouldRetry(err)
var rxHeaders swift.Headers
_, rxHeaders, err = f.c.Container(f.container)
return shouldRetryHeaders(rxHeaders, err)
})
}
if err == swift.ContainerNotFound {
@@ -816,8 +843,9 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
}
srcFs := srcObj.fs
err = f.pacer.Call(func() (bool, error) {
_, err = f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil)
return shouldRetry(err)
var rxHeaders swift.Headers
rxHeaders, err = f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil)
return shouldRetryHeaders(rxHeaders, err)
})
if err != nil {
return nil, err
@@ -927,7 +955,7 @@ func (o *Object) readMetaData() (err error) {
var h swift.Headers
err = o.fs.pacer.Call(func() (bool, error) {
info, h, err = o.fs.c.Object(o.fs.container, o.fs.root+o.remote)
return shouldRetry(err)
return shouldRetryHeaders(h, err)
})
if err != nil {
if err == swift.ObjectNotFound {
@@ -1002,8 +1030,9 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
headers := fs.OpenOptionHeaders(options)
_, isRanging := headers["Range"]
err = o.fs.pacer.Call(func() (bool, error) {
in, _, err = o.fs.c.ObjectOpen(o.fs.container, o.fs.root+o.remote, !isRanging, headers)
return shouldRetry(err)
var rxHeaders swift.Headers
in, rxHeaders, err = o.fs.c.ObjectOpen(o.fs.container, o.fs.root+o.remote, !isRanging, headers)
return shouldRetryHeaders(rxHeaders, err)
})
return
}
@@ -1074,8 +1103,9 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
// Create the segmentsContainer if it doesn't exist
var err error
err = o.fs.pacer.Call(func() (bool, error) {
_, _, err = o.fs.c.Container(o.fs.segmentsContainer)
return shouldRetry(err)
var rxHeaders swift.Headers
_, rxHeaders, err = o.fs.c.Container(o.fs.segmentsContainer)
return shouldRetryHeaders(rxHeaders, err)
})
if err == swift.ContainerNotFound {
headers := swift.Headers{}
@@ -1115,8 +1145,9 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
segmentPath := fmt.Sprintf("%s/%08d", segmentsPath, i)
fs.Debugf(o, "Uploading segment file %q into %q", segmentPath, o.fs.segmentsContainer)
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
_, err = o.fs.c.ObjectPut(o.fs.segmentsContainer, segmentPath, segmentReader, true, "", "", headers)
return shouldRetry(err)
var rxHeaders swift.Headers
rxHeaders, err = o.fs.c.ObjectPut(o.fs.segmentsContainer, segmentPath, segmentReader, true, "", "", headers)
return shouldRetryHeaders(rxHeaders, err)
})
if err != nil {
return "", err
@@ -1129,8 +1160,9 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
emptyReader := bytes.NewReader(nil)
manifestName := o.fs.root + o.remote
err = o.fs.pacer.Call(func() (bool, error) {
_, err = o.fs.c.ObjectPut(o.fs.container, manifestName, emptyReader, true, "", contentType, headers)
return shouldRetry(err)
var rxHeaders swift.Headers
rxHeaders, err = o.fs.c.ObjectPut(o.fs.container, manifestName, emptyReader, true, "", contentType, headers)
return shouldRetryHeaders(rxHeaders, err)
})
return uniquePrefix + "/", err
}
@@ -1174,7 +1206,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
var rxHeaders swift.Headers
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
rxHeaders, err = o.fs.c.ObjectPut(o.fs.container, o.fs.root+o.remote, in, true, "", contentType, headers)
return shouldRetry(err)
return shouldRetryHeaders(rxHeaders, err)
})
if err != nil {
return err


@@ -1,6 +1,13 @@
package swift
import "testing"
import (
"testing"
"time"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/swift"
"github.com/stretchr/testify/assert"
)
func TestInternalUrlEncode(t *testing.T) {
for _, test := range []struct {
@@ -23,3 +30,37 @@ func TestInternalUrlEncode(t *testing.T) {
}
}
}
func TestInternalShouldRetryHeaders(t *testing.T) {
headers := swift.Headers{
"Content-Length": "64",
"Content-Type": "text/html; charset=UTF-8",
"Date": "Mon: 18 Mar 2019 12:11:23 GMT",
"Retry-After": "1",
}
err := &swift.Error{
StatusCode: 429,
Text: "Too Many Requests",
}
// Short sleep should just do the sleep
start := time.Now()
retry, gotErr := shouldRetryHeaders(headers, err)
dt := time.Since(start)
assert.True(t, retry)
assert.Equal(t, err, gotErr)
assert.True(t, dt > time.Second/2)
// Long sleep should return RetryError
headers["Retry-After"] = "3600"
start = time.Now()
retry, gotErr = shouldRetryHeaders(headers, err)
dt = time.Since(start)
assert.True(t, dt < time.Second)
assert.False(t, retry)
assert.Equal(t, true, fserrors.IsRetryAfterError(gotErr))
after := gotErr.(fserrors.RetryAfter).RetryAfter()
dt = after.Sub(start)
assert.True(t, dt >= time.Hour-time.Second && dt <= time.Hour+time.Second)
}


@@ -644,10 +644,18 @@ func (f *Fs) _mkdir(dirPath string) error {
Path: dirPath,
NoResponse: true,
}
return f.pacer.Call(func() (bool, error) {
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(&opts)
return shouldRetry(resp, err)
})
if apiErr, ok := err.(*api.Error); ok {
// already exists
// owncloud returns 423/StatusLocked if the create is already in progress
if apiErr.StatusCode == http.StatusMethodNotAllowed || apiErr.StatusCode == http.StatusNotAcceptable || apiErr.StatusCode == http.StatusLocked {
return nil
}
}
return err
}
// mkdir makes the directory and parents using native paths
@@ -655,12 +663,7 @@ func (f *Fs) mkdir(dirPath string) error {
// defer log.Trace(dirPath, "")("")
err := f._mkdir(dirPath)
if apiErr, ok := err.(*api.Error); ok {
// already exists
// owncloud returns 423/StatusLocked if the create is already in progress
if apiErr.StatusCode == http.StatusMethodNotAllowed || apiErr.StatusCode == http.StatusNotAcceptable || apiErr.StatusCode == http.StatusLocked {
return nil
}
// parent does not exist
// parent does not exist so create it first then try again
if apiErr.StatusCode == http.StatusConflict {
err = f.mkParentDir(dirPath)
if err == nil {
@@ -913,11 +916,13 @@ func (f *Fs) About() (*fs.Usage, error) {
return nil, errors.Wrap(err, "about call failed")
}
usage := &fs.Usage{}
if q.Available >= 0 && q.Used >= 0 {
usage.Total = fs.NewUsageValue(q.Available + q.Used)
}
if q.Used >= 0 {
usage.Used = fs.NewUsageValue(q.Used)
if q.Available != 0 || q.Used != 0 {
if q.Available >= 0 && q.Used >= 0 {
usage.Total = fs.NewUsageValue(q.Available + q.Used)
}
if q.Used >= 0 {
usage.Used = fs.NewUsageValue(q.Used)
}
}
return usage, nil
}


@@ -244,8 +244,10 @@ func getAssetFromReleasesPage(project string, matchName *regexp.Regexp) (assetUR
if a.Key == "href" {
if name := path.Base(a.Val); matchName.MatchString(name) && isOurOsArch(name) {
if u, err := rest.URLJoin(base, a.Val); err == nil {
assetName = name
assetURL = u.String()
if assetName == "" {
assetName = name
assetURL = u.String()
}
}
}
break


@@ -230,22 +230,31 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
SigInfoHandler()
for try := 1; try <= *retries; try++ {
err = f()
if !Retry || (err == nil && !accounting.Stats.Errored()) {
fs.CountError(err)
if !Retry || !accounting.Stats.Errored() {
if try > 1 {
fs.Errorf(nil, "Attempt %d/%d succeeded", try, *retries)
}
break
}
if fserrors.IsFatalError(err) || accounting.Stats.HadFatalError() {
if accounting.Stats.HadFatalError() {
fs.Errorf(nil, "Fatal error received - not attempting retries")
break
}
if fserrors.IsNoRetryError(err) || (accounting.Stats.Errored() && !accounting.Stats.HadRetryError()) {
if accounting.Stats.Errored() && !accounting.Stats.HadRetryError() {
fs.Errorf(nil, "Can't retry this error - not attempting retries")
break
}
if err != nil {
fs.Errorf(nil, "Attempt %d/%d failed with %d errors and: %v", try, *retries, accounting.Stats.GetErrors(), err)
if retryAfter := accounting.Stats.RetryAfter(); !retryAfter.IsZero() {
d := retryAfter.Sub(time.Now())
if d > 0 {
fs.Logf(nil, "Received retry after error - sleeping until %s (%v)", retryAfter.Format(time.RFC3339Nano), d)
time.Sleep(d)
}
}
lastErr := accounting.Stats.GetLastError()
if lastErr != nil {
fs.Errorf(nil, "Attempt %d/%d failed with %d errors and: %v", try, *retries, accounting.Stats.GetErrors(), lastErr)
} else {
fs.Errorf(nil, "Attempt %d/%d failed with %d errors", try, *retries, accounting.Stats.GetErrors())
}


@@ -388,7 +388,11 @@ func (fsys *FS) Release(path string, fh uint64) (errc int) {
return errc
}
_ = fsys.closeHandle(fh)
return translateError(handle.Release())
// Run the Release asynchronously, ignoring errors
go func() {
_ = handle.Release()
}()
return 0
}
// Unlink removes a file.


@@ -127,7 +127,7 @@ func waitFor(fn func() bool) (ok bool) {
func mount(f fs.Fs, mountpoint string) (*vfs.VFS, <-chan error, func() error, error) {
fs.Debugf(f, "Mounting on %q", mountpoint)
// Check the mountpoint - in Windows the mountpoint musn't exist before the mount
// Check the mountpoint - in Windows the mountpoint mustn't exist before the mount
if runtime.GOOS != "windows" {
fi, err := os.Stat(mountpoint)
if err != nil {


@@ -45,7 +45,7 @@ __rclone_custom_func() {
else
__rclone_init_completion -n : || return
fi
if [[ $cur =~ ^[[:alnum:]_]*$ ]]; then
if [[ $cur != *:* ]]; then
local remote
while IFS= read -r remote; do
[[ $remote != $cur* ]] || COMPREPLY+=("$remote")
@@ -54,10 +54,10 @@ __rclone_custom_func() {
local paths=("$cur"*)
[[ ! -f ${paths[0]} ]] || COMPREPLY+=("${paths[@]}")
fi
elif [[ $cur =~ ^[[:alnum:]_]+: ]]; then
else
local path=${cur#*:}
if [[ $path == */* ]]; then
local prefix=${path%/*}
local prefix=$(eval printf '%s' "${path%/*}")
else
local prefix=
fi
@@ -66,6 +66,7 @@ __rclone_custom_func() {
local reply=${prefix:+$prefix/}$line
[[ $reply != $path* ]] || COMPREPLY+=("$reply")
done < <(rclone lsf "${cur%%:*}:$prefix" 2>/dev/null)
[[ ! ${COMPREPLY[@]} ]] || compopt -o filenames
fi
[[ ! ${COMPREPLY[@]} ]] || compopt -o nospace
fi


@@ -2,7 +2,7 @@ package lshelp
// Help describes the common help for all the list commands
var Help = `
Any of the filtering options can be applied to this commmand.
Any of the filtering options can be applied to this command.
There are several related list commands


@@ -70,6 +70,7 @@ output:
o - Original ID of underlying object
m - MimeType of object if known
e - encrypted name
T - tier of storage if known, eg "Hot" or "Cool"
So if you wanted the path, size and modification time, you would use
--format "pst", or maybe --format "tsp" to put the path last.
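For instance, to also show the storage tier you could try something like `rclone lsf --format "pstT" remote:path` (a hedged example; which tiers appear depends on the backend).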
@@ -164,6 +165,8 @@ func Lsf(fsrc fs.Fs, out io.Writer) error {
list.SetAbsolute(absolute)
var opt = operations.ListJSONOpt{
NoModTime: true,
DirsOnly: dirsOnly,
FilesOnly: filesOnly,
Recurse: recurse,
}
@@ -189,21 +192,14 @@ func Lsf(fsrc fs.Fs, out io.Writer) error {
case 'o':
list.AddOrigID()
opt.ShowOrigIDs = true
case 'T':
list.AddTier()
default:
return errors.Errorf("Unknown format character %q", char)
}
}
return operations.ListJSON(fsrc, "", &opt, func(item *operations.ListJSONItem) error {
if item.IsDir {
if filesOnly {
return nil
}
} else {
if dirsOnly {
return nil
}
}
_, _ = fmt.Fprintln(out, list.Format(item))
return nil
})


@@ -23,6 +23,8 @@ func init() {
commandDefintion.Flags().BoolVarP(&opt.NoModTime, "no-modtime", "", false, "Don't read the modification time (can speed things up).")
commandDefintion.Flags().BoolVarP(&opt.ShowEncrypted, "encrypted", "M", false, "Show the encrypted names.")
commandDefintion.Flags().BoolVarP(&opt.ShowOrigIDs, "original", "", false, "Show the ID of the underlying Object.")
commandDefintion.Flags().BoolVarP(&opt.FilesOnly, "files-only", "", false, "Show only files in the listing.")
commandDefintion.Flags().BoolVarP(&opt.DirsOnly, "dirs-only", "", false, "Show only directories in the listing.")
}
var commandDefintion = &cobra.Command{
@@ -55,6 +57,10 @@ If --no-modtime is specified then ModTime will be blank.
If --encrypted is not specified the Encrypted field won't be emitted.
If --dirs-only is not specified, files in addition to directories are returned.
If --files-only is not specified, directories in addition to the files will be returned.
The Path field will only show folders below the remote path being listed.
If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt"
will be "subfolder/file.txt", not "remote:path/subfolder/file.txt".
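For example, with the layout above, `rclone lsjson --files-only remote:path` might emit a line like the following for each file (field values illustrative, optional fields omitted):
{"Path":"subfolder/file.txt","Name":"file.txt","Size":6,"ModTime":"2019-04-13T11:00:52.000000000Z","IsDir":false}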


@@ -3,6 +3,7 @@
package mount
import (
"context"
"os"
"time"
@@ -12,7 +13,6 @@ import (
"github.com/ncw/rclone/fs/log"
"github.com/ncw/rclone/vfs"
"github.com/pkg/errors"
"golang.org/x/net/context" // switch to "context" when we stop supporting go1.8
)
// Dir represents a directory entry
@@ -20,7 +20,7 @@ type Dir struct {
*vfs.Dir
}
// Check interface satsified
// Check interface satisfied
var _ fusefs.Node = (*Dir)(nil)
// Attr updates the attributes of a directory


@@ -3,6 +3,7 @@
package mount
import (
"context"
"io"
"time"
@@ -11,7 +12,6 @@ import (
"github.com/ncw/rclone/cmd/mountlib"
"github.com/ncw/rclone/fs/log"
"github.com/ncw/rclone/vfs"
"golang.org/x/net/context" // switch to "context" when we stop supporting go1.8
)
// File represents a file


@@ -5,6 +5,7 @@
package mount
import (
"context"
"syscall"
"bazil.org/fuse"
@@ -15,7 +16,6 @@ import (
"github.com/ncw/rclone/vfs"
"github.com/ncw/rclone/vfs/vfsflags"
"github.com/pkg/errors"
"golang.org/x/net/context" // switch to "context" when we stop supporting go1.8
)
// FS represents the top level filing system
@@ -24,7 +24,7 @@ type FS struct {
f fs.Fs
}
// Check interface satistfied
// Check interface satisfied
var _ fusefs.FS = (*FS)(nil)
// NewFS makes a new FS
@@ -46,7 +46,7 @@ func (f *FS) Root() (node fusefs.Node, err error) {
return &Dir{root}, nil
}
// Check interface satsified
// Check interface satisfied
var _ fusefs.FSStatfser = (*FS)(nil)
// Statfs is called to obtain file system metadata.


@@ -3,13 +3,13 @@
package mount
import (
"context"
"io"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/ncw/rclone/fs/log"
"github.com/ncw/rclone/vfs"
"golang.org/x/net/context" // switch to "context" when we stop supporting go1.8
)
// FileHandle is an open for read file handle on a File
@@ -80,5 +80,9 @@ var _ fusefs.HandleReleaser = (*FileHandle)(nil)
// the kernel
func (fh *FileHandle) Release(ctx context.Context, req *fuse.ReleaseRequest) (err error) {
defer log.Trace(fh, "")("err=%v", &err)
return translateError(fh.Handle.Release())
// Run the Release asynchronously, ignoring errors
go func() {
_ = fh.Handle.Release()
}()
return nil
}


@@ -19,7 +19,7 @@ If source:path is a file or directory then it moves it to a file or
directory named dest:path.
This can be used to rename files or upload single files to other than
their existing name. If the source is a directory then it acts exacty
their existing name. If the source is a directory then it acts exactly
like the move command.
So


@@ -361,7 +361,7 @@ func (u *UI) Draw() error {
Linef(0, h-1, w, termbox.ColorBlack, termbox.ColorWhite, ' ', "Total usage: %v, Objects: %d%s", fs.SizeSuffix(size), count, message)
}
// Show the box on top if requred
// Show the box on top if required
if u.showBox {
u.Box()
}


@@ -82,7 +82,7 @@ var (
progressMu sync.Mutex
)
// printProgress prings the progress with an optional log
// printProgress prints the progress with an optional log
func printProgress(logMessage string) {
progressMu.Lock()
defer progressMu.Unlock()


@@ -0,0 +1,184 @@
package dlna
const connectionManagerServiceDescription = `<?xml version="1.0" encoding="UTF-8"?>
<scpd xmlns="urn:schemas-upnp-org:service-1-0">
<specVersion>
<major>1</major>
<minor>0</minor>
</specVersion>
<actionList>
<action>
<name>GetProtocolInfo</name>
<argumentList>
<argument>
<name>Source</name>
<direction>out</direction>
<relatedStateVariable>SourceProtocolInfo</relatedStateVariable>
</argument>
<argument>
<name>Sink</name>
<direction>out</direction>
<relatedStateVariable>SinkProtocolInfo</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>PrepareForConnection</name>
<argumentList>
<argument>
<name>RemoteProtocolInfo</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ProtocolInfo</relatedStateVariable>
</argument>
<argument>
<name>PeerConnectionManager</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionManager</relatedStateVariable>
</argument>
<argument>
<name>PeerConnectionID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
<argument>
<name>Direction</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Direction</relatedStateVariable>
</argument>
<argument>
<name>ConnectionID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
<argument>
<name>AVTransportID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_AVTransportID</relatedStateVariable>
</argument>
<argument>
<name>RcsID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_RcsID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>ConnectionComplete</name>
<argumentList>
<argument>
<name>ConnectionID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetCurrentConnectionIDs</name>
<argumentList>
<argument>
<name>ConnectionIDs</name>
<direction>out</direction>
<relatedStateVariable>CurrentConnectionIDs</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetCurrentConnectionInfo</name>
<argumentList>
<argument>
<name>ConnectionID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
<argument>
<name>RcsID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_RcsID</relatedStateVariable>
</argument>
<argument>
<name>AVTransportID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_AVTransportID</relatedStateVariable>
</argument>
<argument>
<name>ProtocolInfo</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ProtocolInfo</relatedStateVariable>
</argument>
<argument>
<name>PeerConnectionManager</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionManager</relatedStateVariable>
</argument>
<argument>
<name>PeerConnectionID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
<argument>
<name>Direction</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Direction</relatedStateVariable>
</argument>
<argument>
<name>Status</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionStatus</relatedStateVariable>
</argument>
</argumentList>
</action>
</actionList>
<serviceStateTable>
<stateVariable sendEvents="yes">
<name>SourceProtocolInfo</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>SinkProtocolInfo</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>CurrentConnectionIDs</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ConnectionStatus</name>
<dataType>string</dataType>
<allowedValueList>
<allowedValue>OK</allowedValue>
<allowedValue>ContentFormatMismatch</allowedValue>
<allowedValue>InsufficientBandwidth</allowedValue>
<allowedValue>UnreliableChannel</allowedValue>
<allowedValue>Unknown</allowedValue>
</allowedValueList>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ConnectionManager</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Direction</name>
<dataType>string</dataType>
<allowedValueList>
<allowedValue>Input</allowedValue>
<allowedValue>Output</allowedValue>
</allowedValueList>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ProtocolInfo</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ConnectionID</name>
<dataType>i4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_AVTransportID</name>
<dataType>i4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_RcsID</name>
<dataType>i4</dataType>
</stateVariable>
</serviceStateTable>
</scpd>`


@@ -84,6 +84,21 @@ var services = []*service{
},
SCPD: contentDirectoryServiceDescription,
},
{
Service: upnp.Service{
ServiceType: "urn:schemas-upnp-org:service:ConnectionManager:1",
ServiceId: "urn:upnp-org:serviceId:ConnectionManager",
ControlURL: serviceControlURL,
},
SCPD: connectionManagerServiceDescription,
},
}
func init() {
for _, s := range services {
p := path.Join("/scpd", s.ServiceId)
s.SCPDURL = p
}
}
func devices() []string {
@@ -250,9 +265,6 @@ func (s *server) initMux(mux *http.ServeMux) {
// Install handlers to serve SCPD for each UPnP service.
for _, s := range services {
p := path.Join("/scpd", s.ServiceId)
s.SCPDURL = p
mux.HandleFunc(s.SCPDURL, func(serviceDesc string) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("content-type", `text/xml; charset="utf-8"`)


@@ -1,5 +1,3 @@
// +build go1.8
package dlna
import (
@@ -22,11 +20,11 @@ import (
var (
dlnaServer *server
testURL string
)
const (
testBindAddress = "localhost:51777"
testURL = "http://" + testBindAddress + "/"
testBindAddress = "localhost:0"
)
func startServer(t *testing.T, f fs.Fs) {
@@ -34,6 +32,7 @@ func startServer(t *testing.T, f fs.Fs) {
opt.ListenAddr = testBindAddress
dlnaServer = newServer(f, &opt)
assert.NoError(t, dlnaServer.Serve())
testURL = "http://" + dlnaServer.HTTPConn.Addr().String() + "/"
}
func TestInit(t *testing.T) {
@@ -59,6 +58,11 @@ func TestRootSCPD(t *testing.T) {
// Make sure that the SCPD contains a CDS service.
require.Contains(t, string(body),
"<serviceType>urn:schemas-upnp-org:service:ContentDirectory:1</serviceType>")
// Make sure that the SCPD contains a CM service.
require.Contains(t, string(body),
"<serviceType>urn:schemas-upnp-org:service:ConnectionManager:1</serviceType>")
// Ensure that the SCPD url is configured.
require.Regexp(t, "<SCPDURL>/.*</SCPDURL>", string(body))
}
// Make sure that it serves content from the remote.


@@ -78,6 +78,7 @@ func newServer(f fs.Fs, opt *ftpopt.Options) (*server, error) {
},
Hostname: host,
Port: portNum,
PublicIp: opt.PublicIP,
PassivePorts: opt.PassivePorts,
Auth: &Auth{
BasicUser: opt.BasicUser,
@@ -155,7 +156,7 @@ func (f *DriverFactory) NewDriver() (ftp.Driver, error) {
}, nil
}
//Driver impletation of ftp server
//Driver implementation of ftp server
type Driver struct {
vfs *vfs.VFS
lock sync.Mutex
@@ -378,7 +379,7 @@ func (d *Driver) PutFile(path string, data io.Reader, appendData bool) (n int64,
return bytes, nil
}
//FileInfo struct ot hold file infor for ftp server
//FileInfo struct to hold file info for ftp server
type FileInfo struct {
os.FileInfo
@@ -387,7 +388,7 @@ type FileInfo struct {
group uint32
}
//Mode return êrm mode of file.
//Mode return mode of file.
func (f *FileInfo) Mode() os.FileMode {
return f.mode
}
@@ -407,7 +408,7 @@ func (f *FileInfo) Group() string {
str := fmt.Sprint(f.group)
g, err := user.LookupGroupId(str)
if err != nil {
return str //Group not found default to numrical value
return str //Group not found default to numerical value
}
return g.Name
}


@@ -16,6 +16,7 @@ var (
func AddFlagsPrefix(flagSet *pflag.FlagSet, prefix string, Opt *ftpopt.Options) {
rc.AddOption("ftp", &Opt)
flags.StringVarP(flagSet, &Opt.ListenAddr, prefix+"addr", "", Opt.ListenAddr, "IPaddress:Port or :Port to bind server to.")
flags.StringVarP(flagSet, &Opt.PublicIP, prefix+"public-ip", "", Opt.PublicIP, "Public IP address to advertise for passive connections.")
flags.StringVarP(flagSet, &Opt.PassivePorts, prefix+"passive-port", "", Opt.PassivePorts, "Passive port range to use.")
flags.StringVarP(flagSet, &Opt.BasicUser, prefix+"user", "", Opt.BasicUser, "User name for authentication.")
flags.StringVarP(flagSet, &Opt.BasicPass, prefix+"pass", "", Opt.BasicPass, "Password for authentication. (empty value allow every password)")


@@ -24,6 +24,7 @@ You can set a single username and password with the --user and --pass flags.
type Options struct {
//TODO add more options
ListenAddr string // Port to listen on
PublicIP string // Public IP address to advertise for passive connections
PassivePorts string // Passive ports range
BasicUser string // single username for basic auth if not using Htpasswd
BasicPass string // password for BasicUser
@@ -32,6 +33,7 @@ type Options struct {
// DefaultOpt is the default values used for Options
var DefaultOpt = Options{
ListenAddr: "localhost:2121",
PublicIP: "",
PassivePorts: "30000-32000",
BasicUser: "anonymous",
BasicPass: "",
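A hedged sketch of wiring the new option up programmatically (the IP address is hypothetical):
// Illustrative only - advertise a public IP for passive mode
opt := ftpopt.DefaultOpt
opt.PublicIP = "203.0.113.10"    // hypothetical address returned in PASV replies
opt.PassivePorts = "30000-32000" // this range must be reachable at that IP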


@@ -1,11 +1,8 @@
// +build go1.8
package http
import (
"flag"
"io/ioutil"
"net"
"net/http"
"strings"
"testing"
@@ -23,11 +20,11 @@ import (
var (
updateGolden = flag.Bool("updategolden", false, "update golden files for regression test")
httpServer *server
testURL string
)
const (
testBindAddress = "localhost:51777"
testURL = "http://" + testBindAddress + "/"
testBindAddress = "localhost:0"
)
func startServer(t *testing.T, f fs.Fs) {
@@ -35,13 +32,14 @@ func startServer(t *testing.T, f fs.Fs) {
opt.ListenAddr = testBindAddress
httpServer = newServer(f, &opt)
assert.NoError(t, httpServer.Serve())
testURL = httpServer.Server.URL()
// try to connect to the test server
pause := time.Millisecond
for i := 0; i < 10; i++ {
conn, err := net.Dial("tcp", testBindAddress)
resp, err := http.Head(testURL)
if err == nil {
_ = conn.Close()
_ = resp.Body.Close()
return
}
// t.Logf("couldn't connect, sleeping for %v: %v", pause, err)


@@ -1,21 +0,0 @@
// HTTP parts go1.8+
//+build go1.8
package httplib
import (
"net/http"
"time"
)
// Initialise the http.Server for post go1.8
func initServer(s *http.Server) {
s.ReadHeaderTimeout = 10 * time.Second // time to send the headers
s.IdleTimeout = 60 * time.Second // time to keep idle connections open
}
// closeServer closes the server in a non graceful way
func closeServer(s *http.Server) error {
return s.Close()
}


@@ -1,18 +0,0 @@
// HTTP parts pre go1.8
//+build !go1.8
package httplib
import (
"net/http"
)
// Initialise the http.Server for pre go1.8
func initServer(s *http.Server) {
}
// closeServer closes the server in a non graceful way
func closeServer(s *http.Server) error {
return nil
}


@@ -180,17 +180,17 @@ func NewServer(handler http.Handler, opt *Options) *Server {
// FIXME make a transport?
s.httpServer = &http.Server{
Addr: s.Opt.ListenAddr,
Handler: handler,
ReadTimeout: s.Opt.ServerReadTimeout,
WriteTimeout: s.Opt.ServerWriteTimeout,
MaxHeaderBytes: s.Opt.MaxHeaderBytes,
Addr: s.Opt.ListenAddr,
Handler: handler,
ReadTimeout: s.Opt.ServerReadTimeout,
WriteTimeout: s.Opt.ServerWriteTimeout,
MaxHeaderBytes: s.Opt.MaxHeaderBytes,
ReadHeaderTimeout: 10 * time.Second, // time to send the headers
IdleTimeout: 60 * time.Second, // time to keep idle connections open
TLSConfig: &tls.Config{
MinVersion: tls.VersionTLS10, // disable SSL v3.0 and earlier
},
}
// go version specific initialisation
initServer(s.httpServer)
if s.Opt.ClientCA != "" {
if !s.useSSL {
@@ -267,7 +267,7 @@ func (s *Server) Wait() {
// Close shuts the running server down
func (s *Server) Close() {
err := closeServer(s.httpServer)
err := s.httpServer.Close()
if err != nil {
log.Printf("Error on closing HTTP server: %v", err)
return


@@ -11,7 +11,7 @@ import (
"github.com/pkg/errors"
)
// GetTemplate eturns the HTML template for serving directories via HTTP
// GetTemplate returns the HTML template for serving directories via HTTP
func GetTemplate() (tpl *template.Template, err error) {
templateFile, err := Assets.Open("index.html")
if err != nil {


@@ -330,25 +330,12 @@ func (s *server) listObjects(w http.ResponseWriter, r *http.Request, remote stri
ls := listItems{}
// if remote supports ListR use that directly, otherwise use recursive Walk
var err error
if ListR := s.f.Features().ListR; ListR != nil {
err = ListR(remote, func(entries fs.DirEntries) error {
for _, entry := range entries {
ls.add(entry)
}
return nil
})
} else {
err = walk.Walk(s.f, remote, true, -1, func(path string, entries fs.DirEntries, err error) error {
if err == nil {
for _, entry := range entries {
ls.add(entry)
}
}
return err
})
}
err := walk.ListR(s.f, remote, true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, entry := range entries {
ls.add(entry)
}
return nil
})
if err != nil {
_, err = fserrors.Cause(err)
if err != fs.ErrorDirNotFound {


@@ -17,8 +17,7 @@ import (
)
const (
testBindAddress = "localhost:51779"
testURL = "http://" + testBindAddress + "/"
testBindAddress = "localhost:0"
resticSource = "../../../../../restic/restic"
)
@@ -62,7 +61,7 @@ func TestRestic(t *testing.T) {
}
cmd := exec.Command("go", args...)
cmd.Env = append(os.Environ(),
"RESTIC_TEST_REST_REPOSITORY=rest:"+testURL+path,
"RESTIC_TEST_REST_REPOSITORY=rest:"+w.Server.URL()+path,
)
out, err := cmd.CombinedOutput()
if len(out) != 0 {


@@ -3,6 +3,7 @@
package webdav
import (
"context"
"net/http"
"os"
@@ -15,7 +16,6 @@ import (
"github.com/ncw/rclone/vfs"
"github.com/ncw/rclone/vfs/vfsflags"
"github.com/spf13/cobra"
"golang.org/x/net/context" // switch to "context" when we stop supporting go1.8
"golang.org/x/net/webdav"
)


@@ -20,8 +20,7 @@ import (
)
const (
testBindAddress = "localhost:51778"
testURL = "http://" + testBindAddress + "/"
testBindAddress = "localhost:0"
)
// check interfaces
@@ -70,7 +69,7 @@ func TestWebDav(t *testing.T) {
cmd := exec.Command("go", args...)
cmd.Env = append(os.Environ(),
"RCLONE_CONFIG_WEBDAVTEST_TYPE=webdav",
"RCLONE_CONFIG_WEBDAVTEST_URL="+testURL,
"RCLONE_CONFIG_WEBDAVTEST_URL="+w.Server.URL(),
"RCLONE_CONFIG_WEBDAVTEST_VENDOR=other",
)
out, err := cmd.CombinedOutput()


@@ -20,7 +20,7 @@ Few cloud storage services provides different storage classes on objects,
for example AWS S3 and Glacier, Azure Blob storage - Hot, Cool and Archive,
Google Cloud Storage, Regional Storage, Nearline, Coldline etc.
Note that, certain tier chages make objects not available to access immediately.
Note that, certain tier changes make objects not available to access immediately.
For example tiering to archive in Azure blob storage puts objects in a frozen state;
the user can restore them by setting the tier to Hot/Cool. Similarly, S3 to Glacier
makes objects inaccessible.


@@ -71,5 +71,4 @@ Links
* <i class="fa fa-home"></i> [Home page](https://rclone.org/)
* <i class="fa fa-github"></i> [GitHub project page for source and bug tracker](https://github.com/ncw/rclone)
* <i class="fa fa-comments"></i> [Rclone Forum](https://forum.rclone.org)
* <i class="fa fa-google-plus"></i> <a href="https://google.com/+RcloneOrg" rel="publisher">Google+ page</a>
* <i class="fa fa-cloud-download"></i>[Downloads](/downloads/)


@@ -15,7 +15,7 @@ eg `remote:directory/subdirectory` or `/directory/subdirectory`.
During the initial setup with `rclone config` you will specify the target
remote. The target remote can either be a local path or another remote.
Subfolders can be used in target remote. Asume a alias remote named `backup`
Subfolders can be used in target remote. Assume an alias remote named `backup`
with the target `mydrive:private/backup`. Invoking `rclone mkdir backup:desktop`
is exactly the same as invoking `rclone mkdir mydrive:private/backup/desktop`.


@@ -245,3 +245,10 @@ Contributors
* marcintustin <marcintustin@users.noreply.github.com>
* jaKa Močnik <jaka@koofr.net>
* Fionera <fionera@fionera.de>
* Dan Walters <dan@walters.io>
* Danil Semelenov <sgtpep@users.noreply.github.com>
* xopez <28950736+xopez@users.noreply.github.com>
* Ben Boeckel <mathstuf@gmail.com>
* Manu <manu@snapdragon.cc>
* Kyle E. Mitchell <kyle@kemitchell.com>
* Gary Kim <gary@garykim.dev>


@@ -270,7 +270,7 @@ start and finish the upload) and another 2 requests for each chunk:
#### Versions ####
Versions can be viewd with the `--b2-versions` flag. When it is set
Versions can be viewed with the `--b2-versions` flag. When it is set
rclone will show and act on older versions of files. For example
Listing without `--b2-versions`
@@ -409,5 +409,18 @@ Disable checksums for large (> upload cutoff) files
- Type: bool
- Default: false
#### --b2-download-url
Custom endpoint for downloads.
This is usually set to a Cloudflare CDN URL as Backblaze offers
free egress for data downloaded through the Cloudflare network.
Leave blank if you want to use the endpoint provided by Backblaze.
- Config: download_url
- Env Var: RCLONE_B2_DOWNLOAD_URL
- Type: string
- Default: ""
<!--- autogenerated options stop -->


@@ -262,7 +262,7 @@ There is an issue with wrapping the remotes in this order:
During testing, I experienced a lot of bans with the remotes in this order.
I suspect it might be related to how crypt opens files on the cloud provider
which makes it think we're downloading the full file instead of small chunks.
Organizing the remotes in this order yelds better results:
Organizing the remotes in this order yields better results:
<span style="color:green">**cloud remote** -> **cache** -> **crypt**</span>
#### absolute remote paths ####


@@ -1,11 +1,97 @@
---
title: "Documentation"
description: "Rclone Changelog"
date: "2019-02-09"
date: "2019-04-13"
---
# Changelog
## v1.47.0 - 2019-04-13
* New backends
* Backend for Koofr cloud storage service. (jaKa)
* New Features
* Resume downloads if the reader fails in copy (Nick Craig-Wood)
* this means rclone will restart transfers if the source has an error
* this is most useful for downloads or cloud to cloud copies
* Use `--fast-list` for listing operations where it won't use more memory (Nick Craig-Wood)
* this should speed up the following operations on remotes which support `ListR`
* `dedupe`, `serve restic`, `lsf`, `ls`, `lsl`, `lsjson`, `lsd`, `md5sum`, `sha1sum`, `hashsum`, `size`, `delete`, `cat`, `settier`
* use `--disable ListR` to get old behaviour if required
* Make `--files-from` traverse the destination unless `--no-traverse` is set (Nick Craig-Wood)
* this fixes `--files-from` with Google drive and excessive API use in general.
* Make server side copy account bytes and obey `--max-transfer` (Nick Craig-Wood)
* Add `--create-empty-src-dirs` flag and default to not creating empty dirs (ishuah)
* Add client side TLS/SSL flags `--ca-cert`/`--client-cert`/`--client-key` (Nick Craig-Wood)
* Implement `--suffix-keep-extension` for use with `--suffix` (Nick Craig-Wood)
* build:
* Switch to semver compliant version tags to be go modules compliant (Nick Craig-Wood)
* Update to use go1.12.x for the build (Nick Craig-Wood)
* serve dlna: Add connection manager service description to improve compatibility (Dan Walters)
* lsf: Add 'e' format to show encrypted names and 'o' for original IDs (Nick Craig-Wood)
* lsjson: Added `--files-only` and `--dirs-only` flags (calistri)
* rc: Implement operations/publiclink the equivalent of `rclone link` (Nick Craig-Wood)
* Bug Fixes
* accounting: Fix total ETA when `--stats-unit bits` is in effect (Nick Craig-Wood)
* Bash TAB completion
* Use private custom func to fix clash between rclone and kubectl (Nick Craig-Wood)
* Fix for remotes with underscores in their names (Six)
* Fix completion of remotes (Florian Gamböck)
* Fix autocompletion of remote paths with spaces (Danil Semelenov)
* serve dlna: Fix root XML service descriptor (Dan Walters)
* ncdu: Fix display corruption with Chinese characters (Nick Craig-Wood)
* Add SIGTERM to signals which run the exit handlers on unix (Nick Craig-Wood)
* rc: Reload filter when the options are set via the rc (Nick Craig-Wood)
* VFS / Mount
* Fix FreeBSD: Ignore Truncate if called with no readers and already the correct size (Nick Craig-Wood)
* Read directory and check for a file before mkdir (Nick Craig-Wood)
* Shorten the locking window for vfs/refresh (Nick Craig-Wood)
* Azure Blob
* Enable MD5 checksums when uploading files bigger than the "Cutoff" (Dr.Rx)
* Fix SAS URL support (Nick Craig-Wood)
* B2
* Allow manual configuration of backblaze downloadUrl (Vince)
* Ignore already_hidden error on remove (Nick Craig-Wood)
* Ignore malformed `src_last_modified_millis` (Nick Craig-Wood)
* Drive
* Add `--skip-checksum-gphotos` to ignore incorrect checksums on Google Photos (Nick Craig-Wood)
* Allow server side move/copy between different remotes. (Fionera)
* Add docs on team drives and `--fast-list` eventual consistency (Nestar47)
* Fix imports of text files (Nick Craig-Wood)
* Fix range requests on 0 length files (Nick Craig-Wood)
* Fix creation of duplicates with server side copy (Nick Craig-Wood)
* Dropbox
* Retry blank errors to fix long listings (Nick Craig-Wood)
* FTP
* Add `--ftp-concurrency` to limit maximum number of connections (Nick Craig-Wood)
* Google Cloud Storage
* Fall back to default application credentials (marcintustin)
* Allow bucket policy only buckets (Nick Craig-Wood)
* HTTP
* Add `--http-no-slash` for websites with directories with no slashes (Nick Craig-Wood)
* Remove duplicates from listings (Nick Craig-Wood)
* Fix socket leak on 404 errors (Nick Craig-Wood)
* Jottacloud
* Fix token refresh (Sebastian Bünger)
* Add device registration (Oliver Heyme)
* Onedrive
* Implement graceful cancel of multipart uploads if rclone is interrupted (Cnly)
* Always add trailing colon to path when addressing items (Cnly)
* Return errors instead of panic for invalid uploads (Fabian Möller)
* S3
* Add support for "Glacier Deep Archive" storage class (Manu)
* Update Dreamhost endpoint (Nick Craig-Wood)
* Note incompatibility with CEPH Jewel (Nick Craig-Wood)
* SFTP
* Allow custom ssh client config (Alexandru Bumbacea)
* Swift
* Obey Retry-After to enable OVH restore from cold storage (Nick Craig-Wood)
* Work around token expiry on CEPH (Nick Craig-Wood)
* WebDAV
* Allow IsCollection property to be integer or boolean (Nick Craig-Wood)
* Fix race when creating directories (Nick Craig-Wood)
* Fix About/df when reading the available/total returns 0 (Nick Craig-Wood)
## v1.46 - 2019-02-09
* New backends


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone"
slug: rclone
url: /commands/rclone/
@@ -46,6 +46,7 @@ rclone [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -60,6 +61,7 @@ rclone [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -83,6 +85,8 @@ rclone [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -118,6 +122,7 @@ rclone [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -140,11 +145,13 @@ rclone [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -153,6 +160,7 @@ rclone [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
-h, --help help for rclone
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -173,6 +181,10 @@ rclone [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -278,6 +290,7 @@ rclone [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -310,7 +323,7 @@ rclone [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
@@ -373,4 +386,4 @@ rclone [flags]
* [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion.
* [rclone version](/commands/rclone_version/) - Show the version number.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone about"
slug: rclone_about
url: /commands/rclone_about/
@@ -89,6 +89,7 @@ rclone about remote: [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -103,6 +104,7 @@ rclone about remote: [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -126,6 +128,8 @@ rclone about remote: [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -161,6 +165,7 @@ rclone about remote: [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -183,11 +188,13 @@ rclone about remote: [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -195,6 +202,7 @@ rclone about remote: [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -215,6 +223,10 @@ rclone about remote: [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -320,6 +332,7 @@ rclone about remote: [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -352,7 +365,7 @@ rclone about remote: [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -368,4 +381,4 @@ rclone about remote: [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
@@ -48,6 +48,7 @@ rclone authorize [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -62,6 +63,7 @@ rclone authorize [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -85,6 +87,8 @@ rclone authorize [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -120,6 +124,7 @@ rclone authorize [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -142,11 +147,13 @@ rclone authorize [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -154,6 +161,7 @@ rclone authorize [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -174,6 +182,10 @@ rclone authorize [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -279,6 +291,7 @@ rclone authorize [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -311,7 +324,7 @@ rclone authorize [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -327,4 +340,4 @@ rclone authorize [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone cachestats"
slug: rclone_cachestats
url: /commands/rclone_cachestats/
@@ -47,6 +47,7 @@ rclone cachestats source: [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -61,6 +62,7 @@ rclone cachestats source: [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -84,6 +86,8 @@ rclone cachestats source: [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -119,6 +123,7 @@ rclone cachestats source: [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -141,11 +146,13 @@ rclone cachestats source: [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -153,6 +160,7 @@ rclone cachestats source: [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -173,6 +181,10 @@ rclone cachestats source: [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -278,6 +290,7 @@ rclone cachestats source: [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -310,7 +323,7 @@ rclone cachestats source: [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -326,4 +339,4 @@ rclone cachestats source: [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone cat"
slug: rclone_cat
url: /commands/rclone_cat/
@@ -69,6 +69,7 @@ rclone cat remote:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -83,6 +84,7 @@ rclone cat remote:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -106,6 +108,8 @@ rclone cat remote:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -141,6 +145,7 @@ rclone cat remote:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -163,11 +168,13 @@ rclone cat remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -175,6 +182,7 @@ rclone cat remote:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -195,6 +203,10 @@ rclone cat remote:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -300,6 +312,7 @@ rclone cat remote:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -332,7 +345,7 @@ rclone cat remote:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -348,4 +361,4 @@ rclone cat remote:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
@@ -63,6 +63,7 @@ rclone check source:path dest:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -77,6 +78,7 @@ rclone check source:path dest:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -100,6 +102,8 @@ rclone check source:path dest:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -135,6 +139,7 @@ rclone check source:path dest:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -157,11 +162,13 @@ rclone check source:path dest:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -169,6 +176,7 @@ rclone check source:path dest:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -189,6 +197,10 @@ rclone check source:path dest:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -294,6 +306,7 @@ rclone check source:path dest:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -326,7 +339,7 @@ rclone check source:path dest:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -342,4 +355,4 @@ rclone check source:path dest:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
@@ -48,6 +48,7 @@ rclone cleanup remote:path [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -62,6 +63,7 @@ rclone cleanup remote:path [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -85,6 +87,8 @@ rclone cleanup remote:path [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -120,6 +124,7 @@ rclone cleanup remote:path [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -142,11 +147,13 @@ rclone cleanup remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -154,6 +161,7 @@ rclone cleanup remote:path [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -174,6 +182,10 @@ rclone cleanup remote:path [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -279,6 +291,7 @@ rclone cleanup remote:path [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -311,7 +324,7 @@ rclone cleanup remote:path [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -327,4 +340,4 @@ rclone cleanup remote:path [flags]
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
@@ -48,6 +48,7 @@ rclone config [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -62,6 +63,7 @@ rclone config [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -85,6 +87,8 @@ rclone config [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -120,6 +124,7 @@ rclone config [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -142,11 +147,13 @@ rclone config [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -154,6 +161,7 @@ rclone config [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -174,6 +182,10 @@ rclone config [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -279,6 +291,7 @@ rclone config [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -311,7 +324,7 @@ rclone config [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -336,4 +349,4 @@ rclone config [flags]
* [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.
* [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone config create"
slug: rclone_config_create
url: /commands/rclone_config_create/
@@ -62,6 +62,7 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -76,6 +77,7 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -99,6 +101,8 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -134,6 +138,7 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -156,11 +161,13 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -168,6 +175,7 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -188,6 +196,10 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -293,6 +305,7 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -325,7 +338,7 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -341,4 +354,4 @@ rclone config create <name> <type> [<key> <value>]* [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone config delete"
slug: rclone_config_delete
url: /commands/rclone_config_delete/
@@ -45,6 +45,7 @@ rclone config delete <name> [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -59,6 +60,7 @@ rclone config delete <name> [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -82,6 +84,8 @@ rclone config delete <name> [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -117,6 +121,7 @@ rclone config delete <name> [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -139,11 +144,13 @@ rclone config delete <name> [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -151,6 +158,7 @@ rclone config delete <name> [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -171,6 +179,10 @@ rclone config delete <name> [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -276,6 +288,7 @@ rclone config delete <name> [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -308,7 +321,7 @@ rclone config delete <name> [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -324,4 +337,4 @@ rclone config delete <name> [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone config dump"
slug: rclone_config_dump
url: /commands/rclone_config_dump/
@@ -45,6 +45,7 @@ rclone config dump [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -59,6 +60,7 @@ rclone config dump [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -82,6 +84,8 @@ rclone config dump [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -117,6 +121,7 @@ rclone config dump [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -139,11 +144,13 @@ rclone config dump [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -151,6 +158,7 @@ rclone config dump [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -171,6 +179,10 @@ rclone config dump [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -276,6 +288,7 @@ rclone config dump [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -308,7 +321,7 @@ rclone config dump [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -324,4 +337,4 @@ rclone config dump [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019
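The new --ca-cert, --client-cert and --client-key flags in the hunks above enable mutual TLS: rclone verifies the server against a custom CA certificate and presents its own client certificate and key. A hedged sketch, assuming a server that requires client certificates; the remote name and PEM paths are placeholders:

    # Placeholder remote name and certificate paths.
    rclone ls mydav: \
        --ca-cert /etc/ssl/private-ca.pem \
        --client-cert /etc/ssl/rclone-client.pem \
        --client-key /etc/ssl/rclone-client.key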

View File

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone config edit"
slug: rclone_config_edit
url: /commands/rclone_config_edit/
@@ -48,6 +48,7 @@ rclone config edit [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -62,6 +63,7 @@ rclone config edit [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -85,6 +87,8 @@ rclone config edit [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -120,6 +124,7 @@ rclone config edit [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -142,11 +147,13 @@ rclone config edit [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -154,6 +161,7 @@ rclone config edit [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -174,6 +182,10 @@ rclone config edit [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -279,6 +291,7 @@ rclone config edit [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -311,7 +324,7 @@ rclone config edit [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -327,4 +340,4 @@ rclone config edit [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019
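--suffix-keep-extension (added above) changes how --suffix renames files moved into --backup-dir: the suffix is inserted before the file extension rather than appended after it, so file.txt becomes file-old.txt instead of file.txt-old. A sketch in which the remote name, paths and suffix are all placeholders:

    # remote:, the paths and the suffix are placeholders.
    rclone sync /home/user/docs remote:docs \
        --backup-dir remote:docs-archive \
        --suffix -old \
        --suffix-keep-extension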

View File

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone config file"
slug: rclone_config_file
url: /commands/rclone_config_file/
@@ -45,6 +45,7 @@ rclone config file [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -59,6 +60,7 @@ rclone config file [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -82,6 +84,8 @@ rclone config file [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -117,6 +121,7 @@ rclone config file [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -139,11 +144,13 @@ rclone config file [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -151,6 +158,7 @@ rclone config file [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -171,6 +179,10 @@ rclone config file [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -276,6 +288,7 @@ rclone config file [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -308,7 +321,7 @@ rclone config file [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -324,4 +337,4 @@ rclone config file [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019
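--b2-download-url (added in the hunks above) points downloads at a custom endpoint, typically a CDN or caching proxy fronting the B2 bucket, while uploads continue to use the normal B2 API endpoint. A sketch; the remote name, bucket and CDN URL are placeholders:

    # The b2: remote, bucket and CDN URL are placeholders.
    rclone copy b2:mybucket/photos /tmp/photos \
        --b2-download-url https://b2cdn.example.com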

View File

@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
date: 2019-04-13T11:00:52+01:00
title: "rclone config password"
slug: rclone_config_password
url: /commands/rclone_config_password/
@@ -52,6 +52,7 @@ rclone config password <name> [<key> <value>]+ [flags]
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-url string Custom endpoint for downloads.
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
@@ -66,6 +67,7 @@ rclone config password <name> [<key> <value>]+ [flags]
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--ca-cert string CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
@@ -89,6 +91,8 @@ rclone config password <name> [<key> <value>]+ [flags]
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
@@ -124,6 +128,7 @@ rclone config password <name> [<key> <value>]+ [flags]
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
@@ -146,11 +151,13 @@ rclone config password <name> [<key> <value>]+ [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
@@ -158,6 +165,7 @@ rclone config password <name> [<key> <value>]+ [flags]
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
@@ -178,6 +186,10 @@ rclone config password <name> [<key> <value>]+ [flags]
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10M)
--jottacloud-user string User Name:
--koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
--koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
--koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
--koofr-user string Your Koofr user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -283,6 +295,7 @@ rclone config password <name> [<key> <value>]+ [flags]
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--suffix-keep-extension Preserve the extension when using --suffix.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
@@ -315,7 +328,7 @@ rclone config password <name> [<key> <value>]+ [flags]
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.47.0")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -331,4 +344,4 @@ rclone config password <name> [<key> <value>]+ [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
###### Auto generated by spf13/cobra on 13-Apr-2019
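The rclone config password command documented above stores a password key obscured rather than in plain text, which pairs naturally with the new Koofr backend flags. A minimal sketch for creating a Koofr remote non-interactively; the remote name, user name and app password are placeholders (generate the app password at https://app.koofr.net/app/admin/preferences/password):

    # mykoofr, the user name and MY_APP_PASSWORD are placeholders.
    rclone config create mykoofr koofr user you@example.com
    rclone config password mykoofr password MY_APP_PASSWORD
    rclone lsd mykoofr: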

Some files were not shown because too many files have changed in this diff