mirror of https://github.com/rclone/rclone.git synced 2025-12-31 15:43:53 +00:00

Compare commits

153 Commits

Author SHA1 Message Date
Nick Craig-Wood
d6778c9d19 mount: make directories show with non zero size
See: https://forum.rclone.org/t/empty-folder-when-rclone-mount-used-as-external-storage-of-nextcloud/9251
2019-03-25 11:21:26 +00:00
Nick Craig-Wood
6e70d88f54 swift: work around token expiry on CEPH
This implements the Expiry interface so token expiry works properly

This makes sure that the following change from the swift library works
correctly with rclone's custom authenticator.

> Renew the token 60s before the expiry time
>
> The v2 and v3 auth schemes both return the expiry time of the token,
> so instead of waiting for a 401 error, renew the token 60s before this
> time.
>
> This makes transfers more efficient and also works around a bug in
> CEPH which returns 403 instead of 401 when the token expires.
>
> http://tracker.ceph.com/issues/22223
2019-03-18 13:30:59 +00:00
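
A minimal sketch of the pre-emptive renewal idea (hypothetical types,
not the swift library's real API): keep the expiry time next to the
token and renew whenever less than a safety margin remains, instead of
waiting for a 401 (or CEPH's 403) to come back.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // token holds a credential together with its server-supplied expiry time.
    type token struct {
        value   string
        expires time.Time
    }

    // authenticator refreshes the token before it runs out rather than
    // reacting to auth errors.
    type authenticator struct {
        mu     sync.Mutex
        tok    token
        renew  func() (token, error) // stands in for a real v2/v3 auth call
        margin time.Duration         // renew this long before expiry, e.g. 60s
    }

    // Token returns a token with at least `margin` of validity left,
    // renewing it first if necessary.
    func (a *authenticator) Token() (string, error) {
        a.mu.Lock()
        defer a.mu.Unlock()
        if time.Until(a.tok.expires) < a.margin {
            tok, err := a.renew()
            if err != nil {
                return "", err
            }
            a.tok = tok
        }
        return a.tok.value, nil
    }

    func main() {
        a := &authenticator{
            margin: 60 * time.Second,
            renew: func() (token, error) {
                return token{value: "example", expires: time.Now().Add(time.Hour)}, nil
            },
        }
        tok, _ := a.Token()
        fmt.Println("using token:", tok)
    }
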
Nick Craig-Wood
595fea757d vendor: update github.com/ncw/swift to bring in Expires changes 2019-03-18 13:30:59 +00:00
Nick Craig-Wood
bb80586473 bin/get-github-release: fetch the most recent not the least recent 2019-03-18 11:29:37 +00:00
Nick Craig-Wood
0d475958c7 Fix errors discovered with go vet nilness tool 2019-03-18 11:23:00 +00:00
Nick Craig-Wood
2728948fb0 Add xopez to contributors 2019-03-18 11:04:10 +00:00
Nick Craig-Wood
3756f211b5 Add Danil Semelenov to contributors 2019-03-18 11:04:10 +00:00
xopez
2faf2aed80 docs: Update Copyright to current Year 2019-03-18 11:03:45 +00:00
Nick Craig-Wood
1bd8183af1 build: use matrix build for travis
This makes the build more efficient and the .travis.yml file more
comprehensible, and reduces the Makefile spaghetti.

Windows support is commented out for the moment as it isn't very
reliable yet.
2019-03-17 14:58:18 +00:00
Nick Craig-Wood
5aa706831f b2: ignore already_hidden error on remove
Sometimes (possibly through eventual consistency) b2 returns an
already_hidden error on a delete.  Ignore this since it is harmless.
2019-03-17 14:56:17 +00:00
Nick Craig-Wood
ac7e1dbf62 test_all: add the vfs tests to the integration tests
Fix failing tests for some remotes
2019-03-17 14:56:17 +00:00
Nick Craig-Wood
14ef4437e5 dedupe: fix bug introduced when converting to use walk.ListR #2902
Before the fix we were only de-duping the ListR batches.

Afterwards we dedupe everything.

This will have the consequence that rclone uses more memory as it will
build a map of all the directory names, not just the names in a given
directory.
2019-03-17 11:01:20 +00:00
Danil Semelenov
a0d2ab5b4f cmd: Fix autocompletion of remote paths with spaces - fixes #3047 2019-03-17 10:15:20 +00:00
Nick Craig-Wood
3bfde5f52a ftp: add --ftp-concurrency to limit maximum number of connections
Fixes #2166
2019-03-17 09:57:14 +00:00
Nick Craig-Wood
2b05bd9a08 rc: implement operations/publiclink the equivalent of rclone link
Fixes #3042
2019-03-17 09:41:31 +00:00
Nick Craig-Wood
1318be3b0a vendor: update github.com/goftp/server to fix hang while reading a file from the server
See: https://forum.rclone.org/t/minor-issue-with-linux-ftp-client-and-rclone-ftp-access-denied/8959
2019-03-17 09:30:57 +00:00
Nick Craig-Wood
f4a754a36b drive: add --skip-checksum-gphotos to ignore incorrect checksums on Google Photos
First implementation by @jammin84, re-written by @ncw

Fixes #2207
2019-03-17 09:10:51 +00:00
Nick Craig-Wood
fef73763aa lib/atexit: add SIGTERM to signals which run the exit handlers on unix 2019-03-16 17:47:02 +00:00
Nick Craig-Wood
7267d19ad8 fstest: Use walk.ListR for listing 2019-03-16 17:41:12 +00:00
Nick Craig-Wood
47099466c0 cache: Use walk.ListR for listing the temporary Fs. 2019-03-16 17:41:12 +00:00
Nick Craig-Wood
4376019062 dedupe: Use walk.ListR for listing commands.
This dramatically increases the speed (7x in my tests) of the de-dupe
as google drive supports ListR directly and dedupe did not work with
`--fast-list`.

Fixes #2902
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
e5f4210b09 serve restic: use walk.ListR for listing
This is effectively what the old code did anyway so this should not
make any functional changes.
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
d5f2df2f3d Use walk.ListR for listing operations
This will increase speed for backends which support ListR and will not
have the memory overhead of using --fast-list.

It also means that errors are queued until the end so as much of the
remote will be listed as possible before returning an error.

Commands affected are:
- lsf
- ls
- lsl
- lsjson
- lsd
- md5sum/sha1sum/hashsum
- size
- delete
- cat
- settier
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
efd720b533 walk: Implement walk.ListR which will use ListR if at all possible
It otherwise has nearly the same interface as walk.Walk, which it
will fall back to if it can't use ListR.

Using walk.ListR speeds up file system operations by default, uses
much less memory, and starts immediately, compared with supplying
--fast-list.
2019-03-16 17:41:12 +00:00
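
A rough illustration of the fallback pattern described above, using
made-up types rather than rclone's real Fs and walk interfaces: prefer
the single-pass recursive listing when the backend offers one, otherwise
walk directory by directory.

    package main

    import "fmt"

    // entry is a hypothetical listing entry.
    type entry struct {
        name  string
        isDir bool
    }

    // backend is a hypothetical remote: list returns one directory,
    // listR (may be nil) returns the whole tree in a single pass.
    type backend struct {
        list  func(dir string) []entry
        listR func(dir string) []entry
    }

    // listAll prefers the recursive listing when the backend offers one,
    // and falls back to an ordinary directory-by-directory walk otherwise.
    func listAll(b backend, root string) []entry {
        if b.listR != nil {
            return b.listR(root) // fast path: starts returning results immediately
        }
        var out []entry
        var walk func(dir string)
        walk = func(dir string) {
            for _, e := range b.list(dir) {
                out = append(out, e)
                if e.isDir {
                    walk(e.name)
                }
            }
        }
        walk(root)
        return out
    }

    func main() {
        tree := map[string][]entry{
            "":     {{"docs", true}, {"readme.txt", false}},
            "docs": {{"docs/a.txt", false}},
        }
        b := backend{list: func(dir string) []entry { return tree[dir] }}
        fmt.Println(listAll(b, ""))
    }
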
Nick Craig-Wood
047f00a411 filter: Add BoundedRecursion method
This indicates that the filter set could be satisfied by a bounded
directory recursion.
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
bb5ac8efbe http: fix socket leak on 404 errors 2019-03-15 17:04:28 +00:00
Nick Craig-Wood
e62bbf761b http: add --http-no-slash for websites with directories with no slashes #3053
See: https://forum.rclone.org/t/is-there-a-way-to-log-into-an-htpp-server/8484
2019-03-15 17:04:06 +00:00
Nick Craig-Wood
54a2e99d97 http: remove duplicates from listings 2019-03-15 16:59:36 +00:00
Nick Craig-Wood
28230d93b4 sync: Implement --suffix-keep-extension for use with --suffix - fixes #3032 2019-03-15 14:21:39 +00:00
Florian Gamböck
3c4407442d cmd: fix completion of remotes
The previous behavior of the remotes completion was that only
alphanumeric characters were allowed in a remote name. This limitation
has been lifted somewhat by #2985, which also allowed an underscore.

With the new implementation introduced in this commit, the completion of
the remote name has been simplified: if there is no colon (":") in the
current word, then complete the remote name; otherwise, complete the path
inside the specified remote. This allows correct completion of all
remote names that are allowed by the config (including - and _).
It actually matches much more than that, even remote names that are not
allowed by the config, but in that case there would already be an
invalid identifier in the configuration file.

With this simpler string comparison, we can get rid of the regular
expression, which makes the completion multiple times faster. For a
sample benchmark, try the following:

     # Old way
     $ time bash -c 'for _ in {1..1000000}; do
         [[ remote:path =~ ^[[:alnum:]]*$ ]]; done'

     real    0m15,637s
     user    0m15,613s
     sys     0m0,024s

     # New way
     $ time bash -c 'for _ in {1..1000000}; do
         [[ remote:path != *:* ]]; done'

     real    0m1,324s
     user    0m1,304s
     sys     0m0,020s
2019-03-15 13:16:42 +00:00
Dan Walters
caf318d499 dlna: add connection manager service description
The UPnP MediaServer spec says that the ConnectionManager service is
required, and adding it was enough to get dlna support working on my
other TV (LG webOS 2.2.1).
2019-03-15 13:14:31 +00:00
Nick Craig-Wood
2fbb504b66 webdav: fix About/df when reading the available/total returns 0
Some WebDAV servers return an empty Available and Used which parses as 0.

This caused About to return the Total as 0, which can confuse mounted
file systems.

After this change we ignore the result if Available and Used are both 0.

See: https://forum.rclone.org/t/windows-mounted-webdav-drive-has-no-free-space/8938
2019-03-15 12:03:04 +00:00
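
A toy version of the workaround (field and function names invented):
treat a quota where both numbers are zero as "unknown" rather than
reporting a zero-sized disk.

    package main

    import "fmt"

    // usage mirrors the numbers a WebDAV quota query can return.
    type usage struct {
        available, used int64
    }

    // totalBytes returns the filesystem size to report, or 0 (meaning
    // "unknown") when the server sent an empty/zero quota, so that a
    // mount does not show a bogus zero-byte disk.
    func totalBytes(u usage) int64 {
        if u.available == 0 && u.used == 0 {
            return 0 // treat as unknown rather than "disk full"
        }
        return u.available + u.used
    }

    func main() {
        fmt.Println(totalBytes(usage{0, 0}))             // 0: unknown
        fmt.Println(totalBytes(usage{1 << 30, 2 << 30})) // 3 GiB
    }
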
Alex Chen
2b58d1a46f docs: onedrive: Add guide to refreshing token after MFA is enabled 2019-03-14 00:21:05 +08:00
Cnly
1582a21408 onedrive: Always add trailing colon to path when addressing items - #2720, #3039 2019-03-13 11:30:15 +08:00
Nick Craig-Wood
229898dcee Add Dan Walters to contributors 2019-03-11 17:31:46 +00:00
Dan Walters
95194adfd5 dlna: fix root XML service descriptor
The SCPD URL was being set after marshalling the XML, and thus coming
out blank.  Now works on my Samsung TV, and likely fixes some issues
reported by others in #2648.
2019-03-11 17:31:32 +00:00
Nick Craig-Wood
4827496234 webdav: fix race when creating directories - fixes #3035
Before this change a race condition existed in mkdir
- the directory was attempted to be created
- the parent didn't exist so it failed
- the parent was created
- the directory was created again

The last step failed as the directory was created in a different thread.

This was fixed by checking the error messages of MKCOL for both
directory creations, rather than only the first.
2019-03-11 16:20:05 +00:00
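
The fix amounts to tolerating "already exists" on every MKCOL, not just
the first. A small self-contained sketch with an in-memory stand-in for
the WebDAV server (names invented, not rclone's actual code):

    package main

    import (
        "errors"
        "fmt"
        "sync"
    )

    var errAlreadyExists = errors.New("directory already exists")

    // store is a stand-in for a WebDAV server; mkcol creates one directory.
    type store struct {
        mu   sync.Mutex
        dirs map[string]bool
    }

    func (s *store) mkcol(path string) error {
        s.mu.Lock()
        defer s.mu.Unlock()
        if s.dirs[path] {
            return errAlreadyExists
        }
        s.dirs[path] = true
        return nil
    }

    // mkdir treats "already exists" as success for every level it creates,
    // not just the first, so a concurrent creator in another goroutine
    // cannot make the second MKCOL fail.
    func mkdir(s *store, parent, dir string) error {
        for _, p := range []string{parent, dir} {
            if err := s.mkcol(p); err != nil && !errors.Is(err, errAlreadyExists) {
                return err
            }
        }
        return nil
    }

    func main() {
        s := &store{dirs: map[string]bool{}}
        var wg sync.WaitGroup
        for i := 0; i < 2; i++ { // two goroutines racing to create the same tree
            wg.Add(1)
            go func() {
                defer wg.Done()
                fmt.Println(mkdir(s, "/a", "/a/b"))
            }()
        }
        wg.Wait()
    }
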
Nick Craig-Wood
415eeca6cf drive: fix range requests on 0 length files
Before this change a range request on a 0 length file would fail

    $ rclone cat --head 128 drive:test/emptyfile
    ERROR : open file failed: googleapi: Error 416: Request range not satisfiable, requestedRangeNotSatisfiable

To fix this we remove Range: headers on requests for zero length files.
2019-03-10 15:47:34 +00:00
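
The workaround in miniature (helper name invented): only attach a Range
header when the object has a non-zero size, since some servers answer
416 instead of returning an empty body.

    package main

    import (
        "fmt"
        "net/http"
    )

    // addRangeHeader sets a Range header for a partial read, except on
    // zero-length objects where a range request cannot be satisfied.
    func addRangeHeader(req *http.Request, size, offset, count int64) {
        if size == 0 {
            return // no Range header: just GET the (empty) object
        }
        req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", offset, offset+count-1))
    }

    func main() {
        req, _ := http.NewRequest("GET", "https://example.com/file", nil)
        addRangeHeader(req, 0, 0, 128)
        fmt.Printf("Range=%q\n", req.Header.Get("Range")) // empty for a 0 byte file
    }
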
Nick Craig-Wood
58d9a3e1b5 filter: reload filter when the options are set via the rc - fixes #3018 2019-03-10 13:09:44 +00:00
Nick Craig-Wood
cccadfa7ae rc: add ability for options blocks to register reload functions 2019-03-10 13:09:44 +00:00
ishuah
1b52f8d2a5 copy/sync/move: add --create-empty-src-dirs flag - fixes #2869 2019-03-10 11:56:38 +00:00
Nick Craig-Wood
2078ad68a5 gcs: Allow bucket policy only buckets - fixes #3014
This introduces a new config variable bucket_policy_only.  If this is
set then rclone:

- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set
2019-03-10 11:45:42 +00:00
Nick Craig-Wood
368ed9e67d docs: add a FAQ entry about --max-backlog 2019-03-09 16:19:24 +00:00
Nick Craig-Wood
7c30993bb7 Add Fionera to contributors 2019-03-09 16:19:24 +00:00
Fionera
55b9a4ed30 Add ServerSideAcrossConfig Flag and check for it. fixes #2728 2019-03-09 16:18:45 +00:00
jaKa
118a8b949e koofr: implemented a backend for Koofr cloud storage service.
Implemented a Koofr REST API backend.
Added said backend to tests.
Added documentation for said backend.
2019-03-06 13:41:43 +00:00
jaKa
1d14e30383 vendor: add github.com/koofr/go-koofrclient
* added koofr client SDK dep for koofr backend
2019-03-06 13:41:43 +00:00
Nick Craig-Wood
27714e29c3 s3: note incompatibility with CEPH Jewel - fixes #3015 2019-03-06 11:50:37 +00:00
Nick Craig-Wood
9f8e1a1dc5 drive: fix imports of text files
Before this change text file imports were ignored.  This was because
the mime type wasn't matched.

Fix this by adjusting the keys in the mime type maps as well as the
values.

See: https://forum.rclone.org/t/how-to-upload-text-files-to-google-drive-as-google-docs/9014
2019-03-05 17:20:31 +00:00
Nick Craig-Wood
1692c6bd0a vfs: shorten the locking window for vfs/refresh
Before this change we locked the root directory, recursively fetched
the listing, applied it then unlocked the root directory.

After this change we recursively fetch the listing then apply it with
the root directory locked which shortens the time that the root
directory is locked greatly.

With both the original method and the new method the subdirectories are
left unlocked and so could potentially be changed, leading to
inconsistencies.  This change makes the potential for inconsistencies
slightly worse by leaving the root directory unlocked, in exchange for
a much more responsive system while running vfs/refresh.

See: https://forum.rclone.org/t/rclone-rc-vfs-refresh-locking-directory-being-refreshed/9004
2019-03-05 14:17:42 +00:00
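
A minimal sketch of the locking change (types invented): do the slow
fetch with no lock held and take the lock only to swap the result in.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    type dir struct {
        mu      sync.Mutex
        entries []string
    }

    // fetchListing stands in for the slow recursive remote listing; it
    // runs with no locks held.
    func fetchListing() []string {
        time.Sleep(100 * time.Millisecond) // pretend this is a long network call
        return []string{"a", "b", "c"}
    }

    // refresh fetches the new listing first and only takes the directory
    // lock to swap it in, so readers are blocked for microseconds rather
    // than for the whole fetch.
    func (d *dir) refresh() {
        listing := fetchListing() // long, unlocked
        d.mu.Lock()
        d.entries = listing // short, locked
        d.mu.Unlock()
    }

    func main() {
        d := &dir{}
        d.refresh()
        d.mu.Lock()
        fmt.Println(d.entries)
        d.mu.Unlock()
    }
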
Nick Craig-Wood
d233efbf63 Add marcintustin to contributors 2019-03-01 17:10:26 +00:00
marcintustin
e9a45a5a34 googlecloudstorage: fall back to default application credentials
Fall back to default application credentials when all other credential sources fail

This change allows users with default application credentials
configured (notably when running on google compute instances) to
dispense with explicitly configuring google cloud storage credentials
in rclone's own configuration.
2019-03-01 18:05:31 +01:00
Nick Craig-Wood
f6eb5c6983 lib/pacer: fix test on macOS 2019-03-01 12:27:33 +00:00
Nick Craig-Wood
2bf19787d5 Add Dr.Rx to contributors 2019-03-01 12:25:16 +00:00
Dr.Rx
0ea3a57ecb azureblob: Enable MD5 checksums when uploading files bigger than the "Cutoff"
This enables MD5 checksum calculation and publication when uploading files above the "Cutoff" limit.
It was explicitly ignored for multi-block (a.k.a. multipart) uploads to Azure Blob Storage.
2019-03-01 11:12:23 +01:00
Nick Craig-Wood
b353c730d8 vfs: make tests work on remotes which don't support About 2019-02-28 14:05:21 +00:00
Nick Craig-Wood
173dfbd051 vfs: read directory and check for a file before mkdir
Before this change when doing Mkdir the VFS layer could add the new
item to an unread directory which caused confusion.

It could also do mkdir on a file when run on a bucket based remote
which would temporarily overwrite the file with a directory.

Fixes #2993
2019-02-28 14:05:17 +00:00
Nick Craig-Wood
e3bceb9083 operations: fix Overlapping test for Windows native paths 2019-02-28 11:39:32 +00:00
Nick Craig-Wood
52c6b373cc Add calisro to contributors 2019-02-28 10:20:35 +00:00
calisro
0bc0f62277 Recommendation for creating own client ID 2019-02-28 11:20:08 +01:00
Cnly
12c8ee4b4b atexit: allow functions to be unregistered 2019-02-27 23:37:24 +01:00
Nick Craig-Wood
5240f9d1e5 sync: fix integration tests to check correct error 2019-02-27 22:05:16 +00:00
Nick Craig-Wood
997654d77d ncdu: fix display corruption with Chinese characters - #2989 2019-02-27 09:55:28 +00:00
Nick Craig-Wood
f1809451f6 docs: add more examples of config-less usage 2019-02-27 09:41:40 +00:00
Nick Craig-Wood
84c650818e sync: don't allow syncs on overlapping remotes - fixes #2932 2019-02-26 19:25:52 +00:00
Nick Craig-Wood
c5775cf73d fserrors: don't panic on uncomparable errors 2019-02-26 15:39:16 +00:00
Nick Craig-Wood
dca482e058 Add Alexandru Bumbacea to contributors 2019-02-26 15:39:16 +00:00
Nick Craig-Wood
6943169cef Add Six to contributors 2019-02-26 15:38:25 +00:00
Alexandru Bumbacea
4fddec113c sftp: allow custom ssh client config 2019-02-26 16:37:54 +01:00
Six
2114fd8f26 cmd: Fix tab-completion for remotes with underscores in their names 2019-02-26 16:25:45 +01:00
Nick Craig-Wood
63bb6de491 build: update to use go1.12 for the build 2019-02-26 13:18:31 +00:00
Nick Craig-Wood
0a56a168ff bin/get-github-release.go: scrape the downloads page to avoid the API limit
This should fix pull requests build failures which can't use the
github token.
2019-02-25 21:34:59 +00:00
Nick Craig-Wood
88e22087a8 Add Nestar47 to contributors 2019-02-25 21:34:59 +00:00
Nestar47
9404ed703a drive: add docs on team drives and --fast-list eventual consistency 2019-02-25 21:46:27 +01:00
Nick Craig-Wood
c7ecccd5ca mount: remove an obsolete EXPERIMENTAL tag from the docs 2019-02-25 17:53:53 +00:00
Sebastian Bünger
972e27a861 jottacloud: fix token refresh - fixes #2992 2019-02-21 19:26:18 +01:00
Fabian Möller
8f4ea77c07 fs: remove unnecessary pacer warning 2019-02-18 08:42:36 +01:00
Fabian Möller
61616ba864 pacer: make pacer more flexible
Make the pacer package more flexible by extracting the pace calculation
functions into a separate interface. This also allows to move features
that require the fs package like logging and custom errors into the fs
package.

Also add a RetryAfterError sentinel error that can be used to signal a
desired retry time to the Calculator.
2019-02-16 14:38:07 +00:00
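
A sketch of the shape of this design, with invented names standing in
for the real pacer package: a pluggable Calculator interface plus an
error type that carries a server-requested wait, similar in spirit to
the RetryAfterError sentinel described above.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // Calculator decides the next sleep given the previous one and whether
    // the last call failed; the pacing policy is pluggable.
    type Calculator interface {
        Calculate(lastSleep time.Duration, failed bool) time.Duration
    }

    // doubler is a trivial policy: double on failure, halve on success.
    type doubler struct{ min, max time.Duration }

    func (d doubler) Calculate(last time.Duration, failed bool) time.Duration {
        if failed {
            if next := last * 2; next < d.max {
                return next
            }
            return d.max
        }
        if next := last / 2; next > d.min {
            return next
        }
        return d.min
    }

    // retryAfterError carries a server-requested wait alongside the error.
    type retryAfterError struct {
        err   error
        after time.Duration
    }

    func (e retryAfterError) Error() string { return e.err.Error() }

    // nextSleep honours a Retry-After style hint if the error carries one,
    // otherwise asks the Calculator.
    func nextSleep(c Calculator, last time.Duration, err error) time.Duration {
        var ra retryAfterError
        if errors.As(err, &ra) && ra.after > last {
            return ra.after
        }
        return c.Calculate(last, err != nil)
    }

    func main() {
        c := doubler{min: 10 * time.Millisecond, max: time.Second}
        fmt.Println(nextSleep(c, 20*time.Millisecond, nil))
        fmt.Println(nextSleep(c, 20*time.Millisecond,
            retryAfterError{errors.New("503"), 500 * time.Millisecond}))
    }
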
Fabian Möller
9ed721a3f6 errors: add lib/errors package 2019-02-16 14:38:07 +00:00
Nick Craig-Wood
0b9d7fec0c lsf: add 'e' format to show encrypted names and 'o' for original IDs
This brings it up to par with lsjson.

This commit also reworks the framework to use ListJSON internally
which removes duplicated code and makes testing easier.
2019-02-14 14:45:35 +00:00
Nick Craig-Wood
240c15883f accounting: fix total ETA when --stats-unit bits is in effect 2019-02-14 07:56:52 +00:00
Nick Craig-Wood
38864adc9c cmd: Use private custom func to fix clash between rclone and kubectl
Before this change, rclone used the `__custom_func` hook to control
the completions of remote files.  However this clashes with other
cobra users, the most notable example being kubectl.

Upgrading cobra to master allows us to use a namespaced function
`__rclone_custom_func` which fixes the problem.

Fixes #1529
2019-02-13 23:02:22 +00:00
Nick Craig-Wood
5991315990 vendor: update github.com/spf13/cobra to master 2019-02-13 23:02:22 +00:00
Nick Craig-Wood
73f0a67d98 s3: Update Dreamhost endpoint - fixes #2974 2019-02-13 21:10:43 +00:00
Nick Craig-Wood
ffe067d6e7 azureblob: fix SAS URL support - fixes #2969
This was broken accidentally in 5d1d93e163 as part of #2654
2019-02-13 17:36:14 +00:00
Nick Craig-Wood
b5f563fb0f vfs: Ignore Truncate if called with no readers and already the correct size
This fixes FreeBSD which seems to call SetAttr with a size even on
read only files.

This is probably a bug in the FreeBSD FUSE implementation as it
happens with mount and cmount.

See: https://forum.rclone.org/t/freebsd-question/8662/12
2019-02-12 17:27:04 +00:00
Nick Craig-Wood
9310c7f3e2 build: update to use go1.12rc1 for the build 2019-02-12 16:23:08 +00:00
Nick Craig-Wood
1c1a8ef24b webdav: allow IsCollection property to be integer or boolean - fixes #2964
It turns out that some servers emit "true" or "false" rather than "1"
or "0" for this property, so adapt accordingly.
2019-02-12 12:33:08 +00:00
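
A tiny sketch of the tolerant parse (function name invented): accept the
numeric form as well as the boolean form of the property.

    package main

    import "fmt"

    // isCollection interprets the WebDAV IsCollection value, accepting the
    // numeric form ("1"/"0") as well as the boolean form ("true"/"false")
    // that some servers emit.
    func isCollection(v string) (bool, error) {
        switch v {
        case "1", "true", "True":
            return true, nil
        case "0", "false", "False", "":
            return false, nil
        }
        return false, fmt.Errorf("unexpected IsCollection value %q", v)
    }

    func main() {
        for _, v := range []string{"1", "true", "0", "false"} {
            ok, _ := isCollection(v)
            fmt.Println(v, "->", ok)
        }
    }
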
Nick Craig-Wood
2cfbc2852d docs: move --no-traverse docs to the correct section 2019-02-12 12:26:19 +00:00
Nick Craig-Wood
b167d30420 Add client side TLS/SSL flags --ca-cert/--client-cert/--client-key
Fixes #2966
2019-02-12 12:26:19 +00:00
Nick Craig-Wood
ec59760d9c pcloud: remove duplicated UserInfo.Result field spotted by go vet 2019-02-12 11:53:26 +00:00
Nick Craig-Wood
076d3da825 operations: resume downloads if the reader fails in copy - fixes #2108
This puts a shim on the reader opened by Copy so that if an error is
returned, the reader is re-opened at the correct seek point.

This should make downloading very large files more reliable.
2019-02-12 11:47:57 +00:00
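
The shim in outline (types invented, a single retry only, not rclone's
actual operations code): wrap the body reader and, on a mid-stream
error, reopen at the offset reached so far and carry on.

    package main

    import (
        "fmt"
        "io"
        "strings"
    )

    // reOpenReader wraps a source that can be re-opened at an offset; on a
    // read error it reopens at the point reached so far and retries once.
    type reOpenReader struct {
        open   func(offset int64) (io.ReadCloser, error)
        rc     io.ReadCloser
        offset int64
    }

    func (r *reOpenReader) Read(p []byte) (int, error) {
        n, err := r.rc.Read(p)
        r.offset += int64(n)
        if err == nil || err == io.EOF {
            return n, err
        }
        // transport error: reopen at the byte we got up to and retry once
        _ = r.rc.Close()
        rc, openErr := r.open(r.offset)
        if openErr != nil {
            return n, err // could not reopen, surface the original error
        }
        r.rc = rc
        m, err := r.rc.Read(p[n:])
        r.offset += int64(m)
        return n + m, err
    }

    func (r *reOpenReader) Close() error { return r.rc.Close() }

    func main() {
        data := "hello world"
        open := func(off int64) (io.ReadCloser, error) {
            return io.NopCloser(strings.NewReader(data[off:])), nil
        }
        rc, _ := open(0)
        r := &reOpenReader{open: open, rc: rc}
        b, _ := io.ReadAll(r)
        fmt.Println(string(b))
    }
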
Nick Craig-Wood
c3eecbe933 dropbox: retry blank errors to fix long listings
Sometimes dropbox returns blank errors in listings - retry this

See: https://forum.rclone.org/t/bug-sync-dropbox-to-gdrive-failing-for-large-files-50gb-error-unexpected-eof/8595
2019-02-10 20:55:16 +00:00
Nick Craig-Wood
d8e5b19ed4 build: switch to semver compliant version tags
Fixes #2960
2019-02-10 20:55:16 +00:00
Nick Craig-Wood
43bc381e90 vendor: update all dependencies 2019-02-10 20:55:16 +00:00
Nick Craig-Wood
fb5ee22112 Add Vince to contributors 2019-02-10 20:55:16 +00:00
Vince
35327dad6f b2: allow manual configuration of backblaze downloadUrl - fixes #2808 2019-02-10 20:54:10 +00:00
Fabian Möller
ef5e1909a0 encoder: add lib/encoder to handle character substitution and quoting 2019-02-09 18:23:47 +00:00
Fabian Möller
bca5d8009e onedrive: return errors instead of panic for invalid uploads 2019-02-09 18:23:47 +00:00
Fabian Möller
334f19c974 info: improve allowed character testing 2019-02-09 18:23:47 +00:00
Fabian Möller
42a5bf1d9f golangci: enable lints excluded by default 2019-02-09 18:18:22 +00:00
Nick Craig-Wood
71d1890316 build: ignore testbuilds when uploading to github 2019-02-09 12:22:06 +00:00
Nick Craig-Wood
d29c545627 Start v1.46-DEV development 2019-02-09 12:21:57 +00:00
Nick Craig-Wood
eb85ecc9c4 Version v1.46 2019-02-09 10:42:57 +00:00
Nick Craig-Wood
0dc08e1e61 Add James Carpenter to contributors 2019-02-09 09:00:22 +00:00
James Carpenter
76532408ef b2: Application Key usage clarifications 2019-02-09 09:00:05 +00:00
Nick Craig-Wood
60a4a8a86d genautocomplete: add remote path completion for bash - fixes #1529
Thanks to:
- Christopher Peterson (@cspeterson) for the original script
- Danil Semelenov (@sgtpep) for many refinements
2019-02-08 19:03:30 +00:00
Fabian Möller
a0d4c04687 backend: fix misspellings 2019-02-07 19:51:03 +01:00
Fabian Möller
f3874707ee drive: fix ListR for items with multiple parents
Fixes #2946
2019-02-07 19:46:50 +01:00
Fabian Möller
f8c2689e77 drive: improve ChangeNotify support for items with multiple parents 2019-02-07 19:46:50 +01:00
Nick Craig-Wood
8ec55ae20b Fix broken flag type tests
Introduced in fc1bf5f931
2019-02-07 16:42:26 +00:00
Nick Craig-Wood
fc1bf5f931 Make flags show up with their proper names, eg SizeSuffix rather than int 2019-02-07 11:57:26 +00:00
Nick Craig-Wood
578d00666c test_all: make -clean not give up on the first error 2019-02-07 11:29:52 +00:00
Nick Craig-Wood
f5c853b5c8 Add Jonathan to contributors 2019-02-07 11:29:16 +00:00
Jonathan
23c0cd2482 Update README.md 2019-02-07 11:28:42 +00:00
Nick Craig-Wood
8217f361cc webdav: if MKCOL fails with 423 Locked assume the directory exists
This fixes the integration tests with owncloud
2019-02-07 11:00:28 +00:00
Nick Craig-Wood
a0016e00d1 mega: return error if an unknown length file is attempted to be uploaded
This fixes the integration test created in #2947 to attempt to flush
out non-conforming backends.
2019-02-07 10:43:31 +00:00
Nick Craig-Wood
99c37028ee build: disable go modules for travis build 2019-02-06 21:25:32 +00:00
Nick Craig-Wood
cfba337ef0 lib/pool: fix memory leak by freeing buffers on flush 2019-02-06 17:20:54 +00:00
Nick Craig-Wood
fd370fcad2 vendor: update github.com/t3rm1n4l/go-mega to add new error codes 2019-02-05 17:22:28 +00:00
Nick Craig-Wood
c680bb3254 box: document how to use rclone with Enterprise SSO
Thanks to Lorenzo Grassi for help with this.
2019-02-05 14:29:13 +00:00
Nick Craig-Wood
7d5d6c041f vendor: update github.com/t3rm1n4l/go-mega to fix v2 account login
Fixes #2771
2019-02-04 17:33:15 +00:00
Nick Craig-Wood
bdc638530e walk: make NewDirTree always use ListR #2946
This fixes vfs/refresh with recurse=true needing the --fast-list flag
2019-02-04 10:37:27 +00:00
Nick Craig-Wood
315cee23a0 http: add an example with username and password 2019-02-04 10:30:05 +00:00
Nick Craig-Wood
2135879dda lsjson: use exactly the correct number of decimal places in the seconds 2019-02-03 20:03:23 +00:00
Nick Craig-Wood
da90069462 lib/pool: only flush buffers if they are unused between flush intervals 2019-02-03 19:07:50 +00:00
Nick Craig-Wood
08c4854e00 webdav: fix identification of directories for Bitrix Site Manager - #2716
Bitrix Site Manager emits `<D:resourcetype><collection/></D:resourcetype>`
missing the namespace on the `collection` tag.  This causes the item
to be identified as a file instead of a directory.

To work around this, look at the Microsoft extension prop
`iscollection`, which seems to be emitted as well.
2019-02-03 12:34:18 +00:00
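
A small sketch of the fallback (struct invented, XML parsing omitted):
if the namespaced resourcetype check fails, fall back to the
`iscollection` extension property.

    package main

    import "fmt"

    // propstat is a simplified view of the PROPFIND properties involved.
    type propstat struct {
        resourceTypeIsCollection bool   // true only when the collection tag had its namespace
        isCollection             string // Microsoft extension, "1"/"true" for a folder
    }

    // isDir falls back to the Microsoft iscollection property when the
    // resourcetype element was emitted without its namespace and therefore
    // did not parse as a collection.
    func isDir(p propstat) bool {
        if p.resourceTypeIsCollection {
            return true
        }
        return p.isCollection == "1" || p.isCollection == "true"
    }

    func main() {
        // A Bitrix-style response after parsing: the namespaced check
        // fails but the extension property identifies the directory.
        fmt.Println(isDir(propstat{resourceTypeIsCollection: false, isCollection: "1"}))
    }
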
Nick Craig-Wood
a838add230 fstests: skip chunked uploading tests with -short 2019-02-03 12:28:44 +00:00
Nick Craig-Wood
d68b091170 hubic: make error message more informative if authentication fails 2019-02-03 12:25:19 +00:00
Nick Craig-Wood
d809bed438 Add weetmuts to contributors 2019-02-03 12:19:08 +00:00
weetmuts
3aa1818870 listremotes: remove -l short flag as it conflicts with the new global flag 2019-02-03 12:17:15 +00:00
weetmuts
96f6708461 s3: add aws endpoint eu-north-1 2019-02-03 12:17:15 +00:00
weetmuts
6641a25f8c gcs: update google cloud storage endpoints 2019-02-03 12:17:15 +00:00
Cnly
cd46ce916b fstests: ensure Fs.Put and Object.Update don't panic on unknown-sized uploads 2019-02-03 11:47:57 +00:00
Cnly
318d1bb6f9 fs: clarify behaviour of Put() and Upload() for unknown-sized objects 2019-02-03 11:47:57 +00:00
Cnly
b8b53901e8 operations: call Rcat in Copy when size is -1 - #2832 2019-02-03 11:47:57 +00:00
Nick Craig-Wood
6e153781a7 rc: add help to show how to set log level with options/set 2019-02-03 11:47:57 +00:00
Nick Craig-Wood
f27c2d9760 vfs: make cache tests more reliable 2019-02-02 16:26:55 +00:00
Nick Craig-Wood
eb91356e28 fs/asyncreader: optionally use mmap for memory allocation with --use-mmap #2200
This replaces the `sync.Pool` allocator with lib/pool.  This
implements a pool of buffers of up to 64MB which can be re-used but is
flushed every 5 seconds.

If `--use-mmap` is set then rclone will use mmap for memory
allocations which is much better at returning memory to the OS.
2019-02-02 14:35:56 +00:00
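
A much-simplified sketch of a recycling pool with a periodic flush (this
is not lib/pool itself, and the mmap part is omitted): buffers returned
to the pool are re-used, and anything still idle when the flush timer
fires is dropped so the memory can go back to the OS.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // pool hands out fixed-size buffers and keeps returned ones for re-use.
    type pool struct {
        mu      sync.Mutex
        free    [][]byte
        bufSize int
    }

    func newPool(bufSize int, flushEvery time.Duration) *pool {
        p := &pool{bufSize: bufSize}
        go func() {
            for range time.Tick(flushEvery) {
                p.mu.Lock()
                p.free = nil // drop idle buffers
                p.mu.Unlock()
            }
        }()
        return p
    }

    func (p *pool) Get() []byte {
        p.mu.Lock()
        defer p.mu.Unlock()
        if n := len(p.free); n > 0 {
            buf := p.free[n-1]
            p.free = p.free[:n-1]
            return buf
        }
        return make([]byte, p.bufSize)
    }

    func (p *pool) Put(buf []byte) {
        p.mu.Lock()
        defer p.mu.Unlock()
        p.free = append(p.free, buf)
    }

    func main() {
        p := newPool(1024*1024, 5*time.Second)
        buf := p.Get()
        fmt.Println("got buffer of", len(buf), "bytes")
        p.Put(buf)
    }
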
Nick Craig-Wood
bed2971bf0 lib/pool: a buffer recycling library which can optionally be used with mmap 2019-02-02 14:35:56 +00:00
Nick Craig-Wood
f0696dfe30 lib/mmap: library to do memory allocation with anonymous memory maps 2019-02-02 14:35:56 +00:00
Nick Craig-Wood
a43ed567ee vfs: implement --vfs-cache-max-size to limit the total size of the cache 2019-02-02 12:30:10 +00:00
Nick Craig-Wood
fffdbb31f5 bin/get-github-release.go: Use GOPATH/bin by preference to place binary 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
cacefb9a82 bin/get-github-release.go: automatically choose the right os/arch
This fixes the install of golangci-lint on non Linux platforms
2019-02-02 11:45:07 +00:00
Nick Craig-Wood
d966cef14c build: fix problems found with unconvert 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
a551978a3f build: fix problems found with structcheck linter 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
97752ca8fb build: fix problems found with ineffasign linter 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
8d5d332daf build: fix problems found with golint 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
6b3a9bf26a build: fix problems found by the deadcode linter 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
c1d9a1e174 build: use golangci-lint for code quality checks 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
98120bb864 bin/get-github-release.go: enable extraction of binary not in root of tar
Also fix project name regexp to allow -
2019-02-02 11:34:51 +00:00
Nick Craig-Wood
f8ced557e3 mount: print more things in seek_speed test 2019-02-02 11:30:49 +00:00
Cnly
7b20139c6a onedrive: return err instead of panic on unknown-sized uploads 2019-02-02 16:37:33 +08:00
619 changed files with 74084 additions and 30260 deletions

30  .golangci.yml (new file)
@@ -0,0 +1,30 @@
# golangci-lint configuration options

run:
  build-tags:
    - cmount

linters:
  enable:
    - deadcode
    - errcheck
    - goimports
    - golint
    - ineffassign
    - structcheck
    - varcheck
    - govet
    - unconvert
    #- prealloc
    #- maligned
  disable-all: true

issues:
  # Enable some lints excluded by default
  exclude-use-default: false
  # Maximum issues count per one linter. Set to 0 to disable. Default is 50.
  max-per-linter: 0
  # Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
  max-same-issues: 0

@@ -1,14 +0,0 @@
{
    "Enable": [
        "deadcode",
        "errcheck",
        "goimports",
        "golint",
        "ineffassign",
        "structcheck",
        "varcheck",
        "vet"
    ],
    "EnableGC": true,
    "Vendor": true
}

@@ -1,52 +1,103 @@
+---
 language: go
 sudo: required
 dist: trusty
 os:
 - linux
-go:
-- 1.8.x
-- 1.9.x
-- 1.10.x
-- 1.11.x
-- tip
 go_import_path: github.com/ncw/rclone
 before_install:
-- if [[ $TRAVIS_OS_NAME == linux ]]; then sudo modprobe fuse ; sudo chmod 666 /dev/fuse ; sudo chown root:$USER /etc/fuse.conf ; fi
-- if [[ $TRAVIS_OS_NAME == osx ]]; then brew update && brew tap caskroom/cask && brew cask install osxfuse ; fi
+- git fetch --unshallow --tags
+- |
+  if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then
+    sudo modprobe fuse
+    sudo chmod 666 /dev/fuse
+    sudo chown root:$USER /etc/fuse.conf
+  fi
+  if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
+    brew update
+    brew tap caskroom/cask
+    brew cask install osxfuse
+  fi
+  if [[ "$TRAVIS_OS_NAME" == "windows" ]]; then
+    choco install -y winfsp zip make
+    cd ../.. # fix crlf in git checkout
+    mv $TRAVIS_REPO_SLUG _old
+    git config --global core.autocrlf false
+    git clone _old $TRAVIS_REPO_SLUG
+    cd $TRAVIS_REPO_SLUG
+  fi
 install:
-- git fetch --unshallow --tags
-- make vars
-- make build_dep
-script:
-- make check
-- make quicktest
-- make compile_all
+- make vars
 env:
   global:
   - GOTAGS=cmount
+  - GO111MODULE=off
   - secure: gU8gCV9R8Kv/Gn0SmCP37edpfIbPoSvsub48GK7qxJdTU628H0KOMiZW/T0gtV5d67XJZ4eKnhJYlxwwxgSgfejO32Rh5GlYEKT/FuVoH0BD72dM1GDFLSrUiUYOdoHvf/BKIFA3dJFT4lk2ASy4Zh7SEoXHG6goBlqUpYx8hVA=
   - secure: AMjrMAksDy3QwqGqnvtUg8FL/GNVgNqTqhntLF9HSU0njHhX6YurGGnfKdD9vNHlajPQOewvmBjwNLcDWGn2WObdvmh9Ohep0EmOjZ63kliaRaSSQueSd8y0idfqMQAxep0SObOYbEDVmQh0RCAE9wOVKRaPgw98XvgqWGDq5Tw=
   - secure: Uaiveq+/rvQjO03GzvQZV2J6pZfedoFuhdXrLVhhHSeP4ZBca0olw7xaqkabUyP3LkVYXMDSX8EbyeuQT1jfEe5wp5sBdfaDtuYW6heFyjiHIIIbVyBfGXon6db4ETBjOaX/Xt8uktrgNge6qFlj+kpnmpFGxf0jmDLw1zgg7tk=
 addons:
   apt:
     packages:
     - fuse
     - libfuse-dev
     - rpm
     - pkg-config
 cache:
   directories:
   - $HOME/.cache/go-build
 matrix:
   allow_failures:
   - go: tip
   include:
-  - os: osx
-    go: 1.11.x
-    env: GOTAGS=""
-    cache:
-      directories:
-      - $HOME/Library/Caches/go-build
+  - go: 1.8.x
+    script:
+    - make quicktest
+  - go: 1.9.x
+    script:
+    - make quicktest
+  - go: 1.10.x
+    script:
+    - make quicktest
+  - go: 1.11.x
+    script:
+    - make quicktest
+  - go: 1.12.x
+    env:
+    - GOTAGS=cmount
+    script:
+    - make build_dep
+    - make check
+    - make quicktest
+    - make racequicktest
+    - make compile_all
+  - os: osx
+    go: 1.12.x
+    env:
+    - GOTAGS= # cmount doesn't work on osx travis for some reason
+    cache:
+      directories:
+      - $HOME/Library/Caches/go-build
+    script:
+    - make
+    - make quicktest
+    - make racequicktest
+  # - os: windows
+  #   go: 1.12.x
+  #   env:
+  #   - GOTAGS=cmount
+  #   - CPATH='C:\Program Files (x86)\WinFsp\inc\fuse'
+  #   #filter_secrets: false # works around a problem with secrets under windows
+  #   cache:
+  #     directories:
+  #     - ${LocalAppData}/go-build
+  #   script:
+  #   - make
+  #   - make quicktest
+  #   - make racequicktest
+  - go: tip
+    script:
+    - make quicktest
 deploy:
   provider: script
   script: make travis_beta
@@ -54,5 +105,5 @@ deploy:
   on:
     repo: ncw/rclone
     all_branches: true
-    go: 1.11.x
-    condition: $TRAVIS_PULL_REQUEST == false
+    go: 1.12.x
+    condition: $TRAVIS_PULL_REQUEST == false && $TRAVIS_OS_NAME != "windows"

File diff suppressed because it is too large

1416  MANUAL.md
File diff suppressed because it is too large

1525  MANUAL.txt
File diff suppressed because it is too large

@@ -11,14 +11,12 @@ ifeq ($(subst HEAD,,$(subst master,,$(BRANCH))),)
 BRANCH_PATH :=
 endif
 TAG := $(shell echo $$(git describe --abbrev=8 --tags | sed 's/-\([0-9]\)-/-00\1-/; s/-\([0-9][0-9]\)-/-0\1-/'))$(TAG_BRANCH)
-NEW_TAG := $(shell echo $(LAST_TAG) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f", $$_)')
+NEW_TAG := $(shell echo $(LAST_TAG) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f.0", $$_)')
 ifneq ($(TAG),$(LAST_TAG))
     TAG := $(TAG)-beta
 endif
 GO_VERSION := $(shell go version)
 GO_FILES := $(shell go list ./... | grep -v /vendor/ )
-# Run full tests if go >= go1.11
-FULL_TESTS := $(shell go version | perl -lne 'print "go$$1.$$2" if /go(\d+)\.(\d+)/ && ($$1 > 1 || $$2 >= 11)')
 BETA_PATH := $(BRANCH_PATH)$(TAG)
 BETA_URL := https://beta.rclone.org/$(BETA_PATH)/
 BETA_UPLOAD_ROOT := memstore:beta-rclone-org
@@ -42,7 +40,6 @@ vars:
     @echo LAST_TAG="'$(LAST_TAG)'"
     @echo NEW_TAG="'$(NEW_TAG)'"
     @echo GO_VERSION="'$(GO_VERSION)'"
-    @echo FULL_TESTS="'$(FULL_TESTS)'"
     @echo BETA_URL="'$(BETA_URL)'"

 version:
@@ -57,38 +54,22 @@ test: rclone
 # Quick test
 quicktest:
     RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) $(GO_FILES)
-ifdef FULL_TESTS
+
+racequicktest:
     RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race $(GO_FILES)
-endif

 # Do source code quality checks
 check: rclone
-ifdef FULL_TESTS
-    go vet $(BUILDTAGS) -printfuncs Debugf,Infof,Logf,Errorf ./...
-    errcheck $(BUILDTAGS) ./...
-    find . -name \*.go | grep -v /vendor/ | xargs goimports -d | grep . ; test $$? -eq 1
-    go list ./... | xargs -n1 golint | grep -E -v '(StorageUrl|CdnUrl|ApplicationCredentialId)' ; test $$? -eq 1
-else
-    @echo Skipping source quality tests as version of go too old
-endif
-
-gometalinter_install:
-    go get -u github.com/alecthomas/gometalinter
-    gometalinter --install --update
-
-# We aren't using gometalinter as the default linter yet because
-# 1. it doesn't support build tags: https://github.com/alecthomas/gometalinter/issues/275
-# 2. can't get -printfuncs working with the vet linter
-gometalinter:
-    gometalinter ./...
+    @# we still run go vet for -printfuncs which golangci-lint doesn't do yet
+    @# see: https://github.com/golangci/golangci-lint/issues/204
+    @echo "-- START CODE QUALITY REPORT -------------------------------"
+    @go vet $(BUILDTAGS) -printfuncs Debugf,Infof,Logf,Errorf ./...
+    @golangci-lint run ./...
+    @echo "-- END CODE QUALITY REPORT ---------------------------------"

 # Get the build dependencies
 build_dep:
-ifdef FULL_TESTS
-    go get -u github.com/kisielk/errcheck
-    go get -u golang.org/x/tools/cmd/goimports
-    go get -u golang.org/x/lint/golint
-endif
+    go run bin/get-github-release.go -extract golangci-lint golangci/golangci-lint 'golangci-lint-.*\.tar\.gz'

 # Get the release dependencies
 release_dep:
@@ -172,11 +153,7 @@ log_since_last_release:
     git log $(LAST_TAG)..

 compile_all:
-ifdef FULL_TESTS
     go run bin/cross-compile.go -parallel 8 -compile-only $(BUILDTAGS) $(TAG)
-else
-    @echo Skipping compile all as version of go too old
-endif

 appveyor_upload:
     rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
@@ -196,10 +173,15 @@ BUILD_FLAGS := -exclude "^(windows|darwin)/"
 ifeq ($(TRAVIS_OS_NAME),osx)
     BUILD_FLAGS := -include "^darwin/" -cgo
 endif
+ifeq ($(TRAVIS_OS_NAME),windows)
+    # BUILD_FLAGS := -include "^windows/" -cgo
+    # 386 doesn't build yet
+    BUILD_FLAGS := -include "^windows/amd64" -cgo
+endif

 travis_beta:
 ifeq ($(TRAVIS_OS_NAME),linux)
-    go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64.tar.gz'
+    go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*\.tar.gz'
 endif
     git log $(LAST_TAG).. > /tmp/git-log.txt
     go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt $(BUILD_FLAGS) -parallel 8 $(BUILDTAGS) $(TAG)

@@ -36,6 +36,7 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
 * Hubic [:page_facing_up:](https://rclone.org/hubic/)
 * Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
 * IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
+* Koofr [:page_facing_up:](https://rclone.org/koofr/)
 * Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
 * Mega [:page_facing_up:](https://rclone.org/mega/)
 * Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
@@ -44,7 +45,7 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
 * Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)
 * OVH [:page_facing_up:](https://rclone.org/swift/)
 * OpenDrive [:page_facing_up:](https://rclone.org/opendrive/)
-* Openstack Swift [:page_facing_up:](https://rclone.org/swift/)
+* OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
 * Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
 * ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
 * pCloud [:page_facing_up:](https://rclone.org/pcloud/)
@@ -62,13 +63,13 @@ Please see [the full list of all storage providers and their features](https://r
 ## Features

-* MD5/SHA1 hashes checked at all times for file integrity
+* MD5/SHA-1 hashes checked at all times for file integrity
 * Timestamps preserved on files
 * Partial syncs supported on a whole file basis
 * [Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed files
 * [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory identical
 * [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash equality
-* Can sync to and from network, eg two different cloud accounts
+* Can sync to and from network, e.g. two different cloud accounts
 * Optional encryption ([Crypt](https://rclone.org/crypt/))
 * Optional cache ([Cache](https://rclone.org/cache/))
 * Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))

@@ -30,7 +30,7 @@ type Options struct {
     Remote string `config:"remote"`
 }

-// NewFs contstructs an Fs from the path.
+// NewFs constructs an Fs from the path.
 //
 // The returned Fs is the actual Fs, referenced by remote in the config
 func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {

@@ -80,7 +80,7 @@ func TestNewFS(t *testing.T) {
             wantEntry := test.entries[i]

             require.Equal(t, wantEntry.remote, gotEntry.Remote(), what)
-            require.Equal(t, wantEntry.size, int64(gotEntry.Size()), what)
+            require.Equal(t, wantEntry.size, gotEntry.Size(), what)
             _, isDir := gotEntry.(fs.Directory)
             require.Equal(t, wantEntry.isDir, isDir, what)
         }

@@ -16,6 +16,7 @@ import (
     _ "github.com/ncw/rclone/backend/http"
     _ "github.com/ncw/rclone/backend/hubic"
     _ "github.com/ncw/rclone/backend/jottacloud"
+    _ "github.com/ncw/rclone/backend/koofr"
     _ "github.com/ncw/rclone/backend/local"
     _ "github.com/ncw/rclone/backend/mega"
     _ "github.com/ncw/rclone/backend/onedrive"

@@ -155,7 +155,7 @@ type Fs struct {
     noAuthClient *http.Client       // unauthenticated http client
     root         string             // the path we are working on
     dirCache     *dircache.DirCache // Map of directory path to directory id
-    pacer        *pacer.Pacer       // pacer for API calls
+    pacer        *fs.Pacer          // pacer for API calls
     trueRootID   string             // ID of true root directory
     tokenRenewer *oauthutil.Renew   // renew the token on expiry
 }
@@ -273,7 +273,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
         root:         root,
         opt:          *opt,
         c:            c,
-        pacer:        pacer.New().SetMinSleep(minSleep).SetPacer(pacer.AmazonCloudDrivePacer),
+        pacer:        fs.NewPacer(pacer.NewAmazonCloudDrive(pacer.MinSleep(minSleep))),
         noAuthClient: fshttp.NewClient(fs.Config),
     }
     f.features = (&fs.Features{

@@ -77,7 +77,7 @@ func init() {
         }, {
             Name:     "upload_cutoff",
             Help:     "Cutoff for switching to chunked upload (<= 256MB).",
-            Default:  fs.SizeSuffix(defaultUploadCutoff),
+            Default:  defaultUploadCutoff,
             Advanced: true,
         }, {
             Name: "chunk_size",
@@ -85,7 +85,7 @@ func init() {
 Note that this is stored in memory and there may be up to
 "--transfers" chunks stored at once in memory.`,
-            Default:  fs.SizeSuffix(defaultChunkSize),
+            Default:  defaultChunkSize,
             Advanced: true,
         }, {
             Name: "list_chunk",
@@ -144,7 +144,7 @@ type Fs struct {
     containerOKMu    sync.Mutex            // mutex to protect container OK
     containerOK      bool                  // true if we have created the container
     containerDeleted bool                  // true if we have deleted the container
-    pacer            *pacer.Pacer          // To pace and retry the API calls
+    pacer            *fs.Pacer             // To pace and retry the API calls
     uploadToken      *pacer.TokenDispenser // control concurrency
 }
@@ -307,7 +307,7 @@ func (f *Fs) newPipeline(c azblob.Credential, o azblob.PipelineOptions) pipeline
     return pipeline.NewPipeline(factories, pipeline.Options{HTTPSender: httpClientFactory(f.client), Log: o.Log})
 }

-// NewFs contstructs an Fs from the path, container:path
+// NewFs constructs an Fs from the path, container:path
 func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
     // Parse config into Options struct
     opt := new(Options)
@@ -347,7 +347,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
         opt:         *opt,
         container:   container,
         root:        directory,
-        pacer:       pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant).SetPacer(pacer.S3Pacer),
+        pacer:       fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
         uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
         client:      fshttp.NewClient(fs.Config),
     }
@@ -392,7 +392,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
             return nil, errors.New("Container name in SAS URL and container provided in command do not match")
         }
-        container = parts.ContainerName
+        f.container = parts.ContainerName
         containerURL = azblob.NewContainerURL(*u, pipeline)
     } else {
         serviceURL = azblob.NewServiceURL(*u, pipeline)
@@ -1038,7 +1038,7 @@ func (o *Object) decodeMetaDataFromPropertiesResponse(info *azblob.BlobGetProper
     o.md5 = base64.StdEncoding.EncodeToString(info.ContentMD5())
     o.mimeType = info.ContentType()
     o.size = size
-    o.modTime = time.Time(info.LastModified())
+    o.modTime = info.LastModified()
     o.accessTier = azblob.AccessTierType(info.AccessTier())
     o.setMetadata(metadata)
@@ -1104,12 +1104,6 @@ func (o *Object) readMetaData() (err error) {
     return o.decodeMetaDataFromPropertiesResponse(blobProperties)
 }

-// timeString returns modTime as the number of milliseconds
-// elapsed since January 1, 1970 UTC as a decimal string.
-func timeString(modTime time.Time) string {
-    return strconv.FormatInt(modTime.UnixNano()/1E6, 10)
-}
-
 // parseTimeString converts a decimal string number of milliseconds
 // elapsed since January 1, 1970 UTC into a time.Time and stores it in
 // the modTime variable.
@@ -1392,16 +1386,16 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
     blob := o.getBlobReference()
     httpHeaders := azblob.BlobHTTPHeaders{}
     httpHeaders.ContentType = fs.MimeType(o)
-    // Multipart upload doesn't support MD5 checksums at put block calls, hence calculate
-    // MD5 only for PutBlob requests
-    if size < int64(o.fs.opt.UploadCutoff) {
-        if sourceMD5, _ := src.Hash(hash.MD5); sourceMD5 != "" {
-            sourceMD5bytes, err := hex.DecodeString(sourceMD5)
-            if err == nil {
-                httpHeaders.ContentMD5 = sourceMD5bytes
-            } else {
-                fs.Debugf(o, "Failed to decode %q as MD5: %v", sourceMD5, err)
-            }
-        }
-    }
+    // Compute the Content-MD5 of the file, for multiparts uploads it
+    // will be set in PutBlockList API call using the 'x-ms-blob-content-md5' header
+    // Note: If multipart, a MD5 checksum will also be computed for each uploaded block
+    // in order to validate its integrity during transport
+    if sourceMD5, _ := src.Hash(hash.MD5); sourceMD5 != "" {
+        sourceMD5bytes, err := hex.DecodeString(sourceMD5)
+        if err == nil {
+            httpHeaders.ContentMD5 = sourceMD5bytes
+        } else {
+            fs.Debugf(o, "Failed to decode %q as MD5: %v", sourceMD5, err)
+        }
+    }

@@ -17,12 +17,12 @@ type Error struct {
     Message string `json:"message"` // A human-readable message, in English, saying what went wrong.
 }

-// Error statisfies the error interface
+// Error satisfies the error interface
 func (e *Error) Error() string {
     return fmt.Sprintf("%s (%d %s)", e.Message, e.Status, e.Code)
 }

-// Fatal statisfies the Fatal interface
+// Fatal satisfies the Fatal interface
 //
 // It indicates which errors should be treated as fatal
 func (e *Error) Fatal() bool {
@@ -100,7 +100,7 @@ func RemoveVersion(remote string) (t Timestamp, newRemote string) {
     return Timestamp(newT), base[:versionStart] + ext
 }

-// IsZero returns true if the timestamp is unitialised
+// IsZero returns true if the timestamp is uninitialized
 func (t Timestamp) IsZero() bool {
     return time.Time(t).IsZero()
 }

@@ -108,7 +108,7 @@ in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration
 Files above this size will be uploaded in chunks of "--b2-chunk-size".

 This value should be set no larger than 4.657GiB (== 5GB).`,
-            Default:  fs.SizeSuffix(defaultUploadCutoff),
+            Default:  defaultUploadCutoff,
             Advanced: true,
         }, {
             Name: "chunk_size",
@@ -117,14 +117,22 @@ This value should be set no larger than 4.657GiB (== 5GB).`,
 When uploading large files, chunk the file into this size. Note that
 these chunks are buffered in memory and there might a maximum of
 "--transfers" chunks in progress at once. 5,000,000 Bytes is the
-minimim size.`,
-            Default:  fs.SizeSuffix(defaultChunkSize),
+minimum size.`,
+            Default:  defaultChunkSize,
             Advanced: true,
         }, {
             Name:     "disable_checksum",
             Help:     `Disable checksums for large (> upload cutoff) files`,
             Default:  false,
             Advanced: true,
+        }, {
+            Name: "download_url",
+            Help: `Custom endpoint for downloads.
+
+This is usually set to a Cloudflare CDN URL as Backblaze offers
+free egress for data downloaded through the Cloudflare network.
+Leave blank if you want to use the endpoint provided by Backblaze.`,
+            Advanced: true,
         }},
     })
 }
@@ -140,6 +148,7 @@ type Options struct {
     UploadCutoff    fs.SizeSuffix `config:"upload_cutoff"`
     ChunkSize       fs.SizeSuffix `config:"chunk_size"`
     DisableCheckSum bool          `config:"disable_checksum"`
+    DownloadURL     string        `config:"download_url"`
 }

 // Fs represents a remote b2 server
@@ -158,7 +167,7 @@ type Fs struct {
     uploadMu     sync.Mutex                  // lock for upload variable
     uploads      []*api.GetUploadURLResponse // result of get upload URL calls
     authMu       sync.Mutex                  // lock for authorizing the account
-    pacer        *pacer.Pacer                // To pace and retry the API calls
+    pacer        *fs.Pacer                   // To pace and retry the API calls
     bufferTokens chan []byte                 // control concurrency of multipart uploads
 }
@@ -242,13 +251,7 @@ func (f *Fs) shouldRetryNoReauth(resp *http.Response, err error) (bool, error) {
             fs.Errorf(f, "Malformed %s header %q: %v", retryAfterHeader, retryAfterString, err)
         }
     }
-        retryAfterDuration := time.Duration(retryAfter) * time.Second
-        if f.pacer.GetSleep() < retryAfterDuration {
-            fs.Debugf(f, "Setting sleep to %v after error: %v", retryAfterDuration, err)
-            // We set 1/2 the value here because the pacer will double it immediately
-            f.pacer.SetSleep(retryAfterDuration / 2)
-        }
-        return true, err
+        return true, pacer.RetryAfterError(err, time.Duration(retryAfter)*time.Second)
     }
     return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
 }
@@ -319,7 +322,7 @@ func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
     return
 }

-// NewFs contstructs an Fs from the path, bucket:path
+// NewFs constructs an Fs from the path, bucket:path
 func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
     // Parse config into Options struct
     opt := new(Options)
@@ -354,7 +357,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
         bucket: bucket,
         root:   directory,
         srv:    rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler),
-        pacer:  pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
+        pacer:  fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
     }
     f.features = (&fs.Features{
         ReadMimeType: true,
@@ -949,6 +952,13 @@ func (f *Fs) hide(Name string) error {
         return f.shouldRetry(resp, err)
     })
     if err != nil {
+        if apiErr, ok := err.(*api.Error); ok {
+            if apiErr.Code == "already_hidden" {
+                // sometimes eventual consistency causes this, so
+                // ignore this error since it is harmless
+                return nil
+            }
+        }
         return errors.Wrapf(err, "failed to hide %q", Name)
     }
     return nil
@@ -1296,9 +1306,17 @@ var _ io.ReadCloser = &openFile{}
 func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
     opts := rest.Opts{
         Method:  "GET",
-        RootURL: o.fs.info.DownloadURL,
         Options: options,
     }
+    // Use downloadUrl from backblaze if downloadUrl is not set
+    // otherwise use the custom downloadUrl
+    if o.fs.opt.DownloadURL == "" {
+        opts.RootURL = o.fs.info.DownloadURL
+    } else {
+        opts.RootURL = o.fs.opt.DownloadURL
+    }
     // Download by id if set otherwise by name
     if o.id != "" {
         opts.Path += "/b2api/v1/b2_download_file_by_id?fileId=" + urlEncode(o.id)
@@ -1459,7 +1477,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 // Content-Type b2/x-auto to automatically set the stored Content-Type
 // post upload. In the case where a file extension is absent or the
 // lookup fails, the Content-Type is set to application/octet-stream. The
-// Content-Type mappings can be purused here.
+// Content-Type mappings can be pursued here.
 //
 // X-Bz-Content-Sha1
 // required

@@ -45,7 +45,7 @@ type Error struct {
     RequestID string `json:"request_id"`
 }

-// Error returns a string for the error and statistifes the error interface
+// Error returns a string for the error and satisfies the error interface
 func (e *Error) Error() string {
     out := fmt.Sprintf("Error %q (%d)", e.Code, e.Status)
     if e.Message != "" {
@@ -57,7 +57,7 @@ func (e *Error) Error() string {
     return out
 }

-// Check Error statisfies the error interface
+// Check Error satisfies the error interface
 var _ error = (*Error)(nil)

 // ItemFields are the fields needed for FileInfo


@@ -111,7 +111,7 @@ type Fs struct {
features *fs.Features // optional features features *fs.Features // optional features
srv *rest.Client // the connection to the one drive server srv *rest.Client // the connection to the one drive server
dirCache *dircache.DirCache // Map of directory path to directory id dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // pacer for API calls pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry tokenRenewer *oauthutil.Renew // renew the token on expiry
uploadToken *pacer.TokenDispenser // control concurrency uploadToken *pacer.TokenDispenser // control concurrency
} }
@@ -171,13 +171,13 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) { func shouldRetry(resp *http.Response, err error) (bool, error) {
authRety := false authRetry := false
if resp != nil && resp.StatusCode == 401 && len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 { if resp != nil && resp.StatusCode == 401 && len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
authRety = true authRetry = true
fs.Debugf(nil, "Should retry: %v", err) fs.Debugf(nil, "Should retry: %v", err)
} }
return authRety || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err return authRetry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
} }
// substitute reserved characters for box // substitute reserved characters for box
@@ -260,7 +260,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root: root, root: root,
opt: *opt, opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL), srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers), uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
@@ -530,10 +530,10 @@ func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Obje
// //
// The new object may have been created if an error is returned // The new object may have been created if an error is returned
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
exisitingObj, err := f.newObjectWithInfo(src.Remote(), nil) existingObj, err := f.newObjectWithInfo(src.Remote(), nil)
switch err { switch err {
case nil: case nil:
return exisitingObj, exisitingObj.Update(in, src, options...) return existingObj, existingObj.Update(in, src, options...)
case fs.ErrorObjectNotFound: case fs.ErrorObjectNotFound:
// Not found so create it // Not found so create it
return f.PutUnchecked(in, src) return f.PutUnchecked(in, src)


@@ -211,8 +211,8 @@ outer:
} }
reqSize := remaining reqSize := remaining
if reqSize >= int64(chunkSize) { if reqSize >= chunkSize {
reqSize = int64(chunkSize) reqSize = chunkSize
} }
// Make a block of memory // Make a block of memory


@@ -576,7 +576,7 @@ The slice indices are similar to Python slices: start[:end]
start is the 0 based chunk number from the beginning of the file start is the 0 based chunk number from the beginning of the file
to fetch inclusive. end is 0 based chunk number from the beginning to fetch inclusive. end is 0 based chunk number from the beginning
of the file to fetch exclisive. of the file to fetch exclusive.
Both values can be negative, in which case they count from the back Both values can be negative, in which case they count from the back
of the file. The value "-5:" represents the last 5 chunks of a file. of the file. The value "-5:" represents the last 5 chunks of a file.
@@ -870,7 +870,7 @@ func (f *Fs) notifyChangeUpstream(remote string, entryType fs.EntryType) {
} }
} }
// ChangeNotify can subsribe multiple callers // ChangeNotify can subscribe multiple callers
// this is coupled with the wrapped fs ChangeNotify (if it supports it) // this is coupled with the wrapped fs ChangeNotify (if it supports it)
// and also notifies other caches (i.e VFS) to clear out whenever something changes // and also notifies other caches (i.e VFS) to clear out whenever something changes
func (f *Fs) ChangeNotify(notifyFunc func(string, fs.EntryType), pollInterval <-chan time.Duration) { func (f *Fs) ChangeNotify(notifyFunc func(string, fs.EntryType), pollInterval <-chan time.Duration) {
@@ -1191,7 +1191,7 @@ func (f *Fs) Rmdir(dir string) error {
} }
var queuedEntries []*Object var queuedEntries []*Object
err = walk.Walk(f.tempFs, dir, true, -1, func(path string, entries fs.DirEntries, err error) error { err = walk.ListR(f.tempFs, dir, true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, o := range entries { for _, o := range entries {
if oo, ok := o.(fs.Object); ok { if oo, ok := o.(fs.Object); ok {
co := ObjectFromOriginal(f, oo) co := ObjectFromOriginal(f, oo)
@@ -1287,7 +1287,7 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
} }
var queuedEntries []*Object var queuedEntries []*Object
err := walk.Walk(f.tempFs, srcRemote, true, -1, func(path string, entries fs.DirEntries, err error) error { err := walk.ListR(f.tempFs, srcRemote, true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, o := range entries { for _, o := range entries {
if oo, ok := o.(fs.Object); ok { if oo, ok := o.(fs.Object); ok {
co := ObjectFromOriginal(f, oo) co := ObjectFromOriginal(f, oo)
@@ -1549,7 +1549,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
} }
if srcObj.isTempFile() { if srcObj.isTempFile() {
// we check if the feature is stil active // we check if the feature is still active
if f.opt.TempWritePath == "" { if f.opt.TempWritePath == "" {
fs.Errorf(srcObj, "can't copy - this is a local cached file but this feature is turned off this run") fs.Errorf(srcObj, "can't copy - this is a local cached file but this feature is turned off this run")
return nil, fs.ErrorCantCopy return nil, fs.ErrorCantCopy
@@ -1625,7 +1625,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
// if this is a temp object then we perform the changes locally // if this is a temp object then we perform the changes locally
if srcObj.isTempFile() { if srcObj.isTempFile() {
// we check if the feature is stil active // we check if the feature is still active
if f.opt.TempWritePath == "" { if f.opt.TempWritePath == "" {
fs.Errorf(srcObj, "can't move - this is a local cached file but this feature is turned off this run") fs.Errorf(srcObj, "can't move - this is a local cached file but this feature is turned off this run")
return nil, fs.ErrorCantMove return nil, fs.ErrorCantMove


@@ -387,10 +387,10 @@ func TestInternalWrappedWrittenContentMatches(t *testing.T) {
// write the object // write the object
o := runInstance.writeObjectBytes(t, cfs.UnWrap(), "data.bin", testData) o := runInstance.writeObjectBytes(t, cfs.UnWrap(), "data.bin", testData)
require.Equal(t, o.Size(), int64(testSize)) require.Equal(t, o.Size(), testSize)
time.Sleep(time.Second * 3) time.Sleep(time.Second * 3)
checkSample, err := runInstance.readDataFromRemote(t, rootFs, "data.bin", 0, int64(testSize), false) checkSample, err := runInstance.readDataFromRemote(t, rootFs, "data.bin", 0, testSize, false)
require.NoError(t, err) require.NoError(t, err)
require.Equal(t, int64(len(checkSample)), o.Size()) require.Equal(t, int64(len(checkSample)), o.Size())
@@ -726,6 +726,7 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
// Call the rc function // Call the rc function
m, err := cacheExpire.Fn(rc.Params{"remote": "data.bin"}) m, err := cacheExpire.Fn(rc.Params{"remote": "data.bin"})
require.NoError(t, err)
require.Contains(t, m, "status") require.Contains(t, m, "status")
require.Contains(t, m, "message") require.Contains(t, m, "message")
require.Equal(t, "ok", m["status"]) require.Equal(t, "ok", m["status"])
@@ -735,18 +736,21 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
co, err = rootFs.NewObject("data.bin") co, err = rootFs.NewObject("data.bin")
require.NoError(t, err) require.NoError(t, err)
require.Equal(t, wrappedTime.Unix(), co.ModTime().Unix()) require.Equal(t, wrappedTime.Unix(), co.ModTime().Unix())
li1, err := runInstance.list(t, rootFs, "") _, err = runInstance.list(t, rootFs, "")
require.NoError(t, err)
// create some rand test data // create some rand test data
testData2 := randStringBytes(int(chunkSize)) testData2 := randStringBytes(int(chunkSize))
runInstance.writeObjectBytes(t, cfs.UnWrap(), runInstance.encryptRemoteIfNeeded(t, "test2"), testData2) runInstance.writeObjectBytes(t, cfs.UnWrap(), runInstance.encryptRemoteIfNeeded(t, "test2"), testData2)
// list should have 1 item only // list should have 1 item only
li1, err = runInstance.list(t, rootFs, "") li1, err := runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Len(t, li1, 1) require.Len(t, li1, 1)
// Call the rc function // Call the rc function
m, err = cacheExpire.Fn(rc.Params{"remote": "/"}) m, err = cacheExpire.Fn(rc.Params{"remote": "/"})
require.NoError(t, err)
require.Contains(t, m, "status") require.Contains(t, m, "status")
require.Contains(t, m, "message") require.Contains(t, m, "message")
require.Equal(t, "ok", m["status"]) require.Equal(t, "ok", m["status"])
@@ -754,6 +758,7 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
// list should have 2 items now // list should have 2 items now
li2, err := runInstance.list(t, rootFs, "") li2, err := runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Len(t, li2, 2) require.Len(t, li2, 2)
} }
@@ -1490,7 +1495,8 @@ func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) e
var err error var err error
if r.useMount { if r.useMount {
f, err := os.OpenFile(path.Join(runInstance.mntDir, src), os.O_TRUNC|os.O_CREATE|os.O_WRONLY, 0644) var f *os.File
f, err = os.OpenFile(path.Join(runInstance.mntDir, src), os.O_TRUNC|os.O_CREATE|os.O_WRONLY, 0644)
if err != nil { if err != nil {
return err return err
} }
@@ -1500,7 +1506,8 @@ func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) e
}() }()
_, err = f.WriteString(data + append) _, err = f.WriteString(data + append)
} else { } else {
obj1, err := rootFs.NewObject(src) var obj1 fs.Object
obj1, err = rootFs.NewObject(src)
if err != nil { if err != nil {
return err return err
} }
@@ -1632,15 +1639,13 @@ func (r *run) getCacheFs(f fs.Fs) (*cache.Fs, error) {
cfs, ok := f.(*cache.Fs) cfs, ok := f.(*cache.Fs)
if ok { if ok {
return cfs, nil return cfs, nil
} else { }
if f.Features().UnWrap != nil { if f.Features().UnWrap != nil {
cfs, ok := f.Features().UnWrap().(*cache.Fs) cfs, ok := f.Features().UnWrap().(*cache.Fs)
if ok { if ok {
return cfs, nil return cfs, nil
}
} }
} }
return nil, errors.New("didn't find a cache fs") return nil, errors.New("didn't find a cache fs")
} }


@@ -398,7 +398,7 @@ func (b *Persistent) AddObject(cachedObject *Object) error {
if err != nil { if err != nil {
return errors.Errorf("couldn't marshal object (%v) info: %v", cachedObject, err) return errors.Errorf("couldn't marshal object (%v) info: %v", cachedObject, err)
} }
err = bucket.Put([]byte(cachedObject.Name), []byte(encoded)) err = bucket.Put([]byte(cachedObject.Name), encoded)
if err != nil { if err != nil {
return errors.Errorf("couldn't cache object (%v) info: %v", cachedObject, err) return errors.Errorf("couldn't cache object (%v) info: %v", cachedObject, err)
} }
@@ -809,7 +809,7 @@ func (b *Persistent) addPendingUpload(destPath string, started bool) error {
if err != nil { if err != nil {
return errors.Errorf("couldn't marshal object (%v) info: %v", destPath, err) return errors.Errorf("couldn't marshal object (%v) info: %v", destPath, err)
} }
err = bucket.Put([]byte(destPath), []byte(encoded)) err = bucket.Put([]byte(destPath), encoded)
if err != nil { if err != nil {
return errors.Errorf("couldn't cache object (%v) info: %v", destPath, err) return errors.Errorf("couldn't cache object (%v) info: %v", destPath, err)
} }
@@ -1023,7 +1023,7 @@ func (b *Persistent) ReconcileTempUploads(cacheFs *Fs) error {
} }
var queuedEntries []fs.Object var queuedEntries []fs.Object
err = walk.Walk(cacheFs.tempFs, "", true, -1, func(path string, entries fs.DirEntries, err error) error { err = walk.ListR(cacheFs.tempFs, "", true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, o := range entries { for _, o := range entries {
if oo, ok := o.(fs.Object); ok { if oo, ok := o.(fs.Object); ok {
queuedEntries = append(queuedEntries, oo) queuedEntries = append(queuedEntries, oo)
@@ -1049,7 +1049,7 @@ func (b *Persistent) ReconcileTempUploads(cacheFs *Fs) error {
if err != nil { if err != nil {
return errors.Errorf("couldn't marshal object (%v) info: %v", queuedEntry, err) return errors.Errorf("couldn't marshal object (%v) info: %v", queuedEntry, err)
} }
err = bucket.Put([]byte(destPath), []byte(encoded)) err = bucket.Put([]byte(destPath), encoded)
if err != nil { if err != nil {
return errors.Errorf("couldn't cache object (%v) info: %v", destPath, err) return errors.Errorf("couldn't cache object (%v) info: %v", destPath, err)
} }


@@ -144,7 +144,6 @@ type cipher struct {
buffers sync.Pool // encrypt/decrypt buffers buffers sync.Pool // encrypt/decrypt buffers
cryptoRand io.Reader // read crypto random numbers from here cryptoRand io.Reader // read crypto random numbers from here
dirNameEncrypt bool dirNameEncrypt bool
passCorrupted bool
} }
// newCipher initialises the cipher. If salt is "" then it uses a built in salt val // newCipher initialises the cipher. If salt is "" then it uses a built in salt val
@@ -164,11 +163,6 @@ func newCipher(mode NameEncryptionMode, password, salt string, dirNameEncrypt bo
return c, nil return c, nil
} }
// Set to pass corrupted blocks
func (c *cipher) setPassCorrupted(passCorrupted bool) {
c.passCorrupted = passCorrupted
}
// Key creates all the internal keys from the password passed in using // Key creates all the internal keys from the password passed in using
// scrypt. // scrypt.
// //
@@ -469,7 +463,7 @@ func (c *cipher) deobfuscateSegment(ciphertext string) (string, error) {
if int(newRune) < base { if int(newRune) < base {
newRune += 256 newRune += 256
} }
_, _ = result.WriteRune(rune(newRune)) _, _ = result.WriteRune(newRune)
default: default:
_, _ = result.WriteRune(runeValue) _, _ = result.WriteRune(runeValue)
@@ -754,7 +748,7 @@ func (c *cipher) newDecrypter(rc io.ReadCloser) (*decrypter, error) {
if !bytes.Equal(readBuf[:fileMagicSize], fileMagicBytes) { if !bytes.Equal(readBuf[:fileMagicSize], fileMagicBytes) {
return nil, fh.finishAndClose(ErrorEncryptedBadMagic) return nil, fh.finishAndClose(ErrorEncryptedBadMagic)
} }
// retreive the nonce // retrieve the nonce
fh.nonce.fromBuf(readBuf[fileMagicSize:]) fh.nonce.fromBuf(readBuf[fileMagicSize:])
fh.initialNonce = fh.nonce fh.initialNonce = fh.nonce
return fh, nil return fh, nil
@@ -828,10 +822,7 @@ func (fh *decrypter) fillBuffer() (err error) {
if err != nil { if err != nil {
return err // return pending error as it is likely more accurate return err // return pending error as it is likely more accurate
} }
if !fh.c.passCorrupted { return ErrorEncryptedBadBlock
return ErrorEncryptedBadBlock
}
fs.Errorf(nil, "passing corrupted block")
} }
fh.bufIndex = 0 fh.bufIndex = 0
fh.bufSize = n - blockHeaderSize fh.bufSize = n - blockHeaderSize


@@ -17,6 +17,7 @@ import (
"github.com/pkg/errors" "github.com/pkg/errors"
) )
// Globals
// Register with Fs // Register with Fs
func init() { func init() {
fs.Register(&fs.RegInfo{ fs.Register(&fs.RegInfo{
@@ -79,15 +80,6 @@ names, or for debugging purposes.`,
Default: false, Default: false,
Hide: fs.OptionHideConfigurator, Hide: fs.OptionHideConfigurator,
Advanced: true, Advanced: true,
}, {
Name: "pass_corrupted_blocks",
Help: `Pass through corrupted blocks to the output.
This is for debugging corruption problems in crypt - it shouldn't be needed normally.
`,
Default: false,
Hide: fs.OptionHideConfigurator,
Advanced: true,
}}, }},
}) })
} }
@@ -116,7 +108,6 @@ func newCipherForConfig(opt *Options) (Cipher, error) {
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to make cipher") return nil, errors.Wrap(err, "failed to make cipher")
} }
cipher.setPassCorrupted(opt.PassCorruptedBlocks)
return cipher, nil return cipher, nil
} }
@@ -131,7 +122,7 @@ func NewCipher(m configmap.Mapper) (Cipher, error) {
return newCipherForConfig(opt) return newCipherForConfig(opt)
} }
// NewFs contstructs an Fs from the path, container:path // NewFs constructs an Fs from the path, container:path
func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) { func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct // Parse config into Options struct
opt := new(Options) opt := new(Options)
@@ -206,7 +197,6 @@ type Options struct {
Password string `config:"password"` Password string `config:"password"`
Password2 string `config:"password2"` Password2 string `config:"password2"`
ShowMapping bool `config:"show_mapping"` ShowMapping bool `config:"show_mapping"`
PassCorruptedBlocks bool `config:"pass_corrupted_blocks"`
} }
// Fs represents a wrapped fs.Fs // Fs represents a wrapped fs.Fs
@@ -565,7 +555,7 @@ func (f *Fs) DecryptFileName(encryptedFileName string) (string, error) {
} }
// ComputeHash takes the nonce from o, and encrypts the contents of // ComputeHash takes the nonce from o, and encrypts the contents of
// src with it, and calcuates the hash given by HashType on the fly // src with it, and calculates the hash given by HashType on the fly
// //
// Note that we break lots of encapsulation in this function. // Note that we break lots of encapsulation in this function.
func (f *Fs) ComputeHash(o *Object, src fs.Object, hashType hash.Type) (hashStr string, err error) { func (f *Fs) ComputeHash(o *Object, src fs.Object, hashType hash.Type) (hashStr string, err error) {


@@ -21,6 +21,7 @@ import (
"net/url" "net/url"
"os" "os"
"path" "path"
"sort"
"strconv" "strconv"
"strings" "strings"
"sync" "sync"
@@ -185,10 +186,10 @@ func init() {
}, },
Options: []fs.Option{{ Options: []fs.Option{{
Name: config.ConfigClientID, Name: config.ConfigClientID,
Help: "Google Application Client Id\nLeave blank normally.", Help: "Google Application Client Id\nSetting your own is recommended.\nSee https://rclone.org/drive/#making-your-own-client-id for how to create your own.\nIf you leave this blank, it will use an internal key which is low performance.",
}, { }, {
Name: config.ConfigClientSecret, Name: config.ConfigClientSecret,
Help: "Google Application Client Secret\nLeave blank normally.", Help: "Google Application Client Secret\nSetting your own is recommended.",
}, { }, {
Name: "scope", Name: "scope",
Help: "Scope that rclone should use when requesting access from drive.", Help: "Scope that rclone should use when requesting access from drive.",
@@ -239,6 +240,22 @@ func init() {
Default: false, Default: false,
Help: "Skip google documents in all listings.\nIf given, gdocs practically become invisible to rclone.", Help: "Skip google documents in all listings.\nIf given, gdocs practically become invisible to rclone.",
Advanced: true, Advanced: true,
}, {
Name: "skip_checksum_gphotos",
Default: false,
Help: `Skip MD5 checksum on Google photos and videos only.
Use this if you get checksum errors when transferring Google photos or
videos.
Setting this flag will cause Google photos and videos to return a
blank MD5 checksum.
Google photos are identified by being in the "photos" space.
Corrupted checksums are caused by Google modifying the image/video but
not updating the checksum.`,
Advanced: true,
}, { }, {
Name: "shared_with_me", Name: "shared_with_me",
Default: false, Default: false,
@@ -395,6 +412,7 @@ type Options struct {
AuthOwnerOnly bool `config:"auth_owner_only"` AuthOwnerOnly bool `config:"auth_owner_only"`
UseTrash bool `config:"use_trash"` UseTrash bool `config:"use_trash"`
SkipGdocs bool `config:"skip_gdocs"` SkipGdocs bool `config:"skip_gdocs"`
SkipChecksumGphotos bool `config:"skip_checksum_gphotos"`
SharedWithMe bool `config:"shared_with_me"` SharedWithMe bool `config:"shared_with_me"`
TrashedOnly bool `config:"trashed_only"` TrashedOnly bool `config:"trashed_only"`
Extensions string `config:"formats"` Extensions string `config:"formats"`
@@ -425,7 +443,7 @@ type Fs struct {
client *http.Client // authorized client client *http.Client // authorized client
rootFolderID string // the id of the root folder rootFolderID string // the id of the root folder
dirCache *dircache.DirCache // Map of directory path to directory id dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // To pace the API calls pacer *fs.Pacer // To pace the API calls
exportExtensions []string // preferred extensions to download docs exportExtensions []string // preferred extensions to download docs
importMimeTypes []string // MIME types to convert to docs importMimeTypes []string // MIME types to convert to docs
isTeamDrive bool // true if this is a team drive isTeamDrive bool // true if this is a team drive
@@ -481,7 +499,7 @@ func (f *Fs) Features() *fs.Features {
return f.features return f.features
} }
// shouldRetry determines whehter a given err rates being retried // shouldRetry determines whether a given err rates being retried
func shouldRetry(err error) (bool, error) { func shouldRetry(err error) (bool, error) {
if err == nil { if err == nil {
return false, nil return false, nil
@@ -614,6 +632,9 @@ func (f *Fs) list(dirIDs []string, title string, directoriesOnly, filesOnly, inc
if f.opt.AuthOwnerOnly { if f.opt.AuthOwnerOnly {
fields += ",owners" fields += ",owners"
} }
if f.opt.SkipChecksumGphotos {
fields += ",spaces"
}
fields = fmt.Sprintf("files(%s),nextPageToken", fields) fields = fmt.Sprintf("files(%s),nextPageToken", fields)
@@ -675,28 +696,33 @@ func isPowerOfTwo(x int64) bool {
} }
// add a charset parameter to all text/* MIME types // add a charset parameter to all text/* MIME types
func fixMimeType(mimeType string) string { func fixMimeType(mimeTypeIn string) string {
mediaType, param, err := mime.ParseMediaType(mimeType) if mimeTypeIn == "" {
return ""
}
mediaType, param, err := mime.ParseMediaType(mimeTypeIn)
if err != nil { if err != nil {
return mimeType return mimeTypeIn
} }
if strings.HasPrefix(mimeType, "text/") && param["charset"] == "" { mimeTypeOut := mimeTypeIn
if strings.HasPrefix(mediaType, "text/") && param["charset"] == "" {
param["charset"] = "utf-8" param["charset"] = "utf-8"
mimeType = mime.FormatMediaType(mediaType, param) mimeTypeOut = mime.FormatMediaType(mediaType, param)
} }
return mimeType if mimeTypeOut == "" {
panic(errors.Errorf("unable to fix MIME type %q", mimeTypeIn))
}
return mimeTypeOut
} }
func fixMimeTypeMap(m map[string][]string) map[string][]string { func fixMimeTypeMap(in map[string][]string) (out map[string][]string) {
for _, v := range m { out = make(map[string][]string, len(in))
for k, v := range in {
for i, mt := range v { for i, mt := range v {
fixed := fixMimeType(mt) v[i] = fixMimeType(mt)
if fixed == "" {
panic(errors.Errorf("unable to fix MIME type %q", mt))
}
v[i] = fixed
} }
out[fixMimeType(k)] = v
} }
return m return out
} }
func isInternalMimeType(mimeType string) bool { func isInternalMimeType(mimeType string) bool {
return strings.HasPrefix(mimeType, "application/vnd.google-apps.") return strings.HasPrefix(mimeType, "application/vnd.google-apps.")
@@ -788,8 +814,8 @@ func configTeamDrive(opt *Options, m configmap.Mapper, name string) error {
} }
// newPacer makes a pacer configured for drive // newPacer makes a pacer configured for drive
func newPacer(opt *Options) *pacer.Pacer { func newPacer(opt *Options) *fs.Pacer {
return pacer.New().SetMinSleep(time.Duration(opt.PacerMinSleep)).SetBurst(opt.PacerBurst).SetPacer(pacer.GoogleDrivePacer) return fs.NewPacer(pacer.NewGoogleDrive(pacer.MinSleep(opt.PacerMinSleep), pacer.Burst(opt.PacerBurst)))
} }
func getServiceAccountClient(opt *Options, credentialsData []byte) (*http.Client, error) { func getServiceAccountClient(opt *Options, credentialsData []byte) (*http.Client, error) {
@@ -862,7 +888,7 @@ func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
return return
} }
// NewFs contstructs an Fs from the path, container:path // NewFs constructs an Fs from the path, container:path
func NewFs(name, path string, m configmap.Mapper) (fs.Fs, error) { func NewFs(name, path string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct // Parse config into Options struct
opt := new(Options) opt := new(Options)
@@ -901,6 +927,7 @@ func NewFs(name, path string, m configmap.Mapper) (fs.Fs, error) {
ReadMimeType: true, ReadMimeType: true,
WriteMimeType: true, WriteMimeType: true,
CanHaveEmptyDirectories: true, CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: true,
}).Fill(f) }).Fill(f)
// Create a new authorized Drive client. // Create a new authorized Drive client.
@@ -995,6 +1022,15 @@ func (f *Fs) newBaseObject(remote string, info *drive.File) baseObject {
// newRegularObject creates a fs.Object for a normal drive.File // newRegularObject creates a fs.Object for a normal drive.File
func (f *Fs) newRegularObject(remote string, info *drive.File) fs.Object { func (f *Fs) newRegularObject(remote string, info *drive.File) fs.Object {
// wipe checksum if SkipChecksumGphotos and file is type Photo or Video
if f.opt.SkipChecksumGphotos {
for _, space := range info.Spaces {
if space == "photos" {
info.Md5Checksum = ""
break
}
}
}
return &Object{ return &Object{
baseObject: f.newBaseObject(remote, info), baseObject: f.newBaseObject(remote, info),
url: fmt.Sprintf("%sfiles/%s?alt=media", f.svc.BasePath, info.Id), url: fmt.Sprintf("%sfiles/%s?alt=media", f.svc.BasePath, info.Id),
@@ -1339,17 +1375,46 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
return entries, nil return entries, nil
} }
// listREntry is a task to be executed by a listRRunner
type listREntry struct {
id, path string
}
// listRSlices is a helper struct to sort two slices at once
type listRSlices struct {
dirs []string
paths []string
}
func (s listRSlices) Sort() {
sort.Sort(s)
}
func (s listRSlices) Len() int {
return len(s.dirs)
}
func (s listRSlices) Swap(i, j int) {
s.dirs[i], s.dirs[j] = s.dirs[j], s.dirs[i]
s.paths[i], s.paths[j] = s.paths[j], s.paths[i]
}
func (s listRSlices) Less(i, j int) bool {
return s.dirs[i] < s.dirs[j]
}
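
The listRSlices helper above keeps the dirs and paths slices aligned while sorting by directory ID, which lets listRRunner look parents up later with sort.SearchStrings. A standalone sketch of the same parallel-slice sort, using made-up IDs and paths:

package main

import (
	"fmt"
	"sort"
)

// parallelSlices mirrors listRSlices: sort two slices by the first one.
type parallelSlices struct {
	dirs  []string
	paths []string
}

func (s parallelSlices) Len() int           { return len(s.dirs) }
func (s parallelSlices) Less(i, j int) bool { return s.dirs[i] < s.dirs[j] }
func (s parallelSlices) Swap(i, j int) {
	s.dirs[i], s.dirs[j] = s.dirs[j], s.dirs[i]
	s.paths[i], s.paths[j] = s.paths[j], s.paths[i]
}

func main() {
	dirs := []string{"id-c", "id-a", "id-b"}    // hypothetical directory IDs
	paths := []string{"gamma", "alpha", "beta"} // the matching remote paths
	sort.Sort(parallelSlices{dirs, paths})

	// After sorting, a parent ID can be found with a binary search.
	i := sort.SearchStrings(dirs, "id-b")
	fmt.Println(dirs[i], "->", paths[i]) // id-b -> beta
}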
// listRRunner will read dirIDs from the in channel, perform the file listing and call cb with each DirEntry. // listRRunner will read dirIDs from the in channel, perform the file listing and call cb with each DirEntry.
// //
// In each cycle, will wait up to 10ms to read up to grouping entries from the in channel. // In each cycle it will read up to grouping entries from the in channel without blocking.
// If an error occurs it will be sent to the out channel and then return. Once the in channel is closed, // If an error occurs it will be sent to the out channel and then return. Once the in channel is closed,
// nil is sent to the out channel and the function returns. // nil is sent to the out channel and the function returns.
func (f *Fs) listRRunner(wg *sync.WaitGroup, in <-chan string, out chan<- error, cb func(fs.DirEntry) error, grouping int) { func (f *Fs) listRRunner(wg *sync.WaitGroup, in <-chan listREntry, out chan<- error, cb func(fs.DirEntry) error, grouping int) {
var dirs []string var dirs []string
var paths []string
for dir := range in { for dir := range in {
dirs = append(dirs[:0], dir) dirs = append(dirs[:0], dir.id)
wait := time.After(10 * time.Millisecond) paths = append(paths[:0], dir.path)
waitloop: waitloop:
for i := 1; i < grouping; i++ { for i := 1; i < grouping; i++ {
select { select {
@@ -1357,31 +1422,32 @@ func (f *Fs) listRRunner(wg *sync.WaitGroup, in <-chan string, out chan<- error,
if !ok { if !ok {
break waitloop break waitloop
} }
dirs = append(dirs, d) dirs = append(dirs, d.id)
case <-wait: paths = append(paths, d.path)
break waitloop default:
} }
} }
listRSlices{dirs, paths}.Sort()
var iErr error var iErr error
_, err := f.list(dirs, "", false, false, false, func(item *drive.File) bool { _, err := f.list(dirs, "", false, false, false, func(item *drive.File) bool {
parentPath := "" for _, parent := range item.Parents {
if len(item.Parents) > 0 { // only handle parents that are in the requested dirs list
p, ok := f.dirCache.GetInv(item.Parents[0]) i := sort.SearchStrings(dirs, parent)
if ok { if i == len(dirs) || dirs[i] != parent {
parentPath = p continue
}
remote := path.Join(paths[i], item.Name)
entry, err := f.itemToDirEntry(remote, item)
if err != nil {
iErr = err
return true
} }
}
remote := path.Join(parentPath, item.Name)
entry, err := f.itemToDirEntry(remote, item)
if err != nil {
iErr = err
return true
}
err = cb(entry) err = cb(entry)
if err != nil { if err != nil {
iErr = err iErr = err
return true return true
}
} }
return false return false
}) })
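
As the updated comment says, the runner now drains up to grouping directory IDs from the in channel without blocking (a select with a default case) rather than waiting 10ms for stragglers. A small sketch of that batching pattern on a hypothetical channel of IDs:

package main

import "fmt"

func main() {
	const grouping = 3
	in := make(chan string, 8)
	for _, id := range []string{"a", "b", "c", "d", "e"} { // hypothetical queued dir IDs
		in <- id
	}

	// Take one entry (blocking), then up to grouping-1 more without blocking.
	batch := []string{<-in}
batching:
	for len(batch) < grouping {
		select {
		case id, ok := <-in:
			if !ok {
				break batching
			}
			batch = append(batch, id)
		default:
			break batching // nothing else queued right now, process what we have
		}
	}
	fmt.Println(batch) // [a b c]
}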
@@ -1432,30 +1498,44 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
if err != nil { if err != nil {
return err return err
} }
if directoryID == "root" {
var info *drive.File
err = f.pacer.CallNoRetry(func() (bool, error) {
info, err = f.svc.Files.Get("root").
Fields("id").
SupportsTeamDrives(f.isTeamDrive).
Do()
return shouldRetry(err)
})
if err != nil {
return err
}
directoryID = info.Id
}
mu := sync.Mutex{} // protects in and overflow mu := sync.Mutex{} // protects in and overflow
wg := sync.WaitGroup{} wg := sync.WaitGroup{}
in := make(chan string, inputBuffer) in := make(chan listREntry, inputBuffer)
out := make(chan error, fs.Config.Checkers) out := make(chan error, fs.Config.Checkers)
list := walk.NewListRHelper(callback) list := walk.NewListRHelper(callback)
overfflow := []string{} overflow := []listREntry{}
cb := func(entry fs.DirEntry) error { cb := func(entry fs.DirEntry) error {
mu.Lock() mu.Lock()
defer mu.Unlock() defer mu.Unlock()
if d, isDir := entry.(*fs.Dir); isDir && in != nil { if d, isDir := entry.(*fs.Dir); isDir && in != nil {
select { select {
case in <- d.ID(): case in <- listREntry{d.ID(), d.Remote()}:
wg.Add(1) wg.Add(1)
default: default:
overfflow = append(overfflow, d.ID()) overflow = append(overflow, listREntry{d.ID(), d.Remote()})
} }
} }
return list.Add(entry) return list.Add(entry)
} }
wg.Add(1) wg.Add(1)
in <- directoryID in <- listREntry{directoryID, dir}
for i := 0; i < fs.Config.Checkers; i++ { for i := 0; i < fs.Config.Checkers; i++ {
go f.listRRunner(&wg, in, out, cb, grouping) go f.listRRunner(&wg, in, out, cb, grouping)
@@ -1464,18 +1544,18 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
// wait until the all directories are processed // wait until the all directories are processed
wg.Wait() wg.Wait()
// if the input channel overflowed add the collected entries to the channel now // if the input channel overflowed add the collected entries to the channel now
for len(overfflow) > 0 { for len(overflow) > 0 {
mu.Lock() mu.Lock()
l := len(overfflow) l := len(overflow)
// only fill half of the channel to prevent entries being put into overfflow again // only fill half of the channel to prevent entries being put into overflow again
if l > inputBuffer/2 { if l > inputBuffer/2 {
l = inputBuffer / 2 l = inputBuffer / 2
} }
wg.Add(l) wg.Add(l)
for _, d := range overfflow[:l] { for _, d := range overflow[:l] {
in <- d in <- d
} }
overfflow = overfflow[l:] overflow = overflow[l:]
mu.Unlock() mu.Unlock()
// wait again for the completion of all directories // wait again for the completion of all directories
@@ -1666,14 +1746,14 @@ func (f *Fs) MergeDirs(dirs []fs.Directory) error {
return shouldRetry(err) return shouldRetry(err)
}) })
if err != nil { if err != nil {
return errors.Wrapf(err, "MergDirs move failed on %q in %v", info.Name, srcDir) return errors.Wrapf(err, "MergeDirs move failed on %q in %v", info.Name, srcDir)
} }
} }
// rmdir (into trash) the now empty source directory // rmdir (into trash) the now empty source directory
fs.Infof(srcDir, "removing empty directory") fs.Infof(srcDir, "removing empty directory")
err = f.rmdir(srcDir.ID(), true) err = f.rmdir(srcDir.ID(), true)
if err != nil { if err != nil {
return errors.Wrapf(err, "MergDirs move failed to rmdir %q", srcDir) return errors.Wrapf(err, "MergeDirs move failed to rmdir %q", srcDir)
} }
} }
return nil return nil
@@ -2092,7 +2172,7 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
// ChangeNotify calls the passed function with a path that has had changes. // ChangeNotify calls the passed function with a path that has had changes.
// If the implementation uses polling, it should adhere to the given interval. // If the implementation uses polling, it should adhere to the given interval.
// //
// Automatically restarts itself in case of unexpected behaviour of the remote. // Automatically restarts itself in case of unexpected behavior of the remote.
// //
// Close the returned channel to stop being notified. // Close the returned channel to stop being notified.
func (f *Fs) ChangeNotify(notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) { func (f *Fs) ChangeNotify(notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) {
@@ -2199,11 +2279,13 @@ func (f *Fs) changeNotifyRunner(notifyFunc func(string, fs.EntryType), startPage
// translate the parent dir of this object // translate the parent dir of this object
if len(change.File.Parents) > 0 { if len(change.File.Parents) > 0 {
if parentPath, ok := f.dirCache.GetInv(change.File.Parents[0]); ok { for _, parent := range change.File.Parents {
// and append the drive file name to compute the full file name if parentPath, ok := f.dirCache.GetInv(parent); ok {
newPath := path.Join(parentPath, change.File.Name) // and append the drive file name to compute the full file name
// this will now clear the actual file too newPath := path.Join(parentPath, change.File.Name)
pathsToClear = append(pathsToClear, entryType{path: newPath, entryType: changeType}) // this will now clear the actual file too
pathsToClear = append(pathsToClear, entryType{path: newPath, entryType: changeType})
}
} }
} else { // a true root object that is changed } else { // a true root object that is changed
pathsToClear = append(pathsToClear, entryType{path: change.File.Name, entryType: changeType}) pathsToClear = append(pathsToClear, entryType{path: change.File.Name, entryType: changeType})
@@ -2383,6 +2465,10 @@ func (o *baseObject) httpResponse(url, method string, options []fs.OpenOption) (
return req, nil, err return req, nil, err
} }
fs.OpenOptionAddHTTPHeaders(req.Header, options) fs.OpenOptionAddHTTPHeaders(req.Header, options)
if o.bytes == 0 {
// Don't supply range requests for 0 length objects as they always fail
delete(req.Header, "Range")
}
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
res, err = o.fs.client.Do(req) res, err = o.fs.client.Do(req)
if err == nil { if err == nil {
@@ -2586,6 +2672,9 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
return err return err
} }
newO, err := o.fs.newObjectWithInfo(src.Remote(), info) newO, err := o.fs.newObjectWithInfo(src.Remote(), info)
if err != nil {
return err
}
switch newO := newO.(type) { switch newO := newO.(type) {
case *Object: case *Object:
*o = *newO *o = *newO
@@ -2624,6 +2713,9 @@ func (o *documentObject) Update(in io.Reader, src fs.ObjectInfo, options ...fs.O
remote = remote[:len(remote)-o.extLen] remote = remote[:len(remote)-o.extLen]
newO, err := o.fs.newObjectWithInfo(remote, info) newO, err := o.fs.newObjectWithInfo(remote, info)
if err != nil {
return err
}
switch newO := newO.(type) { switch newO := newO.(type) {
case *documentObject: case *documentObject:
*o = *newO *o = *newO


@@ -185,7 +185,7 @@ func (rx *resumableUpload) transferChunk(start int64, chunk io.ReadSeeker, chunk
// been 200 OK. // been 200 OK.
// //
// So parse the response out of the body. We aren't expecting // So parse the response out of the body. We aren't expecting
// any other 2xx codes, so we parse it unconditionaly on // any other 2xx codes, so we parse it unconditionally on
// StatusCode // StatusCode
if err = json.NewDecoder(res.Body).Decode(&rx.ret); err != nil { if err = json.NewDecoder(res.Body).Decode(&rx.ret); err != nil {
return 598, err return 598, err


@@ -130,8 +130,8 @@ Any files larger than this will be uploaded in chunks of this size.
Note that chunks are buffered in memory (one at a time) so rclone can Note that chunks are buffered in memory (one at a time) so rclone can
deal with retries. Setting this larger will increase the speed deal with retries. Setting this larger will increase the speed
slightly (at most 10%% for 128MB in tests) at the cost of using more slightly (at most 10%% for 128MB in tests) at the cost of using more
memory. It can be set smaller if you are tight on memory.`, fs.SizeSuffix(maxChunkSize)), memory. It can be set smaller if you are tight on memory.`, maxChunkSize),
Default: fs.SizeSuffix(defaultChunkSize), Default: defaultChunkSize,
Advanced: true, Advanced: true,
}, { }, {
Name: "impersonate", Name: "impersonate",
@@ -160,7 +160,7 @@ type Fs struct {
team team.Client // for the Teams API team team.Client // for the Teams API
slashRoot string // root with "/" prefix, lowercase slashRoot string // root with "/" prefix, lowercase
slashRootSlash string // root with "/" prefix and postfix, lowercase slashRootSlash string // root with "/" prefix and postfix, lowercase
pacer *pacer.Pacer // To pace the API calls pacer *fs.Pacer // To pace the API calls
ns string // The namespace we are using or "" for none ns string // The namespace we are using or "" for none
} }
@@ -209,12 +209,12 @@ func shouldRetry(err error) (bool, error) {
case auth.RateLimitAPIError: case auth.RateLimitAPIError:
if e.RateLimitError.RetryAfter > 0 { if e.RateLimitError.RetryAfter > 0 {
fs.Debugf(baseErrString, "Too many requests or write operations. Trying again in %d seconds.", e.RateLimitError.RetryAfter) fs.Debugf(baseErrString, "Too many requests or write operations. Trying again in %d seconds.", e.RateLimitError.RetryAfter)
time.Sleep(time.Duration(e.RateLimitError.RetryAfter) * time.Second) err = pacer.RetryAfterError(err, time.Duration(e.RateLimitError.RetryAfter)*time.Second)
} }
return true, err return true, err
} }
// Keep old behaviour for backward compatibility // Keep old behavior for backward compatibility
if strings.Contains(baseErrString, "too_many_write_operations") || strings.Contains(baseErrString, "too_many_requests") { if strings.Contains(baseErrString, "too_many_write_operations") || strings.Contains(baseErrString, "too_many_requests") || baseErrString == "" {
return true, err return true, err
} }
return fserrors.ShouldRetry(err), err return fserrors.ShouldRetry(err), err
@@ -239,7 +239,7 @@ func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error)
return return
} }
// NewFs contstructs an Fs from the path, container:path // NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct // Parse config into Options struct
opt := new(Options) opt := new(Options)
@@ -273,7 +273,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
f := &Fs{ f := &Fs{
name: name, name: name,
opt: *opt, opt: *opt,
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
} }
config := dropbox.Config{ config := dropbox.Config{
LogLevel: dropbox.LogOff, // logging in the SDK: LogOff, LogDebug, LogInfo LogLevel: dropbox.LogOff, // logging in the SDK: LogOff, LogDebug, LogInfo


@@ -15,6 +15,7 @@ import (
"github.com/ncw/rclone/fs/config/configstruct" "github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure" "github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/hash" "github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/readers" "github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors" "github.com/pkg/errors"
) )
@@ -45,6 +46,11 @@ func init() {
Help: "FTP password", Help: "FTP password",
IsPassword: true, IsPassword: true,
Required: true, Required: true,
}, {
Name: "concurrency",
Help: "Maximum number of FTP simultaneous connections, 0 for unlimited",
Default: 0,
Advanced: true,
}, },
}, },
}) })
@@ -52,10 +58,11 @@ func init() {
// Options defines the configuration for this backend // Options defines the configuration for this backend
type Options struct { type Options struct {
Host string `config:"host"` Host string `config:"host"`
User string `config:"user"` User string `config:"user"`
Pass string `config:"pass"` Pass string `config:"pass"`
Port string `config:"port"` Port string `config:"port"`
Concurrency int `config:"concurrency"`
} }
// Fs represents a remote FTP server // Fs represents a remote FTP server
@@ -70,6 +77,7 @@ type Fs struct {
dialAddr string dialAddr string
poolMu sync.Mutex poolMu sync.Mutex
pool []*ftp.ServerConn pool []*ftp.ServerConn
tokens *pacer.TokenDispenser
} }
// Object describes an FTP file // Object describes an FTP file
@@ -128,6 +136,9 @@ func (f *Fs) ftpConnection() (*ftp.ServerConn, error) {
// Get an FTP connection from the pool, or open a new one // Get an FTP connection from the pool, or open a new one
func (f *Fs) getFtpConnection() (c *ftp.ServerConn, err error) { func (f *Fs) getFtpConnection() (c *ftp.ServerConn, err error) {
if f.opt.Concurrency > 0 {
f.tokens.Get()
}
f.poolMu.Lock() f.poolMu.Lock()
if len(f.pool) > 0 { if len(f.pool) > 0 {
c = f.pool[0] c = f.pool[0]
@@ -147,6 +158,9 @@ func (f *Fs) getFtpConnection() (c *ftp.ServerConn, err error) {
// if err is not nil then it checks the connection is alive using a // if err is not nil then it checks the connection is alive using a
// NOOP request // NOOP request
func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) { func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
if f.opt.Concurrency > 0 {
defer f.tokens.Put()
}
c := *pc c := *pc
*pc = nil *pc = nil
if err != nil { if err != nil {
@@ -166,7 +180,7 @@ func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
f.poolMu.Unlock() f.poolMu.Unlock()
} }
// NewFs contstructs an Fs from the path, container:path // NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) { func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
// defer fs.Trace(nil, "name=%q, root=%q", name, root)("fs=%v, err=%v", &ff, &err) // defer fs.Trace(nil, "name=%q, root=%q", name, root)("fs=%v, err=%v", &ff, &err)
// Parse config into Options struct // Parse config into Options struct
@@ -198,6 +212,7 @@ func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
user: user, user: user,
pass: pass, pass: pass,
dialAddr: dialAddr, dialAddr: dialAddr,
tokens: pacer.NewTokenDispenser(opt.Concurrency),
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
CanHaveEmptyDirectories: true, CanHaveEmptyDirectories: true,
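
The new concurrency option gates getFtpConnection and putFtpConnection on a token dispenser so at most that many FTP connections are in use at once. A conceptually similar, self-contained sketch using a buffered channel as the semaphore (all names here are hypothetical, not the backend's):

package main

import (
	"fmt"
	"sync"
)

type connLimiter struct {
	tokens chan struct{} // one token per allowed concurrent connection
}

func newConnLimiter(n int) *connLimiter {
	return &connLimiter{tokens: make(chan struct{}, n)}
}

func (l *connLimiter) get() { l.tokens <- struct{}{} } // blocks while n connections are in use
func (l *connLimiter) put() { <-l.tokens }

func main() {
	limiter := newConnLimiter(2) // like concurrency = 2
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			limiter.get()
			defer limiter.put()
			fmt.Println("using a connection for transfer", i)
		}(i)
	}
	wg.Wait()
}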


@@ -16,6 +16,7 @@ FIXME Patch/Delete/Get isn't working with files with spaces in - giving 404 erro
*/ */
import ( import (
"context"
"encoding/base64" "encoding/base64"
"encoding/hex" "encoding/hex"
"fmt" "fmt"
@@ -45,6 +46,8 @@ import (
"golang.org/x/oauth2" "golang.org/x/oauth2"
"golang.org/x/oauth2/google" "golang.org/x/oauth2/google"
"google.golang.org/api/googleapi" "google.golang.org/api/googleapi"
// NOTE: This API is deprecated
storage "google.golang.org/api/storage/v1" storage "google.golang.org/api/storage/v1"
) )
@@ -144,6 +147,22 @@ func init() {
Value: "publicReadWrite", Value: "publicReadWrite",
Help: "Project team owners get OWNER access, and all Users get WRITER access.", Help: "Project team owners get OWNER access, and all Users get WRITER access.",
}}, }},
}, {
Name: "bucket_policy_only",
Help: `Access checks should use bucket-level IAM policies.
If you want to upload objects to a bucket with Bucket Policy Only set
then you will need to set this.
When it is set, rclone:
- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set
Docs: https://cloud.google.com/storage/docs/bucket-policy-only
`,
Default: false,
}, { }, {
Name: "location", Name: "location",
Help: "Location for the newly created buckets.", Help: "Location for the newly created buckets.",
@@ -162,21 +181,36 @@ func init() {
}, { }, {
Value: "asia-east1", Value: "asia-east1",
Help: "Taiwan.", Help: "Taiwan.",
}, {
Value: "asia-east2",
Help: "Hong Kong.",
}, { }, {
Value: "asia-northeast1", Value: "asia-northeast1",
Help: "Tokyo.", Help: "Tokyo.",
}, {
Value: "asia-south1",
Help: "Mumbai.",
}, { }, {
Value: "asia-southeast1", Value: "asia-southeast1",
Help: "Singapore.", Help: "Singapore.",
}, { }, {
Value: "australia-southeast1", Value: "australia-southeast1",
Help: "Sydney.", Help: "Sydney.",
}, {
Value: "europe-north1",
Help: "Finland.",
}, { }, {
Value: "europe-west1", Value: "europe-west1",
Help: "Belgium.", Help: "Belgium.",
}, { }, {
Value: "europe-west2", Value: "europe-west2",
Help: "London.", Help: "London.",
}, {
Value: "europe-west3",
Help: "Frankfurt.",
}, {
Value: "europe-west4",
Help: "Netherlands.",
}, { }, {
Value: "us-central1", Value: "us-central1",
Help: "Iowa.", Help: "Iowa.",
@@ -189,6 +223,9 @@ func init() {
}, { }, {
Value: "us-west1", Value: "us-west1",
Help: "Oregon.", Help: "Oregon.",
}, {
Value: "us-west2",
Help: "California.",
}}, }},
}, { }, {
Name: "storage_class", Name: "storage_class",
@@ -223,6 +260,7 @@ type Options struct {
ServiceAccountCredentials string `config:"service_account_credentials"` ServiceAccountCredentials string `config:"service_account_credentials"`
ObjectACL string `config:"object_acl"` ObjectACL string `config:"object_acl"`
BucketACL string `config:"bucket_acl"` BucketACL string `config:"bucket_acl"`
BucketPolicyOnly bool `config:"bucket_policy_only"`
Location string `config:"location"` Location string `config:"location"`
StorageClass string `config:"storage_class"` StorageClass string `config:"storage_class"`
} }
@@ -238,7 +276,7 @@ type Fs struct {
bucket string // the bucket we are working on bucket string // the bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucket OK bucketOKMu sync.Mutex // mutex to protect bucket OK
bucketOK bool // true if we have created the bucket bucketOK bool // true if we have created the bucket
pacer *pacer.Pacer // To pace the API calls pacer *fs.Pacer // To pace the API calls
} }
// Object describes a storage object // Object describes a storage object
@@ -282,7 +320,7 @@ func (f *Fs) Features() *fs.Features {
return f.features return f.features
} }
// shouldRetry determines whehter a given err rates being retried // shouldRetry determines whether a given err rates being retried
func shouldRetry(err error) (again bool, errOut error) { func shouldRetry(err error) (again bool, errOut error) {
again = false again = false
if err != nil { if err != nil {
@@ -330,7 +368,7 @@ func getServiceAccountClient(credentialsData []byte) (*http.Client, error) {
return oauth2.NewClient(ctxWithSpecialClient, conf.TokenSource(ctxWithSpecialClient)), nil return oauth2.NewClient(ctxWithSpecialClient, conf.TokenSource(ctxWithSpecialClient)), nil
} }
// NewFs contstructs an Fs from the path, bucket:path // NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
var oAuthClient *http.Client var oAuthClient *http.Client
@@ -363,7 +401,11 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
} else { } else {
oAuthClient, _, err = oauthutil.NewClient(name, m, storageConfig) oAuthClient, _, err = oauthutil.NewClient(name, m, storageConfig)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to configure Google Cloud Storage") ctx := context.Background()
oAuthClient, err = google.DefaultClient(ctx, storage.DevstorageFullControlScope)
if err != nil {
return nil, errors.Wrap(err, "failed to configure Google Cloud Storage")
}
} }
} }
@@ -377,7 +419,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
bucket: bucket, bucket: bucket,
root: directory, root: directory,
opt: *opt, opt: *opt,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.GoogleDrivePacer), pacer: fs.NewPacer(pacer.NewGoogleDrive(pacer.MinSleep(minSleep))),
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
ReadMimeType: true, ReadMimeType: true,
@@ -691,8 +733,19 @@ func (f *Fs) Mkdir(dir string) (err error) {
Location: f.opt.Location, Location: f.opt.Location,
StorageClass: f.opt.StorageClass, StorageClass: f.opt.StorageClass,
} }
if f.opt.BucketPolicyOnly {
bucket.IamConfiguration = &storage.BucketIamConfiguration{
BucketPolicyOnly: &storage.BucketIamConfigurationBucketPolicyOnly{
Enabled: true,
},
}
}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Buckets.Insert(f.opt.ProjectNumber, &bucket).PredefinedAcl(f.opt.BucketACL).Do() insertBucket := f.svc.Buckets.Insert(f.opt.ProjectNumber, &bucket)
if !f.opt.BucketPolicyOnly {
insertBucket.PredefinedAcl(f.opt.BucketACL)
}
_, err = insertBucket.Do()
return shouldRetry(err) return shouldRetry(err)
}) })
if err == nil { if err == nil {
@@ -958,7 +1011,11 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
} }
var newObject *storage.Object var newObject *storage.Object
err = o.fs.pacer.CallNoRetry(func() (bool, error) { err = o.fs.pacer.CallNoRetry(func() (bool, error) {
newObject, err = o.fs.svc.Objects.Insert(o.fs.bucket, &object).Media(in, googleapi.ContentType("")).Name(object.Name).PredefinedAcl(o.fs.opt.ObjectACL).Do() insertObject := o.fs.svc.Objects.Insert(o.fs.bucket, &object).Media(in, googleapi.ContentType("")).Name(object.Name)
if !o.fs.opt.BucketPolicyOnly {
insertObject.PredefinedAcl(o.fs.opt.ObjectACL)
}
newObject, err = insertObject.Do()
return shouldRetry(err) return shouldRetry(err)
}) })
if err != nil { if err != nil {
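
With bucket_policy_only set the backend stops sending predefined ACLs and creates buckets with the Bucket Policy Only IAM configuration shown above. A trimmed sketch of that conditional, assuming an already authorised *storage.Service; the project, name and acl values are placeholders:

package gcsexample

import storage "google.golang.org/api/storage/v1"

// makeBucket is a sketch of the conditional insert, not the backend's actual helper.
func makeBucket(svc *storage.Service, project, name, acl string, bucketPolicyOnly bool) error {
	bucket := storage.Bucket{Name: name}
	if bucketPolicyOnly {
		bucket.IamConfiguration = &storage.BucketIamConfiguration{
			BucketPolicyOnly: &storage.BucketIamConfigurationBucketPolicyOnly{Enabled: true},
		}
	}
	insertBucket := svc.Buckets.Insert(project, &bucket)
	if !bucketPolicyOnly {
		// ACLs are ignored on Bucket Policy Only buckets, so only send them otherwise.
		insertBucket.PredefinedAcl(acl)
	}
	_, err := insertBucket.Do()
	return err
}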


@@ -6,6 +6,7 @@ package http
import ( import (
"io" "io"
"mime"
"net/http" "net/http"
"net/url" "net/url"
"path" "path"
@@ -40,7 +41,26 @@ func init() {
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "https://example.com", Value: "https://example.com",
Help: "Connect to example.com", Help: "Connect to example.com",
}, {
Value: "https://user:pass@example.com",
Help: "Connect to example.com using a username and password",
}}, }},
}, {
Name: "no_slash",
Help: `Set this if the site doesn't end directories with /
Use this if your target website does not use / on the end of
directories.
A / on the end of a path is how rclone normally tells the difference
between files and directories. If this flag is set, then rclone will
treat all files with Content-Type: text/html as directories and read
URLs from them rather than downloading them.
Note that this may cause rclone to confuse genuine HTML files with
directories.`,
Default: false,
Advanced: true,
}}, }},
} }
fs.Register(fsi) fs.Register(fsi)
@@ -49,6 +69,7 @@ func init() {
// Options defines the configuration for this backend // Options defines the configuration for this backend
type Options struct { type Options struct {
Endpoint string `config:"url"` Endpoint string `config:"url"`
NoSlash bool `config:"no_slash"`
} }
// Fs stores the interface to the remote HTTP files // Fs stores the interface to the remote HTTP files
@@ -248,7 +269,7 @@ func parseName(base *url.URL, name string) (string, error) {
} }
// calculate the name relative to the base // calculate the name relative to the base
name = u.Path[len(base.Path):] name = u.Path[len(base.Path):]
// musn't be empty // mustn't be empty
if name == "" { if name == "" {
return "", errNameIsEmpty return "", errNameIsEmpty
} }
@@ -267,14 +288,20 @@ func parse(base *url.URL, in io.Reader) (names []string, err error) {
if err != nil { if err != nil {
return nil, err return nil, err
} }
var walk func(*html.Node) var (
walk func(*html.Node)
seen = make(map[string]struct{})
)
walk = func(n *html.Node) { walk = func(n *html.Node) {
if n.Type == html.ElementNode && n.Data == "a" { if n.Type == html.ElementNode && n.Data == "a" {
for _, a := range n.Attr { for _, a := range n.Attr {
if a.Key == "href" { if a.Key == "href" {
name, err := parseName(base, a.Val) name, err := parseName(base, a.Val)
if err == nil { if err == nil {
names = append(names, name) if _, found := seen[name]; !found {
names = append(names, name)
seen[name] = struct{}{}
}
} }
break break
} }
@@ -299,14 +326,16 @@ func (f *Fs) readDir(dir string) (names []string, err error) {
return nil, errors.Errorf("internal error: readDir URL %q didn't end in /", URL) return nil, errors.Errorf("internal error: readDir URL %q didn't end in /", URL)
} }
res, err := f.httpClient.Get(URL) res, err := f.httpClient.Get(URL)
if err == nil && res.StatusCode == http.StatusNotFound { if err == nil {
return nil, fs.ErrorDirNotFound defer fs.CheckClose(res.Body, &err)
if res.StatusCode == http.StatusNotFound {
return nil, fs.ErrorDirNotFound
}
} }
err = statusError(res, err) err = statusError(res, err)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to readDir") return nil, errors.Wrap(err, "failed to readDir")
} }
defer fs.CheckClose(res.Body, &err)
contentType := strings.SplitN(res.Header.Get("Content-Type"), ";", 2)[0] contentType := strings.SplitN(res.Header.Get("Content-Type"), ";", 2)[0]
switch contentType { switch contentType {
@@ -350,11 +379,16 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
fs: f, fs: f,
remote: remote, remote: remote,
} }
if err = file.stat(); err != nil { switch err = file.stat(); err {
case nil:
entries = append(entries, file)
case fs.ErrorNotAFile:
// ...found a directory not a file
dir := fs.NewDir(remote, timeUnset)
entries = append(entries, dir)
default:
fs.Debugf(remote, "skipping because of error: %v", err) fs.Debugf(remote, "skipping because of error: %v", err)
continue
} }
entries = append(entries, file)
} }
} }
return entries, nil return entries, nil
@@ -430,6 +464,16 @@ func (o *Object) stat() error {
o.size = parseInt64(res.Header.Get("Content-Length"), -1) o.size = parseInt64(res.Header.Get("Content-Length"), -1)
o.modTime = t o.modTime = t
o.contentType = res.Header.Get("Content-Type") o.contentType = res.Header.Get("Content-Type")
// If NoSlash is set then check ContentType to see if it is a directory
if o.fs.opt.NoSlash {
mediaType, _, err := mime.ParseMediaType(o.contentType)
if err != nil {
return errors.Wrapf(err, "failed to parse Content-Type: %q", o.contentType)
}
if mediaType == "text/html" {
return fs.ErrorNotAFile
}
}
return nil return nil
} }
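
With no_slash set, stat() now classifies anything served as text/html as a directory instead of a file. A small sketch of that Content-Type check, with made-up header values:

package main

import (
	"fmt"
	"mime"
)

// looksLikeDir reports whether a Content-Type header should be treated as a
// directory listing when the site doesn't end directories with "/".
func looksLikeDir(contentType string) (bool, error) {
	mediaType, _, err := mime.ParseMediaType(contentType)
	if err != nil {
		return false, err
	}
	return mediaType == "text/html", nil
}

func main() {
	for _, ct := range []string{"text/html; charset=utf-8", "application/octet-stream"} {
		isDir, err := looksLikeDir(ct)
		fmt.Println(ct, isDir, err)
	}
}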


@@ -65,7 +65,7 @@ func prepare(t *testing.T) (fs.Fs, func()) {
return f, tidy return f, tidy
} }
func testListRoot(t *testing.T, f fs.Fs) { func testListRoot(t *testing.T, f fs.Fs, noSlash bool) {
entries, err := f.List("") entries, err := f.List("")
require.NoError(t, err) require.NoError(t, err)
@@ -93,15 +93,29 @@ func testListRoot(t *testing.T, f fs.Fs) {
e = entries[3] e = entries[3]
assert.Equal(t, "two.html", e.Remote()) assert.Equal(t, "two.html", e.Remote())
assert.Equal(t, int64(7), e.Size()) if noSlash {
_, ok = e.(*Object) assert.Equal(t, int64(-1), e.Size())
assert.True(t, ok) _, ok = e.(fs.Directory)
assert.True(t, ok)
} else {
assert.Equal(t, int64(41), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
}
} }
func TestListRoot(t *testing.T) { func TestListRoot(t *testing.T) {
f, tidy := prepare(t) f, tidy := prepare(t)
defer tidy() defer tidy()
testListRoot(t, f) testListRoot(t, f, false)
}
func TestListRootNoSlash(t *testing.T) {
f, tidy := prepare(t)
f.(*Fs).opt.NoSlash = true
defer tidy()
testListRoot(t, f, true)
} }
func TestListSubDir(t *testing.T) { func TestListSubDir(t *testing.T) {
@@ -194,7 +208,7 @@ func TestIsAFileRoot(t *testing.T) {
f, err := NewFs(remoteName, "one%.txt", m) f, err := NewFs(remoteName, "one%.txt", m)
assert.Equal(t, err, fs.ErrorIsFile) assert.Equal(t, err, fs.ErrorIsFile)
testListRoot(t, f) testListRoot(t, f, false)
} }
func TestIsAFileSubDir(t *testing.T) { func TestIsAFileSubDir(t *testing.T) {


@@ -1 +1 @@
potato <a href="two.html/file.txt">file.txt</a>


@@ -9,8 +9,10 @@ package hubic
import ( import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"io/ioutil"
"log" "log"
"net/http" "net/http"
"strings"
"time" "time"
"github.com/ncw/rclone/backend/swift" "github.com/ncw/rclone/backend/swift"
@@ -124,7 +126,9 @@ func (f *Fs) getCredentials() (err error) {
} }
defer fs.CheckClose(resp.Body, &err) defer fs.CheckClose(resp.Body, &err)
if resp.StatusCode < 200 || resp.StatusCode > 299 { if resp.StatusCode < 200 || resp.StatusCode > 299 {
return errors.Errorf("failed to get credentials: %s", resp.Status) body, _ := ioutil.ReadAll(resp.Body)
bodyStr := strings.TrimSpace(strings.Replace(string(body), "\n", " ", -1))
return errors.Errorf("failed to get credentials: %s: %s", resp.Status, bodyStr)
} }
decoder := json.NewDecoder(resp.Body) decoder := json.NewDecoder(resp.Body)
var result credentials var result credentials


@@ -40,7 +40,7 @@ const (
maxSleep = 2 * time.Second maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential decayConstant = 2 // bigger for slower decay, exponential
defaultDevice = "Jotta" defaultDevice = "Jotta"
defaultMountpoint = "Sync" defaultMountpoint = "Sync" // nolint
rootURL = "https://www.jottacloud.com/jfs/" rootURL = "https://www.jottacloud.com/jfs/"
apiURL = "https://api.jottacloud.com/files/v1/" apiURL = "https://api.jottacloud.com/files/v1/"
baseURL = "https://www.jottacloud.com/" baseURL = "https://www.jottacloud.com/"
@@ -103,7 +103,7 @@ func init() {
var jsonToken api.TokenJSON var jsonToken api.TokenJSON
resp, err := srv.CallJSON(&opts, nil, &jsonToken) resp, err := srv.CallJSON(&opts, nil, &jsonToken)
if err != nil { if err != nil {
// if 2fa is enabled the first request is expected to fail. we'lls do another request with the 2fa code as an additional http header // if 2fa is enabled the first request is expected to fail. We will do another request with the 2fa code as an additional http header
if resp != nil { if resp != nil {
if resp.Header.Get("X-JottaCloud-OTP") == "required; SMS" { if resp.Header.Get("X-JottaCloud-OTP") == "required; SMS" {
fmt.Printf("This account has 2 factor authentication enabled you will receive a verification code via SMS.\n") fmt.Printf("This account has 2 factor authentication enabled you will receive a verification code via SMS.\n")
@@ -163,7 +163,7 @@ func init() {
Advanced: true, Advanced: true,
}, { }, {
Name: "upload_resume_limit", Name: "upload_resume_limit",
Help: "Files bigger than this can be resumed if the upload failes.", Help: "Files bigger than this can be resumed if the upload fail's.",
Default: fs.SizeSuffix(10 * 1024 * 1024), Default: fs.SizeSuffix(10 * 1024 * 1024),
Advanced: true, Advanced: true,
}}, }},
@@ -190,7 +190,7 @@ type Fs struct {
endpointURL string endpointURL string
srv *rest.Client srv *rest.Client
apiSrv *rest.Client apiSrv *rest.Client
pacer *pacer.Pacer pacer *fs.Pacer
tokenRenewer *oauthutil.Renew // renew the token on expiry tokenRenewer *oauthutil.Renew // renew the token on expiry
} }
@@ -361,7 +361,7 @@ func grantTypeFilter(req *http.Request) {
} }
_ = req.Body.Close() _ = req.Body.Close()
// make the refesh token upper case // make the refresh token upper case
refreshBody = []byte(strings.Replace(string(refreshBody), "grant_type=refresh_token", "grant_type=REFRESH_TOKEN", 1)) refreshBody = []byte(strings.Replace(string(refreshBody), "grant_type=refresh_token", "grant_type=REFRESH_TOKEN", 1))
// set the new ReadCloser (with a dummy Close()) // set the new ReadCloser (with a dummy Close())
@@ -381,6 +381,9 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
rootIsDir := strings.HasSuffix(root, "/") rootIsDir := strings.HasSuffix(root, "/")
root = parsePath(root) root = parsePath(root)
// add jottacloud to the long list of sites that don't follow the oauth spec correctly
oauth2.RegisterBrokenAuthHeaderProvider("https://www.jottacloud.com/")
// the oauth client for the api servers needs // the oauth client for the api servers needs
// a filter to fix the grant_type issues (see above) // a filter to fix the grant_type issues (see above)
baseClient := fshttp.NewClient(fs.Config) baseClient := fshttp.NewClient(fs.Config)
@@ -403,7 +406,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
opt: *opt, opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL), srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
apiSrv: rest.NewClient(oAuthClient).SetRoot(apiURL), apiSrv: rest.NewClient(oAuthClient).SetRoot(apiURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
CaseInsensitive: true, CaseInsensitive: true,
@@ -769,7 +772,7 @@ func (f *Fs) Purge() error {
return f.purgeCheck("", false) return f.purgeCheck("", false)
} }
// copyOrMoves copys or moves directories or files depending on the mthod parameter // copyOrMoves copies or moves directories or files depending on the method parameter
func (f *Fs) copyOrMove(method, src, dest string) (info *api.JottaFile, err error) { func (f *Fs) copyOrMove(method, src, dest string) (info *api.JottaFile, err error) {
opts := rest.Opts{ opts := rest.Opts{
Method: "POST", Method: "POST",
@@ -1006,7 +1009,7 @@ func (o *Object) MimeType() string {
// setMetaData sets the metadata from info // setMetaData sets the metadata from info
func (o *Object) setMetaData(info *api.JottaFile) (err error) { func (o *Object) setMetaData(info *api.JottaFile) (err error) {
o.hasMetaData = true o.hasMetaData = true
o.size = int64(info.Size) o.size = info.Size
o.md5 = info.MD5 o.md5 = info.MD5
o.mimeType = info.MimeType o.mimeType = info.MimeType
o.modTime = time.Time(info.ModifiedAt) o.modTime = time.Time(info.ModifiedAt)
@@ -1080,7 +1083,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
func readMD5(in io.Reader, size, threshold int64) (md5sum string, out io.Reader, cleanup func(), err error) { func readMD5(in io.Reader, size, threshold int64) (md5sum string, out io.Reader, cleanup func(), err error) {
// we need a MD5 // we need a MD5
md5Hasher := md5.New() md5Hasher := md5.New()
// use the teeReader to write to the local file AND caclulate the MD5 while doing so // use the teeReader to write to the local file AND calculate the MD5 while doing so
teeReader := io.TeeReader(in, md5Hasher) teeReader := io.TeeReader(in, md5Hasher)
// nothing to clean up by default // nothing to clean up by default
@@ -1212,7 +1215,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
// finally update the meta data // finally update the meta data
o.hasMetaData = true o.hasMetaData = true
o.size = int64(result.Bytes) o.size = result.Bytes
o.md5 = result.Md5 o.md5 = result.Md5
o.modTime = time.Unix(result.Modified/1000, 0) o.modTime = time.Unix(result.Modified/1000, 0)
} else { } else {


@@ -2,7 +2,7 @@
Translate file names for JottaCloud adapted from OneDrive Translate file names for JottaCloud adapted from OneDrive
The following characters are JottaClous reserved characters, and can't The following characters are JottaCloud reserved characters, and can't
be used in JottaCloud folder and file names. be used in JottaCloud folder and file names.
jottacloud = "/" / "\" / "*" / "<" / ">" / "?" / "!" / "&" / ":" / ";" / "|" / "#" / "%" / """ / "'" / "." / "~" jottacloud = "/" / "\" / "*" / "<" / ">" / "?" / "!" / "&" / ":" / ";" / "|" / "#" / "%" / """ / "'" / "." / "~"

backend/koofr/koofr.go (new file, 589 lines added)

@@ -0,0 +1,589 @@
package koofr
import (
"encoding/base64"
"errors"
"fmt"
"io"
"net/http"
"path"
"strings"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/hash"
httpclient "github.com/koofr/go-httpclient"
koofrclient "github.com/koofr/go-koofrclient"
)
// Register Fs with rclone
func init() {
fs.Register(&fs.RegInfo{
Name: "koofr",
Description: "Koofr",
NewFs: NewFs,
Options: []fs.Option{
{
Name: "endpoint",
Help: "The Koofr API endpoint to use",
Default: "https://app.koofr.net",
Required: true,
Advanced: true,
}, {
Name: "mountid",
Help: "Mount ID of the mount to use. If omitted, the primary mount is used.",
Required: false,
Default: "",
Advanced: true,
}, {
Name: "user",
Help: "Your Koofr user name",
Required: true,
}, {
Name: "password",
Help: "Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)",
IsPassword: true,
Required: true,
},
},
})
}
// Options represent the configuration of the Koofr backend
type Options struct {
Endpoint string `config:"endpoint"`
MountID string `config:"mountid"`
User string `config:"user"`
Password string `config:"password"`
}
// A Fs is a representation of a remote Koofr Fs
type Fs struct {
name string
mountID string
root string
opt Options
features *fs.Features
client *koofrclient.KoofrClient
}
// An Object on the remote Koofr Fs
type Object struct {
fs *Fs
remote string
info koofrclient.FileInfo
}
func base(pth string) string {
rv := path.Base(pth)
if rv == "" || rv == "." {
rv = "/"
}
return rv
}
func dir(pth string) string {
rv := path.Dir(pth)
if rv == "" || rv == "." {
rv = "/"
}
return rv
}
// String returns a string representation of the remote Object
func (o *Object) String() string {
return o.remote
}
// Remote returns the remote path of the Object, relative to Fs root
func (o *Object) Remote() string {
return o.remote
}
// ModTime returns the modification time of the Object
func (o *Object) ModTime() time.Time {
return time.Unix(o.info.Modified/1000, (o.info.Modified%1000)*1000*1000)
}
// Size returns the size of the Object in bytes
func (o *Object) Size() int64 {
return o.info.Size
}
// Fs returns a reference to the Koofr Fs containing the Object
func (o *Object) Fs() fs.Info {
return o.fs
}
// Hash returns an MD5 hash of the Object
func (o *Object) Hash(typ hash.Type) (string, error) {
if typ == hash.MD5 {
return o.info.Hash, nil
}
return "", nil
}
// fullPath returns full path of the remote Object (including Fs root)
func (o *Object) fullPath() string {
return o.fs.fullPath(o.remote)
}
// Storable returns true if the Object is storable
func (o *Object) Storable() bool {
return true
}
// SetModTime is not supported
func (o *Object) SetModTime(mtime time.Time) error {
return nil
}
// Open opens the Object for reading
func (o *Object) Open(options ...fs.OpenOption) (io.ReadCloser, error) {
var sOff, eOff int64 = 0, -1
for _, option := range options {
switch x := option.(type) {
case *fs.SeekOption:
sOff = x.Offset
case *fs.RangeOption:
sOff = x.Start
eOff = x.End
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
if sOff == 0 && eOff < 0 {
return o.fs.client.FilesGet(o.fs.mountID, o.fullPath())
}
if sOff < 0 {
sOff = o.Size() - eOff
eOff = o.Size()
}
if eOff > o.Size() {
eOff = o.Size()
}
span := &koofrclient.FileSpan{
Start: sOff,
End: eOff,
}
return o.fs.client.FilesGetRange(o.fs.mountID, o.fullPath(), span)
}
// Update updates the Object contents
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
putopts := &koofrclient.PutFilter{
ForceOverwrite: true,
NoRename: true,
IgnoreNonExisting: true,
}
fullPath := o.fullPath()
dirPath := dir(fullPath)
name := base(fullPath)
err := o.fs.mkdir(dirPath)
if err != nil {
return err
}
info, err := o.fs.client.FilesPutOptions(o.fs.mountID, dirPath, name, in, putopts)
if err != nil {
return err
}
o.info = *info
return nil
}
// Remove deletes the remote Object
func (o *Object) Remove() error {
return o.fs.client.FilesDelete(o.fs.mountID, o.fullPath())
}
// Name returns the name of the Fs
func (f *Fs) Name() string {
return f.name
}
// Root returns the root path of the Fs
func (f *Fs) Root() string {
return f.root
}
// String returns a string representation of the Fs
func (f *Fs) String() string {
return "koofr:" + f.mountID + ":" + f.root
}
// Features returns the optional features supported by this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Precision denotes that setting modification times is not supported
func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
// Hashes returns the set of hash types provided by the Fs
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
}
// fullPath constructs a full, absolute path from an Fs root relative path
func (f *Fs) fullPath(part string) string {
return path.Join("/", f.root, part)
}
// NewFs constructs a new filesystem given a root path and configuration options
func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
opt := new(Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
}
pass, err := obscure.Reveal(opt.Password)
if err != nil {
return nil, err
}
client := koofrclient.NewKoofrClient(opt.Endpoint, false)
basicAuth := fmt.Sprintf("Basic %s",
base64.StdEncoding.EncodeToString([]byte(opt.User+":"+pass)))
client.HTTPClient.Headers.Set("Authorization", basicAuth)
mounts, err := client.Mounts()
if err != nil {
return nil, err
}
f := &Fs{
name: name,
root: root,
opt: *opt,
client: client,
}
f.features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: false,
BucketBased: false,
CanHaveEmptyDirectories: true,
}).Fill(f)
for _, m := range mounts {
if opt.MountID != "" {
if m.Id == opt.MountID {
f.mountID = m.Id
break
}
} else if m.IsPrimary {
f.mountID = m.Id
break
}
}
if f.mountID == "" {
if opt.MountID == "" {
return nil, errors.New("Failed to find primary mount")
}
return nil, errors.New("Failed to find mount " + opt.MountID)
}
rootFile, err := f.client.FilesInfo(f.mountID, "/"+f.root)
if err == nil && rootFile.Type != "dir" {
f.root = dir(f.root)
err = fs.ErrorIsFile
} else {
err = nil
}
return f, err
}
// List returns a list of items in a directory
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
files, err := f.client.FilesList(f.mountID, f.fullPath(dir))
if err != nil {
return nil, translateErrorsDir(err)
}
entries = make([]fs.DirEntry, len(files))
for i, file := range files {
if file.Type == "dir" {
entries[i] = fs.NewDir(path.Join(dir, file.Name), time.Unix(0, 0))
} else {
entries[i] = &Object{
fs: f,
info: file,
remote: path.Join(dir, file.Name),
}
}
}
return entries, nil
}
// NewObject creates a new remote Object for a given remote path
func (f *Fs) NewObject(remote string) (obj fs.Object, err error) {
info, err := f.client.FilesInfo(f.mountID, f.fullPath(remote))
if err != nil {
return nil, translateErrorsObject(err)
}
if info.Type == "dir" {
return nil, fs.ErrorNotAFile
}
return &Object{
fs: f,
info: info,
remote: remote,
}, nil
}
// Put updates a remote Object
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (obj fs.Object, err error) {
putopts := &koofrclient.PutFilter{
ForceOverwrite: true,
NoRename: true,
IgnoreNonExisting: true,
}
fullPath := f.fullPath(src.Remote())
dirPath := dir(fullPath)
name := base(fullPath)
err = f.mkdir(dirPath)
if err != nil {
return nil, err
}
info, err := f.client.FilesPutOptions(f.mountID, dirPath, name, in, putopts)
if err != nil {
return nil, translateErrorsObject(err)
}
return &Object{
fs: f,
info: *info,
remote: src.Remote(),
}, nil
}
// PutStream updates a remote Object with a stream of unknown size
func (f *Fs) PutStream(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(in, src, options...)
}
// isBadRequest is a predicate which holds true iff the error returned was
// HTTP status 400
func isBadRequest(err error) bool {
switch err := err.(type) {
case httpclient.InvalidStatusError:
if err.Got == http.StatusBadRequest {
return true
}
}
return false
}
// translateErrorsDir translates koofr errors to rclone errors (for a dir
// operation)
func translateErrorsDir(err error) error {
switch err := err.(type) {
case httpclient.InvalidStatusError:
if err.Got == http.StatusNotFound {
return fs.ErrorDirNotFound
}
}
return err
}
// translateErrorsObject translates Koofr errors to rclone errors (for an object operation)
func translateErrorsObject(err error) error {
switch err := err.(type) {
case httpclient.InvalidStatusError:
if err.Got == http.StatusNotFound {
return fs.ErrorObjectNotFound
}
}
return err
}
// mkdir creates a directory at the given remote path. Creates ancestors if
// necessary
func (f *Fs) mkdir(fullPath string) error {
if fullPath == "/" {
return nil
}
info, err := f.client.FilesInfo(f.mountID, fullPath)
if err == nil && info.Type == "dir" {
return nil
}
err = translateErrorsDir(err)
if err != nil && err != fs.ErrorDirNotFound {
return err
}
dirs := strings.Split(fullPath, "/")
parent := "/"
for _, part := range dirs {
if part == "" {
continue
}
info, err = f.client.FilesInfo(f.mountID, path.Join(parent, part))
if err != nil || info.Type != "dir" {
err = translateErrorsDir(err)
if err != nil && err != fs.ErrorDirNotFound {
return err
}
err = f.client.FilesNewFolder(f.mountID, parent, part)
if err != nil && !isBadRequest(err) {
return err
}
}
parent = path.Join(parent, part)
}
return nil
}
// Mkdir creates a directory at the given remote path. Creates ancestors if
// necessary
func (f *Fs) Mkdir(dir string) error {
fullPath := f.fullPath(dir)
return f.mkdir(fullPath)
}
// Rmdir removes an (empty) directory at the given remote path
func (f *Fs) Rmdir(dir string) error {
files, err := f.client.FilesList(f.mountID, f.fullPath(dir))
if err != nil {
return translateErrorsDir(err)
}
if len(files) > 0 {
return fs.ErrorDirectoryNotEmpty
}
err = f.client.FilesDelete(f.mountID, f.fullPath(dir))
if err != nil {
return translateErrorsDir(err)
}
return nil
}
// Copy copies a remote Object to the given path
func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
dstFullPath := f.fullPath(remote)
dstDir := dir(dstFullPath)
err := f.mkdir(dstDir)
if err != nil {
return nil, fs.ErrorCantCopy
}
err = f.client.FilesCopy((src.(*Object)).fs.mountID,
(src.(*Object)).fs.fullPath((src.(*Object)).remote),
f.mountID, dstFullPath)
if err != nil {
return nil, fs.ErrorCantCopy
}
return f.NewObject(remote)
}
// Move moves a remote Object to the given path
func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
srcObj := src.(*Object)
dstFullPath := f.fullPath(remote)
dstDir := dir(dstFullPath)
err := f.mkdir(dstDir)
if err != nil {
return nil, fs.ErrorCantMove
}
err = f.client.FilesMove(srcObj.fs.mountID,
srcObj.fs.fullPath(srcObj.remote), f.mountID, dstFullPath)
if err != nil {
return nil, fs.ErrorCantMove
}
return f.NewObject(remote)
}
// DirMove moves a remote directory to the given path
func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
srcFs := src.(*Fs)
srcFullPath := srcFs.fullPath(srcRemote)
dstFullPath := f.fullPath(dstRemote)
if srcFs.mountID == f.mountID && srcFullPath == dstFullPath {
return fs.ErrorDirExists
}
dstDir := dir(dstFullPath)
err := f.mkdir(dstDir)
if err != nil {
return fs.ErrorCantDirMove
}
err = f.client.FilesMove(srcFs.mountID, srcFullPath, f.mountID, dstFullPath)
if err != nil {
return fs.ErrorCantDirMove
}
return nil
}
// About reports space usage (with 1 MB precision)
func (f *Fs) About() (*fs.Usage, error) {
mount, err := f.client.MountsDetails(f.mountID)
if err != nil {
return nil, err
}
return &fs.Usage{
Total: fs.NewUsageValue(mount.SpaceTotal * 1024 * 1024),
Used: fs.NewUsageValue(mount.SpaceUsed * 1024 * 1024),
Trashed: nil,
Other: nil,
Free: fs.NewUsageValue((mount.SpaceTotal - mount.SpaceUsed) * 1024 * 1024),
Objects: nil,
}, nil
}
// Purge purges the complete Fs
func (f *Fs) Purge() error {
err := translateErrorsDir(f.client.FilesDelete(f.mountID, f.fullPath("")))
return err
}
// linkCreate is a Koofr API request for creating a public link
type linkCreate struct {
Path string `json:"path"`
}
// link is a Koofr API response to creating a public link
type link struct {
ID string `json:"id"`
Name string `json:"name"`
Path string `json:"path"`
Counter int64 `json:"counter"`
URL string `json:"url"`
ShortURL string `json:"shortUrl"`
Hash string `json:"hash"`
Host string `json:"host"`
HasPassword bool `json:"hasPassword"`
Password string `json:"password"`
ValidFrom int64 `json:"validFrom"`
ValidTo int64 `json:"validTo"`
PasswordRequired bool `json:"passwordRequired"`
}
// createLink makes a Koofr API call to create a public link
func createLink(c *koofrclient.KoofrClient, mountID string, path string) (*link, error) {
linkCreate := linkCreate{
Path: path,
}
linkData := link{}
request := httpclient.RequestData{
Method: "POST",
Path: "/api/v2/mounts/" + mountID + "/links",
ExpectedStatus: []int{http.StatusOK, http.StatusCreated},
ReqEncoding: httpclient.EncodingJSON,
ReqValue: linkCreate,
RespEncoding: httpclient.EncodingJSON,
RespValue: &linkData,
}
_, err := c.Request(&request)
if err != nil {
return nil, err
}
return &linkData, nil
}
// PublicLink creates a public link to the remote path
func (f *Fs) PublicLink(remote string) (string, error) {
linkData, err := createLink(f.client, f.mountID, f.fullPath(remote))
if err != nil {
return "", translateErrorsDir(err)
}
return linkData.ShortURL, nil
}


@@ -0,0 +1,14 @@
package koofr_test
import (
"testing"
"github.com/ncw/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestKoofr:",
})
}


@@ -16,7 +16,7 @@ func (f *Fs) About() (*fs.Usage, error) {
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to read disk usage") return nil, errors.Wrap(err, "failed to read disk usage")
} }
bs := int64(s.Bsize) bs := int64(s.Bsize) // nolint: unconvert
usage := &fs.Usage{ usage := &fs.Usage{
Total: fs.NewUsageValue(bs * int64(s.Blocks)), // quota of bytes that can be used Total: fs.NewUsageValue(bs * int64(s.Blocks)), // quota of bytes that can be used
Used: fs.NewUsageValue(bs * int64(s.Blocks-s.Bfree)), // bytes in use Used: fs.NewUsageValue(bs * int64(s.Blocks-s.Bfree)), // bytes in use


@@ -225,10 +225,10 @@ func (f *Fs) Features() *fs.Features {
return f.features return f.features
} }
// caseInsenstive returns whether the remote is case insensitive or not // caseInsensitive returns whether the remote is case insensitive or not
func (f *Fs) caseInsensitive() bool { func (f *Fs) caseInsensitive() bool {
// FIXME not entirely accurate since you can have case // FIXME not entirely accurate since you can have case
// sensitive Fses on darwin and case insenstive Fses on linux. // sensitive Fses on darwin and case insensitive Fses on linux.
// Should probably check but that would involve creating a // Should probably check but that would involve creating a
// file in the remote to be most accurate which probably isn't // file in the remote to be most accurate which probably isn't
// desirable. // desirable.
@@ -288,7 +288,7 @@ func (f *Fs) newObjectWithInfo(remote, dstPath string, info os.FileInfo) (fs.Obj
} }
return nil, err return nil, err
} }
// Handle the odd case, that a symlink was specfied by name without the link suffix // Handle the odd case, that a symlink was specified by name without the link suffix
if o.fs.opt.TranslateSymlinks && o.mode&os.ModeSymlink != 0 && !o.translatedLink { if o.fs.opt.TranslateSymlinks && o.mode&os.ModeSymlink != 0 && !o.translatedLink {
return nil, fs.ErrorObjectNotFound return nil, fs.ErrorObjectNotFound
} }
@@ -958,7 +958,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
if o.translatedLink { if o.translatedLink {
if err == nil { if err == nil {
// Remove any current symlink or file, if one exsits // Remove any current symlink or file, if one exists
if _, err := os.Lstat(o.path); err == nil { if _, err := os.Lstat(o.path); err == nil {
if removeErr := os.Remove(o.path); removeErr != nil { if removeErr := os.Remove(o.path); removeErr != nil {
fs.Errorf(o, "Failed to remove previous file: %v", removeErr) fs.Errorf(o, "Failed to remove previous file: %v", removeErr)


@@ -22,5 +22,5 @@ func readDevice(fi os.FileInfo, oneFileSystem bool) uint64 {
fs.Debugf(fi.Name(), "Type assertion fi.Sys().(*syscall.Stat_t) failed from: %#v", fi.Sys()) fs.Debugf(fi.Name(), "Type assertion fi.Sys().(*syscall.Stat_t) failed from: %#v", fi.Sys())
return devUnset return devUnset
} }
return uint64(statT.Dev) return uint64(statT.Dev) // nolint: unconvert
} }


@@ -98,7 +98,7 @@ type Fs struct {
opt Options // parsed config options opt Options // parsed config options
features *fs.Features // optional features features *fs.Features // optional features
srv *mega.Mega // the connection to the server srv *mega.Mega // the connection to the server
pacer *pacer.Pacer // pacer for API calls pacer *fs.Pacer // pacer for API calls
rootNodeMu sync.Mutex // mutex for _rootNode rootNodeMu sync.Mutex // mutex for _rootNode
_rootNode *mega.Node // root node - call findRoot to use this _rootNode *mega.Node // root node - call findRoot to use this
mkdirMu sync.Mutex // used to serialize calls to mkdir / rmdir mkdirMu sync.Mutex // used to serialize calls to mkdir / rmdir
@@ -217,7 +217,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root: root, root: root,
opt: *opt, opt: *opt,
srv: srv, srv: srv,
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
DuplicateFiles: true, DuplicateFiles: true,
@@ -497,7 +497,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
// Creates from the parameters passed in a half finished Object which // Creates from the parameters passed in a half finished Object which
// must have setMetaData called on it // must have setMetaData called on it
// //
// Returns the dirNode, obect, leaf and error // Returns the dirNode, object, leaf and error
// //
// Used to create new objects // Used to create new objects
func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Object, dirNode *mega.Node, leaf string, err error) { func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Object, dirNode *mega.Node, leaf string, err error) {
@@ -523,10 +523,10 @@ func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Obje
// This will create a duplicate if we upload a new file without // This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that. // checking to see if there is one already - use Put() for that.
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
exisitingObj, err := f.newObjectWithInfo(src.Remote(), nil) existingObj, err := f.newObjectWithInfo(src.Remote(), nil)
switch err { switch err {
case nil: case nil:
return exisitingObj, exisitingObj.Update(in, src, options...) return existingObj, existingObj.Update(in, src, options...)
case fs.ErrorObjectNotFound: case fs.ErrorObjectNotFound:
// Not found so create it // Not found so create it
return f.PutUnchecked(in, src) return f.PutUnchecked(in, src)
@@ -847,14 +847,14 @@ func (f *Fs) MergeDirs(dirs []fs.Directory) error {
return shouldRetry(err) return shouldRetry(err)
}) })
if err != nil { if err != nil {
return errors.Wrapf(err, "MergDirs move failed on %q in %v", info.GetName(), srcDir) return errors.Wrapf(err, "MergeDirs move failed on %q in %v", info.GetName(), srcDir)
} }
} }
// rmdir (into trash) the now empty source directory // rmdir (into trash) the now empty source directory
fs.Infof(srcDir, "removing empty directory") fs.Infof(srcDir, "removing empty directory")
err = f.deleteNode(srcDirNode) err = f.deleteNode(srcDirNode)
if err != nil { if err != nil {
return errors.Wrapf(err, "MergDirs move failed to rmdir %q", srcDir) return errors.Wrapf(err, "MergeDirs move failed to rmdir %q", srcDir)
} }
} }
return nil return nil
@@ -1076,6 +1076,9 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
// The new object may have been created if an error is returned // The new object may have been created if an error is returned
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
size := src.Size() size := src.Size()
if size < 0 {
return errors.New("mega backend can't upload a file of unknown length")
}
//modTime := src.ModTime() //modTime := src.ModTime()
remote := o.Remote() remote := o.Remote()
@@ -1126,7 +1129,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
return errors.Wrap(err, "failed to finish upload") return errors.Wrap(err, "failed to finish upload")
} }
// If the upload succeded and the original object existed, then delete it // If the upload succeeded and the original object existed, then delete it
if o.info != nil { if o.info != nil {
err = o.fs.deleteNode(o.info) err = o.fs.deleteNode(o.info)
if err != nil { if err != nil {


@@ -25,7 +25,7 @@ type Error struct {
} `json:"error"` } `json:"error"`
} }
// Error returns a string for the error and statistifes the error interface // Error returns a string for the error and satisfies the error interface
func (e *Error) Error() string { func (e *Error) Error() string {
out := e.ErrorInfo.Code out := e.ErrorInfo.Code
if e.ErrorInfo.InnerError.Code != "" { if e.ErrorInfo.InnerError.Code != "" {
@@ -35,7 +35,7 @@ func (e *Error) Error() string {
return out return out
} }
// Check Error statisfies the error interface // Check Error satisfies the error interface
var _ error = (*Error)(nil) var _ error = (*Error)(nil)
// Identity represents an identity of an actor. For example, and actor // Identity represents an identity of an actor. For example, and actor
@@ -295,9 +295,9 @@ func (i *Item) GetID() string {
return i.ID return i.ID
} }
// GetDriveID returns a normalized ParentReferance of the item // GetDriveID returns a normalized ParentReference of the item
func (i *Item) GetDriveID() string { func (i *Item) GetDriveID() string {
return i.GetParentReferance().DriveID return i.GetParentReference().DriveID
} }
// GetName returns a normalized Name of the item // GetName returns a normalized Name of the item
@@ -398,8 +398,8 @@ func (i *Item) GetLastModifiedDateTime() Timestamp {
return i.LastModifiedDateTime return i.LastModifiedDateTime
} }
// GetParentReferance returns a normalized ParentReferance of the item // GetParentReference returns a normalized ParentReference of the item
func (i *Item) GetParentReferance() *ItemReference { func (i *Item) GetParentReference() *ItemReference {
if i.IsRemote() && i.ParentReference == nil { if i.IsRemote() && i.ParentReference == nil {
return i.RemoteItem.ParentReference return i.RemoteItem.ParentReference
} }


@@ -227,7 +227,7 @@ that the chunks will be buffered into memory.`,
Advanced: true, Advanced: true,
}, { }, {
Name: "drive_type", Name: "drive_type",
Help: "The type of the drive ( personal | business | documentLibrary )", Help: "The type of the drive ( " + driveTypePersonal + " | " + driveTypeBusiness + " | " + driveTypeSharepoint + " )",
Default: "", Default: "",
Advanced: true, Advanced: true,
}, { }, {
@@ -261,7 +261,7 @@ type Fs struct {
features *fs.Features // optional features features *fs.Features // optional features
srv *rest.Client // the connection to the one drive server srv *rest.Client // the connection to the one drive server
dirCache *dircache.DirCache // Map of directory path to directory id dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // pacer for API calls pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry tokenRenewer *oauthutil.Renew // renew the token on expiry
driveID string // ID to use for querying Microsoft Graph driveID string // ID to use for querying Microsoft Graph
driveType string // https://developer.microsoft.com/en-us/graph/docs/api-reference/v1.0/resources/drive driveType string // https://developer.microsoft.com/en-us/graph/docs/api-reference/v1.0/resources/drive
@@ -324,19 +324,24 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) { func shouldRetry(resp *http.Response, err error) (bool, error) {
authRety := false authRetry := false
if resp != nil && resp.StatusCode == 401 && len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 { if resp != nil && resp.StatusCode == 401 && len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
authRety = true authRetry = true
fs.Debugf(nil, "Should retry: %v", err) fs.Debugf(nil, "Should retry: %v", err)
} }
return authRety || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err return authRetry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
} }
// readMetaDataForPathRelativeToID reads the metadata for a path relative to an item that is addressed by its normalized ID. // readMetaDataForPathRelativeToID reads the metadata for a path relative to an item that is addressed by its normalized ID.
// if `relPath` == "", it reads the metadata for the item with that ID. // if `relPath` == "", it reads the metadata for the item with that ID.
//
// We address items using the pattern `drives/driveID/items/itemID:/relativePath`
// instead of simply using `drives/driveID/root:/itemPath` because it works for
// "shared with me" folders in OneDrive Personal (See #2536, #2778)
// This path pattern comes from https://github.com/OneDrive/onedrive-api-docs/issues/908#issuecomment-417488480
func (f *Fs) readMetaDataForPathRelativeToID(normalizedID string, relPath string) (info *api.Item, resp *http.Response, err error) { func (f *Fs) readMetaDataForPathRelativeToID(normalizedID string, relPath string) (info *api.Item, resp *http.Response, err error) {
opts := newOptsCall(normalizedID, "GET", ":/"+rest.URLPathEscape(replaceReservedChars(relPath))) opts := newOptsCall(normalizedID, "GET", ":/"+withTrailingColon(rest.URLPathEscape(replaceReservedChars(relPath))))
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &info) resp, err = f.srv.CallJSON(&opts, nil, &info)
return shouldRetry(resp, err) return shouldRetry(resp, err)
@@ -475,7 +480,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
driveID: opt.DriveID, driveID: opt.DriveID,
driveType: opt.DriveType, driveType: opt.DriveType,
srv: rest.NewClient(oAuthClient).SetRoot(graphURL + "/drives/" + opt.DriveID), srv: rest.NewClient(oAuthClient).SetRoot(graphURL + "/drives/" + opt.DriveID),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
CaseInsensitive: true, CaseInsensitive: true,
@@ -703,9 +708,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
id := info.GetID() id := info.GetID()
f.dirCache.Put(remote, id) f.dirCache.Put(remote, id)
d := fs.NewDir(remote, time.Time(info.GetLastModifiedDateTime())).SetID(id) d := fs.NewDir(remote, time.Time(info.GetLastModifiedDateTime())).SetID(id)
if folder != nil { d.SetItems(folder.ChildCount)
d.SetItems(folder.ChildCount)
}
entries = append(entries, d) entries = append(entries, d)
} else { } else {
o, err := f.newObjectWithInfo(remote, info) o, err := f.newObjectWithInfo(remote, info)
@@ -819,9 +822,6 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
return err return err
} }
f.dirCache.FlushDir(dir) f.dirCache.FlushDir(dir)
if err != nil {
return err
}
return nil return nil
} }
@@ -1340,12 +1340,12 @@ func (o *Object) setModTime(modTime time.Time) (*api.Item, error) {
opts = rest.Opts{ opts = rest.Opts{
Method: "PATCH", Method: "PATCH",
RootURL: rootURL, RootURL: rootURL,
Path: "/" + drive + "/items/" + trueDirID + ":/" + rest.URLPathEscape(leaf), Path: "/" + drive + "/items/" + trueDirID + ":/" + withTrailingColon(rest.URLPathEscape(leaf)),
} }
} else { } else {
opts = rest.Opts{ opts = rest.Opts{
Method: "PATCH", Method: "PATCH",
Path: "/root:/" + rest.URLPathEscape(o.srvPath()), Path: "/root:/" + withTrailingColon(rest.URLPathEscape(o.srvPath())),
} }
} }
update := api.SetFileSystemInfo{ update := api.SetFileSystemInfo{
@@ -1488,7 +1488,7 @@ func (o *Object) cancelUploadSession(url string) (err error) {
// uploadMultipart uploads a file using multipart upload // uploadMultipart uploads a file using multipart upload
func (o *Object) uploadMultipart(in io.Reader, size int64, modTime time.Time) (info *api.Item, err error) { func (o *Object) uploadMultipart(in io.Reader, size int64, modTime time.Time) (info *api.Item, err error) {
if size <= 0 { if size <= 0 {
panic("size passed into uploadMultipart must be > 0") return nil, errors.New("unknown-sized upload not supported")
} }
// Create upload session // Create upload session
@@ -1535,7 +1535,7 @@ func (o *Object) uploadMultipart(in io.Reader, size int64, modTime time.Time) (i
// This function will set modtime after uploading, which will create a new version for the remote file // This function will set modtime after uploading, which will create a new version for the remote file
func (o *Object) uploadSinglepart(in io.Reader, size int64, modTime time.Time) (info *api.Item, err error) { func (o *Object) uploadSinglepart(in io.Reader, size int64, modTime time.Time) (info *api.Item, err error) {
if size < 0 || size > int64(fs.SizeSuffix(4*1024*1024)) { if size < 0 || size > int64(fs.SizeSuffix(4*1024*1024)) {
panic("size passed into uploadSinglepart must be >= 0 and <= 4MiB") return nil, errors.New("size passed into uploadSinglepart must be >= 0 and <= 4MiB")
} }
fs.Debugf(o, "Starting singlepart upload") fs.Debugf(o, "Starting singlepart upload")
@@ -1602,7 +1602,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
} else if size == 0 { } else if size == 0 {
info, err = o.uploadSinglepart(in, size, modTime) info, err = o.uploadSinglepart(in, size, modTime)
} else { } else {
panic("src file size must be >= 0") return errors.New("unknown-sized upload not supported")
} }
if err != nil { if err != nil {
return err return err
@@ -1668,6 +1668,21 @@ func getRelativePathInsideBase(base, target string) (string, bool) {
return "", false return "", false
} }
// Adds a ":" at the end of `remotePath` in a proper manner.
// If `remotePath` already ends with "/", change it to ":/"
// If `remotePath` is "", return "".
// A workaround for #2720 and #3039
func withTrailingColon(remotePath string) string {
if remotePath == "" {
return ""
}
if strings.HasSuffix(remotePath, "/") {
return remotePath[:len(remotePath)-1] + ":/"
}
return remotePath + ":"
}
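To make the behaviour of withTrailingColon concrete, here is a hedged, test-style sketch (same package, hypothetical inputs; assumes the standard testing import):

func TestWithTrailingColonSketch(t *testing.T) {
	cases := map[string]string{
		"":                "",
		"Documents/a.txt": "Documents/a.txt:",
		"Documents/":      "Documents:/",
	}
	for in, want := range cases {
		if got := withTrailingColon(in); got != want {
			t.Errorf("withTrailingColon(%q) = %q, want %q", in, got, want)
		}
	}
}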
// Check the interfaces are satisfied // Check the interfaces are satisfied
var ( var (
_ fs.Fs = (*Fs)(nil) _ fs.Fs = (*Fs)(nil)


@@ -65,7 +65,7 @@ type Fs struct {
opt Options // parsed options opt Options // parsed options
features *fs.Features // optional features features *fs.Features // optional features
srv *rest.Client // the connection to the server srv *rest.Client // the connection to the server
pacer *pacer.Pacer // To pace and retry the API calls pacer *fs.Pacer // To pace and retry the API calls
session UserSessionInfo // contains the session data session UserSessionInfo // contains the session data
dirCache *dircache.DirCache // Map of directory path to directory id dirCache *dircache.DirCache // Map of directory path to directory id
} }
@@ -119,7 +119,7 @@ func (f *Fs) DirCacheFlush() {
f.dirCache.ResetRoot() f.dirCache.ResetRoot()
} }
// NewFs contstructs an Fs from the path, bucket:path // NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct // Parse config into Options struct
opt := new(Options) opt := new(Options)
@@ -144,7 +144,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root: root, root: root,
opt: *opt, opt: *opt,
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler), srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
} }
f.dirCache = dircache.New(root, "0", f) f.dirCache = dircache.New(root, "0", f)
@@ -287,9 +287,6 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
return err return err
} }
f.dirCache.FlushDir(dir) f.dirCache.FlushDir(dir)
if err != nil {
return err
}
return nil return nil
} }
@@ -785,7 +782,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
remote := path.Join(dir, folder.Name) remote := path.Join(dir, folder.Name)
// cache the directory ID for later lookups // cache the directory ID for later lookups
f.dirCache.Put(remote, folder.FolderID) f.dirCache.Put(remote, folder.FolderID)
d := fs.NewDir(remote, time.Unix(int64(folder.DateModified), 0)).SetID(folder.FolderID) d := fs.NewDir(remote, time.Unix(folder.DateModified, 0)).SetID(folder.FolderID)
d.SetItems(int64(folder.ChildFolders)) d.SetItems(int64(folder.ChildFolders))
entries = append(entries, d) entries = append(entries, d)
} }


@@ -13,7 +13,7 @@ type Error struct {
} `json:"error"` } `json:"error"`
} }
// Error statisfies the error interface // Error satisfies the error interface
func (e *Error) Error() string { func (e *Error) Error() string {
return fmt.Sprintf("%s (Error %d)", e.Info.Message, e.Info.Code) return fmt.Sprintf("%s (Error %d)", e.Info.Message, e.Info.Code)
} }


@@ -41,7 +41,7 @@ type Error struct {
ErrorString string `json:"error"` ErrorString string `json:"error"`
} }
// Error returns a string for the error and statistifes the error interface // Error returns a string for the error and satisfies the error interface
func (e *Error) Error() string { func (e *Error) Error() string {
return fmt.Sprintf("pcloud error: %s (%d)", e.ErrorString, e.Result) return fmt.Sprintf("pcloud error: %s (%d)", e.ErrorString, e.Result)
} }
@@ -58,7 +58,7 @@ func (e *Error) Update(err error) error {
return e return e
} }
// Check Error statisfies the error interface // Check Error satisfies the error interface
var _ error = (*Error)(nil) var _ error = (*Error)(nil)
// Item describes a folder or a file as returned by Get Folder Items and others // Item describes a folder or a file as returned by Get Folder Items and others
@@ -161,7 +161,6 @@ type UserInfo struct {
PublicLinkQuota int64 `json:"publiclinkquota"` PublicLinkQuota int64 `json:"publiclinkquota"`
Email string `json:"email"` Email string `json:"email"`
UserID int `json:"userid"` UserID int `json:"userid"`
Result int `json:"result"`
Quota int64 `json:"quota"` Quota int64 `json:"quota"`
TrashRevretentionDays int `json:"trashrevretentiondays"` TrashRevretentionDays int `json:"trashrevretentiondays"`
Premium bool `json:"premium"` Premium bool `json:"premium"`


@@ -95,7 +95,7 @@ type Fs struct {
features *fs.Features // optional features features *fs.Features // optional features
srv *rest.Client // the connection to the server srv *rest.Client // the connection to the server
dirCache *dircache.DirCache // Map of directory path to directory id dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // pacer for API calls pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry tokenRenewer *oauthutil.Renew // renew the token on expiry
} }
@@ -254,7 +254,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root: root, root: root,
opt: *opt, opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL), srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
CaseInsensitive: false, CaseInsensitive: false,
@@ -385,7 +385,7 @@ func fileIDtoNumber(fileID string) string {
if len(fileID) > 0 && fileID[0] == 'f' { if len(fileID) > 0 && fileID[0] == 'f' {
return fileID[1:] return fileID[1:]
} }
fs.Debugf(nil, "Invalid filee id %q", fileID) fs.Debugf(nil, "Invalid file id %q", fileID)
return fileID return fileID
} }


@@ -449,7 +449,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
} }
_, err = bucketInit.PutObject(key, &req) _, err = bucketInit.PutObject(key, &req)
if err != nil { if err != nil {
fs.Debugf(f, "Copied Faild, API Error: %v", err) fs.Debugf(f, "Copy Failed, API Error: %v", err)
return nil, err return nil, err
} }
return f.NewObject(remote) return f.NewObject(remote)
@@ -756,7 +756,7 @@ func (f *Fs) Mkdir(dir string) error {
} }
switch *statistics.Status { switch *statistics.Status {
case "deleted": case "deleted":
fs.Debugf(f, "Wiat for qingstor sync bucket status, retries: %d", retries) fs.Debugf(f, "Wait for qingstor sync bucket status, retries: %d", retries)
time.Sleep(time.Second * 1) time.Sleep(time.Second * 1)
retries++ retries++
continue continue
@@ -875,7 +875,7 @@ func (o *Object) readMetaData() (err error) {
fs.Debugf(o, "Read metadata of key: %s", key) fs.Debugf(o, "Read metadata of key: %s", key)
resp, err := bucketInit.HeadObject(key, &qs.HeadObjectInput{}) resp, err := bucketInit.HeadObject(key, &qs.HeadObjectInput{})
if err != nil { if err != nil {
fs.Debugf(o, "Read metadata faild, API Error: %v", err) fs.Debugf(o, "Read metadata failed, API Error: %v", err)
if e, ok := err.(*qsErr.QingStorError); ok { if e, ok := err.(*qsErr.QingStorError); ok {
if e.StatusCode == http.StatusNotFound { if e.StatusCode == http.StatusNotFound {
return fs.ErrorObjectNotFound return fs.ErrorObjectNotFound


@@ -143,7 +143,7 @@ func (u *uploader) init() {
// Try to adjust partSize if it is too small and account for // Try to adjust partSize if it is too small and account for
// integer division truncation. // integer division truncation.
if u.totalSize/u.cfg.partSize >= int64(u.cfg.partSize) { if u.totalSize/u.cfg.partSize >= u.cfg.partSize {
// Add one to the part size to account for remainders // Add one to the part size to account for remainders
// during the size calculation. e.g odd number of bytes. // during the size calculation. e.g odd number of bytes.
u.cfg.partSize = (u.totalSize / int64(u.cfg.maxUploadParts)) + 1 u.cfg.partSize = (u.totalSize / int64(u.cfg.maxUploadParts)) + 1
@@ -163,7 +163,7 @@ func (u *uploader) singlePartUpload(buf io.Reader, size int64) error {
_, err := bucketInit.PutObject(u.cfg.key, &req) _, err := bucketInit.PutObject(u.cfg.key, &req)
if err == nil { if err == nil {
fs.Debugf(u, "Upload single objcet finished") fs.Debugf(u, "Upload single object finished")
} }
return err return err
} }


@@ -131,6 +131,9 @@ func init() {
}, { }, {
Value: "eu-west-2", Value: "eu-west-2",
Help: "EU (London) Region\nNeeds location constraint eu-west-2.", Help: "EU (London) Region\nNeeds location constraint eu-west-2.",
}, {
Value: "eu-north-1",
Help: "EU (Stockholm) Region\nNeeds location constraint eu-north-1.",
}, { }, {
Value: "eu-central-1", Value: "eu-central-1",
Help: "EU (Frankfurt) Region\nNeeds location constraint eu-central-1.", Help: "EU (Frankfurt) Region\nNeeds location constraint eu-central-1.",
@@ -234,10 +237,10 @@ func init() {
Help: "EU Cross Region Amsterdam Private Endpoint", Help: "EU Cross Region Amsterdam Private Endpoint",
}, { }, {
Value: "s3.eu-gb.objectstorage.softlayer.net", Value: "s3.eu-gb.objectstorage.softlayer.net",
Help: "Great Britan Endpoint", Help: "Great Britain Endpoint",
}, { }, {
Value: "s3.eu-gb.objectstorage.service.networklayer.com", Value: "s3.eu-gb.objectstorage.service.networklayer.com",
Help: "Great Britan Private Endpoint", Help: "Great Britain Private Endpoint",
}, { }, {
Value: "s3.ap-geo.objectstorage.softlayer.net", Value: "s3.ap-geo.objectstorage.softlayer.net",
Help: "APAC Cross Regional Endpoint", Help: "APAC Cross Regional Endpoint",
@@ -343,7 +346,7 @@ func init() {
Help: "Endpoint for S3 API.\nRequired when using an S3 clone.", Help: "Endpoint for S3 API.\nRequired when using an S3 clone.",
Provider: "!AWS,IBMCOS,Alibaba", Provider: "!AWS,IBMCOS,Alibaba",
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "objects-us-west-1.dream.io", Value: "objects-us-east-1.dream.io",
Help: "Dream Objects endpoint", Help: "Dream Objects endpoint",
Provider: "Dreamhost", Provider: "Dreamhost",
}, { }, {
@@ -392,6 +395,9 @@ func init() {
}, { }, {
Value: "eu-west-2", Value: "eu-west-2",
Help: "EU (London) Region.", Help: "EU (London) Region.",
}, {
Value: "eu-north-1",
Help: "EU (Stockholm) Region.",
}, { }, {
Value: "EU", Value: "EU",
Help: "EU Region.", Help: "EU Region.",
@@ -444,7 +450,7 @@ func init() {
Help: "US East Region Flex", Help: "US East Region Flex",
}, { }, {
Value: "us-south-standard", Value: "us-south-standard",
Help: "US Sout hRegion Standard", Help: "US South Region Standard",
}, { }, {
Value: "us-south-vault", Value: "us-south-vault",
Help: "US South Region Vault", Help: "US South Region Vault",
@@ -468,16 +474,16 @@ func init() {
Help: "EU Cross Region Flex", Help: "EU Cross Region Flex",
}, { }, {
Value: "eu-gb-standard", Value: "eu-gb-standard",
Help: "Great Britan Standard", Help: "Great Britain Standard",
}, { }, {
Value: "eu-gb-vault", Value: "eu-gb-vault",
Help: "Great Britan Vault", Help: "Great Britain Vault",
}, { }, {
Value: "eu-gb-cold", Value: "eu-gb-cold",
Help: "Great Britan Cold", Help: "Great Britain Cold",
}, { }, {
Value: "eu-gb-flex", Value: "eu-gb-flex",
Help: "Great Britan Flex", Help: "Great Britain Flex",
}, { }, {
Value: "ap-standard", Value: "ap-standard",
Help: "APAC Standard", Help: "APAC Standard",
@@ -776,7 +782,7 @@ type Fs struct {
bucketOKMu sync.Mutex // mutex to protect bucket OK bucketOKMu sync.Mutex // mutex to protect bucket OK
bucketOK bool // true if we have created the bucket bucketOK bool // true if we have created the bucket
bucketDeleted bool // true if we have deleted the bucket bucketDeleted bool // true if we have deleted the bucket
pacer *pacer.Pacer // To pace the API calls pacer *fs.Pacer // To pace the API calls
srv *http.Client // a plain http client srv *http.Client // a plain http client
} }
@@ -836,7 +842,7 @@ var retryErrorCodes = []int{
func (f *Fs) shouldRetry(err error) (bool, error) { func (f *Fs) shouldRetry(err error) (bool, error) {
// If this is an awserr object, try and extract more useful information to determine if we should retry // If this is an awserr object, try and extract more useful information to determine if we should retry
if awsError, ok := err.(awserr.Error); ok { if awsError, ok := err.(awserr.Error); ok {
// Simple case, check the original embedded error in case it's generically retriable // Simple case, check the original embedded error in case it's generically retryable
if fserrors.ShouldRetry(awsError.OrigErr()) { if fserrors.ShouldRetry(awsError.OrigErr()) {
return true, err return true, err
} }
@@ -1049,7 +1055,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
c: c, c: c,
bucket: bucket, bucket: bucket,
ses: ses, ses: ses,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.S3Pacer), pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep))),
srv: fshttp.NewClient(fs.Config), srv: fshttp.NewClient(fs.Config),
} }
f.features = (&fs.Features{ f.features = (&fs.Features{


@@ -427,6 +427,12 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
sshConfig.Auth = append(sshConfig.Auth, ssh.Password(clearpass)) sshConfig.Auth = append(sshConfig.Auth, ssh.Password(clearpass))
} }
return NewFsWithConnection(name, root, opt, sshConfig)
}
// NewFsWithConnection creates a new Fs object from the name and root and a ssh.ClientConfig. It connects to
// the host specified in the ssh.ClientConfig
func NewFsWithConnection(name string, root string, opt *Options, sshConfig *ssh.ClientConfig) (fs.Fs, error) {
f := &Fs{ f := &Fs{
name: name, name: name,
root: root, root: root,


@@ -2,6 +2,7 @@ package swift
import ( import (
"net/http" "net/http"
"time"
"github.com/ncw/swift" "github.com/ncw/swift"
) )
@@ -65,6 +66,14 @@ func (a *auth) Token() string {
return a.parentAuth.Token() return a.parentAuth.Token()
} }
// Expires returns the time the token expires if known or Zero if not.
func (a *auth) Expires() (t time.Time) {
if do, ok := a.parentAuth.(swift.Expireser); ok {
t = do.Expires()
}
return t
}
// The CDN url if available // The CDN url if available
func (a *auth) CdnUrl() string { // nolint func (a *auth) CdnUrl() string { // nolint
if a.parentAuth == nil { if a.parentAuth == nil {
@@ -74,4 +83,7 @@ func (a *auth) CdnUrl() string { // nolint
} }
// Check the interfaces are satisfied // Check the interfaces are satisfied
var _ swift.Authenticator = (*auth)(nil) var (
_ swift.Authenticator = (*auth)(nil)
_ swift.Expireser = (*auth)(nil)
)
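The Expires shim added above is the usual Go optional-interface upgrade: the wrapper only reports an expiry if the wrapped Authenticator happens to implement swift.Expireser. A minimal hedged sketch of the same pattern with hypothetical local types:

package main

import (
	"fmt"
	"time"
)

type expireser interface{ Expires() time.Time }

type tokenAuth struct{ expiry time.Time }

func (t tokenAuth) Expires() time.Time { return t.expiry }

// expiresOf returns the expiry if v supports it, or the zero time otherwise.
func expiresOf(v interface{}) (t time.Time) {
	if do, ok := v.(expireser); ok {
		t = do.Expires()
	}
	return t
}

func main() {
	fmt.Println(expiresOf(tokenAuth{expiry: time.Date(2019, 3, 18, 0, 0, 0, 0, time.UTC)}))
	fmt.Println(expiresOf("no expiry information")) // 0001-01-01 00:00:00 +0000 UTC
}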


@@ -195,7 +195,7 @@ type Options struct {
StorageURL string `config:"storage_url"` StorageURL string `config:"storage_url"`
AuthToken string `config:"auth_token"` AuthToken string `config:"auth_token"`
AuthVersion int `config:"auth_version"` AuthVersion int `config:"auth_version"`
ApplicationCredentialId string `config:"application_credential_id"` ApplicationCredentialID string `config:"application_credential_id"`
ApplicationCredentialName string `config:"application_credential_name"` ApplicationCredentialName string `config:"application_credential_name"`
ApplicationCredentialSecret string `config:"application_credential_secret"` ApplicationCredentialSecret string `config:"application_credential_secret"`
StoragePolicy string `config:"storage_policy"` StoragePolicy string `config:"storage_policy"`
@@ -216,7 +216,7 @@ type Fs struct {
containerOK bool // true if we have created the container containerOK bool // true if we have created the container
segmentsContainer string // container to store the segments (if any) in segmentsContainer string // container to store the segments (if any) in
noCheckContainer bool // don't check the container before creating it noCheckContainer bool // don't check the container before creating it
pacer *pacer.Pacer // To pace the API calls pacer *fs.Pacer // To pace the API calls
} }
// Object describes a swift object // Object describes a swift object
@@ -317,7 +317,7 @@ func swiftConnection(opt *Options, name string) (*swift.Connection, error) {
StorageUrl: opt.StorageURL, StorageUrl: opt.StorageURL,
AuthToken: opt.AuthToken, AuthToken: opt.AuthToken,
AuthVersion: opt.AuthVersion, AuthVersion: opt.AuthVersion,
ApplicationCredentialId: opt.ApplicationCredentialId, ApplicationCredentialId: opt.ApplicationCredentialID,
ApplicationCredentialName: opt.ApplicationCredentialName, ApplicationCredentialName: opt.ApplicationCredentialName,
ApplicationCredentialSecret: opt.ApplicationCredentialSecret, ApplicationCredentialSecret: opt.ApplicationCredentialSecret,
EndpointType: swift.EndpointType(opt.EndpointType), EndpointType: swift.EndpointType(opt.EndpointType),
@@ -401,7 +401,7 @@ func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, n
segmentsContainer: container + "_segments", segmentsContainer: container + "_segments",
root: directory, root: directory,
noCheckContainer: noCheckContainer, noCheckContainer: noCheckContainer,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.S3Pacer), pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep))),
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
ReadMimeType: true, ReadMimeType: true,
@@ -430,7 +430,7 @@ func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, n
return f, nil return f, nil
} }
// NewFs contstructs an Fs from the path, container:path // NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct // Parse config into Options struct
opt := new(Options) opt := new(Options)

View File

@@ -177,8 +177,8 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
// At least one value will be written to the channel, // At least one value will be written to the channel,
// specifying the initial value and updated values might // specifying the initial value and updated values might
// follow. A 0 Duration should pause the polling. // follow. A 0 Duration should pause the polling.
// The ChangeNotify implemantion must empty the channel // The ChangeNotify implementation must empty the channel
// regulary. When the channel gets closed, the implemantion // regularly. When the channel gets closed, the implementation
// should stop polling and release resources. // should stop polling and release resources.
func (f *Fs) ChangeNotify(fn func(string, fs.EntryType), ch <-chan time.Duration) { func (f *Fs) ChangeNotify(fn func(string, fs.EntryType), ch <-chan time.Duration) {
var remoteChans []chan time.Duration var remoteChans []chan time.Duration
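The comment above defines a small protocol: the caller sends polling intervals on the channel (a 0 value pauses polling) and closes the channel to stop. A minimal, self-contained sketch of an implementation honouring that contract (toy code, not rclone's actual implementation):

```go
package main

import (
	"fmt"
	"time"
)

// changeNotify polls for changes at the interval received on ch.
// A 0 interval pauses polling; closing ch stops polling entirely.
func changeNotify(notify func(path string), ch <-chan time.Duration) {
	go func() {
		var ticker *time.Ticker
		var tick <-chan time.Time // nil channel blocks forever, i.e. paused
		for {
			select {
			case d, ok := <-ch:
				if ticker != nil {
					ticker.Stop()
					ticker, tick = nil, nil
				}
				if !ok {
					return // channel closed: stop polling, release resources
				}
				if d > 0 {
					ticker = time.NewTicker(d)
					tick = ticker.C
				}
			case <-tick:
				notify("some/changed/path")
			}
		}
	}()
}

func main() {
	ch := make(chan time.Duration)
	changeNotify(func(path string) { fmt.Println("changed:", path) }, ch)
	ch <- time.Second // initial interval; updated values may follow
	time.Sleep(3 * time.Second)
	close(ch) // tell the implementation to stop
	time.Sleep(100 * time.Millisecond)
}
```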

View File

@@ -66,12 +66,13 @@ type Response struct {
// Note that status collects all the status values for which we just // Note that status collects all the status values for which we just
// check the first is OK. // check the first is OK.
type Prop struct { type Prop struct {
Status []string `xml:"DAV: status"` Status []string `xml:"DAV: status"`
Name string `xml:"DAV: prop>displayname,omitempty"` Name string `xml:"DAV: prop>displayname,omitempty"`
Type *xml.Name `xml:"DAV: prop>resourcetype>collection,omitempty"` Type *xml.Name `xml:"DAV: prop>resourcetype>collection,omitempty"`
Size int64 `xml:"DAV: prop>getcontentlength,omitempty"` IsCollection *string `xml:"DAV: prop>iscollection,omitempty"` // this is a Microsoft extension see #2716
Modified Time `xml:"DAV: prop>getlastmodified,omitempty"` Size int64 `xml:"DAV: prop>getcontentlength,omitempty"`
Checksums []string `xml:"prop>checksums>checksum,omitempty"` Modified Time `xml:"DAV: prop>getlastmodified,omitempty"`
Checksums []string `xml:"prop>checksums>checksum,omitempty"`
} }
// Parse a status of the form "HTTP/1.1 200 OK" or "HTTP/1.1 200" // Parse a status of the form "HTTP/1.1 200 OK" or "HTTP/1.1 200"
@@ -123,7 +124,7 @@ type PropValue struct {
Value string `xml:",chardata"` Value string `xml:",chardata"`
} }
// Error is used to desribe webdav errors // Error is used to describe webdav errors
// //
// <d:error xmlns:d="DAV:" xmlns:s="http://sabredav.org/ns"> // <d:error xmlns:d="DAV:" xmlns:s="http://sabredav.org/ns">
// <s:exception>Sabre\DAV\Exception\NotFound</s:exception> // <s:exception>Sabre\DAV\Exception\NotFound</s:exception>
@@ -136,7 +137,7 @@ type Error struct {
StatusCode int StatusCode int
} }
// Error returns a string for the error and statistifes the error interface // Error returns a string for the error and satisfies the error interface
func (e *Error) Error() string { func (e *Error) Error() string {
var out []string var out []string
if e.Message != "" { if e.Message != "" {

View File

@@ -102,7 +102,7 @@ func (ca *CookieAuth) Cookies() (*CookieResponse, error) {
func (ca *CookieAuth) getSPCookie(conf *SuccessResponse) (*CookieResponse, error) { func (ca *CookieAuth) getSPCookie(conf *SuccessResponse) (*CookieResponse, error) {
spRoot, err := url.Parse(ca.endpoint) spRoot, err := url.Parse(ca.endpoint)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "Error while contructing endpoint URL") return nil, errors.Wrap(err, "Error while constructing endpoint URL")
} }
u, err := url.Parse("https://" + spRoot.Host + "/_forms/default.aspx?wa=wsignin1.0") u, err := url.Parse("https://" + spRoot.Host + "/_forms/default.aspx?wa=wsignin1.0")
@@ -121,7 +121,7 @@ func (ca *CookieAuth) getSPCookie(conf *SuccessResponse) (*CookieResponse, error
Jar: jar, Jar: jar,
} }
// Send the previously aquired Token as a Post parameter // Send the previously acquired Token as a Post parameter
if _, err = client.Post(u.String(), "text/xml", strings.NewReader(conf.Succ.Token)); err != nil { if _, err = client.Post(u.String(), "text/xml", strings.NewReader(conf.Succ.Token)); err != nil {
return nil, errors.Wrap(err, "Error while grabbing cookies from endpoint: %v") return nil, errors.Wrap(err, "Error while grabbing cookies from endpoint: %v")
} }

View File

@@ -2,13 +2,10 @@ package odrvcookie
import ( import (
"time" "time"
"github.com/ncw/rclone/lib/rest"
) )
// CookieRenew holds information for the renew // CookieRenew holds information for the renew
type CookieRenew struct { type CookieRenew struct {
srv *rest.Client
timer *time.Ticker timer *time.Ticker
renewFn func() renewFn func()
} }

View File

@@ -101,7 +101,7 @@ type Fs struct {
endpoint *url.URL // URL of the host endpoint *url.URL // URL of the host
endpointURL string // endpoint as a string endpointURL string // endpoint as a string
srv *rest.Client // the connection to the one drive server srv *rest.Client // the connection to the one drive server
pacer *pacer.Pacer // pacer for API calls pacer *fs.Pacer // pacer for API calls
precision time.Duration // mod time precision precision time.Duration // mod time precision
canStream bool // set if can stream canStream bool // set if can stream
useOCMtime bool // set if can use X-OC-Mtime useOCMtime bool // set if can use X-OC-Mtime
@@ -172,6 +172,18 @@ func itemIsDir(item *api.Response) bool {
} }
fs.Debugf(nil, "Unknown resource type %q/%q on %q", t.Space, t.Local, item.Props.Name) fs.Debugf(nil, "Unknown resource type %q/%q on %q", t.Space, t.Local, item.Props.Name)
} }
// the iscollection prop is a Microsoft extension, but if present it is a reliable indicator
// if the above check failed - see #2716. This can be an integer or a boolean - see #2964
if t := item.Props.IsCollection; t != nil {
switch x := strings.ToLower(*t); x {
case "0", "false":
return false
case "1", "true":
return true
default:
fs.Debugf(nil, "Unknown value %q for IsCollection", x)
}
}
return false return false
} }
@@ -244,7 +256,7 @@ func errorHandler(resp *http.Response) error {
return errResponse return errResponse
} }
// addShlash makes sure s is terminated with a / if non empty // addSlash makes sure s is terminated with a / if non empty
func addSlash(s string) string { func addSlash(s string) string {
if s != "" && !strings.HasSuffix(s, "/") { if s != "" && !strings.HasSuffix(s, "/") {
s += "/" s += "/"
@@ -306,7 +318,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
endpoint: u, endpoint: u,
endpointURL: u.String(), endpointURL: u.String(),
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(u.String()), srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(u.String()),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
precision: fs.ModTimeNotSupported, precision: fs.ModTimeNotSupported,
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
@@ -632,10 +644,18 @@ func (f *Fs) _mkdir(dirPath string) error {
Path: dirPath, Path: dirPath,
NoResponse: true, NoResponse: true,
} }
return f.pacer.Call(func() (bool, error) { err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(&opts) resp, err := f.srv.Call(&opts)
return shouldRetry(resp, err) return shouldRetry(resp, err)
}) })
if apiErr, ok := err.(*api.Error); ok {
// already exists
// owncloud returns 423/StatusLocked if the create is already in progress
if apiErr.StatusCode == http.StatusMethodNotAllowed || apiErr.StatusCode == http.StatusNotAcceptable || apiErr.StatusCode == http.StatusLocked {
return nil
}
}
return err
} }
// mkdir makes the directory and parents using native paths // mkdir makes the directory and parents using native paths
@@ -643,11 +663,7 @@ func (f *Fs) mkdir(dirPath string) error {
// defer log.Trace(dirPath, "")("") // defer log.Trace(dirPath, "")("")
err := f._mkdir(dirPath) err := f._mkdir(dirPath)
if apiErr, ok := err.(*api.Error); ok { if apiErr, ok := err.(*api.Error); ok {
// already exists // parent does not exist so create it first then try again
if apiErr.StatusCode == http.StatusMethodNotAllowed || apiErr.StatusCode == http.StatusNotAcceptable {
return nil
}
// parent does not exist
if apiErr.StatusCode == http.StatusConflict { if apiErr.StatusCode == http.StatusConflict {
err = f.mkParentDir(dirPath) err = f.mkParentDir(dirPath)
if err == nil { if err == nil {
@@ -900,11 +916,13 @@ func (f *Fs) About() (*fs.Usage, error) {
return nil, errors.Wrap(err, "about call failed") return nil, errors.Wrap(err, "about call failed")
} }
usage := &fs.Usage{} usage := &fs.Usage{}
if q.Available >= 0 && q.Used >= 0 { if q.Available != 0 || q.Used != 0 {
usage.Total = fs.NewUsageValue(q.Available + q.Used) if q.Available >= 0 && q.Used >= 0 {
} usage.Total = fs.NewUsageValue(q.Available + q.Used)
if q.Used >= 0 { }
usage.Used = fs.NewUsageValue(q.Used) if q.Used >= 0 {
usage.Used = fs.NewUsageValue(q.Used)
}
} }
return usage, nil return usage, nil
} }

View File

@@ -56,7 +56,7 @@ type AsyncInfo struct {
Templated bool `json:"templated"` Templated bool `json:"templated"`
} }
// AsyncStatus is returned when requesting the status of an async operations. Possble values in-progress, success, failure // AsyncStatus is returned when requesting the status of an async operations. Possible values in-progress, success, failure
type AsyncStatus struct { type AsyncStatus struct {
Status string `json:"status"` Status string `json:"status"`
} }

View File

@@ -93,7 +93,7 @@ type Fs struct {
opt Options // parsed options opt Options // parsed options
features *fs.Features // optional features features *fs.Features // optional features
srv *rest.Client // the connection to the yandex server srv *rest.Client // the connection to the yandex server
pacer *pacer.Pacer // pacer for API calls pacer *fs.Pacer // pacer for API calls
diskRoot string // root path with "disk:/" container name diskRoot string // root path with "disk:/" container name
} }
@@ -269,7 +269,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
name: name, name: name,
opt: *opt, opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL), srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
} }
f.setRoot(root) f.setRoot(root)
f.features = (&fs.Features{ f.features = (&fs.Features{
@@ -307,7 +307,7 @@ func (f *Fs) itemToDirEntry(remote string, object *api.ResourceInfoResponse) (fs
if err != nil { if err != nil {
return nil, errors.Wrap(err, "error parsing time in directory item") return nil, errors.Wrap(err, "error parsing time in directory item")
} }
d := fs.NewDir(remote, t).SetSize(int64(object.Size)) d := fs.NewDir(remote, t).SetSize(object.Size)
return d, nil return d, nil
case "file": case "file":
o, err := f.newObjectWithInfo(remote, object) o, err := f.newObjectWithInfo(remote, object)
@@ -634,7 +634,7 @@ func (f *Fs) Purge() error {
return f.purgeCheck("", false) return f.purgeCheck("", false)
} }
// copyOrMoves copys or moves directories or files depending on the mthod parameter // copyOrMoves copies or moves directories or files depending on the method parameter
func (f *Fs) copyOrMove(method, src, dst string, overwrite bool) (err error) { func (f *Fs) copyOrMove(method, src, dst string, overwrite bool) (err error) {
opts := rest.Opts{ opts := rest.Opts{
Method: "POST", Method: "POST",
@@ -1107,7 +1107,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
return err return err
} }
//if file uploaded sucessfully then return metadata //if file uploaded successfully then return metadata
o.modTime = modTime o.modTime = modTime
o.md5sum = "" // according to unit tests after put the md5 is empty. o.md5sum = "" // according to unit tests after put the md5 is empty.
o.size = int64(in1.BytesRead()) // better solution o.readMetaData() ? o.size = int64(in1.BytesRead()) // better solution o.readMetaData() ?

View File

@@ -8,6 +8,8 @@
package main package main
import ( import (
"archive/tar"
"compress/gzip"
"encoding/json" "encoding/json"
"flag" "flag"
"fmt" "fmt"
@@ -15,13 +17,18 @@ import (
"io/ioutil" "io/ioutil"
"log" "log"
"net/http" "net/http"
"net/url"
"os" "os"
"os/exec" "os/exec"
"path"
"path/filepath" "path/filepath"
"regexp" "regexp"
"runtime"
"strings" "strings"
"time" "time"
"github.com/ncw/rclone/lib/rest"
"golang.org/x/net/html"
"golang.org/x/sys/unix" "golang.org/x/sys/unix"
) )
@@ -30,8 +37,15 @@ var (
install = flag.Bool("install", false, "Install the downloaded package using sudo dpkg -i.") install = flag.Bool("install", false, "Install the downloaded package using sudo dpkg -i.")
extract = flag.String("extract", "", "Extract the named executable from the .tar.gz and install into bindir.") extract = flag.String("extract", "", "Extract the named executable from the .tar.gz and install into bindir.")
bindir = flag.String("bindir", defaultBinDir(), "Directory to install files downloaded with -extract.") bindir = flag.String("bindir", defaultBinDir(), "Directory to install files downloaded with -extract.")
useAPI = flag.Bool("use-api", false, "Use the API for finding the release instead of scraping the page.")
// Globals // Globals
matchProject = regexp.MustCompile(`^(\w+)/(\w+)$`) matchProject = regexp.MustCompile(`^([\w-]+)/([\w-]+)$`)
osAliases = map[string][]string{
"darwin": []string{"macos", "osx"},
}
archAliases = map[string][]string{
"amd64": []string{"x86_64"},
}
) )
// A github release // A github release
@@ -113,25 +127,41 @@ func writable(path string) bool {
// Directory to install releases in by default // Directory to install releases in by default
// //
// Find writable directories on $PATH. Use the first writable // Find writable directories on $PATH. Use $GOPATH/bin if that is on
// directory which is in $HOME or failing that the first writable // the path and writable or use the first writable directory which is
// directory. // in $HOME or failing that the first writable directory.
// //
// Returns "" if none of the above were found // Returns "" if none of the above were found
func defaultBinDir() string { func defaultBinDir() string {
home := os.Getenv("HOME") home := os.Getenv("HOME")
var binDir string var (
bin string
homeBin string
goHomeBin string
gopath = os.Getenv("GOPATH")
)
for _, dir := range strings.Split(os.Getenv("PATH"), ":") { for _, dir := range strings.Split(os.Getenv("PATH"), ":") {
if writable(dir) { if writable(dir) {
if strings.HasPrefix(dir, home) { if strings.HasPrefix(dir, home) {
return dir if homeBin != "" {
homeBin = dir
}
if gopath != "" && strings.HasPrefix(dir, gopath) && goHomeBin == "" {
goHomeBin = dir
}
} }
if binDir != "" { if bin == "" {
binDir = dir bin = dir
} }
} }
} }
return binDir if goHomeBin != "" {
return goHomeBin
}
if homeBin != "" {
return homeBin
}
return bin
} }
// read the body or an error message // read the body or an error message
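The comment above describes the selection order: a writable $GOPATH directory on $PATH first, then a writable directory under $HOME, then the first writable directory found. Since the side-by-side diff is hard to follow here, this is a compact sketch of that priority logic as described (hypothetical helper, not the committed code):

```go
package main

import (
	"fmt"
	"strings"
)

// pickBinDir chooses an install directory: prefer a writable dir under
// $GOPATH, then one under $HOME, then the first writable dir on $PATH.
func pickBinDir(pathDirs []string, home, gopath string, writable func(string) bool) string {
	var first, homeDir, goDir string
	for _, dir := range pathDirs {
		if !writable(dir) {
			continue
		}
		if first == "" {
			first = dir
		}
		if home != "" && strings.HasPrefix(dir, home) && homeDir == "" {
			homeDir = dir
		}
		if gopath != "" && strings.HasPrefix(dir, gopath) && goDir == "" {
			goDir = dir
		}
	}
	switch {
	case goDir != "":
		return goDir
	case homeDir != "":
		return homeDir
	default:
		return first
	}
}

func main() {
	dirs := []string{"/usr/bin", "/home/user/go/bin", "/home/user/bin"}
	always := func(string) bool { return true }
	fmt.Println(pickBinDir(dirs, "/home/user", "/home/user/go", always)) // /home/user/go/bin
}
```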
@@ -175,7 +205,8 @@ func getAsset(project string, matchName *regexp.Regexp) (string, string) {
} }
for _, asset := range release.Assets { for _, asset := range release.Assets {
if matchName.MatchString(asset.Name) { //log.Printf("Finding %s", asset.Name)
if matchName.MatchString(asset.Name) && isOurOsArch(asset.Name) {
return asset.BrowserDownloadURL, asset.Name return asset.BrowserDownloadURL, asset.Name
} }
} }
@@ -183,6 +214,73 @@ func getAsset(project string, matchName *regexp.Regexp) (string, string) {
return "", "" return "", ""
} }
// Get an asset URL and name by scraping the downloads page
//
// This doesn't use the API so isn't rate limited when not using GITHUB login details
func getAssetFromReleasesPage(project string, matchName *regexp.Regexp) (assetURL string, assetName string) {
baseURL := "https://github.com/" + project + "/releases"
log.Printf("Fetching asset info for %q from %q", project, baseURL)
base, err := url.Parse(baseURL)
if err != nil {
log.Fatalf("URL Parse failed: %v", err)
}
resp, err := http.Get(baseURL)
if err != nil {
log.Fatalf("Failed to fetch release info %q: %v", baseURL, err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
log.Printf("Error: %s", readBody(resp.Body))
log.Fatalf("Bad status %d when fetching %q release info: %s", resp.StatusCode, baseURL, resp.Status)
}
doc, err := html.Parse(resp.Body)
if err != nil {
log.Fatalf("Failed to parse web page: %v", err)
}
var walk func(*html.Node)
walk = func(n *html.Node) {
if n.Type == html.ElementNode && n.Data == "a" {
for _, a := range n.Attr {
if a.Key == "href" {
if name := path.Base(a.Val); matchName.MatchString(name) && isOurOsArch(name) {
if u, err := rest.URLJoin(base, a.Val); err == nil {
if assetName == "" {
assetName = name
assetURL = u.String()
}
}
}
break
}
}
}
for c := n.FirstChild; c != nil; c = c.NextSibling {
walk(c)
}
}
walk(doc)
if assetName == "" || assetURL == "" {
log.Fatalf("Didn't find URL in page")
}
return assetURL, assetName
}
// isOurOsArch returns true if s contains our OS and our Arch
func isOurOsArch(s string) bool {
s = strings.ToLower(s)
check := func(base string, aliases map[string][]string) bool {
names := []string{base}
names = append(names, aliases[base]...)
for _, name := range names {
if strings.Contains(s, name) {
return true
}
}
return false
}
return check(runtime.GOARCH, archAliases) && check(runtime.GOOS, osAliases)
}
// get a file for download // get a file for download
func getFile(url, fileName string) { func getFile(url, fileName string) {
log.Printf("Downloading %q from %q", fileName, url) log.Printf("Downloading %q from %q", fileName, url)
@@ -229,6 +327,66 @@ func run(args ...string) {
} }
} }
// Untars fileName from srcFile
func untar(srcFile, fileName, extractDir string) {
f, err := os.Open(srcFile)
if err != nil {
log.Fatalf("Couldn't open tar: %v", err)
}
defer func() {
err := f.Close()
if err != nil {
log.Fatalf("Couldn't close tar: %v", err)
}
}()
var in io.Reader = f
srcExt := filepath.Ext(srcFile)
if srcExt == ".gz" || srcExt == ".tgz" {
gzf, err := gzip.NewReader(f)
if err != nil {
log.Fatalf("Couldn't open gzip: %v", err)
}
in = gzf
}
tarReader := tar.NewReader(in)
for {
header, err := tarReader.Next()
if err == io.EOF {
break
}
if err != nil {
log.Fatalf("Trouble reading tar file: %v", err)
}
name := header.Name
switch header.Typeflag {
case tar.TypeReg:
baseName := filepath.Base(name)
if baseName == fileName {
outPath := filepath.Join(extractDir, fileName)
out, err := os.OpenFile(outPath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0777)
if err != nil {
log.Fatalf("Couldn't open output file: %v", err)
}
defer func() {
err := out.Close()
if err != nil {
log.Fatalf("Couldn't close output: %v", err)
}
}()
n, err := io.Copy(out, tarReader)
if err != nil {
log.Fatalf("Couldn't write output file: %v", err)
}
log.Printf("Wrote %s (%d bytes) as %q", fileName, n, outPath)
}
}
}
}
func main() { func main() {
flag.Parse() flag.Parse()
args := flag.Args() args := flag.Args()
@@ -244,7 +402,12 @@ func main() {
log.Fatalf("Invalid regexp for name %q: %v", nameRe, err) log.Fatalf("Invalid regexp for name %q: %v", nameRe, err)
} }
assetURL, assetName := getAsset(project, matchName) var assetURL, assetName string
if *useAPI {
assetURL, assetName = getAsset(project, matchName)
} else {
assetURL, assetName = getAssetFromReleasesPage(project, matchName)
}
fileName := filepath.Join(os.TempDir(), assetName) fileName := filepath.Join(os.TempDir(), assetName)
getFile(assetURL, fileName) getFile(assetURL, fileName)
@@ -257,8 +420,6 @@ func main() {
log.Fatalf("Need to set -bindir") log.Fatalf("Need to set -bindir")
} }
log.Printf("Unpacking %s from %s and installing into %s", *extract, fileName, *bindir) log.Printf("Unpacking %s from %s and installing into %s", *extract, fileName, *bindir)
run("tar", "xf", fileName, *extract) untar(fileName, *extract, *bindir+"/")
run("chmod", "a+x", *extract)
run("mv", "-f", *extract, *bindir+"/")
} }
} }

View File

@@ -36,6 +36,7 @@ docs = [
"http.md", "http.md",
"hubic.md", "hubic.md",
"jottacloud.md", "jottacloud.md",
"koofr.md",
"mega.md", "mega.md",
"azureblob.md", "azureblob.md",
"onedrive.md", "onedrive.md",

View File

@@ -29,7 +29,7 @@ github-release release \
--name "rclone" \ --name "rclone" \
--description "Rclone - rsync for cloud storage. Sync files to and from many cloud storage providers." --description "Rclone - rsync for cloud storage. Sync files to and from many cloud storage providers."
for build in `ls build | grep -v current`; do for build in `ls build | grep -v current | grep -v testbuilds`; do
echo "Uploading ${build}" echo "Uploading ${build}"
base="${build%.*}" base="${build%.*}"
parts=(${base//-/ }) parts=(${base//-/ })

View File

@@ -341,8 +341,7 @@ func initConfig() {
configflags.SetFlags() configflags.SetFlags()
// Load filters // Load filters
var err error err := filterflags.Reload()
filter.Active, err = filter.NewFilter(&filterflags.Opt)
if err != nil { if err != nil {
log.Fatalf("Failed to load filters: %v", err) log.Fatalf("Failed to load filters: %v", err)
} }
@@ -456,7 +455,7 @@ func AddBackendFlags() {
help = help[:nl] help = help[:nl]
} }
help = strings.TrimSpace(help) help = strings.TrimSpace(help)
flag := pflag.CommandLine.VarPF(opt, name, string(opt.ShortOpt), help) flag := pflag.CommandLine.VarPF(opt, name, opt.ShortOpt, help)
if _, isBool := opt.Default.(bool); isBool { if _, isBool := opt.Default.(bool); isBool {
flag.NoOptDefVal = "true" flag.NoOptDefVal = "true"
} }

View File

@@ -7,8 +7,13 @@ import (
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
var (
createEmptySrcDirs = false
)
func init() { func init() {
cmd.Root.AddCommand(commandDefintion) cmd.Root.AddCommand(commandDefintion)
commandDefintion.Flags().BoolVarP(&createEmptySrcDirs, "create-empty-src-dirs", "", createEmptySrcDirs, "Create empty source dirs on destination after copy")
} }
var commandDefintion = &cobra.Command{ var commandDefintion = &cobra.Command{
@@ -69,7 +74,7 @@ changed recently very efficiently like this:
fsrc, srcFileName, fdst := cmd.NewFsSrcFileDst(args) fsrc, srcFileName, fdst := cmd.NewFsSrcFileDst(args)
cmd.Run(true, true, command, func() error { cmd.Run(true, true, command, func() error {
if srcFileName == "" { if srcFileName == "" {
return sync.CopyDir(fdst, fsrc) return sync.CopyDir(fdst, fsrc, createEmptySrcDirs)
} }
return operations.CopyFile(fdst, fsrc, srcFileName, srcFileName) return operations.CopyFile(fdst, fsrc, srcFileName, srcFileName)
}) })

View File

@@ -48,7 +48,7 @@ destination.
fsrc, srcFileName, fdst, dstFileName := cmd.NewFsSrcDstFiles(args) fsrc, srcFileName, fdst, dstFileName := cmd.NewFsSrcDstFiles(args)
cmd.Run(true, true, command, func() error { cmd.Run(true, true, command, func() error {
if srcFileName == "" { if srcFileName == "" {
return sync.CopyDir(fdst, fsrc) return sync.CopyDir(fdst, fsrc, false)
} }
return operations.CopyFile(fdst, fsrc, dstFileName, srcFileName) return operations.CopyFile(fdst, fsrc, dstFileName, srcFileName)
}) })

View File

@@ -32,8 +32,48 @@ documentation, changelog and configuration walkthroughs.
fs.Debugf("rclone", "Version %q finishing with parameters %q", fs.Version, os.Args) fs.Debugf("rclone", "Version %q finishing with parameters %q", fs.Version, os.Args)
atexit.Run() atexit.Run()
}, },
BashCompletionFunction: bashCompletionFunc,
} }
const (
bashCompletionFunc = `
__rclone_custom_func() {
if [[ ${#COMPREPLY[@]} -eq 0 ]]; then
local cur cword prev words
if declare -F _init_completion > /dev/null; then
_init_completion -n : || return
else
__rclone_init_completion -n : || return
fi
if [[ $cur != *:* ]]; then
local remote
while IFS= read -r remote; do
[[ $remote != $cur* ]] || COMPREPLY+=("$remote")
done < <(command rclone listremotes)
if [[ ${COMPREPLY[@]} ]]; then
local paths=("$cur"*)
[[ ! -f ${paths[0]} ]] || COMPREPLY+=("${paths[@]}")
fi
else
local path=${cur#*:}
if [[ $path == */* ]]; then
local prefix=$(eval printf '%s' "${path%/*}")
else
local prefix=
fi
local line
while IFS= read -r line; do
local reply=${prefix:+$prefix/}$line
[[ $reply != $path* ]] || COMPREPLY+=("$reply")
done < <(rclone lsf "${cur%%:*}:$prefix" 2>/dev/null)
[[ ! ${COMPREPLY[@]} ]] || compopt -o filenames
fi
[[ ! ${COMPREPLY[@]} ]] || compopt -o nospace
fi
}
`
)
// root help command // root help command
var helpCommand = &cobra.Command{ var helpCommand = &cobra.Command{
Use: "help", Use: "help",

View File

@@ -21,11 +21,22 @@ import (
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
type position int
const (
positionMiddle position = 1 << iota
positionLeft
positionRight
positionNone position = 0
positionAll position = positionRight<<1 - 1
)
var ( var (
checkNormalization bool checkNormalization bool
checkControl bool checkControl bool
checkLength bool checkLength bool
checkStreaming bool checkStreaming bool
positionList = []position{positionMiddle, positionLeft, positionRight}
) )
func init() { func init() {
@@ -59,7 +70,7 @@ a bit of go code for each one.
type results struct { type results struct {
f fs.Fs f fs.Fs
mu sync.Mutex mu sync.Mutex
charNeedsEscaping map[rune]bool stringNeedsEscaping map[string]position
maxFileLength int maxFileLength int
canWriteUnnormalized bool canWriteUnnormalized bool
canReadUnnormalized bool canReadUnnormalized bool
@@ -69,8 +80,8 @@ type results struct {
func newResults(f fs.Fs) *results { func newResults(f fs.Fs) *results {
return &results{ return &results{
f: f, f: f,
charNeedsEscaping: make(map[rune]bool), stringNeedsEscaping: make(map[string]position),
} }
} }
@@ -79,13 +90,13 @@ func (r *results) Print() {
fmt.Printf("// %s\n", r.f.Name()) fmt.Printf("// %s\n", r.f.Name())
if checkControl { if checkControl {
escape := []string{} escape := []string{}
for c, needsEscape := range r.charNeedsEscaping { for c, needsEscape := range r.stringNeedsEscaping {
if needsEscape { if needsEscape != positionNone {
escape = append(escape, fmt.Sprintf("0x%02X", c)) escape = append(escape, fmt.Sprintf("0x%02X", c))
} }
} }
sort.Strings(escape) sort.Strings(escape)
fmt.Printf("charNeedsEscaping = []byte{\n") fmt.Printf("stringNeedsEscaping = []byte{\n")
fmt.Printf("\t%s\n", strings.Join(escape, ", ")) fmt.Printf("\t%s\n", strings.Join(escape, ", "))
fmt.Printf("}\n") fmt.Printf("}\n")
} }
@@ -130,20 +141,45 @@ func (r *results) checkUTF8Normalization() {
} }
} }
// check we can write file with the rune passed in func (r *results) checkStringPositions(s string) {
func (r *results) checkChar(c rune) { fs.Infof(r.f, "Writing position file 0x%0X", s)
fs.Infof(r.f, "Writing file 0x%02X", c) positionError := positionNone
path := fmt.Sprintf("0x%02X-%c-", c, c)
_, err := r.writeFile(path) for _, pos := range positionList {
escape := false path := ""
if err != nil { switch pos {
fs.Infof(r.f, "Couldn't write file 0x%02X", c) case positionMiddle:
escape = true path = fmt.Sprintf("position-middle-%0X-%s-", s, s)
} else { case positionLeft:
fs.Infof(r.f, "OK writing file 0x%02X", c) path = fmt.Sprintf("%s-position-left-%0X", s, s)
case positionRight:
path = fmt.Sprintf("position-right-%0X-%s", s, s)
default:
panic("invalid position: " + pos.String())
}
_, writeErr := r.writeFile(path)
if writeErr != nil {
fs.Infof(r.f, "Writing %s position file 0x%0X Error: %s", pos.String(), s, writeErr)
} else {
fs.Infof(r.f, "Writing %s position file 0x%0X OK", pos.String(), s)
}
obj, getErr := r.f.NewObject(path)
if getErr != nil {
fs.Infof(r.f, "Getting %s position file 0x%0X Error: %s", pos.String(), s, getErr)
} else {
if obj.Size() != 50 {
fs.Infof(r.f, "Getting %s position file 0x%0X Invalid Size: %d", pos.String(), s, obj.Size())
} else {
fs.Infof(r.f, "Getting %s position file 0x%0X OK", pos.String(), s)
}
}
if writeErr != nil || getErr != nil {
positionError += pos
}
} }
r.mu.Lock() r.mu.Lock()
r.charNeedsEscaping[c] = escape r.stringNeedsEscaping[s] = positionError
r.mu.Unlock() r.mu.Unlock()
} }
@@ -157,19 +193,28 @@ func (r *results) checkControls() {
} }
var wg sync.WaitGroup var wg sync.WaitGroup
for i := rune(0); i < 128; i++ { for i := rune(0); i < 128; i++ {
s := string(i)
if i == 0 || i == '/' { if i == 0 || i == '/' {
// We're not even going to check NULL or / // We're not even going to check NULL or /
r.charNeedsEscaping[i] = true r.stringNeedsEscaping[s] = positionAll
continue continue
} }
wg.Add(1) wg.Add(1)
c := i go func(s string) {
go func() {
defer wg.Done() defer wg.Done()
token := <-tokens token := <-tokens
r.checkChar(c) r.checkStringPositions(s)
tokens <- token tokens <- token
}() }(s)
}
for _, s := range []string{"", "\xBF", "\xFE"} {
wg.Add(1)
go func(s string) {
defer wg.Done()
token := <-tokens
r.checkStringPositions(s)
tokens <- token
}(s)
} }
wg.Wait() wg.Wait()
fs.Infof(r.f, "Done trying to create control character file names") fs.Infof(r.f, "Done trying to create control character file names")
@@ -268,3 +313,35 @@ func readInfo(f fs.Fs) error {
r.Print() r.Print()
return nil return nil
} }
func (e position) String() string {
switch e {
case positionNone:
return "none"
case positionAll:
return "all"
}
var buf bytes.Buffer
if e&positionMiddle != 0 {
buf.WriteString("middle")
e &= ^positionMiddle
}
if e&positionLeft != 0 {
if buf.Len() != 0 {
buf.WriteRune(',')
}
buf.WriteString("left")
e &= ^positionLeft
}
if e&positionRight != 0 {
if buf.Len() != 0 {
buf.WriteRune(',')
}
buf.WriteString("right")
e &= ^positionRight
}
if e != positionNone {
panic("invalid position")
}
return buf.String()
}

40
cmd/info/process.sh Normal file
View File

@@ -0,0 +1,40 @@
set -euo pipefail
for f in info-*.log; do
for pos in middle left right; do
egrep -oe " Writing $pos position file [^ ]* \w+" $f | sort | cut -d' ' -f 7 > $f.write_$pos
egrep -oe " Getting $pos position file [^ ]* \w+" $f | sort | cut -d' ' -f 7 > $f.get_$pos
done
{
echo "${${f%.log}#info-}\t${${f%.log}#info-}\t${${f%.log}#info-}\t${${f%.log}#info-}\t${${f%.log}#info-}\t${${f%.log}#info-}"
echo "Write\tWrite\tWrite\tGet\tGet\tGet"
echo "Mid\tLeft\tRight\tMid\tLeft\tRight"
paste $f.write_{middle,left,right} $f.get_{middle,left,right}
} > $f.csv
done
for f in info-*.list; do
for pos in middle left right; do
cat $f | perl -lne 'print $1 if /^\s+[0-9]+\s+(.*)/' | grep -a "position-$pos-" | sort > $f.$pos
done
{
echo "${${f%.list}#info-}\t${${f%.list}#info-}\t${${f%.list}#info-}"
echo "List\tList\tList"
echo "Mid\tLeft\tRight"
for e in 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F 20 21 22 23 24 25 26 27 28 29 2A 2B 2C 2D 2E 30 31 32 33 34 35 36 37 38 39 3A 3B 3C 3D 3E 3F 40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F 50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F 60 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 73 74 75 76 77 78 79 7A 7B 7C 7D 7E 7F BF EFBCBC FE; do
echo -n $(perl -lne 'print "'$e'-$1" if /^position-middle-'$e'-(.*)-/' $f.middle | tr -d "\t\r" | grep -a . || echo Miss)
echo -n "\t"
echo -n $(perl -lne 'print "'$e'-$1" if /^(.*)-position-left-'$e'/' $f.left | tr -d "\t\r" | grep -a . || echo Miss)
echo -n "\t"
echo $(perl -lne 'print "'$e'-$1" if /^position-right-'$e'-(.*)/' $f.right | tr -d "\t\r" | grep -a . || echo Miss)
# echo -n $(grep -a "position-middle-$e-" $f.middle | tr -d "\t\r" || echo Miss)"\t"
# echo -n $(grep -a "position-left-$e" $f.left | tr -d "\t\r" || echo Miss)"\t"
# echo $(grep -a "position-right-$e-" $f.right | tr -d "\t\r" || echo Miss)
done
} > $f.csv
done
for f in info-*.list; do
paste ${f%.list}.log.csv $f.csv > ${f%.list}.full.csv
done
paste *.full.csv > info-complete.csv

3
cmd/info/test.cmd Normal file
View File

@@ -0,0 +1,3 @@
rclone.exe purge info
rclone.exe info -vv info > info-LocalWindows.log 2>&1
rclone.exe ls -vv info > info-LocalWindows.list 2>&1

43
cmd/info/test.sh Executable file
View File

@@ -0,0 +1,43 @@
#!/usr/bin/env zsh
#
# example usage:
# $GOPATH/src/github.com/ncw/rclone/cmd/info/test.sh --list | \
# parallel -P20 $GOPATH/src/github.com/ncw/rclone/cmd/info/test.sh
export PATH=$GOPATH/src/github.com/ncw/rclone:$PATH
typeset -A allRemotes
allRemotes=(
TestAmazonCloudDrive '--low-level-retries=2 --checkers=5'
TestB2 ''
TestBox ''
TestDrive '--tpslimit=5'
TestCrypt ''
TestDropbox '--checkers=1'
TestJottacloud ''
TestMega ''
TestOneDrive ''
TestOpenDrive '--low-level-retries=2 --checkers=5'
TestPcloud '--low-level-retries=2 --timeout=15s'
TestS3 ''
Local ''
)
set -euo pipefail
if [[ $# -eq 0 ]]; then
set -- ${(k)allRemotes[@]}
elif [[ $1 = --list ]]; then
printf '%s\n' ${(k)allRemotes[@]}
exit 0
fi
for remote; do
dir=$remote:infotest
if [[ $remote = Local ]]; then
dir=infotest
fi
rclone purge $dir || :
rclone info -vv $dir ${=allRemotes[$remote]} &> info-$remote.log
rclone ls -vv $dir &> info-$remote.list
done

View File

@@ -16,7 +16,7 @@ var (
func init() { func init() {
cmd.Root.AddCommand(commandDefintion) cmd.Root.AddCommand(commandDefintion)
commandDefintion.Flags().BoolVarP(&listLong, "long", "l", listLong, "Show the type as well as names.") commandDefintion.Flags().BoolVarP(&listLong, "long", "", listLong, "Show the type as well as names.")
} }
var commandDefintion = &cobra.Command{ var commandDefintion = &cobra.Command{

View File

@@ -10,7 +10,6 @@ import (
"github.com/ncw/rclone/fs" "github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/hash" "github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/operations" "github.com/ncw/rclone/fs/operations"
"github.com/ncw/rclone/fs/walk"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
@@ -67,8 +66,10 @@ output:
s - size s - size
t - modification time t - modification time
h - hash h - hash
i - ID of object if known i - ID of object
o - Original ID of underlying object
m - MimeType of object if known m - MimeType of object if known
e - encrypted name
So if you wanted the path, size and modification time, you would use So if you wanted the path, size and modification time, you would use
--format "pst", or maybe --format "tsp" to put the path last. --format "pst", or maybe --format "tsp" to put the path last.
@@ -161,6 +162,10 @@ func Lsf(fsrc fs.Fs, out io.Writer) error {
list.SetCSV(csv) list.SetCSV(csv)
list.SetDirSlash(dirSlash) list.SetDirSlash(dirSlash)
list.SetAbsolute(absolute) list.SetAbsolute(absolute)
var opt = operations.ListJSONOpt{
NoModTime: true,
Recurse: recurse,
}
for _, char := range format { for _, char := range format {
switch char { switch char {
@@ -168,38 +173,38 @@ func Lsf(fsrc fs.Fs, out io.Writer) error {
list.AddPath() list.AddPath()
case 't': case 't':
list.AddModTime() list.AddModTime()
opt.NoModTime = false
case 's': case 's':
list.AddSize() list.AddSize()
case 'h': case 'h':
list.AddHash(hashType) list.AddHash(hashType)
opt.ShowHash = true
case 'i': case 'i':
list.AddID() list.AddID()
case 'm': case 'm':
list.AddMimeType() list.AddMimeType()
case 'e':
list.AddEncrypted()
opt.ShowEncrypted = true
case 'o':
list.AddOrigID()
opt.ShowOrigIDs = true
default: default:
return errors.Errorf("Unknown format character %q", char) return errors.Errorf("Unknown format character %q", char)
} }
} }
return walk.Walk(fsrc, "", false, operations.ConfigMaxDepth(recurse), func(path string, entries fs.DirEntries, err error) error { return operations.ListJSON(fsrc, "", &opt, func(item *operations.ListJSONItem) error {
if err != nil { if item.IsDir {
fs.CountError(err) if filesOnly {
fs.Errorf(path, "error listing: %v", err) return nil
return nil }
} } else {
for _, entry := range entries { if dirsOnly {
_, isDir := entry.(fs.Directory) return nil
if isDir {
if filesOnly {
continue
}
} else {
if dirsOnly {
continue
}
} }
_, _ = fmt.Fprintln(out, list.Format(entry))
} }
_, _ = fmt.Fprintln(out, list.Format(item))
return nil return nil
}) })
} }

View File

@@ -60,7 +60,13 @@ If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt"
will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". will be "subfolder/file.txt", not "remote:path/subfolder/file.txt".
When used without --recursive the Path will always be the same as Name. When used without --recursive the Path will always be the same as Name.
The time is in RFC3339 format with nanosecond precision. The time is in RFC3339 format with up to nanosecond precision. The
number of decimal digits in the seconds will depend on the precision
to which the remote can hold the times, so if times are accurate to the
nearest millisecond (eg Google Drive) then 3 digits will always be
shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are
accurate to the nearest second (Dropbox, Box, WebDav etc) no digits
will be shown ("2017-05-31T16:15:57+01:00").
The whole output can be processed as a JSON blob, or alternatively it The whole output can be processed as a JSON blob, or alternatively it
can be processed line by line as each item is written one to a line. can be processed line by line as each item is written one to a line.
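A minimal sketch of the idea (an assumed helper, not rclone's code): pick an RFC3339 layout whose fractional-second digits match the remote's modification time precision, so millisecond-precision remotes always show 3 digits and second-precision remotes show none:

```go
package main

import (
	"fmt"
	"time"
)

// formatWithPrecision renders t with only as many fractional-second
// digits as precision warrants. ".000" keeps trailing zeros (always 3
// digits); ".999999999" trims trailing zeros (up to 9 digits).
func formatWithPrecision(t time.Time, precision time.Duration) string {
	switch {
	case precision < time.Millisecond:
		return t.Format("2006-01-02T15:04:05.999999999Z07:00")
	case precision < time.Second:
		return t.Format("2006-01-02T15:04:05.000Z07:00")
	default:
		return t.Format("2006-01-02T15:04:05Z07:00")
	}
}

func main() {
	t := time.Date(2017, 5, 31, 16, 15, 57, 34_000_000, time.FixedZone("BST", 3600))
	fmt.Println(formatWithPrecision(t, time.Millisecond)) // 2017-05-31T16:15:57.034+01:00
	fmt.Println(formatWithPrecision(t, time.Second))      // 2017-05-31T16:15:57+01:00
}
```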

View File

@@ -37,6 +37,8 @@ func (d *Dir) Attr(ctx context.Context, a *fuse.Attr) (err error) {
a.Crtime = modTime a.Crtime = modTime
// FIXME include Valid so get some caching? // FIXME include Valid so get some caching?
// FIXME fs.Debugf(d.path, "Dir.Attr %+v", a) // FIXME fs.Debugf(d.path, "Dir.Attr %+v", a)
a.Size = 512
a.Blocks = 1
return nil return nil
} }

View File

@@ -45,7 +45,7 @@ func (fh *FileHandle) Write(ctx context.Context, req *fuse.WriteRequest, resp *f
if err != nil { if err != nil {
return translateError(err) return translateError(err)
} }
resp.Size = int(n) resp.Size = n
return nil return nil
} }

View File

@@ -20,12 +20,12 @@ var (
) )
func randomSeekTest(size int64, in *os.File, name string) { func randomSeekTest(size int64, in *os.File, name string) {
startTime := time.Now()
start := rand.Int63n(size) start := rand.Int63n(size)
blockSize := rand.Intn(*maxBlockSize) blockSize := rand.Intn(*maxBlockSize)
if int64(blockSize) > size-start { if int64(blockSize) > size-start {
blockSize = int(size - start) blockSize = int(size - start)
} }
log.Printf("Reading %d from %d", blockSize, start)
_, err := in.Seek(start, io.SeekStart) _, err := in.Seek(start, io.SeekStart)
if err != nil { if err != nil {
@@ -37,6 +37,8 @@ func randomSeekTest(size int64, in *os.File, name string) {
if err != nil { if err != nil {
log.Fatalf("Read failed on %q: %v", name, err) log.Fatalf("Read failed on %q: %v", name, err)
} }
log.Printf("Reading %d from %d took %v ", blockSize, start, time.Since(startTime))
} }
func main() { func main() {
@@ -48,10 +50,12 @@ func main() {
rand.Seed(*randSeed) rand.Seed(*randSeed)
name := args[0] name := args[0]
openStart := time.Now()
in, err := os.Open(name) in, err := os.Open(name)
if err != nil { if err != nil {
log.Fatalf("Couldn't open %q: %v", name, err) log.Fatalf("Couldn't open %q: %v", name, err)
} }
log.Printf("File Open took %v", time.Since(openStart))
fi, err := in.Stat() fi, err := in.Stat()
if err != nil { if err != nil {

View File

@@ -10,11 +10,13 @@ import (
// Globals // Globals
var ( var (
deleteEmptySrcDirs = false deleteEmptySrcDirs = false
createEmptySrcDirs = false
) )
func init() { func init() {
cmd.Root.AddCommand(commandDefintion) cmd.Root.AddCommand(commandDefintion)
commandDefintion.Flags().BoolVarP(&deleteEmptySrcDirs, "delete-empty-src-dirs", "", deleteEmptySrcDirs, "Delete empty source dirs after move") commandDefintion.Flags().BoolVarP(&deleteEmptySrcDirs, "delete-empty-src-dirs", "", deleteEmptySrcDirs, "Delete empty source dirs after move")
commandDefintion.Flags().BoolVarP(&createEmptySrcDirs, "create-empty-src-dirs", "", createEmptySrcDirs, "Create empty source dirs on destination after move")
} }
var commandDefintion = &cobra.Command{ var commandDefintion = &cobra.Command{
@@ -52,7 +54,7 @@ can speed transfers up greatly.
fsrc, srcFileName, fdst := cmd.NewFsSrcFileDst(args) fsrc, srcFileName, fdst := cmd.NewFsSrcFileDst(args)
cmd.Run(true, true, command, func() error { cmd.Run(true, true, command, func() error {
if srcFileName == "" { if srcFileName == "" {
return sync.MoveDir(fdst, fsrc, deleteEmptySrcDirs) return sync.MoveDir(fdst, fsrc, deleteEmptySrcDirs, createEmptySrcDirs)
} }
return operations.MoveFile(fdst, fsrc, srcFileName, srcFileName) return operations.MoveFile(fdst, fsrc, srcFileName, srcFileName)
}) })

View File

@@ -52,7 +52,7 @@ transfer.
cmd.Run(true, true, command, func() error { cmd.Run(true, true, command, func() error {
if srcFileName == "" { if srcFileName == "" {
return sync.MoveDir(fdst, fsrc, false) return sync.MoveDir(fdst, fsrc, false, false)
} }
return operations.MoveFile(fdst, fsrc, dstFileName, srcFileName) return operations.MoveFile(fdst, fsrc, dstFileName, srcFileName)
}) })

View File

@@ -10,6 +10,7 @@ import (
"sort" "sort"
"strings" "strings"
runewidth "github.com/mattn/go-runewidth"
"github.com/ncw/rclone/cmd" "github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/cmd/ncdu/scan" "github.com/ncw/rclone/cmd/ncdu/scan"
"github.com/ncw/rclone/fs" "github.com/ncw/rclone/fs"
@@ -122,7 +123,7 @@ func Printf(x, y int, fg, bg termbox.Attribute, format string, args ...interface
func Line(x, y, xmax int, fg, bg termbox.Attribute, spacer rune, msg string) { func Line(x, y, xmax int, fg, bg termbox.Attribute, spacer rune, msg string) {
for _, c := range msg { for _, c := range msg {
termbox.SetCell(x, y, c, fg, bg) termbox.SetCell(x, y, c, fg, bg)
x++ x += runewidth.RuneWidth(c)
if x >= xmax { if x >= xmax {
return return
} }

View File

@@ -158,7 +158,7 @@ func (cds *contentDirectoryService) Handle(action string, argsXML []byte, r *htt
}, nil }, nil
case "Browse": case "Browse":
var browse browse var browse browse
if err := xml.Unmarshal([]byte(argsXML), &browse); err != nil { if err := xml.Unmarshal(argsXML, &browse); err != nil {
return nil, err return nil, err
} }
obj, err := cds.objectFromID(browse.ObjectID) obj, err := cds.objectFromID(browse.ObjectID)
@@ -179,7 +179,7 @@ func (cds *contentDirectoryService) Handle(action string, argsXML []byte, r *htt
} }
return return
}():] }():]
if browse.RequestedCount != 0 && int(browse.RequestedCount) < len(objs) { if browse.RequestedCount != 0 && browse.RequestedCount < len(objs) {
objs = objs[:browse.RequestedCount] objs = objs[:browse.RequestedCount]
} }
result, err := xml.Marshal(objs) result, err := xml.Marshal(objs)

View File

@@ -0,0 +1,184 @@
package dlna
const connectionManagerServiceDescription = `<?xml version="1.0" encoding="UTF-8"?>
<scpd xmlns="urn:schemas-upnp-org:service-1-0">
<specVersion>
<major>1</major>
<minor>0</minor>
</specVersion>
<actionList>
<action>
<name>GetProtocolInfo</name>
<argumentList>
<argument>
<name>Source</name>
<direction>out</direction>
<relatedStateVariable>SourceProtocolInfo</relatedStateVariable>
</argument>
<argument>
<name>Sink</name>
<direction>out</direction>
<relatedStateVariable>SinkProtocolInfo</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>PrepareForConnection</name>
<argumentList>
<argument>
<name>RemoteProtocolInfo</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ProtocolInfo</relatedStateVariable>
</argument>
<argument>
<name>PeerConnectionManager</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionManager</relatedStateVariable>
</argument>
<argument>
<name>PeerConnectionID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
<argument>
<name>Direction</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Direction</relatedStateVariable>
</argument>
<argument>
<name>ConnectionID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
<argument>
<name>AVTransportID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_AVTransportID</relatedStateVariable>
</argument>
<argument>
<name>RcsID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_RcsID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>ConnectionComplete</name>
<argumentList>
<argument>
<name>ConnectionID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetCurrentConnectionIDs</name>
<argumentList>
<argument>
<name>ConnectionIDs</name>
<direction>out</direction>
<relatedStateVariable>CurrentConnectionIDs</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetCurrentConnectionInfo</name>
<argumentList>
<argument>
<name>ConnectionID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
<argument>
<name>RcsID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_RcsID</relatedStateVariable>
</argument>
<argument>
<name>AVTransportID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_AVTransportID</relatedStateVariable>
</argument>
<argument>
<name>ProtocolInfo</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ProtocolInfo</relatedStateVariable>
</argument>
<argument>
<name>PeerConnectionManager</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionManager</relatedStateVariable>
</argument>
<argument>
<name>PeerConnectionID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
<argument>
<name>Direction</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Direction</relatedStateVariable>
</argument>
<argument>
<name>Status</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionStatus</relatedStateVariable>
</argument>
</argumentList>
</action>
</actionList>
<serviceStateTable>
<stateVariable sendEvents="yes">
<name>SourceProtocolInfo</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>SinkProtocolInfo</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>CurrentConnectionIDs</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ConnectionStatus</name>
<dataType>string</dataType>
<allowedValueList>
<allowedValue>OK</allowedValue>
<allowedValue>ContentFormatMismatch</allowedValue>
<allowedValue>InsufficientBandwidth</allowedValue>
<allowedValue>UnreliableChannel</allowedValue>
<allowedValue>Unknown</allowedValue>
</allowedValueList>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ConnectionManager</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Direction</name>
<dataType>string</dataType>
<allowedValueList>
<allowedValue>Input</allowedValue>
<allowedValue>Output</allowedValue>
</allowedValueList>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ProtocolInfo</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ConnectionID</name>
<dataType>i4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_AVTransportID</name>
<dataType>i4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_RcsID</name>
<dataType>i4</dataType>
</stateVariable>
</serviceStateTable>
</scpd>`

View File

@@ -84,6 +84,21 @@ var services = []*service{
}, },
SCPD: contentDirectoryServiceDescription, SCPD: contentDirectoryServiceDescription,
}, },
{
Service: upnp.Service{
ServiceType: "urn:schemas-upnp-org:service:ConnectionManager:1",
ServiceId: "urn:upnp-org:serviceId:ConnectionManager",
ControlURL: serviceControlURL,
},
SCPD: connectionManagerServiceDescription,
},
}
func init() {
for _, s := range services {
p := path.Join("/scpd", s.ServiceId)
s.SCPDURL = p
}
} }
func devices() []string { func devices() []string {
@@ -250,9 +265,6 @@ func (s *server) initMux(mux *http.ServeMux) {
// Install handlers to serve SCPD for each UPnP service. // Install handlers to serve SCPD for each UPnP service.
for _, s := range services { for _, s := range services {
p := path.Join("/scpd", s.ServiceId)
s.SCPDURL = p
mux.HandleFunc(s.SCPDURL, func(serviceDesc string) http.HandlerFunc { mux.HandleFunc(s.SCPDURL, func(serviceDesc string) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) { return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("content-type", `text/xml; charset="utf-8"`) w.Header().Set("content-type", `text/xml; charset="utf-8"`)

View File

@@ -59,6 +59,11 @@ func TestRootSCPD(t *testing.T) {
// Make sure that the SCPD contains a CDS service. // Make sure that the SCPD contains a CDS service.
require.Contains(t, string(body), require.Contains(t, string(body),
"<serviceType>urn:schemas-upnp-org:service:ContentDirectory:1</serviceType>") "<serviceType>urn:schemas-upnp-org:service:ContentDirectory:1</serviceType>")
// Make sure that the SCPD contains a CM service.
require.Contains(t, string(body),
"<serviceType>urn:schemas-upnp-org:service:ConnectionManager:1</serviceType>")
// Ensure that the SCPD url is configured.
require.Regexp(t, "<SCPDURL>/.*</SCPDURL>", string(body))
} }
// Make sure that it serves content from the remote. // Make sure that it serves content from the remote.

View File

@@ -330,25 +330,12 @@ func (s *server) listObjects(w http.ResponseWriter, r *http.Request, remote stri
ls := listItems{} ls := listItems{}
// if remote supports ListR use that directly, otherwise use recursive Walk // if remote supports ListR use that directly, otherwise use recursive Walk
var err error err := walk.ListR(s.f, remote, true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
if ListR := s.f.Features().ListR; ListR != nil { for _, entry := range entries {
err = ListR(remote, func(entries fs.DirEntries) error { ls.add(entry)
for _, entry := range entries { }
ls.add(entry) return nil
} })
return nil
})
} else {
err = walk.Walk(s.f, remote, true, -1, func(path string, entries fs.DirEntries, err error) error {
if err == nil {
for _, entry := range entries {
ls.add(entry)
}
}
return err
})
}
if err != nil { if err != nil {
_, err = fserrors.Cause(err) _, err = fserrors.Cause(err)
if err != fs.ErrorDirNotFound { if err != fs.ErrorDirNotFound {

View File

@@ -6,8 +6,13 @@ import (
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
var (
createEmptySrcDirs = false
)
func init() { func init() {
cmd.Root.AddCommand(commandDefintion) cmd.Root.AddCommand(commandDefintion)
commandDefintion.Flags().BoolVarP(&createEmptySrcDirs, "create-empty-src-dirs", "", createEmptySrcDirs, "Create empty source dirs on destination after sync")
} }
var commandDefintion = &cobra.Command{ var commandDefintion = &cobra.Command{
@@ -39,7 +44,7 @@ go there.
cmd.CheckArgs(2, 2, command, args) cmd.CheckArgs(2, 2, command, args)
fsrc, fdst := cmd.NewFsSrcDst(args) fsrc, fdst := cmd.NewFsSrcDst(args)
cmd.Run(true, true, command, func() error { cmd.Run(true, true, command, func() error {
return sync.Sync(fdst, fsrc) return sync.Sync(fdst, fsrc, createEmptySrcDirs)
}) })
}, },
} }

View File

@@ -29,6 +29,7 @@ Rclone is a command line program to sync files and directories to and from:
* {{< provider name="Hubic" home="https://hubic.com/" config="/hubic/" >}} * {{< provider name="Hubic" home="https://hubic.com/" config="/hubic/" >}}
* {{< provider name="Jottacloud" home="https://www.jottacloud.com/en/" config="/jottacloud/" >}} * {{< provider name="Jottacloud" home="https://www.jottacloud.com/en/" config="/jottacloud/" >}}
* {{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}} * {{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}}
* {{< provider name="Koofr" home="https://koofr.eu/" config="/koofr/" >}}
* {{< provider name="Memset Memstore" home="https://www.memset.com/cloud/storage/" config="/swift/" >}} * {{< provider name="Memset Memstore" home="https://www.memset.com/cloud/storage/" config="/swift/" >}}
* {{< provider name="Mega" home="https://mega.nz/" config="/mega/" >}} * {{< provider name="Mega" home="https://mega.nz/" config="/mega/" >}}
* {{< provider name="Microsoft Azure Blob Storage" home="https://azure.microsoft.com/en-us/services/storage/blobs/" config="/azureblob/" >}} * {{< provider name="Microsoft Azure Blob Storage" home="https://azure.microsoft.com/en-us/services/storage/blobs/" config="/azureblob/" >}}

View File

@@ -233,3 +233,18 @@ Contributors
* kayrus <kay.diam@gmail.com> * kayrus <kay.diam@gmail.com>
* Rémy Léone <remy.leone@gmail.com> * Rémy Léone <remy.leone@gmail.com>
* Wojciech Smigielski <wojciech.hieronim.smigielski@gmail.com> * Wojciech Smigielski <wojciech.hieronim.smigielski@gmail.com>
* weetmuts <oehrstroem@gmail.com>
* Jonathan <vanillajonathan@users.noreply.github.com>
* James Carpenter <orbsmiv@users.noreply.github.com>
* Vince <vince0villamora@gmail.com>
* Nestar47 <47841759+Nestar47@users.noreply.github.com>
* Six <brbsix@gmail.com>
* Alexandru Bumbacea <alexandru.bumbacea@booking.com>
* calisro <robert.calistri@gmail.com>
* Dr.Rx <david.rey@nventive.com>
* marcintustin <marcintustin@users.noreply.github.com>
* jaKa Močnik <jaka@koofr.net>
* Fionera <fionera@fionera.de>
* Dan Walters <dan@walters.io>
* Danil Semelenov <sgtpep@users.noreply.github.com>
* xopez <28950736+xopez@users.noreply.github.com>

View File

@@ -16,9 +16,11 @@ Here is an example of making a b2 configuration. First run
rclone config rclone config
This will guide you through an interactive setup process. You will This will guide you through an interactive setup process. To authenticate
need your account number (a short hex number) and key (a long hex you will either need your Account ID (a short hex number) and Master
number) which you can get from the b2 control panel. Application Key (a long hex number) OR an Application Key, which is the
recommended method. See below for further details on generating and using
an Application Key.
``` ```
No remotes found - make a new one No remotes found - make a new one
@@ -102,10 +104,10 @@ You can use these with rclone too; you will need to use rclone version 1.43
or later. or later.
Follow Backblaze's docs to create an Application Key with the required Follow Backblaze's docs to create an Application Key with the required
permission and add the `Application Key ID` as the `account` and the permission and add the `applicationKeyId` as the `account` and the
`Application Key` itself as the `key`. `Application Key` itself as the `key`.
Note that you must put the Application Key ID as the `account` - you Note that you must put the _applicationKeyId_ as the `account` - you
can't use the master Account ID. If you try then B2 will return 401 can't use the master Account ID. If you try then B2 will return 401
errors. errors.
@@ -391,12 +393,21 @@ Upload chunk size. Must fit in memory.
When uploading large files, chunk the file into this size. Note that When uploading large files, chunk the file into this size. Note that
these chunks are buffered in memory and there might be a maximum of these chunks are buffered in memory and there might be a maximum of
"--transfers" chunks in progress at once. 5,000,000 Bytes is the "--transfers" chunks in progress at once. 5,000,000 Bytes is the
minimim size. minimum size.
- Config: chunk_size - Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE - Env Var: RCLONE_B2_CHUNK_SIZE
- Type: SizeSuffix - Type: SizeSuffix
- Default: 96M - Default: 96M
#### --b2-disable-checksum
Disable checksums for large (> upload cutoff) files
- Config: disable_checksum
- Env Var: RCLONE_B2_DISABLE_CHECKSUM
- Type: bool
- Default: false
<!--- autogenerated options stop --> <!--- autogenerated options stop -->

View File

@@ -112,6 +112,17 @@ To copy a local directory to an Box directory called backup
rclone copy /home/source remote:backup
### Using rclone with an Enterprise account with SSO ###

If you have an "Enterprise" account type with Box with single sign on
(SSO), you need to create a password to use Box with rclone. This can
be done at your Enterprise Box account by going to Settings, "Account"
Tab, and then setting the password in the "Authentication" field.

Once you have done this, you can set up your Enterprise Box account
using the same procedure detailed above, using the password you have
just set.
### Invalid refresh token ###

According to the [box docs](https://developer.box.com/v2.0/docs/oauth-20#section-6-using-the-access-and-refresh-tokens):


@@ -1,11 +1,140 @@
---
title: "Documentation"
description: "Rclone Changelog"
date: "2019-02-09"
---

# Changelog
## v1.46 - 2019-02-09
* New backends
* Support Alibaba Cloud (Aliyun) OSS via the s3 backend (Nick Craig-Wood)
* New commands
* serve dlna: serves a remote via DLNA for the local network (nicolov)
* New Features
* copy, move: Restore deprecated `--no-traverse` flag (Nick Craig-Wood)
* This is useful when transferring a small number of files into a large destination
* genautocomplete: Add remote path completion for bash completion (Christopher Peterson & Danil Semelenov)
* Buffer memory handling reworked to return memory to the OS better (Nick Craig-Wood)
* Buffer recycling library to replace sync.Pool
* Optionally use memory mapped memory for better memory shrinking
* Enable with `--use-mmap` if having memory problems - not default yet
* Parallelise reading of files specified by `--files-from` (Nick Craig-Wood)
* check: Add stats showing total files matched. (Dario Guzik)
* Allow rename/delete open files under Windows (Nick Craig-Wood)
* lsjson: Use exactly the correct number of decimal places in the seconds (Nick Craig-Wood)
* Add cookie support with cmdline switch `--use-cookies` for all HTTP based remotes (qip)
* Warn if `--checksum` is set but there are no hashes available (Nick Craig-Wood)
* Rework rate limiting (pacer) to be more accurate and allow bursting (Nick Craig-Wood)
* Improve error reporting for too many/few arguments in commands (Nick Craig-Wood)
* listremotes: Remove `-l` short flag as it conflicts with the new global flag (weetmuts)
* Make http serving with auth generate INFO messages on auth fail (Nick Craig-Wood)
* Bug Fixes
* Fix layout of stats (Nick Craig-Wood)
* Fix `--progress` crash under Windows Jenkins (Nick Craig-Wood)
* Fix transfer of google/onedrive docs by calling Rcat in Copy when size is -1 (Cnly)
* copyurl: Fix checking of `--dry-run` (Denis Skovpen)
* Mount
* Check that mountpoint and local directory to mount don't overlap (Nick Craig-Wood)
* Fix mount size under 32 bit Windows (Nick Craig-Wood)
* VFS
* Implement renaming of directories for backends without DirMove (Nick Craig-Wood)
* now all backends except b2 support renaming directories
* Implement `--vfs-cache-max-size` to limit the total size of the cache (Nick Craig-Wood)
* Add `--dir-perms` and `--file-perms` flags to set default permissions (Nick Craig-Wood)
* Fix deadlock on concurrent operations on a directory (Nick Craig-Wood)
* Fix deadlock between RWFileHandle.close and File.Remove (Nick Craig-Wood)
* Fix renaming/deleting open files with cache mode "writes" under Windows (Nick Craig-Wood)
* Fix panic on rename with `--dry-run` set (Nick Craig-Wood)
* Fix vfs/refresh with recurse=true needing the `--fast-list` flag
* Local
* Add support for `-l`/`--links` (symbolic link translation) (yair@unicorn)
* this works by showing links as `link.rclonelink` - see local backend docs for more info and the example sketch at the end of these release notes
* this errors if used with `-L`/`--copy-links`
* Fix renaming/deleting open files on Windows (Nick Craig-Wood)
* Crypt
* Check for maximum length before decrypting filename to fix panic (Garry McNulty)
* Azure Blob
* Allow building azureblob backend on *BSD (themylogin)
* Use the rclone HTTP client to support `--dump headers`, `--tpslimit` etc (Nick Craig-Wood)
* Use the s3 pacer for 0 delay in non error conditions (Nick Craig-Wood)
* Ignore directory markers (Nick Craig-Wood)
* Stop Mkdir attempting to create existing containers (Nick Craig-Wood)
* B2
* cleanup: will remove unfinished large files >24hrs old (Garry McNulty)
* For a bucket limited application key check the bucket name (Nick Craig-Wood)
* before this, rclone would use the authorised bucket regardless of what you put on the command line
* Added `--b2-disable-checksum` flag (Wojciech Smigielski)
* this enables large files to be uploaded without a SHA-1 hash for speed reasons
* Drive
* Set default pacer to 100ms for 10 tps (Nick Craig-Wood)
* This fits the Google defaults much better and reduces the 403 errors massively
* Add `--drive-pacer-min-sleep` and `--drive-pacer-burst` to control the pacer
* Improve ChangeNotify support for items with multiple parents (Fabian Möller)
* Fix ListR for items with multiple parents - this fixes oddities with `vfs/refresh` (Fabian Möller)
* Fix using `--drive-impersonate` and appfolders (Nick Craig-Wood)
* Fix google docs in rclone mount for some (not all) applications (Nick Craig-Wood)
* Dropbox
* Retry-After support for Dropbox backend (Mathieu Carbou)
* FTP
* Wait for 60 seconds for a connection to Close then declare it dead (Nick Craig-Wood)
* helps with indefinite hangs on some FTP servers
* Google Cloud Storage
* Update google cloud storage endpoints (weetmuts)
* HTTP
* Add an example with username and password which is supported but wasn't documented (Nick Craig-Wood)
* Fix backend with `--files-from` and non-existent files (Nick Craig-Wood)
* Hubic
* Make error message more informative if authentication fails (Nick Craig-Wood)
* Jottacloud
* Resume and deduplication support (Oliver Heyme)
* Use token auth for all API requests; don't store password anymore (Sebastian Bünger)
* Add support for 2-factor authentication (Sebastian Bünger)
* Mega
* Implement v2 account login which fixes logins for newer Mega accounts (Nick Craig-Wood)
* Return error if an unknown length file is attempted to be uploaded (Nick Craig-Wood)
* Add new error codes for better error reporting (Nick Craig-Wood)
* Onedrive
* Fix broken support for "shared with me" folders (Alex Chen)
* Fix root ID not normalised (Cnly)
* Return err instead of panic on unknown-sized uploads (Cnly)
* Qingstor
* Fix go routine leak on multipart upload errors (Nick Craig-Wood)
* Add upload chunk size/concurrency/cutoff control (Nick Craig-Wood)
* Default `--qingstor-upload-concurrency` to 1 to work around bug (Nick Craig-Wood)
* S3
* Implement `--s3-upload-cutoff` for single part uploads below this (Nick Craig-Wood)
* Change `--s3-upload-concurrency` default to 4 to increase performance (Nick Craig-Wood)
* Add `--s3-bucket-acl` to control bucket ACL (Nick Craig-Wood)
* Auto detect region for buckets on operation failure (Nick Craig-Wood)
* Add GLACIER storage class (William Cocker)
* Add Scaleway to s3 documentation (Rémy Léone)
* Add AWS endpoint eu-north-1 (weetmuts)
* SFTP
* Add support for PEM encrypted private keys (Fabian Möller)
* Add option to force the usage of an ssh-agent (Fabian Möller)
* Perform environment variable expansion on key-file (Fabian Möller)
* Fix rmdir on Windows based servers (eg CrushFTP) (Nick Craig-Wood)
* Fix rmdir deleting directory contents on some SFTP servers (Nick Craig-Wood)
* Fix error on dangling symlinks (Nick Craig-Wood)
* Swift
* Add `--swift-no-chunk` to disable segmented uploads in rcat/mount (Nick Craig-Wood)
* Introduce application credential auth support (kayrus)
* Fix memory usage by slimming Object (Nick Craig-Wood)
* Fix extra requests on upload (Nick Craig-Wood)
* Fix reauth on big files (Nick Craig-Wood)
* Union
* Fix poll-interval not working (Nick Craig-Wood)
* WebDAV
* Support About which means rclone mount will show the correct disk size (Nick Craig-Wood)
* Support MD5 and SHA1 hashes with Owncloud and Nextcloud (Nick Craig-Wood)
* Fail soft on time parsing errors (Nick Craig-Wood)
* Fix infinite loop on failed directory creation (Nick Craig-Wood)
* Fix identification of directories for Bitrix Site Manager (Nick Craig-Wood)
* Fix upload of 0 length files on some servers (Nick Craig-Wood)
* Fix if MKCOL fails with 423 Locked assume the directory exists (Nick Craig-Wood)
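As a rough sketch of the symlink translation mentioned under Local above;
the paths and the remote name `remote:` are placeholders:

```
# Create a symlink in the source tree
ln -s target.txt /tmp/src/link.txt

# With -l / --links the symlink is uploaded as a small "link.txt.rclonelink" file
rclone copy -l /tmp/src remote:backup

# Copying back with -l turns the .rclonelink file into a symlink again
rclone copy -l remote:backup /tmp/restore
```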
## v1.45 - 2018-11-24
* New backends


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone"
slug: rclone
url: /commands/rclone/
@@ -26,283 +26,301 @@ rclone [flags]
### Options

```
--acd-auth-url string Auth server URL. --acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID. --acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret. --acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url. --acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias. --alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true) --ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation. --auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive. --azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M) --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service --azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL) --azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000) --azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only --azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M) --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID --b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M) --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-endpoint string Endpoint for the service. --b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files. --b2-endpoint string Endpoint for the service.
--b2-key string Application Key --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging. --b2-key string Application Key
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M) --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-versions Include old versions in directory listings. --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--backup-dir string Make backups into hierarchy based in DIR. --b2-versions Include old versions in directory listings.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name. --backup-dir string Make backups into hierarchy based in DIR.
--box-client-id string Box App Client Id. --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-secret string Box App Client Secret --box-client-id string Box App Client Id.
--box-commit-retries int Max number of times to try committing a multipart file. (default 100) --box-client-secret string Box App Client Secret
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M) --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--buffer-size int In memory buffer size when reading files for each --transfer. (default 16M) --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable. --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s) --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming. --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M) --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G) --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend") --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-purge Clear all the cached data for this remote on start. --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) --cache-db-purge Clear all the cached data for this remote on start.
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone") --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s) --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-password string The password of the Plex user --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-url string The URL of the Plex server --cache-plex-password string The password of the Plex user
--cache-plex-username string The username of the Plex user --cache-plex-url string The URL of the Plex server
--cache-read-retries int How many times to retry a read from a cache storage. (default 10) --cache-plex-username string The username of the Plex user
--cache-remote string Remote to cache. --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1) --cache-remote string Remote to cache.
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded. --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s) --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-workers int How many workers should run in parallel to download chunks. (default 4) --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-writes Cache file data on writes through the FS --cache-workers int How many workers should run in parallel to download chunks. (default 4)
--checkers int Number of checkers to run in parallel. (default 8) --cache-writes Cache file data on writes through the FS
-c, --checksum Skip based on checksum & size, not mod-time & size --checkers int Number of checkers to run in parallel. (default 8)
--config string Config file. (default "/home/ncw/.rclone.conf") -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--contimeout duration Connect timeout (default 1m0s) --config string Config file. (default "/home/ncw/.rclone.conf")
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
-h, --help help for rclone --gcs-project-number string Project number.
--http-url string URL of http host to connect to --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-id string Hubic Client Id -h, --help help for rclone
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-V, --version Print the version number --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-pass string Password. --swift-user string User name to log in (OS_USERNAME).
--webdav-url string URL of http host to connect to --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-user string User name --syslog Use Syslog for logging
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-id string Yandex Client Id --timeout duration IO idle timeout (default 5m0s)
--yandex-client-secret string Yandex Client Secret --tpslimit float Limit HTTP transactions per second to this.
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
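Purely as an illustrative sketch (the paths and the remote names `remote:`
and `gdrive:` are placeholders), a couple of the newer options above might
be combined like this:

```
# Use the mmap allocator and a larger read buffer when copying
rclone copy /data/photos remote:photos --use-mmap --buffer-size 32M

# Slow the Drive pacer down a little but allow bigger bursts of API calls
rclone sync gdrive:docs /backup/docs --drive-pacer-min-sleep 200ms --drive-pacer-burst 200
```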
### SEE ALSO
@@ -355,4 +373,4 @@ rclone [flags]
* [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion.
* [rclone version](/commands/rclone_version/) - Show the version number.

###### Auto generated by spf13/cobra on 9-Feb-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone about"
slug: rclone_about
url: /commands/rclone_about/
@@ -69,285 +69,303 @@ rclone about remote: [flags]
### Options inherited from parent commands

```
--acd-auth-url string Auth server URL. --acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID. --acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret. --acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G) --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url. --acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s) --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias. --alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true) --ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation. --auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive. --azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL) --azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
+ -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.,
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
+ --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.,
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
--dropbox-impersonate string Impersonate this user when using a business account.
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP bodies - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
+ --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-case Ignore case in filters (case insensitive)
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
- --jottacloud-user string User Name
+ --jottacloud-user string User Name:
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
- --max-transfer int Maximum size of data to transfer. (default off)
+ --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
- --min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Obsolete - does nothing.
+ --no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
+ --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-files string Path to local files to serve on the HTTP server.
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-no-auth Don't require auth for certain methods.
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-serve Enable the serving of remote objects.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects.
+ --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string An AWS session token
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 2)
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
+ --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--s3-v2-auth If true use v2 authentication.
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
- --sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+ --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
+ --sftp-key-use-agent When set forces the usage of the ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 40)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
+ --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+ --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+ --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
+ --swift-no-chunk Don't chunk files during streaming upload.
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
+ --use-cookies Enable session cookiejar.
+ --use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45")
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
- ###### Auto generated by spf13/cobra on 24-Nov-2018
+ ###### Auto generated by spf13/cobra on 9-Feb-2019


@@ -1,5 +1,5 @@
---
- date: 2018-11-24T13:43:29Z
+ date: 2019-02-09T10:42:18Z
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
@@ -28,285 +28,303 @@ rclone authorize [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
- --buffer-size int In memory buffer size when reading files for each --transfer. (default 16M)
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
+ -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.,
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
+ --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.,
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
--dropbox-impersonate string Impersonate this user when using a business account.
-n, --dry-run Do a trial run with no permanent changes
- --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP bodies - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
+ --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-case Ignore case in filters (case insensitive)
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
- --jottacloud-pass string Password.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
- --jottacloud-user string User Name
+ --jottacloud-user string User Name:
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
- --max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer int Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 24-Nov-2018 ###### Auto generated by spf13/cobra on 9-Feb-2019
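The listing above is the auto-generated reference for rclone's global flags, so there is no worked example on this page. Purely as an illustrative sketch (not taken from this diff), the following shows how a few of the flags listed above might be combined in a single command; the remote name `remote:backup` and the local path are hypothetical placeholders:

```sh
# Hypothetical sync using several of the global flags documented above.
rclone sync /home/user/docs remote:backup \
    --fast-list \
    --transfers 8 \
    --checkers 16 \
    --max-age 7d \
    --log-level INFO --log-file sync.log
```

Each of these flags (`--fast-list`, `--transfers`, `--checkers`, `--max-age`, `--log-level`, `--log-file`) appears in the options list above; the specific values are only examples.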


@@ -1,5 +1,5 @@
---
- date: 2018-11-24T13:43:29Z
+ date: 2019-02-09T10:42:18Z
title: "rclone cachestats"
slug: rclone_cachestats
url: /commands/rclone_cachestats/
@@ -27,285 +27,303 @@ rclone cachestats source: [flags]
### Options inherited from parent commands
```
+      --b2-disable-checksum                          Disable checksums for large (> upload cutoff) files
-      --buffer-size int                              In memory buffer size when reading files for each --transfer. (default 16M)
+      --buffer-size SizeSuffix                       In memory buffer size when reading files for each --transfer. (default 16M)
-  -c, --checksum                                     Skip based on checksum & size, not mod-time & size
+  -c, --checksum                                     Skip based on checksum (if available) & size, not mod-time & size
+      --drive-pacer-burst int                        Number of API calls to allow without sleeping. (default 100)
+      --drive-pacer-min-sleep Duration               Minimum time to sleep between API calls. (default 100ms)
-      --dump string                                  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+      --dump DumpFlags                               List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+      --hubic-no-chunk                               Don't chunk files during streaming upload.
+      --jottacloud-upload-resume-limit SizeSuffix    Files bigger than this can be resumed if the upload fail's. (default 10M)
-      --jottacloud-user string                       User Name
+      --jottacloud-user string                       User Name:
+  -l, --links                                        Translate symlinks to/from regular files with a '.rclonelink' extension
-      --max-age duration                             Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --max-age Duration                             Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --max-size int                                 Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+      --max-size SizeSuffix                          Only transfer files smaller than this in k or suffix b|k|M|G (default off)
-      --max-transfer int                             Maximum size of data to transfer. (default off)
+      --max-transfer SizeSuffix                      Maximum size of data to transfer. (default off)
-      --min-age duration                             Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+      --min-age Duration                             Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
-      --min-size int                                 Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+      --min-size SizeSuffix                          Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-      --no-traverse                                  Obsolete - does nothing.
+      --no-traverse                                  Don't traverse destination file system on copy.
+      --qingstor-chunk-size SizeSuffix               Chunk size to use for uploading. (default 4M)
+      --qingstor-upload-concurrency int              Concurrency for multipart uploads. (default 1)
+      --qingstor-upload-cutoff SizeSuffix            Cutoff for switching to chunked upload (default 200M)
+      --s3-bucket-acl string                         Canned ACL used when creating buckets.
-      --s3-upload-concurrency int                    Concurrency for multipart uploads. (default 2)
+      --s3-upload-concurrency int                    Concurrency for multipart uploads. (default 4)
+      --s3-upload-cutoff SizeSuffix                  Cutoff for switching to chunked upload (default 200M)
-      --sftp-key-file string                         Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+      --sftp-key-file string                         Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+      --sftp-key-file-pass string                    The passphrase to decrypt the PEM-encoded private key file.
+      --sftp-key-use-agent                           When set forces the usage of the ssh-agent.
-      --stats-file-name-length int                   Max file name length in stats. 0 for no limit (default 40)
+      --stats-file-name-length int                   Max file name length in stats. 0 for no limit (default 45)
-      --streaming-upload-cutoff int                  Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+      --streaming-upload-cutoff SizeSuffix           Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+      --swift-application-credential-id string       Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+      --swift-application-credential-name string     Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+      --swift-application-credential-secret string   Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
### SEE ALSO ### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 24-Nov-2018 ###### Auto generated by spf13/cobra on 9-Feb-2019
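As a quick illustration (not part of the diff itself), here is a hedged sketch of how a couple of the S3 flags added in the v1.46 column might be combined on the command line; the remote name, bucket and paths are placeholders and the values are illustrative only:
```
# Hypothetical example: copy from an S3-backed remote while overriding
# the multipart upload settings documented in the flag list above.
# "remote:" is a placeholder remote name, not one from this repository.
rclone copy remote:bucket/path /tmp/dest \
    --s3-upload-cutoff 200M \
    --s3-upload-concurrency 4 \
    --s3-bucket-acl private
```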


@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone cat"
 slug: rclone_cat
 url: /commands/rclone_cat/
@@ -49,285 +49,303 @@ rclone cat remote:path [flags]
### Options inherited from parent commands
```
[… side-by-side diff of the full auto-generated flag list (backend flags for acd, alias, azureblob, b2, box, cache, crypt, drive, dropbox, ftp, gcs, http, hubic, jottacloud, local, mega, onedrive, opendrive, pcloud, qingstor, s3, sftp, swift, union, webdav and yandex, plus the general and rc flags): the left column is the 24-Nov-2018 (v1.45) list, the right column the 9-Feb-2019 (v1.46) list.
Flags added in the newer column: --b2-disable-checksum, --drive-pacer-burst, --drive-pacer-min-sleep, --hubic-no-chunk, --jottacloud-upload-resume-limit, -l/--links, --qingstor-chunk-size, --qingstor-upload-concurrency, --qingstor-upload-cutoff, --s3-bucket-acl, --s3-upload-cutoff, --sftp-key-file-pass, --sftp-key-use-agent, --swift-application-credential-id, --swift-application-credential-name, --swift-application-credential-secret, --swift-no-chunk, --use-cookies and --use-mmap.
Entries changed: -c/--checksum now reads "Skip based on checksum (if available) & size"; --buffer-size, --max-size, --max-transfer, --min-size and --streaming-upload-cutoff change type from int to SizeSuffix; --max-age and --min-age change from duration to Duration; --dump changes from string to DumpFlags; --no-traverse changes from "Obsolete - does nothing" to "Don't traverse destination file system on copy"; --sftp-key-file now mentions key-use-agent; the --s3-upload-concurrency default rises from 2 to 4; the --stats-file-name-length default rises from 40 to 45; and the default --user-agent changes from rclone/v1.45 to rclone/v1.46. …]
```
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019 (was 24-Nov-2018)
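For context, a minimal hedged example of invoking the command documented on this page with a few of the general flags from the inherited-options list above; the remote name and path are placeholders:
```
# Hypothetical example: stream a remote file to stdout while limiting
# the transaction rate and overriding the user agent (flags taken from
# the inherited-options list above; values are illustrative only).
rclone cat remote:logs/app.log \
    --tpslimit 10 \
    --user-agent "my-agent/1.0" \
    --log-level INFO
```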


@@ -1,5 +1,5 @@
 ---
-date: 2018-11-24T13:43:29Z
+date: 2019-02-09T10:42:18Z
 title: "rclone check"
 slug: rclone_check
 url: /commands/rclone_check/
@@ -43,285 +43,303 @@ rclone check source:path dest:path [flags]
### Options inherited from parent commands
```
[… the side-by-side flag-list diff continues here for rclone check; it is identical in content to the one summarised above for rclone cat, with the left column from the 24-Nov-2018 (v1.45) build and the right column from the 9-Feb-2019 (v1.46) build …]
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
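The listing above is the full set of global flags inherited from the root command; in practice only a few are combined on any one invocation. As a hypothetical illustration (not part of the generated reference), a bandwidth-limited sync that keeps replaced files might look like the following, using only flags that appear in the listing; `remote:` and the paths are placeholders:
```
# Hypothetical example - adjust the remote name and paths for your setup.
# --transfers/--checkers control parallelism, --bwlimit caps bandwidth,
# --backup-dir keeps files that would be overwritten or deleted,
# --log-level raises verbosity and --dry-run previews without changing anything.
rclone sync /home/user/photos remote:photos \
    --transfers 8 \
    --checkers 16 \
    --bwlimit 10M \
    --backup-dir remote:photos-backup \
    --log-level INFO \
    --dry-run
```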
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
@@ -28,285 +28,303 @@ rclone cleanup remote:path [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.,
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.,
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
--dropbox-impersonate string Impersonate this user when using a business account.
-n, --dry-run Do a trial run with no permanent changes
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP bodies - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--hubic-no-chunk Don't chunk files during streaming upload.
--ignore-case Ignore case in filters (case insensitive)
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--jottacloud-user string User Name:
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
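Most of the global flags listed above can be combined on a single rclone invocation. The following is a minimal, hypothetical sketch (the local path and the `remote:` name are placeholders, not taken from this page) showing how a few of them fit together:

```
# Hypothetical sync: recursive listing, 8 parallel transfers, progress output,
# and only files modified within the last 30 days.
rclone sync /local/photos remote:photos --fast-list --transfers 8 --max-age 30d -P
```

Here `--fast-list` trades extra memory for fewer listing transactions, `--transfers 8` raises the number of parallel file transfers from the default of 4, `--max-age 30d` restricts the transfer to files younger than 30 days, and `-P` (`--progress`) shows progress during the transfer.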
### SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
###### Auto generated by spf13/cobra on 9-Feb-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
@@ -28,281 +28,299 @@ rclone config [flags]
### Options inherited from parent commands
```
      --acd-auth-url string  Auth server URL.
      --acd-client-id string  Amazon Application Client ID.
      --acd-client-secret string  Amazon Application Client Secret.
      --acd-templink-threshold SizeSuffix  Files >= this size will be downloaded via their tempLink. (default 9G)
      --acd-token-url string  Token server url.
      --acd-upload-wait-per-gb Duration  Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
      --alias-remote string  Remote or path to alias.
      --ask-password  Allow prompt for password for encrypted configuration. (default true)
      --auto-confirm  If enabled, do not request console confirmation.
      --azureblob-access-tier string  Access tier of blob: hot, cool or archive.
      --azureblob-account string  Storage Account Name (leave blank to use connection string or SAS URL)
      --azureblob-chunk-size SizeSuffix  Upload chunk size (<= 100MB). (default 4M)
      --azureblob-endpoint string  Endpoint for the service
      --azureblob-key string  Storage Account Key (leave blank to use connection string or SAS URL)
      --azureblob-list-chunk int  Size of blob list. (default 5000)
      --azureblob-sas-url string  SAS URL for container level access only
      --azureblob-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (<= 256MB). (default 256M)
      --b2-account string  Account ID or Application Key ID
      --b2-chunk-size SizeSuffix  Upload chunk size. Must fit in memory. (default 96M)
      --b2-disable-checksum  Disable checksums for large (> upload cutoff) files
      --b2-endpoint string  Endpoint for the service.
      --b2-hard-delete  Permanently delete files on remote removal, otherwise hide files.
      --b2-key string  Application Key
      --b2-test-mode string  A flag string for X-Bz-Test-Mode header for debugging.
      --b2-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload. (default 200M)
      --b2-versions  Include old versions in directory listings.
      --backup-dir string  Make backups into hierarchy based in DIR.
      --bind string  Local address to bind to for outgoing connections, IPv4, IPv6 or name.
      --box-client-id string  Box App Client Id.
      --box-client-secret string  Box App Client Secret
      --box-commit-retries int  Max number of times to try committing a multipart file. (default 100)
      --box-upload-cutoff SizeSuffix  Cutoff for switching to multipart upload (>= 50MB). (default 50M)
      --buffer-size SizeSuffix  In memory buffer size when reading files for each --transfer. (default 16M)
      --bwlimit BwTimetable  Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
      --cache-chunk-clean-interval Duration  How often should the cache perform cleanups of the chunk storage. (default 1m0s)
      --cache-chunk-no-memory  Disable the in-memory cache for storing chunks during streaming.
      --cache-chunk-path string  Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
      --cache-chunk-size SizeSuffix  The size of a chunk (partial file data). (default 5M)
      --cache-chunk-total-size SizeSuffix  The total size that the chunks can take up on the local disk. (default 10G)
      --cache-db-path string  Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
      --cache-db-purge  Clear all the cached data for this remote on start.
      --cache-db-wait-time Duration  How long to wait for the DB to be available - 0 is unlimited (default 1s)
      --cache-dir string  Directory rclone will use for caching. (default "$HOME/.cache/rclone")
      --cache-info-age Duration  How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
      --cache-plex-insecure string  Skip all certificate verifications when connecting to the Plex server
      --cache-plex-password string  The password of the Plex user
      --cache-plex-url string  The URL of the Plex server
      --cache-plex-username string  The username of the Plex user
      --cache-read-retries int  How many times to retry a read from a cache storage. (default 10)
      --cache-remote string  Remote to cache.
      --cache-rps int  Limits the number of requests per second to the source FS (-1 to disable) (default -1)
      --cache-tmp-upload-path string  Directory to keep temporary files until they are uploaded.
      --cache-tmp-wait-time Duration  How long should files be stored in local cache before being uploaded (default 15s)
      --cache-workers int  How many workers should run in parallel to download chunks. (default 4)
      --cache-writes  Cache file data on writes through the FS
      --checkers int  Number of checkers to run in parallel. (default 8)
  -c, --checksum  Skip based on checksum (if available) & size, not mod-time & size
      --config string  Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration  Connect timeout (default 1m0s)
  -L, --copy-links  Follow symlinks and copy the pointed to item.
      --cpuprofile string  Write cpu profile to file
      --crypt-directory-name-encryption  Option to either encrypt directory names or leave them intact. (default true)
      --crypt-filename-encryption string  How to encrypt the filenames. (default "standard")
      --crypt-password string  Password or pass phrase for encryption.
      --crypt-password2 string  Password or pass phrase for salt. Optional but recommended.
      --crypt-remote string  Remote to encrypt/decrypt.
      --crypt-show-mapping  For all files listed show how the names encrypt.
      --delete-after  When synchronizing, delete files on destination after transferring (default)
      --delete-before  When synchronizing, delete files on destination before transferring
      --delete-during  When synchronizing, delete files during transfer
      --delete-excluded  Delete files on dest excluded from sync
      --disable string  Disable a comma separated list of features. Use help to see a list.
      --drive-acknowledge-abuse  Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
      --drive-allow-import-name-change  Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
      --drive-alternate-export  Use alternate export URLs for google documents export.,
      --drive-auth-owner-only  Only consider files owned by the authenticated user.
      --drive-chunk-size SizeSuffix  Upload chunk size. Must a power of 2 >= 256k. (default 8M)
      --drive-client-id string  Google Application Client Id
      --drive-client-secret string  Google Application Client Secret
      --drive-export-formats string  Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-formats string  Deprecated: see export_formats
      --drive-impersonate string  Impersonate this user when using a service account.
      --drive-import-formats string  Comma separated list of preferred formats for uploading Google docs.
      --drive-keep-revision-forever  Keep new head revision of each file forever.
      --drive-list-chunk int  Size of listing chunk 100-1000. 0 to disable. (default 1000)
      --drive-pacer-burst int  Number of API calls to allow without sleeping. (default 100)
      --drive-pacer-min-sleep Duration  Minimum time to sleep between API calls. (default 100ms)
      --drive-root-folder-id string  ID of the root folder
      --drive-scope string  Scope that rclone should use when requesting access from drive.
      --drive-service-account-credentials string  Service Account Credentials JSON blob
      --drive-service-account-file string  Service Account Credentials JSON file path
      --drive-shared-with-me  Only show files that are shared with me.
      --drive-skip-gdocs  Skip google documents in all listings.
      --drive-team-drive string  ID of the Team Drive
      --drive-trashed-only  Only show files that are in the trash.
      --drive-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 8M)
      --drive-use-created-date  Use file created date instead of modified date.,
      --drive-use-trash  Send files to the trash instead of deleting permanently. (default true)
      --drive-v2-download-min-size SizeSuffix  If Object's are greater, use drive v2 API to download. (default off)
      --dropbox-chunk-size SizeSuffix  Upload chunk size. (< 150M). (default 48M)
      --dropbox-client-id string  Dropbox App Client Id
      --dropbox-client-secret string  Dropbox App Client Secret
      --dropbox-impersonate string  Impersonate this user when using a business account.
  -n, --dry-run  Do a trial run with no permanent changes
      --dump DumpFlags  List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
      --dump-bodies  Dump HTTP headers and bodies - may contain sensitive info
      --dump-headers  Dump HTTP bodies - may contain sensitive info
      --exclude stringArray  Exclude files matching pattern
      --exclude-from stringArray  Read exclude patterns from file
      --exclude-if-present string  Exclude directories if filename is present
      --fast-list  Use recursive list if available. Uses more memory but fewer transactions.
      --files-from stringArray  Read list of source-file names from file
  -f, --filter stringArray  Add a file-filtering rule
      --filter-from stringArray  Read filtering patterns from a file
      --ftp-host string  FTP host to connect to
      --ftp-pass string  FTP password
      --ftp-port string  FTP port, leave blank to use default (21)
      --ftp-user string  FTP username, leave blank for current username, $USER
      --gcs-bucket-acl string  Access Control List for new buckets.
      --gcs-client-id string  Google Application Client Id
      --gcs-client-secret string  Google Application Client Secret
      --gcs-location string  Location for the newly created buckets.
      --gcs-object-acl string  Access Control List for new objects.
      --gcs-project-number string  Project number.
      --gcs-service-account-file string  Service Account Credentials JSON file path
      --gcs-storage-class string  The storage class to use when storing objects in Google Cloud Storage.
      --http-url string  URL of http host to connect to
      --hubic-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
      --hubic-client-id string  Hubic Client Id
      --hubic-client-secret string  Hubic Client Secret
      --hubic-no-chunk  Don't chunk files during streaming upload.
      --ignore-case  Ignore case in filters (case insensitive)
      --ignore-checksum  Skip post copy check of checksums.
      --ignore-errors  delete even if there are I/O errors
      --ignore-existing  Skip all files that exist on destination
      --ignore-size  Ignore size when skipping use mod-time or checksum.
  -I, --ignore-times  Don't skip files that match size and time - transfer all files
      --immutable  Do not modify files. Fail if existing files have been modified.
      --include stringArray  Include files matching pattern
      --include-from stringArray  Read include patterns from file
      --jottacloud-hard-delete  Delete files permanently rather than putting them into the trash.
      --jottacloud-md5-memory-limit SizeSuffix  Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
      --jottacloud-mountpoint string  The mountpoint to use.
      --jottacloud-unlink  Remove existing public link to file/folder with link command rather than creating.
      --jottacloud-upload-resume-limit SizeSuffix  Files bigger than this can be resumed if the upload fail's. (default 10M)
      --jottacloud-user string  User Name:
  -l, --links  Translate symlinks to/from regular files with a '.rclonelink' extension
      --local-no-check-updated  Don't check to see if the files change during upload
      --local-no-unicode-normalization  Don't apply unicode normalization to paths and filenames (Deprecated)
      --local-nounc string  Disable UNC (long path names) conversion on Windows
      --log-file string  Log everything to this file
      --log-format string  Comma separated list of log format options (default "date,time")
      --log-level string  Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
      --low-level-retries int  Number of low level retries to do. (default 10)
      --max-age Duration  Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --max-backlog int  Maximum number of objects in sync or check backlog. (default 10000)
      --max-delete int  When synchronizing, limit the number of deletes (default -1)
      --max-depth int  If set limits the recursion depth to this. (default -1)
      --max-size SizeSuffix  Only transfer files smaller than this in k or suffix b|k|M|G (default off)
      --max-transfer SizeSuffix  Maximum size of data to transfer. (default off)
      --mega-debug  Output more debug from Mega.
      --mega-hard-delete  Delete files permanently rather than putting them into the trash.
      --mega-pass string  Password.
      --mega-user string  User name
      --memprofile string  Write memory profile to file
      --min-age Duration  Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
      --min-size SizeSuffix  Only transfer files bigger than this in k or suffix b|k|M|G (default off)
      --modify-window duration  Max time diff to be considered the same (default 1ns)
      --no-check-certificate  Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding  Don't set Accept-Encoding: gzip.
      --no-traverse  Don't traverse destination file system on copy.
      --no-update-modtime  Don't update destination mod-time if files identical.
  -x, --one-file-system  Don't cross filesystem boundaries (unix/macOS only).
      --onedrive-chunk-size SizeSuffix  Chunk size to upload files with - must be multiple of 320k. (default 10M)
      --onedrive-client-id string  Microsoft App Client Id
      --onedrive-client-secret string  Microsoft App Client Secret
      --onedrive-drive-id string  The ID of the drive to use
      --onedrive-drive-type string  The type of the drive ( personal | business | documentLibrary )
      --onedrive-expose-onenote-files  Set to make OneNote files show up in directory listings.
      --opendrive-password string  Password.
      --opendrive-username string  Username
      --pcloud-client-id string  Pcloud App Client Id
      --pcloud-client-secret string  Pcloud App Client Secret
  -P, --progress  Show progress during transfer.
      --qingstor-access-key-id string  QingStor Access Key ID
      --qingstor-chunk-size SizeSuffix  Chunk size to use for uploading. (default 4M)
      --qingstor-connection-retries int  Number of connection retries. (default 3)
      --qingstor-endpoint string  Enter a endpoint URL to connection QingStor API.
      --qingstor-env-auth  Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
      --qingstor-secret-access-key string  QingStor Secret Access Key (password)
      --qingstor-upload-concurrency int  Concurrency for multipart uploads. (default 1)
      --qingstor-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 200M)
      --qingstor-zone string  Zone to connect to.
  -q, --quiet  Print as little stuff as possible
      --rc  Enable the remote control server.
      --rc-addr string  IPaddress:Port or :Port to bind server to. (default "localhost:5572")
      --rc-cert string  SSL PEM key (concatenation of certificate and CA certificate)
      --rc-client-ca string  Client certificate authority to verify clients with
      --rc-files string  Path to local files to serve on the HTTP server.
      --rc-htpasswd string  htpasswd file - if not provided no authentication is done
      --rc-key string  SSL PEM Private key
      --rc-max-header-bytes int  Maximum size of request header (default 4096)
      --rc-no-auth  Don't require auth for certain methods.
      --rc-pass string  Password for authentication.
      --rc-realm string  realm for authentication (default "rclone")
      --rc-serve  Enable the serving of remote objects.
      --rc-server-read-timeout duration  Timeout for server reading data (default 1h0m0s)
      --rc-server-write-timeout duration  Timeout for server writing data (default 1h0m0s)
      --rc-user string  User name for authentication.
      --retries int  Retry operations this many times if they fail (default 3)
      --retries-sleep duration  Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
      --s3-access-key-id string  AWS Access Key ID.
      --s3-acl string  Canned ACL used when creating buckets and storing or copying objects.
      --s3-bucket-acl string  Canned ACL used when creating buckets.
      --s3-chunk-size SizeSuffix  Chunk size to use for uploading. (default 5M)
      --s3-disable-checksum  Don't store MD5 checksum with object metadata
      --s3-endpoint string  Endpoint for S3 API.
      --s3-env-auth  Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
      --s3-force-path-style  If true use path style access if false use virtual hosted style. (default true)
      --s3-location-constraint string  Location constraint - must be set to match the Region.
      --s3-provider string  Choose your S3 provider.
      --s3-region string  Region to connect to.
      --s3-secret-access-key string  AWS Secret Access Key (password)
      --s3-server-side-encryption string  The server-side encryption algorithm used when storing this object in S3.
      --s3-session-token string  An AWS session token
      --s3-sse-kms-key-id string  If using KMS ID you must provide the ARN of Key.
      --s3-storage-class string  The storage class to use when storing new objects in S3.
      --s3-upload-concurrency int  Concurrency for multipart uploads. (default 4)
      --s3-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload (default 200M)
      --s3-v2-auth  If true use v2 authentication.
      --sftp-ask-password  Allow asking for SFTP password when needed.
      --sftp-disable-hashcheck  Disable the execution of SSH commands to determine if remote file hashing is available.
      --sftp-host string  SSH host to connect to
      --sftp-key-file string  Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
      --sftp-key-file-pass string  The passphrase to decrypt the PEM-encoded private key file.
      --sftp-key-use-agent  When set forces the usage of the ssh-agent.
      --sftp-pass string  SSH password, leave blank to use ssh-agent.
      --sftp-path-override string  Override path used by SSH connection.
      --sftp-port string  SSH port, leave blank to use default (22)
      --sftp-set-modtime  Set the modified time on the remote if set. (default true)
      --sftp-use-insecure-cipher  Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
      --sftp-user string  SSH username, leave blank for current username, ncw
      --size-only  Skip based on size only, not mod-time or checksum
      --skip-links  Don't warn about skipped symlinks.
      --stats duration  Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
      --stats-file-name-length int  Max file name length in stats. 0 for no limit (default 45)
      --stats-log-level string  Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
      --stats-one-line  Make the stats fit on one line.
      --stats-unit string  Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
      --streaming-upload-cutoff SizeSuffix  Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
      --suffix string  Suffix for use with --backup-dir.
      --swift-application-credential-id string  Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
      --swift-application-credential-name string  Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
      --swift-application-credential-secret string  Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
      --swift-auth string  Authentication URL for server (OS_AUTH_URL).
      --swift-auth-token string  Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
      --swift-auth-version int  AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
      --swift-chunk-size SizeSuffix  Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
@@ -318,4 +336,4 @@ rclone config [flags]
* [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.
* [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote.
###### Auto generated by spf13/cobra on 9-Feb-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone config create"
slug: rclone_config_create
url: /commands/rclone_config_create/
@@ -19,6 +19,15 @@ you would do:
rclone config create myremote swift env_auth true
Note that if the config process would normally ask a question, the
default is taken. Each time that happens rclone will print a message
saying how to affect the value taken.
So, for example, if you wanted to configure a Google Drive remote but
using remote authorization, you would do this:
rclone config create mydrive drive config_is_local false
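Several key/value pairs can be supplied in one go in the same way. As a sketch only, the remote name and credential values below are placeholders, and the key names (provider, env_auth, access_key_id, secret_access_key) simply mirror the s3 options listed further down this page:
rclone config create mys3 s3 provider AWS env_auth false access_key_id XXX secret_access_key YYY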
```
rclone config create <name> <type> [<key> <value>]* [flags]
@@ -33,285 +42,303 @@ rclone config create <name> <type> [<key> <value>]* [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password string Password or pass phrase for encryption.
--crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-remote string Remote to encrypt/decrypt.
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-alternate-export Use alternate export URLs for google documents export.,
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-secret string Google Application Client Secret
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: see export_formats
--drive-impersonate string Impersonate this user when using a service account.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-keep-revision-forever Keep new head revision of each file forever.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-root-folder-id string ID of the root folder
--drive-scope string Scope that rclone should use when requesting access from drive.
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me.
--drive-skip-gdocs Skip google documents in all listings.
--drive-team-drive string ID of the Team Drive
--drive-trashed-only Only show files that are in the trash.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--drive-use-created-date Use file created date instead of modified date.,
--drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
--dropbox-client-id string Dropbox App Client Id
--dropbox-client-secret string Dropbox App Client Secret
--dropbox-impersonate string Impersonate this user when using a business account.
-n, --dry-run Do a trial run with no permanent changes
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP bodies - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--exclude-if-present string Exclude directories if filename is present
--fast-list Use recursive list if available. Uses more memory but fewer transactions.
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ftp-host string FTP host to connect to
--ftp-pass string FTP password
--ftp-port string FTP port, leave blank to use default (21)
--ftp-user string FTP username, leave blank for current username, $USER
--gcs-bucket-acl string Access Control List for new buckets.
--gcs-client-id string Google Application Client Id
--gcs-client-secret string Google Application Client Secret
--gcs-location string Location for the newly created buckets.
--gcs-object-acl string Access Control List for new objects.
--gcs-project-number string Project number.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--hubic-no-chunk Don't chunk files during streaming upload.
--ignore-case Ignore case in filters (case insensitive)
--ignore-checksum Skip post copy check of checksums.
--ignore-errors delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files. Fail if existing files have been modified.
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--jottacloud-mountpoint string The mountpoint to use.
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--jottacloud-user string User Name:
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-no-check-updated Don't check to see if the files change during upload
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--local-nounc string Disable UNC (long path names) conversion on Windows
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--mega-debug Output more debug from Mega.
--mega-hard-delete Delete files permanently rather than putting them into the trash.
--mega-pass string Password.
--mega-user string User name
--memprofile string Write memory profile to file
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--onedrive-client-id string Microsoft App Client Id
--onedrive-client-secret string Microsoft App Client Secret
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--opendrive-password string Password.
--opendrive-username string Username
--pcloud-client-id string Pcloud App Client Id
--pcloud-client-secret string Pcloud App Client Secret
-P, --progress Show progress during transfer.
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--qingstor-connection-retries int Number of connection retries. (default 3)
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--qingstor-secret-access-key string QingStor Secret Access Key (password)
--qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--qingstor-zone string Zone to connect to.
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server.
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-files string Path to local files to serve on the HTTP server.
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-no-auth Don't require auth for certain methods.
--rc-pass string Password for authentication.
--rc-realm string realm for authentication (default "rclone")
--rc-serve Enable the serving of remote objects.
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--rc-user string User name for authentication.
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-access-key-id string AWS Access Key ID.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-bucket-acl string Canned ACL used when creating buckets.
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-endpoint string Endpoint for S3 API.
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-location-constraint string Location constraint - must be set to match the Region.
--s3-provider string Choose your S3 provider.
--s3-region string Region to connect to.
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--s3-session-token string An AWS session token
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--s3-storage-class string The storage class to use when storing new objects in S3.
--s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--s3-v2-auth If true use v2 authentication.
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--sftp-key-use-agent When set forces the usage of the ssh-agent.
--sftp-pass string SSH password, leave blank to use ssh-agent.
--sftp-path-override string Override path used by SSH connection.
--sftp-port string SSH port, leave blank to use default (22)
--sftp-set-modtime Set the modified time on the remote if set. (default true)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--sftp-user string SSH username, leave blank for current username, ncw
--size-only Skip based on size only, not mod-time or checksum
--skip-links Don't warn about skipped symlinks.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-one-line Make the stats fit on one line.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--suffix string Suffix for use with --backup-dir.
--swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-auth string Authentication URL for server (OS_AUTH_URL).
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--swift-key string API key or password (OS_PASSWORD).
--swift-no-chunk Don't chunk files during streaming upload.
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME).
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
```
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019


@@ -1,5 +1,5 @@
---
date: 2019-02-09T10:42:18Z
title: "rclone config delete"
slug: rclone_config_delete
url: /commands/rclone_config_delete/
@@ -25,285 +25,303 @@ rclone config delete <name> [flags]
### Options inherited from parent commands
```
--acd-auth-url string Auth server URL.
--acd-client-id string Amazon Application Client ID.
--acd-client-secret string Amazon Application Client Secret.
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-token-url string Token server url.
--acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--alias-remote string Remote or path to alias.
--ask-password Allow prompt for password for encrypted configuration. (default true)
--auto-confirm If enabled, do not request console confirmation.
--azureblob-access-tier string Access tier of blob: hot, cool or archive.
--azureblob-account string Storage Account Name (leave blank to use connection string or SAS URL)
--azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key (leave blank to use connection string or SAS URL)
--azureblob-list-chunk int Size of blob list. (default 5000)
--azureblob-sas-url string SAS URL for container level access only
--azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-endpoint string Endpoint for the service.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
--box-client-id string Box App Client Id.
--box-client-secret string Box App Client Secret
--box-commit-retries int Max number of times to try committing a multipart file. (default 100)
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
--cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
--cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
--cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
--cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
--cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start.
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
--cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
--cache-plex-password string The password of the Plex user
--cache-plex-url string The URL of the Plex server
--cache-plex-username string The username of the Plex user
--cache-read-retries int How many times to retry a read from a cache storage. (default 10)
--cache-remote string Remote to cache.
--cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
--cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
--cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
--cache-workers int How many workers should run in parallel to download chunks. (default 4)
--cache-writes Cache file data on writes through the FS
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--contimeout duration Connect timeout (default 1m0s) --config string Config file. (default "/home/ncw/.rclone.conf")
-L, --copy-links Follow symlinks and copy the pointed to item. --contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file -L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true) --cpuprofile string Write cpu profile to file
--crypt-filename-encryption string How to encrypt the filenames. (default "standard") --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-password string Password or pass phrase for encryption. --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. --crypt-password string Password or pass phrase for encryption.
--crypt-remote string Remote to encrypt/decrypt. --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
--crypt-show-mapping For all files listed show how the names encrypt. --crypt-remote string Remote to encrypt/decrypt.
--delete-after When synchronizing, delete files on destination after transferring (default) --crypt-show-mapping For all files listed show how the names encrypt.
--delete-before When synchronizing, delete files on destination before transferring --delete-after When synchronizing, delete files on destination after transferring (default)
--delete-during When synchronizing, delete files during transfer --delete-before When synchronizing, delete files on destination before transferring
--delete-excluded Delete files on dest excluded from sync --delete-during When synchronizing, delete files during transfer
--disable string Disable a comma separated list of features. Use help to see a list. --delete-excluded Delete files on dest excluded from sync
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded. --disable string Disable a comma separated list of features. Use help to see a list.
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
--drive-alternate-export Use alternate export URLs for google documents export., --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
--drive-auth-owner-only Only consider files owned by the authenticated user. --drive-alternate-export Use alternate export URLs for google documents export.,
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M) --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-client-id string Google Application Client Id --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-secret string Google Application Client Secret --drive-client-id string Google Application Client Id
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg") --drive-client-secret string Google Application Client Secret
--drive-formats string Deprecated: see export_formats --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-impersonate string Impersonate this user when using a service account. --drive-formats string Deprecated: see export_formats
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs. --drive-impersonate string Impersonate this user when using a service account.
--drive-keep-revision-forever Keep new head revision of each file forever. --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000) --drive-keep-revision-forever Keep new head revision of each file forever.
--drive-root-folder-id string ID of the root folder --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-scope string Scope that rclone should use when requesting access from drive. --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
--drive-service-account-credentials string Service Account Credentials JSON blob --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
--drive-service-account-file string Service Account Credentials JSON file path --drive-root-folder-id string ID of the root folder
--drive-shared-with-me Only show files that are shared with me. --drive-scope string Scope that rclone should use when requesting access from drive.
--drive-skip-gdocs Skip google documents in all listings. --drive-service-account-credentials string Service Account Credentials JSON blob
--drive-team-drive string ID of the Team Drive --drive-service-account-file string Service Account Credentials JSON file path
--drive-trashed-only Only show files that are in the trash. --drive-shared-with-me Only show files that are shared with me.
--drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M) --drive-skip-gdocs Skip google documents in all listings.
--drive-use-created-date Use file created date instead of modified date., --drive-team-drive string ID of the Team Drive
--drive-use-trash Send files to the trash instead of deleting permanently. (default true) --drive-trashed-only Only show files that are in the trash.
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off) --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
--dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M) --drive-use-created-date Use file created date instead of modified date.,
--dropbox-client-id string Dropbox App Client Id --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
--dropbox-client-secret string Dropbox App Client Secret --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
--dropbox-impersonate string Impersonate this user when using a business account. --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
-n, --dry-run Do a trial run with no permanent changes --dropbox-client-id string Dropbox App Client Id
--dump string List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles --dropbox-client-secret string Dropbox App Client Secret
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info --dropbox-impersonate string Impersonate this user when using a business account.
--dump-headers Dump HTTP bodies - may contain sensitive info -n, --dry-run Do a trial run with no permanent changes
--exclude stringArray Exclude files matching pattern --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--exclude-from stringArray Read exclude patterns from file --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--exclude-if-present string Exclude directories if filename is present --dump-headers Dump HTTP bodies - may contain sensitive info
--fast-list Use recursive list if available. Uses more memory but fewer transactions. --exclude stringArray Exclude files matching pattern
--files-from stringArray Read list of source-file names from file --exclude-from stringArray Read exclude patterns from file
-f, --filter stringArray Add a file-filtering rule --exclude-if-present string Exclude directories if filename is present
--filter-from stringArray Read filtering patterns from a file --fast-list Use recursive list if available. Uses more memory but fewer transactions.
--ftp-host string FTP host to connect to --files-from stringArray Read list of source-file names from file
--ftp-pass string FTP password -f, --filter stringArray Add a file-filtering rule
--ftp-port string FTP port, leave blank to use default (21) --filter-from stringArray Read filtering patterns from a file
--ftp-user string FTP username, leave blank for current username, $USER --ftp-host string FTP host to connect to
--gcs-bucket-acl string Access Control List for new buckets. --ftp-pass string FTP password
--gcs-client-id string Google Application Client Id --ftp-port string FTP port, leave blank to use default (21)
--gcs-client-secret string Google Application Client Secret --ftp-user string FTP username, leave blank for current username, $USER
--gcs-location string Location for the newly created buckets. --gcs-bucket-acl string Access Control List for new buckets.
--gcs-object-acl string Access Control List for new objects. --gcs-client-id string Google Application Client Id
--gcs-project-number string Project number. --gcs-client-secret string Google Application Client Secret
--gcs-service-account-file string Service Account Credentials JSON file path --gcs-location string Location for the newly created buckets.
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage. --gcs-object-acl string Access Control List for new objects.
--http-url string URL of http host to connect to --gcs-project-number string Project number.
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --gcs-service-account-file string Service Account Credentials JSON file path
--hubic-client-id string Hubic Client Id --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--hubic-client-secret string Hubic Client Secret --http-url string URL of http host to connect to
--ignore-case Ignore case in filters (case insensitive) --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--ignore-checksum Skip post copy check of checksums. --hubic-client-id string Hubic Client Id
--ignore-errors delete even if there are I/O errors --hubic-client-secret string Hubic Client Secret
--ignore-existing Skip all files that exist on destination --hubic-no-chunk Don't chunk files during streaming upload.
--ignore-size Ignore size when skipping use mod-time or checksum. --ignore-case Ignore case in filters (case insensitive)
-I, --ignore-times Don't skip files that match size and time - transfer all files --ignore-checksum Skip post copy check of checksums.
--immutable Do not modify files. Fail if existing files have been modified. --ignore-errors delete even if there are I/O errors
--include stringArray Include files matching pattern --ignore-existing Skip all files that exist on destination
--include-from stringArray Read include patterns from file --ignore-size Ignore size when skipping use mod-time or checksum.
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash. -I, --ignore-times Don't skip files that match size and time - transfer all files
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M) --immutable Do not modify files. Fail if existing files have been modified.
--jottacloud-mountpoint string The mountpoint to use. --include stringArray Include files matching pattern
--jottacloud-pass string Password. --include-from stringArray Read include patterns from file
--jottacloud-unlink Remove existing public link to file/folder with link command rather than creating. --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-user string User Name --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
--local-no-check-updated Don't check to see if the files change during upload --jottacloud-mountpoint string The mountpoint to use.
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated) --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
--local-nounc string Disable UNC (long path names) conversion on Windows --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
--log-file string Log everything to this file --jottacloud-user string User Name:
--log-format string Comma separated list of log format options (default "date,time") -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE") --local-no-check-updated Don't check to see if the files change during upload
--low-level-retries int Number of low level retries to do. (default 10) --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
--max-age duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) --local-nounc string Disable UNC (long path names) conversion on Windows
--max-backlog int Maximum number of objects in sync or check backlog. (default 10000) --log-file string Log everything to this file
--max-delete int When synchronizing, limit the number of deletes (default -1) --log-format string Comma separated list of log format options (default "date,time")
--max-depth int If set limits the recursion depth to this. (default -1) --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--max-size int Only transfer files smaller than this in k or suffix b|k|M|G (default off) --low-level-retries int Number of low level retries to do. (default 10)
--max-transfer int Maximum size of data to transfer. (default off) --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--mega-debug Output more debug from Mega. --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
--mega-hard-delete Delete files permanently rather than putting them into the trash. --max-delete int When synchronizing, limit the number of deletes (default -1)
--mega-pass string Password. --max-depth int If set limits the recursion depth to this. (default -1)
--mega-user string User name --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
--min-age duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --mega-debug Output more debug from Mega.
--min-size int Only transfer files bigger than this in k or suffix b|k|M|G (default off) --mega-hard-delete Delete files permanently rather than putting them into the trash.
--modify-window duration Max time diff to be considered the same (default 1ns) --mega-pass string Password.
--no-check-certificate Do not verify the server SSL certificate. Insecure. --mega-user string User name
--no-gzip-encoding Don't set Accept-Encoding: gzip. --memprofile string Write memory profile to file
--no-traverse Obsolete - does nothing. --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--no-update-modtime Don't update destination mod-time if files identical. --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only). --modify-window duration Max time diff to be considered the same (default 1ns)
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M) --no-check-certificate Do not verify the server SSL certificate. Insecure.
--onedrive-client-id string Microsoft App Client Id --no-gzip-encoding Don't set Accept-Encoding: gzip.
--onedrive-client-secret string Microsoft App Client Secret --no-traverse Don't traverse destination file system on copy.
--onedrive-drive-id string The ID of the drive to use --no-update-modtime Don't update destination mod-time if files identical.
--onedrive-drive-type string The type of the drive ( personal | business | documentLibrary ) -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings. --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
--opendrive-password string Password. --onedrive-client-id string Microsoft App Client Id
--opendrive-username string Username --onedrive-client-secret string Microsoft App Client Secret
--pcloud-client-id string Pcloud App Client Id --onedrive-drive-id string The ID of the drive to use
--pcloud-client-secret string Pcloud App Client Secret --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
-P, --progress Show progress during transfer. --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
--qingstor-access-key-id string QingStor Access Key ID --opendrive-password string Password.
--qingstor-connection-retries int Number of connection retries. (default 3) --opendrive-username string Username
--qingstor-endpoint string Enter a endpoint URL to connection QingStor API. --pcloud-client-id string Pcloud App Client Id
--qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. --pcloud-client-secret string Pcloud App Client Secret
--qingstor-secret-access-key string QingStor Secret Access Key (password) -P, --progress Show progress during transfer.
--qingstor-zone string Zone to connect to. --qingstor-access-key-id string QingStor Access Key ID
-q, --quiet Print as little stuff as possible --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
--rc Enable the remote control server. --qingstor-connection-retries int Number of connection retries. (default 3)
--rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572") --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate) --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
--rc-client-ca string Client certificate authority to verify clients with --qingstor-secret-access-key string QingStor Secret Access Key (password)
--rc-files string Path to local files to serve on the HTTP server. --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
--rc-htpasswd string htpasswd file - if not provided no authentication is done --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--rc-key string SSL PEM Private key --qingstor-zone string Zone to connect to.
--rc-max-header-bytes int Maximum size of request header (default 4096) -q, --quiet Print as little stuff as possible
--rc-no-auth Don't require auth for certain methods. --rc Enable the remote control server.
--rc-pass string Password for authentication. --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
--rc-realm string realm for authentication (default "rclone") --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-serve Enable the serving of remote objects. --rc-client-ca string Client certificate authority to verify clients with
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s) --rc-files string Path to local files to serve on the HTTP server.
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s) --rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-user string User name for authentication. --rc-key string SSL PEM Private key
--retries int Retry operations this many times if they fail (default 3) --rc-max-header-bytes int Maximum size of request header (default 4096)
--retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable) --rc-no-auth Don't require auth for certain methods.
--s3-access-key-id string AWS Access Key ID. --rc-pass string Password for authentication.
--s3-acl string Canned ACL used when creating buckets and storing or copying objects. --rc-realm string realm for authentication (default "rclone")
--s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M) --rc-serve Enable the serving of remote objects.
--s3-disable-checksum Don't store MD5 checksum with object metadata --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--s3-endpoint string Endpoint for S3 API. --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). --rc-user string User name for authentication.
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true) --retries int Retry operations this many times if they fail (default 3)
--s3-location-constraint string Location constraint - must be set to match the Region. --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
--s3-provider string Choose your S3 provider. --s3-access-key-id string AWS Access Key ID.
--s3-region string Region to connect to. --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
--s3-secret-access-key string AWS Secret Access Key (password) --s3-bucket-acl string Canned ACL used when creating buckets.
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3. --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
--s3-session-token string An AWS session token --s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key. --s3-endpoint string Endpoint for S3 API.
--s3-storage-class string The storage class to use when storing new objects in S3. --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
--s3-upload-concurrency int Concurrency for multipart uploads. (default 2) --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
--s3-v2-auth If true use v2 authentication. --s3-location-constraint string Location constraint - must be set to match the Region.
--sftp-ask-password Allow asking for SFTP password when needed. --s3-provider string Choose your S3 provider.
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available. --s3-region string Region to connect to.
--sftp-host string SSH host to connect to --s3-secret-access-key string AWS Secret Access Key (password)
--sftp-key-file string Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent. --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
--sftp-pass string SSH password, leave blank to use ssh-agent. --s3-session-token string An AWS session token
--sftp-path-override string Override path used by SSH connection. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
--sftp-port string SSH port, leave blank to use default (22) --s3-storage-class string The storage class to use when storing new objects in S3.
--sftp-set-modtime Set the modified time on the remote if set. (default true) --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
--sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker. --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
--sftp-user string SSH username, leave blank for current username, ncw --s3-v2-auth If true use v2 authentication.
--size-only Skip based on size only, not mod-time or checksum --sftp-ask-password Allow asking for SFTP password when needed.
--skip-links Don't warn about skipped symlinks. --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s) --sftp-host string SSH host to connect to
--stats-file-name-length int Max file name length in stats. 0 for no limit (default 40) --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO") --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
--stats-one-line Make the stats fit on one line. --sftp-key-use-agent When set forces the usage of the ssh-agent.
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes") --sftp-pass string SSH password, leave blank to use ssh-agent.
--streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k) --sftp-path-override string Override path used by SSH connection.
--suffix string Suffix for use with --backup-dir. --sftp-port string SSH port, leave blank to use default (22)
--swift-auth string Authentication URL for server (OS_AUTH_URL). --sftp-set-modtime Set the modified time on the remote if set. (default true)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher. This cipher is insecure and may allow plaintext data to be recovered by an attacker.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --sftp-user string SSH username, leave blank for current username, ncw
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G) --size-only Skip based on size only, not mod-time or checksum
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --skip-links Don't warn about skipped symlinks.
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public") --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form. --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
--swift-key string API key or password (OS_PASSWORD). --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--swift-region string Region name - optional (OS_REGION_NAME) --stats-one-line Make the stats fit on one line.
--swift-storage-policy string The storage policy to use when creating a new container --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL) --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) --suffix string Suffix for use with --backup-dir.
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
--swift-user string User name to log in (OS_USERNAME). --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). --swift-auth string Authentication URL for server (OS_AUTH_URL).
--syslog Use Syslog for logging --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON") --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--timeout duration IO idle timeout (default 5m0s) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--tpslimit float Limit HTTP transactions per second to this. --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--track-renames When synchronizing, track file renames and do a server side move if possible --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
--transfers int Number of file transfers to run in parallel. (default 4) --swift-key string API key or password (OS_PASSWORD).
--union-remotes string List of space separated remotes. --swift-no-chunk Don't chunk files during streaming upload.
-u, --update Skip files that are newer on the destination. --swift-region string Region name - optional (OS_REGION_NAME)
--use-server-modtime Use server modified time instead of object metadata --swift-storage-policy string The storage policy to use when creating a new container
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.45") --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
-v, --verbose count Print lots more stuff (repeat for more) --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon) --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--webdav-pass string Password. --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--webdav-url string URL of http host to connect to --swift-user string User name to log in (OS_USERNAME).
--webdav-user string User name --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
--webdav-vendor string Name of the Webdav site/service/software you are using --syslog Use Syslog for logging
--yandex-client-id string Yandex Client Id --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--yandex-client-secret string Yandex Client Secret --timeout duration IO idle timeout (default 5m0s)
--yandex-unlink Remove existing public link to file/folder with link command rather than creating. --tpslimit float Limit HTTP transactions per second to this.
--tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
--union-remotes string List of space separated remotes.
-u, --update Skip files that are newer on the destination.
--use-cookies Enable session cookiejar.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.46")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
--webdav-url string URL of http host to connect to
--webdav-user string User name
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-client-id string Yandex Client Id
--yandex-client-secret string Yandex Client Secret
--yandex-unlink Remove existing public link to file/folder with link command rather than creating.
``` ```
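As a rough illustration of how these global flags combine on the command line, the hypothetical invocation below syncs a local directory to a remote while keeping replaced or deleted files in a backup hierarchy. The `remote:` name and the paths are placeholders and are not part of the generated reference above; only flags from the list are used.

```
# Hypothetical example - "remote:" and the paths are placeholders.
# Sync a local directory to a remote, keeping files that would be
# overwritten or deleted under a separate backup directory.
rclone sync /home/user/documents remote:documents \
    --backup-dir remote:documents-backup \
    --transfers 8 \
    --checkers 16 \
    --bwlimit 10M \
    --fast-list \
    --log-level INFO --log-file /tmp/rclone-sync.log
```

Backend-specific flags (for example the `--drive-*` or `--s3-*` options) only take effect when the remote being used is of that type.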
### SEE ALSO
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 9-Feb-2019
