mirror of https://github.com/rclone/rclone.git synced 2025-12-26 05:03:43 +00:00

Compare commits


77 Commits

Author SHA1 Message Date
Henning Surmeier
ecb3f04370 webdav/sharepoint: renew cookies after 12hrs 2018-09-17 22:49:02 +02:00
Nick Craig-Wood
a25875170b webdav: Add another time format to fix #2574 2018-09-15 21:46:56 +01:00
Alex Chen
a288646419 onedrive: fix special chars in filenames sometimes not replaced 2018-09-14 21:38:55 +08:00
Nick Craig-Wood
b3d8bc61ac build: add example scripts for bisecting rclone and go 2018-09-14 11:17:51 +01:00
sandeepkru
7accd30da8 cmd and fs: Added new command settier which performs storage tier changes on
supported remotes
2018-09-12 21:09:08 +01:00
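For context, a hedged usage sketch of the new command (tier names follow the azureblob diff further down; the remote and container names are hypothetical):

rclone settier archive azureblob:container/path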
sandeepkru
9594fd0a0c fstests: Added integration tests on SetTier operation 2018-09-12 21:09:08 +01:00
sandeepkru
aac84c554a azureblob: Implemented settier command support on the azureblob remote, allowing the tier of
objects to be changed. Added an internal test to check that feature flags are set correctly
2018-09-12 21:09:08 +01:00
sandeepkru
5716a58413 fs: Added new optional interfaces SetTierer, GetTierer and ListTierer, which are used to
perform object tier changes on supported remotes
2018-09-12 21:09:08 +01:00
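A sketch of these optional interfaces, with signatures inferred from the azureblob implementation further down (illustrative, not the exact fs package source):

// SetTierer is an optional interface for objects whose storage tier can be changed.
type SetTierer interface {
	// SetTier performs changing object tier.
	SetTier(tier string) error
}

// GetTierer is an optional interface for objects that can report their storage tier.
type GetTierer interface {
	// GetTier returns the current object tier as a string.
	GetTier() string
}

// ListTierer is an optional interface for objects that can list the tiers they support.
type ListTierer interface {
	// ListTiers returns the list of storage tiers supported on this object.
	ListTiers() []string
}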
sandeepkru
233507bfe0 vendor: Update AzureSDK version to latest one, fixes failing integration tests 2018-09-12 08:14:38 +01:00
sandeepkru
5b27702b61 AzureBlob new sdk changes 2018-09-12 08:14:38 +01:00
Nick Craig-Wood
b2a6a97443 Add Craig Miskell to contributors 2018-09-11 07:57:16 +01:00
Craig Miskell
2543278c3f S3: Use (custom) pacer, to retry operations when reasonable - fixes #2503 2018-09-11 07:57:03 +01:00
Jon Craton
19cf3bb9e7 Fixed typo (duplicate word) (#2563) 2018-09-10 19:09:48 -07:00
Fabian Möller
3c44ef788a cache: add plex_insecure option to skip certificate validation
Fixes #2215
2018-09-10 21:19:25 +01:00
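A hedged rclone.conf sketch showing the new option on a cache remote (the remote names and Plex URL are hypothetical; plex_url and plex_username are existing cache backend options):

[media-cache]
type = cache
remote = gdrive:media
plex_url = https://plex.example.com:32400
plex_username = user@example.com
plex_insecure = true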
Fabian Möller
e5663de09e docs: add section for .plex.direct urls to cache 2018-09-10 21:15:18 +01:00
Nick Craig-Wood
7cfd4e56f8 Add Santiago Rodríguez to contributors
Another email address for Sandeep.
2018-09-10 20:49:55 +01:00
Santiago Rodríguez
282540c2d4 azureblob: add --azureblob-list-chunk parameter - Fixes #2390
This parameter can be used to adjust the size of the listing chunks
which can be used to workaround problems listing large buckets.
2018-09-10 20:45:06 +01:00
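A hedged usage sketch of the new flag (the container name is hypothetical; values above the 5000 maximum are rejected, per the azureblob diff further down):

rclone ls --azureblob-list-chunk 1000 azureblob:mycontainer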
albertony
1e7a7d756f jottacloud: fix for --fast-list 2018-09-10 20:38:20 +01:00
Fabian Möller
f6ee0795ac cache: preserve leading / in wrapped remote path
When combining the remote value and the root path, preserve the absence
or presence of the / at the beginning of the wrapped remote path.

e.g. a remote "cloud:" and root path "dir" becomes "cloud:dir" instead
of "cloud:/dir".

Fixes #2553
2018-09-10 20:35:50 +01:00
Fabian Möller
eb5a95e7de crypt: preserve leading / in wrapped remote path
When combining the remote value and the root path, preserve the absence
or presence of the / at the beginning of the wrapped remote path.

e.g. a remote "cloud:" and root path "dir" becomes "cloud:dir" instead
of "cloud:/dir".
2018-09-10 20:35:50 +01:00
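A minimal Go sketch of the joining rule both commits describe (illustrative only; the function name and the strings-based check are assumptions, not rclone's actual implementation):

// joinRootPath appends the root path without forcing a "/" separator,
// so a leading "/" on the wrapped path is neither added nor removed.
// (Uses strings.HasSuffix from the standard library.)
func joinRootPath(remote, root string) string {
	if root == "" {
		return remote
	}
	if remote == "" || strings.HasSuffix(remote, ":") || strings.HasSuffix(remote, "/") {
		return remote + root // "cloud:" + "dir" -> "cloud:dir"; "cloud:/" + "dir" -> "cloud:/dir"
	}
	return remote + "/" + root
}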
Sandeep
5b1c162fb2 Added sandeepkru to maintainers list (#2560) 2018-09-08 20:58:23 -07:00
albertony
d51501938a jottacloud: add link sharing support 2018-09-08 09:38:57 +01:00
Nick Craig-Wood
a823d518ac docs: changelog from v1.43.1 branch 2018-09-07 17:16:11 +01:00
Nick Craig-Wood
87d4e32a6b build: instructions on how to make a point release 2018-09-07 17:10:29 +01:00
Nick Craig-Wood
820d2a7149 Add Felix Brucker to contributors 2018-09-07 15:14:28 +01:00
Nick Craig-Wood
9fe39f25e1 union: add missing docs 2018-09-07 15:14:08 +01:00
Nick Craig-Wood
1b2cc781e5 union: fix so all integration tests pass
* Fix error handling in List and NewObject
* Fix Precision in case we have precision > time.Second
* Fix Features - all binary features are possible
* Fix integration tests using new test facilities
2018-09-07 15:14:08 +01:00
Nick Craig-Wood
e05ec2b77e fstests: Allow object name and fs check to be skipped 2018-09-07 15:14:08 +01:00
Felix Brucker
9e3ea3c6ac union: Implement union backend which reads from multiple backends 2018-09-07 15:14:08 +01:00
Anagh Kumar Baranwal
0fb12112f5 docs: display changes
- Reduced size of the social menu and increased the size of the content
- Added scrollable property to the index menus
- Fixed code wrapping issue

Fixes #2103

Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-09-07 14:54:22 +01:00
Cédric Connes
1b95ca2852 stats: handle FatalError and NoRetryError when reported to stats 2018-09-07 14:44:50 +01:00
albertony
e07a850be3 jottacloud: add permanent delete support: --jottacloud-hard-delete 2018-09-07 12:58:18 +01:00
albertony
3fccce625c jottacloud: add --fast-list support - fixes #2532 2018-09-07 12:49:39 +01:00
albertony
a1f935e815 jottacloud: minor improvement in quota info (omit if unlimited) 2018-09-07 12:49:39 +01:00
Alex Chen
22ee4151fd build: make CIs available for forks
This makes it possible to run CI on a fork of rclone, which is useful for contributors.
2018-09-07 10:17:26 +01:00
Fabian Möller
cc23ad71ce ncdu: return error instead of log.Fatal in Show 2018-09-07 10:06:37 +01:00
sandeepkru
57b9fff904 azureblob - BugFix - Incorrect StageBlock invocation in multi-part uploads
Fixes #2518: incorrect formation of the block list.
2018-09-06 22:24:40 +01:00
Alex Chen
692ad482dc onedrive: fix new fields not saved when editing old config - fixes #2527 2018-09-07 00:07:16 +08:00
Sebastian Bünger
c6f1c3c7f6 box: Implement link sharing. #2178 2018-09-04 22:01:22 +01:00
Nick Craig-Wood
164d1e05ca hubic: retry auth fetching if it fails to make hubic more reliable 2018-09-04 21:00:36 +01:00
Nick Craig-Wood
c644241392 hubic: fix uploads - fixes #2524
Uploads were broken because chunk size was set to zero.  This was a
consequence of the backend config re-organization which meant that
chunk size had lost its default.

Sharing some backend config between swift and hubic fixes the problem
and means hubic gains its own --hubic-chunk-size flag.
2018-09-04 20:27:48 +01:00
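A hedged example of the flag this fix introduces (the chunk size value and container name are arbitrary/hypothetical):

rclone copy -v --hubic-chunk-size 1G /local/dir hubic:container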
Cnly
89be5cadaa onedrive: fix make check 2018-09-05 00:37:52 +08:00
Cnly
f326f94b97 onedrive: use single-part upload for empty files - fixes #2520 2018-09-05 00:08:00 +08:00
Nick Craig-Wood
3d2117887d Add Anagh Kumar Baranwal to contributors 2018-09-04 16:16:46 +01:00
Anagh Kumar Baranwal
5a6750e1cd cache: documentation fix for cache-chunk-total-size - Fixes #2519
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-09-04 16:16:35 +01:00
Fabian Möller
6b8b9d19f3 googlecloudstorage: fix service_account_file being ignored - Fixes #2523 2018-09-04 15:31:20 +01:00
Nick Craig-Wood
4ca26eb38c cmd: fix crash with --progress and --stats 0 #2501 2018-09-04 14:39:48 +01:00
Alex Chen
37b2754f37 Add Alex Chen (Cnly) to MAINTAINERS.md 2018-09-04 08:40:01 +08:00
Nick Craig-Wood
172beb2ae3 Add cron410 to contributors 2018-09-03 20:28:15 +01:00
cron410
5eba392a04 cache: clarify docs 2018-09-03 20:28:15 +01:00
Fabian Möller
deda093637 cache: fix error return value of cache/fetch rc method 2018-09-03 17:32:11 +02:00
dcpu
a4c4019032 cache: improve performance by not sending info requests for cached chunks 2018-09-03 15:41:06 +01:00
Nick Craig-Wood
2e37942592 Add albertony to contributors 2018-09-03 15:31:25 +01:00
albertony
09d7bd2d40 config: don't create default config dir when user supplies --config
Avoid creating an empty default configuration directory when the user supplies a path to a config file.

Fixes #2514.
2018-09-03 15:30:53 +01:00
Nick Craig-Wood
ff0efb1501 Add Sheldon Rupp to contributors 2018-09-03 15:04:37 +01:00
Sheldon Rupp
0f1d4a7ca8 docs: fix typo 2018-09-03 15:03:49 +01:00
Fabian Möller
a0b3fd3a33 cache: fix worker scale down
Ensure that calling scaleWorkers will create/destroy the right number of
workers.
2018-09-03 12:29:35 +02:00
Fabian Möller
cdbe3691b7 cache: add cache/fetch rc function 2018-09-03 12:29:35 +02:00
Fabian Möller
3a0b3b0f6e drive: reformat long API call lines 2018-09-03 12:22:05 +02:00
Nick Craig-Wood
d3afef3e1b Add dcpu to contributors 2018-09-02 18:11:42 +01:00
dcpu
f4aaec9ce5 log: Add --log-format flag - fixes #2424 2018-09-02 18:11:09 +01:00
Nick Craig-Wood
bd5d326160 build: enable caching of the go build cache for Travis and Appveyor 2018-09-02 17:43:09 +01:00
Cnly
5b9b9f1572 onedrive: graph: clarify option for root Sharepoint site 2018-09-02 16:06:25 +01:00
Cnly
571c8754de onedrive: graph: update docs 2018-09-02 16:06:25 +01:00
Cnly
fb9a95e68e onedrive: graph: Remove unnecessary error checks 2018-09-02 16:06:25 +01:00
Cnly
85e0839c8b onedrive: graph: Refine config handling 2018-09-02 16:06:25 +01:00
Cnly
1749fb8ebf onedrive: graph: Refine config keys naming 2018-09-02 16:06:25 +01:00
Oliver Heyme
e114be11ec onedrive: Removed upload cutoff and always do session uploads
Set modtime on copy

Added versioning issue to OneDrive documentation

(cherry picked from commit 7f74403)
2018-09-02 16:06:25 +01:00
Cnly
b709f73aab onedrive: rework to support Microsoft Graph
The initial work on this was done by Oliver Heyme with updates from
Cnly.

Oliver Heyme:

* Changed to Microsoft graph
* Enable writing
* Added more options for adding a OneDrive Remote
* Better error handling
* Send modDate at create upload session and fix list children

Cnly:

* Simple upload API only supports max 4MB files
* Fix supported hash types for different drive types
* Fix unchecked err

Co-authored-by: Oliver Heyme <olihey@googlemail.com>
Co-authored-by: Cnly <minecnly@gmail.com>
2018-09-02 16:06:25 +01:00
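A sketch of the size constraint mentioned above (the 4 MB simple-upload limit comes from the commit message; the constant and function names are illustrative):

// simpleUploadLimit is the maximum size the simple upload API accepts.
const simpleUploadLimit = 4 * 1024 * 1024

// chooseUpload picks between OneDrive's simple upload API and an
// upload session, per the limit noted in the commit message.
func chooseUpload(size int64) string {
	if size <= simpleUploadLimit {
		return "simple upload"
	}
	return "upload session"
}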
Nick Craig-Wood
05a615ef22 Add Dr. Tobias Quathamer to contributors 2018-09-02 15:51:47 +01:00
Dr. Tobias Quathamer
76450c01f3 cache: remove accidentally committed files
* Delete cache_upload_test.go.orig
* Delete cache_upload_test.go.rej
2018-09-02 15:51:22 +01:00
Nick Craig-Wood
86e64c626c Add Cédric Connes to contributors 2018-09-02 15:08:38 +01:00
Cédric Connes
9b827be418 local: skip bad symlinks in dir listing with -L enabled - fixes #1509 2018-09-02 14:47:54 +01:00
Nick Craig-Wood
7e5c6725c1 docs: Fix version in changelog 2018-09-01 18:40:34 +01:00
Nick Craig-Wood
543d75723b Start v1.43-DEV development 2018-09-01 18:37:48 +01:00
Nick Craig-Wood
b4d94f255a build: server side copy the current release files in 2018-09-01 18:22:19 +01:00
Nick Craig-Wood
b0dd218fea build: make tidy-beta for removing old beta releases 2018-09-01 18:21:54 +01:00
156 changed files with 3332 additions and 1721 deletions

View File

@@ -4,6 +4,9 @@ os: Windows Server 2012 R2
clone_folder: c:\gopath\src\github.com\ncw\rclone
cache:
- '%LocalAppData%\go-build'
environment:
GOPATH: C:\gopath
CPATH: C:\Program Files (x86)\WinFsp\inc\fuse
@@ -43,4 +46,4 @@ artifacts:
- path: build/*-v*.zip
deploy_script:
- IF "%APPVEYOR_PULL_REQUEST_NUMBER%" == "" make appveyor_upload
- IF "%APPVEYOR_REPO_NAME%" == "ncw/rclone" IF "%APPVEYOR_PULL_REQUEST_NUMBER%" == "" make appveyor_upload

View File

@@ -10,6 +10,7 @@ go:
- 1.10.x
- 1.11.x
- tip
go_import_path: github.com/ncw/rclone
before_install:
- if [[ $TRAVIS_OS_NAME == linux ]]; then sudo modprobe fuse ; sudo chmod 666 /dev/fuse ; sudo chown root:$USER /etc/fuse.conf ; fi
- if [[ $TRAVIS_OS_NAME == osx ]]; then brew update && brew tap caskroom/cask && brew cask install osxfuse ; fi
@@ -34,6 +35,9 @@ addons:
- libfuse-dev
- rpm
- pkg-config
cache:
directories:
- $HOME/.cache/go-build
matrix:
allow_failures:
- go: tip
@@ -41,11 +45,15 @@ matrix:
- os: osx
go: 1.11.x
env: GOTAGS=""
cache:
directories:
- $HOME/Library/Caches/go-build
deploy:
provider: script
script: make travis_beta
skip_cleanup: true
on:
repo: ncw/rclone
all_branches: true
go: 1.11.x
condition: $TRAVIS_PULL_REQUEST == false

View File

@@ -71,7 +71,7 @@ Make sure you
When you are done with that
git push origin my-new-feature
Go to the Github website and click [Create pull
request](https://help.github.com/articles/creating-a-pull-request/).
@@ -81,6 +81,14 @@ Your patch will get reviewed and you might get asked to fix some stuff.
If so, then make the changes in the same branch, squash the commits,
rebase it to master then push it to Github with `--force`.
## Enabling CI for your fork ##
The CI config files for rclone are set up to handle forks of the project, so you can easily enable CI for your fork.
rclone currently uses [Travis CI](https://travis-ci.org/), [AppVeyor](https://ci.appveyor.com/), and
[Circle CI](https://circleci.com/) to build the project. To enable them for your fork, simply go into their
websites, find your fork of rclone, and enable building there.
## Testing ##
rclone's tests are run from the go testing framework, so at the top

View File

@@ -7,6 +7,8 @@ Current active maintainers of rclone are
* Ishuah Kariuki @ishuah
* Remus Bunduc @remusb - cache subsystem maintainer
* Fabian Möller @B4dM4n
* Alex Chen @Cnly
* Sandeep Ummadi @sandeepkru
**This is a work in progress Draft**

View File

@@ -12,7 +12,7 @@
<div id="header">
<h1 class="title">rclone(1) User Manual</h1>
<h2 class="author">Nick Craig-Wood</h2>
<h3 class="date">Sep 07, 2018</h3>
<h3 class="date">Sep 01, 2018</h3>
</div>
<h1 id="rclone">Rclone</h1>
<p><a href="https://rclone.org/"><img src="https://rclone.org/img/rclone-120x120.png" alt="Logo" /></a></p>
@@ -6879,26 +6879,7 @@ nounc = true</code></pre>
<h4 id="skip-links">--skip-links</h4>
<p>This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped.</p>
<h1 id="changelog">Changelog</h1>
<h2 id="v1.43.1---2018-09-07">v1.43.1 - 2018-09-07</h2>
<p>Point release to fix hubic and azureblob backends.</p>
<ul>
<li>Bug Fixes
<ul>
<li>ncdu: Return error instead of log.Fatal in Show (Fabian Möller)</li>
<li>cmd: Fix crash with --progress and --stats 0 (Nick Craig-Wood)</li>
<li>docs: Tidy website display (Anagh Kumar Baranwal)</li>
</ul></li>
<li>Azure Blob:
<ul>
<li>Fix multi-part uploads. (sandeepkru)</li>
</ul></li>
<li>Hubic
<ul>
<li>Fix uploads (Nick Craig-Wood)</li>
<li>Retry auth fetching if it fails to make hubic more reliable (Nick Craig-Wood)</li>
</ul></li>
</ul>
<h2 id="v1.43---2018-09-01">v1.43 - 2018-09-01</h2>
<h2 id="v1.42---2018-09-01">v1.42 - 2018-09-01</h2>
<ul>
<li>New backends
<ul>

View File

@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
% Sep 07, 2018
% Sep 01, 2018
Rclone
======
@@ -11628,21 +11628,7 @@ points, as you explicitly acknowledge that they should be skipped.
# Changelog
## v1.43.1 - 2018-09-07
Point release to fix hubic and azureblob backends.
* Bug Fixes
* ncdu: Return error instead of log.Fatal in Show (Fabian Möller)
* cmd: Fix crash with --progress and --stats 0 (Nick Craig-Wood)
* docs: Tidy website display (Anagh Kumar Baranwal)
* Azure Blob:
* Fix multi-part uploads. (sandeepkru)
* Hubic
* Fix uploads (Nick Craig-Wood)
* Retry auth fetching if it fails to make hubic more reliable (Nick Craig-Wood)
## v1.43 - 2018-09-01
## v1.42 - 2018-09-01
* New backends
* Jottacloud (Sebastian Bünger)

View File

@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
Sep 07, 2018
Sep 01, 2018
@@ -11192,23 +11192,7 @@ points, as you explicitly acknowledge that they should be skipped.
CHANGELOG
v1.43.1 - 2018-09-07
Point release to fix hubic and azureblob backends.
- Bug Fixes
- ncdu: Return error instead of log.Fatal in Show (Fabian Möller)
- cmd: Fix crash with --progress and --stats 0 (Nick Craig-Wood)
- docs: Tidy website display (Anagh Kumar Baranwal)
- Azure Blob:
- Fix multi-part uploads. (sandeepkru)
- Hubic
- Fix uploads (Nick Craig-Wood)
- Retry auth fetching if it fails to make hubic more reliable
(Nick Craig-Wood)
v1.43 - 2018-09-01
v1.42 - 2018-09-01
- New backends
- Jottacloud (Sebastian Bünger)

View File

@@ -152,8 +152,8 @@ check_sign:
cd build && gpg --verify SHA256SUMS && gpg --decrypt SHA256SUMS | sha256sum -c
upload:
rclone -v copy --exclude '*current*' build/ memstore:downloads-rclone-org/$(TAG)
rclone -v copy --include '*current*' --include version.txt build/ memstore:downloads-rclone-org
rclone -P copy build/ memstore:downloads-rclone-org/$(TAG)
rclone lsf build --files-only --include '*.{zip,deb,rpm}' --include version.txt | xargs -i bash -c 'i={}; j="$$i"; [[ $$i =~ (.*)(-v[0-9\.]+-)(.*) ]] && j=$${BASH_REMATCH[1]}-current-$${BASH_REMATCH[3]}; rclone copyto -v "memstore:downloads-rclone-org/$(TAG)/$$i" "memstore:downloads-rclone-org/$$j"'
upload_github:
./bin/upload-github $(TAG)
@@ -202,7 +202,7 @@ endif
# Fetch the binary builds from travis and appveyor
fetch_binaries:
rclone -v sync $(BETA_UPLOAD) build/
rclone -P sync $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server -v -w

View File

@@ -31,3 +31,25 @@ Early in the next release cycle update the vendored dependencies
* git status
* git add new files
* git commit -a -v
Making a point release
If rclone needs a point release due to some horrendous bug, then
* git branch v1.XX v1.XX-fixes
* git cherry-pick any fixes
* Test (see above)
* make NEW_TAG=v1.XX.1 tag
* edit docs/content/changelog.md
* make TAG=v1.43.1 doc
* git commit -a -v -m "Version v1.XX.1"
* git tag -d v1.XX.1
* git tag -s -m "Version v1.XX.1" v1.XX.1
* git push --tags -u origin v1.XX-fixes
* make BRANCH_PATH= TAG=v1.43.1 fetch_binaries
* make TAG=v1.43.1 tarball
* make TAG=v1.43.1 sign_upload
* make TAG=v1.43.1 check_sign
* make TAG=v1.43.1 upload
* make TAG=v1.43.1 upload_website
* make TAG=v1.43.1 upload_github
* NB this overwrites the current beta so after the release, rebuild the last travis build
* Announce!

View File

@@ -25,6 +25,7 @@ import (
_ "github.com/ncw/rclone/backend/s3"
_ "github.com/ncw/rclone/backend/sftp"
_ "github.com/ncw/rclone/backend/swift"
_ "github.com/ncw/rclone/backend/union"
_ "github.com/ncw/rclone/backend/webdav"
_ "github.com/ncw/rclone/backend/yandex"
)

View File

@@ -7,6 +7,7 @@ package azureblob
import (
"bytes"
"context"
"crypto/md5"
"encoding/base64"
"encoding/binary"
"encoding/hex"
@@ -37,7 +38,7 @@ const (
minSleep = 10 * time.Millisecond
maxSleep = 10 * time.Second
decayConstant = 1 // bigger for slower decay, exponential
listChunkSize = 5000 // number of items to read at once
maxListChunkSize = 5000 // number of items to read at once
modTimeKey = "mtime"
timeFormatIn = time.RFC3339
timeFormatOut = "2006-01-02T15:04:05.000000000Z07:00"
@@ -80,6 +81,11 @@ func init() {
Help: "Upload chunk size. Must fit in memory.",
Default: fs.SizeSuffix(defaultChunkSize),
Advanced: true,
}, {
Name: "list_chunk",
Help: "Size of blob list.",
Default: maxListChunkSize,
Advanced: true,
}, {
Name: "access_tier",
Help: "Access tier of blob, supports hot, cool and archive tiers.\nArchived blobs can be restored by setting access tier to hot or cool." +
@@ -91,13 +97,14 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
Account string `config:"account"`
Key string `config:"key"`
Endpoint string `config:"endpoint"`
SASURL string `config:"sas_url"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
AccessTier string `config:"access_tier"`
Account string `config:"account"`
Key string `config:"key"`
Endpoint string `config:"endpoint"`
SASURL string `config:"sas_url"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
ListChunkSize uint `config:"list_chunk"`
AccessTier string `config:"access_tier"`
}
// Fs represents a remote azure server
@@ -171,6 +178,32 @@ func parsePath(path string) (container, directory string, err error) {
return
}
// validateAccessTier checks if azureblob supports user supplied tier
func validateAccessTier(tier string) bool {
switch tier {
case string(azblob.AccessTierHot),
string(azblob.AccessTierCool),
string(azblob.AccessTierArchive):
// valid cases
return true
default:
return false
}
}
// validAccessTiers returns list of supported storage tiers on azureblob fs
func validAccessTiers() []string {
validTiers := [...]azblob.AccessTierType{azblob.AccessTierHot, azblob.AccessTierCool,
azblob.AccessTierArchive}
var tiers [len(validTiers)]string
for i, tier := range validTiers {
tiers[i] = string(tier)
}
return tiers[:]
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
401, // Unauthorized (eg "Token has expired")
@@ -211,6 +244,9 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if opt.ChunkSize > maxChunkSize {
return nil, errors.Errorf("azure: chunk size can't be greater than %v - was %v", maxChunkSize, opt.ChunkSize)
}
if opt.ListChunkSize > maxListChunkSize {
return nil, errors.Errorf("azure: blob list size can't be greater than %v - was %v", maxListChunkSize, opt.ListChunkSize)
}
container, directory, err := parsePath(root)
if err != nil {
return nil, err
@@ -221,15 +257,9 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if opt.AccessTier == "" {
opt.AccessTier = string(defaultAccessTier)
} else {
switch opt.AccessTier {
case string(azblob.AccessTierHot):
case string(azblob.AccessTierCool):
case string(azblob.AccessTierArchive):
// valid cases
default:
return nil, errors.Errorf("azure: Supported access tiers are %s, %s and %s", string(azblob.AccessTierHot), string(azblob.AccessTierCool), azblob.AccessTierArchive)
}
} else if !validateAccessTier(opt.AccessTier) {
return nil, errors.Errorf("Azure Blob: Supported access tiers are %s, %s and %s",
string(azblob.AccessTierHot), string(azblob.AccessTierCool), string(azblob.AccessTierArchive))
}
var (
@@ -239,7 +269,11 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
)
switch {
case opt.Account != "" && opt.Key != "":
credential := azblob.NewSharedKeyCredential(opt.Account, opt.Key)
credential, err := azblob.NewSharedKeyCredential(opt.Account, opt.Key)
if err != nil {
return nil, errors.Wrapf(err, "Failed to parse credentials")
}
u, err = url.Parse(fmt.Sprintf("https://%s.%s", opt.Account, opt.Endpoint))
if err != nil {
return nil, errors.Wrap(err, "failed to make azure storage url from account and endpoint")
@@ -285,6 +319,9 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
SetTier: true,
GetTier: true,
ListTiers: true,
}).Fill(f)
if f.root != "" {
f.root += "/"
@@ -474,7 +511,7 @@ func (f *Fs) markContainerOK() {
// listDir lists a single directory
func (f *Fs) listDir(dir string) (entries fs.DirEntries, err error) {
err = f.list(dir, false, listChunkSize, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
err = f.list(dir, false, f.opt.ListChunkSize, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
if err != nil {
return err
@@ -545,7 +582,7 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
return fs.ErrorListBucketRequired
}
list := walk.NewListRHelper(callback)
err = f.list(dir, true, listChunkSize, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
err = f.list(dir, true, f.opt.ListChunkSize, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
if err != nil {
return err
@@ -566,11 +603,11 @@ type listContainerFn func(*azblob.ContainerItem) error
// listContainersToFn lists the containers to the function supplied
func (f *Fs) listContainersToFn(fn listContainerFn) error {
params := azblob.ListContainersSegmentOptions{
MaxResults: int32(listChunkSize),
MaxResults: int32(f.opt.ListChunkSize),
}
ctx := context.Background()
for marker := (azblob.Marker{}); marker.NotDone(); {
var response *azblob.ListContainersResponse
var response *azblob.ListContainersSegmentResponse
err := f.pacer.Call(func() (bool, error) {
var err error
response, err = f.svcURL.ListContainersSegment(ctx, marker, params)
@@ -752,7 +789,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
var startCopy *azblob.BlobStartCopyFromURLResponse
err = f.pacer.Call(func() (bool, error) {
startCopy, err = dstBlobURL.StartCopyFromURL(ctx, *source, nil, options, options)
startCopy, err = dstBlobURL.StartCopyFromURL(ctx, *source, nil, azblob.ModifiedAccessConditions{}, options)
return f.shouldRetry(err)
})
if err != nil {
@@ -837,9 +874,8 @@ func (o *Object) setMetadata(metadata azblob.Metadata) {
// o.md5
// o.meta
func (o *Object) decodeMetaDataFromPropertiesResponse(info *azblob.BlobGetPropertiesResponse) (err error) {
// NOTE - In BlobGetPropertiesResponse, Client library returns MD5 as base64 decoded string
// unlike BlobProperties in BlobItem (used in decodeMetadataFromBlob) which returns base64
// encoded bytes. Object needs to maintain this as base64 encoded string.
// NOTE - Client library always returns MD5 as base64 decoded string, Object needs to maintain
// this as base64 encoded string.
o.md5 = base64.StdEncoding.EncodeToString(info.ContentMD5())
o.mimeType = info.ContentType()
o.size = info.ContentLength()
@@ -851,7 +887,9 @@ func (o *Object) decodeMetaDataFromPropertiesResponse(info *azblob.BlobGetProper
}
func (o *Object) decodeMetaDataFromBlob(info *azblob.BlobItem) (err error) {
o.md5 = string(info.Properties.ContentMD5)
// NOTE - Client library always returns MD5 as base64 decoded string, Object needs to maintain
// this as base64 encoded string.
o.md5 = base64.StdEncoding.EncodeToString(info.Properties.ContentMD5)
o.mimeType = *info.Properties.ContentType
o.size = *info.Properties.ContentLength
o.modTime = info.Properties.LastModified
@@ -1125,11 +1163,14 @@ outer:
defer o.fs.uploadToken.Put()
fs.Debugf(o, "Uploading part %d/%d offset %v/%v part size %v", part+1, totalParts, fs.SizeSuffix(position), fs.SizeSuffix(size), fs.SizeSuffix(chunkSize))
// Upload the block, with MD5 for check
md5sum := md5.Sum(buf)
transactionalMD5 := md5sum[:]
err = o.fs.pacer.Call(func() (bool, error) {
bufferReader := bytes.NewReader(buf)
wrappedReader := wrap(bufferReader)
rs := readSeeker{wrappedReader, bufferReader}
_, err = blockBlobURL.StageBlock(ctx, blockID, &rs, ac)
_, err = blockBlobURL.StageBlock(ctx, blockID, &rs, ac, transactionalMD5)
return o.fs.shouldRetry(err)
})
@@ -1236,16 +1277,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
}
// Now, set blob tier based on configured access tier
desiredAccessTier := azblob.AccessTierType(o.fs.opt.AccessTier)
err = o.fs.pacer.Call(func() (bool, error) {
_, err := blob.SetTier(ctx, desiredAccessTier)
return o.fs.shouldRetry(err)
})
if err != nil {
return errors.Wrap(err, "Failed to set Blob Tier")
}
return nil
return o.SetTier(o.fs.opt.AccessTier)
}
// Remove an object
@@ -1270,6 +1302,46 @@ func (o *Object) AccessTier() azblob.AccessTierType {
return o.accessTier
}
// SetTier performs changing object tier
func (o *Object) SetTier(tier string) error {
if !validateAccessTier(tier) {
return errors.Errorf("Tier %s not supported by Azure Blob Storage", tier)
}
// Check if current tier already matches with desired tier
if o.GetTier() == tier {
return nil
}
desiredAccessTier := azblob.AccessTierType(tier)
blob := o.getBlobReference()
ctx := context.Background()
err := o.fs.pacer.Call(func() (bool, error) {
_, err := blob.SetTier(ctx, desiredAccessTier)
return o.fs.shouldRetry(err)
})
if err != nil {
return errors.Wrap(err, "Failed to set Blob Tier")
}
// Set access tier on local object also, this typically
// gets updated on get blob properties
o.accessTier = desiredAccessTier
fs.Debugf(o, "Successfully changed object tier to %s", tier)
return nil
}
// GetTier returns object tier in azure as string
func (o *Object) GetTier() string {
return string(o.accessTier)
}
// ListTiers returns list of storage tiers supported on this object
func (o *Object) ListTiers() []string {
return validAccessTiers()
}
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}

View File

@@ -0,0 +1,20 @@
// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
package azureblob
import (
"testing"
"github.com/stretchr/testify/assert"
)
func (f *Fs) InternalTest(t *testing.T) {
// Check first feature flags are set on this
// remote
enabled := f.Features().ListTiers
assert.True(t, enabled)
enabled = f.Features().SetTier
assert.True(t, enabled)
enabled = f.Features().GetTier
assert.True(t, enabled)
}

View File

@@ -61,7 +61,7 @@ func (e *Error) Error() string {
var _ error = (*Error)(nil)
// ItemFields are the fields needed for FileInfo
var ItemFields = "type,id,sequence_id,etag,sha1,name,size,created_at,modified_at,content_created_at,content_modified_at,item_status"
var ItemFields = "type,id,sequence_id,etag,sha1,name,size,created_at,modified_at,content_created_at,content_modified_at,item_status,shared_link"
// Types of things in Item
const (
@@ -86,6 +86,10 @@ type Item struct {
ContentCreatedAt Time `json:"content_created_at"`
ContentModifiedAt Time `json:"content_modified_at"`
ItemStatus string `json:"item_status"` // active, trashed if the file has been moved to the trash, and deleted if the file has been permanently deleted
SharedLink struct {
URL string `json:"url,omitempty"`
Access string `json:"access,omitempty"`
} `json:"shared_link"`
}
// ModTime returns the modification time of the item
@@ -145,6 +149,14 @@ type CopyFile struct {
Parent Parent `json:"parent"`
}
// CreateSharedLink is the request for Public Link
type CreateSharedLink struct {
SharedLink struct {
URL string `json:"url,omitempty"`
Access string `json:"access,omitempty"`
} `json:"shared_link"`
}
// UploadSessionRequest is used in Create Upload Session
type UploadSessionRequest struct {
FolderID string `json:"folder_id,omitempty"` // don't pass for update

View File

@@ -126,6 +126,7 @@ type Object struct {
size int64 // size of the object
modTime time.Time // modification time of the object
id string // ID of the object
publicLink string // Public Link for the object
sha1 string // SHA-1 of the object content
}
@@ -299,6 +300,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
return nil, err
}
f.features.Fill(&newF)
// return an error with an fs which points to the parent
return &newF, fs.ErrorIsFile
}
@@ -844,6 +846,46 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
return nil
}
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(remote string) (string, error) {
id, err := f.dirCache.FindDir(remote, false)
var opts rest.Opts
if err == nil {
fs.Debugf(f, "attempting to share directory '%s'", remote)
opts = rest.Opts{
Method: "PUT",
Path: "/folders/" + id,
Parameters: fieldsValue(),
}
} else {
fs.Debugf(f, "attempting to share single file '%s'", remote)
o, err := f.NewObject(remote)
if err != nil {
return "", err
}
if o.(*Object).publicLink != "" {
return o.(*Object).publicLink, nil
}
opts = rest.Opts{
Method: "PUT",
Path: "/files/" + o.(*Object).id,
Parameters: fieldsValue(),
}
}
shareLink := api.CreateSharedLink{}
var info api.Item
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &shareLink, &info)
return shouldRetry(resp, err)
})
return info.SharedLink.URL, err
}
// DirCacheFlush resets the directory cache - used in testing as an
// optional interface
func (f *Fs) DirCacheFlush() {
@@ -908,6 +950,7 @@ func (o *Object) setMetaData(info *api.Item) (err error) {
o.sha1 = info.SHA1
o.modTime = info.ModTime()
o.id = info.ID
o.publicLink = info.SharedLink.URL
return nil
}
@@ -1087,6 +1130,7 @@ var (
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
)

backend/cache/cache.go vendored
View File

@@ -6,10 +6,12 @@ import (
"context"
"fmt"
"io"
"math"
"os"
"os/signal"
"path"
"path/filepath"
"strconv"
"strings"
"sync"
"syscall"
@@ -79,6 +81,10 @@ func init() {
Help: "The plex token for authentication - auto set normally",
Hide: fs.OptionHideBoth,
Advanced: true,
}, {
Name: "plex_insecure",
Help: "Skip all certificate verifications when connecting to the Plex server",
Advanced: true,
}, {
Name: "chunk_size",
Help: "The size of a chunk. Lower value good for slow connections but can affect seamless reading.",
@@ -193,6 +199,7 @@ type Options struct {
PlexUsername string `config:"plex_username"`
PlexPassword string `config:"plex_password"`
PlexToken string `config:"plex_token"`
PlexInsecure bool `config:"plex_insecure"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
InfoAge fs.Duration `config:"info_age"`
ChunkTotalSize fs.SizeSuffix `config:"chunk_total_size"`
@@ -248,7 +255,7 @@ func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
return nil, err
}
if opt.ChunkTotalSize < opt.ChunkSize*fs.SizeSuffix(opt.TotalWorkers) {
return nil, errors.Errorf("don't set cache-total-chunk-size(%v) less than cache-chunk-size(%v) * cache-workers(%v)",
return nil, errors.Errorf("don't set cache-chunk-total-size(%v) less than cache-chunk-size(%v) * cache-workers(%v)",
opt.ChunkTotalSize, opt.ChunkSize, opt.TotalWorkers)
}
@@ -261,10 +268,15 @@ func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
return nil, errors.Wrapf(err, "failed to clean root path %q", rootPath)
}
remotePath := path.Join(opt.Remote, rpath)
wrappedFs, wrapErr := fs.NewFs(remotePath)
wInfo, wName, wPath, wConfig, err := fs.ConfigFs(opt.Remote)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse remote %q to wrap", opt.Remote)
}
remotePath := path.Join(wPath, rootPath)
wrappedFs, wrapErr := wInfo.NewFs(wName, remotePath, wConfig)
if wrapErr != nil && wrapErr != fs.ErrorIsFile {
return nil, errors.Wrapf(wrapErr, "failed to make remote %q to wrap", remotePath)
return nil, errors.Wrapf(wrapErr, "failed to make remote %s:%s to wrap", wName, remotePath)
}
var fsErr error
fs.Debugf(name, "wrapped %v:%v at root %v", wrappedFs.Name(), wrappedFs.Root(), rpath)
@@ -290,7 +302,7 @@ func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
f.plexConnector = &plexConnector{}
if opt.PlexURL != "" {
if opt.PlexToken != "" {
f.plexConnector, err = newPlexConnectorWithToken(f, opt.PlexURL, opt.PlexToken)
f.plexConnector, err = newPlexConnectorWithToken(f, opt.PlexURL, opt.PlexToken, opt.PlexInsecure)
if err != nil {
return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", opt.PlexURL)
}
@@ -300,7 +312,7 @@ func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
decPass = opt.PlexPassword
}
f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, func(token string) {
f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, opt.PlexInsecure, func(token string) {
m.Set("plex_token", token)
})
if err != nil {
@@ -455,6 +467,39 @@ Eg
Title: "Get cache stats",
Help: `
Show statistics for the cache remote.
`,
})
rc.Add(rc.Call{
Path: "cache/fetch",
Fn: f.rcFetch,
Title: "Fetch file chunks",
Help: `
Ensure the specified file chunks are cached on disk.
The chunks= parameter specifies the file chunks to check.
It takes a comma separated list of array slice indices.
The slice indices are similar to Python slices: start[:end]
start is the 0 based chunk number from the beginning of the file
to fetch inclusive. end is the 0 based chunk number from the beginning
of the file to fetch exclusive.
Both values can be negative, in which case they count from the back
of the file. The value "-5:" represents the last 5 chunks of a file.
Some valid examples are:
":5,-5:" -> the first and last five chunks
"0,-2" -> the first and the second last chunk
"0:10" -> the first ten chunks
Any parameter with a key that starts with "file" can be used to
specify files to fetch, eg
rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye
File names will automatically be encrypted when a crypt remote
is used on top of the cache.
`,
})
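A simplified standalone sketch of how the slice indices above resolve to chunk numbers, closely mirroring the walkChunkRange logic added below (the function name is illustrative; open-ended ranges arrive as math.MaxInt64, as in parseChunks):

// resolveRange resolves one chunk range given the total chunk count:
// negative indices count from the end, start is inclusive, end exclusive.
func resolveRange(start, end, chunks int64) (int64, int64, bool) {
	if start < 0 {
		start += chunks
	}
	if end <= 0 {
		end += chunks
	}
	if end <= start || start >= chunks {
		return 0, 0, false
	}
	if start < 0 {
		start = 0
	}
	if end > chunks {
		end = chunks
	}
	return start, end, true
}

// resolveRange(-5, math.MaxInt64, 100) -> 95, 100, true  ("-5:" = last 5 chunks)
// resolveRange(0, 10, 100)             -> 0, 10, true    ("0:10" = first ten chunks)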
@@ -472,6 +517,22 @@ func (f *Fs) httpStats(in rc.Params) (out rc.Params, err error) {
return out, nil
}
func (f *Fs) unwrapRemote(remote string) string {
remote = cleanPath(remote)
if remote != "" {
// if it's wrapped by crypt we need to check what format we got
if cryptFs, yes := f.isWrappedByCrypt(); yes {
_, err := cryptFs.DecryptFileName(remote)
// if it failed to decrypt then it is a decrypted format and we need to encrypt it
if err != nil {
return cryptFs.EncryptFileName(remote)
}
// else it's an encrypted format and we can use it as it is
}
}
return remote
}
func (f *Fs) httpExpireRemote(in rc.Params) (out rc.Params, err error) {
out = make(rc.Params)
remoteInt, ok := in["remote"]
@@ -485,20 +546,9 @@ func (f *Fs) httpExpireRemote(in rc.Params) (out rc.Params, err error) {
withData = true
}
if cleanPath(remote) != "" {
// if it's wrapped by crypt we need to check what format we got
if cryptFs, yes := f.isWrappedByCrypt(); yes {
_, err := cryptFs.DecryptFileName(remote)
// if it failed to decrypt then it is a decrypted format and we need to encrypt it
if err != nil {
remote = cryptFs.EncryptFileName(remote)
}
// else it's an encrypted format and we can use it as it is
}
if !f.cache.HasEntry(path.Join(f.Root(), remote)) {
return out, errors.Errorf("%s doesn't exist in cache", remote)
}
remote = f.unwrapRemote(remote)
if !f.cache.HasEntry(path.Join(f.Root(), remote)) {
return out, errors.Errorf("%s doesn't exist in cache", remote)
}
co := NewObject(f, remote)
@@ -528,6 +578,141 @@ func (f *Fs) httpExpireRemote(in rc.Params) (out rc.Params, err error) {
return out, nil
}
func (f *Fs) rcFetch(in rc.Params) (rc.Params, error) {
type chunkRange struct {
start, end int64
}
parseChunks := func(ranges string) (crs []chunkRange, err error) {
for _, part := range strings.Split(ranges, ",") {
var start, end int64 = 0, math.MaxInt64
switch ints := strings.Split(part, ":"); len(ints) {
case 1:
start, err = strconv.ParseInt(ints[0], 10, 64)
if err != nil {
return nil, errors.Errorf("invalid range: %q", part)
}
end = start + 1
case 2:
if ints[0] != "" {
start, err = strconv.ParseInt(ints[0], 10, 64)
if err != nil {
return nil, errors.Errorf("invalid range: %q", part)
}
}
if ints[1] != "" {
end, err = strconv.ParseInt(ints[1], 10, 64)
if err != nil {
return nil, errors.Errorf("invalid range: %q", part)
}
}
default:
return nil, errors.Errorf("invalid range: %q", part)
}
crs = append(crs, chunkRange{start: start, end: end})
}
return
}
walkChunkRange := func(cr chunkRange, size int64, cb func(chunk int64)) {
if size <= 0 {
return
}
chunks := (size-1)/f.ChunkSize() + 1
start, end := cr.start, cr.end
if start < 0 {
start += chunks
}
if end <= 0 {
end += chunks
}
if end <= start {
return
}
switch {
case start < 0:
start = 0
case start >= chunks:
return
}
switch {
case end <= start:
end = start + 1
case end >= chunks:
end = chunks
}
for i := start; i < end; i++ {
cb(i)
}
}
walkChunkRanges := func(crs []chunkRange, size int64, cb func(chunk int64)) {
for _, cr := range crs {
walkChunkRange(cr, size, cb)
}
}
v, ok := in["chunks"]
if !ok {
return nil, errors.New("missing chunks parameter")
}
s, ok := v.(string)
if !ok {
return nil, errors.New("invalid chunks parameter")
}
delete(in, "chunks")
crs, err := parseChunks(s)
if err != nil {
return nil, errors.Wrap(err, "invalid chunks parameter")
}
var files [][2]string
for k, v := range in {
if !strings.HasPrefix(k, "file") {
return nil, errors.Errorf("invalid parameter %s=%s", k, v)
}
switch v := v.(type) {
case string:
files = append(files, [2]string{v, f.unwrapRemote(v)})
default:
return nil, errors.Errorf("invalid parameter %s=%s", k, v)
}
}
type fileStatus struct {
Error string
FetchedChunks int
}
fetchedChunks := make(map[string]fileStatus, len(files))
for _, pair := range files {
file, remote := pair[0], pair[1]
var status fileStatus
o, err := f.NewObject(remote)
if err != nil {
fetchedChunks[file] = fileStatus{Error: err.Error()}
continue
}
co := o.(*Object)
err = co.refreshFromSource(true)
if err != nil {
fetchedChunks[file] = fileStatus{Error: err.Error()}
continue
}
handle := NewObjectHandle(co, f)
handle.UseMemory = false
handle.scaleWorkers(1)
walkChunkRanges(crs, co.Size(), func(chunk int64) {
_, err := handle.getChunk(chunk * f.ChunkSize())
if err != nil {
if status.Error == "" {
status.Error = err.Error()
}
} else {
status.FetchedChunks++
}
})
fetchedChunks[file] = status
}
return rc.Params{"status": fetchedChunks}, nil
}
// receiveChangeNotify is a wrapper to notifications sent from the wrapped FS about changed files
func (f *Fs) receiveChangeNotify(forgetPath string, entryType fs.EntryType) {
if crypt, yes := f.isWrappedByCrypt(); yes {

View File

@@ -1,455 +0,0 @@
// +build !plan9
package cache_test
import (
"math/rand"
"os"
"path"
"strconv"
"testing"
"time"
"fmt"
"github.com/ncw/rclone/backend/cache"
_ "github.com/ncw/rclone/backend/drive"
"github.com/ncw/rclone/fs"
"github.com/stretchr/testify/require"
)
func TestInternalUploadTempDirCreated(t *testing.T) {
id := fmt.Sprintf("tiutdc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id)})
defer runInstance.cleanupFs(t, rootFs, boltDb)
_, err := os.Stat(path.Join(runInstance.tmpUploadDir, id))
require.NoError(t, err)
}
func testInternalUploadQueueOneFile(t *testing.T, id string, rootFs fs.Fs, boltDb *cache.Persistent) {
// create some rand test data
testSize := int64(524288000)
testReader := runInstance.randomReader(t, testSize)
bu := runInstance.listenForBackgroundUpload(t, rootFs, "one")
runInstance.writeRemoteReader(t, rootFs, "one", testReader)
// validate that it exists in temp fs
ti, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.NoError(t, err)
if runInstance.rootIsCrypt {
require.Equal(t, int64(524416032), ti.Size())
} else {
require.Equal(t, testSize, ti.Size())
}
de1, err := runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Len(t, de1, 1)
runInstance.completeBackgroundUpload(t, "one", bu)
// check if it was removed from temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.True(t, os.IsNotExist(err))
// check if it can be read
data2, err := runInstance.readDataFromRemote(t, rootFs, "one", 0, int64(1024), false)
require.NoError(t, err)
require.Len(t, data2, 1024)
}
func TestInternalUploadQueueOneFileNoRest(t *testing.T) {
id := fmt.Sprintf("tiuqofnr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "0s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
func TestInternalUploadQueueOneFileWithRest(t *testing.T) {
id := fmt.Sprintf("tiuqofwr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
func TestInternalUploadMoveExistingFile(t *testing.T) {
id := fmt.Sprintf("tiumef%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "3s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("one")
require.NoError(t, err)
err = rootFs.Mkdir("one/test")
require.NoError(t, err)
err = rootFs.Mkdir("second")
require.NoError(t, err)
// create some rand test data
testSize := int64(10485760)
testReader := runInstance.randomReader(t, testSize)
runInstance.writeObjectReader(t, rootFs, "one/test/data.bin", testReader)
runInstance.completeAllBackgroundUploads(t, rootFs, "one/test/data.bin")
de1, err := runInstance.list(t, rootFs, "one/test")
require.NoError(t, err)
require.Len(t, de1, 1)
time.Sleep(time.Second * 5)
//_ = os.Remove(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one/test")))
//require.NoError(t, err)
err = runInstance.dirMove(t, rootFs, "one/test", "second/test")
require.NoError(t, err)
// check if it can be read
de1, err = runInstance.list(t, rootFs, "second/test")
require.NoError(t, err)
require.Len(t, de1, 1)
}
func TestInternalUploadTempPathCleaned(t *testing.T) {
id := fmt.Sprintf("tiutpc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "5s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("one")
require.NoError(t, err)
err = rootFs.Mkdir("one/test")
require.NoError(t, err)
err = rootFs.Mkdir("second")
require.NoError(t, err)
// create some rand test data
testSize := int64(1048576)
testReader := runInstance.randomReader(t, testSize)
testReader2 := runInstance.randomReader(t, testSize)
runInstance.writeObjectReader(t, rootFs, "one/test/data.bin", testReader)
runInstance.writeObjectReader(t, rootFs, "second/data.bin", testReader2)
runInstance.completeAllBackgroundUploads(t, rootFs, "one/test/data.bin")
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one/test")))
require.True(t, os.IsNotExist(err))
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.True(t, os.IsNotExist(err))
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second")))
require.False(t, os.IsNotExist(err))
runInstance.completeAllBackgroundUploads(t, rootFs, "second/data.bin")
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/data.bin")))
require.True(t, os.IsNotExist(err))
de1, err := runInstance.list(t, rootFs, "one/test")
require.NoError(t, err)
require.Len(t, de1, 1)
// check if it can be read
de1, err = runInstance.list(t, rootFs, "second")
require.NoError(t, err)
require.Len(t, de1, 1)
}
func TestInternalUploadQueueMoreFiles(t *testing.T) {
id := fmt.Sprintf("tiuqmf%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir("test")
require.NoError(t, err)
minSize := 5242880
maxSize := 10485760
totalFiles := 10
rand.Seed(time.Now().Unix())
lastFile := ""
for i := 0; i < totalFiles; i++ {
size := int64(rand.Intn(maxSize-minSize) + minSize)
testReader := runInstance.randomReader(t, size)
remote := "test/" + strconv.Itoa(i) + ".bin"
runInstance.writeRemoteReader(t, rootFs, remote, testReader)
// validate that it exists in temp fs
ti, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, remote)))
require.NoError(t, err)
require.Equal(t, size, runInstance.cleanSize(t, ti.Size()))
if runInstance.wrappedIsExternal && i < totalFiles-1 {
time.Sleep(time.Second * 3)
}
lastFile = remote
}
// check if cache lists all files, likely temp upload didn't finish yet
de1, err := runInstance.list(t, rootFs, "test")
require.NoError(t, err)
require.Len(t, de1, totalFiles)
// wait for background uploader to do its thing
runInstance.completeAllBackgroundUploads(t, rootFs, lastFile)
// retry until we have no more temp files and fail if they don't go down to 0
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test")))
require.True(t, os.IsNotExist(err))
// check if cache lists all files
de1, err = runInstance.list(t, rootFs, "test")
require.NoError(t, err)
require.Len(t, de1, totalFiles)
}
func TestInternalUploadTempFileOperations(t *testing.T) {
id := "tiutfo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
// create some rand test data
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// check if it can be read
data1, err := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data1)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
// test DirMove - allowed
err = runInstance.dirMove(t, rootFs, "test", "second")
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
_, err = rootFs.NewObject("second/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/one")))
require.NoError(t, err)
_, err = boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "test/one")))
require.Error(t, err)
var started bool
started, err = boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "second/one")))
require.NoError(t, err)
require.False(t, started)
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
}
// test Rmdir - allowed
err = runInstance.rm(t, rootFs, "test")
require.Error(t, err)
require.Contains(t, err.Error(), "directory not empty")
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
started, err := boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "test/one")))
require.False(t, started)
require.NoError(t, err)
// test Move/Rename -- allowed
err = runInstance.move(t, rootFs, path.Join("test", "one"), path.Join("test", "second"))
if err != errNotSupported {
require.NoError(t, err)
// try to read from it
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
_, err = rootFs.NewObject("test/second")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/second", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/second")))
require.NoError(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
}
// test Copy -- allowed
err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third"))
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/third")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/third")))
require.NoError(t, err)
}
// test Remove -- allowed
err = runInstance.rm(t, rootFs, "test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.Error(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// test Update -- allowed
firstModTime, err := runInstance.modTime(t, rootFs, "test/one")
require.NoError(t, err)
err = runInstance.updateData(t, rootFs, "test/one", "one content", " updated")
require.NoError(t, err)
obj2, err := rootFs.NewObject("test/one")
require.NoError(t, err)
data2 := runInstance.readDataFromObj(t, obj2, 0, int64(len("one content updated")), false)
require.Equal(t, "one content updated", string(data2))
tmpInfo, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
if runInstance.rootIsCrypt {
require.Equal(t, int64(67), tmpInfo.Size())
} else {
require.Equal(t, int64(len(data2)), tmpInfo.Size())
}
// test SetModTime -- allowed
secondModTime, err := runInstance.modTime(t, rootFs, "test/one")
require.NoError(t, err)
require.NotEqual(t, secondModTime, firstModTime)
require.NotEqual(t, time.Time{}, firstModTime)
require.NotEqual(t, time.Time{}, secondModTime)
}
func TestInternalUploadUploadingFileOperations(t *testing.T) {
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
// create some rand test data
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// check if it can be read
data1, err := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data1)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
err = boltDb.SetPendingUploadToStarted(runInstance.encryptRemoteIfNeeded(t, path.Join(rootFs.Root(), "test/one")))
require.NoError(t, err)
// test DirMove
err = runInstance.dirMove(t, rootFs, "test", "second")
if err != errNotSupported {
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/one")))
require.Error(t, err)
}
// test Rmdir
err = runInstance.rm(t, rootFs, "test")
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
// test Move/Rename
err = runInstance.move(t, rootFs, path.Join("test", "one"), path.Join("test", "second"))
if err != errNotSupported {
require.Error(t, err)
// try to read from it
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/second")
require.Error(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/second")))
require.Error(t, err)
}
// test Copy -- allowed
err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third"))
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
_, err = rootFs.NewObject("test/third")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/third")))
require.NoError(t, err)
}
// test Remove
err = runInstance.rm(t, rootFs, "test/one")
require.Error(t, err)
_, err = rootFs.NewObject("test/one")
require.NoError(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// test Update - this seems to work. Why? FIXME
//firstModTime, err := runInstance.modTime(t, rootFs, "test/one")
//require.NoError(t, err)
//err = runInstance.updateData(t, rootFs, "test/one", "one content", " updated", func() {
// data2 := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len("one content updated")), true)
// require.Equal(t, "one content", string(data2))
//
// tmpInfo, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
// require.NoError(t, err)
// if runInstance.rootIsCrypt {
// require.Equal(t, int64(67), tmpInfo.Size())
// } else {
// require.Equal(t, int64(len(data2)), tmpInfo.Size())
// }
//})
//require.Error(t, err)
	// test SetModTime -- seems to work because of the previous test
//secondModTime, err := runInstance.modTime(t, rootFs, "test/one")
//require.NoError(t, err)
//require.Equal(t, secondModTime, firstModTime)
//require.NotEqual(t, time.Time{}, firstModTime)
//require.NotEqual(t, time.Time{}, secondModTime)
}

cache_upload_test.go
@@ -1500,9 +1469,6 @@ func (r *run) cleanupFs(t *testing.T, f fs.Fs, b *cache.Persistent) {
}
r.tempFiles = nil
debug.FreeOSMemory()
- for k, v := range r.runDefaultFlagMap {
- _ = flag.Set(k, v)
- }
}
func (r *run) randomBytes(t *testing.T, size int64) []byte {

backend/cache/handle.go

@@ -49,12 +49,13 @@ type Handle struct {
 	offset         int64
 	seenOffsets    map[int64]bool
 	mu             sync.Mutex
+	workersWg      sync.WaitGroup
 	confirmReading chan bool
-	UseMemory bool
-	workers   []*worker
-	closed    bool
-	reading   bool
+	workers     int
+	maxWorkerID int
+	UseMemory   bool
+	closed      bool
+	reading     bool
 }
 // NewObjectHandle returns a new Handle for an existing Object
@@ -95,7 +96,7 @@ func (r *Handle) String() string {
 // startReadWorkers will start the worker pool
 func (r *Handle) startReadWorkers() {
-	if r.hasAtLeastOneWorker() {
+	if r.workers > 0 {
 		return
 	}
 	totalWorkers := r.cacheFs().opt.TotalWorkers
@@ -117,26 +118,27 @@ func (r *Handle) startReadWorkers() {
 // scaleOutWorkers will increase the worker pool count by the provided amount
 func (r *Handle) scaleWorkers(desired int) {
-	current := len(r.workers)
+	current := r.workers
 	if current == desired {
 		return
 	}
 	if current > desired {
 		// scale in gracefully
-		for i := 0; i < current-desired; i++ {
+		for r.workers > desired {
 			r.preloadQueue <- -1
+			r.workers--
 		}
 	} else {
 		// scale out
-		for i := 0; i < desired-current; i++ {
+		for r.workers < desired {
 			w := &worker{
 				r:  r,
-				ch: r.preloadQueue,
-				id: current + i,
+				id: r.maxWorkerID,
 			}
+			r.workersWg.Add(1)
+			r.workers++
+			r.maxWorkerID++
 			go w.run()
-			r.workers = append(r.workers, w)
 		}
 	}
 	// ignore first scale out from 0
@@ -148,7 +150,7 @@ func (r *Handle) scaleWorkers(desired int) {
 func (r *Handle) confirmExternalReading() {
 	// if we have a max value of workers
 	// then we skip this step
-	if len(r.workers) > 1 ||
+	if r.workers > 1 ||
 		!r.cacheFs().plexConnector.isConfigured() {
 		return
 	}
@@ -178,7 +180,7 @@ func (r *Handle) queueOffset(offset int64) {
 		}
 	}
-	for i := 0; i < len(r.workers); i++ {
+	for i := 0; i < r.workers; i++ {
 		o := r.preloadOffset + int64(r.cacheFs().opt.ChunkSize)*int64(i)
 		if o < 0 || o >= r.cachedObject.Size() {
 			continue
@@ -193,16 +195,6 @@ func (r *Handle) queueOffset(offset int64) {
 	}
 }
-func (r *Handle) hasAtLeastOneWorker() bool {
-	oneWorker := false
-	for i := 0; i < len(r.workers); i++ {
-		if r.workers[i].isRunning() {
-			oneWorker = true
-		}
-	}
-	return oneWorker
-}
 // getChunk is called by the FS to retrieve a specific chunk of known start and size from where it can find it
 // it can be from transient or persistent cache
 // it will also build the chunk from the cache's specific chunk boundaries and build the final desired chunk in a buffer
@@ -243,7 +235,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
 	// not found in ram or
 	// the worker didn't manage to download the chunk in time so we abort and close the stream
 	if err != nil || len(data) == 0 || !found {
-		if !r.hasAtLeastOneWorker() {
+		if r.workers == 0 {
 			fs.Errorf(r, "out of workers")
 			return nil, io.ErrUnexpectedEOF
 		}
@@ -304,14 +296,7 @@ func (r *Handle) Close() error {
 	close(r.preloadQueue)
 	r.closed = true
 	// wait for workers to complete their jobs before returning
-	waitCount := 3
-	for i := 0; i < len(r.workers); i++ {
-		waitIdx := 0
-		for r.workers[i].isRunning() && waitIdx < waitCount {
-			time.Sleep(time.Second)
-			waitIdx++
-		}
-	}
+	r.workersWg.Wait()
 	r.memory.db.Flush()
 	fs.Debugf(r, "cache reader closed %v", r.offset)
@@ -348,12 +333,9 @@ func (r *Handle) Seek(offset int64, whence int) (int64, error) {
 }
 type worker struct {
-	r       *Handle
-	ch      <-chan int64
-	rc      io.ReadCloser
-	id      int
-	running bool
-	mu      sync.Mutex
+	r  *Handle
+	rc io.ReadCloser
+	id int
 }
 // String is a representation of this worker
@@ -398,33 +380,19 @@ func (w *worker) reader(offset, end int64, closeOpen bool) (io.ReadCloser, error
 	})
 }
-func (w *worker) isRunning() bool {
-	w.mu.Lock()
-	defer w.mu.Unlock()
-	return w.running
-}
-func (w *worker) setRunning(f bool) {
-	w.mu.Lock()
-	defer w.mu.Unlock()
-	w.running = f
-}
 // run is the main loop for the worker which receives offsets to preload
 func (w *worker) run() {
	var err error
	var data []byte
-	defer w.setRunning(false)
 	defer func() {
 		if w.rc != nil {
 			_ = w.rc.Close()
-			w.setRunning(false)
 		}
+		w.r.workersWg.Done()
 	}()
 	for {
-		chunkStart, open := <-w.ch
-		w.setRunning(true)
+		chunkStart, open := <-w.r.preloadQueue
 		if chunkStart < 0 || !open {
 			break
 		}

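The refactor above drops the per-worker running flag and isRunning() polling in favour of a plain counter plus a sync.WaitGroup, with -1 on the preload queue acting as a scale-in sentinel. A minimal self-contained sketch of that pattern (names are illustrative, not rclone's):

package main

import (
	"fmt"
	"sync"
)

type pool struct {
	queue   chan int64     // offsets to preload; -1 is the scale-in sentinel
	wg      sync.WaitGroup // replaces polling isRunning() on each worker
	workers int            // current worker count, like Handle.workers
}

func (p *pool) scale(desired int) {
	for p.workers > desired { // scale in: each sentinel stops one worker
		p.queue <- -1
		p.workers--
	}
	for p.workers < desired { // scale out
		p.wg.Add(1)
		p.workers++
		go p.run()
	}
}

func (p *pool) run() {
	defer p.wg.Done()
	for offset := range p.queue {
		if offset < 0 {
			return // negative offset is the sentinel: exit cleanly
		}
		fmt.Println("preload chunk at", offset)
	}
}

func main() {
	p := &pool{queue: make(chan int64)}
	p.scale(2)
	p.queue <- 0
	p.queue <- 4096
	p.scale(0)  // deliver one sentinel per worker
	p.wg.Wait() // what Close() now does via workersWg
}
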
backend/cache/object.go

@@ -208,11 +208,17 @@ func (o *Object) SetModTime(t time.Time) error {
 // Open is used to request a specific part of the file using fs.RangeOption
 func (o *Object) Open(options ...fs.OpenOption) (io.ReadCloser, error) {
-	if err := o.refreshFromSource(true); err != nil {
+	var err error
+	if o.Object == nil {
+		err = o.refreshFromSource(true)
+	} else {
+		err = o.refresh()
+	}
+	if err != nil {
 		return nil, err
 	}
-	var err error
 	cacheReader := NewObjectHandle(o, o.CacheFs)
 	var offset, limit int64 = 0, -1
 	for _, option := range options {

backend/cache/plex.go vendored

@@ -3,6 +3,7 @@
 package cache
 import (
+	"crypto/tls"
 	"encoding/json"
 	"fmt"
 	"net/http"
@@ -54,6 +55,7 @@ type plexConnector struct {
 	username   string
 	password   string
 	token      string
+	insecure   bool
 	f          *Fs
 	mu         sync.Mutex
 	running    bool
@@ -63,7 +65,7 @@ type plexConnector struct {
 }
 // newPlexConnector connects to a Plex server and generates a token
-func newPlexConnector(f *Fs, plexURL, username, password string, saveToken func(string)) (*plexConnector, error) {
+func newPlexConnector(f *Fs, plexURL, username, password string, insecure bool, saveToken func(string)) (*plexConnector, error) {
 	u, err := url.ParseRequestURI(strings.TrimRight(plexURL, "/"))
 	if err != nil {
 		return nil, err
@@ -75,6 +77,7 @@ func newPlexConnector(f *Fs, plexURL, username, password string, saveToken func(
 		username:   username,
 		password:   password,
 		token:      "",
+		insecure:   insecure,
 		stateCache: cache.New(time.Hour, time.Minute),
 		saveToken:  saveToken,
 	}
@@ -83,7 +86,7 @@ func newPlexConnector(f *Fs, plexURL, username, password string, saveToken func(
 }
 // newPlexConnector connects to a Plex server and generates a token
-func newPlexConnectorWithToken(f *Fs, plexURL, token string) (*plexConnector, error) {
+func newPlexConnectorWithToken(f *Fs, plexURL, token string, insecure bool) (*plexConnector, error) {
 	u, err := url.ParseRequestURI(strings.TrimRight(plexURL, "/"))
 	if err != nil {
 		return nil, err
@@ -93,6 +96,7 @@ func newPlexConnectorWithToken(f *Fs, plexURL, token string) (*plexConnector, er
 		f:          f,
 		url:        u,
 		token:      token,
+		insecure:   insecure,
 		stateCache: cache.New(time.Hour, time.Minute),
 	}
 	pc.listenWebsocket()
@@ -107,14 +111,26 @@ func (p *plexConnector) closeWebsocket() {
 	p.running = false
 }
+func (p *plexConnector) websocketDial() (*websocket.Conn, error) {
+	u := strings.TrimRight(strings.Replace(strings.Replace(
+		p.url.String(), "http://", "ws://", 1), "https://", "wss://", 1), "/")
+	url := fmt.Sprintf(defPlexNotificationURL, u, p.token)
+	config, err := websocket.NewConfig(url, "http://localhost")
+	if err != nil {
+		return nil, err
+	}
+	if p.insecure {
+		config.TlsConfig = &tls.Config{InsecureSkipVerify: true}
+	}
+	return websocket.DialConfig(config)
+}
 func (p *plexConnector) listenWebsocket() {
 	p.runningMu.Lock()
 	defer p.runningMu.Unlock()
-	u := strings.Replace(p.url.String(), "http://", "ws://", 1)
-	u = strings.Replace(u, "https://", "wss://", 1)
-	conn, err := websocket.Dial(fmt.Sprintf(defPlexNotificationURL, strings.TrimRight(u, "/"), p.token),
-		"", "http://localhost")
+	conn, err := p.websocketDial()
 	if err != nil {
 		fs.Errorf("plex", "%v", err)
 		return

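For context, the plex_insecure behaviour added here reduces to one knob: an optional tls.Config with InsecureSkipVerify set on the dialled connection. A rough equivalent for a plain HTTP client (hypothetical helper, not rclone code; the websocket config above applies the same idea to wss:// dials):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

// newHTTPClient skips certificate verification when insecure is set,
// for hosts whose certificates cannot be verified normally.
func newHTTPClient(insecure bool) *http.Client {
	if !insecure {
		return http.DefaultClient
	}
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
}

func main() {
	client := newHTTPClient(true)
	fmt.Printf("%T\n", client.Transport)
}
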
backend/crypt/crypt.go

@@ -130,16 +130,20 @@ func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) {
 	if strings.HasPrefix(remote, name+":") {
 		return nil, errors.New("can't point crypt remote at itself - check the value of the remote setting")
 	}
+	wInfo, wName, wPath, wConfig, err := fs.ConfigFs(remote)
+	if err != nil {
+		return nil, errors.Wrapf(err, "failed to parse remote %q to wrap", remote)
+	}
 	// Look for a file first
-	remotePath := path.Join(remote, cipher.EncryptFileName(rpath))
-	wrappedFs, err := fs.NewFs(remotePath)
+	remotePath := path.Join(wPath, cipher.EncryptFileName(rpath))
+	wrappedFs, err := wInfo.NewFs(wName, remotePath, wConfig)
 	// if that didn't produce a file, look for a directory
 	if err != fs.ErrorIsFile {
-		remotePath = path.Join(remote, cipher.EncryptDirName(rpath))
-		wrappedFs, err = fs.NewFs(remotePath)
+		remotePath = path.Join(wPath, cipher.EncryptDirName(rpath))
+		wrappedFs, err = wInfo.NewFs(wName, remotePath, wConfig)
 	}
 	if err != fs.ErrorIsFile && err != nil {
-		return nil, errors.Wrapf(err, "failed to make remote %q to wrap", remotePath)
+		return nil, errors.Wrapf(err, "failed to make remote %s:%q to wrap", wName, remotePath)
 	}
 	f := &Fs{
 		Fs: wrappedFs,

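The reason NewFs now joins only the parsed wPath instead of the whole remote string: path.Join always inserts a separator, so a wrapped remote configured without a leading / would silently gain one. A quick demonstration:

package main

import (
	"fmt"
	"path"
)

func main() {
	// Joining the whole remote string loses the "no leading /" form:
	fmt.Println(path.Join("cloud:", "dir")) // cloud:/dir
	// Joining only the path part and re-attaching the remote name keeps it:
	fmt.Println("cloud:" + path.Join("", "dir")) // cloud:dir
}
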
backend/drive/drive.go

@@ -779,7 +779,10 @@ func (f *Fs) CreateDir(pathID, leaf string) (newID string, err error) {
}
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Create(createInfo).Fields(googleapi.Field(partialFields)).SupportsTeamDrives(f.isTeamDrive).Do()
info, err = f.svc.Files.Create(createInfo).
Fields(googleapi.Field(partialFields)).
SupportsTeamDrives(f.isTeamDrive).
Do()
return shouldRetry(err)
})
if err != nil {
@@ -807,7 +810,9 @@ func (f *Fs) exportFormats() map[string][]string {
var about *drive.About
var err error
err = f.pacer.Call(func() (bool, error) {
about, err = f.svc.About.Get().Fields("exportFormats").Do()
about, err = f.svc.About.Get().
Fields("exportFormats").
Do()
return shouldRetry(err)
})
if err != nil {
@@ -1178,7 +1183,12 @@ func (f *Fs) PutUnchecked(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOpt
// Make the API request to upload metadata and file data.
// Don't retry, return a retry error instead
err = f.pacer.CallNoRetry(func() (bool, error) {
info, err = f.svc.Files.Create(createInfo).Media(in, googleapi.ContentType("")).Fields(googleapi.Field(partialFields)).SupportsTeamDrives(f.isTeamDrive).KeepRevisionForever(f.opt.KeepRevisionForever).Do()
info, err = f.svc.Files.Create(createInfo).
Media(in, googleapi.ContentType("")).
Fields(googleapi.Field(partialFields)).
SupportsTeamDrives(f.isTeamDrive).
KeepRevisionForever(f.opt.KeepRevisionForever).
Do()
return shouldRetry(err)
})
if err != nil {
@@ -1217,7 +1227,12 @@ func (f *Fs) MergeDirs(dirs []fs.Directory) error {
fs.Infof(srcDir, "merging %q", info.Name)
// Move the file into the destination
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Files.Update(info.Id, nil).RemoveParents(srcDir.ID()).AddParents(dstDir.ID()).Fields("").SupportsTeamDrives(f.isTeamDrive).Do()
_, err = f.svc.Files.Update(info.Id, nil).
RemoveParents(srcDir.ID()).
AddParents(dstDir.ID()).
Fields("").
SupportsTeamDrives(f.isTeamDrive).
Do()
return shouldRetry(err)
})
if err != nil {
@@ -1254,9 +1269,15 @@ func (f *Fs) rmdir(directoryID string, useTrash bool) error {
info := drive.File{
Trashed: true,
}
_, err = f.svc.Files.Update(directoryID, &info).Fields("").SupportsTeamDrives(f.isTeamDrive).Do()
_, err = f.svc.Files.Update(directoryID, &info).
Fields("").
SupportsTeamDrives(f.isTeamDrive).
Do()
} else {
err = f.svc.Files.Delete(directoryID).Fields("").SupportsTeamDrives(f.isTeamDrive).Do()
err = f.svc.Files.Delete(directoryID).
Fields("").
SupportsTeamDrives(f.isTeamDrive).
Do()
}
return shouldRetry(err)
})
@@ -1335,7 +1356,11 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
var info *drive.File
err = o.fs.pacer.Call(func() (bool, error) {
info, err = o.fs.svc.Files.Copy(srcObj.id, createInfo).Fields(googleapi.Field(partialFields)).SupportsTeamDrives(f.isTeamDrive).KeepRevisionForever(f.opt.KeepRevisionForever).Do()
info, err = o.fs.svc.Files.Copy(srcObj.id, createInfo).
Fields(googleapi.Field(partialFields)).
SupportsTeamDrives(f.isTeamDrive).
KeepRevisionForever(f.opt.KeepRevisionForever).
Do()
return shouldRetry(err)
})
if err != nil {
@@ -1364,9 +1389,15 @@ func (f *Fs) Purge() error {
info := drive.File{
Trashed: true,
}
_, err = f.svc.Files.Update(f.dirCache.RootID(), &info).Fields("").SupportsTeamDrives(f.isTeamDrive).Do()
_, err = f.svc.Files.Update(f.dirCache.RootID(), &info).
Fields("").
SupportsTeamDrives(f.isTeamDrive).
Do()
} else {
err = f.svc.Files.Delete(f.dirCache.RootID()).Fields("").SupportsTeamDrives(f.isTeamDrive).Do()
err = f.svc.Files.Delete(f.dirCache.RootID()).
Fields("").
SupportsTeamDrives(f.isTeamDrive).
Do()
}
return shouldRetry(err)
})
@@ -1452,7 +1483,12 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
// Do the move
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Update(srcObj.id, dstInfo).RemoveParents(srcParentID).AddParents(dstParents).Fields(googleapi.Field(partialFields)).SupportsTeamDrives(f.isTeamDrive).Do()
info, err = f.svc.Files.Update(srcObj.id, dstInfo).
RemoveParents(srcParentID).
AddParents(dstParents).
Fields(googleapi.Field(partialFields)).
SupportsTeamDrives(f.isTeamDrive).
Do()
return shouldRetry(err)
})
if err != nil {
@@ -1489,7 +1525,10 @@ func (f *Fs) PublicLink(remote string) (link string, err error) {
err = f.pacer.Call(func() (bool, error) {
// TODO: On TeamDrives this might fail if lacking permissions to change ACLs.
// Need to either check `canShare` attribute on the object or see if a sufficient permission is already present.
_, err = f.svc.Permissions.Create(id, permission).Fields(googleapi.Field("id")).SupportsTeamDrives(f.isTeamDrive).Do()
_, err = f.svc.Permissions.Create(id, permission).
Fields(googleapi.Field("id")).
SupportsTeamDrives(f.isTeamDrive).
Do()
return shouldRetry(err)
})
if err != nil {
@@ -1584,7 +1623,12 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
Name: leaf,
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Files.Update(srcID, &patch).RemoveParents(srcDirectoryID).AddParents(dstDirectoryID).Fields("").SupportsTeamDrives(f.isTeamDrive).Do()
_, err = f.svc.Files.Update(srcID, &patch).
RemoveParents(srcDirectoryID).
AddParents(dstDirectoryID).
Fields("").
SupportsTeamDrives(f.isTeamDrive).
Do()
return shouldRetry(err)
})
if err != nil {
@@ -1621,7 +1665,9 @@ func (f *Fs) changeNotifyRunner(notifyFunc func(string, fs.EntryType), pollInter
var err error
var startPageToken *drive.StartPageToken
err = f.pacer.Call(func() (bool, error) {
startPageToken, err = f.svc.Changes.GetStartPageToken().SupportsTeamDrives(f.isTeamDrive).Do()
startPageToken, err = f.svc.Changes.GetStartPageToken().
SupportsTeamDrives(f.isTeamDrive).
Do()
return shouldRetry(err)
})
if err != nil {
@@ -1635,7 +1681,8 @@ func (f *Fs) changeNotifyRunner(notifyFunc func(string, fs.EntryType), pollInter
var changeList *drive.ChangeList
err = f.pacer.Call(func() (bool, error) {
changesCall := f.svc.Changes.List(pageToken).Fields("nextPageToken,newStartPageToken,changes(fileId,file(name,parents,mimeType))")
changesCall := f.svc.Changes.List(pageToken).
Fields("nextPageToken,newStartPageToken,changes(fileId,file(name,parents,mimeType))")
if f.opt.ListChunk > 0 {
changesCall.PageSize(f.opt.ListChunk)
}
@@ -1862,7 +1909,10 @@ func (o *Object) SetModTime(modTime time.Time) error {
// Set modified date
var info *drive.File
err = o.fs.pacer.Call(func() (bool, error) {
info, err = o.fs.svc.Files.Update(o.id, updateInfo).Fields(googleapi.Field(partialFields)).SupportsTeamDrives(o.fs.isTeamDrive).Do()
info, err = o.fs.svc.Files.Update(o.id, updateInfo).
Fields(googleapi.Field(partialFields)).
SupportsTeamDrives(o.fs.isTeamDrive).
Do()
return shouldRetry(err)
})
if err != nil {
@@ -2014,7 +2064,12 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
if size == 0 || size < int64(o.fs.opt.UploadCutoff) {
// Don't retry, return a retry error instead
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
info, err = o.fs.svc.Files.Update(o.id, updateInfo).Media(in, googleapi.ContentType("")).Fields(googleapi.Field(partialFields)).SupportsTeamDrives(o.fs.isTeamDrive).KeepRevisionForever(o.fs.opt.KeepRevisionForever).Do()
info, err = o.fs.svc.Files.Update(o.id, updateInfo).
Media(in, googleapi.ContentType("")).
Fields(googleapi.Field(partialFields)).
SupportsTeamDrives(o.fs.isTeamDrive).
KeepRevisionForever(o.fs.opt.KeepRevisionForever).
Do()
return shouldRetry(err)
})
if err != nil {
@@ -2042,9 +2097,15 @@ func (o *Object) Remove() error {
info := drive.File{
Trashed: true,
}
_, err = o.fs.svc.Files.Update(o.id, &info).Fields("").SupportsTeamDrives(o.fs.isTeamDrive).Do()
_, err = o.fs.svc.Files.Update(o.id, &info).
Fields("").
SupportsTeamDrives(o.fs.isTeamDrive).
Do()
} else {
err = o.fs.svc.Files.Delete(o.id).Fields("").SupportsTeamDrives(o.fs.isTeamDrive).Do()
err = o.fs.svc.Files.Delete(o.id).
Fields("").
SupportsTeamDrives(o.fs.isTeamDrive).
Do()
}
return shouldRetry(err)
})

backend/googlecloudstorage/googlecloudstorage.go

@@ -345,7 +345,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	}
 	// try loading service account credentials from env variable, then from a file
-	if opt.ServiceAccountCredentials != "" && opt.ServiceAccountFile != "" {
+	if opt.ServiceAccountCredentials == "" && opt.ServiceAccountFile != "" {
 		loadedCreds, err := ioutil.ReadFile(os.ExpandEnv(opt.ServiceAccountFile))
 		if err != nil {
 			return nil, errors.Wrap(err, "error opening service account credentials file")

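The one-character fix above (!= to ==) changes the guard's meaning from "both options set" to "no inline credentials, but a file given", so the file now acts as a fallback rather than being read only when it would be ignored anyway. A condensed sketch of the resulting precedence (helper name is hypothetical):

package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

// resolveCreds prefers inline credentials; the file is only read
// when no inline value was configured.
func resolveCreds(inline, file string) (string, error) {
	if inline == "" && file != "" {
		b, err := ioutil.ReadFile(os.ExpandEnv(file))
		if err != nil {
			return "", fmt.Errorf("error opening service account credentials file: %v", err)
		}
		return string(b), nil
	}
	return inline, nil
}

func main() {
	creds, err := resolveCreds("", "$HOME/service-account.json")
	fmt.Println(len(creds), err)
}
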
backend/jottacloud/api/types.go

@@ -233,16 +233,17 @@ GET http://www.jottacloud.com/JFS/<account>/<device>/<mountpoint>/.../<file>
 // JottaFile represents a Jottacloud file
 type JottaFile struct {
-	XMLName    xml.Name
-	Name       string `xml:"name,attr"`
-	Deleted    Flag   `xml:"deleted,attr"`
-	State      string `xml:"currentRevision>state"`
-	CreatedAt  Time   `xml:"currentRevision>created"`
-	ModifiedAt Time   `xml:"currentRevision>modified"`
-	Updated    Time   `xml:"currentRevision>updated"`
-	Size       int64  `xml:"currentRevision>size"`
-	MimeType   string `xml:"currentRevision>mime"`
-	MD5        string `xml:"currentRevision>md5"`
+	XMLName         xml.Name
+	Name            string `xml:"name,attr"`
+	Deleted         Flag   `xml:"deleted,attr"`
+	PublicSharePath string `xml:"publicSharePath"`
+	State           string `xml:"currentRevision>state"`
+	CreatedAt       Time   `xml:"currentRevision>created"`
+	ModifiedAt      Time   `xml:"currentRevision>modified"`
+	Updated         Time   `xml:"currentRevision>updated"`
+	Size            int64  `xml:"currentRevision>size"`
+	MimeType        string `xml:"currentRevision>mime"`
+	MD5             string `xml:"currentRevision>md5"`
 }
 // Error is a custom Error for wrapping Jottacloud error responses

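The currentRevision>... tags above are encoding/xml element paths, so the new PublicSharePath field decodes from a direct child while the revision fields reach one level deeper. A self-contained sketch (the XML shape is inferred from the struct tags, not from Jottacloud documentation):

package main

import (
	"encoding/xml"
	"fmt"
)

type jottaFile struct {
	Name            string `xml:"name,attr"`
	PublicSharePath string `xml:"publicSharePath"`
	State           string `xml:"currentRevision>state"`
}

func main() {
	data := []byte(`<file name="one">
  <publicSharePath>/share/abc</publicSharePath>
  <currentRevision><state>COMPLETED</state></currentRevision>
</file>`)
	var f jottaFile
	if err := xml.Unmarshal(data, &f); err != nil {
		panic(err)
	}
	fmt.Println(f.Name, f.PublicSharePath, f.State) // one /share/abc COMPLETED
}
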
backend/jottacloud/jottacloud.go

@@ -25,6 +25,7 @@ import (
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/walk"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/rest"
"github.com/pkg/errors"
@@ -39,6 +40,7 @@ const (
defaultMountpoint = "Sync"
rootURL = "https://www.jottacloud.com/jfs/"
apiURL = "https://api.jottacloud.com"
shareURL = "https://www.jottacloud.com/"
cachePrefix = "rclone-jcmd5-"
)
@@ -71,6 +73,16 @@ func init() {
Help: "Files bigger than this will be cached on disk to calculate the MD5 if required.",
Default: fs.SizeSuffix(10 * 1024 * 1024),
Advanced: true,
}, {
Name: "hard_delete",
Help: "Delete files permanently rather than putting them into the trash.",
Default: false,
Advanced: true,
}, {
Name: "unlink",
Help: "Remove existing public link to file/folder with link command rather than creating.",
Default: false,
Advanced: true,
}},
})
}
@@ -81,6 +93,8 @@ type Options struct {
Pass string `config:"pass"`
Mountpoint string `config:"mountpoint"`
MD5MemoryThreshold fs.SizeSuffix `config:"md5_memory_limit"`
HardDelete bool `config:"hard_delete"`
Unlink bool `config:"unlink"`
}
// Fs represents a remote jottacloud
@@ -227,9 +241,14 @@ func errorHandler(resp *http.Response) error {
 	return errResponse
 }
+// filePathRaw returns an unescaped file path (f.root, file)
+func (f *Fs) filePathRaw(file string) string {
+	return path.Join(f.endpointURL, replaceReservedChars(path.Join(f.root, file)))
+}
 // filePath returns an escaped file path (f.root, file)
 func (f *Fs) filePath(file string) string {
-	return rest.URLPathEscape(path.Join(f.endpointURL, replaceReservedChars(path.Join(f.root, file))))
+	return rest.URLPathEscape(f.filePathRaw(file))
 }
 // filePath returns an escaped file path (f.root, remote)
@@ -425,6 +444,102 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
return entries, nil
}
// listFileDirFn is called from listFileDir to handle an object.
type listFileDirFn func(fs.DirEntry) error
// List the objects and directories into entries, from a
// special kind of JottaFolder representing a FileDirList
func (f *Fs) listFileDir(remoteStartPath string, startFolder *api.JottaFolder, fn listFileDirFn) error {
pathPrefix := "/" + f.filePathRaw("") // Non-escaped prefix of API paths to be cut off, to be left with the remote path including the remoteStartPath
pathPrefixLength := len(pathPrefix)
startPath := path.Join(pathPrefix, remoteStartPath) // Non-escaped API path up to and including remoteStartPath, to decide if it should be created as a new dir object
startPathLength := len(startPath)
for i := range startFolder.Folders {
folder := &startFolder.Folders[i]
if folder.Deleted {
return nil
}
folderPath := path.Join(folder.Path, folder.Name)
remoteDirLength := len(folderPath) - pathPrefixLength
var remoteDir string
if remoteDirLength > 0 {
remoteDir = restoreReservedChars(folderPath[pathPrefixLength+1:])
if remoteDirLength > startPathLength {
d := fs.NewDir(remoteDir, time.Time(folder.ModifiedAt))
err := fn(d)
if err != nil {
return err
}
}
}
for i := range folder.Files {
file := &folder.Files[i]
if file.Deleted || file.State != "COMPLETED" {
continue
}
remoteFile := path.Join(remoteDir, restoreReservedChars(file.Name))
o, err := f.newObjectWithInfo(remoteFile, file)
if err != nil {
return err
}
err = fn(o)
if err != nil {
return err
}
}
}
return nil
}
// ListR lists the objects and directories of the Fs starting
// from dir recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
//
// Don't implement this unless you have a more efficient way
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
opts := rest.Opts{
Method: "GET",
Path: f.filePath(dir),
Parameters: url.Values{},
}
opts.Parameters.Set("mode", "list")
var resp *http.Response
var result api.JottaFolder // Could be JottaFileDirList, but JottaFolder is close enough
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok {
// does not exist
if apiErr.StatusCode == http.StatusNotFound {
return fs.ErrorDirNotFound
}
}
return errors.Wrap(err, "couldn't list files")
}
list := walk.NewListRHelper(callback)
err = f.listFileDir(dir, &result, func(entry fs.DirEntry) error {
return list.Add(entry)
})
if err != nil {
return err
}
return list.Flush()
}
// Creates from the parameters passed in a half finished Object which
// must have setMetaData called on it
//
@@ -498,7 +613,11 @@ func (f *Fs) purgeCheck(dir string, check bool) (err error) {
 		NoResponse: true,
 	}
-	opts.Parameters.Set("dlDir", "true")
+	if f.opt.HardDelete {
+		opts.Parameters.Set("rmDir", "true")
+	} else {
+		opts.Parameters.Set("dlDir", "true")
+	}
 	var resp *http.Response
 	err = f.pacer.Call(func() (bool, error) {
@@ -506,7 +625,7 @@ func (f *Fs) purgeCheck(dir string, check bool) (err error) {
 		return shouldRetry(resp, err)
 	})
 	if err != nil {
-		return errors.Wrap(err, "rmdir failed")
+		return errors.Wrap(err, "couldn't purge directory")
 	}
 	// TODO: Parse response?
@@ -579,7 +698,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
 	info, err := f.copyOrMove("cp", srcObj.filePath(), remote)
 	if err != nil {
-		return nil, errors.Wrap(err, "copy failed")
+		return nil, errors.Wrap(err, "couldn't copy file")
 	}
 	return f.newObjectWithInfo(remote, info)
@@ -609,7 +728,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
 	info, err := f.copyOrMove("mv", srcObj.filePath(), remote)
 	if err != nil {
-		return nil, errors.Wrap(err, "move failed")
+		return nil, errors.Wrap(err, "couldn't move file")
 	}
 	return f.newObjectWithInfo(remote, info)
@@ -653,11 +772,57 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
 	_, err = f.copyOrMove("mvDir", path.Join(f.endpointURL, replaceReservedChars(srcPath))+"/", dstRemote)
 	if err != nil {
-		return errors.Wrap(err, "moveDir failed")
+		return errors.Wrap(err, "couldn't move directory")
 	}
 	return nil
 }
// PublicLink generates a public link to the remote path (usually readable by anyone)
func (f *Fs) PublicLink(remote string) (link string, err error) {
opts := rest.Opts{
Method: "GET",
Path: f.filePath(remote),
Parameters: url.Values{},
}
if f.opt.Unlink {
opts.Parameters.Set("mode", "disableShare")
} else {
opts.Parameters.Set("mode", "enableShare")
}
var resp *http.Response
var result api.JottaFile
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &result)
return shouldRetry(resp, err)
})
if apiErr, ok := err.(*api.Error); ok {
// does not exist
if apiErr.StatusCode == http.StatusNotFound {
return "", fs.ErrorObjectNotFound
}
}
if err != nil {
if f.opt.Unlink {
return "", errors.Wrap(err, "couldn't remove public link")
}
return "", errors.Wrap(err, "couldn't create public link")
}
if f.opt.Unlink {
if result.PublicSharePath != "" {
return "", errors.Errorf("couldn't remove public link - %q", result.PublicSharePath)
}
return "", nil
}
if result.PublicSharePath == "" {
return "", errors.New("couldn't create public link - no link path received")
}
link = path.Join(shareURL, result.PublicSharePath)
return link, nil
}
// About gets quota information
func (f *Fs) About() (*fs.Usage, error) {
info, err := f.getAccountInfo()
@@ -666,8 +831,11 @@ func (f *Fs) About() (*fs.Usage, error) {
 	}
 	usage := &fs.Usage{
-		Total: fs.NewUsageValue(info.Capacity),
-		Used:  fs.NewUsageValue(info.Usage),
+		Used: fs.NewUsageValue(info.Usage),
 	}
+	if info.Capacity > 0 {
+		usage.Total = fs.NewUsageValue(info.Capacity)
+		usage.Free = fs.NewUsageValue(info.Capacity - info.Usage)
+	}
 	return usage, nil
 }
@@ -914,7 +1082,11 @@ func (o *Object) Remove() error {
 		Parameters: url.Values{},
 	}
-	opts.Parameters.Set("dl", "true")
+	if o.fs.opt.HardDelete {
+		opts.Parameters.Set("rm", "true")
+	} else {
+		opts.Parameters.Set("dl", "true")
+	}
 	return o.fs.pacer.Call(func() (bool, error) {
 		resp, err := o.fs.srv.CallXML(&opts, nil, nil)
@@ -924,12 +1096,14 @@ func (o *Object) Remove() error {
 // Check the interfaces are satisfied
 var (
-	_ fs.Fs        = (*Fs)(nil)
-	_ fs.Purger    = (*Fs)(nil)
-	_ fs.Copier    = (*Fs)(nil)
-	_ fs.Mover     = (*Fs)(nil)
-	_ fs.DirMover  = (*Fs)(nil)
-	_ fs.Abouter   = (*Fs)(nil)
-	_ fs.Object    = (*Object)(nil)
-	_ fs.MimeTyper = (*Object)(nil)
+	_ fs.Fs           = (*Fs)(nil)
+	_ fs.Purger       = (*Fs)(nil)
+	_ fs.Copier       = (*Fs)(nil)
+	_ fs.Mover        = (*Fs)(nil)
+	_ fs.DirMover     = (*Fs)(nil)
+	_ fs.ListRer      = (*Fs)(nil)
+	_ fs.PublicLinker = (*Fs)(nil)
+	_ fs.Abouter      = (*Fs)(nil)
+	_ fs.Object       = (*Object)(nil)
+	_ fs.MimeTyper    = (*Object)(nil)
 )

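The ListR implementation earlier in this file batches entries through walk.NewListRHelper rather than invoking the callback once per entry; Add buffers until a tranche is ready and Flush delivers the remainder. A minimal sketch of that pattern with synthetic entries:

package main

import (
	"fmt"
	"time"

	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/fs/walk"
)

func main() {
	callback := func(entries fs.DirEntries) error {
		fmt.Println("tranche of", len(entries), "entries")
		return nil
	}
	list := walk.NewListRHelper(callback)
	for i := 0; i < 3; i++ {
		// stand-in for the dir/file objects produced by listFileDir
		if err := list.Add(fs.NewDir(fmt.Sprintf("dir%d", i), time.Now())); err != nil {
			panic(err)
		}
	}
	if err := list.Flush(); err != nil { // deliver whatever is still buffered
		panic(err)
	}
}
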
backend/local/local.go

@@ -16,8 +16,10 @@ import (
"unicode/utf8"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
@@ -280,6 +282,13 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
// Follow symlinks if required
if f.opt.FollowSymlinks && (mode&os.ModeSymlink) != 0 {
fi, err = os.Stat(newPath)
if os.IsNotExist(err) {
// Skip bad symlinks
err = fserrors.NoRetryError(errors.Wrap(err, "symlink"))
fs.Errorf(newRemote, "Listing error: %v", err)
accounting.Stats.Error(err)
continue
}
if err != nil {
return nil, err
}

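The guard above works because os.Stat follows the link target while the directory listing only saw the link itself, so a dangling symlink surfaces here as IsNotExist. A quick demonstration:

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
)

func main() {
	dir, err := ioutil.TempDir("", "sym")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
	link := filepath.Join(dir, "dangling")
	if err := os.Symlink(filepath.Join(dir, "missing"), link); err != nil {
		panic(err)
	}
	_, err = os.Lstat(link) // the link itself exists
	fmt.Println("Lstat error:", err)
	_, err = os.Stat(link) // ...but its target does not
	fmt.Println("IsNotExist:", os.IsNotExist(err))
}
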
backend/onedrive/api/types.go

@@ -91,9 +91,10 @@ func (t *Timestamp) UnmarshalJSON(data []byte) error {
 // ItemReference groups data needed to reference a OneDrive item
 // across the service into a single structure.
 type ItemReference struct {
-	DriveID string `json:"driveId"` // Unique identifier for the Drive that contains the item. Read-only.
-	ID      string `json:"id"`      // Unique identifier for the item. Read/Write.
-	Path    string `json:"path"`    // Path that used to navigate to the item. Read/Write.
+	DriveID   string `json:"driveId"`   // Unique identifier for the Drive that contains the item. Read-only.
+	ID        string `json:"id"`        // Unique identifier for the item. Read/Write.
+	Path      string `json:"path"`      // Path used to navigate to the item. Read/Write.
+	DriveType string `json:"driveType"` // Type of the drive. Read-only.
 }
 // RemoteItemFacet groups data needed to reference a OneDrive remote item
@@ -244,7 +245,6 @@ type MoveItemRequest struct {
 // Copy Item
 // Upload From URL
 type AsyncOperationStatus struct {
-	Operation          string  `json:"operation"`          // The type of job being run.
 	PercentageComplete float64 `json:"percentageComplete"` // A float value between 0 and 100 that indicates the percentage complete.
 	Status             string  `json:"status"`             // A string value that maps to an enumeration of possible values about the status of the job. "notStarted | inProgress | completed | updating | failed | deletePending | deleteFailed | waiting"
 }

backend/onedrive/onedrive.go

@@ -10,7 +10,6 @@ import (
 	"io"
 	"log"
 	"net/http"
-	"net/url"
 	"path"
 	"strings"
 	"time"
@@ -33,48 +32,32 @@ import (
)
 const (
-	rclonePersonalClientID              = "0000000044165769"
-	rclonePersonalEncryptedClientSecret = "ugVWLNhKkVT1-cbTRO-6z1MlzwdW6aMwpKgNaFG-qXjEn_WfDnG9TVyRA5yuoliU"
-	rcloneBusinessClientID              = "52857fec-4bc2-483f-9f1b-5fe28e97532c"
-	rcloneBusinessEncryptedClientSecret = "6t4pC8l6L66SFYVIi8PgECDyjXy_ABo1nsTaE-Lr9LpzC6yT4vNOwHsakwwdEui0O6B0kX8_xbBLj91J"
-	minSleep                            = 10 * time.Millisecond
-	maxSleep                            = 2 * time.Second
-	decayConstant                       = 2                                      // bigger for slower decay, exponential
-	rootURLPersonal                     = "https://api.onedrive.com/v1.0/drive"  // root URL for requests
-	discoveryServiceURL                 = "https://api.office.com/discovery/"
-	configResourceURL                   = "resource_url"
+	rcloneClientID              = "b15665d9-eda6-4092-8539-0eec376afd59"
+	rcloneEncryptedClientSecret = "_JUdzh3LnKNqSPcf4Wu5fgMFIQOI8glZu_akYgR8yf6egowNBg-R"
+	minSleep                    = 10 * time.Millisecond
+	maxSleep                    = 2 * time.Second
+	decayConstant               = 2 // bigger for slower decay, exponential
+	graphURL                    = "https://graph.microsoft.com/v1.0"
+	configDriveID               = "drive_id"
+	configDriveType             = "drive_type"
+	driveTypePersonal           = "personal"
+	driveTypeBusiness           = "business"
+	driveTypeSharepoint         = "documentLibrary"
 )
 // Globals
 var (
-	// Description of how to auth for this app for a personal account
-	oauthPersonalConfig = &oauth2.Config{
-		Scopes: []string{
-			"wl.signin",          // Allow single sign-on capabilities
-			"wl.offline_access",  // Allow receiving a refresh token
-			"onedrive.readwrite", // r/w perms to all of a user's OneDrive files
-		},
-		Endpoint: oauth2.Endpoint{
-			AuthURL:  "https://login.live.com/oauth20_authorize.srf",
-			TokenURL: "https://login.live.com/oauth20_token.srf",
-		},
-		ClientID:     rclonePersonalClientID,
-		ClientSecret: obscure.MustReveal(rclonePersonalEncryptedClientSecret),
-		RedirectURL:  oauthutil.RedirectLocalhostURL,
-	}
 	// Description of how to auth for this app for a business account
-	oauthBusinessConfig = &oauth2.Config{
+	oauthConfig = &oauth2.Config{
 		Endpoint: oauth2.Endpoint{
-			AuthURL:  "https://login.microsoftonline.com/common/oauth2/authorize",
-			TokenURL: "https://login.microsoftonline.com/common/oauth2/token",
+			AuthURL:  "https://login.microsoftonline.com/common/oauth2/v2.0/authorize",
+			TokenURL: "https://login.microsoftonline.com/common/oauth2/v2.0/token",
 		},
-		ClientID:     rcloneBusinessClientID,
-		ClientSecret: obscure.MustReveal(rcloneBusinessEncryptedClientSecret),
+		Scopes:       []string{"Files.Read", "Files.ReadWrite", "Files.Read.All", "Files.ReadWrite.All", "offline_access"},
+		ClientID:     rcloneClientID,
+		ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
 		RedirectURL:  oauthutil.RedirectLocalhostURL,
 	}
-	oauthBusinessResource = oauth2.SetAuthURLParam("resource", discoveryServiceURL)
-	sharedURL             = "https://api.onedrive.com/v1.0/drives" // root URL for remote shared resources
 )
// Register with Fs
@@ -84,147 +67,143 @@ func init() {
Description: "Microsoft OneDrive",
NewFs: NewFs,
Config: func(name string, m configmap.Mapper) {
// choose account type
fmt.Printf("Choose OneDrive account type?\n")
fmt.Printf(" * Say b for a OneDrive business account\n")
fmt.Printf(" * Say p for a personal OneDrive account\n")
isPersonal := config.Command([]string{"bBusiness", "pPersonal"}) == 'p'
err := oauthutil.Config("onedrive", name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return
}
if isPersonal {
// for personal accounts we don't safe a field about the account
err := oauthutil.Config("onedrive", name, m, oauthPersonalConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
} else {
err := oauthutil.ConfigErrorCheck("onedrive", name, m, func(req *http.Request) oauthutil.AuthError {
var resp oauthutil.AuthError
// Are we running headless?
if automatic, _ := m.Get(config.ConfigAutomatic); automatic != "" {
// Yes, okay we are done
return
}
resp.Name = req.URL.Query().Get("error")
resp.Code = strings.Split(req.URL.Query().Get("error_description"), ":")[0] // error_description begins with XXXXXXXXXXXX:
resp.Description = strings.Join(strings.Split(req.URL.Query().Get("error_description"), ":")[1:], ":")
resp.HelpURL = "https://rclone.org/onedrive/#troubleshooting"
return resp
}, oauthBusinessConfig, oauthBusinessResource)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return
}
type driveResource struct {
DriveID string `json:"id"`
DriveName string `json:"name"`
DriveType string `json:"driveType"`
}
type drivesResponse struct {
Drives []driveResource `json:"value"`
}
// Are we running headless?
if automatic, _ := m.Get(config.ConfigAutomatic); automatic != "" {
// Yes, okay we are done
return
}
type siteResource struct {
SiteID string `json:"id"`
SiteName string `json:"displayName"`
SiteURL string `json:"webUrl"`
}
type siteResponse struct {
Sites []siteResource `json:"value"`
}
type serviceResource struct {
ServiceAPIVersion string `json:"serviceApiVersion"`
ServiceEndpointURI string `json:"serviceEndpointUri"`
ServiceResourceID string `json:"serviceResourceId"`
}
type serviceResponse struct {
Services []serviceResource `json:"value"`
}
oAuthClient, _, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure OneDrive: %v", err)
}
srv := rest.NewClient(oAuthClient)
oAuthClient, _, err := oauthutil.NewClient(name, m, oauthBusinessConfig)
if err != nil {
log.Fatalf("Failed to configure OneDrive: %v", err)
return
}
srv := rest.NewClient(oAuthClient)
var opts rest.Opts
var finalDriveID string
var siteID string
switch config.Choose("Your choice",
[]string{"onedrive", "sharepoint", "driveid", "siteid", "search"},
[]string{"OneDrive Personal or Business", "Root Sharepoint site", "Type in driveID", "Type in SiteID", "Search a Sharepoint site"},
false) {
opts := rest.Opts{
Method: "GET",
RootURL: discoveryServiceURL,
Path: "/v2.0/me/services",
}
services := serviceResponse{}
resp, err := srv.CallJSON(&opts, nil, &services)
if err != nil {
fs.Errorf(nil, "Failed to query available services: %v", err)
return
}
if resp.StatusCode != 200 {
fs.Errorf(nil, "Failed to query available services: Got HTTP error code %d", resp.StatusCode)
return
}
var resourcesURL []string
var resourcesID []string
for _, service := range services.Services {
if service.ServiceAPIVersion == "v2.0" {
resourcesID = append(resourcesID, service.ServiceResourceID)
resourcesURL = append(resourcesURL, service.ServiceEndpointURI)
}
// we only support 2.0 API
fs.Infof(nil, "Skipping API %s endpoint %s", service.ServiceAPIVersion, service.ServiceEndpointURI)
}
var foundService string
if len(resourcesID) == 0 {
fs.Errorf(nil, "No Service found")
return
} else if len(resourcesID) == 1 {
foundService = resourcesID[0]
} else {
foundService = config.Choose("Choose resource URL", resourcesID, resourcesURL, false)
}
m.Set(configResourceURL, foundService)
oauthBusinessResource = oauth2.SetAuthURLParam("resource", foundService)
// get the token from the inital config
// we need to update the token with a resource
// specific token we will query now
token, err := oauthutil.GetToken(name, m)
if err != nil {
fs.Errorf(nil, "Error while getting token: %s", err)
return
}
// values for the token query
values := url.Values{}
values.Set("refresh_token", token.RefreshToken)
values.Set("grant_type", "refresh_token")
values.Set("resource", foundService)
values.Set("client_id", oauthBusinessConfig.ClientID)
values.Set("client_secret", oauthBusinessConfig.ClientSecret)
case "onedrive":
opts = rest.Opts{
Method: "POST",
RootURL: oauthBusinessConfig.Endpoint.TokenURL,
ContentType: "application/x-www-form-urlencoded",
Body: strings.NewReader(values.Encode()),
Method: "GET",
RootURL: graphURL,
Path: "/me/drives",
}
case "sharepoint":
opts = rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/sites/root/drives",
}
case "driveid":
fmt.Printf("Paste your Drive ID here> ")
finalDriveID = config.ReadLine()
case "siteid":
fmt.Printf("Paste your Site ID here> ")
siteID = config.ReadLine()
case "search":
fmt.Printf("What to search for> ")
searchTerm := config.ReadLine()
opts = rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/sites?search=" + searchTerm,
}
// tokenJSON is the struct representing the HTTP response from OAuth2
// providers returning a token in JSON form.
// we are only interested in the new tokens, all other fields we don't care
type tokenJSON struct {
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
}
jsonToken := tokenJSON{}
resp, err = srv.CallJSON(&opts, nil, &jsonToken)
sites := siteResponse{}
_, err := srv.CallJSON(&opts, nil, &sites)
if err != nil {
fs.Errorf(nil, "Failed to get resource token: %v", err)
return
}
if resp.StatusCode != 200 {
fs.Errorf(nil, "Failed to get resource token: Got HTTP error code %d", resp.StatusCode)
return
log.Fatalf("Failed to query available sites: %v", err)
}
// update the tokens
token.AccessToken = jsonToken.AccessToken
token.RefreshToken = jsonToken.RefreshToken
// finally save them in the config
err = oauthutil.PutToken(name, m, token, true)
if err != nil {
fs.Errorf(nil, "Error while setting token: %s", err)
if len(sites.Sites) == 0 {
log.Fatalf("Search for '%s' returned no results", searchTerm)
} else {
fmt.Printf("Found %d sites, please select the one you want to use:\n", len(sites.Sites))
for index, site := range sites.Sites {
fmt.Printf("%d: %s (%s) id=%s\n", index, site.SiteName, site.SiteURL, site.SiteID)
}
siteID = sites.Sites[config.ChooseNumber("Choose site to use:", 0, len(sites.Sites)-1)].SiteID
}
}
// if we have a siteID we need to ask for the drives
if siteID != "" {
opts = rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/sites/" + siteID + "/drives",
}
}
// We don't have the final ID yet?
// query Microsoft Graph
if finalDriveID == "" {
drives := drivesResponse{}
_, err := srv.CallJSON(&opts, nil, &drives)
if err != nil {
log.Fatalf("Failed to query available drives: %v", err)
}
if len(drives.Drives) == 0 {
log.Fatalf("No drives found")
} else {
fmt.Printf("Found %d drives, please select the one you want to use:\n", len(drives.Drives))
for index, drive := range drives.Drives {
fmt.Printf("%d: %s (%s) id=%s\n", index, drive.DriveName, drive.DriveType, drive.DriveID)
}
finalDriveID = drives.Drives[config.ChooseNumber("Choose drive to use:", 0, len(drives.Drives)-1)].DriveID
}
}
// Test the driveID and get drive type
opts = rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/drives/" + finalDriveID + "/root"}
var rootItem api.Item
_, err = srv.CallJSON(&opts, nil, &rootItem)
if err != nil {
log.Fatalf("Failed to query root for drive %s: %v", finalDriveID, err)
}
fmt.Printf("Found drive '%s' of type '%s', URL: %s\nIs that okay?\n", rootItem.Name, rootItem.ParentReference.DriveType, rootItem.WebURL)
// This does not work, YET :)
if !config.Confirm() {
log.Fatalf("Cancelled by user")
}
m.Set(configDriveID, finalDriveID)
m.Set(configDriveType, rootItem.ParentReference.DriveType)
config.SaveConfig()
},
Options: []fs.Option{{
Name: config.ConfigClientID,
@@ -237,14 +216,25 @@ func init() {
Help: "Chunk size to upload files with - must be multiple of 320k.",
Default: fs.SizeSuffix(10 * 1024 * 1024),
Advanced: true,
}, {
Name: "drive_id",
Help: "The ID of the drive to use",
Default: "",
Advanced: true,
}, {
Name: "drive_type",
Help: "The type of the drive ( personal | business | documentLibrary )",
Default: "",
Advanced: true,
}},
})
}
 // Options defines the configuration for this backend
 type Options struct {
-	ChunkSize   fs.SizeSuffix `config:"chunk_size"`
-	ResourceURL string        `config:"resource_url"`
+	ChunkSize fs.SizeSuffix `config:"chunk_size"`
+	DriveID   string        `config:"drive_id"`
+	DriveType string        `config:"drive_type"`
 }
// Fs represents a remote one drive
@@ -257,7 +247,8 @@ type Fs struct {
 	dirCache     *dircache.DirCache // Map of directory path to directory id
 	pacer        *pacer.Pacer       // pacer for API calls
 	tokenRenewer *oauthutil.Renew   // renew the token on expiry
-	isBusiness   bool               // true if this is an OneDrive Business account
+	driveID      string             // ID to use for querying Microsoft Graph
+	driveType    string             // https://developer.microsoft.com/en-us/graph/docs/api-reference/v1.0/resources/drive
 }
 // Object describes a one drive object
@@ -327,9 +318,17 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
 // readMetaDataForPath reads the metadata from the path
 func (f *Fs) readMetaDataForPath(path string) (info *api.Item, resp *http.Response, err error) {
-	opts := rest.Opts{
-		Method: "GET",
-		Path:   "/root:/" + rest.URLPathEscape(replaceReservedChars(path)),
+	var opts rest.Opts
+	if len(path) == 0 {
+		opts = rest.Opts{
+			Method: "GET",
+			Path:   "/root",
+		}
+	} else {
+		opts = rest.Opts{
+			Method: "GET",
+			Path:   "/root:/" + rest.URLPathEscape(replaceReservedChars(path)),
+		}
 	}
 	err = f.pacer.Call(func() (bool, error) {
 		resp, err = f.srv.CallJSON(&opts, nil, &info)
@@ -364,23 +363,11 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	if opt.ChunkSize%(320*1024) != 0 {
 		return nil, errors.Errorf("chunk size %d is not a multiple of 320k", opt.ChunkSize)
 	}
-	// if we have a resource URL it's a business account otherwise a personal one
-	isBusiness := opt.ResourceURL != ""
-	var rootURL string
-	var oauthConfig *oauth2.Config
-	if !isBusiness {
-		// personal account setup
-		oauthConfig = oauthPersonalConfig
-		rootURL = rootURLPersonal
-	} else {
-		// business account setup
-		oauthConfig = oauthBusinessConfig
-		rootURL = opt.ResourceURL + "_api/v2.0/drives/me"
-		sharedURL = opt.ResourceURL + "_api/v2.0/drives"
-		// update the URL in the AuthOptions
-		oauthBusinessResource = oauth2.SetAuthURLParam("resource", opt.ResourceURL)
+	if opt.DriveID == "" || opt.DriveType == "" {
+		log.Fatalf("Unable to get drive_id and drive_type. If you are upgrading from older versions of rclone, please run `rclone config` and re-configure this backend.")
 	}
 	root = parsePath(root)
 	oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
 	if err != nil {
@@ -388,19 +375,17 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
 	}
 	f := &Fs{
-		name:       name,
-		root:       root,
-		opt:        *opt,
-		srv:        rest.NewClient(oAuthClient).SetRoot(rootURL),
-		pacer:      pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
-		isBusiness: isBusiness,
+		name:      name,
+		root:      root,
+		opt:       *opt,
+		driveID:   opt.DriveID,
+		driveType: opt.DriveType,
+		srv:       rest.NewClient(oAuthClient).SetRoot(graphURL + "/drives/" + opt.DriveID),
+		pacer:     pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
 	}
 	f.features = (&fs.Features{
-		CaseInsensitive: true,
-		// OneDrive for business doesn't support mime types properly
-		// so we disable it until resolved
-		// https://github.com/OneDrive/onedrive-api-docs/issues/643
-		ReadMimeType: !f.isBusiness,
+		CaseInsensitive:         true,
+		ReadMimeType:            true,
 		CanHaveEmptyDirectories: true,
 	}).Fill(f)
 	f.srv.SetErrorHandler(errorHandler)
@@ -546,8 +531,7 @@ type listAllFn func(*api.Item) bool
 func (f *Fs) listAll(dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
 	// Top parameter asks for bigger pages of data
 	// https://dev.onedrive.com/odata/optional-query-parameters.htm
-	opts := newOptsCall(dirID, "GET", "/children?top=1000")
+	opts := newOptsCall(dirID, "GET", "/children?$top=1000")
 OUTER:
 	for {
 		var result api.ListChildrenResponse
@@ -757,16 +741,11 @@ func (f *Fs) Precision() time.Duration {
 func (f *Fs) waitForJob(location string, o *Object) error {
 	deadline := time.Now().Add(fs.Config.Timeout)
 	for time.Now().Before(deadline) {
-		opts := rest.Opts{
-			Method:       "GET",
-			RootURL:      location,
-			IgnoreStatus: true, // Ignore the http status response since it seems to return valid info on 500 errors
-		}
 		var resp *http.Response
 		var err error
 		var body []byte
 		err = f.pacer.Call(func() (bool, error) {
-			resp, err = f.srv.Call(&opts)
+			resp, err = http.Get(location)
 			if err != nil {
 				return fserrors.ShouldRetry(err), err
 			}
@@ -782,19 +761,18 @@ func (f *Fs) waitForJob(location string, o *Object) error {
 		if err != nil {
 			return errors.Wrapf(err, "async status result not JSON: %q", body)
 		}
-		// See if we decoded anything...
-		if !(status.Operation == "" && status.PercentageComplete == 0 && status.Status == "") {
-			if status.Status == "failed" || status.Status == "deleteFailed" {
-				return errors.Errorf("%s: async operation %q returned %q", o.remote, status.Operation, status.Status)
-			}
-		} else if resp.StatusCode == 200 {
-			var info api.Item
-			err = json.Unmarshal(body, &info)
-			if err != nil {
-				return errors.Wrapf(err, "async item result not JSON: %q", body)
-			}
-			return o.setMetaData(&info)
+		switch status.Status {
+		case "failed", "deleteFailed":
+			return errors.Errorf("%s: async operation returned %q", o.remote, status.Status)
+		case "completed":
+			err = o.readMetaData()
+			return errors.Wrapf(err, "async operation completed but readMetaData failed")
 		}
 		time.Sleep(1 * time.Second)
 	}
 	return errors.Errorf("async operation didn't complete after %v", fs.Config.Timeout)
@@ -833,7 +811,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
 	}
 	// Copy the object
-	opts := newOptsCall(srcObj.id, "POST", "/action.copy")
+	opts := newOptsCall(srcObj.id, "POST", "/copy")
 	opts.ExtraHeaders = map[string]string{"Prefer": "respond-async"}
 	opts.NoResponse = true
@@ -843,7 +821,8 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
 	copyReq := api.CopyItemRequest{
 		Name: &replacedLeaf,
 		ParentReference: api.ItemReference{
-			ID: id,
+			DriveID: f.driveID,
+			ID:      id,
 		},
 	}
 	var resp *http.Response
@@ -1079,10 +1058,10 @@ func (f *Fs) About() (usage *fs.Usage, err error) {
 // Hashes returns the supported hash sets.
 func (f *Fs) Hashes() hash.Set {
-	if f.isBusiness {
-		return hash.Set(hash.QuickXorHash)
+	if f.driveType == driveTypePersonal {
+		return hash.Set(hash.SHA1)
 	}
-	return hash.Set(hash.SHA1)
+	return hash.Set(hash.QuickXorHash)
 }
// ------------------------------------------------------------
@@ -1112,16 +1091,16 @@ func (o *Object) srvPath() string {
 // Hash returns the SHA-1 of an object returning a lowercase hex string
 func (o *Object) Hash(t hash.Type) (string, error) {
-	if o.fs.isBusiness {
-		if t != hash.QuickXorHash {
-			return "", hash.ErrUnsupported
+	if o.fs.driveType == driveTypePersonal {
+		if t == hash.SHA1 {
+			return o.sha1, nil
 		}
-		return o.quickxorhash, nil
+	} else {
+		if t == hash.QuickXorHash {
+			return o.quickxorhash, nil
+		}
 	}
-	if t != hash.SHA1 {
-		return "", hash.ErrUnsupported
-	}
-	return o.sha1, nil
+	return "", hash.ErrUnsupported
 }
// Size returns the size of an object in bytes
@@ -1282,12 +1261,12 @@ func (o *Object) createUploadSession(modTime time.Time) (response *api.CreateUpl
 		opts = rest.Opts{
 			Method:  "POST",
 			RootURL: rootURL,
-			Path:    "/" + drive + "/items/" + id + ":/" + rest.URLPathEscape(leaf) + ":/upload.createSession",
+			Path:    "/" + drive + "/items/" + id + ":/" + rest.URLPathEscape(replaceReservedChars(leaf)) + ":/createUploadSession",
 		}
 	} else {
 		opts = rest.Opts{
 			Method: "POST",
-			Path:   "/root:/" + rest.URLPathEscape(o.srvPath()) + ":/upload.createSession",
+			Path:   "/root:/" + rest.URLPathEscape(o.srvPath()) + ":/createUploadSession",
 		}
 	}
 	createRequest := api.CreateUploadRequest{}
@@ -1349,6 +1328,10 @@ func (o *Object) cancelUploadSession(url string) (err error) {
// uploadMultipart uploads a file using multipart upload
func (o *Object) uploadMultipart(in io.Reader, size int64, modTime time.Time) (info *api.Item, err error) {
if size <= 0 {
panic("size passed into uploadMultipart must be > 0")
}
// Create upload session
fs.Debugf(o, "Starting multipart upload")
session, err := o.createUploadSession(modTime)
@@ -1389,8 +1372,14 @@ func (o *Object) uploadMultipart(in io.Reader, size int64, modTime time.Time) (i
 	return info, nil
 }
-// uploadSinglepart uploads a file as a single part
+// uploadSinglepart updates the content of a remote file, up to 4MB, in one single request
+// This function will set modtime after uploading, which will create a new version for the remote file
 func (o *Object) uploadSinglepart(in io.Reader, size int64, modTime time.Time) (info *api.Item, err error) {
+	if size < 0 || size > int64(fs.SizeSuffix(4*1024*1024)) {
+		panic("size passed into uploadSinglepart must be >= 0 and <= 4MiB")
+	}
+	fs.Debugf(o, "Starting singlepart upload")
 	var resp *http.Response
 	var opts rest.Opts
 	_, directoryID, _ := o.fs.dirCache.FindPath(o.remote, false)
@@ -1411,12 +1400,12 @@ func (o *Object) uploadSinglepart(in io.Reader, size int64, modTime time.Time) (
 			Body:    in,
 		}
 	}
 	// for go1.8 (see release notes) we must nil the Body if we want a
 	// "Content-Length: 0" header which onedrive requires for all files.
 	if size == 0 {
 		opts.Body = nil
 	}
-	err = o.fs.pacer.CallNoRetry(func() (bool, error) {
+	err = o.fs.pacer.Call(func() (bool, error) {
 		resp, err = o.fs.srv.CallJSON(&opts, nil, &info)
 		return shouldRetry(resp, err)
 	})
@@ -1443,15 +1432,17 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 	modTime := src.ModTime()
 	var info *api.Item
-	if size <= 0 {
-		// This is for 0 length files, or files with an unknown size
+	if size > 0 {
+		info, err = o.uploadMultipart(in, size, modTime)
+	} else if size == 0 {
 		info, err = o.uploadSinglepart(in, size, modTime)
 	} else {
-		info, err = o.uploadMultipart(in, size, modTime)
+		panic("src file size must be >= 0")
 	}
 	if err != nil {
 		return err
 	}
 	return o.setMetaData(info)
 }
@@ -1489,7 +1480,7 @@ func newOptsCall(id string, method string, route string) (opts rest.Opts) {
 func parseDirID(ID string) (string, string, string) {
 	if strings.Index(ID, "#") >= 0 {
 		s := strings.Split(ID, "#")
-		return s[1], s[0], sharedURL
+		return s[1], s[0], graphURL + "/drives"
 	}
 	return ID, "", ""
 }

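Stripped of rclone's rest client and pacer, the drive discovery that the new config flow performs above is a single Microsoft Graph call. Roughly (token shown as a placeholder, error handling abbreviated):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type driveResource struct {
	ID        string `json:"id"`
	Name      string `json:"name"`
	DriveType string `json:"driveType"`
}

func main() {
	req, err := http.NewRequest("GET", "https://graph.microsoft.com/v1.0/me/drives", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer <access token>") // placeholder
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var drives struct {
		Value []driveResource `json:"value"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&drives); err != nil {
		panic(err)
	}
	for i, d := range drives.Value {
		fmt.Printf("%d: %s (%s) id=%s\n", i, d.Name, d.DriveType, d.ID)
	}
}
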
backend/s3/s3.go

@@ -39,9 +39,11 @@ import (
 	"github.com/ncw/rclone/fs"
 	"github.com/ncw/rclone/fs/config/configmap"
 	"github.com/ncw/rclone/fs/config/configstruct"
+	"github.com/ncw/rclone/fs/fserrors"
 	"github.com/ncw/rclone/fs/fshttp"
 	"github.com/ncw/rclone/fs/hash"
 	"github.com/ncw/rclone/fs/walk"
+	"github.com/ncw/rclone/lib/pacer"
 	"github.com/ncw/rclone/lib/rest"
 	"github.com/ncw/swift"
 	"github.com/pkg/errors"
@@ -570,6 +572,7 @@ const (
 	maxRetries     = 10                            // number of retries to make of operations
 	maxSizeForCopy = 5 * 1024 * 1024 * 1024        // The maximum size of object we can COPY
 	maxFileSize    = 5 * 1024 * 1024 * 1024 * 1024 // largest possible upload file size
+	minSleep       = 10 * time.Millisecond         // In case of error, start at 10ms sleep.
 )
// Options defines the configuration for this backend
@@ -604,6 +607,7 @@ type Fs struct {
 	bucketOKMu    sync.Mutex   // mutex to protect bucket OK
 	bucketOK      bool         // true if we have created the bucket
 	bucketDeleted bool         // true if we have deleted the bucket
+	pacer         *pacer.Pacer // To pace the API calls
 }
// Object describes a s3 object
@@ -649,6 +653,37 @@ func (f *Fs) Features() *fs.Features {
return f.features
}
+// retryErrorCodes is a slice of error codes that we will retry
+// See: https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
+var retryErrorCodes = []int{
+	409, // Conflict - various states that could be resolved on a retry
+	503, // Service Unavailable/Slow Down - "Reduce your request rate"
+}
+// S3 is pretty resilient, and the built-in retry handling is probably sufficient
+// as it should notice closed connections and timeouts which are the most likely
+// sort of failure modes
+func shouldRetry(err error) (bool, error) {
+	// If this is an awserr object, try and extract more useful information to determine if we should retry
+	if awsError, ok := err.(awserr.Error); ok {
+		// Simple case, check the original embedded error in case it's generically retriable
+		if fserrors.ShouldRetry(awsError.OrigErr()) {
+			return true, err
+		}
+		// Failing that, if it's a RequestFailure it's probably got an http status code we can check
+		if reqErr, ok := err.(awserr.RequestFailure); ok {
+			for _, e := range retryErrorCodes {
+				if reqErr.StatusCode() == e {
+					return true, err
+				}
+			}
+		}
+	}
+	// OK, not an awserr, check for generic failure conditions
+	return fserrors.ShouldRetry(err), err
+}
// Pattern to match a s3 path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
@@ -774,6 +809,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
c: c,
bucket: bucket,
ses: ses,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.S3Pacer),
}
f.features = (&fs.Features{
ReadMimeType: true,
@@ -787,7 +823,10 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
Bucket: &f.bucket,
Key: &directory,
}
_, err = f.c.HeadObject(&req)
err = f.pacer.Call(func() (bool, error) {
_, err = f.c.HeadObject(&req)
return shouldRetry(err)
})
if err == nil {
f.root = path.Dir(directory)
if f.root == "." {
@@ -864,7 +903,12 @@ func (f *Fs) list(dir string, recurse bool, fn listFn) error {
MaxKeys: &maxKeys,
Marker: marker,
}
resp, err := f.c.ListObjects(&req)
var resp *s3.ListObjectsOutput
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.c.ListObjects(&req)
return shouldRetry(err)
})
if err != nil {
if awsErr, ok := err.(awserr.RequestFailure); ok {
if awsErr.StatusCode() == http.StatusNotFound {
@@ -989,7 +1033,11 @@ func (f *Fs) listBuckets(dir string) (entries fs.DirEntries, err error) {
return nil, fs.ErrorListBucketRequired
}
req := s3.ListBucketsInput{}
resp, err := f.c.ListBuckets(&req)
var resp *s3.ListBucketsOutput
err = f.pacer.Call(func() (bool, error) {
resp, err = f.c.ListBuckets(&req)
return shouldRetry(err)
})
if err != nil {
return nil, err
}
@@ -1074,7 +1122,10 @@ func (f *Fs) dirExists() (bool, error) {
req := s3.HeadBucketInput{
Bucket: &f.bucket,
}
_, err := f.c.HeadBucket(&req)
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.HeadBucket(&req)
return shouldRetry(err)
})
if err == nil {
return true, nil
}
@@ -1111,7 +1162,10 @@ func (f *Fs) Mkdir(dir string) error {
LocationConstraint: &f.opt.LocationConstraint,
}
}
_, err := f.c.CreateBucket(&req)
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.CreateBucket(&req)
return shouldRetry(err)
})
if err, ok := err.(awserr.Error); ok {
if err.Code() == "BucketAlreadyOwnedByYou" {
err = nil
@@ -1136,7 +1190,10 @@ func (f *Fs) Rmdir(dir string) error {
req := s3.DeleteBucketInput{
Bucket: &f.bucket,
}
_, err := f.c.DeleteBucket(&req)
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.DeleteBucket(&req)
return shouldRetry(err)
})
if err == nil {
f.bucketOK = false
f.bucketDeleted = true
@@ -1183,7 +1240,10 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
CopySource: &source,
MetadataDirective: aws.String(s3.MetadataDirectiveCopy),
}
_, err = f.c.CopyObject(&req)
err = f.pacer.Call(func() (bool, error) {
_, err = f.c.CopyObject(&req)
return shouldRetry(err)
})
if err != nil {
return nil, err
}
@@ -1260,7 +1320,12 @@ func (o *Object) readMetaData() (err error) {
Bucket: &o.fs.bucket,
Key: &key,
}
resp, err := o.fs.c.HeadObject(&req)
var resp *s3.HeadObjectOutput
err = o.fs.pacer.Call(func() (bool, error) {
var err error
resp, err = o.fs.c.HeadObject(&req)
return shouldRetry(err)
})
if err != nil {
if awsErr, ok := err.(awserr.RequestFailure); ok {
if awsErr.StatusCode() == http.StatusNotFound {
@@ -1344,7 +1409,10 @@ func (o *Object) SetModTime(modTime time.Time) error {
Metadata: o.meta,
MetadataDirective: &directive,
}
_, err = o.fs.c.CopyObject(&req)
err = o.fs.pacer.Call(func() (bool, error) {
_, err := o.fs.c.CopyObject(&req)
return shouldRetry(err)
})
return err
}
@@ -1371,7 +1439,12 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
}
}
}
resp, err := o.fs.c.GetObject(&req)
var resp *s3.GetObjectOutput
err = o.fs.pacer.Call(func() (bool, error) {
var err error
resp, err = o.fs.c.GetObject(&req)
return shouldRetry(err)
})
if err, ok := err.(awserr.RequestFailure); ok {
if err.Code() == "InvalidObjectState" {
return nil, errors.Errorf("Object in GLACIER, restore first: %v", key)
@@ -1450,7 +1523,10 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
if o.fs.opt.StorageClass != "" {
req.StorageClass = &o.fs.opt.StorageClass
}
_, err = uploader.Upload(&req)
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
_, err = uploader.Upload(&req)
return shouldRetry(err)
})
if err != nil {
return err
}
@@ -1468,7 +1544,10 @@ func (o *Object) Remove() error {
Bucket: &o.fs.bucket,
Key: &key,
}
_, err := o.fs.c.DeleteObject(&req)
err := o.fs.pacer.Call(func() (bool, error) {
_, err := o.fs.c.DeleteObject(&req)
return shouldRetry(err)
})
return err
}
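
Every pacer call site in this diff has the same shape: the SDK call is wrapped in a closure handed to pacer.Call, and shouldRetry decides whether the failure deserves another attempt. A minimal, self-contained sketch of that contract follows; it is illustrative only, not rclone's actual pacer (which also rate-limits calls and adapts its sleep):

package main

import (
	"errors"
	"fmt"
	"time"
)

// pacer retries a function while it reports its error as retryable,
// sleeping between attempts. A simplified stand-in for rclone's pacer.
type pacer struct {
	minSleep time.Duration
	tries    int
}

// Call invokes fn until it returns retry == false or the attempts are
// exhausted, sleeping minSleep between attempts; it returns the last error.
func (p *pacer) Call(fn func() (retry bool, err error)) error {
	var err error
	for i := 0; i < p.tries; i++ {
		var retry bool
		retry, err = fn()
		if !retry {
			return err
		}
		time.Sleep(p.minSleep)
	}
	return err
}

var errThrottled = errors.New("503 slow down")

func main() {
	p := &pacer{minSleep: 10 * time.Millisecond, tries: 3}
	attempts := 0
	err := p.Call(func() (bool, error) {
		attempts++
		if attempts < 3 {
			return true, errThrottled // pretend the first two calls were throttled
		}
		return false, nil
	})
	fmt.Println(attempts, err) // 3 <nil>
}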

backend/union/union.go Normal file

@@ -0,0 +1,227 @@
package union
import (
"fmt"
"io"
"path"
"path/filepath"
"strings"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/hash"
"github.com/pkg/errors"
)
// Register with Fs
func init() {
fsi := &fs.RegInfo{
Name: "union",
Description: "Builds a stackable unification remote, which can appear to merge the contents of several remotes",
NewFs: NewFs,
Options: []fs.Option{{
Name: "remotes",
Help: "List of space separated remotes.\nCan be 'remotea:test/dir remoteb:', '\"remotea:test/space dir\" remoteb:', etc.\nThe last remote is used to write to.",
Required: true,
}},
}
fs.Register(fsi)
}
// Options defines the configuration for this backend
type Options struct {
Remotes fs.SpaceSepList `config:"remotes"`
}
// Fs represents a union of remotes
type Fs struct {
name string // name of this remote
features *fs.Features // optional features
opt Options // options for this Fs
root string // the path we are working on
remotes []fs.Fs // slice of remotes
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("union root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Rmdir removes the root directory of the Fs object
func (f *Fs) Rmdir(dir string) error {
return f.remotes[len(f.remotes)-1].Rmdir(dir)
}
// Hashes returns hash.None to indicate remote hashing is unavailable
func (f *Fs) Hashes() hash.Set {
// This could probably be set if all remotes share the same hashing algorithm
return hash.Set(hash.None)
}
// Mkdir makes the root directory of the Fs object
func (f *Fs) Mkdir(dir string) error {
return f.remotes[len(f.remotes)-1].Mkdir(dir)
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.remotes[len(f.remotes)-1].Put(in, src, options...)
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
set := make(map[string]fs.DirEntry)
found := false
for _, remote := range f.remotes {
var remoteEntries, err = remote.List(dir)
if err == fs.ErrorDirNotFound {
continue
}
if err != nil {
return nil, errors.Wrapf(err, "List failed on %v", remote)
}
found = true
for _, remoteEntry := range remoteEntries {
set[remoteEntry.Remote()] = remoteEntry
}
}
if !found {
return nil, fs.ErrorDirNotFound
}
for key := range set {
entries = append(entries, set[key])
}
return entries, nil
}
// NewObject creates a new remote union file object based on the first Object it finds (reverse remote order)
func (f *Fs) NewObject(path string) (fs.Object, error) {
for i := range f.remotes {
var remote = f.remotes[len(f.remotes)-i-1]
var obj, err = remote.NewObject(path)
if err == fs.ErrorObjectNotFound {
continue
}
if err != nil {
return nil, errors.Wrapf(err, "NewObject failed on %v", remote)
}
return obj, nil
}
return nil, fs.ErrorObjectNotFound
}
// Precision is the greatest Precision of all remotes
func (f *Fs) Precision() time.Duration {
var greatestPrecision time.Duration
for _, remote := range f.remotes {
if remote.Precision() > greatestPrecision {
greatestPrecision = remote.Precision()
}
}
return greatestPrecision
}
// NewFs constructs an Fs from the path.
//
// The returned Fs is the actual Fs, referenced by remote in the config
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if len(opt.Remotes) == 0 {
return nil, errors.New("union can't point to an empty remote - check the value of the remotes setting")
}
if len(opt.Remotes) == 1 {
return nil, errors.New("union can't point to a single remote - check the value of the remotes setting")
}
for _, remote := range opt.Remotes {
if strings.HasPrefix(remote, name+":") {
return nil, errors.New("can't point union remote at itself - check the value of the remote setting")
}
}
var remotes []fs.Fs
for i := range opt.Remotes {
// Last remote first so we return the correct (last) matching fs in case of fs.ErrorIsFile
var remote = opt.Remotes[len(opt.Remotes)-i-1]
_, configName, fsPath, err := fs.ParseRemote(remote)
if err != nil {
return nil, err
}
var rootString = path.Join(fsPath, filepath.ToSlash(root))
if configName != "local" {
rootString = configName + ":" + rootString
}
myFs, err := fs.NewFs(rootString)
if err != nil {
if err == fs.ErrorIsFile {
return myFs, err
}
return nil, err
}
remotes = append(remotes, myFs)
}
// Reverse the remotes again so they are in the order as before
for i, j := 0, len(remotes)-1; i < j; i, j = i+1, j-1 {
remotes[i], remotes[j] = remotes[j], remotes[i]
}
f := &Fs{
name: name,
root: root,
opt: *opt,
remotes: remotes,
}
var features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
BucketBased: true,
}).Fill(f)
for _, remote := range f.remotes {
features = features.Mask(remote)
}
f.features = features
return f, nil
}
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}
)
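
For reference, a union remote like the one implemented above would be configured and used along these lines; the remote name and member paths are hypothetical, and, as Put and Mkdir above show, writes always go to the last remote in the list:

[both]
type = union
remotes = /mnt/disk1 cloud:backup

rclone ls both:                   # merged listing of /mnt/disk1 and cloud:backup
rclone copy /tmp/file.txt both:   # written to cloud:backup, the last remote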


@@ -0,0 +1,18 @@
// Test Union filesystem interface
package union_test
import (
"testing"
_ "github.com/ncw/rclone/backend/local"
"github.com/ncw/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestUnion:",
NilObject: nil,
SkipFsMatch: true,
})
}


@@ -12,6 +12,9 @@ import (
const (
// Wed, 27 Sep 2017 14:28:34 GMT
timeFormat = time.RFC1123
// The same as time.RFC1123 with optional leading zeros on the date
// see https://github.com/ncw/rclone/issues/2574
noZerosRFC1123 = "Mon, _2 Jan 2006 15:04:05 MST"
)
// Multistatus contains responses returned from an HTTP 207 return code
@@ -138,9 +141,10 @@ func (t *Time) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
// Possible time formats to parse the time with
var timeFormats = []string{
timeFormat, // Wed, 27 Sep 2017 14:28:34 GMT (as per RFC)
time.RFC1123Z, // Fri, 05 Jan 2018 14:14:38 +0000 (as used by mydrive.ch)
time.UnixDate, // Wed May 17 15:31:58 UTC 2017 (as used in an internal server)
timeFormat, // Wed, 27 Sep 2017 14:28:34 GMT (as per RFC)
time.RFC1123Z, // Fri, 05 Jan 2018 14:14:38 +0000 (as used by mydrive.ch)
time.UnixDate, // Wed May 17 15:31:58 UTC 2017 (as used in an internal server)
noZerosRFC1123, // Fri, 7 Sep 2018 08:49:58 GMT (as used by server in #2574)
}
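
The `_2` verb in the added layout is what makes the leading zero optional: Go treats `_2` as a space-padded day, and when parsing it accepts a bare single digit as well as a padded one. A small self-contained check of the fallback loop, mirroring what the Time unmarshaller does with these layouts:

package main

import (
	"fmt"
	"time"
)

// The same layouts as the webdav timeFormats slice above.
var layouts = []string{
	time.RFC1123,                    // Wed, 27 Sep 2017 14:28:34 GMT
	time.RFC1123Z,                   // Fri, 05 Jan 2018 14:14:38 +0000
	time.UnixDate,                   // Wed May 17 15:31:58 UTC 2017
	"Mon, _2 Jan 2006 15:04:05 MST", // Fri, 7 Sep 2018 08:49:58 GMT
}

// parse tries each layout in turn and returns the first success.
func parse(s string) (time.Time, error) {
	var err error
	for _, layout := range layouts {
		var t time.Time
		if t, err = time.Parse(layout, s); err == nil {
			return t, nil
		}
	}
	return time.Time{}, err
}

func main() {
	for _, s := range []string{
		"Wed, 27 Sep 2017 14:28:34 GMT",
		"Fri, 7 Sep 2018 08:49:58 GMT", // single-digit day from #2574
	} {
		fmt.Println(parse(s))
	}
}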
// UnmarshalXML turns XML into a Time


@@ -0,0 +1,32 @@
package odrvcookie
import (
"time"
"github.com/ncw/rclone/lib/rest"
)
// CookieRenew holds information for the renew
type CookieRenew struct {
srv *rest.Client
timer *time.Ticker
renewFn func()
}
// NewRenew returns and starts a CookieRenew
func NewRenew(interval time.Duration, renewFn func()) *CookieRenew {
renew := CookieRenew{
timer: time.NewTicker(interval),
renewFn: renewFn,
}
go renew.Renew()
return &renew
}
// Renew calls the renewFn for every tick
func (c *CookieRenew) Renew() {
for {
<-c.timer.C // wait for tick
c.renewFn()
}
}
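
Note that the Renew goroutine ticks for the life of the process: nothing stops the ticker, which is acceptable for cookies that must stay fresh as long as the remote is in use. If a stoppable variant were ever needed, the usual shape adds a quit channel next to the ticker; a sketch, with names of our own choosing rather than anything from this commit:

package main

import (
	"fmt"
	"time"
)

// renewer is a stoppable variant of CookieRenew: Stop ends the
// goroutine and releases the ticker.
type renewer struct {
	ticker *time.Ticker
	quit   chan struct{}
}

func newRenewer(interval time.Duration, fn func()) *renewer {
	r := &renewer{
		ticker: time.NewTicker(interval),
		quit:   make(chan struct{}),
	}
	go func() {
		for {
			select {
			case <-r.ticker.C:
				fn() // renew on every tick
			case <-r.quit:
				return
			}
		}
	}()
	return r
}

// Stop terminates the renew loop and frees the ticker.
func (r *renewer) Stop() {
	r.ticker.Stop()
	close(r.quit)
}

func main() {
	r := newRenewer(10*time.Millisecond, func() { fmt.Println("renew") })
	time.Sleep(35 * time.Millisecond)
	r.Stop()
}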


@@ -372,6 +372,17 @@ func (f *Fs) setQuirks(vendor string) error {
if err != nil {
return err
}
odrvcookie.NewRenew(12*time.Hour, func() {
spCookies, err := spCk.Cookies()
if err != nil {
fs.Errorf(nil, "could not renew cookies: %v", err)
return
}
f.srv.SetCookie(&spCookies.FedAuth, &spCookies.RtFa)
fs.Debugf(spCookies, "successfully renewed sharepoint cookies")
})
f.srv.SetCookie(&spCookies.FedAuth, &spCookies.RtFa)
// sharepoint, unlike the other vendors, only lists files if the depth header is set to 0

bin/bisect-go-rclone.sh Executable file

@@ -0,0 +1,23 @@
#!/bin/bash
# An example script to run when bisecting go with git bisect -run when
# looking for an rclone regression
# Run this from the go root
set -e
# Compile the go version
cd src
./make.bash
# Make sure we are using it
source ~/bin/use-go1.11
go version
# Compile rclone
cd ~/go/src/github.com/ncw/rclone
make
# run the failing test
go run -race race.go

bin/bisect-rclone.sh Executable file

@@ -0,0 +1,15 @@
#!/bin/bash
# Example script for git-bisect -run
# Run from the project root
set -e
# Compile
make
rclone version
# Test whatever it is that is going wrong
truncate -s 10M /tmp/10M
rclone delete azure:rclone-test1/10M || true
rclone --retries 1 copyto -vv /tmp/10M azure:rclone-test1/10M --azureblob-upload-cutoff 1M
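To drive either script, mark the endpoints and hand git the script, e.g. `git bisect start <bad-commit> <good-commit>` followed by `git bisect run bin/bisect-rclone.sh`. git then builds and tests each candidate commit and classifies it by the script's exit status (zero is good, non-zero is bad), which is why both scripts use `set -e` so that any failing step marks the commit bad.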


@@ -44,6 +44,7 @@ docs = [
"swift.md",
"pcloud.md",
"sftp.md",
"union.md",
"webdav.md",
"yandex.md",

bin/tidy-beta Executable file

@@ -0,0 +1,17 @@
#!/bin/sh
# Use this script after a release to tidy the betas
version="$1"
if [ "$version" = "" ]; then
echo "Syntax: $0 <version> [delete]"
exit 1
fi
dry_run="--dry-run"
if [ "$2" = "delete" ]; then
dry_run=""
else
echo "Running in --dry-run mode"
echo "Use '$0 $version delete' to actually delete files"
fi
rclone ${dry_run} --fast-list -P --checkers 16 --transfers 16 delete --include "**/${version}**" memstore:beta-rclone-org


@@ -47,6 +47,7 @@ import (
_ "github.com/ncw/rclone/cmd/rmdir"
_ "github.com/ncw/rclone/cmd/rmdirs"
_ "github.com/ncw/rclone/cmd/serve"
_ "github.com/ncw/rclone/cmd/settier"
_ "github.com/ncw/rclone/cmd/sha1sum"
_ "github.com/ncw/rclone/cmd/size"
_ "github.com/ncw/rclone/cmd/sync"


@@ -311,11 +311,11 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
}
break
}
if fserrors.IsFatalError(err) {
if fserrors.IsFatalError(err) || accounting.Stats.HadFatalError() {
fs.Errorf(nil, "Fatal error received - not attempting retries")
break
}
if fserrors.IsNoRetryError(err) {
if fserrors.IsNoRetryError(err) || (accounting.Stats.Errored() && !accounting.Stats.HadRetryError()) {
fs.Errorf(nil, "Can't retry this error - not attempting retries")
break
}


@@ -34,7 +34,9 @@ without account.
if err != nil {
return err
}
fmt.Println(link)
if link != "" {
fmt.Println(link)
}
return nil
})
},

cmd/settier/settier.go Normal file

@@ -0,0 +1,54 @@
package settier
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs/operations"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefinition)
}
var commandDefinition = &cobra.Command{
Use: "settier tier remote:path",
Short: `Changes storage class/tier of objects in remote.`,
Long: `
rclone settier changes the storage tier or class of objects at the remote, if supported.
A few cloud storage services provide different storage classes for objects,
for example AWS S3 and Glacier, Azure Blob storage (Hot, Cool and Archive) and
Google Cloud Storage (Regional Storage, Nearline, Coldline etc.).
Note that certain tier changes make objects unavailable for immediate access.
For example, tiering to Archive in Azure Blob storage puts objects into a frozen
state; the user can restore them by setting the tier to Hot/Cool. Similarly,
moving S3 objects to Glacier makes them inaccessible.
You can use it to tier a single object
rclone settier Cool remote:path/file
Or use rclone filters to set the tier on only specific files
rclone --include "*.txt" settier Hot remote:path/dir
Or just provide a remote directory and all files in the directory will be tiered
rclone settier tier remote:path/dir
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
tier := args[0]
input := args[1:]
fsrc := cmd.NewFsSrc(input)
cmd.Run(false, false, command, func() error {
isSupported := fsrc.Features().SetTier
if !isSupported {
return errors.Errorf("Remote %s does not support settier", fsrc.Name())
}
return operations.SetTier(fsrc, tier)
})
},
}
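
The Features().SetTier flag checked above is, per the commit messages, backed by new optional interfaces (SetTierer, GetTierer) on the backend's objects. A sketch of the optional-interface pattern rclone uses for such features; the method signatures here are an assumption, not copied from the source:

package main

import "fmt"

// SetTierer would be implemented by objects that can change storage tier.
// Signature assumed for illustration.
type SetTierer interface {
	SetTier(tier string) error
}

// GetTierer would be implemented by objects that can report their tier.
type GetTierer interface {
	GetTier() string
}

type object struct{ tier string }

func (o *object) SetTier(tier string) error { o.tier = tier; return nil }
func (o *object) GetTier() string           { return o.tier }

// setTier discovers the optional feature at runtime via a type assertion.
func setTier(v interface{}, tier string) error {
	do, ok := v.(SetTierer)
	if !ok {
		return fmt.Errorf("remote does not support settier")
	}
	return do.SetTier(tier)
}

func main() {
	o := &object{tier: "Hot"}
	fmt.Println(setTier(o, "Cool"), o.GetTier()) // <nil> Cool
}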


@@ -58,8 +58,9 @@ Features
* [Sync](/commands/rclone_sync/) (one way) mode to make a directory identical
* [Check](/commands/rclone_check/) mode to check for file hash equality
* Can sync to and from network, eg two different cloud accounts
* Optional encryption ([Crypt](/crypt/))
* Optional cache ([Cache](/cache/))
* [Encryption](/crypt/) backend
* [Cache](/cache/) backend
* [Union](/union/) backend
* Optional FUSE mount ([rclone mount](/commands/rclone_mount/))
Links


@@ -171,7 +171,7 @@ Contributors
* themylogin <themylogin@gmail.com>
* Onno Zweers <onno.zweers@surfsara.nl>
* Jasper Lievisse Adriaanse <jasper@humppa.nl>
* sandeepkru <sandeep.ummadi@gmail.com>
* sandeepkru <sandeep.ummadi@gmail.com> <sandeepkru@users.noreply.github.com>
* HerrH <atomtigerzoo@users.noreply.github.com>
* Andrew <4030760+sparkyman215@users.noreply.github.com>
* dan smith <XX1011@gmail.com>
@@ -186,3 +186,13 @@ Contributors
* Alex Chen <Cnly@users.noreply.github.com>
* Denis <deniskovpen@gmail.com>
* bsteiss <35940619+bsteiss@users.noreply.github.com>
* Cédric Connes <cedric.connes@gmail.com>
* Dr. Tobias Quathamer <toddy15@users.noreply.github.com>
* dcpu <42736967+dcpu@users.noreply.github.com>
* Sheldon Rupp <me@shel.io>
* albertony <12441419+albertony@users.noreply.github.com>
* cron410 <cron410@gmail.com>
* Anagh Kumar Baranwal <anaghk.dos@gmail.com>
* Felix Brucker <felix@felixbrucker.com>
* Santiago Rodríguez <scollazo@users.noreply.github.com>
* Craig Miskell <craig.miskell@fluxfederation.com>


@@ -184,6 +184,13 @@ Upload chunk size. Default 4MB. Note that this is stored in memory
and there may be up to `--transfers` chunks stored at once in memory.
This can be at most 100MB.
#### --azureblob-list-chunk=SIZE ####
List blob limit. The default is the maximum, 5000. `List blobs` requests
are permitted 2 minutes per megabyte to complete; if an operation takes longer
than 2 minutes per megabyte on average, it will time out ( [source](https://docs.microsoft.com/en-us/rest/api/storageservices/setting-timeouts-for-blob-service-operations#exceptions-to-default-timeout-interval) ). This option limits the number of blob items returned per listing request, to avoid that timeout.
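For a container whose listings keep timing out, the chunk size can be lowered on the command line, e.g. `rclone ls --azureblob-list-chunk 1000 remote:container` (the remote and container names here are placeholders).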
#### --azureblob-access-tier=Hot/Cool/Archive ####
Azure storage supports blob tiering, you can configure tier in advanced


@@ -1,4 +1,4 @@
---
-----
title: "Cache"
description: "Rclone docs for cache remote"
date: "2017-09-03"
@@ -133,7 +133,8 @@ to the cloud provider without interrupting the reading (small blip can happen th
Files are uploaded in sequence and only one file is uploaded at a time.
Uploads will be stored in a queue and be processed based on the order they were added.
The queue and the temporary storage is persistent across restarts and even purges of the cache.
The queue and the temporary storage is persistent across restarts but
can be cleared on startup with the `--cache-db-purge` flag.
### Write Support ###
@@ -181,6 +182,28 @@ and password) in your remote and it will be automatically enabled.
Affected settings:
- `cache-workers`: _Configured value_ during confirmed playback or _1_ all the other times
##### Certificate Validation #####
When the Plex server is configured to only accept secure connections, it is
possible to use `.plex.direct` URLs to ensure certificate validation succeeds.
These URLs are used by Plex internally to connect to the Plex server securely.
The format of these URLs is the following:
https://ip-with-dots-replaced.server-hash.plex.direct:32400/
The `ip-with-dots-replaced` part can be any IPv4 address, where the dots
have been replaced with dashes, e.g. `127.0.0.1` becomes `127-0-0-1`.
To get the `server-hash` part, the easiest way is to visit
https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token
This page will list all the available Plex servers for your account,
with at least one `.plex.direct` link for each. Copy one URL and replace
the IP address with the desired address. This can be used as the
`plex_url` value.
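Putting the two parts together, the resulting setting would look something like `plex_url = https://127-0-0-1.0123456789abcdef.plex.direct:32400/`, where both the IP and the server hash are placeholders for your own values.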
### Known issues ###
#### Mount and --dir-cache-time ####
@@ -242,6 +265,19 @@ which makes it think we're downloading the full file instead of small chunks.
Organizing the remotes in this order yields better results:
<span style="color:green">**cloud remote** -> **cache** -> **crypt**</span>
#### absolute remote paths ####
`cache` cannot differentiate between relative and absolute paths for the wrapped remote.
Any path given in the `remote` config setting and on the command line will be passed to
the wrapped remote as is, but for storing the chunks on disk the path will be made
relative by removing any leading `/` character.
This behavior is irrelevant for most backend types, but there are backends where a leading `/`
changes the effective directory, e.g. in the `sftp` backend paths starting with a `/` are
relative to the root of the SSH server and paths without one are relative to the user's home
directory. As a result `sftp:bin` and `sftp:/bin` will share the same cache folder, even
though they represent different directories on the SSH server.
### Cache and Remote Control (--rc) ###
Cache supports the new `--rc` mode in rclone and can be remote controlled through the following end points:
By default, the listener is disabled if you do not add the flag.
@@ -281,7 +317,7 @@ then `--cache-chunk-path` will use the same path as `--cache-db-path`.
#### --cache-db-purge ####
Flag to clear all the cached data for this remote before.
Flag to clear all the cached data for this remote on start.
**Default**: not set
@@ -292,7 +328,7 @@ connections. If the chunk size is changed, any downloaded chunks will be invalid
**Default**: 5M
#### --cache-total-chunk-size=SIZE ####
#### --cache-chunk-total-size=SIZE ####
The total size that the chunks can take up on the local disk. If `cache`
exceeds this value then it will start to delete the oldest chunks until
@@ -303,7 +339,7 @@ it goes under this value.
#### --cache-chunk-clean-interval=DURATION ####
How often should `cache` perform cleanups of the chunk storage. The default value
should be ok for most people. If you find that `cache` goes over `cache-total-chunk-size`
should be ok for most people. If you find that `cache` goes over `cache-chunk-total-size`
too often then try to lower this value to force it to perform cleanups more often.
**Default**: 1m


@@ -1,7 +1,7 @@
---
title: "Documentation"
description: "Rclone Changelog"
date: "2018-09-07"
date: "2018-09-01"
---
# Changelog


@@ -1,12 +1,12 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone"
slug: rclone
url: /commands/rclone/
---
## rclone
Sync files and directories to and from local and remote object stores - v1.43.1
Sync files and directories to and from local and remote object stores - v1.43
### Synopsis
@@ -176,7 +176,6 @@ rclone [flags]
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
-h, --help help for rclone
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -304,7 +303,7 @@ rclone [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
@@ -364,4 +363,4 @@ rclone [flags]
* [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion.
* [rclone version](/commands/rclone_version/) - Show the version number.
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone about"
slug: rclone_about
url: /commands/rclone_about/
@@ -184,7 +184,6 @@ rclone about remote: [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -312,7 +311,7 @@ rclone about remote: [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -325,6 +324,6 @@ rclone about remote: [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
@@ -143,7 +143,6 @@ rclone authorize [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -271,7 +270,7 @@ rclone authorize [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -284,6 +283,6 @@ rclone authorize [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone cachestats"
slug: rclone_cachestats
url: /commands/rclone_cachestats/
@@ -142,7 +142,6 @@ rclone cachestats source: [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -270,7 +269,7 @@ rclone cachestats source: [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -283,6 +282,6 @@ rclone cachestats source: [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone cat"
slug: rclone_cat
url: /commands/rclone_cat/
@@ -164,7 +164,6 @@ rclone cat remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -292,7 +291,7 @@ rclone cat remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -305,6 +304,6 @@ rclone cat remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
@@ -158,7 +158,6 @@ rclone check source:path dest:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -286,7 +285,7 @@ rclone check source:path dest:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -299,6 +298,6 @@ rclone check source:path dest:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
@@ -143,7 +143,6 @@ rclone cleanup remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -271,7 +270,7 @@ rclone cleanup remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -284,6 +283,6 @@ rclone cleanup remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
@@ -143,7 +143,6 @@ rclone config [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -271,7 +270,7 @@ rclone config [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -284,7 +283,7 @@ rclone config [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
* [rclone config create](/commands/rclone_config_create/) - Create a new remote with name, type and options.
* [rclone config delete](/commands/rclone_config_delete/) - Delete an existing remote <name>.
* [rclone config dump](/commands/rclone_config_dump/) - Dump the config file as JSON.
@@ -295,4 +294,4 @@ rclone config [flags]
* [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.
* [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote.
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config create"
slug: rclone_config_create
url: /commands/rclone_config_create/
@@ -148,7 +148,6 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -276,7 +275,7 @@ rclone config create <name> <type> [<key> <value>]* [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -291,4 +290,4 @@ rclone config create <name> <type> [<key> <value>]* [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config delete"
slug: rclone_config_delete
url: /commands/rclone_config_delete/
@@ -140,7 +140,6 @@ rclone config delete <name> [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -268,7 +267,7 @@ rclone config delete <name> [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -283,4 +282,4 @@ rclone config delete <name> [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config dump"
slug: rclone_config_dump
url: /commands/rclone_config_dump/
@@ -140,7 +140,6 @@ rclone config dump [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -268,7 +267,7 @@ rclone config dump [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -283,4 +282,4 @@ rclone config dump [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config edit"
slug: rclone_config_edit
url: /commands/rclone_config_edit/
@@ -143,7 +143,6 @@ rclone config edit [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -271,7 +270,7 @@ rclone config edit [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -286,4 +285,4 @@ rclone config edit [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config file"
slug: rclone_config_file
url: /commands/rclone_config_file/
@@ -140,7 +140,6 @@ rclone config file [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -268,7 +267,7 @@ rclone config file [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -283,4 +282,4 @@ rclone config file [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config password"
slug: rclone_config_password
url: /commands/rclone_config_password/
@@ -147,7 +147,6 @@ rclone config password <name> [<key> <value>]+ [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -275,7 +274,7 @@ rclone config password <name> [<key> <value>]+ [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -290,4 +289,4 @@ rclone config password <name> [<key> <value>]+ [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config providers"
slug: rclone_config_providers
url: /commands/rclone_config_providers/
@@ -140,7 +140,6 @@ rclone config providers [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -268,7 +267,7 @@ rclone config providers [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -283,4 +282,4 @@ rclone config providers [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config show"
slug: rclone_config_show
url: /commands/rclone_config_show/
@@ -140,7 +140,6 @@ rclone config show [<remote>] [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -268,7 +267,7 @@ rclone config show [<remote>] [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -283,4 +282,4 @@ rclone config show [<remote>] [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone config update"
slug: rclone_config_update
url: /commands/rclone_config_update/
@@ -147,7 +147,6 @@ rclone config update <name> [<key> <value>]+ [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -275,7 +274,7 @@ rclone config update <name> [<key> <value>]+ [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -290,4 +289,4 @@ rclone config update <name> [<key> <value>]+ [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
@@ -176,7 +176,6 @@ rclone copy source:path dest:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -304,7 +303,7 @@ rclone copy source:path dest:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -317,6 +316,6 @@ rclone copy source:path dest:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone copyto"
slug: rclone_copyto
url: /commands/rclone_copyto/
@@ -166,7 +166,6 @@ rclone copyto source:path dest:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -294,7 +293,7 @@ rclone copyto source:path dest:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -307,6 +306,6 @@ rclone copyto source:path dest:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone copyurl"
slug: rclone_copyurl
url: /commands/rclone_copyurl/
@@ -143,7 +143,6 @@ rclone copyurl https://example.com dest:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -271,7 +270,7 @@ rclone copyurl https://example.com dest:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -284,6 +283,6 @@ rclone copyurl https://example.com dest:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone cryptcheck"
slug: rclone_cryptcheck
url: /commands/rclone_cryptcheck/
@@ -168,7 +168,6 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -296,7 +295,7 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -309,6 +308,6 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone cryptdecode"
slug: rclone_cryptdecode
url: /commands/rclone_cryptdecode/
@@ -152,7 +152,6 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -280,7 +279,7 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -293,6 +292,6 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone dbhashsum"
slug: rclone_dbhashsum
url: /commands/rclone_dbhashsum/
@@ -145,7 +145,6 @@ rclone dbhashsum remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -273,7 +272,7 @@ rclone dbhashsum remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -286,6 +285,6 @@ rclone dbhashsum remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone dedupe"
slug: rclone_dedupe
url: /commands/rclone_dedupe/
@@ -221,7 +221,6 @@ rclone dedupe [mode] remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -349,7 +348,7 @@ rclone dedupe [mode] remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -362,6 +361,6 @@ rclone dedupe [mode] remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone delete"
slug: rclone_delete
url: /commands/rclone_delete/
@@ -157,7 +157,6 @@ rclone delete remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -285,7 +284,7 @@ rclone delete remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -298,6 +297,6 @@ rclone delete remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone deletefile"
slug: rclone_deletefile
url: /commands/rclone_deletefile/
@@ -144,7 +144,6 @@ rclone deletefile remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -272,7 +271,7 @@ rclone deletefile remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -285,6 +284,6 @@ rclone deletefile remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone genautocomplete"
slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/
@@ -139,7 +139,6 @@ Run with --help to list the supported shells.
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -267,7 +266,7 @@ Run with --help to list the supported shells.
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -280,8 +279,8 @@ Run with --help to list the supported shells.
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
* [rclone genautocomplete bash](/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone.
* [rclone genautocomplete zsh](/commands/rclone_genautocomplete_zsh/) - Output zsh completion script for rclone.
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
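The genautocomplete doc above says to run with --help to list the supported shells. A minimal sketch of the documented bash workflow (writing to /etc/bash_completion.d/rclone, the default output location, usually needs sudo):

```
# List supported shells, then generate and load the bash completion script.
rclone genautocomplete --help
sudo rclone genautocomplete bash
. /etc/bash_completion.d/rclone
```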


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone genautocomplete bash"
slug: rclone_genautocomplete_bash
url: /commands/rclone_genautocomplete_bash/
@@ -155,7 +155,6 @@ rclone genautocomplete bash [output_file] [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -283,7 +282,7 @@ rclone genautocomplete bash [output_file] [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -298,4 +297,4 @@ rclone genautocomplete bash [output_file] [flags]
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone genautocomplete zsh"
slug: rclone_genautocomplete_zsh
url: /commands/rclone_genautocomplete_zsh/
@@ -155,7 +155,6 @@ rclone genautocomplete zsh [output_file] [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -283,7 +282,7 @@ rclone genautocomplete zsh [output_file] [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -298,4 +297,4 @@ rclone genautocomplete zsh [output_file] [flags]
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone gendocs"
slug: rclone_gendocs
url: /commands/rclone_gendocs/
@@ -143,7 +143,6 @@ rclone gendocs output_directory [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -271,7 +270,7 @@ rclone gendocs output_directory [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -284,6 +283,6 @@ rclone gendocs output_directory [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone hashsum"
slug: rclone_hashsum
url: /commands/rclone_hashsum/
@@ -157,7 +157,6 @@ rclone hashsum <hash> remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -285,7 +284,7 @@ rclone hashsum <hash> remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -298,6 +297,6 @@ rclone hashsum <hash> remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone link"
slug: rclone_link
url: /commands/rclone_link/
@@ -150,7 +150,6 @@ rclone link remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -278,7 +277,7 @@ rclone link remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -291,6 +290,6 @@ rclone link remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone listremotes"
slug: rclone_listremotes
url: /commands/rclone_listremotes/
@@ -145,7 +145,6 @@ rclone listremotes [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -273,7 +272,7 @@ rclone listremotes [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -286,6 +285,6 @@ rclone listremotes [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone ls"
slug: rclone_ls
url: /commands/rclone_ls/
@@ -174,7 +174,6 @@ rclone ls remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -302,7 +301,7 @@ rclone ls remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -315,6 +314,6 @@ rclone ls remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone lsd"
slug: rclone_lsd
url: /commands/rclone_lsd/
@@ -185,7 +185,6 @@ rclone lsd remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -313,7 +312,7 @@ rclone lsd remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -326,6 +325,6 @@ rclone lsd remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone lsf"
slug: rclone_lsf
url: /commands/rclone_lsf/
@@ -263,7 +263,6 @@ rclone lsf remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -391,7 +390,7 @@ rclone lsf remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -404,6 +403,6 @@ rclone lsf remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone lsjson"
slug: rclone_lsjson
url: /commands/rclone_lsjson/
@@ -203,7 +203,6 @@ rclone lsjson remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -331,7 +330,7 @@ rclone lsjson remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -344,6 +343,6 @@ rclone lsjson remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone lsl"
slug: rclone_lsl
url: /commands/rclone_lsl/
@@ -174,7 +174,6 @@ rclone lsl remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -302,7 +301,7 @@ rclone lsl remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -315,6 +314,6 @@ rclone lsl remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone md5sum"
slug: rclone_md5sum
url: /commands/rclone_md5sum/
@@ -143,7 +143,6 @@ rclone md5sum remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -271,7 +270,7 @@ rclone md5sum remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -284,6 +283,6 @@ rclone md5sum remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone mkdir"
slug: rclone_mkdir
url: /commands/rclone_mkdir/
@@ -140,7 +140,6 @@ rclone mkdir remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -268,7 +267,7 @@ rclone mkdir remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -281,6 +280,6 @@ rclone mkdir remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone mount"
slug: rclone_mount
url: /commands/rclone_mount/
@@ -92,7 +92,7 @@ systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone mount
can't use retries in the same way without making local copies of the
uploads. Look at the **EXPERIMENTAL** [file caching](#file-caching)
for solutions to make mount mount more reliable.
for solutions to make mount more reliable.
### Attribute caching
@@ -444,7 +444,6 @@ rclone mount remote:path /path/to/mountpoint [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -572,7 +571,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -585,6 +584,6 @@ rclone mount remote:path /path/to/mountpoint [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
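The reliability note in the mount doc above points to the experimental file caching layer as the way to get retries for uploads. A minimal sketch (assuming a configured remote named remote:): the --vfs-cache-mode writes setting buffers written files on local disk so failed uploads can be retried:

```
# Cache file writes locally so uploads can be retried,
# per the file caching section of the mount doc.
rclone mount remote:path /path/to/mountpoint --vfs-cache-mode writes
```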


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone move"
slug: rclone_move
url: /commands/rclone_move/
@@ -160,7 +160,6 @@ rclone move source:path dest:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -288,7 +287,7 @@ rclone move source:path dest:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -301,6 +300,6 @@ rclone move source:path dest:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone moveto"
slug: rclone_moveto
url: /commands/rclone_moveto/
@@ -169,7 +169,6 @@ rclone moveto source:path dest:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -297,7 +296,7 @@ rclone moveto source:path dest:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -310,6 +309,6 @@ rclone moveto source:path dest:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone ncdu"
slug: rclone_ncdu
url: /commands/rclone_ncdu/
@@ -167,7 +167,6 @@ rclone ncdu remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -295,7 +294,7 @@ rclone ncdu remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -308,6 +307,6 @@ rclone ncdu remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone obscure"
slug: rclone_obscure
url: /commands/rclone_obscure/
@@ -140,7 +140,6 @@ rclone obscure password [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -268,7 +267,7 @@ rclone obscure password [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -281,6 +280,6 @@ rclone obscure password [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone purge"
slug: rclone_purge
url: /commands/rclone_purge/
@@ -144,7 +144,6 @@ rclone purge remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -272,7 +271,7 @@ rclone purge remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -285,6 +284,6 @@ rclone purge remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone rc"
slug: rclone_rc
url: /commands/rclone_rc/
@@ -150,7 +150,6 @@ rclone rc commands parameter [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -278,7 +277,7 @@ rclone rc commands parameter [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -291,6 +290,6 @@ rclone rc commands parameter [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone rcat"
slug: rclone_rcat
url: /commands/rclone_rcat/
@@ -162,7 +162,6 @@ rclone rcat remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -290,7 +289,7 @@ rclone rcat remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -303,6 +302,6 @@ rclone rcat remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone rmdir"
slug: rclone_rmdir
url: /commands/rclone_rmdir/
@@ -142,7 +142,6 @@ rclone rmdir remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -270,7 +269,7 @@ rclone rmdir remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -283,6 +282,6 @@ rclone rmdir remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone rmdirs"
slug: rclone_rmdirs
url: /commands/rclone_rmdirs/
@@ -150,7 +150,6 @@ rclone rmdirs remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -278,7 +277,7 @@ rclone rmdirs remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -291,6 +290,6 @@ rclone rmdirs remote:path [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone serve"
slug: rclone_serve
url: /commands/rclone_serve/
@@ -146,7 +146,6 @@ rclone serve <protocol> [opts] <remote> [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -274,7 +273,7 @@ rclone serve <protocol> [opts] <remote> [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -287,9 +286,9 @@ rclone serve <protocol> [opts] <remote> [flags]
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43.1
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.43
* [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP.
* [rclone serve restic](/commands/rclone_serve_restic/) - Serve the remote for restic's REST API.
* [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over webdav.
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone serve http"
slug: rclone_serve_http
url: /commands/rclone_serve_http/
@@ -356,7 +356,6 @@ rclone serve http remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -484,7 +483,7 @@ rclone serve http remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -499,4 +498,4 @@ rclone serve http remote:path [flags]
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018
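For the serve http doc above, a minimal usage sketch (assuming a configured remote named remote:; the server binds to localhost:8080 unless --addr is given):

```
# Serve the remote over HTTP on all interfaces, port 8080.
rclone serve http remote:path --addr :8080
```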


@@ -1,5 +1,5 @@
---
date: 2018-09-07T16:09:26+01:00
date: 2018-09-01T12:54:54+01:00
title: "rclone serve restic"
slug: rclone_serve_restic
url: /commands/rclone_serve_restic/
@@ -276,7 +276,6 @@ rclone serve restic remote:path [flags]
--gcs-service-account-file string Service Account Credentials JSON file path
--gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
--http-url string URL of http host to connect to
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
--hubic-client-id string Hubic Client Id
--hubic-client-secret string Hubic Client Secret
--ignore-checksum Skip post copy check of checksums.
@@ -404,7 +403,7 @@ rclone serve restic remote:path [flags]
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.43")
-v, --verbose count Print lots more stuff (repeat for more)
--webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
--webdav-pass string Password.
@@ -419,4 +418,4 @@ rclone serve restic remote:path [flags]
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
###### Auto generated by spf13/cobra on 7-Sep-2018
###### Auto generated by spf13/cobra on 1-Sep-2018

Some files were not shown because too many files have changed in this diff.