mirror of https://github.com/rclone/rclone.git synced 2026-01-11 21:13:35 +00:00

Compare commits


1 Commit
v1.36 ... v1.02

Author SHA1 Message Date
Nick Craig-Wood
094cc27621 Version 1.02 2014-07-19 13:12:20 +01:00
1138 changed files with 4726 additions and 392808 deletions

4
.gitignore vendored

@@ -1,5 +1,9 @@
*~
_junk/
rclone
rclonetest/rclonetest
build
docs/public
README.html
README.txt
rclone.1


@@ -1,2 +0,0 @@
default_dependencies: false
cli: false


@@ -1,34 +1,11 @@
language: go
sudo: false
osx_image: xcode7.3
os:
- linux
go:
- 1.5.4
- 1.6.4
- 1.7.4
- 1.8
- tip
install:
- git fetch --unshallow --tags
- make vars
- make build_dep
- 1.1.2
- 1.2.2
- 1.3
- tip
script:
- make check
- make quicktest
env:
matrix:
secure: gU8gCV9R8Kv/Gn0SmCP37edpfIbPoSvsub48GK7qxJdTU628H0KOMiZW/T0gtV5d67XJZ4eKnhJYlxwwxgSgfejO32Rh5GlYEKT/FuVoH0BD72dM1GDFLSrUiUYOdoHvf/BKIFA3dJFT4lk2ASy4Zh7SEoXHG6goBlqUpYx8hVA=
matrix:
allow_failures:
- go: tip
include:
- os: osx
go: 1.8
deploy:
provider: script
script: make travis_beta
on:
branch: master
go: 1.8
condition: "`uname` == 'Linux'"
- go get ./...
- go test -v ./...


@@ -1,191 +0,0 @@
# Contributing to rclone #
This is a short guide on how to contribute things to rclone.
## Reporting a bug ##
If you've just got a question or aren't sure if you've found a bug
then please use the [rclone forum](https://forum.rclone.org/) instead
of filing an issue.
When filing an issue, please include the following information if
possible as well as a description of the problem. Make sure you test
with the [latest beta of rclone](http://beta.rclone.org/):
* Rclone version (eg output from `rclone -V`)
* Which OS you are using and how many bits (eg Windows 7, 64 bit)
* The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
* A log of the command with the `-vv` flag (eg output from `rclone -vv copy /tmp remote:tmp`)
* if the log contains secrets then edit the file with a text editor first to obscure them
## Submitting a pull request ##
If you find a bug that you'd like to fix, or a new feature that you'd
like to implement then please submit a pull request via Github.
If it is a big feature then make an issue first so it can be discussed.
You'll need a Go environment set up with GOPATH set. See [the Go
getting started docs](https://golang.org/doc/install) for more info.
First in your web browser press the fork button on [rclone's Github
page](https://github.com/ncw/rclone).
Now in your terminal
go get github.com/ncw/rclone
cd $GOPATH/src/github.com/ncw/rclone
git remote rename origin upstream
git remote add origin git@github.com:YOURUSER/rclone.git
Make a branch to add your new feature
git checkout -b my-new-feature
And get hacking.
When ready - run the unit tests for the code you changed
go test -v
Note that you may need to make a test remote, eg `TestSwift` for some
of the unit tests.
Note the top level Makefile targets
* make check
* make test
Both of these will be run by Travis when you make a pull request but
you can do this yourself locally too. These require some extra go
packages which you can install with
* make build_dep
Make sure you
* Add documentation for a new feature (see below for where)
* Add unit tests for a new feature
* squash commits down to one per feature
* rebase to master `git rebase master`
When you are done with that
git push origin my-new-feature
Go to the Github website and click [Create pull
request](https://help.github.com/articles/creating-a-pull-request/).
Your patch will get reviewed and you might get asked to fix some stuff.
If so, then make the changes in the same branch, squash the commits,
rebase it to master then push it to Github with `--force`.
## Testing ##
rclone's tests are run from the go testing framework, so at the top
level you can run this to run all the tests.
go test -v ./...
rclone contains a mixture of unit tests and integration tests.
Because it is difficult (and in some respects pointless) to test cloud
storage systems by mocking all their interfaces, rclone unit tests can
run against any of the backends. This is done by making specially
named remotes in the default config file.
If you wanted to test changes in the `drive` backend, then you would
need to make a remote called `TestDrive`.
You can then run the unit tests in the drive directory. These tests
are skipped if `TestDrive:` isn't defined.
cd drive
go test -v
You can then run the integration tests which tests all of rclone's
operations. Normally these get run against the local filing system,
but they can be run against any of the remotes.
cd ../fs
go test -v -remote TestDrive:
go test -v -remote TestDrive: -subdir
If you want to run all the integration tests against all the remotes,
then run in that directory
go run test_all.go
## Writing Documentation ##
If you are adding a new feature then please update the documentation.
If you add a new flag, then if it is a general flag, document it in
`docs/content/docs.md` - the flags there are supposed to be in
alphabetical order. If it is a remote specific flag, then document it
in `docs/content/remote.md`.
The only documentation you need to edit are the `docs/content/*.md`
files. The MANUAL.*, rclone.1, web site etc are all auto generated
from those during the release process. See the `make doc` and `make
website` targets in the Makefile if you are interested in how. You
don't need to run these when adding a feature.
Documentation for rclone sub commands is with their code, eg
`cmd/ls/ls.go`.
## Making a release ##
There are separate instructions for making a release in the RELEASE.md
file.
## Updating the vendor directory ##
Do these commands to update the entire vendor directory to the latest
version of all the dependencies. This should be done early in the
release cycle. Individual dependencies can be added with `godep get`.
* make build_dep
* make update
## Writing a new backend ##
Choose a name. The docs here will use `remote` as an example.
Note that in rclone terminology a file system backend is called a
remote or an fs.
Research
* Look at the interfaces defined in `fs/fs.go`
* Study one or more of the existing remotes
Getting going
* Create `remote/remote.go` (copy this from a similar fs)
* Add your fs to the imports in `fs/all/all.go`
Unit tests
* Create a config entry called `TestRemote` for the unit tests to use
* Add your fs to the end of `fstest/fstests/gen_tests.go`
* Generate `remote/remote_test.go` unit tests with `cd fstest/fstests; go generate`
* Make sure all tests pass with `go test -v`
Integration tests
* Add your fs to `fs/test_all.go`
* Make sure integration tests pass with
* `cd fs`
* `go test -v -remote TestRemote:` and
* `go test -v -remote TestRemote: -subdir`
Add your fs to the docs
* `README.md` - main Github page
* `docs/content/remote.md` - main docs page
* `docs/content/overview.md` - overview docs
* `docs/content/docs.md` - list of remotes in config section
* `docs/content/about.md` - front page of rclone.org
* `docs/layouts/chrome/navbar.html` - add it to the website navigation
* `bin/make_manual.py` - add the page to the `docs` constant

513
Godeps/Godeps.json generated

@@ -1,513 +0,0 @@
{
"ImportPath": "github.com/ncw/rclone",
"GoVersion": "go1.7",
"GodepVersion": "v75",
"Packages": [
"./..."
],
"Deps": [
{
"ImportPath": "bazil.org/fuse",
"Rev": "371fbbdaa8987b715bdd21d6adc4c9b20155f748"
},
{
"ImportPath": "bazil.org/fuse/fs",
"Rev": "371fbbdaa8987b715bdd21d6adc4c9b20155f748"
},
{
"ImportPath": "bazil.org/fuse/fuseutil",
"Rev": "371fbbdaa8987b715bdd21d6adc4c9b20155f748"
},
{
"ImportPath": "cloud.google.com/go/compute/metadata",
"Comment": "v0.6.0-68-g0b87d14",
"Rev": "0b87d14d90086b53a97dfbd66f3000f7f112b494"
},
{
"ImportPath": "cloud.google.com/go/internal",
"Comment": "v0.6.0-68-g0b87d14",
"Rev": "0b87d14d90086b53a97dfbd66f3000f7f112b494"
},
{
"ImportPath": "github.com/Unknwon/goconfig",
"Rev": "87a46d97951ee1ea20ed3b24c25646a79e87ba5d"
},
{
"ImportPath": "github.com/VividCortex/ewma",
"Comment": "v1.0-20-gc595cd8",
"Rev": "c595cd886c223c6c28fc9ae2727a61b5e4693d85"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/awserr",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/awsutil",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/client",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/client/metadata",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/corehandlers",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/credentials",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/credentials/endpointcreds",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/credentials/stscreds",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/defaults",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/ec2metadata",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/endpoints",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/request",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/session",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/aws/signer/v4",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/query",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/query/queryutil",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/rest",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/restxml",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/private/waiter",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/s3",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/s3/s3iface",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/s3/s3manager",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/aws/aws-sdk-go/service/sts",
"Comment": "v1.6.24-1-g2d3b3bc",
"Rev": "2d3b3bc3aae6a09a9b194aa6eb71326fcbe2e918"
},
{
"ImportPath": "github.com/cpuguy83/go-md2man/md2man",
"Comment": "v1.0.6",
"Rev": "a65d4d2de4d5f7c74868dfa9b202a3c8be315aaa"
},
{
"ImportPath": "github.com/davecgh/go-spew/spew",
"Comment": "v1.1.0",
"Rev": "346938d642f2ec3594ed81d874461961cd0faa76"
},
{
"ImportPath": "github.com/go-ini/ini",
"Comment": "v1.24.0-2-gee900ca",
"Rev": "ee900ca565931451fe4e4409bcbd4316331cec1c"
},
{
"ImportPath": "github.com/golang/protobuf/proto",
"Rev": "8ee79997227bf9b34611aee7946ae64735e6fd93"
},
{
"ImportPath": "github.com/google/go-querystring/query",
"Rev": "53e6ce116135b80d037921a7fdd5138cf32d7a8a"
},
{
"ImportPath": "github.com/googleapis/gax-go",
"Rev": "da06d194a00e19ce00d9011a13931c3f6f6887c7"
},
{
"ImportPath": "github.com/inconshreveable/mousetrap",
"Rev": "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
},
{
"ImportPath": "github.com/jmespath/go-jmespath",
"Comment": "0.2.2-14-gbd40a43",
"Rev": "bd40a432e4c76585ef6b72d3fd96fb9b6dc7b68d"
},
{
"ImportPath": "github.com/kr/fs",
"Rev": "2788f0dbd16903de03cb8186e5c7d97b69ad387b"
},
{
"ImportPath": "github.com/ncw/go-acd",
"Rev": "7954f1fad2bda6a7836999003e4481d6e32edc1e"
},
{
"ImportPath": "github.com/ncw/swift",
"Rev": "6c1b1510538e1f00d49a558b7b9b87d71bc454d6"
},
{
"ImportPath": "github.com/pkg/errors",
"Comment": "v0.8.0-2-g248dadf",
"Rev": "248dadf4e9068a0b3e79f02ed0a610d935de5302"
},
{
"ImportPath": "github.com/pkg/sftp",
"Rev": "ff7e52ffd762466ebd2c4e710d5436dccc539f54"
},
{
"ImportPath": "github.com/pmezard/go-difflib/difflib",
"Comment": "v1.0.0",
"Rev": "792786c7400a136282c1664665ae0a8db921c6c2"
},
{
"ImportPath": "github.com/rfjakob/eme",
"Comment": "v1.0-15-gfd00240",
"Rev": "fd00240838d2e0fe6b2c58bf5b27db843d828ad5"
},
{
"ImportPath": "github.com/russross/blackfriday",
"Comment": "v1.4-40-g5f33e7b",
"Rev": "5f33e7b7878355cd2b7e6b8eefc48a5472c69f70"
},
{
"ImportPath": "github.com/shurcooL/sanitized_anchor_name",
"Rev": "1dba4b3954bc059efc3991ec364f9f9a35f597d2"
},
{
"ImportPath": "github.com/skratchdot/open-golang/open",
"Rev": "75fb7ed4208cf72d323d7d02fd1a5964a7a9073c"
},
{
"ImportPath": "github.com/spf13/cobra",
"Rev": "b5d8e8f46a2f829f755b6e33b454e25c61c935e1"
},
{
"ImportPath": "github.com/spf13/cobra/doc",
"Rev": "b5d8e8f46a2f829f755b6e33b454e25c61c935e1"
},
{
"ImportPath": "github.com/spf13/pflag",
"Rev": "9ff6c6923cfffbcd502984b8e0c80539a94968b7"
},
{
"ImportPath": "github.com/stacktic/dropbox",
"Rev": "58f839b21094d5e0af7caf613599830589233d20"
},
{
"ImportPath": "github.com/stretchr/testify/assert",
"Comment": "v1.1.4-27-g4d4bfba",
"Rev": "4d4bfba8f1d1027c4fdbe371823030df51419987"
},
{
"ImportPath": "github.com/stretchr/testify/require",
"Comment": "v1.1.4-27-g4d4bfba",
"Rev": "4d4bfba8f1d1027c4fdbe371823030df51419987"
},
{
"ImportPath": "github.com/tsenart/tb",
"Rev": "19f4c3d79d2bd67d0911b2e310b999eeea4454c1"
},
{
"ImportPath": "golang.org/x/crypto/curve25519",
"Rev": "453249f01cfeb54c3d549ddb75ff152ca243f9d8"
},
{
"ImportPath": "golang.org/x/crypto/ed25519",
"Rev": "453249f01cfeb54c3d549ddb75ff152ca243f9d8"
},
{
"ImportPath": "golang.org/x/crypto/ed25519/internal/edwards25519",
"Rev": "453249f01cfeb54c3d549ddb75ff152ca243f9d8"
},
{
"ImportPath": "golang.org/x/crypto/nacl/secretbox",
"Rev": "453249f01cfeb54c3d549ddb75ff152ca243f9d8"
},
{
"ImportPath": "golang.org/x/crypto/pbkdf2",
"Rev": "453249f01cfeb54c3d549ddb75ff152ca243f9d8"
},
{
"ImportPath": "golang.org/x/crypto/poly1305",
"Rev": "453249f01cfeb54c3d549ddb75ff152ca243f9d8"
},
{
"ImportPath": "golang.org/x/crypto/salsa20/salsa",
"Rev": "453249f01cfeb54c3d549ddb75ff152ca243f9d8"
},
{
"ImportPath": "golang.org/x/crypto/scrypt",
"Rev": "453249f01cfeb54c3d549ddb75ff152ca243f9d8"
},
{
"ImportPath": "golang.org/x/crypto/ssh",
"Rev": "453249f01cfeb54c3d549ddb75ff152ca243f9d8"
},
{
"ImportPath": "golang.org/x/crypto/ssh/agent",
"Rev": "453249f01cfeb54c3d549ddb75ff152ca243f9d8"
},
{
"ImportPath": "golang.org/x/crypto/ssh/terminal",
"Rev": "453249f01cfeb54c3d549ddb75ff152ca243f9d8"
},
{
"ImportPath": "golang.org/x/net/context",
"Rev": "b4690f45fa1cafc47b1c280c2e75116efe40cc13"
},
{
"ImportPath": "golang.org/x/net/context/ctxhttp",
"Rev": "b4690f45fa1cafc47b1c280c2e75116efe40cc13"
},
{
"ImportPath": "golang.org/x/net/http2",
"Rev": "b4690f45fa1cafc47b1c280c2e75116efe40cc13"
},
{
"ImportPath": "golang.org/x/net/http2/hpack",
"Rev": "b4690f45fa1cafc47b1c280c2e75116efe40cc13"
},
{
"ImportPath": "golang.org/x/net/idna",
"Rev": "b4690f45fa1cafc47b1c280c2e75116efe40cc13"
},
{
"ImportPath": "golang.org/x/net/internal/timeseries",
"Rev": "b4690f45fa1cafc47b1c280c2e75116efe40cc13"
},
{
"ImportPath": "golang.org/x/net/lex/httplex",
"Rev": "b4690f45fa1cafc47b1c280c2e75116efe40cc13"
},
{
"ImportPath": "golang.org/x/net/trace",
"Rev": "b4690f45fa1cafc47b1c280c2e75116efe40cc13"
},
{
"ImportPath": "golang.org/x/oauth2",
"Rev": "b9780ec78894ab900c062d58ee3076cd9b2a4501"
},
{
"ImportPath": "golang.org/x/oauth2/google",
"Rev": "b9780ec78894ab900c062d58ee3076cd9b2a4501"
},
{
"ImportPath": "golang.org/x/oauth2/internal",
"Rev": "b9780ec78894ab900c062d58ee3076cd9b2a4501"
},
{
"ImportPath": "golang.org/x/oauth2/jws",
"Rev": "b9780ec78894ab900c062d58ee3076cd9b2a4501"
},
{
"ImportPath": "golang.org/x/oauth2/jwt",
"Rev": "b9780ec78894ab900c062d58ee3076cd9b2a4501"
},
{
"ImportPath": "golang.org/x/sys/unix",
"Rev": "075e574b89e4c2d22f2286a7e2b919519c6f3547"
},
{
"ImportPath": "golang.org/x/text/transform",
"Rev": "85c29909967d7f171f821e7a42e7b7af76fb9598"
},
{
"ImportPath": "golang.org/x/text/unicode/norm",
"Rev": "85c29909967d7f171f821e7a42e7b7af76fb9598"
},
{
"ImportPath": "google.golang.org/api/drive/v2",
"Rev": "bc20c61134e1d25265dd60049f5735381e79b631"
},
{
"ImportPath": "google.golang.org/api/gensupport",
"Rev": "bc20c61134e1d25265dd60049f5735381e79b631"
},
{
"ImportPath": "google.golang.org/api/googleapi",
"Rev": "bc20c61134e1d25265dd60049f5735381e79b631"
},
{
"ImportPath": "google.golang.org/api/googleapi/internal/uritemplates",
"Rev": "bc20c61134e1d25265dd60049f5735381e79b631"
},
{
"ImportPath": "google.golang.org/api/storage/v1",
"Rev": "bc20c61134e1d25265dd60049f5735381e79b631"
},
{
"ImportPath": "google.golang.org/appengine",
"Comment": "v1.0.0-28-g2e4a801",
"Rev": "2e4a801b39fc199db615bfca7d0b9f8cd9580599"
},
{
"ImportPath": "google.golang.org/appengine/internal",
"Comment": "v1.0.0-28-g2e4a801",
"Rev": "2e4a801b39fc199db615bfca7d0b9f8cd9580599"
},
{
"ImportPath": "google.golang.org/appengine/internal/app_identity",
"Comment": "v1.0.0-28-g2e4a801",
"Rev": "2e4a801b39fc199db615bfca7d0b9f8cd9580599"
},
{
"ImportPath": "google.golang.org/appengine/internal/base",
"Comment": "v1.0.0-28-g2e4a801",
"Rev": "2e4a801b39fc199db615bfca7d0b9f8cd9580599"
},
{
"ImportPath": "google.golang.org/appengine/internal/datastore",
"Comment": "v1.0.0-28-g2e4a801",
"Rev": "2e4a801b39fc199db615bfca7d0b9f8cd9580599"
},
{
"ImportPath": "google.golang.org/appengine/internal/log",
"Comment": "v1.0.0-28-g2e4a801",
"Rev": "2e4a801b39fc199db615bfca7d0b9f8cd9580599"
},
{
"ImportPath": "google.golang.org/appengine/internal/modules",
"Comment": "v1.0.0-28-g2e4a801",
"Rev": "2e4a801b39fc199db615bfca7d0b9f8cd9580599"
},
{
"ImportPath": "google.golang.org/appengine/internal/remote_api",
"Comment": "v1.0.0-28-g2e4a801",
"Rev": "2e4a801b39fc199db615bfca7d0b9f8cd9580599"
},
{
"ImportPath": "google.golang.org/grpc",
"Comment": "v1.0.5-52-gd0c32ee",
"Rev": "d0c32ee6a441117d49856d6120ca9552af413ee0"
},
{
"ImportPath": "google.golang.org/grpc/codes",
"Comment": "v1.0.5-52-gd0c32ee",
"Rev": "d0c32ee6a441117d49856d6120ca9552af413ee0"
},
{
"ImportPath": "google.golang.org/grpc/credentials",
"Comment": "v1.0.5-52-gd0c32ee",
"Rev": "d0c32ee6a441117d49856d6120ca9552af413ee0"
},
{
"ImportPath": "google.golang.org/grpc/grpclog",
"Comment": "v1.0.5-52-gd0c32ee",
"Rev": "d0c32ee6a441117d49856d6120ca9552af413ee0"
},
{
"ImportPath": "google.golang.org/grpc/internal",
"Comment": "v1.0.5-52-gd0c32ee",
"Rev": "d0c32ee6a441117d49856d6120ca9552af413ee0"
},
{
"ImportPath": "google.golang.org/grpc/metadata",
"Comment": "v1.0.5-52-gd0c32ee",
"Rev": "d0c32ee6a441117d49856d6120ca9552af413ee0"
},
{
"ImportPath": "google.golang.org/grpc/naming",
"Comment": "v1.0.5-52-gd0c32ee",
"Rev": "d0c32ee6a441117d49856d6120ca9552af413ee0"
},
{
"ImportPath": "google.golang.org/grpc/peer",
"Comment": "v1.0.5-52-gd0c32ee",
"Rev": "d0c32ee6a441117d49856d6120ca9552af413ee0"
},
{
"ImportPath": "google.golang.org/grpc/stats",
"Comment": "v1.0.5-52-gd0c32ee",
"Rev": "d0c32ee6a441117d49856d6120ca9552af413ee0"
},
{
"ImportPath": "google.golang.org/grpc/tap",
"Comment": "v1.0.5-52-gd0c32ee",
"Rev": "d0c32ee6a441117d49856d6120ca9552af413ee0"
},
{
"ImportPath": "google.golang.org/grpc/transport",
"Comment": "v1.0.5-52-gd0c32ee",
"Rev": "d0c32ee6a441117d49856d6120ca9552af413ee0"
},
{
"ImportPath": "gopkg.in/yaml.v2",
"Rev": "a3f3340b5840cee44f372bddb5880fcbc419b46a"
}
]
}

5
Godeps/Readme generated

@@ -1,5 +0,0 @@
This directory tree is generated automatically by godep.
Please do not edit.
See https://github.com/tools/godep for more information.


@@ -1,17 +0,0 @@
When filing an issue, please include the following information if possible as well as a description of the problem. Make sure you test with the latest beta of rclone.
http://beta.rclone.org/
http://rclone.org/downloads/
If you've just got a question or aren't sure if you've found a bug then please use the [rclone forum](https://forum.rclone.org/) instead of filing an issue.
> What is your rclone version (eg output from `rclone -V`)
> Which OS you are using and how many bits (eg Windows 7, 64 bit)
> Which cloud storage system are you using? (eg Google Drive)
> The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
> A log from the command with the `-vv` flag (eg output from `rclone -vv copy /tmp remote:tmp`)

File diff suppressed because it is too large

6109
MANUAL.md

File diff suppressed because it is too large

6052
MANUAL.txt

File diff suppressed because it is too large

128
Makefile

@@ -1,141 +1,59 @@
SHELL = /bin/bash
TAG := $(shell echo `git describe --tags`-`git rev-parse --abbrev-ref HEAD` | sed 's/-\([0-9]\)-/-0\1-/; s/-\(HEAD\|master\)$$//')
TAG := $(shell git describe --tags)
LAST_TAG := $(shell git describe --tags --abbrev=0)
NEW_TAG := $(shell echo $(LAST_TAG) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f", $$_)')
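# With LAST_TAG=v1.36 this computes NEW_TAG=v1.37 (strip the leading v, add 0.01, reformat as v%.2f)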
GO_VERSION := $(shell go version)
GO_FILES := $(shell go list ./... | grep -v /vendor/ )
GO_LATEST := $(findstring go1.8,$(GO_VERSION))
BETA_URL := http://beta.rclone.org/$(TAG)/
# Only needed for Go 1.5
export GO15VENDOREXPERIMENT=1
.PHONY: rclone
rclone: *.go */*.go
@go version
go build
rclone:
touch fs/version.go
go install -v --ldflags "-s -X github.com/ncw/rclone/fs.Version=$(TAG)"
cp -av `go env GOPATH`/bin/rclone .
doc: rclone.1 README.html README.txt
vars:
@echo SHELL="'$(SHELL)'"
@echo TAG="'$(TAG)'"
@echo LAST_TAG="'$(LAST_TAG)'"
@echo NEW_TAG="'$(NEW_TAG)'"
@echo GO_VERSION="'$(GO_VERSION)'"
@echo GO_LATEST="'$(GO_LATEST)'"
@echo BETA_URL="'$(BETA_URL)'"
rclone.1: README.md
pandoc -s --from markdown --to man README.md -o rclone.1
# Full suite of integration tests
test: rclone
go test $(GO_FILES)
cd fs && go run test_all.go
README.html: README.md
pandoc -s --from markdown_github --to html README.md -o README.html
# Quick test
quicktest:
RCLONE_CONFIG="/notfound" go test $(GO_FILES)
RCLONE_CONFIG="/notfound" go test -cpu=2 -race $(GO_FILES)
# Do source code quality checks
check: rclone
ifdef GO_LATEST
go tool vet -printfuncs Debugf,Infof,Logf,Errorf . 2>&1 | grep -E -v vendor/ ; test $$? -eq 1
errcheck $(GO_FILES)
find . -name \*.go | grep -v /vendor/ | xargs goimports -d | grep . ; test $$? -eq 1
go list ./... | grep -v /vendor/ | xargs -i golint {} | grep -E -v '(StorageUrl|CdnUrl)' ; test $$? -eq 1
else
@echo Skipping tests as not on Go stable
endif
# Get the build dependencies
build_dep:
ifdef GO_LATEST
go get -u github.com/kisielk/errcheck
go get -u golang.org/x/tools/cmd/goimports
go get -u github.com/golang/lint/golint
go get -u github.com/inconshreveable/mousetrap
go get -u github.com/tools/godep
endif
# Update dependencies
update:
rm -rf Godeps vendor
go get -t -u -f -v ./...
godep save ./...
doc: rclone.1 MANUAL.html MANUAL.txt
rclone.1: MANUAL.md
pandoc -s --from markdown --to man MANUAL.md -o rclone.1
MANUAL.md: bin/make_manual.py docs/content/*.md commanddocs
./bin/make_manual.py
MANUAL.html: MANUAL.md
pandoc -s --from markdown --to html MANUAL.md -o MANUAL.html
MANUAL.txt: MANUAL.md
pandoc -s --from markdown --to plain MANUAL.md -o MANUAL.txt
commanddocs: rclone
rclone gendocs docs/content/commands/
README.txt: README.md
pandoc -s --from markdown_github --to plain README.md -o README.txt
install: rclone
install -d ${DESTDIR}/usr/bin
install -t ${DESTDIR}/usr/bin ${GOPATH}/bin/rclone
install -t ${DESTDIR}/usr/bin rclone
clean:
go clean ./...
find . -name \*~ | xargs -r rm -f
rm -rf build docs/public
rm -f rclone rclonetest/rclonetest
rm -f rclone rclonetest/rclonetest rclone.1 README.html README.txt
website:
cd docs && hugo
upload_website: website
rclone -v sync docs/public memstore:www-rclone-org
./rclone -v sync docs/public memstore:www-rclone-org
upload:
rclone -v copy build/ memstore:downloads-rclone-org
upload_github:
./bin/upload-github $(TAG)
./rclone -v copy build/ memstore:downloads-rclone-org
cross: doc
go run bin/cross-compile.go -release current $(TAG)
./cross-compile $(TAG)
beta:
go run bin/cross-compile.go $(TAG)β
rclone -v copy build/ memstore:pub-rclone-org/$(TAG)β
@echo Beta release ready at http://pub.rclone.org/$(TAG)%CE%B2/
travis_beta:
git log $(LAST_TAG).. > /tmp/git-log.txt
go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt $(TAG)β
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ memstore:beta-rclone-org/$(TAG)
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' build/ memstore:beta-rclone-org
@echo Beta release ready at $(BETA_URL)
serve: website
serve:
cd docs && hugo server -v -w
tag: doc
tag:
@echo "Old tag is $(LAST_TAG)"
@echo "New tag is $(NEW_TAG)"
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(NEW_TAG)\"\n" | gofmt > fs/version.go
echo -e "package fs\n const Version = \"$(NEW_TAG)\"\n" | gofmt > fs/version.go
perl -lpe 's/VERSION/${NEW_TAG}/g; s/DATE/'`date -I`'/g;' docs/content/downloads.md.in > docs/content/downloads.md
git tag $(NEW_TAG)
@echo "Edit the new changelog in docs/content/changelog.md"
@echo " * $(NEW_TAG) -" `date -I` >> docs/content/changelog.md
@git log $(LAST_TAG)..$(NEW_TAG) --oneline >> docs/content/changelog.md
@echo "Add this to changelog in README.md"
@echo " * $(NEW_TAG) -" `date -I`
@git log $(LAST_TAG)..$(NEW_TAG) --oneline
@echo "Then commit the changes"
@echo git commit -m \"Version $(NEW_TAG)\" -a -v
@echo git commit -m "Version $(NEW_TAG)" -a -v
@echo "And finally run make retag before make cross etc"
retag:
git tag -f $(LAST_TAG)
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(LAST_TAG)-DEV\"\n" | gofmt > fs/version.go
git commit -m "Start $(LAST_TAG)-DEV development" fs/version.go
gen_tests:
cd fstest/fstests && go generate

325
README.md

@@ -1,16 +1,12 @@
% rclone(1) User Manual
% Nick Craig-Wood
% Jul 7, 2014
Rclone
======
[![Logo](http://rclone.org/img/rclone-120x120.png)](http://rclone.org/)
[Website](http://rclone.org) |
[Documentation](http://rclone.org/docs/) |
[Contributing](CONTRIBUTING.md) |
[Changelog](http://rclone.org/changelog/) |
[Installation](http://rclone.org/install/) |
[Forum](https://forum.rclone.org/)
[G+](https://google.com/+RcloneOrg)
[![Build Status](https://travis-ci.org/ncw/rclone.svg?branch=master)](https://travis-ci.org/ncw/rclone) [![Windows Build Status](https://ci.appveyor.com/api/projects/status/github/ncw/rclone?branch=master&passingText=windows%20-%20ok&svg=true)](https://ci.appveyor.com/project/ncw/rclone) [![GoDoc](https://godoc.org/github.com/ncw/rclone?status.svg)](https://godoc.org/github.com/ncw/rclone)
Rclone is a command line program to sync files and directories to and from
* Google Drive
@@ -18,33 +14,312 @@ Rclone is a command line program to sync files and directories to and from
* Openstack Swift / Rackspace cloud files / Memset Memstore
* Dropbox
* Google Cloud Storage
* Amazon Drive
* Microsoft One Drive
* Hubic
* Backblaze B2
* Yandex Disk
* SFTP
* The local filesystem
Features
* MD5/SHA1 hashes checked at all times for file integrity
* MD5SUMs checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* Copy mode to just copy new/changed files
* Sync (one way) mode to make a directory identical
* Check mode to check for file hash equality
* Can sync to and from network, eg two different cloud accounts
* Optional encryption (Crypt)
* Optional FUSE mount
* Sync mode to make a directory identical
* Check mode to check all MD5SUMs
* Can sync to and from network, eg two different Drive accounts
See the home page for installation, usage, documentation, changelog
and configuration walkthroughs.
See the Home page for more documentation and configuration walkthroughs.
* http://rclone.org/
Install
-------
Rclone is a Go program and comes as a single binary file.
Download the binary for your OS from
* http://rclone.org/downloads/
Or alternatively if you have Go installed use
go install github.com/ncw/rclone
and this will build the binary in `$GOPATH/bin`.
Configure
---------
First you'll need to configure rclone. As the object storage systems
have quite complicated authentication these are kept in a config file
`.rclone.conf` in your home directory by default. (You can use the
`--config` option to choose a different config file.)
The easiest way to make the config is to run rclone with the config
option, Eg
rclone config
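For illustration, the resulting config file holds one INI-style section per remote. The section name and values below are hypothetical placeholders, not output of the command above, and the exact keys depend on which storage system you choose:

```
[myswift]
type = swift
user = example-user
key = example-api-key
auth = https://auth.example.com/v1.0
```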
Usage
-----
Rclone syncs a directory tree from local to remote.
Its basic syntax is
Syntax: [options] subcommand <parameters> <parameters...>
See below for how to specify the source and destination paths.
Subcommands
-----------
rclone copy source:path dest:path
Copy the source to the destination. Doesn't transfer
unchanged files, testing first by modification time then by
MD5SUM. Doesn't delete files from the destination.
rclone sync source:path dest:path
Sync the source to the destination. Doesn't transfer
unchanged files, testing first by modification time then by
MD5SUM. Deletes any files that exist in source that don't
exist in destination. Since this can cause data loss, test
first with the `--dry-run` flag.
rclone ls [remote:path]
List all the objects in the path with sizes.
rclone lsl [remote:path]
List all the objects in the path with sizes and timestamps.
rclone lsd [remote:path]
List all directories/objects/buckets in the path.
rclone mkdir remote:path
Make the path if it doesn't already exist
rclone rmdir remote:path
Remove the path. Note that you can't remove a path with
objects in it, use purge for that.
rclone purge remote:path
Remove the path and all of its contents.
rclone check source:path dest:path
Checks the files in the source and destination match. It
compares sizes and MD5SUMs and prints a report of files which
don't match. It doesn't alter the source or destination.
rclone md5sum remote:path
Produces an md5sum file for all the objects in the path. This is in
the same format as the standard md5sum tool produces.
General options:
```
--checkers=8: Number of checkers to run in parallel.
--config="~/.rclone.conf": Config file.
-n, --dry-run=false: Do a trial run with no permanent changes
--modify-window=1ns: Max time diff to be considered the same
-q, --quiet=false: Print as little stuff as possible
--stats=1m0s: Interval to print stats
--transfers=4: Number of file transfers to run in parallel.
-v, --verbose=false: Print lots more stuff
```
Developer options:
```
--cpuprofile="": Write cpu profile to file
```
Local Filesystem
----------------
Paths are specified as normal filesystem paths, so
rclone sync /home/source /tmp/destination
Will sync `/home/source` to `/tmp/destination`
Swift / Rackspace cloudfiles / Memset Memstore
----------------------------------------------
Paths are specified as remote:container (or remote: for the `lsd`
command.) You may put subdirectories in too, eg
`remote:container/path/to/dir`.
So to copy a local directory to a swift container called backup:
rclone sync /home/source swift:backup
The modified time is stored as metadata on the object as
`X-Object-Meta-Mtime` as floating point since the epoch.
This is a de facto standard (used in the official python-swiftclient
amongst others) for storing the modification time (as read using
os.Stat) for an object.
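As a rough sketch (not rclone's actual implementation; the helper names are invented for illustration), producing and parsing such a floating-point epoch value could look like this in Go:

```
package main

import (
	"fmt"
	"strconv"
	"time"
)

// encodeMtime renders a modification time as seconds since the Unix
// epoch in decimal floating point, the format described above.
func encodeMtime(t time.Time) string {
	return strconv.FormatFloat(float64(t.UnixNano())/1e9, 'f', 9, 64)
}

// decodeMtime parses the floating point string back into a time.Time.
func decodeMtime(s string) (time.Time, error) {
	f, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	now := time.Now()
	s := encodeMtime(now)
	fmt.Println("X-Object-Meta-Mtime:", s)
	back, _ := decodeMtime(s)
	fmt.Println("round trip:", back)
}
```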
Amazon S3
---------
Paths are specified as remote:bucket. You may put subdirectories in
too, eg `remote:bucket/path/to/dir`.
So to copy a local directory to a s3 container called backup
rclone sync /home/source s3:backup
The modified time is stored as metadata on the object as
`X-Amz-Meta-Mtime` as floating point since the epoch.
Google drive
------------
Paths are specified as remote:path Drive paths may be as deep as required.
The initial setup for drive involves getting a token from Google drive
which you need to do in your browser. `rclone config` walks you
through it.
To copy a local directory to a drive directory called backup
rclone copy /home/source remote:backup
Google drive stores modification times accurate to 1 ms natively.
Dropbox
-------
Paths are specified as remote:path Dropbox paths may be as deep as required.
The initial setup for dropbox involves getting a token from Dropbox
which you need to do in your browser. `rclone config` walks you
through it.
To copy a local directory to a drive directory called backup
rclone copy /home/source dropbox:backup
Md5sums and timestamps in RFC3339 format accurate to 1ns are stored in
a Dropbox datastore called "rclone". Dropbox datastores are limited
to 100,000 rows so this is the maximum number of files rclone can
manage on Dropbox.
Google Cloud Storage
--------------------
Paths are specified as remote:path Google Cloud Storage paths may be
as deep as required.
The initial setup for Google Cloud Storage involves getting a token
from Google which you need to do in your browser. `rclone config`
walks you through it.
To copy a local directory to a google cloud storage directory called backup
rclone copy /home/source remote:backup
Google Cloud Storage stores md5sums natively and rclone stores
modification times as metadata on the object, under the "mtime" key in
RFC3339 format accurate to 1ns.
Single file copies
------------------
Rclone can copy single files
rclone src:path/to/file dst:path/dir
Or
rclone src:path/to/file dst:path/to/file
Note that you can't rename the file if you are copying from one file to another.
License
-------
This is free software under the terms of the MIT license (check the
COPYING file included in this package).
Bugs
----
* Drive: Sometimes get: Failed to copy: Upload failed: googleapi: Error 403: Rate Limit Exceeded
* quota is 100.0 requests/second/user
* Empty directories left behind with Local and Drive
* eg purging a local directory with subdirectories doesn't work
Changelog
---------
* v1.02 - 2014-07-19
* Implement Dropbox remote
* Implement Google Cloud Storage remote
* Verify Md5sums and Sizes after copies
* Remove times from "ls" command - lists sizes only
* Add add "lsl" - lists times and sizes
* Add "md5sum" command
* v1.01 - 2014-07-04
* drive: fix transfer of big files using up lots of memory
* v1.00 - 2014-07-03
* drive: fix whole second dates
* v0.99 - 2014-06-26
* Fix --dry-run not working
* Make compatible with go 1.1
* v0.98 - 2014-05-30
* s3: Treat missing Content-Length as 0 for some ceph installations
* rclonetest: add file with a space in
* v0.97 - 2014-05-05
* Implement copying of single files
* s3 & swift: support paths inside containers/buckets
* v0.96 - 2014-04-24
* drive: Fix multiple files of same name being created
* drive: Use o.Update and fs.Put to optimise transfers
* Add version number, -V and --version
* v0.95 - 2014-03-28
* rclone.org: website, docs and graphics
* drive: fix path parsing
* v0.94 - 2014-03-27
* Change remote format one last time
* GNU style flags
* v0.93 - 2014-03-16
* drive: store token in config file
* cross compile other versions
* set strict permissions on config file
* v0.92 - 2014-03-15
* Config fixes and --config option
* v0.91 - 2014-03-15
* Make config file
* v0.90 - 2013-06-27
* Project named rclone
* v0.00 - 2012-11-18
* Project started
Contact and support
-------------------
The project website is at:
* https://github.com/ncw/rclone
There you can file bug reports, ask for help or send pull requests.
Authors
-------
* Nick Craig-Wood <nick@craig-wood.com>
Contributors
------------
* Your name goes here!


@@ -1,34 +0,0 @@
Extra required software for making a release
* [github-release](https://github.com/aktau/github-release) for uploading packages
* pandoc for making the html and man pages
Making a release
* git status - make sure everything is checked in
* Check travis & appveyor builds are green
* make check
* make test
* make tag
* edit docs/content/changelog.md
* make doc
* git status - to check for new man pages - git add them
* # Update version number in snapcraft.yml
* git commit -a -v -m "Version v1.XX"
* make retag
* # Set the GOPATH for a current stable go compiler
* make cross
* make upload
* make upload_website
* git push --tags origin master
* git push --tags origin master:stable # update the stable branch for packager.io
* make upload_github
Early in the next release cycle update the vendored dependencies
* make update
* git status
* git add new files
* carry forward any patches to vendor stuff
* git commit -a -v
## Make version number go to -DEV and check in
Make the version number be just in a file?

File diff suppressed because it is too large


@@ -1,63 +0,0 @@
// Test AmazonCloudDrive filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package amazonclouddrive_test
import (
"testing"
"github.com/ncw/rclone/amazonclouddrive"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*amazonclouddrive.Object)(nil))
fstests.RemoteName = "TestAmazonCloudDrive:"
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }


@@ -1,20 +0,0 @@
version: "{build}"
os: Windows Server 2012 R2
clone_folder: c:\gopath\src\github.com\ncw\rclone
environment:
GOPATH: c:\gopath
install:
- echo %PATH%
- echo %GOPATH%
- go version
- go env
- go install
build_script:
- rmdir vendor\bazil.org\fuse /s /q
- go test -cpu=2 ./...
- go test -cpu=2 -short -race ./...


@@ -1,301 +0,0 @@
package api
import (
"fmt"
"path"
"strconv"
"strings"
"time"
"github.com/ncw/rclone/fs"
)
// Error describes a B2 error response
type Error struct {
Status int `json:"status"` // The numeric HTTP status code. Always matches the status in the HTTP response.
Code string `json:"code"` // A single-identifier code that identifies the error.
Message string `json:"message"` // A human-readable message, in English, saying what went wrong.
}
// Error satisfies the error interface
func (e *Error) Error() string {
return fmt.Sprintf("%s (%d %s)", e.Message, e.Status, e.Code)
}
// Fatal satisfies the Fatal interface
//
// It indicates which errors should be treated as fatal
func (e *Error) Fatal() bool {
return e.Status == 403 // 403 errors shouldn't be retried
}
var _ fs.Fataler = (*Error)(nil)
// Account describes a B2 account
type Account struct {
ID string `json:"accountId"` // The identifier for the account.
}
// Bucket describes a B2 bucket
type Bucket struct {
ID string `json:"bucketId"`
AccountID string `json:"accountId"`
Name string `json:"bucketName"`
Type string `json:"bucketType"`
}
// Timestamp is a UTC time when this file was uploaded. It is a base
// 10 number of milliseconds since midnight, January 1, 1970 UTC. This
// fits in a 64 bit integer such as the type "long" in the programming
// language Java. It is intended to be compatible with Java's time
// long. For example, it can be passed directly into the java call
// Date.setTime(long time).
type Timestamp time.Time
// MarshalJSON turns a Timestamp into JSON (in UTC)
func (t *Timestamp) MarshalJSON() (out []byte, err error) {
timestamp := (*time.Time)(t).UTC().UnixNano()
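// UnixNano()/1e6 truncates nanoseconds to whole milliseconds since the epoch, the base 10 millisecond format described above.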
return []byte(strconv.FormatInt(timestamp/1E6, 10)), nil
}
// UnmarshalJSON turns JSON into a Timestamp
func (t *Timestamp) UnmarshalJSON(data []byte) error {
timestamp, err := strconv.ParseInt(string(data), 10, 64)
if err != nil {
return err
}
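// Split the millisecond count into whole seconds and a nanosecond remainder for time.Unix.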
*t = Timestamp(time.Unix(timestamp/1E3, (timestamp%1E3)*1E6).UTC())
return nil
}
const versionFormat = "-v2006-01-02-150405.000"
// AddVersion adds the timestamp as a version string into the filename passed in.
func (t Timestamp) AddVersion(remote string) string {
ext := path.Ext(remote)
base := remote[:len(remote)-len(ext)]
s := (time.Time)(t).Format(versionFormat)
// Replace the '.' with a '-'
s = strings.Replace(s, ".", "-", -1)
return base + s + ext
}
// RemoveVersion removes the timestamp from a filename as a version string.
//
// It returns the new file name and a timestamp, or the old filename
// and a zero timestamp.
func RemoveVersion(remote string) (t Timestamp, newRemote string) {
newRemote = remote
ext := path.Ext(remote)
base := remote[:len(remote)-len(ext)]
if len(base) < len(versionFormat) {
return
}
versionStart := len(base) - len(versionFormat)
// Check it ends in -xxx
if base[len(base)-4] != '-' {
return
}
// Replace with .xxx for parsing
base = base[:len(base)-4] + "." + base[len(base)-3:]
newT, err := time.Parse(versionFormat, base[versionStart:])
if err != nil {
return
}
return Timestamp(newT), base[:versionStart] + ext
}
// IsZero returns true if the timestamp is uninitialised
func (t Timestamp) IsZero() bool {
return (time.Time)(t).IsZero()
}
// Equal compares two timestamps
//
// If either is zero then it returns false
func (t Timestamp) Equal(s Timestamp) bool {
if (time.Time)(t).IsZero() {
return false
}
if (time.Time)(s).IsZero() {
return false
}
return (time.Time)(t).Equal((time.Time)(s))
}
// File is info about a file
type File struct {
ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name.
Action string `json:"action"` // Either "upload" or "hide". "upload" means a file that was uploaded to B2 Cloud Storage. "hide" means a file version marking the file as hidden, so that it will not show up in b2_list_file_names. The result of b2_list_file_names will contain only "upload". The result of b2_list_file_versions may have both.
Size int64 `json:"size"` // The number of bytes in the file.
UploadTimestamp Timestamp `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded.
SHA1 string `json:"contentSha1"` // The SHA1 of the bytes stored in the file.
ContentType string `json:"contentType"` // The MIME type of the file.
Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
}
// AuthorizeAccountResponse is as returned from the b2_authorize_account call
type AuthorizeAccountResponse struct {
AccountID string `json:"accountId"` // The identifier for the account.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
DownloadURL string `json:"downloadUrl"` // The base URL to use for downloading files.
}
// ListBucketsResponse is as returned from the b2_list_buckets call
type ListBucketsResponse struct {
Buckets []Bucket `json:"buckets"`
}
// ListFileNamesRequest is as passed to b2_list_file_names or b2_list_file_versions
type ListFileNamesRequest struct {
BucketID string `json:"bucketId"` // required - The bucket to look for file names in.
StartFileName string `json:"startFileName,omitempty"` // optional - The first file name to return. If there is a file with this name, it will be returned in the list. If not, the first file name after this name will be returned.
MaxFileCount int `json:"maxFileCount,omitempty"` // optional - The maximum number of files to return from this call. The default value is 100, and the maximum allowed is 1000.
StartFileID string `json:"startFileId,omitempty"` // optional - What to pass in to startFileId for the next search to continue where this one left off.
Prefix string `json:"prefix,omitempty"` // optional - Files returned will be limited to those with the given prefix. Defaults to the empty string, which matches all files.
Delimiter string `json:"delimiter,omitempty"` // Files returned will be limited to those within the top folder, or any one subfolder. Defaults to NULL. Folder names will also be returned. The delimiter character will be used to "break" file names into folders.
}
// ListFileNamesResponse is as received from b2_list_file_names or b2_list_file_versions
type ListFileNamesResponse struct {
Files []File `json:"files"` // An array of objects, each one describing one file.
NextFileName *string `json:"nextFileName"` // What to pass in to startFileName for the next search to continue where this one left off, or null if there are no more files.
NextFileID *string `json:"nextFileId"` // What to pass in to startFileId for the next search to continue where this one left off, or null if there are no more files.
}
// GetUploadURLRequest is passed to b2_get_upload_url
type GetUploadURLRequest struct {
BucketID string `json:"bucketId"` // The ID of the bucket that you want to upload to.
}
// GetUploadURLResponse is received from b2_get_upload_url
type GetUploadURLResponse struct {
BucketID string `json:"bucketId"` // The unique ID of the bucket.
UploadURL string `json:"uploadUrl"` // The URL that can be used to upload files to this bucket, see b2_upload_file.
AuthorizationToken string `json:"authorizationToken"` // The authorizationToken that must be used when uploading files to this bucket, see b2_upload_file.
}
// FileInfo is received from b2_upload_file, b2_get_file_info and b2_finish_large_file
type FileInfo struct {
ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name.
Action string `json:"action"` // Either "upload" or "hide". "upload" means a file that was uploaded to B2 Cloud Storage. "hide" means a file version marking the file as hidden, so that it will not show up in b2_list_file_names. The result of b2_list_file_names will contain only "upload". The result of b2_list_file_versions may have both.
AccountID string `json:"accountId"` // Your account ID.
BucketID string `json:"bucketId"` // The bucket that the file is in.
Size int64 `json:"contentLength"` // The number of bytes stored in the file.
UploadTimestamp Timestamp `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded.
SHA1 string `json:"contentSha1"` // The SHA1 of the bytes stored in the file.
ContentType string `json:"contentType"` // The MIME type of the file.
Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
}
// CreateBucketRequest is used to create a bucket
type CreateBucketRequest struct {
AccountID string `json:"accountId"`
Name string `json:"bucketName"`
Type string `json:"bucketType"`
}
// DeleteBucketRequest is used to delete a bucket
type DeleteBucketRequest struct {
ID string `json:"bucketId"`
AccountID string `json:"accountId"`
}
// DeleteFileRequest is used to delete a file version
type DeleteFileRequest struct {
ID string `json:"fileId"` // The ID of the file, as returned by b2_upload_file, b2_list_file_names, or b2_list_file_versions.
Name string `json:"fileName"` // The name of this file.
}
// HideFileRequest is used to hide a file
type HideFileRequest struct {
BucketID string `json:"bucketId"` // The bucket containing the file to hide.
Name string `json:"fileName"` // The name of the file to hide.
}
// GetFileInfoRequest is used to return a FileInfo struct with b2_get_file_info
type GetFileInfoRequest struct {
ID string `json:"fileId"` // The ID of the file, as returned by b2_upload_file, b2_list_file_names, or b2_list_file_versions.
}
// StartLargeFileRequest (b2_start_large_file) Prepares for uploading the parts of a large file.
//
// If the original source of the file being uploaded has a last
// modified time concept, Backblaze recommends using
// src_last_modified_millis as the name, and a string holding the base
// 10 number of milliseconds since midnight, January 1, 1970
// UTC. This fits in a 64 bit integer such as the type "long" in the
// programming language Java. It is intended to be compatible with
// Java's time long. For example, it can be passed directly into the
// Java call Date.setTime(long time).
//
// If the caller knows the SHA1 of the entire large file being
// uploaded, Backblaze recommends using large_file_sha1 as the name,
// and a 40 byte hex string representing the SHA1.
//
// Example: { "src_last_modified_millis" : "1452802803026", "large_file_sha1" : "a3195dc1e7b46a2ff5da4b3c179175b75671e80d", "color": "blue" }
type StartLargeFileRequest struct {
BucketID string `json:"bucketId"` //The ID of the bucket that the file will go in.
Name string `json:"fileName"` // The name of the file. See Files for requirements on file names.
ContentType string `json:"contentType"` // The MIME type of the content of the file, which will be returned in the Content-Type header when downloading the file. Use the Content-Type b2/x-auto to automatically set the stored Content-Type post upload. In the case where a file extension is absent or the lookup fails, the Content-Type is set to application/octet-stream.
Info map[string]string `json:"fileInfo"` // A JSON object holding the name/value pairs for the custom file info.
}
// StartLargeFileResponse is the response to StartLargeFileRequest
type StartLargeFileResponse struct {
ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name.
AccountID string `json:"accountId"` // The identifier for the account.
BucketID string `json:"bucketId"` // The unique ID of the bucket.
ContentType string `json:"contentType"` // The MIME type of the file.
Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
UploadTimestamp Timestamp `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded.
}
// GetUploadPartURLRequest is passed to b2_get_upload_part_url
type GetUploadPartURLRequest struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
}
// GetUploadPartURLResponse is received from b2_get_upload_url
type GetUploadPartURLResponse struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
UploadURL string `json:"uploadUrl"` // The URL that can be used to upload files to this bucket, see b2_upload_part.
AuthorizationToken string `json:"authorizationToken"` // The authorizationToken that must be used when uploading files to this bucket, see b2_upload_part.
}
// UploadPartResponse is the response to b2_upload_part
type UploadPartResponse struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
PartNumber int64 `json:"partNumber"` // Which part this is (starting from 1)
Size int64 `json:"contentLength"` // The number of bytes stored in the file.
SHA1 string `json:"contentSha1"` // The SHA1 of the bytes stored in the file.
}
// FinishLargeFileRequest is passed to b2_finish_large_file
//
// The response is a FileInfo object (with extra AccountID and BucketID fields which we ignore).
//
// Large files do not have a SHA1 checksum. The value will always be "none".
type FinishLargeFileRequest struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
SHA1s []string `json:"partSha1Array"` // A JSON array of hex SHA1 checksums of the parts of the large file. This is a double-check that the right parts were uploaded in the right order, and that none were missed. Note that the part numbers start at 1, and the SHA1 of the part 1 is the first string in the array, at index 0.
}
// CancelLargeFileRequest is passed to b2_cancel_large_file
//
// The response is a CancelLargeFileResponse
type CancelLargeFileRequest struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
}
// CancelLargeFileResponse is the response to CancelLargeFileRequest
type CancelLargeFileResponse struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
Name string `json:"fileName"` // The name of this file.
AccountID string `json:"accountId"` // The identifier for the account.
BucketID string `json:"bucketId"` // The unique ID of the bucket.
}


@@ -1,87 +0,0 @@
package api_test
import (
"testing"
"time"
"github.com/ncw/rclone/b2/api"
"github.com/ncw/rclone/fstest"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var (
emptyT api.Timestamp
t0 = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123456789Z"))
t0r = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123000000Z"))
t1 = api.Timestamp(fstest.Time("2001-02-03T04:05:06.123000000Z"))
)
func TestTimestampMarshalJSON(t *testing.T) {
resB, err := t0.MarshalJSON()
res := string(resB)
require.NoError(t, err)
assert.Equal(t, "3661123", res)
resB, err = t1.MarshalJSON()
res = string(resB)
require.NoError(t, err)
assert.Equal(t, "981173106123", res)
}
func TestTimestampUnmarshalJSON(t *testing.T) {
var tActual api.Timestamp
err := tActual.UnmarshalJSON([]byte("981173106123"))
require.NoError(t, err)
assert.Equal(t, (time.Time)(t1), (time.Time)(tActual))
}
func TestTimestampAddVersion(t *testing.T) {
for _, test := range []struct {
t api.Timestamp
in string
expected string
}{
{t0, "potato.txt", "potato-v1970-01-01-010101-123.txt"},
{t1, "potato", "potato-v2001-02-03-040506-123"},
{t1, "", "-v2001-02-03-040506-123"},
} {
actual := test.t.AddVersion(test.in)
assert.Equal(t, test.expected, actual, test.in)
}
}
func TestTimestampRemoveVersion(t *testing.T) {
for _, test := range []struct {
in string
expectedT api.Timestamp
expectedRemote string
}{
{"potato.txt", emptyT, "potato.txt"},
{"potato-v1970-01-01-010101-123.txt", t0r, "potato.txt"},
{"potato-v2001-02-03-040506-123", t1, "potato"},
{"-v2001-02-03-040506-123", t1, ""},
{"potato-v2A01-02-03-040506-123", emptyT, "potato-v2A01-02-03-040506-123"},
{"potato-v2001-02-03-040506=123", emptyT, "potato-v2001-02-03-040506=123"},
} {
actualT, actualRemote := api.RemoveVersion(test.in)
assert.Equal(t, test.expectedT, actualT, test.in)
assert.Equal(t, test.expectedRemote, actualRemote, test.in)
}
}
func TestTimestampIsZero(t *testing.T) {
assert.True(t, emptyT.IsZero())
assert.False(t, t0.IsZero())
assert.False(t, t1.IsZero())
}
func TestTimestampEqual(t *testing.T) {
assert.False(t, emptyT.Equal(emptyT))
assert.False(t, t0.Equal(emptyT))
assert.False(t, emptyT.Equal(t0))
assert.False(t, t0.Equal(t1))
assert.False(t, t1.Equal(t0))
assert.True(t, t0.Equal(t0))
assert.True(t, t1.Equal(t1))
}
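
The two marshalling tests above pin down the wire format without stating it: a Timestamp serialises as whole milliseconds since the Unix epoch, truncated from nanoseconds. A standalone check of the 2001-02-03 fixture, assuming nothing beyond the standard library:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The tests above expect "981173106123" for 2001-02-03T04:05:06.123Z,
	// i.e. milliseconds since the Unix epoch.
	t := time.Date(2001, 2, 3, 4, 5, 6, 123*int(time.Millisecond), time.UTC)
	fmt.Println(t.UnixNano() / int64(time.Millisecond)) // 981173106123
}
```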

1358
b2/b2.go

File diff suppressed because it is too large

View File

@@ -1,170 +0,0 @@
package b2
import (
"testing"
"time"
"github.com/ncw/rclone/fstest"
)
// Test b2 string encoding
// https://www.backblaze.com/b2/docs/string_encoding.html
var encodeTest = []struct {
fullyEncoded string
minimallyEncoded string
plainText string
}{
{fullyEncoded: "%20", minimallyEncoded: "+", plainText: " "},
{fullyEncoded: "%21", minimallyEncoded: "!", plainText: "!"},
{fullyEncoded: "%22", minimallyEncoded: "%22", plainText: "\""},
{fullyEncoded: "%23", minimallyEncoded: "%23", plainText: "#"},
{fullyEncoded: "%24", minimallyEncoded: "$", plainText: "$"},
{fullyEncoded: "%25", minimallyEncoded: "%25", plainText: "%"},
{fullyEncoded: "%26", minimallyEncoded: "%26", plainText: "&"},
{fullyEncoded: "%27", minimallyEncoded: "'", plainText: "'"},
{fullyEncoded: "%28", minimallyEncoded: "(", plainText: "("},
{fullyEncoded: "%29", minimallyEncoded: ")", plainText: ")"},
{fullyEncoded: "%2A", minimallyEncoded: "*", plainText: "*"},
{fullyEncoded: "%2B", minimallyEncoded: "%2B", plainText: "+"},
{fullyEncoded: "%2C", minimallyEncoded: "%2C", plainText: ","},
{fullyEncoded: "%2D", minimallyEncoded: "-", plainText: "-"},
{fullyEncoded: "%2E", minimallyEncoded: ".", plainText: "."},
{fullyEncoded: "%2F", minimallyEncoded: "/", plainText: "/"},
{fullyEncoded: "%30", minimallyEncoded: "0", plainText: "0"},
{fullyEncoded: "%31", minimallyEncoded: "1", plainText: "1"},
{fullyEncoded: "%32", minimallyEncoded: "2", plainText: "2"},
{fullyEncoded: "%33", minimallyEncoded: "3", plainText: "3"},
{fullyEncoded: "%34", minimallyEncoded: "4", plainText: "4"},
{fullyEncoded: "%35", minimallyEncoded: "5", plainText: "5"},
{fullyEncoded: "%36", minimallyEncoded: "6", plainText: "6"},
{fullyEncoded: "%37", minimallyEncoded: "7", plainText: "7"},
{fullyEncoded: "%38", minimallyEncoded: "8", plainText: "8"},
{fullyEncoded: "%39", minimallyEncoded: "9", plainText: "9"},
{fullyEncoded: "%3A", minimallyEncoded: ":", plainText: ":"},
{fullyEncoded: "%3B", minimallyEncoded: ";", plainText: ";"},
{fullyEncoded: "%3C", minimallyEncoded: "%3C", plainText: "<"},
{fullyEncoded: "%3D", minimallyEncoded: "=", plainText: "="},
{fullyEncoded: "%3E", minimallyEncoded: "%3E", plainText: ">"},
{fullyEncoded: "%3F", minimallyEncoded: "%3F", plainText: "?"},
{fullyEncoded: "%40", minimallyEncoded: "@", plainText: "@"},
{fullyEncoded: "%41", minimallyEncoded: "A", plainText: "A"},
{fullyEncoded: "%42", minimallyEncoded: "B", plainText: "B"},
{fullyEncoded: "%43", minimallyEncoded: "C", plainText: "C"},
{fullyEncoded: "%44", minimallyEncoded: "D", plainText: "D"},
{fullyEncoded: "%45", minimallyEncoded: "E", plainText: "E"},
{fullyEncoded: "%46", minimallyEncoded: "F", plainText: "F"},
{fullyEncoded: "%47", minimallyEncoded: "G", plainText: "G"},
{fullyEncoded: "%48", minimallyEncoded: "H", plainText: "H"},
{fullyEncoded: "%49", minimallyEncoded: "I", plainText: "I"},
{fullyEncoded: "%4A", minimallyEncoded: "J", plainText: "J"},
{fullyEncoded: "%4B", minimallyEncoded: "K", plainText: "K"},
{fullyEncoded: "%4C", minimallyEncoded: "L", plainText: "L"},
{fullyEncoded: "%4D", minimallyEncoded: "M", plainText: "M"},
{fullyEncoded: "%4E", minimallyEncoded: "N", plainText: "N"},
{fullyEncoded: "%4F", minimallyEncoded: "O", plainText: "O"},
{fullyEncoded: "%50", minimallyEncoded: "P", plainText: "P"},
{fullyEncoded: "%51", minimallyEncoded: "Q", plainText: "Q"},
{fullyEncoded: "%52", minimallyEncoded: "R", plainText: "R"},
{fullyEncoded: "%53", minimallyEncoded: "S", plainText: "S"},
{fullyEncoded: "%54", minimallyEncoded: "T", plainText: "T"},
{fullyEncoded: "%55", minimallyEncoded: "U", plainText: "U"},
{fullyEncoded: "%56", minimallyEncoded: "V", plainText: "V"},
{fullyEncoded: "%57", minimallyEncoded: "W", plainText: "W"},
{fullyEncoded: "%58", minimallyEncoded: "X", plainText: "X"},
{fullyEncoded: "%59", minimallyEncoded: "Y", plainText: "Y"},
{fullyEncoded: "%5A", minimallyEncoded: "Z", plainText: "Z"},
{fullyEncoded: "%5B", minimallyEncoded: "%5B", plainText: "["},
{fullyEncoded: "%5C", minimallyEncoded: "%5C", plainText: "\\"},
{fullyEncoded: "%5D", minimallyEncoded: "%5D", plainText: "]"},
{fullyEncoded: "%5E", minimallyEncoded: "%5E", plainText: "^"},
{fullyEncoded: "%5F", minimallyEncoded: "_", plainText: "_"},
{fullyEncoded: "%60", minimallyEncoded: "%60", plainText: "`"},
{fullyEncoded: "%61", minimallyEncoded: "a", plainText: "a"},
{fullyEncoded: "%62", minimallyEncoded: "b", plainText: "b"},
{fullyEncoded: "%63", minimallyEncoded: "c", plainText: "c"},
{fullyEncoded: "%64", minimallyEncoded: "d", plainText: "d"},
{fullyEncoded: "%65", minimallyEncoded: "e", plainText: "e"},
{fullyEncoded: "%66", minimallyEncoded: "f", plainText: "f"},
{fullyEncoded: "%67", minimallyEncoded: "g", plainText: "g"},
{fullyEncoded: "%68", minimallyEncoded: "h", plainText: "h"},
{fullyEncoded: "%69", minimallyEncoded: "i", plainText: "i"},
{fullyEncoded: "%6A", minimallyEncoded: "j", plainText: "j"},
{fullyEncoded: "%6B", minimallyEncoded: "k", plainText: "k"},
{fullyEncoded: "%6C", minimallyEncoded: "l", plainText: "l"},
{fullyEncoded: "%6D", minimallyEncoded: "m", plainText: "m"},
{fullyEncoded: "%6E", minimallyEncoded: "n", plainText: "n"},
{fullyEncoded: "%6F", minimallyEncoded: "o", plainText: "o"},
{fullyEncoded: "%70", minimallyEncoded: "p", plainText: "p"},
{fullyEncoded: "%71", minimallyEncoded: "q", plainText: "q"},
{fullyEncoded: "%72", minimallyEncoded: "r", plainText: "r"},
{fullyEncoded: "%73", minimallyEncoded: "s", plainText: "s"},
{fullyEncoded: "%74", minimallyEncoded: "t", plainText: "t"},
{fullyEncoded: "%75", minimallyEncoded: "u", plainText: "u"},
{fullyEncoded: "%76", minimallyEncoded: "v", plainText: "v"},
{fullyEncoded: "%77", minimallyEncoded: "w", plainText: "w"},
{fullyEncoded: "%78", minimallyEncoded: "x", plainText: "x"},
{fullyEncoded: "%79", minimallyEncoded: "y", plainText: "y"},
{fullyEncoded: "%7A", minimallyEncoded: "z", plainText: "z"},
{fullyEncoded: "%7B", minimallyEncoded: "%7B", plainText: "{"},
{fullyEncoded: "%7C", minimallyEncoded: "%7C", plainText: "|"},
{fullyEncoded: "%7D", minimallyEncoded: "%7D", plainText: "}"},
{fullyEncoded: "%7E", minimallyEncoded: "~", plainText: "~"},
{fullyEncoded: "%7F", minimallyEncoded: "%7F", plainText: "\u007f"},
{fullyEncoded: "%E8%87%AA%E7%94%B1", minimallyEncoded: "%E8%87%AA%E7%94%B1", plainText: "自由"},
{fullyEncoded: "%F0%90%90%80", minimallyEncoded: "%F0%90%90%80", plainText: "𐐀"},
}
func TestUrlEncode(t *testing.T) {
for _, test := range encodeTest {
got := urlEncode(test.plainText)
if got != test.minimallyEncoded && got != test.fullyEncoded {
t.Errorf("urlEncode(%q) got %q wanted %q or %q", test.plainText, got, test.minimallyEncoded, test.fullyEncoded)
}
}
}
func TestTimeString(t *testing.T) {
for _, test := range []struct {
in time.Time
want string
}{
{fstest.Time("1970-01-01T00:00:00.000000000Z"), "0"},
{fstest.Time("2001-02-03T04:05:10.123123123Z"), "981173110123"},
{fstest.Time("2001-02-03T05:05:10.123123123+01:00"), "981173110123"},
} {
got := timeString(test.in)
if test.want != got {
t.Logf("%v: want %v got %v", test.in, test.want, got)
}
}
}
func TestParseTimeString(t *testing.T) {
for _, test := range []struct {
in string
want time.Time
wantError string
}{
{"0", fstest.Time("1970-01-01T00:00:00.000000000Z"), ""},
{"981173110123", fstest.Time("2001-02-03T04:05:10.123000000Z"), ""},
{"", time.Time{}, ""},
{"potato", time.Time{}, `strconv.ParseInt: parsing "potato": invalid syntax`},
} {
o := Object{}
err := o.parseTimeString(test.in)
got := o.modTime
var gotError string
if err != nil {
gotError = err.Error()
}
if test.want != got {
t.Logf("%v: want %v got %v", test.in, test.want, got)
}
if test.wantError != gotError {
t.Logf("%v: want error %v got error %v", test.in, test.wantError, gotError)
}
}
}
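
The `urlEncode` under test is presumably defined in the suppressed b2.go above. For illustration, here is a minimal standalone sketch that is consistent with the encoding table and therefore passes TestUrlEncode (which accepts either the fully or the minimally encoded form for each character); it is not claimed to be the actual b2.go implementation.

```go
package main

import (
	"bytes"
	"fmt"
)

// urlEncodeSketch percent-encodes every byte that is not an unreserved
// character or '/'. The test above accepts either the fully or the
// minimally encoded form, so this conservative encoding satisfies it
// without having to match b2.go exactly.
func urlEncodeSketch(s string) string {
	var out bytes.Buffer
	for i := 0; i < len(s); i++ {
		c := s[i]
		switch {
		case c >= 'A' && c <= 'Z', c >= 'a' && c <= 'z', c >= '0' && c <= '9',
			c == '-', c == '.', c == '_', c == '~', c == '/':
			out.WriteByte(c)
		default:
			fmt.Fprintf(&out, "%%%02X", c)
		}
	}
	return out.String()
}

func main() {
	fmt.Println(urlEncodeSketch("a file name+自由")) // a%20file%20name%2B%E8%87%AA%E7%94%B1
}
```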

View File

@@ -1,63 +0,0 @@
// Test B2 filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package b2_test
import (
"testing"
"github.com/ncw/rclone/b2"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*b2.Object)(nil))
fstests.RemoteName = "TestB2:"
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }

View File

@@ -1,302 +0,0 @@
// Upload large files for b2
//
// Docs - https://www.backblaze.com/b2/docs/large_files.html
package b2
import (
"bytes"
"crypto/sha1"
"fmt"
"io"
"sync"
"github.com/ncw/rclone/b2/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/rest"
"github.com/pkg/errors"
)
// largeUpload is used to control the upload of large files which need chunking
type largeUpload struct {
f *Fs // parent Fs
o *Object // object being uploaded
in io.Reader // read the data from here
id string // ID of the file being uploaded
size int64 // total size
parts int64 // calculated number of parts
sha1s []string // slice of SHA1s for each part
uploadMu sync.Mutex // lock for upload variable
uploads []*api.GetUploadPartURLResponse // result of get upload URL calls
}
// newLargeUpload starts an upload of object o from in with metadata in src
func (f *Fs) newLargeUpload(o *Object, in io.Reader, src fs.ObjectInfo) (up *largeUpload, err error) {
remote := o.remote
size := src.Size()
parts := size / int64(chunkSize)
if size%int64(chunkSize) != 0 {
parts++
}
if parts > maxParts {
return nil, errors.Errorf("%q too big (%d bytes) makes too many parts %d > %d - increase --b2-chunk-size", remote, size, parts, maxParts)
}
modTime := src.ModTime()
opts := rest.Opts{
Method: "POST",
Path: "/b2_start_large_file",
}
bucketID, err := f.getBucketID()
if err != nil {
return nil, err
}
var request = api.StartLargeFileRequest{
BucketID: bucketID,
Name: o.fs.root + remote,
ContentType: fs.MimeType(src),
Info: map[string]string{
timeKey: timeString(modTime),
},
}
// Set the SHA1 if known
if calculatedSha1, err := src.Hash(fs.HashSHA1); err == nil && calculatedSha1 != "" {
request.Info[sha1Key] = calculatedSha1
}
var response api.StartLargeFileResponse
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &request, &response)
return f.shouldRetry(resp, err)
})
if err != nil {
return nil, err
}
up = &largeUpload{
f: f,
o: o,
in: in,
id: response.ID,
size: size,
parts: parts,
sha1s: make([]string, parts),
}
return up, nil
}
// getUploadURL returns the upload info with the UploadURL and the AuthorizationToken
//
// This should be returned with returnUploadURL when finished
func (up *largeUpload) getUploadURL() (upload *api.GetUploadPartURLResponse, err error) {
up.uploadMu.Lock()
defer up.uploadMu.Unlock()
if len(up.uploads) == 0 {
opts := rest.Opts{
Method: "POST",
Path: "/b2_get_upload_part_url",
}
var request = api.GetUploadPartURLRequest{
ID: up.id,
}
err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(&opts, &request, &upload)
return up.f.shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to get upload URL")
}
} else {
upload, up.uploads = up.uploads[0], up.uploads[1:]
}
return upload, nil
}
// returnUploadURL returns the UploadURL to the cache
func (up *largeUpload) returnUploadURL(upload *api.GetUploadPartURLResponse) {
if upload == nil {
return
}
up.uploadMu.Lock()
up.uploads = append(up.uploads, upload)
up.uploadMu.Unlock()
}
// clearUploadURL clears the current UploadURL and the AuthorizationToken
func (up *largeUpload) clearUploadURL() {
up.uploadMu.Lock()
up.uploads = nil
up.uploadMu.Unlock()
}
// Transfer a chunk
func (up *largeUpload) transferChunk(part int64, body []byte) error {
calculatedSHA1 := fmt.Sprintf("%x", sha1.Sum(body))
up.sha1s[part-1] = calculatedSHA1
size := int64(len(body))
err := up.f.pacer.Call(func() (bool, error) {
fs.Debugf(up.o, "Sending chunk %d length %d", part, len(body))
// Get upload URL
upload, err := up.getUploadURL()
if err != nil {
return false, err
}
// Authorization
//
// An upload authorization token, from b2_get_upload_part_url.
//
// X-Bz-Part-Number
//
// A number from 1 to 10000. The parts uploaded for one file
// must have contiguous numbers, starting with 1.
//
// Content-Length
//
// The number of bytes in the file being uploaded. Note that
// this header is required; you cannot leave it out and just
// use chunked encoding. The minimum size of every part but
// the last one is 100MB.
//
// X-Bz-Content-Sha1
//
// The SHA1 checksum of this part of the file. B2 will
// check this when the part is uploaded, to make sure that the
// data arrived correctly. The same SHA1 checksum must be
// passed to b2_finish_large_file.
opts := rest.Opts{
Method: "POST",
Absolute: true,
Path: upload.UploadURL,
Body: fs.AccountPart(up.o, bytes.NewBuffer(body)),
ExtraHeaders: map[string]string{
"Authorization": upload.AuthorizationToken,
"X-Bz-Part-Number": fmt.Sprintf("%d", part),
sha1Header: calculatedSHA1,
},
ContentLength: &size,
}
var response api.UploadPartResponse
resp, err := up.f.srv.CallJSON(&opts, nil, &response)
retry, err := up.f.shouldRetry(resp, err)
// On retryable error clear PartUploadURL
if retry {
fs.Debugf(up.o, "Clearing part upload URL because of error: %v", err)
upload = nil
}
up.returnUploadURL(upload)
return retry, err
})
if err != nil {
fs.Debugf(up.o, "Error sending chunk %d: %v", part, err)
} else {
fs.Debugf(up.o, "Done sending chunk %d", part)
}
return err
}
// finish closes off the large upload
func (up *largeUpload) finish() error {
opts := rest.Opts{
Method: "POST",
Path: "/b2_finish_large_file",
}
var request = api.FinishLargeFileRequest{
ID: up.id,
SHA1s: up.sha1s,
}
var response api.FileInfo
err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(&opts, &request, &response)
return up.f.shouldRetry(resp, err)
})
if err != nil {
return err
}
return up.o.decodeMetaDataFileInfo(&response)
}
// cancel aborts the large upload
func (up *largeUpload) cancel() error {
opts := rest.Opts{
Method: "POST",
Path: "/b2_cancel_large_file",
}
var request = api.CancelLargeFileRequest{
ID: up.id,
}
var response api.CancelLargeFileResponse
err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(&opts, &request, &response)
return up.f.shouldRetry(resp, err)
})
return err
}
// Upload uploads the chunks from the input
func (up *largeUpload) Upload() error {
fs.Debugf(up.o, "Starting upload of large file in %d chunks (id %q)", up.parts, up.id)
remaining := up.size
errs := make(chan error, 1)
var wg sync.WaitGroup
var err error
fs.AccountByPart(up.o) // Cancel whole file accounting before reading
outer:
for part := int64(1); part <= up.parts; part++ {
// Check any errors
select {
case err = <-errs:
break outer
default:
}
reqSize := remaining
if reqSize >= int64(chunkSize) {
reqSize = int64(chunkSize)
}
// Get a block of memory
buf := up.f.getUploadBlock()[:reqSize]
// Read the chunk
_, err = io.ReadFull(up.in, buf)
if err != nil {
up.f.putUploadBlock(buf)
break outer
}
// Transfer the chunk
wg.Add(1)
go func(part int64, buf []byte) {
defer wg.Done()
defer up.f.putUploadBlock(buf)
err := up.transferChunk(part, buf)
if err != nil {
select {
case errs <- err:
default:
}
}
}(part, buf)
remaining -= reqSize
}
wg.Wait()
if err == nil {
select {
case err = <-errs:
default:
}
}
if err != nil {
fs.Debugf(up.o, "Cancelling large file upload due to error: %v", err)
cancelErr := up.cancel()
if cancelErr != nil {
fs.Errorf(up.o, "Failed to cancel large file upload: %v", cancelErr)
}
return err
}
// Check any errors
fs.Debugf(up.o, "Finishing large file upload")
return up.finish()
}
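
The comment block in transferChunk above lists the headers that b2_upload_part requires. The following is a standalone sketch of that request built directly with net/http, assuming an upload URL and token already obtained from b2_get_upload_part_url; it is illustrative only. In rclone itself the same request goes through the pacer and rest client shown above, which add retries and transfer accounting.

```go
package b2sketch

import (
	"bytes"
	"crypto/sha1"
	"fmt"
	"net/http"
)

// uploadPartSketch sends one part to the URL returned by
// b2_get_upload_part_url, setting the headers documented in
// transferChunk: Authorization, X-Bz-Part-Number, X-Bz-Content-Sha1,
// and an explicit Content-Length (chunked encoding is not accepted).
func uploadPartSketch(uploadURL, authToken string, partNumber int64, body []byte) error {
	req, err := http.NewRequest("POST", uploadURL, bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", authToken)
	req.Header.Set("X-Bz-Part-Number", fmt.Sprintf("%d", partNumber))
	req.Header.Set("X-Bz-Content-Sha1", fmt.Sprintf("%x", sha1.Sum(body)))
	req.ContentLength = int64(len(body))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("upload part %d failed: %s", partNumber, resp.Status)
	}
	return nil
}
```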

View File

@@ -1,162 +0,0 @@
// +build ignore
// Cross compile rclone - in go because I hate bash ;-)
package main
import (
"flag"
"log"
"os"
"os/exec"
"path/filepath"
"runtime"
"strings"
"sync"
"time"
)
var (
// Flags
debug = flag.Bool("d", false, "Print commands instead of running them.")
parallel = flag.Int("parallel", runtime.NumCPU(), "Number of commands to run in parallel.")
copyAs = flag.String("release", "", "Make copies of the releases with this name")
gitLog = flag.String("git-log", "", "git log to include as well")
)
// GOOS/GOARCH pairs we build for
var osarches = []string{
"windows/386",
"windows/amd64",
"darwin/386",
"darwin/amd64",
"linux/386",
"linux/amd64",
"linux/arm",
"linux/arm64",
"linux/mips",
"linux/mipsle",
"freebsd/386",
"freebsd/amd64",
"freebsd/arm",
"netbsd/386",
"netbsd/amd64",
"netbsd/arm",
"openbsd/386",
"openbsd/amd64",
"plan9/386",
"plan9/amd64",
"solaris/amd64",
}
// runEnv - run a shell command with env
func runEnv(args, env []string) {
if *debug {
args = append([]string{"echo"}, args...)
}
cmd := exec.Command(args[0], args[1:]...)
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
if env != nil {
cmd.Env = append(os.Environ(), env...)
}
err := cmd.Run()
if err != nil {
log.Fatalf("Failed to run %v: %v", args, err)
}
}
// run a shell command
func run(args ...string) {
runEnv(args, nil)
}
// build the binary in dir
func compileArch(version, goos, goarch, dir string) {
log.Printf("Compiling %s/%s", goos, goarch)
output := filepath.Join(dir, "rclone")
if goos == "windows" {
output += ".exe"
}
err := os.MkdirAll(dir, 0777)
if err != nil {
log.Fatalf("Failed to mkdir: %v", err)
}
args := []string{
"go", "build",
"--ldflags", "-s -X github.com/ncw/rclone/fs.Version=" + version,
"-i",
"-o", output,
"..",
}
env := []string{
"GOOS=" + goos,
"GOARCH=" + goarch,
"CGO_ENABLED=0",
}
runEnv(args, env)
// Now build the zip
run("cp", "-a", "../MANUAL.txt", filepath.Join(dir, "README.txt"))
run("cp", "-a", "../MANUAL.html", filepath.Join(dir, "README.html"))
run("cp", "-a", "../rclone.1", dir)
if *gitLog != "" {
run("cp", "-a", *gitLog, dir)
}
zip := dir + ".zip"
run("zip", "-r9", zip, dir)
if *copyAs != "" {
copyAsZip := strings.Replace(zip, "-"+version, "-"+*copyAs, 1)
run("ln", zip, copyAsZip)
}
run("rm", "-rf", dir)
log.Printf("Done compiling %s/%s", goos, goarch)
}
func compile(version string) {
start := time.Now()
wg := new(sync.WaitGroup)
run := make(chan func(), *parallel)
for i := 0; i < *parallel; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for f := range run {
f()
}
}()
}
for _, osarch := range osarches {
parts := strings.Split(osarch, "/")
if len(parts) != 2 {
log.Fatalf("Bad osarch %q", osarch)
}
goos, goarch := parts[0], parts[1]
userGoos := goos
if goos == "darwin" {
userGoos = "osx"
}
dir := filepath.Join("rclone-" + version + "-" + userGoos + "-" + goarch)
run <- func() {
compileArch(version, goos, goarch, dir)
}
}
close(run)
wg.Wait()
log.Printf("Compiled %d arches in %v", len(osarches), time.Since(start))
}
func main() {
flag.Parse()
args := flag.Args()
if len(args) != 1 {
log.Fatalf("Syntax: %s <version>", os.Args[0])
}
version := args[0]
run("rm", "-rf", "build")
run("mkdir", "build")
err := os.Chdir("build")
if err != nil {
log.Fatalf("Couldn't cd into build dir: %v", err)
}
compile(version)
}

View File

@@ -1,135 +0,0 @@
#!/usr/bin/python
"""
Make single page versions of the documentation for release and
conversion into man pages etc.
"""
import os
import re
from datetime import datetime
docpath = "docs/content"
outfile = "MANUAL.md"
# Order to add docs segments to make outfile
docs = [
"about.md",
"install.md",
"docs.md",
"remote_setup.md",
"filtering.md",
"overview.md",
"drive.md",
"s3.md",
"swift.md",
"dropbox.md",
"googlecloudstorage.md",
"amazonclouddrive.md",
"onedrive.md",
"hubic.md",
"b2.md",
"yandex.md",
"sftp.md",
"crypt.md",
"local.md",
"changelog.md",
"bugs.md",
"faq.md",
"licence.md",
"authors.md",
"contact.md",
]
# Order to put the commands in - any not on here will be in sorted order
commands_order = [
"rclone_config.md",
"rclone_copy.md",
"rclone_sync.md",
"rclone_move.md",
"rclone_delete.md",
"rclone_purge.md",
"rclone_mkdir.md",
"rclone_rmdir.md",
"rclone_check.md",
"rclone_ls.md",
"rclone_lsd.md",
"rclone_lsl.md",
"rclone_md5sum.md",
"rclone_sha1sum.md",
"rclone_size.md",
"rclone_version.md",
"rclone_cleanup.md",
"rclone_dedupe.md",
]
# Docs which aren't made into outfile
ignore_docs = [
"downloads.md",
"privacy.md",
"donate.md",
]
def read_doc(doc):
"""Read file as a string"""
path = os.path.join(docpath, doc)
with open(path) as fd:
contents = fd.read()
parts = contents.split("---\n", 2)
if len(parts) != 3:
raise ValueError("Couldn't find --- markers: found %d parts" % len(parts))
contents = parts[2].strip()+"\n\n"
# Remove icons
contents = re.sub(r'<i class="fa.*?</i>\s*', "", contents)
# Make [...](/links/) absolute
contents = re.sub(r'\((\/.*?\/)\)', r"(http://rclone.org\1)", contents)
return contents
def check_docs(docpath):
"""Check all the docs are in docpath"""
files = set(f for f in os.listdir(docpath) if f.endswith(".md"))
files -= set(ignore_docs)
docs_set = set(docs)
if files == docs_set:
return
print "Files on disk but not in docs variable: %s" % ", ".join(files - docs_set)
print "Files in docs variable but not on disk: %s" % ", ".join(docs_set - files)
raise ValueError("Missing files")
def read_command(command):
doc = read_doc("commands/"+command)
doc = re.sub(r"### Options inherited from parent commands.*$", "", doc, 0, re.S)
doc = doc.strip()+"\n"
return doc
def read_commands(docpath):
"""Reads the commands an makes them into a single page"""
files = set(f for f in os.listdir(docpath + "/commands") if f.endswith(".md"))
docs = []
for command in commands_order:
docs.append(read_command(command))
files.remove(command)
for command in sorted(files):
if command != "rclone.md":
docs.append(read_command(command))
return "\n".join(docs)
def main():
check_docs(docpath)
command_docs = read_commands(docpath)
with open(outfile, "w") as out:
out.write("""\
%% rclone(1) User Manual
%% Nick Craig-Wood
%% %s
""" % datetime.now().strftime("%b %d, %Y"))
for doc in docs:
contents = read_doc(doc)
# Substitute the commands into doc.md
if doc == "docs.md":
contents = re.sub(r"The main rclone commands.*?for the full list.", command_docs, contents, 0, re.S)
out.write(contents)
print "Written '%s'" % outfile
if __name__ == "__main__":
main()

View File

@@ -1,4 +0,0 @@
# Encrypted rclone configuration File
RCLONE_ENCRYPT_V0:
XIkAr3p+y+zai82cHFH8UoW1y1XTe6dpTzo/g4uSwqI2pfsnSSJ4JbAsRZ9nGVpx3NzROKEewlusVHNokiA4/nD4NbT+2DJrpMLg/OtLREICfuRk3tVWPKLGsmA+TLKU+IfQMO4LfrrCe2DF/lW0qA5Xu16E0Vn++jNhbwW2oB+JTkaGka8Ae3CyisM/3NUGnCOG/yb5wLH7ybUstNYPHsNFCiU1brFXQ4DNIbUFMmca+5S44vrOWvhp9QijQXlG7/JjwrkqbB/LK2gMJPTuhY2OW+4tRw1IoCXbWmwJXv5xmhPqanW92A==

View File

@@ -1,50 +0,0 @@
#!/bin/bash
#
# Upload a release
#
# Needs github-release from https://github.com/aktau/github-release
set -e
REPO="rclone"
if [ "$1" == "" ]; then
echo "Syntax: $0 Version"
exit 1
fi
VERSION="$1"
if [ "$GITHUB_USER" == "" ]; then
echo 1>&2 "Need GITHUB_USER environment variable"
exit 1
fi
if [ "$GITHUB_TOKEN" == "" ]; then
echo 1>&2 "Need GITHUB_TOKEN environment variable"
exit 1
fi
echo "Making release ${VERSION}"
github-release release \
--repo ${REPO} \
--tag ${VERSION} \
--name "rclone" \
--description "Rclone - rsync for cloud storage. Sync files to and from many cloud storage providers."
for build in `ls build | grep -v current`; do
echo "Uploading ${build}"
base="${build%.*}"
parts=(${base//-/ })
os=${parts[3]}
arch=${parts[4]}
github-release upload \
--repo ${REPO} \
--tag ${VERSION} \
--name "${build}" \
--file build/${build}
done
github-release info \
--repo ${REPO} \
--tag ${VERSION}
echo "Done"

View File

@@ -1,37 +0,0 @@
// Package all imports all the commands
package all
import (
// Active commands
_ "github.com/ncw/rclone/cmd"
_ "github.com/ncw/rclone/cmd/authorize"
_ "github.com/ncw/rclone/cmd/cat"
_ "github.com/ncw/rclone/cmd/check"
_ "github.com/ncw/rclone/cmd/cleanup"
_ "github.com/ncw/rclone/cmd/config"
_ "github.com/ncw/rclone/cmd/copy"
_ "github.com/ncw/rclone/cmd/copyto"
_ "github.com/ncw/rclone/cmd/cryptcheck"
_ "github.com/ncw/rclone/cmd/dedupe"
_ "github.com/ncw/rclone/cmd/delete"
_ "github.com/ncw/rclone/cmd/genautocomplete"
_ "github.com/ncw/rclone/cmd/gendocs"
_ "github.com/ncw/rclone/cmd/listremotes"
_ "github.com/ncw/rclone/cmd/ls"
_ "github.com/ncw/rclone/cmd/lsd"
_ "github.com/ncw/rclone/cmd/lsl"
_ "github.com/ncw/rclone/cmd/md5sum"
_ "github.com/ncw/rclone/cmd/memtest"
_ "github.com/ncw/rclone/cmd/mkdir"
_ "github.com/ncw/rclone/cmd/mount"
_ "github.com/ncw/rclone/cmd/move"
_ "github.com/ncw/rclone/cmd/moveto"
_ "github.com/ncw/rclone/cmd/obscure"
_ "github.com/ncw/rclone/cmd/purge"
_ "github.com/ncw/rclone/cmd/rmdir"
_ "github.com/ncw/rclone/cmd/rmdirs"
_ "github.com/ncw/rclone/cmd/sha1sum"
_ "github.com/ncw/rclone/cmd/size"
_ "github.com/ncw/rclone/cmd/sync"
_ "github.com/ncw/rclone/cmd/version"
)
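
These blank imports exist only for their init side effects: each command package registers itself on cmd.Root in its init function. Below is a minimal sketch of how a main package could wire this together; it assumes the cobra root command is executed directly via Execute, which may not match rclone's actual rclone.go.

```go
// A minimal sketch (not rclone's actual main): importing cmd/all pulls
// in every subcommand via the init-time cmd.Root.AddCommand calls, after
// which the cobra root command can simply be executed.
package main

import (
	"log"

	"github.com/ncw/rclone/cmd"
	_ "github.com/ncw/rclone/cmd/all"
)

func main() {
	if err := cmd.Root.Execute(); err != nil {
		log.Fatal(err)
	}
}
```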

View File

@@ -1,45 +0,0 @@
package cmd
// Atexit handling
import (
"os"
"os/signal"
"sync"
"github.com/ncw/rclone/fs"
)
var (
atExitFns []func()
atExitOnce sync.Once
atExitRegisterOnce sync.Once
)
// AtExit registers a function to be added on exit
func AtExit(fn func()) {
atExitFns = append(atExitFns, fn)
// Run AtExit handlers on SIGINT or SIGTERM so everything gets
// tidied up properly
atExitRegisterOnce.Do(func() {
go func() {
ch := make(chan os.Signal, 1)
signal.Notify(ch, os.Interrupt) // syscall.SIGINT, syscall.SIGTERM, syscall.SIGQUIT
sig := <-ch
fs.Infof(nil, "Signal received: %s", sig)
runAtExitFunctions()
fs.Infof(nil, "Exiting...")
os.Exit(0)
}()
})
}
// Runs all the AtExit functions if they haven't been run already
func runAtExitFunctions() {
atExitOnce.Do(func() {
for _, fn := range atExitFns {
fn()
}
})
}
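
A short usage sketch of the hook above: exampleRegisterCleanup is a hypothetical helper inside the cmd package (external callers would use cmd.AtExit), registering a cleanup that runs when Root finishes via runAtExitFunctions or when SIGINT is received. The CPU and memory profile handling in cmd.go further down uses AtExit the same way.

```go
package cmd

import (
	"os"

	"github.com/ncw/rclone/fs"
)

// exampleRegisterCleanup is an illustrative (hypothetical) helper showing
// how a command can use AtExit to tidy up a temporary directory on exit
// or on SIGINT.
func exampleRegisterCleanup(tmpDir string) {
	AtExit(func() {
		fs.Infof(nil, "Removing temporary directory %q", tmpDir)
		if err := os.RemoveAll(tmpDir); err != nil {
			fs.Errorf(nil, "Failed to remove %q: %v", tmpDir, err)
		}
	})
}
```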

View File

@@ -1,24 +0,0 @@
package authorize
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "authorize",
Short: `Remote authorization.`,
Long: `
Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 3, command, args)
fs.Authorize(args)
},
}

View File

@@ -1,80 +0,0 @@
package cat
import (
"io"
"io/ioutil"
"log"
"os"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
// Globals
var (
head = int64(0)
tail = int64(0)
offset = int64(0)
count = int64(-1)
discard = false
)
func init() {
cmd.Root.AddCommand(commandDefintion)
commandDefintion.Flags().Int64VarP(&head, "head", "", head, "Only print the first N characters.")
commandDefintion.Flags().Int64VarP(&tail, "tail", "", tail, "Only print the last N characters.")
commandDefintion.Flags().Int64VarP(&offset, "offset", "", offset, "Start printing at offset N (or from end if -ve).")
commandDefintion.Flags().Int64VarP(&count, "count", "", count, "Only print N characters.")
commandDefintion.Flags().BoolVarP(&discard, "discard", "", discard, "Discard the output instead of printing.")
}
var commandDefintion = &cobra.Command{
Use: "cat remote:path",
Short: `Concatenates any files and sends them to stdout.`,
Long: `
rclone cat sends any files to standard output.
You can use it like this to output a single file
rclone cat remote:path/to/file
Or like this to output any file in dir or subdirectories.
rclone cat remote:path/to/dir
Or like this to output any .txt files in dir or subdirectories.
rclone --include "*.txt" cat remote:path/to/dir
Use the --head flag to print characters only at the start, --tail for
the end and --offset and --count to print a section in the middle.
Note that if offset is negative it will count from the end, so
--offset -1 --count 1 is equivalent to --tail 1.
`,
Run: func(command *cobra.Command, args []string) {
usedOffset := offset != 0 || count >= 0
usedHead := head > 0
usedTail := tail > 0
if usedHead && usedTail || usedHead && usedOffset || usedTail && usedOffset {
log.Fatalf("Can only use one of --head, --tail or --offset with --count")
}
if head > 0 {
offset = 0
count = head
}
if tail > 0 {
offset = -tail
count = -1
}
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
var w io.Writer = os.Stdout
if discard {
w = ioutil.Discard
}
cmd.Run(false, false, command, func() error {
return fs.Cat(fsrc, w, offset, count)
})
},
}

View File

@@ -1,45 +0,0 @@
package check
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
// Globals
var (
download = false
)
func init() {
cmd.Root.AddCommand(commandDefintion)
commandDefintion.Flags().BoolVarP(&download, "download", "", download, "Check by downloading rather than with hash.")
}
var commandDefintion = &cobra.Command{
Use: "check source:path dest:path",
Short: `Checks the files in the source and destination match.`,
Long: `
Checks the files in the source and destination match. It compares
sizes and hashes (MD5 or SHA1) and logs a report of files which don't
match. It doesn't alter the source or destination.
If you supply the --size-only flag, it will only compare the sizes not
the hashes as well. Use this for a quick check.
If you supply the --download flag, it will download the data from
both remotes and check them against each other on the fly. This can
be useful for remotes that don't support hashes or if you really want
to check all the data.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsrc, fdst := cmd.NewFsSrcDst(args)
cmd.Run(false, false, command, func() error {
if download {
return fs.CheckDownload(fdst, fsrc)
}
return fs.Check(fdst, fsrc)
})
},
}

View File

@@ -1,27 +0,0 @@
package cleanup
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "cleanup remote:path",
Short: `Clean up the remote if possible`,
Long: `
Clean up the remote if possible. Empty the trash or delete old file
versions. Not supported by all remotes.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(true, false, command, func() error {
return fs.CleanUp(fsrc)
})
},
}

View File

@@ -1,378 +0,0 @@
// Package cmd implements the rclone command
//
// It is in a sub package so its internals can be re-used elsewhere
package cmd
// FIXME only attach the remote flags when using a remote???
// would probably mean bringing all the flags in to here? Or define some flagsets in fs...
import (
"fmt"
"log"
"os"
"path"
"regexp"
"runtime"
"runtime/pprof"
"strings"
"time"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"github.com/ncw/rclone/fs"
)
// Globals
var (
// Flags
cpuProfile = fs.StringP("cpuprofile", "", "", "Write cpu profile to file")
memProfile = fs.StringP("memprofile", "", "", "Write memory profile to file")
statsInterval = fs.DurationP("stats", "", time.Minute*1, "Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable)")
dataRateUnit = fs.StringP("stats-unit", "", "bytes", "Show data rate in stats as either 'bits' or 'bytes'/s")
version bool
retries = fs.IntP("retries", "", 3, "Retry operations this many times if they fail")
)
// Root is the main rclone command
var Root = &cobra.Command{
Use: "rclone",
Short: "Sync files and directories to and from local and remote object stores - " + fs.Version,
Long: `
Rclone is a command line program to sync files and directories to and
from various cloud storage systems, such as:
* Google Drive
* Amazon S3
* Openstack Swift / Rackspace cloud files / Memset Memstore
* Dropbox
* Google Cloud Storage
* Amazon Drive
* Microsoft One Drive
* Hubic
* Backblaze B2
* Yandex Disk
* The local filesystem
Features
* MD5/SHA1 hashes checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* Copy mode to just copy new/changed files
* Sync (one way) mode to make a directory identical
* Check mode to check for file hash equality
* Can sync to and from network, eg two different cloud accounts
See the home page for installation, usage, documentation, changelog
and configuration walkthroughs.
* http://rclone.org/
`,
PersistentPostRun: func(cmd *cobra.Command, args []string) {
fs.Debugf("rclone", "Version %q finishing with parameters %q", fs.Version, os.Args)
runAtExitFunctions()
},
}
// runRoot implements the main rclone command with no subcommands
func runRoot(cmd *cobra.Command, args []string) {
if version {
ShowVersion()
os.Exit(0)
} else {
_ = Root.Usage()
fmt.Fprintf(os.Stderr, "Command not found.\n")
os.Exit(1)
}
}
func init() {
Root.Run = runRoot
Root.Flags().BoolVarP(&version, "version", "V", false, "Print the version number")
cobra.OnInitialize(initConfig)
}
// ShowVersion prints the version to stdout
func ShowVersion() {
fmt.Printf("rclone %s\n", fs.Version)
}
// newFsFile creates a dst Fs from a name but may point to a file.
//
// It returns a string with the file name if it points to a file
func newFsFile(remote string) (fs.Fs, string) {
fsInfo, configName, fsPath, err := fs.ParseRemote(remote)
if err != nil {
fs.Stats.Error()
log.Fatalf("Failed to create file system for %q: %v", remote, err)
}
f, err := fsInfo.NewFs(configName, fsPath)
switch err {
case fs.ErrorIsFile:
return f, path.Base(fsPath)
case nil:
return f, ""
default:
fs.Stats.Error()
log.Fatalf("Failed to create file system for %q: %v", remote, err)
}
return nil, ""
}
// newFsSrc creates a src Fs from a name
//
// It returns a string with the file name if limiting to one file
//
// This can point to a file
func newFsSrc(remote string) (fs.Fs, string) {
f, fileName := newFsFile(remote)
if fileName != "" {
if !fs.Config.Filter.InActive() {
fs.Stats.Error()
log.Fatalf("Can't limit to single files when using filters: %v", remote)
}
// Limit transfers to this file
err := fs.Config.Filter.AddFile(fileName)
if err != nil {
fs.Stats.Error()
log.Fatalf("Failed to limit to single file %q: %v", remote, err)
}
// Set --no-traverse as only one file
fs.Config.NoTraverse = true
}
return f, fileName
}
// newFsDst creates a dst Fs from a name
//
// This must point to a directory
func newFsDst(remote string) fs.Fs {
f, err := fs.NewFs(remote)
if err != nil {
fs.Stats.Error()
log.Fatalf("Failed to create file system for %q: %v", remote, err)
}
return f
}
// NewFsSrcDst creates a new src and dst fs from the arguments
func NewFsSrcDst(args []string) (fs.Fs, fs.Fs) {
fsrc, _ := newFsSrc(args[0])
fdst := newFsDst(args[1])
fs.CalculateModifyWindow(fdst, fsrc)
return fsrc, fdst
}
// RemoteSplit splits a remote into a parent and a leaf
//
// if it returns leaf as an empty string then remote is a directory
//
// if it returns parent as an empty string then that means the current directory
//
// The returned values have the property that parent + leaf == remote
func RemoteSplit(remote string) (parent string, leaf string) {
// Split remote on :
i := strings.Index(remote, ":")
remoteName := ""
remotePath := remote
if i >= 0 {
remoteName = remote[:i+1]
remotePath = remote[i+1:]
} else if strings.HasSuffix(remotePath, "/") {
// if no : and ends with / must be directory
return remotePath, ""
}
// Construct new remote name without last segment
parent, leaf = path.Split(remotePath)
return remoteName + parent, leaf
}
// NewFsSrcDstFiles creates a new src and dst fs from the arguments
// If src is a file then srcFileName and dstFileName will be non-empty
func NewFsSrcDstFiles(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs, dstFileName string) {
fsrc, srcFileName = newFsSrc(args[0])
// If copying a file...
dstRemote := args[1]
if srcFileName != "" {
dstRemote, dstFileName = RemoteSplit(dstRemote)
if dstRemote == "" {
dstRemote = "."
}
if dstFileName == "" {
log.Fatalf("%q is a directory", args[1])
}
}
fdst = newFsDst(dstRemote)
fs.CalculateModifyWindow(fdst, fsrc)
return
}
// NewFsSrc creates a new src fs from the arguments
func NewFsSrc(args []string) fs.Fs {
fsrc, _ := newFsSrc(args[0])
fs.CalculateModifyWindow(fsrc)
return fsrc
}
// NewFsDst creates a new dst fs from the arguments
//
// Dst fs-es can't point to single files
func NewFsDst(args []string) fs.Fs {
fdst := newFsDst(args[0])
fs.CalculateModifyWindow(fdst)
return fdst
}
// ShowStats returns true if the user added a `--stats` flag to the command line.
//
// This is called by Run to override the default value of the
// showStats passed in.
func ShowStats() bool {
statsIntervalFlag := pflag.Lookup("stats")
return statsIntervalFlag != nil && statsIntervalFlag.Changed
}
// Run the function with stats and retries if required
func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
var err error
var stopStats chan struct{}
if !showStats && ShowStats() {
showStats = true
}
if showStats {
stopStats = StartStats()
}
for try := 1; try <= *retries; try++ {
err = f()
if !Retry || (err == nil && !fs.Stats.Errored()) {
if try > 1 {
fs.Errorf(nil, "Attempt %d/%d succeeded", try, *retries)
}
break
}
if fs.IsFatalError(err) {
fs.Errorf(nil, "Fatal error received - not attempting retries")
break
}
if fs.IsNoRetryError(err) {
fs.Errorf(nil, "Can't retry this error - not attempting retries")
break
}
if err != nil {
fs.Errorf(nil, "Attempt %d/%d failed with %d errors and: %v", try, *retries, fs.Stats.GetErrors(), err)
} else {
fs.Errorf(nil, "Attempt %d/%d failed with %d errors", try, *retries, fs.Stats.GetErrors())
}
if try < *retries {
fs.Stats.ResetErrors()
}
}
if showStats {
close(stopStats)
}
if err != nil {
log.Fatalf("Failed to %s: %v", cmd.Name(), err)
}
if showStats && (fs.Stats.Errored() || *statsInterval > 0) {
fs.Infof(nil, "%s", fs.Stats)
}
fs.Debugf(nil, "Go routines at exit %d\n", runtime.NumGoroutine())
if fs.Stats.Errored() {
os.Exit(1)
}
}
// CheckArgs checks there are enough arguments and prints a message if not
func CheckArgs(MinArgs, MaxArgs int, cmd *cobra.Command, args []string) {
if len(args) < MinArgs {
_ = cmd.Usage()
fmt.Fprintf(os.Stderr, "Command %s needs %d arguments mininum\n", cmd.Name(), MinArgs)
os.Exit(1)
} else if len(args) > MaxArgs {
_ = cmd.Usage()
fmt.Fprintf(os.Stderr, "Command %s needs %d arguments maximum\n", cmd.Name(), MaxArgs)
os.Exit(1)
}
}
// StartStats prints the stats every statsInterval
//
// It returns a channel which should be closed to stop the stats.
func StartStats() chan struct{} {
stopStats := make(chan struct{})
if *statsInterval > 0 {
go func() {
ticker := time.NewTicker(*statsInterval)
for {
select {
case <-ticker.C:
fs.Stats.Log()
case <-stopStats:
ticker.Stop()
return
}
}
}()
}
return stopStats
}
// initConfig is run by cobra after initialising the flags
func initConfig() {
// Start the logger
fs.InitLogging()
// Load the rest of the config now we have started the logger
fs.LoadConfig()
// Write the args for debug purposes
fs.Debugf("rclone", "Version %q starting with parameters %q", fs.Version, os.Args)
// Setup CPU profiling if desired
if *cpuProfile != "" {
fs.Infof(nil, "Creating CPU profile %q\n", *cpuProfile)
f, err := os.Create(*cpuProfile)
if err != nil {
fs.Stats.Error()
log.Fatal(err)
}
err = pprof.StartCPUProfile(f)
if err != nil {
fs.Stats.Error()
log.Fatal(err)
}
AtExit(func() {
pprof.StopCPUProfile()
})
}
// Setup memory profiling if desired
if *memProfile != "" {
AtExit(func() {
fs.Infof(nil, "Saving Memory profile %q\n", *memProfile)
f, err := os.Create(*memProfile)
if err != nil {
fs.Stats.Error()
log.Fatal(err)
}
err = pprof.WriteHeapProfile(f)
if err != nil {
fs.Stats.Error()
log.Fatal(err)
}
err = f.Close()
if err != nil {
fs.Stats.Error()
log.Fatal(err)
}
})
}
if m, _ := regexp.MatchString("^(bits|bytes)$", *dataRateUnit); !m {
fs.Errorf(nil, "Invalid unit passed to --stats-unit. Defaulting to bytes.")
fs.Config.DataRateUnit = "bytes"
} else {
fs.Config.DataRateUnit = *dataRateUnit
}
}

View File

@@ -1,35 +0,0 @@
package cmd
import (
"fmt"
"testing"
"github.com/stretchr/testify/assert"
)
func TestRemoteSplit(t *testing.T) {
for _, test := range []struct {
remote, wantParent, wantLeaf string
}{
{"", "", ""},
{"remote:", "remote:", ""},
{"remote:potato", "remote:", "potato"},
{"remote:/", "remote:/", ""},
{"remote:/potato", "remote:/", "potato"},
{"remote:/potato/potato", "remote:/potato/", "potato"},
{"remote:potato/sausage", "remote:potato/", "sausage"},
{"/", "/", ""},
{"/root", "/", "root"},
{"/a/b", "/a/", "b"},
{"root", "", "root"},
{"a/b", "a/", "b"},
{"root/", "root/", ""},
{"a/b/", "a/b/", ""},
} {
gotParent, gotLeaf := RemoteSplit(test.remote)
assert.Equal(t, test.wantParent, gotParent, test.remote)
assert.Equal(t, test.wantLeaf, gotLeaf, test.remote)
assert.Equal(t, test.remote, gotParent+gotLeaf, fmt.Sprintf("%s: %q + %q != %q", test.remote, gotParent, gotLeaf, test.remote))
}
}

View File

@@ -1,20 +0,0 @@
package config
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "config",
Short: `Enter an interactive configuration session.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 0, command, args)
fs.EditConfig()
},
}

View File

@@ -1,63 +0,0 @@
package copy
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "copy source:path dest:path",
Short: `Copy files from source to dest, skipping already copied`,
Long: `
Copy the source to the destination. Doesn't transfer
unchanged files, testing by size and modification time or
MD5SUM. Doesn't delete files from the destination.
Note that it is always the contents of the directory that is synced,
not the directory so when source:path is a directory, it's the
contents of source:path that are copied, not the directory name and
contents.
If dest:path doesn't exist, it is created and the source:path contents
go there.
For example
rclone copy source:sourcepath dest:destpath
Let's say there are two files in sourcepath
sourcepath/one.txt
sourcepath/two.txt
This copies them to
destpath/one.txt
destpath/two.txt
Not to
destpath/sourcepath/one.txt
destpath/sourcepath/two.txt
If you are familiar with ` + "`rsync`" + `, rclone always works as if you had
written a trailing / - meaning "copy the contents of this directory".
This applies to all commands and whether you are talking about the
source or destination.
See the ` + "`--no-traverse`" + ` option for controlling whether rclone lists
the destination directory or not.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsrc, fdst := cmd.NewFsSrcDst(args)
cmd.Run(true, true, command, func() error {
return fs.CopyDir(fdst, fsrc)
})
},
}

View File

@@ -1,53 +0,0 @@
package copyto
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "copyto source:path dest:path",
Short: `Copy files from source to dest, skipping already copied`,
Long: `
If source:path is a file or directory then it copies it to a file or
directory named dest:path.
This can be used to upload single files to other than their current
name. If the source is a directory then it acts exactly like the copy
command.
So
rclone copyto src dst
where src and dst are rclone paths, either remote:path or
/path/to/local or C:\windows\path\if\on\windows.
This will:
if src is file
copy it to dst, overwriting an existing file if it exists
if src is directory
copy it to dst, overwriting existing files if they exist
see copy command for full details
This doesn't transfer unchanged files, testing by size and
modification time or MD5SUM. It doesn't delete files from the
destination.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsrc, srcFileName, fdst, dstFileName := cmd.NewFsSrcDstFiles(args)
cmd.Run(true, true, command, func() error {
if srcFileName == "" {
return fs.CopyDir(fdst, fsrc)
}
return fs.CopyFile(fdst, fsrc, dstFileName, srcFileName)
})
},
}

View File

@@ -1,101 +0,0 @@
package cryptcheck
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/crypt"
"github.com/ncw/rclone/fs"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "cryptcheck remote:path cryptedremote:path",
Short: `Cryptcheck checks the integrity of a crypted remote.`,
Long: `
rclone cryptcheck checks a remote against a crypted remote. This is
the equivalent of running rclone check, but able to check the
checksums of the crypted remote.
For it to work the underlying remote of the cryptedremote must support
some kind of checksum.
It works by reading the nonce from each file on the cryptedremote: and
using that to encrypt each file on the remote:. It then checks the
checksum of the underlying file on the cryptedremote: against the
checksum of the file it has just encrypted.
Use it like this
rclone cryptcheck /path/to/files encryptedremote:path
You can use it like this also, but that will involve downloading all
the files in remote:path.
rclone cryptcheck remote:path encryptedremote:path
After it has run it will log the status of the encryptedremote:.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsrc, fdst := cmd.NewFsSrcDst(args)
cmd.Run(false, true, command, func() error {
return cryptCheck(fdst, fsrc)
})
},
}
// cryptCheck checks the integrity of a crypted remote
func cryptCheck(fdst, fsrc fs.Fs) error {
// Check to see fcrypt is a crypt
fcrypt, ok := fdst.(*crypt.Fs)
if !ok {
return errors.Errorf("%s:%s is not a crypt remote", fdst.Name(), fdst.Root())
}
// Find a hash to use
funderlying := fcrypt.UnWrap()
hashType := funderlying.Hashes().GetOne()
if hashType == fs.HashNone {
return errors.Errorf("%s:%s does not support any hashes", funderlying.Name(), funderlying.Root())
}
fs.Infof(nil, "Using %v for hash comparisons", hashType)
// checkIdentical checks to see if dst and src are identical
//
// it returns true if differences were found
// it also returns whether it couldn't be hashed
checkIdentical := func(dst, src fs.Object) (differ bool, noHash bool) {
cryptDst := dst.(*crypt.Object)
underlyingDst := cryptDst.UnWrap()
underlyingHash, err := underlyingDst.Hash(hashType)
if err != nil {
fs.Stats.Error()
fs.Errorf(dst, "Error reading hash from underlying %v: %v", underlyingDst, err)
return true, false
}
if underlyingHash == "" {
return false, true
}
cryptHash, err := fcrypt.ComputeHash(cryptDst, src, hashType)
if err != nil {
fs.Stats.Error()
fs.Errorf(dst, "Error computing hash: %v", err)
return true, false
}
if cryptHash == "" {
return false, true
}
if cryptHash != underlyingHash {
fs.Stats.Error()
fs.Errorf(src, "hashes differ (%s:%s) %q vs (%s:%s) %q", fdst.Name(), fdst.Root(), cryptHash, fsrc.Name(), fsrc.Root(), underlyingHash)
return true, false
}
fs.Debugf(src, "OK")
return false, false
}
return fs.CheckFn(fcrypt, fsrc, checkIdentical)
}

View File

@@ -1,113 +0,0 @@
package dedupe
import (
"log"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
var (
dedupeMode = fs.DeduplicateInteractive
)
func init() {
cmd.Root.AddCommand(commandDefintion)
commandDefintion.Flags().VarP(&dedupeMode, "dedupe-mode", "", "Dedupe mode interactive|skip|first|newest|oldest|rename.")
}
var commandDefintion = &cobra.Command{
Use: "dedupe [mode] remote:path",
Short: `Interactively find duplicate files and delete/rename them.`,
Long: `
By default ` + "`" + `dedup` + "`" + ` interactively finds duplicate files and offers to
delete all but one or rename them to be different. Only useful with
Google Drive which can have duplicate file names.
The ` + "`" + `dedupe` + "`" + ` command will delete all but one of any identical (same
md5sum) files it finds without confirmation. This means that for most
duplicated files the ` + "`" + `dedupe` + "`" + ` command will not be interactive. You
can use ` + "`" + `--dry-run` + "`" + ` to see what would happen without doing anything.
Here is an example run.
Before - with duplicates
$ rclone lsl drive:dupes
6048320 2016-03-05 16:23:16.798000000 one.txt
6048320 2016-03-05 16:23:11.775000000 one.txt
564374 2016-03-05 16:23:06.731000000 one.txt
6048320 2016-03-05 16:18:26.092000000 one.txt
6048320 2016-03-05 16:22:46.185000000 two.txt
1744073 2016-03-05 16:22:38.104000000 two.txt
564374 2016-03-05 16:22:52.118000000 two.txt
Now the ` + "`" + `dedupe` + "`" + ` session
$ rclone dedupe drive:dupes
2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
one.txt: Found 4 duplicates - deleting identical copies
one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36")
one.txt: 2 duplicates remain
1: 6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
2: 564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> k
Enter the number of the file to keep> 1
one.txt: Deleted 1 extra copies
two.txt: Found 3 duplicates - deleting identical copies
two.txt: 3 duplicates remain
1: 564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
2: 6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
3: 1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> r
two-1.txt: renamed from: two.txt
two-2.txt: renamed from: two.txt
two-3.txt: renamed from: two.txt
The result being
$ rclone lsl drive:dupes
6048320 2016-03-05 16:23:16.798000000 one.txt
564374 2016-03-05 16:22:52.118000000 two-1.txt
6048320 2016-03-05 16:22:46.185000000 two-2.txt
1744073 2016-03-05 16:22:38.104000000 two-3.txt
Dedupe can be run non interactively using the ` + "`" + `--dedupe-mode` + "`" + ` flag or by using an extra parameter with the same value
* ` + "`" + `--dedupe-mode interactive` + "`" + ` - interactive as above.
* ` + "`" + `--dedupe-mode skip` + "`" + ` - removes identical files then skips anything left.
* ` + "`" + `--dedupe-mode first` + "`" + ` - removes identical files then keeps the first one.
* ` + "`" + `--dedupe-mode newest` + "`" + ` - removes identical files then keeps the newest one.
* ` + "`" + `--dedupe-mode oldest` + "`" + ` - removes identical files then keeps the oldest one.
* ` + "`" + `--dedupe-mode rename` + "`" + ` - removes identical files then renames the rest to be different.
For example to rename all the identically named photos in your Google Photos directory, do
rclone dedupe --dedupe-mode rename "drive:Google Photos"
Or
rclone dedupe rename "drive:Google Photos"
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 2, command, args)
if len(args) > 1 {
err := dedupeMode.Set(args[0])
if err != nil {
log.Fatal(err)
}
args = args[1:]
}
fdst := cmd.NewFsSrc(args)
cmd.Run(false, false, command, func() error {
return fs.Deduplicate(fdst, dedupeMode)
})
},
}

View File

@@ -1,41 +0,0 @@
package delete
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "delete remote:path",
Short: `Remove the contents of path.`,
Long: `
Remove the contents of path. Unlike ` + "`" + `purge` + "`" + ` it obeys include/exclude
filters so it can be used to selectively delete files.
Eg delete all files bigger than 100MBytes
Check what would be deleted first (use either)
rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path
Then delete
rclone --min-size 100M delete remote:path
That reads "delete everything with a minimum size of 100 MB", hence
delete all files bigger than 100MBytes.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(true, false, command, func() error {
return fs.Delete(fsrc)
})
},
}

View File

@@ -1,44 +0,0 @@
package genautocomplete
import (
"log"
"github.com/ncw/rclone/cmd"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "genautocomplete [output_file]",
Short: `Output bash completion script for rclone.`,
Long: `
Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default so will
probably need to be run with sudo or as root, eg
sudo rclone genautocomplete
Log out and log in again to use the autocompletion scripts, or source
them directly
. /etc/bash_completion
If you supply a command line argument the script will be written
there.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 1, command, args)
out := "/etc/bash_completion.d/rclone"
if len(args) > 0 {
out = args[0]
}
err := cmd.Root.GenBashCompletionFile(out)
if err != nil {
log.Fatal(err)
}
},
}

View File

@@ -1,55 +0,0 @@
package gendocs
import (
"fmt"
"os"
"path"
"path/filepath"
"strings"
"time"
"github.com/ncw/rclone/cmd"
"github.com/spf13/cobra"
"github.com/spf13/cobra/doc"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
const gendocFrontmatterTemplate = `---
date: %s
title: "%s"
slug: %s
url: %s
---
`
var commandDefintion = &cobra.Command{
Use: "gendocs output_directory",
Short: `Output markdown docs for rclone to the directory supplied.`,
Long: `
This produces markdown docs for the rclone commands to the directory
supplied. These are in a format suitable for hugo to render into the
rclone.org website.`,
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)
out := args[0]
err := os.MkdirAll(out, 0777)
if err != nil {
return err
}
now := time.Now().Format(time.RFC3339)
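// prepender generates the hugo front matter for each command page and linkHandler rewrites cross references to /commands/ URLs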
prepender := func(filename string) string {
name := filepath.Base(filename)
base := strings.TrimSuffix(name, path.Ext(name))
url := "/commands/" + strings.ToLower(base) + "/"
return fmt.Sprintf(gendocFrontmatterTemplate, now, strings.Replace(base, "_", " ", -1), base, url)
}
linkHandler := func(name string) string {
base := strings.TrimSuffix(name, path.Ext(name))
return "/commands/" + strings.ToLower(base) + "/"
}
return doc.GenMarkdownTreeCustom(cmd.Root, out, prepender, linkHandler)
},
}

View File

@@ -1,49 +0,0 @@
package ls
import (
"fmt"
"sort"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
// Globals
var (
listLong bool
)
func init() {
cmd.Root.AddCommand(commandDefintion)
commandDefintion.Flags().BoolVarP(&listLong, "long", "l", listLong, "Show the type as well as names.")
}
var commandDefintion = &cobra.Command{
Use: "listremotes",
Short: `List all the remotes in the config file.`,
Long: `
rclone listremotes lists all the available remotes from the config file.
When used with the -l flag it lists the types too.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 0, command, args)
remotes := fs.ConfigFileSections()
sort.Strings(remotes)
maxlen := 1
for _, remote := range remotes {
if len(remote) > maxlen {
maxlen = len(remote)
}
}
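// Print each remote, padding the name so the types line up when -l is used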
for _, remote := range remotes {
if listLong {
remoteType := fs.ConfigFileGet(remote, "type", "UNKNOWN")
fmt.Printf("%-*s %s\n", maxlen+1, remote+":", remoteType)
} else {
fmt.Printf("%s:\n", remote)
}
}
},
}

View File

@@ -1,25 +0,0 @@
package ls
import (
"os"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "ls remote:path",
Short: `List all the objects in the path with size and path.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, false, command, func() error {
return fs.List(fsrc, os.Stdout)
})
},
}

View File

@@ -1,25 +0,0 @@
package lsd
import (
"os"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "lsd remote:path",
Short: `List all directories/containers/buckets in the path.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, false, command, func() error {
return fs.ListDir(fsrc, os.Stdout)
})
},
}

View File

@@ -1,25 +0,0 @@
package lsl
import (
"os"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "lsl remote:path",
Short: `List all the objects in the path with modification time, size and path.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, false, command, func() error {
return fs.ListLong(fsrc, os.Stdout)
})
},
}

View File

@@ -1,29 +0,0 @@
package md5sum
import (
"os"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "md5sum remote:path",
Short: `Produces an md5sum file for all the objects in the path.`,
Long: `
Produces an md5sum file for all the objects in the path. This
is in the same format as the standard md5sum tool produces.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, false, command, func() error {
return fs.Md5sum(fsrc, os.Stdout)
})
},
}

View File

@@ -1,49 +0,0 @@
package memtest
import (
"runtime"
"sync"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "memtest remote:path",
Short: `Load all the objects at remote:path and report memory stats.`,
Hidden: true,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, false, command, func() error {
objects, _, err := fs.Count(fsrc)
if err != nil {
return err
}
objs := make([]fs.Object, 0, objects)
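// Compare memory stats before and after holding every object to estimate the per-object cost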
var before, after runtime.MemStats
runtime.GC()
runtime.ReadMemStats(&before)
var mu sync.Mutex
err = fs.ListFn(fsrc, func(o fs.Object) {
mu.Lock()
objs = append(objs, o)
mu.Unlock()
})
if err != nil {
return err
}
runtime.GC()
runtime.ReadMemStats(&after)
usedMemory := after.Alloc - before.Alloc
fs.Logf(nil, "%d objects took %d bytes, %.1f bytes/object", len(objs), usedMemory, float64(usedMemory)/float64(len(objs)))
fs.Logf(nil, "System memory changed from %d to %d bytes a change of %d bytes", before.Sys, after.Sys, after.Sys-before.Sys)
return nil
})
},
}

View File

@@ -1,23 +0,0 @@
package mkdir
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "mkdir remote:path",
Short: `Make the path if it doesn't already exist.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fdst := cmd.NewFsDst(args)
cmd.Run(true, false, command, func() error {
return fs.Mkdir(fdst, "")
})
},
}

View File

@@ -1,62 +0,0 @@
// +build linux darwin freebsd
package mount
import (
"time"
"github.com/ncw/rclone/fs"
)
// info to create a new object
type createInfo struct {
f fs.Fs
remote string
}
func newCreateInfo(f fs.Fs, remote string) *createInfo {
return &createInfo{
f: f,
remote: remote,
}
}
// Fs returns read only access to the Fs that this object is part of
func (ci *createInfo) Fs() fs.Info {
return ci.f
}
// String returns the remote path
func (ci *createInfo) String() string {
return ci.remote
}
// Remote returns the remote path
func (ci *createInfo) Remote() string {
return ci.remote
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (ci *createInfo) Hash(fs.HashType) (string, error) {
return "", fs.ErrHashUnsupported
}
// ModTime returns the modification date of the file
// It should return a best guess if one isn't available
func (ci *createInfo) ModTime() time.Time {
return time.Now()
}
// Size returns the size of the file
func (ci *createInfo) Size() int64 {
// FIXME this means this won't work with all remotes...
return 0
}
// Storable says whether this object can be stored
func (ci *createInfo) Storable() bool {
return true
}
var _ fs.ObjectInfo = (*createInfo)(nil)

View File

@@ -1,453 +0,0 @@
// +build linux darwin freebsd
package mount
import (
"os"
"path"
"sync"
"time"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/ncw/rclone/fs"
"github.com/pkg/errors"
"golang.org/x/net/context"
)
// DirEntry describes the contents of a directory entry
//
// It can be a file or a directory
//
// node may be nil, but o may not
type DirEntry struct {
o fs.BasicInfo
node fusefs.Node
}
// Dir represents a directory entry
type Dir struct {
f fs.Fs
path string
modTime time.Time
mu sync.RWMutex // protects the following
read time.Time // time directory entry last read
items map[string]*DirEntry
}
func newDir(f fs.Fs, fsDir *fs.Dir) *Dir {
return &Dir{
f: f,
path: fsDir.Name,
modTime: fsDir.When,
}
}
// rename should be called after the directory is renamed
//
// Reset the directory to new state, discarding all the objects and
// reading everything again
func (d *Dir) rename(newParent *Dir, fsDir *fs.Dir) {
d.path = fsDir.Name
d.modTime = fsDir.When
d.items = nil
d.read = time.Time{}
}
// addObject adds a new object or directory to the directory
//
// note that we add new objects rather than updating old ones
func (d *Dir) addObject(o fs.BasicInfo, node fusefs.Node) *DirEntry {
item := &DirEntry{
o: o,
node: node,
}
d.mu.Lock()
d.items[path.Base(o.Remote())] = item
d.mu.Unlock()
return item
}
// delObject removes an object from the directory
func (d *Dir) delObject(leaf string) {
d.mu.Lock()
delete(d.items, leaf)
d.mu.Unlock()
}
// read the directory
func (d *Dir) readDir() error {
d.mu.Lock()
defer d.mu.Unlock()
when := time.Now()
if d.read.IsZero() {
fs.Debugf(d.path, "Reading directory")
} else {
age := when.Sub(d.read)
if age < dirCacheTime {
return nil
}
fs.Debugf(d.path, "Re-reading directory (%v old)", age)
}
entries, err := fs.ListDirSorted(d.f, false, d.path)
if err == fs.ErrorDirNotFound {
// We treat directory not found as empty because we
// create directories on the fly
} else if err != nil {
return err
}
// NB when we re-read a directory after its cache has expired
// we drop the old files which should lead to correct
// behaviour but may not be very efficient.
// Keep a note of the previous contents of the directory
oldItems := d.items
// Cache the items by name
d.items = make(map[string]*DirEntry, len(entries))
for _, entry := range entries {
switch item := entry.(type) {
case fs.Object:
obj := item
name := path.Base(obj.Remote())
d.items[name] = &DirEntry{
o: obj,
node: nil,
}
case *fs.Dir:
dir := item
name := path.Base(dir.Remote())
// Use old dir value if it exists
if oldItem, ok := oldItems[name]; ok {
if _, ok := oldItem.o.(*fs.Dir); ok {
d.items[name] = oldItem
continue
}
}
d.items[name] = &DirEntry{
o: dir,
node: nil,
}
default:
err = errors.Errorf("unknown type %T", item)
fs.Errorf(d.path, "readDir error: %v", err)
return err
}
}
d.read = when
return nil
}
// lookup a single item in the directory
//
// returns fuse.ENOENT if not found.
func (d *Dir) lookup(leaf string) (*DirEntry, error) {
err := d.readDir()
if err != nil {
return nil, err
}
d.mu.RLock()
item, ok := d.items[leaf]
d.mu.RUnlock()
if !ok {
return nil, fuse.ENOENT
}
return item, nil
}
// Check to see if a directory is empty
func (d *Dir) isEmpty() (bool, error) {
err := d.readDir()
if err != nil {
return false, err
}
d.mu.RLock()
defer d.mu.RUnlock()
return len(d.items) == 0, nil
}
// Check interface satisfied
var _ fusefs.Node = (*Dir)(nil)
// Attr updates the attributes of a directory
func (d *Dir) Attr(ctx context.Context, a *fuse.Attr) error {
a.Gid = gid
a.Uid = uid
a.Mode = os.ModeDir | dirPerms
a.Atime = d.modTime
a.Mtime = d.modTime
a.Ctime = d.modTime
a.Crtime = d.modTime
// FIXME include Valid so get some caching?
fs.Debugf(d.path, "Dir.Attr %+v", a)
return nil
}
// lookupNode calls lookup then makes sure the node is not nil in the DirEntry
func (d *Dir) lookupNode(leaf string) (item *DirEntry, err error) {
item, err = d.lookup(leaf)
if err != nil {
return nil, err
}
if item.node != nil {
return item, nil
}
var node fusefs.Node
switch x := item.o.(type) {
case fs.Object:
node, err = newFile(d, x), nil
case *fs.Dir:
node, err = newDir(d.f, x), nil
default:
err = errors.Errorf("unknown type %T", item)
}
if err != nil {
return nil, err
}
item = d.addObject(item.o, node)
return item, nil
}
// Check interface satisfied
var _ fusefs.NodeRequestLookuper = (*Dir)(nil)
// Lookup looks up a specific entry in the receiver.
//
// Lookup should return a Node corresponding to the entry. If the
// name does not exist in the directory, Lookup should return ENOENT.
//
// Lookup need not handle the names "." and "..".
func (d *Dir) Lookup(ctx context.Context, req *fuse.LookupRequest, resp *fuse.LookupResponse) (node fusefs.Node, err error) {
path := path.Join(d.path, req.Name)
fs.Debugf(path, "Dir.Lookup")
item, err := d.lookupNode(req.Name)
if err != nil {
if err != fuse.ENOENT {
fs.Errorf(path, "Dir.Lookup error: %v", err)
}
return nil, err
}
fs.Debugf(path, "Dir.Lookup OK")
return item.node, nil
}
// Check interface satisfied
var _ fusefs.HandleReadDirAller = (*Dir)(nil)
// ReadDirAll reads the contents of the directory
func (d *Dir) ReadDirAll(ctx context.Context) (dirents []fuse.Dirent, err error) {
fs.Debugf(d.path, "Dir.ReadDirAll")
err = d.readDir()
if err != nil {
fs.Debugf(d.path, "Dir.ReadDirAll error: %v", err)
return nil, err
}
d.mu.RLock()
defer d.mu.RUnlock()
for _, item := range d.items {
var dirent fuse.Dirent
switch x := item.o.(type) {
case fs.Object:
dirent = fuse.Dirent{
// Inode FIXME ???
Type: fuse.DT_File,
Name: path.Base(x.Remote()),
}
case *fs.Dir:
dirent = fuse.Dirent{
// Inode FIXME ???
Type: fuse.DT_Dir,
Name: path.Base(x.Remote()),
}
default:
err = errors.Errorf("unknown type %T", item)
fs.Errorf(d.path, "Dir.ReadDirAll error: %v", err)
return nil, err
}
dirents = append(dirents, dirent)
}
fs.Debugf(d.path, "Dir.ReadDirAll OK with %d entries", len(dirents))
return dirents, nil
}
var _ fusefs.NodeCreater = (*Dir)(nil)
// Create makes a new file
func (d *Dir) Create(ctx context.Context, req *fuse.CreateRequest, resp *fuse.CreateResponse) (fusefs.Node, fusefs.Handle, error) {
path := path.Join(d.path, req.Name)
fs.Debugf(path, "Dir.Create")
src := newCreateInfo(d.f, path)
// This gets added to the directory when the file is written
file := newFile(d, nil)
fh, err := newWriteFileHandle(d, file, src)
if err != nil {
fs.Errorf(path, "Dir.Create error: %v", err)
return nil, nil, err
}
fs.Debugf(path, "Dir.Create OK")
return file, fh, nil
}
var _ fusefs.NodeMkdirer = (*Dir)(nil)
// Mkdir creates a new directory
func (d *Dir) Mkdir(ctx context.Context, req *fuse.MkdirRequest) (fusefs.Node, error) {
path := path.Join(d.path, req.Name)
fs.Debugf(path, "Dir.Mkdir")
err := d.f.Mkdir(path)
if err != nil {
fs.Errorf(path, "Dir.Mkdir failed to create directory: %v", err)
return nil, err
}
fsDir := &fs.Dir{
Name: path,
When: time.Now(),
}
dir := newDir(d.f, fsDir)
d.addObject(fsDir, dir)
fs.Debugf(path, "Dir.Mkdir OK")
return dir, nil
}
var _ fusefs.NodeRemover = (*Dir)(nil)
// Remove removes the entry with the given name from
// the receiver, which must be a directory. The entry to be removed
// may correspond to a file (unlink) or to a directory (rmdir).
func (d *Dir) Remove(ctx context.Context, req *fuse.RemoveRequest) error {
path := path.Join(d.path, req.Name)
fs.Debugf(path, "Dir.Remove")
item, err := d.lookupNode(req.Name)
if err != nil {
fs.Errorf(path, "Dir.Remove error: %v", err)
return err
}
switch x := item.o.(type) {
case fs.Object:
err = x.Remove()
if err != nil {
fs.Errorf(path, "Dir.Remove file error: %v", err)
return err
}
case *fs.Dir:
// Check directory is empty first
dir := item.node.(*Dir)
empty, err := dir.isEmpty()
if err != nil {
fs.Errorf(path, "Dir.Remove dir error: %v", err)
return err
}
if !empty {
// return fuse.ENOTEMPTY - doesn't exist though so use EEXIST
fs.Errorf(path, "Dir.Remove not empty")
return fuse.EEXIST
}
// remove directory
err = d.f.Rmdir(path)
if err != nil {
fs.Errorf(path, "Dir.Remove failed to remove directory: %v", err)
return err
}
default:
fs.Errorf(path, "Dir.Remove unknown type %T", item)
return errors.Errorf("unknown type %T", item)
}
// Remove the item from the directory listing
d.delObject(req.Name)
fs.Debugf(path, "Dir.Remove OK")
return nil
}
// Check interface satisfied
var _ fusefs.NodeRenamer = (*Dir)(nil)
// Rename the file
func (d *Dir) Rename(ctx context.Context, req *fuse.RenameRequest, newDir fusefs.Node) error {
oldPath := path.Join(d.path, req.OldName)
destDir, ok := newDir.(*Dir)
if !ok {
err := errors.Errorf("Unknown Dir type %T", newDir)
fs.Errorf(oldPath, "Dir.Rename error: %v", err)
return err
}
newPath := path.Join(destDir.path, req.NewName)
fs.Debugf(oldPath, "Dir.Rename to %q", newPath)
oldItem, err := d.lookupNode(req.OldName)
if err != nil {
fs.Errorf(oldPath, "Dir.Rename error: %v", err)
return err
}
var newObj fs.BasicInfo
oldNode := oldItem.node
switch x := oldItem.o.(type) {
case fs.Object:
oldObject := x
// FIXME: could Copy then Delete if Move not available
// - though care needed if case insensitive...
doMove := d.f.Features().Move
if doMove == nil {
err := errors.Errorf("Fs %q can't rename files (no Move)", d.f)
fs.Errorf(oldPath, "Dir.Rename error: %v", err)
return err
}
newObject, err := doMove(oldObject, newPath)
if err != nil {
fs.Errorf(oldPath, "Dir.Rename error: %v", err)
return err
}
newObj = newObject
// Update the node with the new details
if oldNode != nil {
if oldFile, ok := oldNode.(*File); ok {
fs.Debugf(oldItem.o, "Updating file with %v %p", newObject, oldFile)
oldFile.rename(destDir, newObject)
}
}
case *fs.Dir:
doDirMove := d.f.Features().DirMove
if doDirMove == nil {
err := errors.Errorf("Fs %q can't rename directories (no DirMove)", d.f)
fs.Errorf(oldPath, "Dir.Rename error: %v", err)
return err
}
srcRemote := x.Name
dstRemote := newPath
err = doDirMove(d.f, srcRemote, dstRemote)
if err != nil {
fs.Errorf(oldPath, "Dir.Rename error: %v", err)
return err
}
newDir := new(fs.Dir)
*newDir = *x
newDir.Name = newPath
newObj = newDir
// Update the node with the new details
if oldNode != nil {
if oldDir, ok := oldNode.(*Dir); ok {
fs.Debugf(oldItem.o, "Updating dir with %v %p", newDir, oldDir)
oldDir.rename(destDir, newDir)
}
}
default:
err = errors.Errorf("unknown type %T", oldItem)
fs.Errorf(d.path, "Dir.ReadDirAll error: %v", err)
return err
}
// Show moved - delete from old dir and add to new
d.delObject(req.OldName)
destDir.addObject(newObj, oldNode)
fs.Debugf(newPath, "Dir.Rename renamed from %q", oldPath)
return nil
}
// Check interface satisfied
var _ fusefs.NodeFsyncer = (*Dir)(nil)
// Fsync the directory
//
// Note that we don't do anything except return OK
func (d *Dir) Fsync(ctx context.Context, req *fuse.FsyncRequest) error {
return nil
}

View File

@@ -1,135 +0,0 @@
// +build linux darwin freebsd
package mount
import (
"io/ioutil"
"os"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestDirLs(t *testing.T) {
run.skipIfNoFUSE(t)
run.checkDir(t, "")
run.mkdir(t, "a directory")
run.createFile(t, "a file", "hello")
run.checkDir(t, "a directory/|a file 5")
run.rmdir(t, "a directory")
run.rm(t, "a file")
run.checkDir(t, "")
}
func TestDirCreateAndRemoveDir(t *testing.T) {
run.skipIfNoFUSE(t)
run.mkdir(t, "dir")
run.mkdir(t, "dir/subdir")
run.checkDir(t, "dir/|dir/subdir/")
// Check we can't delete a directory with stuff in
err := os.Remove(run.path("dir"))
assert.Error(t, err, "file exists")
// Now delete subdir then dir - should produce no errors
run.rmdir(t, "dir/subdir")
run.checkDir(t, "dir/")
run.rmdir(t, "dir")
run.checkDir(t, "")
}
func TestDirCreateAndRemoveFile(t *testing.T) {
run.skipIfNoFUSE(t)
run.mkdir(t, "dir")
run.createFile(t, "dir/file", "potato")
run.checkDir(t, "dir/|dir/file 6")
// Check we can't delete a directory with stuff in
err := os.Remove(run.path("dir"))
assert.Error(t, err, "file exists")
// Now delete file
run.rm(t, "dir/file")
run.checkDir(t, "dir/")
run.rmdir(t, "dir")
run.checkDir(t, "")
}
func TestDirRenameFile(t *testing.T) {
run.skipIfNoFUSE(t)
run.mkdir(t, "dir")
run.createFile(t, "file", "potato")
run.checkDir(t, "dir/|file 6")
err := os.Rename(run.path("file"), run.path("file2"))
require.NoError(t, err)
run.checkDir(t, "dir/|file2 6")
data, err := ioutil.ReadFile(run.path("file2"))
require.NoError(t, err)
assert.Equal(t, "potato", string(data))
err = os.Rename(run.path("file2"), run.path("dir/file3"))
require.NoError(t, err)
run.checkDir(t, "dir/|dir/file3 6")
data, err = ioutil.ReadFile(run.path("dir/file3"))
require.NoError(t, err)
assert.Equal(t, "potato", string(data))
run.rm(t, "dir/file3")
run.rmdir(t, "dir")
run.checkDir(t, "")
}
func TestDirRenameEmptyDir(t *testing.T) {
run.skipIfNoFUSE(t)
run.mkdir(t, "dir")
run.mkdir(t, "dir1")
run.checkDir(t, "dir/|dir1/")
err := os.Rename(run.path("dir1"), run.path("dir/dir2"))
require.NoError(t, err)
run.checkDir(t, "dir/|dir/dir2/")
err = os.Rename(run.path("dir/dir2"), run.path("dir/dir3"))
require.NoError(t, err)
run.checkDir(t, "dir/|dir/dir3/")
run.rmdir(t, "dir/dir3")
run.rmdir(t, "dir")
run.checkDir(t, "")
}
func TestDirRenameFullDir(t *testing.T) {
run.skipIfNoFUSE(t)
run.mkdir(t, "dir")
run.mkdir(t, "dir1")
run.createFile(t, "dir1/potato.txt", "maris piper")
run.checkDir(t, "dir/|dir1/|dir1/potato.txt 11")
err := os.Rename(run.path("dir1"), run.path("dir/dir2"))
require.NoError(t, err)
run.checkDir(t, "dir/|dir/dir2/|dir/dir2/potato.txt 11")
err = os.Rename(run.path("dir/dir2"), run.path("dir/dir3"))
require.NoError(t, err)
run.checkDir(t, "dir/|dir/dir3/|dir/dir3/potato.txt 11")
run.rm(t, "dir/dir3/potato.txt")
run.rmdir(t, "dir/dir3")
run.rmdir(t, "dir")
run.checkDir(t, "")
}

View File

@@ -1,168 +0,0 @@
// +build linux darwin freebsd
package mount
import (
"sync"
"sync/atomic"
"time"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/ncw/rclone/fs"
"github.com/pkg/errors"
"golang.org/x/net/context"
)
// File represents a file
type File struct {
size int64 // size of file - read and written with atomic int64 - must be 64 bit aligned
d *Dir // parent directory - read only
mu sync.RWMutex // protects the following
o fs.Object // NB o may be nil if file is being written
writers int // number of writers for this file
}
// newFile creates a new File
func newFile(d *Dir, o fs.Object) *File {
return &File{
d: d,
o: o,
}
}
// rename should be called to update f.o and f.d after a rename
func (f *File) rename(d *Dir, o fs.Object) {
f.mu.Lock()
f.o = o
f.d = d
f.mu.Unlock()
}
// addWriters increments or decrements the writers
func (f *File) addWriters(n int) {
f.mu.Lock()
f.writers += n
f.mu.Unlock()
}
// Check interface satisfied
var _ fusefs.Node = (*File)(nil)
// Attr fills out the attributes for the file
func (f *File) Attr(ctx context.Context, a *fuse.Attr) error {
f.mu.Lock()
defer f.mu.Unlock()
a.Gid = gid
a.Uid = uid
a.Mode = filePerms
// if o is nil it isn't valid yet, so return the size so far
if f.o == nil {
a.Size = uint64(atomic.LoadInt64(&f.size))
} else {
a.Size = uint64(f.o.Size())
if !noModTime {
modTime := f.o.ModTime()
a.Atime = modTime
a.Mtime = modTime
a.Ctime = modTime
a.Crtime = modTime
}
}
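// Report the size in 512 byte blocks, rounding up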
a.Blocks = (a.Size + 511) / 512
fs.Debugf(f.o, "File.Attr %+v", a)
return nil
}
// Update the size while writing
func (f *File) written(n int64) {
atomic.AddInt64(&f.size, n)
}
// Update the object when written
func (f *File) setObject(o fs.Object) {
f.mu.Lock()
defer f.mu.Unlock()
f.o = o
f.d.addObject(o, f)
}
// Wait for f.o to become non nil for a short time returning it or an
// error
//
// Call without the mutex held
func (f *File) waitForValidObject() (o fs.Object, err error) {
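// Poll every 100ms for up to 5 seconds for a writer to set f.o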
for i := 0; i < 50; i++ {
f.mu.Lock()
o = f.o
writers := f.writers
f.mu.Unlock()
if o != nil {
return o, nil
}
if writers == 0 {
return nil, errors.New("can't open file - writer failed")
}
time.Sleep(100 * time.Millisecond)
}
return nil, fuse.ENOENT
}
// Check interface satisfied
var _ fusefs.NodeOpener = (*File)(nil)
// Open the file for read or write
func (f *File) Open(ctx context.Context, req *fuse.OpenRequest, resp *fuse.OpenResponse) (fh fusefs.Handle, err error) {
// if o is nil it isn't valid yet
o, err := f.waitForValidObject()
if err != nil {
return nil, err
}
fs.Debugf(o, "File.Open %v", req.Flags)
switch {
case req.Flags.IsReadOnly():
if noSeek {
resp.Flags |= fuse.OpenNonSeekable
}
fh, err = newReadFileHandle(o)
err = errors.Wrap(err, "open for read")
case req.Flags.IsWriteOnly() || (req.Flags.IsReadWrite() && (req.Flags&fuse.OpenTruncate) != 0):
resp.Flags |= fuse.OpenNonSeekable
src := newCreateInfo(f.d.f, o.Remote())
fh, err = newWriteFileHandle(f.d, f, src)
err = errors.Wrap(err, "open for write")
case req.Flags.IsReadWrite():
err = errors.New("can't open for read and write simultaneously")
default:
err = errors.Errorf("can't figure out how to open with flags %v", req.Flags)
}
/*
// File was opened in append-only mode, all writes will go to end
// of file. OS X does not provide this information.
OpenAppend OpenFlags = syscall.O_APPEND
OpenCreate OpenFlags = syscall.O_CREAT
OpenDirectory OpenFlags = syscall.O_DIRECTORY
OpenExclusive OpenFlags = syscall.O_EXCL
OpenNonblock OpenFlags = syscall.O_NONBLOCK
OpenSync OpenFlags = syscall.O_SYNC
OpenTruncate OpenFlags = syscall.O_TRUNC
*/
if err != nil {
fs.Errorf(o, "File.Open failed: %v", err)
return nil, err
}
return fh, nil
}
// Check interface satisfied
var _ fusefs.NodeFsyncer = (*File)(nil)
// Fsync the file
//
// Note that we don't do anything except return OK
func (f *File) Fsync(ctx context.Context, req *fuse.FsyncRequest) error {
return nil
}

View File

@@ -1,127 +0,0 @@
// FUSE main Fs
// +build linux darwin freebsd
package mount
import (
"time"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/ncw/rclone/fs"
"golang.org/x/net/context"
)
// FS represents the top level filing system
type FS struct {
f fs.Fs
}
// Check interface satisfied
var _ fusefs.FS = (*FS)(nil)
// Root returns the root node
func (f *FS) Root() (fusefs.Node, error) {
fs.Debugf(f.f, "Root()")
fsDir := &fs.Dir{
Name: "",
When: time.Now(),
}
return newDir(f.f, fsDir), nil
}
// mountOptions configures the options from the command line flags
func mountOptions(device string) (options []fuse.MountOption) {
options = []fuse.MountOption{
fuse.MaxReadahead(uint32(maxReadAhead)),
fuse.Subtype("rclone"),
fuse.FSName(device), fuse.VolumeName(device),
fuse.NoAppleDouble(),
fuse.NoAppleXattr(),
// Options from benchmarking in the fuse module
//fuse.MaxReadahead(64 * 1024 * 1024),
//fuse.AsyncRead(), - FIXME this causes
// ReadFileHandle.Read error: read /home/files/ISOs/xubuntu-15.10-desktop-amd64.iso: bad file descriptor
// which is probably related to errors people are having
//fuse.WritebackCache(),
}
if allowNonEmpty {
options = append(options, fuse.AllowNonEmptyMount())
}
if allowOther {
options = append(options, fuse.AllowOther())
}
if allowRoot {
options = append(options, fuse.AllowRoot())
}
if defaultPermissions {
options = append(options, fuse.DefaultPermissions())
}
if readOnly {
options = append(options, fuse.ReadOnly())
}
if writebackCache {
options = append(options, fuse.WritebackCache())
}
return options
}
// mount the file system
//
// The mount point will be ready when this returns.
//
// returns an error, and an error channel for the serve process to
// report an error when fusermount is called.
func mount(f fs.Fs, mountpoint string) (<-chan error, error) {
fs.Debugf(f, "Mounting on %q", mountpoint)
c, err := fuse.Mount(mountpoint, mountOptions(f.Name()+":"+f.Root())...)
if err != nil {
return nil, err
}
filesys := &FS{
f: f,
}
server := fusefs.New(c, nil)
// Serve the mount point in the background returning error to errChan
errChan := make(chan error, 1)
go func() {
err := server.Serve(filesys)
closeErr := c.Close()
if err == nil {
err = closeErr
}
errChan <- err
}()
// check if the mount process has an error to report
<-c.Ready
if err := c.MountError; err != nil {
return nil, err
}
return errChan, nil
}
// Check interface satisfied
var _ fusefs.FSStatfser = (*FS)(nil)
// Statfs is called to obtain file system metadata.
// It should write that data to resp.
func (f *FS) Statfs(ctx context.Context, req *fuse.StatfsRequest, resp *fuse.StatfsResponse) error {
const blockSize = 4096
const fsBlocks = (1 << 50) / blockSize
resp.Blocks = fsBlocks // Total data blocks in file system.
resp.Bfree = fsBlocks // Free blocks in file system.
resp.Bavail = fsBlocks // Free blocks in file system if you're not root.
resp.Files = 1E9 // Total files in file system.
resp.Ffree = 1E9 // Free files in file system.
resp.Bsize = blockSize // Block size
resp.Namelen = 255 // Maximum file name length?
resp.Frsize = blockSize // Fragment size, smallest addressable data size in the file system.
return nil
}

View File

@@ -1,270 +0,0 @@
// +build linux darwin freebsd
// Test suite for rclonefs
package mount
import (
"flag"
"fmt"
"io/ioutil"
"log"
"os"
"os/exec"
"path"
"strings"
"testing"
"github.com/ncw/rclone/fs"
_ "github.com/ncw/rclone/fs/all"
"github.com/ncw/rclone/fstest"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Globals
var (
RemoteName = flag.String("remote", "", "Remote to test with, defaults to local filesystem")
SubDir = flag.Bool("subdir", false, "Set to test with a sub directory")
Verbose = flag.Bool("verbose", false, "Set to enable logging")
DumpHeaders = flag.Bool("dump-headers", false, "Set to dump headers (needs -verbose)")
DumpBodies = flag.Bool("dump-bodies", false, "Set to dump bodies (needs -verbose)")
Individual = flag.Bool("individual", false, "Make individual bucket/container/directory for each test - much slower")
LowLevelRetries = flag.Int("low-level-retries", 10, "Number of low level retries")
)
// TestMain drives the tests
func TestMain(m *testing.M) {
flag.Parse()
run = newRun()
rc := m.Run()
run.Finalise()
os.Exit(rc)
}
// Run holds the remotes for a test run
type Run struct {
mountPath string
fremote fs.Fs
fremoteName string
cleanRemote func()
umountResult <-chan error
skip bool
}
// run holds the master Run data
var run *Run
// newRun initialises the remote mount for testing and returns a run
// object.
//
// r.fremote is an empty remote Fs
//
// Finalise() will tidy them away when done.
func newRun() *Run {
r := &Run{
umountResult: make(chan error, 1),
}
// Never ask for passwords, fail instead.
// If your local config is encrypted set environment variable
// "RCLONE_CONFIG_PASS=hunter2" (or your password)
*fs.AskPassword = false
fs.LoadConfig()
if *Verbose {
fs.Config.LogLevel = fs.LogLevelDebug
}
fs.Config.DumpHeaders = *DumpHeaders
fs.Config.DumpBodies = *DumpBodies
fs.Config.LowLevelRetries = *LowLevelRetries
var err error
r.fremote, r.fremoteName, r.cleanRemote, err = fstest.RandomRemote(*RemoteName, *SubDir)
if err != nil {
log.Fatalf("Failed to open remote %q: %v", *RemoteName, err)
}
err = r.fremote.Mkdir("")
if err != nil {
log.Fatalf("Failed to open mkdir %q: %v", *RemoteName, err)
}
r.mountPath, err = ioutil.TempDir("", "rclonefs-mount")
if err != nil {
log.Fatalf("Failed to create mount dir: %v", err)
}
// Mount it up
r.mount()
return r
}
func (r *Run) mount() {
log.Printf("mount %q %q", r.fremote, r.mountPath)
var err error
r.umountResult, err = mount(r.fremote, r.mountPath)
if err != nil {
log.Printf("mount failed: %v", err)
r.skip = true
}
log.Printf("mount OK")
}
func (r *Run) umount() {
if r.skip {
log.Printf("FUSE not found so skipping umount")
return
}
log.Printf("Calling fusermount -u %q", r.mountPath)
err := exec.Command("fusermount", "-u", r.mountPath).Run()
if err != nil {
log.Printf("fusermount failed: %v", err)
}
log.Printf("Waiting for umount")
err = <-r.umountResult
if err != nil {
log.Fatalf("umount failed: %v", err)
}
}
func (r *Run) skipIfNoFUSE(t *testing.T) {
if r.skip {
t.Skip("FUSE not found so skipping test")
}
}
// Finalise cleans the remote and unmounts
func (r *Run) Finalise() {
r.umount()
r.cleanRemote()
err := os.RemoveAll(r.mountPath)
if err != nil {
log.Printf("Failed to clean mountPath %q: %v", r.mountPath, err)
}
}
func (r *Run) path(filepath string) string {
return path.Join(run.mountPath, filepath)
}
type dirMap map[string]struct{}
// Create a dirMap from a string
func newDirMap(dirString string) (dm dirMap) {
dm = make(dirMap)
for _, entry := range strings.Split(dirString, "|") {
if entry != "" {
dm[entry] = struct{}{}
}
}
return dm
}
// filesOnly returns a dirMap with only the files in it
func (dm dirMap) filesOnly() dirMap {
newDm := make(dirMap)
for name := range dm {
if !strings.HasSuffix(name, "/") {
newDm[name] = struct{}{}
}
}
return newDm
}
// reads the local tree into dir
func (r *Run) readLocal(t *testing.T, dir dirMap, filepath string) {
realPath := r.path(filepath)
files, err := ioutil.ReadDir(realPath)
require.NoError(t, err)
for _, fi := range files {
name := path.Join(filepath, fi.Name())
if fi.IsDir() {
dir[name+"/"] = struct{}{}
r.readLocal(t, dir, name)
assert.Equal(t, fi.Mode().Perm(), os.FileMode(dirPerms))
} else {
dir[fmt.Sprintf("%s %d", name, fi.Size())] = struct{}{}
assert.Equal(t, fi.Mode().Perm(), os.FileMode(filePerms))
}
}
}
// reads the remote tree into dir
func (r *Run) readRemote(t *testing.T, dir dirMap, filepath string) {
objs, dirs, err := fs.NewLister().SetLevel(1).Start(r.fremote, filepath).GetAll()
if err == fs.ErrorDirNotFound {
return
}
require.NoError(t, err)
for _, obj := range objs {
dir[fmt.Sprintf("%s %d", obj.Remote(), obj.Size())] = struct{}{}
}
for _, d := range dirs {
name := d.Remote()
dir[name+"/"] = struct{}{}
r.readRemote(t, dir, name)
}
}
// checkDir checks the local and remote against the string passed in
func (r *Run) checkDir(t *testing.T, dirString string) {
dm := newDirMap(dirString)
localDm := make(dirMap)
r.readLocal(t, localDm, "")
remoteDm := make(dirMap)
r.readRemote(t, remoteDm, "")
// Ignore directories for remote compare
assert.Equal(t, dm.filesOnly(), remoteDm.filesOnly(), "expected vs remote")
assert.Equal(t, dm, localDm, "expected vs fuse mount")
}
func (r *Run) createFile(t *testing.T, filepath string, contents string) {
filepath = r.path(filepath)
err := ioutil.WriteFile(filepath, []byte(contents), 0600)
require.NoError(t, err)
}
func (r *Run) readFile(t *testing.T, filepath string) string {
filepath = r.path(filepath)
result, err := ioutil.ReadFile(filepath)
require.NoError(t, err)
return string(result)
}
func (r *Run) mkdir(t *testing.T, filepath string) {
filepath = r.path(filepath)
err := os.Mkdir(filepath, 0700)
require.NoError(t, err)
}
func (r *Run) rm(t *testing.T, filepath string) {
filepath = r.path(filepath)
err := os.Remove(filepath)
require.NoError(t, err)
}
func (r *Run) rmdir(t *testing.T, filepath string) {
filepath = r.path(filepath)
err := os.Remove(filepath)
require.NoError(t, err)
}
// Check that the Fs is mounted by seeing if the mountpoint is
// in the mount output
func TestMount(t *testing.T) {
run.skipIfNoFUSE(t)
out, err := exec.Command("mount").Output()
require.NoError(t, err)
assert.Contains(t, string(out), run.mountPath)
}
// Check root directory is present and correct
func TestRoot(t *testing.T) {
run.skipIfNoFUSE(t)
fi, err := os.Lstat(run.mountPath)
require.NoError(t, err)
assert.True(t, fi.IsDir())
assert.Equal(t, fi.Mode().Perm(), os.FileMode(dirPerms))
}

View File

@@ -1,178 +0,0 @@
// Package mount implements a FUSE mounting system for rclone remotes.
// +build linux darwin freebsd
package mount
import (
"log"
"os"
"time"
"bazil.org/fuse"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/pkg/errors"
"github.com/spf13/cobra"
"golang.org/x/sys/unix"
)
// Globals
var (
noModTime = false
debugFUSE = false
noSeek = false
dirCacheTime = 5 * 60 * time.Second
// mount options
readOnly = false
allowNonEmpty = false
allowRoot = false
allowOther = false
defaultPermissions = false
writebackCache = false
maxReadAhead fs.SizeSuffix = 128 * 1024
umask = 0
uid = uint32(unix.Geteuid())
gid = uint32(unix.Getegid())
// foreground = false
// default permissions for directories - modified by umask in Mount
dirPerms = os.FileMode(0777)
filePerms = os.FileMode(0666)
)
func init() {
umask = unix.Umask(0) // read the umask
unix.Umask(umask) // set it back to what it was
cmd.Root.AddCommand(commandDefintion)
commandDefintion.Flags().BoolVarP(&noModTime, "no-modtime", "", noModTime, "Don't read the modification time (can speed things up).")
commandDefintion.Flags().BoolVarP(&debugFUSE, "debug-fuse", "", debugFUSE, "Debug the FUSE internals - needs -v.")
commandDefintion.Flags().BoolVarP(&noSeek, "no-seek", "", noSeek, "Don't allow seeking in files.")
commandDefintion.Flags().DurationVarP(&dirCacheTime, "dir-cache-time", "", dirCacheTime, "Time to cache directory entries for.")
// mount options
commandDefintion.Flags().BoolVarP(&readOnly, "read-only", "", readOnly, "Mount read-only.")
commandDefintion.Flags().BoolVarP(&allowNonEmpty, "allow-non-empty", "", allowNonEmpty, "Allow mounting over a non-empty directory.")
commandDefintion.Flags().BoolVarP(&allowRoot, "allow-root", "", allowRoot, "Allow access to root user.")
commandDefintion.Flags().BoolVarP(&allowOther, "allow-other", "", allowOther, "Allow access to other users.")
commandDefintion.Flags().BoolVarP(&defaultPermissions, "default-permissions", "", defaultPermissions, "Makes kernel enforce access control based on the file mode.")
commandDefintion.Flags().BoolVarP(&writebackCache, "write-back-cache", "", writebackCache, "Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.")
commandDefintion.Flags().VarP(&maxReadAhead, "max-read-ahead", "", "The number of bytes that can be prefetched for sequential reads.")
commandDefintion.Flags().IntVarP(&umask, "umask", "", umask, "Override the permission bits set by the filesystem.")
commandDefintion.Flags().Uint32VarP(&uid, "uid", "", uid, "Override the uid field set by the filesystem.")
commandDefintion.Flags().Uint32VarP(&gid, "gid", "", gid, "Override the gid field set by the filesystem.")
//commandDefintion.Flags().BoolVarP(&foreground, "foreground", "", foreground, "Do not detach.")
}
var commandDefintion = &cobra.Command{
Use: "mount remote:path /path/to/mountpoint",
Short: `Mount the remote as a mountpoint. **EXPERIMENTAL**`,
Long: `
rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's
cloud storage systems as a file system with FUSE.
This is **EXPERIMENTAL** - use with care.
First set up your remote using ` + "`rclone config`" + `. Check it works with ` + "`rclone ls`" + ` etc.
Start the mount like this (note the & on the end to put rclone in the background).
rclone mount remote:path/to/files /path/to/local/mount &
Stop the mount with
fusermount -u /path/to/local/mount
Or if that fails try
fusermount -z -u /path/to/local/mount
Or with OS X
umount /path/to/local/mount
### Limitations ###
This can only write files sequentially, and it can only seek when reading.
This means that many applications won't work with their files on an
rclone mount.
The bucket based remotes (eg Swift, S3, Google Cloud Storage, B2,
Hubic) won't work from the root - you will need to specify a bucket,
or a path within the bucket. So ` + "`swift:`" + ` won't work whereas
` + "`swift:bucket`" + ` will as will ` + "`swift:bucket/path`" + `.
None of these support the concept of directories, so empty
directories will have a tendency to disappear once they fall out of
the directory cache.
Only supported on Linux, FreeBSD and OS X at the moment.
### rclone mount vs rclone sync/copy ###
File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone mount
can't use retries in the same way without making local copies of the
uploads. This might happen in the future, but for the moment rclone
mount won't do that, so will be less reliable than the rclone command.
### Filters ###
Note that all the rclone filters can be used to select a subset of the
files to be visible in the mount.
### Bugs ###
* All the remotes should work for read, but some may not for write
* those which need to know the size in advance won't - eg B2
* maybe should pass in size as -1 to mean work it out
* Or put in an upload cache to cache the files on disk first
### TODO ###
* Check hashes on upload/download
* Preserve timestamps
* Move directories
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fdst := cmd.NewFsDst(args)
err := Mount(fdst, args[1])
if err != nil {
log.Fatalf("Fatal error: %v", err)
}
},
}
// Mount mounts the remote at mountpoint.
//
// If noModTime is set then the file modification times won't be read.
func Mount(f fs.Fs, mountpoint string) error {
if debugFUSE {
fuse.Debug = func(msg interface{}) {
fs.Debugf("fuse", "%v", msg)
}
}
// Set permissions
dirPerms = 0777 &^ os.FileMode(umask)
filePerms = 0666 &^ os.FileMode(umask)
// Show stats if the user has specifically requested them
if cmd.ShowStats() {
stopStats := cmd.StartStats()
defer close(stopStats)
}
// Mount it
errChan, err := mount(f, mountpoint)
if err != nil {
return errors.Wrap(err, "failed to mount FUSE fs")
}
// Wait for umount
err = <-errChan
if err != nil {
return errors.Wrap(err, "failed to umount FUSE fs")
}
return nil
}

View File

@@ -1,6 +0,0 @@
// Stub build for mount on unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build !linux,!darwin,!freebsd
package mount

View File

@@ -1,214 +0,0 @@
// +build linux darwin freebsd
package mount
import (
"io"
"sync"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/ncw/rclone/fs"
"golang.org/x/net/context"
)
// ReadFileHandle is an open for read file handle on a File
type ReadFileHandle struct {
mu sync.Mutex
closed bool // set if handle has been closed
r *fs.Account
o fs.Object
readCalled bool // set if read has been called
offset int64
}
func newReadFileHandle(o fs.Object) (*ReadFileHandle, error) {
r, err := o.Open()
if err != nil {
return nil, err
}
fh := &ReadFileHandle{
o: o,
r: fs.NewAccount(r, o).WithBuffer(), // account the transfer
}
fs.Stats.Transferring(fh.o.Remote())
return fh, nil
}
// Check interface satisfied
var _ fusefs.Handle = (*ReadFileHandle)(nil)
// Check interface satisfied
var _ fusefs.HandleReader = (*ReadFileHandle)(nil)
// seek to a new offset
//
// if reopen is true, then we won't attempt to use an io.Seeker interface
//
// Must be called with fh.mu held
func (fh *ReadFileHandle) seek(offset int64, reopen bool) (err error) {
fh.r.StopBuffering() // stop the background reading first
oldReader := fh.r.GetReader()
r := oldReader
// Can we seek it directly?
if do, ok := oldReader.(io.Seeker); !reopen && ok {
fs.Debugf(fh.o, "ReadFileHandle.seek from %d to %d (io.Seeker)", fh.offset, offset)
_, err = do.Seek(offset, 0)
if err != nil {
fs.Debugf(fh.o, "ReadFileHandle.Read io.Seeker failed: %v", err)
return err
}
} else {
fs.Debugf(fh.o, "ReadFileHandle.seek from %d to %d", fh.offset, offset)
// close old one
err = oldReader.Close()
if err != nil {
fs.Debugf(fh.o, "ReadFileHandle.Read seek close old failed: %v", err)
}
// re-open with a seek
r, err = fh.o.Open(&fs.SeekOption{Offset: offset})
if err != nil {
fs.Debugf(fh.o, "ReadFileHandle.Read seek failed: %v", err)
return err
}
}
fh.r.UpdateReader(r)
fh.offset = offset
return nil
}
// Read from the file handle
func (fh *ReadFileHandle) Read(ctx context.Context, req *fuse.ReadRequest, resp *fuse.ReadResponse) (err error) {
fh.mu.Lock()
defer fh.mu.Unlock()
fs.Debugf(fh.o, "ReadFileHandle.Read size %d offset %d", req.Size, req.Offset)
if fh.closed {
fs.Errorf(fh.o, "ReadFileHandle.Read error: %v", errClosedFileHandle)
return errClosedFileHandle
}
doSeek := req.Offset != fh.offset
var n int
var newOffset int64
retries := 0
buf := make([]byte, req.Size)
doReopen := false
for {
if doSeek {
// Are we attempting to seek beyond the end of the
// file - if so just return EOF leaving the underlying
// file in an unchanged state.
if req.Offset >= fh.o.Size() {
fs.Debugf(fh.o, "ReadFileHandle.Read attempt to read beyond end of file: %d > %d", req.Offset, fh.o.Size())
resp.Data = nil
return nil
}
// Otherwise do the seek
err = fh.seek(req.Offset, doReopen)
} else {
err = nil
}
if err == nil {
if req.Size > 0 {
fh.readCalled = true
}
// One exception to the above is if we fail to fully populate a
// page cache page; a read into page cache is always page aligned.
// Make sure we never serve a partial read, to avoid that.
n, err = io.ReadFull(fh.r, buf)
newOffset = fh.offset + int64(n)
// if err == nil && rand.Intn(10) == 0 {
// err = errors.New("random error")
// }
if err == nil {
break
} else if (err == io.ErrUnexpectedEOF || err == io.EOF) && newOffset == fh.o.Size() {
// Have read to end of file - reset error
err = nil
break
}
}
if retries >= fs.Config.LowLevelRetries {
break
}
retries++
fs.Errorf(fh.o, "ReadFileHandle.Read error: low level retry %d/%d: %v", retries, fs.Config.LowLevelRetries, err)
doSeek = true
doReopen = true
}
if err != nil {
fs.Errorf(fh.o, "ReadFileHandle.Read error: %v", err)
} else {
resp.Data = buf[:n]
fh.offset = newOffset
fs.Debugf(fh.o, "ReadFileHandle.Read OK")
}
return err
}
// close the file handle returning errClosedFileHandle if it has been
// closed already.
//
// Must be called with fh.mu held
func (fh *ReadFileHandle) close() error {
if fh.closed {
return errClosedFileHandle
}
fh.closed = true
fs.Stats.DoneTransferring(fh.o.Remote(), true)
return fh.r.Close()
}
// Check interface satisfied
var _ fusefs.HandleFlusher = (*ReadFileHandle)(nil)
// Flush is called each time the file or directory is closed.
// Because there can be multiple file descriptors referring to a
// single opened file, Flush can be called multiple times.
func (fh *ReadFileHandle) Flush(ctx context.Context, req *fuse.FlushRequest) error {
fh.mu.Lock()
defer fh.mu.Unlock()
fs.Debugf(fh.o, "ReadFileHandle.Flush")
// Ignore the Flush as there is nothing we can sensibly do and
// it seems quite common for Flush to be called from
// different threads each of which have read some data.
if false {
// If Read hasn't been called then ignore the Flush - Release
// will pick it up
if !fh.readCalled {
fs.Debugf(fh.o, "ReadFileHandle.Flush ignoring flush on unread handle")
return nil
}
err := fh.close()
if err != nil {
fs.Errorf(fh.o, "ReadFileHandle.Flush error: %v", err)
return err
}
}
fs.Debugf(fh.o, "ReadFileHandle.Flush OK")
return nil
}
var _ fusefs.HandleReleaser = (*ReadFileHandle)(nil)
// Release is called when we are finished with the file handle
//
// It isn't called directly from userspace so the error is ignored by
// the kernel
func (fh *ReadFileHandle) Release(ctx context.Context, req *fuse.ReleaseRequest) error {
fh.mu.Lock()
defer fh.mu.Unlock()
if fh.closed {
fs.Debugf(fh.o, "ReadFileHandle.Release nothing to do")
return nil
}
fs.Debugf(fh.o, "ReadFileHandle.Release closing")
err := fh.close()
if err != nil {
fs.Errorf(fh.o, "ReadFileHandle.Release error: %v", err)
} else {
fs.Debugf(fh.o, "ReadFileHandle.Release OK")
}
return err
}

View File

@@ -1,129 +0,0 @@
// +build linux darwin freebsd
package mount
import (
"io"
"io/ioutil"
"os"
"syscall"
"testing"
"github.com/stretchr/testify/assert"
)
// Read the file byte by byte, including the case where we don't read any bytes
func TestReadByByte(t *testing.T) {
run.skipIfNoFUSE(t)
var data = []byte("hellohello")
run.createFile(t, "testfile", string(data))
run.checkDir(t, "testfile 10")
for i := 0; i < len(data); i++ {
fd, err := os.Open(run.path("testfile"))
assert.NoError(t, err)
for j := 0; j < i; j++ {
buf := make([]byte, 1)
n, err := io.ReadFull(fd, buf)
assert.NoError(t, err)
assert.Equal(t, 1, n)
assert.Equal(t, buf[0], data[j])
}
err = fd.Close()
assert.NoError(t, err)
}
run.rm(t, "testfile")
}
// Test double close
func TestReadFileDoubleClose(t *testing.T) {
run.skipIfNoFUSE(t)
run.createFile(t, "testdoubleclose", "hello")
in, err := os.Open(run.path("testdoubleclose"))
assert.NoError(t, err)
fd := in.Fd()
fd1, err := syscall.Dup(int(fd))
assert.NoError(t, err)
fd2, err := syscall.Dup(int(fd))
assert.NoError(t, err)
// close one of the dups - should produce no error
err = syscall.Close(fd1)
assert.NoError(t, err)
// read from the file
buf := make([]byte, 1)
_, err = in.Read(buf)
assert.NoError(t, err)
// close it
err = in.Close()
assert.NoError(t, err)
// read from the other dup - should produce no error as this
// file is now buffered
n, err := syscall.Read(fd2, buf)
assert.NoError(t, err)
assert.Equal(t, 1, n)
// close the dup - should not produce an error
err = syscall.Close(fd2)
assert.NoError(t, err, "input/output error")
run.rm(t, "testdoubleclose")
}
// Test seeking
func TestReadSeek(t *testing.T) {
run.skipIfNoFUSE(t)
var data = []byte("helloHELLO")
run.createFile(t, "testfile", string(data))
run.checkDir(t, "testfile 10")
fd, err := os.Open(run.path("testfile"))
assert.NoError(t, err)
// Seek to half way
_, err = fd.Seek(5, 0)
assert.NoError(t, err)
buf, err := ioutil.ReadAll(fd)
assert.NoError(t, err)
assert.Equal(t, buf, []byte("HELLO"))
// Test seeking to the end
_, err = fd.Seek(10, 0)
assert.NoError(t, err)
buf, err = ioutil.ReadAll(fd)
assert.NoError(t, err)
assert.Equal(t, buf, []byte(""))
// Test seeking beyond the end
_, err = fd.Seek(1000000, 0)
assert.NoError(t, err)
buf, err = ioutil.ReadAll(fd)
assert.NoError(t, err)
assert.Equal(t, buf, []byte(""))
// Now back to the start
_, err = fd.Seek(0, 0)
assert.NoError(t, err)
buf, err = ioutil.ReadAll(fd)
assert.NoError(t, err)
assert.Equal(t, buf, []byte("helloHELLO"))
err = fd.Close()
assert.NoError(t, err)
run.rm(t, "testfile")
}

View File

@@ -1,72 +0,0 @@
// +build ignore
// Read blocks out of a single file to time the seeking code
package main
import (
"flag"
"io"
"log"
"math/rand"
"os"
"time"
)
var (
// Flags
iterations = flag.Int("n", 25, "Iterations to try")
maxBlockSize = flag.Int("b", 1024*1024, "Max block size to read")
randSeed = flag.Int64("seed", 1, "Seed for the random number generator")
)
func randomSeekTest(size int64, in *os.File, name string) {
start := rand.Int63n(size)
blockSize := rand.Intn(*maxBlockSize)
if int64(blockSize) > size-start {
blockSize = int(size - start)
}
log.Printf("Reading %d from %d", blockSize, start)
_, err := in.Seek(start, 0)
if err != nil {
log.Fatalf("Seek failed on %q: %v", name, err)
}
buf := make([]byte, blockSize)
_, err = io.ReadFull(in, buf)
if err != nil {
log.Fatalf("Read failed on %q: %v", name, err)
}
}
func main() {
flag.Parse()
args := flag.Args()
if len(args) != 1 {
log.Fatalf("Require 1 file as argument")
}
rand.Seed(*randSeed)
name := args[0]
in, err := os.Open(name)
if err != nil {
log.Fatalf("Couldn't open %q: %v", name, err)
}
fi, err := in.Stat()
if err != nil {
log.Fatalf("Couldn't stat %q: %v", name, err)
}
start := time.Now()
for i := 0; i < *iterations; i++ {
randomSeekTest(fi.Size(), in, name)
}
dt := time.Since(start)
log.Printf("That took %v for %d iterations, %v per iteration", dt, *iterations, dt/time.Duration(*iterations))
err = in.Close()
if err != nil {
log.Fatalf("Error closing %q: %v", name, err)
}
}

View File

@@ -1,115 +0,0 @@
// +build ignore
// Read two files with lots of seeking to stress test the seek code
package main
import (
"bytes"
"flag"
"io"
"io/ioutil"
"log"
"math/rand"
"os"
"time"
)
var (
// Flags
iterations = flag.Int("n", 1E6, "Iterations to try")
maxBlockSize = flag.Int("b", 1024*1024, "Max block size to read")
)
func init() {
rand.Seed(time.Now().UnixNano())
}
func randomSeekTest(size int64, in1, in2 *os.File, file1, file2 string) {
start := rand.Int63n(size)
blockSize := rand.Intn(*maxBlockSize)
if int64(blockSize) > size-start {
blockSize = int(size - start)
}
log.Printf("Reading %d from %d", blockSize, start)
_, err := in1.Seek(start, 0)
if err != nil {
log.Fatalf("Seek failed on %q: %v", file1, err)
}
_, err = in2.Seek(start, 0)
if err != nil {
log.Fatalf("Seek failed on %q: %v", file2, err)
}
buf1 := make([]byte, blockSize)
n1, err := io.ReadFull(in1, buf1)
if err != nil {
log.Fatalf("Read failed on %q: %v", file1, err)
}
buf2 := make([]byte, blockSize)
n2, err := io.ReadFull(in2, buf2)
if err != nil {
log.Fatalf("Read failed on %q: %v", file2, err)
}
if n1 != n2 {
log.Fatalf("Read different lengths %d (%q) != %d (%q)", n1, file1, n2, file2)
}
if !bytes.Equal(buf1, buf2) {
log.Printf("Dumping different blocks")
err = ioutil.WriteFile("/tmp/z1", buf1, 0777)
if err != nil {
log.Fatalf("Failed to write /tmp/z1: %v", err)
}
err = ioutil.WriteFile("/tmp/z2", buf2, 0777)
if err != nil {
log.Fatalf("Failed to write /tmp/z2: %v", err)
}
log.Fatalf("Read different contents - saved in /tmp/z1 and /tmp/z2")
}
}
func main() {
flag.Parse()
args := flag.Args()
if len(args) != 2 {
log.Fatalf("Require 2 files as argument")
}
file1, file2 := args[0], args[1]
in1, err := os.Open(file1)
if err != nil {
log.Fatalf("Couldn't open %q: %v", file1, err)
}
in2, err := os.Open(file2)
if err != nil {
log.Fatalf("Couldn't open %q: %v", file2, err)
}
fi1, err := in1.Stat()
if err != nil {
log.Fatalf("Couldn't stat %q: %v", file1, err)
}
fi2, err := in2.Stat()
if err != nil {
log.Fatalf("Couldn't stat %q: %v", file2, err)
}
if fi1.Size() != fi2.Size() {
log.Fatalf("Files not the same size")
}
for i := 0; i < *iterations; i++ {
randomSeekTest(fi1.Size(), in1, in2, file1, file2)
}
err = in1.Close()
if err != nil {
log.Fatalf("Error closing %q: %v", file1, err)
}
err = in2.Close()
if err != nil {
log.Fatalf("Error closing %q: %v", file2, err)
}
}

View File

@@ -1,129 +0,0 @@
// +build ignore
// Read lots of files with lots of simultaneous seeking to stress test the seek code
package main
import (
"flag"
"io"
"log"
"math/rand"
"os"
"path/filepath"
"sort"
"sync"
"time"
)
var (
// Flags
iterations = flag.Int("n", 1E6, "Iterations to try")
maxBlockSize = flag.Int("b", 1024*1024, "Max block size to read")
simultaneous = flag.Int("transfers", 16, "Number of simultaneous files to open")
seeksPerFile = flag.Int("seeks", 8, "Seeks per file")
mask = flag.Int64("mask", 0, "mask for seek, eg 0x7fff")
)
func init() {
rand.Seed(time.Now().UnixNano())
}
func seekTest(n int, file string) {
in, err := os.Open(file)
if err != nil {
log.Fatalf("Couldn't open %q: %v", file, err)
}
fi, err := in.Stat()
if err != nil {
log.Fatalf("Couldn't stat %q: %v", file, err)
}
size := fi.Size()
// FIXME make sure we try start and end
maxBlockSize := *maxBlockSize
if int64(maxBlockSize) > size {
maxBlockSize = int(size)
}
for i := 0; i < n; i++ {
start := rand.Int63n(size)
if *mask != 0 {
start &^= *mask
}
blockSize := rand.Intn(maxBlockSize)
beyondEnd := false
switch rand.Intn(10) {
case 0:
start = 0
case 1:
start = size - int64(blockSize)
case 2:
// seek beyond the end
start = size + int64(blockSize)
beyondEnd = true
default:
}
if !beyondEnd && int64(blockSize) > size-start {
blockSize = int(size - start)
}
log.Printf("%s: Reading %d from %d", file, blockSize, start)
_, err = in.Seek(start, 0)
if err != nil {
log.Fatalf("Seek failed on %q: %v", file, err)
}
buf := make([]byte, blockSize)
n, err := io.ReadFull(in, buf)
if beyondEnd && err == io.EOF {
// OK
} else if err != nil {
log.Fatalf("Read failed on %q: %v (%d)", file, err, n)
}
}
err = in.Close()
if err != nil {
log.Fatalf("Error closing %q: %v", file, err)
}
}
// Find all the files in dir
func findFiles(dir string) (files []string) {
filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
if info.Mode().IsRegular() && info.Size() > 0 {
files = append(files, path)
}
return nil
})
sort.Strings(files)
return files
}
func main() {
flag.Parse()
args := flag.Args()
if len(args) != 1 {
log.Fatalf("Require a directory as argument")
}
dir := args[0]
files := findFiles(dir)
jobs := make(chan string, *simultaneous)
var wg sync.WaitGroup
wg.Add(*simultaneous)
for i := 0; i < *simultaneous; i++ {
go func() {
defer wg.Done()
for file := range jobs {
seekTest(*seeksPerFile, file)
}
}()
}
for i := 0; i < *iterations; i++ {
i := rand.Intn(len(files))
jobs <- files[i]
//jobs <- files[i]
}
close(jobs)
wg.Wait()
}

View File

@@ -1,160 +0,0 @@
// +build linux darwin freebsd
package mount
import (
"errors"
"io"
"sync"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/ncw/rclone/fs"
"golang.org/x/net/context"
)
var errClosedFileHandle = errors.New("Attempt to use closed file handle")
// WriteFileHandle is an open for write handle on a File
type WriteFileHandle struct {
mu sync.Mutex
closed bool // set if handle has been closed
remote string
pipeReader *io.PipeReader
pipeWriter *io.PipeWriter
o fs.Object
result chan error
file *File
writeCalled bool // set the first time Write() is called
}
// Check interface satisfied
var _ fusefs.Handle = (*WriteFileHandle)(nil)
func newWriteFileHandle(d *Dir, f *File, src fs.ObjectInfo) (*WriteFileHandle, error) {
fh := &WriteFileHandle{
remote: src.Remote(),
result: make(chan error, 1),
file: f,
}
fh.pipeReader, fh.pipeWriter = io.Pipe()
r := fs.NewAccountSizeName(fh.pipeReader, 0, src.Remote()).WithBuffer() // account the transfer
go func() {
o, err := d.f.Put(r, src)
fh.o = o
fh.result <- err
}()
fh.file.addWriters(1)
fs.Stats.Transferring(fh.remote)
return fh, nil
}
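// Data flow for the handle created above: Write() feeds req.Data into
// fh.pipeWriter, the goroutine started in newWriteFileHandle streams
// fh.pipeReader into d.f.Put as a single upload, and close() closes the
// writer then waits on fh.result for the outcome of that upload.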
// Check interface satisfied
var _ fusefs.HandleWriter = (*WriteFileHandle)(nil)
// Write data to the file handle
func (fh *WriteFileHandle) Write(ctx context.Context, req *fuse.WriteRequest, resp *fuse.WriteResponse) error {
fs.Debugf(fh.remote, "WriteFileHandle.Write len=%d", len(req.Data))
fh.mu.Lock()
defer fh.mu.Unlock()
if fh.closed {
fs.Errorf(fh.remote, "WriteFileHandle.Write error: %v", errClosedFileHandle)
return errClosedFileHandle
}
fh.writeCalled = true
// FIXME should probably check the file isn't being seeked?
n, err := fh.pipeWriter.Write(req.Data)
resp.Size = n
fh.file.written(int64(n))
if err != nil {
fs.Errorf(fh.remote, "WriteFileHandle.Write error: %v", err)
return err
}
fs.Debugf(fh.remote, "WriteFileHandle.Write OK (%d bytes written)", n)
return nil
}
// close the file handle returning errClosedFileHandle if it has been
// closed already.
//
// Must be called with fh.mu held
func (fh *WriteFileHandle) close() error {
if fh.closed {
return errClosedFileHandle
}
fh.closed = true
fs.Stats.DoneTransferring(fh.remote, true)
fh.file.addWriters(-1)
writeCloseErr := fh.pipeWriter.Close()
err := <-fh.result
readCloseErr := fh.pipeReader.Close()
if err == nil {
fh.file.setObject(fh.o)
err = writeCloseErr
}
if err == nil {
err = readCloseErr
}
return err
}
// Check interface satisfied
var _ fusefs.HandleFlusher = (*WriteFileHandle)(nil)
// Flush is called on each close() of a file descriptor. So if a
// filesystem wants to return write errors in close() and the file has
// cached dirty data, this is a good place to write back data and
// return any errors. Since many applications ignore close() errors
// this is not always useful.
//
// NOTE: The flush() method may be called more than once for each
// open(). This happens if more than one file descriptor refers to an
// opened file due to dup(), dup2() or fork() calls. It is not
// possible to determine if a flush is final, so each flush should be
// treated equally. Multiple write-flush sequences are relatively
// rare, so this shouldn't be a problem.
//
// Filesystems shouldn't assume that flush will always be called after
// some writes, or that it will be called at all.
func (fh *WriteFileHandle) Flush(ctx context.Context, req *fuse.FlushRequest) error {
fh.mu.Lock()
defer fh.mu.Unlock()
fs.Debugf(fh.remote, "WriteFileHandle.Flush")
// If Write hasn't been called then ignore the Flush - Release
// will pick it up
if !fh.writeCalled {
fs.Debugf(fh.remote, "WriteFileHandle.Flush ignoring flush on unwritten handle")
return nil
}
err := fh.close()
if err != nil {
fs.Errorf(fh.remote, "WriteFileHandle.Flush error: %v", err)
} else {
fs.Debugf(fh.remote, "WriteFileHandle.Flush OK")
}
return err
}
var _ fusefs.HandleReleaser = (*WriteFileHandle)(nil)
// Release is called when we are finished with the file handle
//
// It isn't called directly from userspace so the error is ignored by
// the kernel
func (fh *WriteFileHandle) Release(ctx context.Context, req *fuse.ReleaseRequest) error {
fh.mu.Lock()
defer fh.mu.Unlock()
if fh.closed {
fs.Debugf(fh.remote, "WriteFileHandle.Release nothing to do")
return nil
}
fs.Debugf(fh.remote, "WriteFileHandle.Release closing")
err := fh.close()
if err != nil {
fs.Errorf(fh.remote, "WriteFileHandle.Release error: %v", err)
} else {
fs.Debugf(fh.remote, "WriteFileHandle.Release OK")
}
return err
}

View File

@@ -1,119 +0,0 @@
// +build linux darwin freebsd
package mount
import (
"os"
"syscall"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Test writing a file with no write()'s to it
func TestWriteFileNoWrite(t *testing.T) {
run.skipIfNoFUSE(t)
fd, err := os.Create(run.path("testnowrite"))
assert.NoError(t, err)
err = fd.Close()
assert.NoError(t, err)
run.checkDir(t, "testnowrite 0")
run.rm(t, "testnowrite")
}
// Test open file in directory listing
func FIXMETestWriteOpenFileInDirListing(t *testing.T) {
run.skipIfNoFUSE(t)
fd, err := os.Create(run.path("testnowrite"))
assert.NoError(t, err)
run.checkDir(t, "testnowrite 0")
err = fd.Close()
assert.NoError(t, err)
run.rm(t, "testnowrite")
}
// Test writing a file and reading it back
func TestWriteFileWrite(t *testing.T) {
run.skipIfNoFUSE(t)
run.createFile(t, "testwrite", "data")
run.checkDir(t, "testwrite 4")
contents := run.readFile(t, "testwrite")
assert.Equal(t, "data", contents)
run.rm(t, "testwrite")
}
// Test overwriting a file
func TestWriteFileOverwrite(t *testing.T) {
run.skipIfNoFUSE(t)
run.createFile(t, "testwrite", "data")
run.checkDir(t, "testwrite 4")
run.createFile(t, "testwrite", "potato")
contents := run.readFile(t, "testwrite")
assert.Equal(t, "potato", contents)
run.rm(t, "testwrite")
}
// Test double close
func TestWriteFileDoubleClose(t *testing.T) {
run.skipIfNoFUSE(t)
out, err := os.Create(run.path("testdoubleclose"))
assert.NoError(t, err)
fd := out.Fd()
fd1, err := syscall.Dup(int(fd))
assert.NoError(t, err)
fd2, err := syscall.Dup(int(fd))
assert.NoError(t, err)
// close one of the dups - should produce no error
err = syscall.Close(fd1)
assert.NoError(t, err)
// write to the file
buf := []byte("hello")
n, err := out.Write(buf)
assert.NoError(t, err)
assert.Equal(t, 5, n)
// close it
err = out.Close()
assert.NoError(t, err)
// write to the other dup - should produce an error
n, err = syscall.Write(fd2, buf)
assert.Error(t, err, "input/output error")
// close the dup - should produce an error
err = syscall.Close(fd2)
assert.Error(t, err, "input/output error")
run.rm(t, "testdoubleclose")
}
// Test Fsync
//
// NB the code for this is in file.go rather than write.go
func TestWriteFileFsync(t *testing.T) {
filepath := run.path("to be synced")
fd, err := os.Create(filepath)
require.NoError(t, err)
_, err = fd.Write([]byte("hello"))
require.NoError(t, err)
err = fd.Sync()
require.NoError(t, err)
err = fd.Close()
require.NoError(t, err)
}

View File

@@ -1,41 +0,0 @@
package move
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "move source:path dest:path",
Short: `Move files from source to dest.`,
Long: `
Moves the contents of the source directory to the destination
directory. Rclone will error if the source and destination overlap and
the remote does not support a server side directory move operation.
If no filters are in use and if possible this will server side move
` + "`source:path`" + ` into ` + "`dest:path`" + `. After this ` + "`source:path`" + ` will no
longer exist.
Otherwise for each file in ` + "`source:path`" + ` selected by the filters (if
any) this will move it into ` + "`dest:path`" + `. If possible a server side
move will be used, otherwise it will copy it (server side if possible)
into ` + "`dest:path`" + ` then delete the original (if no errors on copy) in
` + "`source:path`" + `.
**Important**: Since this can cause data loss, test first with the
--dry-run flag.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsrc, fdst := cmd.NewFsSrcDst(args)
cmd.Run(true, true, command, func() error {
return fs.MoveDir(fdst, fsrc)
})
},
}
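// A minimal usage sketch - the remote names below are only examples; as the
// help text above says, try --dry-run first since move can delete data:
//
//   rclone move --dry-run remote:source remote:dest
//   rclone move remote:source remote:dest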

View File

@@ -1,57 +0,0 @@
package moveto
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "moveto source:path dest:path",
Short: `Move file or directory from source to dest.`,
Long: `
If source:path is a file or directory then it moves it to a file or
directory named dest:path.
This can be used to rename files or upload single files to other than
their existing name. If the source is a directory then it acts exactly
like the move command.
So
rclone moveto src dst
where src and dst are rclone paths, either remote:path or
/path/to/local or C:\windows\path\if\on\windows.
This will:
if src is file
move it to dst, overwriting an existing file if it exists
if src is directory
move it to dst, overwriting existing files if they exist
see move command for full details
This doesn't transfer unchanged files, testing by size and
modification time or MD5SUM. src will be deleted on successful
transfer.
**Important**: Since this can cause data loss, test first with the
--dry-run flag.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsrc, srcFileName, fdst, dstFileName := cmd.NewFsSrcDstFiles(args)
cmd.Run(true, true, command, func() error {
if srcFileName == "" {
return fs.MoveDir(fdst, fsrc)
}
return fs.MoveFile(fdst, fsrc, dstFileName, srcFileName)
})
},
}

View File

@@ -1,26 +0,0 @@
package obscure
import (
"fmt"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "obscure password",
Short: `Obscure password for use in the rclone.conf`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
cmd.Run(false, false, command, func() error {
obscure := fs.MustObscure(args[0])
fmt.Println(obscure)
return nil
})
},
}

View File

@@ -1,28 +0,0 @@
package purge
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "purge remote:path",
Short: `Remove the path and all of its contents.`,
Long: `
Remove the path and all of its contents. Note that this does not obey
include/exclude filters - everything will be removed. Use ` + "`" + `delete` + "`" + ` if
you want to selectively delete files.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fdst := cmd.NewFsDst(args)
cmd.Run(true, false, command, func() error {
return fs.Purge(fdst)
})
},
}

View File

@@ -1,26 +0,0 @@
package rmdir
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "rmdir remote:path",
Short: `Remove the path if empty.`,
Long: `
Remove the path. Note that you can't remove a path with
objects in it, use purge for that.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fdst := cmd.NewFsDst(args)
cmd.Run(true, false, command, func() error {
return fs.Rmdir(fdst, "")
})
},
}

View File

@@ -1,31 +0,0 @@
package rmdir
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(rmdirsCmd)
}
var rmdirsCmd = &cobra.Command{
Use: "rmdirs remote:path",
Short: `Remove any empty directories under the path.`,
Long: `This removes any empty directories (or directories that only contain
empty directories) under the path that it finds, including the path if
it has nothing in it.
This is useful for tidying up remotes that rclone has left a lot of
empty directories in.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fdst := cmd.NewFsDst(args)
cmd.Run(true, false, command, func() error {
return fs.Rmdirs(fdst, "")
})
},
}

View File

@@ -1,29 +0,0 @@
package sha1sum
import (
"os"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "sha1sum remote:path",
Short: `Produces an sha1sum file for all the objects in the path.`,
Long: `
Produces an sha1sum file for all the objects in the path. This
is in the same format as the standard sha1sum tool produces.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, false, command, func() error {
return fs.Sha1sum(fsrc, os.Stdout)
})
},
}

View File

@@ -1,31 +0,0 @@
package size
import (
"fmt"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "size remote:path",
Short: `Prints the total size and number of objects in remote:path.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, false, command, func() error {
objects, size, err := fs.Count(fsrc)
if err != nil {
return err
}
fmt.Printf("Total objects: %d\n", objects)
fmt.Printf("Total size: %s (%d Bytes)\n", fs.SizeSuffix(size).Unit("Bytes"), size)
return nil
})
},
}

View File

@@ -1,43 +0,0 @@
package sync
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "sync source:path dest:path",
Short: `Make source and dest identical, modifying destination only.`,
Long: `
Sync the source to the destination, changing the destination
only. Doesn't transfer unchanged files, testing by size and
modification time or MD5SUM. Destination is updated to match
source, including deleting files if necessary.
**Important**: Since this can cause data loss, test first with the
` + "`" + `--dry-run` + "`" + ` flag to see exactly what would be copied and deleted.
Note that files in the destination won't be deleted if there were any
errors at any point.
It is always the contents of the directory that is synced, not the
directory, so when source:path is a directory, it's the contents of
source:path that are copied, not the directory name and contents. See
extended explanation in the ` + "`" + `copy` + "`" + ` command above if unsure.
If dest:path doesn't exist, it is created and the source:path contents
go there.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsrc, fdst := cmd.NewFsSrcDst(args)
cmd.Run(true, true, command, func() error {
return fs.Sync(fdst, fsrc)
})
},
}
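// A minimal usage sketch - remote names are only examples; --dry-run shows
// what would be copied and deleted without touching the destination:
//
//   rclone sync --dry-run remote:source remote:dest
//   rclone sync remote:source remote:dest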

View File

@@ -1,19 +0,0 @@
package version
import (
"github.com/ncw/rclone/cmd"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "version",
Short: `Show the version number.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 0, command, args)
cmd.ShowVersion()
},
}

31
cross-compile Executable file
View File

@@ -0,0 +1,31 @@
#!/bin/sh
set -e
# This uses gox from https://github.com/mitchellh/gox
# Make sure you've run gox -build-toolchain
if [ "$1" == "" ]; then
echo "Syntax: $0 Version"
exit 1
fi
VERSION="$1"
rm -rf build
gox -output "build/{{.Dir}}-${VERSION}-{{.OS}}-{{.Arch}}/{{.Dir}}"
mv build/rclone-${VERSION}-darwin-amd64 build/rclone-${VERSION}-osx-amd64
mv build/rclone-${VERSION}-darwin-386 build/rclone-${VERSION}-osx-386
cd build
for d in `ls`; do
cp -a ../README.txt $d/
cp -a ../README.html $d/
cp -a ../rclone.1 $d/
zip -r9 $d.zip $d
rm -rf $d
done
cd ..
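# A minimal usage sketch - the version string is only an example; as noted
# above, gox and its cross toolchains must be installed first:
#   gox -build-toolchain
#   ./cross-compile 1.02
# This leaves one rclone-<version>-<os>-<arch>.zip per platform in build/.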

View File

@@ -1,796 +0,0 @@
package crypt
import (
"bytes"
"crypto/aes"
gocipher "crypto/cipher"
"crypto/rand"
"encoding/base32"
"fmt"
"io"
"strings"
"sync"
"unicode/utf8"
"github.com/ncw/rclone/crypt/pkcs7"
"github.com/pkg/errors"
"golang.org/x/crypto/nacl/secretbox"
"golang.org/x/crypto/scrypt"
"github.com/rfjakob/eme"
)
// Constants
const (
nameCipherBlockSize = aes.BlockSize
fileMagic = "RCLONE\x00\x00"
fileMagicSize = len(fileMagic)
fileNonceSize = 24
fileHeaderSize = fileMagicSize + fileNonceSize
blockHeaderSize = secretbox.Overhead
blockDataSize = 64 * 1024
blockSize = blockHeaderSize + blockDataSize
encryptedSuffix = ".bin" // when file name encryption is off we add this suffix to make sure the cloud provider doesn't process the file
)
// Errors returned by cipher
var (
ErrorBadDecryptUTF8 = errors.New("bad decryption - utf-8 invalid")
ErrorBadDecryptControlChar = errors.New("bad decryption - contains control chars")
ErrorNotAMultipleOfBlocksize = errors.New("not a multiple of blocksize")
ErrorTooShortAfterDecode = errors.New("too short after base32 decode")
ErrorEncryptedFileTooShort = errors.New("file is too short to be encrypted")
ErrorEncryptedFileBadHeader = errors.New("file has truncated block header")
ErrorEncryptedBadMagic = errors.New("not an encrypted file - bad magic string")
ErrorEncryptedBadBlock = errors.New("failed to authenticate decrypted block - bad password?")
ErrorBadBase32Encoding = errors.New("bad base32 filename encoding")
ErrorFileClosed = errors.New("file already closed")
ErrorNotAnEncryptedFile = errors.New("not an encrypted file - no \"" + encryptedSuffix + "\" suffix")
ErrorBadSeek = errors.New("Seek beyond end of file")
defaultSalt = []byte{0xA8, 0x0D, 0xF4, 0x3A, 0x8F, 0xBD, 0x03, 0x08, 0xA7, 0xCA, 0xB8, 0x3E, 0x58, 0x1F, 0x86, 0xB1}
)
// Global variables
var (
fileMagicBytes = []byte(fileMagic)
)
// ReadSeekCloser is the interface of the read handles
type ReadSeekCloser interface {
io.Reader
io.Seeker
io.Closer
}
// OpenAtOffset opens the file handle at the offset given
type OpenAtOffset func(offset int64) (io.ReadCloser, error)
// Cipher is used to swap out the encryption implementations
type Cipher interface {
// EncryptFileName encrypts a file path
EncryptFileName(string) string
// DecryptFileName decrypts a file path, returns error if decrypt was invalid
DecryptFileName(string) (string, error)
// EncryptDirName encrypts a directory path
EncryptDirName(string) string
// DecryptDirName decrypts a directory path, returns error if decrypt was invalid
DecryptDirName(string) (string, error)
// EncryptData
EncryptData(io.Reader) (io.Reader, error)
// DecryptData
DecryptData(io.ReadCloser) (io.ReadCloser, error)
// DecryptDataSeek decrypt at a given position
DecryptDataSeek(open OpenAtOffset, offset int64) (ReadSeekCloser, error)
// EncryptedSize calculates the size of the data when encrypted
EncryptedSize(int64) int64
// DecryptedSize calculates the size of the data when decrypted
DecryptedSize(int64) (int64, error)
}
// NameEncryptionMode is the type of file name encryption in use
type NameEncryptionMode int
// NameEncryptionMode levels
const (
NameEncryptionOff NameEncryptionMode = iota
NameEncryptionStandard
)
// NewNameEncryptionMode turns a string into a NameEncryptionMode
func NewNameEncryptionMode(s string) (mode NameEncryptionMode, err error) {
s = strings.ToLower(s)
switch s {
case "off":
mode = NameEncryptionOff
case "standard":
mode = NameEncryptionStandard
default:
err = errors.Errorf("Unknown file name encryption mode %q", s)
}
return mode, err
}
// String turns mode into a human readable string
func (mode NameEncryptionMode) String() (out string) {
switch mode {
case NameEncryptionOff:
out = "off"
case NameEncryptionStandard:
out = "standard"
default:
out = fmt.Sprintf("Unknown mode #%d", mode)
}
return out
}
type cipher struct {
dataKey [32]byte // Key for secretbox
nameKey [32]byte // 16,24 or 32 bytes
nameTweak [nameCipherBlockSize]byte // used to tweak the name crypto
block gocipher.Block
mode NameEncryptionMode
buffers sync.Pool // encrypt/decrypt buffers
cryptoRand io.Reader // read crypto random numbers from here
}
// newCipher initialises the cipher. If salt is "" then it uses a built in salt value
func newCipher(mode NameEncryptionMode, password, salt string) (*cipher, error) {
c := &cipher{
mode: mode,
cryptoRand: rand.Reader,
}
c.buffers.New = func() interface{} {
return make([]byte, blockSize)
}
err := c.Key(password, salt)
if err != nil {
return nil, err
}
return c, nil
}
// Key creates all the internal keys from the password passed in using
// scrypt.
//
// If salt is "" we use a fixed salt just to make attackers' lives
// slightly harder than using no salt.
//
// Note that an empty password makes all 0x00 keys, which is used in the
// tests.
func (c *cipher) Key(password, salt string) (err error) {
const keySize = len(c.dataKey) + len(c.nameKey) + len(c.nameTweak)
var saltBytes = defaultSalt
if salt != "" {
saltBytes = []byte(salt)
}
var key []byte
if password == "" {
key = make([]byte, keySize)
} else {
key, err = scrypt.Key([]byte(password), saltBytes, 16384, 8, 1, keySize)
if err != nil {
return err
}
}
copy(c.dataKey[:], key)
copy(c.nameKey[:], key[len(c.dataKey):])
copy(c.nameTweak[:], key[len(c.dataKey)+len(c.nameKey):])
// Key the name cipher
c.block, err = aes.NewCipher(c.nameKey[:])
return err
}
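// As a worked example of the sizes above: keySize is 32 + 32 + 16 = 80
// bytes, so a non empty password is run through scrypt (N=16384, r=8, p=1)
// to produce 80 bytes which are then split, in order, into dataKey,
// nameKey and nameTweak.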
// getBlock gets a block from the pool of size blockSize
func (c *cipher) getBlock() []byte {
return c.buffers.Get().([]byte)
}
// putBlock returns a block to the pool of size blockSize
func (c *cipher) putBlock(buf []byte) {
if len(buf) != blockSize {
panic("bad blocksize returned to pool")
}
c.buffers.Put(buf)
}
// check to see if the byte string is valid with no control characters
// from 0x00 to 0x1F and is a valid UTF-8 string
func checkValidString(buf []byte) error {
for i := range buf {
c := buf[i]
if c >= 0x00 && c < 0x20 || c == 0x7F {
return ErrorBadDecryptControlChar
}
}
if !utf8.Valid(buf) {
return ErrorBadDecryptUTF8
}
return nil
}
// encodeFileName encodes a filename using a modified version of
// standard base32 as described in RFC4648
//
// The standard encoding is modified in two ways
// * it becomes lower case (no-one likes upper case filenames!)
// * we strip the padding character `=`
func encodeFileName(in []byte) string {
encoded := base32.HexEncoding.EncodeToString(in)
encoded = strings.TrimRight(encoded, "=")
return strings.ToLower(encoded)
}
// decodeFileName decodes a filename as encoded by encodeFileName
func decodeFileName(in string) ([]byte, error) {
if strings.HasSuffix(in, "=") {
return nil, ErrorBadBase32Encoding
}
// First figure out how many padding characters to add
roundUpToMultipleOf8 := (len(in) + 7) &^ 7
equals := roundUpToMultipleOf8 - len(in)
in = strings.ToUpper(in) + "========"[:equals]
return base32.HexEncoding.DecodeString(in)
}
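// A minimal round trip sketch, assuming it is placed inside this package
// since both functions are unexported; the single byte input is only an
// example:
//
//   enc := encodeFileName([]byte{0x01}) // 2 lower case chars, no '=' padding
//   raw, err := decodeFileName(enc)     // pads back up to 8 chars, then decodes
//   // err == nil and raw == []byte{0x01} if the round trip worked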
// encryptSegment encrypts a path segment
//
// This uses EME with AES
//
// EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the
// 2003 paper "A Parallelizable Enciphering Mode" by Halevi and
// Rogaway.
//
// This makes for deterministic encryption which is what we want - the
// same filename must encrypt to the same thing.
//
// This means that
// * filenames with the same name will encrypt the same
// * filenames which start the same won't have a common prefix
func (c *cipher) encryptSegment(plaintext string) string {
if plaintext == "" {
return ""
}
paddedPlaintext := pkcs7.Pad(nameCipherBlockSize, []byte(plaintext))
ciphertext := eme.Transform(c.block, c.nameTweak[:], paddedPlaintext, eme.DirectionEncrypt)
return encodeFileName(ciphertext)
}
// decryptSegment decrypts a path segment
func (c *cipher) decryptSegment(ciphertext string) (string, error) {
if ciphertext == "" {
return "", nil
}
rawCiphertext, err := decodeFileName(ciphertext)
if err != nil {
return "", err
}
if len(rawCiphertext)%nameCipherBlockSize != 0 {
return "", ErrorNotAMultipleOfBlocksize
}
if len(rawCiphertext) == 0 {
// not possible if decodeFileName() is working correctly
return "", ErrorTooShortAfterDecode
}
paddedPlaintext := eme.Transform(c.block, c.nameTweak[:], rawCiphertext, eme.DirectionDecrypt)
plaintext, err := pkcs7.Unpad(nameCipherBlockSize, paddedPlaintext)
if err != nil {
return "", err
}
err = checkValidString(plaintext)
if err != nil {
return "", err
}
return string(plaintext), err
}
// encryptFileName encrypts a file path
func (c *cipher) encryptFileName(in string) string {
segments := strings.Split(in, "/")
for i := range segments {
segments[i] = c.encryptSegment(segments[i])
}
return strings.Join(segments, "/")
}
// EncryptFileName encrypts a file path
func (c *cipher) EncryptFileName(in string) string {
if c.mode == NameEncryptionOff {
return in + encryptedSuffix
}
return c.encryptFileName(in)
}
// EncryptDirName encrypts a directory path
func (c *cipher) EncryptDirName(in string) string {
if c.mode == NameEncryptionOff {
return in
}
return c.encryptFileName(in)
}
// decryptFileName decrypts a file path
func (c *cipher) decryptFileName(in string) (string, error) {
segments := strings.Split(in, "/")
for i := range segments {
var err error
segments[i], err = c.decryptSegment(segments[i])
if err != nil {
return "", err
}
}
return strings.Join(segments, "/"), nil
}
// DecryptFileName decrypts a file path
func (c *cipher) DecryptFileName(in string) (string, error) {
if c.mode == NameEncryptionOff {
remainingLength := len(in) - len(encryptedSuffix)
if remainingLength > 0 && strings.HasSuffix(in, encryptedSuffix) {
return in[:remainingLength], nil
}
return "", ErrorNotAnEncryptedFile
}
return c.decryptFileName(in)
}
// DecryptDirName decrypts a directory path
func (c *cipher) DecryptDirName(in string) (string, error) {
if c.mode == NameEncryptionOff {
return in, nil
}
return c.decryptFileName(in)
}
// nonce is an NACL secretbox nonce
type nonce [fileNonceSize]byte
// pointer returns the nonce as a *[24]byte for secretbox
func (n *nonce) pointer() *[fileNonceSize]byte {
return (*[fileNonceSize]byte)(n)
}
// fromReader fills the nonce from an io.Reader - normally the OSes
// crypto random number generator
func (n *nonce) fromReader(in io.Reader) error {
read, err := io.ReadFull(in, (*n)[:])
if read != fileNonceSize {
return errors.Wrap(err, "short read of nonce")
}
return nil
}
// fromBuf fills the nonce from the buffer passed in
func (n *nonce) fromBuf(buf []byte) {
read := copy((*n)[:], buf)
if read != fileNonceSize {
panic("buffer to short to read nonce")
}
}
// carry 1 up the nonce from position i
func (n *nonce) carry(i int) {
for ; i < len(*n); i++ {
digit := (*n)[i]
newDigit := digit + 1
(*n)[i] = newDigit
if newDigit >= digit {
// exit if no carry
break
}
}
}
// increment to add 1 to the nonce
func (n *nonce) increment() {
n.carry(0)
}
// add an uint64 to the nonce
func (n *nonce) add(x uint64) {
carry := uint16(0)
for i := 0; i < 8; i++ {
digit := (*n)[i]
xDigit := byte(x)
x >>= 8
carry += uint16(digit) + uint16(xDigit)
(*n)[i] = byte(carry)
carry >>= 8
}
if carry != 0 {
n.carry(8)
}
}
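// The nonce is effectively a 192 bit little endian counter: for example a
// nonce starting {0xff, 0xff, 0x00, ...} becomes {0x00, 0x00, 0x01, ...}
// after increment(), and add(n) adds n to the counter in the same little
// endian way (used by Seek below to skip over whole blocks).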
// encrypter encrypts an io.Reader on the fly
type encrypter struct {
mu sync.Mutex
in io.Reader
c *cipher
nonce nonce
buf []byte
readBuf []byte
bufIndex int
bufSize int
err error
}
// newEncrypter creates a new file handle encrypting on the fly
func (c *cipher) newEncrypter(in io.Reader, nonce *nonce) (*encrypter, error) {
fh := &encrypter{
in: in,
c: c,
buf: c.getBlock(),
readBuf: c.getBlock(),
bufSize: fileHeaderSize,
}
// Initialise nonce
if nonce != nil {
fh.nonce = *nonce
} else {
err := fh.nonce.fromReader(c.cryptoRand)
if err != nil {
return nil, err
}
}
// Copy magic into buffer
copy(fh.buf, fileMagicBytes)
// Copy nonce into buffer
copy(fh.buf[fileMagicSize:], fh.nonce[:])
return fh, nil
}
// Read as per io.Reader
func (fh *encrypter) Read(p []byte) (n int, err error) {
fh.mu.Lock()
defer fh.mu.Unlock()
if fh.err != nil {
return 0, fh.err
}
if fh.bufIndex >= fh.bufSize {
// Read data
// FIXME should overlap the reads with a go-routine and 2 buffers?
readBuf := fh.readBuf[:blockDataSize]
n, err = io.ReadFull(fh.in, readBuf)
if n == 0 {
// err can't be nil since:
// n == len(buf) if and only if err == nil.
return fh.finish(err)
}
// possibly err != nil here, but we will process the
// data and the next call to ReadFull will return 0, err
// Write nonce to start of block
copy(fh.buf, fh.nonce[:])
// Encrypt the block using the nonce
block := fh.buf
secretbox.Seal(block[:0], readBuf[:n], fh.nonce.pointer(), &fh.c.dataKey)
fh.bufIndex = 0
fh.bufSize = blockHeaderSize + n
fh.nonce.increment()
}
n = copy(p, fh.buf[fh.bufIndex:fh.bufSize])
fh.bufIndex += n
return n, nil
}
// finish sets the final error and tidies up
func (fh *encrypter) finish(err error) (int, error) {
if fh.err != nil {
return 0, fh.err
}
fh.err = err
fh.c.putBlock(fh.buf)
fh.buf = nil
fh.c.putBlock(fh.readBuf)
fh.readBuf = nil
return 0, err
}
// EncryptData encrypts the data stream
func (c *cipher) EncryptData(in io.Reader) (io.Reader, error) {
out, err := c.newEncrypter(in, nil)
if err != nil {
return nil, err
}
return out, nil
}
// decrypter decrypts an io.ReaderCloser on the fly
type decrypter struct {
mu sync.Mutex
rc io.ReadCloser
nonce nonce
initialNonce nonce
c *cipher
buf []byte
readBuf []byte
bufIndex int
bufSize int
err error
open OpenAtOffset
}
// newDecrypter creates a new file handle decrypting on the fly
func (c *cipher) newDecrypter(rc io.ReadCloser) (*decrypter, error) {
fh := &decrypter{
rc: rc,
c: c,
buf: c.getBlock(),
readBuf: c.getBlock(),
}
// Read file header (magic + nonce)
readBuf := fh.readBuf[:fileHeaderSize]
_, err := io.ReadFull(fh.rc, readBuf)
if err == io.EOF || err == io.ErrUnexpectedEOF {
// This read from 0..fileHeaderSize-1 bytes
return nil, fh.finishAndClose(ErrorEncryptedFileTooShort)
} else if err != nil {
return nil, fh.finishAndClose(err)
}
// check the magic
if !bytes.Equal(readBuf[:fileMagicSize], fileMagicBytes) {
return nil, fh.finishAndClose(ErrorEncryptedBadMagic)
}
// retrieve the nonce
fh.nonce.fromBuf(readBuf[fileMagicSize:])
fh.initialNonce = fh.nonce
return fh, nil
}
// newDecrypterSeek creates a new file handle decrypting on the fly
func (c *cipher) newDecrypterSeek(open OpenAtOffset, offset int64) (fh *decrypter, err error) {
// Open initially with no seek
rc, err := open(0)
if err != nil {
return nil, err
}
// Open the stream which fills in the nonce
fh, err = c.newDecrypter(rc)
if err != nil {
return nil, err
}
fh.open = open // will be called by fh.Seek
if offset != 0 {
_, err = fh.Seek(offset, 0)
if err != nil {
_ = fh.Close()
return nil, err
}
}
return fh, nil
}
// read data into internal buffer - call with fh.mu held
func (fh *decrypter) fillBuffer() (err error) {
// FIXME should overlap the reads with a go-routine and 2 buffers?
readBuf := fh.readBuf
n, err := io.ReadFull(fh.rc, readBuf)
if n == 0 {
// err can't be nil since:
// n == len(buf) if and only if err == nil.
return err
}
// possibly err != nil here, but we will process the data and
// the next call to ReadFull will return 0, err
// Check header + 1 byte exists
if n <= blockHeaderSize {
if err != nil {
return err // return pending error as it is likely more accurate
}
return ErrorEncryptedFileBadHeader
}
// Decrypt the block using the nonce
block := fh.buf
_, ok := secretbox.Open(block[:0], readBuf[:n], fh.nonce.pointer(), &fh.c.dataKey)
if !ok {
if err != nil {
return err // return pending error as it is likely more accurate
}
return ErrorEncryptedBadBlock
}
fh.bufIndex = 0
fh.bufSize = n - blockHeaderSize
fh.nonce.increment()
return nil
}
// Read as per io.Reader
func (fh *decrypter) Read(p []byte) (n int, err error) {
fh.mu.Lock()
defer fh.mu.Unlock()
if fh.err != nil {
return 0, fh.err
}
if fh.bufIndex >= fh.bufSize {
err = fh.fillBuffer()
if err != nil {
return 0, fh.finish(err)
}
}
n = copy(p, fh.buf[fh.bufIndex:fh.bufSize])
fh.bufIndex += n
return n, nil
}
// Seek as per io.Seeker
func (fh *decrypter) Seek(offset int64, whence int) (int64, error) {
fh.mu.Lock()
defer fh.mu.Unlock()
if fh.open == nil {
return 0, fh.finish(errors.New("can't seek - not initialised with newDecrypterSeek"))
}
if whence != 0 {
return 0, fh.finish(errors.New("can only seek from the start"))
}
// Reset error or return it if not EOF
if fh.err == io.EOF {
fh.unFinish()
} else if fh.err != nil {
return 0, fh.err
}
// blocks we need to seek, plus bytes we need to discard
blocks, discard := offset/blockDataSize, offset%blockDataSize
// Offset in underlying stream we need to seek
underlyingOffset := int64(fileHeaderSize) + blocks*(blockHeaderSize+blockDataSize)
// Move the nonce on the correct number of blocks from the start
fh.nonce = fh.initialNonce
fh.nonce.add(uint64(blocks))
// Can we seek underlying stream directly?
if do, ok := fh.rc.(io.Seeker); ok {
// Seek underlying stream directly
_, err := do.Seek(underlyingOffset, 0)
if err != nil {
return 0, fh.finish(err)
}
} else {
// if not reopen with seek
_ = fh.rc.Close() // close underlying file
fh.rc = nil
// Re-open the underlying object with the offset given
rc, err := fh.open(underlyingOffset)
if err != nil {
return 0, fh.finish(errors.Wrap(err, "couldn't reopen file with offset"))
}
// Set the file handle
fh.rc = rc
}
// Fill the buffer
err := fh.fillBuffer()
if err != nil {
return 0, fh.finish(err)
}
// Discard bytes from the buffer
if int(discard) > fh.bufSize {
return 0, fh.finish(ErrorBadSeek)
}
fh.bufIndex = int(discard)
return offset, nil
}
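// A worked example of the arithmetic above, using the constants from this
// file (64 KiB of data per block, 16 byte block header, 32 byte file
// header): seeking to plaintext offset 70000 gives blocks = 1 and
// discard = 4464, so the underlying stream is opened at
// 32 + 1*(16+65536) = 65584 and the first 4464 decrypted bytes of that
// block are thrown away.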
// finish sets the final error and tidies up
func (fh *decrypter) finish(err error) error {
if fh.err != nil {
return fh.err
}
fh.err = err
fh.c.putBlock(fh.buf)
fh.buf = nil
fh.c.putBlock(fh.readBuf)
fh.readBuf = nil
return err
}
// unFinish undoes the effects of finish
func (fh *decrypter) unFinish() {
// Clear error
fh.err = nil
// reinstate the buffers
fh.buf = fh.c.getBlock()
fh.readBuf = fh.c.getBlock()
// Empty the buffer
fh.bufIndex = 0
fh.bufSize = 0
}
// Close
func (fh *decrypter) Close() error {
fh.mu.Lock()
defer fh.mu.Unlock()
// Check already closed
if fh.err == ErrorFileClosed {
return fh.err
}
// Closed before reading EOF so not finish()ed yet
if fh.err == nil {
_ = fh.finish(io.EOF)
}
// Show file now closed
fh.err = ErrorFileClosed
if fh.rc == nil {
return nil
}
return fh.rc.Close()
}
// finishAndClose does finish then Close()
//
// Used when we are returning a nil fh from new
func (fh *decrypter) finishAndClose(err error) error {
_ = fh.finish(err)
_ = fh.Close()
return err
}
// DecryptData decrypts the data stream
func (c *cipher) DecryptData(rc io.ReadCloser) (io.ReadCloser, error) {
out, err := c.newDecrypter(rc)
if err != nil {
return nil, err
}
return out, nil
}
// DecryptDataSeek decrypts the data stream from offset
//
// The open function must return a ReadCloser opened to the offset supplied
//
// You must use this form of DecryptData if you might want to Seek the file handle
func (c *cipher) DecryptDataSeek(open OpenAtOffset, offset int64) (ReadSeekCloser, error) {
out, err := c.newDecrypterSeek(open, offset)
if err != nil {
return nil, err
}
return out, nil
}
// EncryptedSize calculates the size of the data when encrypted
func (c *cipher) EncryptedSize(size int64) int64 {
blocks, residue := size/blockDataSize, size%blockDataSize
encryptedSize := int64(fileHeaderSize) + blocks*(blockHeaderSize+blockDataSize)
if residue != 0 {
encryptedSize += blockHeaderSize + residue
}
return encryptedSize
}
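// Worked examples with the constants above (32 byte file header, 16 byte
// block header, 64 KiB of data per block): a 1 byte plaintext encrypts to
// 32 + (16 + 1) = 49 bytes, and a 65537 byte plaintext encrypts to
// 32 + (16 + 65536) + (16 + 1) = 65601 bytes.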
// DecryptedSize calculates the size of the data when decrypted
func (c *cipher) DecryptedSize(size int64) (int64, error) {
size -= int64(fileHeaderSize)
if size < 0 {
return 0, ErrorEncryptedFileTooShort
}
blocks, residue := size/blockSize, size%blockSize
decryptedSize := blocks * blockDataSize
if residue != 0 {
residue -= blockHeaderSize
if residue <= 0 {
return 0, ErrorEncryptedFileBadHeader
}
}
decryptedSize += residue
return decryptedSize, nil
}
// check interfaces
var (
_ Cipher = (*cipher)(nil)
_ io.ReadCloser = (*decrypter)(nil)
_ io.Reader = (*encrypter)(nil)
)

File diff suppressed because it is too large

View File

@@ -1,603 +0,0 @@
// Package crypt provides wrappers for Fs and Object which implement encryption
package crypt
import (
"fmt"
"io"
"path"
"strings"
"sync"
"github.com/ncw/rclone/fs"
"github.com/pkg/errors"
)
// Globals
var (
// Flags
cryptShowMapping = fs.BoolP("crypt-show-mapping", "", false, "For all files listed show how the names encrypt.")
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "crypt",
Description: "Encrypt/Decrypt a remote",
NewFs: NewFs,
Options: []fs.Option{{
Name: "remote",
Help: "Remote to encrypt/decrypt.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
}, {
Name: "filename_encryption",
Help: "How to encrypt the filenames.",
Examples: []fs.OptionExample{
{
Value: "off",
Help: "Don't encrypt the file names. Adds a \".bin\" extension only.",
}, {
Value: "standard",
Help: "Encrypt the filenames see the docs for the details.",
},
},
}, {
Name: "password",
Help: "Password or pass phrase for encryption.",
IsPassword: true,
}, {
Name: "password2",
Help: "Password or pass phrase for salt. Optional but recommended.\nShould be different to the previous password.",
IsPassword: true,
Optional: true,
}},
})
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, rpath string) (fs.Fs, error) {
mode, err := NewNameEncryptionMode(fs.ConfigFileGet(name, "filename_encryption", "standard"))
if err != nil {
return nil, err
}
password := fs.ConfigFileGet(name, "password", "")
if password == "" {
return nil, errors.New("password not set in config file")
}
password, err = fs.Reveal(password)
if err != nil {
return nil, errors.Wrap(err, "failed to decrypt password")
}
salt := fs.ConfigFileGet(name, "password2", "")
if salt != "" {
salt, err = fs.Reveal(salt)
if err != nil {
return nil, errors.Wrap(err, "failed to decrypt password2")
}
}
cipher, err := newCipher(mode, password, salt)
if err != nil {
return nil, errors.Wrap(err, "failed to make cipher")
}
remote := fs.ConfigFileGet(name, "remote")
if strings.HasPrefix(remote, name+":") {
return nil, errors.New("can't point crypt remote at itself - check the value of the remote setting")
}
// Look for a file first
remotePath := path.Join(remote, cipher.EncryptFileName(rpath))
wrappedFs, err := fs.NewFs(remotePath)
// if that didn't produce a file, look for a directory
if err != fs.ErrorIsFile {
remotePath = path.Join(remote, cipher.EncryptDirName(rpath))
wrappedFs, err = fs.NewFs(remotePath)
}
if err != fs.ErrorIsFile && err != nil {
return nil, errors.Wrapf(err, "failed to make remote %q to wrap", remotePath)
}
f := &Fs{
Fs: wrappedFs,
name: name,
root: rpath,
cipher: cipher,
mode: mode,
}
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
f.features = (&fs.Features{
CaseInsensitive: mode == NameEncryptionOff,
DuplicateFiles: true,
ReadMimeType: false, // MimeTypes not supported with crypt
WriteMimeType: false,
}).Fill(f).Mask(wrappedFs)
return f, err
}
// Fs represents a wrapped fs.Fs
type Fs struct {
fs.Fs
name string
root string
features *fs.Features // optional features
cipher Cipher
mode NameEncryptionMode
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// String returns a description of the FS
func (f *Fs) String() string {
return fmt.Sprintf("Encrypted %s", f.Fs.String())
}
// List the Fs into a channel
func (f *Fs) List(opts fs.ListOpts, dir string) {
f.Fs.List(f.newListOpts(opts, dir), f.cipher.EncryptDirName(dir))
}
// NewObject finds the Object at remote.
func (f *Fs) NewObject(remote string) (fs.Object, error) {
o, err := f.Fs.NewObject(f.cipher.EncryptFileName(remote))
if err != nil {
return nil, err
}
return f.newObject(o), nil
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo) (fs.Object, error) {
wrappedIn, err := f.cipher.EncryptData(in)
if err != nil {
return nil, err
}
o, err := f.Fs.Put(wrappedIn, f.newObjectInfo(src))
if err != nil {
return nil, err
}
return f.newObject(o), nil
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() fs.HashSet {
return fs.HashSet(fs.HashNone)
}
// Mkdir makes the directory (container, bucket)
//
// Shouldn't return an error if it already exists
func (f *Fs) Mkdir(dir string) error {
return f.Fs.Mkdir(f.cipher.EncryptDirName(dir))
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(dir string) error {
return f.Fs.Rmdir(f.cipher.EncryptDirName(dir))
}
// Purge all files in the root and the root directory
//
// Implement this if you have a way of deleting all the files
// quicker than just running Remove() on the result of List()
//
// Return an error if it doesn't exist
func (f *Fs) Purge() error {
do := f.Fs.Features().Purge
if do == nil {
return fs.ErrorCantPurge
}
return do()
}
// Copy src to this remote using server side copy operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
do := f.Fs.Features().Copy
if do == nil {
return nil, fs.ErrorCantCopy
}
o, ok := src.(*Object)
if !ok {
return nil, fs.ErrorCantCopy
}
oResult, err := do(o.Object, f.cipher.EncryptFileName(remote))
if err != nil {
return nil, err
}
return f.newObject(oResult), nil
}
// Move src to this remote using server side move operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
do := f.Fs.Features().Move
if do == nil {
return nil, fs.ErrorCantMove
}
o, ok := src.(*Object)
if !ok {
return nil, fs.ErrorCantMove
}
oResult, err := do(o.Object, f.cipher.EncryptFileName(remote))
if err != nil {
return nil, err
}
return f.newObject(oResult), nil
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
do := f.Fs.Features().DirMove
if do == nil {
return fs.ErrorCantDirMove
}
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
return do(srcFs.Fs, f.cipher.EncryptDirName(srcRemote), f.cipher.EncryptDirName(dstRemote))
}
// PutUnchecked uploads the object
//
// This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that.
func (f *Fs) PutUnchecked(in io.Reader, src fs.ObjectInfo) (fs.Object, error) {
do := f.Fs.Features().PutUnchecked
if do == nil {
return nil, errors.New("can't PutUnchecked")
}
wrappedIn, err := f.cipher.EncryptData(in)
if err != nil {
return nil, err
}
o, err := do(wrappedIn, f.newObjectInfo(src))
if err != nil {
return nil, err
}
return f.newObject(o), nil
}
// CleanUp the trash in the Fs
//
// Implement this if you have a way of emptying the trash or
// otherwise cleaning up old versions of files.
func (f *Fs) CleanUp() error {
do := f.Fs.Features().CleanUp
if do == nil {
return errors.New("can't CleanUp")
}
return do()
}
// UnWrap returns the Fs that this Fs is wrapping
func (f *Fs) UnWrap() fs.Fs {
return f.Fs
}
// ComputeHash takes the nonce from o, and encrypts the contents of
// src with it, and calculates the hash given by HashType on the fly
//
// Note that we break lots of encapsulation in this function.
func (f *Fs) ComputeHash(o *Object, src fs.Object, hashType fs.HashType) (hash string, err error) {
// Read the nonce - opening the file is sufficient to read the nonce in
in, err := o.Open()
if err != nil {
return "", errors.Wrap(err, "failed to read nonce")
}
nonce := in.(*decrypter).nonce
// fs.Debugf(o, "Read nonce % 2x", nonce)
// Check nonce isn't all zeros
isZero := true
for i := range nonce {
if nonce[i] != 0 {
isZero = false
}
}
if isZero {
fs.Errorf(o, "empty nonce read")
}
// Close in once we have read the nonce
err = in.Close()
if err != nil {
return "", errors.Wrap(err, "failed to close nonce read")
}
// Open the src for input
in, err = src.Open()
if err != nil {
return "", errors.Wrap(err, "failed to open src")
}
defer fs.CheckClose(in, &err)
// Now encrypt the src with the nonce
out, err := f.cipher.(*cipher).newEncrypter(in, &nonce)
if err != nil {
return "", errors.Wrap(err, "failed to make encrypter")
}
// pipe into hash
m := fs.NewMultiHasher()
_, err = io.Copy(m, out)
if err != nil {
return "", errors.Wrap(err, "failed to hash data")
}
return m.Sums()[hashType], nil
}
// Object describes a wrapped fs.Object being read from the Fs
//
// This decrypts the remote name and decrypts the data
type Object struct {
fs.Object
f *Fs
}
func (f *Fs) newObject(o fs.Object) *Object {
return &Object{
Object: o,
f: f,
}
}
// Fs returns read only access to the Fs that this object is part of
func (o *Object) Fs() fs.Info {
return o.f
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.Remote()
}
// Remote returns the remote path
func (o *Object) Remote() string {
remote := o.Object.Remote()
decryptedName, err := o.f.cipher.DecryptFileName(remote)
if err != nil {
fs.Debugf(remote, "Undecryptable file name: %v", err)
return remote
}
return decryptedName
}
// Size returns the size of the file
func (o *Object) Size() int64 {
size, err := o.f.cipher.DecryptedSize(o.Object.Size())
if err != nil {
fs.Debugf(o, "Bad size for decrypt: %v", err)
}
return size
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(hash fs.HashType) (string, error) {
return "", nil
}
// UnWrap returns the wrapped Object
func (o *Object) UnWrap() fs.Object {
return o.Object
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(options ...fs.OpenOption) (rc io.ReadCloser, err error) {
var offset int64
for _, option := range options {
switch x := option.(type) {
case *fs.SeekOption:
offset = x.Offset
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
rc, err = o.f.cipher.DecryptDataSeek(func(underlyingOffset int64) (io.ReadCloser, error) {
if underlyingOffset == 0 {
// Open with no seek
return o.Object.Open()
}
// Open stream with a seek of underlyingOffset
return o.Object.Open(&fs.SeekOption{Offset: underlyingOffset})
}, offset)
if err != nil {
return nil, err
}
return rc, err
}
// Update in to the object with the modTime given of the given size
func (o *Object) Update(in io.Reader, src fs.ObjectInfo) error {
wrappedIn, err := o.f.cipher.EncryptData(in)
if err != nil {
return err
}
return o.Object.Update(wrappedIn, o.f.newObjectInfo(src))
}
// newDir returns a dir with the Name decrypted
func (f *Fs) newDir(dir *fs.Dir) *fs.Dir {
new := *dir
remote := dir.Name
decryptedRemote, err := f.cipher.DecryptDirName(remote)
if err != nil {
fs.Debugf(remote, "Undecryptable dir name: %v", err)
} else {
new.Name = decryptedRemote
}
return &new
}
// ObjectInfo describes a wrapped fs.ObjectInfo for being the source
//
// This encrypts the remote name and adjusts the size
type ObjectInfo struct {
fs.ObjectInfo
f *Fs
}
func (f *Fs) newObjectInfo(src fs.ObjectInfo) *ObjectInfo {
return &ObjectInfo{
ObjectInfo: src,
f: f,
}
}
// Fs returns read only access to the Fs that this object is part of
func (o *ObjectInfo) Fs() fs.Info {
return o.f
}
// Remote returns the remote path
func (o *ObjectInfo) Remote() string {
return o.f.cipher.EncryptFileName(o.ObjectInfo.Remote())
}
// Size returns the size of the file
func (o *ObjectInfo) Size() int64 {
return o.f.cipher.EncryptedSize(o.ObjectInfo.Size())
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *ObjectInfo) Hash(hash fs.HashType) (string, error) {
return "", nil
}
// ListOpts wraps a listopts decrypting the directory listing and
// replacing the Objects
type ListOpts struct {
fs.ListOpts
f *Fs
dir string // dir we are listing
mu sync.Mutex // to protect dirs
dirs map[string]struct{} // keep track of synthetic directory objects added
}
// Make a ListOpts wrapper
func (f *Fs) newListOpts(lo fs.ListOpts, dir string) *ListOpts {
if dir != "" {
dir += "/"
}
return &ListOpts{
ListOpts: lo,
f: f,
dir: dir,
dirs: make(map[string]struct{}),
}
}
// Level gets the recursion level for this listing.
//
// Fses may ignore this, but should implement it for improved efficiency if possible.
//
// Level 1 means list just the contents of the directory
//
// Each returned item must have less than level `/`s in.
func (lo *ListOpts) Level() int {
return lo.ListOpts.Level()
}
// Add an object to the output.
// If the function returns true, the operation has been aborted.
// Multiple goroutines can safely add objects concurrently.
func (lo *ListOpts) Add(obj fs.Object) (abort bool) {
remote := obj.Remote()
decryptedRemote, err := lo.f.cipher.DecryptFileName(remote)
if err != nil {
fs.Debugf(remote, "Skipping undecryptable file name: %v", err)
return lo.ListOpts.IsFinished()
}
if *cryptShowMapping {
fs.Logf(decryptedRemote, "Encrypts to %q", remote)
}
return lo.ListOpts.Add(lo.f.newObject(obj))
}
// AddDir adds a directory to the output.
// If the function returns true, the operation has been aborted.
// Multiple goroutines can safely add objects concurrently.
func (lo *ListOpts) AddDir(dir *fs.Dir) (abort bool) {
remote := dir.Name
decryptedRemote, err := lo.f.cipher.DecryptDirName(remote)
if err != nil {
fs.Debugf(remote, "Skipping undecryptable dir name: %v", err)
return lo.ListOpts.IsFinished()
}
if *cryptShowMapping {
fs.Logf(decryptedRemote, "Encrypts to %q", remote)
}
return lo.ListOpts.AddDir(lo.f.newDir(dir))
}
// IncludeDirectory returns whether this directory should be
// included in the listing (and recursed into or not).
func (lo *ListOpts) IncludeDirectory(remote string) bool {
decryptedRemote, err := lo.f.cipher.DecryptDirName(remote)
if err != nil {
fs.Debugf(remote, "Not including undecryptable directory name: %v", err)
return false
}
return lo.ListOpts.IncludeDirectory(decryptedRemote)
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.UnWrapper = (*Fs)(nil)
_ fs.ObjectInfo = (*ObjectInfo)(nil)
_ fs.Object = (*Object)(nil)
_ fs.ListOpts = (*ListOpts)(nil)
)

View File

@@ -1,64 +0,0 @@
// Test Crypt filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package crypt_test
import (
"testing"
"github.com/ncw/rclone/crypt"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
_ "github.com/ncw/rclone/local"
)
func TestSetup2(t *testing.T) {
fstests.NilObject = fs.Object((*crypt.Object)(nil))
fstests.RemoteName = "TestCrypt2:"
}
// Generic tests for the Fs
func TestInit2(t *testing.T) { fstests.TestInit(t) }
func TestFsString2(t *testing.T) { fstests.TestFsString(t) }
func TestFsRmdirEmpty2(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound2(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir2(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir2(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty2(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty2(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsNewObjectNotFound2(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile12(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError2(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile22(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile12(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile22(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListDirRoot2(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListSubdir2(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListLevel22(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListFile12(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject2(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsNewObjectDir2(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsListFile1and22(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsCopy2(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove2(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove2(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull2(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision2(t *testing.T) { fstests.TestFsPrecision(t) }
func TestObjectString2(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs2(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote2(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes2(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime2(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType2(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime2(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize2(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen2(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek2(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectUpdate2(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable2(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile2(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound2(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove2(t *testing.T) { fstests.TestObjectRemove(t) }
func TestObjectPurge2(t *testing.T) { fstests.TestObjectPurge(t) }
func TestFinalise2(t *testing.T) { fstests.TestFinalise(t) }

View File

@@ -1,27 +0,0 @@
package crypt_test
import (
"os"
"path/filepath"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
// Create the TestCrypt: remote
func init() {
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-standard")
name := "TestCrypt"
tempdir2 := filepath.Join(os.TempDir(), "rclone-crypt-test-off")
name2 := name + "2"
fstests.ExtraConfig = []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "crypt"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "password", Value: fs.MustObscure("potato")},
{Name: name, Key: "filename_encryption", Value: "standard"},
{Name: name2, Key: "type", Value: "crypt"},
{Name: name2, Key: "remote", Value: tempdir2},
{Name: name2, Key: "password", Value: fs.MustObscure("potato2")},
{Name: name2, Key: "filename_encryption", Value: "off"},
}
}

View File

@@ -1,64 +0,0 @@
// Test Crypt filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package crypt_test
import (
"testing"
"github.com/ncw/rclone/crypt"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
_ "github.com/ncw/rclone/local"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*crypt.Object)(nil))
fstests.RemoteName = "TestCrypt:"
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }

View File

@@ -1,63 +0,0 @@
// Package pkcs7 implements PKCS#7 padding
//
// This is a standard way of encoding variable length buffers into
// buffers which are a multiple of an underlying crypto block size.
package pkcs7
import "github.com/pkg/errors"
// Errors Unpad can return
var (
ErrorPaddingNotFound = errors.New("Bad PKCS#7 padding - not padded")
ErrorPaddingNotAMultiple = errors.New("Bad PKCS#7 padding - not a multiple of blocksize")
ErrorPaddingTooLong = errors.New("Bad PKCS#7 padding - too long")
ErrorPaddingTooShort = errors.New("Bad PKCS#7 padding - too short")
ErrorPaddingNotAllTheSame = errors.New("Bad PKCS#7 padding - not all the same")
)
// Pad buf using PKCS#7 to a multiple of n.
//
// Appends the padding to buf - make a copy of it first if you don't
// want it modified.
func Pad(n int, buf []byte) []byte {
if n <= 1 || n >= 256 {
panic("bad multiple")
}
length := len(buf)
padding := n - (length % n)
for i := 0; i < padding; i++ {
buf = append(buf, byte(padding))
}
if (len(buf) % n) != 0 {
panic("padding failed")
}
return buf
}
// Unpad buf using PKCS#7 from a multiple of n returning a slice of
// buf or an error if malformed.
func Unpad(n int, buf []byte) ([]byte, error) {
if n <= 1 || n >= 256 {
panic("bad multiple")
}
length := len(buf)
if length == 0 {
return nil, ErrorPaddingNotFound
}
if (length % n) != 0 {
return nil, ErrorPaddingNotAMultiple
}
padding := int(buf[length-1])
if padding > n {
return nil, ErrorPaddingTooLong
}
if padding == 0 {
return nil, ErrorPaddingTooShort
}
for i := 0; i < padding; i++ {
if buf[length-1-i] != byte(padding) {
return nil, ErrorPaddingNotAllTheSame
}
}
return buf[:length-padding], nil
}
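As a quick illustration of the round trip, here is a minimal sketch of using `Pad` and `Unpad` (the `crypt/pkcs7` import path and the buffer contents are assumptions for illustration only):
```
package main

import (
	"fmt"

	"github.com/ncw/rclone/crypt/pkcs7" // assumed import path for the package above
)

func main() {
	// Pad "hello" (5 bytes) up to the next multiple of 8 - three 0x03 bytes are appended.
	padded := pkcs7.Pad(8, []byte("hello"))
	fmt.Printf("%q\n", padded) // "hello\x03\x03\x03"

	// Unpad strips the padding again, returning an error if it is malformed.
	original, err := pkcs7.Unpad(8, padded)
	fmt.Printf("%q %v\n", original, err) // "hello" <nil>
}
```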

View File

@@ -1,73 +0,0 @@
package pkcs7
import (
"fmt"
"testing"
"github.com/stretchr/testify/assert"
)
func TestPad(t *testing.T) {
for _, test := range []struct {
n int
in string
expected string
}{
{8, "", "\x08\x08\x08\x08\x08\x08\x08\x08"},
{8, "1", "1\x07\x07\x07\x07\x07\x07\x07"},
{8, "12", "12\x06\x06\x06\x06\x06\x06"},
{8, "123", "123\x05\x05\x05\x05\x05"},
{8, "1234", "1234\x04\x04\x04\x04"},
{8, "12345", "12345\x03\x03\x03"},
{8, "123456", "123456\x02\x02"},
{8, "1234567", "1234567\x01"},
{8, "abcdefgh", "abcdefgh\x08\x08\x08\x08\x08\x08\x08\x08"},
{8, "abcdefgh1", "abcdefgh1\x07\x07\x07\x07\x07\x07\x07"},
{8, "abcdefgh12", "abcdefgh12\x06\x06\x06\x06\x06\x06"},
{8, "abcdefgh123", "abcdefgh123\x05\x05\x05\x05\x05"},
{8, "abcdefgh1234", "abcdefgh1234\x04\x04\x04\x04"},
{8, "abcdefgh12345", "abcdefgh12345\x03\x03\x03"},
{8, "abcdefgh123456", "abcdefgh123456\x02\x02"},
{8, "abcdefgh1234567", "abcdefgh1234567\x01"},
{8, "abcdefgh12345678", "abcdefgh12345678\x08\x08\x08\x08\x08\x08\x08\x08"},
{16, "", "\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10"},
{16, "a", "a\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f"},
} {
actual := Pad(test.n, []byte(test.in))
assert.Equal(t, test.expected, string(actual), fmt.Sprintf("Pad %d %q", test.n, test.in))
recovered, err := Unpad(test.n, actual)
assert.NoError(t, err)
assert.Equal(t, []byte(test.in), recovered, fmt.Sprintf("Unpad %d %q", test.n, test.in))
}
assert.Panics(t, func() { Pad(1, []byte("")) }, "bad multiple")
assert.Panics(t, func() { Pad(256, []byte("")) }, "bad multiple")
}
func TestUnpad(t *testing.T) {
// We've tested the OK decoding in TestPad, now test the error cases
for _, test := range []struct {
n int
in string
err error
}{
{8, "", ErrorPaddingNotFound},
{8, "1", ErrorPaddingNotAMultiple},
{8, "12", ErrorPaddingNotAMultiple},
{8, "123", ErrorPaddingNotAMultiple},
{8, "1234", ErrorPaddingNotAMultiple},
{8, "12345", ErrorPaddingNotAMultiple},
{8, "123456", ErrorPaddingNotAMultiple},
{8, "1234567", ErrorPaddingNotAMultiple},
{8, "1234567\xFF", ErrorPaddingTooLong},
{8, "1234567\x09", ErrorPaddingTooLong},
{8, "1234567\x00", ErrorPaddingTooShort},
{8, "123456\x01\x02", ErrorPaddingNotAllTheSame},
{8, "\x07\x08\x08\x08\x08\x08\x08\x08", ErrorPaddingNotAllTheSame},
} {
result, actualErr := Unpad(test.n, []byte(test.in))
assert.Equal(t, test.err, actualErr, fmt.Sprintf("Unpad %d %q", test.n, test.in))
assert.Equal(t, result, []byte(nil))
}
assert.Panics(t, func() { _, _ = Unpad(1, []byte("")) }, "bad multiple")
assert.Panics(t, func() { _, _ = Unpad(256, []byte("")) }, "bad multiple")
}

View File

@@ -1,307 +0,0 @@
// Package dircache provides a simple cache for caching directory to path lookups
package dircache
// _methods are called without the lock
import (
"log"
"strings"
"sync"
"github.com/ncw/rclone/fs"
"github.com/pkg/errors"
)
// DirCache caches paths to directory IDs and vice versa
type DirCache struct {
cacheMu sync.RWMutex
cache map[string]string
invCache map[string]string
mu sync.Mutex
fs DirCacher // Interface to find and make stuff
trueRootID string // ID of the absolute root
root string // the path we are working on
rootID string // ID of the root directory
rootParentID string // ID of the root's parent directory
foundRoot bool // Whether we have found the root or not
}
// DirCacher describes an interface for doing the low level directory work
type DirCacher interface {
FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err error)
CreateDir(pathID, leaf string) (newID string, err error)
}
// New makes a DirCache
//
// The cache is safe for concurrent use
func New(root string, trueRootID string, fs DirCacher) *DirCache {
d := &DirCache{
trueRootID: trueRootID,
root: root,
fs: fs,
}
d.Flush()
d.ResetRoot()
return d
}
// Get an ID given a path
func (dc *DirCache) Get(path string) (id string, ok bool) {
dc.cacheMu.RLock()
id, ok = dc.cache[path]
dc.cacheMu.RUnlock()
return
}
// GetInv gets a path given an ID
func (dc *DirCache) GetInv(id string) (path string, ok bool) {
dc.cacheMu.RLock()
path, ok = dc.invCache[id]
dc.cacheMu.RUnlock()
return
}
// Put a path, id into the map
func (dc *DirCache) Put(path, id string) {
dc.cacheMu.Lock()
dc.cache[path] = id
dc.invCache[id] = path
dc.cacheMu.Unlock()
}
// Flush the map of all data
func (dc *DirCache) Flush() {
dc.cacheMu.Lock()
dc.cache = make(map[string]string)
dc.invCache = make(map[string]string)
dc.cacheMu.Unlock()
}
// FlushDir flushes the map of all data starting with dir
//
// If dir is empty then this is equivalent to calling ResetRoot
func (dc *DirCache) FlushDir(dir string) {
if dir == "" {
dc.ResetRoot()
return
}
dc.cacheMu.Lock()
// Delete the root dir
ID, ok := dc.cache[dir]
if ok {
delete(dc.cache, dir)
delete(dc.invCache, ID)
}
// And any sub directories
dir += "/"
for key, ID := range dc.cache {
if strings.HasPrefix(key, dir) {
delete(dc.cache, key)
delete(dc.invCache, ID)
}
}
dc.cacheMu.Unlock()
}
// SplitPath splits a path into directory, leaf
//
// Path shouldn't start or end with a /
//
// If there are no slashes then directory will be "" and leaf = path
func SplitPath(path string) (directory, leaf string) {
lastSlash := strings.LastIndex(path, "/")
if lastSlash >= 0 {
directory = path[:lastSlash]
leaf = path[lastSlash+1:]
} else {
directory = ""
leaf = path
}
return
}
// FindDir finds the directory passed in returning the directory ID
// starting from pathID
//
// Path shouldn't start or end with a /
//
// If create is set it will make the directory if not found
//
// Algorithm:
// Look in the cache for the path, if found return the pathID
// If not found strip the last path off the path and recurse
// Now have a parent directory id, so look in the parent for self and return it
func (dc *DirCache) FindDir(path string, create bool) (pathID string, err error) {
dc.mu.Lock()
defer dc.mu.Unlock()
return dc._findDir(path, create)
}
// Look for the root and in the cache - safe to call without the mu
func (dc *DirCache) _findDirInCache(path string) string {
// fmt.Println("Finding",path,"create",create,"cache",cache)
// If it is the root, then return it
if path == "" {
// fmt.Println("Root")
return dc.rootID
}
// If it is in the cache then return it
pathID, ok := dc.Get(path)
if ok {
// fmt.Println("Cache hit on", path)
return pathID
}
return ""
}
// Unlocked findDir - must have mu
func (dc *DirCache) _findDir(path string, create bool) (pathID string, err error) {
pathID = dc._findDirInCache(path)
if pathID != "" {
return pathID, nil
}
// Split the path into directory, leaf
directory, leaf := SplitPath(path)
// Recurse and find pathID for parent directory
parentPathID, err := dc._findDir(directory, create)
if err != nil {
return "", err
}
// Find the leaf in parentPathID
pathID, found, err := dc.fs.FindLeaf(parentPathID, leaf)
if err != nil {
return "", err
}
// If not found create the directory if required or return an error
if !found {
if create {
pathID, err = dc.fs.CreateDir(parentPathID, leaf)
if err != nil {
return "", errors.Wrap(err, "failed to make directory")
}
} else {
return "", fs.ErrorDirNotFound
}
}
// Store the leaf directory in the cache
dc.Put(path, pathID)
// fmt.Println("Dir", path, "is", pathID)
return pathID, nil
}
// FindPath finds the leaf and directoryID from a path
//
// If create is set parent directories will be created if they don't exist
func (dc *DirCache) FindPath(path string, create bool) (leaf, directoryID string, err error) {
dc.mu.Lock()
defer dc.mu.Unlock()
directory, leaf := SplitPath(path)
directoryID, err = dc._findDir(directory, create)
return
}
// FindRoot finds the root directory if not already found
//
// Resets the root directory
//
// If create is set it will make the directory if not found
func (dc *DirCache) FindRoot(create bool) error {
dc.mu.Lock()
defer dc.mu.Unlock()
if dc.foundRoot {
return nil
}
rootID, err := dc._findDir(dc.root, create)
if err != nil {
return err
}
dc.foundRoot = true
dc.rootID = rootID
// Find the parent of the root while we still have the root
// directory tree cached
rootParentPath, _ := SplitPath(dc.root)
dc.rootParentID, _ = dc.Get(rootParentPath)
// Reset the tree based on dc.root
dc.Flush()
// Put the root directory in
dc.Put("", dc.rootID)
return nil
}
// FindRootAndPath finds the root first if not found then finds leaf and directoryID from a path
//
// If create is set parent directories will be created if they don't exist
func (dc *DirCache) FindRootAndPath(path string, create bool) (leaf, directoryID string, err error) {
err = dc.FindRoot(create)
if err != nil {
return
}
return dc.FindPath(path, create)
}
// FoundRoot returns whether the root directory has been found yet
//
// Call this from FindLeaf or CreateDir only
func (dc *DirCache) FoundRoot() bool {
return dc.foundRoot
}
// RootID returns the ID of the root directory
//
// This should be called after FindRoot
func (dc *DirCache) RootID() string {
dc.mu.Lock()
defer dc.mu.Unlock()
if !dc.foundRoot {
log.Fatalf("Internal Error: RootID() called before FindRoot")
}
return dc.rootID
}
// RootParentID returns the ID of the parent of the root directory
//
// This should be called after FindRoot
func (dc *DirCache) RootParentID() (string, error) {
dc.mu.Lock()
defer dc.mu.Unlock()
if !dc.foundRoot {
return "", errors.New("internal error: RootID() called before FindRoot")
}
if dc.rootParentID == "" {
return "", errors.New("internal error: didn't find rootParentID")
}
if dc.rootID == dc.trueRootID {
return "", errors.New("is root directory")
}
return dc.rootParentID, nil
}
// ResetRoot resets the root directory to the absolute root and clears
// the DirCache
func (dc *DirCache) ResetRoot() {
dc.mu.Lock()
defer dc.mu.Unlock()
dc.foundRoot = false
dc.Flush()
// Put the true root in
dc.rootID = dc.trueRootID
// Put the root directory in
dc.Put("", dc.rootID)
}
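To make the lookup flow concrete, here is a minimal sketch of a `DirCacher` backed by an in-memory map (the `memFs` type and its IDs are hypothetical, for illustration only), showing how `FindRootAndPath` turns a path into a leaf name and parent directory ID:
```
package main

import (
	"fmt"

	"github.com/ncw/rclone/dircache" // assumed import path for the package above
)

// memFs is a toy DirCacher which hands out sequential IDs for directories.
type memFs struct {
	next     int
	children map[string]map[string]string // parent ID -> leaf name -> ID
}

func (m *memFs) FindLeaf(pathID, leaf string) (string, bool, error) {
	id, ok := m.children[pathID][leaf]
	return id, ok, nil
}

func (m *memFs) CreateDir(pathID, leaf string) (string, error) {
	m.next++
	id := fmt.Sprintf("id-%d", m.next)
	if m.children[pathID] == nil {
		m.children[pathID] = map[string]string{}
	}
	m.children[pathID][leaf] = id
	return id, nil
}

func main() {
	f := &memFs{children: map[string]map[string]string{}}
	dc := dircache.New("", "root-id", f) // remote rooted at "", true root ID "root-id"

	// Creates "a" and "a/b" via CreateDir, caches them, and returns the leaf.
	leaf, dirID, err := dc.FindRootAndPath("a/b/c.txt", true)
	fmt.Println(leaf, dirID, err) // c.txt id-2 <nil>
}
```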

View File

@@ -1,82 +0,0 @@
// Listing utility functions for fses which use dircache
package dircache
import (
"sync"
"github.com/ncw/rclone/fs"
)
// ListDirJob describe a directory listing that needs to be done
type ListDirJob struct {
DirID string
Path string
Depth int
}
// ListDirer describes the interface necessary to use ListDir
type ListDirer interface {
// ListDir reads the directory specified by the job into out, returning any more jobs
ListDir(out fs.ListOpts, job ListDirJob) (jobs []ListDirJob, err error)
}
// listDir lists the directory using a recursive list from the root
//
// It does this in parallel, calling f.ListDir to do the actual reading
func listDir(f ListDirer, out fs.ListOpts, dirID string, path string) {
// Start some directory listing go routines
var wg sync.WaitGroup // sync closing of go routines
var traversing sync.WaitGroup // running directory traversals
buffer := out.Buffer()
in := make(chan ListDirJob, buffer)
for i := 0; i < buffer; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for job := range in {
jobs, err := f.ListDir(out, job)
if err != nil {
out.SetError(err)
fs.Debugf(f, "Error reading %s: %s", path, err)
} else {
traversing.Add(len(jobs))
go func() {
// Now we have traversed this directory, send these
// jobs off for traversal in the background
for _, job := range jobs {
in <- job
}
}()
}
traversing.Done()
}
}()
}
// Start the process
traversing.Add(1)
in <- ListDirJob{DirID: dirID, Path: path, Depth: out.Level() - 1}
traversing.Wait()
close(in)
wg.Wait()
}
// List walks the path returning files and directories into out
func (dc *DirCache) List(f ListDirer, out fs.ListOpts, dir string) {
defer out.Finished()
err := dc.FindRoot(false)
if err != nil {
out.SetError(err)
return
}
id, err := dc.FindDir(dir, false)
if err != nil {
out.SetError(err)
return
}
if dir != "" {
dir += "/"
}
listDir(f, out, id, dir)
}

View File

@@ -1,15 +1,11 @@
{
"indexes": {
"tag": "tags",
"group": "groups",
"menu": "menu"
},
"baseurl": "http://rclone.org",
"title": "rclone - rsync for cloud storage",
"description": "rclone - rsync for cloud storage: google drive, s3, swift, cloudfiles, dropbox, memstore...",
"canonifyurls": false,
"blackfriday": {
"smartDashes": false,
"plainIDAnchors": true
}
}
{
"indexes": {
"tag": "tags",
"group": "groups",
"menu": "menu"
},
"baseurl": "http://rclone.org",
"title": "rclone - rsync for cloud storage",
"description": "rclone - rsync for cloud storage: google drive, s3, swift, cloudfiles, dropbox, memstore...",
"canonifyurls": true
}

View File

@@ -1,8 +1,8 @@
---
title: "Rclone"
description: "rclone syncs files to and from Google Drive, S3, Swift, Cloudfiles, Dropbox, Google Cloud Storage and Amazon Drive."
description: "rclone syncs files to and from Google Drive, S3, Swift, Cloudfiles, Dropbox and Google Cloud Storage."
type: page
date: "2015-09-06"
date: "2014-07-17"
groups: ["about"]
---
@@ -18,30 +18,21 @@ Rclone is a command line program to sync files and directories to and from
* Openstack Swift / Rackspace cloud files / Memset Memstore
* Dropbox
* Google Cloud Storage
* Amazon Drive
* Microsoft One Drive
* Hubic
* Backblaze B2
* Yandex Disk
* SFTP
* The local filesystem
Features
* MD5/SHA1 hashes checked at all times for file integrity
* MD5SUMs checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* [Copy](/commands/rclone_copy/) mode to just copy new/changed files
* [Sync](/commands/rclone_sync/) (one way) mode to make a directory identical
* [Check](/commands/rclone_check/) mode to check for file hash equality
* Can sync to and from network, eg two different cloud accounts
* Optional encryption ([Crypt](/crypt/))
* Optional FUSE mount ([rclone mount](/commands/rclone_mount/))
* Copy mode to just copy new/changed files
* Sync mode to make a directory identical
* Check mode to check all MD5SUMs
* Can sync to and from network, eg two different Drive accounts
Links
* <i class="fa fa-home"></i> [Home page](http://rclone.org/)
* <i class="fa fa-github"></i> [Github project page for source and bug tracker](http://github.com/ncw/rclone)
* <i class="fa fa-comments"></i> [Rclone Forum](https://forum.rclone.org)
* <i class="fa fa-google-plus"></i> <a href="https://google.com/+RcloneOrg" rel="publisher">Google+ page</a>
* <i class="fa fa-google-plus"></i> <a href="https://google.com/+RcloneOrg" rel="publisher">Google+ page</a></li>
* <i class="fa fa-cloud-download"></i>[Downloads](/downloads/)

View File

@@ -1,190 +0,0 @@
---
title: "Amazon Drive"
description: "Rclone docs for Amazon Drive"
date: "2016-07-11"
---
<i class="fa fa-amazon"></i> Amazon Drive
-----------------------------------------
Paths are specified as `remote:path`
Paths may be as deep as required, eg `remote:directory/subdirectory`.
The initial setup for Amazon Drive involves getting a token from
Amazon which you need to do in your browser. `rclone config` walks
you through it.
Here is an example of how to make a remote called `remote`. First run:
rclone config
This will guide you through an interactive setup process:
```
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
5 / Encrypt/Decrypt a remote
\ "crypt"
6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
7 / Google Drive
\ "drive"
8 / Hubic
\ "hubic"
9 / Local Disk
\ "local"
10 / Microsoft OneDrive
\ "onedrive"
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
12 / SSH/SFTP Connection
\ "sftp"
13 / Yandex Disk
\ "yandex"
Storage> 1
Amazon Application Client Id - leave blank normally.
client_id>
Amazon Application Client Secret - leave blank normally.
client_secret>
Remote config
Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id =
client_secret =
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
See the [remote setup docs](/remote_setup/) for how to set it up on a
machine with no Internet browser available.
Note that rclone runs a webserver on your local machine to collect the
token as returned from Amazon. This only runs from the moment it
opens your browser to the moment you get back the verification
code. This is on `http://127.0.0.1:53682/` and it may require
you to unblock it temporarily if you are running a host firewall.
Once configured you can then use `rclone` like this,
List directories in top level of your Amazon Drive
rclone lsd remote:
List all the files in your Amazon Drive
rclone ls remote:
To copy a local directory to an Amazon Drive directory called backup
rclone copy /home/source remote:backup
### Modified time and MD5SUMs ###
Amazon Drive doesn't allow modification times to be changed via
the API so these won't be accurate or used for syncing.
It does store MD5SUMs so for a more accurate sync, you can use the
`--checksum` flag.
### Deleting files ###
Any files you delete with rclone will end up in the trash. Amazon
don't provide an API to permanently delete files, nor to empty the
trash, so you will have to do that with one of Amazon's apps or via
the Amazon Drive website. As of November 17, 2016, files are
automatically deleted by Amazon from the trash after 30 days.
### Using with non `.com` Amazon accounts ###
Let's say you usually use `amazon.co.uk`. When you authenticate with
rclone it will take you to an `amazon.com` page to log in. Your
`amazon.co.uk` email and password should work here just fine.
### Specific options ###
Here are the command line options specific to this cloud storage
system.
#### --acd-templink-threshold=SIZE ####
Files this size or more will be downloaded via their `tempLink`. This
is to work around a problem with Amazon Drive which blocks downloads
of files bigger than about 10GB. The default for this is 9GB which
shouldn't need to be changed.
To download files above this threshold, rclone requests a `tempLink`
which downloads the file through a temporary URL directly from the
underlying S3 storage.
#### --acd-upload-wait-per-gb=TIME ####
Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This
happens sometimes for files over 1GB in size and nearly every time for
files bigger than 10GB. This parameter controls the time rclone waits
for the file to appear.
The default value for this parameter is 3 minutes per GB, so by
default it will wait 3 minutes for every GB uploaded to see if the
file appears.
You can disable this feature by setting it to 0. This may cause
conflict errors as rclone retries the failed upload but the file will
most likely appear correctly eventually.
These values were determined empirically by observing lots of uploads
of big files for a range of file sizes.
Upload with the `-v` flag to see more info about what rclone is doing
in this situation.
### Limitations ###
Note that Amazon Drive is case insensitive so you can't have a
file called "Hello.doc" and one called "hello.doc".
Amazon Drive has rate limiting so you may notice errors in the
sync (429 errors). rclone will automatically retry the sync up to 3
times by default (see `--retries` flag) which should hopefully work
around this problem.
Amazon Drive has an internal limit of file sizes that can be uploaded
to the service. This limit is not officially published, but all files
larger than this will fail.
At the time of writing (Jan 2016) it is in the area of 50GB per file.
This means that larger files are likely to fail.
Unfortunately there is no way for rclone to see that this failure is
because of file size, so it will retry the operation, as with any other
failure. To avoid this problem, use the `--max-size 50000M` option to limit
the maximum size of uploaded files. Note that `--max-size` does not split
files into segments, it only ignores files over this size.
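For example, a copy which simply skips anything over that size might look like this (the cutoff value is the one suggested above - adjust to taste):
    rclone copy --max-size 50000M /home/source remote:backup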

View File

@@ -1,63 +0,0 @@
---
title: "Authors"
description: "Rclone Authors and Contributors"
date: "2016-11-02"
---
Authors
-------
* Nick Craig-Wood <nick@craig-wood.com>
Contributors
------------
* Alex Couper <amcouper@gmail.com>
* Leonid Shalupov <leonid@shalupov.com>
* Shimon Doodkin <helpmepro1@gmail.com>
* Colin Nicholson <colin@colinn.com>
* Klaus Post <klauspost@gmail.com>
* Sergey Tolmachev <tolsi.ru@gmail.com>
* Adriano Aurélio Meirelles <adriano@atinge.com>
* C. Bess <cbess@users.noreply.github.com>
* Dmitry Burdeev <dibu28@gmail.com>
* Joseph Spurrier <github@josephspurrier.com>
* Björn Harrtell <bjorn@wololo.org>
* Xavier Lucas <xavier.lucas@corp.ovh.com>
* Werner Beroux <werner@beroux.com>
* Brian Stengaard <brian@stengaard.eu>
* Jakub Gedeon <jgedeon@sofi.com>
* Jim Tittsler <jwt@onjapan.net>
* Michal Witkowski <michal@improbable.io>
* Fabian Ruff <fabian.ruff@sap.com>
* Leigh Klotz <klotz@quixey.com>
* Romain Lapray <lapray.romain@gmail.com>
* Justin R. Wilson <jrw972@gmail.com>
* Antonio Messina <antonio.s.messina@gmail.com>
* Stefan G. Weichinger <office@oops.co.at>
* Per Cederberg <cederberg@gmail.com>
* Radek Šenfeld <rush@logic.cz>
* Fredrik Fornwall <fredrik@fornwall.net>
* Asko Tamm <asko@deekit.net>
* xor-zz <xor@gstocco.com>
* Tomasz Mazur <tmazur90@gmail.com>
* Marco Paganini <paganini@paganini.net>
* Felix Bünemann <buenemann@louis.info>
* Durval Menezes <jmrclone@durval.com>
* Luiz Carlos Rumbelsperger Viana <maxd13_luiz_carlos@hotmail.com>
* Stefan Breunig <stefan-github@yrden.de>
* Alishan Ladhani <ali-l@users.noreply.github.com>
* 0xJAKE <0xJAKE@users.noreply.github.com>
* Thibault Molleman <thibaultmol@users.noreply.github.com>
* Scott McGillivray <scott.mcgillivray@gmail.com>
* Bjørn Erik Pedersen <bjorn.erik.pedersen@gmail.com>
* Lukas Loesche <lukas@mesosphere.io>
* emyarod <allllaboutyou@gmail.com>
* T.C. Ferguson <tcf909@gmail.com>
* Brandur <brandur@mutelight.org>
* Dario Giovannetti <dev@dariogiovannetti.net>
* Károly Oláh <okaresz@aol.com>
* Jon Yergatian <jon@macfanatic.ca>
* Jack Schmidt <github@mowsey.org>
* Dedsec1 <Dedsec1@users.noreply.github.com>
* Hisham Zarka <hzarka@gmail.com>

View File

@@ -1,299 +0,0 @@
---
title: "B2"
description: "Backblaze B2"
date: "2016-10-25"
---
<i class="fa fa-fire"></i>Backblaze B2
----------------------------------------
B2 is [Backblaze's cloud storage system](https://www.backblaze.com/b2/).
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`.
Here is an example of making a b2 configuration. First run
rclone config
This will guide you through an interactive setup process. You will
need your account number (a short hex number) and key (a long hex
number) which you can get from the b2 control panel.
```
No remotes found - make a new one
n) New remote
q) Quit config
n/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
5 / Encrypt/Decrypt a remote
\ "crypt"
6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
7 / Google Drive
\ "drive"
8 / Hubic
\ "hubic"
9 / Local Disk
\ "local"
10 / Microsoft OneDrive
\ "onedrive"
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
12 / SSH/SFTP Connection
\ "sftp"
13 / Yandex Disk
\ "yandex"
Storage> 3
Account ID
account> 123456789abc
Application Key
key> 0123456789abcdef0123456789abcdef0123456789
Endpoint for the service - leave blank normally.
endpoint>
Remote config
--------------------
[remote]
account = 123456789abc
key = 0123456789abcdef0123456789abcdef0123456789
endpoint =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
This remote is called `remote` and can now be used like this
See all buckets
rclone lsd remote:
Make a new bucket
rclone mkdir remote:bucket
List the contents of a bucket
rclone ls remote:bucket
Sync `/home/local/directory` to the remote bucket, deleting any
excess files in the bucket.
rclone sync /home/local/directory remote:bucket
### Modified time ###
The modified time is stored as metadata on the object as
`X-Bz-Info-src_last_modified_millis` as milliseconds since 1970-01-01
in the Backblaze standard. Other tools should be able to use this as
a modified time.
Modified times are used in syncing and are fully supported except in
the case of updating a modification time on an existing object. In
this case the object will be uploaded again as B2 doesn't have an API
method to set the modification time independent of doing an upload.
### SHA1 checksums ###
The SHA1 checksums of the files are checked on upload and download and
will be used in the syncing process.
Large files which are uploaded in chunks will store their SHA1 on the
object as `X-Bz-Info-large_file_sha1` as recommended by Backblaze.
### Transfers ###
Backblaze recommends that you do lots of transfers simultaneously for
maximum speed. In tests from my SSD equipped laptop the optimum
setting is about `--transfers 32` though higher numbers may be used
for a slight speed improvement. The optimum number for you may vary
depending on your hardware, how big the files are, how much you want
to load your computer, etc. The default of `--transfers 4` is
definitely too low for Backblaze B2 though.
Note that uploading big files (bigger than 200 MB by default) will use
a 96 MB RAM buffer by default. There can be at most `--transfers` of
these in use at any moment, so this sets the upper limit on the memory
used.
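For example, a sync tuned along these lines might look like the following (the transfer count is only illustrative - pick a value that suits your connection and memory budget):
    rclone sync --transfers 32 /home/local/directory remote:bucket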
### Versions ###
When rclone uploads a new version of a file it creates a [new version
of it](https://www.backblaze.com/b2/docs/file_versions.html).
Likewise when you delete a file, the old version will still be
available.
Old versions of files are visible using the `--b2-versions` flag.
If you wish to remove all the old versions then you can use the
`rclone cleanup remote:bucket` command which will delete all the old
versions of files, leaving the current ones intact. You can also
supply a path and only old versions under that path will be deleted,
eg `rclone cleanup remote:bucket/path/to/stuff`.
When you `purge` a bucket, the current and the old versions will be
deleted then the bucket will be deleted.
However `delete` will cause the current versions of the files to
become hidden old versions.
Here is a session showing the listing and retrieval of an old
version followed by a `cleanup` of the old versions.
Show current version and all the versions with `--b2-versions` flag.
```
$ rclone -q ls b2:cleanup-test
9 one.txt
$ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt
8 one-v2016-07-04-141032-000.txt
16 one-v2016-07-04-141003-000.txt
15 one-v2016-07-02-155621-000.txt
```
Retrieve an old version
```
$ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
$ ls -l /tmp/one-v2016-07-04-141003-000.txt
-rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt
```
Clean up all the old versions and show that they've gone.
```
$ rclone -q cleanup b2:cleanup-test
$ rclone -q ls b2:cleanup-test
9 one.txt
$ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt
```
### Data usage ###
It is useful to know how many requests are sent to the server in different scenarios.
All copy commands send the following 4 requests:
```
/b2api/v1/b2_authorize_account
/b2api/v1/b2_create_bucket
/b2api/v1/b2_list_buckets
/b2api/v1/b2_list_file_names
```
The `b2_list_file_names` request will be sent once for every 1k files
in the remote path, providing the checksum and modification time of
the listed files. As of version 1.33 issue
[#818](https://github.com/ncw/rclone/issues/818) causes extra requests
to be sent when using B2 with Crypt. When a copy operation does not
require any files to be uploaded, no more requests will be sent.
Uploading files that do not require chunking will send 2 requests per
file upload:
```
/b2api/v1/b2_get_upload_url
/b2api/v1/b2_upload_file/
```
Uploading files requiring chunking will send 2 requests (one each to
start and finish the upload) and another 2 requests for each chunk:
```
/b2api/v1/b2_start_large_file
/b2api/v1/b2_get_upload_part_url
/b2api/v1/b2_upload_part/
/b2api/v1/b2_finish_large_file
```
### B2 with crypt ###
When using B2 with `crypt` files are encrypted into a temporary
location and streamed from there. This is required to calculate the
encrypted file's checksum before beginning the upload. On Windows the
%TMPDIR% environment variable is used as the temporary location. If
the file requires chunking, both the chunking and encryption will take
place in memory.
### Specific options ###
Here are the command line options specific to this cloud storage
system.
#### --b2-chunk-size=SIZE ####
When uploading large files chunk the file into this size. Note that
these chunks are buffered in memory and there might be a maximum of
`--transfers` chunks in progress at once. 100,000,000 Bytes is the
minimum size (default 96M).
#### --b2-upload-cutoff=SIZE ####
Cutoff for switching to chunked upload (default 190.735 MiB == 200
MB). Files above this size will be uploaded in chunks of
`--b2-chunk-size`.
This value should be set no larger than 4.657GiB (== 5GB) as this is
the largest file size that can be uploaded.
#### --b2-test-mode=FLAG ####
This is for debugging purposes only.
Setting FLAG to one of the strings below will cause b2 to return
specific errors for debugging purposes.
* `fail_some_uploads`
* `expire_some_account_authorization_tokens`
* `force_cap_exceeded`
These will be set in the `X-Bz-Test-Mode` header which is documented
in the [b2 integrations
checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).
#### --b2-versions ####
When set rclone will show and act on older versions of files. For example
Listing without `--b2-versions`
```
$ rclone -q ls b2:cleanup-test
9 one.txt
```
And with
```
$ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt
8 one-v2016-07-04-141032-000.txt
16 one-v2016-07-04-141003-000.txt
15 one-v2016-07-02-155621-000.txt
```
Showing that the current version is unchanged but older versions can
be seen. These have the UTC date that they were uploaded to the
server to the nearest millisecond appended to them.
Note that when using `--b2-versions` no file write operations are
permitted, so you can't upload files or delete them.

View File

@@ -1,31 +0,0 @@
---
title: "Bugs"
description: "Rclone Bugs and Limitations"
date: "2014-06-16"
---
Bugs and Limitations
--------------------
### Empty directories are left behind / not created ###
With remotes that have a concept of directory, eg Local and Drive,
empty directories may be left behind, or not created when one was
expected.
This is because rclone doesn't have a concept of a directory - it only
works on objects. Most of the object storage systems can't actually
store a directory so there is nowhere for rclone to store anything
about directories.
You can work round this to some extent with the `purge` command which
will delete everything under the path, **including** empty directories.
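For example (this is destructive, so double check the path first):
    rclone purge remote:path/to/dir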
This may be fixed at some point in
[Issue #100](https://github.com/ncw/rclone/issues/100)
### Directory timestamps aren't preserved ###
For the same reason as the above, rclone doesn't have a concept of a
directory - it only works on objects, therefore it can't preserve the
timestamps of directories.

View File

@@ -1,609 +0,0 @@
---
title: "Documentation"
description: "Rclone Changelog"
date: "2016-11-06"
---
Changelog
---------
* v1.36 - 2017-03-18
* New Features
* SFTP remote (Jack Schmidt)
* Re-implement sync routine to work a directory at a time reducing memory usage
* Logging revamped to be more inline with rsync - now much quieter
* -v only shows transfers
* -vv is for full debug
* --syslog to log to syslog on capable platforms
* Implement --backup-dir and --suffix
* Implement --track-renames (initial implementation by Bjørn Erik Pedersen)
* Add time-based bandwidth limits (Lukas Loesche)
* rclone cryptcheck: checks integrity of crypt remotes
* Allow all config file variables and options to be set from environment variables
* Add --buffer-size parameter to control buffer size for copy
* Make --delete-after the default
* Add --ignore-checksum flag (fixed by Hisham Zarka)
* rclone check: Add --download flag to check all the data, not just hashes
* rclone cat: add --head, --tail, --offset, --count and --discard
* rclone config: when choosing from a list, allow the value to be entered too
* rclone config: allow rename and copy of remotes
* rclone obscure: for generating encrypted passwords for rclone's config (T.C. Ferguson)
* Comply with XDG Base Directory specification (Dario Giovannetti)
* this moves the default location of the config file in a backwards compatible way
* Release changes
* Ubuntu snap support (Dedsec1)
* Compile with go 1.8
* MIPS/Linux big and little endian support
* Bug Fixes
* Fix copyto copying things to the wrong place if the destination dir didn't exist
* Fix parsing of remotes in moveto and copyto
* Fix --delete-before deleting files on copy
* Fix --files-from with an empty file copying everything
* Fix sync: don't update mod times if --dry-run set
* Fix MimeType propagation
* Fix filters to add ** rules to directory rules
* Local
* Implement -L, --copy-links flag to allow rclone to follow symlinks
* Open files in write only mode so rclone can write to an rclone mount
* Fix unnormalised unicode causing problems reading directories
* Fix interaction between -x flag and --max-depth
* Mount
* Implement proper directory handling (mkdir, rmdir, renaming)
* Make include and exclude filters apply to mount
* Implement read and write async buffers - control with --buffer-size
* Fix fsync on for directories
* Fix retry on network failure when reading off crypt
* Crypt
* Add --crypt-show-mapping to show encrypted file mapping
* Fix crypt writer getting stuck in a loop
* **IMPORTANT** this bug had the potential to cause data corruption when
* reading data from a network based remote and
* writing to a crypt on Google Drive
* Use the cryptcheck command to validate your data if you are concerned
* If syncing two crypt remotes, sync the unencrypted remote
* Amazon Drive
* Fix panics on Move (rename)
* Fix panic on token expiry
* B2
* Fix inconsistent listings and rclone check
* Fix uploading empty files with go1.8
* Constrain memory usage when doing multipart uploads
* Fix upload url not being refreshed properly
* Drive
* Fix Rmdir on directories with trashed files
* Fix "Ignoring unknown object" when downloading
* Add --drive-list-chunk
* Add --drive-skip-gdocs (Károly Oláh)
* OneDrive
* Implement Move
* Fix Copy
* Fix overwrite detection in Copy
* Fix waitForJob to parse errors correctly
* Use token renewer to stop auth errors on long uploads
* Fix uploading empty files with go1.8
* Google Cloud Storage
* Fix depth 1 directory listings
* Yandex
* Fix single level directory listing
* Dropbox
* Normalise the case for single level directory listings
* Fix depth 1 listing
* S3
* Added ca-central-1 region (Jon Yergatian)
* v1.35 - 2017-01-02
* New Features
* moveto and copyto commands for choosing a destination name on copy/move
* rmdirs command to recursively delete empty directories
* Allow repeated --include/--exclude/--filter options
* Only show transfer stats on commands which transfer stuff
* show stats on any command using the `--stats` flag
* Allow overlapping directories in move when server side dir move is supported
* Add --stats-unit option - thanks Scott McGillivray
* Bug Fixes
* Fix the config file being overwritten when two rclones are running
* Make rclone lsd obey the filters properly
* Fix compilation on mips
* Fix not transferring files that don't differ in size
* Fix panic on nil retry/fatal error
* Mount
* Retry reads on error - should help with reliability a lot
* Report the modification times for directories from the remote
* Add bandwidth accounting and limiting (fixes --bwlimit)
* If --stats provided will show stats and which files are transferring
* Support R/W files if truncate is set.
* Implement statfs interface so df works
* Note that write is now supported on Amazon Drive
* Report number of blocks in a file - thanks Stefan Breunig
* Crypt
* Prevent the user pointing crypt at itself
* Fix failed to authenticate decrypted block errors
* these will now return the underlying unexpected EOF instead
* Amazon Drive
* Add support for server side move and directory move - thanks Stefan Breunig
* Fix nil pointer deref on size attribute
* B2
* Use new prefix and delimiter parameters in directory listings
* This makes --max-depth 1 dir listings as used in mount much faster
* Reauth the account while doing uploads too - should help with token expiry
* Drive
* Make DirMove more efficient and complain about moving the root
* Create destination directory on Move()
* v1.34 - 2016-11-06
* New Features
* Stop single file and `--files-from` operations iterating through the source bucket.
* Stop removing failed upload to cloud storage remotes
* Make ContentType be preserved for cloud to cloud copies
* Add support to toggle bandwidth limits via SIGUSR2 - thanks Marco Paganini
* `rclone check` shows count of hashes that couldn't be checked
* `rclone listremotes` command
* Support linux/arm64 build - thanks Fredrik Fornwall
* Remove `Authorization:` lines from `--dump-headers` output
* Bug Fixes
* Ignore files with control characters in the names
* Fix `rclone move` command
* Delete src files which already existed in dst
* Fix deletion of src file when dst file older
* Fix `rclone check` on crypted file systems
* Make failed uploads not count as "Transferred"
* Make sure high level retries show with `-q`
* Use a vendor directory with godep for repeatable builds
* `rclone mount` - FUSE
* Implement FUSE mount options
* `--no-modtime`, `--debug-fuse`, `--read-only`, `--allow-non-empty`, `--allow-root`, `--allow-other`
* `--default-permissions`, `--write-back-cache`, `--max-read-ahead`, `--umask`, `--uid`, `--gid`
* Add `--dir-cache-time` to control caching of directory entries
* Implement seek for files opened for read (useful for video players)
* with `-no-seek` flag to disable
* Fix crash on 32 bit ARM (alignment of 64 bit counter)
* ...and many more internal fixes and improvements!
* Crypt
* Don't show encrypted password in configurator to stop confusion
* Amazon Drive
* New wait for upload option `--acd-upload-wait-per-gb`
* upload timeouts scale by file size and can be disabled
* Add 502 Bad Gateway to list of errors we retry
* Fix overwriting a file with a zero length file
* Fix ACD file size warning limit - thanks Felix Bünemann
* Local
* Unix: implement `-x`/`--one-file-system` to stay on a single file system
* thanks Durval Menezes and Luiz Carlos Rumbelsperger Viana
* Windows: ignore the symlink bit on files
* Windows: Ignore directory based junction points
* B2
* Make sure each upload has at least one upload slot - fixes strange upload stats
* Fix uploads when using crypt
* Fix download of large files (sha1 mismatch)
* Return error when we try to create a bucket which someone else owns
* Update B2 docs with Data usage, and Crypt section - thanks Tomasz Mazur
* S3
* Command line and config file support for
* Setting/overriding ACL - thanks Radek Senfeld
* Setting storage class - thanks Asko Tamm
* Drive
* Make exponential backoff work exactly as per Google specification
* add `.epub`, `.odp` and `.tsv` as export formats.
* Swift
* Don't read metadata for directory marker objects
* v1.33 - 2016-08-24
* New Features
* Implement encryption
* data encrypted in NACL secretbox format
* with optional file name encryption
* New commands
* rclone mount - implements FUSE mounting of remotes (EXPERIMENTAL)
* works on Linux, FreeBSD and OS X (need testers for the last 2!)
* rclone cat - outputs remote file or files to the terminal
* rclone genautocomplete - command to make a bash completion script for rclone
* Editing a remote using `rclone config` now goes through the wizard
* Compile with go 1.7 - this fixes rclone on macOS Sierra and on 386 processors
* Use cobra for sub commands and docs generation
* drive
* Document how to make your own client_id
* s3
* User-configurable Amazon S3 ACL (thanks Radek Šenfeld)
* b2
* Fix stats accounting for upload - no more jumping to 100% done
* On cleanup delete hide marker if it is the current file
* New B2 API endpoint (thanks Per Cederberg)
* Set maximum backoff to 5 Minutes
* onedrive
* Fix URL escaping in file names - eg uploading files with `+` in them.
* amazon cloud drive
* Fix token expiry during large uploads
* Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
* local
* Fix filenames with invalid UTF-8 not being uploaded
* Fix problem with some UTF-8 characters on OS X
* v1.32 - 2016-07-13
* Backblaze B2
* Fix upload of large files not in root
* v1.31 - 2016-07-13
* New Features
* Reduce memory on sync by about 50%
* Implement --no-traverse flag to stop copy traversing the destination remote.
* This can be used to reduce memory usage down to the smallest possible.
* Useful to copy a small number of files into a large destination folder.
* Implement cleanup command for emptying trash / removing old versions of files
* Currently B2 only
* Single file handling improved
* Now copied with --files-from
* Automatically sets --no-traverse when copying a single file
* Info on using installing with ansible - thanks Stefan Weichinger
* Implement --no-update-modtime flag to stop rclone fixing the remote modified times.
* Bug Fixes
* Fix move command - stop it running for overlapping Fses - this was causing data loss.
* Local
* Fix incomplete hashes - this was causing problems for B2.
* Amazon Drive
* Rename Amazon Cloud Drive to Amazon Drive - no changes to config file needed.
* Swift
* Add support for non-default project domain - thanks Antonio Messina.
* S3
* Add instructions on how to use rclone with minio.
* Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions.
* Skip setting the modified time for objects > 5GB as it isn't possible.
* Backblaze B2
* Add --b2-versions flag so old versions can be listed and retrieved.
* Treat 403 errors (eg cap exceeded) as fatal.
* Implement cleanup command for deleting old file versions.
* Make error handling compliant with B2 integrations notes.
* Fix handling of token expiry.
* Implement --b2-test-mode to set `X-Bz-Test-Mode` header.
* Set cutoff for chunked upload to 200MB as per B2 guidelines.
* Make upload multi-threaded.
* Dropbox
* Don't retry 461 errors.
* v1.30 - 2016-06-18
* New Features
* Directory listing code reworked for more features and better error reporting (thanks to Klaus Post for help). This enables
* Directory include filtering for efficiency
* --max-depth parameter
* Better error reporting
* More to come
* Retry more errors
* Add --ignore-size flag - for uploading images to onedrive
* Log -v output to stdout by default
* Display the transfer stats in more human readable form
* Make 0 size files specifiable with `--max-size 0b`
* Add `b` suffix so we can specify bytes in --bwlimit, --min-size etc
* Use "password:" instead of "password>" prompt - thanks Klaus Post and Leigh Klotz
* Bug Fixes
* Fix retry doing one too many retries
* Local
* Fix problems with OS X and UTF-8 characters
* Amazon Drive
* Check a file exists before uploading to help with 408 Conflict errors
* Reauth on 401 errors - this has been causing a lot of problems
* Work around spurious 403 errors
* Restart directory listings on error
* Google Drive
* Check a file exists before uploading to help with duplicates
* Fix retry of multipart uploads
* Backblaze B2
* Implement large file uploading
* S3
* Add AES256 server-side encryption for - thanks Justin R. Wilson
* Google Cloud Storage
* Make sure we don't use conflicting content types on upload
* Add service account support - thanks Michal Witkowski
* Swift
* Add auth version parameter
* Add domain option for openstack (v3 auth) - thanks Fabian Ruff
* v1.29 - 2016-04-18
* New Features
* Implement `-I, --ignore-times` for unconditional upload
* Improve `dedupe` command
* Now removes identical copies without asking
* Now obeys `--dry-run`
* Implement `--dedupe-mode` for non interactive running
* `--dedupe-mode interactive` - interactive the default.
* `--dedupe-mode skip` - removes identical files then skips anything left.
* `--dedupe-mode first` - removes identical files then keeps the first one.
* `--dedupe-mode newest` - removes identical files then keeps the newest one.
* `--dedupe-mode oldest` - removes identical files then keeps the oldest one.
* `--dedupe-mode rename` - removes identical files then renames the rest to be different.
* Bug fixes
* Make rclone check obey the `--size-only` flag.
* Use "application/octet-stream" if discovered mime type is invalid.
* Fix missing "quit" option when there are no remotes.
* Google Drive
* Increase default chunk size to 8 MB - increases upload speed of big files
* Speed up directory listings and make more reliable
* Add missing retries for Move and DirMove - increases reliability
* Preserve mime type on file update
* Backblaze B2
* Enable mod time syncing
* This means that B2 will now check modification times
* It will upload new files to update the modification times
* (there isn't an API to just set the mod time.)
* If you want the old behaviour use `--size-only`.
* Update API to new version
* Fix parsing of mod time when not in metadata
* Swift/Hubic
* Don't return an MD5SUM for static large objects
* S3
* Fix uploading files bigger than 50GB
* v1.28 - 2016-03-01
* New Features
* Configuration file encryption - thanks Klaus Post
* Improve `rclone config` adding more help and making it easier to understand
* Implement `-u`/`--update` so creation times can be used on all remotes
* Implement `--low-level-retries` flag
* Optionally disable gzip compression on downloads with `--no-gzip-encoding`
* Bug fixes
* Don't make directories if `--dry-run` set
* Fix and document the `move` command
* Fix redirecting stderr on unix-like OSes when using `--log-file`
* Fix `delete` command to wait until all finished - fixes missing deletes.
* Backblaze B2
* Use one upload URL per go routine - fixes `more than one upload using auth token`
* Add pacing, retries and reauthentication - fixes token expiry problems
* Upload without using a temporary file from local (and remotes which support SHA1)
* Fix reading metadata for all files when it shouldn't have been
* Drive
* Fix listing drive documents at root
* Disable copy and move for Google docs
* Swift
* Fix uploading of chunked files with non ASCII characters
* Allow setting of `storage_url` in the config - thanks Xavier Lucas
* S3
* Allow IAM role and credentials from environment variables - thanks Brian Stengaard
* Allow low privilege users to use S3 (check if directory exists during Mkdir) - thanks Jakub Gedeon
* Amazon Drive
* Retry on more things to make directory listings more reliable
* v1.27 - 2016-01-31
* New Features
* Easier headless configuration with `rclone authorize`
* Add support for multiple hash types - we now check SHA1 as well as MD5 hashes.
* `delete` command which does obey the filters (unlike `purge`)
* `dedupe` command to deduplicate a remote. Useful with Google Drive.
* Add `--ignore-existing` flag to skip all files that exist on destination.
* Add `--delete-before`, `--delete-during`, `--delete-after` flags.
* Add `--memprofile` flag to debug memory use.
* Warn the user about files with same name but different case
* Make `--include` rules add their implicit exclude * at the end of the filter list
* Deprecate compiling with go1.3
* Amazon Drive
* Fix download of files > 10 GB
* Fix directory traversal ("Next token is expired") for large directory listings
* Remove 409 conflict from error codes we will retry - stops very long pauses
* Backblaze B2
* SHA1 hashes now checked by rclone core
* Drive
* Add `--drive-auth-owner-only` to only consider files owned by the user - thanks Björn Harrtell
* Export Google documents
* Dropbox
* Make file exclusion error controllable with -q
* Swift
* Fix upload from unprivileged user.
* S3
* Fix updating of mod times of files with `+` in.
* Local
* Add local file system option to disable UNC on Windows.
* v1.26 - 2016-01-02
* New Features
* Yandex storage backend - thank you Dmitry Burdeev ("dibu")
* Implement Backblaze B2 storage backend
* Add --min-age and --max-age flags - thank you Adriano Aurélio Meirelles
* Make ls/lsl/md5sum/size/check obey includes and excludes
* Fixes
* Fix crash in http logging
* Upload releases to github too
* Swift
* Fix sync for chunked files
* One Drive
* Re-enable server side copy
* Don't mask HTTP error codes with JSON decode error
* S3
* Fix corrupting Content-Type on mod time update (thanks Joseph Spurrier)
* v1.25 - 2015-11-14
* New features
* Implement Hubic storage system
* Fixes
* Fix deletion of some excluded files without --delete-excluded
* This could have deleted files unexpectedly on sync
* Always check first with `--dry-run`!
* Swift
* Stop SetModTime losing metadata (eg X-Object-Manifest)
* This could have caused data loss for files > 5GB in size
* Use ContentType from Object to avoid lookups in listings
* One Drive
* disable server side copy as it seems to be broken at Microsoft
* v1.24 - 2015-11-07
* New features
* Add support for Microsoft One Drive
* Add `--no-check-certificate` option to disable server certificate verification
* Add async readahead buffer for faster transfer of big files
* Fixes
* Allow spaces in remotes and check remote names for validity at creation time
* Allow '&' and disallow ':' in Windows filenames.
* Swift
* Ignore directory marker objects where appropriate - allows working with Hubic
* Don't delete the container if fs wasn't at root
* S3
* Don't delete the bucket if fs wasn't at root
* Google Cloud Storage
* Don't delete the bucket if fs wasn't at root
* v1.23 - 2015-10-03
* New features
* Implement `rclone size` for measuring remotes
* Fixes
* Fix headless config for drive and gcs
* Tell the user they should try again if the webserver method failed
* Improve output of `--dump-headers`
* S3
* Allow anonymous access to public buckets
* Swift
* Stop chunked operations logging "Failed to read info: Object Not Found"
* Use Content-Length on uploads for extra reliability
* v1.22 - 2015-09-28
* Implement rsync like include and exclude flags
* swift
* Support files > 5GB - thanks Sergey Tolmachev
* v1.21 - 2015-09-22
* New features
* Display individual transfer progress
* Make lsl output times in localtime
* Fixes
* Fix allowing user to override credentials again in Drive, GCS and ACD
* Amazon Drive
* Implement compliant pacing scheme
* Google Drive
* Make directory reads concurrent for increased speed.
* v1.20 - 2015-09-15
* New features
* Amazon Drive support
* Oauth support redone - fix many bugs and improve usability
* Use "golang.org/x/oauth2" as oauth libary of choice
* Improve oauth usability for smoother initial signup
* drive, googlecloudstorage: optionally use auto config for the oauth token
* Implement --dump-headers and --dump-bodies debug flags
* Show multiple matched commands if abbreviation too short
* Implement server side move where possible
* local
* Always use UNC paths internally on Windows - fixes a lot of bugs
* dropbox
* force use of our custom transport which makes timeouts work
* Thanks to Klaus Post for lots of help with this release
* v1.19 - 2015-08-28
* New features
* Server side copies for s3/swift/drive/dropbox/gcs
* Move command - uses server side copies if it can
* Implement --retries flag - tries 3 times by default
* Build for plan9/amd64 and solaris/amd64 too
* Fixes
* Make a current version download with a fixed URL for scripting
* Ignore rmdir in limited fs rather than throwing error
* dropbox
* Increase chunk size to improve upload speeds massively
* Issue an error message when trying to upload bad file name
* v1.18 - 2015-08-17
* drive
* Add `--drive-use-trash` flag so rclone trashes instead of deletes
* Add "Forbidden to download" message for files with no downloadURL
* dropbox
* Remove datastore
* This was deprecated and it caused a lot of problems
* Modification times and MD5SUMs no longer stored
* Fix uploading files > 2GB
* s3
* use official AWS SDK from github.com/aws/aws-sdk-go
* **NB** will most likely require you to delete and recreate remote
* enable multipart upload which enables files > 5GB
* tested with Ceph / RadosGW / S3 emulation
* many thanks to Sam Liston and Brian Haymore at the [Utah
Center for High Performance Computing](https://www.chpc.utah.edu/) for a Ceph test account
* misc
* Show errors when reading the config file
* Do not print stats in quiet mode - thanks Leonid Shalupov
* Add FAQ
* Fix created directories not obeying umask
* Linux installation instructions - thanks Shimon Doodkin
* v1.17 - 2015-06-14
* dropbox: fix case insensitivity issues - thanks Leonid Shalupov
* v1.16 - 2015-06-09
* Fix uploading big files which was causing timeouts or panics
* Don't check md5sum after download with --size-only
* v1.15 - 2015-06-06
* Add --checksum flag to only discard transfers by MD5SUM - thanks Alex Couper
* Implement --size-only flag to sync on size not checksum & modtime
* Expand docs and remove duplicated information
* Document rclone's limitations with directories
* dropbox: update docs about case insensitivity
* v1.14 - 2015-05-21
* local: fix encoding of non utf-8 file names - fixes a duplicate file problem
* drive: docs about rate limiting
* google cloud storage: Fix compile after API change in "google.golang.org/api/storage/v1"
* v1.13 - 2015-05-10
* Revise documentation (especially sync)
* Implement --timeout and --conntimeout
* s3: ignore etags from multipart uploads which aren't md5sums
* v1.12 - 2015-03-15
* drive: Use chunked upload for files above a certain size
* drive: add --drive-chunk-size and --drive-upload-cutoff parameters
* drive: switch to insert from update when a failed copy deletes the upload
* core: Log duplicate files if they are detected
* v1.11 - 2015-03-04
* swift: add region parameter
* drive: fix crash on failed to update remote mtime
* In remote paths, change native directory separators to /
* Add synchronization to ls/lsl/lsd output to stop corruptions
* Ensure all stats/log messages go to stderr
* Add --log-file flag to log everything (including panics) to file
* Make it possible to disable stats printing with --stats=0
* Implement --bwlimit to limit data transfer bandwidth
* v1.10 - 2015-02-12
* s3: list an unlimited number of items
* Fix getting stuck in the configurator
* v1.09 - 2015-02-07
* windows: Stop drive letters (eg C:) getting mixed up with remotes (eg drive:)
* local: Fix directory separators on Windows
* drive: fix rate limit exceeded errors
* v1.08 - 2015-02-04
* drive: fix subdirectory listing to not list entire drive
* drive: Fix SetModTime
* dropbox: adapt code to recent library changes
* v1.07 - 2014-12-23
* google cloud storage: fix memory leak
* v1.06 - 2014-12-12
* Fix "Couldn't find home directory" on OSX
* swift: Add tenant parameter
* Use new location of Google API packages
* v1.05 - 2014-08-09
* Improved tests and consequently lots of minor fixes
* core: Fix race detected by go race detector
* core: Fixes after running errcheck
* drive: reset root directory on Rmdir and Purge
* fs: Document that Purger returns error on empty directory, test and fix
* google cloud storage: fix ListDir on subdirectory
* google cloud storage: re-read metadata in SetModTime
* s3: make reading metadata more reliable to work around eventual consistency problems
* s3: strip trailing / from ListDir()
* swift: return directories without / in ListDir
* v1.04 - 2014-07-21
* google cloud storage: Fix crash on Update
* v1.03 - 2014-07-20
* swift, s3, dropbox: fix updated files being marked as corrupted
* Make compile with go 1.1 again
* v1.02 - 2014-07-19
* Implement Dropbox remote
* Implement Google Cloud Storage remote
* Verify Md5sums and Sizes after copies
* Remove times from "ls" command - lists sizes only
* Add "lsl" command - lists times and sizes
* Add "md5sum" command
* v1.01 - 2014-07-04
* drive: fix transfer of big files using up lots of memory
* v1.00 - 2014-07-03
* drive: fix whole second dates
* v0.99 - 2014-06-26
* Fix --dry-run not working
* Make compatible with go 1.1
* v0.98 - 2014-05-30
* s3: Treat missing Content-Length as 0 for some ceph installations
* rclonetest: add a file with a space in the name
* v0.97 - 2014-05-05
* Implement copying of single files
* s3 & swift: support paths inside containers/buckets
* v0.96 - 2014-04-24
* drive: Fix multiple files of same name being created
* drive: Use o.Update and fs.Put to optimise transfers
* Add version number, -V and --version
* v0.95 - 2014-03-28
* rclone.org: website, docs and graphics
* drive: fix path parsing
* v0.94 - 2014-03-27
* Change remote format one last time
* GNU style flags
* v0.93 - 2014-03-16
* drive: store token in config file
* cross compile other versions
* set strict permissions on config file
* v0.92 - 2014-03-15
* Config fixes and --config option
* v0.91 - 2014-03-15
* Make config file
* v0.90 - 2013-06-27
* Project named rclone
* v0.00 - 2012-11-18
* Project started

View File

@@ -1,167 +0,0 @@
---
date: 2017-03-18T11:14:54Z
title: "rclone"
slug: rclone
url: /commands/rclone/
---
## rclone
Sync files and directories to and from local and remote object stores - v1.36
### Synopsis
Rclone is a command line program to sync files and directories to and
from various cloud storage systems, such as:
* Google Drive
* Amazon S3
* Openstack Swift / Rackspace cloud files / Memset Memstore
* Dropbox
* Google Cloud Storage
* Amazon Drive
* Microsoft One Drive
* Hubic
* Backblaze B2
* Yandex Disk
* The local filesystem
Features
* MD5/SHA1 hashes checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* Copy mode to just copy new/changed files
* Sync (one way) mode to make a directory identical
* Check mode to check for file hash equality
* Can sync to and from network, eg two different cloud accounts
See the home page for installation, usage, documentation, changelog
and configuration walkthroughs.
* http://rclone.org/
```
rclone
```
### Options
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--buffer-size int Buffer size when copying files. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after                        When synchronizing, delete files on destination after transferring
--delete-before                       When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-auth Dump HTTP headers with auth info
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size                         Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose count[=-1] Print lots more stuff (repeat for more)
-V, --version Print the version number
```
### SEE ALSO
* [rclone authorize](/commands/rclone_authorize/) - Remote authorization.
* [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout.
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping already copied
* [rclone cryptcheck](/commands/rclone_cryptcheck/) - Cryptcheck checks the integrity of a crypted remote.
* [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate files and delete/rename them.
* [rclone delete](/commands/rclone_delete/) - Remove the contents of path.
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output bash completion script for rclone.
* [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied.
* [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file.
* [rclone ls](/commands/rclone_ls/) - List all the objects in the path with size and path.
* [rclone lsd](/commands/rclone_lsd/) - List all directories/containers/buckets in the path.
* [rclone lsl](/commands/rclone_lsl/) - List all the objects in the path with modification time, size and path.
* [rclone md5sum](/commands/rclone_md5sum/) - Produces an md5sum file for all the objects in the path.
* [rclone mkdir](/commands/rclone_mkdir/) - Make the path if it doesn't already exist.
* [rclone mount](/commands/rclone_mount/) - Mount the remote as a mountpoint. **EXPERIMENTAL**
* [rclone move](/commands/rclone_move/) - Move files from source to dest.
* [rclone moveto](/commands/rclone_moveto/) - Move file or directory from source to dest.
* [rclone obscure](/commands/rclone_obscure/) - Obscure password for use in the rclone.conf
* [rclone purge](/commands/rclone_purge/) - Remove the path and all of its contents.
* [rclone rmdir](/commands/rclone_rmdir/) - Remove the path if empty.
* [rclone rmdirs](/commands/rclone_rmdirs/) - Remove any empty directories under the path.
* [rclone sha1sum](/commands/rclone_sha1sum/) - Produces an sha1sum file for all the objects in the path.
* [rclone size](/commands/rclone_size/) - Prints the total size and number of objects in remote:path.
* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only.
* [rclone version](/commands/rclone_version/) - Show the version number.
###### Auto generated by spf13/cobra on 18-Mar-2017

View File

@@ -1,111 +0,0 @@
---
date: 2017-03-18T11:14:54Z
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
---
## rclone authorize
Remote authorization.
### Synopsis
Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.
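Typically `rclone config` prints the exact command to run on the machine with the browser; it looks something like this (the "dropbox" remote type here is only an illustration):
    rclone authorize "dropbox"
This opens the browser, completes the authorization and prints a token which you then paste back into the headless machine's config session.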
```
rclone authorize
```
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--buffer-size int Buffer size when copying files. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after                        When synchronizing, delete files on destination after transferring
--delete-before                       When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-auth Dump HTTP headers with auth info
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size                         Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
###### Auto generated by spf13/cobra on 18-Mar-2017

View File

@@ -1,137 +0,0 @@
---
date: 2017-03-18T11:14:54Z
title: "rclone cat"
slug: rclone_cat
url: /commands/rclone_cat/
---
## rclone cat
Concatenates any files and sends them to stdout.
### Synopsis
rclone cat sends any files to standard output.
You can use it like this to output a single file
rclone cat remote:path/to/file
Or like this to output any file in dir or subdirectories.
rclone cat remote:path/to/dir
Or like this to output any .txt files in dir or subdirectories.
rclone --include "*.txt" cat remote:path/to/dir
Use the --head flag to print characters only at the start, --tail for
the end and --offset and --count to print a section in the middle.
Note that if offset is negative it will count from the end, so
--offset -1 --count 1 is equivalent to --tail 1.
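For example, to print 50 characters starting at offset 100, or just the last 20 characters, of a file (the remote and path are only illustrative):
    rclone cat --offset 100 --count 50 remote:path/to/file
    rclone cat --tail 20 remote:path/to/file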
```
rclone cat remote:path
```
### Options
```
--count int Only print N characters. (default -1)
--discard Discard the output instead of printing.
--head int Only print the first N characters.
--offset int Start printing at offset N (or from end if -ve).
--tail int Only print the last N characters.
```
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--buffer-size int Buffer size when copying files. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after                        When synchronizing, delete files on destination after transferring
--delete-before                       When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-auth Dump HTTP headers with auth info
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size                         Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
###### Auto generated by spf13/cobra on 18-Mar-2017

View File

@@ -1,126 +0,0 @@
---
date: 2017-03-18T11:14:54Z
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
---
## rclone check
Checks the files in the source and destination match.
### Synopsis
Checks the files in the source and destination match. It compares
sizes and hashes (MD5 or SHA1) and logs a report of files which don't
match. It doesn't alter the source or destination.
If you supply the --size-only flag, it will only compare the sizes not
the hashes as well. Use this for a quick check.
If you supply the --download flag, it will download the data from
both remotes and check them against each other on the fly. This can
be useful for remotes that don't support hashes or if you really want
to check all the data.
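For example, a quick size-only comparison and a full download-and-compare of the same two paths would look like this (the remote names are illustrative):
    rclone check --size-only source:path dest:path
    rclone check --download source:path dest:path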
```
rclone check source:path dest:path
```
### Options
```
--download Check by downloading rather than with hash.
```
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--buffer-size int Buffer size when copying files. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after                        When synchronizing, delete files on destination after transferring
--delete-before                       When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-auth Dump HTTP headers with auth info
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size                         Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
###### Auto generated by spf13/cobra on 18-Mar-2017

View File

@@ -1,111 +0,0 @@
---
date: 2017-03-18T11:14:54Z
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
---
## rclone cleanup
Clean up the remote if possible
### Synopsis
Clean up the remote if possible. Empty the trash or delete old file
versions. Not supported by all remotes.
```
rclone cleanup remote:path
```
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--buffer-size int Buffer size when copying files. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after                        When synchronizing, delete files on destination after transferring
--delete-before                       When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-auth Dump HTTP headers with auth info
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size                         Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
###### Auto generated by spf13/cobra on 18-Mar-2017

View File

@@ -1,108 +0,0 @@
---
date: 2017-03-18T11:14:54Z
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
---
## rclone config
Enter an interactive configuration session.
### Synopsis
Enter an interactive configuration session.
```
rclone config
```
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--buffer-size int Buffer size when copying files. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after                        When synchronizing, delete files on destination after transferring
--delete-before                       When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-auth Dump HTTP headers with auth info
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size                         Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
###### Auto generated by spf13/cobra on 18-Mar-2017

View File

@@ -1,147 +0,0 @@
---
date: 2017-03-18T11:14:54Z
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
---
## rclone copy
Copy files from source to dest, skipping already copied
### Synopsis
Copy the source to the destination. Doesn't transfer
unchanged files, testing by size and modification time or
MD5SUM. Doesn't delete files from the destination.
Note that it is always the contents of the directory that is synced,
not the directory, so when source:path is a directory, it's the
contents of source:path that are copied, not the directory name and
contents.
If dest:path doesn't exist, it is created and the source:path contents
go there.
For example
rclone copy source:sourcepath dest:destpath
Let's say there are two files in sourcepath
sourcepath/one.txt
sourcepath/two.txt
This copies them to
destpath/one.txt
destpath/two.txt
Not to
destpath/sourcepath/one.txt
destpath/sourcepath/two.txt
If you are familiar with `rsync`, rclone always works as if you had
written a trailing / - meaning "copy the contents of this directory".
This applies to all commands and whether you are talking about the
source or destination.
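So if you want the directory name to appear on the destination, include it in the destination path yourself, for example (the paths are illustrative):
    rclone copy /data/photos remote:backup/photos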
See the `--no-traverse` option for controlling whether rclone lists
the destination directory or not.
```
rclone copy source:path dest:path
```
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--buffer-size int Buffer size when copying files. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after                        When synchronizing, delete files on destination after transferring
--delete-before                       When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-auth Dump HTTP headers with auth info
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size                         Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
###### Auto generated by spf13/cobra on 18-Mar-2017

View File

@@ -1,134 +0,0 @@
---
date: 2017-03-18T11:14:54Z
title: "rclone copyto"
slug: rclone_copyto
url: /commands/rclone_copyto/
---
## rclone copyto
Copy files from source to dest, skipping already copied
### Synopsis
If source:path is a file or directory then it copies it to a file or
directory named dest:path.
This can be used to upload single files under a name other than their
current one. If the source is a directory then it acts exactly like the copy
command.
So
rclone copyto src dst
where src and dst are rclone paths, either remote:path or
/path/to/local or C:\windows\path\if\on\windows.
This will:
    if src is file
        copy it to dst, overwriting an existing file if it exists
    if src is directory
        copy it to dst, overwriting existing files if they exist
see copy command for full details
This doesn't transfer unchanged files, testing by size and
modification time or MD5SUM. It doesn't delete files from the
destination.
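For example, to upload a single file under a different name (the paths are illustrative):
    rclone copyto /home/user/report.txt remote:backup/report-2017.txt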
```
rclone copyto source:path dest:path
```
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
--acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
--b2-versions Include old versions in directory listings.
--backup-dir string Make backups into hierarchy based in DIR.
--buffer-size int Buffer size when copying files. (default 16M)
--bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
-L, --copy-links Follow symlinks and copy the pointed to item.
--cpuprofile string Write cpu profile to file
--crypt-show-mapping For all files listed show how the names encrypt.
--delete-after                        When synchronizing, delete files on destination after transferring
--delete-before                       When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-skip-gdocs Skip google documents in all listings.
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
-n, --dry-run Do a trial run with no permanent changes
--dump-auth Dump HTTP headers with auth info
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--old-sync-method Temporary flag to select old sync method
-x, --one-file-system Don't cross filesystem boundaries.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
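As a rough sketch of how a few of these inherited flags combine with copyto (the remote name and paths are illustrative assumptions, not from this document):
```
rclone copyto --dry-run --bwlimit 1M --min-size 10k -v /path/to/src remote:dest
```
Here --dry-run reports what would be transferred without changing anything, --bwlimit caps throughput at 1 MByte/s, and --min-size skips files smaller than 10 kBytes.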
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36
###### Auto generated by spf13/cobra on 18-Mar-2017

Some files were not shown because too many files have changed in this diff