mirror of
https://github.com/rclone/rclone.git
synced 2026-01-02 16:43:28 +00:00
Compare commits
1 Commits
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
094cc27621 |
3
.gitignore
vendored
3
.gitignore
vendored
@@ -4,3 +4,6 @@ rclone
|
||||
rclonetest/rclonetest
|
||||
build
|
||||
docs/public
|
||||
README.html
|
||||
README.txt
|
||||
rclone.1
|
||||
|
||||
22
.travis.yml
22
.travis.yml
@@ -1,23 +1,11 @@
|
||||
language: go
|
||||
sudo: false
|
||||
|
||||
os:
|
||||
- linux
|
||||
|
||||
# - osx
|
||||
|
||||
go:
|
||||
- 1.5.3
|
||||
- 1.6
|
||||
- 1.1.2
|
||||
- 1.2.2
|
||||
- 1.3
|
||||
- tip
|
||||
|
||||
install:
|
||||
- go get -t ./...
|
||||
- go get -u github.com/kisielk/errcheck
|
||||
- go get -u golang.org/x/tools/cmd/goimports
|
||||
- go get -u github.com/golang/lint/golint
|
||||
|
||||
script:
|
||||
- make check
|
||||
- go test ./...
|
||||
- go test -cpu=2 -race ./...
|
||||
- go get ./...
|
||||
- go test -v ./...
|
||||
|
||||
161
CONTRIBUTING.md
161
CONTRIBUTING.md
@@ -1,161 +0,0 @@
|
||||
# Contributing to rclone #
|
||||
|
||||
This is a short guide on how to contribute things to rclone.
|
||||
|
||||
## Reporting a bug ##
|
||||
|
||||
Bug reports are welcome. Please when submitting add:
|
||||
|
||||
* Rclone version (eg output from `rclone -V`)
|
||||
* Which OS you are using and how many bits (eg Windows 7, 64 bit)
|
||||
* The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
|
||||
* A log of the command with the `-v` flag (eg output from `rclone -v copy /tmp remote:tmp`)
|
||||
* if the log contains secrets then edit the file with a text editor first to obscure them
|
||||
|
||||
## Submitting a pull request ##
|
||||
|
||||
If you find a bug that you'd like to fix, or a new feature that you'd
|
||||
like to implement then please submit a pull request via Github.
|
||||
|
||||
If it is a big feature then make an issue first so it can be discussed.
|
||||
|
||||
You'll need a Go environment set up with GOPATH set. See [the Go
|
||||
getting started docs](https://golang.org/doc/install) for more info.
|
||||
|
||||
First in your web browser press the fork button on [rclone's Github
|
||||
page](https://github.com/ncw/rclone).
|
||||
|
||||
Now in your terminal
|
||||
|
||||
go get github.com/ncw/rclone
|
||||
cd $GOPATH/src/github.com/ncw/rclone
|
||||
git remote rename origin upstream
|
||||
git remote add origin git@github.com:YOURUSER/rclone.git
|
||||
|
||||
Make a branch to add your new feature
|
||||
|
||||
git checkout -b my-new-feature
|
||||
|
||||
And get hacking.
|
||||
|
||||
When ready - run the unit tests for the code you changed
|
||||
|
||||
go test -v
|
||||
|
||||
Note that you make need to make a test remote, eg `TestSwift` for some
|
||||
of the unit tests.
|
||||
|
||||
Note the top level Makefile targets
|
||||
|
||||
* make check
|
||||
* make test
|
||||
|
||||
Both of these will be run by Travis when you make a pull request but
|
||||
you can do this yourself locally too.
|
||||
|
||||
Make sure you
|
||||
|
||||
* Add documentation for a new feature
|
||||
* Add unit tests for a new feature
|
||||
* squash commits down to one per feature
|
||||
* rebase to master `git rebase master`
|
||||
|
||||
When you are done with that
|
||||
|
||||
git push origin my-new-feature
|
||||
|
||||
Go to the Github website and click [Create pull
|
||||
request](https://help.github.com/articles/creating-a-pull-request/).
|
||||
|
||||
You patch will get reviewed and you might get asked to fix some stuff.
|
||||
|
||||
If so, then make the changes in the same branch, squash the commits,
|
||||
rebase it to master then push it to Github with `--force`.
|
||||
|
||||
## Testing ##
|
||||
|
||||
rclone's tests are run from the go testing framework, so at the top
|
||||
level you can run this to run all the tests.
|
||||
|
||||
go test -v ./...
|
||||
|
||||
rclone contains a mixture of unit tests and integration tests.
|
||||
Because it is difficult (and in some respects pointless) to test cloud
|
||||
storage systems by mocking all their interfaces, rclone unit tests can
|
||||
run against any of the backends. This is done by making specially
|
||||
named remotes in the default config file.
|
||||
|
||||
If you wanted to test changes in the `drive` backend, then you would
|
||||
need to make a remote called `TestDrive`.
|
||||
|
||||
You can then run the unit tests in the drive directory. These tests
|
||||
are skipped if `TestDrive:` isn't defined.
|
||||
|
||||
cd drive
|
||||
go test -v
|
||||
|
||||
You can then run the integration tests which tests all of rclone's
|
||||
operations. Normally these get run against the local filing system,
|
||||
but they can be run against any of the remotes.
|
||||
|
||||
cd ../fs
|
||||
go test -v -remote TestDrive:
|
||||
go test -v -remote TestDrive: -subdir
|
||||
|
||||
If you want to run all the integration tests against all the remotes,
|
||||
then run in that directory
|
||||
|
||||
go run test_all.go
|
||||
|
||||
## Making a release ##
|
||||
|
||||
There are separate instructions for making a release in the RELEASE.md
|
||||
file - doing the first few steps is useful before making a
|
||||
contribution.
|
||||
|
||||
* go get -u -f -v ./...
|
||||
* make check
|
||||
* make test
|
||||
* make tag
|
||||
|
||||
## Writing a new backend ##
|
||||
|
||||
Choose a name. The docs here will use `remote` as an example.
|
||||
|
||||
Note that in rclone terminology a file system backend is called a
|
||||
remote or an fs.
|
||||
|
||||
Research
|
||||
|
||||
* Look at the interfaces defined in `fs/fs.go`
|
||||
* Study one or more of the existing remotes
|
||||
|
||||
Getting going
|
||||
|
||||
* Create `remote/remote.go` (copy this from a similar fs)
|
||||
* Add your fs to the imports in `fs/all/all.go`
|
||||
|
||||
Unit tests
|
||||
|
||||
* Create a config entry called `TestRemote` for the unit tests to use
|
||||
* Add your fs to the end of `fstest/fstests/gen_tests.go`
|
||||
* generate `remote/remote_test.go` unit tests `cd fstest/fstests; go generate`
|
||||
* Make sure all tests pass with `go test -v`
|
||||
|
||||
Integration tests
|
||||
|
||||
* Add your fs to `fs/test_all.go`
|
||||
* Make sure integration tests pass with
|
||||
* `cd fs`
|
||||
* `go test -v -remote TestRemote:` and
|
||||
* `go test -v -remote TestRemote: -subdir`
|
||||
|
||||
Add your fs to the docs
|
||||
|
||||
* `README.md` - main Github page
|
||||
* `docs/content/remote.md` - main docs page
|
||||
* `docs/content/overview.md` - overview docs
|
||||
* `docs/content/docs.md` - list of remotes in config section
|
||||
* `docs/content/about.md` - front page of rclone.org
|
||||
* `docs/layouts/chrome/navbar.html` - add it to the website navigation
|
||||
* `make_manual.py` - add the page to the `docs` constant
|
||||
@@ -1,13 +0,0 @@
|
||||
When filing an issue, please include the following information if
|
||||
possible as well as a description of the problem.
|
||||
|
||||
> What is your rclone version (eg output from `rclone -V`)
|
||||
|
||||
> Which OS you are using and how many bits (eg Windows 7, 64 bit)
|
||||
|
||||
> Which cloud storage system are you using? (eg Google Drive)
|
||||
|
||||
> The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
|
||||
|
||||
> A log from the command with the `-v` flag (eg output from `rclone -v copy /tmp remote:tmp`)
|
||||
|
||||
2430
MANUAL.html
2430
MANUAL.html
File diff suppressed because it is too large
Load Diff
3366
MANUAL.txt
3366
MANUAL.txt
File diff suppressed because it is too large
Load Diff
62
Makefile
62
Makefile
@@ -1,85 +1,59 @@
|
||||
SHELL = /bin/bash
|
||||
TAG := $(shell git describe --tags)
|
||||
LAST_TAG := $(shell git describe --tags --abbrev=0)
|
||||
NEW_TAG := $(shell echo $(LAST_TAG) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f", $$_)')
|
||||
|
||||
rclone:
|
||||
rclone: *.go */*.go
|
||||
@go version
|
||||
go install -v ./...
|
||||
go build
|
||||
|
||||
test: rclone
|
||||
go test ./...
|
||||
cd fs && go run test_all.go
|
||||
doc: rclone.1 README.html README.txt
|
||||
|
||||
check: rclone
|
||||
go vet ./...
|
||||
errcheck ./...
|
||||
goimports -d . | grep . ; test $$? -eq 1
|
||||
golint ./... | grep -E -v '(StorageUrl|CdnUrl)' ; test $$? -eq 1
|
||||
rclone.1: README.md
|
||||
pandoc -s --from markdown --to man README.md -o rclone.1
|
||||
|
||||
doc: rclone.1 MANUAL.html MANUAL.txt
|
||||
README.html: README.md
|
||||
pandoc -s --from markdown_github --to html README.md -o README.html
|
||||
|
||||
rclone.1: MANUAL.md
|
||||
pandoc -s --from markdown --to man MANUAL.md -o rclone.1
|
||||
|
||||
MANUAL.md: make_manual.py docs/content/*.md
|
||||
./make_manual.py
|
||||
|
||||
MANUAL.html: MANUAL.md
|
||||
pandoc -s --from markdown --to html MANUAL.md -o MANUAL.html
|
||||
|
||||
MANUAL.txt: MANUAL.md
|
||||
pandoc -s --from markdown --to plain MANUAL.md -o MANUAL.txt
|
||||
README.txt: README.md
|
||||
pandoc -s --from markdown_github --to plain README.md -o README.txt
|
||||
|
||||
install: rclone
|
||||
install -d ${DESTDIR}/usr/bin
|
||||
install -t ${DESTDIR}/usr/bin ${GOPATH}/bin/rclone
|
||||
install -t ${DESTDIR}/usr/bin rclone
|
||||
|
||||
clean:
|
||||
go clean ./...
|
||||
find . -name \*~ | xargs -r rm -f
|
||||
rm -rf build docs/public
|
||||
rm -f rclone rclonetest/rclonetest
|
||||
rm -f rclone rclonetest/rclonetest rclone.1 README.html README.txt
|
||||
|
||||
website:
|
||||
cd docs && hugo
|
||||
|
||||
upload_website: website
|
||||
rclone -v sync docs/public memstore:www-rclone-org
|
||||
./rclone -v sync docs/public memstore:www-rclone-org
|
||||
|
||||
upload:
|
||||
rclone -v copy build/ memstore:downloads-rclone-org
|
||||
|
||||
upload_github:
|
||||
./upload-github $(TAG)
|
||||
./rclone -v copy build/ memstore:downloads-rclone-org
|
||||
|
||||
cross: doc
|
||||
./cross-compile $(TAG)
|
||||
|
||||
beta:
|
||||
./cross-compile $(TAG)β
|
||||
rm build/*-current-*
|
||||
rclone -v copy build/ memstore:pub-rclone-org/$(TAG)β
|
||||
@echo Beta release ready at http://pub.rclone.org/$(TAG)%CE%B2/
|
||||
|
||||
serve: website
|
||||
serve:
|
||||
cd docs && hugo server -v -w
|
||||
|
||||
tag: doc
|
||||
tag:
|
||||
@echo "Old tag is $(LAST_TAG)"
|
||||
@echo "New tag is $(NEW_TAG)"
|
||||
echo -e "package fs\n\n// Version of rclone\nconst Version = \"$(NEW_TAG)\"\n" | gofmt > fs/version.go
|
||||
echo -e "package fs\n const Version = \"$(NEW_TAG)\"\n" | gofmt > fs/version.go
|
||||
perl -lpe 's/VERSION/${NEW_TAG}/g; s/DATE/'`date -I`'/g;' docs/content/downloads.md.in > docs/content/downloads.md
|
||||
git tag $(NEW_TAG)
|
||||
@echo "Add this to changelog in docs/content/changelog.md"
|
||||
@echo "Add this to changelog in README.md"
|
||||
@echo " * $(NEW_TAG) -" `date -I`
|
||||
@git log $(LAST_TAG)..$(NEW_TAG) --oneline
|
||||
@echo "Then commit the changes"
|
||||
@echo git commit -m \"Version $(NEW_TAG)\" -a -v
|
||||
@echo git commit -m "Version $(NEW_TAG)" -a -v
|
||||
@echo "And finally run make retag before make cross etc"
|
||||
|
||||
retag:
|
||||
git tag -f $(LAST_TAG)
|
||||
|
||||
gen_tests:
|
||||
cd fstest/fstests && go generate
|
||||
|
||||
321
README.md
321
README.md
@@ -1,15 +1,12 @@
|
||||
% rclone(1) User Manual
|
||||
% Nick Craig-Wood
|
||||
% Jul 7, 2014
|
||||
|
||||
Rclone
|
||||
======
|
||||
|
||||
[](http://rclone.org/)
|
||||
|
||||
[Website](http://rclone.org) |
|
||||
[Documentation](http://rclone.org/docs/) |
|
||||
[Contributing](CONTRIBUTING.md) |
|
||||
[Changelog](http://rclone.org/changelog/) |
|
||||
[Installation](http://rclone.org/install/) |
|
||||
[G+](https://google.com/+RcloneOrg)
|
||||
|
||||
|
||||
[](https://travis-ci.org/ncw/rclone) [](https://ci.appveyor.com/project/ncw/rclone) [](https://godoc.org/github.com/ncw/rclone)
|
||||
|
||||
Rclone is a command line program to sync files and directories to and from
|
||||
|
||||
* Google Drive
|
||||
@@ -17,30 +14,312 @@ Rclone is a command line program to sync files and directories to and from
|
||||
* Openstack Swift / Rackspace cloud files / Memset Memstore
|
||||
* Dropbox
|
||||
* Google Cloud Storage
|
||||
* Amazon Cloud Drive
|
||||
* Microsoft One Drive
|
||||
* Hubic
|
||||
* Backblaze B2
|
||||
* Yandex Disk
|
||||
* The local filesystem
|
||||
|
||||
Features
|
||||
|
||||
* MD5/SHA1 hashes checked at all times for file integrity
|
||||
* MD5SUMs checked at all times for file integrity
|
||||
* Timestamps preserved on files
|
||||
* Partial syncs supported on a whole file basis
|
||||
* Copy mode to just copy new/changed files
|
||||
* Sync (one way) mode to make a directory identical
|
||||
* Check mode to check for file hash equality
|
||||
* Can sync to and from network, eg two different cloud accounts
|
||||
* Sync mode to make a directory identical
|
||||
* Check mode to check all MD5SUMs
|
||||
* Can sync to and from network, eg two different Drive accounts
|
||||
|
||||
See the home page for installation, usage, documentation, changelog
|
||||
and configuration walkthroughs.
|
||||
See the Home page for more documentation and configuration walkthroughs.
|
||||
|
||||
* http://rclone.org/
|
||||
|
||||
Install
|
||||
-------
|
||||
|
||||
Rclone is a Go program and comes as a single binary file.
|
||||
|
||||
Download the binary for your OS from
|
||||
|
||||
* http://rclone.org/downloads/
|
||||
|
||||
Or alternatively if you have Go installed use
|
||||
|
||||
go install github.com/ncw/rclone
|
||||
|
||||
and this will build the binary in `$GOPATH/bin`.
|
||||
|
||||
Configure
|
||||
---------
|
||||
|
||||
First you'll need to configure rclone. As the object storage systems
|
||||
have quite complicated authentication these are kept in a config file
|
||||
`.rclone.conf` in your home directory by default. (You can use the
|
||||
`--config` option to choose a different config file.)
|
||||
|
||||
The easiest way to make the config is to run rclone with the config
|
||||
option, Eg
|
||||
|
||||
rclone config
|
||||
|
||||
Usage
|
||||
-----
|
||||
|
||||
Rclone syncs a directory tree from local to remote.
|
||||
|
||||
Its basic syntax is
|
||||
|
||||
Syntax: [options] subcommand <parameters> <parameters...>
|
||||
|
||||
See below for how to specify the source and destination paths.
|
||||
|
||||
Subcommands
|
||||
-----------
|
||||
|
||||
rclone copy source:path dest:path
|
||||
|
||||
Copy the source to the destination. Doesn't transfer
|
||||
unchanged files, testing first by modification time then by
|
||||
MD5SUM. Doesn't delete files from the destination.
|
||||
|
||||
rclone sync source:path dest:path
|
||||
|
||||
Sync the source to the destination. Doesn't transfer
|
||||
unchanged files, testing first by modification time then by
|
||||
MD5SUM. Deletes any files that exist in source that don't
|
||||
exist in destination. Since this can cause data loss, test
|
||||
first with the `--dry-run` flag.
|
||||
|
||||
rclone ls [remote:path]
|
||||
|
||||
List all the objects in the the path with sizes.
|
||||
|
||||
rclone lsl [remote:path]
|
||||
|
||||
List all the objects in the the path with sizes and timestamps.
|
||||
|
||||
rclone lsd [remote:path]
|
||||
|
||||
List all directories/objects/buckets in the the path.
|
||||
|
||||
rclone mkdir remote:path
|
||||
|
||||
Make the path if it doesn't already exist
|
||||
|
||||
rclone rmdir remote:path
|
||||
|
||||
Remove the path. Note that you can't remove a path with
|
||||
objects in it, use purge for that.
|
||||
|
||||
rclone purge remote:path
|
||||
|
||||
Remove the path and all of its contents.
|
||||
|
||||
rclone check source:path dest:path
|
||||
|
||||
Checks the files in the source and destination match. It
|
||||
compares sizes and MD5SUMs and prints a report of files which
|
||||
don't match. It doesn't alter the source or destination.
|
||||
|
||||
rclone md5sum remote:path
|
||||
|
||||
Produces an md5sum file for all the objects in the path. This is in
|
||||
the same format as the standard md5sum tool produces.
|
||||
|
||||
General options:
|
||||
|
||||
```
|
||||
--checkers=8: Number of checkers to run in parallel.
|
||||
--config="~/.rclone.conf": Config file.
|
||||
-n, --dry-run=false: Do a trial run with no permanent changes
|
||||
--modify-window=1ns: Max time diff to be considered the same
|
||||
-q, --quiet=false: Print as little stuff as possible
|
||||
--stats=1m0s: Interval to print stats
|
||||
--transfers=4: Number of file transfers to run in parallel.
|
||||
-v, --verbose=false: Print lots more stuff
|
||||
```
|
||||
|
||||
Developer options:
|
||||
|
||||
```
|
||||
--cpuprofile="": Write cpu profile to file
|
||||
```
|
||||
|
||||
Local Filesystem
|
||||
----------------
|
||||
|
||||
Paths are specified as normal filesystem paths, so
|
||||
|
||||
rclone sync /home/source /tmp/destination
|
||||
|
||||
Will sync `/home/source` to `/tmp/destination`
|
||||
|
||||
Swift / Rackspace cloudfiles / Memset Memstore
|
||||
----------------------------------------------
|
||||
|
||||
Paths are specified as remote:container (or remote: for the `lsd`
|
||||
command.) You may put subdirectories in too, eg
|
||||
`remote:container/path/to/dir`.
|
||||
|
||||
So to copy a local directory to a swift container called backup:
|
||||
|
||||
rclone sync /home/source swift:backup
|
||||
|
||||
The modified time is stored as metadata on the object as
|
||||
`X-Object-Meta-Mtime` as floating point since the epoch.
|
||||
|
||||
This is a defacto standard (used in the official python-swiftclient
|
||||
amongst others) for storing the modification time (as read using
|
||||
os.Stat) for an object.
|
||||
|
||||
Amazon S3
|
||||
---------
|
||||
|
||||
Paths are specified as remote:bucket. You may put subdirectories in
|
||||
too, eg `remote:bucket/path/to/dir`.
|
||||
|
||||
So to copy a local directory to a s3 container called backup
|
||||
|
||||
rclone sync /home/source s3:backup
|
||||
|
||||
The modified time is stored as metadata on the object as
|
||||
`X-Amz-Meta-Mtime` as floating point since the epoch.
|
||||
|
||||
Google drive
|
||||
------------
|
||||
|
||||
Paths are specified as remote:path Drive paths may be as deep as required.
|
||||
|
||||
The initial setup for drive involves getting a token from Google drive
|
||||
which you need to do in your browser. `rclone config` walks you
|
||||
through it.
|
||||
|
||||
To copy a local directory to a drive directory called backup
|
||||
|
||||
rclone copy /home/source remote:backup
|
||||
|
||||
Google drive stores modification times accurate to 1 ms natively.
|
||||
|
||||
Dropbox
|
||||
-------
|
||||
|
||||
Paths are specified as remote:path Dropbox paths may be as deep as required.
|
||||
|
||||
The initial setup for dropbox involves getting a token from Dropbox
|
||||
which you need to do in your browser. `rclone config` walks you
|
||||
through it.
|
||||
|
||||
To copy a local directory to a drive directory called backup
|
||||
|
||||
rclone copy /home/source dropbox:backup
|
||||
|
||||
Md5sums and timestamps in RFC3339 format accurate to 1ns are stored in
|
||||
a Dropbox datastore called "rclone". Dropbox datastores are limited
|
||||
to 100,000 rows so this is the maximum number of files rclone can
|
||||
manage on Dropbox.
|
||||
|
||||
Google Cloud Storage
|
||||
--------------------
|
||||
|
||||
Paths are specified as remote:path Google Cloud Storage paths may be
|
||||
as deep as required.
|
||||
|
||||
The initial setup for Google Cloud Storage involves getting a token
|
||||
from Google which you need to do in your browser. `rclone config`
|
||||
walks you through it.
|
||||
|
||||
To copy a local directory to a google cloud storage directory called backup
|
||||
|
||||
rclone copy /home/source remote:backup
|
||||
|
||||
Google google cloud storage stores md5sums natively and rclone stores
|
||||
modification times as metadata on the object, under the "mtime" key in
|
||||
RFC3339 format accurate to 1ns.
|
||||
|
||||
Single file copies
|
||||
------------------
|
||||
|
||||
Rclone can copy single files
|
||||
|
||||
rclone src:path/to/file dst:path/dir
|
||||
|
||||
Or
|
||||
|
||||
rclone src:path/to/file dst:path/to/file
|
||||
|
||||
Note that you can't rename the file if you are copying from one file to another.
|
||||
|
||||
License
|
||||
-------
|
||||
|
||||
This is free software under the terms of MIT the license (check the
|
||||
COPYING file included in this package).
|
||||
|
||||
Bugs
|
||||
----
|
||||
|
||||
* Drive: Sometimes get: Failed to copy: Upload failed: googleapi: Error 403: Rate Limit Exceeded
|
||||
* quota is 100.0 requests/second/user
|
||||
* Empty directories left behind with Local and Drive
|
||||
* eg purging a local directory with subdirectories doesn't work
|
||||
|
||||
Changelog
|
||||
---------
|
||||
* v1.02 - 2014-07-19
|
||||
* Implement Dropbox remote
|
||||
* Implement Google Cloud Storage remote
|
||||
* Verify Md5sums and Sizes after copies
|
||||
* Remove times from "ls" command - lists sizes only
|
||||
* Add add "lsl" - lists times and sizes
|
||||
* Add "md5sum" command
|
||||
* v1.01 - 2014-07-04
|
||||
* drive: fix transfer of big files using up lots of memory
|
||||
* v1.00 - 2014-07-03
|
||||
* drive: fix whole second dates
|
||||
* v0.99 - 2014-06-26
|
||||
* Fix --dry-run not working
|
||||
* Make compatible with go 1.1
|
||||
* v0.98 - 2014-05-30
|
||||
* s3: Treat missing Content-Length as 0 for some ceph installations
|
||||
* rclonetest: add file with a space in
|
||||
* v0.97 - 2014-05-05
|
||||
* Implement copying of single files
|
||||
* s3 & swift: support paths inside containers/buckets
|
||||
* v0.96 - 2014-04-24
|
||||
* drive: Fix multiple files of same name being created
|
||||
* drive: Use o.Update and fs.Put to optimise transfers
|
||||
* Add version number, -V and --version
|
||||
* v0.95 - 2014-03-28
|
||||
* rclone.org: website, docs and graphics
|
||||
* drive: fix path parsing
|
||||
* v0.94 - 2014-03-27
|
||||
* Change remote format one last time
|
||||
* GNU style flags
|
||||
* v0.93 - 2014-03-16
|
||||
* drive: store token in config file
|
||||
* cross compile other versions
|
||||
* set strict permissions on config file
|
||||
* v0.92 - 2014-03-15
|
||||
* Config fixes and --config option
|
||||
* v0.91 - 2014-03-15
|
||||
* Make config file
|
||||
* v0.90 - 2013-06-27
|
||||
* Project named rclone
|
||||
* v0.00 - 2012-11-18
|
||||
* Project started
|
||||
|
||||
|
||||
Contact and support
|
||||
-------------------
|
||||
|
||||
The project website is at:
|
||||
|
||||
* https://github.com/ncw/rclone
|
||||
|
||||
There you can file bug reports, ask for help or send pull requests.
|
||||
|
||||
Authors
|
||||
-------
|
||||
|
||||
* Nick Craig-Wood <nick@craig-wood.com>
|
||||
|
||||
Contributors
|
||||
------------
|
||||
|
||||
* Your name goes here!
|
||||
|
||||
24
RELEASE.md
24
RELEASE.md
@@ -1,24 +0,0 @@
|
||||
Required software for making a release
|
||||
* [github-release](https://github.com/aktau/github-release) for uploading packages
|
||||
* [gox](https://github.com/mitchellh/gox) for cross compiling
|
||||
* Run `gox -build-toolchain`
|
||||
* This assumes you have your own source checkout
|
||||
* pandoc for making the html and man pages
|
||||
* errcheck - go get github.com/kisielk/errcheck
|
||||
* golint - go get github.com/golang/lint
|
||||
|
||||
Making a release
|
||||
* go get -t -u -f -v ./...
|
||||
* make check
|
||||
* make test
|
||||
* make tag
|
||||
* edit docs/content/changelog.md
|
||||
* make doc
|
||||
* git commit -a -v
|
||||
* make retag
|
||||
* # Set the GOPATH for a gox enabled compiler - . ~/bin/go-cross - not required for go >= 1.5
|
||||
* make cross
|
||||
* make upload
|
||||
* make upload_website
|
||||
* git push --tags origin master
|
||||
* make upload_github
|
||||
@@ -1,820 +0,0 @@
|
||||
// Package amazonclouddrive provides an interface to the Amazon Cloud
|
||||
// Drive object storage system.
|
||||
package amazonclouddrive
|
||||
|
||||
/*
|
||||
|
||||
FIXME make searching for directory in id and file in id more efficient
|
||||
- use the name: search parameter - remember the escaping rules
|
||||
- use Folder GetNode and GetFile
|
||||
|
||||
FIXME make the default for no files and no dirs be (FILE & FOLDER) so
|
||||
we ignore assets completely!
|
||||
*/
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"log"
|
||||
"net/http"
|
||||
"regexp"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/ncw/go-acd"
|
||||
"github.com/ncw/rclone/dircache"
|
||||
"github.com/ncw/rclone/fs"
|
||||
"github.com/ncw/rclone/oauthutil"
|
||||
"github.com/ncw/rclone/pacer"
|
||||
"github.com/spf13/pflag"
|
||||
"golang.org/x/oauth2"
|
||||
)
|
||||
|
||||
const (
|
||||
rcloneClientID = "amzn1.application-oa2-client.6bf18d2d1f5b485c94c8988bb03ad0e7"
|
||||
rcloneEncryptedClientSecret = "k8/NyszKm5vEkZXAwsbGkd6C3NrbjIqMg4qEhIeF14Szub2wur+/teS3ubXgsLe9//+tr/qoqK+lq6mg8vWkoA=="
|
||||
folderKind = "FOLDER"
|
||||
fileKind = "FILE"
|
||||
assetKind = "ASSET"
|
||||
statusAvailable = "AVAILABLE"
|
||||
timeFormat = time.RFC3339 // 2014-03-07T22:31:12.173Z
|
||||
minSleep = 20 * time.Millisecond
|
||||
warnFileSize = 50 << 30 // Display warning for files larger than this size
|
||||
)
|
||||
|
||||
// Globals
|
||||
var (
|
||||
// Flags
|
||||
tempLinkThreshold = fs.SizeSuffix(9 << 30) // Download files bigger than this via the tempLink
|
||||
// Description of how to auth for this app
|
||||
acdConfig = &oauth2.Config{
|
||||
Scopes: []string{"clouddrive:read_all", "clouddrive:write"},
|
||||
Endpoint: oauth2.Endpoint{
|
||||
AuthURL: "https://www.amazon.com/ap/oa",
|
||||
TokenURL: "https://api.amazon.com/auth/o2/token",
|
||||
},
|
||||
ClientID: rcloneClientID,
|
||||
ClientSecret: fs.Reveal(rcloneEncryptedClientSecret),
|
||||
RedirectURL: oauthutil.RedirectURL,
|
||||
}
|
||||
)
|
||||
|
||||
// Register with Fs
|
||||
func init() {
|
||||
fs.Register(&fs.RegInfo{
|
||||
Name: "amazon cloud drive",
|
||||
Description: "Amazon Cloud Drive",
|
||||
NewFs: NewFs,
|
||||
Config: func(name string) {
|
||||
err := oauthutil.Config("amazon cloud drive", name, acdConfig)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to configure token: %v", err)
|
||||
}
|
||||
},
|
||||
Options: []fs.Option{{
|
||||
Name: fs.ConfigClientID,
|
||||
Help: "Amazon Application Client Id - leave blank normally.",
|
||||
}, {
|
||||
Name: fs.ConfigClientSecret,
|
||||
Help: "Amazon Application Client Secret - leave blank normally.",
|
||||
}},
|
||||
})
|
||||
pflag.VarP(&tempLinkThreshold, "acd-templink-threshold", "", "Files >= this size will be downloaded via their tempLink.")
|
||||
}
|
||||
|
||||
// Fs represents a remote acd server
|
||||
type Fs struct {
|
||||
name string // name of this remote
|
||||
c *acd.Client // the connection to the acd server
|
||||
noAuthClient *http.Client // unauthenticated http client
|
||||
root string // the path we are working on
|
||||
dirCache *dircache.DirCache // Map of directory path to directory id
|
||||
pacer *pacer.Pacer // pacer for API calls
|
||||
}
|
||||
|
||||
// Object describes a acd object
|
||||
//
|
||||
// Will definitely have info but maybe not meta
|
||||
type Object struct {
|
||||
fs *Fs // what this object is part of
|
||||
remote string // The remote path
|
||||
info *acd.Node // Info from the acd object if known
|
||||
}
|
||||
|
||||
// ------------------------------------------------------------
|
||||
|
||||
// Name of the remote (as passed into NewFs)
|
||||
func (f *Fs) Name() string {
|
||||
return f.name
|
||||
}
|
||||
|
||||
// Root of the remote (as passed into NewFs)
|
||||
func (f *Fs) Root() string {
|
||||
return f.root
|
||||
}
|
||||
|
||||
// String converts this Fs to a string
|
||||
func (f *Fs) String() string {
|
||||
return fmt.Sprintf("Amazon cloud drive root '%s'", f.root)
|
||||
}
|
||||
|
||||
// Pattern to match a acd path
|
||||
var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)
|
||||
|
||||
// parsePath parses an acd 'url'
|
||||
func parsePath(path string) (root string) {
|
||||
root = strings.Trim(path, "/")
|
||||
return
|
||||
}
|
||||
|
||||
// retryErrorCodes is a slice of error codes that we will retry
|
||||
var retryErrorCodes = []int{
|
||||
400, // Bad request (seen in "Next token is expired")
|
||||
401, // Unauthorized (seen in "Token has expired")
|
||||
408, // Request Timeout
|
||||
429, // Rate exceeded.
|
||||
500, // Get occasional 500 Internal Server Error
|
||||
503, // Service Unavailable
|
||||
504, // Gateway Time-out
|
||||
}
|
||||
|
||||
// shouldRetry returns a boolean as to whether this resp and err
|
||||
// deserve to be retried. It returns the err as a convenience
|
||||
func shouldRetry(resp *http.Response, err error) (bool, error) {
|
||||
if err == io.EOF {
|
||||
return true, err
|
||||
}
|
||||
return fs.ShouldRetry(err) || fs.ShouldRetryHTTP(resp, retryErrorCodes), err
|
||||
}
|
||||
|
||||
// NewFs constructs an Fs from the path, container:path
|
||||
func NewFs(name, root string) (fs.Fs, error) {
|
||||
root = parsePath(root)
|
||||
oAuthClient, err := oauthutil.NewClient(name, acdConfig)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to configure amazon cloud drive: %v", err)
|
||||
}
|
||||
|
||||
c := acd.NewClient(oAuthClient)
|
||||
c.UserAgent = fs.UserAgent
|
||||
f := &Fs{
|
||||
name: name,
|
||||
root: root,
|
||||
c: c,
|
||||
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.AmazonCloudDrivePacer),
|
||||
noAuthClient: fs.Config.Client(),
|
||||
}
|
||||
|
||||
// Update endpoints
|
||||
var resp *http.Response
|
||||
err = f.pacer.Call(func() (bool, error) {
|
||||
_, resp, err = f.c.Account.GetEndpoints()
|
||||
return shouldRetry(resp, err)
|
||||
})
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Failed to get endpoints: %v", err)
|
||||
}
|
||||
|
||||
// Get rootID
|
||||
var rootInfo *acd.Folder
|
||||
err = f.pacer.Call(func() (bool, error) {
|
||||
rootInfo, resp, err = f.c.Nodes.GetRoot()
|
||||
return shouldRetry(resp, err)
|
||||
})
|
||||
if err != nil || rootInfo.Id == nil {
|
||||
return nil, fmt.Errorf("Failed to get root: %v", err)
|
||||
}
|
||||
|
||||
f.dirCache = dircache.New(root, *rootInfo.Id, f)
|
||||
|
||||
// Find the current root
|
||||
err = f.dirCache.FindRoot(false)
|
||||
if err != nil {
|
||||
// Assume it is a file
|
||||
newRoot, remote := dircache.SplitPath(root)
|
||||
newF := *f
|
||||
newF.dirCache = dircache.New(newRoot, *rootInfo.Id, &newF)
|
||||
newF.root = newRoot
|
||||
// Make new Fs which is the parent
|
||||
err = newF.dirCache.FindRoot(false)
|
||||
if err != nil {
|
||||
// No root so return old f
|
||||
return f, nil
|
||||
}
|
||||
obj := newF.newFsObjectWithInfo(remote, nil)
|
||||
if obj == nil {
|
||||
// File doesn't exist so return old f
|
||||
return f, nil
|
||||
}
|
||||
// return a Fs Limited to this object
|
||||
return fs.NewLimited(&newF, obj), nil
|
||||
}
|
||||
return f, nil
|
||||
}
|
||||
|
||||
// Return an FsObject from a path
|
||||
//
|
||||
// May return nil if an error occurred
|
||||
func (f *Fs) newFsObjectWithInfo(remote string, info *acd.Node) fs.Object {
|
||||
o := &Object{
|
||||
fs: f,
|
||||
remote: remote,
|
||||
}
|
||||
if info != nil {
|
||||
// Set info but not meta
|
||||
o.info = info
|
||||
} else {
|
||||
err := o.readMetaData() // reads info and meta, returning an error
|
||||
if err != nil {
|
||||
// logged already FsDebug("Failed to read info: %s", err)
|
||||
return nil
|
||||
}
|
||||
}
|
||||
return o
|
||||
}
|
||||
|
||||
// NewFsObject returns an FsObject from a path
|
||||
//
|
||||
// May return nil if an error occurred
|
||||
func (f *Fs) NewFsObject(remote string) fs.Object {
|
||||
return f.newFsObjectWithInfo(remote, nil)
|
||||
}
|
||||
|
||||
// FindLeaf finds a directory of name leaf in the folder with ID pathID
|
||||
func (f *Fs) FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err error) {
|
||||
//fs.Debug(f, "FindLeaf(%q, %q)", pathID, leaf)
|
||||
folder := acd.FolderFromId(pathID, f.c.Nodes)
|
||||
var resp *http.Response
|
||||
var subFolder *acd.Folder
|
||||
err = f.pacer.Call(func() (bool, error) {
|
||||
subFolder, resp, err = folder.GetFolder(leaf)
|
||||
return shouldRetry(resp, err)
|
||||
})
|
||||
if err != nil {
|
||||
if err == acd.ErrorNodeNotFound {
|
||||
//fs.Debug(f, "...Not found")
|
||||
return "", false, nil
|
||||
}
|
||||
//fs.Debug(f, "...Error %v", err)
|
||||
return "", false, err
|
||||
}
|
||||
if subFolder.Status != nil && *subFolder.Status != statusAvailable {
|
||||
fs.Debug(f, "Ignoring folder %q in state %q", leaf, *subFolder.Status)
|
||||
time.Sleep(1 * time.Second) // FIXME wait for problem to go away!
|
||||
return "", false, nil
|
||||
}
|
||||
//fs.Debug(f, "...Found(%q, %v)", *subFolder.Id, leaf)
|
||||
return *subFolder.Id, true, nil
|
||||
}
|
||||
|
||||
// CreateDir makes a directory with pathID as parent and name leaf
|
||||
func (f *Fs) CreateDir(pathID, leaf string) (newID string, err error) {
|
||||
//fmt.Printf("CreateDir(%q, %q)\n", pathID, leaf)
|
||||
folder := acd.FolderFromId(pathID, f.c.Nodes)
|
||||
var resp *http.Response
|
||||
var info *acd.Folder
|
||||
err = f.pacer.Call(func() (bool, error) {
|
||||
info, resp, err = folder.CreateFolder(leaf)
|
||||
return shouldRetry(resp, err)
|
||||
})
|
||||
if err != nil {
|
||||
//fmt.Printf("...Error %v\n", err)
|
||||
return "", err
|
||||
}
|
||||
//fmt.Printf("...Id %q\n", *info.Id)
|
||||
return *info.Id, nil
|
||||
}
|
||||
|
||||
// list the objects into the function supplied
|
||||
//
|
||||
// If directories is set it only sends directories
|
||||
// User function to process a File item from listAll
|
||||
//
|
||||
// Should return true to finish processing
|
||||
type listAllFn func(*acd.Node) bool
|
||||
|
||||
// Lists the directory required calling the user function on each item found
|
||||
//
|
||||
// If the user fn ever returns true then it early exits with found = true
|
||||
func (f *Fs) listAll(dirID string, title string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
|
||||
query := "parents:" + dirID
|
||||
if directoriesOnly {
|
||||
query += " AND kind:" + folderKind
|
||||
} else if filesOnly {
|
||||
query += " AND kind:" + fileKind
|
||||
} else {
|
||||
// FIXME none of these work
|
||||
//query += " AND kind:(" + fileKind + " OR " + folderKind + ")"
|
||||
//query += " AND (kind:" + fileKind + " OR kind:" + folderKind + ")"
|
||||
}
|
||||
opts := acd.NodeListOptions{
|
||||
Filters: query,
|
||||
}
|
||||
var nodes []*acd.Node
|
||||
//var resp *http.Response
|
||||
OUTER:
|
||||
for {
|
||||
var resp *http.Response
|
||||
err = f.pacer.Call(func() (bool, error) {
|
||||
nodes, resp, err = f.c.Nodes.GetNodes(&opts)
|
||||
return shouldRetry(resp, err)
|
||||
})
|
||||
if err != nil {
|
||||
fs.Stats.Error()
|
||||
fs.ErrorLog(f, "Couldn't list files: %v", err)
|
||||
break
|
||||
}
|
||||
if nodes == nil {
|
||||
break
|
||||
}
|
||||
for _, node := range nodes {
|
||||
if node.Name != nil && node.Id != nil && node.Kind != nil && node.Status != nil {
|
||||
// Ignore nodes if not AVAILABLE
|
||||
if *node.Status != statusAvailable {
|
||||
continue
|
||||
}
|
||||
if fn(node) {
|
||||
found = true
|
||||
break OUTER
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// Path should be directory path either "" or "path/"
|
||||
//
|
||||
// List the directory using a recursive list from the root
|
||||
//
|
||||
// This fetches the minimum amount of stuff but does more API calls
|
||||
// which makes it slow
|
||||
func (f *Fs) listDirRecursive(dirID string, path string, out fs.ObjectsChan) error {
|
||||
var subError error
|
||||
// Make the API request
|
||||
var wg sync.WaitGroup
|
||||
_, err := f.listAll(dirID, "", false, false, func(node *acd.Node) bool {
|
||||
// Recurse on directories
|
||||
switch *node.Kind {
|
||||
case folderKind:
|
||||
wg.Add(1)
|
||||
folder := path + *node.Name + "/"
|
||||
fs.Debug(f, "Reading %s", folder)
|
||||
go func() {
|
||||
defer wg.Done()
|
||||
err := f.listDirRecursive(*node.Id, folder, out)
|
||||
if err != nil {
|
||||
subError = err
|
||||
fs.ErrorLog(f, "Error reading %s:%s", folder, err)
|
||||
}
|
||||
}()
|
||||
return false
|
||||
case fileKind:
|
||||
if fs := f.newFsObjectWithInfo(path+*node.Name, node); fs != nil {
|
||||
out <- fs
|
||||
}
|
||||
default:
|
||||
// ignore ASSET etc
|
||||
}
|
||||
return false
|
||||
})
|
||||
wg.Wait()
|
||||
fs.Debug(f, "Finished reading %s", path)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if subError != nil {
|
||||
return subError
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Path should be directory path either "" or "path/"
|
||||
//
|
||||
// List the directory using a recursive list from the root
|
||||
//
|
||||
// This fetches the minimum amount of stuff but does more API calls
|
||||
// which makes it slow
|
||||
func (f *Fs) listDirNonRecursive(dirID string, path string, out fs.ObjectsChan) error {
|
||||
// Start some directory listing go routines
|
||||
var wg sync.WaitGroup // sync closing of go routines
|
||||
var traversing sync.WaitGroup // running directory traversals
|
||||
type dirListJob struct {
|
||||
dirID string
|
||||
path string
|
||||
}
|
||||
in := make(chan dirListJob, fs.Config.Checkers)
|
||||
errs := make(chan error, fs.Config.Checkers)
|
||||
for i := 0; i < fs.Config.Checkers; i++ {
|
||||
wg.Add(1)
|
||||
go func() {
|
||||
defer wg.Done()
|
||||
for job := range in {
|
||||
var jobs []dirListJob
|
||||
fs.Debug(f, "Reading %q", job.path)
|
||||
// Make the API request
|
||||
_, err := f.listAll(job.dirID, "", false, false, func(node *acd.Node) bool {
|
||||
// Recurse on directories
|
||||
switch *node.Kind {
|
||||
case folderKind:
|
||||
jobs = append(jobs, dirListJob{dirID: *node.Id, path: job.path + *node.Name + "/"})
|
||||
case fileKind:
|
||||
if fs := f.newFsObjectWithInfo(job.path+*node.Name, node); fs != nil {
|
||||
out <- fs
|
||||
}
|
||||
default:
|
||||
// ignore ASSET etc
|
||||
}
|
||||
return false
|
||||
})
|
||||
fs.Debug(f, "Finished reading %q", job.path)
|
||||
if err != nil {
|
||||
fs.ErrorLog(f, "Error reading %s: %s", path, err)
|
||||
errs <- err
|
||||
}
|
||||
// FIXME stop traversal on error?
|
||||
traversing.Add(len(jobs))
|
||||
go func() {
|
||||
// Now we have traversed this directory, send these jobs off for traversal in
|
||||
// the background
|
||||
for _, job := range jobs {
|
||||
in <- job
|
||||
}
|
||||
}()
|
||||
traversing.Done()
|
||||
}
|
||||
}()
|
||||
}
|
||||
|
||||
// Collect the errors
|
||||
wg.Add(1)
|
||||
var errResult error
|
||||
go func() {
|
||||
defer wg.Done()
|
||||
for err := range errs {
|
||||
errResult = err
|
||||
}
|
||||
}()
|
||||
|
||||
// Start the process
|
||||
traversing.Add(1)
|
||||
in <- dirListJob{dirID: dirID, path: path}
|
||||
traversing.Wait()
|
||||
close(in)
|
||||
close(errs)
|
||||
wg.Wait()
|
||||
|
||||
return errResult
|
||||
}
|
||||
|
||||
// List walks the path returning a channel of FsObjects
|
||||
func (f *Fs) List() fs.ObjectsChan {
|
||||
out := make(fs.ObjectsChan, fs.Config.Checkers)
|
||||
go func() {
|
||||
defer close(out)
|
||||
err := f.dirCache.FindRoot(false)
|
||||
if err != nil {
|
||||
fs.Stats.Error()
|
||||
fs.ErrorLog(f, "Couldn't find root: %s", err)
|
||||
} else {
|
||||
err = f.listDirNonRecursive(f.dirCache.RootID(), "", out)
|
||||
if err != nil {
|
||||
fs.Stats.Error()
|
||||
fs.ErrorLog(f, "List failed: %s", err)
|
||||
}
|
||||
}
|
||||
}()
|
||||
return out
|
||||
}
|
||||
|
||||
// ListDir lists the directories
|
||||
func (f *Fs) ListDir() fs.DirChan {
|
||||
out := make(fs.DirChan, fs.Config.Checkers)
|
||||
go func() {
|
||||
defer close(out)
|
||||
err := f.dirCache.FindRoot(false)
|
||||
if err != nil {
|
||||
fs.Stats.Error()
|
||||
fs.ErrorLog(f, "Couldn't find root: %s", err)
|
||||
} else {
|
||||
_, err := f.listAll(f.dirCache.RootID(), "", true, false, func(item *acd.Node) bool {
|
||||
dir := &fs.Dir{
|
||||
Name: *item.Name,
|
||||
Bytes: -1,
|
||||
Count: -1,
|
||||
}
|
||||
dir.When, _ = time.Parse(timeFormat, *item.ModifiedDate)
|
||||
out <- dir
|
||||
return false
|
||||
})
|
||||
if err != nil {
|
||||
fs.Stats.Error()
|
||||
fs.ErrorLog(f, "ListDir failed: %s", err)
|
||||
}
|
||||
}
|
||||
}()
|
||||
return out
|
||||
}
|
||||
|
||||
// Put the object into the container
|
||||
//
|
||||
// Copy the reader in to the new object which is returned
|
||||
//
|
||||
// The new object may have been created if an error is returned
|
||||
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo) (fs.Object, error) {
|
||||
remote := src.Remote()
|
||||
size := src.Size()
|
||||
// Temporary Object under construction
|
||||
o := &Object{
|
||||
fs: f,
|
||||
remote: remote,
|
||||
}
|
||||
leaf, directoryID, err := f.dirCache.FindPath(remote, true)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if size > warnFileSize {
|
||||
fs.Debug(f, "Warning: file %q may fail because it is too big. Use --max-size=%dGB to skip large files.", remote, warnFileSize>>30)
|
||||
}
|
||||
folder := acd.FolderFromId(directoryID, o.fs.c.Nodes)
|
||||
var info *acd.File
|
||||
var resp *http.Response
|
||||
err = f.pacer.CallNoRetry(func() (bool, error) {
|
||||
if src.Size() != 0 {
|
||||
info, resp, err = folder.Put(in, leaf)
|
||||
} else {
|
||||
info, resp, err = folder.PutSized(in, size, leaf)
|
||||
}
|
||||
return shouldRetry(resp, err)
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
o.info = info.Node
|
||||
return o, nil
|
||||
}
|
||||
|
||||
// Mkdir creates the container if it doesn't exist
|
||||
func (f *Fs) Mkdir() error {
|
||||
return f.dirCache.FindRoot(true)
|
||||
}
|
||||
|
||||
// purgeCheck remotes the root directory, if check is set then it
|
||||
// refuses to do so if it has anything in
|
||||
func (f *Fs) purgeCheck(check bool) error {
|
||||
if f.root == "" {
|
||||
return fmt.Errorf("Can't purge root directory")
|
||||
}
|
||||
dc := f.dirCache
|
||||
err := dc.FindRoot(false)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
rootID := dc.RootID()
|
||||
|
||||
if check {
|
||||
// check directory is empty
|
||||
empty := true
|
||||
_, err = f.listAll(rootID, "", false, false, func(node *acd.Node) bool {
|
||||
switch *node.Kind {
|
||||
case folderKind:
|
||||
empty = false
|
||||
return true
|
||||
case fileKind:
|
||||
empty = false
|
||||
return true
|
||||
default:
|
||||
fs.Debug("Found ASSET %s", *node.Id)
|
||||
}
|
||||
return false
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if !empty {
|
||||
return fmt.Errorf("Directory not empty")
|
||||
}
|
||||
}
|
||||
|
||||
node := acd.NodeFromId(rootID, f.c.Nodes)
|
||||
var resp *http.Response
|
||||
err = f.pacer.Call(func() (bool, error) {
|
||||
resp, err = node.Trash()
|
||||
return shouldRetry(resp, err)
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
f.dirCache.ResetRoot()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Rmdir deletes the root folder
|
||||
//
|
||||
// Returns an error if it isn't empty
|
||||
func (f *Fs) Rmdir() error {
|
||||
return f.purgeCheck(true)
|
||||
}
|
||||
|
||||
// Precision return the precision of this Fs
|
||||
func (f *Fs) Precision() time.Duration {
|
||||
return fs.ModTimeNotSupported
|
||||
}
|
||||
|
||||
// Hashes returns the supported hash sets.
|
||||
func (f *Fs) Hashes() fs.HashSet {
|
||||
return fs.HashSet(fs.HashMD5)
|
||||
}
|
||||
|
||||
// Copy src to this remote using server side copy operations.
|
||||
//
|
||||
// This is stored with the remote path given
|
||||
//
|
||||
// It returns the destination Object and a possible error
|
||||
//
|
||||
// Will only be called if src.Fs().Name() == f.Name()
|
||||
//
|
||||
// If it isn't possible then return fs.ErrorCantCopy
|
||||
//func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
|
||||
// srcObj, ok := src.(*Object)
|
||||
// if !ok {
|
||||
// fs.Debug(src, "Can't copy - not same remote type")
|
||||
// return nil, fs.ErrorCantCopy
|
||||
// }
|
||||
// srcFs := srcObj.fs
|
||||
// _, err := f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil)
|
||||
// if err != nil {
|
||||
// return nil, err
|
||||
// }
|
||||
// return f.NewFsObject(remote), nil
|
||||
//}
|
||||
|
||||
// Purge deletes all the files and the container
|
||||
//
|
||||
// Optional interface: Only implement this if you have a way of
|
||||
// deleting all the files quicker than just running Remove() on the
|
||||
// result of List()
|
||||
func (f *Fs) Purge() error {
|
||||
return f.purgeCheck(false)
|
||||
}
|
||||
|
||||
// ------------------------------------------------------------
|
||||
|
||||
// Fs returns the parent Fs
|
||||
func (o *Object) Fs() fs.Info {
|
||||
return o.fs
|
||||
}
|
||||
|
||||
// Return a string version
|
||||
func (o *Object) String() string {
|
||||
if o == nil {
|
||||
return "<nil>"
|
||||
}
|
||||
return o.remote
|
||||
}
|
||||
|
||||
// Remote returns the remote path
|
||||
func (o *Object) Remote() string {
|
||||
return o.remote
|
||||
}
|
||||
|
||||
// Hash returns the Md5sum of an object returning a lowercase hex string
|
||||
func (o *Object) Hash(t fs.HashType) (string, error) {
|
||||
if t != fs.HashMD5 {
|
||||
return "", fs.ErrHashUnsupported
|
||||
}
|
||||
if o.info.ContentProperties.Md5 != nil {
|
||||
return *o.info.ContentProperties.Md5, nil
|
||||
}
|
||||
return "", nil
|
||||
}
|
||||
|
||||
// Size returns the size of an object in bytes
|
||||
func (o *Object) Size() int64 {
|
||||
return int64(*o.info.ContentProperties.Size)
|
||||
}
|
||||
|
||||
// readMetaData gets the metadata if it hasn't already been fetched
|
||||
//
|
||||
// it also sets the info
|
||||
func (o *Object) readMetaData() (err error) {
|
||||
if o.info != nil {
|
||||
return nil
|
||||
}
|
||||
leaf, directoryID, err := o.fs.dirCache.FindPath(o.remote, false)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
folder := acd.FolderFromId(directoryID, o.fs.c.Nodes)
|
||||
var resp *http.Response
|
||||
var info *acd.File
|
||||
err = o.fs.pacer.Call(func() (bool, error) {
|
||||
info, resp, err = folder.GetFile(leaf)
|
||||
return shouldRetry(resp, err)
|
||||
})
|
||||
if err != nil {
|
||||
fs.Debug(o, "Failed to read info: %s", err)
|
||||
return err
|
||||
}
|
||||
o.info = info.Node
|
||||
return nil
|
||||
}
|
||||
|
||||
// ModTime returns the modification time of the object
|
||||
//
|
||||
//
|
||||
// It attempts to read the objects mtime and if that isn't present the
|
||||
// LastModified returned in the http headers
|
||||
func (o *Object) ModTime() time.Time {
|
||||
err := o.readMetaData()
|
||||
if err != nil {
|
||||
fs.Log(o, "Failed to read metadata: %s", err)
|
||||
return time.Now()
|
||||
}
|
||||
modTime, err := time.Parse(timeFormat, *o.info.ModifiedDate)
|
||||
if err != nil {
|
||||
fs.Log(o, "Failed to read mtime from object: %s", err)
|
||||
return time.Now()
|
||||
}
|
||||
return modTime
|
||||
}
|
||||
|
||||
// SetModTime sets the modification time of the local fs object
|
||||
func (o *Object) SetModTime(modTime time.Time) error {
|
||||
// FIXME not implemented
|
||||
return fs.ErrorCantSetModTime
|
||||
}
|
||||
|
||||
// Storable returns a boolean showing whether this object storable
|
||||
func (o *Object) Storable() bool {
|
||||
return true
|
||||
}
|
||||
|
||||
// Open an object for read
|
||||
func (o *Object) Open() (in io.ReadCloser, err error) {
|
||||
bigObject := o.Size() >= int64(tempLinkThreshold)
|
||||
if bigObject {
|
||||
fs.Debug(o, "Dowloading large object via tempLink")
|
||||
}
|
||||
file := acd.File{Node: o.info}
|
||||
var resp *http.Response
|
||||
err = o.fs.pacer.Call(func() (bool, error) {
|
||||
if !bigObject {
|
||||
in, resp, err = file.Open()
|
||||
} else {
|
||||
in, resp, err = file.OpenTempURL(o.fs.noAuthClient)
|
||||
}
|
||||
return shouldRetry(resp, err)
|
||||
})
|
||||
return in, err
|
||||
}
|
||||
|
||||
// Update the object with the contents of the io.Reader, modTime and size
|
||||
//
|
||||
// The new object may have been created if an error is returned
|
||||
func (o *Object) Update(in io.Reader, src fs.ObjectInfo) error {
|
||||
size := src.Size()
|
||||
file := acd.File{Node: o.info}
|
||||
var info *acd.File
|
||||
var resp *http.Response
|
||||
var err error
|
||||
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
|
||||
if size != 0 {
|
||||
info, resp, err = file.OverwriteSized(in, size)
|
||||
} else {
|
||||
info, resp, err = file.Overwrite(in)
|
||||
}
|
||||
return shouldRetry(resp, err)
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
o.info = info.Node
|
||||
return nil
|
||||
}
|
||||
|
||||
// Remove an object
|
||||
func (o *Object) Remove() error {
|
||||
var resp *http.Response
|
||||
var err error
|
||||
err = o.fs.pacer.Call(func() (bool, error) {
|
||||
resp, err = o.info.Trash()
|
||||
return shouldRetry(resp, err)
|
||||
})
|
||||
return err
|
||||
}
|
||||
|
||||
// Check the interfaces are satisfied
|
||||
var (
|
||||
_ fs.Fs = (*Fs)(nil)
|
||||
_ fs.Purger = (*Fs)(nil)
|
||||
// _ fs.Copier = (*Fs)(nil)
|
||||
// _ fs.Mover = (*Fs)(nil)
|
||||
// _ fs.DirMover = (*Fs)(nil)
|
||||
_ fs.Object = (*Object)(nil)
|
||||
)
|
||||
@@ -1,56 +0,0 @@
|
||||
// Test AmazonCloudDrive filesystem interface
|
||||
//
|
||||
// Automatically generated - DO NOT EDIT
|
||||
// Regenerate with: make gen_tests
|
||||
package amazonclouddrive_test
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/ncw/rclone/amazonclouddrive"
|
||||
"github.com/ncw/rclone/fs"
|
||||
"github.com/ncw/rclone/fstest/fstests"
|
||||
)
|
||||
|
||||
func init() {
|
||||
fstests.NilObject = fs.Object((*amazonclouddrive.Object)(nil))
|
||||
fstests.RemoteName = "TestAmazonCloudDrive:"
|
||||
}
|
||||
|
||||
// Generic tests for the Fs
|
||||
func TestInit(t *testing.T) { fstests.TestInit(t) }
|
||||
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
|
||||
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
|
||||
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
|
||||
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
|
||||
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
|
||||
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
|
||||
func TestFsNewFsObjectNotFound(t *testing.T) { fstests.TestFsNewFsObjectNotFound(t) }
|
||||
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
|
||||
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
|
||||
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
|
||||
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
|
||||
func TestFsListRoot(t *testing.T) { fstests.TestFsListRoot(t) }
|
||||
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
|
||||
func TestFsNewFsObject(t *testing.T) { fstests.TestFsNewFsObject(t) }
|
||||
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
|
||||
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
|
||||
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
|
||||
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
|
||||
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
|
||||
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
|
||||
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
|
||||
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
|
||||
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
|
||||
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
|
||||
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
|
||||
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
|
||||
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
|
||||
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
|
||||
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
|
||||
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
|
||||
func TestLimitedFs(t *testing.T) { fstests.TestLimitedFs(t) }
|
||||
func TestLimitedFsNotFound(t *testing.T) { fstests.TestLimitedFsNotFound(t) }
|
||||
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
|
||||
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
|
||||
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }
|
||||
20
appveyor.yml
20
appveyor.yml
@@ -1,20 +0,0 @@
|
||||
version: "{build}"
|
||||
|
||||
os: Windows Server 2012 R2
|
||||
|
||||
clone_folder: c:\gopath\src\github.com\ncw\rclone
|
||||
|
||||
environment:
|
||||
GOPATH: c:\gopath
|
||||
|
||||
install:
|
||||
- echo %PATH%
|
||||
- echo %GOPATH%
|
||||
- go version
|
||||
- go env
|
||||
- go get -t -d ./...
|
||||
|
||||
build_script:
|
||||
- go vet ./...
|
||||
- go test -cpu=2 ./...
|
||||
- go test -cpu=2 -short -race ./...
|
||||
151
b2/api/types.go
151
b2/api/types.go
@@ -1,151 +0,0 @@
|
||||
package api
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strconv"
|
||||
"time"
|
||||
)
|
||||
|
||||
// Error describes a B2 error response
|
||||
type Error struct {
|
||||
Status int `json:"status"` // The numeric HTTP status code. Always matches the status in the HTTP response.
|
||||
Code string `json:"code"` // A single-identifier code that identifies the error.
|
||||
Message string `json:"message"` // A human-readable message, in English, saying what went wrong.
|
||||
}
|
||||
|
||||
// Error statisfies the error interface
|
||||
func (e *Error) Error() string {
|
||||
return fmt.Sprintf("%s (%d %s)", e.Message, e.Status, e.Code)
|
||||
}
|
||||
|
||||
// Account describes a B2 account
|
||||
type Account struct {
|
||||
ID string `json:"accountId"` // The identifier for the account.
|
||||
}
|
||||
|
||||
// Bucket describes a B2 bucket
|
||||
type Bucket struct {
|
||||
ID string `json:"bucketId"`
|
||||
AccountID string `json:"accountId"`
|
||||
Name string `json:"bucketName"`
|
||||
Type string `json:"bucketType"`
|
||||
}
|
||||
|
||||
// Timestamp is a UTC time when this file was uploaded. It is a base
|
||||
// 10 number of milliseconds since midnight, January 1, 1970 UTC. This
|
||||
// fits in a 64 bit integer such as the type "long" in the programming
|
||||
// language Java. It is intended to be compatible with Java's time
|
||||
// long. For example, it can be passed directly into the java call
|
||||
// Date.setTime(long time).
|
||||
type Timestamp time.Time
|
||||
|
||||
// MarshalJSON turns a Timestamp into JSON (in UTC)
|
||||
func (t *Timestamp) MarshalJSON() (out []byte, err error) {
|
||||
timestamp := (*time.Time)(t).UTC().UnixNano()
|
||||
return []byte(strconv.FormatInt(timestamp/1E6, 10)), nil
|
||||
}
|
||||
|
||||
// UnmarshalJSON turns JSON into a Timestamp
|
||||
func (t *Timestamp) UnmarshalJSON(data []byte) error {
|
||||
timestamp, err := strconv.ParseInt(string(data), 10, 64)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
*t = Timestamp(time.Unix(timestamp/1E3, (timestamp%1E3)*1E6))
|
||||
return nil
|
||||
}
|
||||
|
||||
// File is info about a file
|
||||
type File struct {
|
||||
ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
|
||||
Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name.
|
||||
Action string `json:"action"` // Either "upload" or "hide". "upload" means a file that was uploaded to B2 Cloud Storage. "hide" means a file version marking the file as hidden, so that it will not show up in b2_list_file_names. The result of b2_list_file_names will contain only "upload". The result of b2_list_file_versions may have both.
|
||||
Size int64 `json:"size"` // The number of bytes in the file.
|
||||
UploadTimestamp Timestamp `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded.
|
||||
SHA1 string `json:"contentSha1"` // The SHA1 of the bytes stored in the file.
|
||||
ContentType string `json:"contentType"` // The MIME type of the file.
|
||||
Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
|
||||
}
|
||||
|
||||
// AuthorizeAccountResponse is as returned from the b2_authorize_account call
|
||||
type AuthorizeAccountResponse struct {
|
||||
AccountID string `json:"accountId"` // The identifier for the account.
|
||||
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
|
||||
APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
|
||||
DownloadURL string `json:"downloadUrl"` // The base URL to use for downloading files.
|
||||
}
|
||||
|
||||
// ListBucketsResponse is as returned from the b2_list_buckets call
|
||||
type ListBucketsResponse struct {
|
||||
Buckets []Bucket `json:"buckets"`
|
||||
}
|
||||
|
||||
// ListFileNamesRequest is as passed to b2_list_file_names or b2_list_file_versions
|
||||
type ListFileNamesRequest struct {
|
||||
BucketID string `json:"bucketId"` // required - The bucket to look for file names in.
|
||||
StartFileName string `json:"startFileName,omitempty"` // optional - The first file name to return. If there is a file with this name, it will be returned in the list. If not, the first file name after this the first one after this name.
|
||||
MaxFileCount int `json:"maxFileCount,omitempty"` // optional - The maximum number of files to return from this call. The default value is 100, and the maximum allowed is 1000.
|
||||
StartFileID string `json:"startFileId,omitempty"` // optional - What to pass in to startFileId for the next search to continue where this one left off.
|
||||
}
|
||||
|
||||
// ListFileNamesResponse is as received from b2_list_file_names or b2_list_file_versions
|
||||
type ListFileNamesResponse struct {
|
||||
Files []File `json:"files"` // An array of objects, each one describing one file.
|
||||
NextFileName *string `json:"nextFileName"` // What to pass in to startFileName for the next search to continue where this one left off, or null if there are no more files.
|
||||
NextFileID *string `json:"nextFileId"` // What to pass in to startFileId for the next search to continue where this one left off, or null if there are no more files.
|
||||
}
|
||||
|
||||
// GetUploadURLRequest is passed to b2_get_upload_url
|
||||
type GetUploadURLRequest struct {
|
||||
BucketID string `json:"bucketId"` // The ID of the bucket that you want to upload to.
|
||||
}
|
||||
|
||||
// GetUploadURLResponse is received from b2_get_upload_url
|
||||
type GetUploadURLResponse struct {
|
||||
BucketID string `json:"bucketId"` // The unique ID of the bucket.
|
||||
UploadURL string `json:"uploadUrl"` // The URL that can be used to upload files to this bucket, see b2_upload_file.
|
||||
AuthorizationToken string `json:"authorizationToken"` // The authorizationToken that must be used when uploading files to this bucket, see b2_upload_file.
|
||||
}
|
||||
|
||||
// FileInfo is received from b2_upload_file and b2_get_file_info
|
||||
type FileInfo struct {
|
||||
ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
|
||||
Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name.
|
||||
Action string `json:"action"` // Either "upload" or "hide". "upload" means a file that was uploaded to B2 Cloud Storage. "hide" means a file version marking the file as hidden, so that it will not show up in b2_list_file_names. The result of b2_list_file_names will contain only "upload". The result of b2_list_file_versions may have both.
|
||||
AccountID string `json:"accountId"` // Your account ID.
|
||||
BucketID string `json:"bucketId"` // The bucket that the file is in.
|
||||
Size int64 `json:"contentLength"` // The number of bytes stored in the file.
|
||||
SHA1 string `json:"contentSha1"` // The SHA1 of the bytes stored in the file.
|
||||
ContentType string `json:"contentType"` // The MIME type of the file.
|
||||
Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
|
||||
}
|
||||
|
||||
// CreateBucketRequest is used to create a bucket
|
||||
type CreateBucketRequest struct {
|
||||
AccountID string `json:"accountId"`
|
||||
Name string `json:"bucketName"`
|
||||
Type string `json:"bucketType"`
|
||||
}
|
||||
|
||||
// DeleteBucketRequest is used to create a bucket
|
||||
type DeleteBucketRequest struct {
|
||||
ID string `json:"bucketId"`
|
||||
AccountID string `json:"accountId"`
|
||||
}
|
||||
|
||||
// DeleteFileRequest is used to delete a file version
|
||||
type DeleteFileRequest struct {
|
||||
ID string `json:"fileId"` // The ID of the file, as returned by b2_upload_file, b2_list_file_names, or b2_list_file_versions.
|
||||
Name string `json:"fileName"` // The name of this file.
|
||||
}
|
||||
|
||||
// HideFileRequest is used to delete a file
|
||||
type HideFileRequest struct {
|
||||
BucketID string `json:"bucketId"` // The bucket containing the file to hide.
|
||||
Name string `json:"fileName"` // The name of the file to hide.
|
||||
}
|
||||
|
||||
// GetFileInfoRequest is used to return a FileInfo struct with b2_get_file_info
|
||||
type GetFileInfoRequest struct {
|
||||
ID string `json:"fileId"` // The ID of the file, as returned by b2_upload_file, b2_list_file_names, or b2_list_file_versions.
|
||||
}
|
||||
@@ -1,170 +0,0 @@
|
||||
package b2
|
||||
|
||||
import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/ncw/rclone/fstest"
|
||||
)
|
||||
|
||||
// Test b2 string encoding
|
||||
// https://www.backblaze.com/b2/docs/string_encoding.html
|
||||
|
||||
var encodeTest = []struct {
|
||||
fullyEncoded string
|
||||
minimallyEncoded string
|
||||
plainText string
|
||||
}{
|
||||
{fullyEncoded: "%20", minimallyEncoded: "+", plainText: " "},
|
||||
{fullyEncoded: "%21", minimallyEncoded: "!", plainText: "!"},
|
||||
{fullyEncoded: "%22", minimallyEncoded: "%22", plainText: "\""},
|
||||
{fullyEncoded: "%23", minimallyEncoded: "%23", plainText: "#"},
|
||||
{fullyEncoded: "%24", minimallyEncoded: "$", plainText: "$"},
|
||||
{fullyEncoded: "%25", minimallyEncoded: "%25", plainText: "%"},
|
||||
{fullyEncoded: "%26", minimallyEncoded: "%26", plainText: "&"},
|
||||
{fullyEncoded: "%27", minimallyEncoded: "'", plainText: "'"},
|
||||
{fullyEncoded: "%28", minimallyEncoded: "(", plainText: "("},
|
||||
{fullyEncoded: "%29", minimallyEncoded: ")", plainText: ")"},
|
||||
{fullyEncoded: "%2A", minimallyEncoded: "*", plainText: "*"},
|
||||
{fullyEncoded: "%2B", minimallyEncoded: "%2B", plainText: "+"},
|
||||
{fullyEncoded: "%2C", minimallyEncoded: "%2C", plainText: ","},
|
||||
{fullyEncoded: "%2D", minimallyEncoded: "-", plainText: "-"},
|
||||
{fullyEncoded: "%2E", minimallyEncoded: ".", plainText: "."},
|
||||
{fullyEncoded: "%2F", minimallyEncoded: "/", plainText: "/"},
|
||||
{fullyEncoded: "%30", minimallyEncoded: "0", plainText: "0"},
|
||||
{fullyEncoded: "%31", minimallyEncoded: "1", plainText: "1"},
|
||||
{fullyEncoded: "%32", minimallyEncoded: "2", plainText: "2"},
|
||||
{fullyEncoded: "%33", minimallyEncoded: "3", plainText: "3"},
|
||||
{fullyEncoded: "%34", minimallyEncoded: "4", plainText: "4"},
|
||||
{fullyEncoded: "%35", minimallyEncoded: "5", plainText: "5"},
|
||||
{fullyEncoded: "%36", minimallyEncoded: "6", plainText: "6"},
|
||||
{fullyEncoded: "%37", minimallyEncoded: "7", plainText: "7"},
|
||||
{fullyEncoded: "%38", minimallyEncoded: "8", plainText: "8"},
|
||||
{fullyEncoded: "%39", minimallyEncoded: "9", plainText: "9"},
|
||||
{fullyEncoded: "%3A", minimallyEncoded: ":", plainText: ":"},
|
||||
{fullyEncoded: "%3B", minimallyEncoded: ";", plainText: ";"},
|
||||
{fullyEncoded: "%3C", minimallyEncoded: "%3C", plainText: "<"},
|
||||
{fullyEncoded: "%3D", minimallyEncoded: "=", plainText: "="},
|
||||
{fullyEncoded: "%3E", minimallyEncoded: "%3E", plainText: ">"},
|
||||
{fullyEncoded: "%3F", minimallyEncoded: "%3F", plainText: "?"},
|
||||
{fullyEncoded: "%40", minimallyEncoded: "@", plainText: "@"},
|
||||
{fullyEncoded: "%41", minimallyEncoded: "A", plainText: "A"},
|
||||
{fullyEncoded: "%42", minimallyEncoded: "B", plainText: "B"},
|
||||
{fullyEncoded: "%43", minimallyEncoded: "C", plainText: "C"},
|
||||
{fullyEncoded: "%44", minimallyEncoded: "D", plainText: "D"},
|
||||
{fullyEncoded: "%45", minimallyEncoded: "E", plainText: "E"},
|
||||
{fullyEncoded: "%46", minimallyEncoded: "F", plainText: "F"},
|
||||
{fullyEncoded: "%47", minimallyEncoded: "G", plainText: "G"},
|
||||
{fullyEncoded: "%48", minimallyEncoded: "H", plainText: "H"},
|
||||
{fullyEncoded: "%49", minimallyEncoded: "I", plainText: "I"},
|
||||
{fullyEncoded: "%4A", minimallyEncoded: "J", plainText: "J"},
|
||||
{fullyEncoded: "%4B", minimallyEncoded: "K", plainText: "K"},
|
||||
{fullyEncoded: "%4C", minimallyEncoded: "L", plainText: "L"},
|
||||
{fullyEncoded: "%4D", minimallyEncoded: "M", plainText: "M"},
|
||||
{fullyEncoded: "%4E", minimallyEncoded: "N", plainText: "N"},
|
||||
{fullyEncoded: "%4F", minimallyEncoded: "O", plainText: "O"},
|
||||
{fullyEncoded: "%50", minimallyEncoded: "P", plainText: "P"},
|
||||
{fullyEncoded: "%51", minimallyEncoded: "Q", plainText: "Q"},
|
||||
{fullyEncoded: "%52", minimallyEncoded: "R", plainText: "R"},
|
||||
{fullyEncoded: "%53", minimallyEncoded: "S", plainText: "S"},
|
||||
{fullyEncoded: "%54", minimallyEncoded: "T", plainText: "T"},
|
||||
{fullyEncoded: "%55", minimallyEncoded: "U", plainText: "U"},
|
||||
{fullyEncoded: "%56", minimallyEncoded: "V", plainText: "V"},
|
||||
{fullyEncoded: "%57", minimallyEncoded: "W", plainText: "W"},
|
||||
{fullyEncoded: "%58", minimallyEncoded: "X", plainText: "X"},
|
||||
{fullyEncoded: "%59", minimallyEncoded: "Y", plainText: "Y"},
|
||||
{fullyEncoded: "%5A", minimallyEncoded: "Z", plainText: "Z"},
|
||||
{fullyEncoded: "%5B", minimallyEncoded: "%5B", plainText: "["},
|
||||
{fullyEncoded: "%5C", minimallyEncoded: "%5C", plainText: "\\"},
|
||||
{fullyEncoded: "%5D", minimallyEncoded: "%5D", plainText: "]"},
|
||||
{fullyEncoded: "%5E", minimallyEncoded: "%5E", plainText: "^"},
|
||||
{fullyEncoded: "%5F", minimallyEncoded: "_", plainText: "_"},
|
||||
{fullyEncoded: "%60", minimallyEncoded: "%60", plainText: "`"},
|
||||
{fullyEncoded: "%61", minimallyEncoded: "a", plainText: "a"},
|
||||
{fullyEncoded: "%62", minimallyEncoded: "b", plainText: "b"},
|
||||
{fullyEncoded: "%63", minimallyEncoded: "c", plainText: "c"},
|
||||
{fullyEncoded: "%64", minimallyEncoded: "d", plainText: "d"},
|
||||
{fullyEncoded: "%65", minimallyEncoded: "e", plainText: "e"},
|
||||
{fullyEncoded: "%66", minimallyEncoded: "f", plainText: "f"},
|
||||
{fullyEncoded: "%67", minimallyEncoded: "g", plainText: "g"},
|
||||
{fullyEncoded: "%68", minimallyEncoded: "h", plainText: "h"},
|
||||
{fullyEncoded: "%69", minimallyEncoded: "i", plainText: "i"},
|
||||
{fullyEncoded: "%6A", minimallyEncoded: "j", plainText: "j"},
|
||||
{fullyEncoded: "%6B", minimallyEncoded: "k", plainText: "k"},
|
||||
{fullyEncoded: "%6C", minimallyEncoded: "l", plainText: "l"},
|
||||
{fullyEncoded: "%6D", minimallyEncoded: "m", plainText: "m"},
|
||||
{fullyEncoded: "%6E", minimallyEncoded: "n", plainText: "n"},
|
||||
{fullyEncoded: "%6F", minimallyEncoded: "o", plainText: "o"},
|
||||
{fullyEncoded: "%70", minimallyEncoded: "p", plainText: "p"},
|
||||
{fullyEncoded: "%71", minimallyEncoded: "q", plainText: "q"},
|
||||
{fullyEncoded: "%72", minimallyEncoded: "r", plainText: "r"},
|
||||
{fullyEncoded: "%73", minimallyEncoded: "s", plainText: "s"},
|
||||
{fullyEncoded: "%74", minimallyEncoded: "t", plainText: "t"},
|
||||
{fullyEncoded: "%75", minimallyEncoded: "u", plainText: "u"},
|
||||
{fullyEncoded: "%76", minimallyEncoded: "v", plainText: "v"},
|
||||
{fullyEncoded: "%77", minimallyEncoded: "w", plainText: "w"},
|
||||
{fullyEncoded: "%78", minimallyEncoded: "x", plainText: "x"},
|
||||
{fullyEncoded: "%79", minimallyEncoded: "y", plainText: "y"},
|
||||
{fullyEncoded: "%7A", minimallyEncoded: "z", plainText: "z"},
|
||||
{fullyEncoded: "%7B", minimallyEncoded: "%7B", plainText: "{"},
|
||||
{fullyEncoded: "%7C", minimallyEncoded: "%7C", plainText: "|"},
|
||||
{fullyEncoded: "%7D", minimallyEncoded: "%7D", plainText: "}"},
|
||||
{fullyEncoded: "%7E", minimallyEncoded: "~", plainText: "~"},
|
||||
{fullyEncoded: "%7F", minimallyEncoded: "%7F", plainText: "\u007f"},
|
||||
{fullyEncoded: "%E8%87%AA%E7%94%B1", minimallyEncoded: "%E8%87%AA%E7%94%B1", plainText: "自由"},
|
||||
{fullyEncoded: "%F0%90%90%80", minimallyEncoded: "%F0%90%90%80", plainText: "𐐀"},
|
||||
}
|
||||
|
||||
func TestUrlEncode(t *testing.T) {
|
||||
for _, test := range encodeTest {
|
||||
got := urlEncode(test.plainText)
|
||||
if got != test.minimallyEncoded && got != test.fullyEncoded {
|
||||
t.Errorf("urlEncode(%q) got %q wanted %q or %q", test.plainText, got, test.minimallyEncoded, test.fullyEncoded)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestTimeString(t *testing.T) {
|
||||
for _, test := range []struct {
|
||||
in time.Time
|
||||
want string
|
||||
}{
|
||||
{fstest.Time("1970-01-01T00:00:00.000000000Z"), "0"},
|
||||
{fstest.Time("2001-02-03T04:05:10.123123123Z"), "981173110123"},
|
||||
{fstest.Time("2001-02-03T05:05:10.123123123+01:00"), "981173110123"},
|
||||
} {
|
||||
got := timeString(test.in)
|
||||
if test.want != got {
|
||||
t.Logf("%v: want %v got %v", test.in, test.want, got)
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
func TestParseTimeString(t *testing.T) {
|
||||
for _, test := range []struct {
|
||||
in string
|
||||
want time.Time
|
||||
wantError string
|
||||
}{
|
||||
{"0", fstest.Time("1970-01-01T00:00:00.000000000Z"), ""},
|
||||
{"981173110123", fstest.Time("2001-02-03T04:05:10.123000000Z"), ""},
|
||||
{"", time.Time{}, ""},
|
||||
{"potato", time.Time{}, `strconv.ParseInt: parsing "potato": invalid syntax`},
|
||||
} {
|
||||
o := Object{}
|
||||
err := o.parseTimeString(test.in)
|
||||
got := o.modTime
|
||||
var gotError string
|
||||
if err != nil {
|
||||
gotError = err.Error()
|
||||
}
|
||||
if test.want != got {
|
||||
t.Logf("%v: want %v got %v", test.in, test.want, got)
|
||||
}
|
||||
if test.wantError != gotError {
|
||||
t.Logf("%v: want error %v got error %v", test.in, test.wantError, gotError)
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
@@ -1,56 +0,0 @@
|
||||
// Test B2 filesystem interface
|
||||
//
|
||||
// Automatically generated - DO NOT EDIT
|
||||
// Regenerate with: make gen_tests
|
||||
package b2_test
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/ncw/rclone/b2"
|
||||
"github.com/ncw/rclone/fs"
|
||||
"github.com/ncw/rclone/fstest/fstests"
|
||||
)
|
||||
|
||||
func init() {
|
||||
fstests.NilObject = fs.Object((*b2.Object)(nil))
|
||||
fstests.RemoteName = "TestB2:"
|
||||
}
|
||||
|
||||
// Generic tests for the Fs
|
||||
func TestInit(t *testing.T) { fstests.TestInit(t) }
|
||||
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
|
||||
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
|
||||
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
|
||||
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
|
||||
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
|
||||
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
|
||||
func TestFsNewFsObjectNotFound(t *testing.T) { fstests.TestFsNewFsObjectNotFound(t) }
|
||||
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
|
||||
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
|
||||
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
|
||||
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
|
||||
func TestFsListRoot(t *testing.T) { fstests.TestFsListRoot(t) }
|
||||
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
|
||||
func TestFsNewFsObject(t *testing.T) { fstests.TestFsNewFsObject(t) }
|
||||
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
|
||||
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
|
||||
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
|
||||
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
|
||||
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
|
||||
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
|
||||
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
|
||||
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
|
||||
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
|
||||
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
|
||||
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
|
||||
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
|
||||
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
|
||||
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
|
||||
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
|
||||
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
|
||||
func TestLimitedFs(t *testing.T) { fstests.TestLimitedFs(t) }
|
||||
func TestLimitedFsNotFound(t *testing.T) { fstests.TestLimitedFsNotFound(t) }
|
||||
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
|
||||
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
|
||||
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }
|
||||
@@ -1,9 +1,9 @@
|
||||
#!/bin/bash
|
||||
#!/bin/sh
|
||||
|
||||
set -e
|
||||
|
||||
# This uses gox from https://github.com/mitchellh/gox
|
||||
# Make sure you've run gox -build-toolchain - not required for go >= 1.5
|
||||
# Make sure you've run gox -build-toolchain
|
||||
|
||||
if [ "$1" == "" ]; then
|
||||
echo "Syntax: $0 Version"
|
||||
@@ -13,9 +13,7 @@ VERSION="$1"
|
||||
|
||||
rm -rf build
|
||||
|
||||
gox -output "build/{{.Dir}}-${VERSION}-{{.OS}}-{{.Arch}}/{{.Dir}}" -os "darwin linux freebsd openbsd windows freebsd netbsd plan9 solaris"
|
||||
# Not implemented yet: nacl dragonfly android
|
||||
# gox -osarch-list for definitive list
|
||||
gox -output "build/{{.Dir}}-${VERSION}-{{.OS}}-{{.Arch}}/{{.Dir}}"
|
||||
|
||||
mv build/rclone-${VERSION}-darwin-amd64 build/rclone-${VERSION}-osx-amd64
|
||||
mv build/rclone-${VERSION}-darwin-386 build/rclone-${VERSION}-osx-386
|
||||
@@ -23,12 +21,10 @@ mv build/rclone-${VERSION}-darwin-386 build/rclone-${VERSION}-osx-386
|
||||
cd build
|
||||
|
||||
for d in `ls`; do
|
||||
cp -a ../MANUAL.txt $d/README.txt
|
||||
cp -a ../MANUAL.html $d/README.html
|
||||
cp -a ../README.txt $d/
|
||||
cp -a ../README.html $d/
|
||||
cp -a ../rclone.1 $d/
|
||||
zip -r9 $d.zip $d
|
||||
d_current=${d/-${VERSION}/-current}
|
||||
ln $d.zip $d_current.zip
|
||||
rm -rf $d
|
||||
done
|
||||
|
||||
|
||||
@@ -1,272 +0,0 @@
|
||||
// Package dircache provides a simple cache for caching directory to path lookups
|
||||
package dircache
|
||||
|
||||
// _methods are called without the lock
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
"strings"
|
||||
"sync"
|
||||
)
|
||||
|
||||
// DirCache caches paths to directory IDs and vice versa
|
||||
type DirCache struct {
|
||||
cacheMu sync.RWMutex
|
||||
cache map[string]string
|
||||
invCache map[string]string
|
||||
mu sync.Mutex
|
||||
fs DirCacher // Interface to find and make stuff
|
||||
trueRootID string // ID of the absolute root
|
||||
root string // the path we are working on
|
||||
rootID string // ID of the root directory
|
||||
rootParentID string // ID of the root's parent directory
|
||||
foundRoot bool // Whether we have found the root or not
|
||||
}
|
||||
|
||||
// DirCacher describes an interface for doing the low level directory work
|
||||
type DirCacher interface {
|
||||
FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err error)
|
||||
CreateDir(pathID, leaf string) (newID string, err error)
|
||||
}
|
||||
|
||||
// New makes a DirCache
|
||||
//
|
||||
// The cache is safe for concurrent use
|
||||
func New(root string, trueRootID string, fs DirCacher) *DirCache {
|
||||
d := &DirCache{
|
||||
trueRootID: trueRootID,
|
||||
root: root,
|
||||
fs: fs,
|
||||
}
|
||||
d.Flush()
|
||||
d.ResetRoot()
|
||||
return d
|
||||
}
|
||||
|
||||
// Get an ID given a path
|
||||
func (dc *DirCache) Get(path string) (id string, ok bool) {
|
||||
dc.cacheMu.RLock()
|
||||
id, ok = dc.cache[path]
|
||||
dc.cacheMu.RUnlock()
|
||||
return
|
||||
}
|
||||
|
||||
// GetInv gets a path given an ID
|
||||
func (dc *DirCache) GetInv(id string) (path string, ok bool) {
|
||||
dc.cacheMu.RLock()
|
||||
path, ok = dc.invCache[id]
|
||||
dc.cacheMu.RUnlock()
|
||||
return
|
||||
}
|
||||
|
||||
// Put a path, id into the map
|
||||
func (dc *DirCache) Put(path, id string) {
|
||||
dc.cacheMu.Lock()
|
||||
dc.cache[path] = id
|
||||
dc.invCache[id] = path
|
||||
dc.cacheMu.Unlock()
|
||||
}
|
||||
|
||||
// Flush the map of all data
|
||||
func (dc *DirCache) Flush() {
|
||||
dc.cacheMu.Lock()
|
||||
dc.cache = make(map[string]string)
|
||||
dc.invCache = make(map[string]string)
|
||||
dc.cacheMu.Unlock()
|
||||
}
|
||||
|
||||
// SplitPath splits a path into directory, leaf
|
||||
//
|
||||
// Path shouldn't start or end with a /
|
||||
//
|
||||
// If there are no slashes then directory will be "" and leaf = path
|
||||
func SplitPath(path string) (directory, leaf string) {
|
||||
lastSlash := strings.LastIndex(path, "/")
|
||||
if lastSlash >= 0 {
|
||||
directory = path[:lastSlash]
|
||||
leaf = path[lastSlash+1:]
|
||||
} else {
|
||||
directory = ""
|
||||
leaf = path
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// FindDir finds the directory passed in returning the directory ID
|
||||
// starting from pathID
|
||||
//
|
||||
// Path shouldn't start or end with a /
|
||||
//
|
||||
// If create is set it will make the directory if not found
|
||||
//
|
||||
// Algorithm:
|
||||
// Look in the cache for the path, if found return the pathID
|
||||
// If not found strip the last path off the path and recurse
|
||||
// Now have a parent directory id, so look in the parent for self and return it
|
||||
func (dc *DirCache) FindDir(path string, create bool) (pathID string, err error) {
|
||||
dc.mu.Lock()
|
||||
defer dc.mu.Unlock()
|
||||
return dc._findDir(path, create)
|
||||
}
|
||||
|
||||
// Look for the root and in the cache - safe to call without the mu
|
||||
func (dc *DirCache) _findDirInCache(path string) string {
|
||||
// fmt.Println("Finding",path,"create",create,"cache",cache)
|
||||
// If it is the root, then return it
|
||||
if path == "" {
|
||||
// fmt.Println("Root")
|
||||
return dc.rootID
|
||||
}
|
||||
|
||||
// If it is in the cache then return it
|
||||
pathID, ok := dc.Get(path)
|
||||
if ok {
|
||||
// fmt.Println("Cache hit on", path)
|
||||
return pathID
|
||||
}
|
||||
|
||||
return ""
|
||||
}
|
||||
|
||||
// Unlocked findDir - must have mu
|
||||
func (dc *DirCache) _findDir(path string, create bool) (pathID string, err error) {
|
||||
pathID = dc._findDirInCache(path)
|
||||
if pathID != "" {
|
||||
return pathID, nil
|
||||
}
|
||||
|
||||
// Split the path into directory, leaf
|
||||
directory, leaf := SplitPath(path)
|
||||
|
||||
// Recurse and find pathID for parent directory
|
||||
parentPathID, err := dc._findDir(directory, create)
|
||||
if err != nil {
|
||||
return "", err
|
||||
|
||||
}
|
||||
|
||||
// Find the leaf in parentPathID
|
||||
pathID, found, err := dc.fs.FindLeaf(parentPathID, leaf)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
// If not found create the directory if required or return an error
|
||||
if !found {
|
||||
if create {
|
||||
pathID, err = dc.fs.CreateDir(parentPathID, leaf)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("Failed to make directory: %v", err)
|
||||
}
|
||||
} else {
|
||||
return "", fmt.Errorf("Couldn't find directory: %q", path)
|
||||
}
|
||||
}
|
||||
|
||||
// Store the leaf directory in the cache
|
||||
dc.Put(path, pathID)
|
||||
|
||||
// fmt.Println("Dir", path, "is", pathID)
|
||||
return pathID, nil
|
||||
}
|
||||
|
||||
// FindPath finds the leaf and directoryID from a path
|
||||
//
|
||||
// If create is set parent directories will be created if they don't exist
|
||||
func (dc *DirCache) FindPath(path string, create bool) (leaf, directoryID string, err error) {
|
||||
dc.mu.Lock()
|
||||
defer dc.mu.Unlock()
|
||||
directory, leaf := SplitPath(path)
|
||||
directoryID, err = dc._findDir(directory, create)
|
||||
if err != nil {
|
||||
if create {
|
||||
err = fmt.Errorf("Couldn't find or make directory %q: %s", directory, err)
|
||||
} else {
|
||||
err = fmt.Errorf("Couldn't find directory %q: %s", directory, err)
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// FindRoot finds the root directory if not already found
|
||||
//
|
||||
// Resets the root directory
|
||||
//
|
||||
// If create is set it will make the directory if not found
|
||||
func (dc *DirCache) FindRoot(create bool) error {
|
||||
dc.mu.Lock()
|
||||
defer dc.mu.Unlock()
|
||||
if dc.foundRoot {
|
||||
return nil
|
||||
}
|
||||
rootID, err := dc._findDir(dc.root, create)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
dc.foundRoot = true
|
||||
dc.rootID = rootID
|
||||
|
||||
// Find the parent of the root while we still have the root
|
||||
// directory tree cached
|
||||
rootParentPath, _ := SplitPath(dc.root)
|
||||
dc.rootParentID, _ = dc.Get(rootParentPath)
|
||||
|
||||
// Reset the tree based on dc.root
|
||||
dc.Flush()
|
||||
// Put the root directory in
|
||||
dc.Put("", dc.rootID)
|
||||
return nil
|
||||
}
|
||||
|
||||
// FoundRoot returns whether the root directory has been found yet
|
||||
//
|
||||
// Call this from FindLeaf or CreateDir only
|
||||
func (dc *DirCache) FoundRoot() bool {
|
||||
return dc.foundRoot
|
||||
}
|
||||
|
||||
// RootID returns the ID of the root directory
|
||||
//
|
||||
// This should be called after FindRoot
|
||||
func (dc *DirCache) RootID() string {
|
||||
dc.mu.Lock()
|
||||
defer dc.mu.Unlock()
|
||||
if !dc.foundRoot {
|
||||
log.Fatalf("Internal Error: RootID() called before FindRoot")
|
||||
}
|
||||
return dc.rootID
|
||||
}
|
||||
|
||||
// RootParentID returns the ID of the parent of the root directory
|
||||
//
|
||||
// This should be called after FindRoot
|
||||
func (dc *DirCache) RootParentID() (string, error) {
|
||||
dc.mu.Lock()
|
||||
defer dc.mu.Unlock()
|
||||
if !dc.foundRoot {
|
||||
return "", fmt.Errorf("Internal Error: RootID() called before FindRoot")
|
||||
}
|
||||
if dc.rootParentID == "" {
|
||||
return "", fmt.Errorf("Internal Error: Didn't find rootParentID")
|
||||
}
|
||||
if dc.rootID == dc.trueRootID {
|
||||
return "", fmt.Errorf("Is root directory")
|
||||
}
|
||||
return dc.rootParentID, nil
|
||||
}
|
||||
|
||||
// ResetRoot resets the root directory to the absolute root and clears
|
||||
// the DirCache
|
||||
func (dc *DirCache) ResetRoot() {
|
||||
dc.mu.Lock()
|
||||
defer dc.mu.Unlock()
|
||||
dc.foundRoot = false
|
||||
dc.Flush()
|
||||
|
||||
// Put the true root in
|
||||
dc.rootID = dc.trueRootID
|
||||
|
||||
// Put the root directory in
|
||||
dc.Put("", dc.rootID)
|
||||
}
|
||||
@@ -1,8 +1,8 @@
|
||||
---
|
||||
title: "Rclone"
|
||||
description: "rclone syncs files to and from Google Drive, S3, Swift, Cloudfiles, Dropbox, Google Cloud Storage and Amazon Cloud Drive."
|
||||
description: "rclone syncs files to and from Google Drive, S3, Swift, Cloudfiles, Dropbox and Google Cloud Storage."
|
||||
type: page
|
||||
date: "2015-09-06"
|
||||
date: "2014-07-17"
|
||||
groups: ["about"]
|
||||
---
|
||||
|
||||
@@ -18,22 +18,17 @@ Rclone is a command line program to sync files and directories to and from
|
||||
* Openstack Swift / Rackspace cloud files / Memset Memstore
|
||||
* Dropbox
|
||||
* Google Cloud Storage
|
||||
* Amazon Cloud Drive
|
||||
* Microsoft One Drive
|
||||
* Hubic
|
||||
* Backblaze B2
|
||||
* Yandex Disk
|
||||
* The local filesystem
|
||||
|
||||
Features
|
||||
|
||||
* MD5/SHA1 hashes checked at all times for file integrity
|
||||
* MD5SUMs checked at all times for file integrity
|
||||
* Timestamps preserved on files
|
||||
* Partial syncs supported on a whole file basis
|
||||
* Copy mode to just copy new/changed files
|
||||
* Sync (one way) mode to make a directory identical
|
||||
* Check mode to check for file hash equality
|
||||
* Can sync to and from network, eg two different cloud accounts
|
||||
* Sync mode to make a directory identical
|
||||
* Check mode to check all MD5SUMs
|
||||
* Can sync to and from network, eg two different Drive accounts
|
||||
|
||||
Links
|
||||
|
||||
|
||||
@@ -1,150 +0,0 @@
|
||||
---
|
||||
title: "Amazon Cloud Drive"
|
||||
description: "Rclone docs for Amazon Cloud Drive"
|
||||
date: "2015-09-06"
|
||||
---
|
||||
|
||||
<i class="fa fa-amazon"></i> Amazon Cloud Drive
|
||||
-----------------------------------------
|
||||
|
||||
Paths are specified as `remote:path`
|
||||
|
||||
Paths may be as deep as required, eg `remote:directory/subdirectory`.
|
||||
|
||||
The initial setup for Amazon cloud drive involves getting a token from
|
||||
Amazon which you need to do in your browser. `rclone config` walks
|
||||
you through it.
|
||||
|
||||
Here is an example of how to make a remote called `remote`. First run:
|
||||
|
||||
rclone config
|
||||
|
||||
This will guide you through an interactive setup process:
|
||||
|
||||
```
|
||||
n) New remote
|
||||
d) Delete remote
|
||||
q) Quit config
|
||||
e/n/d/q> n
|
||||
name> remote
|
||||
Type of storage to configure.
|
||||
Choose a number from below, or type in your own value
|
||||
1 / Amazon Cloud Drive
|
||||
\ "amazon cloud drive"
|
||||
2 / Amazon S3 (also Dreamhost, Ceph)
|
||||
\ "s3"
|
||||
3 / Backblaze B2
|
||||
\ "b2"
|
||||
4 / Dropbox
|
||||
\ "dropbox"
|
||||
5 / Google Cloud Storage (this is not Google Drive)
|
||||
\ "google cloud storage"
|
||||
6 / Google Drive
|
||||
\ "drive"
|
||||
7 / Hubic
|
||||
\ "hubic"
|
||||
8 / Local Disk
|
||||
\ "local"
|
||||
9 / Microsoft OneDrive
|
||||
\ "onedrive"
|
||||
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
|
||||
\ "swift"
|
||||
11 / Yandex Disk
|
||||
\ "yandex"
|
||||
Storage> 1
|
||||
Amazon Application Client Id - leave blank normally.
|
||||
client_id>
|
||||
Amazon Application Client Secret - leave blank normally.
|
||||
client_secret>
|
||||
Remote config
|
||||
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
|
||||
Log in and authorize rclone for access
|
||||
Waiting for code...
|
||||
Got code
|
||||
--------------------
|
||||
[remote]
|
||||
client_id =
|
||||
client_secret =
|
||||
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
|
||||
--------------------
|
||||
y) Yes this is OK
|
||||
e) Edit this remote
|
||||
d) Delete this remote
|
||||
y/e/d> y
|
||||
```
|
||||
|
||||
See the [remote setup docs](/remote_setup/) for how to set it up on a
|
||||
machine with no Internet browser available.
|
||||
|
||||
Note that rclone runs a webserver on your local machine to collect the
|
||||
token as returned from Amazon. This only runs from the moment it
|
||||
opens your browser to the moment you get back the verification
|
||||
code. This is on `http://127.0.0.1:53682/` and this it may require
|
||||
you to unblock it temporarily if you are running a host firewall.
|
||||
|
||||
Once configured you can then use `rclone` like this,
|
||||
|
||||
List directories in top level of your Amazon cloud drive
|
||||
|
||||
rclone lsd remote:
|
||||
|
||||
List all the files in your Amazon cloud drive
|
||||
|
||||
rclone ls remote:
|
||||
|
||||
To copy a local directory to an Amazon cloud drive directory called backup
|
||||
|
||||
rclone copy /home/source remote:backup
|
||||
|
||||
### Modified time and MD5SUMs ###
|
||||
|
||||
Amazon cloud drive doesn't allow modification times to be changed via
|
||||
the API so these won't be accurate or used for syncing.
|
||||
|
||||
It does store MD5SUMs so for a more accurate sync, you can use the
|
||||
`--checksum` flag.
|
||||
|
||||
### Deleting files ###
|
||||
|
||||
Any files you delete with rclone will end up in the trash. Amazon
|
||||
don't provide an API to permanently delete files, nor to empty the
|
||||
trash, so you will have to do that with one of Amazon's apps or via
|
||||
the Amazon cloud drive website.
|
||||
|
||||
### Specific options ###
|
||||
|
||||
Here are the command line options specific to this cloud storage
|
||||
system.
|
||||
|
||||
#### --acd-templink-threshold=SIZE ####
|
||||
|
||||
Files this size or more will be downloaded via their `tempLink`. This
|
||||
is to work around a problem with Amazon Cloud Drive which blocks
|
||||
downloads of files bigger than about 10GB. The default for this is
|
||||
9GB which shouldn't need to be changed.
|
||||
|
||||
To download files above this threshold, rclone requests a `tempLink`
|
||||
which downloads the file through a temporary URL directly from the
|
||||
underlying S3 storage.
|
||||
|
||||
### Limitations ###
|
||||
|
||||
Note that Amazon cloud drive is case insensitive so you can't have a
|
||||
file called "Hello.doc" and one called "hello.doc".
|
||||
|
||||
Amazon cloud drive has rate limiting so you may notice errors in the
|
||||
sync (429 errors). rclone will automatically retry the sync up to 3
|
||||
times by default (see `--retries` flag) which should hopefully work
|
||||
around this problem.
|
||||
|
||||
Amazon cloud drive has an internal limit of file sizes that can be
|
||||
uploaded to the service. This limit is not officially published,
|
||||
but all files larger than this will fail.
|
||||
|
||||
At the time of writing (Jan 2016) is in the area of 50GB per file.
|
||||
This means that larger files are likely to fail.
|
||||
|
||||
Unfortunatly there is no way for rclone to see that this failure is
|
||||
because of file size, so it will retry the operation, as any other
|
||||
failure. To avoid this problem, use `--max-size=50GB` option to limit
|
||||
the maximum size of uploaded files.
|
||||
@@ -1,29 +0,0 @@
|
||||
---
|
||||
title: "Authors"
|
||||
description: "Rclone Authors and Contributors"
|
||||
date: "2015-09-28"
|
||||
---
|
||||
|
||||
Authors
|
||||
-------
|
||||
|
||||
* Nick Craig-Wood <nick@craig-wood.com>
|
||||
|
||||
Contributors
|
||||
------------
|
||||
|
||||
* Alex Couper <amcouper@gmail.com>
|
||||
* Leonid Shalupov <leonid@shalupov.com>
|
||||
* Shimon Doodkin <helpmepro1@gmail.com>
|
||||
* Colin Nicholson <colin@colinn.com>
|
||||
* Klaus Post <klauspost@gmail.com>
|
||||
* Sergey Tolmachev <tolsi.ru@gmail.com>
|
||||
* Adriano Aurélio Meirelles <adriano@atinge.com>
|
||||
* C. Bess <cbess@users.noreply.github.com>
|
||||
* Dmitry Burdeev <dibu28@gmail.com>
|
||||
* Joseph Spurrier <github@josephspurrier.com>
|
||||
* Björn Harrtell <bjorn@wololo.org>
|
||||
* Xavier Lucas <xavier.lucas@corp.ovh.com>
|
||||
* Werner Beroux <werner@beroux.com>
|
||||
* Brian Stengaard <brian@stengaard.eu>
|
||||
* Jakub Gedeon <jgedeon@sofi.com>
|
||||
@@ -1,137 +0,0 @@
|
||||
---
|
||||
title: "B2"
|
||||
description: "Backblaze B2"
|
||||
date: "2015-12-29"
|
||||
---
|
||||
|
||||
<i class="fa fa-fire"></i>Backblaze B2
|
||||
----------------------------------------
|
||||
|
||||
B2 is [Backblaze's cloud storage system](https://www.backblaze.com/b2/).
|
||||
|
||||
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
|
||||
command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`.
|
||||
|
||||
Here is an example of making a b2 configuration. First run
|
||||
|
||||
rclone config
|
||||
|
||||
This will guide you through an interactive setup process. You will
|
||||
need your account number (a short hex number) and key (a long hex
|
||||
number) which you can get from the b2 control panel.
|
||||
|
||||
```
|
||||
No remotes found - make a new one
|
||||
n) New remote
|
||||
q) Quit config
|
||||
n/q> n
|
||||
name> remote
|
||||
Type of storage to configure.
|
||||
Choose a number from below, or type in your own value
|
||||
1 / Amazon Cloud Drive
|
||||
\ "amazon cloud drive"
|
||||
2 / Amazon S3 (also Dreamhost, Ceph)
|
||||
\ "s3"
|
||||
3 / Backblaze B2
|
||||
\ "b2"
|
||||
4 / Dropbox
|
||||
\ "dropbox"
|
||||
5 / Google Cloud Storage (this is not Google Drive)
|
||||
\ "google cloud storage"
|
||||
6 / Google Drive
|
||||
\ "drive"
|
||||
7 / Hubic
|
||||
\ "hubic"
|
||||
8 / Local Disk
|
||||
\ "local"
|
||||
9 / Microsoft OneDrive
|
||||
\ "onedrive"
|
||||
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
|
||||
\ "swift"
|
||||
11 / Yandex Disk
|
||||
\ "yandex"
|
||||
Storage> 3
|
||||
Account ID
|
||||
account> 123456789abc
|
||||
Application Key
|
||||
key> 0123456789abcdef0123456789abcdef0123456789
|
||||
Endpoint for the service - leave blank normally.
|
||||
endpoint>
|
||||
Remote config
|
||||
--------------------
|
||||
[remote]
|
||||
account = 123456789abc
|
||||
key = 0123456789abcdef0123456789abcdef0123456789
|
||||
endpoint =
|
||||
--------------------
|
||||
y) Yes this is OK
|
||||
e) Edit this remote
|
||||
d) Delete this remote
|
||||
y/e/d> y
|
||||
```
|
||||
|
||||
This remote is called `remote` and can now be used like this
|
||||
|
||||
See all buckets
|
||||
|
||||
rclone lsd remote:
|
||||
|
||||
Make a new bucket
|
||||
|
||||
rclone mkdir remote:bucket
|
||||
|
||||
List the contents of a bucket
|
||||
|
||||
rclone ls remote:bucket
|
||||
|
||||
Sync `/home/local/directory` to the remote bucket, deleting any
|
||||
excess files in the bucket.
|
||||
|
||||
rclone sync /home/local/directory remote:bucket
|
||||
|
||||
### Modified time ###
|
||||
|
||||
The modified time is stored as metadata on the object as
|
||||
`X-Bz-Info-src_last_modified_millis` as milliseconds since 1970-01-01
|
||||
in the Backblaze standard. Other tools should be able to use this as
|
||||
a modified time.
|
||||
|
||||
Modified times are used in syncing and are fully supported except in
|
||||
the case of updating a modification time on an existing object. In
|
||||
this case the object will be uploaded again as B2 doesn't have an API
|
||||
method to set the modification time independent of doing an upload.
|
||||
|
||||
### SHA1 checksums ###
|
||||
|
||||
The SHA1 checksums of the files are checked on upload and download and
|
||||
will be used in the syncing process. You can use the `--checksum` flag.
|
||||
|
||||
### Versions ###
|
||||
|
||||
When rclone uploads a new version of a file it creates a [new version
|
||||
of it](https://www.backblaze.com/b2/docs/file_versions.html).
|
||||
Likewise when you delete a file, the old version will still be
|
||||
available.
|
||||
|
||||
The old versions of files are visible in the B2 web interface, but not
|
||||
via rclone yet.
|
||||
|
||||
Rclone doesn't provide any way of managing old versions (downloading
|
||||
them or deleting them) at the moment. When you `purge` a bucket, all
|
||||
the old versions will be deleted.
|
||||
|
||||
### Transfers ###
|
||||
|
||||
Backblaze recommends that you do lots of transfers simultaneously for
|
||||
maximum speed. In tests from my SSD equiped laptop the optimum
|
||||
setting is about `--transfers 32` though higher numbers may be used
|
||||
for a slight speed improvement. The optimum number for you may vary
|
||||
depending on your hardware, how big the files are, how much you want
|
||||
to load your computer, etc. The default of `--transfers 4` is
|
||||
definitely too low for Backblaze B2 though.
|
||||
|
||||
### API ###
|
||||
|
||||
Here are [some notes I made on the backblaze
|
||||
API](https://gist.github.com/ncw/166dabf352b399f1cc1c) while
|
||||
integrating it with rclone which detail the changes I'd like to see.
|
||||
@@ -1,31 +0,0 @@
|
||||
---
|
||||
title: "Bugs"
|
||||
description: "Rclone Bugs and Limitations"
|
||||
date: "2014-06-16"
|
||||
---
|
||||
|
||||
Bugs and Limitations
|
||||
--------------------
|
||||
|
||||
### Empty directories are left behind / not created ##
|
||||
|
||||
With remotes that have a concept of directory, eg Local and Drive,
|
||||
empty directories may be left behind, or not created when one was
|
||||
expected.
|
||||
|
||||
This is because rclone doesn't have a concept of a directory - it only
|
||||
works on objects. Most of the object storage systems can't actually
|
||||
store a directory so there is nowhere for rclone to store anything
|
||||
about directories.
|
||||
|
||||
You can work round this to some extent with the`purge` command which
|
||||
will delete everything under the path, **inluding** empty directories.
|
||||
|
||||
This may be fixed at some point in
|
||||
[Issue #100](https://github.com/ncw/rclone/issues/100)
|
||||
|
||||
### Directory timestamps aren't preserved ##
|
||||
|
||||
For the same reason as the above, rclone doesn't have a concept of a
|
||||
directory - it only works on objects, therefore it can't preserve the
|
||||
timestamps of directories.
|
||||
@@ -1,327 +0,0 @@
|
||||
---
|
||||
title: "Documentation"
|
||||
description: "Rclone Changelog"
|
||||
date: "2016-04-18"
|
||||
---
|
||||
|
||||
Changelog
|
||||
---------
|
||||
|
||||
* v1.29 - 2016-04-18
|
||||
* New Features
|
||||
* Implement `-I, --ignore-times` for unconditional upload
|
||||
* Improve `dedupe`command
|
||||
* Now removes identical copies without asking
|
||||
* Now obeys `--dry-run`
|
||||
* Implement `--dedupe-mode` for non interactive running
|
||||
* `--dedupe-mode interactive` - interactive the default.
|
||||
* `--dedupe-mode skip` - removes identical files then skips anything left.
|
||||
* `--dedupe-mode first` - removes identical files then keeps the first one.
|
||||
* `--dedupe-mode newest` - removes identical files then keeps the newest one.
|
||||
* `--dedupe-mode oldest` - removes identical files then keeps the oldest one.
|
||||
* `--dedupe-mode rename` - removes identical files then renames the rest to be different.
|
||||
* Bug fixes
|
||||
* Make rclone check obey the `--size-only` flag.
|
||||
* Use "application/octet-stream" if discovered mime type is invalid.
|
||||
* Fix missing "quit" option when there are no remotes.
|
||||
* Google Drive
|
||||
* Increase default chunk size to 8 MB - increases upload speed of big files
|
||||
* Speed up directory listings and make more reliable
|
||||
* Add missing retries for Move and DirMove - increases reliability
|
||||
* Preserve mime type on file update
|
||||
* Backblaze B2
|
||||
* Enable mod time syncing
|
||||
* This means that B2 will now check modification times
|
||||
* It will upload new files to update the modification times
|
||||
* (there isn't an API to just set the mod time.)
|
||||
* If you want the old behaviour use `--size-only`.
|
||||
* Update API to new version
|
||||
* Fix parsing of mod time when not in metadata
|
||||
* Swift/Hubic
|
||||
* Don't return an MD5SUM for static large objects
|
||||
* S3
|
||||
* Fix uploading files bigger than 50GB
|
||||
* v1.28 - 2016-03-01
|
||||
* New Features
|
||||
* Configuration file encryption - thanks Klaus Post
|
||||
* Improve `rclone config` adding more help and making it easier to understand
|
||||
* Implement `-u`/`--update` so creation times can be used on all remotes
|
||||
* Implement `--low-level-retries` flag
|
||||
* Optionally disable gzip compression on downloads with `--no-gzip-encoding`
|
||||
* Bug fixes
|
||||
* Don't make directories if `--dry-run` set
|
||||
* Fix and document the `move` command
|
||||
* Fix redirecting stderr on unix-like OSes when using `--log-file`
|
||||
* Fix `delete` command to wait until all finished - fixes missing deletes.
|
||||
* Backblaze B2
|
||||
* Use one upload URL per go routine fixes `more than one upload using auth token`
|
||||
* Add pacing, retries and reauthentication - fixes token expiry problems
|
||||
* Upload without using a temporary file from local (and remotes which support SHA1)
|
||||
* Fix reading metadata for all files when it shouldn't have been
|
||||
* Drive
|
||||
* Fix listing drive documents at root
|
||||
* Disable copy and move for Google docs
|
||||
* Swift
|
||||
* Fix uploading of chunked files with non ASCII characters
|
||||
* Allow setting of `storage_url` in the config - thanks Xavier Lucas
|
||||
* S3
|
||||
* Allow IAM role and credentials from environment variables - thanks Brian Stengaard
|
||||
* Allow low privilege users to use S3 (check if directory exists during Mkdir) - thanks Jakub Gedeon
|
||||
* Amazon Cloud Drive
|
||||
* Retry on more things to make directory listings more reliable
|
||||
* v1.27 - 2016-01-31
|
||||
* New Features
|
||||
* Easier headless configuration with `rclone authorize`
|
||||
* Add support for multiple hash types - we now check SHA1 as well as MD5 hashes.
|
||||
* `delete` command which does obey the filters (unlike `purge`)
|
||||
* `dedupe` command to deduplicate a remote. Useful with Google Drive.
|
||||
* Add `--ignore-existing` flag to skip all files that exist on destination.
|
||||
* Add `--delete-before`, `--delete-during`, `--delete-after` flags.
|
||||
* Add `--memprofile` flag to debug memory use.
|
||||
* Warn the user about files with same name but different case
|
||||
* Make `--include` rules add their implict exclude * at the end of the filter list
|
||||
* Deprecate compiling with go1.3
|
||||
* Amazon Cloud Drive
|
||||
* Fix download of files > 10 GB
|
||||
* Fix directory traversal ("Next token is expired") for large directory listings
|
||||
* Remove 409 conflict from error codes we will retry - stops very long pauses
|
||||
* Backblaze B2
|
||||
* SHA1 hashes now checked by rclone core
|
||||
* Drive
|
||||
* Add `--drive-auth-owner-only` to only consider files owned by the user - thanks Björn Harrtell
|
||||
* Export Google documents
|
||||
* Dropbox
|
||||
* Make file exclusion error controllable with -q
|
||||
* Swift
|
||||
* Fix upload from unprivileged user.
|
||||
* S3
|
||||
* Fix updating of mod times of files with `+` in.
|
||||
* Local
|
||||
* Add local file system option to disable UNC on Windows.
|
||||
* v1.26 - 2016-01-02
|
||||
* New Features
|
||||
* Yandex storage backend - thank you Dmitry Burdeev ("dibu")
|
||||
* Implement Backblaze B2 storage backend
|
||||
* Add --min-age and --max-age flags - thank you Adriano Aurélio Meirelles
|
||||
* Make ls/lsl/md5sum/size/check obey includes and excludes
|
||||
* Fixes
|
||||
* Fix crash in http logging
|
||||
* Upload releases to github too
|
||||
* Swift
|
||||
* Fix sync for chunked files
|
||||
* One Drive
|
||||
* Re-enable server side copy
|
||||
* Don't mask HTTP error codes with JSON decode error
|
||||
* S3
|
||||
* Fix corrupting Content-Type on mod time update (thanks Joseph Spurrier)
|
||||
* v1.25 - 2015-11-14
|
||||
* New features
|
||||
* Implement Hubic storage system
|
||||
* Fixes
|
||||
* Fix deletion of some excluded files without --delete-excluded
|
||||
* This could have deleted files unexpectedly on sync
|
||||
* Always check first with `--dry-run`!
|
||||
* Swift
|
||||
* Stop SetModTime losing metadata (eg X-Object-Manifest)
|
||||
* This could have caused data loss for files > 5GB in size
|
||||
* Use ContentType from Object to avoid lookups in listings
|
||||
* One Drive
|
||||
* disable server side copy as it seems to be broken at Microsoft
|
||||
* v1.24 - 2015-11-07
|
||||
* New features
|
||||
* Add support for Microsoft One Drive
|
||||
* Add `--no-check-certificate` option to disable server certificate verification
|
||||
* Add async readahead buffer for faster transfer of big files
|
||||
* Fixes
|
||||
* Allow spaces in remotes and check remote names for validity at creation time
|
||||
* Allow '&' and disallow ':' in Windows filenames.
|
||||
* Swift
|
||||
* Ignore directory marker objects where appropriate - allows working with Hubic
|
||||
* Don't delete the container if fs wasn't at root
|
||||
* S3
|
||||
* Don't delete the bucket if fs wasn't at root
|
||||
* Google Cloud Storage
|
||||
* Don't delete the bucket if fs wasn't at root
|
||||
* v1.23 - 2015-10-03
|
||||
* New features
|
||||
* Implement `rclone size` for measuring remotes
|
||||
* Fixes
|
||||
* Fix headless config for drive and gcs
|
||||
* Tell the user they should try again if the webserver method failed
|
||||
* Improve output of `--dump-headers`
|
||||
* S3
|
||||
* Allow anonymous access to public buckets
|
||||
* Swift
|
||||
* Stop chunked operations logging "Failed to read info: Object Not Found"
|
||||
* Use Content-Length on uploads for extra reliability
|
||||
* v1.22 - 2015-09-28
|
||||
* Implement rsync like include and exclude flags
|
||||
* swift
|
||||
* Support files > 5GB - thanks Sergey Tolmachev
|
||||
* v1.21 - 2015-09-22
|
||||
* New features
|
||||
* Display individual transfer progress
|
||||
* Make lsl output times in localtime
|
||||
* Fixes
|
||||
* Fix allowing user to override credentials again in Drive, GCS and ACD
|
||||
* Amazon Cloud Drive
|
||||
* Implement compliant pacing scheme
|
||||
* Google Drive
|
||||
* Make directory reads concurrent for increased speed.
|
||||
* v1.20 - 2015-09-15
|
||||
* New features
|
||||
* Amazon Cloud Drive support
|
||||
* Oauth support redone - fix many bugs and improve usability
|
||||
* Use "golang.org/x/oauth2" as oauth libary of choice
|
||||
* Improve oauth usability for smoother initial signup
|
||||
* drive, googlecloudstorage: optionally use auto config for the oauth token
|
||||
* Implement --dump-headers and --dump-bodies debug flags
|
||||
* Show multiple matched commands if abbreviation too short
|
||||
* Implement server side move where possible
|
||||
* local
|
||||
* Always use UNC paths internally on Windows - fixes a lot of bugs
|
||||
* dropbox
|
||||
* force use of our custom transport which makes timeouts work
|
||||
* Thanks to Klaus Post for lots of help with this release
|
||||
* v1.19 - 2015-08-28
|
||||
* New features
|
||||
* Server side copies for s3/swift/drive/dropbox/gcs
|
||||
* Move command - uses server side copies if it can
|
||||
* Implement --retries flag - tries 3 times by default
|
||||
* Build for plan9/amd64 and solaris/amd64 too
|
||||
* Fixes
|
||||
* Make a current version download with a fixed URL for scripting
|
||||
* Ignore rmdir in limited fs rather than throwing error
|
||||
* dropbox
|
||||
* Increase chunk size to improve upload speeds massively
|
||||
* Issue an error message when trying to upload bad file name
|
||||
* v1.18 - 2015-08-17
|
||||
* drive
|
||||
* Add `--drive-use-trash` flag so rclone trashes instead of deletes
|
||||
* Add "Forbidden to download" message for files with no downloadURL
|
||||
* dropbox
|
||||
* Remove datastore
|
||||
* This was deprecated and it caused a lot of problems
|
||||
* Modification times and MD5SUMs no longer stored
|
||||
* Fix uploading files > 2GB
|
||||
* s3
|
||||
* use official AWS SDK from github.com/aws/aws-sdk-go
|
||||
* **NB** will most likely require you to delete and recreate remote
|
||||
* enable multipart upload which enables files > 5GB
|
||||
* tested with Ceph / RadosGW / S3 emulation
|
||||
* many thanks to Sam Liston and Brian Haymore at the [Utah
|
||||
Center for High Performance Computing](https://www.chpc.utah.edu/) for a Ceph test account
|
||||
* misc
|
||||
* Show errors when reading the config file
|
||||
* Do not print stats in quiet mode - thanks Leonid Shalupov
|
||||
* Add FAQ
|
||||
* Fix created directories not obeying umask
|
||||
* Linux installation instructions - thanks Shimon Doodkin
|
||||
* v1.17 - 2015-06-14
|
||||
* dropbox: fix case insensitivity issues - thanks Leonid Shalupov
|
||||
* v1.16 - 2015-06-09
|
||||
* Fix uploading big files which was causing timeouts or panics
|
||||
* Don't check md5sum after download with --size-only
|
||||
* v1.15 - 2015-06-06
|
||||
* Add --checksum flag to only discard transfers by MD5SUM - thanks Alex Couper
|
||||
* Implement --size-only flag to sync on size not checksum & modtime
|
||||
* Expand docs and remove duplicated information
|
||||
* Document rclone's limitations with directories
|
||||
* dropbox: update docs about case insensitivity
|
||||
* v1.14 - 2015-05-21
|
||||
* local: fix encoding of non utf-8 file names - fixes a duplicate file problem
|
||||
* drive: docs about rate limiting
|
||||
* google cloud storage: Fix compile after API change in "google.golang.org/api/storage/v1"
|
||||
* v1.13 - 2015-05-10
|
||||
* Revise documentation (especially sync)
|
||||
* Implement --timeout and --conntimeout
|
||||
* s3: ignore etags from multipart uploads which aren't md5sums
|
||||
* v1.12 - 2015-03-15
|
||||
* drive: Use chunked upload for files above a certain size
|
||||
* drive: add --drive-chunk-size and --drive-upload-cutoff parameters
|
||||
* drive: switch to insert from update when a failed copy deletes the upload
|
||||
* core: Log duplicate files if they are detected
|
||||
* v1.11 - 2015-03-04
|
||||
* swift: add region parameter
|
||||
* drive: fix crash on failed to update remote mtime
|
||||
* In remote paths, change native directory separators to /
|
||||
* Add synchronization to ls/lsl/lsd output to stop corruptions
|
||||
* Ensure all stats/log messages to go stderr
|
||||
* Add --log-file flag to log everything (including panics) to file
|
||||
* Make it possible to disable stats printing with --stats=0
|
||||
* Implement --bwlimit to limit data transfer bandwidth
|
||||
* v1.10 - 2015-02-12
|
||||
* s3: list an unlimited number of items
|
||||
* Fix getting stuck in the configurator
|
||||
* v1.09 - 2015-02-07
|
||||
* windows: Stop drive letters (eg C:) getting mixed up with remotes (eg drive:)
|
||||
* local: Fix directory separators on Windows
|
||||
* drive: fix rate limit exceeded errors
|
||||
* v1.08 - 2015-02-04
|
||||
* drive: fix subdirectory listing to not list entire drive
|
||||
* drive: Fix SetModTime
|
||||
* dropbox: adapt code to recent library changes
|
||||
* v1.07 - 2014-12-23
|
||||
* google cloud storage: fix memory leak
|
||||
* v1.06 - 2014-12-12
|
||||
* Fix "Couldn't find home directory" on OSX
|
||||
* swift: Add tenant parameter
|
||||
* Use new location of Google API packages
|
||||
* v1.05 - 2014-08-09
|
||||
* Improved tests and consequently lots of minor fixes
|
||||
* core: Fix race detected by go race detector
|
||||
* core: Fixes after running errcheck
|
||||
* drive: reset root directory on Rmdir and Purge
|
||||
* fs: Document that Purger returns error on empty directory, test and fix
|
||||
* google cloud storage: fix ListDir on subdirectory
|
||||
* google cloud storage: re-read metadata in SetModTime
|
||||
* s3: make reading metadata more reliable to work around eventual consistency problems
|
||||
* s3: strip trailing / from ListDir()
|
||||
* swift: return directories without / in ListDir
|
||||
* v1.04 - 2014-07-21
|
||||
* google cloud storage: Fix crash on Update
|
||||
* v1.03 - 2014-07-20
|
||||
* swift, s3, dropbox: fix updated files being marked as corrupted
|
||||
* Make compile with go 1.1 again
|
||||
* v1.02 - 2014-07-19
|
||||
* Implement Dropbox remote
|
||||
* Implement Google Cloud Storage remote
|
||||
* Verify Md5sums and Sizes after copies
|
||||
* Remove times from "ls" command - lists sizes only
|
||||
* Add add "lsl" - lists times and sizes
|
||||
* Add "md5sum" command
|
||||
* v1.01 - 2014-07-04
|
||||
* drive: fix transfer of big files using up lots of memory
|
||||
* v1.00 - 2014-07-03
|
||||
* drive: fix whole second dates
|
||||
* v0.99 - 2014-06-26
|
||||
* Fix --dry-run not working
|
||||
* Make compatible with go 1.1
|
||||
* v0.98 - 2014-05-30
|
||||
* s3: Treat missing Content-Length as 0 for some ceph installations
|
||||
* rclonetest: add file with a space in
|
||||
* v0.97 - 2014-05-05
|
||||
* Implement copying of single files
|
||||
* s3 & swift: support paths inside containers/buckets
|
||||
* v0.96 - 2014-04-24
|
||||
* drive: Fix multiple files of same name being created
|
||||
* drive: Use o.Update and fs.Put to optimise transfers
|
||||
* Add version number, -V and --version
|
||||
* v0.95 - 2014-03-28
|
||||
* rclone.org: website, docs and graphics
|
||||
* drive: fix path parsing
|
||||
* v0.94 - 2014-03-27
|
||||
* Change remote format one last time
|
||||
* GNU style flags
|
||||
* v0.93 - 2014-03-16
|
||||
* drive: store token in config file
|
||||
* cross compile other versions
|
||||
* set strict permissions on config file
|
||||
* v0.92 - 2014-03-15
|
||||
* Config fixes and --config option
|
||||
* v0.91 - 2014-03-15
|
||||
* Make config file
|
||||
* v0.90 - 2013-06-27
|
||||
* Project named rclone
|
||||
* v0.00 - 2012-11-18
|
||||
* Project started
|
||||
@@ -5,17 +5,8 @@ date: "2014-04-26"
|
||||
---
|
||||
|
||||
Contact the rclone project
|
||||
--------------------------
|
||||
|
||||
The project website is at:
|
||||
|
||||
* https://github.com/ncw/rclone
|
||||
|
||||
There you can file bug reports, ask for help or contribute pull
|
||||
requests.
|
||||
|
||||
See also
|
||||
|
||||
* [Github project page for source, reporting bugs and pull requests](http://github.com/ncw/rclone)
|
||||
* <a href="https://google.com/+RcloneOrg" rel="publisher">Google+ page for general comments</a></li>
|
||||
|
||||
Or email [Nick Craig-Wood](mailto:nick@craig-wood.com)
|
||||
|
||||
@@ -1,9 +1,22 @@
|
||||
---
|
||||
title: "Documentation"
|
||||
description: "Rclone Usage"
|
||||
date: "2015-06-06"
|
||||
description: "Rclone Documentation"
|
||||
date: "2014-07-17"
|
||||
---
|
||||
|
||||
Install
|
||||
-------
|
||||
|
||||
Rclone is a Go program and comes as a single binary file.
|
||||
|
||||
[Download](/downloads/) the relevant binary.
|
||||
|
||||
Or alternatively if you have Go installed use
|
||||
|
||||
go get github.com/ncw/rclone
|
||||
|
||||
and this will build the binary in `$GOPATH/bin`.
|
||||
|
||||
Configure
|
||||
---------
|
||||
|
||||
@@ -17,19 +30,12 @@ option:
|
||||
|
||||
rclone config
|
||||
|
||||
See the following for detailed instructions for
|
||||
See below for detailed instructions for
|
||||
|
||||
* [Google drive](/drive/)
|
||||
* [Amazon S3](/s3/)
|
||||
* [Swift / Rackspace Cloudfiles / Memset Memstore](/swift/)
|
||||
* [Dropbox](/dropbox/)
|
||||
* [Google Cloud Storage](/googlecloudstorage/)
|
||||
* [Local filesystem](/local/)
|
||||
* [Amazon Cloud Drive](/amazonclouddrive/)
|
||||
* [Backblaze B2](/b2/)
|
||||
* [Hubic](/hubic/)
|
||||
* [Microsoft One Drive](/onedrive/)
|
||||
* [Yandex Disk](/yandex/)
|
||||
|
||||
Usage
|
||||
-----
|
||||
@@ -49,647 +55,104 @@ You can define as many storage paths as you like in the config file.
|
||||
Subcommands
|
||||
-----------
|
||||
|
||||
### rclone copy source:path dest:path ###
|
||||
rclone copy source:path dest:path
|
||||
|
||||
Copy the source to the destination. Doesn't transfer
|
||||
unchanged files, testing by size and modification time or
|
||||
unchanged files, testing first by modification time then by
|
||||
MD5SUM. Doesn't delete files from the destination.
|
||||
|
||||
Note that it is always the contents of the directory that is synced,
|
||||
not the directory so when source:path is a directory, it's the
|
||||
contents of source:path that are copied, not the directory name and
|
||||
contents.
|
||||
rclone sync source:path dest:path
|
||||
|
||||
If dest:path doesn't exist, it is created and the source:path contents
|
||||
go there.
|
||||
Sync the source to the destination. Doesn't transfer
|
||||
unchanged files, testing first by modification time then by
|
||||
MD5SUM. Deletes any files that exist in source that don't
|
||||
exist in destination. Since this can cause data loss, test
|
||||
first with the -dry-run flag.
|
||||
|
||||
For example
|
||||
rclone ls [remote:path]
|
||||
|
||||
rclone copy source:sourcepath dest:destpath
|
||||
List all the objects in the path with sizes.
|
||||
|
||||
Let's say there are two files in sourcepath
|
||||
rclone lsl [remote:path]
|
||||
|
||||
sourcepath/one.txt
|
||||
sourcepath/two.txt
|
||||
List all the objects in the path with sizes and timestamps.
|
||||
|
||||
This copies them to
|
||||
rclone lsd [remote:path]
|
||||
|
||||
destpath/one.txt
|
||||
destpath/two.txt
|
||||
List all directories/objects/buckets in the path.
|
||||
|
||||
Not to
|
||||
|
||||
destpath/sourcepath/one.txt
|
||||
destpath/sourcepath/two.txt
|
||||
|
||||
If you are familiar with `rsync`, rclone always works as if you had
|
||||
written a trailing / - meaning "copy the contents of this directory".
|
||||
This applies to all commands and whether you are talking about the
|
||||
source or destination.
|
||||
|
||||
### rclone sync source:path dest:path ###
|
||||
|
||||
Sync the source to the destination, changing the destination
|
||||
only. Doesn't transfer unchanged files, testing by size and
|
||||
modification time or MD5SUM. Destination is updated to match
|
||||
source, including deleting files if necessary.
|
||||
|
||||
**Important**: Since this can cause data loss, test first with the
|
||||
`--dry-run` flag to see exactly what would be copied and deleted.
|
||||
|
||||
Note that files in the destination won't be deleted if there were any
|
||||
errors at any point.
|
||||
|
||||
It is always the contents of the directory that is synced, not the
|
||||
directory so when source:path is a directory, it's the contents of
|
||||
source:path that are copied, not the directory name and contents. See
|
||||
extended explanation in the `copy` command above if unsure.
|
||||
|
||||
If dest:path doesn't exist, it is created and the source:path contents
|
||||
go there.
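
For example, to see what a sync would do and then run it (the local
path and remote name here are only placeholders):

    rclone --dry-run sync /home/local/pictures remote:pictures
    rclone sync /home/local/pictures remote:pictures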
|
||||
|
||||
### move source:path dest:path ###
|
||||
|
||||
Moves the source to the destination.
|
||||
|
||||
If there are no filters in use this is equivalent to a copy followed
|
||||
by a purge, but may use server side operations to speed it up if
|
||||
possible.
|
||||
|
||||
If filters are in use then it is equivalent to a copy followed by
|
||||
delete, followed by an rmdir (which only removes the directory if
|
||||
empty). The individual files will be moved with server side
|
||||
operations if possible.
|
||||
|
||||
**Important**: Since this can cause data loss, test first with the
|
||||
--dry-run flag.
|
||||
|
||||
### rclone ls remote:path ###
|
||||
|
||||
List all the objects in the path with size and path.
|
||||
|
||||
### rclone lsd remote:path ###
|
||||
|
||||
List all directories/containers/buckets in the path.
|
||||
|
||||
### rclone lsl remote:path ###
|
||||
|
||||
List all the objects in the path with modification time,
|
||||
size and path.
|
||||
|
||||
### rclone md5sum remote:path ###
|
||||
|
||||
Produces an md5sum file for all the objects in the path. This
|
||||
is in the same format as the standard md5sum tool produces.
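
As an illustration (the remote path is a placeholder and this assumes a
local `md5sum` tool, eg from GNU coreutils, plus a local copy laid out
with the same relative paths):

    rclone md5sum remote:path > SUMS.md5
    cd /path/to/local/copy && md5sum -c SUMS.md5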
|
||||
|
||||
### rclone sha1sum remote:path ###
|
||||
|
||||
Produces an sha1sum file for all the objects in the path. This
|
||||
is in the same format as the standard sha1sum tool produces.
|
||||
|
||||
### rclone size remote:path ###
|
||||
|
||||
Prints the total size of objects in remote:path and the number of
|
||||
objects.
|
||||
|
||||
### rclone mkdir remote:path ###
|
||||
rclone mkdir remote:path
|
||||
|
||||
Make the path if it doesn't already exist
|
||||
|
||||
### rclone rmdir remote:path ###
|
||||
rclone rmdir remote:path
|
||||
|
||||
Remove the path. Note that you can't remove a path with
|
||||
objects in it, use purge for that.
|
||||
|
||||
### rclone purge remote:path ###
|
||||
rclone purge remote:path
|
||||
|
||||
Remove the path and all of its contents. Note that this does not obey
|
||||
include/exclude filters - everything will be removed. Use `delete` if
|
||||
you want to selectively delete files.
|
||||
Remove the path and all of its contents.
|
||||
|
||||
### rclone delete remote:path ###
|
||||
|
||||
Remove the contents of path. Unlike `purge` it obeys include/exclude
|
||||
filters so can be used to selectively delete files.
|
||||
|
||||
Eg delete all files bigger than 100MBytes
|
||||
|
||||
Check what would be deleted first (use either)
|
||||
|
||||
rclone --min-size 100M lsl remote:path
|
||||
rclone --dry-run --min-size 100M delete remote:path
|
||||
|
||||
Then delete
|
||||
|
||||
rclone --min-size 100M delete remote:path
|
||||
|
||||
That reads "delete everything with a minimum size of 100 MB", hence
|
||||
delete all files bigger than 100MBytes.
|
||||
|
||||
### rclone check source:path dest:path ###
|
||||
rclone check source:path dest:path
|
||||
|
||||
Checks the files in the source and destination match. It
|
||||
compares sizes and MD5SUMs and prints a report of files which
|
||||
don't match. It doesn't alter the source or destination.
|
||||
|
||||
`--size-only` may be used to only compare the sizes, not the MD5SUMs.
|
||||
rclone md5sum remote:path
|
||||
|
||||
### rclone dedupe remote:path ###
|
||||
|
||||
By default `dedupe` interactively finds duplicate files and offers to
|
||||
delete all but one or rename them to be different. Only useful with
|
||||
Google Drive which can have duplicate file names.
|
||||
|
||||
The `dedupe` command will delete all but one of any identical (same
|
||||
md5sum) files it finds without confirmation. This means that for most
|
||||
duplicated files the `dedupe` command will not be interactive. You
|
||||
can use `--dry-run` to see what would happen without doing anything.
|
||||
|
||||
Here is an example run.
|
||||
|
||||
Before - with duplicates
|
||||
Produces an md5sum file for all the objects in the path. This is in
|
||||
the same format as the standard md5sum tool produces.
|
||||
General options:
|
||||
|
||||
```
|
||||
$ rclone lsl drive:dupes
|
||||
6048320 2016-03-05 16:23:16.798000000 one.txt
|
||||
6048320 2016-03-05 16:23:11.775000000 one.txt
|
||||
564374 2016-03-05 16:23:06.731000000 one.txt
|
||||
6048320 2016-03-05 16:18:26.092000000 one.txt
|
||||
6048320 2016-03-05 16:22:46.185000000 two.txt
|
||||
1744073 2016-03-05 16:22:38.104000000 two.txt
|
||||
564374 2016-03-05 16:22:52.118000000 two.txt
|
||||
--checkers=8: Number of checkers to run in parallel.
|
||||
--transfers=4: Number of file transfers to run in parallel.
|
||||
--config="~/.rclone.conf": Config file.
|
||||
-n, --dry-run=false: Do a trial run with no permanent changes
|
||||
--modify-window=1ns: Max time diff to be considered the same
|
||||
-q, --quiet=false: Print as little stuff as possible
|
||||
--stats=1m0s: Interval to print stats
|
||||
-v, --verbose=false: Print lots more stuff
|
||||
```
|
||||
|
||||
Now the `dedupe` session
|
||||
Developer options:
|
||||
|
||||
```
|
||||
$ rclone dedupe drive:dupes
|
||||
2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
|
||||
one.txt: Found 4 duplicates - deleting identical copies
|
||||
one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36")
|
||||
one.txt: 2 duplicates remain
|
||||
1: 6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
|
||||
2: 564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
|
||||
s) Skip and do nothing
|
||||
k) Keep just one (choose which in next step)
|
||||
r) Rename all to be different (by changing file.jpg to file-1.jpg)
|
||||
s/k/r> k
|
||||
Enter the number of the file to keep> 1
|
||||
one.txt: Deleted 1 extra copies
|
||||
two.txt: Found 3 duplicates - deleting identical copies
|
||||
two.txt: 3 duplicates remain
|
||||
1: 564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
|
||||
2: 6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
|
||||
3: 1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802
|
||||
s) Skip and do nothing
|
||||
k) Keep just one (choose which in next step)
|
||||
r) Rename all to be different (by changing file.jpg to file-1.jpg)
|
||||
s/k/r> r
|
||||
two-1.txt: renamed from: two.txt
|
||||
two-2.txt: renamed from: two.txt
|
||||
two-3.txt: renamed from: two.txt
|
||||
--cpuprofile="": Write cpu profile to file
|
||||
```
|
||||
|
||||
The result being
|
||||
|
||||
```
|
||||
$ rclone lsl drive:dupes
|
||||
6048320 2016-03-05 16:23:16.798000000 one.txt
|
||||
564374 2016-03-05 16:22:52.118000000 two-1.txt
|
||||
6048320 2016-03-05 16:22:46.185000000 two-2.txt
|
||||
1744073 2016-03-05 16:22:38.104000000 two-3.txt
|
||||
```
|
||||
|
||||
Dedupe can be run non-interactively using the `--dedupe-mode` flag.
|
||||
|
||||
* `--dedupe-mode interactive` - interactive as above.
|
||||
* `--dedupe-mode skip` - removes identical files then skips anything left.
|
||||
* `--dedupe-mode first` - removes identical files then keeps the first one.
|
||||
* `--dedupe-mode newest` - removes identical files then keeps the newest one.
|
||||
* `--dedupe-mode oldest` - removes identical files then keeps the oldest one.
|
||||
* `--dedupe-mode rename` - removes identical files then renames the rest to be different.
|
||||
|
||||
For example to rename all the identically named photos in your Google Photos directory, do
|
||||
|
||||
rclone dedupe --dedupe-mode rename "drive:Google Photos"
|
||||
|
||||
### rclone config ###
|
||||
|
||||
Enter an interactive configuration session.
|
||||
|
||||
### rclone help ###
|
||||
|
||||
Prints help on rclone commands and options.
|
||||
|
||||
Server Side Copy
|
||||
----------------
|
||||
|
||||
Drive, S3, Dropbox, Swift and Google Cloud Storage support server side
|
||||
copy.
|
||||
|
||||
This means if you want to copy one folder to another then rclone won't
|
||||
download all the files and re-upload them; it will instruct the server
|
||||
to copy them in place.
|
||||
|
||||
Eg
|
||||
|
||||
rclone copy s3:oldbucket s3:newbucket
|
||||
|
||||
Will copy the contents of `oldbucket` to `newbucket` without
|
||||
downloading and re-uploading.
|
||||
|
||||
Remotes which don't support server side copy (eg local) **will**
|
||||
download and re-upload in this case.
|
||||
|
||||
Server side copies are used with `sync` and `copy` and will be
|
||||
identified in the log when using the `-v` flag.
|
||||
|
||||
Server side copies will only be attempted if the remote names are the
|
||||
same.
|
||||
|
||||
This can be used when scripting to make aged backups efficiently, eg
|
||||
|
||||
rclone sync remote:current-backup remote:previous-backup
|
||||
rclone sync /path/to/files remote:current-backup
|
||||
|
||||
Options
|
||||
License
|
||||
-------
|
||||
|
||||
Rclone has a number of options to control its behaviour.
|
||||
This is free software under the terms of the MIT license (check the
|
||||
COPYING file included in this package).
|
||||
|
||||
Options which use TIME use the go time parser. A duration string is a
|
||||
possibly signed sequence of decimal numbers, each with optional
|
||||
fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid
|
||||
time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
|
||||
Bugs
|
||||
----
|
||||
|
||||
Options which use SIZE use kByte by default. However a suffix of `k`
|
||||
for kBytes, `M` for MBytes and `G` for GBytes may be used. These are
|
||||
the binary units, eg 2\*\*10, 2\*\*20, 2\*\*30 respectively.
|
||||
* Doesn't sync individual files yet, only directories.
|
||||
* Drive: Sometimes get: Failed to copy: Upload failed: googleapi: Error 403: Rate Limit Exceeded
|
||||
* quota is 100.0 requests/second/user
|
||||
* Empty directories left behind with Local and Drive
|
||||
* eg purging a local directory with subdirectories doesn't work
|
||||
|
||||
### --bwlimit=SIZE ###
|
||||
Contact and support
|
||||
-------------------
|
||||
|
||||
Bandwidth limit in kBytes/s, or use suffix k|M|G. The default is `0`
|
||||
which means to not limit bandwidth.
|
||||
The project website is at:
|
||||
|
||||
For example to limit bandwidth usage to 10 MBytes/s use `--bwlimit 10M`
|
||||
* https://github.com/ncw/rclone
|
||||
|
||||
This only limits the bandwidth of the data transfer, it doesn't limit
|
||||
the bandwidth of the directory listings etc.
|
||||
There you can file bug reports, ask for help or contribute patches.
|
||||
|
||||
### --checkers=N ###
|
||||
Authors
|
||||
-------
|
||||
|
||||
The number of checkers to run in parallel. Checkers do the equality
|
||||
checking of files during a sync. For some storage systems (eg s3,
|
||||
swift, dropbox) this can take a significant amount of time so they are
|
||||
run in parallel.
|
||||
* Nick Craig-Wood <nick@craig-wood.com>
|
||||
|
||||
The default is to run 8 checkers in parallel.
|
||||
|
||||
### -c, --checksum ###
|
||||
|
||||
Normally rclone will look at modification time and size of files to
|
||||
see if they are equal. If you set this flag then rclone will check
|
||||
the file hash and size to determine if files are equal.
|
||||
|
||||
This is useful when the remote doesn't support setting modified time
|
||||
and a more accurate sync is desired than just checking the file size.
|
||||
|
||||
This is very useful when transferring between remotes which store the
|
||||
same hash type on the object, eg Drive and Swift. For details of which
|
||||
remotes support which hash type see the table in the [overview
|
||||
section](/overview/).
|
||||
|
||||
Eg `rclone --checksum sync s3:/bucket swift:/bucket` would run much
|
||||
quicker than without the `--checksum` flag.
|
||||
|
||||
When using this flag, rclone won't update mtimes of remote files if
|
||||
they are incorrect as it would normally.
|
||||
|
||||
### --config=CONFIG_FILE ###
|
||||
|
||||
Specify the location of the rclone config file. Normally this is in
|
||||
your home directory as a file called `.rclone.conf`. If you run
|
||||
`rclone -h` and look at the help for the `--config` option you will
|
||||
see where the default location is for you. Use this flag to override
|
||||
the config location, eg `rclone --config=".myconfig" .config`.
|
||||
|
||||
### --contimeout=TIME ###
|
||||
|
||||
Set the connection timeout. This should be in go time format which
|
||||
looks like `5s` for 5 seconds, `10m` for 10 minutes, or `3h30m`.
|
||||
|
||||
The connection timeout is the amount of time rclone will wait for a
|
||||
connection to go through to a remote object storage system. It is
|
||||
`1m` by default.
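
For example, to give up more quickly on a remote that can't be reached
(the paths and remote name are placeholders):

    rclone --contimeout=10s copy /tmp remote:tmp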
|
||||
|
||||
### --dedupe-mode MODE ###
|
||||
|
||||
Mode to run dedupe command in. One of `interactive`, `skip`, `first`, `newest`, `oldest`, `rename`. The default is `interactive`. See the dedupe command for more information as to what these options mean.
|
||||
|
||||
### -n, --dry-run ###
|
||||
|
||||
Do a trial run with no permanent changes. Use this to see what rclone
|
||||
would do without actually doing it. Useful when setting up the `sync`
|
||||
command which deletes files in the destination.
|
||||
|
||||
### --ignore-existing ###
|
||||
|
||||
Using this option will make rclone unconditionally skip all files
|
||||
that exist on the destination, no matter the content of these files.
|
||||
|
||||
While this isn't a generally recommended option, it can be useful
|
||||
in cases where your files change due to encryption. However, it cannot
|
||||
correct partial transfers in case a transfer was interrupted.
|
||||
|
||||
### -I, --ignore-times ###
|
||||
|
||||
Using this option will cause rclone to unconditionally upload all
|
||||
files regardless of the state of files on the destination.
|
||||
|
||||
Normally rclone would skip any files that have the same
|
||||
modification time and are the same size (or have the same checksum if
|
||||
using `--checksum`).
|
||||
|
||||
### --log-file=FILE ###
|
||||
|
||||
Log all of rclone's output to FILE. This is not active by default.
|
||||
This can be useful for tracking down problems with syncs in
|
||||
combination with the `-v` flag.
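
A typical invocation might look like this (paths are placeholders):

    rclone -v --log-file=/tmp/rclone.log sync /home/source remote:backup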
|
||||
|
||||
### --low-level-retries NUMBER ###
|
||||
|
||||
This controls the number of low level retries rclone does.
|
||||
|
||||
A low level retry is used to retry a failing operation - typically one
|
||||
HTTP request. This might be uploading a chunk of a big file for
|
||||
example. You will see low level retries in the log with the `-v`
|
||||
flag.
|
||||
|
||||
This shouldn't need to be changed from the default in normal
|
||||
operations, however if you get a lot of low level retries you may wish
|
||||
to reduce the value so rclone moves on to a high level retry (see the
|
||||
`--retries` flag) quicker.
|
||||
|
||||
Disable low level retries with `--low-level-retries 1`.
|
||||
|
||||
### --modify-window=TIME ###
|
||||
|
||||
When checking whether a file has been modified, this is the maximum
|
||||
allowed time difference that a file can have and still be considered
|
||||
equivalent.
|
||||
|
||||
The default is `1ns` unless this is overridden by a remote. For
|
||||
example OS X only stores modification times to the nearest second so
|
||||
if you are reading and writing to an OS X filing system this will be
|
||||
`1s` by default.
|
||||
|
||||
This command line flag allows you to override that computed default.
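
For example, when writing to a filesystem that only stores modification
times to the nearest 2 seconds (as some FAT filesystems do), something
like this could be used (paths are placeholders):

    rclone --modify-window=2s sync /home/source /mnt/usb-drive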
|
||||
|
||||
### --no-gzip-encoding ###
|
||||
|
||||
Don't set `Accept-Encoding: gzip`. This means that rclone won't ask
|
||||
the server for compressed files automatically. Useful if you've set
|
||||
the server to return files with `Content-Encoding: gzip` but you
|
||||
uploaded compressed files.
|
||||
|
||||
There is no need to set this in normal operation, and doing so will
|
||||
decrease the network transfer efficiency of rclone.
|
||||
|
||||
### -q, --quiet ###
|
||||
|
||||
Normally rclone outputs stats and a completion message. If you set
|
||||
this flag it will make as little output as possible.
|
||||
|
||||
### --retries int ###
|
||||
|
||||
Retry the entire sync if it fails this many times (default 3).
|
||||
|
||||
Some remotes can be unreliable and a few retries helps pick up the
|
||||
files which didn't get transferred because of errors.
|
||||
|
||||
Disable retries with `--retries 1`.
|
||||
|
||||
### --size-only ###
|
||||
|
||||
Normally rclone will look at modification time and size of files to
|
||||
see if they are equal. If you set this flag then rclone will check
|
||||
only the size.
|
||||
|
||||
This can be useful when transferring files from Dropbox which have been
modified by the desktop sync client, which doesn't set checksums or
modification times in the same way as rclone.
|
||||
|
||||
When using this flag, rclone won't update mtimes of remote files if
|
||||
they are incorrect as it would normally.
|
||||
|
||||
### --stats=TIME ###
|
||||
|
||||
Rclone will print stats at regular intervals to show its progress.
|
||||
|
||||
This sets the interval.
|
||||
|
||||
The default is `1m`. Use 0 to disable.
|
||||
|
||||
### --delete-(before,during,after) ###
|
||||
|
||||
This option allows you to specify when files on your destination are
|
||||
deleted when you sync folders.
|
||||
|
||||
Specifying the value `--delete-before` will delete all files present on the
|
||||
destination, but not on the source *before* starting the transfer
|
||||
of any new or updated files.
|
||||
|
||||
Specifying `--delete-during` (default value) will delete files while checking
|
||||
and uploading files. This is usually the fastest option.
|
||||
|
||||
Specifying `--delete-after` will delay deletion of files until all new/updated
|
||||
files have been successfully transferred.
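
For example, to keep the destination intact until everything new has
been copied successfully (paths are placeholders):

    rclone --delete-after sync /home/source remote:backup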
|
||||
|
||||
### --timeout=TIME ###
|
||||
|
||||
This sets the IO idle timeout. If a transfer has started but then
|
||||
becomes idle for this long it is considered broken and disconnected.
|
||||
|
||||
The default is `5m`. Set to 0 to disable.
|
||||
|
||||
### --transfers=N ###
|
||||
|
||||
The number of file transfers to run in parallel. It can sometimes be
|
||||
useful to set this to a smaller number if the remote is giving a lot
|
||||
of timeouts or bigger if you have lots of bandwidth and a fast remote.
|
||||
|
||||
The default is to run 4 file transfers in parallel.
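
For example, on a fast connection you might raise both this and
`--checkers` (paths are placeholders):

    rclone --transfers=8 --checkers=16 sync /home/source remote:backup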
|
||||
|
||||
### -u, --update ###
|
||||
|
||||
This forces rclone to skip any files which exist on the destination
|
||||
and have a modified time that is newer than the source file.
|
||||
|
||||
If an existing destination file has a modification time equal (within
|
||||
the computed modify window precision) to the source file's, it will be
|
||||
updated if the sizes are different.
|
||||
|
||||
On remotes which don't support mod time directly the time checked will
|
||||
be the uploaded time. This means that if uploading to one of these
|
||||
remotes, rclone will skip any files which exist on the destination and
|
||||
have an uploaded time that is newer than the modification time of the
|
||||
source file.
|
||||
|
||||
This can be useful when transferring to a remote which doesn't support
|
||||
mod times directly as it is more accurate than a `--size-only` check
|
||||
and faster than using `--checksum`.
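
A sketch of a typical use (paths are placeholders):

    rclone --update copy /home/source remote:backup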
|
||||
|
||||
### -v, --verbose ###
|
||||
|
||||
If you set this flag, rclone will become very verbose telling you
|
||||
about every file it considers and transfers.
|
||||
|
||||
Very useful for debugging.
|
||||
|
||||
### -V, --version ###
|
||||
|
||||
Prints the version number
|
||||
|
||||
Configuration Encryption
|
||||
------------------------
|
||||
Your configuration file contains information for logging in to
|
||||
your cloud services. This means that you should keep your
|
||||
`.rclone.conf` file in a secure location.
|
||||
|
||||
If you are in an environment where that isn't possible, you can
|
||||
add a password to your configuration. This means that you will
|
||||
have to enter the password every time you start rclone.
|
||||
|
||||
To add a password to your rclone configuration, execute `rclone config`.
|
||||
|
||||
```
|
||||
>rclone config
|
||||
Current remotes:
|
||||
|
||||
e) Edit existing remote
|
||||
n) New remote
|
||||
d) Delete remote
|
||||
s) Set configuration password
|
||||
q) Quit config
|
||||
e/n/d/s/q>
|
||||
```
|
||||
|
||||
Go into `s`, Set configuration password:
|
||||
```
|
||||
e/n/d/s/q> s
|
||||
Your configuration is not encrypted.
|
||||
If you add a password, you will protect your login information to cloud services.
|
||||
a) Add Password
|
||||
q) Quit to main menu
|
||||
a/q> a
|
||||
Enter NEW configuration password:
|
||||
password>
|
||||
Confirm NEW password:
|
||||
password>
|
||||
Password set
|
||||
Your configuration is encrypted.
|
||||
c) Change Password
|
||||
u) Unencrypt configuration
|
||||
q) Quit to main menu
|
||||
c/u/q>
|
||||
```
|
||||
|
||||
Your configuration is now encrypted, and every time you start rclone
|
||||
you will now be asked for the password. In the same menu you can
|
||||
change the password or completely remove encryption from your
|
||||
configuration.
|
||||
|
||||
There is no way to recover the configuration if you lose your password.
|
||||
|
||||
rclone uses [nacl secretbox](https://godoc.org/golang.org/x/crypto/nacl/secretbox)
|
||||
which in turn uses XSalsa20 and Poly1305 to encrypt and authenticate
|
||||
your configuration with secret-key cryptography.
|
||||
The password is SHA-256 hashed, which produces the key for secretbox.
|
||||
The hashed password is not stored.
|
||||
|
||||
While this provides very good security, we do not recommend storing
your encrypted rclone configuration in public if it contains sensitive
information, unless you use a very strong password.
|
||||
|
||||
If it is safe in your environment, you can set the `RCLONE_CONFIG_PASS`
|
||||
environment variable to contain your password, in which case it will be
|
||||
used for decrypting the configuration.
|
||||
|
||||
If you are running rclone inside a script, you might want to disable
|
||||
password prompts. To do that, pass the parameter
|
||||
`--ask-password=false` to rclone. This will make rclone fail instead
|
||||
of asking for a password if `RCLONE_CONFIG_PASS` doesn't contain
|
||||
a valid password.
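
A minimal sketch for use in a script (the password and paths are
placeholders; note that environment variables may be visible to other
users on some systems):

    export RCLONE_CONFIG_PASS="my config password"
    rclone --ask-password=false sync /home/source remote:backup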
|
||||
|
||||
|
||||
Developer options
|
||||
-----------------
|
||||
|
||||
These options are useful when developing or debugging rclone. There
|
||||
are also some more remote specific options which aren't documented
|
||||
here which are used for testing. These start with remote name eg
|
||||
`--drive-test-option` - see the docs for the remote in question.
|
||||
|
||||
### --cpuprofile=FILE ###
|
||||
|
||||
Write CPU profile to file. This can be analysed with `go tool pprof`.
|
||||
|
||||
### --dump-bodies ###
|
||||
|
||||
Dump HTTP headers and bodies - may contain sensitive info. Can be
|
||||
very verbose. Useful for debugging only.
|
||||
|
||||
### --dump-filters ###
|
||||
|
||||
Dump the filters to the output. Useful to see exactly what include
|
||||
and exclude options are filtering on.
|
||||
|
||||
### --dump-headers ###
|
||||
|
||||
Dump HTTP headers - may contain sensitive info. Can be very verbose.
|
||||
Useful for debugging only.
|
||||
|
||||
### --memprofile=FILE ###
|
||||
|
||||
Write memory profile to file. This can be analysed with `go tool pprof`.
|
||||
|
||||
### --no-check-certificate=true/false ###
|
||||
|
||||
`--no-check-certificate` controls whether a client verifies the
|
||||
server's certificate chain and host name.
|
||||
If `--no-check-certificate` is true, TLS accepts any certificate
|
||||
presented by the server and any host name in that certificate.
|
||||
In this mode, TLS is susceptible to man-in-the-middle attacks.
|
||||
|
||||
This option defaults to `false`.
|
||||
|
||||
**This should be used only for testing.**
|
||||
|
||||
Filtering
|
||||
---------
|
||||
|
||||
For the filtering options
|
||||
|
||||
* `--delete-excluded`
|
||||
* `--filter`
|
||||
* `--filter-from`
|
||||
* `--exclude`
|
||||
* `--exclude-from`
|
||||
* `--include`
|
||||
* `--include-from`
|
||||
* `--files-from`
|
||||
* `--min-size`
|
||||
* `--max-size`
|
||||
* `--min-age`
|
||||
* `--max-age`
|
||||
* `--dump-filters`
|
||||
|
||||
See the [filtering section](/filtering/).
|
||||
|
||||
Exit Code
|
||||
---------
|
||||
|
||||
If any errors occurred during the command, rclone will set a non zero
|
||||
exit code. This allows scripts to detect when rclone operations have
|
||||
failed.
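
For example, in a shell script (paths are placeholders):

    if ! rclone sync /home/source remote:backup; then
        echo "rclone sync failed" >&2
        exit 1
    fi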
|
||||
Contributors
|
||||
------------
|
||||
|
||||
* Your name goes here!
|
||||
|
||||
@@ -1,30 +0,0 @@
|
||||
---
|
||||
title: "Flowers for My Wife"
|
||||
description: "Flowers for My Wife."
|
||||
type: page
|
||||
date: "2015-09-06"
|
||||
---
|
||||
|
||||
Flowers for My Wife
|
||||
===================
|
||||
|
||||
Rclone is a pure open source, for-love-not-money project. However I've
|
||||
had requests for a donation page and coding it does take me away from
|
||||
something else I love - my wonderful wife.
|
||||
|
||||
So if you would like to send a donation, I will use it to buy flowers
|
||||
for her which will make her very happy.
|
||||
|
||||
<form action="https://www.paypal.com/cgi-bin/webscr" method="post" target="_top">
|
||||
<input type="hidden" name="cmd" value="_s-xclick">
|
||||
<input type="hidden" name="hosted_button_id" value="XQMMNUD5ZY49J">
|
||||
<input type="image" src="https://www.paypalobjects.com/en_US/GB/i/btn/btn_donateCC_LG.gif" border="0" name="submit" alt="PayPal – The safer, easier way to pay online.">
|
||||
<img alt="" border="0" src="https://www.paypalobjects.com/en_GB/i/scr/pixel.gif" width="1" height="1">
|
||||
</form>
|
||||
|
||||
If you would prefer to express your gratitude by promoting the
|
||||
project, or helping with it, I'd be over the moon with that too!
|
||||
|
||||
Thanks
|
||||
|
||||
Nick
|
||||
@@ -2,75 +2,34 @@
|
||||
title: "Rclone downloads"
|
||||
description: "Download rclone binaries for your OS."
|
||||
type: page
|
||||
date: "2016-04-18"
|
||||
date: "2014-07-19"
|
||||
---
|
||||
|
||||
Rclone Download v1.29
|
||||
Rclone Download v1.02
|
||||
=====================
|
||||
|
||||
* Windows
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-windows-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-windows-amd64.zip)
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.02-windows-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.02-windows-amd64.zip)
|
||||
* OSX
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-osx-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-osx-amd64.zip)
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.02-osx-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.02-osx-amd64.zip)
|
||||
* Linux
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-linux-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-linux-amd64.zip)
|
||||
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.29-linux-arm.zip)
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.02-linux-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.02-linux-amd64.zip)
|
||||
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.02-linux-arm.zip)
|
||||
* FreeBSD
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-freebsd-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-freebsd-amd64.zip)
|
||||
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.29-freebsd-arm.zip)
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.02-freebsd-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.02-freebsd-amd64.zip)
|
||||
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.02-freebsd-arm.zip)
|
||||
* NetBSD
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-netbsd-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-netbsd-amd64.zip)
|
||||
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.29-netbsd-arm.zip)
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.02-netbsd-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.02-netbsd-amd64.zip)
|
||||
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-v1.02-netbsd-arm.zip)
|
||||
* OpenBSD
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-openbsd-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-openbsd-amd64.zip)
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.02-openbsd-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.02-openbsd-amd64.zip)
|
||||
* Plan 9
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.29-plan9-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-plan9-amd64.zip)
|
||||
* Solaris
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-v1.29-solaris-amd64.zip)
|
||||
|
||||
You can also find a [mirror of the downloads on github](https://github.com/ncw/rclone/releases/tag/v1.29).
|
||||
|
||||
Downloads for scripting
|
||||
=======================
|
||||
|
||||
If you would like to download the current version (maybe from a
|
||||
script) from a URL which doesn't change then you can use these links.
|
||||
|
||||
* Windows
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-current-windows-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-current-windows-amd64.zip)
|
||||
* OSX
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-current-osx-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-current-osx-amd64.zip)
|
||||
* Linux
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-current-linux-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-current-linux-amd64.zip)
|
||||
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-current-linux-arm.zip)
|
||||
* FreeBSD
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-current-freebsd-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-current-freebsd-amd64.zip)
|
||||
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-current-freebsd-arm.zip)
|
||||
* NetBSD
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-current-netbsd-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-current-netbsd-amd64.zip)
|
||||
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-current-netbsd-arm.zip)
|
||||
* OpenBSD
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-current-openbsd-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-current-openbsd-amd64.zip)
|
||||
* Plan 9
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-current-plan9-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-current-plan9-amd64.zip)
|
||||
* Solaris
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-current-solaris-amd64.zip)
|
||||
|
||||
Older Downloads
|
||||
==============
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-v1.02-plan9-386.zip)
|
||||
|
||||
Older downloads can be found [here](http://downloads.rclone.org/)
|
||||
|
||||
@@ -31,46 +31,5 @@ Rclone Download VERSION
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-VERSION-openbsd-amd64.zip)
|
||||
* Plan 9
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-VERSION-plan9-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-VERSION-plan9-amd64.zip)
|
||||
* Solaris
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-VERSION-solaris-amd64.zip)
|
||||
|
||||
You can also find a [mirror of the downloads on github](https://github.com/ncw/rclone/releases/tag/VERSION).
|
||||
|
||||
Downloads for scripting
|
||||
=======================
|
||||
|
||||
If you would like to download the current version (maybe from a
|
||||
script) from a URL which doesn't change then you can use these links.
|
||||
|
||||
* Windows
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-current-windows-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-current-windows-amd64.zip)
|
||||
* OSX
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-current-osx-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-current-osx-amd64.zip)
|
||||
* Linux
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-current-linux-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-current-linux-amd64.zip)
|
||||
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-current-linux-arm.zip)
|
||||
* FreeBSD
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-current-freebsd-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-current-freebsd-amd64.zip)
|
||||
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-current-freebsd-arm.zip)
|
||||
* NetBSD
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-current-netbsd-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-current-netbsd-amd64.zip)
|
||||
* [ARM - 32 Bit](http://downloads.rclone.org/rclone-current-netbsd-arm.zip)
|
||||
* OpenBSD
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-current-openbsd-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-current-openbsd-amd64.zip)
|
||||
* Plan 9
|
||||
* [386 - 32 Bit](http://downloads.rclone.org/rclone-current-plan9-386.zip)
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-current-plan9-amd64.zip)
|
||||
* Solaris
|
||||
* [AMD64 - 64 Bit](http://downloads.rclone.org/rclone-current-solaris-amd64.zip)
|
||||
|
||||
Older Downloads
|
||||
==============
|
||||
|
||||
Older downloads can be found [here](http://downloads.rclone.org/)
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
---
|
||||
title: "Google drive"
|
||||
description: "Rclone docs for Google drive"
|
||||
date: "2016-04-12"
|
||||
date: "2014-04-26"
|
||||
---
|
||||
|
||||
<i class="fa fa-google"></i> Google Drive
|
||||
@@ -27,46 +27,22 @@ d) Delete remote
|
||||
q) Quit config
|
||||
e/n/d/q> n
|
||||
name> remote
|
||||
Type of storage to configure.
|
||||
Choose a number from below, or type in your own value
|
||||
1 / Amazon Cloud Drive
|
||||
\ "amazon cloud drive"
|
||||
2 / Amazon S3 (also Dreamhost, Ceph)
|
||||
\ "s3"
|
||||
3 / Backblaze B2
|
||||
\ "b2"
|
||||
4 / Dropbox
|
||||
\ "dropbox"
|
||||
5 / Google Cloud Storage (this is not Google Drive)
|
||||
\ "google cloud storage"
|
||||
6 / Google Drive
|
||||
\ "drive"
|
||||
7 / Hubic
|
||||
\ "hubic"
|
||||
8 / Local Disk
|
||||
\ "local"
|
||||
9 / Microsoft OneDrive
|
||||
\ "onedrive"
|
||||
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
|
||||
\ "swift"
|
||||
11 / Yandex Disk
|
||||
\ "yandex"
|
||||
Storage> 6
|
||||
Google Application Client Id - leave blank normally.
|
||||
What type of source is it?
|
||||
Choose a number from below
|
||||
1) swift
|
||||
2) s3
|
||||
3) local
|
||||
4) drive
|
||||
type> 4
|
||||
Google Application Client Id - leave blank to use rclone's.
|
||||
client_id>
|
||||
Google Application Client Secret - leave blank normally.
|
||||
Google Application Client Secret - leave blank to use rclone's.
|
||||
client_secret>
|
||||
Remote config
|
||||
Use auto config?
|
||||
* Say Y if not sure
|
||||
* Say N if you are working on a remote or headless machine or Y didn't work
|
||||
y) Yes
|
||||
n) No
|
||||
y/n> y
|
||||
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
|
||||
Log in and authorize rclone for access
|
||||
Waiting for code...
|
||||
Got code
|
||||
Go to the following link in your browser
|
||||
https://accounts.google.com/o/oauth2/auth?access_type=&approval_prompt=&client_id=XXXXXXXXXXXX.apps.googleusercontent.com&redirect_uri=urn%3XXXXX%3Awg%3Aoauth%3XX.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&state=state
|
||||
Log in, then type paste the token that is returned in the browser here
|
||||
Enter verification code> X/XXXXXXXXXXXXXXXXXX-XXXXXXXXX.XXXXXXXXX-XXXXX_XXXXXXX_XXXXXXX
|
||||
--------------------
|
||||
[remote]
|
||||
client_id =
|
||||
@@ -79,13 +55,6 @@ d) Delete this remote
|
||||
y/e/d> y
|
||||
```
|
||||
|
||||
Note that rclone runs a webserver on your local machine to collect the
|
||||
token as returned from Google if you use auto config mode. This only
|
||||
runs from the moment it opens your browser to the moment you get back
|
||||
the verification code. This is on `http://127.0.0.1:53682/` and it
may require you to unblock it temporarily if you are running a host
firewall, or use manual mode.
|
||||
|
||||
You can then use it like this,
|
||||
|
||||
List directories in top level of your drive
|
||||
@@ -100,107 +69,7 @@ To copy a local directory to a drive directory called backup
|
||||
|
||||
rclone copy /home/source remote:backup
|
||||
|
||||
### Modified time ###
|
||||
Modified time
|
||||
-------------
|
||||
|
||||
Google drive stores modification times accurate to 1 ms.
|
||||
|
||||
### Revisions ###
|
||||
|
||||
Google drive stores revisions of files. When you upload a change to
|
||||
an existing file to google drive using rclone it will create a new
|
||||
revision of that file.
|
||||
|
||||
Revisions follow the standard google policy which at time of writing
|
||||
was
|
||||
|
||||
* They are deleted after 30 days or 100 revisions (whatever comes first).
|
||||
* They do not count towards a user storage quota.
|
||||
|
||||
### Deleting files ###
|
||||
|
||||
By default rclone will delete files permanently when requested. If
|
||||
sending them to the trash is required instead then use the
|
||||
`--drive-use-trash` flag.
|
||||
|
||||
### Specific options ###
|
||||
|
||||
Here are the command line options specific to this cloud storage
|
||||
system.
|
||||
|
||||
#### --drive-chunk-size=SIZE ####
|
||||
|
||||
Upload chunk size. Must be a power of 2 >= 256k. Default value is 8 MB.
|
||||
|
||||
Making this larger will improve performance, but note that each chunk
|
||||
is buffered in memory, one per transfer.
|
||||
|
||||
Reducing this will reduce memory usage but decrease performance.
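
For example, to trade memory for throughput on a machine with plenty of
RAM (the value must be a power of 2 >= 256k; paths and remote name are
placeholders):

    rclone --drive-chunk-size=64M copy /home/source remote:backup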
|
||||
|
||||
#### --drive-full-list ####
|
||||
|
||||
No longer does anything - kept for backwards compatibility.
|
||||
|
||||
#### --drive-upload-cutoff=SIZE ####
|
||||
|
||||
File size cutoff for switching to chunked upload. Default is 8 MB.
|
||||
|
||||
#### --drive-use-trash ####
|
||||
|
||||
Send files to the trash instead of deleting permanently. Defaults to
|
||||
off, namely deleting files permanently.
|
||||
|
||||
#### --drive-auth-owner-only ####
|
||||
|
||||
Only consider files owned by the authenticated user. Requires
|
||||
that --drive-full-list=true (default).
|
||||
|
||||
#### --drive-formats ####
|
||||
|
||||
Google documents can only be exported from Google drive. When rclone
|
||||
downloads a Google doc it chooses a format to download depending upon
|
||||
this setting.
|
||||
|
||||
By default the formats are `docx,xlsx,pptx,svg` which are a sensible
|
||||
default for an editable document.
|
||||
|
||||
When choosing a format, rclone runs down the list provided in order
|
||||
and chooses the first file format the doc can be exported as from the
|
||||
list. If the file can't be exported to a format on the formats list,
|
||||
then rclone will choose a format from the default list.
|
||||
|
||||
If you prefer an archive copy then you might use `--drive-formats
|
||||
pdf`, or if you prefer openoffice/libreoffice formats you might use
|
||||
`--drive-formats ods,odt`.
|
||||
|
||||
Note that rclone adds the extension to the google doc, so if it is
|
||||
called `My Spreadsheet` on google docs, it will be exported as `My
|
||||
Spreadsheet.xlsx` or `My Spreadsheet.pdf` etc.
|
||||
|
||||
Here are the possible extensions with their corresponding mime types.
|
||||
|
||||
| Extension | Mime Type | Description |
|
||||
| --------- |-----------| ------------|
|
||||
| csv | text/csv | Standard CSV format for Spreadsheets |
|
||||
| doc | application/msword | Microsoft Office Document |
|
||||
| docx | application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Office Document |
|
||||
| html | text/html | An HTML Document |
|
||||
| jpg | image/jpeg | A JPEG Image File |
|
||||
| ods | application/vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
|
||||
| ods | application/x-vnd.oasis.opendocument.spreadsheet | Openoffice Spreadsheet |
|
||||
| odt | application/vnd.oasis.opendocument.text | Openoffice Document |
|
||||
| pdf | application/pdf | Adobe PDF Format |
|
||||
| png | image/png | PNG Image Format|
|
||||
| pptx | application/vnd.openxmlformats-officedocument.presentationml.presentation | Microsoft Office Powerpoint |
|
||||
| rtf | application/rtf | Rich Text Format |
|
||||
| svg | image/svg+xml | Scalable Vector Graphics Format |
|
||||
| txt | text/plain | Plain Text |
|
||||
| xls | application/vnd.ms-excel | Microsoft Office Spreadsheet |
|
||||
| xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet |
|
||||
| zip | application/zip | A ZIP file of HTML, Images and CSS |
|
||||
|
||||
### Limitations ###
|
||||
|
||||
Drive has quite a lot of rate limiting. This causes rclone to be
|
||||
limited to transferring about 2 files per second only. Individual
|
||||
files may be transferred much faster at 100s of MBytes/s but lots of
|
||||
small files can take a long time.
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
---
|
||||
title: "Dropbox"
|
||||
description: "Rclone docs for Dropbox"
|
||||
date: "2016-02-21"
|
||||
date: "2014-07-17"
|
||||
---
|
||||
|
||||
<i class="fa fa-dropbox"></i> Dropbox
|
||||
@@ -28,34 +28,18 @@ d) Delete remote
|
||||
q) Quit config
|
||||
e/n/d/q> n
|
||||
name> remote
|
||||
Type of storage to configure.
|
||||
Choose a number from below, or type in your own value
|
||||
1 / Amazon Cloud Drive
|
||||
\ "amazon cloud drive"
|
||||
2 / Amazon S3 (also Dreamhost, Ceph)
|
||||
\ "s3"
|
||||
3 / Backblaze B2
|
||||
\ "b2"
|
||||
4 / Dropbox
|
||||
\ "dropbox"
|
||||
5 / Google Cloud Storage (this is not Google Drive)
|
||||
\ "google cloud storage"
|
||||
6 / Google Drive
|
||||
\ "drive"
|
||||
7 / Hubic
|
||||
\ "hubic"
|
||||
8 / Local Disk
|
||||
\ "local"
|
||||
9 / Microsoft OneDrive
|
||||
\ "onedrive"
|
||||
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
|
||||
\ "swift"
|
||||
11 / Yandex Disk
|
||||
\ "yandex"
|
||||
Storage> 4
|
||||
Dropbox App Key - leave blank normally.
|
||||
What type of source is it?
|
||||
Choose a number from below
|
||||
1) swift
|
||||
2) s3
|
||||
3) local
|
||||
4) google cloud storage
|
||||
5) dropbox
|
||||
6) drive
|
||||
type> 5
|
||||
Dropbox App Key - leave blank to use rclone's.
|
||||
app_key>
|
||||
Dropbox App Secret - leave blank normally.
|
||||
Dropbox App Secret - leave blank to use rclone's.
|
||||
app_secret>
|
||||
Remote config
|
||||
Please visit:
|
||||
@@ -87,43 +71,10 @@ To copy a local directory to a dropbox directory called backup
|
||||
|
||||
rclone copy /home/source remote:backup
|
||||
|
||||
### Modified time and MD5SUMs ###
|
||||
Modified time
|
||||
-------------
|
||||
|
||||
Dropbox doesn't provide the ability to set modification times in the
|
||||
V1 public API, so rclone can't support modified time with Dropbox.
|
||||
|
||||
This may change in the future - see these issues for details:
|
||||
|
||||
* [Dropbox V2 API](https://github.com/ncw/rclone/issues/349)
|
||||
* [Allow syncs for remotes that can't set modtime on existing objects](https://github.com/ncw/rclone/issues/348)
|
||||
|
||||
Dropbox doesn't return any sort of checksum (MD5 or SHA1).
|
||||
|
||||
Together that means that syncs to dropbox will effectively have the
|
||||
`--size-only` flag set.
|
||||
|
||||
### Specific options ###
|
||||
|
||||
Here are the command line options specific to this cloud storage
|
||||
system.
|
||||
|
||||
#### --dropbox-chunk-size=SIZE ####
|
||||
|
||||
Upload chunk size. Max 150M. The default is 128MB. Note that this
|
||||
isn't buffered into memory.
|
||||
|
||||
### Limitations ###
|
||||
|
||||
Note that Dropbox is case insensitive so you can't have a file called
|
||||
"Hello.doc" and one called "hello.doc".
|
||||
|
||||
There are some file names such as `thumbs.db` which Dropbox can't
|
||||
store. There is a full list of them in the ["Ignored Files" section
|
||||
of this document](https://www.dropbox.com/en/help/145). Rclone will
|
||||
issue an error message `File name disallowed - not uploading` if it
|
||||
attempts to upload one of those file names, but the sync won't fail.
|
||||
|
||||
If you have more than 10,000 files in a directory then `rclone purge
|
||||
dropbox:dir` will return the error `Failed to purge: There are too
|
||||
many files involved in this operation`. As a work-around do an
|
||||
`rclone delete dropbox:dir` followed by an `rclone rmdir dropbox:dir`.
|
||||
Md5sums and timestamps in RFC3339 format accurate to 1ns are stored in
|
||||
a Dropbox datastore called "rclone". Dropbox datastores are limited
|
||||
to 100,000 rows so this is the maximum number of files rclone can
|
||||
manage on Dropbox.
|
||||
|
||||
@@ -1,147 +0,0 @@
|
||||
---
|
||||
title: "FAQ"
|
||||
description: "Rclone Frequently Asked Questions"
|
||||
date: "2015-08-27"
|
||||
---
|
||||
|
||||
Frequently Asked Questions
|
||||
--------------------------
|
||||
|
||||
### Do all cloud storage systems support all rclone commands ###
|
||||
|
||||
Yes they do. All the rclone commands (eg `sync`, `copy` etc) will
|
||||
work on all the remote storage systems.
|
||||
|
||||
### Can I copy the config from one machine to another ###
|
||||
|
||||
Sure! Rclone stores all of its config in a single file. If you want
|
||||
to find this file, the simplest way is to run `rclone -h` and look at
|
||||
the help for the `--config` flag which will tell you where it is.
|
||||
|
||||
See the [remote setup docs](/remote_setup/) for more info.
|
||||
|
||||
### How do I configure rclone on a remote / headless box with no browser? ###
|
||||
|
||||
This has now been documented in its own [remote setup page](/remote_setup/).
|
||||
|
||||
### Can rclone sync directly from drive to s3 ###
|
||||
|
||||
Rclone can sync between two remote cloud storage systems just fine.
|
||||
|
||||
Note that it effectively downloads the file and uploads it again, so
|
||||
the node running rclone would need to have lots of bandwidth.
|
||||
|
||||
The syncs would be incremental (on a file by file basis).
|
||||
|
||||
Eg
|
||||
|
||||
rclone sync drive:Folder s3:bucket
|
||||
|
||||
|
||||
### Using rclone from multiple locations at the same time ###
|
||||
|
||||
You can use rclone from multiple places at the same time if you choose
|
||||
a different subdirectory for the output, eg
|
||||
|
||||
```
|
||||
Server A> rclone sync /tmp/whatever remote:ServerA
|
||||
Server B> rclone sync /tmp/whatever remote:ServerB
|
||||
```
|
||||
|
||||
If you sync to the same directory then you should use rclone copy
|
||||
otherwise the two rclones may delete each other's files, eg
|
||||
|
||||
```
|
||||
Server A> rclone copy /tmp/whatever remote:Backup
|
||||
Server B> rclone copy /tmp/whatever remote:Backup
|
||||
```
|
||||
|
||||
The file names you upload from Server A and Server B should be
|
||||
different in this case, otherwise some file systems (eg Drive) may
|
||||
make duplicates.
|
||||
|
||||
### Why doesn't rclone support partial transfers / binary diffs like rsync? ###
|
||||
|
||||
Rclone stores each file you transfer as a native object on the remote
|
||||
cloud storage system. This means that you can see the files you
|
||||
upload as expected using alternative access methods (eg using the
|
||||
Google Drive web interface). There is a 1:1 mapping between files on
|
||||
your hard disk and objects created in the cloud storage system.
|
||||
|
||||
Cloud storage systems (at least none I've come across yet) don't
|
||||
support partially uploading an object. You can't take an existing
|
||||
object, and change some bytes in the middle of it.
|
||||
|
||||
It would be possible to make a sync system which stored binary diffs
|
||||
instead of whole objects like rclone does, but that would break the
|
||||
1:1 mapping of files on your hard disk to objects in the remote cloud
|
||||
storage system.
|
||||
|
||||
All the cloud storage systems support partial downloads of content, so
|
||||
it would be possible to make partial downloads work. However to make
|
||||
this work efficiently this would require storing a significant amount
|
||||
of metadata, which breaks the desired 1:1 mapping of files to objects.
|
||||
|
||||
### Can rclone do bi-directional sync? ###
|
||||
|
||||
No, not at present. rclone only does uni-directional sync from A ->
|
||||
B. It may do in the future though since it has all the primitives - it
|
||||
just requires writing the algorithm to do it.
|
||||
|
||||
### Can I use rclone with an HTTP proxy? ###
|
||||
|
||||
Yes. rclone will use the environment variables `HTTP_PROXY`,
|
||||
`HTTPS_PROXY` and `NO_PROXY`, similar to cURL and other programs.
|
||||
|
||||
`HTTPS_PROXY` takes precedence over `HTTP_PROXY` for https requests.
|
||||
|
||||
The environment values may be either a complete URL or a "host[:port]",
|
||||
in which case the "http" scheme is assumed.
|
||||
|
||||
The `NO_PROXY` variable allows you to disable the proxy for specific hosts.
|
||||
Hosts must be comma separated, and can contain domains or parts.
|
||||
For instance "foo.com" also matches "bar.foo.com".
|
||||
|
||||
### Rclone gives x509: failed to load system roots and no roots provided error ###
|
||||
|
||||
This means that `rclone` can't find the SSL root certificates. Likely
|
||||
you are running `rclone` on a NAS with a cut-down Linux OS.
|
||||
|
||||
Rclone (via the Go runtime) tries to load the root certificates from
|
||||
these places on Linux.
|
||||
|
||||
"/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc.
|
||||
"/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL
|
||||
"/etc/ssl/ca-bundle.pem", // OpenSUSE
|
||||
"/etc/pki/tls/cacert.pem", // OpenELEC
|
||||
|
||||
So doing something like this should fix the problem. It also sets the
|
||||
time which is important for SSL to work properly.
|
||||
|
||||
```
|
||||
mkdir -p /etc/ssl/certs/
|
||||
curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
|
||||
ntpclient -s -h pool.ntp.org
|
||||
```
|
||||
|
||||
Note that you may need to add the `--insecure` option to the `curl` command line if it doesn't work without.
|
||||
|
||||
```
|
||||
curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
|
||||
```
|
||||
|
||||
### Rclone gives Failed to load config file: function not implemented error ###
|
||||
|
||||
Likely this means that you are running rclone on a Linux kernel version
not supported by the go runtime, ie earlier than version 2.6.23.
|
||||
|
||||
See the [system requirements section in the go install
|
||||
docs](https://golang.org/doc/install) for full details.
|
||||
|
||||
### All my uploaded docx/xlsx/pptx files appear as archive/zip ###
|
||||
|
||||
This is caused by uploading these files from a Windows computer which
|
||||
hasn't got the Microsoft Office suite installed. The easiest way to
fix this is to install the Word viewer and the Microsoft Office
Compatibility Pack for Word, Excel, and PowerPoint 2007 and later
versions' file formats.
|
||||
@@ -1,310 +0,0 @@
---
title: "Filtering"
description: "Filtering, includes and excludes"
date: "2016-02-09"
---

# Filtering, includes and excludes #

Rclone has a sophisticated set of include and exclude rules. Some of
these are based on patterns and some on other things like file size.

The filters are applied for the `copy`, `sync`, `move`, `ls`, `lsl`,
`md5sum`, `sha1sum`, `size`, `delete` and `check` operations.
Note that `purge` does not obey the filters.

Each path as it passes through rclone is matched against the include
and exclude rules like `--include`, `--exclude`, `--include-from`,
`--exclude-from`, `--filter`, or `--filter-from`. The simplest way to
try them out is using the `ls` command, or `--dry-run` together with
`-v`.

## Patterns ##

The patterns used to match files for inclusion or exclusion are based
on "file globs" as used by the unix shell.

If the pattern starts with a `/` then it only matches at the top level
of the directory tree, relative to the root of the remote.
If it doesn't start with `/` then it is matched starting at the
**end of the path**, but it will only match a complete path element:

    file.jpg  - matches "file.jpg"
              - matches "directory/file.jpg"
              - doesn't match "afile.jpg"
              - doesn't match "directory/afile.jpg"
    /file.jpg - matches "file.jpg" in the root directory of the remote
              - doesn't match "afile.jpg"
              - doesn't match "directory/file.jpg"

**Important** Note that you must use `/` in patterns and not `\` even
if running on Windows.

A `*` matches anything but not a `/`.

    *.jpg  - matches "file.jpg"
           - matches "directory/file.jpg"
           - doesn't match "file.jpg/something"

Use `**` to match anything, including slashes (`/`).

    dir/** - matches "dir/file.jpg"
           - matches "dir/dir1/dir2/file.jpg"
           - doesn't match "directory/file.jpg"
           - doesn't match "adir/file.jpg"

A `?` matches any character except a slash `/`.

    l?ss  - matches "less"
          - matches "lass"
          - doesn't match "floss"

A `[` and `]` together make a character class, such as `[a-z]` or
`[aeiou]` or `[[:alpha:]]`. See the [go regexp
docs](https://golang.org/pkg/regexp/syntax/) for more info on these.

    h[ae]llo - matches "hello"
             - matches "hallo"
             - doesn't match "hullo"

A `{` and `}` define a choice between elements. It should contain a
comma separated list of patterns, any of which might match. These
patterns can contain wildcards.

    {one,two}_potato - matches "one_potato"
                     - matches "two_potato"
                     - doesn't match "three_potato"
                     - doesn't match "_potato"

Special characters can be escaped with a `\` before them.

    \*.jpg       - matches "*.jpg"
    \\.jpg       - matches "\.jpg"
    \[one\].jpg  - matches "[one].jpg"

### Differences between rsync and rclone patterns ###

Rclone implements bash style `{a,b,c}` glob matching which rsync doesn't.

Rclone ignores `/` at the end of a pattern.

Rclone always does a wildcard match so `\` must always escape a `\`.

## How the rules are used ##

Rclone maintains a list of include rules and exclude rules.

Each file is matched in order against the list until it finds a match.
The file is then included or excluded according to the rule type.

If the matcher falls off the bottom of the list then the path is
included.

For example given the following rules, `+` being include, `-` being
exclude,

    - secret*.jpg
    + *.jpg
    + *.png
    + file2.avi
    - *

This would include

* `file1.jpg`
* `file3.png`
* `file2.avi`

This would exclude

* `secret17.jpg`
* any file which doesn't match `*.jpg`, `*.png` or `file2.avi`

## Adding filtering rules ##

Filtering rules are added with the following command line flags.

### `--exclude` - Exclude files matching pattern ###

Add a single exclude rule with `--exclude`.

Eg `--exclude *.bak` to exclude all bak files from the sync.

### `--exclude-from` - Read exclude patterns from file ###

Add exclude rules from a file.

Prepare a file like this `exclude-file.txt`

    # a sample exclude rule file
    *.bak
    file2.jpg

Then use as `--exclude-from exclude-file.txt`. This will sync all
files except those ending in `.bak` and the file `file2.jpg`.

This is useful if you have a lot of rules.
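
As a concrete sketch (the source directory and remote name here are
just examples), a full command using the file above might be

    rclone --exclude-from exclude-file.txt copy /home/source remote:backup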

### `--include` - Include files matching pattern ###

Add a single include rule with `--include`.

Eg `--include *.{png,jpg}` to include all `png` and `jpg` files in the
backup and no others.

This adds an implicit `--exclude *` at the very end of the filter
list. This means you can mix `--include` and `--include-from` with the
other filters (eg `--exclude`) but you must include all the files you
want in the include statement. If this doesn't provide enough
flexibility then you must use `--filter-from`.
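
As a sketch, on a shell which expands wildcards you would quote the
pattern, eg (the source directory and remote name are illustrative)

    rclone --include '*.{png,jpg}' sync /home/source remote:backup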

### `--include-from` - Read include patterns from file ###

Add include rules from a file.

Prepare a file like this `include-file.txt`

    # a sample include rule file
    *.jpg
    *.png
    file2.avi

Then use as `--include-from include-file.txt`. This will sync all
`jpg`, `png` files and `file2.avi`.

This is useful if you have a lot of rules.

This adds an implicit `--exclude *` at the very end of the filter
list. This means you can mix `--include` and `--include-from` with the
other filters (eg `--exclude`) but you must include all the files you
want in the include statement. If this doesn't provide enough
flexibility then you must use `--filter-from`.

### `--filter` - Add a file-filtering rule ###

This can be used to add a single include or exclude rule. Include
rules start with `+ ` and exclude rules start with `- `. A special
rule called `!` can be used to clear the existing rules.

Eg `--filter "- *.bak"` to exclude all bak files from the sync.
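
The flag can be given more than once to build up an ordered list of
rules on the command line. For example this sketch (the paths and
remote name are illustrative) copies only the `jpg` files from the source

    rclone --filter "+ *.jpg" --filter "- *" copy /home/source remote:backup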

### `--filter-from` - Read filtering patterns from a file ###

Add include/exclude rules from a file.

Prepare a file like this `filter-file.txt`

    # a sample exclude rule file
    - secret*.jpg
    + *.jpg
    + *.png
    + file2.avi
    # exclude everything else
    - *

Then use as `--filter-from filter-file.txt`. The rules are processed
in the order that they are defined.

This example will include all `jpg` and `png` files, exclude any files
matching `secret*.jpg` and include `file2.avi`. Everything else will
be excluded from the sync.
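
Since a filter file like this can exclude a great deal, a dry run first
is a sensible habit, eg (the source and remote names are illustrative)

    rclone --dry-run -v --filter-from filter-file.txt sync /home/source remote:backup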

### `--files-from` - Read list of source-file names ###

This reads a list of file names from the file passed in and **only**
these files are transferred. The filtering rules are ignored
completely if you use this option.

Prepare a file like this `files-from.txt`

    # comment
    file1.jpg
    file2.jpg

Then use as `--files-from files-from.txt`. This will only transfer
`file1.jpg` and `file2.jpg` providing they exist.
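
For example, to copy just those two files to a remote (the source
directory and remote name are illustrative)

    rclone --files-from files-from.txt copy /home/source remote:backup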

### `--min-size` - Don't transfer any file smaller than this ###

This option controls the minimum size file which will be transferred.
This defaults to `kBytes` but a suffix of `k`, `M`, or `G` can be
used.

For example `--min-size 50k` means no files smaller than 50kByte will be
transferred.

### `--max-size` - Don't transfer any file larger than this ###

This option controls the maximum size file which will be transferred.
This defaults to `kBytes` but a suffix of `k`, `M`, or `G` can be
used.

For example `--max-size 1G` means no files larger than 1GByte will be
transferred.
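
The two flags can be combined to select a size window. This sketch
(the paths are illustrative) only transfers files between 50 kByte and
1 GByte in size

    rclone --min-size 50k --max-size 1G copy /home/source remote:backup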

### `--max-age` - Don't transfer any file older than this ###

This option controls the maximum age of files to transfer. Give in
seconds or with a suffix of:

* `ms` - Milliseconds
* `s` - Seconds
* `m` - Minutes
* `h` - Hours
* `d` - Days
* `w` - Weeks
* `M` - Months
* `y` - Years

For example `--max-age 2d` means no files older than 2 days will be
transferred.

### `--min-age` - Don't transfer any file younger than this ###

This option controls the minimum age of files to transfer. Give in
seconds or with a suffix (see `--max-age` for list of suffixes)

For example `--min-age 2d` means no files younger than 2 days will be
transferred.
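
Combining the two selects an age window. This sketch (the remote name
is illustrative) lists only files older than 2 days but younger than
a week

    rclone --min-age 2d --max-age 1w ls remote:backup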

### `--delete-excluded` - Delete files on dest excluded from sync ###

**Important** this flag is dangerous - use with `--dry-run` and `-v` first.

When doing `rclone sync` this will delete any files which are excluded
from the sync on the destination.

If for example you did a sync from `A` to `B` without the `--min-size 50k` flag

    rclone sync A: B:

Then you repeated it like this with the `--delete-excluded`

    rclone --min-size 50k --delete-excluded sync A: B:

This would delete all files on `B` which are less than 50 kBytes as
these are now excluded from the sync.

Always test first with `--dry-run` and `-v` before using this flag.
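
For the example above that test run would look something like this

    rclone --dry-run -v --min-size 50k --delete-excluded sync A: B: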

### `--dump-filters` - dump the filters to the output ###

This dumps the defined filters to the output as regular expressions.

Useful for debugging.
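
For example, a run like this sketch (the pattern and remote name are
illustrative) prints the filters in use as regular expressions

    rclone --dump-filters --include '*.jpg' ls remote:backup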

## Quoting shell metacharacters ##

The examples above may not work verbatim in your shell as they have
shell metacharacters in them (eg `*`), and may require quoting.

Eg linux, OSX

* `--include \*.jpg`
* `--include '*.jpg'`
* `--include='*.jpg'`

In Windows the expansion is done by the command not the shell so this
should work fine

* `--include *.jpg`
@@ -1,7 +1,7 @@
|
||||
---
|
||||
title: "Google Cloud Storage"
|
||||
description: "Rclone docs for Google Cloud Storage"
|
||||
date: "2015-09-12"
|
||||
date: "2014-07-17"
|
||||
---
|
||||
|
||||
<i class="fa fa-google"></i> Google Cloud Storage
|
||||
@@ -26,34 +26,18 @@ d) Delete remote
|
||||
q) Quit config
|
||||
e/n/d/q> n
|
||||
name> remote
|
||||
Type of storage to configure.
|
||||
Choose a number from below, or type in your own value
|
||||
1 / Amazon Cloud Drive
|
||||
\ "amazon cloud drive"
|
||||
2 / Amazon S3 (also Dreamhost, Ceph)
|
||||
\ "s3"
|
||||
3 / Backblaze B2
|
||||
\ "b2"
|
||||
4 / Dropbox
|
||||
\ "dropbox"
|
||||
5 / Google Cloud Storage (this is not Google Drive)
|
||||
\ "google cloud storage"
|
||||
6 / Google Drive
|
||||
\ "drive"
|
||||
7 / Hubic
|
||||
\ "hubic"
|
||||
8 / Local Disk
|
||||
\ "local"
|
||||
9 / Microsoft OneDrive
|
||||
\ "onedrive"
|
||||
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
|
||||
\ "swift"
|
||||
11 / Yandex Disk
|
||||
\ "yandex"
|
||||
Storage> 5
|
||||
Google Application Client Id - leave blank normally.
|
||||
What type of source is it?
|
||||
Choose a number from below
|
||||
1) swift
|
||||
2) s3
|
||||
3) local
|
||||
4) google cloud storage
|
||||
5) dropbox
|
||||
6) drive
|
||||
type> 4
|
||||
Google Application Client Id - leave blank to use rclone's.
|
||||
client_id>
|
||||
Google Application Client Secret - leave blank normally.
|
||||
Google Application Client Secret - leave blank to use rclone's.
|
||||
client_secret>
|
||||
Project number optional - needed only for list/create/delete buckets - see your developer console.
|
||||
project_number> 12345678
|
||||
@@ -86,17 +70,10 @@ Choose a number from below, or type in your own value
|
||||
5) publicReadWrite
|
||||
bucket_acl> 2
|
||||
Remote config
|
||||
Remote config
|
||||
Use auto config?
|
||||
* Say Y if not sure
|
||||
* Say N if you are working on a remote or headless machine or Y didn't work
|
||||
y) Yes
|
||||
n) No
|
||||
y/n> y
|
||||
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
|
||||
Log in and authorize rclone for access
|
||||
Waiting for code...
|
||||
Got code
|
||||
Go to the following link in your browser
|
||||
https://accounts.google.com/o/oauth2/auth?access_type=&approval_prompt=&client_id=XXXXXXXXXXXX.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdevstorage.full_control&state=state
|
||||
Log in, then type paste the token that is returned in the browser here
|
||||
Enter verification code> x/xxxxxxxxxxxxxxxxxxxxxxxxxxxx.xxxxxxxxxxxxxxxxxxxxxx_xxxxxxxx
|
||||
--------------------
|
||||
[remote]
|
||||
type = google cloud storage
|
||||
@@ -113,13 +90,6 @@ d) Delete this remote
|
||||
y/e/d> y
|
||||
```
|
||||
|
||||
Note that rclone runs a webserver on your local machine to collect the
token as returned from Google if you use auto config mode. This only
runs from the moment it opens your browser to the moment you get back
the verification code. This is on `http://127.0.0.1:53682/` and it
may require you to unblock it temporarily if you are running a host
firewall, or use manual mode.
|
||||
|
||||
This remote is called `remote` and can now be used like this
|
||||
|
||||
See all the buckets in your project
|
||||
@@ -139,7 +109,8 @@ files in the bucket.
|
||||
|
||||
rclone sync /home/local/directory remote:bucket
|
||||
|
||||
### Modified time ###
|
||||
Modified time
|
||||
-------------
|
||||
|
||||
Google Cloud Storage stores md5sums natively and rclone stores
modification times as metadata on the object, under the "mtime" key in
|
||||
|
||||
@@ -1,124 +0,0 @@
|
||||
---
|
||||
title: "Hubic"
|
||||
description: "Rclone docs for Hubic"
|
||||
date: "2016-03-16"
|
||||
---
|
||||
|
||||
<i class="fa fa-space-shuttle"></i> Hubic
|
||||
-----------------------------------------
|
||||
|
||||
Paths are specified as `remote:container` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, eg `remote:container/path/to/dir`.
|
||||
|
||||
The initial setup for Hubic involves getting a token from Hubic which
|
||||
you need to do in your browser. `rclone config` walks you through it.
|
||||
|
||||
Here is an example of how to make a remote called `remote`. First run:
|
||||
|
||||
rclone config
|
||||
|
||||
This will guide you through an interactive setup process:
|
||||
|
||||
```
|
||||
n) New remote
|
||||
s) Set configuration password
|
||||
n/s> n
|
||||
name> remote
|
||||
Type of storage to configure.
|
||||
Choose a number from below, or type in your own value
|
||||
1 / Amazon Cloud Drive
|
||||
\ "amazon cloud drive"
|
||||
2 / Amazon S3 (also Dreamhost, Ceph)
|
||||
\ "s3"
|
||||
3 / Backblaze B2
|
||||
\ "b2"
|
||||
4 / Dropbox
|
||||
\ "dropbox"
|
||||
5 / Google Cloud Storage (this is not Google Drive)
|
||||
\ "google cloud storage"
|
||||
6 / Google Drive
|
||||
\ "drive"
|
||||
7 / Hubic
|
||||
\ "hubic"
|
||||
8 / Local Disk
|
||||
\ "local"
|
||||
9 / Microsoft OneDrive
|
||||
\ "onedrive"
|
||||
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
|
||||
\ "swift"
|
||||
11 / Yandex Disk
|
||||
\ "yandex"
|
||||
Storage> 7
|
||||
Hubic Client Id - leave blank normally.
|
||||
client_id>
|
||||
Hubic Client Secret - leave blank normally.
|
||||
client_secret>
|
||||
Remote config
|
||||
Use auto config?
|
||||
* Say Y if not sure
|
||||
* Say N if you are working on a remote or headless machine
|
||||
y) Yes
|
||||
n) No
|
||||
y/n> y
|
||||
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
|
||||
Log in and authorize rclone for access
|
||||
Waiting for code...
|
||||
Got code
|
||||
--------------------
|
||||
[remote]
|
||||
client_id =
|
||||
client_secret =
|
||||
token = {"access_token":"XXXXXX"}
|
||||
--------------------
|
||||
y) Yes this is OK
|
||||
e) Edit this remote
|
||||
d) Delete this remote
|
||||
y/e/d> y
|
||||
```
|
||||
|
||||
See the [remote setup docs](/remote_setup/) for how to set it up on a
|
||||
machine with no Internet browser available.
|
||||
|
||||
Note that rclone runs a webserver on your local machine to collect the
|
||||
token as returned from Hubic. This only runs from the moment it opens
|
||||
your browser to the moment you get back the verification code. This
|
||||
is on `http://127.0.0.1:53682/` and it may require you to unblock
|
||||
it temporarily if you are running a host firewall.
|
||||
|
||||
Once configured you can then use `rclone` like this,
|
||||
|
||||
List containers in the top level of your Hubic
|
||||
|
||||
rclone lsd remote:
|
||||
|
||||
List all the files in your Hubic
|
||||
|
||||
rclone ls remote:
|
||||
|
||||
To copy a local directory to a Hubic directory called backup
|
||||
|
||||
rclone copy /home/source remote:backup
|
||||
|
||||
### Modified time ###
|
||||
|
||||
The modified time is stored as metadata on the object as
|
||||
`X-Object-Meta-Mtime` as floating point since the epoch accurate to 1
|
||||
ns.
|
||||
|
||||
This is a defacto standard (used in the official python-swiftclient
|
||||
amongst others) for storing the modification time for an object.
|
||||
|
||||
Note that Hubic wraps the Swift backend, so most of the properties of
the Swift backend are the same.
|
||||
|
||||
### Limitations ###
|
||||
|
||||
This uses the normal OpenStack Swift mechanism to refresh the Swift
|
||||
API credentials and ignores the expires field returned by the Hubic
|
||||
API.
|
||||
|
||||
The Swift API doesn't return a correct MD5SUM for segmented files
|
||||
(Dynamic or Static Large Objects) so rclone won't check or use the
|
||||
MD5SUM for these.
|
||||
@@ -1,39 +0,0 @@
|
||||
---
|
||||
title: "Install"
|
||||
description: "Rclone Installation"
|
||||
date: "2016-03-28"
|
||||
---
|
||||
|
||||
Install
|
||||
-------
|
||||
|
||||
Rclone is a Go program and comes as a single binary file.
|
||||
|
||||
[Download](/downloads/) the relevant binary.
|
||||
|
||||
Or alternatively if you have Go 1.5+ installed use
|
||||
|
||||
go get github.com/ncw/rclone
|
||||
|
||||
and this will build the binary in `$GOPATH/bin`. If you have built
|
||||
rclone before then you will want to update its dependencies first with
|
||||
this
|
||||
|
||||
go get -u -v github.com/ncw/rclone/...
|
||||
|
||||
See the [Usage section](/docs/) of the docs for how to use rclone, or
|
||||
run `rclone -h`.
|
||||
|
||||
Linux binary download and install example
|
||||
-------
|
||||
|
||||
unzip rclone-v1.17-linux-amd64.zip
|
||||
cd rclone-v1.17-linux-amd64
|
||||
#copy binary file
|
||||
sudo cp rclone /usr/sbin/
|
||||
sudo chown root:root /usr/sbin/rclone
|
||||
sudo chmod 755 /usr/sbin/rclone
|
||||
#install manpage
|
||||
sudo mkdir -p /usr/local/share/man/man1
|
||||
sudo cp rclone.1 /usr/local/share/man/man1/
|
||||
sudo mandb
|
||||
@@ -1,34 +0,0 @@
|
||||
---
|
||||
title: "Licence"
|
||||
description: "Rclone Licence"
|
||||
date: "2015-06-06"
|
||||
---
|
||||
|
||||
License
|
||||
-------
|
||||
|
||||
This is free software under the terms of the MIT license (check the
COPYING file included with the source code).
|
||||
|
||||
```
|
||||
Copyright (C) 2012 by Nick Craig-Wood http://www.craig-wood.com/nick/
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in
|
||||
all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
THE SOFTWARE.
|
||||
```
|
||||
|
||||
@@ -16,61 +16,10 @@ Will sync `/home/source` to `/tmp/destination`
|
||||
These can be configured into the config file for consistency's sake,
but it is probably easier not to.
|
||||
|
||||
### Modified time ###
|
||||
Modified time
|
||||
-------------
|
||||
|
||||
Rclone reads and writes the modified time using an accuracy determined by
|
||||
the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second
|
||||
on OS X.
|
||||
|
||||
### Filenames ###
|
||||
|
||||
Filenames are expected to be encoded in UTF-8 on disk. This is the
|
||||
normal case for Windows and OS X.
|
||||
|
||||
There is a bit more uncertainty in the Linux world, but new
|
||||
distributions will have UTF-8 encoded files names. If you are using an
|
||||
old Linux filesystem with non UTF-8 file names (eg latin1) then you
|
||||
can use the `convmv` tool to convert the filesystem to UTF-8. This
|
||||
tool is available in most distributions' package managers.
|
||||
|
||||
If an invalid (non-UTF8) filename is read, the invalid characters will
be replaced with the unicode replacement character, '�'. `rclone`
|
||||
will emit a debug message in this case (use `-v` to see), eg
|
||||
|
||||
```
|
||||
Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"
|
||||
```
|
||||
|
||||
### Long paths on Windows ###
|
||||
|
||||
Rclone handles long paths automatically, by converting all paths to long
|
||||
[UNC paths](https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx#maxpath)
|
||||
which allows paths up to 32,767 characters.
|
||||
|
||||
This is why you will see that a path, for instance `c:\files`, is
converted to the UNC path `\\?\c:\files` in the output,
|
||||
and `\\server\share` is converted to `\\?\UNC\server\share`.
|
||||
|
||||
However, in rare cases this may cause problems with buggy file
|
||||
system drivers like [EncFS](https://github.com/ncw/rclone/issues/261).
|
||||
To disable UNC conversion globally, add this to your `.rclone.conf` file:
|
||||
|
||||
```
|
||||
[local]
|
||||
nounc = true
|
||||
```
|
||||
|
||||
If you want to selectively disable UNC, you can add it to a separate entry like this:
|
||||
|
||||
```
|
||||
[nounc]
|
||||
type = local
|
||||
nounc = true
|
||||
```
|
||||
And use rclone like this:
|
||||
|
||||
`rclone copy c:\src nounc:z:\dst`
|
||||
|
||||
This will use UNC paths on `c:\src` but not on `z:\dst`.
|
||||
Of course this will cause problems if the absolute path length of a
|
||||
file exceeds 258 characters on z, so only use this option if you have to.
|
||||
|
||||
@@ -1,149 +0,0 @@
|
||||
---
|
||||
title: "Microsoft One Drive"
|
||||
description: "Rclone docs for Microsoft One Drive"
|
||||
date: "2015-10-14"
|
||||
---
|
||||
|
||||
<i class="fa fa-windows"></i> Microsoft One Drive
|
||||
-----------------------------------------
|
||||
|
||||
Paths are specified as `remote:path`
|
||||
|
||||
Paths may be as deep as required, eg `remote:directory/subdirectory`.
|
||||
|
||||
The initial setup for One Drive involves getting a token from
|
||||
Microsoft which you need to do in your browser. `rclone config` walks
|
||||
you through it.
|
||||
|
||||
Here is an example of how to make a remote called `remote`. First run:
|
||||
|
||||
rclone config
|
||||
|
||||
This will guide you through an interactive setup process:
|
||||
|
||||
```
|
||||
No remotes found - make a new one
|
||||
n) New remote
|
||||
s) Set configuration password
|
||||
n/s> n
|
||||
name> remote
|
||||
Type of storage to configure.
|
||||
Choose a number from below, or type in your own value
|
||||
1 / Amazon Cloud Drive
|
||||
\ "amazon cloud drive"
|
||||
2 / Amazon S3 (also Dreamhost, Ceph)
|
||||
\ "s3"
|
||||
3 / Backblaze B2
|
||||
\ "b2"
|
||||
4 / Dropbox
|
||||
\ "dropbox"
|
||||
5 / Google Cloud Storage (this is not Google Drive)
|
||||
\ "google cloud storage"
|
||||
6 / Google Drive
|
||||
\ "drive"
|
||||
7 / Hubic
|
||||
\ "hubic"
|
||||
8 / Local Disk
|
||||
\ "local"
|
||||
9 / Microsoft OneDrive
|
||||
\ "onedrive"
|
||||
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
|
||||
\ "swift"
|
||||
11 / Yandex Disk
|
||||
\ "yandex"
|
||||
Storage> 9
|
||||
Microsoft App Client Id - leave blank normally.
|
||||
client_id>
|
||||
Microsoft App Client Secret - leave blank normally.
|
||||
client_secret>
|
||||
Remote config
|
||||
Use auto config?
|
||||
* Say Y if not sure
|
||||
* Say N if you are working on a remote or headless machine
|
||||
y) Yes
|
||||
n) No
|
||||
y/n> y
|
||||
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
|
||||
Log in and authorize rclone for access
|
||||
Waiting for code...
|
||||
Got code
|
||||
--------------------
|
||||
[remote]
|
||||
client_id =
|
||||
client_secret =
|
||||
token = {"access_token":"XXXXXX"}
|
||||
--------------------
|
||||
y) Yes this is OK
|
||||
e) Edit this remote
|
||||
d) Delete this remote
|
||||
y/e/d> y
|
||||
```
|
||||
|
||||
See the [remote setup docs](/remote_setup/) for how to set it up on a
|
||||
machine with no Internet browser available.
|
||||
|
||||
Note that rclone runs a webserver on your local machine to collect the
|
||||
token as returned from Microsoft. This only runs from the moment it
|
||||
opens your browser to the moment you get back the verification
|
||||
code. This is on `http://127.0.0.1:53682/` and it may require
|
||||
you to unblock it temporarily if you are running a host firewall.
|
||||
|
||||
Once configured you can then use `rclone` like this,
|
||||
|
||||
List directories in top level of your One Drive
|
||||
|
||||
rclone lsd remote:
|
||||
|
||||
List all the files in your One Drive
|
||||
|
||||
rclone ls remote:
|
||||
|
||||
To copy a local directory to a One Drive directory called backup
|
||||
|
||||
rclone copy /home/source remote:backup
|
||||
|
||||
### Modified time and hashes ###
|
||||
|
||||
One Drive allows modification times to be set on objects accurate to 1
|
||||
second. These will be used to detect whether objects need syncing or
|
||||
not.
|
||||
|
||||
One drive supports SHA1 type hashes, so you can use `--checksum` flag.
|
||||
|
||||
|
||||
### Deleting files ###
|
||||
|
||||
Any files you delete with rclone will end up in the trash. Microsoft
|
||||
doesn't provide an API to permanently delete files, nor to empty the
|
||||
trash, so you will have to do that with one of Microsoft's apps or via
|
||||
the One Drive website.
|
||||
|
||||
### Specific options ###
|
||||
|
||||
Here are the command line options specific to this cloud storage
|
||||
system.
|
||||
|
||||
#### --onedrive-chunk-size=SIZE ####
|
||||
|
||||
Above this size files will be chunked - must be a multiple of 320k. The
|
||||
default is 10MB. Note that the chunks will be buffered into memory.
|
||||
|
||||
#### --onedrive-upload-cutoff=SIZE ####
|
||||
|
||||
Cutoff for switching to chunked upload - must be <= 100MB. The default
|
||||
is 10MB.
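
As a sketch, raising both limits might look like this (the sizes are
only examples - the chunk size must remain a multiple of 320k and the
cutoff must remain <= 100MB, and the paths are illustrative)

    rclone --onedrive-chunk-size=20M --onedrive-upload-cutoff=50M copy /home/source remote:backup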
|
||||
|
||||
### Limitations ###
|
||||
|
||||
Note that One Drive is case insensitive so you can't have a
|
||||
file called "Hello.doc" and one called "hello.doc".
|
||||
|
||||
Rclone only supports your default One Drive, and doesn't work with One
|
||||
Drive for business. Both these issues may be fixed at some point
|
||||
depending on user demand!
|
||||
|
||||
There are quite a few characters that can't be in One Drive file
|
||||
names. These can't occur on Windows platforms, but on non-Windows
|
||||
platforms they are common. Rclone will map these names to and from an
|
||||
identical looking unicode equivalent. For example if a file has a `?`
in it, it will be mapped to a fullwidth `？` instead.
|
||||
@@ -1,79 +0,0 @@
|
||||
---
|
||||
title: "Overview of cloud storage systems"
|
||||
description: "Overview of cloud storage systems"
|
||||
type: page
|
||||
date: "2015-09-06"
|
||||
---
|
||||
|
||||
# Overview of cloud storage systems #
|
||||
|
||||
Each cloud storage system is slightly different. Rclone attempts to
|
||||
provide a unified interface to them, but some underlying differences
|
||||
show through.
|
||||
|
||||
## Features ##
|
||||
|
||||
Here is an overview of the major features of each cloud storage system.
|
||||
|
||||
| Name | Hash | ModTime | Case Insensitive | Duplicate Files |
|
||||
| ---------------------- |:-------:|:-------:|:----------------:|:---------------:|
|
||||
| Google Drive | MD5 | Yes | No | Yes |
|
||||
| Amazon S3 | MD5 | Yes | No | No |
|
||||
| Openstack Swift | MD5 | Yes | No | No |
|
||||
| Dropbox | - | No | Yes | No |
|
||||
| Google Cloud Storage | MD5 | Yes | No | No |
|
||||
| Amazon Cloud Drive | MD5 | No | Yes | No |
|
||||
| Microsoft One Drive | SHA1 | Yes | Yes | No |
|
||||
| Hubic | MD5 | Yes | No | No |
|
||||
| Backblaze B2 | SHA1 | Yes | No | No |
|
||||
| Yandex Disk | MD5 | Yes | No | No |
|
||||
| The local filesystem | All | Yes | Depends | No |
|
||||
|
||||
### Hash ###
|
||||
|
||||
The cloud storage system supports various hash types of the objects.
|
||||
The hashes are used when transferring data as an integrity check and
|
||||
can be specifically used with the `--checksum` flag in syncs and in
|
||||
the `check` command.
|
||||
|
||||
To use the checksum checks between filesystems they must support a
|
||||
common hash type.
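
For example, a sync which compares checksums rather than sizes and
modification times might look like this sketch (the paths are
illustrative), provided both sides support a common hash type

    rclone --checksum sync /home/source remote:backup

and the `check` command compares the two sides in a similar way

    rclone check /home/source remote:backup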
|
||||
|
||||
### ModTime ###
|
||||
|
||||
The cloud storage system supports setting modification times on
|
||||
objects. If it does then this enables using the modification times
|
||||
as part of the sync. If not then only the size will be checked by
|
||||
default, though the MD5SUM can be checked with the `--checksum` flag.
|
||||
|
||||
All cloud storage systems support some kind of date on the object and
|
||||
these will be set when transferring from the cloud storage system.
|
||||
|
||||
### Case Insensitive ###
|
||||
|
||||
If a cloud storage system is case sensitive then it is possible to
|
||||
have two files which differ only in case, eg `file.txt` and
|
||||
`FILE.txt`. If a cloud storage system is case insensitive then that
|
||||
isn't possible.
|
||||
|
||||
This can cause problems when syncing between a case insensitive
|
||||
system and a case sensitive system. The symptom of this is that no
|
||||
matter how many times you run the sync it never completes fully.
|
||||
|
||||
The local filesystem may or may not be case sensitive depending on OS.
|
||||
|
||||
* Windows - usually case insensitive, though case is preserved
|
||||
* OSX - usually case insensitive, though it is possible to format case sensitive
|
||||
* Linux - usually case sensitive, but there are case insensitive file systems (eg FAT formatted USB keys)
|
||||
|
||||
Most of the time this doesn't cause any problems as people tend to
|
||||
avoid files whose name differs only by case even on case sensitive
|
||||
systems.
|
||||
|
||||
### Duplicate files ###
|
||||
|
||||
If a cloud storage system allows duplicate files then it can have two
|
||||
objects with the same name.
|
||||
|
||||
This confuses rclone greatly when syncing - use the `rclone dedupe`
|
||||
command to rename or remove duplicates.
|
||||
@@ -1,65 +0,0 @@
|
||||
---
|
||||
title: "Privacy Policy"
|
||||
description: "Rclone Privacy Policy"
|
||||
date: "2015-08-19"
|
||||
---
|
||||
|
||||
# Rclone Privacy Policy #
|
||||
|
||||
## What is this Privacy Policy for? ##
|
||||
|
||||
This privacy policy is for this website http://rclone.org and governs the privacy of its users who choose to use it.
|
||||
|
||||
The policy sets out the different areas where user privacy is concerned and outlines the obligations & requirements of the users, the website and website owners. Furthermore the way this website processes, stores and protects user data and information will also be detailed within this policy.
|
||||
|
||||
## The Website ##
|
||||
|
||||
This website and its owners take a proactive approach to user privacy and ensure the necessary steps are taken to protect the privacy of its users throughout their visiting experience. This website complies with all UK national laws and requirements for user privacy.
|
||||
|
||||
## Use of Cookies ##
|
||||
|
||||
This website uses cookies to better the users experience while visiting the website. Where applicable this website uses a cookie control system allowing the user on their first visit to the website to allow or disallow the use of cookies on their computer / device. This complies with recent legislation requirements for websites to obtain explicit consent from users before leaving behind or reading files such as cookies on a user's computer / device.
|
||||
|
||||
Cookies are small files saved to the user's computers hard drive that track, save and store information about the user's interactions and usage of the website. This allows the website, through its server to provide the users with a tailored experience within this website.
|
||||
|
||||
Users are advised that if they wish to deny the use and saving of cookies from this website on to their computers hard drive they should take necessary steps within their web browsers security settings to block all cookies from this website and its external serving vendors.
|
||||
|
||||
This website uses tracking software to monitor its visitors to better understand how they use it. This software is provided by Google Analytics which uses cookies to track visitor usage. The software will save a cookie to your computers hard drive in order to track and monitor your engagement and usage of the website, but will not store, save or collect personal information. You can read [Google's privacy policy here](http://www.google.com/privacy.html) for further information.
|
||||
|
||||
Other cookies may be stored to your computers hard drive by external vendors when this website uses referral programs, sponsored links or adverts. Such cookies are used for conversion and referral tracking and typically expire after 30 days, though some may take longer. No personal information is stored, saved or collected.
|
||||
|
||||
## Contact & Communication ##
|
||||
|
||||
Users contacting this website and/or its owners do so at their own discretion and provide any such personal details requested at their own risk. Your personal information is kept private and stored securely until a time it is no longer required or has no use, as detailed in the Data Protection Act 1998.
|
||||
|
||||
This website and its owners use any information submitted to provide you with further information about the products / services they offer or to assist you in answering any questions or queries you may have submitted.
|
||||
|
||||
## External Links ##
|
||||
|
||||
Although this website only looks to include quality, safe and relevant external links, users are advised to adopt a policy of caution before clicking any external web links mentioned throughout this website.
|
||||
|
||||
The owners of this website cannot guarantee or verify the contents of any externally linked website despite their best efforts. Users should therefore note they click on external links at their own risk and this website and its owners cannot be held liable for any damages or implications caused by visiting any external links mentioned.
|
||||
|
||||
## Adverts and Sponsored Links ##
|
||||
|
||||
This website may contain sponsored links and adverts. These will typically be served through our advertising partners, who may have detailed privacy policies relating directly to the adverts they serve.
|
||||
|
||||
Clicking on any such adverts will send you to the advertisers website through a referral program which may use cookies and will track the number of referrals sent from this website. This may include the use of cookies which may in turn be saved on your computers hard drive. Users should therefore note they click on sponsored external links at their own risk and this website and its owners cannot be held liable for any damages or implications caused by visiting any external links mentioned.
|
||||
|
||||
## Social Media Platforms ##
|
||||
|
||||
Communication, engagement and actions taken through external social media platforms that this website and its owners participate on are subject to the terms and conditions as well as the privacy policies held with each social media platform respectively.
|
||||
|
||||
Users are advised to use social media platforms wisely and communicate / engage upon them with due care and caution in regard to their own privacy and personal details. This website nor its owners will ever ask for personal or sensitive information through social media platforms and encourage users wishing to discuss sensitive details to contact them through primary communication channels such as email.
|
||||
|
||||
This website may use social sharing buttons which help share web content directly from web pages to the social media platform in question. Users are advised before using such social sharing buttons that they do so at their own discretion and note that the social media platform may track and save your request to share a web page respectively through your social media platform account.
|
||||
|
||||
## Resources & Further Information ##
|
||||
|
||||
* [Data Protection Act 1998](http://www.legislation.gov.uk/ukpga/1998/29/contents)
|
||||
* [Privacy and Electronic Communications Regulations 2003](http://www.legislation.gov.uk/uksi/2003/2426/contents/made)
|
||||
* [Privacy and Electronic Communications Regulations 2003 - The Guide](https://ico.org.uk/for-organisations/guide-to-pecr/)
|
||||
* [Twitter Privacy Policy](http://twitter.com/privacy)
|
||||
* [Facebook Privacy Policy](http://www.facebook.com/about/privacy/)
|
||||
* [Google Privacy Policy](http://www.google.com/privacy.html)
|
||||
* [Sample Website Privacy Policy](http://www.jamieking.co.uk/resources/free_sample_privacy_policy.html)
|
||||
@@ -1,88 +0,0 @@
|
||||
---
|
||||
title: "Remote Setup"
|
||||
description: "Configuring rclone up on a remote / headless machine"
|
||||
date: "2016-01-07"
|
||||
---
|
||||
|
||||
# Configuring rclone on a remote / headless machine #
|
||||
|
||||
Some of the configurations (those involving oauth2) require an
|
||||
Internet connected web browser.
|
||||
|
||||
If you are trying to set rclone up on a remote or headless box with no
|
||||
browser available on it (eg a NAS or a server in a datacenter) then
|
||||
you will need to use an alternative means of configuration. There are
|
||||
two ways of doing it, described below.
|
||||
|
||||
## Configuring using rclone authorize ##
|
||||
|
||||
On the headless box
|
||||
|
||||
```
|
||||
...
|
||||
Remote config
|
||||
Use auto config?
|
||||
* Say Y if not sure
|
||||
* Say N if you are working on a remote or headless machine
|
||||
y) Yes
|
||||
n) No
|
||||
y/n> n
|
||||
For this to work, you will need rclone available on a machine that has a web browser available.
|
||||
Execute the following on your machine:
|
||||
rclone authorize "amazon cloud drive"
|
||||
Then paste the result below:
|
||||
result>
|
||||
```
|
||||
|
||||
Then on your main desktop machine
|
||||
|
||||
```
|
||||
rclone authorize "amazon cloud drive"
|
||||
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
|
||||
Log in and authorize rclone for access
|
||||
Waiting for code...
|
||||
Got code
|
||||
Paste the following into your remote machine --->
|
||||
SECRET_TOKEN
|
||||
<---End paste
|
||||
```
|
||||
|
||||
Then back to the headless box, paste in the code
|
||||
|
||||
```
|
||||
result> SECRET_TOKEN
|
||||
--------------------
|
||||
[acd12]
|
||||
client_id =
|
||||
client_secret =
|
||||
token = SECRET_TOKEN
|
||||
--------------------
|
||||
y) Yes this is OK
|
||||
e) Edit this remote
|
||||
d) Delete this remote
|
||||
y/e/d>
|
||||
```
|
||||
|
||||
## Configuring by copying the config file ##
|
||||
|
||||
Rclone stores all of its config in a single configuration file. This
|
||||
can easily be copied to configure a remote rclone.
|
||||
|
||||
So first configure rclone on your desktop machine
|
||||
|
||||
rclone config
|
||||
|
||||
to set up the config file.
|
||||
|
||||
Find the config file by running `rclone -h` and looking for the help for the `--config` option
|
||||
|
||||
```
|
||||
$ rclone -h
|
||||
[snip]
|
||||
--config="/home/user/.rclone.conf": Config file.
|
||||
[snip]
|
||||
```
|
||||
|
||||
Now transfer it to the remote box (scp, cut paste, ftp, sftp etc) and
|
||||
place it in the correct place (use `rclone -h` on the remote box to
|
||||
find out where).
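
For example, copying the file over with `scp` might look like this
sketch (the user, host and destination path are illustrative - check
where the remote rclone expects the file first)

    scp ~/.rclone.conf user@remote-box:~/.rclone.conf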
|
||||
@@ -4,7 +4,7 @@ description: "Rclone docs for Amazon S3"
|
||||
date: "2014-04-26"
|
||||
---
|
||||
|
||||
<i class="fa fa-amazon"></i> Amazon S3
|
||||
<i class="fa fa-archive"></i> Amazon S3
|
||||
---------------------------------------
|
||||
|
||||
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
|
||||
@@ -19,122 +19,66 @@ This will guide you through an interactive setup process.
|
||||
```
|
||||
No remotes found - make a new one
|
||||
n) New remote
|
||||
s) Set configuration password
|
||||
n/s> n
|
||||
q) Quit config
|
||||
n/q> n
|
||||
name> remote
|
||||
Type of storage to configure.
|
||||
Choose a number from below, or type in your own value
|
||||
1 / Amazon Cloud Drive
|
||||
\ "amazon cloud drive"
|
||||
2 / Amazon S3 (also Dreamhost, Ceph)
|
||||
\ "s3"
|
||||
3 / Backblaze B2
|
||||
\ "b2"
|
||||
4 / Dropbox
|
||||
\ "dropbox"
|
||||
5 / Google Cloud Storage (this is not Google Drive)
|
||||
\ "google cloud storage"
|
||||
6 / Google Drive
|
||||
\ "drive"
|
||||
7 / Hubic
|
||||
\ "hubic"
|
||||
8 / Local Disk
|
||||
\ "local"
|
||||
9 / Microsoft OneDrive
|
||||
\ "onedrive"
|
||||
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
|
||||
\ "swift"
|
||||
11 / Yandex Disk
|
||||
\ "yandex"
|
||||
Storage> 2
|
||||
Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
|
||||
Choose a number from below, or type in your own value
|
||||
1 / Enter AWS credentials in the next step
|
||||
\ "false"
|
||||
2 / Get AWS credentials from the environment (env vars or IAM)
|
||||
\ "true"
|
||||
env_auth> 1
|
||||
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
|
||||
access_key_id> access_key
|
||||
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
|
||||
secret_access_key> secret_key
|
||||
Region to connect to.
|
||||
Choose a number from below, or type in your own value
|
||||
/ The default endpoint - a good choice if you are unsure.
|
||||
1 | US Region, Northern Virginia or Pacific Northwest.
|
||||
| Leave location constraint empty.
|
||||
\ "us-east-1"
|
||||
/ US West (Oregon) Region
|
||||
2 | Needs location constraint us-west-2.
|
||||
\ "us-west-2"
|
||||
/ US West (Northern California) Region
|
||||
3 | Needs location constraint us-west-1.
|
||||
\ "us-west-1"
|
||||
/ EU (Ireland) Region Region
|
||||
4 | Needs location constraint EU or eu-west-1.
|
||||
\ "eu-west-1"
|
||||
/ EU (Frankfurt) Region
|
||||
5 | Needs location constraint eu-central-1.
|
||||
\ "eu-central-1"
|
||||
/ Asia Pacific (Singapore) Region
|
||||
6 | Needs location constraint ap-southeast-1.
|
||||
\ "ap-southeast-1"
|
||||
/ Asia Pacific (Sydney) Region
|
||||
7 | Needs location constraint ap-southeast-2.
|
||||
\ "ap-southeast-2"
|
||||
/ Asia Pacific (Tokyo) Region
|
||||
8 | Needs location constraint ap-northeast-1.
|
||||
\ "ap-northeast-1"
|
||||
/ South America (Sao Paulo) Region
|
||||
9 | Needs location constraint sa-east-1.
|
||||
\ "sa-east-1"
|
||||
/ If using an S3 clone that only understands v2 signatures
|
||||
10 | eg Ceph/Dreamhost
|
||||
| set this and make sure you set the endpoint.
|
||||
\ "other-v2-signature"
|
||||
/ If using an S3 clone that understands v4 signatures set this
|
||||
11 | and make sure you set the endpoint.
|
||||
\ "other-v4-signature"
|
||||
region> 1
|
||||
What type of source is it?
|
||||
Choose a number from below
|
||||
1) swift
|
||||
2) s3
|
||||
3) local
|
||||
4) drive
|
||||
type> 2
|
||||
AWS Access Key ID.
|
||||
access_key_id> accesskey
|
||||
AWS Secret Access Key (password).
|
||||
secret_access_key> secretaccesskey
|
||||
Endpoint for S3 API.
|
||||
Leave blank if using AWS to use the default endpoint for the region.
|
||||
Specify if using an S3 clone such as Ceph.
|
||||
endpoint>
|
||||
Location constraint - must be set to match the Region. Used when creating buckets only.
|
||||
Choose a number from below, or type in your own value
|
||||
1 / Empty for US Region, Northern Virginia or Pacific Northwest.
|
||||
\ ""
|
||||
2 / US West (Oregon) Region.
|
||||
\ "us-west-2"
|
||||
3 / US West (Northern California) Region.
|
||||
\ "us-west-1"
|
||||
4 / EU (Ireland) Region.
|
||||
\ "eu-west-1"
|
||||
5 / EU Region.
|
||||
\ "EU"
|
||||
6 / Asia Pacific (Singapore) Region.
|
||||
\ "ap-southeast-1"
|
||||
7 / Asia Pacific (Sydney) Region.
|
||||
\ "ap-southeast-2"
|
||||
8 / Asia Pacific (Tokyo) Region.
|
||||
\ "ap-northeast-1"
|
||||
9 / South America (Sao Paulo) Region.
|
||||
\ "sa-east-1"
|
||||
* The default endpoint - a good choice if you are unsure.
|
||||
* US Region, Northern Virginia or Pacific Northwest.
|
||||
* Leave location constraint empty.
|
||||
1) https://s3.amazonaws.com/
|
||||
* US Region, Northern Virginia only.
|
||||
* Leave location constraint empty.
|
||||
2) https://s3-external-1.amazonaws.com
|
||||
[snip]
|
||||
* South America (Sao Paulo) Region
|
||||
* Needs location constraint sa-east-1.
|
||||
9) https://s3-sa-east-1.amazonaws.com
|
||||
endpoint> 1
|
||||
Location constraint - must be set to match the Endpoint.
|
||||
Choose a number from below, or type in your own value
|
||||
* Empty for US Region, Northern Virginia or Pacific Northwest.
|
||||
1)
|
||||
* US West (Oregon) Region.
|
||||
2) us-west-2
|
||||
[snip]
|
||||
* South America (Sao Paulo) Region.
|
||||
9) sa-east-1
|
||||
location_constraint> 1
|
||||
Remote config
|
||||
--------------------
|
||||
[remote]
|
||||
env_auth = false
|
||||
access_key_id = access_key
|
||||
secret_access_key = secret_key
|
||||
region = us-east-1
|
||||
endpoint =
|
||||
access_key_id = accesskey
|
||||
secret_access_key = secretaccesskey
|
||||
endpoint = https://s3.amazonaws.com/
|
||||
location_constraint =
|
||||
--------------------
|
||||
y) Yes this is OK
|
||||
e) Edit this remote
|
||||
d) Delete this remote
|
||||
y/e/d> y
|
||||
Current remotes:
|
||||
|
||||
Name Type
|
||||
==== ====
|
||||
remote s3
|
||||
|
||||
e) Edit existing remote
|
||||
n) New remote
|
||||
d) Delete remote
|
||||
q) Quit config
|
||||
e/n/d/q> q
|
||||
```
|
||||
|
||||
This remote is called `remote` and can now be used like this
|
||||
@@ -156,121 +100,8 @@ files in the bucket.
|
||||
|
||||
rclone sync /home/local/directory remote:bucket
|
||||
|
||||
### Modified time ###
|
||||
Modified time
|
||||
-------------
|
||||
|
||||
The modified time is stored as metadata on the object as
|
||||
`X-Amz-Meta-Mtime` as floating point since the epoch accurate to 1 ns.
|
||||
|
||||
### Multipart uploads ###
|
||||
|
||||
rclone supports multipart uploads with S3 which means that it can
|
||||
upload files bigger than 5GB. Note that files uploaded with multipart
|
||||
upload don't have an MD5SUM.
|
||||
|
||||
### Buckets and Regions ###
|
||||
|
||||
With Amazon S3 you can list buckets (`rclone lsd`) using any region,
|
||||
but you can only access the content of a bucket from the region it was
|
||||
created in. If you attempt to access a bucket from the wrong region,
|
||||
you will get an error, `incorrect region, the bucket is not in 'XXX'
|
||||
region`.
|
||||
|
||||
### Authentication ###
|
||||
There are two ways to supply `rclone` with a set of AWS
|
||||
credentials. In order of precedence:
|
||||
|
||||
- Directly in the rclone configuration file (as configured by `rclone config`)
|
||||
- set `access_key_id` and `secret_access_key`
|
||||
- Runtime configuration:
|
||||
- set `env_auth` to `true` in the config file
|
||||
- Exporting the following environment variables before running `rclone`
|
||||
- Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY`
|
||||
- Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`
|
||||
- Running `rclone` on an EC2 instance with an IAM role
|
||||
|
||||
If none of these options actually ends up providing `rclone` with AWS
|
||||
credentials then S3 interaction will be non-authenticated (see below).
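
For example, the runtime configuration route might look like this
sketch (the key values are placeholders) with `env_auth = true` set in
the config file

    # placeholder credentials - substitute your own
    export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
    export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    rclone lsd remote: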
|
||||
|
||||
### Anonymous access to public buckets ###
|
||||
|
||||
If you want to use rclone to access a public bucket, configure with a
|
||||
blank `access_key_id` and `secret_access_key`. Eg
|
||||
|
||||
```
|
||||
No remotes found - make a new one
|
||||
n) New remote
|
||||
q) Quit config
|
||||
n/q> n
|
||||
name> anons3
|
||||
What type of source is it?
|
||||
Choose a number from below
|
||||
1) amazon cloud drive
|
||||
2) b2
|
||||
3) drive
|
||||
4) dropbox
|
||||
5) google cloud storage
|
||||
6) swift
|
||||
7) hubic
|
||||
8) local
|
||||
9) onedrive
|
||||
10) s3
|
||||
11) yandex
|
||||
type> 10
|
||||
Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
|
||||
Choose a number from below, or type in your own value
|
||||
* Enter AWS credentials in the next step
|
||||
1) false
|
||||
* Get AWS credentials from the environment (env vars or IAM)
|
||||
2) true
|
||||
env_auth> 1
|
||||
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
|
||||
access_key_id>
|
||||
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
|
||||
secret_access_key>
|
||||
...
|
||||
```
|
||||
|
||||
Then use it as normal with the name of the public bucket, eg
|
||||
|
||||
rclone lsd anons3:1000genomes
|
||||
|
||||
You will be able to list and copy data but not upload it.
|
||||
|
||||
### Ceph ###
|
||||
|
||||
Ceph is an object storage system which presents an Amazon S3 interface.
|
||||
|
||||
To use rclone with ceph, you need to set the following parameters in
|
||||
the config.
|
||||
|
||||
```
|
||||
access_key_id = Whatever
|
||||
secret_access_key = Whatever
|
||||
endpoint = https://ceph.endpoint.goes.here/
|
||||
region = other-v2-signature
|
||||
```
|
||||
|
||||
Note also that Ceph sometimes puts `/` in the passwords it gives
|
||||
users. If you read the secret access key using the command line tools
|
||||
you will get a JSON blob with the `/` escaped as `\/`. Make sure you
|
||||
only write `/` in the secret access key.
|
||||
|
||||
Eg the dump from Ceph looks something like this (irrelevant keys
|
||||
removed).
|
||||
|
||||
```
|
||||
{
|
||||
"user_id": "xxx",
|
||||
"display_name": "xxxx",
|
||||
"keys": [
|
||||
{
|
||||
"user": "xxx",
|
||||
"access_key": "xxxxxx",
|
||||
"secret_key": "xxxxxx\/xxxx"
|
||||
}
|
||||
],
|
||||
}
|
||||
```
|
||||
|
||||
Because this is a json dump, it is encoding the `/` as `\/`, so if you
|
||||
use the secret key as `xxxxxx/xxxx` it will work fine.
|
||||
|
||||
@@ -25,68 +25,39 @@ This will guide you through an interactive setup process.
|
||||
```
|
||||
No remotes found - make a new one
|
||||
n) New remote
|
||||
s) Set configuration password
|
||||
n/s> n
|
||||
q) Quit config
|
||||
n/q> n
|
||||
name> remote
|
||||
Type of storage to configure.
|
||||
Choose a number from below, or type in your own value
|
||||
1 / Amazon Cloud Drive
|
||||
\ "amazon cloud drive"
|
||||
2 / Amazon S3 (also Dreamhost, Ceph)
|
||||
\ "s3"
|
||||
3 / Backblaze B2
|
||||
\ "b2"
|
||||
4 / Dropbox
|
||||
\ "dropbox"
|
||||
5 / Google Cloud Storage (this is not Google Drive)
|
||||
\ "google cloud storage"
|
||||
6 / Google Drive
|
||||
\ "drive"
|
||||
7 / Hubic
|
||||
\ "hubic"
|
||||
8 / Local Disk
|
||||
\ "local"
|
||||
9 / Microsoft OneDrive
|
||||
\ "onedrive"
|
||||
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
|
||||
\ "swift"
|
||||
11 / Yandex Disk
|
||||
\ "yandex"
|
||||
Storage> 10
|
||||
What type of source is it?
|
||||
Choose a number from below
|
||||
1) swift
|
||||
2) s3
|
||||
3) local
|
||||
4) drive
|
||||
type> 1
|
||||
User name to log in.
|
||||
user> user_name
|
||||
API key or password.
|
||||
key> password_or_api_key
|
||||
Authentication URL for server.
|
||||
Choose a number from below, or type in your own value
|
||||
1 / Rackspace US
|
||||
\ "https://auth.api.rackspacecloud.com/v1.0"
|
||||
2 / Rackspace UK
|
||||
\ "https://lon.auth.api.rackspacecloud.com/v1.0"
|
||||
3 / Rackspace v2
|
||||
\ "https://identity.api.rackspacecloud.com/v2.0"
|
||||
4 / Memset Memstore UK
|
||||
\ "https://auth.storage.memset.com/v1.0"
|
||||
5 / Memset Memstore UK v2
|
||||
\ "https://auth.storage.memset.com/v2.0"
|
||||
6 / OVH
|
||||
\ "https://auth.cloud.ovh.net/v2.0"
|
||||
* Rackspace US
|
||||
1) https://auth.api.rackspacecloud.com/v1.0
|
||||
* Rackspace UK
|
||||
2) https://lon.auth.api.rackspacecloud.com/v1.0
|
||||
* Rackspace v2
|
||||
3) https://identity.api.rackspacecloud.com/v2.0
|
||||
* Memset Memstore UK
|
||||
4) https://auth.storage.memset.com/v1.0
|
||||
* Memset Memstore UK v2
|
||||
5) https://auth.storage.memset.com/v2.0
|
||||
auth> 1
|
||||
Tenant name - optional
|
||||
tenant>
|
||||
Region name - optional
|
||||
region>
|
||||
Storage URL - optional
|
||||
storage_url>
|
||||
Remote config
|
||||
--------------------
|
||||
[remote]
|
||||
user = user_name
|
||||
key = password_or_api_key
|
||||
auth = https://auth.api.rackspacecloud.com/v1.0
|
||||
tenant =
|
||||
region =
|
||||
storage_url =
|
||||
--------------------
|
||||
y) Yes this is OK
|
||||
e) Edit this remote
|
||||
@@ -113,17 +84,8 @@ excess files in the container.
|
||||
|
||||
rclone sync /home/local/directory remote:container
|
||||
|
||||
### Specific options ###
|
||||
|
||||
Here are the command line options specific to this cloud storage
|
||||
system.
|
||||
|
||||
#### --swift-chunk-size=SIZE ####
|
||||
|
||||
Above this size files will be chunked into a _segments container. The
|
||||
default for this is 5GB which is its maximum value.
|
||||
|
||||
### Modified time ###
|
||||
Modified time
|
||||
-------------
|
||||
|
||||
The modified time is stored as metadata on the object as
|
||||
`X-Object-Meta-Mtime` as floating point since the epoch accurate to 1
|
||||
@@ -131,9 +93,3 @@ ns.
|
||||
|
||||
This is a defacto standard (used in the official python-swiftclient
|
||||
amongst others) for storing the modification time for an object.
|
||||
|
||||
### Limitations ###
|
||||
|
||||
The Swift API doesn't return a correct MD5SUM for segmented files
|
||||
(Dynamic or Static Large Objects) so rclone won't check or use the
|
||||
MD5SUM for these.
|
||||
|
||||
@@ -1,113 +0,0 @@
|
||||
---
|
||||
title: "Yandex"
|
||||
description: "Yandex Disk"
|
||||
date: "2015-12-30"
|
||||
---
|
||||
|
||||
<i class="fa fa-space-shuttle"></i> Yandex Disk
|
||||
----------------------------------------
|
||||
|
||||
[Yandex Disk](https://disk.yandex.com) is a cloud storage solution created by [Yandex](http://yandex.com).
|
||||
|
||||
Yandex paths may be as deep as required, eg `remote:directory/subdirectory`.
|
||||
|
||||
Here is an example of making a yandex configuration. First run
|
||||
|
||||
rclone config
|
||||
|
||||
This will guide you through an interactive setup process:
|
||||
|
||||
```
|
||||
No remotes found - make a new one
|
||||
n) New remote
|
||||
s) Set configuration password
|
||||
n/s> n
|
||||
name> remote
|
||||
Type of storage to configure.
|
||||
Choose a number from below, or type in your own value
|
||||
1 / Amazon Cloud Drive
|
||||
\ "amazon cloud drive"
|
||||
2 / Amazon S3 (also Dreamhost, Ceph)
|
||||
\ "s3"
|
||||
3 / Backblaze B2
|
||||
\ "b2"
|
||||
4 / Dropbox
|
||||
\ "dropbox"
|
||||
5 / Google Cloud Storage (this is not Google Drive)
|
||||
\ "google cloud storage"
|
||||
6 / Google Drive
|
||||
\ "drive"
|
||||
7 / Hubic
|
||||
\ "hubic"
|
||||
8 / Local Disk
|
||||
\ "local"
|
||||
9 / Microsoft OneDrive
|
||||
\ "onedrive"
|
||||
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
|
||||
\ "swift"
|
||||
11 / Yandex Disk
|
||||
\ "yandex"
|
||||
Storage> 11
|
||||
Yandex Client Id - leave blank normally.
|
||||
client_id>
|
||||
Yandex Client Secret - leave blank normally.
|
||||
client_secret>
|
||||
Remote config
|
||||
Use auto config?
|
||||
* Say Y if not sure
|
||||
* Say N if you are working on a remote or headless machine
|
||||
y) Yes
|
||||
n) No
|
||||
y/n> y
|
||||
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
|
||||
Log in and authorize rclone for access
|
||||
Waiting for code...
|
||||
Got code
|
||||
--------------------
|
||||
[remote]
|
||||
client_id =
|
||||
client_secret =
|
||||
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","expiry":"2016-12-29T12:27:11.362788025Z"}
|
||||
--------------------
|
||||
y) Yes this is OK
|
||||
e) Edit this remote
|
||||
d) Delete this remote
|
||||
y/e/d> y
|
||||
```
|
||||
|
||||
See the [remote setup docs](/remote_setup/) for how to set it up on a
|
||||
machine with no Internet browser available.
|
||||
|
||||
Note that rclone runs a webserver on your local machine to collect the
|
||||
token as returned from Yandex Disk. This only runs from the moment it
|
||||
opens your browser to the moment you get back the verification code.
|
||||
This is on `http://127.0.0.1:53682/` and this it may require you to
|
||||
unblock it temporarily if you are running a host firewall.
|
||||
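
The mechanism described here is a small local HTTP callback server. A minimal sketch of the idea (not rclone's actual oauthutil implementation; the handler path and messages are just for illustration):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	codeCh := make(chan string, 1)
	srv := &http.Server{Addr: "127.0.0.1:53682"}

	// The provider redirects the browser back to this URL with ?code=...
	http.HandleFunc("/auth", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Got code - you can close this window now")
		codeCh <- r.URL.Query().Get("code")
	})

	// The server only runs while we wait for that single redirect.
	go srv.ListenAndServe()
	code := <-codeCh
	_ = srv.Close()

	fmt.Println("verification code:", code)
}
```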

Once configured you can then use `rclone` like this,

See top level directories

    rclone lsd remote:

Make a new directory

    rclone mkdir remote:directory

List the contents of a directory

    rclone ls remote:directory

Sync `/home/local/directory` to the remote path, deleting any
excess files in the path.

    rclone sync /home/local/directory remote:directory

### Modified time ###

Modified times are supported and are stored accurate to 1 ns in custom
metadata called `rclone_modified` in RFC3339 with nanoseconds format.
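
Go's time package produces values of that shape directly; the sketch below is illustrative only (not the yandex backend's code, and the nine-digit layout is an assumption about the exact formatting):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	modTime := time.Date(2016, 12, 29, 12, 27, 11, 362788025, time.UTC)

	// An RFC3339 layout with a fixed nine-digit fractional second (assumed
	// here; it keeps trailing zeros, unlike time.RFC3339Nano).
	const layout = "2006-01-02T15:04:05.000000000Z07:00"

	// Value stored in the rclone_modified custom metadata.
	value := modTime.Format(layout)
	fmt.Println(value) // 2016-12-29T12:27:11.362788025Z

	// Reading it back keeps the full nanosecond precision.
	back, err := time.Parse(layout, value)
	fmt.Println(back.Equal(modTime), err) // true <nil>
}
```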

### MD5 checksums ###

MD5 checksums are natively supported by Yandex Disk.
@@ -2,7 +2,7 @@
<div class="row">
<hr>
<div class="col-sm-12">
<p>© <a href="http://www.craig-wood.com/nick/">Nick Craig-Wood</a> 2014-2016<br>
<p>© <a href="http://www.craig-wood.com/nick/">Nick Craig-Wood</a> 2014<br>
Website hosted on <a href="http://www.memset.com/cloud/storage/">Memset Memstore™</a>,
uploaded with <a href="http://rclone.org">rclone</a>
and built with <a href="https://github.com/spf13/hugo">Hugo</a></p>

@@ -3,7 +3,6 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="{{ .Description }}">
<meta name="author" content="Nick Craig-Wood">
<link rel="shortcut icon" type="image/png" href="/img/rclone-16x16.png"/>
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),

@@ -7,44 +7,24 @@
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="{{ .Site.BaseURL }}"><i class="fa fa-home"></i> {{ .Site.Title }}</a>
<a class="navbar-brand" href="{{ .Site.BaseUrl }}"><i class="fa fa-home"></i> {{ .Site.Title }}</a>
</div>
<div class="collapse navbar-collapse navbar-ex1-collapse">
<ul class="nav navbar-nav">
<li><a href="/downloads/"><i class="fa fa-cloud-download"></i> Downloads</a></li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown"><b class="caret"></b> Docs</a>
<ul class="dropdown-menu">
<li><a href="/install/"><i class="fa fa-book"></i> Installation</a></li>
<li><a href="/docs/"><i class="fa fa-book"></i> Usage</a></li>
<li><a href="/filtering/"><i class="fa fa-book"></i> Filtering</a></li>
<li><a href="/changelog/"><i class="fa fa-book"></i> Changelog</a></li>
<li><a href="/bugs/"><i class="fa fa-book"></i> Bugs</a></li>
<li><a href="/faq/"><i class="fa fa-book"></i> FAQ</a></li>
<li><a href="/licence/"><i class="fa fa-book"></i> Licence</a></li>
<li><a href="/authors/"><i class="fa fa-book"></i> Authors</a></li>
<li><a href="/donate/"><i class="fa fa-book"></i> Donate</a></li>
<li><a href="/privacy/"><i class="fa fa-book"></i> Privacy Policy</a></li>
</ul>
</li>
<li><a href="/docs/"><i class="fa fa-book"></i> Docs</a></li>
<li><a href="/contact/"><i class="fa fa-envelope"></i> Contact</a></li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown"><b class="caret"></b> Storage Systems</a>
<ul class="dropdown-menu">
<li><a href="/overview/"><i class="fa fa-archive"></i> Overview</a></li>
<li><a href="/drive/"><i class="fa fa-google"></i> Drive</a></li>
<li><a href="/s3/"><i class="fa fa-amazon"></i> S3</a></li>
<li><a href="/s3/"><i class="fa fa-archive"></i> S3</a></li>
<li><a href="/swift/"><i class="fa fa-space-shuttle"></i> Swift</a></li>
<li><a href="/dropbox/"><i class="fa fa-dropbox"></i> Dropbox</a></li>
<li><a href="/googlecloudstorage/"><i class="fa fa-google"></i> Google Cloud Storage</a></li>
<li><a href="/amazonclouddrive/"><i class="fa fa-amazon"></i> Amazon Cloud Drive</a></li>
<li><a href="/onedrive/"><i class="fa fa-windows"></i> Microsoft One Drive</a></li>
<li><a href="/hubic/"><i class="fa fa-space-shuttle"></i> Hubic</a></li>
<li><a href="/b2/"><i class="fa fa-fire"></i> Backblaze B2</a></li>
<li><a href="/local/"><i class="fa fa-file"></i> Local</a></li>
<li><a href="/yandex/"><i class="fa fa-space-shuttle"></i> Yandex Disk</a></li>
</ul>
</li>
<li><a href="/contact/"><i class="fa fa-envelope"></i> Contact</a></li>
</ul>
</div>
</div>

@@ -4,7 +4,7 @@
<div class="container">
<div class="row">
<div class="col-md-9">
{{ range $key, $value := .Site.Taxonomies.groups.about.Pages }}
{{ range $key, $value := .Site.Indexes.groups.about.Pages }}
{{ $value.Content }}
{{ end }}
</div>

@@ -2,7 +2,7 @@
{{ range .Data.Pages }}
<url>
<loc>{{ .Permalink }}</loc>
<lastmod>{{ safeHTML ( .Date.Format "2006-01-02T15:04:05-07:00" ) }}</lastmod>{{ with .Sitemap.ChangeFreq }}
<lastmod>{{ safeHtml ( .Date.Format "2006-01-02T15:04:05-07:00" ) }}</lastmod>{{ with .Sitemap.ChangeFreq }}
<changefreq>{{ . }}</changefreq>{{ end }}{{ if ge .Sitemap.Priority 0.0 }}
<priority>{{ .Sitemap.Priority }}</priority>{{ end }}
</url>
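
The `<lastmod>` value comes from Go's reference-time layout, which Hugo templates use for date formatting; a quick illustration of what `.Date.Format "2006-01-02T15:04:05-07:00"` produces (the date here is just an example):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	d := time.Date(2015, 12, 30, 9, 0, 0, 0, time.UTC)
	// The layout string is the reference time; the output follows its shape.
	fmt.Println(d.Format("2006-01-02T15:04:05-07:00")) // 2015-12-30T09:00:00+00:00
}
```
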
21 docs/static/css/custom.css vendored

@@ -4,23 +4,4 @@ body {

footer {
	margin: 50px 0;
}

table {
	background-color:#e0e0ff
}

tbody td, th {
	border: 1px solid black;
	padding: 3px 7px 2px 7px;
}

thead td, th {
	border: 1px solid black;
	padding: 3px 7px 2px 7px;
	font-weight: bold;
}

tbody tr:nth-child(odd) {
	background-color:#d0d0ff
}
}

542 docs/static/css/font-awesome.css vendored
@@ -1,21 +1,22 @@
|
||||
/*!
|
||||
* Font Awesome 4.4.0 by @davegandy - http://fontawesome.io - @fontawesome
|
||||
* Font Awesome 4.1.0 by @davegandy - http://fontawesome.io - @fontawesome
|
||||
* License - http://fontawesome.io/license (Font: SIL OFL 1.1, CSS: MIT License)
|
||||
*/
|
||||
/* FONT PATH
|
||||
* -------------------------- */
|
||||
@font-face {
|
||||
font-family: 'FontAwesome';
|
||||
src: url('../fonts/fontawesome-webfont.eot?v=4.4.0');
|
||||
src: url('../fonts/fontawesome-webfont.eot?#iefix&v=4.4.0') format('embedded-opentype'), url('../fonts/fontawesome-webfont.woff2?v=4.4.0') format('woff2'), url('../fonts/fontawesome-webfont.woff?v=4.4.0') format('woff'), url('../fonts/fontawesome-webfont.ttf?v=4.4.0') format('truetype'), url('../fonts/fontawesome-webfont.svg?v=4.4.0#fontawesomeregular') format('svg');
|
||||
src: url('../fonts/fontawesome-webfont.eot?v=4.1.0');
|
||||
src: url('../fonts/fontawesome-webfont.eot?#iefix&v=4.1.0') format('embedded-opentype'), url('../fonts/fontawesome-webfont.woff?v=4.1.0') format('woff'), url('../fonts/fontawesome-webfont.ttf?v=4.1.0') format('truetype'), url('../fonts/fontawesome-webfont.svg?v=4.1.0#fontawesomeregular') format('svg');
|
||||
font-weight: normal;
|
||||
font-style: normal;
|
||||
}
|
||||
.fa {
|
||||
display: inline-block;
|
||||
font: normal normal normal 14px/1 FontAwesome;
|
||||
font-size: inherit;
|
||||
text-rendering: auto;
|
||||
font-family: FontAwesome;
|
||||
font-style: normal;
|
||||
font-weight: normal;
|
||||
line-height: 1;
|
||||
-webkit-font-smoothing: antialiased;
|
||||
-moz-osx-font-smoothing: grayscale;
|
||||
}
|
||||
@@ -64,19 +65,6 @@
|
||||
border: solid 0.08em #eeeeee;
|
||||
border-radius: .1em;
|
||||
}
|
||||
.fa-pull-left {
|
||||
float: left;
|
||||
}
|
||||
.fa-pull-right {
|
||||
float: right;
|
||||
}
|
||||
.fa.fa-pull-left {
|
||||
margin-right: .3em;
|
||||
}
|
||||
.fa.fa-pull-right {
|
||||
margin-left: .3em;
|
||||
}
|
||||
/* Deprecated as of 4.4.0 */
|
||||
.pull-right {
|
||||
float: right;
|
||||
}
|
||||
@@ -90,24 +78,36 @@
|
||||
margin-left: .3em;
|
||||
}
|
||||
.fa-spin {
|
||||
-webkit-animation: fa-spin 2s infinite linear;
|
||||
animation: fa-spin 2s infinite linear;
|
||||
-webkit-animation: spin 2s infinite linear;
|
||||
-moz-animation: spin 2s infinite linear;
|
||||
-o-animation: spin 2s infinite linear;
|
||||
animation: spin 2s infinite linear;
|
||||
}
|
||||
.fa-pulse {
|
||||
-webkit-animation: fa-spin 1s infinite steps(8);
|
||||
animation: fa-spin 1s infinite steps(8);
|
||||
@-moz-keyframes spin {
|
||||
0% {
|
||||
-moz-transform: rotate(0deg);
|
||||
}
|
||||
100% {
|
||||
-moz-transform: rotate(359deg);
|
||||
}
|
||||
}
|
||||
@-webkit-keyframes fa-spin {
|
||||
@-webkit-keyframes spin {
|
||||
0% {
|
||||
-webkit-transform: rotate(0deg);
|
||||
transform: rotate(0deg);
|
||||
}
|
||||
100% {
|
||||
-webkit-transform: rotate(359deg);
|
||||
transform: rotate(359deg);
|
||||
}
|
||||
}
|
||||
@keyframes fa-spin {
|
||||
@-o-keyframes spin {
|
||||
0% {
|
||||
-o-transform: rotate(0deg);
|
||||
}
|
||||
100% {
|
||||
-o-transform: rotate(359deg);
|
||||
}
|
||||
}
|
||||
@keyframes spin {
|
||||
0% {
|
||||
-webkit-transform: rotate(0deg);
|
||||
transform: rotate(0deg);
|
||||
@@ -120,40 +120,43 @@
|
||||
.fa-rotate-90 {
|
||||
filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=1);
|
||||
-webkit-transform: rotate(90deg);
|
||||
-moz-transform: rotate(90deg);
|
||||
-ms-transform: rotate(90deg);
|
||||
-o-transform: rotate(90deg);
|
||||
transform: rotate(90deg);
|
||||
}
|
||||
.fa-rotate-180 {
|
||||
filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=2);
|
||||
-webkit-transform: rotate(180deg);
|
||||
-moz-transform: rotate(180deg);
|
||||
-ms-transform: rotate(180deg);
|
||||
-o-transform: rotate(180deg);
|
||||
transform: rotate(180deg);
|
||||
}
|
||||
.fa-rotate-270 {
|
||||
filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=3);
|
||||
-webkit-transform: rotate(270deg);
|
||||
-moz-transform: rotate(270deg);
|
||||
-ms-transform: rotate(270deg);
|
||||
-o-transform: rotate(270deg);
|
||||
transform: rotate(270deg);
|
||||
}
|
||||
.fa-flip-horizontal {
|
||||
filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1);
|
||||
-webkit-transform: scale(-1, 1);
|
||||
-moz-transform: scale(-1, 1);
|
||||
-ms-transform: scale(-1, 1);
|
||||
-o-transform: scale(-1, 1);
|
||||
transform: scale(-1, 1);
|
||||
}
|
||||
.fa-flip-vertical {
|
||||
filter: progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1);
|
||||
-webkit-transform: scale(1, -1);
|
||||
-moz-transform: scale(1, -1);
|
||||
-ms-transform: scale(1, -1);
|
||||
-o-transform: scale(1, -1);
|
||||
transform: scale(1, -1);
|
||||
}
|
||||
:root .fa-rotate-90,
|
||||
:root .fa-rotate-180,
|
||||
:root .fa-rotate-270,
|
||||
:root .fa-flip-horizontal,
|
||||
:root .fa-flip-vertical {
|
||||
filter: none;
|
||||
}
|
||||
.fa-stack {
|
||||
position: relative;
|
||||
display: inline-block;
|
||||
@@ -219,8 +222,6 @@
|
||||
.fa-check:before {
|
||||
content: "\f00c";
|
||||
}
|
||||
.fa-remove:before,
|
||||
.fa-close:before,
|
||||
.fa-times:before {
|
||||
content: "\f00d";
|
||||
}
|
||||
@@ -550,8 +551,7 @@
|
||||
.fa-arrows-h:before {
|
||||
content: "\f07e";
|
||||
}
|
||||
.fa-bar-chart-o:before,
|
||||
.fa-bar-chart:before {
|
||||
.fa-bar-chart-o:before {
|
||||
content: "\f080";
|
||||
}
|
||||
.fa-twitter-square:before {
|
||||
@@ -627,7 +627,6 @@
|
||||
.fa-twitter:before {
|
||||
content: "\f099";
|
||||
}
|
||||
.fa-facebook-f:before,
|
||||
.fa-facebook:before {
|
||||
content: "\f09a";
|
||||
}
|
||||
@@ -640,7 +639,6 @@
|
||||
.fa-credit-card:before {
|
||||
content: "\f09d";
|
||||
}
|
||||
.fa-feed:before,
|
||||
.fa-rss:before {
|
||||
content: "\f09e";
|
||||
}
|
||||
@@ -1278,8 +1276,7 @@
|
||||
.fa-male:before {
|
||||
content: "\f183";
|
||||
}
|
||||
.fa-gittip:before,
|
||||
.fa-gratipay:before {
|
||||
.fa-gittip:before {
|
||||
content: "\f184";
|
||||
}
|
||||
.fa-sun-o:before {
|
||||
@@ -1383,6 +1380,7 @@
|
||||
.fa-digg:before {
|
||||
content: "\f1a6";
|
||||
}
|
||||
.fa-pied-piper-square:before,
|
||||
.fa-pied-piper:before {
|
||||
content: "\f1a7";
|
||||
}
|
||||
@@ -1499,7 +1497,6 @@
|
||||
content: "\f1cc";
|
||||
}
|
||||
.fa-life-bouy:before,
|
||||
.fa-life-buoy:before,
|
||||
.fa-life-saver:before,
|
||||
.fa-support:before,
|
||||
.fa-life-ring:before {
|
||||
@@ -1522,8 +1519,6 @@
|
||||
.fa-git:before {
|
||||
content: "\f1d3";
|
||||
}
|
||||
.fa-y-combinator-square:before,
|
||||
.fa-yc-square:before,
|
||||
.fa-hacker-news:before {
|
||||
content: "\f1d4";
|
||||
}
|
||||
@@ -1569,458 +1564,3 @@
|
||||
.fa-bomb:before {
|
||||
content: "\f1e2";
|
||||
}
|
||||
.fa-soccer-ball-o:before,
|
||||
.fa-futbol-o:before {
|
||||
content: "\f1e3";
|
||||
}
|
||||
.fa-tty:before {
|
||||
content: "\f1e4";
|
||||
}
|
||||
.fa-binoculars:before {
|
||||
content: "\f1e5";
|
||||
}
|
||||
.fa-plug:before {
|
||||
content: "\f1e6";
|
||||
}
|
||||
.fa-slideshare:before {
|
||||
content: "\f1e7";
|
||||
}
|
||||
.fa-twitch:before {
|
||||
content: "\f1e8";
|
||||
}
|
||||
.fa-yelp:before {
|
||||
content: "\f1e9";
|
||||
}
|
||||
.fa-newspaper-o:before {
|
||||
content: "\f1ea";
|
||||
}
|
||||
.fa-wifi:before {
|
||||
content: "\f1eb";
|
||||
}
|
||||
.fa-calculator:before {
|
||||
content: "\f1ec";
|
||||
}
|
||||
.fa-paypal:before {
|
||||
content: "\f1ed";
|
||||
}
|
||||
.fa-google-wallet:before {
|
||||
content: "\f1ee";
|
||||
}
|
||||
.fa-cc-visa:before {
|
||||
content: "\f1f0";
|
||||
}
|
||||
.fa-cc-mastercard:before {
|
||||
content: "\f1f1";
|
||||
}
|
||||
.fa-cc-discover:before {
|
||||
content: "\f1f2";
|
||||
}
|
||||
.fa-cc-amex:before {
|
||||
content: "\f1f3";
|
||||
}
|
||||
.fa-cc-paypal:before {
|
||||
content: "\f1f4";
|
||||
}
|
||||
.fa-cc-stripe:before {
|
||||
content: "\f1f5";
|
||||
}
|
||||
.fa-bell-slash:before {
|
||||
content: "\f1f6";
|
||||
}
|
||||
.fa-bell-slash-o:before {
|
||||
content: "\f1f7";
|
||||
}
|
||||
.fa-trash:before {
|
||||
content: "\f1f8";
|
||||
}
|
||||
.fa-copyright:before {
|
||||
content: "\f1f9";
|
||||
}
|
||||
.fa-at:before {
|
||||
content: "\f1fa";
|
||||
}
|
||||
.fa-eyedropper:before {
|
||||
content: "\f1fb";
|
||||
}
|
||||
.fa-paint-brush:before {
|
||||
content: "\f1fc";
|
||||
}
|
||||
.fa-birthday-cake:before {
|
||||
content: "\f1fd";
|
||||
}
|
||||
.fa-area-chart:before {
|
||||
content: "\f1fe";
|
||||
}
|
||||
.fa-pie-chart:before {
|
||||
content: "\f200";
|
||||
}
|
||||
.fa-line-chart:before {
|
||||
content: "\f201";
|
||||
}
|
||||
.fa-lastfm:before {
|
||||
content: "\f202";
|
||||
}
|
||||
.fa-lastfm-square:before {
|
||||
content: "\f203";
|
||||
}
|
||||
.fa-toggle-off:before {
|
||||
content: "\f204";
|
||||
}
|
||||
.fa-toggle-on:before {
|
||||
content: "\f205";
|
||||
}
|
||||
.fa-bicycle:before {
|
||||
content: "\f206";
|
||||
}
|
||||
.fa-bus:before {
|
||||
content: "\f207";
|
||||
}
|
||||
.fa-ioxhost:before {
|
||||
content: "\f208";
|
||||
}
|
||||
.fa-angellist:before {
|
||||
content: "\f209";
|
||||
}
|
||||
.fa-cc:before {
|
||||
content: "\f20a";
|
||||
}
|
||||
.fa-shekel:before,
|
||||
.fa-sheqel:before,
|
||||
.fa-ils:before {
|
||||
content: "\f20b";
|
||||
}
|
||||
.fa-meanpath:before {
|
||||
content: "\f20c";
|
||||
}
|
||||
.fa-buysellads:before {
|
||||
content: "\f20d";
|
||||
}
|
||||
.fa-connectdevelop:before {
|
||||
content: "\f20e";
|
||||
}
|
||||
.fa-dashcube:before {
|
||||
content: "\f210";
|
||||
}
|
||||
.fa-forumbee:before {
|
||||
content: "\f211";
|
||||
}
|
||||
.fa-leanpub:before {
|
||||
content: "\f212";
|
||||
}
|
||||
.fa-sellsy:before {
|
||||
content: "\f213";
|
||||
}
|
||||
.fa-shirtsinbulk:before {
|
||||
content: "\f214";
|
||||
}
|
||||
.fa-simplybuilt:before {
|
||||
content: "\f215";
|
||||
}
|
||||
.fa-skyatlas:before {
|
||||
content: "\f216";
|
||||
}
|
||||
.fa-cart-plus:before {
|
||||
content: "\f217";
|
||||
}
|
||||
.fa-cart-arrow-down:before {
|
||||
content: "\f218";
|
||||
}
|
||||
.fa-diamond:before {
|
||||
content: "\f219";
|
||||
}
|
||||
.fa-ship:before {
|
||||
content: "\f21a";
|
||||
}
|
||||
.fa-user-secret:before {
|
||||
content: "\f21b";
|
||||
}
|
||||
.fa-motorcycle:before {
|
||||
content: "\f21c";
|
||||
}
|
||||
.fa-street-view:before {
|
||||
content: "\f21d";
|
||||
}
|
||||
.fa-heartbeat:before {
|
||||
content: "\f21e";
|
||||
}
|
||||
.fa-venus:before {
|
||||
content: "\f221";
|
||||
}
|
||||
.fa-mars:before {
|
||||
content: "\f222";
|
||||
}
|
||||
.fa-mercury:before {
|
||||
content: "\f223";
|
||||
}
|
||||
.fa-intersex:before,
|
||||
.fa-transgender:before {
|
||||
content: "\f224";
|
||||
}
|
||||
.fa-transgender-alt:before {
|
||||
content: "\f225";
|
||||
}
|
||||
.fa-venus-double:before {
|
||||
content: "\f226";
|
||||
}
|
||||
.fa-mars-double:before {
|
||||
content: "\f227";
|
||||
}
|
||||
.fa-venus-mars:before {
|
||||
content: "\f228";
|
||||
}
|
||||
.fa-mars-stroke:before {
|
||||
content: "\f229";
|
||||
}
|
||||
.fa-mars-stroke-v:before {
|
||||
content: "\f22a";
|
||||
}
|
||||
.fa-mars-stroke-h:before {
|
||||
content: "\f22b";
|
||||
}
|
||||
.fa-neuter:before {
|
||||
content: "\f22c";
|
||||
}
|
||||
.fa-genderless:before {
|
||||
content: "\f22d";
|
||||
}
|
||||
.fa-facebook-official:before {
|
||||
content: "\f230";
|
||||
}
|
||||
.fa-pinterest-p:before {
|
||||
content: "\f231";
|
||||
}
|
||||
.fa-whatsapp:before {
|
||||
content: "\f232";
|
||||
}
|
||||
.fa-server:before {
|
||||
content: "\f233";
|
||||
}
|
||||
.fa-user-plus:before {
|
||||
content: "\f234";
|
||||
}
|
||||
.fa-user-times:before {
|
||||
content: "\f235";
|
||||
}
|
||||
.fa-hotel:before,
|
||||
.fa-bed:before {
|
||||
content: "\f236";
|
||||
}
|
||||
.fa-viacoin:before {
|
||||
content: "\f237";
|
||||
}
|
||||
.fa-train:before {
|
||||
content: "\f238";
|
||||
}
|
||||
.fa-subway:before {
|
||||
content: "\f239";
|
||||
}
|
||||
.fa-medium:before {
|
||||
content: "\f23a";
|
||||
}
|
||||
.fa-yc:before,
|
||||
.fa-y-combinator:before {
|
||||
content: "\f23b";
|
||||
}
|
||||
.fa-optin-monster:before {
|
||||
content: "\f23c";
|
||||
}
|
||||
.fa-opencart:before {
|
||||
content: "\f23d";
|
||||
}
|
||||
.fa-expeditedssl:before {
|
||||
content: "\f23e";
|
||||
}
|
||||
.fa-battery-4:before,
|
||||
.fa-battery-full:before {
|
||||
content: "\f240";
|
||||
}
|
||||
.fa-battery-3:before,
|
||||
.fa-battery-three-quarters:before {
|
||||
content: "\f241";
|
||||
}
|
||||
.fa-battery-2:before,
|
||||
.fa-battery-half:before {
|
||||
content: "\f242";
|
||||
}
|
||||
.fa-battery-1:before,
|
||||
.fa-battery-quarter:before {
|
||||
content: "\f243";
|
||||
}
|
||||
.fa-battery-0:before,
|
||||
.fa-battery-empty:before {
|
||||
content: "\f244";
|
||||
}
|
||||
.fa-mouse-pointer:before {
|
||||
content: "\f245";
|
||||
}
|
||||
.fa-i-cursor:before {
|
||||
content: "\f246";
|
||||
}
|
||||
.fa-object-group:before {
|
||||
content: "\f247";
|
||||
}
|
||||
.fa-object-ungroup:before {
|
||||
content: "\f248";
|
||||
}
|
||||
.fa-sticky-note:before {
|
||||
content: "\f249";
|
||||
}
|
||||
.fa-sticky-note-o:before {
|
||||
content: "\f24a";
|
||||
}
|
||||
.fa-cc-jcb:before {
|
||||
content: "\f24b";
|
||||
}
|
||||
.fa-cc-diners-club:before {
|
||||
content: "\f24c";
|
||||
}
|
||||
.fa-clone:before {
|
||||
content: "\f24d";
|
||||
}
|
||||
.fa-balance-scale:before {
|
||||
content: "\f24e";
|
||||
}
|
||||
.fa-hourglass-o:before {
|
||||
content: "\f250";
|
||||
}
|
||||
.fa-hourglass-1:before,
|
||||
.fa-hourglass-start:before {
|
||||
content: "\f251";
|
||||
}
|
||||
.fa-hourglass-2:before,
|
||||
.fa-hourglass-half:before {
|
||||
content: "\f252";
|
||||
}
|
||||
.fa-hourglass-3:before,
|
||||
.fa-hourglass-end:before {
|
||||
content: "\f253";
|
||||
}
|
||||
.fa-hourglass:before {
|
||||
content: "\f254";
|
||||
}
|
||||
.fa-hand-grab-o:before,
|
||||
.fa-hand-rock-o:before {
|
||||
content: "\f255";
|
||||
}
|
||||
.fa-hand-stop-o:before,
|
||||
.fa-hand-paper-o:before {
|
||||
content: "\f256";
|
||||
}
|
||||
.fa-hand-scissors-o:before {
|
||||
content: "\f257";
|
||||
}
|
||||
.fa-hand-lizard-o:before {
|
||||
content: "\f258";
|
||||
}
|
||||
.fa-hand-spock-o:before {
|
||||
content: "\f259";
|
||||
}
|
||||
.fa-hand-pointer-o:before {
|
||||
content: "\f25a";
|
||||
}
|
||||
.fa-hand-peace-o:before {
|
||||
content: "\f25b";
|
||||
}
|
||||
.fa-trademark:before {
|
||||
content: "\f25c";
|
||||
}
|
||||
.fa-registered:before {
|
||||
content: "\f25d";
|
||||
}
|
||||
.fa-creative-commons:before {
|
||||
content: "\f25e";
|
||||
}
|
||||
.fa-gg:before {
|
||||
content: "\f260";
|
||||
}
|
||||
.fa-gg-circle:before {
|
||||
content: "\f261";
|
||||
}
|
||||
.fa-tripadvisor:before {
|
||||
content: "\f262";
|
||||
}
|
||||
.fa-odnoklassniki:before {
|
||||
content: "\f263";
|
||||
}
|
||||
.fa-odnoklassniki-square:before {
|
||||
content: "\f264";
|
||||
}
|
||||
.fa-get-pocket:before {
|
||||
content: "\f265";
|
||||
}
|
||||
.fa-wikipedia-w:before {
|
||||
content: "\f266";
|
||||
}
|
||||
.fa-safari:before {
|
||||
content: "\f267";
|
||||
}
|
||||
.fa-chrome:before {
|
||||
content: "\f268";
|
||||
}
|
||||
.fa-firefox:before {
|
||||
content: "\f269";
|
||||
}
|
||||
.fa-opera:before {
|
||||
content: "\f26a";
|
||||
}
|
||||
.fa-internet-explorer:before {
|
||||
content: "\f26b";
|
||||
}
|
||||
.fa-tv:before,
|
||||
.fa-television:before {
|
||||
content: "\f26c";
|
||||
}
|
||||
.fa-contao:before {
|
||||
content: "\f26d";
|
||||
}
|
||||
.fa-500px:before {
|
||||
content: "\f26e";
|
||||
}
|
||||
.fa-amazon:before {
|
||||
content: "\f270";
|
||||
}
|
||||
.fa-calendar-plus-o:before {
|
||||
content: "\f271";
|
||||
}
|
||||
.fa-calendar-minus-o:before {
|
||||
content: "\f272";
|
||||
}
|
||||
.fa-calendar-times-o:before {
|
||||
content: "\f273";
|
||||
}
|
||||
.fa-calendar-check-o:before {
|
||||
content: "\f274";
|
||||
}
|
||||
.fa-industry:before {
|
||||
content: "\f275";
|
||||
}
|
||||
.fa-map-pin:before {
|
||||
content: "\f276";
|
||||
}
|
||||
.fa-map-signs:before {
|
||||
content: "\f277";
|
||||
}
|
||||
.fa-map-o:before {
|
||||
content: "\f278";
|
||||
}
|
||||
.fa-map:before {
|
||||
content: "\f279";
|
||||
}
|
||||
.fa-commenting:before {
|
||||
content: "\f27a";
|
||||
}
|
||||
.fa-commenting-o:before {
|
||||
content: "\f27b";
|
||||
}
|
||||
.fa-houzz:before {
|
||||
content: "\f27c";
|
||||
}
|
||||
.fa-vimeo:before {
|
||||
content: "\f27d";
|
||||
}
|
||||
.fa-black-tie:before {
|
||||
content: "\f27e";
|
||||
}
|
||||
.fa-fonticons:before {
|
||||
content: "\f280";
|
||||
}
BIN docs/static/fonts/FontAwesome.otf vendored: Binary file not shown.
BIN docs/static/fonts/fontawesome-webfont.eot vendored, Normal file → Executable file: Binary file not shown.
1064 docs/static/fonts/fontawesome-webfont.svg vendored, Normal file → Executable file: File diff suppressed because it is too large. (Before size: 348 KiB, after size: 248 KiB)
BIN docs/static/fonts/fontawesome-webfont.ttf vendored, Normal file → Executable file: Binary file not shown.
BIN docs/static/fonts/fontawesome-webfont.woff vendored, Normal file → Executable file: Binary file not shown.
BIN docs/static/fonts/fontawesome-webfont.woff2 vendored: Binary file not shown.
BIN docs/static/img/rclone-16x16.png vendored: Binary file not shown. (Before size: 1019 B)
1164 drive/drive.go: File diff suppressed because it is too large.
@@ -1,59 +0,0 @@
package drive

import (
	"fmt"
	"testing"

	"github.com/stretchr/testify/assert"
	"google.golang.org/api/drive/v2"
)

func TestInternalParseExtensions(t *testing.T) {
	for _, test := range []struct {
		in      string
		want    []string
		wantErr error
	}{
		{"doc", []string{"doc"}, nil},
		{" docx ,XLSX, pptx,svg", []string{"docx", "xlsx", "pptx", "svg"}, nil},
		{"docx,svg,Docx", []string{"docx", "svg"}, nil},
		{"docx,potato,docx", []string{"docx"}, fmt.Errorf(`Couldn't find mime type for extension "potato"`)},
	} {
		f := new(Fs)
		gotErr := f.parseExtensions(test.in)
		assert.Equal(t, test.wantErr, gotErr)
		assert.Equal(t, test.want, f.extensions)
	}

	// Test it is appending
	f := new(Fs)
	assert.Nil(t, f.parseExtensions("docx,svg"))
	assert.Nil(t, f.parseExtensions("docx,svg,xlsx"))
	assert.Equal(t, []string{"docx", "svg", "xlsx"}, f.extensions)

}
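
// The table above pins down the expected behaviour of parseExtensions:
// comma-separated input, whitespace trimmed, names lower-cased and
// de-duplicated, an error for an extension without a known mime type, and
// repeated calls appending to the same list. A stand-alone sketch of such a
// parser (hypothetical, not the drive backend's actual implementation;
// mimeTypes is an assumed lookup table and strings/fmt are assumed imported):
//
//	func parseExtensions(existing []string, in string) ([]string, error) {
//		for _, ext := range strings.Split(in, ",") {
//			ext = strings.ToLower(strings.TrimSpace(ext))
//			if _, ok := mimeTypes[ext]; !ok {
//				return existing, fmt.Errorf("Couldn't find mime type for extension %q", ext)
//			}
//			seen := false
//			for _, old := range existing {
//				if old == ext {
//					seen = true
//					break
//				}
//			}
//			if !seen {
//				existing = append(existing, ext)
//			}
//		}
//		return existing, nil
//	}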

func TestInternalFindExportFormat(t *testing.T) {
	item := new(drive.File)
	item.ExportLinks = map[string]string{
		"application/pdf": "http://pdf",
		"application/rtf": "http://rtf",
	}
	for _, test := range []struct {
		extensions    []string
		wantExtension string
		wantLink      string
	}{
		{[]string{}, "", ""},
		{[]string{"pdf"}, "pdf", "http://pdf"},
		{[]string{"pdf", "rtf", "xls"}, "pdf", "http://pdf"},
		{[]string{"xls", "rtf", "pdf"}, "rtf", "http://rtf"},
		{[]string{"xls", "csv", "svg"}, "", ""},
	} {
		f := new(Fs)
		f.extensions = test.extensions
		gotExtension, gotLink := f.findExportFormat("file", item)
		assert.Equal(t, test.wantExtension, gotExtension)
		assert.Equal(t, test.wantLink, gotLink)
	}
}
@@ -1,56 +0,0 @@
// Test Drive filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package drive_test

import (
	"testing"

	"github.com/ncw/rclone/drive"
	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/fstest/fstests"
)

func init() {
	fstests.NilObject = fs.Object((*drive.Object)(nil))
	fstests.RemoteName = "TestDrive:"
}

// Generic tests for the Fs
func TestInit(t *testing.T)                  { fstests.TestInit(t) }
func TestFsString(t *testing.T)              { fstests.TestFsString(t) }
func TestFsRmdirEmpty(t *testing.T)          { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T)       { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T)               { fstests.TestFsMkdir(t) }
func TestFsListEmpty(t *testing.T)           { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T)        { fstests.TestFsListDirEmpty(t) }
func TestFsNewFsObjectNotFound(t *testing.T) { fstests.TestFsNewFsObjectNotFound(t) }
func TestFsPutFile1(t *testing.T)            { fstests.TestFsPutFile1(t) }
func TestFsPutFile2(t *testing.T)            { fstests.TestFsPutFile2(t) }
func TestFsListDirFile2(t *testing.T)        { fstests.TestFsListDirFile2(t) }
func TestFsListDirRoot(t *testing.T)         { fstests.TestFsListDirRoot(t) }
func TestFsListRoot(t *testing.T)            { fstests.TestFsListRoot(t) }
func TestFsListFile1(t *testing.T)           { fstests.TestFsListFile1(t) }
func TestFsNewFsObject(t *testing.T)         { fstests.TestFsNewFsObject(t) }
func TestFsListFile1and2(t *testing.T)       { fstests.TestFsListFile1and2(t) }
func TestFsCopy(t *testing.T)                { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T)                { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T)             { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T)           { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T)           { fstests.TestFsPrecision(t) }
func TestObjectString(t *testing.T)          { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T)              { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T)          { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T)          { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T)         { fstests.TestObjectModTime(t) }
func TestObjectSetModTime(t *testing.T)      { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T)            { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T)            { fstests.TestObjectOpen(t) }
func TestObjectUpdate(t *testing.T)          { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T)        { fstests.TestObjectStorable(t) }
func TestLimitedFs(t *testing.T)             { fstests.TestLimitedFs(t) }
func TestLimitedFsNotFound(t *testing.T)     { fstests.TestLimitedFsNotFound(t) }
func TestObjectRemove(t *testing.T)          { fstests.TestObjectRemove(t) }
func TestObjectPurge(t *testing.T)           { fstests.TestObjectPurge(t) }
func TestFinalise(t *testing.T)              { fstests.TestFinalise(t) }
247
drive/upload.go
247
drive/upload.go
@@ -1,247 +0,0 @@
|
||||
// Upload for drive
|
||||
//
|
||||
// Docs
|
||||
// Resumable upload: https://developers.google.com/drive/web/manage-uploads#resumable
|
||||
// Best practices: https://developers.google.com/drive/web/manage-uploads#best-practices
|
||||
// Files insert: https://developers.google.com/drive/v2/reference/files/insert
|
||||
// Files update: https://developers.google.com/drive/v2/reference/files/update
|
||||
//
|
||||
// This contains code adapted from google.golang.org/api (C) the GO AUTHORS
|
||||
|
||||
package drive
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"regexp"
|
||||
"strconv"
|
||||
|
||||
"github.com/ncw/rclone/fs"
|
||||
"google.golang.org/api/drive/v2"
|
||||
"google.golang.org/api/googleapi"
|
||||
)
|
||||
|
||||
const (
|
||||
// statusResumeIncomplete is the code returned by the Google uploader when the transfer is not yet complete.
|
||||
statusResumeIncomplete = 308
|
||||
|
||||
// Number of times to try each chunk
|
||||
maxTries = 10
|
||||
)
|
||||
|
||||
// resumableUpload is used by the generated APIs to provide resumable uploads.
|
||||
// It is not used by developers directly.
|
||||
type resumableUpload struct {
|
||||
f *Fs
|
||||
remote string
|
||||
// URI is the resumable resource destination provided by the server after specifying "&uploadType=resumable".
|
||||
URI string
|
||||
// Media is the object being uploaded.
|
||||
Media io.Reader
|
||||
// MediaType defines the media type, e.g. "image/jpeg".
|
||||
MediaType string
|
||||
// ContentLength is the full size of the object being uploaded.
|
||||
ContentLength int64
|
||||
// Return value
|
||||
ret *drive.File
|
||||
}
|
||||
|
||||
// Upload the io.Reader in of size bytes with contentType and info
|
||||
func (f *Fs) Upload(in io.Reader, size int64, contentType string, info *drive.File, remote string) (*drive.File, error) {
|
||||
fileID := info.Id
|
||||
var body io.Reader
|
||||
body, err := googleapi.WithoutDataWrapper.JSONReader(info)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
params := make(url.Values)
|
||||
params.Set("alt", "json")
|
||||
params.Set("uploadType", "resumable")
|
||||
urls := "https://www.googleapis.com/upload/drive/v2/files"
|
||||
method := "POST"
|
||||
if fileID != "" {
|
||||
params.Set("setModifiedDate", "true")
|
||||
urls += "/{fileId}"
|
||||
method = "PUT"
|
||||
}
|
||||
urls += "?" + params.Encode()
|
||||
req, _ := http.NewRequest(method, urls, body)
|
||||
googleapi.Expand(req.URL, map[string]string{
|
||||
"fileId": fileID,
|
||||
})
|
||||
req.Header.Set("Content-Type", "application/json; charset=UTF-8")
|
||||
req.Header.Set("X-Upload-Content-Type", contentType)
|
||||
req.Header.Set("X-Upload-Content-Length", fmt.Sprintf("%v", size))
|
||||
req.Header.Set("User-Agent", fs.UserAgent)
|
||||
var res *http.Response
|
||||
err = f.pacer.Call(func() (bool, error) {
|
||||
res, err = f.client.Do(req)
|
||||
if err == nil {
|
||||
defer googleapi.CloseBody(res)
|
||||
err = googleapi.CheckResponse(res)
|
||||
}
|
||||
return shouldRetry(err)
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
loc := res.Header.Get("Location")
|
||||
rx := &resumableUpload{
|
||||
f: f,
|
||||
remote: remote,
|
||||
URI: loc,
|
||||
Media: in,
|
||||
MediaType: contentType,
|
||||
ContentLength: size,
|
||||
}
|
||||
return rx.Upload()
|
||||
}
|
||||
|
||||
// Make an http.Request for the range passed in
|
||||
func (rx *resumableUpload) makeRequest(start int64, body []byte) *http.Request {
|
||||
reqSize := int64(len(body))
|
||||
req, _ := http.NewRequest("POST", rx.URI, bytes.NewBuffer(body))
|
||||
req.ContentLength = reqSize
|
||||
if reqSize != 0 {
|
||||
req.Header.Set("Content-Range", fmt.Sprintf("bytes %v-%v/%v", start, start+reqSize-1, rx.ContentLength))
|
||||
} else {
|
||||
req.Header.Set("Content-Range", fmt.Sprintf("bytes */%v", rx.ContentLength))
|
||||
}
|
||||
req.Header.Set("Content-Type", rx.MediaType)
|
||||
req.Header.Set("User-Agent", fs.UserAgent)
|
||||
return req
|
||||
}
|
||||
|
||||
// rangeRE matches the transfer status response from the server. $1 is
|
||||
// the last byte index uploaded.
|
||||
var rangeRE = regexp.MustCompile(`^0\-(\d+)$`)
|
||||
|
||||
// Query drive for the amount transferred so far
|
||||
//
|
||||
// If error is nil, then start should be valid
|
||||
func (rx *resumableUpload) transferStatus() (start int64, err error) {
|
||||
req := rx.makeRequest(0, nil)
|
||||
res, err := rx.f.client.Do(req)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
defer googleapi.CloseBody(res)
|
||||
if res.StatusCode == http.StatusCreated || res.StatusCode == http.StatusOK {
|
||||
return rx.ContentLength, nil
|
||||
}
|
||||
if res.StatusCode != statusResumeIncomplete {
|
||||
err = googleapi.CheckResponse(res)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
return 0, fmt.Errorf("unexpected http return code %v", res.StatusCode)
|
||||
}
|
||||
Range := res.Header.Get("Range")
|
||||
if m := rangeRE.FindStringSubmatch(Range); len(m) == 2 {
|
||||
start, err = strconv.ParseInt(m[1], 10, 64)
|
||||
if err == nil {
|
||||
return start, nil
|
||||
}
|
||||
}
|
||||
return 0, fmt.Errorf("unable to parse range %q", Range)
|
||||
}
|
||||
|
||||
// Transfer a chunk - caller must call googleapi.CloseBody(res) if err == nil || res != nil
|
||||
func (rx *resumableUpload) transferChunk(start int64, body []byte) (int, error) {
|
||||
req := rx.makeRequest(start, body)
|
||||
res, err := rx.f.client.Do(req)
|
||||
if err != nil {
|
||||
return 599, err
|
||||
}
|
||||
defer googleapi.CloseBody(res)
|
||||
if res.StatusCode == statusResumeIncomplete {
|
||||
return res.StatusCode, nil
|
||||
}
|
||||
err = googleapi.CheckResponse(res)
|
||||
if err != nil {
|
||||
return res.StatusCode, err
|
||||
}
|
||||
|
||||
// When the entire file upload is complete, the server
|
||||
// responds with an HTTP 201 Created along with any metadata
|
||||
// associated with this resource. If this request had been
|
||||
// updating an existing entity rather than creating a new one,
|
||||
// the HTTP response code for a completed upload would have
|
||||
// been 200 OK.
|
||||
//
|
||||
// So parse the response out of the body. We aren't expecting
|
||||
// any other 2xx codes, so we parse it unconditionaly on
|
||||
// StatusCode
|
||||
if err = json.NewDecoder(res.Body).Decode(&rx.ret); err != nil {
|
||||
return 598, err
|
||||
}
|
||||
|
||||
return res.StatusCode, nil
|
||||
}
|
||||
|
||||
// Upload uploads the chunks from the input
|
||||
// It retries each chunk maxTries times (with a pause of uploadPause between attempts).
|
||||
func (rx *resumableUpload) Upload() (*drive.File, error) {
|
||||
start := int64(0)
|
||||
buf := make([]byte, chunkSize)
|
||||
var StatusCode int
|
||||
for start < rx.ContentLength {
|
||||
reqSize := rx.ContentLength - start
|
||||
if reqSize >= int64(chunkSize) {
|
||||
reqSize = int64(chunkSize)
|
||||
} else {
|
||||
buf = buf[:reqSize]
|
||||
}
|
||||
|
||||
// Read the chunk
|
||||
_, err := io.ReadFull(rx.Media, buf)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Transfer the chunk
|
||||
err = rx.f.pacer.Call(func() (bool, error) {
|
||||
fs.Debug(rx.remote, "Sending chunk %d length %d", start, reqSize)
|
||||
StatusCode, err = rx.transferChunk(start, buf)
|
||||
again, err := shouldRetry(err)
|
||||
if StatusCode == statusResumeIncomplete || StatusCode == http.StatusCreated || StatusCode == http.StatusOK {
|
||||
again = false
|
||||
err = nil
|
||||
}
|
||||
return again, err
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
start += reqSize
|
||||
}
|
||||
// Resume or retry uploads that fail due to connection interruptions or
|
||||
// any 5xx errors, including:
|
||||
//
|
||||
// 500 Internal Server Error
|
||||
// 502 Bad Gateway
|
||||
// 503 Service Unavailable
|
||||
// 504 Gateway Timeout
|
||||
//
|
||||
// Use an exponential backoff strategy if any 5xx server error is
|
||||
// returned when resuming or retrying upload requests. These errors can
|
||||
// occur if a server is getting overloaded. Exponential backoff can help
|
||||
// alleviate these kinds of problems during periods of high volume of
|
||||
// requests or heavy network traffic. Other kinds of requests should not
|
||||
// be handled by exponential backoff but you can still retry a number of
|
||||
// them. When retrying these requests, limit the number of times you
|
||||
// retry them. For example your code could limit to ten retries or less
|
||||
// before reporting an error.
|
||||
//
|
||||
// Handle 404 Not Found errors when doing resumable uploads by starting
|
||||
// the entire upload over from the beginning.
|
||||
if rx.ret == nil {
|
||||
return nil, fs.RetryErrorf("Incomplete upload - retry, last error %d", StatusCode)
|
||||
}
|
||||
return rx.ret, nil
|
||||
}
|
||||
@@ -1,10 +1,22 @@
|
||||
// Package dropbox provides an interface to Dropbox object storage
|
||||
// Dropbox interface
|
||||
package dropbox
|
||||
|
||||
/*
|
||||
Limitations of dropbox
|
||||
|
||||
File system is case insensitive
|
||||
|
||||
The datastore is limited to 100,000 records which therefore is the
|
||||
limit of the number of files that rclone can use on dropbox.
|
||||
|
||||
FIXME only open datastore if we need it?
|
||||
|
||||
FIXME Getting this sometimes
|
||||
Failed to copy: Upload failed: invalid character '<' looking for beginning of value
|
||||
This is a JSON decode error - from Update / UploadByChunk
|
||||
- Caused by 500 error from dropbox
|
||||
- See https://github.com/stacktic/dropbox/issues/1
|
||||
- Possibly confusing dropbox with excess concurrency?
|
||||
*/
|
||||
|
||||
import (
|
||||
@@ -12,56 +24,49 @@ import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"path"
|
||||
"regexp"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/ncw/rclone/fs"
|
||||
"github.com/ncw/rclone/oauthutil"
|
||||
"github.com/spf13/pflag"
|
||||
"github.com/stacktic/dropbox"
|
||||
)
|
||||
|
||||
// Constants
|
||||
const (
|
||||
rcloneAppKey = "5jcck7diasz0rqy"
|
||||
rcloneEncryptedAppSecret = "m8WRxJ6b1Z/Y25fDwJWS"
|
||||
metadataLimit = dropbox.MetadataLimitDefault // max items to fetch at once
|
||||
)
|
||||
|
||||
var (
|
||||
// A regexp matching path names for files Dropbox ignores
|
||||
// See https://www.dropbox.com/en/help/145 - Ignored files
|
||||
ignoredFiles = regexp.MustCompile(`(?i)(^|/)(desktop\.ini|thumbs\.db|\.ds_store|icon\r|\.dropbox|\.dropbox.attr)$`)
|
||||
// Upload chunk size - setting too small makes uploads slow.
|
||||
// Chunks aren't buffered into memory though so can set large.
|
||||
uploadChunkSize = fs.SizeSuffix(128 * 1024 * 1024)
|
||||
maxUploadChunkSize = fs.SizeSuffix(150 * 1024 * 1024)
|
||||
rcloneAppKey = "5jcck7diasz0rqy"
|
||||
rcloneAppSecret = "1n9m04y2zx7bf26"
|
||||
uploadChunkSize = 64 * 1024 // chunk size for upload
|
||||
metadataLimit = dropbox.MetadataLimitDefault // max items to fetch at once
|
||||
datastoreName = "rclone"
|
||||
tableName = "metadata"
|
||||
md5sumField = "md5sum"
|
||||
mtimeField = "mtime"
|
||||
maxCommitRetries = 5
|
||||
RFC3339In = time.RFC3339
|
||||
RFC3339Out = "2006-01-02T15:04:05.000000000Z07:00"
|
||||
)
|
||||
|
||||
// Register with Fs
|
||||
func init() {
|
||||
fs.Register(&fs.RegInfo{
|
||||
Name: "dropbox",
|
||||
Description: "Dropbox",
|
||||
NewFs: NewFs,
|
||||
Config: configHelper,
|
||||
fs.Register(&fs.FsInfo{
|
||||
Name: "dropbox",
|
||||
NewFs: NewFs,
|
||||
Config: Config,
|
||||
Options: []fs.Option{{
|
||||
Name: "app_key",
|
||||
Help: "Dropbox App Key - leave blank normally.",
|
||||
Help: "Dropbox App Key - leave blank to use rclone's.",
|
||||
}, {
|
||||
Name: "app_secret",
|
||||
Help: "Dropbox App Secret - leave blank normally.",
|
||||
Help: "Dropbox App Secret - leave blank to use rclone's.",
|
||||
}},
|
||||
})
|
||||
pflag.VarP(&uploadChunkSize, "dropbox-chunk-size", "", fmt.Sprintf("Upload chunk size. Max %v.", maxUploadChunkSize))
|
||||
}
|
||||
|
||||
// Configuration helper - called after the user has put in the defaults
|
||||
func configHelper(name string) {
|
||||
func Config(name string) {
|
||||
// See if already have a token
|
||||
token := fs.ConfigFile.MustValue(name, "token")
|
||||
if token != "" {
|
||||
@@ -72,10 +77,7 @@ func configHelper(name string) {
|
||||
}
|
||||
|
||||
// Get a dropbox
|
||||
db, err := newDropbox(name)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to create dropbox client: %v", err)
|
||||
}
|
||||
db := newDropbox(name)
|
||||
|
||||
// This method will ask the user to visit an URL and paste the generated code.
|
||||
if err := db.Auth(); err != nil {
|
||||
@@ -93,43 +95,37 @@ func configHelper(name string) {
|
||||
}
|
||||
}
|
||||
|
||||
// Fs represents a remote dropbox server
|
||||
type Fs struct {
|
||||
name string // name of this remote
|
||||
db *dropbox.Dropbox // the connection to the dropbox server
|
||||
root string // the path we are working on
|
||||
slashRoot string // root with "/" prefix, lowercase
|
||||
slashRootSlash string // root with "/" prefix and postfix, lowercase
|
||||
// FsDropbox represents a remote dropbox server
|
||||
type FsDropbox struct {
|
||||
db *dropbox.Dropbox // the connection to the dropbox server
|
||||
root string // the path we are working on
|
||||
slashRoot string // root with "/" prefix
|
||||
slashRootSlash string // root with "/" prefix and postix
|
||||
datastoreManager *dropbox.DatastoreManager
|
||||
datastore *dropbox.Datastore
|
||||
table *dropbox.Table
|
||||
datastoreMutex sync.Mutex // lock this when using the datastore
|
||||
datastoreErr error // pending errors on the datastore
|
||||
}
|
||||
|
||||
// Object describes a dropbox object
|
||||
type Object struct {
|
||||
fs *Fs // what this object is part of
|
||||
remote string // The remote path
|
||||
bytes int64 // size of the object
|
||||
modTime time.Time // time it was last modified
|
||||
hasMetadata bool // metadata is valid
|
||||
// FsObjectDropbox describes a dropbox object
|
||||
type FsObjectDropbox struct {
|
||||
dropbox *FsDropbox // what this object is part of
|
||||
remote string // The remote path
|
||||
md5sum string // md5sum of the object
|
||||
bytes int64 // size of the object
|
||||
modTime time.Time // time it was last modified
|
||||
}
|
||||
|
||||
// ------------------------------------------------------------
|
||||
|
||||
// Name of the remote (as passed into NewFs)
|
||||
func (f *Fs) Name() string {
|
||||
return f.name
|
||||
}
|
||||
|
||||
// Root of the remote (as passed into NewFs)
|
||||
func (f *Fs) Root() string {
|
||||
return f.root
|
||||
}
|
||||
|
||||
// String converts this Fs to a string
|
||||
func (f *Fs) String() string {
|
||||
// String converts this FsDropbox to a string
|
||||
func (f *FsDropbox) String() string {
|
||||
return fmt.Sprintf("Dropbox root '%s'", f.root)
|
||||
}
|
||||
|
||||
// Makes a new dropbox from the config
|
||||
func newDropbox(name string) (*dropbox.Dropbox, error) {
|
||||
func newDropbox(name string) *dropbox.Dropbox {
|
||||
db := dropbox.NewDropbox()
|
||||
|
||||
appKey := fs.ConfigFile.MustValue(name, "app_key")
|
||||
@@ -138,37 +134,34 @@ func newDropbox(name string) (*dropbox.Dropbox, error) {
|
||||
}
|
||||
appSecret := fs.ConfigFile.MustValue(name, "app_secret")
|
||||
if appSecret == "" {
|
||||
appSecret = fs.Reveal(rcloneEncryptedAppSecret)
|
||||
appSecret = rcloneAppSecret
|
||||
}
|
||||
|
||||
err := db.SetAppInfo(appKey, appSecret)
|
||||
return db, err
|
||||
db.SetAppInfo(appKey, appSecret)
|
||||
|
||||
return db
|
||||
}
|
||||
|
||||
// NewFs contstructs an Fs from the path, container:path
|
||||
// NewFs contstructs an FsDropbox from the path, container:path
|
||||
func NewFs(name, root string) (fs.Fs, error) {
|
||||
if uploadChunkSize > maxUploadChunkSize {
|
||||
return nil, fmt.Errorf("Chunk size too big, must be < %v", maxUploadChunkSize)
|
||||
}
|
||||
db, err := newDropbox(name)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
f := &Fs{
|
||||
name: name,
|
||||
db: db,
|
||||
db := newDropbox(name)
|
||||
f := &FsDropbox{
|
||||
db: db,
|
||||
}
|
||||
f.setRoot(root)
|
||||
|
||||
// Read the token from the config file
|
||||
token := fs.ConfigFile.MustValue(name, "token")
|
||||
|
||||
// Set our custom context which enables our custom transport for timeouts etc
|
||||
db.SetContext(oauthutil.Context())
|
||||
|
||||
// Authorize the client
|
||||
db.SetAccessToken(token)
|
||||
|
||||
// Make a db to store rclone metadata in
|
||||
f.datastoreManager = db.NewDatastoreManager()
|
||||
|
||||
// Open the datastore in the background
|
||||
go f.openDataStore()
|
||||
|
||||
// See if the root is actually an object
|
||||
entry, err := f.db.Metadata(f.slashRoot, false, false, "", "", metadataLimit)
|
||||
if err == nil && !entry.IsDir {
|
||||
@@ -187,24 +180,44 @@ func NewFs(name, root string) (fs.Fs, error) {
|
||||
}
|
||||
|
||||
// Sets root in f
|
||||
func (f *Fs) setRoot(root string) {
|
||||
func (f *FsDropbox) setRoot(root string) {
|
||||
f.root = strings.Trim(root, "/")
|
||||
lowerCaseRoot := strings.ToLower(f.root)
|
||||
|
||||
f.slashRoot = "/" + lowerCaseRoot
|
||||
f.slashRoot = "/" + f.root
|
||||
f.slashRootSlash = f.slashRoot
|
||||
if lowerCaseRoot != "" {
|
||||
if f.root != "" {
|
||||
f.slashRootSlash += "/"
|
||||
}
|
||||
}
|
||||
|
||||
// Opens the datastore in f
|
||||
func (f *FsDropbox) openDataStore() {
|
||||
f.datastoreMutex.Lock()
|
||||
defer f.datastoreMutex.Unlock()
|
||||
fs.Debug(f, "Open rclone datastore")
|
||||
// Open the rclone datastore
|
||||
var err error
|
||||
f.datastore, err = f.datastoreManager.OpenDatastore(datastoreName)
|
||||
if err != nil {
|
||||
fs.Log(f, "Failed to open datastore: %v", err)
|
||||
f.datastoreErr = err
|
||||
return
|
||||
}
|
||||
|
||||
// Get the table we are using
|
||||
f.table, err = f.datastore.GetTable(tableName)
|
||||
if err != nil {
|
||||
fs.Log(f, "Failed to open datastore table: %v", err)
|
||||
f.datastoreErr = err
|
||||
return
|
||||
}
|
||||
fs.Debug(f, "Open rclone datastore finished")
|
||||
}
|
||||
|
||||
// Return an FsObject from a path
|
||||
//
|
||||
// May return nil if an error occurred
|
||||
func (f *Fs) newFsObjectWithInfo(remote string, info *dropbox.Entry) fs.Object {
|
||||
o := &Object{
|
||||
fs: f,
|
||||
remote: remote,
|
||||
func (f *FsDropbox) newFsObjectWithInfo(remote string, info *dropbox.Entry) (fs.Object, error) {
|
||||
o := &FsObjectDropbox{
|
||||
dropbox: f,
|
||||
remote: remote,
|
||||
}
|
||||
if info != nil {
|
||||
o.setMetadataFromEntry(info)
|
||||
@@ -212,49 +225,49 @@ func (f *Fs) newFsObjectWithInfo(remote string, info *dropbox.Entry) fs.Object {
|
||||
err := o.readEntryAndSetMetadata()
|
||||
if err != nil {
|
||||
// logged already fs.Debug("Failed to read info: %s", err)
|
||||
return nil
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
return o
|
||||
return o, nil
|
||||
}
|
||||
|
||||
// NewFsObject returns an FsObject from a path
|
||||
// Return an FsObject from a path
|
||||
//
|
||||
// May return nil if an error occurred
|
||||
func (f *Fs) NewFsObject(remote string) fs.Object {
|
||||
return f.newFsObjectWithInfo(remote, nil)
|
||||
func (f *FsDropbox) NewFsObjectWithInfo(remote string, info *dropbox.Entry) fs.Object {
|
||||
fs, _ := f.newFsObjectWithInfo(remote, info)
|
||||
// Errors have already been logged
|
||||
return fs
|
||||
}
|
||||
|
||||
// Strips the root off path and returns it
|
||||
func (f *Fs) stripRoot(path string) *string {
|
||||
lowercase := strings.ToLower(path)
|
||||
// Return an FsObject from a path
|
||||
//
|
||||
// May return nil if an error occurred
|
||||
func (f *FsDropbox) NewFsObject(remote string) fs.Object {
|
||||
return f.NewFsObjectWithInfo(remote, nil)
|
||||
}
|
||||
|
||||
if !strings.HasPrefix(lowercase, f.slashRootSlash) {
|
||||
fs.Stats.Error()
|
||||
fs.ErrorLog(f, "Path '%s' is not under root '%s'", path, f.slashRootSlash)
|
||||
return nil
|
||||
// Strips the root off entry and returns it
|
||||
func (f *FsDropbox) stripRoot(entry *dropbox.Entry) string {
|
||||
path := entry.Path
|
||||
if strings.HasPrefix(path, f.slashRootSlash) {
|
||||
path = path[len(f.slashRootSlash):]
|
||||
}
|
||||
|
||||
stripped := path[len(f.slashRootSlash):]
|
||||
return &stripped
|
||||
return path
|
||||
}
|
||||
|
||||
// Walk the root returning a channel of FsObjects
|
||||
func (f *Fs) list(out fs.ObjectsChan) {
|
||||
// Track path component case, it could be different for entries coming from DropBox API
|
||||
// See https://www.dropboxforum.com/hc/communities/public/questions/201665409-Wrong-character-case-of-folder-name-when-calling-listFolder-using-Sync-API?locale=en-us
|
||||
// and https://github.com/ncw/rclone/issues/53
|
||||
nameTree := newNameTree()
|
||||
func (f *FsDropbox) list(out fs.ObjectsChan) {
|
||||
cursor := ""
|
||||
for {
|
||||
deltaPage, err := f.db.Delta(cursor, f.slashRoot)
|
||||
if err != nil {
|
||||
fs.Stats.Error()
|
||||
fs.ErrorLog(f, "Couldn't list: %s", err)
|
||||
fs.Log(f, "Couldn't list: %s", err)
|
||||
break
|
||||
} else {
|
||||
if deltaPage.Reset && cursor != "" {
|
||||
fs.ErrorLog(f, "Unexpected reset during listing - try again")
|
||||
fs.Log(f, "Unexpected reset during listing - try again")
|
||||
fs.Stats.Error()
|
||||
break
|
||||
}
|
||||
@@ -264,62 +277,33 @@ func (f *Fs) list(out fs.ObjectsChan) {
|
||||
entry := deltaEntry.Entry
|
||||
if entry == nil {
|
||||
// This notifies of a deleted object
|
||||
fs.Debug(f, "Deleting metadata for %q", deltaEntry.Path)
|
||||
key := metadataKey(deltaEntry.Path) // Path is lowercased
|
||||
err := f.deleteMetadata(key)
|
||||
if err != nil {
|
||||
fs.Debug(f, "Failed to delete metadata for %q", deltaEntry.Path)
|
||||
// Don't accumulate Error here
|
||||
}
|
||||
|
||||
} else {
|
||||
if len(entry.Path) <= 1 || entry.Path[0] != '/' {
|
||||
fs.Stats.Error()
|
||||
fs.ErrorLog(f, "dropbox API inconsistency: a path should always start with a slash and be at least 2 characters: %s", entry.Path)
|
||||
continue
|
||||
}
|
||||
|
||||
lastSlashIndex := strings.LastIndex(entry.Path, "/")
|
||||
|
||||
var parentPath string
|
||||
if lastSlashIndex == 0 {
|
||||
parentPath = ""
|
||||
} else {
|
||||
parentPath = entry.Path[1:lastSlashIndex]
|
||||
}
|
||||
lastComponent := entry.Path[lastSlashIndex+1:]
|
||||
|
||||
if entry.IsDir {
|
||||
nameTree.PutCaseCorrectDirectoryName(parentPath, lastComponent)
|
||||
// ignore directories
|
||||
} else {
|
||||
parentPathCorrectCase := nameTree.GetPathWithCorrectCase(parentPath)
|
||||
if parentPathCorrectCase != nil {
|
||||
path := f.stripRoot(*parentPathCorrectCase + "/" + lastComponent)
|
||||
if path == nil {
|
||||
// an error occurred and logged by stripRoot
|
||||
continue
|
||||
}
|
||||
|
||||
out <- f.newFsObjectWithInfo(*path, entry)
|
||||
} else {
|
||||
nameTree.PutFile(parentPath, lastComponent, entry)
|
||||
}
|
||||
path := f.stripRoot(entry)
|
||||
out <- f.NewFsObjectWithInfo(path, entry)
|
||||
}
|
||||
}
|
||||
}
|
||||
if !deltaPage.HasMore {
|
||||
break
|
||||
}
|
||||
cursor = deltaPage.Cursor.Cursor
|
||||
cursor = deltaPage.Cursor
|
||||
}
|
||||
}
|
||||
|
||||
walkFunc := func(caseCorrectFilePath string, entry *dropbox.Entry) {
|
||||
path := f.stripRoot("/" + caseCorrectFilePath)
|
||||
if path == nil {
|
||||
// an error occurred and logged by stripRoot
|
||||
return
|
||||
}
|
||||
|
||||
out <- f.newFsObjectWithInfo(*path, entry)
|
||||
}
|
||||
nameTree.WalkFiles(f.root, walkFunc)
|
||||
}
|
||||
|
||||
// List walks the path returning a channel of FsObjects
|
||||
func (f *Fs) List() fs.ObjectsChan {
|
||||
// Walk the path returning a channel of FsObjects
|
||||
func (f *FsDropbox) List() fs.ObjectsChan {
|
||||
out := make(fs.ObjectsChan, fs.Config.Checkers)
|
||||
go func() {
|
||||
defer close(out)
|
||||
@@ -328,29 +312,23 @@ func (f *Fs) List() fs.ObjectsChan {
|
||||
return out
|
||||
}
|
||||
|
||||
// ListDir walks the path returning a channel of FsObjects
|
||||
func (f *Fs) ListDir() fs.DirChan {
|
||||
// Walk the path returning a channel of FsObjects
|
||||
func (f *FsDropbox) ListDir() fs.DirChan {
|
||||
out := make(fs.DirChan, fs.Config.Checkers)
|
||||
go func() {
|
||||
defer close(out)
|
||||
entry, err := f.db.Metadata(f.root, true, false, "", "", metadataLimit)
|
||||
if err != nil {
|
||||
fs.Stats.Error()
|
||||
fs.ErrorLog(f, "Couldn't list directories in root: %s", err)
|
||||
fs.Log(f, "Couldn't list directories in root: %s", err)
|
||||
} else {
|
||||
for i := range entry.Contents {
|
||||
entry := &entry.Contents[i]
|
||||
if entry.IsDir {
|
||||
name := f.stripRoot(entry.Path)
|
||||
if name == nil {
|
||||
// an error occurred and was logged by stripRoot
|
||||
continue
|
||||
}
|
||||
|
||||
out <- &fs.Dir{
|
||||
Name: *name,
|
||||
Name: f.stripRoot(entry),
|
||||
When: time.Time(entry.ClientMtime),
|
||||
Bytes: entry.Bytes,
|
||||
Bytes: int64(entry.Bytes),
|
||||
Count: -1,
|
||||
}
|
||||
}
|
||||
@@ -380,17 +358,14 @@ func (rc *readCloser) Close() error {
|
||||
// Copy the reader in to the new object which is returned
|
||||
//
|
||||
// The new object may have been created if an error is returned
|
||||
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo) (fs.Object, error) {
|
||||
// Temporary Object under construction
|
||||
o := &Object{
|
||||
fs: f,
|
||||
remote: src.Remote(),
|
||||
}
|
||||
return o, o.Update(in, src)
|
||||
func (f *FsDropbox) Put(in io.Reader, remote string, modTime time.Time, size int64) (fs.Object, error) {
|
||||
// Temporary FsObject under construction
|
||||
o := &FsObjectDropbox{dropbox: f, remote: remote}
|
||||
return o, o.Update(in, modTime, size)
|
||||
}
|
||||
|
||||
// Mkdir creates the container if it doesn't exist
|
||||
func (f *Fs) Mkdir() error {
|
||||
func (f *FsDropbox) Mkdir() error {
|
||||
entry, err := f.db.Metadata(f.slashRoot, false, false, "", "", metadataLimit)
|
||||
if err == nil {
|
||||
if entry.IsDir {
|
||||
@@ -405,7 +380,7 @@ func (f *Fs) Mkdir() error {
|
||||
// Rmdir deletes the container
|
||||
//
|
||||
// Returns an error if it isn't empty
|
||||
func (f *Fs) Rmdir() error {
|
||||
func (f *FsDropbox) Rmdir() error {
|
||||
entry, err := f.db.Metadata(f.slashRoot, true, false, "", "", 16)
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -416,41 +391,9 @@ func (f *Fs) Rmdir() error {
|
||||
return f.Purge()
|
||||
}
|
||||
|
||||
// Precision returns the precision
|
||||
func (f *Fs) Precision() time.Duration {
|
||||
return fs.ModTimeNotSupported
|
||||
}
|
||||
|
||||
// Copy src to this remote using server side copy operations.
|
||||
//
|
||||
// This is stored with the remote path given
|
||||
//
|
||||
// It returns the destination Object and a possible error
|
||||
//
|
||||
// Will only be called if src.Fs().Name() == f.Name()
|
||||
//
|
||||
// If it isn't possible then return fs.ErrorCantCopy
|
||||
func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
|
||||
srcObj, ok := src.(*Object)
|
||||
if !ok {
|
||||
fs.Debug(src, "Can't copy - not same remote type")
|
||||
return nil, fs.ErrorCantCopy
|
||||
}
|
||||
|
||||
// Temporary Object under construction
|
||||
dstObj := &Object{
|
||||
fs: f,
|
||||
remote: remote,
|
||||
}
|
||||
|
||||
srcPath := srcObj.remotePath()
|
||||
dstPath := dstObj.remotePath()
|
||||
entry, err := f.db.Copy(srcPath, dstPath, false)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Copy failed: %s", err)
|
||||
}
|
||||
dstObj.setMetadataFromEntry(entry)
|
||||
return dstObj, nil
|
||||
// Return the precision
|
||||
func (fs *FsDropbox) Precision() time.Duration {
|
||||
return time.Nanosecond
|
||||
}
|
||||
|
||||
// Purge deletes all the files and the container
|
||||
@@ -458,119 +401,148 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
|
||||
// Optional interface: Only implement this if you have a way of
|
||||
// deleting all the files quicker than just running Remove() on the
|
||||
// result of List()
|
||||
func (f *Fs) Purge() error {
|
||||
func (f *FsDropbox) Purge() error {
|
||||
// Delete metadata first
|
||||
var wg sync.WaitGroup
|
||||
to_be_deleted := f.List()
|
||||
wg.Add(fs.Config.Transfers)
|
||||
for i := 0; i < fs.Config.Transfers; i++ {
|
||||
go func() {
|
||||
defer wg.Done()
|
||||
for dst := range to_be_deleted {
|
||||
o := dst.(*FsObjectDropbox)
|
||||
o.deleteMetadata()
|
||||
}
|
||||
}()
|
||||
}
|
||||
wg.Wait()
|
||||
|
||||
// Let dropbox delete the filesystem tree
|
||||
_, err := f.db.Delete(f.slashRoot)
|
||||
return err
|
||||
}
|
||||
|
||||
// Move src to this remote using server side move operations.
|
||||
// Tries the transaction in fn then calls commit, repeating until retry limit
|
||||
//
|
||||
// This is stored with the remote path given
|
||||
//
|
||||
// It returns the destination Object and a possible error
|
||||
//
|
||||
// Will only be called if src.Fs().Name() == f.Name()
|
||||
//
|
||||
// If it isn't possible then return fs.ErrorCantMove
|
||||
func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
|
||||
srcObj, ok := src.(*Object)
|
||||
if !ok {
|
||||
fs.Debug(src, "Can't move - not same remote type")
|
||||
return nil, fs.ErrorCantMove
|
||||
// Holds datastore mutex while in progress
|
||||
func (f *FsDropbox) transaction(fn func() error) error {
|
||||
f.datastoreMutex.Lock()
|
||||
defer f.datastoreMutex.Unlock()
|
||||
if f.datastoreErr != nil {
|
||||
return f.datastoreErr
|
||||
}
|
||||
var err error
|
||||
for i := 1; i <= maxCommitRetries; i++ {
|
||||
err = fn()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Temporary Object under construction
|
||||
dstObj := &Object{
|
||||
fs: f,
|
||||
remote: remote,
|
||||
err = f.datastore.Commit()
|
||||
if err == nil {
|
||||
break
|
||||
}
|
||||
fs.Debug(f, "Retrying transaction %d/%d", i, maxCommitRetries)
|
||||
}
|
||||
|
||||
srcPath := srcObj.remotePath()
|
||||
dstPath := dstObj.remotePath()
|
||||
entry, err := f.db.Move(srcPath, dstPath)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Move failed: %s", err)
|
||||
}
|
||||
dstObj.setMetadataFromEntry(entry)
|
||||
return dstObj, nil
|
||||
}
|
||||
|
||||
// DirMove moves src to this remote using server side move operations.
|
||||
//
|
||||
// Will only be called if src.Fs().Name() == f.Name()
|
||||
//
|
||||
// If it isn't possible then return fs.ErrorCantDirMove
|
||||
//
|
||||
// If destination exists then return fs.ErrorDirExists
|
||||
func (f *Fs) DirMove(src fs.Fs) error {
|
||||
srcFs, ok := src.(*Fs)
|
||||
if !ok {
|
||||
fs.Debug(srcFs, "Can't move directory - not same remote type")
|
||||
return fs.ErrorCantDirMove
|
||||
}
|
||||
|
||||
// Check if destination exists
|
||||
entry, err := f.db.Metadata(f.slashRoot, false, false, "", "", metadataLimit)
|
||||
if err == nil && !entry.IsDeleted {
|
||||
return fs.ErrorDirExists
|
||||
}
|
||||
|
||||
// Do the move
|
||||
_, err = f.db.Move(srcFs.slashRoot, f.slashRoot)
|
||||
if err != nil {
|
||||
return fmt.Errorf("MoveDir failed: %v", err)
|
||||
return fmt.Errorf("Failed to commit metadata changes: %s", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Hashes returns the supported hash sets.
|
||||
func (f *Fs) Hashes() fs.HashSet {
|
||||
return fs.HashSet(fs.HashNone)
|
||||
// Deletes the metadata associated with this key
|
||||
func (f *FsDropbox) deleteMetadata(key string) error {
|
||||
return f.transaction(func() error {
|
||||
record, err := f.table.Get(key)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Couldn't get record: %s", err)
|
||||
}
|
||||
if record == nil {
|
||||
return nil
|
||||
}
|
||||
record.DeleteRecord()
|
||||
return nil
|
||||
})
|
||||
}
|
||||
|
||||
// Reads the record attached to key
|
||||
//
|
||||
// Holds datastore mutex while in progress
|
||||
func (f *FsDropbox) readRecord(key string) (*dropbox.Record, error) {
|
||||
f.datastoreMutex.Lock()
|
||||
defer f.datastoreMutex.Unlock()
|
||||
if f.datastoreErr != nil {
|
||||
return nil, f.datastoreErr
|
||||
}
|
||||
return f.table.Get(key)
|
||||
}
|
||||
|
||||
// ------------------------------------------------------------
|
||||
|
||||
// Fs returns the parent Fs
|
||||
func (o *Object) Fs() fs.Info {
|
||||
return o.fs
|
||||
// Return the parent Fs
|
||||
func (o *FsObjectDropbox) Fs() fs.Fs {
|
||||
return o.dropbox
|
||||
}
|
||||
|
||||
// Return a string version
|
||||
func (o *Object) String() string {
|
||||
func (o *FsObjectDropbox) String() string {
|
||||
if o == nil {
|
||||
return "<nil>"
|
||||
}
|
||||
return o.remote
|
||||
}
|
||||
|
||||
// Remote returns the remote path
|
||||
func (o *Object) Remote() string {
|
||||
// Return the remote path
|
||||
func (o *FsObjectDropbox) Remote() string {
|
||||
return o.remote
|
||||
}
|
||||
|
||||
// Hash is unsupported on Dropbox
|
||||
func (o *Object) Hash(t fs.HashType) (string, error) {
|
||||
return "", fs.ErrHashUnsupported
|
||||
// Md5sum returns the Md5sum of an object returning a lowercase hex string
|
||||
//
|
||||
// FIXME has to download the file!
|
||||
func (o *FsObjectDropbox) Md5sum() (string, error) {
|
||||
if o.md5sum != "" {
|
||||
return o.md5sum, nil
|
||||
}
|
||||
err := o.readMetaData()
|
||||
if err != nil {
|
||||
fs.Log(o, "Failed to read metadata: %s", err)
|
||||
return "", fmt.Errorf("Failed to read metadata: %s", err)
|
||||
|
||||
}
|
||||
|
||||
// For pre-existing files which have no md5sum we could read it and set it?
|
||||
|
||||
// in, err := o.Open()
|
||||
// if err != nil {
|
||||
// return "", err
|
||||
// }
|
||||
// defer in.Close()
|
||||
// hash := md5.New()
|
||||
// _, err = io.Copy(hash, in)
|
||||
// if err != nil {
|
||||
// return "", err
|
||||
// }
|
||||
// o.md5sum = fmt.Sprintf("%x", hash.Sum(nil))
|
||||
return o.md5sum, nil
|
||||
}
|
||||
|
||||
// Size returns the size of an object in bytes
|
||||
func (o *Object) Size() int64 {
|
||||
func (o *FsObjectDropbox) Size() int64 {
|
||||
return o.bytes
|
||||
}
|
||||
|
||||
// setMetadataFromEntry sets the fs data from a dropbox.Entry
|
||||
//
|
||||
// This isn't a complete set of metadata and has an inaccurate date
|
||||
func (o *Object) setMetadataFromEntry(info *dropbox.Entry) {
|
||||
o.bytes = info.Bytes
|
||||
func (o *FsObjectDropbox) setMetadataFromEntry(info *dropbox.Entry) {
|
||||
o.bytes = int64(info.Bytes)
|
||||
o.modTime = time.Time(info.ClientMtime)
|
||||
o.hasMetadata = true
|
||||
}
|
||||
|
||||
// Reads the entry from dropbox
|
||||
func (o *Object) readEntry() (*dropbox.Entry, error) {
|
||||
entry, err := o.fs.db.Metadata(o.remotePath(), false, false, "", "", metadataLimit)
|
||||
func (o *FsObjectDropbox) readEntry() (*dropbox.Entry, error) {
|
||||
entry, err := o.dropbox.db.Metadata(o.remotePath(), false, false, "", "", metadataLimit)
|
||||
if err != nil {
|
||||
fs.Debug(o, "Error reading file: %s", err)
|
||||
return nil, fmt.Errorf("Error reading file: %s", err)
|
||||
@@ -579,7 +551,7 @@ func (o *Object) readEntry() (*dropbox.Entry, error) {
|
||||
}
|
||||
|
||||
// Read entry if not set and set metadata from it
|
||||
func (o *Object) readEntryAndSetMetadata() error {
|
||||
func (o *FsObjectDropbox) readEntryAndSetMetadata() error {
|
||||
// Last resort set time from client
|
||||
if !o.modTime.IsZero() {
|
||||
return nil
|
||||
@@ -593,38 +565,83 @@ func (o *Object) readEntryAndSetMetadata() error {
|
||||
}
|
||||
|
||||
// Returns the remote path for the object
|
||||
func (o *Object) remotePath() string {
|
||||
return o.fs.slashRootSlash + o.remote
|
||||
func (o *FsObjectDropbox) remotePath() string {
|
||||
return o.dropbox.slashRootSlash + o.remote
|
||||
}
|
||||
|
||||
// Returns the key for the metadata database for a given path
|
||||
func metadataKey(path string) string {
|
||||
// NB File system is case insensitive
|
||||
path = strings.ToLower(path)
|
||||
hash := md5.New()
|
||||
_, _ = hash.Write([]byte(path))
|
||||
return fmt.Sprintf("%x", hash.Sum(nil))
|
||||
return fmt.Sprintf("%x", md5.Sum([]byte(path)))
|
||||
}
|
||||
|
||||
// Returns the key for the metadata database
|
||||
func (o *Object) metadataKey() string {
|
||||
func (o *FsObjectDropbox) metadataKey() string {
|
||||
return metadataKey(o.remotePath())
|
||||
}
|
||||
|
||||
// readMetaData gets the info if it hasn't already been fetched
|
||||
func (o *Object) readMetaData() (err error) {
|
||||
if o.hasMetadata {
|
||||
func (o *FsObjectDropbox) readMetaData() (err error) {
|
||||
if o.md5sum != "" {
|
||||
return nil
|
||||
}
|
||||
|
||||
// fs.Debug(o, "Reading metadata from datastore")
|
||||
record, err := o.dropbox.readRecord(o.metadataKey())
|
||||
if err != nil {
|
||||
fs.Debug(o, "Couldn't read metadata: %s", err)
|
||||
record = nil
|
||||
}
|
||||
|
||||
if record != nil {
|
||||
// Read md5sum
|
||||
md5sumInterface, ok, err := record.Get(md5sumField)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if !ok {
|
||||
fs.Debug(o, "Couldn't find md5sum in record")
|
||||
} else {
|
||||
md5sum, ok := md5sumInterface.(string)
|
||||
if !ok {
|
||||
fs.Debug(o, "md5sum not a string")
|
||||
} else {
|
||||
o.md5sum = md5sum
|
||||
}
|
||||
}
|
||||
|
||||
// read mtime
|
||||
mtimeInterface, ok, err := record.Get(mtimeField)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if !ok {
|
||||
fs.Debug(o, "Couldn't find mtime in record")
|
||||
} else {
|
||||
mtime, ok := mtimeInterface.(string)
|
||||
if !ok {
|
||||
fs.Debug(o, "mtime not a string")
|
||||
} else {
|
||||
modTime, err := time.Parse(RFC3339In, mtime)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
o.modTime = modTime
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Last resort
|
||||
return o.readEntryAndSetMetadata()
|
||||
o.readEntryAndSetMetadata()
|
||||
return nil
|
||||
}
|
||||
|
||||
// ModTime returns the modification time of the object
|
||||
//
|
||||
// It attempts to read the object's mtime and if that isn't present the
|
||||
// LastModified returned in the http headers
|
||||
func (o *Object) ModTime() time.Time {
|
||||
func (o *FsObjectDropbox) ModTime() time.Time {
|
||||
err := o.readMetaData()
|
||||
if err != nil {
|
||||
fs.Log(o, "Failed to read metadata: %s", err)
|
||||
@@ -633,22 +650,67 @@ func (o *Object) ModTime() time.Time {
|
||||
return o.modTime
|
||||
}
|
||||
|
||||
// SetModTime sets the modification time of the local fs object
|
||||
//
|
||||
// Commits the datastore
|
||||
func (o *Object) SetModTime(modTime time.Time) error {
|
||||
// FIXME not implemented
|
||||
return fs.ErrorCantSetModTime
|
||||
// Sets the modification time of the local fs object into the record
|
||||
// FIXME if we don't set md5sum what will that do?
|
||||
func (o *FsObjectDropbox) setModTimeAndMd5sum(modTime time.Time, md5sum string) error {
|
||||
key := o.metadataKey()
|
||||
// fs.Debug(o, "Writing metadata to datastore")
|
||||
return o.dropbox.transaction(func() error {
|
||||
record, err := o.dropbox.table.GetOrInsert(key)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Couldn't read record: %s", err)
|
||||
}
|
||||
|
||||
if md5sum != "" {
|
||||
err = record.Set(md5sumField, md5sum)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Couldn't set md5sum record: %s", err)
|
||||
}
|
||||
}
|
||||
|
||||
if !modTime.IsZero() {
|
||||
mtime := modTime.Format(RFC3339Out)
|
||||
err := record.Set(mtimeField, mtime)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Couldn't set mtime record: %s", err)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
})
|
||||
}
|
||||
|
||||
// Storable returns whether this object is storable
|
||||
func (o *Object) Storable() bool {
|
||||
// Deletes the metadata associated with this file
|
||||
//
|
||||
// It logs any errors
|
||||
func (o *FsObjectDropbox) deleteMetadata() {
|
||||
fs.Debug(o, "Deleting metadata from datastore")
|
||||
err := o.dropbox.deleteMetadata(o.metadataKey())
|
||||
if err != nil {
|
||||
fs.Log(o, "Error deleting metadata: %v", err)
|
||||
fs.Stats.Error()
|
||||
}
|
||||
}
|
||||
|
||||
// Sets the modification time of the local fs object
|
||||
//
|
||||
// Commits the datastore
|
||||
func (o *FsObjectDropbox) SetModTime(modTime time.Time) {
|
||||
err := o.setModTimeAndMd5sum(modTime, "")
|
||||
if err != nil {
|
||||
fs.Stats.Error()
|
||||
fs.Log(o, err.Error())
|
||||
}
|
||||
}
|
||||
|
||||
// Is this object storable
|
||||
func (o *FsObjectDropbox) Storable() bool {
|
||||
return true
|
||||
}
|
||||
|
||||
// Open an object for read
|
||||
func (o *Object) Open() (in io.ReadCloser, err error) {
|
||||
in, _, err = o.fs.db.Download(o.remotePath(), "", 0)
|
||||
func (o *FsObjectDropbox) Open() (in io.ReadCloser, err error) {
|
||||
in, _, err = o.dropbox.db.Download(o.remotePath(), "", 0)
|
||||
return
|
||||
}
|
||||
|
||||
@@ -657,32 +719,28 @@ func (o *Object) Open() (in io.ReadCloser, err error) {
|
||||
// Copy the reader into the object updating modTime and size
|
||||
//
|
||||
// The new object may have been created if an error is returned
|
||||
func (o *Object) Update(in io.Reader, src fs.ObjectInfo) error {
|
||||
remote := o.remotePath()
|
||||
if ignoredFiles.MatchString(remote) {
|
||||
fs.Log(o, "File name disallowed - not uploading")
|
||||
return nil
|
||||
}
|
||||
entry, err := o.fs.db.UploadByChunk(ioutil.NopCloser(in), int(uploadChunkSize), remote, true, "")
|
||||
func (o *FsObjectDropbox) Update(in io.Reader, modTime time.Time, size int64) error {
|
||||
// Calculate md5sum as we upload it
|
||||
hash := md5.New()
|
||||
rc := &readCloser{in: io.TeeReader(in, hash)}
|
||||
entry, err := o.dropbox.db.UploadByChunk(rc, uploadChunkSize, o.remotePath(), true, "")
|
||||
if err != nil {
|
||||
return fmt.Errorf("Upload failed: %s", err)
|
||||
}
|
||||
o.setMetadataFromEntry(entry)
|
||||
return nil
|
||||
|
||||
md5sum := fmt.Sprintf("%x", hash.Sum(nil))
|
||||
return o.setModTimeAndMd5sum(modTime, md5sum)
|
||||
}
|
||||
|
||||
// Remove an object
|
||||
func (o *Object) Remove() error {
|
||||
_, err := o.fs.db.Delete(o.remotePath())
|
||||
func (o *FsObjectDropbox) Remove() error {
|
||||
o.deleteMetadata()
|
||||
_, err := o.dropbox.db.Delete(o.remotePath())
|
||||
return err
|
||||
}
|
||||
|
||||
// Check the interfaces are satisfied
|
||||
var (
|
||||
_ fs.Fs = (*Fs)(nil)
|
||||
_ fs.Copier = (*Fs)(nil)
|
||||
_ fs.Purger = (*Fs)(nil)
|
||||
_ fs.Mover = (*Fs)(nil)
|
||||
_ fs.DirMover = (*Fs)(nil)
|
||||
_ fs.Object = (*Object)(nil)
|
||||
)
|
||||
var _ fs.Fs = &FsDropbox{}
|
||||
var _ fs.Purger = &FsDropbox{}
|
||||
var _ fs.Object = &FsObjectDropbox{}
|
||||
|
||||
@@ -1,56 +0,0 @@
|
||||
// Test Dropbox filesystem interface
|
||||
//
|
||||
// Automatically generated - DO NOT EDIT
|
||||
// Regenerate with: make gen_tests
|
||||
package dropbox_test
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/ncw/rclone/dropbox"
|
||||
"github.com/ncw/rclone/fs"
|
||||
"github.com/ncw/rclone/fstest/fstests"
|
||||
)
|
||||
|
||||
func init() {
|
||||
fstests.NilObject = fs.Object((*dropbox.Object)(nil))
|
||||
fstests.RemoteName = "TestDropbox:"
|
||||
}
|
||||
|
||||
// Generic tests for the Fs
|
||||
func TestInit(t *testing.T) { fstests.TestInit(t) }
|
||||
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
|
||||
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
|
||||
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
|
||||
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
|
||||
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
|
||||
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
|
||||
func TestFsNewFsObjectNotFound(t *testing.T) { fstests.TestFsNewFsObjectNotFound(t) }
|
||||
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
|
||||
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
|
||||
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
|
||||
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
|
||||
func TestFsListRoot(t *testing.T) { fstests.TestFsListRoot(t) }
|
||||
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
|
||||
func TestFsNewFsObject(t *testing.T) { fstests.TestFsNewFsObject(t) }
|
||||
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
|
||||
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
|
||||
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
|
||||
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
|
||||
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
|
||||
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
|
||||
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
|
||||
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
|
||||
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
|
||||
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
|
||||
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
|
||||
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
|
||||
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
|
||||
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
|
||||
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
|
||||
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
|
||||
func TestLimitedFs(t *testing.T) { fstests.TestLimitedFs(t) }
|
||||
func TestLimitedFsNotFound(t *testing.T) { fstests.TestLimitedFsNotFound(t) }
|
||||
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
|
||||
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
|
||||
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }
|
||||
@@ -1,179 +0,0 @@
|
||||
package dropbox
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"github.com/ncw/rclone/fs"
|
||||
"github.com/stacktic/dropbox"
|
||||
)
|
||||
|
||||
type nameTreeNode struct {
|
||||
// Map from lowercase directory name to tree node
|
||||
Directories map[string]*nameTreeNode
|
||||
|
||||
// Map from file name (case sensitive) to dropbox entry
|
||||
Files map[string]*dropbox.Entry
|
||||
|
||||
// Empty string if exact case is unknown or root node
|
||||
CaseCorrectName string
|
||||
}
|
||||
|
||||
// ------------------------------------------------------------
|
||||
|
||||
func newNameTreeNode(caseCorrectName string) *nameTreeNode {
|
||||
return &nameTreeNode{
|
||||
CaseCorrectName: caseCorrectName,
|
||||
Directories: make(map[string]*nameTreeNode),
|
||||
Files: make(map[string]*dropbox.Entry),
|
||||
}
|
||||
}
|
||||
|
||||
func newNameTree() *nameTreeNode {
|
||||
return newNameTreeNode("")
|
||||
}
|
||||
|
||||
func (tree *nameTreeNode) String() string {
|
||||
if len(tree.CaseCorrectName) == 0 {
|
||||
return "nameTreeNode/<root>"
|
||||
}
|
||||
return fmt.Sprintf("nameTreeNode/%q", tree.CaseCorrectName)
|
||||
}
|
||||
|
||||
func (tree *nameTreeNode) getTreeNode(path string) *nameTreeNode {
|
||||
if len(path) == 0 {
|
||||
// no lookup required, just return root
|
||||
return tree
|
||||
}
|
||||
|
||||
current := tree
|
||||
for _, component := range strings.Split(path, "/") {
|
||||
if len(component) == 0 {
|
||||
fs.Stats.Error()
|
||||
fs.ErrorLog(tree, "getTreeNode: path component is empty (full path %q)", path)
|
||||
return nil
|
||||
}
|
||||
|
||||
lowercase := strings.ToLower(component)
|
||||
|
||||
lookup := current.Directories[lowercase]
|
||||
if lookup == nil {
|
||||
lookup = newNameTreeNode("")
|
||||
current.Directories[lowercase] = lookup
|
||||
}
|
||||
|
||||
current = lookup
|
||||
}
|
||||
|
||||
return current
|
||||
}
|
||||
|
||||
func (tree *nameTreeNode) PutCaseCorrectDirectoryName(parentPath string, caseCorrectDirectoryName string) {
|
||||
if len(caseCorrectDirectoryName) == 0 {
|
||||
fs.Stats.Error()
|
||||
fs.ErrorLog(tree, "PutCaseCorrectDirectoryName: empty caseCorrectDirectoryName is not allowed (parentPath: %q)", parentPath)
|
||||
return
|
||||
}
|
||||
|
||||
node := tree.getTreeNode(parentPath)
|
||||
if node == nil {
|
||||
return
|
||||
}
|
||||
|
||||
lowerCaseDirectoryName := strings.ToLower(caseCorrectDirectoryName)
|
||||
directory := node.Directories[lowerCaseDirectoryName]
|
||||
if directory == nil {
|
||||
directory = newNameTreeNode(caseCorrectDirectoryName)
|
||||
node.Directories[lowerCaseDirectoryName] = directory
|
||||
} else {
|
||||
if len(directory.CaseCorrectName) > 0 {
|
||||
fs.Stats.Error()
|
||||
fs.ErrorLog(tree, "PutCaseCorrectDirectoryName: directory %q is already exists under parent path %q", caseCorrectDirectoryName, parentPath)
|
||||
return
|
||||
}
|
||||
|
||||
directory.CaseCorrectName = caseCorrectDirectoryName
|
||||
}
|
||||
}
|
||||
|
||||
func (tree *nameTreeNode) PutFile(parentPath string, caseCorrectFileName string, dropboxEntry *dropbox.Entry) {
|
||||
node := tree.getTreeNode(parentPath)
|
||||
if node == nil {
|
||||
return
|
||||
}
|
||||
|
||||
if node.Files[caseCorrectFileName] != nil {
|
||||
fs.Stats.Error()
|
||||
fs.ErrorLog(tree, "PutFile: file %q is already exists at %q", caseCorrectFileName, parentPath)
|
||||
return
|
||||
}
|
||||
|
||||
node.Files[caseCorrectFileName] = dropboxEntry
|
||||
}
|
||||
|
||||
func (tree *nameTreeNode) GetPathWithCorrectCase(path string) *string {
|
||||
if path == "" {
|
||||
empty := ""
|
||||
return &empty
|
||||
}
|
||||
|
||||
var result bytes.Buffer
|
||||
|
||||
current := tree
|
||||
for _, component := range strings.Split(path, "/") {
|
||||
if component == "" {
|
||||
fs.Stats.Error()
|
||||
fs.ErrorLog(tree, "GetPathWithCorrectCase: path component is empty (full path %q)", path)
|
||||
return nil
|
||||
}
|
||||
|
||||
lowercase := strings.ToLower(component)
|
||||
|
||||
current = current.Directories[lowercase]
|
||||
if current == nil || current.CaseCorrectName == "" {
|
||||
return nil
|
||||
}
|
||||
|
||||
_, _ = result.WriteString("/")
|
||||
_, _ = result.WriteString(current.CaseCorrectName)
|
||||
}
|
||||
|
||||
resultString := result.String()
|
||||
return &resultString
|
||||
}
|
||||
|
||||
type nameTreeFileWalkFunc func(caseCorrectFilePath string, entry *dropbox.Entry)
|
||||
|
||||
func (tree *nameTreeNode) walkFilesRec(currentPath string, walkFunc nameTreeFileWalkFunc) {
|
||||
var prefix string
|
||||
if currentPath == "" {
|
||||
prefix = ""
|
||||
} else {
|
||||
prefix = currentPath + "/"
|
||||
}
|
||||
|
||||
for name, entry := range tree.Files {
|
||||
walkFunc(prefix+name, entry)
|
||||
}
|
||||
|
||||
for lowerCaseName, directory := range tree.Directories {
|
||||
caseCorrectName := directory.CaseCorrectName
|
||||
if caseCorrectName == "" {
|
||||
fs.Stats.Error()
|
||||
fs.ErrorLog(tree, "WalkFiles: exact name of the directory %q is unknown (parent path: %q)", lowerCaseName, currentPath)
|
||||
continue
|
||||
}
|
||||
|
||||
directory.walkFilesRec(prefix+caseCorrectName, walkFunc)
|
||||
}
|
||||
}
|
||||
|
||||
func (tree *nameTreeNode) WalkFiles(rootPath string, walkFunc nameTreeFileWalkFunc) {
|
||||
node := tree.getTreeNode(rootPath)
|
||||
if node == nil {
|
||||
return
|
||||
}
|
||||
|
||||
node.walkFilesRec(rootPath, walkFunc)
|
||||
}
|
||||
@@ -1,124 +0,0 @@
|
||||
package dropbox
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/ncw/rclone/fs"
|
||||
dropboxapi "github.com/stacktic/dropbox"
|
||||
)
|
||||
|
||||
func assert(t *testing.T, shouldBeTrue bool, failMessage string) {
|
||||
if !shouldBeTrue {
|
||||
t.Fatal(failMessage)
|
||||
}
|
||||
}
|
||||
|
||||
func TestPutCaseCorrectDirectoryName(t *testing.T) {
|
||||
errors := fs.Stats.GetErrors()
|
||||
|
||||
tree := newNameTree()
|
||||
tree.PutCaseCorrectDirectoryName("a/b", "C")
|
||||
|
||||
assert(t, tree.CaseCorrectName == "", "Root CaseCorrectName should be empty")
|
||||
|
||||
a := tree.Directories["a"]
|
||||
assert(t, a.CaseCorrectName == "", "CaseCorrectName at 'a' should be empty")
|
||||
|
||||
b := a.Directories["b"]
|
||||
assert(t, b.CaseCorrectName == "", "CaseCorrectName at 'a/b' should be empty")
|
||||
|
||||
c := b.Directories["c"]
|
||||
assert(t, c.CaseCorrectName == "C", "CaseCorrectName at 'a/b/c' should be 'C'")
|
||||
|
||||
assert(t, fs.Stats.GetErrors() == errors, "No errors should be reported")
|
||||
}
|
||||
|
||||
func TestPutCaseCorrectDirectoryNameEmptyComponent(t *testing.T) {
|
||||
errors := fs.Stats.GetErrors()
|
||||
|
||||
tree := newNameTree()
|
||||
tree.PutCaseCorrectDirectoryName("/a", "C")
|
||||
tree.PutCaseCorrectDirectoryName("b/", "C")
|
||||
tree.PutCaseCorrectDirectoryName("a//b", "C")
|
||||
|
||||
assert(t, fs.Stats.GetErrors() == errors+3, "3 errors should be reported")
|
||||
}
|
||||
|
||||
func TestPutCaseCorrectDirectoryNameEmptyParent(t *testing.T) {
|
||||
errors := fs.Stats.GetErrors()
|
||||
|
||||
tree := newNameTree()
|
||||
tree.PutCaseCorrectDirectoryName("", "C")
|
||||
|
||||
c := tree.Directories["c"]
|
||||
assert(t, c.CaseCorrectName == "C", "CaseCorrectName at 'c' should be 'C'")
|
||||
|
||||
assert(t, fs.Stats.GetErrors() == errors, "No errors should be reported")
|
||||
}
|
||||
|
||||
func TestGetPathWithCorrectCase(t *testing.T) {
|
||||
errors := fs.Stats.GetErrors()
|
||||
|
||||
tree := newNameTree()
|
||||
tree.PutCaseCorrectDirectoryName("a", "C")
|
||||
assert(t, tree.GetPathWithCorrectCase("a/c") == nil, "Path for 'a' should not be available")
|
||||
|
||||
tree.PutCaseCorrectDirectoryName("", "A")
|
||||
assert(t, *tree.GetPathWithCorrectCase("a/c") == "/A/C", "Path for 'a/c' should be '/A/C'")
|
||||
|
||||
assert(t, fs.Stats.GetErrors() == errors, "No errors should be reported")
|
||||
}
|
||||
|
||||
func TestPutAndWalk(t *testing.T) {
|
||||
errors := fs.Stats.GetErrors()
|
||||
|
||||
tree := newNameTree()
|
||||
tree.PutFile("a", "F", &dropboxapi.Entry{Path: "xxx"})
|
||||
tree.PutCaseCorrectDirectoryName("", "A")
|
||||
|
||||
numCalled := 0
|
||||
walkFunc := func(caseCorrectFilePath string, entry *dropboxapi.Entry) {
|
||||
assert(t, caseCorrectFilePath == "A/F", "caseCorrectFilePath should be A/F, not "+caseCorrectFilePath)
|
||||
assert(t, entry.Path == "xxx", "entry.Path should be xxx")
|
||||
numCalled++
|
||||
}
|
||||
tree.WalkFiles("", walkFunc)
|
||||
|
||||
assert(t, numCalled == 1, "walk func should be called only once")
|
||||
|
||||
assert(t, fs.Stats.GetErrors() == errors, "No errors should be reported")
|
||||
}
|
||||
|
||||
func TestPutAndWalkWithPrefix(t *testing.T) {
|
||||
errors := fs.Stats.GetErrors()
|
||||
|
||||
tree := newNameTree()
|
||||
tree.PutFile("a", "F", &dropboxapi.Entry{Path: "xxx"})
|
||||
tree.PutCaseCorrectDirectoryName("", "A")
|
||||
|
||||
numCalled := 0
|
||||
walkFunc := func(caseCorrectFilePath string, entry *dropboxapi.Entry) {
|
||||
assert(t, caseCorrectFilePath == "A/F", "caseCorrectFilePath should be A/F, not "+caseCorrectFilePath)
|
||||
assert(t, entry.Path == "xxx", "entry.Path should be xxx")
|
||||
numCalled++
|
||||
}
|
||||
tree.WalkFiles("A", walkFunc)
|
||||
|
||||
assert(t, numCalled == 1, "walk func should be called only once")
|
||||
|
||||
assert(t, fs.Stats.GetErrors() == errors, "No errors should be reported")
|
||||
}
|
||||
|
||||
func TestPutAndWalkIncompleteTree(t *testing.T) {
|
||||
errors := fs.Stats.GetErrors()
|
||||
|
||||
tree := newNameTree()
|
||||
tree.PutFile("a", "F", &dropboxapi.Entry{Path: "xxx"})
|
||||
|
||||
walkFunc := func(caseCorrectFilePath string, entry *dropboxapi.Entry) {
|
||||
t.Fatal("Should not be called")
|
||||
}
|
||||
tree.WalkFiles("", walkFunc)
|
||||
|
||||
assert(t, fs.Stats.GetErrors() == errors+1, "One error should be reported")
|
||||
}
|
||||
305
fs/accounting.go
@@ -7,108 +7,51 @@ import (
|
||||
"fmt"
|
||||
"io"
|
||||
"log"
|
||||
"sort"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/VividCortex/ewma"
|
||||
"github.com/tsenart/tb"
|
||||
)
|
||||
|
||||
// Globals
|
||||
var (
|
||||
Stats = NewStats()
|
||||
tokenBucket *tb.Bucket
|
||||
Stats = NewStats()
|
||||
)
|
||||
|
||||
// Start the token bucket if necessary
|
||||
func startTokenBucket() {
|
||||
if bwLimit > 0 {
|
||||
tokenBucket = tb.NewBucket(int64(bwLimit), 100*time.Millisecond)
|
||||
Log(nil, "Starting bandwidth limiter at %vBytes/s", &bwLimit)
|
||||
}
|
||||
}
|
||||
// StringSet holds a set of strings
|
||||
type StringSet map[string]bool
|
||||
|
||||
// stringSet holds a set of strings
|
||||
type stringSet map[string]struct{}
|
||||
|
||||
// inProgress holds a synchronized map of in-progress transfers
|
||||
type inProgress struct {
|
||||
mu sync.Mutex
|
||||
m map[string]*Account
|
||||
}
|
||||
|
||||
// newInProgress makes a new inProgress object
|
||||
func newInProgress() *inProgress {
|
||||
return &inProgress{
|
||||
m: make(map[string]*Account, Config.Transfers),
|
||||
}
|
||||
}
|
||||
|
||||
// set marks the name as in progress
|
||||
func (ip *inProgress) set(name string, acc *Account) {
|
||||
ip.mu.Lock()
|
||||
defer ip.mu.Unlock()
|
||||
ip.m[name] = acc
|
||||
}
|
||||
|
||||
// clear marks the name as no longer in progress
|
||||
func (ip *inProgress) clear(name string) {
|
||||
ip.mu.Lock()
|
||||
defer ip.mu.Unlock()
|
||||
delete(ip.m, name)
|
||||
}
|
||||
|
||||
// get gets the account for name, or nil if not found
|
||||
func (ip *inProgress) get(name string) *Account {
|
||||
ip.mu.Lock()
|
||||
defer ip.mu.Unlock()
|
||||
return ip.m[name]
|
||||
}
|
||||
|
||||
// Strings returns all the strings in the stringSet
|
||||
func (ss stringSet) Strings() []string {
|
||||
// Strings returns all the strings in the StringSet
|
||||
func (ss StringSet) Strings() []string {
|
||||
strings := make([]string, 0, len(ss))
|
||||
for name := range ss {
|
||||
var out string
|
||||
if acc := Stats.inProgress.get(name); acc != nil {
|
||||
out = acc.String()
|
||||
} else {
|
||||
out = name
|
||||
}
|
||||
strings = append(strings, " * "+out)
|
||||
for k := range ss {
|
||||
strings = append(strings, k)
|
||||
}
|
||||
sorted := sort.StringSlice(strings)
|
||||
sorted.Sort()
|
||||
return sorted
|
||||
return strings
|
||||
}
|
||||
|
||||
// String returns all the file names in the stringSet joined by newline
|
||||
func (ss stringSet) String() string {
|
||||
return strings.Join(ss.Strings(), "\n")
|
||||
// String returns all the strings in the StringSet joined by comma
|
||||
func (ss StringSet) String() string {
|
||||
return strings.Join(ss.Strings(), ", ")
|
||||
}
|
||||
|
||||
// StatsInfo limits and accounts all transfers
|
||||
// Stats limits and accounts all transfers
|
||||
type StatsInfo struct {
|
||||
lock sync.RWMutex
|
||||
bytes int64
|
||||
errors int64
|
||||
checks int64
|
||||
checking stringSet
|
||||
checking StringSet
|
||||
transfers int64
|
||||
transferring stringSet
|
||||
transferring StringSet
|
||||
start time.Time
|
||||
inProgress *inProgress
|
||||
}
|
||||
|
||||
// NewStats creates an initialised StatsInfo
|
||||
func NewStats() *StatsInfo {
|
||||
return &StatsInfo{
|
||||
checking: make(stringSet, Config.Checkers),
|
||||
transferring: make(stringSet, Config.Transfers),
|
||||
checking: make(StringSet, Config.Checkers),
|
||||
transferring: make(StringSet, Config.Transfers),
|
||||
start: time.Now(),
|
||||
inProgress: newInProgress(),
|
||||
}
|
||||
}
|
||||
|
||||
@@ -117,30 +60,29 @@ func (s *StatsInfo) String() string {
|
||||
s.lock.RLock()
|
||||
defer s.lock.RUnlock()
|
||||
dt := time.Now().Sub(s.start)
|
||||
dtSeconds := dt.Seconds()
|
||||
dt_seconds := dt.Seconds()
|
||||
speed := 0.0
|
||||
if dt > 0 {
|
||||
speed = float64(s.bytes) / 1024 / dtSeconds
|
||||
speed = float64(s.bytes) / 1024 / dt_seconds
|
||||
}
|
||||
dtRounded := dt - (dt % (time.Second / 10))
|
||||
buf := &bytes.Buffer{}
|
||||
fmt.Fprintf(buf, `
|
||||
Transferred: %10d Bytes (%7.2f kByte/s)
|
||||
Errors: %10d
|
||||
Checks: %10d
|
||||
Transferred: %10d
|
||||
Elapsed time: %10v
|
||||
Elapsed time: %v
|
||||
`,
|
||||
s.bytes, speed,
|
||||
s.errors,
|
||||
s.checks,
|
||||
s.transfers,
|
||||
dtRounded)
|
||||
dt)
|
||||
if len(s.checking) > 0 {
|
||||
fmt.Fprintf(buf, "Checking:\n%s\n", s.checking)
|
||||
fmt.Fprintf(buf, "Checking: %s\n", s.checking)
|
||||
}
|
||||
if len(s.transferring) > 0 {
|
||||
fmt.Fprintf(buf, "Transferring:\n%s\n", s.transferring)
|
||||
fmt.Fprintf(buf, "Transferring: %s\n", s.transferring)
|
||||
}
|
||||
return buf.String()
|
||||
}
|
||||
@@ -171,23 +113,6 @@ func (s *StatsInfo) GetErrors() int64 {
|
||||
return s.errors
|
||||
}
|
||||
|
||||
// ResetCounters sets the counters (bytes, checks, errors, transfers) to 0
|
||||
func (s *StatsInfo) ResetCounters() {
|
||||
s.lock.RLock()
|
||||
defer s.lock.RUnlock()
|
||||
s.bytes = 0
|
||||
s.errors = 0
|
||||
s.checks = 0
|
||||
s.transfers = 0
|
||||
}
|
||||
|
||||
// ResetErrors sets the errors count to 0
|
||||
func (s *StatsInfo) ResetErrors() {
|
||||
s.lock.RLock()
|
||||
defer s.lock.RUnlock()
|
||||
s.errors = 0
|
||||
}
|
||||
|
||||
// Errored returns whether there have been any errors
|
||||
func (s *StatsInfo) Errored() bool {
|
||||
s.lock.RLock()
|
||||
@@ -199,14 +124,14 @@ func (s *StatsInfo) Errored() bool {
|
||||
func (s *StatsInfo) Error() {
|
||||
s.lock.Lock()
|
||||
defer s.lock.Unlock()
|
||||
s.errors++
|
||||
s.errors += 1
|
||||
}
|
||||
|
||||
// Checking adds a check into the stats
|
||||
func (s *StatsInfo) Checking(o Object) {
|
||||
s.lock.Lock()
|
||||
defer s.lock.Unlock()
|
||||
s.checking[o.Remote()] = struct{}{}
|
||||
s.checking[o.Remote()] = true
|
||||
}
|
||||
|
||||
// DoneChecking removes a check from the stats
|
||||
@@ -214,21 +139,14 @@ func (s *StatsInfo) DoneChecking(o Object) {
|
||||
s.lock.Lock()
|
||||
defer s.lock.Unlock()
|
||||
delete(s.checking, o.Remote())
|
||||
s.checks++
|
||||
}
|
||||
|
||||
// GetTransfers reads the number of transfers
|
||||
func (s *StatsInfo) GetTransfers() int64 {
|
||||
s.lock.RLock()
|
||||
defer s.lock.RUnlock()
|
||||
return s.transfers
|
||||
s.checks += 1
|
||||
}
|
||||
|
||||
// Transferring adds a transfer into the stats
|
||||
func (s *StatsInfo) Transferring(o Object) {
|
||||
s.lock.Lock()
|
||||
defer s.lock.Unlock()
|
||||
s.transferring[o.Remote()] = struct{}{}
|
||||
s.transferring[o.Remote()] = true
|
||||
}
|
||||
|
||||
// DoneTransferring removes a transfer from the stats
|
||||
@@ -236,187 +154,36 @@ func (s *StatsInfo) DoneTransferring(o Object) {
|
||||
s.lock.Lock()
|
||||
defer s.lock.Unlock()
|
||||
delete(s.transferring, o.Remote())
|
||||
s.transfers++
|
||||
s.transfers += 1
|
||||
}
|
||||
|
||||
// Account limits and accounts for one transfer
|
||||
type Account struct {
|
||||
// The mutex is to make sure Read() and Close() aren't called
|
||||
// concurrently. Unfortunately the persistent connection loop
|
||||
// in http transport calls Read() after Do() returns on
|
||||
// CancelRequest so this race can happen when it apparently
|
||||
// shouldn't.
|
||||
mu sync.Mutex
|
||||
in io.ReadCloser
|
||||
size int64
|
||||
name string
|
||||
statmu sync.Mutex // Separate mutex for stat values.
|
||||
bytes int64 // Total number of bytes read
|
||||
start time.Time // Start time of first read
|
||||
lpTime time.Time // Time of last average measurement
|
||||
lpBytes int // Number of bytes read since last measurement
|
||||
avg ewma.MovingAverage // Moving average of last few measurements
|
||||
closed bool // set if the file is closed
|
||||
exit chan struct{} // channel that will be closed when transfer is finished
|
||||
in io.ReadCloser
|
||||
bytes int64
|
||||
}
|
||||
|
||||
// NewAccount makes an Account reader for an object
|
||||
func NewAccount(in io.ReadCloser, obj Object) *Account {
|
||||
acc := &Account{
|
||||
in: in,
|
||||
size: obj.Size(),
|
||||
name: obj.Remote(),
|
||||
exit: make(chan struct{}),
|
||||
avg: ewma.NewMovingAverage(),
|
||||
lpTime: time.Now(),
|
||||
}
|
||||
go acc.averageLoop()
|
||||
Stats.inProgress.set(acc.name, acc)
|
||||
return acc
|
||||
}
|
||||
|
||||
func (file *Account) averageLoop() {
|
||||
tick := time.NewTicker(time.Second)
|
||||
defer tick.Stop()
|
||||
for {
|
||||
select {
|
||||
case now := <-tick.C:
|
||||
file.statmu.Lock()
|
||||
// Add average of last second.
|
||||
elapsed := now.Sub(file.lpTime).Seconds()
|
||||
avg := float64(file.lpBytes) / elapsed
|
||||
file.avg.Add(avg)
|
||||
file.lpBytes = 0
|
||||
file.lpTime = now
|
||||
// Unlock stats
|
||||
file.statmu.Unlock()
|
||||
case <-file.exit:
|
||||
return
|
||||
}
|
||||
// NewAccount makes an Account reader
|
||||
func NewAccount(in io.ReadCloser) *Account {
|
||||
return &Account{
|
||||
in: in,
|
||||
}
|
||||
}
|
||||
|
||||
// Read bytes from the object - see io.Reader
|
||||
func (file *Account) Read(p []byte) (n int, err error) {
|
||||
file.mu.Lock()
|
||||
defer file.mu.Unlock()
|
||||
|
||||
// Set start time.
|
||||
file.statmu.Lock()
|
||||
if file.start.IsZero() {
|
||||
file.start = time.Now()
|
||||
}
|
||||
file.statmu.Unlock()
|
||||
|
||||
n, err = file.in.Read(p)
|
||||
|
||||
// Update Stats
|
||||
file.statmu.Lock()
|
||||
file.lpBytes += n
|
||||
file.bytes += int64(n)
|
||||
file.statmu.Unlock()
|
||||
|
||||
Stats.Bytes(int64(n))
|
||||
|
||||
// Limit the transfer speed if required
|
||||
if tokenBucket != nil {
|
||||
tokenBucket.Wait(int64(n))
|
||||
if err == io.EOF {
|
||||
// FIXME Do something?
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// Progress returns bytes read as well as the size.
|
||||
// Size can be <= 0 if the size is unknown.
|
||||
func (file *Account) Progress() (bytes, size int64) {
|
||||
if file == nil {
|
||||
return 0, 0
|
||||
}
|
||||
file.statmu.Lock()
|
||||
if bytes > size {
|
||||
size = 0
|
||||
}
|
||||
defer file.statmu.Unlock()
|
||||
return file.bytes, file.size
|
||||
}
|
||||
|
||||
// Speed returns the speed of the current file transfer
|
||||
// in bytes per second, as well as an exponentially weighted moving average
|
||||
// If no read has completed yet, 0 is returned for both values.
|
||||
func (file *Account) Speed() (bps, current float64) {
|
||||
if file == nil {
|
||||
return 0, 0
|
||||
}
|
||||
file.statmu.Lock()
|
||||
defer file.statmu.Unlock()
|
||||
if file.bytes == 0 {
|
||||
return 0, 0
|
||||
}
|
||||
// Calculate speed from first read.
|
||||
total := float64(time.Now().Sub(file.start)) / float64(time.Second)
|
||||
bps = float64(file.bytes) / total
|
||||
current = file.avg.Value()
|
||||
return
|
||||
}
|
||||
|
||||
// ETA returns the ETA of the current operation,
|
||||
// rounded to full seconds.
|
||||
// If the ETA cannot be determined 'ok' returns false.
|
||||
func (file *Account) ETA() (eta time.Duration, ok bool) {
|
||||
if file == nil || file.size <= 0 {
|
||||
return 0, false
|
||||
}
|
||||
file.statmu.Lock()
|
||||
defer file.statmu.Unlock()
|
||||
if file.bytes == 0 {
|
||||
return 0, false
|
||||
}
|
||||
left := file.size - file.bytes
|
||||
if left <= 0 {
|
||||
return 0, true
|
||||
}
|
||||
avg := file.avg.Value()
|
||||
if avg <= 0 {
|
||||
return 0, false
|
||||
}
|
||||
seconds := float64(left) / file.avg.Value()
|
||||
|
||||
return time.Duration(time.Second * time.Duration(int(seconds))), true
|
||||
}
|
||||
|
||||
// String produces stats for this file
|
||||
func (file *Account) String() string {
|
||||
a, b := file.Progress()
|
||||
avg, cur := file.Speed()
|
||||
eta, etaok := file.ETA()
|
||||
etas := "-"
|
||||
if etaok {
|
||||
if eta > 0 {
|
||||
etas = fmt.Sprintf("%v", eta)
|
||||
} else {
|
||||
etas = "0s"
|
||||
}
|
||||
}
|
||||
name := []rune(file.name)
|
||||
if len(name) > 45 {
|
||||
where := len(name) - 42
|
||||
name = append([]rune{'.', '.', '.'}, name[where:]...)
|
||||
}
|
||||
if b <= 0 {
|
||||
return fmt.Sprintf("%45s: avg:%7.1f, cur: %6.1f kByte/s. ETA: %s", string(name), avg/1024, cur/1024, etas)
|
||||
}
|
||||
return fmt.Sprintf("%45s: %2d%% done. avg: %6.1f, cur: %6.1f kByte/s. ETA: %s", string(name), int(100*float64(a)/float64(b)), avg/1024, cur/1024, etas)
|
||||
}
|
||||
|
||||
// Close the object
|
||||
func (file *Account) Close() error {
|
||||
file.mu.Lock()
|
||||
defer file.mu.Unlock()
|
||||
if file.closed {
|
||||
return nil
|
||||
}
|
||||
file.closed = true
|
||||
close(file.exit)
|
||||
Stats.inProgress.clear(file.name)
|
||||
// FIXME do something?
|
||||
return file.in.Close()
|
||||
}
|
||||
|
||||
|
||||
@@ -1,16 +0,0 @@
|
||||
package all
|
||||
|
||||
import (
|
||||
// Active file systems
|
||||
_ "github.com/ncw/rclone/amazonclouddrive"
|
||||
_ "github.com/ncw/rclone/b2"
|
||||
_ "github.com/ncw/rclone/drive"
|
||||
_ "github.com/ncw/rclone/dropbox"
|
||||
_ "github.com/ncw/rclone/googlecloudstorage"
|
||||
_ "github.com/ncw/rclone/hubic"
|
||||
_ "github.com/ncw/rclone/local"
|
||||
_ "github.com/ncw/rclone/onedrive"
|
||||
_ "github.com/ncw/rclone/s3"
|
||||
_ "github.com/ncw/rclone/swift"
|
||||
_ "github.com/ncw/rclone/yandex"
|
||||
)
|
||||
199
fs/buffer.go
@@ -1,199 +0,0 @@
|
||||
package fs
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
)
|
||||
|
||||
// asyncReader will do async read-ahead from the input reader
|
||||
// and make the data available as an io.Reader.
|
||||
// This should be fully transparent, except that once an error
|
||||
// has been returned from the Reader, it will not recover.
|
||||
type asyncReader struct {
|
||||
in io.ReadCloser // Input reader
|
||||
ready chan *buffer // Buffers ready to be handed to the reader
|
||||
reuse chan *buffer // Buffers to reuse for input reading
|
||||
exit chan struct{} // Closes when finished
|
||||
buffers int // Number of buffers
|
||||
err error // If an error has occurred it is here
|
||||
cur *buffer // Current buffer being served
|
||||
exited chan struct{} // Channel is closed when the async reader shuts down
|
||||
closed bool // Has the parent reader been closed?
|
||||
}
|
||||
|
||||
// newAsyncReader returns a reader that will asynchronously read from
|
||||
// the supplied Reader into a number of buffers each with a given size.
|
||||
// It will start reading from the input at once, maybe even before this
|
||||
// function has returned.
|
||||
// The input can be read from the returned reader.
|
||||
// When done use Close to release the buffers and close the supplied input.
|
||||
func newAsyncReader(rd io.ReadCloser, buffers, size int) (io.ReadCloser, error) {
|
||||
if size <= 0 {
|
||||
return nil, fmt.Errorf("buffer size too small")
|
||||
}
|
||||
if buffers <= 0 {
|
||||
return nil, fmt.Errorf("number of buffers too small")
|
||||
}
|
||||
if rd == nil {
|
||||
return nil, fmt.Errorf("nil reader supplied")
|
||||
}
|
||||
a := &asyncReader{}
|
||||
a.init(rd, buffers, size)
|
||||
return a, nil
|
||||
}
|
||||
|
||||
func (a *asyncReader) init(rd io.ReadCloser, buffers, size int) {
|
||||
a.in = rd
|
||||
a.ready = make(chan *buffer, buffers)
|
||||
a.reuse = make(chan *buffer, buffers)
|
||||
a.exit = make(chan struct{}, 0)
|
||||
a.exited = make(chan struct{}, 0)
|
||||
a.buffers = buffers
|
||||
a.cur = nil
|
||||
|
||||
// Create buffers
|
||||
for i := 0; i < buffers; i++ {
|
||||
a.reuse <- newBuffer(size)
|
||||
}
|
||||
|
||||
// Start async reader
|
||||
go func() {
|
||||
// Ensure that when we exit this is signalled.
|
||||
defer close(a.exited)
|
||||
for {
|
||||
select {
|
||||
case b := <-a.reuse:
|
||||
err := b.read(a.in)
|
||||
a.ready <- b
|
||||
if err != nil {
|
||||
close(a.ready)
|
||||
return
|
||||
}
|
||||
case <-a.exit:
|
||||
return
|
||||
}
|
||||
}
|
||||
}()
|
||||
}
|
||||
|
||||
// fill gets the next buffer ready to be read, returning any stored error.
|
||||
func (a *asyncReader) fill() (err error) {
|
||||
if a.cur.isEmpty() {
|
||||
if a.cur != nil {
|
||||
a.reuse <- a.cur
|
||||
a.cur = nil
|
||||
}
|
||||
b, ok := <-a.ready
|
||||
if !ok {
|
||||
return a.err
|
||||
}
|
||||
a.cur = b
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Read will return the next available data.
|
||||
func (a *asyncReader) Read(p []byte) (n int, err error) {
|
||||
// Swap buffer and maybe return error
|
||||
err = a.fill()
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
||||
// Copy what we can
|
||||
n = copy(p, a.cur.buffer())
|
||||
a.cur.increment(n)
|
||||
|
||||
// If at end of buffer, return any error, if present
|
||||
if a.cur.isEmpty() {
|
||||
a.err = a.cur.err
|
||||
return n, a.err
|
||||
}
|
||||
return n, nil
|
||||
}
|
||||
|
||||
// WriteTo writes data to w until there's no more data to write or when an error occurs.
|
||||
// The return value n is the number of bytes written.
|
||||
// Any error encountered during the write is also returned.
|
||||
func (a *asyncReader) WriteTo(w io.Writer) (n int64, err error) {
|
||||
n = 0
|
||||
for {
|
||||
err = a.fill()
|
||||
if err != nil {
|
||||
return n, err
|
||||
}
|
||||
n2, err := w.Write(a.cur.buffer())
|
||||
a.cur.increment(n2)
|
||||
n += int64(n2)
|
||||
if err != nil {
|
||||
return n, err
|
||||
}
|
||||
if a.cur.err != nil {
|
||||
a.err = a.cur.err
|
||||
return n, a.cur.err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Close will ensure that the underlying async reader is shut down.
|
||||
// It will also close the input supplied on newAsyncReader.
|
||||
func (a *asyncReader) Close() (err error) {
|
||||
select {
|
||||
case <-a.exited:
|
||||
default:
|
||||
close(a.exit)
|
||||
<-a.exited
|
||||
}
|
||||
if !a.closed {
|
||||
a.closed = true
|
||||
return a.in.Close()
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Internal buffer
|
||||
// If an error is present, it must be returned
|
||||
// once all buffer content has been served.
|
||||
type buffer struct {
|
||||
buf []byte
|
||||
err error
|
||||
offset int
|
||||
size int
|
||||
}
|
||||
|
||||
func newBuffer(size int) *buffer {
|
||||
return &buffer{buf: make([]byte, size), err: nil, size: size}
|
||||
}
|
||||
|
||||
// isEmpty returns true if offset is at end of
|
||||
// buffer, or the buffer is nil
|
||||
func (b *buffer) isEmpty() bool {
|
||||
if b == nil {
|
||||
return true
|
||||
}
|
||||
if len(b.buf)-b.offset <= 0 {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// read into start of the buffer from the supplied reader,
|
||||
// resets the offset and updates the size of the buffer.
|
||||
// Any error encountered during the read is returned.
|
||||
func (b *buffer) read(rd io.Reader) error {
|
||||
var n int
|
||||
n, b.err = rd.Read(b.buf[0:b.size])
|
||||
b.buf = b.buf[0:n]
|
||||
b.offset = 0
|
||||
return b.err
|
||||
}
|
||||
|
||||
// Return the buffer at current offset
|
||||
func (b *buffer) buffer() []byte {
|
||||
return b.buf[b.offset:]
|
||||
}
|
||||
|
||||
// increment the offset
|
||||
func (b *buffer) increment(n int) {
|
||||
b.offset += n
|
||||
}
|
||||
@@ -1,268 +0,0 @@
|
||||
package fs
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"strings"
|
||||
"testing"
|
||||
"testing/iotest"
|
||||
)
|
||||
|
||||
func TestAsyncReader(t *testing.T) {
|
||||
buf := ioutil.NopCloser(bytes.NewBufferString("Testbuffer"))
|
||||
ar, err := newAsyncReader(buf, 4, 10000)
|
||||
if err != nil {
|
||||
t.Fatal("error when creating:", err)
|
||||
}
|
||||
|
||||
var dst = make([]byte, 100)
|
||||
n, err := ar.Read(dst)
|
||||
if err != nil {
|
||||
t.Fatal("error when reading:", err)
|
||||
}
|
||||
if n != 10 {
|
||||
t.Fatal("unexpected length, expected 10, got ", n)
|
||||
}
|
||||
|
||||
n, err = ar.Read(dst)
|
||||
if err != io.EOF {
|
||||
t.Fatal("expected io.EOF, got", err)
|
||||
}
|
||||
if n != 0 {
|
||||
t.Fatal("unexpected length, expected 0, got ", n)
|
||||
}
|
||||
|
||||
// Test read after error
|
||||
n, err = ar.Read(dst)
|
||||
if err != io.EOF {
|
||||
t.Fatal("expected io.EOF, got", err)
|
||||
}
|
||||
if n != 0 {
|
||||
t.Fatal("unexpected length, expected 0, got ", n)
|
||||
}
|
||||
|
||||
err = ar.Close()
|
||||
if err != nil {
|
||||
t.Fatal("error when closing:", err)
|
||||
}
|
||||
// Test double close
|
||||
err = ar.Close()
|
||||
if err != nil {
|
||||
t.Fatal("error when closing:", err)
|
||||
}
|
||||
|
||||
// Test Close without reading everything
|
||||
buf = ioutil.NopCloser(bytes.NewBuffer(make([]byte, 50000)))
|
||||
ar, err = newAsyncReader(buf, 4, 100)
|
||||
if err != nil {
|
||||
t.Fatal("error when creating:", err)
|
||||
}
|
||||
err = ar.Close()
|
||||
if err != nil {
|
||||
t.Fatal("error when closing, noread:", err)
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
func TestAsyncWriteTo(t *testing.T) {
|
||||
buf := ioutil.NopCloser(bytes.NewBufferString("Testbuffer"))
|
||||
ar, err := newAsyncReader(buf, 4, 10000)
|
||||
if err != nil {
|
||||
t.Fatal("error when creating:", err)
|
||||
}
|
||||
|
||||
var dst = &bytes.Buffer{}
|
||||
n, err := io.Copy(dst, ar)
|
||||
if err != io.EOF {
|
||||
t.Fatal("error when reading:", err)
|
||||
}
|
||||
if n != 10 {
|
||||
t.Fatal("unexpected length, expected 10, got ", n)
|
||||
}
|
||||
|
||||
// Should still return EOF
|
||||
n, err = io.Copy(dst, ar)
|
||||
if err != io.EOF {
|
||||
t.Fatal("expected io.EOF, got", err)
|
||||
}
|
||||
if n != 0 {
|
||||
t.Fatal("unexpected length, expected 0, got ", n)
|
||||
}
|
||||
|
||||
err = ar.Close()
|
||||
if err != nil {
|
||||
t.Fatal("error when closing:", err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestAsyncReaderErrors(t *testing.T) {
|
||||
// test nil reader
|
||||
_, err := newAsyncReader(nil, 4, 10000)
|
||||
if err == nil {
|
||||
t.Fatal("expected error when creating, but got nil")
|
||||
}
|
||||
|
||||
// invalid buffer number
|
||||
buf := ioutil.NopCloser(bytes.NewBufferString("Testbuffer"))
|
||||
_, err = newAsyncReader(buf, 0, 10000)
|
||||
if err == nil {
|
||||
t.Fatal("expected error when creating, but got nil")
|
||||
}
|
||||
_, err = newAsyncReader(buf, -1, 10000)
|
||||
if err == nil {
|
||||
t.Fatal("expected error when creating, but got nil")
|
||||
}
|
||||
|
||||
// invalid buffer size
|
||||
_, err = newAsyncReader(buf, 4, 0)
|
||||
if err == nil {
|
||||
t.Fatal("expected error when creating, but got nil")
|
||||
}
|
||||
_, err = newAsyncReader(buf, 4, -1)
|
||||
if err == nil {
|
||||
t.Fatal("expected error when creating, but got nil")
|
||||
}
|
||||
}
|
||||
|
||||
// Complex read tests, leveraged from "bufio".
|
||||
|
||||
type readMaker struct {
|
||||
name string
|
||||
fn func(io.Reader) io.Reader
|
||||
}
|
||||
|
||||
var readMakers = []readMaker{
|
||||
{"full", func(r io.Reader) io.Reader { return r }},
|
||||
{"byte", iotest.OneByteReader},
|
||||
{"half", iotest.HalfReader},
|
||||
{"data+err", iotest.DataErrReader},
|
||||
{"timeout", iotest.TimeoutReader},
|
||||
}
|
||||
|
||||
// Call Read to accumulate the text of a file
|
||||
func reads(buf io.Reader, m int) string {
|
||||
var b [1000]byte
|
||||
nb := 0
|
||||
for {
|
||||
n, err := buf.Read(b[nb : nb+m])
|
||||
nb += n
|
||||
if err == io.EOF {
|
||||
break
|
||||
} else if err != nil && err != iotest.ErrTimeout {
|
||||
panic("Data: " + err.Error())
|
||||
} else if err != nil {
|
||||
break
|
||||
}
|
||||
}
|
||||
return string(b[0:nb])
|
||||
}
|
||||
|
||||
type bufReader struct {
|
||||
name string
|
||||
fn func(io.Reader) string
|
||||
}
|
||||
|
||||
var bufreaders = []bufReader{
|
||||
{"1", func(b io.Reader) string { return reads(b, 1) }},
|
||||
{"2", func(b io.Reader) string { return reads(b, 2) }},
|
||||
{"3", func(b io.Reader) string { return reads(b, 3) }},
|
||||
{"4", func(b io.Reader) string { return reads(b, 4) }},
|
||||
{"5", func(b io.Reader) string { return reads(b, 5) }},
|
||||
{"7", func(b io.Reader) string { return reads(b, 7) }},
|
||||
}
|
||||
|
||||
const minReadBufferSize = 16
|
||||
|
||||
var bufsizes = []int{
|
||||
0, minReadBufferSize, 23, 32, 46, 64, 93, 128, 1024, 4096,
|
||||
}
|
||||
|
||||
// Test various input buffer sizes, number of buffers and read sizes.
|
||||
func TestAsyncReaderSizes(t *testing.T) {
|
||||
var texts [31]string
|
||||
str := ""
|
||||
all := ""
|
||||
for i := 0; i < len(texts)-1; i++ {
|
||||
texts[i] = str + "\n"
|
||||
all += texts[i]
|
||||
str += string(i%26 + 'a')
|
||||
}
|
||||
texts[len(texts)-1] = all
|
||||
|
||||
for h := 0; h < len(texts); h++ {
|
||||
text := texts[h]
|
||||
for i := 0; i < len(readMakers); i++ {
|
||||
for j := 0; j < len(bufreaders); j++ {
|
||||
for k := 0; k < len(bufsizes); k++ {
|
||||
for l := 1; l < 10; l++ {
|
||||
readmaker := readMakers[i]
|
||||
bufreader := bufreaders[j]
|
||||
bufsize := bufsizes[k]
|
||||
read := readmaker.fn(strings.NewReader(text))
|
||||
buf := bufio.NewReaderSize(read, bufsize)
|
||||
ar, _ := newAsyncReader(ioutil.NopCloser(buf), l, 100)
|
||||
s := bufreader.fn(ar)
|
||||
// "timeout" expects the Reader to recover, asyncReader does not.
|
||||
if s != text && readmaker.name != "timeout" {
|
||||
t.Errorf("reader=%s fn=%s bufsize=%d want=%q got=%q",
|
||||
readmaker.name, bufreader.name, bufsize, text, s)
|
||||
}
|
||||
err := ar.Close()
|
||||
if err != nil {
|
||||
t.Fatal("Unexpected close error:", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Test various input buffer sizes, number of buffers and read sizes.
|
||||
func TestAsyncReaderWriteTo(t *testing.T) {
|
||||
var texts [31]string
|
||||
str := ""
|
||||
all := ""
|
||||
for i := 0; i < len(texts)-1; i++ {
|
||||
texts[i] = str + "\n"
|
||||
all += texts[i]
|
||||
str += string(i%26 + 'a')
|
||||
}
|
||||
texts[len(texts)-1] = all
|
||||
|
||||
for h := 0; h < len(texts); h++ {
|
||||
text := texts[h]
|
||||
for i := 0; i < len(readMakers); i++ {
|
||||
for j := 0; j < len(bufreaders); j++ {
|
||||
for k := 0; k < len(bufsizes); k++ {
|
||||
for l := 1; l < 10; l++ {
|
||||
readmaker := readMakers[i]
|
||||
bufreader := bufreaders[j]
|
||||
bufsize := bufsizes[k]
|
||||
read := readmaker.fn(strings.NewReader(text))
|
||||
buf := bufio.NewReaderSize(read, bufsize)
|
||||
ar, _ := newAsyncReader(ioutil.NopCloser(buf), l, 100)
|
||||
dst := &bytes.Buffer{}
|
||||
wt := ar.(io.WriterTo)
|
||||
_, err := wt.WriteTo(dst)
|
||||
if err != nil && err != io.EOF && err != iotest.ErrTimeout {
|
||||
t.Fatal("Copy:", err)
|
||||
}
|
||||
s := dst.String()
|
||||
// "timeout" expects the Reader to recover, asyncReader does not.
|
||||
if s != text && readmaker.name != "timeout" {
|
||||
t.Errorf("reader=%s fn=%s bufsize=%d want=%q got=%q",
|
||||
readmaker.name, bufreader.name, bufsize, text, s)
|
||||
}
|
||||
err = ar.Close()
|
||||
if err != nil {
|
||||
t.Fatal("Unexpected close error:", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
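// Illustrative sketch, not part of the diff above: the tests show that
// newAsyncReader wants a ReadCloser, a positive buffer count and a positive
// buffer size, and that the result is read and closed like any other reader.
// A typical call, using the same 4 buffers of 100 bytes as the tests, might
// look like this (the helper name is made up for illustration).
func exampleAsyncRead(rc io.ReadCloser) (string, error) {
	ar, err := newAsyncReader(rc, 4, 100) // 4 buffers of 100 bytes each
	if err != nil {
		return "", err
	}
	data, err := ioutil.ReadAll(ar) // ar satisfies io.Reader (and io.WriterTo)
	if err != nil {
		return "", err
	}
	return string(data), ar.Close()
}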
|
||||
717
fs/config.go
@@ -4,17 +4,8 @@ package fs
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"crypto/rand"
|
||||
"crypto/sha256"
|
||||
"crypto/tls"
|
||||
"encoding/base64"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"math"
|
||||
"net/http"
|
||||
"os"
|
||||
"os/user"
|
||||
"path"
|
||||
@@ -22,271 +13,57 @@ import (
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
"unicode/utf8"
|
||||
|
||||
"github.com/Unknwon/goconfig"
|
||||
"github.com/mreiferson/go-httpclient"
|
||||
"github.com/spf13/pflag"
|
||||
"golang.org/x/crypto/nacl/secretbox"
|
||||
"golang.org/x/text/unicode/norm"
|
||||
"github.com/ogier/pflag"
|
||||
)
|
||||
|
||||
const (
|
||||
configFileName = ".rclone.conf"
|
||||
|
||||
// ConfigToken is the key used to store the token under
|
||||
ConfigToken = "token"
|
||||
|
||||
// ConfigClientID is the config key used to store the client id
|
||||
ConfigClientID = "client_id"
|
||||
|
||||
// ConfigClientSecret is the config key used to store the client secret
|
||||
ConfigClientSecret = "client_secret"
|
||||
|
||||
// ConfigAutomatic indicates that we want non-interactive configuration
|
||||
ConfigAutomatic = "config_automatic"
|
||||
)
|
||||
|
||||
// SizeSuffix is parsed by flag with k/M/G suffixes
|
||||
type SizeSuffix int64
|
||||
|
||||
// Global
|
||||
var (
|
||||
// ConfigFile is the config file data structure
|
||||
// Config file
|
||||
ConfigFile *goconfig.ConfigFile
|
||||
// HomeDir is the home directory of the user
|
||||
// Home directory
|
||||
HomeDir = configHome()
|
||||
// ConfigPath points to the config file
|
||||
// Config file path
|
||||
ConfigPath = path.Join(HomeDir, configFileName)
|
||||
// Config is the global config
|
||||
// Global config
|
||||
Config = &ConfigInfo{}
|
||||
// Flags
|
||||
verbose = pflag.BoolP("verbose", "v", false, "Print lots more stuff")
|
||||
quiet = pflag.BoolP("quiet", "q", false, "Print as little stuff as possible")
|
||||
modifyWindow = pflag.DurationP("modify-window", "", time.Nanosecond, "Max time diff to be considered the same")
|
||||
checkers = pflag.IntP("checkers", "", 8, "Number of checkers to run in parallel.")
|
||||
transfers = pflag.IntP("transfers", "", 4, "Number of file transfers to run in parallel.")
|
||||
configFile = pflag.StringP("config", "", ConfigPath, "Config file.")
|
||||
checkSum = pflag.BoolP("checksum", "c", false, "Skip based on checksum & size, not mod-time & size")
|
||||
sizeOnly = pflag.BoolP("size-only", "", false, "Skip based on size only, not mod-time or checksum")
|
||||
ignoreTimes = pflag.BoolP("ignore-times", "I", false, "Don't skip files that match size and time - transfer all files")
|
||||
ignoreExisting = pflag.BoolP("ignore-existing", "", false, "Skip all files that exist on destination")
|
||||
dryRun = pflag.BoolP("dry-run", "n", false, "Do a trial run with no permanent changes")
|
||||
connectTimeout = pflag.DurationP("contimeout", "", 60*time.Second, "Connect timeout")
|
||||
timeout = pflag.DurationP("timeout", "", 5*60*time.Second, "IO idle timeout")
|
||||
dumpHeaders = pflag.BoolP("dump-headers", "", false, "Dump HTTP headers - may contain sensitive info")
|
||||
dumpBodies = pflag.BoolP("dump-bodies", "", false, "Dump HTTP headers and bodies - may contain sensitive info")
|
||||
skipVerify = pflag.BoolP("no-check-certificate", "", false, "Do not verify the server SSL certificate. Insecure.")
|
||||
AskPassword = pflag.BoolP("ask-password", "", true, "Allow prompt for password for encrypted configuration.")
|
||||
deleteBefore = pflag.BoolP("delete-before", "", false, "When synchronizing, delete files on destination before transfering")
|
||||
deleteDuring = pflag.BoolP("delete-during", "", false, "When synchronizing, delete files during transfer (default)")
|
||||
deleteAfter = pflag.BoolP("delete-after", "", false, "When synchronizing, delete files on destination after transfering")
|
||||
lowLevelRetries = pflag.IntP("low-level-retries", "", 10, "Number of low level retries to do.")
|
||||
updateOlder = pflag.BoolP("update", "u", false, "Skip files that are newer on the destination.")
|
||||
noGzip = pflag.BoolP("no-gzip-encoding", "", false, "Don't set Accept-Encoding: gzip.")
|
||||
dedupeMode = pflag.StringP("dedupe-mode", "", "interactive", "Dedupe mode interactive|skip|first|newest|oldest|rename.")
|
||||
bwLimit SizeSuffix
|
||||
|
||||
// Key to use for password en/decryption.
|
||||
// When nil, no encryption will be used for saving.
|
||||
configKey []byte
|
||||
verbose = pflag.BoolP("verbose", "v", false, "Print lots more stuff")
|
||||
quiet = pflag.BoolP("quiet", "q", false, "Print as little stuff as possible")
|
||||
modifyWindow = pflag.DurationP("modify-window", "", time.Nanosecond, "Max time diff to be considered the same")
|
||||
checkers = pflag.IntP("checkers", "", 8, "Number of checkers to run in parallel.")
|
||||
transfers = pflag.IntP("transfers", "", 4, "Number of file transfers to run in parallel.")
|
||||
configFile = pflag.StringP("config", "", ConfigPath, "Config file.")
|
||||
dryRun = pflag.BoolP("dry-run", "n", false, "Do a trial run with no permanent changes")
|
||||
)
|
||||
|
||||
func init() {
|
||||
pflag.VarP(&bwLimit, "bwlimit", "", "Bandwidth limit in kBytes/s, or use suffix k|M|G")
|
||||
}
|
||||
|
||||
// Turn SizeSuffix into a string
|
||||
func (x SizeSuffix) String() string {
|
||||
scaled := float64(0)
|
||||
suffix := ""
|
||||
switch {
|
||||
case x == 0:
|
||||
return "0"
|
||||
case x < 1024*1024:
|
||||
scaled = float64(x) / 1024
|
||||
suffix = "k"
|
||||
case x < 1024*1024*1024:
|
||||
scaled = float64(x) / 1024 / 1024
|
||||
suffix = "M"
|
||||
default:
|
||||
scaled = float64(x) / 1024 / 1024 / 1024
|
||||
suffix = "G"
|
||||
}
|
||||
if math.Floor(scaled) == scaled {
|
||||
return fmt.Sprintf("%.0f%s", scaled, suffix)
|
||||
}
|
||||
return fmt.Sprintf("%.3f%s", scaled, suffix)
|
||||
}
|
||||
|
||||
// Set a SizeSuffix
|
||||
func (x *SizeSuffix) Set(s string) error {
|
||||
if len(s) == 0 {
|
||||
return fmt.Errorf("Empty string")
|
||||
}
|
||||
suffix := s[len(s)-1]
|
||||
suffixLen := 1
|
||||
var multiplier float64
|
||||
switch suffix {
|
||||
case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '.':
|
||||
suffixLen = 0
|
||||
multiplier = 1 << 10
|
||||
case 'k', 'K':
|
||||
multiplier = 1 << 10
|
||||
case 'm', 'M':
|
||||
multiplier = 1 << 20
|
||||
case 'g', 'G':
|
||||
multiplier = 1 << 30
|
||||
default:
|
||||
return fmt.Errorf("Bad suffix %q", suffix)
|
||||
}
|
||||
s = s[:len(s)-suffixLen]
|
||||
value, err := strconv.ParseFloat(s, 64)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if value < 0 {
|
||||
return fmt.Errorf("Size can't be negative %q", s)
|
||||
}
|
||||
value *= multiplier
|
||||
*x = SizeSuffix(value)
|
||||
return nil
|
||||
}
|
||||
|
||||
// Type of the value
|
||||
func (x *SizeSuffix) Type() string {
|
||||
return "int64"
|
||||
}
|
||||
|
||||
// Check it satisfies the interface
|
||||
var _ pflag.Value = (*SizeSuffix)(nil)
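// Illustrative sketch, not part of the diff above: SizeSuffix scales by
// powers of 1024, a bare number means KiB, and String prints whole values
// without decimals (the example function name is made up).
func exampleSizeSuffix() {
	var bw SizeSuffix
	_ = bw.Set("10M")        // 10 * 1024 * 1024 bytes
	fmt.Println(bw.String()) // prints "10M"
	_ = bw.Set("1.5G")       // 1.5 * 1024 * 1024 * 1024 bytes
	fmt.Println(bw.String()) // prints "1.500G"
	_ = bw.Set("100")        // bare numbers are KiB: 100 * 1024 bytes
	fmt.Println(int64(bw))   // prints 102400
}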
|
||||
|
||||
// Obscure a config value
|
||||
func Obscure(x string) string {
|
||||
y := []byte(x)
|
||||
for i := range y {
|
||||
y[i] ^= byte(i) ^ 0xAA
|
||||
}
|
||||
return base64.StdEncoding.EncodeToString(y)
|
||||
}
|
||||
|
||||
// Reveal a config value
|
||||
func Reveal(y string) string {
|
||||
x, err := base64.StdEncoding.DecodeString(y)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to reveal %q: %v", y, err)
|
||||
}
|
||||
for i := range x {
|
||||
x[i] ^= byte(i) ^ 0xAA
|
||||
}
|
||||
return string(x)
|
||||
}
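// Illustrative sketch, not part of the diff above: Obscure/Reveal are a
// reversible XOR-plus-base64 encoding rather than real encryption, so they
// only hide credentials from casual inspection of the config file.
func exampleObscureRoundTrip() {
	obscured := Obscure("potato") // "2sTcyNrA", as in TestReveal below
	fmt.Println(Reveal(obscured)) // prints "potato"
}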
|
||||
|
||||
// ConfigInfo is filesystem config options
|
||||
// Filesystem config options
|
||||
type ConfigInfo struct {
|
||||
Verbose bool
|
||||
Quiet bool
|
||||
DryRun bool
|
||||
CheckSum bool
|
||||
SizeOnly bool
|
||||
IgnoreTimes bool
|
||||
IgnoreExisting bool
|
||||
ModifyWindow time.Duration
|
||||
Checkers int
|
||||
Transfers int
|
||||
ConnectTimeout time.Duration // Connect timeout
|
||||
Timeout time.Duration // Data channel timeout
|
||||
DumpHeaders bool
|
||||
DumpBodies bool
|
||||
Filter *Filter
|
||||
InsecureSkipVerify bool // Skip server certificate verification
|
||||
DeleteBefore bool // Delete before checking
|
||||
DeleteDuring bool // Delete during checking/transfer
|
||||
DeleteAfter bool // Delete after successful transfer.
|
||||
LowLevelRetries int
|
||||
UpdateOlder bool // Skip files that are newer on the destination
|
||||
NoGzip bool // Disable compression
|
||||
DedupeMode DeduplicateMode
|
||||
}
|
||||
|
||||
// Transport returns an http.RoundTripper with the correct timeouts
|
||||
func (ci *ConfigInfo) Transport() http.RoundTripper {
|
||||
t := &httpclient.Transport{
|
||||
Proxy: http.ProxyFromEnvironment,
|
||||
MaxIdleConnsPerHost: ci.Checkers + ci.Transfers + 1,
|
||||
|
||||
// ConnectTimeout, if non-zero, is the maximum amount of time a dial will wait for
|
||||
// a connect to complete.
|
||||
ConnectTimeout: ci.ConnectTimeout,
|
||||
|
||||
// ResponseHeaderTimeout, if non-zero, specifies the amount of
|
||||
// time to wait for a server's response headers after fully
|
||||
// writing the request (including its body, if any). This
|
||||
// time does not include the time to read the response body.
|
||||
ResponseHeaderTimeout: ci.Timeout,
|
||||
|
||||
// RequestTimeout, if non-zero, specifies the amount of time for the entire
|
||||
// request to complete (including all of the above timeouts + entire response body).
|
||||
// This should never be less than the sum total of the above two timeouts.
|
||||
//RequestTimeout: NOT SET,
|
||||
|
||||
// ReadWriteTimeout, if non-zero, will set a deadline for every Read and
|
||||
// Write operation on the request connection.
|
||||
ReadWriteTimeout: ci.Timeout,
|
||||
|
||||
// InsecureSkipVerify controls whether a client verifies the
|
||||
// server's certificate chain and host name.
|
||||
// If InsecureSkipVerify is true, TLS accepts any certificate
|
||||
// presented by the server and any host name in that certificate.
|
||||
// In this mode, TLS is susceptible to man-in-the-middle attacks.
|
||||
// This should be used only for testing.
|
||||
TLSClientConfig: &tls.Config{InsecureSkipVerify: ci.InsecureSkipVerify},
|
||||
|
||||
// DisableCompression, if true, prevents the Transport from
|
||||
// requesting compression with an "Accept-Encoding: gzip"
|
||||
// request header when the Request contains no existing
|
||||
// Accept-Encoding value. If the Transport requests gzip on
|
||||
// its own and gets a gzipped response, it's transparently
|
||||
// decoded in the Response.Body. However, if the user
|
||||
// explicitly requested gzip it is not automatically
|
||||
// uncompressed.
|
||||
DisableCompression: *noGzip,
|
||||
}
|
||||
if ci.DumpHeaders || ci.DumpBodies {
|
||||
return NewLoggedTransport(t, ci.DumpBodies)
|
||||
}
|
||||
return t
|
||||
}
|
||||
|
||||
// Client returns an http.Client with the correct timeouts
|
||||
func (ci *ConfigInfo) Client() *http.Client {
|
||||
return &http.Client{
|
||||
Transport: ci.Transport(),
|
||||
}
|
||||
Verbose bool
|
||||
Quiet bool
|
||||
DryRun bool
|
||||
ModifyWindow time.Duration
|
||||
Checkers int
|
||||
Transfers int
|
||||
}
|
||||
|
||||
// Find the config directory
|
||||
func configHome() string {
|
||||
// Find users home directory
|
||||
usr, err := user.Current()
|
||||
if err == nil {
|
||||
return usr.HomeDir
|
||||
if err != nil {
|
||||
log.Printf("Couldn't find home directory: %v", err)
|
||||
return ""
|
||||
}
|
||||
// Fall back to reading $HOME - work around user.Current() not
|
||||
// working for cross compiled binaries on OSX.
|
||||
// https://github.com/golang/go/issues/6376
|
||||
home := os.Getenv("HOME")
|
||||
if home != "" {
|
||||
return home
|
||||
}
|
||||
log.Printf("Couldn't find home directory or read HOME environment variable.")
|
||||
log.Printf("Defaulting to storing config in current directory.")
|
||||
log.Printf("Use -config flag to workaround.")
|
||||
log.Printf("Error was: %v", err)
|
||||
return ""
|
||||
return usr.HomeDir
|
||||
}
|
||||
|
||||
// LoadConfig loads the config file
|
||||
// Loads the config file
|
||||
func LoadConfig() {
|
||||
// Read some flags if set
|
||||
//
|
||||
@@ -297,256 +74,34 @@ func LoadConfig() {
|
||||
Config.Checkers = *checkers
|
||||
Config.Transfers = *transfers
|
||||
Config.DryRun = *dryRun
|
||||
Config.Timeout = *timeout
|
||||
Config.ConnectTimeout = *connectTimeout
|
||||
Config.CheckSum = *checkSum
|
||||
Config.SizeOnly = *sizeOnly
|
||||
Config.IgnoreTimes = *ignoreTimes
|
||||
Config.IgnoreExisting = *ignoreExisting
|
||||
Config.DumpHeaders = *dumpHeaders
|
||||
Config.DumpBodies = *dumpBodies
|
||||
Config.InsecureSkipVerify = *skipVerify
|
||||
Config.LowLevelRetries = *lowLevelRetries
|
||||
Config.UpdateOlder = *updateOlder
|
||||
Config.NoGzip = *noGzip
|
||||
|
||||
ConfigPath = *configFile
|
||||
|
||||
Config.DeleteBefore = *deleteBefore
|
||||
Config.DeleteDuring = *deleteDuring
|
||||
Config.DeleteAfter = *deleteAfter
|
||||
|
||||
switch strings.ToLower(*dedupeMode) {
|
||||
case "interactive":
|
||||
Config.DedupeMode = DeduplicateInteractive
|
||||
case "skip":
|
||||
Config.DedupeMode = DeduplicateSkip
|
||||
case "first":
|
||||
Config.DedupeMode = DeduplicateFirst
|
||||
case "newest":
|
||||
Config.DedupeMode = DeduplicateNewest
|
||||
case "oldest":
|
||||
Config.DedupeMode = DeduplicateOldest
|
||||
case "rename":
|
||||
Config.DedupeMode = DeduplicateRename
|
||||
default:
|
||||
log.Fatalf(`Unknown mode for --dedupe-mode %q.`, *dedupeMode)
|
||||
}
|
||||
|
||||
switch {
|
||||
case *deleteBefore && (*deleteDuring || *deleteAfter),
|
||||
*deleteDuring && *deleteAfter:
|
||||
log.Fatalf(`Only one of --delete-before, --delete-during or --delete-after can be used.`)
|
||||
|
||||
// If none are specified, use "during".
|
||||
case !*deleteBefore && !*deleteDuring && !*deleteAfter:
|
||||
Config.DeleteDuring = true
|
||||
}
|
||||
|
||||
// Load configuration file.
|
||||
var err error
|
||||
ConfigFile, err = loadConfigFile()
|
||||
ConfigFile, err = goconfig.LoadConfigFile(ConfigPath)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to config file \"%s\": %v", ConfigPath, err)
|
||||
}
|
||||
|
||||
// Load filters
|
||||
Config.Filter, err = NewFilter()
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to load filters: %v", err)
|
||||
}
|
||||
|
||||
// Start the token bucket limiter
|
||||
startTokenBucket()
|
||||
}
|
||||
|
||||
// loadConfigFile will load a config file, and
|
||||
// automatically decrypt it.
|
||||
func loadConfigFile() (*goconfig.ConfigFile, error) {
|
||||
b, err := ioutil.ReadFile(ConfigPath)
|
||||
if err != nil {
|
||||
log.Printf("Failed to load config file \"%v\" - using defaults: %v", ConfigPath, err)
|
||||
return goconfig.LoadFromReader(&bytes.Buffer{})
|
||||
}
|
||||
|
||||
// Find first non-empty line
|
||||
r := bufio.NewReader(bytes.NewBuffer(b))
|
||||
for {
|
||||
line, _, err := r.ReadLine()
|
||||
log.Printf("Failed to load config file %v - using defaults", ConfigPath)
|
||||
ConfigFile, err = goconfig.LoadConfigFile(os.DevNull)
|
||||
if err != nil {
|
||||
if err == io.EOF {
|
||||
return goconfig.LoadFromReader(bytes.NewBuffer(b))
|
||||
}
|
||||
return nil, err
|
||||
log.Fatalf("Failed to read null config file: %v", err)
|
||||
}
|
||||
l := strings.TrimSpace(string(line))
|
||||
if len(l) == 0 || strings.HasPrefix(l, ";") || strings.HasPrefix(l, "#") {
|
||||
continue
|
||||
}
|
||||
// First non-empty or non-comment must be ENCRYPT_V0
|
||||
if l == "RCLONE_ENCRYPT_V0:" {
|
||||
break
|
||||
}
|
||||
if strings.HasPrefix(l, "RCLONE_ENCRYPT_V") {
|
||||
return nil, fmt.Errorf("Unsupported configuration encryption. Update rclone for support.")
|
||||
}
|
||||
return goconfig.LoadFromReader(bytes.NewBuffer(b))
|
||||
}
|
||||
|
||||
// Encrypted content is base64 encoded.
|
||||
dec := base64.NewDecoder(base64.StdEncoding, r)
|
||||
box, err := ioutil.ReadAll(dec)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Failed to load base64 encoded data: %v", err)
|
||||
}
|
||||
if len(box) < 24+secretbox.Overhead {
|
||||
return nil, fmt.Errorf("Configuration data too short")
|
||||
}
|
||||
envpw := os.Getenv("RCLONE_CONFIG_PASS")
|
||||
|
||||
var out []byte
|
||||
for {
|
||||
if len(configKey) == 0 && envpw != "" {
|
||||
err := setPassword(envpw)
|
||||
if err != nil {
|
||||
fmt.Println("Using RCLONE_CONFIG_PASS returned:", err)
|
||||
envpw = ""
|
||||
} else {
|
||||
Debug(nil, "Using RCLONE_CONFIG_PASS password.")
|
||||
}
|
||||
}
|
||||
if len(configKey) == 0 {
|
||||
if !*AskPassword {
|
||||
return nil, fmt.Errorf("Unable to decrypt configuration and not allowed to ask for password. Set RCLONE_CONFIG_PASS to your configuration password.")
|
||||
}
|
||||
getPassword("Enter configuration password:")
|
||||
}
|
||||
|
||||
// Nonce is first 24 bytes of the ciphertext
|
||||
var nonce [24]byte
|
||||
copy(nonce[:], box[:24])
|
||||
var key [32]byte
|
||||
copy(key[:], configKey[:32])
|
||||
|
||||
// Attempt to decrypt
|
||||
var ok bool
|
||||
out, ok = secretbox.Open(nil, box[24:], &nonce, &key)
|
||||
if ok {
|
||||
break
|
||||
}
|
||||
|
||||
// Retry
|
||||
log.Println("Couldn't decrypt configuration, most likely wrong password.")
|
||||
configKey = nil
|
||||
envpw = ""
|
||||
}
|
||||
return goconfig.LoadFromReader(bytes.NewBuffer(out))
|
||||
}
|
||||
|
||||
// getPassword will query the user for a password the
|
||||
// first time it is required.
|
||||
func getPassword(q string) {
|
||||
if len(configKey) != 0 {
|
||||
return
|
||||
}
|
||||
for {
|
||||
fmt.Println(q)
|
||||
fmt.Print("password>")
|
||||
err := setPassword(ReadPassword())
|
||||
if err == nil {
|
||||
return
|
||||
}
|
||||
fmt.Println("Error:", err)
|
||||
}
|
||||
}
|
||||
|
||||
// setPassword will set the configKey to the hash of
|
||||
// the password. If the length of the password is
|
||||
// zero after trimming+normalization, an error is returned.
|
||||
func setPassword(password string) error {
|
||||
if !utf8.ValidString(password) {
|
||||
return fmt.Errorf("Password contains invalid utf8 characters")
|
||||
}
|
||||
// Remove leading+trailing whitespace
|
||||
password = strings.TrimSpace(password)
|
||||
|
||||
// Normalize to reduce weird variations.
|
||||
password = norm.NFKC.String(password)
|
||||
if len(password) == 0 {
|
||||
return fmt.Errorf("No characters in password")
|
||||
}
|
||||
// Create SHA256 hash of the password
|
||||
sha := sha256.New()
|
||||
_, err := sha.Write([]byte("[" + password + "][rclone-config]"))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
configKey = sha.Sum(nil)
|
||||
return nil
|
||||
}
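// Illustrative sketch, not part of the diff above: setPassword trims
// whitespace, applies NFKC normalisation and then derives the 32 byte
// secretbox key as SHA-256 of "[" + password + "][rclone-config]", which is
// why "  abcdef \t" and "abcdef" hash to the same key in TestPassword below
// (the helper name here is made up).
func exampleConfigKey(password string) []byte {
	password = strings.TrimSpace(password)
	password = norm.NFKC.String(password)
	sum := sha256.Sum256([]byte("[" + password + "][rclone-config]"))
	return sum[:] // equivalent to the configKey produced above
}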
|
||||
|
||||
// SaveConfig saves configuration file.
|
||||
// if configKey has been set, the file will be encrypted.
|
||||
// Save configuration file.
|
||||
func SaveConfig() {
|
||||
if len(configKey) == 0 {
|
||||
err := goconfig.SaveConfigFile(ConfigFile, ConfigPath)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to save config file: %v", err)
|
||||
}
|
||||
err = os.Chmod(ConfigPath, 0600)
|
||||
if err != nil {
|
||||
log.Printf("Failed to set permissions on config file: %v", err)
|
||||
}
|
||||
return
|
||||
}
|
||||
var buf bytes.Buffer
|
||||
err := goconfig.SaveConfigData(ConfigFile, &buf)
|
||||
err := goconfig.SaveConfigFile(ConfigFile, ConfigPath)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to save config file: %v", err)
|
||||
}
|
||||
|
||||
f, err := os.Create(ConfigPath)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to save config file: %v", err)
|
||||
}
|
||||
|
||||
fmt.Fprintln(f, "# Encrypted rclone configuration File")
|
||||
fmt.Fprintln(f, "")
|
||||
fmt.Fprintln(f, "RCLONE_ENCRYPT_V0:")
|
||||
|
||||
// Generate new nonce and write it to the start of the ciphertext
|
||||
var nonce [24]byte
|
||||
n, _ := rand.Read(nonce[:])
|
||||
if n != 24 {
|
||||
log.Fatalf("nonce short read: %d", n)
|
||||
}
|
||||
enc := base64.NewEncoder(base64.StdEncoding, f)
|
||||
_, err = enc.Write(nonce[:])
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to write config file: %v", err)
|
||||
}
|
||||
|
||||
var key [32]byte
|
||||
copy(key[:], configKey[:32])
|
||||
|
||||
b := secretbox.Seal(nil, buf.Bytes(), &nonce, &key)
|
||||
_, err = enc.Write(b)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to write config file: %v", err)
|
||||
}
|
||||
_ = enc.Close()
|
||||
err = f.Close()
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to close config file: %v", err)
|
||||
}
|
||||
|
||||
err = os.Chmod(ConfigPath, 0600)
|
||||
if err != nil {
|
||||
log.Printf("Failed to set permissions on config file: %v", err)
|
||||
}
|
||||
}
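// Illustrative sketch, not part of the diff above: SaveConfig writes the
// "RCLONE_ENCRYPT_V0:" header and then base64 of a 24 byte nonce followed by
// the secretbox ciphertext, so the matching standalone decryption step (given
// the derived key) is roughly the reverse of the code above.
func exampleDecryptConfig(b64 string, key [32]byte) ([]byte, error) {
	raw, err := base64.StdEncoding.DecodeString(b64)
	if err != nil {
		return nil, err
	}
	if len(raw) < 24+secretbox.Overhead {
		return nil, fmt.Errorf("Configuration data too short")
	}
	var nonce [24]byte
	copy(nonce[:], raw[:24])
	out, ok := secretbox.Open(nil, raw[24:], &nonce, &key)
	if !ok {
		return nil, fmt.Errorf("Couldn't decrypt configuration, most likely wrong password")
	}
	return out, nil
}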
|
||||
|
||||
// ShowRemotes shows an overview of the config file
|
||||
// Show an overview of the config file
|
||||
func ShowRemotes() {
|
||||
remotes := ConfigFile.GetSectionList()
|
||||
if len(remotes) == 0 {
|
||||
@@ -567,7 +122,7 @@ func ChooseRemote() string {
|
||||
return Choose("remote", remotes, nil, false)
|
||||
}
|
||||
|
||||
// ReadLine reads some input
|
||||
// Read some input
|
||||
func ReadLine() string {
|
||||
buf := bufio.NewReader(os.Stdin)
|
||||
line, err := buf.ReadString('\n')
|
||||
@@ -599,7 +154,7 @@ func Command(commands []string) byte {
|
||||
}
|
||||
}
|
||||
|
||||
// Confirm asks the user for Yes or No and returns true or false
|
||||
// Asks the user for Yes or No and returns true or false
|
||||
func Confirm() bool {
|
||||
return Command([]string{"yYes", "nNo"}) == 'y'
|
||||
}
|
||||
@@ -612,34 +167,13 @@ func Choose(what string, defaults, help []string, newOk bool) string {
|
||||
}
|
||||
fmt.Println()
|
||||
for i, text := range defaults {
|
||||
var lines []string
|
||||
if help != nil {
|
||||
parts := strings.Split(help[i], "\n")
|
||||
lines = append(lines, parts...)
|
||||
}
|
||||
lines = append(lines, fmt.Sprintf("%q", text))
|
||||
pos := i + 1
|
||||
if len(lines) == 1 {
|
||||
fmt.Printf("%2d > %s\n", pos, text)
|
||||
} else {
|
||||
mid := (len(lines) - 1) / 2
|
||||
for i, line := range lines {
|
||||
var sep rune
|
||||
switch i {
|
||||
case 0:
|
||||
sep = '/'
|
||||
case len(lines) - 1:
|
||||
sep = '\\'
|
||||
default:
|
||||
sep = '|'
|
||||
}
|
||||
number := " "
|
||||
if i == mid {
|
||||
number = fmt.Sprintf("%2d", pos)
|
||||
}
|
||||
fmt.Printf("%s %c %s\n", number, sep, line)
|
||||
for _, part := range parts {
|
||||
fmt.Printf(" * %s\n", part)
|
||||
}
|
||||
}
|
||||
fmt.Printf("%2d) %s\n", i+1, text)
|
||||
}
|
||||
for {
|
||||
fmt.Printf("%s> ", what)
|
||||
@@ -657,26 +191,7 @@ func Choose(what string, defaults, help []string, newOk bool) string {
|
||||
}
|
||||
}
|
||||
|
||||
// ChooseNumber asks the user to enter a number between min and max
|
||||
// inclusive prompting them with what.
|
||||
func ChooseNumber(what string, min, max int) int {
|
||||
for {
|
||||
fmt.Printf("%s> ", what)
|
||||
result := ReadLine()
|
||||
i, err := strconv.Atoi(result)
|
||||
if err != nil {
|
||||
fmt.Printf("Bad number: %v\n", err)
|
||||
continue
|
||||
}
|
||||
if i < min || i > max {
|
||||
fmt.Printf("Out of range - %d to %d inclusive\n", min, max)
|
||||
continue
|
||||
}
|
||||
return i
|
||||
}
|
||||
}
|
||||
|
||||
// ShowRemote shows the contents of the remote
|
||||
// Show the contents of the remote
|
||||
func ShowRemote(name string) {
|
||||
fmt.Printf("--------------------\n")
|
||||
fmt.Printf("[%s]\n", name)
|
||||
@@ -686,7 +201,7 @@ func ShowRemote(name string) {
|
||||
fmt.Printf("--------------------\n")
|
||||
}
|
||||
|
||||
// OkRemote prints the contents of the remote and ask if it is OK
|
||||
// Print the contents of the remote and ask if it is OK
|
||||
func OkRemote(name string) bool {
|
||||
ShowRemote(name)
|
||||
switch i := Command([]string{"yYes this is OK", "eEdit this remote", "dDelete this remote"}); i {
|
||||
@@ -703,7 +218,7 @@ func OkRemote(name string) bool {
|
||||
return false
|
||||
}
|
||||
|
||||
// RemoteConfig runs the config helper for the remote if needed
|
||||
// Runs the config helper for the remote if needed
|
||||
func RemoteConfig(name string) {
|
||||
fmt.Printf("Remote config\n")
|
||||
fsName := ConfigFile.MustValue(name, "type")
|
||||
@@ -719,7 +234,7 @@ func RemoteConfig(name string) {
|
||||
}
|
||||
}
|
||||
|
||||
// ChooseOption asks the user to choose an option
|
||||
// Choose an option
|
||||
func ChooseOption(o *Option) string {
|
||||
fmt.Println(o.Help)
|
||||
if len(o.Examples) > 0 {
|
||||
@@ -735,26 +250,14 @@ func ChooseOption(o *Option) string {
|
||||
return ReadLine()
|
||||
}
|
||||
|
||||
// fsOption returns an Option describing the possible remotes
|
||||
func fsOption() *Option {
|
||||
o := &Option{
|
||||
Name: "Storage",
|
||||
Help: "Type of storage to configure.",
|
||||
}
|
||||
for _, item := range fsRegistry {
|
||||
example := OptionExample{
|
||||
Value: item.Name,
|
||||
Help: item.Description,
|
||||
}
|
||||
o.Examples = append(o.Examples, example)
|
||||
}
|
||||
o.Examples.Sort()
|
||||
return o
|
||||
}
|
||||
|
||||
// NewRemote make a new remote from its name
|
||||
// Make a new remote
|
||||
func NewRemote(name string) {
|
||||
newType := ChooseOption(fsOption())
|
||||
fmt.Printf("What type of source is it?\n")
|
||||
types := []string{}
|
||||
for _, item := range fsRegistry {
|
||||
types = append(types, item.Name)
|
||||
}
|
||||
newType := Choose("type", types, nil, false)
|
||||
ConfigFile.SetValue(name, "type", newType)
|
||||
fs, err := Find(newType)
|
||||
if err != nil {
|
||||
@@ -771,7 +274,7 @@ func NewRemote(name string) {
|
||||
EditRemote(name)
|
||||
}
|
||||
|
||||
// EditRemote gets the user to edit a remote
|
||||
// Edit a remote
|
||||
func EditRemote(name string) {
|
||||
ShowRemote(name)
|
||||
fmt.Printf("Edit remote\n")
|
||||
@@ -793,148 +296,38 @@ func EditRemote(name string) {
|
||||
SaveConfig()
|
||||
}
|
||||
|
||||
// DeleteRemote gets the user to delete a remote
|
||||
// Delete a remote
|
||||
func DeleteRemote(name string) {
|
||||
ConfigFile.DeleteSection(name)
|
||||
SaveConfig()
|
||||
}
|
||||
|
||||
// EditConfig edits the config file interactively
|
||||
// Edit the config file interactively
|
||||
func EditConfig() {
|
||||
for {
|
||||
haveRemotes := len(ConfigFile.GetSectionList()) != 0
|
||||
what := []string{"eEdit existing remote", "nNew remote", "dDelete remote", "sSet configuration password", "qQuit config"}
|
||||
what := []string{"eEdit existing remote", "nNew remote", "dDelete remote", "qQuit config"}
|
||||
if haveRemotes {
|
||||
fmt.Printf("Current remotes:\n\n")
|
||||
ShowRemotes()
|
||||
fmt.Printf("\n")
|
||||
} else {
|
||||
fmt.Printf("No remotes found - make a new one\n")
|
||||
what = append(what[1:2], what[3:]...)
|
||||
what = append(what[1:2], what[3])
|
||||
}
|
||||
switch i := Command(what); i {
|
||||
case 'e':
|
||||
name := ChooseRemote()
|
||||
EditRemote(name)
|
||||
case 'n':
|
||||
nameLoop:
|
||||
for {
|
||||
fmt.Printf("name> ")
|
||||
name := ReadLine()
|
||||
parts := matcher.FindStringSubmatch(name + ":")
|
||||
switch {
|
||||
case name == "":
|
||||
fmt.Printf("Can't use empty name\n")
|
||||
case isDriveLetter(name):
|
||||
fmt.Printf("Can't use %q as it can be confused a drive letter\n", name)
|
||||
case len(parts) != 3 || parts[2] != "":
|
||||
fmt.Printf("Can't use %q as it has invalid characters in it %v\n", name, parts)
|
||||
default:
|
||||
NewRemote(name)
|
||||
break nameLoop
|
||||
}
|
||||
}
|
||||
fmt.Printf("name> ")
|
||||
name := ReadLine()
|
||||
NewRemote(name)
|
||||
case 'd':
|
||||
name := ChooseRemote()
|
||||
DeleteRemote(name)
|
||||
case 's':
|
||||
SetPassword()
|
||||
case 'q':
|
||||
return
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// SetPassword will allow the user to modify the current
|
||||
// configuration encryption settings.
|
||||
func SetPassword() {
|
||||
for {
|
||||
if len(configKey) > 0 {
|
||||
fmt.Println("Your configuration is encrypted.")
|
||||
what := []string{"cChange Password", "uUnencrypt configuration", "qQuit to main menu"}
|
||||
switch i := Command(what); i {
|
||||
case 'c':
|
||||
changePassword()
|
||||
SaveConfig()
|
||||
fmt.Println("Password changed")
|
||||
continue
|
||||
case 'u':
|
||||
configKey = nil
|
||||
SaveConfig()
|
||||
continue
|
||||
case 'q':
|
||||
return
|
||||
}
|
||||
|
||||
} else {
|
||||
fmt.Println("Your configuration is not encrypted.")
|
||||
fmt.Println("If you add a password, you will protect your login information to cloud services.")
|
||||
what := []string{"aAdd Password", "qQuit to main menu"}
|
||||
switch i := Command(what); i {
|
||||
case 'a':
|
||||
changePassword()
|
||||
SaveConfig()
|
||||
fmt.Println("Password set")
|
||||
continue
|
||||
case 'q':
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// changePassword will query the user twice
|
||||
// for a password. If the same password is entered
|
||||
// twice the key is updated.
|
||||
func changePassword() {
|
||||
for {
|
||||
configKey = nil
|
||||
getPassword("Enter NEW configuration password:")
|
||||
a := configKey
|
||||
// re-enter password
|
||||
configKey = nil
|
||||
getPassword("Confirm NEW password:")
|
||||
b := configKey
|
||||
if bytes.Equal(a, b) {
|
||||
return
|
||||
}
|
||||
fmt.Println("Passwords does not match!")
|
||||
}
|
||||
}
|
||||
|
||||
// Authorize is for remote authorization of headless machines.
|
||||
//
|
||||
// It expects 1 or 3 arguments
|
||||
//
|
||||
// rclone authorize "fs name"
|
||||
// rclone authorize "fs name" "client id" "client secret"
|
||||
func Authorize(args []string) {
|
||||
switch len(args) {
|
||||
case 1, 3:
|
||||
default:
|
||||
log.Fatalf("Invalid number of arguments: %d", len(args))
|
||||
}
|
||||
newType := args[0]
|
||||
fs, err := Find(newType)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to find fs: %v", err)
|
||||
}
|
||||
|
||||
if fs.Config == nil {
|
||||
log.Fatalf("Can't authorize fs %q", newType)
|
||||
}
|
||||
// Name used for temporary fs
|
||||
name := "**temp-fs**"
|
||||
|
||||
// Make sure we delete it
|
||||
defer DeleteRemote(name)
|
||||
|
||||
// Indicate that we want fully automatic configuration.
|
||||
ConfigFile.SetValue(name, ConfigAutomatic, "yes")
|
||||
if len(args) == 3 {
|
||||
ConfigFile.SetValue(name, ConfigClientID, args[1])
|
||||
ConfigFile.SetValue(name, ConfigClientSecret, args[2])
|
||||
}
|
||||
fs.Config(name)
|
||||
}
|
||||
|
||||
@@ -1,26 +0,0 @@
|
||||
// ReadPassword for OSes which are supported by golang.org/x/crypto/ssh/terminal
|
||||
// See https://github.com/golang/go/issues/14441 - plan9
|
||||
// https://github.com/golang/go/issues/13085 - solaris
|
||||
|
||||
// +build !solaris,!plan9
|
||||
|
||||
package fs
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"log"
|
||||
"os"
|
||||
"strings"
|
||||
|
||||
"golang.org/x/crypto/ssh/terminal"
|
||||
)
|
||||
|
||||
// ReadPassword reads a password without echoing it to the terminal.
|
||||
func ReadPassword() string {
|
||||
line, err := terminal.ReadPassword(int(os.Stdin.Fd()))
|
||||
fmt.Println("")
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to read password: %v", err)
|
||||
}
|
||||
return strings.TrimSpace(string(line))
|
||||
}
|
||||
@@ -1,12 +0,0 @@
|
||||
// ReadPassword for OSes which are not supported by golang.org/x/crypto/ssh/terminal
|
||||
// See https://github.com/golang/go/issues/14441 - plan9
|
||||
// https://github.com/golang/go/issues/13085 - solaris
|
||||
|
||||
// +build solaris plan9
|
||||
|
||||
package fs
|
||||
|
||||
// ReadPassword reads a password, echoing it to the terminal.
|
||||
func ReadPassword() string {
|
||||
return ReadLine()
|
||||
}
|
||||
@@ -1,226 +0,0 @@
|
||||
package fs
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"reflect"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestSizeSuffixString(t *testing.T) {
|
||||
for _, test := range []struct {
|
||||
in float64
|
||||
want string
|
||||
}{
|
||||
{0, "0"},
|
||||
{102, "0.100k"},
|
||||
{1024, "1k"},
|
||||
{1024 * 1024, "1M"},
|
||||
{1024 * 1024 * 1024, "1G"},
|
||||
{10 * 1024 * 1024 * 1024, "10G"},
|
||||
{10.1 * 1024 * 1024 * 1024, "10.100G"},
|
||||
} {
|
||||
ss := SizeSuffix(test.in)
|
||||
got := ss.String()
|
||||
if test.want != got {
|
||||
t.Errorf("Want %v got %v", test.want, got)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestSizeSuffixSet(t *testing.T) {
|
||||
for i, test := range []struct {
|
||||
in string
|
||||
want int64
|
||||
err bool
|
||||
}{
|
||||
{"0", 0, false},
|
||||
{"0.1k", 102, false},
|
||||
{"0.1", 102, false},
|
||||
{"1K", 1024, false},
|
||||
{"1", 1024, false},
|
||||
{"2.5", 1024 * 2.5, false},
|
||||
{"1M", 1024 * 1024, false},
|
||||
{"1.g", 1024 * 1024 * 1024, false},
|
||||
{"10G", 10 * 1024 * 1024 * 1024, false},
|
||||
{"", 0, true},
|
||||
{"1p", 0, true},
|
||||
{"1.p", 0, true},
|
||||
{"1p", 0, true},
|
||||
{"-1K", 0, true},
|
||||
} {
|
||||
ss := SizeSuffix(0)
|
||||
err := ss.Set(test.in)
|
||||
if (err != nil) != test.err {
|
||||
t.Errorf("%d: Expecting error %v but got error %v", i, test.err, err)
|
||||
}
|
||||
got := int64(ss)
|
||||
if test.want != got {
|
||||
t.Errorf("%d: Want %v got %v", i, test.want, got)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestReveal(t *testing.T) {
|
||||
for _, test := range []struct {
|
||||
in string
|
||||
want string
|
||||
}{
|
||||
{"", ""},
|
||||
{"2sTcyNrA", "potato"},
|
||||
} {
|
||||
got := Reveal(test.in)
|
||||
if got != test.want {
|
||||
t.Errorf("%q: want %q got %q", test.in, test.want, got)
|
||||
}
|
||||
if Obscure(got) != test.in {
|
||||
t.Errorf("%q: wasn't bidirectional", test.in)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestConfigLoad(t *testing.T) {
|
||||
oldConfigPath := ConfigPath
|
||||
ConfigPath = "./testdata/plain.conf"
|
||||
defer func() {
|
||||
ConfigPath = oldConfigPath
|
||||
}()
|
||||
configKey = nil // reset password
|
||||
c, err := loadConfigFile()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
sections := c.GetSectionList()
|
||||
var expect = []string{"RCLONE_ENCRYPT_V0", "nounc", "unc"}
|
||||
if !reflect.DeepEqual(sections, expect) {
|
||||
t.Fatalf("%v != %v", sections, expect)
|
||||
}
|
||||
|
||||
keys := c.GetKeyList("nounc")
|
||||
expect = []string{"type", "nounc"}
|
||||
if !reflect.DeepEqual(keys, expect) {
|
||||
t.Fatalf("%v != %v", keys, expect)
|
||||
}
|
||||
}
|
||||
|
||||
func TestConfigLoadEncrypted(t *testing.T) {
|
||||
var err error
|
||||
oldConfigPath := ConfigPath
|
||||
ConfigPath = "./testdata/encrypted.conf"
|
||||
defer func() {
|
||||
ConfigPath = oldConfigPath
|
||||
configKey = nil // reset password
|
||||
}()
|
||||
|
||||
// Set correct password
|
||||
err = setPassword("asdf")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
c, err := loadConfigFile()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
sections := c.GetSectionList()
|
||||
var expect = []string{"nounc", "unc"}
|
||||
if !reflect.DeepEqual(sections, expect) {
|
||||
t.Fatalf("%v != %v", sections, expect)
|
||||
}
|
||||
|
||||
keys := c.GetKeyList("nounc")
|
||||
expect = []string{"type", "nounc"}
|
||||
if !reflect.DeepEqual(keys, expect) {
|
||||
t.Fatalf("%v != %v", keys, expect)
|
||||
}
|
||||
}
|
||||
|
||||
func TestConfigLoadEncryptedFailures(t *testing.T) {
|
||||
var err error
|
||||
|
||||
// This file should be too short to be decoded.
|
||||
oldConfigPath := ConfigPath
|
||||
ConfigPath = "./testdata/enc-short.conf"
|
||||
defer func() { ConfigPath = oldConfigPath }()
|
||||
_, err = loadConfigFile()
|
||||
if err == nil {
|
||||
t.Fatal("expected error")
|
||||
}
|
||||
t.Log("Correctly got:", err)
|
||||
|
||||
// This file contains invalid base64 characters.
|
||||
ConfigPath = "./testdata/enc-invalid.conf"
|
||||
_, err = loadConfigFile()
|
||||
if err == nil {
|
||||
t.Fatal("expected error")
|
||||
}
|
||||
t.Log("Correctly got:", err)
|
||||
|
||||
// This file has an unsupported configuration encryption version.
|
||||
ConfigPath = "./testdata/enc-too-new.conf"
|
||||
_, err = loadConfigFile()
|
||||
if err == nil {
|
||||
t.Fatal("expected error")
|
||||
}
|
||||
t.Log("Correctly got:", err)
|
||||
|
||||
// A missing config file should fall back to empty defaults without error.
|
||||
ConfigPath = "./testdata/filenotfound.conf"
|
||||
c, err := loadConfigFile()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if len(c.GetSectionList()) != 0 {
|
||||
t.Fatalf("Expected 0-length section, got %d entries", len(c.GetSectionList()))
|
||||
}
|
||||
}
|
||||
|
||||
func TestPassword(t *testing.T) {
|
||||
defer func() {
|
||||
configKey = nil // reset password
|
||||
}()
|
||||
var err error
|
||||
// Empty password should give error
|
||||
err = setPassword(" \t ")
|
||||
if err == nil {
|
||||
t.Fatal("expected error")
|
||||
}
|
||||
|
||||
// Test invalid utf8 sequence
|
||||
err = setPassword(string([]byte{0xff, 0xfe, 0xfd}) + "abc")
|
||||
if err == nil {
|
||||
t.Fatal("expected error")
|
||||
}
|
||||
|
||||
// Simple check of wrong passwords
|
||||
hashedKeyCompare(t, "mis", "match", false)
|
||||
|
||||
// Check that passwords match with trimmed whitespace
|
||||
hashedKeyCompare(t, " abcdef \t", "abcdef", true)
|
||||
|
||||
// Check that passwords match after unicode normalization
|
||||
hashedKeyCompare(t, "ff\u0041\u030A", "ffÅ", true)
|
||||
|
||||
// Check that passwords preserve case
|
||||
hashedKeyCompare(t, "abcdef", "ABCDEF", false)
|
||||
|
||||
}
|
||||
|
||||
func hashedKeyCompare(t *testing.T, a, b string, shouldMatch bool) {
|
||||
err := setPassword(a)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
k1 := configKey
|
||||
|
||||
err = setPassword(b)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
k2 := configKey
|
||||
matches := bytes.Equal(k1, k2)
|
||||
if shouldMatch && !matches {
|
||||
t.Fatalf("%v != %v", k1, k2)
|
||||
}
|
||||
if !shouldMatch && matches {
|
||||
t.Fatalf("%v == %v", k1, k2)
|
||||
}
|
||||
}
|
||||
@@ -1,12 +0,0 @@
|
||||
// +build !windows
|
||||
|
||||
package fs
|
||||
|
||||
// isDriveLetter returns a bool indicating whether name is a valid
|
||||
// Windows drive letter
|
||||
//
|
||||
// On non-Windows platforms we don't have drive letters so we always
|
||||
// return false
|
||||
func isDriveLetter(name string) bool {
|
||||
return false
|
||||
}
|
||||
@@ -1,13 +0,0 @@
|
||||
// +build windows
|
||||
|
||||
package fs
|
||||
|
||||
// isDriveLetter returns a bool indicating whether name is a valid
|
||||
// Windows drive letter
|
||||
func isDriveLetter(name string) bool {
|
||||
if len(name) != 1 {
|
||||
return false
|
||||
}
|
||||
c := name[0]
|
||||
return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
|
||||
}
|
||||
102
fs/error.go
@@ -1,102 +0,0 @@
|
||||
// Errors and error handling
|
||||
|
||||
package fs
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
"net/url"
|
||||
)
|
||||
|
||||
// Retry is an optional interface for error as to whether the
|
||||
// operation should be retried at a high level.
|
||||
//
|
||||
// This should be returned from Update or Put methods as required
|
||||
type Retry interface {
|
||||
error
|
||||
Retry() bool
|
||||
}
|
||||
|
||||
// retryError is a type of error
|
||||
type retryError string
|
||||
|
||||
// Error interface
|
||||
func (r retryError) Error() string {
|
||||
return string(r)
|
||||
}
|
||||
|
||||
// Retry interface
|
||||
func (r retryError) Retry() bool {
|
||||
return true
|
||||
}
|
||||
|
||||
// Check interface
|
||||
var _ Retry = retryError("")
|
||||
|
||||
// RetryErrorf makes an error which indicates it would like to be retried
|
||||
func RetryErrorf(format string, a ...interface{}) error {
|
||||
return retryError(fmt.Sprintf(format, a...))
|
||||
}
|
||||
|
||||
// PlainRetryError is an error wrapped so it will retry
|
||||
type plainRetryError struct {
|
||||
error
|
||||
}
|
||||
|
||||
// Retry interface
|
||||
func (err plainRetryError) Retry() bool {
|
||||
return true
|
||||
}
|
||||
|
||||
// Check interface
|
||||
var _ Retry = plainRetryError{(error)(nil)}
|
||||
|
||||
// RetryError makes an error which indicates it would like to be retried
|
||||
func RetryError(err error) error {
|
||||
return plainRetryError{err}
|
||||
}
|
||||
|
||||
// ShouldRetry looks at an error and tries to work out if retrying the
|
||||
// operation that caused it would be a good idea. It returns true if
|
||||
// the error implements Timeout() or Temporary() and it returns true.
|
||||
func ShouldRetry(err error) bool {
|
||||
if err == nil {
|
||||
return false
|
||||
}
|
||||
|
||||
// Unwrap url.Error
|
||||
if urlErr, ok := err.(*url.Error); ok {
|
||||
err = urlErr.Err
|
||||
}
|
||||
|
||||
// Check for net error Timeout()
|
||||
if x, ok := err.(interface {
|
||||
Timeout() bool
|
||||
}); ok && x.Timeout() {
|
||||
return true
|
||||
}
|
||||
|
||||
// Check for net error Temporary()
|
||||
if x, ok := err.(interface {
|
||||
Temporary() bool
|
||||
}); ok && x.Temporary() {
|
||||
return true
|
||||
}
|
||||
|
||||
return false
|
||||
}
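// Illustrative sketch, not part of the diff above: a caller can combine
// ShouldRetry with RetryError so that transient network failures are marked
// as retryable for the higher level sync logic (helper name is made up).
func exampleMarkRetryable(err error) error {
	if err != nil && ShouldRetry(err) {
		return RetryError(err) // wraps err so Retry() reports true
	}
	return err
}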
|
||||
|
||||
// ShouldRetryHTTP returns a boolean as to whether this resp deserves to be retried.
|
||||
// It checks to see if the HTTP response code is in the slice
|
||||
// retryErrorCodes.
|
||||
func ShouldRetryHTTP(resp *http.Response, retryErrorCodes []int) bool {
|
||||
if resp == nil {
|
||||
return false
|
||||
}
|
||||
for _, e := range retryErrorCodes {
|
||||
if resp.StatusCode == e {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
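// Illustrative sketch, not part of the diff above: a backend would pair
// ShouldRetryHTTP with its own list of retryable status codes; the codes
// below are an assumption for illustration only.
func exampleShouldRetryResponse(resp *http.Response, err error) bool {
	retryErrorCodes := []int{429, 500, 502, 503, 504}
	return ShouldRetry(err) || ShouldRetryHTTP(resp, retryErrorCodes)
}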
|
||||
348
fs/filter.go
@@ -1,348 +0,0 @@
|
||||
// Control the filtering of files
|
||||
|
||||
package fs
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"os"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/spf13/pflag"
|
||||
)
|
||||
|
||||
// Global
|
||||
var (
|
||||
// Flags
|
||||
deleteExcluded = pflag.BoolP("delete-excluded", "", false, "Delete files on dest excluded from sync")
|
||||
filterRule = pflag.StringP("filter", "f", "", "Add a file-filtering rule")
|
||||
filterFrom = pflag.StringP("filter-from", "", "", "Read filtering patterns from a file")
|
||||
excludeRule = pflag.StringP("exclude", "", "", "Exclude files matching pattern")
|
||||
excludeFrom = pflag.StringP("exclude-from", "", "", "Read exclude patterns from file")
|
||||
includeRule = pflag.StringP("include", "", "", "Include files matching pattern")
|
||||
includeFrom = pflag.StringP("include-from", "", "", "Read include patterns from file")
|
||||
filesFrom = pflag.StringP("files-from", "", "", "Read list of source-file names from file")
|
||||
minAge = pflag.StringP("min-age", "", "", "Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y")
|
||||
maxAge = pflag.StringP("max-age", "", "", "Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y")
|
||||
minSize SizeSuffix
|
||||
maxSize SizeSuffix
|
||||
dumpFilters = pflag.BoolP("dump-filters", "", false, "Dump the filters to the output")
|
||||
//cvsExclude = pflag.BoolP("cvs-exclude", "C", false, "Exclude files in the same way CVS does")
|
||||
)
|
||||
|
||||
func init() {
|
||||
pflag.VarP(&minSize, "min-size", "", "Don't transfer any file smaller than this in k or suffix k|M|G")
|
||||
pflag.VarP(&maxSize, "max-size", "", "Don't transfer any file larger than this in k or suffix k|M|G")
|
||||
}
|
||||
|
||||
// rule is one filter rule
|
||||
type rule struct {
|
||||
Include bool
|
||||
Regexp *regexp.Regexp
|
||||
}
|
||||
|
||||
// Match returns true if rule matches path
|
||||
func (r *rule) Match(path string) bool {
|
||||
return r.Regexp.MatchString(path)
|
||||
}
|
||||
|
||||
// String the rule
|
||||
func (r *rule) String() string {
|
||||
c := "-"
|
||||
if r.Include {
|
||||
c = "+"
|
||||
}
|
||||
return fmt.Sprintf("%s %s", c, r.Regexp.String())
|
||||
}
|
||||
|
||||
// filesMap describes the map of files to transfer
|
||||
type filesMap map[string]struct{}
|
||||
|
||||
// Filter describes any filtering in operation
|
||||
type Filter struct {
|
||||
DeleteExcluded bool
|
||||
MinSize int64
|
||||
MaxSize int64
|
||||
ModTimeFrom time.Time
|
||||
ModTimeTo time.Time
|
||||
rules []rule
|
||||
files filesMap
|
||||
}
|
||||
|
||||
// We use time conventions
|
||||
var ageSuffixes = []struct {
|
||||
Suffix string
|
||||
Multiplier time.Duration
|
||||
}{
|
||||
{Suffix: "ms", Multiplier: time.Millisecond},
|
||||
{Suffix: "s", Multiplier: time.Second},
|
||||
{Suffix: "m", Multiplier: time.Minute},
|
||||
{Suffix: "h", Multiplier: time.Hour},
|
||||
{Suffix: "d", Multiplier: time.Hour * 24},
|
||||
{Suffix: "w", Multiplier: time.Hour * 24 * 7},
|
||||
{Suffix: "M", Multiplier: time.Hour * 24 * 30},
|
||||
{Suffix: "y", Multiplier: time.Hour * 24 * 365},
|
||||
|
||||
// Default to second
|
||||
{Suffix: "", Multiplier: time.Second},
|
||||
}
|
||||
|
||||
// ParseDuration parses a duration string. Accept ms|s|m|h|d|w|M|y suffixes. Defaults to second if not provided
|
||||
func ParseDuration(age string) (time.Duration, error) {
|
||||
var period float64
|
||||
|
||||
for _, ageSuffix := range ageSuffixes {
|
||||
if strings.HasSuffix(age, ageSuffix.Suffix) {
|
||||
numberString := age[:len(age)-len(ageSuffix.Suffix)]
|
||||
var err error
|
||||
period, err = strconv.ParseFloat(numberString, 64)
|
||||
if err != nil {
|
||||
return time.Duration(0), err
|
||||
}
|
||||
period *= float64(ageSuffix.Multiplier)
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
return time.Duration(period), nil
|
||||
}
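// Illustrative sketch, not part of the diff above: ParseDuration accepts a
// bare number of seconds or an ms|s|m|h|d|w|M|y suffix, and NewFilter turns
// --min-age/--max-age into mod-time cut offs by subtracting the duration
// from now, roughly as follows (the helper name is made up).
func exampleAgeCutoff(age string) (time.Time, error) {
	d, err := ParseDuration(age) // e.g. "90" (seconds), "2h", "1.5y"
	if err != nil {
		return time.Time{}, err
	}
	return time.Now().Add(-d), nil
}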
|
||||
|
||||
// NewFilter parses the command line options and creates a Filter object
|
||||
func NewFilter() (f *Filter, err error) {
|
||||
f = &Filter{
|
||||
DeleteExcluded: *deleteExcluded,
|
||||
MinSize: int64(minSize),
|
||||
MaxSize: int64(maxSize),
|
||||
}
|
||||
addImplicitExclude := false
|
||||
|
||||
if *includeRule != "" {
|
||||
err = f.Add(true, *includeRule)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
addImplicitExclude = true
|
||||
}
|
||||
if *includeFrom != "" {
|
||||
err := forEachLine(*includeFrom, func(line string) error {
|
||||
return f.Add(true, line)
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
addImplicitExclude = true
|
||||
}
|
||||
if *excludeRule != "" {
|
||||
err = f.Add(false, *excludeRule)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
if *excludeFrom != "" {
|
||||
err := forEachLine(*excludeFrom, func(line string) error {
|
||||
return f.Add(false, line)
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
if *filterRule != "" {
|
||||
err = f.AddRule(*filterRule)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
if *filterFrom != "" {
|
||||
err := forEachLine(*filterFrom, f.AddRule)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
if *filesFrom != "" {
|
||||
err := forEachLine(*filesFrom, func(line string) error {
|
||||
return f.AddFile(line)
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
if addImplicitExclude {
|
||||
err = f.Add(false, "*")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
if *minAge != "" {
|
||||
duration, err := ParseDuration(*minAge)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
f.ModTimeTo = time.Now().Add(-duration)
|
||||
Debug(nil, "--min-age %v to %v", duration, f.ModTimeTo)
|
||||
}
|
||||
if *maxAge != "" {
|
||||
duration, err := ParseDuration(*maxAge)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
f.ModTimeFrom = time.Now().Add(-duration)
|
||||
if !f.ModTimeTo.IsZero() && f.ModTimeTo.Before(f.ModTimeFrom) {
|
||||
return nil, fmt.Errorf("Argument --min-age can't be larger than --max-age")
|
||||
}
|
||||
Debug(nil, "--max-age %v to %v", duration, f.ModTimeFrom)
|
||||
}
|
||||
if *dumpFilters {
|
||||
fmt.Println("--- start filters ---")
|
||||
fmt.Println(f.DumpFilters())
|
||||
fmt.Println("--- end filters ---")
|
||||
}
|
||||
return f, nil
|
||||
}
|
||||
|
||||
// Add adds a filter rule with include or exclude status indicated
|
||||
func (f *Filter) Add(Include bool, glob string) error {
|
||||
re, err := globToRegexp(glob)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
rule := rule{
|
||||
Include: Include,
|
||||
Regexp: re,
|
||||
}
|
||||
f.rules = append(f.rules, rule)
|
||||
return nil
|
||||
}
|
||||
|
||||
// AddRule adds a filter rule with include/exclude indicated by the prefix
|
||||
//
|
||||
// These are
|
||||
//
|
||||
// + glob
|
||||
// - glob
|
||||
// !
|
||||
//
|
||||
// '+' includes the glob, '-' excludes it and '!' resets the filter list
|
||||
//
|
||||
// Line comments may be introduced with '#' or ';'
|
||||
func (f *Filter) AddRule(rule string) error {
|
||||
switch {
|
||||
case rule == "!":
|
||||
f.Clear()
|
||||
return nil
|
||||
case strings.HasPrefix(rule, "- "):
|
||||
return f.Add(false, rule[2:])
|
||||
case strings.HasPrefix(rule, "+ "):
|
||||
return f.Add(true, rule[2:])
|
||||
}
|
||||
return fmt.Errorf("Malformed rule %q", rule)
|
||||
}
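// Illustrative sketch, not part of the diff above: rules are checked in the
// order they were added and the first match decides, so an include list is
// normally finished with the same "- *" catch-all that NewFilter adds as the
// implicit exclude (assuming globToRegexp handles the plain "*" wildcard, as
// that implicit rule suggests; the example function name is made up).
func exampleJpegOnlyFilter() (*Filter, error) {
	f := &Filter{}
	if err := f.AddRule("+ *.jpg"); err != nil { // include jpegs
		return nil, err
	}
	if err := f.AddRule("- *"); err != nil { // exclude everything else
		return nil, err
	}
	return f, nil
}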
|
||||
|
||||
// AddFile adds a single file to the files from list
|
||||
func (f *Filter) AddFile(file string) error {
|
||||
if f.files == nil {
|
||||
f.files = make(filesMap)
|
||||
}
|
||||
file = strings.Trim(file, "/")
|
||||
f.files[file] = struct{}{}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Clear clears all the filter rules
|
||||
func (f *Filter) Clear() {
|
||||
f.rules = nil
|
||||
}
|
||||
|
||||
// InActive returns false if any filters are active
|
||||
func (f *Filter) InActive() bool {
|
||||
return (f.files == nil &&
|
||||
f.ModTimeFrom.IsZero() &&
|
||||
f.ModTimeTo.IsZero() &&
|
||||
f.MinSize == 0 &&
|
||||
f.MaxSize == 0 &&
|
||||
len(f.rules) == 0)
|
||||
}
|
||||
|
||||
// Include returns whether this object should be included into the
|
||||
// sync or not
|
||||
func (f *Filter) Include(remote string, size int64, modTime time.Time) bool {
|
||||
// filesFrom takes precedence
|
||||
if f.files != nil {
|
||||
_, include := f.files[remote]
|
||||
return include
|
||||
}
|
||||
if !f.ModTimeFrom.IsZero() && modTime.Before(f.ModTimeFrom) {
|
||||
return false
|
||||
}
|
||||
if !f.ModTimeTo.IsZero() && modTime.After(f.ModTimeTo) {
|
||||
return false
|
||||
}
|
||||
if f.MinSize != 0 && size < f.MinSize {
|
||||
return false
|
||||
}
|
||||
if f.MaxSize != 0 && size > f.MaxSize {
|
||||
return false
|
||||
}
|
||||
for _, rule := range f.rules {
|
||||
if rule.Match(remote) {
|
||||
return rule.Include
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// IncludeObject returns whether this object should be included into
|
||||
// the sync or not. This is a convenience function to avoid calling
|
||||
// o.ModTime(), which is an expensive operation.
|
||||
func (f *Filter) IncludeObject(o Object) bool {
|
||||
var modTime time.Time
|
||||
|
||||
if !f.ModTimeFrom.IsZero() || !f.ModTimeTo.IsZero() {
|
||||
modTime = o.ModTime()
|
||||
} else {
|
||||
modTime = time.Unix(0, 0)
|
||||
}
|
||||
|
||||
return f.Include(o.Remote(), o.Size(), modTime)
|
||||
}
|
||||
|
||||
// forEachLine calls fn on every line in the file pointed to by path
|
||||
//
|
||||
// It ignores empty lines and lines starting with '#' or ';'
|
||||
func forEachLine(path string, fn func(string) error) (err error) {
|
||||
in, err := os.Open(path)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer CheckClose(in, &err)
|
||||
scanner := bufio.NewScanner(in)
|
||||
for scanner.Scan() {
|
||||
line := scanner.Text()
|
||||
line = strings.TrimSpace(line)
|
||||
if len(line) == 0 || line[0] == '#' || line[0] == ';' {
|
||||
continue
|
||||
}
|
||||
err := fn(line)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return scanner.Err()
|
||||
}
|
||||
|
||||
// DumpFilters dumps the filters in textual form, 1 per line
|
||||
func (f *Filter) DumpFilters() string {
|
||||
rules := []string{}
|
||||
if !f.ModTimeFrom.IsZero() {
|
||||
rules = append(rules, fmt.Sprintf("Last-modified date must be equal or greater than: %s", f.ModTimeFrom.String()))
|
||||
}
|
||||
if !f.ModTimeTo.IsZero() {
|
||||
rules = append(rules, fmt.Sprintf("Last-modified date must be equal or less than: %s", f.ModTimeTo.String()))
|
||||
}
|
||||
for _, rule := range f.rules {
|
||||
rules = append(rules, rule.String())
|
||||
}
|
||||
return strings.Join(rules, "\n")
|
||||
}
|
||||
@@ -1,432 +0,0 @@
|
||||
package fs
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
)
|
||||
|
||||
func TestAgeSuffix(t *testing.T) {
|
||||
for i, test := range []struct {
|
||||
in string
|
||||
want float64
|
||||
err bool
|
||||
}{
|
||||
{"0", 0, false},
|
||||
{"", 0, true},
|
||||
{"1ms", float64(time.Millisecond), false},
|
||||
{"1s", float64(time.Second), false},
|
||||
{"1m", float64(time.Minute), false},
|
||||
{"1h", float64(time.Hour), false},
|
||||
{"1d", float64(time.Hour) * 24, false},
|
||||
{"1w", float64(time.Hour) * 24 * 7, false},
|
||||
{"1M", float64(time.Hour) * 24 * 30, false},
|
||||
{"1y", float64(time.Hour) * 24 * 365, false},
|
||||
{"1.5y", float64(time.Hour) * 24 * 365 * 1.5, false},
|
||||
{"-1s", -float64(time.Second), false},
|
||||
{"1.s", float64(time.Second), false},
|
||||
{"1x", 0, true},
|
||||
} {
|
||||
duration, err := ParseDuration(test.in)
|
||||
if (err != nil) != test.err {
|
||||
t.Errorf("%d: Expecting error %v but got error %v", i, test.err, err)
|
||||
continue
|
||||
}
|
||||
|
||||
got := float64(duration)
|
||||
if test.want != got {
|
||||
t.Errorf("%d: Want %v got %v", i, test.want, got)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestNewFilterDefault(t *testing.T) {
|
||||
f, err := NewFilter()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if f.DeleteExcluded != false {
|
||||
t.Errorf("DeleteExcluded want false got %v", f.DeleteExcluded)
|
||||
}
|
||||
if f.MinSize != 0 {
|
||||
t.Errorf("MinSize want 0 got %v", f.MinSize)
|
||||
}
|
||||
if f.MaxSize != 0 {
|
||||
t.Errorf("MaxSize want 0 got %v", f.MaxSize)
|
||||
}
|
||||
if len(f.rules) != 0 {
|
||||
t.Errorf("rules want non got %v", f.rules)
|
||||
}
|
||||
if f.files != nil {
|
||||
t.Errorf("files want none got %v", f.files)
|
||||
}
|
||||
if !f.InActive() {
|
||||
t.Errorf("want InActive")
|
||||
}
|
||||
}
|
||||
|
||||
// return a pointer to the string
|
||||
func stringP(s string) *string {
|
||||
return &s
|
||||
}
|
||||
|
||||
// testFile creates a temp file with the contents
|
||||
func testFile(t *testing.T, contents string) *string {
|
||||
out, err := ioutil.TempFile("", "filter_test")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
defer func() {
|
||||
err := out.Close()
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
}()
|
||||
_, err = out.Write([]byte(contents))
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
s := out.Name()
|
||||
return &s
|
||||
}
|
||||
|
||||
func TestNewFilterFull(t *testing.T) {
|
||||
mins := int64(100 * 1024)
|
||||
maxs := int64(1000 * 1024)
|
||||
emptyString := ""
|
||||
isFalse := false
|
||||
isTrue := true
|
||||
|
||||
// Set up the input
|
||||
deleteExcluded = &isTrue
|
||||
filterRule = stringP("- filter1")
|
||||
filterFrom = testFile(t, "#comment\n+ filter2\n- filter3\n")
|
||||
excludeRule = stringP("exclude1")
|
||||
excludeFrom = testFile(t, "#comment\nexclude2\nexclude3\n")
|
||||
includeRule = stringP("include1")
|
||||
includeFrom = testFile(t, "#comment\ninclude2\ninclude3\n")
|
||||
filesFrom = testFile(t, "#comment\nfiles1\nfiles2\n")
|
||||
minSize = SizeSuffix(mins)
|
||||
maxSize = SizeSuffix(maxs)
|
||||
|
||||
rm := func(p string) {
|
||||
err := os.Remove(p)
|
||||
if err != nil {
|
||||
t.Logf("error removing %q: %v", p, err)
|
||||
}
|
||||
}
|
||||
// Reset the input
|
||||
defer func() {
|
||||
rm(*filterFrom)
|
||||
rm(*excludeFrom)
|
||||
rm(*includeFrom)
|
||||
rm(*filesFrom)
|
||||
minSize = 0
|
||||
maxSize = 0
|
||||
deleteExcluded = &isFalse
|
||||
filterRule = &emptyString
|
||||
filterFrom = &emptyString
|
||||
excludeRule = &emptyString
|
||||
excludeFrom = &emptyString
|
||||
includeRule = &emptyString
|
||||
includeFrom = &emptyString
|
||||
filesFrom = &emptyString
|
||||
}()
|
||||
|
||||
f, err := NewFilter()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if f.DeleteExcluded != true {
|
||||
t.Errorf("DeleteExcluded want true got %v", f.DeleteExcluded)
|
||||
}
|
||||
if f.MinSize != mins {
|
||||
t.Errorf("MinSize want %v got %v", mins, f.MinSize)
|
||||
}
|
||||
if f.MaxSize != maxs {
|
||||
t.Errorf("MaxSize want %v got %v", maxs, f.MaxSize)
|
||||
}
|
||||
got := f.DumpFilters()
|
||||
want := `+ (^|/)include1$
|
||||
+ (^|/)include2$
|
||||
+ (^|/)include3$
|
||||
- (^|/)exclude1$
|
||||
- (^|/)exclude2$
|
||||
- (^|/)exclude3$
|
||||
- (^|/)filter1$
|
||||
+ (^|/)filter2$
|
||||
- (^|/)filter3$
|
||||
- (^|/)[^/]*$`
|
||||
if got != want {
|
||||
t.Errorf("rules want %s got %s", want, got)
|
||||
}
|
||||
if len(f.files) != 2 {
|
||||
t.Errorf("files want 2 got %v", f.files)
|
||||
}
|
||||
for _, name := range []string{"files1", "files2"} {
|
||||
_, ok := f.files[name]
|
||||
if !ok {
|
||||
t.Errorf("Didn't find file %q in f.files", name)
|
||||
}
|
||||
}
|
||||
if f.InActive() {
|
||||
t.Errorf("want !InActive")
|
||||
}
|
||||
}
|
||||
|
||||
type includeTest struct {
|
||||
in string
|
||||
size int64
|
||||
modTime int64
|
||||
want bool
|
||||
}
|
||||
|
||||
func testInclude(t *testing.T, f *Filter, tests []includeTest) {
|
||||
for _, test := range tests {
|
||||
got := f.Include(test.in, test.size, time.Unix(test.modTime, 0))
|
||||
if test.want != got {
|
||||
t.Errorf("%q,%d,%d: want %v got %v", test.in, test.size, test.modTime, test.want, got)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestNewFilterIncludeFiles(t *testing.T) {
|
||||
f, err := NewFilter()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
err = f.AddFile("file1.jpg")
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
err = f.AddFile("/file2.jpg")
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
testInclude(t, f, []includeTest{
|
||||
{"file1.jpg", 0, 0, true},
|
||||
{"file2.jpg", 1, 0, true},
|
||||
{"potato/file2.jpg", 2, 0, false},
|
||||
{"file3.jpg", 3, 0, false},
|
||||
})
|
||||
if f.InActive() {
|
||||
t.Errorf("want !InActive")
|
||||
}
|
||||
}
|
||||
|
||||
func TestNewFilterMinSize(t *testing.T) {
|
||||
f, err := NewFilter()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
f.MinSize = 100
|
||||
testInclude(t, f, []includeTest{
|
||||
{"file1.jpg", 100, 0, true},
|
||||
{"file2.jpg", 101, 0, true},
|
||||
{"potato/file2.jpg", 99, 0, false},
|
||||
})
|
||||
if f.InActive() {
|
||||
t.Errorf("want !InActive")
|
||||
}
|
||||
}
|
||||
|
||||
func TestNewFilterMaxSize(t *testing.T) {
|
||||
f, err := NewFilter()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
f.MaxSize = 100
|
||||
testInclude(t, f, []includeTest{
|
||||
{"file1.jpg", 100, 0, true},
|
||||
{"file2.jpg", 101, 0, false},
|
||||
{"potato/file2.jpg", 99, 0, true},
|
||||
})
|
||||
if f.InActive() {
|
||||
t.Errorf("want !InActive")
|
||||
}
|
||||
}
|
||||
|
||||
func TestNewFilterMinAndMaxAge(t *testing.T) {
|
||||
f, err := NewFilter()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
f.ModTimeFrom = time.Unix(1440000002, 0)
|
||||
f.ModTimeTo = time.Unix(1440000003, 0)
|
||||
testInclude(t, f, []includeTest{
|
||||
{"file1.jpg", 100, 1440000000, false},
|
||||
{"file2.jpg", 101, 1440000001, false},
|
||||
{"file3.jpg", 102, 1440000002, true},
|
||||
{"potato/file1.jpg", 98, 1440000003, true},
|
||||
{"potato/file2.jpg", 99, 1440000004, false},
|
||||
})
|
||||
if f.InActive() {
|
||||
t.Errorf("want !InActive")
|
||||
}
|
||||
}
|
||||
|
||||
func TestNewFilterMinAge(t *testing.T) {
|
||||
f, err := NewFilter()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
f.ModTimeTo = time.Unix(1440000002, 0)
|
||||
testInclude(t, f, []includeTest{
|
||||
{"file1.jpg", 100, 1440000000, true},
|
||||
{"file2.jpg", 101, 1440000001, true},
|
||||
{"file3.jpg", 102, 1440000002, true},
|
||||
{"potato/file1.jpg", 98, 1440000003, false},
|
||||
{"potato/file2.jpg", 99, 1440000004, false},
|
||||
})
|
||||
if f.InActive() {
|
||||
t.Errorf("want !InActive")
|
||||
}
|
||||
}
|
||||
|
||||
func TestNewFilterMaxAge(t *testing.T) {
|
||||
f, err := NewFilter()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
f.ModTimeFrom = time.Unix(1440000002, 0)
|
||||
testInclude(t, f, []includeTest{
|
||||
{"file1.jpg", 100, 1440000000, false},
|
||||
{"file2.jpg", 101, 1440000001, false},
|
||||
{"file3.jpg", 102, 1440000002, true},
|
||||
{"potato/file1.jpg", 98, 1440000003, true},
|
||||
{"potato/file2.jpg", 99, 1440000004, true},
|
||||
})
|
||||
if f.InActive() {
|
||||
t.Errorf("want !InActive")
|
||||
}
|
||||
}
|
||||
|
||||
func TestNewFilterMatches(t *testing.T) {
|
||||
f, err := NewFilter()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
add := func(s string) {
|
||||
err := f.AddRule(s)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
}
|
||||
add("+ cleared")
|
||||
add("!")
|
||||
add("- file1.jpg")
|
||||
add("+ file2.png")
|
||||
add("+ *.jpg")
|
||||
add("- *.png")
|
||||
add("- /potato")
|
||||
add("+ /sausage1")
|
||||
add("+ /sausage2*")
|
||||
add("+ /sausage3**")
|
||||
add("- *")
|
||||
testInclude(t, f, []includeTest{
|
||||
{"cleared", 100, 0, false},
|
||||
{"file1.jpg", 100, 0, false},
|
||||
{"file2.png", 100, 0, true},
|
||||
{"afile2.png", 100, 0, false},
|
||||
{"file3.jpg", 101, 0, true},
|
||||
{"file4.png", 101, 0, false},
|
||||
{"potato", 101, 0, false},
|
||||
{"sausage1", 101, 0, true},
|
||||
{"sausage1/potato", 101, 0, false},
|
||||
{"sausage2potato", 101, 0, true},
|
||||
{"sausage2/potato", 101, 0, false},
|
||||
{"sausage3/potato", 101, 0, true},
|
||||
{"unicorn", 99, 0, false},
|
||||
})
|
||||
if f.InActive() {
|
||||
t.Errorf("want !InActive")
|
||||
}
|
||||
}
|
||||
|
||||
func TestFilterForEachLine(t *testing.T) {
|
||||
file := testFile(t, `; comment
|
||||
one
|
||||
# another comment
|
||||
|
||||
|
||||
two
|
||||
# indented comment
|
||||
three
|
||||
four
|
||||
five
|
||||
six `)
|
||||
defer func() {
|
||||
err := os.Remove(*file)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
}()
|
||||
lines := []string{}
|
||||
err := forEachLine(*file, func(s string) error {
|
||||
lines = append(lines, s)
|
||||
return nil
|
||||
})
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
got := strings.Join(lines, ",")
|
||||
want := "one,two,three,four,five,six"
|
||||
if want != got {
|
||||
t.Errorf("want %q got %q", want, got)
|
||||
}
|
||||
}
|
||||
|
||||
func TestFilterMatchesFromDocs(t *testing.T) {
|
||||
for _, test := range []struct {
|
||||
glob string
|
||||
included bool
|
||||
file string
|
||||
}{
|
||||
{"file.jpg", true, "file.jpg"},
|
||||
{"file.jpg", true, "directory/file.jpg"},
|
||||
{"file.jpg", false, "afile.jpg"},
|
||||
{"file.jpg", false, "directory/afile.jpg"},
|
||||
{"/file.jpg", true, "file.jpg"},
|
||||
{"/file.jpg", false, "afile.jpg"},
|
||||
{"/file.jpg", false, "directory/file.jpg"},
|
||||
{"*.jpg", true, "file.jpg"},
|
||||
{"*.jpg", true, "directory/file.jpg"},
|
||||
{"*.jpg", false, "file.jpg/anotherfile.png"},
|
||||
{"dir/**", true, "dir/file.jpg"},
|
||||
{"dir/**", true, "dir/dir1/dir2/file.jpg"},
|
||||
{"dir/**", false, "directory/file.jpg"},
|
||||
{"dir/**", false, "adir/file.jpg"},
|
||||
{"l?ss", true, "less"},
|
||||
{"l?ss", true, "lass"},
|
||||
{"l?ss", false, "floss"},
|
||||
{"h[ae]llo", true, "hello"},
|
||||
{"h[ae]llo", true, "hallo"},
|
||||
{"h[ae]llo", false, "hullo"},
|
||||
{"{one,two}_potato", true, "one_potato"},
|
||||
{"{one,two}_potato", true, "two_potato"},
|
||||
{"{one,two}_potato", false, "three_potato"},
|
||||
{"{one,two}_potato", false, "_potato"},
|
||||
{"\\*.jpg", true, "*.jpg"},
|
||||
{"\\\\.jpg", true, "\\.jpg"},
|
||||
{"\\[one\\].jpg", true, "[one].jpg"},
|
||||
} {
|
||||
f, err := NewFilter()
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
err = f.Add(true, test.glob)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
err = f.Add(false, "*")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
included := f.Include(test.file, 0, time.Unix(0, 0))
|
||||
if included != test.included {
|
||||
t.Logf("%q match %q: want %v got %v", test.glob, test.file, test.included, included)
|
||||
}
|
||||
}
|
||||
}
|
||||
288
fs/fs.go
@@ -1,45 +1,31 @@
|
||||
// Package fs is a generic file system interface for rclone object storage systems
|
||||
// File system interface
|
||||
|
||||
package fs
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"log"
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
"sort"
|
||||
"time"
|
||||
)
|
||||
|
||||
// Constants
|
||||
const (
|
||||
// UserAgent for Fs which can set it
|
||||
// User agent for Fs which can set it
|
||||
UserAgent = "rclone/" + Version
|
||||
// ModTimeNotSupported is a very large precision value to show
|
||||
// mod time isn't supported on this Fs
|
||||
ModTimeNotSupported = 100 * 365 * 24 * time.Hour
|
||||
)
|
||||
|
||||
// Globals
|
||||
var (
|
||||
// Filesystem registry
|
||||
fsRegistry []*RegInfo
|
||||
// ErrorNotFoundInConfigFile is returned by NewFs if not found in config file
|
||||
ErrorNotFoundInConfigFile = fmt.Errorf("Didn't find section in config file")
|
||||
ErrorCantPurge = fmt.Errorf("Can't purge directory")
|
||||
ErrorCantCopy = fmt.Errorf("Can't copy object - incompatible remotes")
|
||||
ErrorCantMove = fmt.Errorf("Can't move object - incompatible remotes")
|
||||
ErrorCantDirMove = fmt.Errorf("Can't move directory - incompatible remotes")
|
||||
ErrorDirExists = fmt.Errorf("Can't copy directory - destination already exists")
|
||||
ErrorCantSetModTime = fmt.Errorf("Can't set modified time")
|
||||
fsRegistry []*FsInfo
|
||||
)
|
||||
|
||||
// RegInfo provides information about a filesystem
|
||||
type RegInfo struct {
|
||||
// Filesystem info
|
||||
type FsInfo struct {
|
||||
// Name of this fs
|
||||
Name string
|
||||
// Description of this fs - defaults to Name
|
||||
Description string
|
||||
// Create a new file system. If root refers to an existing
|
||||
// object, then it should return a Fs which only returns that
|
||||
// object.
|
||||
@@ -50,30 +36,15 @@ type RegInfo struct {
|
||||
Options []Option
|
||||
}
|
||||
|
||||
// Option describes an option for the config wizard
|
||||
// An options for a Fs
|
||||
type Option struct {
|
||||
Name string
|
||||
Help string
|
||||
Optional bool
|
||||
Examples OptionExamples
|
||||
Examples []OptionExample
|
||||
}
|
||||
|
||||
// OptionExamples is a slice of examples
|
||||
type OptionExamples []OptionExample
|
||||
|
||||
// Len is part of sort.Interface.
|
||||
func (os OptionExamples) Len() int { return len(os) }
|
||||
|
||||
// Swap is part of sort.Interface.
|
||||
func (os OptionExamples) Swap(i, j int) { os[i], os[j] = os[j], os[i] }
|
||||
|
||||
// Less is part of sort.Interface.
|
||||
func (os OptionExamples) Less(i, j int) bool { return os[i].Help < os[j].Help }
|
||||
|
||||
// Sort sorts an OptionExamples
|
||||
func (os OptionExamples) Sort() { sort.Sort(os) }
|
||||
|
||||
// OptionExample describes an example for an Option
|
||||
// An example for an option
|
||||
type OptionExample struct {
|
||||
Value string
|
||||
Help string
|
||||
@@ -82,21 +53,22 @@ type OptionExample struct {
|
||||
// Register a filesystem
|
||||
//
|
||||
// Fs modules should use this in an init() function
|
||||
func Register(info *RegInfo) {
|
||||
func Register(info *FsInfo) {
|
||||
fsRegistry = append(fsRegistry, info)
|
||||
}
|
||||
|
||||
// Fs is the interface a cloud storage system must provide
|
||||
// A Filesystem, describes the local filesystem and the remote object store
|
||||
type Fs interface {
|
||||
Info
|
||||
// String returns a description of the FS
|
||||
String() string
|
||||
|
||||
// List the Fs into a channel
|
||||
List() ObjectsChan
|
||||
|
||||
// ListDir lists the Fs directories/buckets/containers into a channel
|
||||
// List the Fs directories/buckets/containers into a channel
|
||||
ListDir() DirChan
|
||||
|
||||
// NewFsObject finds the Object at remote. Returns nil if can't be found
|
||||
// Find the Object at remote. Returns nil if can't be found
|
||||
NewFsObject(remote string) Object
|
||||
|
||||
// Put in to the remote path with the modTime given of the given size
|
||||
@@ -104,154 +76,79 @@ type Fs interface {
|
||||
// May create the object even if it returns an error - if so
|
||||
// will return the object and the error, otherwise will return
|
||||
// nil and the error
|
||||
Put(in io.Reader, src ObjectInfo) (Object, error)
|
||||
Put(in io.Reader, remote string, modTime time.Time, size int64) (Object, error)
|
||||
|
||||
// Mkdir makes the directory (container, bucket)
|
||||
//
|
||||
// Shouldn't return an error if it already exists
|
||||
// Make the directory (container, bucket)
|
||||
Mkdir() error
|
||||
|
||||
// Rmdir removes the directory (container, bucket) if empty
|
||||
//
|
||||
// Return an error if it doesn't exist or isn't empty
|
||||
// Remove the directory (container, bucket) if empty
|
||||
Rmdir() error
|
||||
}
|
||||
|
||||
// Info provides an interface to reading information about a filesystem.
|
||||
type Info interface {
|
||||
// Name of the remote (as passed into NewFs)
|
||||
Name() string
|
||||
|
||||
// Root of the remote (as passed into NewFs)
|
||||
Root() string
|
||||
|
||||
// String returns a description of the FS
|
||||
String() string
|
||||
|
||||
// Precision of the ModTimes in this Fs
|
||||
Precision() time.Duration
|
||||
|
||||
// Returns the supported hash types of the filesystem
|
||||
Hashes() HashSet
|
||||
}
|
||||
|
||||
// Object is a filesystem like object provided by an Fs
|
||||
// A filesystem like object which can either be a remote object or a
|
||||
// local file/directory
|
||||
type Object interface {
|
||||
ObjectInfo
|
||||
|
||||
// String returns a description of the Object
|
||||
String() string
|
||||
|
||||
// Fs returns the Fs that this object is part of
|
||||
Fs() Fs
|
||||
|
||||
// Remote returns the remote path
|
||||
Remote() string
|
||||
|
||||
// Md5sum returns the md5 checksum of the file
|
||||
Md5sum() (string, error)
|
||||
|
||||
// ModTime returns the modification date of the file
|
||||
ModTime() time.Time
|
||||
|
||||
// SetModTime sets the metadata on the object to set the modification date
|
||||
SetModTime(time.Time) error
|
||||
SetModTime(time.Time)
|
||||
|
||||
// Size returns the size of the file
|
||||
Size() int64
|
||||
|
||||
// Open opens the file for read. Call Close() on the returned io.ReadCloser
|
||||
Open() (io.ReadCloser, error)
|
||||
|
||||
// Update in to the object with the modTime given of the given size
|
||||
Update(in io.Reader, src ObjectInfo) error
|
||||
Update(in io.Reader, modTime time.Time, size int64) error
|
||||
|
||||
// Storable says whether this object can be stored
|
||||
Storable() bool
|
||||
|
||||
// Removes this object
|
||||
Remove() error
|
||||
}
|
||||
|
||||
// ObjectInfo contains information about an object.
|
||||
type ObjectInfo interface {
|
||||
// Fs returns read only access to the Fs that this object is part of
|
||||
Fs() Info
|
||||
|
||||
// Remote returns the remote path
|
||||
Remote() string
|
||||
|
||||
// Hash returns the selected checksum of the file
|
||||
// If no checksum is available it returns ""
|
||||
Hash(HashType) (string, error)
|
||||
|
||||
// ModTime returns the modification date of the file
|
||||
// It should return a best guess if one isn't available
|
||||
ModTime() time.Time
|
||||
|
||||
// Size returns the size of the file
|
||||
Size() int64
|
||||
|
||||
// Storable says whether this object can be stored
|
||||
Storable() bool
|
||||
}
|
||||
|
||||
// Purger is an optional interfaces for Fs
|
||||
// Optional interfaces
|
||||
type Purger interface {
|
||||
// Purge all files in the root and the root directory
|
||||
//
|
||||
// Implement this if you have a way of deleting all the files
|
||||
// quicker than just running Remove() on the result of List()
|
||||
//
|
||||
// Return an error if it doesn't exist
|
||||
Purge() error
|
||||
}
|
||||
|
||||
// Copier is an optional interface for Fs
|
||||
type Copier interface {
|
||||
// Copy src to this remote using server side copy operations.
|
||||
//
|
||||
// This is stored with the remote path given
|
||||
//
|
||||
// It returns the destination Object and a possible error
|
||||
//
|
||||
// Will only be called if src.Fs().Name() == f.Name()
|
||||
//
|
||||
// If it isn't possible then return fs.ErrorCantCopy
|
||||
Copy(src Object, remote string) (Object, error)
|
||||
}
|
||||
|
||||
// Mover is an optional interface for Fs
|
||||
type Mover interface {
|
||||
// Move src to this remote using server side move operations.
|
||||
//
|
||||
// This is stored with the remote path given
|
||||
//
|
||||
// It returns the destination Object and a possible error
|
||||
//
|
||||
// Will only be called if src.Fs().Name() == f.Name()
|
||||
//
|
||||
// If it isn't possible then return fs.ErrorCantMove
|
||||
Move(src Object, remote string) (Object, error)
|
||||
}
|
||||
|
||||
// DirMover is an optional interface for Fs
|
||||
type DirMover interface {
|
||||
// DirMove moves src to this remote using server side move
|
||||
// operations.
|
||||
//
|
||||
// Will only be called if src.Fs().Name() == f.Name()
|
||||
//
|
||||
// If it isn't possible then return fs.ErrorCantDirMove
|
||||
//
|
||||
// If destination exists then return fs.ErrorDirExists
|
||||
DirMove(src Fs) error
|
||||
}
|
||||
|
||||
// UnWrapper is an optional interfaces for Fs
|
||||
type UnWrapper interface {
|
||||
// UnWrap returns the Fs that this Fs is wrapping
|
||||
UnWrap() Fs
|
||||
}
|
||||
|
||||
// ObjectsChan is a channel of Objects
|
||||
// A channel of Objects
|
||||
type ObjectsChan chan Object
|
||||
|
||||
// Objects is a slice of Object~s
|
||||
// A slice of Objects
|
||||
type Objects []Object
|
||||
|
||||
// ObjectPair is a pair of Objects used to describe a potential copy
|
||||
// operation.
|
||||
// A pair of Objects
|
||||
type ObjectPair struct {
|
||||
src, dst Object
|
||||
}
|
||||
|
||||
// ObjectPairChan is a channel of ObjectPair
|
||||
// A channel of ObjectPair
|
||||
type ObjectPairChan chan ObjectPair
|
||||
|
||||
// Dir describes a directory for directory/container/bucket lists
|
||||
// A structure of directory/container/bucket lists
|
||||
type Dir struct {
|
||||
Name string // name of the directory
|
||||
When time.Time // modification or creation time - IsZero for unknown
|
||||
@@ -259,13 +156,16 @@ type Dir struct {
|
||||
Count int64 // number of objects -1 for unknown
|
||||
}
|
||||
|
||||
// DirChan is a channel of Dir objects
|
||||
// A channel of Dir objects
|
||||
type DirChan chan *Dir
|
||||
|
||||
// Find looks for an Info object for the name passed in
|
||||
// Pattern to match a url
|
||||
var matcher = regexp.MustCompile(`^([\w_-]+):(.*)$`)
|
||||
|
||||
// Finds a FsInfo object for the name passed in
|
||||
//
|
||||
// Services are looked up in the config file
|
||||
func Find(name string) (*RegInfo, error) {
|
||||
func Find(name string) (*FsInfo, error) {
|
||||
for _, item := range fsRegistry {
|
||||
if item.Name == name {
|
||||
return item, nil
|
||||
@@ -274,120 +174,58 @@ func Find(name string) (*RegInfo, error) {
|
||||
return nil, fmt.Errorf("Didn't find filing system for %q", name)
|
||||
}
|
||||
|
||||
// Pattern to match an rclone url
|
||||
var matcher = regexp.MustCompile(`^([\w_ -]+):(.*)$`)
|
||||
|
||||
// NewFs makes a new Fs object from the path
|
||||
//
|
||||
// The path is of the form remote:path
|
||||
// The path is of the form service://path
|
||||
//
|
||||
// Remotes are looked up in the config file. If the remote isn't
|
||||
// found then NotFoundInConfigFile will be returned.
|
||||
//
|
||||
// On Windows avoid single character remote names as they can be mixed
|
||||
// up with drive letters.
|
||||
// Services are looked up in the config file
|
||||
func NewFs(path string) (Fs, error) {
|
||||
parts := matcher.FindStringSubmatch(path)
|
||||
fsName, configName, fsPath := "local", "local", path
|
||||
if parts != nil && !isDriveLetter(parts[1]) {
|
||||
if parts != nil {
|
||||
configName, fsPath = parts[1], parts[2]
|
||||
var err error
|
||||
fsName, err = ConfigFile.GetValue(configName, "type")
|
||||
if err != nil {
|
||||
return nil, ErrorNotFoundInConfigFile
|
||||
return nil, fmt.Errorf("Didn't find section in config file for %q", configName)
|
||||
}
|
||||
}
|
||||
fs, err := Find(fsName)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// change native directory separators to / if there are any
|
||||
fsPath = filepath.ToSlash(fsPath)
|
||||
return fs.NewFs(configName, fsPath)
|
||||
}
|
||||
|
||||
// OutputLog logs for an object
|
||||
// Outputs log for object
|
||||
func OutputLog(o interface{}, text string, args ...interface{}) {
|
||||
description := ""
|
||||
if o != nil {
|
||||
description = fmt.Sprintf("%v: ", o)
|
||||
if x, ok := o.(fmt.Stringer); ok {
|
||||
description = x.String() + ": "
|
||||
}
|
||||
out := fmt.Sprintf(text, args...)
|
||||
log.Print(description + out)
|
||||
}
|
||||
|
||||
// Debug writes debuging output for this Object or Fs
|
||||
// Write debuging output for this Object or Fs
|
||||
func Debug(o interface{}, text string, args ...interface{}) {
|
||||
if Config.Verbose {
|
||||
OutputLog(o, text, args...)
|
||||
}
|
||||
}
|
||||
|
||||
// Log writes log output for this Object or Fs
|
||||
// Write log output for this Object or Fs
|
||||
func Log(o interface{}, text string, args ...interface{}) {
|
||||
if !Config.Quiet {
|
||||
OutputLog(o, text, args...)
|
||||
}
|
||||
}
|
||||
|
||||
// ErrorLog writes error log output for this Object or Fs. It
|
||||
// unconditionally logs a message regardless of Config.Quiet or
|
||||
// Config.Verbose.
|
||||
func ErrorLog(o interface{}, text string, args ...interface{}) {
|
||||
OutputLog(o, text, args...)
|
||||
}
|
||||
|
||||
// CheckClose is a utility function used to check the return from
|
||||
// checkClose is a utility function used to check the return from
|
||||
// Close in a defer statement.
|
||||
func CheckClose(c io.Closer, err *error) {
|
||||
func checkClose(c io.Closer, err *error) {
|
||||
cerr := c.Close()
|
||||
if *err == nil {
|
||||
*err = cerr
|
||||
}
|
||||
}
|
||||
|
||||
// NewStaticObjectInfo returns a static ObjectInfo
|
||||
// If hashes is nil and fs is not nil, the hash map will be replaced with
|
||||
// empty hashes of the types supported by the fs.
|
||||
func NewStaticObjectInfo(remote string, modTime time.Time, size int64, storable bool, hashes map[HashType]string, fs Info) ObjectInfo {
|
||||
info := &staticObjectInfo{
|
||||
remote: remote,
|
||||
modTime: modTime,
|
||||
size: size,
|
||||
storable: storable,
|
||||
hashes: hashes,
|
||||
fs: fs,
|
||||
}
|
||||
if fs != nil && hashes == nil {
|
||||
set := fs.Hashes().Array()
|
||||
info.hashes = make(map[HashType]string)
|
||||
for _, ht := range set {
|
||||
info.hashes[ht] = ""
|
||||
}
|
||||
}
|
||||
return info
|
||||
}
|
||||
|
||||
type staticObjectInfo struct {
|
||||
remote string
|
||||
modTime time.Time
|
||||
size int64
|
||||
storable bool
|
||||
hashes map[HashType]string
|
||||
fs Info
|
||||
}
|
||||
|
||||
func (i *staticObjectInfo) Fs() Info { return i.fs }
|
||||
func (i *staticObjectInfo) Remote() string { return i.remote }
|
||||
func (i *staticObjectInfo) ModTime() time.Time { return i.modTime }
|
||||
func (i *staticObjectInfo) Size() int64 { return i.size }
|
||||
func (i *staticObjectInfo) Storable() bool { return i.storable }
|
||||
func (i *staticObjectInfo) Hash(h HashType) (string, error) {
|
||||
if len(i.hashes) == 0 {
|
||||
return "", ErrHashUnsupported
|
||||
}
|
||||
if hash, ok := i.hashes[h]; ok {
|
||||
return hash, nil
|
||||
}
|
||||
return "", ErrHashUnsupported
|
||||
}
|
||||
|
||||
117
fs/glob.go
@@ -1,117 +0,0 @@
|
||||
// rsync style glob parser
|
||||
|
||||
package fs
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"regexp"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// globToRegexp converts an rsync style glob to a regexp
|
||||
//
|
||||
// documented in filtering.md
|
||||
func globToRegexp(glob string) (*regexp.Regexp, error) {
|
||||
var re bytes.Buffer
|
||||
if strings.HasPrefix(glob, "/") {
|
||||
glob = glob[1:]
|
||||
_, _ = re.WriteRune('^')
|
||||
} else {
|
||||
_, _ = re.WriteString("(^|/)")
|
||||
}
|
||||
consecutiveStars := 0
|
||||
insertStars := func() error {
|
||||
if consecutiveStars > 0 {
|
||||
switch consecutiveStars {
|
||||
case 1:
|
||||
_, _ = re.WriteString(`[^/]*`)
|
||||
case 2:
|
||||
_, _ = re.WriteString(`.*`)
|
||||
default:
|
||||
return fmt.Errorf("too many stars in %q", glob)
|
||||
}
|
||||
}
|
||||
consecutiveStars = 0
|
||||
return nil
|
||||
}
|
||||
inBraces := false
|
||||
inBrackets := 0
|
||||
slashed := false
|
||||
for _, c := range glob {
|
||||
if slashed {
|
||||
_, _ = re.WriteRune(c)
|
||||
slashed = false
|
||||
continue
|
||||
}
|
||||
if c != '*' {
|
||||
err := insertStars()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
if inBrackets > 0 {
|
||||
_, _ = re.WriteRune(c)
|
||||
if c == '[' {
|
||||
inBrackets++
|
||||
}
|
||||
if c == ']' {
|
||||
inBrackets--
|
||||
}
|
||||
continue
|
||||
}
|
||||
switch c {
|
||||
case '\\':
|
||||
_, _ = re.WriteRune(c)
|
||||
slashed = true
|
||||
case '*':
|
||||
consecutiveStars++
|
||||
case '?':
|
||||
_, _ = re.WriteString(`[^/]`)
|
||||
case '[':
|
||||
_, _ = re.WriteRune(c)
|
||||
inBrackets++
|
||||
case ']':
|
||||
return nil, fmt.Errorf("mismatched ']' in glob %q", glob)
|
||||
case '{':
|
||||
if inBraces {
|
||||
return nil, fmt.Errorf("can't nest '{' '}' in glob %q", glob)
|
||||
}
|
||||
inBraces = true
|
||||
_, _ = re.WriteRune('(')
|
||||
case '}':
|
||||
if !inBraces {
|
||||
return nil, fmt.Errorf("mismatched '{' and '}' in glob %q", glob)
|
||||
}
|
||||
_, _ = re.WriteRune(')')
|
||||
inBraces = false
|
||||
case ',':
|
||||
if inBraces {
|
||||
_, _ = re.WriteRune('|')
|
||||
} else {
|
||||
_, _ = re.WriteRune(c)
|
||||
}
|
||||
case '.', '+', '(', ')', '|', '^', '$': // regexp meta characters not dealt with above
|
||||
_, _ = re.WriteRune('\\')
|
||||
_, _ = re.WriteRune(c)
|
||||
default:
|
||||
_, _ = re.WriteRune(c)
|
||||
}
|
||||
}
|
||||
err := insertStars()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if inBrackets > 0 {
|
||||
return nil, fmt.Errorf("mismatched '[' and ']' in glob %q", glob)
|
||||
}
|
||||
if inBraces {
|
||||
return nil, fmt.Errorf("mismatched '{' and '}' in glob %q", glob)
|
||||
}
|
||||
_, _ = re.WriteRune('$')
|
||||
result, err := regexp.Compile(re.String())
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Bad glob pattern %q: %v (%q)", glob, err, re.String())
|
||||
}
|
||||
return result, nil
|
||||
}
|
||||
@@ -1,64 +0,0 @@
|
||||
package fs
|
||||
|
||||
import (
|
||||
"strings"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestGlobToRegexp(t *testing.T) {
|
||||
for _, test := range []struct {
|
||||
in string
|
||||
want string
|
||||
error string
|
||||
}{
|
||||
{``, `(^|/)$`, ``},
|
||||
{`potato`, `(^|/)potato$`, ``},
|
||||
{`potato,sausage`, `(^|/)potato,sausage$`, ``},
|
||||
{`/potato`, `^potato$`, ``},
|
||||
{`potato?sausage`, `(^|/)potato[^/]sausage$`, ``},
|
||||
{`potat[oa]`, `(^|/)potat[oa]$`, ``},
|
||||
{`potat[a-z]or`, `(^|/)potat[a-z]or$`, ``},
|
||||
{`potat[[:alpha:]]or`, `(^|/)potat[[:alpha:]]or$`, ``},
|
||||
{`'.' '+' '(' ')' '|' '^' '$'`, `(^|/)'\.' '\+' '\(' '\)' '\|' '\^' '\$'$`, ``},
|
||||
{`*.jpg`, `(^|/)[^/]*\.jpg$`, ``},
|
||||
{`a{b,c,d}e`, `(^|/)a(b|c|d)e$`, ``},
|
||||
{`potato**`, `(^|/)potato.*$`, ``},
|
||||
{`potato**sausage`, `(^|/)potato.*sausage$`, ``},
|
||||
{`*.p[lm]`, `(^|/)[^/]*\.p[lm]$`, ``},
|
||||
{`[\[\]]`, `(^|/)[\[\]]$`, ``},
|
||||
{`***potato`, `(^|/)`, `too many stars`},
|
||||
{`***`, `(^|/)`, `too many stars`},
|
||||
{`ab]c`, `(^|/)`, `mismatched ']'`},
|
||||
{`ab[c`, `(^|/)`, `mismatched '[' and ']'`},
|
||||
{`ab{{cd`, `(^|/)`, `can't nest`},
|
||||
{`ab{}}cd`, `(^|/)`, `mismatched '{' and '}'`},
|
||||
{`ab}c`, `(^|/)`, `mismatched '{' and '}'`},
|
||||
{`ab{c`, `(^|/)`, `mismatched '{' and '}'`},
|
||||
{`*.{jpg,png,gif}`, `(^|/)[^/]*\.(jpg|png|gif)$`, ``},
|
||||
{`[a--b]`, `(^|/)`, `Bad glob pattern`},
|
||||
{`a\*b`, `(^|/)a\*b$`, ``},
|
||||
{`a\\b`, `(^|/)a\\b$`, ``},
|
||||
} {
|
||||
gotRe, err := globToRegexp(test.in)
|
||||
if test.error == "" {
|
||||
if err != nil {
|
||||
t.Errorf("%q: not expecting error: %v", test.in, err)
|
||||
} else {
|
||||
got := gotRe.String()
|
||||
if test.want != got {
|
||||
t.Errorf("%q: want %q got %q", test.in, test.want, got)
|
||||
}
|
||||
}
|
||||
} else {
|
||||
if err == nil {
|
||||
t.Errorf("%q: expecting error but didn't get one", test.in)
|
||||
} else {
|
||||
got := err.Error()
|
||||
if !strings.Contains(got, test.error) {
|
||||
t.Errorf("%q: want error %q got %q", test.in, test.error, got)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
241
fs/hash.go
@@ -1,241 +0,0 @@
|
||||
package fs
|
||||
|
||||
import (
|
||||
"crypto/md5"
|
||||
"crypto/sha1"
|
||||
"encoding/hex"
|
||||
"fmt"
|
||||
"hash"
|
||||
"io"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// HashType indicates a standard hashing algorithm
|
||||
type HashType int
|
||||
|
||||
// ErrHashUnsupported should be returned by filesystem,
|
||||
// if it is requested to deliver an unsupported hash type.
|
||||
var ErrHashUnsupported = fmt.Errorf("hash type not supported")
|
||||
|
||||
const (
|
||||
// HashMD5 indicates MD5 support
|
||||
HashMD5 HashType = 1 << iota
|
||||
|
||||
// HashSHA1 indicates SHA-1 support
|
||||
HashSHA1
|
||||
|
||||
// HashNone indicates no hashes are supported
|
||||
HashNone HashType = 0
|
||||
)
|
||||
|
||||
// SupportedHashes returns a set of all the supported hashes by
|
||||
// HashStream and MultiHasher.
|
||||
var SupportedHashes = NewHashSet(HashMD5, HashSHA1)
|
||||
|
||||
// HashWidth returns the width in characters for any HashType
|
||||
var HashWidth = map[HashType]int{
|
||||
HashMD5: 32,
|
||||
HashSHA1: 40,
|
||||
}
|
||||
|
||||
// HashStream will calculate hashes of all supported hash types.
|
||||
func HashStream(r io.Reader) (map[HashType]string, error) {
|
||||
return HashStreamTypes(r, SupportedHashes)
|
||||
}
|
||||
|
||||
// HashStreamTypes will calculate hashes of the requested hash types.
|
||||
func HashStreamTypes(r io.Reader, set HashSet) (map[HashType]string, error) {
|
||||
hashers, err := hashFromTypes(set)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
_, err = io.Copy(hashToMultiWriter(hashers), r)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
var ret = make(map[HashType]string)
|
||||
for k, v := range hashers {
|
||||
ret[k] = hex.EncodeToString(v.Sum(nil))
|
||||
}
|
||||
return ret, nil
|
||||
}
|
||||
|
||||
// String returns a string representation of the hash type.
|
||||
// The function will panic if the hash type is unknown.
|
||||
func (h HashType) String() string {
|
||||
switch h {
|
||||
case HashNone:
|
||||
return "None"
|
||||
case HashMD5:
|
||||
return "MD5"
|
||||
case HashSHA1:
|
||||
return "SHA-1"
|
||||
default:
|
||||
err := fmt.Sprintf("internal error: unknown hash type: 0x%x", int(h))
|
||||
panic(err)
|
||||
}
|
||||
}
|
||||
|
||||
// hashFromTypes will return hashers for all the requested types.
|
||||
// The types must be a subset of SupportedHashes,
|
||||
// and this function must support all types.
|
||||
func hashFromTypes(set HashSet) (map[HashType]hash.Hash, error) {
|
||||
if !set.SubsetOf(SupportedHashes) {
|
||||
return nil, fmt.Errorf("Requested set %08x contains unknown hash types", int(set))
|
||||
}
|
||||
var hashers = make(map[HashType]hash.Hash)
|
||||
types := set.Array()
|
||||
for _, t := range types {
|
||||
switch t {
|
||||
case HashMD5:
|
||||
hashers[t] = md5.New()
|
||||
case HashSHA1:
|
||||
hashers[t] = sha1.New()
|
||||
default:
|
||||
err := fmt.Sprintf("internal error: Unsupported hash type %v", t)
|
||||
panic(err)
|
||||
}
|
||||
}
|
||||
return hashers, nil
|
||||
}
|
||||
|
||||
// hashToMultiWriter will return a set of hashers into a
|
||||
// single multiwriter, where one write will update all
|
||||
// the hashers.
|
||||
func hashToMultiWriter(h map[HashType]hash.Hash) io.Writer {
|
||||
// Convert the map of hashers to a slice of writers
|
||||
var w = make([]io.Writer, 0, len(h))
|
||||
for _, v := range h {
|
||||
w = append(w, v)
|
||||
}
|
||||
return io.MultiWriter(w...)
|
||||
}
|
||||
|
||||
// A MultiHasher will construct various hashes on
|
||||
// all incoming writes.
|
||||
type MultiHasher struct {
|
||||
io.Writer
|
||||
h map[HashType]hash.Hash // Hashes
|
||||
}
|
||||
|
||||
// NewMultiHasher will return a hash writer that will write all
|
||||
// supported hash types.
|
||||
func NewMultiHasher() *MultiHasher {
|
||||
h, err := NewMultiHasherTypes(SupportedHashes)
|
||||
if err != nil {
|
||||
panic("internal error: could not create multihasher")
|
||||
}
|
||||
return h
|
||||
}
|
||||
|
||||
// NewMultiHasherTypes will return a hash writer that will write
|
||||
// the requested hash types.
|
||||
func NewMultiHasherTypes(set HashSet) (*MultiHasher, error) {
|
||||
hashers, err := hashFromTypes(set)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
m := MultiHasher{h: hashers, Writer: hashToMultiWriter(hashers)}
|
||||
return &m, nil
|
||||
}
|
||||
|
||||
// Sums returns the sums of all accumulated hashes as hex encoded
|
||||
// strings.
|
||||
func (m *MultiHasher) Sums() map[HashType]string {
|
||||
dst := make(map[HashType]string)
|
||||
for k, v := range m.h {
|
||||
dst[k] = hex.EncodeToString(v.Sum(nil))
|
||||
}
|
||||
return dst
|
||||
}
|
||||
|
||||
// A HashSet Indicates one or more hash types.
|
||||
type HashSet int
|
||||
|
||||
// NewHashSet will create a new hash set with the hash types supplied
|
||||
func NewHashSet(t ...HashType) HashSet {
|
||||
h := HashSet(HashNone)
|
||||
return h.Add(t...)
|
||||
}
|
||||
|
||||
// Add one or more hash types to the set.
|
||||
// Returns the modified hash set.
|
||||
func (h *HashSet) Add(t ...HashType) HashSet {
|
||||
for _, v := range t {
|
||||
*h |= HashSet(v)
|
||||
}
|
||||
return *h
|
||||
}
|
||||
|
||||
// Contains returns true if the set contains the given hash type
|
||||
func (h HashSet) Contains(t HashType) bool {
|
||||
return int(h)&int(t) != 0
|
||||
}
|
||||
|
||||
// Overlap returns the overlapping hash types
|
||||
func (h HashSet) Overlap(t HashSet) HashSet {
|
||||
return HashSet(int(h) & int(t))
|
||||
}
|
||||
|
||||
// SubsetOf will return true if all types of h
|
||||
// is present in the set c
|
||||
func (h HashSet) SubsetOf(c HashSet) bool {
|
||||
return int(h)|int(c) == int(c)
|
||||
}
|
||||
|
||||
// GetOne will return a hash type.
|
||||
// Currently the first is returned, but it could be
|
||||
// improved to return the strongest.
|
||||
func (h HashSet) GetOne() HashType {
|
||||
v := int(h)
|
||||
i := uint(0)
|
||||
for v != 0 {
|
||||
if v&1 != 0 {
|
||||
return HashType(1 << i)
|
||||
}
|
||||
i++
|
||||
v >>= 1
|
||||
}
|
||||
return HashType(HashNone)
|
||||
}
|
||||
|
||||
// Array returns an array of all hash types in the set
|
||||
func (h HashSet) Array() (ht []HashType) {
|
||||
v := int(h)
|
||||
i := uint(0)
|
||||
for v != 0 {
|
||||
if v&1 != 0 {
|
||||
ht = append(ht, HashType(1<<i))
|
||||
}
|
||||
i++
|
||||
v >>= 1
|
||||
}
|
||||
return ht
|
||||
}
|
||||
|
||||
// Count returns the number of hash types in the set
|
||||
func (h HashSet) Count() int {
|
||||
if int(h) == 0 {
|
||||
return 0
|
||||
}
|
||||
// credit: https://code.google.com/u/arnehormann/
|
||||
x := uint64(h)
|
||||
x -= (x >> 1) & 0x5555555555555555
|
||||
x = (x>>2)&0x3333333333333333 + x&0x3333333333333333
|
||||
x += x >> 4
|
||||
x &= 0x0f0f0f0f0f0f0f0f
|
||||
x *= 0x0101010101010101
|
||||
return int(x >> 56)
|
||||
}
|
||||
|
||||
// String returns a string representation of the hash set.
|
||||
// The function will panic if it contains an unknown type.
|
||||
func (h HashSet) String() string {
|
||||
a := h.Array()
|
||||
var r []string
|
||||
for _, v := range a {
|
||||
r = append(r, v.String())
|
||||
}
|
||||
return "[" + strings.Join(r, ", ") + "]"
|
||||
}
|
||||
260
fs/hash_test.go
@@ -1,260 +0,0 @@
|
||||
package fs_test
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"io"
|
||||
"testing"
|
||||
|
||||
"github.com/ncw/rclone/fs"
|
||||
)
|
||||
|
||||
func TestHashSet(t *testing.T) {
|
||||
var h fs.HashSet
|
||||
|
||||
if h.Count() != 0 {
|
||||
t.Fatalf("expected empty set to have 0 elements, got %d", h.Count())
|
||||
}
|
||||
a := h.Array()
|
||||
if len(a) != 0 {
|
||||
t.Fatalf("expected empty slice, got %d", len(a))
|
||||
}
|
||||
|
||||
h = h.Add(fs.HashMD5)
|
||||
if h.Count() != 1 {
|
||||
t.Fatalf("expected 1 element, got %d", h.Count())
|
||||
}
|
||||
if h.GetOne() != fs.HashMD5 {
|
||||
t.Fatalf("expected HashMD5, got %v", h.GetOne())
|
||||
}
|
||||
a = h.Array()
|
||||
if len(a) != 1 {
|
||||
t.Fatalf("expected 1 element, got %d", len(a))
|
||||
}
|
||||
if a[0] != fs.HashMD5 {
|
||||
t.Fatalf("expected HashMD5, got %v", a[0])
|
||||
}
|
||||
|
||||
// Test overlap, with all hashes
|
||||
h = h.Overlap(fs.SupportedHashes)
|
||||
if h.Count() != 1 {
|
||||
t.Fatalf("expected 1 element, got %d", h.Count())
|
||||
}
|
||||
if h.GetOne() != fs.HashMD5 {
|
||||
t.Fatalf("expected HashMD5, got %v", h.GetOne())
|
||||
}
|
||||
if !h.SubsetOf(fs.SupportedHashes) {
|
||||
t.Fatalf("expected to be subset of all hashes")
|
||||
}
|
||||
if !h.SubsetOf(fs.NewHashSet(fs.HashMD5)) {
|
||||
t.Fatalf("expected to be subset of itself")
|
||||
}
|
||||
|
||||
h = h.Add(fs.HashSHA1)
|
||||
if h.Count() != 2 {
|
||||
t.Fatalf("expected 2 elements, got %d", h.Count())
|
||||
}
|
||||
one := h.GetOne()
|
||||
if !(one == fs.HashMD5 || one == fs.HashSHA1) {
|
||||
t.Fatalf("expected to be either MD5 or SHA1, got %v", one)
|
||||
}
|
||||
if !h.SubsetOf(fs.SupportedHashes) {
|
||||
t.Fatalf("expected to be subset of all hashes")
|
||||
}
|
||||
if h.SubsetOf(fs.NewHashSet(fs.HashMD5)) {
|
||||
t.Fatalf("did not expect to be subset of only MD5")
|
||||
}
|
||||
if h.SubsetOf(fs.NewHashSet(fs.HashSHA1)) {
|
||||
t.Fatalf("did not expect to be subset of only SHA1")
|
||||
}
|
||||
if !h.SubsetOf(fs.NewHashSet(fs.HashMD5, fs.HashSHA1)) {
|
||||
t.Fatalf("expected to be subset of MD5/SHA1")
|
||||
}
|
||||
a = h.Array()
|
||||
if len(a) != 2 {
|
||||
t.Fatalf("expected 2 elements, got %d", len(a))
|
||||
}
|
||||
|
||||
ol := h.Overlap(fs.NewHashSet(fs.HashMD5))
|
||||
if ol.Count() != 1 {
|
||||
t.Fatalf("expected 1 element overlap, got %d", ol.Count())
|
||||
}
|
||||
if !ol.Contains(fs.HashMD5) {
|
||||
t.Fatalf("expected overlap to be MD5, got %v", ol)
|
||||
}
|
||||
if ol.Contains(fs.HashSHA1) {
|
||||
t.Fatalf("expected overlap NOT to contain SHA1, got %v", ol)
|
||||
}
|
||||
|
||||
ol = h.Overlap(fs.NewHashSet(fs.HashMD5, fs.HashSHA1))
|
||||
if ol.Count() != 2 {
|
||||
t.Fatalf("expected 2 element overlap, got %d", ol.Count())
|
||||
}
|
||||
if !ol.Contains(fs.HashMD5) {
|
||||
t.Fatalf("expected overlap to contain MD5, got %v", ol)
|
||||
}
|
||||
if !ol.Contains(fs.HashSHA1) {
|
||||
t.Fatalf("expected overlap to contain SHA1, got %v", ol)
|
||||
}
|
||||
}
|
||||
|
||||
type hashTest struct {
|
||||
input []byte
|
||||
output map[fs.HashType]string
|
||||
}
|
||||
|
||||
var hashTestSet = []hashTest{
|
||||
{
|
||||
input: []byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14},
|
||||
output: map[fs.HashType]string{
|
||||
fs.HashMD5: "bf13fc19e5151ac57d4252e0e0f87abe",
|
||||
fs.HashSHA1: "3ab6543c08a75f292a5ecedac87ec41642d12166",
|
||||
},
|
||||
},
|
||||
// Empty data set
|
||||
{
|
||||
input: []byte{},
|
||||
output: map[fs.HashType]string{
|
||||
fs.HashMD5: "d41d8cd98f00b204e9800998ecf8427e",
|
||||
fs.HashSHA1: "da39a3ee5e6b4b0d3255bfef95601890afd80709",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
func TestMultiHasher(t *testing.T) {
|
||||
for _, test := range hashTestSet {
|
||||
mh := fs.NewMultiHasher()
|
||||
n, err := io.Copy(mh, bytes.NewBuffer(test.input))
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if int(n) != len(test.input) {
|
||||
t.Fatalf("copy mismatch: %d != %d", n, len(test.input))
|
||||
}
|
||||
sums := mh.Sums()
|
||||
for k, v := range sums {
|
||||
expect, ok := test.output[k]
|
||||
if !ok {
|
||||
t.Errorf("Unknown hash type %v, sum: %q", k, v)
|
||||
}
|
||||
if expect != v {
|
||||
t.Errorf("hash %v mismatch %q != %q", k, v, expect)
|
||||
}
|
||||
}
|
||||
// Test that all are present
|
||||
for k, v := range test.output {
|
||||
expect, ok := sums[k]
|
||||
if !ok {
|
||||
t.Errorf("did not calculate hash type %v, sum: %q", k, v)
|
||||
}
|
||||
if expect != v {
|
||||
t.Errorf("hash %d mismatch %q != %q", k, v, expect)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestMultiHasherTypes(t *testing.T) {
|
||||
h := fs.HashSHA1
|
||||
for _, test := range hashTestSet {
|
||||
mh, err := fs.NewMultiHasherTypes(fs.NewHashSet(h))
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
n, err := io.Copy(mh, bytes.NewBuffer(test.input))
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if int(n) != len(test.input) {
|
||||
t.Fatalf("copy mismatch: %d != %d", n, len(test.input))
|
||||
}
|
||||
sums := mh.Sums()
|
||||
if len(sums) != 1 {
|
||||
t.Fatalf("expected 1 sum, got %d", len(sums))
|
||||
}
|
||||
expect := test.output[h]
|
||||
if expect != sums[h] {
|
||||
t.Errorf("hash %v mismatch %q != %q", h, sums[h], expect)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestHashStream(t *testing.T) {
|
||||
for _, test := range hashTestSet {
|
||||
sums, err := fs.HashStream(bytes.NewBuffer(test.input))
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
for k, v := range sums {
|
||||
expect, ok := test.output[k]
|
||||
if !ok {
|
||||
t.Errorf("Unknown hash type %v, sum: %q", k, v)
|
||||
}
|
||||
if expect != v {
|
||||
t.Errorf("hash %v mismatch %q != %q", k, v, expect)
|
||||
}
|
||||
}
|
||||
// Test that all are present
|
||||
for k, v := range test.output {
|
||||
expect, ok := sums[k]
|
||||
if !ok {
|
||||
t.Errorf("did not calculate hash type %v, sum: %q", k, v)
|
||||
}
|
||||
if expect != v {
|
||||
t.Errorf("hash %v mismatch %q != %q", k, v, expect)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestHashStreamTypes(t *testing.T) {
|
||||
h := fs.HashSHA1
|
||||
for _, test := range hashTestSet {
|
||||
sums, err := fs.HashStreamTypes(bytes.NewBuffer(test.input), fs.NewHashSet(h))
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if len(sums) != 1 {
|
||||
t.Fatalf("expected 1 sum, got %d", len(sums))
|
||||
}
|
||||
expect := test.output[h]
|
||||
if expect != sums[h] {
|
||||
t.Errorf("hash %d mismatch %q != %q", h, sums[h], expect)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestHashSetStringer(t *testing.T) {
|
||||
h := fs.NewHashSet(fs.HashSHA1, fs.HashMD5)
|
||||
s := h.String()
|
||||
expect := "[MD5, SHA-1]"
|
||||
if s != expect {
|
||||
t.Errorf("unexpected stringer: was %q, expected %q", s, expect)
|
||||
}
|
||||
h = fs.NewHashSet(fs.HashSHA1)
|
||||
s = h.String()
|
||||
expect = "[SHA-1]"
|
||||
if s != expect {
|
||||
t.Errorf("unexpected stringer: was %q, expected %q", s, expect)
|
||||
}
|
||||
h = fs.NewHashSet()
|
||||
s = h.String()
|
||||
expect = "[]"
|
||||
if s != expect {
|
||||
t.Errorf("unexpected stringer: was %q, expected %q", s, expect)
|
||||
}
|
||||
}
|
||||
|
||||
func TestHashStringer(t *testing.T) {
|
||||
h := fs.HashMD5
|
||||
s := h.String()
|
||||
expect := "MD5"
|
||||
if s != expect {
|
||||
t.Errorf("unexpected stringer: was %q, expected %q", s, expect)
|
||||
}
|
||||
h = fs.HashNone
|
||||
s = h.String()
|
||||
expect = "None"
|
||||
if s != expect {
|
||||
t.Errorf("unexpected stringer: was %q, expected %q", s, expect)
|
||||
}
|
||||
}
|
||||
@@ -6,8 +6,7 @@ import (
|
||||
"time"
|
||||
)
|
||||
|
||||
// Limited defines a Fs which can only return the Objects passed in
|
||||
// from the Fs passed in
|
||||
// This defines a Limited Fs which can only return the Objects passed in from the Fs passed in
|
||||
type Limited struct {
|
||||
objects []Object
|
||||
fs Fs
|
||||
@@ -22,16 +21,6 @@ func NewLimited(fs Fs, objects ...Object) Fs {
|
||||
return f
|
||||
}
|
||||
|
||||
// Name is name of the remote (as passed into NewFs)
|
||||
func (f *Limited) Name() string {
|
||||
return f.fs.Name() // return name of underlying remote
|
||||
}
|
||||
|
||||
// Root is the root of the remote (as passed into NewFs)
|
||||
func (f *Limited) Root() string {
|
||||
return f.fs.Root() // return root of underlying remote
|
||||
}
|
||||
|
||||
// String returns a description of the FS
|
||||
func (f *Limited) String() string {
|
||||
return fmt.Sprintf("%s limited to %d objects", f.fs.String(), len(f.objects))
|
||||
@@ -49,14 +38,14 @@ func (f *Limited) List() ObjectsChan {
|
||||
return out
|
||||
}
|
||||
|
||||
// ListDir lists the Fs directories/buckets/containers into a channel
|
||||
// List the Fs directories/buckets/containers into a channel
|
||||
func (f *Limited) ListDir() DirChan {
|
||||
out := make(DirChan, Config.Checkers)
|
||||
close(out)
|
||||
return out
|
||||
}
|
||||
|
||||
// NewFsObject finds the Object at remote. Returns nil if can't be found
|
||||
// Find the Object at remote. Returns nil if can't be found
|
||||
func (f *Limited) NewFsObject(remote string) Object {
|
||||
for _, obj := range f.objects {
|
||||
if obj.Remote() == remote {
|
||||
@@ -71,25 +60,23 @@ func (f *Limited) NewFsObject(remote string) Object {
|
||||
// May create the object even if it returns an error - if so
|
||||
// will return the object and the error, otherwise will return
|
||||
// nil and the error
|
||||
func (f *Limited) Put(in io.Reader, src ObjectInfo) (Object, error) {
|
||||
remote := src.Remote()
|
||||
func (f *Limited) Put(in io.Reader, remote string, modTime time.Time, size int64) (Object, error) {
|
||||
obj := f.NewFsObject(remote)
|
||||
if obj == nil {
|
||||
return nil, fmt.Errorf("Can't create %q in limited fs", remote)
|
||||
}
|
||||
return obj, obj.Update(in, src)
|
||||
return obj, obj.Update(in, modTime, size)
|
||||
}
|
||||
|
||||
// Mkdir make the directory (container, bucket)
|
||||
// Make the directory (container, bucket)
|
||||
func (f *Limited) Mkdir() error {
|
||||
// All directories are already made - just ignore
|
||||
return nil
|
||||
}
|
||||
|
||||
// Rmdir removes the directory (container, bucket) if empty
|
||||
// Remove the directory (container, bucket) if empty
|
||||
func (f *Limited) Rmdir() error {
|
||||
// Ignore this in a limited fs
|
||||
return nil
|
||||
return fmt.Errorf("Can't rmdir in limited fs")
|
||||
}
|
||||
|
||||
// Precision of the ModTimes in this Fs
|
||||
@@ -97,54 +84,5 @@ func (f *Limited) Precision() time.Duration {
|
||||
return f.fs.Precision()
|
||||
}
|
||||
|
||||
// Hashes returns the supported hash sets.
|
||||
func (f *Limited) Hashes() HashSet {
|
||||
return f.fs.Hashes()
|
||||
}
|
||||
|
||||
// Copy src to this remote using server side copy operations.
|
||||
//
|
||||
// This is stored with the remote path given
|
||||
//
|
||||
// It returns the destination Object and a possible error
|
||||
//
|
||||
// Will only be called if src.Fs().Name() == f.Name()
|
||||
//
|
||||
// If it isn't possible then return fs.ErrorCantCopy
|
||||
func (f *Limited) Copy(src Object, remote string) (Object, error) {
|
||||
fCopy, ok := f.fs.(Copier)
|
||||
if !ok {
|
||||
return nil, ErrorCantCopy
|
||||
}
|
||||
return fCopy.Copy(src, remote)
|
||||
}
|
||||
|
||||
// Move src to this remote using server side move operations.
|
||||
//
|
||||
// This is stored with the remote path given
|
||||
//
|
||||
// It returns the destination Object and a possible error
|
||||
//
|
||||
// Will only be called if src.Fs().Name() == f.Name()
|
||||
//
|
||||
// If it isn't possible then return fs.ErrorCantMove
|
||||
func (f *Limited) Move(src Object, remote string) (Object, error) {
|
||||
fMove, ok := f.fs.(Mover)
|
||||
if !ok {
|
||||
return nil, ErrorCantMove
|
||||
}
|
||||
return fMove.Move(src, remote)
|
||||
}
|
||||
|
||||
// UnWrap returns the Fs that this Fs is wrapping
|
||||
func (f *Limited) UnWrap() Fs {
|
||||
return f.fs
|
||||
}
|
||||
|
||||
// Check the interfaces are satisfied
|
||||
var (
|
||||
_ Fs = (*Limited)(nil)
|
||||
_ Copier = (*Limited)(nil)
|
||||
_ Mover = (*Limited)(nil)
|
||||
_ UnWrapper = (*Limited)(nil)
|
||||
)
|
||||
var _ Fs = &Limited{}
|
||||
|
||||
@@ -1,61 +0,0 @@
|
||||
// A logging http transport

package fs

import (
	"log"
	"net/http"
	"net/http/httputil"
)

const (
	separatorReq  = ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>"
	separatorResp = "<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<"
)

// LoggedTransport is an http transport which logs the traffic
type LoggedTransport struct {
	wrapped http.RoundTripper
	logBody bool
}

// NewLoggedTransport wraps the transport passed in and logs all roundtrips
// including the body if logBody is set.
func NewLoggedTransport(transport http.RoundTripper, logBody bool) *LoggedTransport {
	return &LoggedTransport{
		wrapped: transport,
		logBody: logBody,
	}
}

// CancelRequest cancels an in-flight request by closing its
// connection. CancelRequest should only be called after RoundTrip has
// returned.
func (t *LoggedTransport) CancelRequest(req *http.Request) {
	if wrapped, ok := t.wrapped.(interface {
		CancelRequest(*http.Request)
	}); ok {
		log.Printf("CANCEL REQUEST %v", req)
		wrapped.CancelRequest(req)
	}
}

// RoundTrip implements the RoundTripper interface.
func (t *LoggedTransport) RoundTrip(req *http.Request) (resp *http.Response, err error) {
	buf, _ := httputil.DumpRequestOut(req, t.logBody)
	log.Println(separatorReq)
	log.Println("HTTP REQUEST")
	log.Println(string(buf))
	log.Println(separatorReq)
	resp, err = t.wrapped.RoundTrip(req)
	log.Println(separatorResp)
	log.Println("HTTP RESPONSE")
	if err != nil {
		log.Printf("Error: %v\n", err)
	} else {
		buf, _ = httputil.DumpResponse(resp, t.logBody)
		log.Println(string(buf))
	}
	log.Println(separatorResp)
	return resp, err
}
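Aside: a minimal sketch (the `main` package and target URL are placeholders) of how the logging transport above might be wired into an `http.Client`, assuming the `github.com/ncw/rclone/fs` import path used elsewhere in this diff:

    package main

    import (
        "net/http"

        "github.com/ncw/rclone/fs"
    )

    func main() {
        // Wrap the default transport so every request and response is dumped
        // to the log, including bodies (second argument).
        client := &http.Client{
            Transport: fs.NewLoggedTransport(http.DefaultTransport, true),
        }
        resp, err := client.Get("https://example.com/") // placeholder URL
        if err != nil {
            return
        }
        defer resp.Body.Close()
    }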
|
||||
@@ -1,146 +0,0 @@
|
||||
// +build ignore
|
||||
|
||||
// Build a directory structure with the required number of files in
|
||||
//
|
||||
// Run with go run make_test_files.go [flag] <directory>
|
||||
package main
|
||||
|
||||
import (
|
||||
cryptrand "crypto/rand"
|
||||
"flag"
|
||||
"io"
|
||||
"log"
|
||||
"math/rand"
|
||||
"os"
|
||||
"path/filepath"
|
||||
)
|
||||
|
||||
var (
|
||||
// Flags
|
||||
numberOfFiles = flag.Int("n", 1000, "Number of files to create")
|
||||
averageFilesPerDirectory = flag.Int("files-per-directory", 10, "Average number of files per directory")
|
||||
maxDepth = flag.Int("max-depth", 10, "Maximum depth of directory hierarchy")
|
||||
minFileSize = flag.Int64("min-size", 0, "Minimum size of file to create")
|
||||
maxFileSize = flag.Int64("max-size", 100, "Maximum size of files to create")
|
||||
minFileNameLength = flag.Int("min-name-length", 4, "Minimum size of file to create")
|
||||
maxFileNameLength = flag.Int("max-name-length", 12, "Maximum size of files to create")
|
||||
|
||||
directoriesToCreate int
|
||||
totalDirectories int
|
||||
fileNames = map[string]struct{}{} // keep a note of which file name we've used already
|
||||
)
|
||||
|
||||
// randomString create a random string for test purposes
|
||||
func randomString(n int) string {
|
||||
const (
|
||||
vowel = "aeiou"
|
||||
consonant = "bcdfghjklmnpqrstvwxyz"
|
||||
digit = "0123456789"
|
||||
)
|
||||
pattern := []string{consonant, vowel, consonant, vowel, consonant, vowel, consonant, digit}
|
||||
out := make([]byte, n)
|
||||
p := 0
|
||||
for i := range out {
|
||||
source := pattern[p]
|
||||
p = (p + 1) % len(pattern)
|
||||
out[i] = source[rand.Intn(len(source))]
|
||||
}
|
||||
return string(out)
|
||||
}
|
||||
|
||||
// fileName creates a unique random file or directory name
|
||||
func fileName() (name string) {
|
||||
for {
|
||||
length := rand.Intn(*maxFileNameLength-*minFileNameLength) + *minFileNameLength
|
||||
name = randomString(length)
|
||||
if _, found := fileNames[name]; !found {
|
||||
break
|
||||
}
|
||||
}
|
||||
fileNames[name] = struct{}{}
|
||||
return name
|
||||
}
|
||||
|
||||
// dir is a directory in the directory heirachy being built up
|
||||
type dir struct {
|
||||
name string
|
||||
depth int
|
||||
children []*dir
|
||||
parent *dir
|
||||
}
|
||||
|
||||
// Create a random directory heirachy under d
|
||||
func (d *dir) createDirectories() {
|
||||
for totalDirectories < directoriesToCreate {
|
||||
newDir := &dir{
|
||||
name: fileName(),
|
||||
depth: d.depth + 1,
|
||||
parent: d,
|
||||
}
|
||||
d.children = append(d.children, newDir)
|
||||
totalDirectories++
|
||||
switch rand.Intn(4) {
|
||||
case 0:
|
||||
if d.depth < *maxDepth {
|
||||
newDir.createDirectories()
|
||||
}
|
||||
case 1:
|
||||
return
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// list the directory heirachy
|
||||
func (d *dir) list(path string, output []string) []string {
|
||||
dirPath := path + "/" + d.name
|
||||
output = append(output, dirPath)
|
||||
for _, subDir := range d.children {
|
||||
output = subDir.list(dirPath, output)
|
||||
}
|
||||
return output
|
||||
}
|
||||
|
||||
// writeFile writes a random file at dir/name
|
||||
func writeFile(dir, name string) {
|
||||
err := os.MkdirAll(dir, 0777)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to make directory %q: %v", dir, err)
|
||||
}
|
||||
path := filepath.Join(dir, name)
|
||||
fd, err := os.Create(path)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to open file %q: %v", path, err)
|
||||
}
|
||||
size := rand.Int63n(*maxFileSize-*minFileSize) + *minFileSize
|
||||
_, err = io.CopyN(fd, cryptrand.Reader, size)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to write %v bytes to file %q: %v", size, path, err)
|
||||
}
|
||||
err = fd.Close()
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to close file %q: %v", path, err)
|
||||
}
|
||||
}
|
||||
|
||||
func main() {
|
||||
flag.Parse()
|
||||
args := flag.Args()
|
||||
if len(args) != 1 {
|
||||
log.Fatalf("Require 1 directory argument")
|
||||
}
|
||||
outputDirectory := args[0]
|
||||
log.Printf("Output dir %q", outputDirectory)
|
||||
|
||||
directoriesToCreate = *numberOfFiles / *averageFilesPerDirectory
|
||||
log.Printf("directoriesToCreate %v", directoriesToCreate)
|
||||
root := &dir{name: outputDirectory, depth: 1}
|
||||
for totalDirectories < directoriesToCreate {
|
||||
root.createDirectories()
|
||||
}
|
||||
dirs := root.list("", []string{})
|
||||
for i := 0; i < *numberOfFiles; i++ {
|
||||
dir := dirs[rand.Intn(len(dirs))]
|
||||
writeFile(dir, fileName())
|
||||
}
|
||||
}
|
||||
1056
fs/operations.go
File diff suppressed because it is too large
222
fs/test_all.go
@@ -1,222 +0,0 @@
|
||||
// +build ignore
|
||||
|
||||
// Run tests for all the remotes
|
||||
//
|
||||
// Run with go run test_all.go
|
||||
package main
|
||||
|
||||
import (
|
||||
"flag"
|
||||
"log"
|
||||
"os"
|
||||
"os/exec"
|
||||
"runtime"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/ncw/rclone/fs"
|
||||
_ "github.com/ncw/rclone/fs/all" // import all fs
|
||||
"github.com/ncw/rclone/fstest"
|
||||
)
|
||||
|
||||
var (
|
||||
remotes = []string{
|
||||
"TestSwift:",
|
||||
"TestS3:",
|
||||
"TestDrive:",
|
||||
"TestGoogleCloudStorage:",
|
||||
"TestDropbox:",
|
||||
"TestAmazonCloudDrive:",
|
||||
"TestOneDrive:",
|
||||
"TestHubic:",
|
||||
"TestB2:",
|
||||
"TestYandex:",
|
||||
}
|
||||
binary = "fs.test"
|
||||
// Flags
|
||||
maxTries = flag.Int("maxtries", 3, "Number of times to try each test")
|
||||
runTests = flag.String("run", "", "Comma separated list of remotes to test, eg 'TestSwift:,TestS3'")
|
||||
verbose = flag.Bool("verbose", false, "Run the tests with -v")
|
||||
clean = flag.Bool("clean", false, "Instead of testing, clean all left over test directories")
|
||||
runOnly = flag.String("run-only", "", "Run only those tests matching the regexp supplied")
|
||||
)
|
||||
|
||||
// test holds info about a running test
|
||||
type test struct {
|
||||
remote string
|
||||
subdir bool
|
||||
cmdLine []string
|
||||
cmdString string
|
||||
try int
|
||||
err error
|
||||
output []byte
|
||||
}
|
||||
|
||||
// newTest creates a new test
|
||||
func newTest(remote string, subdir bool) *test {
|
||||
t := &test{
|
||||
remote: remote,
|
||||
subdir: subdir,
|
||||
cmdLine: []string{"./" + binary, "-remote", remote},
|
||||
try: 1,
|
||||
}
|
||||
if *verbose {
|
||||
t.cmdLine = append(t.cmdLine, "-test.v")
|
||||
}
|
||||
if *runOnly != "" {
|
||||
t.cmdLine = append(t.cmdLine, "-test.run", *runOnly)
|
||||
}
|
||||
if subdir {
|
||||
t.cmdLine = append(t.cmdLine, "-subdir")
|
||||
}
|
||||
t.cmdString = strings.Join(t.cmdLine, " ")
|
||||
return t
|
||||
}
|
||||
|
||||
// trial runs a single test
|
||||
func (t *test) trial() {
|
||||
log.Printf("%q - Starting (try %d/%d)", t.cmdString, t.try, *maxTries)
|
||||
cmd := exec.Command(t.cmdLine[0], t.cmdLine[1:]...)
|
||||
start := time.Now()
|
||||
t.output, t.err = cmd.CombinedOutput()
|
||||
duration := time.Since(start)
|
||||
if t.passed() {
|
||||
log.Printf("%q - Finished OK in %v (try %d/%d)", t.cmdString, duration, t.try, *maxTries)
|
||||
} else {
|
||||
log.Printf("%q - Finished ERROR in %v (try %d/%d): %v", t.cmdString, duration, t.try, *maxTries, t.err)
|
||||
}
|
||||
}
|
||||
|
||||
// cleanFs runs a single clean fs for left over directories
|
||||
func (t *test) cleanFs() error {
|
||||
f, err := fs.NewFs(t.remote)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
for dir := range f.ListDir() {
|
||||
if fstest.MatchTestRemote.MatchString(dir.Name) {
|
||||
log.Printf("Purging %s%s", t.remote, dir.Name)
|
||||
dir, err := fs.NewFs(t.remote + dir.Name)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = fs.Purge(dir)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// clean runs a single clean on a fs for left over directories
|
||||
func (t *test) clean() {
|
||||
log.Printf("%q - Starting clean (try %d/%d)", t.remote, t.try, *maxTries)
|
||||
start := time.Now()
|
||||
t.err = t.cleanFs()
|
||||
if t.err != nil {
|
||||
log.Printf("%q - Failed to purge %v", t.remote, t.err)
|
||||
}
|
||||
duration := time.Since(start)
|
||||
if t.passed() {
|
||||
log.Printf("%q - Finished OK in %v (try %d/%d)", t.cmdString, duration, t.try, *maxTries)
|
||||
} else {
|
||||
log.Printf("%q - Finished ERROR in %v (try %d/%d): %v", t.cmdString, duration, t.try, *maxTries, t.err)
|
||||
}
|
||||
}
|
||||
|
||||
// passed returns true if the test passed
|
||||
func (t *test) passed() bool {
|
||||
return t.err == nil
|
||||
}
|
||||
|
||||
// run runs all the trials for this test
|
||||
func (t *test) run(result chan<- *test) {
|
||||
for t.try = 1; t.try <= *maxTries; t.try++ {
|
||||
if *clean {
|
||||
if !t.subdir {
|
||||
t.clean()
|
||||
}
|
||||
} else {
|
||||
t.trial()
|
||||
}
|
||||
if t.passed() {
|
||||
break
|
||||
}
|
||||
}
|
||||
if !t.passed() {
|
||||
log.Println("------------------------------------------------------------")
|
||||
log.Println(string(t.output))
|
||||
log.Println("------------------------------------------------------------")
|
||||
}
|
||||
result <- t
|
||||
}
|
||||
|
||||
// makeTestBinary makes the binary we will run
|
||||
func makeTestBinary() {
|
||||
if runtime.GOOS == "windows" {
|
||||
binary += ".exe"
|
||||
}
|
||||
log.Printf("Making test binary %q", binary)
|
||||
err := exec.Command("go", "test", "-c", "-o", binary).Run()
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to make test binary: %v", err)
|
||||
}
|
||||
if _, err := os.Stat(binary); err != nil {
|
||||
log.Fatalf("Couldn't find test binary %q", binary)
|
||||
}
|
||||
}
|
||||
|
||||
// removeTestBinary removes the binary made in makeTestBinary
|
||||
func removeTestBinary() {
|
||||
err := os.Remove(binary) // Delete the binary when finished
|
||||
if err != nil {
|
||||
log.Printf("Error removing test binary %q: %v", binary, err)
|
||||
}
|
||||
}
|
||||
|
||||
func main() {
|
||||
flag.Parse()
|
||||
if *runTests != "" {
|
||||
remotes = strings.Split(*runTests, ",")
|
||||
}
|
||||
log.Printf("Testing remotes: %s", strings.Join(remotes, ", "))
|
||||
|
||||
start := time.Now()
|
||||
if *clean {
|
||||
fs.LoadConfig()
|
||||
} else {
|
||||
makeTestBinary()
|
||||
defer removeTestBinary()
|
||||
}
|
||||
|
||||
// start the tests
|
||||
results := make(chan *test, 8)
|
||||
awaiting := 0
|
||||
for _, remote := range remotes {
|
||||
awaiting += 2
|
||||
go newTest(remote, false).run(results)
|
||||
go newTest(remote, true).run(results)
|
||||
}
|
||||
|
||||
// Wait for the tests to finish
|
||||
var failed []*test
|
||||
for ; awaiting > 0; awaiting-- {
|
||||
t := <-results
|
||||
if !t.passed() {
|
||||
failed = append(failed, t)
|
||||
}
|
||||
}
|
||||
duration := time.Since(start)
|
||||
|
||||
// Summarise results
|
||||
if len(failed) == 0 {
|
||||
log.Printf("PASS: All tests finished OK in %v", duration)
|
||||
} else {
|
||||
log.Printf("FAIL: %d tests failed in %v", len(failed), duration)
|
||||
for _, t := range failed {
|
||||
log.Printf(" * %s", t.cmdString)
|
||||
}
|
||||
os.Exit(1)
|
||||
}
|
||||
}
|
||||
4
fs/testdata/enc-invalid.conf
vendored
4
fs/testdata/enc-invalid.conf
vendored
@@ -1,4 +0,0 @@
|
||||
# Encrypted rclone configuration File
|
||||
|
||||
RCLONE_ENCRYPT_V0:
|
||||
b5Uk6mE3cUn5Wb8xiWYnVBAxXUirAaEG1PO/GIDiO9274AOæøå+Yj790BwJA4d2y7lNkmHt4nJwIsoueFvUYmm7RDyzER8IA3XOCrjzl3OUcczZqcplk5JfBdhxMZpt1aGYWUdle1IgO/kAFne6sLD6IuxPySEb
|
||||
4
fs/testdata/enc-short.conf
vendored
4
fs/testdata/enc-short.conf
vendored
@@ -1,4 +0,0 @@
|
||||
# Encrypted rclone configuration File
|
||||
|
||||
RCLONE_ENCRYPT_V0:
|
||||
b5Uk6mE3cUn5Wb8xi
|
||||
4
fs/testdata/enc-too-new.conf
vendored
4
fs/testdata/enc-too-new.conf
vendored
@@ -1,4 +0,0 @@
|
||||
# Encrypted rclone configuration File
|
||||
|
||||
RCLONE_ENCRYPT_V1:
|
||||
b5Uk6mE3cUn5Wb8xiWYnVBAxXUirAaEG1PO/GIDiO9274AO+Yj790BwJA4d2y7lNkmHt4nJwIsoueFvUYmm7RDyzER8IA3XOCrjzl3OUcczZqcplk5JfBdhxMZpt1aGYWUdle1IgO/kAFne6sLD6IuxPySEb
|
||||
4
fs/testdata/encrypted.conf
vendored
4
fs/testdata/encrypted.conf
vendored
@@ -1,4 +0,0 @@
|
||||
# Encrypted rclone configuration File
|
||||
|
||||
RCLONE_ENCRYPT_V0:
|
||||
b5Uk6mE3cUn5Wb8xiWYnVBAxXUirAaEG1PO/GIDiO9274AO+Yj790BwJA4d2y7lNkmHt4nJwIsoueFvUYmm7RDyzER8IA3XOCrjzl3OUcczZqcplk5JfBdhxMZpt1aGYWUdle1IgO/kAFne6sLD6IuxPySEb
|
||||
12
fs/testdata/plain.conf
vendored
12
fs/testdata/plain.conf
vendored
@@ -1,12 +0,0 @@
|
||||
[RCLONE_ENCRYPT_V0]
|
||||
type = local
|
||||
nounc = true
|
||||
|
||||
[nounc]
|
||||
type = local
|
||||
nounc = true
|
||||
|
||||
|
||||
[unc]
|
||||
type = local
|
||||
nounc = false
|
||||
@@ -1,4 +1,3 @@
|
||||
package fs
|
||||
|
||||
// Version of rclone
|
||||
const Version = "v1.29"
|
||||
const Version = "v1.02"
|
||||
|
||||
331
fstest/fstest.go
331
fstest/fstest.go
@@ -1,331 +0,0 @@
|
||||
// Package fstest provides utilities for testing the Fs
|
||||
package fstest
|
||||
|
||||
// FIXME put name of test FS in Fs structure
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"log"
|
||||
"math/rand"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/ncw/rclone/fs"
|
||||
)
|
||||
|
||||
var (
|
||||
// MatchTestRemote matches the remote names used for testing
|
||||
MatchTestRemote = regexp.MustCompile(`^rclone-test-[abcdefghijklmnopqrstuvwxyz0123456789]{24}$`)
|
||||
)
|
||||
|
||||
// Seed the random number generator
|
||||
func init() {
|
||||
rand.Seed(time.Now().UnixNano())
|
||||
|
||||
}
|
||||
|
||||
// Item represents an item for checking
|
||||
type Item struct {
|
||||
Path string
|
||||
Hashes map[fs.HashType]string
|
||||
ModTime time.Time
|
||||
Size int64
|
||||
WinPath string
|
||||
}
|
||||
|
||||
// NewItem creates an item from a string content
|
||||
func NewItem(Path, Content string, modTime time.Time) Item {
|
||||
i := Item{
|
||||
Path: Path,
|
||||
ModTime: modTime,
|
||||
Size: int64(len(Content)),
|
||||
}
|
||||
hash := fs.NewMultiHasher()
|
||||
buf := bytes.NewBufferString(Content)
|
||||
_, err := io.Copy(hash, buf)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to create item: %v", err)
|
||||
}
|
||||
i.Hashes = hash.Sums()
|
||||
return i
|
||||
}
|
||||
|
||||
// CheckTimeEqualWithPrecision checks the times are equal within the
|
||||
// precision, returns the delta and a flag
|
||||
func CheckTimeEqualWithPrecision(t0, t1 time.Time, precision time.Duration) (time.Duration, bool) {
|
||||
dt := t0.Sub(t1)
|
||||
if dt >= precision || dt <= -precision {
|
||||
return dt, false
|
||||
}
|
||||
return dt, true
|
||||
}
|
||||
|
||||
// CheckModTime checks the mod time to the given precision
|
||||
func (i *Item) CheckModTime(t *testing.T, obj fs.Object, modTime time.Time, precision time.Duration) {
|
||||
dt, ok := CheckTimeEqualWithPrecision(modTime, i.ModTime, precision)
|
||||
if !ok {
|
||||
t.Errorf("%s: Modification time difference too big |%s| > %s (%s vs %s) (precision %s)", obj.Remote(), dt, precision, modTime, i.ModTime, precision)
|
||||
}
|
||||
}
|
||||
|
||||
// CheckHashes checks all the hashes the object supports are correct
|
||||
func (i *Item) CheckHashes(t *testing.T, obj fs.Object) {
|
||||
if obj == nil {
|
||||
t.Fatalf("Object is nil")
|
||||
}
|
||||
types := obj.Fs().Hashes().Array()
|
||||
for _, hash := range types {
|
||||
// Check attributes
|
||||
sum, err := obj.Hash(hash)
|
||||
if err != nil {
|
||||
t.Fatalf("%s: Failed to read hash %v for %q: %v", obj.Fs().String(), hash, obj.Remote(), err)
|
||||
}
|
||||
if !fs.HashEquals(i.Hashes[hash], sum) {
|
||||
t.Errorf("%s/%s: %v hash incorrect - expecting %q got %q", obj.Fs().String(), obj.Remote(), hash, i.Hashes[hash], sum)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Check checks all the attributes of the object are correct
|
||||
func (i *Item) Check(t *testing.T, obj fs.Object, precision time.Duration) {
|
||||
i.CheckHashes(t, obj)
|
||||
if i.Size != obj.Size() {
|
||||
t.Errorf("%s/%s: Size incorrect - expecting %d got %d", obj.Fs().String(), obj.Remote(), i.Size, obj.Size())
|
||||
}
|
||||
i.CheckModTime(t, obj, obj.ModTime(), precision)
|
||||
}
|
||||
|
||||
// Items represents all items for checking
|
||||
type Items struct {
|
||||
byName map[string]*Item
|
||||
byNameAlt map[string]*Item
|
||||
items []Item
|
||||
}
|
||||
|
||||
// NewItems makes an Items
|
||||
func NewItems(items []Item) *Items {
|
||||
is := &Items{
|
||||
byName: make(map[string]*Item),
|
||||
byNameAlt: make(map[string]*Item),
|
||||
items: items,
|
||||
}
|
||||
// Fill up byName
|
||||
for i := range items {
|
||||
is.byName[items[i].Path] = &items[i]
|
||||
is.byNameAlt[items[i].WinPath] = &items[i]
|
||||
}
|
||||
return is
|
||||
}
|
||||
|
||||
// Find checks off an item
|
||||
func (is *Items) Find(t *testing.T, obj fs.Object, precision time.Duration) {
|
||||
i, ok := is.byName[obj.Remote()]
|
||||
if !ok {
|
||||
i, ok = is.byNameAlt[obj.Remote()]
|
||||
if !ok {
|
||||
t.Errorf("Unexpected file %q", obj.Remote())
|
||||
return
|
||||
}
|
||||
}
|
||||
delete(is.byName, i.Path)
|
||||
delete(is.byName, i.WinPath)
|
||||
i.Check(t, obj, precision)
|
||||
}
|
||||
|
||||
// Done checks all finished
|
||||
func (is *Items) Done(t *testing.T) {
|
||||
if len(is.byName) != 0 {
|
||||
for name := range is.byName {
|
||||
t.Logf("Not found %q", name)
|
||||
}
|
||||
t.Errorf("%d objects not found", len(is.byName))
|
||||
}
|
||||
}
|
||||
|
||||
// CheckListingWithPrecision checks the fs to see if it has the
|
||||
// expected contents with the given precision.
|
||||
func CheckListingWithPrecision(t *testing.T, f fs.Fs, items []Item, precision time.Duration) {
|
||||
is := NewItems(items)
|
||||
oldErrors := fs.Stats.GetErrors()
|
||||
var objs []fs.Object
|
||||
const retries = 6
|
||||
sleep := time.Second / 2
|
||||
for i := 1; i <= retries; i++ {
|
||||
objs = nil
|
||||
for obj := range f.List() {
|
||||
objs = append(objs, obj)
|
||||
}
|
||||
if len(objs) == len(items) {
|
||||
// Put an extra sleep in if we did any retries just to make sure it really
|
||||
// is consistent (here is looking at you Amazon Cloud Drive!)
|
||||
if i != 1 {
|
||||
extraSleep := 5*time.Second + sleep
|
||||
t.Logf("Sleeping for %v just to make sure", extraSleep)
|
||||
time.Sleep(extraSleep)
|
||||
}
|
||||
break
|
||||
}
|
||||
sleep *= 2
|
||||
t.Logf("Sleeping for %v for list eventual consistency: %d/%d", sleep, i, retries)
|
||||
time.Sleep(sleep)
|
||||
}
|
||||
for _, obj := range objs {
|
||||
if obj == nil {
|
||||
t.Errorf("Unexpected nil in List()")
|
||||
continue
|
||||
}
|
||||
is.Find(t, obj, precision)
|
||||
}
|
||||
is.Done(t)
|
||||
// Don't notice an error when listing an empty directory
|
||||
if len(items) == 0 && oldErrors == 0 && fs.Stats.GetErrors() == 1 {
|
||||
fs.Stats.ResetErrors()
|
||||
}
|
||||
}
|
||||
|
||||
// CheckListing checks the fs to see if it has the expected contents
|
||||
func CheckListing(t *testing.T, f fs.Fs, items []Item) {
|
||||
precision := f.Precision()
|
||||
CheckListingWithPrecision(t, f, items, precision)
|
||||
}
|
||||
|
||||
// CheckItems checks the fs to see if it has only the items passed in
|
||||
// using a precision of fs.Config.ModifyWindow
|
||||
func CheckItems(t *testing.T, f fs.Fs, items ...Item) {
|
||||
CheckListingWithPrecision(t, f, items, fs.Config.ModifyWindow)
|
||||
}
|
||||
|
||||
// Time parses a time string or logs a fatal error
|
||||
func Time(timeString string) time.Time {
|
||||
t, err := time.Parse(time.RFC3339Nano, timeString)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to parse time %q: %v", timeString, err)
|
||||
}
|
||||
return t
|
||||
}
|
||||
|
||||
// RandomString create a random string for test purposes
|
||||
func RandomString(n int) string {
|
||||
const (
|
||||
vowel = "aeiou"
|
||||
consonant = "bcdfghjklmnpqrstvwxyz"
|
||||
digit = "0123456789"
|
||||
)
|
||||
pattern := []string{consonant, vowel, consonant, vowel, consonant, vowel, consonant, digit}
|
||||
out := make([]byte, n)
|
||||
p := 0
|
||||
for i := range out {
|
||||
source := pattern[p]
|
||||
p = (p + 1) % len(pattern)
|
||||
out[i] = source[rand.Intn(len(source))]
|
||||
}
|
||||
return string(out)
|
||||
}
|
||||
|
||||
// LocalRemote creates a temporary directory name for local remotes
|
||||
func LocalRemote() (path string, err error) {
|
||||
path, err = ioutil.TempDir("", "rclone")
|
||||
if err == nil {
|
||||
// Now remove the directory
|
||||
err = os.Remove(path)
|
||||
}
|
||||
path = filepath.ToSlash(path)
|
||||
return
|
||||
}
|
||||
|
||||
// RandomRemoteName makes a random bucket or subdirectory name
|
||||
//
|
||||
// Returns a random remote name plus the leaf name
|
||||
func RandomRemoteName(remoteName string) (string, string, error) {
|
||||
var err error
|
||||
var leafName string
|
||||
|
||||
// Make a directory if remote name is null
|
||||
if remoteName == "" {
|
||||
remoteName, err = LocalRemote()
|
||||
if err != nil {
|
||||
return "", "", err
|
||||
}
|
||||
} else {
|
||||
if !strings.HasSuffix(remoteName, ":") {
|
||||
remoteName += "/"
|
||||
}
|
||||
leafName = "rclone-test-" + RandomString(24)
|
||||
if !MatchTestRemote.MatchString(leafName) {
|
||||
log.Fatalf("%q didn't match the test remote name regexp", leafName)
|
||||
}
|
||||
remoteName += leafName
|
||||
}
|
||||
return remoteName, leafName, nil
|
||||
}
|
||||
|
||||
// RandomRemote makes a random bucket or subdirectory on the remote
|
||||
//
|
||||
// Call the finalise function returned to Purge the fs at the end (and
|
||||
// the parent if necessary)
|
||||
func RandomRemote(remoteName string, subdir bool) (fs.Fs, func(), error) {
|
||||
var err error
|
||||
var parentRemote fs.Fs
|
||||
|
||||
remoteName, _, err = RandomRemoteName(remoteName)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
if subdir {
|
||||
parentRemote, err = fs.NewFs(remoteName)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
remoteName += "/rclone-test-subdir-" + RandomString(8)
|
||||
}
|
||||
|
||||
remote, err := fs.NewFs(remoteName)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
|
||||
finalise := func() {
|
||||
_ = fs.Purge(remote) // ignore error
|
||||
if parentRemote != nil {
|
||||
err = fs.Purge(parentRemote) // ignore error
|
||||
if err != nil {
|
||||
log.Printf("Failed to purge %v: %v", parentRemote, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return remote, finalise, nil
|
||||
}
|
||||
|
||||
// TestMkdir tests Mkdir works
|
||||
func TestMkdir(t *testing.T, remote fs.Fs) {
|
||||
err := fs.Mkdir(remote)
|
||||
if err != nil {
|
||||
t.Fatalf("Mkdir failed: %v", err)
|
||||
}
|
||||
CheckListing(t, remote, []Item{})
|
||||
}
|
||||
|
||||
// TestPurge tests Purge works
|
||||
func TestPurge(t *testing.T, remote fs.Fs) {
|
||||
err := fs.Purge(remote)
|
||||
if err != nil {
|
||||
t.Fatalf("Purge failed: %v", err)
|
||||
}
|
||||
CheckListing(t, remote, []Item{})
|
||||
}
|
||||
|
||||
// TestRmdir tests Rmdir works
|
||||
func TestRmdir(t *testing.T, remote fs.Fs) {
|
||||
err := fs.Rmdir(remote)
|
||||
if err != nil {
|
||||
t.Fatalf("Rmdir failed: %v", err)
|
||||
}
|
||||
}
|
||||
@@ -1,651 +0,0 @@
|
||||
// Package fstests provides generic tests for testing the Fs and Object interfaces
|
||||
//
|
||||
// Run go generate to write the tests for the remotes
|
||||
package fstests
|
||||
|
||||
//go:generate go run gen_tests.go
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"flag"
|
||||
"io"
|
||||
"log"
|
||||
"os"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/ncw/rclone/fs"
|
||||
"github.com/ncw/rclone/fstest"
|
||||
)
|
||||
|
||||
var (
|
||||
remote fs.Fs
|
||||
// RemoteName should be set to the name of the remote for testing
|
||||
RemoteName = ""
|
||||
subRemoteName = ""
|
||||
subRemoteLeaf = ""
|
||||
// NilObject should be set to a nil Object from the Fs under test
|
||||
NilObject fs.Object
|
||||
file1 = fstest.Item{
|
||||
ModTime: fstest.Time("2001-02-03T04:05:06.499999999Z"),
|
||||
Path: "file name.txt",
|
||||
}
|
||||
file2 = fstest.Item{
|
||||
ModTime: fstest.Time("2001-02-03T04:05:10.123123123Z"),
|
||||
Path: `hello? sausage/êé/Hello, 世界/ " ' @ < > & ?/z.txt`,
|
||||
WinPath: `hello_ sausage/êé/Hello, 世界/ _ ' @ _ _ & _/z.txt`,
|
||||
}
|
||||
verbose = flag.Bool("verbose", false, "Set to enable logging")
|
||||
dumpHeaders = flag.Bool("dump-headers", false, "Dump HTTP headers - may contain sensitive info")
|
||||
dumpBodies = flag.Bool("dump-bodies", false, "Dump HTTP headers and bodies - may contain sensitive info")
|
||||
)
|
||||
|
||||
const eventualConsistencyRetries = 10
|
||||
|
||||
func init() {
|
||||
flag.StringVar(&RemoteName, "remote", "", "Set this to override the default remote name (eg s3:)")
|
||||
}
|
||||
|
||||
// TestInit tests basic intitialisation
|
||||
func TestInit(t *testing.T) {
|
||||
var err error
|
||||
|
||||
// Never ask for passwords, fail instead.
|
||||
// If your local config is encrypted set environment variable
|
||||
// "RCLONE_CONFIG_PASS=hunter2" (or your password)
|
||||
*fs.AskPassword = false
|
||||
fs.LoadConfig()
|
||||
fs.Config.Verbose = *verbose
|
||||
fs.Config.Quiet = !*verbose
|
||||
fs.Config.DumpHeaders = *dumpHeaders
|
||||
fs.Config.DumpBodies = *dumpBodies
|
||||
t.Logf("Using remote %q", RemoteName)
|
||||
if RemoteName == "" {
|
||||
RemoteName, err = fstest.LocalRemote()
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to create tmp dir: %v", err)
|
||||
}
|
||||
}
|
||||
subRemoteName, subRemoteLeaf, err = fstest.RandomRemoteName(RemoteName)
|
||||
if err != nil {
|
||||
t.Fatalf("Couldn't make remote name: %v", err)
|
||||
}
|
||||
|
||||
remote, err = fs.NewFs(subRemoteName)
|
||||
if err == fs.ErrorNotFoundInConfigFile {
|
||||
log.Printf("Didn't find %q in config file - skipping tests", RemoteName)
|
||||
return
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatalf("Couldn't start FS: %v", err)
|
||||
}
|
||||
fstest.TestMkdir(t, remote)
|
||||
}
|
||||
|
||||
func skipIfNotOk(t *testing.T) {
|
||||
if remote == nil {
|
||||
t.Skip("FS not configured")
|
||||
}
|
||||
}
|
||||
|
||||
// TestFsString tests the String method
|
||||
func TestFsString(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
str := remote.String()
|
||||
if str == "" {
|
||||
t.Fatal("Bad fs.String()")
|
||||
}
|
||||
}
|
||||
|
||||
// TestFsRmdirEmpty tests deleting an empty directory
|
||||
func TestFsRmdirEmpty(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
fstest.TestRmdir(t, remote)
|
||||
}
|
||||
|
||||
// TestFsRmdirNotFound tests deleting a non existent directory
|
||||
func TestFsRmdirNotFound(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
err := remote.Rmdir()
|
||||
if err == nil {
|
||||
t.Fatalf("Expecting error on Rmdir non existent")
|
||||
}
|
||||
}
|
||||
|
||||
// TestFsMkdir tests tests making a directory
|
||||
func TestFsMkdir(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
fstest.TestMkdir(t, remote)
|
||||
fstest.TestMkdir(t, remote)
|
||||
}
|
||||
|
||||
// TestFsListEmpty tests listing an empty directory
|
||||
func TestFsListEmpty(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
fstest.CheckListing(t, remote, []fstest.Item{})
|
||||
}
|
||||
|
||||
// TestFsListDirEmpty tests listing the directories from an empty directory
|
||||
func TestFsListDirEmpty(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
for obj := range remote.ListDir() {
|
||||
t.Errorf("Found unexpected item %q", obj.Name)
|
||||
}
|
||||
}
|
||||
|
||||
// TestFsNewFsObjectNotFound tests not finding a object
|
||||
func TestFsNewFsObjectNotFound(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
if remote.NewFsObject("potato") != nil {
|
||||
t.Fatal("Didn't expect to find object")
|
||||
}
|
||||
}
|
||||
|
||||
func findObject(t *testing.T, Name string) fs.Object {
|
||||
var obj fs.Object
|
||||
for i := 1; i <= eventualConsistencyRetries; i++ {
|
||||
obj = remote.NewFsObject(Name)
|
||||
if obj != nil {
|
||||
break
|
||||
}
|
||||
t.Logf("Sleeping for 1 second for findObject eventual consistency: %d/%d", i, eventualConsistencyRetries)
|
||||
time.Sleep(1 * time.Second)
|
||||
}
|
||||
if obj == nil {
|
||||
t.Fatalf("Object not found: %q", Name)
|
||||
}
|
||||
return obj
|
||||
}
|
||||
|
||||
func testPut(t *testing.T, file *fstest.Item) {
|
||||
buf := bytes.NewBufferString(fstest.RandomString(100))
|
||||
hash := fs.NewMultiHasher()
|
||||
in := io.TeeReader(buf, hash)
|
||||
|
||||
file.Size = int64(buf.Len())
|
||||
obji := fs.NewStaticObjectInfo(file.Path, file.ModTime, file.Size, true, nil, nil)
|
||||
obj, err := remote.Put(in, obji)
|
||||
if err != nil {
|
||||
t.Fatal("Put error", err)
|
||||
}
|
||||
file.Hashes = hash.Sums()
|
||||
file.Check(t, obj, remote.Precision())
|
||||
// Re-read the object and check again
|
||||
obj = findObject(t, file.Path)
|
||||
file.Check(t, obj, remote.Precision())
|
||||
}
|
||||
|
||||
// TestFsPutFile1 tests putting a file
|
||||
func TestFsPutFile1(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
testPut(t, &file1)
|
||||
}
|
||||
|
||||
// TestFsPutFile2 tests putting a file into a subdirectory
|
||||
func TestFsPutFile2(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
testPut(t, &file2)
|
||||
}
|
||||
|
||||
// TestFsListDirFile2 tests the files are correctly uploaded
|
||||
func TestFsListDirFile2(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
found := false
|
||||
for i := 1; i <= eventualConsistencyRetries; i++ {
|
||||
for obj := range remote.ListDir() {
|
||||
if obj.Name != `hello? sausage` && obj.Name != `hello_ sausage` {
|
||||
t.Errorf("Found unexpected item %q", obj.Name)
|
||||
} else {
|
||||
found = true
|
||||
}
|
||||
}
|
||||
if found {
|
||||
break
|
||||
}
|
||||
t.Logf("Sleeping for 1 second for TestFsListDirFile2 eventual consistency: %d/%d", i, eventualConsistencyRetries)
|
||||
time.Sleep(1 * time.Second)
|
||||
}
|
||||
if !found {
|
||||
t.Errorf("Didn't find %q", `hello? sausage`)
|
||||
}
|
||||
}
|
||||
|
||||
// TestFsListDirRoot tests that DirList works in the root
|
||||
func TestFsListDirRoot(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
rootRemote, err := fs.NewFs(RemoteName)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to make remote %q: %v", RemoteName, err)
|
||||
}
|
||||
found := false
|
||||
for obj := range rootRemote.ListDir() {
|
||||
if obj.Name == subRemoteLeaf {
|
||||
found = true
|
||||
}
|
||||
}
|
||||
if !found {
|
||||
t.Errorf("Didn't find %q", subRemoteLeaf)
|
||||
}
|
||||
}
|
||||
|
||||
// TestFsListRoot tests List works in the root
|
||||
func TestFsListRoot(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
rootRemote, err := fs.NewFs(RemoteName)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to make remote %q: %v", RemoteName, err)
|
||||
}
|
||||
// Should either find file1 and file2 or nothing
|
||||
found1 := false
|
||||
f1 := subRemoteLeaf + "/" + file1.Path
|
||||
found2 := false
|
||||
f2 := subRemoteLeaf + "/" + file2.Path
|
||||
f2Alt := subRemoteLeaf + "/" + file2.WinPath
|
||||
count := 0
|
||||
errors := fs.Stats.GetErrors()
|
||||
for obj := range rootRemote.List() {
|
||||
count++
|
||||
if obj.Remote() == f1 {
|
||||
found1 = true
|
||||
}
|
||||
if obj.Remote() == f2 || obj.Remote() == f2Alt {
|
||||
found2 = true
|
||||
}
|
||||
}
|
||||
errors -= fs.Stats.GetErrors()
|
||||
if count == 0 {
|
||||
if errors == 0 {
|
||||
t.Error("Expecting error if count==0")
|
||||
}
|
||||
return
|
||||
}
|
||||
if found1 && found2 {
|
||||
if errors != 0 {
|
||||
t.Error("Not expecting error if found")
|
||||
}
|
||||
return
|
||||
}
|
||||
t.Errorf("Didn't find %q (%v) and %q (%v) or no files (count %d)", f1, found1, f2, found2, count)
|
||||
}
|
||||
|
||||
// TestFsListFile1 tests file present
|
||||
func TestFsListFile1(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
fstest.CheckListing(t, remote, []fstest.Item{file1, file2})
|
||||
}
|
||||
|
||||
// TestFsNewFsObject tests NewFsObject
|
||||
func TestFsNewFsObject(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
obj := findObject(t, file1.Path)
|
||||
file1.Check(t, obj, remote.Precision())
|
||||
}
|
||||
|
||||
// TestFsListFile1and2 tests two files present
|
||||
func TestFsListFile1and2(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
fstest.CheckListing(t, remote, []fstest.Item{file1, file2})
|
||||
}
|
||||
|
||||
// TestFsCopy tests Copy
|
||||
func TestFsCopy(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
|
||||
// Check have Copy
|
||||
_, ok := remote.(fs.Copier)
|
||||
if !ok {
|
||||
t.Skip("FS has no Copier interface")
|
||||
}
|
||||
|
||||
var file1Copy = file1
|
||||
file1Copy.Path += "-copy"
|
||||
|
||||
// do the copy
|
||||
src := findObject(t, file1.Path)
|
||||
dst, err := remote.(fs.Copier).Copy(src, file1Copy.Path)
|
||||
if err != nil {
|
||||
t.Fatalf("Copy failed: %v (%#v)", err, err)
|
||||
}
|
||||
|
||||
// check file exists in new listing
|
||||
fstest.CheckListing(t, remote, []fstest.Item{file1, file2, file1Copy})
|
||||
|
||||
// Check dst lightly - list above has checked ModTime/Hashes
|
||||
if dst.Remote() != file1Copy.Path {
|
||||
t.Errorf("object path: want %q got %q", file1Copy.Path, dst.Remote())
|
||||
}
|
||||
|
||||
// Delete copy
|
||||
err = dst.Remove()
|
||||
if err != nil {
|
||||
t.Fatal("Remove copy error", err)
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
// TestFsMove tests Move
|
||||
func TestFsMove(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
|
||||
// Check have Move
|
||||
_, ok := remote.(fs.Mover)
|
||||
if !ok {
|
||||
t.Skip("FS has no Mover interface")
|
||||
}
|
||||
|
||||
var file1Move = file1
|
||||
file1Move.Path += "-move"
|
||||
|
||||
// do the move
|
||||
src := findObject(t, file1.Path)
|
||||
dst, err := remote.(fs.Mover).Move(src, file1Move.Path)
|
||||
if err != nil {
|
||||
t.Fatalf("Move failed: %v", err)
|
||||
}
|
||||
|
||||
// check file exists in new listing
|
||||
fstest.CheckListing(t, remote, []fstest.Item{file2, file1Move})
|
||||
|
||||
// Check dst lightly - list above has checked ModTime/Hashes
|
||||
if dst.Remote() != file1Move.Path {
|
||||
t.Errorf("object path: want %q got %q", file1Move.Path, dst.Remote())
|
||||
}
|
||||
|
||||
// move it back
|
||||
src = findObject(t, file1Move.Path)
|
||||
_, err = remote.(fs.Mover).Move(src, file1.Path)
|
||||
if err != nil {
|
||||
t.Errorf("Move failed: %v", err)
|
||||
}
|
||||
|
||||
// check file exists in new listing
|
||||
fstest.CheckListing(t, remote, []fstest.Item{file2, file1})
|
||||
}
|
||||
|
||||
// Move src to this remote using server side move operations.
|
||||
//
|
||||
// Will only be called if src.Fs().Name() == f.Name()
|
||||
//
|
||||
// If it isn't possible then return fs.ErrorCantDirMove
|
||||
//
|
||||
// If destination exists then return fs.ErrorDirExists
|
||||
|
||||
// TestFsDirMove tests DirMove
|
||||
func TestFsDirMove(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
|
||||
// Check have DirMove
|
||||
_, ok := remote.(fs.DirMover)
|
||||
if !ok {
|
||||
t.Skip("FS has no DirMover interface")
|
||||
}
|
||||
|
||||
// Check it can't move onto itself
|
||||
err := remote.(fs.DirMover).DirMove(remote)
|
||||
if err != fs.ErrorDirExists {
|
||||
t.Errorf("Expecting fs.ErrorDirExists got: %v", err)
|
||||
}
|
||||
|
||||
// new remote
|
||||
newRemote, removeNewRemote, err := fstest.RandomRemote(RemoteName, false)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to create remote: %v", err)
|
||||
}
|
||||
defer removeNewRemote()
|
||||
|
||||
// try the move
|
||||
err = newRemote.(fs.DirMover).DirMove(remote)
|
||||
if err != nil {
|
||||
t.Errorf("Failed to DirMove: %v", err)
|
||||
}
|
||||
|
||||
// check remotes
|
||||
// FIXME: Prints errors.
|
||||
fstest.CheckListing(t, remote, []fstest.Item{})
|
||||
fstest.CheckListing(t, newRemote, []fstest.Item{file2, file1})
|
||||
|
||||
// move it back
|
||||
err = remote.(fs.DirMover).DirMove(newRemote)
|
||||
if err != nil {
|
||||
t.Errorf("Failed to DirMove: %v", err)
|
||||
}
|
||||
|
||||
// check remotes
|
||||
fstest.CheckListing(t, remote, []fstest.Item{file2, file1})
|
||||
fstest.CheckListing(t, newRemote, []fstest.Item{})
|
||||
}
|
||||
|
||||
// TestFsRmdirFull tests removing a non empty directory
|
||||
func TestFsRmdirFull(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
err := remote.Rmdir()
|
||||
if err == nil {
|
||||
t.Fatalf("Expecting error on RMdir on non empty remote")
|
||||
}
|
||||
}
|
||||
|
||||
// TestFsPrecision tests the Precision of the Fs
|
||||
func TestFsPrecision(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
precision := remote.Precision()
|
||||
if precision == fs.ModTimeNotSupported {
|
||||
return
|
||||
}
|
||||
if precision > time.Second || precision < 0 {
|
||||
t.Fatalf("Precision out of range %v", precision)
|
||||
}
|
||||
// FIXME check expected precision
|
||||
}
|
||||
|
||||
// TestObjectString tests the Object String method
|
||||
func TestObjectString(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
obj := findObject(t, file1.Path)
|
||||
s := obj.String()
|
||||
if s != file1.Path {
|
||||
t.Errorf("String() wrong %v != %v", s, file1.Path)
|
||||
}
|
||||
obj = NilObject
|
||||
s = obj.String()
|
||||
if s != "<nil>" {
|
||||
t.Errorf("String() wrong %v != %v", s, "<nil>")
|
||||
}
|
||||
}
|
||||
|
||||
// TestObjectFs tests the object can be found
|
||||
func TestObjectFs(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
obj := findObject(t, file1.Path)
|
||||
equal := obj.Fs() == remote
|
||||
if !equal {
|
||||
// Check to see if this wraps something else
|
||||
if unwrap, ok := remote.(fs.UnWrapper); ok {
|
||||
equal = obj.Fs() == unwrap.UnWrap()
|
||||
}
|
||||
}
|
||||
if !equal {
|
||||
t.Errorf("Fs is wrong %v != %v", obj.Fs(), remote)
|
||||
}
|
||||
}
|
||||
|
||||
// TestObjectRemote tests the Remote is correct
|
||||
func TestObjectRemote(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
obj := findObject(t, file1.Path)
|
||||
if obj.Remote() != file1.Path {
|
||||
t.Errorf("Remote is wrong %v != %v", obj.Remote(), file1.Path)
|
||||
}
|
||||
}
|
||||
|
||||
// TestObjectHashes checks all the hashes the object supports
|
||||
func TestObjectHashes(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
obj := findObject(t, file1.Path)
|
||||
file1.CheckHashes(t, obj)
|
||||
}
|
||||
|
||||
// TestObjectModTime tests the ModTime of the object is correct
|
||||
func TestObjectModTime(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
obj := findObject(t, file1.Path)
|
||||
file1.CheckModTime(t, obj, obj.ModTime(), remote.Precision())
|
||||
}
|
||||
|
||||
// TestObjectSetModTime tests that SetModTime works
|
||||
func TestObjectSetModTime(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
newModTime := fstest.Time("2011-12-13T14:15:16.999999999Z")
|
||||
obj := findObject(t, file1.Path)
|
||||
err := obj.SetModTime(newModTime)
|
||||
if err == fs.ErrorCantSetModTime {
|
||||
t.Log(err)
|
||||
return
|
||||
} else if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
file1.ModTime = newModTime
|
||||
file1.CheckModTime(t, obj, obj.ModTime(), remote.Precision())
|
||||
// And make a new object and read it from there too
|
||||
TestObjectModTime(t)
|
||||
}
|
||||
|
||||
// TestObjectSize tests that Size works
|
||||
func TestObjectSize(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
obj := findObject(t, file1.Path)
|
||||
if obj.Size() != file1.Size {
|
||||
t.Errorf("Size is wrong %v != %v", obj.Size(), file1.Size)
|
||||
}
|
||||
}
|
||||
|
||||
// TestObjectOpen tests that Open works
|
||||
func TestObjectOpen(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
obj := findObject(t, file1.Path)
|
||||
in, err := obj.Open()
|
||||
if err != nil {
|
||||
t.Fatalf("Open() return error: %v", err)
|
||||
}
|
||||
hasher := fs.NewMultiHasher()
|
||||
n, err := io.Copy(hasher, in)
|
||||
if err != nil {
|
||||
t.Fatalf("io.Copy() return error: %v", err)
|
||||
}
|
||||
if n != file1.Size {
|
||||
t.Fatalf("Read wrong number of bytes %d != %d", n, file1.Size)
|
||||
}
|
||||
err = in.Close()
|
||||
if err != nil {
|
||||
t.Fatalf("in.Close() return error: %v", err)
|
||||
}
|
||||
// Check content of file by comparing the calculated hashes
|
||||
for hashType, got := range hasher.Sums() {
|
||||
want := file1.Hashes[hashType]
|
||||
if want != got {
|
||||
t.Errorf("%v is wrong %v != %v", hashType, want, got)
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
// TestObjectUpdate tests that Update works
|
||||
func TestObjectUpdate(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
buf := bytes.NewBufferString(fstest.RandomString(200))
|
||||
hash := fs.NewMultiHasher()
|
||||
in := io.TeeReader(buf, hash)
|
||||
|
||||
file1.Size = int64(buf.Len())
|
||||
obj := findObject(t, file1.Path)
|
||||
obji := fs.NewStaticObjectInfo("", file1.ModTime, file1.Size, true, nil, obj.Fs())
|
||||
err := obj.Update(in, obji)
|
||||
if err != nil {
|
||||
t.Fatal("Update error", err)
|
||||
}
|
||||
file1.Hashes = hash.Sums()
|
||||
file1.Check(t, obj, remote.Precision())
|
||||
// Re-read the object and check again
|
||||
obj = findObject(t, file1.Path)
|
||||
file1.Check(t, obj, remote.Precision())
|
||||
}
|
||||
|
||||
// TestObjectStorable tests that Storable works
|
||||
func TestObjectStorable(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
obj := findObject(t, file1.Path)
|
||||
if !obj.Storable() {
|
||||
t.Fatalf("Expecting %v to be storable", obj)
|
||||
}
|
||||
}
|
||||
|
||||
// TestLimitedFs tests that a LimitedFs is created
|
||||
func TestLimitedFs(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
remoteName := subRemoteName + "/" + file2.Path
|
||||
file2Copy := file2
|
||||
file2Copy.Path = "z.txt"
|
||||
fileRemote, err := fs.NewFs(remoteName)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to make remote %q: %v", remoteName, err)
|
||||
}
|
||||
fstest.CheckListing(t, fileRemote, []fstest.Item{file2Copy})
|
||||
_, ok := fileRemote.(*fs.Limited)
|
||||
if !ok {
|
||||
// Check to see if this wraps a Limited FS
|
||||
if unwrap, hasUnWrap := fileRemote.(fs.UnWrapper); hasUnWrap {
|
||||
_, ok = unwrap.UnWrap().(*fs.Limited)
|
||||
}
|
||||
if !ok {
|
||||
t.Errorf("%v is not a fs.Limited", fileRemote)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// TestLimitedFsNotFound tests that a LimitedFs is not created if no object
|
||||
func TestLimitedFsNotFound(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
remoteName := subRemoteName + "/not found.txt"
|
||||
fileRemote, err := fs.NewFs(remoteName)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to make remote %q: %v", remoteName, err)
|
||||
}
|
||||
fstest.CheckListing(t, fileRemote, []fstest.Item{})
|
||||
_, ok := fileRemote.(*fs.Limited)
|
||||
if ok {
|
||||
t.Errorf("%v is is a fs.Limited", fileRemote)
|
||||
}
|
||||
}
|
||||
|
||||
// TestObjectRemove tests Remove
|
||||
func TestObjectRemove(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
obj := findObject(t, file1.Path)
|
||||
err := obj.Remove()
|
||||
if err != nil {
|
||||
t.Fatal("Remove error", err)
|
||||
}
|
||||
fstest.CheckListing(t, remote, []fstest.Item{file2})
|
||||
}
|
||||
|
||||
// TestObjectPurge tests Purge
|
||||
func TestObjectPurge(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
fstest.TestPurge(t, remote)
|
||||
err := fs.Purge(remote)
|
||||
if err == nil {
|
||||
t.Fatal("Expecting error after on second purge")
|
||||
}
|
||||
}
|
||||
|
||||
// TestFinalise tidies up after the previous tests
|
||||
func TestFinalise(t *testing.T) {
|
||||
skipIfNotOk(t)
|
||||
if strings.HasPrefix(RemoteName, "/") {
|
||||
// Remove temp directory
|
||||
err := os.Remove(RemoteName)
|
||||
if err != nil {
|
||||
log.Printf("Failed to remove %q: %v\n", RemoteName, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
Some files were not shown because too many files have changed in this diff Show More
Reference in New Issue
Block a user