mirror of https://github.com/rclone/rclone.git synced 2026-02-04 10:43:14 +00:00

Compare commits


114 Commits

Author SHA1 Message Date
Nick Craig-Wood
411f75aadf azureblob: enable on freebsd, netbsd, openbsd
At some point the SDK was fixed on these platforms, so re-enable
building the azure blob backend for them.
2018-11-26 08:32:58 +00:00
Nick Craig-Wood
a6c28a5faa Start v1.45-DEV development 2018-11-24 15:20:24 +00:00
Nick Craig-Wood
d35bd15762 Version v1.45 2018-11-24 13:44:25 +00:00
Nick Craig-Wood
8b8220c4f7 azureblob: wait for up to 60s to create a just deleted container
When a container is deleted, a container with the same name cannot be
created for at least 30 seconds; the container may not be available
for more than 30 seconds if the service is still processing the
request.

We sleep so that we wait at most 60 seconds.  This is mostly useful in
the integration tests where containers get deleted and remade
immediately.
2018-11-24 10:57:37 +00:00
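In outline, that wait is a bounded retry loop; a minimal Go sketch (illustrative names; the real change appears in the azureblob Mkdir diff further down):

```go
package example

import "time"

// createWithRetry retries create() while the service reports that the
// old container is still being deleted. 10 retries with a 6 second
// sleep gives the "up to 60 seconds" wait described above.
func createWithRetry(create func() error, beingDeleted func(error) bool) error {
	var err error
	for try := 0; try < 10; try++ {
		err = create()
		if err == nil || !beingDeleted(err) {
			return err
		}
		time.Sleep(6 * time.Second)
	}
	return err
}
```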
Nick Craig-Wood
5fe3b0ad71 Add Stephen Harris to contributors 2018-11-24 10:57:37 +00:00
Stephen Harris
4c8c87a935 Update PROXY section of the FAQ 2018-11-23 20:14:36 +00:00
Nick Craig-Wood
bb10a51b39 test_all: limit to go1.11 so the template used is supported 2018-11-23 17:17:19 +00:00
Nick Craig-Wood
df01f7a4eb test_all: fix regexp for retrying nested tests 2018-11-23 17:17:19 +00:00
Nick Craig-Wood
e84790ef79 swift: add pacer for retries to make swift more reliable #2740 2018-11-22 22:15:52 +00:00
Nick Craig-Wood
369a8ee17b ncdu: fix deleting files 2018-11-22 21:41:17 +00:00
Nick Craig-Wood
84e21ade6b cmount: fix on Linux - only apply volname for Windows and macOS 2018-11-22 20:41:05 +00:00
Sebastian Bünger
703b0535a4 yandex: update docs 2018-11-22 20:14:50 +00:00
Sebastian Bünger
155264ae12 yandex: complete rewrite
Get rid of the api client and use rest/pacer for all API calls
Add Copy, Move, DirMove, PublicLink, About optional interfaces
Improve general error handling
Remove ListR for now due to inconsistent behaviour
fixes #2586, progress on #2740 and #2178
2018-11-22 20:14:50 +00:00
Nick Craig-Wood
31e2ce03c3 fstests: re-arrange backend integration tests so they can be retried
Before this change backend integration tests depended on each other,
so tests could not be retried.

After this change we nest tests to ensure that tests are provided with
the starting state they expect.

Tell the integration test runner that it can retry backend tests also.

This also includes bin/test_independence.go which runs each test
individually for a backend to prove that they are independent.
2018-11-22 20:12:12 +00:00
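The nesting idea in miniature (a sketch, not the fstests code): because the inner test lives inside the outer one, retrying "TestOuter/Inner" re-runs the outer setup first.

```go
package example

import "testing"

// Retrying "TestOuter/Inner" re-executes TestOuter's body, so Inner
// always starts from the state it expects.
func TestOuter(t *testing.T) {
	fixture := []string{"a", "b"} // stand-in for creating the starting state
	t.Run("Inner", func(t *testing.T) {
		if len(fixture) != 2 {
			t.Fatal("fixture not set up")
		}
	})
}
```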
Nick Craig-Wood
e969505ae4 info: fix control character map output 2018-11-20 14:04:27 +00:00
Nick Craig-Wood
26e2f1a998 Add Alexander to contributors 2018-11-20 10:22:11 +00:00
Alexander
2682d5a9cf - install with busybox if any 2018-11-20 10:22:00 +00:00
Nick Craig-Wood
2191592e80 Add Henry Ptasinski to contributors 2018-11-19 13:33:59 +00:00
Nick Craig-Wood
18f758294e Add Peter Kaminski to contributors 2018-11-19 13:33:59 +00:00
Henry Ptasinski
f95c1c61dd s3: add config info for Wasabi's US-West endpoint
Wasabi has two locations, US East and US West, with different endpoint URLs.
When configuring S3 to use Wasabi, provide the endpoint information for both
locations.
2018-11-19 13:33:42 +00:00
Nick Craig-Wood
8c8dcdd521 webdav: fix config parsing so --webdav-user and --webdav-pass flags work 2018-11-17 13:14:54 +00:00
Nick Craig-Wood
141c133818 fstest: Wait for longer if necessary in TestFsChangeNotify 2018-11-16 07:45:24 +00:00
Nick Craig-Wood
0f03e55cd1 fstests: ignore main directory creation in TestFsChangeNotify 2018-11-15 18:39:28 +00:00
Nick Craig-Wood
9e6ba92a11 fstests: attempt to fix TestFsChangeNotify flakiness
This now uses testPut to upload the test files which will retry on
errors properly.
2018-11-15 18:39:28 +00:00
Nick Craig-Wood
762561f88e fstest: factor out retry logic from put code and make testPut return the object too 2018-11-15 18:39:28 +00:00
Nick Craig-Wood
084fe38922 fstests: fix the integration test errors running crypt over swift.
Skip tests involving errors when creating or removing dirs on non-root
bucket-based filesystems.
2018-11-15 18:39:28 +00:00
Peter Kaminski
63a2a935fc fix typos in original files, per #2727 review request 2018-11-14 22:48:58 +00:00
Peter Kaminski
64fce8438b docs: Fix a couple of minor typos in rclone_mount.md
* "transferring" instead of "transfering"
* "connection" instead of "connnection"
* "mount" instead of "mount mount"
2018-11-14 22:48:58 +00:00
Nick Craig-Wood
f92beb4e14 fstest: Fix TestPurge causing errors with subsequent tests on azure
Before this change TestPurge would remove a container and subsequent
tests would fail because the container was still being deleted so
couldn't be created.

This was fixed by introducing an fstest.NewRunIndividual() test runner
for TestPurge which causes the test to be run on a new container.
2018-11-14 17:14:02 +00:00
Nick Craig-Wood
f7ce2e8d95 azureblob: fix erroneous Rmdir error "directory not empty"
Before this change Rmdir would check the root rather than the
directory specified for being empty and return "directory not empty"
when it shouldn't have done.
2018-11-14 17:13:39 +00:00
Nick Craig-Wood
3975d82b3b Add brused27 to contributors 2018-11-13 17:00:26 +00:00
brused27
d87aa33ec5 azureblob: Avoid context deadline exceeded error by setting a large TryTimeout value - Fixes #2647 2018-11-13 16:59:53 +00:00
Anagh Kumar Baranwal
1b78f4d1ea Changed the docs scripts to use $HOME & $USER instead of specific values
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-11-13 11:00:34 +00:00
Nick Craig-Wood
b3704597f3 cmount: make --volname work for Windows - fixes #2679 2018-11-12 16:32:02 +00:00
Nick Craig-Wood
16f797a7d7 filter: add --ignore-case flag - fixes #502
The --ignore-case flag causes the filtering of file names to be case
insensitive.  The flag name comes from GNU tar.
2018-11-12 14:29:37 +00:00
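For instance (an illustrative invocation, assuming an existing remote):

rclone ls remote: --include "*.jpg" --ignore-case

would list photo.jpg and PHOTO.JPG alike.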
Nick Craig-Wood
ee700ec01a lib/readers: add mutex to RepeatableReader - fixes #2572 2018-11-12 12:02:05 +00:00
Nick Craig-Wood
9b3c951ab7 Add Jake Coggiano to contributors 2018-11-12 11:34:28 +00:00
Jake Coggiano
22d17e79e3 dropbox: add dropbox impersonate support - fixes #2577 2018-11-12 11:33:39 +00:00
Jake Coggiano
6d3088a00b vendor: add github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/team/ 2018-11-12 11:33:39 +00:00
Nick Craig-Wood
84202c7471 onedrive: note that 50,000 files is the limit for one directory #2707 2018-11-11 15:22:19 +00:00
Nick Craig-Wood
96a05516f9 acd,box,onedrive,pcloud: remove log.Fatal from NewFs
And replace with error returns.
2018-11-11 11:00:14 +00:00
Nick Craig-Wood
4f6a942595 cmd: Make --progress update the stats right at the end
Before this, when rclone exited, the stats would just show the last
printed version rather than the actual final state.
2018-11-11 09:57:37 +00:00
Nick Craig-Wood
c4b0a37b21 rc: improve docs on debugging 2018-11-10 10:18:13 +00:00
Nick Craig-Wood
9322f4baef Add Erik Swanson to contributors 2018-11-08 12:58:41 +00:00
Erik Swanson
fa0a1e7261 s3: fix role_arn, credential_source, ...
When the env_auth option is enabled, the AWS SDK's session constructor
now loads configuration from ~/.aws/config and environment variables,
and credentials per the selected (or default) AWS_PROFILE's settings.

This is accomplished by **NOT** including any Credential provider in the
aws.Config passed to the session constructor: if Config.Credentials
is non-nil, it will always be used, and the user's configuration regarding
role_arn, credential_source, source_profile, etc. from the shared
config will be completely ignored.

(The conditional creation and configuration of the stscreds Credential
provider is complicated enough that it is not worth re-creating that
logic.)
2018-11-08 12:58:23 +00:00
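A minimal sketch of that construction with aws-sdk-go (assuming env_auth is set and no static keys are supplied; not the exact rclone code, though the real change appears in the s3 diff below):

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// newS3 builds the client without setting Config.Credentials, so the
// SDK can resolve role_arn/source_profile etc. from ~/.aws/config
// for the selected AWS_PROFILE.
func newS3() (*s3.S3, error) {
	ses, err := session.NewSessionWithOptions(session.Options{
		SharedConfigState: session.SharedConfigEnable,
	})
	if err != nil {
		return nil, err
	}
	return s3.New(ses), nil
}
```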
Nick Craig-Wood
4ad08794c9 fserrors: add "server closed idle connection" to retriable errors
This seems to be related to this go issue: https://github.com/golang/go/issues/19943

See: https://forum.rclone.org/t/copy-from-dropbox-to-google-drive-yields-failed-to-copy-failed-to-open-source-object-server-closed-idle-connection-error/7460
2018-11-08 11:12:25 +00:00
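Retry lists like this typically work by substring matching on the error text; a hedged sketch (illustrative, not the fserrors source):

```go
package example

import "strings"

// retriableStrings holds error texts treated as temporary failures.
var retriableStrings = []string{
	"server closed idle connection",
}

// isRetriable reports whether err's text matches a known temporary error.
func isRetriable(err error) bool {
	if err == nil {
		return false
	}
	for _, s := range retriableStrings {
		if strings.Contains(err.Error(), s) {
			return true
		}
	}
	return false
}
```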
Nick Craig-Wood
c0f600764b Add Scott Edlund to contributors 2018-11-07 14:27:06 +00:00
Scott Edlund
f139e07380 enable softfloat on MIPS arch
Not all MIPS devices have a floating point unit. Enable softfloat to build binaries that run on devices which do not have MIPS_FPU enabled in their kernel.
2018-11-07 14:26:48 +00:00
Nick Craig-Wood
c6786eeb2d move: don't create directories with --dry-run - fixes #2676 2018-11-06 13:34:15 +00:00
Nick Craig-Wood
57b85b8155 rc: fix job tests on Windows 2018-11-06 13:03:48 +00:00
Nick Craig-Wood
2b1194c57e rc: update docs with new methods 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
e6dd121f52 config: add rc operations for config 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
e600217666 config: create config directory on save if it is missing 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
bc17ca7ed9 rc: implement core/obscure 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
1916410316 rc: add core/version and put definitions next to implementations 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
dddfbec92a cmd/version: factor version number parsing routines into fs/version 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
75a88de55c rc/rcserver: with --rc-files if auth set, pass on to URL opened
If `--rc-user` or `--rc-pass` is set then the URL that is opened with
`--rc-files` will have the authorization in the URL in the
`http://user:pass@localhost/` style.
2018-11-05 15:44:40 +00:00
Nick Craig-Wood
2466f4d152 sync: add rc commands for sync/copy/move 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
39283c8a35 operations: implement operations remote control commands 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
46c2f55545 copyurl: factor code into operations and write test 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
fc2afcbcbd lsjson: factor internals of lsjson command into operations 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
fa0a9653d2 rc: methods marked as AuthRequired need auth unless --rc-no-auth
Methods which can read or mutate external storage will require
authorisation - enforce this.  This can be overridden by `--rc-no-auth`.
2018-11-04 20:42:57 +00:00
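The enforcement amounts to a simple gate; a sketch under assumed names (not the rc package's actual API):

```go
package example

import "errors"

// checkAuth refuses AuthRequired methods when no auth is configured,
// unless the user explicitly opted out with --rc-no-auth.
func checkAuth(authRequired, authConfigured, rcNoAuth bool) error {
	if authRequired && !authConfigured && !rcNoAuth {
		return errors.New("authentication required - set --rc-user/--rc-pass or use --rc-no-auth")
	}
	return nil
}
```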
Nick Craig-Wood
181267e20e cmd/rc: add --user and --pass flags and interpret --rc-user, --rc-pass, --rc-addr 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
75e8ea383c rc: implement rc.PutCachedFs for prefilling the remote cache 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
8c8b58a7de rc: expire remote cache and fix tests under race detector 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
b961e07c57 rc: ensure rclone fails to start up if the --rc port is in use already 2018-11-04 15:11:51 +00:00
Nick Craig-Wood
0b80d1481a cache: make tests not start an rc but use the internal framework 2018-11-04 15:11:51 +00:00
Nick Craig-Wood
89550e7121 rcserver: serve directories as well as files 2018-11-04 15:11:51 +00:00
Nick Craig-Wood
370c218c63 cmd/http: factor directory serving routines into httplib/serve and write tests 2018-11-04 12:46:44 +00:00
Nick Craig-Wood
b972dcb0ae rc: implement options/blocks,get,set and register options 2018-11-03 11:32:00 +00:00
Nick Craig-Wood
0bfa9811f7 rc: factor server code into rcserver and implement serving objects
If a GET or HEAD request is received with a URL parameter of fs then
it will be served from that remote.
2018-11-03 11:32:00 +00:00
Nick Craig-Wood
aa9b2c31f4 serve/restic: factor object serving into cmd/httplib/serve 2018-11-03 11:32:00 +00:00
Nick Craig-Wood
cff75db6a4 rcd: implement new command just to serve the remote control API 2018-11-03 11:32:00 +00:00
Nick Craig-Wood
75252e4a89 rc: add --rc-files flag to serve files on the rc http server
This enables building a browser based UI for rclone
2018-11-03 11:32:00 +00:00
Nick Craig-Wood
2089405e1b fs/rc: add more infrastructure to help writing rc functions
- Fs cache for rc commands
- Helper functions for parsing the input
- Reshape command for manipulating JSON blobs
- Background Job starting, control, query and expiry
2018-11-02 17:32:20 +00:00
Nick Craig-Wood
a379eec9d9 fstest/mockfs: create mock fs.Fs for testing 2018-11-02 17:32:20 +00:00
Nick Craig-Wood
45d5339fcb cmd/rc: add --json flag for structured JSON input 2018-11-02 17:32:20 +00:00
Nick Craig-Wood
bb5637d46a serve http, webdav, restic: ensure rclone exits if the port is in use 2018-11-02 17:32:20 +00:00
Nick Craig-Wood
1f05d5bf4a delete: clarify that it only deletes files not directories 2018-11-02 17:07:45 +00:00
HerrH
ff87da9c3b Added some more links to make things easier to find
Expanded the Installation & Docs section with links to the website and added a link to the full list of storage providers and features.
2018-11-02 16:56:20 +00:00
ssaqua
3d81b75f44 dedupe: check for existing filename before renaming a dupe file 2018-11-02 16:51:52 +00:00
Nick Craig-Wood
baba6d67e6 s3: set ACL for server side copies to that provided by the user - fixes #2691
Before this change the ACL for objects which were server side copied
was left at the default "private" setting. S3 doesn't copy the ACL
from the source when you copy an object; you have to set it afresh,
which is what this does.
2018-11-02 16:22:31 +00:00
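The essence of the fix, sketched with aws-sdk-go (the actual one-line change is visible in the s3 Copy diff below):

```go
package example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// copyInput re-applies the configured canned ACL on a server side copy,
// since S3 writes a fresh ACL rather than copying the source's.
func copyInput(bucket, key, source, acl string) *s3.CopyObjectInput {
	return &s3.CopyObjectInput{
		Bucket:     aws.String(bucket),
		Key:        aws.String(key),
		CopySource: aws.String(source),
		ACL:        aws.String(acl),
	}
}
```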
Nick Craig-Wood
04c0564fe2 Add Ralf Hemberger to contributors 2018-11-02 09:53:23 +00:00
Ralf Hemberger
91cfdb81f5 change spaces to tab 2018-11-02 09:50:34 +00:00
Ralf Hemberger
deae7bf33c WebDav - Add RFC3339 date format - fixes #2712 2018-11-02 09:50:34 +00:00
Henning Surmeier
04a0da1f92 ncdu: add remove option ('d' key)
Delete files by pressing 'd' in the ncdu listing.

GUI improvements:
Boxes now have a border around them.
Boxes can ask questions and allow the selection of options; the
selected option will be given to the UI.boxMenuHandler function.

Fixes #2571
2018-10-28 20:44:03 +00:00
Henning Surmeier
9486df0226 ncdu/scan: add remove option for the in-memory representation
Remove files/directories from the in-memory structs of the cloud
directory. Size and Count will be recalculated and propagated upwards
to the parent directories.
2018-10-28 20:44:03 +00:00
Nick Craig-Wood
948a5d25c2 operations: Fix Purge and Rmdirs when dir is not empty
Before this change, Purge on the fallback path would try to delete
directories starting from the root rather than the dir passed in.
Rmdirs would also attempt to delete the root.
2018-10-27 11:51:17 +01:00
Nick Craig-Wood
f7c31cd210 Add Florian Gamboeck to contributors 2018-10-27 00:28:11 +01:00
Florian Gamboeck
696e7b2833 backend/cache: Print correct info about Cache Writes 2018-10-27 00:27:47 +01:00
Anagh Kumar Baranwal
e76cf1217f Added docs to check for key generation on Mega
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-25 22:49:21 +01:00
Nick Craig-Wood
543e37f662 Require go1.8 for compilation 2018-10-25 17:06:33 +01:00
Nick Craig-Wood
c514cb752d vendor: update to latest versions of everything 2018-10-25 17:06:33 +01:00
Nick Craig-Wood
c0ca93ae6f opendrive: fix retries of upload chunks - fixes #2646
Before this change, upload chunks were being emptied on retry.  This
change introduces a RepeatableReader to fix the problem.
2018-10-25 11:50:38 +01:00
Nick Craig-Wood
38a89d49ae fstest/test_all: tidy HTML report
- link test number to online copy
- style links
- attempt to make a nicer colour scheme
2018-10-25 11:33:17 +01:00
Anagh Kumar Baranwal
6531126eb2 Fixes the rc docs creation
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-25 11:29:59 +01:00
Nick Craig-Wood
25d0e59ef8 fstest/test_all: make sure Version is correct in build 2018-10-25 08:36:09 +01:00
Nick Craig-Wood
b0db08fd2b fstest/test_all: constrain to go1.10 and above 2018-10-24 21:33:42 +01:00
Nick Craig-Wood
07addf74fd fstest/test_all: upload a copy of the report to "current" 2018-10-24 12:21:07 +01:00
Nick Craig-Wood
52c7c738ca fstest/test_all: limit concurrency and run tests in random order 2018-10-24 10:46:58 +01:00
Nick Craig-Wood
5c32b32011 fstest/test_all: fix directories that tests are run in
- Don't build a binary for backend tests
- Run tests in their relevant directories
2018-10-23 17:31:11 +01:00
Nick Craig-Wood
fe61cff079 crypt: ensure integration tests run correctly when -remote is set 2018-10-23 17:12:38 +01:00
Nick Craig-Wood
fbab1e55bb fstest/test_all: adapt to nested test definitions 2018-10-23 16:56:35 +01:00
Nick Craig-Wood
1bfd07567e fstest/test_all: add oneonly flag to only run one test per backend if required 2018-10-23 14:07:48 +01:00
Nick Craig-Wood
f97c4c8d9d fstest/test_all: rework integration tests to improve output
- Make integration tests use a config file
- Output individual logs for each test
- Make HTML report and open browser
- Optionally email and upload results
2018-10-23 14:07:48 +01:00
Anagh Kumar Baranwal
a3c55462a8 Set python version explicitly to 2 to avoid issues on systems where
the default python version is `3`

Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-23 12:14:52 +01:00
Anagh Kumar Baranwal
bbb9a504a8 Added docs to use the -P/--progress flag for real time statistics
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-23 12:14:52 +01:00
Jon Fautley
dedc7d885c sftp: Ensure file hash checking is really disabled 2018-10-23 12:03:50 +01:00
Nick Craig-Wood
c5ac96e9e7 Make --files-from only read the objects specified and don't scan directories
Before this change, using --files-from would scan all the directories
that the files could possibly be in, causing rclone to do more work
than was necessary.

After this change, rclone constructs an in-memory tree using the
--fast-list mechanism, but from all of the files in the --files-from
list and without scanning any directories.

Any objects that are not found in the --files-from list are ignored
silently.

This mechanism is used for sync/copy/move (march) and all of the
listing commands ls/lsf/md5sum/etc (walk).
2018-10-20 18:13:31 +01:00
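A usage sketch (illustrative paths): given files-from.txt containing one path per line, relative to the source

dir/file1.txt
dir2/file2.txt

then

rclone copy --files-from files-from.txt /home/me remote:backup

transfers just those two files without listing dir/ or dir2/ on either side.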
Nick Craig-Wood
9959c5f17f webdav: add Content-Type to PUT requests - fixes #2664 2018-10-18 13:18:24 +01:00
Nick Craig-Wood
e8d0a363fc opendrive: fix transfer of files with + and & in - fixes #2657 2018-10-17 14:22:04 +01:00
albertony
935b7c1c0f jottacloud: fix bug in --fast-list handing of empty folders - fixes #2650 2018-10-17 13:58:36 +01:00
Fabian Möller
15ce0ae57c fstests: fix maximum tested size in TestFsPutChunked
Before this it was possible that maxChunkSize was incorrectly set to 200.
2018-10-16 11:50:47 +02:00
Nick Craig-Wood
67703a73de Start v1.44-DEV development 2018-10-15 12:33:27 +01:00
474 changed files with 45131 additions and 20357 deletions

View File

@@ -4,7 +4,6 @@ dist: trusty
os:
- linux
go:
- 1.7.x
- 1.8.x
- 1.9.x
- 1.10.x

View File

@@ -123,6 +123,13 @@ but they can be run against any of the remotes.
cd fs/operations
go test -v -remote TestDrive:
If you want to use the integration test framework to run these tests
all together with an HTML report and test retries then from the
project root:
go install github.com/ncw/rclone/fstest/test_all
test_all -backend drive
If you want to run all the integration tests against all the remotes,
then change into the project root and run
@@ -343,7 +350,7 @@ Unit tests
Integration tests
* Add your fs to `fstest/test_all/test_all.go`
* Add your backend to `fstest/test_all/config.yaml`
* Make sure integration tests pass with
* `cd fs/operations`
* `go test -v -remote TestRemote:`

File diff suppressed because it is too large

MANUAL.md (870 lines changed)

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -50,10 +50,9 @@ version:
# Full suite of integration tests
test: rclone
go install github.com/ncw/rclone/fstest/test_all
-go test -v -count 1 -timeout 20m $(BUILDTAGS) $(GO_FILES) 2>&1 | tee test.log
-test_all github.com/ncw/rclone/fs/operations github.com/ncw/rclone/fs/sync 2>&1 | tee fs/test_all.log
@echo "Written logs in test.log and fs/test_all.log"
go install --ldflags "-s -X github.com/ncw/rclone/fs.Version=$(TAG)" $(BUILDTAGS) github.com/ncw/rclone/fstest/test_all
-test_all 2>&1 | tee test_all.log
@echo "Written logs in test_all.log"
# Quick test
quicktest:
@@ -117,7 +116,7 @@ MANUAL.txt: MANUAL.md
pandoc -s --from markdown --to plain MANUAL.md -o MANUAL.txt
commanddocs: rclone
rclone gendocs docs/content/commands/
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs docs/content/commands/
backenddocs: rclone bin/make_backend_docs.py
./bin/make_backend_docs.py

View File

@@ -55,6 +55,8 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
* WebDAV [:page_facing_up:](https://rclone.org/webdav/)
* Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
* The local filesystem [:page_facing_up:](https://rclone.org/local/)
Please see [the full list of all storage providers and their features](https://rclone.org/overview/)
## Features
@@ -71,10 +73,15 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
## Installation & documentation
Please see the rclone website for installation, usage, documentation,
changelog and configuration walkthroughs.
Please see the [rclone website](https://rclone.org/) for:
* https://rclone.org/
* [Installation](https://rclone.org/install/)
* [Documentation & configuration](https://rclone.org/docs/)
* [Changelog](https://rclone.org/changelog/)
* [FAQ](https://rclone.org/faq/)
* [Storage providers](https://rclone.org/overview/)
* [Forum](https://forum.rclone.org/)
* ...and more
## Downloads

View File

@@ -32,6 +32,21 @@ Early in the next release cycle update the vendored dependencies
* git add new files
* git commit -a -v
If `make update` fails with errors like this:
```
# github.com/cpuguy83/go-md2man/md2man
../../../../pkg/mod/github.com/cpuguy83/go-md2man@v1.0.8/md2man/md2man.go:11:16: undefined: blackfriday.EXTENSION_NO_INTRA_EMPHASIS
../../../../pkg/mod/github.com/cpuguy83/go-md2man@v1.0.8/md2man/md2man.go:12:16: undefined: blackfriday.EXTENSION_TABLES
```
Can be fixed with
* GO111MODULE=on go get -u github.com/russross/blackfriday@v1.5.2
* GO111MODULE=on go mod tidy
* GO111MODULE=on go mod vendor
Making a point release. If rclone needs a point release due to some
horrendous bug, then
* git branch v1.XX v1.XX-fixes

View File

@@ -264,7 +264,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, acdConfig, baseClient)
if err != nil {
log.Fatalf("Failed to configure Amazon Drive: %v", err)
return nil, errors.Wrap(err, "failed to configure Amazon Drive")
}
c := acd.NewClient(oAuthClient)

View File

@@ -1,6 +1,6 @@
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
// +build !plan9,!solaris,go1.8
package azureblob
@@ -22,7 +22,7 @@ import (
"sync"
"time"
"github.com/Azure/azure-storage-blob-go/2018-03-28/azblob"
"github.com/Azure/azure-storage-blob-go/azblob"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/fs/config/configmap"
@@ -50,6 +50,7 @@ const (
defaultUploadCutoff = 256 * fs.MebiByte
maxUploadCutoff = 256 * fs.MebiByte
defaultAccessTier = azblob.AccessTierNone
maxTryTimeout = time.Hour * 24 * 365 //max time of an azure web request response window (whether or not data is flowing)
)
// Register with Fs
@@ -322,7 +323,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, errors.Wrap(err, "failed to make azure storage url from account and endpoint")
}
pipeline := azblob.NewPipeline(credential, azblob.PipelineOptions{})
pipeline := azblob.NewPipeline(credential, azblob.PipelineOptions{Retry: azblob.RetryOptions{TryTimeout: maxTryTimeout}})
serviceURL = azblob.NewServiceURL(*u, pipeline)
containerURL = serviceURL.NewContainerURL(container)
case opt.SASURL != "":
@@ -331,7 +332,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return nil, errors.Wrapf(err, "failed to parse SAS URL")
}
// use anonymous credentials in case of sas url
pipeline := azblob.NewPipeline(azblob.NewAnonymousCredential(), azblob.PipelineOptions{})
pipeline := azblob.NewPipeline(azblob.NewAnonymousCredential(), azblob.PipelineOptions{Retry: azblob.RetryOptions{TryTimeout: maxTryTimeout}})
// Check if we have container level SAS or account level sas
parts := azblob.NewBlobURLParts(*u)
if parts.ContainerName != "" {
@@ -705,6 +706,11 @@ func (f *Fs) Mkdir(dir string) error {
f.containerOK = true
return false, nil
case azblob.ServiceCodeContainerBeingDeleted:
// From https://docs.microsoft.com/en-us/rest/api/storageservices/delete-container
// When a container is deleted, a container with the same name cannot be created
// for at least 30 seconds; the container may not be available for more than 30
// seconds if the service is still processing the request.
time.Sleep(6 * time.Second) // default 10 retries will be 60 seconds
f.containerDeleted = true
return true, err
}
@@ -722,7 +728,7 @@ func (f *Fs) Mkdir(dir string) error {
// isEmpty checks to see if a given directory is empty and returns an error if not
func (f *Fs) isEmpty(dir string) (err error) {
empty := true
err = f.list("", true, 1, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
err = f.list(dir, true, 1, func(remote string, object *azblob.BlobItem, isDirectory bool) error {
empty = false
return nil
})
@@ -1368,7 +1374,7 @@ func (o *Object) SetTier(tier string) error {
blob := o.getBlobReference()
ctx := context.Background()
err := o.fs.pacer.Call(func() (bool, error) {
_, err := blob.SetTier(ctx, desiredAccessTier)
_, err := blob.SetTier(ctx, desiredAccessTier, azblob.LeaseAccessConditions{})
return o.fs.shouldRetry(err)
})

View File

@@ -1,4 +1,4 @@
// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
// +build !plan9,!solaris,go1.8
package azureblob

View File

@@ -1,6 +1,6 @@
// Test AzureBlob filesystem interface
// +build !freebsd,!netbsd,!openbsd,!plan9,!solaris,go1.8
// +build !plan9,!solaris,go1.8
package azureblob

View File

@@ -1,6 +1,6 @@
// Build for azureblob for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build freebsd netbsd openbsd plan9 solaris !go1.8
// +build plan9 solaris !go1.8
package azureblob

View File

@@ -252,7 +252,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure Box: %v", err)
return nil, errors.Wrap(err, "failed to configure Box")
}
f := &Fs{

View File

@@ -471,7 +471,7 @@ func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
fs.Infof(name, "Chunk Clean Interval: %v", f.opt.ChunkCleanInterval)
fs.Infof(name, "Workers: %v", f.opt.TotalWorkers)
fs.Infof(name, "File Age: %v", f.opt.InfoAge)
if !f.opt.StoreWrites {
if f.opt.StoreWrites {
fs.Infof(name, "Cache Writes: enabled")
}

View File

@@ -5,14 +5,12 @@ package cache_test
import (
"bytes"
"encoding/base64"
"encoding/json"
goflag "flag"
"fmt"
"io"
"io/ioutil"
"log"
"math/rand"
"net/http"
"os"
"path"
"path/filepath"
@@ -32,11 +30,11 @@ import (
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/object"
"github.com/ncw/rclone/fs/rc"
"github.com/ncw/rclone/fs/rc/rcflags"
"github.com/ncw/rclone/fstest"
"github.com/ncw/rclone/vfs"
"github.com/ncw/rclone/vfs/vfsflags"
"github.com/pkg/errors"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -692,8 +690,8 @@ func TestInternalChangeSeenAfterDirCacheFlush(t *testing.T) {
}
func TestInternalChangeSeenAfterRc(t *testing.T) {
rcflags.Opt.Enabled = true
rc.Start(&rcflags.Opt)
cacheExpire := rc.Calls.Get("cache/expire")
assert.NotNil(t, cacheExpire)
id := fmt.Sprintf("ticsarc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
@@ -726,13 +724,8 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
require.NoError(t, err)
require.NotEqual(t, o.ModTime().String(), co.ModTime().String())
m := make(map[string]string)
res, err := http.Post(fmt.Sprintf("http://localhost:5572/cache/expire?remote=%s", "data.bin"), "application/json; charset=utf-8", strings.NewReader(""))
require.NoError(t, err)
defer func() {
_ = res.Body.Close()
}()
_ = json.NewDecoder(res.Body).Decode(&m)
// Call the rc function
m, err := cacheExpire.Fn(rc.Params{"remote": "data.bin"})
require.Contains(t, m, "status")
require.Contains(t, m, "message")
require.Equal(t, "ok", m["status"])
@@ -752,13 +745,8 @@ func TestInternalChangeSeenAfterRc(t *testing.T) {
li1, err = runInstance.list(t, rootFs, "")
require.Len(t, li1, 1)
m = make(map[string]string)
res2, err := http.Post("http://localhost:5572/cache/expire?remote=/", "application/json; charset=utf-8", strings.NewReader(""))
require.NoError(t, err)
defer func() {
_ = res2.Body.Close()
}()
_ = json.NewDecoder(res2.Body).Decode(&m)
// Call the rc function
m, err = cacheExpire.Fn(rc.Params{"remote": "/"})
require.Contains(t, m, "status")
require.Contains(t, m, "message")
require.Equal(t, "ok", m["status"])

View File

@@ -7,13 +7,30 @@ import (
"testing"
"github.com/ncw/rclone/backend/crypt"
_ "github.com/ncw/rclone/backend/drive" // for integration tests
_ "github.com/ncw/rclone/backend/local"
_ "github.com/ncw/rclone/backend/swift" // for integration tests
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fstest"
"github.com/ncw/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
t.Skip("Skipping as -remote not set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*crypt.Object)(nil),
})
}
// TestStandard runs integration tests against the remote
func TestStandard(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-standard")
name := "TestCrypt"
fstests.Run(t, &fstests.Opt{
@@ -30,6 +47,9 @@ func TestStandard(t *testing.T) {
// TestOff runs integration tests against the remote
func TestOff(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-off")
name := "TestCrypt2"
fstests.Run(t, &fstests.Opt{
@@ -46,6 +66,9 @@ func TestOff(t *testing.T) {
// TestObfuscate runs integration tests against the remote
func TestObfuscate(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-obfuscate")
name := "TestCrypt3"
fstests.Run(t, &fstests.Opt{

View File

@@ -243,10 +243,19 @@ func (f *Fs) InternalTestDocumentLink(t *testing.T) {
}
func (f *Fs) InternalTest(t *testing.T) {
t.Run("DocumentImport", f.InternalTestDocumentImport)
t.Run("DocumentUpdate", f.InternalTestDocumentUpdate)
t.Run("DocumentExport", f.InternalTestDocumentExport)
t.Run("DocumentLink", f.InternalTestDocumentLink)
// These tests all depend on each other so run them as nested tests
t.Run("DocumentImport", func(t *testing.T) {
f.InternalTestDocumentImport(t)
t.Run("DocumentUpdate", func(t *testing.T) {
f.InternalTestDocumentUpdate(t)
t.Run("DocumentExport", func(t *testing.T) {
f.InternalTestDocumentExport(t)
t.Run("DocumentLink", func(t *testing.T) {
f.InternalTestDocumentLink(t)
})
})
})
})
}
var _ fstests.InternalTester = (*Fs)(nil)

View File

@@ -34,6 +34,7 @@ import (
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/common"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/files"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/sharing"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/team"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/users"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
@@ -131,13 +132,19 @@ slightly (at most 10%% for 128MB in tests) at the cost of using more
memory. It can be set smaller if you are tight on memory.`, fs.SizeSuffix(maxChunkSize)),
Default: fs.SizeSuffix(defaultChunkSize),
Advanced: true,
}, {
Name: "impersonate",
Help: "Impersonate this user when using a business account.",
Default: "",
Advanced: true,
}},
})
}
// Options defines the configuration for this backend
type Options struct {
ChunkSize fs.SizeSuffix `config:"chunk_size"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Impersonate string `config:"impersonate"`
}
// Fs represents a remote dropbox server
@@ -149,6 +156,7 @@ type Fs struct {
srv files.Client // the connection to the dropbox server
sharing sharing.Client // as above, but for generating sharing links
users users.Client // as above, but for accessing user information
team team.Client // for the Teams API
slashRoot string // root with "/" prefix, lowercase
slashRootSlash string // root with "/" prefix and postfix, lowercase
pacer *pacer.Pacer // To pace the API calls
@@ -262,6 +270,29 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
Client: oAuthClient, // maybe???
HeaderGenerator: f.headerGenerator,
}
// NOTE: needs to be created pre-impersonation so we can look up the impersonated user
f.team = team.New(config)
if opt.Impersonate != "" {
user := team.UserSelectorArg{
Email: opt.Impersonate,
}
user.Tag = "email"
members := []*team.UserSelectorArg{&user}
args := team.NewMembersGetInfoArgs(members)
memberIds, err := f.team.MembersGetInfo(args)
if err != nil {
return nil, errors.Wrapf(err, "invalid dropbox team member: %q", opt.Impersonate)
}
config.AsMemberID = memberIds[0].MemberInfo.Profile.MemberProfile.TeamMemberId
}
f.srv = files.New(config)
f.sharing = sharing.New(config)
f.users = users.New(config)

View File

@@ -464,12 +464,12 @@ func (f *Fs) listFileDir(remoteStartPath string, startFolder *api.JottaFolder, f
if folder.Deleted {
return nil
}
folderPath := path.Join(folder.Path, folder.Name)
remoteDirLength := len(folderPath) - pathPrefixLength
folderPath := restoreReservedChars(path.Join(folder.Path, folder.Name))
folderPathLength := len(folderPath)
var remoteDir string
if remoteDirLength > 0 {
remoteDir = restoreReservedChars(folderPath[pathPrefixLength+1:])
if remoteDirLength > startPathLength {
if folderPathLength > pathPrefixLength {
remoteDir = folderPath[pathPrefixLength+1:]
if folderPathLength > startPathLength {
d := fs.NewDir(remoteDir, time.Time(folder.ModifiedAt))
err := fn(d)
if err != nil {

View File

@@ -404,13 +404,13 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
if opt.DriveID == "" || opt.DriveType == "" {
log.Fatalf("Unable to get drive_id and drive_type. If you are upgrading from older versions of rclone, please run `rclone config` and re-configure this backend.")
return nil, errors.New("unable to get drive_id and drive_type - if you are upgrading from older versions of rclone, please run `rclone config` and re-configure this backend")
}
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure OneDrive: %v", err)
return nil, errors.Wrap(err, "failed to configure OneDrive")
}
f := &Fs{

View File

@@ -6,6 +6,7 @@ import (
"io"
"mime/multipart"
"net/http"
"net/url"
"path"
"strconv"
"strings"
@@ -20,6 +21,7 @@ import (
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/dircache"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/readers"
"github.com/ncw/rclone/lib/rest"
"github.com/pkg/errors"
)
@@ -930,8 +932,9 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
// resp.Body.Close()
// fs.Debugf(nil, "PostOpen: %#v", openResponse)
// 1 MB chunks size
// 10 MB chunks size
chunkSize := int64(1024 * 1024 * 10)
buf := make([]byte, int(chunkSize))
chunkOffset := int64(0)
remainingBytes := size
chunkCounter := 0
@@ -944,14 +947,19 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
remainingBytes -= currentChunkSize
fs.Debugf(o, "Uploading chunk %d, size=%d, remain=%d", chunkCounter, currentChunkSize, remainingBytes)
chunk := readers.NewRepeatableLimitReaderBuffer(in, buf, currentChunkSize)
err = o.fs.pacer.Call(func() (bool, error) {
// seek to the start in case this is a retry
if _, err = chunk.Seek(0, io.SeekStart); err != nil {
return false, err
}
var formBody bytes.Buffer
w := multipart.NewWriter(&formBody)
fw, err := w.CreateFormFile("file_data", o.remote)
if err != nil {
return false, err
}
if _, err = io.CopyN(fw, in, currentChunkSize); err != nil {
if _, err = io.Copy(fw, chunk); err != nil {
return false, err
}
// Add session_id
@@ -1082,7 +1090,7 @@ func (o *Object) readMetaData() (err error) {
err = o.fs.pacer.Call(func() (bool, error) {
opts := rest.Opts{
Method: "GET",
Path: "/folder/itembyname.json/" + o.fs.session.SessionID + "/" + directoryID + "?name=" + rest.URLPathEscape(replaceReservedChars(leaf)),
Path: "/folder/itembyname.json/" + o.fs.session.SessionID + "/" + directoryID + "?name=" + url.QueryEscape(replaceReservedChars(leaf)),
}
resp, err = o.fs.srv.CallJSON(&opts, nil, &folderList)
return o.fs.shouldRetry(resp, err)

View File

@@ -246,7 +246,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure Pcloud: %v", err)
return nil, errors.Wrap(err, "failed to configure Pcloud")
}
f := &Fs{

View File

@@ -69,7 +69,7 @@ func init() {
}},
}, {
Name: "connection_retries",
Help: "Number of connnection retries.",
Help: "Number of connection retries.",
Default: 3,
Advanced: true,
}},

View File

@@ -291,7 +291,11 @@ func init() {
Provider: "DigitalOcean",
}, {
Value: "s3.wasabisys.com",
Help: "Wasabi Object Storage",
Help: "Wasabi US East endpoint",
Provider: "Wasabi",
}, {
Value: "s3.us-west-1.wasabisys.com",
Help: "Wasabi US West endpoint",
Provider: "Wasabi",
}},
}, {
@@ -448,7 +452,12 @@ func init() {
Provider: "!AWS,IBMCOS",
}, {
Name: "acl",
Help: "Canned ACL used when creating buckets and/or storing objects in S3.\nFor more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl",
Help: `Canned ACL used when creating buckets and storing or copying objects.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.`,
Examples: []fs.OptionExample{{
Value: "private",
Help: "Owner gets FULL_CONTROL. No one else has access rights (default).",
@@ -799,8 +808,21 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
WithHTTPClient(fshttp.NewClient(fs.Config)).
WithS3ForcePathStyle(opt.ForcePathStyle)
// awsConfig.WithLogLevel(aws.LogDebugWithSigning)
ses := session.New()
c := s3.New(ses, awsConfig)
awsSessionOpts := session.Options{
Config: *awsConfig,
}
if opt.EnvAuth && opt.AccessKeyID == "" && opt.SecretAccessKey == "" {
// Enable loading config options from ~/.aws/config (selected by AWS_PROFILE env)
awsSessionOpts.SharedConfigState = session.SharedConfigEnable
// The session constructor (aws/session/mergeConfigSrcs) will only use the user's preferred credential source
// (from the shared config file) if the passed-in Options.Config.Credentials is nil.
awsSessionOpts.Config.Credentials = nil
}
ses, err := session.NewSessionWithOptions(awsSessionOpts)
if err != nil {
return nil, nil, err
}
c := s3.New(ses)
if opt.V2Auth || opt.Region == "other-v2-signature" {
fs.Debugf(nil, "Using v2 auth")
signer := func(req *request.Request) {
@@ -1286,6 +1308,7 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
source := pathEscape(srcFs.bucket + "/" + srcFs.root + srcObj.remote)
req := s3.CopyObjectInput{
Bucket: &f.bucket,
ACL: &f.opt.ACL,
Key: &key,
CopySource: &source,
MetadataDirective: aws.String(s3.MetadataDirectiveCopy),

View File

@@ -769,6 +769,10 @@ func (o *Object) Hash(r hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
if o.fs.opt.DisableHashCheck {
return "", nil
}
c, err := o.fs.getSftpConnection()
if err != nil {
return "", errors.Wrap(err, "Hash get SFTP connection")

View File

@@ -21,6 +21,7 @@ import (
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/operations"
"github.com/ncw/rclone/fs/walk"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/swift"
"github.com/pkg/errors"
)
@@ -30,6 +31,7 @@ const (
directoryMarkerContentType = "application/directory" // content type of directory marker objects
listChunks = 1000 // chunk size to read directory listings
defaultChunkSize = 5 * fs.GibiByte
minSleep = 10 * time.Millisecond // In case of error, start at 10ms sleep.
)
// SharedOptions are shared between swift and hubic
@@ -187,6 +189,7 @@ type Fs struct {
containerOK bool // true if we have created the container
segmentsContainer string // container to store the segments (if any) in
noCheckContainer bool // don't check the container before creating it
pacer *pacer.Pacer // To pace the API calls
}
// Object describes a swift object
@@ -227,6 +230,32 @@ func (f *Fs) Features() *fs.Features {
return f.features
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
401, // Unauthorized (eg "Token has expired")
408, // Request Timeout
409, // Conflict - various states that could be resolved on a retry
429, // Rate exceeded.
500, // Get occasional 500 Internal Server Error
503, // Service Unavailable/Slow Down - "Reduce your request rate"
504, // Gateway Time-out
}
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(err error) (bool, error) {
// If this is an swift.Error object extract the HTTP error code
if swiftError, ok := err.(*swift.Error); ok {
for _, e := range retryErrorCodes {
if swiftError.StatusCode == e {
return true, err
}
}
}
// Check for generic failure conditions
return fserrors.ShouldRetry(err), err
}
// Pattern to match a swift path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
@@ -337,6 +366,7 @@ func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, n
segmentsContainer: container + "_segments",
root: directory,
noCheckContainer: noCheckContainer,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.S3Pacer),
}
f.features = (&fs.Features{
ReadMimeType: true,
@@ -346,7 +376,11 @@ func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, n
if f.root != "" {
f.root += "/"
// Check to see if the object exists - ignoring directory markers
info, _, err := f.c.Object(container, directory)
var info swift.Object
err = f.pacer.Call(func() (bool, error) {
info, _, err = f.c.Object(container, directory)
return shouldRetry(err)
})
if err == nil && info.ContentType != directoryMarkerContentType {
f.root = path.Dir(directory)
if f.root == "." {
@@ -436,7 +470,12 @@ func (f *Fs) listContainerRoot(container, root string, dir string, recurse bool,
}
rootLength := len(root)
return f.c.ObjectsWalk(container, &opts, func(opts *swift.ObjectsOpts) (interface{}, error) {
objects, err := f.c.Objects(container, opts)
var objects []swift.Object
var err error
err = f.pacer.Call(func() (bool, error) {
objects, err = f.c.Objects(container, opts)
return shouldRetry(err)
})
if err == nil {
for i := range objects {
object := &objects[i]
@@ -525,7 +564,11 @@ func (f *Fs) listContainers(dir string) (entries fs.DirEntries, err error) {
if dir != "" {
return nil, fs.ErrorListBucketRequired
}
containers, err := f.c.ContainersAll(nil)
var containers []swift.Container
err = f.pacer.Call(func() (bool, error) {
containers, err = f.c.ContainersAll(nil)
return shouldRetry(err)
})
if err != nil {
return nil, errors.Wrap(err, "container listing failed")
}
@@ -586,7 +629,12 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
// About gets quota information
func (f *Fs) About() (*fs.Usage, error) {
containers, err := f.c.ContainersAll(nil)
var containers []swift.Container
var err error
err = f.pacer.Call(func() (bool, error) {
containers, err = f.c.ContainersAll(nil)
return shouldRetry(err)
})
if err != nil {
return nil, errors.Wrap(err, "container listing failed")
}
@@ -636,14 +684,20 @@ func (f *Fs) Mkdir(dir string) error {
// Check to see if container exists first
var err error = swift.ContainerNotFound
if !f.noCheckContainer {
_, _, err = f.c.Container(f.container)
err = f.pacer.Call(func() (bool, error) {
_, _, err = f.c.Container(f.container)
return shouldRetry(err)
})
}
if err == swift.ContainerNotFound {
headers := swift.Headers{}
if f.opt.StoragePolicy != "" {
headers["X-Storage-Policy"] = f.opt.StoragePolicy
}
err = f.c.ContainerCreate(f.container, headers)
err = f.pacer.Call(func() (bool, error) {
err = f.c.ContainerCreate(f.container, headers)
return shouldRetry(err)
})
}
if err == nil {
f.containerOK = true
@@ -660,7 +714,11 @@ func (f *Fs) Rmdir(dir string) error {
if f.root != "" || dir != "" {
return nil
}
err := f.c.ContainerDelete(f.container)
var err error
err = f.pacer.Call(func() (bool, error) {
err = f.c.ContainerDelete(f.container)
return shouldRetry(err)
})
if err == nil {
f.containerOK = false
}
@@ -719,7 +777,10 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
return nil, fs.ErrorCantCopy
}
srcFs := srcObj.fs
_, err = f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil)
err = f.pacer.Call(func() (bool, error) {
_, err = f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil)
return shouldRetry(err)
})
if err != nil {
return nil, err
}
@@ -809,7 +870,12 @@ func (o *Object) readMetaData() (err error) {
if o.headers != nil {
return nil
}
info, h, err := o.fs.c.Object(o.fs.container, o.fs.root+o.remote)
var info swift.Object
var h swift.Headers
err = o.fs.pacer.Call(func() (bool, error) {
info, h, err = o.fs.c.Object(o.fs.container, o.fs.root+o.remote)
return shouldRetry(err)
})
if err != nil {
if err == swift.ObjectNotFound {
return fs.ErrorObjectNotFound
@@ -861,7 +927,10 @@ func (o *Object) SetModTime(modTime time.Time) error {
newHeaders[k] = v
}
}
return o.fs.c.ObjectUpdate(o.fs.container, o.fs.root+o.remote, newHeaders)
return o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectUpdate(o.fs.container, o.fs.root+o.remote, newHeaders)
return shouldRetry(err)
})
}
// Storable returns if this object is storable
@@ -876,7 +945,10 @@ func (o *Object) Storable() bool {
func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
headers := fs.OpenOptionHeaders(options)
_, isRanging := headers["Range"]
in, _, err = o.fs.c.ObjectOpen(o.fs.container, o.fs.root+o.remote, !isRanging, headers)
err = o.fs.pacer.Call(func() (bool, error) {
in, _, err = o.fs.c.ObjectOpen(o.fs.container, o.fs.root+o.remote, !isRanging, headers)
return shouldRetry(err)
})
return
}
@@ -903,13 +975,20 @@ func (o *Object) removeSegments(except string) error {
}
segmentPath := segmentsRoot + remote
fs.Debugf(o, "Removing segment file %q in container %q", segmentPath, o.fs.segmentsContainer)
return o.fs.c.ObjectDelete(o.fs.segmentsContainer, segmentPath)
var err error
return o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectDelete(o.fs.segmentsContainer, segmentPath)
return shouldRetry(err)
})
})
if err != nil {
return err
}
// remove the segments container if empty, ignore errors
err = o.fs.c.ContainerDelete(o.fs.segmentsContainer)
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ContainerDelete(o.fs.segmentsContainer)
return shouldRetry(err)
})
if err == nil {
fs.Debugf(o, "Removed empty container %q", o.fs.segmentsContainer)
}
@@ -938,13 +1017,19 @@ func urlEncode(str string) string {
func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64, contentType string) (string, error) {
// Create the segmentsContainer if it doesn't exist
var err error
_, _, err = o.fs.c.Container(o.fs.segmentsContainer)
err = o.fs.pacer.Call(func() (bool, error) {
_, _, err = o.fs.c.Container(o.fs.segmentsContainer)
return shouldRetry(err)
})
if err == swift.ContainerNotFound {
headers := swift.Headers{}
if o.fs.opt.StoragePolicy != "" {
headers["X-Storage-Policy"] = o.fs.opt.StoragePolicy
}
err = o.fs.c.ContainerCreate(o.fs.segmentsContainer, headers)
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ContainerCreate(o.fs.segmentsContainer, headers)
return shouldRetry(err)
})
}
if err != nil {
return "", err
@@ -973,7 +1058,10 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
segmentReader := io.LimitReader(in, n)
segmentPath := fmt.Sprintf("%s/%08d", segmentsPath, i)
fs.Debugf(o, "Uploading segment file %q into %q", segmentPath, o.fs.segmentsContainer)
_, err := o.fs.c.ObjectPut(o.fs.segmentsContainer, segmentPath, segmentReader, true, "", "", headers)
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
_, err = o.fs.c.ObjectPut(o.fs.segmentsContainer, segmentPath, segmentReader, true, "", "", headers)
return shouldRetry(err)
})
if err != nil {
return "", err
}
@@ -984,7 +1072,10 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
headers["Content-Length"] = "0" // set Content-Length as we know it
emptyReader := bytes.NewReader(nil)
manifestName := o.fs.root + o.remote
_, err = o.fs.c.ObjectPut(o.fs.container, manifestName, emptyReader, true, "", contentType, headers)
err = o.fs.pacer.Call(func() (bool, error) {
_, err = o.fs.c.ObjectPut(o.fs.container, manifestName, emptyReader, true, "", contentType, headers)
return shouldRetry(err)
})
return uniquePrefix + "/", err
}
@@ -1021,7 +1112,10 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
}
} else {
headers["Content-Length"] = strconv.FormatInt(size, 10) // set Content-Length as we know it
_, err := o.fs.c.ObjectPut(o.fs.container, o.fs.root+o.remote, in, true, "", contentType, headers)
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
_, err = o.fs.c.ObjectPut(o.fs.container, o.fs.root+o.remote, in, true, "", contentType, headers)
return shouldRetry(err)
})
if err != nil {
return err
}
@@ -1047,7 +1141,10 @@ func (o *Object) Remove() error {
return err
}
// Remove file/manifest first
err = o.fs.c.ObjectDelete(o.fs.container, o.fs.root+o.remote)
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectDelete(o.fs.container, o.fs.root+o.remote)
return shouldRetry(err)
})
if err != nil {
return err
}

View File

@@ -145,6 +145,7 @@ var timeFormats = []string{
time.RFC1123Z, // Fri, 05 Jan 2018 14:14:38 +0000 (as used by mydrive.ch)
time.UnixDate, // Wed May 17 15:31:58 UTC 2017 (as used in an internal server)
noZerosRFC1123, // Fri, 7 Sep 2018 08:49:58 GMT (as used by server in #2574)
time.RFC3339, // Wed, 31 Oct 2018 13:57:11 CET (as used by komfortcloud.de)
}
// UnmarshalXML turns XML into a Time

View File

@@ -31,7 +31,6 @@ import (
"github.com/ncw/rclone/backend/webdav/api"
"github.com/ncw/rclone/backend/webdav/odrvcookie"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
@@ -96,10 +95,11 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
URL string `config:"url"`
Vendor string `config:"vendor"`
User string `config:"user"`
Pass string `config:"pass"`
URL string `config:"url"`
Vendor string `config:"vendor"`
User string `config:"user"`
Pass string `config:"pass"`
BearerToken string `config:"bearer_token"`
}
// Fs represents a remote webdav
@@ -283,9 +283,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
rootIsDir := strings.HasSuffix(root, "/")
root = strings.Trim(root, "/")
user := config.FileGet(name, "user")
pass := config.FileGet(name, "pass")
bearerToken := config.FileGet(name, "bearer_token")
if !strings.HasSuffix(opt.URL, "/") {
opt.URL += "/"
}
@@ -320,10 +317,10 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(f)
if user != "" || pass != "" {
if opt.User != "" || opt.Pass != "" {
f.srv.SetUserPass(opt.User, opt.Pass)
} else if bearerToken != "" {
f.srv.SetHeader("Authorization", "BEARER "+bearerToken)
} else if opt.BearerToken != "" {
f.srv.SetHeader("Authorization", "BEARER "+opt.BearerToken)
}
f.srv.SetErrorHandler(errorHandler)
err = f.setQuirks(opt.Vendor)
@@ -968,6 +965,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
Body: in,
NoResponse: true,
ContentLength: &size, // FIXME this isn't necessary with owncloud - See https://github.com/nextcloud/nextcloud-snap/issues/365
ContentType: fs.MimeType(src),
}
if o.fs.useOCMtime {
opts.ExtraHeaders = map[string]string{

View File

@@ -1,34 +0,0 @@
package src
//from yadisk
import (
"io"
"net/http"
)
//RootAddr is the base URL for Yandex Disk API.
const RootAddr = "https://cloud-api.yandex.com" //also https://cloud-api.yandex.net and https://cloud-api.yandex.ru
func (c *Client) setRequestScope(req *http.Request) {
req.Header.Add("Accept", "application/json")
req.Header.Add("Content-Type", "application/json")
req.Header.Add("Authorization", "OAuth "+c.token)
}
func (c *Client) scopedRequest(method, urlPath string, body io.Reader) (*http.Request, error) {
fullURL := RootAddr
if urlPath[:1] != "/" {
fullURL += "/" + urlPath
} else {
fullURL += urlPath
}
req, err := http.NewRequest(method, fullURL, body)
if err != nil {
return req, err
}
c.setRequestScope(req)
return req, nil
}

View File

@@ -1,133 +0,0 @@
package src
import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"strings"
"github.com/pkg/errors"
)
//Client struct
type Client struct {
token string
basePath string
HTTPClient *http.Client
}
//NewClient creates new client
func NewClient(token string, client ...*http.Client) *Client {
return newClientInternal(
token,
"https://cloud-api.yandex.com/v1/disk", //also "https://cloud-api.yandex.net/v1/disk" "https://cloud-api.yandex.ru/v1/disk"
client...)
}
func newClientInternal(token string, basePath string, client ...*http.Client) *Client {
c := &Client{
token: token,
basePath: basePath,
}
if len(client) != 0 {
c.HTTPClient = client[0]
} else {
c.HTTPClient = http.DefaultClient
}
return c
}
//ErrorHandler type
type ErrorHandler func(*http.Response) error
var defaultErrorHandler ErrorHandler = func(resp *http.Response) error {
if resp.StatusCode/100 == 5 {
return errors.New("server error")
}
if resp.StatusCode/100 == 4 {
var response DiskClientError
contents, _ := ioutil.ReadAll(resp.Body)
err := json.Unmarshal(contents, &response)
if err != nil {
return err
}
return response
}
if resp.StatusCode/100 == 3 {
return errors.New("redirect error")
}
return nil
}
func (HTTPRequest *HTTPRequest) run(client *Client) ([]byte, error) {
var err error
values := make(url.Values)
for k, v := range HTTPRequest.Parameters {
values.Set(k, fmt.Sprintf("%v", v))
}
var req *http.Request
if HTTPRequest.Method == "POST" {
// TODO json serialize
req, err = http.NewRequest(
"POST",
client.basePath+HTTPRequest.Path,
strings.NewReader(values.Encode()))
if err != nil {
return nil, err
}
// TODO
// req.Header.Set("Content-Type", "application/json")
} else {
req, err = http.NewRequest(
HTTPRequest.Method,
client.basePath+HTTPRequest.Path+"?"+values.Encode(),
nil)
if err != nil {
return nil, err
}
}
for headerName := range HTTPRequest.Headers {
var headerValues = HTTPRequest.Headers[headerName]
for _, headerValue := range headerValues {
req.Header.Set(headerName, headerValue)
}
}
return runRequest(client, req)
}
func runRequest(client *Client, req *http.Request) ([]byte, error) {
return runRequestWithErrorHandler(client, req, defaultErrorHandler)
}
func runRequestWithErrorHandler(client *Client, req *http.Request, errorHandler ErrorHandler) (out []byte, err error) {
resp, err := client.HTTPClient.Do(req)
if err != nil {
return nil, err
}
defer CheckClose(resp.Body, &err)
return checkResponseForErrorsWithErrorHandler(resp, errorHandler)
}
func checkResponseForErrorsWithErrorHandler(resp *http.Response, errorHandler ErrorHandler) ([]byte, error) {
if resp.StatusCode/100 > 2 {
return nil, errorHandler(resp)
}
return ioutil.ReadAll(resp.Body)
}
// CheckClose is a utility function used to check the return from
// Close in a defer statement.
func CheckClose(c io.Closer, err *error) {
cerr := c.Close()
if *err == nil {
*err = cerr
}
}

View File

@@ -1,51 +0,0 @@
package src
import (
"bytes"
"encoding/json"
"io"
"net/url"
)
//CustomPropertyResponse struct we send and is returned by the API for CustomProperty request.
type CustomPropertyResponse struct {
CustomProperties map[string]interface{} `json:"custom_properties"`
}
//SetCustomProperty will set specified data from Yandex Disk
func (c *Client) SetCustomProperty(remotePath string, property string, value string) error {
rcm := map[string]interface{}{
property: value,
}
cpr := CustomPropertyResponse{rcm}
data, _ := json.Marshal(cpr)
body := bytes.NewReader(data)
err := c.SetCustomPropertyRequest(remotePath, body)
if err != nil {
return err
}
return err
}
//SetCustomPropertyRequest will make a CustomProperty request to set the property on the remote path.
func (c *Client) SetCustomPropertyRequest(remotePath string, body io.Reader) (err error) {
values := url.Values{}
values.Add("path", remotePath)
req, err := c.scopedRequest("PATCH", "/v1/disk/resources?"+values.Encode(), body)
if err != nil {
return err
}
resp, err := c.HTTPClient.Do(req)
if err != nil {
return err
}
if err := CheckAPIError(resp); err != nil {
return err
}
defer CheckClose(resp.Body, &err)
//If needed we can read response and check if custom_property is set.
return nil
}


@@ -1,23 +0,0 @@
package src
import (
"net/url"
"strconv"
)
// Delete will remove the specified file/folder from Yandex Disk
func (c *Client) Delete(remotePath string, permanently bool) error {
values := url.Values{}
values.Add("permanently", strconv.FormatBool(permanently))
values.Add("path", remotePath)
urlPath := "/v1/disk/resources?" + values.Encode()
fullURL := RootAddr
if urlPath[:1] != "/" {
fullURL += "/" + urlPath
} else {
fullURL += urlPath
}
return c.PerformDelete(fullURL)
}
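
The query string above is just url.Values plus strconv.FormatBool; a tiny runnable sketch (the path is an example):

package main

import (
    "fmt"
    "net/url"
    "strconv"
)

func main() {
    values := url.Values{}
    values.Add("permanently", strconv.FormatBool(true))
    values.Add("path", "disk:/old/report.txt")
    // Encode sorts keys, so "path" comes before "permanently"
    fmt.Println("/v1/disk/resources?" + values.Encode())
    // /v1/disk/resources?path=disk%3A%2Fold%2Freport.txt&permanently=true
}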


@@ -1,48 +0,0 @@
package src
import "encoding/json"
//DiskInfoRequest type
type DiskInfoRequest struct {
client *Client
HTTPRequest *HTTPRequest
}
func (req *DiskInfoRequest) request() *HTTPRequest {
return req.HTTPRequest
}
//DiskInfoResponse struct is returned by the API for DiskInfo request.
type DiskInfoResponse struct {
TrashSize uint64 `json:"TrashSize"`
TotalSpace uint64 `json:"TotalSpace"`
UsedSpace uint64 `json:"UsedSpace"`
SystemFolders map[string]string `json:"SystemFolders"`
}
//NewDiskInfoRequest creates a new DiskInfo request
func (c *Client) NewDiskInfoRequest() *DiskInfoRequest {
return &DiskInfoRequest{
client: c,
HTTPRequest: createGetRequest(c, "/", nil),
}
}
//Exec runs the DiskInfo request
func (req *DiskInfoRequest) Exec() (*DiskInfoResponse, error) {
data, err := req.request().run(req.client)
if err != nil {
return nil, err
}
var info DiskInfoResponse
err = json.Unmarshal(data, &info)
if err != nil {
return nil, err
}
if info.SystemFolders == nil {
info.SystemFolders = make(map[string]string)
}
return &info, nil
}


@@ -1,66 +0,0 @@
package src
import (
"encoding/json"
"io"
"net/url"
)
// DownloadResponse struct is returned by the API for a Download request.
type DownloadResponse struct {
HRef string `json:"href"`
Method string `json:"method"`
Templated bool `json:"templated"`
}
// Download will get the specified data from Yandex.Disk, supplying the extra headers
func (c *Client) Download(remotePath string, headers map[string]string) (io.ReadCloser, error) { //io.Writer
ur, err := c.DownloadRequest(remotePath)
if err != nil {
return nil, err
}
return c.PerformDownload(ur.HRef, headers)
}
// DownloadRequest will make a download request and return a URL to download the data from.
func (c *Client) DownloadRequest(remotePath string) (ur *DownloadResponse, err error) {
values := url.Values{}
values.Add("path", remotePath)
req, err := c.scopedRequest("GET", "/v1/disk/resources/download?"+values.Encode(), nil)
if err != nil {
return nil, err
}
resp, err := c.HTTPClient.Do(req)
if err != nil {
return nil, err
}
if err := CheckAPIError(resp); err != nil {
return nil, err
}
defer CheckClose(resp.Body, &err)
ur, err = ParseDownloadResponse(resp.Body)
if err != nil {
return nil, err
}
return ur, nil
}
// ParseDownloadResponse tries to read and parse DownloadResponse struct.
func ParseDownloadResponse(data io.Reader) (*DownloadResponse, error) {
dec := json.NewDecoder(data)
var ur DownloadResponse
if err := dec.Decode(&ur); err == io.EOF {
// ok
} else if err != nil {
return nil, err
}
// TODO: check if there is any trash data after JSON and crash if there is.
return &ur, nil
}
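
The decoder above treats io.EOF as an empty but valid body. A standalone sketch of that decode-or-empty pattern, using a stand-in response type:

package main

import (
    "encoding/json"
    "fmt"
    "io"
    "strings"
)

// linkResponse is a stand-in for DownloadResponse.
type linkResponse struct {
    HRef   string `json:"href"`
    Method string `json:"method"`
}

func parseLink(r io.Reader) (*linkResponse, error) {
    var lr linkResponse
    if err := json.NewDecoder(r).Decode(&lr); err != nil && err != io.EOF {
        return nil, err // a real decode error
    }
    return &lr, nil // io.EOF just means an empty body
}

func main() {
    lr, _ := parseLink(strings.NewReader(`{"href":"https://example.com/x","method":"GET"}`))
    fmt.Println(lr.Method, lr.HRef)
    empty, _ := parseLink(strings.NewReader(""))
    fmt.Printf("%+v\n", *empty)
}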


@@ -1,9 +0,0 @@
package src
// EmptyTrash will permanently delete all trashed files/folders from Yandex Disk
func (c *Client) EmptyTrash() error {
fullURL := RootAddr
fullURL += "/v1/disk/trash/resources"
return c.PerformDelete(fullURL)
}


@@ -1,84 +0,0 @@
package src
//from yadisk
import (
"encoding/json"
"fmt"
"io"
"net/http"
)
// ErrorResponse represents erroneous API response.
// Implements go's built in `error`.
type ErrorResponse struct {
ErrorName string `json:"error"`
Description string `json:"description"`
Message string `json:"message"`
StatusCode int `json:""`
}
func (e *ErrorResponse) Error() string {
return fmt.Sprintf("[%d - %s] %s (%s)", e.StatusCode, e.ErrorName, e.Description, e.Message)
}
// ProccessErrorResponse tries to represent data passed as
// an ErrorResponse object.
func ProccessErrorResponse(data io.Reader) (*ErrorResponse, error) {
dec := json.NewDecoder(data)
var errorResponse ErrorResponse
if err := dec.Decode(&errorResponse); err == io.EOF {
// ok
} else if err != nil {
return nil, err
}
// TODO: check if there is any trash data after JSON and crash if there is.
return &errorResponse, nil
}
// CheckAPIError is a convenient function to turn erroneous
// API response into go error. It closes the Body on error.
func CheckAPIError(resp *http.Response) (err error) {
if resp.StatusCode >= 200 && resp.StatusCode < 400 {
return nil
}
defer CheckClose(resp.Body, &err)
errorResponse, err := ProccessErrorResponse(resp.Body)
if err != nil {
return err
}
errorResponse.StatusCode = resp.StatusCode
return errorResponse
}
// ProccessErrorString tries to represent data passed as
// an ErrorResponse object.
func ProccessErrorString(data string) (*ErrorResponse, error) {
var errorResponse ErrorResponse
if err := json.Unmarshal([]byte(data), &errorResponse); err == nil {
// ok
} else if err != nil {
return nil, err
}
// TODO: check if there is any trash data after JSON and crash if there is.
return &errorResponse, nil
}
// ParseAPIError Parse json error response from API
func (c *Client) ParseAPIError(jsonErr string) (string, error) { //ErrorName
errorResponse, err := ProccessErrorString(jsonErr)
if err != nil {
return err.Error(), err
}
return errorResponse.ErrorName, nil
}


@@ -1,14 +0,0 @@
package src
import "encoding/json"
//DiskClientError struct
type DiskClientError struct {
Description string `json:"Description"`
Code string `json:"Error"`
}
func (e DiskClientError) Error() string {
b, _ := json.Marshal(e)
return string(b)
}


@@ -1,8 +0,0 @@
package src
// FilesResourceListResponse struct is returned by the API for requests.
type FilesResourceListResponse struct {
Items []ResourceInfoResponse `json:"items"`
Limit *uint64 `json:"limit"`
Offset *uint64 `json:"offset"`
}


@@ -1,78 +0,0 @@
package src
import (
"encoding/json"
"strings"
)
// FlatFileListRequest holds the client and request for a FlatFileList request
type FlatFileListRequest struct {
client *Client
HTTPRequest *HTTPRequest
}
// FlatFileListRequestOptions struct - options for request
type FlatFileListRequestOptions struct {
MediaType []MediaType
Limit *uint32
Offset *uint32
Fields []string
PreviewSize *PreviewSize
PreviewCrop *bool
}
// Request returns the underlying HTTP request
func (req *FlatFileListRequest) Request() *HTTPRequest {
return req.HTTPRequest
}
// NewFlatFileListRequest creates a new FlatFileList request
func (c *Client) NewFlatFileListRequest(options ...FlatFileListRequestOptions) *FlatFileListRequest {
var parameters = make(map[string]interface{})
if len(options) > 0 {
opt := options[0]
if opt.Limit != nil {
parameters["limit"] = *opt.Limit
}
if opt.Offset != nil {
parameters["offset"] = *opt.Offset
}
if opt.Fields != nil {
parameters["fields"] = strings.Join(opt.Fields, ",")
}
if opt.PreviewSize != nil {
parameters["preview_size"] = opt.PreviewSize.String()
}
if opt.PreviewCrop != nil {
parameters["preview_crop"] = *opt.PreviewCrop
}
if opt.MediaType != nil {
var strMediaTypes = make([]string, len(opt.MediaType))
for i, t := range opt.MediaType {
strMediaTypes[i] = t.String()
}
parameters["media_type"] = strings.Join(strMediaTypes, ",")
}
}
return &FlatFileListRequest{
client: c,
HTTPRequest: createGetRequest(c, "/resources/files", parameters),
}
}
// Exec runs the FlatFileList request
func (req *FlatFileListRequest) Exec() (*FilesResourceListResponse, error) {
data, err := req.Request().run(req.client)
if err != nil {
return nil, err
}
var info FilesResourceListResponse
err = json.Unmarshal(data, &info)
if err != nil {
return nil, err
}
if cap(info.Items) == 0 {
info.Items = []ResourceInfoResponse{}
}
return &info, nil
}
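
The options struct uses pointer fields so an unset option can be told apart from a zero value; only set fields become query parameters. A self-contained sketch of that pattern (the type mirrors the shape of FlatFileListRequestOptions):

package main

import (
    "fmt"
    "strings"
)

// listOptions mirrors the shape of FlatFileListRequestOptions.
type listOptions struct {
    Limit  *uint32
    Offset *uint32
    Fields []string
}

func buildParams(opt listOptions) map[string]interface{} {
    p := make(map[string]interface{})
    if opt.Limit != nil {
        p["limit"] = *opt.Limit // only set options become parameters
    }
    if opt.Offset != nil {
        p["offset"] = *opt.Offset
    }
    if len(opt.Fields) > 0 {
        p["fields"] = strings.Join(opt.Fields, ",")
    }
    return p
}

func main() {
    limit := uint32(100)
    fmt.Println(buildParams(listOptions{Limit: &limit, Fields: []string{"name", "size"}}))
}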


@@ -1,24 +0,0 @@
package src
// HTTPRequest struct
type HTTPRequest struct {
Method string
Path string
Parameters map[string]interface{}
Headers map[string][]string
}
func createGetRequest(client *Client, path string, params map[string]interface{}) *HTTPRequest {
return createRequest(client, "GET", path, params)
}
func createRequest(client *Client, method string, path string, parameters map[string]interface{}) *HTTPRequest {
var headers = make(map[string][]string)
headers["Authorization"] = []string{"OAuth " + client.token}
return &HTTPRequest{
Method: method,
Path: path,
Parameters: parameters,
Headers: headers,
}
}


@@ -1,7 +0,0 @@
package src
// LastUploadedResourceListResponse struct
type LastUploadedResourceListResponse struct {
Items []ResourceInfoResponse `json:"items"`
Limit *uint64 `json:"limit"`
}


@@ -1,74 +0,0 @@
package src
import (
"encoding/json"
"strings"
)
// LastUploadedResourceListRequest struct
type LastUploadedResourceListRequest struct {
client *Client
HTTPRequest *HTTPRequest
}
// LastUploadedResourceListRequestOptions struct
type LastUploadedResourceListRequestOptions struct {
MediaType []MediaType
Limit *uint32
Fields []string
PreviewSize *PreviewSize
PreviewCrop *bool
}
// Request returns the underlying HTTP request
func (req *LastUploadedResourceListRequest) Request() *HTTPRequest {
return req.HTTPRequest
}
// NewLastUploadedResourceListRequest creates a new LastUploadedResourceList request
func (c *Client) NewLastUploadedResourceListRequest(options ...LastUploadedResourceListRequestOptions) *LastUploadedResourceListRequest {
var parameters = make(map[string]interface{})
if len(options) > 0 {
opt := options[0]
if opt.Limit != nil {
parameters["limit"] = opt.Limit
}
if opt.Fields != nil {
parameters["fields"] = strings.Join(opt.Fields, ",")
}
if opt.PreviewSize != nil {
parameters["preview_size"] = opt.PreviewSize.String()
}
if opt.PreviewCrop != nil {
parameters["preview_crop"] = opt.PreviewCrop
}
if opt.MediaType != nil {
var strMediaTypes = make([]string, len(opt.MediaType))
for i, t := range opt.MediaType {
strMediaTypes[i] = t.String()
}
parameters["media_type"] = strings.Join(strMediaTypes, ",")
}
}
return &LastUploadedResourceListRequest{
client: c,
HTTPRequest: createGetRequest(c, "/resources/last-uploaded", parameters),
}
}
// Exec runs the LastUploadedResourceList request
func (req *LastUploadedResourceListRequest) Exec() (*LastUploadedResourceListResponse, error) {
data, err := req.Request().run(req.client)
if err != nil {
return nil, err
}
var info LastUploadedResourceListResponse
err = json.Unmarshal(data, &info)
if err != nil {
return nil, err
}
if cap(info.Items) == 0 {
info.Items = []ResourceInfoResponse{}
}
return &info, nil
}


@@ -1,144 +0,0 @@
package src
// MediaType struct - media types
type MediaType struct {
mediaType string
}
// Audio - media type
func (m *MediaType) Audio() *MediaType {
return &MediaType{
mediaType: "audio",
}
}
// Backup - media type
func (m *MediaType) Backup() *MediaType {
return &MediaType{
mediaType: "backup",
}
}
// Book - media type
func (m *MediaType) Book() *MediaType {
return &MediaType{
mediaType: "book",
}
}
// Compressed - media type
func (m *MediaType) Compressed() *MediaType {
return &MediaType{
mediaType: "compressed",
}
}
// Data - media type
func (m *MediaType) Data() *MediaType {
return &MediaType{
mediaType: "data",
}
}
// Development - media type
func (m *MediaType) Development() *MediaType {
return &MediaType{
mediaType: "development",
}
}
// Diskimage - media type
func (m *MediaType) Diskimage() *MediaType {
return &MediaType{
mediaType: "diskimage",
}
}
// Document - media type
func (m *MediaType) Document() *MediaType {
return &MediaType{
mediaType: "document",
}
}
// Encoded - media type
func (m *MediaType) Encoded() *MediaType {
return &MediaType{
mediaType: "encoded",
}
}
// Executable - media type
func (m *MediaType) Executable() *MediaType {
return &MediaType{
mediaType: "executable",
}
}
// Flash - media type
func (m *MediaType) Flash() *MediaType {
return &MediaType{
mediaType: "flash",
}
}
// Font - media type
func (m *MediaType) Font() *MediaType {
return &MediaType{
mediaType: "font",
}
}
// Image - media type
func (m *MediaType) Image() *MediaType {
return &MediaType{
mediaType: "image",
}
}
// Settings - media type
func (m *MediaType) Settings() *MediaType {
return &MediaType{
mediaType: "settings",
}
}
// Spreadsheet - media type
func (m *MediaType) Spreadsheet() *MediaType {
return &MediaType{
mediaType: "spreadsheet",
}
}
// Text - media type
func (m *MediaType) Text() *MediaType {
return &MediaType{
mediaType: "text",
}
}
// Unknown - media type
func (m *MediaType) Unknown() *MediaType {
return &MediaType{
mediaType: "unknown",
}
}
// Video - media type
func (m *MediaType) Video() *MediaType {
return &MediaType{
mediaType: "video",
}
}
// Web - media type
func (m *MediaType) Web() *MediaType {
return &MediaType{
mediaType: "web",
}
}
// String - media type
func (m *MediaType) String() string {
return m.mediaType
}
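
Note that these constructors return a fresh value rather than mutating the receiver, so a zero MediaType works as a starting point. A trimmed stand-in showing the calling style:

package main

import "fmt"

// mediaType is a two-method stand-in for MediaType above.
type mediaType struct{ t string }

func (m *mediaType) Audio() *mediaType { return &mediaType{t: "audio"} }
func (m *mediaType) Video() *mediaType { return &mediaType{t: "video"} }
func (m *mediaType) String() string    { return m.t }

func main() {
    var m mediaType // zero value is fine; the methods ignore the receiver
    for _, t := range []*mediaType{m.Audio(), m.Video()} {
        fmt.Println(t)
    }
}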


@@ -1,21 +0,0 @@
package src
import (
"net/url"
)
// Mkdir will make specified folder on Yandex Disk
func (c *Client) Mkdir(remotePath string) (int, string, error) {
values := url.Values{}
values.Add("path", remotePath) // only one current folder will be created. Not all the folders in the path.
urlPath := "/v1/disk/resources?" + values.Encode()
fullURL := RootAddr
if urlPath[:1] != "/" {
fullURL += "/" + urlPath
} else {
fullURL += urlPath
}
return c.PerformMkdir(fullURL)
}


@@ -1,35 +0,0 @@
package src
import (
"io/ioutil"
"net/http"
"github.com/pkg/errors"
)
// PerformDelete does the actual delete via DELETE request.
func (c *Client) PerformDelete(url string) error {
req, err := http.NewRequest("DELETE", url, nil)
if err != nil {
return err
}
//set access token and headers
c.setRequestScope(req)
resp, err := c.HTTPClient.Do(req)
if err != nil {
return err
}
//204 - resource deleted.
//202 - folder not empty, content will be deleted soon (async delete).
if resp.StatusCode != 204 && resp.StatusCode != 202 {
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return err
}
return errors.Errorf("delete error [%d]: %s", resp.StatusCode, string(body))
}
return nil
}


@@ -1,40 +0,0 @@
package src
import (
"io"
"io/ioutil"
"net/http"
"github.com/pkg/errors"
)
// PerformDownload does the actual download via unscoped GET request.
func (c *Client) PerformDownload(url string, headers map[string]string) (out io.ReadCloser, err error) {
req, err := http.NewRequest("GET", url, nil)
if err != nil {
return nil, err
}
// Set any extra headers
for k, v := range headers {
req.Header.Set(k, v)
}
//c.setRequestScope(req)
resp, err := c.HTTPClient.Do(req)
if err != nil {
return nil, err
}
_, isRanging := req.Header["Range"]
if !(resp.StatusCode == http.StatusOK || (isRanging && resp.StatusCode == http.StatusPartialContent)) {
defer CheckClose(resp.Body, &err)
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return nil, err
}
return nil, errors.Errorf("download error [%d]: %s", resp.StatusCode, string(body))
}
return resp.Body, err
}


@@ -1,34 +0,0 @@
package src
import (
"io/ioutil"
"net/http"
"github.com/pkg/errors"
)
// PerformMkdir does the actual mkdir via PUT request.
func (c *Client) PerformMkdir(url string) (int, string, error) {
req, err := http.NewRequest("PUT", url, nil)
if err != nil {
return 0, "", err
}
//set access token and headers
c.setRequestScope(req)
resp, err := c.HTTPClient.Do(req)
if err != nil {
return 0, "", err
}
if resp.StatusCode != 201 {
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return 0, "", err
}
//third parameter is the json error response body
return resp.StatusCode, string(body), errors.Errorf("create folder error [%d]: %s", resp.StatusCode, string(body))
}
return resp.StatusCode, "", nil
}


@@ -1,38 +0,0 @@
package src
//from yadisk
import (
"io"
"io/ioutil"
"net/http"
"github.com/pkg/errors"
)
// PerformUpload does the actual upload via unscoped PUT request.
func (c *Client) PerformUpload(url string, data io.Reader, contentType string) (err error) {
req, err := http.NewRequest("PUT", url, data)
if err != nil {
return err
}
req.Header.Set("Content-Type", contentType)
//c.setRequestScope(req)
resp, err := c.HTTPClient.Do(req)
if err != nil {
return err
}
defer CheckClose(resp.Body, &err)
if resp.StatusCode != 201 {
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return err
}
return errors.Errorf("upload error [%d]: %s", resp.StatusCode, string(body))
}
return nil
}


@@ -1,75 +0,0 @@
package src
import "fmt"
// PreviewSize struct
type PreviewSize struct {
size string
}
// PredefinedSizeS - set preview size
func (s *PreviewSize) PredefinedSizeS() *PreviewSize {
return &PreviewSize{
size: "S",
}
}
// PredefinedSizeM - set preview size
func (s *PreviewSize) PredefinedSizeM() *PreviewSize {
return &PreviewSize{
size: "M",
}
}
// PredefinedSizeL - set preview size
func (s *PreviewSize) PredefinedSizeL() *PreviewSize {
return &PreviewSize{
size: "L",
}
}
// PredefinedSizeXL - set preview size
func (s *PreviewSize) PredefinedSizeXL() *PreviewSize {
return &PreviewSize{
size: "XL",
}
}
// PredefinedSizeXXL - set preview size
func (s *PreviewSize) PredefinedSizeXXL() *PreviewSize {
return &PreviewSize{
size: "XXL",
}
}
// PredefinedSizeXXXL - set preview size
func (s *PreviewSize) PredefinedSizeXXXL() *PreviewSize {
return &PreviewSize{
size: "XXXL",
}
}
// ExactWidth - set preview size
func (s *PreviewSize) ExactWidth(width uint32) *PreviewSize {
return &PreviewSize{
size: fmt.Sprintf("%dx", width),
}
}
// ExactHeight - set preview size
func (s *PreviewSize) ExactHeight(height uint32) *PreviewSize {
return &PreviewSize{
size: fmt.Sprintf("x%d", height),
}
}
// ExactSize - set preview size
func (s *PreviewSize) ExactSize(width uint32, height uint32) *PreviewSize {
return &PreviewSize{
size: fmt.Sprintf("%dx%d", width, height),
}
}
func (s *PreviewSize) String() string {
return s.size
}


@@ -1,19 +0,0 @@
package src
//ResourceInfoResponse struct is returned by the API for metadata requests.
type ResourceInfoResponse struct {
PublicKey string `json:"public_key"`
Name string `json:"name"`
Created string `json:"created"`
CustomProperties map[string]interface{} `json:"custom_properties"`
Preview string `json:"preview"`
PublicURL string `json:"public_url"`
OriginPath string `json:"origin_path"`
Modified string `json:"modified"`
Path string `json:"path"`
Md5 string `json:"md5"`
ResourceType string `json:"type"`
MimeType string `json:"mime_type"`
Size uint64 `json:"size"`
Embedded *ResourceListResponse `json:"_embedded"`
}


@@ -1,45 +0,0 @@
package src
import "encoding/json"
// ResourceInfoRequest struct
type ResourceInfoRequest struct {
client *Client
HTTPRequest *HTTPRequest
}
// Request of ResourceInfoRequest
func (req *ResourceInfoRequest) Request() *HTTPRequest {
return req.HTTPRequest
}
// NewResourceInfoRequest creates a new ResourceInfo request
func (c *Client) NewResourceInfoRequest(path string, options ...ResourceInfoRequestOptions) *ResourceInfoRequest {
return &ResourceInfoRequest{
client: c,
HTTPRequest: createResourceInfoRequest(c, "/resources", path, options...),
}
}
// Exec runs the ResourceInfo request
func (req *ResourceInfoRequest) Exec() (*ResourceInfoResponse, error) {
data, err := req.Request().run(req.client)
if err != nil {
return nil, err
}
var info ResourceInfoResponse
err = json.Unmarshal(data, &info)
if err != nil {
return nil, err
}
if info.CustomProperties == nil {
info.CustomProperties = make(map[string]interface{})
}
if info.Embedded != nil {
if cap(info.Embedded.Items) == 0 {
info.Embedded.Items = []ResourceInfoResponse{}
}
}
return &info, nil
}


@@ -1,33 +0,0 @@
package src
import "strings"
func createResourceInfoRequest(c *Client,
apiPath string,
path string,
options ...ResourceInfoRequestOptions) *HTTPRequest {
var parameters = make(map[string]interface{})
parameters["path"] = path
if len(options) > 0 {
opt := options[0]
if opt.SortMode != nil {
parameters["sort"] = opt.SortMode.String()
}
if opt.Limit != nil {
parameters["limit"] = *opt.Limit
}
if opt.Offset != nil {
parameters["offset"] = *opt.Offset
}
if opt.Fields != nil {
parameters["fields"] = strings.Join(opt.Fields, ",")
}
if opt.PreviewSize != nil {
parameters["preview_size"] = opt.PreviewSize.String()
}
if opt.PreviewCrop != nil {
parameters["preview_crop"] = *opt.PreviewCrop
}
}
return createGetRequest(c, apiPath, parameters)
}


@@ -1,11 +0,0 @@
package src
// ResourceInfoRequestOptions struct
type ResourceInfoRequestOptions struct {
SortMode *SortMode
Limit *uint32
Offset *uint32
Fields []string
PreviewSize *PreviewSize
PreviewCrop *bool
}


@@ -1,12 +0,0 @@
package src
// ResourceListResponse struct
type ResourceListResponse struct {
Sort *SortMode `json:"sort"`
PublicKey string `json:"public_key"`
Items []ResourceInfoResponse `json:"items"`
Path string `json:"path"`
Limit *uint64 `json:"limit"`
Offset *uint64 `json:"offset"`
Total *uint64 `json:"total"`
}


@@ -1,79 +0,0 @@
package src
import "strings"
// SortMode struct - sort mode
type SortMode struct {
mode string
}
// Default - sort mode
func (m *SortMode) Default() *SortMode {
return &SortMode{
mode: "",
}
}
// ByName - sort mode
func (m *SortMode) ByName() *SortMode {
return &SortMode{
mode: "name",
}
}
// ByPath - sort mode
func (m *SortMode) ByPath() *SortMode {
return &SortMode{
mode: "path",
}
}
// ByCreated - sort mode
func (m *SortMode) ByCreated() *SortMode {
return &SortMode{
mode: "created",
}
}
// ByModified - sort mode
func (m *SortMode) ByModified() *SortMode {
return &SortMode{
mode: "modified",
}
}
// BySize - sort mode
func (m *SortMode) BySize() *SortMode {
return &SortMode{
mode: "size",
}
}
// Reverse - sort mode
func (m *SortMode) Reverse() *SortMode {
if strings.HasPrefix(m.mode, "-") {
return &SortMode{
mode: m.mode[1:],
}
}
return &SortMode{
mode: "-" + m.mode,
}
}
func (m *SortMode) String() string {
return m.mode
}
// UnmarshalJSON sort mode
func (m *SortMode) UnmarshalJSON(value []byte) error {
if len(value) == 0 {
m.mode = ""
return nil
}
m.mode = string(value)
if strings.HasPrefix(m.mode, "\"") && strings.HasSuffix(m.mode, "\"") {
m.mode = m.mode[1 : len(m.mode)-1]
}
return nil
}
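
The custom UnmarshalJSON just strips the surrounding quotes so a bare JSON string decodes into the unexported field. A runnable stand-in:

package main

import (
    "encoding/json"
    "fmt"
    "strings"
)

// sortMode is a stand-in for SortMode above.
type sortMode struct{ mode string }

// UnmarshalJSON strips the surrounding quotes from a JSON string.
func (m *sortMode) UnmarshalJSON(value []byte) error {
    m.mode = strings.Trim(string(value), `"`)
    return nil
}

func main() {
    var v struct {
        Sort sortMode `json:"sort"`
    }
    if err := json.Unmarshal([]byte(`{"sort":"-size"}`), &v); err != nil {
        panic(err)
    }
    fmt.Println(v.Sort.mode) // -size
}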


@@ -1,45 +0,0 @@
package src
import "encoding/json"
// TrashResourceInfoRequest struct
type TrashResourceInfoRequest struct {
client *Client
HTTPRequest *HTTPRequest
}
// Request of TrashResourceInfoRequest struct
func (req *TrashResourceInfoRequest) Request() *HTTPRequest {
return req.HTTPRequest
}
// NewTrashResourceInfoRequest creates a new TrashResourceInfo request
func (c *Client) NewTrashResourceInfoRequest(path string, options ...ResourceInfoRequestOptions) *TrashResourceInfoRequest {
return &TrashResourceInfoRequest{
client: c,
HTTPRequest: createResourceInfoRequest(c, "/trash/resources", path, options...),
}
}
// Exec runs the TrashResourceInfo request
func (req *TrashResourceInfoRequest) Exec() (*ResourceInfoResponse, error) {
data, err := req.Request().run(req.client)
if err != nil {
return nil, err
}
var info ResourceInfoResponse
err = json.Unmarshal(data, &info)
if err != nil {
return nil, err
}
if info.CustomProperties == nil {
info.CustomProperties = make(map[string]interface{})
}
if info.Embedded != nil {
if cap(info.Embedded.Items) == 0 {
info.Embedded.Items = []ResourceInfoResponse{}
}
}
return &info, nil
}

backend/yandex/api/types.go Normal file

@@ -0,0 +1,157 @@
package api
import (
"fmt"
"strings"
)
// DiskInfo contains disk metadata
type DiskInfo struct {
TotalSpace int64 `json:"total_space"`
UsedSpace int64 `json:"used_space"`
TrashSize int64 `json:"trash_size"`
}
// ResourceInfoRequestOptions struct
type ResourceInfoRequestOptions struct {
SortMode *SortMode
Limit uint64
Offset uint64
Fields []string
}
//ResourceInfoResponse struct is returned by the API for metadata requests.
type ResourceInfoResponse struct {
PublicKey string `json:"public_key"`
Name string `json:"name"`
Created string `json:"created"`
CustomProperties map[string]interface{} `json:"custom_properties"`
Preview string `json:"preview"`
PublicURL string `json:"public_url"`
OriginPath string `json:"origin_path"`
Modified string `json:"modified"`
Path string `json:"path"`
Md5 string `json:"md5"`
ResourceType string `json:"type"`
MimeType string `json:"mime_type"`
Size int64 `json:"size"`
Embedded *ResourceListResponse `json:"_embedded"`
}
// ResourceListResponse struct
type ResourceListResponse struct {
Sort *SortMode `json:"sort"`
PublicKey string `json:"public_key"`
Items []ResourceInfoResponse `json:"items"`
Path string `json:"path"`
Limit *uint64 `json:"limit"`
Offset *uint64 `json:"offset"`
Total *uint64 `json:"total"`
}
// AsyncInfo struct is returned by the API for various async operations.
type AsyncInfo struct {
HRef string `json:"href"`
Method string `json:"method"`
Templated bool `json:"templated"`
}
// AsyncStatus is returned when requesting the status of an async operation. Possible values: in-progress, success, failure.
type AsyncStatus struct {
Status string `json:"status"`
}
//CustomPropertyResponse is the struct we send to, and receive from, the API for custom property requests.
type CustomPropertyResponse struct {
CustomProperties map[string]interface{} `json:"custom_properties"`
}
// SortMode struct - sort mode
type SortMode struct {
mode string
}
// Default - sort mode
func (m *SortMode) Default() *SortMode {
return &SortMode{
mode: "",
}
}
// ByName - sort mode
func (m *SortMode) ByName() *SortMode {
return &SortMode{
mode: "name",
}
}
// ByPath - sort mode
func (m *SortMode) ByPath() *SortMode {
return &SortMode{
mode: "path",
}
}
// ByCreated - sort mode
func (m *SortMode) ByCreated() *SortMode {
return &SortMode{
mode: "created",
}
}
// ByModified - sort mode
func (m *SortMode) ByModified() *SortMode {
return &SortMode{
mode: "modified",
}
}
// BySize - sort mode
func (m *SortMode) BySize() *SortMode {
return &SortMode{
mode: "size",
}
}
// Reverse - sort mode
func (m *SortMode) Reverse() *SortMode {
if strings.HasPrefix(m.mode, "-") {
return &SortMode{
mode: m.mode[1:],
}
}
return &SortMode{
mode: "-" + m.mode,
}
}
func (m *SortMode) String() string {
return m.mode
}
// UnmarshalJSON sort mode
func (m *SortMode) UnmarshalJSON(value []byte) error {
if len(value) == 0 {
m.mode = ""
return nil
}
m.mode = string(value)
if strings.HasPrefix(m.mode, "\"") && strings.HasSuffix(m.mode, "\"") {
m.mode = m.mode[1 : len(m.mode)-1]
}
return nil
}
// ErrorResponse represents erroneous API response.
// Implements go's built in `error`.
type ErrorResponse struct {
ErrorName string `json:"error"`
Description string `json:"description"`
Message string `json:"message"`
StatusCode int `json:""`
}
func (e *ErrorResponse) Error() string {
return fmt.Sprintf("[%d - %s] %s (%s)", e.StatusCode, e.ErrorName, e.Description, e.Message)
}
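
Since ErrorResponse implements error, a failed API call can be decoded and returned directly. A self-contained sketch with a mirrored type and a hypothetical response body:

package main

import (
    "encoding/json"
    "fmt"
)

// errorResponse mirrors ErrorResponse above.
type errorResponse struct {
    ErrorName   string `json:"error"`
    Description string `json:"description"`
    Message     string `json:"message"`
    StatusCode  int
}

func (e *errorResponse) Error() string {
    return fmt.Sprintf("[%d - %s] %s (%s)", e.StatusCode, e.ErrorName, e.Description, e.Message)
}

func main() {
    body := `{"error":"DiskNotFoundError","description":"Resource not found.","message":"Resource not found."}`
    e := new(errorResponse)
    if err := json.Unmarshal([]byte(body), e); err != nil {
        panic(err)
    }
    e.StatusCode = 404
    var err error = e // *errorResponse satisfies the error interface
    fmt.Println(err)
}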


@@ -1,71 +0,0 @@
package src
//from yadisk
import (
"encoding/json"
"io"
"net/url"
"strconv"
)
// UploadResponse struct is returned by the API for an upload request.
type UploadResponse struct {
HRef string `json:"href"`
Method string `json:"method"`
Templated bool `json:"templated"`
}
// Upload will put specified data to Yandex.Disk.
func (c *Client) Upload(data io.Reader, remotePath string, overwrite bool, contentType string) error {
ur, err := c.UploadRequest(remotePath, overwrite)
if err != nil {
return err
}
return c.PerformUpload(ur.HRef, data, contentType)
}
// UploadRequest will make an upload request and return a URL to upload data to.
func (c *Client) UploadRequest(remotePath string, overwrite bool) (ur *UploadResponse, err error) {
values := url.Values{}
values.Add("path", remotePath)
values.Add("overwrite", strconv.FormatBool(overwrite))
req, err := c.scopedRequest("GET", "/v1/disk/resources/upload?"+values.Encode(), nil)
if err != nil {
return nil, err
}
resp, err := c.HTTPClient.Do(req)
if err != nil {
return nil, err
}
if err := CheckAPIError(resp); err != nil {
return nil, err
}
defer CheckClose(resp.Body, &err)
ur, err = ParseUploadResponse(resp.Body)
if err != nil {
return nil, err
}
return ur, nil
}
// ParseUploadResponse tries to read and parse UploadResponse struct.
func ParseUploadResponse(data io.Reader) (*UploadResponse, error) {
dec := json.NewDecoder(data)
var ur UploadResponse
if err := dec.Decode(&ur); err == io.EOF {
// ok
} else if err != nil {
return nil, err
}
// TODO: check if there is any trash data after JSON and crash if there is.
return &ur, nil
}

File diff suppressed because it is too large.


@@ -63,7 +63,9 @@ var osarches = []string{
// Special environment flags for a given arch
var archFlags = map[string][]string{
"386": {"GO386=387"},
"386": {"GO386=387"},
"mips": {"GOMIPS=softfloat"},
"mipsle": {"GOMIPS=softfloat"},
}
// runEnv - run a shell command with env


@@ -1,4 +1,4 @@
#!/usr/bin/env python
#!/usr/bin/env python2
"""
Make backend documentation
"""


@@ -1,4 +1,4 @@
#!/usr/bin/env python
#!/usr/bin/env python2
"""
Make single page versions of the documentation for release and
conversion into man pages etc.


@@ -4,18 +4,20 @@
set -e
go install
mkdir -p /tmp/rclone_cache_test
mkdir -p /tmp/rclone/cache_test
mkdir -p /tmp/rclone/rc_mount
export RCLONE_CONFIG_RCDOCS_TYPE=cache
export RCLONE_CONFIG_RCDOCS_REMOTE=/tmp/rclone/cache_test
rclone -q --rc mount rcdocs: /mnt/tmp/ &
rclone -q --rc mount rcdocs: /tmp/rclone/rc_mount &
sleep 0.5
rclone rc > /tmp/z.md
fusermount -z -u /mnt/tmp/
rclone rc > /tmp/rclone/z.md
fusermount -u -z /tmp/rclone/rc_mount > /dev/null 2>&1 || umount /tmp/rclone/rc_mount
awk '
BEGIN {p=1}
/^<!--- autogenerated start/ {print;system("cat /tmp/z.md");p=0}
/^<!--- autogenerated start/ {print;system("cat /tmp/rclone/z.md");p=0}
/^<!--- autogenerated stop/ {p=1}
p' docs/content/rc.md > /tmp/rc.md
p' docs/content/rc.md > /tmp/rclone/rc.md
mv /tmp/rc.md docs/content/rc.md
mv /tmp/rclone/rc.md docs/content/rc.md
rm -rf /tmp/rclone

bin/test_independence.go Normal file

@@ -0,0 +1,59 @@
// +build ignore
// Test that the tests in the suite passed in are independent
package main
import (
"flag"
"log"
"os"
"os/exec"
"regexp"
)
var matchLine = regexp.MustCompile(`(?m)^=== RUN\s*(TestIntegration/\S*)\s*$`)
// run the tests in the package passed in and grep out the test names
func findTests(packageToTest string) (tests []string) {
cmd := exec.Command("go", "test", "-v", packageToTest)
out, err := cmd.CombinedOutput()
if err != nil {
_, _ = os.Stderr.Write(out)
log.Fatal(err)
}
results := matchLine.FindAllSubmatch(out, -1)
if results == nil {
log.Fatal("No tests found")
}
for _, line := range results {
tests = append(tests, string(line[1]))
}
return tests
}
// run the test passed in with the -run passed in
func runTest(packageToTest string, testName string) {
cmd := exec.Command("go", "test", "-v", packageToTest, "-run", "^"+testName+"$")
out, err := cmd.CombinedOutput()
if err != nil {
log.Printf("%s FAILED ------------------", testName)
_, _ = os.Stderr.Write(out)
log.Printf("%s FAILED ------------------", testName)
} else {
log.Printf("%s OK", testName)
}
}
func main() {
flag.Parse()
args := flag.Args()
if len(args) != 1 {
log.Fatalf("Syntax: %s <test_to_run>", os.Args[0])
}
packageToTest := args[0]
testNames := findTests(packageToTest)
// fmt.Printf("%s\n", testNames)
for _, testName := range testNames {
runTest(packageToTest, testName)
}
}
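
The matchLine regexp does the heavy lifting: it pulls nested test names out of go test -v output so each can be re-run on its own (invoked with a package path, for example: go run bin/test_independence.go ./backend/local). A runnable sketch of the extraction against canned output:

package main

import (
    "fmt"
    "regexp"
)

var matchLine = regexp.MustCompile(`(?m)^=== RUN\s*(TestIntegration/\S*)\s*$`)

func main() {
    out := "=== RUN   TestIntegration/FsMkdir\n" +
        "=== RUN   TestIntegration/FsMkdir/FsPutFiles\n" +
        "--- PASS: TestIntegration/FsMkdir (0.01s)\n"
    for _, m := range matchLine.FindAllStringSubmatch(out, -1) {
        fmt.Println(m[1]) // prints each nested test name
    }
}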


@@ -43,6 +43,7 @@ import (
_ "github.com/ncw/rclone/cmd/purge"
_ "github.com/ncw/rclone/cmd/rc"
_ "github.com/ncw/rclone/cmd/rcat"
_ "github.com/ncw/rclone/cmd/rcd"
_ "github.com/ncw/rclone/cmd/reveal"
_ "github.com/ncw/rclone/cmd/rmdir"
_ "github.com/ncw/rclone/cmd/rmdirs"


@@ -29,8 +29,8 @@ import (
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/rclone/fs/fspath"
fslog "github.com/ncw/rclone/fs/log"
"github.com/ncw/rclone/fs/rc"
"github.com/ncw/rclone/fs/rc/rcflags"
"github.com/ncw/rclone/fs/rc/rcserver"
"github.com/ncw/rclone/lib/atexit"
"github.com/pkg/errors"
"github.com/spf13/cobra"
@@ -352,8 +352,11 @@ func initConfig() {
// Write the args for debug purposes
fs.Debugf("rclone", "Version %q starting with parameters %q", fs.Version, os.Args)
// Start the remote control if configured
rc.Start(&rcflags.Opt)
// Start the remote control server if configured
_, err = rcserver.Start(&rcflags.Opt)
if err != nil {
log.Fatalf("Failed to start remote control: %v", err)
}
// Setup CPU profiling if desired
if *cpuProfile != "" {


@@ -53,7 +53,6 @@ func mountOptions(device string, mountpoint string) (options []string) {
// OSX options
if runtime.GOOS == "darwin" {
options = append(options, "-o", "volname="+mountlib.VolumeName)
if mountlib.NoAppleDouble {
options = append(options, "-o", "noappledouble")
}
@@ -70,6 +69,11 @@ func mountOptions(device string, mountpoint string) (options []string) {
options = append(options, "--FileSystemName=rclone")
}
if runtime.GOOS == "darwin" || runtime.GOOS == "windows" {
if mountlib.VolumeName != "" {
options = append(options, "-o", "volname="+mountlib.VolumeName)
}
}
if mountlib.AllowNonEmpty {
options = append(options, "-o", "nonempty")
}


@@ -1,8 +1,11 @@
package config
import (
"errors"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/rc"
"github.com/spf13/cobra"
)
@@ -93,7 +96,16 @@ you would do:
`,
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(2, 256, command, args)
return config.CreateRemote(args[0], args[1], args[2:])
in, err := argsToMap(args[2:])
if err != nil {
return err
}
err = config.CreateRemote(args[0], args[1], in)
if err != nil {
return err
}
config.ShowRemote(args[0])
return nil
},
}
@@ -110,7 +122,16 @@ For example to update the env_auth field of a remote of name myremote you would
`,
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(3, 256, command, args)
return config.UpdateRemote(args[0], args[1:])
in, err := argsToMap(args[1:])
if err != nil {
return err
}
err = config.UpdateRemote(args[0], in)
if err != nil {
return err
}
config.ShowRemote(args[0])
return nil
},
}
@@ -136,6 +157,29 @@ For example to set password of a remote of name myremote you would do:
`,
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(3, 256, command, args)
return config.PasswordRemote(args[0], args[1:])
in, err := argsToMap(args[1:])
if err != nil {
return err
}
err = config.PasswordRemote(args[0], in)
if err != nil {
return err
}
config.ShowRemote(args[0])
return nil
},
}
// This takes a list of arguments in key value key value form and
// converts it into a map
func argsToMap(args []string) (out rc.Params, err error) {
if len(args)%2 != 0 {
return nil, errors.New("found key without value")
}
out = rc.Params{}
// Set the config
for i := 0; i < len(args); i += 2 {
out[args[i]] = args[i+1]
}
return out, nil
}
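
So a command like "rclone config create myremote s3 env_auth true" (flag names here are examples) feeds the trailing arguments through this helper in pairs. A standalone re-implementation, minus the rc.Params type:

package main

import (
    "errors"
    "fmt"
)

// pairsToMap mirrors argsToMap above: key value pairs become a map.
func pairsToMap(args []string) (map[string]string, error) {
    if len(args)%2 != 0 {
        return nil, errors.New("found key without value")
    }
    out := make(map[string]string)
    for i := 0; i < len(args); i += 2 {
        out[args[i]] = args[i+1]
    }
    return out, nil
}

func main() {
    m, err := pairsToMap([]string{"env_auth", "true", "region", "us-east-1"})
    fmt.Println(m, err)
    _, err = pairsToMap([]string{"orphan"})
    fmt.Println(err) // found key without value
}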


@@ -50,6 +50,8 @@ If you are familiar with ` + "`rsync`" + `, rclone always works as if you had
written a trailing / - meaning "copy the contents of this directory".
This applies to all commands and whether you are talking about the
source or destination.
**Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view real-time transfer statistics
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)


@@ -40,6 +40,8 @@ This will:
This doesn't transfer unchanged files, testing by size and
modification time or MD5SUM. It doesn't delete files from the
destination.
**Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view real-time transfer statistics
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)


@@ -1,9 +1,6 @@
package copyurl
import (
"net/http"
"time"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs/operations"
"github.com/spf13/cobra"
@@ -25,14 +22,7 @@ without saving it in tmp storage.
fsdst, dstFileName := cmd.NewFsDstFile(args[1:])
cmd.Run(true, true, command, func() error {
resp, err := http.Get(args[0])
if err != nil {
return err
}
_, err = operations.RcatSize(fsdst, dstFileName, resp.Body, resp.ContentLength, time.Now())
_, err := operations.CopyURL(fsdst, dstFileName, args[0])
return err
})
},


@@ -14,9 +14,13 @@ var commandDefintion = &cobra.Command{
Use: "delete remote:path",
Short: `Remove the contents of path.`,
Long: `
Remove the contents of path. Unlike ` + "`" + `purge` + "`" + ` it obeys include/exclude
Remove the files in path. Unlike ` + "`" + `purge` + "`" + ` it obeys include/exclude
filters so can be used to selectively delete files.
` + "`" + `rclone delete` + "`" + ` only deletes objects but leaves the directory structure
alone. If you want to delete a directory and all of its contents use
` + "`" + `rclone purge` + "`" + `
Eg delete all files bigger than 100MBytes
Check what would be deleted first (use either)


@@ -138,6 +138,7 @@ func (r *results) checkChar(c rune) {
escape := false
if err != nil {
fs.Infof(r.f, "Couldn't write file 0x%02X", c)
escape = true
} else {
fs.Infof(r.f, "OK writing file 0x%02X", c)
}


@@ -3,62 +3,26 @@ package lsjson
import (
"encoding/json"
"fmt"
"log"
"os"
"path"
"time"
"github.com/ncw/rclone/backend/crypt"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/cmd/ls/lshelp"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/operations"
"github.com/ncw/rclone/fs/walk"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
var (
recurse bool
showHash bool
showEncrypted bool
showOrigIDs bool
noModTime bool
opt operations.ListJSONOpt
)
func init() {
cmd.Root.AddCommand(commandDefintion)
commandDefintion.Flags().BoolVarP(&recurse, "recursive", "R", false, "Recurse into the listing.")
commandDefintion.Flags().BoolVarP(&showHash, "hash", "", false, "Include hashes in the output (may take longer).")
commandDefintion.Flags().BoolVarP(&noModTime, "no-modtime", "", false, "Don't read the modification time (can speed things up).")
commandDefintion.Flags().BoolVarP(&showEncrypted, "encrypted", "M", false, "Show the encrypted names.")
commandDefintion.Flags().BoolVarP(&showOrigIDs, "original", "", false, "Show the ID of the underlying Object.")
}
// lsJSON is the struct which gets marshalled for each line
type lsJSON struct {
Path string
Name string
Encrypted string `json:",omitempty"`
Size int64
MimeType string `json:",omitempty"`
ModTime Timestamp //`json:",omitempty"`
IsDir bool
Hashes map[string]string `json:",omitempty"`
ID string `json:",omitempty"`
OrigID string `json:",omitempty"`
}
// Timestamp is a time in RFC3339 format with nanosecond precision
type Timestamp time.Time
// MarshalJSON turns a Timestamp into JSON
func (t Timestamp) MarshalJSON() (out []byte, err error) {
tt := time.Time(t)
if tt.IsZero() {
return []byte(`""`), nil
}
return []byte(`"` + tt.Format(time.RFC3339Nano) + `"`), nil
commandDefintion.Flags().BoolVarP(&opt.Recurse, "recursive", "R", false, "Recurse into the listing.")
commandDefintion.Flags().BoolVarP(&opt.ShowHash, "hash", "", false, "Include hashes in the output (may take longer).")
commandDefintion.Flags().BoolVarP(&opt.NoModTime, "no-modtime", "", false, "Don't read the modification time (can speed things up).")
commandDefintion.Flags().BoolVarP(&opt.ShowEncrypted, "encrypted", "M", false, "Show the encrypted names.")
commandDefintion.Flags().BoolVarP(&opt.ShowOrigIDs, "original", "", false, "Show the ID of the underlying Object.")
}
var commandDefintion = &cobra.Command{
@@ -104,107 +68,27 @@ can be processed line by line as each item is written one to a line.
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
var cipher crypt.Cipher
if showEncrypted {
fsInfo, _, _, config, err := fs.ConfigFs(args[0])
if err != nil {
log.Fatalf(err.Error())
}
if fsInfo.Name != "crypt" {
log.Fatalf("The remote needs to be of type \"crypt\"")
}
cipher, err = crypt.NewCipher(config)
if err != nil {
log.Fatalf(err.Error())
}
}
cmd.Run(false, false, command, func() error {
fmt.Println("[")
first := true
err := walk.Walk(fsrc, "", false, operations.ConfigMaxDepth(recurse), func(dirPath string, entries fs.DirEntries, err error) error {
err := operations.ListJSON(fsrc, "", &opt, func(item *operations.ListJSONItem) error {
out, err := json.Marshal(item)
if err != nil {
fs.CountError(err)
fs.Errorf(dirPath, "error listing: %v", err)
return nil
return errors.Wrap(err, "failed to marshal list object")
}
for _, entry := range entries {
item := lsJSON{
Path: entry.Remote(),
Name: path.Base(entry.Remote()),
Size: entry.Size(),
MimeType: fs.MimeTypeDirEntry(entry),
}
if !noModTime {
item.ModTime = Timestamp(entry.ModTime())
}
if cipher != nil {
switch entry.(type) {
case fs.Directory:
item.Encrypted = cipher.EncryptDirName(path.Base(entry.Remote()))
case fs.Object:
item.Encrypted = cipher.EncryptFileName(path.Base(entry.Remote()))
default:
fs.Errorf(nil, "Unknown type %T in listing", entry)
}
}
if do, ok := entry.(fs.IDer); ok {
item.ID = do.ID()
}
if showOrigIDs {
cur := entry
for {
u, ok := cur.(fs.ObjectUnWrapper)
if !ok {
break // not a wrapped object, use current id
}
next := u.UnWrap()
if next == nil {
break // no base object found, use current id
}
cur = next
}
if do, ok := cur.(fs.IDer); ok {
item.OrigID = do.ID()
}
}
switch x := entry.(type) {
case fs.Directory:
item.IsDir = true
case fs.Object:
item.IsDir = false
if showHash {
item.Hashes = make(map[string]string)
for _, hashType := range x.Fs().Hashes().Array() {
hash, err := x.Hash(hashType)
if err != nil {
fs.Errorf(x, "Failed to read hash: %v", err)
} else if hash != "" {
item.Hashes[hashType.String()] = hash
}
}
}
default:
fs.Errorf(nil, "Unknown type %T in listing", entry)
}
out, err := json.Marshal(item)
if err != nil {
return errors.Wrap(err, "failed to marshal list object")
}
if first {
first = false
} else {
fmt.Print(",\n")
}
_, err = os.Stdout.Write(out)
if err != nil {
return errors.Wrap(err, "failed to write to output")
}
if first {
first = false
} else {
fmt.Print(",\n")
}
_, err = os.Stdout.Write(out)
if err != nil {
return errors.Wrap(err, "failed to write to output")
}
return nil
})
if err != nil {
return errors.Wrap(err, "error listing JSON")
return err
}
if !first {
fmt.Println()


@@ -147,7 +147,7 @@ systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone ` + commandName + `
can't use retries in the same way without making local copies of the
uploads. Look at the [file caching](#file-caching)
for solutions to make ` + commandName + ` mount more reliable.
for solutions to make ` + commandName + ` more reliable.
### Attribute caching


@@ -39,6 +39,8 @@ If you want to delete empty source directories after move, use the --delete-empt
**Important**: Since this can cause data loss, test first with the
--dry-run flag.
**Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view real-time transfer statistics.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)


@@ -43,6 +43,8 @@ transfer.
**Important**: Since this can cause data loss, test first with the
--dry-run flag.
**Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view real-time transfer statistics.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)


@@ -13,6 +13,7 @@ import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/cmd/ncdu/scan"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/operations"
termbox "github.com/nsf/termbox-go"
"github.com/pkg/errors"
"github.com/spf13/cobra"
@@ -42,8 +43,11 @@ Here are the keys - press '?' to toggle the help on and off
` + strings.Join(helpText[1:], "\n ") + `
This is an homage to the [ncdu tool](https://dev.yorhel.nl/ncdu) but for
rclone remotes. It is missing lots of features at the moment, most
importantly deleting files, but is useful as it stands.
rclone remotes. It is missing lots of features at the moment
but is useful as it stands.
Note that it might take some time to delete big files/folders. The
UI won't respond in the meantime since the deletion is done synchronously.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
@@ -63,6 +67,7 @@ var helpText = []string{
" c toggle counts",
" g toggle graph",
" n,s,C sort by name,size,count",
" d delete file/directory",
" ^L refresh screen",
" ? to toggle help on and off",
" q/ESC/c-C to quit",
@@ -70,24 +75,27 @@ var helpText = []string{
// UI contains the state of the user interface
type UI struct {
f fs.Fs // fs being displayed
fsName string // human name of Fs
root *scan.Dir // root directory
d *scan.Dir // current directory being displayed
path string // path of current directory
showBox bool // whether to show a box
boxText []string // text to show in box
entries fs.DirEntries // entries of current directory
sortPerm []int // order to display entries in after sorting
invSortPerm []int // inverse order
dirListHeight int // height of listing
listing bool // whether listing is in progress
showGraph bool // toggle showing graph
showCounts bool // toggle showing counts
sortByName int8 // +1 for normal, 0 for off, -1 for reverse
sortBySize int8
sortByCount int8
dirPosMap map[string]dirPos // store for directory positions
f fs.Fs // fs being displayed
fsName string // human name of Fs
root *scan.Dir // root directory
d *scan.Dir // current directory being displayed
path string // path of current directory
showBox bool // whether to show a box
boxText []string // text to show in box
boxMenu []string // box menu options
boxMenuButton int
boxMenuHandler func(fs fs.Fs, path string, option int) (string, error)
entries fs.DirEntries // entries of current directory
sortPerm []int // order to display entries in after sorting
invSortPerm []int // inverse order
dirListHeight int // height of listing
listing bool // whether listing is in progress
showGraph bool // toggle showing graph
showCounts bool // toggle showing counts
sortByName int8 // +1 for normal, 0 for off, -1 for reverse
sortBySize int8
sortByCount int8
dirPosMap map[string]dirPos // store for directory positions
}
// Where we have got to in the directory listing
@@ -130,6 +138,54 @@ func Linef(x, y, xmax int, fg, bg termbox.Attribute, spacer rune, format string,
Line(x, y, xmax, fg, bg, spacer, s)
}
// LineOptions Print line of selectable options
func LineOptions(x, y, xmax int, fg, bg termbox.Attribute, options []string, selected int) {
defaultBg := bg
defaultFg := fg
// Print left+right whitespace to center the options
xoffset := ((xmax - x) - lineOptionLength(options)) / 2
for j := x; j < x+xoffset; j++ {
termbox.SetCell(j, y, ' ', fg, bg)
}
for j := xmax - xoffset; j < xmax; j++ {
termbox.SetCell(j, y, ' ', fg, bg)
}
x += xoffset
for i, o := range options {
termbox.SetCell(x, y, ' ', fg, bg)
if i == selected {
bg = termbox.ColorBlack
fg = termbox.ColorWhite
}
termbox.SetCell(x+1, y, '<', fg, bg)
x += 2
// print option text
for _, c := range o {
termbox.SetCell(x, y, c, fg, bg)
x++
}
termbox.SetCell(x, y, '>', fg, bg)
bg = defaultBg
fg = defaultFg
termbox.SetCell(x+1, y, ' ', fg, bg)
x += 2
}
}
func lineOptionLength(o []string) int {
count := 0
for _, i := range o {
count += len(i)
}
return count + 4*len(o) // spacer and arrows <entry>
}
// Box the u.boxText onto the screen
func (u *UI) Box() {
w, h := termbox.Size()
@@ -147,6 +203,15 @@ func (u *UI) Box() {
x := (w - boxWidth) / 2
y := (h - boxHeight) / 2
xmax := x + boxWidth
if len(u.boxMenu) != 0 {
count := lineOptionLength(u.boxMenu)
if x+boxWidth > x+count {
xmax = x + boxWidth
} else {
xmax = x + count
}
}
ymax := y + len(u.boxText)
// draw text
fg, bg := termbox.ColorRed, termbox.ColorWhite
@@ -155,7 +220,43 @@ func (u *UI) Box() {
fg = termbox.ColorBlack
}
// FIXME draw a box around
if len(u.boxMenu) != 0 {
ymax++
LineOptions(x, ymax-1, xmax, fg, bg, u.boxMenu, u.boxMenuButton)
}
// draw top border
for i := y; i < ymax; i++ {
termbox.SetCell(x-1, i, '│', fg, bg)
termbox.SetCell(xmax, i, '│', fg, bg)
}
for j := x; j < xmax; j++ {
termbox.SetCell(j, y-1, '─', fg, bg)
termbox.SetCell(j, ymax, '─', fg, bg)
}
termbox.SetCell(x-1, y-1, '┌', fg, bg)
termbox.SetCell(xmax, y-1, '┐', fg, bg)
termbox.SetCell(x-1, ymax, '└', fg, bg)
termbox.SetCell(xmax, ymax, '┘', fg, bg)
}
func (u *UI) moveBox(to int) {
if len(u.boxMenu) == 0 {
return
}
if to > 0 { // move right
u.boxMenuButton++
} else { // move left
u.boxMenuButton--
}
if u.boxMenuButton >= len(u.boxMenu) {
u.boxMenuButton = len(u.boxMenu) - 1
} else if u.boxMenuButton < 0 {
u.boxMenuButton = 0
}
}
// find the biggest entry in the current listing
@@ -314,6 +415,50 @@ func (u *UI) move(d int) {
u.dirPosMap[u.path] = dirPos
}
func (u *UI) removeEntry(pos int) {
u.d.Remove(pos)
u.setCurrentDir(u.d)
}
// delete the entry at the current position
func (u *UI) delete() {
dirPos := u.sortPerm[u.dirPosMap[u.path].entry]
entry := u.entries[dirPos]
u.boxMenu = []string{"cancel", "confirm"}
if obj, isFile := entry.(fs.Object); isFile {
u.boxMenuHandler = func(f fs.Fs, p string, o int) (string, error) {
if o != 1 {
return "Aborted!", nil
}
err := operations.DeleteFile(obj)
if err != nil {
return "", err
}
u.removeEntry(dirPos)
return "Successfully deleted file!", nil
}
u.popupBox([]string{
"Delete this file?",
u.fsName + entry.String()})
} else {
u.boxMenuHandler = func(f fs.Fs, p string, o int) (string, error) {
if o != 1 {
return "Aborted!", nil
}
err := operations.Purge(f, entry.String())
if err != nil {
return "", err
}
u.removeEntry(dirPos)
return "Successfully purged folder!", nil
}
u.popupBox([]string{
"Purge this directory?",
"ALL files in it will be deleted",
u.fsName + entry.String()})
}
}
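
The delete flow is a small state machine: the confirm menu stores a closure, and handleBoxOption later invokes it with the chosen button. A stripped-down stand-in of that handler pattern, with a hypothetical entry name:

package main

import "fmt"

// menu is a stand-in for the boxMenu/boxMenuHandler pair above.
type menu struct {
    options []string
    handler func(option int) (string, error)
}

func main() {
    name := "file.txt" // hypothetical entry
    m := menu{
        options: []string{"cancel", "confirm"},
        handler: func(option int) (string, error) {
            if option != 1 {
                return "Aborted!", nil
            }
            // a real handler would call operations.DeleteFile here
            return "Successfully deleted " + name + "!", nil
        },
    }
    for option := range m.options {
        msg, _ := m.handler(option)
        fmt.Println(m.options[option], "->", msg)
    }
}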
// Sort by the configured sort method
type ncduSort struct {
sortPerm []int
@@ -405,6 +550,25 @@ func (u *UI) enter() {
u.setCurrentDir(d)
}
// handles a box option that was selected
func (u *UI) handleBoxOption() {
msg, err := u.boxMenuHandler(u.f, u.path, u.boxMenuButton)
// reset
u.boxMenuButton = 0
u.boxMenu = []string{}
u.boxMenuHandler = nil
if err != nil {
u.popupBox([]string{
"error:",
err.Error(),
})
return
}
u.popupBox([]string{"Finished:", msg})
}
// up goes up to the parent directory
func (u *UI) up() {
if u.d == nil {
@@ -524,8 +688,22 @@ outer:
case termbox.KeyPgup, '=', '+':
u.move(-u.dirListHeight)
case termbox.KeyArrowLeft, 'h':
if u.showBox {
u.moveBox(-1)
break
}
u.up()
case termbox.KeyArrowRight, 'l', termbox.KeyEnter:
case termbox.KeyEnter:
if len(u.boxMenu) > 0 {
u.handleBoxOption()
break
}
u.enter()
case termbox.KeyArrowRight, 'l':
if u.showBox {
u.moveBox(1)
break
}
u.enter()
case 'c':
u.showCounts = !u.showCounts
@@ -537,6 +715,8 @@ outer:
u.toggleSort(&u.sortBySize)
case 'C':
u.toggleSort(&u.sortByCount)
case 'd':
u.delete()
case '?':
u.togglePopupBox(helpText)


@@ -70,6 +70,45 @@ func (d *Dir) Entries() fs.DirEntries {
return append(fs.DirEntries(nil), d.entries...)
}
// Remove removes the i-th entry from the
// in-memory representation of the remote directory
func (d *Dir) Remove(i int) {
d.mu.Lock()
defer d.mu.Unlock()
d.remove(i)
}
// removes the i-th entry from the
// in-memory representation of the remote directory
//
// Call with d.mu held
func (d *Dir) remove(i int) {
size := d.entries[i].Size()
count := int64(1)
subDir, ok := d.getDir(i)
if ok {
size = subDir.size
count = subDir.count
delete(d.dirs, path.Base(subDir.path))
}
d.size -= size
d.count -= count
d.entries = append(d.entries[:i], d.entries[i+1:]...)
dir := d
// populate changed size and count to parent(s)
for parent := d.parent; parent != nil; parent = parent.parent {
parent.mu.Lock()
parent.dirs[path.Base(dir.path)] = dir
parent.size -= size
parent.count -= count
dir = parent
parent.mu.Unlock()
}
}
// gets the directory of the i-th entry
//
// returns nil if it is a file


@@ -51,6 +51,7 @@ func startProgress() func() {
printProgress("")
case <-stopStats:
ticker.Stop()
printProgress("")
fs.LogPrint = oldLogPrint
fmt.Println("")
return


@@ -19,31 +19,50 @@ import (
)
var (
noOutput = false
url = "http://localhost:5572/"
noOutput = false
url = "http://localhost:5572/"
jsonInput = ""
authUser = ""
authPass = ""
)
func init() {
cmd.Root.AddCommand(commandDefintion)
commandDefintion.Flags().BoolVarP(&noOutput, "no-output", "", noOutput, "If set don't output the JSON result.")
commandDefintion.Flags().StringVarP(&url, "url", "", url, "URL to connect to rclone remote control.")
commandDefintion.Flags().StringVarP(&jsonInput, "json", "", jsonInput, "Input JSON - use instead of key=value args.")
commandDefintion.Flags().StringVarP(&authUser, "user", "", "", "Username to use to rclone remote control.")
commandDefintion.Flags().StringVarP(&authPass, "pass", "", "", "Password to use to connect to rclone remote control.")
}
var commandDefintion = &cobra.Command{
Use: "rc commands parameter",
Short: `Run a command against a running rclone.`,
Long: `
This runs a command against a running rclone. By default it will use
that specified in the --rc-addr command.
This runs a command against a running rclone. Use the --url flag to
specify a non-default URL to connect to. This can be either a
":port" which is taken to mean "http://localhost:port" or a
"host:port" which is taken to mean "http://host:port"
A username and password can be passed in with --user and --pass.
Note that --rc-addr, --rc-user and --rc-pass will also be read and used
as defaults for --url, --user and --pass.
Arguments should be passed in as parameter=value.
The result will be returned as a JSON object by default.
The --json parameter can be used to pass in a JSON blob as an input
instead of key=value arguments. This is the only way of passing in
more complicated values.
Use "rclone rc" to see a list of all possible commands.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 1E9, command, args)
cmd.Run(false, false, command, func() error {
parseFlags()
if len(args) == 0 {
return list()
}
@@ -52,30 +71,56 @@ Use "rclone rc" to see a list of all possible commands.`,
},
}
// Parse the flags
func parseFlags() {
// set alternates from alternate flags
setAlternateFlag("rc-addr", &url)
setAlternateFlag("rc-user", &authUser)
setAlternateFlag("rc-pass", &authPass)
// If url is just :port then fix it up
if strings.HasPrefix(url, ":") {
url = "localhost" + url
}
// if url is just host:port add http://
if !strings.HasPrefix(url, "http:") && !strings.HasPrefix(url, "https:") {
url = "http://" + url
}
// if url doesn't end with / add it
if !strings.HasSuffix(url, "/") {
url += "/"
}
}
// If the user set flagName set the output to its value
func setAlternateFlag(flagName string, output *string) {
if rcFlag := pflag.Lookup(flagName); rcFlag != nil && rcFlag.Changed {
*output = rcFlag.Value.String()
}
}
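
Put together, these fix-ups accept ":port", "host:port" or a full URL. A runnable condensation of the same logic:

package main

import (
    "fmt"
    "strings"
)

// normalizeURL condenses the fix-ups from parseFlags above.
func normalizeURL(url string) string {
    if strings.HasPrefix(url, ":") { // ":5572" -> "localhost:5572"
        url = "localhost" + url
    }
    if !strings.HasPrefix(url, "http:") && !strings.HasPrefix(url, "https:") {
        url = "http://" + url
    }
    if !strings.HasSuffix(url, "/") {
        url += "/"
    }
    return url
}

func main() {
    for _, u := range []string{":5572", "example.com:5572", "https://example.com/"} {
        fmt.Printf("%q -> %q\n", u, normalizeURL(u))
    }
}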
// do a call from (path, in) to (out, err).
//
// if err is set, out may be a valid error return or it may be nil
func doCall(path string, in rc.Params) (out rc.Params, err error) {
// Do HTTP request
client := fshttp.NewClient(fs.Config)
url := url
// set the user use --rc-addr as well as --url
if rcAddrFlag := pflag.Lookup("rc-addr"); rcAddrFlag != nil && rcAddrFlag.Changed {
url = rcAddrFlag.Value.String()
if strings.HasPrefix(url, ":") {
url = "localhost" + url
}
url = "http://" + url + "/"
}
if !strings.HasSuffix(url, "/") {
url += "/"
}
url += path
data, err := json.Marshal(in)
if err != nil {
return nil, errors.Wrap(err, "failed to encode JSON")
}
resp, err := client.Post(url, "application/json", bytes.NewBuffer(data))
req, err := http.NewRequest("POST", url, bytes.NewBuffer(data))
if err != nil {
return nil, errors.Wrap(err, "failed to make request")
}
req.Header.Set("Content-Type", "application/json")
if authUser != "" || authPass != "" {
req.SetBasicAuth(authUser, authPass)
}
resp, err := client.Do(req)
if err != nil {
return nil, errors.Wrap(err, "connection failed")
}
@@ -115,13 +160,24 @@ func run(args []string) (err error) {
// parse input
in := make(rc.Params)
for _, param := range args[1:] {
equals := strings.IndexRune(param, '=')
if equals < 0 {
return errors.Errorf("No '=' found in parameter %q", param)
params := args[1:]
if jsonInput == "" {
for _, param := range params {
equals := strings.IndexRune(param, '=')
if equals < 0 {
return errors.Errorf("no '=' found in parameter %q", param)
}
key, value := param[:equals], param[equals+1:]
in[key] = value
}
} else {
if len(params) > 0 {
return errors.New("can't use --json and parameters together")
}
err = json.Unmarshal([]byte(jsonInput), &in)
if err != nil {
return errors.Wrap(err, "bad --json input")
}
}
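The two input forms accepted above produce the same parameter map. A self-contained sketch, using a plain map in place of rc.Params and illustrative parameter names:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// key=value arguments, e.g. `rclone rc operations/mkdir fs=remote: remote=dir`
	in := make(map[string]interface{})
	for _, param := range []string{"fs=remote:", "remote=dir"} {
		equals := strings.IndexRune(param, '=')
		if equals < 0 {
			panic("no '=' found in parameter") // rc.go returns an error here
		}
		in[param[:equals]] = param[equals+1:]
	}
	// the same input as a JSON blob, e.g. `rclone rc --json '{"fs":"remote:","remote":"dir"}' ...`
	var fromJSON map[string]interface{}
	if err := json.Unmarshal([]byte(`{"fs":"remote:","remote":"dir"}`), &fromJSON); err != nil {
		panic(err)
	}
	fmt.Println(in)       // map with fs=remote: and remote=dir
	fmt.Println(fromJSON) // same keys and values
}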
// Do the call
@@ -155,6 +211,11 @@ func list() error {
}
fmt.Printf("### %s: %s\n\n", info["Path"], info["Title"])
fmt.Printf("%s\n\n", info["Help"])
if authRequired := info["AuthRequired"]; authRequired != nil {
if authRequired.(bool) {
fmt.Printf("Authentication is required for this call.\n\n")
}
}
}
return nil
}

cmd/rcd/rcd.go (new file)

@@ -0,0 +1,49 @@
package rcd
import (
"log"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs/rc/rcflags"
"github.com/ncw/rclone/fs/rc/rcserver"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(commandDefintion)
}
var commandDefintion = &cobra.Command{
Use: "rcd <path to files to serve>*",
Short: `Run rclone listening to remote control commands only.`,
Long: `
This runs rclone so that it only listens to remote control commands.
This is useful if you are controlling rclone via the rc API.
If you pass in a path to a directory, rclone will serve that directory
for GET requests on the URL passed in. It will also open the URL in
the browser when rclone is run.
See the [rc documentation](/rc/) for more info on the rc flags.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 1, command, args)
if rcflags.Opt.Enabled {
log.Fatalf("Don't supply --rc flag when using rcd")
}
// Start the rc
rcflags.Opt.Enabled = true
if len(args) > 0 {
rcflags.Opt.Files = args[0]
}
s, err := rcserver.Start(&rcflags.Opt)
if err != nil {
log.Fatalf("Failed to start remote control: %v", err)
}
if s == nil {
log.Fatal("rc server not configured")
}
s.Wait()
},
}
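As a usage note (values illustrative): `rclone rcd --rc-user=me --rc-pass=secret /opt/files` starts the server and serves the files under /opt/files for GET requests on the rc port, which defaults to the --rc-addr of localhost:5572.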


@@ -3,6 +3,7 @@ package ftpflags
import (
"github.com/ncw/rclone/cmd/serve/ftp/ftpopt"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/rc"
"github.com/spf13/pflag"
)
@@ -13,6 +14,7 @@ var (
// AddFlagsPrefix adds flags for the ftpopt
func AddFlagsPrefix(flagSet *pflag.FlagSet, prefix string, Opt *ftpopt.Options) {
rc.AddOption("ftp", &Opt)
flags.StringVarP(flagSet, &Opt.ListenAddr, prefix+"addr", "", Opt.ListenAddr, "IPaddress:Port or :Port to bind server to.")
flags.StringVarP(flagSet, &Opt.PassivePorts, prefix+"passive-port", "", Opt.PassivePorts, "Passive port range to use.")
flags.StringVarP(flagSet, &Opt.BasicUser, prefix+"user", "", Opt.BasicUser, "User name for authentication.")


@@ -1,8 +1,6 @@
package http
import (
"fmt"
"html/template"
"net/http"
"os"
"path"
@@ -12,9 +10,9 @@ import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/cmd/serve/httplib"
"github.com/ncw/rclone/cmd/serve/httplib/httpflags"
"github.com/ncw/rclone/cmd/serve/httplib/serve"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/lib/rest"
"github.com/ncw/rclone/vfs"
"github.com/ncw/rclone/vfs/vfsflags"
"github.com/spf13/cobra"
@@ -46,7 +44,11 @@ control the stats printing.
f := cmd.NewFsSrc(args)
cmd.Run(false, true, command, func() error {
s := newServer(f, &httpflags.Opt)
err := s.Serve()
if err != nil {
return err
}
s.Wait()
return nil
})
},
@@ -54,30 +56,32 @@ control the stats printing.
// server contains everything to run the server
type server struct {
*httplib.Server
f fs.Fs
vfs *vfs.VFS
}
func newServer(f fs.Fs, opt *httplib.Options) *server {
mux := http.NewServeMux()
s := &server{
Server: httplib.NewServer(mux, opt),
f: f,
vfs: vfs.New(f, &vfsflags.Opt),
}
mux.HandleFunc("/", s.handler)
return s
}
// Serve runs the http server in the background.
//
// Use s.Close() and s.Wait() to shutdown server
func (s *server) Serve() error {
err := s.Server.Serve()
if err != nil {
fs.Errorf(s.f, "Opening listener: %v", err)
return err
}
fs.Logf(s.f, "Serving on %s", s.srv.URL())
s.srv.Wait()
fs.Logf(s.f, "Serving on %s", s.URL())
return nil
}
// handler reads incoming requests and dispatches them
@@ -99,62 +103,6 @@ func (s *server) handler(w http.ResponseWriter, r *http.Request) {
}
}
// entry is a directory entry
type entry struct {
remote string
URL string
Leaf string
}
// entries represents a directory
type entries []entry
// addEntry adds an entry to that directory
func (es *entries) addEntry(node interface {
Path() string
Name() string
IsDir() bool
}) {
remote := node.Path()
leaf := node.Name()
urlRemote := leaf
if node.IsDir() {
leaf += "/"
urlRemote += "/"
}
*es = append(*es, entry{remote: remote, URL: rest.URLPathEscape(urlRemote), Leaf: leaf})
}
// indexPage is a directory listing template
var indexPage = `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>{{ .Title }}</title>
</head>
<body>
<h1>{{ .Title }}</h1>
{{ range $i := .Entries }}<a href="{{ $i.URL }}">{{ $i.Leaf }}</a><br />
{{ end }}</body>
</html>
`
// indexTemplate is the instantiated indexPage
var indexTemplate = template.Must(template.New("index").Parse(indexPage))
// indexData is used to fill in the indexTemplate
type indexData struct {
Title string
Entries entries
}
// error returns an http.StatusInternalServerError and logs the error
func internalError(what interface{}, w http.ResponseWriter, text string, err error) {
fs.CountError(err)
fs.Errorf(what, "%s: %v", text, err)
http.Error(w, text+".", http.StatusInternalServerError)
}
// serveDir serves a directory index at dirRemote
func (s *server) serveDir(w http.ResponseWriter, r *http.Request, dirRemote string) {
// List the directory
@@ -163,7 +111,7 @@ func (s *server) serveDir(w http.ResponseWriter, r *http.Request, dirRemote stri
http.Error(w, "Directory not found", http.StatusNotFound)
return
} else if err != nil {
serve.Error(dirRemote, w, "Failed to list directory", err)
return
}
if !node.IsDir() {
@@ -173,28 +121,17 @@ func (s *server) serveDir(w http.ResponseWriter, r *http.Request, dirRemote stri
dir := node.(*vfs.Dir)
dirEntries, err := dir.ReadDirAll()
if err != nil {
serve.Error(dirRemote, w, "Failed to list directory", err)
return
}
// Make the entries for display
directory := serve.NewDirectory(dirRemote)
for _, node := range dirEntries {
directory.AddEntry(node.Path(), node.IsDir())
}
directory.Serve(w, r)
}
// serveFile serves a file object at remote
@@ -205,7 +142,7 @@ func (s *server) serveFile(w http.ResponseWriter, r *http.Request, remote string
http.Error(w, "File not found", http.StatusNotFound)
return
} else if err != nil {
serve.Error(remote, w, "Failed to find file", err)
return
}
if !node.IsFile() {
@@ -239,7 +176,7 @@ func (s *server) serveFile(w http.ResponseWriter, r *http.Request, remote string
// open the object
in, err := file.Open(os.O_RDONLY)
if err != nil {
serve.Error(remote, w, "Failed to open file", err)
return
}
defer func() {


@@ -7,7 +7,6 @@ import (
"io/ioutil"
"net"
"net/http"
"path"
"strings"
"testing"
"time"
@@ -35,7 +34,7 @@ func startServer(t *testing.T, f fs.Fs) {
opt := httplib.DefaultOpt
opt.ListenAddr = testBindAddress
httpServer = newServer(f, &opt)
assert.NoError(t, httpServer.Serve())
// try to connect to the test server
pause := time.Millisecond
@@ -202,36 +201,7 @@ func TestGET(t *testing.T) {
}
}
type mockNode struct {
path string
isdir bool
}
func (n mockNode) Path() string { return n.path }
func (n mockNode) Name() string {
if n.path == "" {
return ""
}
return path.Base(n.path)
}
func (n mockNode) IsDir() bool { return n.isdir }
func TestAddEntry(t *testing.T) {
var es entries
es.addEntry(mockNode{path: "", isdir: true})
es.addEntry(mockNode{path: "dir", isdir: true})
es.addEntry(mockNode{path: "a/b/c/d.txt", isdir: false})
es.addEntry(mockNode{path: "a/b/c/colon:colon.txt", isdir: false})
es.addEntry(mockNode{path: "\"quotes\".txt", isdir: false})
assert.Equal(t, entries{
{remote: "", URL: "/", Leaf: "/"},
{remote: "dir", URL: "dir/", Leaf: "dir/"},
{remote: "a/b/c/d.txt", URL: "d.txt", Leaf: "d.txt"},
{remote: "a/b/c/colon:colon.txt", URL: "./colon:colon.txt", Leaf: "colon:colon.txt"},
{remote: "\"quotes\".txt", URL: "%22quotes%22.txt", Leaf: "\"quotes\".txt"},
}, es)
}
func TestFinalise(t *testing.T) {
httpServer.Close()
httpServer.Wait()
}


@@ -3,6 +3,7 @@ package httpflags
import (
"github.com/ncw/rclone/cmd/serve/httplib"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/rc"
"github.com/spf13/pflag"
)
@@ -13,6 +14,7 @@ var (
// AddFlagsPrefix adds flags for the httplib
func AddFlagsPrefix(flagSet *pflag.FlagSet, prefix string, Opt *httplib.Options) {
rc.AddOption(prefix+"http", &Opt)
flags.StringVarP(flagSet, &Opt.ListenAddr, prefix+"addr", "", Opt.ListenAddr, "IPaddress:Port or :Port to bind server to.")
flags.DurationVarP(flagSet, &Opt.ServerReadTimeout, prefix+"server-read-timeout", "", Opt.ServerReadTimeout, "Timeout for server reading data")
flags.DurationVarP(flagSet, &Opt.ServerWriteTimeout, prefix+"server-write-timeout", "", Opt.ServerWriteTimeout, "Timeout for server writing data")


@@ -13,6 +13,7 @@ import (
auth "github.com/abbot/go-http-auth"
"github.com/ncw/rclone/fs"
"github.com/pkg/errors"
)
// Globals
@@ -105,6 +106,7 @@ type Server struct {
httpServer *http.Server
basicPassHashed string
useSSL bool // if server is configured for SSL/TLS
usingAuth bool // set if authentication is configured
}
// singleUserProvider provides the encrypted password for a single user
@@ -142,6 +144,7 @@ func NewServer(handler http.Handler, opt *Options) *Server {
}
authenticator := auth.NewBasicAuthenticator(s.Opt.Realm, secretProvider)
handler = auth.JustCheck(authenticator, handler.ServeHTTP)
s.usingAuth = true
}
s.useSSL = s.Opt.SslKey != ""
@@ -188,7 +191,7 @@ func NewServer(handler http.Handler, opt *Options) *Server {
func (s *Server) Serve() error {
ln, err := net.Listen("tcp", s.httpServer.Addr)
if err != nil {
return errors.Wrapf(err, "start server failed")
}
s.listener = ln
s.waitChan = make(chan struct{})
@@ -254,3 +257,8 @@ func (s *Server) URL() string {
}
return fmt.Sprintf("%s://%s/", proto, addr)
}
// UsingAuth returns true if authentication is required
func (s *Server) UsingAuth() bool {
return s.usingAuth
}


@@ -0,0 +1,102 @@
package serve
import (
"fmt"
"html/template"
"net/http"
"net/url"
"path"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/lib/rest"
)
// DirEntry is a directory entry
type DirEntry struct {
remote string
URL string
Leaf string
}
// Directory represents a directory
type Directory struct {
DirRemote string
Title string
Entries []DirEntry
Query string
}
// NewDirectory makes an empty Directory
func NewDirectory(dirRemote string) *Directory {
d := &Directory{
DirRemote: dirRemote,
Title: fmt.Sprintf("Directory listing of /%s", dirRemote),
}
return d
}
// SetQuery sets the query parameters for each URL
func (d *Directory) SetQuery(queryParams url.Values) *Directory {
d.Query = ""
if len(queryParams) > 0 {
d.Query = "?" + queryParams.Encode()
}
return d
}
// AddEntry adds an entry to that directory
func (d *Directory) AddEntry(remote string, isDir bool) {
leaf := path.Base(remote)
if leaf == "." {
leaf = ""
}
urlRemote := leaf
if isDir {
leaf += "/"
urlRemote += "/"
}
d.Entries = append(d.Entries, DirEntry{
remote: remote,
URL: rest.URLPathEscape(urlRemote) + d.Query,
Leaf: leaf,
})
}
// Error returns an http.StatusInternalServerError and logs the error
func Error(what interface{}, w http.ResponseWriter, text string, err error) {
fs.CountError(err)
fs.Errorf(what, "%s: %v", text, err)
http.Error(w, text+".", http.StatusInternalServerError)
}
// Serve serves a directory
func (d *Directory) Serve(w http.ResponseWriter, r *http.Request) {
// Account the transfer
accounting.Stats.Transferring(d.DirRemote)
defer accounting.Stats.DoneTransferring(d.DirRemote, true)
fs.Infof(d.DirRemote, "%s: Serving directory", r.RemoteAddr)
err := indexTemplate.Execute(w, d)
if err != nil {
Error(d.DirRemote, w, "Failed to render template", err)
return
}
}
// indexPage is a directory listing template
var indexPage = `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>{{ .Title }}</title>
</head>
<body>
<h1>{{ .Title }}</h1>
{{ range $i := .Entries }}<a href="{{ $i.URL }}">{{ $i.Leaf }}</a><br />
{{ end }}</body>
</html>
`
// indexTemplate is the instantiated indexPage
var indexTemplate = template.Must(template.New("index").Parse(indexPage))
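Putting the pieces together, a caller drives this package roughly as serveDir in cmd/serve/http does. A minimal sketch with a hypothetical handler and made-up paths:

package main

import (
	"log"
	"net/http"

	"github.com/ncw/rclone/cmd/serve/httplib/serve"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		d := serve.NewDirectory("some/dir")
		d.SetQuery(r.URL.Query()) // carry query parameters through to each link
		d.AddEntry("some/dir/subdir", true)
		d.AddEntry("some/dir/file.txt", false)
		d.Serve(w, r) // renders indexTemplate and accounts the transfer
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}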


@@ -0,0 +1,88 @@
package serve
import (
"errors"
"io/ioutil"
"net/http"
"net/http/httptest"
"net/url"
"testing"
"github.com/stretchr/testify/assert"
)
func TestNewDirectory(t *testing.T) {
d := NewDirectory("z")
assert.Equal(t, "z", d.DirRemote)
assert.Equal(t, "Directory listing of /z", d.Title)
}
func TestSetQuery(t *testing.T) {
d := NewDirectory("z")
assert.Equal(t, "", d.Query)
d.SetQuery(url.Values{"potato": []string{"42"}})
assert.Equal(t, "?potato=42", d.Query)
d.SetQuery(url.Values{})
assert.Equal(t, "", d.Query)
}
func TestAddEntry(t *testing.T) {
var d = NewDirectory("z")
d.AddEntry("", true)
d.AddEntry("dir", true)
d.AddEntry("a/b/c/d.txt", false)
d.AddEntry("a/b/c/colon:colon.txt", false)
d.AddEntry("\"quotes\".txt", false)
assert.Equal(t, []DirEntry{
{remote: "", URL: "/", Leaf: "/"},
{remote: "dir", URL: "dir/", Leaf: "dir/"},
{remote: "a/b/c/d.txt", URL: "d.txt", Leaf: "d.txt"},
{remote: "a/b/c/colon:colon.txt", URL: "./colon:colon.txt", Leaf: "colon:colon.txt"},
{remote: "\"quotes\".txt", URL: "%22quotes%22.txt", Leaf: "\"quotes\".txt"},
}, d.Entries)
// Now test with a query parameter
d = NewDirectory("z").SetQuery(url.Values{"potato": []string{"42"}})
d.AddEntry("file", false)
d.AddEntry("dir", true)
assert.Equal(t, []DirEntry{
{remote: "file", URL: "file?potato=42", Leaf: "file"},
{remote: "dir", URL: "dir/?potato=42", Leaf: "dir/"},
}, d.Entries)
}
func TestError(t *testing.T) {
w := httptest.NewRecorder()
err := errors.New("help")
Error("potato", w, "sausage", err)
resp := w.Result()
assert.Equal(t, http.StatusInternalServerError, resp.StatusCode)
body, _ := ioutil.ReadAll(resp.Body)
assert.Equal(t, "sausage.\n", string(body))
}
func TestServe(t *testing.T) {
d := NewDirectory("aDirectory")
d.AddEntry("file", false)
d.AddEntry("dir", true)
w := httptest.NewRecorder()
r := httptest.NewRequest("GET", "http://example.com/aDirectory/", nil)
d.Serve(w, r)
resp := w.Result()
assert.Equal(t, http.StatusOK, resp.StatusCode)
body, _ := ioutil.ReadAll(resp.Body)
assert.Equal(t, `<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Directory listing of /aDirectory</title>
</head>
<body>
<h1>Directory listing of /aDirectory</h1>
<a href="file">file</a><br />
<a href="dir/">dir/</a><br />
</body>
</html>
`, string(body))
}


@@ -0,0 +1,102 @@
// Package serve deals with serving objects over HTTP
package serve
import (
"fmt"
"io"
"net/http"
"path"
"strconv"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
)
// Object serves an fs.Object via HEAD or GET
func Object(w http.ResponseWriter, r *http.Request, o fs.Object) {
if r.Method != "HEAD" && r.Method != "GET" {
http.Error(w, http.StatusText(http.StatusMethodNotAllowed), http.StatusMethodNotAllowed)
return
}
// Show that we accept ranges
w.Header().Set("Accept-Ranges", "bytes")
// Set content length since we know how long the object is
if o.Size() >= 0 {
w.Header().Set("Content-Length", strconv.FormatInt(o.Size(), 10))
}
// Set content type
mimeType := fs.MimeType(o)
if mimeType == "application/octet-stream" && path.Ext(o.Remote()) == "" {
// Leave header blank so http server guesses
} else {
w.Header().Set("Content-Type", mimeType)
}
if r.Method == "HEAD" {
return
}
// Decode Range request if present
code := http.StatusOK
size := o.Size()
var options []fs.OpenOption
if rangeRequest := r.Header.Get("Range"); rangeRequest != "" {
//fs.Debugf(nil, "Range: request %q", rangeRequest)
option, err := fs.ParseRangeOption(rangeRequest)
if err != nil {
fs.Debugf(o, "Get request parse range request error: %v", err)
http.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest)
return
}
options = append(options, option)
offset, limit := option.Decode(o.Size())
end := o.Size() // exclusive
if limit >= 0 {
end = offset + limit
}
if end > o.Size() {
end = o.Size()
}
size = end - offset
// fs.Debugf(nil, "Range: offset=%d, limit=%d, end=%d, size=%d (object size %d)", offset, limit, end, size, o.Size())
// Content-Range: bytes 0-1023/146515
w.Header().Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", offset, end-1, o.Size()))
// fs.Debugf(nil, "Range: Content-Range: %q", w.Header().Get("Content-Range"))
code = http.StatusPartialContent
}
w.Header().Set("Content-Length", strconv.FormatInt(size, 10))
file, err := o.Open(options...)
if err != nil {
fs.Debugf(o, "Get request open error: %v", err)
http.Error(w, http.StatusText(http.StatusNotFound), http.StatusNotFound)
return
}
accounting.Stats.Transferring(o.Remote())
in := accounting.NewAccount(file, o) // account the transfer (no buffering)
defer func() {
closeErr := in.Close()
if closeErr != nil {
fs.Errorf(o, "Get request: close failed: %v", closeErr)
if err == nil {
err = closeErr
}
}
ok := err == nil
accounting.Stats.DoneTransferring(o.Remote(), ok)
if !ok {
accounting.Stats.Error(err)
}
}()
w.WriteHeader(code)
n, err := io.Copy(w, in)
if err != nil {
fs.Errorf(o, "Didn't finish writing GET request (wrote %d/%d bytes): %v", n, size, err)
return
}
}
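As a worked example of the range arithmetic above (it matches TestObjectRange below): "Range: bytes=3-5" against a 10 byte object decodes to offset=3, limit=3, hence end=6, size=3, Content-Length 3 and Content-Range "bytes 3-5/10". A sketch of just that computation:

package main

import "fmt"

func main() {
	// "Range: bytes=3-5" decodes to offset=3, limit=3 (an inclusive 3-byte span)
	objectSize, offset, limit := int64(10), int64(3), int64(3)
	end := objectSize // exclusive
	if limit >= 0 {
		end = offset + limit
	}
	if end > objectSize {
		end = objectSize
	}
	size := end - offset
	fmt.Printf("Content-Length: %d\n", size)                                 // 3
	fmt.Printf("Content-Range: bytes %d-%d/%d\n", offset, end-1, objectSize) // bytes 3-5/10
}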


@@ -0,0 +1,76 @@
package serve
import (
"io/ioutil"
"net/http"
"net/http/httptest"
"testing"
"github.com/ncw/rclone/fstest/mockobject"
"github.com/stretchr/testify/assert"
)
func TestObjectBadMethod(t *testing.T) {
w := httptest.NewRecorder()
r := httptest.NewRequest("BADMETHOD", "http://example.com/aFile", nil)
o := mockobject.New("aFile")
Object(w, r, o)
resp := w.Result()
assert.Equal(t, http.StatusMethodNotAllowed, resp.StatusCode)
body, _ := ioutil.ReadAll(resp.Body)
assert.Equal(t, "Method Not Allowed\n", string(body))
}
func TestObjectHEAD(t *testing.T) {
w := httptest.NewRecorder()
r := httptest.NewRequest("HEAD", "http://example.com/aFile", nil)
o := mockobject.New("aFile").WithContent([]byte("hello"), mockobject.SeekModeNone)
Object(w, r, o)
resp := w.Result()
assert.Equal(t, http.StatusOK, resp.StatusCode)
assert.Equal(t, "5", resp.Header.Get("Content-Length"))
assert.Equal(t, "bytes", resp.Header.Get("Accept-Ranges"))
body, _ := ioutil.ReadAll(resp.Body)
assert.Equal(t, "", string(body))
}
func TestObjectGET(t *testing.T) {
w := httptest.NewRecorder()
r := httptest.NewRequest("GET", "http://example.com/aFile", nil)
o := mockobject.New("aFile").WithContent([]byte("hello"), mockobject.SeekModeNone)
Object(w, r, o)
resp := w.Result()
assert.Equal(t, http.StatusOK, resp.StatusCode)
assert.Equal(t, "5", resp.Header.Get("Content-Length"))
assert.Equal(t, "bytes", resp.Header.Get("Accept-Ranges"))
body, _ := ioutil.ReadAll(resp.Body)
assert.Equal(t, "hello", string(body))
}
func TestObjectRange(t *testing.T) {
w := httptest.NewRecorder()
r := httptest.NewRequest("GET", "http://example.com/aFile", nil)
r.Header.Add("Range", "bytes=3-5")
o := mockobject.New("aFile").WithContent([]byte("0123456789"), mockobject.SeekModeNone)
Object(w, r, o)
resp := w.Result()
assert.Equal(t, http.StatusPartialContent, resp.StatusCode)
assert.Equal(t, "3", resp.Header.Get("Content-Length"))
assert.Equal(t, "bytes", resp.Header.Get("Accept-Ranges"))
assert.Equal(t, "bytes 3-5/10", resp.Header.Get("Content-Range"))
body, _ := ioutil.ReadAll(resp.Body)
assert.Equal(t, "345", string(body))
}
func TestObjectBadRange(t *testing.T) {
w := httptest.NewRecorder()
r := httptest.NewRequest("GET", "http://example.com/aFile", nil)
r.Header.Add("Range", "xxxbytes=3-5")
o := mockobject.New("aFile").WithContent([]byte("0123456789"), mockobject.SeekModeNone)
Object(w, r, o)
resp := w.Result()
assert.Equal(t, http.StatusBadRequest, resp.StatusCode)
assert.Equal(t, "10", resp.Header.Get("Content-Length"))
body, _ := ioutil.ReadAll(resp.Body)
assert.Equal(t, "Bad Request\n", string(body))
}


@@ -4,19 +4,17 @@ package restic
import (
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"os"
"path"
"regexp"
"strconv"
"strings"
"time"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/cmd/serve/httplib"
"github.com/ncw/rclone/cmd/serve/httplib/httpflags"
"github.com/ncw/rclone/cmd/serve/httplib/serve"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/fs/fserrors"
@@ -138,8 +136,11 @@ these **must** end with /. Eg
httpSrv.ServeConn(conn, opts)
return nil
}
err := s.Serve()
if err != nil {
return err
}
s.Wait()
return nil
})
},
@@ -151,28 +152,30 @@ const (
// server contains everything to run the server
type server struct {
*httplib.Server
f fs.Fs
}
func newServer(f fs.Fs, opt *httplib.Options) *server {
mux := http.NewServeMux()
s := &server{
Server: httplib.NewServer(mux, opt),
f: f,
}
mux.HandleFunc("/", s.handler)
return s
}
// Serve runs the http server in the background.
//
// Use s.Close() and s.Wait() to shutdown server
func (s *server) Serve() error {
err := s.Server.Serve()
if err != nil {
return err
}
fs.Logf(s.f, "Serving restic REST API on %s", s.srv.URL())
s.srv.Wait()
fs.Logf(s.f, "Serving restic REST API on %s", s.URL())
return nil
}
var matchData = regexp.MustCompile("(?:^|/)data/([^/]{2,})$")
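A quick illustration of what matchData accepts; the object name after data/ must be at least two characters, and the sample paths here are made up:

package main

import (
	"fmt"
	"regexp"
)

var matchData = regexp.MustCompile("(?:^|/)data/([^/]{2,})$")

func main() {
	for _, p := range []string{"data/0123456789abcdef", "repo/data/ab", "data/a", "config"} {
		// the first two match and capture the object name; the last two return nil
		fmt.Printf("%q -> %v\n", p, matchData.FindStringSubmatch(p))
	}
}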
@@ -215,10 +218,8 @@ func (s *server) handler(w http.ResponseWriter, r *http.Request) {
}
} else {
switch r.Method {
case "GET":
s.getObject(w, r, remote)
case "HEAD":
s.headObject(w, r, remote)
case "GET", "HEAD":
s.serveObject(w, r, remote)
case "POST":
s.postObject(w, r, remote)
case "DELETE":
@@ -229,91 +230,15 @@ func (s *server) handler(w http.ResponseWriter, r *http.Request) {
}
}
func (s *server) serveObject(w http.ResponseWriter, r *http.Request, remote string) {
o, err := s.f.NewObject(remote)
if err != nil {
fs.Debugf(remote, "Get request error: %v", err)
fs.Debugf(remote, "%s request error: %v", r.Method, err)
http.Error(w, http.StatusText(http.StatusNotFound), http.StatusNotFound)
return
}
serve.Object(w, r, o)
}
// postObject posts an object to the repository


@@ -41,8 +41,11 @@ func TestRestic(t *testing.T) {
// Start the server
w := newServer(fremote, &opt)
assert.NoError(t, w.Serve())
defer func() {
w.Close()
w.Wait()
}()
// Change directory to run the tests
err = os.Chdir(resticSource)


@@ -68,8 +68,12 @@ Use "rclone hashsum" to see the full list.
fs.Debugf(f, "Using hash %v for ETag", hashType)
}
cmd.Run(false, false, command, func() error {
s := newWebDAV(f, &httpflags.Opt)
err := s.serve()
if err != nil {
return err
}
s.Wait()
return nil
})
return nil
@@ -89,9 +93,9 @@ Use "rclone hashsum" to see the full list.
// might apply". In particular, whether or not renaming a file or directory
// overwriting another existing file or directory is an error is OS-dependent.
type WebDAV struct {
*httplib.Server
f fs.Fs
vfs *vfs.VFS
}
// check interface
@@ -110,18 +114,20 @@ func newWebDAV(f fs.Fs, opt *httplib.Options) *WebDAV {
Logger: w.logRequest, // FIXME
}
w.Server = httplib.NewServer(handler, opt)
return w
}
// serve runs the http server in the background.
//
// Use s.Close() and s.Wait() to shutdown server
func (w *WebDAV) serve() error {
err := w.Serve()
if err != nil {
return err
}
fs.Logf(w.f, "WebDav Server started on %s", w.URL())
return nil
}
// logRequest is called by the webdav module on every request


@@ -48,8 +48,11 @@ func TestWebDav(t *testing.T) {
// Start the server
w := newWebDAV(fremote, &opt)
assert.NoError(t, w.serve())
defer func() {
w.Close()
w.Wait()
}()
// Change directory to run the tests
err = os.Chdir("../../../backend/webdav")


@@ -32,6 +32,8 @@ extended explanation in the ` + "`" + `copy` + "`" + ` command above if unsure.
If dest:path doesn't exist, it is created and the source:path contents
go there.
**Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view real-time transfer statistics
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)


@@ -4,13 +4,12 @@ import (
"fmt"
"io/ioutil"
"net/http"
"regexp"
"strconv"
"strings"
"time"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/version"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
@@ -66,63 +65,8 @@ Or
},
}
var parseVersion = regexp.MustCompile(`^(?:rclone )?v(\d+)\.(\d+)(?:\.(\d+))?(?:-(\d+)(?:-(g[\wβ-]+))?)?$`)
type version []int
func newVersion(in string) (v version, err error) {
r := parseVersion.FindStringSubmatch(in)
if r == nil {
return v, errors.Errorf("failed to match version string %q", in)
}
atoi := func(s string) int {
i, err := strconv.Atoi(s)
if err != nil {
fs.Errorf(nil, "Failed to parse %q as int from %q: %v", s, in, err)
}
return i
}
v = version{
atoi(r[1]), // major
atoi(r[2]), // minor
}
if r[3] != "" {
v = append(v, atoi(r[3])) // patch
} else if r[4] != "" {
v = append(v, 0) // patch
}
if r[4] != "" {
v = append(v, atoi(r[4])) // dev
}
return v, nil
}
// String converts v to a string
func (v version) String() string {
var out []string
for _, vv := range v {
out = append(out, fmt.Sprint(vv))
}
return strings.Join(out, ".")
}
// cmp compares two versions returning >0, <0 or 0
func (v version) cmp(o version) (d int) {
n := len(v)
if n > len(o) {
n = len(o)
}
for i := 0; i < n; i++ {
d = v[i] - o[i]
if d != 0 {
return d
}
}
return len(v) - len(o)
}
// getVersion gets the version by checking the download repository passed in
func getVersion(url string) (v version.Version, vs string, date time.Time, err error) {
resp, err := http.Get(url)
if err != nil {
return v, vs, date, err
@@ -144,26 +88,17 @@ func getVersion(url string) (v version, vs string, date time.Time, err error) {
if err != nil {
return v, vs, date, err
}
v, err = version.New(vs)
return v, vs, date, err
}
// check the current version against available versions
func checkVersion() {
// Get Current version
vCurrent, err := version.New(fs.Version)
if err != nil {
fs.Errorf(nil, "Failed to get parse version: %v", err)
}
const timeFormat = "2006-01-02"
printVersion := func(what, url string) {
@@ -177,7 +112,7 @@ func checkVersion() {
v,
"(released "+t.Format(timeFormat)+")",
)
if v.Cmp(vCurrent) > 0 {
fmt.Printf(" upgrade: %s\n", url+vs)
}
}
@@ -190,7 +125,7 @@ func checkVersion() {
"beta",
"https://beta.rclone.org/",
)
if vCurrent.IsGit() {
fmt.Println("Your version is compiled from git so comparisons may be wrong.")
}
}
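For orientation, a minimal sketch of the fs/version API this commit switches to. New, Cmp and IsGit are the calls used above; the sample version strings are illustrative, and parsing of the -DEV suffix is assumed from the use of version.New(fs.Version) and IsGit() in checkVersion:

package main

import (
	"fmt"
	"log"

	"github.com/ncw/rclone/fs/version"
)

func main() {
	vOld, err := version.New("v1.44")
	if err != nil {
		log.Fatal(err)
	}
	vNew, err := version.New("rclone v1.45-DEV")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(vNew.Cmp(vOld) > 0) // true: 1.45 sorts after 1.44
	fmt.Println(vNew.IsGit())       // true: the -DEV suffix marks a git build
}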


@@ -1,7 +1,6 @@
package version
import (
"fmt"
"io/ioutil"
"os"
"runtime"
@@ -46,65 +45,3 @@ func TestVersionWorksWithoutAccessibleConfigFile(t *testing.T) {
// assert.NoError(t, cmd.Root.Execute())
// })
}
func TestVersionNew(t *testing.T) {
for _, test := range []struct {
in string
want version
wantErr bool
}{
{"v1.41", version{1, 41}, false},
{"rclone v1.41", version{1, 41}, false},
{"rclone v1.41.23", version{1, 41, 23}, false},
{"rclone v1.41.23-100", version{1, 41, 23, 100}, false},
{"rclone v1.41-100", version{1, 41, 0, 100}, false},
{"rclone v1.41.23-100-g12312a", version{1, 41, 23, 100}, false},
{"rclone v1.41-100-g12312a", version{1, 41, 0, 100}, false},
{"rclone v1.42-005-g56e1e820β", version{1, 42, 0, 5}, false},
{"rclone v1.42-005-g56e1e820-feature-branchβ", version{1, 42, 0, 5}, false},
{"v1.41s", nil, true},
{"rclone v1-41", nil, true},
{"rclone v1.41.2c3", nil, true},
{"rclone v1.41.23-100 potato", nil, true},
{"rclone 1.41-100", nil, true},
{"rclone v1.41.23-100-12312a", nil, true},
} {
what := fmt.Sprintf("in=%q", test.in)
got, err := newVersion(test.in)
if test.wantErr {
assert.Error(t, err, what)
} else {
assert.NoError(t, err, what)
}
assert.Equal(t, test.want, got, what)
}
}
func TestVersionCmp(t *testing.T) {
for _, test := range []struct {
a, b version
want int
}{
{version{1}, version{1}, 0},
{version{1}, version{2}, -1},
{version{2}, version{1}, 1},
{version{2}, version{2, 1}, -1},
{version{2, 1}, version{2}, 1},
{version{2, 1}, version{2, 1}, 0},
{version{2, 1}, version{2, 2}, -1},
{version{2, 2}, version{2, 1}, 1},
} {
got := test.a.cmp(test.b)
if got < 0 {
got = -1
} else if got > 0 {
got = 1
}
assert.Equal(t, test.want, got, fmt.Sprintf("%v cmp %v", test.a, test.b))
// test the reverse
got = -test.b.cmp(test.a)
assert.Equal(t, test.want, got, fmt.Sprintf("%v cmp %v", test.b, test.a))
}
}


@@ -66,7 +66,7 @@ Contributors
* Marvin Watson <marvwatson@users.noreply.github.com>
* Danny Tsai <danny8376@gmail.com>
* Yoni Jah <yonjah+git@gmail.com> <yonjah+github@gmail.com>
* Stephen Harris <github@spuddy.org> <sweharris@users.noreply.github.com>
* Ihor Dvoretskyi <ihor.dvoretskyi@gmail.com>
* Jon Craton <jncraton@gmail.com>
* Hraban Luyat <hraban@0brg.net>
@@ -208,3 +208,12 @@ Contributors
* David Haguenauer <ml@kurokatta.org>
* teresy <hi.teresy@gmail.com>
* buergi <patbuergi@gmx.de>
* Florian Gamboeck <mail@floga.de>
* Ralf Hemberger <10364191+rhemberger@users.noreply.github.com>
* Scott Edlund <sedlund@users.noreply.github.com>
* Erik Swanson <erik@retailnext.net>
* Jake Coggiano <jake@stripe.com>
* brused27 <brused27@noemailaddress>
* Peter Kaminski <kaminski@istori.com>
* Henry Ptasinski <henry@logout.com>
* Alexander <kharkovalexander@gmail.com>

Some files were not shown because too many files have changed in this diff.