mirror of https://github.com/rclone/rclone.git synced 2026-01-06 10:33:34 +00:00

Compare commits


401 Commits
v1.23 ... v1.33

Author SHA1 Message Date
Nick Craig-Wood
3996bbb8cb Version v1.33 2016-08-24 23:02:05 +01:00
Nick Craig-Wood
c2599cb116 Fix crypt tests on Windows 2016-08-24 22:21:34 +01:00
Nick Craig-Wood
2c13074f6c drive: document how to make your own client_id - fixes #560 2016-08-24 22:06:41 +01:00
Nick Craig-Wood
059743a1b0 crypt: add to integration tests 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
73cd1f4e88 crypt: Implement DirMover 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
a54806e5c1 Fix Move when underlying remote returns ErrorCantMove 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
e6a0521ca2 Make it possible to test Fs multiple times and use this with crypt
We test both the filename encryption modes for crypt.
2016-08-23 17:45:37 +01:00
Nick Craig-Wood
43eadf278c Remove flattening and replace with {off, standard} name encryption 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
5f375a182d Create TestCrypt remote 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
663dd6ed8b crypt: ask for a second password for the salt 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
226c2a0d83 Implement crypt for encrypted remotes - #219 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
b4b4b6cb1c Allow Fs tests to declare new config items 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
9985fc40f4 Make Password parameters obey Optional flag and offer to generate random ones 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
b1de4c8cba Implement password Option and re-implement editing
Editing now shows all the options for the fs and asks one at a time
whether they should be changed.
2016-08-23 17:45:37 +01:00
Nick Craig-Wood
6a4e424630 Re-implement Obscure/Reveal so they use AES-CTR encryption 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
ebb67c135e Fix listToChan passing nil objects to DeleteFile 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
326dcf2470 Add more troublesome symbols to test cases
These are from #623 #620 #218
2016-08-23 14:28:05 +01:00
Nick Craig-Wood
86eb80ecdc Add Radek Šenfeld to contributors. 2016-08-23 12:25:39 +01:00
Radek Šenfeld
2003ba356b User-configurable Amazon S3 ACL
fixes #413
2016-08-23 12:25:08 +01:00
Nick Craig-Wood
037a000cc8 b2: fix stats accounting for upload - fixes #602 2016-08-22 21:19:38 +01:00
Nick Craig-Wood
8a771450d2 docs: Add hover over links on headings 2016-08-22 17:21:06 +01:00
Nick Craig-Wood
1e7dc06ab8 Fix file encoding 2016-08-22 16:47:06 +01:00
Nick Craig-Wood
ca841c56a8 Disable smart dashes so --flag shows properly in the docs - fixes #632 2016-08-22 16:46:08 +01:00
Nick Craig-Wood
79eebf1993 onedrive: fix URL escaping in file names - eg uploading files with + in them.
Fixes #620
Fixes #218
2016-08-22 10:58:49 +01:00
Nick Craig-Wood
bbccf4acd5 Update go versions
Remove tip for the moment
2016-08-20 14:14:48 +01:00
Nick Craig-Wood
9e7ddd5efc Fix tests when FUSE isn't present 2016-08-20 14:11:21 +01:00
Nick Craig-Wood
6089f443b9 Fix windows build - fixes #628
Try to make clearer the distinction between OS paths and rclone paths
(remotes) so it is harder to muddle them up.
2016-08-20 12:29:54 +01:00
Nick Craig-Wood
84eb7031bb Implement the rclone cat command 2016-08-18 22:45:32 +01:00
Nick Craig-Wood
f22029bf3d Add mount command to implement FUSE mounting of remotes #494
This enables any rclone remote to be mounted and used as a filesystem
with some limitations.

Only supported for Linux, FreeBSD and OS X
2016-08-18 21:54:54 +01:00
Nick Craig-Wood
d7b79b4481 Mark the compiled from source version with -DEV - fixes #627 2016-08-18 21:31:10 +01:00
Nick Craig-Wood
b5faaf7116 Fix double close of abort channel - fixes #592 2016-08-18 18:56:57 +01:00
Nick Craig-Wood
b4f2ada820 b2: on cleanup delete hide marker if it is the current file #604 2016-08-18 18:36:00 +01:00
Nick Craig-Wood
8a66930bd7 acd: document --acd-upload-wait-time 2016-08-18 17:49:49 +01:00
Nick Craig-Wood
2ebeed6753 acd: Fix token expiry during large uploads
When rclone is busy doing lots of very long uploads it doesn't refresh
the token. Amazon will fail uploads if they finish when the token is
more than 1 hour past expiry.

Fix this by keeping track of the number of uploads and refreshing the
token when the token expires if there is an upload in progress.
2016-08-18 17:39:23 +01:00
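
An illustrative sketch of the idea described above - count in-flight uploads and refresh the token when the expiry timer fires while an upload is still running. The type and field names here are assumptions for illustration only, not rclone's actual oauthutil code.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// uploadTracker counts in-flight uploads and refreshes the token when the
// expiry timer fires while an upload is still running.
type uploadTracker struct {
	mu       sync.Mutex
	inFlight int
	expires  time.Time
	refresh  func() (time.Time, error) // placeholder for the real token refresh
}

func (t *uploadTracker) startUpload() { t.mu.Lock(); t.inFlight++; t.mu.Unlock() }
func (t *uploadTracker) endUpload()   { t.mu.Lock(); t.inFlight--; t.mu.Unlock() }

// onExpiry would be wired to a time.Timer set to fire at t.expires.
func (t *uploadTracker) onExpiry() {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.inFlight > 0 && !time.Now().Before(t.expires) {
		if exp, err := t.refresh(); err == nil {
			t.expires = exp
		}
	}
}

func main() {
	t := &uploadTracker{
		expires: time.Now(),
		refresh: func() (time.Time, error) { return time.Now().Add(time.Hour), nil },
	}
	t.startUpload()
	t.onExpiry() // upload in progress and token expired, so it gets refreshed
	fmt.Println("token now expires at", t.expires)
}
```
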
Nick Craig-Wood
23d8ba41d5 oauthutil: implement a timer for token expiry 2016-08-18 17:39:23 +01:00
Nick Craig-Wood
4f9e805d44 acd: Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
Amazon Drive sometimes returns errors at the end of large uploads

  * 408 REQUEST_TIMEOUT
  * 504 GATEWAY_TIMEOUT
  * 500 Internal error

The file may have been uploaded correctly though, so, on error, wait
for up to 2 minutes for it to appear if it was fully
uploaded (configure timeout with --acd-upload-wait-time).

Issues: #601 #605 #606
2016-08-18 17:39:23 +01:00
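
A minimal sketch of the "wait and see if it arrived" workaround described above, assuming a hypothetical lookup function; this is not rclone's actual implementation.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForUpload polls lookup until the object appears or waitTime elapses.
// lookup is a placeholder for asking the remote whether the file now exists.
func waitForUpload(lookup func() (bool, error), waitTime, interval time.Duration) error {
	deadline := time.Now().Add(waitTime)
	for time.Now().Before(deadline) {
		found, err := lookup()
		if err != nil {
			return err
		}
		if found {
			return nil // the upload completed despite the timeout error
		}
		time.Sleep(interval)
	}
	return errors.New("upload not found after waiting")
}

func main() {
	calls := 0
	err := waitForUpload(func() (bool, error) {
		calls++
		return calls >= 3, nil // pretend the file shows up on the third poll
	}, 2*time.Minute, 10*time.Millisecond)
	fmt.Println(err)
}
```
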
Nick Craig-Wood
3f7107839e Add Per Cederberg to contributors 2016-08-18 17:10:50 +01:00
Per Cederberg
bb62c49489 New B2 API endpoint
Backblaze will change the authentication API endpoint on August 16, 2016. The old endpoint will be removed Feb 2nd 2017.

See https://help.backblaze.com/hc/en-us/articles/224959187-B2-Domain-Migration-Plan
2016-08-15 15:59:19 +02:00
Nick Craig-Wood
ae6018355c Correct parameter order for copy/sync etc 2016-08-06 00:07:36 +01:00
Nick Craig-Wood
0805ec051f Add BasicInfo interface shared between Dir and Object 2016-08-05 17:45:27 +01:00
Nick Craig-Wood
e27b91ffb8 Factor each command into its own package 2016-08-05 17:13:54 +01:00
Nick Craig-Wood
0a7b34eefc Move internals of rclone command into cmd so it can be imported externally 2016-08-04 22:33:46 +01:00
Nick Craig-Wood
549cac90af Use cobra autogenerated docs
* put the most up to date docs into the code
  * generate command docs using rclone gendocs
  * put command docs into own directory
  * remake them into MANUAL.md
2016-08-04 21:47:14 +01:00
Nick Craig-Wood
ba0b41dd92 Add gendocs command to rclone 2016-08-04 21:47:14 +01:00
Nick Craig-Wood
2df261e42b Add genautocomplete command to make bash completion script. 2016-08-04 21:47:14 +01:00
Nick Craig-Wood
38adb35abe Make dedupe take an optional mode parameter 2016-08-04 21:47:14 +01:00
Nick Craig-Wood
520ded60e3 Add memtest command for debugging purposes 2016-08-04 21:47:14 +01:00
Nick Craig-Wood
ae56df7d4f Add --dedupe-mode only to dedupe command 2016-08-04 21:47:14 +01:00
Nick Craig-Wood
412591dfaf Make rclone use cobra for command line parsing 2016-08-03 17:16:27 +01:00
Nick Craig-Wood
57f8f1ec92 b2: set maximum backoff to 5 Minutes #597 2016-08-01 22:57:52 +01:00
Nick Craig-Wood
f0434789cf Encourage using the latest version before submitting an issue. 2016-07-28 10:38:16 +01:00
Nick Craig-Wood
c2f6decb9c swift: note that tenant isn't optional for > v1 auth - fixes #563 2016-07-15 18:25:59 +01:00
Nick Craig-Wood
9eeed25418 local: fix filenames with invalid UTF-8 not being uploaded #568 2016-07-15 14:18:09 +01:00
Nick Craig-Wood
67562081f7 Version v1.32 2016-07-13 17:32:39 +01:00
Nick Craig-Wood
41917eb1f2 b2: Fix upload of files large files not in root - fixes #582 2016-07-13 15:28:39 +01:00
Nick Craig-Wood
c3e996f10f b2 doc fixes 2016-07-13 14:50:47 +01:00
Nick Craig-Wood
63f6827a0d Version v1.31 2016-07-13 12:28:01 +01:00
Nick Craig-Wood
96e2271cce Factor commands into Makefile 2016-07-13 12:25:19 +01:00
Nick Craig-Wood
ac3c83f966 Fix integration tests for drive 2016-07-12 21:38:15 +01:00
Nick Craig-Wood
b9c8e61d39 Explicitly check the state in tests after writing files
...otherwise Amazon Drive will fail.
2016-07-12 21:36:39 +01:00
Nick Craig-Wood
a6056408dd Fix move command - stop it running for overlapping fses - fixes #577
* Make move command check for overlapping remotes and refuse to run
  * Do copy/delete rather than all the copies then all the deletes
  * Doesn't purge the source - this was unexpected behaviour see #512 and #416
  * Add -list-retries flag to test suite to control retries

This changes the semantics of `move` slightly.  However it now errs on
the side of not deleting stuff.
2016-07-12 10:49:37 +01:00
Nick Craig-Wood
b9479cf7ab Implement --no-update-modtime flag - fixes #511 2016-07-12 10:46:45 +01:00
Nick Craig-Wood
452a5badc1 Add Stefan Weichinger to contributors 2016-07-11 15:32:58 +01:00
Stefan G. Weichinger
d645bf0966 Add basic info how to use ansible role for installation 2016-07-11 15:31:36 +01:00
Nick Craig-Wood
50addaa91e Add Antonio Messina to contributors 2016-07-11 15:22:17 +01:00
Antonio Messina
02a3bbaa3d swift: add support for non-default project domain.
With Keystone V3 both users and projects (a.k.a. tenants) can belong
to different domains. This change allows specifying different domains
for the user and the project.
2016-07-11 15:16:58 +01:00
Nick Craig-Wood
a20d80565b Tidy stats output - fixes #541 2016-07-11 13:04:30 +01:00
Nick Craig-Wood
56adb52a21 Rename Amazon Cloud Drive to Amazon Drive - fixes #532 2016-07-11 12:42:44 +01:00
Nick Craig-Wood
8c2fc6daf8 s3: Add instructions on how to use rclone with minio 2016-07-11 12:12:28 +01:00
Nick Craig-Wood
4bd9932703 Fix wording in verbose copy logs - fixes #574 2016-07-09 10:11:57 +01:00
Nick Craig-Wood
2a1d4b7563 s3: Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions - fixes #567 2016-07-06 11:14:59 +01:00
Nick Craig-Wood
b394431f18 Improve --files-from docs - fixes #547 2016-07-05 12:33:59 +01:00
Nick Craig-Wood
cc628717d8 b2: Add --b2-versions flag so old versions can be listed and retrieved. #420 2016-07-05 11:27:04 +01:00
Nick Craig-Wood
f3e00133a0 dropbox: Don't retry 461 errors - fixes #551
461 errors from dropbox indicate some sort of copyright violation.
2016-07-04 13:45:53 +01:00
Nick Craig-Wood
606961f49d b2: Treat 403 errors (eg cap exceeded) as fatal #420 2016-07-04 13:45:53 +01:00
Nick Craig-Wood
13591c7c00 Redo error handling for sync/copy/move
* Factor sync/copy/move into its own file
  * Make fatal errors abort the sync
  * Make Copy return errors
  * Make Sync/Copy/Move return the last Copy error if there was one
  * Prioritise returning Fatal errors
  * NoRetry errors are returned if there are no other types of errors
2016-07-04 13:45:53 +01:00
Nick Craig-Wood
28f4061892 Add two more classes of error Fatal and NoRetry
These are for remotes to signal that they have a fatal error and don't
want to continue (eg cap exceeded) or that a particular file shouldn't
be retried for some reason.
2016-07-04 13:45:52 +01:00
Nick Craig-Wood
018fe80bcb b2: cleanup old file versions - fixes #462 2016-07-02 17:03:08 +01:00
Nick Craig-Wood
0a43ff9c13 Modify interface for accounting to take a string not an fs.Object 2016-07-02 16:58:50 +01:00
Nick Craig-Wood
9aae143833 Implement cleanup command for emptying trash / removing old versions of files 2016-07-01 16:35:36 +01:00
Nick Craig-Wood
c8e2531c8b b2: make error handling compliant 2016-07-01 16:23:23 +01:00
Nick Craig-Wood
9290004bb8 pacer: make sleep get-able and set-able 2016-07-01 16:22:51 +01:00
Nick Craig-Wood
cbebefebc4 b2: Fix handling of token expiry #420
Found with --b2-test-mode expire_some_account_authorization_tokens
2016-07-01 11:47:42 +01:00
Nick Craig-Wood
6f3897ce2c b2: implement --b2-test-mode to set X-Bz-Test-Mode header #420 2016-07-01 11:30:09 +01:00
Nick Craig-Wood
ea5878f590 b2: set cutoff for chunked upload to 200MB #420
This is the value recommended in the b2 integration checklist:

https://www.backblaze.com/b2/docs/integration_checklist.html
2016-07-01 10:08:09 +01:00
Nick Craig-Wood
46f8e50614 b2: Make upload multi-threaded - fixes #531 2016-07-01 10:04:52 +01:00
Nick Craig-Wood
70dc97231e Convert more tests to use assert/require 2016-06-30 15:45:30 +01:00
Nick Craig-Wood
f6a053df6e Automatically set --no-traverse when copying a single file 2016-06-29 17:38:56 +01:00
Nick Craig-Wood
af4ef8ad8d Implement --no-traverse flag to stop copy traversing the destination remote.
Refactor sync/copy/move
  * Don't load the src listing unless doing a sync and --delete-before
  * Don't load the dst listing if doing copy/move and --no-traverse is set

`rclone --no-traverse copy src dst` now won't load either of the
listings into memory so will use the minimum amount of memory.

This change also dramatically reduces the amount of memory rclone uses
in normal operations (copy without --no-traverse, or sync), as it no
longer loads the source file listing into memory at all.

Fixes #8
Fixes #544
Fixes #546
2016-06-29 17:38:50 +01:00
Nick Craig-Wood
13797a1fb8 Make retry logs be debug in main copy routine 2016-06-28 08:51:57 +01:00
Nick Craig-Wood
3ad8fb8634 Make DeleteFile and DeleteFiles return errors 2016-06-28 08:51:57 +01:00
Nick Craig-Wood
ab43005422 Make NewObject return an error
* make it return an error
  * make a canonical error fs.ErrorNotFound
  * make a test for it
  * remove logs/debugs of error
2016-06-28 08:51:57 +01:00
Nick Craig-Wood
b1f131964e Rename NewFsObject to NewObject 2016-06-28 08:51:57 +01:00
Nick Craig-Wood
1a87b69376 Get rid of LimitedFs - FIXME needs docs on copying single files
If remote:path points to a file make NewFs return a sentinel error
fs.ErrorIsFile and an Fs which points to the parent.

Use this to remove the LimitedFs and just add this file to the
--files-from list.

This means that server side operations can be used also.

Fixes #518
Fixes #545
2016-06-28 08:51:43 +01:00
Nick Craig-Wood
5a3b109e25 Fix issues identified by go vet -shadow - fixes #530 2016-06-21 21:17:52 +01:00
Nick Craig-Wood
a67c7461ee s3: skip SetModTime for objects > 5GB - fixes #534 2016-06-19 17:26:44 +01:00
Klaus Post
e0aa4bb492 Fix incomplete local hashes.
Fixes #533
2016-06-19 16:51:49 +02:00
Nick Craig-Wood
ab0947ee37 Fix typo in changelog 2016-06-18 16:58:37 +01:00
Nick Craig-Wood
bd0227450e Version v1.30 2016-06-18 16:41:46 +01:00
Nick Craig-Wood
f438f1e9ef Fix stats print 2016-06-18 16:41:46 +01:00
Nick Craig-Wood
3f7b2c1ade Add Justin R. Wilson to contributors 2016-06-18 14:31:17 +01:00
Justin R. Wilson
6e35a3b3ce Add AES256 server-side encryption for s3 - Fixes #491
Add a configuration key and support for AES256 server-side encryption.
2016-06-18 14:28:38 +01:00
Nick Craig-Wood
d3dd672640 Document recursion requirements for Fses 2016-06-18 14:12:47 +01:00
Nick Craig-Wood
2a46be8cf3 b2: implement large file uploading - fixes #456 2016-06-18 13:38:05 +01:00
Nick Craig-Wood
1b4370bde1 Rework retry logic when copying objects
* Fix off by one retry logic - fixes #406
  * Retry any retriable errors
  * Restructure code
2016-06-18 10:55:58 +01:00
Nick Craig-Wood
cc6a776034 drive, acd: Tweak logging after changing Fs.Put so that it must cope with existing files 2016-06-18 10:54:42 +01:00
Nick Craig-Wood
2cfb3834f2 Log errors with %v 2016-06-18 09:36:47 +01:00
Nick Craig-Wood
46135d830e Add --ignore-size flag - fixes #399 2016-06-17 17:20:08 +01:00
Nick Craig-Wood
318e42e35b Add a section on quoting in the shell to the docs - fixes #473 2016-06-17 16:28:50 +01:00
Nick Craig-Wood
c7f04e24d3 Document that you can't repeat filter flags - fixes #506 2016-06-17 16:06:21 +01:00
Nick Craig-Wood
e4650eff58 drive: fix retry of multipart uploads - fixes #520
Reset the reader on retry otherwise it is empty when read again.
2016-06-15 21:48:30 +01:00
Nick Craig-Wood
869d91269d Debug cause of low level retries 2016-06-15 21:48:14 +01:00
Nick Craig-Wood
df1092ef33 Change Fs.Put so that it must cope with existing files
This should fix duplicate files on drive and 409 errors on
amazonclouddrive. However, it will slow down the upload slightly as
another roundtrip will be needed.

None of the other Fses needed adjusting.

Fixes #483
2016-06-13 19:29:10 +01:00
Nick Craig-Wood
4c5b2833b3 Convert to using github.com/pkg/errors everywhere 2016-06-13 17:43:03 +01:00
Nick Craig-Wood
7fe653c350 Unwrap errors properly for platform specific connection retry code.
Include more possible errors for Windows.

For #442
2016-06-10 13:48:41 +01:00
Nick Craig-Wood
661715733a Make sure we don't use conflicting content types on upload - fixes #513 2016-06-09 17:52:58 +01:00
Nick Craig-Wood
f17cb1bf50 Fix retry of Windows wsaend errors #442
Make the test for wsaend error less specific
2016-06-09 15:34:13 +01:00
Nick Craig-Wood
9ec06df79f Be explicit about which arch we support which fixes failure to build with new gox 2016-06-09 15:33:26 +01:00
Nick Craig-Wood
67d0375b98 Audit use of log.Print and change to Debug, Log, or ErrorLog as appropriate 2016-06-06 21:23:54 +01:00
Nick Craig-Wood
4882b8ba67 Tweak website footer 2016-06-06 21:23:22 +01:00
Nick Craig-Wood
108760e17b Log -v output to stdout by default - fixes #228 2016-06-04 18:49:27 +01:00
Nick Craig-Wood
f15e7e89d2 Add version string to debug startup message 2016-06-03 23:08:14 +01:00
Nick Craig-Wood
e2788aa729 Display the transfer stats in more human readable form - fixes #428 2016-06-03 22:49:50 +01:00
Nick Craig-Wood
772f99fd74 Make SizeSuffix output without b suffix for more useful printouts 2016-06-03 22:49:14 +01:00
Nick Craig-Wood
9bbcdeefd0 Start the logger earlier so all messages go there - fixes #486 2016-06-03 22:08:27 +01:00
Nick Craig-Wood
a21cc161de Make 0 size files specifiable with --max-size 0b - fixes #450 2016-06-03 21:54:27 +01:00
Nick Craig-Wood
e818b7c206 Represent -1 as "off" for SIZE values 2016-06-03 21:51:39 +01:00
Nick Craig-Wood
5723d788a4 Add b suffix so we can specify bytes in --bwlimit, --min-size etc
Fixes #449
2016-06-03 21:16:48 +01:00
Nick Craig-Wood
1d6698a754 Build tweaks - fixes #484
* disable CGO for static builds everywhere
  * override Version in release build script
  * don't output symbol table in release binaries
2016-06-03 20:34:19 +01:00
Nick Craig-Wood
1fce83b936 swift: add auth version parameter - fixes #407 2016-06-03 17:52:24 +01:00
Nick Craig-Wood
ccdd1ea6c4 Add --max-depth parameter
This will apply to ls/lsd/sync/copy etc

Fixes #412
Fixes #213
2016-06-03 17:05:39 +01:00
Nick Craig-Wood
348734584b Try OS X 10.11 to fix travis build 2016-05-30 20:32:35 +01:00
Nick Craig-Wood
c6a79ff72d Fix remaining places in listing where we were logging errors not returning them 2016-05-30 19:51:15 +01:00
Nick Craig-Wood
b6f1391da3 Fix new style directory listing on windows 2016-05-30 19:44:15 +01:00
Nick Craig-Wood
ce94c0e729 Update go versions in travis 2016-05-28 20:45:25 +01:00
Nick Craig-Wood
58befe280c Fix directory name normalisation on OS X 2016-05-28 20:23:37 +01:00
Nick Craig-Wood
4c0f4ccb65 Fix destination of Facebook share link - fixes #499 2016-05-28 17:27:25 +01:00
Nick Craig-Wood
085677d511 acd: Work around spurious 403 errors
Sometimes ACD gives this error on reauthentication

HTTP code 403: "403 Forbidden", reponse body: {"message":"Authorization header requires 'Credential' parameter. Authorization header requires 'Signature' parameter. Authorization header requires 'SignedHeaders' parameter. Authorization header requires existence of either a 'X-Amz-Date' or a 'Date' header. Authorization=Bearer"}

This code retries this error if it is received.
2016-05-28 16:49:26 +01:00
Nick Craig-Wood
0a922ad1dc acd: Reauth on 401 errors
Fixes #493
Fixes #501
2016-05-28 16:49:26 +01:00
Nick Craig-Wood
83c3bb2f1a Add Romain Lapray to contributors 2016-05-28 16:39:17 +01:00
rlapray
83087a45f0 Details about Hubic "default" folder 2016-05-28 16:36:47 +01:00
Nick Craig-Wood
cadf202107 Clarify filtering docs #489 2016-05-19 12:39:16 +01:00
Nick Craig-Wood
36700d36a7 Fix dropbox root directory listings 2016-05-16 17:54:59 +01:00
Nick Craig-Wood
ad85f6e413 Implement directory include filtering for efficiency
Fixes #395
2016-05-16 17:14:04 +01:00
Nick Craig-Wood
536526cc92 amazonclouddrive: Restart directory listings on error - fixes #475
Before this change rclone would retry only the page that was missing
from the directory listing.  However it turns out that on 429 errors
at least, that page is gone from the directory listing which results
in missing files in the list.  The workaround for this is to restart
the directory listing on any retryable errors.
2016-05-14 17:15:42 +01:00
Nick Craig-Wood
ac9c20b048 Make IsRetryError function 2016-05-14 17:11:19 +01:00
Nick Craig-Wood
2db35f0ce7 Dump out unexpected state in integration test 2016-05-07 21:19:26 +01:00
Nick Craig-Wood
dbfa7031d2 Factor Lister into own file, write tests and fix 2016-05-07 17:17:43 +01:00
Nick Craig-Wood
c2d0e86431 Add more tests for List() and fix resulting problems 2016-05-07 14:50:35 +01:00
Nick Craig-Wood
68ec6a9f5b Add a directory parameter to Fs.List() 2016-05-06 16:52:34 +01:00
Nick Craig-Wood
753b0717be Refactor the List and ListDir interface
Gives more accurate error propagation, control of depth of recursion
and short circuit recursion where possible.

Most of the heavy lifting is done in the "fs" package, making file
system implementations a bit simpler.

This commit contains some code originally by Klaus Post.

Fixes #316
2016-05-06 16:52:34 +01:00
Nick Craig-Wood
3bdad260b0 Fix typo (thanks Saverio Proto) 2016-05-06 14:09:12 +01:00
Nick Craig-Wood
d205dc23e9 Fix oddities using a file in the root - fixes #471
* Check return from NewFsObject which caused nil ptr deref
  * Correct root directory from "" to string(os.PathSeparator) in getDirFile
2016-05-06 13:52:50 +01:00
Nick Craig-Wood
bdd26d71b2 Clarify swift errors - fixes #460 2016-05-02 12:34:15 +01:00
Nick Craig-Wood
8b2f6faf18 Re-enable OS X in travis tests 2016-05-01 13:13:20 +01:00
Nick Craig-Wood
7c01bbddf8 Normalise path names for OSX local filesystem
Fixes #194 Fixes #451 Fixes #463
2016-05-01 13:13:20 +01:00
Nick Craig-Wood
1752ee3c8b Retry errors which indicate the connection closed prematurely.
See discussion in #442
2016-04-29 17:29:34 +01:00
Nick Craig-Wood
5c2d8ffe33 Retry only the failing tests in the integration tests 2016-04-26 10:20:07 +01:00
Nick Craig-Wood
7fecd5c8c6 Add Leigh Klotz to contributors 2016-04-22 21:12:45 +01:00
Leigh Klotz
19b7ff12ad Doc updates for password prompt changes 2016-04-22 21:11:36 +01:00
Nick Craig-Wood
b053234eb1 Add Fabian Ruff to contributors 2016-04-22 21:02:54 +01:00
Fabian Ruff
640d7bd365 Add domain option for openstack (v3 auth) 2016-04-22 21:00:54 +01:00
Nick Craig-Wood
8af68e779f Add Michal Witkowski to contributors 2016-04-22 20:09:16 +01:00
Nick Craig-Wood
3a1198cac5 gcs: Don't configure the oauth token if service_account_file is supplied 2016-04-22 20:07:10 +01:00
Michal Witkowski
022ab4516d Add service account support for GCS 2016-04-22 19:53:27 +01:00
Nick Craig-Wood
17aac9b15f Note certificates FAQ works on Solaris too 2016-04-22 11:53:56 +01:00
Klaus Post
6c0c9abd57 Use "password:" instead of "password>" prompt
Fixes #410
2016-04-21 19:39:46 +01:00
Nick Craig-Wood
70496c15e1 Add Jim Tittsler to contributors 2016-04-21 19:37:41 +01:00
Jim Tittsler
8b61e68bb7 Fix doc typos. 2016-04-20 11:50:28 +09:00
Nick Craig-Wood
bb75d80d33 Fix frontmatter 2016-04-18 18:55:07 +01:00
Nick Craig-Wood
157d7d45f5 Version v1.29 2016-04-18 18:30:29 +01:00
Nick Craig-Wood
b5cba73cc3 Make test more reliable 2016-04-18 17:48:52 +01:00
Nick Craig-Wood
dd36264aad Add FAQ All my uploaded docx/xlsx/pptx files appear as archive/zip
Fixes #417
2016-04-12 21:41:24 +01:00
Nick Craig-Wood
ddb47758f3 drive: increase default chunk size to 8 MB and document - fixes #397 2016-04-12 21:33:55 +01:00
Nick Craig-Wood
9539bbf78a Fix appveyor build after vet removal from tools repo 2016-04-07 20:07:00 +01:00
Nick Craig-Wood
0f8e7c3843 Make rclone check obey the --size-only flag - fixes #419 2016-04-07 15:01:45 +01:00
Nick Craig-Wood
b835330714 Use "application/octet-stream" if mime.TypeByExtension returns invalid type
Fixes #424
2016-04-07 14:32:01 +01:00
Nick Craig-Wood
310db14ed6 Notes on --transfers and B2 2016-04-04 17:58:36 +01:00
Klaus Post
7f2e9d9a6b Require go v1.5 for compilation
Google cloud package requires go v1.5 to compile, so we need to require the same for rclone.

Fixes #408
2016-04-04 17:34:39 +01:00
Nick Craig-Wood
6cc9c09610 drive: preserve mime type on file update - fixes #417 2016-04-04 16:58:42 +01:00
Nick Craig-Wood
93c60c34e1 b2: Fix incorrect value of Precision - should be 1ms not 1s 2016-03-24 15:23:27 +00:00
Klaus Post
02c11dd4a7 Don't de-reference swift connection
The connection object contains a mutex, so it is good practice not to dereference it to a value.

Reported by Go tip "go vet".
2016-03-23 17:09:05 +00:00
Klaus Post
40dc575aa4 Update Travis CI
- Only use golint if version is > Go 1.4
- Add Go 1.6 and tip as test targets.
2016-03-23 17:07:26 +00:00
Klaus Post
f8101771c9 Disable keepalive to keep server from serving stale results.
Fixes issue #402

Bonus fix: Fix "multiple header writes" warning when no code is received.
2016-03-23 16:57:56 +00:00
Klaus Post
8f4d6973fb Fix missing "quit" option when there are no remotes. 2016-03-23 16:57:56 +00:00
Nick Craig-Wood
ced3a4bc19 Implement -I, --ignore-times for unconditional upload - fixes #311 2016-03-22 17:02:27 +00:00
Nick Craig-Wood
cb22583212 b2: Enable mod time syncing - fixes #348 2016-03-22 15:56:44 +00:00
Nick Craig-Wood
414b35ea56 Change the interface of SetModTime to return an error - #348 2016-03-22 15:56:44 +00:00
Nick Craig-Wood
f469905d07 dropbox: Note 10,000 files limitation on purge - fixes #374 2016-03-22 14:46:43 +00:00
Nick Craig-Wood
20f4b2c91d b2: update API to new version - fixes #393
* Make reading mod time and SHA1 much more efficient
    * removes an HTTP transaction to increase speed
  * Reduce memory usage of the objects
2016-03-22 14:39:56 +00:00
Nick Craig-Wood
37543bd1d9 b2: Fix parsing of mod time when not in metadata
This fixes this error `Failed to parse mod time string "":
"src_last_modified_millis" not found in metadata`.
2016-03-22 10:26:37 +00:00
Nick Craig-Wood
0dc0052e93 Note that filters must use / not \ - #394 2016-03-19 17:40:54 +00:00
Nick Craig-Wood
bd27473762 swift: Don't return an MD5SUM for static large objects - #392
* rename isManifest to isDynamicLargeObject for clarity
2016-03-17 17:36:20 +00:00
Nick Craig-Wood
9dccf91da7 swift/hubic: document segmented object MD5SUM limitations - fixes #392 2016-03-16 17:39:44 +00:00
Nick Craig-Wood
a1323eb204 s3: Fix uploading files bigger than 50GB - fixes #386 2016-03-10 16:48:55 +00:00
Klaus Post
e57c4406f3 Add mutex to "warned" map.
Fixes #385
2016-03-10 15:51:56 +01:00
Nick Craig-Wood
fdd4b4ee22 drive: Add missing retries for Move and DirMove 2016-03-06 18:15:01 +00:00
Nick Craig-Wood
8ef551bf9c Make dedupe remove identical copies without asking and add non interactive mode - fixes #338
* Now removes identical copies without asking
  * Now obeys `--dry-run`
  * Implement `--dedupe-mode` for non interactive running
    * `--dedupe-mode interactive` - interactive the default.
    * `--dedupe-mode skip` - removes identical files then skips anything left.
    * `--dedupe-mode first` - removes identical files then keeps the first one.
    * `--dedupe-mode newest` - removes identical files then keeps the newest one.
    * `--dedupe-mode oldest` - removes identical files then keeps the oldest one.
    * `--dedupe-mode rename` - removes identical files then renames the rest to be different.
  * Add tests which will only run on Google Drive.
2016-03-06 18:15:01 +00:00
Nick Craig-Wood
2119fb4314 drive: tweak pacer to speed up directory listings and make more reliable 2016-03-06 18:15:01 +00:00
Nick Craig-Wood
0166544319 Add Attack constant to pacer 2016-03-05 20:29:05 +00:00
Nick Craig-Wood
874a64e5f6 A script to make a directory hierarchy for testing 2016-03-05 20:26:15 +00:00
Nick Craig-Wood
e0c03a11ab Commit missing docs changes and adjust RELEASE.md to make sure it doesn't happen again 2016-03-01 17:42:27 +00:00
Nick Craig-Wood
3c7f80f58f Version v1.28 2016-03-01 09:00:01 +00:00
Nick Craig-Wood
229ea3f86c Stop --update tests running on remotes which don't do mod time 2016-03-01 07:26:33 +00:00
Nick Craig-Wood
41eb386063 Reset password/config path in config tests to fix other tests 2016-02-29 21:43:37 +00:00
Nick Craig-Wood
dfc7cd97a3 Optionally disable gzip compression on downloads with --no-gzip-encoding - fixes #353 2016-02-29 19:48:54 +00:00
Nick Craig-Wood
280ac26464 Implement -u/--update so creation times can be used on all remotes - #226 2016-02-29 17:46:40 +00:00
Nick Craig-Wood
88cca8a6eb Simplify literals (after running gofmt -s over the code) 2016-02-29 16:57:23 +00:00
Nick Craig-Wood
9c263e3e2b Commit missing tests 2016-02-28 20:25:51 +00:00
Nick Craig-Wood
7d4e143dee Make it obvious that the client secrets are encrypted 2016-02-28 19:57:19 +00:00
Nick Craig-Wood
3343c1afa4 Don't make directories if --dry-run set - fixes #342 2016-02-28 19:56:50 +00:00
Nick Craig-Wood
b279df2e67 Drive: disable copy and move for google docs - fixes #332 2016-02-28 09:35:28 +00:00
Nick Craig-Wood
e6f340d245 swift: Fix uploading of chunked files with non-ASCII characters - fixes #350 2016-02-27 18:59:16 +00:00
Nick Craig-Wood
bfc66cceaa Update b2 docs after temp file changes 2016-02-27 16:32:40 +00:00
Nick Craig-Wood
1105b6bd94 Add Jakub Gedeon to contributors 2016-02-27 13:58:00 +00:00
Jakub Gedeon
694d390710 s3: Check if directory exists during Mkdir
If you don't have privs to create a bucket in S3 but it exists, don't
fail with an auth error, but detect that the mkdir was not needed and
return successfully.
2016-02-27 13:24:46 +00:00
Nick Craig-Wood
6b6b43402b b2: Use one upload URL per go routine
This fixes `more than one upload using auth token` errors.
2016-02-27 13:00:35 +00:00
Nick Craig-Wood
6f46270735 b2: Add pacing, retries and reauthentication - fixes #310 2016-02-27 12:04:45 +00:00
Nick Craig-Wood
ee5e34a19c b2: factor authorize account into its own method 2016-02-27 12:04:45 +00:00
Nick Craig-Wood
70902b4051 Make rest Set methods safe for concurrent calling 2016-02-27 12:04:45 +00:00
Nick Craig-Wood
f46304e8ae Update README from docs/content/about.md 2016-02-27 11:15:51 +00:00
Nick Craig-Wood
40252f0aa6 Make continuous integrations logs less noisy 2016-02-26 17:01:19 +00:00
Nick Craig-Wood
e7b9cc4705 Fix pacer tests 2016-02-26 16:59:52 +00:00
Nick Craig-Wood
867a26fe4f Implement --low-level-retries flag - fixes #266 2016-02-25 22:58:21 +00:00
Nick Craig-Wood
3890105cdc Add -run-only flag to run_all test 2016-02-25 22:05:57 +00:00
Nick Craig-Wood
d2219a800a Fix and document the move command - fixes #334
* Don't attempt to use server side Move unless they are on the same Fs
  * Fix move in the presence of filters
2016-02-25 20:05:34 +00:00
Nick Craig-Wood
ccb59480bd Add InActive method to Filter to detect when no filters are in use. 2016-02-25 19:58:00 +00:00
Nick Craig-Wood
b5c5209162 Fix redirecting stderr on unix-like OSes - fixes #363 2016-02-24 22:03:14 +00:00
Nick Craig-Wood
835b6761b7 Write about convmv in the docs for fixing non UTF-8 filesystems - fixes #300 2016-02-21 14:09:06 +00:00
Nick Craig-Wood
f30c836696 Note Linux version requirements for running rclone - fixes #346 2016-02-21 13:59:24 +00:00
Nick Craig-Wood
090ce00afc Clarify Dropbox docs on mod times - fixes #345 2016-02-21 13:52:00 +00:00
Nick Craig-Wood
377986d599 Update config walk throughs with new style choice menu 2016-02-21 13:40:16 +00:00
Nick Craig-Wood
95e4d837ef Make config chooser easier to understand 2016-02-21 13:40:16 +00:00
Nick Craig-Wood
e08e35984c Add help to remote chooser in rclone config - fixes #43 2016-02-21 13:40:16 +00:00
Nick Craig-Wood
a3b4c8a0f2 Add issue template for github 2016-02-21 10:32:44 +00:00
Nick Craig-Wood
700e47d6e2 Stub out ReadPassword on plan9 and solaris to fix compilation 2016-02-21 10:31:53 +00:00
Nick Craig-Wood
ea11f5ff3d Stop make beta remaking the docs 2016-02-21 10:29:48 +00:00
klauspost
758c7f2d84 Avoid b2 temporary file.
If the source can provide a SHA1 hash, we don't copy the input to a temporary file.

Fixes #358
2016-02-19 18:07:15 +00:00
klauspost
ef06371c93 Create separate interface for object information.
Take out read-only information about a Fs in a separate struct to limit access.

See discussion at #282.
2016-02-19 13:31:09 +00:00
Nick Craig-Wood
85a0f25b95 b2: Fix reading metadata for all files when using a subdir - fixes #356
Also fix some confusion with Metadata prefix/root.
2016-02-19 12:11:30 +00:00
klauspost
84b00b362f Change back to original goconfig package.
Add documentation for `--ask-password`.
2016-02-17 11:45:05 +01:00
klauspost
bfd7601cf9 Add configuration file encryption
See #317 for details.

Use `rclone config` to add/change/remove password.

Tests that load the default configuration will now fail with a better error message, and a switch is added that makes it possible to disable password prompts and fail instead.

Make it possible to use the "RCLONE_CONFIG_PASS" environment variable as password for configuration.
2016-02-16 16:32:05 +01:00
Nick Craig-Wood
4676a89963 Note that you may need curl --insecure when fetching root CA certificates 2016-02-16 14:55:26 +00:00
Nick Craig-Wood
8cd3c25b41 Amazon Cloud Drive: retry on 400, 401, 408, 504 and EOF errors - fixes #340 2016-02-16 14:45:22 +00:00
Nick Craig-Wood
5f97603684 Fix fetch test dependencies too. 2016-02-15 17:31:11 +00:00
Nick Craig-Wood
f1debd4701 Fetch test dependencies too. 2016-02-15 17:20:26 +00:00
Nick Craig-Wood
1cd0d9a1f2 Fix listing drive docs at root - fixes #336
* Remove full drive list code
    * it is slower and uses more data
    * having two directory listing routines is causing problems (including this one)
    * less code is more
  * Make sure we don't recurse into directories we don't own
  * Fix export extension handling and add tests
2016-02-15 16:46:43 +00:00
Nick Craig-Wood
a6320bbad3 Fix delete command to wait until all finished - fixes missing deletes.
This also could affect deletes at the end of the sync command.
2016-02-15 16:43:59 +00:00
Nick Craig-Wood
b1dd8e998b Yandex Disk: Use http.Client passed in for all operations - fixes logging. 2016-02-15 16:43:18 +00:00
Xavier Lucas
c2e8f06bfa Swift storageUrl overloading fixes #167 2016-02-09 22:17:13 +00:00
Nick Craig-Wood
08a8f7174a Add Brian Stengaard to contributors 2016-02-09 21:45:51 +00:00
Nick Craig-Wood
ce4c1d4f35 s3: Fix empty checks in auth 2016-02-09 17:19:33 +00:00
Nick Craig-Wood
a0b9bd527e Add both forms of env var to the docs 2016-02-09 17:19:13 +00:00
Brian Stengaard
ce05ef7110 Add IAM role and Env credentials
This will make the s3 provider authentication logic:

  - Configured credentials if both key and secret available
  - Anonymous if key and secret missing and env_auth not set
  - if env_auth is set to truthy (https://golang.org/pkg/strconv/#ParseBool)
    - AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables
    - IAM role credentials as fallback
2016-02-09 16:32:36 +00:00
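
The fallback order above can be expressed with the AWS SDK's credentials chain. The following Go sketch is only an illustration of that logic, using aws-sdk-go v1 package names; it is an assumption for clarity, not the actual rclone s3 backend code.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
	"github.com/aws/aws-sdk-go/aws/ec2metadata"
	"github.com/aws/aws-sdk-go/aws/session"
)

// pickCredentials mirrors the three cases above: explicit keys, anonymous,
// or env_auth (environment variables first, then the IAM role as fallback).
func pickCredentials(accessKey, secretKey string, envAuth bool) *credentials.Credentials {
	switch {
	case accessKey != "" && secretKey != "":
		return credentials.NewStaticCredentials(accessKey, secretKey, "")
	case !envAuth:
		return credentials.AnonymousCredentials
	default:
		return credentials.NewChainCredentials([]credentials.Provider{
			&credentials.EnvProvider{}, // AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
			&ec2rolecreds.EC2RoleProvider{Client: ec2metadata.New(session.New())},
		})
	}
}

func main() {
	creds := pickCredentials("", "", true)
	fmt.Println(creds != nil)
}
```
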
Werner Beroux
6a47d966a4 Update filtering documentation - fixes #306
Explains that filtering is done relative to the remote root.

Also removes a section that seems more about internal knowledge and is
more likely to confuse people. Instead, adds a section giving an
overview of how to perform filtering before going into details.
2016-02-09 16:25:19 +00:00
Nick Craig-Wood
85d99de26b Fix typo in error strings 2016-02-09 16:15:50 +00:00
Nick Craig-Wood
4a82251c62 Add man page to repository too (missed from #256) 2016-02-07 20:26:10 +00:00
Nick Craig-Wood
e62c0a58a7 Version 1.27 2016-01-31 17:50:13 +00:00
Nick Craig-Wood
1f3e48f18f Add manuals to repository - fixes #256 2016-01-31 16:34:30 +00:00
Nick Craig-Wood
bbbe11790b Update docs to make syncing from a directory more obvious - fixes #302 2016-01-31 16:27:19 +00:00
Nick Craig-Wood
13edf62824 Document rclone return codes - fixes #308 2016-01-31 16:15:25 +00:00
Nick Craig-Wood
558bc2e132 drive: Export Google documents - fixes #49
Rclone will download one format of a google doc. The choice of which
export format is controlled by the `--drive-formats` flag.
2016-01-31 16:10:43 +00:00
Nick Craig-Wood
0f73129ab7 dedupe command to deduplicate a remote. Useful with google drive - fixes #41 2016-01-31 16:09:42 +00:00
Nick Craig-Wood
1373efaa39 Delete command which does obey the filters - fixes #327 2016-01-31 16:06:04 +00:00
Nick Craig-Wood
5c37b777fc Make the --dry-run warnings into logs so they appear without the -v flag 2016-01-31 16:06:04 +00:00
Nick Craig-Wood
d4df3f2154 acd: Download files >= 9GB with their tempLink direct from s3
This fixes the problem downloading files > 10GB.

Fixes #204 Fixes #313
2016-01-30 18:08:44 +00:00
Nick Craig-Wood
8ae424c5a3 Emphasize testing sync with --dry-run and -v 2016-01-29 07:59:33 +00:00
Nick Craig-Wood
cae19df058 s3: URL escape CopySource
This fixes metadata update and copy for files with `+` in the name

Fixes #315
2016-01-27 17:39:33 +00:00
Nick Craig-Wood
8c211fc8df Warn the user about files with same name but different case
Relates to #107 & #119.
2016-01-26 16:57:09 +00:00
Nick Craig-Wood
74a71f7824 Add tests for --delete-before, --delete-during and --delete-after 2016-01-26 16:57:09 +00:00
Nick Craig-Wood
12b51c5eb8 Remove duplicate check for filter IncludeObject 2016-01-26 16:57:09 +00:00
klauspost
14069fd8e6 Implement --delete-before, --delete-during, --delete-after - fixes #252. 2016-01-26 16:57:09 +00:00
Nick Craig-Wood
cd62f41606 Reduce number of logs and show hash type where appropriate 2016-01-24 18:06:57 +00:00
Nick Craig-Wood
109d4ee490 Prefix all test remotes with rclone-test- and make names more pronounceable 2016-01-24 12:37:46 +00:00
Nick Craig-Wood
18ebec8276 Check remote is empty between integration tests 2016-01-24 12:37:19 +00:00
Nick Craig-Wood
c47b4f828f acd: Fix deadlock in directory traversal code 2016-01-24 11:20:55 +00:00
Nick Craig-Wood
c3a0c0c451 swift: Fix upload from unprivileged user - fixes #273 2016-01-23 20:32:53 +00:00
Nick Craig-Wood
6cb0de43ce Deprecate compiling with go1.3 2016-01-23 17:27:00 +00:00
Nick Craig-Wood
83f0d3e03d acd: remove 409 conflict from error codes we will retry
This should fix the very long pauses, or getting stuck, that people
have seen in uploads.
2016-01-23 17:02:09 +00:00
Nick Craig-Wood
eda4130703 Fix integration tests so they can be run independently and out of order - fixes #291
* Make all integration tests start with an empty remote
  * Add an -individual flag so this can be a different bucket/container/directory
  * Fix up tests after changing the hashers
  * Add sha1sum test
  * Make directory checking in tests sleep more to fix acd inconsistencies
  * Factor integration tests to make more maintainable
  * Ensure remote writes have a fstest.CheckItems() before use
    * this fixes eventual consistency on the directory listings later
  * Call fs.Stats.ResetCounters() before every fs.Sync()

Note that the tests shouldn't be run concurrently as fs.Config is global state.
2016-01-23 17:02:09 +00:00
Nick Craig-Wood
ccba859812 Test all available hashes for each remote 2016-01-23 09:10:36 +00:00
Nick Craig-Wood
de3cf5e8d7 Add -verbose flag to unit tests and add some more eventual consistency retries 2016-01-20 20:06:05 +00:00
Nick Craig-Wood
ce305321b6 amazon cloud drive: Fix "Next token is expired" - Fixes #289 Fixes #263
This should also fix the consequent "409 Conflict" name already exists errors.
2016-01-20 20:05:52 +00:00
Nick Craig-Wood
e6117e978e Add Werner Beroux to contributors 2016-01-20 16:33:28 +00:00
Werner Beroux
4b40898743 Update filtering.md
Clarify by removing the extension, which is confusing if you are not careful.
2016-01-20 16:16:24 +01:00
Nick Craig-Wood
ae3a0ec27e b2: Don't re-read the SHA1 if we already have it 2016-01-19 08:21:20 +00:00
Nick Craig-Wood
d9458fb4ee b2: return error in Hash from readFileMetadata operation 2016-01-19 08:21:10 +00:00
Nick Craig-Wood
27f67edb1a Fix formatting problem in sha1sum 2016-01-17 13:56:42 +00:00
Nick Craig-Wood
3ffea738e6 Make hash constants start from 1 not 2 2016-01-17 10:47:24 +00:00
Nick Craig-Wood
a63dd6020c onedrive: fix incorrectly decoded SHA-1 2016-01-17 10:46:36 +00:00
Nick Craig-Wood
d0678bc3e5 local: report error on stat in Hash in case file disappeared 2016-01-17 10:46:19 +00:00
klauspost
ce04a073ef Update templates to changes in the latest hugo version
Fixes #295
2016-01-16 14:11:52 +00:00
Nick Craig-Wood
c337a367f3 Make make serve fail if make website would fail 2016-01-16 14:10:57 +00:00
klauspost
7ae40cb352 Update information on revised hash functionality. 2016-01-16 10:17:11 +00:00
Nick Craig-Wood
e8daab7971 Fix integration tests for remotes with unsupported hash schemes 2016-01-16 09:45:15 +00:00
klauspost
78c3a5ccfa Add support for multiple hash types.
Add support for multiple hash types with negotiation of common hash types for comparison.

Manually rebased version of #277 (see discussion there)
2016-01-11 13:39:33 +01:00
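
A toy illustration of what "negotiation of common hash types" means in practice - pick the first hash type both remotes support, otherwise fall back to size/modtime comparison. The names here are placeholders, not rclone's actual hash API.

```go
package main

import "fmt"

// commonHash returns the first hash type supported by both remotes, or
// false if there is no overlap.
func commonHash(src, dst []string) (string, bool) {
	have := make(map[string]bool)
	for _, h := range src {
		have[h] = true
	}
	for _, h := range dst {
		if have[h] {
			return h, true
		}
	}
	return "", false
}

func main() {
	h, ok := commonHash([]string{"MD5", "SHA1"}, []string{"SHA1"})
	fmt.Println(h, ok) // SHA1 true
}
```
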
Nick Craig-Wood
2142c75846 Add missing docs for options - fixes #278 2016-01-10 12:04:20 +00:00
Nick Craig-Wood
c724d8f614 dropbox: Make file exclusion error controllable with -q #287 2016-01-10 11:49:04 +00:00
Nick Craig-Wood
af5f4ee724 Make --include rules add their implicit exclude * at the end of the filter list
This means you can mix `--include` and `--include-from` with the
other filters (eg `--exclude`) but you must include all the files you
want in the include statement.

Fixes #280
2016-01-10 11:42:53 +00:00
Nick Craig-Wood
01aa4394a6 Explain that errored sync doesn't delete files - fixes #285 2016-01-10 10:44:33 +00:00
Nick Craig-Wood
2646519712 Add --memprofile flag 2016-01-09 15:25:48 +00:00
Nick Craig-Wood
5b2efd563a Add Xavier Lucas to contributors 2016-01-08 08:32:52 +00:00
xlucas
e7b7432079 OVH Swift authentication endpoint 2016-01-08 08:30:13 +00:00
Nick Craig-Wood
ea2ef4443b Remove -verbose from errcheck 2016-01-08 08:20:04 +00:00
klauspost
25f22ec561 Add "--ignore-existing" flag.
Add option to completely ignore existing files and not consider them for transfer.

Fixes #274
2016-01-08 08:20:04 +00:00
Nick Craig-Wood
5189231a34 Tweaks to rclone authorize
* Document the headless / remote setup procedure
  * Move Config constants into fs
  * Parse arguments in main for Authorize
2016-01-07 20:31:23 +00:00
klauspost
bcbd30bb8a Add easier headless configuration.
This will allow setting up a remote with copy&paste of values to a headless machine. It will allow copy+pasting a token into the configuration.

Generating the token requires rclone to be on a machine with a proper browser. Custom client id and secrets are supported.

To test token generation, use `rclone auth "fs type"`.
2016-01-07 20:31:23 +00:00
Nick Craig-Wood
c245183101 Stop errcheck running for go < 1.5 2016-01-07 16:37:51 +00:00
klauspost
4ce2a84df0 Document workaround for ACD maximum file size.
Document workaround for ACD maximum file size and display a warning in verbose mode before upload starts.

Fixes #215.
2016-01-05 17:12:16 +00:00
klauspost
3c31d711b3 Add local file system option to disable UNC on Windows.
This will add an option to disable UNC conversion on Windows to deal with buggy file system implementations like EncFS.

Fixes #261
2016-01-05 17:08:11 +00:00
Nick Craig-Wood
3f5d8390ba Add Björn Harrtell to contributors 2016-01-05 17:05:31 +00:00
Björn Harrtell
78edafcaac drive: add --drive-auth-owner-only to only consider files owned by the user. 2016-01-05 17:02:04 +00:00
Nick Craig-Wood
1ce3673006 Add -clean flag to test_all.go to clean left over test directories 2016-01-03 21:49:26 +00:00
Nick Craig-Wood
3423de65fa Make canonical place for all fs in fs/all/all.go 2016-01-03 14:12:45 +00:00
Nick Craig-Wood
0c81439bc3 Fix upload_github target 2016-01-02 12:18:32 +00:00
Nick Craig-Wood
77fb8ac240 Version 1.26 2016-01-02 12:04:32 +00:00
Nick Craig-Wood
979dfb8cc6 Add Joseph Spurrier to contributors 2016-01-02 11:50:49 +00:00
Joseph Spurrier
fe0289f2f5 s3: Fix corrupting Content-Type on mod time update
This fixes an issue where updating the modification time resets the
content-type to the S3 default of binary/octet-stream which breaks
static websites that expect an html file to have a content-type of
text/html.
2016-01-02 11:47:52 +00:00
Nick Craig-Wood
6a64567dd7 Add Dmitry Burdeev (dibu) to contributors 2016-01-02 11:45:30 +00:00
Nick Craig-Wood
8de8cd62ca yandex: stop create folder error being fatal 2015-12-30 21:07:42 +00:00
Nick Craig-Wood
cba27d2920 yandex: correct precision to 1ns 2015-12-30 20:47:44 +00:00
Nick Craig-Wood
9ade179407 yandex: Fix socket leaks 2015-12-30 14:30:16 +00:00
Nick Craig-Wood
82b85431bd yandex: Make it use our http client so logging, bwlimit etc works properly 2015-12-30 14:30:16 +00:00
Nick Craig-Wood
98778b1870 Docs for Yandex 2015-12-30 14:30:16 +00:00
Nick Craig-Wood
dfd46c23f9 Fix forgotten update for test_all.go 2015-12-30 12:12:24 +00:00
dibu28
3ac4407b88 Implement Yandex storage backend - fixes #234 2015-12-30 12:11:46 +00:00
Nick Craig-Wood
8ea0d5212f Add -verbose flag to test_all and fix tries count 2015-12-30 11:34:22 +00:00
Nick Craig-Wood
acd350d833 Add retry for eventual consistency in findObject test 2015-12-30 10:46:04 +00:00
Nick Craig-Wood
2f4b9f619d Add C. Bess to contributors 2015-12-30 10:13:11 +00:00
C. Bess
70efd0274c Add Contributing link to readme 2015-12-30 10:10:53 +00:00
Nick Craig-Wood
33b3eea6ec Implement Backblaze B2 - fixes #224 2015-12-30 10:05:07 +00:00
Nick Craig-Wood
113624691a Add -dump-headers and -dump-bodies flags for operations test debugging 2015-12-30 09:35:35 +00:00
Nick Craig-Wood
afaec1a2e9 Use test logger instead of log for test output 2015-12-30 09:35:25 +00:00
Nick Craig-Wood
ddf39f2d57 Replace test_all.sh with test_all.go which is cross platform and parallel 2015-12-30 09:26:34 +00:00
Nick Craig-Wood
2df5d95d70 Documentation for --min-age and --max-age 2015-12-29 19:34:10 +00:00
Nick Craig-Wood
64a808ac76 Add CONTRIBUTING file 2015-12-29 19:23:20 +00:00
Nick Craig-Wood
05dc7183cb onedrive: Don't mask HTTP error codes with JSON decode error 2015-12-28 15:15:12 +00:00
Nick Craig-Wood
e69e181090 Fix --min-age and --max-age when only one is present 2015-12-17 14:22:43 +00:00
Nick Craig-Wood
a1269fa669 Make sure we use bash as our shell 2015-12-17 13:30:58 +00:00
Nick Craig-Wood
8369b5209f swift: Make sure we read the size for 0 length files - Fixes #237
This was causing a problem with sync for chunked files.  The directory
listing would read their size back as 0, see that the size had
changed, and immediately resync it.
2015-12-17 13:30:58 +00:00
Nick Craig-Wood
2aa3c0a2af make beta announces destination URL 2015-12-17 13:30:58 +00:00
Nick Craig-Wood
ac65d8369e Make fs.CheckClose public to stop duplication 2015-12-17 13:30:58 +00:00
Nick Craig-Wood
7a24532224 Factor REST library out of onedrive 2015-12-17 13:30:58 +00:00
Nick Craig-Wood
8057d668bb Fix crash in http logging - fixes #223
A nil-pointer exception was caused if the http transaction ever
resulted in a go error while using `--dump-bodies`.  Now don't ignore
the error and log it instead of the http body.
2015-12-17 13:30:58 +00:00
Nick Craig-Wood
36f1bc4a8a Make ls/lsl/md5sum/size/check obey includes and excludes - fixes #169
* run check directory listings concurrently
2015-12-17 13:30:58 +00:00
Nick Craig-Wood
beb8098b0a Ignore current builds when uploading to github 2015-12-17 13:28:12 +00:00
Nick Craig-Wood
6e64a71382 Add Adriano Aurélio Meirelles to contributors 2015-12-17 13:28:12 +00:00
Adriano Aurélio Meirelles
3cbd57d9ad Add support to filter files based on their age 2015-12-17 09:52:38 -02:00
Nick Craig-Wood
4f50b26af0 Add missing cloud storage systems 2015-11-23 22:19:50 +00:00
Nick Craig-Wood
cb651b5866 Upload releases to github too - fixes #225 2015-11-23 22:18:21 +00:00
Nick Craig-Wood
3c1069c815 onedrive: re-enable server side copy 2015-11-22 11:04:16 +00:00
Nick Craig-Wood
7f0020a407 Version v1.25 2015-11-14 13:06:39 +00:00
Nick Craig-Wood
c270c1c80c Increase retries for eventual consistency in tests 2015-11-14 12:57:17 +00:00
Nick Craig-Wood
29ecc2d8bb onedrive: disable server side copy as it seems to be broken 2015-11-14 12:11:38 +00:00
Nick Craig-Wood
13da1b8d28 Add docs for fs specific options - fixes #210 2015-11-14 11:38:35 +00:00
Nick Craig-Wood
0b338eaa28 Fix up sensitive vs insensitive in the docs and some formatting - fixes #214 2015-11-14 11:20:04 +00:00
Nick Craig-Wood
46696865fd Ignore golint errors that can't be fixed
Stop duplicating checkers in .travis.yml - use Makefile as definitive source
2015-11-14 10:08:52 +00:00
Nick Craig-Wood
fcea3777c0 Implement Hubic storage system - fixes #200 2015-11-14 08:08:52 +00:00
Nick Craig-Wood
5023050d95 Add RedirectLocalhostURL for another form of redirect URL 2015-11-14 08:08:51 +00:00
Nick Craig-Wood
bed01a303f Add UnWrapper interface and implement in LimitedFs 2015-11-14 08:08:51 +00:00
Nick Craig-Wood
2c2cb84ca7 Make it so optional interface Purge can fail so it can be wrapped 2015-11-14 08:08:51 +00:00
Nick Craig-Wood
e9dda25c60 Implement Move in limited fs 2015-11-14 08:08:51 +00:00
Nick Craig-Wood
80ffbade22 Fix deletion of some excluded files without --delete-excluded #205
This only happened if the destination file was present but the source
file was missing.
2015-11-12 11:46:04 +00:00
Nick Craig-Wood
7beb50caa7 Remove go tip for the moment since it seems to be broken 2015-11-11 18:18:04 +00:00
Nick Craig-Wood
e8ba43c479 swift: Use ContentType from Object to avoid lookups in listings - fixes #208 2015-11-11 17:19:57 +00:00
Nick Craig-Wood
dcd6bedc27 make beta to compile and upload a beta release 2015-11-11 17:00:08 +00:00
Nick Craig-Wood
5bb76cc35c Stop SetModTime losing metadata (eg X-Object-Manifest) - fixes #203 2015-11-11 17:00:08 +00:00
Nick Craig-Wood
3e68d485f2 Use svg for build status like the other badges 2015-11-08 17:46:19 +00:00
Nick Craig-Wood
1945f09d06 Drop back to testing with go 1.4.2 as it includes go vet 2015-11-08 10:52:35 +00:00
Nick Craig-Wood
2c66bdd6bb Remove Go 1.5-ism to make compilable by go 1.3 & 1.4 - fixes #201 2015-11-08 10:42:50 +00:00
Nick Craig-Wood
a4f3548bbf Remove OS X build until #194 is fixed and update go versions 2015-11-08 10:31:40 +00:00
Nick Craig-Wood
4276abc58b Version v1.24 2015-11-07 16:23:12 +00:00
Nick Craig-Wood
a795d93bc3 swift, s3, googlecloudstorage: Don't delete the container/bucket if fs wasn't at root - fixes #172 2015-11-07 15:32:40 +00:00
Nick Craig-Wood
5df04cb763 swift: ignore directory marker objects where appropriate - fixes #190
* When creating a LimitedFs
  * When calling List() to list files
  * In the Storable() method
  * Add a Purge() method to delete the directory marker objects too

This is a partial fix for #172
2015-11-07 15:32:11 +00:00
Nick Craig-Wood
ef54167a4a Add goimports check to make check 2015-11-07 12:16:33 +00:00
Nick Craig-Wood
d42cb11b84 Fix tests to run all tests again and add onedrive 2015-11-07 11:21:15 +00:00
Nick Craig-Wood
b257de4aba Be more consistent with naming in remotes
* External objects are called Fs and Object
  * Object.fs always points to the Fs
2015-11-07 11:14:46 +00:00
Nick Craig-Wood
365b4babae Make filter test files pass errcheck 2015-11-07 10:27:47 +00:00
Nick Craig-Wood
6d48dffa2f Add -dump-headers and -dump-bodies flags for remote tests 2015-11-07 10:27:47 +00:00
Nick Craig-Wood
8f2999b6af onedrive: implement Copy 2015-11-07 10:27:47 +00:00
Nick Craig-Wood
be6115fbfa Fix nil pointer exception on test failure 2015-11-07 10:19:10 +00:00
Nick Craig-Wood
2fcb8f5db7 Add support for Microsoft One Drive - fixes #10
* Still to do
    * Copy
    * Move
    * MoveDir
2015-11-07 10:19:10 +00:00
Nick Craig-Wood
0ab3f020ab Fix Amazon icon in the docs 2015-11-06 15:26:58 +00:00
Nick Craig-Wood
64c23c2f5b Update font awesome to 4.4.0 2015-11-06 15:26:58 +00:00
Nick Craig-Wood
ff16e0f6df Factor common error handling into fs module 2015-11-06 15:26:58 +00:00
Nick Craig-Wood
1a82ba196b dircache: expose FoundRoot flag 2015-11-06 15:26:58 +00:00
Nick Craig-Wood
ed72c678f8 Protect accounting from being closed twice 2015-11-06 15:26:58 +00:00
Nick Craig-Wood
4ed8836a71 oauthutil: add RedirectPublicURL 2015-11-06 15:26:58 +00:00
Nick Craig-Wood
5529978fa7 dircache: make separate mutex for the cache 2015-11-06 15:26:58 +00:00
Nick Craig-Wood
66d84c9914 Document where to install root certificates - fixes #196 2015-11-05 18:09:56 +00:00
klauspost
b85ddc4e4f Extend CI tests to include formatting checks.
CI tests now tests 'go vet', 'go fmt' (via goimports) and golint.

Adds Travis experimental OSX support.
2015-11-03 13:50:29 +01:00
klauspost
e4a9e27a55 Add proxy information to FAQ.
Fixes #160.
2015-11-02 20:19:50 +00:00
Klaus Post
22645eea2e Add AppVeyor Windows CI to tests
AppVeyor is free, and functions pretty much like Travis, only on Windows.
2015-11-02 20:11:06 +00:00
klauspost
345c98ed62 Update to AWS SDK 0.10.0
Tested with S3 and Dreamhost

Here is a link to the release notes:

http://aws.amazon.com/releasenotes/5476699172355228
2015-11-02 19:52:11 +00:00
klauspost
b872ff0237 Add option to disable server certificate verification.
The option name mirrors the 'wget' option (also `--no-check-certificate`). The cURL equivalent is called `--insecure`, which is a bit unclear.

Put in the "developers" section in documentation with proper warnings.

Fixes #168
2015-10-29 16:42:25 +01:00
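
For reference, disabling server certificate verification in Go comes down to setting InsecureSkipVerify on the TLS config of the HTTP transport. This is a generic standard-library illustration, not the rclone code path behind the flag.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

// insecureClient returns an *http.Client that skips server certificate
// verification. Use only when you understand the man-in-the-middle risk,
// as the documentation warns.
func insecureClient() *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
}

func main() {
	fmt.Println(insecureClient() != nil)
}
```
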
Nick Craig-Wood
1b95718460 Fix typos in filter docs and unit test assertions 2015-10-20 09:16:47 +01:00
klauspost
6a3580c556 Show status of master branch, so it doesn't show the status of the last pushed branch. 2015-10-19 18:56:03 +01:00
klauspost
16c9fba5de Fix tests failing on Windows.
* ":" is kept when part of a drive.
  * Create tests.
  * Fix test framework.
2015-10-19 17:36:15 +01:00
Nick Craig-Wood
4e952af614 Allow spaces in remotes and check remote names for validity at creation time - fixes #171 2015-10-12 17:54:09 +01:00
Klaus Post
6344c3051c Add async readahead buffer
This adds an async read buffer of 4x4MB when copying files >10MB.

This fixes #164 and reduces the number of IO operations for copy/move.
2015-10-12 08:30:27 +01:00
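
The async readahead idea - a goroutine reads fixed-size chunks ahead of the consumer and hands them over a buffered channel - can be sketched as below. This is a generic illustration of the technique, not rclone's actual buffer implementation; the 4 x 4MB sizing simply mirrors the commit message.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

type chunk struct {
	data []byte
	err  error
}

// newReadahead wraps r and prefetches up to "buffers" chunks of chunkSize
// bytes in a background goroutine.
func newReadahead(r io.Reader, chunkSize, buffers int) io.Reader {
	ch := make(chan chunk, buffers)
	go func() {
		defer close(ch)
		for {
			buf := make([]byte, chunkSize)
			n, err := io.ReadFull(r, buf)
			if n > 0 {
				ch <- chunk{data: buf[:n]}
			}
			if err != nil {
				if err != io.EOF && err != io.ErrUnexpectedEOF {
					ch <- chunk{err: err}
				}
				return
			}
		}
	}()
	return &readahead{ch: ch}
}

type readahead struct {
	ch  chan chunk
	cur []byte
	err error
}

func (ra *readahead) Read(p []byte) (int, error) {
	for len(ra.cur) == 0 {
		if ra.err != nil {
			return 0, ra.err
		}
		c, ok := <-ra.ch
		if !ok {
			ra.err = io.EOF
			return 0, io.EOF
		}
		if c.err != nil {
			ra.err = c.err
			return 0, c.err
		}
		ra.cur = c.data
	}
	n := copy(p, ra.cur)
	ra.cur = ra.cur[n:]
	return n, nil
}

func main() {
	src := bytes.NewReader(bytes.Repeat([]byte("x"), 10<<20)) // 10MB of data
	out, err := io.ReadAll(newReadahead(src, 4<<20, 4))       // 4 buffers of 4MB
	fmt.Println(len(out), err)
}
```
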
klauspost
ab9f521cbd Allow '&' and disallow ':' in Windows filenames.
Fixes #161
2015-10-05 11:04:25 +02:00
242 changed files with 46567 additions and 5226 deletions

.gitignore: 4 lines changed

@@ -4,7 +4,3 @@ rclone
 rclonetest/rclonetest
 build
 docs/public
-MANUAL.md
-MANUAL.html
-MANUAL.txt
-rclone.1

.travis.yml

@@ -1,12 +1,21 @@
language: go
sudo: false
osx_image: xcode7.3
os:
- linux
- osx
go:
- 1.3.3
- 1.4.2
- 1.5
- 1.5.4
- 1.6.3
- 1.7
# - tip
install:
- make build_dep
script:
- go get ./...
- go test -v ./...
- go test -cpu=2 -race -v ./...
- make check
- make quicktest

CONTRIBUTING.md: new file, 162 lines

@@ -0,0 +1,162 @@
# Contributing to rclone #
This is a short guide on how to contribute things to rclone.
## Reporting a bug ##
Bug reports are welcome. Check your issue exists with the latest
version first. Please add when submitting:
* Rclone version (eg output from `rclone -V`)
* Which OS you are using and how many bits (eg Windows 7, 64 bit)
* The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
* A log of the command with the `-v` flag (eg output from `rclone -v copy /tmp remote:tmp`)
* if the log contains secrets then edit the file with a text editor first to obscure them
## Submitting a pull request ##
If you find a bug that you'd like to fix, or a new feature that you'd
like to implement then please submit a pull request via Github.
If it is a big feature then make an issue first so it can be discussed.
You'll need a Go environment set up with GOPATH set. See [the Go
getting started docs](https://golang.org/doc/install) for more info.
First in your web browser press the fork button on [rclone's Github
page](https://github.com/ncw/rclone).
Now in your terminal
go get github.com/ncw/rclone
cd $GOPATH/src/github.com/ncw/rclone
git remote rename origin upstream
git remote add origin git@github.com:YOURUSER/rclone.git
Make a branch to add your new feature
git checkout -b my-new-feature
And get hacking.
When ready - run the unit tests for the code you changed
go test -v
Note that you may need to make a test remote, eg `TestSwift` for some
of the unit tests.
Note the top level Makefile targets
* make check
* make test
Both of these will be run by Travis when you make a pull request but
you can do this yourself locally too.
Make sure you
* Add documentation for a new feature
* Add unit tests for a new feature
* squash commits down to one per feature
* rebase to master `git rebase master`
When you are done with that
git push origin my-new-feature
Go to the Github website and click [Create pull
request](https://help.github.com/articles/creating-a-pull-request/).
Your patch will get reviewed and you might get asked to fix some stuff.
If so, then make the changes in the same branch, squash the commits,
rebase it to master then push it to Github with `--force`.
## Testing ##
rclone's tests are run from the go testing framework, so at the top
level you can run this to run all the tests.
go test -v ./...
rclone contains a mixture of unit tests and integration tests.
Because it is difficult (and in some respects pointless) to test cloud
storage systems by mocking all their interfaces, rclone unit tests can
run against any of the backends. This is done by making specially
named remotes in the default config file.
If you wanted to test changes in the `drive` backend, then you would
need to make a remote called `TestDrive`.
You can then run the unit tests in the drive directory. These tests
are skipped if `TestDrive:` isn't defined.
cd drive
go test -v
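
For concreteness, each backend's generated unit-test file simply wires that backend into the shared `fstest/fstests` suite and names the remote it expects. The sketch below is modelled on the amazonclouddrive test file further down this diff; the `drive` package, the `drive.Object` type and the `TestDrive:` remote name are stand-ins for whichever backend you are testing:

```go
package drive_test

import (
	"testing"

	"github.com/ncw/rclone/drive"
	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/fstest/fstests"
)

// TestSetup points the shared suite at this backend and its test remote.
// The generic tests skip themselves if the TestDrive: remote isn't configured.
func TestSetup(t *testing.T) {
	fstests.NilObject = fs.Object((*drive.Object)(nil)) // substitute the backend's object type
	fstests.RemoteName = "TestDrive:"
}

// The generated tests just delegate to the generic implementations.
func TestInit(t *testing.T)     { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsMkdir(t *testing.T)  { fstests.TestFsMkdir(t) }
```
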
You can then run the integration tests, which test all of rclone's
operations. Normally these get run against the local filing system,
but they can be run against any of the remotes.
cd ../fs
go test -v -remote TestDrive:
go test -v -remote TestDrive: -subdir
If you want to run all the integration tests against all the remotes,
then run in that directory
go run test_all.go
## Making a release ##
There are separate instructions for making a release in the RELEASE.md
file - doing the first few steps is useful before making a
contribution.
* go get -u -f -v ./...
* make check
* make test
* make tag
## Writing a new backend ##
Choose a name. The docs here will use `remote` as an example.
Note that in rclone terminology a file system backend is called a
remote or an fs.
Research
* Look at the interfaces defined in `fs/fs.go`
* Study one or more of the existing remotes
Getting going
* Create `remote/remote.go` (copy this from a similar fs)
* Add your fs to the imports in `fs/all/all.go`
Unit tests
* Create a config entry called `TestRemote` for the unit tests to use
* Add your fs to the end of `fstest/fstests/gen_tests.go`
* Generate the `remote/remote_test.go` unit tests with `cd fstest/fstests; go generate`
* Make sure all tests pass with `go test -v`
Integration tests
* Add your fs to `fs/test_all.go`
* Make sure integration tests pass with
* `cd fs`
* `go test -v -remote TestRemote:` and
* `go test -v -remote TestRemote: -subdir`
Add your fs to the docs
* `README.md` - main Github page
* `docs/content/remote.md` - main docs page
* `docs/content/overview.md` - overview docs
* `docs/content/docs.md` - list of remotes in config section
* `docs/content/about.md` - front page of rclone.org
* `docs/layouts/chrome/navbar.html` - add it to the website navigation
* `make_manual.py` - add the page to the `docs` constant
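
To make the registration step above concrete, here is a minimal, purely hypothetical sketch of the boilerplate at the top of `remote/remote.go`. The `fs.RegInfo` fields mirror the amazonclouddrive registration shown later in this diff; `NewFs` is left as a stub that a real backend would fill in:

```go
// Package remote is a hypothetical skeleton backend used only to
// illustrate the registration boilerplate described above.
package remote

import (
	"errors"

	"github.com/ncw/rclone/fs"
)

// Register the backend with rclone so it shows up in `rclone config`.
func init() {
	fs.Register(&fs.RegInfo{
		Name:        "remote",
		Description: "Example Remote Storage",
		NewFs:       NewFs,
		Options: []fs.Option{{
			Name: fs.ConfigClientID,
			Help: "Client Id - leave blank normally.",
		}, {
			Name: fs.ConfigClientSecret,
			Help: "Client Secret - leave blank normally.",
		}},
	})
}

// NewFs constructs an fs.Fs for the remote at "name:root".
//
// A real backend would authenticate here and return a type that
// implements the fs.Fs interface from fs/fs.go (plus fs.Object for
// its objects).
func NewFs(name, root string) (fs.Fs, error) {
	return nil, errors.New("remote: backend not implemented yet")
}
```
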

ISSUE_TEMPLATE.md (new file, 14 lines)

@@ -0,0 +1,14 @@
When filing an issue, please include the following information if
possible, as well as a description of the problem. Make sure you are
using the [latest version of rclone](http://rclone.org/downloads/).
> What is your rclone version (eg output from `rclone -V`)
> Which OS you are using and how many bits (eg Windows 7, 64 bit)
> Which cloud storage system are you using? (eg Google Drive)
> The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
> A log from the command with the `-v` flag (eg output from `rclone -v copy /tmp remote:tmp`)

MANUAL.html (new file, 3169 lines): diff suppressed because it is too large

MANUAL.md (new file, 4675 lines): diff suppressed because it is too large

MANUAL.txt (new file, 4625 lines): diff suppressed because it is too large

Makefile

@@ -1,3 +1,4 @@
SHELL = /bin/bash
TAG := $(shell git describe --tags)
LAST_TAG := $(shell git describe --tags --abbrev=0)
NEW_TAG := $(shell echo $(LAST_TAG) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f", $$_)')
@@ -6,21 +7,40 @@ rclone:
@go version
go install -v ./...
# Full suite of integration tests
test: rclone
go test ./...
cd fs && ./test_all.sh
cd fs && go run test_all.go
# Quick test
quicktest:
go test ./...
go test -cpu=2 -race ./...
# Do source code quality checks
check: rclone
go vet ./...
errcheck ./...
golint ./...
goimports -d . | grep . ; test $$? -eq 1
golint ./... | grep -E -v '(StorageUrl|CdnUrl)' ; test $$? -eq 1
# Get the build dependencies
build_dep:
go get -t ./...
go get -u github.com/kisielk/errcheck
go get -u golang.org/x/tools/cmd/goimports
go get -u github.com/golang/lint/golint
# Update dependencies
update:
go get -t -u -f -v ./...
doc: rclone.1 MANUAL.html MANUAL.txt
rclone.1: MANUAL.md
pandoc -s --from markdown --to man MANUAL.md -o rclone.1
MANUAL.md: make_manual.py docs/content/*.md
MANUAL.md: make_manual.py docs/content/*.md commanddocs
./make_manual.py
MANUAL.html: MANUAL.md
@@ -29,6 +49,9 @@ MANUAL.html: MANUAL.md
MANUAL.txt: MANUAL.md
pandoc -s --from markdown --to plain MANUAL.md -o MANUAL.txt
commanddocs: rclone
rclone gendocs docs/content/commands/
install: rclone
install -d ${DESTDIR}/usr/bin
install -t ${DESTDIR}/usr/bin ${GOPATH}/bin/rclone
@@ -37,7 +60,7 @@ clean:
go clean ./...
find . -name \*~ | xargs -r rm -f
rm -rf build docs/public
rm -f rclone rclonetest/rclonetest rclone.1 MANUAL.md MANUAL.html MANUAL.txt
rm -f rclone rclonetest/rclonetest
website:
cd docs && hugo
@@ -48,16 +71,25 @@ upload_website: website
upload:
rclone -v copy build/ memstore:downloads-rclone-org
upload_github:
./upload-github $(TAG)
cross: doc
./cross-compile $(TAG)
serve:
beta:
./cross-compile $(TAG)β
rm build/*-current-*
rclone -v copy build/ memstore:pub-rclone-org/$(TAG)β
@echo Beta release ready at http://pub.rclone.org/$(TAG)%CE%B2/
serve: website
cd docs && hugo server -v -w
tag:
tag: doc
@echo "Old tag is $(LAST_TAG)"
@echo "New tag is $(NEW_TAG)"
echo -e "package fs\n\n// Version of rclone\nconst Version = \"$(NEW_TAG)\"\n" | gofmt > fs/version.go
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(NEW_TAG)-DEV\"\n" | gofmt > fs/version.go
perl -lpe 's/VERSION/${NEW_TAG}/g; s/DATE/'`date -I`'/g;' docs/content/downloads.md.in > docs/content/downloads.md
git tag $(NEW_TAG)
@echo "Add this to changelog in docs/content/changelog.md"

README.md

@@ -2,11 +2,13 @@
[Website](http://rclone.org) |
[Documentation](http://rclone.org/docs/) |
[Contributing](CONTRIBUTING.md) |
[Changelog](http://rclone.org/changelog/) |
[Installation](http://rclone.org/install/) |
[G+](https://google.com/+RcloneOrg)
[![Build Status](https://travis-ci.org/ncw/rclone.png)](https://travis-ci.org/ncw/rclone) [![GoDoc](https://godoc.org/github.com/ncw/rclone?status.svg)](https://godoc.org/github.com/ncw/rclone)
[![Build Status](https://travis-ci.org/ncw/rclone.svg?branch=master)](https://travis-ci.org/ncw/rclone) [![Windows Build Status](https://ci.appveyor.com/api/projects/status/github/ncw/rclone?branch=master&passingText=windows%20-%20ok&svg=true)](https://ci.appveyor.com/project/ncw/rclone) [![GoDoc](https://godoc.org/github.com/ncw/rclone?status.svg)](https://godoc.org/github.com/ncw/rclone)
Rclone is a command line program to sync files and directories to and from
@@ -15,18 +17,22 @@ Rclone is a command line program to sync files and directories to and from
* Openstack Swift / Rackspace cloud files / Memset Memstore
* Dropbox
* Google Cloud Storage
* Amazon Cloud Drive
* Amazon Drive
* Microsoft One Drive
* Hubic
* Backblaze B2
* Yandex Disk
* The local filesystem
Features
* MD5SUMs checked at all times for file integrity
* MD5/SHA1 hashes checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* Copy mode to just copy new/changed files
* Sync mode to make a directory identical
* Check mode to check all MD5SUMs
* Can sync to and from network, eg two different Drive accounts
* Sync (one way) mode to make a directory identical
* Check mode to check for file hash equality
* Can sync to and from network, eg two different cloud accounts
See the home page for installation, usage, documentation, changelog
and configuration walkthroughs.

RELEASE.md

@@ -8,11 +8,12 @@ Required software for making a release
* golint - go get github.com/golang/lint
Making a release
* go get -u -f -v ./...
* make update
* make check
* make test
* make tag
* edit docs/content/changelog.md
* make doc
* git commit -a -v
* make retag
* # Set the GOPATH for a gox enabled compiler - . ~/bin/go-cross - not required for go >= 1.5
@@ -20,3 +21,4 @@ Making a release
* make upload
* make upload_website
* git push --tags origin master
* make upload_github

amazonclouddrive/amazonclouddrive.go

@@ -16,12 +16,10 @@ import (
"fmt"
"io"
"log"
"net"
"net/http"
"net/url"
"regexp"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/ncw/go-acd"
@@ -29,22 +27,28 @@ import (
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/oauthutil"
"github.com/ncw/rclone/pacer"
"github.com/pkg/errors"
"github.com/spf13/pflag"
"golang.org/x/oauth2"
)
const (
rcloneClientID = "amzn1.application-oa2-client.6bf18d2d1f5b485c94c8988bb03ad0e7"
rcloneClientSecret = "k8/NyszKm5vEkZXAwsbGkd6C3NrbjIqMg4qEhIeF14Szub2wur+/teS3ubXgsLe9//+tr/qoqK+lq6mg8vWkoA=="
folderKind = "FOLDER"
fileKind = "FILE"
assetKind = "ASSET"
statusAvailable = "AVAILABLE"
timeFormat = time.RFC3339 // 2014-03-07T22:31:12.173Z
minSleep = 20 * time.Millisecond
rcloneClientID = "amzn1.application-oa2-client.6bf18d2d1f5b485c94c8988bb03ad0e7"
rcloneEncryptedClientSecret = "ZP12wYlGw198FtmqfOxyNAGXU3fwVcQdmt--ba1d00wJnUs0LOzvVyXVDbqhbcUqnr5Vd1QejwWmiv1Ep7UJG1kUQeuBP5n9goXWd5MrAf0"
folderKind = "FOLDER"
fileKind = "FILE"
assetKind = "ASSET"
statusAvailable = "AVAILABLE"
timeFormat = time.RFC3339 // 2014-03-07T22:31:12.173Z
minSleep = 20 * time.Millisecond
warnFileSize = 50 << 30 // Display warning for files larger than this size
)
// Globals
var (
// Flags
tempLinkThreshold = fs.SizeSuffix(9 << 30) // Download files bigger than this via the tempLink
uploadWaitTime = pflag.DurationP("acd-upload-wait-time", "", 2*60*time.Second, "Time to wait after a failed complete upload to see if it appears.")
// Description of how to auth for this app
acdConfig = &oauth2.Config{
Scopes: []string{"clouddrive:read_all", "clouddrive:write"},
@@ -53,46 +57,51 @@ var (
TokenURL: "https://api.amazon.com/auth/o2/token",
},
ClientID: rcloneClientID,
ClientSecret: fs.Reveal(rcloneClientSecret),
ClientSecret: fs.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL,
}
)
// Register with Fs
func init() {
fs.Register(&fs.Info{
Name: "amazon cloud drive",
NewFs: NewFs,
fs.Register(&fs.RegInfo{
Name: "amazon cloud drive",
Description: "Amazon Drive",
NewFs: NewFs,
Config: func(name string) {
err := oauthutil.Config(name, acdConfig)
err := oauthutil.Config("amazon cloud drive", name, acdConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: oauthutil.ConfigClientID,
Name: fs.ConfigClientID,
Help: "Amazon Application Client Id - leave blank normally.",
}, {
Name: oauthutil.ConfigClientSecret,
Name: fs.ConfigClientSecret,
Help: "Amazon Application Client Secret - leave blank normally.",
}},
})
pflag.VarP(&tempLinkThreshold, "acd-templink-threshold", "", "Files >= this size will be downloaded via their tempLink.")
}
// FsAcd represents a remote acd server
type FsAcd struct {
name string // name of this remote
c *acd.Client // the connection to the acd server
root string // the path we are working on
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // pacer for API calls
// Fs represents a remote acd server
type Fs struct {
name string // name of this remote
c *acd.Client // the connection to the acd server
noAuthClient *http.Client // unauthenticated http client
root string // the path we are working on
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // pacer for API calls
ts *oauthutil.TokenSource // token source for oauth
uploads int32 // number of uploads in progress - atomic access required
}
// FsObjectAcd describes a acd object
// Object describes an acd object
//
// Will definitely have info but maybe not meta
type FsObjectAcd struct {
acd *FsAcd // what this object is part of
type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
info *acd.Node // Info from the acd object if known
}
@@ -100,18 +109,18 @@ type FsObjectAcd struct {
// ------------------------------------------------------------
// Name of the remote (as passed into NewFs)
func (f *FsAcd) Name() string {
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *FsAcd) Root() string {
func (f *Fs) Root() string {
return f.root
}
// String converts this FsAcd to a string
func (f *FsAcd) String() string {
return fmt.Sprintf("Amazon cloud drive root '%s'", f.root)
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("amazon drive root '%s'", f.root)
}
// Pattern to match an acd path
@@ -125,76 +134,73 @@ func parsePath(path string) (root string) {
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
400, // Bad request (seen in "Next token is expired")
401, // Unauthorized (seen in "Token has expired")
408, // Request Timeout
429, // Rate exceeded.
500, // Get occasional 500 Internal Server Error
409, // Conflict - happens in the unit tests a lot
503, // Service Unavailable
504, // Gateway Time-out
}
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
// See if HTTP error code is to be retried
if err != nil && resp != nil {
for _, e := range retryErrorCodes {
if resp.StatusCode == e {
return true, err
}
}
}
// Allow retry if request times out. Adapted from
// http://stackoverflow.com/questions/23494950/specifically-check-for-timeout-error
switch err := err.(type) {
case *url.Error:
if err, ok := err.Err.(net.Error); ok && err.Timeout() {
func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) {
if resp != nil {
if resp.StatusCode == 401 {
f.ts.Invalidate()
fs.Log(f, "401 error received - invalidating token")
return true, err
}
case net.Error:
if err.Timeout() {
// Work around receiving this error sporadically on authentication
//
// HTTP code 403: "403 Forbidden", response body: {"message":"Authorization header requires 'Credential' parameter. Authorization header requires 'Signature' parameter. Authorization header requires 'SignedHeaders' parameter. Authorization header requires existence of either a 'X-Amz-Date' or a 'Date' header. Authorization=Bearer"}
if resp.StatusCode == 403 && strings.Contains(err.Error(), "Authorization header requires") {
fs.Log(f, "403 \"Authorization header requires...\" error received - retry")
return true, err
}
}
return false, err
return fs.ShouldRetry(err) || fs.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// NewFs constructs an FsAcd from the path, container:path
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string) (fs.Fs, error) {
root = parsePath(root)
oAuthClient, err := oauthutil.NewClient(name, acdConfig)
oAuthClient, ts, err := oauthutil.NewClient(name, acdConfig)
if err != nil {
log.Fatalf("Failed to configure amazon cloud drive: %v", err)
log.Fatalf("Failed to configure Amazon Drive: %v", err)
}
c := acd.NewClient(oAuthClient)
c.UserAgent = fs.UserAgent
f := &FsAcd{
name: name,
root: root,
c: c,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.AmazonCloudDrivePacer),
f := &Fs{
name: name,
root: root,
c: c,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.AmazonCloudDrivePacer),
noAuthClient: fs.Config.Client(),
ts: ts,
}
// Update endpoints
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
_, resp, err = f.c.Account.GetEndpoints()
return shouldRetry(resp, err)
return f.shouldRetry(resp, err)
})
if err != nil {
return nil, fmt.Errorf("Failed to get endpoints: %v", err)
return nil, errors.Wrap(err, "failed to get endpoints")
}
// Get rootID
var rootInfo *acd.Folder
err = f.pacer.Call(func() (bool, error) {
rootInfo, resp, err = f.c.Nodes.GetRoot()
return shouldRetry(resp, err)
})
rootInfo, err := f.getRootInfo()
if err != nil || rootInfo.Id == nil {
return nil, fmt.Errorf("Failed to get root: %v", err)
return nil, errors.Wrap(err, "failed to get root")
}
// Renew the token in the background
go f.renewToken()
f.dirCache = dircache.New(root, *rootInfo.Id, f)
// Find the current root
@@ -211,23 +217,69 @@ func NewFs(name, root string) (fs.Fs, error) {
// No root so return old f
return f, nil
}
obj := newF.newFsObjectWithInfo(remote, nil)
if obj == nil {
// File doesn't exist so return old f
return f, nil
_, err := newF.newObjectWithInfo(remote, nil)
if err != nil {
if err == fs.ErrorObjectNotFound {
// File doesn't exist so return old f
return f, nil
}
return nil, err
}
// return a Fs Limited to this object
return fs.NewLimited(&newF, obj), nil
// return an error with an fs which points to the parent
return &newF, fs.ErrorIsFile
}
return f, nil
}
// Return an FsObject from a path
// getRootInfo gets the root folder info
func (f *Fs) getRootInfo() (rootInfo *acd.Folder, err error) {
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
rootInfo, resp, err = f.c.Nodes.GetRoot()
return f.shouldRetry(resp, err)
})
return rootInfo, err
}
// Renew the token - runs in the background
//
// May return nil if an error occurred
func (f *FsAcd) newFsObjectWithInfo(remote string, info *acd.Node) fs.Object {
o := &FsObjectAcd{
acd: f,
// Renews the token whenever it expires. Useful when there are lots
// of uploads in progress and the token doesn't get renewed. Amazon
// seem to cancel your uploads if you don't renew your token for 2hrs.
func (f *Fs) renewToken() {
expiry := f.ts.OnExpiry()
for {
<-expiry
uploads := atomic.LoadInt32(&f.uploads)
if uploads != 0 {
fs.Debug(f, "Token expired - %d uploads in progress - refreshing", uploads)
// Do a transaction
_, err := f.getRootInfo()
if err == nil {
fs.Debug(f, "Token refresh successful")
} else {
fs.ErrorLog(f, "Token refresh failed: %v", err)
}
} else {
fs.Debug(f, "Token expired but no uploads in progress - doing nothing")
}
}
}
func (f *Fs) startUpload() {
atomic.AddInt32(&f.uploads, 1)
}
func (f *Fs) stopUpload() {
atomic.AddInt32(&f.uploads, -1)
}
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(remote string, info *acd.Node) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
}
if info != nil {
@@ -236,29 +288,27 @@ func (f *FsAcd) newFsObjectWithInfo(remote string, info *acd.Node) fs.Object {
} else {
err := o.readMetaData() // reads info and meta, returning an error
if err != nil {
// logged already FsDebug("Failed to read info: %s", err)
return nil
return nil, err
}
}
return o
return o, nil
}
// NewFsObject returns an FsObject from a path
//
// May return nil if an error occurred
func (f *FsAcd) NewFsObject(remote string) fs.Object {
return f.newFsObjectWithInfo(remote, nil)
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
}
// FindLeaf finds a directory of name leaf in the folder with ID pathID
func (f *FsAcd) FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err error) {
func (f *Fs) FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err error) {
//fs.Debug(f, "FindLeaf(%q, %q)", pathID, leaf)
folder := acd.FolderFromId(pathID, f.c.Nodes)
var resp *http.Response
var subFolder *acd.Folder
err = f.pacer.Call(func() (bool, error) {
subFolder, resp, err = folder.GetFolder(leaf)
return shouldRetry(resp, err)
return f.shouldRetry(resp, err)
})
if err != nil {
if err == acd.ErrorNodeNotFound {
@@ -269,7 +319,7 @@ func (f *FsAcd) FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err
return "", false, err
}
if subFolder.Status != nil && *subFolder.Status != statusAvailable {
fs.Debug(f, "Ignoring folder %q in state %q", *subFolder.Status)
fs.Debug(f, "Ignoring folder %q in state %q", leaf, *subFolder.Status)
time.Sleep(1 * time.Second) // FIXME wait for problem to go away!
return "", false, nil
}
@@ -278,14 +328,14 @@ func (f *FsAcd) FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err
}
// CreateDir makes a directory with pathID as parent and name leaf
func (f *FsAcd) CreateDir(pathID, leaf string) (newID string, err error) {
func (f *Fs) CreateDir(pathID, leaf string) (newID string, err error) {
//fmt.Printf("CreateDir(%q, %q)\n", pathID, leaf)
folder := acd.FolderFromId(pathID, f.c.Nodes)
var resp *http.Response
var info *acd.Folder
err = f.pacer.Call(func() (bool, error) {
info, resp, err = folder.CreateFolder(leaf)
return shouldRetry(resp, err)
return f.shouldRetry(resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
@@ -306,7 +356,7 @@ type listAllFn func(*acd.Node) bool
// Lists the directory required calling the user function on each item found
//
// If the user fn ever returns true then it early exits with found = true
func (f *FsAcd) listAll(dirID string, title string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
func (f *Fs) listAll(dirID string, title string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
query := "parents:" + dirID
if directoriesOnly {
query += " AND kind:" + folderKind
@@ -321,18 +371,16 @@ func (f *FsAcd) listAll(dirID string, title string, directoriesOnly bool, filesO
Filters: query,
}
var nodes []*acd.Node
var out []*acd.Node
//var resp *http.Response
OUTER:
for {
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
err = f.pacer.CallNoRetry(func() (bool, error) {
nodes, resp, err = f.c.Nodes.GetNodes(&opts)
return shouldRetry(resp, err)
return f.shouldRetry(resp, err)
})
if err != nil {
fs.Stats.Error()
fs.ErrorLog(f, "Couldn't list files: %v", err)
break
return false, err
}
if nodes == nil {
break
@@ -343,109 +391,132 @@ OUTER:
if *node.Status != statusAvailable {
continue
}
if fn(node) {
found = true
break OUTER
}
// Store the nodes up in case we have to retry the listing
out = append(out, node)
}
}
}
// Send the nodes now
for _, node := range out {
if fn(node) {
found = true
break
}
}
return
}
// Path should be directory path either "" or "path/"
//
// List the directory using a recursive list from the root
//
// This fetches the minimum amount of stuff but does more API calls
// which makes it slow
func (f *FsAcd) listDirRecursive(dirID string, path string, out fs.ObjectsChan) error {
var subError error
// Make the API request
var wg sync.WaitGroup
_, err := f.listAll(dirID, "", false, false, func(node *acd.Node) bool {
// Recurse on directories
switch *node.Kind {
case folderKind:
wg.Add(1)
folder := path + *node.Name + "/"
fs.Debug(f, "Reading %s", folder)
go func() {
defer wg.Done()
err := f.listDirRecursive(*node.Id, folder, out)
// ListDir reads the directory specified by the job into out, returning any more jobs
func (f *Fs) ListDir(out fs.ListOpts, job dircache.ListDirJob) (jobs []dircache.ListDirJob, err error) {
fs.Debug(f, "Reading %q", job.Path)
maxTries := fs.Config.LowLevelRetries
for tries := 1; tries <= maxTries; tries++ {
_, err = f.listAll(job.DirID, "", false, false, func(node *acd.Node) bool {
remote := job.Path + *node.Name
switch *node.Kind {
case folderKind:
if out.IncludeDirectory(remote) {
dir := &fs.Dir{
Name: remote,
Bytes: -1,
Count: -1,
}
dir.When, _ = time.Parse(timeFormat, *node.ModifiedDate) // FIXME
if out.AddDir(dir) {
return true
}
if job.Depth > 0 {
jobs = append(jobs, dircache.ListDirJob{DirID: *node.Id, Path: remote + "/", Depth: job.Depth - 1})
}
}
case fileKind:
o, err := f.newObjectWithInfo(remote, node)
if err != nil {
subError = err
fs.ErrorLog(f, "Error reading %s:%s", folder, err)
out.SetError(err)
return true
}
}()
if out.Add(o) {
return true
}
default:
// ignore ASSET etc
}
return false
case fileKind:
if fs := f.newFsObjectWithInfo(path+*node.Name, node); fs != nil {
out <- fs
}
default:
// ignore ASSET etc
})
if fs.IsRetryError(err) {
fs.Debug(f, "Directory listing error for %q: %v - low level retry %d/%d", job.Path, err, tries, maxTries)
continue
}
return false
})
wg.Wait()
fs.Debug(f, "Finished reading %s", path)
if err != nil {
return err
if err != nil {
return nil, err
}
break
}
if subError != nil {
return subError
}
return nil
fs.Debug(f, "Finished reading %q", job.Path)
return jobs, err
}
// List walks the path returning a channel of FsObjects
func (f *FsAcd) List() fs.ObjectsChan {
out := make(fs.ObjectsChan, fs.Config.Checkers)
go func() {
defer close(out)
err := f.dirCache.FindRoot(false)
if err != nil {
fs.Stats.Error()
fs.ErrorLog(f, "Couldn't find root: %s", err)
} else {
err = f.listDirRecursive(f.dirCache.RootID(), "", out)
if err != nil {
fs.Stats.Error()
fs.ErrorLog(f, "List failed: %s", err)
}
}
}()
return out
// List walks the path returning files and directories into out
func (f *Fs) List(out fs.ListOpts, dir string) {
f.dirCache.List(f, out, dir)
}
// ListDir lists the directories
func (f *FsAcd) ListDir() fs.DirChan {
out := make(fs.DirChan, fs.Config.Checkers)
go func() {
defer close(out)
err := f.dirCache.FindRoot(false)
if err != nil {
fs.Stats.Error()
fs.ErrorLog(f, "Couldn't find root: %s", err)
// checkUpload checks to see if an error occurred after the file was
// completely uploaded.
//
// If it was then it waits for a while to see if the file really
// exists and is the right size and returns an updated info.
//
// If the file wasn't found or was the wrong size then it returns the
// original error.
//
// This is a workaround for Amazon sometimes returning
//
// * 408 REQUEST_TIMEOUT
// * 504 GATEWAY_TIMEOUT
// * 500 Internal server error
//
// At the end of large uploads. The speculation is that the timeout
// is waiting for the sha1 hashing to complete and the file may well
// be properly uploaded.
func (f *Fs) checkUpload(in io.Reader, src fs.ObjectInfo, inInfo *acd.File, inErr error) (fixedError bool, info *acd.File, err error) {
// Return if no error - all is well
if inErr == nil {
return false, inInfo, inErr
}
const sleepTime = 5 * time.Second // sleep between tries
retries := int(*uploadWaitTime / sleepTime) // number of retries
if retries <= 0 {
retries = 1
}
buf := make([]byte, 1)
n, err := in.Read(buf)
if !(n == 0 && err == io.EOF) {
fs.Debug(src, "Upload error detected but didn't finish upload (n=%d, err=%v): %v", n, err, inErr)
return false, inInfo, inErr
}
fs.Debug(src, "Error detected after finished upload - waiting to see if object was uploaded correctly: %v", inErr)
remote := src.Remote()
for i := 1; i <= retries; i++ {
o, err := f.NewObject(remote)
if err == fs.ErrorObjectNotFound {
fs.Debug(src, "Object not found - waiting (%d/%d)", i, retries)
} else if err != nil {
fs.Debug(src, "Object returned error - waiting (%d/%d): %v", i, retries, err)
} else {
_, err := f.listAll(f.dirCache.RootID(), "", true, false, func(item *acd.Node) bool {
dir := &fs.Dir{
Name: *item.Name,
Bytes: -1,
Count: -1,
if src.Size() == o.Size() {
fs.Debug(src, "Object found with correct size - returning with no error")
info = &acd.File{
Node: o.(*Object).info,
}
dir.When, _ = time.Parse(timeFormat, *item.ModifiedDate)
out <- dir
return false
})
if err != nil {
fs.Stats.Error()
fs.ErrorLog(f, "ListDir failed: %s", err)
return true, info, nil
}
fs.Debug(src, "Object found but wrong size %d vs %d - waiting (%d/%d)", src.Size(), o.Size(), i, retries)
}
}()
return out
time.Sleep(sleepTime)
}
fs.Debug(src, "Finished waiting for object - returning original error: %v", inErr)
return false, inInfo, inErr
}
// Put the object into the container
@@ -453,26 +524,49 @@ func (f *FsAcd) ListDir() fs.DirChan {
// Copy the reader in to the new object which is returned
//
// The new object may have been created if an error is returned
func (f *FsAcd) Put(in io.Reader, remote string, modTime time.Time, size int64) (fs.Object, error) {
// Temporary FsObject under construction
o := &FsObjectAcd{
acd: f,
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo) (fs.Object, error) {
remote := src.Remote()
size := src.Size()
// Temporary Object under construction
o := &Object{
fs: f,
remote: remote,
}
// Check if object already exists
err := o.readMetaData()
switch err {
case nil:
return o, o.Update(in, src)
case fs.ErrorObjectNotFound:
// Not found so create it
default:
return nil, err
}
// If not create it
leaf, directoryID, err := f.dirCache.FindPath(remote, true)
if err != nil {
return nil, err
}
folder := acd.FolderFromId(directoryID, o.acd.c.Nodes)
if size > warnFileSize {
fs.Debug(f, "Warning: file %q may fail because it is too big. Use --max-size=%dGB to skip large files.", remote, warnFileSize>>30)
}
folder := acd.FolderFromId(directoryID, o.fs.c.Nodes)
var info *acd.File
var resp *http.Response
err = f.pacer.CallNoRetry(func() (bool, error) {
if size != 0 {
f.startUpload()
if src.Size() != 0 {
info, resp, err = folder.Put(in, leaf)
} else {
info, resp, err = folder.PutSized(in, size, leaf)
}
return shouldRetry(resp, err)
f.stopUpload()
var ok bool
ok, info, err = f.checkUpload(in, src, info, err)
if ok {
return false, nil
}
return f.shouldRetry(resp, err)
})
if err != nil {
return nil, err
@@ -482,15 +576,15 @@ func (f *FsAcd) Put(in io.Reader, remote string, modTime time.Time, size int64)
}
// Mkdir creates the container if it doesn't exist
func (f *FsAcd) Mkdir() error {
func (f *Fs) Mkdir() error {
return f.dirCache.FindRoot(true)
}
// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in it
func (f *FsAcd) purgeCheck(check bool) error {
func (f *Fs) purgeCheck(check bool) error {
if f.root == "" {
return fmt.Errorf("Can't purge root directory")
return errors.New("can't purge root directory")
}
dc := f.dirCache
err := dc.FindRoot(false)
@@ -502,7 +596,7 @@ func (f *FsAcd) purgeCheck(check bool) error {
if check {
// check directory is empty
empty := true
_, err := f.listAll(rootID, "", false, false, func(node *acd.Node) bool {
_, err = f.listAll(rootID, "", false, false, func(node *acd.Node) bool {
switch *node.Kind {
case folderKind:
empty = false
@@ -519,7 +613,7 @@ func (f *FsAcd) purgeCheck(check bool) error {
return err
}
if !empty {
return fmt.Errorf("Directory not empty")
return errors.New("directory not empty")
}
}
@@ -527,7 +621,7 @@ func (f *FsAcd) purgeCheck(check bool) error {
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = node.Trash()
return shouldRetry(resp, err)
return f.shouldRetry(resp, err)
})
if err != nil {
return err
@@ -543,15 +637,20 @@ func (f *FsAcd) purgeCheck(check bool) error {
// Rmdir deletes the root folder
//
// Returns an error if it isn't empty
func (f *FsAcd) Rmdir() error {
func (f *Fs) Rmdir() error {
return f.purgeCheck(true)
}
// Precision return the precision of this Fs
func (f *FsAcd) Precision() time.Duration {
func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() fs.HashSet {
return fs.HashSet(fs.HashMD5)
}
// Copy src to this remote using server side copy operations.
//
// This is stored with the remote path given
@@ -561,18 +660,18 @@ func (f *FsAcd) Precision() time.Duration {
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
//func (f *FsAcd) Copy(src fs.Object, remote string) (fs.Object, error) {
// srcObj, ok := src.(*FsObjectAcd)
//func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
// srcObj, ok := src.(*Object)
// if !ok {
// fs.Debug(src, "Can't copy - not same remote type")
// return nil, fs.ErrorCantCopy
// }
// srcFs := srcObj.acd
// srcFs := srcObj.fs
// _, err := f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil)
// if err != nil {
// return nil, err
// }
// return f.NewFsObject(remote), nil
// return f.NewObject(remote), nil
//}
// Purge deletes all the files and the container
@@ -580,19 +679,19 @@ func (f *FsAcd) Precision() time.Duration {
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *FsAcd) Purge() error {
func (f *Fs) Purge() error {
return f.purgeCheck(false)
}
// ------------------------------------------------------------
// Fs returns the parent Fs
func (o *FsObjectAcd) Fs() fs.Fs {
return o.acd
func (o *Object) Fs() fs.Info {
return o.fs
}
// Return a string version
func (o *FsObjectAcd) String() string {
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
@@ -600,12 +699,15 @@ func (o *FsObjectAcd) String() string {
}
// Remote returns the remote path
func (o *FsObjectAcd) Remote() string {
func (o *Object) Remote() string {
return o.remote
}
// Md5sum returns the Md5sum of an object returning a lowercase hex string
func (o *FsObjectAcd) Md5sum() (string, error) {
// Hash returns the Md5sum of an object returning a lowercase hex string
func (o *Object) Hash(t fs.HashType) (string, error) {
if t != fs.HashMD5 {
return "", fs.ErrHashUnsupported
}
if o.info.ContentProperties.Md5 != nil {
return *o.info.ContentProperties.Md5, nil
}
@@ -613,30 +715,37 @@ func (o *FsObjectAcd) Md5sum() (string, error) {
}
// Size returns the size of an object in bytes
func (o *FsObjectAcd) Size() int64 {
func (o *Object) Size() int64 {
return int64(*o.info.ContentProperties.Size)
}
// readMetaData gets the metadata if it hasn't already been fetched
//
// it also sets the info
func (o *FsObjectAcd) readMetaData() (err error) {
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (o *Object) readMetaData() (err error) {
if o.info != nil {
return nil
}
leaf, directoryID, err := o.acd.dirCache.FindPath(o.remote, false)
leaf, directoryID, err := o.fs.dirCache.FindPath(o.remote, false)
if err != nil {
if err == fs.ErrorDirNotFound {
return fs.ErrorObjectNotFound
}
return err
}
folder := acd.FolderFromId(directoryID, o.acd.c.Nodes)
folder := acd.FolderFromId(directoryID, o.fs.c.Nodes)
var resp *http.Response
var info *acd.File
err = o.acd.pacer.Call(func() (bool, error) {
err = o.fs.pacer.Call(func() (bool, error) {
info, resp, err = folder.GetFile(leaf)
return shouldRetry(resp, err)
return o.fs.shouldRetry(resp, err)
})
if err != nil {
fs.Debug(o, "Failed to read info: %s", err)
if err == acd.ErrorNodeNotFound {
return fs.ErrorObjectNotFound
}
return err
}
o.info = info.Node
@@ -648,38 +757,46 @@ func (o *FsObjectAcd) readMetaData() (err error) {
//
// It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers
func (o *FsObjectAcd) ModTime() time.Time {
func (o *Object) ModTime() time.Time {
err := o.readMetaData()
if err != nil {
fs.Log(o, "Failed to read metadata: %s", err)
fs.Log(o, "Failed to read metadata: %v", err)
return time.Now()
}
modTime, err := time.Parse(timeFormat, *o.info.ModifiedDate)
if err != nil {
fs.Log(o, "Failed to read mtime from object: %s", err)
fs.Log(o, "Failed to read mtime from object: %v", err)
return time.Now()
}
return modTime
}
// SetModTime sets the modification time of the local fs object
func (o *FsObjectAcd) SetModTime(modTime time.Time) {
func (o *Object) SetModTime(modTime time.Time) error {
// FIXME not implemented
return
return fs.ErrorCantSetModTime
}
// Storable returns a boolean showing whether this object is storable
func (o *FsObjectAcd) Storable() bool {
func (o *Object) Storable() bool {
return true
}
// Open an object for read
func (o *FsObjectAcd) Open() (in io.ReadCloser, err error) {
func (o *Object) Open() (in io.ReadCloser, err error) {
bigObject := o.Size() >= int64(tempLinkThreshold)
if bigObject {
fs.Debug(o, "Dowloading large object via tempLink")
}
file := acd.File{Node: o.info}
var resp *http.Response
err = o.acd.pacer.Call(func() (bool, error) {
in, resp, err = file.Open()
return shouldRetry(resp, err)
err = o.fs.pacer.Call(func() (bool, error) {
if !bigObject {
in, resp, err = file.Open()
} else {
in, resp, err = file.OpenTempURL(o.fs.noAuthClient)
}
return o.fs.shouldRetry(resp, err)
})
return in, err
}
@@ -687,18 +804,26 @@ func (o *FsObjectAcd) Open() (in io.ReadCloser, err error) {
// Update the object with the contents of the io.Reader, modTime and size
//
// The new object may have been created if an error is returned
func (o *FsObjectAcd) Update(in io.Reader, modTime time.Time, size int64) error {
func (o *Object) Update(in io.Reader, src fs.ObjectInfo) error {
size := src.Size()
file := acd.File{Node: o.info}
var info *acd.File
var resp *http.Response
var err error
err = o.acd.pacer.CallNoRetry(func() (bool, error) {
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
o.fs.startUpload()
if size != 0 {
info, resp, err = file.OverwriteSized(in, size)
} else {
info, resp, err = file.Overwrite(in)
}
return shouldRetry(resp, err)
o.fs.stopUpload()
var ok bool
ok, info, err = o.fs.checkUpload(in, src, info, err)
if ok {
return false, nil
}
return o.fs.shouldRetry(resp, err)
})
if err != nil {
return err
@@ -708,22 +833,22 @@ func (o *FsObjectAcd) Update(in io.Reader, modTime time.Time, size int64) error
}
// Remove an object
func (o *FsObjectAcd) Remove() error {
func (o *Object) Remove() error {
var resp *http.Response
var err error
err = o.acd.pacer.Call(func() (bool, error) {
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.info.Trash()
return shouldRetry(resp, err)
return o.fs.shouldRetry(resp, err)
})
return err
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*FsAcd)(nil)
_ fs.Purger = (*FsAcd)(nil)
// _ fs.Copier = (*FsAcd)(nil)
// _ fs.Mover = (*FsAcd)(nil)
// _ fs.DirMover = (*FsAcd)(nil)
_ fs.Object = (*FsObjectAcd)(nil)
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
// _ fs.Copier = (*Fs)(nil)
// _ fs.Mover = (*Fs)(nil)
// _ fs.DirMover = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
)

amazonclouddrive/amazonclouddrive_test.go

@@ -12,45 +12,47 @@ import (
"github.com/ncw/rclone/fstest/fstests"
)
func init() {
fstests.NilObject = fs.Object((*amazonclouddrive.FsObjectAcd)(nil))
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*amazonclouddrive.Object)(nil))
fstests.RemoteName = "TestAmazonCloudDrive:"
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsNewFsObjectNotFound(t *testing.T) { fstests.TestFsNewFsObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRoot(t *testing.T) { fstests.TestFsListRoot(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewFsObject(t *testing.T) { fstests.TestFsNewFsObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectMd5sum(t *testing.T) { fstests.TestObjectMd5sum(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestLimitedFs(t *testing.T) { fstests.TestLimitedFs(t) }
func TestLimitedFsNotFound(t *testing.T) { fstests.TestLimitedFsNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }

appveyor.yml (new file, 20 lines)

@@ -0,0 +1,20 @@
version: "{build}"
os: Windows Server 2012 R2
clone_folder: c:\gopath\src\github.com\ncw\rclone
environment:
GOPATH: c:\gopath
install:
- echo %PATH%
- echo %GOPATH%
- go version
- go env
- go get -t -d ./...
build_script:
- go vet ./...
- go test -cpu=2 ./...
- go test -cpu=2 -short -race ./...

b2/api/types.go (new file, 299 lines)

@@ -0,0 +1,299 @@
package api
import (
"fmt"
"path"
"strconv"
"strings"
"time"
"github.com/ncw/rclone/fs"
)
// Error describes a B2 error response
type Error struct {
Status int `json:"status"` // The numeric HTTP status code. Always matches the status in the HTTP response.
Code string `json:"code"` // A single-identifier code that identifies the error.
Message string `json:"message"` // A human-readable message, in English, saying what went wrong.
}
// Error satisfies the error interface
func (e *Error) Error() string {
return fmt.Sprintf("%s (%d %s)", e.Message, e.Status, e.Code)
}
// Fatal satisfies the Fatal interface
//
// It indicates which errors should be treated as fatal
func (e *Error) Fatal() bool {
return e.Status == 403 // 403 errors shouldn't be retried
}
var _ fs.Fataler = (*Error)(nil)
// Account describes a B2 account
type Account struct {
ID string `json:"accountId"` // The identifier for the account.
}
// Bucket describes a B2 bucket
type Bucket struct {
ID string `json:"bucketId"`
AccountID string `json:"accountId"`
Name string `json:"bucketName"`
Type string `json:"bucketType"`
}
// Timestamp is a UTC time when this file was uploaded. It is a base
// 10 number of milliseconds since midnight, January 1, 1970 UTC. This
// fits in a 64 bit integer such as the type "long" in the programming
// language Java. It is intended to be compatible with Java's time
// long. For example, it can be passed directly into the java call
// Date.setTime(long time).
type Timestamp time.Time
// MarshalJSON turns a Timestamp into JSON (in UTC)
func (t *Timestamp) MarshalJSON() (out []byte, err error) {
timestamp := (*time.Time)(t).UTC().UnixNano()
return []byte(strconv.FormatInt(timestamp/1E6, 10)), nil
}
// UnmarshalJSON turns JSON into a Timestamp
func (t *Timestamp) UnmarshalJSON(data []byte) error {
timestamp, err := strconv.ParseInt(string(data), 10, 64)
if err != nil {
return err
}
*t = Timestamp(time.Unix(timestamp/1E3, (timestamp%1E3)*1E6).UTC())
return nil
}
const versionFormat = "-v2006-01-02-150405.000"
// AddVersion adds the timestamp as a version string into the filename passed in.
func (t Timestamp) AddVersion(remote string) string {
ext := path.Ext(remote)
base := remote[:len(remote)-len(ext)]
s := (time.Time)(t).Format(versionFormat)
// Replace the '.' with a '-'
s = strings.Replace(s, ".", "-", -1)
return base + s + ext
}
// RemoveVersion removes the timestamp from a filename as a version string.
//
// It returns the new file name and a timestamp, or the old filename
// and a zero timestamp.
func RemoveVersion(remote string) (t Timestamp, newRemote string) {
newRemote = remote
ext := path.Ext(remote)
base := remote[:len(remote)-len(ext)]
if len(base) < len(versionFormat) {
return
}
versionStart := len(base) - len(versionFormat)
// Check it ends in -xxx
if base[len(base)-4] != '-' {
return
}
// Replace with .xxx for parsing
base = base[:len(base)-4] + "." + base[len(base)-3:]
newT, err := time.Parse(versionFormat, base[versionStart:])
if err != nil {
return
}
return Timestamp(newT), base[:versionStart] + ext
}
// IsZero returns true if the timestamp is uninitialised
func (t Timestamp) IsZero() bool {
return (time.Time)(t).IsZero()
}
// Equal compares two timestamps
//
// If either is zero then it returns false
func (t Timestamp) Equal(s Timestamp) bool {
if (time.Time)(t).IsZero() {
return false
}
if (time.Time)(s).IsZero() {
return false
}
return (time.Time)(t).Equal((time.Time)(s))
}
// File is info about a file
type File struct {
ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name.
Action string `json:"action"` // Either "upload" or "hide". "upload" means a file that was uploaded to B2 Cloud Storage. "hide" means a file version marking the file as hidden, so that it will not show up in b2_list_file_names. The result of b2_list_file_names will contain only "upload". The result of b2_list_file_versions may have both.
Size int64 `json:"size"` // The number of bytes in the file.
UploadTimestamp Timestamp `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded.
SHA1 string `json:"contentSha1"` // The SHA1 of the bytes stored in the file.
ContentType string `json:"contentType"` // The MIME type of the file.
Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
}
// AuthorizeAccountResponse is as returned from the b2_authorize_account call
type AuthorizeAccountResponse struct {
AccountID string `json:"accountId"` // The identifier for the account.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
DownloadURL string `json:"downloadUrl"` // The base URL to use for downloading files.
}
// ListBucketsResponse is as returned from the b2_list_buckets call
type ListBucketsResponse struct {
Buckets []Bucket `json:"buckets"`
}
// ListFileNamesRequest is as passed to b2_list_file_names or b2_list_file_versions
type ListFileNamesRequest struct {
BucketID string `json:"bucketId"` // required - The bucket to look for file names in.
StartFileName string `json:"startFileName,omitempty"` // optional - The first file name to return. If there is a file with this name, it will be returned in the list. If not, the list will start with the first file name after this one.
MaxFileCount int `json:"maxFileCount,omitempty"` // optional - The maximum number of files to return from this call. The default value is 100, and the maximum allowed is 1000.
StartFileID string `json:"startFileId,omitempty"` // optional - What to pass in to startFileId for the next search to continue where this one left off.
}
// ListFileNamesResponse is as received from b2_list_file_names or b2_list_file_versions
type ListFileNamesResponse struct {
Files []File `json:"files"` // An array of objects, each one describing one file.
NextFileName *string `json:"nextFileName"` // What to pass in to startFileName for the next search to continue where this one left off, or null if there are no more files.
NextFileID *string `json:"nextFileId"` // What to pass in to startFileId for the next search to continue where this one left off, or null if there are no more files.
}
// GetUploadURLRequest is passed to b2_get_upload_url
type GetUploadURLRequest struct {
BucketID string `json:"bucketId"` // The ID of the bucket that you want to upload to.
}
// GetUploadURLResponse is received from b2_get_upload_url
type GetUploadURLResponse struct {
BucketID string `json:"bucketId"` // The unique ID of the bucket.
UploadURL string `json:"uploadUrl"` // The URL that can be used to upload files to this bucket, see b2_upload_file.
AuthorizationToken string `json:"authorizationToken"` // The authorizationToken that must be used when uploading files to this bucket, see b2_upload_file.
}
// FileInfo is received from b2_upload_file, b2_get_file_info and b2_finish_large_file
type FileInfo struct {
ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name.
Action string `json:"action"` // Either "upload" or "hide". "upload" means a file that was uploaded to B2 Cloud Storage. "hide" means a file version marking the file as hidden, so that it will not show up in b2_list_file_names. The result of b2_list_file_names will contain only "upload". The result of b2_list_file_versions may have both.
AccountID string `json:"accountId"` // Your account ID.
BucketID string `json:"bucketId"` // The bucket that the file is in.
Size int64 `json:"contentLength"` // The number of bytes stored in the file.
UploadTimestamp Timestamp `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded.
SHA1 string `json:"contentSha1"` // The SHA1 of the bytes stored in the file.
ContentType string `json:"contentType"` // The MIME type of the file.
Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
}
// CreateBucketRequest is used to create a bucket
type CreateBucketRequest struct {
AccountID string `json:"accountId"`
Name string `json:"bucketName"`
Type string `json:"bucketType"`
}
// DeleteBucketRequest is used to create a bucket
type DeleteBucketRequest struct {
ID string `json:"bucketId"`
AccountID string `json:"accountId"`
}
// DeleteFileRequest is used to delete a file version
type DeleteFileRequest struct {
ID string `json:"fileId"` // The ID of the file, as returned by b2_upload_file, b2_list_file_names, or b2_list_file_versions.
Name string `json:"fileName"` // The name of this file.
}
// HideFileRequest is used to delete a file
type HideFileRequest struct {
BucketID string `json:"bucketId"` // The bucket containing the file to hide.
Name string `json:"fileName"` // The name of the file to hide.
}
// GetFileInfoRequest is used to return a FileInfo struct with b2_get_file_info
type GetFileInfoRequest struct {
ID string `json:"fileId"` // The ID of the file, as returned by b2_upload_file, b2_list_file_names, or b2_list_file_versions.
}
// StartLargeFileRequest (b2_start_large_file) Prepares for uploading the parts of a large file.
//
// If the original source of the file being uploaded has a last
// modified time concept, Backblaze recommends using
// src_last_modified_millis as the name, and a string holding the base
// 10 number of milliseconds since midnight, January 1, 1970
// UTC. This fits in a 64 bit integer such as the type "long" in the
// programming language Java. It is intended to be compatible with
// Java's time long. For example, it can be passed directly into the
// Java call Date.setTime(long time).
//
// If the caller knows the SHA1 of the entire large file being
// uploaded, Backblaze recommends using large_file_sha1 as the name,
// and a 40 byte hex string representing the SHA1.
//
// Example: { "src_last_modified_millis" : "1452802803026", "large_file_sha1" : "a3195dc1e7b46a2ff5da4b3c179175b75671e80d", "color": "blue" }
type StartLargeFileRequest struct {
BucketID string `json:"bucketId"` //The ID of the bucket that the file will go in.
Name string `json:"fileName"` // The name of the file. See Files for requirements on file names.
ContentType string `json:"contentType"` // The MIME type of the content of the file, which will be returned in the Content-Type header when downloading the file. Use the Content-Type b2/x-auto to automatically set the stored Content-Type post upload. In the case where a file extension is absent or the lookup fails, the Content-Type is set to application/octet-stream.
Info map[string]string `json:"fileInfo"` // A JSON object holding the name/value pairs for the custom file info.
}
// StartLargeFileResponse is the response to StartLargeFileRequest
type StartLargeFileResponse struct {
ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name.
AccountID string `json:"accountId"` // The identifier for the account.
BucketID string `json:"bucketId"` // The unique ID of the bucket.
ContentType string `json:"contentType"` // The MIME type of the file.
Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
UploadTimestamp Timestamp `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded.
}
// GetUploadPartURLRequest is passed to b2_get_upload_part_url
type GetUploadPartURLRequest struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
}
// GetUploadPartURLResponse is received from b2_get_upload_url
type GetUploadPartURLResponse struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
UploadURL string `json:"uploadUrl"` // The URL that can be used to upload files to this bucket, see b2_upload_part.
AuthorizationToken string `json:"authorizationToken"` // The authorizationToken that must be used when uploading files to this bucket, see b2_upload_part.
}
// UploadPartResponse is the response to b2_upload_part
type UploadPartResponse struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
PartNumber int64 `json:"partNumber"` // Which part this is (starting from 1)
Size int64 `json:"contentLength"` // The number of bytes stored in the file.
SHA1 string `json:"contentSha1"` // The SHA1 of the bytes stored in the file.
}
// FinishLargeFileRequest is passed to b2_finish_large_file
//
// The response is a FileInfo object (with extra AccountID and BucketID fields which we ignore).
//
// Large files do not have a SHA1 checksum. The value will always be "none".
type FinishLargeFileRequest struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
SHA1s []string `json:"partSha1Array"` // A JSON array of hex SHA1 checksums of the parts of the large file. This is a double-check that the right parts were uploaded in the right order, and that none were missed. Note that the part numbers start at 1, and the SHA1 of the part 1 is the first string in the array, at index 0.
}
// CancelLargeFileRequest is passed to b2_cancel_large_file
//
// The response is a CancelLargeFileResponse
type CancelLargeFileRequest struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
}
// CancelLargeFileResponse is the response to CancelLargeFileRequest
type CancelLargeFileResponse struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
Name string `json:"fileName"` // The name of this file.
AccountID string `json:"accountId"` // The identifier for the account.
BucketID string `json:"bucketId"` // The unique ID of the bucket.
}
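Taken together these types describe B2's large file protocol: start the file, fetch an upload URL per part, upload each part with its SHA1, then finish with the full list of part SHA1s (or cancel). A minimal sketch of that sequence, assuming a hypothetical post helper in place of rclone's rest client and pacer - uploadLargeFileSketch and post are illustrative names, not part of this change:

// Sketch only - post stands in for an authenticated JSON POST to the B2 API.
package b2sketch

import (
	"crypto/sha1"
	"fmt"

	"github.com/ncw/rclone/b2/api"
)

func uploadLargeFileSketch(post func(path string, req, resp interface{}) error, bucketID, name string, parts [][]byte) error {
	var started api.StartLargeFileResponse
	err := post("/b2_start_large_file", &api.StartLargeFileRequest{
		BucketID:    bucketID,
		Name:        name,
		ContentType: "b2/x-auto",
	}, &started)
	if err != nil {
		return err
	}
	sha1s := make([]string, len(parts))
	for i, part := range parts {
		var partURL api.GetUploadPartURLResponse
		if err := post("/b2_get_upload_part_url", &api.GetUploadPartURLRequest{ID: started.ID}, &partURL); err != nil {
			return err
		}
		sha1s[i] = fmt.Sprintf("%x", sha1.Sum(part))
		// The part bytes are then POSTed to partURL.UploadURL with the
		// Authorization, X-Bz-Part-Number (i+1) and X-Bz-Content-Sha1
		// headers - see transferChunk in b2/upload.go below.
	}
	var finished api.FileInfo
	return post("/b2_finish_large_file", &api.FinishLargeFileRequest{ID: started.ID, SHA1s: sha1s}, &finished)
}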

87
b2/api/types_test.go Normal file

@@ -0,0 +1,87 @@
package api_test
import (
"testing"
"time"
"github.com/ncw/rclone/b2/api"
"github.com/ncw/rclone/fstest"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var (
emptyT api.Timestamp
t0 = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123456789Z"))
t0r = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123000000Z"))
t1 = api.Timestamp(fstest.Time("2001-02-03T04:05:06.123000000Z"))
)
func TestTimestampMarshalJSON(t *testing.T) {
resB, err := t0.MarshalJSON()
res := string(resB)
require.NoError(t, err)
assert.Equal(t, "3661123", res)
resB, err = t1.MarshalJSON()
res = string(resB)
require.NoError(t, err)
assert.Equal(t, "981173106123", res)
}
func TestTimestampUnmarshalJSON(t *testing.T) {
var tActual api.Timestamp
err := tActual.UnmarshalJSON([]byte("981173106123"))
require.NoError(t, err)
assert.Equal(t, (time.Time)(t1), (time.Time)(tActual))
}
func TestTimestampAddVersion(t *testing.T) {
for _, test := range []struct {
t api.Timestamp
in string
expected string
}{
{t0, "potato.txt", "potato-v1970-01-01-010101-123.txt"},
{t1, "potato", "potato-v2001-02-03-040506-123"},
{t1, "", "-v2001-02-03-040506-123"},
} {
actual := test.t.AddVersion(test.in)
assert.Equal(t, test.expected, actual, test.in)
}
}
func TestTimestampRemoveVersion(t *testing.T) {
for _, test := range []struct {
in string
expectedT api.Timestamp
expectedRemote string
}{
{"potato.txt", emptyT, "potato.txt"},
{"potato-v1970-01-01-010101-123.txt", t0r, "potato.txt"},
{"potato-v2001-02-03-040506-123", t1, "potato"},
{"-v2001-02-03-040506-123", t1, ""},
{"potato-v2A01-02-03-040506-123", emptyT, "potato-v2A01-02-03-040506-123"},
{"potato-v2001-02-03-040506=123", emptyT, "potato-v2001-02-03-040506=123"},
} {
actualT, actualRemote := api.RemoveVersion(test.in)
assert.Equal(t, test.expectedT, actualT, test.in)
assert.Equal(t, test.expectedRemote, actualRemote, test.in)
}
}
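These two tests pin down b2's versioned file names: the upload time is inserted as -vYYYY-MM-DD-HHMMSS-mmm just before the extension, and RemoveVersion strips it again, returning the zero Timestamp when the suffix does not parse. A minimal sketch of the AddVersion side, consistent with the cases above (addVersionSketch is our name, and it assumes UTC times as produced by fstest.Time):

// Sketch only - assumes "fmt", "path" and "time" are imported.
func addVersionSketch(t time.Time, remote string) string {
	ext := path.Ext(remote)
	base := remote[:len(remote)-len(ext)]
	ms := t.Nanosecond() / int(time.Millisecond)
	return fmt.Sprintf("%s-v%s-%03d%s", base, t.UTC().Format("2006-01-02-150405"), ms, ext)
}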
func TestTimestampIsZero(t *testing.T) {
assert.True(t, emptyT.IsZero())
assert.False(t, t0.IsZero())
assert.False(t, t1.IsZero())
}
func TestTimestampEqual(t *testing.T) {
assert.False(t, emptyT.Equal(emptyT))
assert.False(t, t0.Equal(emptyT))
assert.False(t, emptyT.Equal(t0))
assert.False(t, t0.Equal(t1))
assert.False(t, t1.Equal(t0))
assert.True(t, t0.Equal(t0))
assert.True(t, t1.Equal(t1))
}

1328
b2/b2.go Normal file

File diff suppressed because it is too large

284
b2/b2_internal_test.go Normal file

@@ -0,0 +1,284 @@
package b2
import (
"testing"
"time"
"github.com/ncw/rclone/fstest"
"github.com/stretchr/testify/assert"
)
// Test b2 string encoding
// https://www.backblaze.com/b2/docs/string_encoding.html
var encodeTest = []struct {
fullyEncoded string
minimallyEncoded string
plainText string
}{
{fullyEncoded: "%20", minimallyEncoded: "+", plainText: " "},
{fullyEncoded: "%21", minimallyEncoded: "!", plainText: "!"},
{fullyEncoded: "%22", minimallyEncoded: "%22", plainText: "\""},
{fullyEncoded: "%23", minimallyEncoded: "%23", plainText: "#"},
{fullyEncoded: "%24", minimallyEncoded: "$", plainText: "$"},
{fullyEncoded: "%25", minimallyEncoded: "%25", plainText: "%"},
{fullyEncoded: "%26", minimallyEncoded: "%26", plainText: "&"},
{fullyEncoded: "%27", minimallyEncoded: "'", plainText: "'"},
{fullyEncoded: "%28", minimallyEncoded: "(", plainText: "("},
{fullyEncoded: "%29", minimallyEncoded: ")", plainText: ")"},
{fullyEncoded: "%2A", minimallyEncoded: "*", plainText: "*"},
{fullyEncoded: "%2B", minimallyEncoded: "%2B", plainText: "+"},
{fullyEncoded: "%2C", minimallyEncoded: "%2C", plainText: ","},
{fullyEncoded: "%2D", minimallyEncoded: "-", plainText: "-"},
{fullyEncoded: "%2E", minimallyEncoded: ".", plainText: "."},
{fullyEncoded: "%2F", minimallyEncoded: "/", plainText: "/"},
{fullyEncoded: "%30", minimallyEncoded: "0", plainText: "0"},
{fullyEncoded: "%31", minimallyEncoded: "1", plainText: "1"},
{fullyEncoded: "%32", minimallyEncoded: "2", plainText: "2"},
{fullyEncoded: "%33", minimallyEncoded: "3", plainText: "3"},
{fullyEncoded: "%34", minimallyEncoded: "4", plainText: "4"},
{fullyEncoded: "%35", minimallyEncoded: "5", plainText: "5"},
{fullyEncoded: "%36", minimallyEncoded: "6", plainText: "6"},
{fullyEncoded: "%37", minimallyEncoded: "7", plainText: "7"},
{fullyEncoded: "%38", minimallyEncoded: "8", plainText: "8"},
{fullyEncoded: "%39", minimallyEncoded: "9", plainText: "9"},
{fullyEncoded: "%3A", minimallyEncoded: ":", plainText: ":"},
{fullyEncoded: "%3B", minimallyEncoded: ";", plainText: ";"},
{fullyEncoded: "%3C", minimallyEncoded: "%3C", plainText: "<"},
{fullyEncoded: "%3D", minimallyEncoded: "=", plainText: "="},
{fullyEncoded: "%3E", minimallyEncoded: "%3E", plainText: ">"},
{fullyEncoded: "%3F", minimallyEncoded: "%3F", plainText: "?"},
{fullyEncoded: "%40", minimallyEncoded: "@", plainText: "@"},
{fullyEncoded: "%41", minimallyEncoded: "A", plainText: "A"},
{fullyEncoded: "%42", minimallyEncoded: "B", plainText: "B"},
{fullyEncoded: "%43", minimallyEncoded: "C", plainText: "C"},
{fullyEncoded: "%44", minimallyEncoded: "D", plainText: "D"},
{fullyEncoded: "%45", minimallyEncoded: "E", plainText: "E"},
{fullyEncoded: "%46", minimallyEncoded: "F", plainText: "F"},
{fullyEncoded: "%47", minimallyEncoded: "G", plainText: "G"},
{fullyEncoded: "%48", minimallyEncoded: "H", plainText: "H"},
{fullyEncoded: "%49", minimallyEncoded: "I", plainText: "I"},
{fullyEncoded: "%4A", minimallyEncoded: "J", plainText: "J"},
{fullyEncoded: "%4B", minimallyEncoded: "K", plainText: "K"},
{fullyEncoded: "%4C", minimallyEncoded: "L", plainText: "L"},
{fullyEncoded: "%4D", minimallyEncoded: "M", plainText: "M"},
{fullyEncoded: "%4E", minimallyEncoded: "N", plainText: "N"},
{fullyEncoded: "%4F", minimallyEncoded: "O", plainText: "O"},
{fullyEncoded: "%50", minimallyEncoded: "P", plainText: "P"},
{fullyEncoded: "%51", minimallyEncoded: "Q", plainText: "Q"},
{fullyEncoded: "%52", minimallyEncoded: "R", plainText: "R"},
{fullyEncoded: "%53", minimallyEncoded: "S", plainText: "S"},
{fullyEncoded: "%54", minimallyEncoded: "T", plainText: "T"},
{fullyEncoded: "%55", minimallyEncoded: "U", plainText: "U"},
{fullyEncoded: "%56", minimallyEncoded: "V", plainText: "V"},
{fullyEncoded: "%57", minimallyEncoded: "W", plainText: "W"},
{fullyEncoded: "%58", minimallyEncoded: "X", plainText: "X"},
{fullyEncoded: "%59", minimallyEncoded: "Y", plainText: "Y"},
{fullyEncoded: "%5A", minimallyEncoded: "Z", plainText: "Z"},
{fullyEncoded: "%5B", minimallyEncoded: "%5B", plainText: "["},
{fullyEncoded: "%5C", minimallyEncoded: "%5C", plainText: "\\"},
{fullyEncoded: "%5D", minimallyEncoded: "%5D", plainText: "]"},
{fullyEncoded: "%5E", minimallyEncoded: "%5E", plainText: "^"},
{fullyEncoded: "%5F", minimallyEncoded: "_", plainText: "_"},
{fullyEncoded: "%60", minimallyEncoded: "%60", plainText: "`"},
{fullyEncoded: "%61", minimallyEncoded: "a", plainText: "a"},
{fullyEncoded: "%62", minimallyEncoded: "b", plainText: "b"},
{fullyEncoded: "%63", minimallyEncoded: "c", plainText: "c"},
{fullyEncoded: "%64", minimallyEncoded: "d", plainText: "d"},
{fullyEncoded: "%65", minimallyEncoded: "e", plainText: "e"},
{fullyEncoded: "%66", minimallyEncoded: "f", plainText: "f"},
{fullyEncoded: "%67", minimallyEncoded: "g", plainText: "g"},
{fullyEncoded: "%68", minimallyEncoded: "h", plainText: "h"},
{fullyEncoded: "%69", minimallyEncoded: "i", plainText: "i"},
{fullyEncoded: "%6A", minimallyEncoded: "j", plainText: "j"},
{fullyEncoded: "%6B", minimallyEncoded: "k", plainText: "k"},
{fullyEncoded: "%6C", minimallyEncoded: "l", plainText: "l"},
{fullyEncoded: "%6D", minimallyEncoded: "m", plainText: "m"},
{fullyEncoded: "%6E", minimallyEncoded: "n", plainText: "n"},
{fullyEncoded: "%6F", minimallyEncoded: "o", plainText: "o"},
{fullyEncoded: "%70", minimallyEncoded: "p", plainText: "p"},
{fullyEncoded: "%71", minimallyEncoded: "q", plainText: "q"},
{fullyEncoded: "%72", minimallyEncoded: "r", plainText: "r"},
{fullyEncoded: "%73", minimallyEncoded: "s", plainText: "s"},
{fullyEncoded: "%74", minimallyEncoded: "t", plainText: "t"},
{fullyEncoded: "%75", minimallyEncoded: "u", plainText: "u"},
{fullyEncoded: "%76", minimallyEncoded: "v", plainText: "v"},
{fullyEncoded: "%77", minimallyEncoded: "w", plainText: "w"},
{fullyEncoded: "%78", minimallyEncoded: "x", plainText: "x"},
{fullyEncoded: "%79", minimallyEncoded: "y", plainText: "y"},
{fullyEncoded: "%7A", minimallyEncoded: "z", plainText: "z"},
{fullyEncoded: "%7B", minimallyEncoded: "%7B", plainText: "{"},
{fullyEncoded: "%7C", minimallyEncoded: "%7C", plainText: "|"},
{fullyEncoded: "%7D", minimallyEncoded: "%7D", plainText: "}"},
{fullyEncoded: "%7E", minimallyEncoded: "~", plainText: "~"},
{fullyEncoded: "%7F", minimallyEncoded: "%7F", plainText: "\u007f"},
{fullyEncoded: "%E8%87%AA%E7%94%B1", minimallyEncoded: "%E8%87%AA%E7%94%B1", plainText: "自由"},
{fullyEncoded: "%F0%90%90%80", minimallyEncoded: "%F0%90%90%80", plainText: "𐐀"},
}
func TestUrlEncode(t *testing.T) {
for _, test := range encodeTest {
got := urlEncode(test.plainText)
if got != test.minimallyEncoded && got != test.fullyEncoded {
t.Errorf("urlEncode(%q) got %q wanted %q or %q", test.plainText, got, test.minimallyEncoded, test.fullyEncoded)
}
}
}
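The encoder under test lives in b2.go, whose diff is collapsed above. Any implementation that leaves the characters B2 documents as safe untouched and percent-encodes every other byte satisfies this table, since the test accepts either the minimal or the fully encoded form; a minimal sketch (urlEncodeSketch is our name, not the function in b2.go):

// Sketch only - assumes "bytes", "fmt" and "strings" are imported.
func urlEncodeSketch(in string) string {
	const safe = "._-/~!$'()*;=:@"
	var out bytes.Buffer
	for i := 0; i < len(in); i++ {
		c := in[i]
		if ('A' <= c && c <= 'Z') || ('a' <= c && c <= 'z') || ('0' <= c && c <= '9') || strings.IndexByte(safe, c) >= 0 {
			out.WriteByte(c)
		} else {
			fmt.Fprintf(&out, "%%%02X", c)
		}
	}
	return out.String()
}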
func TestTimeString(t *testing.T) {
for _, test := range []struct {
in time.Time
want string
}{
{fstest.Time("1970-01-01T00:00:00.000000000Z"), "0"},
{fstest.Time("2001-02-03T04:05:10.123123123Z"), "981173110123"},
{fstest.Time("2001-02-03T05:05:10.123123123+01:00"), "981173110123"},
} {
got := timeString(test.in)
if test.want != got {
t.Logf("%v: want %v got %v", test.in, test.want, got)
}
}
}
func TestParseTimeString(t *testing.T) {
for _, test := range []struct {
in string
want time.Time
wantError string
}{
{"0", fstest.Time("1970-01-01T00:00:00.000000000Z"), ""},
{"981173110123", fstest.Time("2001-02-03T04:05:10.123000000Z"), ""},
{"", time.Time{}, ""},
{"potato", time.Time{}, `strconv.ParseInt: parsing "potato": invalid syntax`},
} {
o := Object{}
err := o.parseTimeString(test.in)
got := o.modTime
var gotError string
if err != nil {
gotError = err.Error()
}
if test.want != got {
t.Logf("%v: want %v got %v", test.in, test.want, got)
}
if test.wantError != gotError {
t.Logf("%v: want error %v got error %v", test.in, test.wantError, gotError)
}
}
}
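Both helpers convert between time.Time and the decimal millisecond strings B2 stores under src_last_modified_millis (see b2/api/types.go above). Sketches consistent with the cases in these two tests - the names are ours, and the real parseTimeString is a method that sets o.modTime rather than returning the time:

// Sketch only - assumes "strconv" and "time" are imported.
func timeStringSketch(t time.Time) string {
	return strconv.FormatInt(t.UnixNano()/int64(time.Millisecond), 10)
}

func parseTimeStringSketch(s string) (time.Time, error) {
	if s == "" {
		return time.Time{}, nil // missing time info is not an error
	}
	ms, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(ms/1000, (ms%1000)*int64(time.Millisecond)).UTC(), nil
}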
func TestSendDir(t *testing.T) {
for _, test := range []struct {
lastDir string
remote string
level int
dirNames []string
newLastDir string
}{
{
lastDir: "",
remote: "test.txt",
level: 100,
dirNames: nil,
newLastDir: "",
},
{
lastDir: "",
remote: "potato/test.txt",
level: 100,
dirNames: []string{"potato"},
newLastDir: "potato",
},
{
lastDir: "potato",
remote: "potato/test.txt",
level: 100,
dirNames: nil,
newLastDir: "potato",
},
{
lastDir: "",
remote: "potato/sausage/test.txt",
level: 100,
dirNames: []string{"potato", "potato/sausage"},
newLastDir: "potato/sausage",
},
{
lastDir: "potato",
remote: "potato/sausage/test.txt",
level: 100,
dirNames: []string{"potato/sausage"},
newLastDir: "potato/sausage",
},
{
lastDir: "potato/sausage",
remote: "potato/sausage/test.txt",
level: 100,
dirNames: nil,
newLastDir: "potato/sausage",
},
{
lastDir: "",
remote: "a/b/c/d/e/f.txt",
level: 100,
dirNames: []string{"a", "a/b", "a/b/c", "a/b/c/d", "a/b/c/d/e"},
newLastDir: "a/b/c/d/e",
},
{
lastDir: "a/b/c/d/e",
remote: "a/b/c/d/E/f.txt",
level: 100,
dirNames: []string{"a/b/c/d/E"},
newLastDir: "a/b/c/d/E",
},
{
lastDir: "a/b/c/d/e",
remote: "a/b/C/D/E/f.txt",
level: 100,
dirNames: []string{"a/b/C", "a/b/C/D", "a/b/C/D/E"},
newLastDir: "a/b/C/D/E",
},
{
lastDir: "a/b/c",
remote: "a/b/c/d/e/f.txt",
level: 100,
dirNames: []string{"a/b/c/d", "a/b/c/d/e"},
newLastDir: "a/b/c/d/e",
},
{
lastDir: "",
remote: "a/b/c/d/e/f.txt",
level: 1,
dirNames: []string{"a"},
newLastDir: "a/b/c/d/e",
},
{
lastDir: "a/b/c",
remote: "a/b/c/d/e/f.txt",
level: 1,
dirNames: nil,
newLastDir: "a/b/c/d/e",
},
{
lastDir: "",
remote: "a/b/c/d/e/f.txt",
level: 3,
dirNames: []string{"a", "a/b", "a/b/c"},
newLastDir: "a/b/c/d/e",
},
{
lastDir: "a/b/C/D/E",
remote: "a/b/c/d/e/f.txt",
level: 3,
dirNames: []string{"a/b/c"},
newLastDir: "a/b/c/d/e",
},
} {
dirNames, newLastDir := sendDir(test.lastDir, test.remote, test.level)
assert.Equal(t, test.dirNames, dirNames, "dirNames fail for sendDir(%q,%q,%v)", test.lastDir, test.remote, test.level)
assert.Equal(t, test.newLastDir, newLastDir, "newLastDir fail for sendDir(%q,%q,%v)", test.lastDir, test.remote, test.level)
}
}
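sendDir itself is in b2.go, whose diff is collapsed above; it synthesises the directory entries that B2's flat listing does not provide. Reconstructed from these cases, it returns the parents of remote that have not been announced since lastDir, capped at level path components deep, together with the new lastDir. A sketch consistent with every row (sendDirSketch is our name):

// Sketch only - assumes "strings" is imported.
func sendDirSketch(lastDir, remote string, level int) (dirNames []string, newLastDir string) {
	newLastDir = lastDir
	slash := strings.LastIndex(remote, "/")
	if slash < 0 {
		return nil, newLastDir // file in the root - nothing to send
	}
	dir := remote[:slash]
	newLastDir = dir
	if dir == lastDir {
		return nil, newLastDir // this directory has already been sent
	}
	var lastParts []string
	if lastDir != "" {
		lastParts = strings.Split(lastDir, "/")
	}
	dirParts := strings.Split(dir, "/")
	// Count the leading path components lastDir and dir have in common
	common := 0
	for common < len(lastParts) && common < len(dirParts) && lastParts[common] == dirParts[common] {
		common++
	}
	// Emit each missing parent, but never deeper than level components
	limit := len(dirParts)
	if limit > level {
		limit = level
	}
	for i := common + 1; i <= limit; i++ {
		dirNames = append(dirNames, strings.Join(dirParts[:i], "/"))
	}
	return dirNames, newLastDir
}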

58
b2/b2_test.go Normal file

@@ -0,0 +1,58 @@
// Test B2 filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package b2_test
import (
"testing"
"github.com/ncw/rclone/b2"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*b2.Object)(nil))
fstests.RemoteName = "TestB2:"
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }

301
b2/upload.go Normal file

@@ -0,0 +1,301 @@
// Upload large files for b2
//
// Docs - https://www.backblaze.com/b2/docs/large_files.html
package b2
import (
"bytes"
"crypto/sha1"
"fmt"
"io"
"sync"
"github.com/ncw/rclone/b2/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/rest"
"github.com/pkg/errors"
)
// largeUpload is used to control the upload of large files which need chunking
type largeUpload struct {
f *Fs // parent Fs
o *Object // object being uploaded
in io.Reader // read the data from here
id string // ID of the file being uploaded
size int64 // total size
parts int64 // calculated number of parts
sha1s []string // slice of SHA1s for each part
uploadMu sync.Mutex // lock for upload variable
uploads []*api.GetUploadPartURLResponse // result of get upload URL calls
}
// newLargeUpload starts an upload of object o from in with metadata in src
func (f *Fs) newLargeUpload(o *Object, in io.Reader, src fs.ObjectInfo) (up *largeUpload, err error) {
remote := o.remote
size := src.Size()
parts := size / int64(chunkSize)
if size%int64(chunkSize) != 0 {
parts++
}
if parts > maxParts {
return nil, errors.Errorf("%q too big (%d bytes) makes too many parts %d > %d - increase --b2-chunk-size", remote, size, parts, maxParts)
}
modTime := src.ModTime()
opts := rest.Opts{
Method: "POST",
Path: "/b2_start_large_file",
}
bucketID, err := f.getBucketID()
if err != nil {
return nil, err
}
var request = api.StartLargeFileRequest{
BucketID: bucketID,
Name: o.fs.root + remote,
ContentType: fs.MimeType(src),
Info: map[string]string{
timeKey: timeString(modTime),
},
}
// Set the SHA1 if known
if calculatedSha1, err := src.Hash(fs.HashSHA1); err == nil && calculatedSha1 != "" {
request.Info[sha1Key] = calculatedSha1
}
var response api.StartLargeFileResponse
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &request, &response)
return f.shouldRetry(resp, err)
})
if err != nil {
return nil, err
}
up = &largeUpload{
f: f,
o: o,
in: in,
id: response.ID,
size: size,
parts: parts,
sha1s: make([]string, parts),
}
return up, nil
}
// getUploadURL returns the upload info with the UploadURL and the AuthorizationToken
//
// This should be returned with returnUploadURL when finished
func (up *largeUpload) getUploadURL() (upload *api.GetUploadPartURLResponse, err error) {
up.uploadMu.Lock()
defer up.uploadMu.Unlock()
if len(up.uploads) == 0 {
opts := rest.Opts{
Method: "POST",
Path: "/b2_get_upload_part_url",
}
var request = api.GetUploadPartURLRequest{
ID: up.id,
}
err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(&opts, &request, &upload)
return up.f.shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to get upload URL")
}
} else {
upload, up.uploads = up.uploads[0], up.uploads[1:]
}
return upload, nil
}
// returnUploadURL returns the UploadURL to the cache
func (up *largeUpload) returnUploadURL(upload *api.GetUploadPartURLResponse) {
if upload == nil {
return
}
up.uploadMu.Lock()
up.uploads = append(up.uploads, upload)
up.uploadMu.Unlock()
}
// clearUploadURL clears the current UploadURL and the AuthorizationToken
func (up *largeUpload) clearUploadURL() {
up.uploadMu.Lock()
up.uploads = nil
up.uploadMu.Unlock()
}
// Transfer a chunk
func (up *largeUpload) transferChunk(part int64, body []byte) error {
calculatedSHA1 := fmt.Sprintf("%x", sha1.Sum(body))
up.sha1s[part-1] = calculatedSHA1
size := int64(len(body))
err := up.f.pacer.Call(func() (bool, error) {
fs.Debug(up.o, "Sending chunk %d length %d", part, len(body))
// Get upload URL
upload, err := up.getUploadURL()
if err != nil {
return false, err
}
// Authorization
//
// An upload authorization token, from b2_get_upload_part_url.
//
// X-Bz-Part-Number
//
// A number from 1 to 10000. The parts uploaded for one file
// must have contiguous numbers, starting with 1.
//
// Content-Length
//
// The number of bytes in the file being uploaded. Note that
// this header is required; you cannot leave it out and just
// use chunked encoding. The minimum size of every part but
// the last one is 100MB.
//
// X-Bz-Content-Sha1
//
// The SHA1 checksum of this part of the file. B2 will
// check this when the part is uploaded, to make sure that the
// data arrived correctly. The same SHA1 checksum must be
// passed to b2_finish_large_file.
opts := rest.Opts{
Method: "POST",
Absolute: true,
Path: upload.UploadURL,
Body: fs.AccountPart(up.o, bytes.NewBuffer(body)),
ExtraHeaders: map[string]string{
"Authorization": upload.AuthorizationToken,
"X-Bz-Part-Number": fmt.Sprintf("%d", part),
sha1Header: calculatedSHA1,
},
ContentLength: &size,
}
var response api.UploadPartResponse
resp, err := up.f.srv.CallJSON(&opts, nil, &response)
retry, err := up.f.shouldRetryNoReauth(resp, err)
// On retryable error clear PartUploadURL
if retry {
fs.Debug(up.o, "Clearing part upload URL because of error: %v", err)
upload = nil
}
up.returnUploadURL(upload)
return retry, err
})
if err != nil {
fs.Debug(up.o, "Error sending chunk %d: %v", part, err)
} else {
fs.Debug(up.o, "Done sending chunk %d", part)
}
return err
}
// finish closes off the large upload
func (up *largeUpload) finish() error {
opts := rest.Opts{
Method: "POST",
Path: "/b2_finish_large_file",
}
var request = api.FinishLargeFileRequest{
ID: up.id,
SHA1s: up.sha1s,
}
var response api.FileInfo
err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(&opts, &request, &response)
return up.f.shouldRetry(resp, err)
})
if err != nil {
return err
}
return up.o.decodeMetaDataFileInfo(&response)
}
// cancel aborts the large upload
func (up *largeUpload) cancel() error {
opts := rest.Opts{
Method: "POST",
Path: "/b2_cancel_large_file",
}
var request = api.CancelLargeFileRequest{
ID: up.id,
}
var response api.CancelLargeFileResponse
err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(&opts, &request, &response)
return up.f.shouldRetry(resp, err)
})
return err
}
// Upload uploads the chunks from the input
func (up *largeUpload) Upload() error {
fs.Debug(up.o, "Starting upload of large file in %d chunks (id %q)", up.parts, up.id)
remaining := up.size
errs := make(chan error, 1)
var wg sync.WaitGroup
var err error
fs.AccountByPart(up.o) // Cancel whole file accounting before reading
outer:
for part := int64(1); part <= up.parts; part++ {
// Check any errors
select {
case err = <-errs:
break outer
default:
}
reqSize := remaining
if reqSize >= int64(chunkSize) {
reqSize = int64(chunkSize)
}
// Read the chunk
buf := make([]byte, reqSize)
_, err = io.ReadFull(up.in, buf)
if err != nil {
break outer
}
// Transfer the chunk
// Get upload Token
up.f.getUploadToken()
wg.Add(1)
go func(part int64, buf []byte) {
defer up.f.returnUploadToken()
defer wg.Done()
err := up.transferChunk(part, buf)
if err != nil {
select {
case errs <- err:
default:
}
}
}(part, buf)
remaining -= reqSize
}
wg.Wait()
if err == nil {
select {
case err = <-errs:
default:
}
}
if err != nil {
fs.Debug(up.o, "Cancelling large file upload due to error: %v", err)
cancelErr := up.cancel()
if cancelErr != nil {
fs.ErrorLog(up.o, "Failed to cancel large file upload: %v", cancelErr)
}
return err
}
// Check any errors
fs.Debug(up.o, "Finishing large file upload")
return up.finish()
}

31
cmd/all/all.go Normal file

@@ -0,0 +1,31 @@
// Package all imports all the commands
package all
import (
// Active commands
_ "github.com/ncw/rclone/cmd"
_ "github.com/ncw/rclone/cmd/authorize"
_ "github.com/ncw/rclone/cmd/cat"
_ "github.com/ncw/rclone/cmd/check"
_ "github.com/ncw/rclone/cmd/cleanup"
_ "github.com/ncw/rclone/cmd/config"
_ "github.com/ncw/rclone/cmd/copy"
_ "github.com/ncw/rclone/cmd/dedupe"
_ "github.com/ncw/rclone/cmd/delete"
_ "github.com/ncw/rclone/cmd/genautocomplete"
_ "github.com/ncw/rclone/cmd/gendocs"
_ "github.com/ncw/rclone/cmd/ls"
_ "github.com/ncw/rclone/cmd/lsd"
_ "github.com/ncw/rclone/cmd/lsl"
_ "github.com/ncw/rclone/cmd/md5sum"
_ "github.com/ncw/rclone/cmd/memtest"
_ "github.com/ncw/rclone/cmd/mkdir"
_ "github.com/ncw/rclone/cmd/mount"
_ "github.com/ncw/rclone/cmd/move"
_ "github.com/ncw/rclone/cmd/purge"
_ "github.com/ncw/rclone/cmd/rmdir"
_ "github.com/ncw/rclone/cmd/sha1sum"
_ "github.com/ncw/rclone/cmd/size"
_ "github.com/ncw/rclone/cmd/sync"
_ "github.com/ncw/rclone/cmd/version"
)

24
cmd/authorize/authorize.go Normal file

@@ -0,0 +1,24 @@
package authorize
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(authorizeCmd)
}
var authorizeCmd = &cobra.Command{
Use: "authorize",
Short: `Remote authorization.`,
Long: `
Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 3, command, args)
fs.Authorize(args)
},
}

40
cmd/cat/cat.go Normal file

@@ -0,0 +1,40 @@
package cat
import (
"os"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(catCmd)
}
var catCmd = &cobra.Command{
Use: "cat remote:path",
Short: `Concatenates any files and sends them to stdout.`,
Long: `
rclone cat sends any files to standard output.
You can use it like this to output a single file
rclone cat remote:path/to/file
Or like this to output any file in dir or subdirectories.
rclone cat remote:path/to/dir
Or like this to output any .txt files in dir or subdirectories.
rclone --include "*.txt" cat remote:path/to/dir
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, command, func() error {
return fs.Cat(fsrc, os.Stdout)
})
},
}

30
cmd/check/check.go Normal file

@@ -0,0 +1,30 @@
package check
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(checkCmd)
}
var checkCmd = &cobra.Command{
Use: "check source:path dest:path",
Short: `Checks the files in the source and destination match.`,
Long: `
Checks the files in the source and destination match. It
compares sizes and MD5SUMs and prints a report of files which
don't match. It doesn't alter the source or destination.
` + "`" + `--size-only` + "`" + ` may be used to only compare the sizes, not the MD5SUMs.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsrc, fdst := cmd.NewFsSrcDst(args)
cmd.Run(false, command, func() error {
return fs.Check(fdst, fsrc)
})
},
}

27
cmd/cleanup/cleanup.go Normal file

@@ -0,0 +1,27 @@
package cleanup
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(cleanupCmd)
}
var cleanupCmd = &cobra.Command{
Use: "cleanup remote:path",
Short: `Clean up the remote if possible`,
Long: `
Clean up the remote if possible. Empty the trash or delete old file
versions. Not supported by all remotes.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(true, command, func() error {
return fs.CleanUp(fsrc)
})
},
}

293
cmd/cmd.go Normal file

@@ -0,0 +1,293 @@
// Package cmd implements the rclone command
//
// It is in a sub package so its internals can be re-used elsewhere
package cmd
// FIXME only attach the remote flags when using a remote???
// would probably mean bringing all the flags in to here? Or define some flagsets in fs...
import (
"fmt"
"log"
"os"
"path"
"runtime"
"runtime/pprof"
"time"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"github.com/ncw/rclone/fs"
)
// Globals
var (
// Flags
cpuProfile = pflag.StringP("cpuprofile", "", "", "Write cpu profile to file")
memProfile = pflag.String("memprofile", "", "Write memory profile to file")
statsInterval = pflag.DurationP("stats", "", time.Minute*1, "Interval to print stats (0 to disable)")
version bool
logFile = pflag.StringP("log-file", "", "", "Log everything to this file")
retries = pflag.IntP("retries", "", 3, "Retry operations this many times if they fail")
)
// Root is the main rclone command
var Root = &cobra.Command{
Use: "rclone",
Short: "Sync files and directories to and from local and remote object stores - " + fs.Version,
Long: `
Rclone is a command line program to sync files and directories to and
from various cloud storage systems, such as:
* Google Drive
* Amazon S3
* Openstack Swift / Rackspace cloud files / Memset Memstore
* Dropbox
* Google Cloud Storage
* Amazon Drive
* Microsoft One Drive
* Hubic
* Backblaze B2
* Yandex Disk
* The local filesystem
Features
* MD5/SHA1 hashes checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* Copy mode to just copy new/changed files
* Sync (one way) mode to make a directory identical
* Check mode to check for file hash equality
* Can sync to and from network, eg two different cloud accounts
See the home page for installation, usage, documentation, changelog
and configuration walkthroughs.
* http://rclone.org/
`,
}
// runRoot implements the main rclone command with no subcommands
func runRoot(cmd *cobra.Command, args []string) {
if version {
ShowVersion()
os.Exit(0)
} else {
_ = Root.Usage()
fmt.Fprintf(os.Stderr, "Command not found.\n")
os.Exit(1)
}
}
func init() {
Root.Run = runRoot
Root.Flags().BoolVarP(&version, "version", "V", false, "Print the version number")
cobra.OnInitialize(initConfig)
}
// ShowVersion prints the version to stdout
func ShowVersion() {
fmt.Printf("rclone %s\n", fs.Version)
}
// newFsSrc creates a src Fs from a name
//
// This can point to a file
func newFsSrc(remote string) fs.Fs {
fsInfo, configName, fsPath, err := fs.ParseRemote(remote)
if err != nil {
fs.Stats.Error()
log.Fatalf("Failed to create file system for %q: %v", remote, err)
}
f, err := fsInfo.NewFs(configName, fsPath)
if err == fs.ErrorIsFile {
if !fs.Config.Filter.InActive() {
fs.Stats.Error()
log.Fatalf("Can't limit to single files when using filters: %v", remote)
}
// Limit transfers to this file
err = fs.Config.Filter.AddFile(path.Base(fsPath))
// Set --no-traverse since only one file is being transferred
fs.Config.NoTraverse = true
}
if err != nil {
fs.Stats.Error()
log.Fatalf("Failed to create file system for %q: %v", remote, err)
}
return f
}
// newFsDst creates a dst Fs from a name
//
// This must point to a directory
func newFsDst(remote string) fs.Fs {
f, err := fs.NewFs(remote)
if err != nil {
fs.Stats.Error()
log.Fatalf("Failed to create file system for %q: %v", remote, err)
}
return f
}
// NewFsSrcDst creates a new src and dst fs from the arguments
func NewFsSrcDst(args []string) (fs.Fs, fs.Fs) {
fsrc, fdst := newFsSrc(args[0]), newFsDst(args[1])
fs.CalculateModifyWindow(fdst, fsrc)
return fsrc, fdst
}
// NewFsSrc creates a new src fs from the arguments
func NewFsSrc(args []string) fs.Fs {
fsrc := newFsSrc(args[0])
fs.CalculateModifyWindow(fsrc)
return fsrc
}
// NewFsDst creates a new dst fs from the arguments
//
// Dst fs-es can't point to single files
func NewFsDst(args []string) fs.Fs {
fdst := newFsDst(args[0])
fs.CalculateModifyWindow(fdst)
return fdst
}
// Run the function with stats and retries if required
func Run(Retry bool, cmd *cobra.Command, f func() error) {
var err error
stopStats := startStats()
for try := 1; try <= *retries; try++ {
err = f()
if !Retry || (err == nil && !fs.Stats.Errored()) {
break
}
if fs.IsFatalError(err) {
fs.Log(nil, "Fatal error received - not attempting retries")
break
}
if fs.IsNoRetryError(err) {
fs.Log(nil, "Can't retry this error - not attempting retries")
break
}
if err != nil {
fs.Log(nil, "Attempt %d/%d failed with %d errors and: %v", try, *retries, fs.Stats.GetErrors(), err)
} else {
fs.Log(nil, "Attempt %d/%d failed with %d errors", try, *retries, fs.Stats.GetErrors())
}
if try < *retries {
fs.Stats.ResetErrors()
}
}
close(stopStats)
if err != nil {
log.Fatalf("Failed to %s: %v", cmd.Name(), err)
}
if !fs.Config.Quiet || fs.Stats.Errored() || *statsInterval > 0 {
fs.Log(nil, "%s", fs.Stats)
}
if fs.Config.Verbose {
fs.Debug(nil, "Go routines at exit %d\n", runtime.NumGoroutine())
}
if fs.Stats.Errored() {
os.Exit(1)
}
}
// CheckArgs checks there are enough arguments and prints a message if not
func CheckArgs(MinArgs, MaxArgs int, cmd *cobra.Command, args []string) {
if len(args) < MinArgs {
_ = cmd.Usage()
fmt.Fprintf(os.Stderr, "Command %s needs %d arguments minimum\n", cmd.Name(), MinArgs)
os.Exit(1)
} else if len(args) > MaxArgs {
_ = cmd.Usage()
fmt.Fprintf(os.Stderr, "Command %s needs %d arguments maximum\n", cmd.Name(), MaxArgs)
os.Exit(1)
}
}
// startStats prints the stats every statsInterval
//
// It returns a channel which should be closed to stop the stats.
func startStats() chan struct{} {
stopStats := make(chan struct{})
if *statsInterval > 0 {
go func() {
ticker := time.NewTicker(*statsInterval)
for {
select {
case <-ticker.C:
fs.Stats.Log()
case <-stopStats:
ticker.Stop()
return
}
}
}()
}
return stopStats
}
// initConfig is run by cobra after initialising the flags
func initConfig() {
// Log file output
if *logFile != "" {
f, err := os.OpenFile(*logFile, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0640)
if err != nil {
log.Fatalf("Failed to open log file: %v", err)
}
_, err = f.Seek(0, os.SEEK_END)
if err != nil {
fs.ErrorLog(nil, "Failed to seek log file to end: %v", err)
}
log.SetOutput(f)
fs.DebugLogger.SetOutput(f)
redirectStderr(f)
}
// Load the rest of the config now we have started the logger
fs.LoadConfig()
// Write the args for debug purposes
fs.Debug("rclone", "Version %q starting with parameters %q", fs.Version, os.Args)
// Setup CPU profiling if desired
if *cpuProfile != "" {
fs.Log(nil, "Creating CPU profile %q\n", *cpuProfile)
f, err := os.Create(*cpuProfile)
if err != nil {
fs.Stats.Error()
log.Fatal(err)
}
err = pprof.StartCPUProfile(f)
if err != nil {
fs.Stats.Error()
log.Fatal(err)
}
defer pprof.StopCPUProfile()
}
// Setup memory profiling if desired
if *memProfile != "" {
defer func() {
fs.Log(nil, "Saving Memory profile %q\n", *memProfile)
f, err := os.Create(*memProfile)
if err != nil {
fs.Stats.Error()
log.Fatal(err)
}
err = pprof.WriteHeapProfile(f)
if err != nil {
fs.Stats.Error()
log.Fatal(err)
}
err = f.Close()
if err != nil {
fs.Stats.Error()
log.Fatal(err)
}
}()
}
}
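cmd.go is the framework the subcommands below share: each package registers itself against Root in init(), validates its arguments with CheckArgs, builds its filesystems with NewFsSrc/NewFsDst/NewFsSrcDst, and hands the real work to Run, which adds stats printing and optional retries. A sketch of a hypothetical extra subcommand wired up the same way - the command itself is illustrative only and not part of this change:

// Sketch only - a made-up "countobjects" command showing the wiring pattern.
package countobjects

import (
	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(countCmd)
}

var countCmd = &cobra.Command{
	Use:   "countobjects remote:path",
	Short: `Count the objects and bytes in the path (illustrative only).`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		fsrc := cmd.NewFsSrc(args)
		// Run(false, ...) runs once; Run(true, ...) retries on errors
		cmd.Run(false, command, func() error {
			objects, size, err := fs.Count(fsrc)
			if err != nil {
				return err
			}
			fs.Log(fsrc, "%d objects, %d bytes", objects, size)
			return nil
		})
	},
}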

20
cmd/config/config.go Normal file

@@ -0,0 +1,20 @@
package config
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(configCmd)
}
var configCmd = &cobra.Command{
Use: "config",
Short: `Enter an interactive configuration session.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 0, command, args)
fs.EditConfig()
},
}

63
cmd/copy/copy.go Normal file

@@ -0,0 +1,63 @@
package copy
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(copyCmd)
}
var copyCmd = &cobra.Command{
Use: "copy source:path dest:path",
Short: `Copy files from source to dest, skipping already copied`,
Long: `
Copy the source to the destination. Doesn't transfer
unchanged files, testing by size and modification time or
MD5SUM. Doesn't delete files from the destination.
Note that it is always the contents of the directory that is synced,
not the directory so when source:path is a directory, it's the
contents of source:path that are copied, not the directory name and
contents.
If dest:path doesn't exist, it is created and the source:path contents
go there.
For example
rclone copy source:sourcepath dest:destpath
Let's say there are two files in sourcepath
sourcepath/one.txt
sourcepath/two.txt
This copies them to
destpath/one.txt
destpath/two.txt
Not to
destpath/sourcepath/one.txt
destpath/sourcepath/two.txt
If you are familiar with ` + "`" + `rsync` + "`" + `, rclone always works as if you had
written a trailing / - meaning "copy the contents of this directory".
This applies to all commands and whether you are talking about the
source or destination.
See the ` + "`" + `--no-traverse` + "`" + ` option for controlling whether rclone lists
the destination directory or not.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsrc, fdst := cmd.NewFsSrcDst(args)
cmd.Run(true, command, func() error {
return fs.CopyDir(fdst, fsrc)
})
},
}

113
cmd/dedupe/dedupe.go Normal file

@@ -0,0 +1,113 @@
package dedupe
import (
"log"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
var (
dedupeMode = fs.DeduplicateInteractive
)
func init() {
cmd.Root.AddCommand(dedupeCmd)
dedupeCmd.Flags().VarP(&dedupeMode, "dedupe-mode", "", "Dedupe mode interactive|skip|first|newest|oldest|rename.")
}
var dedupeCmd = &cobra.Command{
Use: "dedupe [mode] remote:path",
Short: `Interactively find duplicate files and delete/rename them.`,
Long: `
By default ` + "`" + `dedupe` + "`" + ` interactively finds duplicate files and offers to
delete all but one or rename them to be different. Only useful with
Google Drive which can have duplicate file names.
The ` + "`" + `dedupe` + "`" + ` command will delete all but one of any identical (same
md5sum) files it finds without confirmation. This means that for most
duplicated files the ` + "`" + `dedupe` + "`" + ` command will not be interactive. You
can use ` + "`" + `--dry-run` + "`" + ` to see what would happen without doing anything.
Here is an example run.
Before - with duplicates
$ rclone lsl drive:dupes
6048320 2016-03-05 16:23:16.798000000 one.txt
6048320 2016-03-05 16:23:11.775000000 one.txt
564374 2016-03-05 16:23:06.731000000 one.txt
6048320 2016-03-05 16:18:26.092000000 one.txt
6048320 2016-03-05 16:22:46.185000000 two.txt
1744073 2016-03-05 16:22:38.104000000 two.txt
564374 2016-03-05 16:22:52.118000000 two.txt
Now the ` + "`" + `dedupe` + "`" + ` session
$ rclone dedupe drive:dupes
2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
one.txt: Found 4 duplicates - deleting identical copies
one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36")
one.txt: 2 duplicates remain
1: 6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
2: 564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> k
Enter the number of the file to keep> 1
one.txt: Deleted 1 extra copies
two.txt: Found 3 duplicates - deleting identical copies
two.txt: 3 duplicates remain
1: 564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
2: 6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
3: 1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> r
two-1.txt: renamed from: two.txt
two-2.txt: renamed from: two.txt
two-3.txt: renamed from: two.txt
The result being
$ rclone lsl drive:dupes
6048320 2016-03-05 16:23:16.798000000 one.txt
564374 2016-03-05 16:22:52.118000000 two-1.txt
6048320 2016-03-05 16:22:46.185000000 two-2.txt
1744073 2016-03-05 16:22:38.104000000 two-3.txt
Dedupe can be run non interactively using the ` + "`" + `--dedupe-mode` + "`" + ` flag or by using an extra parameter with the same value
* ` + "`" + `--dedupe-mode interactive` + "`" + ` - interactive as above.
* ` + "`" + `--dedupe-mode skip` + "`" + ` - removes identical files then skips anything left.
* ` + "`" + `--dedupe-mode first` + "`" + ` - removes identical files then keeps the first one.
* ` + "`" + `--dedupe-mode newest` + "`" + ` - removes identical files then keeps the newest one.
* ` + "`" + `--dedupe-mode oldest` + "`" + ` - removes identical files then keeps the oldest one.
* ` + "`" + `--dedupe-mode rename` + "`" + ` - removes identical files then renames the rest to be different.
For example to rename all the identically named photos in your Google Photos directory, do
rclone dedupe --dedupe-mode rename "drive:Google Photos"
Or
rclone dedupe rename "drive:Google Photos"
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 2, command, args)
if len(args) > 1 {
err := dedupeMode.Set(args[0])
if err != nil {
log.Fatal(err)
}
args = args[1:]
}
fdst := cmd.NewFsSrc(args)
cmd.Run(false, command, func() error {
return fs.Deduplicate(fdst, dedupeMode)
})
},
}

41
cmd/delete/delete.go Normal file

@@ -0,0 +1,41 @@
package delete
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(deleteCmd)
}
var deleteCmd = &cobra.Command{
Use: "delete remote:path",
Short: `Remove the contents of path.`,
Long: `
Remove the contents of path. Unlike ` + "`" + `purge` + "`" + ` it obeys include/exclude
filters so can be used to selectively delete files.
Eg delete all files bigger than 100MBytes
Check what would be deleted first (use either)
rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path
Then delete
rclone --min-size 100M delete remote:path
That reads "delete everything with a minimum size of 100 MB", hence
delete all files bigger than 100MBytes.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(true, command, func() error {
return fs.Delete(fsrc)
})
},
}

44
cmd/genautocomplete/genautocomplete.go Normal file

@@ -0,0 +1,44 @@
package genautocomplete
import (
"log"
"github.com/ncw/rclone/cmd"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(genautocompleteCmd)
}
var genautocompleteCmd = &cobra.Command{
Use: "genautocomplete [output_file]",
Short: `Output bash completion script for rclone.`,
Long: `
Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default so will
probably need to be run with sudo or as root, eg
sudo rclone genautocomplete
Logout and login again to use the autocompletion scripts, or source
them directly
. /etc/bash_completion
If you supply a command line argument the script will be written
there.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 1, command, args)
out := "/etc/bash_completion.d/rclone"
if len(args) > 0 {
out = args[0]
}
err := cmd.Root.GenBashCompletionFile(out)
if err != nil {
log.Fatal(err)
}
},
}

55
cmd/gendocs/gendocs.go Normal file

@@ -0,0 +1,55 @@
package gendocs
import (
"fmt"
"os"
"path"
"path/filepath"
"strings"
"time"
"github.com/ncw/rclone/cmd"
"github.com/spf13/cobra"
"github.com/spf13/cobra/doc"
)
func init() {
cmd.Root.AddCommand(gendocsCmd)
}
const gendocFrontmatterTemplate = `---
date: %s
title: "%s"
slug: %s
url: %s
---
`
var gendocsCmd = &cobra.Command{
Use: "gendocs output_directory",
Short: `Output markdown docs for rclone to the directory supplied.`,
Long: `
This produces markdown docs for the rclone commands to the directory
supplied. These are in a format suitable for hugo to render into the
rclone.org website.`,
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)
out := args[0]
err := os.MkdirAll(out, 0777)
if err != nil {
return err
}
now := time.Now().Format(time.RFC3339)
prepender := func(filename string) string {
name := filepath.Base(filename)
base := strings.TrimSuffix(name, path.Ext(name))
url := "/commands/" + strings.ToLower(base) + "/"
return fmt.Sprintf(gendocFrontmatterTemplate, now, strings.Replace(base, "_", " ", -1), base, url)
}
linkHandler := func(name string) string {
base := strings.TrimSuffix(name, path.Ext(name))
return "/commands/" + strings.ToLower(base) + "/"
}
return doc.GenMarkdownTreeCustom(cmd.Root, out, prepender, linkHandler)
},
}

25
cmd/ls/ls.go Normal file

@@ -0,0 +1,25 @@
package ls
import (
"os"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(lsCmd)
}
var lsCmd = &cobra.Command{
Use: "ls remote:path",
Short: `List all the objects in the path with size and path.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, command, func() error {
return fs.List(fsrc, os.Stdout)
})
},
}

25
cmd/lsd/lsd.go Normal file

@@ -0,0 +1,25 @@
package lsd
import (
"os"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(lsdCmd)
}
var lsdCmd = &cobra.Command{
Use: "lsd remote:path",
Short: `List all directories/containers/buckets in the path.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, command, func() error {
return fs.ListDir(fsrc, os.Stdout)
})
},
}

25
cmd/lsl/lsl.go Normal file

@@ -0,0 +1,25 @@
package lsl
import (
"os"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(lslCmd)
}
var lslCmd = &cobra.Command{
Use: "lsl remote:path",
Short: `List all the objects in the path with modification time, size and path.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, command, func() error {
return fs.ListLong(fsrc, os.Stdout)
})
},
}

29
cmd/md5sum/md5sum.go Normal file

@@ -0,0 +1,29 @@
package md5sum
import (
"os"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(md5sumCmd)
}
var md5sumCmd = &cobra.Command{
Use: "md5sum remote:path",
Short: `Produces an md5sum file for all the objects in the path.`,
Long: `
Produces an md5sum file for all the objects in the path. This
is in the same format as the standard md5sum tool produces.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, command, func() error {
return fs.Md5sum(fsrc, os.Stdout)
})
},
}

49
cmd/memtest/memtest.go Normal file

@@ -0,0 +1,49 @@
package memtest
import (
"runtime"
"sync"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(memtestCmd)
}
var memtestCmd = &cobra.Command{
Use: "memtest remote:path",
Short: `Load all the objects at remote:path and report memory stats.`,
Hidden: true,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, command, func() error {
objects, _, err := fs.Count(fsrc)
if err != nil {
return err
}
objs := make([]fs.Object, 0, objects)
var before, after runtime.MemStats
runtime.GC()
runtime.ReadMemStats(&before)
var mu sync.Mutex
err = fs.ListFn(fsrc, func(o fs.Object) {
mu.Lock()
objs = append(objs, o)
mu.Unlock()
})
if err != nil {
return err
}
runtime.GC()
runtime.ReadMemStats(&after)
usedMemory := after.Alloc - before.Alloc
fs.Log(nil, "%d objects took %d bytes, %.1f bytes/object", len(objs), usedMemory, float64(usedMemory)/float64(len(objs)))
fs.Log(nil, "System memory changed from %d to %d bytes a change of %d bytes", before.Sys, after.Sys, after.Sys-before.Sys)
return nil
})
},
}

23
cmd/mkdir/mkdir.go Normal file

@@ -0,0 +1,23 @@
package mkdir
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(mkdirCmd)
}
var mkdirCmd = &cobra.Command{
Use: "mkdir remote:path",
Short: `Make the path if it doesn't already exist.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fdst := cmd.NewFsDst(args)
cmd.Run(true, command, func() error {
return fs.Mkdir(fdst)
})
},
}

57
cmd/mount/createinfo.go Normal file

@@ -0,0 +1,57 @@
// +build linux darwin freebsd
package mount
import (
"time"
"github.com/ncw/rclone/fs"
)
// info to create a new object
type createInfo struct {
f fs.Fs
remote string
}
func newCreateInfo(f fs.Fs, remote string) *createInfo {
return &createInfo{
f: f,
remote: remote,
}
}
// Fs returns read only access to the Fs that this object is part of
func (ci *createInfo) Fs() fs.Info {
return ci.f
}
// Remote returns the remote path
func (ci *createInfo) Remote() string {
return ci.remote
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (ci *createInfo) Hash(fs.HashType) (string, error) {
return "", fs.ErrHashUnsupported
}
// ModTime returns the modification date of the file
// It should return a best guess if one isn't available
func (ci *createInfo) ModTime() time.Time {
return time.Now()
}
// Size returns the size of the file
func (ci *createInfo) Size() int64 {
// FIXME this means this won't work with all remotes...
return 0
}
// Storable says whether this object can be stored
func (ci *createInfo) Storable() bool {
return true
}
var _ fs.ObjectInfo = (*createInfo)(nil)

377
cmd/mount/dir.go Normal file

@@ -0,0 +1,377 @@
// +build linux darwin freebsd
package mount
import (
"os"
"path"
"sync"
"time"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/ncw/rclone/fs"
"github.com/pkg/errors"
"golang.org/x/net/context"
)
// DirEntry describes the contents of a directory entry
//
// It can be a file or a directory
//
// node may be nil, but o may not
type DirEntry struct {
o fs.BasicInfo
node fusefs.Node
}
// Dir represents a directory entry
type Dir struct {
f fs.Fs
path string
mu sync.RWMutex // protects the following
read bool
items map[string]*DirEntry
}
func newDir(f fs.Fs, path string) *Dir {
return &Dir{
f: f,
path: path,
}
}
// addObject adds a new object or directory to the directory
//
// note that we add new objects rather than updating old ones
func (d *Dir) addObject(o fs.BasicInfo, node fusefs.Node) *DirEntry {
item := &DirEntry{
o: o,
node: node,
}
d.mu.Lock()
d.items[path.Base(o.Remote())] = item
d.mu.Unlock()
return item
}
// delObject removes an object from the directory
func (d *Dir) delObject(leaf string) {
d.mu.Lock()
delete(d.items, leaf)
d.mu.Unlock()
}
// read the directory
func (d *Dir) readDir() error {
d.mu.Lock()
defer d.mu.Unlock()
if d.read {
return nil
}
objs, dirs, err := fs.NewLister().SetLevel(1).Start(d.f, d.path).GetAll()
if err == fs.ErrorDirNotFound {
// We treat directory not found as empty because we
// create directories on the fly
} else if err != nil {
return err
}
// Cache the items by name
d.items = make(map[string]*DirEntry, len(objs)+len(dirs))
for _, obj := range objs {
name := path.Base(obj.Remote())
d.items[name] = &DirEntry{
o: obj,
node: nil,
}
}
for _, dir := range dirs {
name := path.Base(dir.Remote())
d.items[name] = &DirEntry{
o: dir,
node: nil,
}
}
d.read = true
return nil
}
// lookup a single item in the directory
//
// returns fuse.ENOENT if not found.
func (d *Dir) lookup(leaf string) (*DirEntry, error) {
err := d.readDir()
if err != nil {
return nil, err
}
d.mu.RLock()
item, ok := d.items[leaf]
d.mu.RUnlock()
if !ok {
return nil, fuse.ENOENT
}
return item, nil
}
// Check to see if a directory is empty
func (d *Dir) isEmpty() (bool, error) {
err := d.readDir()
if err != nil {
return false, err
}
d.mu.RLock()
defer d.mu.RUnlock()
return len(d.items) == 0, nil
}
// Check interface satisfied
var _ fusefs.Node = (*Dir)(nil)
// Attr updates the attributes of a directory
func (d *Dir) Attr(ctx context.Context, a *fuse.Attr) error {
fs.Debug(d.path, "Dir.Attr")
a.Mode = os.ModeDir | dirPerms
// FIXME include Valid so get some caching? Also mtime
return nil
}
// lookupNode calls lookup then makes sure the node is not nil in the DirEntry
func (d *Dir) lookupNode(leaf string) (item *DirEntry, err error) {
item, err = d.lookup(leaf)
if err != nil {
return nil, err
}
if item.node != nil {
return item, nil
}
var node fusefs.Node
switch x := item.o.(type) {
case fs.Object:
node, err = newFile(d, x), nil
case *fs.Dir:
node, err = newDir(d.f, x.Remote()), nil
default:
err = errors.Errorf("unknown type %T", item)
}
if err != nil {
return nil, err
}
item = d.addObject(item.o, node)
return item, err
}
// Check interface satisfied
var _ fusefs.NodeRequestLookuper = (*Dir)(nil)
// Lookup looks up a specific entry in the receiver.
//
// Lookup should return a Node corresponding to the entry. If the
// name does not exist in the directory, Lookup should return ENOENT.
//
// Lookup need not handle the names "." and "..".
func (d *Dir) Lookup(ctx context.Context, req *fuse.LookupRequest, resp *fuse.LookupResponse) (node fusefs.Node, err error) {
path := path.Join(d.path, req.Name)
fs.Debug(path, "Dir.Lookup")
item, err := d.lookupNode(req.Name)
if err != nil {
if err != fuse.ENOENT {
fs.ErrorLog(path, "Dir.Lookup error: %v", err)
}
return nil, err
}
fs.Debug(path, "Dir.Lookup OK")
return item.node, nil
}
// Check interface satisfied
var _ fusefs.HandleReadDirAller = (*Dir)(nil)
// ReadDirAll reads the contents of the directory
func (d *Dir) ReadDirAll(ctx context.Context) (dirents []fuse.Dirent, err error) {
fs.Debug(d.path, "Dir.ReadDirAll")
err = d.readDir()
if err != nil {
fs.Debug(d.path, "Dir.ReadDirAll error: %v", err)
return nil, err
}
d.mu.RLock()
defer d.mu.RUnlock()
for _, item := range d.items {
var dirent fuse.Dirent
switch x := item.o.(type) {
case fs.Object:
dirent = fuse.Dirent{
// Inode FIXME ???
Type: fuse.DT_File,
Name: path.Base(x.Remote()),
}
case *fs.Dir:
dirent = fuse.Dirent{
// Inode FIXME ???
Type: fuse.DT_Dir,
Name: path.Base(x.Remote()),
}
default:
err = errors.Errorf("unknown type %T", item)
fs.ErrorLog(d.path, "Dir.ReadDirAll error: %v", err)
return nil, err
}
dirents = append(dirents, dirent)
}
fs.Debug(d.path, "Dir.ReadDirAll OK with %d entries", len(dirents))
return dirents, nil
}
var _ fusefs.NodeCreater = (*Dir)(nil)
// Create makes a new file
func (d *Dir) Create(ctx context.Context, req *fuse.CreateRequest, resp *fuse.CreateResponse) (fusefs.Node, fusefs.Handle, error) {
path := path.Join(d.path, req.Name)
fs.Debug(path, "Dir.Create")
src := newCreateInfo(d.f, path)
// This gets added to the directory when the file is written
file := newFile(d, nil)
fh, err := newWriteFileHandle(d, file, src)
if err != nil {
fs.ErrorLog(path, "Dir.Create error: %v", err)
return nil, nil, err
}
fs.Debug(path, "Dir.Create OK")
return file, fh, nil
}
var _ fusefs.NodeMkdirer = (*Dir)(nil)
// Mkdir creates a new directory
func (d *Dir) Mkdir(ctx context.Context, req *fuse.MkdirRequest) (fusefs.Node, error) {
// We just pretend to have created the directory - rclone will
// actually create the directory if we write files into it
path := path.Join(d.path, req.Name)
fs.Debug(path, "Dir.Mkdir")
fsDir := &fs.Dir{
Name: path,
When: time.Now(),
}
dir := newDir(d.f, path)
d.addObject(fsDir, dir)
fs.Debug(path, "Dir.Mkdir OK")
return dir, nil
}
var _ fusefs.NodeRemover = (*Dir)(nil)
// Remove removes the entry with the given name from
// the receiver, which must be a directory. The entry to be removed
// may correspond to a file (unlink) or to a directory (rmdir).
func (d *Dir) Remove(ctx context.Context, req *fuse.RemoveRequest) error {
path := path.Join(d.path, req.Name)
fs.Debug(path, "Dir.Remove")
item, err := d.lookupNode(req.Name)
if err != nil {
fs.ErrorLog(path, "Dir.Remove error: %v", err)
return err
}
switch x := item.o.(type) {
case fs.Object:
err = x.Remove()
if err != nil {
fs.ErrorLog(path, "Dir.Remove file error: %v", err)
return err
}
case *fs.Dir:
// Do nothing for deleting directory - rclone can't
// currently remove a random directory
//
// Check directory is empty first though
dir := item.node.(*Dir)
empty, err := dir.isEmpty()
if err != nil {
fs.ErrorLog(path, "Dir.Remove dir error: %v", err)
return err
}
if !empty {
// return fuse.ENOTEMPTY - doesn't exist though so use EEXIST
fs.ErrorLog(path, "Dir.Remove not empty")
return fuse.EEXIST
}
default:
fs.ErrorLog(path, "Dir.Remove unknown type %T", item)
return errors.Errorf("unknown type %T", item)
}
// Remove the item from the directory listing
d.delObject(req.Name)
fs.Debug(path, "Dir.Remove OK")
return nil
}
// Check interface satisfied
var _ fusefs.NodeRenamer = (*Dir)(nil)
// Rename the file
func (d *Dir) Rename(ctx context.Context, req *fuse.RenameRequest, newDir fusefs.Node) error {
oldPath := path.Join(d.path, req.OldName)
destDir, ok := newDir.(*Dir)
if !ok {
err := errors.Errorf("Unknown Dir type %T", newDir)
fs.ErrorLog(oldPath, "Dir.Rename error: %v", err)
return err
}
newPath := path.Join(destDir.path, req.NewName)
fs.Debug(oldPath, "Dir.Rename to %q", newPath)
oldItem, err := d.lookupNode(req.OldName)
if err != nil {
fs.ErrorLog(oldPath, "Dir.Rename error: %v", err)
return err
}
var newObj fs.BasicInfo
switch x := oldItem.o.(type) {
case fs.Object:
oldObject := x
do, ok := d.f.(fs.Mover)
if !ok {
err := errors.Errorf("Fs %q can't Move files", d.f)
fs.ErrorLog(oldPath, "Dir.Rename error: %v", err)
return err
}
newObject, err := do.Move(oldObject, newPath)
if err != nil {
fs.ErrorLog(oldPath, "Dir.Rename error: %v", err)
return err
}
newObj = newObject
case *fs.Dir:
oldDir := oldItem.node.(*Dir)
empty, err := oldDir.isEmpty()
if err != nil {
fs.ErrorLog(oldPath, "Dir.Rename dir error: %v", err)
return err
}
if !empty {
// return fuse.ENOTEMPTY - doesn't exist though so use EEXIST
fs.ErrorLog(oldPath, "Dir.Rename can't rename non empty directory")
return fuse.EEXIST
}
newObj = &fs.Dir{
Name: newPath,
When: time.Now(),
}
default:
err = errors.Errorf("unknown type %T", oldItem)
fs.ErrorLog(d.path, "Dir.ReadDirAll error: %v", err)
return err
}
// Show moved - delete from old dir and add to new
d.delObject(req.OldName)
destDir.addObject(newObj, nil)
// FIXME need to flush the dir also
// FIXME use DirMover to move a directory?
// or maybe use MoveDir which can move anything
// fallback to Copy/Delete if no Move?
// if dir is empty then can move it
fs.ErrorLog(newPath, "Dir.Rename renamed from %q", oldPath)
return nil
}

cmd/mount/dir_test.go (new file)
@@ -0,0 +1,133 @@
// +build linux darwin freebsd
package mount
import (
"os"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestDirLs(t *testing.T) {
run.skipIfNoFUSE(t)
run.checkDir(t, "")
run.mkdir(t, "a directory")
run.createFile(t, "a file", "hello")
run.checkDir(t, "a directory/|a file 5")
run.rmdir(t, "a directory")
run.rm(t, "a file")
run.checkDir(t, "")
}
func TestDirCreateAndRemoveDir(t *testing.T) {
run.skipIfNoFUSE(t)
run.mkdir(t, "dir")
run.mkdir(t, "dir/subdir")
run.checkDir(t, "dir/|dir/subdir/")
// Check we can't delete a directory with stuff in
err := os.Remove(run.path("dir"))
assert.Error(t, err, "file exists")
// Now delete subdir then dir - should produce no errors
run.rmdir(t, "dir/subdir")
run.checkDir(t, "dir/")
run.rmdir(t, "dir")
run.checkDir(t, "")
}
func TestDirCreateAndRemoveFile(t *testing.T) {
run.skipIfNoFUSE(t)
run.mkdir(t, "dir")
run.createFile(t, "dir/file", "potato")
run.checkDir(t, "dir/|dir/file 6")
// Check we can't delete a directory with stuff in
err := os.Remove(run.path("dir"))
assert.Error(t, err, "file exists")
// Now delete file
run.rm(t, "dir/file")
run.checkDir(t, "dir/")
run.rmdir(t, "dir")
run.checkDir(t, "")
}
func TestDirRenameFile(t *testing.T) {
run.skipIfNoFUSE(t)
run.mkdir(t, "dir")
run.createFile(t, "file", "potato")
run.checkDir(t, "dir/|file 6")
err := os.Rename(run.path("file"), run.path("dir/file2"))
require.NoError(t, err)
run.checkDir(t, "dir/|dir/file2 6")
err = os.Rename(run.path("dir/file2"), run.path("dir/file3"))
require.NoError(t, err)
run.checkDir(t, "dir/|dir/file3 6")
run.rm(t, "dir/file3")
run.rmdir(t, "dir")
run.checkDir(t, "")
}
func TestDirRenameEmptyDir(t *testing.T) {
run.skipIfNoFUSE(t)
run.mkdir(t, "dir")
run.mkdir(t, "dir1")
run.checkDir(t, "dir/|dir1/")
err := os.Rename(run.path("dir1"), run.path("dir/dir2"))
require.NoError(t, err)
run.checkDir(t, "dir/|dir/dir2/")
err = os.Rename(run.path("dir/dir2"), run.path("dir/dir3"))
require.NoError(t, err)
run.checkDir(t, "dir/|dir/dir3/")
run.rmdir(t, "dir/dir3")
run.rmdir(t, "dir")
run.checkDir(t, "")
}
func TestDirRenameFullDir(t *testing.T) {
run.skipIfNoFUSE(t)
run.mkdir(t, "dir")
run.mkdir(t, "dir1")
run.createFile(t, "dir1/potato.txt", "maris piper")
run.checkDir(t, "dir/|dir1/|dir1/potato.txt 11")
err := os.Rename(run.path("dir1"), run.path("dir/dir2"))
require.Error(t, err, "file exists")
// Can't currently rename directories with stuff in
/*
require.NoError(t, err)
run.checkDir(t, "dir/|dir/dir2/|dir/dir2/potato.txt 11")
err = os.Rename(run.path("dir/dir2"), run.path("dir/dir3"))
require.NoError(t, err)
run.checkDir(t, "dir/|dir/dir3/|dir/dir3/potato.txt 11")
run.rm(t, "dir/dir3/potato.txt")
run.rmdir(t, "dir/dir3")
*/
run.rm(t, "dir1/potato.txt")
run.rmdir(t, "dir1")
run.rmdir(t, "dir")
run.checkDir(t, "")
}

cmd/mount/file.go (new file)
@@ -0,0 +1,142 @@
// +build linux darwin freebsd
package mount
import (
"sync"
"sync/atomic"
"time"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/ncw/rclone/fs"
"github.com/pkg/errors"
"golang.org/x/net/context"
)
// File represents a file
type File struct {
d *Dir // parent directory - read only
size int64 // size of file - read and written with atomic
mu sync.RWMutex // protects the following
o fs.Object // NB o may be nil if file is being written
writers int // number of writers for this file
}
// newFile creates a new File
func newFile(d *Dir, o fs.Object) *File {
return &File{
d: d,
o: o,
}
}
// addWriters increments or decrements the writers
func (f *File) addWriters(n int) {
f.mu.Lock()
f.writers += n
f.mu.Unlock()
}
// Check interface satisfied
var _ fusefs.Node = (*File)(nil)
// Attr fills out the attributes for the file
func (f *File) Attr(ctx context.Context, a *fuse.Attr) error {
f.mu.Lock()
defer f.mu.Unlock()
fs.Debug(f.o, "File.Attr")
a.Mode = filePerms
// if o is nil it isn't valid yet, so return the size so far
if f.o == nil {
a.Size = uint64(atomic.LoadInt64(&f.size))
} else {
a.Size = uint64(f.o.Size())
if !noModTime {
modTime := f.o.ModTime()
a.Atime = modTime
a.Mtime = modTime
a.Ctime = modTime
a.Crtime = modTime
}
}
return nil
}
// Update the size while writing
func (f *File) written(n int64) {
atomic.AddInt64(&f.size, n)
}
// Update the object when written
func (f *File) setObject(o fs.Object) {
f.mu.Lock()
defer f.mu.Unlock()
f.o = o
f.d.addObject(o, f)
}
// Wait for f.o to become non nil for a short time returning it or an
// error
//
// Call without the mutex held
func (f *File) waitForValidObject() (o fs.Object, err error) {
for i := 0; i < 50; i++ {
f.mu.Lock()
o = f.o
writers := f.writers
f.mu.Unlock()
if o != nil {
return o, nil
}
if writers == 0 {
return nil, errors.New("can't open file - writer failed")
}
time.Sleep(100 * time.Millisecond)
}
return nil, fuse.ENOENT
}
// Check interface satisfied
var _ fusefs.NodeOpener = (*File)(nil)
// Open the file for read or write
func (f *File) Open(ctx context.Context, req *fuse.OpenRequest, resp *fuse.OpenResponse) (fusefs.Handle, error) {
// if o is nil it isn't valid yet
o, err := f.waitForValidObject()
if err != nil {
return nil, err
}
fs.Debug(o, "File.Open")
// Files aren't seekable
resp.Flags |= fuse.OpenNonSeekable
switch {
case req.Flags.IsReadOnly():
return newReadFileHandle(o)
case req.Flags.IsWriteOnly():
src := newCreateInfo(f.d.f, o.Remote())
fh, err := newWriteFileHandle(f.d, f, src)
if err != nil {
return nil, err
}
return fh, nil
case req.Flags.IsReadWrite():
return nil, errors.New("can't open read and write")
}
/*
// File was opened in append-only mode, all writes will go to end
// of file. OS X does not provide this information.
OpenAppend OpenFlags = syscall.O_APPEND
OpenCreate OpenFlags = syscall.O_CREAT
OpenDirectory OpenFlags = syscall.O_DIRECTORY
OpenExclusive OpenFlags = syscall.O_EXCL
OpenNonblock OpenFlags = syscall.O_NONBLOCK
OpenSync OpenFlags = syscall.O_SYNC
OpenTruncate OpenFlags = syscall.O_TRUNC
*/
return nil, errors.New("can't figure out how to open")
}

cmd/mount/fs.go (new file)
@@ -0,0 +1,67 @@
// FUSE main Fs
// +build linux darwin freebsd
package mount
import (
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/ncw/rclone/fs"
)
// Default permissions
const (
dirPerms = 0755
filePerms = 0644
)
// FS represents the top level filing system
type FS struct {
f fs.Fs
}
// Check interface satisfied
var _ fusefs.FS = (*FS)(nil)
// Root returns the root node
func (f *FS) Root() (fusefs.Node, error) {
fs.Debug(f.f, "Root()")
return newDir(f.f, ""), nil
}
// mount the file system
//
// The mount point will be ready when this returns.
//
// returns an error, and an error channel for the serve process to
// report an error when fusermount is called.
func mount(f fs.Fs, mountpoint string) (<-chan error, error) {
c, err := fuse.Mount(mountpoint)
if err != nil {
return nil, err
}
filesys := &FS{
f: f,
}
// Serve the mount point in the background returning error to errChan
errChan := make(chan error, 1)
go func() {
err := fusefs.Serve(c, filesys)
closeErr := c.Close()
if err == nil {
err = closeErr
}
errChan <- err
}()
// check if the mount process has an error to report
<-c.Ready
if err := c.MountError; err != nil {
return nil, err
}
return errChan, nil
}

cmd/mount/fs_test.go (new file)
@@ -0,0 +1,264 @@
// +build linux darwin freebsd
// Test suite for rclonefs
package mount
import (
"flag"
"fmt"
"io/ioutil"
"log"
"os"
"os/exec"
"path"
"strings"
"testing"
"github.com/ncw/rclone/fs"
_ "github.com/ncw/rclone/fs/all"
"github.com/ncw/rclone/fstest"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Globals
var (
RemoteName = flag.String("remote", "", "Remote to test with, defaults to local filesystem")
SubDir = flag.Bool("subdir", false, "Set to test with a sub directory")
Verbose = flag.Bool("verbose", false, "Set to enable logging")
DumpHeaders = flag.Bool("dump-headers", false, "Set to dump headers (needs -verbose)")
DumpBodies = flag.Bool("dump-bodies", false, "Set to dump bodies (needs -verbose)")
Individual = flag.Bool("individual", false, "Make individual bucket/container/directory for each test - much slower")
LowLevelRetries = flag.Int("low-level-retries", 10, "Number of low level retries")
)
// TestMain drives the tests
func TestMain(m *testing.M) {
flag.Parse()
run = newRun()
rc := m.Run()
run.Finalise()
os.Exit(rc)
}
// Run holds the remotes for a test run
type Run struct {
mountPath string
fremote fs.Fs
fremoteName string
cleanRemote func()
umountResult <-chan error
skip bool
}
// run holds the master Run data
var run *Run
// newRun initialises the remote mount for testing and returns a run
// object.
//
// r.fremote is an empty remote Fs
//
// Finalise() will tidy them away when done.
func newRun() *Run {
r := &Run{
umountResult: make(chan error, 1),
}
// Never ask for passwords, fail instead.
// If your local config is encrypted set environment variable
// "RCLONE_CONFIG_PASS=hunter2" (or your password)
*fs.AskPassword = false
fs.LoadConfig()
fs.Config.Verbose = *Verbose
fs.Config.Quiet = !*Verbose
fs.Config.DumpHeaders = *DumpHeaders
fs.Config.DumpBodies = *DumpBodies
fs.Config.LowLevelRetries = *LowLevelRetries
var err error
r.fremote, r.fremoteName, r.cleanRemote, err = fstest.RandomRemote(*RemoteName, *SubDir)
if err != nil {
log.Fatalf("Failed to open remote %q: %v", *RemoteName, err)
}
r.mountPath, err = ioutil.TempDir("", "rclonefs-mount")
if err != nil {
log.Fatalf("Failed to create mount dir: %v", err)
}
// Mount it up
r.mount()
return r
}
func (r *Run) mount() {
log.Printf("mount %q %q", r.fremote, r.mountPath)
var err error
r.umountResult, err = mount(r.fremote, r.mountPath)
if err != nil {
log.Printf("mount failed: %v", err)
r.skip = true
return
}
log.Printf("mount OK")
}
func (r *Run) umount() {
if r.skip {
log.Printf("FUSE not found so skipping umount")
return
}
log.Printf("Calling fusermount -u %q", r.mountPath)
err := exec.Command("fusermount", "-u", r.mountPath).Run()
if err != nil {
log.Printf("fusermount failed: %v", err)
}
log.Printf("Waiting for umount")
err = <-r.umountResult
if err != nil {
log.Fatalf("umount failed: %v", err)
}
}
func (r *Run) skipIfNoFUSE(t *testing.T) {
if r.skip {
t.Skip("FUSE not found so skipping test")
}
}
// Finalise cleans the remote and unmounts
func (r *Run) Finalise() {
r.umount()
r.cleanRemote()
err := os.RemoveAll(r.mountPath)
if err != nil {
log.Printf("Failed to clean mountPath %q: %v", r.mountPath, err)
}
}
func (r *Run) path(filepath string) string {
return path.Join(run.mountPath, filepath)
}
type dirMap map[string]struct{}
// Create a dirMap from a string
func newDirMap(dirString string) (dm dirMap) {
dm = make(dirMap)
for _, entry := range strings.Split(dirString, "|") {
if entry != "" {
dm[entry] = struct{}{}
}
}
return dm
}
// Returns a dirmap with only the files in
func (dm dirMap) filesOnly() dirMap {
newDm := make(dirMap)
for name := range dm {
if !strings.HasSuffix(name, "/") {
newDm[name] = struct{}{}
}
}
return newDm
}
// reads the local tree into dir
func (r *Run) readLocal(t *testing.T, dir dirMap, filepath string) {
realPath := r.path(filepath)
files, err := ioutil.ReadDir(realPath)
require.NoError(t, err)
for _, fi := range files {
name := path.Join(filepath, fi.Name())
if fi.IsDir() {
dir[name+"/"] = struct{}{}
r.readLocal(t, dir, name)
assert.Equal(t, fi.Mode().Perm(), os.FileMode(dirPerms))
} else {
dir[fmt.Sprintf("%s %d", name, fi.Size())] = struct{}{}
assert.Equal(t, fi.Mode().Perm(), os.FileMode(filePerms))
}
}
}
// reads the remote tree into dir
func (r *Run) readRemote(t *testing.T, dir dirMap, filepath string) {
objs, dirs, err := fs.NewLister().SetLevel(1).Start(r.fremote, filepath).GetAll()
if err == fs.ErrorDirNotFound {
return
}
require.NoError(t, err)
for _, obj := range objs {
dir[fmt.Sprintf("%s %d", obj.Remote(), obj.Size())] = struct{}{}
}
for _, d := range dirs {
name := d.Remote()
dir[name+"/"] = struct{}{}
r.readRemote(t, dir, name)
}
}
// checkDir checks the local and remote against the string passed in
func (r *Run) checkDir(t *testing.T, dirString string) {
dm := newDirMap(dirString)
localDm := make(dirMap)
r.readLocal(t, localDm, "")
remoteDm := make(dirMap)
r.readRemote(t, remoteDm, "")
// Ignore directories for remote compare
assert.Equal(t, dm.filesOnly(), remoteDm.filesOnly(), "expected vs remote")
assert.Equal(t, dm, localDm, "expected vs fuse mount")
}
func (r *Run) createFile(t *testing.T, filepath string, contents string) {
filepath = r.path(filepath)
err := ioutil.WriteFile(filepath, []byte(contents), 0600)
require.NoError(t, err)
}
func (r *Run) readFile(t *testing.T, filepath string) string {
filepath = r.path(filepath)
result, err := ioutil.ReadFile(filepath)
require.NoError(t, err)
return string(result)
}
func (r *Run) mkdir(t *testing.T, filepath string) {
filepath = r.path(filepath)
err := os.Mkdir(filepath, 0700)
require.NoError(t, err)
}
func (r *Run) rm(t *testing.T, filepath string) {
filepath = r.path(filepath)
err := os.Remove(filepath)
require.NoError(t, err)
}
func (r *Run) rmdir(t *testing.T, filepath string) {
filepath = r.path(filepath)
err := os.Remove(filepath)
require.NoError(t, err)
}
// Check that the Fs is mounted by seeing if the mountpoint is
// in the mount output
func TestMount(t *testing.T) {
run.skipIfNoFUSE(t)
out, err := exec.Command("mount").Output()
require.NoError(t, err)
assert.Contains(t, string(out), run.mountPath)
}
// Check root directory is present and correct
func TestRoot(t *testing.T) {
run.skipIfNoFUSE(t)
fi, err := os.Lstat(run.mountPath)
require.NoError(t, err)
assert.True(t, fi.IsDir())
assert.Equal(t, fi.Mode().Perm(), os.FileMode(dirPerms))
}

cmd/mount/mount.go (new file)
@@ -0,0 +1,118 @@
// Package mount implements a FUSE mounting system for rclone remotes.
// +build linux darwin freebsd
package mount
import (
"bazil.org/fuse"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
// Globals
var (
noModTime = false
debugFUSE = false
)
func init() {
cmd.Root.AddCommand(mountCmd)
mountCmd.Flags().BoolVarP(&noModTime, "no-modtime", "", false, "Don't read the modification time (can speed things up).")
mountCmd.Flags().BoolVarP(&debugFUSE, "debug-fuse", "", false, "Debug the FUSE internals - needs -v.")
}
var mountCmd = &cobra.Command{
Use: "mount remote:path /path/to/mountpoint",
Short: `Mount the remote as a mountpoint. **EXPERIMENTAL**`,
Long: `
rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's
cloud storage systems as a file system with FUSE.
This is **EXPERIMENTAL** - use with care.
First set up your remote using ` + "`rclone config`" + `. Check it works with ` + "`rclone ls`" + ` etc.
Start the mount like this
rclone mount remote:path/to/files /path/to/local/mount &
Stop the mount with
fusermount -u /path/to/local/mount
Or with OS X
umount /path/to/local/mount
### Limitations ###
This can only read files sequentially, or write files sequentially. It
can't read and write or seek in files.
rclonefs inherits rclone's directory handling. In rclone's world
directories don't really exist. This means that empty directories
will have a tendency to disappear once they fall out of the directory
cache.
The bucket based FSes (eg swift, s3, google cloud storage, b2) won't
work from the root - you will need to specify a bucket, or a path
within the bucket. So ` + "`swift:`" + ` won't work whereas ` + "`swift:bucket`" + ` will
as will ` + "`swift:bucket/path`" + `.
Only supported on Linux, FreeBSD and OS X at the moment.
### rclone mount vs rclone sync/copy ##
File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone mount
can't use retries in the same way without making local copies of the
uploads. This might happen in the future, but for the moment rclone
mount won't do that, so will be less reliable than the rclone command.
### Bugs ###
* All the remotes should work for read, but some may not for write
* those which need to know the size in advance won't - eg B2
* maybe should pass in size as -1 to mean work it out
### TODO ###
* Check hashes on upload/download
* Preserve timestamps
* Move directories
`,
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(2, 2, command, args)
fdst := cmd.NewFsDst(args)
return Mount(fdst, args[1])
},
}
// Mount mounts the remote at mountpoint.
//
// If noModTime is set then it doesn't read the modification times of
// the files (which can speed things up).
func Mount(f fs.Fs, mountpoint string) error {
if debugFUSE {
fuse.Debug = func(msg interface{}) {
fs.Debug("fuse", "%v", msg)
}
}
// Mount it
errChan, err := mount(f, mountpoint)
if err != nil {
return errors.Wrap(err, "failed to mount FUSE fs")
}
// Wait for umount
err = <-errChan
if err != nil {
return errors.Wrap(err, "failed to umount FUSE fs")
}
return nil
}
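
Mounting can also be driven programmatically through the exported Mount function above. The following is a minimal standalone sketch (not rclone code, and only valid on the supported platforms), assuming a remote called "remote:" is already configured; it uses fs.LoadConfig, fs.NewFs and the fs/all registration import in the same way the test harness does.

package main

import (
	"log"

	"github.com/ncw/rclone/cmd/mount"
	"github.com/ncw/rclone/fs"
	_ "github.com/ncw/rclone/fs/all" // register all remote types
)

func main() {
	fs.LoadConfig()
	f, err := fs.NewFs("remote:path") // hypothetical remote name and path
	if err != nil {
		log.Fatalf("Failed to open remote: %v", err)
	}
	// Mount blocks until the filesystem is unmounted, eg with fusermount -u.
	if err := mount.Mount(f, "/path/to/local/mount"); err != nil {
		log.Fatalf("Mount failed: %v", err)
	}
}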


@@ -0,0 +1,6 @@
// Build for mount for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build !linux,!darwin,!freebsd
package mount

cmd/mount/read.go (new file)
@@ -0,0 +1,130 @@
// +build linux darwin freebsd
package mount
import (
"io"
"sync"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/ncw/rclone/fs"
"golang.org/x/net/context"
)
// ReadFileHandle is an open for read file handle on a File
type ReadFileHandle struct {
mu sync.Mutex
closed bool // set if handle has been closed
r io.ReadCloser
o fs.Object
readCalled bool // set if read has been called
}
func newReadFileHandle(o fs.Object) (*ReadFileHandle, error) {
r, err := o.Open()
if err != nil {
return nil, err
}
return &ReadFileHandle{
r: r,
o: o,
}, nil
}
// Check interface satisfied
var _ fusefs.Handle = (*ReadFileHandle)(nil)
// Check interface satisfied
var _ fusefs.HandleReader = (*ReadFileHandle)(nil)
// Read from the file handle
func (fh *ReadFileHandle) Read(ctx context.Context, req *fuse.ReadRequest, resp *fuse.ReadResponse) error {
fs.Debug(fh.o, "ReadFileHandle.Open")
if fh.closed {
fs.ErrorLog(fh.o, "ReadFileHandle.Read error: %v", errClosedFileHandle)
return errClosedFileHandle
}
fh.readCalled = true
// We don't actually enforce Offset to match where previous read
// ended. Maybe we should, but that would mean we'd need to track
// it. The kernel *should* do it for us, based on the
// fuse.OpenNonSeekable flag.
//
// One exception to the above is if we fail to fully populate a
// page cache page; a read into page cache is always page aligned.
// Make sure we never serve a partial read, to avoid that.
buf := make([]byte, req.Size)
n, err := io.ReadFull(fh.r, buf)
if err == io.ErrUnexpectedEOF || err == io.EOF {
err = nil
}
resp.Data = buf[:n]
if err != nil {
fs.ErrorLog(fh.o, "ReadFileHandle.Open error: %v", err)
} else {
fs.Debug(fh.o, "ReadFileHandle.Open OK")
}
return err
}
// close the file handle returning errClosedFileHandle if it has been
// closed already.
//
// Must be called with fh.mu held
func (fh *ReadFileHandle) close() error {
if fh.closed {
return errClosedFileHandle
}
fh.closed = true
return fh.r.Close()
}
// Check interface satisfied
var _ fusefs.HandleFlusher = (*ReadFileHandle)(nil)
// Flush is called each time the file or directory is closed.
// Because there can be multiple file descriptors referring to a
// single opened file, Flush can be called multiple times.
func (fh *ReadFileHandle) Flush(ctx context.Context, req *fuse.FlushRequest) error {
fh.mu.Lock()
defer fh.mu.Unlock()
fs.Debug(fh.o, "ReadFileHandle.Flush")
// If Read hasn't been called then ignore the Flush - Release
// will pick it up
if !fh.readCalled {
fs.Debug(fh.o, "ReadFileHandle.Flush ignoring flush on unread handle")
return nil
}
err := fh.close()
if err != nil {
fs.ErrorLog(fh.o, "ReadFileHandle.Flush error: %v", err)
return err
}
fs.Debug(fh.o, "ReadFileHandle.Flush OK")
return nil
}
var _ fusefs.HandleReleaser = (*ReadFileHandle)(nil)
// Release is called when we are finished with the file handle
//
// It isn't called directly from userspace so the error is ignored by
// the kernel
func (fh *ReadFileHandle) Release(ctx context.Context, req *fuse.ReleaseRequest) error {
fh.mu.Lock()
defer fh.mu.Unlock()
if fh.closed {
fs.Debug(fh.o, "ReadFileHandle.Release nothing to do")
return nil
}
fs.Debug(fh.o, "ReadFileHandle.Release closing")
err := fh.close()
if err != nil {
fs.ErrorLog(fh.o, "ReadFileHandle.Release error: %v", err)
} else {
fs.Debug(fh.o, "ReadFileHandle.Release OK")
}
return err
}
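
Read above leans on io.ReadFull so a partial page is only ever served at the end of the file: ReadFull fills the whole request buffer unless the stream ends, and the end-of-file short read (io.ErrUnexpectedEOF) is deliberately cleared. A standalone sketch (not rclone code) of that error handling:

package main

import (
	"fmt"
	"io"
	"strings"
)

func main() {
	r := strings.NewReader("abc") // source has fewer bytes than requested
	buf := make([]byte, 8)
	n, err := io.ReadFull(r, buf)
	if err == io.ErrUnexpectedEOF || err == io.EOF {
		// A short read at end of file is not an error - serve what we got.
		err = nil
	}
	fmt.Printf("served %d bytes, err=%v\n", n, err) // served 3 bytes, err=<nil>
}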

cmd/mount/read_test.go (new file)
@@ -0,0 +1,79 @@
// +build linux darwin freebsd
package mount
import (
"io"
"os"
"syscall"
"testing"
"github.com/stretchr/testify/assert"
)
// Read by byte including don't read any bytes
func TestReadByByte(t *testing.T) {
run.skipIfNoFUSE(t)
var data = []byte("hellohello")
run.createFile(t, "testfile", string(data))
run.checkDir(t, "testfile 10")
for i := 0; i < len(data); i++ {
fd, err := os.Open(run.path("testfile"))
assert.NoError(t, err)
for j := 0; j < i; j++ {
buf := make([]byte, 1)
n, err := io.ReadFull(fd, buf)
assert.NoError(t, err)
assert.Equal(t, 1, n)
assert.Equal(t, buf[0], data[j])
}
err = fd.Close()
assert.NoError(t, err)
}
run.rm(t, "testfile")
}
// Test double close
func TestReadFileDoubleClose(t *testing.T) {
run.skipIfNoFUSE(t)
run.createFile(t, "testdoubleclose", "hello")
in, err := os.Open(run.path("testdoubleclose"))
assert.NoError(t, err)
fd := in.Fd()
fd1, err := syscall.Dup(int(fd))
assert.NoError(t, err)
fd2, err := syscall.Dup(int(fd))
assert.NoError(t, err)
// close one of the dups - should produce no error
err = syscall.Close(fd1)
assert.NoError(t, err)
// read from the file
buf := make([]byte, 1)
_, err = in.Read(buf)
assert.NoError(t, err)
// close it
err = in.Close()
assert.NoError(t, err)
// read from the other dup - should produce no error as this
// file is now buffered
n, err := syscall.Read(fd2, buf)
assert.NoError(t, err)
assert.Equal(t, 1, n)
// close the dup - should produce an error
err = syscall.Close(fd2)
assert.Error(t, err, "input/output error")
run.rm(t, "testdoubleclose")
}

cmd/mount/write.go (new file)
@@ -0,0 +1,157 @@
// +build linux darwin freebsd
package mount
import (
"errors"
"io"
"sync"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/ncw/rclone/fs"
"golang.org/x/net/context"
)
var errClosedFileHandle = errors.New("Attempt to use closed file handle")
// WriteFileHandle is an open for write handle on a File
type WriteFileHandle struct {
mu sync.Mutex
closed bool // set if handle has been closed
remote string
pipeReader *io.PipeReader
pipeWriter *io.PipeWriter
o fs.Object
result chan error
file *File
writeCalled bool // set the first time Write() is called
}
// Check interface satisfied
var _ fusefs.Handle = (*WriteFileHandle)(nil)
func newWriteFileHandle(d *Dir, f *File, src fs.ObjectInfo) (*WriteFileHandle, error) {
fh := &WriteFileHandle{
remote: src.Remote(),
result: make(chan error, 1),
file: f,
}
fh.pipeReader, fh.pipeWriter = io.Pipe()
go func() {
o, err := d.f.Put(fh.pipeReader, src)
fh.o = o
fh.result <- err
}()
fh.file.addWriters(1)
return fh, nil
}
// Check interface satisfied
var _ fusefs.HandleWriter = (*WriteFileHandle)(nil)
// Write data to the file handle
func (fh *WriteFileHandle) Write(ctx context.Context, req *fuse.WriteRequest, resp *fuse.WriteResponse) error {
fs.Debug(fh.remote, "WriteFileHandle.Write len=%d", len(req.Data))
fh.mu.Lock()
defer fh.mu.Unlock()
if fh.closed {
fs.ErrorLog(fh.remote, "WriteFileHandle.Write error: %v", errClosedFileHandle)
return errClosedFileHandle
}
fh.writeCalled = true
// FIXME should probably check the file isn't being seeked?
n, err := fh.pipeWriter.Write(req.Data)
resp.Size = n
fh.file.written(int64(n))
if err != nil {
fs.ErrorLog(fh.remote, "WriteFileHandle.Write error: %v", err)
return err
}
fs.Debug(fh.remote, "WriteFileHandle.Write OK (%d bytes written)", n)
return nil
}
// close the file handle returning errClosedFileHandle if it has been
// closed already.
//
// Must be called with fh.mu held
func (fh *WriteFileHandle) close() error {
if fh.closed {
return errClosedFileHandle
}
fh.closed = true
fh.file.addWriters(-1)
writeCloseErr := fh.pipeWriter.Close()
err := <-fh.result
readCloseErr := fh.pipeReader.Close()
if err == nil {
fh.file.setObject(fh.o)
err = writeCloseErr
}
if err == nil {
err = readCloseErr
}
return err
}
// Check interface satisfied
var _ fusefs.HandleFlusher = (*WriteFileHandle)(nil)
// Flush is called on each close() of a file descriptor. So if a
// filesystem wants to return write errors in close() and the file has
// cached dirty data, this is a good place to write back data and
// return any errors. Since many applications ignore close() errors
// this is not always useful.
//
// NOTE: The flush() method may be called more than once for each
// open(). This happens if more than one file descriptor refers to an
// opened file due to dup(), dup2() or fork() calls. It is not
// possible to determine if a flush is final, so each flush should be
// treated equally. Multiple write-flush sequences are relatively
// rare, so this shouldn't be a problem.
//
// Filesystems shouldn't assume that flush will always be called after
// some writes, or that it will be called at all.
func (fh *WriteFileHandle) Flush(ctx context.Context, req *fuse.FlushRequest) error {
fh.mu.Lock()
defer fh.mu.Unlock()
fs.Debug(fh.remote, "WriteFileHandle.Flush")
// If Write hasn't been called then ignore the Flush - Release
// will pick it up
if !fh.writeCalled {
fs.Debug(fh.remote, "WriteFileHandle.Flush ignoring flush on unwritten handle")
return nil
}
err := fh.close()
if err != nil {
fs.ErrorLog(fh.remote, "WriteFileHandle.Flush error: %v", err)
} else {
fs.Debug(fh.remote, "WriteFileHandle.Flush OK")
}
return err
}
var _ fusefs.HandleReleaser = (*WriteFileHandle)(nil)
// Release is called when we are finished with the file handle
//
// It isn't called directly from userspace so the error is ignored by
// the kernel
func (fh *WriteFileHandle) Release(ctx context.Context, req *fuse.ReleaseRequest) error {
fh.mu.Lock()
defer fh.mu.Unlock()
if fh.closed {
fs.Debug(fh.remote, "WriteFileHandle.Release nothing to do")
return nil
}
fs.Debug(fh.remote, "WriteFileHandle.Release closing")
err := fh.close()
if err != nil {
fs.ErrorLog(fh.remote, "WriteFileHandle.Release error: %v", err)
} else {
fs.Debug(fh.remote, "WriteFileHandle.Release OK")
}
return err
}
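
newWriteFileHandle above connects FUSE writes to the remote's Put with an io.Pipe: Write requests go into the write end, a goroutine feeds the read end to Put, and close() collects the upload result from the result channel. A standalone sketch (not rclone code) of the same pattern, with ioutil.ReadAll standing in for the uploader:

package main

import (
	"fmt"
	"io"
	"io/ioutil"
)

func main() {
	pr, pw := io.Pipe()
	result := make(chan error, 1)
	var uploaded []byte

	// The "uploader" goroutine consumes the read end, as d.f.Put does above.
	go func() {
		var err error
		uploaded, err = ioutil.ReadAll(pr)
		result <- err
	}()

	// Writes arrive incrementally, like FUSE Write requests.
	_, _ = pw.Write([]byte("hello "))
	_, _ = pw.Write([]byte("world"))

	// Closing the write end lets the uploader finish; then collect its error.
	_ = pw.Close()
	if err := <-result; err != nil {
		panic(err)
	}
	fmt.Printf("uploaded %q\n", uploaded)
}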

cmd/mount/write_test.go (new file)
@@ -0,0 +1,103 @@
// +build linux darwin freebsd
package mount
import (
"os"
"syscall"
"testing"
"github.com/stretchr/testify/assert"
)
// Test writing a file with no write()'s to it
func TestWriteFileNoWrite(t *testing.T) {
run.skipIfNoFUSE(t)
fd, err := os.Create(run.path("testnowrite"))
assert.NoError(t, err)
err = fd.Close()
assert.NoError(t, err)
run.checkDir(t, "testnowrite 0")
run.rm(t, "testnowrite")
}
// Test open file in directory listing
func FIXMETestWriteOpenFileInDirListing(t *testing.T) {
run.skipIfNoFUSE(t)
fd, err := os.Create(run.path("testnowrite"))
assert.NoError(t, err)
run.checkDir(t, "testnowrite 0")
err = fd.Close()
assert.NoError(t, err)
run.rm(t, "testnowrite")
}
// Test writing a file and reading it back
func TestWriteFileWrite(t *testing.T) {
run.skipIfNoFUSE(t)
run.createFile(t, "testwrite", "data")
run.checkDir(t, "testwrite 4")
contents := run.readFile(t, "testwrite")
assert.Equal(t, "data", contents)
run.rm(t, "testwrite")
}
// Test overwriting a file
func TestWriteFileOverwrite(t *testing.T) {
run.skipIfNoFUSE(t)
run.createFile(t, "testwrite", "data")
run.checkDir(t, "testwrite 4")
run.createFile(t, "testwrite", "potato")
contents := run.readFile(t, "testwrite")
assert.Equal(t, "potato", contents)
run.rm(t, "testwrite")
}
// Test double close
func TestWriteFileDoubleClose(t *testing.T) {
run.skipIfNoFUSE(t)
out, err := os.Create(run.path("testdoubleclose"))
assert.NoError(t, err)
fd := out.Fd()
fd1, err := syscall.Dup(int(fd))
assert.NoError(t, err)
fd2, err := syscall.Dup(int(fd))
assert.NoError(t, err)
// close one of the dups - should produce no error
err = syscall.Close(fd1)
assert.NoError(t, err)
// write to the file
buf := []byte("hello")
n, err := out.Write(buf)
assert.NoError(t, err)
assert.Equal(t, 5, n)
// close it
err = out.Close()
assert.NoError(t, err)
// write to the other dup - should produce an error
n, err = syscall.Write(fd2, buf)
assert.Error(t, err, "input/output error")
// close the dup - should produce an error
err = syscall.Close(fd2)
assert.Error(t, err, "input/output error")
run.rm(t, "testdoubleclose")
}

cmd/move/move.go (new file)
@@ -0,0 +1,40 @@
package move
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(moveCmd)
}
var moveCmd = &cobra.Command{
Use: "move source:path dest:path",
Short: `Move files from source to dest.`,
Long: `
Moves the contents of the source directory to the destination
directory. Rclone will error if the source and destination overlap.
If no filters are in use and if possible this will server side move
` + "`" + `source:path` + "`" + ` into ` + "`" + `dest:path` + "`" + `. After this ` + "`" + `source:path` + "`" + ` will no
longer exist.
Otherwise for each file in ` + "`" + `source:path` + "`" + ` selected by the filters (if
any) this will move it into ` + "`" + `dest:path` + "`" + `. If possible a server side
move will be used, otherwise it will copy it (server side if possible)
into ` + "`" + `dest:path` + "`" + ` then delete the original (if no errors on copy) in
` + "`" + `source:path` + "`" + `.
**Important**: Since this can cause data loss, test first with the
--dry-run flag.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsrc, fdst := cmd.NewFsSrcDst(args)
cmd.Run(true, command, func() error {
return fs.MoveDir(fdst, fsrc)
})
},
}

cmd/purge/purge.go (new file)
@@ -0,0 +1,28 @@
package purge
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(purgeCmd)
}
var purgeCmd = &cobra.Command{
Use: "purge remote:path",
Short: `Remove the path and all of its contents.`,
Long: `
Remove the path and all of its contents. Note that this does not obey
include/exclude filters - everything will be removed. Use ` + "`" + `delete` + "`" + ` if
you want to selectively delete files.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fdst := cmd.NewFsDst(args)
cmd.Run(true, command, func() error {
return fs.Purge(fdst)
})
},
}


@@ -1,5 +1,5 @@
// Tests for rclone
package main
package cmd
import (
"testing"

cmd/redirect_stderr.go (new file)
@@ -0,0 +1,16 @@
// Log the panic to the log file - for OSes which can't do this
// +build !windows,!darwin,!dragonfly,!freebsd,!linux,!nacl,!netbsd,!openbsd
package cmd
import (
"os"
"github.com/ncw/rclone/fs"
)
// redirectStderr to the file passed in
func redirectStderr(f *os.File) {
fs.ErrorLog(nil, "Can't redirect stderr to file")
}


@@ -1,8 +1,8 @@
// Log the panic under unix to the log file
//+build unix
// +build darwin dragonfly freebsd linux nacl netbsd openbsd
package main
package cmd
import (
"log"


@@ -4,9 +4,9 @@
//
// http://play.golang.org/p/kLtct7lSUg
//+build windows
// +build windows
package main
package cmd
import (
"log"

cmd/rmdir/rmdir.go (new file)
@@ -0,0 +1,26 @@
package rmdir
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(rmdirCmd)
}
var rmdirCmd = &cobra.Command{
Use: "rmdir remote:path",
Short: `Remove the path if empty.`,
Long: `
Remove the path. Note that you can't remove a path with
objects in it, use purge for that.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fdst := cmd.NewFsDst(args)
cmd.Run(true, command, func() error {
return fs.Rmdir(fdst)
})
},
}

cmd/sha1sum/sha1sum.go (new file)
@@ -0,0 +1,29 @@
package sha1sum
import (
"os"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(sha1sumCmd)
}
var sha1sumCmd = &cobra.Command{
Use: "sha1sum remote:path",
Short: `Produces an sha1sum file for all the objects in the path.`,
Long: `
Produces an sha1sum file for all the objects in the path. This
is in the same format as the standard sha1sum tool produces.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, command, func() error {
return fs.Sha1sum(fsrc, os.Stdout)
})
},
}

cmd/size/size.go (new file)
@@ -0,0 +1,31 @@
package size
import (
"fmt"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(sizeCmd)
}
var sizeCmd = &cobra.Command{
Use: "size remote:path",
Short: `Prints the total size and number of objects in remote:path.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, command, func() error {
objects, size, err := fs.Count(fsrc)
if err != nil {
return err
}
fmt.Printf("Total objects: %d\n", objects)
fmt.Printf("Total size: %s (%d Bytes)\n", fs.SizeSuffix(size).Unit("Bytes"), size)
return nil
})
},
}

cmd/sync/sync.go (new file)
@@ -0,0 +1,43 @@
package sync
import (
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/fs"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(syncCmd)
}
var syncCmd = &cobra.Command{
Use: "sync source:path dest:path",
Short: `Make source and dest identical, modifying destination only.`,
Long: `
Sync the source to the destination, changing the destination
only. Doesn't transfer unchanged files, testing by size and
modification time or MD5SUM. Destination is updated to match
source, including deleting files if necessary.
**Important**: Since this can cause data loss, test first with the
` + "`" + `--dry-run` + "`" + ` flag to see exactly what would be copied and deleted.
Note that files in the destination won't be deleted if there were any
errors at any point.
It is always the contents of the directory that is synced, not the
directory, so when source:path is a directory, it's the contents of
source:path that are copied, not the directory name and contents. See
extended explanation in the ` + "`" + `copy` + "`" + ` command above if unsure.
If dest:path doesn't exist, it is created and the source:path contents
go there.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsrc, fdst := cmd.NewFsSrcDst(args)
cmd.Run(true, command, func() error {
return fs.Sync(fdst, fsrc)
})
},
}

cmd/version/version.go (new file)
@@ -0,0 +1,19 @@
package version
import (
"github.com/ncw/rclone/cmd"
"github.com/spf13/cobra"
)
func init() {
cmd.Root.AddCommand(versionCmd)
}
var versionCmd = &cobra.Command{
Use: "version",
Short: `Show the version number.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 0, command, args)
cmd.ShowVersion()
},
}

cmd/versioncheck.go (new file)
@@ -0,0 +1,6 @@
//+build !go1.5
package cmd
// Upgrade to Go version 1.5 to compile rclone.
func init() { Go_version_1_5_required_for_compilation() }


@@ -1,4 +1,4 @@
#!/bin/sh
#!/bin/bash
set -e
@@ -13,10 +13,37 @@ VERSION="$1"
rm -rf build
gox -output "build/{{.Dir}}-${VERSION}-{{.OS}}-{{.Arch}}/{{.Dir}}" -os "darwin linux freebsd openbsd windows freebsd netbsd plan9 solaris"
# Not implemented yet: nacl dragonfly android
# Disable CGO and dynamic builds on all platforms (including the build platform)
export CGO_ENABLED=0
# Arch pairs we build for
# gox -osarch-list for definitive list
OSARCH="\
windows/386
windows/amd64
darwin/386
darwin/amd64
linux/386
linux/amd64
linux/arm
freebsd/386
freebsd/amd64
freebsd/arm
netbsd/386
netbsd/amd64
netbsd/arm
openbsd/386
openbsd/amd64
plan9/386
plan9/amd64
solaris/amd64"
# Make space separated
OSARCH=${OSARCH//$'\n'/ }
gox --ldflags "-s -X github.com/ncw/rclone/fs.Version=${VERSION}" -output "build/{{.Dir}}-${VERSION}-{{.OS}}-{{.Arch}}/{{.Dir}}" -osarch "${OSARCH}"
mv build/rclone-${VERSION}-darwin-amd64 build/rclone-${VERSION}-osx-amd64
mv build/rclone-${VERSION}-darwin-386 build/rclone-${VERSION}-osx-386

crypt/cipher.go (new file)
@@ -0,0 +1,608 @@
package crypt
import (
"bytes"
"crypto/aes"
gocipher "crypto/cipher"
"crypto/rand"
"encoding/base32"
"fmt"
"io"
"strings"
"sync"
"unicode/utf8"
"github.com/ncw/rclone/crypt/pkcs7"
"github.com/pkg/errors"
"golang.org/x/crypto/nacl/secretbox"
"golang.org/x/crypto/scrypt"
"github.com/rfjakob/eme"
)
// Constants
const (
nameCipherBlockSize = aes.BlockSize
fileMagic = "RCLONE\x00\x00"
fileMagicSize = len(fileMagic)
fileNonceSize = 24
fileHeaderSize = fileMagicSize + fileNonceSize
blockHeaderSize = secretbox.Overhead
blockDataSize = 64 * 1024
blockSize = blockHeaderSize + blockDataSize
encryptedSuffix = ".bin" // when file name encryption is off we add this suffix to make sure the cloud provider doesn't process the file
)
// Errors returned by cipher
var (
ErrorBadDecryptUTF8 = errors.New("bad decryption - utf-8 invalid")
ErrorBadDecryptControlChar = errors.New("bad decryption - contains control chars")
ErrorNotAMultipleOfBlocksize = errors.New("not a multiple of blocksize")
ErrorTooShortAfterDecode = errors.New("too short after base32 decode")
ErrorEncryptedFileTooShort = errors.New("file is too short to be encrypted")
ErrorEncryptedFileBadHeader = errors.New("file has truncated block header")
ErrorEncryptedBadMagic = errors.New("not an encrypted file - bad magic string")
ErrorEncryptedBadBlock = errors.New("failed to authenticate decrypted block - bad password?")
ErrorBadBase32Encoding = errors.New("bad base32 filename encoding")
ErrorFileClosed = errors.New("file already closed")
ErrorNotAnEncryptedFile = errors.New("not an encrypted file - no \"" + encryptedSuffix + "\" suffix")
defaultSalt = []byte{0xA8, 0x0D, 0xF4, 0x3A, 0x8F, 0xBD, 0x03, 0x08, 0xA7, 0xCA, 0xB8, 0x3E, 0x58, 0x1F, 0x86, 0xB1}
)
// Global variables
var (
fileMagicBytes = []byte(fileMagic)
)
// Cipher is used to swap out the encryption implementations
type Cipher interface {
// EncryptFileName encrypts a file path
EncryptFileName(string) string
// DecryptFileName decrypts a file path, returns error if decrypt was invalid
DecryptFileName(string) (string, error)
// EncryptDirName encrypts a directory path
EncryptDirName(string) string
// DecryptDirName decrypts a directory path, returns error if decrypt was invalid
DecryptDirName(string) (string, error)
// EncryptData encrypts the data stream
EncryptData(io.Reader) (io.Reader, error)
// DecryptData decrypts the data stream
DecryptData(io.ReadCloser) (io.ReadCloser, error)
// EncryptedSize calculates the size of the data when encrypted
EncryptedSize(int64) int64
// DecryptedSize calculates the size of the data when decrypted
DecryptedSize(int64) (int64, error)
}
// NameEncryptionMode is the type of file name encryption in use
type NameEncryptionMode int
// NameEncryptionMode levels
const (
NameEncryptionOff NameEncryptionMode = iota
NameEncryptionStandard
)
// NewNameEncryptionMode turns a string into a NameEncryptionMode
func NewNameEncryptionMode(s string) (mode NameEncryptionMode, err error) {
s = strings.ToLower(s)
switch s {
case "off":
mode = NameEncryptionOff
case "standard":
mode = NameEncryptionStandard
default:
err = errors.Errorf("Unknown file name encryption mode %q", s)
}
return mode, err
}
// String turns mode into a human readable string
func (mode NameEncryptionMode) String() (out string) {
switch mode {
case NameEncryptionOff:
out = "off"
case NameEncryptionStandard:
out = "standard"
default:
out = fmt.Sprintf("Unknown mode #%d", mode)
}
return out
}
type cipher struct {
dataKey [32]byte // Key for secretbox
nameKey [32]byte // 16,24 or 32 bytes
nameTweak [nameCipherBlockSize]byte // used to tweak the name crypto
block gocipher.Block
mode NameEncryptionMode
buffers sync.Pool // encrypt/decrypt buffers
cryptoRand io.Reader // read crypto random numbers from here
}
// newCipher initialises the cipher. If salt is "" then it uses a built-in salt value
func newCipher(mode NameEncryptionMode, password, salt string) (*cipher, error) {
c := &cipher{
mode: mode,
cryptoRand: rand.Reader,
}
c.buffers.New = func() interface{} {
return make([]byte, blockSize)
}
err := c.Key(password, salt)
if err != nil {
return nil, err
}
return c, nil
}
// Key creates all the internal keys from the password passed in using
// scrypt.
//
// If salt is "" we use a fixed salt just to make attackers lives
// slighty harder than using no salt.
//
// Note that empty passsword makes all 0x00 keys which is used in the
// tests.
func (c *cipher) Key(password, salt string) (err error) {
const keySize = len(c.dataKey) + len(c.nameKey) + len(c.nameTweak)
var saltBytes = defaultSalt
if salt != "" {
saltBytes = []byte(salt)
}
var key []byte
if password == "" {
key = make([]byte, keySize)
} else {
key, err = scrypt.Key([]byte(password), saltBytes, 16384, 8, 1, keySize)
if err != nil {
return err
}
}
copy(c.dataKey[:], key)
copy(c.nameKey[:], key[len(c.dataKey):])
copy(c.nameTweak[:], key[len(c.dataKey)+len(c.nameKey):])
// Key the name cipher
c.block, err = aes.NewCipher(c.nameKey[:])
return err
}
// getBlock gets a block from the pool of size blockSize
func (c *cipher) getBlock() []byte {
return c.buffers.Get().([]byte)
}
// putBlock returns a block to the pool of size blockSize
func (c *cipher) putBlock(buf []byte) {
if len(buf) != blockSize {
panic("bad blocksize returned to pool")
}
c.buffers.Put(buf)
}
// checkValidString checks that the byte string contains no control
// characters (0x00 to 0x1F, or 0x7F) and is a valid UTF-8 string
func checkValidString(buf []byte) error {
for i := range buf {
c := buf[i]
if c >= 0x00 && c < 0x20 || c == 0x7F {
return ErrorBadDecryptControlChar
}
}
if !utf8.Valid(buf) {
return ErrorBadDecryptUTF8
}
return nil
}
// encodeFileName encodes a filename using a modified version of
// standard base32 as described in RFC4648
//
// The standard encoding is modified in two ways
// * it becomes lower case (no-one likes upper case filenames!)
// * we strip the padding character `=`
func encodeFileName(in []byte) string {
encoded := base32.HexEncoding.EncodeToString(in)
encoded = strings.TrimRight(encoded, "=")
return strings.ToLower(encoded)
}
// decodeFileName decodes a filename as encoded by encodeFileName
func decodeFileName(in string) ([]byte, error) {
if strings.HasSuffix(in, "=") {
return nil, ErrorBadBase32Encoding
}
// First figure out how many padding characters to add
roundUpToMultipleOf8 := (len(in) + 7) &^ 7
equals := roundUpToMultipleOf8 - len(in)
in = strings.ToUpper(in) + "========"[:equals]
return base32.HexEncoding.DecodeString(in)
}
// encryptSegment encrypts a path segment
//
// This uses EME with AES
//
// EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the
// 2003 paper "A Parallelizable Enciphering Mode" by Halevi and
// Rogaway.
//
// This makes for deterministic encryption which is what we want - the
// same filename must encrypt to the same thing.
//
// This means that
// * filenames with the same name will encrypt the same
// * filenames which start the same won't have a common prefix
func (c *cipher) encryptSegment(plaintext string) string {
if plaintext == "" {
return ""
}
paddedPlaintext := pkcs7.Pad(nameCipherBlockSize, []byte(plaintext))
ciphertext := eme.Transform(c.block, c.nameTweak[:], paddedPlaintext, eme.DirectionEncrypt)
return encodeFileName(ciphertext)
}
// decryptSegment decrypts a path segment
func (c *cipher) decryptSegment(ciphertext string) (string, error) {
if ciphertext == "" {
return "", nil
}
rawCiphertext, err := decodeFileName(ciphertext)
if err != nil {
return "", err
}
if len(rawCiphertext)%nameCipherBlockSize != 0 {
return "", ErrorNotAMultipleOfBlocksize
}
if len(rawCiphertext) == 0 {
// not possible if decodeFileName() is working correctly
return "", ErrorTooShortAfterDecode
}
paddedPlaintext := eme.Transform(c.block, c.nameTweak[:], rawCiphertext, eme.DirectionDecrypt)
plaintext, err := pkcs7.Unpad(nameCipherBlockSize, paddedPlaintext)
if err != nil {
return "", err
}
err = checkValidString(plaintext)
if err != nil {
return "", err
}
return string(plaintext), err
}
// encryptFileName encrypts a file path
func (c *cipher) encryptFileName(in string) string {
segments := strings.Split(in, "/")
for i := range segments {
segments[i] = c.encryptSegment(segments[i])
}
return strings.Join(segments, "/")
}
// EncryptFileName encrypts a file path
func (c *cipher) EncryptFileName(in string) string {
if c.mode == NameEncryptionOff {
return in + encryptedSuffix
}
return c.encryptFileName(in)
}
// EncryptDirName encrypts a directory path
func (c *cipher) EncryptDirName(in string) string {
if c.mode == NameEncryptionOff {
return in
}
return c.encryptFileName(in)
}
// decryptFileName decrypts a file path
func (c *cipher) decryptFileName(in string) (string, error) {
segments := strings.Split(in, "/")
for i := range segments {
var err error
segments[i], err = c.decryptSegment(segments[i])
if err != nil {
return "", err
}
}
return strings.Join(segments, "/"), nil
}
// DecryptFileName decrypts a file path
func (c *cipher) DecryptFileName(in string) (string, error) {
if c.mode == NameEncryptionOff {
remainingLength := len(in) - len(encryptedSuffix)
if remainingLength > 0 && strings.HasSuffix(in, encryptedSuffix) {
return in[:remainingLength], nil
}
return "", ErrorNotAnEncryptedFile
}
return c.decryptFileName(in)
}
// DecryptDirName decrypts a directory path
func (c *cipher) DecryptDirName(in string) (string, error) {
if c.mode == NameEncryptionOff {
return in, nil
}
return c.decryptFileName(in)
}
// nonce is a NaCl secretbox nonce
type nonce [fileNonceSize]byte
// pointer returns the nonce as a *[24]byte for secretbox
func (n *nonce) pointer() *[fileNonceSize]byte {
return (*[fileNonceSize]byte)(n)
}
// fromReader fills the nonce from an io.Reader - normally the OS's
// crypto random number generator
func (n *nonce) fromReader(in io.Reader) error {
read, err := io.ReadFull(in, (*n)[:])
if read != fileNonceSize {
return errors.Wrap(err, "short read of nonce")
}
return nil
}
// fromBuf fills the nonce from the buffer passed in
func (n *nonce) fromBuf(buf []byte) {
read := copy((*n)[:], buf)
if read != fileNonceSize {
panic("buffer to short to read nonce")
}
}
// increment to add 1 to the nonce
func (n *nonce) increment() {
for i := 0; i < len(*n); i++ {
digit := (*n)[i]
newDigit := digit + 1
(*n)[i] = newDigit
if newDigit >= digit {
// exit if no carry
break
}
}
}
// encrypter encrypts an io.Reader on the fly
type encrypter struct {
in io.Reader
c *cipher
nonce nonce
buf []byte
readBuf []byte
bufIndex int
bufSize int
err error
}
// newEncrypter creates a new file handle encrypting on the fly
func (c *cipher) newEncrypter(in io.Reader) (*encrypter, error) {
fh := &encrypter{
in: in,
c: c,
buf: c.getBlock(),
readBuf: c.getBlock(),
bufSize: fileHeaderSize,
}
// Initialise nonce
err := fh.nonce.fromReader(c.cryptoRand)
if err != nil {
return nil, err
}
// Copy magic into buffer
copy(fh.buf, fileMagicBytes)
// Copy nonce into buffer
copy(fh.buf[fileMagicSize:], fh.nonce[:])
return fh, nil
}
// Read as per io.Reader
func (fh *encrypter) Read(p []byte) (n int, err error) {
if fh.err != nil {
return 0, fh.err
}
if fh.bufIndex >= fh.bufSize {
// Read data
// FIXME should overlap the reads with a go-routine and 2 buffers?
readBuf := fh.readBuf[:blockDataSize]
n, err = io.ReadFull(fh.in, readBuf)
if err == io.EOF {
// ReadFull only returns n=0 and EOF
return fh.finish(io.EOF)
} else if err == io.ErrUnexpectedEOF {
// Next read will return EOF
} else if err != nil {
return fh.finish(err)
}
// Write nonce to start of block
copy(fh.buf, fh.nonce[:])
// Encrypt the block using the nonce
block := fh.buf
secretbox.Seal(block[:0], readBuf[:n], fh.nonce.pointer(), &fh.c.dataKey)
fh.bufIndex = 0
fh.bufSize = blockHeaderSize + n
fh.nonce.increment()
}
n = copy(p, fh.buf[fh.bufIndex:fh.bufSize])
fh.bufIndex += n
return n, nil
}
// finish sets the final error and tidies up
func (fh *encrypter) finish(err error) (int, error) {
if fh.err != nil {
return 0, fh.err
}
fh.err = err
fh.c.putBlock(fh.buf)
fh.c.putBlock(fh.readBuf)
return 0, err
}
// EncryptData encrypts the data stream
func (c *cipher) EncryptData(in io.Reader) (io.Reader, error) {
out, err := c.newEncrypter(in)
if err != nil {
return nil, err
}
return out, nil
}
// decrypter decrypts an io.ReadCloser on the fly
type decrypter struct {
rc io.ReadCloser
nonce nonce
c *cipher
buf []byte
readBuf []byte
bufIndex int
bufSize int
err error
}
// newDecrypter creates a new file handle decrypting on the fly
func (c *cipher) newDecrypter(rc io.ReadCloser) (*decrypter, error) {
fh := &decrypter{
rc: rc,
c: c,
buf: c.getBlock(),
readBuf: c.getBlock(),
}
// Read file header (magic + nonce)
readBuf := fh.readBuf[:fileHeaderSize]
_, err := io.ReadFull(fh.rc, readBuf)
if err == io.EOF || err == io.ErrUnexpectedEOF {
// This read from 0..fileHeaderSize-1 bytes
return nil, fh.finishAndClose(ErrorEncryptedFileTooShort)
} else if err != nil {
return nil, fh.finishAndClose(err)
}
// check the magic
if !bytes.Equal(readBuf[:fileMagicSize], fileMagicBytes) {
return nil, fh.finishAndClose(ErrorEncryptedBadMagic)
}
// retrieve the nonce
fh.nonce.fromBuf(readBuf[fileMagicSize:])
return fh, nil
}
// Read as per io.Reader
func (fh *decrypter) Read(p []byte) (n int, err error) {
if fh.err != nil {
return 0, fh.err
}
if fh.bufIndex >= fh.bufSize {
// Read data
// FIXME should overlap the reads with a go-routine and 2 buffers?
readBuf := fh.readBuf
n, err = io.ReadFull(fh.rc, readBuf)
if err == io.EOF {
// ReadFull only returns n=0 and EOF
return 0, fh.finish(io.EOF)
} else if err == io.ErrUnexpectedEOF {
// Next read will return EOF
} else if err != nil {
return 0, fh.finish(err)
}
// Check header + 1 byte exists
if n <= blockHeaderSize {
return 0, fh.finish(ErrorEncryptedFileBadHeader)
}
// Decrypt the block using the nonce
block := fh.buf
_, ok := secretbox.Open(block[:0], readBuf[:n], fh.nonce.pointer(), &fh.c.dataKey)
if !ok {
return 0, fh.finish(ErrorEncryptedBadBlock)
}
fh.bufIndex = 0
fh.bufSize = n - blockHeaderSize
fh.nonce.increment()
}
n = copy(p, fh.buf[fh.bufIndex:fh.bufSize])
fh.bufIndex += n
return n, nil
}
// finish sets the final error and tidies up
func (fh *decrypter) finish(err error) error {
if fh.err != nil {
return fh.err
}
fh.err = err
fh.c.putBlock(fh.buf)
fh.c.putBlock(fh.readBuf)
return err
}
// Close closes the decrypter and the underlying ReadCloser
func (fh *decrypter) Close() error {
// Check already closed
if fh.err == ErrorFileClosed {
return fh.err
}
// Closed before reading EOF so not finish()ed yet
if fh.err == nil {
_ = fh.finish(io.EOF)
}
// Show file now closed
fh.err = ErrorFileClosed
return fh.rc.Close()
}
// finishAndClose does finish then Close()
//
// Used when we are returning a nil fh from new
func (fh *decrypter) finishAndClose(err error) error {
_ = fh.finish(err)
_ = fh.Close()
return err
}
// DecryptData decrypts the data stream
func (c *cipher) DecryptData(rc io.ReadCloser) (io.ReadCloser, error) {
out, err := c.newDecrypter(rc)
if err != nil {
return nil, err
}
return out, nil
}
// EncryptedSize calculates the size of the data when encrypted
func (c *cipher) EncryptedSize(size int64) int64 {
blocks, residue := size/blockDataSize, size%blockDataSize
encryptedSize := int64(fileHeaderSize) + blocks*(blockHeaderSize+blockDataSize)
if residue != 0 {
encryptedSize += blockHeaderSize + residue
}
return encryptedSize
}
// DecryptedSize calculates the size of the data when decrypted
func (c *cipher) DecryptedSize(size int64) (int64, error) {
size -= int64(fileHeaderSize)
if size < 0 {
return 0, ErrorEncryptedFileTooShort
}
blocks, residue := size/blockSize, size%blockSize
decryptedSize := blocks * blockDataSize
if residue != 0 {
residue -= blockHeaderSize
if residue <= 0 {
return 0, ErrorEncryptedFileBadHeader
}
}
decryptedSize += residue
return decryptedSize, nil
}
// check interfaces
var (
_ Cipher = (*cipher)(nil)
_ io.ReadCloser = (*decrypter)(nil)
_ io.Reader = (*encrypter)(nil)
)
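
To put numbers on the size calculations above: the file header is 8 bytes of magic plus a 24 byte nonce, and each 64 KiB block carries 16 bytes of secretbox overhead, so a 1 MiB plaintext file encrypts to 1048864 bytes, ie 288 bytes of overhead. A standalone sketch (not rclone code) repeating the EncryptedSize arithmetic with those constants:

package main

import "fmt"

func main() {
	const (
		fileHeaderSize  = 8 + 24    // magic + nonce
		blockHeaderSize = 16        // secretbox.Overhead
		blockDataSize   = 64 * 1024 // plaintext bytes per block
	)
	size := int64(1024 * 1024) // 1 MiB of plaintext
	blocks, residue := size/blockDataSize, size%blockDataSize
	encrypted := int64(fileHeaderSize) + blocks*(blockHeaderSize+blockDataSize)
	if residue != 0 {
		encrypted += blockHeaderSize + residue
	}
	fmt.Println(encrypted, encrypted-size) // 1048864 288
}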

crypt/cipher_test.go (new file)
@@ -0,0 +1,843 @@
package crypt
import (
"bytes"
"encoding/base32"
"fmt"
"io"
"io/ioutil"
"strings"
"testing"
"github.com/ncw/rclone/crypt/pkcs7"
"github.com/pkg/errors"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestNewNameEncryptionMode(t *testing.T) {
for _, test := range []struct {
in string
expected NameEncryptionMode
expectedErr string
}{
{"off", NameEncryptionOff, ""},
{"standard", NameEncryptionStandard, ""},
{"potato", NameEncryptionMode(0), "Unknown file name encryption mode \"potato\""},
} {
actual, actualErr := NewNameEncryptionMode(test.in)
assert.Equal(t, actual, test.expected)
if test.expectedErr == "" {
assert.NoError(t, actualErr)
} else {
assert.Error(t, actualErr, test.expectedErr)
}
}
}
func TestNewNameEncryptionModeString(t *testing.T) {
assert.Equal(t, NameEncryptionOff.String(), "off")
assert.Equal(t, NameEncryptionStandard.String(), "standard")
assert.Equal(t, NameEncryptionMode(2).String(), "Unknown mode #2")
}
func TestValidString(t *testing.T) {
for _, test := range []struct {
in string
expected error
}{
{"", nil},
{"\x01", ErrorBadDecryptControlChar},
{"a\x02", ErrorBadDecryptControlChar},
{"abc\x03", ErrorBadDecryptControlChar},
{"abc\x04def", ErrorBadDecryptControlChar},
{"\x05d", ErrorBadDecryptControlChar},
{"\x06def", ErrorBadDecryptControlChar},
{"\x07", ErrorBadDecryptControlChar},
{"\x08", ErrorBadDecryptControlChar},
{"\x09", ErrorBadDecryptControlChar},
{"\x0A", ErrorBadDecryptControlChar},
{"\x0B", ErrorBadDecryptControlChar},
{"\x0C", ErrorBadDecryptControlChar},
{"\x0D", ErrorBadDecryptControlChar},
{"\x0E", ErrorBadDecryptControlChar},
{"\x0F", ErrorBadDecryptControlChar},
{"\x10", ErrorBadDecryptControlChar},
{"\x11", ErrorBadDecryptControlChar},
{"\x12", ErrorBadDecryptControlChar},
{"\x13", ErrorBadDecryptControlChar},
{"\x14", ErrorBadDecryptControlChar},
{"\x15", ErrorBadDecryptControlChar},
{"\x16", ErrorBadDecryptControlChar},
{"\x17", ErrorBadDecryptControlChar},
{"\x18", ErrorBadDecryptControlChar},
{"\x19", ErrorBadDecryptControlChar},
{"\x1A", ErrorBadDecryptControlChar},
{"\x1B", ErrorBadDecryptControlChar},
{"\x1C", ErrorBadDecryptControlChar},
{"\x1D", ErrorBadDecryptControlChar},
{"\x1E", ErrorBadDecryptControlChar},
{"\x1F", ErrorBadDecryptControlChar},
{"\x20", nil},
{"\x7E", nil},
{"\x7F", ErrorBadDecryptControlChar},
{"£100", nil},
{`hello? sausage/êé/Hello, 世界/ " ' @ < > & ?/z.txt`, nil},
{"£100", nil},
// Following tests from http://www.php.net/manual/en/reference.pcre.pattern.modifiers.php#54805
{"a", nil}, // Valid ASCII
{"\xc3\xb1", nil}, // Valid 2 Octet Sequence
{"\xc3\x28", ErrorBadDecryptUTF8}, // Invalid 2 Octet Sequence
{"\xa0\xa1", ErrorBadDecryptUTF8}, // Invalid Sequence Identifier
{"\xe2\x82\xa1", nil}, // Valid 3 Octet Sequence
{"\xe2\x28\xa1", ErrorBadDecryptUTF8}, // Invalid 3 Octet Sequence (in 2nd Octet)
{"\xe2\x82\x28", ErrorBadDecryptUTF8}, // Invalid 3 Octet Sequence (in 3rd Octet)
{"\xf0\x90\x8c\xbc", nil}, // Valid 4 Octet Sequence
{"\xf0\x28\x8c\xbc", ErrorBadDecryptUTF8}, // Invalid 4 Octet Sequence (in 2nd Octet)
{"\xf0\x90\x28\xbc", ErrorBadDecryptUTF8}, // Invalid 4 Octet Sequence (in 3rd Octet)
{"\xf0\x28\x8c\x28", ErrorBadDecryptUTF8}, // Invalid 4 Octet Sequence (in 4th Octet)
{"\xf8\xa1\xa1\xa1\xa1", ErrorBadDecryptUTF8}, // Valid 5 Octet Sequence (but not Unicode!)
{"\xfc\xa1\xa1\xa1\xa1\xa1", ErrorBadDecryptUTF8}, // Valid 6 Octet Sequence (but not Unicode!)
} {
actual := checkValidString([]byte(test.in))
assert.Equal(t, actual, test.expected, fmt.Sprintf("in=%q", test.in))
}
}
func TestEncodeFileName(t *testing.T) {
for _, test := range []struct {
in string
expected string
}{
{"", ""},
{"1", "64"},
{"12", "64p0"},
{"123", "64p36"},
{"1234", "64p36d0"},
{"12345", "64p36d1l"},
{"123456", "64p36d1l6o"},
{"1234567", "64p36d1l6org"},
{"12345678", "64p36d1l6orjg"},
{"123456789", "64p36d1l6orjge8"},
{"1234567890", "64p36d1l6orjge9g"},
{"12345678901", "64p36d1l6orjge9g64"},
{"123456789012", "64p36d1l6orjge9g64p0"},
{"1234567890123", "64p36d1l6orjge9g64p36"},
{"12345678901234", "64p36d1l6orjge9g64p36d0"},
{"123456789012345", "64p36d1l6orjge9g64p36d1l"},
{"1234567890123456", "64p36d1l6orjge9g64p36d1l6o"},
} {
actual := encodeFileName([]byte(test.in))
assert.Equal(t, actual, test.expected, fmt.Sprintf("in=%q", test.in))
recovered, err := decodeFileName(test.expected)
assert.NoError(t, err)
assert.Equal(t, string(recovered), test.in, fmt.Sprintf("reverse=%q", test.expected))
in := strings.ToUpper(test.expected)
recovered, err = decodeFileName(in)
assert.NoError(t, err)
assert.Equal(t, string(recovered), test.in, fmt.Sprintf("reverse=%q", in))
}
}
func TestDecodeFileName(t *testing.T) {
// We've tested decoding the valid ones above, now concentrate on the invalid ones
for _, test := range []struct {
in string
expectedErr error
}{
{"64=", ErrorBadBase32Encoding},
{"!", base32.CorruptInputError(0)},
{"hello=hello", base32.CorruptInputError(5)},
} {
actual, actualErr := decodeFileName(test.in)
assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("in=%q got actual=%q, err = %v %T", test.in, actual, actualErr, actualErr))
}
}
func TestEncryptSegment(t *testing.T) {
c, _ := newCipher(NameEncryptionStandard, "", "")
for _, test := range []struct {
in string
expected string
}{
{"", ""},
{"1", "p0e52nreeaj0a5ea7s64m4j72s"},
{"12", "l42g6771hnv3an9cgc8cr2n1ng"},
{"123", "qgm4avr35m5loi1th53ato71v0"},
{"1234", "8ivr2e9plj3c3esisjpdisikos"},
{"12345", "rh9vu63q3o29eqmj4bg6gg7s44"},
{"123456", "bn717l3alepn75b2fb2ejmi4b4"},
{"1234567", "n6bo9jmb1qe3b1ogtj5qkf19k8"},
{"12345678", "u9t24j7uaq94dh5q53m3s4t9ok"},
{"123456789", "37hn305g6j12d1g0kkrl7ekbs4"},
{"1234567890", "ot8d91eplaglb62k2b1trm2qv0"},
{"12345678901", "h168vvrgb53qnrtvvmb378qrcs"},
{"123456789012", "s3hsdf9e29ithrqbjqu01t8q2s"},
{"1234567890123", "cf3jimlv1q2oc553mv7s3mh3eo"},
{"12345678901234", "moq0uqdlqrblrc5pa5u5c7hq9g"},
{"123456789012345", "eeam3li4rnommi3a762h5n7meg"},
{"1234567890123456", "mijbj0frqf6ms7frcr6bd9h0env53jv96pjaaoirk7forcgpt70g"},
} {
actual := c.encryptSegment(test.in)
assert.Equal(t, test.expected, actual, fmt.Sprintf("Testing %q", test.in))
recovered, err := c.decryptSegment(test.expected)
assert.NoError(t, err, fmt.Sprintf("Testing reverse %q", test.expected))
assert.Equal(t, test.in, recovered, fmt.Sprintf("Testing reverse %q", test.expected))
in := strings.ToUpper(test.expected)
recovered, err = c.decryptSegment(in)
assert.NoError(t, err, fmt.Sprintf("Testing reverse %q", in))
assert.Equal(t, test.in, recovered, fmt.Sprintf("Testing reverse %q", in))
}
}
func TestDecryptSegment(t *testing.T) {
// We've tested the forwards above, now concentrate on the errors
c, _ := newCipher(NameEncryptionStandard, "", "")
for _, test := range []struct {
in string
expectedErr error
}{
{"64=", ErrorBadBase32Encoding},
{"!", base32.CorruptInputError(0)},
{encodeFileName([]byte("a")), ErrorNotAMultipleOfBlocksize},
{encodeFileName([]byte("123456789abcdef")), ErrorNotAMultipleOfBlocksize},
{encodeFileName([]byte("123456789abcdef0")), pkcs7.ErrorPaddingTooLong},
{c.encryptSegment("\x01"), ErrorBadDecryptControlChar},
{c.encryptSegment("\xc3\x28"), ErrorBadDecryptUTF8},
} {
actual, actualErr := c.decryptSegment(test.in)
assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("in=%q got actual=%q, err = %v %T", test.in, actual, actualErr, actualErr))
}
}
func TestEncryptFileName(t *testing.T) {
// First standard mode
c, _ := newCipher(NameEncryptionStandard, "", "")
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptFileName("1"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptFileName("1/12"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", c.EncryptFileName("1/12/123"))
// Now off mode
c, _ = newCipher(NameEncryptionOff, "", "")
assert.Equal(t, "1/12/123.bin", c.EncryptFileName("1/12/123"))
}
func TestDecryptFileName(t *testing.T) {
for _, test := range []struct {
mode NameEncryptionMode
in string
expected string
expectedErr error
}{
{NameEncryptionStandard, "p0e52nreeaj0a5ea7s64m4j72s", "1", nil},
{NameEncryptionStandard, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", "1/12", nil},
{NameEncryptionStandard, "p0e52nreeAJ0A5EA7S64M4J72S/L42G6771HNv3an9cgc8cr2n1ng", "1/12", nil},
{NameEncryptionStandard, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil},
{NameEncryptionStandard, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1/qgm4avr35m5loi1th53ato71v0", "", ErrorNotAMultipleOfBlocksize},
{NameEncryptionOff, "1/12/123.bin", "1/12/123", nil},
{NameEncryptionOff, "1/12/123.bix", "", ErrorNotAnEncryptedFile},
{NameEncryptionOff, ".bin", "", ErrorNotAnEncryptedFile},
} {
c, _ := newCipher(test.mode, "", "")
actual, actualErr := c.DecryptFileName(test.in)
what := fmt.Sprintf("Testing %q (mode=%v)", test.in, test.mode)
assert.Equal(t, test.expected, actual, what)
assert.Equal(t, test.expectedErr, actualErr, what)
}
}
func TestEncryptDirName(t *testing.T) {
// First standard mode
c, _ := newCipher(NameEncryptionStandard, "", "")
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptDirName("1"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptDirName("1/12"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", c.EncryptDirName("1/12/123"))
// Now off mode
c, _ = newCipher(NameEncryptionOff, "", "")
assert.Equal(t, "1/12/123", c.EncryptDirName("1/12/123"))
}
func TestDecryptDirName(t *testing.T) {
for _, test := range []struct {
mode NameEncryptionMode
in string
expected string
expectedErr error
}{
{NameEncryptionStandard, "p0e52nreeaj0a5ea7s64m4j72s", "1", nil},
{NameEncryptionStandard, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", "1/12", nil},
{NameEncryptionStandard, "p0e52nreeAJ0A5EA7S64M4J72S/L42G6771HNv3an9cgc8cr2n1ng", "1/12", nil},
{NameEncryptionStandard, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil},
{NameEncryptionStandard, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1/qgm4avr35m5loi1th53ato71v0", "", ErrorNotAMultipleOfBlocksize},
{NameEncryptionOff, "1/12/123.bin", "1/12/123.bin", nil},
{NameEncryptionOff, "1/12/123", "1/12/123", nil},
{NameEncryptionOff, ".bin", ".bin", nil},
} {
c, _ := newCipher(test.mode, "", "")
actual, actualErr := c.DecryptDirName(test.in)
what := fmt.Sprintf("Testing %q (mode=%v)", test.in, test.mode)
assert.Equal(t, test.expected, actual, what)
assert.Equal(t, test.expectedErr, actualErr, what)
}
}
func TestEncryptedSize(t *testing.T) {
c, _ := newCipher(NameEncryptionStandard, "", "")
for _, test := range []struct {
in int64
expected int64
}{
{0, 32},
{1, 32 + 16 + 1},
{65536, 32 + 16 + 65536},
{65537, 32 + 16 + 65536 + 16 + 1},
{1 << 20, 32 + 16*(16+65536)},
{(1 << 20) + 65535, 32 + 16*(16+65536) + 16 + 65535},
{1 << 30, 32 + 16384*(16+65536)},
{(1 << 40) + 1, 32 + 16777216*(16+65536) + 16 + 1},
} {
actual := c.EncryptedSize(test.in)
assert.Equal(t, test.expected, actual, fmt.Sprintf("Testing %d", test.in))
recovered, err := c.DecryptedSize(test.expected)
assert.NoError(t, err, fmt.Sprintf("Testing reverse %d", test.expected))
assert.Equal(t, test.in, recovered, fmt.Sprintf("Testing reverse %d", test.expected))
}
}
func TestDecryptedSize(t *testing.T) {
// Test the errors since we tested the reverse above
c, _ := newCipher(NameEncryptionStandard, "", "")
for _, test := range []struct {
in int64
expectedErr error
}{
{0, ErrorEncryptedFileTooShort},
{0, ErrorEncryptedFileTooShort},
{1, ErrorEncryptedFileTooShort},
{7, ErrorEncryptedFileTooShort},
{32 + 1, ErrorEncryptedFileBadHeader},
{32 + 16, ErrorEncryptedFileBadHeader},
{32 + 16 + 65536 + 1, ErrorEncryptedFileBadHeader},
{32 + 16 + 65536 + 16, ErrorEncryptedFileBadHeader},
} {
_, actualErr := c.DecryptedSize(test.in)
assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("Testing %d", test.in))
}
}
func TestNoncePointer(t *testing.T) {
var x nonce
assert.Equal(t, (*[24]byte)(&x), x.pointer())
}
func TestNonceFromReader(t *testing.T) {
var x nonce
buf := bytes.NewBufferString("123456789abcdefghijklmno")
err := x.fromReader(buf)
assert.NoError(t, err)
assert.Equal(t, nonce{'1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o'}, x)
buf = bytes.NewBufferString("123456789abcdefghijklmn")
err = x.fromReader(buf)
assert.Error(t, err, "short read of nonce")
}
func TestNonceFromBuf(t *testing.T) {
var x nonce
buf := []byte("123456789abcdefghijklmnoXXXXXXXX")
x.fromBuf(buf)
assert.Equal(t, nonce{'1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o'}, x)
buf = []byte("0123456789abcdefghijklmn")
x.fromBuf(buf)
assert.Equal(t, nonce{'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n'}, x)
buf = []byte("0123456789abcdefghijklm")
assert.Panics(t, func() { x.fromBuf(buf) })
}
func TestNonceIncrement(t *testing.T) {
for _, test := range []struct {
in nonce
out nonce
}{
{
nonce{0x00},
nonce{0x01},
},
{
nonce{0xFF},
nonce{0x00, 0x01},
},
{
nonce{0xFF, 0xFF},
nonce{0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
},
{
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
},
} {
x := test.in
x.increment()
assert.Equal(t, test.out, x)
}
}
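The vectors above pin down the counter semantics: the nonce behaves as a 24-byte little-endian integer, incremented with carry, and the carry out of the final byte is dropped (the all-0xFF case wraps back to all zeros). Here is a minimal sketch of that behaviour, assuming `nonce` is the `[24]byte` type implied by `TestNoncePointer` above; the production method is `nonce.increment`, defined elsewhere (presumably cipher.go) and not shown in this hunk:

```go
// incrementSketch mirrors the little-endian carry behaviour implied by
// the TestNonceIncrement vectors; a carry out of the last byte is dropped.
func (n *nonce) incrementSketch() {
	for i := 0; i < len(n); i++ {
		n[i]++
		if n[i] != 0 {
			return // no carry into the next byte
		}
		// byte wrapped to zero: continue and carry into the next byte
	}
}
```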
// randomSource can read or write a random sequence
type randomSource struct {
counter int64
size int64
}
func newRandomSource(size int64) *randomSource {
return &randomSource{
size: size,
}
}
func (r *randomSource) next() byte {
r.counter++
return byte(r.counter % 257)
}
func (r *randomSource) Read(p []byte) (n int, err error) {
for i := range p {
if r.counter >= r.size {
err = io.EOF
break
}
p[i] = r.next()
n++
}
return n, err
}
func (r *randomSource) Write(p []byte) (n int, err error) {
for i := range p {
if p[i] != r.next() {
return 0, errors.Errorf("Error in stream at %d", r.counter)
}
}
return len(p), nil
}
func (r *randomSource) Close() error { return nil }
// Check interfaces
var (
_ io.ReadCloser = (*randomSource)(nil)
_ io.WriteCloser = (*randomSource)(nil)
)
// Test the test infrastructure first!
func TestRandomSource(t *testing.T) {
source := newRandomSource(1E8)
sink := newRandomSource(1E8)
n, err := io.Copy(sink, source)
assert.NoError(t, err)
assert.Equal(t, int64(1E8), n)
source = newRandomSource(1E8)
buf := make([]byte, 16)
_, _ = source.Read(buf)
sink = newRandomSource(1E8)
n, err = io.Copy(sink, source)
assert.Error(t, err, "Error in stream")
}
type zeroes struct{}
func (z *zeroes) Read(p []byte) (n int, err error) {
for i := range p {
p[i] = 0
n++
}
return n, nil
}
// Test encrypt decrypt with different buffer sizes
func testEncryptDecrypt(t *testing.T, bufSize int, copySize int64) {
c, err := newCipher(NameEncryptionStandard, "", "")
assert.NoError(t, err)
c.cryptoRand = &zeroes{} // zero out the nonce
buf := make([]byte, bufSize)
source := newRandomSource(copySize)
encrypted, err := c.newEncrypter(source)
assert.NoError(t, err)
decrypted, err := c.newDecrypter(ioutil.NopCloser(encrypted))
assert.NoError(t, err)
sink := newRandomSource(copySize)
n, err := io.CopyBuffer(sink, decrypted, buf)
assert.NoError(t, err)
assert.Equal(t, copySize, n)
blocks := copySize / blockSize
if (copySize % blockSize) != 0 {
blocks++
}
var expectedNonce = nonce{byte(blocks), byte(blocks >> 8), byte(blocks >> 16), byte(blocks >> 32)}
assert.Equal(t, expectedNonce, encrypted.nonce)
assert.Equal(t, expectedNonce, decrypted.nonce)
}
func TestEncryptDecrypt1(t *testing.T) {
testEncryptDecrypt(t, 1, 1E7)
}
func TestEncryptDecrypt32(t *testing.T) {
testEncryptDecrypt(t, 32, 1E8)
}
func TestEncryptDecrypt4096(t *testing.T) {
testEncryptDecrypt(t, 4096, 1E8)
}
func TestEncryptDecrypt65536(t *testing.T) {
testEncryptDecrypt(t, 65536, 1E8)
}
func TestEncryptDecrypt65537(t *testing.T) {
testEncryptDecrypt(t, 65537, 1E8)
}
var (
file0 = []byte{
0x52, 0x43, 0x4c, 0x4f, 0x4e, 0x45, 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
}
file1 = []byte{
0x52, 0x43, 0x4c, 0x4f, 0x4e, 0x45, 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
0x09, 0x5b, 0x44, 0x6c, 0xd6, 0x23, 0x7b, 0xbc, 0xb0, 0x8d, 0x09, 0xfb, 0x52, 0x4c, 0xe5, 0x65,
0xAA,
}
file16 = []byte{
0x52, 0x43, 0x4c, 0x4f, 0x4e, 0x45, 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
0xb9, 0xc4, 0x55, 0x2a, 0x27, 0x10, 0x06, 0x29, 0x18, 0x96, 0x0a, 0x3e, 0x60, 0x8c, 0x29, 0xb9,
0xaa, 0x8a, 0x5e, 0x1e, 0x16, 0x5b, 0x6d, 0x07, 0x5d, 0xe4, 0xe9, 0xbb, 0x36, 0x7f, 0xd6, 0xd4,
}
)
func TestEncryptData(t *testing.T) {
for _, test := range []struct {
in []byte
expected []byte
}{
{[]byte{}, file0},
{[]byte{1}, file1},
{[]byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}, file16},
} {
c, err := newCipher(NameEncryptionStandard, "", "")
assert.NoError(t, err)
c.cryptoRand = newRandomSource(1E8) // nodge the crypto rand generator
// Check encode works
buf := bytes.NewBuffer(test.in)
encrypted, err := c.EncryptData(buf)
assert.NoError(t, err)
out, err := ioutil.ReadAll(encrypted)
assert.NoError(t, err)
assert.Equal(t, test.expected, out)
// Check we can decode the data properly too...
buf = bytes.NewBuffer(out)
decrypted, err := c.DecryptData(ioutil.NopCloser(buf))
assert.NoError(t, err)
out, err = ioutil.ReadAll(decrypted)
assert.NoError(t, err)
assert.Equal(t, test.in, out)
}
}
func TestNewEncrypter(t *testing.T) {
c, err := newCipher(NameEncryptionStandard, "", "")
assert.NoError(t, err)
c.cryptoRand = newRandomSource(1E8) // nodge the crypto rand generator
z := &zeroes{}
fh, err := c.newEncrypter(z)
assert.NoError(t, err)
assert.Equal(t, nonce{0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18}, fh.nonce)
assert.Equal(t, []byte{'R', 'C', 'L', 'O', 'N', 'E', 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18}, fh.buf[:32])
// Test error path
c.cryptoRand = bytes.NewBufferString("123456789abcdefghijklmn")
fh, err = c.newEncrypter(z)
assert.Nil(t, fh)
assert.Error(t, err, "short read of nonce")
}
type errorReader struct {
err error
}
func (er errorReader) Read(p []byte) (n int, err error) {
return 0, er.err
}
type closeDetector struct {
io.Reader
closed int
}
func newCloseDetector(in io.Reader) *closeDetector {
return &closeDetector{
Reader: in,
}
}
func (c *closeDetector) Close() error {
c.closed++
return nil
}
func TestNewDecrypter(t *testing.T) {
c, err := newCipher(NameEncryptionStandard, "", "")
assert.NoError(t, err)
c.cryptoRand = newRandomSource(1E8) // nodge the crypto rand generator
cd := newCloseDetector(bytes.NewBuffer(file0))
fh, err := c.newDecrypter(cd)
assert.NoError(t, err)
// check nonce is in place
assert.Equal(t, file0[8:32], fh.nonce[:])
assert.Equal(t, 0, cd.closed)
// Test error paths
for i := range file0 {
cd := newCloseDetector(bytes.NewBuffer(file0[:i]))
fh, err = c.newDecrypter(cd)
assert.Nil(t, fh)
assert.Error(t, err, ErrorEncryptedFileTooShort.Error())
assert.Equal(t, 1, cd.closed)
}
er := &errorReader{errors.New("potato")}
cd = newCloseDetector(er)
fh, err = c.newDecrypter(cd)
assert.Nil(t, fh)
assert.Error(t, err, "potato")
assert.Equal(t, 1, cd.closed)
// bad magic
file0copy := make([]byte, len(file0))
copy(file0copy, file0)
for i := range fileMagic {
file0copy[i] ^= 0x1
cd := newCloseDetector(bytes.NewBuffer(file0copy))
fh, err := c.newDecrypter(cd)
assert.Nil(t, fh)
assert.Error(t, err, ErrorEncryptedBadMagic.Error())
file0copy[i] ^= 0x1
assert.Equal(t, 1, cd.closed)
}
}
func TestDecrypterRead(t *testing.T) {
c, err := newCipher(NameEncryptionStandard, "", "")
assert.NoError(t, err)
// Test truncating the header
for i := 1; i < blockHeaderSize; i++ {
cd := newCloseDetector(bytes.NewBuffer(file1[:len(file1)-i]))
fh, err := c.newDecrypter(cd)
assert.NoError(t, err)
_, err = ioutil.ReadAll(fh)
assert.Error(t, err, ErrorEncryptedFileBadHeader.Error())
assert.Equal(t, 0, cd.closed)
}
// Test producing an error on Read of the underlying file
in1 := bytes.NewBuffer(file1)
in2 := &errorReader{errors.New("potato")}
in := io.MultiReader(in1, in2)
cd := newCloseDetector(in)
fh, err := c.newDecrypter(cd)
assert.NoError(t, err)
_, err = ioutil.ReadAll(fh)
assert.Error(t, err, "potato")
assert.Equal(t, 0, cd.closed)
// Test corrupting the input
// shouldn't be able to corrupt any byte without some sort of error
file16copy := make([]byte, len(file16))
copy(file16copy, file16)
for i := range file16copy {
file16copy[i] ^= 0xFF
fh, err := c.newDecrypter(ioutil.NopCloser(bytes.NewBuffer(file16copy)))
if i < fileMagicSize {
assert.Error(t, err, ErrorEncryptedBadMagic.Error())
assert.Nil(t, fh)
} else {
assert.NoError(t, err)
_, err = ioutil.ReadAll(fh)
assert.Error(t, err, ErrorEncryptedFileBadHeader.Error())
}
file16copy[i] ^= 0xFF
}
}
func TestDecrypterClose(t *testing.T) {
c, err := newCipher(NameEncryptionStandard, "", "")
assert.NoError(t, err)
cd := newCloseDetector(bytes.NewBuffer(file16))
fh, err := c.newDecrypter(cd)
assert.NoError(t, err)
assert.Equal(t, 0, cd.closed)
// close before reading
assert.Equal(t, nil, fh.err)
err = fh.Close()
assert.Equal(t, ErrorFileClosed, fh.err)
assert.Equal(t, 1, cd.closed)
// double close
err = fh.Close()
assert.Error(t, err, ErrorFileClosed.Error())
assert.Equal(t, 1, cd.closed)
// try again reading the file this time
cd = newCloseDetector(bytes.NewBuffer(file1))
fh, err = c.newDecrypter(cd)
assert.NoError(t, err)
assert.Equal(t, 0, cd.closed)
// close after reading
out, err := ioutil.ReadAll(fh)
assert.NoError(t, err)
assert.Equal(t, []byte{1}, out)
assert.Equal(t, io.EOF, fh.err)
err = fh.Close()
assert.Equal(t, ErrorFileClosed, fh.err)
assert.Equal(t, 1, cd.closed)
}
func TestPutGetBlock(t *testing.T) {
c, err := newCipher(NameEncryptionStandard, "", "")
assert.NoError(t, err)
block := c.getBlock()
c.putBlock(block)
c.putBlock(block)
assert.Panics(t, func() { c.putBlock(block[:len(block)-1]) })
}
func TestKey(t *testing.T) {
c, err := newCipher(NameEncryptionStandard, "", "")
assert.NoError(t, err)
// Check zero keys OK
assert.Equal(t, [32]byte{}, c.dataKey)
assert.Equal(t, [32]byte{}, c.nameKey)
assert.Equal(t, [16]byte{}, c.nameTweak)
require.NoError(t, c.Key("potato", ""))
assert.Equal(t, [32]byte{0x74, 0x55, 0xC7, 0x1A, 0xB1, 0x7C, 0x86, 0x5B, 0x84, 0x71, 0xF4, 0x7B, 0x79, 0xAC, 0xB0, 0x7E, 0xB3, 0x1D, 0x56, 0x78, 0xB8, 0x0C, 0x7E, 0x2E, 0xAF, 0x4F, 0xC8, 0x06, 0x6A, 0x9E, 0xE4, 0x68}, c.dataKey)
assert.Equal(t, [32]byte{0x76, 0x5D, 0xA2, 0x7A, 0xB1, 0x5D, 0x77, 0xF9, 0x57, 0x96, 0x71, 0x1F, 0x7B, 0x93, 0xAD, 0x63, 0xBB, 0xB4, 0x84, 0x07, 0x2E, 0x71, 0x80, 0xA8, 0xD1, 0x7A, 0x9B, 0xBE, 0xC1, 0x42, 0x70, 0xD0}, c.nameKey)
assert.Equal(t, [16]byte{0xC1, 0x8D, 0x59, 0x32, 0xF5, 0x5B, 0x28, 0x28, 0xC5, 0xE1, 0xE8, 0x72, 0x15, 0x52, 0x03, 0x10}, c.nameTweak)
require.NoError(t, c.Key("Potato", ""))
assert.Equal(t, [32]byte{0xAE, 0xEA, 0x6A, 0xD3, 0x47, 0xDF, 0x75, 0xB9, 0x63, 0xCE, 0x12, 0xF5, 0x76, 0x23, 0xE9, 0x46, 0xD4, 0x2E, 0xD8, 0xBF, 0x3E, 0x92, 0x8B, 0x39, 0x24, 0x37, 0x94, 0x13, 0x3E, 0x5E, 0xF7, 0x5E}, c.dataKey)
assert.Equal(t, [32]byte{0x54, 0xF7, 0x02, 0x6E, 0x8A, 0xFC, 0x56, 0x0A, 0x86, 0x63, 0x6A, 0xAB, 0x2C, 0x9C, 0x51, 0x62, 0xE5, 0x1A, 0x12, 0x23, 0x51, 0x83, 0x6E, 0xAF, 0x50, 0x42, 0x0F, 0x98, 0x1C, 0x86, 0x0A, 0x19}, c.nameKey)
assert.Equal(t, [16]byte{0xF8, 0xC1, 0xB6, 0x27, 0x2D, 0x52, 0x9B, 0x4A, 0x8F, 0xDA, 0xEB, 0x42, 0x4A, 0x28, 0xDD, 0xF3}, c.nameTweak)
require.NoError(t, c.Key("potato", "sausage"))
assert.Equal(t, [32]uint8{0x8e, 0x9b, 0x6b, 0x99, 0xf8, 0x69, 0x4, 0x67, 0xa0, 0x71, 0xf9, 0xcb, 0x92, 0xd0, 0xaa, 0x78, 0x7f, 0x8f, 0xf1, 0x78, 0xbe, 0xc9, 0x6f, 0x99, 0x9f, 0xd5, 0x20, 0x6e, 0x64, 0x4a, 0x1b, 0x50}, c.dataKey)
assert.Equal(t, [32]uint8{0x3e, 0xa9, 0x5e, 0xf6, 0x81, 0x78, 0x2d, 0xc9, 0xd9, 0x95, 0x5d, 0x22, 0x5b, 0xfd, 0x44, 0x2c, 0x6f, 0x5d, 0x68, 0x97, 0xb0, 0x29, 0x1, 0x5c, 0x6f, 0x46, 0x2e, 0x2a, 0x9d, 0xae, 0x2c, 0xe3}, c.nameKey)
assert.Equal(t, [16]uint8{0xf1, 0x7f, 0xd7, 0x14, 0x1d, 0x65, 0x27, 0x4f, 0x36, 0x3f, 0xc2, 0xa0, 0x4d, 0xd2, 0x14, 0x8a}, c.nameTweak)
require.NoError(t, c.Key("potato", "Sausage"))
assert.Equal(t, [32]uint8{0xda, 0x81, 0x8c, 0x67, 0xef, 0x11, 0xf, 0xc8, 0xd5, 0xc8, 0x62, 0x4b, 0x7f, 0xe2, 0x9e, 0x35, 0x35, 0xb0, 0x8d, 0x79, 0x84, 0x89, 0xac, 0xcb, 0xa0, 0xff, 0x2, 0x72, 0x3, 0x1a, 0x5e, 0x64}, c.dataKey)
assert.Equal(t, [32]uint8{0x2, 0x81, 0x7e, 0x7b, 0xea, 0x99, 0x81, 0x5a, 0xd0, 0x2d, 0xb9, 0x64, 0x48, 0xb0, 0x28, 0x27, 0x7c, 0x20, 0xb4, 0xd4, 0xa4, 0x68, 0xad, 0x4e, 0x5c, 0x29, 0xf, 0x79, 0xef, 0xee, 0xdb, 0x3b}, c.nameKey)
assert.Equal(t, [16]uint8{0x9a, 0xb5, 0xb, 0x3d, 0xcb, 0x60, 0x59, 0x55, 0xa5, 0x4d, 0xe6, 0xb6, 0x47, 0x3, 0x23, 0xe2}, c.nameTweak)
require.NoError(t, c.Key("", ""))
assert.Equal(t, [32]byte{}, c.dataKey)
assert.Equal(t, [32]byte{}, c.nameKey)
assert.Equal(t, [16]byte{}, c.nameTweak)
}
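Reading the `TestEncryptedSize` vectors above, the size overhead appears to be a 32-byte file header plus 16 bytes for every 64 KiB block of plaintext, with a partial final block still paying the 16 bytes. A small sketch that reproduces those vectors; the constants are inferred from the test values, not taken from a specification:

```go
// apparentEncryptedSize reproduces the TestEncryptedSize vectors:
// 32-byte header, 64 KiB plaintext blocks, 16 bytes of overhead per
// (possibly partial) block. Inferred from the test, not authoritative.
func apparentEncryptedSize(plaintext int64) int64 {
	const (
		headerSize    = 32
		blockDataSize = 65536
		blockOverhead = 16
	)
	size := int64(headerSize)
	size += (plaintext / blockDataSize) * (blockOverhead + blockDataSize)
	if rem := plaintext % blockDataSize; rem != 0 {
		size += blockOverhead + rem
	}
	return size
}
```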

430
crypt/crypt.go Normal file

@@ -0,0 +1,430 @@
// Package crypt provides wrappers for Fs and Object which implement encryption
package crypt
import (
"fmt"
"io"
"path"
"sync"
"github.com/ncw/rclone/fs"
"github.com/pkg/errors"
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "crypt",
Description: "Encrypt/Decrypt a remote",
NewFs: NewFs,
Options: []fs.Option{{
Name: "remote",
Help: "Remote to encrypt/decrypt.",
}, {
Name: "filename_encryption",
Help: "How to encrypt the filenames.",
Examples: []fs.OptionExample{
{
Value: "off",
Help: "Don't encrypt the file names. Adds a \".bin\" extension only.",
}, {
Value: "standard",
Help: "Encrypt the filenames see the docs for the details.",
},
},
}, {
Name: "password",
Help: "Password or pass phrase for encryption.",
IsPassword: true,
}, {
Name: "password2",
Help: "Password or pass phrase for salt. Optional but recommended.\nShould be different to the previous password.",
IsPassword: true,
Optional: true,
}},
})
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, rpath string) (fs.Fs, error) {
mode, err := NewNameEncryptionMode(fs.ConfigFile.MustValue(name, "filename_encryption", "standard"))
if err != nil {
return nil, err
}
password := fs.ConfigFile.MustValue(name, "password", "")
if password == "" {
return nil, errors.New("password not set in config file")
}
password, err = fs.Reveal(password)
if err != nil {
return nil, errors.Wrap(err, "failed to decrypt password")
}
salt := fs.ConfigFile.MustValue(name, "password2", "")
if salt != "" {
salt, err = fs.Reveal(salt)
if err != nil {
return nil, errors.Wrap(err, "failed to decrypt password2")
}
}
cipher, err := newCipher(mode, password, salt)
if err != nil {
return nil, errors.Wrap(err, "failed to make cipher")
}
remote := fs.ConfigFile.MustValue(name, "remote")
// Look for a file first
remotePath := path.Join(remote, cipher.EncryptFileName(rpath))
wrappedFs, err := fs.NewFs(remotePath)
// if that didn't produce a file, look for a directory
if err != fs.ErrorIsFile {
remotePath = path.Join(remote, cipher.EncryptDirName(rpath))
wrappedFs, err = fs.NewFs(remotePath)
}
if err != fs.ErrorIsFile && err != nil {
return nil, errors.Wrapf(err, "failed to make remote %q to wrap", remotePath)
}
f := &Fs{
Fs: wrappedFs,
cipher: cipher,
mode: mode,
}
return f, err
}
// Fs represents a wrapped fs.Fs
type Fs struct {
fs.Fs
cipher Cipher
mode NameEncryptionMode
}
// String returns a description of the FS
func (f *Fs) String() string {
return fmt.Sprintf("Encrypted %s", f.Fs.String())
}
// List the Fs into a channel
func (f *Fs) List(opts fs.ListOpts, dir string) {
f.Fs.List(f.newListOpts(opts, dir), f.cipher.EncryptDirName(dir))
}
// NewObject finds the Object at remote.
func (f *Fs) NewObject(remote string) (fs.Object, error) {
o, err := f.Fs.NewObject(f.cipher.EncryptFileName(remote))
if err != nil {
return nil, err
}
return f.newObject(o), nil
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo) (fs.Object, error) {
wrappedIn, err := f.cipher.EncryptData(in)
if err != nil {
return nil, err
}
o, err := f.Fs.Put(wrappedIn, f.newObjectInfo(src))
if err != nil {
return nil, err
}
return f.newObject(o), nil
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() fs.HashSet {
return fs.HashSet(fs.HashNone)
}
// Purge all files in the root and the root directory
//
// Implement this if you have a way of deleting all the files
// quicker than just running Remove() on the result of List()
//
// Return an error if it doesn't exist
func (f *Fs) Purge() error {
do, ok := f.Fs.(fs.Purger)
if !ok {
return fs.ErrorCantPurge
}
return do.Purge()
}
// Copy src to this remote using server side copy operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
do, ok := f.Fs.(fs.Copier)
if !ok {
return nil, fs.ErrorCantCopy
}
o, ok := src.(*Object)
if !ok {
return nil, fs.ErrorCantCopy
}
oResult, err := do.Copy(o.Object, f.cipher.EncryptFileName(remote))
if err != nil {
return nil, err
}
return f.newObject(oResult), nil
}
// Move src to this remote using server side move operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
do, ok := f.Fs.(fs.Mover)
if !ok {
return nil, fs.ErrorCantMove
}
o, ok := src.(*Object)
if !ok {
return nil, fs.ErrorCantMove
}
oResult, err := do.Move(o.Object, f.cipher.EncryptFileName(remote))
if err != nil {
return nil, err
}
return f.newObject(oResult), nil
}
// DirMove moves src to this remote using server side move
// operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(src fs.Fs) error {
do, ok := f.Fs.(fs.DirMover)
if !ok {
return fs.ErrorCantDirMove
}
srcFs, ok := src.(*Fs)
if !ok {
fs.Debug(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
return do.DirMove(srcFs.Fs)
}
// UnWrap returns the Fs that this Fs is wrapping
func (f *Fs) UnWrap() fs.Fs {
return f.Fs
}
// Object describes a wrapped fs.Object for being read from the Fs
//
// This decrypts the remote name and decrypts the data
type Object struct {
fs.Object
f *Fs
}
func (f *Fs) newObject(o fs.Object) *Object {
return &Object{
Object: o,
f: f,
}
}
// Fs returns read only access to the Fs that this object is part of
func (o *Object) Fs() fs.Info {
return o.f
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.Remote()
}
// Remote returns the remote path
func (o *Object) Remote() string {
remote := o.Object.Remote()
decryptedName, err := o.f.cipher.DecryptFileName(remote)
if err != nil {
fs.Debug(remote, "Undecryptable file name: %v", err)
return remote
}
return decryptedName
}
// Size returns the size of the file
func (o *Object) Size() int64 {
size, err := o.f.cipher.DecryptedSize(o.Object.Size())
if err != nil {
fs.Debug(o, "Bad size for decrypt: %v", err)
}
return size
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(hash fs.HashType) (string, error) {
return "", nil
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open() (io.ReadCloser, error) {
in, err := o.Object.Open()
if err != nil {
return in, err
}
return o.f.cipher.DecryptData(in)
}
// Update in to the object with the modTime given of the given size
func (o *Object) Update(in io.Reader, src fs.ObjectInfo) error {
wrappedIn, err := o.f.cipher.EncryptData(in)
if err != nil {
return err
}
return o.Object.Update(wrappedIn, o.f.newObjectInfo(src))
}
// newDir returns a dir with the Name decrypted
func (f *Fs) newDir(dir *fs.Dir) *fs.Dir {
new := *dir
remote := dir.Name
decryptedRemote, err := f.cipher.DecryptDirName(remote)
if err != nil {
fs.Debug(remote, "Undecryptable dir name: %v", err)
} else {
new.Name = decryptedRemote
}
return &new
}
// ObjectInfo describes a wrapped fs.ObjectInfo for being the source
//
// This encrypts the remote name and adjusts the size
type ObjectInfo struct {
fs.ObjectInfo
f *Fs
}
func (f *Fs) newObjectInfo(src fs.ObjectInfo) *ObjectInfo {
return &ObjectInfo{
ObjectInfo: src,
f: f,
}
}
// Fs returns read only access to the Fs that this object is part of
func (o *ObjectInfo) Fs() fs.Info {
return o.f
}
// Remote returns the remote path
func (o *ObjectInfo) Remote() string {
return o.f.cipher.EncryptFileName(o.ObjectInfo.Remote())
}
// Size returns the size of the file
func (o *ObjectInfo) Size() int64 {
return o.f.cipher.EncryptedSize(o.ObjectInfo.Size())
}
// ListOpts wraps a listopts decrypting the directory listing and
// replacing the Objects
type ListOpts struct {
fs.ListOpts
f *Fs
dir string // dir we are listing
mu sync.Mutex // to protect dirs
dirs map[string]struct{} // keep track of synthetic directory objects added
}
// Make a ListOpts wrapper
func (f *Fs) newListOpts(lo fs.ListOpts, dir string) *ListOpts {
if dir != "" {
dir += "/"
}
return &ListOpts{
ListOpts: lo,
f: f,
dir: dir,
dirs: make(map[string]struct{}),
}
}
// Level gets the recursion level for this listing.
//
// Fses may ignore this, but should implement it for improved efficiency if possible.
//
// Level 1 means list just the contents of the directory
//
// Each returned item must have less than level `/`s in.
func (lo *ListOpts) Level() int {
return lo.ListOpts.Level()
}
// Add an object to the output.
// If the function returns true, the operation has been aborted.
// Multiple goroutines can safely add objects concurrently.
func (lo *ListOpts) Add(obj fs.Object) (abort bool) {
remote := obj.Remote()
_, err := lo.f.cipher.DecryptFileName(remote)
if err != nil {
fs.Debug(remote, "Skipping undecryptable file name: %v", err)
return lo.ListOpts.IsFinished()
}
return lo.ListOpts.Add(lo.f.newObject(obj))
}
// AddDir adds a directory to the output.
// If the function returns true, the operation has been aborted.
// Multiple goroutines can safely add objects concurrently.
func (lo *ListOpts) AddDir(dir *fs.Dir) (abort bool) {
remote := dir.Name
_, err := lo.f.cipher.DecryptDirName(remote)
if err != nil {
fs.Debug(remote, "Skipping undecryptable dir name: %v", err)
return lo.ListOpts.IsFinished()
}
return lo.ListOpts.AddDir(lo.f.newDir(dir))
}
// IncludeDirectory returns whether this directory should be
// included in the listing (and recursed into or not).
func (lo *ListOpts) IncludeDirectory(remote string) bool {
decryptedRemote, err := lo.f.cipher.DecryptDirName(remote)
if err != nil {
fs.Debug(remote, "Not including undecryptable directory name: %v", err)
return false
}
return lo.ListOpts.IncludeDirectory(decryptedRemote)
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
// _ fs.PutUncheckeder = (*Fs)(nil)
_ fs.UnWrapper = (*Fs)(nil)
_ fs.ObjectInfo = (*ObjectInfo)(nil)
_ fs.Object = (*Object)(nil)
_ fs.ListOpts = (*ListOpts)(nil)
)
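For orientation, here is a hypothetical config-file stanza exercising the options registered in `init()` above; the remote name, wrapped path and values are illustrative, and both passwords have to be stored in rclone's obscured form (compare the `fs.MustObscure` calls in the test fixture later in this diff):

```
[secret]
type = crypt
remote = mydrive:encrypted
filename_encryption = standard
password = <obscured password>
password2 = <optional obscured salt password>
```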

59
crypt/crypt2_test.go Normal file

@@ -0,0 +1,59 @@
// Test Crypt filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package crypt_test
import (
"testing"
"github.com/ncw/rclone/crypt"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
_ "github.com/ncw/rclone/local"
)
func TestSetup2(t *testing.T) {
fstests.NilObject = fs.Object((*crypt.Object)(nil))
fstests.RemoteName = "TestCrypt2:"
}
// Generic tests for the Fs
func TestInit2(t *testing.T) { fstests.TestInit(t) }
func TestFsString2(t *testing.T) { fstests.TestFsString(t) }
func TestFsRmdirEmpty2(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound2(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir2(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsListEmpty2(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty2(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsNewObjectNotFound2(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile12(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutFile22(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile12(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile22(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListDirRoot2(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListSubdir2(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListLevel22(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListFile12(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject2(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and22(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsCopy2(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove2(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove2(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull2(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision2(t *testing.T) { fstests.TestFsPrecision(t) }
func TestObjectString2(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs2(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote2(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes2(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime2(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectSetModTime2(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize2(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen2(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectUpdate2(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable2(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile2(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound2(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove2(t *testing.T) { fstests.TestObjectRemove(t) }
func TestObjectPurge2(t *testing.T) { fstests.TestObjectPurge(t) }
func TestFinalise2(t *testing.T) { fstests.TestFinalise(t) }


@@ -0,0 +1,27 @@
package crypt_test
import (
"os"
"path/filepath"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
// Create the TestCrypt: remote
func init() {
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-standard")
name := "TestCrypt"
tempdir2 := filepath.Join(os.TempDir(), "rclone-crypt-test-off")
name2 := name + "2"
fstests.ExtraConfig = []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "crypt"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "password", Value: fs.MustObscure("potato")},
{Name: name, Key: "filename_encryption", Value: "standard"},
{Name: name2, Key: "type", Value: "crypt"},
{Name: name2, Key: "remote", Value: tempdir2},
{Name: name2, Key: "password", Value: fs.MustObscure("potato2")},
{Name: name2, Key: "filename_encryption", Value: "off"},
}
}

59
crypt/crypt_test.go Normal file

@@ -0,0 +1,59 @@
// Test Crypt filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package crypt_test
import (
"testing"
"github.com/ncw/rclone/crypt"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
_ "github.com/ncw/rclone/local"
)
func TestSetup(t *testing.T) {
fstests.NilObject = fs.Object((*crypt.Object)(nil))
fstests.RemoteName = "TestCrypt:"
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }

63
crypt/pkcs7/pkcs7.go Normal file

@@ -0,0 +1,63 @@
// Package pkcs7 implements PKCS#7 padding
//
// This is a standard way of encoding variable length buffers into
// buffers which are a multiple of an underlying crypto block size.
package pkcs7
import "github.com/pkg/errors"
// Errors Unpad can return
var (
ErrorPaddingNotFound = errors.New("Bad PKCS#7 padding - not padded")
ErrorPaddingNotAMultiple = errors.New("Bad PKCS#7 padding - not a multiple of blocksize")
ErrorPaddingTooLong = errors.New("Bad PKCS#7 padding - too long")
ErrorPaddingTooShort = errors.New("Bad PKCS#7 padding - too short")
ErrorPaddingNotAllTheSame = errors.New("Bad PKCS#7 padding - not all the same")
)
// Pad buf using PKCS#7 to a multiple of n.
//
// Appends the padding to buf - make a copy of it first if you don't
// want it modified.
func Pad(n int, buf []byte) []byte {
if n <= 1 || n >= 256 {
panic("bad multiple")
}
length := len(buf)
padding := n - (length % n)
for i := 0; i < padding; i++ {
buf = append(buf, byte(padding))
}
if (len(buf) % n) != 0 {
panic("padding failed")
}
return buf
}
// Unpad buf using PKCS#7 from a multiple of n returning a slice of
// buf or an error if malformed.
func Unpad(n int, buf []byte) ([]byte, error) {
if n <= 1 || n >= 256 {
panic("bad multiple")
}
length := len(buf)
if length == 0 {
return nil, ErrorPaddingNotFound
}
if (length % n) != 0 {
return nil, ErrorPaddingNotAMultiple
}
padding := int(buf[length-1])
if padding > n {
return nil, ErrorPaddingTooLong
}
if padding == 0 {
return nil, ErrorPaddingTooShort
}
for i := 0; i < padding; i++ {
if buf[length-1-i] != byte(padding) {
return nil, ErrorPaddingNotAllTheSame
}
}
return buf[:length-padding], nil
}
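A minimal usage sketch of the two exported functions above; the block size and input are illustrative, and note that `Pad` appends to (and may therefore modify) the slice it is given:

```go
package main

import (
	"fmt"

	"github.com/ncw/rclone/crypt/pkcs7"
)

func main() {
	padded := pkcs7.Pad(16, []byte("hello")) // 16 bytes: "hello" plus 11 bytes of 0x0b
	orig, err := pkcs7.Unpad(16, padded)     // recovers "hello", err == nil
	fmt.Printf("%q %q %v\n", padded, orig, err)
}
```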

73
crypt/pkcs7/pkcs7_test.go Normal file

@@ -0,0 +1,73 @@
package pkcs7
import (
"fmt"
"testing"
"github.com/stretchr/testify/assert"
)
func TestPad(t *testing.T) {
for _, test := range []struct {
n int
in string
expected string
}{
{8, "", "\x08\x08\x08\x08\x08\x08\x08\x08"},
{8, "1", "1\x07\x07\x07\x07\x07\x07\x07"},
{8, "12", "12\x06\x06\x06\x06\x06\x06"},
{8, "123", "123\x05\x05\x05\x05\x05"},
{8, "1234", "1234\x04\x04\x04\x04"},
{8, "12345", "12345\x03\x03\x03"},
{8, "123456", "123456\x02\x02"},
{8, "1234567", "1234567\x01"},
{8, "abcdefgh", "abcdefgh\x08\x08\x08\x08\x08\x08\x08\x08"},
{8, "abcdefgh1", "abcdefgh1\x07\x07\x07\x07\x07\x07\x07"},
{8, "abcdefgh12", "abcdefgh12\x06\x06\x06\x06\x06\x06"},
{8, "abcdefgh123", "abcdefgh123\x05\x05\x05\x05\x05"},
{8, "abcdefgh1234", "abcdefgh1234\x04\x04\x04\x04"},
{8, "abcdefgh12345", "abcdefgh12345\x03\x03\x03"},
{8, "abcdefgh123456", "abcdefgh123456\x02\x02"},
{8, "abcdefgh1234567", "abcdefgh1234567\x01"},
{8, "abcdefgh12345678", "abcdefgh12345678\x08\x08\x08\x08\x08\x08\x08\x08"},
{16, "", "\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10"},
{16, "a", "a\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f"},
} {
actual := Pad(test.n, []byte(test.in))
assert.Equal(t, test.expected, string(actual), fmt.Sprintf("Pad %d %q", test.n, test.in))
recovered, err := Unpad(test.n, actual)
assert.NoError(t, err)
assert.Equal(t, []byte(test.in), recovered, fmt.Sprintf("Unpad %d %q", test.n, test.in))
}
assert.Panics(t, func() { Pad(1, []byte("")) }, "bad multiple")
assert.Panics(t, func() { Pad(256, []byte("")) }, "bad multiple")
}
func TestUnpad(t *testing.T) {
// We've tested the OK decoding in TestPad, now test the error cases
for _, test := range []struct {
n int
in string
err error
}{
{8, "", ErrorPaddingNotFound},
{8, "1", ErrorPaddingNotAMultiple},
{8, "12", ErrorPaddingNotAMultiple},
{8, "123", ErrorPaddingNotAMultiple},
{8, "1234", ErrorPaddingNotAMultiple},
{8, "12345", ErrorPaddingNotAMultiple},
{8, "123456", ErrorPaddingNotAMultiple},
{8, "1234567", ErrorPaddingNotAMultiple},
{8, "1234567\xFF", ErrorPaddingTooLong},
{8, "1234567\x09", ErrorPaddingTooLong},
{8, "1234567\x00", ErrorPaddingTooShort},
{8, "123456\x01\x02", ErrorPaddingNotAllTheSame},
{8, "\x07\x08\x08\x08\x08\x08\x08\x08", ErrorPaddingNotAllTheSame},
} {
result, actualErr := Unpad(test.n, []byte(test.in))
assert.Equal(t, test.err, actualErr, fmt.Sprintf("Unpad %d %q", test.n, test.in))
assert.Equal(t, result, []byte(nil))
}
assert.Panics(t, func() { _, _ = Unpad(1, []byte("")) }, "bad multiple")
assert.Panics(t, func() { _, _ = Unpad(256, []byte("")) }, "bad multiple")
}


@@ -4,17 +4,20 @@ package dircache
// _methods are called without the lock
import (
"fmt"
"log"
"strings"
"sync"
"github.com/ncw/rclone/fs"
"github.com/pkg/errors"
)
// DirCache caches paths to directory IDs and vice versa
type DirCache struct {
mu sync.RWMutex
cacheMu sync.RWMutex
cache map[string]string
invCache map[string]string
mu sync.Mutex
fs DirCacher // Interface to find and make stuff
trueRootID string // ID of the absolute root
root string // the path we are working on
@@ -43,52 +46,36 @@ func New(root string, trueRootID string, fs DirCacher) *DirCache {
return d
}
// _get an ID given a path - without lock
func (dc *DirCache) _get(path string) (id string, ok bool) {
id, ok = dc.cache[path]
return
}
// Get an ID given a path
func (dc *DirCache) Get(path string) (id string, ok bool) {
dc.mu.RLock()
id, ok = dc._get(path)
dc.mu.RUnlock()
dc.cacheMu.RLock()
id, ok = dc.cache[path]
dc.cacheMu.RUnlock()
return
}
// GetInv gets a path given an ID
func (dc *DirCache) GetInv(path string) (id string, ok bool) {
dc.mu.RLock()
id, ok = dc.invCache[path]
dc.mu.RUnlock()
func (dc *DirCache) GetInv(id string) (path string, ok bool) {
dc.cacheMu.RLock()
path, ok = dc.invCache[id]
dc.cacheMu.RUnlock()
return
}
// _put a path, id into the map without lock
func (dc *DirCache) _put(path, id string) {
dc.cache[path] = id
dc.invCache[id] = path
}
// Put a path, id into the map
func (dc *DirCache) Put(path, id string) {
dc.mu.Lock()
dc._put(path, id)
dc.mu.Unlock()
}
// _flush the map of all data without lock
func (dc *DirCache) _flush() {
dc.cache = make(map[string]string)
dc.invCache = make(map[string]string)
dc.cacheMu.Lock()
dc.cache[path] = id
dc.invCache[id] = path
dc.cacheMu.Unlock()
}
// Flush the map of all data
func (dc *DirCache) Flush() {
dc.mu.Lock()
dc._flush()
dc.mu.Unlock()
dc.cacheMu.Lock()
dc.cache = make(map[string]string)
dc.invCache = make(map[string]string)
dc.cacheMu.Unlock()
}
// SplitPath splits a path into directory, leaf
@@ -120,8 +107,8 @@ func SplitPath(path string) (directory, leaf string) {
// If not found strip the last path off the path and recurse
// Now have a parent directory id, so look in the parent for self and return it
func (dc *DirCache) FindDir(path string, create bool) (pathID string, err error) {
dc.mu.RLock()
defer dc.mu.RUnlock()
dc.mu.Lock()
defer dc.mu.Unlock()
return dc._findDir(path, create)
}
@@ -135,7 +122,7 @@ func (dc *DirCache) _findDirInCache(path string) string {
}
// If it is in the cache then return it
pathID, ok := dc._get(path)
pathID, ok := dc.Get(path)
if ok {
// fmt.Println("Cache hit on", path)
return pathID
@@ -146,10 +133,6 @@ func (dc *DirCache) _findDirInCache(path string) string {
// Unlocked findDir - must have mu
func (dc *DirCache) _findDir(path string, create bool) (pathID string, err error) {
// if !dc.foundRoot {
// return "", fmt.Errorf("FindDir called before FindRoot")
// }
pathID = dc._findDirInCache(path)
if pathID != "" {
return pathID, nil
@@ -176,15 +159,15 @@ func (dc *DirCache) _findDir(path string, create bool) (pathID string, err error
if create {
pathID, err = dc.fs.CreateDir(parentPathID, leaf)
if err != nil {
return "", fmt.Errorf("Failed to make directory: %v", err)
return "", errors.Wrap(err, "failed to make directory")
}
} else {
return "", fmt.Errorf("Couldn't find directory: %q", path)
return "", fs.ErrorDirNotFound
}
}
// Store the leaf directory in the cache
dc._put(path, pathID)
dc.Put(path, pathID)
// fmt.Println("Dir", path, "is", pathID)
return pathID, nil
@@ -198,13 +181,6 @@ func (dc *DirCache) FindPath(path string, create bool) (leaf, directoryID string
defer dc.mu.Unlock()
directory, leaf := SplitPath(path)
directoryID, err = dc._findDir(directory, create)
if err != nil {
if create {
err = fmt.Errorf("Couldn't find or make directory %q: %s", directory, err)
} else {
err = fmt.Errorf("Couldn't find directory %q: %s", directory, err)
}
}
return
}
@@ -219,26 +195,32 @@ func (dc *DirCache) FindRoot(create bool) error {
if dc.foundRoot {
return nil
}
dc.foundRoot = true
rootID, err := dc._findDir(dc.root, create)
if err != nil {
dc.foundRoot = false
return err
}
dc.foundRoot = true
dc.rootID = rootID
// Find the parent of the root while we still have the root
// directory tree cached
rootParentPath, _ := SplitPath(dc.root)
dc.rootParentID, _ = dc._get(rootParentPath)
dc.rootParentID, _ = dc.Get(rootParentPath)
// Reset the tree based on dc.root
dc._flush()
dc.Flush()
// Put the root directory in
dc._put("", dc.rootID)
dc.Put("", dc.rootID)
return nil
}
// FoundRoot returns whether the root directory has been found yet
//
// Call this from FindLeaf or CreateDir only
func (dc *DirCache) FoundRoot() bool {
return dc.foundRoot
}
// RootID returns the ID of the root directory
//
// This should be called after FindRoot
@@ -258,13 +240,13 @@ func (dc *DirCache) RootParentID() (string, error) {
dc.mu.Lock()
defer dc.mu.Unlock()
if !dc.foundRoot {
return "", fmt.Errorf("Internal Error: RootID() called before FindRoot")
return "", errors.New("internal error: RootID() called before FindRoot")
}
if dc.rootParentID == "" {
return "", fmt.Errorf("Internal Error: Didn't find rootParentID")
return "", errors.New("internal error: didn't find rootParentID")
}
if dc.rootID == dc.trueRootID {
return "", fmt.Errorf("Is root directory")
return "", errors.New("is root directory")
}
return dc.rootParentID, nil
}
@@ -275,11 +257,11 @@ func (dc *DirCache) ResetRoot() {
dc.mu.Lock()
defer dc.mu.Unlock()
dc.foundRoot = false
dc._flush()
dc.Flush()
// Put the true root in
dc.rootID = dc.trueRootID
// Put the root directory in
dc._put("", dc.rootID)
dc.Put("", dc.rootID)
}

82
dircache/list.go Normal file

@@ -0,0 +1,82 @@
// Listing utility functions for fses which use dircache
package dircache
import (
"sync"
"github.com/ncw/rclone/fs"
)
// ListDirJob describes a directory listing that needs to be done
type ListDirJob struct {
DirID string
Path string
Depth int
}
// ListDirer describes the interface necessary to use ListDir
type ListDirer interface {
// ListDir reads the directory specified by the job into out, returning any more jobs
ListDir(out fs.ListOpts, job ListDirJob) (jobs []ListDirJob, err error)
}
// listDir lists the directory using a recursive list from the root
//
// It does this in parallel, calling f.ListDir to do the actual reading
func listDir(f ListDirer, out fs.ListOpts, dirID string, path string) {
// Start some directory listing go routines
var wg sync.WaitGroup // sync closing of go routines
var traversing sync.WaitGroup // running directory traversals
buffer := out.Buffer()
in := make(chan ListDirJob, buffer)
for i := 0; i < buffer; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for job := range in {
jobs, err := f.ListDir(out, job)
if err != nil {
out.SetError(err)
fs.Debug(f, "Error reading %s: %s", path, err)
} else {
traversing.Add(len(jobs))
go func() {
// Now we have traversed this directory, send these
// jobs off for traversal in the background
for _, job := range jobs {
in <- job
}
}()
}
traversing.Done()
}
}()
}
// Start the process
traversing.Add(1)
in <- ListDirJob{DirID: dirID, Path: path, Depth: out.Level() - 1}
traversing.Wait()
close(in)
wg.Wait()
}
// List walks the path returning files and directories into out
func (dc *DirCache) List(f ListDirer, out fs.ListOpts, dir string) {
defer out.Finished()
err := dc.FindRoot(false)
if err != nil {
out.SetError(err)
return
}
id, err := dc.FindDir(dir, false)
if err != nil {
out.SetError(err)
return
}
if dir != "" {
dir += "/"
}
listDir(f, out, id, dir)
}


@@ -1,11 +1,15 @@
{
"indexes": {
"tag": "tags",
"group": "groups",
"menu": "menu"
},
"baseurl": "http://rclone.org",
"title": "rclone - rsync for cloud storage",
"description": "rclone - rsync for cloud storage: google drive, s3, swift, cloudfiles, dropbox, memstore...",
"canonifyurls": true
}
{
"indexes": {
"tag": "tags",
"group": "groups",
"menu": "menu"
},
"baseurl": "http://rclone.org",
"title": "rclone - rsync for cloud storage",
"description": "rclone - rsync for cloud storage: google drive, s3, swift, cloudfiles, dropbox, memstore...",
"canonifyurls": true,
"blackfriday": {
"smartDashes": false,
"plainIDAnchors": true
}
}


@@ -1,6 +1,6 @@
---
title: "Rclone"
description: "rclone syncs files to and from Google Drive, S3, Swift, Cloudfiles, Dropbox, Google Cloud Storage and Amazon Cloud Drive."
description: "rclone syncs files to and from Google Drive, S3, Swift, Cloudfiles, Dropbox, Google Cloud Storage and Amazon Drive."
type: page
date: "2015-09-06"
groups: ["about"]
@@ -18,18 +18,22 @@ Rclone is a command line program to sync files and directories to and from
* Openstack Swift / Rackspace cloud files / Memset Memstore
* Dropbox
* Google Cloud Storage
* Amazon Cloud Drive
* Amazon Drive
* Microsoft One Drive
* Hubic
* Backblaze B2
* Yandex Disk
* The local filesystem
Features
* MD5SUMs checked at all times for file integrity
* MD5/SHA1 hashes checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* Copy mode to just copy new/changed files
* Sync mode to make a directory identical
* Check mode to check all MD5SUMs
* Can sync to and from network, eg two different Drive accounts
* Sync (one way) mode to make a directory identical
* Check mode to check for file hash equality
* Can sync to and from network, eg two different cloud accounts
Links

View File

@@ -1,17 +1,17 @@
---
title: "Amazon Cloud Drive"
description: "Rclone docs for Amazon Cloud Drive"
date: "2015-09-06"
title: "Amazon Drive"
description: "Rclone docs for Amazon Drive"
date: "2016-07-11"
---
<i class="fa fa-google"></i> Amazon Cloud Drive
<i class="fa fa-amazon"></i> Amazon Drive
-----------------------------------------
Paths are specified as `remote:path`
Paths may be as deep as required, eg `remote:directory/subdirectory`.
The initial setup for Amazon cloud drive involves getting a token from
The initial setup for Amazon Drive involves getting a token from
Amazon which you need to do in your browser. `rclone config` walks
you through it.
@@ -27,16 +27,31 @@ d) Delete remote
q) Quit config
e/n/d/q> n
name> remote
What type of source is it?
Choose a number from below
1) amazon cloud drive
2) drive
3) dropbox
4) google cloud storage
5) local
6) s3
7) swift
type> 1
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
5 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
6 / Google Drive
\ "drive"
7 / Hubic
\ "hubic"
8 / Local Disk
\ "local"
9 / Microsoft OneDrive
\ "onedrive"
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
11 / Yandex Disk
\ "yandex"
Storage> 1
Amazon Application Client Id - leave blank normally.
client_id>
Amazon Application Client Secret - leave blank normally.
@@ -58,6 +73,9 @@ d) Delete this remote
y/e/d> y
```
See the [remote setup docs](/remote_setup/) for how to set it up on a
machine with no Internet browser available.
Note that rclone runs a webserver on your local machine to collect the
token as returned from Amazon. This only runs from the moment it
opens your browser to the moment you get back the verification
@@ -66,21 +84,21 @@ you to unblock it temporarily if you are running a host firewall.
Once configured you can then use `rclone` like this,
List directories in top level of your Amazon cloud drive
List directories in top level of your Amazon Drive
rclone lsd remote:
List all the files in your Amazon cloud drive
List all the files in your Amazon Drive
rclone ls remote:
To copy a local directory to an Amazon cloud drive directory called backup
To copy a local directory to an Amazon Drive directory called backup
rclone copy /home/source remote:backup
### Modified time and MD5SUMs ###
Amazon cloud drive doesn't allow modification times to be changed via
Amazon Drive doesn't allow modification times to be changed via
the API so these won't be accurate or used for syncing.
It does store MD5SUMs so for a more accurate sync, you can use the
@@ -91,14 +109,50 @@ It does store MD5SUMs so for a more accurate sync, you can use the
Any files you delete with rclone will end up in the trash. Amazon
don't provide an API to permanently delete files, nor to empty the
trash, so you will have to do that with one of Amazon's apps or via
the Amazon cloud drive website.
the Amazon Drive website.
### Specific options ###
Here are the command line options specific to this cloud storage
system.
#### --acd-templink-threshold=SIZE ####
Files this size or more will be downloaded via their `tempLink`. This
is to work around a problem with Amazon Drive which blocks downloads
of files bigger than about 10GB. The default for this is 9GB which
shouldn't need to be changed.
To download files above this threshold, rclone requests a `tempLink`
which downloads the file through a temporary URL directly from the
underlying S3 storage.
#### --acd-upload-wait-time=TIME ####
Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This
controls the time rclone waits - 2 minutes by default. You might want
to increase the time if you are having problems with very big files.
Upload with the `-v` flag for more info.
### Limitations ###
Note that Amazon cloud drive is case sensitive so you can't have a
Note that Amazon Drive is case insensitive so you can't have a
file called "Hello.doc" and one called "hello.doc".
Amazon cloud drive has rate limiting so you may notice errors in the
Amazon Drive has rate limiting so you may notice errors in the
sync (429 errors). rclone will automatically retry the sync up to 3
times by default (see `--retries` flag) which should hopefully work
around this problem.
Amazon Drive has an internal limit of file sizes that can be uploaded
to the service. This limit is not officially published, but all files
larger than this will fail.
At the time of writing (Jan 2016) this is in the area of 50GB per
file, so larger files are likely to fail.
Unfortunately there is no way for rclone to see that this failure is
because of file size, so it will retry the operation as it would any
other failure. To avoid this problem, use the `--max-size=50GB` option
to limit the maximum size of uploaded files.

View File

@@ -1,7 +1,7 @@
---
title: "Authors"
description: "Rclone Authors and Contributors"
date: "2015-09-28"
date: "2016-04-22"
---
Authors
@@ -18,3 +18,22 @@ Contributors
* Colin Nicholson <colin@colinn.com>
* Klaus Post <klauspost@gmail.com>
* Sergey Tolmachev <tolsi.ru@gmail.com>
* Adriano Aurélio Meirelles <adriano@atinge.com>
* C. Bess <cbess@users.noreply.github.com>
* Dmitry Burdeev <dibu28@gmail.com>
* Joseph Spurrier <github@josephspurrier.com>
* Björn Harrtell <bjorn@wololo.org>
* Xavier Lucas <xavier.lucas@corp.ovh.com>
* Werner Beroux <werner@beroux.com>
* Brian Stengaard <brian@stengaard.eu>
* Jakub Gedeon <jgedeon@sofi.com>
* Jim Tittsler <jwt@onjapan.net>
* Michal Witkowski <michal@improbable.io>
* Fabian Ruff <fabian.ruff@sap.com>
* Leigh Klotz <klotz@quixey.com>
* Romain Lapray <lapray.romain@gmail.com>
* Justin R. Wilson <jrw972@gmail.com>
* Antonio Messina <antonio.s.messina@gmail.com>
* Stefan G. Weichinger <office@oops.co.at>
* Per Cederberg <cederberg@gmail.com>
* Radek Šenfeld <rush@logic.cz>

248
docs/content/b2.md Normal file
View File

@@ -0,0 +1,248 @@
---
title: "B2"
description: "Backblaze B2"
date: "2016-06-15"
---
<i class="fa fa-fire"></i>Backblaze B2
----------------------------------------
B2 is [Backblaze's cloud storage system](https://www.backblaze.com/b2/).
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`.
Here is an example of making a b2 configuration. First run
rclone config
This will guide you through an interactive setup process. You will
need your account number (a short hex number) and key (a long hex
number) which you can get from the b2 control panel.
```
No remotes found - make a new one
n) New remote
q) Quit config
n/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
5 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
6 / Google Drive
\ "drive"
7 / Hubic
\ "hubic"
8 / Local Disk
\ "local"
9 / Microsoft OneDrive
\ "onedrive"
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
11 / Yandex Disk
\ "yandex"
Storage> 3
Account ID
account> 123456789abc
Application Key
key> 0123456789abcdef0123456789abcdef0123456789
Endpoint for the service - leave blank normally.
endpoint>
Remote config
--------------------
[remote]
account = 123456789abc
key = 0123456789abcdef0123456789abcdef0123456789
endpoint =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
This remote is called `remote` and can now be used like this
See all buckets
rclone lsd remote:
Make a new bucket
rclone mkdir remote:bucket
List the contents of a bucket
rclone ls remote:bucket
Sync `/home/local/directory` to the remote bucket, deleting any
excess files in the bucket.
rclone sync /home/local/directory remote:bucket
### Modified time ###
The modified time is stored as metadata on the object as
`X-Bz-Info-src_last_modified_millis` as milliseconds since 1970-01-01
in the Backblaze standard. Other tools should be able to use this as
a modified time.
Modified times are used in syncing and are fully supported except in
the case of updating a modification time on an existing object. In
this case the object will be uploaded again as B2 doesn't have an API
method to set the modification time independent of doing an upload.
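For illustration, here is a minimal Go sketch of converting to and from the millisecond value described above. Only the header name and the milliseconds-since-1970 encoding come from this page; the function names are made up for the example.
```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// toB2Millis renders a time as milliseconds since 1970-01-01, the
// format stored in X-Bz-Info-src_last_modified_millis.
func toB2Millis(t time.Time) string {
	return strconv.FormatInt(t.UnixNano()/int64(time.Millisecond), 10)
}

// fromB2Millis does the reverse conversion.
func fromB2Millis(s string) (time.Time, error) {
	ms, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(ms/1000, (ms%1000)*int64(time.Millisecond)), nil
}

func main() {
	now := time.Now()
	fmt.Println("X-Bz-Info-src_last_modified_millis:", toB2Millis(now))
}
```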
### SHA1 checksums ###
The SHA1 checksums of the files are checked on upload and download and
will be used in the syncing process.
Large files which are uploaded in chunks will store their SHA1 on the
object as `X-Bz-Info-large_file_sha1` as recommended by Backblaze.
### Transfers ###
Backblaze recommends that you do lots of transfers simultaneously for
maximum speed. In tests from my SSD equipped laptop the optimum
setting is about `--transfers 32` though higher numbers may be used
for a slight speed improvement. The optimum number for you may vary
depending on your hardware, how big the files are, how much you want
to load your computer, etc. The default of `--transfers 4` is
definitely too low for Backblaze B2 though.
Note that uploading big files (bigger than 200 MB by default) will use
a 96 MB RAM buffer by default. There can be at most `--transfers` of
these in use at any moment, so this sets the upper limit on the memory
used.
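As a rough worked example of that memory bound (illustrative only, using the default 96 MB chunk buffer mentioned above):
```go
package main

import "fmt"

func main() {
	// One chunk buffer per transfer, per the note above.
	const chunkMB = 96 // default chunk buffer size in MB
	transfers := 32    // e.g. --transfers 32 as suggested for B2
	fmt.Printf("up to ~%d MB of upload buffers with --transfers %d\n",
		chunkMB*transfers, transfers)
}
```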
### Versions ###
When rclone uploads a new version of a file it creates a [new version
of it](https://www.backblaze.com/b2/docs/file_versions.html).
Likewise when you delete a file, the old version will still be
available.
Old versions of files are visible using the `--b2-versions` flag.
If you wish to remove all the old versions then you can use the
`rclone cleanup remote:bucket` command which will delete all the old
versions of files, leaving the current ones intact. You can also
supply a path and only old versions under that path will be deleted,
eg `rclone cleanup remote:bucket/path/to/stuff`.
When you `purge` a bucket, the current and the old versions will be
deleted then the bucket will be deleted.
However `delete` will cause the current versions of the files to
become hidden old versions.
Here is a session showing the listing and retrieval of an old
version followed by a `cleanup` of the old versions.
Show current version and all the versions with `--b2-versions` flag.
```
$ rclone -q ls b2:cleanup-test
9 one.txt
$ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt
8 one-v2016-07-04-141032-000.txt
16 one-v2016-07-04-141003-000.txt
15 one-v2016-07-02-155621-000.txt
```
Retrieve an old version
```
$ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
$ ls -l /tmp/one-v2016-07-04-141003-000.txt
-rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt
```
Clean up all the old versions and show that they've gone.
```
$ rclone -q cleanup b2:cleanup-test
$ rclone -q ls b2:cleanup-test
9 one.txt
$ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt
```
### Specific options ###
Here are the command line options specific to this cloud storage
system.
#### --b2-chunk-size=SIZE ####
When uploading large files chunk the file into this size. Note that
these chunks are buffered in memory and there may be a maximum of
`--transfers` chunks in progress at once. 100,000,000 bytes is the
minimum size (default 96M).
#### --b2-upload-cutoff=SIZE ####
Cutoff for switching to chunked upload (default 190.735 MiB == 200
MB). Files above this size will be uploaded in chunks of
`--b2-chunk-size`.
This value should be set no larger than 4.657GiB (== 5GB) as this is
the largest file size that can be uploaded.
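The figures above mix decimal and binary units; a quick check of the quoted conversions (illustrative only):
```go
package main

import "fmt"

func main() {
	// Backblaze's limits are decimal (MB/GB); rclone quotes them in
	// binary units (MiB/GiB), hence the odd-looking numbers above.
	fmt.Printf("200 MB = %.3f MiB\n", 200e6/(1<<20)) // 190.735
	fmt.Printf("5 GB   = %.3f GiB\n", 5e9/(1<<30))   // 4.657
}
```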
#### --b2-test-mode=FLAG ####
This is for debugging purposes only.
Setting FLAG to one of the strings below will cause b2 to return
specific errors for debugging purposes.
* `fail_some_uploads`
* `expire_some_account_authorization_tokens`
* `force_cap_exceeded`
These will be set in the `X-Bz-Test-Mode` header which is documented
in the [b2 integrations
checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).
#### --b2-versions ####
When set rclone will show and act on older versions of files. For example
Listing without `--b2-versions`
```
$ rclone -q ls b2:cleanup-test
9 one.txt
```
And with
```
$ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt
8 one-v2016-07-04-141032-000.txt
16 one-v2016-07-04-141003-000.txt
15 one-v2016-07-02-155621-000.txt
```
Showing that the current version is unchanged but older versions can
be seen. These have the UTC date that they were uploaded to the
server to the nearest millisecond appended to them.
Note that when using `--b2-versions` no file write operations are
permitted, so you can't upload files or delete them.
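If you want to recover the upload time from a versioned name programmatically, a sketch along these lines works for the names shown above. The suffix layout (UTC date and time plus milliseconds after `-v`) is inferred from the example listings rather than taken from rclone's source, so treat it as an assumption.
```go
package main

import (
	"fmt"
	"path"
	"strconv"
	"strings"
	"time"
)

// versionTime extracts the upload time from a name such as
// "one-v2016-07-04-141003-000.txt".
func versionTime(name string) (time.Time, error) {
	name = strings.TrimSuffix(name, path.Ext(name))
	i := strings.LastIndex(name, "-v")
	if i < 0 {
		return time.Time{}, fmt.Errorf("no version suffix in %q", name)
	}
	stamp := name[i+2:] // e.g. "2016-07-04-141003-000"
	const layout = "2006-01-02-150405"
	if len(stamp) != len(layout)+4 {
		return time.Time{}, fmt.Errorf("unexpected version suffix in %q", name)
	}
	t, err := time.ParseInLocation(layout, stamp[:len(layout)], time.UTC)
	if err != nil {
		return time.Time{}, err
	}
	ms, err := strconv.Atoi(stamp[len(layout)+1:])
	if err != nil {
		return time.Time{}, err
	}
	return t.Add(time.Duration(ms) * time.Millisecond), nil
}

func main() {
	t, err := versionTime("one-v2016-07-04-141003-000.txt")
	fmt.Println(t, err)
}
```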

View File

@@ -1,12 +1,252 @@
---
title: "Documentation"
description: "Rclone Changelog"
date: "2015-10-03"
date: "2016-08-24"
---
Changelog
---------
* v1.33 - 2016-08-24
* New Features
* Implement encryption
* data encrypted in NACL secretbox format
* with optional file name encryption
* New commands
* rclone mount - implements FUSE mounting of remotes (EXPERIMENTAL)
* works on Linux, FreeBSD and OS X (need testers for the last 2!)
* rclone cat - outputs remote file or files to the terminal
* rclone genautocomplete - command to make a bash completion script for rclone
* Editing a remote using `rclone config` now goes through the wizard
* Compile with go 1.7 - this fixes rclone on macOS Sierra and on 386 processors
* Use cobra for sub commands and docs generation
* drive
* Document how to make your own client_id
* s3
* User-configurable Amazon S3 ACL (thanks Radek Šenfeld)
* b2
* Fix stats accounting for upload - no more jumping to 100% done
* On cleanup delete hide marker if it is the current file
* New B2 API endpoint (thanks Per Cederberg)
* Set maximum backoff to 5 Minutes
* onedrive
* Fix URL escaping in file names - eg uploading files with `+` in them.
* amazon cloud drive
* Fix token expiry during large uploads
* Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
* local
* Fix filenames with invalid UTF-8 not being uploaded
* Fix problem with some UTF-8 characters on OS X
* v1.32 - 2016-07-13
* Backblaze B2
* Fix upload of large files not in root
* v1.31 - 2016-07-13
* New Features
* Reduce memory on sync by about 50%
* Implement --no-traverse flag to stop copy traversing the destination remote.
* This can be used to reduce memory usage down to the smallest possible.
* Useful to copy a small number of files into a large destination folder.
* Implement cleanup command for emptying trash / removing old versions of files
* Currently B2 only
* Single file handling improved
* Now copied with --files-from
* Automatically sets --no-traverse when copying a single file
* Info on installing with ansible - thanks Stefan Weichinger
* Implement --no-update-modtime flag to stop rclone fixing the remote modified times.
* Bug Fixes
* Fix move command - stop it running for overlapping Fses - this was causing data loss.
* Local
* Fix incomplete hashes - this was causing problems for B2.
* Amazon Drive
* Rename Amazon Cloud Drive to Amazon Drive - no changes to config file needed.
* Swift
* Add support for non-default project domain - thanks Antonio Messina.
* S3
* Add instructions on how to use rclone with minio.
* Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions.
* Skip setting the modified time for objects > 5GB as it isn't possible.
* Backblaze B2
* Add --b2-versions flag so old versions can be listed and retrieved.
* Treat 403 errors (eg cap exceeded) as fatal.
* Implement cleanup command for deleting old file versions.
* Make error handling compliant with B2 integrations notes.
* Fix handling of token expiry.
* Implement --b2-test-mode to set `X-Bz-Test-Mode` header.
* Set cutoff for chunked upload to 200MB as per B2 guidelines.
* Make upload multi-threaded.
* Dropbox
* Don't retry 461 errors.
* v1.30 - 2016-06-18
* New Features
* Directory listing code reworked for more features and better error reporting (thanks to Klaus Post for help). This enables
* Directory include filtering for efficiency
* --max-depth parameter
* Better error reporting
* More to come
* Retry more errors
* Add --ignore-size flag - for uploading images to onedrive
* Log -v output to stdout by default
* Display the transfer stats in more human readable form
* Make 0 size files specifiable with `--max-size 0b`
* Add `b` suffix so we can specify bytes in --bwlimit, --min-size etc
* Use "password:" instead of "password>" prompt - thanks Klaus Post and Leigh Klotz
* Bug Fixes
* Fix retry doing one too many retries
* Local
* Fix problems with OS X and UTF-8 characters
* Amazon Drive
* Check a file exists before uploading to help with 408 Conflict errors
* Reauth on 401 errors - this has been causing a lot of problems
* Work around spurious 403 errors
* Restart directory listings on error
* Google Drive
* Check a file exists before uploading to help with duplicates
* Fix retry of multipart uploads
* Backblaze B2
* Implement large file uploading
* S3
* Add AES256 server-side encryption - thanks Justin R. Wilson
* Google Cloud Storage
* Make sure we don't use conflicting content types on upload
* Add service account support - thanks Michal Witkowski
* Swift
* Add auth version parameter
* Add domain option for openstack (v3 auth) - thanks Fabian Ruff
* v1.29 - 2016-04-18
* New Features
* Implement `-I, --ignore-times` for unconditional upload
* Improve `dedupe` command
* Now removes identical copies without asking
* Now obeys `--dry-run`
* Implement `--dedupe-mode` for non interactive running
* `--dedupe-mode interactive` - interactive the default.
* `--dedupe-mode skip` - removes identical files then skips anything left.
* `--dedupe-mode first` - removes identical files then keeps the first one.
* `--dedupe-mode newest` - removes identical files then keeps the newest one.
* `--dedupe-mode oldest` - removes identical files then keeps the oldest one.
* `--dedupe-mode rename` - removes identical files then renames the rest to be different.
* Bug fixes
* Make rclone check obey the `--size-only` flag.
* Use "application/octet-stream" if discovered mime type is invalid.
* Fix missing "quit" option when there are no remotes.
* Google Drive
* Increase default chunk size to 8 MB - increases upload speed of big files
* Speed up directory listings and make more reliable
* Add missing retries for Move and DirMove - increases reliability
* Preserve mime type on file update
* Backblaze B2
* Enable mod time syncing
* This means that B2 will now check modification times
* It will upload new files to update the modification times
* (there isn't an API to just set the mod time.)
* If you want the old behaviour use `--size-only`.
* Update API to new version
* Fix parsing of mod time when not in metadata
* Swift/Hubic
* Don't return an MD5SUM for static large objects
* S3
* Fix uploading files bigger than 50GB
* v1.28 - 2016-03-01
* New Features
* Configuration file encryption - thanks Klaus Post
* Improve `rclone config` adding more help and making it easier to understand
* Implement `-u`/`--update` so creation times can be used on all remotes
* Implement `--low-level-retries` flag
* Optionally disable gzip compression on downloads with `--no-gzip-encoding`
* Bug fixes
* Don't make directories if `--dry-run` set
* Fix and document the `move` command
* Fix redirecting stderr on unix-like OSes when using `--log-file`
* Fix `delete` command to wait until all finished - fixes missing deletes.
* Backblaze B2
* Use one upload URL per go routine - fixes `more than one upload using auth token`
* Add pacing, retries and reauthentication - fixes token expiry problems
* Upload without using a temporary file from local (and remotes which support SHA1)
* Fix reading metadata for all files when it shouldn't have been
* Drive
* Fix listing drive documents at root
* Disable copy and move for Google docs
* Swift
* Fix uploading of chunked files with non ASCII characters
* Allow setting of `storage_url` in the config - thanks Xavier Lucas
* S3
* Allow IAM role and credentials from environment variables - thanks Brian Stengaard
* Allow low privilege users to use S3 (check if directory exists during Mkdir) - thanks Jakub Gedeon
* Amazon Drive
* Retry on more things to make directory listings more reliable
* v1.27 - 2016-01-31
* New Features
* Easier headless configuration with `rclone authorize`
* Add support for multiple hash types - we now check SHA1 as well as MD5 hashes.
* `delete` command which does obey the filters (unlike `purge`)
* `dedupe` command to deduplicate a remote. Useful with Google Drive.
* Add `--ignore-existing` flag to skip all files that exist on destination.
* Add `--delete-before`, `--delete-during`, `--delete-after` flags.
* Add `--memprofile` flag to debug memory use.
* Warn the user about files with same name but different case
* Make `--include` rules add their implicit exclude * at the end of the filter list
* Deprecate compiling with go1.3
* Amazon Drive
* Fix download of files > 10 GB
* Fix directory traversal ("Next token is expired") for large directory listings
* Remove 409 conflict from error codes we will retry - stops very long pauses
* Backblaze B2
* SHA1 hashes now checked by rclone core
* Drive
* Add `--drive-auth-owner-only` to only consider files owned by the user - thanks Björn Harrtell
* Export Google documents
* Dropbox
* Make file exclusion error controllable with -q
* Swift
* Fix upload from unprivileged user.
* S3
* Fix updating of mod times of files with `+` in.
* Local
* Add local file system option to disable UNC on Windows.
* v1.26 - 2016-01-02
* New Features
* Yandex storage backend - thank you Dmitry Burdeev ("dibu")
* Implement Backblaze B2 storage backend
* Add --min-age and --max-age flags - thank you Adriano Aurélio Meirelles
* Make ls/lsl/md5sum/size/check obey includes and excludes
* Fixes
* Fix crash in http logging
* Upload releases to github too
* Swift
* Fix sync for chunked files
* One Drive
* Re-enable server side copy
* Don't mask HTTP error codes with JSON decode error
* S3
* Fix corrupting Content-Type on mod time update (thanks Joseph Spurrier)
* v1.25 - 2015-11-14
* New features
* Implement Hubic storage system
* Fixes
* Fix deletion of some excluded files without --delete-excluded
* This could have deleted files unexpectedly on sync
* Always check first with `--dry-run`!
* Swift
* Stop SetModTime losing metadata (eg X-Object-Manifest)
* This could have caused data loss for files > 5GB in size
* Use ContentType from Object to avoid lookups in listings
* One Drive
* disable server side copy as it seems to be broken at Microsoft
* v1.24 - 2015-11-07
* New features
* Add support for Microsoft One Drive
* Add `--no-check-certificate` option to disable server certificate verification
* Add async readahead buffer for faster transfer of big files
* Fixes
* Allow spaces in remotes and check remote names for validity at creation time
* Allow '&' and disallow ':' in Windows filenames.
* Swift
* Ignore directory marker objects where appropriate - allows working with Hubic
* Don't delete the container if fs wasn't at root
* S3
* Don't delete the bucket if fs wasn't at root
* Google Cloud Storage
* Don't delete the bucket if fs wasn't at root
* v1.23 - 2015-10-03
* New features
* Implement `rclone size` for measuring remotes
@@ -29,13 +269,13 @@ Changelog
* Make lsl output times in localtime
* Fixes
* Fix allowing user to override credentials again in Drive, GCS and ACD
* Amazon Cloud Drive
* Amazon Drive
* Implement compliant pacing scheme
* Google Drive
* Make directory reads concurrent for increased speed.
* v1.20 - 2015-09-15
* New features
* Amazon Cloud Drive support
* Amazon Drive support
* Oauth support redone - fix many bugs and improve usability
* Use "golang.org/x/oauth2" as oauth libary of choice
* Improve oauth usability for smoother initial signup

View File

@@ -0,0 +1,143 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone"
slug: rclone
url: /commands/rclone/
---
## rclone
Sync files and directories to and from local and remote object stores - v1.33-DEV
### Synopsis
Rclone is a command line program to sync files and directories to and
from various cloud storage systems, such as:
* Google Drive
* Amazon S3
* Openstack Swift / Rackspace cloud files / Memset Memstore
* Dropbox
* Google Cloud Storage
* Amazon Drive
* Microsoft One Drive
* Hubic
* Backblaze B2
* Yandex Disk
* The local filesystem
Features
* MD5/SHA1 hashes checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* Copy mode to just copy new/changed files
* Sync (one way) mode to make a directory identical
* Check mode to check for file hash equality
* Can sync to and from network, eg two different cloud accounts
See the home page for installation, usage, documentation, changelog
and configuration walkthroughs.
* http://rclone.org/
```
rclone
```
### Options
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
-V, --version Print the version number
```
### SEE ALSO
* [rclone authorize](/commands/rclone_authorize/) - Remote authorization.
* [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout.
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied
* [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate files delete/rename them.
* [rclone delete](/commands/rclone_delete/) - Remove the contents of path.
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output bash completion script for rclone.
* [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied.
* [rclone ls](/commands/rclone_ls/) - List all the objects in the path with size and path.
* [rclone lsd](/commands/rclone_lsd/) - List all directories/containers/buckets in the path.
* [rclone lsl](/commands/rclone_lsl/) - List all the objects in the path with modification time, size and path.
* [rclone md5sum](/commands/rclone_md5sum/) - Produces an md5sum file for all the objects in the path.
* [rclone mkdir](/commands/rclone_mkdir/) - Make the path if it doesn't already exist.
* [rclone mount](/commands/rclone_mount/) - Mount the remote as a mountpoint. **EXPERIMENTAL**
* [rclone move](/commands/rclone_move/) - Move files from source to dest.
* [rclone purge](/commands/rclone_purge/) - Remove the path and all of its contents.
* [rclone rmdir](/commands/rclone_rmdir/) - Remove the path if empty.
* [rclone sha1sum](/commands/rclone_sha1sum/) - Produces an sha1sum file for all the objects in the path.
* [rclone size](/commands/rclone_size/) - Prints the total size and number of objects in remote:path.
* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only.
* [rclone version](/commands/rclone_version/) - Show the version number.
###### Auto generated by spf13/cobra on 24-Aug-2016

View File

@@ -0,0 +1,93 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
---
## rclone authorize
Remote authorization.
### Synopsis
Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.
```
rclone authorize
```
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016

View File

@@ -0,0 +1,96 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
---
## rclone check
Checks the files in the source and destination match.
### Synopsis
Checks the files in the source and destination match. It
compares sizes and MD5SUMs and prints a report of files which
don't match. It doesn't alter the source or destination.
`--size-only` may be used to only compare the sizes, not the MD5SUMs.
```
rclone check source:path dest:path
```
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016

View File

@@ -0,0 +1,93 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
---
## rclone cleanup
Clean up the remote if possible
### Synopsis
Clean up the remote if possible. Empty the trash or delete old file
versions. Not supported by all remotes.
```
rclone cleanup remote:path
```
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016

View File

@@ -0,0 +1,90 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
---
## rclone config
Enter an interactive configuration session.
### Synopsis
Enter an interactive configuration session.
```
rclone config
```
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016

View File

@@ -0,0 +1,129 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
---
## rclone copy
Copy files from source to dest, skipping already copied
### Synopsis
Copy the source to the destination. Doesn't transfer
unchanged files, testing by size and modification time or
MD5SUM. Doesn't delete files from the destination.
Note that it is always the contents of the directory that is synced,
not the directory so when source:path is a directory, it's the
contents of source:path that are copied, not the directory name and
contents.
If dest:path doesn't exist, it is created and the source:path contents
go there.
For example
rclone copy source:sourcepath dest:destpath
Let's say there are two files in sourcepath
sourcepath/one.txt
sourcepath/two.txt
This copies them to
destpath/one.txt
destpath/two.txt
Not to
destpath/sourcepath/one.txt
destpath/sourcepath/two.txt
If you are familiar with `rsync`, rclone always works as if you had
written a trailing / - meaning "copy the contents of this directory".
This applies to all commands and whether you are talking about the
source or destination.
See the `--no-traverse` option for controlling whether rclone lists
the destination directory or not.
```
rclone copy source:path dest:path
```
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transfering
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016


@@ -0,0 +1,171 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone dedupe"
slug: rclone_dedupe
url: /commands/rclone_dedupe/
---
## rclone dedupe
Interactively find duplicate files and delete/rename them.
### Synopsis
By default `dedupe` interactively finds duplicate files and offers to
delete all but one or rename them to be different. This is only useful
with Google Drive, which can have duplicate file names.
The `dedupe` command will delete all but one of any identical (same
md5sum) files it finds without confirmation. This means that for most
duplicated files the `dedupe` command will not be interactive. You
can use `--dry-run` to see what would happen without doing anything.
Here is an example run.
Before - with duplicates
$ rclone lsl drive:dupes
6048320 2016-03-05 16:23:16.798000000 one.txt
6048320 2016-03-05 16:23:11.775000000 one.txt
564374 2016-03-05 16:23:06.731000000 one.txt
6048320 2016-03-05 16:18:26.092000000 one.txt
6048320 2016-03-05 16:22:46.185000000 two.txt
1744073 2016-03-05 16:22:38.104000000 two.txt
564374 2016-03-05 16:22:52.118000000 two.txt
Now the `dedupe` session
$ rclone dedupe drive:dupes
2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
one.txt: Found 4 duplicates - deleting identical copies
one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36")
one.txt: 2 duplicates remain
1: 6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
2: 564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> k
Enter the number of the file to keep> 1
one.txt: Deleted 1 extra copies
two.txt: Found 3 duplicates - deleting identical copies
two.txt: 3 duplicates remain
1: 564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
2: 6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
3: 1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> r
two-1.txt: renamed from: two.txt
two-2.txt: renamed from: two.txt
two-3.txt: renamed from: two.txt
The result being
$ rclone lsl drive:dupes
6048320 2016-03-05 16:23:16.798000000 one.txt
564374 2016-03-05 16:22:52.118000000 two-1.txt
6048320 2016-03-05 16:22:46.185000000 two-2.txt
1744073 2016-03-05 16:22:38.104000000 two-3.txt
Dedupe can be run non-interactively using the `--dedupe-mode` flag or by supplying an extra parameter with the same value:
* `--dedupe-mode interactive` - interactive as above.
* `--dedupe-mode skip` - removes identical files then skips anything left.
* `--dedupe-mode first` - removes identical files then keeps the first one.
* `--dedupe-mode newest` - removes identical files then keeps the newest one.
* `--dedupe-mode oldest` - removes identical files then keeps the oldest one.
* `--dedupe-mode rename` - removes identical files then renames the rest to be different.
For example, to rename all the identically named photos in your Google Photos directory, do
rclone dedupe --dedupe-mode rename "drive:Google Photos"
Or
rclone dedupe rename "drive:Google Photos"
```
rclone dedupe [mode] remote:path
```
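The non-interactive modes combine with the global `--dry-run` flag described above, which makes it easy to preview a run first; the remote name in this sketch is illustrative.
```
# Preview which identical duplicates would be removed, keeping the newest copy of each name
rclone dedupe --dedupe-mode newest --dry-run drive:dupes
```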
### Options
```
--dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename.
```
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016


@@ -0,0 +1,107 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone delete"
slug: rclone_delete
url: /commands/rclone_delete/
---
## rclone delete
Remove the contents of path.
### Synopsis
Remove the contents of path. Unlike `purge` it obeys include/exclude
filters so can be used to selectively delete files.
For example, to delete all files bigger than 100 MBytes, first check what would be deleted (use either)
rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path
Then delete
rclone --min-size 100M delete remote:path
That reads "delete everything with a minimum size of 100 MB", i.e.
delete all files bigger than 100 MBytes.
```
rclone delete remote:path
```
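Because `delete` obeys the filters, the same pattern works with any of the filter flags listed below; here is a sketch using `--include` (the pattern and remote are illustrative):
```
# Check which backup files would be removed, then delete them
rclone --include "*.bak" --dry-run delete remote:path
rclone --include "*.bak" delete remote:path
```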
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016


@@ -0,0 +1,105 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone genautocomplete"
slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/
---
## rclone genautocomplete
Output bash completion script for rclone.
### Synopsis
Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default, so it will
probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete
Log out and log in again to use the autocompletion scripts, or source
them directly
. /etc/bash_completion
If you supply a command line argument the script will be written
there.
```
rclone genautocomplete [output_file]
```
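For example, to write the script to a location you can write to without sudo and source it straight away (the output path is illustrative):
```
rclone genautocomplete ~/rclone_completion.bash
. ~/rclone_completion.bash
```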
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016


@@ -0,0 +1,93 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone gendocs"
slug: rclone_gendocs
url: /commands/rclone_gendocs/
---
## rclone gendocs
Output markdown docs for rclone to the directory supplied.
### Synopsis
This writes markdown docs for the rclone commands to the directory
supplied. These are in a format suitable for hugo to render into the
rclone.org website.
```
rclone gendocs output_directory
```
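A typical invocation just names the output directory (the path below is illustrative):
```
rclone gendocs /tmp/rclone-docs
```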
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016


@@ -0,0 +1,90 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone ls"
slug: rclone_ls
url: /commands/rclone_ls/
---
## rclone ls
List all the objects in the path with size and path.
### Synopsis
List all the objects in the path with size and path.
```
rclone ls remote:path
```
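`ls` accepts the filter flags listed below, so it is useful for previewing what another command would act on; for instance (remote and size are illustrative):
```
# List only objects larger than 100 MBytes
rclone --min-size 100M ls remote:path
```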
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016


@@ -0,0 +1,90 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone lsd"
slug: rclone_lsd
url: /commands/rclone_lsd/
---
## rclone lsd
List all directories/containers/buckets in the path.
### Synopsis
List all directories/containers/buckets in the path.
```
rclone lsd remote:path
```
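For remotes that store data in containers or buckets, listing the root of the remote shows them (the remote name is illustrative):
```
rclone lsd remote:
```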
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016


@@ -0,0 +1,90 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone lsl"
slug: rclone_lsl
url: /commands/rclone_lsl/
---
## rclone lsl
List all the objects in the path with modification time, size and path.
### Synopsis
List all the objects in the path with modification time, size and path.
```
rclone lsl remote:path
```
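The output has the same layout as the listings in the `dedupe` example above: size, modification time and path on each line. An illustrative run (the remote, file names and sizes are made up):
```
$ rclone lsl remote:path
    60295 2016-06-25 18:55:40.814000000 photos/bexhill.jpg
     4024 2016-06-25 18:55:41.092000000 notes.txt
```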
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016


@@ -0,0 +1,93 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone md5sum"
slug: rclone_md5sum
url: /commands/rclone_md5sum/
---
## rclone md5sum
Produces an md5sum file for all the objects in the path.
### Synopsis
Produces an md5sum file for all the objects in the path. This
is in the same format as the standard md5sum tool produces.
```
rclone md5sum remote:path
```
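Since the output is in the standard `md5sum` format it can be saved and later verified against a local copy of the files with the normal tool; a sketch (paths are illustrative):
```
rclone md5sum remote:path > remote-md5sums.txt
# later, from inside a local copy of the same tree
md5sum -c remote-md5sums.txt
```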
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016


@@ -0,0 +1,90 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone mkdir"
slug: rclone_mkdir
url: /commands/rclone_mkdir/
---
## rclone mkdir
Make the path if it doesn't already exist.
### Synopsis
Make the path if it doesn't already exist.
```
rclone mkdir remote:path
```
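A quick sketch — make a directory and confirm it appears with `lsd` (the names are illustrative):
```
rclone mkdir remote:backup
rclone lsd remote:
```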
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016


@@ -0,0 +1,106 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone move"
slug: rclone_move
url: /commands/rclone_move/
---
## rclone move
Move files from source to dest.
### Synopsis
Moves the contents of the source directory to the destination
directory. Rclone will error if the source and destination overlap.
If no filters are in use and if possible, this will server side move
`source:path` into `dest:path`. After this `source:path` will no
longer exist.
Otherwise for each file in `source:path` selected by the filters (if
any) this will move it into `dest:path`. If possible a server side
move will be used, otherwise it will copy it (server side if possible)
into `dest:path` then delete the original (if no errors on copy) in
`source:path`.
**Important**: Since this can cause data loss, test first with the
--dry-run flag.
```
rclone move source:path dest:path
```
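Following the note above, a cautious pattern is to preview the move with `--dry-run` and only then run it for real (the remote names are illustrative):
```
rclone --dry-run move source:path dest:path
rclone move source:path dest:path
```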
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016


@@ -0,0 +1,94 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone purge"
slug: rclone_purge
url: /commands/rclone_purge/
---
## rclone purge
Remove the path and all of its contents.
### Synopsis
Remove the path and all of its contents. Note that this does not obey
include/exclude filters - everything will be removed. Use `delete` if
you want to selectively delete files.
```
rclone purge remote:path
```
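Because filters are ignored and everything under the path is removed, it is worth previewing with the global `--dry-run` flag before running for real (the remote and path are illustrative):
```
rclone --dry-run purge remote:old-backups
rclone purge remote:old-backups
```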
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016


@@ -0,0 +1,92 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone rmdir"
slug: rclone_rmdir
url: /commands/rclone_rmdir/
---
## rclone rmdir
Remove the path if empty.
### Synopsis
Remove the path. Note that you can't remove a path with
objects in it; use purge for that.
```
rclone rmdir remote:path
```
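
For illustration only (the remote and directory names below are placeholders), removing an empty directory, and falling back to `purge` when the directory still contains objects, might look like this:

```
# remove an empty directory - this fails if it still contains objects
rclone rmdir remote:old/empty-dir

# purge removes a directory together with everything in it
rclone purge remote:old/dir-with-files
```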
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016


@@ -0,0 +1,93 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone sha1sum"
slug: rclone_sha1sum
url: /commands/rclone_sha1sum/
---
## rclone sha1sum
Produces an sha1sum file for all the objects in the path.
### Synopsis
Produces an sha1sum file for all the objects in the path. This
is in the same format as the standard sha1sum tool produces.
```
rclone sha1sum remote:path
```
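
As a sketch (the remote path and output file name are illustrative), the output can be captured to a file and, once the objects have been downloaded locally, checked with the standard `sha1sum` tool:

```
# write the SHA1 hash of every object under remote:path to a local file
rclone sha1sum remote:path > SHA1SUMS

# verify previously downloaded copies against that list
sha1sum -c SHA1SUMS
```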
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016


@@ -0,0 +1,90 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone size"
slug: rclone_size
url: /commands/rclone_size/
---
## rclone size
Prints the total size and number of objects in remote:path.
### Synopsis
Prints the total size and number of objects in remote:path.
```
rclone size remote:path
```
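
A minimal example (the remote name is illustrative); the command walks remote:backups and prints the number of objects it found and their combined size:

```
# report how many objects live under the path and how much space they use
rclone size remote:backups
```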
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016


@@ -0,0 +1,109 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone sync"
slug: rclone_sync
url: /commands/rclone_sync/
---
## rclone sync
Make source and dest identical, modifying destination only.
### Synopsis
Sync the source to the destination, changing the destination
only. Doesn't transfer unchanged files, testing by size and
modification time or MD5SUM. Destination is updated to match
source, including deleting files if necessary.
**Important**: Since this can cause data loss, test first with the
`--dry-run` flag to see exactly what would be copied and deleted.
Note that files in the destination won't be deleted if there were any
errors at any point.
It is always the contents of the directory that is synced, not the
directory itself, so when source:path is a directory, it's the contents of
source:path that are copied, not the directory name and contents. See the
extended explanation in the `copy` command above if unsure.
If dest:path doesn't exist, it is created and the source:path contents
go there.
```
rclone sync source:path dest:path
```
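
Since sync deletes files on the destination, a cautious workflow (paths illustrative) is to preview the changes with `--dry-run` first and only then run the real sync:

```
# preview: show what would be copied and deleted without changing anything
rclone sync --dry-run /home/user/photos remote:photos

# run the real sync once the preview looks right
rclone sync /home/user/photos remote:photos
```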
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016


@@ -0,0 +1,90 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone version"
slug: rclone_version
url: /commands/rclone_version/
---
## rclone version
Show the version number.
### Synopsis
Show the version number.
```
rclone version
```
### Options inherited from parent commands
```
--acd-templink-threshold int Files >= this size will be downloaded via their tempLink.
--acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
--ask-password Allow prompt for password for encrypted configuration. (default true)
--b2-chunk-size int Upload chunk size. Must fit in memory.
--b2-test-mode string A flag string for X-Bz-Test-Mode header.
--b2-upload-cutoff int Cutoff for switching to chunked upload
--b2-versions Include old versions in directory listings.
--bwlimit int Bandwidth limit in kBytes/s, or use suffix b|k|M|G
--checkers int Number of checkers to run in parallel. (default 8)
-c, --checksum Skip based on checksum & size, not mod-time & size
--config string Config file. (default "/home/ncw/.rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--cpuprofile string Write cpu profile to file
--delete-after When synchronizing, delete files on destination after transferring
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
--drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
--drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k.
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
--drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-upload-cutoff int Cutoff for switching to chunked upload
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M.
-n, --dry-run Do a trial run with no permanent changes
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-filters Dump the filters to the output
--dump-headers Dump HTTP headers - may contain sensitive info
--exclude string Exclude files matching pattern
--exclude-from string Read exclude patterns from file
--files-from string Read list of source-file names from file
-f, --filter string Add a file-filtering rule
--filter-from string Read filtering patterns from a file
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping; use mod-time or checksum.
-I, --ignore-times Don't skip files that match size and time - transfer all files
--include string Include files matching pattern
--include-from string Read include patterns from file
--log-file string Log everything to this file
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
--max-size int Don't transfer any file larger than this in k or suffix b|k|M|G
--memprofile string Write memory profile to file
--min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
--min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G
--modify-window duration Max time diff to be considered the same (default 1ns)
--no-check-certificate Do not verify the server SSL certificate. Insecure.
--no-gzip-encoding Don't set Accept-Encoding: gzip.
--no-traverse Don't traverse destination file system on copy.
--no-update-modtime Don't update destination mod-time if files identical.
--onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k.
--onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB
-q, --quiet Print as little stuff as possible
--retries int Retry operations this many times if they fail (default 3)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval to print stats (0 to disable) (default 1m0s)
--swift-chunk-size int Above this size files will be chunked into a _segments container.
--timeout duration IO idle timeout (default 5m0s)
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
-v, --verbose Print lots more stuff
```
### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV
###### Auto generated by spf13/cobra on 24-Aug-2016

Some files were not shown because too many files have changed in this diff.