Mirror of https://github.com/rclone/rclone.git (synced 2026-01-04 09:33:36 +00:00)
Compare commits: 679 commits (SHA-only listing, from 3996bbb8cb down to bc221fb27e; the author, date and message columns were empty in this capture)
.gitignore (vendored, 6 lines changed)

@@ -1,6 +1,6 @@
*~
*.pyc
test-env*
_junk/
rclone
upload
rclonetest/rclonetest
build
docs/public
.travis.yml (new file, 21 lines)

@@ -0,0 +1,21 @@
language: go
sudo: false
osx_image: xcode7.3

os:
  - linux
  - osx

go:
  - 1.5.4
  - 1.6.3
  - 1.7

# - tip

install:
  - make build_dep

script:
  - make check
  - make quicktest
CONTRIBUTING.md (new file, 162 lines)

@@ -0,0 +1,162 @@
# Contributing to rclone #

This is a short guide on how to contribute things to rclone.

## Reporting a bug ##

Bug reports are welcome. Check that your issue exists with the latest
version first. Please add when submitting:

* Rclone version (eg output from `rclone -V`)
* Which OS you are using and how many bits (eg Windows 7, 64 bit)
* The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
* A log of the command with the `-v` flag (eg output from `rclone -v copy /tmp remote:tmp`)
* If the log contains secrets then edit the file with a text editor first to obscure them

## Submitting a pull request ##

If you find a bug that you'd like to fix, or a new feature that you'd
like to implement then please submit a pull request via Github.

If it is a big feature then make an issue first so it can be discussed.

You'll need a Go environment set up with GOPATH set. See [the Go
getting started docs](https://golang.org/doc/install) for more info.

First in your web browser press the fork button on [rclone's Github
page](https://github.com/ncw/rclone).

Now in your terminal

    go get github.com/ncw/rclone
    cd $GOPATH/src/github.com/ncw/rclone
    git remote rename origin upstream
    git remote add origin git@github.com:YOURUSER/rclone.git

Make a branch to add your new feature

    git checkout -b my-new-feature

And get hacking.

When ready - run the unit tests for the code you changed

    go test -v

Note that you may need to make a test remote, eg `TestSwift` for some
of the unit tests.

Note the top level Makefile targets

* make check
* make test

Both of these will be run by Travis when you make a pull request but
you can do this yourself locally too.

Make sure you

* Add documentation for a new feature
* Add unit tests for a new feature
* squash commits down to one per feature
* rebase to master `git rebase master`

When you are done with that

    git push origin my-new-feature

Go to the Github website and click [Create pull
request](https://help.github.com/articles/creating-a-pull-request/).

Your patch will get reviewed and you might get asked to fix some stuff.

If so, then make the changes in the same branch, squash the commits,
rebase it to master then push it to Github with `--force`.

## Testing ##

rclone's tests are run from the go testing framework, so at the top
level you can run this to run all the tests.

    go test -v ./...

rclone contains a mixture of unit tests and integration tests.
Because it is difficult (and in some respects pointless) to test cloud
storage systems by mocking all their interfaces, rclone unit tests can
run against any of the backends. This is done by making specially
named remotes in the default config file.

If you wanted to test changes in the `drive` backend, then you would
need to make a remote called `TestDrive`.

You can then run the unit tests in the drive directory. These tests
are skipped if `TestDrive:` isn't defined.

    cd drive
    go test -v
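
A test remote is just a normally configured remote with the expected
name. As a sketch (the values below are placeholders, not working
credentials), the corresponding entry in `.rclone.conf` might look
like:

    [TestDrive]
    type = drive
    client_id =
    client_secret =
    token = {"AccessToken":"xxxx","RefreshToken":"xxxx","Expiry":"2014-03-16T13:57:58Z","Extra":null}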

You can then run the integration tests which test all of rclone's
operations. Normally these get run against the local file system,
but they can be run against any of the remotes.

    cd ../fs
    go test -v -remote TestDrive:
    go test -v -remote TestDrive: -subdir

If you want to run all the integration tests against all the remotes,
then run in that directory

    go run test_all.go

## Making a release ##

There are separate instructions for making a release in the RELEASE.md
file - doing the first few steps is useful before making a
contribution.

* go get -u -f -v ./...
* make check
* make test
* make tag

## Writing a new backend ##

Choose a name. The docs here will use `remote` as an example.

Note that in rclone terminology a file system backend is called a
remote or an fs.

Research

* Look at the interfaces defined in `fs/fs.go`
* Study one or more of the existing remotes

Getting going

* Create `remote/remote.go` (copy this from a similar fs) - a minimal skeleton is sketched below
* Add your fs to the imports in `fs/all/all.go`
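
As a rough sketch of the registration boilerplate (modelled on the
pattern used by existing backends, eg the Amazon Drive backend
reproduced later on this page; the `remote` name and stub error are
placeholders, and the full method set a backend must implement lives
in `fs/fs.go`):

    // Package remote is a skeleton rclone backend.
    package remote

    import (
        "errors"

        "github.com/ncw/rclone/fs"
    )

    // init registers the backend so it shows up in "rclone config".
    func init() {
        fs.Register(&fs.RegInfo{
            Name:        "remote",
            Description: "Example remote",
            NewFs:       NewFs,
        })
    }

    // NewFs constructs an Fs from a config name and root path. A real
    // backend returns a type implementing the interfaces in fs/fs.go;
    // this stub just errors out.
    func NewFs(name, root string) (fs.Fs, error) {
        return nil, errors.New("remote: not implemented")
    }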

Unit tests

* Create a config entry called `TestRemote` for the unit tests to use
* Add your fs to the end of `fstest/fstests/gen_tests.go`
* Generate the `remote/remote_test.go` unit tests with `cd fstest/fstests; go generate`
* Make sure all tests pass with `go test -v`

Integration tests

* Add your fs to `fs/test_all.go`
* Make sure integration tests pass with
    * `cd fs`
    * `go test -v -remote TestRemote:` and
    * `go test -v -remote TestRemote: -subdir`

Add your fs to the docs

* `README.md` - main Github page
* `docs/content/remote.md` - main docs page
* `docs/content/overview.md` - overview docs
* `docs/content/docs.md` - list of remotes in config section
* `docs/content/about.md` - front page of rclone.org
* `docs/layouts/chrome/navbar.html` - add it to the website navigation
* `make_manual.py` - add the page to the `docs` constant
ISSUE_TEMPLATE.md (new file, 14 lines)

@@ -0,0 +1,14 @@
When filing an issue, please include the following information if
possible as well as a description of the problem. Make sure you are
using the [latest version of rclone](http://rclone.org/downloads/).

> What is your rclone version (eg output from `rclone -V`)

> Which OS you are using and how many bits (eg Windows 7, 64 bit)

> Which cloud storage system are you using? (eg Google Drive)

> The command you were trying to run (eg `rclone copy /tmp remote:tmp`)

> A log from the command with the `-v` flag (eg output from `rclone -v copy /tmp remote:tmp`)
MANUAL.html (new file, 3169 lines) - diff suppressed because it is too large
MANUAL.txt (new file, 4625 lines) - diff suppressed because it is too large
Makefile (new file, 106 lines)

@@ -0,0 +1,106 @@
SHELL = /bin/bash
TAG := $(shell git describe --tags)
LAST_TAG := $(shell git describe --tags --abbrev=0)
NEW_TAG := $(shell echo $(LAST_TAG) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f", $$_)')

rclone:
	@go version
	go install -v ./...

# Full suite of integration tests
test: rclone
	go test ./...
	cd fs && go run test_all.go

# Quick test
quicktest:
	go test ./...
	go test -cpu=2 -race ./...

# Do source code quality checks
check: rclone
	go vet ./...
	errcheck ./...
	goimports -d . | grep . ; test $$? -eq 1
	golint ./... | grep -E -v '(StorageUrl|CdnUrl)' ; test $$? -eq 1
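# (Note on the two lines above: goimports -d and golint print nothing
# when the tree is clean, and grep exits 0 only when it matched some
# output, so "| grep . ; test $$? -eq 1" fails the target whenever the
# tool reported anything.)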

# Get the build dependencies
build_dep:
	go get -t ./...
	go get -u github.com/kisielk/errcheck
	go get -u golang.org/x/tools/cmd/goimports
	go get -u github.com/golang/lint/golint

# Update dependencies
update:
	go get -t -u -f -v ./...

doc: rclone.1 MANUAL.html MANUAL.txt

rclone.1: MANUAL.md
	pandoc -s --from markdown --to man MANUAL.md -o rclone.1

MANUAL.md: make_manual.py docs/content/*.md commanddocs
	./make_manual.py

MANUAL.html: MANUAL.md
	pandoc -s --from markdown --to html MANUAL.md -o MANUAL.html

MANUAL.txt: MANUAL.md
	pandoc -s --from markdown --to plain MANUAL.md -o MANUAL.txt

commanddocs: rclone
	rclone gendocs docs/content/commands/

install: rclone
	install -d ${DESTDIR}/usr/bin
	install -t ${DESTDIR}/usr/bin ${GOPATH}/bin/rclone

clean:
	go clean ./...
	find . -name \*~ | xargs -r rm -f
	rm -rf build docs/public
	rm -f rclone rclonetest/rclonetest

website:
	cd docs && hugo

upload_website: website
	rclone -v sync docs/public memstore:www-rclone-org

upload:
	rclone -v copy build/ memstore:downloads-rclone-org

upload_github:
	./upload-github $(TAG)

cross: doc
	./cross-compile $(TAG)

beta:
	./cross-compile $(TAG)β
	rm build/*-current-*
	rclone -v copy build/ memstore:pub-rclone-org/$(TAG)β
	@echo Beta release ready at http://pub.rclone.org/$(TAG)%CE%B2/
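# (%CE%B2 in the URL above is simply the percent-encoding of the β
# suffix used on the build paths.)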

serve: website
	cd docs && hugo server -v -w

tag: doc
	@echo "Old tag is $(LAST_TAG)"
	@echo "New tag is $(NEW_TAG)"
	echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(NEW_TAG)-DEV\"\n" | gofmt > fs/version.go
	perl -lpe 's/VERSION/${NEW_TAG}/g; s/DATE/'`date -I`'/g;' docs/content/downloads.md.in > docs/content/downloads.md
	git tag $(NEW_TAG)
	@echo "Add this to changelog in docs/content/changelog.md"
	@echo "  * $(NEW_TAG) -" `date -I`
	@git log $(LAST_TAG)..$(NEW_TAG) --oneline
	@echo "Then commit the changes"
	@echo git commit -m \"Version $(NEW_TAG)\" -a -v
	@echo "And finally run make retag before make cross etc"

retag:
	git tag -f $(LAST_TAG)

gen_tests:
	cd fstest/fstests && go generate
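
For reference, the echo-through-gofmt pipeline in the tag target above
writes a fs/version.go of this shape (shown here for a hypothetical
new tag v1.33):

    package fs

    // Version of rclone
    var Version = "v1.33-DEV"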

README.md (332 lines changed)

@@ -1,330 +1,46 @@
Rclone
======

[](http://rclone.org/)

[Website](http://rclone.org) |
[Documentation](http://rclone.org/docs/) |
[Contributing](CONTRIBUTING.md) |
[Changelog](http://rclone.org/changelog/) |
[Installation](http://rclone.org/install/) |
[G+](https://google.com/+RcloneOrg)

Sync files and directories to and from

[](https://travis-ci.org/ncw/rclone) [](https://ci.appveyor.com/project/ncw/rclone) [](https://godoc.org/github.com/ncw/rclone)

Rclone is a command line program to sync files and directories to and from

* Google Drive
* Amazon S3
* Openstack Swift / Rackspace cloud files / Memset Memstore
* Dropbox
* Google Cloud Storage
* Amazon Drive
* Microsoft One Drive
* Hubic
* Backblaze B2
* Yandex Disk
* The local filesystem

Features

* MD5SUMs checked at all times for file integrity
* MD5/SHA1 hashes checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* Copy mode to just copy new/changed files
* Sync mode to make a directory identical
* Check mode to check all MD5SUMs
* Can sync to and from network, eg two different Drive accounts
* Sync (one way) mode to make a directory identical
* Check mode to check for file hash equality
* Can sync to and from network, eg two different cloud accounts

Home page

See the home page for installation, usage, documentation, changelog
and configuration walkthroughs.

* http://rclone.org/

Install
-------

Rclone is a Go program and comes as a single binary file.

Download the relevant binary from

* http://www.craig-wood.com/nick/pub/rclone/

Or alternatively if you have Go installed use

    go get github.com/ncw/rclone

and this will build the binary in `$GOPATH/bin`.

You can then modify the source and submit patches.

Configure
---------

First you'll need to configure rclone. As the object storage systems
have quite complicated authentication these are kept in a config file
`.rclone.conf` in your home directory by default. (You can use the
-config option to choose a different config file.)

The easiest way to make the config is to run rclone with the config
option, eg

    rclone config

Here is an example of making an s3 configuration

```
$ rclone config
No remotes found - make a new one
n) New remote
q) Quit config
n/q> n
name> remote
What type of source is it?
Choose a number from below
 1) swift
 2) s3
 3) local
 4) drive
type> 2
AWS Access Key ID.
access_key_id> accesskey
AWS Secret Access Key (password).
secret_access_key> secretaccesskey
Endpoint for S3 API.
Choose a number from below, or type in your own value
 * The default endpoint - a good choice if you are unsure.
 * US Region, Northern Virginia or Pacific Northwest.
 * Leave location constraint empty.
 1) https://s3.amazonaws.com/
 * US Region, Northern Virginia only.
 * Leave location constraint empty.
 2) https://s3-external-1.amazonaws.com
[snip]
 * South America (Sao Paulo) Region
 * Needs location constraint sa-east-1.
 9) https://s3-sa-east-1.amazonaws.com
endpoint> 1
Location constraint - must be set to match the Endpoint.
Choose a number from below, or type in your own value
 * Empty for US Region, Northern Virginia or Pacific Northwest.
 1)
 * US West (Oregon) Region.
 2) us-west-2
[snip]
 * South America (Sao Paulo) Region.
 9) sa-east-1
location_constraint> 1
--------------------
[remote]
access_key_id = accesskey
secret_access_key = secretaccesskey
endpoint = https://s3.amazonaws.com/
location_constraint =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
remote               s3

e) Edit existing remote
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> q
```

This can now be used like this

```
rclone lsd remote: - see all buckets/containers
rclone ls remote: - list a bucket
rclone sync /home/local/directory remote:bucket
```

See the next section for more details.

Usage
-----

Rclone syncs a directory tree from local to remote.

Its basic syntax is like this

    Syntax: [options] subcommand <parameters> <parameters...>

See below for how to specify the source and destination paths.

Subcommands
-----------

    rclone copy source:path dest:path

Copy the source to the destination. Doesn't transfer
unchanged files, testing first by modification time then by
MD5SUM. Doesn't delete files from the destination.

    rclone sync source:path dest:path

Sync the source to the destination. Doesn't transfer
unchanged files, testing first by modification time then by
MD5SUM. Deletes any files that exist in the destination that don't
exist in the source. Since this can cause data loss, test
first with the -dry-run flag.
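
For example, to preview what a sync would do before letting it delete
anything:

    rclone -dry-run sync /home/source remote:backup
    rclone sync /home/source remote:backup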

    rclone ls [remote:path]

List all the objects in the path.

    rclone lsd [remote:path]

List all directories/objects/buckets in the path.

    rclone mkdir remote:path

Make the path if it doesn't already exist

    rclone rmdir remote:path

Remove the path. Note that you can't remove a path with
objects in it, use purge for that.

    rclone purge remote:path

Remove the path and all of its contents.

    rclone check source:path dest:path

Checks the files in the source and destination match. It
compares sizes and MD5SUMs and prints a report of files which
don't match. It doesn't alter the source or destination.

General options:

* `-config` Location of the config file
* `-transfers=4`: Number of file transfers to run in parallel.
* `-checkers=8`: Number of MD5SUM checkers to run in parallel.
* `-dry-run=false`: Do a trial run with no permanent changes
* `-modify-window=1ns`: Max time difference to be considered the same - this is usually set automatically
* `-quiet=false`: Print as little stuff as possible
* `-stats=1m0s`: Interval to print stats
* `-verbose=false`: Print lots more stuff

Developer options:

* `-cpuprofile=""`: Write cpu profile to file

Local Filesystem
----------------

Paths are specified as normal filesystem paths, so

    rclone sync /home/source /tmp/destination

will sync source to destination.

Swift / Rackspace cloudfiles / Memset Memstore
----------------------------------------------

Paths are specified as remote:container (or remote: for the `lsd`
command.)

So to copy a local directory to a swift container called backup:

    rclone sync /home/source swift:backup

The modified time is stored as metadata on the object as
`X-Object-Meta-Mtime` as floating point since the epoch.

This is a de facto standard (used in the official python-swiftclient
amongst others) for storing the modification time (as read using
os.Stat) for an object.
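
As a rough illustration of that convention (a sketch only, not
rclone's actual implementation - the helper names here are made up;
S3 below uses the same scheme under `X-Amz-Meta-Mtime`), the header
value is just seconds since the Unix epoch as a decimal float:

```
package main

import (
	"fmt"
	"strconv"
	"time"
)

// encodeMtime renders a modification time as the metadata value:
// floating point seconds since the epoch, eg "1402794712.500000000".
func encodeMtime(t time.Time) string {
	return strconv.FormatFloat(float64(t.UnixNano())/1e9, 'f', 9, 64)
}

// decodeMtime parses the metadata value back into a time.Time.
// (float64 loses a little sub-microsecond precision, which is
// fine for modification times.)
func decodeMtime(v string) (time.Time, error) {
	f, err := strconv.ParseFloat(v, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	v := encodeMtime(time.Now())
	fmt.Println("X-Object-Meta-Mtime:", v)
	back, _ := decodeMtime(v)
	fmt.Println("round trip:", back)
}
```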

Amazon S3
---------

Paths are specified as remote:bucket

So to copy a local directory to an s3 bucket called backup

    rclone sync /home/source s3:backup

The modified time is stored as metadata on the object as
`X-Amz-Meta-Mtime` as floating point since the epoch.

Google drive
------------

Paths are specified as drive:path. Drive paths may be as deep as required.

The initial setup for drive involves getting a token from Google drive
which you need to do in your browser. The `rclone config` walks you
through it.

Here is an example of how to make a remote called `drv`

```
$ ./rclone config
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
name> drv
What type of source is it?
Choose a number from below
 1) swift
 2) s3
 3) local
 4) drive
type> 4
Google Application Client Id - leave blank to use rclone's.
client_id>
Google Application Client Secret - leave blank to use rclone's.
client_secret>
Remote config
Go to the following link in your browser
https://accounts.google.com/o/oauth2/auth?access_type=&approval_prompt=&client_id=XXXXXXXXXXXX.apps.googleusercontent.com&redirect_uri=urn%3XXXXX%3Awg%3Aoauth%3XX.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&state=state
Log in, then paste the token that is returned in the browser here
Enter verification code> X/XXXXXXXXXXXXXXXXXX-XXXXXXXXX.XXXXXXXXX-XXXXX_XXXXXXX_XXXXXXX
--------------------
[drv]
client_id =
client_secret =
token = {"AccessToken":"xxxx.xxxxxxx_xxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"1/xxxxxxxxxxxxxxxx_xxxxxxxxxxxxxxxxxxxxxxxxxx","Expiry":"2014-03-16T13:57:58.955387075Z","Extra":null}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

You can then use it like this

    rclone lsd drv:
    rclone ls drv:

To copy a local directory to a drive directory called backup

    rclone copy /home/source drv:backup

Google drive stores modification times accurate to 1 ms.

License
-------

This is free software under the terms of the MIT license (check the
COPYING file included in this package).

Bugs
----

* Doesn't sync individual files yet, only directories.
* Drive: Sometimes get: Failed to copy: Upload failed: googleapi: Error 403: Rate Limit Exceeded
  * quota is 100.0 requests/second/user
* Empty directories left behind with Local and Drive
  * eg purging a local directory with subdirectories doesn't work

Contact and support
-------------------

The project website is at:

* https://github.com/ncw/rclone

There you can file bug reports, ask for help or contribute patches.

Authors
-------

* Nick Craig-Wood <nick@craig-wood.com>

Contributors
------------

* Your name goes here!
RELEASE.md (new file, 24 lines)

@@ -0,0 +1,24 @@
Required software for making a release

* [github-release](https://github.com/aktau/github-release) for uploading packages
* [gox](https://github.com/mitchellh/gox) for cross compiling
  * Run `gox -build-toolchain`
  * This assumes you have your own source checkout
* pandoc for making the html and man pages
* errcheck - go get github.com/kisielk/errcheck
* golint - go get github.com/golang/lint

Making a release

* make update
* make check
* make test
* make tag
* edit docs/content/changelog.md
* make doc
* git commit -a -v
* make retag
* # Set the GOPATH for a gox enabled compiler - . ~/bin/go-cross - not required for go >= 1.5
* make cross
* make upload
* make upload_website
* git push --tags origin master
* make upload_github
amazonclouddrive/amazonclouddrive.go (new file, 854 lines)

@@ -0,0 +1,854 @@
// Package amazonclouddrive provides an interface to the Amazon Cloud
// Drive object storage system.
package amazonclouddrive

/*

FIXME make searching for directory in id and file in id more efficient
- use the name: search parameter - remember the escaping rules
- use Folder GetNode and GetFile

FIXME make the default for no files and no dirs be (FILE & FOLDER) so
we ignore assets completely!
*/

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"regexp"
	"strings"
	"sync/atomic"
	"time"

	"github.com/ncw/go-acd"
	"github.com/ncw/rclone/dircache"
	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/oauthutil"
	"github.com/ncw/rclone/pacer"
	"github.com/pkg/errors"
	"github.com/spf13/pflag"
	"golang.org/x/oauth2"
)

const (
	rcloneClientID              = "amzn1.application-oa2-client.6bf18d2d1f5b485c94c8988bb03ad0e7"
	rcloneEncryptedClientSecret = "ZP12wYlGw198FtmqfOxyNAGXU3fwVcQdmt--ba1d00wJnUs0LOzvVyXVDbqhbcUqnr5Vd1QejwWmiv1Ep7UJG1kUQeuBP5n9goXWd5MrAf0"
	folderKind                  = "FOLDER"
	fileKind                    = "FILE"
	assetKind                   = "ASSET"
	statusAvailable             = "AVAILABLE"
	timeFormat                  = time.RFC3339 // 2014-03-07T22:31:12.173Z
	minSleep                    = 20 * time.Millisecond
	warnFileSize                = 50 << 30 // Display warning for files larger than this size
)

// Globals
var (
	// Flags
	tempLinkThreshold = fs.SizeSuffix(9 << 30) // Download files bigger than this via the tempLink
	uploadWaitTime    = pflag.DurationP("acd-upload-wait-time", "", 2*60*time.Second, "Time to wait after a failed complete upload to see if it appears.")
	// Description of how to auth for this app
	acdConfig = &oauth2.Config{
		Scopes: []string{"clouddrive:read_all", "clouddrive:write"},
		Endpoint: oauth2.Endpoint{
			AuthURL:  "https://www.amazon.com/ap/oa",
			TokenURL: "https://api.amazon.com/auth/o2/token",
		},
		ClientID:     rcloneClientID,
		ClientSecret: fs.MustReveal(rcloneEncryptedClientSecret),
		RedirectURL:  oauthutil.RedirectURL,
	}
)

// Register with Fs
func init() {
	fs.Register(&fs.RegInfo{
		Name:        "amazon cloud drive",
		Description: "Amazon Drive",
		NewFs:       NewFs,
		Config: func(name string) {
			err := oauthutil.Config("amazon cloud drive", name, acdConfig)
			if err != nil {
				log.Fatalf("Failed to configure token: %v", err)
			}
		},
		Options: []fs.Option{{
			Name: fs.ConfigClientID,
			Help: "Amazon Application Client Id - leave blank normally.",
		}, {
			Name: fs.ConfigClientSecret,
			Help: "Amazon Application Client Secret - leave blank normally.",
		}},
	})
	pflag.VarP(&tempLinkThreshold, "acd-templink-threshold", "", "Files >= this size will be downloaded via their tempLink.")
}

// Fs represents a remote acd server
type Fs struct {
	name         string                 // name of this remote
	c            *acd.Client            // the connection to the acd server
	noAuthClient *http.Client           // unauthenticated http client
	root         string                 // the path we are working on
	dirCache     *dircache.DirCache     // Map of directory path to directory id
	pacer        *pacer.Pacer           // pacer for API calls
	ts           *oauthutil.TokenSource // token source for oauth
	uploads      int32                  // number of uploads in progress - atomic access required
}

// Object describes an acd object
//
// Will definitely have info but maybe not meta
type Object struct {
	fs     *Fs       // what this object is part of
	remote string    // The remote path
	info   *acd.Node // Info from the acd object if known
}

// ------------------------------------------------------------

// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
	return f.name
}

// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
	return f.root
}

// String converts this Fs to a string
func (f *Fs) String() string {
	return fmt.Sprintf("amazon drive root '%s'", f.root)
}

// Pattern to match an acd path
var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)

// parsePath parses an acd 'url'
func parsePath(path string) (root string) {
	root = strings.Trim(path, "/")
	return
}

// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
	400, // Bad request (seen in "Next token is expired")
	401, // Unauthorized (seen in "Token has expired")
	408, // Request Timeout
	429, // Rate exceeded.
	500, // Get occasional 500 Internal Server Error
	503, // Service Unavailable
	504, // Gateway Time-out
}

// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) {
	if resp != nil {
		if resp.StatusCode == 401 {
			f.ts.Invalidate()
			fs.Log(f, "401 error received - invalidating token")
			return true, err
		}
		// Work around receiving this error sporadically on authentication
		//
		// HTTP code 403: "403 Forbidden", response body: {"message":"Authorization header requires 'Credential' parameter. Authorization header requires 'Signature' parameter. Authorization header requires 'SignedHeaders' parameter. Authorization header requires existence of either a 'X-Amz-Date' or a 'Date' header. Authorization=Bearer"}
		if resp.StatusCode == 403 && strings.Contains(err.Error(), "Authorization header requires") {
			fs.Log(f, "403 \"Authorization header requires...\" error received - retry")
			return true, err
		}
	}
	return fs.ShouldRetry(err) || fs.ShouldRetryHTTP(resp, retryErrorCodes), err
}

// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string) (fs.Fs, error) {
	root = parsePath(root)
	oAuthClient, ts, err := oauthutil.NewClient(name, acdConfig)
	if err != nil {
		log.Fatalf("Failed to configure Amazon Drive: %v", err)
	}

	c := acd.NewClient(oAuthClient)
	c.UserAgent = fs.UserAgent
	f := &Fs{
		name:         name,
		root:         root,
		c:            c,
		pacer:        pacer.New().SetMinSleep(minSleep).SetPacer(pacer.AmazonCloudDrivePacer),
		noAuthClient: fs.Config.Client(),
		ts:           ts,
	}

	// Update endpoints
	var resp *http.Response
	err = f.pacer.Call(func() (bool, error) {
		_, resp, err = f.c.Account.GetEndpoints()
		return f.shouldRetry(resp, err)
	})
	if err != nil {
		return nil, errors.Wrap(err, "failed to get endpoints")
	}

	// Get rootID
	rootInfo, err := f.getRootInfo()
	if err != nil || rootInfo.Id == nil {
		return nil, errors.Wrap(err, "failed to get root")
	}

	// Renew the token in the background
	go f.renewToken()

	f.dirCache = dircache.New(root, *rootInfo.Id, f)

	// Find the current root
	err = f.dirCache.FindRoot(false)
	if err != nil {
		// Assume it is a file
		newRoot, remote := dircache.SplitPath(root)
		newF := *f
		newF.dirCache = dircache.New(newRoot, *rootInfo.Id, &newF)
		newF.root = newRoot
		// Make new Fs which is the parent
		err = newF.dirCache.FindRoot(false)
		if err != nil {
			// No root so return old f
			return f, nil
		}
		_, err := newF.newObjectWithInfo(remote, nil)
		if err != nil {
			if err == fs.ErrorObjectNotFound {
				// File doesn't exist so return old f
				return f, nil
			}
			return nil, err
		}
		// return an error with an fs which points to the parent
		return &newF, fs.ErrorIsFile
	}
	return f, nil
}

// getRootInfo gets the root folder info
func (f *Fs) getRootInfo() (rootInfo *acd.Folder, err error) {
	var resp *http.Response
	err = f.pacer.Call(func() (bool, error) {
		rootInfo, resp, err = f.c.Nodes.GetRoot()
		return f.shouldRetry(resp, err)
	})
	return rootInfo, err
}

// Renew the token - runs in the background
//
// Renews the token whenever it expires. Useful when there are lots
// of uploads in progress and the token doesn't get renewed. Amazon
// seem to cancel your uploads if you don't renew your token for 2hrs.
func (f *Fs) renewToken() {
	expiry := f.ts.OnExpiry()
	for {
		<-expiry
		uploads := atomic.LoadInt32(&f.uploads)
		if uploads != 0 {
			fs.Debug(f, "Token expired - %d uploads in progress - refreshing", uploads)
			// Do a transaction
			_, err := f.getRootInfo()
			if err == nil {
				fs.Debug(f, "Token refresh successful")
			} else {
				fs.ErrorLog(f, "Token refresh failed: %v", err)
			}
		} else {
			fs.Debug(f, "Token expired but no uploads in progress - doing nothing")
		}
	}
}

func (f *Fs) startUpload() {
	atomic.AddInt32(&f.uploads, 1)
}

func (f *Fs) stopUpload() {
	atomic.AddInt32(&f.uploads, -1)
}

// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(remote string, info *acd.Node) (fs.Object, error) {
	o := &Object{
		fs:     f,
		remote: remote,
	}
	if info != nil {
		// Set info but not meta
		o.info = info
	} else {
		err := o.readMetaData() // reads info and meta, returning an error
		if err != nil {
			return nil, err
		}
	}
	return o, nil
}

// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(remote string) (fs.Object, error) {
	return f.newObjectWithInfo(remote, nil)
}

// FindLeaf finds a directory of name leaf in the folder with ID pathID
func (f *Fs) FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err error) {
	//fs.Debug(f, "FindLeaf(%q, %q)", pathID, leaf)
	folder := acd.FolderFromId(pathID, f.c.Nodes)
	var resp *http.Response
	var subFolder *acd.Folder
	err = f.pacer.Call(func() (bool, error) {
		subFolder, resp, err = folder.GetFolder(leaf)
		return f.shouldRetry(resp, err)
	})
	if err != nil {
		if err == acd.ErrorNodeNotFound {
			//fs.Debug(f, "...Not found")
			return "", false, nil
		}
		//fs.Debug(f, "...Error %v", err)
		return "", false, err
	}
	if subFolder.Status != nil && *subFolder.Status != statusAvailable {
		fs.Debug(f, "Ignoring folder %q in state %q", leaf, *subFolder.Status)
		time.Sleep(1 * time.Second) // FIXME wait for problem to go away!
		return "", false, nil
	}
	//fs.Debug(f, "...Found(%q, %v)", *subFolder.Id, leaf)
	return *subFolder.Id, true, nil
}

// CreateDir makes a directory with pathID as parent and name leaf
func (f *Fs) CreateDir(pathID, leaf string) (newID string, err error) {
	//fmt.Printf("CreateDir(%q, %q)\n", pathID, leaf)
	folder := acd.FolderFromId(pathID, f.c.Nodes)
	var resp *http.Response
	var info *acd.Folder
	err = f.pacer.Call(func() (bool, error) {
		info, resp, err = folder.CreateFolder(leaf)
		return f.shouldRetry(resp, err)
	})
	if err != nil {
		//fmt.Printf("...Error %v\n", err)
		return "", err
	}
	//fmt.Printf("...Id %q\n", *info.Id)
	return *info.Id, nil
}

// list the objects into the function supplied
//
// If directories is set it only sends directories
// User function to process a File item from listAll
//
// Should return true to finish processing
type listAllFn func(*acd.Node) bool

// Lists the directory required calling the user function on each item found
//
// If the user fn ever returns true then it early exits with found = true
func (f *Fs) listAll(dirID string, title string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
	query := "parents:" + dirID
	if directoriesOnly {
		query += " AND kind:" + folderKind
	} else if filesOnly {
		query += " AND kind:" + fileKind
	} else {
		// FIXME none of these work
		//query += " AND kind:(" + fileKind + " OR " + folderKind + ")"
		//query += " AND (kind:" + fileKind + " OR kind:" + folderKind + ")"
	}
	opts := acd.NodeListOptions{
		Filters: query,
	}
	var nodes []*acd.Node
	var out []*acd.Node
	//var resp *http.Response
	for {
		var resp *http.Response
		err = f.pacer.CallNoRetry(func() (bool, error) {
			nodes, resp, err = f.c.Nodes.GetNodes(&opts)
			return f.shouldRetry(resp, err)
		})
		if err != nil {
			return false, err
		}
		if nodes == nil {
			break
		}
		for _, node := range nodes {
			if node.Name != nil && node.Id != nil && node.Kind != nil && node.Status != nil {
				// Ignore nodes if not AVAILABLE
				if *node.Status != statusAvailable {
					continue
				}
				// Store the nodes up in case we have to retry the listing
				out = append(out, node)
			}
		}
	}
	// Send the nodes now
	for _, node := range out {
		if fn(node) {
			found = true
			break
		}
	}
	return
}

// ListDir reads the directory specified by the job into out, returning any more jobs
func (f *Fs) ListDir(out fs.ListOpts, job dircache.ListDirJob) (jobs []dircache.ListDirJob, err error) {
	fs.Debug(f, "Reading %q", job.Path)
	maxTries := fs.Config.LowLevelRetries
	for tries := 1; tries <= maxTries; tries++ {
		_, err = f.listAll(job.DirID, "", false, false, func(node *acd.Node) bool {
			remote := job.Path + *node.Name
			switch *node.Kind {
			case folderKind:
				if out.IncludeDirectory(remote) {
					dir := &fs.Dir{
						Name:  remote,
						Bytes: -1,
						Count: -1,
					}
					dir.When, _ = time.Parse(timeFormat, *node.ModifiedDate) // FIXME
					if out.AddDir(dir) {
						return true
					}
					if job.Depth > 0 {
						jobs = append(jobs, dircache.ListDirJob{DirID: *node.Id, Path: remote + "/", Depth: job.Depth - 1})
					}
				}
			case fileKind:
				o, err := f.newObjectWithInfo(remote, node)
				if err != nil {
					out.SetError(err)
					return true
				}
				if out.Add(o) {
					return true
				}
			default:
				// ignore ASSET etc
			}
			return false
		})
		if fs.IsRetryError(err) {
			fs.Debug(f, "Directory listing error for %q: %v - low level retry %d/%d", job.Path, err, tries, maxTries)
			continue
		}
		if err != nil {
			return nil, err
		}
		break
	}
	fs.Debug(f, "Finished reading %q", job.Path)
	return jobs, err
}

// List walks the path returning files and directories into out
func (f *Fs) List(out fs.ListOpts, dir string) {
	f.dirCache.List(f, out, dir)
}

// checkUpload checks to see if an error occurred after the file was
// completely uploaded.
//
// If it was then it waits for a while to see if the file really
// exists and is the right size and returns an updated info.
//
// If the file wasn't found or was the wrong size then it returns the
// original error.
//
// This is a workaround for Amazon sometimes returning
//
// * 408 REQUEST_TIMEOUT
// * 504 GATEWAY_TIMEOUT
// * 500 Internal server error
//
// at the end of large uploads. The speculation is that the timeout
// is waiting for the sha1 hashing to complete and the file may well
// be properly uploaded.
func (f *Fs) checkUpload(in io.Reader, src fs.ObjectInfo, inInfo *acd.File, inErr error) (fixedError bool, info *acd.File, err error) {
	// Return if no error - all is well
	if inErr == nil {
		return false, inInfo, inErr
	}
	const sleepTime = 5 * time.Second           // sleep between tries
	retries := int(*uploadWaitTime / sleepTime) // number of retries
	if retries <= 0 {
		retries = 1
	}
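	// Probe the reader: if the upload really ran to completion the
	// body should have been fully drained, so a single-byte read must
	// return (0, io.EOF) - anything else means it stopped early.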
	buf := make([]byte, 1)
	n, err := in.Read(buf)
	if !(n == 0 && err == io.EOF) {
		fs.Debug(src, "Upload error detected but didn't finish upload (n=%d, err=%v): %v", n, err, inErr)
		return false, inInfo, inErr
	}
	fs.Debug(src, "Error detected after finished upload - waiting to see if object was uploaded correctly: %v", inErr)
	remote := src.Remote()
	for i := 1; i <= retries; i++ {
		o, err := f.NewObject(remote)
		if err == fs.ErrorObjectNotFound {
			fs.Debug(src, "Object not found - waiting (%d/%d)", i, retries)
		} else if err != nil {
			fs.Debug(src, "Object returned error - waiting (%d/%d): %v", i, retries, err)
		} else {
			if src.Size() == o.Size() {
				fs.Debug(src, "Object found with correct size - returning with no error")
				info = &acd.File{
					Node: o.(*Object).info,
				}
				return true, info, nil
			}
			fs.Debug(src, "Object found but wrong size %d vs %d - waiting (%d/%d)", src.Size(), o.Size(), i, retries)
		}
		time.Sleep(sleepTime)
	}
	fs.Debug(src, "Finished waiting for object - returning original error: %v", inErr)
	return false, inInfo, inErr
}

// Put the object into the container
//
// Copy the reader in to the new object which is returned
//
// The new object may have been created if an error is returned
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo) (fs.Object, error) {
	remote := src.Remote()
	size := src.Size()
	// Temporary Object under construction
	o := &Object{
		fs:     f,
		remote: remote,
	}
	// Check if object already exists
	err := o.readMetaData()
	switch err {
	case nil:
		return o, o.Update(in, src)
	case fs.ErrorObjectNotFound:
		// Not found so create it
	default:
		return nil, err
	}
	// If not create it
	leaf, directoryID, err := f.dirCache.FindPath(remote, true)
	if err != nil {
		return nil, err
	}
	if size > warnFileSize {
		fs.Debug(f, "Warning: file %q may fail because it is too big. Use --max-size=%dGB to skip large files.", remote, warnFileSize>>30)
	}
	folder := acd.FolderFromId(directoryID, o.fs.c.Nodes)
	var info *acd.File
	var resp *http.Response
	err = f.pacer.CallNoRetry(func() (bool, error) {
		f.startUpload()
|
||||
if src.Size() != 0 {
|
||||
info, resp, err = folder.Put(in, leaf)
|
||||
} else {
|
||||
info, resp, err = folder.PutSized(in, size, leaf)
|
||||
}
|
||||
f.stopUpload()
|
||||
var ok bool
|
||||
ok, info, err = f.checkUpload(in, src, info, err)
|
||||
if ok {
|
||||
return false, nil
|
||||
}
|
||||
return f.shouldRetry(resp, err)
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
o.info = info.Node
|
||||
return o, nil
|
||||
}

// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir() error {
    return f.dirCache.FindRoot(true)
}

// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in
func (f *Fs) purgeCheck(check bool) error {
    if f.root == "" {
        return errors.New("can't purge root directory")
    }
    dc := f.dirCache
    err := dc.FindRoot(false)
    if err != nil {
        return err
    }
    rootID := dc.RootID()

    if check {
        // check directory is empty
        empty := true
        _, err = f.listAll(rootID, "", false, false, func(node *acd.Node) bool {
            switch *node.Kind {
            case folderKind:
                empty = false
                return true
            case fileKind:
                empty = false
                return true
            default:
                fs.Debug("Found ASSET %s", *node.Id)
            }
            return false
        })
        if err != nil {
            return err
        }
        if !empty {
            return errors.New("directory not empty")
        }
    }

    node := acd.NodeFromId(rootID, f.c.Nodes)
    var resp *http.Response
    err = f.pacer.Call(func() (bool, error) {
        resp, err = node.Trash()
        return f.shouldRetry(resp, err)
    })
    if err != nil {
        return err
    }

    f.dirCache.ResetRoot()
    return nil
}

// Rmdir deletes the root folder
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir() error {
    return f.purgeCheck(true)
}

// Precision returns the precision of this Fs
func (f *Fs) Precision() time.Duration {
    return fs.ModTimeNotSupported
}

// Hashes returns the supported hash sets.
func (f *Fs) Hashes() fs.HashSet {
    return fs.HashSet(fs.HashMD5)
}

// Copy src to this remote using server side copy operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
//func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
//	srcObj, ok := src.(*Object)
//	if !ok {
//		fs.Debug(src, "Can't copy - not same remote type")
//		return nil, fs.ErrorCantCopy
//	}
//	srcFs := srcObj.fs
//	_, err := f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil)
//	if err != nil {
//		return nil, err
//	}
//	return f.NewObject(remote), nil
//}

// Purge deletes all the files and the container
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge() error {
    return f.purgeCheck(false)
}

// ------------------------------------------------------------

// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
    return o.fs
}

// Return a string version
func (o *Object) String() string {
    if o == nil {
        return "<nil>"
    }
    return o.remote
}

// Remote returns the remote path
func (o *Object) Remote() string {
    return o.remote
}

// Hash returns the MD5 sum of an object returning a lowercase hex string
func (o *Object) Hash(t fs.HashType) (string, error) {
    if t != fs.HashMD5 {
        return "", fs.ErrHashUnsupported
    }
    if o.info.ContentProperties.Md5 != nil {
        return *o.info.ContentProperties.Md5, nil
    }
    return "", nil
}

// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
    return int64(*o.info.ContentProperties.Size)
}

// readMetaData gets the metadata if it hasn't already been fetched
//
// it also sets the info
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (o *Object) readMetaData() (err error) {
    if o.info != nil {
        return nil
    }
    leaf, directoryID, err := o.fs.dirCache.FindPath(o.remote, false)
    if err != nil {
        if err == fs.ErrorDirNotFound {
            return fs.ErrorObjectNotFound
        }
        return err
    }
    folder := acd.FolderFromId(directoryID, o.fs.c.Nodes)
    var resp *http.Response
    var info *acd.File
    err = o.fs.pacer.Call(func() (bool, error) {
        info, resp, err = folder.GetFile(leaf)
        return o.fs.shouldRetry(resp, err)
    })
    if err != nil {
        if err == acd.ErrorNodeNotFound {
            return fs.ErrorObjectNotFound
        }
        return err
    }
    o.info = info.Node
    return nil
}

// ModTime returns the modification time of the object
//
// It attempts to read the object's mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime() time.Time {
    err := o.readMetaData()
    if err != nil {
        fs.Log(o, "Failed to read metadata: %v", err)
        return time.Now()
    }
    modTime, err := time.Parse(timeFormat, *o.info.ModifiedDate)
    if err != nil {
        fs.Log(o, "Failed to read mtime from object: %v", err)
        return time.Now()
    }
    return modTime
}

// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(modTime time.Time) error {
    // FIXME not implemented
    return fs.ErrorCantSetModTime
}

// Storable returns a boolean showing whether this object is storable
func (o *Object) Storable() bool {
    return true
}

// Open an object for read
func (o *Object) Open() (in io.ReadCloser, err error) {
    bigObject := o.Size() >= int64(tempLinkThreshold)
    if bigObject {
        fs.Debug(o, "Downloading large object via tempLink")
    }
    file := acd.File{Node: o.info}
    var resp *http.Response
    err = o.fs.pacer.Call(func() (bool, error) {
        if !bigObject {
            in, resp, err = file.Open()
        } else {
            in, resp, err = file.OpenTempURL(o.fs.noAuthClient)
        }
        return o.fs.shouldRetry(resp, err)
    })
    return in, err
}

// Update the object with the contents of the io.Reader, modTime and size
//
// The new object may have been created if an error is returned
func (o *Object) Update(in io.Reader, src fs.ObjectInfo) error {
    size := src.Size()
    file := acd.File{Node: o.info}
    var info *acd.File
    var resp *http.Response
    var err error
    err = o.fs.pacer.CallNoRetry(func() (bool, error) {
        o.fs.startUpload()
        if size != 0 {
            info, resp, err = file.OverwriteSized(in, size)
        } else {
            info, resp, err = file.Overwrite(in)
        }
        o.fs.stopUpload()
        var ok bool
        ok, info, err = o.fs.checkUpload(in, src, info, err)
        if ok {
            return false, nil
        }
        return o.fs.shouldRetry(resp, err)
    })
    if err != nil {
        return err
    }
    o.info = info.Node
    return nil
}

// Remove an object
func (o *Object) Remove() error {
    var resp *http.Response
    var err error
    err = o.fs.pacer.Call(func() (bool, error) {
        resp, err = o.info.Trash()
        return o.fs.shouldRetry(resp, err)
    })
    return err
}

// Check the interfaces are satisfied
var (
    _ fs.Fs     = (*Fs)(nil)
    _ fs.Purger = (*Fs)(nil)
    // _ fs.Copier   = (*Fs)(nil)
    // _ fs.Mover    = (*Fs)(nil)
    // _ fs.DirMover = (*Fs)(nil)
    _ fs.Object = (*Object)(nil)
)
amazonclouddrive/amazonclouddrive_test.go (new file, 58 lines)
@@ -0,0 +1,58 @@
// Test AmazonCloudDrive filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package amazonclouddrive_test

import (
    "testing"

    "github.com/ncw/rclone/amazonclouddrive"
    "github.com/ncw/rclone/fs"
    "github.com/ncw/rclone/fstest/fstests"
)

func TestSetup(t *testing.T) {
    fstests.NilObject = fs.Object((*amazonclouddrive.Object)(nil))
    fstests.RemoteName = "TestAmazonCloudDrive:"
}

// Generic tests for the Fs
func TestInit(t *testing.T)                 { fstests.TestInit(t) }
func TestFsString(t *testing.T)             { fstests.TestFsString(t) }
func TestFsRmdirEmpty(t *testing.T)         { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T)      { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T)              { fstests.TestFsMkdir(t) }
func TestFsListEmpty(t *testing.T)          { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T)       { fstests.TestFsListDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T)  { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T)           { fstests.TestFsPutFile1(t) }
func TestFsPutFile2(t *testing.T)           { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T)        { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T)       { fstests.TestFsListDirFile2(t) }
func TestFsListDirRoot(t *testing.T)        { fstests.TestFsListDirRoot(t) }
func TestFsListSubdir(t *testing.T)         { fstests.TestFsListSubdir(t) }
func TestFsListLevel2(t *testing.T)         { fstests.TestFsListLevel2(t) }
func TestFsListFile1(t *testing.T)          { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T)          { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T)      { fstests.TestFsListFile1and2(t) }
func TestFsCopy(t *testing.T)               { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T)               { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T)            { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T)          { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T)          { fstests.TestFsPrecision(t) }
func TestObjectString(t *testing.T)         { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T)             { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T)         { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T)         { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T)        { fstests.TestObjectModTime(t) }
func TestObjectSetModTime(t *testing.T)     { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T)           { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T)           { fstests.TestObjectOpen(t) }
func TestObjectUpdate(t *testing.T)         { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T)       { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T)             { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T)     { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T)         { fstests.TestObjectRemove(t) }
func TestObjectPurge(t *testing.T)          { fstests.TestObjectPurge(t) }
func TestFinalise(t *testing.T)             { fstests.TestFinalise(t) }
appveyor.yml (new file, 20 lines)
@@ -0,0 +1,20 @@
version: "{build}"

os: Windows Server 2012 R2

clone_folder: c:\gopath\src\github.com\ncw\rclone

environment:
  GOPATH: c:\gopath

install:
  - echo %PATH%
  - echo %GOPATH%
  - go version
  - go env
  - go get -t -d ./...

build_script:
  - go vet ./...
  - go test -cpu=2 ./...
  - go test -cpu=2 -short -race ./...
b2/api/types.go (new file, 299 lines)
@@ -0,0 +1,299 @@
package api

import (
    "fmt"
    "path"
    "strconv"
    "strings"
    "time"

    "github.com/ncw/rclone/fs"
)

// Error describes a B2 error response
type Error struct {
    Status  int    `json:"status"`  // The numeric HTTP status code. Always matches the status in the HTTP response.
    Code    string `json:"code"`    // A single-identifier code that identifies the error.
    Message string `json:"message"` // A human-readable message, in English, saying what went wrong.
}

// Error satisfies the error interface
func (e *Error) Error() string {
    return fmt.Sprintf("%s (%d %s)", e.Message, e.Status, e.Code)
}

// Fatal satisfies the Fatal interface
//
// It indicates which errors should be treated as fatal
func (e *Error) Fatal() bool {
    return e.Status == 403 // 403 errors shouldn't be retried
}

var _ fs.Fataler = (*Error)(nil)

// Account describes a B2 account
type Account struct {
    ID string `json:"accountId"` // The identifier for the account.
}

// Bucket describes a B2 bucket
type Bucket struct {
    ID        string `json:"bucketId"`
    AccountID string `json:"accountId"`
    Name      string `json:"bucketName"`
    Type      string `json:"bucketType"`
}

// Timestamp is a UTC time when this file was uploaded. It is a base
// 10 number of milliseconds since midnight, January 1, 1970 UTC. This
// fits in a 64 bit integer such as the type "long" in the programming
// language Java. It is intended to be compatible with Java's time
// long. For example, it can be passed directly into the Java call
// Date.setTime(long time).
type Timestamp time.Time

// MarshalJSON turns a Timestamp into JSON (in UTC)
func (t *Timestamp) MarshalJSON() (out []byte, err error) {
    timestamp := (*time.Time)(t).UTC().UnixNano()
    return []byte(strconv.FormatInt(timestamp/1E6, 10)), nil
}

// UnmarshalJSON turns JSON into a Timestamp
func (t *Timestamp) UnmarshalJSON(data []byte) error {
    timestamp, err := strconv.ParseInt(string(data), 10, 64)
    if err != nil {
        return err
    }
    *t = Timestamp(time.Unix(timestamp/1E3, (timestamp%1E3)*1E6).UTC())
    return nil
}
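
Because Timestamp implements both json.Marshaler and json.Unmarshaler, it drops straight into encoding/json. A quick illustrative round trip (the values are chosen for illustration; 2001-02-03T04:05:06.123Z is 981173106123 ms after the Unix epoch, matching the test data further down):

package main

import (
    "encoding/json"
    "fmt"
    "time"

    "github.com/ncw/rclone/b2/api"
)

func main() {
    t := api.Timestamp(time.Date(2001, 2, 3, 4, 5, 6, 123e6, time.UTC))
    out, _ := json.Marshal(&t)
    fmt.Println(string(out)) // 981173106123

    var back api.Timestamp
    _ = json.Unmarshal(out, &back)
    fmt.Println(time.Time(back).Equal(time.Time(t))) // true
}
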

const versionFormat = "-v2006-01-02-150405.000"

// AddVersion adds the timestamp as a version string into the filename passed in.
func (t Timestamp) AddVersion(remote string) string {
    ext := path.Ext(remote)
    base := remote[:len(remote)-len(ext)]
    s := (time.Time)(t).Format(versionFormat)
    // Replace the '.' with a '-'
    s = strings.Replace(s, ".", "-", -1)
    return base + s + ext
}

// RemoveVersion removes the timestamp from a filename as a version string.
//
// It returns the new file name and a timestamp, or the old filename
// and a zero timestamp.
func RemoveVersion(remote string) (t Timestamp, newRemote string) {
    newRemote = remote
    ext := path.Ext(remote)
    base := remote[:len(remote)-len(ext)]
    if len(base) < len(versionFormat) {
        return
    }
    versionStart := len(base) - len(versionFormat)
    // Check it ends in -xxx
    if base[len(base)-4] != '-' {
        return
    }
    // Replace with .xxx for parsing
    base = base[:len(base)-4] + "." + base[len(base)-3:]
    newT, err := time.Parse(versionFormat, base[versionStart:])
    if err != nil {
        return
    }
    return Timestamp(newT), base[:versionStart] + ext
}
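
The version scheme is easiest to see end to end. A small usage sketch (the file name is invented for illustration):

package main

import (
    "fmt"
    "time"

    "github.com/ncw/rclone/b2/api"
)

func main() {
    t := api.Timestamp(time.Date(2001, 2, 3, 4, 5, 6, 123e6, time.UTC))

    versioned := t.AddVersion("potato.txt")
    fmt.Println(versioned) // potato-v2001-02-03-040506-123.txt

    // RemoveVersion recovers both the original name and the timestamp.
    t2, name := api.RemoveVersion(versioned)
    fmt.Println(name, t2.IsZero()) // potato.txt false
}
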

// IsZero returns true if the timestamp is uninitialised
func (t Timestamp) IsZero() bool {
    return (time.Time)(t).IsZero()
}

// Equal compares two timestamps
//
// If either is IsZero then it returns false
func (t Timestamp) Equal(s Timestamp) bool {
    if (time.Time)(t).IsZero() {
        return false
    }
    if (time.Time)(s).IsZero() {
        return false
    }
    return (time.Time)(t).Equal((time.Time)(s))
}

// File is info about a file
type File struct {
    ID              string            `json:"fileId"`          // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
    Name            string            `json:"fileName"`        // The name of this file, which can be used with b2_download_file_by_name.
    Action          string            `json:"action"`          // Either "upload" or "hide". "upload" means a file that was uploaded to B2 Cloud Storage. "hide" means a file version marking the file as hidden, so that it will not show up in b2_list_file_names. The result of b2_list_file_names will contain only "upload". The result of b2_list_file_versions may have both.
    Size            int64             `json:"size"`            // The number of bytes in the file.
    UploadTimestamp Timestamp         `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded.
    SHA1            string            `json:"contentSha1"`     // The SHA1 of the bytes stored in the file.
    ContentType     string            `json:"contentType"`     // The MIME type of the file.
    Info            map[string]string `json:"fileInfo"`        // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
}

// AuthorizeAccountResponse is as returned from the b2_authorize_account call
type AuthorizeAccountResponse struct {
    AccountID          string `json:"accountId"`          // The identifier for the account.
    AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
    APIURL             string `json:"apiUrl"`             // The base URL to use for all API calls except for uploading and downloading files.
    DownloadURL        string `json:"downloadUrl"`        // The base URL to use for downloading files.
}

// ListBucketsResponse is as returned from the b2_list_buckets call
type ListBucketsResponse struct {
    Buckets []Bucket `json:"buckets"`
}

// ListFileNamesRequest is as passed to b2_list_file_names or b2_list_file_versions
type ListFileNamesRequest struct {
    BucketID      string `json:"bucketId"`                // required - The bucket to look for file names in.
    StartFileName string `json:"startFileName,omitempty"` // optional - The first file name to return. If there is a file with this name, it will be returned in the list. If not, the first file name after this name will be returned.
    MaxFileCount  int    `json:"maxFileCount,omitempty"`  // optional - The maximum number of files to return from this call. The default value is 100, and the maximum allowed is 1000.
    StartFileID   string `json:"startFileId,omitempty"`   // optional - What to pass in to startFileId for the next search to continue where this one left off.
}

// ListFileNamesResponse is as received from b2_list_file_names or b2_list_file_versions
type ListFileNamesResponse struct {
    Files        []File  `json:"files"`        // An array of objects, each one describing one file.
    NextFileName *string `json:"nextFileName"` // What to pass in to startFileName for the next search to continue where this one left off, or null if there are no more files.
    NextFileID   *string `json:"nextFileId"`   // What to pass in to startFileId for the next search to continue where this one left off, or null if there are no more files.
}

// GetUploadURLRequest is passed to b2_get_upload_url
type GetUploadURLRequest struct {
    BucketID string `json:"bucketId"` // The ID of the bucket that you want to upload to.
}

// GetUploadURLResponse is received from b2_get_upload_url
type GetUploadURLResponse struct {
    BucketID           string `json:"bucketId"`           // The unique ID of the bucket.
    UploadURL          string `json:"uploadUrl"`          // The URL that can be used to upload files to this bucket, see b2_upload_file.
    AuthorizationToken string `json:"authorizationToken"` // The authorizationToken that must be used when uploading files to this bucket, see b2_upload_file.
}

// FileInfo is received from b2_upload_file, b2_get_file_info and b2_finish_large_file
type FileInfo struct {
    ID              string            `json:"fileId"`          // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
    Name            string            `json:"fileName"`        // The name of this file, which can be used with b2_download_file_by_name.
    Action          string            `json:"action"`          // Either "upload" or "hide". "upload" means a file that was uploaded to B2 Cloud Storage. "hide" means a file version marking the file as hidden, so that it will not show up in b2_list_file_names. The result of b2_list_file_names will contain only "upload". The result of b2_list_file_versions may have both.
    AccountID       string            `json:"accountId"`       // Your account ID.
    BucketID        string            `json:"bucketId"`        // The bucket that the file is in.
    Size            int64             `json:"contentLength"`   // The number of bytes stored in the file.
    UploadTimestamp Timestamp         `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded.
    SHA1            string            `json:"contentSha1"`     // The SHA1 of the bytes stored in the file.
    ContentType     string            `json:"contentType"`     // The MIME type of the file.
    Info            map[string]string `json:"fileInfo"`        // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
}

// CreateBucketRequest is used to create a bucket
type CreateBucketRequest struct {
    AccountID string `json:"accountId"`
    Name      string `json:"bucketName"`
    Type      string `json:"bucketType"`
}

// DeleteBucketRequest is used to delete a bucket
type DeleteBucketRequest struct {
    ID        string `json:"bucketId"`
    AccountID string `json:"accountId"`
}

// DeleteFileRequest is used to delete a file version
type DeleteFileRequest struct {
    ID   string `json:"fileId"`   // The ID of the file, as returned by b2_upload_file, b2_list_file_names, or b2_list_file_versions.
    Name string `json:"fileName"` // The name of this file.
}

// HideFileRequest is used to hide a file
type HideFileRequest struct {
    BucketID string `json:"bucketId"` // The bucket containing the file to hide.
    Name     string `json:"fileName"` // The name of the file to hide.
}

// GetFileInfoRequest is used to return a FileInfo struct with b2_get_file_info
type GetFileInfoRequest struct {
    ID string `json:"fileId"` // The ID of the file, as returned by b2_upload_file, b2_list_file_names, or b2_list_file_versions.
}

// StartLargeFileRequest (b2_start_large_file) prepares for uploading the parts of a large file.
//
// If the original source of the file being uploaded has a last
// modified time concept, Backblaze recommends using
// src_last_modified_millis as the name, and a string holding the base
// 10 number of milliseconds since midnight, January 1, 1970
// UTC. This fits in a 64 bit integer such as the type "long" in the
// programming language Java. It is intended to be compatible with
// Java's time long. For example, it can be passed directly into the
// Java call Date.setTime(long time).
//
// If the caller knows the SHA1 of the entire large file being
// uploaded, Backblaze recommends using large_file_sha1 as the name,
// and a 40 byte hex string representing the SHA1.
//
// Example: { "src_last_modified_millis" : "1452802803026", "large_file_sha1" : "a3195dc1e7b46a2ff5da4b3c179175b75671e80d", "color": "blue" }
type StartLargeFileRequest struct {
    BucketID    string            `json:"bucketId"`    // The ID of the bucket that the file will go in.
    Name        string            `json:"fileName"`    // The name of the file. See Files for requirements on file names.
    ContentType string            `json:"contentType"` // The MIME type of the content of the file, which will be returned in the Content-Type header when downloading the file. Use the Content-Type b2/x-auto to automatically set the stored Content-Type post upload. In the case where a file extension is absent or the lookup fails, the Content-Type is set to application/octet-stream.
    Info        map[string]string `json:"fileInfo"`    // A JSON object holding the name/value pairs for the custom file info.
}

// StartLargeFileResponse is the response to StartLargeFileRequest
type StartLargeFileResponse struct {
    ID              string            `json:"fileId"`          // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
    Name            string            `json:"fileName"`        // The name of this file, which can be used with b2_download_file_by_name.
    AccountID       string            `json:"accountId"`       // The identifier for the account.
    BucketID        string            `json:"bucketId"`        // The unique ID of the bucket.
    ContentType     string            `json:"contentType"`     // The MIME type of the file.
    Info            map[string]string `json:"fileInfo"`        // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
    UploadTimestamp Timestamp         `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded.
}

// GetUploadPartURLRequest is passed to b2_get_upload_part_url
type GetUploadPartURLRequest struct {
    ID string `json:"fileId"` // The unique identifier of the file being uploaded.
}

// GetUploadPartURLResponse is received from b2_get_upload_part_url
type GetUploadPartURLResponse struct {
    ID                 string `json:"fileId"`             // The unique identifier of the file being uploaded.
    UploadURL          string `json:"uploadUrl"`          // The URL that can be used to upload parts of this file, see b2_upload_part.
    AuthorizationToken string `json:"authorizationToken"` // The authorizationToken that must be used when uploading parts of this file, see b2_upload_part.
}

// UploadPartResponse is the response to b2_upload_part
type UploadPartResponse struct {
    ID         string `json:"fileId"`        // The unique identifier of the file being uploaded.
    PartNumber int64  `json:"partNumber"`    // Which part this is (starting from 1)
    Size       int64  `json:"contentLength"` // The number of bytes stored in the file.
    SHA1       string `json:"contentSha1"`   // The SHA1 of the bytes stored in the file.
}

// FinishLargeFileRequest is passed to b2_finish_large_file
//
// The response is a FileInfo object (with extra AccountID and BucketID fields which we ignore).
//
// Large files do not have a SHA1 checksum. The value will always be "none".
type FinishLargeFileRequest struct {
    ID    string   `json:"fileId"`        // The unique identifier of the file being uploaded.
    SHA1s []string `json:"partSha1Array"` // A JSON array of hex SHA1 checksums of the parts of the large file. This is a double-check that the right parts were uploaded in the right order, and that none were missed. Note that the part numbers start at 1, and the SHA1 of part 1 is the first string in the array, at index 0.
}

// CancelLargeFileRequest is passed to b2_cancel_large_file
//
// The response is a CancelLargeFileResponse
type CancelLargeFileRequest struct {
    ID string `json:"fileId"` // The unique identifier of the file being uploaded.
}

// CancelLargeFileResponse is the response to CancelLargeFileRequest
type CancelLargeFileResponse struct {
    ID        string `json:"fileId"`    // The unique identifier of the file being uploaded.
    Name      string `json:"fileName"`  // The name of this file.
    AccountID string `json:"accountId"` // The identifier for the account.
    BucketID  string `json:"bucketId"`  // The unique ID of the bucket.
}
b2/api/types_test.go (new file, 87 lines)
@@ -0,0 +1,87 @@
package api_test

import (
    "testing"
    "time"

    "github.com/ncw/rclone/b2/api"
    "github.com/ncw/rclone/fstest"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

var (
    emptyT api.Timestamp
    t0     = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123456789Z"))
    t0r    = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123000000Z"))
    t1     = api.Timestamp(fstest.Time("2001-02-03T04:05:06.123000000Z"))
)

func TestTimestampMarshalJSON(t *testing.T) {
    resB, err := t0.MarshalJSON()
    res := string(resB)
    require.NoError(t, err)
    assert.Equal(t, "3661123", res)

    resB, err = t1.MarshalJSON()
    res = string(resB)
    require.NoError(t, err)
    assert.Equal(t, "981173106123", res)
}

func TestTimestampUnmarshalJSON(t *testing.T) {
    var tActual api.Timestamp
    err := tActual.UnmarshalJSON([]byte("981173106123"))
    require.NoError(t, err)
    assert.Equal(t, (time.Time)(t1), (time.Time)(tActual))
}

func TestTimestampAddVersion(t *testing.T) {
    for _, test := range []struct {
        t        api.Timestamp
        in       string
        expected string
    }{
        {t0, "potato.txt", "potato-v1970-01-01-010101-123.txt"},
        {t1, "potato", "potato-v2001-02-03-040506-123"},
        {t1, "", "-v2001-02-03-040506-123"},
    } {
        actual := test.t.AddVersion(test.in)
        assert.Equal(t, test.expected, actual, test.in)
    }
}

func TestTimestampRemoveVersion(t *testing.T) {
    for _, test := range []struct {
        in             string
        expectedT      api.Timestamp
        expectedRemote string
    }{
        {"potato.txt", emptyT, "potato.txt"},
        {"potato-v1970-01-01-010101-123.txt", t0r, "potato.txt"},
        {"potato-v2001-02-03-040506-123", t1, "potato"},
        {"-v2001-02-03-040506-123", t1, ""},
        {"potato-v2A01-02-03-040506-123", emptyT, "potato-v2A01-02-03-040506-123"},
        {"potato-v2001-02-03-040506=123", emptyT, "potato-v2001-02-03-040506=123"},
    } {
        actualT, actualRemote := api.RemoveVersion(test.in)
        assert.Equal(t, test.expectedT, actualT, test.in)
        assert.Equal(t, test.expectedRemote, actualRemote, test.in)
    }
}

func TestTimestampIsZero(t *testing.T) {
    assert.True(t, emptyT.IsZero())
    assert.False(t, t0.IsZero())
    assert.False(t, t1.IsZero())
}

func TestTimestampEqual(t *testing.T) {
    assert.False(t, emptyT.Equal(emptyT))
    assert.False(t, t0.Equal(emptyT))
    assert.False(t, emptyT.Equal(t0))
    assert.False(t, t0.Equal(t1))
    assert.False(t, t1.Equal(t0))
    assert.True(t, t0.Equal(t0))
    assert.True(t, t1.Equal(t1))
}
b2/b2_internal_test.go (new file, 284 lines)
@@ -0,0 +1,284 @@
package b2

import (
    "testing"
    "time"

    "github.com/ncw/rclone/fstest"
    "github.com/stretchr/testify/assert"
)

// Test b2 string encoding
// https://www.backblaze.com/b2/docs/string_encoding.html

var encodeTest = []struct {
    fullyEncoded     string
    minimallyEncoded string
    plainText        string
}{
    {fullyEncoded: "%20", minimallyEncoded: "+", plainText: " "},
    {fullyEncoded: "%21", minimallyEncoded: "!", plainText: "!"},
    {fullyEncoded: "%22", minimallyEncoded: "%22", plainText: "\""},
    {fullyEncoded: "%23", minimallyEncoded: "%23", plainText: "#"},
    {fullyEncoded: "%24", minimallyEncoded: "$", plainText: "$"},
    {fullyEncoded: "%25", minimallyEncoded: "%25", plainText: "%"},
    {fullyEncoded: "%26", minimallyEncoded: "%26", plainText: "&"},
    {fullyEncoded: "%27", minimallyEncoded: "'", plainText: "'"},
    {fullyEncoded: "%28", minimallyEncoded: "(", plainText: "("},
    {fullyEncoded: "%29", minimallyEncoded: ")", plainText: ")"},
    {fullyEncoded: "%2A", minimallyEncoded: "*", plainText: "*"},
    {fullyEncoded: "%2B", minimallyEncoded: "%2B", plainText: "+"},
    {fullyEncoded: "%2C", minimallyEncoded: "%2C", plainText: ","},
    {fullyEncoded: "%2D", minimallyEncoded: "-", plainText: "-"},
    {fullyEncoded: "%2E", minimallyEncoded: ".", plainText: "."},
    {fullyEncoded: "%2F", minimallyEncoded: "/", plainText: "/"},
    {fullyEncoded: "%30", minimallyEncoded: "0", plainText: "0"},
    {fullyEncoded: "%31", minimallyEncoded: "1", plainText: "1"},
    {fullyEncoded: "%32", minimallyEncoded: "2", plainText: "2"},
    {fullyEncoded: "%33", minimallyEncoded: "3", plainText: "3"},
    {fullyEncoded: "%34", minimallyEncoded: "4", plainText: "4"},
    {fullyEncoded: "%35", minimallyEncoded: "5", plainText: "5"},
    {fullyEncoded: "%36", minimallyEncoded: "6", plainText: "6"},
    {fullyEncoded: "%37", minimallyEncoded: "7", plainText: "7"},
    {fullyEncoded: "%38", minimallyEncoded: "8", plainText: "8"},
    {fullyEncoded: "%39", minimallyEncoded: "9", plainText: "9"},
    {fullyEncoded: "%3A", minimallyEncoded: ":", plainText: ":"},
    {fullyEncoded: "%3B", minimallyEncoded: ";", plainText: ";"},
    {fullyEncoded: "%3C", minimallyEncoded: "%3C", plainText: "<"},
    {fullyEncoded: "%3D", minimallyEncoded: "=", plainText: "="},
    {fullyEncoded: "%3E", minimallyEncoded: "%3E", plainText: ">"},
    {fullyEncoded: "%3F", minimallyEncoded: "%3F", plainText: "?"},
    {fullyEncoded: "%40", minimallyEncoded: "@", plainText: "@"},
    {fullyEncoded: "%41", minimallyEncoded: "A", plainText: "A"},
    {fullyEncoded: "%42", minimallyEncoded: "B", plainText: "B"},
    {fullyEncoded: "%43", minimallyEncoded: "C", plainText: "C"},
    {fullyEncoded: "%44", minimallyEncoded: "D", plainText: "D"},
    {fullyEncoded: "%45", minimallyEncoded: "E", plainText: "E"},
    {fullyEncoded: "%46", minimallyEncoded: "F", plainText: "F"},
    {fullyEncoded: "%47", minimallyEncoded: "G", plainText: "G"},
    {fullyEncoded: "%48", minimallyEncoded: "H", plainText: "H"},
    {fullyEncoded: "%49", minimallyEncoded: "I", plainText: "I"},
    {fullyEncoded: "%4A", minimallyEncoded: "J", plainText: "J"},
    {fullyEncoded: "%4B", minimallyEncoded: "K", plainText: "K"},
    {fullyEncoded: "%4C", minimallyEncoded: "L", plainText: "L"},
    {fullyEncoded: "%4D", minimallyEncoded: "M", plainText: "M"},
    {fullyEncoded: "%4E", minimallyEncoded: "N", plainText: "N"},
    {fullyEncoded: "%4F", minimallyEncoded: "O", plainText: "O"},
    {fullyEncoded: "%50", minimallyEncoded: "P", plainText: "P"},
    {fullyEncoded: "%51", minimallyEncoded: "Q", plainText: "Q"},
    {fullyEncoded: "%52", minimallyEncoded: "R", plainText: "R"},
    {fullyEncoded: "%53", minimallyEncoded: "S", plainText: "S"},
    {fullyEncoded: "%54", minimallyEncoded: "T", plainText: "T"},
    {fullyEncoded: "%55", minimallyEncoded: "U", plainText: "U"},
    {fullyEncoded: "%56", minimallyEncoded: "V", plainText: "V"},
    {fullyEncoded: "%57", minimallyEncoded: "W", plainText: "W"},
    {fullyEncoded: "%58", minimallyEncoded: "X", plainText: "X"},
    {fullyEncoded: "%59", minimallyEncoded: "Y", plainText: "Y"},
    {fullyEncoded: "%5A", minimallyEncoded: "Z", plainText: "Z"},
    {fullyEncoded: "%5B", minimallyEncoded: "%5B", plainText: "["},
    {fullyEncoded: "%5C", minimallyEncoded: "%5C", plainText: "\\"},
    {fullyEncoded: "%5D", minimallyEncoded: "%5D", plainText: "]"},
    {fullyEncoded: "%5E", minimallyEncoded: "%5E", plainText: "^"},
    {fullyEncoded: "%5F", minimallyEncoded: "_", plainText: "_"},
    {fullyEncoded: "%60", minimallyEncoded: "%60", plainText: "`"},
    {fullyEncoded: "%61", minimallyEncoded: "a", plainText: "a"},
    {fullyEncoded: "%62", minimallyEncoded: "b", plainText: "b"},
    {fullyEncoded: "%63", minimallyEncoded: "c", plainText: "c"},
    {fullyEncoded: "%64", minimallyEncoded: "d", plainText: "d"},
    {fullyEncoded: "%65", minimallyEncoded: "e", plainText: "e"},
    {fullyEncoded: "%66", minimallyEncoded: "f", plainText: "f"},
    {fullyEncoded: "%67", minimallyEncoded: "g", plainText: "g"},
    {fullyEncoded: "%68", minimallyEncoded: "h", plainText: "h"},
    {fullyEncoded: "%69", minimallyEncoded: "i", plainText: "i"},
    {fullyEncoded: "%6A", minimallyEncoded: "j", plainText: "j"},
    {fullyEncoded: "%6B", minimallyEncoded: "k", plainText: "k"},
    {fullyEncoded: "%6C", minimallyEncoded: "l", plainText: "l"},
    {fullyEncoded: "%6D", minimallyEncoded: "m", plainText: "m"},
    {fullyEncoded: "%6E", minimallyEncoded: "n", plainText: "n"},
    {fullyEncoded: "%6F", minimallyEncoded: "o", plainText: "o"},
    {fullyEncoded: "%70", minimallyEncoded: "p", plainText: "p"},
    {fullyEncoded: "%71", minimallyEncoded: "q", plainText: "q"},
    {fullyEncoded: "%72", minimallyEncoded: "r", plainText: "r"},
    {fullyEncoded: "%73", minimallyEncoded: "s", plainText: "s"},
    {fullyEncoded: "%74", minimallyEncoded: "t", plainText: "t"},
    {fullyEncoded: "%75", minimallyEncoded: "u", plainText: "u"},
    {fullyEncoded: "%76", minimallyEncoded: "v", plainText: "v"},
    {fullyEncoded: "%77", minimallyEncoded: "w", plainText: "w"},
    {fullyEncoded: "%78", minimallyEncoded: "x", plainText: "x"},
    {fullyEncoded: "%79", minimallyEncoded: "y", plainText: "y"},
    {fullyEncoded: "%7A", minimallyEncoded: "z", plainText: "z"},
    {fullyEncoded: "%7B", minimallyEncoded: "%7B", plainText: "{"},
    {fullyEncoded: "%7C", minimallyEncoded: "%7C", plainText: "|"},
    {fullyEncoded: "%7D", minimallyEncoded: "%7D", plainText: "}"},
    {fullyEncoded: "%7E", minimallyEncoded: "~", plainText: "~"},
    {fullyEncoded: "%7F", minimallyEncoded: "%7F", plainText: "\u007f"},
    {fullyEncoded: "%E8%87%AA%E7%94%B1", minimallyEncoded: "%E8%87%AA%E7%94%B1", plainText: "自由"},
    {fullyEncoded: "%F0%90%90%80", minimallyEncoded: "%F0%90%90%80", plainText: "𐐀"},
}
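
For reference, here is a minimal sketch of an encoder consistent with the table above (not the backend's actual urlEncode, which lives in b2.go and is not shown here): keep B2's unreserved characters and percent-encode every other byte of the UTF-8 form, producing one of the two accepted forms per character.

package main

import (
    "bytes"
    "fmt"
    "strings"
)

// urlEncodeSketch keeps B2's unreserved characters and percent-encodes
// every other byte, including each byte of a multi-byte UTF-8 rune.
func urlEncodeSketch(in string) string {
    var out bytes.Buffer
    for _, b := range []byte(in) {
        if ('a' <= b && b <= 'z') || ('A' <= b && b <= 'Z') || ('0' <= b && b <= '9') ||
            strings.ContainsRune("._-/~!$'()*;=:@", rune(b)) {
            out.WriteByte(b)
        } else {
            fmt.Fprintf(&out, "%%%02X", b)
        }
    }
    return out.String()
}
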

func TestUrlEncode(t *testing.T) {
    for _, test := range encodeTest {
        got := urlEncode(test.plainText)
        if got != test.minimallyEncoded && got != test.fullyEncoded {
            t.Errorf("urlEncode(%q) got %q wanted %q or %q", test.plainText, got, test.minimallyEncoded, test.fullyEncoded)
        }
    }
}

func TestTimeString(t *testing.T) {
    for _, test := range []struct {
        in   time.Time
        want string
    }{
        {fstest.Time("1970-01-01T00:00:00.000000000Z"), "0"},
        {fstest.Time("2001-02-03T04:05:10.123123123Z"), "981173110123"},
        {fstest.Time("2001-02-03T05:05:10.123123123+01:00"), "981173110123"},
    } {
        got := timeString(test.in)
        if test.want != got {
            t.Errorf("%v: want %v got %v", test.in, test.want, got)
        }
    }
}

func TestParseTimeString(t *testing.T) {
    for _, test := range []struct {
        in        string
        want      time.Time
        wantError string
    }{
        {"0", fstest.Time("1970-01-01T00:00:00.000000000Z"), ""},
        {"981173110123", fstest.Time("2001-02-03T04:05:10.123000000Z"), ""},
        {"", time.Time{}, ""},
        {"potato", time.Time{}, `strconv.ParseInt: parsing "potato": invalid syntax`},
    } {
        o := Object{}
        err := o.parseTimeString(test.in)
        got := o.modTime
        var gotError string
        if err != nil {
            gotError = err.Error()
        }
        if test.want != got {
            t.Errorf("%v: want %v got %v", test.in, test.want, got)
        }
        if test.wantError != gotError {
            t.Errorf("%v: want error %v got error %v", test.in, test.wantError, gotError)
        }
    }
}

func TestSendDir(t *testing.T) {
    for _, test := range []struct {
        lastDir    string
        remote     string
        level      int
        dirNames   []string
        newLastDir string
    }{
        {
            lastDir:    "",
            remote:     "test.txt",
            level:      100,
            dirNames:   nil,
            newLastDir: "",
        },
        {
            lastDir:    "",
            remote:     "potato/test.txt",
            level:      100,
            dirNames:   []string{"potato"},
            newLastDir: "potato",
        },
        {
            lastDir:    "potato",
            remote:     "potato/test.txt",
            level:      100,
            dirNames:   nil,
            newLastDir: "potato",
        },
        {
            lastDir:    "",
            remote:     "potato/sausage/test.txt",
            level:      100,
            dirNames:   []string{"potato", "potato/sausage"},
            newLastDir: "potato/sausage",
        },
        {
            lastDir:    "potato",
            remote:     "potato/sausage/test.txt",
            level:      100,
            dirNames:   []string{"potato/sausage"},
            newLastDir: "potato/sausage",
        },
        {
            lastDir:    "potato/sausage",
            remote:     "potato/sausage/test.txt",
            level:      100,
            dirNames:   nil,
            newLastDir: "potato/sausage",
        },
        {
            lastDir:    "",
            remote:     "a/b/c/d/e/f.txt",
            level:      100,
            dirNames:   []string{"a", "a/b", "a/b/c", "a/b/c/d", "a/b/c/d/e"},
            newLastDir: "a/b/c/d/e",
        },
        {
            lastDir:    "a/b/c/d/e",
            remote:     "a/b/c/d/E/f.txt",
            level:      100,
            dirNames:   []string{"a/b/c/d/E"},
            newLastDir: "a/b/c/d/E",
        },
        {
            lastDir:    "a/b/c/d/e",
            remote:     "a/b/C/D/E/f.txt",
            level:      100,
            dirNames:   []string{"a/b/C", "a/b/C/D", "a/b/C/D/E"},
            newLastDir: "a/b/C/D/E",
        },
        {
            lastDir:    "a/b/c",
            remote:     "a/b/c/d/e/f.txt",
            level:      100,
            dirNames:   []string{"a/b/c/d", "a/b/c/d/e"},
            newLastDir: "a/b/c/d/e",
        },
        {
            lastDir:    "",
            remote:     "a/b/c/d/e/f.txt",
            level:      1,
            dirNames:   []string{"a"},
            newLastDir: "a/b/c/d/e",
        },
        {
            lastDir:    "a/b/c",
            remote:     "a/b/c/d/e/f.txt",
            level:      1,
            dirNames:   nil,
            newLastDir: "a/b/c/d/e",
        },
        {
            lastDir:    "",
            remote:     "a/b/c/d/e/f.txt",
            level:      3,
            dirNames:   []string{"a", "a/b", "a/b/c"},
            newLastDir: "a/b/c/d/e",
        },
        {
            lastDir:    "a/b/C/D/E",
            remote:     "a/b/c/d/e/f.txt",
            level:      3,
            dirNames:   []string{"a/b/c"},
            newLastDir: "a/b/c/d/e",
        },
    } {
        dirNames, newLastDir := sendDir(test.lastDir, test.remote, test.level)
        assert.Equal(t, test.dirNames, dirNames, "dirNames fail for sendDir(%q,%q,%v)", test.lastDir, test.remote, test.level)
        assert.Equal(t, test.newLastDir, newLastDir, "newLastDir fail for sendDir(%q,%q,%v)", test.lastDir, test.remote, test.level)
    }
}
b2/b2_test.go (new file, 58 lines)
@@ -0,0 +1,58 @@
// Test B2 filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package b2_test

import (
    "testing"

    "github.com/ncw/rclone/b2"
    "github.com/ncw/rclone/fs"
    "github.com/ncw/rclone/fstest/fstests"
)

func TestSetup(t *testing.T) {
    fstests.NilObject = fs.Object((*b2.Object)(nil))
    fstests.RemoteName = "TestB2:"
}

// Generic tests for the Fs
func TestInit(t *testing.T)                 { fstests.TestInit(t) }
func TestFsString(t *testing.T)             { fstests.TestFsString(t) }
func TestFsRmdirEmpty(t *testing.T)         { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T)      { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T)              { fstests.TestFsMkdir(t) }
func TestFsListEmpty(t *testing.T)          { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T)       { fstests.TestFsListDirEmpty(t) }
func TestFsNewObjectNotFound(t *testing.T)  { fstests.TestFsNewObjectNotFound(t) }
func TestFsPutFile1(t *testing.T)           { fstests.TestFsPutFile1(t) }
func TestFsPutFile2(t *testing.T)           { fstests.TestFsPutFile2(t) }
func TestFsUpdateFile1(t *testing.T)        { fstests.TestFsUpdateFile1(t) }
func TestFsListDirFile2(t *testing.T)       { fstests.TestFsListDirFile2(t) }
func TestFsListDirRoot(t *testing.T)        { fstests.TestFsListDirRoot(t) }
func TestFsListSubdir(t *testing.T)         { fstests.TestFsListSubdir(t) }
func TestFsListLevel2(t *testing.T)         { fstests.TestFsListLevel2(t) }
func TestFsListFile1(t *testing.T)          { fstests.TestFsListFile1(t) }
func TestFsNewObject(t *testing.T)          { fstests.TestFsNewObject(t) }
func TestFsListFile1and2(t *testing.T)      { fstests.TestFsListFile1and2(t) }
func TestFsCopy(t *testing.T)               { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T)               { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T)            { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T)          { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T)          { fstests.TestFsPrecision(t) }
func TestObjectString(t *testing.T)         { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T)             { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T)         { fstests.TestObjectRemote(t) }
func TestObjectHashes(t *testing.T)         { fstests.TestObjectHashes(t) }
func TestObjectModTime(t *testing.T)        { fstests.TestObjectModTime(t) }
func TestObjectSetModTime(t *testing.T)     { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T)           { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T)           { fstests.TestObjectOpen(t) }
func TestObjectUpdate(t *testing.T)         { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T)       { fstests.TestObjectStorable(t) }
func TestFsIsFile(t *testing.T)             { fstests.TestFsIsFile(t) }
func TestFsIsFileNotFound(t *testing.T)     { fstests.TestFsIsFileNotFound(t) }
func TestObjectRemove(t *testing.T)         { fstests.TestObjectRemove(t) }
func TestObjectPurge(t *testing.T)          { fstests.TestObjectPurge(t) }
func TestFinalise(t *testing.T)             { fstests.TestFinalise(t) }
||||
301
b2/upload.go
Normal file
301
b2/upload.go
Normal file
@@ -0,0 +1,301 @@
|
||||
// Upload large files for b2
|
||||
//
|
||||
// Docs - https://www.backblaze.com/b2/docs/large_files.html
|
||||
|
||||
package b2
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/sha1"
|
||||
"fmt"
|
||||
"io"
|
||||
"sync"
|
||||
|
||||
"github.com/ncw/rclone/b2/api"
|
||||
"github.com/ncw/rclone/fs"
|
||||
"github.com/ncw/rclone/rest"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
// largeUpload is used to control the upload of large files which need chunking
|
||||
type largeUpload struct {
|
||||
f *Fs // parent Fs
|
||||
o *Object // object being uploaded
|
||||
in io.Reader // read the data from here
|
||||
id string // ID of the file being uploaded
|
||||
size int64 // total size
|
||||
parts int64 // calculated number of parts
|
||||
sha1s []string // slice of SHA1s for each part
|
||||
uploadMu sync.Mutex // lock for upload variable
|
||||
uploads []*api.GetUploadPartURLResponse // result of get upload URL calls
|
||||
}
|
||||
|
||||
// newLargeUpload starts an upload of object o from in with metadata in src
|
||||
func (f *Fs) newLargeUpload(o *Object, in io.Reader, src fs.ObjectInfo) (up *largeUpload, err error) {
|
||||
remote := o.remote
|
||||
size := src.Size()
|
||||
parts := size / int64(chunkSize)
|
||||
if size%int64(chunkSize) != 0 {
|
||||
parts++
|
||||
}
|
||||
if parts > maxParts {
|
||||
return nil, errors.Errorf("%q too big (%d bytes) makes too many parts %d > %d - increase --b2-chunk-size", remote, size, parts, maxParts)
|
||||
}
|
||||
modTime := src.ModTime()
|
||||
opts := rest.Opts{
|
||||
Method: "POST",
|
||||
Path: "/b2_start_large_file",
|
||||
}
|
||||
bucketID, err := f.getBucketID()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
var request = api.StartLargeFileRequest{
|
||||
BucketID: bucketID,
|
||||
Name: o.fs.root + remote,
|
||||
ContentType: fs.MimeType(src),
|
||||
Info: map[string]string{
|
||||
timeKey: timeString(modTime),
|
||||
},
|
||||
}
|
||||
// Set the SHA1 if known
|
||||
if calculatedSha1, err := src.Hash(fs.HashSHA1); err == nil && calculatedSha1 != "" {
|
||||
request.Info[sha1Key] = calculatedSha1
|
||||
}
|
||||
var response api.StartLargeFileResponse
|
||||
err = f.pacer.Call(func() (bool, error) {
|
||||
resp, err := f.srv.CallJSON(&opts, &request, &response)
|
||||
return f.shouldRetry(resp, err)
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
up = &largeUpload{
|
||||
f: f,
|
||||
o: o,
|
||||
in: in,
|
||||
id: response.ID,
|
||||
size: size,
|
||||
parts: parts,
|
||||
sha1s: make([]string, parts),
|
||||
}
|
||||
return up, nil
|
||||
}
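
The part count above is a ceiling division of the total size by the chunk size. A tiny self-contained illustration (the 96M figure is just an example chunk size, not necessarily the backend's default):

package main

import "fmt"

// ceilParts mirrors the calculation in newLargeUpload: the number of
// chunkSize-byte parts needed to cover size bytes.
func ceilParts(size, chunkSize int64) int64 {
    parts := size / chunkSize
    if size%chunkSize != 0 {
        parts++
    }
    return parts
}

func main() {
    const chunk = 96 * 1024 * 1024          // e.g. a 96M chunk size
    fmt.Println(ceilParts(chunk, chunk))    // 1
    fmt.Println(ceilParts(chunk+1, chunk))  // 2
    fmt.Println(ceilParts(10*chunk, chunk)) // 10
}
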

// getUploadURL returns the upload info with the UploadURL and the AuthorizationToken
//
// This should be returned with returnUploadURL when finished
func (up *largeUpload) getUploadURL() (upload *api.GetUploadPartURLResponse, err error) {
    up.uploadMu.Lock()
    defer up.uploadMu.Unlock()
    if len(up.uploads) == 0 {
        opts := rest.Opts{
            Method: "POST",
            Path:   "/b2_get_upload_part_url",
        }
        var request = api.GetUploadPartURLRequest{
            ID: up.id,
        }
        err := up.f.pacer.Call(func() (bool, error) {
            resp, err := up.f.srv.CallJSON(&opts, &request, &upload)
            return up.f.shouldRetry(resp, err)
        })
        if err != nil {
            return nil, errors.Wrap(err, "failed to get upload URL")
        }
    } else {
        upload, up.uploads = up.uploads[0], up.uploads[1:]
    }
    return upload, nil
}

// returnUploadURL returns the UploadURL to the cache
func (up *largeUpload) returnUploadURL(upload *api.GetUploadPartURLResponse) {
    if upload == nil {
        return
    }
    up.uploadMu.Lock()
    up.uploads = append(up.uploads, upload)
    up.uploadMu.Unlock()
}

// clearUploadURL clears the current UploadURL and the AuthorizationToken
func (up *largeUpload) clearUploadURL() {
    up.uploadMu.Lock()
    up.uploads = nil
    up.uploadMu.Unlock()
}

// Transfer a chunk
func (up *largeUpload) transferChunk(part int64, body []byte) error {
    calculatedSHA1 := fmt.Sprintf("%x", sha1.Sum(body))
    up.sha1s[part-1] = calculatedSHA1
    size := int64(len(body))
    err := up.f.pacer.Call(func() (bool, error) {
        fs.Debug(up.o, "Sending chunk %d length %d", part, len(body))

        // Get upload URL
        upload, err := up.getUploadURL()
        if err != nil {
            return false, err
        }

        // Authorization
        //
        // An upload authorization token, from b2_get_upload_part_url.
        //
        // X-Bz-Part-Number
        //
        // A number from 1 to 10000. The parts uploaded for one file
        // must have contiguous numbers, starting with 1.
        //
        // Content-Length
        //
        // The number of bytes in the file being uploaded. Note that
        // this header is required; you cannot leave it out and just
        // use chunked encoding. The minimum size of every part but
        // the last one is 100MB.
        //
        // X-Bz-Content-Sha1
        //
        // The SHA1 checksum of this part of the file. B2 will
        // check this when the part is uploaded, to make sure that the
        // data arrived correctly. The same SHA1 checksum must be
        // passed to b2_finish_large_file.
        opts := rest.Opts{
            Method:   "POST",
            Absolute: true,
            Path:     upload.UploadURL,
            Body:     fs.AccountPart(up.o, bytes.NewBuffer(body)),
            ExtraHeaders: map[string]string{
                "Authorization":    upload.AuthorizationToken,
                "X-Bz-Part-Number": fmt.Sprintf("%d", part),
                sha1Header:         calculatedSHA1,
            },
            ContentLength: &size,
        }

        var response api.UploadPartResponse

        resp, err := up.f.srv.CallJSON(&opts, nil, &response)
        retry, err := up.f.shouldRetryNoReauth(resp, err)
        // On retryable error clear PartUploadURL
        if retry {
            fs.Debug(up.o, "Clearing part upload URL because of error: %v", err)
            upload = nil
        }
        up.returnUploadURL(upload)
        return retry, err
    })
    if err != nil {
        fs.Debug(up.o, "Error sending chunk %d: %v", part, err)
    } else {
        fs.Debug(up.o, "Done sending chunk %d", part)
    }
    return err
}

// finish closes off the large upload
func (up *largeUpload) finish() error {
    opts := rest.Opts{
        Method: "POST",
        Path:   "/b2_finish_large_file",
    }
    var request = api.FinishLargeFileRequest{
        ID:    up.id,
        SHA1s: up.sha1s,
    }
    var response api.FileInfo
    err := up.f.pacer.Call(func() (bool, error) {
        resp, err := up.f.srv.CallJSON(&opts, &request, &response)
        return up.f.shouldRetry(resp, err)
    })
    if err != nil {
        return err
    }
    return up.o.decodeMetaDataFileInfo(&response)
}

// cancel aborts the large upload
func (up *largeUpload) cancel() error {
    opts := rest.Opts{
        Method: "POST",
        Path:   "/b2_cancel_large_file",
    }
    var request = api.CancelLargeFileRequest{
        ID: up.id,
    }
    var response api.CancelLargeFileResponse
    err := up.f.pacer.Call(func() (bool, error) {
        resp, err := up.f.srv.CallJSON(&opts, &request, &response)
        return up.f.shouldRetry(resp, err)
    })
    return err
}

// Upload uploads the chunks from the input
func (up *largeUpload) Upload() error {
    fs.Debug(up.o, "Starting upload of large file in %d chunks (id %q)", up.parts, up.id)
    remaining := up.size
    errs := make(chan error, 1)
    var wg sync.WaitGroup
    var err error
    fs.AccountByPart(up.o) // Cancel whole file accounting before reading
outer:
    for part := int64(1); part <= up.parts; part++ {
        // Check any errors
        select {
        case err = <-errs:
            break outer
        default:
        }

        reqSize := remaining
        if reqSize >= int64(chunkSize) {
            reqSize = int64(chunkSize)
        }

        // Read the chunk
        buf := make([]byte, reqSize)
        _, err = io.ReadFull(up.in, buf)
        if err != nil {
            break outer
        }

        // Transfer the chunk
        // Get upload Token
        up.f.getUploadToken()
        wg.Add(1)
        go func(part int64, buf []byte) {
            defer up.f.returnUploadToken()
            defer wg.Done()
            err := up.transferChunk(part, buf)
            if err != nil {
                select {
                case errs <- err:
                default:
                }
            }
        }(part, buf)

        remaining -= reqSize
    }
    wg.Wait()
    if err == nil {
        select {
        case err = <-errs:
        default:
        }
    }
    if err != nil {
        fs.Debug(up.o, "Cancelling large file upload due to error: %v", err)
        cancelErr := up.cancel()
        if cancelErr != nil {
            fs.ErrorLog(up.o, "Failed to cancel large file upload: %v", cancelErr)
        }
        return err
    }
    fs.Debug(up.o, "Finishing large file upload")
    return up.finish()
}
|
||||
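Upload above fans the chunks out to goroutines, bounds concurrency with upload tokens, and funnels the first failure through a one-slot error channel. A stripped-down, self-contained sketch of that pattern (the work function and names here are stand-ins, not rclone code):

package main

import (
	"errors"
	"fmt"
	"sync"
)

// uploadChunks runs work(part) for parts 1..n with at most maxParallel
// goroutines in flight, keeping only the first error, as Upload does.
func uploadChunks(n int64, maxParallel int, work func(part int64) error) error {
	errs := make(chan error, 1) // first error wins, the rest are dropped
	tokens := make(chan struct{}, maxParallel)
	var wg sync.WaitGroup
	var err error
outer:
	for part := int64(1); part <= n; part++ {
		select {
		case err = <-errs:
			break outer // stop launching once something failed
		default:
		}
		tokens <- struct{}{} // acquire an upload token
		wg.Add(1)
		go func(part int64) {
			defer wg.Done()
			defer func() { <-tokens }() // return the token
			if e := work(part); e != nil {
				select {
				case errs <- e:
				default:
				}
			}
		}(part)
	}
	wg.Wait()
	if err == nil {
		select {
		case err = <-errs:
		default:
		}
	}
	return err
}

func main() {
	err := uploadChunks(10, 4, func(part int64) error {
		if part == 7 {
			return errors.New("boom")
		}
		return nil
	})
	fmt.Println(err)
}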
31
cmd/all/all.go
Normal file
@@ -0,0 +1,31 @@
// Package all imports all the commands
package all

import (
	// Active commands
	_ "github.com/ncw/rclone/cmd"
	_ "github.com/ncw/rclone/cmd/authorize"
	_ "github.com/ncw/rclone/cmd/cat"
	_ "github.com/ncw/rclone/cmd/check"
	_ "github.com/ncw/rclone/cmd/cleanup"
	_ "github.com/ncw/rclone/cmd/config"
	_ "github.com/ncw/rclone/cmd/copy"
	_ "github.com/ncw/rclone/cmd/dedupe"
	_ "github.com/ncw/rclone/cmd/delete"
	_ "github.com/ncw/rclone/cmd/genautocomplete"
	_ "github.com/ncw/rclone/cmd/gendocs"
	_ "github.com/ncw/rclone/cmd/ls"
	_ "github.com/ncw/rclone/cmd/lsd"
	_ "github.com/ncw/rclone/cmd/lsl"
	_ "github.com/ncw/rclone/cmd/md5sum"
	_ "github.com/ncw/rclone/cmd/memtest"
	_ "github.com/ncw/rclone/cmd/mkdir"
	_ "github.com/ncw/rclone/cmd/mount"
	_ "github.com/ncw/rclone/cmd/move"
	_ "github.com/ncw/rclone/cmd/purge"
	_ "github.com/ncw/rclone/cmd/rmdir"
	_ "github.com/ncw/rclone/cmd/sha1sum"
	_ "github.com/ncw/rclone/cmd/size"
	_ "github.com/ncw/rclone/cmd/sync"
	_ "github.com/ncw/rclone/cmd/version"
)
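This file works through Go's blank-import side effects: importing a package for side effects runs its init functions, which is where each command registers itself with the root command. A minimal sketch of why that is enough (the "hello" package and this main function are hypothetical; rclone's real main may differ):

package main

import (
	_ "github.com/ncw/rclone/cmd/hello" // hypothetical: its init() calls cmd.Root.AddCommand(...)

	"github.com/ncw/rclone/cmd"
)

func main() {
	// The hello command is available purely because of the blank import above.
	_ = cmd.Root.Execute()
}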
24
cmd/authorize/authorize.go
Normal file
@@ -0,0 +1,24 @@
package authorize

import (
	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(authorizeCmd)
}

var authorizeCmd = &cobra.Command{
	Use:   "authorize",
	Short: `Remote authorization.`,
	Long: `
Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 3, command, args)
		fs.Authorize(args)
	},
}
40
cmd/cat/cat.go
Normal file
@@ -0,0 +1,40 @@
package cat

import (
	"os"

	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(catCmd)
}

var catCmd = &cobra.Command{
	Use:   "cat remote:path",
	Short: `Concatenates any files and sends them to stdout.`,
	Long: `
rclone cat sends any files to standard output.

You can use it like this to output a single file

    rclone cat remote:path/to/file

Or like this to output any file in dir or subdirectories.

    rclone cat remote:path/to/dir

Or like this to output any .txt files in dir or subdirectories.

    rclone --include "*.txt" cat remote:path/to/dir
`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		fsrc := cmd.NewFsSrc(args)
		cmd.Run(false, command, func() error {
			return fs.Cat(fsrc, os.Stdout)
		})
	},
}
30
cmd/check/check.go
Normal file
@@ -0,0 +1,30 @@
package check

import (
	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(checkCmd)
}

var checkCmd = &cobra.Command{
	Use:   "check source:path dest:path",
	Short: `Checks the files in the source and destination match.`,
	Long: `
Checks the files in the source and destination match. It
compares sizes and MD5SUMs and prints a report of files which
don't match. It doesn't alter the source or destination.

` + "`" + `--size-only` + "`" + ` may be used to only compare the sizes, not the MD5SUMs.
`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(2, 2, command, args)
		fsrc, fdst := cmd.NewFsSrcDst(args)
		cmd.Run(false, command, func() error {
			return fs.Check(fdst, fsrc)
		})
	},
}
27
cmd/cleanup/cleanup.go
Normal file
@@ -0,0 +1,27 @@
package cleanup

import (
	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(cleanupCmd)
}

var cleanupCmd = &cobra.Command{
	Use:   "cleanup remote:path",
	Short: `Clean up the remote if possible`,
	Long: `
Clean up the remote if possible. Empty the trash or delete old file
versions. Not supported by all remotes.
`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		fsrc := cmd.NewFsSrc(args)
		cmd.Run(true, command, func() error {
			return fs.CleanUp(fsrc)
		})
	},
}
293
cmd/cmd.go
Normal file
@@ -0,0 +1,293 @@
// Package cmd implements the rclone command
//
// It is in a sub package so its internals can be re-used elsewhere
package cmd

// FIXME only attach the remote flags when using a remote???
// would probably mean bringing all the flags in to here? Or define some flagsets in fs...

import (
	"fmt"
	"log"
	"os"
	"path"
	"runtime"
	"runtime/pprof"
	"time"

	"github.com/spf13/cobra"
	"github.com/spf13/pflag"

	"github.com/ncw/rclone/fs"
)

// Globals
var (
	// Flags
	cpuProfile    = pflag.StringP("cpuprofile", "", "", "Write cpu profile to file")
	memProfile    = pflag.String("memprofile", "", "Write memory profile to file")
	statsInterval = pflag.DurationP("stats", "", time.Minute*1, "Interval to print stats (0 to disable)")
	version       bool
	logFile       = pflag.StringP("log-file", "", "", "Log everything to this file")
	retries       = pflag.IntP("retries", "", 3, "Retry operations this many times if they fail")
)

// Root is the main rclone command
var Root = &cobra.Command{
	Use:   "rclone",
	Short: "Sync files and directories to and from local and remote object stores - " + fs.Version,
	Long: `
Rclone is a command line program to sync files and directories to and
from various cloud storage systems, such as:

* Google Drive
* Amazon S3
* Openstack Swift / Rackspace cloud files / Memset Memstore
* Dropbox
* Google Cloud Storage
* Amazon Drive
* Microsoft One Drive
* Hubic
* Backblaze B2
* Yandex Disk
* The local filesystem

Features

* MD5/SHA1 hashes checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* Copy mode to just copy new/changed files
* Sync (one way) mode to make a directory identical
* Check mode to check for file hash equality
* Can sync to and from network, eg two different cloud accounts

See the home page for installation, usage, documentation, changelog
and configuration walkthroughs.

* http://rclone.org/
`,
}

// runRoot implements the main rclone command with no subcommands
func runRoot(cmd *cobra.Command, args []string) {
	if version {
		ShowVersion()
		os.Exit(0)
	} else {
		_ = Root.Usage()
		fmt.Fprintf(os.Stderr, "Command not found.\n")
		os.Exit(1)
	}
}

func init() {
	Root.Run = runRoot
	Root.Flags().BoolVarP(&version, "version", "V", false, "Print the version number")
	cobra.OnInitialize(initConfig)
}

// ShowVersion prints the version to stdout
func ShowVersion() {
	fmt.Printf("rclone %s\n", fs.Version)
}

// newFsSrc creates a src Fs from a name
//
// This can point to a file
func newFsSrc(remote string) fs.Fs {
	fsInfo, configName, fsPath, err := fs.ParseRemote(remote)
	if err != nil {
		fs.Stats.Error()
		log.Fatalf("Failed to create file system for %q: %v", remote, err)
	}
	f, err := fsInfo.NewFs(configName, fsPath)
	if err == fs.ErrorIsFile {
		if !fs.Config.Filter.InActive() {
			fs.Stats.Error()
			log.Fatalf("Can't limit to single files when using filters: %v", remote)
		}
		// Limit transfers to this file
		err = fs.Config.Filter.AddFile(path.Base(fsPath))
		// Set --no-traverse as only one file
		fs.Config.NoTraverse = true
	}
	if err != nil {
		fs.Stats.Error()
		log.Fatalf("Failed to create file system for %q: %v", remote, err)
	}
	return f
}

// newFsDst creates a dst Fs from a name
//
// This must point to a directory
func newFsDst(remote string) fs.Fs {
	f, err := fs.NewFs(remote)
	if err != nil {
		fs.Stats.Error()
		log.Fatalf("Failed to create file system for %q: %v", remote, err)
	}
	return f
}

// NewFsSrcDst creates a new src and dst fs from the arguments
func NewFsSrcDst(args []string) (fs.Fs, fs.Fs) {
	fsrc, fdst := newFsSrc(args[0]), newFsDst(args[1])
	fs.CalculateModifyWindow(fdst, fsrc)
	return fsrc, fdst
}

// NewFsSrc creates a new src fs from the arguments
func NewFsSrc(args []string) fs.Fs {
	fsrc := newFsSrc(args[0])
	fs.CalculateModifyWindow(fsrc)
	return fsrc
}

// NewFsDst creates a new dst fs from the arguments
//
// Dst fs-es can't point to single files
func NewFsDst(args []string) fs.Fs {
	fdst := newFsDst(args[0])
	fs.CalculateModifyWindow(fdst)
	return fdst
}

// Run the function with stats and retries if required
func Run(Retry bool, cmd *cobra.Command, f func() error) {
	var err error
	stopStats := startStats()
	for try := 1; try <= *retries; try++ {
		err = f()
		if !Retry || (err == nil && !fs.Stats.Errored()) {
			break
		}
		if fs.IsFatalError(err) {
			fs.Log(nil, "Fatal error received - not attempting retries")
			break
		}
		if fs.IsNoRetryError(err) {
			fs.Log(nil, "Can't retry this error - not attempting retries")
			break
		}
		if err != nil {
			fs.Log(nil, "Attempt %d/%d failed with %d errors and: %v", try, *retries, fs.Stats.GetErrors(), err)
		} else {
			fs.Log(nil, "Attempt %d/%d failed with %d errors", try, *retries, fs.Stats.GetErrors())
		}
		if try < *retries {
			fs.Stats.ResetErrors()
		}
	}
	close(stopStats)
	if err != nil {
		log.Fatalf("Failed to %s: %v", cmd.Name(), err)
	}
	if !fs.Config.Quiet || fs.Stats.Errored() || *statsInterval > 0 {
		fs.Log(nil, "%s", fs.Stats)
	}
	if fs.Config.Verbose {
		fs.Debug(nil, "Go routines at exit %d\n", runtime.NumGoroutine())
	}
	if fs.Stats.Errored() {
		os.Exit(1)
	}
}

// CheckArgs checks there are enough arguments and prints a message if not
func CheckArgs(MinArgs, MaxArgs int, cmd *cobra.Command, args []string) {
	if len(args) < MinArgs {
		_ = cmd.Usage()
		fmt.Fprintf(os.Stderr, "Command %s needs %d arguments minimum\n", cmd.Name(), MinArgs)
		os.Exit(1)
	} else if len(args) > MaxArgs {
		_ = cmd.Usage()
		fmt.Fprintf(os.Stderr, "Command %s needs %d arguments maximum\n", cmd.Name(), MaxArgs)
		os.Exit(1)
	}
}

// startStats prints the stats every statsInterval
//
// It returns a channel which should be closed to stop the stats.
func startStats() chan struct{} {
	stopStats := make(chan struct{})
	if *statsInterval > 0 {
		go func() {
			ticker := time.NewTicker(*statsInterval)
			for {
				select {
				case <-ticker.C:
					fs.Stats.Log()
				case <-stopStats:
					ticker.Stop()
					return
				}
			}
		}()
	}
	return stopStats
}

// initConfig is run by cobra after initialising the flags
func initConfig() {
	// Log file output
	if *logFile != "" {
		f, err := os.OpenFile(*logFile, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0640)
		if err != nil {
			log.Fatalf("Failed to open log file: %v", err)
		}
		_, err = f.Seek(0, os.SEEK_END)
		if err != nil {
			fs.ErrorLog(nil, "Failed to seek log file to end: %v", err)
		}
		log.SetOutput(f)
		fs.DebugLogger.SetOutput(f)
		redirectStderr(f)
	}

	// Load the rest of the config now we have started the logger
	fs.LoadConfig()

	// Write the args for debug purposes
	fs.Debug("rclone", "Version %q starting with parameters %q", fs.Version, os.Args)

	// Setup CPU profiling if desired
	if *cpuProfile != "" {
		fs.Log(nil, "Creating CPU profile %q\n", *cpuProfile)
		f, err := os.Create(*cpuProfile)
		if err != nil {
			fs.Stats.Error()
			log.Fatal(err)
		}
		err = pprof.StartCPUProfile(f)
		if err != nil {
			fs.Stats.Error()
			log.Fatal(err)
		}
		defer pprof.StopCPUProfile()
	}

	// Setup memory profiling if desired
	if *memProfile != "" {
		defer func() {
			fs.Log(nil, "Saving Memory profile %q\n", *memProfile)
			f, err := os.Create(*memProfile)
			if err != nil {
				fs.Stats.Error()
				log.Fatal(err)
			}
			err = pprof.WriteHeapProfile(f)
			if err != nil {
				fs.Stats.Error()
				log.Fatal(err)
			}
			err = f.Close()
			if err != nil {
				fs.Stats.Error()
				log.Fatal(err)
			}
		}()
	}
}
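The package above is the whole subcommand API: CheckArgs validates the argument count, NewFsSrc/NewFsDst/NewFsSrcDst build remotes, and Run wraps the work in stats and retries. A sketch of how a subcommand would use it; the "touch" command here is hypothetical, while the real subcommands follow in the files below:

package touch

import (
	"github.com/ncw/rclone/cmd"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(touchCmd)
}

var touchCmd = &cobra.Command{
	Use:   "touch remote:path",
	Short: `Hypothetical example command.`,
	Run: func(command *cobra.Command, args []string) {
		// Enforce exactly one argument, printing usage otherwise
		cmd.CheckArgs(1, 1, command, args)
		fsrc := cmd.NewFsSrc(args)
		// Run with retries enabled - the closure is retried up to
		// --retries times if it errors or increments the error stats
		cmd.Run(true, command, func() error {
			_ = fsrc // do something with the remote here
			return nil
		})
	},
}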
20
cmd/config/config.go
Normal file
@@ -0,0 +1,20 @@
package config

import (
	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(configCmd)
}

var configCmd = &cobra.Command{
	Use:   "config",
	Short: `Enter an interactive configuration session.`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(0, 0, command, args)
		fs.EditConfig()
	},
}
63
cmd/copy/copy.go
Normal file
@@ -0,0 +1,63 @@
package copy

import (
	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(copyCmd)
}

var copyCmd = &cobra.Command{
	Use:   "copy source:path dest:path",
	Short: `Copy files from source to dest, skipping already copied`,
	Long: `
Copy the source to the destination. Doesn't transfer
unchanged files, testing by size and modification time or
MD5SUM. Doesn't delete files from the destination.

Note that it is always the contents of the directory that is synced,
not the directory so when source:path is a directory, it's the
contents of source:path that are copied, not the directory name and
contents.

If dest:path doesn't exist, it is created and the source:path contents
go there.

For example

    rclone copy source:sourcepath dest:destpath

Let's say there are two files in sourcepath

    sourcepath/one.txt
    sourcepath/two.txt

This copies them to

    destpath/one.txt
    destpath/two.txt

Not to

    destpath/sourcepath/one.txt
    destpath/sourcepath/two.txt

If you are familiar with ` + "`" + `rsync` + "`" + `, rclone always works as if you had
written a trailing / - meaning "copy the contents of this directory".
This applies to all commands and whether you are talking about the
source or destination.

See the ` + "`" + `--no-traverse` + "`" + ` option for controlling whether rclone lists
the destination directory or not.
`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(2, 2, command, args)
		fsrc, fdst := cmd.NewFsSrcDst(args)
		cmd.Run(true, command, func() error {
			return fs.CopyDir(fdst, fsrc)
		})
	},
}
113
cmd/dedupe/dedupe.go
Normal file
@@ -0,0 +1,113 @@
package dedupe

import (
	"log"

	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

var (
	dedupeMode = fs.DeduplicateInteractive
)

func init() {
	cmd.Root.AddCommand(dedupeCmd)
	dedupeCmd.Flags().VarP(&dedupeMode, "dedupe-mode", "", "Dedupe mode interactive|skip|first|newest|oldest|rename.")
}

var dedupeCmd = &cobra.Command{
	Use:   "dedupe [mode] remote:path",
	Short: `Interactively find duplicate files and delete/rename them.`,
	Long: `
By default ` + "`" + `dedupe` + "`" + ` interactively finds duplicate files and offers to
delete all but one or rename them to be different. Only useful with
Google Drive which can have duplicate file names.

The ` + "`" + `dedupe` + "`" + ` command will delete all but one of any identical (same
md5sum) files it finds without confirmation. This means that for most
duplicated files the ` + "`" + `dedupe` + "`" + ` command will not be interactive. You
can use ` + "`" + `--dry-run` + "`" + ` to see what would happen without doing anything.

Here is an example run.

Before - with duplicates

    $ rclone lsl drive:dupes
    6048320 2016-03-05 16:23:16.798000000 one.txt
    6048320 2016-03-05 16:23:11.775000000 one.txt
    564374 2016-03-05 16:23:06.731000000 one.txt
    6048320 2016-03-05 16:18:26.092000000 one.txt
    6048320 2016-03-05 16:22:46.185000000 two.txt
    1744073 2016-03-05 16:22:38.104000000 two.txt
    564374 2016-03-05 16:22:52.118000000 two.txt

Now the ` + "`" + `dedupe` + "`" + ` session

    $ rclone dedupe drive:dupes
    2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
    one.txt: Found 4 duplicates - deleting identical copies
    one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36")
    one.txt: 2 duplicates remain
    1: 6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
    2: 564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
    s) Skip and do nothing
    k) Keep just one (choose which in next step)
    r) Rename all to be different (by changing file.jpg to file-1.jpg)
    s/k/r> k
    Enter the number of the file to keep> 1
    one.txt: Deleted 1 extra copies
    two.txt: Found 3 duplicates - deleting identical copies
    two.txt: 3 duplicates remain
    1: 564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
    2: 6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
    3: 1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802
    s) Skip and do nothing
    k) Keep just one (choose which in next step)
    r) Rename all to be different (by changing file.jpg to file-1.jpg)
    s/k/r> r
    two-1.txt: renamed from: two.txt
    two-2.txt: renamed from: two.txt
    two-3.txt: renamed from: two.txt

The result being

    $ rclone lsl drive:dupes
    6048320 2016-03-05 16:23:16.798000000 one.txt
    564374 2016-03-05 16:22:52.118000000 two-1.txt
    6048320 2016-03-05 16:22:46.185000000 two-2.txt
    1744073 2016-03-05 16:22:38.104000000 two-3.txt

Dedupe can be run non interactively using the ` + "`" + `--dedupe-mode` + "`" + ` flag or by using an extra parameter with the same value

* ` + "`" + `--dedupe-mode interactive` + "`" + ` - interactive as above.
* ` + "`" + `--dedupe-mode skip` + "`" + ` - removes identical files then skips anything left.
* ` + "`" + `--dedupe-mode first` + "`" + ` - removes identical files then keeps the first one.
* ` + "`" + `--dedupe-mode newest` + "`" + ` - removes identical files then keeps the newest one.
* ` + "`" + `--dedupe-mode oldest` + "`" + ` - removes identical files then keeps the oldest one.
* ` + "`" + `--dedupe-mode rename` + "`" + ` - removes identical files then renames the rest to be different.

For example to rename all the identically named photos in your Google Photos directory, do

    rclone dedupe --dedupe-mode rename "drive:Google Photos"

Or

    rclone dedupe rename "drive:Google Photos"
`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 2, command, args)
		if len(args) > 1 {
			err := dedupeMode.Set(args[0])
			if err != nil {
				log.Fatal(err)
			}
			args = args[1:]
		}
		fdst := cmd.NewFsSrc(args)
		cmd.Run(false, command, func() error {
			return fs.Deduplicate(fdst, dedupeMode)
		})
	},
}
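For Flags().VarP(&dedupeMode, ...) above to compile, the dedupe mode type has to satisfy pflag's Value interface (String, Set and Type methods). A rough sketch of what such an implementation could look like, assuming a simple string-backed type rather than rclone's actual fs.DeduplicateMode:

package dedupemode

import "fmt"

// Mode is a hypothetical string-backed flag type satisfying pflag.Value.
type Mode string

// String returns the current value for help output.
func (m *Mode) String() string { return string(*m) }

// Set validates and stores a value given on the command line.
func (m *Mode) Set(s string) error {
	switch s {
	case "interactive", "skip", "first", "newest", "oldest", "rename":
		*m = Mode(s)
		return nil
	}
	return fmt.Errorf("unknown dedupe mode %q", s)
}

// Type names the flag type in usage text.
func (m *Mode) Type() string { return "string" }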
41
cmd/delete/delete.go
Normal file
@@ -0,0 +1,41 @@
package delete

import (
	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(deleteCmd)
}

var deleteCmd = &cobra.Command{
	Use:   "delete remote:path",
	Short: `Remove the contents of path.`,
	Long: `
Remove the contents of path. Unlike ` + "`" + `purge` + "`" + ` it obeys include/exclude
filters so can be used to selectively delete files.

Eg delete all files bigger than 100MBytes

Check what would be deleted first (use either)

    rclone --min-size 100M lsl remote:path
    rclone --dry-run --min-size 100M delete remote:path

Then delete

    rclone --min-size 100M delete remote:path

That reads "delete everything with a minimum size of 100 MB", hence
delete all files bigger than 100MBytes.
`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		fsrc := cmd.NewFsSrc(args)
		cmd.Run(true, command, func() error {
			return fs.Delete(fsrc)
		})
	},
}
44
cmd/genautocomplete/genautocomplete.go
Normal file
@@ -0,0 +1,44 @@
package genautocomplete

import (
	"log"

	"github.com/ncw/rclone/cmd"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(genautocompleteCmd)
}

var genautocompleteCmd = &cobra.Command{
	Use:   "genautocomplete [output_file]",
	Short: `Output bash completion script for rclone.`,
	Long: `
Generates a bash shell autocompletion script for rclone.

This writes to /etc/bash_completion.d/rclone by default so will
probably need to be run with sudo or as root, eg

    sudo rclone genautocomplete

Logout and login again to use the autocompletion scripts, or source
them directly

    . /etc/bash_completion

If you supply a command line argument the script will be written
there.
`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(0, 1, command, args)
		out := "/etc/bash_completion.d/rclone"
		if len(args) > 0 {
			out = args[0]
		}
		err := cmd.Root.GenBashCompletionFile(out)
		if err != nil {
			log.Fatal(err)
		}
	},
}
55
cmd/gendocs/gendocs.go
Normal file
@@ -0,0 +1,55 @@
package gendocs

import (
	"fmt"
	"os"
	"path"
	"path/filepath"
	"strings"
	"time"

	"github.com/ncw/rclone/cmd"
	"github.com/spf13/cobra"
	"github.com/spf13/cobra/doc"
)

func init() {
	cmd.Root.AddCommand(gendocsCmd)
}

const gendocFrontmatterTemplate = `---
date: %s
title: "%s"
slug: %s
url: %s
---
`

var gendocsCmd = &cobra.Command{
	Use:   "gendocs output_directory",
	Short: `Output markdown docs for rclone to the directory supplied.`,
	Long: `
This produces markdown docs for the rclone commands to the directory
supplied. These are in a format suitable for hugo to render into the
rclone.org website.`,
	RunE: func(command *cobra.Command, args []string) error {
		cmd.CheckArgs(1, 1, command, args)
		out := args[0]
		err := os.MkdirAll(out, 0777)
		if err != nil {
			return err
		}
		now := time.Now().Format(time.RFC3339)
		prepender := func(filename string) string {
			name := filepath.Base(filename)
			base := strings.TrimSuffix(name, path.Ext(name))
			url := "/commands/" + strings.ToLower(base) + "/"
			return fmt.Sprintf(gendocFrontmatterTemplate, now, strings.Replace(base, "_", " ", -1), base, url)
		}
		linkHandler := func(name string) string {
			base := strings.TrimSuffix(name, path.Ext(name))
			return "/commands/" + strings.ToLower(base) + "/"
		}
		return doc.GenMarkdownTreeCustom(cmd.Root, out, prepender, linkHandler)
	},
}
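To make the frontmatter template above concrete: for a generated file named rclone_copy.md, the prepender derives title "rclone copy", slug "rclone_copy" and url "/commands/rclone_copy/", so it would emit something along these lines (the date value is illustrative, not from the source):

    ---
    date: 2016-08-24T10:00:00+01:00
    title: "rclone copy"
    slug: rclone_copy
    url: /commands/rclone_copy/
    ---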
25
cmd/ls/ls.go
Normal file
@@ -0,0 +1,25 @@
package ls

import (
	"os"

	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(lsCmd)
}

var lsCmd = &cobra.Command{
	Use:   "ls remote:path",
	Short: `List all the objects in the path with size and path.`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		fsrc := cmd.NewFsSrc(args)
		cmd.Run(false, command, func() error {
			return fs.List(fsrc, os.Stdout)
		})
	},
}
25
cmd/lsd/lsd.go
Normal file
@@ -0,0 +1,25 @@
package lsd

import (
	"os"

	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(lsdCmd)
}

var lsdCmd = &cobra.Command{
	Use:   "lsd remote:path",
	Short: `List all directories/containers/buckets in the path.`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		fsrc := cmd.NewFsSrc(args)
		cmd.Run(false, command, func() error {
			return fs.ListDir(fsrc, os.Stdout)
		})
	},
}
25
cmd/lsl/lsl.go
Normal file
@@ -0,0 +1,25 @@
package lsl

import (
	"os"

	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(lslCmd)
}

var lslCmd = &cobra.Command{
	Use:   "lsl remote:path",
	Short: `List all the objects in the path with modification time, size and path.`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		fsrc := cmd.NewFsSrc(args)
		cmd.Run(false, command, func() error {
			return fs.ListLong(fsrc, os.Stdout)
		})
	},
}
29
cmd/md5sum/md5sum.go
Normal file
@@ -0,0 +1,29 @@
package md5sum

import (
	"os"

	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(md5sumCmd)
}

var md5sumCmd = &cobra.Command{
	Use:   "md5sum remote:path",
	Short: `Produces an md5sum file for all the objects in the path.`,
	Long: `
Produces an md5sum file for all the objects in the path. This
is in the same format as the standard md5sum tool produces.
`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		fsrc := cmd.NewFsSrc(args)
		cmd.Run(false, command, func() error {
			return fs.Md5sum(fsrc, os.Stdout)
		})
	},
}
49
cmd/memtest/memtest.go
Normal file
@@ -0,0 +1,49 @@
package memtest

import (
	"runtime"
	"sync"

	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(memtestCmd)
}

var memtestCmd = &cobra.Command{
	Use:    "memtest remote:path",
	Short:  `Load all the objects at remote:path and report memory stats.`,
	Hidden: true,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		fsrc := cmd.NewFsSrc(args)
		cmd.Run(false, command, func() error {
			objects, _, err := fs.Count(fsrc)
			if err != nil {
				return err
			}
			objs := make([]fs.Object, 0, objects)
			var before, after runtime.MemStats
			runtime.GC()
			runtime.ReadMemStats(&before)
			var mu sync.Mutex
			err = fs.ListFn(fsrc, func(o fs.Object) {
				mu.Lock()
				objs = append(objs, o)
				mu.Unlock()
			})
			if err != nil {
				return err
			}
			runtime.GC()
			runtime.ReadMemStats(&after)
			usedMemory := after.Alloc - before.Alloc
			fs.Log(nil, "%d objects took %d bytes, %.1f bytes/object", len(objs), usedMemory, float64(usedMemory)/float64(len(objs)))
			fs.Log(nil, "System memory changed from %d to %d bytes a change of %d bytes", before.Sys, after.Sys, after.Sys-before.Sys)
			return nil
		})
	},
}
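The memtest command uses the standard before/after runtime.ReadMemStats measurement, forcing a GC on each side so Alloc reflects live data. A standalone sketch of that technique, independent of rclone:

package main

import (
	"fmt"
	"runtime"
)

// measure reports the approximate heap cost of the values allocated by fn.
// Note Alloc is a uint64, so the subtraction only makes sense while the
// result of fn is kept alive across the second ReadMemStats.
func measure(fn func() interface{}) {
	var before, after runtime.MemStats
	runtime.GC() // settle the heap so Alloc reflects live data
	runtime.ReadMemStats(&before)
	keep := fn() // keep a reference so the allocations stay live
	runtime.GC()
	runtime.ReadMemStats(&after)
	fmt.Printf("allocated %d bytes (result kept: %T)\n", after.Alloc-before.Alloc, keep)
}

func main() {
	measure(func() interface{} {
		return make([]int64, 1000000)
	})
}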
23
cmd/mkdir/mkdir.go
Normal file
@@ -0,0 +1,23 @@
package mkdir

import (
	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(mkdirCmd)
}

var mkdirCmd = &cobra.Command{
	Use:   "mkdir remote:path",
	Short: `Make the path if it doesn't already exist.`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		fdst := cmd.NewFsDst(args)
		cmd.Run(true, command, func() error {
			return fs.Mkdir(fdst)
		})
	},
}
57
cmd/mount/createinfo.go
Normal file
@@ -0,0 +1,57 @@
// +build linux darwin freebsd

package mount

import (
	"time"

	"github.com/ncw/rclone/fs"
)

// info to create a new object
type createInfo struct {
	f      fs.Fs
	remote string
}

func newCreateInfo(f fs.Fs, remote string) *createInfo {
	return &createInfo{
		f:      f,
		remote: remote,
	}
}

// Fs returns read only access to the Fs that this object is part of
func (ci *createInfo) Fs() fs.Info {
	return ci.f
}

// Remote returns the remote path
func (ci *createInfo) Remote() string {
	return ci.remote
}

// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (ci *createInfo) Hash(fs.HashType) (string, error) {
	return "", fs.ErrHashUnsupported
}

// ModTime returns the modification date of the file
// It should return a best guess if one isn't available
func (ci *createInfo) ModTime() time.Time {
	return time.Now()
}

// Size returns the size of the file
func (ci *createInfo) Size() int64 {
	// FIXME this means this won't work with all remotes...
	return 0
}

// Storable says whether this object can be stored
func (ci *createInfo) Storable() bool {
	return true
}

var _ fs.ObjectInfo = (*createInfo)(nil)
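createinfo.go ends with the compile-time interface check var _ fs.ObjectInfo = (*createInfo)(nil), an idiom used throughout these files. A minimal self-contained illustration of it (the types here are made up for the example):

package main

// Sizer is a stand-in interface for the example.
type Sizer interface {
	Size() int64
}

type zeroFile struct{}

func (zeroFile) Size() int64 { return 0 }

// This declaration costs nothing at runtime but fails to compile if
// zeroFile ever stops satisfying Sizer, catching drift early.
var _ Sizer = (*zeroFile)(nil)

func main() {}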
377
cmd/mount/dir.go
Normal file
@@ -0,0 +1,377 @@
// +build linux darwin freebsd

package mount

import (
	"os"
	"path"
	"sync"
	"time"

	"bazil.org/fuse"
	fusefs "bazil.org/fuse/fs"
	"github.com/ncw/rclone/fs"
	"github.com/pkg/errors"
	"golang.org/x/net/context"
)

// DirEntry describes the contents of a directory entry
//
// It can be a file or a directory
//
// node may be nil, but o may not
type DirEntry struct {
	o    fs.BasicInfo
	node fusefs.Node
}

// Dir represents a directory entry
type Dir struct {
	f     fs.Fs
	path  string
	mu    sync.RWMutex // protects the following
	read  bool
	items map[string]*DirEntry
}

func newDir(f fs.Fs, path string) *Dir {
	return &Dir{
		f:    f,
		path: path,
	}
}

// addObject adds a new object or directory to the directory
//
// note that we add new objects rather than updating old ones
func (d *Dir) addObject(o fs.BasicInfo, node fusefs.Node) *DirEntry {
	item := &DirEntry{
		o:    o,
		node: node,
	}
	d.mu.Lock()
	d.items[path.Base(o.Remote())] = item
	d.mu.Unlock()
	return item
}

// delObject removes an object from the directory
func (d *Dir) delObject(leaf string) {
	d.mu.Lock()
	delete(d.items, leaf)
	d.mu.Unlock()
}

// read the directory
func (d *Dir) readDir() error {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.read {
		return nil
	}
	objs, dirs, err := fs.NewLister().SetLevel(1).Start(d.f, d.path).GetAll()
	if err == fs.ErrorDirNotFound {
		// We treat directory not found as empty because we
		// create directories on the fly
	} else if err != nil {
		return err
	}
	// Cache the items by name
	d.items = make(map[string]*DirEntry, len(objs)+len(dirs))
	for _, obj := range objs {
		name := path.Base(obj.Remote())
		d.items[name] = &DirEntry{
			o:    obj,
			node: nil,
		}
	}
	for _, dir := range dirs {
		name := path.Base(dir.Remote())
		d.items[name] = &DirEntry{
			o:    dir,
			node: nil,
		}
	}
	d.read = true
	return nil
}

// lookup a single item in the directory
//
// returns fuse.ENOENT if not found.
func (d *Dir) lookup(leaf string) (*DirEntry, error) {
	err := d.readDir()
	if err != nil {
		return nil, err
	}
	d.mu.RLock()
	item, ok := d.items[leaf]
	d.mu.RUnlock()
	if !ok {
		return nil, fuse.ENOENT
	}
	return item, nil
}

// Check to see if a directory is empty
func (d *Dir) isEmpty() (bool, error) {
	err := d.readDir()
	if err != nil {
		return false, err
	}
	d.mu.RLock()
	defer d.mu.RUnlock()
	return len(d.items) == 0, nil
}

// Check interface satisfied
var _ fusefs.Node = (*Dir)(nil)

// Attr updates the attributes of a directory
func (d *Dir) Attr(ctx context.Context, a *fuse.Attr) error {
	fs.Debug(d.path, "Dir.Attr")
	a.Mode = os.ModeDir | dirPerms
	// FIXME include Valid so get some caching? Also mtime
	return nil
}

// lookupNode calls lookup then makes sure the node is not nil in the DirEntry
func (d *Dir) lookupNode(leaf string) (item *DirEntry, err error) {
	item, err = d.lookup(leaf)
	if err != nil {
		return nil, err
	}
	if item.node != nil {
		return item, nil
	}
	var node fusefs.Node
	switch x := item.o.(type) {
	case fs.Object:
		node, err = newFile(d, x), nil
	case *fs.Dir:
		node, err = newDir(d.f, x.Remote()), nil
	default:
		err = errors.Errorf("unknown type %T", item)
	}
	if err != nil {
		return nil, err
	}
	item = d.addObject(item.o, node)
	return item, err
}

// Check interface satisfied
var _ fusefs.NodeRequestLookuper = (*Dir)(nil)

// Lookup looks up a specific entry in the receiver.
//
// Lookup should return a Node corresponding to the entry. If the
// name does not exist in the directory, Lookup should return ENOENT.
//
// Lookup need not handle the names "." and "..".
func (d *Dir) Lookup(ctx context.Context, req *fuse.LookupRequest, resp *fuse.LookupResponse) (node fusefs.Node, err error) {
	path := path.Join(d.path, req.Name)
	fs.Debug(path, "Dir.Lookup")
	item, err := d.lookupNode(req.Name)
	if err != nil {
		if err != fuse.ENOENT {
			fs.ErrorLog(path, "Dir.Lookup error: %v", err)
		}
		return nil, err
	}
	fs.Debug(path, "Dir.Lookup OK")
	return item.node, nil
}

// Check interface satisfied
var _ fusefs.HandleReadDirAller = (*Dir)(nil)

// ReadDirAll reads the contents of the directory
func (d *Dir) ReadDirAll(ctx context.Context) (dirents []fuse.Dirent, err error) {
	fs.Debug(d.path, "Dir.ReadDirAll")
	err = d.readDir()
	if err != nil {
		fs.Debug(d.path, "Dir.ReadDirAll error: %v", err)
		return nil, err
	}
	d.mu.RLock()
	defer d.mu.RUnlock()
	for _, item := range d.items {
		var dirent fuse.Dirent
		switch x := item.o.(type) {
		case fs.Object:
			dirent = fuse.Dirent{
				// Inode FIXME ???
				Type: fuse.DT_File,
				Name: path.Base(x.Remote()),
			}
		case *fs.Dir:
			dirent = fuse.Dirent{
				// Inode FIXME ???
				Type: fuse.DT_Dir,
				Name: path.Base(x.Remote()),
			}
		default:
			err = errors.Errorf("unknown type %T", item)
			fs.ErrorLog(d.path, "Dir.ReadDirAll error: %v", err)
			return nil, err
		}
		dirents = append(dirents, dirent)
	}
	fs.Debug(d.path, "Dir.ReadDirAll OK with %d entries", len(dirents))
	return dirents, nil
}

var _ fusefs.NodeCreater = (*Dir)(nil)

// Create makes a new file
func (d *Dir) Create(ctx context.Context, req *fuse.CreateRequest, resp *fuse.CreateResponse) (fusefs.Node, fusefs.Handle, error) {
	path := path.Join(d.path, req.Name)
	fs.Debug(path, "Dir.Create")
	src := newCreateInfo(d.f, path)
	// This gets added to the directory when the file is written
	file := newFile(d, nil)
	fh, err := newWriteFileHandle(d, file, src)
	if err != nil {
		fs.ErrorLog(path, "Dir.Create error: %v", err)
		return nil, nil, err
	}
	fs.Debug(path, "Dir.Create OK")
	return file, fh, nil
}

var _ fusefs.NodeMkdirer = (*Dir)(nil)

// Mkdir creates a new directory
func (d *Dir) Mkdir(ctx context.Context, req *fuse.MkdirRequest) (fusefs.Node, error) {
	// We just pretend to have created the directory - rclone will
	// actually create the directory if we write files into it
	path := path.Join(d.path, req.Name)
	fs.Debug(path, "Dir.Mkdir")
	fsDir := &fs.Dir{
		Name: path,
		When: time.Now(),
	}
	dir := newDir(d.f, path)
	d.addObject(fsDir, dir)
	fs.Debug(path, "Dir.Mkdir OK")
	return dir, nil
}

var _ fusefs.NodeRemover = (*Dir)(nil)

// Remove removes the entry with the given name from
// the receiver, which must be a directory. The entry to be removed
// may correspond to a file (unlink) or to a directory (rmdir).
func (d *Dir) Remove(ctx context.Context, req *fuse.RemoveRequest) error {
	path := path.Join(d.path, req.Name)
	fs.Debug(path, "Dir.Remove")
	item, err := d.lookupNode(req.Name)
	if err != nil {
		fs.ErrorLog(path, "Dir.Remove error: %v", err)
		return err
	}
	switch x := item.o.(type) {
	case fs.Object:
		err = x.Remove()
		if err != nil {
			fs.ErrorLog(path, "Dir.Remove file error: %v", err)
			return err
		}
	case *fs.Dir:
		// Do nothing for deleting directory - rclone can't
		// currently remove a random directory
		//
		// Check directory is empty first though
		dir := item.node.(*Dir)
		empty, err := dir.isEmpty()
		if err != nil {
			fs.ErrorLog(path, "Dir.Remove dir error: %v", err)
			return err
		}
		if !empty {
			// return fuse.ENOTEMPTY - doesn't exist though so use EEXIST
			fs.ErrorLog(path, "Dir.Remove not empty")
			return fuse.EEXIST
		}
	default:
		fs.ErrorLog(path, "Dir.Remove unknown type %T", item)
		return errors.Errorf("unknown type %T", item)
	}
	// Remove the item from the directory listing
	d.delObject(req.Name)
	fs.Debug(path, "Dir.Remove OK")
	return nil
}

// Check interface satisfied
var _ fusefs.NodeRenamer = (*Dir)(nil)

// Rename the file
func (d *Dir) Rename(ctx context.Context, req *fuse.RenameRequest, newDir fusefs.Node) error {
	oldPath := path.Join(d.path, req.OldName)
	destDir, ok := newDir.(*Dir)
	if !ok {
		err := errors.Errorf("Unknown Dir type %T", newDir)
		fs.ErrorLog(oldPath, "Dir.Rename error: %v", err)
		return err
	}
	newPath := path.Join(destDir.path, req.NewName)
	fs.Debug(oldPath, "Dir.Rename to %q", newPath)
	oldItem, err := d.lookupNode(req.OldName)
	if err != nil {
		fs.ErrorLog(oldPath, "Dir.Rename error: %v", err)
		return err
	}
	var newObj fs.BasicInfo
	switch x := oldItem.o.(type) {
	case fs.Object:
		oldObject := x
		do, ok := d.f.(fs.Mover)
		if !ok {
			err := errors.Errorf("Fs %q can't Move files", d.f)
			fs.ErrorLog(oldPath, "Dir.Rename error: %v", err)
			return err
		}
		newObject, err := do.Move(oldObject, newPath)
		if err != nil {
			fs.ErrorLog(oldPath, "Dir.Rename error: %v", err)
			return err
		}
		newObj = newObject
	case *fs.Dir:
		oldDir := oldItem.node.(*Dir)
		empty, err := oldDir.isEmpty()
		if err != nil {
			fs.ErrorLog(oldPath, "Dir.Rename dir error: %v", err)
			return err
		}
		if !empty {
			// return fuse.ENOTEMPTY - doesn't exist though so use EEXIST
			fs.ErrorLog(oldPath, "Dir.Rename can't rename non empty directory")
			return fuse.EEXIST
		}
		newObj = &fs.Dir{
			Name: newPath,
			When: time.Now(),
		}
	default:
		err = errors.Errorf("unknown type %T", oldItem)
		fs.ErrorLog(d.path, "Dir.ReadDirAll error: %v", err)
		return err
	}

	// Show moved - delete from old dir and add to new
	d.delObject(req.OldName)
	destDir.addObject(newObj, nil)

	// FIXME need to flush the dir also

	// FIXME use DirMover to move a directory?
	// or maybe use MoveDir which can move anything
	// fallback to Copy/Delete if no Move?
	// if dir is empty then can move it

	fs.ErrorLog(newPath, "Dir.Rename renamed from %q", oldPath)
	return nil
}
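Dir.readDir above implements a read-once, cache-forever directory listing guarded by an RWMutex: the write lock covers the one-time fill, and readers take only the read lock afterwards. The shape of that pattern in isolation (the listing function here is a stand-in, not rclone's lister):

package main

import "sync"

// cachedDir lists a directory at most once and serves later
// lookups from the cached map, mirroring Dir.readDir/lookup above.
type cachedDir struct {
	mu    sync.RWMutex
	read  bool
	items map[string]struct{}
	list  func() []string // stand-in for the real listing call
}

func (d *cachedDir) readDir() {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.read {
		return // already cached
	}
	d.items = make(map[string]struct{})
	for _, name := range d.list() {
		d.items[name] = struct{}{}
	}
	d.read = true
}

func (d *cachedDir) lookup(name string) bool {
	d.readDir() // fill the cache on first use
	d.mu.RLock()
	defer d.mu.RUnlock()
	_, ok := d.items[name]
	return ok
}

func main() {
	d := &cachedDir{list: func() []string { return []string{"a", "b"} }}
	_ = d.lookup("a")
}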
133
cmd/mount/dir_test.go
Normal file
@@ -0,0 +1,133 @@
// +build linux darwin freebsd

package mount

import (
	"os"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestDirLs(t *testing.T) {
	run.skipIfNoFUSE(t)

	run.checkDir(t, "")

	run.mkdir(t, "a directory")
	run.createFile(t, "a file", "hello")

	run.checkDir(t, "a directory/|a file 5")

	run.rmdir(t, "a directory")
	run.rm(t, "a file")

	run.checkDir(t, "")
}

func TestDirCreateAndRemoveDir(t *testing.T) {
	run.skipIfNoFUSE(t)

	run.mkdir(t, "dir")
	run.mkdir(t, "dir/subdir")
	run.checkDir(t, "dir/|dir/subdir/")

	// Check we can't delete a directory with stuff in
	err := os.Remove(run.path("dir"))
	assert.Error(t, err, "file exists")

	// Now delete subdir then dir - should produce no errors
	run.rmdir(t, "dir/subdir")
	run.checkDir(t, "dir/")
	run.rmdir(t, "dir")
	run.checkDir(t, "")
}

func TestDirCreateAndRemoveFile(t *testing.T) {
	run.skipIfNoFUSE(t)

	run.mkdir(t, "dir")
	run.createFile(t, "dir/file", "potato")
	run.checkDir(t, "dir/|dir/file 6")

	// Check we can't delete a directory with stuff in
	err := os.Remove(run.path("dir"))
	assert.Error(t, err, "file exists")

	// Now delete file
	run.rm(t, "dir/file")

	run.checkDir(t, "dir/")
	run.rmdir(t, "dir")
	run.checkDir(t, "")
}

func TestDirRenameFile(t *testing.T) {
	run.skipIfNoFUSE(t)

	run.mkdir(t, "dir")
	run.createFile(t, "file", "potato")
	run.checkDir(t, "dir/|file 6")

	err := os.Rename(run.path("file"), run.path("dir/file2"))
	require.NoError(t, err)
	run.checkDir(t, "dir/|dir/file2 6")

	err = os.Rename(run.path("dir/file2"), run.path("dir/file3"))
	require.NoError(t, err)
	run.checkDir(t, "dir/|dir/file3 6")

	run.rm(t, "dir/file3")
	run.rmdir(t, "dir")
	run.checkDir(t, "")
}

func TestDirRenameEmptyDir(t *testing.T) {
	run.skipIfNoFUSE(t)

	run.mkdir(t, "dir")
	run.mkdir(t, "dir1")
	run.checkDir(t, "dir/|dir1/")

	err := os.Rename(run.path("dir1"), run.path("dir/dir2"))
	require.NoError(t, err)
	run.checkDir(t, "dir/|dir/dir2/")

	err = os.Rename(run.path("dir/dir2"), run.path("dir/dir3"))
	require.NoError(t, err)
	run.checkDir(t, "dir/|dir/dir3/")

	run.rmdir(t, "dir/dir3")
	run.rmdir(t, "dir")
	run.checkDir(t, "")
}

func TestDirRenameFullDir(t *testing.T) {
	run.skipIfNoFUSE(t)

	run.mkdir(t, "dir")
	run.mkdir(t, "dir1")
	run.createFile(t, "dir1/potato.txt", "maris piper")
	run.checkDir(t, "dir/|dir1/|dir1/potato.txt 11")

	err := os.Rename(run.path("dir1"), run.path("dir/dir2"))
	require.Error(t, err, "file exists")
	// Can't currently rename directories with stuff in
	/*
		require.NoError(t, err)
		run.checkDir(t, "dir/|dir/dir2/|dir/dir2/potato.txt 11")

		err = os.Rename(run.path("dir/dir2"), run.path("dir/dir3"))
		require.NoError(t, err)
		run.checkDir(t, "dir/|dir/dir3/|dir/dir3/potato.txt 11")

		run.rm(t, "dir/dir3/potato.txt")
		run.rmdir(t, "dir/dir3")
	*/

	run.rm(t, "dir1/potato.txt")
	run.rmdir(t, "dir1")
	run.rmdir(t, "dir")
	run.checkDir(t, "")
}
142
cmd/mount/file.go
Normal file
@@ -0,0 +1,142 @@
// +build linux darwin freebsd

package mount

import (
	"sync"
	"sync/atomic"
	"time"

	"bazil.org/fuse"
	fusefs "bazil.org/fuse/fs"
	"github.com/ncw/rclone/fs"
	"github.com/pkg/errors"
	"golang.org/x/net/context"
)

// File represents a file
type File struct {
	d       *Dir         // parent directory - read only
	size    int64        // size of file - read and written with atomic
	mu      sync.RWMutex // protects the following
	o       fs.Object    // NB o may be nil if file is being written
	writers int          // number of writers for this file
}

// newFile creates a new File
func newFile(d *Dir, o fs.Object) *File {
	return &File{
		d: d,
		o: o,
	}
}

// addWriters increments or decrements the writers
func (f *File) addWriters(n int) {
	f.mu.Lock()
	f.writers += n
	f.mu.Unlock()
}

// Check interface satisfied
var _ fusefs.Node = (*File)(nil)

// Attr fills out the attributes for the file
func (f *File) Attr(ctx context.Context, a *fuse.Attr) error {
	f.mu.Lock()
	defer f.mu.Unlock()
	fs.Debug(f.o, "File.Attr")
	a.Mode = filePerms
	// if o is nil it isn't valid yet, so return the size so far
	if f.o == nil {
		a.Size = uint64(atomic.LoadInt64(&f.size))
	} else {
		a.Size = uint64(f.o.Size())
		if !noModTime {
			modTime := f.o.ModTime()
			a.Atime = modTime
			a.Mtime = modTime
			a.Ctime = modTime
			a.Crtime = modTime
		}
	}
	return nil
}

// Update the size while writing
func (f *File) written(n int64) {
	atomic.AddInt64(&f.size, n)
}

// Update the object when written
func (f *File) setObject(o fs.Object) {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.o = o
	f.d.addObject(o, f)
}

// Wait for f.o to become non nil for a short time returning it or an
// error
//
// Call without the mutex held
func (f *File) waitForValidObject() (o fs.Object, err error) {
	for i := 0; i < 50; i++ {
		f.mu.Lock()
		o = f.o
		writers := f.writers
		f.mu.Unlock()
		if o != nil {
			return o, nil
		}
		if writers == 0 {
			return nil, errors.New("can't open file - writer failed")
		}
		time.Sleep(100 * time.Millisecond)
	}
	return nil, fuse.ENOENT
}

// Check interface satisfied
var _ fusefs.NodeOpener = (*File)(nil)

// Open the file for read or write
func (f *File) Open(ctx context.Context, req *fuse.OpenRequest, resp *fuse.OpenResponse) (fusefs.Handle, error) {
	// if o is nil it isn't valid yet
	o, err := f.waitForValidObject()
	if err != nil {
		return nil, err
	}

	fs.Debug(o, "File.Open")

	// Files aren't seekable
	resp.Flags |= fuse.OpenNonSeekable

	switch {
	case req.Flags.IsReadOnly():
		return newReadFileHandle(o)
	case req.Flags.IsWriteOnly():
		src := newCreateInfo(f.d.f, o.Remote())
		fh, err := newWriteFileHandle(f.d, f, src)
		if err != nil {
			return nil, err
		}
		return fh, nil
	case req.Flags.IsReadWrite():
		return nil, errors.New("can't open read and write")
	}

	/*
		// File was opened in append-only mode, all writes will go to end
		// of file. OS X does not provide this information.
		OpenAppend OpenFlags = syscall.O_APPEND
		OpenCreate OpenFlags = syscall.O_CREAT
		OpenDirectory OpenFlags = syscall.O_DIRECTORY
		OpenExclusive OpenFlags = syscall.O_EXCL
		OpenNonblock OpenFlags = syscall.O_NONBLOCK
		OpenSync OpenFlags = syscall.O_SYNC
		OpenTruncate OpenFlags = syscall.O_TRUNC
	*/
	return nil, errors.New("can't figure out how to open")
}
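waitForValidObject above polls for up to roughly five seconds (50 attempts at 100ms) and distinguishes "not ready yet" from "the writer gave up". A generic version of that wait-until-ready loop, with hypothetical names, just to show the shape:

package main

import (
	"errors"
	"time"
)

// waitFor polls check every interval until it reports ready, up to
// attempts tries, mirroring the structure of waitForValidObject.
func waitFor(attempts int, interval time.Duration, check func() (ready bool, fatal error)) error {
	for i := 0; i < attempts; i++ {
		ready, fatal := check()
		if fatal != nil {
			return fatal // no point waiting, the producer failed
		}
		if ready {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting")
}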
67
cmd/mount/fs.go
Normal file
@@ -0,0 +1,67 @@
// FUSE main Fs

// +build linux darwin freebsd

package mount

import (
	"bazil.org/fuse"
	fusefs "bazil.org/fuse/fs"
	"github.com/ncw/rclone/fs"
)

// Default permissions
const (
	dirPerms  = 0755
	filePerms = 0644
)

// FS represents the top level filing system
type FS struct {
	f fs.Fs
}

// Check interface satisfied
var _ fusefs.FS = (*FS)(nil)

// Root returns the root node
func (f *FS) Root() (fusefs.Node, error) {
	fs.Debug(f.f, "Root()")
	return newDir(f.f, ""), nil
}

// mount the file system
//
// The mount point will be ready when this returns.
//
// returns an error, and an error channel for the serve process to
// report an error when fusermount is called.
func mount(f fs.Fs, mountpoint string) (<-chan error, error) {
	c, err := fuse.Mount(mountpoint)
	if err != nil {
		return nil, err
	}

	filesys := &FS{
		f: f,
	}

	// Serve the mount point in the background returning error to errChan
	errChan := make(chan error, 1)
	go func() {
		err := fusefs.Serve(c, filesys)
		closeErr := c.Close()
		if err == nil {
			err = closeErr
		}
		errChan <- err
	}()

	// check if the mount process has an error to report
	<-c.Ready
	if err := c.MountError; err != nil {
		return nil, err
	}

	return errChan, nil
}
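Given the mount helper above, a caller starts the mount and then blocks on the returned channel, which delivers the serve result when fusermount unmounts the filesystem. A hedged sketch of such a caller (this runMount function is hypothetical and not part of the changeset; fs.NewFs is used as in cmd.go above):

package mount

import (
	"log"

	"github.com/ncw/rclone/fs"
)

// runMount illustrates how mount() is driven: start the mount, then
// block on the error channel until the filesystem is unmounted.
func runMount(remote, mountpoint string) {
	f, err := fs.NewFs(remote)
	if err != nil {
		log.Fatalf("Failed to open %q: %v", remote, err)
	}
	errChan, err := mount(f, mountpoint)
	if err != nil {
		log.Fatalf("mount failed: %v", err)
	}
	// Delivered when fusermount -u runs or the serve loop errors.
	if err := <-errChan; err != nil {
		log.Fatalf("serve failed: %v", err)
	}
}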
264
cmd/mount/fs_test.go
Normal file
@@ -0,0 +1,264 @@
// +build linux darwin freebsd

// Test suite for rclonefs

package mount

import (
	"flag"
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"os/exec"
	"path"
	"strings"
	"testing"

	"github.com/ncw/rclone/fs"
	_ "github.com/ncw/rclone/fs/all"
	"github.com/ncw/rclone/fstest"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// Globals
var (
	RemoteName      = flag.String("remote", "", "Remote to test with, defaults to local filesystem")
	SubDir          = flag.Bool("subdir", false, "Set to test with a sub directory")
	Verbose         = flag.Bool("verbose", false, "Set to enable logging")
	DumpHeaders     = flag.Bool("dump-headers", false, "Set to dump headers (needs -verbose)")
	DumpBodies      = flag.Bool("dump-bodies", false, "Set to dump bodies (needs -verbose)")
	Individual      = flag.Bool("individual", false, "Make individual bucket/container/directory for each test - much slower")
	LowLevelRetries = flag.Int("low-level-retries", 10, "Number of low level retries")
)

// TestMain drives the tests
func TestMain(m *testing.M) {
	flag.Parse()
	run = newRun()
	rc := m.Run()
	run.Finalise()
	os.Exit(rc)
}

// Run holds the remotes for a test run
type Run struct {
	mountPath    string
	fremote      fs.Fs
	fremoteName  string
	cleanRemote  func()
	umountResult <-chan error
	skip         bool
}

// run holds the master Run data
var run *Run

// newRun initialise the remote mount for testing and returns a run
// object.
//
// r.fremote is an empty remote Fs
//
// Finalise() will tidy them away when done.
func newRun() *Run {
	r := &Run{
		umountResult: make(chan error, 1),
	}

	// Never ask for passwords, fail instead.
	// If your local config is encrypted set environment variable
	// "RCLONE_CONFIG_PASS=hunter2" (or your password)
	*fs.AskPassword = false
	fs.LoadConfig()
	fs.Config.Verbose = *Verbose
	fs.Config.Quiet = !*Verbose
	fs.Config.DumpHeaders = *DumpHeaders
	fs.Config.DumpBodies = *DumpBodies
	fs.Config.LowLevelRetries = *LowLevelRetries
	var err error
	r.fremote, r.fremoteName, r.cleanRemote, err = fstest.RandomRemote(*RemoteName, *SubDir)
	if err != nil {
		log.Fatalf("Failed to open remote %q: %v", *RemoteName, err)
	}

	r.mountPath, err = ioutil.TempDir("", "rclonefs-mount")
	if err != nil {
		log.Fatalf("Failed to create mount dir: %v", err)
	}

	// Mount it up
	r.mount()

	return r
}

func (r *Run) mount() {
	log.Printf("mount %q %q", r.fremote, r.mountPath)
	var err error
	r.umountResult, err = mount(r.fremote, r.mountPath)
	if err != nil {
		log.Printf("mount failed: %v", err)
		r.skip = true
	}
	log.Printf("mount OK")
}

func (r *Run) umount() {
	if r.skip {
		log.Printf("FUSE not found so skipping umount")
		return
	}
	log.Printf("Calling fusermount -u %q", r.mountPath)
	err := exec.Command("fusermount", "-u", r.mountPath).Run()
	if err != nil {
		log.Printf("fusermount failed: %v", err)
	}
	log.Printf("Waiting for umount")
	err = <-r.umountResult
	if err != nil {
		log.Fatalf("umount failed: %v", err)
	}
}

func (r *Run) skipIfNoFUSE(t *testing.T) {
	if r.skip {
		t.Skip("FUSE not found so skipping test")
	}
}

// Finalise cleans the remote and unmounts
func (r *Run) Finalise() {
	r.umount()
	r.cleanRemote()
	err := os.RemoveAll(r.mountPath)
	if err != nil {
		log.Printf("Failed to clean mountPath %q: %v", r.mountPath, err)
	}
}

func (r *Run) path(filepath string) string {
	return path.Join(run.mountPath, filepath)
}

type dirMap map[string]struct{}

// Create a dirMap from a string
func newDirMap(dirString string) (dm dirMap) {
	dm = make(dirMap)
	for _, entry := range strings.Split(dirString, "|") {
		if entry != "" {
			dm[entry] = struct{}{}
		}
	}
	return dm
}

// Returns a dirmap with only the files in
func (dm dirMap) filesOnly() dirMap {
	newDm := make(dirMap)
	for name := range dm {
		if !strings.HasSuffix(name, "/") {
			newDm[name] = struct{}{}
		}
	}
	return newDm
}

// reads the local tree into dir
func (r *Run) readLocal(t *testing.T, dir dirMap, filepath string) {
	realPath := r.path(filepath)
	files, err := ioutil.ReadDir(realPath)
|
||||
require.NoError(t, err)
|
||||
for _, fi := range files {
|
||||
name := path.Join(filepath, fi.Name())
|
||||
if fi.IsDir() {
|
||||
dir[name+"/"] = struct{}{}
|
||||
r.readLocal(t, dir, name)
|
||||
assert.Equal(t, fi.Mode().Perm(), os.FileMode(dirPerms))
|
||||
} else {
|
||||
dir[fmt.Sprintf("%s %d", name, fi.Size())] = struct{}{}
|
||||
assert.Equal(t, fi.Mode().Perm(), os.FileMode(filePerms))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// reads the remote tree into dir
|
||||
func (r *Run) readRemote(t *testing.T, dir dirMap, filepath string) {
|
||||
objs, dirs, err := fs.NewLister().SetLevel(1).Start(r.fremote, filepath).GetAll()
|
||||
if err == fs.ErrorDirNotFound {
|
||||
return
|
||||
}
|
||||
require.NoError(t, err)
|
||||
for _, obj := range objs {
|
||||
dir[fmt.Sprintf("%s %d", obj.Remote(), obj.Size())] = struct{}{}
|
||||
}
|
||||
for _, d := range dirs {
|
||||
name := d.Remote()
|
||||
dir[name+"/"] = struct{}{}
|
||||
r.readRemote(t, dir, name)
|
||||
}
|
||||
}
|
||||
|
||||
// checkDir checks the local and remote against the string passed in
|
||||
func (r *Run) checkDir(t *testing.T, dirString string) {
|
||||
dm := newDirMap(dirString)
|
||||
localDm := make(dirMap)
|
||||
r.readLocal(t, localDm, "")
|
||||
remoteDm := make(dirMap)
|
||||
r.readRemote(t, remoteDm, "")
|
||||
// Ignore directories for remote compare
|
||||
assert.Equal(t, dm.filesOnly(), remoteDm.filesOnly(), "expected vs remote")
|
||||
assert.Equal(t, dm, localDm, "expected vs fuse mount")
|
||||
}
|
||||
|
||||
func (r *Run) createFile(t *testing.T, filepath string, contents string) {
|
||||
filepath = r.path(filepath)
|
||||
err := ioutil.WriteFile(filepath, []byte(contents), 0600)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func (r *Run) readFile(t *testing.T, filepath string) string {
|
||||
filepath = r.path(filepath)
|
||||
result, err := ioutil.ReadFile(filepath)
|
||||
require.NoError(t, err)
|
||||
return string(result)
|
||||
}
|
||||
|
||||
func (r *Run) mkdir(t *testing.T, filepath string) {
|
||||
filepath = r.path(filepath)
|
||||
err := os.Mkdir(filepath, 0700)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func (r *Run) rm(t *testing.T, filepath string) {
|
||||
filepath = r.path(filepath)
|
||||
err := os.Remove(filepath)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func (r *Run) rmdir(t *testing.T, filepath string) {
|
||||
filepath = r.path(filepath)
|
||||
err := os.Remove(filepath)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
// Check that the Fs is mounted by seeing if the mountpoint is
|
||||
// in the mount output
|
||||
func TestMount(t *testing.T) {
|
||||
run.skipIfNoFUSE(t)
|
||||
|
||||
out, err := exec.Command("mount").Output()
|
||||
require.NoError(t, err)
|
||||
assert.Contains(t, string(out), run.mountPath)
|
||||
}
|
||||
|
||||
// Check root directory is present and correct
|
||||
func TestRoot(t *testing.T) {
|
||||
run.skipIfNoFUSE(t)
|
||||
|
||||
fi, err := os.Lstat(run.mountPath)
|
||||
require.NoError(t, err)
|
||||
assert.True(t, fi.IsDir())
|
||||
assert.Equal(t, fi.Mode().Perm(), os.FileMode(dirPerms))
|
||||
}
|
||||
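The dirString convention above is compact but easy to misread; here is a minimal, self-contained sketch (not part of the diff) of how newDirMap expands a checkDir expectation string. The input string is invented for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// Mirrors the newDirMap test helper above: "|"-separated entries,
// files written as "name size", directories with a trailing "/".
func newDirMap(dirString string) map[string]struct{} {
	dm := make(map[string]struct{})
	for _, entry := range strings.Split(dirString, "|") {
		if entry != "" {
			dm[entry] = struct{}{}
		}
	}
	return dm
}

func main() {
	// Hypothetical expectation: a 10 byte file, a directory, and a
	// 4 byte file inside that directory.
	dm := newDirMap("testfile 10|dir/|dir/file 4")
	for entry := range dm {
		fmt.Println(entry)
	}
}
```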
118 cmd/mount/mount.go Normal file
@@ -0,0 +1,118 @@
// Package mount implements a FUSE mounting system for rclone remotes.

// +build linux darwin freebsd

package mount

import (
	"bazil.org/fuse"
	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/pkg/errors"
	"github.com/spf13/cobra"
)

// Globals
var (
	noModTime = false
	debugFUSE = false
)

func init() {
	cmd.Root.AddCommand(mountCmd)
	mountCmd.Flags().BoolVarP(&noModTime, "no-modtime", "", false, "Don't read the modification time (can speed things up).")
	mountCmd.Flags().BoolVarP(&debugFUSE, "debug-fuse", "", false, "Debug the FUSE internals - needs -v.")
}

var mountCmd = &cobra.Command{
	Use:   "mount remote:path /path/to/mountpoint",
	Short: `Mount the remote as a mountpoint. **EXPERIMENTAL**`,
	Long: `
rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's
cloud storage systems as a file system with FUSE.

This is **EXPERIMENTAL** - use with care.

First set up your remote using ` + "`rclone config`" + `. Check it works with ` + "`rclone ls`" + ` etc.

Start the mount like this

    rclone mount remote:path/to/files /path/to/local/mount &

Stop the mount with

    fusermount -u /path/to/local/mount

Or with OS X

    umount /path/to/local/mount

### Limitations ###

This can only read files sequentially, or write files sequentially. It
can't read and write or seek in files.

rclonefs inherits rclone's directory handling. In rclone's world
directories don't really exist. This means that empty directories
will have a tendency to disappear once they fall out of the directory
cache.

The bucket based FSes (eg swift, s3, google cloud storage, b2) won't
work from the root - you will need to specify a bucket, or a path
within the bucket. So ` + "`swift:`" + ` won't work whereas ` + "`swift:bucket`" + ` will
as will ` + "`swift:bucket/path`" + `.

Only supported on Linux, FreeBSD and OS X at the moment.

### rclone mount vs rclone sync/copy ###

File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone mount
can't use retries in the same way without making local copies of the
uploads. This might happen in the future, but for the moment rclone
mount won't do that, so it will be less reliable than the rclone command.

### Bugs ###

* All the remotes should work for read, but some may not for write
  * those which need to know the size in advance won't - eg B2
  * maybe should pass in size as -1 to mean work it out

### TODO ###

* Check hashes on upload/download
* Preserve timestamps
* Move directories
`,
	RunE: func(command *cobra.Command, args []string) error {
		cmd.CheckArgs(2, 2, command, args)
		fdst := cmd.NewFsDst(args)
		return Mount(fdst, args[1])
	},
}

// Mount mounts the remote at mountpoint.
//
// If noModTime is set the mount won't read the modification time of
// objects (which can speed things up).
func Mount(f fs.Fs, mountpoint string) error {
	if debugFUSE {
		fuse.Debug = func(msg interface{}) {
			fs.Debug("fuse", "%v", msg)
		}
	}

	// Mount it
	errChan, err := mount(f, mountpoint)
	if err != nil {
		return errors.Wrap(err, "failed to mount FUSE fs")
	}

	// Wait for umount
	err = <-errChan
	if err != nil {
		return errors.Wrap(err, "failed to umount FUSE fs")
	}

	return nil
}
6 cmd/mount/mount_unsupported.go Normal file
@@ -0,0 +1,6 @@
// Build for mount for unsupported platforms to stop go complaining
// about "no buildable Go source files"

// +build !linux,!darwin,!freebsd

package mount
130 cmd/mount/read.go Normal file
@@ -0,0 +1,130 @@
// +build linux darwin freebsd

package mount

import (
	"io"
	"sync"

	"bazil.org/fuse"
	fusefs "bazil.org/fuse/fs"
	"github.com/ncw/rclone/fs"
	"golang.org/x/net/context"
)

// ReadFileHandle is an open for read file handle on a File
type ReadFileHandle struct {
	mu         sync.Mutex
	closed     bool // set if handle has been closed
	r          io.ReadCloser
	o          fs.Object
	readCalled bool // set if read has been called
}

func newReadFileHandle(o fs.Object) (*ReadFileHandle, error) {
	r, err := o.Open()
	if err != nil {
		return nil, err
	}
	return &ReadFileHandle{
		r: r,
		o: o,
	}, nil
}

// Check interface satisfied
var _ fusefs.Handle = (*ReadFileHandle)(nil)

// Check interface satisfied
var _ fusefs.HandleReader = (*ReadFileHandle)(nil)

// Read from the file handle
func (fh *ReadFileHandle) Read(ctx context.Context, req *fuse.ReadRequest, resp *fuse.ReadResponse) error {
	fs.Debug(fh.o, "ReadFileHandle.Read")
	if fh.closed {
		fs.ErrorLog(fh.o, "ReadFileHandle.Read error: %v", errClosedFileHandle)
		return errClosedFileHandle
	}
	fh.readCalled = true
	// We don't actually enforce Offset to match where the previous read
	// ended. Maybe we should, but that would mean we'd need to track
	// it. The kernel *should* do it for us, based on the
	// fuse.OpenNonSeekable flag.
	//
	// One exception to the above is if we fail to fully populate a
	// page cache page; a read into page cache is always page aligned.
	// Make sure we never serve a partial read, to avoid that.
	buf := make([]byte, req.Size)
	n, err := io.ReadFull(fh.r, buf)
	if err == io.ErrUnexpectedEOF || err == io.EOF {
		err = nil
	}
	resp.Data = buf[:n]
	if err != nil {
		fs.ErrorLog(fh.o, "ReadFileHandle.Read error: %v", err)
	} else {
		fs.Debug(fh.o, "ReadFileHandle.Read OK")
	}
	return err
}

// close the file handle returning errClosedFileHandle if it has been
// closed already.
//
// Must be called with fh.mu held
func (fh *ReadFileHandle) close() error {
	if fh.closed {
		return errClosedFileHandle
	}
	fh.closed = true
	return fh.r.Close()
}

// Check interface satisfied
var _ fusefs.HandleFlusher = (*ReadFileHandle)(nil)

// Flush is called each time the file or directory is closed.
// Because there can be multiple file descriptors referring to a
// single opened file, Flush can be called multiple times.
func (fh *ReadFileHandle) Flush(ctx context.Context, req *fuse.FlushRequest) error {
	fh.mu.Lock()
	defer fh.mu.Unlock()
	fs.Debug(fh.o, "ReadFileHandle.Flush")
	// If Read hasn't been called then ignore the Flush - Release
	// will pick it up
	if !fh.readCalled {
		fs.Debug(fh.o, "ReadFileHandle.Flush ignoring flush on unread handle")
		return nil
	}
	err := fh.close()
	if err != nil {
		fs.ErrorLog(fh.o, "ReadFileHandle.Flush error: %v", err)
		return err
	}
	fs.Debug(fh.o, "ReadFileHandle.Flush OK")
	return nil
}

var _ fusefs.HandleReleaser = (*ReadFileHandle)(nil)

// Release is called when we are finished with the file handle
//
// It isn't called directly from userspace so the error is ignored by
// the kernel
func (fh *ReadFileHandle) Release(ctx context.Context, req *fuse.ReleaseRequest) error {
	fh.mu.Lock()
	defer fh.mu.Unlock()
	if fh.closed {
		fs.Debug(fh.o, "ReadFileHandle.Release nothing to do")
		return nil
	}
	fs.Debug(fh.o, "ReadFileHandle.Release closing")
	err := fh.close()
	if err != nil {
		fs.ErrorLog(fh.o, "ReadFileHandle.Release error: %v", err)
	} else {
		fs.Debug(fh.o, "ReadFileHandle.Release OK")
	}
	return err
}
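ReadFileHandle.Read above leans on io.ReadFull to avoid serving partial reads into the page cache, and its semantics are worth spelling out: a short read returns io.ErrUnexpectedEOF together with the bytes it did get, and plain io.EOF only ever comes with n == 0. A minimal standalone sketch (not part of the diff):

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

func main() {
	r := strings.NewReader("hello")
	buf := make([]byte, 8)

	// Short read: fills what it can, reports ErrUnexpectedEOF.
	n, err := io.ReadFull(r, buf)
	fmt.Println(n, err) // 5 unexpected EOF

	// Nothing left: n is 0 and the error is plain EOF.
	n, err = io.ReadFull(r, buf)
	fmt.Println(n, err) // 0 EOF
}
```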
79 cmd/mount/read_test.go Normal file
@@ -0,0 +1,79 @@
// +build linux darwin freebsd

package mount

import (
	"io"
	"os"
	"syscall"
	"testing"

	"github.com/stretchr/testify/assert"
)

// Read byte by byte, including the case of reading no bytes at all
func TestReadByByte(t *testing.T) {
	run.skipIfNoFUSE(t)

	var data = []byte("hellohello")
	run.createFile(t, "testfile", string(data))
	run.checkDir(t, "testfile 10")

	for i := 0; i < len(data); i++ {
		fd, err := os.Open(run.path("testfile"))
		assert.NoError(t, err)
		for j := 0; j < i; j++ {
			buf := make([]byte, 1)
			n, err := io.ReadFull(fd, buf)
			assert.NoError(t, err)
			assert.Equal(t, 1, n)
			assert.Equal(t, buf[0], data[j])
		}
		err = fd.Close()
		assert.NoError(t, err)
	}

	run.rm(t, "testfile")
}

// Test double close
func TestReadFileDoubleClose(t *testing.T) {
	run.skipIfNoFUSE(t)

	run.createFile(t, "testdoubleclose", "hello")

	in, err := os.Open(run.path("testdoubleclose"))
	assert.NoError(t, err)
	fd := in.Fd()

	fd1, err := syscall.Dup(int(fd))
	assert.NoError(t, err)

	fd2, err := syscall.Dup(int(fd))
	assert.NoError(t, err)

	// close one of the dups - should produce no error
	err = syscall.Close(fd1)
	assert.NoError(t, err)

	// read from the file
	buf := make([]byte, 1)
	_, err = in.Read(buf)
	assert.NoError(t, err)

	// close it
	err = in.Close()
	assert.NoError(t, err)

	// read from the other dup - should produce no error as this
	// file is now buffered
	n, err := syscall.Read(fd2, buf)
	assert.NoError(t, err)
	assert.Equal(t, 1, n)

	// close the dup - should produce an error
	err = syscall.Close(fd2)
	assert.Error(t, err, "input/output error")

	run.rm(t, "testdoubleclose")
}
157 cmd/mount/write.go Normal file
@@ -0,0 +1,157 @@
// +build linux darwin freebsd

package mount

import (
	"errors"
	"io"
	"sync"

	"bazil.org/fuse"
	fusefs "bazil.org/fuse/fs"
	"github.com/ncw/rclone/fs"
	"golang.org/x/net/context"
)

var errClosedFileHandle = errors.New("Attempt to use closed file handle")

// WriteFileHandle is an open for write handle on a File
type WriteFileHandle struct {
	mu          sync.Mutex
	closed      bool // set if handle has been closed
	remote      string
	pipeReader  *io.PipeReader
	pipeWriter  *io.PipeWriter
	o           fs.Object
	result      chan error
	file        *File
	writeCalled bool // set the first time Write() is called
}

// Check interface satisfied
var _ fusefs.Handle = (*WriteFileHandle)(nil)

func newWriteFileHandle(d *Dir, f *File, src fs.ObjectInfo) (*WriteFileHandle, error) {
	fh := &WriteFileHandle{
		remote: src.Remote(),
		result: make(chan error, 1),
		file:   f,
	}
	fh.pipeReader, fh.pipeWriter = io.Pipe()
	go func() {
		o, err := d.f.Put(fh.pipeReader, src)
		fh.o = o
		fh.result <- err
	}()
	fh.file.addWriters(1)
	return fh, nil
}

// Check interface satisfied
var _ fusefs.HandleWriter = (*WriteFileHandle)(nil)

// Write data to the file handle
func (fh *WriteFileHandle) Write(ctx context.Context, req *fuse.WriteRequest, resp *fuse.WriteResponse) error {
	fs.Debug(fh.remote, "WriteFileHandle.Write len=%d", len(req.Data))
	fh.mu.Lock()
	defer fh.mu.Unlock()
	if fh.closed {
		fs.ErrorLog(fh.remote, "WriteFileHandle.Write error: %v", errClosedFileHandle)
		return errClosedFileHandle
	}
	fh.writeCalled = true
	// FIXME should probably check the file isn't being seeked?
	n, err := fh.pipeWriter.Write(req.Data)
	resp.Size = n
	fh.file.written(int64(n))
	if err != nil {
		fs.ErrorLog(fh.remote, "WriteFileHandle.Write error: %v", err)
		return err
	}
	fs.Debug(fh.remote, "WriteFileHandle.Write OK (%d bytes written)", n)
	return nil
}

// close the file handle returning errClosedFileHandle if it has been
// closed already.
//
// Must be called with fh.mu held
func (fh *WriteFileHandle) close() error {
	if fh.closed {
		return errClosedFileHandle
	}
	fh.closed = true
	fh.file.addWriters(-1)
	writeCloseErr := fh.pipeWriter.Close()
	err := <-fh.result
	readCloseErr := fh.pipeReader.Close()
	if err == nil {
		fh.file.setObject(fh.o)
		err = writeCloseErr
	}
	if err == nil {
		err = readCloseErr
	}
	return err
}

// Check interface satisfied
var _ fusefs.HandleFlusher = (*WriteFileHandle)(nil)

// Flush is called on each close() of a file descriptor. So if a
// filesystem wants to return write errors in close() and the file has
// cached dirty data, this is a good place to write back data and
// return any errors. Since many applications ignore close() errors
// this is not always useful.
//
// NOTE: The flush() method may be called more than once for each
// open(). This happens if more than one file descriptor refers to an
// opened file due to dup(), dup2() or fork() calls. It is not
// possible to determine if a flush is final, so each flush should be
// treated equally. Multiple write-flush sequences are relatively
// rare, so this shouldn't be a problem.
//
// Filesystems shouldn't assume that flush will always be called after
// some writes, or that it will be called at all.
func (fh *WriteFileHandle) Flush(ctx context.Context, req *fuse.FlushRequest) error {
	fh.mu.Lock()
	defer fh.mu.Unlock()
	fs.Debug(fh.remote, "WriteFileHandle.Flush")
	// If Write hasn't been called then ignore the Flush - Release
	// will pick it up
	if !fh.writeCalled {
		fs.Debug(fh.remote, "WriteFileHandle.Flush ignoring flush on unwritten handle")
		return nil
	}
	err := fh.close()
	if err != nil {
		fs.ErrorLog(fh.remote, "WriteFileHandle.Flush error: %v", err)
	} else {
		fs.Debug(fh.remote, "WriteFileHandle.Flush OK")
	}
	return err
}

var _ fusefs.HandleReleaser = (*WriteFileHandle)(nil)

// Release is called when we are finished with the file handle
//
// It isn't called directly from userspace so the error is ignored by
// the kernel
func (fh *WriteFileHandle) Release(ctx context.Context, req *fuse.ReleaseRequest) error {
	fh.mu.Lock()
	defer fh.mu.Unlock()
	if fh.closed {
		fs.Debug(fh.remote, "WriteFileHandle.Release nothing to do")
		return nil
	}
	fs.Debug(fh.remote, "WriteFileHandle.Release closing")
	err := fh.close()
	if err != nil {
		fs.ErrorLog(fh.remote, "WriteFileHandle.Release error: %v", err)
	} else {
		fs.Debug(fh.remote, "WriteFileHandle.Release OK")
	}
	return err
}
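newWriteFileHandle above streams FUSE writes straight to the remote: a single Put runs in a goroutine reading from an io.Pipe while Write feeds the other end, and close() unblocks the upload by closing the writer and collecting the result. A minimal, self-contained sketch of that pattern (ioutil.ReadAll stands in for the remote Put):

```go
package main

import (
	"fmt"
	"io"
	"io/ioutil"
)

func main() {
	pr, pw := io.Pipe()
	result := make(chan error, 1)

	// Consumer goroutine - stands in for d.f.Put(fh.pipeReader, src).
	go func() {
		data, err := ioutil.ReadAll(pr)
		fmt.Printf("uploaded %d bytes\n", len(data))
		result <- err
	}()

	// Producer side - stands in for WriteFileHandle.Write.
	_, _ = pw.Write([]byte("some file data"))

	// Closing the write end delivers EOF to the reader, letting the
	// upload finish; then we collect its error, as close() does above.
	_ = pw.Close()
	fmt.Println("upload error:", <-result)
}
```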
103 cmd/mount/write_test.go Normal file
@@ -0,0 +1,103 @@
// +build linux darwin freebsd

package mount

import (
	"os"
	"syscall"
	"testing"

	"github.com/stretchr/testify/assert"
)

// Test writing a file with no write()'s to it
func TestWriteFileNoWrite(t *testing.T) {
	run.skipIfNoFUSE(t)

	fd, err := os.Create(run.path("testnowrite"))
	assert.NoError(t, err)

	err = fd.Close()
	assert.NoError(t, err)

	run.checkDir(t, "testnowrite 0")

	run.rm(t, "testnowrite")
}

// Test open file in directory listing
func FIXMETestWriteOpenFileInDirListing(t *testing.T) {
	run.skipIfNoFUSE(t)

	fd, err := os.Create(run.path("testnowrite"))
	assert.NoError(t, err)

	run.checkDir(t, "testnowrite 0")

	err = fd.Close()
	assert.NoError(t, err)

	run.rm(t, "testnowrite")
}

// Test writing a file and reading it back
func TestWriteFileWrite(t *testing.T) {
	run.skipIfNoFUSE(t)

	run.createFile(t, "testwrite", "data")
	run.checkDir(t, "testwrite 4")
	contents := run.readFile(t, "testwrite")
	assert.Equal(t, "data", contents)
	run.rm(t, "testwrite")
}

// Test overwriting a file
func TestWriteFileOverwrite(t *testing.T) {
	run.skipIfNoFUSE(t)

	run.createFile(t, "testwrite", "data")
	run.checkDir(t, "testwrite 4")
	run.createFile(t, "testwrite", "potato")
	contents := run.readFile(t, "testwrite")
	assert.Equal(t, "potato", contents)
	run.rm(t, "testwrite")
}

// Test double close
func TestWriteFileDoubleClose(t *testing.T) {
	run.skipIfNoFUSE(t)

	out, err := os.Create(run.path("testdoubleclose"))
	assert.NoError(t, err)
	fd := out.Fd()

	fd1, err := syscall.Dup(int(fd))
	assert.NoError(t, err)

	fd2, err := syscall.Dup(int(fd))
	assert.NoError(t, err)

	// close one of the dups - should produce no error
	err = syscall.Close(fd1)
	assert.NoError(t, err)

	// write to the file
	buf := []byte("hello")
	n, err := out.Write(buf)
	assert.NoError(t, err)
	assert.Equal(t, 5, n)

	// close it
	err = out.Close()
	assert.NoError(t, err)

	// write to the other dup - should produce an error
	n, err = syscall.Write(fd2, buf)
	assert.Error(t, err, "input/output error")

	// close the dup - should produce an error
	err = syscall.Close(fd2)
	assert.Error(t, err, "input/output error")

	run.rm(t, "testdoubleclose")
}
40 cmd/move/move.go Normal file
@@ -0,0 +1,40 @@
package move

import (
	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(moveCmd)
}

var moveCmd = &cobra.Command{
	Use:   "move source:path dest:path",
	Short: `Move files from source to dest.`,
	Long: `
Moves the contents of the source directory to the destination
directory. Rclone will error if the source and destination overlap.

If no filters are in use and if possible this will server side move
` + "`" + `source:path` + "`" + ` into ` + "`" + `dest:path` + "`" + `. After this ` + "`" + `source:path` + "`" + ` will no
longer exist.

Otherwise for each file in ` + "`" + `source:path` + "`" + ` selected by the filters (if
any) this will move it into ` + "`" + `dest:path` + "`" + `. If possible a server side
move will be used, otherwise it will copy it (server side if possible)
into ` + "`" + `dest:path` + "`" + ` then delete the original (if no errors on copy) in
` + "`" + `source:path` + "`" + `.

**Important**: Since this can cause data loss, test first with the
--dry-run flag.
`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(2, 2, command, args)
		fsrc, fdst := cmd.NewFsSrcDst(args)
		cmd.Run(true, command, func() error {
			return fs.MoveDir(fdst, fsrc)
		})
	},
}
28 cmd/purge/purge.go Normal file
@@ -0,0 +1,28 @@
package purge

import (
	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(purgeCmd)
}

var purgeCmd = &cobra.Command{
	Use:   "purge remote:path",
	Short: `Remove the path and all of its contents.`,
	Long: `
Remove the path and all of its contents. Note that this does not obey
include/exclude filters - everything will be removed. Use ` + "`" + `delete` + "`" + ` if
you want to selectively delete files.
`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		fdst := cmd.NewFsDst(args)
		cmd.Run(true, command, func() error {
			return fs.Purge(fdst)
		})
	},
}
@@ -1,5 +1,5 @@ cmd/rclone_test.go
 // Tests for rclone
-package main
+package cmd
 
 import (
 	"testing"
16 cmd/redirect_stderr.go Normal file
@@ -0,0 +1,16 @@
// Log the panic to the log file - for OSes which can't do this

// +build !windows,!darwin,!dragonfly,!freebsd,!linux,!nacl,!netbsd,!openbsd

package cmd

import (
	"os"

	"github.com/ncw/rclone/fs"
)

// redirectStderr to the file passed in
func redirectStderr(f *os.File) {
	fs.ErrorLog(nil, "Can't redirect stderr to file")
}
19 cmd/redirect_stderr_unix.go Normal file
@@ -0,0 +1,19 @@
// Log the panic under unix to the log file

// +build darwin dragonfly freebsd linux nacl netbsd openbsd

package cmd

import (
	"log"
	"os"
	"syscall"
)

// redirectStderr to the file passed in
func redirectStderr(f *os.File) {
	err := syscall.Dup2(int(f.Fd()), int(os.Stderr.Fd()))
	if err != nil {
		log.Fatalf("Failed to redirect stderr to file: %v", err)
	}
}
39 cmd/redirect_stderr_windows.go Normal file
@@ -0,0 +1,39 @@
// Log the panic under windows to the log file
//
// Code from minix, via
//
// http://play.golang.org/p/kLtct7lSUg

// +build windows

package cmd

import (
	"log"
	"os"
	"syscall"
)

var (
	kernel32         = syscall.MustLoadDLL("kernel32.dll")
	procSetStdHandle = kernel32.MustFindProc("SetStdHandle")
)

func setStdHandle(stdhandle int32, handle syscall.Handle) error {
	r0, _, e1 := syscall.Syscall(procSetStdHandle.Addr(), 2, uintptr(stdhandle), uintptr(handle), 0)
	if r0 == 0 {
		if e1 != 0 {
			return error(e1)
		}
		return syscall.EINVAL
	}
	return nil
}

// redirectStderr to the file passed in
func redirectStderr(f *os.File) {
	err := setStdHandle(syscall.STD_ERROR_HANDLE, syscall.Handle(f.Fd()))
	if err != nil {
		log.Fatalf("Failed to redirect stderr to file: %v", err)
	}
}
26 cmd/rmdir/rmdir.go Normal file
@@ -0,0 +1,26 @@
package rmdir

import (
	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(rmdirCmd)
}

var rmdirCmd = &cobra.Command{
	Use:   "rmdir remote:path",
	Short: `Remove the path if empty.`,
	Long: `
Remove the path. Note that you can't remove a path with
objects in it, use purge for that.`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		fdst := cmd.NewFsDst(args)
		cmd.Run(true, command, func() error {
			return fs.Rmdir(fdst)
		})
	},
}
29 cmd/sha1sum/sha1sum.go Normal file
@@ -0,0 +1,29 @@
package sha1sum

import (
	"os"

	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(sha1sumCmd)
}

var sha1sumCmd = &cobra.Command{
	Use:   "sha1sum remote:path",
	Short: `Produces an sha1sum file for all the objects in the path.`,
	Long: `
Produces an sha1sum file for all the objects in the path. This
is in the same format as the standard sha1sum tool produces.
`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		fsrc := cmd.NewFsSrc(args)
		cmd.Run(false, command, func() error {
			return fs.Sha1sum(fsrc, os.Stdout)
		})
	},
}
31 cmd/size/size.go Normal file
@@ -0,0 +1,31 @@
package size

import (
	"fmt"

	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(sizeCmd)
}

var sizeCmd = &cobra.Command{
	Use:   "size remote:path",
	Short: `Prints the total size and number of objects in remote:path.`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		fsrc := cmd.NewFsSrc(args)
		cmd.Run(false, command, func() error {
			objects, size, err := fs.Count(fsrc)
			if err != nil {
				return err
			}
			fmt.Printf("Total objects: %d\n", objects)
			fmt.Printf("Total size: %s (%d Bytes)\n", fs.SizeSuffix(size).Unit("Bytes"), size)
			return nil
		})
	},
}
43 cmd/sync/sync.go Normal file
@@ -0,0 +1,43 @@
package sync

import (
	"github.com/ncw/rclone/cmd"
	"github.com/ncw/rclone/fs"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(syncCmd)
}

var syncCmd = &cobra.Command{
	Use:   "sync source:path dest:path",
	Short: `Make source and dest identical, modifying destination only.`,
	Long: `
Sync the source to the destination, changing the destination
only. Doesn't transfer unchanged files, testing by size and
modification time or MD5SUM. Destination is updated to match
source, including deleting files if necessary.

**Important**: Since this can cause data loss, test first with the
` + "`" + `--dry-run` + "`" + ` flag to see exactly what would be copied and deleted.

Note that files in the destination won't be deleted if there were any
errors at any point.

It is always the contents of the directory that is synced, not the
directory itself, so when source:path is a directory, it's the contents of
source:path that are copied, not the directory name and contents. See
the extended explanation in the ` + "`" + `copy` + "`" + ` command above if unsure.

If dest:path doesn't exist, it is created and the source:path contents
go there.
`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(2, 2, command, args)
		fsrc, fdst := cmd.NewFsSrcDst(args)
		cmd.Run(true, command, func() error {
			return fs.Sync(fdst, fsrc)
		})
	},
}
19 cmd/version/version.go Normal file
@@ -0,0 +1,19 @@
package version

import (
	"github.com/ncw/rclone/cmd"
	"github.com/spf13/cobra"
)

func init() {
	cmd.Root.AddCommand(versionCmd)
}

var versionCmd = &cobra.Command{
	Use:   "version",
	Short: `Show the version number.`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(0, 0, command, args)
		cmd.ShowVersion()
	},
}
6 cmd/versioncheck.go Normal file
@@ -0,0 +1,6 @@
//+build !go1.5

package cmd

// Upgrade to Go version 1.5 to compile rclone.
func init() { Go_version_1_5_required_for_compilation() }
@@ -1,26 +1,62 @@
-#!/bin/sh
+#!/bin/bash
 
 set -e
 
 # This uses gox from https://github.com/mitchellh/gox
-# Make sure you've run gox -build-toolchain
+# Make sure you've run gox -build-toolchain - not required for go >= 1.5
+
+if [ "$1" == "" ]; then
+    echo "Syntax: $0 Version"
+    exit 1
+fi
+VERSION="$1"
 
 rm -rf build
 
-gox -output "build/{{.OS}}/{{.Arch}}/{{.Dir}}"
+# Disable CGO and dynamic builds on all platforms (including build platform)
+export CGO_ENABLED=0
 
-cat <<'#EOF' > build/README.txt
-This directory contains builds of the rclone program.
+# Arch pairs we build for
+# gox -osarch-list for definitive list
 
-Rclone is a program to transfer files to and from cloud storage
-systems such as Google Drive, Amazon S3 and Swift (Rackspace
-Cloudfiles).
+OSARCH="\
+windows/386
+windows/amd64
+darwin/386
+darwin/amd64
+linux/386
+linux/amd64
+linux/arm
+freebsd/386
+freebsd/amd64
+freebsd/arm
+netbsd/386
+netbsd/amd64
+netbsd/arm
+openbsd/386
+openbsd/amd64
+plan9/386
+plan9/amd64
+solaris/amd64"
 
-See the project website here: https://github.com/ncw/rclone for more
-details.
+# Make space separated
+OSARCH=${OSARCH//$'\n'/ }
 
-The files in this directory are organised by OS and processor type
+gox --ldflags "-s -X github.com/ncw/rclone/fs.Version=${VERSION}" -output "build/{{.Dir}}-${VERSION}-{{.OS}}-{{.Arch}}/{{.Dir}}" -osarch "${OSARCH}"
 
-#EOF
+mv build/rclone-${VERSION}-darwin-amd64 build/rclone-${VERSION}-osx-amd64
+mv build/rclone-${VERSION}-darwin-386 build/rclone-${VERSION}-osx-386
 
-mv build/darwin build/osx
+cd build
 
-( cd build ; tree . >> README.txt )
+for d in `ls`; do
+	cp -a ../MANUAL.txt $d/README.txt
+	cp -a ../MANUAL.html $d/README.html
+	cp -a ../rclone.1 $d/
+	zip -r9 $d.zip $d
+	d_current=${d/-${VERSION}/-current}
+	ln $d.zip $d_current.zip
+	rm -rf $d
+done
+
+cd ..
608 crypt/cipher.go Normal file
@@ -0,0 +1,608 @@
package crypt

import (
	"bytes"
	"crypto/aes"
	gocipher "crypto/cipher"
	"crypto/rand"
	"encoding/base32"
	"fmt"
	"io"
	"strings"
	"sync"
	"unicode/utf8"

	"github.com/ncw/rclone/crypt/pkcs7"
	"github.com/pkg/errors"

	"golang.org/x/crypto/nacl/secretbox"
	"golang.org/x/crypto/scrypt"

	"github.com/rfjakob/eme"
)

// Constants
const (
	nameCipherBlockSize = aes.BlockSize
	fileMagic           = "RCLONE\x00\x00"
	fileMagicSize       = len(fileMagic)
	fileNonceSize       = 24
	fileHeaderSize      = fileMagicSize + fileNonceSize
	blockHeaderSize     = secretbox.Overhead
	blockDataSize       = 64 * 1024
	blockSize           = blockHeaderSize + blockDataSize
	encryptedSuffix     = ".bin" // when file name encryption is off we add this suffix to make sure the cloud provider doesn't process the file
)

// Errors returned by cipher
var (
	ErrorBadDecryptUTF8          = errors.New("bad decryption - utf-8 invalid")
	ErrorBadDecryptControlChar   = errors.New("bad decryption - contains control chars")
	ErrorNotAMultipleOfBlocksize = errors.New("not a multiple of blocksize")
	ErrorTooShortAfterDecode     = errors.New("too short after base32 decode")
	ErrorEncryptedFileTooShort   = errors.New("file is too short to be encrypted")
	ErrorEncryptedFileBadHeader  = errors.New("file has truncated block header")
	ErrorEncryptedBadMagic       = errors.New("not an encrypted file - bad magic string")
	ErrorEncryptedBadBlock       = errors.New("failed to authenticate decrypted block - bad password?")
	ErrorBadBase32Encoding       = errors.New("bad base32 filename encoding")
	ErrorFileClosed              = errors.New("file already closed")
	ErrorNotAnEncryptedFile      = errors.New("not an encrypted file - no \"" + encryptedSuffix + "\" suffix")
	defaultSalt                  = []byte{0xA8, 0x0D, 0xF4, 0x3A, 0x8F, 0xBD, 0x03, 0x08, 0xA7, 0xCA, 0xB8, 0x3E, 0x58, 0x1F, 0x86, 0xB1}
)

// Global variables
var (
	fileMagicBytes = []byte(fileMagic)
)

// Cipher is used to swap out the encryption implementations
type Cipher interface {
	// EncryptFileName encrypts a file path
	EncryptFileName(string) string
	// DecryptFileName decrypts a file path, returns error if decrypt was invalid
	DecryptFileName(string) (string, error)
	// EncryptDirName encrypts a directory path
	EncryptDirName(string) string
	// DecryptDirName decrypts a directory path, returns error if decrypt was invalid
	DecryptDirName(string) (string, error)
	// EncryptData encrypts a data stream
	EncryptData(io.Reader) (io.Reader, error)
	// DecryptData decrypts a data stream
	DecryptData(io.ReadCloser) (io.ReadCloser, error)
	// EncryptedSize calculates the size of the data when encrypted
	EncryptedSize(int64) int64
	// DecryptedSize calculates the size of the data when decrypted
	DecryptedSize(int64) (int64, error)
}

// NameEncryptionMode is the type of file name encryption in use
type NameEncryptionMode int

// NameEncryptionMode levels
const (
	NameEncryptionOff NameEncryptionMode = iota
	NameEncryptionStandard
)

// NewNameEncryptionMode turns a string into a NameEncryptionMode
func NewNameEncryptionMode(s string) (mode NameEncryptionMode, err error) {
	s = strings.ToLower(s)
	switch s {
	case "off":
		mode = NameEncryptionOff
	case "standard":
		mode = NameEncryptionStandard
	default:
		err = errors.Errorf("Unknown file name encryption mode %q", s)
	}
	return mode, err
}

// String turns mode into a human readable string
func (mode NameEncryptionMode) String() (out string) {
	switch mode {
	case NameEncryptionOff:
		out = "off"
	case NameEncryptionStandard:
		out = "standard"
	default:
		out = fmt.Sprintf("Unknown mode #%d", mode)
	}
	return out
}

type cipher struct {
	dataKey    [32]byte                  // Key for secretbox
	nameKey    [32]byte                  // 16,24 or 32 bytes
	nameTweak  [nameCipherBlockSize]byte // used to tweak the name crypto
	block      gocipher.Block
	mode       NameEncryptionMode
	buffers    sync.Pool // encrypt/decrypt buffers
	cryptoRand io.Reader // read crypto random numbers from here
}

// newCipher initialises the cipher. If salt is "" then it uses a built in salt value
func newCipher(mode NameEncryptionMode, password, salt string) (*cipher, error) {
	c := &cipher{
		mode:       mode,
		cryptoRand: rand.Reader,
	}
	c.buffers.New = func() interface{} {
		return make([]byte, blockSize)
	}
	err := c.Key(password, salt)
	if err != nil {
		return nil, err
	}
	return c, nil
}

// Key creates all the internal keys from the password passed in using
// scrypt.
//
// If salt is "" we use a fixed salt just to make attackers' lives
// slightly harder than using no salt.
//
// Note that an empty password makes all 0x00 keys, which is used in the
// tests.
func (c *cipher) Key(password, salt string) (err error) {
	const keySize = len(c.dataKey) + len(c.nameKey) + len(c.nameTweak)
	var saltBytes = defaultSalt
	if salt != "" {
		saltBytes = []byte(salt)
	}
	var key []byte
	if password == "" {
		key = make([]byte, keySize)
	} else {
		key, err = scrypt.Key([]byte(password), saltBytes, 16384, 8, 1, keySize)
		if err != nil {
			return err
		}
	}
	copy(c.dataKey[:], key)
	copy(c.nameKey[:], key[len(c.dataKey):])
	copy(c.nameTweak[:], key[len(c.dataKey)+len(c.nameKey):])
	// Key the name cipher
	c.block, err = aes.NewCipher(c.nameKey[:])
	return err
}

// getBlock gets a block from the pool of size blockSize
func (c *cipher) getBlock() []byte {
	return c.buffers.Get().([]byte)
}

// putBlock returns a block to the pool of size blockSize
func (c *cipher) putBlock(buf []byte) {
	if len(buf) != blockSize {
		panic("bad blocksize returned to pool")
	}
	c.buffers.Put(buf)
}

// check to see that the byte string contains no control characters
// (0x00 to 0x1F and 0x7F) and is a valid UTF-8 string
func checkValidString(buf []byte) error {
	for i := range buf {
		c := buf[i]
		if c >= 0x00 && c < 0x20 || c == 0x7F {
			return ErrorBadDecryptControlChar
		}
	}
	if !utf8.Valid(buf) {
		return ErrorBadDecryptUTF8
	}
	return nil
}
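The key schedule above is one scrypt call sliced three ways: 32 bytes of data key, 32 bytes of name key and 16 bytes of name tweak, 80 bytes in all. A minimal standalone sketch (the password and salt here are made up; the scrypt parameters match the Key method above):

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/scrypt"
)

func main() {
	// 32 (data key) + 32 (name key) + 16 (name tweak) = 80 bytes,
	// derived in one scrypt call as in Key() above.
	key, err := scrypt.Key([]byte("hunter2"), []byte("some salt"), 16384, 8, 1, 80)
	if err != nil {
		panic(err)
	}
	dataKey, nameKey, nameTweak := key[:32], key[32:64], key[64:]
	fmt.Println(len(dataKey), len(nameKey), len(nameTweak)) // 32 32 16
}
```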
// encodeFileName encodes a filename using a modified version of
// standard base32 as described in RFC4648
//
// The standard encoding is modified in two ways
//  * it becomes lower case (no-one likes upper case filenames!)
//  * we strip the padding character `=`
func encodeFileName(in []byte) string {
	encoded := base32.HexEncoding.EncodeToString(in)
	encoded = strings.TrimRight(encoded, "=")
	return strings.ToLower(encoded)
}

// decodeFileName decodes a filename as encoded by encodeFileName
func decodeFileName(in string) ([]byte, error) {
	if strings.HasSuffix(in, "=") {
		return nil, ErrorBadBase32Encoding
	}
	// First figure out how many padding characters to add
	roundUpToMultipleOf8 := (len(in) + 7) &^ 7
	equals := roundUpToMultipleOf8 - len(in)
	in = strings.ToUpper(in) + "========"[:equals]
	return base32.HexEncoding.DecodeString(in)
}
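The padding arithmetic in decodeFileName is terse: `(len(in) + 7) &^ 7` rounds the length up to the next multiple of 8, and the difference is the number of `=` characters to restore. A standalone round trip (the input string is invented for illustration):

```go
package main

import (
	"encoding/base32"
	"fmt"
	"strings"
)

func main() {
	// Encode as above: base32hex, lower-cased, padding stripped.
	enc := strings.ToLower(strings.TrimRight(base32.HexEncoding.EncodeToString([]byte("hi")), "="))
	fmt.Println(enc) // d1kg

	// Decode: (4+7)&^7 = 8, so restore 8-4 = 4 "=" characters.
	equals := (len(enc)+7)&^7 - len(enc)
	out, err := base32.HexEncoding.DecodeString(strings.ToUpper(enc) + strings.Repeat("=", equals))
	fmt.Println(string(out), err) // hi <nil>
}
```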
// encryptSegment encrypts a path segment
//
// This uses EME with AES
//
// EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the
// 2003 paper "A Parallelizable Enciphering Mode" by Halevi and
// Rogaway.
//
// This makes for deterministic encryption which is what we want - the
// same filename must encrypt to the same thing.
//
// This means that
//  * identical filenames will encrypt identically
//  * filenames which start the same won't have a common prefix
func (c *cipher) encryptSegment(plaintext string) string {
	if plaintext == "" {
		return ""
	}
	paddedPlaintext := pkcs7.Pad(nameCipherBlockSize, []byte(plaintext))
	ciphertext := eme.Transform(c.block, c.nameTweak[:], paddedPlaintext, eme.DirectionEncrypt)
	return encodeFileName(ciphertext)
}

// decryptSegment decrypts a path segment
func (c *cipher) decryptSegment(ciphertext string) (string, error) {
	if ciphertext == "" {
		return "", nil
	}
	rawCiphertext, err := decodeFileName(ciphertext)
	if err != nil {
		return "", err
	}
	if len(rawCiphertext)%nameCipherBlockSize != 0 {
		return "", ErrorNotAMultipleOfBlocksize
	}
	if len(rawCiphertext) == 0 {
		// not possible if decodeFileName() is working correctly
		return "", ErrorTooShortAfterDecode
	}
	paddedPlaintext := eme.Transform(c.block, c.nameTweak[:], rawCiphertext, eme.DirectionDecrypt)
	plaintext, err := pkcs7.Unpad(nameCipherBlockSize, paddedPlaintext)
	if err != nil {
		return "", err
	}
	err = checkValidString(plaintext)
	if err != nil {
		return "", err
	}
	return string(plaintext), err
}

// encryptFileName encrypts a file path
func (c *cipher) encryptFileName(in string) string {
	segments := strings.Split(in, "/")
	for i := range segments {
		segments[i] = c.encryptSegment(segments[i])
	}
	return strings.Join(segments, "/")
}

// EncryptFileName encrypts a file path
func (c *cipher) EncryptFileName(in string) string {
	if c.mode == NameEncryptionOff {
		return in + encryptedSuffix
	}
	return c.encryptFileName(in)
}

// EncryptDirName encrypts a directory path
func (c *cipher) EncryptDirName(in string) string {
	if c.mode == NameEncryptionOff {
		return in
	}
	return c.encryptFileName(in)
}

// decryptFileName decrypts a file path
func (c *cipher) decryptFileName(in string) (string, error) {
	segments := strings.Split(in, "/")
	for i := range segments {
		var err error
		segments[i], err = c.decryptSegment(segments[i])
		if err != nil {
			return "", err
		}
	}
	return strings.Join(segments, "/"), nil
}

// DecryptFileName decrypts a file path
func (c *cipher) DecryptFileName(in string) (string, error) {
	if c.mode == NameEncryptionOff {
		remainingLength := len(in) - len(encryptedSuffix)
		if remainingLength > 0 && strings.HasSuffix(in, encryptedSuffix) {
			return in[:remainingLength], nil
		}
		return "", ErrorNotAnEncryptedFile
	}
	return c.decryptFileName(in)
}

// DecryptDirName decrypts a directory path
func (c *cipher) DecryptDirName(in string) (string, error) {
	if c.mode == NameEncryptionOff {
		return in, nil
	}
	return c.decryptFileName(in)
}
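Because encryptFileName works segment by segment, the directory structure of a path survives encryption: the number of "/" separators is preserved while each component is scrambled independently. A sketch with a hypothetical stand-in for encryptSegment (the real one is PKCS#7 padding plus EME under AES, as above):

```go
package main

import (
	"fmt"
	"strings"
)

// encryptSegment here is a made-up placeholder so the path handling
// is visible; it is not the real EME transform above.
func encryptSegment(s string) string {
	if s == "" {
		return ""
	}
	return "enc(" + s + ")"
}

func main() {
	in := "dir/sub/file.txt"
	segments := strings.Split(in, "/")
	for i := range segments {
		segments[i] = encryptSegment(segments[i])
	}
	// Same shape, scrambled components:
	fmt.Println(strings.Join(segments, "/")) // enc(dir)/enc(sub)/enc(file.txt)
}
```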
// nonce is an NACL secretbox nonce
type nonce [fileNonceSize]byte

// pointer returns the nonce as a *[24]byte for secretbox
func (n *nonce) pointer() *[fileNonceSize]byte {
	return (*[fileNonceSize]byte)(n)
}

// fromReader fills the nonce from an io.Reader - normally the OS's
// crypto random number generator
func (n *nonce) fromReader(in io.Reader) error {
	read, err := io.ReadFull(in, (*n)[:])
	if read != fileNonceSize {
		return errors.Wrap(err, "short read of nonce")
	}
	return nil
}

// fromBuf fills the nonce from the buffer passed in
func (n *nonce) fromBuf(buf []byte) {
	read := copy((*n)[:], buf)
	if read != fileNonceSize {
		panic("buffer too short to read nonce")
	}
}

// increment adds 1 to the nonce
func (n *nonce) increment() {
	for i := 0; i < len(*n); i++ {
		digit := (*n)[i]
		newDigit := digit + 1
		(*n)[i] = newDigit
		if newDigit >= digit {
			// exit if no carry
			break
		}
	}
}

// encrypter encrypts an io.Reader on the fly
type encrypter struct {
	in       io.Reader
	c        *cipher
	nonce    nonce
	buf      []byte
	readBuf  []byte
	bufIndex int
	bufSize  int
	err      error
}

// newEncrypter creates a new file handle encrypting on the fly
func (c *cipher) newEncrypter(in io.Reader) (*encrypter, error) {
	fh := &encrypter{
		in:      in,
		c:       c,
		buf:     c.getBlock(),
		readBuf: c.getBlock(),
		bufSize: fileHeaderSize,
	}
	// Initialise nonce
	err := fh.nonce.fromReader(c.cryptoRand)
	if err != nil {
		return nil, err
	}
	// Copy magic into buffer
	copy(fh.buf, fileMagicBytes)
	// Copy nonce into buffer
	copy(fh.buf[fileMagicSize:], fh.nonce[:])
	return fh, nil
}

// Read as per io.Reader
func (fh *encrypter) Read(p []byte) (n int, err error) {
	if fh.err != nil {
		return 0, fh.err
	}
	if fh.bufIndex >= fh.bufSize {
		// Read data
		// FIXME should overlap the reads with a go-routine and 2 buffers?
		readBuf := fh.readBuf[:blockDataSize]
		n, err = io.ReadFull(fh.in, readBuf)
		if err == io.EOF {
			// ReadFull only returns EOF when n == 0
			return fh.finish(io.EOF)
		} else if err == io.ErrUnexpectedEOF {
			// Next read will return EOF
		} else if err != nil {
			return fh.finish(err)
		}
		// Write nonce to start of block
		copy(fh.buf, fh.nonce[:])
		// Encrypt the block using the nonce
		block := fh.buf
		secretbox.Seal(block[:0], readBuf[:n], fh.nonce.pointer(), &fh.c.dataKey)
		fh.bufIndex = 0
		fh.bufSize = blockHeaderSize + n
		fh.nonce.increment()
	}
	n = copy(p, fh.buf[fh.bufIndex:fh.bufSize])
	fh.bufIndex += n
	return n, nil
}

// finish sets the final error and tidies up
func (fh *encrypter) finish(err error) (int, error) {
	if fh.err != nil {
		return 0, fh.err
	}
	fh.err = err
	fh.c.putBlock(fh.buf)
	fh.c.putBlock(fh.readBuf)
	return 0, err
}

// EncryptData encrypts the data stream
func (c *cipher) EncryptData(in io.Reader) (io.Reader, error) {
	out, err := c.newEncrypter(in)
	if err != nil {
		return nil, err
	}
	return out, nil
}
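The nonce is treated as a 24-byte little-endian counter: increment bumps byte 0 and only walks further while a carry propagates (the new digit wraps below the old one exactly when the old digit was 0xFF). A tiny standalone check of the carry behaviour:

```go
package main

import "fmt"

// increment mirrors nonce.increment() above for a [24]byte counter.
func increment(n *[24]byte) {
	for i := 0; i < len(n); i++ {
		digit := n[i]
		newDigit := digit + 1
		n[i] = newDigit
		if newDigit >= digit {
			// exit if no carry
			break
		}
	}
}

func main() {
	var n [24]byte
	n[0] = 0xFF
	increment(&n)
	fmt.Println(n[0], n[1]) // 0 1 - the carry moved into the next byte
}
```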
// decrypter decrypts an io.ReaderCloser on the fly
|
||||
type decrypter struct {
|
||||
rc io.ReadCloser
|
||||
nonce nonce
|
||||
c *cipher
|
||||
buf []byte
|
||||
readBuf []byte
|
||||
bufIndex int
|
||||
bufSize int
|
||||
err error
|
||||
}
|
||||
|
||||
// newDecrypter creates a new file handle decrypting on the fly
|
||||
func (c *cipher) newDecrypter(rc io.ReadCloser) (*decrypter, error) {
|
||||
fh := &decrypter{
|
||||
rc: rc,
|
||||
c: c,
|
||||
buf: c.getBlock(),
|
||||
readBuf: c.getBlock(),
|
||||
}
|
||||
// Read file header (magic + nonce)
|
||||
readBuf := fh.readBuf[:fileHeaderSize]
|
||||
_, err := io.ReadFull(fh.rc, readBuf)
|
||||
if err == io.EOF || err == io.ErrUnexpectedEOF {
|
||||
// This read from 0..fileHeaderSize-1 bytes
|
||||
return nil, fh.finishAndClose(ErrorEncryptedFileTooShort)
|
||||
} else if err != nil {
|
||||
return nil, fh.finishAndClose(err)
|
||||
}
|
||||
// check the magic
|
||||
if !bytes.Equal(readBuf[:fileMagicSize], fileMagicBytes) {
|
||||
return nil, fh.finishAndClose(ErrorEncryptedBadMagic)
|
||||
}
|
||||
// retreive the nonce
|
||||
fh.nonce.fromBuf(readBuf[fileMagicSize:])
|
||||
return fh, nil
|
||||
}
|
||||
|
||||
// Read as per io.Reader
|
||||
func (fh *decrypter) Read(p []byte) (n int, err error) {
|
||||
if fh.err != nil {
|
||||
return 0, fh.err
|
||||
}
|
||||
if fh.bufIndex >= fh.bufSize {
|
||||
// Read data
|
||||
// FIXME should overlap the reads with a go-routine and 2 buffers?
|
||||
readBuf := fh.readBuf
|
||||
n, err = io.ReadFull(fh.rc, readBuf)
|
||||
if err == io.EOF {
|
||||
// ReadFull only returns n=0 and EOF
|
||||
return 0, fh.finish(io.EOF)
|
||||
} else if err == io.ErrUnexpectedEOF {
|
||||
// Next read will return EOF
|
||||
} else if err != nil {
|
||||
return 0, fh.finish(err)
|
||||
}
|
||||
// Check header + 1 byte exists
|
||||
if n <= blockHeaderSize {
|
||||
return 0, fh.finish(ErrorEncryptedFileBadHeader)
|
||||
}
|
||||
// Decrypt the block using the nonce
|
||||
block := fh.buf
|
||||
_, ok := secretbox.Open(block[:0], readBuf[:n], fh.nonce.pointer(), &fh.c.dataKey)
|
||||
if !ok {
|
||||
return 0, fh.finish(ErrorEncryptedBadBlock)
|
||||
}
|
||||
fh.bufIndex = 0
|
||||
fh.bufSize = n - blockHeaderSize
|
||||
fh.nonce.increment()
|
||||
}
|
||||
n = copy(p, fh.buf[fh.bufIndex:fh.bufSize])
|
||||
fh.bufIndex += n
|
||||
return n, nil
|
||||
}
|
||||
|
||||
// finish sets the final error and tidies up
|
||||
func (fh *decrypter) finish(err error) error {
|
||||
if fh.err != nil {
|
||||
return fh.err
|
||||
}
|
||||
fh.err = err
|
||||
fh.c.putBlock(fh.buf)
|
||||
fh.c.putBlock(fh.readBuf)
|
||||
return err
|
||||
}
|
||||
|
||||
// Close
|
||||
func (fh *decrypter) Close() error {
|
||||
// Check already closed
|
||||
if fh.err == ErrorFileClosed {
|
||||
return fh.err
|
||||
}
|
||||
// Closed before reading EOF so not finish()ed yet
|
||||
if fh.err == nil {
|
||||
_ = fh.finish(io.EOF)
|
||||
}
|
||||
// Show file now closed
|
||||
fh.err = ErrorFileClosed
|
||||
return fh.rc.Close()
|
||||
}
|
||||
|
||||
// finishAndClose does finish then Close()
|
||||
//
|
||||
// Used when we are returning a nil fh from newDecrypter
|
||||
func (fh *decrypter) finishAndClose(err error) error {
|
||||
_ = fh.finish(err)
|
||||
_ = fh.Close()
|
||||
return err
|
||||
}
|
||||
|
||||
// DecryptData decrypts the data stream
|
||||
func (c *cipher) DecryptData(rc io.ReadCloser) (io.ReadCloser, error) {
|
||||
out, err := c.newDecrypter(rc)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
// EncryptedSize calculates the size of the data when encrypted
|
||||
func (c *cipher) EncryptedSize(size int64) int64 {
|
||||
blocks, residue := size/blockDataSize, size%blockDataSize
|
||||
encryptedSize := int64(fileHeaderSize) + blocks*(blockHeaderSize+blockDataSize)
|
||||
if residue != 0 {
|
||||
encryptedSize += blockHeaderSize + residue
|
||||
}
|
||||
return encryptedSize
|
||||
}
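// For example, with the constants used here (32 byte file header,
// 16 byte block header, 64 KiB of data per block, as exercised by
// TestEncryptedSize below), a 65537 byte file encrypts to
// 32 + (16 + 65536) + (16 + 1) = 65601 bytes.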
|
||||
|
||||
// DecryptedSize calculates the size of the data when decrypted
|
||||
func (c *cipher) DecryptedSize(size int64) (int64, error) {
|
||||
size -= int64(fileHeaderSize)
|
||||
if size < 0 {
|
||||
return 0, ErrorEncryptedFileTooShort
|
||||
}
|
||||
blocks, residue := size/blockSize, size%blockSize
|
||||
decryptedSize := blocks * blockDataSize
|
||||
if residue != 0 {
|
||||
residue -= blockHeaderSize
|
||||
if residue <= 0 {
|
||||
return 0, ErrorEncryptedFileBadHeader
|
||||
}
|
||||
}
|
||||
decryptedSize += residue
|
||||
return decryptedSize, nil
|
||||
}
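// Running the example above in reverse, DecryptedSize(65601) first
// strips the 32 byte file header, then removes one 16 byte block
// header per full or partial block: 65569 - 16 - 16 = 65537 bytes.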
|
||||
|
||||
// check interfaces
|
||||
var (
|
||||
_ Cipher = (*cipher)(nil)
|
||||
_ io.ReadCloser = (*decrypter)(nil)
|
||||
_ io.Reader = (*encrypter)(nil)
|
||||
)
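// A minimal usage sketch (illustrative only, error handling elided):
// EncryptData accepts any io.Reader and DecryptData any io.ReadCloser,
// so a round trip looks like
//
//	c, _ := newCipher(NameEncryptionStandard, "password", "salt")
//	enc, _ := c.EncryptData(bytes.NewBufferString("hello"))
//	dec, _ := c.DecryptData(ioutil.NopCloser(enc))
//	plain, _ := ioutil.ReadAll(dec) // "hello" again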
|
||||
crypt/cipher_test.go (new file, 843 lines)
@@ -0,0 +1,843 @@
|
||||
package crypt
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/base32"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/ncw/rclone/crypt/pkcs7"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestNewNameEncryptionMode(t *testing.T) {
|
||||
for _, test := range []struct {
|
||||
in string
|
||||
expected NameEncryptionMode
|
||||
expectedErr string
|
||||
}{
|
||||
{"off", NameEncryptionOff, ""},
|
||||
{"standard", NameEncryptionStandard, ""},
|
||||
{"potato", NameEncryptionMode(0), "Unknown file name encryption mode \"potato\""},
|
||||
} {
|
||||
actual, actualErr := NewNameEncryptionMode(test.in)
|
||||
assert.Equal(t, actual, test.expected)
|
||||
if test.expectedErr == "" {
|
||||
assert.NoError(t, actualErr)
|
||||
} else {
|
||||
assert.Error(t, actualErr, test.expectedErr)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestNewNameEncryptionModeString(t *testing.T) {
|
||||
assert.Equal(t, NameEncryptionOff.String(), "off")
|
||||
assert.Equal(t, NameEncryptionStandard.String(), "standard")
|
||||
assert.Equal(t, NameEncryptionMode(2).String(), "Unknown mode #2")
|
||||
}
|
||||
|
||||
func TestValidString(t *testing.T) {
|
||||
for _, test := range []struct {
|
||||
in string
|
||||
expected error
|
||||
}{
|
||||
{"", nil},
|
||||
{"\x01", ErrorBadDecryptControlChar},
|
||||
{"a\x02", ErrorBadDecryptControlChar},
|
||||
{"abc\x03", ErrorBadDecryptControlChar},
|
||||
{"abc\x04def", ErrorBadDecryptControlChar},
|
||||
{"\x05d", ErrorBadDecryptControlChar},
|
||||
{"\x06def", ErrorBadDecryptControlChar},
|
||||
{"\x07", ErrorBadDecryptControlChar},
|
||||
{"\x08", ErrorBadDecryptControlChar},
|
||||
{"\x09", ErrorBadDecryptControlChar},
|
||||
{"\x0A", ErrorBadDecryptControlChar},
|
||||
{"\x0B", ErrorBadDecryptControlChar},
|
||||
{"\x0C", ErrorBadDecryptControlChar},
|
||||
{"\x0D", ErrorBadDecryptControlChar},
|
||||
{"\x0E", ErrorBadDecryptControlChar},
|
||||
{"\x0F", ErrorBadDecryptControlChar},
|
||||
{"\x10", ErrorBadDecryptControlChar},
|
||||
{"\x11", ErrorBadDecryptControlChar},
|
||||
{"\x12", ErrorBadDecryptControlChar},
|
||||
{"\x13", ErrorBadDecryptControlChar},
|
||||
{"\x14", ErrorBadDecryptControlChar},
|
||||
{"\x15", ErrorBadDecryptControlChar},
|
||||
{"\x16", ErrorBadDecryptControlChar},
|
||||
{"\x17", ErrorBadDecryptControlChar},
|
||||
{"\x18", ErrorBadDecryptControlChar},
|
||||
{"\x19", ErrorBadDecryptControlChar},
|
||||
{"\x1A", ErrorBadDecryptControlChar},
|
||||
{"\x1B", ErrorBadDecryptControlChar},
|
||||
{"\x1C", ErrorBadDecryptControlChar},
|
||||
{"\x1D", ErrorBadDecryptControlChar},
|
||||
{"\x1E", ErrorBadDecryptControlChar},
|
||||
{"\x1F", ErrorBadDecryptControlChar},
|
||||
{"\x20", nil},
|
||||
{"\x7E", nil},
|
||||
{"\x7F", ErrorBadDecryptControlChar},
|
||||
{"£100", nil},
|
||||
{`hello? sausage/êé/Hello, 世界/ " ' @ < > & ?/z.txt`, nil},
|
||||
{"£100", nil},
|
||||
// Following tests from http://www.php.net/manual/en/reference.pcre.pattern.modifiers.php#54805
|
||||
{"a", nil}, // Valid ASCII
|
||||
{"\xc3\xb1", nil}, // Valid 2 Octet Sequence
|
||||
{"\xc3\x28", ErrorBadDecryptUTF8}, // Invalid 2 Octet Sequence
|
||||
{"\xa0\xa1", ErrorBadDecryptUTF8}, // Invalid Sequence Identifier
|
||||
{"\xe2\x82\xa1", nil}, // Valid 3 Octet Sequence
|
||||
{"\xe2\x28\xa1", ErrorBadDecryptUTF8}, // Invalid 3 Octet Sequence (in 2nd Octet)
|
||||
{"\xe2\x82\x28", ErrorBadDecryptUTF8}, // Invalid 3 Octet Sequence (in 3rd Octet)
|
||||
{"\xf0\x90\x8c\xbc", nil}, // Valid 4 Octet Sequence
|
||||
{"\xf0\x28\x8c\xbc", ErrorBadDecryptUTF8}, // Invalid 4 Octet Sequence (in 2nd Octet)
|
||||
{"\xf0\x90\x28\xbc", ErrorBadDecryptUTF8}, // Invalid 4 Octet Sequence (in 3rd Octet)
|
||||
{"\xf0\x28\x8c\x28", ErrorBadDecryptUTF8}, // Invalid 4 Octet Sequence (in 4th Octet)
|
||||
{"\xf8\xa1\xa1\xa1\xa1", ErrorBadDecryptUTF8}, // Valid 5 Octet Sequence (but not Unicode!)
|
||||
{"\xfc\xa1\xa1\xa1\xa1\xa1", ErrorBadDecryptUTF8}, // Valid 6 Octet Sequence (but not Unicode!)
|
||||
} {
|
||||
actual := checkValidString([]byte(test.in))
|
||||
assert.Equal(t, actual, test.expected, fmt.Sprintf("in=%q", test.in))
|
||||
}
|
||||
}
|
||||
|
||||
func TestEncodeFileName(t *testing.T) {
|
||||
for _, test := range []struct {
|
||||
in string
|
||||
expected string
|
||||
}{
|
||||
{"", ""},
|
||||
{"1", "64"},
|
||||
{"12", "64p0"},
|
||||
{"123", "64p36"},
|
||||
{"1234", "64p36d0"},
|
||||
{"12345", "64p36d1l"},
|
||||
{"123456", "64p36d1l6o"},
|
||||
{"1234567", "64p36d1l6org"},
|
||||
{"12345678", "64p36d1l6orjg"},
|
||||
{"123456789", "64p36d1l6orjge8"},
|
||||
{"1234567890", "64p36d1l6orjge9g"},
|
||||
{"12345678901", "64p36d1l6orjge9g64"},
|
||||
{"123456789012", "64p36d1l6orjge9g64p0"},
|
||||
{"1234567890123", "64p36d1l6orjge9g64p36"},
|
||||
{"12345678901234", "64p36d1l6orjge9g64p36d0"},
|
||||
{"123456789012345", "64p36d1l6orjge9g64p36d1l"},
|
||||
{"1234567890123456", "64p36d1l6orjge9g64p36d1l6o"},
|
||||
} {
|
||||
actual := encodeFileName([]byte(test.in))
|
||||
assert.Equal(t, actual, test.expected, fmt.Sprintf("in=%q", test.in))
|
||||
recovered, err := decodeFileName(test.expected)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, string(recovered), test.in, fmt.Sprintf("reverse=%q", test.expected))
|
||||
in := strings.ToUpper(test.expected)
|
||||
recovered, err = decodeFileName(in)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, string(recovered), test.in, fmt.Sprintf("reverse=%q", in))
|
||||
}
|
||||
}
|
||||
|
||||
func TestDecodeFileName(t *testing.T) {
|
||||
// We've tested decoding the valid ones above, now concentrate on the invalid ones
|
||||
for _, test := range []struct {
|
||||
in string
|
||||
expectedErr error
|
||||
}{
|
||||
{"64=", ErrorBadBase32Encoding},
|
||||
{"!", base32.CorruptInputError(0)},
|
||||
{"hello=hello", base32.CorruptInputError(5)},
|
||||
} {
|
||||
actual, actualErr := decodeFileName(test.in)
|
||||
assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("in=%q got actual=%q, err = %v %T", test.in, actual, actualErr, actualErr))
|
||||
}
|
||||
}
|
||||
|
||||
func TestEncryptSegment(t *testing.T) {
|
||||
c, _ := newCipher(NameEncryptionStandard, "", "")
|
||||
for _, test := range []struct {
|
||||
in string
|
||||
expected string
|
||||
}{
|
||||
{"", ""},
|
||||
{"1", "p0e52nreeaj0a5ea7s64m4j72s"},
|
||||
{"12", "l42g6771hnv3an9cgc8cr2n1ng"},
|
||||
{"123", "qgm4avr35m5loi1th53ato71v0"},
|
||||
{"1234", "8ivr2e9plj3c3esisjpdisikos"},
|
||||
{"12345", "rh9vu63q3o29eqmj4bg6gg7s44"},
|
||||
{"123456", "bn717l3alepn75b2fb2ejmi4b4"},
|
||||
{"1234567", "n6bo9jmb1qe3b1ogtj5qkf19k8"},
|
||||
{"12345678", "u9t24j7uaq94dh5q53m3s4t9ok"},
|
||||
{"123456789", "37hn305g6j12d1g0kkrl7ekbs4"},
|
||||
{"1234567890", "ot8d91eplaglb62k2b1trm2qv0"},
|
||||
{"12345678901", "h168vvrgb53qnrtvvmb378qrcs"},
|
||||
{"123456789012", "s3hsdf9e29ithrqbjqu01t8q2s"},
|
||||
{"1234567890123", "cf3jimlv1q2oc553mv7s3mh3eo"},
|
||||
{"12345678901234", "moq0uqdlqrblrc5pa5u5c7hq9g"},
|
||||
{"123456789012345", "eeam3li4rnommi3a762h5n7meg"},
|
||||
{"1234567890123456", "mijbj0frqf6ms7frcr6bd9h0env53jv96pjaaoirk7forcgpt70g"},
|
||||
} {
|
||||
actual := c.encryptSegment(test.in)
|
||||
assert.Equal(t, test.expected, actual, fmt.Sprintf("Testing %q", test.in))
|
||||
recovered, err := c.decryptSegment(test.expected)
|
||||
assert.NoError(t, err, fmt.Sprintf("Testing reverse %q", test.expected))
|
||||
assert.Equal(t, test.in, recovered, fmt.Sprintf("Testing reverse %q", test.expected))
|
||||
in := strings.ToUpper(test.expected)
|
||||
recovered, err = c.decryptSegment(in)
|
||||
assert.NoError(t, err, fmt.Sprintf("Testing reverse %q", in))
|
||||
assert.Equal(t, test.in, recovered, fmt.Sprintf("Testing reverse %q", in))
|
||||
}
|
||||
}
|
||||
|
||||
func TestDecryptSegment(t *testing.T) {
|
||||
// We've tested the forwards above, now concentrate on the errors
|
||||
c, _ := newCipher(NameEncryptionStandard, "", "")
|
||||
for _, test := range []struct {
|
||||
in string
|
||||
expectedErr error
|
||||
}{
|
||||
{"64=", ErrorBadBase32Encoding},
|
||||
{"!", base32.CorruptInputError(0)},
|
||||
{encodeFileName([]byte("a")), ErrorNotAMultipleOfBlocksize},
|
||||
{encodeFileName([]byte("123456789abcdef")), ErrorNotAMultipleOfBlocksize},
|
||||
{encodeFileName([]byte("123456789abcdef0")), pkcs7.ErrorPaddingTooLong},
|
||||
{c.encryptSegment("\x01"), ErrorBadDecryptControlChar},
|
||||
{c.encryptSegment("\xc3\x28"), ErrorBadDecryptUTF8},
|
||||
} {
|
||||
actual, actualErr := c.decryptSegment(test.in)
|
||||
assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("in=%q got actual=%q, err = %v %T", test.in, actual, actualErr, actualErr))
|
||||
}
|
||||
}
|
||||
|
||||
func TestEncryptFileName(t *testing.T) {
|
||||
// First standard mode
|
||||
c, _ := newCipher(NameEncryptionStandard, "", "")
|
||||
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptFileName("1"))
|
||||
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptFileName("1/12"))
|
||||
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", c.EncryptFileName("1/12/123"))
|
||||
// Now off mode
|
||||
c, _ = newCipher(NameEncryptionOff, "", "")
|
||||
assert.Equal(t, "1/12/123.bin", c.EncryptFileName("1/12/123"))
|
||||
}
|
||||
|
||||
func TestDecryptFileName(t *testing.T) {
|
||||
for _, test := range []struct {
|
||||
mode NameEncryptionMode
|
||||
in string
|
||||
expected string
|
||||
expectedErr error
|
||||
}{
|
||||
{NameEncryptionStandard, "p0e52nreeaj0a5ea7s64m4j72s", "1", nil},
|
||||
{NameEncryptionStandard, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", "1/12", nil},
|
||||
{NameEncryptionStandard, "p0e52nreeAJ0A5EA7S64M4J72S/L42G6771HNv3an9cgc8cr2n1ng", "1/12", nil},
|
||||
{NameEncryptionStandard, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil},
|
||||
{NameEncryptionStandard, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1/qgm4avr35m5loi1th53ato71v0", "", ErrorNotAMultipleOfBlocksize},
|
||||
{NameEncryptionOff, "1/12/123.bin", "1/12/123", nil},
|
||||
{NameEncryptionOff, "1/12/123.bix", "", ErrorNotAnEncryptedFile},
|
||||
{NameEncryptionOff, ".bin", "", ErrorNotAnEncryptedFile},
|
||||
} {
|
||||
c, _ := newCipher(test.mode, "", "")
|
||||
actual, actualErr := c.DecryptFileName(test.in)
|
||||
what := fmt.Sprintf("Testing %q (mode=%v)", test.in, test.mode)
|
||||
assert.Equal(t, test.expected, actual, what)
|
||||
assert.Equal(t, test.expectedErr, actualErr, what)
|
||||
}
|
||||
}
|
||||
|
||||
func TestEncryptDirName(t *testing.T) {
|
||||
// First standard mode
|
||||
c, _ := newCipher(NameEncryptionStandard, "", "")
|
||||
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptDirName("1"))
|
||||
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptDirName("1/12"))
|
||||
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", c.EncryptDirName("1/12/123"))
|
||||
// Now off mode
|
||||
c, _ = newCipher(NameEncryptionOff, "", "")
|
||||
assert.Equal(t, "1/12/123", c.EncryptDirName("1/12/123"))
|
||||
}
|
||||
|
||||
func TestDecryptDirName(t *testing.T) {
|
||||
for _, test := range []struct {
|
||||
mode NameEncryptionMode
|
||||
in string
|
||||
expected string
|
||||
expectedErr error
|
||||
}{
|
||||
{NameEncryptionStandard, "p0e52nreeaj0a5ea7s64m4j72s", "1", nil},
|
||||
{NameEncryptionStandard, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", "1/12", nil},
|
||||
{NameEncryptionStandard, "p0e52nreeAJ0A5EA7S64M4J72S/L42G6771HNv3an9cgc8cr2n1ng", "1/12", nil},
|
||||
{NameEncryptionStandard, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil},
|
||||
{NameEncryptionStandard, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1/qgm4avr35m5loi1th53ato71v0", "", ErrorNotAMultipleOfBlocksize},
|
||||
{NameEncryptionOff, "1/12/123.bin", "1/12/123.bin", nil},
|
||||
{NameEncryptionOff, "1/12/123", "1/12/123", nil},
|
||||
{NameEncryptionOff, ".bin", ".bin", nil},
|
||||
} {
|
||||
c, _ := newCipher(test.mode, "", "")
|
||||
actual, actualErr := c.DecryptDirName(test.in)
|
||||
what := fmt.Sprintf("Testing %q (mode=%v)", test.in, test.mode)
|
||||
assert.Equal(t, test.expected, actual, what)
|
||||
assert.Equal(t, test.expectedErr, actualErr, what)
|
||||
}
|
||||
}
|
||||
|
||||
func TestEncryptedSize(t *testing.T) {
|
||||
c, _ := newCipher(NameEncryptionStandard, "", "")
|
||||
for _, test := range []struct {
|
||||
in int64
|
||||
expected int64
|
||||
}{
|
||||
{0, 32},
|
||||
{1, 32 + 16 + 1},
|
||||
{65536, 32 + 16 + 65536},
|
||||
{65537, 32 + 16 + 65536 + 16 + 1},
|
||||
{1 << 20, 32 + 16*(16+65536)},
|
||||
{(1 << 20) + 65535, 32 + 16*(16+65536) + 16 + 65535},
|
||||
{1 << 30, 32 + 16384*(16+65536)},
|
||||
{(1 << 40) + 1, 32 + 16777216*(16+65536) + 16 + 1},
|
||||
} {
|
||||
actual := c.EncryptedSize(test.in)
|
||||
assert.Equal(t, test.expected, actual, fmt.Sprintf("Testing %d", test.in))
|
||||
recovered, err := c.DecryptedSize(test.expected)
|
||||
assert.NoError(t, err, fmt.Sprintf("Testing reverse %d", test.expected))
|
||||
assert.Equal(t, test.in, recovered, fmt.Sprintf("Testing reverse %d", test.expected))
|
||||
}
|
||||
}
|
||||
|
||||
func TestDecryptedSize(t *testing.T) {
|
||||
// Test the errors since we tested the reverse above
|
||||
c, _ := newCipher(NameEncryptionStandard, "", "")
|
||||
for _, test := range []struct {
|
||||
in int64
|
||||
expectedErr error
|
||||
}{
|
||||
{0, ErrorEncryptedFileTooShort},
|
||||
{1, ErrorEncryptedFileTooShort},
|
||||
{7, ErrorEncryptedFileTooShort},
|
||||
{32 + 1, ErrorEncryptedFileBadHeader},
|
||||
{32 + 16, ErrorEncryptedFileBadHeader},
|
||||
{32 + 16 + 65536 + 1, ErrorEncryptedFileBadHeader},
|
||||
{32 + 16 + 65536 + 16, ErrorEncryptedFileBadHeader},
|
||||
} {
|
||||
_, actualErr := c.DecryptedSize(test.in)
|
||||
assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("Testing %d", test.in))
|
||||
}
|
||||
}
|
||||
|
||||
func TestNoncePointer(t *testing.T) {
|
||||
var x nonce
|
||||
assert.Equal(t, (*[24]byte)(&x), x.pointer())
|
||||
}
|
||||
|
||||
func TestNonceFromReader(t *testing.T) {
|
||||
var x nonce
|
||||
buf := bytes.NewBufferString("123456789abcdefghijklmno")
|
||||
err := x.fromReader(buf)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, nonce{'1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o'}, x)
|
||||
buf = bytes.NewBufferString("123456789abcdefghijklmn")
|
||||
err = x.fromReader(buf)
|
||||
assert.Error(t, err, "short read of nonce")
|
||||
}
|
||||
|
||||
func TestNonceFromBuf(t *testing.T) {
|
||||
var x nonce
|
||||
buf := []byte("123456789abcdefghijklmnoXXXXXXXX")
|
||||
x.fromBuf(buf)
|
||||
assert.Equal(t, nonce{'1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o'}, x)
|
||||
buf = []byte("0123456789abcdefghijklmn")
|
||||
x.fromBuf(buf)
|
||||
assert.Equal(t, nonce{'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n'}, x)
|
||||
buf = []byte("0123456789abcdefghijklm")
|
||||
assert.Panics(t, func() { x.fromBuf(buf) })
|
||||
}
|
||||
|
||||
func TestNonceIncrement(t *testing.T) {
|
||||
for _, test := range []struct {
|
||||
in nonce
|
||||
out nonce
|
||||
}{
|
||||
{
|
||||
nonce{0x00},
|
||||
nonce{0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF},
|
||||
nonce{0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01},
|
||||
},
|
||||
{
|
||||
nonce{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
|
||||
nonce{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
|
||||
},
|
||||
} {
|
||||
x := test.in
|
||||
x.increment()
|
||||
assert.Equal(t, test.out, x)
|
||||
}
|
||||
}
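// Note the final case above: incrementing an all-0xFF 24 byte nonce
// wraps around to all zeroes, as the carry falls off the end.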
|
||||
|
||||
// randomSource can read or write a random sequence
|
||||
type randomSource struct {
|
||||
counter int64
|
||||
size int64
|
||||
}
|
||||
|
||||
func newRandomSource(size int64) *randomSource {
|
||||
return &randomSource{
|
||||
size: size,
|
||||
}
|
||||
}
|
||||
|
||||
func (r *randomSource) next() byte {
|
||||
r.counter++
|
||||
return byte(r.counter % 257)
|
||||
}
|
||||
|
||||
func (r *randomSource) Read(p []byte) (n int, err error) {
|
||||
for i := range p {
|
||||
if r.counter >= r.size {
|
||||
err = io.EOF
|
||||
break
|
||||
}
|
||||
p[i] = r.next()
|
||||
n++
|
||||
}
|
||||
return n, err
|
||||
}
|
||||
|
||||
func (r *randomSource) Write(p []byte) (n int, err error) {
|
||||
for i := range p {
|
||||
if p[i] != r.next() {
|
||||
return 0, errors.Errorf("Error in stream at %d", r.counter)
|
||||
}
|
||||
}
|
||||
return len(p), nil
|
||||
}
|
||||
|
||||
func (r *randomSource) Close() error { return nil }
|
||||
|
||||
// Check interfaces
|
||||
var (
|
||||
_ io.ReadCloser = (*randomSource)(nil)
|
||||
_ io.WriteCloser = (*randomSource)(nil)
|
||||
)
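// randomSource is deterministic: byte i of the stream is
// byte((i + 1) % 257), so a Reader and a Writer created with the same
// size verify each other byte for byte, which is exactly what
// TestRandomSource below relies on.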
|
||||
|
||||
// Test test infrastructure first!
|
||||
func TestRandomSource(t *testing.T) {
|
||||
source := newRandomSource(1E8)
|
||||
sink := newRandomSource(1E8)
|
||||
n, err := io.Copy(sink, source)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, int64(1E8), n)
|
||||
|
||||
source = newRandomSource(1E8)
|
||||
buf := make([]byte, 16)
|
||||
_, _ = source.Read(buf)
|
||||
sink = newRandomSource(1E8)
|
||||
n, err = io.Copy(sink, source)
|
||||
assert.Error(t, err, "Error in stream")
|
||||
}
|
||||
|
||||
type zeroes struct{}
|
||||
|
||||
func (z *zeroes) Read(p []byte) (n int, err error) {
|
||||
for i := range p {
|
||||
p[i] = 0
|
||||
n++
|
||||
}
|
||||
return n, nil
|
||||
}
|
||||
|
||||
// Test encrypt decrypt with different buffer sizes
|
||||
func testEncryptDecrypt(t *testing.T, bufSize int, copySize int64) {
|
||||
c, err := newCipher(NameEncryptionStandard, "", "")
|
||||
assert.NoError(t, err)
|
||||
c.cryptoRand = &zeroes{} // zero out the nonce
|
||||
buf := make([]byte, bufSize)
|
||||
source := newRandomSource(copySize)
|
||||
encrypted, err := c.newEncrypter(source)
|
||||
assert.NoError(t, err)
|
||||
decrypted, err := c.newDecrypter(ioutil.NopCloser(encrypted))
|
||||
assert.NoError(t, err)
|
||||
sink := newRandomSource(copySize)
|
||||
n, err := io.CopyBuffer(sink, decrypted, buf)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, copySize, n)
|
||||
blocks := copySize / blockSize
|
||||
if (copySize % blockSize) != 0 {
|
||||
blocks++
|
||||
}
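// The encrypter starts from an all-zero nonce (cryptoRand is zeroed
// above) and increments it once per block written, so the final
// nonce is `blocks` encoded little-endian.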
|
||||
var expectedNonce = nonce{byte(blocks), byte(blocks >> 8), byte(blocks >> 16), byte(blocks >> 24)}
|
||||
assert.Equal(t, expectedNonce, encrypted.nonce)
|
||||
assert.Equal(t, expectedNonce, decrypted.nonce)
|
||||
}
|
||||
|
||||
func TestEncryptDecrypt1(t *testing.T) {
|
||||
testEncryptDecrypt(t, 1, 1E7)
|
||||
}
|
||||
|
||||
func TestEncryptDecrypt32(t *testing.T) {
|
||||
testEncryptDecrypt(t, 32, 1E8)
|
||||
}
|
||||
|
||||
func TestEncryptDecrypt4096(t *testing.T) {
|
||||
testEncryptDecrypt(t, 4096, 1E8)
|
||||
}
|
||||
|
||||
func TestEncryptDecrypt65536(t *testing.T) {
|
||||
testEncryptDecrypt(t, 65536, 1E8)
|
||||
}
|
||||
|
||||
func TestEncryptDecrypt65537(t *testing.T) {
|
||||
testEncryptDecrypt(t, 65537, 1E8)
|
||||
}
|
||||
|
||||
var (
|
||||
file0 = []byte{
|
||||
0x52, 0x43, 0x4c, 0x4f, 0x4e, 0x45, 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
|
||||
0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
|
||||
}
|
||||
file1 = []byte{
|
||||
0x52, 0x43, 0x4c, 0x4f, 0x4e, 0x45, 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
|
||||
0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
|
||||
0x09, 0x5b, 0x44, 0x6c, 0xd6, 0x23, 0x7b, 0xbc, 0xb0, 0x8d, 0x09, 0xfb, 0x52, 0x4c, 0xe5, 0x65,
|
||||
0xAA,
|
||||
}
|
||||
file16 = []byte{
|
||||
0x52, 0x43, 0x4c, 0x4f, 0x4e, 0x45, 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
|
||||
0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
|
||||
0xb9, 0xc4, 0x55, 0x2a, 0x27, 0x10, 0x06, 0x29, 0x18, 0x96, 0x0a, 0x3e, 0x60, 0x8c, 0x29, 0xb9,
|
||||
0xaa, 0x8a, 0x5e, 0x1e, 0x16, 0x5b, 0x6d, 0x07, 0x5d, 0xe4, 0xe9, 0xbb, 0x36, 0x7f, 0xd6, 0xd4,
|
||||
}
|
||||
)
|
||||
|
||||
func TestEncryptData(t *testing.T) {
|
||||
for _, test := range []struct {
|
||||
in []byte
|
||||
expected []byte
|
||||
}{
|
||||
{[]byte{}, file0},
|
||||
{[]byte{1}, file1},
|
||||
{[]byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}, file16},
|
||||
} {
|
||||
c, err := newCipher(NameEncryptionStandard, "", "")
|
||||
assert.NoError(t, err)
|
||||
c.cryptoRand = newRandomSource(1E8) // nudge the crypto rand generator
|
||||
|
||||
// Check encode works
|
||||
buf := bytes.NewBuffer(test.in)
|
||||
encrypted, err := c.EncryptData(buf)
|
||||
assert.NoError(t, err)
|
||||
out, err := ioutil.ReadAll(encrypted)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, test.expected, out)
|
||||
|
||||
// Check we can decode the data properly too...
|
||||
buf = bytes.NewBuffer(out)
|
||||
decrypted, err := c.DecryptData(ioutil.NopCloser(buf))
|
||||
assert.NoError(t, err)
|
||||
out, err = ioutil.ReadAll(decrypted)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, test.in, out)
|
||||
}
|
||||
}
|
||||
|
||||
func TestNewEncrypter(t *testing.T) {
|
||||
c, err := newCipher(NameEncryptionStandard, "", "")
|
||||
assert.NoError(t, err)
|
||||
c.cryptoRand = newRandomSource(1E8) // nudge the crypto rand generator
|
||||
|
||||
z := &zeroes{}
|
||||
|
||||
fh, err := c.newEncrypter(z)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, nonce{0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18}, fh.nonce)
|
||||
assert.Equal(t, []byte{'R', 'C', 'L', 'O', 'N', 'E', 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18}, fh.buf[:32])
|
||||
|
||||
// Test error path
|
||||
c.cryptoRand = bytes.NewBufferString("123456789abcdefghijklmn")
|
||||
fh, err = c.newEncrypter(z)
|
||||
assert.Nil(t, fh)
|
||||
assert.Error(t, err, "short read of nonce")
|
||||
|
||||
}
|
||||
|
||||
type errorReader struct {
|
||||
err error
|
||||
}
|
||||
|
||||
func (er errorReader) Read(p []byte) (n int, err error) {
|
||||
return 0, er.err
|
||||
}
|
||||
|
||||
type closeDetector struct {
|
||||
io.Reader
|
||||
closed int
|
||||
}
|
||||
|
||||
func newCloseDetector(in io.Reader) *closeDetector {
|
||||
return &closeDetector{
|
||||
Reader: in,
|
||||
}
|
||||
}
|
||||
|
||||
func (c *closeDetector) Close() error {
|
||||
c.closed++
|
||||
return nil
|
||||
}
|
||||
|
||||
func TestNewDecrypter(t *testing.T) {
|
||||
c, err := newCipher(NameEncryptionStandard, "", "")
|
||||
assert.NoError(t, err)
|
||||
c.cryptoRand = newRandomSource(1E8) // nudge the crypto rand generator
|
||||
|
||||
cd := newCloseDetector(bytes.NewBuffer(file0))
|
||||
fh, err := c.newDecrypter(cd)
|
||||
assert.NoError(t, err)
|
||||
// check nonce is in place
|
||||
assert.Equal(t, file0[8:32], fh.nonce[:])
|
||||
assert.Equal(t, 0, cd.closed)
|
||||
|
||||
// Test error paths
|
||||
for i := range file0 {
|
||||
cd := newCloseDetector(bytes.NewBuffer(file0[:i]))
|
||||
fh, err = c.newDecrypter(cd)
|
||||
assert.Nil(t, fh)
|
||||
assert.Error(t, err, ErrorEncryptedFileTooShort.Error())
|
||||
assert.Equal(t, 1, cd.closed)
|
||||
}
|
||||
|
||||
er := &errorReader{errors.New("potato")}
|
||||
cd = newCloseDetector(er)
|
||||
fh, err = c.newDecrypter(cd)
|
||||
assert.Nil(t, fh)
|
||||
assert.Error(t, err, "potato")
|
||||
assert.Equal(t, 1, cd.closed)
|
||||
|
||||
// bad magic
|
||||
file0copy := make([]byte, len(file0))
|
||||
copy(file0copy, file0)
|
||||
for i := range fileMagic {
|
||||
file0copy[i] ^= 0x1
|
||||
cd := newCloseDetector(bytes.NewBuffer(file0copy))
|
||||
fh, err := c.newDecrypter(cd)
|
||||
assert.Nil(t, fh)
|
||||
assert.Error(t, err, ErrorEncryptedBadMagic.Error())
|
||||
file0copy[i] ^= 0x1
|
||||
assert.Equal(t, 1, cd.closed)
|
||||
}
|
||||
}
|
||||
|
||||
func TestDecrypterRead(t *testing.T) {
|
||||
c, err := newCipher(NameEncryptionStandard, "", "")
|
||||
assert.NoError(t, err)
|
||||
|
||||
// Test truncating the header
|
||||
for i := 1; i < blockHeaderSize; i++ {
|
||||
cd := newCloseDetector(bytes.NewBuffer(file1[:len(file1)-i]))
|
||||
fh, err := c.newDecrypter(cd)
|
||||
assert.NoError(t, err)
|
||||
_, err = ioutil.ReadAll(fh)
|
||||
assert.Error(t, err, ErrorEncryptedFileBadHeader.Error())
|
||||
assert.Equal(t, 0, cd.closed)
|
||||
}
|
||||
|
||||
// Test an error being produced on Read of the underlying file
|
||||
in1 := bytes.NewBuffer(file1)
|
||||
in2 := &errorReader{errors.New("potato")}
|
||||
in := io.MultiReader(in1, in2)
|
||||
cd := newCloseDetector(in)
|
||||
fh, err := c.newDecrypter(cd)
|
||||
assert.NoError(t, err)
|
||||
_, err = ioutil.ReadAll(fh)
|
||||
assert.Error(t, err, "potato")
|
||||
assert.Equal(t, 0, cd.closed)
|
||||
|
||||
// Test corrupting the input
|
||||
// shouldn't be able to corrupt any byte without some sort of error
|
||||
file16copy := make([]byte, len(file16))
|
||||
copy(file16copy, file16)
|
||||
for i := range file16copy {
|
||||
file16copy[i] ^= 0xFF
|
||||
fh, err := c.newDecrypter(ioutil.NopCloser(bytes.NewBuffer(file16copy)))
|
||||
if i < fileMagicSize {
|
||||
assert.Error(t, err, ErrorEncryptedBadMagic.Error())
|
||||
assert.Nil(t, fh)
|
||||
} else {
|
||||
assert.NoError(t, err)
|
||||
_, err = ioutil.ReadAll(fh)
|
||||
assert.Error(t, err, ErrorEncryptedFileBadHeader.Error())
|
||||
}
|
||||
file16copy[i] ^= 0xFF
|
||||
}
|
||||
}
|
||||
|
||||
func TestDecrypterClose(t *testing.T) {
|
||||
c, err := newCipher(NameEncryptionStandard, "", "")
|
||||
assert.NoError(t, err)
|
||||
|
||||
cd := newCloseDetector(bytes.NewBuffer(file16))
|
||||
fh, err := c.newDecrypter(cd)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, 0, cd.closed)
|
||||
|
||||
// close before reading
|
||||
assert.Equal(t, nil, fh.err)
|
||||
err = fh.Close()
|
||||
assert.Equal(t, ErrorFileClosed, fh.err)
|
||||
assert.Equal(t, 1, cd.closed)
|
||||
|
||||
// double close
|
||||
err = fh.Close()
|
||||
assert.Error(t, err, ErrorFileClosed.Error())
|
||||
assert.Equal(t, 1, cd.closed)
|
||||
|
||||
// try again reading the file this time
|
||||
cd = newCloseDetector(bytes.NewBuffer(file1))
|
||||
fh, err = c.newDecrypter(cd)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, 0, cd.closed)
|
||||
|
||||
// close after reading
|
||||
out, err := ioutil.ReadAll(fh)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, []byte{1}, out)
|
||||
assert.Equal(t, io.EOF, fh.err)
|
||||
err = fh.Close()
|
||||
assert.Equal(t, ErrorFileClosed, fh.err)
|
||||
assert.Equal(t, 1, cd.closed)
|
||||
}
|
||||
|
||||
func TestPutGetBlock(t *testing.T) {
|
||||
c, err := newCipher(NameEncryptionStandard, "", "")
|
||||
assert.NoError(t, err)
|
||||
|
||||
block := c.getBlock()
|
||||
c.putBlock(block)
|
||||
c.putBlock(block)
|
||||
|
||||
assert.Panics(t, func() { c.putBlock(block[:len(block)-1]) })
|
||||
}
|
||||
|
||||
func TestKey(t *testing.T) {
|
||||
c, err := newCipher(NameEncryptionStandard, "", "")
|
||||
assert.NoError(t, err)
|
||||
|
||||
// Check zero keys OK
|
||||
assert.Equal(t, [32]byte{}, c.dataKey)
|
||||
assert.Equal(t, [32]byte{}, c.nameKey)
|
||||
assert.Equal(t, [16]byte{}, c.nameTweak)
|
||||
|
||||
require.NoError(t, c.Key("potato", ""))
|
||||
assert.Equal(t, [32]byte{0x74, 0x55, 0xC7, 0x1A, 0xB1, 0x7C, 0x86, 0x5B, 0x84, 0x71, 0xF4, 0x7B, 0x79, 0xAC, 0xB0, 0x7E, 0xB3, 0x1D, 0x56, 0x78, 0xB8, 0x0C, 0x7E, 0x2E, 0xAF, 0x4F, 0xC8, 0x06, 0x6A, 0x9E, 0xE4, 0x68}, c.dataKey)
|
||||
assert.Equal(t, [32]byte{0x76, 0x5D, 0xA2, 0x7A, 0xB1, 0x5D, 0x77, 0xF9, 0x57, 0x96, 0x71, 0x1F, 0x7B, 0x93, 0xAD, 0x63, 0xBB, 0xB4, 0x84, 0x07, 0x2E, 0x71, 0x80, 0xA8, 0xD1, 0x7A, 0x9B, 0xBE, 0xC1, 0x42, 0x70, 0xD0}, c.nameKey)
|
||||
assert.Equal(t, [16]byte{0xC1, 0x8D, 0x59, 0x32, 0xF5, 0x5B, 0x28, 0x28, 0xC5, 0xE1, 0xE8, 0x72, 0x15, 0x52, 0x03, 0x10}, c.nameTweak)
|
||||
|
||||
require.NoError(t, c.Key("Potato", ""))
|
||||
assert.Equal(t, [32]byte{0xAE, 0xEA, 0x6A, 0xD3, 0x47, 0xDF, 0x75, 0xB9, 0x63, 0xCE, 0x12, 0xF5, 0x76, 0x23, 0xE9, 0x46, 0xD4, 0x2E, 0xD8, 0xBF, 0x3E, 0x92, 0x8B, 0x39, 0x24, 0x37, 0x94, 0x13, 0x3E, 0x5E, 0xF7, 0x5E}, c.dataKey)
|
||||
assert.Equal(t, [32]byte{0x54, 0xF7, 0x02, 0x6E, 0x8A, 0xFC, 0x56, 0x0A, 0x86, 0x63, 0x6A, 0xAB, 0x2C, 0x9C, 0x51, 0x62, 0xE5, 0x1A, 0x12, 0x23, 0x51, 0x83, 0x6E, 0xAF, 0x50, 0x42, 0x0F, 0x98, 0x1C, 0x86, 0x0A, 0x19}, c.nameKey)
|
||||
assert.Equal(t, [16]byte{0xF8, 0xC1, 0xB6, 0x27, 0x2D, 0x52, 0x9B, 0x4A, 0x8F, 0xDA, 0xEB, 0x42, 0x4A, 0x28, 0xDD, 0xF3}, c.nameTweak)
|
||||
|
||||
require.NoError(t, c.Key("potato", "sausage"))
|
||||
assert.Equal(t, [32]uint8{0x8e, 0x9b, 0x6b, 0x99, 0xf8, 0x69, 0x4, 0x67, 0xa0, 0x71, 0xf9, 0xcb, 0x92, 0xd0, 0xaa, 0x78, 0x7f, 0x8f, 0xf1, 0x78, 0xbe, 0xc9, 0x6f, 0x99, 0x9f, 0xd5, 0x20, 0x6e, 0x64, 0x4a, 0x1b, 0x50}, c.dataKey)
|
||||
assert.Equal(t, [32]uint8{0x3e, 0xa9, 0x5e, 0xf6, 0x81, 0x78, 0x2d, 0xc9, 0xd9, 0x95, 0x5d, 0x22, 0x5b, 0xfd, 0x44, 0x2c, 0x6f, 0x5d, 0x68, 0x97, 0xb0, 0x29, 0x1, 0x5c, 0x6f, 0x46, 0x2e, 0x2a, 0x9d, 0xae, 0x2c, 0xe3}, c.nameKey)
|
||||
assert.Equal(t, [16]uint8{0xf1, 0x7f, 0xd7, 0x14, 0x1d, 0x65, 0x27, 0x4f, 0x36, 0x3f, 0xc2, 0xa0, 0x4d, 0xd2, 0x14, 0x8a}, c.nameTweak)
|
||||
|
||||
require.NoError(t, c.Key("potato", "Sausage"))
|
||||
assert.Equal(t, [32]uint8{0xda, 0x81, 0x8c, 0x67, 0xef, 0x11, 0xf, 0xc8, 0xd5, 0xc8, 0x62, 0x4b, 0x7f, 0xe2, 0x9e, 0x35, 0x35, 0xb0, 0x8d, 0x79, 0x84, 0x89, 0xac, 0xcb, 0xa0, 0xff, 0x2, 0x72, 0x3, 0x1a, 0x5e, 0x64}, c.dataKey)
|
||||
assert.Equal(t, [32]uint8{0x2, 0x81, 0x7e, 0x7b, 0xea, 0x99, 0x81, 0x5a, 0xd0, 0x2d, 0xb9, 0x64, 0x48, 0xb0, 0x28, 0x27, 0x7c, 0x20, 0xb4, 0xd4, 0xa4, 0x68, 0xad, 0x4e, 0x5c, 0x29, 0xf, 0x79, 0xef, 0xee, 0xdb, 0x3b}, c.nameKey)
|
||||
assert.Equal(t, [16]uint8{0x9a, 0xb5, 0xb, 0x3d, 0xcb, 0x60, 0x59, 0x55, 0xa5, 0x4d, 0xe6, 0xb6, 0x47, 0x3, 0x23, 0xe2}, c.nameTweak)
|
||||
|
||||
require.NoError(t, c.Key("", ""))
|
||||
assert.Equal(t, [32]byte{}, c.dataKey)
|
||||
assert.Equal(t, [32]byte{}, c.nameKey)
|
||||
assert.Equal(t, [16]byte{}, c.nameTweak)
|
||||
}
|
||||
crypt/crypt.go (new file, 430 lines)
@@ -0,0 +1,430 @@
|
||||
// Package crypt provides wrappers for Fs and Object which implement encryption
|
||||
package crypt
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"path"
|
||||
"sync"
|
||||
|
||||
"github.com/ncw/rclone/fs"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
// Register with Fs
|
||||
func init() {
|
||||
fs.Register(&fs.RegInfo{
|
||||
Name: "crypt",
|
||||
Description: "Encrypt/Decrypt a remote",
|
||||
NewFs: NewFs,
|
||||
Options: []fs.Option{{
|
||||
Name: "remote",
|
||||
Help: "Remote to encrypt/decrypt.",
|
||||
}, {
|
||||
Name: "filename_encryption",
|
||||
Help: "How to encrypt the filenames.",
|
||||
Examples: []fs.OptionExample{
|
||||
{
|
||||
Value: "off",
|
||||
Help: "Don't encrypt the file names. Adds a \".bin\" extension only.",
|
||||
}, {
|
||||
Value: "standard",
|
||||
Help: "Encrypt the filenames see the docs for the details.",
|
||||
},
|
||||
},
|
||||
}, {
|
||||
Name: "password",
|
||||
Help: "Password or pass phrase for encryption.",
|
||||
IsPassword: true,
|
||||
}, {
|
||||
Name: "password2",
|
||||
Help: "Password or pass phrase for salt. Optional but recommended.\nShould be different to the previous password.",
|
||||
IsPassword: true,
|
||||
Optional: true,
|
||||
}},
|
||||
})
|
||||
}
|
||||
|
||||
// NewFs constructs an Fs from the path, container:path
|
||||
func NewFs(name, rpath string) (fs.Fs, error) {
|
||||
mode, err := NewNameEncryptionMode(fs.ConfigFile.MustValue(name, "filename_encryption", "standard"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
password := fs.ConfigFile.MustValue(name, "password", "")
|
||||
if password == "" {
|
||||
return nil, errors.New("password not set in config file")
|
||||
}
|
||||
password, err = fs.Reveal(password)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "failed to decrypt password")
|
||||
}
|
||||
salt := fs.ConfigFile.MustValue(name, "password2", "")
|
||||
if salt != "" {
|
||||
salt, err = fs.Reveal(salt)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "failed to decrypt password2")
|
||||
}
|
||||
}
|
||||
cipher, err := newCipher(mode, password, salt)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "failed to make cipher")
|
||||
}
|
||||
remote := fs.ConfigFile.MustValue(name, "remote")
|
||||
// Look for a file first
|
||||
remotePath := path.Join(remote, cipher.EncryptFileName(rpath))
|
||||
wrappedFs, err := fs.NewFs(remotePath)
|
||||
// if that didn't produce a file, look for a directory
|
||||
if err != fs.ErrorIsFile {
|
||||
remotePath = path.Join(remote, cipher.EncryptDirName(rpath))
|
||||
wrappedFs, err = fs.NewFs(remotePath)
|
||||
}
|
||||
if err != fs.ErrorIsFile && err != nil {
|
||||
return nil, errors.Wrapf(err, "failed to make remote %q to wrap", remotePath)
|
||||
}
|
||||
f := &Fs{
|
||||
Fs: wrappedFs,
|
||||
cipher: cipher,
|
||||
mode: mode,
|
||||
}
|
||||
return f, err
|
||||
}
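// For reference, a config section consumed by NewFs might look like
// this (the remote name "secret" and the values are illustrative):
//
//	[secret]
//	type = crypt
//	remote = remote:path
//	filename_encryption = standard
//	password = <obscured>
//	password2 = <obscured>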
|
||||
|
||||
// Fs represents a wrapped fs.Fs
|
||||
type Fs struct {
|
||||
fs.Fs
|
||||
cipher Cipher
|
||||
mode NameEncryptionMode
|
||||
}
|
||||
|
||||
// String returns a description of the FS
|
||||
func (f *Fs) String() string {
|
||||
return fmt.Sprintf("Encrypted %s", f.Fs.String())
|
||||
}
|
||||
|
||||
// List the Fs into a channel
|
||||
func (f *Fs) List(opts fs.ListOpts, dir string) {
|
||||
f.Fs.List(f.newListOpts(opts, dir), f.cipher.EncryptDirName(dir))
|
||||
}
|
||||
|
||||
// NewObject finds the Object at remote.
|
||||
func (f *Fs) NewObject(remote string) (fs.Object, error) {
|
||||
o, err := f.Fs.NewObject(f.cipher.EncryptFileName(remote))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return f.newObject(o), nil
|
||||
}
|
||||
|
||||
// Put in to the remote path with the modTime given of the given size
|
||||
//
|
||||
// May create the object even if it returns an error - if so
|
||||
// will return the object and the error, otherwise will return
|
||||
// nil and the error
|
||||
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo) (fs.Object, error) {
|
||||
wrappedIn, err := f.cipher.EncryptData(in)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
o, err := f.Fs.Put(wrappedIn, f.newObjectInfo(src))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return f.newObject(o), nil
|
||||
}
|
||||
|
||||
// Hashes returns the supported hash sets.
|
||||
func (f *Fs) Hashes() fs.HashSet {
|
||||
return fs.HashSet(fs.HashNone)
|
||||
}
|
||||
|
||||
// Purge all files in the root and the root directory
|
||||
//
|
||||
// Implement this if you have a way of deleting all the files
|
||||
// quicker than just running Remove() on the result of List()
|
||||
//
|
||||
// Return an error if it doesn't exist
|
||||
func (f *Fs) Purge() error {
|
||||
do, ok := f.Fs.(fs.Purger)
|
||||
if !ok {
|
||||
return fs.ErrorCantPurge
|
||||
}
|
||||
return do.Purge()
|
||||
}
|
||||
|
||||
// Copy src to this remote using server side copy operations.
|
||||
//
|
||||
// This is stored with the remote path given
|
||||
//
|
||||
// It returns the destination Object and a possible error
|
||||
//
|
||||
// Will only be called if src.Fs().Name() == f.Name()
|
||||
//
|
||||
// If it isn't possible then return fs.ErrorCantCopy
|
||||
func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
|
||||
do, ok := f.Fs.(fs.Copier)
|
||||
if !ok {
|
||||
return nil, fs.ErrorCantCopy
|
||||
}
|
||||
o, ok := src.(*Object)
|
||||
if !ok {
|
||||
return nil, fs.ErrorCantCopy
|
||||
}
|
||||
oResult, err := do.Copy(o.Object, f.cipher.EncryptFileName(remote))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return f.newObject(oResult), nil
|
||||
}
|
||||
|
||||
// Move src to this remote using server side move operations.
|
||||
//
|
||||
// This is stored with the remote path given
|
||||
//
|
||||
// It returns the destination Object and a possible error
|
||||
//
|
||||
// Will only be called if src.Fs().Name() == f.Name()
|
||||
//
|
||||
// If it isn't possible then return fs.ErrorCantMove
|
||||
func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
|
||||
do, ok := f.Fs.(fs.Mover)
|
||||
if !ok {
|
||||
return nil, fs.ErrorCantMove
|
||||
}
|
||||
o, ok := src.(*Object)
|
||||
if !ok {
|
||||
return nil, fs.ErrorCantMove
|
||||
}
|
||||
oResult, err := do.Move(o.Object, f.cipher.EncryptFileName(remote))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return f.newObject(oResult), nil
|
||||
}
|
||||
|
||||
// DirMove moves src to this remote using server side move
|
||||
// operations.
|
||||
//
|
||||
// Will only be called if src.Fs().Name() == f.Name()
|
||||
//
|
||||
// If it isn't possible then return fs.ErrorCantDirMove
|
||||
//
|
||||
// If destination exists then return fs.ErrorDirExists
|
||||
func (f *Fs) DirMove(src fs.Fs) error {
|
||||
do, ok := f.Fs.(fs.DirMover)
|
||||
if !ok {
|
||||
return fs.ErrorCantDirMove
|
||||
}
|
||||
srcFs, ok := src.(*Fs)
|
||||
if !ok {
|
||||
fs.Debug(srcFs, "Can't move directory - not same remote type")
|
||||
return fs.ErrorCantDirMove
|
||||
}
|
||||
return do.DirMove(srcFs.Fs)
|
||||
}
|
||||
|
||||
// UnWrap returns the Fs that this Fs is wrapping
|
||||
func (f *Fs) UnWrap() fs.Fs {
|
||||
return f.Fs
|
||||
}
|
||||
|
||||
// Object describes a wrapped fs.Object for being read from the Fs
|
||||
//
|
||||
// This decrypts the remote name and decrypts the data
|
||||
type Object struct {
|
||||
fs.Object
|
||||
f *Fs
|
||||
}
|
||||
|
||||
func (f *Fs) newObject(o fs.Object) *Object {
|
||||
return &Object{
|
||||
Object: o,
|
||||
f: f,
|
||||
}
|
||||
}
|
||||
|
||||
// Fs returns read only access to the Fs that this object is part of
|
||||
func (o *Object) Fs() fs.Info {
|
||||
return o.f
|
||||
}
|
||||
|
||||
// Return a string version
|
||||
func (o *Object) String() string {
|
||||
if o == nil {
|
||||
return "<nil>"
|
||||
}
|
||||
return o.Remote()
|
||||
}
|
||||
|
||||
// Remote returns the remote path
|
||||
func (o *Object) Remote() string {
|
||||
remote := o.Object.Remote()
|
||||
decryptedName, err := o.f.cipher.DecryptFileName(remote)
|
||||
if err != nil {
|
||||
fs.Debug(remote, "Undecryptable file name: %v", err)
|
||||
return remote
|
||||
}
|
||||
return decryptedName
|
||||
}
|
||||
|
||||
// Size returns the size of the file
|
||||
func (o *Object) Size() int64 {
|
||||
size, err := o.f.cipher.DecryptedSize(o.Object.Size())
|
||||
if err != nil {
|
||||
fs.Debug(o, "Bad size for decrypt: %v", err)
|
||||
}
|
||||
return size
|
||||
}
|
||||
|
||||
// Hash returns the selected checksum of the file
|
||||
// If no checksum is available it returns ""
|
||||
func (o *Object) Hash(hash fs.HashType) (string, error) {
|
||||
return "", nil
|
||||
}
|
||||
|
||||
// Open opens the file for read. Call Close() on the returned io.ReadCloser
|
||||
func (o *Object) Open() (io.ReadCloser, error) {
|
||||
in, err := o.Object.Open()
|
||||
if err != nil {
|
||||
return in, err
|
||||
}
|
||||
return o.f.cipher.DecryptData(in)
|
||||
}
|
||||
|
||||
// Update in to the object with the modTime given of the given size
|
||||
func (o *Object) Update(in io.Reader, src fs.ObjectInfo) error {
|
||||
wrappedIn, err := o.f.cipher.EncryptData(in)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return o.Object.Update(wrappedIn, o.f.newObjectInfo(src))
|
||||
}
|
||||
|
||||
// newDir returns a dir with the Name decrypted
|
||||
func (f *Fs) newDir(dir *fs.Dir) *fs.Dir {
|
||||
new := *dir
|
||||
remote := dir.Name
|
||||
decryptedRemote, err := f.cipher.DecryptDirName(remote)
|
||||
if err != nil {
|
||||
fs.Debug(remote, "Undecryptable dir name: %v", err)
|
||||
} else {
|
||||
new.Name = decryptedRemote
|
||||
}
|
||||
return &new
|
||||
}
|
||||
|
||||
// ObjectInfo describes a wrapped fs.ObjectInfo for being the source
|
||||
//
|
||||
// This encrypts the remote name and adjusts the size
|
||||
type ObjectInfo struct {
|
||||
fs.ObjectInfo
|
||||
f *Fs
|
||||
}
|
||||
|
||||
func (f *Fs) newObjectInfo(src fs.ObjectInfo) *ObjectInfo {
|
||||
return &ObjectInfo{
|
||||
ObjectInfo: src,
|
||||
f: f,
|
||||
}
|
||||
}
|
||||
|
||||
// Fs returns read only access to the Fs that this object is part of
|
||||
func (o *ObjectInfo) Fs() fs.Info {
|
||||
return o.f
|
||||
}
|
||||
|
||||
// Remote returns the remote path
|
||||
func (o *ObjectInfo) Remote() string {
|
||||
return o.f.cipher.EncryptFileName(o.ObjectInfo.Remote())
|
||||
}
|
||||
|
||||
// Size returns the size of the file
|
||||
func (o *ObjectInfo) Size() int64 {
|
||||
return o.f.cipher.EncryptedSize(o.ObjectInfo.Size())
|
||||
}
|
||||
|
||||
// ListOpts wraps an fs.ListOpts, decrypting the directory listing and
// replacing the Objects
|
||||
type ListOpts struct {
|
||||
fs.ListOpts
|
||||
f *Fs
|
||||
dir string // dir we are listing
|
||||
mu sync.Mutex // to protect dirs
|
||||
dirs map[string]struct{} // keep track of synthetic directory objects added
|
||||
}
|
||||
|
||||
// Make a ListOpts wrapper
|
||||
func (f *Fs) newListOpts(lo fs.ListOpts, dir string) *ListOpts {
|
||||
if dir != "" {
|
||||
dir += "/"
|
||||
}
|
||||
return &ListOpts{
|
||||
ListOpts: lo,
|
||||
f: f,
|
||||
dir: dir,
|
||||
dirs: make(map[string]struct{}),
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
// Level gets the recursion level for this listing.
|
||||
//
|
||||
// Fses may ignore this, but should implement it for improved efficiency if possible.
|
||||
//
|
||||
// Level 1 means list just the contents of the directory
|
||||
//
|
||||
// Each returned item must have fewer than level `/`s in it.
|
||||
func (lo *ListOpts) Level() int {
|
||||
return lo.ListOpts.Level()
|
||||
}
|
||||
|
||||
// Add an object to the output.
|
||||
// If the function returns true, the operation has been aborted.
|
||||
// Multiple goroutines can safely add objects concurrently.
|
||||
func (lo *ListOpts) Add(obj fs.Object) (abort bool) {
|
||||
remote := obj.Remote()
|
||||
_, err := lo.f.cipher.DecryptFileName(remote)
|
||||
if err != nil {
|
||||
fs.Debug(remote, "Skipping undecryptable file name: %v", err)
|
||||
return lo.ListOpts.IsFinished()
|
||||
}
|
||||
return lo.ListOpts.Add(lo.f.newObject(obj))
|
||||
}
|
||||
|
||||
// AddDir adds a directory to the output.
|
||||
// If the function returns true, the operation has been aborted.
|
||||
// Multiple goroutines can safely add objects concurrently.
|
||||
func (lo *ListOpts) AddDir(dir *fs.Dir) (abort bool) {
|
||||
remote := dir.Name
|
||||
_, err := lo.f.cipher.DecryptDirName(remote)
|
||||
if err != nil {
|
||||
fs.Debug(remote, "Skipping undecryptable dir name: %v", err)
|
||||
return lo.ListOpts.IsFinished()
|
||||
}
|
||||
return lo.ListOpts.AddDir(lo.f.newDir(dir))
|
||||
}
|
||||
|
||||
// IncludeDirectory returns whether this directory should be
|
||||
// included in the listing (and recursed into or not).
|
||||
func (lo *ListOpts) IncludeDirectory(remote string) bool {
|
||||
decryptedRemote, err := lo.f.cipher.DecryptDirName(remote)
|
||||
if err != nil {
|
||||
fs.Debug(remote, "Not including undecryptable directory name: %v", err)
|
||||
return false
|
||||
}
|
||||
return lo.ListOpts.IncludeDirectory(decryptedRemote)
|
||||
}
|
||||
|
||||
// Check the interfaces are satisfied
|
||||
var (
|
||||
_ fs.Fs = (*Fs)(nil)
|
||||
_ fs.Purger = (*Fs)(nil)
|
||||
_ fs.Copier = (*Fs)(nil)
|
||||
_ fs.Mover = (*Fs)(nil)
|
||||
_ fs.DirMover = (*Fs)(nil)
|
||||
// _ fs.PutUncheckeder = (*Fs)(nil)
|
||||
_ fs.UnWrapper = (*Fs)(nil)
|
||||
_ fs.ObjectInfo = (*ObjectInfo)(nil)
|
||||
_ fs.Object = (*Object)(nil)
|
||||
_ fs.ListOpts = (*ListOpts)(nil)
|
||||
)
|
||||
crypt/crypt2_test.go (new file, 59 lines)
@@ -0,0 +1,59 @@
|
||||
// Test Crypt filesystem interface
|
||||
//
|
||||
// Automatically generated - DO NOT EDIT
|
||||
// Regenerate with: make gen_tests
|
||||
package crypt_test
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/ncw/rclone/crypt"
|
||||
"github.com/ncw/rclone/fs"
|
||||
"github.com/ncw/rclone/fstest/fstests"
|
||||
_ "github.com/ncw/rclone/local"
|
||||
)
|
||||
|
||||
func TestSetup2(t *testing.T) {
|
||||
fstests.NilObject = fs.Object((*crypt.Object)(nil))
|
||||
fstests.RemoteName = "TestCrypt2:"
|
||||
}
|
||||
|
||||
// Generic tests for the Fs
|
||||
func TestInit2(t *testing.T) { fstests.TestInit(t) }
|
||||
func TestFsString2(t *testing.T) { fstests.TestFsString(t) }
|
||||
func TestFsRmdirEmpty2(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
|
||||
func TestFsRmdirNotFound2(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
|
||||
func TestFsMkdir2(t *testing.T) { fstests.TestFsMkdir(t) }
|
||||
func TestFsListEmpty2(t *testing.T) { fstests.TestFsListEmpty(t) }
|
||||
func TestFsListDirEmpty2(t *testing.T) { fstests.TestFsListDirEmpty(t) }
|
||||
func TestFsNewObjectNotFound2(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
|
||||
func TestFsPutFile12(t *testing.T) { fstests.TestFsPutFile1(t) }
|
||||
func TestFsPutFile22(t *testing.T) { fstests.TestFsPutFile2(t) }
|
||||
func TestFsUpdateFile12(t *testing.T) { fstests.TestFsUpdateFile1(t) }
|
||||
func TestFsListDirFile22(t *testing.T) { fstests.TestFsListDirFile2(t) }
|
||||
func TestFsListDirRoot2(t *testing.T) { fstests.TestFsListDirRoot(t) }
|
||||
func TestFsListSubdir2(t *testing.T) { fstests.TestFsListSubdir(t) }
|
||||
func TestFsListLevel22(t *testing.T) { fstests.TestFsListLevel2(t) }
|
||||
func TestFsListFile12(t *testing.T) { fstests.TestFsListFile1(t) }
|
||||
func TestFsNewObject2(t *testing.T) { fstests.TestFsNewObject(t) }
|
||||
func TestFsListFile1and22(t *testing.T) { fstests.TestFsListFile1and2(t) }
|
||||
func TestFsCopy2(t *testing.T) { fstests.TestFsCopy(t) }
|
||||
func TestFsMove2(t *testing.T) { fstests.TestFsMove(t) }
|
||||
func TestFsDirMove2(t *testing.T) { fstests.TestFsDirMove(t) }
|
||||
func TestFsRmdirFull2(t *testing.T) { fstests.TestFsRmdirFull(t) }
|
||||
func TestFsPrecision2(t *testing.T) { fstests.TestFsPrecision(t) }
|
||||
func TestObjectString2(t *testing.T) { fstests.TestObjectString(t) }
|
||||
func TestObjectFs2(t *testing.T) { fstests.TestObjectFs(t) }
|
||||
func TestObjectRemote2(t *testing.T) { fstests.TestObjectRemote(t) }
|
||||
func TestObjectHashes2(t *testing.T) { fstests.TestObjectHashes(t) }
|
||||
func TestObjectModTime2(t *testing.T) { fstests.TestObjectModTime(t) }
|
||||
func TestObjectSetModTime2(t *testing.T) { fstests.TestObjectSetModTime(t) }
|
||||
func TestObjectSize2(t *testing.T) { fstests.TestObjectSize(t) }
|
||||
func TestObjectOpen2(t *testing.T) { fstests.TestObjectOpen(t) }
|
||||
func TestObjectUpdate2(t *testing.T) { fstests.TestObjectUpdate(t) }
|
||||
func TestObjectStorable2(t *testing.T) { fstests.TestObjectStorable(t) }
|
||||
func TestFsIsFile2(t *testing.T) { fstests.TestFsIsFile(t) }
|
||||
func TestFsIsFileNotFound2(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
|
||||
func TestObjectRemove2(t *testing.T) { fstests.TestObjectRemove(t) }
|
||||
func TestObjectPurge2(t *testing.T) { fstests.TestObjectPurge(t) }
|
||||
func TestFinalise2(t *testing.T) { fstests.TestFinalise(t) }
|
||||
crypt/crypt_config_test.go (new file, 27 lines)
@@ -0,0 +1,27 @@
|
||||
package crypt_test
|
||||
|
||||
import (
|
||||
"os"
|
||||
"path/filepath"
|
||||
|
||||
"github.com/ncw/rclone/fs"
|
||||
"github.com/ncw/rclone/fstest/fstests"
|
||||
)
|
||||
|
||||
// Create the TestCrypt: remote
|
||||
func init() {
|
||||
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-standard")
|
||||
name := "TestCrypt"
|
||||
tempdir2 := filepath.Join(os.TempDir(), "rclone-crypt-test-off")
|
||||
name2 := name + "2"
|
||||
fstests.ExtraConfig = []fstests.ExtraConfigItem{
|
||||
{Name: name, Key: "type", Value: "crypt"},
|
||||
{Name: name, Key: "remote", Value: tempdir},
|
||||
{Name: name, Key: "password", Value: fs.MustObscure("potato")},
|
||||
{Name: name, Key: "filename_encryption", Value: "standard"},
|
||||
{Name: name2, Key: "type", Value: "crypt"},
|
||||
{Name: name2, Key: "remote", Value: tempdir2},
|
||||
{Name: name2, Key: "password", Value: fs.MustObscure("potato2")},
|
||||
{Name: name2, Key: "filename_encryption", Value: "off"},
|
||||
}
|
||||
}
|
||||
crypt/crypt_test.go (new file, 59 lines)
@@ -0,0 +1,59 @@
|
||||
// Test Crypt filesystem interface
|
||||
//
|
||||
// Automatically generated - DO NOT EDIT
|
||||
// Regenerate with: make gen_tests
|
||||
package crypt_test
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/ncw/rclone/crypt"
|
||||
"github.com/ncw/rclone/fs"
|
||||
"github.com/ncw/rclone/fstest/fstests"
|
||||
_ "github.com/ncw/rclone/local"
|
||||
)
|
||||
|
||||
func TestSetup(t *testing.T) {
|
||||
fstests.NilObject = fs.Object((*crypt.Object)(nil))
|
||||
fstests.RemoteName = "TestCrypt:"
|
||||
}
|
||||
|
||||
// Generic tests for the Fs
|
||||
func TestInit(t *testing.T) { fstests.TestInit(t) }
|
||||
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
|
||||
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
|
||||
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
|
||||
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
|
||||
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
|
||||
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
|
||||
func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
|
||||
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
|
||||
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
|
||||
func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
|
||||
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
|
||||
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
|
||||
func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
|
||||
func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
|
||||
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
|
||||
func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
|
||||
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
|
||||
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
|
||||
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
|
||||
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
|
||||
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
|
||||
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
|
||||
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
|
||||
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
|
||||
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
|
||||
func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
|
||||
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
|
||||
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
|
||||
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
|
||||
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
|
||||
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
|
||||
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
|
||||
func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
|
||||
func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
|
||||
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
|
||||
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
|
||||
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }
|
||||
crypt/pkcs7/pkcs7.go (new file, 63 lines)
@@ -0,0 +1,63 @@
// Package pkcs7 implements PKCS#7 padding
//
// This is a standard way of encoding variable length buffers into
// buffers which are a multiple of an underlying crypto block size.
package pkcs7

import "github.com/pkg/errors"

// Errors Unpad can return
var (
    ErrorPaddingNotFound      = errors.New("Bad PKCS#7 padding - not padded")
    ErrorPaddingNotAMultiple  = errors.New("Bad PKCS#7 padding - not a multiple of blocksize")
    ErrorPaddingTooLong       = errors.New("Bad PKCS#7 padding - too long")
    ErrorPaddingTooShort      = errors.New("Bad PKCS#7 padding - too short")
    ErrorPaddingNotAllTheSame = errors.New("Bad PKCS#7 padding - not all the same")
)

// Pad buf using PKCS#7 to a multiple of n.
//
// Appends the padding to buf - make a copy of it first if you don't
// want it modified.
func Pad(n int, buf []byte) []byte {
    if n <= 1 || n >= 256 {
        panic("bad multiple")
    }
    length := len(buf)
    padding := n - (length % n)
    for i := 0; i < padding; i++ {
        buf = append(buf, byte(padding))
    }
    if (len(buf) % n) != 0 {
        panic("padding failed")
    }
    return buf
}

// Unpad buf using PKCS#7 from a multiple of n returning a slice of
// buf or an error if malformed.
func Unpad(n int, buf []byte) ([]byte, error) {
    if n <= 1 || n >= 256 {
        panic("bad multiple")
    }
    length := len(buf)
    if length == 0 {
        return nil, ErrorPaddingNotFound
    }
    if (length % n) != 0 {
        return nil, ErrorPaddingNotAMultiple
    }
    padding := int(buf[length-1])
    if padding > n {
        return nil, ErrorPaddingTooLong
    }
    if padding == 0 {
        return nil, ErrorPaddingTooShort
    }
    for i := 0; i < padding; i++ {
        if buf[length-1-i] != byte(padding) {
            return nil, ErrorPaddingNotAllTheSame
        }
    }
    return buf[:length-padding], nil
}
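To illustrate the API added above, here is a minimal sketch (not part of the diff) round-tripping a buffer through `Pad` and `Unpad` using the 16-byte block size of AES:

```go
package main

import (
    "fmt"

    "github.com/ncw/rclone/crypt/pkcs7"
)

func main() {
    // "hello" is 5 bytes, so padding to 16 appends eleven 0x0b bytes
    padded := pkcs7.Pad(16, []byte("hello"))
    fmt.Println(len(padded)) // 16

    // Unpad checks the padding is well formed and strips it again
    orig, err := pkcs7.Unpad(16, padded)
    fmt.Printf("%q %v\n", orig, err) // "hello" <nil>
}
```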
crypt/pkcs7/pkcs7_test.go (new file, 73 lines)
@@ -0,0 +1,73 @@
package pkcs7

import (
    "fmt"
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestPad(t *testing.T) {
    for _, test := range []struct {
        n        int
        in       string
        expected string
    }{
        {8, "", "\x08\x08\x08\x08\x08\x08\x08\x08"},
        {8, "1", "1\x07\x07\x07\x07\x07\x07\x07"},
        {8, "12", "12\x06\x06\x06\x06\x06\x06"},
        {8, "123", "123\x05\x05\x05\x05\x05"},
        {8, "1234", "1234\x04\x04\x04\x04"},
        {8, "12345", "12345\x03\x03\x03"},
        {8, "123456", "123456\x02\x02"},
        {8, "1234567", "1234567\x01"},
        {8, "abcdefgh", "abcdefgh\x08\x08\x08\x08\x08\x08\x08\x08"},
        {8, "abcdefgh1", "abcdefgh1\x07\x07\x07\x07\x07\x07\x07"},
        {8, "abcdefgh12", "abcdefgh12\x06\x06\x06\x06\x06\x06"},
        {8, "abcdefgh123", "abcdefgh123\x05\x05\x05\x05\x05"},
        {8, "abcdefgh1234", "abcdefgh1234\x04\x04\x04\x04"},
        {8, "abcdefgh12345", "abcdefgh12345\x03\x03\x03"},
        {8, "abcdefgh123456", "abcdefgh123456\x02\x02"},
        {8, "abcdefgh1234567", "abcdefgh1234567\x01"},
        {8, "abcdefgh12345678", "abcdefgh12345678\x08\x08\x08\x08\x08\x08\x08\x08"},
        {16, "", "\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10"},
        {16, "a", "a\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f"},
    } {
        actual := Pad(test.n, []byte(test.in))
        assert.Equal(t, test.expected, string(actual), fmt.Sprintf("Pad %d %q", test.n, test.in))
        recovered, err := Unpad(test.n, actual)
        assert.NoError(t, err)
        assert.Equal(t, []byte(test.in), recovered, fmt.Sprintf("Unpad %d %q", test.n, test.in))
    }
    assert.Panics(t, func() { Pad(1, []byte("")) }, "bad multiple")
    assert.Panics(t, func() { Pad(256, []byte("")) }, "bad multiple")
}

func TestUnpad(t *testing.T) {
    // We've tested the OK decoding in TestPad, now test the error cases
    for _, test := range []struct {
        n   int
        in  string
        err error
    }{
        {8, "", ErrorPaddingNotFound},
        {8, "1", ErrorPaddingNotAMultiple},
        {8, "12", ErrorPaddingNotAMultiple},
        {8, "123", ErrorPaddingNotAMultiple},
        {8, "1234", ErrorPaddingNotAMultiple},
        {8, "12345", ErrorPaddingNotAMultiple},
        {8, "123456", ErrorPaddingNotAMultiple},
        {8, "1234567", ErrorPaddingNotAMultiple},
        {8, "1234567\xFF", ErrorPaddingTooLong},
        {8, "1234567\x09", ErrorPaddingTooLong},
        {8, "1234567\x00", ErrorPaddingTooShort},
        {8, "123456\x01\x02", ErrorPaddingNotAllTheSame},
        {8, "\x07\x08\x08\x08\x08\x08\x08\x08", ErrorPaddingNotAllTheSame},
    } {
        result, actualErr := Unpad(test.n, []byte(test.in))
        assert.Equal(t, test.err, actualErr, fmt.Sprintf("Unpad %d %q", test.n, test.in))
        assert.Equal(t, result, []byte(nil))
    }
    assert.Panics(t, func() { _, _ = Unpad(1, []byte("")) }, "bad multiple")
    assert.Panics(t, func() { _, _ = Unpad(256, []byte("")) }, "bad multiple")
}
dircache/dircache.go (new file, 267 lines)
@@ -0,0 +1,267 @@
// Package dircache provides a simple cache for caching directory to path lookups
package dircache

// _methods assume the caller already holds the lock

import (
    "log"
    "strings"
    "sync"

    "github.com/ncw/rclone/fs"
    "github.com/pkg/errors"
)

// DirCache caches paths to directory IDs and vice versa
type DirCache struct {
    cacheMu      sync.RWMutex
    cache        map[string]string
    invCache     map[string]string
    mu           sync.Mutex
    fs           DirCacher // Interface to find and make directories
    trueRootID   string    // ID of the absolute root
    root         string    // the path we are working on
    rootID       string    // ID of the root directory
    rootParentID string    // ID of the root's parent directory
    foundRoot    bool      // Whether we have found the root or not
}

// DirCacher describes an interface for doing the low level directory work
type DirCacher interface {
    FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err error)
    CreateDir(pathID, leaf string) (newID string, err error)
}

// New makes a DirCache
//
// The cache is safe for concurrent use
func New(root string, trueRootID string, fs DirCacher) *DirCache {
    d := &DirCache{
        trueRootID: trueRootID,
        root:       root,
        fs:         fs,
    }
    d.Flush()
    d.ResetRoot()
    return d
}

// Get an ID given a path
func (dc *DirCache) Get(path string) (id string, ok bool) {
    dc.cacheMu.RLock()
    id, ok = dc.cache[path]
    dc.cacheMu.RUnlock()
    return
}

// GetInv gets a path given an ID
func (dc *DirCache) GetInv(id string) (path string, ok bool) {
    dc.cacheMu.RLock()
    path, ok = dc.invCache[id]
    dc.cacheMu.RUnlock()
    return
}

// Put a path, id into the map
func (dc *DirCache) Put(path, id string) {
    dc.cacheMu.Lock()
    dc.cache[path] = id
    dc.invCache[id] = path
    dc.cacheMu.Unlock()
}

// Flush the map of all data
func (dc *DirCache) Flush() {
    dc.cacheMu.Lock()
    dc.cache = make(map[string]string)
    dc.invCache = make(map[string]string)
    dc.cacheMu.Unlock()
}

// SplitPath splits a path into directory, leaf
//
// Path shouldn't start or end with a /
//
// If there are no slashes then directory will be "" and leaf = path
func SplitPath(path string) (directory, leaf string) {
    lastSlash := strings.LastIndex(path, "/")
    if lastSlash >= 0 {
        directory = path[:lastSlash]
        leaf = path[lastSlash+1:]
    } else {
        directory = ""
        leaf = path
    }
    return
}

// FindDir finds the directory passed in returning the directory ID
// starting from pathID
//
// Path shouldn't start or end with a /
//
// If create is set it will make the directory if not found
//
// Algorithm:
//  Look in the cache for the path, if found return the pathID
//  If not found strip the last path off the path and recurse
//  Now have a parent directory id, so look in the parent for self and return it
func (dc *DirCache) FindDir(path string, create bool) (pathID string, err error) {
    dc.mu.Lock()
    defer dc.mu.Unlock()
    return dc._findDir(path, create)
}

// Look for the root and in the cache - safe to call without the mu
func (dc *DirCache) _findDirInCache(path string) string {
    // fmt.Println("Finding",path,"create",create,"cache",cache)
    // If it is the root, then return it
    if path == "" {
        // fmt.Println("Root")
        return dc.rootID
    }

    // If it is in the cache then return it
    pathID, ok := dc.Get(path)
    if ok {
        // fmt.Println("Cache hit on", path)
        return pathID
    }

    return ""
}

// Unlocked findDir - must have mu
func (dc *DirCache) _findDir(path string, create bool) (pathID string, err error) {
    pathID = dc._findDirInCache(path)
    if pathID != "" {
        return pathID, nil
    }

    // Split the path into directory, leaf
    directory, leaf := SplitPath(path)

    // Recurse and find pathID for parent directory
    parentPathID, err := dc._findDir(directory, create)
    if err != nil {
        return "", err
    }

    // Find the leaf in parentPathID
    pathID, found, err := dc.fs.FindLeaf(parentPathID, leaf)
    if err != nil {
        return "", err
    }

    // If not found create the directory if required or return an error
    if !found {
        if create {
            pathID, err = dc.fs.CreateDir(parentPathID, leaf)
            if err != nil {
                return "", errors.Wrap(err, "failed to make directory")
            }
        } else {
            return "", fs.ErrorDirNotFound
        }
    }

    // Store the leaf directory in the cache
    dc.Put(path, pathID)

    // fmt.Println("Dir", path, "is", pathID)
    return pathID, nil
}

// FindPath finds the leaf and directoryID from a path
//
// If create is set parent directories will be created if they don't exist
func (dc *DirCache) FindPath(path string, create bool) (leaf, directoryID string, err error) {
    dc.mu.Lock()
    defer dc.mu.Unlock()
    directory, leaf := SplitPath(path)
    directoryID, err = dc._findDir(directory, create)
    return
}

// FindRoot finds the root directory if not already found
//
// Resets the root directory
//
// If create is set it will make the directory if not found
func (dc *DirCache) FindRoot(create bool) error {
    dc.mu.Lock()
    defer dc.mu.Unlock()
    if dc.foundRoot {
        return nil
    }
    rootID, err := dc._findDir(dc.root, create)
    if err != nil {
        return err
    }
    dc.foundRoot = true
    dc.rootID = rootID

    // Find the parent of the root while we still have the root
    // directory tree cached
    rootParentPath, _ := SplitPath(dc.root)
    dc.rootParentID, _ = dc.Get(rootParentPath)

    // Reset the tree based on dc.root
    dc.Flush()
    // Put the root directory in
    dc.Put("", dc.rootID)
    return nil
}

// FoundRoot returns whether the root directory has been found yet
//
// Call this from FindLeaf or CreateDir only
func (dc *DirCache) FoundRoot() bool {
    return dc.foundRoot
}

// RootID returns the ID of the root directory
//
// This should be called after FindRoot
func (dc *DirCache) RootID() string {
    dc.mu.Lock()
    defer dc.mu.Unlock()
    if !dc.foundRoot {
        log.Fatalf("Internal Error: RootID() called before FindRoot")
    }
    return dc.rootID
}

// RootParentID returns the ID of the parent of the root directory
//
// This should be called after FindRoot
func (dc *DirCache) RootParentID() (string, error) {
    dc.mu.Lock()
    defer dc.mu.Unlock()
    if !dc.foundRoot {
        return "", errors.New("internal error: RootParentID() called before FindRoot")
    }
    if dc.rootParentID == "" {
        return "", errors.New("internal error: didn't find rootParentID")
    }
    if dc.rootID == dc.trueRootID {
        return "", errors.New("is root directory")
    }
    return dc.rootParentID, nil
}

// ResetRoot resets the root directory to the absolute root and clears
// the DirCache
func (dc *DirCache) ResetRoot() {
    dc.mu.Lock()
    defer dc.mu.Unlock()
    dc.foundRoot = false
    dc.Flush()

    // Put the true root in
    dc.rootID = dc.trueRootID

    // Put the root directory in
    dc.Put("", dc.rootID)
}
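For orientation, a sketch of how a backend might plug into this cache. `memFS` below is a hypothetical in-memory `DirCacher` standing in for the API calls a real remote would make in `FindLeaf` and `CreateDir`; it is illustrative only, not part of the diff.

```go
package main

import (
    "fmt"

    "github.com/ncw/rclone/dircache"
)

// memFS is a hypothetical in-memory backend: directory IDs are just
// counters, and children maps a parent ID to its leaves.
type memFS struct {
    next     int
    children map[string]map[string]string
}

func (m *memFS) FindLeaf(pathID, leaf string) (string, bool, error) {
    id, found := m.children[pathID][leaf]
    return id, found, nil
}

func (m *memFS) CreateDir(pathID, leaf string) (string, error) {
    m.next++
    id := fmt.Sprintf("id-%d", m.next)
    if m.children[pathID] == nil {
        m.children[pathID] = map[string]string{}
    }
    m.children[pathID][leaf] = id
    return id, nil
}

func main() {
    f := &memFS{children: map[string]map[string]string{}}
    dc := dircache.New("", "root", f) // work relative to the absolute root
    if err := dc.FindRoot(true); err != nil {
        panic(err)
    }
    id, err := dc.FindDir("a/b/c", true) // makes a, a/b and a/b/c
    fmt.Println(id, err)                 // id-3 <nil>
}
```

Repeated lookups of `a/b/c` after this hit the cache and make no `FindLeaf` calls at all.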
dircache/list.go (new file, 82 lines)
@@ -0,0 +1,82 @@
// Listing utility functions for fses which use dircache

package dircache

import (
    "sync"

    "github.com/ncw/rclone/fs"
)

// ListDirJob describes a directory listing that needs to be done
type ListDirJob struct {
    DirID string
    Path  string
    Depth int
}

// ListDirer describes the interface necessary to use ListDir
type ListDirer interface {
    // ListDir reads the directory specified by the job into out, returning any more jobs
    ListDir(out fs.ListOpts, job ListDirJob) (jobs []ListDirJob, err error)
}

// listDir lists the directory using a recursive list from the root
//
// It does this in parallel, calling f.ListDir to do the actual reading
func listDir(f ListDirer, out fs.ListOpts, dirID string, path string) {
    // Start some directory listing go routines
    var wg sync.WaitGroup         // sync closing of go routines
    var traversing sync.WaitGroup // running directory traversals
    buffer := out.Buffer()
    in := make(chan ListDirJob, buffer)
    for i := 0; i < buffer; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for job := range in {
                jobs, err := f.ListDir(out, job)
                if err != nil {
                    out.SetError(err)
                    fs.Debug(f, "Error reading %s: %s", path, err)
                } else {
                    traversing.Add(len(jobs))
                    go func() {
                        // Now we have traversed this directory, send these
                        // jobs off for traversal in the background
                        for _, job := range jobs {
                            in <- job
                        }
                    }()
                }
                traversing.Done()
            }
        }()
    }

    // Start the process
    traversing.Add(1)
    in <- ListDirJob{DirID: dirID, Path: path, Depth: out.Level() - 1}
    traversing.Wait()
    close(in)
    wg.Wait()
}

// List walks the path returning files and directories into out
func (dc *DirCache) List(f ListDirer, out fs.ListOpts, dir string) {
    defer out.Finished()
    err := dc.FindRoot(false)
    if err != nil {
        out.SetError(err)
        return
    }
    id, err := dc.FindDir(dir, false)
    if err != nil {
        out.SetError(err)
        return
    }
    if dir != "" {
        dir += "/"
    }
    listDir(f, out, id, dir)
}
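The concurrency pattern in `listDir` above can be hard to see through the rclone types, so here is a self-contained sketch of the same technique applied to the local filesystem: a bounded worker pool drains a channel of directory jobs, and a second WaitGroup counts outstanding directories so the channel can be closed once traversal finishes. This is an illustration under those assumptions, not rclone code.

```go
package main

import (
    "fmt"
    "os"
    "path/filepath"
    "sync"
)

type job struct{ dir string }

func main() {
    in := make(chan job, 4)
    var workers, outstanding sync.WaitGroup
    for i := 0; i < 4; i++ { // fixed pool of listers
        workers.Add(1)
        go func() {
            defer workers.Done()
            for j := range in {
                entries, err := os.ReadDir(j.dir)
                if err != nil {
                    fmt.Fprintln(os.Stderr, err)
                } else {
                    var subdirs []job
                    for _, e := range entries {
                        name := filepath.Join(j.dir, e.Name())
                        if e.IsDir() {
                            subdirs = append(subdirs, job{name})
                        } else {
                            fmt.Println(name)
                        }
                    }
                    // Queue the subdirectories from a separate goroutine
                    // so a full channel can't deadlock the workers.
                    outstanding.Add(len(subdirs))
                    go func() {
                        for _, s := range subdirs {
                            in <- s
                        }
                    }()
                }
                outstanding.Done() // this directory is finished
            }
        }()
    }
    outstanding.Add(1)
    in <- job{"."}
    outstanding.Wait() // zero directories left to traverse
    close(in)          // now the workers can exit
    workers.Wait()
}
```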
docs/README.md (new file, 6 lines)
@@ -0,0 +1,6 @@
Docs
====

See the content directory for the docs in markdown format.

Use [hugo](https://github.com/spf13/hugo) to build the website.
docs/config.json (new file, 15 lines)
@@ -0,0 +1,15 @@
{
    "indexes": {
        "tag": "tags",
        "group": "groups",
        "menu": "menu"
    },
    "baseurl": "http://rclone.org",
    "title": "rclone - rsync for cloud storage",
    "description": "rclone - rsync for cloud storage: google drive, s3, swift, cloudfiles, dropbox, memstore...",
    "canonifyurls": true,
    "blackfriday": {
        "smartDashes": false,
        "plainIDAnchors": true
    }
}
docs/content/about.md (new file, 43 lines)
@@ -0,0 +1,43 @@
---
title: "Rclone"
description: "rclone syncs files to and from Google Drive, S3, Swift, Cloudfiles, Dropbox, Google Cloud Storage and Amazon Drive."
type: page
date: "2015-09-06"
groups: ["about"]
---

Rclone
======

[](http://rclone.org/)

Rclone is a command line program to sync files and directories to and from

* Google Drive
* Amazon S3
* Openstack Swift / Rackspace cloud files / Memset Memstore
* Dropbox
* Google Cloud Storage
* Amazon Drive
* Microsoft One Drive
* Hubic
* Backblaze B2
* Yandex Disk
* The local filesystem

Features

* MD5/SHA1 hashes checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* Copy mode to just copy new/changed files
* Sync (one way) mode to make a directory identical
* Check mode to check for file hash equality
* Can sync to and from network, eg two different cloud accounts

Links

* <i class="fa fa-home"></i> [Home page](http://rclone.org/)
* <i class="fa fa-github"></i> [Github project page for source and bug tracker](http://github.com/ncw/rclone)
* <i class="fa fa-google-plus"></i> <a href="https://google.com/+RcloneOrg" rel="publisher">Google+ page</a>
* <i class="fa fa-cloud-download"></i> [Downloads](/downloads/)
docs/content/amazonclouddrive.md (new file, 158 lines)
@@ -0,0 +1,158 @@
---
title: "Amazon Drive"
description: "Rclone docs for Amazon Drive"
date: "2016-07-11"
---

<i class="fa fa-amazon"></i> Amazon Drive
-----------------------------------------

Paths are specified as `remote:path`

Paths may be as deep as required, eg `remote:directory/subdirectory`.

The initial setup for Amazon Drive involves getting a token from
Amazon which you need to do in your browser. `rclone config` walks
you through it.

Here is an example of how to make a remote called `remote`. First run:

    rclone config

This will guide you through an interactive setup process:

```
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Amazon Drive
   \ "amazon cloud drive"
 2 / Amazon S3 (also Dreamhost, Ceph)
   \ "s3"
 3 / Backblaze B2
   \ "b2"
 4 / Dropbox
   \ "dropbox"
 5 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
 6 / Google Drive
   \ "drive"
 7 / Hubic
   \ "hubic"
 8 / Local Disk
   \ "local"
 9 / Microsoft OneDrive
   \ "onedrive"
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
11 / Yandex Disk
   \ "yandex"
Storage> 1
Amazon Application Client Id - leave blank normally.
client_id>
Amazon Application Client Secret - leave blank normally.
client_secret>
Remote config
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
client_id =
client_secret =
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

See the [remote setup docs](/remote_setup/) for how to set it up on a
machine with no Internet browser available.

Note that rclone runs a webserver on your local machine to collect the
token as returned from Amazon. This only runs from the moment it
opens your browser to the moment you get back the verification
code. This is on `http://127.0.0.1:53682/` and it may require
you to unblock it temporarily if you are running a host firewall.

Once configured you can then use `rclone` like this,

List directories in top level of your Amazon Drive

    rclone lsd remote:

List all the files in your Amazon Drive

    rclone ls remote:

To copy a local directory to an Amazon Drive directory called backup

    rclone copy /home/source remote:backup

### Modified time and MD5SUMs ###

Amazon Drive doesn't allow modification times to be changed via
the API so these won't be accurate or used for syncing.

It does store MD5SUMs so for a more accurate sync, you can use the
`--checksum` flag.

### Deleting files ###

Any files you delete with rclone will end up in the trash. Amazon
don't provide an API to permanently delete files, nor to empty the
trash, so you will have to do that with one of Amazon's apps or via
the Amazon Drive website.

### Specific options ###

Here are the command line options specific to this cloud storage
system.

#### --acd-templink-threshold=SIZE ####

Files this size or more will be downloaded via their `tempLink`. This
is to work around a problem with Amazon Drive which blocks downloads
of files bigger than about 10GB. The default for this is 9GB which
shouldn't need to be changed.

To download files above this threshold, rclone requests a `tempLink`
which downloads the file through a temporary URL directly from the
underlying S3 storage.

#### --acd-upload-wait-time=TIME ####

Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This
controls the time rclone waits - 2 minutes by default. You might want
to increase the time if you are having problems with very big files.
Upload with the `-v` flag for more info.

### Limitations ###

Note that Amazon Drive is case insensitive so you can't have a
file called "Hello.doc" and one called "hello.doc".

Amazon Drive has rate limiting so you may notice errors in the
sync (429 errors). rclone will automatically retry the sync up to 3
times by default (see the `--retries` flag) which should hopefully work
around this problem.

Amazon Drive has an internal limit on the size of files that can be
uploaded to the service. This limit is not officially published, but
all files larger than it will fail.

At the time of writing (Jan 2016) it is in the area of 50GB per file.
This means that larger files are likely to fail.

Unfortunately there is no way for rclone to see that this failure is
because of file size, so it will retry the operation, as with any other
failure. To avoid this problem, use the `--max-size=50GB` option to limit
the maximum size of uploaded files.
docs/content/authors.md (new file, 39 lines)
@@ -0,0 +1,39 @@
---
title: "Authors"
description: "Rclone Authors and Contributors"
date: "2016-04-22"
---

Authors
-------

* Nick Craig-Wood <nick@craig-wood.com>

Contributors
------------

* Alex Couper <amcouper@gmail.com>
* Leonid Shalupov <leonid@shalupov.com>
* Shimon Doodkin <helpmepro1@gmail.com>
* Colin Nicholson <colin@colinn.com>
* Klaus Post <klauspost@gmail.com>
* Sergey Tolmachev <tolsi.ru@gmail.com>
* Adriano Aurélio Meirelles <adriano@atinge.com>
* C. Bess <cbess@users.noreply.github.com>
* Dmitry Burdeev <dibu28@gmail.com>
* Joseph Spurrier <github@josephspurrier.com>
* Björn Harrtell <bjorn@wololo.org>
* Xavier Lucas <xavier.lucas@corp.ovh.com>
* Werner Beroux <werner@beroux.com>
* Brian Stengaard <brian@stengaard.eu>
* Jakub Gedeon <jgedeon@sofi.com>
* Jim Tittsler <jwt@onjapan.net>
* Michal Witkowski <michal@improbable.io>
* Fabian Ruff <fabian.ruff@sap.com>
* Leigh Klotz <klotz@quixey.com>
* Romain Lapray <lapray.romain@gmail.com>
* Justin R. Wilson <jrw972@gmail.com>
* Antonio Messina <antonio.s.messina@gmail.com>
* Stefan G. Weichinger <office@oops.co.at>
* Per Cederberg <cederberg@gmail.com>
* Radek Šenfeld <rush@logic.cz>
docs/content/b2.md (new file, 248 lines)
@@ -0,0 +1,248 @@
---
title: "B2"
description: "Backblaze B2"
date: "2016-06-15"
---

<i class="fa fa-fire"></i> Backblaze B2
----------------------------------------

B2 is [Backblaze's cloud storage system](https://www.backblaze.com/b2/).

Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`.

Here is an example of making a b2 configuration. First run

    rclone config

This will guide you through an interactive setup process. You will
need your account number (a short hex number) and key (a long hex
number) which you can get from the b2 control panel.

```
No remotes found - make a new one
n) New remote
q) Quit config
n/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Amazon Drive
   \ "amazon cloud drive"
 2 / Amazon S3 (also Dreamhost, Ceph)
   \ "s3"
 3 / Backblaze B2
   \ "b2"
 4 / Dropbox
   \ "dropbox"
 5 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
 6 / Google Drive
   \ "drive"
 7 / Hubic
   \ "hubic"
 8 / Local Disk
   \ "local"
 9 / Microsoft OneDrive
   \ "onedrive"
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
11 / Yandex Disk
   \ "yandex"
Storage> 3
Account ID
account> 123456789abc
Application Key
key> 0123456789abcdef0123456789abcdef0123456789
Endpoint for the service - leave blank normally.
endpoint>
Remote config
--------------------
[remote]
account = 123456789abc
key = 0123456789abcdef0123456789abcdef0123456789
endpoint =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

This remote is called `remote` and can now be used like this

See all buckets

    rclone lsd remote:

Make a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync `/home/local/directory` to the remote bucket, deleting any
excess files in the bucket.

    rclone sync /home/local/directory remote:bucket

### Modified time ###

The modified time is stored as metadata on the object as
`X-Bz-Info-src_last_modified_millis` as milliseconds since 1970-01-01
in the Backblaze standard. Other tools should be able to use this as
a modified time.

Modified times are used in syncing and are fully supported except in
the case of updating a modification time on an existing object. In
this case the object will be uploaded again as B2 doesn't have an API
method to set the modification time independent of doing an upload.
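As a sketch of that convention (not rclone source), converting a modification time to and from the millisecond value stored in `X-Bz-Info-src_last_modified_millis` looks like this in Go:

```go
package main

import (
    "fmt"
    "time"
)

func main() {
    modTime := time.Date(2016, 7, 4, 14, 10, 32, 0, time.UTC)

    // Value stored in X-Bz-Info-src_last_modified_millis
    millis := modTime.UnixNano() / int64(time.Millisecond)
    fmt.Println(millis) // 1467641432000

    // And back again
    back := time.Unix(millis/1000, (millis%1000)*int64(time.Millisecond))
    fmt.Println(back.UTC().Equal(modTime)) // true
}
```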
### SHA1 checksums ###

The SHA1 checksums of the files are checked on upload and download and
will be used in the syncing process.

Large files which are uploaded in chunks will store their SHA1 on the
object as `X-Bz-Info-large_file_sha1` as recommended by Backblaze.

### Transfers ###

Backblaze recommends that you do lots of transfers simultaneously for
maximum speed. In tests from my SSD equipped laptop the optimum
setting is about `--transfers 32` though higher numbers may be used
for a slight speed improvement. The optimum number for you may vary
depending on your hardware, how big the files are, how much you want
to load your computer, etc. The default of `--transfers 4` is
definitely too low for Backblaze B2 though.

Note that uploading big files (bigger than 200 MB by default) will use
a 96 MB RAM buffer by default. There can be at most `--transfers` of
these in use at any moment, so this sets the upper limit on the memory
used (for example, `--transfers 32` could buffer up to about 3 GB).

### Versions ###

When rclone uploads a new version of a file it creates a [new version
of it](https://www.backblaze.com/b2/docs/file_versions.html).
Likewise when you delete a file, the old version will still be
available.

Old versions of files are visible using the `--b2-versions` flag.

If you wish to remove all the old versions then you can use the
`rclone cleanup remote:bucket` command which will delete all the old
versions of files, leaving the current ones intact. You can also
supply a path and only old versions under that path will be deleted,
eg `rclone cleanup remote:bucket/path/to/stuff`.

When you `purge` a bucket, the current and the old versions will be
deleted then the bucket will be deleted.

However `delete` will cause the current versions of the files to
become hidden old versions.

Here is a session showing the listing and retrieval of an old
version followed by a `cleanup` of the old versions.

Show the current version and all the versions with the `--b2-versions` flag.

```
$ rclone -q ls b2:cleanup-test
        9 one.txt

$ rclone -q --b2-versions ls b2:cleanup-test
        9 one.txt
        8 one-v2016-07-04-141032-000.txt
       16 one-v2016-07-04-141003-000.txt
       15 one-v2016-07-02-155621-000.txt
```

Retrieve an old version

```
$ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp

$ ls -l /tmp/one-v2016-07-04-141003-000.txt
-rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt
```

Clean up all the old versions and show that they've gone.

```
$ rclone -q cleanup b2:cleanup-test

$ rclone -q ls b2:cleanup-test
        9 one.txt

$ rclone -q --b2-versions ls b2:cleanup-test
        9 one.txt
```

### Specific options ###

Here are the command line options specific to this cloud storage
system.

#### --b2-chunk-size=SIZE ####

When uploading large files chunk the file into this size. Note that
these chunks are buffered in memory and there might be a maximum of
`--transfers` chunks in progress at once. 100,000,000 Bytes is the
minimum size (default 96M).

#### --b2-upload-cutoff=SIZE ####

Cutoff for switching to chunked upload (default 190.735 MiB == 200
MB). Files above this size will be uploaded in chunks of
`--b2-chunk-size`.

This value should be set no larger than 4.657GiB (== 5GB) as this is
the largest file size that can be uploaded.

#### --b2-test-mode=FLAG ####

This is for debugging purposes only.

Setting FLAG to one of the strings below will cause b2 to return
specific errors for debugging purposes.

* `fail_some_uploads`
* `expire_some_account_authorization_tokens`
* `force_cap_exceeded`

These will be set in the `X-Bz-Test-Mode` header which is documented
in the [b2 integrations
checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).

#### --b2-versions ####

When set rclone will show and act on older versions of files. For example

Listing without `--b2-versions`

```
$ rclone -q ls b2:cleanup-test
        9 one.txt
```

And with

```
$ rclone -q --b2-versions ls b2:cleanup-test
        9 one.txt
        8 one-v2016-07-04-141032-000.txt
       16 one-v2016-07-04-141003-000.txt
       15 one-v2016-07-02-155621-000.txt
```

Showing that the current version is unchanged but older versions can
be seen. These have the UTC date that they were uploaded to the
server to the nearest millisecond appended to them.

Note that when using `--b2-versions` no file write operations are
permitted, so you can't upload files or delete them.
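A sketch, not rclone's own code, of turning that version suffix back into a timestamp; the `parseVersion` helper and its exact error handling are illustrative assumptions based on the listings above:

```go
package main

import (
    "fmt"
    "path"
    "strconv"
    "strings"
    "time"
)

// parseVersion splits "one-v2016-07-04-141003-000.txt" into the base
// name and the upload time encoded in the suffix (assumed format:
// -vYYYY-MM-DD-HHMMSS-mmm before the extension).
func parseVersion(name string) (base string, t time.Time, err error) {
    const stampLen = len("2006-01-02-150405-000")
    ext := path.Ext(name)
    stem := strings.TrimSuffix(name, ext)
    i := strings.LastIndex(stem, "-v")
    if i < 0 || len(stem)-i-2 != stampLen {
        return "", time.Time{}, fmt.Errorf("no version suffix in %q", name)
    }
    stamp := stem[i+2:]
    t, err = time.Parse("2006-01-02-150405", stamp[:len("2006-01-02-150405")])
    if err != nil {
        return "", time.Time{}, err
    }
    ms, err := strconv.Atoi(stamp[len(stamp)-3:]) // trailing milliseconds
    if err != nil {
        return "", time.Time{}, err
    }
    return stem[:i] + ext, t.Add(time.Duration(ms) * time.Millisecond), nil
}

func main() {
    fmt.Println(parseVersion("one-v2016-07-04-141003-000.txt"))
    // one.txt 2016-07-04 14:10:03 +0000 UTC <nil>
}
```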
docs/content/bugs.md (new file, 31 lines)
@@ -0,0 +1,31 @@
---
title: "Bugs"
description: "Rclone Bugs and Limitations"
date: "2014-06-16"
---

Bugs and Limitations
--------------------

### Empty directories are left behind / not created ###

With remotes that have a concept of directory, eg Local and Drive,
empty directories may be left behind, or not created when one was
expected.

This is because rclone doesn't have a concept of a directory - it only
works on objects. Most of the object storage systems can't actually
store a directory so there is nowhere for rclone to store anything
about directories.

You can work around this to some extent with the `purge` command which
will delete everything under the path, **including** empty directories.

This may be fixed at some point in
[Issue #100](https://github.com/ncw/rclone/issues/100)

### Directory timestamps aren't preserved ###

For the same reason as the above, rclone doesn't have a concept of a
directory - it only works on objects, therefore it can't preserve the
timestamps of directories.
docs/content/changelog.md (new file, 432 lines)
@@ -0,0 +1,432 @@
---
title: "Documentation"
description: "Rclone Changelog"
date: "2016-08-24"
---

Changelog
---------

* v1.33 - 2016-08-24
  * New Features
    * Implement encryption
      * data encrypted in NACL secretbox format
      * with optional file name encryption
    * New commands
      * rclone mount - implements FUSE mounting of remotes (EXPERIMENTAL)
        * works on Linux, FreeBSD and OS X (need testers for the last 2!)
      * rclone cat - outputs remote file or files to the terminal
      * rclone genautocomplete - command to make a bash completion script for rclone
    * Editing a remote using `rclone config` now goes through the wizard
    * Compile with go 1.7 - this fixes rclone on macOS Sierra and on 386 processors
    * Use cobra for sub commands and docs generation
  * drive
    * Document how to make your own client_id
  * s3
    * User-configurable Amazon S3 ACL (thanks Radek Šenfeld)
  * b2
    * Fix stats accounting for upload - no more jumping to 100% done
    * On cleanup delete hide marker if it is the current file
    * New B2 API endpoint (thanks Per Cederberg)
    * Set maximum backoff to 5 Minutes
  * onedrive
    * Fix URL escaping in file names - eg uploading files with `+` in them.
  * amazon cloud drive
    * Fix token expiry during large uploads
    * Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
  * local
    * Fix filenames with invalid UTF-8 not being uploaded
    * Fix problem with some UTF-8 characters on OS X
* v1.32 - 2016-07-13
  * Backblaze B2
    * Fix upload of large files not in root
* v1.31 - 2016-07-13
  * New Features
    * Reduce memory on sync by about 50%
    * Implement --no-traverse flag to stop copy traversing the destination remote.
      * This can be used to reduce memory usage down to the smallest possible.
      * Useful to copy a small number of files into a large destination folder.
    * Implement cleanup command for emptying trash / removing old versions of files
      * Currently B2 only
    * Single file handling improved
      * Now copied with --files-from
      * Automatically sets --no-traverse when copying a single file
    * Info on installing with ansible - thanks Stefan Weichinger
    * Implement --no-update-modtime flag to stop rclone fixing the remote modified times.
  * Bug Fixes
    * Fix move command - stop it running for overlapping Fses - this was causing data loss.
  * Local
    * Fix incomplete hashes - this was causing problems for B2.
  * Amazon Drive
    * Rename Amazon Cloud Drive to Amazon Drive - no changes to config file needed.
  * Swift
    * Add support for non-default project domain - thanks Antonio Messina.
  * S3
    * Add instructions on how to use rclone with minio.
    * Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions.
    * Skip setting the modified time for objects > 5GB as it isn't possible.
  * Backblaze B2
    * Add --b2-versions flag so old versions can be listed and retrieved.
    * Treat 403 errors (eg cap exceeded) as fatal.
    * Implement cleanup command for deleting old file versions.
    * Make error handling compliant with B2 integrations notes.
    * Fix handling of token expiry.
    * Implement --b2-test-mode to set `X-Bz-Test-Mode` header.
    * Set cutoff for chunked upload to 200MB as per B2 guidelines.
    * Make upload multi-threaded.
  * Dropbox
    * Don't retry 461 errors.
* v1.30 - 2016-06-18
  * New Features
    * Directory listing code reworked for more features and better error reporting (thanks to Klaus Post for help). This enables
      * Directory include filtering for efficiency
      * --max-depth parameter
      * Better error reporting
      * More to come
    * Retry more errors
    * Add --ignore-size flag - for uploading images to onedrive
    * Log -v output to stdout by default
    * Display the transfer stats in more human readable form
    * Make 0 size files specifiable with `--max-size 0b`
    * Add `b` suffix so we can specify bytes in --bwlimit, --min-size etc
    * Use "password:" instead of "password>" prompt - thanks Klaus Post and Leigh Klotz
  * Bug Fixes
    * Fix retry doing one too many retries
  * Local
    * Fix problems with OS X and UTF-8 characters
  * Amazon Drive
    * Check a file exists before uploading to help with 408 Conflict errors
    * Reauth on 401 errors - this has been causing a lot of problems
    * Work around spurious 403 errors
    * Restart directory listings on error
  * Google Drive
    * Check a file exists before uploading to help with duplicates
    * Fix retry of multipart uploads
  * Backblaze B2
    * Implement large file uploading
  * S3
    * Add AES256 server-side encryption - thanks Justin R. Wilson
  * Google Cloud Storage
    * Make sure we don't use conflicting content types on upload
    * Add service account support - thanks Michal Witkowski
  * Swift
    * Add auth version parameter
    * Add domain option for openstack (v3 auth) - thanks Fabian Ruff
* v1.29 - 2016-04-18
  * New Features
    * Implement `-I, --ignore-times` for unconditional upload
    * Improve `dedupe` command
      * Now removes identical copies without asking
      * Now obeys `--dry-run`
      * Implement `--dedupe-mode` for non interactive running
        * `--dedupe-mode interactive` - interactive, the default.
        * `--dedupe-mode skip` - removes identical files then skips anything left.
        * `--dedupe-mode first` - removes identical files then keeps the first one.
        * `--dedupe-mode newest` - removes identical files then keeps the newest one.
        * `--dedupe-mode oldest` - removes identical files then keeps the oldest one.
        * `--dedupe-mode rename` - removes identical files then renames the rest to be different.
  * Bug fixes
    * Make rclone check obey the `--size-only` flag.
    * Use "application/octet-stream" if discovered mime type is invalid.
    * Fix missing "quit" option when there are no remotes.
  * Google Drive
    * Increase default chunk size to 8 MB - increases upload speed of big files
    * Speed up directory listings and make more reliable
    * Add missing retries for Move and DirMove - increases reliability
    * Preserve mime type on file update
  * Backblaze B2
    * Enable mod time syncing
      * This means that B2 will now check modification times
      * It will upload new files to update the modification times
      * (there isn't an API to just set the mod time.)
      * If you want the old behaviour use `--size-only`.
    * Update API to new version
    * Fix parsing of mod time when not in metadata
  * Swift/Hubic
    * Don't return an MD5SUM for static large objects
  * S3
    * Fix uploading files bigger than 50GB
* v1.28 - 2016-03-01
  * New Features
    * Configuration file encryption - thanks Klaus Post
    * Improve `rclone config` adding more help and making it easier to understand
    * Implement `-u`/`--update` so creation times can be used on all remotes
    * Implement `--low-level-retries` flag
    * Optionally disable gzip compression on downloads with `--no-gzip-encoding`
  * Bug fixes
    * Don't make directories if `--dry-run` set
    * Fix and document the `move` command
    * Fix redirecting stderr on unix-like OSes when using `--log-file`
    * Fix `delete` command to wait until all finished - fixes missing deletes.
  * Backblaze B2
    * Use one upload URL per go routine fixes `more than one upload using auth token`
    * Add pacing, retries and reauthentication - fixes token expiry problems
    * Upload without using a temporary file from local (and remotes which support SHA1)
    * Fix reading metadata for all files when it shouldn't have been
  * Drive
    * Fix listing drive documents at root
    * Disable copy and move for Google docs
  * Swift
    * Fix uploading of chunked files with non ASCII characters
    * Allow setting of `storage_url` in the config - thanks Xavier Lucas
  * S3
    * Allow IAM role and credentials from environment variables - thanks Brian Stengaard
    * Allow low privilege users to use S3 (check if directory exists during Mkdir) - thanks Jakub Gedeon
  * Amazon Drive
    * Retry on more things to make directory listings more reliable
* v1.27 - 2016-01-31
  * New Features
    * Easier headless configuration with `rclone authorize`
    * Add support for multiple hash types - we now check SHA1 as well as MD5 hashes.
    * `delete` command which does obey the filters (unlike `purge`)
    * `dedupe` command to deduplicate a remote. Useful with Google Drive.
    * Add `--ignore-existing` flag to skip all files that exist on destination.
    * Add `--delete-before`, `--delete-during`, `--delete-after` flags.
    * Add `--memprofile` flag to debug memory use.
    * Warn the user about files with same name but different case
    * Make `--include` rules add their implicit exclude * at the end of the filter list
    * Deprecate compiling with go1.3
  * Amazon Drive
    * Fix download of files > 10 GB
    * Fix directory traversal ("Next token is expired") for large directory listings
    * Remove 409 conflict from error codes we will retry - stops very long pauses
  * Backblaze B2
    * SHA1 hashes now checked by rclone core
  * Drive
    * Add `--drive-auth-owner-only` to only consider files owned by the user - thanks Björn Harrtell
    * Export Google documents
  * Dropbox
    * Make file exclusion error controllable with -q
  * Swift
    * Fix upload from unprivileged user.
  * S3
    * Fix updating of mod times of files with `+` in.
  * Local
    * Add local file system option to disable UNC on Windows.
* v1.26 - 2016-01-02
  * New Features
    * Yandex storage backend - thank you Dmitry Burdeev ("dibu")
    * Implement Backblaze B2 storage backend
    * Add --min-age and --max-age flags - thank you Adriano Aurélio Meirelles
    * Make ls/lsl/md5sum/size/check obey includes and excludes
  * Fixes
    * Fix crash in http logging
    * Upload releases to github too
  * Swift
    * Fix sync for chunked files
  * One Drive
    * Re-enable server side copy
    * Don't mask HTTP error codes with JSON decode error
  * S3
    * Fix corrupting Content-Type on mod time update (thanks Joseph Spurrier)
* v1.25 - 2015-11-14
  * New features
    * Implement Hubic storage system
  * Fixes
    * Fix deletion of some excluded files without --delete-excluded
      * This could have deleted files unexpectedly on sync
      * Always check first with `--dry-run`!
  * Swift
    * Stop SetModTime losing metadata (eg X-Object-Manifest)
      * This could have caused data loss for files > 5GB in size
    * Use ContentType from Object to avoid lookups in listings
  * One Drive
    * disable server side copy as it seems to be broken at Microsoft
* v1.24 - 2015-11-07
  * New features
    * Add support for Microsoft One Drive
    * Add `--no-check-certificate` option to disable server certificate verification
    * Add async readahead buffer for faster transfer of big files
  * Fixes
    * Allow spaces in remotes and check remote names for validity at creation time
    * Allow '&' and disallow ':' in Windows filenames.
  * Swift
    * Ignore directory marker objects where appropriate - allows working with Hubic
    * Don't delete the container if fs wasn't at root
  * S3
    * Don't delete the bucket if fs wasn't at root
  * Google Cloud Storage
    * Don't delete the bucket if fs wasn't at root
* v1.23 - 2015-10-03
  * New features
    * Implement `rclone size` for measuring remotes
  * Fixes
    * Fix headless config for drive and gcs
    * Tell the user they should try again if the webserver method failed
    * Improve output of `--dump-headers`
  * S3
    * Allow anonymous access to public buckets
  * Swift
    * Stop chunked operations logging "Failed to read info: Object Not Found"
    * Use Content-Length on uploads for extra reliability
* v1.22 - 2015-09-28
  * Implement rsync like include and exclude flags
  * swift
    * Support files > 5GB - thanks Sergey Tolmachev
* v1.21 - 2015-09-22
  * New features
    * Display individual transfer progress
    * Make lsl output times in localtime
  * Fixes
    * Fix allowing user to override credentials again in Drive, GCS and ACD
  * Amazon Drive
    * Implement compliant pacing scheme
  * Google Drive
    * Make directory reads concurrent for increased speed.
* v1.20 - 2015-09-15
  * New features
    * Amazon Drive support
    * Oauth support redone - fix many bugs and improve usability
      * Use "golang.org/x/oauth2" as oauth library of choice
      * Improve oauth usability for smoother initial signup
      * drive, googlecloudstorage: optionally use auto config for the oauth token
    * Implement --dump-headers and --dump-bodies debug flags
    * Show multiple matched commands if abbreviation too short
    * Implement server side move where possible
  * local
    * Always use UNC paths internally on Windows - fixes a lot of bugs
  * dropbox
    * force use of our custom transport which makes timeouts work
  * Thanks to Klaus Post for lots of help with this release
* v1.19 - 2015-08-28
  * New features
    * Server side copies for s3/swift/drive/dropbox/gcs
    * Move command - uses server side copies if it can
    * Implement --retries flag - tries 3 times by default
    * Build for plan9/amd64 and solaris/amd64 too
  * Fixes
    * Make a current version download with a fixed URL for scripting
    * Ignore rmdir in limited fs rather than throwing error
  * dropbox
    * Increase chunk size to improve upload speeds massively
    * Issue an error message when trying to upload bad file name
* v1.18 - 2015-08-17
  * drive
    * Add `--drive-use-trash` flag so rclone trashes instead of deletes
    * Add "Forbidden to download" message for files with no downloadURL
  * dropbox
    * Remove datastore
      * This was deprecated and it caused a lot of problems
      * Modification times and MD5SUMs no longer stored
    * Fix uploading files > 2GB
  * s3
    * use official AWS SDK from github.com/aws/aws-sdk-go
    * **NB** will most likely require you to delete and recreate remote
    * enable multipart upload which enables files > 5GB
    * tested with Ceph / RadosGW / S3 emulation
    * many thanks to Sam Liston and Brian Haymore at the [Utah Center for High Performance Computing](https://www.chpc.utah.edu/) for a Ceph test account
  * misc
    * Show errors when reading the config file
    * Do not print stats in quiet mode - thanks Leonid Shalupov
    * Add FAQ
    * Fix created directories not obeying umask
    * Linux installation instructions - thanks Shimon Doodkin
* v1.17 - 2015-06-14
  * dropbox: fix case insensitivity issues - thanks Leonid Shalupov
* v1.16 - 2015-06-09
  * Fix uploading big files which was causing timeouts or panics
  * Don't check md5sum after download with --size-only
* v1.15 - 2015-06-06
  * Add --checksum flag to only discard transfers by MD5SUM - thanks Alex Couper
  * Implement --size-only flag to sync on size not checksum & modtime
  * Expand docs and remove duplicated information
  * Document rclone's limitations with directories
  * dropbox: update docs about case insensitivity
* v1.14 - 2015-05-21
  * local: fix encoding of non utf-8 file names - fixes a duplicate file problem
  * drive: docs about rate limiting
  * google cloud storage: Fix compile after API change in "google.golang.org/api/storage/v1"
* v1.13 - 2015-05-10
  * Revise documentation (especially sync)
  * Implement --timeout and --conntimeout
  * s3: ignore etags from multipart uploads which aren't md5sums
* v1.12 - 2015-03-15
  * drive: Use chunked upload for files above a certain size
  * drive: add --drive-chunk-size and --drive-upload-cutoff parameters
  * drive: switch to insert from update when a failed copy deletes the upload
  * core: Log duplicate files if they are detected
* v1.11 - 2015-03-04
  * swift: add region parameter
  * drive: fix crash on failed to update remote mtime
  * In remote paths, change native directory separators to /
  * Add synchronization to ls/lsl/lsd output to stop corruptions
  * Ensure all stats/log messages go to stderr
  * Add --log-file flag to log everything (including panics) to file
  * Make it possible to disable stats printing with --stats=0
  * Implement --bwlimit to limit data transfer bandwidth
* v1.10 - 2015-02-12
  * s3: list an unlimited number of items
  * Fix getting stuck in the configurator
* v1.09 - 2015-02-07
  * windows: Stop drive letters (eg C:) getting mixed up with remotes (eg drive:)
  * local: Fix directory separators on Windows
  * drive: fix rate limit exceeded errors
* v1.08 - 2015-02-04
  * drive: fix subdirectory listing to not list entire drive
  * drive: Fix SetModTime
  * dropbox: adapt code to recent library changes
* v1.07 - 2014-12-23
  * google cloud storage: fix memory leak
* v1.06 - 2014-12-12
  * Fix "Couldn't find home directory" on OSX
  * swift: Add tenant parameter
  * Use new location of Google API packages
* v1.05 - 2014-08-09
  * Improved tests and consequently lots of minor fixes
  * core: Fix race detected by go race detector
  * core: Fixes after running errcheck
  * drive: reset root directory on Rmdir and Purge
  * fs: Document that Purger returns error on empty directory, test and fix
  * google cloud storage: fix ListDir on subdirectory
  * google cloud storage: re-read metadata in SetModTime
  * s3: make reading metadata more reliable to work around eventual consistency problems
  * s3: strip trailing / from ListDir()
  * swift: return directories without / in ListDir
* v1.04 - 2014-07-21
  * google cloud storage: Fix crash on Update
* v1.03 - 2014-07-20
  * swift, s3, dropbox: fix updated files being marked as corrupted
  * Make compile with go 1.1 again
* v1.02 - 2014-07-19
  * Implement Dropbox remote
  * Implement Google Cloud Storage remote
  * Verify Md5sums and Sizes after copies
  * Remove times from "ls" command - lists sizes only
  * Add "lsl" - lists times and sizes
  * Add "md5sum" command
* v1.01 - 2014-07-04
  * drive: fix transfer of big files using up lots of memory
* v1.00 - 2014-07-03
  * drive: fix whole second dates
* v0.99 - 2014-06-26
  * Fix --dry-run not working
  * Make compatible with go 1.1
* v0.98 - 2014-05-30
  * s3: Treat missing Content-Length as 0 for some ceph installations
  * rclonetest: add file with a space in
* v0.97 - 2014-05-05
  * Implement copying of single files
  * s3 & swift: support paths inside containers/buckets
* v0.96 - 2014-04-24
  * drive: Fix multiple files of same name being created
  * drive: Use o.Update and fs.Put to optimise transfers
  * Add version number, -V and --version
* v0.95 - 2014-03-28
  * rclone.org: website, docs and graphics
  * drive: fix path parsing
* v0.94 - 2014-03-27
  * Change remote format one last time
  * GNU style flags
* v0.93 - 2014-03-16
  * drive: store token in config file
  * cross compile other versions
  * set strict permissions on config file
* v0.92 - 2014-03-15
  * Config fixes and --config option
* v0.91 - 2014-03-15
  * Make config file
* v0.90 - 2013-06-27
  * Project named rclone
* v0.00 - 2012-11-18
  * Project started
143
docs/content/commands/rclone.md
Normal file
@@ -0,0 +1,143 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone"
slug: rclone
url: /commands/rclone/
---
## rclone

Sync files and directories to and from local and remote object stores - v1.33-DEV

### Synopsis


Rclone is a command line program to sync files and directories to and
from various cloud storage systems, such as:

* Google Drive
* Amazon S3
* Openstack Swift / Rackspace cloud files / Memset Memstore
* Dropbox
* Google Cloud Storage
* Amazon Drive
* Microsoft OneDrive
* Hubic
* Backblaze B2
* Yandex Disk
* The local filesystem

Features

* MD5/SHA1 hashes checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* Copy mode to just copy new/changed files
* Sync (one way) mode to make a directory identical
* Check mode to check for file hash equality
* Can sync to and from network, eg two different cloud accounts

See the home page for installation, usage, documentation, changelog
and configuration walkthroughs.

* http://rclone.org/


```
rclone
```
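
As a quick orientation, typical invocations look like the following (a usage sketch: the remote name `remote:` is an assumption and must first be created with `rclone config`):

```
rclone config                        # interactive setup of a new remote
rclone copy /local/path remote:path  # copy new/changed files to the remote
rclone sync /local/path remote:path  # one way sync, making the remote identical
rclone ls remote:path                # list objects in the path with size and path
```

Each of these commands is documented on its own page in the SEE ALSO list below.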

### Options

```
      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink.
      --acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
      --ask-password                  Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int             Upload chunk size. Must fit in memory.
      --b2-test-mode string           A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int          Cutoff for switching to chunked upload
      --b2-versions                   Include old versions in directory listings.
      --bwlimit int                   Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                  Number of checkers to run in parallel. (default 8)
  -c, --checksum                      Skip based on checksum & size, not mod-time & size
      --config string                 Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration           Connect timeout (default 1m0s)
      --cpuprofile string             Write cpu profile to file
      --delete-after                  When synchronizing, delete files on destination after transferring
      --delete-before                 When synchronizing, delete files on destination before transferring
      --delete-during                 When synchronizing, delete files during transfer (default)
      --delete-excluded               Delete files on dest excluded from sync
      --drive-auth-owner-only         Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int          Upload chunk size. Must be a power of 2 >= 256k.
      --drive-formats string          Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list               Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int       Cutoff for switching to chunked upload
      --drive-use-trash               Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int        Upload chunk size. Max 150M.
  -n, --dry-run                       Do a trial run with no permanent changes
      --dump-bodies                   Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                  Dump the filters to the output
      --dump-headers                  Dump HTTP headers - may contain sensitive info
      --exclude string                Exclude files matching pattern
      --exclude-from string           Read exclude patterns from file
      --files-from string             Read list of source-file names from file
  -f, --filter string                 Add a file-filtering rule
      --filter-from string            Read filtering patterns from a file
      --ignore-existing               Skip all files that exist on destination
      --ignore-size                   Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                  Don't skip files that match size and time - transfer all files
      --include string                Include files matching pattern
      --include-from string           Read include patterns from file
      --log-file string               Log everything to this file
      --low-level-retries int         Number of low level retries to do. (default 10)
      --max-age string                Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                 If set limits the recursion depth to this. (default -1)
      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G
      --memprofile string             Write memory profile to file
      --min-age string                Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G
      --modify-window duration        Max time diff to be considered the same (default 1ns)
      --no-check-certificate          Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding              Don't set Accept-Encoding: gzip.
      --no-traverse                   Don't traverse destination file system on copy.
      --no-update-modtime             Don't update destination mod-time if files identical.
      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k.
      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB
  -q, --quiet                         Print as little stuff as possible
      --retries int                   Retry operations this many times if they fail (default 3)
      --size-only                     Skip based on size only, not mod-time or checksum
      --stats duration                Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int          Above this size files will be chunked into a _segments container.
      --timeout duration              IO idle timeout (default 5m0s)
      --transfers int                 Number of file transfers to run in parallel. (default 4)
  -u, --update                        Skip files that are newer on the destination.
  -v, --verbose                       Print lots more stuff
  -V, --version                       Print the version number
```

### SEE ALSO
* [rclone authorize](/commands/rclone_authorize/) - Remote authorization.
* [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout.
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied
* [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate files and delete/rename them.
* [rclone delete](/commands/rclone_delete/) - Remove the contents of path.
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output bash completion script for rclone.
* [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied.
* [rclone ls](/commands/rclone_ls/) - List all the objects in the path with size and path.
* [rclone lsd](/commands/rclone_lsd/) - List all directories/containers/buckets in the path.
* [rclone lsl](/commands/rclone_lsl/) - List all the objects in the path with modification time, size and path.
* [rclone md5sum](/commands/rclone_md5sum/) - Produces an md5sum file for all the objects in the path.
* [rclone mkdir](/commands/rclone_mkdir/) - Make the path if it doesn't already exist.
* [rclone mount](/commands/rclone_mount/) - Mount the remote as a mountpoint. **EXPERIMENTAL**
* [rclone move](/commands/rclone_move/) - Move files from source to dest.
* [rclone purge](/commands/rclone_purge/) - Remove the path and all of its contents.
* [rclone rmdir](/commands/rclone_rmdir/) - Remove the path if empty.
* [rclone sha1sum](/commands/rclone_sha1sum/) - Produces a sha1sum file for all the objects in the path.
* [rclone size](/commands/rclone_size/) - Prints the total size and number of objects in remote:path.
* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only.
* [rclone version](/commands/rclone_version/) - Show the version number.

###### Auto generated by spf13/cobra on 24-Aug-2016
93
docs/content/commands/rclone_authorize.md
Normal file
@@ -0,0 +1,93 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
---
## rclone authorize

Remote authorization.

### Synopsis


Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.

```
rclone authorize
```
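
To make the flow concrete, here is a sketch of the headless setup (the exact prompts and token exchange are printed by `rclone config` itself, so follow those rather than this outline):

```
# On the headless machine, start configuration as usual:
rclone config
# When asked whether to use auto config, answer No; rclone then asks you
# to run the following on a machine which has a web browser:
rclone authorize
# Paste the token it prints back into the waiting rclone config session.
```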

### Options inherited from parent commands

```
      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink.
      --acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
      --ask-password                  Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int             Upload chunk size. Must fit in memory.
      --b2-test-mode string           A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int          Cutoff for switching to chunked upload
      --b2-versions                   Include old versions in directory listings.
      --bwlimit int                   Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                  Number of checkers to run in parallel. (default 8)
  -c, --checksum                      Skip based on checksum & size, not mod-time & size
      --config string                 Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration           Connect timeout (default 1m0s)
      --cpuprofile string             Write cpu profile to file
      --delete-after                  When synchronizing, delete files on destination after transferring
      --delete-before                 When synchronizing, delete files on destination before transferring
      --delete-during                 When synchronizing, delete files during transfer (default)
      --delete-excluded               Delete files on dest excluded from sync
      --drive-auth-owner-only         Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int          Upload chunk size. Must be a power of 2 >= 256k.
      --drive-formats string          Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list               Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int       Cutoff for switching to chunked upload
      --drive-use-trash               Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int        Upload chunk size. Max 150M.
  -n, --dry-run                       Do a trial run with no permanent changes
      --dump-bodies                   Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                  Dump the filters to the output
      --dump-headers                  Dump HTTP headers - may contain sensitive info
      --exclude string                Exclude files matching pattern
      --exclude-from string           Read exclude patterns from file
      --files-from string             Read list of source-file names from file
  -f, --filter string                 Add a file-filtering rule
      --filter-from string            Read filtering patterns from a file
      --ignore-existing               Skip all files that exist on destination
      --ignore-size                   Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                  Don't skip files that match size and time - transfer all files
      --include string                Include files matching pattern
      --include-from string           Read include patterns from file
      --log-file string               Log everything to this file
      --low-level-retries int         Number of low level retries to do. (default 10)
      --max-age string                Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                 If set limits the recursion depth to this. (default -1)
      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G
      --memprofile string             Write memory profile to file
      --min-age string                Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G
      --modify-window duration        Max time diff to be considered the same (default 1ns)
      --no-check-certificate          Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding              Don't set Accept-Encoding: gzip.
      --no-traverse                   Don't traverse destination file system on copy.
      --no-update-modtime             Don't update destination mod-time if files identical.
      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k.
      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB
  -q, --quiet                         Print as little stuff as possible
      --retries int                   Retry operations this many times if they fail (default 3)
      --size-only                     Skip based on size only, not mod-time or checksum
      --stats duration                Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int          Above this size files will be chunked into a _segments container.
      --timeout duration              IO idle timeout (default 5m0s)
      --transfers int                 Number of file transfers to run in parallel. (default 4)
  -u, --update                        Skip files that are newer on the destination.
  -v, --verbose                       Print lots more stuff
```

### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
96
docs/content/commands/rclone_check.md
Normal file
@@ -0,0 +1,96 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
---
## rclone check

Checks the files in the source and destination match.

### Synopsis


Checks the files in the source and destination match. It
compares sizes and MD5SUMs and prints a report of files which
don't match. It doesn't alter the source or destination.

`--size-only` may be used to only compare the sizes, not the MD5SUMs.

```
rclone check source:path dest:path
```
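
For example, to compare only the sizes of the two sides, combine the command with the flag documented above:

```
rclone check --size-only source:path dest:path
```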

### Options inherited from parent commands

```
      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink.
      --acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
      --ask-password                  Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int             Upload chunk size. Must fit in memory.
      --b2-test-mode string           A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int          Cutoff for switching to chunked upload
      --b2-versions                   Include old versions in directory listings.
      --bwlimit int                   Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                  Number of checkers to run in parallel. (default 8)
  -c, --checksum                      Skip based on checksum & size, not mod-time & size
      --config string                 Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration           Connect timeout (default 1m0s)
      --cpuprofile string             Write cpu profile to file
      --delete-after                  When synchronizing, delete files on destination after transferring
      --delete-before                 When synchronizing, delete files on destination before transferring
      --delete-during                 When synchronizing, delete files during transfer (default)
      --delete-excluded               Delete files on dest excluded from sync
      --drive-auth-owner-only         Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int          Upload chunk size. Must be a power of 2 >= 256k.
      --drive-formats string          Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list               Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int       Cutoff for switching to chunked upload
      --drive-use-trash               Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int        Upload chunk size. Max 150M.
  -n, --dry-run                       Do a trial run with no permanent changes
      --dump-bodies                   Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                  Dump the filters to the output
      --dump-headers                  Dump HTTP headers - may contain sensitive info
      --exclude string                Exclude files matching pattern
      --exclude-from string           Read exclude patterns from file
      --files-from string             Read list of source-file names from file
  -f, --filter string                 Add a file-filtering rule
      --filter-from string            Read filtering patterns from a file
      --ignore-existing               Skip all files that exist on destination
      --ignore-size                   Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                  Don't skip files that match size and time - transfer all files
      --include string                Include files matching pattern
      --include-from string           Read include patterns from file
      --log-file string               Log everything to this file
      --low-level-retries int         Number of low level retries to do. (default 10)
      --max-age string                Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                 If set limits the recursion depth to this. (default -1)
      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G
      --memprofile string             Write memory profile to file
      --min-age string                Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G
      --modify-window duration        Max time diff to be considered the same (default 1ns)
      --no-check-certificate          Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding              Don't set Accept-Encoding: gzip.
      --no-traverse                   Don't traverse destination file system on copy.
      --no-update-modtime             Don't update destination mod-time if files identical.
      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k.
      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB
  -q, --quiet                         Print as little stuff as possible
      --retries int                   Retry operations this many times if they fail (default 3)
      --size-only                     Skip based on size only, not mod-time or checksum
      --stats duration                Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int          Above this size files will be chunked into a _segments container.
      --timeout duration              IO idle timeout (default 5m0s)
      --transfers int                 Number of file transfers to run in parallel. (default 4)
  -u, --update                        Skip files that are newer on the destination.
  -v, --verbose                       Print lots more stuff
```

### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
93
docs/content/commands/rclone_cleanup.md
Normal file
@@ -0,0 +1,93 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
---
## rclone cleanup

Clean up the remote if possible

### Synopsis


Clean up the remote if possible. Empty the trash or delete old file
versions. Not supported by all remotes.

```
rclone cleanup remote:path
```
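
Since what gets removed depends on the remote, a cautious sketch is to preview with the global `--dry-run` flag before running the real thing:

```
rclone --dry-run cleanup remote:path
rclone cleanup remote:path
```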

### Options inherited from parent commands

```
      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink.
      --acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
      --ask-password                  Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int             Upload chunk size. Must fit in memory.
      --b2-test-mode string           A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int          Cutoff for switching to chunked upload
      --b2-versions                   Include old versions in directory listings.
      --bwlimit int                   Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                  Number of checkers to run in parallel. (default 8)
  -c, --checksum                      Skip based on checksum & size, not mod-time & size
      --config string                 Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration           Connect timeout (default 1m0s)
      --cpuprofile string             Write cpu profile to file
      --delete-after                  When synchronizing, delete files on destination after transferring
      --delete-before                 When synchronizing, delete files on destination before transferring
      --delete-during                 When synchronizing, delete files during transfer (default)
      --delete-excluded               Delete files on dest excluded from sync
      --drive-auth-owner-only         Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int          Upload chunk size. Must be a power of 2 >= 256k.
      --drive-formats string          Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list               Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int       Cutoff for switching to chunked upload
      --drive-use-trash               Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int        Upload chunk size. Max 150M.
  -n, --dry-run                       Do a trial run with no permanent changes
      --dump-bodies                   Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                  Dump the filters to the output
      --dump-headers                  Dump HTTP headers - may contain sensitive info
      --exclude string                Exclude files matching pattern
      --exclude-from string           Read exclude patterns from file
      --files-from string             Read list of source-file names from file
  -f, --filter string                 Add a file-filtering rule
      --filter-from string            Read filtering patterns from a file
      --ignore-existing               Skip all files that exist on destination
      --ignore-size                   Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                  Don't skip files that match size and time - transfer all files
      --include string                Include files matching pattern
      --include-from string           Read include patterns from file
      --log-file string               Log everything to this file
      --low-level-retries int         Number of low level retries to do. (default 10)
      --max-age string                Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                 If set limits the recursion depth to this. (default -1)
      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G
      --memprofile string             Write memory profile to file
      --min-age string                Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G
      --modify-window duration        Max time diff to be considered the same (default 1ns)
      --no-check-certificate          Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding              Don't set Accept-Encoding: gzip.
      --no-traverse                   Don't traverse destination file system on copy.
      --no-update-modtime             Don't update destination mod-time if files identical.
      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k.
      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB
  -q, --quiet                         Print as little stuff as possible
      --retries int                   Retry operations this many times if they fail (default 3)
      --size-only                     Skip based on size only, not mod-time or checksum
      --stats duration                Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int          Above this size files will be chunked into a _segments container.
      --timeout duration              IO idle timeout (default 5m0s)
      --transfers int                 Number of file transfers to run in parallel. (default 4)
  -u, --update                        Skip files that are newer on the destination.
  -v, --verbose                       Print lots more stuff
```

### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
90
docs/content/commands/rclone_config.md
Normal file
@@ -0,0 +1,90 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
---
## rclone config

Enter an interactive configuration session.

### Synopsis


Enter an interactive configuration session.

```
rclone config
```

### Options inherited from parent commands

```
      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink.
      --acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
      --ask-password                  Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int             Upload chunk size. Must fit in memory.
      --b2-test-mode string           A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int          Cutoff for switching to chunked upload
      --b2-versions                   Include old versions in directory listings.
      --bwlimit int                   Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                  Number of checkers to run in parallel. (default 8)
  -c, --checksum                      Skip based on checksum & size, not mod-time & size
      --config string                 Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration           Connect timeout (default 1m0s)
      --cpuprofile string             Write cpu profile to file
      --delete-after                  When synchronizing, delete files on destination after transferring
      --delete-before                 When synchronizing, delete files on destination before transferring
      --delete-during                 When synchronizing, delete files during transfer (default)
      --delete-excluded               Delete files on dest excluded from sync
      --drive-auth-owner-only         Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int          Upload chunk size. Must be a power of 2 >= 256k.
      --drive-formats string          Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list               Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int       Cutoff for switching to chunked upload
      --drive-use-trash               Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int        Upload chunk size. Max 150M.
  -n, --dry-run                       Do a trial run with no permanent changes
      --dump-bodies                   Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                  Dump the filters to the output
      --dump-headers                  Dump HTTP headers - may contain sensitive info
      --exclude string                Exclude files matching pattern
      --exclude-from string           Read exclude patterns from file
      --files-from string             Read list of source-file names from file
  -f, --filter string                 Add a file-filtering rule
      --filter-from string            Read filtering patterns from a file
      --ignore-existing               Skip all files that exist on destination
      --ignore-size                   Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                  Don't skip files that match size and time - transfer all files
      --include string                Include files matching pattern
      --include-from string           Read include patterns from file
      --log-file string               Log everything to this file
      --low-level-retries int         Number of low level retries to do. (default 10)
      --max-age string                Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                 If set limits the recursion depth to this. (default -1)
      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G
      --memprofile string             Write memory profile to file
      --min-age string                Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G
      --modify-window duration        Max time diff to be considered the same (default 1ns)
      --no-check-certificate          Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding              Don't set Accept-Encoding: gzip.
      --no-traverse                   Don't traverse destination file system on copy.
      --no-update-modtime             Don't update destination mod-time if files identical.
      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k.
      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB
  -q, --quiet                         Print as little stuff as possible
      --retries int                   Retry operations this many times if they fail (default 3)
      --size-only                     Skip based on size only, not mod-time or checksum
      --stats duration                Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int          Above this size files will be chunked into a _segments container.
      --timeout duration              IO idle timeout (default 5m0s)
      --transfers int                 Number of file transfers to run in parallel. (default 4)
  -u, --update                        Skip files that are newer on the destination.
  -v, --verbose                       Print lots more stuff
```

### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
129
docs/content/commands/rclone_copy.md
Normal file
@@ -0,0 +1,129 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
---
## rclone copy

Copy files from source to dest, skipping already copied

### Synopsis


Copy the source to the destination. Doesn't transfer
unchanged files, testing by size and modification time or
MD5SUM. Doesn't delete files from the destination.

Note that it is always the contents of the directory that is synced,
not the directory itself, so when source:path is a directory, it's the
contents of source:path that are copied, not the directory name and
contents.

If dest:path doesn't exist, it is created and the source:path contents
go there.

For example

    rclone copy source:sourcepath dest:destpath

Let's say there are two files in sourcepath

    sourcepath/one.txt
    sourcepath/two.txt

This copies them to

    destpath/one.txt
    destpath/two.txt

Not to

    destpath/sourcepath/one.txt
    destpath/sourcepath/two.txt

If you are familiar with `rsync`, rclone always works as if you had
written a trailing / - meaning "copy the contents of this directory".
This applies to all commands and whether you are talking about the
source or destination.

See the `--no-traverse` option for controlling whether rclone lists
the destination directory or not.

```
rclone copy source:path dest:path
```
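
As a concrete sketch (the local path and the remote name `remote:` here are illustrative assumptions), preview a copy with the global `--dry-run` flag and then run it for real:

```
rclone --dry-run copy /home/user/photos remote:photos
rclone copy /home/user/photos remote:photos
```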

### Options inherited from parent commands

```
      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink.
      --acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
      --ask-password                  Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int             Upload chunk size. Must fit in memory.
      --b2-test-mode string           A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int          Cutoff for switching to chunked upload
      --b2-versions                   Include old versions in directory listings.
      --bwlimit int                   Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                  Number of checkers to run in parallel. (default 8)
  -c, --checksum                      Skip based on checksum & size, not mod-time & size
      --config string                 Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration           Connect timeout (default 1m0s)
      --cpuprofile string             Write cpu profile to file
      --delete-after                  When synchronizing, delete files on destination after transferring
      --delete-before                 When synchronizing, delete files on destination before transferring
      --delete-during                 When synchronizing, delete files during transfer (default)
      --delete-excluded               Delete files on dest excluded from sync
      --drive-auth-owner-only         Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int          Upload chunk size. Must be a power of 2 >= 256k.
      --drive-formats string          Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list               Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int       Cutoff for switching to chunked upload
      --drive-use-trash               Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int        Upload chunk size. Max 150M.
  -n, --dry-run                       Do a trial run with no permanent changes
      --dump-bodies                   Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                  Dump the filters to the output
      --dump-headers                  Dump HTTP headers - may contain sensitive info
      --exclude string                Exclude files matching pattern
      --exclude-from string           Read exclude patterns from file
      --files-from string             Read list of source-file names from file
  -f, --filter string                 Add a file-filtering rule
      --filter-from string            Read filtering patterns from a file
      --ignore-existing               Skip all files that exist on destination
      --ignore-size                   Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                  Don't skip files that match size and time - transfer all files
      --include string                Include files matching pattern
      --include-from string           Read include patterns from file
      --log-file string               Log everything to this file
      --low-level-retries int         Number of low level retries to do. (default 10)
      --max-age string                Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                 If set limits the recursion depth to this. (default -1)
      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G
      --memprofile string             Write memory profile to file
      --min-age string                Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G
      --modify-window duration        Max time diff to be considered the same (default 1ns)
      --no-check-certificate          Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding              Don't set Accept-Encoding: gzip.
      --no-traverse                   Don't traverse destination file system on copy.
      --no-update-modtime             Don't update destination mod-time if files identical.
      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k.
      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB
  -q, --quiet                         Print as little stuff as possible
      --retries int                   Retry operations this many times if they fail (default 3)
      --size-only                     Skip based on size only, not mod-time or checksum
      --stats duration                Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int          Above this size files will be chunked into a _segments container.
      --timeout duration              IO idle timeout (default 5m0s)
      --transfers int                 Number of file transfers to run in parallel. (default 4)
  -u, --update                        Skip files that are newer on the destination.
  -v, --verbose                       Print lots more stuff
```

### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
171
docs/content/commands/rclone_dedupe.md
Normal file
@@ -0,0 +1,171 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone dedupe"
slug: rclone_dedupe
url: /commands/rclone_dedupe/
---
## rclone dedupe

Interactively find duplicate files and delete/rename them.

### Synopsis


By default `dedupe` interactively finds duplicate files and offers to
delete all but one or rename them to be different. Only useful with
Google Drive which can have duplicate file names.

The `dedupe` command will delete all but one of any identical (same
md5sum) files it finds without confirmation. This means that for most
duplicated files the `dedupe` command will not be interactive. You
can use `--dry-run` to see what would happen without doing anything.

Here is an example run.

Before - with duplicates

    $ rclone lsl drive:dupes
      6048320 2016-03-05 16:23:16.798000000 one.txt
      6048320 2016-03-05 16:23:11.775000000 one.txt
       564374 2016-03-05 16:23:06.731000000 one.txt
      6048320 2016-03-05 16:18:26.092000000 one.txt
      6048320 2016-03-05 16:22:46.185000000 two.txt
      1744073 2016-03-05 16:22:38.104000000 two.txt
       564374 2016-03-05 16:22:52.118000000 two.txt

Now the `dedupe` session

    $ rclone dedupe drive:dupes
    2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
    one.txt: Found 4 duplicates - deleting identical copies
    one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36")
    one.txt: 2 duplicates remain
      1:      6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
      2:       564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
    s) Skip and do nothing
    k) Keep just one (choose which in next step)
    r) Rename all to be different (by changing file.jpg to file-1.jpg)
    s/k/r> k
    Enter the number of the file to keep> 1
    one.txt: Deleted 1 extra copies
    two.txt: Found 3 duplicates - deleting identical copies
    two.txt: 3 duplicates remain
      1:       564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
      2:      6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
      3:      1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802
    s) Skip and do nothing
    k) Keep just one (choose which in next step)
    r) Rename all to be different (by changing file.jpg to file-1.jpg)
    s/k/r> r
    two-1.txt: renamed from: two.txt
    two-2.txt: renamed from: two.txt
    two-3.txt: renamed from: two.txt

The result being

    $ rclone lsl drive:dupes
      6048320 2016-03-05 16:23:16.798000000 one.txt
       564374 2016-03-05 16:22:52.118000000 two-1.txt
      6048320 2016-03-05 16:22:46.185000000 two-2.txt
      1744073 2016-03-05 16:22:38.104000000 two-3.txt

Dedupe can be run non-interactively using the `--dedupe-mode` flag or by using an extra parameter with the same value

* `--dedupe-mode interactive` - interactive as above.
* `--dedupe-mode skip` - removes identical files then skips anything left.
* `--dedupe-mode first` - removes identical files then keeps the first one.
* `--dedupe-mode newest` - removes identical files then keeps the newest one.
* `--dedupe-mode oldest` - removes identical files then keeps the oldest one.
* `--dedupe-mode rename` - removes identical files then renames the rest to be different.

For example, to rename all the identically named photos in your Google Photos directory, do

    rclone dedupe --dedupe-mode rename "drive:Google Photos"

Or

    rclone dedupe rename "drive:Google Photos"

```
rclone dedupe [mode] remote:path
```

### Options

```
      --dedupe-mode string   Dedupe mode interactive|skip|first|newest|oldest|rename.
```

### Options inherited from parent commands

```
      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink.
      --acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
      --ask-password                  Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int             Upload chunk size. Must fit in memory.
      --b2-test-mode string           A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int          Cutoff for switching to chunked upload
      --b2-versions                   Include old versions in directory listings.
      --bwlimit int                   Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                  Number of checkers to run in parallel. (default 8)
  -c, --checksum                      Skip based on checksum & size, not mod-time & size
      --config string                 Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration           Connect timeout (default 1m0s)
      --cpuprofile string             Write cpu profile to file
      --delete-after                  When synchronizing, delete files on destination after transferring
      --delete-before                 When synchronizing, delete files on destination before transferring
      --delete-during                 When synchronizing, delete files during transfer (default)
      --delete-excluded               Delete files on dest excluded from sync
      --drive-auth-owner-only         Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int          Upload chunk size. Must be a power of 2 >= 256k.
      --drive-formats string          Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list               Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int       Cutoff for switching to chunked upload
      --drive-use-trash               Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int        Upload chunk size. Max 150M.
  -n, --dry-run                       Do a trial run with no permanent changes
      --dump-bodies                   Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                  Dump the filters to the output
      --dump-headers                  Dump HTTP headers - may contain sensitive info
      --exclude string                Exclude files matching pattern
      --exclude-from string           Read exclude patterns from file
      --files-from string             Read list of source-file names from file
  -f, --filter string                 Add a file-filtering rule
      --filter-from string            Read filtering patterns from a file
      --ignore-existing               Skip all files that exist on destination
      --ignore-size                   Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                  Don't skip files that match size and time - transfer all files
      --include string                Include files matching pattern
      --include-from string           Read include patterns from file
      --log-file string               Log everything to this file
      --low-level-retries int         Number of low level retries to do. (default 10)
      --max-age string                Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                 If set limits the recursion depth to this. (default -1)
      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G
      --memprofile string             Write memory profile to file
      --min-age string                Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G
      --modify-window duration        Max time diff to be considered the same (default 1ns)
      --no-check-certificate          Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding              Don't set Accept-Encoding: gzip.
      --no-traverse                   Don't traverse destination file system on copy.
      --no-update-modtime             Don't update destination mod-time if files identical.
      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k.
      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB
  -q, --quiet                         Print as little stuff as possible
      --retries int                   Retry operations this many times if they fail (default 3)
      --size-only                     Skip based on size only, not mod-time or checksum
      --stats duration                Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int          Above this size files will be chunked into a _segments container.
      --timeout duration              IO idle timeout (default 5m0s)
      --transfers int                 Number of file transfers to run in parallel. (default 4)
  -u, --update                        Skip files that are newer on the destination.
  -v, --verbose                       Print lots more stuff
```

### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
107
docs/content/commands/rclone_delete.md
Normal file
@@ -0,0 +1,107 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone delete"
slug: rclone_delete
url: /commands/rclone_delete/
---
## rclone delete

Remove the contents of path.

### Synopsis


Remove the contents of path. Unlike `purge` it obeys include/exclude
filters so it can be used to selectively delete files.

Eg delete all files bigger than 100MBytes

Check what would be deleted first (use either)

    rclone --min-size 100M lsl remote:path
    rclone --dry-run --min-size 100M delete remote:path

Then delete

    rclone --min-size 100M delete remote:path

That reads "delete everything with a minimum size of 100 MB", hence
delete all files bigger than 100MBytes.

```
rclone delete remote:path
```

### Options inherited from parent commands

```
      --acd-templink-threshold int    Files >= this size will be downloaded via their tempLink.
      --acd-upload-wait-time duration Time to wait after a failed complete upload to see if it appears. (default 2m0s)
      --ask-password                  Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int             Upload chunk size. Must fit in memory.
      --b2-test-mode string           A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int          Cutoff for switching to chunked upload
      --b2-versions                   Include old versions in directory listings.
      --bwlimit int                   Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                  Number of checkers to run in parallel. (default 8)
  -c, --checksum                      Skip based on checksum & size, not mod-time & size
      --config string                 Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration           Connect timeout (default 1m0s)
      --cpuprofile string             Write cpu profile to file
      --delete-after                  When synchronizing, delete files on destination after transferring
      --delete-before                 When synchronizing, delete files on destination before transferring
      --delete-during                 When synchronizing, delete files during transfer (default)
      --delete-excluded               Delete files on dest excluded from sync
      --drive-auth-owner-only         Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int          Upload chunk size. Must be a power of 2 >= 256k.
      --drive-formats string          Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list               Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int       Cutoff for switching to chunked upload
      --drive-use-trash               Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int        Upload chunk size. Max 150M.
  -n, --dry-run                       Do a trial run with no permanent changes
      --dump-bodies                   Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                  Dump the filters to the output
      --dump-headers                  Dump HTTP headers - may contain sensitive info
      --exclude string                Exclude files matching pattern
      --exclude-from string           Read exclude patterns from file
      --files-from string             Read list of source-file names from file
  -f, --filter string                 Add a file-filtering rule
      --filter-from string            Read filtering patterns from a file
      --ignore-existing               Skip all files that exist on destination
      --ignore-size                   Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                  Don't skip files that match size and time - transfer all files
      --include string                Include files matching pattern
      --include-from string           Read include patterns from file
      --log-file string               Log everything to this file
      --low-level-retries int         Number of low level retries to do. (default 10)
      --max-age string                Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                 If set limits the recursion depth to this. (default -1)
      --max-size int                  Don't transfer any file larger than this in k or suffix b|k|M|G
      --memprofile string             Write memory profile to file
      --min-age string                Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                  Don't transfer any file smaller than this in k or suffix b|k|M|G
      --modify-window duration        Max time diff to be considered the same (default 1ns)
      --no-check-certificate          Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding              Don't set Accept-Encoding: gzip.
      --no-traverse                   Don't traverse destination file system on copy.
      --no-update-modtime             Don't update destination mod-time if files identical.
      --onedrive-chunk-size int       Above this size files will be chunked - must be multiple of 320k.
      --onedrive-upload-cutoff int    Cutoff for switching to chunked upload - must be <= 100MB
  -q, --quiet                         Print as little stuff as possible
      --retries int                   Retry operations this many times if they fail (default 3)
      --size-only                     Skip based on size only, not mod-time or checksum
      --stats duration                Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int          Above this size files will be chunked into a _segments container.
      --timeout duration              IO idle timeout (default 5m0s)
      --transfers int                 Number of file transfers to run in parallel. (default 4)
  -u, --update                        Skip files that are newer on the destination.
  -v, --verbose                       Print lots more stuff
```

### SEE ALSO
* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
105
docs/content/commands/rclone_genautocomplete.md
Normal file
@@ -0,0 +1,105 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone genautocomplete"
slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/
---
## rclone genautocomplete

Output bash completion script for rclone.

### Synopsis


Generates a bash shell autocompletion script for rclone.

This writes to /etc/bash_completion.d/rclone by default so will
probably need to be run with sudo or as root, eg

    sudo rclone genautocomplete

Log out and log in again to use the autocompletion scripts, or source
them directly

    . /etc/bash_completion

If you supply a command line argument the script will be written
there.

```
rclone genautocomplete [output_file]
```
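
For instance, to write the script to a file you can manage without root and source it immediately (the file name here is just an illustrative choice):

```
rclone genautocomplete ~/rclone_completion.bash
. ~/rclone_completion.bash
```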
|
||||
|
||||
### Options inherited from parent commands

```
      --acd-templink-threshold int      Files >= this size will be downloaded via their tempLink.
      --acd-upload-wait-time duration   Time to wait after a failed complete upload to see if it appears. (default 2m0s)
      --ask-password                    Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int               Upload chunk size. Must fit in memory.
      --b2-test-mode string             A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int            Cutoff for switching to chunked upload
      --b2-versions                     Include old versions in directory listings.
      --bwlimit int                     Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                    Number of checkers to run in parallel. (default 8)
  -c, --checksum                        Skip based on checksum & size, not mod-time & size
      --config string                   Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration             Connect timeout (default 1m0s)
      --cpuprofile string               Write cpu profile to file
      --delete-after                    When synchronizing, delete files on destination after transferring
      --delete-before                   When synchronizing, delete files on destination before transferring
      --delete-during                   When synchronizing, delete files during transfer (default)
      --delete-excluded                 Delete files on dest excluded from sync
      --drive-auth-owner-only           Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int            Upload chunk size. Must be a power of 2 >= 256k.
      --drive-formats string            Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list                 Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int         Cutoff for switching to chunked upload
      --drive-use-trash                 Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int          Upload chunk size. Max 150M.
  -n, --dry-run                         Do a trial run with no permanent changes
      --dump-bodies                     Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                    Dump the filters to the output
      --dump-headers                    Dump HTTP headers - may contain sensitive info
      --exclude string                  Exclude files matching pattern
      --exclude-from string             Read exclude patterns from file
      --files-from string               Read list of source-file names from file
  -f, --filter string                   Add a file-filtering rule
      --filter-from string              Read filtering patterns from a file
      --ignore-existing                 Skip all files that exist on destination
      --ignore-size                     Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                    Don't skip files that match size and time - transfer all files
      --include string                  Include files matching pattern
      --include-from string             Read include patterns from file
      --log-file string                 Log everything to this file
      --low-level-retries int           Number of low level retries to do. (default 10)
      --max-age string                  Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                   If set limits the recursion depth to this. (default -1)
      --max-size int                    Don't transfer any file larger than this in k or suffix b|k|M|G
      --memprofile string               Write memory profile to file
      --min-age string                  Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                    Don't transfer any file smaller than this in k or suffix b|k|M|G
      --modify-window duration          Max time diff to be considered the same (default 1ns)
      --no-check-certificate            Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                Don't set Accept-Encoding: gzip.
      --no-traverse                     Don't traverse destination file system on copy.
      --no-update-modtime               Don't update destination mod-time if files identical.
      --onedrive-chunk-size int         Above this size files will be chunked - must be multiple of 320k.
      --onedrive-upload-cutoff int      Cutoff for switching to chunked upload - must be <= 100MB
  -q, --quiet                           Print as little stuff as possible
      --retries int                     Retry operations this many times if they fail (default 3)
      --size-only                       Skip based on size only, not mod-time or checksum
      --stats duration                  Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int            Above this size files will be chunked into a _segments container.
      --timeout duration                IO idle timeout (default 5m0s)
      --transfers int                   Number of file transfers to run in parallel. (default 4)
  -u, --update                          Skip files that are newer on the destination.
  -v, --verbose                         Print lots more stuff
```

### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
93  docs/content/commands/rclone_gendocs.md  Normal file
@@ -0,0 +1,93 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone gendocs"
slug: rclone_gendocs
url: /commands/rclone_gendocs/
---
## rclone gendocs

Output markdown docs for rclone to the directory supplied.

### Synopsis

This produces markdown docs for the rclone commands to the directory
supplied. These are in a format suitable for hugo to render into the
rclone.org website.

```
rclone gendocs output_directory
```
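A minimal hedged example (the output directory name is arbitrary):

```
# Render one markdown page per rclone command into ./docs-out
mkdir -p docs-out
rclone gendocs docs-out
```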
### Options inherited from parent commands

Every command inherits the same global options; see the full list under `rclone genautocomplete` above.

### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
90  docs/content/commands/rclone_ls.md  Normal file
@@ -0,0 +1,90 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone ls"
slug: rclone_ls
url: /commands/rclone_ls/
---
## rclone ls

List all the objects in the path with size and path.

### Synopsis

List all the objects in the path with size and path.

```
rclone ls remote:path
```
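For orientation, a hedged sketch of the output shape - one object per line, size in bytes then path (the file names are invented):

```
$ rclone ls remote:photos
    60295 holiday/beach.jpg
  2904520 holiday/dunes.jpg
```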
### Options inherited from parent commands

Every command inherits the same global options; see the full list under `rclone genautocomplete` above.

### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
90  docs/content/commands/rclone_lsd.md  Normal file
@@ -0,0 +1,90 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone lsd"
slug: rclone_lsd
url: /commands/rclone_lsd/
---
## rclone lsd

List all directories/containers/buckets in the path.

### Synopsis

List all directories/containers/buckets in the path.

```
rclone lsd remote:path
```
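For example, listing only the top-level containers of a remote (the remote name `s3:` is an assumption):

```
# Show just the buckets/containers at the root, not the objects inside
rclone lsd s3:
```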
### Options inherited from parent commands

Every command inherits the same global options; see the full list under `rclone genautocomplete` above.

### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
90  docs/content/commands/rclone_lsl.md  Normal file
@@ -0,0 +1,90 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone lsl"
slug: rclone_lsl
url: /commands/rclone_lsl/
---
## rclone lsl

List all the objects in the path with modification time, size and path.

### Synopsis

List all the objects in the path with modification time, size and path.

```
rclone lsl remote:path
```
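The output is like `ls` but with the modification time between the size and the path; a hedged sketch (values invented):

```
$ rclone lsl remote:photos
    60295 2016-06-25 18:55:41.062626927 holiday/beach.jpg
```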
### Options inherited from parent commands

Every command inherits the same global options; see the full list under `rclone genautocomplete` above.

### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
93  docs/content/commands/rclone_md5sum.md  Normal file
@@ -0,0 +1,93 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone md5sum"
slug: rclone_md5sum
url: /commands/rclone_md5sum/
---
## rclone md5sum

Produces an md5sum file for all the objects in the path.

### Synopsis

Produces an md5sum file for all the objects in the path. This is in
the same format as the standard md5sum tool produces.

```
rclone md5sum remote:path
```
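Because the format matches the standard tool, one hedged use is verifying a local copy against the remote (the paths here are assumptions):

```
# Capture remote checksums, then check the local mirror against them
rclone md5sum remote:backup > remote.md5
cd /path/to/local/backup && md5sum -c remote.md5
```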
### Options inherited from parent commands

Every command inherits the same global options; see the full list under `rclone genautocomplete` above.

### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
90  docs/content/commands/rclone_mkdir.md  Normal file
@@ -0,0 +1,90 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone mkdir"
slug: rclone_mkdir
url: /commands/rclone_mkdir/
---
## rclone mkdir

Make the path if it doesn't already exist.

### Synopsis

Make the path if it doesn't already exist.

```
rclone mkdir remote:path
```
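A hedged one-liner (the directory name is an assumption):

```
# Create the directory/bucket only if it is missing; does nothing otherwise
rclone mkdir remote:backups
```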
### Options inherited from parent commands

Every command inherits the same global options; see the full list under `rclone genautocomplete` above.

### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
106  docs/content/commands/rclone_move.md  Normal file
@@ -0,0 +1,106 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone move"
slug: rclone_move
url: /commands/rclone_move/
---
## rclone move

Move files from source to dest.

### Synopsis

Moves the contents of the source directory to the destination
directory. Rclone will error if the source and destination overlap.

If no filters are in use and if possible this will server side move
`source:path` into `dest:path`. After this `source:path` will no
longer exist.

Otherwise for each file in `source:path` selected by the filters (if
any) this will move it into `dest:path`. If possible a server side
move will be used, otherwise it will copy it (server side if possible)
into `dest:path` then delete the original (if no errors on copy) in
`source:path`.

**Important**: Since this can cause data loss, test first with the
--dry-run flag.

```
rclone move source:path dest:path
```
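Following the warning above, a hedged workflow is to preview first (the remote names are assumptions):

```
# See what would be moved without changing anything, then run for real
rclone move --dry-run old:archive new:archive
rclone move old:archive new:archive
```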
### Options inherited from parent commands

Every command inherits the same global options; see the full list under `rclone genautocomplete` above.

### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
94  docs/content/commands/rclone_purge.md  Normal file
@@ -0,0 +1,94 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone purge"
slug: rclone_purge
url: /commands/rclone_purge/
---
## rclone purge

Remove the path and all of its contents.

### Synopsis

Remove the path and all of its contents. Note that this does not obey
include/exclude filters - everything will be removed. Use `delete` if
you want to selectively delete files.

```
rclone purge remote:path
```
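Since filters are ignored, a cautious hedged pattern (the path is an assumption) is:

```
# Confirm exactly what would be deleted before purging for real
rclone purge --dry-run remote:old-project
rclone purge remote:old-project
```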
### Options inherited from parent commands

Every command inherits the same global options; see the full list under `rclone genautocomplete` above.

### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
92  docs/content/commands/rclone_rmdir.md  Normal file
@@ -0,0 +1,92 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone rmdir"
slug: rclone_rmdir
url: /commands/rclone_rmdir/
---
## rclone rmdir

Remove the path if empty.

### Synopsis

Remove the path. Note that you can't remove a path with objects in
it; use purge for that.

```
rclone rmdir remote:path
```
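A hedged illustration of the empty-only behaviour (directory names assumed):

```
rclone rmdir remote:empty-dir   # succeeds only if the path holds no objects
rclone rmdir remote:full-dir    # errors - use rclone purge for non-empty paths
```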
### Options inherited from parent commands

Every command inherits the same global options; see the full list under `rclone genautocomplete` above.

### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
93  docs/content/commands/rclone_sha1sum.md  Normal file
@@ -0,0 +1,93 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone sha1sum"
slug: rclone_sha1sum
url: /commands/rclone_sha1sum/
---
## rclone sha1sum

Produces a sha1sum file for all the objects in the path.

### Synopsis

Produces a sha1sum file for all the objects in the path. This is in
the same format as the standard sha1sum tool produces.

```
rclone sha1sum remote:path
```
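As with `md5sum`, the output can feed the standard tool's check mode (a sketch; the paths are assumptions, and not every remote supports SHA-1 hashes):

```
rclone sha1sum remote:backup > remote.sha1
cd /path/to/local/backup && sha1sum -c remote.sha1
```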
### Options inherited from parent commands
|
||||
|
||||
```
      --acd-templink-threshold int        Files >= this size will be downloaded via their tempLink.
      --acd-upload-wait-time duration     Time to wait after a failed complete upload to see if it appears. (default 2m0s)
      --ask-password                      Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int                 Upload chunk size. Must fit in memory.
      --b2-test-mode string               A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int              Cutoff for switching to chunked upload
      --b2-versions                       Include old versions in directory listings.
      --bwlimit int                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                      Number of checkers to run in parallel. (default 8)
  -c, --checksum                          Skip based on checksum & size, not mod-time & size
      --config string                     Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration               Connect timeout (default 1m0s)
      --cpuprofile string                 Write cpu profile to file
      --delete-after                      When synchronizing, delete files on destination after transferring
      --delete-before                     When synchronizing, delete files on destination before transferring
      --delete-during                     When synchronizing, delete files during transfer (default)
      --delete-excluded                   Delete files on dest excluded from sync
      --drive-auth-owner-only             Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int              Upload chunk size. Must be a power of 2 >= 256k.
      --drive-formats string              Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list                   Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int           Cutoff for switching to chunked upload
      --drive-use-trash                   Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int            Upload chunk size. Max 150M.
  -n, --dry-run                           Do a trial run with no permanent changes
      --dump-bodies                       Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                      Dump the filters to the output
      --dump-headers                      Dump HTTP headers - may contain sensitive info
      --exclude string                    Exclude files matching pattern
      --exclude-from string               Read exclude patterns from file
      --files-from string                 Read list of source-file names from file
  -f, --filter string                     Add a file-filtering rule
      --filter-from string                Read filtering patterns from a file
      --ignore-existing                   Skip all files that exist on destination
      --ignore-size                       Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                      Don't skip files that match size and time - transfer all files
      --include string                    Include files matching pattern
      --include-from string               Read include patterns from file
      --log-file string                   Log everything to this file
      --low-level-retries int             Number of low level retries to do. (default 10)
      --max-age string                    Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                     If set, limits the recursion depth to this. (default -1)
      --max-size int                      Don't transfer any file larger than this in k or suffix b|k|M|G
      --memprofile string                 Write memory profile to file
      --min-age string                    Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                      Don't transfer any file smaller than this in k or suffix b|k|M|G
      --modify-window duration            Max time diff to be considered the same (default 1ns)
      --no-check-certificate              Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                  Don't set Accept-Encoding: gzip.
      --no-traverse                       Don't traverse destination file system on copy.
      --no-update-modtime                 Don't update destination mod-time if files identical.
      --onedrive-chunk-size int           Above this size files will be chunked - must be multiple of 320k.
      --onedrive-upload-cutoff int        Cutoff for switching to chunked upload - must be <= 100MB
  -q, --quiet                             Print as little stuff as possible
      --retries int                       Retry operations this many times if they fail (default 3)
      --size-only                         Skip based on size only, not mod-time or checksum
      --stats duration                    Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int              Above this size files will be chunked into a _segments container.
      --timeout duration                  IO idle timeout (default 5m0s)
      --transfers int                     Number of file transfers to run in parallel. (default 4)
  -u, --update                            Skip files that are newer on the destination.
  -v, --verbose                           Print lots more stuff
```

### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016

docs/content/commands/rclone_size.md (new file, 90 additions)
@@ -0,0 +1,90 @@
---
date: 2016-08-24T23:01:36+01:00
title: "rclone size"
slug: rclone_size
url: /commands/rclone_size/
---
## rclone size

Prints the total size and number of objects in remote:path.

### Synopsis

Prints the total size and number of objects in remote:path.

```
rclone size remote:path
```
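
The figures below are invented and the exact output layout may differ between rclone versions; this is only a sketch of a typical run against a hypothetical remote:

```
$ rclone size remote:backup
Total objects: 342
Total size: 1.573 GBytes (1688849860 Bytes)
```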

### Options inherited from parent commands

```
      --acd-templink-threshold int        Files >= this size will be downloaded via their tempLink.
      --acd-upload-wait-time duration     Time to wait after a failed complete upload to see if it appears. (default 2m0s)
      --ask-password                      Allow prompt for password for encrypted configuration. (default true)
      --b2-chunk-size int                 Upload chunk size. Must fit in memory.
      --b2-test-mode string               A flag string for X-Bz-Test-Mode header.
      --b2-upload-cutoff int              Cutoff for switching to chunked upload
      --b2-versions                       Include old versions in directory listings.
      --bwlimit int                       Bandwidth limit in kBytes/s, or use suffix b|k|M|G
      --checkers int                      Number of checkers to run in parallel. (default 8)
  -c, --checksum                          Skip based on checksum & size, not mod-time & size
      --config string                     Config file. (default "/home/ncw/.rclone.conf")
      --contimeout duration               Connect timeout (default 1m0s)
      --cpuprofile string                 Write cpu profile to file
      --delete-after                      When synchronizing, delete files on destination after transferring
      --delete-before                     When synchronizing, delete files on destination before transferring
      --delete-during                     When synchronizing, delete files during transfer (default)
      --delete-excluded                   Delete files on dest excluded from sync
      --drive-auth-owner-only             Only consider files owned by the authenticated user. Requires drive-full-list.
      --drive-chunk-size int              Upload chunk size. Must be a power of 2 >= 256k.
      --drive-formats string              Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
      --drive-full-list                   Use a full listing for directory list. More data but usually quicker. (obsolete)
      --drive-upload-cutoff int           Cutoff for switching to chunked upload
      --drive-use-trash                   Send files to the trash instead of deleting permanently.
      --dropbox-chunk-size int            Upload chunk size. Max 150M.
  -n, --dry-run                           Do a trial run with no permanent changes
      --dump-bodies                       Dump HTTP headers and bodies - may contain sensitive info
      --dump-filters                      Dump the filters to the output
      --dump-headers                      Dump HTTP headers - may contain sensitive info
      --exclude string                    Exclude files matching pattern
      --exclude-from string               Read exclude patterns from file
      --files-from string                 Read list of source-file names from file
  -f, --filter string                     Add a file-filtering rule
      --filter-from string                Read filtering patterns from a file
      --ignore-existing                   Skip all files that exist on destination
      --ignore-size                       Ignore size when skipping; use mod-time or checksum.
  -I, --ignore-times                      Don't skip files that match size and time - transfer all files
      --include string                    Include files matching pattern
      --include-from string               Read include patterns from file
      --log-file string                   Log everything to this file
      --low-level-retries int             Number of low level retries to do. (default 10)
      --max-age string                    Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
      --max-depth int                     If set, limits the recursion depth to this. (default -1)
      --max-size int                      Don't transfer any file larger than this in k or suffix b|k|M|G
      --memprofile string                 Write memory profile to file
      --min-age string                    Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
      --min-size int                      Don't transfer any file smaller than this in k or suffix b|k|M|G
      --modify-window duration            Max time diff to be considered the same (default 1ns)
      --no-check-certificate              Do not verify the server SSL certificate. Insecure.
      --no-gzip-encoding                  Don't set Accept-Encoding: gzip.
      --no-traverse                       Don't traverse destination file system on copy.
      --no-update-modtime                 Don't update destination mod-time if files identical.
      --onedrive-chunk-size int           Above this size files will be chunked - must be multiple of 320k.
      --onedrive-upload-cutoff int        Cutoff for switching to chunked upload - must be <= 100MB
  -q, --quiet                             Print as little stuff as possible
      --retries int                       Retry operations this many times if they fail (default 3)
      --size-only                         Skip based on size only, not mod-time or checksum
      --stats duration                    Interval to print stats (0 to disable) (default 1m0s)
      --swift-chunk-size int              Above this size files will be chunked into a _segments container.
      --timeout duration                  IO idle timeout (default 5m0s)
      --transfers int                     Number of file transfers to run in parallel. (default 4)
  -u, --update                            Skip files that are newer on the destination.
  -v, --verbose                           Print lots more stuff
```

### SEE ALSO

* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.33-DEV

###### Auto generated by spf13/cobra on 24-Aug-2016
Some files were not shown because too many files have changed in this diff.