mirror of https://github.com/rclone/rclone.git synced 2025-12-06 00:03:32 +00:00

4207 Commits

Author SHA1 Message Date
Nick Craig-Wood
10dea1ca47 vfs: fix saving from chrome without --vfs-cache-mode writes
Due to Chrome's rather complicated use of file handles when saving
files from the download window, rclone was attempting to truncate a
closed file.

The file appeared closed due to the handling of 0 length files.

This patch removes the check for the file being closed in the
WriteFileHandle.Truncate call. This is safe because the only action
this method takes is to emit an error message if the file is the wrong
size.

See: https://forum.rclone.org/t/google-drive-cannot-save-files-directly-from-browser-to-gdrive-mounted-path/17992/
2020-07-22 10:22:40 +01:00
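
As a hedged illustration of the fix described above (not rclone's actual code), here is a Go sketch of a write-only handle whose Truncate no longer checks whether the handle looks closed and only reports a size mismatch; the writeFileHandle type and its fields are assumptions for the example.

```
package main

import (
	"fmt"
	"log"
)

// writeFileHandle is a stand-in for the VFS write handle in this sketch.
type writeFileHandle struct {
	closed bool  // no longer consulted by Truncate
	offset int64 // bytes written so far
}

// Truncate no longer fails just because the handle appears closed (as it
// can when Chrome juggles 0 length files); it only logs if the requested
// size disagrees with what has been written.
func (h *writeFileHandle) Truncate(size int64) error {
	if size != h.offset {
		log.Printf("Truncate: requested size %d but %d bytes written", size, h.offset)
	}
	return nil
}

func main() {
	h := &writeFileHandle{closed: true}
	fmt.Println(h.Truncate(0)) // <nil> - succeeds even though the handle looks closed
}
```
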
Nick Craig-Wood
ff84351655 operations: factor Check and related functions into its own files 2020-07-21 22:08:13 +01:00
Nick Craig-Wood
8b6f2bbb4b check,cryptcheck: add reporting of filenames for same/missing/changed #3264
See: https://forum.rclone.org/t/rclone-check-v-doesnt-show-once-per-minute-update-counts/17402
2020-07-21 22:08:13 +01:00
Nick Craig-Wood
d2efb4b29b ftp: add support for --dump bodies and --dump auth
See: https://forum.rclone.org/t/rclone-copy-gives-error-connection-reset-by-peer-using-ftp/17934/27
2020-07-21 16:26:31 +01:00
Nick Craig-Wood
db56b1bfec serve/ftp: use refactored goftp.io/server library for binary shrink
This uses the refactored goftp library which doesn't include the minio
driver. This reduces the binary size by 1.5MB

See: https://gitea.com/goftp/server/pulls/120
2020-07-21 16:23:55 +01:00
Nick Craig-Wood
990a33b393 build: go mod tidy 2020-07-21 16:19:24 +01:00
Nick Craig-Wood
664c658da6 ftp: Update github.com/jlaffaye/ftp to fix interop with pure-ftpd
See: https://github.com/jlaffaye/ftp/pull/190
2020-07-21 16:17:37 +01:00
Nick Craig-Wood
d1617ce7ce Stop doing vendoring - fixes #4032 2020-07-21 16:09:53 +01:00
Nick Craig-Wood
2b50d44a2f Remove vendor directory #4032 2020-07-21 16:09:53 +01:00
Nick Craig-Wood
ddfde68140 vfs: fix directory locking caused by slow directory listings
Before this fix we took the directory lock to read the ModTime of the
directory. This was causing locking on directories which were being
re-read from the backend.

This commit gives the modtime its own lock so it can be read even when
the directory is being updated.

See: https://forum.rclone.org/t/high-cpu-load-with-rclone-mount/17604
2020-07-18 09:08:18 +01:00
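
A minimal Go sketch of the locking change described above, with illustrative field names rather than the real vfs.Dir internals: the modification time gets its own mutex so reading it never waits on the main directory lock.

```
package main

import (
	"fmt"
	"sync"
	"time"
)

type Dir struct {
	mu sync.Mutex // guards the expensive listing state (held during re-reads)

	modTimeMu sync.Mutex // guards modTime only
	modTime   time.Time
}

// ModTime takes only the small modTimeMu, so it no longer blocks behind a
// slow directory listing that holds d.mu.
func (d *Dir) ModTime() time.Time {
	d.modTimeMu.Lock()
	defer d.modTimeMu.Unlock()
	return d.modTime
}

func (d *Dir) setModTime(t time.Time) {
	d.modTimeMu.Lock()
	d.modTime = t
	d.modTimeMu.Unlock()
}

func main() {
	d := &Dir{}
	d.setModTime(time.Now())
	fmt.Println(d.ModTime().IsZero()) // false
}
```
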
Nick Craig-Wood
811b30d116 sync: fix deadlock with --track-renames-strategy modtime - fixes #4427
Before this change we could exit the popRenameMap function with the
lock held.

This fixes the problem by defer-ring the unlock.

See: https://forum.rclone.org/t/track-renames-strategy-modtime-doesnt-work/16992
2020-07-17 17:09:58 +01:00
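
An illustrative Go sketch of the pattern behind this fix (popRenameMap is the real function name, the body below is an assumption): deferring the unlock guarantees every return path releases the mutex.

```
package main

import (
	"fmt"
	"sync"
)

type renamer struct {
	mu        sync.Mutex
	renameMap map[string][]string
}

// popRenameMap returns one rename candidate for hash, or "" if none.
// The deferred Unlock means no early return can exit with the lock held.
func (r *renamer) popRenameMap(hash string) string {
	r.mu.Lock()
	defer r.mu.Unlock()
	items := r.renameMap[hash]
	if len(items) == 0 {
		return "" // this style of early return is what previously leaked the lock
	}
	r.renameMap[hash] = items[1:]
	return items[0]
}

func main() {
	r := &renamer{renameMap: map[string][]string{"h": {"a.txt"}}}
	fmt.Println(r.popRenameMap("h"), r.popRenameMap("missing"))
}
```
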
Nick Craig-Wood
bcd362fcd5 accounting: fix documentation for speed/speedAvg
Fix the documentation for the very confusingly named speed and
speedAvg stats items.

See: https://github.com/rclone/rclone-webui-react/issues/99
2020-07-17 15:35:15 +01:00
Nick Craig-Wood
1fafcd4d28 Add Dmitry Ustalov to contributors 2020-07-15 23:15:06 +01:00
Nick Craig-Wood
2807a85f68 Add Morten Linderud to contributors 2020-07-15 23:15:06 +01:00
Nick Craig-Wood
39515acf68 Add Kevin to contributors 2020-07-15 23:15:06 +01:00
Dmitry Ustalov
aaed74fe4e docs: workaround and policy for Google Drive API
* workaround for Google Drive API
* mention the use of Google User Data
* unified wording for user data policy
2020-07-15 23:14:39 +01:00
Nick Craig-Wood
126efaadcc vfs: fix renamed files not being uploaded with --vfs-cache-mode minimal
Before this change files that were in the cache and renamed with
--vfs-cache-mode minimal weren't renamed at all.

This fixes the problem and adds tests for all the different
combinations of cache modes and in and out of the cache.
2020-07-15 16:22:12 +01:00
Nick Craig-Wood
2adc057d95 vfs: fix very high load caused by slow directory listings
In this commit (released in v1.52.0)

6ca7198f mount: fix disappearing cwd problem

SetSys was introduced to cache node lookups.

Unfortunately taking the vfs.(*Dir) lock in SetSys causes any FUSE
operations on a directory to pile up behind slow directory listings.

In some situations this leads to very high load.

This commit fixes it by using atomic operations to read and write the
Sys value, making it independent of the lock.

See: https://forum.rclone.org/t/high-cpu-load-with-rclone-mount/17604
See: #4104
2020-07-15 16:22:12 +01:00
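
A hedged Go sketch of the approach with simplified types: the cached Sys value is held in a sync/atomic.Value so FUSE lookups can read it without taking the directory lock.

```
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type Dir struct {
	mu  sync.Mutex   // still protects listings and entries
	sys atomic.Value // cached node lookup, read and written lock-free
}

func (d *Dir) Sys() interface{} {
	return d.sys.Load()
}

func (d *Dir) SetSys(v interface{}) {
	d.sys.Store(v)
}

func main() {
	d := &Dir{}
	d.SetSys("cached fuse node")
	fmt.Println(d.Sys()) // readable even while d.mu is held for a slow listing
}
```
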
Nick Craig-Wood
59770a4953 ftp: add note to docs about home vs root directory selection
See: https://forum.rclone.org/t/update-docs-for-ftp-path-absolute-vs-relative/17875
2020-07-15 15:27:47 +01:00
Morten Linderud
67098511db make_manual: Support SOURCE_DATE_EPOCH
The documentation contains an embedded datetime which does not read
SOURCE_DATE_EPOCH. This makes the documentation unreproducible when a
distribution tries to recreate a previous build of the package.

This patch ensures we attempt to read the environment variable before
defaulting to the current build time.

Signed-off-by: Morten Linderud <morten@linderud.pw>
2020-07-15 13:23:08 +01:00
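
The make_manual script itself isn't shown here; below is a small Go sketch of the SOURCE_DATE_EPOCH convention it follows: use the environment variable (seconds since the Unix epoch) when present, otherwise fall back to the current time.

```
package main

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

// buildTime prefers SOURCE_DATE_EPOCH so repeated builds embed the same date.
func buildTime() time.Time {
	if s := os.Getenv("SOURCE_DATE_EPOCH"); s != "" {
		if secs, err := strconv.ParseInt(s, 10, 64); err == nil {
			return time.Unix(secs, 0).UTC()
		}
	}
	return time.Now().UTC()
}

func main() {
	fmt.Println(buildTime().Format("Jan 02, 2006"))
}
```
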
Kevin
07b2ce4ab2 Avoid comma rendered in URL in onedrive.md (#4438)
Removed comma from the end of the Azure AD Applications List Blade URL since it was not resolving and customers were opening up support tickets with the Microsoft Azure AD team.
2020-07-15 15:55:00 +08:00
Nick Craig-Wood
80d2f38192 s3: fix bucket Region auto detection when Region unset in config #2915
Previous to this fix if Region was not set and Endpoint was not set
then we set the endpoint to "https://s3.amazonaws.com/".

This is unnecessary because if the Region alone isn't set then we set
it to "us-east-1" which has the same endpoint.

Having the endpoint set breaks the bucket region auto detection with
the error "Failed to update region for bucket: can't set region to
"xxx" as endpoint is set".

This fix removes that check.
2020-07-10 17:16:59 +01:00
Nick Craig-Wood
0792f4722c swift: fix purge not deleting directory markers
At some point Purge stopped deleting directory markers. We don't have
an integration test for this so it went unnoticed.

This patch fixes the problem but doesn't introduce an integration test
as we don't have a framework for making directory markers yet.
2020-07-10 15:16:11 +01:00
Nick Craig-Wood
db37360a1d swift: fix dangling large objects breaking the listing
Before this change, large objects which had had their contents deleted
would return "Object not found" and break the listing.

This change makes these objects appear as 0 sized entities so they can
be listed and deleted.
2020-07-10 11:03:08 +01:00
Nick Craig-Wood
44ff766f98 mkdir: warn when using mkdir on remotes which can't have empty directories
It is a source of confusion for users that `rclone mkdir` on a remote
which can't have empty directories such as s3/b2 does nothing.

This emits a warning at NOTICE level if the user tries to mkdir a
directory not at the root for a remote which can't have empty
directories.

See: https://forum.rclone.org/t/mkdir-on-b2-consider-adding-some-output/17689
2020-07-08 17:55:58 +01:00
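
A minimal Go sketch of the warning logic, assuming an illustrative CanHaveEmptyDirectories feature flag (rclone's real feature detection differs in detail):

```
package main

import "log"

type features struct {
	CanHaveEmptyDirectories bool
}

// warnMkdir emits a NOTICE-style message when mkdir is pointed at a
// non-root directory on a remote that can't hold empty directories.
func warnMkdir(f features, remote, dir string) {
	if !f.CanHaveEmptyDirectories && dir != "" {
		log.Printf("NOTICE: %s: Warning: running mkdir on a remote which can't have empty directories does nothing", remote)
	}
}

func main() {
	warnMkdir(features{CanHaveEmptyDirectories: false}, "s3:bucket", "path/to/dir")
	warnMkdir(features{CanHaveEmptyDirectories: true}, "localdisk:", "path/to/dir") // no warning
}
```
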
Max Sum
bfa5715017 fs/accounting: sort transfers by start time 2020-07-07 15:42:06 +01:00
Max Sum
e2183ad661 fs/accounting: use transferMap instead of stringSet 2020-07-07 15:42:06 +01:00
Nick Craig-Wood
e2201689cf build: add ARMv7 to the supported builds - fixes #748 2020-07-07 12:13:20 +01:00
Nick Craig-Wood
0f72aa8a5f docs: add section about config variable precedence
See: https://forum.rclone.org/t/precedence-rules-for-config/17707
2020-07-07 11:12:35 +01:00
Nick Craig-Wood
b2f4f52b64 vfs cache: make logging consistent and remove some debug logging 2020-07-06 17:32:53 +01:00
Nick Craig-Wood
c65ed26a7e vfs: vfscache: Fix renaming of items while they are being uploaded
Previous to the fix, if an item was being uploaded and it was renamed,
the upload would fail with missing checksum errors.

This change cancels any uploads in progress if the file is renamed.
2020-07-06 17:32:53 +01:00
Nick Craig-Wood
df5dbaf49b vfs: writeback: add Rename call for renaming items in the writeback queue 2020-07-06 17:32:53 +01:00
Nick Craig-Wood
80fe1f16db b2: note that b2's encoding now allows \ but rclone's hasn't changed
See: https://forum.rclone.org/t/why-are-there-error-messages-about-non-existing-files-in-the-log/17608
2020-07-06 16:28:30 +01:00
Nick Craig-Wood
f524a4c1cc vendor: drop unused github.com/djherbis/times 2020-07-06 14:29:42 +01:00
Nick Craig-Wood
c61c3cddbd Add Evan Harris to contributors 2020-07-06 14:29:42 +01:00
Nick Craig-Wood
51767aee23 Add Garrett Squire to contributors 2020-07-06 14:29:42 +01:00
Evan Harris
cd3d7e2dca docs: Update install.md to reflect minimum Go version
Fixes #3765
2020-07-06 13:42:47 +01:00
Garrett Squire
4f7f5404ce build: fix file handle leak in GitHub release tool
This is a small patch to remove a defer statement found in a for loop.
It instead closes the file after it is done copying the bytes from the
tar file reader.
2020-07-04 10:51:37 +01:00
Nick Craig-Wood
d4b2709fb0 pcloud: fix oauth on European region "eapi.pcloud.com"
Pcloud appears to have opened up a new region and they are returning
the hostname in the oauth callback, thus

    GET /?code=XXX&locationid=1&hostname=api.pcloud.com&state=XXX HTTP/1.1
    GET /?code=XXX&locationid=2&hostname=eapi.pcloud.com&state=XXX HTTP/1.1

This isn't documented yet, however pCloud have confirmed that this is
the correct interpretation.

Rclone now reads the "hostname" parameter in the oauth callback and
stores it in the config file. It uses it for all subsequent API calls.
2020-07-03 20:38:42 +01:00
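
A hedged Go sketch of reading the hostname parameter from the callback shown above; the default of api.pcloud.com and the helper name are assumptions for illustration.

```
package main

import (
	"fmt"
	"net/url"
)

// apiHostFromCallback picks the API host from the OAuth redirect, falling
// back to the assumed default when the parameter is absent.
func apiHostFromCallback(callback string) string {
	u, err := url.Parse(callback)
	if err != nil {
		return "api.pcloud.com"
	}
	if h := u.Query().Get("hostname"); h != "" {
		return h // e.g. eapi.pcloud.com for the European region
	}
	return "api.pcloud.com"
}

func main() {
	fmt.Println(apiHostFromCallback("/?code=XXX&locationid=2&hostname=eapi.pcloud.com&state=XXX"))
}
```
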
Nick Craig-Wood
e6fdc3a932 drive: make dangling shortcuts appear in listings
Previous to this a dangling shortcut would error the directory
listing.

This patch makes dangling shortcuts appear as 0 sized objects in the
directory listing so they can be deleted. These objects can't be read
though.
2020-07-02 22:12:44 +01:00
Nick Craig-Wood
63ebe4ca8d vfs: Reduce logging of metadata expiry to debug 2020-07-02 14:52:12 +01:00
Nick Craig-Wood
8d5bc7f28b fs/cache: fix moveto/copyto remote:file remote:file2
Before this change, if the cache was given a source `remote:file` it
stored `remote:` with the error `fs.ErrorIsFile` attached. This meant
that if `remote:` was subsequently looked up it would return the
`fs.ErrorIsFile` error.

This broke `moveto remote:file remote:file2` as moveto would lookup
`remote:` from the second argument and erroneously get the
`fs.ErrorIsFile` error.

This likely broke other commands too.

This was broken in

4c9836035 fs/cache: Add Pin and Unpin and canonicalised lookup

Which was released in v1.52.0

The fix is to make a new cache entry for `remote:` with no error
attached in the case that the original call returned `fs.ErrorIsFile`.
2020-07-02 10:55:36 +01:00
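
An illustrative Go sketch of the cache behaviour described above, using a plain map in place of rclone's fs/cache: when the lookup of `remote:file` reports fs.ErrorIsFile for the parent, a clean entry for `remote:` is stored as well.

```
package main

import (
	"errors"
	"fmt"
)

var errIsFile = errors.New("is a file not a directory") // stand-in for fs.ErrorIsFile

// put caches the lookup result; on fs.ErrorIsFile it also records the
// parent remote with no error attached so later lookups of "remote:" work.
func put(c map[string]error, requested, parent string, err error) {
	c[requested] = err
	if errors.Is(err, errIsFile) {
		c[parent] = nil
	}
}

func main() {
	c := map[string]error{}
	put(c, "remote:file", "remote:", errIsFile)
	fmt.Println(c["remote:"]) // <nil> - the second argument of moveto resolves cleanly
}
```
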
Nick Craig-Wood
50e36fb482 onedrive: Fix reverting to Copy when Move would have worked
For some objects the onedrive backend has been doing a server side
copy and a delete when a server side move would have worked OK.

This was caused by not detecting the home drive correctly (when it was
an empty string) and assuming that these transfers were cross drive.

This is fixed by canonicalizing drive IDs before comparing them.
2020-07-02 10:55:36 +01:00
Nick Craig-Wood
a1c5e76c27 Add Kai Lüke to contributors 2020-07-02 10:55:36 +01:00
Kai Lüke
54f2587c1e gcs: add support for anonymous access
Currently credentials are required to download a public bucket file
which is not really necessary and makes automated usage more complex.
Add a new option "anonymous" which when enabled configures the gcs
backend to use an anonymous HTTP client. This of course only works
for read access, and trying to write will lead to errors like:
"googleapi: Error 401: Anonymous caller does not have
storage.objects.create access to the Google Cloud Storage object.",
as expected. By default the anonymous access option is disabled so that
the GCS Application Default Credentials are still used by default as
before and an error is given if they can't be found.
2020-07-01 20:54:49 +01:00
Nick Craig-Wood
99c293a403 log: fix --use-json-log going to stderr not --log-file on Windows - fixes #4367 2020-07-01 20:47:37 +01:00
Nick Craig-Wood
fefcbf60fa sftp: use the absolute path instead of the relative path
Before this change rclone used the relative path from the current
working directory.

It appears that WS FTP doesn't like this and the openssh sftp tool
also uses absolute paths which is a good reason for switching to
absolute paths.

This change reads the current working directory at startup and bases
all file requests from there.

See: https://forum.rclone.org/t/sftp-ssh-fx-failure-directory-not-found/17436
2020-06-30 16:07:23 +01:00
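
A small Go sketch of the path handling change, with assumed names: the working directory is read once when the connection is made and every later request is joined onto it, so the server always sees absolute paths.

```
package main

import (
	"fmt"
	"path"
)

type sftpConn struct {
	cwd string // queried from the server once at startup
}

// absPath turns a remote name into an absolute path rooted at the
// working directory seen when the connection was opened.
func (c *sftpConn) absPath(remote string) string {
	if path.IsAbs(remote) {
		return remote
	}
	return path.Join(c.cwd, remote)
}

func main() {
	c := &sftpConn{cwd: "/home/user"}
	fmt.Println(c.absPath("dir/file.txt")) // /home/user/dir/file.txt
}
```
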
Nick Craig-Wood
96c2fdb445 vfs: update VFS help 2020-06-30 12:03:39 +01:00
Nick Craig-Wood
8301a72453 vfs: Fix over downloading with --vfs-cache-mode full and --buffer-size 0
This was caused by the signal to stop buffering being ignored when
there was no buffer!

This is fixed by explicitly checking for no buffering and stopping.
2020-06-30 12:03:39 +01:00
Nick Craig-Wood
05ddef117a accounting: add HasBuffer method to Account 2020-06-30 12:03:39 +01:00
Nick Craig-Wood
15402e46c9 vfs: Add recovered items on cache reload to directory listings
Before this change, if we restarted an upload after a restart then the
file would get uploaded but never added to the directory listings.

This change makes sure we add virtual items to the directory cache
when reloading the cache so that they show up properly.
2020-06-30 12:03:39 +01:00
Nick Craig-Wood
939860eb85 vfs: vfscache make TestCacheCleaner test more reliable 2020-06-30 12:03:39 +01:00
Nick Craig-Wood
530dc77cde vfs: Fix race condition in vfscache 2020-06-30 12:03:39 +01:00
Nick Craig-Wood
5db15cb157 vfs: make dir.ForgetAll and friends not forget virtual entries
Before this change dir.ForgetAll and vfs/forget would forget about
virtual directory entries.

This change preserves them.
2020-06-30 12:03:39 +01:00
Nick Craig-Wood
06a12f5e27 vfs: stop virtual directory entries dropping out of the directory cache
Rclone adds virtual directory entries to the directory cache when it
creates a file or directory.

Before this change these dropped out of the directory cache when the
directory cache was reloaded. This meant that when the directory cache
expired:

- On bucket based backends, empty directories would disappear
- When using VFS writeback, files in the process of uploading would disappear

This is fixed by keeping track of the virtual entries in each
directory. The virtual entries are removed when they become real - ie
the object is read back from the listing.

This also keeps tracks of deletes in the same way so if a file is
deleted, it will not re-appear when the directory cache is reloaded if
the deletion hasn't finished yet.
2020-06-30 12:03:39 +01:00
Nick Craig-Wood
143abe39f2 vfs: add tests for downloaders 2020-06-30 12:03:39 +01:00
Nick Craig-Wood
ee04732cbb vfs: factor writeback and downloaders into their own packages 2020-06-30 12:03:39 +01:00
Nick Craig-Wood
79455cc71e vfs: downloaders: remove unused osPath 2020-06-30 12:03:39 +01:00
Nick Craig-Wood
042e5fe097 vfs: downloader: limit the reader to 10 errors before giving up 2020-06-30 12:03:39 +01:00
Nick Craig-Wood
d273a9d82d vfs: remove items from writeback when dirty, don't just cancel the upload
This stops open items continually trying to be uploaded
2020-06-30 12:03:39 +01:00
Nick Craig-Wood
3eded3c4ac vfs: remove workaround Sleep() calls from tests 2020-06-30 12:03:39 +01:00
Nick Craig-Wood
20f4fda3c9 local: fix race conditions updating and reading Object metadata 2020-06-30 12:03:39 +01:00
Nick Craig-Wood
ed32a759ed vfs: add test for writeBack.cancelUpload 2020-06-30 12:01:36 +01:00
Nick Craig-Wood
ef2d036884 vfs: make writeback heap sort in insertion order if expiry times equal
This makes the tests 100% consistent on platforms which have a lower
resolution timer like Windows.
2020-06-30 12:01:36 +01:00
Nick Craig-Wood
746c41f527 vfs: fix race in writeback tests 2020-06-30 12:01:36 +01:00
Nick Craig-Wood
b0fb457746 vfs: add tests for writeback 2020-06-30 12:01:36 +01:00
Nick Craig-Wood
b9ff495483 vfs: writeback - stop the timer explicitly on transfers exceeded 2020-06-30 12:01:36 +01:00
Nick Craig-Wood
8506066926 vfs: use call after functions in writeback to simplify code
This also fixes a bug in the uploader which didn't restart the timer
when the queue was empty.
2020-06-30 12:01:36 +01:00
Nick Craig-Wood
43018973ac vfs: decouple writeback from Item so it can be tested 2020-06-30 12:01:36 +01:00
Nick Craig-Wood
7e4ba54608 vfs: allow ReadAt and WriteAt to run concurrently with themselves
This should help with throughput on mounts and help when multiple
readers have the file open.

See: https://forum.rclone.org/t/concurrent-read-accesses-on-the-same-file-through-rclone-vfs-mount/17192
2020-06-30 12:01:36 +01:00
Nick Craig-Wood
2f66355f20 vfs: re-use existing VFS if possible 2020-06-30 12:01:36 +01:00
Nick Craig-Wood
7781ea8d59 vfs: add an optional fs parameter to vfs rc methods
Before this change we initialized the rc for a single VFS. However
rclone can have multiple VFSes in use now so this is no longer
adequate.

This change adds an optional fs parameter to all the VFS methods to
disambiguate VFSes when there is more than one in use.

It also adds a method vfs/list to show all the active VFSes.

This adds outline tests for the rc commands which didn't have tests
before.
2020-06-30 12:01:36 +01:00
Nick Craig-Wood
ce065614e2 Revert "mount2,cmount: skip unreliable tests #4171"
The VFS is now reliable enough so that the mount tests don't fail.

This reverts commit 4808958f93.
2020-06-30 12:01:36 +01:00
Nick Craig-Wood
fa472a340e vfs: fix writeback deadlocks and other bugs
- fix deadlock when cancelling upload
- fix double upload and panic after cancelled upload
- fix cancelation strategy of uploading files
    - don't cancel uploads if we don't modify the file
    - cancel uploads if we do modify the file
- fix deadlock between Item and writeback
- fix confusion about whether writeback item was being uploaded
- fix cornercases in cancelling uploads and removing files
2020-06-30 12:01:36 +01:00
Nick Craig-Wood
279a516c53 vfs: add tests and fix bugs for vfscache.Item
Item
- Remove unused method getName
- Fix Truncate on unopened file
- Fix bug when downloading segments to fill out file on close
- Fix bug when WriteAt extends the file and we don't mark space as used

downloader
- Retry failed waiters every 5 seconds
2020-06-30 12:01:36 +01:00
Nick Craig-Wood
9ac5c6de14 vfs: cache: rework file downloader
- Download to multiple places at once in the stream
- Restart as necessary
- Timeout unused downloaders
- Close reader if too much data skipped
- Only use one file handle as use item for writing
- Implement --vfs-read-chunk-size and --vfs-read-chunk-limit
- fix deadlock between asyncbuffer and vfs cache downloader
- fix display of stream abandoned error which should be ignored
2020-06-30 12:01:36 +01:00
Nick Craig-Wood
58a7faa281 vfs: Make tests run reliably
On file Remove
- cancel any writebacks in progress
- ignore error message deleting non existent file if file was in the
  process of being uploaded

Writeback
- Don't transfer the file if it has disappeared in the meantime
- Take our own copy of the file name to avoid deadlocks
- Fix delayed retry logic
- Wait for upload to finish when cancelling upload

Fix race condition in item saving

Fix race condition in vfscache test

Make sure we delete the file on the error path - this makes cascading
failures much less likely
2020-06-30 12:01:36 +01:00
Nick Craig-Wood
496a87a665 vfs: restart pending uploads on restart of the cache 2020-06-30 12:01:36 +01:00
Nick Craig-Wood
e4e53a2e61 vfs: add --vfs-writeback option to delay writes back to cloud storage
This is enabled by default and can be disabled with --vfs-writeback 0
2020-06-30 12:01:36 +01:00
Nick Craig-Wood
28255f1bac vfs: fix errors when using > 260 char files in the cache in Windows
This makes the cache use UNC paths on Windows. This stops the cache
exploding when using > 260 character paths
2020-06-30 12:01:36 +01:00
Nick Craig-Wood
917cb4acb3 vfs: implement partial reads in --vfs-cache-mode full
This allows reads to only read part of the file and it keeps on disk a
cache of what parts of each file have been loaded.

File data itself is kept in sparse files.
2020-06-30 12:01:36 +01:00
Nick Craig-Wood
d84527a730 lib/ranges: ranges library for dealing with partially downloaded files 2020-06-30 12:01:36 +01:00
Nick Craig-Wood
7d0783aad5 lib/readers: add Seek method to PatternReader 2020-06-30 12:01:36 +01:00
Nick Craig-Wood
7622506fe2 local: factor UNCPath into lib/file 2020-06-30 12:01:36 +01:00
Nick Craig-Wood
ae8bbc63da cache: export Canonicalize method for external use 2020-06-30 12:01:36 +01:00
Nick Craig-Wood
79f5d940cf fs: add Fingerprint to detect changes in an object 2020-06-30 12:01:36 +01:00
Nick Craig-Wood
25662b9e05 fstest: add ability for mock objects and filesystems to have hashes 2020-06-30 12:01:36 +01:00
Nick Craig-Wood
c820576329 fs: define SlowModTime and SlowHash features in the relevant backends 2020-06-30 12:01:36 +01:00
Nick Craig-Wood
af601575cb serve ftp: Use new facilities in goftp to fix and simplify auth proxy #4394
- Use Driver.CheckPasswd instead of server.CheckPasswd
- Make server.CheckPasswd return an error
- Remove awful findID to find parent function hack
- Remove Driver.Init as it is no longer called
- Fix backwards incompatible PublicIp -> PublicIP change

See: https://gitea.com/goftp/server/issues/117
2020-06-30 09:34:13 +01:00
Nick Craig-Wood
c7eae60944 vendor: update goftp.io/server to v0.3.5-master to fix auth proxy #4394
See upstream issue: https://gitea.com/goftp/server/issues/117
2020-06-30 09:16:43 +01:00
Nick Craig-Wood
0afd5a2204 build: drop xgo builds for macOS, Linux and Windows
The xgo builds for macOS, Linux and Windows are used for testing - the
actual builds are built on the correct platform.

Since the darwin build has stopped working, this can be an excuse for
removing these builds as they really are only for testing.

The Android and iOS builds will continue to be built by xgo

See: https://github.com/billziss-gh/cgofuse/issues/47
2020-06-29 14:51:16 +01:00
Nick Craig-Wood
92cb21f0f2 serve ftp: Add error message if auth proxy fails #4394 2020-06-29 14:45:39 +01:00
Nick Craig-Wood
0031130111 serve ftp: don't compile on < go1.13 after dependency update 2020-06-29 14:45:39 +01:00
Nick Craig-Wood
2a3b377d34 azureblob: don't compile on < go1.13 after dependency update 2020-06-29 14:45:39 +01:00
Nick Craig-Wood
2aed3bf9ab vendor: roll back bazil.org/fuse to the last version which supports macOS #4393
Roll back the bazil.org/fuse update to give us some time to explore
alternatives for macOS.

See upstream issue: https://github.com/bazil/fuse/issues/224
2020-06-29 14:45:02 +01:00
Nick Craig-Wood
ec4e0e4d58 vendor: revert goftp.io/server from v0.3.4 to v0.3.3 to fix auth proxy #4394
See upstream issue: https://gitea.com/goftp/server/issues/117
2020-06-29 14:45:02 +01:00
Nick Craig-Wood
696d012c05 vendor: update all dependencies 2020-06-29 14:44:57 +01:00
Nick Craig-Wood
61ff7306ae crypt: add --crypt-server-side-across-configs flag
This can be used for changing filename encryption mode without
re-uploading data.

See: https://forum.rclone.org/t/revert-filename-encryption-method/17454/
2020-06-27 11:40:15 +01:00
Nick Craig-Wood
0bcf4769fe local: make --local-no-updated provide a consistent view of the objects
Before this change the --local-no-updated flag would not error if the
files changed in size during the transfer. The file could still be
read beyond the size advertised though which caused problems with
certain backends.

After this change we attempt to provide a consistent view of the file
once it has been opened.

Once the file has had stat() called on it for the first time we

- Only transfer the size that stat gave
- Only checksum the size that stat gave
- Don't update the stat info for the file

This means that files that are extending can be transferred - rclone
will transfer the length it saw the first time it listed the file.

See: https://forum.rclone.org/t/transport-connection-broken/16494/21
2020-06-27 10:00:43 +01:00
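
A hedged Go sketch of the "consistent view" idea for --local-no-updated: remember the size from the first stat and never read past it, so a file that keeps growing is transferred at the length first seen. io.LimitReader does the truncation; the surrounding names are assumptions.

```
package main

import (
	"fmt"
	"io"
	"strings"
)

// openAtFirstStat wraps the file reader so it stops at the size recorded
// when the object was first listed, even if the file grows afterwards.
func openAtFirstStat(r io.Reader, sizeAtStat int64) io.Reader {
	return io.LimitReader(r, sizeAtStat)
}

func main() {
	grown := strings.NewReader("old contents plus bytes appended mid-transfer")
	r := openAtFirstStat(grown, 12) // 12 bytes was the size at the first stat()
	b, _ := io.ReadAll(r)
	fmt.Printf("%q\n", b) // "old contents"
}
```
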
Nick Craig-Wood
0bfbecf9cb build: add btest Makefile target for pasteable download message 2020-06-26 16:26:29 +01:00
David
9058ec32e1 s3: Use regional s3 us-east-1 endpoint 2020-06-26 16:25:52 +01:00
Nick Craig-Wood
61e4b4db42 drive: Allow the use of --drive-impersonate with the root_folder_id "appDataFolder"
In this commit

5c5ad6220 drive: fix --drive-impersonate with cached root_folder_id

We disabled the use of root_folder_id with --drive-impersonate to fix
a problem with a cached root_folder_id giving the wrong results.

This, alas, broke one user's setup with a root_folder_id of
appDataFolder. Since this is identifiable and definitely couldn't have
been cached, we can safely skip this check in this case.

See: https://forum.rclone.org/t/rclone-gdrive-no-longer-returning-anything/17215/10
2020-06-25 21:43:11 +01:00
Nick Craig-Wood
fd7c63bc78 s3: add backend restore command to restore objects from GLACIER
See: https://forum.rclone.org/t/rclone-settier-fails-with-scaleway-entitytoolarge/17384
2020-06-25 21:33:23 +01:00
Nick Craig-Wood
49a7d08a40 qingstor: cancel in progress multipart uploads on rclone exit #4300 2020-06-25 15:22:53 +01:00
Nick Craig-Wood
2c10ce64aa onedrive: rework cancel of multipart uploads on rclone exit #4300
This now uses the atexit.OnError framework rather than a home grown one.
2020-06-25 15:22:53 +01:00
Nick Craig-Wood
a41a294e1d box: cancel in progress multipart uploads and copies on rclone exit #4300 2020-06-25 15:22:53 +01:00
Nick Craig-Wood
47b17dc1bb b2: cancel in progress multipart uploads and copies on rclone exit #4300 2020-06-25 15:22:53 +01:00
Nick Craig-Wood
5f75444ef6 s3: cancel in progress multipart uploads and copies on rclone exit #4300 2020-06-25 12:55:56 +01:00
Nick Craig-Wood
54fda3422e atexit: implement OnError for cancelling multipart uploads 2020-06-25 12:55:56 +01:00
Nick Craig-Wood
fcc2db8093 doc: disable smart typography (eg en-dash) in MANUAL.* and man page
Before this change MANUAL.html and rclone.1 would show flags like –addr

Now it shows --addr which is easy to copy and paste.

This was pointed out in #4362
2020-06-25 12:20:01 +01:00
Nick Craig-Wood
89b7ffbd5c Add Tim Burke to contributors 2020-06-25 12:20:01 +01:00
Nick Craig-Wood
ada43b0e58 Add Petri Salminen to contributors 2020-06-25 12:20:01 +01:00
Tim Burke
5050c33162 dlna: Mark flags in docs as code
Otherwise, we get en dashes in the man page, making args more difficult
to copy/paste to a command line.

Before:

    Use –addr to specify ...

After:

    Use --addr to specify ...
2020-06-25 12:18:54 +01:00
Harry
4e8fda228d docs: Updated OneDrive max file size 2020-06-25 12:04:52 +01:00
Petri Salminen
cdfb3f7194 docs: make the website navbar stick to top
Navbar will remain visible on the top of the screen, even when
the user has scrolled down.

This makes it easier to navigate the site quickly.
2020-06-25 11:47:56 +01:00
Chaitanya Bankanhal
a2dd23efd3 rc: Add tests for operations/uploadfile
rc: Go import file rc_test.go
2020-06-25 11:38:24 +01:00
Chaitanya Bankanhal
fa43d02874 rc: Add operations/uploadfile to upload a file through rc using encoding multipart/form-data 2020-06-25 11:38:24 +01:00
Chaitanya
d0de39ebcd rc: add NeedsRequest to call. 2020-06-25 11:38:24 +01:00
Nick Craig-Wood
2121c0fa23 dircache: factor DirMove code out of backends into dircache
Before this change there was lots of duplicated code in all the
dircache using backends to support DirMove.

This change factors this code into the dircache library.
2020-06-25 09:41:36 +01:00
Nick Craig-Wood
a8652e2252 dircache: simplify interface, fix corner cases and apply to backends
Dircache was changed to:

- Remove special cases for the root directory
- Remove Fatal errors
- Call FindRoot on behalf of the user wherever possible
- Bring up to modern Go standards

Backends were changed to:

- Remove calls to FindRoot
- Change calls to FindRootAndPath to FindPath
- Don't make special cases for the root

This fixes several corner cases, for example removing a non existent
directory if FindRoot hasn't been called.
2020-06-25 09:41:36 +01:00
Nick Craig-Wood
81151523af drive: fix shortcut tests 2020-06-24 15:52:02 +01:00
Nick Craig-Wood
3e82771413 Start v1.52.2-DEV development 2020-06-24 14:35:12 +01:00
Nick Craig-Wood
9445b12328 check: make it show stats by default 2020-06-24 11:23:34 +01:00
Nick Craig-Wood
4bb103ef43 build: speed up tidy-beta script by doing fewer directory traversals 2020-06-24 10:01:24 +01:00
Nick Craig-Wood
0dba7b8a46 swift: speed up deletes by not retrying segment container deletes
Before this fix rclone would continually try to delete non empty
segment containers which made deleting lots of files very slow.

This fix makes rclone just try the delete once and then carry on which
was the original intent of the code before the retry logic got put in.
2020-06-24 10:01:24 +01:00
buengese
e247811db5 jottacloud: remove debug Printf accidentally left in 2020-06-23 13:16:23 +02:00
buengese
6768f999ed docs/overview: pcloud now supports link sharing 2020-06-21 17:22:56 +02:00
buengese
ce767bc3cf pcloud: implement PublicLink 2020-06-21 17:22:56 +02:00
Caleb Case
e780cda1d4 backend/tardigrade: Upgrade to uplink v1.1.1
This fixes issue #4370 by restoring the correct error response.
2020-06-20 16:44:06 +01:00
Nick Craig-Wood
a55d882b7b webdav: Fix free/used display for rclone about/df for certain backends - fixes #4348
Before this change if the server sent us xml like this

```
<D:propstat>
<D:prop>
<g0:quota-available-bytes/>
<g0:quota-used-bytes/>
</D:prop>
<D:status>HTTP/1.1 404 Not Found</D:status>
</D:propstat>
```

Rclone would read the empty XML items as containing 0

After this fix we make sure that we have a value before using it.
2020-06-20 15:15:15 +01:00
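
A minimal Go sketch of the idea behind the fix, with assumed struct shapes rather than rclone's real webdav types: decode the quota properties as strings and only use them when they actually contain a value, instead of treating the empty 404 elements as 0.

```
package main

import (
	"encoding/xml"
	"fmt"
	"strconv"
)

type prop struct {
	Available string `xml:"quota-available-bytes"`
	Used      string `xml:"quota-used-bytes"`
}

func main() {
	var p prop
	// The empty elements from the 404 propstat decode to empty strings.
	_ = xml.Unmarshal([]byte(`<prop><quota-available-bytes/><quota-used-bytes/></prop>`), &p)
	if p.Available == "" {
		fmt.Println("free: unknown") // don't report 0 bytes free
		return
	}
	free, _ := strconv.ParseInt(p.Available, 10, 64)
	fmt.Println("free:", free)
}
```
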
Nick Craig-Wood
5c5ad62208 drive: fix --drive-impersonate with cached root_folder_id
Before this fix rclone v1.51 and 1.52 would incorrectly use the cached
root_folder_id when the --drive-impersonate flag was in use. This
meant that rclone could be looking up the wrong directory ID with
unpredictable results - usually all files apparently being missing.

This fix makes rclone look up the root_folder_id always when using
--drive-impersonate. It does this by clearing the root_folder_id and
making a NOTICE message that it is ignoring the cached value.

It also stops rclone caching the root_folder_id when using
--drive-impersonate.

See: https://forum.rclone.org/t/rclone-gdrive-no-longer-returning-anything/17215
2020-06-20 15:01:37 +01:00
Nick Craig-Wood
62a1a561cf build: test-repeat.sh add -tag to buildflags 2020-06-20 14:52:04 +01:00
Nick Craig-Wood
ce394426b0 check: fix successful retries with --download counting errors
See: https://forum.rclone.org/t/tons-of-data-corruption-after-rclone-copy-to-mega-can-rclone-correct-it/17017
2020-06-20 14:52:04 +01:00
buengese
6606602f1e docs/box: add some info regarding the CleanUp implementation 2020-06-18 23:39:59 +02:00
buengese
b6b8958fb4 box: implement CleanUp - fixes #4326 2020-06-18 23:39:59 +02:00
Nick Craig-Wood
d8eea0e397 build: run gofmt -s to simplify the code: suggested by Go Report Card 2020-06-18 18:45:39 +01:00
Nick Craig-Wood
df9c930581 dropbox: fix public link by removing expires parameter
Adding the expires parameter gives settings_error/not_authorized/.. errors.

The expires setting isn't in the documentation so this commit removes
it for now.
2020-06-18 18:40:33 +01:00
Nick Craig-Wood
85bcacac90 s3: Cap expiry duration to 1 Week and return error when sharing dir 2020-06-18 17:50:50 +01:00
Nick Craig-Wood
4b4ee72796 fstest: fix PublicLink tests to send non zero expiry and work with s3 2020-06-18 17:50:12 +01:00
Nick Craig-Wood
40611fc4fc check: retry downloads if they fail when using the --download flag
See: https://forum.rclone.org/t/tons-of-data-corruption-after-rclone-copy-to-mega-can-rclone-correct-it/17017/7
2020-06-18 16:16:19 +01:00
Nick Craig-Wood
7c4ba9fcb2 check: Fix misleading message which printed errors instead of differences
See: https://forum.rclone.org/t/tons-of-data-corruption-after-rclone-copy-to-mega-can-rclone-correct-it/17017/7
2020-06-18 16:16:19 +01:00
Nick Craig-Wood
a1c9612d75 build: fix windows/386 icon/version embedding #4304
Make sure we install goversioninfo binary for the running architecture
otherwise we don't get a binary.
2020-06-18 16:08:38 +01:00
Nick Craig-Wood
33c8709439 build: fix windows/amd64 build and icon/version embedding #4304
The parameters were being passed to goversioninfo in the wrong order
so that the 64 bit .syso was actually a 32 bit .syso, thus causing the
linker to fail.
2020-06-18 16:08:38 +01:00
Nick Craig-Wood
5e6f4ab281 drive: fix creating a directory inside a shortcut
See: https://forum.rclone.org/t/cant-create-new-directory-on-google-drive-remote/17208
2020-06-17 11:32:28 +01:00
Nick Craig-Wood
3efdf5e095 operations: implement --refresh-times flag to set modtimes on hashless backends 2020-06-17 10:48:13 +01:00
Nick Craig-Wood
d174b97af7 errors: add WSAECONNREFUSED and more to the list of retriable Windows errors
This adds the missing WSAECONNREFUSED error to the list of errors we
can retry under Windows.

> Connection refused.  No connection could be made because the target
> computer actively refused it.

It also adds any relevant errors I could see in the error code list.

See: https://forum.rclone.org/t/failing-to-upload-large-file-to-b2/17085
2020-06-17 10:46:22 +01:00
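
As a hedged illustration (the real list lives in rclone's retriable-error handling, not in this form), a Go sketch of keeping a table of Winsock error numbers worth retrying; the numeric values are the standard WSA codes.

```
package main

import "fmt"

// Illustrative table of Winsock errors treated as retriable.
var retriableWinsockErrnos = map[int]string{
	10054: "WSAECONNRESET",
	10060: "WSAETIMEDOUT",
	10061: "WSAECONNREFUSED", // newly added: target machine actively refused the connection
}

func shouldRetry(errno int) bool {
	_, ok := retriableWinsockErrnos[errno]
	return ok
}

func main() {
	fmt.Println(shouldRetry(10061)) // true
}
```
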
Nick Craig-Wood
fff8822239 Add jtagcat to contributors 2020-06-17 10:46:22 +01:00
Nick Craig-Wood
7cfe3760f4 Add Matteo Pietro Dazzi to contributors 2020-06-17 10:46:22 +01:00
NoLooseEnds
298bd640f3 build: fix custom timezone in Docker image
Added the tzdata package to fix this.

See: https://forum.rclone.org/t/rclone-in-docker-uses-tz-as-utc-not-as-provided/17173
2020-06-17 10:43:03 +01:00
buengese
945a37d0d2 docs/jottacloud: add some info about the setup for whitelabel version 2020-06-16 18:15:49 +02:00
Chaitanya Bankanhal
68afa28b27 rc: Add mount to list if mount point was successfully created 2020-06-16 15:17:55 +01:00
jtagcat
d6a9017298 fs: fix formatting of errInvalidCharacters error message
errInvalidCharacters: 'and space .' -> 'and space.' Removes the stray space between the word 'space' and the period.

Co-authored-by: jtagcat <gitlab@c7.ee>
2020-06-16 15:08:09 +01:00
jtagcat
da862f82cf docs: add how to squash to contributing
Co-authored-by: jtagcat <gitlab@c7.ee>
2020-06-16 15:06:03 +01:00
jtagcat
f8b6727190 docs: document valid remote names 2020-06-16 15:02:15 +01:00
jtagcat
2d88d24881 config: reject remote names starting with a dash. (#4261) 2020-06-16 15:00:34 +01:00
Matteo Pietro Dazzi
62650a3eb3 serve dlna: Fix file list on Samsung Series 6+ TVs
This fixes the "serve dlna" command so that it correctly shows the list
of files on Samsung TV models starting from Series 6.
2020-06-16 14:56:02 +01:00
buengese
2c4f7b61c1 jottacloud: switch to new api root - fixes #4295
- also implement a very ugly workaround for the DirMove failures
2020-06-16 15:44:34 +02:00
Nick Craig-Wood
a3f6fe5287 dedupe: fix logging to be easier to understand #4321 2020-06-16 11:41:56 +01:00
Nick Craig-Wood
8d85c51a28 Add Heiko Bornholdt to contributors 2020-06-16 11:41:56 +01:00
Heiko Bornholdt
17d5a72416 ftp: add explicit tls support
Add support for explicit FTP over TLS.

Fixes #4100
2020-06-16 09:13:50 +01:00
Heiko Bornholdt
c4ce260b49 vendor: update jlaffaye/ftp 2020-06-16 09:13:50 +01:00
Nick Craig-Wood
4808958f93 mount2,cmount: skip unreliable tests #4171 2020-06-15 21:34:37 +01:00
Nick Craig-Wood
b58bb03e95 test: Don't run unreliable tests on CI #4171 2020-06-15 21:34:37 +01:00
Nick Craig-Wood
ba7fbfa8a7 testy: test utility functions 2020-06-15 21:34:37 +01:00
Nick Craig-Wood
117ff1d781 serve sftp: fix race in the tests #4171 2020-06-15 21:34:33 +01:00
Nick Craig-Wood
160c97da13 build: re-add accidentally deleted ci_upload 2020-06-15 20:21:20 +01:00
Nick Craig-Wood
0760bc09aa build: fix Windows exe info insertion
The goversioninfo tool wasn't being installed in the correct place.

This also gets rid of the old Travis and Appveyor stuff from the
Makefile
2020-06-15 19:29:43 +01:00
Nick Craig-Wood
5ca82e2f05 Add Vincent Feltz to contributors 2020-06-14 10:11:13 +01:00
Nick Craig-Wood
746a6ef8d3 Add Zac Rubin to contributors 2020-06-14 10:11:13 +01:00
Gary Kim
763944f673 rcd: fix incorrect prometheus metrics - fixes #4341
This was caused by using the stats group from the context passed in by the rcd
rather than the global stats group.

Signed-off-by: Gary Kim <gary@garykim.dev>
2020-06-14 10:09:24 +01:00
Vincent Feltz
f4d7e41f24 s3: add Scaleway provider - fixes #4338 2020-06-13 11:55:37 +01:00
Zac Rubin
f9306218f8 sftp: Fix SSH key PEM loading
For SSH authentication, `key_pem` should both override `key_file`
and not require other SSH authentication methods to be set.

Prior to this fix, rclone would attempt to use an ssh-agent
when `key_pem` was the only SSH authentication method set.

Fixes #4240
2020-06-12 22:46:33 +01:00
Nick Craig-Wood
fb06427c69 sync: fix --track-renames-strategy modtime
Before this change `--track-renames-strategy` was broken. The hashing
method it used could declare times that were very close together to be
different.

The time hash was discarded and instead we check the modification time
window on every hash match.

Provided that the user doesn't use `--track-renames-strategy` on a
huge number of identically sized files this will perform just fine.

See: https://forum.rclone.org/t/track-renames-strategy-modtime-doesnt-work/16992/5
2020-06-12 15:38:35 +01:00
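
An illustrative Go sketch of the replacement check: instead of hashing the modification time (which split near-identical times into different buckets), compare times within the modify window once a hash match is found. The window value here is an assumption.

```
package main

import (
	"fmt"
	"time"
)

const modifyWindow = time.Second // assumed precision of the remote

// withinModifyWindow reports whether two times are close enough to be
// treated as equal for rename detection.
func withinModifyWindow(a, b time.Time) bool {
	d := a.Sub(b)
	if d < 0 {
		d = -d
	}
	return d <= modifyWindow
}

func main() {
	src := time.Date(2020, 6, 12, 15, 0, 0, 400e6, time.UTC)
	dst := src.Add(300 * time.Millisecond)    // previously hashed differently
	fmt.Println(withinModifyWindow(src, dst)) // true - counted as a rename candidate
}
```
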
Nick Craig-Wood
93bd601149 touch: add ability to set nanosecond resolution times 2020-06-12 15:38:35 +01:00
Nick Craig-Wood
848c5b78e1 drive: fix not being able to delete a directory with a trashed shortcut
When we resolve the shortcut we now propagate the trashed status of
the shortcut into the resolved item which fixes the issue.
2020-06-12 15:10:35 +01:00
buengese
84d5df3c84 jottacloud: bring back legacy authentication for use with whitelabel versions - fixes #4299 2020-06-12 12:08:27 +02:00
Nick Craig-Wood
63e6d9d2d1 serve webdav,serve restic: Fix flags so they use environment variables
See: https://forum.rclone.org/t/serve-restic-append-only-environment-variable/17050
2020-06-11 19:28:51 +01:00
albertony
6a2b7b97d7 build: Add file properties and icon to Windows executable (fixes #4304) 2020-06-11 09:26:14 +01:00
Chaitanya
d8d19072c5 mount: Add call for unmount all
mount: handle locking through a single mutex.
2020-06-10 22:19:34 +01:00
Chaitanya
830ab37371 rc: Add mount list option for listing current mounts 2020-06-10 22:19:34 +01:00
Nick Craig-Wood
7e48ee8758 cache: fix dedupe on caches wrapping drives - fixes #4320
This implements the MergeDirs optional method.
2020-06-10 21:52:52 +01:00
Nick Craig-Wood
d55053098f check: make check do --checkers files concurrently - fixes #4318 2020-06-10 17:20:54 +01:00
Nick Craig-Wood
63cf0b1cdd check: make check command obey --dry-run/-i/--interactive - fixes #4325 2020-06-10 17:20:54 +01:00
Nick Craig-Wood
5866b1b017 bin: test-repeat.sh - script to run tests many times with individual logs 2020-06-10 17:08:31 +01:00
Nick Craig-Wood
8493f3939c bin: not-in-stable.go - script to help with merging fixes to the stable branch 2020-06-10 17:07:43 +01:00
Nick Craig-Wood
095f4e9b9d build: fix docker release build action 2020-06-10 17:00:33 +01:00
Nick Craig-Wood
a1382a03aa Start v1.52.1-DEV development 2020-06-10 16:49:55 +01:00
Nick Craig-Wood
844b903595 docs: promote the use of -i/--interactive and "rclone sync -i" everywhere #1574 2020-06-10 12:33:53 +01:00
Nick Craig-Wood
a3b3e1f646 tree: remove -i shorthand for --noindent as it conflicts with -i/--interactive 2020-06-10 12:33:53 +01:00
Nick Craig-Wood
b23cf58a41 operations: Add skip all, do all, quit operations to --interactive - fixes #3886
This also adds SkipDestructive into all the remaining places --dry-run
was used and adds documentation.
2020-06-10 12:33:53 +01:00
fishbullet
ba5eb230fb operations: interactive mode -i/--interactive for destructive operations #3886 2020-06-10 12:33:53 +01:00
Nick Craig-Wood
2ea15a72bc s3: fix --header-upload - Fixes #4303
Before this change we were setting the headers on the PUT
request for normal and multipart uploads. For normal uploads this caused the error

    403 Forbidden: There were headers present in the request which were not signed

After this fix we set the headers in the object upload request itself
as the s3 SDK expects.

This means that we only support a limited range of headers

- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Amz-Tagging
- X-Amz-Meta-

Note that the last of those is for setting custom metadata in the form
"X-Amz-Meta-Key: value".

This now works for multipart uploads and single part uploads

See also #59
2020-06-10 12:28:48 +01:00
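
A hedged Go sketch of routing the supported --header-upload headers into fields of the object upload request instead of raw HTTP headers (which break request signing). The uploadInput struct mirrors the shape of an S3 upload input but is illustrative, not the AWS SDK type.

```
package main

import (
	"fmt"
	"strings"
)

type uploadInput struct {
	CacheControl       string
	ContentDisposition string
	ContentEncoding    string
	ContentLanguage    string
	ContentType        string
	Tagging            string
	Metadata           map[string]string
}

// applyHeader maps one --header-upload header onto the upload request.
func applyHeader(in *uploadInput, key, value string) {
	lower := strings.ToLower(key)
	switch {
	case lower == "cache-control":
		in.CacheControl = value
	case lower == "content-disposition":
		in.ContentDisposition = value
	case lower == "content-encoding":
		in.ContentEncoding = value
	case lower == "content-language":
		in.ContentLanguage = value
	case lower == "content-type":
		in.ContentType = value
	case lower == "x-amz-tagging":
		in.Tagging = value
	case strings.HasPrefix(lower, "x-amz-meta-"):
		if in.Metadata == nil {
			in.Metadata = map[string]string{}
		}
		in.Metadata[key[len("x-amz-meta-"):]] = value
	}
}

func main() {
	in := &uploadInput{}
	applyHeader(in, "Cache-Control", "max-age=3600")
	applyHeader(in, "X-Amz-Meta-Key", "value")
	fmt.Printf("%+v\n", *in)
}
```
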
Nick Craig-Wood
b5c654a100 lib/structs: factor reflection based structure manipulation into a library 2020-06-10 12:28:48 +01:00
Nick Craig-Wood
6807b0e42f Add Kamil Trzciński to contributors 2020-06-10 12:16:27 +01:00
Cenk Alti
16422a6b78 putio: fix panic on Object.Open #4315 2020-06-10 12:16:09 +01:00
Rob Calistri
b2ded6212b vfs: Change modtime of file before upload to current
Previously, files inherited the directory modtime before they were uploaded. This changes that to use the current time instead.
2020-06-10 12:10:50 +01:00
Nick Craig-Wood
88df5927f9 vfs: funnel all read/write calls through ReadAt/WriteAt
This is in preparation for partial reads for read/write files
2020-06-09 18:07:41 +01:00
Nick Craig-Wood
8c37262e05 vfs: don't use embedded methods for read/write handles for clarity 2020-06-09 18:07:23 +01:00
Nick Craig-Wood
3c14a893fb asyncreader: Make StopBuffer as well as Abandon and fix confusion in callers 2020-06-09 18:05:12 +01:00
Nick Craig-Wood
05bc19c331 vfs: Remove unneeded locking from read write handle String() 2020-06-09 18:04:50 +01:00
Caleb Case
40fe97e946 backend/tardigrade: Set UserAgent to rclone
This provides two things:

* It gives Storj insight into which uplink clients are using the
  network.
* It facilitates rclone's participation in the Tardigrade Open Source
  Partner Program https://tardigrade.io/partner/
2020-06-09 14:20:28 +01:00
Kamil Trzciński
7458d37d2a s3: add max_upload_parts support - fixes #4159
* s3: add `max_upload_parts` support

This allows configuring the maximum number of chunks used to upload a file:

- Support Scaleway, which currently has a limit of 1k chunks
- Reduce cost on S3, where each request costs money, at the expense of memory used

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2020-06-08 18:22:34 +01:00
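
A small Go sketch of the arithmetic max_upload_parts implies (the values below are examples, not rclone defaults): if the file would need more chunks than the provider allows, the chunk size is grown until it fits.

```
package main

import "fmt"

// chunkSizeFor doubles the chunk size until the upload fits in maxParts.
func chunkSizeFor(fileSize, defaultChunkSize, maxParts int64) int64 {
	chunkSize := defaultChunkSize
	for fileSize/chunkSize >= maxParts {
		chunkSize *= 2
	}
	return chunkSize
}

func main() {
	const MiB = int64(1) << 20
	// A 10 GiB upload in 5 MiB chunks would need 2048 parts - too many for a
	// 1000 part limit (e.g. Scaleway), so the chunk size is doubled to 20 MiB.
	fmt.Println(chunkSizeFor(10<<30, 5*MiB, 1000)/MiB, "MiB chunks")
}
```
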
Nick Craig-Wood
c4110780bf lib/file: fix SetSparse on Windows 7 which fixes downloads of files > 250MB
Before this change we passed both lpOverlapped and lpBytesReturned as NULL.

> If lpOverlapped is NULL, lpBytesReturned cannot be NULL. Even when
> an operation produces no output data, and lpOutBuffer can be NULL,
> the DeviceIoControl function makes use of the variable pointed to by
> lpBytesReturned. After such an operation, the value of the variable
> is without meaning.

After this change we set lpBytesReturned to a valid pointer.

See: https://forum.rclone.org/t/errors-when-downloading-any-file-over-250mb-from-google-drive-windows-sparse-files/16889
2020-06-06 13:13:15 +01:00
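
A hedged, Windows-only Go sketch of the corrected call, assuming the golang.org/x/sys/windows wrapper and defining the FSCTL_SET_SPARSE control code locally; the point is that bytesReturned is a real variable now that the overlapped parameter is nil.

```
//go:build windows

package main

import (
	"log"
	"os"

	"golang.org/x/sys/windows"
)

const fsctlSetSparse = 0x000900C4 // standard FSCTL_SET_SPARSE control code

// setSparse marks the file as sparse. With a nil overlapped parameter,
// Windows requires bytesReturned to point at a real variable.
func setSparse(f *os.File) error {
	var bytesReturned uint32
	return windows.DeviceIoControl(windows.Handle(f.Fd()), fsctlSetSparse,
		nil, 0, nil, 0, &bytesReturned, nil)
}

func main() {
	f, err := os.Create(`sparse.bin`)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := setSparse(f); err != nil {
		log.Fatal(err)
	}
}
```
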
Nick Craig-Wood
d729004554 Add Roman Kredentser to contributors 2020-06-05 14:51:52 +01:00
Roman Kredentser
c0521791db s3: implement link sharing with PublicLink 2020-06-05 14:51:05 +01:00
Roman Kredentser
55ad1354b6 link: Add --expire and --unlink flags
This adds expire and unlink fields to the PublicLink interface.

This fixes up the affected backends and removes unlink parameters
where they are present.
2020-06-05 14:51:05 +01:00
Nick Craig-Wood
fb61ed8506 b2: Implement server side copy for files > 5GB - fixes #3991
This factors copy out of SetModTime and Copy so it can be called from
both places.

This also reworks all the multipart uploading to use sync.Errgroup and
memory pooling like the other backends. This makes it more memory
efficient and handle errors better.

See: https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/10
2020-06-05 13:27:53 +01:00
Nick Craig-Wood
4c7f7582fd obscure: write more help as we are referencing it elsewhere 2020-06-05 12:48:28 +01:00
Caleb Case
a4f1f3d4e8 backend/tardigrade: Upgrade uplink to v1.0.7
This fixes a regression in the rclone tests from the v1.0.6 upgrade of
uplink. The failure was due to an improperly converted error resulting
in the wrong type of error.
2020-06-05 10:51:33 +01:00
Nick Craig-Wood
973e3d6a7b backends: make sure backends expand ~ and environment vars in file names they use
See: https://forum.rclone.org/t/relative-path-in-rclone-config-service-account-json/16693
2020-06-03 17:39:08 +01:00
Nick Craig-Wood
b62d08d136 config: set RCLONE_CONFIG_DIR for use in config files and subprocesses
See: https://forum.rclone.org/t/relative-path-in-rclone-config-service-account-json/16693
2020-06-03 17:39:08 +01:00
Nick Craig-Wood
50e31c6636 vfs: fix OS vs Unix path confusion - fixes ChangeNotify on Windows
See: https://forum.rclone.org/t/windows-mount-polling-not-recognising-all-changes-made-by-another-box/16708
2020-06-03 17:05:58 +01:00
Nick Craig-Wood
151f03378f s3: fix upload of single files into buckets without create permission
Before this change, attempting to upload a single file into an s3
bucket which did not have create permission gave AccessDenied: Access
Denied error when it tried to create the bucket.

This was masked until e2bf91452a was
fixed.

This fix marks the bucket as OK if a fetch on an object indicates it
is OK. This stops rclone thinking it has to create the bucket in the
first place.

Fixes #4297
2020-06-02 14:33:21 +01:00
Nick Craig-Wood
26fb9007da build: fix xgo build after go1.14 go.mod update
Before this change xgo was getting added to go.mod - the build then failed with

    go: inconsistent vendoring in /usr/src/rclone:
    github.com/karalabe/xgo@v0.0.0-20191115072854-c5ccff8648a7: is explicitly required in go.mod, but not marked as explicit in vendor/modules.txt

This change gets xgo in GOPATH mode to avoid it getting added to go.mod
2020-06-02 13:41:54 +01:00
Nick Craig-Wood
3b20335d2a docs: remove leading slash in page reference in footer when present 2020-06-02 12:09:45 +01:00
Nick Craig-Wood
8d55367a6a build: set user_allow_other in /etc/fuse.conf in the Docker image
This allows non root mounts to use the --allow-other flag

See: https://forum.rclone.org/t/trying-utilize-docker-for-the-first-time-having-some-issues-with-an-rclone-mount-user-allow-other-error-etc-fuse-conf-has-been-updated-to-allow/16393
2020-06-01 21:33:48 +01:00
Nick Craig-Wood
187ee62e3d Add edwardxml to contributors 2020-06-01 21:33:48 +01:00
Nick Craig-Wood
10e2ec1fbb Add Matteo Pietro Dazzi to contributors 2020-06-01 21:33:48 +01:00
edwardxml
83999cd1d1 docs: minor tense, punctuation, brevity and positivity changes for the home page 2020-06-01 16:32:27 +01:00
Nick Craig-Wood
fef90ef0a9 build: update Docker build workflows
- prune docker images to ones we normally build binaries for
- add fixed versions
- add fetch-depth to fetch the tags so the version number is correct
- rename the job names
2020-06-01 16:13:23 +01:00
Matteo Pietro Dazzi
72ae5626b0 build: Build Docker images with GitHub actions 2020-06-01 16:13:23 +01:00
Nick Craig-Wood
eee28d0d39 build: remove quicktest from Dockerfile
This is making docker builds take too long and it isn't the place of
the Docker file to be running unit tests.
2020-06-01 16:13:23 +01:00
Nick Craig-Wood
b59999dd59 build: update go.mod to go1.14 to enable -mod=vendor build
When the main module contains a top-level vendor directory and its
go.mod file specifies go 1.14 or higher, the go command now defaults
to -mod=vendor for operations that accept that flag.
2020-06-01 16:13:23 +01:00
Nick Craig-Wood
e62c032184 docs: remove manually set dates and use git dates instead 2020-06-01 13:07:46 +01:00
Nick Craig-Wood
1635b37ff1 docs: Add link to source and modified time to footer of every page 2020-06-01 13:07:46 +01:00
Nick Craig-Wood
8774381e2e cmd: Note commands which need obscured input in the docs - fixes #4252 2020-05-31 12:59:09 +01:00
Nick Craig-Wood
cbfe7a405b Add Alex Guerrero to contributors 2020-05-31 12:59:09 +01:00
Alex Guerrero
80391fbcd4 dropbox: Add copyright detector info in limitations section in the docs
Dropbox files protected by copyright can't be synced, which results
in an error. This change reflects that in the Dropbox limitations
section to warn rclone users and explain why this error occurs,
since by default the Dropbox API doesn't give enough info.

Proposed by the original repo owner in this comment:
https://github.com/rclone/rclone/issues/2301#issuecomment-388291079

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2020-05-31 12:22:54 +01:00
Nick Craig-Wood
cbf3d43561 drive: fix missing items when listing using --fast-list / ListR
This is caused by a bug in Google drive where, in some circumstances
querying for "(A in parents) or (B in parents)" returns nothing
whereas querying for "A in parents" and "B in parents" separately
works fine.

This has been reported here:

https://issuetracker.google.com/issues/149522397

This workaround detects this condition by seeing if a listing for more
than one directory at once returns nothing.

If it does then it retries each one individually.

This can potentially have a false positive if the user has multiple
empty directories which are queried at once. The consequence of this
will be that ListR is disabled for a while until the directories are
found to be actually empty in which case it will be re-enabled.

Fixes #3114 and Fixes #4289
2020-05-31 11:44:15 +01:00
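
An illustrative Go sketch of the workaround's control flow (the query function is a placeholder, not the Drive API client): when a combined multi-parent query comes back empty, retry each parent on its own before trusting the empty result.

```
package main

import "fmt"

// listWithWorkaround queries several parents at once and falls back to
// one query per parent if the combined query suspiciously returns nothing.
func listWithWorkaround(query func(parents []string) []string, parents []string) []string {
	results := query(parents)
	if len(results) == 0 && len(parents) > 1 {
		for _, p := range parents {
			results = append(results, query([]string{p})...)
		}
	}
	return results
}

func main() {
	// Fake backend that only answers single-parent queries, mimicking the
	// Drive bug reported at issuetracker.google.com/issues/149522397.
	fake := func(parents []string) []string {
		if len(parents) > 1 {
			return nil
		}
		return []string{parents[0] + "/file.txt"}
	}
	fmt.Println(listWithWorkaround(fake, []string{"A", "B"}))
}
```
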
Caleb Case
e7bd392a69 backend/tardigrade: Upgrade to uplink v1.0.6
This fixes an important bug with listing that affects users with more
than 500 objects in a listing operation.
2020-05-29 18:00:08 +01:00
Nick Craig-Wood
764b90a519 Update release notes with Docker build instructions 2020-05-28 13:12:13 +01:00
Nick Craig-Wood
d785942ed5 mountlib: fix rc tests when mount does not work
This fixed the Docker 1.52 build
2020-05-28 13:11:42 +01:00
Nick Craig-Wood
1cceadaf7c Start v1.52.0-DEV development 2020-05-27 18:36:32 +01:00
Nick Craig-Wood
6882aeff97 Version v1.52.0 2020-05-27 17:31:10 +01:00
Nick Craig-Wood
a0922643e6 build: fix build after nfpm change to drop bindir 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
4d7254f88f Add funding links to rclone repo 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
8499090038 docs: Updates to the front page from Edward Barker 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
ca0c4c7585 docs: update index and donate pages after review 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
5aec01100e docs: make primary buttons rclone colors 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
4d6af44045 docs: update all auto generated docs 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
256d17eba8 docs: markdownify the strings in the commands index 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
5777f4481f docs: add missing shortcode 2020-05-27 17:31:09 +01:00
Martin Michlmayr
6e945fba82 docs: wrap code example into <code> 2020-05-27 17:31:09 +01:00
Martin Michlmayr
ce7047d88a docs: fix cosmetic issues 2020-05-27 17:31:09 +01:00
Martin Michlmayr
4a35100130 docs: fix typos 2020-05-27 17:31:09 +01:00
Martin Michlmayr
ef7662d2fa docs: fix cosmetic issue in menu
The menu items shouldn't end with a full stop since that looks
weird and is just clutter.
2020-05-27 17:31:09 +01:00
Nick Craig-Wood
16e89706d3 docs: add a status notice to the cache backend 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
41a58d7f6e docs: tweaks to index and donate page 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
2a7f3eecf4 docs: reduce height of provider entries 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
41e12114a8 docs: fix validation errors 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
1b12d5d346 docs: move most of the chrome into baseof.html as per a modern hugo install 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
74b8cbfb84 docs: set unsafe HTML parsing to false and fix raw HTML insertion
This means that markdown files can't contain <thing> any more.
2020-05-27 17:31:09 +01:00
Nick Craig-Wood
06427371eb docs: add "make validate_website" and fix validation errors 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
d08e1616a7 docs: add autogenerated date to footer 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
79b011305e docs: increase space around titles for better visual feel 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
050879f0ca docs: make command docs titles be one higher 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
0a74f8022e docs: tweak rendering of tables to match the new theme 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
0db20c9c56 docs: Remove spurious styles from donate page
Correct link when not running on real URL
2020-05-27 17:31:09 +01:00
Nick Craig-Wood
33e9db9745 docs: Use versioned css/jss 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
d0145d4359 docs: rename css and js includes to have their version numbers in 2020-05-27 17:31:09 +01:00
Nick Craig-Wood
80dab10ec9 docs: Fix generated docs
In distributed docs
- Make img tags absolute
- Add donate link in docs
- Fix provider table
2020-05-27 17:31:08 +01:00
Nick Craig-Wood
022cda3109 docs: update provider list on front page 2020-05-27 17:31:08 +01:00
Nick Craig-Wood
4030672b66 docs: Add upload_test_website target to Makefile 2020-05-27 17:31:08 +01:00
Nick Craig-Wood
348379625c docs: update donations page 2020-05-27 17:31:08 +01:00
Nick Craig-Wood
ab4a2275eb docs: add copy to clipboard javascript
add popper library for popup menus
2020-05-27 17:31:08 +01:00
Nick Craig-Wood
c8683fc916 docs: bring README up to date 2020-05-27 17:31:08 +01:00
Nick Craig-Wood
273ee0d696 docs: remove unused page types 2020-05-27 17:31:08 +01:00
Nick Craig-Wood
6d19bbba73 docs: add description to commands and index page 2020-05-27 17:31:08 +01:00
Nick Craig-Wood
d6c31b51c6 docs: new content for index and donate pages 2020-05-27 17:31:08 +01:00
Nick Craig-Wood
af19f924ff docs: layout tweaks 2020-05-27 17:31:08 +01:00
Nick Craig-Wood
80572c544b docs: more config fixes for hugo 0.7 2020-05-27 17:31:08 +01:00
Nick Craig-Wood
11f8cb32d1 docs: new flags 2020-05-27 17:31:08 +01:00
Nick Craig-Wood
62781d0925 docs: fix layouts 2020-05-27 17:31:08 +01:00
Nick Craig-Wood
2bd786a452 docs: fix rc docs and update anchors for new Hugo version 2020-05-27 17:31:08 +01:00
Nick Craig-Wood
038648aaca docs: rename about.md to _index.md after Hugo upgrade
Use the official way of including markdown content into index.html
2020-05-27 17:31:08 +01:00
Nick Craig-Wood
0e3ac6b13c docs: updated donate page 2020-05-27 17:31:08 +01:00
Nick Craig-Wood
1e274b01fe docs: fix layouts after bootstrap upgrade
- use bootstrap recommended viewport
2020-05-27 17:31:08 +01:00
Nick Craig-Wood
c4f6439715 docs: update Hugo config after updating to 0.69.2 2020-05-27 17:31:08 +01:00
Nick Craig-Wood
cdecb44789 docs: update bootstrap to 4.4.1 and jQuery to 3.5.1 2020-05-27 17:31:08 +01:00
calisro
8af3f61b6e doc: fix shortcut creation language (#4281) 2020-05-27 11:23:27 -04:00
Martin Michlmayr
48eef2fb3c docs: add missing word 2020-05-26 13:49:09 +01:00
Martin Michlmayr
7a5b531bd0 docs: fix cosmetics issues 2020-05-26 13:49:09 +01:00
Nick Craig-Wood
78ca08ba8a pcloud: fix initial config "Auth state doesn't match" message #4210
pCloud should be passing back the state parameter that rclone passed
in on config but it seems to have got lost somewhere.

This sets a work-around for the pCloud backend allowing an empty state
parameter.

See: https://forum.rclone.org/t/cannot-connect-to-pcloud/16592
See: https://forum.rclone.org/t/cannot-create-pcloud-config-file-on-osx/16583
2020-05-26 11:27:01 +01:00
Nick Craig-Wood
49ba4eeb86 oauthutil: tidy interface to Config to add Options struct
The interface was getting so that a new function was needed for every
Config variant. Adding an Options struct fixes this.
2020-05-26 11:27:01 +01:00
Nick Craig-Wood
c08617c70f box: Calculate Free amount in About call 2020-05-25 16:47:34 +01:00
Nick Craig-Wood
1bd3365868 vfs: fix TestVFSStatfs with known total, used and unknown free 2020-05-25 16:46:56 +01:00
Nick Craig-Wood
31f21551bf mount: change maximum leaf name length to 1024 bytes - fixes #3884
This limit was previously 4k, set in 59026c4761, however leaf
names above 1k now produce an IO error.

WinFsp seems to have its own method for dropping file names longer
than 255 characters.
2020-05-25 15:41:11 +01:00
Nick Craig-Wood
baaec5b126 testserver: don't allow starting of an empty remote 2020-05-25 15:26:06 +01:00
Martin Michlmayr
2b72c7f709 docs: move link to correct location
The link is for WinFsp, so link from "WinFsp" rather than "open
source".
2020-05-25 12:04:34 +01:00
Martin Michlmayr
f204d95a8b docs: fix range
This is a range so there should be two dots (as in the other ranges).
2020-05-25 11:25:23 +01:00
Martin Michlmayr
7a5b814e59 docs: fix cosmetic issues 2020-05-25 11:25:23 +01:00
Martin Michlmayr
aa6c06d751 docs: use code elements more consistently 2020-05-25 11:25:23 +01:00
Martin Michlmayr
ffb031a71b docs: link to usage section of docs
Since the page refers to the "Usage" section, let's actually link
to that specific section.
2020-05-25 11:25:23 +01:00
Martin Michlmayr
041b201abd doc: fix typos throughout docs and code 2020-05-25 11:23:58 +01:00
Nick Craig-Wood
9db8ecbc32 box: implement About to read size used - fixes #4264 2020-05-23 18:46:44 +01:00
Nick Craig-Wood
518d39815c sync,copy,move: add --check-first to do all checking before starting transfers
See: https://forum.rclone.org/t/rclone-sync-doing-transfer-and-checking-in-paralel/16352/
2020-05-22 17:50:07 +01:00
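As a rough usage sketch (the remote names here are illustrative), the new flag is simply added to an ordinary sync so that all checking completes before any transfers start:

    rclone sync source:path dest:path --check-first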
Nick Craig-Wood
147f97d1f7 sync: allow --max-backlog to be -ve meaning as large as possible 2020-05-22 17:50:07 +01:00
Nick Craig-Wood
7f44735709 Add Daniel Slyman to contributors 2020-05-22 17:50:07 +01:00
Martin Michlmayr
db9d94f5de doc: improve wording 2020-05-20 15:54:51 +01:00
Martin Michlmayr
a36ef8582f doc: use consistent capitalization 2020-05-20 15:54:51 +01:00
Martin Michlmayr
f34a40a709 swift: fix cosmetic issue in error message 2020-05-20 15:54:51 +01:00
Martin Michlmayr
4aee962233 doc: fix typos throughout docs and code 2020-05-20 15:54:51 +01:00
Fred
5f71d186b2 seafile: implement 2FA 2020-05-20 15:46:35 +01:00
Daniel Slyman
56c9fdb53c docs: updates filesize limitations for OneDrive Personal 2020-05-20 08:11:25 +01:00
Animosity022
47474687eb Update drive.md
Changing the "and" to "or".
2020-05-19 21:03:08 -04:00
Animosity022
abe753ca86 Update drive.md
Adding notes for the different scenarios of a Google Suite user versus a vanilla Google account.
2020-05-19 21:02:11 -04:00
Nick Craig-Wood
cf5d0f5c1f Revert "drive: server side copy docs use default description if empty"
This reverts commit 9e4b68a364.

This does not work as intended - it only changes docs files and to
make it change drive files would take an extra roundtrip.

I think the semantics of server side copy are now correct - additional
features should be added with a new flag.

See #4230
2020-05-19 16:48:02 +01:00
Nick Craig-Wood
4d431e94b9 oauth2: try to make token expiry messages more helpful - fixes #4250
See also: #4251
2020-05-19 16:19:35 +01:00
Nick Craig-Wood
bdafbad61e cache: fix tests writing to empty path
This meant the tests were writing to the current directory instead of
a temporary directory.
2020-05-19 16:01:35 +01:00
Nick Craig-Wood
eb6e9b194a fspath: Stop empty strings being a valid path - fixes #4239
Before this change you could use "" as a valid remote, so `rclone lsf
""` would work. This was treated as the current directory.

This is unexpected and creates a footgun for scripting when an empty
variable is passed to rclone by accident.

This fix returns the error "can't use empty string as a path" instead
of allowing it.
2020-05-19 12:34:23 +01:00
Nick Craig-Wood
ecdfd80459 Add Brandon McNama to contributors 2020-05-19 12:22:59 +01:00
Nick Craig-Wood
a268eedb1d Add Martin Michlmayr to contributors 2020-05-19 12:22:59 +01:00
Brandon McNama
19ff7c9302 cache: Fix Server Side Copy with Temp Upload
When wrapping a backend that supports Server Side Copy (e.g. `b2`, `s3`)
and configuring the `tmp_upload_path` option, the `cache` backend would
erroneously report that Server Side Copy/Move was not supported, causing
operations such as file moves to fail. This change fixes this issue
under these circumstances such that Server Side Copy will now be used
when the wrapped backend supports it.

Fixes #3206
2020-05-19 12:17:40 +01:00
Martin Michlmayr
fb169a8b54 doc: fix typos throughout docs 2020-05-19 12:02:44 +01:00
calisro
bcbfad1482 sftp: added --sftp-pem-key to support inline key files 2020-05-19 11:55:38 +01:00
Thibault Molleman
8c37ae8f5c docs: drive: google console doesn't list 'other' anymore
The option of 'other' seems to be gone from the

https://console.developers.google.com/apis/credentials/oauthclient

page. It only lists these now:

- Web application
- Android
- Chrome app
- iOS
- TVs and Limited Input devices
- Desktop app
- Universal Windows Platform (UWP)
2020-05-19 11:52:23 +01:00
Nick Craig-Wood
610f40f700 local: implement --local-no-sparse flag for disabling sparse files #2469
This also introduces a one time warning for sparse files and updates
the docs to warn about them.
2020-05-19 10:16:43 +01:00
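A minimal sketch of the new flag, assuming a copy onto a local disk where sparse files are unwanted (the paths are illustrative):

    rclone copy remote:backups /mnt/disk/backups --local-no-sparse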
Nick Craig-Wood
5cb2a2fa3c lib/file: add Implemented constants 2020-05-19 10:15:20 +01:00
Nick Craig-Wood
919a180ad2 config: make config show take "remote:" as well as "remote" 2020-05-18 19:53:13 +01:00
Nick Craig-Wood
951099dbed vfs: change default --vfs-read-wait to 20ms
In my testing with local and remote storage this is a good compromise
between delaying the seeks and failing to wait for in sequence reads.

See: https://forum.rclone.org/t/constantly-high-iowait-add-log/14156/40
2020-05-18 18:09:23 +01:00
Nick Craig-Wood
0f9267d5fc vfs: factor waiting code from read and writes into common function 2020-05-18 18:09:23 +01:00
Nick Craig-Wood
3de9bd9d04 vfs: fix hang in read wait code - Fixes #4039
Before this fix, rclone would sometimes hang in vfs.readAt().

This was due to a race condition causing rclone to miss the timeout
signal.

This was fixed by a small amount of extra locking.

This very likely also fixes a number of "failed to wait for
in-sequence read" errors.
2020-05-18 18:09:23 +01:00
Nick Craig-Wood
57ee25d75a GitHub: enable forum link and disable blank issues 2020-05-18 18:08:24 +01:00
Brandon Philips
633f50cd3e googlephotos: create feature/favorites directory - Fixes #4189
Enable access to “Favorite” images on the Google Photos backend.

This adds a “feature/favorites” folder in the Google Photos backend
and uses the Feature Filter API:

https://developers.google.com/photos/library/reference/rest/v1/mediaItems/search#Filters
2020-05-18 17:55:16 +01:00
calisro
d04d4edc40 Update ftp.md (#4241) 2020-05-16 15:28:15 -04:00
Nick Craig-Wood
98c34e413d config: add --obscure and --no-obscure flags to config create/update
Before this change there was some ambiguity about whether passwords
were obscured on not passing them into config create or config update.

This change adds the --obscure and --no-obscure flags to make the
intent clear.

It also updates the remote control and the tests.

Fixes #3728
2020-05-15 16:41:37 +01:00
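As a sketch of the intent (the remote name, type and values are illustrative): pass --obscure when supplying a plaintext password that should be obscured, or --no-obscure when the value is already obscured:

    rclone config create mysftp sftp host example.com user demo pass mypassword --obscure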
Nick Craig-Wood
c4bc249b66 Add Ben Zenker to contributors 2020-05-15 16:18:25 +01:00
Ben Zenker
899c8e0697 march: added flag to allow Unicode filenames to remain unique
If your filenames contain two near-identical Unicode characters,
rclone will normalize these, making them identical. This flag
gives you the ability to keep them unique. This might
create unintended side effects, such as duplicating files that
contain certain Unicode characters, when downloading them from
certain cloud providers to a macOS filesystem.

Fixes #4228
2020-05-15 12:28:01 +01:00
Nick Craig-Wood
4006345cfb mountlib: add tests for rc mount/mount and friends 2020-05-14 16:38:37 +00:00
Nick Craig-Wood
1319d7333c mountlib: add rc command mount/types and rename mountOption to mountType 2020-05-14 16:38:37 +00:00
Chaitanya
5f168b3b96 rc: add mount/mount command 2020-05-14 16:38:37 +00:00
Nick Craig-Wood
e4f1e19127 sftp: fix post transfer copies failing with 0 size when using set_modtime=false
Before this change we exited the SetModTime call early, which meant we
skipped reading the info about the file.

This change reads info about the file in the SetModTime call even if
we are skipping setting the modtime.

See: https://forum.rclone.org/t/sftp-and-set-modtime-false-error/16362
2020-05-14 17:30:01 +01:00
Nick Craig-Wood
4a1b644bfb azureblob: implement streaming of unknown sized files
See: https://forum.rclone.org/t/rclone-rcat-azure-blob-container-sas-token-403-error/16286/3
2020-05-14 11:56:15 +01:00
Nick Craig-Wood
8c9c86c3d6 putio: fix parsing of remotes with leading and trailing /
See: https://forum.rclone.org/t/unable-to-copy-from-remote-but-mount-works/16351/
2020-05-14 11:52:43 +01:00
Nick Craig-Wood
8a58e0235d s3: don't leak memory or tokens in edge cases for multipart upload 2020-05-14 07:48:18 +01:00
Nick Craig-Wood
52b7337d28 crypt: change backend encode/decode to output a plain list
This commit changes the output of the rclone backend encode crypt: and
decode commands to output a plain list of decoded or encoded file
names.

This makes the command much more useful for command line scripting.
2020-05-13 18:11:45 +01:00
Nick Craig-Wood
995cd0dc32 backend: add --json flag to always output JSON 2020-05-13 18:07:41 +01:00
Nick Craig-Wood
5eb558e058 backend: fix output of an array of strings 2020-05-13 17:55:56 +01:00
Max Sum
33d9310c49 union: enable ListR when upstreams contain local
Enable fast list functions for union backend when:

- at least one of the upstreams supports fast list
- upstreams only consist of backends that support fast list and local backend.

Fixes #3000
2020-05-13 13:10:35 +01:00
Nick Craig-Wood
aba89e2737 Add Caleb Case @calebcase as the tardigrade backend maintainer 2020-05-13 12:57:29 +01:00
Nick Craig-Wood
d685e7b4b5 bin: add check-merged.go to find local branches which might have been merged 2020-05-13 12:42:45 +01:00
Nick Craig-Wood
9e4b68a364 drive: server side copy docs use default description if empty
When server side copying Google docs files we attempt to preserve the
description.

This patch makes it so that we use the default description if the
original description was empty.

See: 6fdd7149c1 (commitcomment-38008638)
2020-05-13 12:31:37 +01:00
Nick Craig-Wood
044a3b3920 fserrors: Make "tls: use of closed connection" a retriable error
This has happened when uploading very large files to B2. It is
probably a bug in the go runtime but we'll attempt to work around it
here.

See: https://forum.rclone.org/t/large-file-upload-to-backblaze-failed/16128/5
2020-05-13 11:42:37 +01:00
Nick Craig-Wood
d342f9f942 azureblob: fix permission error on SAS URL limited to container
Before this change, for some operations, eg rcat or copyto (of a file)
rclone would attempt to create the container when using a SAS URL
limited to a container.

After this change we assume the container does not need creating when
using a container SAS URL.

See: https://forum.rclone.org/t/rclone-rcat-azure-blob-container-sas-token-403-error/16286
2020-05-13 09:11:51 +01:00
Nick Craig-Wood
e91b509578 fs: allow --min-age/--max-age to take a date as well as a duration
Fixes #4211
2020-05-12 17:49:33 +01:00
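For example, assuming the ISO date layout shown is one of the accepted formats, a date can now be given where previously only a duration was allowed:

    rclone lsl remote:path --max-age 2020-01-02
    rclone lsl remote:path --min-age 30d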
Nick Craig-Wood
8ddb3fbb2e drive: fix using list recursive on shortcuts to directories 2020-05-12 17:08:05 +01:00
Nick Craig-Wood
b91e01fd22 drive: strip trailing slashes in shortcut command #4098
This also fixes a typo in the name of the function, and allows making
shortcuts from the root directory which are useful in cross drive
shortcut creation.

This also adds a basic suite of tests for creating, listing and removing
shortcuts.
2020-05-12 17:08:05 +01:00
Nick Craig-Wood
177195aeeb accounting: fix race clearing stats
This race was introduced by

10a6a92e52 accounting: reset bytes read during copy retry
2020-05-12 17:02:32 +01:00
Nick Craig-Wood
cb5979a468 accounting: factor stats into its own structure
This makes it very obvious which mutex to take for accessing the
values.
2020-05-12 17:02:32 +01:00
Nick Craig-Wood
32507774de Add Caleb Case to contributors 2020-05-12 17:01:26 +01:00
Caleb Case
0ce662faad Tardigrade Backend 2020-05-12 15:56:50 +00:00
Caleb Case
03b629064a Tardigrade Backend: Dependencies 2020-05-12 15:56:50 +00:00
albertony
962fbc8257 jottacloud: update docs regarding cleanup, removed remains from old auth, and added warning about special mountpoints. 2020-05-11 11:41:17 +02:00
Nick Craig-Wood
bd4b91bd57 test_all: revert running fichier tests one at a time
This didn't achieve the objective of getting the tests to run cleanly
and made them take 16 hours!

This reverts commit 97f6f8fe19.
2020-05-11 08:51:33 +01:00
Ankur Gupta
10a6a92e52 accounting: reset bytes read during copy retry - fixes #4178
During a copy/sync command, if an operation fails due to a network
issue and is retried, the underlying io.Reader is re-initialised,
but the stats for bytes already read are not reset, leading to incorrect
stats. This was fixed by resetting the bytes read when an Account is
re-initialised.
2020-05-10 17:58:22 +00:00
Max Sum
54b16bd054 union: implement ListR 2020-05-10 17:57:03 +00:00
Max Sum
f21e97001b union: fix server-side copy 2020-05-10 17:56:18 +00:00
Nick Craig-Wood
1f005a82ad http: add missing comment to pacify linter 2020-05-10 18:53:38 +01:00
Nick Craig-Wood
bb65974e2f drive: implement backend shortcut command for creating shortcuts #4098 2020-05-09 15:16:15 +01:00
Nick Craig-Wood
bc0f487369 drive: look for dirs as well as files on NewObject
This means that we can return ErrorNotAFile when there is an object
with the same name as a directory rather than potentially creating a
duplicate name.
2020-05-09 15:16:15 +01:00
Nick Craig-Wood
e103c4c26a Add Maxime Suret to contributors 2020-05-09 15:16:15 +01:00
Maxime Suret
79d29bb41e serve sftp: add support for multiple host keys by repeating --key flag 2020-05-09 14:43:17 +01:00
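A brief sketch of the repeated flag (the key paths are illustrative):

    rclone serve sftp remote:path --key /etc/rclone/rsa_host_key --key /etc/rclone/ecdsa_host_key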
calisro
c80b6d96dd http: improved directory listing with new template from Caddy project
This includes a new directory listing template which was originally
from the Caddy project (used with permission and copyright attribution).

This is used whenever we serve directory listings so `rclone serve
http`, `rclone serve webdav` and `rclone rcd --rc-serve`

This also modifies the tests so they work with the original template which
is easier to debug.
2020-05-08 16:15:21 +01:00
Nick Craig-Wood
97f6f8fe19 test_all: run fichier tests one at a time
This is in attempt to get the tests to run cleanly without hitting the
rate limiter.
2020-05-07 20:39:54 +01:00
Nick Craig-Wood
94920d39ae test_all: increase the test timeout to 60m from 30m
Some tests are failing at 30m - 60m doesn't seem unreasonable
2020-05-07 20:38:40 +01:00
Nick Craig-Wood
9403bd2990 test_all: allow -list-retries to be overridden on the command line 2020-05-07 20:36:45 +01:00
Nick Craig-Wood
e098924e61 fstest: change -list-retries default to 3 from 6
The integration tests have got much better about retrying the tests,
and we aren't testing ACD any more so we don't need it this high.
2020-05-07 20:28:41 +01:00
Nick Craig-Wood
437f9e2cef Add Sébastien Gross to contributors 2020-05-07 11:25:09 +01:00
Sébastien Gross
395f259978 cmd: when running --password-command allow use of stdin
Bind rclone's standard input to the password command's standard input. This
allows providing the password from a pipe and collecting it using cat.

The typical use case is when rclone is on a remote server with an
encrypted configuration. This solves the environment variable
issue (#3368) and avoids storing the password on the remote host.

Now the following chain is allowed:

    echo 'secret' | ssh host.example.com \
       sudo -u rclone \
       rclone --config /path/to/rclone.conf \
       --password-command 'cat' ls remote:

Signed-off-by: Sébastien Gross <seb•ɑƬ•chezwam•ɖɵʈ•org>

Co-authored-by: Sébastien Gross <seb•ɑƬ•chezwam•ɖɵʈ•org>
2020-05-07 11:02:52 +01:00
Nick Craig-Wood
b78adc9f03 Add Fred @creativeproject as the seafile backend maintainer 2020-05-07 10:18:10 +01:00
Nick Craig-Wood
b7c21310b4 Add Fred to contributors 2020-05-06 18:33:44 +01:00
Fred
c754e89906 seafile: New backend for seafile server 2020-05-06 17:33:22 +00:00
Fred
62cfe3f384 vendor: add github.com/coreos/go-semver 2020-05-06 17:33:22 +00:00
Nick Craig-Wood
afde340c9e gcs: fix --header-upload - #59
Before this change we were setting the headers on the PUT request. However this isn't where GCS needs them.

After this fix we set the headers in the object upload request itself.

This means that we only support a limited range of headers

- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Goog-Meta-

Note that the last of those is for setting custom metadata in the form
"X-Goog-Meta-Key: value".
2020-05-06 17:34:23 +01:00
Nick Craig-Wood
d0ad83de4b Add ElonH to contributors 2020-05-06 17:34:23 +01:00
ElonH
d119bfd934 rcd: disable duplicate log
if running `rclone rcd --rc-user=admin --rc-pass=admin
--rc-allow-origin="*"`, lots of duplicate warnings appear in the log

Warning: Allow origin set to *. This can cause serious security problems.
Warning: Allow origin set to *. This can cause serious security problems.
....

This is not conducive to analyzing debugging info.

Therefore, let's show it only once.
2020-05-05 13:47:25 +00:00
Nick Craig-Wood
bdc91eda0f serve http: add Last-Modified headers to files and directories
This means that using `rclone serve http` preserves modification times
when used with the http backend.

Fixes #4201
2020-05-05 09:41:08 +01:00
Nick Craig-Wood
ef9e6794c2 docs: make serve http/webdav template docs into a table 2020-05-04 17:36:31 +00:00
calistri
4362ca7bb9 serve http, serve webdav: Added a --template flag for user defined markup 2020-05-04 17:36:31 +00:00
Nick Craig-Wood
dcf945ed58 docs: add bin/.ignored-emails for removing email addresses from authors.md
Remove an email as requested for Anagh Kumar Baranwal
2020-05-04 17:38:25 +01:00
Anagh Kumar Baranwal
a86196a156 drive: Added command to change service_account_file and chunk_size
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2020-05-04 16:23:33 +00:00
Anagh Kumar Baranwal
856c2b565f crypt: Added decode/encode commands to replicate functionality of cryptdecode
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2020-05-04 16:23:33 +00:00
Nick Craig-Wood
1a8c5708c5 vfs: ignore file not found errors from Hash in Read.Release
There is nothing we can do about this at this point and this error can
happen when moving files so we ignore it to clean the logs up.
2020-05-04 12:18:28 +01:00
Nick Craig-Wood
14cab0fff0 local: fix "file not found" errors on post transfer Hash calculation
Before this change the local backend was returning file not found
errors for post transfer hashes for files which were moved. This was
caused by the routine which checks for the object being changed.

After this change we ignore file not found errors while checking to
see if the object has changed. If the hash has to be computed then a
file not found error will be thrown when it is opened, otherwise the
cached hash will be returned.
2020-05-04 12:17:46 +01:00
Nick Craig-Wood
69888bf966 cmount: send a hint as to whether the filesystem is case insensitive or not 2020-05-04 11:38:07 +01:00
Nick Craig-Wood
d260238f99 cmount: use ReaddirPlus on Windows to improve directory listing performance
Before this change Windows would read a directory then immediately stat
every item in the directory.

After this change we return the stat information along with the
directory which stops so many callbacks.
2020-05-04 11:38:07 +01:00
Nick Craig-Wood
6ca7198f57 mount: fix disappearing cwd problem - fixes #4104
Before this change, the current working directory could disappear
according to the Linux kernel.

This was caused by mount returning different nodes with the same
information in.

This change uses vfs.Node.SetSys to cache the information so we always
return the same node.
2020-05-04 11:38:07 +01:00
Nick Craig-Wood
cfcdc85b26 vfs: Add SetSys() methods to Node to allow caching stuff on a node 2020-05-04 11:38:07 +01:00
Nick Craig-Wood
7f8d74e903 docs: refresh rclone authorize help 2020-05-04 10:40:26 +01:00
Nick Craig-Wood
f2b1fedc4f drive: follow shortcuts by default, skip with --drive-skip-shortcuts
Before this change rclone would skip all shortcuts with a message

    Ignoring unknown document type "application/vnd.google-apps.shortcut"

After this message rclone resolves the shortcuts by default to the
actual files that they point to. See the docs for more info.

The --drive-skip-shortcuts flag can be used to skip shortcuts.
2020-05-02 18:28:38 +01:00
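For example, to keep something close to the old behaviour and leave shortcuts unresolved (the remote name is illustrative):

    rclone lsf drive: --drive-skip-shortcuts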
Nick Craig-Wood
b03cad3cf6 vendor: update google.golang.org/api/drive to pull in shortcuts definition 2020-05-02 18:28:38 +01:00
Nick Craig-Wood
70db13e6e8 vfs: pin the Fs in use in the Fs cache
This means we can reliably look up the Fs from the cache when using
`backend/command`.
2020-05-01 17:11:45 +01:00
Nick Craig-Wood
4c98360356 fs/cache: Add Pin and Unpin and canonicalised lookup
Before this change we stored cached Fs under the config string the
user gave us. As the result of fs.ConfigString() can often be
different after the backend has canonicalised the paths this meant
that we could not look up backends in the cache reliably.

After this change we store cached Fs under their config string as
returned from fs.ConfigString(f) after the Fs has been created. We
also store a map of user to canonical names (where they are different)
so the users can look up Fs under the names they passed to rclone too.

This change along with Pin and Unpin is necessary so we can look up
the Fs in use reliably in the `backend/command` remote control
interface.
2020-05-01 17:11:45 +01:00
Nick Craig-Wood
42f9f7fb5d lib/cache: add Rename, Pin and Unpin 2020-05-01 17:11:45 +01:00
Nick Craig-Wood
ca1856724c fs: add ConfigString function to return a canonical config string 2020-05-01 17:11:45 +01:00
Nick Craig-Wood
b52a39a84e drive: fix merge breakage
In 2f5a2d3c48 an incorrect merge caused compilation to fail
2020-05-01 13:02:32 +01:00
Nick Craig-Wood
a97261c54f Add gitch1 to contributors 2020-05-01 13:02:32 +01:00
gitch1
0f5579c0ba docs: add --password-command Powershell Wiki link
add Windows Powershell example wiki link to --password-command doc
2020-04-30 17:31:58 +01:00
Nick Craig-Wood
2f5a2d3c48 drive: Don't return nil Object with nil error from newObject* functions.
Before this change the newObject* functions could return object=nil
with err=nil.  The result of these functions are passed outside of the
backend code (eg in Copy, Move) and returning a nil object with a nil
error leads to crashes elsewhere as it breaks expectations.

After this change we return (nil, fs.ErrorObjectNotFound) in these
cases. In the one place this is actually needed internally (when turning
items into listings) we detect that error and use it to mean skip the
directory item.

This problem was noticed while testing the shortcuts code. It
shouldn't happen normally but it is conceivable it could.
2020-04-30 17:11:36 +01:00
Nick Craig-Wood
ba7f7c8319 build: add -trimpath to release build for reproducible builds 2020-04-30 12:43:40 +01:00
Nick Craig-Wood
77fb3c2511 vfs: bring DO NOT EDIT comments in line with "go help generate" 2020-04-30 12:24:44 +01:00
Nick Craig-Wood
74d9dabdff b2: force the case of the SHA1 to lowercase - fixes #4162
Apparently some tools (eg duplicati) upload the SHA1 in uppercase to
b2 to be stored in the `large_file_sha1` metadata. This patch forces
it to lower case.
2020-04-29 17:08:21 +01:00
Nick Craig-Wood
54edf38d0e Add Matan Rosenberg to contributors 2020-04-29 17:08:21 +01:00
Matan Rosenberg
22f06590f7 genautocomplete: add support for fish shell 2020-04-29 16:19:35 +01:00
Matan Rosenberg
3b4c24af4e vendor: update github.com/spf13/cobra to v1.0.0 2020-04-29 16:19:35 +01:00
Nick Craig-Wood
f37af9afec lsjson: Add --hash-type parameter and use it in lsf to speed up hashing
Before this change if you specified --hash MD5 in rclone lsf it would
calculate all the hashes and just return the MD5 hash which was very
slow on the local backend.

Likewise specifying --hash on rclone lsjson was equally slow.

This change introduces the --hash-type flag (and corresponding
internal parameter) so that the hashes required can be selected in
lsjson.

This is used internally in lsf when the --hash parameter is selected
to speed up the hashing by only hashing with the one hash specified.

Fixes #4181
2020-04-29 16:09:45 +01:00
Nick Craig-Wood
a3f0992a22 Add Kush to contributors 2020-04-29 16:09:45 +01:00
Kush
f555873f18 delete: added --rmdirs flag to delete directories as well - fixes #4055
If you supply the --rmdirs flag with delete command,
it will remove all empty directories along with it
leaving the root intact.
2020-04-29 12:15:30 +01:00
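A minimal sketch of the combination (the path is illustrative); this deletes the files under the path and then removes any directories left empty, keeping the root itself:

    rclone delete remote:path --rmdirs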
Nick Craig-Wood
1c8eab81a5 dbhashsum: hide the command now it is deprecated 2020-04-29 10:12:12 +01:00
Nick Craig-Wood
cbc5af329f cachestats: deprecate in favour of rclone backend stats cache: 2020-04-29 10:10:57 +01:00
Nick Craig-Wood
90d738b561 cache: implement rclone backend stats command 2020-04-29 10:10:57 +01:00
Nick Craig-Wood
d80fdad6da rc: implement backend/command for running backend commands remotely 2020-04-29 10:10:57 +01:00
Nick Craig-Wood
e2916f3a55 local: implement backend command "noop" for testing purposes 2020-04-29 10:10:57 +01:00
Nick Craig-Wood
1aa1a2c174 backend: add new backend command for backend specific commands
These commands are for implementing backend specific
functionality. They have documentation which is placed automatically
into the backend doc.

There is a simple test for the feature in the backend tests.
2020-04-29 10:10:57 +01:00
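Two usage sketches using backend commands mentioned in the nearby commits, the cache stats command and the local noop test command (the local path is illustrative):

    rclone backend stats cache:
    rclone backend noop local:/tmp/testdir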
Nick Craig-Wood
195d152785 rc: add GetStructMissingOK 2020-04-29 09:42:31 +01:00
Nick Craig-Wood
1f61027f51 rc: add -o/--opt and -a/--arg for more structured input 2020-04-29 09:42:31 +01:00
Nick Craig-Wood
37a53570d4 azureblob: implement memory pooling to control memory use
This commit implements memory pooling to control excessive memory use
as was implemented in the s3 backend.
2020-04-28 17:47:10 +01:00
Nick Craig-Wood
ee7219aa20 azureblob: add --azureblob-disable-checksum flag 2020-04-28 17:47:10 +01:00
Nick Craig-Wood
b1d8da484b azureblob: retry InvalidBlobOrBlock error as it may indicate block concurrency problems
According to Microsoft support this error can be caused by

> A timing/concurrency issue where the PUT operations are happening
> about the same time for a single blob. The Put Block List operation
> writes a blob by specifying the list of block IDs that make up the
> blob. In order to be written as part of a blob, a block must have
> been successfully written to the server in a prior Put Block
> operation.
>
> Documentation reference:
>
> https://docs.microsoft.com/en-us/rest/api/storageservices/put-block
>
> This error can happen when doing concurrent upload commits after you
> have started the upload but before you commit. In that case, the
> upload fails. The application can retry this error or attempt some
> other recovery action based on the required scenario.

See: https://forum.rclone.org/t/error-while-syncing-with-azure-blob-storage-x-ms-error-code-invalidbloborblock/15561
2020-04-28 17:47:10 +01:00
Nick Craig-Wood
4e869e03f7 s3: improve docs for --s3-disable-checksum 2020-04-28 17:47:10 +01:00
Nick Craig-Wood
52c9647b06 b2: improve docs for --b2-disable-checksum 2020-04-28 17:47:10 +01:00
Nick Craig-Wood
7238ae18f9 Add Adam Stroud to contributors 2020-04-28 17:47:10 +01:00
Nick Craig-Wood
551a829eba googlephotos: don't put an image in error message - fixes #4144
For a certain class of broken or missing image Google Photos puts an
image in the error message.

Before this fix we blindly chucked it into the error message.

After this fix we replace it with some sensible text.
2020-04-28 16:51:47 +01:00
Adam Stroud
8e91f83174 googlecloudstorage: Add ARCHIVE storage class to help 2020-04-27 11:40:21 +01:00
buengese
7f776c64f0 fichier: implement custom pacer to deal with the new rate limiting 2020-04-26 20:38:56 +02:00
Nick Craig-Wood
8bf6ab2c52 accounting: fix race condition in tests 2020-04-24 12:32:09 +01:00
Nick Craig-Wood
75fc3fe389 fs: fix FixRangeOption so it doesn't add HTTPOptions in place of bad Ranges
Before this fix, FixRangeOption would substitute RangeOptions it
wanted to get rid of with an empty HTTPOption. This caused a problem
now that backends interpret HTTPOptions.

This fix substitutes those with NullOptions which aren't interpreted as
HTTPOptions. This patch also improves the unit tests.
2020-04-24 12:32:09 +01:00
Xiaoxing Ye
c4572ebc91 rc: fix misplaced http server config - fixes #4130 2020-04-23 20:22:47 +01:00
David
e56976839a jwtutil: Fix error handling 2020-04-23 17:52:14 +01:00
David
0c0ed2fe04 box: Remove unnecessary iat from jws claims 2020-04-23 17:52:14 +01:00
Nick Craig-Wood
ab6ed256e5 putio: add support for --header-upload and --header-download #59 2020-04-23 15:55:52 +01:00
Nick Craig-Wood
7c98ecd3ab putio: make downloading files use the rclone http Client
This fixes `--header-download` and these transactions being missed from
`--dump bodies` or `--tpslimit`
2020-04-23 15:48:30 +01:00
Nick Craig-Wood
f6346a4d29 fs: add --header flag to add options to every HTTP transaction #59 2020-04-23 15:24:21 +01:00
Nick Craig-Wood
b502a74cff gcs: add support for --header-upload and --header-download #59 2020-04-23 11:41:57 +01:00
Nick Craig-Wood
8e9c25063a swift: add support for --header-upload and --header-download #59 2020-04-23 11:34:36 +01:00
Nick Craig-Wood
1dced3b3c4 rcat: add support for --header-upload #59 2020-04-23 11:34:31 +01:00
Nick Craig-Wood
087bf1d584 cat: add support for --header-download #59 2020-04-23 11:34:24 +01:00
Nick Craig-Wood
e051a34fc1 dbhashsum: deprecate: use rclone hashsum DropboxHash instead 2020-04-23 11:13:13 +01:00
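The suggested replacement from the deprecation message (the path is illustrative):

    rclone hashsum DropboxHash remote:path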
Nick Craig-Wood
f5455d865b accounting: check for max transfer in WriteTo
Before this change the max transfer tests were failing for remotes
which were using WriterTo.
2020-04-23 11:13:13 +01:00
Tim Gallant
b705ead3fd docs: adds --header-download and --header-upload 2020-04-23 11:07:21 +01:00
Tim Gallant
c390fc8100 onedrive: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
14f6ce1e77 premiumizeme: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
385542e2f9 sharefile: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
fc946d0c44 fichier: pass options to rest.Opts for uploadFile 2020-04-23 11:07:21 +01:00
Tim Gallant
854c84d0ca pcloud: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
90bd0eb44c webdav: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
3130f870bb sugarsync: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
51b617f601 yandex: pass options to rest.Opts for upload 2020-04-23 11:07:21 +01:00
Tim Gallant
011ca244b2 jottacloud: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
9ea1361044 googlephotos: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
776966e22c opendrive: pass options to rest.Opts for Put and Update 2020-04-23 11:07:21 +01:00
Tim Gallant
01cb256b84 box: pass options to rest.Opts for uploadPart 2020-04-23 11:07:21 +01:00
Tim Gallant
0b0163dde2 box: pass options to rest.Opts for upload 2020-04-23 11:07:21 +01:00
Tim Gallant
38123c70eb b2: pass options to rest.Opts for Update 2020-04-23 11:07:21 +01:00
Tim Gallant
5cb7229a16 s3: add support for HTTPOption 2020-04-23 11:07:21 +01:00
Tim Gallant
9bf3d3da4c fs: add UploadHeaders, DownloadHeaders to Update/Put/Open options 2020-04-23 11:07:21 +01:00
Tim Gallant
93caa459e3 fs/config: add header-download and header-upload flags 2020-04-23 11:07:21 +01:00
Nick Craig-Wood
f8039deb7c s3: fix detection of BucketAlreadyOwnedByYou and BucketAlreadyExists error
This was being silently ignored until this commit

e2bf91452a s3: report errors on bucket creation (mkdir) correctly
2020-04-22 18:14:03 +01:00
Nick Craig-Wood
86eaf43b00 vfs: fix tests for Statfs when running on backends with unknowns
This was broken in da41db4712
2020-04-22 18:14:03 +01:00
Nick Craig-Wood
8176202e6d Add Sunil Patra to contributors 2020-04-22 18:14:03 +01:00
Nick Craig-Wood
1b74879b8b Add David Bramwell to contributors 2020-04-22 18:14:03 +01:00
Sunil Patra
39319b4858 @Sunil-P
box: Added support for interchangeable root folder for Box backend - #3422
2020-04-22 17:00:13 +01:00
Sunil Patra
4af5c9aed7 pCloud: Added support for interchangeable root folder for pCloud backend - #3957 2020-04-22 16:58:01 +01:00
David Bramwell
8a3c4c6a7b box: add token renew function for jwt auth - Fixes #4901 2020-04-22 16:53:03 +01:00
Nick Craig-Wood
d22e6f5a96 cryptcheck: remove duplicated debug OK message 2020-04-22 11:33:48 +01:00
Nick Craig-Wood
1648c1a0f3 crypt: calculate hashes for uploads from local disk
Before this change crypt would not calculate hashes for files it was
uploading. This is because, in the general case, they have to be
downloaded, encrypted and hashed which is too resource intensive.

However this causes backends which need the hash first before
uploading (eg s3/b2 when uploading chunked files) not to have a hash
of the file. This causes cryptcheck to complain about missing hashes
on large files uploaded via s3/b2.

This change calculates hashes for the upload if the upload is coming
from a local filesystem. It does this by encrypting and hashing the
local file re-using the code used by cryptcheck. For a local disk this
is not a lot more intensive than calculating the hash.

See: https://forum.rclone.org/t/strange-output-for-cryptcheck/15437
Fixes: #2809
2020-04-22 11:33:48 +01:00
Nick Craig-Wood
44b1a591a8 crypt: get rid of the unused Cipher interface as it obfuscated the code 2020-04-22 11:33:48 +01:00
Nick Craig-Wood
bbb6f94377 fstest: create AssertTimeEqualWithPrecision from CheckTimeEqualWithPrecision 2020-04-22 11:33:00 +01:00
Nick Craig-Wood
3f654dac37 mount: map more rclone errors into file systems errors
This improves the error reporting, in particular for
fs.ErrorPermissionDenied which was being reported as an IO error.
2020-04-21 16:31:43 +01:00
Nick Craig-Wood
eed9c5738d vfs: factor the vfs cache into its own package 2020-04-20 10:42:33 +01:00
Nick Craig-Wood
fd39cbc193 vfstests: move functional tests from mountlib and make them work with VFS
The tests are now run for the mount commands and for the plain VFS.

This makes the tests much easier to debug when running with a VFS than
through a mount.
2020-04-20 10:42:33 +01:00
Nick Craig-Wood
b25f5eb0d1 serve sftp: use VFS utility functions instead of own copy 2020-04-19 15:40:55 +01:00
Nick Craig-Wood
0961763082 vfs: add utility methods to match os package 2020-04-19 15:40:55 +01:00
Nick Craig-Wood
49e5299a95 asyncreader: make ErrorStreamAbandoned public 2020-04-19 15:18:49 +01:00
Nick Craig-Wood
07908f3f54 vfs: bring open_tests.go generator back into line with the generated tests
In

54deb01f00 vfs: Make OpenFile and friends return EINVAL if O_RDONLY and O_TRUNC

The generated file open_test.go was edited directly without editing
the generator.

This commit brings the generator make_open_tests.go back into line
with that edit. It also makes it so `go generate` can be used to
regenerate the tests.
2020-04-19 15:18:49 +01:00
Nick Craig-Wood
fdada79ebf accounting: support WriterTo for less memory copying
This should reduce memory copying when the async buffer is in use and
improve speeds.
2020-04-19 15:18:49 +01:00
Nick Craig-Wood
7f15cc9556 operations: make ReOpen and NewReOpen public for re-use elsewhere 2020-04-19 15:18:49 +01:00
Nick Craig-Wood
cd3c699f28 lib/readers: factor ErrorReader from multiple sources 2020-04-19 15:18:49 +01:00
Nick Craig-Wood
36d2c46bcf local: factor PreAllocate and SetSparse to lib/file 2020-04-19 15:18:49 +01:00
Nick Craig-Wood
1f50b70919 vfs: consistently use f.Path() or f._path() in File logs to avoid deadlocks
Previously we were using f which calls f.String() which calls f.Path()
which can cause a deadlock if used carelessly.

This patch explicitly calls f.Path() or f._path() to bring attention
to the fact that there is a call to a method.
2020-04-19 15:16:43 +01:00
Nick Craig-Wood
19db0df639 vfs: stop reading Dir members from outside dir.go 2020-04-19 15:16:43 +01:00
Nick Craig-Wood
238f26cc90 vfs: stop reading File members from outside file.go
This also fixes locking for ReadFileHandle and WriteFileHandle
accessing File members
2020-04-19 15:16:43 +01:00
Nick Craig-Wood
268fcbb973 vfs: implement lock ordering between File and Dir to eliminate deadlocks
As part of this we take a copy of the directory path as calling
d.Path() violates the total locking order.

See the comment at the top of file.go for details
2020-04-19 15:16:43 +01:00
Nick Craig-Wood
1e4589db18 Add Martin Stone to contributors 2020-04-19 12:41:38 +01:00
Denis
31a1cc46b7 copyurl: add no-clobber flag - fixes #3950 2020-04-19 12:40:17 +01:00
Martin Stone
9d4b3580a5 docs: fix description of --quiet - fixes #4132 2020-04-16 18:12:17 +01:00
Nick Craig-Wood
b07bef2a6b Add Daven to contributors 2020-04-15 18:13:35 +01:00
Nick Craig-Wood
705e60d0ad Add Brandon Philips to contributors 2020-04-15 18:13:35 +01:00
Daven
4c258787b5 googlephotos: make the start year configurable - fixes #3630 2020-04-15 18:08:07 +01:00
Brandon Philips
58ea15078f Dockerfile: remove GOOS and GOARCH
GOOS and GOARCH being set like this makes it impossible to compile on
other archs.

For me GOARCH prevents compilation on my ARM machine.

For others GOOS will prevent Windows builds.

xref https://github.com/rclone/rclone/issues/4086
2020-04-15 17:09:51 +01:00
Dan Dascalescu
756d47fb50 copy: fix typo 2020-04-15 17:08:44 +01:00
Jon Fautley
53874bd8ee cmd: add --error-on-no-transfer option
This allows rclone to exit with a non-zero return code if no files are
transferred. This is useful when calling rclone as part of a workflow/script
pipeline as it allows the end user to stop processing if no files have been
transferred.

NB: Enabling this option will result in rclone exiting non-zero if there are no
transfers. Depending on how you're currently using rclone in your scripts,
this may break existing setups!
2020-04-15 17:06:40 +01:00
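A sketch of how this might be wired into a script (the remote names and the follow-on action are illustrative); because rclone exits non-zero when nothing is transferred, normal shell error handling applies:

    rclone copy source:path dest:path --error-on-no-transfer || echo "no files were transferred"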
Nick Craig-Wood
e2bf91452a s3: report errors on bucket creation (mkdir) correctly
Before this fix errors on bucket creation were being silently
swallowed.

See: https://forum.rclone.org/t/rclone-with-brand-new-aws-account-for-s3/15590
2020-04-15 13:13:13 +01:00
Nick Craig-Wood
9eb17e4ade fs: fix typo in error message 2020-04-15 12:50:26 +01:00
Nick Craig-Wood
2c4aadb588 Revert "install.sh: create ~/.config/rclone directory"
This reverts commit d694bb30e5.

If it creates a config directory then it leaves it owned by root which
is very confusing for new users.
2020-04-11 18:40:46 +01:00
Nick Craig-Wood
424554bc85 fs: generalise machinery for putting extra values when using --use-json-log 2020-04-11 18:16:21 +01:00
reddi
12a208a880 fs: expand stats output for json log 2020-04-11 18:16:21 +01:00
Michał Matczuk
6893ce0bbf s3: do not resize buf on put to memBuf
This is handled by the Pool implementation.
2020-04-11 16:35:48 +01:00
Michał Matczuk
399cf18013 s3: use single memory pool
Previously we had a map of pools for different chunk sizes.
In practice the mapping is not very useful and requires a lock.
Pools of a size other than ChunkSize can only happen when we have a huge file (over 10k * ChunkSize),
and that would require a bunch of identically sized huge files.
In such a case most likely ChunkSize should be increased.

The mapping and its lock are replaced with a single initialised pool for ChunkSize; in other cases a pool is allocated and freed on a per file basis.
2020-04-11 16:34:05 +01:00
buengese
64b5105edd jottacloud: implement cleanup 2020-04-11 16:42:25 +02:00
buengese
2c2f4a6a05 jottacloud: implement --jottacloud-trashed-only 2020-04-11 16:42:25 +02:00
Nick Craig-Wood
da41db4712 vfs,mount,cmount: report 1PB free for unknown disk sizes
Factor the logic into the VFS layer so we don't have to duplicate it
into mount and cmount.

See: https://forum.rclone.org/t/rclone-mount-question/15454/
2020-04-11 13:31:10 +01:00
Nick Craig-Wood
9f3449d944 Add Michael G to contributors 2020-04-11 13:29:13 +01:00
Michael G
ec8a884787 doc: Clarify 'key' option for host key on serve sftp
The option --key would set the sftp host key. It could be mistaken for a default-user-key. Instead, explicitly call it 'host key' to avoid confusion.
2020-04-10 15:23:58 +01:00
Nick Craig-Wood
fc663d98d1 New email for Ankur Gupta in contributors 2020-04-03 11:51:08 +01:00
Ankur Gupta
08c2cb784f filter: Added --files-from-raw flag
--files-from parses input files by ignoring comments starting with # and ;
and stripping whitespace from start and end of strings.

The --files-from-raw flag was added that reads every line from the file ignoring
comment characters and not stripping whitespace while maintaining
backwards compatibility.

Fixes #3762
2020-04-03 10:36:24 +01:00
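For example (the list file name is illustrative), a list produced by another tool can be used verbatim, with no comment or whitespace handling:

    rclone copy source:path dest:path --files-from-raw file-list.txt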
Nick Craig-Wood
3911a49256 vfs: make File lock and Dir lock not overlap to avoid deadlock
This was caused by this commit which wasn't part of 1.51.0

3c91abce74 vfs: fix race condition caused by unlocked reading of Dir.path
2020-04-02 21:14:45 +01:00
Nick Craig-Wood
2a62471e4c Add Jack Anderson to contributors 2020-03-31 18:17:36 +01:00
Jack Anderson
815ae7df45 backend/s3: add SSE-C support for AWS, Ceph, and MinIO 2020-03-31 18:16:45 +01:00
Nick Craig-Wood
ff0a299bfb drive: don't delete files with multiple parents to avoid data loss
Rclone can't safely delete files with multiple parents without
PATCHing the parents list. This can be done, but since multiple
parents are going away to be replaced by drive shortcuts we return an
error for now.

See #4013
2020-03-31 17:28:32 +01:00
Nick Craig-Wood
5fa6a28f70 dedupe: Stop dedupe deleting files with identical IDs #4013
Before this change, if there were two files with the same name and the
same ID in the same directory, dedupe would delete one of them, but
since these are actually the same file (with the same ID), both
files would be deleted, leading to data loss.

This should never actually happen, however it did happen as part of a
bug introduced in rclone which was fixed by

dfc7215bf9 drive: fix duplicate items when using --drive-shared-with-me #4018

This change checks to see if any of the duplicates have the same ID
and if they do it refuses to delete them.
2020-03-31 17:28:26 +01:00
Nick Craig-Wood
9a5178be7a test_all: optionally run Cleanup before cleaning a remote
Add this to the QingStor remote to stop it running out of buckets
2020-03-31 15:56:58 +01:00
Nick Craig-Wood
66e08e0cc8 mount: warn if --allow-non-empty used on Windows and clarify docs 2020-03-31 12:16:03 +01:00
Nick Craig-Wood
b5f78cd7b4 b2: ignore directory markers at the root also
See: https://forum.rclone.org/t/issue-with-lsf-r-files-only-first-line-is-blank/15229/
2020-03-31 11:46:17 +01:00
Nick Craig-Wood
ef99ca68aa gcs: ignore directory markers at the root also
See: https://forum.rclone.org/t/issue-with-lsf-r-files-only-first-line-is-blank/15229/
2020-03-31 11:46:10 +01:00
Nick Craig-Wood
a5c2f2c138 s3: ignore directory markers at the root also
See: https://forum.rclone.org/t/issue-with-lsf-r-files-only-first-line-is-blank/15229/
2020-03-31 11:45:52 +01:00
Nick Craig-Wood
b2c9ef23fa sync: make --track-renames tests only check rename count if expecting renames 2020-03-31 10:58:49 +01:00
Nick Craig-Wood
5f9be3dd05 sync: make --track-renames tests less fragile by using rename stat
Before this change these tests attempted to measure transfers and
checks in lieu of having a rename statistic with a very complicated
heuristic.

The change switches over to using the rename statistic which should be
100% reliable.
2020-03-30 18:30:33 +01:00
Nick Craig-Wood
b5f1bebc52 fs: add renames statistic for file and directory renames 2020-03-30 18:22:28 +01:00
Nick Craig-Wood
ad9c7ff7ed sync: Fix incorrect "nothing to transfer" message using --delete-before
Before this change the first pass of --delete-before would output
"There was nothing to transfer" and then proceed to transfer things.

This makes sure the message isn't printed in the delete phase.

See: https://forum.rclone.org/t/incorrect-debug-output/15267
2020-03-30 16:45:02 +01:00
Nick Craig-Wood
1af9fcbbfa Add Samantha McVey to contributors 2020-03-28 18:33:34 +00:00
Samantha McVey
6765303de4 docs: unmystify how crypt stores encryption password in config
Without explaining exactly how this is generated, it can be confusing
and worrying to not know how the password that encrypts your data is
stored.

This also brings peace of mind to the user that even though
the same password is obscured differently each time, all the data to
get back to the original password remains. Explaining how it works
is much better than the reader of the documentation having to trust
a blackboxy/magical mechanism.
2020-03-26 17:14:45 +00:00
Nick Craig-Wood
304ee97944 Add harry to contributors 2020-03-26 16:23:46 +00:00
harry
d91a547d59 dropbox: make error insufficient space to be fatal 2020-03-26 16:19:50 +00:00
harry
7d9ca3998e drive: Extend --drive-stop-on-upload-limit to respond to teamDriveFileLimitExceeded.
Fixes #3979
2020-03-26 16:19:50 +00:00
harry
9aa32bc269 onedrive: make error quotaLimitReached to be fatal - Fixes #4089 2020-03-26 16:19:50 +00:00
Nick Craig-Wood
d9c8c47e02 onedrive: add missing drive on config - fixes #4068
Before this change we queried /me/drives for a list of the user's
drives and asked the user to choose. Sometimes this does not return
the user's main drive for reasons unknown.

After this change we query /me/drives first then /me/drive and add
that to the list of drives if it wasn't already there.
2020-03-24 08:44:10 +00:00
Nick Craig-Wood
243a868a5b touch: add --localtime flag to make --timestamp localtime not UTC
Fixes #4067
2020-03-23 17:12:56 +00:00
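A rough sketch, assuming the timestamp layout shown is accepted by --timestamp (the file name is illustrative):

    rclone touch remote:file.txt --timestamp 2020-03-23T17:00:00 --localtime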
Nick Craig-Wood
6c351c15f8 Add Mark Spieth to contributors 2020-03-23 17:12:56 +00:00
Mark Spieth
45b63e2d45 oauth: Use custom http client so that --no-check-certificate is honored by oauth token fetch
Fixes #4085
2020-03-22 12:28:19 +00:00
Nick Craig-Wood
32df634cb6 sync: fix --track-renames-strategy modtime test on remotes which don't support modtime 2020-03-22 11:52:40 +00:00
Nick Craig-Wood
e569977c06 Add @Max-Sum as the union backend maintainer 2020-03-21 18:16:42 +00:00
Nick Craig-Wood
b49ab9f9c7 Add Max Sum to contributors 2020-03-21 18:13:05 +00:00
Max Sum
78a9e7440a union: Implement multiple writable remotes 2020-03-21 18:11:24 +00:00
Nick Craig-Wood
93f5125f51 sync: fix --track-renames-strategy tests
This commit corrects the logic for --track-renames-strategy which
broke the integration tests.

It also improves the parsing of the argument and adds a test for that.
2020-03-21 17:39:51 +00:00
Nick Craig-Wood
410e17a2ec Add Elan Ruusamäe to contributors 2020-03-21 17:39:51 +00:00
Elan Ruusamäe
23f7943645 Dropbox: Add info about required redirect url 2020-03-20 14:51:53 +00:00
Nick Craig-Wood
1108895180 Add Bernd Schoolmann to contributors 2020-03-20 14:01:02 +00:00
Bernd Schoolmann
158870bcdb fs: Add --track-renames-strategy for configurable matching criteria for --track-renames
This commit adds the `--track-renames-strategy` flag which allows the
user to choose the strategy for tracking renames when using the
`--track-renames` flag.

This can be "hash" or "modtime" or both currently.

This, when used with `--track-renames-strategy modtime` enables
support for tracking renames in encrypted remotes.

Fixes #3696
Fixes #2721
2020-03-20 13:04:56 +00:00
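For example, to track renames on an encrypted remote by modification time rather than hash (the remote names are illustrative):

    rclone sync source:path secret:path --track-renames --track-renames-strategy modtime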
Nick Craig-Wood
36717c7d98 serve restic: fix tests after restic project removed vendoring 2020-03-18 16:50:01 +00:00
Fernando
1d3987bbbd docs: Fix typo and reword contact.md 2020-03-18 14:21:58 +00:00
Nick Craig-Wood
472d4799d1 qingstor: make rclone cleanup remove pending multipart uploads older than 24h 2020-03-18 12:49:21 +00:00
Nick Craig-Wood
84caf1e158 qingstor: try harder to cancel failed multipart uploads 2020-03-18 12:49:21 +00:00
Nick Craig-Wood
72eba4dbb6 Add greatroar to contributors 2020-03-18 12:49:21 +00:00
greatroar
0f20f23651 cache: move methods used for testing into test file 2020-03-16 18:41:32 +00:00
Nick Craig-Wood
47e2d5c415 config: fsync the config file after writing #3411
This should help with data integrity
2020-03-16 18:20:16 +00:00
Nick Craig-Wood
1e9b8e043a Add fishbullet to contributors 2020-03-16 17:17:08 +00:00
fishbullet
eb0fc21533 fs: filter flags ability to read from stdin - fixes #4034 2020-03-16 17:16:50 +00:00
Lars Lehtonen
a6a2eec392 backend/b2: remove unused largeUpload.clearUploadURL() 2020-03-16 17:11:19 +00:00
Nick Craig-Wood
c227a90b52 sync: implement --order-by xxx,mixed 2020-03-16 15:50:04 +00:00
Nick Craig-Wood
1e3d899db8 sync: replace container/heap with github.com/aalpar/deheap 2020-03-16 15:50:04 +00:00
Nick Craig-Wood
84369286df vendor: add github.com/aalpar/deheap 2020-03-16 15:50:04 +00:00
Nick Craig-Wood
4c82b1f3c6 operations: fix --max-transfer test with jottacloud
Jottacloud was deduplicating the uploads, so make a different upload
each time
2020-03-16 14:05:49 +00:00
Nick Craig-Wood
f94257115f operations: skip part of the --max-transfer test under chunker
This test relies on there being 1 file copied and chunker copies several
2020-03-16 14:05:05 +00:00
Nick Craig-Wood
77e94be280 onedrive: implement --onedrive-server-side-across-configs - fixes #4058 2020-03-15 21:10:23 +00:00
Nick Craig-Wood
37d5e75a56 operations: fix --max-transfer test to have a higher threshold
Before this change backends which introduce overhead (eg crypt) were
failing to upload the first file.

This change increases the threshold to 2k to allow the first file to
go through even with some overhead but the next file to definitely
fail.
2020-03-15 11:13:27 +00:00
Nick Craig-Wood
dc06973796 s3: use rclone's low level retries instead of AWS SDK to fix listing retries
In 5470d34740 "backend/s3: use low-level-retries as the number
of SDK retries" we switched over to using the AWS SDK low level
retries instead of rclone's low level retry logic.

This had the unfortunate effect that retrying listings to correct XML
Syntax errors failed on non S3 backends such as CEPH. The AWS SDK was
also retrying the XML Syntax error request which doesn't make sense.

This change turns off the AWS SDK retries in favour of just using
rclone's retry logic.
2020-03-14 18:04:24 +00:00
Nick Craig-Wood
b03462ab04 Add Patryk Jakuszew to contributors 2020-03-13 21:45:09 +00:00
Patryk Jakuszew
d4e87a841d fs/log: add support for syslog LOCAL facilities - fixes #4061 2020-03-13 21:44:52 +00:00
Nick Craig-Wood
6d0063d685 operations: Make --max-transfer more accurate
Before this change we checked the transfer was out of range only
before the Read call. This means that we returned all the data to the
reader before declaring an error, so some backends wrote
the file even though an error was returned.

This fix checks the transfer after the Read as well, and chops the
excess characters off the read data if we are over the limit so that
we don't ever deliver all the data.

This fixes the tests introduced as part of 6f1766dd9e and #2672
on backends other than local.
2020-03-13 16:40:38 +00:00
Nick Craig-Wood
6fdd7149c1 drive: don't overwrite the description on server side copy
See: https://forum.rclone.org/t/is-there-a-way-to-sync-while-keeping-file-description-on-the-destination/14609
2020-03-12 10:39:00 +00:00
Harry
fdb07f2f89 onedrive: Added maximum chunk size limit warning in the docs
If the chunk size is more than 250M (262,144,000 bytes) then the API throws the following error:

Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big. The server does not allow messages larger than 262144000 bytes.
2020-03-10 15:14:08 +00:00
Nick Craig-Wood
a433698b00 Add Joachim Brandon LeBlanc to contributors 2020-03-10 12:00:30 +00:00
Anuar Serdaliyev
f14871caf7 accounting: Correct exitcode on Transfer Limit Exceeded flag. Fixes #3203
Before this change the exit code for transfer limit exceeded was
incorrect. This was because the `resolveExitCode` function unwraps the
error thus reading the underlying error which is not the same as the
error it was comparing to (`ErrorMaxTransferLimitReached`).

This change fixes it by splitting the error definition in two so that
when the Fatal error is unwrapped we match against
`ErrorMaxTransferLimitReached` however when we return the error we
return `ErrorMaxTransferLimitReachedFatal`.
2020-03-10 12:00:10 +00:00
Joachim Brandon LeBlanc
132ce94139 backend/s3: use the provided size parameter when allocating a new memory pool - fixes #4047 (#4049) 2020-03-09 16:56:21 +00:00
Nick Craig-Wood
a492c0fb0e local: speed up multi thread downloads by using sparse files on Windows
Before this change rclone didn't use sparse files on Windows. This
means that when you downloaded a file with a multithread download it
wrote the entire file out with zeros first, on the first write that was
not at the start of the file.

This change makes the file be sparse on Windows. Linux/macOS files
were already sparse.
2020-03-09 10:55:52 +00:00
Nick Craig-Wood
dfc7215bf9 drive: fix duplicate items when using --drive-shared-with-me #4018
Before this change shared with me items with multiple parents (ie most
of them that aren't in the root) would appear twice in the directory
listings.

This fixes the problem by doing an early exit for shared with me
items.
2020-03-07 16:46:53 +00:00
Nick Craig-Wood
38e59ebdf3 drive: fix missing files when using --fast-list and --drive-shared-with-me
This bug was introduced here by removing some necessary code detecting
shared with me items at the root with no parents.

4453fa4ba6 "drive: fix --fast-list when using appDataFolder"

This fix reverts that part of the patch.

Fixes #4018
2020-03-07 16:46:53 +00:00
Yves G
5ee24f804f webdav: report full and consistent usage with about
— allow either Used or Available to be ==0 (remote full or empty)
— compute Total if both values are received
2020-03-05 15:10:19 +00:00
Nick Craig-Wood
747edf42c1 azureblob: document container level SAS URL from root now needs container
In 8a0775ce3c which was released in v1.49.0 we inadvertently
stopped SAS URLs working from the root without a container name.

Previously to this change you could use `rclone mount azsas:` and it
would actually be equivalent to `rclone mount azsas:container`. After
this change, only `rclone mount azsas:container` will work, `rclone
mount azsas:` will have a directory in the root called "container".

After some discussion it was decided not to revert this change as the
current behaviour is more logical and in line with the similar
behaviour for the b2 backend.

Instead the documentation was updated to show exactly how container
level SAS URLs behave.

Fixes #4028
2020-03-05 14:56:36 +00:00
Nick Craig-Wood
ce23cb2093 Add evileye to contributors 2020-03-05 14:07:32 +00:00
evileye
6ff0bb825e mount: fix fail because of too long volume name - fixes #4026 2020-03-05 13:57:20 +00:00
Lars Lehtonen
fef2c6bf7a backend/s3: replace deprecated session.New() with session.NewSession() 2020-03-05 11:34:10 +00:00
Ishuah Kariuki
0c6f14c694 copy/sync: only create empty directories when they don't exist on the remote
Sync/copy now only creates empty directories when they don't exist on the remote (--create-empty-src-dirs flag) - fixes #2800
2020-03-03 16:24:22 +00:00
Nick Craig-Wood
1c800efbac Add Robert-André Mauchin to contributors 2020-03-03 12:41:08 +00:00
Robert-André Mauchin
e2e400e63c Use proper import path go.etcd.io/bbolt
Signed-off-by: Robert-André Mauchin <zebob.m@gmail.com>
2020-03-03 12:40:52 +00:00
Nick Craig-Wood
4d8d1e287b googlephotos: fix "concurrent map write" error - fixes #4003
This adds a bit of missed locking around the uploaded info to fix the
concurrent map write.

All the other accesses have locking - this one must have got missed.
2020-03-02 18:12:46 +00:00
Nick Craig-Wood
452fdbf1c1 Add Franklyn Tackitt to contributors 2020-03-02 17:31:23 +00:00
Nick Craig-Wood
51686bd1ef Add Shing Kit Chan to contributors 2020-03-02 17:31:23 +00:00
Gary Kim
38a4d50e73 rcd: Add Prometheus metrics support - fixes #3858
Signed-off-by: Gary Kim <gary@garykim.dev>
2020-03-01 09:58:34 +00:00
Gary Kim
3fd38cbe8d vendor: add github.com/prometheus/client_golang/prometheus
Signed-off-by: Gary Kim <gary@garykim.dev>
2020-03-01 09:58:34 +00:00
Franklyn Tackitt
2b3d13a841 fs: Use --cutoff-mode hard,soft,cautious instead of 3 --max-transfer-mode flags
Fixes #2672
2020-03-01 09:49:55 +00:00
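A sketch combining it with --max-transfer (the limit value and remote names are illustrative):

    rclone copy source:path dest:path --max-transfer 10G --cutoff-mode soft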
Shing Kit Chan
6f1766dd9e fs: Add support for --max-transfer-cutoff modes #2672
This also adds a max transfer cutoff check for server side copies.
2020-03-01 09:49:55 +00:00
Nick Craig-Wood
7d70eb0346 ftp: attempt to work-around pureftp sending spurious 150 messages
pureftpd has a bug where it sends messages like this

```
    150-Accepted data connection\r\n
        Response code: File status okay; about to open data connection (150)
        Response arg: Accepted data connection
    150 32768.0 kbytes to download\r\n
    150 0.014 seconds (measured here), 1665.27 Mbytes per second\r\n
```

The last `150` is treated as a new response - the previous `150` should have been `150-`.

This means that rclone sees the `150 0.014 seconds (measured here),
1665.27 Mbytes per second` as a reply to the next message and reports
it as an error.

This fix ignores that specific message when it is received in the
`Close` method. It dumps the FTP connection after as it is out of
sync.

See: #3984
Fixes #3445
2020-03-01 09:17:51 +00:00
Nick Craig-Wood
bae2644667 Add Yves G to contributors 2020-03-01 09:17:51 +00:00
Valeriy.Vyrva
f6f95822c1 doc: fix links in generated documentation 2020-03-01 09:14:37 +00:00
Yves G
b1b5e09081 vfs: make df output more consistent on a rclone mount.
When 2 values are known among vfs:{free,used,total}, compute the 3rd
2020-03-01 08:54:07 +00:00
Nick Craig-Wood
2b268f9724 build: fixup formatting after go1.14 go fmt changes 2020-02-28 16:58:33 +00:00
Nick Craig-Wood
7a5a74cecb crypt: clarify that directory_name_encryption depends on filename_encryption
See: https://forum.rclone.org/t/directory-name-encryption-is-set-to-always-false-when-choosing-filename-encryption-off/14600
2020-02-28 16:26:45 +00:00
Nick Craig-Wood
54a0c6b8ad Add valery1707 to contributors 2020-02-28 16:26:45 +00:00
valery1707
1ad23c4dc8 mailru: Describe 2FA requirements (#4015)
Fair enough
2020-02-28 16:54:09 +03:00
Dan Walters
7586a345ff dlna: cds: use modification time as date in dlna metadata
We haven't been outputting anything for this until now, which leads to my
Samsung showing an epoch/1970 date for all files.
2020-02-27 18:05:18 +01:00
Nick Craig-Wood
393b94bb70 vfs: add --vfs-read-wait and --vfs-write-wait flags
    --vfs-read-wait duration    Time to wait for in-sequence read before seeking. (default 5ms)
    --vfs-write-wait duration   Time to wait for in-sequence write before giving error. (default 1s)

See: https://forum.rclone.org/t/constantly-high-iowait-add-log/14156
2020-02-27 16:12:33 +00:00
Nick Craig-Wood
e3c11c9ca1 mount: add --async-read flag to disable asynchronous reads
See: https://forum.rclone.org/t/constantly-high-iowait-add-log/14156
2020-02-27 16:12:33 +00:00
Nick Craig-Wood
3c91abce74 vfs: fix race condition caused by unlocked reading of Dir.path 2020-02-27 15:50:41 +00:00
Nick Craig-Wood
87d856d71b cache: disable race tests until bbolt is fixed
bbolt fails with "unsafe pointer conversion" under the go1.14 race
detector.

Disable race tests until https://github.com/etcd-io/bbolt/issues/187
is fixed.
2020-02-27 08:05:28 +00:00
Nick Craig-Wood
3855c003ce build: update to use go1.14 for the build 2020-02-26 21:26:47 +00:00
Nick Craig-Wood
abb9f89f65 vendor: update all dependencies 2020-02-26 21:26:46 +00:00
Nick Craig-Wood
17b4058ee9 mount: constrain to go1.13 or above otherwise bazil.org/fuse fails to compile 2020-02-26 21:26:46 +00:00
Nick Craig-Wood
9663f9b2ab mount: ignore --allow-root flag with a warning as it has been removed upstream
For background see: https://github.com/bazil/fuse/issues/144
2020-02-26 21:11:25 +00:00
Nick Craig-Wood
d6e10dba33 docs: fix confusion over processor names in download table
See: https://forum.rclone.org/t/intel-processor-download-help/14558
2020-02-26 16:39:46 +00:00
Nick Craig-Wood
da5cbc194a ftp: fix lockup on Close failures when using concurrency limit #3984
Before this change if rclone failed to close a file download for some
reason it would leak a concurrency token. When all the tokens were
leaked then rclone would lock up.

This fix returns the concurrency token regardless of the error status.
2020-02-25 14:38:12 +00:00
Nick Craig-Wood
e8eb658ba5 ftp: fix lockup on failed upload when using concurrency limit #3984
Before this change if rclone failed to upload a file for some reason
it would leak a concurrency token. When all the tokens were leaked
then rclone would lock up.

The fix returns the concurrency token regardless of the error state.
2020-02-25 14:38:12 +00:00
Nick Craig-Wood
28f69f25a0 ftp: fix lockup when using concurrency limit on failed connections #3984
Before this change if rclone failed to make an FTP connection for some
reason it would leak a concurrency token. When all the tokens were
leaked then rclone would lock up.

The fix returns the concurrency token if creating the FTP connection
returns an error.
2020-02-25 14:38:12 +00:00
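A minimal Go sketch of the token-pool pattern the three fixes above describe, assuming a simple buffered-channel semaphore; the `withToken` helper, the pool size and the error are illustrative, not rclone's actual code. The point is that the token is returned on every exit path, including errors.

```
package main

import "errors"

// tokens is a semaphore limiting concurrent connections.
var tokens = make(chan struct{}, 4)

// withToken runs fn while holding a concurrency token.  The deferred
// receive guarantees the token is returned on every path, including
// errors - the bug was that an error path kept the token forever,
// eventually exhausting the pool and locking rclone up.
func withToken(fn func() error) error {
	tokens <- struct{}{}        // acquire
	defer func() { <-tokens }() // always release
	return fn()
}

func main() {
	_ = withToken(func() error { return errors.New("connection failed") })
}
```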
Nick Craig-Wood
07e4b9bb7f operations: fix multithread copy test to use the correct modify window
In bde0334bd8 "operations: fix setting the timestamp on Windows
for multithread copy" the test for multithread copy failed to take
into account the modify window of the remote under test.
2020-02-25 13:30:35 +00:00
Aleksandar Jankovic
708b967f15 backend/s3: fix multipart abort context
S3 couldn't abort a multipart upload when the context was cancelled
because the cancelled context prevented the abort request from being sent.
2020-02-25 12:11:32 +01:00
Dan Walters
7e2568a312 dlna: cds: don't specify childCount at all when unknown
Basically, solving #3541 with a different approach - bringing in
the upstream upnpav module, and changing ChildCount from int to a
*int to avoid childCount="0" in the XML output when that value is
simply unknown.

Current approach is leading to some recursion issues and according
to the DLNA spec it shouldn't be necessary, anyway.
2020-02-25 08:41:00 +01:00
Nick Craig-Wood
bde0334bd8 operations: fix setting the timestamp on Windows for multithread copy
Before this fix we attempted to set the modification time on the file
when it was open. This works fine on Linux but not on Windows. The
test was also incorrect testing the source file rather than the
destination file.

This closes the file before setting the modification time and fixes
the tests.

Fixes #3994
2020-02-24 17:30:09 +00:00
Aleksandar Janković
5470d34740 backend/s3: use low-level-retries as the number of SDK retries
Amazon S3 is built to handle different kinds of workloads.
In rare cases where S3 is not able to scale for whatever reason users
will face status 500 errors.
The main mechanism for handling these errors is retries.
The number of retries needed varies for each use case.

This change makes retries for the s3 backend configurable by using the
--low-level-retries option.
2020-02-24 16:43:44 +01:00
Maciej Zimnoch
ac9cb50fdb backend/s3: use memory pool for buffer allocations
Previously each multipart upload allocated its own buffers, which were
discarded after the file upload. Subsequent files couldn't reuse the
already allocated memory, which resulted in inefficient memory
management. This change introduces a backend memory pool which keeps
memory chunks that can be reused during object operations.

Fixes #3967
2020-02-24 13:32:32 +01:00
buengese
4a8b548add jottacloud: use RawURLEncoding when decoding base64 encoded login token - fixes #3945 2020-02-22 23:12:56 +01:00
Lars Lehtonen
481c8a40ea backend/azureblob: fmt nit 2020-02-20 15:50:53 +01:00
Lars Lehtonen
25ef3a281b backend/azureblob: remove unused Object.parseTimeString() 2020-02-20 15:50:53 +01:00
Lars Lehtonen
219bd97e8a backend/qingstor: lint fix 2020-02-14 18:11:01 +00:00
Lars Lehtonen
8b14cd24aa backend/qingstor: prune multiUploader.list() 2020-02-14 18:11:01 +00:00
Nick Craig-Wood
3893c14889 operations: make rcat obey --ignore-checksum 2020-02-14 12:47:11 +00:00
Nick Craig-Wood
c41fbc0f90 operations: move Rcat tests back to main test file now S3 prob is fixed 2020-02-14 12:26:52 +00:00
Nick Craig-Wood
f45425e5a9 operations: factor CommonHash out of Copy for re-use elsewhere 2020-02-14 12:12:10 +00:00
Nick Craig-Wood
bd9fd629bc test: add TestS3MinioEdge to test leading edge minio too #3934 2020-02-13 11:01:06 +00:00
Nick Craig-Wood
3b19f48929 testserver: add provider to TestS3Minio #3934
This is necessary to pass the TestIntegration/FsMkdir/FsEncoding
listing tests.
2020-02-13 11:01:06 +00:00
Outvi V
4996edc030 oauthutil: make instructions for rclone authorize more robust
It appends "--" to the rclone authorize command line before client_id,
in case that the client_id or client_secret has a prefix of "-"
(OneDrive's does), which affects the argument parsing.
2020-02-13 10:18:23 +00:00
Michał Matczuk
964f1f6a7e fs/accounting: Restore "Max number of stats groups reached" log line
Changed log level to debug.
2020-02-12 21:21:25 +00:00
Michał Matczuk
e75c1f70bb backend/s3: Added 500 as retryErrorCode
The error code 500 Internal Error indicates that Amazon S3 is unable to handle the request at that time. The error code 503 Slow Down typically indicates that the requests to the S3 bucket are very high, exceeding the request rates described in Request Rate and Performance Guidelines.

Because Amazon S3 is a distributed service, a very small percentage of 5xx errors are expected during normal use of the service. All requests that return 5xx errors from Amazon S3 can and should be retried, so we recommend that applications making requests to Amazon S3 have a fault-tolerance mechanism to recover from these errors.

https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/
2020-02-12 11:43:18 +00:00
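A hedged Go sketch of the kind of retry decision the commit above describes: 500 is treated as retryable alongside the other 5xx codes. The list and helper names here are illustrative, not rclone's actual retry table.

```
package main

import (
	"fmt"
	"net/http"
)

// retryErrorCodes lists HTTP status codes treated as retryable; the
// change above adds 500 alongside codes such as 503 (the exact list
// here is illustrative).
var retryErrorCodes = []int{
	429, // Too Many Requests
	500, // Internal Server Error - transient on S3
	503, // Slow Down
}

func shouldRetry(resp *http.Response) bool {
	if resp == nil {
		return true // network level errors are generally retryable
	}
	for _, code := range retryErrorCodes {
		if resp.StatusCode == code {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(shouldRetry(&http.Response{StatusCode: 500})) // true
}
```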
Michał Matczuk
19a4d74ee7 backend/s3: Fail fast multipart upload
When a part upload request fails, an error is returned and gCtx is cancelled.
This does not prevent other parts from being tried.
They immediately fail due to the cancelled context, but are retried by rclone anyway...

Example AWS debug output

```
-----------------------------------------------------
2020/02/11 14:12:17 DEBUG: Retrying Request s3/UploadPart, attempt 4
2020/02/11 14:12:17 DEBUG: Request s3/UploadPart Details:
---[ REQUEST POST-SIGN ]-----------------------------
PUT /backuptest-rclone/huge/file.db?partNumber=11&uploadId=190939b4-3c43-4b98-ac11-92303e3f11b0 HTTP/1.1
Host: 192.168.100.99:9000
User-Agent: aws-sdk-go/1.23.8 (go1.13.1; linux; amd64)
Content-Length: 5242880
Authorization: AWS4-HMAC-SHA256 Credential=miniouser/20200211/us-east-1/s3/aws4_request, SignedHeaders=content-length;content-md5;expect;host;x-amz-content-sha256;x-amz-date, Signature=3fc03a01f651cec09b05290459e9ceb26db9a8aa00c4e1b16e8cf5617eb81da8
Content-Md5: XzY+DlipXwbL6bvGYsXftg==
Expect: 100-Continue
X-Amz-Content-Sha256: c036cbb7553a909f8b8877d4461924307f27ecb66cff928eeeafd569c3887e29
X-Amz-Date: 20200211T131217Z
Accept-Encoding: gzip

-----------------------------------------------------
http://192.168.100.99:9000/backuptest-rclone/huge/file.db?partNumber=11&uploadId=190939b4-3c43-4b98-ac11-92303e3f11b0
2020/02/11 14:12:17 DEBUG: Response s3/UploadPart Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 500 InternalServerError
Content-Length: 0
-----------------------------------------------------
UploadPartWithContext() error InternalError: We encountered an internal error. Please try again
	status code: 500, request id: , host id:

2020/02/11 14:12:18 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
2020/02/11 14:12:20 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
2020/02/11 14:12:22 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
```

This adds a fail fast behaviour in case the context was cancelled.
2020-02-12 11:40:34 +00:00
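A minimal Go sketch of the fail-fast idea: before starting each part, check whether the shared context has already been cancelled and stop instead of issuing requests that are doomed to fail and be pointlessly retried. `uploadPart` is a placeholder for the real per-part upload call, not rclone's function.

```
package main

import (
	"context"
	"fmt"
)

// uploadParts stops as soon as the group context is cancelled, e.g.
// because another part already failed.
func uploadParts(ctx context.Context, parts int, uploadPart func(ctx context.Context, n int) error) error {
	for n := 0; n < parts; n++ {
		if err := ctx.Err(); err != nil {
			return err // fail fast - context already cancelled
		}
		if err := uploadPart(ctx, n); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // simulate a failed part cancelling the group context
	err := uploadParts(ctx, 3, func(context.Context, int) error { return nil })
	fmt.Println(err) // context canceled
}
```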
Nick Craig-Wood
55b5eded23 Add Frederick Zhang to contributors 2020-02-12 11:33:56 +00:00
Lars Lehtonen
3dbcf0af2d backend/cache: Remove Unused Functions
This removes the unused functions run.writeRemoteRandomBytes() run.writeObjectRandomBytes() run.listPath() Directory.parentRemote() and Persistent.dumpRoot().
2020-02-12 11:23:57 +00:00
Lars Lehtonen
4e1a511f88 vfs: explicitly ignore unused variables 2020-02-12 11:20:54 +00:00
Frederick Zhang
b71e1a16b1 docs: update account type during OneDrive app registration 2020-02-12 11:04:49 +00:00
Nick Craig-Wood
ec1271818f mount2: hide mount2 command for the moment 2020-02-11 14:28:13 +00:00
Nick Craig-Wood
8318020387 Implement mount2 with go-fuse
This passes the tests and works efficiently with the non sequential vfs ReadAt fix.
2020-02-11 14:28:13 +00:00
Nick Craig-Wood
c38d7be373 vendor: add github.com/hanwen/go-fuse/v2@master for mount2 2020-02-11 14:28:13 +00:00
Nick Craig-Wood
dc31212c3d Add Tim Gallant to contributors 2020-02-11 14:28:13 +00:00
Lars Lehtonen
ac60b36e77 backend/premiumizeme: prune unused functions 2020-02-11 12:17:35 +00:00
Tim Gallant
1d73f071f6 fs: improve log output when no changes are made - fixes #3454
- changes a few log messages to debug level
- adds a log message for when 0 bytes are transferred
2020-02-11 12:16:15 +00:00
Nick Craig-Wood
5c869d5bd3 cmd: make stats be printed on non-zero exit code
This also rationalises the exit sequence so that dumping
goroutines/open files happens regardless of exit error state.

See: https://forum.rclone.org/t/transfer-stats-on-non-0-exit/14211
2020-02-10 16:40:43 +00:00
Nick Craig-Wood
a54210a2e4 docs: restore missing mount --daemon docs
This was done as part of ebfeec9fb4 which unfortunately patched
the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
040d226028 docs: restore missing QingStor doc fix
This was done as part of 64fce8438b which unfortunately patched
the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
8b664c3ec5 docs: restore lost spelling fixes
These came from 3d424c6e08 which unfortunately added the
docs to the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
102a38bb95 docs: restore lost VFS poll interval docs
These came from 3d475dc0ee which unfortunately added the
docs to the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
7a54e13110 docs: restore lost VFS case insensitive docs
These came from 1c4e33d4ad which unfortunately
added the docs to the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
feee92c790 docs: restore lost mount share docs
These came from 162fdfe455 which unfortunately added the docs to
the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
de93852512 docs: restore lost auth proxy logs
These came from f2a789ea98 which unfortunately added the docs to
the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
dfb710eab7 gendocs: add autogenerated header to all docs 2020-02-10 15:29:39 +00:00
Nick Craig-Wood
25cfeb2a64 webdav: Fix X-OC-Mtime header for Transip compatibility - fixes #3126 2020-02-10 11:57:12 +00:00
Nick Craig-Wood
90377f5e65 s3: Specify that Minio supports URL encoding in listings
Thanks to @harshavardhana for pointing this out

See #3934 for background
2020-02-09 12:03:20 +00:00
Lars Lehtonen
f1d9bd5eab lib/oauthutil: replace deprecated oauth2.NoContext 2020-02-07 17:49:29 +00:00
Lars Lehtonen
4ee3c21a9d cmd/serve/ftp: replace deprecated os.SEEK_SET with io.SeekStart 2020-02-06 10:58:34 +00:00
Lars Lehtonen
fe6f4135b4 fs/rc: fix dropped error 2020-02-04 11:31:06 +00:00
Nick Craig-Wood
3dfa63b85c onedrive: fix occasional 416 errors on multipart uploads
Before this change, when uploading multipart files, onedrive would
sometimes return an unexpected 416 error and rclone would abort the
transfer.

This is usually after a 500 error which caused rclone to do a retry.

This change checks the upload position on a 416 error and works out how
much of the current chunk to skip, then retries (or skips) the current
chunk as appropriate.

If the position is before the current chunk or after the current chunk
then rclone will abort the transfer.

See: https://forum.rclone.org/t/fragment-overlap-error-with-encrypted-onedrive/14001

Fixes #3131
2020-02-01 21:15:07 +00:00
Gary Kim
ff2343475a docs: Update README.md shields for changed CI
Signed-off-by: Gary Kim <gary@garykim.dev>
2020-02-01 20:17:40 +00:00
Nick Craig-Wood
bffd7f0f14 docs: note how to use GitHub's online editor to edit rclone's docs 2020-02-01 13:44:03 +00:00
Nick Craig-Wood
7c55fafe33 Add Durval Menezes to contributors 2020-02-01 13:41:47 +00:00
Nick Craig-Wood
2e7fe06beb Add Dave Koston to contributors 2020-02-01 13:41:47 +00:00
Durval Menezes
8ff91ff31b docs: Update the "Making your own client_id" section for drive
...so it accurately describes the new "Enhanced Security" Google process to get your own Client ID and Client Secret to use with rclone.
2020-02-01 13:41:18 +00:00
Nick Craig-Wood
4d1c616e97 Start v1.51.0-DEV development 2020-02-01 12:32:21 +00:00
Nick Craig-Wood
43daecd89b Version v1.51.0 2020-02-01 10:40:01 +00:00
Dave Koston
9f99c20232 s3: Add StackPath Object Storage Support 2020-01-31 16:05:44 +00:00
Nick Craig-Wood
97ed8db75d drive: hide dangerous config from the configurator
This hides:

- "use_created_date"
- "use_shared_date"
- "size_as_quota"

from the configurator (rclone config) as they interfere with normal
operations and shouldn't be set in a backend config.

They can still be put in the config file by hand and will still work
as variables, etc.

This adds some more docs to "size_as_quota" also.

Fixes #3912
2020-01-31 10:09:33 +00:00
Nick Craig-Wood
f80d98553a dbhashsum: stop it returning UNSUPPORTED on dropbox 2020-01-29 19:49:42 +00:00
Nick Craig-Wood
b3e7a9d01c Add Benjapol Worakan to contributors 2020-01-29 13:38:21 +00:00
Benjapol Worakan
01fc063128 docs: fix spacing error under -P in docs.md 2020-01-29 13:38:03 +00:00
Gary Kim
e71edd5577 cmd: always print elapsed time to tenth place seconds in progress
Before this change, the elapsed time shown with the --progress flag
would not print ".0s", so the width of the elapsed time display varied.

This change will make it so that the line width is kept a bit more
consistent by always printing to a fixed-point.

This does change the displayed value when the elapsed time
is less than 1s, where the value used to be shown
in ms or smaller units.

Signed-off-by: Gary Kim <gary@garykim.dev>
2020-01-29 12:28:01 +00:00
Nick Craig-Wood
27a34dd183 Add Motonori IWAMURO to contributors 2020-01-29 12:16:37 +00:00
Motonori IWAMURO
7662f15939 onedrive: add support "Retry-After" header 2020-01-29 12:16:18 +00:00
Nick Craig-Wood
bfd9f32188 lsjson: add --no-mimetype flag, speed up lsf
Before this change we unconditionally fetched the MimeType. On some
backends like s3 and swift this takes an extra transaction which meant
that `lsf` on those backends was needlessly slow.

This adds an internal option so `lsf` can declare whether it wants
MimeTypes or not depending on whether the user asked for them and an
external flag `--no-mimetype` for `lsjson`.

See: https://forum.rclone.org/t/reliably-setup-incremental-updates/14006/8
2020-01-26 16:38:00 +00:00
Nick Craig-Wood
9c9cdf1712 fs: don't run tests for --max-duration on remote backends
This is a timing dependent test and to make it long enough so that it
would work with the remotes would make it too long for local tests.

The code paths are identical for local vs non-local so just run on
local.

This fixes the integration tests.
2020-01-26 09:23:03 +00:00
Nick Craig-Wood
0e5537cd25 Add unbelauscht to contributors 2020-01-26 09:23:03 +00:00
unbelauscht
151d0a274e swift: Update OVH API endpoint - Fixes #3890 2020-01-24 17:43:00 +00:00
Nick Craig-Wood
dc77ec4ba1 Add boosh to contributors 2020-01-24 13:29:36 +00:00
boosh
0d7573dd81 fs: Add --max-duration flag to control the maximum duration of a transfer session
This gives you more control over how long rclone will run for, making
it easier to script backups, e.g. via cron. Once the `--max-duration`
time limit is reached, no new transfers will be initiated, but those
already in-flight will be allowed to complete.

Fixes #985
2020-01-24 13:28:56 +00:00
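A minimal Go sketch of the behaviour described above: once the limit is reached no new transfers are started, while in-flight transfers are untouched. The helper name and the zero-means-unlimited convention are illustrative assumptions.

```
package main

import (
	"fmt"
	"time"
)

// startNewTransfers reports whether a new transfer may be initiated.
// Transfers already in flight are unaffected - only the decision to
// start a new one consults the deadline.  A maxDuration of 0 means no
// limit.
func startNewTransfers(start time.Time, maxDuration time.Duration) bool {
	if maxDuration == 0 {
		return true
	}
	return time.Since(start) < maxDuration
}

func main() {
	start := time.Now().Add(-2 * time.Hour)
	fmt.Println(startNewTransfers(start, time.Hour)) // false - limit reached
}
```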
Nick Craig-Wood
e4d2d228bd webdav: add Referer header to fix problems with WAFs - fixes #3868 2020-01-23 15:56:17 +00:00
Nick Craig-Wood
ede36b001b Add Damon Permezel to contributors 2020-01-23 15:40:23 +00:00
Nick Craig-Wood
3afb2a4798 config: use SpaceSepList for argument to --password-command
This is to enable arguments with spaces in them.
2020-01-23 15:39:15 +00:00
Nick Craig-Wood
62dbdcdbcc config: use the environment variable which goes with --password-command 2020-01-23 15:39:15 +00:00
Damon Permezel
06df133159 config: add --password-command to allow dynamic config password - fixes #3694 2020-01-23 15:39:15 +00:00
Xiaoxing Ye
0ab2693da6 doc: add desc about gzip and http dump
Fix #3872
2020-01-23 12:42:44 +00:00
Nick Craig-Wood
4b1cb1be43 Add jtagcat to contributors 2020-01-23 12:38:56 +00:00
Nick Craig-Wood
9d96680329 Add thestigma to contributors 2020-01-23 12:38:56 +00:00
jtagcat
d694bb30e5 install.sh: create ~/.config/rclone directory
This allows people who already have a config to copy it to the new server(s), without having to create the directories first.
2020-01-23 12:37:46 +00:00
buengese
9c858c3228 crypt: correctly handle trailing dot 2020-01-22 01:40:04 +01:00
thestigma
7125cb10f5 doc: Add note to filtering.md about --files-from and rclone lsf 2020-01-20 17:37:24 +00:00
Nick Craig-Wood
ba421fd069 Add landall to contributors 2020-01-20 17:30:27 +00:00
landall
77e55b8265 hashsum: Add --base64 flag - fixes #3663
This flag can be used to output QuickXorHash in the same format as MS
Graph API.
2020-01-20 17:29:58 +00:00
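A short Go illustration of the difference the flag makes: the same digest rendered as the usual lower case hex and as base64, which is the form the MS Graph API uses for QuickXorHash values. The digest bytes are a placeholder, not a real QuickXorHash.

```
package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

func main() {
	// sum stands in for a computed hash digest.
	sum := []byte{0xde, 0xad, 0xbe, 0xef}

	// Default output: lower case hex.
	fmt.Println(hex.EncodeToString(sum)) // deadbeef

	// With --base64 the digest is emitted base64 encoded.
	fmt.Println(base64.StdEncoding.EncodeToString(sum)) // 3q2+7w==
}
```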
Nick Craig-Wood
18d26e2ddb Revert "vendor: update x/crypto/ssh - to fix Windows password length issues fixes #3798"
This turned out to introduce a regression, not being able to press Enter.

See: #3888 and https://github.com/golang/go/issues/36609

This reverts commit 251cfc100e.
2020-01-20 12:46:52 +00:00
Nick Craig-Wood
f338a2d907 Add Benjamin Richter to contributors 2020-01-20 12:46:46 +00:00
Benjamin Richter
77fa8194f2 onedrive: add Sites.Read.All permission - Fixes #1770 2020-01-20 12:30:19 +00:00
Xiaoxing Ye
ccaca04a5d rcd: move webgui apart; option to disable browser
Fix #3601, #3785
2020-01-20 12:27:55 +00:00
Nick Craig-Wood
84191ac6dc vfs: fix incorrect modtime for mv into mount with --vfs-cache-modes write
When a file has its modtime set while it is open we delay setting the
modtime until the file is closed.

The file is then uploaded in Flush. In Release we check the cached
file has been uploaded by comparing modtimes and or hashes and upload
it again if it has changed.

Before this change we forgot to change the time on the cached file
when we updated the time on the object, so this meant that Release
reset the time to the wrong time and uploaded the file again on
remotes which don't support hashes (eg crypt).

The fix was to set the modtime of the cached file at the same time we
set the modtime of the remote object. This means that the files check
as identical in Release so it doesn't try to upload the file.

This means that we avoid a double upload and the modtime is correct.

See: https://forum.rclone.org/t/modification-time-with-vfs-cache/13906/8
2020-01-19 12:52:48 +00:00
Nick Craig-Wood
7cf8ea354c dedupe: add missing modes to help string 2020-01-19 11:09:45 +00:00
Nick Craig-Wood
24ef00a258 build: implement a framework for starting test servers during tests
Test servers are implemented by docker containers and run real servers
for rclone to test against.
2020-01-18 16:47:37 +00:00
Nick Craig-Wood
00d30ce0d7 opendrive: implement --opendrive-chunk-size #3707 2020-01-18 11:56:01 +00:00
Nick Craig-Wood
db39adeb3e sftp: open files for update write only to fix AWS SFTP interop - fixes #3776 2020-01-18 11:46:56 +00:00
Nick Craig-Wood
ef7ac088c0 operations: make NewOverrideObjectInfo public and factor uses 2020-01-18 11:41:33 +00:00
Nick Craig-Wood
08a3957880 cache: fix fatal error: concurrent map writes - fixes #2378 2020-01-18 11:27:00 +00:00
Nick Craig-Wood
4499b08afc drive: log an ERROR if an incomplete search is returned 2020-01-18 11:22:26 +00:00
Nick Craig-Wood
422ad38e5b copyurl: add --stdout flag to write to stdout 2020-01-18 11:15:51 +00:00
Nick Craig-Wood
0b7f959433 cmount: when setting dates discard out of range dates
It appears that sometimes Windows/WinFSP/cgofuse sends dates which are
the epoch to rclone.  These dates appear as 1601-01-01 00:00:00 plus
or minus the timezone.

These dates aren't being sent from rclone.

This patch filters dates out before 1601-01-02 so rclone does not
attempt to set them.

See: https://forum.rclone.org/t/bug-corruption-of-modtime-via-vfs-layer/12204
See: https://forum.rclone.org/t/io-error-googleapi-error-403-insufficient-permission-insufficientpermissions/11372
See: https://github.com/billziss-gh/cgofuse/issues/35
2020-01-18 11:13:35 +00:00
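A minimal Go sketch of the cut-off described above, treating anything before 1601-01-02 as a bogus epoch value to be discarded rather than set. The helper name is illustrative.

```
package main

import (
	"fmt"
	"time"
)

// earliestAcceptedTime mirrors the cut-off in the commit above: dates
// before 1601-01-02 are assumed to be bogus values sent by
// Windows/WinFSP/cgofuse and are discarded rather than set.
var earliestAcceptedTime = time.Date(1601, 1, 2, 0, 0, 0, 0, time.UTC)

func shouldSetModTime(t time.Time) bool {
	return !t.Before(earliestAcceptedTime)
}

func main() {
	bogus := time.Date(1601, 1, 1, 0, 0, 0, 0, time.UTC)
	fmt.Println(shouldSetModTime(bogus))      // false - discarded
	fmt.Println(shouldSetModTime(time.Now())) // true
}
```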
Nick Craig-Wood
4b9da601be dropbox: treat insufficient_space errors as non retriable errors
Before this change rclone would keep trying to upload files after
dropbox had signalled it was full.

This change makes the relevant error a non-retriable error.

See: https://forum.rclone.org/t/why-does-a-file-transfer-continue-when-there-is-no-available-storage/13677
2020-01-18 11:10:18 +00:00
Nick Craig-Wood
c789436580 The memory backend
This is a bucket based remote
2020-01-18 10:41:08 +00:00
Nick Craig-Wood
277d94feac fshttp: add --expect-continue-timeout default 1s - fixes #3835
Before this change the expect/continue timeout was set to
--conntimeout which was 60s by default which is too long to wait.

This was noticed when using s3 with a proxy which apparently didn't
support expect / continue properly.

Set --expect-continue-timeout 0 to disable expect/continue.
2020-01-18 09:49:22 +00:00
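For reference, the flag maps onto the standard library's `http.Transport.ExpectContinueTimeout` field; a minimal sketch, assuming nothing about how rclone wires the flag into its own transport.

```
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// ExpectContinueTimeout is how long to wait for the server's
	// "100 Continue" after sending request headers with
	// "Expect: 100-continue".  1s is the new default described above;
	// 0 sends the body immediately without waiting.
	tr := &http.Transport{
		ExpectContinueTimeout: 1 * time.Second,
	}
	client := &http.Client{Transport: tr}
	fmt.Println(client.Transport.(*http.Transport).ExpectContinueTimeout) // 1s
}
```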
Nick Craig-Wood
6757244918 drive: use multipart resumable uploads for streaming and uploads in mount
Before this change we used non multipart uploads for files of unknown
size (streaming and uploads in mount).  This is slower and less
reliable and is not recommended by Google for files smaller than 5MB.

After this change we use multipart resumable uploads for all files of
unknown length.  This will use an extra transaction so is less
efficient for files under the chunk size, however the natural
buffering in the operations.Rcat call specified by
`--streaming-upload-cutoff` will overcome this.

See: https://forum.rclone.org/t/upload-behaviour-and-speed-when-using-vfs-cache/9920/
2020-01-17 22:03:10 +00:00
Nick Craig-Wood
36157d8ae5 vendor: update t3rm1n4l/go-mega - fixes mega: couldn't login: crypto/aes: invalid key size 0
Fixes #3740
2020-01-17 21:42:32 +00:00
Nick Craig-Wood
251cfc100e vendor: update x/crypto/ssh - to fix Windows password length issues fixes #3798 2020-01-17 21:33:47 +00:00
Nick Craig-Wood
9fb10064ee vendor: run go mod tidy 2020-01-17 21:19:46 +00:00
Nick Craig-Wood
bedeaf23af sugarsync: new backend - fixes #622 2020-01-17 17:39:34 +00:00
Nick Craig-Wood
14e93bfd8a rest: call the Signer with the mutex unlocked
This enables the signer to adjust rest parameters and call rest again
if necessary.
2020-01-17 15:00:23 +00:00
Nick Craig-Wood
65071599a2 rest: don't canonicalise headers starting with *
This leaves a way of adding headers which shouldn't be canonicalised.
2020-01-17 15:00:23 +00:00
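A general net/http illustration of why canonicalisation matters for case sensitive custom headers; rclone's actual mechanism is the "*" prefix mentioned above, while the direct map assignment shown here is just the standard library escape hatch.

```
package main

import (
	"fmt"
	"net/http"
)

func main() {
	h := make(http.Header)

	// Header.Set canonicalises the key (x-oc-mtime -> X-Oc-Mtime),
	// which some servers reject for case sensitive custom headers.
	h.Set("x-oc-mtime", "1580000000")

	// Assigning to the map directly bypasses canonicalisation.
	h["X-OC-Mtime"] = []string{"1580000000"}

	for k := range h {
		fmt.Println(k) // X-Oc-Mtime and X-OC-Mtime
	}
}
```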
Nick Craig-Wood
5403e1c79a lib/encoder: remove noencode tag and update CONTRIBUTING 2020-01-17 15:00:01 +00:00
Nick Craig-Wood
5697caf20b Add Felix Hungenberg to contributors 2020-01-17 15:00:01 +00:00
Felix Hungenberg
68056f08ab docs: Use correct link for sftp wikipedia article 2020-01-17 13:20:42 +00:00
Nick Craig-Wood
81002747c5 dedupe: implement keep smallest too
This is to help deduping google docs and their exported versions if
they accidentally get uploaded to the source again.

See: https://forum.rclone.org/t/my-stupidity-or-a-bug/13861
2020-01-17 13:08:37 +00:00
Nick Craig-Wood
1bd9f522e0 build: compress the test builds
This is to use less space and to fix the caddy/rclone interaction
using too much bandwidth.

See: https://caddy.community/t/how-to-stop-caddy-sniffing-mime-types-and-return-a-default-mime-type/6819
2020-01-17 11:47:40 +00:00
Thomas Eales
3a1b41ac22 docs: fixed small typo
is -> it
2020-01-16 15:24:36 +00:00
Nick Craig-Wood
375d25f158 sync: implement --order-by flag to order transfers - fixes #1205 2020-01-16 15:24:36 +00:00
Nick Craig-Wood
0e57335396 docs: document new backend encoder parameter 2020-01-16 14:40:36 +00:00
Nick Craig-Wood
bafe7d5a73 backends: move encoding definitions from fs/encodings 2020-01-16 14:40:36 +00:00
Nick Craig-Wood
c555dc71c2 lib/encoder: move definitions here and remove uint casts 2020-01-16 14:40:36 +00:00
Nick Craig-Wood
3c620d521d backend: adjust backends to have encoding parameter
Fixes #3761
Fixes #3836
Fixes #3841
2020-01-16 14:40:36 +00:00
Nick Craig-Wood
0a5c83ece1 lib/encoder: add string rendering and parsing for encodings 2020-01-16 14:40:36 +00:00
Nick Craig-Wood
1ba5e99152 graphics: add more differently sized logos 2020-01-15 16:25:20 +00:00
Nick Craig-Wood
95c83b37fb vfs: only run TestRWCacheRename on remotes which can rename
This fixes the 1fichier integration tests.
2020-01-15 16:25:04 +00:00
Nick Craig-Wood
89634795b0 Add Paul Tinsley to contributors 2020-01-15 16:25:04 +00:00
Nick Craig-Wood
b88dec51e5 proxy: replace use of bcrypt with sha256
Unfortunately bcrypt only hashes the first 72 bytes of a given input
which meant that using it on ssh keys which are longer than 72 bytes
was incorrect.

This swaps over to using sha256 which should be adequate for the
purpose of protecting in memory passwords where the unencrypted
password is likely in memory too.
2020-01-15 16:23:57 +00:00
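A minimal sketch of the swap described above, assuming nothing beyond the standard library: sha256 covers every byte of the input (bcrypt silently truncates to 72 bytes), and a constant time compare is used for the check. The helper names are illustrative, not the proxy's actual code.

```
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"fmt"
)

// hashPassword returns a sha256 digest of the whole input, unlike
// bcrypt which only uses the first 72 bytes (a problem for long ssh
// public keys).
func hashPassword(p []byte) [sha256.Size]byte {
	return sha256.Sum256(p)
}

// checkPassword compares in constant time.
func checkPassword(stored [sha256.Size]byte, candidate []byte) bool {
	h := sha256.Sum256(candidate)
	return subtle.ConstantTimeCompare(stored[:], h[:]) == 1
}

func main() {
	stored := hashPassword([]byte("a very long ssh public key ..."))
	fmt.Println(checkPassword(stored, []byte("a very long ssh public key ..."))) // true
}
```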
Paul Tinsley
f2a789ea98 serve sftp: Add support for public key with auth proxy - fixes #3572 2020-01-15 16:23:57 +00:00
Nick Craig-Wood
63128834da vfs: fix open file renaming on drive when using --vfs-cache-mode writes
Before this change, when uploading files from the VFS cache which were
pending a rename, rclone would use the new path of the object when
specifying the destination remote.  This didn't cause a problem with
most backends as the subsequent rename did nothing, however with the
drive backend, since it updates objects, the incorrect Remote was
embedded in the object.  This caused the rename to apparently succeed
but left the object at the wrong location.

The fix for this was to make sure we upload to the path stored in the
object if available.

This problem was spotted by the new rename tests for the VFS layer.
2020-01-13 17:37:54 +00:00
Nick Craig-Wood
5f822f2660 sftp: fix "failed to parse private key file: ssh: not an encrypted key" error
This error started happening after updating golang/x/crypto which was
done as a side effect of:

3801b8109 vendor: update termbox-go to fix ncdu command on FreeBSD

This turned out to be a deliberate policy of making
ssh.ParsePrivateKeyWithPassphrase fail if the passphrase was empty.

See: https://go-review.googlesource.com/c/crypto/+/207599

This fix calls ssh.ParsePrivateKey if the passphrase is empty and
ssh.ParsePrivateKeyWithPassphrase otherwise which fixes the problem.
2020-01-13 11:05:16 +00:00
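A minimal Go sketch of the branch the fix describes, using the golang.org/x/crypto/ssh functions named above; the wrapper function is illustrative.

```
package main

import (
	"fmt"

	"golang.org/x/crypto/ssh"
)

// signerFromKey only uses ParsePrivateKeyWithPassphrase when a
// passphrase was actually supplied, since an empty passphrase now
// makes that call fail.
func signerFromKey(pemBytes []byte, passphrase string) (ssh.Signer, error) {
	if passphrase == "" {
		return ssh.ParsePrivateKey(pemBytes)
	}
	return ssh.ParsePrivateKeyWithPassphrase(pemBytes, []byte(passphrase))
}

func main() {
	_, err := signerFromKey([]byte("not a real key"), "")
	fmt.Println(err)
}
```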
Nick Craig-Wood
b81601baff test_all: ignore TestIntegration/FsMkdir/FsPutFiles/FsPutStream/0 on wasabi
This has been reported to Wasabi and they've confirmed as a known
issue that multipart uploads can't be 0 sized even though that is
incompatible with AWS S3.
2020-01-13 09:47:11 +00:00
Nick Craig-Wood
58064bdd2b drive: add --drive-stop-on-upload-limit flag to stop syncs when upload limit reached
If the --drive-stop-on-upload-limit flag is in effect this checks the
error string from Google Drive to see if it is the error you get when
you've breached your 750GB a day limit.

If so then it turns this error into a Fatal error which should stop
the sync.

Fixes #3857
2020-01-12 15:47:31 +00:00
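A hedged sketch of the idea, assuming rclone's fserrors.FatalError/IsFatalError helpers behave as their names suggest; the matched substring and the helper name are illustrative, the real check inspects the specific error string Google Drive returns when the 750GB/day limit is breached.

```
package main

import (
	"errors"
	"fmt"
	"strings"

	"github.com/rclone/rclone/fs/fserrors"
)

// maybeFatal turns the daily upload limit error into a fatal error so
// the sync stops instead of retrying.
func maybeFatal(err error, stopOnUploadLimit bool) error {
	if err == nil || !stopOnUploadLimit {
		return err
	}
	if strings.Contains(err.Error(), "userRateLimitExceeded") { // illustrative match
		return fserrors.FatalError(err)
	}
	return err
}

func main() {
	err := maybeFatal(errors.New("googleapi: Error 403: userRateLimitExceeded"), true)
	fmt.Println(fserrors.IsFatalError(err)) // true
}
```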
Nick Craig-Wood
ba01d5e8ab Add Thomas Eales to contributors 2020-01-12 14:23:57 +00:00
Nick Craig-Wood
e510d460c2 Add Kuang-che Wu to contributors 2020-01-12 14:23:57 +00:00
Thomas Eales
42de601fa6 crypt: reorder the filename encryption options
This brings the default `standard` to the top of the list to replace `off`
2020-01-12 14:23:35 +00:00
Kuang-che Wu
3801b8109e vendor: update termbox-go to fix ncdu command on FreeBSD
see 58d4fcbce2
2020-01-12 14:20:12 +00:00
Nick Craig-Wood
e0d41da3e3 vendor: add uncommitted files from previous change 2020-01-11 17:56:14 +00:00
Nick Craig-Wood
92662baceb vendor: update github.com/t3rm1n4l/go-mega to fix mega "illegal base64 data at input byte 22"
Thanks to Ajaja for figuring this out.

See: https://forum.rclone.org/t/problem-to-login-with-mega/12276
2020-01-11 16:47:06 +00:00
Nick Craig-Wood
87c844bce1 onedrive: clarify docs around making your own client ID
See: https://forum.rclone.org/t/onedrive-token-configuration-failure/13634
2020-01-11 16:30:13 +00:00
Nick Craig-Wood
ae340cf7d9 log: factor flags into logflags package - fixes #3792 2020-01-09 13:25:37 +00:00
Nick Craig-Wood
11f501bd44 operations: move interface assertion to tests to remove pflag dependency #3792 2020-01-09 13:25:37 +00:00
Nick Craig-Wood
a4bc4daf30 mounttest: fix unreliable tests on Windows CI
The failure is this which is not reproducible locally, only on the CI
servers.

    --- FAIL: TestMount/CacheMode=minimal/TestWriteFileOverwrite (1.01s)
        fs.go:351:
            Error Trace:    fs.go:351
                            write.go:65
            Error:          Received unexpected error:
                            open E:testwrite: The request could not be performed because of an I/O device error.
            Test:           TestMount/CacheMode=minimal/TestWriteFileOverwrite

The corresponding ERROR from the log is this:

    ERROR : IO error: truncate C:\Users\runneradmin\AppData\Local\rclone\vfs\local\C\Users\RUNNER~1\AppData\Local\Temp\rclone298719627\testwrite: Access is denied.

Instead of using ioutil.WriteFile this fix uses an equivalent based on
rclone's lib/file which doesn't set the exclusive flag on
Windows. This allows files that are open to be deleted.  It also
deletes existing files if an error is received and retries.
2020-01-09 11:11:49 +00:00
Nick Craig-Wood
51dca8c8d4 bin: update windows test paths for new setup 2020-01-09 10:55:18 +00:00
Nick Craig-Wood
6b3021209a Add Ole Schütt to contributors 2020-01-09 10:36:44 +00:00
Ole Schütt
f263828edc operations: write debug message when hashes could not be checked 2020-01-09 10:35:31 +00:00
Maciej Zimnoch
b7019a91c2 fs/operations: Clear accounting before low level retry
Statistics of transfers which were interrupted were not cleared before
the retry iteration. These transfers showed completion of over 100
percent.

This change clears transfer accounting before the next retry iteration
is done in order to keep the numbers on track.

Fixes #3861
2020-01-09 10:32:49 +00:00
Alex Chen
27c3481ea4 build: fix CI for forks and related docs (#3847) 2020-01-09 01:27:44 +08:00
Nick Craig-Wood
706da80d88 mount: don't build on go1.10 as bazil/fuse no longer supports it 2020-01-08 08:44:02 +00:00
Nick Craig-Wood
b6e86b2c7f s3: fix missing x-amz-meta-md5chksum headers for multipart uploads
This reverts "s3: fix DisableChecksum condition" which introduced the
problem.

This reverts commit c05bb63f96.

The code was correct as it stands - the comment was incorrect and this
commit updates it.

See: https://forum.rclone.org/t/s3-upload-md5-check-sum/13706
2020-01-07 19:39:39 +00:00
Nick Craig-Wood
4453fa4ba6 drive: fix --fast-list when using appDataFolder
In listings if the ID `appDataFolder` is used to list a directory the
parents of the items returned have the actual ID instead of the alias
`appDataFolder`.  This confused the ListR routine into ignoring all
these items.

This change makes the listing routine accept all parent IDs returned
if there was only one ID in the query.  This fixes the `appDataFolder`
problem. This means we are relying on Google Drive to only return the
items we asked for which is probably OK.

Fixes #3851
2020-01-05 19:57:13 +00:00
Nick Craig-Wood
540fd3f173 local: fix update of hidden files on Windows - fixes #3839 2020-01-05 19:52:22 +00:00
Nick Craig-Wood
1af4bb0c84 Add Tennix to contributors 2020-01-05 19:50:00 +00:00
Tennix
15d19131bd s3: use aws web identity role provider 2020-01-05 19:49:31 +00:00
Nick Craig-Wood
9d993e584b s3: force path style bucket access to off for AWS deprecation
AWS are deprecating path style bucket access so rclone should stop
using it by default for this provider.

This change shouldn't break any workflows as all AWS endpoints support
virtual hosted style lookups of buckets.  It may even improve
performance.

See: https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/
2020-01-05 17:53:45 +00:00
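For context, a minimal aws-sdk-go sketch of the config field involved, showing path style being forced off so the SDK uses virtual hosted style URLs (https://bucket.s3.amazonaws.com/key rather than https://s3.amazonaws.com/bucket/key); this is an illustration, not rclone's configuration code.

```
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	sess, err := session.NewSession(&aws.Config{
		Region:           aws.String("us-east-1"),
		S3ForcePathStyle: aws.Bool(false), // virtual hosted style lookups
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(*sess.Config.S3ForcePathStyle) // false
}
```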
Nick Craig-Wood
21b17b14a9 vendor: update bazil.org/fuse to fix FreeBSD 12.1 - fixes #3697 2020-01-05 16:35:30 +00:00
Nick Craig-Wood
1b89b38a4c vfs: skip rename tests on remotes which can't rename 2020-01-05 12:34:47 +00:00
Nick Craig-Wood
7242c7ce95 s3: fix multipart upload uploading 0 length files
This regression was introduced by the recent re-write of the s3
multipart upload code.
2020-01-05 12:32:55 +00:00
Nick Craig-Wood
ad2bb86d8c fstests: add test for 0 sized streaming upload 2020-01-05 12:32:55 +00:00
Michał Matczuk
eb10ac346f fs/accounting: Added StatsInfo locking in statsGroups sum function (#3844)
Without the fix we can have a race, example:

```
Write at 0x00c000432039 by goroutine 187:
  github.com/rclone/rclone/fs/accounting.(*StatsInfo).Error()
      fs/accounting/stats.go:495 +0x3f1
  github.com/rclone/rclone/fs/accounting.(*StatsInfo).Error-fm()
      fs/accounting/stats.go:477 +0x55
  github.com/rclone/rclone/fs/walk.listRwalk.func1()
      fs/walk/walk.go:162 +0xd2
  github.com/rclone/rclone/fs/walk.walk.func2()
      fs/walk/walk.go:402 +0x30f

Previous read at 0x00c000432039 by goroutine 184:
  github.com/rclone/rclone/fs/accounting.(*statsGroups).sum()
      fs/accounting/stats_groups.go:351 +0xcae
  github.com/rclone/rclone/fs/accounting.rcTransferredStats()
      fs/accounting/stats_groups.go:132 +0x1f4
```

Fixes #3844
2020-01-04 16:45:24 +00:00
Nick Craig-Wood
7e6fac8b1e s3: re-implement multipart upload to fix memory issues
There have been quite a few reports of problems with the multipart
uploader using too much memory and not retrying possible errors.

Before this change the multipart uploader used the s3manager
abstraction in the AWS SDK.  There are numerous bug reports of this
using up too much memory.

This change re-implements a much simplified version of the s3manager
code specialized for rclone's purposes.

This should use much less memory and retry chunks properly.

See: https://forum.rclone.org/t/memory-usage-s3-alike-to-glacier-without-big-directories/13563
See: https://forum.rclone.org/t/copy-from-local-to-s3-has-high-memory-usage/13405
See: https://forum.rclone.org/t/big-file-upload-to-s3-fails/13575
2020-01-03 22:19:28 +00:00
Nick Craig-Wood
2e0774f3cf Add Thomas Kriechbaumer to contributors 2020-01-03 22:18:23 +00:00
Aleksandar Jankovic
b9fb313f71 fs/accounting: add option to delete stats
Removed PruneAllTransfers because it had no use. startedTransfers are
set to nil in ResetCounters.
2020-01-03 17:44:05 +00:00
Aleksandar Jankovic
0e64df4b4c fs/accounting: consistency cleanup 2020-01-03 17:44:05 +00:00
buengese
69ac04fec9 docs: add GetSky to list of supported providers 2020-01-02 15:37:33 +01:00
buengese
8a2d1dbe24 jottacloud: add support for whitelabel versions 2020-01-02 15:37:33 +01:00
Thomas Kriechbaumer
584e705c0c s3: introduce list_chunk option for bucket listing
The S3 ListObject API returns paginated bucket listings, with
"MaxKeys" items for each GET call.

The default value is 1000 entries, but for buckets with millions of
objects it might make sense to request more elements per request, if
the backend supports it. This commit adds a "list_chunk" option for
the user to specify a lower or higher value.

This commit does not add safeguards around this value - if a user
decides to request too large a list, it might result in connection
timeouts (on the server or client).

In AWS S3, there is a fixed limit of 1000, some other services might
have one too.  In Ceph, this can be configured in RadosGW.
2020-01-02 12:15:01 +00:00
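A small aws-sdk-go sketch of the field the option maps onto: MaxKeys on the ListObjects request controls how many keys the server returns per page (AWS caps this at 1000, other S3 compatible services may allow more). The bucket name is a placeholder.

```
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	input := &s3.ListObjectsInput{
		Bucket:  aws.String("my-bucket"), // placeholder bucket name
		MaxKeys: aws.Int64(1000),         // what list_chunk configures
	}
	fmt.Println(*input.MaxKeys)
}
```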
Nick Craig-Wood
32a3ba9e3f Add Outvi V to contributors 2020-01-02 11:52:43 +00:00
Outvi V
db1c7f9ca8 s3: Add new region Asia Patific (Hong Kong) 2020-01-02 11:10:48 +00:00
Nick Craig-Wood
207474abab sync: add --no-check-dest flag - fixes #3616 2019-12-29 16:47:57 +00:00
Nick Craig-Wood
f754d897e5 Add Wei He to contributors 2019-12-28 13:29:08 +00:00
Wei He
4daecd3158 docs: fix in-page anchor navigation positioning 2019-12-22 23:33:12 +00:00
Cnly
59c75ba442 accounting: fix error count shown as checks - fixes #3814 2019-12-23 03:03:19 +08:00
Nick Craig-Wood
0ecb8bc2f9 s3: fix url decoding of NextMarker - fixes #3799
Before this patch we were failing to URL decode the NextMarker when
url encoding was used for the listing.

The result of this was duplicated listings entries for directories
with >1000 entries where the NextMarker was a file containing a space.
2019-12-12 13:33:30 +00:00
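A minimal Go illustration of the decoding step: when the listing is requested with URL encoding, keys in the response, including NextMarker, come back URL encoded and must be decoded before being used as the next start point. The marker value is made up.

```
package main

import (
	"fmt"
	"net/url"
)

func main() {
	nextMarker := "My+Folder/file+name.txt" // as returned URL encoded
	decoded, err := url.QueryUnescape(nextMarker)
	if err != nil {
		panic(err)
	}
	fmt.Println(decoded) // My Folder/file name.txt
}
```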
Nick Craig-Wood
1ab4985046 vfs: when renaming files in the cache, rename the cache item in memory too 2019-12-12 13:31:10 +00:00
Nick Craig-Wood
6e683b4359 vfs: fix rename of open files when using the VFS cache
Before this change, renaming an open file when using the VFS cache was
delayed until the file was closed.  This meant that the file was not
readable after a rename even though it was in the cache.

After this change we rename the local cache file and the in memory
cache, delaying only the rename of the file in object storage.

See: https://forum.rclone.org/t/xen-orchestra-ebadf-bad-file-descriptor-write/13104
2019-12-12 13:31:10 +00:00
Nick Craig-Wood
241921c786 vfs: don't cache the path in RW file objects to fix renaming 2019-12-12 13:31:10 +00:00
buengese
a186284b23 asyncreader: fix EOF error 2019-12-10 12:12:29 +00:00
Ivan Andreev
41ba1bba2b chunker: reduce length of temporary suffix 2019-12-09 16:56:32 +00:00
Nick Craig-Wood
50bb9b7bdd check: fix --one-way recursing more directories than it needs to
Before this change rclone traversed all directories in the destination.

After this change rclone doesn't traverse directories in the
destination that don't exist in the source if the `--one-way` flag is
set.

See: https://forum.rclone.org/t/check-with-one-way-flag-should-not-traverses-all-destination-directories/13263
2019-12-07 13:26:55 +00:00
Nick Craig-Wood
4537d9b5cf operations: make reopen code error on NoLowLevelRetry errors - fixes #3777 2019-12-06 10:54:03 +00:00
Nick Craig-Wood
684dbe0e9d local: make source file being updated errors be NoLowLevelRetry errors #3777 2019-12-06 10:54:03 +00:00
Nick Craig-Wood
572c1079a5 fserrors: Make a new NoLowLevelRetry error and don't retry them #3777 2019-12-06 10:54:03 +00:00
Nick Craig-Wood
cb97239a60 build: pin actions/checkout to v1 to fix build failure 2019-12-04 13:48:03 +00:00
Nick Craig-Wood
e48145f959 Add David Cole to contributors 2019-12-04 12:14:30 +00:00
Nick Craig-Wood
2150cf7362 Add email for Aleksandar Janković 2019-12-04 12:14:21 +00:00
David Cole
707e51eac7 docs: correct typo in gui docs 2019-12-04 12:08:52 +00:00
Nick Craig-Wood
0d10640aaa s3: add --s3-copy-cutoff for size to switch to multipart copy
Before this change we used the same (relatively low limits) for server
side copy as we did for multipart uploads.  It doesn't make sense to
use the same limits since no data is being downloaded or uploaded for
a server side copy.

This change introduces a new parameter --s3-copy-cutoff to control
when the switch from single to multipart server size copy happens and
defaults it to the maximum 5GB.

This makes server side copies much more efficient.

It also fixes the erroneous error when trying to set the modification
time of a file bigger than 5GB.

See #3778
2019-12-03 10:37:55 +00:00
Nick Craig-Wood
f4746f5064 s3: fix multipart copy - fixes #3778
Before this change multipart copies were giving the error

    Range specified is not valid for source object of size

This was due to an off by one error in the range source introduced in
7b1274e29a "s3: support for multipart copy"
2019-12-03 10:37:55 +00:00
Aleksandar Janković
c05bb63f96 s3: fix DisableChecksum condition 2019-12-02 15:15:59 +00:00
Danil Semelenov
e2773b3b4e Fix completion with an encrypted config
Closes #3767.
2019-11-29 14:48:12 +00:00
Nick Craig-Wood
d3b0bed091 drive: make sure invalid auth for teamdrives always reports an error
For some reason Google doesn't return an error if you use a service
account with the wrong permissions to list a team drive.  This gives
the user the false impression that the drive is empty.

This change:
- calls teamdrives get on rclone about
- calls teamdrives get on a listing of the root which returned no entries

These will both detect a team drive which has the incorrect auth and
workaround the issue.

Fixes: #3763
See: https://forum.rclone.org/t/rclone-missing-error-code-when-sas-have-no-permission/13086
See: https://forum.rclone.org/t/need-need-bug-verification-rclone-about-doesnt-work-on-teamdrives-empty-output/13105
2019-11-28 10:51:17 +00:00
Nick Craig-Wood
33c80bbb96 jottacloud: add URL to generate Login Token to config wizard 2019-11-28 10:03:48 +00:00
Nick Craig-Wood
705e4694ed webdav: fix case of "Bearer" in Authorization: header to agree with RFC
Before this change rclone used "Authorization: BEARER token".  However
according to the RFC this should be "Bearer"

https://tools.ietf.org/html/rfc6750#section-2.1

This changes it to "Authorization: Bearer token"

Fixes #3751 and interop with Salesforce Webdav server
2019-11-27 12:04:31 +00:00
Nick Craig-Wood
4fbc90d115 webdav: make nextcloud only upload SHA1 checksums
When using nextcloud, before this change we only uploaded one of SHA1
or MD5 checksum in the OC-Checksum header with preference to SHA1 if
both were set.

This makes the MD5 checksums read as empty string which makes syncing
with checksums less useful than they should be as all the MD5
checksums are blank.

This change makes it so that we only upload the SHA1 to nextcloud.

The behaviour of owncloud is unchanged as owncloud uses the checksum
as an upload integrity check only and calculates its own checksums.

See: https://forum.rclone.org/t/how-to-specify-hash-method-to-checksum/13055
2019-11-27 11:58:55 +00:00
Nick Craig-Wood
ed39adc65b Add Fernando to contributors 2019-11-27 11:40:44 +00:00
Fernando
162fdfe455 mount: document remotes as network shares on Windows
Provided instructions for mounting remotes as network shares/network drives in a Windows environment
2019-11-27 11:40:24 +00:00
buengese
8f33c932f2 jottacloud: update docs for new auth method 2019-11-26 13:49:49 +00:00
buengese
4195bd7880 jottacloud: use new auth method used by official client 2019-11-26 13:49:49 +00:00
Marco Molteni
d72f3e31c0 docs/install: explain how to workaround macOS Gatekeeper requiring notarization
Fix #3689
2019-11-26 12:33:30 +00:00
Garry McNulty
11f44cff50 drive: add --drive-use-shared-date to use date file was shared instead of modified date - fixes #3624 2019-11-26 12:19:44 +00:00
SezalAgrawal
c3751e9a50 operations: fix dedupe continuing on errors like insufficientFilePermisson - fixes #3470
* Fix dedupe on merge continuing on errors like insufficientFilePermisson
* Sorted the directories to remove recursion logic
2019-11-26 10:58:52 +00:00
Nick Craig-Wood
420ae905b5 vfs: make sure existing files opened for write show correct size
Before this change if an existing file was opened for write without
truncate its size would show as 0 rather than the full size of the
file.
2019-11-25 11:31:44 +00:00
Nick Craig-Wood
a7d65bd519 sftp: add --sftp-skip-links to skip symlinks and non regular files - fixes #3716
This also corrects the symlink detection logic to only check symlink
files.  Prior to this it was checking all directories too, which made
it do more stat calls than necessary.
2019-11-24 16:10:53 +00:00
Nick Craig-Wood
1db31d7149 swift: fix parsing of X-Object-Manifest
Before this change we forgot to URL decode the X-Object-Manifest in a dynamic large object.

This problem was introduced by 2fe8285f89 "swift: reserve
segments of dynamic large object when delete objects in container what
was enabled versioning."
2019-11-21 13:25:02 +00:00
Nick Craig-Wood
4641bd5116 Add anuar45 to contributors 2019-11-21 11:16:04 +00:00
anuar45
7e602dbf39 stats: show deletes in stats and hide zero stats
This shows deletes in the stats.  It also doesn't show zero stats
in order not to make the stats block too long.
2019-11-21 11:15:47 +00:00
Nick Craig-Wood
e14d968f8d Start v1.50.2-DEV development 2019-11-19 16:51:32 +00:00
Nick Craig-Wood
e0eeeaafcd accounting: don't show entries in both transferring and checking
See: https://forum.rclone.org/t/showing-progress-checking/12958
2019-11-19 13:22:33 +00:00
Nick Craig-Wood
d46f8d0ae5 accounting: fix memory leak on retries operations
Before this change if an operation was retried on operations.Copy and
the operation was large enough to use an async buffer then an async
buffer was leaked on the retry.  This leaked memory, a file handle and
a go routine.

After this change if Account.WithBuffer is called and there is already
a buffer, then a new one won't be allocated.
2019-11-19 12:11:59 +00:00
Nick Craig-Wood
1e6278556c Add Maciej Zimnoch to contributors 2019-11-18 16:28:19 +00:00
Nick Craig-Wood
303f4ee152 Add Ankur Gupta to contributors 2019-11-18 16:28:19 +00:00
Nguyễn Hữu Luân
2fe8285f89 swift: reserve segments of dynamic large object when delete objects in container what was enabled versioning.
Also add code to handle moving an object when the object is contained in a container that has versioning enabled with "X-History-Location".
2019-11-18 16:26:10 +00:00
Maciej Zimnoch
f5443ac939 accounting: clear finished transfer in stats-reset
In order to reduce memory usage `stats-reset` also
clears finished transfers.

Fixes #3734
2019-11-18 14:25:32 +00:00
Maciej Zimnoch
7cf056b2c2 accounting: allow MaxCompletedTransfers to be configurable
rclone library users might be interested in changing the default value
or even disabling it. With the current version that is impossible, which
leads to races when the number of uploaded objects exceeds the default limit.

Fixes #3732
2019-11-18 14:25:32 +00:00
Ankur Gupta
75a6c49f87 Fix error counter - fixes #3650
For a few commands, rclone counted an error multiple times. This was fixed by
creating a new error type which keeps a flag to remember if the error has
already been counted or not. The CountError function now wraps the original
error with the new error type and returns it.
2019-11-18 14:13:02 +00:00
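A sketch of the idea described above: wrap an error once it has been added to the statistics so that passing it through several layers cannot increment the counter again. The type and function names are illustrative, not rclone's exact ones.

```
package main

import (
	"errors"
	"fmt"
)

// countedError marks an error that has already been counted.
type countedError struct {
	error
}

func countError(stats *int, err error) error {
	if err == nil {
		return nil
	}
	var ce countedError
	if errors.As(err, &ce) {
		return err // already counted - don't count it twice
	}
	*stats++
	return countedError{err}
}

func main() {
	errCount := 0
	err := countError(&errCount, errors.New("boom"))
	err = countError(&errCount, err) // second call is a no-op
	fmt.Println(errCount, err)       // 1 boom
}
```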
Nick Craig-Wood
19229b1215 drive: fix --drive-root-folder-id with team/shared drives
Before this change rclone used the team_drive ID as the root if set
even if the root_folder_id was set too.

This change uses the root_folder_id in preference over the team_drive
which restores the functionality.

This problem was introduced by ba7c2ac443

Fixes #3742
2019-11-16 18:38:21 +00:00
Nick Craig-Wood
b5bb4c2a21 vfs: fix tests not to upload a 0 length file
Some remotes can't upload 0 length files, so this fixes the
TestCacheRename test so that it writes something to the file.
2019-11-15 09:26:40 +00:00
Nick Craig-Wood
479c803fd9 vendor: update all dependencies 2019-11-14 21:51:34 +00:00
Nick Craig-Wood
3dcf1e61cf cache: follow move of upstream library github.com/coreos/bbolt github.com/etcd-io/bbolt 2019-11-14 21:51:34 +00:00
Nick Craig-Wood
3da1cbfc81 Add Marco Molteni to contributors 2019-11-14 21:51:34 +00:00
Marco Molteni
0c9a8cf776 doc: add Scaleway to the S3 table of contents
Hello, documentation for Scaleway was already there, but the TOC was missing it.
2019-11-14 21:49:43 +00:00
Nick Craig-Wood
f3871377c3 Add Sebastian Brandt to contributors 2019-11-14 12:54:42 +00:00
Nick Craig-Wood
cc9a7dc073 Add Barry Muldrey to contributors 2019-11-14 12:54:42 +00:00
Nick Craig-Wood
b61dd809ee Add new email for Anagh Kumar Baranwal 2019-11-14 12:54:38 +00:00
Sebastian Brandt
f158a398f3 sftp: Retry Creation of Connection - fixes #3656
Removes the existing rate limiter because it is implicitly included in
the pacer.
2019-11-14 12:50:01 +00:00
jaKa
acefa5c40d koofr: use rclone HTTP client. 2019-11-14 11:36:44 +00:00
Barry Muldrey
2784c3234b fs/config/configflags: fix --compare-dest and --copy-dest help strings
from rsync manual:

--compare-dest=DIR
    This option instructs rsync to use DIR on the destination machine as an
    additional hierarchy to compare destination files against doing transfers
    (if the files are missing in the destination directory). If a file is found
    in DIR that is identical to the sender's file, the file will NOT be
    transferred to the destination directory. This is useful for creating
    a sparse backup of just files that have changed from an earlier backup.

--copy-dest=DIR
    This option behaves like --compare-dest, but rsync will also copy unchanged
    files found in DIR to the destination directory using a local copy.
    This is useful for doing transfers to a new destination while leaving
    existing files intact, and then doing a flash-cutover when all files
    have been successfully transferred.
2019-11-12 13:37:58 +00:00
Nick Craig-Wood
c21a4fee58 mount,cmount: make sure we call unmount when exiting 2019-11-11 22:08:52 +00:00
Nick Craig-Wood
358f5a8084 vfs: fix edge cases when reading ModTime from file
This fixes the unreliable test TestMount/CacheMode=full/TestFileModTime
2019-11-11 16:20:28 +00:00
Nick Craig-Wood
9115752679 proxy: reduce the internal bcrypt strength to fix race tests
Before this change the race tests were taking too long.  The bcrypt
function went from about 20ms to 1s under the race detector and this
is called for every transaction on webdav.

This change reduces the bcrypt strength so it takes 1ms non race so
the race tests pass and still has adequate security for in memory only
storage.
2019-11-11 16:20:28 +00:00
Nick Craig-Wood
51efb349ac vfs: revise locking in file and dir to fix race conditions 2019-11-11 16:20:27 +00:00
Nick Craig-Wood
e0d9314059 mounttest: fix occasionally failing test TestRenameOpenHandle 2019-11-11 16:20:27 +00:00
Nick Craig-Wood
21c6babdbb mount: enable async reads for a 20% speedup
Now that the vfs can cope with out of order reads we can enable the
async read feature for an increase in throughput on the local disk of
about 20%.
2019-11-11 16:20:27 +00:00
Nick Craig-Wood
5beeac7959 vfs: make ReadAt for non cached files work better with non-sequential reads
This makes ReadAt for non cached files wait a short time (up to 5mS)
if it gets an out of order read (which would normally cause a seek
and which takes a long time) to see if the gap will be filled with an
in order read.

This makes mount2 based on go-fuse work more efficiently and enables
async reading in normal mount.

A similar change was done for WriteAt in af030f74f5
2019-11-11 16:20:27 +00:00
Nick Craig-Wood
be5392f448 vfs: only calculate one hash for reads
This speeds up mounting on the local backend enormously.
2019-11-11 16:20:27 +00:00
Nick Craig-Wood
c00dcb7e67 chunkedreader: disable hash calculation for first segment
This will produce a slight speedup for small files.
2019-11-11 16:20:27 +00:00
Nick Craig-Wood
6150ae89d6 vfs: add a newly created file straight into the directory 2019-11-11 15:20:09 +00:00
Nick Craig-Wood
1e423d21e1 drive: fix listing of the root directory with drive.files scope
We attempt to find the ID of the root folder by doing a GET on the
folder ID "root". With scope "drive.files" this fails with a 404
message.

After this change if we get the 404 message, we just carry on using
"root" as the root folder ID and we cache that for future lookups.

This means that changenotify messages will not work correctly in the
root folder but otherwise has minor consequences.

See: https://forum.rclone.org/t/fresh-raspberry-pi-build-google-drive-404-error-failed-to-ls-googleapi-error-404-file-not-found/12791
2019-11-11 09:07:34 +00:00
Brett Dutro
53d55ae760 Add test for cache renaming functionality 2019-11-10 11:58:46 +00:00
Anagh Kumar Baranwal
5928704e1b On rename, rename in cache too if the file exists
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2019-11-10 11:58:46 +00:00
buengese
5ddfa9f7f6 config: SetValueAndSave ignore error if config section does not exist yet 2019-11-09 16:44:08 +00:00
Nick Craig-Wood
9b5308144f s3: Reduce memory usage streaming files by reducing max stream upload size
Before this change rclone would allow the user to stream (eg with
rclone mount, rclone rcat or uploading google photos or docs) 5TB
files.  This meant that rclone allocated 4 * 525 MB buffers per
transfer which is way too much memory by default.

This change makes rclone use the configured chunk size for streamed
uploads.  This is 5MB by default, which means that rclone can stream
upload files of up to 48GB while staying below the 10,000 chunks
limit.

This can be increased with --s3-chunk-size if necessary.

If rclone detects that a file is being streamed to s3 it will make a
single NOTICE level log stating the limitation.

This fixes the enormous memory usage.

Fixes #3568
See: https://forum.rclone.org/t/how-much-memory-does-rclone-need/12743
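
As a hedged illustration of the behaviour described above (remote, bucket and file names are placeholders, not from the commit): streamed uploads now use the configured chunk size, so raising it raises the maximum streamable file size.
```
# Stream stdin to S3; with the default 5MB chunk size this tops out around 48GB.
cat big.bin | rclone rcat remote:bucket/big.bin
# Raise the chunk size if the streamed file may be larger than that.
cat big.bin | rclone rcat --s3-chunk-size 64M remote:bucket/big.bin
```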
2019-11-09 15:55:19 +00:00
Aleksandar Jankovic
4b20afa94a backend/s3: fix ExpiryWindow value
ExpiryWindow accepts duration but it was set to value 3.
This changes it to 3 * time.Minute since default is 5 min.
2019-11-05 13:55:55 +00:00
Nick Craig-Wood
049ff1f269 config: check a remote exists when creating a new one 2019-11-05 12:39:33 +00:00
Nick Craig-Wood
3f7af64316 config: give config questions default values - fixes #3672 2019-11-05 11:53:44 +00:00
Nick Craig-Wood
0eaf5475ef Start v1.50.1-DEV development 2019-11-02 15:26:01 +00:00
Nick Craig-Wood
7bf056316f local: fix listings of . on Windows - fixes #3676 2019-10-30 16:00:18 +00:00
Xiaoxing Ye
520ddbcceb config: do not open browser on headless if google fs
On google fs (drive, google photos, and google cloud storage), if
headless is selected, do not open browser.

This also supplies a new option "auth-no-open-browser" for authorize
if the user does not want it.

This should fix #3323.
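
A hedged sketch of the new option, assuming it is exposed as the flag `--auth-no-open-browser` and that the auth URL is then printed for manual use (the backend name is a placeholder):
```
# Run the authorization flow without opening a browser.
rclone authorize "drive" --auth-no-open-browser
```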
2019-10-30 14:12:42 +00:00
Nick Craig-Wood
1ce1ea34aa hash: fix hash names for DropboxHash and CRC-32
These were unintentionally renamed as part of 1dc8bcd48c

Fixes #3679
2019-10-30 12:20:10 +00:00
Nick Craig-Wood
e6378daadf fshttp: don't print token bucket errors on context cancelled
These happen as a natural part of exceeding --max-transfer and we
don't need to worry the user with them.
2019-10-30 12:20:10 +00:00
Nick Craig-Wood
7ff95c6250 Add Xiaoxing Ye to contributors 2019-10-30 12:20:10 +00:00
Xiaoxing Ye
6d58d9a86f vendor: change goftp/server url
Closing #3674
2019-10-29 17:41:56 +00:00
Chaitanya
e0356f5aae rcd: Adding group parameter to stats 2019-10-29 16:39:37 +00:00
Xiaoxing Ye
191cfb79d1 onedrive: no trailing slash reading metadata...
Don't append a trailing slash when reading the metadata of an item given its item ID.

This should fix #3664.
2019-10-29 13:33:11 +00:00
Nick Craig-Wood
e81eca4055 fshttp: fix error reporting on tpslimit token bucket errors 2019-10-28 22:11:38 +00:00
Nick Craig-Wood
ee3215ac76 build: make replacement of new rclone binary atomic on build
This avoids the "text file busy" message when trying to replace the
binary of a running rclone.
2019-10-28 22:11:38 +00:00
Nick Craig-Wood
199ac61bde rc: add methods to turn on blocking and mutex profiling 2019-10-28 22:11:38 +00:00
Nick Craig-Wood
a40cc1167d Add zero-24 to contributors 2019-10-28 16:49:33 +00:00
zero-24
c57ea8d867 docs: add instructions to create your own dropbox app ID 2019-10-28 16:49:16 +00:00
Nick Craig-Wood
1868c77e16 rc: fix formatting of docs 2019-10-27 10:43:40 +00:00
Brett Dutro
378a3f4133 mount: replace use of WriteAt with Write for cache mode >= writes and O_APPEND
os.File.WriteAt returns an error if a file was opened with O_APPEND.
This replaces it with os.File.Write if the file was opened with
O_APPEND.
2019-10-26 17:27:52 +01:00
Nick Craig-Wood
daff5a824e Start v1.50.0-DEV development 2019-10-26 12:42:06 +01:00
Nick Craig-Wood
6fabf476cf Version v1.50.0 2019-10-26 11:04:54 +01:00
Nick Craig-Wood
ab895390f4 s3: fix nil pointer reference if no metadata returned for object
Fixes #3651 Fixes #3652
2019-10-25 13:45:47 +01:00
Nick Craig-Wood
a3a5857874 drive: fix change notify polling when using appDataFolder
See: https://forum.rclone.org/t/remote-changes-arent-picked-up/12520
2019-10-24 12:51:01 +01:00
Nick Craig-Wood
0f0079ff71 b2: remove unverified: prefix on sha1 - fixes #3654 2019-10-23 08:41:56 +01:00
Nick Craig-Wood
18c029e0f0 Add dausruddin to contributors 2019-10-21 22:28:44 +01:00
dausruddin
7eee2f904a drive: fix typo 2019-10-21 22:28:28 +01:00
Nick Craig-Wood
3ef0c73826 drive: fix ChangeNotify polling for shared drives
Before this fix we neglected to add the shared drive ID to the request
when asking for an initial change notify token and this caused a lot
more results to be returned than was necessary.
2019-10-21 20:51:11 +01:00
Nick Craig-Wood
59026c4761 mount, cmount: don't pass huge filenames (>4k) to FUSE as it can't cope 2019-10-21 20:51:11 +01:00
Nick Craig-Wood
76f5e273d2 vfs: stop change notify polling clearing so much of the directory cache
Before this change, change notify polls would clear the directory
cache recursively. So uploading a file to the root would clear the
entire directory cache.

After this change we just invalidate the directory cache of the parent
directory of the item and if the item was a directory we invalidate it
too.
2019-10-21 20:51:11 +01:00
Nick Craig-Wood
2bbfcc74e9 drive: fix --drive-shared-with-me from the root with ls and --fast-list
When we changed recursive lists to use --fast-list by default this
broke listing with --drive-shared-with-me from the root.

This turned out to be an unwarranted assumption in the ListR code that
all items would have a parent folder that we had searched for - this
isn't true for shared with me items.

This was fixed when using --drive-shared-with-me to give items that
didn't have any parents a synthetic parent.

Fixes #3639
2019-10-21 12:16:01 +01:00
Nick Craig-Wood
ba7c2ac443 drive: make sure that drive root ID is always canonical
Before this change we used the id "root" as an alias for the root drive ID.

However this causes problems when we receive IDs back from drive which
are not in this format and have been expanded to their canonical ID.

This change looks up the ID "root" and stores it in the
"drive_folder_id" parameter in the config file.

This helps with
- Notifying changes at the root
- Files shared with me at the root

See #3639
2019-10-21 12:16:01 +01:00
Nick Craig-Wood
2d9b8cb981 azureblob: disable logging to the Windows event log
See: https://forum.rclone.org/t/event-log-warning/12430
2019-10-21 11:50:31 +01:00
Ivan Andreev
2e50543053 Add Ivan Andreev to maintainers 2019-10-20 00:33:16 +03:00
Nick Craig-Wood
22bf8589cd Add Saksham Khanna to contributors 2019-10-17 15:05:46 +01:00
Nick Craig-Wood
0871c57f1b Add Carlos Ferreyra to contributors 2019-10-17 15:05:46 +01:00
Saksham Khanna
0c265713fd rc: added command core/quit 2019-10-17 15:04:22 +01:00
Carlos Ferreyra
9cb549a227 sftp: include more ciphers with use_insecure_cipher 2019-10-17 14:58:31 +01:00
Nick Craig-Wood
13e46c4b3f accounting: cull the old time ranges when possible to save memory 2019-10-17 11:43:32 +01:00
Nick Craig-Wood
d40972bf1a accounting: allow up to 100 completed transfers in the accounting list
This fixes the core/transfers rc so it shows items again.
2019-10-16 22:13:17 +01:00
Nick Craig-Wood
b002ff8d54 accounting: fix total duration calculation
This was broken in e337cae0c5 when we deleted the transfers
immediately.

This is fixed by keeping a merged slice of time ranges of completed
transfers and adding those to the current transfers.
2019-10-16 22:13:17 +01:00
Nick Craig-Wood
38652d046d drive: disable HTTP/2 by default to work around INTERNAL_ERROR problems
Before this change when rclone was compiled with go1.13 it used HTTP/2
to contact drive by default.

This causes lockups and INTERNAL_ERRORs from the HTTP/2 code.

This is a workaround disabling the HTTP/2 code on an option.

It can be re-enabled with `--drive-disable-http2=false`

See #3631
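
For illustration, a hedged example of opting back in with the flag named above (the remote name `gdrive:` is a placeholder):
```
# HTTP/2 is now disabled for drive by default as a workaround; re-enable it explicitly.
rclone lsd --drive-disable-http2=false gdrive:
```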
2019-10-16 11:26:08 +01:00
Nick Craig-Wood
0b6cdb7677 fshttp: allow Transport to be customized #3631 2019-10-16 11:26:08 +01:00
Nick Craig-Wood
543100070a sync: free objects after they come out of the transfer pipe
This reduces memory when the transfer pipe shrinks

See: https://forum.rclone.org/t/rclone-memory-consumption-increasing-linearly/12244
2019-10-16 10:27:07 +01:00
Nick Craig-Wood
e337cae0c5 accounting: fix memory leak noticeable for transfers of large numbers of objects
Before this fix we weren't removing transfers from the transfer stats.
For transfers with 1000s of objects this uses a noticeable amount of
memory.

See: https://forum.rclone.org/t/rclone-memory-consumption-increasing-linearly/12244
2019-10-16 10:27:07 +01:00
Nick Craig-Wood
90a23ae01b Add Bryce Larson to contributors 2019-10-16 10:27:07 +01:00
Bryce Larson
dd150efdd7 docs: fix --use-server-modtime spelling in docs 2019-10-15 19:54:42 +01:00
Nick Craig-Wood
af05e290cf Fix --files-from without --no-traverse doing a recursive scan
In a28239f005 we made --files-from obey --no-traverse.  In the
process this caused --files-from without --no-traverse to do a
complete recursive scan unnecessarily.

This was only noticeable in users of fs/march, so sync/copy/move/etc
not in ls/lsf/etc.

This fix makes sure that we use conventional directory listings in
fs/march unless `--files-from` and `--no-traverse` are set or
`--fast-list` is active.

Fixes #3619
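
A hedged example of the two combinations described above (file list, paths and remote names are placeholders):
```
# Uses conventional directory listings, filtered to the files listed in files.txt.
rclone copy --files-from files.txt /src remote:dst
# Skips traversal entirely and checks only the listed files directly.
rclone copy --files-from files.txt --no-traverse /src remote:dst
```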
2019-10-15 19:51:01 +01:00
Nick Craig-Wood
f9f9d5029b fserror: make http2 "stream error:" a retriable error
It was reported that v1.49.4 which was accidentally compiled with
go1.13 instead of go1.12 produced errors like this:

    Failed to get StartPageToken: Get https://www.googleapis.com/drive/v3/changes/startPageToken?XXX: stream error: stream ID 1789; INTERNAL_ERROR
    IO error: open file failed: Get https://www.googleapis.com/drive/v3/files/XXX?alt=media: stream error: stream ID 1781; INTERNAL_ERROR

These are errors from the http2 library.  It appears that go1.13 when
communicating with google drive defaults to http2 whereas with go1.12
it doesn't.

It is unclear what is causing these errors, but retrying them since
they don't happen very often seems like a valid strategy.

This was fixed in v1.49.5 by compiling with go1.12 - this fix is
designed to work with go1.13

See: https://forum.rclone.org/t/1-49-4-plex-internal-errors-on-google-drive/12108/
2019-10-15 19:46:44 +01:00
Nick Craig-Wood
7d3b67f6cc Add AlexandrBoltris to contributors 2019-10-15 19:46:44 +01:00
Cenk Alti
929f275ae5 putio: add ability to resume uploads 2019-10-14 20:01:16 +01:00
AlexandrBoltris
c526bdb579 docs: typo fix in faq.md 2019-10-14 17:07:29 +01:00
Nick Craig-Wood
1b2ffbeca0 cmd: fix environment variables not setting command line flags
Before this fix quite a lot of the commands were ignoring environment
variables intended to set flags.
2019-10-14 17:02:09 +01:00
Nick Craig-Wood
19429083ad cmd: fix spelling of Definition 2019-10-14 17:02:09 +01:00
Nick Craig-Wood
6e378d7d32 config: fix setting of non top level flags from environment variables
Before this fix, attempting to set a non top level environment
variable would fail with "Couldn't find flag".

This fixes it by passing in the flags that the env var is being set
from.

Fixes #3615
2019-10-14 17:02:09 +01:00
Nick Craig-Wood
1fe1a19339 vfs: stop empty dirs disappearing when renamed on bucket based remotes
Before this change when we renamed a directory this cleared the
directory cache for the parent directory too.

If the directory was remaining in the same parent this wasn't
necessary and caused the empty directory to fall out of the cache.

Fixes #3597
2019-10-14 14:38:30 +01:00
Chaitanya
b63e9befe8 rc docs: fix code section not rendering properly due to missing quotes 2019-10-13 12:26:37 +01:00
Nick Craig-Wood
b4b59c53f1 mount: fix "mount_fusefs: -o timeout=: option not supported" on FreeBSD
Before this change `rclone mount` would give this error on FreeBSD

    mount helper error: mount_fusefs: -o timeout=: option not supported

Because the default value for FreeBSD was set to 15m for
--daemon-timeout and that FreeBSD does not support the timeout option.

This change sets the default for --daemon-timeout to 0 on FreeBSD
which fixes the problem.

Fixes #3610
2019-10-13 11:36:51 +01:00
Ivan Andreev
77b42aa33a chunker: fix integration tests and hashsum issues 2019-10-13 10:43:46 +01:00
Ivan Andreev
910c80bd02 chunker: option to hash all files 2019-10-13 10:43:46 +01:00
Ivan Andreev
9049bb62ca chunker: prevent chunk corruption, survive meta-like input 2019-10-13 10:43:46 +01:00
Ivan Andreev
7aa2b4191c chunker: reservations for future extensions 2019-10-13 10:43:46 +01:00
Alex Chen
41ed33b08e docs: update onedrive/sharepoint docs on some known issues 2019-10-12 12:08:22 +01:00
Nick Craig-Wood
f3b0f8a9f0 sync: --update/-u not transfer files that haven't changed - fixes #3232
Before this change --update would transfer any file which was newer
than the destination regardless of whether it had changed or not.
This is needlessly wasteful of bandwidth.

After this change --update will only transfer files if they are newer
**and** they are different (checked with checksum and size).
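
A hedged usage sketch (paths and remote names are placeholders) of the flag as it now behaves:
```
# Skip files that are newer on the destination; after this change, files that
# are newer on the source but otherwise identical are also skipped.
rclone sync --update /src remote:dst
```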
2019-10-12 11:54:56 +01:00
Nick Craig-Wood
65a82fe77d dropbox: fix nil pointer exception on restricted files
See: https://forum.rclone.org/t/issues-syncing-dropbox/12233
2019-10-11 16:21:24 +01:00
Nick Craig-Wood
c892a6f8ef Add Michele Caci to contributors 2019-10-11 16:17:24 +01:00
Michele Caci
02c777ffbf filter: prevent mix opts when filesfrom is present - fixes #3599 2019-10-11 16:17:02 +01:00
Nick Craig-Wood
bc45f6f952 Add Arijit Biswas to contributors 2019-10-11 15:25:20 +01:00
Arijit Biswas
3d807ab449 docs: update onedrive docs on creating own client ID
Updated the creating "own Client ID and Key" based on new portal (portal.azure.com).
2019-10-11 15:24:54 +01:00
Jon Fautley
5d33236050 ftp: allow disabling EPSV mode 2019-10-10 21:00:41 +01:00
Nick Craig-Wood
a4d572d004 Add Vighnesh SK to contributors 2019-10-10 16:05:29 +01:00
Cenk Alti
58f280b8a2 fserrors: fix a bug in Cause function 2019-10-10 16:05:15 +01:00
Vighnesh SK
ec09de1628 Change the Debug message in NeedTransfer (#3608)
'Couldn't find file - Need to Transfer' changed to 'Need to transfer -
File Not Found at Destination' because, when reading the debug logs, the
old wording could be confused with a failure of the operation.
2019-10-10 13:44:05 +01:00
Nick Craig-Wood
6abaa9e22c fstests: allow skipping of the broken UTF-8 test for the cache backend 2019-10-10 10:36:18 +01:00
Nick Craig-Wood
e8b92f4853 sftp: fix test failures
This was introduced by 50a3a96e27
2019-10-09 17:43:03 +01:00
Nick Craig-Wood
50a3a96e27 serve sftp: fix crash on unsupported operations (eg Readlink)
Before this change the sftp handler returned a nil error for unknown
operations which meant the server crashed when one was encountered.

In particular the "Readlink" operations was causing problems.

After this change the handler returns ErrSshFxOpUnsupported which
signals to the remote end that we don't support that operation.

See: https://forum.rclone.org/t/rclone-serve-sftp-not-working-in-windows/12209
2019-10-09 16:12:21 +01:00
Dan Walters
8950b586c4 dlna: associate subtitles with all possible media nodes
When there was a .nfo and a .mp4, they were being associated only with
the .nfo.
2019-10-09 11:57:42 +01:00
Nick Craig-Wood
3f40849343 Add Brett Dutro to contributors 2019-10-09 10:07:50 +01:00
Nick Craig-Wood
7271a404db Add Tyler to contributors 2019-10-09 10:07:50 +01:00
Brett Dutro
7d0d7e66ca vfs: move writeback of dirty data out of close() method into its own method (FlushWrites) and remove close() call from Flush()
If a file handle is duplicated with dup() and the duplicate handle is
flushed, rclone will go ahead and close the file, making the original
file handle stale. This change removes the close() call from Flush() and
replaces it with FlushWrites() so that the file only gets closed when
Release() is called. The new FlushWrites method takes care of actually
writing the file back to the underlying storage.

Fixes #3381
2019-10-09 10:07:29 +01:00
Tyler
0cac9d9bd0 Fix 1fichier link in Readme 2019-10-08 21:55:12 +01:00
Nick Craig-Wood
8c1edf410c dropbox: make disallowed filenames return no retry error - fixes #3569
Before this change we silently skipped uploads to dropbox of
disallowed file names.  However this then caused "corrupted on
transfer" errors because the sizes were wrong.

After this change we return a no-retry error which will mean that the
sync fails (as it should - not all files were uploaded) but no
unnecessary retries happen.
2019-10-08 19:59:47 +01:00
Nick Craig-Wood
1833167d10 vendor: run go mod tidy / go mod vendor with go1.13 2019-10-08 19:59:47 +01:00
Nick Craig-Wood
455b9280ba config: use alternating Red/Green in config to make more obvious 2019-10-08 19:59:47 +01:00
Nick Craig-Wood
45e440d356 vendor: remove github.com/Azure/go-ansiterm 2019-10-08 19:59:47 +01:00
Nick Craig-Wood
593de059be lib/terminal: factor from cmd/progress, swap Azure/go-ansiterm for mattn/go-colorable 2019-10-08 19:59:47 +01:00
Nick Craig-Wood
c78d1dd18b vendor: add github.com/mattn/go-colorable 2019-10-08 19:59:47 +01:00
Nick Craig-Wood
2a82aca225 Add Sezal Agrawal to contributors 2019-10-08 19:59:47 +01:00
SezalAgrawal
7712b780ba operations: display 'Deleted X extra copies' only if dedupe successful - fixes #3551 2019-10-08 16:35:53 +01:00
Sezal Agrawal
5c2dfeee46 operations: display 'All duplicates removed' only if dedupe successful -fixes rclone#3550 2019-10-08 16:34:13 +01:00
Dan Walters
572d302620 dlna: simplify search method for associating subtitles with media nodes
Seems to be some corner cases that are not being handled, so taking a different
approach that should be a little more robust.

Also, changing resources to be served under a subpath:  We've been serving
media at /res?path=%2Fdir%2Ffilename.mp4; change that to be just /r/dir/filename.mp4.
It's cleaner, easier to reason about, and a necessary first step towards just
serving the resources via httplib anyway.
2019-10-08 07:49:39 +01:00
Henning Surmeier
eff11b44cf webdav: parse and return sharepoint error response
fixes #3176
2019-10-06 20:17:13 +01:00
Nick Craig-Wood
15b1feea9d mount: fix panic on File.Open - Fixes #3595
This problem was introduced in "mount: allow files of unkown size to
be read properly" 0baafb158f by failure to check that the
DirEntry was nil or not.
2019-10-06 19:26:58 +01:00
Dan Walters
6337cc70d3 dlna: support for external srt subtitles
Allows for filename.srt, filename.en.srt, etc., to be automatically associated with video.mp4 (or whatever) when playing over dlna.

This is the "modern" method, which I've verified to work on VLC and in LG webOS 2.  There is a vendor specific mechanism for Samsung that I havn't been able to get working on my F series.

Also made some minor corrections to logging and container IDs.
2019-10-06 12:18:56 +01:00
Nick Craig-Wood
d210fecf3b Add Raphael to contributors 2019-10-05 17:07:02 +01:00
Nick Craig-Wood
f962fb9499 Add SwitchJS to contributors 2019-10-05 17:07:02 +01:00
Raphael
7f378ca8e3 documentation: add sharepoint required flags fixes #3564
Enhance the WebDAV documentation with information regarding the flags that are required to make Rclone work correctly with SharePoint.
fixes #3564
2019-10-05 17:06:44 +01:00
SwitchJS
9a5ea9c8a8 docs: fix spelling 2019-10-05 16:00:39 +01:00
Nick Craig-Wood
d15425e8c8 Start v1.49.5-DEV development 2019-10-05 12:42:28 +01:00
Nick Craig-Wood
b3faee9471 build: fix macOS build after brew changes 2019-10-05 11:51:28 +01:00
Nick Craig-Wood
5271fe3b3f yandex: use lib/encoder 2019-10-05 10:22:43 +01:00
Nick Craig-Wood
7da1c84a7f build: don't deploy xgo build on pull requests 2019-10-04 16:53:51 +01:00
Nick Craig-Wood
cbdab14057 Add 庄天翼 to contributors 2019-10-04 16:53:51 +01:00
庄天翼
7b1274e29a s3: support for multipart copy
Fixes #2375 Fixes #3579
2019-10-04 16:49:06 +01:00
Nick Craig-Wood
d21ddf280c mailru: comment out some debugging statements 2019-10-02 20:10:01 +01:00
Nick Craig-Wood
135717e12b mailru: use lib/encoder 2019-10-02 20:10:01 +01:00
Aleksandar Jankovic
6b55b8b133 s3: add option for multipart failure behaviour
This is needed for resuming uploads across different sessions.
2019-10-02 16:49:16 +01:00
Nick Craig-Wood
b94b2a3723 mega: fix after lib/encoder change 2019-10-02 12:41:52 +01:00
Nick Craig-Wood
e2914c0097 test_all: ignore some encoding tests with nextcloud integration test 2019-10-02 11:34:08 +01:00
Nick Craig-Wood
fd51f24906 putio: use lib/encoder
And in the process
- fix a bug with + and & in file name
- fix NewObject returning directories as files
2019-10-02 11:34:08 +01:00
Nick Craig-Wood
4615343b73 premiumizeme: use lib/encoder 2019-10-02 11:34:08 +01:00
Fionera
1dc8bcd48c Remove backend dependency from fs/hash 2019-10-01 16:29:58 +01:00
Nick Craig-Wood
def411da62 build: use the release builds not master of nfpm and github-release
Fixes #3580
2019-10-01 16:23:36 +01:00
Nick Craig-Wood
f73dae1e77 bin/get-github-release: support tar.bz2 files 2019-10-01 16:23:36 +01:00
Nick Craig-Wood
77a520c97c fichier: fix accessing files > 2GB on 32 bit systems - fixes #3581 2019-10-01 16:03:49 +01:00
Nick Craig-Wood
23bf6bb4d8 test_all: mark expected failures for minio, wasabi and FTP 2019-10-01 15:40:32 +01:00
Nick Craig-Wood
04eb96b50b fichier: fix NewObject after lib/encoder changes
This bug was introduced as part of the lib/encoder changes in
8d8fad724b.  It caused NewObject not to work for a file with
escaped characters in it.
2019-10-01 15:30:51 +01:00
Fabian Möller
b9bd15a8c9 koofr: use lib/encoder
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2019-09-30 22:00:25 +01:00
Nick Craig-Wood
b581f2de26 sharefile: use lib/encoder 2019-09-30 22:00:25 +01:00
Nick Craig-Wood
5cef5f8b49 lib/encoder: add LeftPeriod encoding 2019-09-30 22:00:25 +01:00
Nick Craig-Wood
8d8fad724b fichier: use lib/encoder 2019-09-30 22:00:25 +01:00
Nick Craig-Wood
4098907511 lib/encoder: add more encode symbols and split existing 2019-09-30 22:00:25 +01:00
Nick Craig-Wood
5b8a339baf docs: Add notes on how to find out the encodings used in a backend 2019-09-30 22:00:25 +01:00
Nick Craig-Wood
3e53376a49 build_csv: fix output of control characters 2019-09-30 22:00:25 +01:00
Nick Craig-Wood
d122d1d191 qingstor: use lib/encoder 2019-09-30 22:00:25 +01:00
Nick Craig-Wood
35d6ff89bf azureblob: use lib/encoder 2019-09-30 22:00:24 +01:00
Nick Craig-Wood
53bec33027 swift: use lib/encoder 2019-09-30 22:00:24 +01:00
Fabian Möller
3304bb7a56 googlecloudstorage: use lib/encoder
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2019-09-30 22:00:24 +01:00
Nick Craig-Wood
f55a99218c lib/encoder: add CrLf encoding 2019-09-30 22:00:24 +01:00
Nick Craig-Wood
6e053ecbd0 s3: only ask for URL encoded directory listings if we need them on Ceph
This works around a bug in Ceph which doesn't encode CommonPrefixes
when using URL encoded directory listings.

See: https://tracker.ceph.com/issues/41870
2019-09-30 22:00:24 +01:00
Nick Craig-Wood
7e738c9d71 fstest: remove WinPath as it is no longer needed 2019-09-30 22:00:24 +01:00
Fabian Möller
7689bd7e21 amazonclouddrive: use lib/encoder
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2019-09-30 22:00:24 +01:00
Fabian Möller
33f129fbbc s3: use lib/encoder
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2019-09-30 22:00:24 +01:00
Nick Craig-Wood
a8adce9c59 s3: fix encoding for control characters - Fixes #3345 2019-09-30 22:00:24 +01:00
Nick Craig-Wood
6ae7bd7914 local: encode invalid UTF-8 on macOS 2019-09-30 22:00:24 +01:00
Nick Craig-Wood
32af4cd6f3 ftp: use lib/encoder 2019-09-30 22:00:24 +01:00
Nick Craig-Wood
ced2616da5 fstests: allow Purge to fail with ErrorDirNotFound 2019-09-30 22:00:24 +01:00
Nick Craig-Wood
b90e4a8769 sftp: fix hashes of files with backslashes 2019-09-30 22:00:24 +01:00
Fabian Möller
00b2c02bf4 pcloud: use lib/encoder
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2019-09-30 22:00:24 +01:00
Fabian Möller
33aea5d43f mega: use lib/encoder
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2019-09-30 22:00:24 +01:00
Fabian Möller
13d8b7979d b2: use lib/encoder
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2019-09-30 22:00:24 +01:00
Nick Craig-Wood
57c1284df7 fstests: make integration tests to check all backends can store any file name
This tests the encoder is working properly
2019-09-30 22:00:24 +01:00
Fabian Möller
f0c2249086 encoder: add edge control characters and fix edge test generation 2019-09-30 14:05:49 +01:00
Fabian Möller
6ba08b8612 info: rewrite invalid character test and reporting 2019-09-30 14:05:49 +01:00
Fabian Möller
c8d3e57418 encodings: add . and .. to all backends, except Drive 2019-09-30 14:05:49 +01:00
Fabian Möller
d5cd026547 encoder: add option to encode . and .. names 2019-09-30 14:05:49 +01:00
Fabian Möller
6c0a749a42 crypt: remove checkValidString
Remove the usage of checkValidString in decryptSegment to allow all
paths which can be created by encryptSegment to be decryptable.
2019-09-30 14:05:49 +01:00
Fabian Möller
4b9fdb8475 opendrive: use lib/encoder 2019-09-30 14:05:49 +01:00
Fabian Möller
dac20093c5 onedrive: use lib/encoder 2019-09-30 14:05:49 +01:00
Fabian Möller
d211347d46 dropbox: use lib/encoder 2019-09-30 14:05:49 +01:00
Fabian Möller
4837bc3546 jottacloud: use lib/encoder 2019-09-30 14:05:49 +01:00
Fabian Möller
69c51325bb drive: use lib/encoder 2019-09-30 14:05:49 +01:00
Fabian Möller
05e4f10436 box: use lib/encoder 2019-09-30 14:05:49 +01:00
Fabian Möller
a98a750fc9 local: use lib/encoder 2019-09-30 14:05:49 +01:00
Fabian Möller
c09b62a088 encodings: add all known backend encodings 2019-09-30 14:05:49 +01:00
Fabian Möller
a56c9ab61d docs: add section for restricted filenames 2019-09-30 14:05:48 +01:00
Fabian Möller
97a218903c fstest: remove WinPath from fstest.Item 2019-09-30 14:05:48 +01:00
Nick Craig-Wood
4627ac5709 New backend for Citrix Sharefile - Fixes #1543
Many thanks to Bob Droog for organizing a test account and extensive
testing.
2019-09-30 12:28:33 +01:00
Nick Craig-Wood
1e7144eb63 docs: Add more notes on making backend docs 2019-09-30 12:27:03 +01:00
Nick Craig-Wood
f29e5b6e7d lib/oauthutil: refactor web server and allow an auth callback 2019-09-30 11:34:30 +01:00
Nick Craig-Wood
25a0e7e8aa lib/oauthutil: add a new redirect URL for oauth.rclone.org
This is for use with oauth providers which won't accept http: links.
2019-09-30 11:23:21 +01:00
Nick Craig-Wood
262ba28dec config: add ReadNonEmptyLine utility function 2019-09-30 11:23:21 +01:00
Nick Craig-Wood
74f6300875 Start v1.49.4-DEV development 2019-09-29 19:47:59 +01:00
Nick Craig-Wood
86dcb54c38 fstests: make tests pass when using -remote :backend: 2019-09-29 17:25:54 +01:00
Nick Craig-Wood
25a0703b45 Add Richard Patel to contributors 2019-09-29 11:14:33 +01:00
Richard Patel
32d5af8fb6 cmd/rcd: Address ZipSlip vulnerability
Don't create files outside of target
directory while unzipping.

Fixes #3529 reported by Nico Waisman at Semmle Security Team
2019-09-29 11:14:21 +01:00
Richard Patel
44b603d2bd lib: add plugin support
This enables loading plugins from RCLONE_PLUGIN_PATH if set.
2019-09-29 11:05:10 +01:00
Nick Craig-Wood
349112df6b oauthutil: fix security problem when running with two users on the same machine
Before this change two users could run `rclone config` for the same
backend on the same machine at the same time.

User A would get as far as starting the web server.  User B would then
fail to start the webserver, but it would open the browser on the
/auth URL which would redirect the user to the login.  This would then
cause user B to authenticate to user A's rclone.

This change fixes the problem in two ways.

Firstly it passes the state to the /auth call before redirecting and
checks it there, returning a 403 error if it doesn't match.  This
would have fixed the problem on its own.

Secondly it delays the opening of the web browser until after the auth
webserver has started which prevents the user entering the credentials
if another auth server is running.

Fixes #3573
2019-09-29 10:42:02 +01:00
Nick Craig-Wood
fef8b98be2 ftp: fix listing of an empty root returning: error dir not found
Before this change if rclone listed an empty root directory then it
would return an error dir not found.

After this change we assume the root directory exists and don't
attempt to check it which was failing before.

See: https://forum.rclone.org/t/ftp-empty-directory-yields-directory-not-found-error/12069/
2019-09-28 18:01:12 +01:00
Nick Craig-Wood
6750af6167 build: make VERSION file be master of the last release - fixes #3570
Prior to this, beta releases would appear to be older than the point
release, eg v1.49.0-096-gc41812fc, which was released after v1.49.3 and
contains all the patches from v1.49.3.
2019-09-26 16:51:44 +01:00
Nick Craig-Wood
8681ef36d6 build: replace Circle CI build and make GitHub actions the default CI 2019-09-25 16:38:10 +01:00
Nick Craig-Wood
ec9914205f build: remove Appveyor, Circle CI, Travis and Pkgr builds 2019-09-25 16:38:10 +01:00
Ivan Andreev
ccecfa9cb1 chunker: finish meta-format before release
changes:
- chunker: remove GetTier and SetTier
- remove wdmrcompat metaformat
- remove fastopen strategy
- make hash_type option non-advanced
- advertise hash support when possible
- add metadata field "ver", run strict checks
- describe internal behavior in comments
- improve documentation

note:
wdmrcompat used to write the file name in the metadata, so the maximum
metadata size was 1K; removing it allows the size to be capped at 200 bytes now.
2019-09-25 11:03:33 +01:00
Ivan Andreev
c41812fc88 tests: bring memory hungry tests close to end 2019-09-24 12:45:12 +01:00
Ivan Andreev
d98d1be3fe accounting: fix panic due to server-side copy fallback 2019-09-24 12:45:12 +01:00
Ivan Andreev
661dc568f3 fstest: let backends advertise maximum file size 2019-09-24 12:45:12 +01:00
Ivan Andreev
1e4691f951 tests/sync: adjust transfer counts for chunker 2019-09-24 12:45:12 +01:00
Ivan Andreev
be674faff1 tests/config: integration tests for chunker
Recommended `rclone.conf` snippets for this `config.yaml`:
```
[TestChunkerLocal]
type = chunker
meta_format = simplejson
remote = /tmp/rclone-chunker-test

[TestChunkerChunk3bLocal]
type = chunker
chunk_size = 3b
meta_format = simplejson
remote = /tmp/rclone-chunker-test

[TestChunkerNometaLocal]
type = chunker
meta_format = none
remote = /tmp/rclone-chunker-test

[TestChunkerChunk3bNometaLocal]
type = chunker
chunk_size = 3b
meta_format = none
remote = /tmp/rclone-chunker-test

[TestChunkerCompatLocal]
type = chunker
meta_format = wdmrcompat
remote = /tmp/rclone-chunker-test
```
2019-09-24 12:45:12 +01:00
Ivan Andreev
c68c919cea docs: chunker documentation 2019-09-24 12:45:12 +01:00
Ivan Andreev
59dba1de88 chunker: implementation + required fstest patch
Note: chunker implements many irrelevant methods (UserInfo, Disconnect etc),
but they are required by TestIntegration/FsCheckWrap and cannot be removed.

Dropped API methods: MergeDirs DirCacheFlush PublicLink UserInfo Disconnect OpenWriterAt

Meta formats:
- renamed old simplejson format to wdmrcompat.
- new simplejson format supports hash sums and verification of chunk size/count.

Change list:
- split-chunking overlay for mailru
- add to all
- fix linter errors
- fix integration tests
- support chunks without meta object
- fix package paths
- propagate context
- fix formatting
- implement new required wrapper interfaces
- also test large file uploads
- simplify options
- user friendly name pattern
- set default chunk size 2G
- fix building with golang 1.9
- fix ci/cd on a separate branch
- fix updated object name (SyncUTFNorm failed)
- fix panic in Box overlay
- workaround: Box rename failed if name taken
- enhance comments in unit test
- fix formatting
- embed wrapped remote rather than inherit
- require wrapped remote to support move (or copy)
- implement 3 (keep fstest)
- drop irrelevant file system interfaces
- factor out Object.mainChunk
- refactor TestLargeUpload as InternalTest
- add unit test for chunk name formats
- new improved simplejson meta format
- tricky case in test FsIsFile (fix+ignore)
- remove debugging print
- hide temporary objects from listings
- fix bugs in chunking reader:
  - return EOF immediately when all data is sent
  - handle case when wrapped remote puts by hash (bug detected by TestRcat)
- chunked file hashing (feature)
- server-side copy across configs (feature)
- robust cleanup of temporary chunks in Put
- linear download strategy (no read-ahead, feature)
- fix unexpected EOF in the box multipart uploader
- throw error if destination ignores data
2019-09-24 12:45:12 +01:00
Fionera
49d6d6425c serve/httplib: Write the template to a buffer to catch render errors
Fixes #3559
2019-09-22 21:31:11 +01:00
Nick Craig-Wood
28cc2009d4 Add Anthony Rusdi to contributors 2019-09-21 14:39:03 +01:00
Nick Craig-Wood
dd4fe9ff60 Add David to contributors 2019-09-21 14:39:03 +01:00
Anthony Rusdi
899f285319 s3: fix signature v2_auth headers
When used with v2_auth = true, PresignRequest doesn't return
signed headers, so remote destination authentication would fail.
This commit copies HTTPRequest.Header back into the headers.

Tested with RiakCS v2.1.0.

Signed-off-by: Anthony Rusdi <33247310+antrusd@users.noreply.github.com>
2019-09-21 14:38:51 +01:00
David
4788545b05 box: add options to get access token via JWT auth 2019-09-20 17:15:16 +01:00
David
1934426789 jwtutil: functionality to get an access token via JWT authentication 2019-09-20 17:15:16 +01:00
David
643192b347 vendor: add pkcs8 helpers for decrypting encrypted private keys 2019-09-20 17:15:16 +01:00
Nick Craig-Wood
1031bcfc5a build: remove azure pipelines build 2019-09-20 16:08:18 +01:00
Nick Craig-Wood
ce00c0a0d9 build: build rclone with github actions 2019-09-20 16:08:18 +01:00
Nick Craig-Wood
1164eed2af lib/pacer: make tests more reliable 2019-09-20 16:07:55 +01:00
Nick Craig-Wood
557edecd40 log: add Stack() function for debugging who calls what 2019-09-20 11:53:08 +01:00
Nick Craig-Wood
b242b0a078 lib/cache,rc/jobs: make tests more reliable 2019-09-20 11:53:08 +01:00
Nick Craig-Wood
08b86cc94b mount: skip tests on <= 2 CPUs to avoid lockup in #3154 2019-09-20 11:53:08 +01:00
Nick Craig-Wood
56544bb2fd accounting: fix file handle leak on errors - fixes #3547
In 53a1a0e3ef we introduced a problem where if there was an
error on the file being transferred then the file was re-opened and
the old one wasn't closed.

This was partially fixed in bfbddab46b however this didn't
address the case of the old file being closed.

This is now fixed by
- marking the file as open again in UpdateReader
- moving the stopping the accounting machinery to a new method Done
2019-09-19 16:20:07 +01:00
Matei David
70e043e641 Fixed typo in Docker image doc. 2019-09-19 16:17:57 +01:00
Dan Walters
c49a71f438 dlna: move root descriptor xml template to the static assets
Reduce binary size.
2019-09-17 12:52:32 +01:00
Dan Walters
5f07bbf8ce dlna: fake out implementation of X_MS_MediaReceiverRegistrar
Using the same responses as minidlna.

Fixes #3502.
2019-09-17 12:52:02 +01:00
Dan Walters
2f10472df3 dlna: count the number of children in the response to BrowseMetadata 2019-09-17 12:28:20 +01:00
Matei David
ab89e93968 Add Matei David to contributors 2019-09-17 10:12:32 +01:00
Matei David
070a8bfcd8 Dockerfile fixes
- ref: https://forum.rclone.org/t/run-docker-container-in-userspace/11734/7
- enable userspace operation
- enable Docker userspace mount exposed to the host
- add more Docker image usage documentation
2019-09-17 10:12:32 +01:00
Ivan Andreev
8fe87c8157 mailru: skip extra http request if data fits in hash 2019-09-17 10:04:51 +01:00
Ivan Andreev
8fb44a822d mailru: fix rare nil pointer panic 2019-09-17 10:04:51 +01:00
Nick Craig-Wood
3cff258577 sftp: fix --sftp-ask-password trying to contact the ssh agent
See: https://forum.rclone.org/t/rclone-command-line/11766
2019-09-16 11:16:27 +01:00
Nick Craig-Wood
66347aff2a fstest: calculate hashes for uploaded test files to fix minio integration tests
Before this change we didn't calculate any hashes for test files
created in the Run framework.

This means that files were uploaded to S3 without a `Content-MD5`
header.  This in turn caused minio to disengage `--compat` mode which
in turn caused the `TestSyncAfterChangingModtimeOnlyWithNoUpdateModTime`
test to fail in `fs/sync`.

After this change we supply all hashes supported by the destination Fs
on the upload object.

This means that the `Content-MD5` is set and minio engages `--compat`
mode to fix the problem.  Using `--compat` on the command line also
fixes the problem.

This much better replicates how objects are actually uploaded with
operations.Copy so should improve the integration tests.
2019-09-16 10:59:01 +01:00
Nick Craig-Wood
b8b12a4000 Start v1.49.3-DEV development 2019-09-15 18:10:08 +01:00
Dan Walters
8c038326b9 dlna: correct output for ContentDirectoryService#Browse with BrowseMetadata
We were marshalling the "cds object" instead of the "upnp object".

Fixes #3253  (I think)
2019-09-15 16:30:39 +01:00
pataquets
fd4b25932c Contrib: Add sample WebDAV server Docker Compose manifest. 2019-09-15 16:06:54 +01:00
pataquets
4374fd1df1 Contrib: Add sample DLNA server Docker Compose manifest. 2019-09-15 16:06:54 +01:00
Nick Craig-Wood
b6065561cf test_all: add ignores for tests which will never pass
- s3 backends which don't support SetTier
- mega which makes a duplicate for TestDirRename
2019-09-15 13:16:15 +01:00
Nick Craig-Wood
ef7bfd3f03 fs: Make prefix free backend config read prefix free env var also
Before this change you could only configure the local backend flags
which don't have the local prefix (eg `--copy-links`) with
`RCLONE_LOCAL_COPY_LINKS`.

This change makes `RCLONE_COPY_LINKS` valid too which is much more
logical for the users.

Fixes #3534
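
For illustration, both spellings of the environment variable now work for the local backend (paths and remote names are placeholders):
```
# Previously only the prefixed form was accepted:
RCLONE_LOCAL_COPY_LINKS=true rclone sync /src remote:dst
# After this change the prefix-free form works too, mirroring --copy-links:
RCLONE_COPY_LINKS=true rclone sync /src remote:dst
```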
2019-09-14 18:26:07 +01:00
Nick Craig-Wood
ae2edc3b5b help: add short options to backend documentation also 2019-09-14 18:24:05 +01:00
Nick Craig-Wood
0baafb158f mount: allow files of unknown size to be read properly
Before this change, files of unknown size (eg Google Docs) would
appear in file listings with 0 size and would only allow 0 bytes to be
read.

This change sets the direct_io flag in the FUSE return which bypasses
the cache for these files.  This means that they can be read properly.

This is compatible with some, but not all applications.
2019-09-14 13:22:33 +01:00
Nick Craig-Wood
ba121eddf0 vfs: make objects of unknown size readable through the VFS
These objects (eg Google Docs) appear with 0 length in the VFS.

Before this change, these only read 0 bytes.

After this change, even though the size appears to be 0, the objects
can be read to the end.  If the objects are read to the end then the
size on the handle will be updated.
2019-09-14 13:09:07 +01:00
Nick Craig-Wood
2e80e035c9 fstest/mockobject: add UnknownSize() method to make Size() return -1 2019-09-14 13:07:01 +01:00
Nick Craig-Wood
ea9b6087cf fstest/mockfs: allow fs.Objects to be added to the root 2019-09-14 13:05:36 +01:00
Nick Craig-Wood
6959c997e2 config: remove error: can't use --size-only and --ignore-size together.
They are sometimes useful together, for example when the remote
changes the sizes of the uploaded files.

See: https://forum.rclone.org/t/files-upload-will-be-auto-compress-how-do-i-sync-a-file-to-remote/10578
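
A hedged example of the combination that is no longer rejected (paths and remote names are placeholders):
```
# Useful when the remote changes the sizes of uploaded files (eg auto-compression).
rclone sync --size-only --ignore-size /src remote:dst
```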
2019-09-14 10:13:44 +01:00
Nick Craig-Wood
25786cafd3 s3: fix SetModTime on GLACIER/ARCHIVE objects and implement set/get tier
- Read the storage class for each object
- Implement SetTier/GetTier
- Check the storage class on the **object** before using SetModTime

This updates the fix in 1a2fb52 so that SetModTime works when you are
using objects which have been migrated to GLACIER but you aren't using
GLACIER as a storage class.

Fixes #3522
2019-09-14 09:18:55 +01:00
Nick Craig-Wood
23dc313fa5 azureblob: add missing type assertions for GetTier/SetTier 2019-09-14 09:18:55 +01:00
Nick Craig-Wood
1a16849df0 http: fix race introduced in 7982aaf151 2019-09-14 08:48:13 +01:00
Nick Craig-Wood
3b68340eac http: add --http-no-head to stop rclone doing HEAD in listings #3523 2019-09-14 00:17:39 +01:00
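
A hedged example of the new flag added in the commit above (`httpremote:` is a placeholder for a configured http remote):
```
# List without issuing a HEAD request per entry; sizes and modification
# times may not be filled in as a result.
rclone lsf --http-no-head httpremote:
```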
Nick Craig-Wood
7982aaf151 http: HEAD directory entries in parallel to speedup #3523 2019-09-14 00:16:44 +01:00
Nick Craig-Wood
7b29ed8ec1 Add Lars Lehtonen to contributors 2019-09-13 23:50:54 +01:00
Lars Lehtonen
c93e0ff8ee rest: fix missing error check 2019-09-13 23:50:39 +01:00
Nick Craig-Wood
3b91fb6a2f Add David Baumgold to contributors 2019-09-13 18:39:36 +01:00
David Baumgold
7d8c15c030 docs: GitHub has a capital H 2019-09-13 18:39:23 +01:00
Nick Craig-Wood
bfbddab46b fs/accounting: Fix "file already closed" on transfer retries
This was caused by the recent reworking of the accounting interface.
The Transfer object was recycling the Accounting object without
resetting the stream.

See: https://forum.rclone.org/t/error-file-already-closed/11469/
See: https://forum.rclone.org/t/rclone-b2-sync-post-error-method-not-supported/11718/
2019-09-13 18:35:02 +01:00
Nick Craig-Wood
e09a4ff019 cmd: Make --progress work in git bash on Windows - fixes #3531
This detects the presence of a VT100 terminal by using the TERM
environment variable and switches to using VT100 codes directly under
windows if it is found.

This makes --progress work correctly with git bash.
2019-09-13 15:24:47 +01:00
Nick Craig-Wood
48e23d8c85 build: fix circleci build to use billziss/xgo-cgofuse container
Use built rclone to do upload
2019-09-12 21:59:50 +01:00
Aleksandar Jankovic
934440a9df accounting: fix total duration calculation
Fixes: #3498
2019-09-12 10:29:40 +01:00
Nick Craig-Wood
29b4f211ab gcs: add context to SDK calls #3257 2019-09-09 23:27:07 +01:00
Nick Craig-Wood
bd863f8868 drive: add context to SDK calls #3257 2019-09-09 23:27:07 +01:00
Nick Craig-Wood
66c23723e3 Add context to all http.NewRequest #3257
When we drop support for go1.12 we can use http.NewRequestWithContext
2019-09-09 23:27:07 +01:00
Nick Craig-Wood
58a531a203 rest: add context propagation to rest library #3257
This fixes up the calling and propagates the contexts for the backends
which use lib/rest.
2019-09-09 23:27:07 +01:00
Ivan Andreev
ba1daea072 mailru: backend for mail.ru 2019-09-09 21:56:16 +01:00
Ivan Andreev
bdcd0b4c64 Add mailru hash (mrhash) 2019-09-09 21:34:15 +01:00
Nick Craig-Wood
94eb9a4014 Start v1.49.2-DEV development 2019-09-08 17:36:21 +01:00
Nick Craig-Wood
e028c006fc test_all: write index.json and add branch, commit and Go version to report 2019-09-08 11:35:56 +01:00
Nick Craig-Wood
3f3f038b73 build: make sure we add version info to test_all build 2019-09-08 11:35:36 +01:00
calisro
2298834e83 docs: spell out google photos download limitations 2019-09-07 11:47:53 +01:00
Nick Craig-Wood
07dfb3aa11 bin: convert python scripts to python3 2019-09-06 22:55:28 +01:00
Danil Semelenov
1382dba3c8 cmd: make autocomplete compatible with bash's posix mode #3489 2019-09-06 13:11:08 +01:00
Nick Craig-Wood
f1347139fa config: check config names more carefully and report errors - fixes #3506
Before this change it was possible to make a remote with an invalid
name in the config file, either manually or with `rclone config
create` (but not with `rclone config`).

When this remote was used, because it was invalid, rclone would
presume this remote name was a local directory for a very surprising
user experience!

This change checks remote names more carefully and returns errors
- when the user tries to use an invalid remote name on the command line
- when an invalid remote name is used in `rclone config create/update/password`
- when the user tries to enter an invalid remote name in `rclone config`

This does not prevent the user entering a remote name with invalid
characters in the config manually, but such a remote will fail
immediately when it is used on the command line.
2019-09-06 12:07:09 +01:00
Nick Craig-Wood
27a730ef8f docs: Update changelog and RELEASE.md from v1.49.1 release 2019-09-06 12:07:09 +01:00
Ivan Andreev
d0c6e5cf5a vfs: skip TestCaseSensitivity on case insensitive backends 2019-09-06 10:44:59 +01:00
Nick Craig-Wood
cf9b973fe4 accounting: fix locking in Transfer to avoid deadlock with --progress
Before this change, using -P occasionally deadlocked on the Transfer
mutex when Transfer.Done() was called with a non nil error and the
StatsInfo mutex since they mutually call each other.

This was fixed by making sure that the Transfer mutex is always
released before calling any StatsInfo methods.

This improves on: 6f87267b34

Fixes #3505
2019-09-06 10:00:44 +01:00
Nick Craig-Wood
ffa1dac10b build: apply gofmt from go1.13 to change case of number literals 2019-09-05 13:59:06 +01:00
Nick Craig-Wood
7b0966880e Add Ivan Andreev to contributors 2019-09-04 21:32:16 +01:00
Ivan Andreev
1c4e33d4ad vfs: add flag --vfs-case-insensitive for windows/macOS mounts
rclone mount, when run on Windows and macOS, will now default to `--vfs-case-insensitive`.
This means that file name lookups on the mount will succeed even if the case
of the requested name does not match the case stored on the remote.
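
A hedged usage sketch (mount point and remote name are placeholders); on other platforms the flag can be passed explicitly:
```
# Make file name lookups on the mount case-insensitive.
rclone mount remote: /mnt/data --vfs-case-insensitive
```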
2019-09-04 21:30:48 +01:00
Nick Craig-Wood
530ba66d35 operations: fix -u/--update with google photos / files of unknown size
Before this change if -u/--update was in effect we compared the size
of the files to see if the transfer should go ahead.  This was
comparing -1 with an actual size so the transfer always proceeded.

After this change we use the existing `sizeDiffers` function which
does the correct comparison with -1 for files of unknown length.

See: https://forum.rclone.org/t/sync-with-google-photos-to-local-drive-will-result-in-recoping/11605
2019-09-04 17:31:17 +01:00
Danil Semelenov
b3db38ae31 Disable __rclone_custom_func if posix mode is on
A workaround for #3489. Code in `__rclone_custom_func` relies on process substitutions `<(...)` to preserve changes of variables within `while` bodies, which is not supported in the posix mode.
2019-09-04 14:48:10 +01:00
Danil Semelenov
c0d1869204 Fix 'compopt: command not found' on autocomplete on macOS
As reported in #3489.
2019-09-04 14:47:26 +01:00
Nick Craig-Wood
89b6d89077 build: drop support for go1.9 2019-09-04 10:23:48 +01:00
Nick Craig-Wood
ef7b001626 build: update to use go1.13 for the build 2019-09-04 10:23:48 +01:00
Nick Craig-Wood
f97a3e853e Add Alfonso Montero to contributors 2019-09-03 17:26:13 +01:00
Denis
b71ac141cc copyurl: add --auto-filename flag for using file name from url in destination path (#3451) 2019-09-03 17:25:19 +01:00
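
A hedged example of the flag added in the commit above (URL and remote path are placeholders); with it the file name is taken from the URL, so the destination is given as a directory:
```
# Saves the download as remote:downloads/file.zip
rclone copyurl --auto-filename https://example.com/file.zip remote:downloads
```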
Nick Craig-Wood
5932acfee3 rc: fix docs for config/create /update /password 2019-09-03 08:34:15 +01:00
Alfonso Montero
e2ce687f93 README.md: Add Docker pulls badge 2019-09-02 15:14:45 +01:00
Nick Craig-Wood
a3fb460c6b docs: add info on how to build and use the docker images 2019-09-02 14:30:11 +01:00
Nick Craig-Wood
8d296d0e1d Start v1.49.1-DEV development 2019-09-02 13:10:47 +01:00
Nick Craig-Wood
20a57aaccb gcs: fix need for elevated permissions on SetModTime - fixes #3493
Before this change we used PATCH on the object to update the metadata.

Apparently this requires the "full_control" scope which Google were
unhappy with in their oauth review.

This changes it to update the metadata by copying the object on top of
itself (which is the way s3 works).  This can be done with normal
permissions.
2019-09-02 09:26:33 +01:00
Nick Craig-Wood
50a4ed8fc4 operations: fix accounting for server side copies
See: https://forum.rclone.org/t/b2-server-side-copy-doesnt-show-cumulative-progress/11154
2019-09-02 09:26:33 +01:00
Cnly
e2b5ed6c7a docs: fix template argument for mktemp in install.sh 2019-09-02 13:01:45 +08:00
Alfonso Montero
16e7da2cb5 Add Docker workflow support #3460
* Use a multi-stage build to reduce final image size.
* Run 'quicktest' make target before building.
* Built binary won't run on Alpine unless statically linked.
2019-08-29 11:04:57 +01:00
yparitcher
52df19ad34 allow usage of -short in the testing framework 2019-08-29 09:53:23 +01:00
Nick Craig-Wood
693112d57e config: Fix generated passwords being stored as empty password - Fixes #3492 2019-08-28 12:11:03 +01:00
Nick Craig-Wood
0edbc9578d googlephotos,onedrive: fix crash on error response - fixes #3491
This fixes a crash on the google photos backend when an error is
returned from the rest.Call function.

This turned out to be a mis-understanding of the rest docs so
- improved rest.Call docs
- fixed mis-understanding in google photos backend
- fixed similar mis-understading in onedrive backend
2019-08-28 12:11:03 +01:00
Chaitanya
7211c2dca7 rcd: Added missing parameter for web-gui info logs. 2019-08-27 17:21:21 +01:00
Nick Craig-Wood
af192d2507 vendor: update all dependencies 2019-08-26 18:00:17 +01:00
Nick Craig-Wood
d1a39dcc4b Start v1.49.0-DEV development 2019-08-26 17:44:09 +01:00
Nick Craig-Wood
a6387e1f81 Version v1.49.0 2019-08-26 15:25:20 +01:00
Nick Craig-Wood
a992a910ef rest: use readers.NoCloser to stop body being closed
Before this change, if you passed a io.ReadCloser to opt.Body then the
transaction would close it.  This happens as part of http.NewRequest
which documents that the io.Reader passed in will be upgraded to a
Closer if possible and closed as part of the Do call.

After this change, we wrap any io.ReadClosers to stop them being
upgraded.  This means that they will never get closed and that the
caller should always close them.

This fixes a panic in the googlephotos integration tests.
2019-08-26 12:23:31 +01:00
Nick Craig-Wood
ce3340621f lib/readers: add NoCloser to stop upgrades from io.Reader to io.ReadCloser 2019-08-26 12:23:31 +01:00
Nick Craig-Wood
73e010aff9 docs: make the config walkthroughs consistent for each backend 2019-08-26 10:47:17 +01:00
Nick Craig-Wood
a3faf98aa0 docs: add docs about GUI 2019-08-25 20:32:41 +01:00
Nick Craig-Wood
ed85092edb docs: remove social media tracking javascript and replace with links 2019-08-25 11:09:20 +01:00
Nick Craig-Wood
193c30d570 Review random string/password generation
- factor password generation into lib/random.Password
- call from appropriate places
- choose appropriate use of random.String vs random.Password
2019-08-25 11:09:19 +01:00
Nick Craig-Wood
beb8d5c134 docs: update analytics 2019-08-25 11:09:19 +01:00
Nick Craig-Wood
93810a739d docs: update fontawesome free to 5.10.2 and fixup broken images 2019-08-25 11:09:19 +01:00
Nick Craig-Wood
5d4d5d2b07 docs: update logo on website 2019-08-25 11:09:19 +01:00
Nick Craig-Wood
f02fc5d5b5 Add Andreas Chlupka to contributors 2019-08-25 11:09:19 +01:00
Andreas Chlupka
eab999f631 graphics: update rclone logos to new design
Committed-By: Nick Craig-Wood <nick@craig-wood.com>
2019-08-24 09:31:33 +01:00
Nick Craig-Wood
bd61eb89bc serve http/webdav/restic/rc: rename --prefix flag to --baseurl #3398
The name baseurl is widely accepted for this feature so I decided to
rename it before it made it into a stable release.
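
For illustration, a hedged example of the renamed flag (remote name, port and path prefix are placeholders):
```
# Serve the remote under /rclone/ rather than at the root of the server.
rclone serve webdav remote: --addr :8080 --baseurl /rclone
```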
2019-08-24 09:10:50 +01:00
Nick Craig-Wood
077b45322d vfs: fix --vfs-cache-mode minimal,writes ignoring cached files
Before this change, with --vfs-cache-mode minimal,writes if files were
opened they would always be read from the remote, regardless of
whether they were in the cache or not.

This change checks to see if the file is in the cache when opening a
file with --vfs-cache-mode >= minimal and if so then it uses it from
the cache.

This makes --vfs-cache-mode writes in particular much more
efficient. No longer is a file uploaded (with write mode) then
immediately downloaded (with read only mode).

Fixes #3330
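
A hedged mount sketch (mount point and remote name are placeholders) using the cache mode this change helps most:
```
# With writes mode, a file just uploaded through the mount is now reopened
# from the local cache instead of being downloaded again.
rclone mount remote: /mnt/data --vfs-cache-mode writes
```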
2019-08-23 13:58:15 +01:00
Nick Craig-Wood
67fae720d7 serve dlna: add more builtin mime types to cover standard audio/video
Add a minimal number of mime types to augment go's built in types
for environments which don't have access to a mime.types file (eg
Termux on android)

Fixes #3475
2019-08-23 13:30:48 +01:00
Nick Craig-Wood
39ae7c7ac0 serve dlna: fix missing mime types on Android causing missing videos
Before this fix serve dlna was only using the built in database of
mime types to look up the mime types of files.  On Android (and
possibly other systems) this is very small.

The symptoms of this problem was serve dlna only listing images and
not videos.

After this fix we use the backend's idea of the mime type if possible
which will be more accurate.

Fixes #3475
2019-08-23 13:30:48 +01:00
Nick Craig-Wood
f67798d73e Add Cenk Alti to contributors 2019-08-23 12:11:51 +01:00
Cenk Alti
a1ca65bd80 putio: add new backend 2019-08-23 12:11:36 +01:00
Cenk Alti
566aa0fca7 vendor: add github.com/putdotio/go-putio for putio client 2019-08-23 12:11:36 +01:00
Cenk Alti
8159658e67 hash: add CRC-32 support 2019-08-23 12:11:36 +01:00
Nick Craig-Wood
6f16588123 s3,b2,googlecloudstorage,swift,qingstor,azureblob: fixes after code review #3421
- change the interface of listBuckets() removing dir parameter and adding context
- add makeBucket() and use in place of Mkdir("")
    - this fixes some corner cases in Copy/Update
- mark all the listed buckets OK in ListR

Thanks to @yparitcher for the review.
2019-08-22 23:06:59 +01:00
Nick Craig-Wood
e339c9ff8f lib/bucket: shorten locking window where possible 2019-08-22 23:06:59 +01:00
Michał Matczuk
3247e69cf5 fs/rc/jobs: ExecuteJob propagate the error returned by function
Without this patch the resulting error is first converted to string and then recreated.
This makes it impossible to use the defined error types to figure out the cause of the error,
and may result in invalid HTTP status codes.

This patch adds a test TestExecuteJobErrorPropagation to validate that the errors are
properly propagated.
2019-08-22 16:10:48 +01:00
Nick Craig-Wood
341d880027 mount: remove nonseekable flag from write files - fixes #3461
Before this change rclone marked files opened for write without VFS
cache with the non seekable flag.

This caused problems with rclone mount layered with mergerfs.

This change removes the hint and lets rclone do all the checking for
seekability.
2019-08-22 13:13:59 +01:00
Nick Craig-Wood
941dde6940 fstest: clear the fs cache between test runs
The fs cache makes test runs no longer independent and this can cause
a problem with some tests.

Clearing the fs cache between tests runs fixes the problem.

This was spotted by @cenkalti as part of merging #3469
2019-08-22 11:57:35 +01:00
Nick Craig-Wood
40cc8180f0 lib/dircache: add a way to dump the DirCache for debugging 2019-08-22 11:57:35 +01:00
Chaitanya
159f2e29a8 rcd: prefix patch for rcd and web-gui 2019-08-22 08:36:10 +01:00
Chaitanya
efd826ad4b rcd: auto-login for web-gui
rcd: auto use authentication if none is provided for web-gui
2019-08-22 08:36:10 +01:00
Michal Matczuk
5d6593de4f * rc/jobs: Add SetInitialJobID function that allows for setting the jobID 2019-08-21 11:01:39 +01:00
Nick Craig-Wood
82c6c77e07 Add Patrick Wang to contributors 2019-08-20 17:46:13 +01:00
Patrick Wang
badc8b3293 mount: Fix typo in argument checking 2019-08-20 17:46:04 +01:00
Nick Craig-Wood
27a9d0f570 serve dlna: only select interfaces which can multicast for SSDP
Before this change we used all UP interfaces - now we need the
interfaces to be UP and MULTICAST capable.

See: https://forum.rclone.org/t/error-using-rclone-serve-dlna-on-termux/11083
2019-08-20 16:24:56 +01:00
Nick Craig-Wood
6ca00c21a4 mount: update docs to show mounting from root OK for bucket based #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
b619430bcf qingstor: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
8a0775ce3c azureblob: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
d8e9b1a67c gcs: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
e0e0e0c7bd b2: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
eaaf2ded94 s3: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
eaeef4811f swift: make all operations work from the root #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
d266a171c2 lib/bucket: utilities for dealing with bucket based backends #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
df8bdf0dcb fstests: add tests for operations from the root of the Fs #3421 2019-08-17 10:30:41 +01:00
Nick Craig-Wood
743dabf159 fstest: add precision to CompareItems so it works on non-local remotes 2019-08-17 10:30:38 +01:00
Nick Craig-Wood
9f549f848d fs: add feature flag BucketBasedRootOK #3421
This is for bucket based remotes which can be used from the root.
Eventually all bucket based remotes will support this.
2019-08-17 09:54:19 +01:00
Nick Craig-Wood
af3c47d282 fstest: remove -subdir flag as it no longer tests anything useful #3421 2019-08-17 09:54:19 +01:00
yparitcher
ba0e1ea6ae Docs: add emptydir support to table 2019-08-17 09:45:20 +01:00
yparitcher
82b3bfec3c fix empty dir test for object based remotes 2019-08-17 09:45:20 +01:00
buengese
898782ac35 help/showBackend: fixed advanced option category when there are no standard options 2019-08-15 11:46:56 +00:00
buengese
4e43fa746a jottacloud: update config docs 2019-08-15 11:46:56 +00:00
buengese
acc9dadcdc jottacloud: refactor configuration and minor cleanup 2019-08-15 11:46:56 +00:00
Michał Matczuk
712f7e38f7 backend/local: run the fadvise syscall on a dedicated goroutine
Before this change we issued an additional syscall periodically on a hot path.
This patch offloads the fadvise syscall to a dedicated goroutine.
2019-08-14 21:01:39 +01:00
Nick Craig-Wood
24161d12ab fs: make sure config is persisted to the config file when using config.Mapper 2019-08-14 20:54:08 +01:00
Nick Craig-Wood
fa539b9d9b sftp: save the md5/sha1 command in use to the config file 2019-08-14 20:54:08 +01:00
Nick Craig-Wood
3ea82032e7 sftp: support md5/sha1 with rsync.net #3254
rsync.net uses the FreeBSD equivalents of sha1sum and md5sum, so adapt
to that.
2019-08-14 20:54:08 +01:00
Nick Craig-Wood
71e172a139 serve/sftp: support empty "md5sum" and "sha1sum" commands
This is to enable the new command detection to work with the sftp
backend.
2019-08-14 20:54:08 +01:00
Nick Craig-Wood
6929f5d6e6 build: make azure pipelines stop if installs fail 2019-08-14 17:47:55 +01:00
Nick Craig-Wood
c2050172aa qingstor: upgrade to v3 SDK and fix listing loop 2019-08-14 16:15:34 +01:00
Nick Craig-Wood
a72ef7ca0e vendor: update github.com/yunify/qingstor-sdk-go to v3 2019-08-14 16:15:34 +01:00
Nick Craig-Wood
b84cc0cae7 vendor: run go tidy and go vendor 2019-08-14 16:15:34 +01:00
Nick Craig-Wood
93228dfcc9 operations: debug successful hashes as well as failures #3419 2019-08-14 15:07:38 +01:00
Nick Craig-Wood
eb087a3b04 operations: disable multi thread copy for local to local copies #3419
...unless --multi-thread-streams has been set explicitly
2019-08-14 15:07:38 +01:00
Nick Craig-Wood
ec8e0a6c58 fstest/mockobject: add SetFs method so it can have a valid Fs() #3419 2019-08-14 15:07:38 +01:00
Nick Craig-Wood
f0e0d6cc3c fs: add IsLocal feature to identify local backend #3419 2019-08-14 15:07:38 +01:00
Nick Craig-Wood
752d43d6fa fs: Implement UnWrapObject and UnWrapFs 2019-08-14 15:07:38 +01:00
Nick Craig-Wood
7c146e2618 operations: check transfer hashes when using --size-only mode #3419
Before this change we didn't calculate or check hashes of transferred
files if --size-only mode was explicitly set.

This problem was introduced in 20da3e6352 which was released with v1.37

After this change hashes are checked for all transfers unless
--ignore-checksum is set.
2019-08-14 15:07:38 +01:00
Nick Craig-Wood
f9ceade9b4 operations: don't calculate checksums when using --ignore-checksum #3419
Before this change we calculated the checksums when using
--ignore-checksum but ignored them at the end.

Now we don't calculate the checksums at all which is more efficient.
2019-08-14 15:07:38 +01:00
Nick Craig-Wood
ae9c0e56c8 operations: run hashing operations in parallel #3419
Before this change, for a post-copy hash check, we would run the hashes sequentially.

Now we run the hashes in parallel for a useful speedup.

Note that this refactors the hash check in Copy to use the standard
hash checking routine.
2019-08-14 15:07:38 +01:00
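As a rough illustration of the parallel hashing described in the commit above, here is a minimal, self-contained sketch (not rclone's actual code; the MD5 hash and in-memory readers are placeholders) that computes the source and destination hashes concurrently with golang.org/x/sync/errgroup and compares them once both finish.

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"strings"

	"golang.org/x/sync/errgroup"
)

// hashOf streams r through MD5 and returns the hex digest.
func hashOf(r io.Reader) (string, error) {
	h := md5.New()
	if _, err := io.Copy(h, r); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	var srcHash, dstHash string
	var g errgroup.Group
	// Hash the source and destination at the same time rather than
	// one after the other.
	g.Go(func() (err error) {
		srcHash, err = hashOf(strings.NewReader("file contents"))
		return err
	})
	g.Go(func() (err error) {
		dstHash, err = hashOf(strings.NewReader("file contents"))
		return err
	})
	if err := g.Wait(); err != nil {
		fmt.Println("hashing failed:", err)
		return
	}
	fmt.Println("hashes match:", srcHash == dstHash)
}
```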
Nick Craig-Wood
402aaca7fe local: don't calculate any hashes by default #3419
Before this change, if the caller didn't provide a hint, we would
calculate all hashes for reads and writes.

The new whirlpool hash is particularly expensive and that has become noticeable.

Now we don't calculate any hashes on upload or download unless hints are provided.

This means that some operations may run slower and these will need to be discovered!

It does not affect anything calling operations.Copy which already puts
the correct hints in.
2019-08-14 15:07:38 +01:00
Nick Craig-Wood
106cf1852d Add ginvine to contributors 2019-08-14 13:40:15 +01:00
Nick Craig-Wood
50b8f15b5d Add another email for Laura Hausmann to contributors 2019-08-14 13:40:07 +01:00
ginvine
1e7bc359be drive: Add error for purge with --drive-trashed-only - fixes #3407
Purge should not be used with the --drive-trashed-only flag as it leads to
unexpected behavior. After this commit, if the TrashedOnly option is set to
true, an error message is returned.

See also: https://forum.rclone.org/t/drive-trashed-only-weird-occurrence/11066/14
2019-08-14 13:34:52 +01:00
Nick Craig-Wood
23a0332185 config: don't offer hidden values for editing in the config - fixes #3416 2019-08-14 08:40:22 +01:00
buengese
6812844b3d march: Fix checking sub-directories when using --no-traverse 2019-08-13 19:30:56 +01:00
buengese
3a04d0d1a9 march: rework testcases to better reflect real use 2019-08-13 19:30:56 +01:00
buengese
6f4b86e569 jottacloud: use new api for retrieving internal username - fixes #3434 2019-08-13 17:18:14 +00:00
Laura Hausmann
9aa889bfa2 fichier: fix character encoding for file names, fixes rclone#3298 2019-08-13 16:56:59 +01:00
Nick Craig-Wood
8247c8a6af rc: add anchor tags to the docs so links are consistent 2019-08-13 11:57:01 +01:00
Nick Craig-Wood
535f5f3c99 rc: fix --loopback with rc/list and others
Before this change `rclone rc --loopback` would give the error "bad
JSON".

This was because the output of the `rc/list` command was not serialized
through JSON.

This serializes it through JSON and fixes that command (and probably
others).
2019-08-13 11:51:16 +01:00
Nick Craig-Wood
7f7946564d error: make "bad record MAC" a retriable error - Fixes #3338
The error "tls: bad record MAC" is very likely to be caused by
hardware issues.  It indicates that a packet got corrupted somewhere.

As a work around, this change treats it as retriable error which
allows the chunk to get retried and the transfer to continue.
2019-08-12 20:37:10 +01:00
Chaitanya
bbb8d43716 rc: (docs) Add new parameters --rc-web-gui, --rc-allow-origin, --rc-web-fetch-url and --rc-web-gui-update to the documentation. 2019-08-12 19:04:12 +01:00
Nick Craig-Wood
5e0a30509c http: add --http-headers flag for setting arbitrary headers 2019-08-12 18:04:24 +01:00
Nick Craig-Wood
cd7ca2a320 googlephotos: implement optional features UserInfo and Disconnect
As part of rclone's UX review it was required that rclone had a means
of disconnecting from google photos and showing which user is
connected.
2019-08-12 13:49:23 +01:00
Nick Craig-Wood
a808e98fe1 config: add reconnect, userinfo and disconnect subcommands.
- reconnect runs through the oauth flow again.
- userinfo shows the connected user info if available
- disconnect revokes the token
2019-08-12 13:49:23 +01:00
Nick Craig-Wood
3ebcb555f4 fs: add optional features UserInfo and Disconnect 2019-08-12 13:49:23 +01:00
Nick Craig-Wood
a1263e70cf premiumizeme: new backend for premiumize.me - Fixes #3063 2019-08-10 19:17:51 +01:00
Nick Craig-Wood
f47e5220a2 Add Abhinav Sharma to contributors 2019-08-10 17:31:25 +01:00
Abhinav Sharma
4db742dc77 oauthutil: note that the same version is recommended for remote auth 2019-08-10 17:31:08 +01:00
Nick Craig-Wood
3ecbd603ab rc: move job expire flags to rc to fix initialization problem
See: https://forum.rclone.org/t/rc-rc-job-expire-interval-bug/11188

rclone was ignoring the --rc-job-expire-duration and --rc-job-interval
flags.  This turned out to be an initialization order problem and was
fixed by moving those flags out of global config into rc config.
2019-08-10 17:12:22 +01:00
Nick Craig-Wood
0693deea1c rc: fix unmarshalable http.AuthFn in options and put in test for marshalability 2019-08-10 16:22:17 +01:00
Nick Craig-Wood
99eaa76dc8 Add Macavirus to contributors 2019-08-10 14:13:24 +01:00
Macavirus
ba3b0a175e docs: Add rsync.net stub link to SFTP page 2019-08-10 14:13:15 +01:00
Macavirus
01c0c0b009 docs: Add C14 Cold Storage to homepage and SFTP backend 2019-08-10 14:13:15 +01:00
Nick Craig-Wood
7d85ccb11e fs/cache: test for fix cached values pointing to files #3424 2019-08-10 08:39:56 +01:00
buengese
0c1eaf1bcb cache: correctly handle fs.ErrorIsFile in GetFn - fixes #3424 2019-08-09 21:45:46 +00:00
Chaitanya
873e87fc38 rc: WebGUI should check for a new update only when --rc-web-gui-update is specified or the update is not already downloaded.

rc: change the permission to 0755 instead of 755 to prevent unexpected behaviour.
2019-08-09 15:14:52 +01:00
Chaitanya
33677ff367 rc: Add a command line parameter to control cross-origin resource sharing (CORS) in the rcd (security improvement).

Also includes import statement changes and a fix for a problem with the tests.
2019-08-09 15:14:52 +01:00
Nick Craig-Wood
5195075677 Add Michał Matczuk to contributors 2019-08-08 23:42:03 +01:00
Michał Matczuk
f396550934 backend/local: Avoid polluting page cache when uploading local files to remote backends
This patch makes rclone keep Linux page cache usage under control when
uploading local files to remote backends. When opening a file it issues
FADV_SEQUENTIAL to configure the read-ahead strategy. While reading
the file it issues FADV_DONTNEED every 128kB to free page cache from
already consumed pages.

```
fadvise64(5, 0, 0, POSIX_FADV_SEQUENTIAL) = 0
read(5, "\324\375\251\376\213\361\240\224>\t5E\301\331X\274^\203oA\353\303.2'\206z\177N\27fB"..., 32768) = 32768
read(5, "\361\311\vW!\354_\317hf\276t\307\30L\351\272T\342C\243\370\240\213\355\210\v\221\201\177[\333"..., 32768) = 32768
read(5, ":\371\337Gn\355C\322\334 \253f\373\277\301;\215\n\240\347\305\6N\257\313\4\365\276ANq!"..., 32768) = 32768
read(5, "\312\243\360P\263\242\267H\304\240Y\310\367sT\321\256\6[b\310\224\361\344$Ms\234\5\314\306i"..., 32768) = 32768
fadvise64(5, 0, 131072, POSIX_FADV_DONTNEED) = 0
read(5, "m\251\7a\306\226\366-\v~\"\216\353\342~0\fht\315DK0\236.\\\201!A#\177\320"..., 32768) = 32768
read(5, "\7\324\207,\205\360\376\307\276\254\250\232\21G\323n\255\354\234\257P\322y\3502\37\246\21\334^42"..., 32768) = 32768
read(5, "e{*\225\223R\320\212EG:^\302\377\242\337\10\222J\16A\305\0\353\354\326P\336\357A|-"..., 32768) = 32768
read(5, "n\23XA4*R\352\234\257\364\355Y\204t9T\363\33\357\333\3674\246\221T\360\226\326G\354\374"..., 32768) = 32768
fadvise64(5, 131072, 131072, POSIX_FADV_DONTNEED) = 0
read(5, "SX\331\251}\24\353\37\310#\307|h%\372\34\310\3070YX\250s\2269\242\236\371\302z\357_"..., 32768) = 32768
read(5, "\177\3500\236Y\245\376NIY\177\360p!\337L]\2726\206@\240\246pG\213\254N\274\226\303\357"..., 32768) = 32768
read(5, "\242$*\364\217U\264]\221Y\245\342r\t\253\25Hr\363\263\364\336\322\t\325\325\f\37z\324\201\351"..., 32768) = 32768
read(5, "\2305\242\366\370\203tM\226<\230\25\316(9\25x\2\376\212\346Q\223 \353\225\323\264jf|\216"..., 32768) = 32768
fadvise64(5, 262144, 131072, POSIX_FADV_DONTNEED) = 0
```

Page cache consumption per file can be checked with tools like [pcstat](https://github.com/tobert/pcstat).

This patch does not have a performance impact. Please find below results
of an experiment comparing local copy of 1GB file with and without this
patch.

With the patch:

```
(mmt/fadvise)$ pcstat 1GB.bin.1
+-----------+----------------+------------+-----------+---------+
| Name      | Size (bytes)   | Pages      | Cached    | Percent |
|-----------+----------------+------------+-----------+---------|
| 1GB.bin.1 | 1073741824     | 262144     | 0         | 000.000 |
+-----------+----------------+------------+-----------+---------+
(mmt/fadvise)$ taskset -c 0 /usr/bin/time -v ./rclone copy 1GB.bin.1 /var/empty/rclone
        Command being timed: "./rclone copy 1GB.bin.1 /var/empty/rclone"
        User time (seconds): 13.19
        System time (seconds): 1.12
        Percent of CPU this job got: 96%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:14.81
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 27660
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 2212
        Voluntary context switches: 5755
        Involuntary context switches: 9782
        Swaps: 0
        File system inputs: 4155264
        File system outputs: 2097152
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0
(mmt/fadvise)$ pcstat 1GB.bin.1
+-----------+----------------+------------+-----------+---------+
| Name      | Size (bytes)   | Pages      | Cached    | Percent |
|-----------+----------------+------------+-----------+---------|
| 1GB.bin.1 | 1073741824     | 262144     | 0         | 000.000 |
+-----------+----------------+------------+-----------+---------+
```

Without the patch:

```
(master)$ taskset -c 0 /usr/bin/time -v ./rclone copy 1GB.bin.1 /var/empty/rclone
        Command being timed: "./rclone copy 1GB.bin.1 /var/empty/rclone"
        User time (seconds): 14.46
        System time (seconds): 0.81
        Percent of CPU this job got: 93%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:16.41
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 27600
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 2228
        Voluntary context switches: 7190
        Involuntary context switches: 1980
        Swaps: 0
        File system inputs: 2097152
        File system outputs: 2097152
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0
(master)$ pcstat 1GB.bin.1
+-----------+----------------+------------+-----------+---------+
| Name      | Size (bytes)   | Pages      | Cached    | Percent |
|-----------+----------------+------------+-----------+---------|
| 1GB.bin.1 | 1073741824     | 262144     | 262144    | 100.000 |
+-----------+----------------+------------+-----------+---------+
```
2019-08-08 23:41:52 +01:00
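The pattern described above can be sketched in a few lines of Linux-only Go (an approximation rather than the rclone code, and without the dedicated goroutine added in the follow-up commit): advise sequential access when the file is opened, then drop already-read pages from the page cache in 128 kB windows. The file name is just the example from the benchmark.

```go
package main

import (
	"io"
	"log"
	"os"

	"golang.org/x/sys/unix"
)

const window = 128 * 1024 // drop pages from the cache in 128 kB windows

func main() {
	f, err := os.Open("1GB.bin.1")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	fd := int(f.Fd())
	// Hint that the file will be read sequentially (read-ahead).
	_ = unix.Fadvise(fd, 0, 0, unix.FADV_SEQUENTIAL)

	buf := make([]byte, 32*1024)
	var offset, dropped int64
	for {
		n, err := f.Read(buf)
		offset += int64(n)
		// Tell the kernel we are done with the pages we have read.
		for offset-dropped >= window {
			_ = unix.Fadvise(fd, dropped, window, unix.FADV_DONTNEED)
			dropped += window
		}
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
	}
}
```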
Nick Craig-Wood
6f87267b34 accounting: fix locking in Transfer to avoid deadlock with --progress
Before this change, using -P occasionally deadlocked on the transfer
mutex and the stats mutex since they call each other via the progress
printing.

This is fixed by shortening the locking windows and converting the
mutex to a RW mutex.
2019-08-08 15:46:46 +01:00
Nick Craig-Wood
9d1fb2f4e7 Revert "cmd: shorten the locking window when using --progress to avoid deadlock"
This reverts commit fdef567da6.

The problem turned out to be elsewhere.
2019-08-08 15:19:41 +01:00
Nick Craig-Wood
99b3154abd Revert "filter: Add BoundedRecursion method"
This reverts commit 047f00a411.

It turns out that BoundedRecursion is the wrong thing to measure.
2019-08-08 14:15:50 +01:00
Nick Craig-Wood
6c38bddf3e walk: fix listing with filters listing whole remote
Prior to this fix, a request such as

    rclone lsf -R --include "/dir/**" remote:

Would use ListR which is very inefficient as it lists the whole remote
for one directory.

This changes it to use recursive walking if the filters imply any
directory filtering.  So `--include *.jpg` and `--exclude *.jpg` will
still use ListR whereas `--include "/dir/**"` will not.
2019-08-08 14:15:50 +01:00
Nick Craig-Wood
a00a0471a8 filter: Add UsesDirectoryFilters method 2019-08-08 14:15:50 +01:00
Nick Craig-Wood
9e81fc343e swift: fix upload when using no_chunk to return the correct size
When using the VFS with swift and --swift-no-chunk, PutStream was
returning objects with size -1 which was causing corrupted transfer
messages.

This was fixed by counting the bytes transferred in a streamed file
and updating the metadata with that.
2019-08-08 12:41:46 +01:00
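The byte-counting part of that fix can be illustrated with a small standalone sketch (the countingReader type is invented and io.Discard stands in for the actual upload): wrap the stream, consume it, and use the counted total instead of -1 when updating the object metadata.

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// countingReader counts the bytes that pass through it.
type countingReader struct {
	r io.Reader
	n int64
}

func (c *countingReader) Read(p []byte) (int, error) {
	n, err := c.r.Read(p)
	c.n += int64(n)
	return n, err
}

func main() {
	in := &countingReader{r: strings.NewReader("streamed object body")}
	// Stand-in for streaming the object to the backend.
	if _, err := io.Copy(io.Discard, in); err != nil {
		fmt.Println("upload failed:", err)
		return
	}
	// After the upload completes, the real size is known and can be
	// written back into the object metadata.
	fmt.Println("uploaded bytes:", in.n)
}
```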
Nick Craig-Wood
fdef567da6 cmd: shorten the locking window when using --progress to avoid deadlock
Before this change, using -P occasionally deadlocked on the progress
mutex and the stats mutex since they call each other.

This is fixed by shortening the locking window in the progress routine
so as not to include the stats calculation.
2019-08-08 12:37:50 +01:00
Nick Craig-Wood
d377842395 vfs: make write without cache more efficient
This updates the out of sequence write code to be more efficient using
a conditional lock with a timeout.
2019-08-08 12:37:50 +01:00
Nick Craig-Wood
c014b2e66b rcat: fix slowdown on systems with multiple hashes
Before this fix rclone calculated all the hashes on transfer.  This
was particularly slow for the local backend.

After the fix we just calculate one hash which is enough for data
integrity.
2019-08-08 12:37:50 +01:00
Nick Craig-Wood
62b769a0a7 serve sftp: fix spurious debugs on server close 2019-08-08 12:37:50 +01:00
Nick Craig-Wood
84b5da089e serve sftp: fix detection of whether server is authorized 2019-08-08 12:37:50 +01:00
Nick Craig-Wood
d0c65b4c5e copyurl: fix copying files that return HTTP errors 2019-08-07 22:29:44 +01:00
Nick Craig-Wood
e502be475a azureblob/b2/dropbox/gcs/koofr/qingstor/s3: fix 0 length files
In 0386d22cc9 we introduced a test for reading 0 length files the
way mount does.

This test failed on these backends which we fix up here.
2019-08-06 15:18:08 +01:00
negative0
27a075e9fc rcd: Removed the shorthand for webgui. Shorthand is reserved for rsync compatibility. 2019-08-06 12:50:31 +01:00
Nick Craig-Wood
5065c422b4 lib/random: unify random string generation into random.String
This was factored from fstest as we were including the testing
environment in the main binary because of it.

This was causing opening the browser to fail because of 8243ff8bc8.
2019-08-06 12:44:08 +01:00
Nick Craig-Wood
72d5b11d1b serve restic: rename test file to avoid it being linked into main binary 2019-08-06 12:42:52 +01:00
Nick Craig-Wood
526a3347ac rcd: Fix permissions problems on cache directory with web gui download 2019-08-06 12:06:57 +01:00
Nick Craig-Wood
23910ba53b servetest: add tests for --auth-proxy 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
ee7101e6af serve: factor out common testing parts for ftp, sftp and webdav tests 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
36c1b37dd9 serve webdav: support --auth-proxy 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
72782bdda6 serve ftp: implement --auth-proxy 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
b94eef16c1 serve ftp: refactor to bring into line with other serve commands 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
d75fbe4852 serve sftp: implement auth proxy 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
e6ab237fcd serve: add auth proxy infrastructure 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
a7eec91d69 vfs: add Fs() method to return underlying fs.Fs 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
b3e94b018c cache: factor fs cache into lib/cache 2019-08-06 11:43:42 +01:00
Nick Craig-Wood
ca0e9ea55d build: add Azure Pipelines build status to README 2019-08-06 10:46:36 +01:00
Nick Craig-Wood
53e3c2e263 build: add azure pipelines build 2019-08-06 10:31:32 +01:00
Nick Craig-Wood
02eb747d71 serve http/webdav/restic: implement --prefix - fixes #3398
--prefix enables the servers to serve from a non-root prefix.  This
enables easier proxying.
2019-08-06 10:30:48 +01:00
Chaitanya Bankanhal
d51a970932 rcd: Change URL after webgui move to rclone organization 2019-08-05 16:22:40 +01:00
Nick Craig-Wood
a9438cf364 build: add .gitattributes to mark generated files
This makes sure that GitHub ignores the auto generated documentation
files for language detection and diffs.

See: https://github.com/github/linguist#overrides for more info
2019-08-04 15:20:15 +01:00
Nick Craig-Wood
5ef3c988eb bin: add script to test all commits compile for git bisect 2019-08-04 13:29:59 +01:00
Nick Craig-Wood
78150e82a2 docs: update bugs and limitations document 2019-08-04 12:33:39 +01:00
Nick Craig-Wood
6f0cc51eeb Add Chaitanya Bankanhal to contributors 2019-08-04 12:33:39 +01:00
Chaitanya Bankanhal
84e2806c4b rc: Rclone-WebUI integration with rclone
This adds experimental support for web GUI integration so that rclone can fetch and run a web based GUI using the --rc-web-gui and related flags.

It downloads and caches a webui zip file which it then unpacks and opens in the browser.
2019-08-04 12:32:37 +01:00
Nick Craig-Wood
0386d22cc9 vfs: add test for 0 length files read in the way mount does 2019-08-03 18:25:44 +01:00
Nick Craig-Wood
0be14120e4 swift: use FixRangeOption to fix 0 length files via the VFS 2019-08-03 18:25:44 +01:00
Nick Craig-Wood
95af1f9ccf fs: fix FixRangeOption so it works with 0 length files 2019-08-03 18:25:44 +01:00
Nick Craig-Wood
629b7eacd8 b2: fix integration tests after accounting changes
In 53a1a0e3ef we started returning non-nil from NewObject when
an object isn't found.  This breaks the integration tests and the API
expected of a backend.

This fixes it.
2019-08-03 13:30:31 +01:00
yparitcher
d3149acc32 b2: link sharing 2019-08-03 13:30:31 +01:00
Aleksandar Jankovic
6a3e301303 accounting: add call to clear stats
- Make calls more consistent by changing path to kebab case.
- Add stacktrace information to job panics
2019-08-02 16:56:19 +01:00
Nick Craig-Wood
5be968c0ca drive: update API for teamdrive use - fixes #3348 2019-08-02 16:06:23 +01:00
Nick Craig-Wood
f1a687c540 Add justina777 to contributors 2019-08-02 15:57:25 +01:00
justina777
94ee43fe54 log: add object and objectType to json logs 2019-08-02 15:57:09 +01:00
Nick Craig-Wood
c2635e39cc build: fix appveyor secure variables after project move 2019-07-28 22:46:26 +01:00
Nick Craig-Wood
8c511ec9cd docs: fix star count on website 2019-07-28 20:58:21 +01:00
Nick Craig-Wood
ac0dce78d0 cmd: fix up stats printing on macOS after accounting change 2019-07-28 20:38:20 +01:00
Nick Craig-Wood
f347514f62 build: fix up CI and CI badges after repo move 2019-07-28 20:07:04 +01:00
Nick Craig-Wood
57d5de6fba build: fix up package paths after repo move
git grep -l github.com/ncw/rclone | xargs -d'\n' perl -i~ -lpe 's|github.com/ncw/rclone|github.com/rclone/rclone|g'
goimports -w `find . -name \*.go`
2019-07-28 18:47:38 +01:00
Aleksandar Jankovic
4ba6532915 accounting: make stats response consistent
core/stats can return two different schemas in the 'transferring' field.
One is an object with fields, the other is just a plain string.
This is confusing, unnecessary and makes defining a response schema
more difficult. It also returns `lastError` as a value which can be
rendered differently depending on the source of the error.

This change standardizes the 'transferring' field to always return an
object, but with reduced fields if they are not available.
The former string item is converted to a {name:remote_name} object.

'lastError' is forced to be a string as in some cases it can be encoded
as an object.
2019-07-28 14:48:19 +01:00
Aleksandar Jankovic
ff235e4e56 docs: update documentation for stats 2019-07-28 14:48:19 +01:00
Aleksandar Jankovic
68e641f6cf accounting: add limits and listing to stats groups 2019-07-28 14:48:19 +01:00
Aleksandar Jankovic
53a1a0e3ef accounting: add reference to completed transfers
Add core/transferred call that lists completed transfers and their
status.
2019-07-28 14:48:19 +01:00
Aleksandar Jankovic
8243ff8bc8 accounting: isolate stats to groups
Introduce stats groups that will isolate accounting for logically
different transferring operations. That way multiple accounting
operations can be done in parallel without interfering with each
other's stats.

Using groups is optional. There is a dedicated global stats group that
will be used by default if no group is specified. This is the operating
mode for CLI usage, which is just a fire and forget operation.

When running rclone as an rc http server each request will create its
own group. There is also an option to specify your own group.
2019-07-28 14:48:19 +01:00
Aleksandar Jankovic
be0464f5f1 accounting: change stats interface
This is done to make ownership of the accounting object clear and to
prepare for removing the global stats object.

Stats elapsed time calculation has been altered to account for actual
transfer time instead of stats creation time.
2019-07-28 14:48:19 +01:00
Nick Craig-Wood
2d561b51db Add EliEron to contributors 2019-07-28 12:33:21 +01:00
Nick Craig-Wood
9241a93c2d Add justinalin to contributors 2019-07-28 12:33:21 +01:00
EliEron
fb32f77bac Docs: Fix typos in filtering demonstrations 2019-07-28 12:32:53 +01:00
justinalin
520fb03bfd log: add --use-json-log for JSON logging 2019-07-28 12:05:50 +01:00
justinalin
a3449bda30 vendor: add github.com/sirupsen/logrus 2019-07-28 12:05:50 +01:00
yparitcher
ccc416e62b b2: Fix link sharing #3314 2019-07-28 11:47:31 +01:00
jaKa
a35aa1360e Support setting modification times on Koofr backend.
A configuration time option disables the above if using Dropbox (which does not
allow setting mtime on copy) or Amazon Drive (neither on upload nor on copy).
2019-07-24 21:11:58 +01:00
jaKa
3df9dbf887 vendor: updated github.com/koofr/go-koofrclient for set mtime support. 2019-07-24 21:11:58 +01:00
Nick Craig-Wood
9af0a704af Add Paul Millar to contributors 2019-07-24 20:34:39 +01:00
Nick Craig-Wood
691e5ae5f0 Add Yi FU to contributors 2019-07-24 20:34:39 +01:00
Nick Craig-Wood
5a44bafa4e fstest: add fs.ErrorCantShareDirectories for backends which can only share files 2019-07-24 20:34:29 +01:00
Nick Craig-Wood
8fdce31700 config: Fix hiding of options from the configurator 2019-07-24 20:34:29 +01:00
Nick Craig-Wood
493dfb68fd opendrive: refactor to use existing lib/rest facilities for uploads
This also checks the return of the call to make sure the number of
bytes written was as expected.
2019-07-24 20:34:29 +01:00
Nick Craig-Wood
71587344c6 lib/rest: allow Form upload with no file to upload 2019-07-24 20:34:29 +01:00
yparitcher
8e8b78d7e5 Implement --compare-dest & --copy-dest Fixes #3278 2019-07-22 19:42:29 +01:00
Nick Craig-Wood
266600dba7 build: reduce parallelism in cross compile to reduce memory and fix Travis
Before this change Travis builds were running out of memory when cross
compiling all the OSes.
2019-07-22 17:10:26 +01:00
Paul Millar
e4f6ccbff2 webdav: add docs for using bearer_token_command with oidc-agent 2019-07-22 16:01:55 +01:00
Nick Craig-Wood
1f1ab179a6 webdav: refresh token when it expires with --webdav-bearer-token-command
Fixes #2380
2019-07-22 16:01:55 +01:00
Nick Craig-Wood
c642531a1e webdav: add --webdav-bearer-token-command - fixes #2380
This can be used with oidc-agent to get a bearer token
2019-07-22 15:59:54 +01:00
buengese
19ae053168 rcserver: remove _async key from input parameters after parsing so later operations won't get confused - fixes #3346 2019-07-20 19:35:10 +02:00
buengese
def790986c fichier: make FolderID int and adjust related code - fixes #3359 2019-07-20 02:49:08 +02:00
Yi FU
0a1169e659 ssh: opt-in support for diffie-hellman-group-exchange-sha256 diffie-hellman-group-exchange-sha1 - fixes #1810 2019-07-13 12:21:56 +02:00
Nick Craig-Wood
5433021e8b drive: fix server side copy of big files
Before this change rclone was sending a MimeType in the requests for
server side Move and Copy.

The conjecture is that if you attempt to set the MimeType to something
different in a Copy then Google Drive has to do an actual copy of the
file data.  This takes a very long time (since it is large) and fails
after a 90s timeout.

After the change we no longer set the MimeType in Move or Copy and the
copies happen instantly and correctly.

Many thanks to @darthShadow for discovering that this was causing the
problem.

Fixes #3070
Fixes #3033
Fixes #3300
Fixes #3155
2019-07-05 10:49:19 +01:00
Nick Craig-Wood
c9f77719e4 b2: enable server side copy to copy between buckets - fixes #3303 2019-07-05 10:07:05 +01:00
Nick Craig-Wood
3cd63a00be googlephotos: fix configuration walkthrough 2019-07-04 15:19:59 +01:00
Nick Craig-Wood
d7016866e0 googlephotos: fix creation of duplicated albums
Also make sure we don't list the albums twice
2019-07-04 13:45:52 +01:00
yparitcher
d72e4105fb b2: Fix link sharing #3314 2019-07-04 11:53:59 +01:00
yparitcher
b4266da4eb sync: fix SyncSuffix tests #3272 2019-07-03 17:36:22 +01:00
yparitcher
3f5767b94e b2: Implement link sharing #2178 2019-07-03 14:10:25 +01:00
Nick Craig-Wood
1510e12659 build: move race test to go modules build
In an attempt to even out the build times.
2019-07-03 12:07:29 +01:00
Nick Craig-Wood
ede03258bc build: use go modules proxy when building modules 2019-07-03 12:07:29 +01:00
Nick Craig-Wood
7fcbb47b1c build: split other OS build into a separate builder
This is in order to make the longest build (the Linux build) quicker
2019-07-03 12:07:29 +01:00
Nick Craig-Wood
9cafeeb4b6 dirtree: make tests more reliable 2019-07-02 16:29:40 +01:00
Nick Craig-Wood
a1cfe61ffd googlephotos: Backend for accessing Google Photos #369 2019-07-02 15:26:55 +01:00
Nick Craig-Wood
5eebbaaac4 test_all: add tests parameter to limit which tests to run for a backend 2019-07-02 15:26:55 +01:00
Nick Craig-Wood
bc70bff125 fs/dirtree: factor DirTree out of fs/walk and add tests 2019-07-02 15:26:55 +01:00
Nick Craig-Wood
cf15b88efa build: make explicit which matrix items we will deploy 2019-07-02 14:13:46 +01:00
Nick Craig-Wood
dcaee0016a build: add a builder for building with go modules 2019-07-02 12:33:03 +01:00
Nick Craig-Wood
387b496d1e operations: fix tests TestMoveFileBackupDir and TestCopyFileBackupDir again
Commit 734f504d5f wasn't tested properly and had a typo which
caused it not to build :-(
2019-07-02 12:22:29 +01:00
Nick Craig-Wood
734f504d5f operations: fix tests TestMoveFileBackupDir and TestCopyFileBackupDir
..so they don't run on backends which can't move or copy.
2019-07-02 10:46:49 +01:00
Dan Dascalescu
7153909390 docs: fix typo in google drive docs 2019-07-02 10:10:08 +01:00
Russell Davis
ea35e807db docs: clarify --update and --use-server-mod-time
It's likely a mistake to use `--use-server-modtime` if you're not also using `--update`. It might even make sense to emit a warning in the code when doing this, but for now, I made it more clear in the docs.

I also clarified how `--use-server-modtime` can be useful in the `--update` section.
2019-07-02 10:09:03 +01:00
Nick Craig-Wood
5df5a3b78e vendor: tidy go.mod and go.sum - fixes #3317 2019-07-02 09:47:00 +01:00
Nick Craig-Wood
37c1144b46 Add Russell Davis to contributors 2019-07-02 07:57:18 +01:00
Nick Craig-Wood
8d116ba0c9 Add Matti Niemenmaa to contributors 2019-07-02 07:57:18 +01:00
Russell Davis
6a3c3d9b89 Update docs on S3 policy to include ListAllMyBuckets permission
This permission is required for `rclone lsd`.
2019-07-02 07:56:54 +01:00
Matti Niemenmaa
a6dca4c13f s3: Add INTELLIGENT_TIERING storage class
For Intelligent-Tiering:
https://aws.amazon.com/s3/storage-classes/#Unknown_or_changing_access
2019-07-01 18:17:48 +01:00
Aleksandar Jankovic
cc0800a72e march: return errors when listing dirs
Partially fixes #3172
2019-07-01 15:32:46 +01:00
Nick Craig-Wood
1be1fc073e Add AbelThar to contributors 2019-07-01 12:09:35 +01:00
AbelThar
70c6b01f54 fs: Higher units for ETA - fixes #3221 2019-07-01 12:09:19 +01:00
Aleksandar Jankovic
7b2b396d37 context: fix errgroup interaction with context
Fixes #3307
2019-07-01 11:51:51 +01:00
Nick Craig-Wood
af2596f98b Add yparitcher to contributors 2019-07-01 11:13:53 +01:00
Nick Craig-Wood
61fb326a80 Add full name for Laura Hausmann 2019-07-01 10:53:47 +01:00
yparitcher
de14378734 Implement --suffix without --backup-dir for current dir
Fixes #2801
2019-07-01 10:46:26 +01:00
yparitcher
eea1b6de32 Abstract the --backup-dir checks so they can be applied across Sync, Copy and Move 2019-07-01 10:46:26 +01:00
Nick Craig-Wood
6bae3595a8 Add Laura to contributors 2019-06-30 20:07:35 +01:00
Laura
dde4dd0198 fichier: 1fichier support - fixes #2908
This was started by Fionera, finished off by Laura with fixes and more
docs from Nick.

Co-authored-by: Fionera <fionera@fionera.de>
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2019-06-30 18:35:01 +01:00
Laura
2d0e9885bd vendor: add jzelinskie/whirlpool 2019-06-30 18:35:01 +01:00
Nick Craig-Wood
9ed81ac451 vfs: fix tests for backends which can't upload 0 length files 2019-06-30 18:35:01 +01:00
Nick Craig-Wood
3245c0ae0d fstests: add integration test for uploading empty files
This tests that a remote can upload empty files.  If the remote can't
upload empty files it should return fs.ErrorCantUploadEmptyFiles.
2019-06-30 18:35:01 +01:00
Laura
6ff7b2eaab fs: add fs.ErrorCantUploadEmptyFiles
Any backends which can't upload 0 length files should return this
error.
2019-06-30 18:11:45 +01:00
Laura
38ebdf54be sync/operations: don't use zero length files in tests
We now have a backend (fichier) which doesn't support 0 length
files. Therefore all 0 length files in the tests have been replaced
with length 1.

In a future commit we will implement a test for 0 length files.
2019-06-30 18:11:45 +01:00
Fionera
6cd7c3b774 lib/rest: Calculate correct Content-Length on MultiPart Uploads
This is used in the pcloud backend and in the upcoming 1fichier backend.
2019-06-30 17:57:22 +01:00
Nick Craig-Wood
07e2c3a50f b2: fix nil pointer error introduced by context propagation patch
For some reason f78cd1e043 introduced an unrelated change -
perhaps a merge error.  Removing this change fixes the nil pointer
problem.
2019-06-28 22:38:41 +01:00
Jon Fautley
cd762f04b8 sftp: Completely ignore all modtime checks if SetModTime=false 2019-06-28 10:33:14 +01:00
Sandeep
6907242cae azureblob: Updated config help details to remove connection string references (#3306) 2019-06-27 18:53:33 -07:00
Nick Craig-Wood
d61ba7ef78 vendor: update all dependencies 2019-06-27 13:52:32 +01:00
Nick Craig-Wood
b221d79273 Add nguyenhuuluan434 to contributors 2019-06-27 13:28:52 +01:00
nguyenhuuluan434
940d88b695 refactor code 2019-06-27 13:28:35 +01:00
nguyenhuuluan434
ca324b5084 capture segment info during upload to the swift backend and
delete the segments if there is an error during upload of the object.
2019-06-27 13:28:35 +01:00
Nick Craig-Wood
9f4589a997 gcs: reduce oauth scope requested as suggested by Google
As part of getting the rclone oauth consent screen approved by Google,
it came up that the scope in use by the gcs backend was too large.

This change reduces it to the minimum scope which still allows rclone
to work correctly.

Old scope: https://www.googleapis.com/auth/devstorage.full_control
New scope: https://www.googleapis.com/auth/devstorage.read_write
2019-06-27 12:05:49 +01:00
Sandeep
fc44eb4093 Azure Storage Emulator support (#3285)
* azureblob - Add support for Azure Storage Emulator to test things locally.

Testing - Verified changes by testing manually.

* docs: update azureblob docs to reflect support of storage emulator
2019-06-26 20:46:22 -07:00
Nick Craig-Wood
a1840f6fc7 sftp: add missing interface check and fix About #3257
This bug was introduced as part of adding context to the backends and
slipped through the net because the About call did not have an
interface assertion in the sftp backend.

I checked there were no other missing interface assertions on all the
optional methods on all the backends.
2019-06-26 16:56:33 +01:00
Gary Kim
0cb7130dd2 ncdu: Display/Copy to Clipboard Current Path 2019-06-26 16:49:53 +01:00
Gary Kim
2655bea86f vendor: add github.com/atotto/clipboard 2019-06-26 16:49:53 +01:00
Gary Kim
08bf8faa2f vendor: update github.com/jlaffaye/ftp 2019-06-26 16:42:12 +01:00
Nick Craig-Wood
4e64ee38e2 mount: default --daemon-timeout to 15 minutes on macOS and FreeBSD
See: https://forum.rclone.org/t/macos-fuse-mount-contents-disappear-after-writes-while-using-vfs-cache/10566/
2019-06-25 15:30:42 +01:00
Nick Craig-Wood
276f8cccf6 rc: return current settings if core/bwlimit called without parameters 2019-06-24 13:22:24 +01:00
Nick Craig-Wood
0ae844d1f8 config: reset environment variables in config file test to fix build 2019-06-22 17:49:23 +01:00
Nick Craig-Wood
4ee6de5c3e docs: add a new page with global flags and link to it from the command docs
In f544234 we removed the global flags from each command as it was
making each page very big and causing 1000s of lines of duplication in
the man page.

This change adds a new flags page with all the global flags on and
links each command page to it.

Fixes #3273
2019-06-20 16:45:44 +01:00
Nick Craig-Wood
71a19a1972 Add Maran to contributors 2019-06-19 19:36:20 +01:00
Maran
ba72e62b41 fs/config: Add method to reload configfile from disk
Fixes #3268
2019-06-19 14:47:54 +01:00
Aleksandar Jankovic
5935cb0a29 jobs: add ability to stop async jobs
Depends on #3257
2019-06-19 14:17:41 +01:00
Aleksandar Jankovic
f78cd1e043 Add context propagation to rclone
- Change rclone/fs interfaces to accept context.Context
- Update interface implementations to use context.Context
- Change top level usage to propagate context to lower level functions

Context propagation is needed for stopping transfers and passing other
request-scoped values.
2019-06-19 11:59:46 +01:00
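A small illustrative sketch of what the change means in practice; the Lister interface and slowLister type below are invented stand-ins rather than rclone's real interfaces, but they show how a context passed down from the top level lets the caller cancel a backend operation.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Lister is a stand-in for an fs interface that now takes a context,
// so cancellation and request-scoped values can flow down to it.
type Lister interface {
	List(ctx context.Context, dir string) ([]string, error)
}

type slowLister struct{}

func (slowLister) List(ctx context.Context, dir string) ([]string, error) {
	select {
	case <-time.After(time.Second): // pretend the backend is slow
		return []string{dir + "/a", dir + "/b"}, nil
	case <-ctx.Done():
		return nil, ctx.Err() // stopped by the caller
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()
	var l Lister = slowLister{}
	_, err := l.List(ctx, "remote:dir")
	fmt.Println("list result:", err) // context deadline exceeded
}
```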
Aleksandar Jankovic
a2c317b46e chore: update .gitignore 2019-06-19 11:59:46 +01:00
Nick Craig-Wood
6a2a075c14 fs/cache: fix locking
This was causing `fatal error: sync: unlock of unlocked mutex` if a
panic occurred in fsNewFs.
2019-06-19 10:50:59 +01:00
Nick Craig-Wood
628530362a local: add --local-case-sensitive and --local-case-insensitive
This is to force the remote to declare itself as case sensitive or
insensitive where the defaults for the operating system are wrong.

See: https://forum.rclone.org/t/duplicate-object-found-in-source-ignoring-dedupe-not-finding-anything/10465
2019-06-17 17:09:48 +01:00
Nick Craig-Wood
4549305fec Start v1.48.0-DEV development 2019-06-15 18:32:17 +01:00
Nick Craig-Wood
245fed513a Version v1.48.0 2019-06-15 13:55:41 +01:00
Nick Craig-Wood
52332a4b24 moveto: fix detection of same file name to include the root
Fixes problem introduced in d2be792d5e
2019-06-15 13:55:41 +01:00
Nick Craig-Wood
3087c5d559 webdav: retry on 423 Locked errors #3263 2019-06-15 10:58:13 +01:00
Nick Craig-Wood
75606dcc27 sync: fix tests on union remote 2019-06-15 10:53:36 +01:00
Nick Craig-Wood
f3719fe269 fs/cache: unlock mutex in cache.Get to allow recursive calls
This fixes the test lockup in the union tests
2019-06-15 10:42:53 +01:00
Gary Kim
d2be792d5e moveto: fix case-insensitive same remote move 2019-06-15 10:06:01 +01:00
Wojciech Smigielski
2793d4b4cc remove duplicate code 2019-06-15 10:02:25 +01:00
Wojciech Smigielski
30ac9d920a enable creating encrypted config through external script invocation - fixes #3127 2019-06-15 10:02:25 +01:00
Gary Kim
6e8e620e71 serve webdav: fix serveDir not being updated with changes from webdav
Fixes an issue where changes such as renaming done using webdav
would not be reflected in the html directory listing
2019-06-15 10:00:46 +01:00
Gary Kim
5597d6d871 serve webdav: add tests for serve http functionality 2019-06-15 10:00:46 +01:00
Gary Kim
622e0d19ce serve webdav: combine serve webdav and serve http 2019-06-15 10:00:46 +01:00
Nick Craig-Wood
ce400a8fdc Add rsync.net as a provider #3254 2019-06-15 09:56:17 +01:00
Nick Craig-Wood
49c05cb89b Add notes on --b2-version not working with crypt #1627
See also: https://forum.rclone.org/t/how-to-restore-a-previous-version-of-a-folde/10456/2
2019-06-15 09:56:17 +01:00
Nick Craig-Wood
d533de0f5c docs: clarify that s3 can't access Glacier Vault 2019-06-15 09:56:17 +01:00
Nick Craig-Wood
1a4fe4bc6c Add Aleksandar Jankovic to contributors 2019-06-15 09:56:17 +01:00
Aleksandar Jankovic
93207ead9c rc/jobs: make job expiry timeouts configurable 2019-06-15 09:55:32 +01:00
Nick Craig-Wood
22368b997c b2: implement SetModTime #3210
SetModTime() is implemented by copying an object onto itself and
updating the metadata in the process.
2019-06-13 17:31:33 +01:00
Nick Craig-Wood
a5bed67016 b2: implement server side copy - fixes #3210 2019-06-13 17:31:33 +01:00
Gary Kim
44f6491731 bin: update make_changelog.py to support semver 2019-06-13 14:51:42 +01:00
Nick Craig-Wood
12c2a750f5 build: fix build lockup by increasing GOMAXPROCS - Fixes #3154
It was discovered after lots of experimentation that the cmd/mount
tests have a tendency to lock up if GOMAXPROCS=1 or 2.  Since the
Travis builders only have 2 vCPUs by default, this happens on the
build server very often.

This workaround increases GOMAXPROCS to make the mount test lockup
less likely.

Ideally this should be fixed in the mount tests at some point.
2019-06-13 13:47:43 +01:00
Nick Craig-Wood
92bbae5cca Add Florian Apolloner to contributors 2019-06-13 13:47:43 +01:00
Florian Apolloner
939b19c3b7 cmd: add support for private repositories in serve restic - fixes #3247 2019-06-12 13:39:38 +01:00
Nick Craig-Wood
64fb4effa7 docs: add FAQ entry about rclone using too much memory
See: #2200 #1391 #3196
2019-06-12 11:08:19 +01:00
Nick Craig-Wood
4d195d5a52 gcs: Fix upload errors when uploading pre 1970 files
Before this change rclone attempted to set the "updated" field in
uploaded objects to the modification time.

However when this modification time was before 1970, Google Cloud
Storage would return the rather cryptic error:

    googleapi: Error 400: Invalid value for UnsignedLong: -42000, invalid

However API docs: https://cloud.google.com/storage/docs/json_api/v1/objects#resource
state the "updated" field is read only and tests confirm that.  Even
though the field is read only, it looks like Google parses it.

This change therefore removes the attempt to set the "updated" field
(which was doing nothing anyway) and fixes the problem uploading pre
1970 files.

See #3196 and https://forum.rclone.org/t/invalid-value-for-unsignedlong-file-missing-date-modified/3466
2019-06-12 10:51:49 +01:00
albertony
976a020a2f Use rclone.conf from the rclone executable's directory if it already exists 2019-06-12 10:08:00 +01:00
Nick Craig-Wood
550ab441c5 rc: Skip auth for OPTIONS request
Before this change using --user and --pass was impossible on the rc
from a browser as the browser needed to make the OPTIONS request first
before sending Authorization: headers, but the OPTIONS request
required an Authorization: header.

After this change we allow OPTIONS requests to go through without
checking the Authorization: header.
2019-06-10 19:33:45 +01:00
Nick Craig-Wood
e24cadc7a1 box: Fix ineffectual assignment (ineffassign) 2019-06-10 19:33:10 +01:00
Nick Craig-Wood
903ede52cd config: make config create/update encrypt passwords where necessary
Before this change when using "rclone config create" it wasn't
possible to add passwords in one go, it was necessary to call "rclone
config password" to add the passwords afterwards as "rclone config
create" didn't obscure passwords.

After this change "rclone config create" and "rclone config update"
will obscure passwords as necessary as will the corresponding API
calls config/create and config/update.

This makes "rclone config password" and its API config/password
obsolete, however they will be left for backwards compatibility.
2019-06-10 18:08:55 +01:00
Nick Craig-Wood
f681d32996 rc: Fix serving bucket based objects with --rc-serve
Before this change serving bucket based objects
`[remote:bucket]/path/to/object` would fail with 404 not found.

This was because the leading `/` in `/path/to/object` was being passed
to NewObject.
2019-06-10 11:59:06 +01:00
Gary Kim
2c72e7f0a2 docs: add implicit FTP over TLS documentation 2019-06-09 16:06:39 +01:00
Gary Kim
db8cd1a993 ftp: Add no_check_certificate option for FTPS 2019-06-09 16:06:39 +01:00
Gary Kim
2890b69c48 ftp: Add FTP over TLS support 2019-06-09 16:06:39 +01:00
Gary Kim
66b3795eb8 vendor: update github.com/jlaffaye/ftp 2019-06-09 16:06:39 +01:00
Nick Craig-Wood
45f41c2c4a Add forgems to contributors 2019-06-09 16:02:20 +01:00
Garry McNulty
34f03ce590 operations: ignore negative sizes when calculating total (#3135) 2019-06-09 16:00:41 +01:00
Garry McNulty
e2fde62cd9 drive: add --drive-size-as-quota to show storage quota usage for file size - fixes #3135 2019-06-09 16:00:41 +01:00
forgems
4b27c6719b fs: Allow sync of a file and a directory with the same name
When sorting fs.DirEntries we sort by DirEntry type and,
when synchronizing files, let directories come before objects,
so when the destination fs doesn't support duplicate names
we will only lose the duplicated object instead of the whole directory.

This enables synchronisation to work with a file and a directory of the
same name, which is reasonably common on bucket based remotes.
2019-06-09 15:57:05 +01:00
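A minimal sketch of that ordering rule (with an invented entry type rather than fs.DirEntries): a stable sort that places a directory before an object of the same name, so only the clashing object is dropped on destinations that can't hold both.

```go
package main

import (
	"fmt"
	"sort"
)

type entry struct {
	name  string
	isDir bool
}

func main() {
	entries := []entry{
		{"photos", false}, // an object named "photos"
		{"photos", true},  // a directory named "photos"
		{"readme.txt", false},
	}
	sort.SliceStable(entries, func(i, j int) bool {
		if entries[i].name != entries[j].name {
			return entries[i].name < entries[j].name
		}
		// Same name: the directory sorts first so it wins the clash.
		return entries[i].isDir && !entries[j].isDir
	})
	fmt.Println(entries) // [{photos true} {photos false} {readme.txt false}]
}
```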
Nick Craig-Wood
fb6966b5fe docs: Fix warnings after hugo upgrade to v0.55 2019-06-08 11:55:51 +01:00
Nick Craig-Wood
454dfd3c9e rc: Add operations/fsinfo: Return information about the remote
This returns information about the remote including Name, Root,
Hashes and optional Features.
2019-06-08 09:19:07 +01:00
Nick Craig-Wood
e1cf551ded fs: add Features.Enabled to return map of enabled features by name 2019-06-08 08:46:53 +01:00
Nick Craig-Wood
bd10344d65 rc: add --loopback flag to run commands directly without a server 2019-06-08 08:45:55 +01:00
Nick Craig-Wood
1aa65d60e1 lsjson: add IsBucket field for bucket based remote listing of the root 2019-06-07 17:28:15 +01:00
Nick Craig-Wood
aa81957586 drive: add --drive-server-side-across-configs
In #2728 and 55b9a4e we decided to allow server side operations
between google drives with different configurations.

This works in some cases (eg between teamdrives) but does not work in
the general case, and this caused breakage in quite a number of
people's workflows.

This change makes the feature conditional on the
--drive-server-side-across-configs flag which defaults to off.

See: https://forum.rclone.org/t/gdrive-to-gdrive-error-404-file-not-found/9621/10

Fixes #3119
2019-06-07 12:12:49 +01:00
Nick Craig-Wood
b7800e96d7 vendor: update golang.org/x/net/webdav - fixes #3002
This fixes duplicacy working with rclone serve webdav
2019-06-07 11:54:57 +01:00
Cnly
fb1bbecb41 fstests: add FsRootCollapse test - #3164 2019-06-06 15:48:46 +01:00
Cnly
e4c2468244 onedrive: More accurately check if root is found - fixes #3164 2019-06-06 15:48:46 +01:00
Nick Craig-Wood
ac4c8d8dfc cmd/providers: add DefaultStr, ValueStr and Type fields
These fields are auto generated:
- DefaultStr - a string rendering of Default
- ValueStr - a string rendering of Value
- Type - the type of the option
2019-06-05 16:23:42 +01:00
Nick Craig-Wood
e2b6172f7d build: stop using a GITHUB_USER as secure variable to fix build log
Before this change any occurrences of `ncw` were escaped as `[secure]`
in the build log, which is needless obfuscation and quite confusing.
2019-06-05 16:23:16 +01:00
Nick Craig-Wood
32f2895472 Add garry415 to contributors 2019-06-03 21:12:55 +01:00
garry415
1124c423ee fs: add --ignore-case-sync for forced case insensitivity - fixes #2773 2019-06-03 21:12:10 +01:00
Nick Craig-Wood
cd5a2d80ca Add JorisE to contributors 2019-06-03 21:08:15 +01:00
JorisE
1fe0773da6 docs: Update Microsoft OneDrive numbering
I think an option has been added and now the manual for OneDrive is off by one.
2019-06-03 21:08:00 +01:00
Nick Craig-Wood
5a941cdcdc local: fix preallocate warning on Linux with ZFS
Under Linux, rclone attempts to preallocate files for efficiency.

Before this change, pre-allocation would fail on ZFS with the error

    Failed to pre-allocate: operation not supported

After this change rclone tries a different flag combination for ZFS
then disables pre-allocate if that doesn't work.

Fixes #3066
2019-06-03 18:02:26 +01:00
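A Linux-only sketch of the general fallback idea (simplified: the specific flag combinations rclone tries on ZFS are not reproduced here): attempt the preallocation, and if the filesystem reports it as unsupported, disable it for subsequent files instead of warning every time.

```go
package main

import (
	"errors"
	"log"
	"os"

	"golang.org/x/sys/unix"
)

var preallocateDisabled bool

// preallocate reserves space for the file, falling back to doing
// nothing if the filesystem (e.g. ZFS) doesn't support fallocate.
func preallocate(f *os.File, size int64) {
	if preallocateDisabled {
		return
	}
	err := unix.Fallocate(int(f.Fd()), unix.FALLOC_FL_KEEP_SIZE, 0, size)
	if errors.Is(err, unix.EOPNOTSUPP) {
		// Not supported here: remember that and skip it from now on.
		preallocateDisabled = true
		return
	}
	if err != nil {
		log.Printf("preallocate failed: %v", err)
	}
}

func main() {
	f, err := os.CreateTemp("", "prealloc-test")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()
	preallocate(f, 1<<20) // reserve 1 MiB
}
```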
Nick Craig-Wood
62681e45fb Add Philip Harvey to contributors 2019-06-03 17:34:44 +01:00
Philip Harvey
1a2fb52266 s3: make SetModTime work for GLACIER while syncing - Fixes #3224
Before this change rclone would fail with

    Failed to set modification time: InvalidObjectState: Operation is not valid for the source object's storage class

when attempting to set the modification time of an object in GLACIER.

After this change rclone will re-upload the object as part of a sync if it needs to change the modification time.

See: https://forum.rclone.org/t/suspected-bug-in-s3-or-compatible-sync-logic-to-glacier/10187
2019-06-03 15:28:19 +01:00
Nick Craig-Wood
ec4e7316f2 docs: update advice for avoiding mega logins 2019-05-29 17:37:53 +01:00
buengese
11264c4fb8 jottacloud: update docs regarding devices and mountpoints 2019-05-29 17:16:42 +01:00
buengese
25f7f2b60a jottacloud: Add support for selecting device and mountpoint. fixes #3069 2019-05-29 17:16:42 +01:00
Nick Craig-Wood
e7c20e0bce operations: make move and copy individual files obey --backup-dir
Before this change, when using rclone copy or move with --backup-dir
and the source was a single file, rclone would fail to use the backup
directory.

This change looks up the backup directory in the Fs cache and uses it
as appropriate.

This affects any commands which call operations.MoveFile or
operations.CopyFile which includes rclone move/moveto/copy/copyto
where the source is a single file.

Fixes #3219
2019-05-27 16:14:55 +01:00
Nick Craig-Wood
8ee6034b23 Look for Fs in the cache rather than calling NewFs directly
This will save operations when rclone is used in remote control mode
or with the same remote multiple times on the command line.
2019-05-27 16:14:55 +01:00
Nick Craig-Wood
206e1caa99 fs/cache: factor Fs caching from fs/rc into its own package 2019-05-27 16:14:55 +01:00
Dan Walters
f0e439de0d dlna: improve logging and error handling
Mostly trying to get logging to happen through rclone's log methods.
Added request logging, and a trace parameter that will dump the
entire request/response for debugging when dealing with poorly
written clients.

Also added a flag to specify the device's "Friendly Name" explicitly,
and made an attempt at allowing mime types in addition to video.
2019-05-27 14:42:33 +01:00
Dan Walters
e5464a2a35 dlna: add some additional metadata, headers, and samsung extensions
Again, mostly just copying what I see in other implementations.  This
does seem to have done the trick so that I can now pause, fast forward,
rewind, etc., on my Samsung F series.
2019-05-27 14:42:33 +01:00
Dan Walters
78d38dda56 dlna: icons and compatibility improvements
Brings in icons for devices to display.  Based on what some
other open implementations have done, it's worth having a simple
stub implementation of ConnectionManagerService.  Advertise
X_MS_MediaReceiverRegistrar as well, which sounds like it
is necessary for certain MSFT devices (like the Xbox).
2019-05-27 14:42:33 +01:00
Dan Walters
60bb01b22c dlna: refactor the serve mux
Trying to make it a little easier to understand and work on all the
available routes, etc.
2019-05-27 14:42:33 +01:00
Dan Walters
95a74e02c7 dlna: use a template to render the root service descriptor
For various reasons, it seems to make sense to move away from generating
the XML with objects.  Namespace support is minimal in Go, the objects we
have are in an upstream project, and some subtleties seem likely to
cause problems with poorly written clients.

This removes the empty <iconList></iconList>, but is otherwise the
same output.
2019-05-27 14:42:33 +01:00
Dan Walters
d014aef011 dlna: reformat descriptors with tabs
Reduces size of embedded assets.
2019-05-27 14:42:33 +01:00
Dan Walters
be8c23f0b4 dlna: use vfsgen for static assets
As more assets are added, using vfsgen makes things a bit easier.
2019-05-27 14:42:33 +01:00
Nick Craig-Wood
da3b685cd8 vendor: update github.com/pkg/sftp to fix sftp client issues
See: https://forum.rclone.org/t/failed-to-copy-sftp-folder-not-found-c-ftpsites-ssh-fx-failure/9778
See: https://github.com/pkg/sftp/issues/288
2019-05-24 15:03:23 +01:00
Nick Craig-Wood
9aac2d6965 drive: add notes that cleanup works in the background on drive 2019-05-22 20:48:59 +01:00
Gary Kim
81fad0f0e3 docs: fix typo in cache documentation 2019-05-20 19:20:59 +01:00
Nick Craig-Wood
cff85f0b95 docs: add note on failure to login to mega docs 2019-05-16 10:10:58 +01:00
Nick Craig-Wood
9c0dac4ccd Add Robert Marko to contributors 2019-05-15 13:36:15 +01:00
Robert Marko
5ccc2dcb8f s3: add config info for Wasabi's EU Central endpoint
Wasabi has had an EU Central endpoint for a couple of months now, so add it to the list.

Signed-off-by: Robert Marko <robimarko@gmail.com>
2019-05-15 13:35:55 +01:00
Gary Kim
8c5503631a docs: add sftp about documentation 2019-05-14 15:22:43 +01:00
Nick Craig-Wood
2f3d794ec6 Add id01 to contributors 2019-05-14 07:55:38 +01:00
id01
abeb12c6df lib/env: Make env_test.go support Windows 2019-05-14 07:55:08 +01:00
Nick Craig-Wood
9c6f3ae82c local: log errors when listing instead of returning an error
Before this change, rclone would return an error from the listing if
there was an unreadable directory, or if there was a problem stat-ing
a directory entry.  This was frustrating because the command
completely aborts at that point when there is work it could do.

After this change rclone lists the directories and reports ERRORs for
unreadable directories or problems stat-ing files, but doesn't return an
error from the listing.  It does set the error flag which means the
command will fail (and objects won't be deleted with `rclone sync`).

This brings rclone's behaviour exactly in to line with rsync's
behaviour.  It does as much as possible, but doesn't let the errors
pass silently.

Fixes #3179
2019-05-13 18:30:33 +01:00
Nick Craig-Wood
870b15313e cmd: log an ERROR for all commands which exit with non-zero status
Before this change, rclone didn't report errors for commands which
didn't return an error directly.  For example `rclone ls` could
encounter an error and rclone would log nothing, even though the exit
code was non zero.

After this change we always log a message if we are exiting with a
non-zero exit code.
2019-05-13 18:28:21 +01:00
Nick Craig-Wood
62a7e44e86 Add didil to contributors 2019-05-13 17:37:46 +01:00
didil
296e4936a0 install: linux skip man pages if no mandb - fixes #3175 2019-05-13 17:37:30 +01:00
Nick Craig-Wood
a0b9d4a239 docs: fix doc generators home directory in backend docs
Thanks @ctlaltdefeat for spotting this
2019-05-13 17:28:36 +01:00
Nick Craig-Wood
99bc013c0a crypt: remove stray debug in ChangeNotify 2019-05-12 16:50:03 +01:00
Nick Craig-Wood
d9cad9d10b mega: cleanup: add logs for -v and -vv 2019-05-12 10:46:21 +01:00
Nick Craig-Wood
0e23c4542f sync: fix integrations tests
2eb31a4f1d broke the integration tests for remotes which use
Copy+Delete as server side Move.
2019-05-12 09:50:20 +01:00
Nick Craig-Wood
f544234e26 gendocs: remove global flags from command help pages 2019-05-11 23:39:50 +01:00
Nick Craig-Wood
dbf9800cbc docs: add serve to README, main page and menu 2019-05-11 23:39:50 +01:00
Nick Craig-Wood
1f19b63264 serve sftp: serve an rclone remote over SFTP 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
5c0e5b85f7 Factor ShellExpand from sftp backend to lib/env 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
edda6d91cd Use go-homedir to read the home directory more reliably 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
1fefa6adfd vendor: add github.com/mitchellh/go-homedir 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
af030f74f5 vfs: make WriteAt for non cached files work with non-sequential writes
This makes WriteAt for non-cached files wait a short time if it gets
an out of order write (which would normally cause an error) to see if
the gap will be filled with an in order write.

This makes the SFTP backend work fine with --vfs-cache-mode off
2019-05-11 23:39:04 +01:00
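Roughly, the out-of-order handling works like the following sketch (a simplified polling stand-in for the conditional lock with a timeout described in the commits; seqWriter is an invented type): a write that arrives ahead of the current position waits briefly in case the write that fills the gap turns up first.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

type seqWriter struct {
	mu  sync.Mutex
	pos int64 // next offset we can accept
}

var errNonSequential = errors.New("non-sequential write")

func (w *seqWriter) WriteAt(p []byte, off int64) (int, error) {
	deadline := time.Now().Add(100 * time.Millisecond)
	for {
		w.mu.Lock()
		if off == w.pos {
			// In-order write: accept it and advance the position.
			w.pos += int64(len(p))
			w.mu.Unlock()
			return len(p), nil
		}
		w.mu.Unlock()
		if time.Now().After(deadline) {
			return 0, errNonSequential
		}
		time.Sleep(time.Millisecond) // wait for the gap to be filled
	}
}

func main() {
	w := &seqWriter{}
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); fmt.Println(w.WriteAt(make([]byte, 4), 4)) }() // out of order
	go func() {
		defer wg.Done()
		time.Sleep(10 * time.Millisecond)
		fmt.Println(w.WriteAt(make([]byte, 4), 0)) // fills the gap
	}()
	wg.Wait()
}
```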
Nick Craig-Wood
ada8c22a97 sftp: send custom client version and debug server version 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
610466c18c sftp: fix about parsing of df results so it can cope with -ve results
This is useful when interacting with "serve sftp" which returns -ve
results when the corresponding value is unknown.
2019-05-11 23:39:04 +01:00
Nick Craig-Wood
9950bb9b7c about: fix crash if backend returns a nil usage 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
7d70e92664 operations: enable multi threaded downloads - Fixes #2252
This implements the --multi-thread-cutoff and --multi-thread-streams
flags to control multi thread downloading to the local backend.
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
687cbf3ded operations: if --ignore-checksum is in effect, don't calculate checksum
Before this change we calculated the checksum which is potentially
time consuming and then ignored the result.  After the change we don't
calculate the checksum if we are about to ignore it.
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
c3af0a1eca local: only calculate the required hashes for big speedup
Before this change we calculated all possible hashes for the file when
the `Hashes` method was called.

After this change we only calculate the hash requested.

Almost all uses of `Hash` just need one checksum.  This will slow down
`rclone lsjson` with the `--hash` flag.  Perhaps lsjson should have a
`--hash-type` flag.

However it will speed up sync/copy/move/check/md5sum/sha1sum etc.

Before it took 12.4 seconds to md5sum a 1GB file, after it takes 3.1
seconds which is the same time the md5sum utility takes.
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
822483aac5 accounting: enable accounting without passing through the stream #2252
This is in preparation for multithreaded downloads
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
2eb31a4f1d sync: move Transferring into operations.Copy
This makes the code more consistent with the operations code setting
the transfer statistics up.
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
0655738da6 operations: re-work reopen framework so it can take a RangeOption #2252
This is in preparation for multipart downloads.
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
7c4fe3eb75 local: define OpenWriterAt interface and test and implement it #2252
This will enable multipart downloads in future commits
2019-05-11 23:35:19 +01:00
Stefan Breunig
72721f4c8d copyurl: honor --no-check-certificate 2019-05-11 17:44:58 +01:00
Nick Craig-Wood
0c60c00187 Add Peter Berbec to contributors 2019-05-11 17:40:17 +01:00
Peter Berbec
0d511b7878 cmd: implement --stats-one-line-date and --stats-one-line-date-format 2019-05-11 17:39:57 +01:00
Nick Craig-Wood
bd2a7ffcf4 Add Jeff Quinn to contributors 2019-05-11 17:26:56 +01:00
Nick Craig-Wood
7a5ee968e7 Add Jon to contributors 2019-05-11 17:26:56 +01:00
Jeff Quinn
c809334b3d ftp: add FTP List timeout, fixes #3086
The timeout is controlled by the --timeout flag
2019-05-11 17:26:23 +01:00
Animosity022
b88e50cc36 docs: Typo fixes with "a existing"
Fixed a typo with "a existing" to "an existing"
2019-05-11 16:49:48 +01:00
Jon
bbe28df800 docs: Fix typo: Dump HTTP bodies -> Dump HTTP headers 2019-05-11 16:40:40 +01:00
calistri
f865280afa Adds a public IP flag for ftp. Closes #3158
Fixed variable names
2019-05-09 22:52:21 +01:00
Nick Craig-Wood
8beab1aaf2 build: more pre go1.8 workarounds removed 2019-05-08 15:14:51 +01:00
Fionera
b9e16b36e5 Fix Multipart upload check
In the Documentation it states:
// If (opts.MultipartParams or opts.MultipartContentName) and
// opts.Body are set then CallJSON will do a multipart upload with a
// file attached.
2019-05-02 16:22:17 +01:00
Nick Craig-Wood
b68c3ce74d s3: support S3 Accelerated endpoints with --s3-use-accelerate-endpoint
Fixes #3123
2019-05-02 14:00:00 +01:00
Fabian Möller
d04b0b856a fserrors: use errors.Walk for the wrapped error types 2019-05-01 16:56:08 +01:00
Gary Kim
d0ff07bdb0 mega: add cleanup support
Fixes #3138
2019-05-01 16:32:34 +01:00
Nick Craig-Wood
577fda059d rc: fix race in tests 2019-05-01 16:09:50 +01:00
Nick Craig-Wood
49d2ab512d test_all: run restic integration tests against local backend 2019-05-01 16:09:50 +01:00
Nick Craig-Wood
9df322e889 tests: make test servers choose a random port to make more reliable
Tests have been randomly failing with messages like

    listen tcp 127.0.0.1:51778: bind: address already in use

Rework all the test servers so they choose a random free port on
startup and use that for the tests, to avoid this problem.
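For reference, the usual Go idiom for this is to listen on port 0 and read back the port the kernel actually assigned, roughly:

    package sketch

    import "net"

    // listenOnRandomPort asks the OS for a free port on localhost and
    // returns the listener together with the port it was given.
    func listenOnRandomPort() (net.Listener, int, error) {
        l, err := net.Listen("tcp", "127.0.0.1:0") // port 0 = let the kernel choose
        if err != nil {
            return nil, 0, err
        }
        return l, l.Addr().(*net.TCPAddr).Port, nil
    }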
2019-05-01 16:09:50 +01:00
Nick Craig-Wood
8f89b03d7b vendor: update github.com/t3rm1n4l/go-mega and dependencies
This is to fix a crash reported in #3140
2019-05-01 16:09:50 +01:00
Fabian Möller
48c09608ea fix spelling 2019-04-30 14:12:18 +02:00
Nick Craig-Wood
7963320a29 lib/errors: don't panic on uncomparable errors #3123
Same fix as c5775cf73d
2019-04-26 09:56:20 +01:00
Nick Craig-Wood
81f8a5e0d9 Use golangci-lint to check everything
Now that this issue is fixed: https://github.com/golangci/golangci-lint/issues/204

We can use golangci-lint to check the printfuncs too.
2019-04-25 15:58:49 +01:00
Nick Craig-Wood
2763598bd1 Add Gary Kim to contributors 2019-04-25 10:51:47 +01:00
Gary Kim
49d7b0d278 sftp: add About support - fixes #3107
This adds support for using About with SFTP remotes. This works by calling the df command remotely.
2019-04-25 10:51:15 +01:00
Animosity022
3d475dc0ee mount: Fix poll interval documentation 2019-04-24 18:21:04 +01:00
Fionera
2657d70567 drive: fix move and copy from TeamDrive to GDrive 2019-04-24 18:11:34 +01:00
Nick Craig-Wood
45df37f55f Add Kyle E. Mitchell to contributors 2019-04-24 18:09:40 +01:00
Kyle E. Mitchell
81007c10cb doc: Fix "conververt" typo 2019-04-24 18:09:24 +01:00
Nick Craig-Wood
aba15f11d8 cache: note unsupported optional methods 2019-04-16 13:34:06 +01:00
Nick Craig-Wood
a57756a05c lsjson, lsf: support showing the Tier of the object 2019-04-16 13:34:06 +01:00
Nick Craig-Wood
eeab7a0a43 crypt: Implement Optional methods SetTier, GetTier - fixes #2895
This implements optional methods on Object
- ID
- SetTier
- GetTier

And declares that it will not implement MimeType for the FsCheck test.
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
ac8d1db8d3 crypt: support PublicLink (rclone link) of underlying backend - fixes #3042 2019-04-16 13:33:10 +01:00
Nick Craig-Wood
cd0d43fffb fs: add missing PublicLink to mask
This enables wrapping file systems to declare that they don't support
PublicLink if the underlying fs doesn't.
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
cdf12b1fc8 crypt: Fix wrapping of ChangeNotify to decrypt directories properly
Also change the way it is added to make the FsCheckWrap test pass
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
7981e450a4 crypt: make rclone dedupe work through crypt
Implement these optional methods:

- WrapFs
- SetWrapper
- MergeDirs
- DirCacheFlush

Fixes #2233 Fixes #2689
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
e7fc3dcd31 fs: copy the ID too when we copy a Directory object
This means that crypt which wraps directory objects will retain the ID
of the underlying object.
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
2386c5adc1 hubic: fix tests for optional methods 2019-04-16 13:33:10 +01:00
Nick Craig-Wood
2f21aa86b4 fstest: add tests for coverage of optional methods for wrapping Fs 2019-04-16 13:33:10 +01:00
Nick Craig-Wood
16d8014cbb build: drop support for go1.8 2019-04-15 21:49:58 +01:00
Nick Craig-Wood
613a9bb86b vendor: update all dependencies 2019-04-15 20:12:56 +01:00
calisro
8190a81201 lsjson: added EncryptedPath to output - fixes #3094 2019-04-15 18:12:09 +01:00
Nick Craig-Wood
f5795db6d2 build: fix fetch_binaries not to fetch test binaries 2019-04-13 13:08:53 +01:00
Nick Craig-Wood
e2a2eb349f Start v1.47.0-DEV development 2019-04-13 13:08:37 +01:00
Nick Craig-Wood
a0d4fdb2fa Version v1.47.0 2019-04-13 11:01:58 +01:00
Nick Craig-Wood
a28239f005 filter: Make --files-from traverse as before unless --no-traverse is set
In c5ac96e9e7 we made --files-from only read the objects specified and
don't scan directories.

This caused problems with Google drive (very very slow) and B2
(excessive API consumption) so it was decided to make the old
behaviour (traversing the directories) the default with --files-from
and use the existing --no-traverse flag (which has exactly the right
semantics) to enable the new non scanning behaviour.

See: https://forum.rclone.org/t/using-files-from-with-drive-hammers-the-api/8726

Fixes #3102 Fixes #3095
2019-04-12 17:16:49 +01:00
Nick Craig-Wood
b05da61c82 build: move linter build tags into Makefile to fix golangci-lint 2019-04-12 15:48:36 +01:00
Nick Craig-Wood
41f01da625 docs: add link to Go report card 2019-04-12 15:28:37 +01:00
Nick Craig-Wood
901811bb26 docs: Remove references to Google+ 2019-04-12 15:25:17 +01:00
Oliver Heyme
0d4a3520ad jottacloud: add device registration
jottacloud: Updated documentation
2019-04-11 16:31:27 +01:00
calistri
5855714474 lsjson: added --files-only and --dirs-only flags
Factored common code from lsf/lsjson into operations.ListJSON
2019-04-11 11:43:25 +01:00
Nick Craig-Wood
120de505a9 Add Manu to contributors 2019-04-11 10:22:17 +01:00
Manu
6e86526c9d s3: add support for "Glacier Deep Archive" storage class - fixes #3088 2019-04-11 10:21:41 +01:00
Nick Craig-Wood
0862dc9b2b build: update to Xenial in travis build to fix link errors 2019-04-10 15:22:21 +01:00
Nick Craig-Wood
1c301f9f7a drive: Fix creation of duplicates with server side copy - fixes #3067 2019-03-29 16:58:19 +00:00
Nick Craig-Wood
9f6b09dfaf Add Ben Boeckel to contributors 2019-03-28 15:13:17 +00:00
Ben Boeckel
3d424c6e08 docs: fix various typos 2019-03-28 15:12:51 +00:00
Nick Craig-Wood
6fb1c8f51c b2: ignore malformed src_last_modified_millis
This fixes rclone returning `listing failed: strconv.ParseInt` errors
when listing files which have a malformed `src_last_modified_millis`.
This is uploaded by the client so care is needed in interpreting it as
it can be malformed.

Fixes #3065
2019-03-25 15:51:45 +00:00
Nick Craig-Wood
626f0d1886 copy: account for server side copy bytes and obey --max-transfer 2019-03-25 15:36:38 +00:00
Nick Craig-Wood
9ee9fe3885 swift: obey Retry-After to fix OVH restore from cold storage
In as many methods as possible we attempt to obey the Retry-After
header where it is provided.

This means that when objects are being requested from OVH cold storage
rclone will sleep the correct amount of time before retrying.

If the sleeps are short it does them immediately, if long then it
returns an ErrorRetryAfter which will cause the outer retry to sleep
before retrying.

Fixes #3041
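As background, the Retry-After header is either a delay in seconds or an HTTP date; a hedged sketch of parsing it (illustrative only, not the swift library's code) looks like:

    package sketch

    import (
        "net/http"
        "strconv"
        "time"
    )

    // retryAfter returns how long the server asked us to wait, or 0 if the
    // Retry-After header is missing or unparseable.
    func retryAfter(resp *http.Response) time.Duration {
        value := resp.Header.Get("Retry-After")
        if value == "" {
            return 0
        }
        if secs, err := strconv.Atoi(value); err == nil {
            return time.Duration(secs) * time.Second // delay-seconds form
        }
        if when, err := http.ParseTime(value); err == nil {
            return time.Until(when) // HTTP-date form
        }
        return 0
    }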
2019-03-25 13:41:34 +00:00
Nick Craig-Wood
b0380aad95 vendor: update github.com/ncw/swift to help with #3041 2019-03-25 13:41:34 +00:00
Nick Craig-Wood
2065e73d0b cmd: implement RetryAfter errors which cause a sleep before a retry
Use NewRetryAfterError to return an error which will cause a high
level retry after the delay specified.
2019-03-25 13:41:34 +00:00
Nick Craig-Wood
d3e3bbedf3 docs: Fix typo - fixes #3071 2019-03-25 13:18:41 +00:00
Cnly
8d29d69ade onedrive: Implement graceful cancel 2019-03-19 15:54:51 +08:00
Nick Craig-Wood
6e70d88f54 swift: work around token expiry on CEPH
This implements the Expiry interface so token expiry works properly

This change makes sure that this change from the swift library works
correctly with rclone's custom authenticator.

> Renew the token 60s before the expiry time
>
> The v2 and v3 auth schemes both return the expiry time of the token,
> so instead of waiting for a 401 error, renew the token 60s before this
> time.
>
> This makes transfers more efficient and also works around a bug in
> CEPH which returns 403 instead of 401 when the token expires.
>
> http://tracker.ceph.com/issues/22223
2019-03-18 13:30:59 +00:00
Nick Craig-Wood
595fea757d vendor: update github.com/ncw/swift to bring in Expires changes 2019-03-18 13:30:59 +00:00
Nick Craig-Wood
bb80586473 bin/get-github-release: fetch the most recent not the least recent 2019-03-18 11:29:37 +00:00
Nick Craig-Wood
0d475958c7 Fix errors discovered with go vet nilness tool 2019-03-18 11:23:00 +00:00
Nick Craig-Wood
2728948fb0 Add xopez to contributors 2019-03-18 11:04:10 +00:00
Nick Craig-Wood
3756f211b5 Add Danil Semelenov to contributors 2019-03-18 11:04:10 +00:00
xopez
2faf2aed80 docs: Update Copyright to current Year 2019-03-18 11:03:45 +00:00
Nick Craig-Wood
1bd8183af1 build: use matrix build for travis
This makes the build more efficient, the .travis.yml file more
comprehensible and reduces the Makefile spaghetti.

Windows support is commented out for the moment as it isn't very
reliable yet.
2019-03-17 14:58:18 +00:00
Nick Craig-Wood
5aa706831f b2: ignore already_hidden error on remove
Sometimes (possibly through eventual consistency) b2 returns an
already_hidden error on a delete.  Ignore this since it is harmless.
2019-03-17 14:56:17 +00:00
Nick Craig-Wood
ac7e1dbf62 test_all: add the vfs tests to the integration tests
Fix failing tests for some remotes
2019-03-17 14:56:17 +00:00
Nick Craig-Wood
14ef4437e5 dedupe: fix bug introduced when converting to use walk.ListR #2902
Before the fix we were only de-duping the ListR batches.

Afterwards we dedupe everything.

This will have the consequence that rclone uses more memory as it will
build a map of all the directory names, not just the names in a given
directory.
2019-03-17 11:01:20 +00:00
Danil Semelenov
a0d2ab5b4f cmd: Fix autocompletion of remote paths with spaces - fixes #3047 2019-03-17 10:15:20 +00:00
Nick Craig-Wood
3bfde5f52a ftp: add --ftp-concurrency to limit maximum number of connections
Fixes #2166
2019-03-17 09:57:14 +00:00
Nick Craig-Wood
2b05bd9a08 rc: implement operations/publiclink the equivalent of rclone link
Fixes #3042
2019-03-17 09:41:31 +00:00
Nick Craig-Wood
1318be3b0a vendor: update github.com/goftp/server to fix hang while reading a file from the server
See: https://forum.rclone.org/t/minor-issue-with-linux-ftp-client-and-rclone-ftp-access-denied/8959
2019-03-17 09:30:57 +00:00
Nick Craig-Wood
f4a754a36b drive: add --skip-checksum-gphotos to ignore incorrect checksums on Google Photos
First implementation by @jammin84, re-written by @ncw

Fixes #2207
2019-03-17 09:10:51 +00:00
Nick Craig-Wood
fef73763aa lib/atexit: add SIGTERM to signals which run the exit handlers on unix 2019-03-16 17:47:02 +00:00
Nick Craig-Wood
7267d19ad8 fstest: Use walk.ListR for listing 2019-03-16 17:41:12 +00:00
Nick Craig-Wood
47099466c0 cache: Use walk.ListR for listing the temporary Fs. 2019-03-16 17:41:12 +00:00
Nick Craig-Wood
4376019062 dedupe: Use walk.ListR for listing commands.
This dramatically increases the speed (7x in my tests) of the de-dupe
as google drive supports ListR directly and dedupe did not work with
`--fast-list`.

Fixes #2902
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
e5f4210b09 serve restic: use walk.ListR for listing
This is effectively what the old code did anyway so this should not
make any functional changes.
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
d5f2df2f3d Use walk.ListR for listing operations
This will increase speed for backends which support ListR and will not
have the memory overhead of using --fast-list.

It also means that errors are queued until the end so as much of the
remote will be listed as possible before returning an error.

Commands affected are:
- lsf
- ls
- lsl
- lsjson
- lsd
- md5sum/sha1sum/hashsum
- size
- delete
- cat
- settier
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
efd720b533 walk: Implement walk.ListR which will use ListR if at all possible
It otherwise has nearly the same interface as walk.Walk which it
will fall back to if it can't use ListR.

Using walk.ListR will speed up file system operations by default and
use much less memory and start immediately compared to if --fast-list
had been supplied.
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
047f00a411 filter: Add BoundedRecursion method
This indicates that the filter set could be satisfied by a bounded
directory recursion.
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
bb5ac8efbe http: fix socket leak on 404 errors 2019-03-15 17:04:28 +00:00
Nick Craig-Wood
e62bbf761b http: add --http-no-slash for websites with directories with no slashes #3053
See: https://forum.rclone.org/t/is-there-a-way-to-log-into-an-htpp-server/8484
2019-03-15 17:04:06 +00:00
Nick Craig-Wood
54a2e99d97 http: remove duplicates from listings 2019-03-15 16:59:36 +00:00
Nick Craig-Wood
28230d93b4 sync: Implement --suffix-keep-extension for use with --suffix - fixes #3032 2019-03-15 14:21:39 +00:00
Florian Gamböck
3c4407442d cmd: fix completion of remotes
The previous behavior of the remotes completion was that only
alphanumeric characters were allowed in a remote name. This limitation
has been lifted somewhat by #2985, which also allowed an underscore.

With the new implementation introduced in this commit, the completion of
the remote name has been simplified: If there is no colon (":") in the
current word, then complete remote name. Otherwise, complete the path
inside the specified remote. This allows correct completion of all
remote names that are allowed by the config (including - and _).
In fact it matches rather more than that, even remote names that are
not allowed by the config, but in that case there would already be an
invalid identifier in the configuration file.

With this simpler string comparison, we can get rid of the regular
expression, which makes the completion multiple times faster. For a
sample benchmark, try the following:

     # Old way
     $ time bash -c 'for _ in {1..1000000}; do
         [[ remote:path =~ ^[[:alnum:]]*$ ]]; done'

     real    0m15,637s
     user    0m15,613s
     sys     0m0,024s

     # New way
     $ time bash -c 'for _ in {1..1000000}; do
         [[ remote:path != *:* ]]; done'

     real    0m1,324s
     user    0m1,304s
     sys     0m0,020s
2019-03-15 13:16:42 +00:00
Dan Walters
caf318d499 dlna: add connection manager service description
The UPnP MediaServer spec says that the ConnectionManager service is
required, and adding it was enough to get dlna support working on my
other TV (LG webOS 2.2.1).
2019-03-15 13:14:31 +00:00
Nick Craig-Wood
2fbb504b66 webdav: fix About/df when reading the available/total returns 0
Some WebDAV servers return an empty Available and Used which parses as 0.

This caused About to return the Total as 0 which can confuse mounted
file systems.

After this change we ignore the result if Available and Used are both 0.

See: https://forum.rclone.org/t/windows-mounted-webdav-drive-has-no-free-space/8938
2019-03-15 12:03:04 +00:00
Alex Chen
2b58d1a46f docs: onedrive: Add guide to refreshing token after MFA is enabled 2019-03-14 00:21:05 +08:00
Cnly
1582a21408 onedrive: Always add trailing colon to path when addressing items - #2720, #3039 2019-03-13 11:30:15 +08:00
Nick Craig-Wood
229898dcee Add Dan Walters to contributors 2019-03-11 17:31:46 +00:00
Dan Walters
95194adfd5 dlna: fix root XML service descriptor
The SCPD URL was being set after marshalling the XML, and thus coming
out blank.  Now works on my Samsung TV, and likely fixes some issues
reported by others in #2648.
2019-03-11 17:31:32 +00:00
Nick Craig-Wood
4827496234 webdav: fix race when creating directories - fixes #3035
Before this change a race condition existed in mkdir
- the directory was attempted to be created
- the parent didn't exist so it failed
- the parent was created
- the directory was created again

The last step failed as the directory was created in a different thread.

This was fixed by checking the error messages of MKCOL for both
directory creations, rather than only the first.
2019-03-11 16:20:05 +00:00
Nick Craig-Wood
415eeca6cf drive: fix range requests on 0 length files
Before this change a range request on a 0 length file would fail

    $ rclone cat --head 128 drive:test/emptyfile
    ERROR : open file failed: googleapi: Error 416: Request range not satisfiable, requestedRangeNotSatisfiable

To fix this we remove Range: headers on requests for zero length files.
2019-03-10 15:47:34 +00:00
Nick Craig-Wood
58d9a3e1b5 filter: reload filter when the options are set via the rc - fixes #3018 2019-03-10 13:09:44 +00:00
Nick Craig-Wood
cccadfa7ae rc: add ability for options blocks to register reload functions 2019-03-10 13:09:44 +00:00
ishuah
1b52f8d2a5 copy/sync/move: add --create-empty-src-dirs flag - fixes #2869 2019-03-10 11:56:38 +00:00
Nick Craig-Wood
2078ad68a5 gcs: Allow bucket policy only buckets - fixes #3014
This introduces a new config variable bucket_policy_only.  If this is
set then rclone:

- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set
2019-03-10 11:45:42 +00:00
Nick Craig-Wood
368ed9e67d docs: add a FAQ entry about --max-backlog 2019-03-09 16:19:24 +00:00
Nick Craig-Wood
7c30993bb7 Add Fionera to contributors 2019-03-09 16:19:24 +00:00
Fionera
55b9a4ed30 Add ServerSideAcrossConfig Flag and check for it. fixes #2728 2019-03-09 16:18:45 +00:00
jaKa
118a8b949e koofr: implemented a backend for Koofr cloud storage service.
Implemented a Koofr REST API backend.
Added said backend to tests.
Added documentation for said backend.
2019-03-06 13:41:43 +00:00
jaKa
1d14e30383 vendor: add github.com/koofr/go-koofrclient
* added koofr client SDK dep for koofr backend
2019-03-06 13:41:43 +00:00
Nick Craig-Wood
27714e29c3 s3: note incompatibility with CEPH Jewel - fixes #3015 2019-03-06 11:50:37 +00:00
Nick Craig-Wood
9f8e1a1dc5 drive: fix imports of text files
Before this change text file imports were ignored.  This was because
the mime type wasn't matched.

Fix this by adjusting the keys in the mime type maps as well as the
values.

See: https://forum.rclone.org/t/how-to-upload-text-files-to-google-drive-as-google-docs/9014
2019-03-05 17:20:31 +00:00
Nick Craig-Wood
1692c6bd0a vfs: shorten the locking window for vfs/refresh
Before this change we locked the root directory, recursively fetched
the listing, applied it then unlocked the root directory.

After this change we recursively fetch the listing then apply it with
the root directory locked which shortens the time that the root
directory is locked greatly.

With the original method and the new method the subdirectories are
left unlocked and so potentially could be changed leading to
inconsistencies.  This change makes the potential for inconsistencies
slightly worse by leaving the root directory unlocked, in exchange for
a much more responsive system while running vfs/refresh.

See: https://forum.rclone.org/t/rclone-rc-vfs-refresh-locking-directory-being-refreshed/9004
2019-03-05 14:17:42 +00:00
Nick Craig-Wood
d233efbf63 Add marcintustin to contributors 2019-03-01 17:10:26 +00:00
marcintustin
e9a45a5a34 googlecloudstorage: fall back to default application credentials
Fall back to default application credentials when all other credentials sources fail

This change allows users with default application credentials
configured (notably when running on google compute instances) to
dispense with explicitly configuring google cloud storage credentials
in rclone's own configuration.
2019-03-01 18:05:31 +01:00
Nick Craig-Wood
f6eb5c6983 lib/pacer: fix test on macOS 2019-03-01 12:27:33 +00:00
Nick Craig-Wood
2bf19787d5 Add Dr.Rx to contributors 2019-03-01 12:25:16 +00:00
Dr.Rx
0ea3a57ecb azureblob: Enable MD5 checksums when uploading files bigger than the "Cutoff"
This enables MD5 checksum calculation and publication when uploading files above the "Cutoff" limit.
It was explicitly ignored in the case of multi-block (a.k.a. multipart) uploads to Azure Blob Storage.
2019-03-01 11:12:23 +01:00
Nick Craig-Wood
b353c730d8 vfs: make tests work on remotes which don't support About 2019-02-28 14:05:21 +00:00
Nick Craig-Wood
173dfbd051 vfs: read directory and check for a file before mkdir
Before this change when doing Mkdir the VFS layer could add the new
item to an unread directory which caused confusion.

It could also do mkdir on a file when run on a bucket based remote
which would temporarily overwrite the file with a directory.

Fixes #2993
2019-02-28 14:05:17 +00:00
Nick Craig-Wood
e3bceb9083 operations: fix Overlapping test for Windows native paths 2019-02-28 11:39:32 +00:00
Nick Craig-Wood
52c6b373cc Add calisro to contributors 2019-02-28 10:20:35 +00:00
calisro
0bc0f62277 Recommendation for creating own client ID 2019-02-28 11:20:08 +01:00
Cnly
12c8ee4b4b atexit: allow functions to be unregistered 2019-02-27 23:37:24 +01:00
Nick Craig-Wood
5240f9d1e5 sync: fix integration tests to check correct error 2019-02-27 22:05:16 +00:00
Nick Craig-Wood
997654d77d ncdu: fix display corruption with Chinese characters - #2989 2019-02-27 09:55:28 +00:00
Nick Craig-Wood
f1809451f6 docs: add more examples of config-less usage 2019-02-27 09:41:40 +00:00
Nick Craig-Wood
84c650818e sync: don't allow syncs on overlapping remotes - fixes #2932 2019-02-26 19:25:52 +00:00
Nick Craig-Wood
c5775cf73d fserrors: don't panic on uncomparable errors 2019-02-26 15:39:16 +00:00
Nick Craig-Wood
dca482e058 Add Alexandru Bumbacea to contributors 2019-02-26 15:39:16 +00:00
Nick Craig-Wood
6943169cef Add Six to contributors 2019-02-26 15:38:25 +00:00
Alexandru Bumbacea
4fddec113c sftp: allow custom ssh client config 2019-02-26 16:37:54 +01:00
Six
2114fd8f26 cmd: Fix tab-completion for remotes with underscores in their names 2019-02-26 16:25:45 +01:00
Nick Craig-Wood
63bb6de491 build: update to use go1.12 for the build 2019-02-26 13:18:31 +00:00
Nick Craig-Wood
0a56a168ff bin/get-github-release.go: scrape the downloads page to avoid the API limit
This should fix pull requests build failures which can't use the
github token.
2019-02-25 21:34:59 +00:00
Nick Craig-Wood
88e22087a8 Add Nestar47 to contributors 2019-02-25 21:34:59 +00:00
Nestar47
9404ed703a drive: add docs on team drives and --fast-list eventual consistency 2019-02-25 21:46:27 +01:00
Nick Craig-Wood
c7ecccd5ca mount: remove an obsolete EXPERIMENTAL tag from the docs 2019-02-25 17:53:53 +00:00
Sebastian Bünger
972e27a861 jottacloud: fix token refresh - fixes #2992 2019-02-21 19:26:18 +01:00
Fabian Möller
8f4ea77c07 fs: remove unnecessary pacer warning 2019-02-18 08:42:36 +01:00
Fabian Möller
61616ba864 pacer: make pacer more flexible
Make the pacer package more flexible by extracting the pace calculation
functions into a separate interface. This also allows moving features
that require the fs package, such as logging and custom errors, into
the fs package.

Also add a RetryAfterError sentinel error that can be used to signal a
desired retry time to the Calculator.
2019-02-16 14:38:07 +00:00
Fabian Möller
9ed721a3f6 errors: add lib/errors package 2019-02-16 14:38:07 +00:00
Nick Craig-Wood
0b9d7fec0c lsf: add 'e' format to show encrypted names and 'o' for original IDs
This brings it up to par with lsjson.

This commit also reworks the framework to use ListJSON internally
which removes duplicated code and makes testing easier.
2019-02-14 14:45:35 +00:00
Nick Craig-Wood
240c15883f accounting: fix total ETA when --stats-unit bits is in effect 2019-02-14 07:56:52 +00:00
Nick Craig-Wood
38864adc9c cmd: Use private custom func to fix clash between rclone and kubectl
Before this change, rclone used the `__custom_func` hook to control
the completions of remote files.  However this clashes with other
cobra users, the most notable example being kubectl.

Upgrading cobra to master allows us to use a namespaced function
`__rclone_custom_func` which fixes the problem.

Fixes #1529
2019-02-13 23:02:22 +00:00
Nick Craig-Wood
5991315990 vendor: update github.com/spf13/cobra to master 2019-02-13 23:02:22 +00:00
Nick Craig-Wood
73f0a67d98 s3: Update Dreamhost endpoint - fixes #2974 2019-02-13 21:10:43 +00:00
Nick Craig-Wood
ffe067d6e7 azureblob: fix SAS URL support - fixes #2969
This was broken accidentally in 5d1d93e163 as part of #2654
2019-02-13 17:36:14 +00:00
Nick Craig-Wood
b5f563fb0f vfs: Ignore Truncate if called with no readers and already the correct size
This fixes FreeBSD which seems to call SetAttr with a size even on
read only files.

This is probably a bug in the FreeBSD FUSE implementation as it
happens with mount and cmount.

See: https://forum.rclone.org/t/freebsd-question/8662/12
2019-02-12 17:27:04 +00:00
Nick Craig-Wood
9310c7f3e2 build: update to use go1.12rc1 for the build 2019-02-12 16:23:08 +00:00
Nick Craig-Wood
1c1a8ef24b webdav: allow IsCollection property to be integer or boolean - fixes #2964
It turns out that some servers emit "true" or "false" rather than "1"
or "0" for this property, so adapt accordingly.
2019-02-12 12:33:08 +00:00
Nick Craig-Wood
2cfbc2852d docs: move --no-traverse docs to the correct section 2019-02-12 12:26:19 +00:00
Nick Craig-Wood
b167d30420 Add client side TLS/SSL flags --ca-cert/--client-cert/--client-key
Fixes #2966
2019-02-12 12:26:19 +00:00
Nick Craig-Wood
ec59760d9c pcloud: remove duplicated UserInfo.Result field spotted by go vet 2019-02-12 11:53:26 +00:00
Nick Craig-Wood
076d3da825 operations: resume downloads if the reader fails in copy - fixes #2108
This puts a shim on the reader opened by Copy so that if an error is
returned, the reader is re-opened at the correct seek point.

This should make downloading very large files more reliable.
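A simplified sketch of the shim pattern described above, assuming an open callback that can seek to a byte offset (names are illustrative):

    package sketch

    import "io"

    // opener re-opens the source starting at the given byte offset.
    type opener func(offset int64) (io.ReadCloser, error)

    // resilientReader tracks how far it has read and, when a read fails
    // with anything other than io.EOF, re-opens the source at that offset
    // so the caller can simply keep reading.
    type resilientReader struct {
        open   opener
        in     io.ReadCloser
        offset int64
    }

    func (r *resilientReader) Read(p []byte) (int, error) {
        n, err := r.in.Read(p)
        r.offset += int64(n)
        if err != nil && err != io.EOF {
            r.in.Close()
            in, openErr := r.open(r.offset) // resume from where we got to
            if openErr != nil {
                return n, err // re-open failed, so report the original error
            }
            r.in = in
            return n, nil // swallow the error; the next Read continues the download
        }
        return n, err
    }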
2019-02-12 11:47:57 +00:00
Nick Craig-Wood
c3eecbe933 dropbox: retry blank errors to fix long listings
Sometimes dropbox returns blank errors in listings - retry this

See: https://forum.rclone.org/t/bug-sync-dropbox-to-gdrive-failing-for-large-files-50gb-error-unexpected-eof/8595
2019-02-10 20:55:16 +00:00
Nick Craig-Wood
d8e5b19ed4 build: switch to semvar compliant version tags
Fixes #2960
2019-02-10 20:55:16 +00:00
Nick Craig-Wood
43bc381e90 vendor: update all dependencies 2019-02-10 20:55:16 +00:00
Nick Craig-Wood
fb5ee22112 Add Vince to contributors 2019-02-10 20:55:16 +00:00
Vince
35327dad6f b2: allow manual configuration of backblaze downloadUrl - fixes #2808 2019-02-10 20:54:10 +00:00
Fabian Möller
ef5e1909a0 encoder: add lib/encoder to handle character substitution and quoting 2019-02-09 18:23:47 +00:00
Fabian Möller
bca5d8009e onedrive: return errors instead of panic for invalid uploads 2019-02-09 18:23:47 +00:00
Fabian Möller
334f19c974 info: improve allowed character testing 2019-02-09 18:23:47 +00:00
Fabian Möller
42a5bf1d9f golangci: enable lints excluded by default 2019-02-09 18:18:22 +00:00
Nick Craig-Wood
71d1890316 build: ignore testbuilds when uploading to github 2019-02-09 12:22:06 +00:00
Nick Craig-Wood
d29c545627 Start v1.46-DEV development 2019-02-09 12:21:57 +00:00
Nick Craig-Wood
eb85ecc9c4 Version v1.46 2019-02-09 10:42:57 +00:00
Nick Craig-Wood
0dc08e1e61 Add James Carpenter to contributors 2019-02-09 09:00:22 +00:00
James Carpenter
76532408ef b2: Application Key usage clarifications 2019-02-09 09:00:05 +00:00
Nick Craig-Wood
60a4a8a86d genautocomplete: add remote path completion for bash - fixes #1529
Thanks to:
- Christopher Peterson (@cspeterson) for the original script
- Danil Semelenov (@sgtpep) for many refinements
2019-02-08 19:03:30 +00:00
Fabian Möller
a0d4c04687 backend: fix misspellings 2019-02-07 19:51:03 +01:00
Fabian Möller
f3874707ee drive: fix ListR for items with multiple parents
Fixes #2946
2019-02-07 19:46:50 +01:00
Fabian Möller
f8c2689e77 drive: improve ChangeNotify support for items with multiple parents 2019-02-07 19:46:50 +01:00
Nick Craig-Wood
8ec55ae20b Fix broken flag type tests
Introduced in fc1bf5f931
2019-02-07 16:42:26 +00:00
Nick Craig-Wood
fc1bf5f931 Make flags show up with their proper names, eg SizeSuffix rather than int 2019-02-07 11:57:26 +00:00
Nick Craig-Wood
578d00666c test_all: make -clean not give up on the first error 2019-02-07 11:29:52 +00:00
Nick Craig-Wood
f5c853b5c8 Add Jonathan to contributors 2019-02-07 11:29:16 +00:00
Jonathan
23c0cd2482 Update README.md 2019-02-07 11:28:42 +00:00
Nick Craig-Wood
8217f361cc webdav: if MKCOL fails with 423 Locked assume the directory exists
This fixes the integration tests with owncloud
2019-02-07 11:00:28 +00:00
Nick Craig-Wood
a0016e00d1 mega: return error if an unknown length file is attempted to be uploaded
This fixes the integration test created in #2947 to attempt to flush
out non-conforming backends.
2019-02-07 10:43:31 +00:00
Nick Craig-Wood
99c37028ee build: disable go modules for travis build 2019-02-06 21:25:32 +00:00
Nick Craig-Wood
cfba337ef0 lib/pool: fix memory leak by freeing buffers on flush 2019-02-06 17:20:54 +00:00
Nick Craig-Wood
fd370fcad2 vendor: update github.com/t3rm1n4l/go-mega to add new error codes 2019-02-05 17:22:28 +00:00
Nick Craig-Wood
c680bb3254 box: document how to use rclone with Enterprise SSO
Thanks to Lorenzo Grassi for help with this.
2019-02-05 14:29:13 +00:00
Nick Craig-Wood
7d5d6c041f vendor: update github.com/t3rm1n4l/go-mega to fix v2 account login
Fixes #2771
2019-02-04 17:33:15 +00:00
Nick Craig-Wood
bdc638530e walk: make NewDirTree always use ListR #2946
This fixes vfs/refresh with recurse=true needing the --fast-list flag
2019-02-04 10:37:27 +00:00
Nick Craig-Wood
315cee23a0 http: add an example with username and password 2019-02-04 10:30:05 +00:00
Nick Craig-Wood
2135879dda lsjson: use exactly the correct number of decimal places in the seconds 2019-02-03 20:03:23 +00:00
Nick Craig-Wood
da90069462 lib/pool: only flush buffers if they are unused between flush intervals 2019-02-03 19:07:50 +00:00
Nick Craig-Wood
08c4854e00 webdav: fix identification of directories for Bitrix Site Manager - #2716
Bitrix Site Manager emits `<D:resourcetype><collection/></D:resourcetype>`
missing the namespace on the `collection` tag.  This causes the item
to be identified as a file instead of a directory.

To work around this look at the Microsoft extension prop
`iscollection` which seems to be emitted as well.
2019-02-03 12:34:18 +00:00
Nick Craig-Wood
a838add230 fstests: skip chunked uploading tests with -short 2019-02-03 12:28:44 +00:00
Nick Craig-Wood
d68b091170 hubic: make error message more informative if authentication fails 2019-02-03 12:25:19 +00:00
Nick Craig-Wood
d809bed438 Add weetmuts to contributors 2019-02-03 12:19:08 +00:00
weetmuts
3aa1818870 listremotes: remove -l short flag as it conflicts with the new global flag 2019-02-03 12:17:15 +00:00
weetmuts
96f6708461 s3: add aws endpoint eu-north-1 2019-02-03 12:17:15 +00:00
weetmuts
6641a25f8c gcs: update google cloud storage endpoints 2019-02-03 12:17:15 +00:00
Cnly
cd46ce916b fstests: ensure Fs.Put and Object.Update don't panic on unknown-sized uploads 2019-02-03 11:47:57 +00:00
Cnly
318d1bb6f9 fs: clarify behaviour of Put() and Upload() for unknown-sized objects 2019-02-03 11:47:57 +00:00
Cnly
b8b53901e8 operations: call Rcat in Copy when size is -1 - #2832 2019-02-03 11:47:57 +00:00
Nick Craig-Wood
6e153781a7 rc: add help to show how to set log level with options/set 2019-02-03 11:47:57 +00:00
Nick Craig-Wood
f27c2d9760 vfs: make cache tests more reliable 2019-02-02 16:26:55 +00:00
Nick Craig-Wood
eb91356e28 fs/asyncreader: optionally user mmap for memory allocation with --use-mmap #2200
This replaces the `sync.Pool` allocator with lib/pool.  This
implements a pool of buffers of up to 64MB which can be re-used but is
flushed every 5 seconds.

If `--use-mmap` is set then rclone will use mmap for memory
allocations which is much better at returning memory to the OS.
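A much-reduced sketch of the buffer recycling idea (lib/pool itself is more sophisticated, for example it only drops buffers that went unused between flushes and can allocate with mmap):

    package sketch

    import (
        "sync"
        "time"
    )

    // bufferPool recycles fixed-size buffers and periodically drops the
    // cached ones so memory can eventually be returned.
    type bufferPool struct {
        mu      sync.Mutex
        free    [][]byte
        bufSize int
    }

    func newBufferPool(bufSize int, flushEvery time.Duration) *bufferPool {
        bp := &bufferPool{bufSize: bufSize}
        go func() {
            for range time.Tick(flushEvery) {
                bp.mu.Lock()
                bp.free = nil // drop everything sitting in the free list
                bp.mu.Unlock()
            }
        }()
        return bp
    }

    func (bp *bufferPool) Get() []byte {
        bp.mu.Lock()
        defer bp.mu.Unlock()
        if n := len(bp.free); n > 0 {
            buf := bp.free[n-1]
            bp.free = bp.free[:n-1]
            return buf
        }
        return make([]byte, bp.bufSize)
    }

    func (bp *bufferPool) Put(buf []byte) {
        bp.mu.Lock()
        defer bp.mu.Unlock()
        bp.free = append(bp.free, buf)
    }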
2019-02-02 14:35:56 +00:00
Nick Craig-Wood
bed2971bf0 lib/pool: a buffer recycling library which can be optionally be used with mmap 2019-02-02 14:35:56 +00:00
Nick Craig-Wood
f0696dfe30 lib/mmap: library to do memory allocation with anonymous memory maps 2019-02-02 14:35:56 +00:00
Nick Craig-Wood
a43ed567ee vfs: implement --vfs-cache-max-size to limit the total size of the cache 2019-02-02 12:30:10 +00:00
Nick Craig-Wood
fffdbb31f5 bin/get-github-release.go: Use GOPATH/bin by preference to place binary 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
cacefb9a82 bin/get-github-release.go: automatically choose the right os/arch
This fixes the install of golangci-lint on non Linux platforms
2019-02-02 11:45:07 +00:00
Nick Craig-Wood
d966cef14c build: fix problems found with unconvert 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
a551978a3f build: fix problems found with structcheck linter 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
97752ca8fb build: fix problems found with ineffasign linter 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
8d5d332daf build: fix problems found with golint 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
6b3a9bf26a build: fix problems found by the deadcode linter 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
c1d9a1e174 build: use golangci-lint for code quality checks 2019-02-02 11:45:07 +00:00
Nick Craig-Wood
98120bb864 bin/get-github-release.go: enable extraction of binary not in root of tar
Also fix project name regexp to allow -
2019-02-02 11:34:51 +00:00
Nick Craig-Wood
f8ced557e3 mount: print more things in seek_speed test 2019-02-02 11:30:49 +00:00
Cnly
7b20139c6a onedrive: return err instead of panic on unknown-sized uploads 2019-02-02 16:37:33 +08:00
Nick Craig-Wood
c496efe9a4 Add Wojciech Smigielski to contributors 2019-02-01 17:12:43 +00:00
Nick Craig-Wood
cf583e0237 Add Rémy Léone to contributors 2019-02-01 17:12:43 +00:00
Wojciech Smigielski
f09d0f5fef b2: added disable sha1sum flag 2019-02-01 17:12:24 +00:00
Rémy Léone
1e6cbaa355 s3: Add Scaleway to s3 documentation 2019-02-01 17:09:57 +00:00
Nick Craig-Wood
be643ecfbc sftp: don't error on dangling symlinks 2019-02-01 16:43:26 +00:00
Nick Craig-Wood
0c4ed35b9b build: improve beta tidy script 2019-02-01 16:40:55 +00:00
Nick Craig-Wood
4e4feebf0a drive: fix google docs in rclone mount in some circumstances #1732
Before this change any attempt to access a google doc in an rclone
mount would give the error "partial downloads are not supported while
exporting Google Documents" as the mount uses ranged requests to read
data.

This implements ranged requests for a limited number of scenarios,
just enough so that Google docs can be cat-ed from an rclone mount.
When they are cat-ed then they receive their correct size also.
2019-01-31 10:39:13 +00:00
Sebastian Bünger
291f270904 jottacloud: add support for 2-factor authentification fixes #2722 2019-01-30 08:13:46 +00:00
Nick Craig-Wood
f799be1d6a local: fix symlink tests under Windows 2019-01-29 15:40:49 +00:00
Nick Craig-Wood
74297a0c55 local: make sure we close file handle in local tests
...as Windows can't remove a directory with an open file handle in it
2019-01-29 15:23:42 +00:00
Nick Craig-Wood
7e13103ba2 Add kayrus to contributors 2019-01-29 14:43:25 +00:00
kayrus
34baf05d9d Swift: introduce application credential auth support 2019-01-29 14:43:10 +00:00
kayrus
38c0018906 Bump github.com/ncw/swift to v1.0.44 2019-01-29 14:43:10 +00:00
Nick Craig-Wood
6f25e48cbb ftp: fix docs to note ftp_proxy isn't supported 2019-01-29 14:38:26 +00:00
Nick Craig-Wood
7e99abb5da Add Matt Robinson to contributors 2019-01-29 14:38:26 +00:00
Nick Craig-Wood
629019c3e4 Add yair@unicorn to contributors 2019-01-29 14:38:26 +00:00
Matt Robinson
1402fcb234 fix typo in rcd docs 2019-01-29 14:37:58 +00:00
Nick Craig-Wood
b26276b416 union: fix poll-interval not working - fixes #2837
Before this change the union remote was using whether the writable
union could poll for changes to decide whether the union mount could
poll for changes.

The fix causes the union backend to signal it can poll for changes if
**any** of the remotes can poll for changes.
2019-01-28 14:43:12 +00:00
Nick Craig-Wood
e317f04098 local: make using -l/--links with -L/--copy-links throw an error #1152 2019-01-28 13:47:27 +00:00
Nick Craig-Wood
65ff330602 local: add tests for -l feature #1152 2019-01-28 13:47:27 +00:00
Nick Craig-Wood
52763e1918 local: when using -l fix setting modification times of symlinks #1152
Before this change it was setting the modification times of the things
that the symlinks pointed to.

Note that this is only implemented for unix style OSes.  Other OSes
will not attempt to set the modification time of a symlink.
2019-01-28 13:47:27 +00:00
yair@unicorn
23e06cedbd local: Add support for '-l' (symbolic link translation) #1152 2019-01-28 13:47:27 +00:00
yair@unicorn
b369fcde28 local: add documentation for -l option #1152 2019-01-28 13:47:27 +00:00
Nick Craig-Wood
c294068780 webdav: support About - fixes #2937
This means that `rclone about` will work with Webdav backends and `df`
will be correct when using them in `rclone mount`.
2019-01-28 13:45:09 +00:00
Nick Craig-Wood
8a774a3dd4 webdav: support MD5 and SHA1 hashes with Owncloud and Nextcloud - fixes #2379 2019-01-28 13:45:09 +00:00
Nick Craig-Wood
53a8b5a275 vfs: Implement renaming of directories for backends without DirMove #2539
Previously to this change, backends without the optional interface
DirMove could not rename directories.

This change uses the new operations.DirMove call to implement renaming
directories which will fall back to Move/Copy as necessary.
2019-01-27 21:26:56 +00:00
Nick Craig-Wood
bbd03f49a4 operations: Implement DirMove for moving a directory #2539
This does the equivalent of sync.Move but is specialised for moving
files in one backend.
2019-01-27 21:26:56 +00:00
Nick Craig-Wood
e31578e03c s3: Auto detect region for buckets on operation failure - fixes #2915
If an incorrect region error is returned while using a bucket then the
region is updated, the session is remade and the operation is retried.
2019-01-27 21:22:49 +00:00
Nick Craig-Wood
0855608bc1 drive: add --drive-pacer-burst config to control bursting of the rate limiter 2019-01-27 21:19:20 +00:00
Nick Craig-Wood
f8dbf8292a pacer: control the minSleep with a rate limiter and allow burst
This will mean rclone tracks the minimum sleep values more precisely
when it isn't rate limiting.

Allowing burst is good for some backends (eg Google Drive).
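To illustrate the idea with the standard golang.org/x/time/rate package (the exact wiring inside the pacer is only sketched here): a 100ms minimum interval works out to 10 transactions per second on average, while the burst parameter lets a handful of calls go through back to back.

    package sketch

    import (
        "context"
        "time"

        "golang.org/x/time/rate"
    )

    // newPacerLimiter enforces minSleep between calls on average while
    // allowing short bursts of up to `burst` calls.
    func newPacerLimiter(minSleep time.Duration, burst int) *rate.Limiter {
        return rate.NewLimiter(rate.Every(minSleep), burst)
    }

    // beforeCall blocks until the limiter allows another API call.
    func beforeCall(ctx context.Context, limiter *rate.Limiter) error {
        return limiter.Wait(ctx)
    }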
2019-01-27 21:19:20 +00:00
Nick Craig-Wood
144daec800 drive: set default pacer to 100ms for 10 tps - fixes #2880 2019-01-27 21:19:20 +00:00
Nick Craig-Wood
6a832b7173 qingstor: default --qingstor-upload-concurrency to 1 to work around bug
If the upload concurrency is set > 1 then the hash becomes corrupted.
The upload is fine, and can be downloaded fine, however the hash is no
longer the md5sum of the object.  It is not known whether this is
rclone's fault or a bug at QingStor.
2019-01-27 21:09:11 +00:00
Nick Craig-Wood
184a9c8da6 mountlib: clip blocks returned to 32 bit number for Windows 32 bit - fixes #2934 2019-01-27 12:04:56 +00:00
Sebastian Bünger
88592a1779 jottacloud: Use token auth for all API requests
Don't store password anymore
2019-01-27 11:32:11 +00:00
Sebastian Bünger
92fa30a787 jottacloud: resume/deduplication fixups 2019-01-27 11:32:11 +00:00
Oliver Heyme
e4dfe78ef0 jottacloud: resume and deduplication support 2019-01-27 11:32:11 +00:00
Nick Craig-Wood
ba84eecd94 build: don't attempt to upload artifacts for pull requests on circleci 2019-01-25 17:27:02 +00:00
Cnly
ea12d76c03 onedrive: fix root ID not normalised #2930 2019-01-24 19:59:23 +08:00
Nick Craig-Wood
5f0a8a4e28 rest: fix upload of 0 length files
Before this change if ContentLength was set in the options but 0 then
we would upload using chunked encoding.  Fix this to always upload
with a "Content-Length" header even if the size is 0.

Remove workarounds for this from b2 and onedrive backends.

This fixes the issue for the webdav backend described here:

https://forum.rclone.org/t/code-500-errors-with-webdav-nextcloud/8440/
2019-01-24 11:38:00 +00:00
Nick Craig-Wood
2fc095cd3e azureblob: Stop Mkdir attempting to create existing containers
Before this change azureblob would attempt to create already existing
containers.  This causes problems with limited permissions keys.

This change checks the container exists before trying to create it in
the same way the s3 backend does.  This uses no more requests in the
usual case of the container existing.

See: https://forum.rclone.org/t/copying-individual-files-to-azure-blob-storage/8397
2019-01-23 09:58:46 +00:00
Nick Craig-Wood
a2341cc412 qingstor: add upload chunk size/concurrency/cutoff control #2851
* --upload-chunk-size
* --upload-concurrency
* --upload-cutoff
2019-01-18 15:20:20 +00:00
Nick Craig-Wood
9685be64cd qingstor: fix go routine leak on multipart upload errors - fixes #2851 2019-01-18 15:20:20 +00:00
Nick Craig-Wood
39f5059d48 s3: add --s3-bucket-acl to control bucket ACL - fixes #2918
Before this change buckets were created with the same ACL as objects.

After this change, the user can set just --s3-acl to set the ACL of
buckets and objects, or use --s3-bucket-acl as well to have a
different ACL used for bucket creation.

This also logs at INFO level the creation and deletion of buckets.
2019-01-18 15:12:11 +00:00
Nick Craig-Wood
a30e80564d config: when using auto confirm make user interaction configurable
* drive: don't run teamdrive config if auto confirm set
* onedrive: don't run extra config if auto confirm set
* make Confirm results customisable by config

Fixes #1010
2019-01-18 14:46:26 +00:00
Nick Craig-Wood
8e107b9657 build: update the build container to use latest go version for circleci 2019-01-18 13:26:27 +00:00
Nick Craig-Wood
21a0693b79 build: upload circleci builds for the beta release latest too 2019-01-17 15:18:03 +00:00
Nick Craig-Wood
4846d9393d ftp: wait for 60 seconds for connection Close then declare it dead
This helps with indefinite hangs when transferring very large files on
some ftp server.

Fixes #2912
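The general shape of a bounded Close, sketched with a goroutine and a timer (an assumption about one way to do it, not necessarily how the backend does):

    package sketch

    import (
        "errors"
        "io"
        "time"
    )

    // closeWithTimeout runs c.Close in the background and stops waiting
    // after the timeout, declaring the connection dead.
    func closeWithTimeout(c io.Closer, timeout time.Duration) error {
        done := make(chan error, 1)
        go func() { done <- c.Close() }()
        select {
        case err := <-done:
            return err
        case <-time.After(timeout):
            return errors.New("timed out waiting for Close, connection declared dead")
        }
    }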
2019-01-15 17:32:14 +00:00
Nick Craig-Wood
fc4f20d52f build: upload circleci builds for the beta release 2019-01-15 12:18:50 +00:00
Onno Zweers
60558b5d37 webdav: update docs about dcache and macaroons
Added link to dcache.org

Updated link to macaroon script to new location
2019-01-15 09:21:34 +00:00
Nick Craig-Wood
5990573ccd accounting: fix layout of stats - fixes #2910
This fixes several things wrong with the layout of the stats.

Transfers which haven't started are printed in the same format as
those which have so the stats with `--progress` don't show horrible
artifacts.

Checkers and transfers now get a ": checkers" and ": transfers" label
on the end of the stats line.  Transfers will have the transfer stats
when the transfer has started instead of this.

There was a bug in the routine which shortened the file names (it
always produced strings 1 character too long).  This is now fixed with a test.

The formatting string was wrong with a fixed width of 45 - this is now
replaced with the value of `--stats-file-name-length`.

This also meant that there were unnecessary leading spaces in the file
names.  So the default `--stats-file-name-length` was raised to 45
from 40.
2019-01-14 16:12:39 +00:00
Nick Craig-Wood
bd11d3cb62 vfs: Fix panic on rename with --dry-run set - fixes #2911 2019-01-14 12:07:25 +00:00
Nick Craig-Wood
5e5578d2c3 docs: rclone config file instead of rclone -h to find config file 2019-01-13 17:56:57 +00:00
Nick Craig-Wood
1318c6aec8 s3: Add Alibaba OSS to integration tests and fix storage classes 2019-01-12 20:41:47 +00:00
Nick Craig-Wood
f29757de3b test_all: make a way of ignoring integration test failures
Use this to ignore known failures
2019-01-12 20:18:05 +00:00
Nick Craig-Wood
f397c35935 fstest/test_all: add alternate s3 and swift providers to the integration tests 2019-01-12 18:33:31 +00:00
Nick Craig-Wood
f365230aea doc: Add more info on testing to CONTRIBUTING 2019-01-12 18:28:51 +00:00
Nick Craig-Wood
ff0b8e10af s3: Support Alibaba Cloud (Aliyun) OSS
The existing s3 backend passed all integration tests with OSS provided
`force_path_style = false`.

This makes sure that is so and adds documentation and configuration
for OSS.

Thanks to @luolibin for their work on the OSS backend which we ended
up not needing.

Fixes #1641
Fixes #1237
2019-01-12 17:28:04 +00:00
Nick Craig-Wood
8d16a5693c vendor: update github.com/goftp/server - fixes #2845 2019-01-12 17:09:11 +00:00
Nick Craig-Wood
781142a73f Add qip to contributors 2019-01-11 17:35:47 +00:00
qip
f471a7e3f5 fshttp: Add cookie support with cmdline switch --use-cookies
Cookies are handled in memory by a cookiejar in the fshttp module for
the entire session.

One useful scenario is an HTTP storage system where an index server
adds an authentication cookie while redirecting to a CDN for the
actual files.

It can also be helpful to reuse fshttp in other storage systems that
require cookies.
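In stdlib terms the mechanism is simply an in-memory cookie jar attached to the HTTP client, along these lines:

    package sketch

    import (
        "net/http"
        "net/http/cookiejar"
    )

    // newCookieClient returns an HTTP client that remembers cookies in
    // memory for the lifetime of the session.
    func newCookieClient() (*http.Client, error) {
        jar, err := cookiejar.New(nil) // nil options are fine for a session jar
        if err != nil {
            return nil, err
        }
        return &http.Client{Jar: jar}, nil
    }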
2019-01-11 17:35:29 +00:00
Nick Craig-Wood
d7a1fd2a6b Add Dario Guzik to contributors 2019-01-11 14:14:12 +00:00
Dario Guzik
7782eda88e check: Add stats showing total files matched. 2019-01-11 14:13:48 +00:00
Nick Craig-Wood
d08453d402 local: fix renaming/deleting open files on Windows #2730
This uses the lib/file package to open files in such a way open files
can be renamed or deleted even under Windows.
2019-01-11 10:26:34 +00:00
Nick Craig-Wood
71e98ea584 vfs: fix renaming/deleting open files with cache mode "writes" under Windows
Before this change, renaming and deleting of open files (which can
easily happen due to the asynchronous nature of file systems) would
produce an error, for example saving files with Firefox.

After this change we open files with the flags necessary for open
files to be renamed or deleted.

Fixes #2730
2019-01-11 10:26:34 +00:00
Nick Craig-Wood
42d997f639 lib/file: reimplement os.OpenFile allowing rename/delete open files under Windows
Normally os.OpenFile under Windows does not allow renaming or deleting
open file handles.  This package provides equivalents for os.OpenFile,
os.Open and os.Create which do allow that.
2019-01-11 10:26:34 +00:00
Nick Craig-Wood
571b4c060b mount: check that mountpoint and local directory to mount don't overlap
If the mountpoint and the directory to mount overlap this causes a
lockup.

Fixes #2905
2019-01-10 14:18:00 +00:00
Nick Craig-Wood
ff72059a94 operations: warn if --checksum is set but there are no hashes available
Also caveat the help of --checksum

Fixes #2903
2019-01-10 11:07:10 +00:00
Nick Craig-Wood
2e6ef4f6ec test_all: fix run with -remotes that aren't in the config file 2019-01-10 10:59:32 +00:00
Nick Craig-Wood
0ec6dd9f4b Add nicolov to contributors 2019-01-09 19:29:26 +00:00
nicolov
0b7fdf16a2 serve: add dlna server 2019-01-09 19:14:14 +00:00
nicolov
5edfd31a6d vendor: add github.com/anacrolix/dms 2019-01-09 19:14:14 +00:00
Nick Craig-Wood
7ee7bc87ae vfs: fix tests after --dir-perms changes
This was introduced in 554ee0d963
2019-01-09 09:49:34 +00:00
Fabian Möller
1433558c01 sftp: perform environment variable expansion on key-file 2019-01-09 10:11:33 +01:00
Fabian Möller
0458b961c5 sftp: add option to force the usage of an ssh-agent
Also adds the possibility to specify a specific key to request from the
ssh-agent.
2019-01-09 10:11:33 +01:00
Fabian Möller
c1998c4efe sftp: add support for PEM encrypted private keys 2019-01-09 10:11:33 +01:00
Alex Chen
49da220b65 onedrive: fix broken support for "shared with me" folders - fixes #2536, #2778 (#2876) 2019-01-09 13:11:00 +08:00
Nick Craig-Wood
554ee0d963 vfs: add --dir-perms and --file-perms flags - fixes #2897
This allows files to be shown with the execute bit which allows
binaries to be run under Windows and Linux.
2019-01-08 17:29:38 +00:00
Denis Skovpen
2d2533a08a cmd/copyurl: fix checking of --dry-run 2019-01-08 11:28:05 +00:00
Nick Craig-Wood
733b072d4f azureblob: ignore directory markers in initial Fs creation too - fixes #2806
This is a follow-up to feea0532, covering the initial Fs creation
where the backend detects whether the path is pointing to a file or a
directory.
2019-01-08 11:21:20 +00:00
Nick Craig-Wood
2d01a65e36 oauthutil: read a fresh token config file before using the refresh token.
This means that rclone will pick up tokens from concurrently running
rclones.  This helps for Box which only allows each refresh token to
be used once.

Without this fix, rclone caches the refresh token at the start of the
run, then when the token expires the refresh token may have been used
already by a concurrently running rclone.

This also will retry the oauth up to 5 times at 1 second intervals.

See: https://forum.rclone.org/t/box-token-refresh-timing/8175
2019-01-08 11:01:30 +00:00
Nick Craig-Wood
b8280521a5 drive: supply correct scopes to when using --drive-impersonate
This fixes using --drive-impersonate and appfolders.
2019-01-07 11:50:05 +00:00
Nick Craig-Wood
60e6af2605 Add andrea rota to contributors 2019-01-05 20:56:03 +00:00
andrea rota
9d16822c63 note on minimum version with support for b2 multi application keys
This trivial patch adds a note about the minimum version of rclone
needed in order to be able to use multiple application keys with the b2
backend.

As Debian stable (amongst other distros) is shipping an older version,
users running rclone < 1.43 and reading about this feature in the online
docs may struggle to realise why they are not able to sync to b2 when
configured to use an application key other than the master one.

For reference: https://github.com/ncw/rclone/issues/2513
2019-01-04 11:22:20 +00:00
Cnly
38a0946071 docs: update OneDrive limitations and versioning issue 2019-01-04 16:42:19 +08:00
Nick Craig-Wood
95e52e1ac3 cmd: improve error reporting for too many/few arguments - fixes #2860
Improve docs on the different kind of flag passing.
2018-12-29 17:40:21 +00:00
Nick Craig-Wood
51ab1c940a Add Sebastian Bünger - @buengese to the MAINTAINERS list
Also add areas of specific responsibility
2018-12-29 16:08:40 +00:00
Nick Craig-Wood
6f30427357 yandex: note --timeout increase needed for large files
See: https://forum.rclone.org/t/rclone-stucks-at-the-end-of-a-big-file-upload/8102
2018-12-29 16:08:40 +00:00
Cnly
3220acc729 fstests: fix TestFsName fails when using remote:with/path 2018-12-29 09:34:04 +00:00
Nick Craig-Wood
3c97933416 oauthutil: suppress ERROR message when doing remote config
Before this change doing a remote config using rclone authorize gave
this error.  The token is saved a bit later anyway so the error is
needlessly confusing.

    ERROR : Failed to save new token in config file: section 'remote' not found.

This commit suppresses that error.

https://forum.rclone.org/t/onedrive-for-business-failed-to-save-token/8061
2018-12-28 09:53:53 +00:00
Nick Craig-Wood
039e2a9649 vendor: pull in github.com/ncw/swift latest to fix reauth on big files 2018-12-28 09:23:57 +00:00
Nick Craig-Wood
1c01d0b84a vendor: update dropbox SDK to fix failing integration tests #2829 2018-12-26 15:17:03 +00:00
Nick Craig-Wood
39eac7a765 Add Jay to contributors 2018-12-26 15:08:09 +00:00
Jay
082a7065b1 Use vfsgen for static HTML templates 2018-12-26 15:07:21 +00:00
Jay
f7b08a6982 vendor: add github.com/shurcooL/vfsgen 2018-12-26 15:07:21 +00:00
Nick Craig-Wood
37e32d8c80 Add Arkadius Stefanski to contributors 2018-12-26 15:03:27 +00:00
Arkadius Stefanski
f2a1b991de readme: fix copying link 2018-12-26 15:03:08 +00:00
Nick Craig-Wood
4128e696d6 Add François Leurent to contributors
New email for Animosity022
2018-12-26 15:00:16 +00:00
François Leurent
7e7f3de355 qingcloud: fix typos in trace messages 2018-12-26 14:51:48 +00:00
Nick Craig-Wood
1f6a1cd26d vfs: add test_vfs code for hunting for deadlocks 2018-12-26 09:08:27 +00:00
Nick Craig-Wood
2cfe2354df vfs: fix deadlock between RWFileHandle.close and File.Remove - fixes #2857
Before this change we took the locks file.mu and file.muRW in an
inconsistent order - after the change we always take them in the same
order to fix the deadlock.
2018-12-26 09:08:27 +00:00
Nick Craig-Wood
13387c0838 vfs: fix deadlock on concurrent operations on a directory - fixes #2811
Before this fix there were two paths where concurrent use of a
directory could take the file lock then directory lock and the other
would take the locks in the reverse order.

Fix this by narrowing the locking windows so the file lock and
directory lock don't overlap.
2018-12-26 09:08:27 +00:00
Animosity022
5babf2dc5c Update drive.md
Fixed the Google Drive API documentation
2018-12-22 18:37:00 +00:00
Nick Craig-Wood
9012d7c6c1 cmd: fix --progress crash under Windows Jenkins - fixes #2846 2018-12-22 18:05:13 +00:00
Nick Craig-Wood
df1faa9a8f webdav: fail soft on time parsing errors
The time format provided by webdav servers seems to vary wildly from
that specified in the RFC - rclone already parses times in 5 different
formats!

If an unparseable time is found, then fail softly logging an ERROR
(just once) but returning the epoch.

This will mean that webdav servers with bad time formats will still be
usable by rclone.
2018-12-20 12:10:15 +00:00
Nick Craig-Wood
3de7ad5223 b2: for a bucket limited application key check the bucket name
Before this fix rclone would just use the authorised bucket regardless
of what bucket you put on the command line.

This uses the new `bucketName` response in the API and checks that the
user is using the correct bucket name to avoid accidents.

Fixes #2839
2018-12-20 12:07:35 +00:00
Garry McNulty
9cb3a68c38 crypt: check for maximum length before decrypting filename
The EME Transform() method will panic if the input data is larger than
2048 bytes.

Fixes #2826
2018-12-19 11:51:44 +00:00
Nick Craig-Wood
c1dd76788d httplib: make http serving with auth generate INFO messages on auth fail
2018/12/13 12:13:44 INFO  : /: 127.0.0.1:39696: Basic auth challenge sent
2018/12/13 12:13:54 INFO  : /: 127.0.0.1:40050: Unauthorized request from ncw

Fixes #2834
2018-12-14 13:38:49 +00:00
Nick Craig-Wood
5ee1816a71 filter: parallelise reading of --files-from - fixes #2835
Before this change rclone would read the list of files from the
files-from parameter and check they existed one at a time.  This could
take a very long time for lots of files.

After this change, rclone will check up to --checkers in parallel.
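A condensed sketch of the pattern using golang.org/x/sync/errgroup (the dependency added in the next commit); the lookup callback and names are illustrative:

    package sketch

    import (
        "context"

        "golang.org/x/sync/errgroup"
    )

    // checkFilesExist looks up every name with at most `checkers` lookups
    // in flight at once, returning the first error encountered.
    func checkFilesExist(ctx context.Context, names []string, checkers int,
        lookup func(ctx context.Context, name string) error) error {
        g, gCtx := errgroup.WithContext(ctx)
        tokens := make(chan struct{}, checkers) // semaphore limiting concurrency
        for _, name := range names {
            name := name // capture the range variable for the goroutine
            tokens <- struct{}{}
            g.Go(func() error {
                defer func() { <-tokens }()
                return lookup(gCtx, name)
            })
        }
        return g.Wait()
    }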
2018-12-13 13:22:30 +00:00
Nick Craig-Wood
63b51c6742 vendor: add golang.org/x/sync as a dependency 2018-12-13 10:45:52 +00:00
Nick Craig-Wood
e7684b7ed5 Add William Cocker to contributors 2018-12-06 21:53:53 +00:00
William Cocker
dda23baf42 s3 : update doc for Glacier storage class
s3 : update doc for Glacier storage class : related to #923
2018-12-06 21:53:38 +00:00
William Cocker
8575abf599 s3: add GLACIER storage class
Fixes #923
2018-12-06 21:53:05 +00:00
Nick Craig-Wood
feea0532cd azureblob: ignore directory markers - fixes #2806
This ignores 0 length blobs if
- they end with /
- they have the metadata hdi_isfolder = true
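As a sketch, the check amounts to something like this (field and function names are made up):

    package sketch

    import "strings"

    // isDirectoryMarker reports whether a blob is a directory placeholder
    // to be ignored: zero length and either ending in "/" or carrying the
    // hdi_isfolder metadata flag.
    func isDirectoryMarker(size int64, name string, metadata map[string]string) bool {
        if size != 0 {
            return false
        }
        if strings.HasSuffix(name, "/") {
            return true
        }
        return metadata["hdi_isfolder"] == "true"
    }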
2018-12-06 21:47:03 +00:00
Nick Craig-Wood
d3e8ae1820 Add Mark Otway to contributors 2018-12-06 15:13:03 +00:00
Nick Craig-Wood
91a9a959a2 Add Mathieu Carbou to contributors 2018-12-06 15:12:58 +00:00
Mark Otway
04eae51d11 Fix install for Synology
7z check doesn't work due to misplaced comma, so installation fails on Synology.
2018-12-06 15:12:21 +00:00
Mathieu Carbou
8fb707e16d Fixes #1788: Retry-After support for Dropbox backend 2018-12-05 22:03:30 +00:00
Mathieu Carbou
4138d5aa75 Issue #1788: Pointing to Dropbox's v5.0.0 tag 2018-12-05 22:03:30 +00:00
Nick Craig-Wood
fc654a4cec http: fix backend with --files-from and non-existent files
Before this fix the http backend was returning the wrong error code
when files were not found.  This was causing --files-from to error on
missing files instead of skipping them like it should.
2018-12-04 17:40:44 +00:00
Nick Craig-Wood
26b5f55cba Update after goimports change 2018-12-04 10:11:57 +00:00
Nick Craig-Wood
3f572e6bf2 webdav: fix infinite loop on failed directory creation - fixes #2714 2018-12-02 21:03:12 +00:00
Nick Craig-Wood
941ad6bc62 azureblob: use the s3 pacer for 0 delay - fixes #2799 2018-12-02 20:55:16 +00:00
Nick Craig-Wood
5d1d93e163 azureblob: use the rclone HTTP client - fixes #2654
This enables --dump headers and --timeout to work properly.
2018-12-02 20:55:16 +00:00
Nick Craig-Wood
35fba5bfdd Add Garry McNulty to contributors 2018-12-02 20:52:04 +00:00
Garry McNulty
887834da91 b2: cleanup unfinished large files
The `cleanup` command will delete unfinished large file uploads that
were started more than a day ago (to avoid deleting uploads that are
potentially still in progress).

Fixes #2617
2018-12-02 20:51:13 +00:00
Nick Craig-Wood
107293c80e copy,move: restore --no-traverse flag
The --no-traverse flag was not implemented when the new sync routines
(using the march package) were implemented.

This re-implements --no-traverse in march by trying to find a match
for each object with NewObject rather than from a directory listing.
2018-12-02 20:28:13 +00:00
Nick Craig-Wood
e3c4ebd59a march: factor calling parameters into a structure 2018-12-02 18:07:26 +00:00
Nick Craig-Wood
d99ffde7c0 s3: change --s3-upload-concurrency default to 4 to increase performance #2772
Increasing the --s3-upload-concurrency to 4 (from 2) gives an
additional 45% throughput at the cost of 10MB extra memory per transfer.

This default was chosen after testing the upload performance.
2018-12-02 17:58:34 +00:00
Nick Craig-Wood
198c34ce21 s3: implement --s3-upload-cutoff for single part uploads below this - fixes #2772
Before this change rclone would use multipart uploads for any size of
file.  However multipart uploads are less efficient for smaller files
and don't have MD5 checksums so it is advantageous to use single part
uploads if possible.

This implements single part uploads for all files smaller than the
upload_cutoff size.  Streamed files must be uploaded as multipart
files though.
2018-12-02 17:58:34 +00:00
Nick Craig-Wood
0eba88bbfe sftp: check directory is empty before issuing rmdir
Some SFTP servers allow rmdir on full directories which is allowed
according to the RFC so make sure we don't accidentally delete data
here.

See: https://forum.rclone.org/t/rmdir-and-delete-empty-src-dirs-file-does-not-exist/7737
2018-12-02 11:16:30 +00:00
Nick Craig-Wood
aeea4430d5 swift: efficiency: slim Object and reduce requests on upload
- Slim down Object to only include necessary data
- Don't HEAD an object after PUT - read the hash from the response
2018-12-02 10:23:55 +00:00
Nick Craig-Wood
4b15c4215c sftp: fix rmdir on Windows based servers (eg CrushFTP)
Before this change we used Remove to remove directories.  This works
fine on Unix based systems but not so well on Windows based ones.
Swap to using RemoveDirectory instead.
2018-11-29 21:34:37 +00:00
Nick Craig-Wood
50452207d9 swift: add --swift-no-chunk to disable segmented uploads in rcat/mount
Fixes #2776
2018-11-29 11:11:30 +00:00
Nick Craig-Wood
01fcad9b9c rc: fix docs for sync/{sync,copy,move} and operations/{copy,move}file 2018-11-29 11:11:30 +00:00
themylogin
eb41253764 azureblob: allow building azureblob backend on *BSD
FreeBSD support was added in Azure/azure-storage-blob-go@0562badec5
OpenBSD and NetBSD support was added in Azure/azure-storage-blob-go@1d6dd77d74
2018-11-27 12:20:48 +00:00
Nick Craig-Wood
89625e54cf vendor: update dependencies to latest 2018-11-26 14:10:33 +00:00
Nick Craig-Wood
58f7141c96 drive, googlecloudstorage: disallow on go1.8 due to dependent library changes
golang.org/x/oauth2/google no longer builds on go1.8
2018-11-26 14:10:33 +00:00
Nick Craig-Wood
e56c6402a7 serve restic: disallow on go1.8 because of dependent library changes
golang.org/x/net/http2 no longer builds on go1.8
2018-11-26 14:10:33 +00:00
Nick Craig-Wood
d0eb8ddc30 serve webdav: disallow on go1.8 due to dependent library changes
golang.org/x/net/webdav no longer builds with go1.8
2018-11-26 14:10:33 +00:00
Nick Craig-Wood
a6c28a5faa Start v1.45-DEV development 2018-11-24 15:20:24 +00:00
Nick Craig-Wood
d35bd15762 Version v1.45 2018-11-24 13:44:25 +00:00
Nick Craig-Wood
8b8220c4f7 azureblob: wait for up to 60s to create a just deleted container
When a container is deleted, a container with the same name cannot be
created for at least 30 seconds; the container may not be available
for more than 30 seconds if the service is still processing the
request.

We sleep so that we wait at most 60 seconds.  This is mostly useful in
the integration tests where containers get deleted and remade
immediately.
2018-11-24 10:57:37 +00:00
Nick Craig-Wood
5fe3b0ad71 Add Stephen Harris to contributors 2018-11-24 10:57:37 +00:00
Stephen Harris
4c8c87a935 Update PROXY section of the FAQ 2018-11-23 20:14:36 +00:00
Nick Craig-Wood
bb10a51b39 test_all: limit to go1.11 so the template used is supported 2018-11-23 17:17:19 +00:00
Nick Craig-Wood
df01f7a4eb test_all: fix regexp for retrying nested tests 2018-11-23 17:17:19 +00:00
Nick Craig-Wood
e84790ef79 swift: add pacer for retries to make swift more reliable #2740 2018-11-22 22:15:52 +00:00
Nick Craig-Wood
369a8ee17b ncdu: fix deleting files 2018-11-22 21:41:17 +00:00
Nick Craig-Wood
84e21ade6b cmount: fix on Linux - only apply volname for Windows and macOS 2018-11-22 20:41:05 +00:00
Sebastian Bünger
703b0535a4 yandex: update docs 2018-11-22 20:14:50 +00:00
Sebastian Bünger
155264ae12 yandex: complete rewrite
Get rid of the api client and use rest/pacer for all API calls
Add Copy, Move, DirMove, PublicLink, About optional interfaces
Improve general error handling
Remove ListR for now due to inconsistent behaviour
fixes #2586, progress on #2740 and #2178
2018-11-22 20:14:50 +00:00
Nick Craig-Wood
31e2ce03c3 fstests: re-arrange backend integration tests so they can be retried
Before this change backend integration tests depended on each other,
so tests could not be retried.

After this change we nest tests to ensure that tests are provided with
the starting state they expect.

Tell the integration test runner that it can retry backend tests also.

This also includes bin/test_independence.go which runs each test
individually for a backend to prove that they are independent.
2018-11-22 20:12:12 +00:00
Nick Craig-Wood
e969505ae4 info: fix control character map output 2018-11-20 14:04:27 +00:00
Nick Craig-Wood
26e2f1a998 Add Alexander to contributors 2018-11-20 10:22:11 +00:00
Alexander
2682d5a9cf install with busybox if available 2018-11-20 10:22:00 +00:00
Nick Craig-Wood
2191592e80 Add Henry Ptasinski to contributors 2018-11-19 13:33:59 +00:00
Nick Craig-Wood
18f758294e Add Peter Kaminski to contributors 2018-11-19 13:33:59 +00:00
Henry Ptasinski
f95c1c61dd s3: add config info for Wasabi's US-West endpoint
Wasabi has two locations, US East and US West, with different endpoint URLs.
When configuring S3 to use Wasabi, provide the endpoint information for both
locations.
2018-11-19 13:33:42 +00:00
Nick Craig-Wood
8c8dcdd521 webdav: fix config parsing so --webdav-user and --webdav-pass flags work 2018-11-17 13:14:54 +00:00
Nick Craig-Wood
141c133818 fstest: Wait for longer if necessary in TestFsChangeNotify 2018-11-16 07:45:24 +00:00
Nick Craig-Wood
0f03e55cd1 fstests: ignore main directory creation in TestFsChangeNotify 2018-11-15 18:39:28 +00:00
Nick Craig-Wood
9e6ba92a11 fstests: attempt to fix TestFsChangeNotify flakiness
This now uses testPut to upload the test files which will retry on
errors properly.
2018-11-15 18:39:28 +00:00
Nick Craig-Wood
762561f88e fstest: factor out retry logic from put code and make testPut return the object too 2018-11-15 18:39:28 +00:00
Nick Craig-Wood
084fe38922 fstests: fix the integration test errors running crypt over swift.
Skip tests involving errors creating or removing dirs on non-root
bucket-based fs
2018-11-15 18:39:28 +00:00
Peter Kaminski
63a2a935fc fix typos in original files, per #2727 review request 2018-11-14 22:48:58 +00:00
Peter Kaminski
64fce8438b docs: Fix a couple of minor typos in rclone_mount.md
* "transferring" instead of "transfering"
* "connection" instead of "connnection"
* "mount" instead of "mount mount"
2018-11-14 22:48:58 +00:00
Nick Craig-Wood
f92beb4e14 fstest: Fix TestPurge causing errors with subsequent tests on azure
Before this change TestPurge would remove a container and subsequent
tests would fail because the container was still being deleted so
couldn't be created.

This was fixed by introducing an fstest.NewRunIndividual() test runner
for TestPurge which causes the test to be run on a new container.
2018-11-14 17:14:02 +00:00
Nick Craig-Wood
f7ce2e8d95 azureblob: fix erroneous Rmdir error "directory not empty"
Before this change Rmdir would check the root rather than the
directory specified for being empty and return "directory not empty"
when it shouldn't have done.
2018-11-14 17:13:39 +00:00
Nick Craig-Wood
3975d82b3b Add brused27 to contributors 2018-11-13 17:00:26 +00:00
brused27
d87aa33ec5 azureblob: Avoid context deadline exceeded error by setting a large TryTimeout value - Fixes #2647 2018-11-13 16:59:53 +00:00
Anagh Kumar Baranwal
1b78f4d1ea Changed the docs scripts to use $HOME & $USER instead of specific values
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-11-13 11:00:34 +00:00
Nick Craig-Wood
b3704597f3 cmount: make --volname work for Windows - fixes #2679 2018-11-12 16:32:02 +00:00
Nick Craig-Wood
16f797a7d7 filter: add --ignore-case flag - fixes #502
The --ignore-case flag causes the filtering of file names to be case
insensitive.  The flag name comes from GNU tar.
2018-11-12 14:29:37 +00:00
Nick Craig-Wood
ee700ec01a lib/readers: add mutex to RepeatableReader - fixes #2572 2018-11-12 12:02:05 +00:00
Nick Craig-Wood
9b3c951ab7 Add Jake Coggiano to contributors 2018-11-12 11:34:28 +00:00
Jake Coggiano
22d17e79e3 dropbox: add dropbox impersonate support - fixes #2577 2018-11-12 11:33:39 +00:00
Jake Coggiano
6d3088a00b vendor: add github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/team/ 2018-11-12 11:33:39 +00:00
Nick Craig-Wood
84202c7471 onedrive: note 50,000 files is limit for one directory #2707 2018-11-11 15:22:19 +00:00
Nick Craig-Wood
96a05516f9 acd,box,onedrive,pcloud: remove log.Fatal from NewFs
And replace it with error returns.
2018-11-11 11:00:14 +00:00
Nick Craig-Wood
4f6a942595 cmd: Make --progress update the stats right at the end
Before this when rclone exited the stats would just show the last
printed version, rather than the actual final state.
2018-11-11 09:57:37 +00:00
Nick Craig-Wood
c4b0a37b21 rc: improve docs on debugging 2018-11-10 10:18:13 +00:00
Nick Craig-Wood
9322f4baef Add Erik Swanson to contributors 2018-11-08 12:58:41 +00:00
Erik Swanson
fa0a1e7261 s3: fix role_arn, credential_source, ...
When the env_auth option is enabled, the AWS SDK's session constructor
now loads configuration from ~/.aws/config and environment variables,
and credentials per the selected (or default) AWS_PROFILE's settings.

This is accomplished by **NOT** including any Credential provider in the
aws.Config passed to the session constructor: if the Config.Credentials
is non-nil, that will always be used and the user's configuration regarding
role_arn, credential_source, source_profile, etc... from the shared
config will be completely ignored.

(The conditional creation and configuration of the stscreds Credential
provider is complicated enough that it is not worth re-creating that
logic.)
2018-11-08 12:58:23 +00:00
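For context, this is the aws-sdk-go behaviour the change leans on: leaving Credentials out of the aws.Config lets the session constructor resolve credentials from the shared config files and environment, including profile role settings. A minimal standalone sketch (not rclone's code; it simply demonstrates the SDK default chain):

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws/session"
    )

    func main() {
        // No Credentials are set in aws.Config, so the SDK falls back to
        // AWS_PROFILE / ~/.aws/config / ~/.aws/credentials, including any
        // role_arn and source_profile configuration there.
        sess, err := session.NewSessionWithOptions(session.Options{
            SharedConfigState: session.SharedConfigEnable, // also read ~/.aws/config
        })
        if err != nil {
            log.Fatal(err)
        }
        creds, err := sess.Config.Credentials.Get()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("credentials resolved by provider:", creds.ProviderName)
    }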
Nick Craig-Wood
4ad08794c9 fserrors: add "server closed idle connection" to retriable errors
This seems to be related to this go issue: https://github.com/golang/go/issues/19943

See: https://forum.rclone.org/t/copy-from-dropbox-to-google-drive-yields-failed-to-copy-failed-to-open-source-object-server-closed-idle-connection-error/7460
2018-11-08 11:12:25 +00:00
Nick Craig-Wood
c0f600764b Add Scott Edlund to contributors 2018-11-07 14:27:06 +00:00
Scott Edlund
f139e07380 enable softfloat on MIPS arch
MIPS does not have a floating point unit.  Enable softfloat to build binaries that run on devices that do not have MIPS_FPU enabled in their kernel.
2018-11-07 14:26:48 +00:00
Nick Craig-Wood
c6786eeb2d move: don't create directories with --dry-run - fixes #2676 2018-11-06 13:34:15 +00:00
Nick Craig-Wood
57b85b8155 rc: fix job tests on Windows 2018-11-06 13:03:48 +00:00
Nick Craig-Wood
2b1194c57e rc: update docs with new methods 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
e6dd121f52 config: add rc operations for config 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
e600217666 config: create config directory on save if it is missing 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
bc17ca7ed9 rc: implement core/obscure 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
1916410316 rc: add core/version and put definitions next to implementations 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
dddfbec92a cmd/version: factor version number parsing routines into fs/version 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
75a88de55c rc/rcserver: with --rc-files if auth set, pass on to URL opened
If `--rc-user` or `--rc-pass` is set then the URL that is opened with
`--rc-files` will have the authorization in the URL in the
`http://user:pass@localhost/` style.
2018-11-05 15:44:40 +00:00
Nick Craig-Wood
2466f4d152 sync: add rc commands for sync/copy/move 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
39283c8a35 operations: implement operations remote control commands 2018-11-05 15:44:40 +00:00
Nick Craig-Wood
46c2f55545 copyurl: factor code into operations and write test 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
fc2afcbcbd lsjson: factor internals of lsjson command into operations 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
fa0a9653d2 rc: methods marked as AuthRequired need auth unless --rc-no-auth
Methods which can read or mutate external storage will require
authorisation - enforce this.  This can be overridden by `--rc-no-auth`.
2018-11-04 20:42:57 +00:00
Nick Craig-Wood
181267e20e cmd/rc: add --user and --pass flags and interpret --rc-user, --rc-pass, --rc-addr 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
75e8ea383c rc: implement rc.PutCachedFs for prefilling the remote cache 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
8c8b58a7de rc: expire remote cache and fix tests under race detector 2018-11-04 20:42:57 +00:00
Nick Craig-Wood
b961e07c57 rc: ensure rclone fails to start up if the --rc port is in use already 2018-11-04 15:11:51 +00:00
Nick Craig-Wood
0b80d1481a cache: make tests not start an rc but use the internal framework 2018-11-04 15:11:51 +00:00
Nick Craig-Wood
89550e7121 rcserver: serve directories as well as files 2018-11-04 15:11:51 +00:00
Nick Craig-Wood
370c218c63 cmd/http: factor directory serving routines into httplib/serve and write tests 2018-11-04 12:46:44 +00:00
Nick Craig-Wood
b972dcb0ae rc: implement options/blocks,get,set and register options 2018-11-03 11:32:00 +00:00
Nick Craig-Wood
0bfa9811f7 rc: factor server code into rcserver and implement serving objects
If a GET or HEAD request is received with a URL parameter of fs then
it will be served from that remote.
2018-11-03 11:32:00 +00:00
Nick Craig-Wood
aa9b2c31f4 serve/restic: factor object serving into cmd/httplib/serve 2018-11-03 11:32:00 +00:00
Nick Craig-Wood
cff75db6a4 rcd: implement new command just to serve the remote control API 2018-11-03 11:32:00 +00:00
Nick Craig-Wood
75252e4a89 rc: add --rc-files flag to serve files on the rc http server
This enables building a browser based UI for rclone
2018-11-03 11:32:00 +00:00
Nick Craig-Wood
2089405e1b fs/rc: add more infrastructure to help writing rc functions
- Fs cache for rc commands
- Helper functions for parsing the input
- Reshape command for manipulating JSON blobs
- Background Job starting, control, query and expiry
2018-11-02 17:32:20 +00:00
Nick Craig-Wood
a379eec9d9 fstest/mockfs: create mock fs.Fs for testing 2018-11-02 17:32:20 +00:00
Nick Craig-Wood
45d5339fcb cmd/rc: add --json flag for structured JSON input 2018-11-02 17:32:20 +00:00
Nick Craig-Wood
bb5637d46a serve http, webdav, restic: ensure rclone exits if the port is in use 2018-11-02 17:32:20 +00:00
Nick Craig-Wood
1f05d5bf4a delete: clarify that it only deletes files not directories 2018-11-02 17:07:45 +00:00
HerrH
ff87da9c3b Added some more links for easier finding
Expanded the Installation & Docs section with links to the website and added a link to the full list of storage providers and features.
2018-11-02 16:56:20 +00:00
ssaqua
3d81b75f44 dedupe: check for existing filename before renaming a dupe file 2018-11-02 16:51:52 +00:00
Nick Craig-Wood
baba6d67e6 s3: set ACL for server side copies to that provided by the user - fixes #2691
Before this change the ACL for objects which were server side copied
was left at the default "private" settings. S3 doesn't copy the ACL
from the source when you copy an object, you have to set it afresh
which is what this does.
2018-11-02 16:22:31 +00:00
Nick Craig-Wood
04c0564fe2 Add Ralf Hemberger to contributors 2018-11-02 09:53:23 +00:00
Ralf Hemberger
91cfdb81f5 change spaces to tab 2018-11-02 09:50:34 +00:00
Ralf Hemberger
deae7bf33c WebDav - Add RFC3339 date format - fixes #2712 2018-11-02 09:50:34 +00:00
Henning Surmeier
04a0da1f92 ncdu: remove option ('d' key)
delete files by pressing 'd' in the ncdu listing

GUI Improvements:
Boxes now have a border around them
Boxes can ask questions and allow the selection of options. The
selected option will be given to the UI.boxMenuHandler function.

Fixes #2571
2018-10-28 20:44:03 +00:00
Henning Surmeier
9486df0226 ncdu/scan: remove option for memory representation
Remove files/directories from the in memory structs of the cloud
directory. Size and Count will be recalculated and populated upwards
to the parent directories.
2018-10-28 20:44:03 +00:00
Nick Craig-Wood
948a5d25c2 operations: Fix Purge and Rmdirs when dir is not empty
Before this change, Purge on the fallback path would try to delete
directories starting from the root rather than the dir passed in.
Rmdirs would also attempt to delete the root.
2018-10-27 11:51:17 +01:00
Nick Craig-Wood
f7c31cd210 Add Florian Gamboeck to contributors 2018-10-27 00:28:11 +01:00
Florian Gamboeck
696e7b2833 backend/cache: Print correct info about Cache Writes 2018-10-27 00:27:47 +01:00
Anagh Kumar Baranwal
e76cf1217f Added docs to check for key generation on Mega
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-25 22:49:21 +01:00
Nick Craig-Wood
543e37f662 Require go1.8 for compilation 2018-10-25 17:06:33 +01:00
Nick Craig-Wood
c514cb752d vendor: update to latest versions of everything 2018-10-25 17:06:33 +01:00
Nick Craig-Wood
c0ca93ae6f opendrive: fix retries of upload chunks - fixes #2646
Before this change, upload chunks were being emptied on retry.  This
change introduces a RepeatableReader to fix the problem.
2018-10-25 11:50:38 +01:00
Nick Craig-Wood
38a89d49ae fstest/test_all: tidy HTML report
- link test number to online copy
- style links
- attempt to make a nicer colour scheme
2018-10-25 11:33:17 +01:00
Anagh Kumar Baranwal
6531126eb2 Fixes the rc docs creation
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-25 11:29:59 +01:00
Nick Craig-Wood
25d0e59ef8 fstest/test_all: make sure Version is correct in build 2018-10-25 08:36:09 +01:00
Nick Craig-Wood
b0db08fd2b fstest/test_all: constrain to go1.10 and above 2018-10-24 21:33:42 +01:00
Nick Craig-Wood
07addf74fd fstest/test_all: upload a copy of the report to "current" 2018-10-24 12:21:07 +01:00
Nick Craig-Wood
52c7c738ca fstest/test_all: limit concurrency and run tests in random order 2018-10-24 10:46:58 +01:00
Nick Craig-Wood
5c32b32011 fstest/test_all: fix directories that tests are run in
- Don't build a binary for backend tests
- Run tests in their relevant directories
2018-10-23 17:31:11 +01:00
Nick Craig-Wood
fe61cff079 crypt: ensure integration tests run correctly when -remote is set 2018-10-23 17:12:38 +01:00
Nick Craig-Wood
fbab1e55bb fstest/test_all: adapt to nested test definitions 2018-10-23 16:56:35 +01:00
Nick Craig-Wood
1bfd07567e fstest/test_all: add oneonly flag to only run one test per backend if required 2018-10-23 14:07:48 +01:00
Nick Craig-Wood
f97c4c8d9d fstest/test_all: rework integration tests to improve output
- Make integration tests use a config file
- Output individual logs for each test
- Make HTML report and open browser
- Optionally email and upload results
2018-10-23 14:07:48 +01:00
Anagh Kumar Baranwal
a3c55462a8 Set python version explicitly to 2 to avoid issues on systems where
the default python version is `3`

Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-23 12:14:52 +01:00
Anagh Kumar Baranwal
bbb9a504a8 Added docs to use the -P/--progress flag for real time statistics
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-10-23 12:14:52 +01:00
Jon Fautley
dedc7d885c sftp: Ensure file hash checking is really disabled 2018-10-23 12:03:50 +01:00
Nick Craig-Wood
c5ac96e9e7 Make --files-from only read the objects specified and don't scan directories
Before this change using --files-from would scan all the directories
that the files could possibly be in, causing rclone to do more work
than was necessary.

After this change, rclone constructs an in memory tree using the
--fast-list mechanism but from all of the files in the --files-from
list and without scanning any directories.

Any objects that are not found in the --files-from list are ignored
silently.

This mechanism is used for sync/copy/move (march) and all of the
listing commands ls/lsf/md5sum/etc (walk).
2018-10-20 18:13:31 +01:00
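The mechanism above turns the explicit file list into listing-shaped data instead of scanning directories; a tiny sketch of that grouping step, with names and shapes that are illustrative rather than rclone's actual in-memory tree:

    package main

    import (
        "fmt"
        "path"
        "sort"
    )

    // buildTree groups an explicit file list by parent directory, giving the
    // walking code listing-shaped input without touching the remote's
    // directories. Illustrative only.
    func buildTree(files []string) map[string][]string {
        tree := make(map[string][]string)
        for _, f := range files {
            tree[path.Dir(f)] = append(tree[path.Dir(f)], path.Base(f))
        }
        for dir := range tree {
            sort.Strings(tree[dir])
        }
        return tree
    }

    func main() {
        fmt.Println(buildTree([]string{"a/1.txt", "a/2.txt", "b/c/3.txt"}))
        // map[a:[1.txt 2.txt] b/c:[3.txt]]
    }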
Nick Craig-Wood
9959c5f17f webdav: add Content-Type to PUT requests - fixes #2664 2018-10-18 13:18:24 +01:00
Nick Craig-Wood
e8d0a363fc opendrive: fix transfer of files with + and & in - fixes #2657 2018-10-17 14:22:04 +01:00
albertony
935b7c1c0f jottacloud: fix bug in --fast-list handing of empty folders - fixes #2650 2018-10-17 13:58:36 +01:00
Fabian Möller
15ce0ae57c fstests: fix maximum tested size in TestFsPutChunked
Before this it was possible that maxChunkSize was incorrectly set to 200.
2018-10-16 11:50:47 +02:00
Nick Craig-Wood
67703a73de Start v1.44-DEV development 2018-10-15 12:33:27 +01:00
Nick Craig-Wood
f96ce5674b Version v1.44 2018-10-15 11:03:08 +01:00
Nick Craig-Wood
7f0b204292 azureblob: work around SDK bug which causes errors for chunk-sized files (again)
Until https://github.com/Azure/azure-storage-blob-go/pull/75 is merged
the SDK can't upload a single blob of exactly the chunk size, so
upload files of this size with a multipart upload as a workaround.

The previous fix for this 6a773289e7 turned out to cause problems
uploading files with maximum chunk size so needed to be redone.

Fixes #2653
2018-10-15 09:05:34 +01:00
Nick Craig-Wood
83b1ae4833 Add buergi to contributors 2018-10-14 17:20:34 +01:00
buergi
753cc63d96 webdav: add workaround for missing mtime - fixes #2420 2018-10-14 17:19:23 +01:00
Nick Craig-Wood
5dac8e055f union: fix --backup-dir on union backend
Before this fix --backup-dir would fail.

This is fixed by wrapping objects returned so that they belong to the
union Fs rather than the underlying Fs.
2018-10-14 15:19:02 +01:00
Nick Craig-Wood
c3a8eb1c10 fstests: make findObject() sleep a bit longer to fix b2 largePut tests 2018-10-14 14:45:23 +01:00
Nick Craig-Wood
0f2a5403db acd,box,onedrive,opendrive,pcloud: fix Features() retaining the original receiver
Before this change the Features() method would return a different Fs
to that the Features() method was called on if the remote was
instantiated on a file.

The practical effect of this is that optional features, eg `rclone
about` wouldn't work properly when called on a file, and likely this
has been causing low level problems for users of these backends for
ages.

Ideally there would be a test for this, but it turns out that this is
really hard, so instead of that all the backends have been converted
to not copy the Fs and a big warning comment inserted for future
readers.

Fixes #2182
2018-10-14 14:41:26 +01:00
Nick Craig-Wood
dcce84714e onedrive: fix link command for non root 2018-10-14 14:17:53 +01:00
Nick Craig-Wood
eb8130f48a fstests: update TestPublicLink comment to show how to run solo 2018-10-14 14:17:05 +01:00
Nick Craig-Wood
aa58f66806 build: add longer timeout to integration tests 2018-10-14 14:16:33 +01:00
Nick Craig-Wood
a3dc591b8e Add teresy to contributors 2018-10-14 00:19:49 +01:00
teresy
5ee1bd7ba4 Remove redundant nil checks 2018-10-14 00:19:35 +01:00
Nick Craig-Wood
dbedf33b9f s3: fix v2 signer on files with spaces - fixes #2438
Before this fix the v2 signer was failing for files with spaces in.
2018-10-14 00:10:29 +01:00
Nick Craig-Wood
0f02c9540c s3: make --s3-v2-auth flag
This is an alternative to setting the region to "other-v2-signature"
which is inconvenient for multi-region providers.
2018-10-14 00:10:29 +01:00
Nick Craig-Wood
06922674c8 drive, s3: review hidden config items 2018-10-13 23:30:13 +01:00
Nick Craig-Wood
8ad7da066c drive: when listing team drives, continue on failure
This means that if the team drive listing returns a 500 error (which
seems reasonably common) rclone will continue to the point where it
asks for the team drive ID.

https://forum.rclone.org/t/many-team-drives-causes-rclone-to-fail/7159
2018-10-13 23:30:13 +01:00
Nick Craig-Wood
e1503add41 azureblob, b2, drive: implement set upload cutoff for chunked upload tests 2018-10-13 22:49:12 +01:00
Nick Craig-Wood
6fea75afde fstests: fix upload offsets not being set and redownload test files
In chunked upload tests:

- Add ability to set upload offset
- Read back the uploaded file to check it is OK
2018-10-13 22:49:12 +01:00
Nick Craig-Wood
6a773289e7 azureblob: work around SDK bug which causes errors for chunk-sized files
See https://github.com/Azure/azure-storage-blob-go/pull/75 for details
2018-10-13 22:49:12 +01:00
Nick Craig-Wood
ade252f13b build: fixup code formatting after goimports change 2018-10-13 22:47:12 +01:00
Nick Craig-Wood
bb2e361004 jottacloud: Fix socket leak on Object.Remove - fixes #2637 2018-10-13 22:47:12 +01:00
Nick Craig-Wood
b24facb73d rest: Fix documentation so it is clearer when resp.Body is closed 2018-10-13 22:47:12 +01:00
Nick Craig-Wood
014d58a757 Add David Haguenauer to contributors 2018-10-13 12:56:21 +01:00
David Haguenauer
1d16e16b30 docs: replace "Github" with "GitHub"
This is the way GitHub refers to itself.
2018-10-13 12:55:45 +01:00
Nick Craig-Wood
249a523dd3 build: fix golint install with new path 2018-10-12 11:35:35 +01:00
Nick Craig-Wood
8d72ef8d1e cmd: Don't print non-ASCII characters with --progress on windows - fixes #2501
This bug causes lots of strange behaviour with non-ASCII characters and --progress

https://github.com/Azure/go-ansiterm/issues/26
2018-10-11 21:25:04 +01:00
Nick Craig-Wood
bc8f0208aa rest: Remove auth headers on HTTP redirect
Before this change the rest package would forward all the headers on
an HTTP redirect, including the Authorization: header.  This caused
problems when forwarded to a signed S3 URL ("Only one auth mechanism
allowed") as well as being a potential security risk.

After this change we use the go1.8+ mechanism for doing this instead of
our own, which correctly removes the Authorization: header when
redirecting to a different host.

This hasn't fixed the behaviour for rclone compiled with go1.7.

Fixes #2635
2018-10-11 21:20:33 +01:00
Nick Craig-Wood
ee25b6106a Add jackyzy823 to contributors 2018-10-11 14:50:33 +01:00
Nick Craig-Wood
5c1b135304 Add dcpu to contributors 2018-10-11 14:50:33 +01:00
HerrH
2f2029fed5 Improved and updated the readme
Updated providers list, added links to docs, improved readability
2018-10-11 14:50:21 +01:00
Fabian Möller
57273d364b fstests: add TestFsPutChunked 2018-10-11 14:47:58 +01:00
Fabian Möller
84289d1d69 readers: add NewPatternReader 2018-10-11 14:47:58 +01:00
Fabian Möller
98e2746e31 backend: add fstests.ChunkedUploadConfig
- azureblob
- b2
- drive
- dropbox
- onedrive
- s3
- swift
2018-10-11 14:47:58 +01:00
Fabian Möller
c00ec0cbe4 fstests: add ChunkedUploadConfig 2018-10-11 14:47:58 +01:00
Fabian Möller
1a40bceb1d backend: unify NewFs path handling for wrapping remotes
Use the same function to join the root paths for the wrapping remotes
alias, cache and crypt.
The new function fspath.JoinRootPath is equivalent to path.Join, but if
the first non-empty element starts with "//", this is preserved to allow
Windows network paths to be used in these remotes.
2018-10-10 17:50:27 +01:00
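A rough illustration of the "//" preserving join described above; this is a simplified stand-in, not the actual fspath.JoinRootPath implementation:

    package main

    import (
        "fmt"
        "path"
        "strings"
    )

    // joinRootPath behaves like path.Join but keeps the leading "//" of the
    // first non-empty element so Windows network paths survive the join.
    // Simplified sketch only.
    func joinRootPath(elem ...string) string {
        for _, e := range elem {
            if e == "" {
                continue
            }
            if strings.HasPrefix(e, "//") {
                return "/" + path.Join(elem...)
            }
            break
        }
        return path.Join(elem...)
    }

    func main() {
        fmt.Println(joinRootPath("//server/share", "dir")) // //server/share/dir
        fmt.Println(joinRootPath("/local", "dir"))         // /local/dir
    }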
jackyzy823
411a6cc472 onedrive: add link sharing support #2178 2018-10-09 20:11:48 +08:00
Fabian Möller
1e2676df26 union: fix ChangeNotify to support multiple remotes
To correctly support multiple remotes, each remote has to receive a
value on the input channel.
2018-10-07 11:13:37 +02:00
Nick Craig-Wood
364fca5cea union: implement optional interfaces (Move, DirMove, Copy etc) - fixes #2619
Implement optional interfaces
- Purge
- PutStream
- Copy
- Move
- DirMove
- DirCacheFlush
- ChangeNotify
- About

Make Hashes() return the intersection of all the hashes supported by the remotes
2018-10-07 00:06:29 +01:00
Nick Craig-Wood
87e1efa997 mount, vfs: Remove EXPERIMENTAL tags
rclone mount and the --vfs-cache-mode has been tested extensively by
users now so removing the EXPERIMENTAL tag is appropriate.
2018-10-06 11:47:46 +01:00
Nick Craig-Wood
6709084e2f config: Show URL of backend help page when starting config 2018-10-06 11:47:46 +01:00
Nick Craig-Wood
6b1f915ebc fs: Implement RegInfo.FileName to return the on disk filename for a backend
Use it in make_backend_docs.py
2018-10-06 11:47:46 +01:00
Nick Craig-Wood
78b9bd77f5 docs: auto generate backend options documentation
This inserts the output of "rclone help backend xxx" into the help
pages for each backend.
2018-10-06 11:47:46 +01:00
Nick Craig-Wood
a9273c5da5 docs: move documentation for options from docs/content into backends
In the following commit, the documentation will be autogenerated.
2018-10-06 11:47:46 +01:00
Nick Craig-Wood
14128656db cmd: Implement specialised help for flags and backends - fixes #2541
Instead of showing all flags/backends all the time, you can type

    rclone help flags
    rclone help flags <regexp>
    rclone help backends
    rclone help backend <name>
2018-10-06 11:47:45 +01:00
Nick Craig-Wood
1557287c64 fs: Make Option.GetValue() public #2541 2018-10-06 11:47:45 +01:00
Nick Craig-Wood
e7e467fb3a cmd: factor FlagName into fs.Option #2541 2018-10-06 11:47:45 +01:00
Nick Craig-Wood
5fde7d8b12 cmd: split flags up into global and backend flags #2541 2018-10-06 11:47:45 +01:00
Nick Craig-Wood
3c086f5f7f cmd: Make default help less verbose #2541
This stops the default help showing all the flags, backends, commands
2018-10-06 11:47:45 +01:00
dcpu
c0084f43dd cache: Remove entries that no longer exist in the source
Listing a directory with 25k files:

before (1.43.1): 5m24s
after: 3m21s
2018-10-06 11:23:33 +01:00
Nick Craig-Wood
ddbd4fd881 Add Paul Kohout to contributors 2018-10-04 08:25:39 +01:00
Paul Kohout
7826e39fcf s3: use configured server-side-encryption and storace class options when calling CopyObject() - fixes #2610 2018-10-04 08:25:20 +01:00
Nick Craig-Wood
06ae4258be cmd: Fix -P not ending with a new line
Before this fix rclone didn't wait for the stats to be finished before
exiting, so the final new line was never printed.

After this change rclone will wait for the stats routine to cease
before exiting.
2018-10-03 21:46:18 +01:00
Alex Chen
d9037fe2be onedrive: ignore OneNote files by default - fixes #211 2018-10-03 12:46:25 +08:00
Fabian Möller
1d14972e41 vfs: reduce directory cache cleared by poll-interval
Reduce the number of nodes purged from the dir-cache when ForgetPath is
called. This is done by only forgetting the cache of the received path
and invalidating the parent folder cache by resetting *Dir.read.

The parent will read the listing on the next access and reuse the
dir-cache of entries in *Dir.items.
2018-10-02 10:21:14 +01:00
Fabian Möller
05fa9cb379 drive: improve directory notifications in ChangeNotify
When moving a directory in drive, most of the time only a notification
for the directory itself is created, not the old or new parents.

This tries to find the old path in the dirCache and the new path with
the dirCache of the new parent, which can result in two notifications
for a moved directory.
2018-10-02 10:14:14 +01:00
Nick Craig-Wood
59e14c25df vfs: enable rename for nearly all remotes using server side Move or Copy
Before this change remotes without server side Move (eg swift, s3,
gcs) would not be able to rename files.

After this change nearly all remotes will be able to rename files on
rclone mount with the notable exceptions of b2 and yandex.

This change checks to see if the remote can do Move or Copy, then
calls `operations.Move` to do the actual move.  This will do a server
side Move or Copy but won't download and re-upload the file.

It also checks to see if the destination exists first which avoids
conflicts or duplicates.

Fixes #1965
Fixes #2569
2018-09-29 14:56:20 +01:00
Nick Craig-Wood
fc640d3a09 Add Frantisek Fuka to contributors 2018-09-29 14:55:11 +01:00
Frantisek Fuka
e1f67295b4 b2: add note about cleanup in docs
Added: "Note that `cleanup` does not remove partially uploaded filesfrom the bucket."
2018-09-29 14:47:31 +01:00
Henning Surmeier
22ac80e83a webdav/sharepoint: renew cookies after 12hrs 2018-09-26 13:04:41 +01:00
Nick Craig-Wood
c7aa6b587b Add xnaas to contributors 2018-09-26 10:07:13 +01:00
xnaas
8d1848bebe docs: note --track-renames doesn't work with crypt 2018-09-26 10:06:19 +01:00
Fabian Möller
527c0af1c3 drive: cleanup changeNotifyRunner 2018-09-25 17:54:48 +02:00
Fabian Möller
a20fae0364 drive: code cleanup 2018-09-25 15:20:23 +01:00
Fabian Möller
15b1a1f909 drive: add support for apps-script to json export 2018-09-25 15:20:23 +01:00
Fabian Möller
80b25daac7 drive: add support for multipart document extensions 2018-09-25 15:20:23 +01:00
Fabian Möller
70b30d5ca4 drive: add document links 2018-09-25 15:20:23 +01:00
Fabian Möller
0b2fc621fc drive: restructure Object type 2018-09-25 15:20:23 +01:00
Fabian Möller
171e39b230 drive: add --drive-import-formats
Add a new flag to the drive backend to allow document conversions on upload.
The existing --drive-formats flag has been renamed to --drive-export-formats.
The old flag still works to remain backward compatible.
2018-09-25 15:20:23 +01:00
Fabian Möller
690a44e40e drive: rewrite mime type and extension handling
Make use of the mime package to find matching extensions and mime types.
For simplicity, all extensions are now prefixed with "." to match the
mime package requirements.

Parsed extensions get converted if needed.
2018-09-25 15:20:23 +01:00
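The standard library mime package that the rewrite above builds on looks roughly like this in use; the extension and type here are just examples:

    package main

    import (
        "fmt"
        "mime"
    )

    func main() {
        // Extensions must carry a leading "." for the mime package, which is
        // why the rewrite normalises them that way.
        fmt.Println(mime.TypeByExtension(".txt")) // text/plain; charset=utf-8

        exts, err := mime.ExtensionsByType("application/pdf")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println(exts) // e.g. [.pdf]
    }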
Fabian Möller
d9a3b26e47 vfs: add vfs/poll-interval rc command
This command can be used to query the current status of the
poll-interval option and also update the value.
2018-09-25 14:01:13 +02:00
Fabian Möller
1eec59e091 fs: update ChangeNotifier interface
This introduces a channel to the ChangeNotify function, which can be
used to update the poll-interval and cleanly exit the polling function.
2018-09-25 14:01:13 +02:00
Nick Craig-Wood
96ce49ec4e Add ssaqua to contributors 2018-09-24 17:08:47 +01:00
ssaqua
ae63e4b4f0 list: change debug logs for excluded items 2018-09-24 17:08:35 +01:00
Nick Craig-Wood
e2fb588eb9 Add frenos to contributors 2018-09-24 17:05:03 +01:00
frenos
382a6863b5 rc: add support for OPTIONS and basic CORS - #2575 2018-09-24 17:04:47 +01:00
Nick Craig-Wood
7b975bc1ff alias: Fix handling of Windows network paths
Before this fix, the alias backend would mangle Windows paths like
//server/drive as it was treating them as unix paths.

See https://forum.rclone.org/t/smb-share-alias/6857
2018-09-21 18:24:21 +01:00
Nick Craig-Wood
467fe30a5e vendor: update to latest versions of everything 2018-09-21 18:23:37 +01:00
Nick Craig-Wood
4415aa5c2e build: fix make update 2018-09-21 18:23:37 +01:00
Nick Craig-Wood
17ab38502d Revamp issue and PR templates and CONTRIBUTING guide
Thanks to @fd0 of the restic project for a very useful blog post and
something to plagiarise :-)

https://restic.net/blog/2018-09-09/GitHub-issue-templates
2018-09-21 18:17:32 +01:00
Nick Craig-Wood
9fa8c959ee local: preallocate files on linux with fallocate(2) 2018-09-19 16:04:57 +01:00
Nick Craig-Wood
f29c6049fc local: preallocate files on Windows to reduce fragmentation #2469
Before this change on Windows, files copied locally could become
heavily fragmented (300+ fragments for maybe 100 MB), no matter how
much contiguous free space there was (even if it's over 1TiB). This
can needlessly yet severely adversely affect performance on hard
disks.

This change uses NtSetInformationFile to pre-allocate the space to
avoid this.

It does nothing on other OSes other than Windows.
2018-09-19 16:04:57 +01:00
Nick Craig-Wood
e44fa5db8e build: update git bisect scripts 2018-09-19 16:04:57 +01:00
Fabian Möller
03ea05b860 drive: add workaround for slow downloads
Add --drive-v2-download-min-size flag to allow downloading files via the
drive v2 API. If files are greater than this flag, a download link is
generated when needed. The flag is disabled by default.
2018-09-18 15:55:50 +01:00
Fabian Möller
b8678c9d4b vendor: add google.golang.org/api/drive/v2 2018-09-18 15:55:50 +01:00
Fabian Möller
13823a7743 drive: fix escaped chars in documents during list
Fixes #2591
2018-09-18 15:53:44 +01:00
sandeepkru
b94d87ae2d azureblob and fstests - Modify integration tests to include a new
optional setting to test SetTier on only a few supported tiers.

Remove the unused optional ListTiers interface and its backend and internal tests
2018-09-18 13:56:09 +01:00
sandeepkru
e0c5f7ff1b fs - Remove unreferenced ListTierer optional interface 2018-09-18 13:56:09 +01:00
Nick Craig-Wood
b22ecbe174 Add Joanna Marek to contributors 2018-09-18 10:27:33 +01:00
Nick Craig-Wood
c41be436c6 Add Antoine GIRARD to contributors 2018-09-18 10:27:33 +01:00
Joanna Marek
e022ffce0f accounting: change too long names cutting mechanism - fixes #2490 2018-09-18 10:27:23 +01:00
albertony
cfe65f1e72 jottacloud: minor update in docs 2018-09-18 10:25:30 +01:00
Sebastian Bünger
b18595ae07 jottacloud: Fix handling of reserved characters. fixes #2531 2018-09-17 12:42:35 +01:00
Nick Craig-Wood
d27630626a webdav: add a small pause after failed upload before deleting file #2517
This fixes the integration tests for `serve webdav` which uses the
webdav backend tests.
2018-09-17 08:51:50 +01:00
Nick Craig-Wood
c473c7cb53 ftp: add a small pause after failed upload before deleting file #2517
This fixes the integration tests for `serve ftp` which uses the ftp
backend tests.
2018-09-17 08:51:50 +01:00
Nick Craig-Wood
ef3526b3b8 vfs: fix race condition detected by serve ftp tests 2018-09-17 08:50:34 +01:00
Nick Craig-Wood
d4ee7277c0 serve ftp: disable on plan9 since it doesn't compile 2018-09-17 08:50:34 +01:00
Antoine GIRARD
4a3efa5d45 cmd/serve: add ftp server - implement #2151 2018-09-17 08:50:34 +01:00
Nick Craig-Wood
a14f0d46d7 vendor: add github.com/goftp/server 2018-09-17 08:50:34 +01:00
Nick Craig-Wood
a25875170b webdav: Add another time format to fix #2574 2018-09-15 21:46:56 +01:00
Alex Chen
a288646419 onedrive: fix sometimes special chars in filenames not replaced 2018-09-14 21:38:55 +08:00
Nick Craig-Wood
b3d8bc61ac build: add example scripts for bisecting rclone and go 2018-09-14 11:17:51 +01:00
sandeepkru
7accd30da8 cmd and fs: Added new command settier which performs storage tier changes on
supported remotes
2018-09-12 21:09:08 +01:00
sandeepkru
9594fd0a0c fstests: Added integration tests on SetTier operation 2018-09-12 21:09:08 +01:00
sandeepkru
aac84c554a azureblob: Implemented settier command support on the azureblob remote, which allows
changing the tier of objects. Added an internal test to check that feature flags are set correctly
2018-09-12 21:09:08 +01:00
sandeepkru
5716a58413 fs: Added new optional interfaces SetTierer, GetTierer and ListTierer; these are used to
perform object tier changes on supported remotes
2018-09-12 21:09:08 +01:00
sandeepkru
233507bfe0 vendor: Update AzureSDK version to latest one, fixes failing integration tests 2018-09-12 08:14:38 +01:00
sandeepkru
5b27702b61 AzureBlob new sdk changes 2018-09-12 08:14:38 +01:00
Nick Craig-Wood
b2a6a97443 Add Craig Miskell to contributors 2018-09-11 07:57:16 +01:00
Craig Miskell
2543278c3f S3: Use (custom) pacer, to retry operations when reasonable - fixes #2503 2018-09-11 07:57:03 +01:00
Jon Craton
19cf3bb9e7 Fixed typo (duplicate word) (#2563) 2018-09-10 19:09:48 -07:00
Fabian Möller
3c44ef788a cache: add plex_insecure option to skip certificate validation
Fixes #2215
2018-09-10 21:19:25 +01:00
Fabian Möller
e5663de09e docs: add section for .plex.direct urls to cache 2018-09-10 21:15:18 +01:00
Nick Craig-Wood
7cfd4e56f8 Add Santiago Rodríguez to contributors
Another email address for Sandeep.
2018-09-10 20:49:55 +01:00
Santiago Rodríguez
282540c2d4 azureblob: add --azureblob-list-chunk parameter - Fixes #2390
This parameter can be used to adjust the size of the listing chunks
which can be used to work around problems listing large buckets.
2018-09-10 20:45:06 +01:00
albertony
1e7a7d756f jottacloud: fix for --fast-list 2018-09-10 20:38:20 +01:00
Fabian Möller
f6ee0795ac cache: preserve leading / in wrapped remote path
When combining the remote value and the root path, preserve the absence
or presence of the / at the beginning of the wrapped remote path.

e.g. a remote "cloud:" and root path "dir" becomes "cloud:dir" instead
of "cloud:/dir".

Fixes #2553
2018-09-10 20:35:50 +01:00
Fabian Möller
eb5a95e7de crypt: preserve leading / in wrapped remote path
When combining the remote value and the root path, preserve the absence
or presence of the / at the beginning of the wrapped remote path.

e.g. a remote "cloud:" and root path "dir" becomes "cloud:dir" instead
of "cloud:/dir".
2018-09-10 20:35:50 +01:00
Sandeep
5b1c162fb2 Added sandeepkru to maintainers list (#2560) 2018-09-08 20:58:23 -07:00
albertony
d51501938a jottacloud: add link sharing support 2018-09-08 09:38:57 +01:00
Nick Craig-Wood
a823d518ac docs: changelog from v1.43.1 branch 2018-09-07 17:16:11 +01:00
Nick Craig-Wood
87d4e32a6b build: instructions on how to make a point release 2018-09-07 17:10:29 +01:00
Nick Craig-Wood
820d2a7149 Add Felix Brucker to contributors 2018-09-07 15:14:28 +01:00
Nick Craig-Wood
9fe39f25e1 union: add missing docs 2018-09-07 15:14:08 +01:00
Nick Craig-Wood
1b2cc781e5 union: fix so all integration tests pass
* Fix error handling in List and NewObject
* Fix Precision in case we have precision > time.Second
* Fix Features - all binary features are possible
* Fix integration tests using new test facilities
2018-09-07 15:14:08 +01:00
Nick Craig-Wood
e05ec2b77e fstests: Allow object name and fs check to be skipped 2018-09-07 15:14:08 +01:00
Felix Brucker
9e3ea3c6ac union: Implement union backend which reads from multiple backends 2018-09-07 15:14:08 +01:00
Anagh Kumar Baranwal
0fb12112f5 docs: display changes
- Reduced size of the social menu and increased the size of the content
- Added scrollable property to the index menus
- Fixed code wrapping issue

Fixes #2103

Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-09-07 14:54:22 +01:00
Cédric Connes
1b95ca2852 stats: handle FatalError and NoRetryError when reported to stats 2018-09-07 14:44:50 +01:00
albertony
e07a850be3 jottacloud: add permanent delete support: --jottacloud-hard-delete 2018-09-07 12:58:18 +01:00
albertony
3fccce625c jottacloud: add --fast-list support - fixes #2532 2018-09-07 12:49:39 +01:00
albertony
a1f935e815 jottacloud: minor improvement in quota info (omit if unlimited) 2018-09-07 12:49:39 +01:00
Alex Chen
22ee4151fd build: make CIs available for forks
This makes it possible to run CI on a fork of rclone which is useful for contributors.
2018-09-07 10:17:26 +01:00
Fabian Möller
cc23ad71ce ncdu: return error instead of log.Fatal in Show 2018-09-07 10:06:37 +01:00
sandeepkru
57b9fff904 azureblob - BugFix - Incorrect StageBlock invocation in multi-part uploads
fixes #2518. Incorrect formation of block list.
2018-09-06 22:24:40 +01:00
Alex Chen
692ad482dc onedrive: fix new fields not saved when editing old config - fixes #2527 2018-09-07 00:07:16 +08:00
Sebastian Bünger
c6f1c3c7f6 box: Implement link sharing. #2178 2018-09-04 22:01:22 +01:00
Nick Craig-Wood
164d1e05ca hubic: retry auth fetching if it fails to make hubic more reliable 2018-09-04 21:00:36 +01:00
Nick Craig-Wood
c644241392 hubic: fix uploads - fixes #2524
Uploads were broken because chunk size was set to zero.  This was a
consequence of the backend config re-organization which meant that
chunk size had lost its default.

Sharing some backend config between swift and hubic fixes the problem
and means hubic gains its own --hubic-chunk-size flag.
2018-09-04 20:27:48 +01:00
Cnly
89be5cadaa onedrive: fix make check 2018-09-05 00:37:52 +08:00
Cnly
f326f94b97 onedrive: use single-part upload for empty files - fixes #2520 2018-09-05 00:08:00 +08:00
Nick Craig-Wood
3d2117887d Add Anagh Kumar Baranwal to contributors 2018-09-04 16:16:46 +01:00
Anagh Kumar Baranwal
5a6750e1cd cache: documentation fix for cache-chunk-total-size - Fixes #2519
Signed-off-by: Anagh Kumar Baranwal <anaghk.dos@gmail.com>
2018-09-04 16:16:35 +01:00
Fabian Möller
6b8b9d19f3 googlecloudstorage: fix service_account_file been ignored - Fixes #2523 2018-09-04 15:31:20 +01:00
Nick Craig-Wood
4ca26eb38c cmd: fix crash with --progress and --stats 0 #2501 2018-09-04 14:39:48 +01:00
Alex Chen
37b2754f37 Add Alex Chen (Cnly) to MAINTAINERS.md 2018-09-04 08:40:01 +08:00
Nick Craig-Wood
172beb2ae3 Add cron410 to contributors 2018-09-03 20:28:15 +01:00
cron410
5eba392a04 cache: clarify docs 2018-09-03 20:28:15 +01:00
Fabian Möller
deda093637 cache: fix error return value of cache/fetch rc method 2018-09-03 17:32:11 +02:00
dcpu
a4c4019032 cache: improve performance by not sending info requests for cached chunks 2018-09-03 15:41:06 +01:00
Nick Craig-Wood
2e37942592 Add albertony to contributors 2018-09-03 15:31:25 +01:00
albertony
09d7bd2d40 config: don't create default config dir when user supplies --config
Avoid creating empty default configuration directory when user supplies path to config file.

Fixes #2514.
2018-09-03 15:30:53 +01:00
Nick Craig-Wood
ff0efb1501 Add Sheldon Rupp to contributors 2018-09-03 15:04:37 +01:00
Sheldon Rupp
0f1d4a7ca8 docs: fix typo 2018-09-03 15:03:49 +01:00
Fabian Möller
a0b3fd3a33 cache: fix worker scale down
Ensure that calling scaleWorkers will create/destroy the right amount of
workers.
2018-09-03 12:29:35 +02:00
Fabian Möller
cdbe3691b7 cache: add cache/fetch rc function 2018-09-03 12:29:35 +02:00
Fabian Möller
3a0b3b0f6e drive: reformat long API call lines 2018-09-03 12:22:05 +02:00
Nick Craig-Wood
d3afef3e1b Add dcpu to contributors 2018-09-02 18:11:42 +01:00
dcpu
f4aaec9ce5 log: Add --log-format flag - fixes #2424 2018-09-02 18:11:09 +01:00
Nick Craig-Wood
bd5d326160 build: enable caching of the go build cache for Travis and Appveyor 2018-09-02 17:43:09 +01:00
Cnly
5b9b9f1572 onedrive: graph: clarify option for root Sharepoint site 2018-09-02 16:06:25 +01:00
Cnly
571c8754de onedrive: graph: update docs 2018-09-02 16:06:25 +01:00
Cnly
fb9a95e68e onedrive: graph: Remove unnecessary error checks 2018-09-02 16:06:25 +01:00
Cnly
85e0839c8b onedrive: graph: Refine config handling 2018-09-02 16:06:25 +01:00
Cnly
1749fb8ebf onedrive: graph: Refine config keys naming 2018-09-02 16:06:25 +01:00
Oliver Heyme
e114be11ec onedrive: Removed upload cutoff and always do session uploads
Set modtime on copy

Added versioning issue to OneDrive documentation

(cherry picked from commit 7f74403)
2018-09-02 16:06:25 +01:00
Cnly
b709f73aab onedrive: rework to support Microsoft Graph
The initial work on this was done by Oliver Heyme with updates from
Cnly.

Oliver Heyme:

* Changed to Microsoft graph
* Enable writing
* Added more options for adding a OneDrive Remote
* Better error handling
* Send modDate at create upload session and fix list children

Cnly:

* Simple upload API only supports max 4MB files
* Fix supported hash types for different drive types
* Fix unchecked err

Co-authored-by: Oliver Heyme <olihey@googlemail.com>
Co-authored-by: Cnly <minecnly@gmail.com>
2018-09-02 16:06:25 +01:00
Nick Craig-Wood
05a615ef22 Add Dr. Tobias Quathamer to contributors 2018-09-02 15:51:47 +01:00
Dr. Tobias Quathamer
76450c01f3 cache: remove accidentally committed files
* Delete cache_upload_test.go.orig
* Delete cache_upload_test.go.rej
2018-09-02 15:51:22 +01:00
Nick Craig-Wood
86e64c626c Add Cédric Connes to contributors 2018-09-02 15:08:38 +01:00
Cédric Connes
9b827be418 local: skip bad symlinks in dir listing with -L enabled - fixes #1509 2018-09-02 14:47:54 +01:00
Nick Craig-Wood
7e5c6725c1 docs: Fix version in changelog 2018-09-01 18:40:34 +01:00
Nick Craig-Wood
543d75723b Start v1.43-DEV development 2018-09-01 18:37:48 +01:00
Nick Craig-Wood
b4d94f255a build: server side copy the current release files in 2018-09-01 18:22:19 +01:00
Nick Craig-Wood
b0dd218fea build: make tidy-beta for removing old beta releases 2018-09-01 18:21:54 +01:00
Nick Craig-Wood
6396872d75 build: when building a tag release don't suffix the version 2018-09-01 16:57:34 +01:00
Nick Craig-Wood
6cf684f2a1 build: fix addition of β symbol for release build and replace with -beta
Before this fix the release build was being built with a β suffix.

This also replaces β with -beta so we can use the same version tag
everywhere as some systems don't like the β symbol.
2018-09-01 14:51:32 +01:00
Nick Craig-Wood
20c55a6829 Version v1.43 2018-09-01 12:58:00 +01:00
Nick Craig-Wood
a3fec7f030 build: build release binaries on travis and appveyor, not locally 2018-09-01 12:50:35 +01:00
Nick Craig-Wood
8e2b3268be build: Automatically compile the changelog to make editing easier 2018-09-01 12:50:35 +01:00
Nick Craig-Wood
32ab4e9ac6 pcloud: delete half uploaded files on upload error
Sometimes pcloud will leave a half uploaded file when the transfer
actually failed.  This patch deletes the file if it exists.

This problem was spotted by the integration tests.
2018-09-01 10:01:02 +01:00
Nick Craig-Wood
b4d86d5450 onedrive: fix rmdir sometimes deleting directories with contents
Before this change we were using the ChildCount in the Folder facet to
determine if a directory was empty or not.  However this seems to be
unreliable, or updated asynchronously, which meant that `rclone rmdir`
sometimes deleted directories that had files in them.

This problem was spotted by the integration tests.

Listing the directory instead of relying on the ChildCount fixes the
problem and the integration tests, without changing the cost (one http
transaction).
2018-08-31 23:12:13 +01:00
Nick Craig-Wood
d49ba652e2 local: fix mkdir error when trying to copy files to the root of a drive on windows
This was causing errors which looked like this when copying a file to
the root of a drive:

    mkdir \\?: The filename, directory name, or volume label syntax is incorrect.

This was caused by an incorrect path splitting routine which was
removing \ from the end of UNC paths when it shouldn't have been.  Fixed
by using the standard library `filepath.Dir` instead.
2018-08-31 21:10:36 +01:00
Nick Craig-Wood
7d74686698 fs/accounting: increase maximum burst size of token bucket
This stops occasional errors when using --bwlimit which look like this

    Token bucket error: rate: Wait(n=2255475) exceeds limiter's burst 2097152
2018-08-30 17:24:08 +01:00
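The error quoted above comes from golang.org/x/time/rate: WaitN refuses any single request for more tokens than the limiter's burst, so the burst has to be at least as large as the biggest amount accounted for in one call. A tiny standalone demonstration with arbitrary numbers:

    package main

    import (
        "context"
        "fmt"

        "golang.org/x/time/rate"
    )

    func main() {
        // 1 MiB/s limit with a 2 MiB burst.
        limiter := rate.NewLimiter(rate.Limit(1<<20), 2<<20)

        // Asking for more than the burst in one call fails immediately with
        // the "exceeds limiter's burst" error seen in the log above.
        fmt.Println(limiter.WaitN(context.Background(), 4<<20))

        // Requests at or below the burst succeed (waiting if necessary).
        fmt.Println(limiter.WaitN(context.Background(), 1<<20))
    }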
Sebastian Bünger
2d7c5ebc7a jottacloud: Implement optional about interface. 2018-08-30 17:15:49 +01:00
Sebastian Bünger
86e3436d55 jottacloud: Add optional MimeTyper interface. 2018-08-30 17:15:49 +01:00
Nick Craig-Wood
f243d2a309 Add bsteiss to contributors 2018-08-30 17:08:40 +01:00
bsteiss
aaa3d7e63b s3: add support for KMS Key ID - fixes #2217
This code supports aws:kms and the kms key id for the s3 backend.
2018-08-30 17:08:27 +01:00
Nick Craig-Wood
e4c5f248c0 build: make appveyor just use latest release go version 2018-08-30 16:55:02 +01:00
Nick Craig-Wood
5afaa48d06 Add Denis to contributors 2018-08-30 16:47:32 +01:00
Denis
1c578ced1c cmd: add copyurl command - Fixes #1320 2018-08-30 16:45:41 +01:00
Nick Craig-Wood
de6ec8056f fs/accounting: fix moving average speed for file stats
Before this change the moving average for the individual file stats
would start at 0 and only converge to the correct value over 15-30
seconds.

This change starts the weighting period as 1 and moves it up once per
sample which gets the average to a better value instantly.
2018-08-28 22:55:51 +01:00
Nick Craig-Wood
66fe4a2523 fs/accounting: fix stats display which was missing transferring data
Before this change the total stats ignored the in-flight
checking, transferring and bytes counts.
2018-08-28 22:21:17 +01:00
Nick Craig-Wood
f5617dadf3 fs/accounting: factor out eta and percent calculations and write tests 2018-08-28 22:21:17 +01:00
Nick Craig-Wood
5e75a9ef5c build: switch to using go1.11 modules for managing dependencies 2018-08-28 17:08:22 +01:00
Nick Craig-Wood
da1682a30e vendor: switch to using go1.11 modules 2018-08-28 16:08:48 +01:00
Nick Craig-Wood
5c75453aba version: fix test under Windows 2018-08-28 16:07:36 +01:00
Nick Craig-Wood
84ef6b2ae6 Add Alex Chen to contributors 2018-08-27 08:34:57 +01:00
Nick Craig-Wood
7194c358ad azureblob,b2,qingstor,s3,swift: remove leading / from paths - fixes #2484 2018-08-26 23:19:28 +01:00
Nick Craig-Wood
502d8b0cdd vendor: remove github.com/VividCortex/ewma dependency 2018-08-26 22:05:30 +01:00
Nick Craig-Wood
ca44fb1fba accounting: fix time to completion estimates
Previous to this change the package used for this,
github.com/VividCortex/ewma, took a 0 average to mean reset the
statistics.  This happens quite often when transferring files through a
buffer.

Replace that implementation with a simple home-grown one (with about
the same constant), without that feature.
2018-08-26 22:00:48 +01:00
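The home-grown replacement amounts to a plain exponentially weighted moving average in which a zero sample is just another sample rather than a reset. A minimal sketch of that idea; the smoothing constant here is an arbitrary placeholder, not the value rclone uses:

    package main

    import "fmt"

    // ewma is a simple exponentially weighted moving average. Unlike the
    // replaced library, a 0 sample does not reset the average.
    type ewma struct {
        avg     float64
        weight  float64 // smoothing factor in (0, 1]
        started bool
    }

    func (e *ewma) Add(sample float64) {
        if !e.started {
            e.avg = sample
            e.started = true
            return
        }
        e.avg += e.weight * (sample - e.avg)
    }

    func main() {
        speed := ewma{weight: 0.2} // placeholder constant
        for _, s := range []float64{100, 120, 0, 110} {
            speed.Add(s)
        }
        fmt.Printf("average speed: %.1f\n", speed.avg) // 88.6
    }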
Nick Craig-Wood
842ed7d2a9 swift: make it so just storage_url or auth_token can be overridden
Before this change if only one of storage_url or auth_token were
supplied then rclone would overwrite both of them when authenticating.
This effectively meant you could only supply both of them or neither of them.

Now rclone still does the authentication to read the missing
storage_url or auth_token then afterwards re-writes the auth_token or
storage_url back to what the user desired.

Fixes #2464
2018-08-26 21:54:50 +01:00
Nick Craig-Wood
b3217d2cac serve webdav: make Content-Type without reading the file and add --etag-hash
Before this change x/net/webdav would open each file to find out its
Content-Type.

Now we override the FileInfo and provide that directly from rclone.

An --etag-hash has also been implemented to override the ETag with the
hash passed in.

Fixes #2273
2018-08-26 21:50:41 +01:00
Nick Craig-Wood
94950258a4 fs: allow backends to be named using their Name or Prefix #2449
This means that, for example Google Cloud Storage can be known as
`:gcs:bucket` on the command line, as well as `:google cloud
storage:bucket`.
2018-08-26 17:59:31 +01:00
Nick Craig-Wood
8656bd2bb0 fs: Allow on the fly remotes with :backend: syntax - fixes #2449
This change allows remotes to be created on the fly without a config
file by using the remote type prefixed with a : as the remote name, Eg
:s3: to make an s3 remote.

This assumes the user is supplying the backend config via command line
flags or environment variables.
2018-08-26 17:59:31 +01:00
Nick Craig-Wood
174ca22936 mount,cmount: clip the number of blocks to 2^32-1 on macOS
OSX FUSE only supports a 32 bit number of blocks, which means that block
counts have been wrapping.  This causes f_bavail to be 0 which in turn
causes problems with programs like borg backup.

Fixes #2356
2018-08-26 17:32:59 +01:00
Nick Craig-Wood
4eefd05dcf version: print the release and beta versions with --check - Fixes #2348 2018-08-26 17:28:28 +01:00
Nick Craig-Wood
1cccfa7331 cmd: Make --progress work on Windows 2018-08-26 17:20:38 +01:00
Nick Craig-Wood
18685e6b0b vendor: add github.com/Azure/go-ansiterm for --progress on Windows support 2018-08-26 17:20:38 +01:00
Nick Craig-Wood
b6db90cc32 cmd: add --progress/-P flag to show progress
Fixes #2347
Fixes #1210
2018-08-26 17:20:38 +01:00
Nick Craig-Wood
b596ccdf0f build: update to use go1.11 for the build
Also select latest patch release for go using 1.yy.x feature in travis
2018-08-26 15:26:44 +01:00
Nick Craig-Wood
a64e0922b9 vendor: update github.com/ncw/swift to fix server side copy bug 2018-08-26 15:03:19 +01:00
sandeepkru
3751ceebdd azureblob: add blob tier feature - a new azureblob-access-tier configuration option
is added to tier blobs between Hot, Cool and Archive. Addresses #2901 2018-08-21 21:52:45 +01:00
2018-08-21 21:52:45 +01:00
Nick Craig-Wood
9f671c5dd0 fs: fix tests for *SepList 2018-08-21 10:58:59 +01:00
Alex Chen
c6c74cb869 mountlib: fix mount --daemon not working with encrypted config - fixes #2473
This passes the configKey to the child process as an Obscured temporary file, pointed to by an environment variable.
2018-08-21 09:41:16 +01:00
Nick Craig-Wood
f9cf70e3aa jottacloud: docs, fixes and tests for MD5 calculation
* Add docs for --jottacloud-md5-memory-limit
  * Factor out readMD5 function and add tests
  * Fix accounting
  * Make sure temp file is deleted at the start (not Windows)
2018-08-21 08:57:53 +01:00
Oliver Heyme
ee4485a316 jottacloud: calculate missing MD5s - fixes #2462
If an MD5 can't be found on the source then this streams the object
into memory or onto disk to calculate it.
2018-08-21 08:57:53 +01:00
Nick Craig-Wood
455219f501 crypt: fix accounting when checking hashes on upload
In e52ecba295 we forgot to unwrap and
re-wrap the accounting which meant that the accounting was no longer
first in the chain of readers.  This led to accounting inaccuracies
in remotes which wrap and unwrap the reader again.
2018-08-21 08:57:53 +01:00
Nick Craig-Wood
1b8f4b616c fs: move CommaSepList and SpaceSepList here from config
fs can't import config so having them there means they are not usable
by rclone core.
2018-08-20 17:52:05 +01:00
Fabian Möller
f818df52b8 config: add List type 2018-08-20 17:38:51 +01:00
Cnly
29fa840d3a onedrive: Add back the check for DirMover interface 2018-08-20 17:31:28 +01:00
Nick Craig-Wood
7712a0e111 fs/asyncreader: skip some tests to work around race detector bug
The race detector currently detects a race with len(chan) against
close(chan).

See: https://github.com/golang/go/issues/27070

Skip the tests which trip this bug under the race detector.
2018-08-20 12:34:29 +01:00
Nick Craig-Wood
77806494c8 mount,cmount: adapt to sdnotify API change 2018-08-20 12:34:29 +01:00
Nick Craig-Wood
ff8de59d2b vendor: update minimum number of packages so that compiling with go1.11 works 2018-08-20 12:34:29 +01:00
Nick Craig-Wood
c19d1ae9a5 build: fix whitespace changes due to go1.11 gofmt changes 2018-08-20 12:26:06 +01:00
Nick Craig-Wood
64ecc2f587 build: use go1.11rc1 to make the beta releases 2018-08-20 12:26:06 +01:00
Nick Craig-Wood
7c911bf2d6 b2: fix app key support on upload to a bucket - fixes #2428 2018-08-18 19:05:32 +01:00
Nick Craig-Wood
41f709e13b yandex: fix listing/deleting files in the root - fixes #2471
Before this change `rclone ls yandex:hello.txt` would fail whereas
`rclone ls yandex:/hello.txt` would succeed.  Now they both succeed.
2018-08-18 12:12:19 +01:00
Nick Craig-Wood
6c5ccf26b1 vendor: update github.com/t3rm1n4l/go-mega to fix failed logins - fixes #2443 2018-08-18 11:46:25 +01:00
Fabian Möller
6dc5aa7454 docs: clarify buffer-size is per transfer/filehandle 2018-08-17 18:11:40 +01:00
Fabian Möller
552eb8e06b vfs: try to seek buffer on read only files 2018-08-17 18:10:28 +01:00
Nick Craig-Wood
7d35b14138 Add Martin Polden to contributors 2018-08-17 18:08:48 +01:00
Martin Polden
6199b95b61 jottacloud: Handle empty time values 2018-08-17 18:08:29 +01:00
Oliver Heyme
040768383b jottacloud: fix MD5 error check 2018-08-17 18:05:04 +01:00
Nick Craig-Wood
6390dec7db fs/accounting: add --stats-one-line flag for single line stats 2018-08-17 17:58:00 +01:00
Nick Craig-Wood
80a3db34a8 fs/accounting: show the total progress of the sync in the stats #379 2018-08-17 17:58:00 +01:00
Nick Craig-Wood
cb7a461287 sync: add a buffer for checks, uploads and renames #379
--max-backlog controls the queue length.

Add statistics for the check/upload/rename queues.

This means that checking can complete before the uploads which will
give rclone the ability to show exactly what is outstanding.
2018-08-17 17:58:00 +01:00
Nick Craig-Wood
eb84b58d3c webdav: Attempt to remove failed uploads
Some webdav backends (eg rclone serve webdav) leave behind half
written files on error.  This causes the integration tests to
fail. Here we remove the file if it exists.
2018-08-16 16:00:30 +01:00
Nick Craig-Wood
58339a5cb6 fstests: In TestFsPutError reliably provoke test failure
This change to go1.11 causes the TestFsPutError test to fail

https://go-review.googlesource.com/c/go/+/114316

This is because it now passes the half written file to the backend
whereas it didn't previously because of the buffering.

In this commit the size of the data written was increased to 5k from
50 bytes to provoke the test failure under go1.10 also.
2018-08-16 15:52:15 +01:00
Nick Craig-Wood
751bfd456f box: add --box-commit-retries flag defaulting to 100 - Fixes #2054
Sometimes it takes many more commit retries than expected to commit a
multipart file, so split this number into its own config variable and
default it to 100 which should always be enough.
2018-08-11 16:33:55 +01:00
Andres Alvarez
990919f268 Add disclaimer about generated passwords being stored in an obscured format 2018-08-11 15:07:50 +01:00
Nick Craig-Wood
6301e15b79 Add Sebastian Bünger to contributors 2018-08-10 11:15:04 +01:00
Sebastian Bünger
007c7757d4 Add docs for Jottacloud 2018-08-10 11:14:34 +01:00
Sebastian Bünger
dd3e912731 fs/OpenOptions: Make FixRangeOption clamp range to filesize. 2018-08-10 11:14:34 +01:00
Sebastian Bünger
10ed455777 New backend: Jottacloud 2018-08-10 11:14:34 +01:00
Nick Craig-Wood
05bec70c3e Add Matt Tucker to contributors 2018-08-10 10:28:41 +01:00
Matt Tucker
c54f5a781e Fix typo in Box documentation 2018-08-10 10:28:16 +01:00
Nick Craig-Wood
6156bc5806 cache: fix nil pointer deref - fixes #2448 2018-08-07 21:33:13 +01:00
Nick Craig-Wood
e979cd62c1 rc: fix formatting in docs 2018-08-07 21:05:21 +01:00
Nick Craig-Wood
687477b34d rc: add core/stats and vfs/refresh to the docs 2018-08-07 20:58:00 +01:00
Nick Craig-Wood
40d383e223 Add reddi1 to contributors 2018-08-07 20:56:55 +01:00
reddi1
6bfdabab6d rc: added core/stats to return the stats - fixes #2405 2018-08-07 20:56:40 +01:00
Nick Craig-Wood
f7c1c61dda Add Andres Alvarez to contributors 2018-08-07 20:52:04 +01:00
Andres Alvarez
c1f5add049 Add tests for reveal functions 2018-08-07 20:51:50 +01:00
Andres Alvarez
8989c367c4 Add reveal command 2018-08-07 20:51:50 +01:00
Nick Craig-Wood
d95667b06d Add Cnly to contributors 2018-08-07 09:33:25 +01:00
Cnly
0f845e3a59 onedrive: implement DirMove - fixes #197 2018-08-07 09:33:19 +01:00
Fabian Möller
2e80d4c18e vfs: update vfs/refresh rc command documentation 2018-08-07 09:31:12 +01:00
Fabian Möller
6349147af4 vfs: add non recursive mode to vfs/refresh rc command 2018-08-07 09:31:12 +01:00
Fabian Möller
782972088d vfs: add the vfs/refresh rc command
vfs/refresh will walk the directory tree for the given paths and
freshen the directory cache. It will use the fast-list capability
of the remote when enabled.
2018-08-07 09:31:12 +01:00
Fabian Möller
38381d3786 lsjson: add option to show the original object IDs 2018-08-07 09:28:55 +01:00
Fabian Möller
eb6aafbd14 cache: implement fs.ObjectUnWrapper 2018-08-07 09:28:55 +01:00
Nick Craig-Wood
7f3d5c31d9 Add Ruben Vandamme to contributors 2018-08-06 22:07:40 +01:00
Ruben Vandamme
578f56bba7 Swift: Add storage_policy 2018-08-06 22:07:25 +01:00
Nick Craig-Wood
f7c0b2407d drive: add docs for --fast-list and add to integration tests 2018-08-06 21:38:50 +01:00
Fabian Möller
dc5a734522 drive: implement ListR 2018-08-06 21:31:47 +01:00
Nick Craig-Wood
3c2ffa7f57 Add Oleg Kovalov to contributors 2018-08-06 21:14:14 +01:00
Oleg Kovalov
06c9f76cd2 all: fix go-critic linter suggestions 2018-08-06 21:14:03 +01:00
Nick Craig-Wood
44abf6473e Add dan smith to contributors 2018-08-06 21:08:37 +01:00
dan smith
b99595b7ce docs: remove references to copy and move for --track-renames
This change was omitted in the fix for #2008
2018-08-06 21:08:19 +01:00
Nick Craig-Wood
a119ca9f10 b2: Support Application Keys - fixes #2428
This supports B2 application keys limited to a bucket by making sure
we only list the buckets of the bucket ID that the key is limited to.
2018-08-06 14:32:53 +01:00
Nick Craig-Wood
ffd11662ba cache: fix nil pointer deref when using lsjson on cached directory
This stops embedding the fs.Directory in the cache.Directory because
it can be nil, and instead implements an ID method which checks for nil.

See: https://forum.rclone.org/t/runtime-error-with-cache-remote-and-lsjson/6305
2018-08-05 09:42:31 +01:00
Henning
1f3778dbfb webdav: sharepoint recursion with different depth - fixes #2426
This change adds the depth parameter to listAll and readMetaDataForPath.
This allows recursive calls of these methods with a different depth
header.

Sharepoint won't list files if the depth header is != 0. If that is the
case, it will just return an error 404 although the file exists.
Since it is not possible to determine if a path should be a file or a
directory, rclone has to make a request with depth = 1 first. On success
we are sure that the path is a directory and the listing will work.
If this request returns error 404, the path either doesn't exist or it
is a file.

To be sure, we can try again with depth set to 0. If it still fails, the
path really doesn't exist, else we found our file.
2018-08-04 11:02:47 +01:00
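A sketch of that probing logic in Go; propfind here is a hypothetical helper standing in for the real PROPFIND request, and the actual rclone code will differ:

    package main

    import "errors"

    var errNotFound = errors.New("404 not found")

    // propfind is a placeholder for a real PROPFIND request with the
    // given Depth header; it would return nil on success or a 404 error.
    func propfind(path string, depth int) error {
        return errNotFound // stub for illustration
    }

    // isDir probes a path as described above: depth 1 succeeding means
    // a directory, depth 0 succeeding means a file, otherwise missing.
    func isDir(path string) (bool, error) {
        if err := propfind(path, 1); err == nil {
            return true, nil // depth 1 listing worked: a directory
        }
        if err := propfind(path, 0); err == nil {
            return false, nil // depth 0 worked: it is a file
        }
        return false, errNotFound // neither worked: does not exist
    }

    func main() {
        _, _ = isDir("/some/path")
    }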
Nick Craig-Wood
f9eb9c894d mega: add --mega-hard-delete flag - fixes #2409 2018-08-03 15:07:51 +01:00
Nick Craig-Wood
f7a92f2c15 Add Andrew to contributors 2018-08-03 13:00:25 +01:00
Andrew
42959fe9c3 Swap cache-db-path and cache-chunk-path
Have the db path option come first in the docs, as the chunk path references the db path and isn't needed if the preceding (db path) option is used.
2018-08-03 13:00:13 +01:00
Nick Craig-Wood
f72eade707 box: Fix upload of > 2GB files on 32 bit platforms
Before this change the Part structure had an int for the Offset and
uploading large files would produce this error

    json: cannot unmarshal number 2147483648 into Go struct field Part.offset of type int

Changing the field to an int64 fixes the problem.
2018-07-31 10:33:55 +01:00
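A small Go illustration of why the field type matters (the struct here is illustrative, not the box backend's actual type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // On 32 bit platforms Go's int is 32 bits, so an offset of
    // 2147483648 (2 GiB) cannot be unmarshalled into an int field.
    // Using int64 works on every platform.
    type Part struct {
        Offset int64 `json:"offset"`
    }

    func main() {
        var p Part
        err := json.Unmarshal([]byte(`{"offset": 2147483648}`), &p)
        fmt.Println(p.Offset, err) // 2147483648 <nil>
    }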
Nick Craig-Wood
bbda4ab1f1 Add HerrH to contributors 2018-07-30 23:14:33 +01:00
HerrH
916b6e3a40 b2: Use create instead of make in the docs 2018-07-30 23:14:03 +01:00
Fabian Möller
dd670ad1db drive: handle gdocs when filtering file names in list
Fixes #2399
2018-07-30 13:01:16 +01:00
Fabian Möller
7983b6bdca vfs: enable vfs-read-chunk-size by default 2018-07-29 18:17:05 +01:00
Fabian Möller
9815b09d90 fs: add multipliers for SizeSuffix 2018-07-29 18:17:05 +01:00
Fabian Möller
9c90b5e77c stats: use appropriate Lock func's 2018-07-22 11:33:19 +02:00
Nick Craig-Wood
01af8e2905 s3: docs for how to configure Aliyun OSS / Netease NOS - thanks @xiaolei0125 2018-07-20 15:49:07 +01:00
Nick Craig-Wood
f06ba393b8 s3: Add --s3-force-path-style - fixes #2401 2018-07-20 15:41:40 +01:00
Nick Craig-Wood
473e3c3eb8 mount/cmount: implement --daemon-timeout flag for OSXFUSE
By default the timeout is 60s which isn't long enough for long
transactions.  The symptoms are rclone just quitting for no reason.
Supplying the --daemon-timeout flag fixes this, causing the kernel to
wait longer for rclone.
2018-07-19 13:26:51 +01:00
Nick Craig-Wood
ab78eb13e4 sync: correct help for --delete-during and --delete-after 2018-07-18 19:30:14 +01:00
Nick Craig-Wood
b1f31c2acf cmd: fix boolean backend flags - fixes #2402
Before this change, boolean flags such as `--b2-hard-delete` were
failing to be recognised unless they had a parameter.

This bug was introduced as part of the config re-organisation:
f3f48d7d49
2018-07-18 15:43:57 +01:00
ishuah
dcc74fa404 move: fix delete-empty-src-dirs flag to delete all empty dirs on move - fixes #2372 2018-07-17 10:34:34 +01:00
Nick Craig-Wood
6759d36e2f vendor: get Gopkg.lock back in sync 2018-07-16 22:02:11 +01:00
Nick Craig-Wood
a4797014c9 local: fix crash when deprecated --local-no-unicode-normalization is supplied 2018-07-16 21:38:34 +01:00
Nick Craig-Wood
4d7d240c12 config: Add advanced section to the config editor 2018-07-16 21:20:47 +01:00
Nick Craig-Wood
d046402d80 config: Make sure Required values are entered 2018-07-16 21:20:47 +01:00
Nick Craig-Wood
9bdf465c10 config: make config wizard understand types and defaults 2018-07-16 21:20:47 +01:00
Nick Craig-Wood
f3f48d7d49 Implement new backend config system
This unifies the 3 methods of reading config

  * command line
  * environment variable
  * config file

And allows them all to be configured in all places.  This is done by
making the []fs.Option in the backend registration be the master
source of what the backend options are.

The backend changes are:

  * Use the new configmap.Mapper parameter
  * Use configstruct to parse it into an Options struct
  * Add all config to []fs.Option including defaults and help
  * Remove all uses of pflag
  * Remove all uses of config.FileGet
2018-07-16 21:20:47 +01:00
Nick Craig-Wood
3c89406886 config: Make fs.ConfigFileGet return an exists flag 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
85d09729f2 fs: factor OptionToEnv and ConfigToEnv into fs 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
b3bd2d1c9e config: add configstruct parser to parse maps into config structures 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
4c586a9264 config: add configmap package to manage config in a generic way 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
1c80e84f8a fs: Implement Scan method for SizeSuffix and Duration 2018-07-16 08:50:52 +01:00
Nick Craig-Wood
028f8a69d3 acd: Make very clear in the docs that rclone has no ACD keys #2385 2018-07-15 14:21:19 +01:00
Nick Craig-Wood
b0d1fa1d6b azblob: fix precedence error on testing for StorageError types 2018-07-15 13:56:52 +01:00
Nick Craig-Wood
dbb4b2c900 fs/config: don't print errors about --config if supplied - fixes #2397
Before this change if the rclone was running in an environment which
couldn't find the HOME directory, it would print a warning about
supplying a --config flag even if the user had done so.
2018-07-15 12:39:11 +01:00
Nick Craig-Wood
99201f8ba4 Add sandeepkru to contributors 2018-07-14 10:50:58 +01:00
sandeepkru
5ad8bcb43a backend/azureblob: Port new Azure Blob Storage SDK #2362
This change removes the older azureblob storage SDK and brings the
existing code to parity with the latest blob storage SDK.
This change is also a pre-requisite for addressing #2091
2018-07-14 10:49:58 +01:00
sandeepkru
6efedc4043 vendor: Port new Azure Blob Storage SDK #2362
Removed references to the older SDK and added the new version of the SDK (2018-03-28) 2018-07-14 10:49:58 +01:00
2018-07-14 10:49:58 +01:00
Nick Craig-Wood
a3d9a38f51 fs/fserrors: make sure Cause never returns nil 2018-07-13 10:31:40 +01:00
Yoni Jah
b1bd17a220 onedrive: shared folder support - fixes #1200 2018-07-11 18:48:59 +01:00
Nick Craig-Wood
793f594b07 gcs: fix index out of range error with --fast-list fixes #2388 2018-07-09 17:00:52 +01:00
Nick Craig-Wood
4fe6614ae1 s3: fix index out of range error with --fast-list fixes #2388 2018-07-09 17:00:52 +01:00
Nick Craig-Wood
4c2fbf9b36 Add Jasper Lievisse Adriaanse to contributors 2018-07-08 11:01:56 +01:00
Jasper Lievisse Adriaanse
ed4f1b2936 sftp: fix typo in help text 2018-07-08 11:01:35 +01:00
Nick Craig-Wood
144c1a04d4 fs: Fix parsing of paths under Windows - fixes #2353
Before this copyto would parse windows paths incorrectly.

This change moves the parsing code into fspath and makes sure
fspath.Split calls fspath.Parse which does the parsing correctly for Windows.

This also renames fspath.RemoteParse to fspath.Parse for consistency
2018-07-06 23:16:43 +01:00
Nick Craig-Wood
25ec7f5c00 Add Onno Zweers to contributors 2018-07-05 10:05:24 +01:00
Onno Zweers
b15603d5ea webdav: document dCache and Macaroons 2018-07-05 10:04:57 +01:00
Nick Craig-Wood
71c974bf9a azureblob: documentation for authentication methods 2018-07-05 09:39:06 +01:00
Nick Craig-Wood
03c5b8232e Update github.com/Azure/azure-sdk-for-go #2118
This pulls in https://github.com/Azure/azure-sdk-for-go/issues/2119
which fixes the SAS URL support.
2018-07-04 09:25:13 +01:00
Nick Craig-Wood
72392a2d72 azureblob: list the container to see if it exists #2118
This means that SAS URLs which are tied to a single container will work.
2018-07-04 09:23:00 +01:00
Nick Craig-Wood
b062ae9d13 azureblob: add connection string and SAS URL auth - fixes #2118 2018-07-04 09:22:59 +01:00
Nick Craig-Wood
8c0335a176 build: fix for goimports format change
See https://github.com/golang/go/issues/23709
2018-07-03 22:33:15 +01:00
Nick Craig-Wood
794e55de27 mega: wait for events instead of arbitrary sleeping 2018-07-02 14:50:09 +01:00
Nick Craig-Wood
038ed1aaf0 vendor: update github.com/t3rm1n4l/go-mega - fixes #2366
This update fixes files being missing from mega directory listings.
2018-07-02 14:50:09 +01:00
Nick Craig-Wood
97beff5370 build: keep track of compile failures better in cross-compile 2018-07-02 10:09:18 +01:00
Nick Craig-Wood
b9b9bce0db ftp: fix Put mkParentDir failed: 521 for BunnyCDN - fixes #2363
According to RFC 959, error 521 is the correct error return to mean
"dir already exists", so add support for this.
2018-06-30 14:29:47 +01:00
Nick Craig-Wood
947e10eb2b config: fix error reading password from piped input - fixes #1308 2018-06-28 11:54:15 +01:00
Nick Craig-Wood
6b42421374 build: build macOS beta releases with native compiler on travis #2309 2018-06-26 09:39:44 +01:00
Nick Craig-Wood
fa051ff970 webdav: add bearer token (Macaroon) support for dCache - fixes #2360 2018-06-25 17:54:36 +01:00
Nick Craig-Wood
69164b3dda build: move non master beta builds into branch subdirectory 2018-06-25 16:49:04 +01:00
Nick Craig-Wood
935533e57f filter: raise --include and --exclude warning to ERROR so it appears without -v 2018-06-22 22:18:55 +01:00
Nick Craig-Wood
1550f70865 webdav: Don't accept redirects when reading metadata #2350
Go can't redirect PROPFIND requests properly, it changes the method to
GET, so we disable redirects when reading the metadata and assume the
object does not exist if we receive a redirect.

This is to work around qnap redirecting requests for directories
without a trailing /.
2018-06-18 12:22:13 +01:00
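Disabling redirect following in Go's standard library looks roughly like this (a generic sketch, not the rclone rest package):

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Returning http.ErrUseLastResponse stops the client following
        // redirects and hands back the 3xx response unchanged, so the
        // caller can treat a redirect on PROPFIND as "not found".
        client := &http.Client{
            CheckRedirect: func(req *http.Request, via []*http.Request) error {
                return http.ErrUseLastResponse
            },
        }
        resp, err := client.Get("http://example.com/dir")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer resp.Body.Close()
        fmt.Println(resp.StatusCode)
    }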
Nick Craig-Wood
1a65c3a740 rest: add NoRedirect flag to Options 2018-06-18 12:21:50 +01:00
Nick Craig-Wood
a29a1de43d webdav: if root ends with / then don't check if it is a file 2018-06-18 12:13:47 +01:00
Nick Craig-Wood
e7ae5e8ee0 webdav: ensure we call MKCOL with a URL with a trailing / #2350
This is an attempt to fix rclone and qnap interop.
2018-06-18 11:16:58 +01:00
Mateusz
56e1e82005 fs: added weekday schedule into --bwlimit - fixes #1822 2018-06-17 18:38:09 +01:00
lewapm
8442498693 backend/drive: add flag for keep revision forever - fixes #1525 2018-06-17 18:34:35 +01:00
Nick Craig-Wood
08021c4636 vendor: update all dependencies 2018-06-17 17:59:12 +01:00
Nick Craig-Wood
3f0789e2db deletefile: fix typo in docs 2018-06-17 16:58:37 +01:00
Nick Craig-Wood
7110349547 Start v1.42-DEV development 2018-06-16 21:25:58 +01:00
Nick Craig-Wood
a9adb43896 Version v1.42 2018-06-16 18:21:09 +01:00
Nick Craig-Wood
c47a4c9703 opendrive: re-read hash when updating objects
Previously this was reading a stale hash from the object leading to
broken integration tests.

This fixes these integration tests TestSyncDoesntUpdateModtime,
TestSyncAfterChangingFilesSizeOnly, TestSyncAfterChangingContentsOnly,
TestSyncWithUpdateOlder, TestSyncUTFNorm.
2018-06-15 14:50:17 +01:00
Nick Craig-Wood
d9d00a7dd7 rcat: remove --checksum flag from the docs as it is not usually effective 2018-06-14 16:15:54 +01:00
Nick Craig-Wood
b82e66daaa Add themylogin to contributors 2018-06-14 16:15:53 +01:00
themylogin
7d2861ead6 Adjust S3 upload concurrency with --s3-upload-concurrency 2018-06-14 16:15:17 +01:00
remusb
aaa8591661 cache: add non cached dirs on notifications - #2155 2018-06-13 23:57:26 +03:00
remusb
4df1794932 cache: fix panic when running without plex configs 2018-06-13 15:06:14 +03:00
Nick Craig-Wood
d18928962c dropbox: make dropbox for business folders accessible #2003
Paths prefixed with / on a dropbox for business plan will now start at
the root instead of the users home directory.
2018-06-13 11:03:34 +01:00
remusb
339fbf0df5 cache: reconnect plex websocket on failures 2018-06-12 22:58:15 +03:00
remusb
13ccb39819 cache: allow root to be expired from rc - #2237 2018-06-12 22:19:03 +03:00
remusb
f9a1a7e700 cache: fix root folder caching 2018-06-10 21:54:20 +03:00
Nick Craig-Wood
1c75581959 sync: fix TestCopyRedownload after ModifyWindow changes #2310 2018-06-10 17:34:00 +01:00
Nick Craig-Wood
4d793b8ee8 drive: remove part of workaround for #1675
Now that https://issuetracker.google.com/issues/64468406 has been
fixed, we can remove part of the workaround which fixed #1675 -
019adc35609c2136

This will make queries marginally more efficient.  We still need the
other part of the workaround since the `=` operator is case
insensitive.
2018-06-10 15:28:32 +01:00
Nick Craig-Wood
9289aead9b drive: Add --drive-acknowledge-abuse to download flagged files - fixes #2317
Also if rclone gets the cannotDownloadAbusiveFile suggest using the
--drive-acknowledge-abuse flag.
2018-06-10 15:25:21 +01:00
Filip Bartodziej
ce109ed9c0 log: password prompt output fixed for unix - partially fixes #2220 2018-06-10 12:57:45 +01:00
Filip Bartodziej
d7ac4ca44e cmd: deletefile command - fixes #2286 2018-06-10 12:49:33 +01:00
Nick Craig-Wood
1053d7e123 local: fix symlink/junction point directory handling under Windows
Before this commit rclone's handling of symlinks and junction points
under Windows was broken.  rclone treated them as files and attempted
to transfer them which gave the error "The handle is invalid".

Ultimately the cause of this was 3e43ff7414 which was a
workaround so files with reparse points (which are a kind of symlink)
would transfer correctly.

The solution implemented is to revert the above commit which will mean
that #614 will break again.  However there is now a work-around (which
will be signaled by rclone) to use the -L flag which wasn't available
when the original commit was made.

Fixes #2336
2018-06-10 12:25:03 +01:00
Nick Craig-Wood
017297af70 s3: Fix --s3-chunk-size which was always using the minimum - fixes #2345 2018-06-10 12:22:30 +01:00
remusb
4e8e5fed7d cache: clean remaining empty folders from temp upload path 2018-06-09 14:52:31 +03:00
remusb
c0f772bc14 cache: update internal tests and small fixes 2018-06-08 23:34:38 +03:00
Nick Craig-Wood
334ef28012 Add Benjamin Joseph Dag to contributors 2018-06-08 16:12:57 +01:00
Benjamin Joseph Dag
da45dadfe9 cmd: added --retries-sleep flag
The --retries-sleep flag can be used to sleep after each retry.
2018-06-08 16:12:24 +01:00
Nick Craig-Wood
05edb5f501 drive: Fix change list polling with team drives - fixes #2330
In the drive v3 conversion we forgot the IncludeTeamDriveItems
parameter when calling the changes API.  Adding it fixes the changes
polling with team drives.
2018-06-07 11:35:55 +01:00
Henning Surmeier
04d18d2a07 oauthutil: Use go template for web response
Every response is formed using the AuthResponseData struct together with
the AuthResponse html template.
2018-06-06 09:54:21 +01:00
Henning Surmeier
f1269dc06a onedrive: errorHandler for business requests
This implementation can hopefully handle all error responses from the
onedrive for business authentication.
I have only tested it with the "domain in unmanaged state" error.
2018-06-06 09:54:21 +01:00
Henning Surmeier
c5286ee157 oauthutil: support backend-specific errorHandler
This allows the backend to pass an errorHandler function to the doConfig
function. The webserver will pass the current request as a parameter to
the function.
The function can then examine all parameters and build the AuthError
struct which contains the name, code and description of the error. A link to
the docs can be added to the HelpURL field.
oauthutil then takes care of formatting for the HTML response page. The
error details are also returned as an error in the server.err channel
and will be logged to the commandline.
2018-06-06 09:54:21 +01:00
Nick Craig-Wood
ba43acb6aa sync: fix TestCopyEmptyDirectories after ModifyWindow changes #2310 2018-06-04 21:41:25 +01:00
remusb
8a84975993 cache: cache lists using batch writes 2018-06-04 21:04:45 +03:00
ishuah
d758e1908e copy: create (pseudo copy) empty source directories to destination - fixes #1837 2018-06-04 11:01:14 +01:00
ishuah
737aed8412 Ensure items in srcEmptyDirs are actually empty 2018-06-04 11:01:14 +01:00
Stefan
4009fb67c8 fs: calculate ModifyWindow each time on the fly instead of relying on global state - see #2319, #2328 2018-06-03 20:45:34 +02:00
Nick Craig-Wood
3ef938ebde lsf: add --absolute flag to add a leading / onto path names 2018-06-03 10:42:34 +01:00
Nick Craig-Wood
5302e5f9b1 docs: add a note about SIGINFO for macOS 2018-06-02 17:38:05 +01:00
kubatasiemski
de8c7d8e45 cmd: add siginfo handler 2018-06-02 17:35:13 +01:00
Henning Surmeier
2a29f7f6c8 onedrive: Add troubleshooting to docs 2018-06-02 17:10:58 +01:00
Nick Craig-Wood
2b332bced2 Add Kasper Byrdal Nielsen to contributors 2018-05-31 09:42:36 +01:00
Kasper Byrdal Nielsen
aad75e6720 check: Add one-way argument
The --one-way argument will check that all files on the source match the files on the destination,
but not the other way. For example, files present on the destination but not on the source will not
trigger an error.

Fixes: #1526
2018-05-31 09:42:16 +01:00
Stefan
2a806a8d8b mount: only print "File.rename error" if there actually is an error - see #2130 (#2322) 2018-05-29 19:19:17 +02:00
Nick Craig-Wood
500085d244 vendor: update github.com/dropbox/dropbox-sdk-go-unofficial #2158 2018-05-29 15:56:40 +01:00
Nick Craig-Wood
3d8e529441 rc: return error from remote on failure 2018-05-29 10:48:01 +01:00
Stefan
6607d8752c mountlib: add testcase to ensure the ModifyWindow is calculated on Mount (see #2002) (#2319) 2018-05-28 17:49:26 +02:00
Stefan
67e9ef4547 mount: delay rename if file has open writers instead of failing outright - fixes #2130 (#2249) 2018-05-24 20:45:11 +02:00
Nick Craig-Wood
d4213c0ac5 sftp: Fix slow downloads for long latency connections - fixes #1158
This was caused by using the sftp.File.Read method which resets the
streaming window after each call.  Replacing it with sftp.File.WriteTo
and an io.Pipe fixes the problem bringing the speed to the same as the
sftp binary.
2018-05-24 15:10:28 +01:00
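A rough sketch of the WriteTo + io.Pipe pattern described above, written generically against io.WriterTo (which *sftp.File implements); error handling is trimmed and the names are illustrative:

    package main

    import (
        "io"
        "os"
        "strings"
    )

    // streamFrom turns anything implementing io.WriterTo (such as an
    // *sftp.File) into an io.Reader by pumping it through an io.Pipe.
    // WriteTo writes continuously, so the streaming window is never
    // reset the way repeated Read calls would reset it.
    func streamFrom(src io.WriterTo) io.Reader {
        pr, pw := io.Pipe()
        go func() {
            _, err := src.WriteTo(pw)
            pw.CloseWithError(err) // propagate any error to the reader
        }()
        return pr
    }

    func main() {
        r := streamFrom(strings.NewReader("hello\n")) // *strings.Reader implements io.WriterTo
        _, _ = io.Copy(os.Stdout, r)
    }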
Nick Craig-Wood
3a2248aa5f rc: add core/gc to run a garbage collection on demand 2018-05-24 15:10:28 +01:00
Nick Craig-Wood
573ef4c8ee rc: enable go profiling by default on the --rc port
This means you can use the pprof tool on a running rclone, eg

    go tool pprof http://localhost:5572/debug/pprof/heap
2018-05-24 15:10:28 +01:00
Nick Craig-Wood
7bf2d389a8 Add John Clayton to contributors 2018-05-22 11:48:20 +01:00
John Clayton
71b4f1ccab cache: use secure websockets for HTTPS Plex addresses 2018-05-22 11:47:57 +01:00
Nick Craig-Wood
e5ff375948 Use config.FileGet instead of fs.ConfigFileGet 2018-05-22 09:43:24 +01:00
Nick Craig-Wood
512f4b4487 Update error checking on fmt.Fprint* after errcheck update
Now we need to check or ignore errors on fmt.Fprint* explicitly -
previously errcheck just ignored them for us.
2018-05-22 09:41:13 +01:00
Nick Craig-Wood
a38f8b87ce docs: fix Nextcloud typo spotted by Eugene Mlodik 2018-05-16 16:43:52 +01:00
Nick Craig-Wood
9697754707 drive: Don't attempt to choose Team Drives when using rclone config create 2018-05-16 09:10:09 +01:00
Nick Craig-Wood
8e625e0bc3 config: add ConfirmWithDefault to change the default on AutoConfig 2018-05-16 09:09:41 +01:00
Nick Craig-Wood
e52ecba295 crypt: check the crypted hash of files when uploading #2303
This checks the checksum of the streamed encrypted data against the
checksum of the encrypted object returned from the remote and returns
an error if it is different.
2018-05-15 14:50:36 +01:00
Nick Craig-Wood
e62d2fd309 oauthutil: Fix custom redirect URL message - fixes #2306 2018-05-13 17:28:09 +01:00
Nick Craig-Wood
e56be0dfd8 lsf: Add --csv flag for compliant CSV output 2018-05-13 12:18:21 +01:00
Nick Craig-Wood
2a32e2d838 operations: turn ListFormatted into a Format method on ListFormat 2018-05-13 12:17:55 +01:00
Nick Craig-Wood
db4c206e0e lsjson: add MimeType to the output 2018-05-13 12:17:55 +01:00
Nick Craig-Wood
f77efc7649 lsf: Add 'm' format specifier to show the MimeType 2018-05-13 12:17:55 +01:00
Nick Craig-Wood
aadbcce486 fs: Add MimeTypeDirEntry to return the MimeType of a DirEntry 2018-05-13 12:17:55 +01:00
Nick Craig-Wood
f162116132 lsjson: add ID field to output to show Object ID - fixes #1901 2018-05-13 12:17:55 +01:00
Nick Craig-Wood
909c3a92d6 lsf: implement 'i' format for showing object ID - fixes #1476 2018-05-13 12:17:55 +01:00
Nick Craig-Wood
826975c341 fs: add Optional ID() method to Object and implement it in backends
ID() shows the internal ID of the Object if available.
2018-05-13 12:17:55 +01:00
Fabian Möller
6791cf7d7f atexit: prevent Run from being called on nil signal 2018-05-12 18:59:25 +02:00
Fabian Möller
d022c81d99 mount: ensure atexit gets run on interrupt
When running `rclone mount`, there were 2 signal handlers for `os.Interrupt`.

Those handlers would run concurrently and in some cases cause either the unmount or `atexit.Run()` to be skipped.

In addition `atexit.Run()` will get called in `resolveExitCode` to ensure cleanup on errors.
2018-05-12 10:40:44 +01:00
Nick Craig-Wood
cdde8fa75a opendrive: finish off #1026
* Fix errcheck and golint warnings
  * Remove unused constants and fix comments
  * Parse error responses properly
  * Fix Open with RangeOption
  * Fix Move, Copy and DirMove
  * Implement DirCacheFlush
  * Check interfaces are correct
  * Remove debugs and update overview
  * Correct feature flags
  * Pare replacement characters down to the minimum set
  * Add to the integration tests
2018-05-12 10:10:46 +01:00
Nick Craig-Wood
5ede6f6d09 Add Jakub Karlicek to contributors 2018-05-12 10:10:45 +01:00
Jakub Karlicek
53292527bb opendrive: fill out the functionality #1026
* Add Mkdir, Rmdir, Purge, Delete, SetModTime, Copy, Move, DirMove
 * Update file size after upload
 * Add Open seek
 * Set private permission for new folder and uploaded file
 * Add docs
 * Update List function
 * Fix UserSessionInfo struct
 * Fix socket leaks
 * Don’t close resp.Body in Open method
 * Get hash when listing files
2018-05-12 10:07:25 +01:00
Oliver Heyme
ec9894da07 opendrive: initial parts with download and upload working #1026 2018-05-12 10:07:16 +01:00
Nick Craig-Wood
ad02d1be3f fstest: update comments on how to run individual tests 2018-05-11 14:04:36 +01:00
Nick Craig-Wood
63f413f477 webdav: show all available information when printing errors 2018-05-11 08:43:53 +01:00
Nick Craig-Wood
f1ffe8e309 fstests: fix test crash if NewFs fails 2018-05-11 08:43:53 +01:00
Nick Craig-Wood
d85b9bc9d6 webdav: workarounds for biz.mail.ru
* Add "Depth: 1" on read metadata PROPFIND call
  * Accept 406 to mean directory already exists
2018-05-11 08:43:53 +01:00
Nick Craig-Wood
b07e51cf73 webdav: read the body of messages into the error if XML parse fails 2018-05-11 08:43:53 +01:00
Nick Craig-Wood
f073db81b1 drive: add --drive-alternate-export to fix large doc export - fixes #2243
The official drive APIs seem to have trouble downloading large
documents sometimes.

This commit adds a --drive-alternate-export flag to use a different,
unofficial set of export URLs which seem to download large files OK.
2018-05-10 10:04:39 +01:00
Nick Craig-Wood
9698a2babb gcs: low level retry all operations if necessary
Google cloud storage doesn't normally need retries, however certain
things (eg bucket creation and removal) are rate limited and do
generate 429 errors.

Before this change the integration tests would regularly blow up with
errors from GCS rate limiting bucket creation and removal.

After this change we low level retry all operations using the same
exponential backoff strategy as used in the google drive backend.
2018-05-10 09:24:09 +01:00
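A generic exponential backoff retry loop of the sort described, sketched in Go (the retry count, sleep times and shouldRetry test are illustrative assumptions, not the backend's values):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retry runs f up to maxTries times, sleeping with exponential
    // backoff between failed attempts while shouldRetry says the
    // error is retryable (eg a 429 rate limit response).
    func retry(maxTries int, shouldRetry func(error) bool, f func() error) error {
        sleep := 10 * time.Millisecond
        var err error
        for try := 0; try < maxTries; try++ {
            err = f()
            if err == nil || !shouldRetry(err) {
                return err
            }
            time.Sleep(sleep)
            sleep *= 2
        }
        return err
    }

    func main() {
        errRateLimited := errors.New("429 too many requests")
        calls := 0
        err := retry(5, func(e error) bool { return e == errRateLimited }, func() error {
            calls++
            if calls < 3 {
                return errRateLimited
            }
            return nil
        })
        fmt.Println(calls, err) // 3 <nil>
    }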
Nick Craig-Wood
5eecbd83ee bin: make make_test_files.go work properly on Windows 2018-05-09 16:59:29 +01:00
Nick Craig-Wood
e42edc8e8c copy, move: Copy single files directly, don't use --files-from work-around
Before this change rclone would inefficiently and confusingly read all
the files in the source directory when copying or moving a single file.
This confused users, who would see log messages about files which
weren't part of the sync.

After the change the copy and move commands use the new infrastructure
made for the copyto and moveto commands for single file copy and move.
2018-05-07 20:39:52 +01:00
Nick Craig-Wood
291954baba cmd: make names of argument parsing functions more consistent 2018-05-07 20:39:52 +01:00
Nick Craig-Wood
9d8d7ae1f0 mount,cmount: make --noappledouble --noapplexattr and change defaults #2287
Before this change we would unconditionally set the OSXFUSE options
noappledouble and noapplexattr.

However the noapplexattr options caused problems with copies in the
Finder.

Now the default for noapplexattr is false so we don't add the option
by default and the user can override the defaults using the
--noappledouble and --noapplexattr flags.
2018-05-07 20:37:09 +01:00
Nick Craig-Wood
6ce32e4661 mount,cmount: Add --volname flag and remove special chars from it #2287
Before this change rclone would set the volume name from the
remote:path normally.  However this has `:` and `/` in which make it
difficult to use in macOS.

Now rclone will remove the special characters and replace them with
spaces.  It also allows the volume name to be set with the --volname
flag.
2018-05-07 20:37:09 +01:00
Nick Craig-Wood
1755ffd1f3 mount: make Get/List/Set/Remove xattr return ENOSYS #2287
By default bazil fuse will return ENOTSUPP for these.  However if we
return ENOSYS then OSXFUSE (at least) will never call them again,
saving round trips through fuse.
2018-05-07 20:37:09 +01:00
Nick Craig-Wood
aa5c5ec5d3 build: mask linter errors we can't fix 2018-05-05 17:32:41 +01:00
Nick Craig-Wood
e80ae4e09c build: remove unused struct fields spotted by structcheck 2018-05-05 17:32:41 +01:00
Nick Craig-Wood
1320e84bc2 build: remove unused code spotted by the deadcode linter 2018-05-05 17:32:41 +01:00
Nick Craig-Wood
cb5bd47e61 build: fix errors spotted by ineffassign linter
These were mostly caused by shadowing err and a good fraction of them
will have caused errors not to be propagated properly.
2018-05-05 17:32:41 +01:00
Nick Craig-Wood
790a8a9aed build: add gometalinter and gometalinter_install Makefile targets 2018-05-05 17:32:41 +01:00
Nick Craig-Wood
f1a43eca4d mount: make --daemon work for macOS without CGO 2018-05-05 16:23:47 +01:00
Nick Craig-Wood
7ea68f1fc6 sftp: require go1.9+ after golang.org/x/crypto/ssh update 2018-05-05 16:23:47 +01:00
Nick Craig-Wood
6427029c4e vendor: update all dependencies
* Update all dependencies
  * Remove all `[[constraint]]` from Gopkg.toml
  * Add in the minimum number of `[[override]]` to build
  * Remove go get of github.com/inconshreveable/mousetrap as it is vendored
  * Update docs with new policy on constraints
2018-05-05 15:52:24 +01:00
Nick Craig-Wood
21383877df cmd: make exit code 8 for --max-transfer exceeded 2018-05-05 12:58:28 +01:00
Nick Craig-Wood
f95835d613 fserrors: Look deeper into errors for Fatal/Retry/NoRetry errors.
Before this change fatal errors which were wrapped in a system error (eg a
URLError) were not recognised as fatal errors.
2018-05-05 12:58:28 +01:00
Nick Craig-Wood
be79b47a7a sync: log when we abandon the sync due to a fatal error 2018-05-05 12:58:28 +01:00
Nick Craig-Wood
be22735609 fs/accounting: fix deadlock on GetBytes
A deadlock could occur since we have now put a mutex on GetBytes from
StatsInfo.String (s.mu) - progress (acc.statmu) and read (acc.statmu)
- GetBytes (s.mu).

Fix this by giving stringSet its own locking and excluding the call
which caused the deadlock from the mutex in StatsInfo.String.
2018-05-05 12:58:28 +01:00
Nick Craig-Wood
1b1b3c13cd sync: add a test for aborting on max upload 2018-05-05 12:58:28 +01:00
Nick Craig-Wood
5c128272fd Implement --max-transfer flag to quit transferring at a limit #1655 2018-05-05 12:58:28 +01:00
Nick Craig-Wood
d178233e74 sync,march: check the cancel context on every channel send and receive
This fixes a deadlock on sync when all the copying channels receive a
Fatal Error.
2018-05-05 12:58:28 +01:00
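The pattern referred to, checking the cancel context on each channel operation, looks like this in Go (a generic sketch, not the sync package's actual code):

    package main

    import (
        "context"
        "fmt"
    )

    // send delivers item to out unless the context is cancelled first,
    // so a worker can never block forever on a full channel.
    func send(ctx context.Context, out chan<- int, item int) error {
        select {
        case out <- item:
            return nil
        case <-ctx.Done():
            return ctx.Err()
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        out := make(chan int) // unbuffered and never read: would deadlock
        cancel()              // a fatal error elsewhere cancels the context
        fmt.Println(send(ctx, out, 1)) // context canceled
    }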
Fabian Möller
98bf65c43b vfs: fix ChangeNotify for new or changed folders
Fixes #2251
2018-05-05 12:54:03 +01:00
Fabian Möller
3b5e70c8c6 drive: fix ChangeNotify for folders 2018-05-05 12:54:03 +01:00
Fabian Möller
bd3ad1ac3e vfs: add option to read source files in chunks 2018-05-05 12:49:42 +01:00
Fabian Möller
9fdf273614 fs: improve ChunkedReader
- make Close permanent and return errors afterwards
- use RangeSeek from the wrapped reader if present
- add a limit to chunk growth
- correct RangeSeek interface behavior
- add tests
2018-05-05 12:49:42 +01:00
Nick Craig-Wood
fe25cb9c54 drive: fix about (and df on a mount) for team drives - fixes #2288
Before this fix team drives would return the drive quota which is
incorrect and misleading.

Team drives don't appear to have an API for reading the bytes used or
the quota so we now return that the quota and usage are unknown.
2018-05-03 08:59:14 +01:00
Nick Craig-Wood
f2608e2a64 Add NoLooseEnds to contributors 2018-05-01 09:43:18 +01:00
NoLooseEnds
a5f1811892 cmd: Fixed a typo – minimum 2018-05-01 09:42:21 +01:00
Nick Craig-Wood
50dc5fe92e Add Rodrigo to contributors 2018-04-30 17:37:43 +01:00
Rodrigo
b7d2048032 WebDAV: Ignore Reason-Phrase in status line #2281 2018-04-30 17:36:38 +01:00
Nick Craig-Wood
3116249692 make sign_upload: only sign the v1.xx releases not the current ones 2018-04-30 17:29:50 +01:00
Nick Craig-Wood
d049e5c680 make build_dep: make sure we update the whole command for nfpm 2018-04-30 17:29:50 +01:00
Nick Craig-Wood
1c9572aba1 Add Piotr Oleszczyk to contributors 2018-04-30 17:29:50 +01:00
Piotr Oleszczyk
76f2cbeb94 sftp: Add --ssh-path-override flag #1474
The flag allows calculation of checksums on systems using
different paths for SSH and SFTP, like synology NAS boxes.
2018-04-30 17:05:10 +01:00
Nick Craig-Wood
0479c7dcf5 add github-release to make release_dep 2018-04-28 12:38:30 +01:00
Nick Craig-Wood
55674c0bfc Start v1.41-DEV development 2018-04-28 12:37:55 +01:00
Nick Craig-Wood
e4c380b2a8 Version v1.41 2018-04-28 11:46:27 +01:00
Nick Craig-Wood
74cbdea0ef Revert "copy: create (pseudo copy) empty source directories to destination"
Unfortunately this commit attempts to create every directory rather
than just the empty ones, so will need re-working.

Removing this feature for the 1.41 release

This reverts commit 0daced29db.
2018-04-28 10:02:32 +01:00
Nick Craig-Wood
a3bf6b9c2c drive, gcs: fix service account authentication - fixes #2279
This fixes a problem introduced in b78af517de where it would
attempt to read a non-existent service account file.
2018-04-28 09:33:43 +01:00
ishuah
0daced29db copy: create (pseudo copy) empty source directories to destination - fixes #1837 2018-04-27 16:15:32 +01:00
Matt Holt
b78af517de Add service_account_credentials for Google Cloud and Drive 2018-04-27 16:07:37 +01:00
Nick Craig-Wood
d8e88f10cd rc: take note of the --rc-addr flag too as per the docs - fixes #2184 2018-04-26 17:00:44 +01:00
Nick Craig-Wood
849db6699d Add Richard Yang to contributors 2018-04-26 16:23:52 +01:00
Richard Yang
a81ec00a8c dedupe: Add dedupe largest functionality - fixes #2269 2018-04-26 16:21:07 +01:00
Nick Craig-Wood
da4a5e1fb3 docs: note that copytruncate is needed for --log-file with logrotate #2259 2018-04-26 15:30:46 +01:00
Nick Craig-Wood
ae562b5a4f ftp: more workarounds for FTP servers to fix mkParentDir - fixes #2181 2018-04-26 14:58:04 +01:00
Nick Craig-Wood
c01177bc28 ftp: work around strange response from box FTP server
The Box FTP server seems to send 450 instead of 550 - work around that.

See: https://forum.rclone.org/t/using-box-com-over-ftp-problems/5313
2018-04-26 14:58:04 +01:00
Nick Craig-Wood
9f04ce282e rc: fix setting bwlimit to unlimited 2018-04-26 12:21:29 +01:00
Nick Craig-Wood
764440068e filter: fix --min-age and --max-age together check
Somehow in the code reorganisation of
11da2a6c9b the check for --min-age and
--max-age got switched around.  This commit fixes that and means you
can use --min-age and --max-age together.
2018-04-26 09:17:22 +01:00
Nick Craig-Wood
a703216286 filter: take double negatives out of filter flag help 2018-04-26 09:17:13 +01:00
Nick Craig-Wood
96a62d55a2 lsd: Add -R flag and fix and update docs for all ls commands 2018-04-26 08:55:03 +01:00
Nick Craig-Wood
d0f32b62fd Revert "build: Temporary workaround for golint being missing."
This reverts commit be8bd89674.
2018-04-25 16:17:54 +01:00
Mateusz Pabian
7c5f87842c vfs: filter files . and .. from readDir output - fixes #2135 2018-04-25 16:09:07 +01:00
Nick Craig-Wood
cc8799e0d6 Add new email address for Oliver Heyme to contributors 2018-04-25 15:52:41 +01:00
Oliver Heyme
da214973a1 [install] Add arm64/aarch64 support 2018-04-25 15:51:38 +01:00
Nick Craig-Wood
be8bd89674 build: Temporary workaround for golint being missing.
See https://github.com/golang/lint/issues/397
2018-04-24 11:22:38 +01:00
Nick Craig-Wood
9ab2521ef2 rc: autogenerate and tidy the docs and commands
* Rename rc/pid -> core/pid
  * Sort the output of `rc list`
  * Make a script to autogenerate the docs
  * Tidy docs
2018-04-23 20:57:17 +01:00
Nick Craig-Wood
21a10e58c9 rc: implement core/memstats to print internal memory usage info 2018-04-23 20:49:36 +01:00
Nick Craig-Wood
d36b80f587 vendor: update bazil.org/fuse - corrects df -i - fixes #2089 2018-04-21 22:57:08 +01:00
Nick Craig-Wood
24980d7123 config: fix typo in error message #2268 2018-04-21 22:49:30 +01:00
Nick Craig-Wood
870c58f7f8 sftp: fail soft with a debug on hash failure #1474
If md5sum/sha1sum fails we debug what it output on stderr and return
an empty hash indicating we didn't have a hash, rather than
hash.ErrUnsupported indicating that we don't support this hash type.

This fixes lots of ERROR messages for sftp on Synology NAS boxes which,
while they support md5sum, have different SFTP and SSH paths so md5sum
doesn't work.

We also stop disabling md5sum/sha1sum on errors since typically Hashes
is only checked at the start of a sync run and isn't expected to
change dynamically.
2018-04-21 09:02:53 +01:00
Nick Craig-Wood
b3c6f5f4b8 sftp: Update docs with Synology quirks 2018-04-21 09:02:53 +01:00
Nick Craig-Wood
311a962011 s3: Look in S3 named profile files for credentials - fixes #2243 2018-04-21 09:00:20 +01:00
Nick Craig-Wood
da7a77ef2e ftp: Fix no error being returned when listing a non-existent directory 2018-04-20 23:22:46 +01:00
Nick Craig-Wood
9fbc40c5b9 fstests: List missing dir must return ErrorDirNotFound for non bucket based remotes
List or ListR of a non existent directory must return
ErrorDirNotFound for non bucket based remotes.  For bucket based
remotes it may return ErrorDirNotFound or it may return no error and
no entries.
2018-04-20 23:22:46 +01:00
Nick Craig-Wood
56ce784301 Add hensur to contributors 2018-04-20 21:44:12 +01:00
hensur
8fe3037301 webdav: support SharePoint cookie authentication
This enables the use of the SharePoint webdav endpoint provided by
OneDrive for Business or Office365 Education Accounts. It enables
unverified accounts to be accessed with rclone via webdav as it isn't
possible through the normal onedrive backend.

This integrates the https://github.com/hensur/onedrive-cookie-test
package to fetch the required cookies to authorize against the
SharePoint webdav endpoint.
2018-04-20 21:43:54 +01:00
hensur
ba7ae2ee8c rest: Add RemoveHeader and SetCookie method
These methods extend the rest package to support the cookie header and
header deletion.

The deletion is necessary to delete an existing authorization header if
cookie auth should be used.
2018-04-20 21:43:54 +01:00
Nick Craig-Wood
dc59836021 webdav: strip leading and trailing / off root - fixes #2257 2018-04-20 21:43:54 +01:00
Nick Craig-Wood
1a3fb21a77 onedrive: add QuickXorHash support for OneDrive for business - fixes #2262 2018-04-20 21:03:03 +01:00
Nick Craig-Wood
bcdb7719c6 fs/hash: install QuickXorHash as a supported rclone hash type #2262 2018-04-20 21:02:57 +01:00
Nick Craig-Wood
c51d97c752 hashsum: make generic tool for any hash to produce md5sum like output 2018-04-20 21:02:37 +01:00
Nick Craig-Wood
57a5b72d60 onedrive: implement quickXorHash algorithm #2262 2018-04-20 21:02:37 +01:00
Nick Craig-Wood
34ba17deec Add Chris Redekop second email to contributors 2018-04-20 20:53:15 +01:00
Nick Craig-Wood
e3a1bc9cd3 Add Michael G. Noll to contributors 2018-04-20 20:51:31 +01:00
Chris Redekop
a35e62e15c s3: Add an option to disable checksum uploading - fixes #2213 2018-04-20 20:51:12 +01:00
Michael G. Noll
d1ca8b8959 sftp: update docs to match code, fix typos and clarify disable_hashcheck prompt 2018-04-20 20:49:49 +01:00
Nick Craig-Wood
a0c65deca8 box: Parse file/directory size as a floating point number
Very large directories can have their sizes returned as floating point
numbers, eg `1.0034576985781e+14` from the box API.

Before this change this would fail to parse as an int64.

This change parses the size as a float64 instead which will be
perfectly accurate for sizes up to 2**53 which is about 9 PB.

It is unknown whether box themselves use a float64 as an intermediate
representation in the API or not - it seems likely.

Fixes #2261
2018-04-19 21:04:52 +01:00
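A short Go illustration of parsing such a size (the struct is illustrative; converting the float64 to int64 is exact for integers up to 2**53):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Item shows the idea of accepting a size the API may return in
    // exponential notation and converting it to an integer byte count.
    type Item struct {
        Size float64 `json:"size"`
    }

    func main() {
        var it Item
        _ = json.Unmarshal([]byte(`{"size": 1.0034576985781e+14}`), &it)
        fmt.Println(int64(it.Size)) // 100345769857810
    }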
Nick Craig-Wood
1f255a8567 Add a mega.nz remote #163
Not supported yet:
  * Hash
  * ModTime
  * Server Side Copy

Otherwise fully functional and passing all the tests.
2018-04-18 21:09:54 +01:00
Nick Craig-Wood
f50b85278a vendor: github.com/t3rm1n4l for backend/mega 2018-04-18 21:09:54 +01:00
Nick Craig-Wood
9948b39dba about: don't attempt retries 2018-04-18 21:09:54 +01:00
Nick Craig-Wood
2b855751fc vfs,mount,cmount: use About to return the correct disk total/used/free
Disk total, used and free now show correctly for mount and cmount (eg
`df` on Unix or in the Windows explorer).
2018-04-18 18:27:34 +01:00
Nick Craig-Wood
ef3bcec76c fs: Extend SizeSuffix to include TB and PB for rclone about 2018-04-17 21:53:42 +01:00
Nick Craig-Wood
1ac6dacf0f about: complete other providers and re-work internals
* Implement about for:
    * local, crypt, cache, drive, swift, hubic, onedrive, pcloud, dropbox
  * Implement `--json` and `--full` flags for `rclone about`
  * change About interface to return a Usage structure
  * Remove operations.About as it is too thin an interface
  * Implement Integration test

Relates to #1138 and #1564
2018-04-17 21:53:27 +01:00
a-roussos
94e277d759 about: add new command 'about' to get quota info from a remote
Implemented for drive only.

Relates to #1138 and #1564.
2018-04-17 21:50:14 +01:00
Nick Craig-Wood
b83814082b backend/http: if HEAD didn't return Content-Length use -1 as size
This means that the files will be treated as having an unknown length and
will download properly.

Fixes #2247
2018-04-16 19:40:02 +01:00
Nick Craig-Wood
2b7957cc74 vfs: Only make the VFS cache if --vfs-cache-mode > Off
This stops the cache cleaner running unnecessarily and saves
resources.

This also helps with issue #2227 which was caused by a second mount
deleting objects in the first mounts cache.
2018-04-16 17:06:41 +01:00
Nick Craig-Wood
3d5106e52b drive: fix DirMove leaving a hardlinked directory behind #2245
This bug was introduced by the v3 API conversion in 07f20dd1fd.

The problem was that dircache.FindPath doesn't work for the root directory.

This adds an internal error for dircache.FindPath being called with
the root directory.  This makes a failing test, which the fix to the
drive backend fixes.

This also improves the DirCache integration test.
2018-04-15 10:12:21 +01:00
Nick Craig-Wood
29ce1c2747 fstest: fix CheckListingWithPrecision with non Windows safe chars
* Factor WinPath from fstest to fstests
  * Use it to normalize the directory names while checking them
2018-04-15 10:12:20 +01:00
Nick Craig-Wood
dc247d21ff s3: add in config for all the supported S3 providers #2140
These are AWS, Ceph, Dreamhost, IBM COS S3, Minio, Wasabi and Other.

This configures endpoints where known and, where possible, makes sure
config options don't appear where they aren't valid.
2018-04-13 16:33:26 +01:00
Nick Craig-Wood
8c3740c2c5 config: Improve the Provider matching to have a negated match #2140
This makes it easier to make classes of provider in the config.
2018-04-13 16:06:37 +01:00
Giri Badanahatti
acd5d4377e config,s3: hierarchical configuration support #2140
This introduces a method of making provider specific configuration
within a remote.  This is particularly useful in s3.

This commit does the basic configuration in S3 for IBM COS.
2018-04-13 16:05:35 +01:00
Matthew Holt
9e4cd55477 size: Add --json flag 2018-04-13 13:38:06 +01:00
Nick Craig-Wood
2015f98f0c Add Craig Rachel to contributors 2018-04-13 13:36:46 +01:00
Craig Rachel
0e6faa2313 s3: add One Zone Infrequent Access storage class - fixes #2240 2018-04-13 13:36:25 +01:00
Nick Craig-Wood
905e40b3e6 Add Peter Baumgartner to contributors 2018-04-13 13:33:22 +01:00
Peter Baumgartner
1db68571fd s3,swift: Add --use-server-modtime
`--use-server-modtime` stops s3 and swift retrieving the modtime from metadata which enables a fast sync mode with the `--update` flag.
2018-04-13 13:32:17 +01:00
Nick Craig-Wood
6b67489133 Add Animosity022 to contributors 2018-04-13 13:26:41 +01:00
Animosity022
27dfcf303c cache: improve docs
This documents that the cache-chunk-path needs to be cleared manually if chunk-size is changed.
2018-04-13 13:26:26 +01:00
Nick Craig-Wood
e6d9720d7b Add Mateusz Piotrowski to contributors 2018-04-13 13:25:16 +01:00
Mateusz Piotrowski
196da4d903 dropbox: fix a typo in the docs 2018-04-13 13:24:58 +01:00
Nick Craig-Wood
18317a2747 vendor: update github.com/pkg/sftp because dep insisted 2018-04-13 13:23:55 +01:00
Nick Craig-Wood
ef412c1985 drive: fix misplaced log in dedupe MergeDirs 2018-04-13 13:23:55 +01:00
Nick Craig-Wood
d97fe3b824 fs/operations: make dedupe work with mega
* factor into its own files
  * remove assumptions about having a given hash type
  * make tests work if the remote has no hash
2018-04-13 13:23:55 +01:00
Nick Craig-Wood
792c9e185e Add Antoine GIRARD to contributors 2018-04-13 13:23:55 +01:00
Antoine GIRARD
1f681e585b fstests: fix typo 2018-04-13 13:23:08 +01:00
Nick Craig-Wood
e82452ce9a drive: check Open calls for google error messages
This should also enable Open calls to retry properly
2018-04-11 20:55:58 +01:00
Nick Craig-Wood
dcf8334673 fs: add --dump goroutines and --dump openfiles
These are developer flags useful for tracking down resource leaks.
2018-04-11 20:55:58 +01:00
Nick Craig-Wood
37be78705d fs/fshttp: limit MaxIdleConns and MaxIdleConnsPerHost
Before this change mega (which uses a different host per download)
would open too many sockets.
2018-04-11 20:51:28 +01:00
Nick Craig-Wood
4b5ff33125 fstest: retry cleaning the integration test directory if necessary 2018-04-11 20:51:13 +01:00
Nick Craig-Wood
d5b2ec32f1 local: add --local-no-check-updated to disable update checks #2206
This disables the `can't copy - source file is being updated` checks.
2018-04-09 15:27:58 +01:00
Nick Craig-Wood
aeedacfb50 Add Michael P. Dubner to contributors 2018-04-09 13:33:27 +01:00
Michael P. Dubner
92b266d361 rc: new call rc/pid - closes #2211 2018-04-09 13:33:04 +01:00
Nick Craig-Wood
05e32cfcf9 dropbox: Fix crypt+obfuscate on dropbox - fixes #2191
Before this change we lowercased the dropbox root directory.  This was
likely a leftover from when we used to build a dictionary to translate
the cases of dropbox files.  Now with the v2 API we can rely on
dropbox to do that for us, so we no longer need to lowercase the root.

This fixes issues using crypt with name obfuscation on dropbox.
2018-04-09 11:53:41 +01:00
Nick Craig-Wood
cbec59146a lsf: make sure we use localtime in tests - fixes Box integration tests
This problem was introduced with eca99b33c0.  It seems Box is the only
remote which converts time zones, so if you give it a GMT time zone,
it returns a PST time zone which represents the same instant.
2018-04-09 11:46:49 +01:00
Nick Craig-Wood
06e3fa3aba mounttest: reduce duplicated code and improve test output #2154
The written out list of tests was replaced with a nested test for
mount and cmount. The tests for each VFS cache mode were also replaced
with nested tests which makes the output and the code much cleaner.
2018-04-08 15:04:14 +01:00
Nick Craig-Wood
0fa700b3cf Make integration tests use go1.7+ nested tests #2154
* Removed generated code and code generator
  * Updated docs on how to write integration tests
  * Tidied up the actual integration tests
2018-04-08 15:04:14 +01:00
Nick Craig-Wood
42f0963bf9 local: retry remove on Windows sharing violation error #2202
Before this change asynchronous closes in cmount could cause sharing
violations under Windows on Remove which manifest themselves
frequently as test failures.

This change lets the Remove be retried on a sharing violation under
Windows.
2018-04-07 17:36:26 +01:00
Nick Craig-Wood
be54fd8f70 Remove builds conditional on go1.7 since that is now guaranteed #2154
Old fallback code was deleted and the go1.7 style code inlined where
appropriate.
2018-04-07 11:42:55 +01:00
Nick Craig-Wood
e5be471ce0 Use io.SeekStart/End/Current constants now for go1.7+ #2154 2018-04-07 11:42:36 +01:00
Nick Craig-Wood
80588a5a6b Replace "golang.org/x/net/context" with "context" for go1.7+ #2154 2018-04-07 11:42:08 +01:00
Nick Craig-Wood
67023f0040 Require go1.7 for compilation #2154
* Update the travis tests to exclude go1.6
  * Update the compile check to require go1.7+
  * Update misc go1.6 workarounds marked in the source
2018-04-06 20:18:14 +01:00
Nick Craig-Wood
32e02bd367 fstests: Fix TestObjectRemove failures
This was failing because TestPublicLink was causing the file to be
modified with Google drive.
2018-04-06 16:27:19 +01:00
Nick Craig-Wood
c749cf8d99 dropbox: fix repeatedly uploading the same files - fixes #2218
In #2134 and dfd0f4c5a4 some testing
changes got committed by accident which caused this regression.

This patch reverts it to how it was before.
2018-04-06 15:34:56 +01:00
Nick Craig-Wood
92cfb57fbd fstest/test_all: make -clean work better with google cloud storage 2018-04-06 14:54:33 +01:00
Nick Craig-Wood
0cb5c4aa73 gcs: detect bucket presence by listing it - fixes #2193
Doing it like this enables the use of a service account that only has
the "Storage Object Admin" role.
2018-04-06 12:45:15 +01:00
Nick Craig-Wood
0358e9e724 Add Eri Bastos to contributors 2018-04-05 20:20:53 +01:00
Eri Bastos
a69d8ec93b Fixed typo on ownCloud description 2018-04-05 20:20:31 +01:00
Nick Craig-Wood
92c5aa3786 s3: add --s3-chunk-size option - fixes #2203 2018-04-05 15:40:08 +01:00
Nick Craig-Wood
fbe1c7f1ea dropbox: remove unused code 2018-04-05 15:23:23 +01:00
Nick Craig-Wood
c4531daa43 local: work on spurious "can't copy - source file is being updated" errors #2206
Update all the time comparisons to use time.Time.Equal instead of ==

Improve the logging for that error so we can see exactly what has changed
2018-04-05 14:57:30 +01:00
remusb
6e11a25df5 cache: flush the memory cache after close 2018-04-04 23:25:53 +03:00
Nick Craig-Wood
0865e38917 Add Matt Holt to contributors 2018-04-04 14:56:50 +01:00
Nick Craig-Wood
ab2fa59fc4 Add Alexander Neumann to contributors 2018-04-04 14:56:50 +01:00
Matt Holt
e13f65b953 serve restic: Print actual listener address 2018-04-04 14:56:26 +01:00
Alexander Neumann
5b8977a053 serve restic: Disallow overwriting files in append-only mode - Fixes #2195
* Disallow overwriting files in append-only mode
* Add tests for append-only mode
2018-04-04 14:49:13 +01:00
remusb
1dea99ab20 cache: purge file data on notification 2018-04-03 23:24:45 +03:00
Nick Craig-Wood
06a8d3011d Add Chih-Hsuan Yen to contributors 2018-04-02 11:43:22 +01:00
Chih-Hsuan Yen
e7fd607078 Fix make tarball 2018-04-02 11:42:53 +01:00
Nick Craig-Wood
eca99b33c0 lsd,lsf: make sure all times we output are in local time - fixes #2183
Previous to this change times from lsd/lsf were output in whatever
timezone they were in whereas times from lsl were converted to
localtime.
2018-04-01 15:40:04 +01:00
remusb
e42cee5e02 cache: always forget parent dir for notifications - for #2117 2018-03-31 12:44:09 +03:00
Nick Craig-Wood
d45c750f76 Add Steve Kriss to contributors 2018-03-30 19:55:49 +01:00
Steve Kriss
2c2bb0f750 cmd/serve/restic: add append-only mode 2018-03-30 19:54:52 +01:00
Stefan
a8267d1628 link: allow creating public link to files and folders - closes #1562 2018-03-29 09:10:19 +02:00
Nick Craig-Wood
9df266a6b4 onedrive: Fix socket leak in multipart session upload
This had gone unnoticed until recently when we changed to uploading
all files with a multipart session.
2018-03-28 21:03:19 +01:00
Stefan Breunig
4d553ef701 drive: when initialized with a filepath, optional features used incorrect root path – see #2182 2018-03-28 20:33:39 +02:00
Nick Craig-Wood
1ba3ffdc59 Add Keith Goldfarb to contributors 2018-03-26 21:03:18 +01:00
Nick Craig-Wood
72f1b097a7 Add gbadanahatti to contributors 2018-03-26 21:03:18 +01:00
Nick Craig-Wood
885044d0a5 Add seuffert to contributors 2018-03-26 21:03:18 +01:00
Keith Goldfarb
6c10312c75 ncdu: added a "refresh" key - for #2174
Added Control+L key to refresh screen. Not sure if this is the
best choice, but it appears to be somewhat common.
2018-03-26 21:02:39 +01:00
gbadanahatti
e5aa5fe7d8 s3: docs: Minor format and URL changes to IBM COS Documentation content 2018-03-26 20:49:53 +01:00
Nick Craig-Wood
9b140b42c9 docs: fix current download link 2018-03-26 17:45:45 +01:00
Nick Craig-Wood
0bfbde8856 fstest: make ChangeNotify test clean up after itself and be more reliable
Previous to this fix old notifications could creep in and cause the
test to fail.  It also left files around which upset the TestObjectRemove test.

Fixes #2177
2018-03-24 19:57:44 +00:00
Nick Craig-Wood
98a924602f mount, cmount: set --attr-timeout default to 1s - fixes #2157
This works around these 3 problems:

  * rclone using too much memory #2157
  * rclone not serving files to samba
    * https://forum.rclone.org/t/rclone-1-39-vs-1-40-mount-issue/5112
  * excessive time listing directories #2095
2018-03-23 22:42:51 +00:00
Nick Craig-Wood
7e80e609e8 docs: install.sh add macOS fallback for mktemp - fixes #2173 2018-03-23 22:24:28 +00:00
Mateusz Pabian
91b068ad3a sync: implement --ignore-errors - fixes #642 2018-03-23 22:01:10 +00:00
remusb
b52e34ef5e cache: add info log on notification - for #2150 2018-03-23 22:41:01 +02:00
Nick Craig-Wood
32e6eee341 release: add another step to update the release dependencies #2172 2018-03-23 12:43:18 +00:00
Nick Craig-Wood
c5f1d501ed docs: fix download links for .deb and .rpm 2018-03-23 12:43:18 +00:00
remusb
0ed0d9a7bc cache: integrate with Plex websocket 2018-03-22 21:21:03 +02:00
seuffert
d9c13bff83 add rc cache/stats 2018-03-22 21:16:16 +02:00
Daniel Loader
ce91289b09 docs: tweak rc cache documentation to give an example 2018-03-22 15:10:34 +00:00
Nick Craig-Wood
5ba5be9b37 gcs: ignore zero length directory markers at the root too 2018-03-21 20:10:00 +00:00
Nick Craig-Wood
e9a2cbec37 s3: ignore zero length directory markers at the root too 2018-03-21 20:09:37 +00:00
Nick Craig-Wood
4f6f07c074 cmount: fix error handling for Open/OpenDir 2018-03-21 19:44:30 +00:00
Nick Craig-Wood
f6020f1308 gcs: ignore zero length directory markers 2018-03-19 17:42:27 +00:00
Nick Craig-Wood
a46f2a9eb7 s3: ignore zero length directory markers - fixes #1621 2018-03-19 17:41:46 +00:00
Nick Craig-Wood
911a78ce6d sftp: require go1.8+ after github.com/pkg/sftp update 2018-03-19 16:37:40 +00:00
Nick Craig-Wood
d64789528d vendor: update all dependencies 2018-03-19 15:51:38 +00:00
Nick Craig-Wood
940df88eb2 Start v1.40-DEV development 2018-03-19 14:20:48 +00:00
Nick Craig-Wood
19ca9fb939 release: Put the releases into a v1.XX subdirectory 2018-03-19 14:20:09 +00:00
Nick Craig-Wood
26f1c55987 Version v1.40 2018-03-19 10:06:13 +00:00
Nick Craig-Wood
1afac32d80 serve restic: script for running integration test against all remotes 2018-03-18 19:15:39 +00:00
Nick Craig-Wood
26fbd00b4f serve restic: don't buffer the JSON output in memory for the list command 2018-03-18 16:26:58 +00:00
Nick Craig-Wood
1313b529ff serve restic: use ListR (--fast-list) if available
For Restic's use case, --fast-list will use fewer transactions and
calling ListR directly means we can avoid the usual memory overhead.
2018-03-18 16:22:05 +00:00
Nick Craig-Wood
82e835d6fc serve restic: make it easy to run integration tests against any remote
Just `cd cmd/serve/restic` then `go test -v -remote TestRemote:`
2018-03-18 14:23:56 +00:00
Nick Craig-Wood
fa867a9a4c serve restic: implement accounting for uploads and downloads
This means the bandwidth stats will be correct and the bandwidth
throttling will work correctly.  This was forgotten as a previous
iteration of the code was using the higher level operations.Rcat which
took care of this.
2018-03-18 14:19:43 +00:00
Nick Craig-Wood
38d9475a34 release: include a source tarball and sign everything #1449 2018-03-17 15:06:04 +00:00
Nick Craig-Wood
c21c7e75b0 Add Stefan Lindblom to contributors 2018-03-17 12:12:23 +00:00
Stefan Lindblom
c8d095612a drive: Document process for service account and impersonation 2018-03-17 12:11:48 +00:00
Nick Craig-Wood
012d4a1235 docs: fix download icon 2018-03-17 12:00:14 +00:00
Nick Craig-Wood
854d3c3025 Add Dave Pedu to contributors 2018-03-17 12:00:14 +00:00
Dave Pedu
5bedc4c668 crypt: fix path in docs 2018-03-17 11:59:25 +00:00
Stefan
86892467d9 config: load config file only on first access (closes #1659, closes #2096) (#2147) 2018-03-17 12:36:30 +01:00
Nick Craig-Wood
e62fe06763 s3: document --ignore-checksum workaround for KMS #1824 2018-03-17 10:51:45 +00:00
Nick Craig-Wood
4295428a0f fs/accounting: add remote control of bwlimit 2018-03-17 10:34:02 +00:00
Nick Craig-Wood
2db0c4dd95 vfs: add remote control for directory cache flushing 2018-03-17 10:34:02 +00:00
Nick Craig-Wood
5bf639048f sync: log an error that --track-renames doesn't work with sync or move
Fixes #2008
2018-03-17 10:34:02 +00:00
remusb
4924ac2f17 cache: reduce log level for plex api - for #2102 2018-03-17 11:57:36 +02:00
Nick Craig-Wood
d4cca8d9f9 onedrive: fix upload of zero length files #1716
Unfortunately multipart upload can't upload zero length files, so
bring back the single part upload for zero length files only.

This was broken when we made all uploads multipart uploads.
2018-03-17 09:48:28 +00:00
Nick Craig-Wood
a9e386b153 Add wolfv to contributors 2018-03-17 09:06:51 +00:00
wolfv
117238211b docs: Change log levels to all caps - fixes #2101 2018-03-17 09:06:51 +00:00
Oliver Heyme
645cf5ec0f onedrive: fix wrong upload endpoint and createDate #1716
This fixes the problem introduced by 7f744033d8
2018-03-16 19:18:51 +00:00
Nick Craig-Wood
d1bb8efb88 sftp: follow symlinks correctly - fixes #2145
The sftp library delivers the attributes of the symlink rather than
the object pointed to in directory listings; however, when we use Stat
from the library it points to the object.

Previous to this fix this caused items pointed to by symlinks to be
unusable.

After the fix both symlinked files and directories work as expected.
2018-03-16 15:36:47 +00:00
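To make the Lstat/Stat distinction above concrete, here is a small standalone Go sketch using the local os package (not the sftp library itself): Lstat reports the link, as a directory listing does, while Stat follows it to the target.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "symlink-demo")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	target := filepath.Join(dir, "target.txt")
	os.WriteFile(target, []byte("hello"), 0o644)

	link := filepath.Join(dir, "link.txt")
	if err := os.Symlink(target, link); err != nil {
		fmt.Println("symlinks not supported here:", err)
		return
	}

	// Lstat describes the link itself (what a directory listing returns);
	// Stat follows the link and describes the target.
	li, _ := os.Lstat(link)
	si, _ := os.Stat(link)
	fmt.Println("Lstat: symlink =", li.Mode()&os.ModeSymlink != 0, "size =", li.Size())
	fmt.Println("Stat:  symlink =", si.Mode()&os.ModeSymlink != 0, "size =", si.Size())
}
```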
Nick Craig-Wood
c19e675ca6 vfs: unify locking for RWFileHandle.openPending,.close and File.Delete #2141
Without this fix the cached file can be removed as the file is being
uploaded or downloaded.  This can cause the directory listings to
become inconsistent (this issue) or data loss (if a retry was needed
in the Copy).

Remove file needs to be excluded from running at the same time as both
openPending and close so it makes sense to unify the locking between
all 3.
2018-03-15 20:49:07 +00:00
Nick Craig-Wood
34c45a7c04 mount, cmount: remove addition of O_CREATE to flags on file open #2141
Previously this was adding it in to all file opens which was causing
inefficiencies under Windows where it stats the file using
open/fstat/close.

This change will make stat operations run much quicker under Windows
as they won't have to open the underlying file.

This problem was introduced in 61b6159a05336bd7ba105766de2d2ff171f7fb81
where we added O_CREATE to all file opens and creates.
2018-03-15 20:48:56 +00:00
Nick Craig-Wood
0a0318df20 Add Leo R. Lundgren to contributors 2018-03-15 20:24:42 +00:00
Leo R. Lundgren
04e055fc06 sftp: Add --sftp-ask-password flag to prompt for password when needed - #2137 2018-03-15 20:24:30 +00:00
Nick Craig-Wood
d551137635 Add Giri Badanahatti to contributors 2018-03-15 20:21:12 +00:00
Giri Badanahatti
aba43cd3a4 Documentation for IBM COS (S3) configuration. 2018-03-15 20:20:43 +00:00
Oliver Heyme
7f744033d8 onedrive: Removed upload cutoff and always do session uploads
Set modtime on copy

Added versioning issue to OneDrive documentation
2018-03-15 20:18:11 +00:00
remusb
078d705dbe cache: notify vfs and support crypt in rpc - #2111 2018-03-15 11:39:16 +02:00
Nick Craig-Wood
5981f9fab5 acd: disable integration tests
We no longer have any working keys for Amazon Cloud Drive so disable
the integration tests.
2018-03-14 22:44:46 +00:00
Alexander Neumann
84776c4e43 serve/restic: Remove log message on Close 2018-03-14 21:50:33 +00:00
Nick Craig-Wood
c1a3e363a6 mount: return ENOSYS rather than EIO on attempted link
This fixes FileZilla accessing an rclone mount served over sftp.

See: https://forum.rclone.org/t/moving-files-on-rclone-mount-with-filezilla/5029
2018-03-14 21:10:20 +00:00
Nick Craig-Wood
7ccc6080b0 serve restic: add more info to GET request error 2018-03-14 21:09:47 +00:00
remusb
677971643c cache: add support for rc 2018-03-14 22:58:20 +02:00
remusb
f4a1c1163c rc: update doc with supported params 2018-03-14 22:58:20 +02:00
remusb
97b48cf988 rc: add support for Go 1.6 2018-03-14 22:58:20 +02:00
Nick Craig-Wood
86e5a35491 Implement Remote Control for rclone #2111
This implements a remote control protocol activated with the --rc flag
and a new command `rclone rc` to use that interface.

Still to do
  * docs - need finishing
  * tests
2018-03-14 22:58:20 +02:00
Nick Craig-Wood
8bb2854fe4 httplib: allow the flags to be prefixed when instantiating a server 2018-03-14 22:58:20 +02:00
Remus Bunduc
d76da1f5fd cache: fix dir cache issue - #2117 2018-03-14 11:08:30 +02:00
Nick Craig-Wood
89748feaa5 s3: update docs to discourage use of v2 auth - fixes #2120
From testing it appears that CEPH no longer works properly with v2
auth and neither does Dreamhost, so update the docs and configuration
to recommend v4 auth.
2018-03-13 20:47:29 +00:00
Nick Craig-Wood
dfd0f4c5a4 sync: when using --backup-dir don't delete files if we can't set their modtime
This is a problem when syncing a file which just needed its modtime
set with dropbox which can't set the mod time of a file without
re-uploading it.

Before this change we would delete the file, then the server side move
would fail moving the file to the backup-dir because it no longer
existed.

After this change the destination file is moved to the backup-dir
instead of being deleted and the new file is uploaded.

Fixes #2134
2018-03-13 16:05:06 +00:00
Nick Craig-Wood
0c9dc006c5 fs: make display of default values of --min-age/--max-age be off - Fixes #2121 2018-03-13 09:06:07 +00:00
Nick Craig-Wood
4e90ad04d5 serve restic: only accept v2 API requests for list 2018-03-11 17:35:01 +00:00
Nick Craig-Wood
43c7ea81df Add Alexander Neumann to contributors 2018-03-11 17:35:01 +00:00
Alexander Neumann
fa003e89b6 serve restic: When listing return empty list instead of 'null' 2018-03-11 14:48:46 +00:00
Alexander Neumann
5114b11d6f serve restic: add http2 server on stdin/stdout 2018-03-11 14:48:46 +00:00
Alexander Neumann
f832433fa5 serve restic: Return empty list for non-existing dirs 2018-03-11 14:48:43 +00:00
Nick Craig-Wood
d073efdc6c serve restic: serves a remote in restic REST API format 2018-03-11 14:43:03 +00:00
Nick Craig-Wood
9e48748182 httplib: Note that authentication is a good idea for non localhost 2018-03-11 14:38:54 +00:00
Nick Craig-Wood
b6058e0106 docs/install.sh: don't create root owned .config/rclone directory #2127 2018-03-10 11:09:13 +00:00
Nick Craig-Wood
66c69fe620 mount: wait longer for consistency after rm in tests 2018-03-09 23:15:38 +00:00
Nick Craig-Wood
a2336ad774 vfs: fix deadlock in mount tests
This was caused by this sequence of calls

1> file.Release
1> file.close  -> takes the file lock
2> vfs.waitforWriters
2> dir.walk -> takes the dir lock
1> file.setObject
1> dir.addObject -> attempts to take the dir lock - BLOCKS
2> file.activeWriters -> tries to take file lock - BLOCKS - DEADLOCK

The fix is to make activeWriters not take the file lock and use atomic
operations to read the number of writers instead.
2018-03-09 23:15:38 +00:00
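A minimal sketch of that pattern, assuming made-up names rather than rclone's actual VFS types: the writer count lives in an atomic counter so it can be read without taking the file lock that caused the deadlock above.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// file is a stand-in for a VFS file node: the mutex guards the rest of
// the state, but the writer count is kept in an atomic counter so it can
// be read without acquiring the lock.
type file struct {
	mu       sync.Mutex
	nwriters int32 // accessed only via atomic operations
}

func (f *file) addWriter()    { atomic.AddInt32(&f.nwriters, 1) }
func (f *file) removeWriter() { atomic.AddInt32(&f.nwriters, -1) }

// activeWriters can safely be called while another goroutine holds f.mu.
func (f *file) activeWriters() int {
	return int(atomic.LoadInt32(&f.nwriters))
}

func main() {
	f := &file{}
	f.addWriter()
	fmt.Println("active writers:", f.activeWriters()) // 1
	f.removeWriter()
	fmt.Println("active writers:", f.activeWriters()) // 0
}
```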
Nick Craig-Wood
7713acf23d mount: skip failing test TestFileModTimeWithOpenWriters on Windows 2018-03-09 23:15:38 +00:00
Nick Craig-Wood
473a388f6d mount: disable failing test TestWriteFileDoubleClose on OSX 2018-03-09 23:15:37 +00:00
Nick Craig-Wood
c8a4d437a0 Make travis test mount and cmount - fixes #2100
Previously FUSE wasn't found in the container so these tests weren't
run.  Move to VM based testing and install FUSE dependencies.
2018-03-09 23:15:37 +00:00
Nick Craig-Wood
09c14af6d1 cmd: Fix go routines at exit message to make it less confusing 2018-03-09 17:15:48 +00:00
Jakub Tasiemski
acae10cd6f lsjson: add --encrypted to show encrypted name #1765 2018-03-09 08:44:02 +00:00
Nick Craig-Wood
0861207ace fstest/test_all: set cache backend wait time to 30m to fix integration tests 2018-03-08 21:14:09 +00:00
Nick Craig-Wood
a7dbf32c53 cache: Implement --cache-db-wait-time flag
This can be used to make the cache wait for other running cache
backends to finish rather than erroring after 1 second.
2018-03-08 21:14:09 +00:00
Nick Craig-Wood
6025bb6ad1 local: fix race conditions updating the hashes
This was causing occasional test failures for the -race test of mount
and cmount.
2018-03-08 21:08:41 +00:00
Remus Bunduc
70f07fd3ac fs: add ChangeNotify and backend support for it (#2094)
* fs: rename DirChangeNotify to ChangeNotify

* cache: switch to ChangeNotify

* ChangeNotify: keep order of notifications
2018-03-08 22:03:34 +02:00
Nick Craig-Wood
b3f55d6bda vendor: Update github.com/Unknwon/goconfig to fix section listing
This fixes listing sections just after creation which means the rclone
config list will have all the keys in now.
2018-03-08 13:18:27 +00:00
Nick Craig-Wood
d9094f1a45 vendor: Gopkg.lock file format changes only after go dep update 2018-03-08 13:16:59 +00:00
Nick Craig-Wood
572ee5ec96 Sign the tags as part of the release process #1449 2018-03-07 15:18:13 +00:00
Nick Craig-Wood
316dac25c2 travis: add encrypted GITHUB_USER and GITHUB_TOKEN for using the API 2018-03-07 10:18:10 +00:00
Nick Craig-Wood
ee3c45676f bin/get-github-release.go: use GITHUB_USER/GITHUB_TOKEN when available
This should help with rate limiting problems when running under
travis.
2018-03-07 10:18:09 +00:00
Nick Craig-Wood
2e7e15461b bin/get-github-release.go: report body of HTTP responses with errors 2018-03-07 10:18:06 +00:00
Nick Craig-Wood
0175332987 vfs: fix applying modtime for an open Write Handle
The symptom of this was that the time set when the file was open was
lost.  This was causing one of the mount tests to fail too.
2018-03-06 21:58:11 +00:00
Nick Craig-Wood
85e0b87c99 build: add .deb and .rpm output for the build
This uses https://github.com/goreleaser/nfpm to create the .deb and
.rpm packages from the standard build output.
2018-03-06 12:37:44 +00:00
Nick Craig-Wood
d41017a277 A script to download and install the latest release of a github package 2018-03-06 12:37:44 +00:00
Nick Craig-Wood
fc32fee4ad mount, cmount: add --attr-timeout to control attribute caching in kernel
This flag allows the attribute caching in the kernel to be controlled.
The default is 0s - no caching - which is recommended for filesystems
which can change outside the control of the kernel.

Previously this was at the default meaning it was 60s for mount and 1s
for cmount.  This showed strange effects when files changed on the
remote not via the kernel.  For instance Caddy would serve corrupted
files for a while when serving from an rclone mount when a file
changed on the remote.
2018-03-04 11:20:22 +00:00
Nick Craig-Wood
5795bd7db6 vfs: update cached copy if we know it has changed even if pending opens
This fixes a problem with Caddy serving corrupted files out of the VFS
cache when the file on the remote changed.
2018-03-04 11:20:22 +00:00
Nick Craig-Wood
9b011ce7e4 vfs: keep track of number of open RWHandles 2018-03-04 11:20:22 +00:00
Nick Craig-Wood
5e334eedd2 vfs: re-use the File objects when re-reading the directory
Make it so that d.items is never nil to simplify the code

This should help with inconsistent reads when the source object changes.
2018-03-04 11:20:22 +00:00
Nick Craig-Wood
7fb53a031c vfs: don't cache the object in read and read/write handles
This should help with inconsistent reads when the source object changes.
2018-03-04 11:20:22 +00:00
ishuah
ebfeec9fb4 mount: run rclone mount in the background - fixes #723 2018-03-04 14:06:07 +03:00
ishuah
90af7af9a3 added dependency github.com/sevlyar/go-daemon 2018-03-04 14:06:07 +03:00
Nick Craig-Wood
fe8eeec5b5 cache: improve efficiency with RangeOption and RangeSeek #1825
* All remotes now support RangeOption so remove SeekOption
  * Correct off by one error as RangeOption arguments are inclusive.
  * Use RangeSeek in preference to Seek if available
2018-03-02 17:10:56 +00:00
Nick Craig-Wood
e0eb666dbf fs/walk: fix new golint warning about unused variables in range 2018-03-02 17:01:58 +00:00
Nick Craig-Wood
7d4da1c66a local: fix crash on Stat error while reading a file 2018-03-01 13:17:40 +00:00
Nick Craig-Wood
f3e982d3bf azureblob,b2,gcs,qingstor,s3,swift: Don't check for bucket/container presence if listing was OK
In a typical rclone copy to a bucket/container based remote, before
this change we were doing a list, followed by a HEAD of the bucket to
check it existed before doing the copy.  The fact the list succeeded
means the bucket exists so mark it OK at that point.

Issue #1421
2018-03-01 12:11:34 +00:00
Nick Craig-Wood
3f9d0d3baf docs: improve --files-from documentation 2018-03-01 09:59:50 +00:00
Nick Craig-Wood
e9fd2250eb Make titles smaller in issue template 2018-02-28 22:05:49 +00:00
Nick Craig-Wood
769aa860f2 Rewrite greeting message for issue template inside HTML quoting 2018-02-28 21:58:41 +00:00
Nick Craig-Wood
fdebf9da31 local: Downgrade "invalid cross-device link: trying copy" to debug - Fixes #1875 2018-02-28 21:27:34 +00:00
Nick Craig-Wood
77f344a69d pacer: attempt to fix occasional "beginSleep didn't fire" test failures 2018-02-27 11:06:59 +00:00
Nick Craig-Wood
62540b4007 docs: clarify beta docs and add link to tip.rclone.org 2018-02-27 10:58:48 +00:00
Fabian Möller
21faac6e6c Add David0rk to contributors 2018-02-27 10:06:56 +01:00
Fabian Möller
167a4396c7 drive: remove debug binary 2018-02-27 09:59:06 +01:00
David0rk
1585aa61c1 docs: update install.sh shebang (#2097)
change shebang to bash to avoid syntax errors
2018-02-27 09:32:01 +01:00
Nick Craig-Wood
b91bd32489 vfs: Fix TestWriteFileDoubleClose with --vfs-cache-mode >= writes
This was causing the file to be closed on Flush() instead of Release()
when the file was opened with O_TRUNC.
2018-02-26 21:26:32 +00:00
Nick Craig-Wood
c3d0f68923 vfs: fix truncation work-around on Windows
This no longer needs to deal with O_RDONLY and O_TRUNC since we
disallow this earlier.  This also fixes the code to just do it for
O_APPEND, not for everything.
2018-02-26 19:46:38 +00:00
Nick Craig-Wood
f57e92b9a5 vfs: fix creation of files when truncating #2083
As spotted by @B4dM4n
2018-02-26 19:37:58 +00:00
Nick Craig-Wood
baf9ee5cf7 vfs: update cached copy if we know it has changed before using it
Before this change we would have to wait for the --vfs-cache-max-age
to expire before getting an update.
2018-02-26 18:00:51 +00:00
Nick Craig-Wood
354f1ad722 vfs: Use operations.Copy instead of CopyFile for efficiency 2018-02-26 17:54:18 +00:00
Nick Craig-Wood
54deb01f00 vfs: Make OpenFile and friends return EINVAL if O_RDONLY and O_TRUNC
Before this change Open("name", os.O_RDONLY|os.O_TRUNC) would have
truncated the file.  This is what Linux does, but is counterintuitive.
POSIX states this is undefined, so return an error in this case
instead.  This preserves the invariant O_RDONLY => file is not
changed.
2018-02-26 17:04:27 +00:00
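A hedged sketch of the flag check described above (the helper name is invented, this is not the VFS code itself): a read-only open combined with O_TRUNC is rejected with EINVAL rather than silently truncating the file.

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// checkOpenFlags mirrors the rule described above: opening read-only with
// O_TRUNC is undefined in POSIX, so treat it as an invalid argument rather
// than truncating the file. This preserves O_RDONLY => file is not changed.
func checkOpenFlags(flags int) error {
	readOnly := flags&os.O_WRONLY == 0 && flags&os.O_RDWR == 0
	if readOnly && flags&os.O_TRUNC != 0 {
		return syscall.EINVAL
	}
	return nil
}

func main() {
	fmt.Println(checkOpenFlags(os.O_RDONLY | os.O_TRUNC)) // invalid argument
	fmt.Println(checkOpenFlags(os.O_RDWR | os.O_TRUNC))   // <nil>
}
```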
Nick Craig-Wood
3282fd26af vfs: clean path names before using them in the cache
This avoids inconsistent cache behaviour on open("potato/")
close("potato").

The tests were also adjusted to make them more comprehensive.
2018-02-26 16:59:14 +00:00
Nick Craig-Wood
88d830c7b7 vfs: create cache.opens and use it in place of cache.get to avoid potential race 2018-02-26 16:58:02 +00:00
Nick Craig-Wood
724120d2f3 local: make DirMove return fs.ErrorCantDirMove to allow fallback
Before this change `rclone move localdir /mnt/different-fs` would
error.  Now it falls back to moving individual files, which in turn
falls back to copying individual files across the filesystem boundary.
2018-02-26 12:55:05 +00:00
Nick Craig-Wood
25bbc5d22b drive: make --drive-auth-owner-only look in all directories
Previously it was ignoring directories which weren't owned by the user
which meant it was ignoring files owned by the user in those
directories.
2018-02-26 12:30:59 +00:00
Fabian Möller
00adf40f9f cryptdecode: use Cipher instead of NewFs (#2087)
* crypt: extract NewCipher out of NewFs
* cryptdecode: make use of crypt.NewCipher

Fixes #2075
2018-02-25 12:57:14 +01:00
Fabian Möller
aeefa34f62 fstests: add TestInternal (#2085)
TestInternal allows to perform a custom test on the backend using the
optional InternalTester interface.
2018-02-25 10:58:06 +01:00
Nick Craig-Wood
9252224d82 vfs: don't open the file when using a RW handle for a null Seek
Background: cmd/mount/file.go Open() function does a Seek(0, 1) to see
if the file handle is seekable to set a FUSE hint.  Before this change
the file was downloaded before it needed to be which was inefficient
(and broke beta.rclone.org because HEAD requests caused downloads!).
2018-02-22 17:28:21 +00:00
Nick Craig-Wood
1383df4f58 b2: add more logging on multipart upload errors to debug #2036 2018-02-21 09:05:59 +00:00
Nick Craig-Wood
0ce81f68fe Make a beta release for all branches on the main repo (but not pull requests) 2018-02-20 16:06:39 +00:00
Nick Craig-Wood
20ca7d0e4f build: update to using go1.10 as the default go version
Note we have to put the version number in quotes to work around
https://github.com/travis-ci/gimme/issues/132
2018-02-20 13:41:16 +00:00
Nick Craig-Wood
4c3d42bcbb Add Daniel Loader to contributors 2018-02-20 13:04:14 +00:00
Nick Craig-Wood
2ef8de0843 Add Mateusz to contributors 2018-02-20 13:04:14 +00:00
Daniel Loader
a70200dd29 Add version output at end of the install.sh script 2018-02-20 13:03:50 +00:00
Nick Craig-Wood
c99412d11e cryptcheck: make reading of nonce more efficient with RangeOption #1825
...also only calculate the required hash which will speed things up slightly.
2018-02-19 18:00:39 +00:00
Nick Craig-Wood
abc736df1d cat: Use RangeOption for limited fetches to make more efficient #1825 2018-02-19 18:00:39 +00:00
Nick Craig-Wood
ab0d06eb16 fs: Make RangeOption mandatory #1825 2018-02-19 18:00:39 +00:00
Nick Craig-Wood
9ffc3898b1 fstests: Allow RangeOption tests to run everywhere #1825 2018-02-19 18:00:39 +00:00
Mateusz
afc963ed92 config: retry saving the config after failure - fixes #2060 2018-02-19 17:59:27 +00:00
Nick Craig-Wood
c929de9dc4 crypt: Implement RangeOption #1825 2018-02-19 15:45:24 +00:00
Fabian Möller
451cd6d971 fs: add ChunkedReader 2018-02-19 15:45:24 +00:00
Fabian Möller
a647c54888 fs: add RangeSeeker interface 2018-02-19 15:45:24 +00:00
Nick Craig-Wood
334bf49d30 httplib: add Close() method to shut the server down and use it in tests 2018-02-19 15:45:24 +00:00
Nick Craig-Wood
d8f78a7266 serve http/webdav: update docs on SSL 2018-02-19 14:08:17 +00:00
Fabian Möller
62e72801be vfs: fix race between multiple RWFileHandle (#2052)
Fixes #2034
2018-02-18 14:12:26 +01:00
Nick Craig-Wood
358c1fbac9 serve http/webdav: support SSL/TLS 2018-02-16 18:28:10 +00:00
Nick Craig-Wood
cc9d7156e4 serve http/webdav: add --user --pass authentication #1802 2018-02-16 18:28:10 +00:00
Nick Craig-Wood
221a8a9c5d serve http/webdav: add --htpasswd option for authentication #1802 2018-02-16 18:28:10 +00:00
Nick Craig-Wood
2b6f7028a6 vendor: github.com/abbot/go-http-auth for #1802 2018-02-16 18:28:09 +00:00
Nick Craig-Wood
5530662ccc serve http/webdav: factor common http server creation to httplib 2018-02-16 17:48:20 +00:00
Nick Craig-Wood
442334ba61 vfs: disable cache cleaner if --vfs-cache-poll-interval=0
And use this to disable the cleaner in the cache tests to make them
more reliable
2018-02-16 14:12:46 +00:00
Nick Craig-Wood
70b4842823 Add Victor to contributors 2018-02-16 13:21:50 +00:00
Victor
2f63a9f81c onedrive: Overwrite object size value with real size when reading file.
Because of a bug in the OneDrive API it will sometimes report the wrong
size. If the size is wrong other remotes that depend on the size might
fail. To fix this we overwrite the object's size with the real size
from the ContentLength header.
2018-02-16 13:21:26 +00:00
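A small self-contained illustration of that trick (a toy HTTP server stands in for the OneDrive download endpoint, and the variable names are invented): when the Content-Length of the actual download disagrees with the size from the listing, trust the header.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

func main() {
	// A server whose listing metadata (pretend: 10 bytes) disagrees with
	// the body it actually serves; Content-Length carries the real size.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("actual contents")) // 15 bytes
	}))
	defer srv.Close()

	reportedSize := int64(10) // what the listing API claimed

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Trust the Content-Length of the download over the listing metadata,
	// as the commit above describes for OneDrive.
	size := reportedSize
	if resp.ContentLength >= 0 && resp.ContentLength != reportedSize {
		size = resp.ContentLength
	}
	fmt.Println("using size:", size) // 15
}
```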
Nick Craig-Wood
8a9ed57951 vfs: fix another race in cache tests 2018-02-16 12:05:59 +00:00
Nick Craig-Wood
a5c3bcc9c7 fshttp: fix idle timeouts for HTTP connections #2057
Now we only nudge the idle timeout after a successful Read or Write
which returns some bytes and no errors.
2018-02-16 10:35:41 +00:00
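A rough sketch of the approach, with assumed type names rather than rclone's fshttp internals: wrap the net.Conn and only push the idle deadline forward when a Read or Write makes real progress.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// timeoutConn extends the connection deadline only after a successful Read
// or Write, so a connection that stops making progress eventually hits the
// idle timeout instead of being kept alive forever.
type timeoutConn struct {
	net.Conn
	timeout time.Duration
}

// nudge pushes the deadline forward by the idle timeout.
func (c *timeoutConn) nudge() {
	if c.timeout > 0 {
		_ = c.Conn.SetDeadline(time.Now().Add(c.timeout))
	}
}

func (c *timeoutConn) Read(p []byte) (n int, err error) {
	n, err = c.Conn.Read(p)
	if n > 0 && err == nil { // only reset the timer on real progress
		c.nudge()
	}
	return n, err
}

func (c *timeoutConn) Write(p []byte) (n int, err error) {
	n, err = c.Conn.Write(p)
	if n > 0 && err == nil {
		c.nudge()
	}
	return n, err
}

func main() {
	a, b := net.Pipe()
	defer a.Close()
	defer b.Close()

	conn := &timeoutConn{Conn: a, timeout: 5 * time.Second}
	go b.Write([]byte("ping"))

	buf := make([]byte, 4)
	n, err := conn.Read(buf) // a successful read nudges the deadline
	fmt.Println(n, err, string(buf[:n]))
}
```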
Nick Craig-Wood
9b800d7184 vfs: fix race in cache tests 2018-02-15 21:34:37 +00:00
Nick Craig-Wood
b1945d0094 swift: fix refresh of authentication token
Before this fix we were doing the token refresh but ignoring the new
tokens.

This bug was introduced in v1.39 by 4c0e2f9b3b

Fixes #2018
Fixes #2031
2018-02-15 19:22:45 +00:00
remusb
9a34fd984c cache: fix dirmove with temp fs enabled 2018-02-14 23:47:45 +02:00
Nick Craig-Wood
644313a4b9 http: Fix handling of directories with & in
This was caused by inconsistent escaping of the URL in the prefix
check, so check the URL links back to the correct host and scheme
instead of the prefix check.

The decoded path check will catch any URLs which are outside of the
root.
2018-02-14 11:26:37 +00:00
Nick Craig-Wood
675e7c5d8e docs: make downloads into a table
Add the scripted downloads to the download page
2018-02-13 11:23:11 +00:00
Nick Craig-Wood
99f3c8bc93 docs: turn version into a partial so it can be reused more easily 2018-02-13 11:20:23 +00:00
Nick Craig-Wood
ff6a7142da Add Durval Menezes to contributors 2018-02-12 11:47:01 +00:00
Durval Menezes
691c725e8b docs: Enhanced documentation for the --drive-shared-with-me option. 2018-02-12 11:46:29 +00:00
Nick Craig-Wood
ee388c4331 New email address for Oliver Heyme 2018-02-12 11:43:28 +00:00
Nick Craig-Wood
771fbbe314 docs: for --max-delete 2018-02-12 11:32:59 +00:00
Bjørn Erik Pedersen
ab8c0a81fa Add a delete threshold to sync (--max-delete)
Fixes #959
2018-02-12 11:29:58 +00:00
Nick Craig-Wood
cd7fd51119 vfs: fix docs - fixes #2067 2018-02-12 11:29:32 +00:00
Nick Craig-Wood
0f787e43b0 mount: link the nssm service manager for mount under Windows 2018-02-12 11:29:32 +00:00
Nick Craig-Wood
3a7bb7b2df mount: update docs showing --vfs-cache-mode to work around limitations 2018-02-12 11:29:32 +00:00
remusb
54724a1362 cache: notify vfs when using temp fs - fixes #2051 2018-02-11 22:30:58 +02:00
Stefan Breunig
846bbef1e9 vfs: write 0 bytes when flushing unwritten handles to avoid race conditions in FUSE - fixes #1181 2018-02-11 17:59:13 +00:00
remusb
b33e3f779c cache: add support for polling 2018-02-10 22:01:05 +02:00
Nick Craig-Wood
8a25ca786c drive: add --drive-impersonate for service accounts #1491 2018-02-09 16:58:35 +00:00
Nick Craig-Wood
04a0a7406b vfs: downgrade "poll-interval is not supported" message to Info
...to save confusion as it isn't very important
2018-02-09 07:57:50 +00:00
Oliver Heyme
9a653fea10 crypt: Changed max filename length documentation to 143 2018-02-06 18:26:58 +00:00
Fabian Möller
b183bd7f00 alias: add new backend to create aliases for remote names #1049
The alias backend is a wrapper for an existing remote.
It allows you to name a "remote:path" as an "alias:".
2018-02-06 18:23:47 +00:00
Nick Craig-Wood
5055b340da swift: Fix extra HEAD transaction when uploading a new file - fixes #2053
Also don't keep the swift.Headers as a pointer to a map, just use the map
2018-02-06 14:43:21 +00:00
Nick Craig-Wood
6546b7e0b0 vendor: update github.com/jlaffaye/ftp to fix FTP with online.net 2018-02-05 09:12:30 +00:00
Nick Craig-Wood
f4a5489d19 vendor: dep ensure changes 2018-02-05 09:10:45 +00:00
Nick Craig-Wood
82418c3021 box: improve accounting for chunked uploads 2018-02-02 15:14:41 +00:00
Nick Craig-Wood
bf6101cb6c azureblob: improve accounting for chunked uploads 2018-02-02 15:14:41 +00:00
Nick Craig-Wood
5723d2dbff pcloud: remove unused chunked upload flag and code 2018-02-02 15:14:41 +00:00
Nick Craig-Wood
d0d6b83a7a fs/accounting: rework to enable accounting to work with crypt and b2
This removes the old system of part accounting and replaces it with a
system of popping off the accounting reader and wrapping up new ones
as necessary.

This makes it much easier to carry the context down the chain of
wrapped readers and get the limiting as near as possible to the
output.  This makes the accounting more accurate and the bandwidth
limiting smoother.

Fixes #2029 and Fixes #1443
2018-02-02 15:14:41 +00:00
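A minimal sketch of the wrapping idea (invented names, not the fs/accounting package): the accounting is just an io.Reader wrapper, so it can be popped off and re-wrapped around whichever reader sits closest to the output.

```go
package main

import (
	"fmt"
	"io"
	"strings"
	"sync/atomic"
)

// countingReader is the simplest form of an accounting wrapper: it counts
// the bytes that actually flow through it, so re-wrapping it around the
// reader nearest the output keeps the stats and bandwidth limiting accurate.
type countingReader struct {
	in io.Reader
	n  int64
}

func (c *countingReader) Read(p []byte) (int, error) {
	n, err := c.in.Read(p)
	atomic.AddInt64(&c.n, int64(n))
	return n, err
}

func (c *countingReader) Bytes() int64 { return atomic.LoadInt64(&c.n) }

func main() {
	src := strings.NewReader("some data to account for")
	acc := &countingReader{in: src}
	// A wrapping backend (e.g. crypt) could wrap acc again; the same
	// accounting reader can be re-applied around the encrypted stream.
	io.Copy(io.Discard, acc)
	fmt.Println("accounted bytes:", acc.Bytes()) // 24
}
```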
Nick Craig-Wood
bea02fcf52 fs/accounting: factor into separate files without changing functionality 2018-02-02 15:14:40 +00:00
Nick Craig-Wood
8722403b0d Add nbuchanan to contributors 2018-02-02 14:24:42 +00:00
nbuchanan
9aa8815990 drive: add --drive-use-created-date to use created date as modified date 2018-02-02 14:20:11 +00:00
Nick Craig-Wood
6fb868e00c config: fix --log-level flag after code reorganization - fixes #2043 2018-02-02 14:07:44 +00:00
Nick Craig-Wood
2f746426e7 install.sh: use mv to overwrite an existing binary
This stops the install process erroring with "Text file busy" when
trying to `cp` over the binary.
2018-02-02 13:49:37 +00:00
ishuah
4c1ffc7f54 copy/move: detect file size change during copy/move - fixes #1250 2018-02-02 13:49:11 +00:00
Jakub Tasiemski
1018e9bb27 cmd: rewrite touch tests #1934 2018-02-02 13:46:56 +00:00
Nick Craig-Wood
295c3fabec vfs: fill and clean the cache immediately on startup 2018-02-02 12:19:53 +00:00
Nick Craig-Wood
3f8d286a75 vfs: fix cache cleaning on startup
Previous to this fix the vfs cache wasn't being cleaned properly on
startup as the atimes of the existing files were being ignored.
2018-02-02 12:06:42 +00:00
Nick Craig-Wood
fc8641809e fstests: add name of remote to WARN message 2018-02-02 12:05:34 +00:00
Nick Craig-Wood
de35f1c165 Show WARN in integration tests if remote not configured 2018-02-02 09:50:58 +00:00
Nick Craig-Wood
2974efc7d6 Makefile: disable caching in integration tests 2018-02-02 09:37:00 +00:00
Nick Craig-Wood
a6227f34e2 drive: request the export formats only when required #320
If the listing has no google docs in or the user uses
`--drive-skip-gdocs` then we don't fetch the export formats which
saves a transaction to drive.
2018-02-01 12:05:00 +00:00
Fabian Möller
3c7a755631 lsjson: explain the Path value in the docs 2018-01-31 20:06:01 +00:00
Nick Craig-Wood
8df78f2b6d operations: ignore size of objects when they are < 0 #320
This allows google docs to be transferred and checked correctly.
2018-01-31 16:22:05 +00:00
Nick Craig-Wood
44276db454 vfs: make -ve sized files appear as 0 size. #320
This means that Google docs will no longer appear as huge files in
`rclone mount`.  They will not be downloadable, though sometimes
trying twice will work.
2018-01-31 16:22:05 +00:00
Nick Craig-Wood
2eb5cfb7ad fs: Formalize the ObjectUnWrapper interface 2018-01-31 16:21:41 +00:00
remusb
b3d8b7e22e cache: use atexit for cleanup 2018-01-30 22:35:53 +02:00
Nick Craig-Wood
ed2d4ef4a2 travis: revert switch to using the .x version notation for the go minor versions
This doesn't seem to work for the `on` clause in the deploy script so
revert to the previous scheme.

Fixes #2033
2018-01-30 16:28:55 +00:00
Nick Craig-Wood
11fe3fdc16 drive: update docs to clarify access to "Computers" tab #1773 2018-01-30 16:28:55 +00:00
Fabian Möller
cf6d522d2f drive: fix upload to existing file (#2032)
This fixes uploads to existing files for Google Drive introduced by #2007.
Instead of updating the old file a new "Untitled" file would be created
in the root folder.
2018-01-30 14:37:06 +01:00
Fabian Möller
29d428040c cache: clean root path (#2023)
Trim "/" from the root path to fix "slice bounds out of range" panic
in cache.go:1272.

Fixes #1945
2018-01-30 14:35:40 +01:00
Fabian Möller
1aa482c333 drive: fix chunked upload (#2030) 2018-01-29 23:36:39 +01:00
remusb
40af98b0b3 cache: offline uploading 2018-01-30 00:05:04 +02:00
Nick Craig-Wood
c277a4096c mount: don't set modtime twice #2021 2018-01-29 20:49:13 +00:00
Nick Craig-Wood
1852a0e0c9 dropbox: Fix custom oauth client parameters - fixes #2028 2018-01-29 20:04:41 +00:00
Nick Craig-Wood
44cedbd9d9 Update MAINTAINERS with our new maintainer Fabian Möller @B4dM4n 2018-01-29 16:35:35 +00:00
Nick Craig-Wood
540e00e938 Merge Fabian Möller's email addresses 2018-01-29 16:33:56 +00:00
Nick Craig-Wood
a4fe2455ed drive: add scope configuration and root folder selection
This allows:

  * appdata access - Fixes #1799
  * access to backup and sync folders - Fixes #1773
  * drives.file access - Fixes #2000
  * read only access - Fixes #337
2018-01-29 14:40:10 +00:00
Fabian Möller
f622017539 drive: use contains for name matching in list
Use contains for name matching in list to work around #1675.
2018-01-29 14:18:49 +00:00
Fabian Möller
07f20dd1fd drive: migrate to api v3 2018-01-29 12:00:02 +00:00
Nick Craig-Wood
fe52502f19 fs: Adjust RangeOption.Decode to return -1 for read to end
A Range request can never request 0 bytes however this change was made
to make a clearer signal that the limit means read to the end.

Add test and more documentation and fixup uses
2018-01-27 14:31:29 +00:00
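The semantics are easier to see in a few lines of Go. This is a generic illustration of an inclusive byte range being decoded into an offset and a limit, with -1 as the "read to the end" signal; it is not the fs package's actual Decode method.

```go
package main

import "fmt"

// decodeRange converts an inclusive byte range (like HTTP "Range:
// bytes=start-end") for an object of the given size into an offset and a
// limit, where limit == -1 signals "read to the end of the object".
func decodeRange(start, end, size int64) (offset, limit int64) {
	switch {
	case start >= 0 && end < 0: // "start-"  => from start to the end
		return start, -1
	case start < 0 && end >= 0: // "-end"    => the last end bytes
		offset = size - end
		if offset < 0 {
			offset = 0
		}
		return offset, -1
	default: // "start-end", inclusive, so the limit is end-start+1
		return start, end - start + 1
	}
}

func main() {
	fmt.Println(decodeRange(10, 19, 100)) // 10 10  (ten bytes)
	fmt.Println(decodeRange(10, -1, 100)) // 10 -1  (read to end)
	fmt.Println(decodeRange(-1, 20, 100)) // 80 -1  (last 20 bytes)
}
```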
Nick Craig-Wood
9a73688e3a fs: Add ParseRangeOption to parse incoming Range: requests 2018-01-27 13:16:37 +00:00
Nick Craig-Wood
bc3ee977f4 fs/hash: move interface assertion to tests so it doesn't pull in spf13/flag 2018-01-26 14:35:18 +00:00
Nick Craig-Wood
a69fc8b80d travis: run tests on go1.10rc1 2018-01-26 12:16:46 +00:00
Nick Craig-Wood
926cd52a7f Makefile: make full tests run on go1.10+ as well as go1.9 2018-01-26 12:02:44 +00:00
Nick Craig-Wood
c2ce3114f4 Update CONTRIBUTING with more info about integration tests. 2018-01-26 10:00:16 +00:00
Fabian Möller
29286cc8b3 drive: fix single Drive Document as FS root
Allow using Drive Documents as FS root by doing a directory list during NewFS.

Fixes #1772
2018-01-26 09:59:36 +00:00
Fabian Möller
1f5e23aedb scripts: make absolute paths consistent
Change absolute binary paths in scripts to /usr/bin/env or make them
relative.
This allows the scripts to be used on linux distributions
like NixOS, where binaries are not located in /usr/ or /bin/.
2018-01-26 09:39:05 +00:00
Nick Craig-Wood
d016438243 fstest: Fix CheckWithDuplicates after code reshuffle to not use operations 2018-01-25 12:03:39 +00:00
Nick Craig-Wood
fa500e6d21 lib/atexit: factor from cmd so it can be used by backend/cache #1946 2018-01-25 10:33:00 +00:00
Nick Craig-Wood
dbabb18b0c vfs: Make error messages more informative #2009 2018-01-25 10:33:00 +00:00
Nick Craig-Wood
6f6f2aa369 fstest: Fix config file override, hence fixing make quicktest 2018-01-25 10:33:00 +00:00
Fabian Möller
17dabf7a99 ftp: fix RangeOption support in Open #1825 2018-01-25 10:21:00 +00:00
Fabian Möller
9520992a54 sftp: fix RangeOption support in Open #1825 2018-01-25 10:20:43 +00:00
Fabian Möller
a3dd2c691e amazonclouddrive: remove unnecessary notifies from DirChangeNotify
It is unnecessary to notify the node.Parents, because a change event is
generated for all involved files and folders in a move from d1/f1 to
d2/f1. There will be an event for d1, d2 and f1.

Additionally, a duplicate notification is resolved when the empty string
is in pathsToClear.

Related to #2006
2018-01-25 10:19:06 +00:00
Nick Craig-Wood
38f829842a s3: fix server side copy and set modtime on files with + in - fixes #2001
This was broken in 64ea94c1a4 when
a work-around for Digital Ocean was put in.  PathEscape has now been
adjusted so it works with both providers.
2018-01-23 10:50:50 +00:00
Nick Craig-Wood
f9806848fe fstest: use the difficult file name for server side copy #2001
This should detect re-occurrence of #315
2018-01-23 09:37:33 +00:00
Nick Craig-Wood
88e0770f2d cache: Implement RangeOption #1825 2018-01-22 19:44:55 +00:00
Nick Craig-Wood
a6833b68ca local: factor RangeOption code to Decode() method and readers.LimitedReadCloser #1825 2018-01-22 19:44:00 +00:00
Nick Craig-Wood
e44dc2b14d box: fix RangeOption support in Open #1825 2018-01-22 17:05:47 +00:00
Nick Craig-Wood
d876392d15 onedrive: Factor code into fs.FixRangeOption 2018-01-22 17:05:00 +00:00
Nick Craig-Wood
c098e25552 fstest: Skip RangeOption test on Appveyor also 2018-01-22 11:10:29 +00:00
Fabian Möller
186f78d44f local: fix RangeOption support in Open #1825 2018-01-21 19:50:26 +00:00
Nick Craig-Wood
ea69deaa4c fstests: Skip RangeOption test in CI until all implemented 2018-01-21 18:09:16 +00:00
Nick Craig-Wood
c963c74fbe onedrive: fix RangeOption support in Open #1825 2018-01-21 17:11:37 +00:00
Nick Craig-Wood
9c45125271 azureblob: fix RangeOption support in Open #1825 2018-01-21 17:11:32 +00:00
Nick Craig-Wood
8653944a6d Make RangeOption manadatory for Open - #1825
Add an integration test to make sure all backends implement
RangeOption correctly.
2018-01-21 17:09:12 +00:00
Nick Craig-Wood
84bc4dc142 Clarify RangeOption semantics 2018-01-21 09:51:28 +00:00
Nick Craig-Wood
84d00e9046 authors: Fix duplicated entry for Iakov Davydov 2018-01-21 09:38:50 +00:00
Jon Fautley
71bc108ce6 sftp: performance: don't consult config file outside of Fs setup 2018-01-21 09:37:22 +00:00
Stefan Breunig
e57a388851 docs: Update integration testing guide 2018-01-20 18:52:53 +00:00
Nick Craig-Wood
bfa2878d24 Add Andreas Roussos to contributors 2018-01-20 18:50:29 +00:00
Andreas Roussos
dcdb43eb07 Fix typos, reword the description of the lsl command
Add a period at the end of each sentence for consistency.
Change the remaining verbs to their imperative form (again, for consistency).
The default `rclone lsl` output is size, modification time and path, so reword the command description to reflect that.
Correct various typos.
2018-01-20 18:50:20 +00:00
Fabian Möller
115d24e1f7 amazonclouddrive: implement DirChangeNotify
Use the Changes API to invalidate cache entries.
The latest retrieved checkpoint is stored in the config file to allow
fast resumption after restart.
2018-01-20 18:48:52 +00:00
Nick Craig-Wood
62b74d06ff Add Jody Frankowski to contributors 2018-01-20 18:15:27 +00:00
Nick Craig-Wood
7117ba7d58 Add Iakov Davydov to contributors 2018-01-20 18:15:27 +00:00
Jody Frankowski
5e73acd40a Clean up mount.go and vfs/help.go docs
* Title cleanups
* Typos
* `rclone mount vs rclone sync/copy` update with `File Caching`
2018-01-20 18:14:20 +00:00
Nick Craig-Wood
25a41e1945 drive: fix missing error handler 2018-01-20 18:04:23 +00:00
Nick Craig-Wood
ee66419a27 fs/fserrors: Add test for error from #1964 2018-01-19 17:07:40 +00:00
Nick Craig-Wood
8e86a902e2 travis: switch to using the .x version notation for the go minor versions 2018-01-19 14:32:32 +00:00
Nick Craig-Wood
a80d8a21dc vfs: add flags parameter to Dir.Create 2018-01-19 13:18:40 +00:00
Nick Craig-Wood
517bdc719b vfs: make specialized file Open functions private 2018-01-19 11:46:01 +00:00
Nick Craig-Wood
5ad226ab54 fs: Add dir option to fs.Purge #1891
Purge optional interface needs fixing too.
2018-01-19 11:45:50 +00:00
Nick Craig-Wood
a375992186 fstest: Fix removal of test folders/buckets 2018-01-19 10:20:06 +00:00
Nick Craig-Wood
b96c73bee6 test_all: fix -clean flag 2018-01-19 09:47:01 +00:00
Nick Craig-Wood
97c414f025 config/hash: rename more symbols after factoring into own package 2018-01-18 20:27:52 +00:00
Nick Craig-Wood
71722b5b95 config: factor Obscure and Reveal into its own package 2018-01-18 20:19:55 +00:00
Nick Craig-Wood
59a8108fc3 webdav: add a new time format #1952 2018-01-18 16:54:13 +00:00
Nick Craig-Wood
821be5ebed ncdu: add link to asciinema demo of it in action 2018-01-18 14:22:43 +00:00
Nick Craig-Wood
2030dc13b2 lib/oauthutil: fix Google drive oauth process
The problem was introduced by the code refactoring in
11da2a6c9b
2018-01-18 11:18:35 +00:00
Ernest Borowski
5cce74d630 flags: remove --no-traverse flag because it is obsolete - fixes #1813
Signed-off-by: Ernest Borowski <er.borowski@gmail.com>
2018-01-18 11:00:25 +00:00
Iakov Davydov
acd55a8f65 local, fs: --exclude-if-present ignores directories which it doesn't have permission for - fixes #1959 2018-01-16 20:00:16 +00:00
Nick Craig-Wood
ad76dd0adc Add Lucas Bremgartner to contributors 2018-01-16 19:53:59 +00:00
Lucas Bremgartner
8c90bfb0cd FAQ: env vars for SSL root certs and DNS resolver
Added a section to the FAQ about environment variables, which allow
control of the location of the SSL root certificate as well as the DNS
resolver used.

see also comment in #683
2018-01-16 19:53:47 +00:00
Nick Craig-Wood
4b0c5f79b5 qingstor: Only support on go1.7+ 2018-01-16 17:05:26 +00:00
Nick Craig-Wood
1848e26183 dropbox: Only support on go1.7+
See https://github.com/dropbox/dropbox-sdk-go-unofficial/pull/40
2018-01-16 17:05:02 +00:00
Nick Craig-Wood
7d3a17725d vendor: update all dependencies to latest versions 2018-01-16 13:20:59 +00:00
Nick Craig-Wood
8e83fb6fb9 Makefile: Fix integration test runner 2018-01-16 13:14:41 +00:00
Nick Craig-Wood
11da2a6c9b Break the fs package up into smaller parts.
The purpose of this is to make it easier to maintain and eventually to
allow the rclone backends to be re-used in other projects without
having to use the rclone configuration system.

The new code layout is documented in CONTRIBUTING.
2018-01-15 17:51:14 +00:00
Nick Craig-Wood
92624bbbf1 Move all backends into backend directory 2018-01-12 20:27:08 +00:00
Nick Craig-Wood
60afda007b Move dircache, oauthutil, rest and pacer modules into lib 2018-01-12 17:07:38 +00:00
Nick Craig-Wood
b8b620f5c2 Move all backends into backend directory 2018-01-12 17:07:38 +00:00
ishuah
0a7731cf0d cryptdecode: added option to return encrypted file names. Fixes #1923 2018-01-11 19:22:40 +03:00
Will Gunn
6cac98d2ce docs: Add documentation for --stats-file-name-length
Missed adding documentation in original PR https://github.com/ncw/rclone/pull/1951

Fixes comment on #1206
2018-01-11 13:55:25 +00:00
Nick Craig-Wood
712e6a8085 lsf: fix integration tests 2018-01-11 13:52:15 +00:00
Nick Craig-Wood
6d333da69f Add Will Gunn to contributors 2018-01-10 20:33:57 +00:00
Will Gunn
5c7e8d5a2b fs: Add --stats-file-name-length to specify the printed file name length for stats
Fixes #1206
2018-01-10 20:32:36 +00:00
Jon Fautley
57f1bb7bb2 sftp: add 'set_modtime' hidden configuration option 2018-01-10 20:27:23 +00:00
Filip Bartodziej
5e83dce1f6 Installation script check for a tool to extract zip archives #1949 2018-01-10 20:18:20 +00:00
Nick Craig-Wood
052c886317 sftp: read $USER in username fallback not $HOME 2018-01-08 21:39:16 +00:00
Nick Craig-Wood
28480c0570 sftp: use correct OS way of reading username - fixes running under crontab 2018-01-07 12:57:46 +00:00
Nick Craig-Wood
72349bdaae Add Jon Fautley to contributors 2018-01-07 11:19:14 +00:00
Jon Fautley
36e6d23112 sftp: Add option to disable remote hash check command execution 2018-01-07 11:18:51 +00:00
Nick Craig-Wood
0eba37d8f3 lsf: add --files-only and --dirs-only flags 2018-01-06 18:04:24 +00:00
Nick Craig-Wood
c74c3b37da lsf: add option to print hashes 2018-01-06 17:53:37 +00:00
Nick Craig-Wood
7c71ee1a5b fs: fix TestListFormat on remotes which return 0 as dir size not -1 2018-01-06 17:47:42 +00:00
Nick Craig-Wood
ed20fa5ee7 ls* commands: update docs and add defaults into options for lsf 2018-01-06 17:00:20 +00:00
Nick Craig-Wood
54a9fdf421 ls2: remove in favour of lsf 2018-01-06 14:41:36 +00:00
Jakub Tasiemski
0d041602cf cmd: new command lsf 2018-01-06 14:39:31 +00:00
Nick Craig-Wood
8f47d7fc06 Add Chris Redekop to contributors 2018-01-06 14:30:27 +00:00
Chris Redekop
4dd1e507f4 s3: set/get the hash for multipart files - #523 2018-01-06 14:30:10 +00:00
Nick Craig-Wood
65618afd8c serve/http: fix serving files with : in - fixes #1939 2018-01-05 17:25:05 +00:00
Nick Craig-Wood
be4ed14525 rest: rename URLEscape to URLPathEscape for consistency with go1.8 2018-01-05 15:55:43 +00:00
Nick Craig-Wood
ef89f1f1a7 webdav: parse time in alternate format for mydrive.ch - fixes #1952 2018-01-05 14:28:06 +00:00
Nick Craig-Wood
b412c745a1 Start v1.39-DEV development 2017-12-23 13:40:28 +00:00
Nick Craig-Wood
f34a9116d4 Version v1.39 2017-12-23 13:07:45 +00:00
Andrew Starr-Bochicchio
64ea94c1a4 s3: Use rest.URLEscape rather than url.QueryEscape.
The X-Amz-Copy-Source takes a path. url.QueryEscape
escapes spaces with a plus sign while rest.URLEscape
(which mimics the url.PathEscape available from go 1.8)
uses '%20'.

This works around an issue when copying objects with
spaces in their key on DigitalOcean Spaces.
2017-12-23 11:27:45 +00:00
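The escaping difference can be demonstrated with the standard library alone (a standalone example, not rclone's rest package):

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	key := "my file+name.txt"
	// QueryEscape is for query strings: it turns spaces into '+', which is
	// wrong for a path-valued header like X-Amz-Copy-Source.
	fmt.Println(url.QueryEscape(key)) // my+file%2Bname.txt
	// PathEscape (go1.8+) keeps spaces as %20, which is what the S3 API
	// expects for the copy source path.
	fmt.Println(url.PathEscape(key)) // my%20file+name.txt
}
```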
remusb
4eac50eb83 cache: update docs for 1.39 2017-12-22 13:52:55 +02:00
Nick Craig-Wood
5683f74025 Add Yassine Imounachen to contributors 2017-12-21 10:33:43 +00:00
Yassine Imounachen
fe71d4fd87 Fix 'QingClound' typo 2017-12-21 10:33:21 +00:00
remusb
a64d92bd35 cache: update internal tests with chunk path 2017-12-20 23:03:44 +02:00
remusb
c5cf0792f2 cache: add the ability to specify a custom chunk path - fixes #1872 2017-12-20 22:43:30 +02:00
Nick Craig-Wood
255d3e925d s3: fix crash if a bad listing is received - fixes #1927
Caringo Swarm is returning a listing with IsTruncated set but no
NextMarker and no Keys.  Rclone doesn't know how to continue the
listing at this point, so it returns an error rather than truncating
the listing or risking a loop.
2017-12-20 16:51:07 +00:00
remusb
0d4bff8239 cache: fix Windows separator issue for #1904 (#1930) 2017-12-20 17:24:50 +02:00
Nick Craig-Wood
4ba58884b1 webdav: decode multiple <s:propstat> more carefully - fixes nextcloud 12.0.4
For some reason nextcloud sends multiple propstat responses now, one
with a 404 status.  rclone was interpreting the last status and
assuming the file was missing.
2017-12-20 11:53:10 +00:00
remusb
8839e4ee33 cache: add SIGHUP support to evict all cache - fixes #1906 2017-12-19 15:48:48 +02:00
remusb
ebbe77f525 cache: enable internal tests and fix race condition for them (#1928) 2017-12-19 15:37:38 +02:00
remusb
6f1ae00c7f cache: disable unreliable internal tests 2017-12-18 16:31:15 +02:00
remusb
6b5989712f cache: refactor entries caching pattern for #1904 (#1924) 2017-12-18 14:55:37 +02:00
Nick Craig-Wood
29d34426bc vfs: fix deletion of in use directories #1860
This was causing errors if the cache cleaner was called between the
Open and the pendingOpen of a RW file.

The fix was to move the cache open to the Open from the openPending.
2017-12-15 15:42:49 +00:00
Nick Craig-Wood
2a01fa9fa0 moveto,copyto: clarify error message if source doesn't exist - fixes #1022 2017-12-15 11:37:31 +00:00
Nick Craig-Wood
4c0e2f9b3b swift: fix crash on bad authentication - fixes #1919
This also fixes Hubic not re-authenticating for long transfers.
2017-12-14 14:23:55 +00:00
Nick Craig-Wood
240c97cd7a Update MAINTAINERS doc 2017-12-14 13:56:58 +00:00
Nick Craig-Wood
2fd0bec4e4 docs: note that script install checks the version 2017-12-14 11:00:22 +00:00
Nick Craig-Wood
7e585cda96 fs: fix TestRmdirsLeaveRoot test 2017-12-14 08:57:28 +00:00
Nick Craig-Wood
1b1593a894 Add lewapm to contributors 2017-12-13 10:24:16 +00:00
lewapm
9c242edc10 rmdirs: add --leave-root flag - fixes #1874 2017-12-13 10:23:54 +00:00
Nick Craig-Wood
0914ec316c b2: fix multipart upload retries #1733
Prior to this fix we were uploading 0 length bodies if a retry was
needed on a multipart upload chunk.  This gave this error `http:
ContentLength=268435496 with Body length 0`.

Fix by remaking the hash appending reader in the Call loop.  This is
inefficient in the face of retries, but these are uncommon.
2017-12-13 10:11:20 +00:00
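A simulated sketch of why the reader must be rebuilt inside the retry loop (the upload function and SHA-1 trailer here are stand-ins, not the b2 backend's real API): an io.Reader is consumed by the first attempt, so reusing it would send a zero length body.

```go
package main

import (
	"bytes"
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"io"
)

// uploadChunk pretends to upload a chunk and fails the first attempt, to
// show that an io.Reader can only be consumed once.
func uploadChunk(body io.Reader, attempt int) error {
	n, _ := io.Copy(io.Discard, body)
	fmt.Printf("attempt %d uploaded %d bytes\n", attempt, n)
	if attempt == 1 {
		return fmt.Errorf("simulated network error")
	}
	return nil
}

func main() {
	chunk := []byte("some chunk data")
	hash := sha1.Sum(chunk)
	trailer := []byte(hex.EncodeToString(hash[:])) // hash appended to the body

	for attempt := 1; attempt <= 2; attempt++ {
		// Remake the chunk+hash reader inside the loop; reusing the reader
		// from a previous attempt would upload a 0 length body.
		body := io.MultiReader(bytes.NewReader(chunk), bytes.NewReader(trailer))
		if err := uploadChunk(body, attempt); err != nil {
			continue // retry
		}
		break
	}
}
```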
Nick Craig-Wood
2cf808c825 ncdu: fix crashes on empty directories - fixes #1910
Up arrow or right arrow in an empty directory would crash ncdu
2017-12-12 13:54:15 +00:00
Nick Craig-Wood
66558213e0 b2: send correct fileName when using --hard-delete - fixes #1905 2017-12-12 07:48:06 +00:00
remusb
84701e376a cache: delay Plex connection to the first read handle - fixes #1903 2017-12-12 00:46:08 +02:00
remusb
829dd1ad25 cache: try a full read on the last chunk for #1896 2017-12-11 01:15:53 +02:00
remusb
7c972d375b cache: fix mismatched types for #1896 2017-12-10 14:16:16 +02:00
remusb
3d2f3d9a7f cache: catch panic and add more logging for #1896 2017-12-10 14:11:31 +02:00
Nick Craig-Wood
845b22a628 Add Jon Fautley to contributors 2017-12-10 10:53:49 +00:00
Jon Fautley
3684585104 sftp: add option to enable the use of aes128-cbc cipher 2017-12-10 10:53:32 +00:00
Filip Bartodziej
f424019380 error codes documented and bugs fixed 2017-12-10 10:16:20 +00:00
Filip Bartodziej
ab03f6e475 version check in curl installation 2017-12-10 10:16:20 +00:00
remusb
b48b537325 cache: plex integration, refactor chunk storage and worker retries (#1899) 2017-12-09 23:54:26 +02:00
ishuah
b05e472d2e stats: condensed transfer output to fit 80x25 terminals 2017-12-09 10:48:36 +03:00
Nick Craig-Wood
5061aaaf46 vendor: update github.com/dropbox/dropbox-sdk-go-unofficial to fix #1806 2017-12-07 22:14:36 +00:00
Nick Craig-Wood
e00616b016 Write version.txt on building into root of downloads 2017-12-07 21:49:32 +00:00
Nick Craig-Wood
09f203f62b Add Filip Bartodziej to contributors 2017-12-07 21:37:09 +00:00
Filip Bartodziej
2965cbe264 curl install for rclone #1856 2017-12-07 21:36:55 +00:00
Nick Craig-Wood
bb3ba7b314 Add Giovanni Pizzi to contributors 2017-12-07 21:31:15 +00:00
Giovanni Pizzi
f12512dd13 swift: Allow authentication with storage url and auth key
Add the option to load the storage url and the auth key from the
environment when you have an alternate authorization, external to
rclone, and you need to use it (e.g. because it's not yet supported by
the swift go library).

Also allow getting the alternate authentication from the config file,
and use the proper way (c.Authenticated()) to know if it's
authenticated.

Updated the docs as well.
2017-12-07 21:30:58 +00:00
remusb
25b073c767 fs: add Wrap feature for FS to identify their parent FS (#1884) 2017-12-06 17:14:34 +02:00
Nick Craig-Wood
ebd7780188 fstest: don't error out if the target was not found at end of run 2017-12-04 15:58:29 +00:00
Nick Craig-Wood
fa4a25a73b fs: only test one level of cache
Can't test multiple caches at once as can only have 1 DB open at once
2017-12-04 15:50:59 +00:00
Ernest Borowski
934df67aef filter: warn the user if they use --include and --exclude together - fixes #1764
Signed-off-by: Ernest Borowski <er.borowski@gmail.com>
2017-12-04 14:20:01 +00:00
Nick Craig-Wood
006b296c34 Tidy up Makefile to get rid of vendor directory avoidance workarounds 2017-12-03 13:03:20 +00:00
Nick Craig-Wood
38b85e94ea vfs: rename --cache-* options to --vfs-cache-* to save confusion
...as the backend cache options are all called --cache-* too. Adjust
docs to point out what the vfs cache does vs the backend cache.
2017-12-03 12:14:15 +00:00
Nick Craig-Wood
4b185355df fs: rcat - use in memory object and Copy for more reliable transfers 2017-12-03 12:14:15 +00:00
Nick Craig-Wood
7d15c33e42 fs: make Copy and Move return the destination object if possible 2017-12-03 12:14:15 +00:00
Nick Craig-Wood
11332a19a0 fs: make an in memory object for short transfers 2017-12-03 12:14:15 +00:00
Nick Craig-Wood
a1f8318b29 Add Laurence to contributors 2017-12-03 10:24:53 +00:00
Laurence
e767c9ac9f Fix typo in dbhashsum description 2017-12-03 10:24:33 +00:00
Nick Craig-Wood
56cfb810a8 Add Tim Cooijmans to contributors 2017-12-03 10:22:42 +00:00
Tim Cooijmans
835ca15ec8 drive: add service account support. Fixes #839. 2017-12-03 10:21:41 +00:00
remusb
4af4bbb539 cache: add support for PutStream - fixes #1836 2017-11-30 21:16:45 +02:00
remusb
47450ba326 cache: handle errors when bolt tries to start 2017-11-30 12:27:59 +02:00
Nick Craig-Wood
639e812789 fs: integration tests: add SUMMARY heading for log scraping 2017-11-29 15:55:37 +00:00
Nick Craig-Wood
1c6cad2252 fs: integration tests: add 30 minute timeout per test 2017-11-29 13:51:17 +00:00
Nick Craig-Wood
6d3df6f172 cmount: make tests more reliable on Windows 2017-11-28 20:39:24 +00:00
Nick Craig-Wood
c16ac697a9 vfs: keep track of directories in the cache also #1860
This makes managing empty directories more reliable.
2017-11-28 20:39:23 +00:00
Nick Craig-Wood
0978957a2e vfs: make sure all 96 combinations of flags for Open work 2017-11-28 20:39:23 +00:00
Nick Craig-Wood
d1b19f975d vfs: remove items from cache when deleted #1860
Also fixes Error message when items have been deleted from the cache
(eg when Moved) when the cache reaper comes to delete them.
2017-11-28 16:13:58 +00:00
ishuah
aab8051f50 move: add --delete-empty-src-dirs flag - fixes #1854 2017-11-28 11:38:19 +03:00
Nick Craig-Wood
1248beb0b2 cachestats: Fix nil pointer if not a cache remote - fixes #1855
Also don't retry or show stats
2017-11-24 10:22:23 +00:00
Nick Craig-Wood
6448c445f5 acd: Fix download of large files failing - Fixes #1501
Previously it was necessary to work around large files failing to
download with `--acd-templink-threshold`.  This change makes that flag
obsolete and all files should download.  Templinks may be useful under
some circumstances, so the flag isn't being removed.

It does this by filtering `Authorization:` headers out in the
transport if the authorization is supplied in the URL.  This prevents
the "Only one auth mechanism allowed; only the X-Amz-Algorithm query
parameter, Signature query string parameter or the Authorization
header should be specified" error from AWS.
2017-11-24 09:14:25 +00:00
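A generic sketch of that header-filtering idea, using a custom http.RoundTripper (the type name and the query parameters checked are assumptions for illustration, not rclone's Transport filter API described in the next commit):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"os"
)

// stripAuthTransport drops the Authorization header when the request URL
// already carries its own auth (e.g. pre-signed query parameters), so the
// server never sees two authentication mechanisms at once.
type stripAuthTransport struct {
	next http.RoundTripper
}

func (t *stripAuthTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	q := req.URL.Query()
	if q.Get("Signature") != "" || q.Get("X-Amz-Algorithm") != "" {
		req = req.Clone(req.Context()) // don't mutate the caller's request
		req.Header.Del("Authorization")
	}
	return t.next.RoundTrip(req)
}

func main() {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Authorization header present: %v\n", r.Header.Get("Authorization") != "")
	}))
	defer srv.Close()

	client := &http.Client{Transport: &stripAuthTransport{next: http.DefaultTransport}}
	req, _ := http.NewRequest("GET", srv.URL+"/file?Signature=abc", nil)
	req.Header.Set("Authorization", "Bearer should-be-stripped")

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	io.Copy(os.Stdout, resp.Body) // prints: Authorization header present: false
}
```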
Nick Craig-Wood
fdb01437d8 fs: Allow the http Transport to have an optional filter request function 2017-11-24 09:07:56 +00:00
Nick Craig-Wood
729e1305b7 oauthutil: Allow the http.Client to be passed in 2017-11-24 09:07:03 +00:00
Nick Craig-Wood
02ffd43572 fs: Save the config before asking for a token - fixes #1220
Before this if the client_id/client_secret was edited it would
disappear when asking for the new token.

This means the post config is done after the user has confirmed the
config is OK which can't be helped.
2017-11-23 14:01:32 +00:00
Nick Craig-Wood
e53892f53b fs,drive,dropbox: Make and use new RepeatableReader variants to lower memory use
RepeatableReaderSized has a pre-allocated buffer which should help
with memory usage - before it grew the buffer.  Since we know the size
of the chunks, pre-allocating it should be much more efficient.

RepeatableReaderBuffer uses the buffer passed in.

RepeatableLimit* are convenience functions for wrapping a reader in
an io.LimitReader and then a RepeatableReader with the same buffer
size.
2017-11-23 13:53:46 +00:00
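The general shape of such a reader, reduced to a toy (this illustrates only the buffering idea, not rclone's RepeatableReader implementation or API): data read so far is kept in a buffer allocated once at the known chunk size, so a failed attempt can be replayed.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
)

// repeatable buffers everything it reads from the underlying reader into a
// pre-allocated buffer so the same data can be replayed after a failed
// upload attempt. This is the general idea only, not rclone's actual type.
type repeatable struct {
	in  io.Reader
	buf *bytes.Buffer
}

func newRepeatable(in io.Reader, size int) *repeatable {
	buf := bytes.NewBuffer(make([]byte, 0, size)) // allocate once, up front
	return &repeatable{in: io.TeeReader(in, buf), buf: buf}
}

func (r *repeatable) Read(p []byte) (int, error) { return r.in.Read(p) }

// Rewind returns a reader that replays the bytes read so far followed by
// the unread remainder of the source.
func (r *repeatable) Rewind() io.Reader {
	return io.MultiReader(bytes.NewReader(r.buf.Bytes()), r.in)
}

func main() {
	r := newRepeatable(strings.NewReader("chunk of data"), 64)
	first := make([]byte, 5)
	io.ReadFull(r, first)          // pretend an upload consumed 5 bytes, then failed
	b, _ := io.ReadAll(r.Rewind()) // the retry sees the full chunk again
	fmt.Printf("%s\n", b)          // chunk of data
}
```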
ishuah
6c62fced60 move: fixed root source directories getting deleted after move - fixes #1849 2017-11-23 12:01:35 +03:00
Nick Craig-Wood
c64ad851af Add David Minor to contributors 2017-11-23 08:57:34 +00:00
David Minor
4c116af1d0 s3: add support for ECS task IAM roles
ECS container IAM metadata is in a different place than EC2 IAM metadata.
Use defaults' RemoteCredProvider function to query the standard locations
for the credentials.

Give the ECS role precedence over the role available from the underlying
EC2 instance.
2017-11-23 08:56:56 +00:00
Nick Craig-Wood
8357a82eee dropbox: change default chunk size to 48MB now we are buffering them in memory 2017-11-22 17:15:37 +00:00
Nick Craig-Wood
483f4b8ad9 dropbox: multiparts uploads retry retry every error after the first chunk is done 2017-11-22 17:15:37 +00:00
Nick Craig-Wood
6f61da5c75 dropbox: buffer the chunks when uploading large files so they can be retried
We use fs.RepeatableReader to buffer the chunks which plays nice with
the accounting.  The default chunk size is 128M which may be too
large.

Fixes #1806
2017-11-22 17:15:37 +00:00
Nick Craig-Wood
159fce0106 fs: fix --cache-dir to have some effect 2017-11-22 17:05:02 +00:00
remusb
569c1a2ec1 cache: catch signal interrupt for bolt handle cleanup 2017-11-22 18:32:36 +02:00
remusb
2497ca5134 cache: add extra logging in Move and Copy 2017-11-22 00:38:25 +02:00
Nick Craig-Wood
cbe5d7ce64 fs: Remove X-Auth-Token: from headers when dumping for swift 2017-11-21 17:32:07 +00:00
Nick Craig-Wood
1a65a4e769 fs: Add --dump flag, introduce --dump requests, responses and remove --dump-auth, --dump-filters
Now --dump-<flag> is written as --dump <flag>. This is a comma-separated list which can contain

  * headers - HTTP headers as before
  * bodies  - HTTP bodies as before
  * requests - HTTP request bodies
  * responses - HTTP response bodies
  * auth - HTTP auth
  * filters - Filter regexps

Leave --dump-headers and --dump-bodies for the time being but remove
the other --dump-* flags as they aren't used very often.
2017-11-21 17:32:07 +00:00
Nick Craig-Wood
bcf1ece43b Update MAINTAINERS with our new maintainer Remus Bunduc 2017-11-21 17:32:07 +00:00
ishuah
b4aa920a3d stats: show the amount of data transferred in kb/mb - fixes #1167 2017-11-21 12:40:02 +03:00
Remus Bunduc
41a97e39c8 cache: fix option help text 2017-11-21 11:25:28 +02:00
Nick Craig-Wood
abbcb2f5e0 cache: disable another unreliable test #1844 2017-11-20 21:25:38 +00:00
Nick Craig-Wood
cb6de4a2cf cache: disable unreliable test #1844 2017-11-20 19:55:00 +00:00
Nick Craig-Wood
dc1c679c65 mount: support truncate properly 2017-11-20 19:42:35 +00:00
Nick Craig-Wood
3fb4fe31d2 vfs: make sure write only handles never truncate files they shouldn't 2017-11-20 19:42:25 +00:00
Nick Craig-Wood
76b151984c vfs: cache the size of the object in the read handle 2017-11-20 17:57:13 +00:00
Nick Craig-Wood
f0ed384786 cache: fix default setting for warmup_age 2017-11-20 14:39:12 +00:00
Nick Craig-Wood
f80f7a0509 cache: use fs.CacheDir to make the default directory for the cache
NB this changes the default dir for the cache
2017-11-20 14:38:28 +00:00
Nick Craig-Wood
af50f31f7d mounttest: wait for Release after every Read to stop using in use files under Windows 2017-11-20 12:46:24 +00:00
Nick Craig-Wood
8e2213fbbd local: add error message for cross file system moves 2017-11-20 12:46:24 +00:00
Nick Craig-Wood
085c690798 build: add in 64bit path for WinFSP headers 2017-11-20 12:46:24 +00:00
Nick Craig-Wood
2b666187a6 cmount: disable tests on windows + race detector
These either hang or produce incorrect results for reasons I haven't
worked out yet.
2017-11-20 12:46:24 +00:00
Nick Craig-Wood
00b46a8b96 mounttest: wait for files to disappear from the directory listing 2017-11-20 12:46:24 +00:00
Nick Craig-Wood
b21f227bd3 mounttest: fix crash when FUSE not present 2017-11-20 12:46:24 +00:00
Nick Craig-Wood
e98e550021 mounttest: wait for all background Close/Release after writing a file
The filesystem does a certain amount of things asynchronously; waiting
for the file to be released after writing it means everything should
be in a consistent state.
2017-11-20 12:46:23 +00:00
Nick Craig-Wood
60945d0a37 vfs: remove misleading comment 2017-11-20 12:46:23 +00:00
Nick Craig-Wood
b4083b4371 vfs: rename Fsync to Sync and implement Sync on Node and Handle 2017-11-20 12:46:23 +00:00
Nick Craig-Wood
eb3415db50 cmount: enable more tests for Windows 2017-11-20 12:46:23 +00:00
Nick Craig-Wood
9fbd8a6419 mounttest: fixes for running under Windows
* don't mount and unmount between cache runs - WinFSP doesn't support it
  * use OS paths for opening things
2017-11-20 12:46:23 +00:00
Nick Craig-Wood
9738f8532b vfs: Add FlushDirCache method 2017-11-20 12:46:23 +00:00
Nick Craig-Wood
a5b034a992 vfs: add WaitForWriters to wait until all writers have finished 2017-11-20 12:46:23 +00:00
Nick Craig-Wood
321b6da7af vfs: don't remove file from writers until it is transferred
This means that the list of active writers is up to date
2017-11-20 12:46:23 +00:00
Nick Craig-Wood
1b22ee5b93 vfs: fix error handling in openPending so it returns the correct error 2017-11-20 12:46:23 +00:00
Nick Craig-Wood
eab55ce882 vfs: add open files to directories 2017-11-20 12:46:23 +00:00
Nick Craig-Wood
61b6159a05 mount, cmount: add O_CREATE to Open calls since fuse doesn't seem to supply it 2017-11-20 12:46:22 +00:00
Nick Craig-Wood
c560017934 vfs: add Path method to Node and use it to stop reading nil DirEntry
All DirEntry calls now have been checked for nil or converted to use Path.
2017-11-20 12:46:22 +00:00
Nick Craig-Wood
7c3584f4e6 mountlib: wait for mountpoint to disappear under Windows 2017-11-20 12:46:22 +00:00
Nick Craig-Wood
981cfb1bec mounttest: retry directory listings to account for slow updates on Windows 2017-11-20 12:46:22 +00:00
Nick Craig-Wood
992647b157 vfs: Don't error a r/w file open without cache; delay error until Read called
If we open a file for r/w without the cache we now always return a
handle and return an error if the file is ever read from.  This fixes
incompatibility with cmount under windows.
2017-11-20 12:46:22 +00:00
Nick Craig-Wood
dec21ccf63 vfs, cmount: make truncate work properly in the presence or otherwise of open files 2017-11-20 12:46:22 +00:00
Nick Craig-Wood
94adf4f43b cmount: translate FUSE open flags into OS flags
On Windows the fuse.O_* flags do not have the same values as the
os.O_* flags so translate between the two representations.  They are
mostly the same which is why this hasn't caused a problem before.
2017-11-20 12:46:22 +00:00
Nick Craig-Wood
e7f2935333 vfs: decode flags in Open/OpenFile for debug 2017-11-20 12:46:22 +00:00
Nick Craig-Wood
f5f8c0c438 cmount: make Truncate call the correct Handle or Node method 2017-11-20 12:46:22 +00:00
Nick Craig-Wood
60cdcf784c cmount: use -o atomic_o_trunc to make sure O_TRUNC is supplied to Open() 2017-11-20 12:46:22 +00:00
Nick Craig-Wood
57a5c67729 mounttest: run the tests for all 4 VFS cache modes 2017-11-20 12:46:21 +00:00
Nick Craig-Wood
d7908c06c9 mountlib: ensure we don't open files with read and write intent 2017-11-20 12:46:21 +00:00
Nick Craig-Wood
8951875c21 vfs,mount,cmount,mountlib: allow flags to be overridden by environment variables 2017-11-20 12:46:21 +00:00
Nick Craig-Wood
05a1e1532b vfs,mount,cmount,serve: Add documentation for vfs caching modes 2017-11-20 12:46:21 +00:00
Nick Craig-Wood
7f20e1d7f3 vfs: add read write files and caching #711
This adds new flags to mount, cmount, serve *

    --cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
    --cache-mode string              Cache mode off|minimal|writes|full (default "off")
    --cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
2017-11-20 12:36:50 +00:00
Nick Craig-Wood
bb0ce0cb5f vendor: vfs add vendor/github.com/djherbis/times 2017-11-20 12:36:50 +00:00
Nick Craig-Wood
e946a8eab0 fs: Add CacheDir config variable 2017-11-20 12:00:32 +00:00
Nick Craig-Wood
a0cfa0929b vfs: remove un-needed (after introduction of rcat) createInfo struct 2017-11-20 12:00:32 +00:00
Nick Craig-Wood
3fb1e96988 vfs: factor Open logic from Dir.Create into vfs.OpenFile 2017-11-20 12:00:32 +00:00
Nick Craig-Wood
46947b3b9b rcat: fix goroutine leak
This was leaking goroutines in the short file case because it wasn't
calling Close() on the Account object.  This became apparent when
testing with mount.
2017-11-20 12:00:32 +00:00
Nick Craig-Wood
de98e2480d Add Jakub Tasiemski to contributors 2017-11-20 11:16:22 +00:00
Jakub Tasiemski
3cf7c61aa0 Add touch command - fixes #1594 2017-11-20 11:16:05 +00:00
Fabian Möller
d8b3bf014d mount: use sdnotify to signal systemd the mount is ready
When the NOTIFY_SOCKET environment variable is set notify systemd after
the mount is ready.
2017-11-20 11:03:10 +00:00
Fabian Möller
0bfa29cbcf vendor: add github.com/okzk/sdnotify 2017-11-20 11:03:10 +00:00
Nick Craig-Wood
6cc968b085 Add Fabian Möller to contributors 2017-11-19 22:14:33 +00:00
Fabian Möller
ce5b3a531d crypt: implement DirChangeNotify
crypt now implements DirChangeNotify if the wrapped FS provides it.
2017-11-19 20:09:52 +00:00
Fabian Möller
5acb6f47e7 mountlib: log when poll-interval is ineffective
Notify the user in case poll-interval is used on an unsupported remote
2017-11-19 20:08:14 +00:00
Nick Craig-Wood
409ba56fde Add Iakov Davydov to contributors 2017-11-17 21:52:00 +00:00
Nick Craig-Wood
5d875e8840 Add Remus Bunduc to contributors 2017-11-17 21:52:00 +00:00
Iakov Davydov
429bb7e8b8 docs for --exclude-if-present 2017-11-17 21:51:11 +00:00
Iakov Davydov
7d3abdc463 tests for --exclude-if-present 2017-11-17 21:51:11 +00:00
Iakov Davydov
538246f6c3 support exclude file in --fast-list mode 2017-11-17 21:51:11 +00:00
Iakov Davydov
557dd8f031 ListDirSorted check for excludefile 2017-11-17 21:51:11 +00:00
Iakov Davydov
37aaa19f3a new option: --exclude-if-present 2017-11-17 21:51:11 +00:00
Iakov Davydov
cef2e3bf83 path -> startPath in walkRDirTree (we need the path package) 2017-11-17 21:51:11 +00:00
Iakov Davydov
a3a436ce16 WalkRDirTree: return error if unknown item type 2017-11-17 21:51:11 +00:00
Iakov Davydov
5d05df3124 ListContainsExcludeFile: checks for exclude file in the list 2017-11-17 21:51:11 +00:00
Iakov Davydov
421ba84e12 DirTree.Prune: deletes several directories 2017-11-17 21:51:11 +00:00
Iakov Davydov
7ae7080824 FileExists check if a file exists 2017-11-17 21:51:11 +00:00
ishuah
31d2fb4e11 mount: Fix mount breaking on Windows - fixes #1827 2017-11-16 15:20:53 +03:00
Nick Craig-Wood
704e82aab1 dropbox: adapt to upstream changes #1804 2017-11-15 16:02:29 +00:00
Nick Craig-Wood
fc352c1ff6 vendor: update github.com/dropbox/dropbox-sdk-go-unofficial to fix #1804 2017-11-15 15:55:01 +00:00
Nick Craig-Wood
e491093cd1 vendor: dep ensure to get things into sync after merges 2017-11-15 15:52:44 +00:00
Remus Bunduc
016abf825e cache: first version 2017-11-15 15:23:21 +00:00
Remus Bunduc
0c942199c9 cache: add vendor requirements: bbolt and go-cache 2017-11-15 15:23:21 +00:00
ishuah
aec2265be0 rclone: implement exit codes - #1136 2017-11-15 17:48:37 +03:00
Substantiel
2423fa40e2 config: add password sub command for setting obscured passwords 2017-11-15 14:44:45 +00:00
Nick Craig-Wood
4355f3fe97 Add Ernest Borowski to contributors 2017-11-14 21:25:02 +00:00
Ernest Borowski
9fbff7bcab mountlib: check if directory is not empty before mounting - fixes #1386
Signed-off-by: Ernest Borowski <er.borowski@gmail.com>
2017-11-14 21:24:31 +00:00
Substantiel
413faa99cf oauthutil: make sure auth server always finishes even when things go wrong 2017-11-09 21:34:44 +00:00
ishuah
ed91d6b5a5 Added Ishuah Kariuki to MAINTAINERS.md 2017-11-09 17:10:32 +03:00
ishuah
c65734ee69 move: delete source directory after successful move - fixes #1642 2017-11-07 22:21:38 +00:00
Nick Craig-Wood
8c8abfd6dc vendor: update github.com/a8m/tree - fixes #1797 2017-11-06 11:23:27 +00:00
ishuah
dfaee55ef3 crypt: Added option to encrypt directory names or leave them intact - #1240 2017-11-06 10:38:48 +00:00
Nick Craig-Wood
72072d7d6b Add Pierre Carlson to contributors 2017-11-05 22:09:31 +00:00
Pierre Carlson
f1287e13f7 Add new fields for swift configuration to support IBM Bluemix Swift 2017-11-05 22:08:43 +00:00
Substantiel
7749157596 Add --auto-confirm flag 2017-11-05 21:56:50 +00:00
Oliver Heyme
682b4d54c5 onedrive: Add option to choose resourceURL during setup of OneDrive Business account if more than one is available for user 2017-11-05 21:41:56 +00:00
Nick Craig-Wood
245edd1b0e local: fix equality check for times 2017-11-05 21:39:49 +00:00
Nick Craig-Wood
4d081ec87e Add Corban Raun to contributors 2017-11-05 21:39:49 +00:00
Corban Raun
a8dfc5ce3b Fix spelling in some documentation 2017-11-05 21:38:59 +00:00
Nick Craig-Wood
68d0b5adbb serve webdav: this implements a webdav server for any rclone remote. 2017-11-04 10:24:11 +00:00
Nick Craig-Wood
c4ad3ac94c vendor: ensure golang.org/x/net/webdav is vendored 2017-11-04 10:24:11 +00:00
Nick Craig-Wood
16e16bc220 serve http: use vfs to cache the directories and support Range header 2017-11-04 10:24:11 +00:00
Nick Craig-Wood
73dfa21ba3 local: avoid triggering the race detector 2017-11-04 10:24:11 +00:00
Nick Craig-Wood
c31556c6d1 vfs: Make sure all public methods are locked in Read and Write Handle 2017-11-04 10:24:10 +00:00
Nick Craig-Wood
2083ac6e2a vfs: add ECLOSED and tidy errors 2017-11-04 10:24:10 +00:00
Nick Craig-Wood
22ee839d05 cmount,vfs: unify Read and Write handles and File and Dir where possible 2017-11-04 10:24:10 +00:00
Nick Craig-Wood
5634659ea3 mount,vfs: unify Read and Write handles in preparation for ReadWrite handles 2017-11-04 10:24:10 +00:00
Nick Craig-Wood
e18122e88b vfs: add tests and subsequent fixes
* Tests for VFS layer
  * Small fixes found during testing
  * Fix Close, Flush and Release behaviour for ReadFileHandle and WriteFileHandle
  * Fix nil object bugs on File
2017-11-04 10:24:10 +00:00
Nick Craig-Wood
07ec8073fe mount: remove unused DirEntry struct 2017-11-03 13:00:00 +00:00
Nick Craig-Wood
8184ec4b70 vfs: add EPERM to errors 2017-11-03 13:00:00 +00:00
Nick Craig-Wood
190367d917 vfs: factor duplicated Open code into vfs from mount/cmount 2017-11-03 13:00:00 +00:00
Nick Craig-Wood
a5dc62f6c1 vfs: Make file handles compatible with OS
* Implement directory handles
  * Unify OpenFile
  * Add all the methods to match *os.File
  * Add StatParent and Rename methods to VFS
2017-11-03 13:00:00 +00:00
Nick Craig-Wood
3e0c91ba4b vfs: Move DefaultOpt to vfs and make some methods private 2017-11-03 13:00:00 +00:00
Nick Craig-Wood
7e065440fb vfs: rename Lookup to Stat to be more in keeping with os 2017-11-03 12:59:59 +00:00
Nick Craig-Wood
e8883e9fdb vfs: factor flags into vfsflags and remove global variables 2017-11-03 12:59:59 +00:00
Nick Craig-Wood
1a8f824bad vfs: use os package errors where possible 2017-11-03 12:59:59 +00:00
Nick Craig-Wood
c1aaff220d Factor new vfs module out of cmd/mountlib
This is an OS style file system abstraction with directory caching
used in mount, cmount, serve webdav and serve http.
2017-11-03 12:59:59 +00:00
Nick Craig-Wood
6da6b2556b mountlib: make directory entries be returned in sorted order 2017-11-03 12:59:59 +00:00
Nick Craig-Wood
ca19fd2d7e mountlib: Make read/write file handles support more standard interfaces
Including Read, ReadAt, Seek, Close for read handles and Write,
WriteAt, Close for write handles.
2017-11-03 12:59:59 +00:00
Nick Craig-Wood
2fac74b517 mountlib: store only Node in *Dir removing DirEntry struct 2017-11-03 12:59:59 +00:00
Nick Craig-Wood
8b6daaa877 mountlib: add DirEntry() to Node interface 2017-11-03 12:59:59 +00:00
Nick Craig-Wood
3af9d63261 mountlib: add Remove and RemoveAll methods to Node 2017-11-03 12:59:59 +00:00
Nick Craig-Wood
c6cd2a5280 mountlib: add parent and entry to Dir 2017-11-03 12:59:59 +00:00
Nick Craig-Wood
0bb84efe75 mountlib: Rename Remove to RemoveName 2017-11-03 12:59:59 +00:00
Nick Craig-Wood
3ec15ac2bd mountlib: make sure Node is always set in DirEntry
This simplifies the code and makes DirEntry.Node usable when
using ReadDir.
2017-11-03 12:59:58 +00:00
Nick Craig-Wood
750690503e mountlib: make Node satisfy os.FileInfo interface 2017-11-03 12:59:58 +00:00
Nick Craig-Wood
54950d3423 mountlib: make more useful as a general purpose file system adaptor 2017-11-03 12:59:58 +00:00
Nick Craig-Wood
014aa3d157 fstest: check no files or directories between runs 2017-11-03 12:59:58 +00:00
Nick Craig-Wood
cc7ed13b9b fs: factor test running code into fstest/run.go 2017-11-03 12:59:58 +00:00
Nick Craig-Wood
6552581a17 b2: correct docs on SHA1s on large files 2017-11-03 12:49:15 +00:00
Nick Craig-Wood
f60e2a7aac swift: add OS_TENANT_ID to config 2017-11-02 14:49:07 +00:00
Nick Craig-Wood
cacae8d12d swift: add OS_USER_ID to config
Also add env names to the config to make them easier to match.
2017-11-01 21:26:04 +00:00
Nick Craig-Wood
4a1013f2de swift: Allow configs with user id instead of user name 2017-10-31 14:23:10 +00:00
Nick Craig-Wood
d0b9baab13 Update travis builds to go 1.9.2 and go 1.8.5 2017-10-26 22:30:53 +01:00
Nick Craig-Wood
96665c16cb serve http: make it compile on go1.6 and go1.7 2017-10-26 21:52:29 +01:00
Nick Craig-Wood
39b9f80302 Add John Leach to contributors 2017-10-26 21:39:22 +01:00
John Leach
1602a3a055 Check if swift segments container exists before create
Avoids blindly trying to create the segments container, which can fail if the
authentication credentials don't allow container creates or updates.

Fixes #1769
2017-10-26 21:39:05 +01:00
Nick Craig-Wood
fafaea7edc Add Andrew Starr-Bochicchio to contributors 2017-10-26 21:35:19 +01:00
Andrew Starr-Bochicchio
e6fb96cfd4 Initial docs for usage with DigitalOcean Spaces. 2017-10-26 21:34:42 +01:00
Nick Craig-Wood
e612673ea0 webdav: fix Copy, Move and DirMove to be more compatible
The fix was to use an absolute URL in the Destination: as per RFC2518

This makes it compatible with the golang.org/x/net/webdav server
2017-10-25 22:59:22 +01:00
Nick Craig-Wood
fd2406f94e webdav: fix directory detection when creating a remote
Factor the is a directory check out and use it everywhere.
2017-10-25 12:04:20 +01:00
Nick Craig-Wood
cd146415d1 serve http: error if Range supplied (not supported yet)
Also add Server header
2017-10-24 23:18:36 +01:00
Nick Craig-Wood
2740c965c0 serve http: Fix timeouts 2017-10-24 23:07:46 +01:00
Nick Craig-Wood
6669165b6b serve http command to serve a remote over HTTP
This implements a basic webserver to serve an rclone remote over HTTP.

It also sets up the framework for adding more types of server later.
2017-10-24 13:25:49 +01:00
Nick Craig-Wood
a06bcd4c57 Add paypal.me link to donate page 2017-10-23 12:56:48 +01:00
Nick Craig-Wood
6df1f6fad1 webdav: support put.io #580
* Add docs on how to set up
  * Fix the listing routine
    * Use Depth: 1 as otherwise we get a recursive listing
    * Detect collections properly rather than relying on them ending in /
    * Add / to collection URLs which don't have one
2017-10-23 12:37:02 +01:00
Nick Craig-Wood
683befaec1 Add Jason Rose to contributors 2017-10-20 15:46:46 +01:00
ishuah
10f27e2ff2 allow trailing+leading whitespace for passwords - #1717
warn users when they enter passwords with leading/trailing whitespace

Updated config_test.go, removing a deprecated test case and updating TestReveal
2017-10-20 15:46:17 +01:00
Jason Rose
d121a94c20 Corrected default log-level value 2017-10-20 15:43:31 +01:00
Nick Craig-Wood
567071750b vendor: update github.com/ncw/swift to fix memory leak in swift transfers 2017-10-19 14:44:13 +01:00
Nick Craig-Wood
115053930e Make error messages less cryptic when revealing an unobscured password - fixes #1743 2017-10-16 22:03:06 +01:00
Nick Craig-Wood
ef1346602e Add contributors
* thierry
  * Dan Dascalescu
  * Simon Leinen
2017-10-16 21:58:58 +01:00
Dan Dascalescu
9417194751 Fix dedupe description typo 2017-10-16 21:51:31 +01:00
Dan Dascalescu
69ba806528 2017-Oct update to the Drive docs 2017-10-16 21:50:08 +01:00
Dan Dascalescu
ae9d58d625 Copy edit the SFTP guide 2017-10-16 21:49:25 +01:00
Ubuntu
d6bab0169f Per-remote env variables start with RCLONE_CONFIG_ 2017-10-16 21:45:22 +01:00
Ubuntu
d7dd6f3814 Typo fix: resove -> resolve 2017-10-16 21:45:22 +01:00
Nick Craig-Wood
edfab09eb9 config: add sub commands for full config file management
Previously config sub commands were manually parsed rather than using
cobra.

Make config command have the following sub commands:

 * create    Create a new remote with name, type and options.
 * delete    Delete an existing remote <name>.
 * dump      Dump the config file as JSON.
 * edit      Enter an interactive configuration session.
 * file      Show path of configuration file in use.
 * providers List in JSON format all the providers and options.
 * show      Print (decrypted) config file, or the config for a single remote.
 * update    Update options in an existing remote.

The following changes were made to existing commands

 * listproviders was renamed to providers
 * listoptions was removed in favour of providing the output in providers
 * jsonconfig was renamed to create
 * an optional parameter was added to the show command
2017-10-14 11:50:41 +01:00
thierry
0575623dff Add config listproviders, listoptions, jsonconfig for automated config
Addition of a method listing the providers, a method listing the
options of a provider and a method of manual configuration.
2017-10-13 17:17:36 +01:00
Nick Craig-Wood
fc8b13c993 moveto/copyto: Fix to allow copying to the same name - fixes #1736 2017-10-12 20:45:36 +01:00
Nick Craig-Wood
b531bf1349 Add android and IOS build to circleci 2017-10-11 13:40:02 +01:00
Nick Craig-Wood
43ced30f11 fs: Add more errors to retry - fixes #1733 2017-10-10 19:51:02 +01:00
Nick Craig-Wood
106bc1c9fc Add jersou to contributors 2017-10-10 19:44:44 +01:00
jersou
f64ee433b7 docs: missing "sync" command name fix 2017-10-10 19:44:19 +01:00
Nick Craig-Wood
3eb7f52e39 fs: Add "unexpected EOF reading trailer" as a retriable error - fixes #1730 2017-10-09 17:29:16 +01:00
Nick Craig-Wood
7f3dc9b5c4 Implement WebDAV remote #580
This has special knowledge of Owncloud and Nextcloud to enable more
functionality such as mod times.
2017-10-09 16:19:37 +01:00
Nick Craig-Wood
bcdd79320b rest: Add SetUserPass to create Authorization header 2017-10-09 16:19:37 +01:00
Nick Craig-Wood
2453abfbea rest: add a Signer callback 2017-10-09 16:19:37 +01:00
Nick Craig-Wood
efd88c5676 rest: add CallXML and DecodeXML functions 2017-10-09 16:19:37 +01:00
Nick Craig-Wood
4966611866 rest: factor URLJoin and URLEscape from http remote 2017-10-09 16:19:37 +01:00
Nick Craig-Wood
00fe6d95da fs: fix duplicate files causing spurious copies
Before this fix duplicate files (on Google Drive) caused the next file
to be spuriously copied.  `rclone dedupe` worked around the problem.
2017-10-02 16:52:53 +01:00
Nick Craig-Wood
b7521c0fe2 dropbox: fix error when renaming directories - fixes #1708 2017-10-02 11:21:16 +01:00
Nick Craig-Wood
a1d942e5c3 pcloud: make compile with go1.6 2017-10-01 16:41:23 +01:00
Nick Craig-Wood
9e9297838f Implement pcloud remote - #418 2017-10-01 11:37:35 +01:00
Nick Craig-Wood
6403242f48 drive, yandex: add missing CleanUpper interface check 2017-09-30 16:34:46 +01:00
Nick Craig-Wood
737cf3d957 rest: factor multipart upload out into function and generalise 2017-09-30 16:08:38 +01:00
Nick Craig-Wood
8f2f480628 rest: Add TransferEncoding and Close parameters 2017-09-30 16:03:47 +01:00
Nick Craig-Wood
a5e0115b19 Makefile: clean some more files 2017-09-30 16:02:00 +01:00
Nick Craig-Wood
63d0734c71 tree: remove workaround for tree library bug now it is fixed 2017-09-30 15:51:14 +01:00
Nick Craig-Wood
b017fcfe9a vendor: update all dependencies to latest versions 2017-09-30 15:27:27 +01:00
Nick Craig-Wood
911d121bb9 docs: Fix version number 2017-09-30 15:22:00 +01:00
Nick Craig-Wood
1c10497b68 Start v1.38-DEV development 2017-09-30 15:16:09 +01:00
Nick Craig-Wood
d96e45ba5b Version v1.38 2017-09-30 14:20:43 +01:00
Nick Craig-Wood
657b3a674d fs: fix test_all -clean to run just one cleaning thread per remote 2017-09-30 11:07:09 +01:00
Nick Craig-Wood
5177d8c854 docs: update website footer 2017-09-30 09:28:49 +01:00
Nick Craig-Wood
b2b989434d docs: use a shortcode to insert the version string 2017-09-30 09:28:49 +01:00
Nick Craig-Wood
3e9861eecf docs: improve links to cloud providers 2017-09-30 09:28:49 +01:00
Nick Craig-Wood
3fc69f4140 docs: fix daggers 2017-09-30 09:19:53 +01:00
Nick Craig-Wood
b1e85c7ceb website: Adapt to hugo v0.27.1 2017-09-30 09:19:53 +01:00
Nick Craig-Wood
1d994f7330 s3: add Wasabi instructions 2017-09-30 09:00:56 +01:00
Nick Craig-Wood
0e76e35b6f dropbox: Fix deprecation warnings for Move, MoveDir and Copy - fixes #1699 2017-09-30 08:10:51 +01:00
Nick Craig-Wood
29e2744155 vendor: update github.com/dropbox/dropbox-sdk-go-unofficial 2017-09-30 08:10:50 +01:00
Nick Craig-Wood
6390bb2b09 vendor: resync with dep ensure 2017-09-30 08:10:50 +01:00
Stephen Harris
6f2a6dfbc5 For MacOS installation, make sure the /usr/local/bin directory exists 2017-09-28 16:34:01 +01:00
Nick Craig-Wood
b6684ea4f5 crypt: fix PutStream
* Make crypt call the underlying PutStream not Put as it might be different
  * Make wrapped objects with size < 0 carry on having size < 0 after wrapping
2017-09-28 08:56:40 +01:00
Nick Craig-Wood
2857ed5c35 fs: fix --immutable tests on remotes which don't have modtime 2017-09-28 08:56:30 +01:00
Nick Craig-Wood
8771d352d4 Makefile: make test now stores logs and tests everything 2017-09-27 16:13:33 +01:00
Nick Craig-Wood
748c9f5cb7 docs: merge email addresses for @ishuah 2017-09-25 21:02:33 +01:00
Stefan Breunig
646a419453 docs: update overview table to reflect streaming upload ability 2017-09-24 21:59:31 +02:00
Nick Craig-Wood
c98dfa2556 Add ishuah to contributors 2017-09-24 20:03:11 +01:00
ishuah
7195e44dce crypt: added cryptdecode command - #1129 2017-09-24 20:02:59 +01:00
Nick Craig-Wood
c9e2739500 Add Jacob McNamee to contributors 2017-09-24 20:02:40 +01:00
Jacob McNamee
2d8e75cab4 Implement --immutable option 2017-09-24 20:00:00 +01:00
Stefan Breunig
5a3a56abd8 yandex: address errcheck warnings 2017-09-19 23:30:08 +02:00
Stefan Breunig
7b89a5f656 Add LingMan to contributors 2017-09-19 23:13:51 +02:00
LingMan
a4396ebe0f docs: remove duplicated --drive-auth-owner-only documentation (#1688) 2017-09-19 18:00:41 +02:00
Stefan
85877f3adc config: add show/file subcommands which print the config/its path (fixes #1086) 2017-09-19 17:59:19 +02:00
Nick Craig-Wood
87335de8a8 fs: fix filename normalization issues in the tests when running on OS X 2017-09-17 15:31:22 +01:00
Stefan Breunig
12405f9f41 fuse: re-use rcat to support uploads for all remotes (fixes #1672) 2017-09-16 22:49:08 +02:00
Stefan Breunig
168b0a0ecb googlecloudstorage: support streaming uploads (see #1614) (closes #1684) 2017-09-16 22:46:02 +02:00
Stefan
234bfae0d5 b2: implement streaming upload of files with unknown length (see #1614) (closes #1686) 2017-09-16 22:43:48 +02:00
Nick Craig-Wood
4ac9a65049 fs: stop normalizing file names but do a normalized compare in the sync
This works by using a transform function to transform file names when
comparing them while matching file names in a directory.  rclone now
UTF-8 normalizes the file names and does a case insensitive compare if
the destination remote is case insensitive.

This deprecates the --local-no-unicode-normalization flag.

Fixes #1477
2017-09-16 19:49:31 +01:00
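
A minimal sketch of this style of comparison in Go (illustrative only; normalizeName is a hypothetical helper, not rclone's transform): normalize both names to NFC and fold case only when the destination remote is case insensitive.

    package main

    import (
        "fmt"
        "strings"

        "golang.org/x/text/unicode/norm"
    )

    // normalizeName maps a file name to its comparison form: NFC
    // normalized, and lower-cased if the destination is case insensitive.
    func normalizeName(name string, caseInsensitive bool) string {
        s := norm.NFC.String(name)
        if caseInsensitive {
            s = strings.ToLower(s)
        }
        return s
    }

    func main() {
        // Composed vs decomposed "é" compare equal after normalization.
        a, b := "caf\u00e9.txt", "cafe\u0301.txt"
        fmt.Println(normalizeName(a, true) == normalizeName(b, true)) // true
    }
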
Nick Craig-Wood
a8e41f081c fs: re-implement check and cryptcheck using the same traversal as sync
This makes them 100% consistent with sync and also makes them use less
memory as they no longer build the whole tree in memory first.

Fixes #1657
2017-09-16 19:49:31 +01:00
Nick Craig-Wood
261c7ad9e4 fs: make syncCopyMove use context for go routine cancellation 2017-09-16 19:49:31 +01:00
Nick Craig-Wood
fe96d5cf0a fs: factor multiple directory traverse out of sync 2017-09-16 19:49:31 +01:00
Nick Craig-Wood
8574129892 swift: fix server side copy to empty container with --fast-list
This was caused by an incorrect error return code from ListR when the
container did not exist.
2017-09-16 19:49:31 +01:00
Nick Craig-Wood
6df12b3f00 fs: improve retriable error detection 2017-09-16 19:48:49 +01:00
Stefan Breunig
7f8d306c9c s3: allow streaming upload of files with unknown file size (see #1614) 2017-09-15 20:20:32 +02:00
Stefan Breunig
9d3f11b493 amazonclouddrive, rcat: ensure rcat integration test passes even with AmazonCloudDrive (fixes: #1680) 2017-09-15 18:09:04 +02:00
Nick Craig-Wood
38cc211762 box: fix Update to send the correct name #97
This caused problems with the UTF Normalization with files being
continuously re-uploaded.
2017-09-15 12:03:08 +01:00
Nick Craig-Wood
e0eabc75c0 drive: change the default for --drive-use-trash to true - fixes #1661 2017-09-15 11:58:50 +01:00
Nick Craig-Wood
798502b204 fs: add more errors to be considered temporary errors
This makes a framework for adding temporary errors identified by
syscall number or by error string.

Fixes #1660
2017-09-14 18:01:43 +01:00
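
As a sketch of the technique (written in modern Go for illustration; the tables and the isRetriable helper are hypothetical, not rclone's implementation), temporary errors can be recognised either by matching a syscall errno or by matching a substring of the error text:

    package main

    import (
        "errors"
        "fmt"
        "strings"
        "syscall"
    )

    // retriableErrnos and retriableStrings are illustrative tables of
    // errors that are worth retrying.
    var retriableErrnos = []syscall.Errno{
        syscall.ECONNRESET,
        syscall.ECONNABORTED,
        syscall.EPIPE,
    }

    var retriableStrings = []string{
        "unexpected EOF reading trailer",
        "connection reset by peer",
    }

    // isRetriable reports whether err looks like a temporary error.
    func isRetriable(err error) bool {
        for _, errno := range retriableErrnos {
            if errors.Is(err, errno) {
                return true
            }
        }
        msg := err.Error()
        for _, s := range retriableStrings {
            if strings.Contains(msg, s) {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(isRetriable(syscall.ECONNRESET))              // true
        fmt.Println(isRetriable(errors.New("permission denied"))) // false
    }
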
Stefan Breunig
9d22f4208f swift: implement streaming uploads (see #1614) 2017-09-14 07:42:16 +02:00
Stefan Breunig
56dedc49e3 rcat: properly report if the upload fails 2017-09-13 20:21:52 +02:00
Girish Ramakrishnan
2f0551074c s3: set session token when using STS 2017-09-12 22:59:29 +01:00
Nick Craig-Wood
d6eb625815 Add Girish Ramakrishnan to contributors 2017-09-12 09:30:03 +01:00
Girish Ramakrishnan
4c45cbea18 copy: error out if dst could not be listed 2017-09-12 09:29:44 +01:00
Nick Craig-Wood
897690d997 Add Jan Varho to contributors 2017-09-12 09:28:18 +01:00
Jan Varho
5a1351f141 s3: Document glacier transitions and behavior 2017-09-12 09:27:32 +01:00
Jan Varho
c22be38747 s3: Error message for objects in glacier 2017-09-12 09:27:32 +01:00
Oliver Heyme
f91f89d409 onedrive: Removed second browser authentication and enabled headless mode #254 2017-09-12 09:21:19 +01:00
Oliver Heyme
113f43ec42 oauthutil: Made GetToken and PutToken exported (required for OneDrive Business) 2017-09-12 09:21:06 +01:00
Oliver Heyme
7ef18b6b35 onedrive: Support for OneDrive for Business added #254
- 2 tests fail (MimeType and modification date when copying)
- no headless setup
- uses the credentials for the "rclonetest" app I have created
2017-09-12 09:20:36 +01:00
Stefan Breunig
a91448c83a rcat: honor --dry-run even for small files 2017-09-11 22:28:16 +02:00
Stefan Breunig
80b1f2a494 rcat: configurable small files cutoff and implement proper upload verification 2017-09-11 08:26:53 +02:00
Stefan Breunig
57817397a0 rcat: directly upload small files without streaming them 2017-09-11 08:25:34 +02:00
Stefan
10fa2a7806 snapd: remove snapd because the build fails (see #1188, #1595, #1618) 2017-09-10 07:44:13 +02:00
Stefan
9a62d2f8ad Makefile: avoid using deprecated xargs arguments 2017-09-10 07:43:13 +02:00
ishuah91
49816e67bd yandex: implement cleanup (empty trash) - addresses #575 2017-09-08 11:37:39 +01:00
Jon Craton
fe536f3fa8 Typo fix in changelog 2017-09-06 16:13:24 +01:00
Nick Craig-Wood
c54d513bdd Add ishuah91 to contributors 2017-09-06 16:12:29 +01:00
ishuah91
dd975ab00d drive: implement cleanup (empty trash) - addresses #575 2017-09-06 16:12:00 +01:00
Nick Craig-Wood
2944f7603d s3: read 1000 items in listings #1653
This fixes directory listings with wasabi which fail if you supply
more than the allowed 1000 items as a parameter.  rclone used to
supply 1024 items which exceeds the spec - this works fine with
s3/ceph/etc but fails with wasabi.
2017-09-06 11:13:28 +01:00
Nick Craig-Wood
58f7b4ed7c Clarify --filter-from docs 2017-09-01 11:35:26 +01:00
Nick Craig-Wood
cbea06026a Make check obey --ignore-size - fixes #1643 2017-09-01 11:20:41 +01:00
Nick Craig-Wood
8207af9460 b2: Fix SHA1 mismatch when downloading files with no SHA1 #678
Some large files (depending on which version of rclone they were
uploaded with and where they were uploaded from) don't have an SHA1,
so we can't check it in that case.
2017-08-31 21:39:41 +01:00
Nick Craig-Wood
921fcc0723 Add Josiah White to contributors 2017-08-31 21:39:41 +01:00
Josiah White
445fc55772 Ignore return from patch request on failure. 2017-08-31 21:39:00 +01:00
Nick Craig-Wood
09fbbdbb04 Add Daniel Jagszent to contributors 2017-08-31 16:46:44 +01:00
Daniel Jagszent
4b0e983323 Local: Make documentation consistent with code
Change flag `--no-local-unicode-normalization` to `--local-no-unicode-normalization` since that's the way the flag is called in the source code.

Fixes #1633
2017-08-31 16:46:14 +01:00
wuyu
ee9f987234 qingstor: Support hash md5 for upload object
* Use a single object upload when files are less than or equal to 67108864 bytes

 * Use a multi-part upload when files are larger than 67108864 bytes, and
   calculate MD5SUMs during the upload process

 * For Mkdir and Rmdir, add a block to wait for the QingStor service to sync
   status, to handle extreme cases such as creating a just-deleted bucket or
   deleting a just-created bucket
2017-08-31 16:41:08 +01:00
Nick Craig-Wood
f407e3da55 Add bpicode to contributors 2017-08-31 16:35:35 +01:00
bpicode
f1f7e0e6f9 support for zsh auto-completion - #983 2017-08-31 16:21:28 +01:00
bpicode
7e93567b18 vendor: update version of github.com/spf13/cobra for zsh support 2017-08-31 16:21:28 +01:00
Nick Craig-Wood
2c8d6e86cc fs: fix gofmt 2017-08-31 16:01:19 +01:00
cbruegg
bb6300b032 Fix bwlimit toggle in conjunction with schedules (Fixes #1607) 2017-08-31 15:33:29 +01:00
Nick Craig-Wood
e96c5b5f39 hubic: don't check the container exists before creating it
This fixes being able to create containers for Hubic.
2017-08-30 15:54:49 +01:00
Nick Craig-Wood
672c410235 Update to using go1.9 as the default go version
Get rid of Makefile spaghetti for avoiding vendor directory where
possible in make check.
2017-08-29 16:39:56 +01:00
Nick Craig-Wood
459cf64403 qingstor: fix errors in debug parameters noticed by go1.9 go vet 2017-08-29 14:19:14 +01:00
Stefan Breunig
0158ab6926 info: add check to stream files with unknown size 2017-08-22 08:00:10 +02:00
Stefan Breunig
4e189fe6e7 fstests: only test uploads with indeterminate size on remotes that support it 2017-08-22 07:19:43 +02:00
Stefan Breunig
b78ecb1568 docs: add optional feature "streaming uploads" to overview table 2017-08-19 14:35:17 +02:00
Stefan Breunig
a122b9fa7a yandex: implement streaming uploads (see #1614) 2017-08-19 14:07:23 +02:00
Stefan Breunig
323daae63e http: immediately fail streaming uploads instead of spooling them first (see #1614) 2017-08-19 12:42:31 +02:00
Stefan Breunig
e754f50778 box: implement streaming uploads (see #1614) 2017-08-19 12:32:56 +02:00
Stefan Breunig
034cf22d4d Add Alex McGrath Kraak to contributors 2017-08-17 06:49:38 +02:00
Alex McGrath Kraak
2cc9071791 http: add --user-agent option. close #1557 2017-08-17 06:49:27 +02:00
Stefan Breunig
b510c70c1e b2: calculate missing hashes on the fly instead of spooling – fixes #1288 2017-08-12 12:57:34 +02:00
Stefan Breunig
001431d326 snapcraft: switch back to go build plugin and only build rclone – see #1188 2017-08-12 09:20:37 +02:00
Stefan Breunig
e64435a5c1 snapcraft: adjust snapcraft-dev build to allow fuse mounting – see #1188 2017-08-11 20:57:13 +02:00
Nick Craig-Wood
9c47b767b4 swift: Configure from environment vars and add endpoint_type - fixes #1542 2017-08-10 21:38:45 +01:00
Nick Craig-Wood
2870874329 azureblob: Read LastModified time of containers in root listing 2017-08-10 20:20:14 +01:00
Nick Craig-Wood
d54fca4e58 dropbox: fix entry doesn't belong in directory error - fixes #1558
This was caused by the unreliable casing in `path_lower` as returned
in the directory listings.  We now ignore everything except the last
element in `path_lower` which is guaranteed to have the correct case.
2017-08-10 13:57:06 +01:00
Nick Craig-Wood
dcbf538416 dropbox: stop using deprecated API methods 2017-08-10 13:57:06 +01:00
Nick Craig-Wood
5b79922b5e vendor: add dropbox/dropbox-sdk-go-unofficial 2017-08-10 13:57:06 +01:00
Nick Craig-Wood
41b2645dec vendor: remove ncw/dropbox-sdk-go-unofficial dependency 2017-08-10 13:57:05 +01:00
Nick Craig-Wood
76226e0147 dropbox: swap back to upstream dropbox/dropbox-sdk-go-unofficial
Now that dropbox/dropbox-sdk-go-unofficial#13 is fixed.
2017-08-10 13:57:05 +01:00
Nick Craig-Wood
76c5aa8533 gcs: Check for errors when testing bucket is OK in mkdir #1590
Previously we would check the bucket's status and on error we would
try to create it.  Now we only try to create it if we got a not found
error, otherwise we report the error to the user.
2017-08-10 10:29:21 +01:00
Nick Craig-Wood
265fb8a5e2 fs: Manage empty directories - fixes #100
During the sync we collect a list of directories which should be empty
and attempt to rmdir them at the end of the sync.  If the directories
are not empty then the rmdir will fail, logging a message but not
erroring the sync.
2017-08-09 21:07:00 +01:00
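
The approach can be pictured with a small sketch (the removeEmptyDirs helper is hypothetical and uses the local filesystem only for illustration): collect candidate directories during the sync, then try to remove them deepest first at the end, logging rather than failing when a directory turns out not to be empty.

    package main

    import (
        "fmt"
        "os"
        "sort"
    )

    // removeEmptyDirs tries to rmdir each candidate directory, deepest
    // first, logging rather than failing when a directory is not empty.
    func removeEmptyDirs(candidates map[string]bool) {
        dirs := make([]string, 0, len(candidates))
        for dir := range candidates {
            dirs = append(dirs, dir)
        }
        // Longest path first so children are attempted before parents.
        sort.Slice(dirs, func(i, j int) bool { return len(dirs[i]) > len(dirs[j]) })
        for _, dir := range dirs {
            if err := os.Remove(dir); err != nil {
                fmt.Printf("not removing %q: %v\n", dir, err)
                continue
            }
            fmt.Printf("removed empty directory %q\n", dir)
        }
    }

    func main() {
        // Candidate directories collected while walking the source.
        removeEmptyDirs(map[string]bool{"/tmp/sync/a": true, "/tmp/sync/a/b": true})
    }
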
Nick Craig-Wood
8a1a900733 fstest: use Feature.CanHaveEmptyDirectories to sharpen tests
Now we actually test whether the directories are present or not,
filtering out empty directories in the test using the
CanHaveEmptyDirectories flag.
2017-08-09 20:55:08 +01:00
Nick Craig-Wood
20ae7d562b fs: Add CanHaveEmptyDirectories and BucketBased feature flags to all remotes 2017-08-09 20:55:08 +01:00
Nick Craig-Wood
c1bfdd893f vendor: update qingstor
dep ensure needed to do this, probably after various vendor merges
2017-08-09 13:03:07 +01:00
Nick Craig-Wood
ec2ea37ad2 fs: Add --disable flag to disable optional features - fixes #1551
Eg to disable server side copy use `--disable copy`, to see a list of
what you can disable, `--disable help`.
2017-08-07 21:34:45 +01:00
Nick Craig-Wood
bced73c947 sftp: fix compile for go1.6 2017-08-07 21:34:05 +01:00
Nick Craig-Wood
5b6585f57d sftp: limit new connections per second 2017-08-07 19:47:49 +01:00
Nick Craig-Wood
c6b844977a sftp: clear the cached hashes on object update 2017-08-07 17:36:59 +01:00
Nick Craig-Wood
47eab397ba sftp: implement connection pooling for multiple ssh connections
A connection may be opened for each `--transfers` and `--checkers`
now.  Connections are checked when putting them in the pool and
getting them out of the pool so it should recover from network errors
much better.

This fixes #1561, fixes #1541, fixes #1381, fixes #1158, fixes #1538
2017-08-07 17:19:37 +01:00
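
The pooling pattern itself can be sketched with a buffered channel (a conceptual sketch with placeholder conn and healthy types, not rclone's sftp code): connections are checked on the way into and out of the pool, and a new one is dialled when no healthy idle connection is available.

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // conn stands in for an ssh/sftp connection.
    type conn struct{ id int64 }

    // healthy is a placeholder for a real liveness check on the connection.
    func (c *conn) healthy() bool { return c != nil }

    // pool keeps idle connections in a buffered channel.
    type pool struct {
        idle chan *conn
        next int64
    }

    func newPool(size int) *pool { return &pool{idle: make(chan *conn, size)} }

    // get returns a healthy idle connection or "dials" a new one.
    func (p *pool) get() *conn {
        select {
        case c := <-p.idle:
            if c.healthy() { // check connections coming out of the pool
                return c
            }
        default:
        }
        return &conn{id: atomic.AddInt64(&p.next, 1)} // dial a new connection
    }

    // put returns a connection to the pool, dropping broken or surplus ones.
    func (p *pool) put(c *conn) {
        if !c.healthy() { // check connections going into the pool
            return
        }
        select {
        case p.idle <- c:
        default: // pool is full - discard
        }
    }

    func main() {
        p := newPool(2)
        c := p.get()
        fmt.Println("got connection", c.id)
        p.put(c) // reused by the next get()
        fmt.Println("reused connection", p.get().id)
    }
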
Nick Craig-Wood
bfe812ea6b dedupe: implement merging of duplicate directories - fixes #1243 2017-08-07 15:36:41 +01:00
Nick Craig-Wood
db1995e63a Add MergeDirs optional interface and implement it for drive 2017-08-07 15:32:47 +01:00
Nick Craig-Wood
81a2ab599f fs: add optional ID to fs.Directory and set it in the remotes which care 2017-08-07 15:31:22 +01:00
Nick Craig-Wood
74687c25f5 sftp: fixup formatting and golint warnings 2017-08-07 14:50:31 +01:00
Nick Craig-Wood
d025066fae Add Christian Brüggemann to contributors 2017-08-06 11:50:20 +01:00
Christian Brüggemann
80ce569874 sftp: Add support for md5 and sha1 hashes where available 2017-08-06 11:49:52 +01:00
Nick Craig-Wood
ee13ea74f1 box: fix multipart upload giving "parts_mismatch" error #97 2017-08-05 21:01:32 +01:00
Stefan Breunig
40f24e0ea3 config: use absolute ConfigPath to ensure newly written config is on the same mount - fixes #1569 2017-08-05 12:13:25 +02:00
Stefan Breunig
b523cfc01d oauthutil: don't show "save failed" error when setting up new remote – fixes #1466 2017-08-05 12:04:42 +02:00
Nick Craig-Wood
38dabcf6b2 azure: correct docs on MD5 and chunked files 2017-08-04 23:54:57 +01:00
Nick Craig-Wood
ee6a35d750 Test compilation of all arches
* Add compile_all step to Makefile
  * Add this to travis
  * Add -compile-only flag to cross-compile.go to save time making the zips
2017-08-04 23:20:26 +01:00
Nick Craig-Wood
92d2e1f8d7 azureblob: rework and complete #801
* Fixup bitrot (rclone and Azure library)
  * Implement Copy
  * Add modtime to metadata under mtime key as RFC3339Nano
  * Make multipart upload work
  * Make it pass the integration tests
  * Fix uploading of zero length blobs
  * Rename to azureblob as it seems likely we will do azurefile
  * Add docs
2017-08-04 22:56:16 +01:00
Nick Craig-Wood
98d238daa4 Add Andrei Dragomir to contributors 2017-08-04 22:56:16 +01:00
Andrei Dragomir
036fd61a50 Added Azure Blob storage support #801 2017-08-04 22:54:27 +01:00
Nick Craig-Wood
91cfcc21ff vendor: add github.com/Azure/azure-sdk-for-go and dependencies 2017-08-04 22:54:27 +01:00
Nick Craig-Wood
132f71d504 qingstor: add missing file to fix plan9 build 2017-08-04 22:54:27 +01:00
Stefan Breunig
861e125a4f local: revert to copy when moving file across file system boundaries – fixes #1176 2017-08-04 23:27:32 +02:00
Stefan Breunig
230e65313a snapcraft: slighty improve buildfile (see #1188) 2017-08-04 21:37:25 +02:00
Nick Craig-Wood
8a185deefa qingstor: Fixes before merge
* use rclone's http.Client for bwlimit, logging, etc
  * remove extraneous fmt.Sprintf from logging
  * fix icon in docs
  * add docs about --fast-list
  * hoist md5 regexp compilation out of function
  * create container if necessary on server side copy
  * keep note of whether the container has been deleted
  * build constraint not to compile for plan9
2017-08-04 19:37:53 +01:00
Nick Craig-Wood
7b9557df90 Add wuyu to contributors 2017-08-04 19:37:53 +01:00
wuyu
ec5b72f8d5 Add new QingStor remote
Add new package qingstor to support QingStor API.

Add new unit tests for it and ran them through, but some test
cases are commented out because of some of the features of QingStor.

Add new docs for it.
2017-08-04 17:25:47 +01:00
wuyu
466dd22b44 vendor: add qingstor-sdk-go for QingStor 2017-08-04 17:09:28 +01:00
Nick Craig-Wood
f682002b84 fs: Make tests create a new bucket rather than purging the old one
This enables QingStor to pass the tests as it has a 2 minute lockout
on deleting the old bucket then creating it again.
2017-08-04 17:09:28 +01:00
Nick Craig-Wood
7d34caac83 cmd: add os and go version to rclone version output 2017-08-04 14:25:55 +01:00
Stefan Breunig
28a18303f3 implement rcat – fixes #230, fixes #1001 2017-08-03 21:42:35 +02:00
Nick Craig-Wood
3e3a59768e fs/test_all: fix after fstest factorisation 2017-08-03 20:01:05 +01:00
Nick Craig-Wood
d4b9bb9894 gen_tests: allow specification of a build tag 2017-08-03 20:01:05 +01:00
Nick Craig-Wood
e01741b557 fs: Cleaning up directories in test is no longer needed
...as it is done in the finalise method.
2017-08-03 20:01:05 +01:00
Nick Craig-Wood
7ec24ad67a fstests: Use a different container after the Rmdir
Use a new directory here.  This is for the container based remotes
which take time to create and destroy a container (eg azure blob)
2017-08-03 20:01:05 +01:00
Nick Craig-Wood
eff10bbc1d Add Oliver Heyme to contributors 2017-08-03 20:01:05 +01:00
Oliver Heyme
73f7278497 oauthutil: Added AuthOptions and shuts down the web server properly
1. This makes AuthOptions a parameter for doConfig, Config and ConfigOffline to enable a Fs to add additional options (required for OneDrive for Business)
2. Fix to properly shut down the webserver receiving the auth information (go1.8)
2017-08-03 19:59:42 +01:00
Nick Craig-Wood
6d59887487 Fix URL encoding issues - fixes #1573
This fixes the confusion between paths which were URL encoded and
paths which weren't.  In particular it allows files to have % in the
name.
2017-08-02 13:19:36 +01:00
Nick Craig-Wood
21aca68680 tree: fix when running under Windows 2017-08-01 14:46:21 +01:00
Nick Craig-Wood
214f5e6411 http: only run the tests on go1.8+ 2017-08-01 12:38:29 +01:00
Nick Craig-Wood
2b5ce6ef51 http: Fix directories with : in #1555 2017-07-31 23:15:31 +01:00
Nick Craig-Wood
b0fd187cba http: fix panic with url encoded content - fixes #1565
This fixes the issue which caused the panic (carrying on after an
error) and the issue which caused the error (double unescaping the
URL).
2017-07-30 23:16:32 +01:00
Nick Craig-Wood
c3cd247d4b Document --dump-bodies using lots of memory - fixes #1516 2017-07-30 10:02:14 +01:00
Nick Craig-Wood
5d911e9450 pacer: Factor TokenDispenser into pacer from box remote 2017-07-29 23:14:47 +01:00
Nick Craig-Wood
a56d51c594 Add Andy Pilate to contributors 2017-07-27 21:18:37 +01:00
Andy Pilate
ef328c5497 Fixes typo in command dedupe definition 2017-07-27 21:17:57 +01:00
Andy Pilate
49e4cdb8b9 Added information about Drive server copies limits 2017-07-27 21:17:24 +01:00
Stefan Breunig
ee52365e88 doc: add FAQ entry for "tcp lookup no such host" - fixes #683 2017-07-27 18:20:25 +02:00
Nick Craig-Wood
f3060caf04 Implement tree command - fixes #1528 2017-07-26 23:06:48 +01:00
Nick Craig-Wood
bfef0bc2e9 vendor: add github.com/a8m/tree 2017-07-26 23:06:48 +01:00
Nick Craig-Wood
da9926d574 vendor: update golang.org/x/sys
Now that https://github.com/golang/go/issues/21136 is fixed
2017-07-26 22:56:17 +01:00
Nick Craig-Wood
ebc8361933 mount: Add notes on Windows limitations from Bill Zissimopoulos 2017-07-26 21:08:24 +01:00
Nick Craig-Wood
71fe046937 fs: Add Find method to DirTree 2017-07-26 16:38:53 +01:00
Nick Craig-Wood
d5ff7104e5 fs: Implement NewDirTree for non --fast-list 2017-07-26 16:38:44 +01:00
Nick Craig-Wood
cd4895690a fstest: Factor test initialisation into Initialise() 2017-07-26 16:38:33 +01:00
Nick Craig-Wood
1ecf2bcbd5 fs: fix typo in --bind description 2017-07-23 23:08:33 +01:00
Nick Craig-Wood
c3d6cc91ec Fix --bind flag changes under go1.6
Correcting 9f24639568
2017-07-23 22:36:32 +01:00
Nick Craig-Wood
6fce1ac267 vendor: roll back golang.org/x/sys to fix compile
Until https://github.com/golang/go/issues/21136 is fixed
2017-07-23 22:24:24 +01:00
Nick Craig-Wood
9f24639568 Add --bind flag for choosing the local addr on outgoing connections - fixes #1087
Supported by all remotes except FTP.
2017-07-23 16:27:39 +01:00
Nick Craig-Wood
8b30023f0d Update MAINTAINERS with how to update the authors file. 2017-07-23 15:06:11 +01:00
Nick Craig-Wood
c507836617 Add Zhiming Wang to contributors 2017-07-23 15:02:19 +01:00
Zhiming Wang
6152bab28d local: add --skip-links to suppress symlink warnings
Give users a way to explicitly acknowledge that symlinks are to be skipped
without warnings.

Fixes #1480.
2017-07-23 15:02:02 +01:00
Nick Craig-Wood
6ae29df4d7 Add commit message and updating a backend sections to CONTRIBUTING 2017-07-23 13:23:42 +01:00
Nick Craig-Wood
de54fd4c64 mount: add docs for windows install 2017-07-23 13:05:02 +01:00
Nick Craig-Wood
859721f3cf Add John Papandriopoulos to contributors 2017-07-23 13:05:02 +01:00
John Papandriopoulos
d134d78979 b2: add --b2-hard-delete to permanently delete instead of hide files - Fixes #1547 2017-07-23 13:02:42 +01:00
Nick Craig-Wood
7b81f12dad box: add docs
* reorder remotes so they are in alphabetical order by full name everywhere
  * update CONTRIBUTING doc
2017-07-23 11:32:34 +01:00
Nick Craig-Wood
d279161cee Implement box storage remote - #97 2017-07-23 11:32:34 +01:00
Nick Craig-Wood
b5bf819256 acd,b2,crypt,drive: add missing upload options 2017-07-23 11:32:34 +01:00
Nick Craig-Wood
384724fd11 rest, b2, onedrive: remove Absolute parameter from rest.Opts and replace with RootURL 2017-07-23 11:32:34 +01:00
Nick Craig-Wood
5f70746d39 rest: Allow RootURL to be overridden 2017-07-23 11:32:34 +01:00
Nick Craig-Wood
088806ba4c rest: add Parameters field to opts for adding URL parameters 2017-07-23 11:32:34 +01:00
Nick Craig-Wood
45ba4ed594 rest: implement multipart uploads 2017-07-23 11:32:34 +01:00
Nick Craig-Wood
edfa1b3a69 oauthutil: fix panic from use of nil context 2017-07-23 11:32:34 +01:00
Nick Craig-Wood
db6009126d Fix test failure with new stretchr/testify - fixes #1550 2017-07-23 08:59:07 +01:00
Nick Craig-Wood
5255cbf5e3 Update godep as part of vendor update 2017-07-23 08:51:57 +01:00
Nick Craig-Wood
eb87cf6f12 vendor: update all dependencies 2017-07-23 08:51:42 +01:00
Nick Craig-Wood
0b6fba34a3 Fix fetch_windows target in Makefile 2017-07-22 20:44:09 +01:00
Nick Craig-Wood
c8b5ee1e54 Start v1.37-DEV development 2017-07-22 20:43:06 +01:00
Nick Craig-Wood
a73ecec11f Version v1.37 2017-07-22 20:04:29 +01:00
Nick Craig-Wood
c223464cd0 mount: fix panic on renames - fixes #1533
Make sure d.items is not nil and improve locking
2017-07-22 11:00:51 +01:00
Nick Craig-Wood
39d09c04a2 drive: Make --drive-trashed-only show all directories - fixes #1524
Without showing all directories it doesn't show trashed files which
are in an untrashed directory.

This isn't an ideal fix, but it makes the feature useable.
2017-07-22 10:03:27 +01:00
Stefan Breunig
db5494b316 document SIGUSR2 to toggle bandwidth limiter (fixes #1424) 2017-07-22 10:49:45 +02:00
Stefan Breunig
c3dab09a94 add Yaroslav Halchenko to contributors 2017-07-22 10:28:12 +02:00
Yaroslav Halchenko
3ddcbce989 DOC: any empty directoryies -> empty directories (fixes #1546) 2017-07-22 10:24:41 +02:00
Nick Craig-Wood
0cf19ef66a Make ListDirSorted check for subdirectories and write test 2017-07-19 09:36:27 +01:00
Nick Craig-Wood
655891170f Check in ListDirSorted that the directory entries all belong 2017-07-18 23:39:42 +01:00
Nick Craig-Wood
93423a0812 swift: fix zero length directory markers showing in the subdirectory listing
This was causing lots of duplicated files to be copied.
2017-07-18 23:38:48 +01:00
Nick Craig-Wood
78f33f5d6e Add gdm85 to contributors 2017-07-18 15:16:17 +01:00
gdm85
209b7da3b2 gcs: Add ability to specify location and storage class via config and command line
* Add gcs-location and gcs-storage-class options for Google Cloud Storage
* Added config options (same as S3)
* Updated configuration example in documentation for Google Cloud Storage
2017-07-18 15:15:29 +01:00
Nick Craig-Wood
6f71260acf Add --tpslimit and --tpslimit-burst to limit transactions per second for HTTP
This is useful if you are being rate limited or banned by your cloud
storage provider.
2017-07-16 17:25:39 +01:00
Nick Craig-Wood
ec6c3f2686 vendor: remove github.com/tsenart/tb 2017-07-16 16:14:44 +01:00
Nick Craig-Wood
62e28d0a72 Replace token bucket limiter github.com/tsenart/tb with golang.org/x/time/rate
In tests tsenart/tb has proved inaccurate at low rates.
2017-07-16 16:14:44 +01:00
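
For reference, the golang.org/x/time/rate token bucket that replaced it is used roughly like this (a generic sketch, not rclone's bwlimit wiring; the 1 MiB/s figure is arbitrary):

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // Allow about 1 MiB/s with bursts of up to 64 KiB.
        limiter := rate.NewLimiter(rate.Limit(1024*1024), 64*1024)

        buf := make([]byte, 32*1024)
        start := time.Now()
        for i := 0; i < 64; i++ {
            // Block until the token bucket has enough tokens for this chunk.
            if err := limiter.WaitN(context.Background(), len(buf)); err != nil {
                fmt.Println("rate limit wait failed:", err)
                return
            }
            // ... write buf to the network here ...
        }
        fmt.Println("sent 2 MiB in", time.Since(start))
    }
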
Nick Craig-Wood
470642f2b7 vendor: add vendor/golang.org/x/time/rate 2017-07-14 05:35:00 +01:00
Nick Craig-Wood
b5002eb6a4 drive: document google docs sometimes fail to download 2017-07-10 23:15:30 +01:00
Nick Craig-Wood
ee5698b3a9 drive: Add docs on duplicated files, and re-copying 2017-07-09 23:32:34 +01:00
Nick Craig-Wood
728ff231ab Link wiki from main website - fixes #1156 2017-07-09 22:48:52 +01:00
Nick Craig-Wood
542f938ce2 website: Decrease spacing between menu items
...as they were overflowing the page before.  Thanks to Amy Craig-Wood
for CSS wrangling!
2017-07-09 22:48:26 +01:00
Nick Craig-Wood
e24d0ac94d Add slack invite to website menu - fixes #1145 2017-07-08 22:30:35 +01:00
Nick Craig-Wood
da2e2544ee Fix tests on Windows 2017-07-08 16:26:41 +01:00
Nick Craig-Wood
72add5ab27 sync: state whether duplicates are objects or directories 2017-07-08 15:42:18 +01:00
Nick Craig-Wood
9ac72ee53f Make commit number in beta version tag be 3 digits always 2017-07-07 21:31:52 +01:00
Nick Craig-Wood
c3dac2e385 dropbox: fix large directory listings 2017-07-07 21:20:07 +01:00
Nick Craig-Wood
92294a4a92 drive: Add --drive-trashed-only and remove obsolete --drive-full-list
* Add --drive-trashed-only to show only the contents of the trash
  * Remove --drive-full-list as it is obsolete
  * Tidy the docs for the drive options
2017-07-06 15:32:57 +01:00
Nick Craig-Wood
69ff009264 Use a stable sort for sorting directory entries
This is useful if there are duplicates. Assuming the remote delivers
the entries in a consistent order, this will give the best user
experience in syncing as it will consistently use the first entry for
the sync comparison.
2017-07-06 14:07:26 +01:00
Nick Craig-Wood
27b157580e Move make_test_files.go into bin 2017-07-06 11:54:57 +01:00
Nick Craig-Wood
3f288bc9ea Added decrypt_names.py to help decoding encrypted logs 2017-07-06 11:53:39 +01:00
Nick Craig-Wood
ce1b9a7daf swift,hubic: fix paged directory listings
This was caused by rclone adjusting the object names.  If the last
object in the listing page happened to be a directory, rclone would
remove the / which caused the next page to start in the wrong place.
2017-07-06 11:31:37 +01:00
Nick Craig-Wood
f0512d1a52 Fix missing fs.Dir -> fs.Directory 2017-07-06 11:31:36 +01:00
Stefan Breunig
51866fbd34 drive: add missing seek to start on retries of chunked uploads
follow up to ee13bc6775
2017-07-05 18:52:04 +02:00
Stefan Breunig
ee13bc6775 drive: fix stats accounting for upload - fixes #970, #968 2017-07-04 19:56:46 +02:00
Nick Craig-Wood
e86f62c3e8 Add rclone info internal command for testing out limits of remotes 2017-07-03 15:05:27 +01:00
Nick Craig-Wood
6c3bf629a1 yandex: fix fs.Name()
Put in tests for fs.Root() and fs.Name() for all remotes
2017-07-03 13:39:31 +01:00
Nick Craig-Wood
575e779b55 Warn about duplicate files when syncing - fixes #1506
Error about unsorted directories and test thoroughly
2017-06-30 21:24:13 +01:00
Nick Craig-Wood
dc56ad9816 sftp, local: refactor to stop storing os.FileInfo in preparation for serialization 2017-06-30 14:27:27 +01:00
Nick Craig-Wood
e7d04fc103 Create fs.Directory interface and use it everywhere 2017-06-30 14:26:59 +01:00
Nick Craig-Wood
e2d7d413ef fs: rename BasicInfo to DirEntry 2017-06-30 14:26:58 +01:00
Nick Craig-Wood
e7e9aa0dfa fs: Remove unused ListFser interface 2017-06-30 14:26:58 +01:00
Nick Craig-Wood
f88300a153 Don't Mkdir at the start of sync - fixes #1131
This is possible now that the bucket based remotes will create the
buckets on demand (9c1e703777).
2017-06-29 12:31:53 +01:00
Nick Craig-Wood
e54087ece1 Fix config tests to save configData which fixes subsequent tests 2017-06-29 12:31:53 +01:00
Nick Craig-Wood
54561fd2bc s3: work around eventual consistency in bucket creation
Deleting a bucket then testing its existence can give the wrong
result.  Work around by keeping a flag as to whether we have deleted
the bucket.
2017-06-29 12:31:52 +01:00
Nick Craig-Wood
479c5a514a swift, s3, gcs: create container if necessary on server side copy 2017-06-28 21:16:07 +01:00
Nick Craig-Wood
f3c7e1a9dd Debug directory creation and removal - fixes #1192 2017-06-27 22:19:35 +01:00
Nick Craig-Wood
70b5b2f5c6 acd, onedrive: fix initialization order for token renewer - fixes #1442 2017-06-27 22:19:35 +01:00
sainaen
d7811f72ad Clarify how 'move' may use server side copying 2017-06-26 22:54:14 +01:00
Nick Craig-Wood
aa20486485 Add --stats-log-level so can see --stats without -v - fixes #1180
The most common use for this flag is likely to be showing the stats
without using -v by using `--stats-log-level NOTICE`.
2017-06-26 22:50:37 +01:00
Nick Craig-Wood
33f302a06b Document workaround for files/dirs with : in - fixes #1331 2017-06-26 16:13:12 +01:00
Nick Craig-Wood
24cb739d1f b2: reduce minimum chunk size to 5MB - fixes #1289 2017-06-26 16:02:46 +01:00
Nick Craig-Wood
f0abd6173d Add Harshavardhana and sainaen to contributors 2017-06-26 12:37:00 +01:00
sainaen
1817d8f631 crypt: Fix typo in cryptcheck's short description 2017-06-26 12:35:20 +01:00
sainaen
a308ad5bd7 Fix typos and punctuation in the 'docs.md'
* Add commas to introductory phrases ('However', 'First', 'For example')
* Consistently capitalize provider names
* Fix some typos ('bandwith', 'integriTIty', etc.)
2017-06-26 12:35:20 +01:00
Nick Craig-Wood
b360527931 mount: fix hang on errored upload
In certain circumstances if an upload failed then the mount could hang
indefinitely. This was fixed by closing the read pipe after the Put
completed.  This will cause the write side to return a pipe closed
error, fixing the hang.

Fixes #1498
2017-06-26 12:08:51 +01:00
Stefan Breunig
52b042971a keep file permissions and try to keep user/group on supported systems (fixes #1467) 2017-06-25 09:05:24 +02:00
Stefan Breunig
2d2778eabf don't delete remote if name does not change while renaming (fixes #1495) 2017-06-25 08:55:54 +02:00
Nick Craig-Wood
d55f8f0492 sftp: add support for using ssh key files #1494
Update docs about macOS and ssh-agent #1218
2017-06-23 16:25:35 +01:00
Nick Craig-Wood
b44d0ea088 drive: convert / in names to a unicode equivalent (/) - fixes #62 2017-06-20 21:27:14 +01:00
Nick Craig-Wood
d981456ddc Add Vasiliy Tolstov to contributors 2017-06-20 21:27:14 +01:00
Nick Craig-Wood
b22c4c4307 http: fix, tidy and rework ready for release
* Fix remaining problems
  * Refactor to make testing easier and add a test suite
  * Make path parsing more robust.
  * Add single file operations
  * Add MimeType reading for objects
  * Add documentation
  * Note go1.7+ is required to build
2017-06-20 21:27:14 +01:00
Nick Craig-Wood
afc8cc550a http: Update interfaces for List/ListR/Put/Update 2017-06-20 21:27:14 +01:00
Vasiliy Tolstov
83b642e98f fix for caddy web server
Signed-off-by: Vasiliy Tolstov <v.tolstov@selfip.ru>
2017-06-20 21:27:14 +01:00
Nick Craig-Wood
d5d635b7f3 http: Fix comments, remove optional methods which don't work 2017-06-20 21:27:14 +01:00
Vasiliy Tolstov
6b89e6c381 add new http remote filesystem
Signed-off-by: Vasiliy Tolstov <v.tolstov@selfip.ru>
2017-06-20 21:27:14 +01:00
Nick Craig-Wood
be0dd09801 vendor: golang.org/x/net/html for http 2017-06-20 21:27:14 +01:00
Nick Craig-Wood
b76cd4abd2 Fix Range header option 2017-06-20 21:27:14 +01:00
Nick Craig-Wood
0dbf1230bc Update CONTRIBUTING with --fast-list 2017-06-20 21:27:14 +01:00
Nick Craig-Wood
4fd9570332 fs: Use an in place filter in ListDirSorted 2017-06-20 21:27:14 +01:00
Harshavardhana
8d77e48190 Minio supports ETags and metadata.
The current doc mentioned lack of ETag and metadata
support, which has since been fixed in many
upstream Minio releases.

Also clean up the doc to show the new startup banner etc.
2017-06-20 08:21:02 +01:00
Nick Craig-Wood
dcce65b2b3 mount/cmount: factor duplicated code into mountlib 2017-06-19 14:36:51 +01:00
Nick Craig-Wood
4ce31555b2 vendor: update github.com/billziss-gh/cgofuse - fixes #1481 2017-06-19 09:53:34 +01:00
Nick Craig-Wood
5ed4bc97f3 travis: reduce number of parallel builds to avoid "Killed" error 2017-06-19 08:16:35 +01:00
Nick Craig-Wood
54e37be591 Only test with -race using go latest 2017-06-19 08:07:50 +01:00
Nick Craig-Wood
eaa717b88a Fix crypt obfuscate tests with Windows 2017-06-18 22:53:19 +01:00
Nick Craig-Wood
bbbc202ee6 Add ftp.md to docs builder and update docs 2017-06-15 20:12:26 +01:00
Nick Craig-Wood
97364fd0b6 ncdu: disable on plan9 and solaris as termbox isn't supported there 2017-06-15 20:10:54 +01:00
Nick Craig-Wood
c34f11a92f rclone ncdu for exploring a remote with a text based user interface. 2017-06-15 17:44:17 +01:00
Nick Craig-Wood
e31fc877e2 vendor: github.com/nsf/termbox-go and dependencies for rclone ncdu 2017-06-15 16:46:32 +01:00
Nick Craig-Wood
e069fc439e crypt: use an in place filter for encrypting directory entries 2017-06-15 16:46:32 +01:00
Nick Craig-Wood
5250fcdf08 core: fix data race in walk
This was detected by the race detector when the client of Walk() sorted entries.
2017-06-15 16:46:32 +01:00
Edward Q. Bridges
9876ba53f8 Updated permissions
As it happens, after testing, the `GetObject` permission is also required to do `HEAD` requests on a given object.
2017-06-14 17:29:21 +01:00
Nick Craig-Wood
64662bef8d Deprecate --old-sync-method - it is replaced with --fast-list
Remove old sync method code.
2017-06-14 16:49:40 +01:00
Nick Craig-Wood
0b8d9084fc test_all: print command line so it can be cut and pasted into bash 2017-06-14 16:49:40 +01:00
Nick Craig-Wood
7be49249d3 Add lsjson command - fixes #1063 2017-06-14 16:49:40 +01:00
Nick Craig-Wood
8a6a8b9623 Change List interface and add ListR optional interface
This simplifies the implementation of remotes.  The only required
interface is now `List` which is a simple one level directory list.

Optionally remotes may implement `ListR` if they have an efficient way
of doing a recursive list.
2017-06-14 16:49:40 +01:00
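As a rough illustration of the split described above, here is a minimal Go sketch using hypothetical types (Entry, Lister and RecursiveLister are not rclone's real interfaces): a required one-level List, an optional ListR, and a generic helper that falls back to recursing with List when ListR is not implemented.

```go
package main

import (
    "fmt"
    "path"
)

// Entry is a hypothetical directory entry.
type Entry struct {
    Remote string
    IsDir  bool
}

// Lister is the only required interface: list a single directory level.
type Lister interface {
    List(dir string) ([]Entry, error)
}

// RecursiveLister is optional, for backends with an efficient recursive listing.
type RecursiveLister interface {
    ListR(dir string) ([]Entry, error)
}

// ListAll uses ListR when available, otherwise recurses with List.
func ListAll(f Lister, dir string) ([]Entry, error) {
    if r, ok := f.(RecursiveLister); ok {
        return r.ListR(dir)
    }
    entries, err := f.List(dir)
    if err != nil {
        return nil, err
    }
    out := append([]Entry(nil), entries...)
    for _, e := range entries {
        if e.IsDir {
            sub, err := ListAll(f, path.Join(dir, e.Remote))
            if err != nil {
                return nil, err
            }
            out = append(out, sub...)
        }
    }
    return out, nil
}

// fakeFs implements only List, so ListAll falls back to recursion.
type fakeFs map[string][]Entry

func (f fakeFs) List(dir string) ([]Entry, error) { return f[dir], nil }

func main() {
    f := fakeFs{
        "":  {{Remote: "a", IsDir: true}, {Remote: "x.txt"}},
        "a": {{Remote: "y.txt"}},
    }
    entries, err := ListAll(f, "")
    if err != nil {
        panic(err)
    }
    fmt.Println(entries)
}
```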
Nick Craig-Wood
6fc88ff32e Use --fast-list flag for sync/copy/move - fixes #1277
Redo test framework to take a -fast-list flag and test remotes with that flag.
2017-06-14 16:49:40 +01:00
Nick Craig-Wood
50928a5027 Implement --fast-list flag.
This is supported by remotes which can do a recursive listing.  It will
use more memory.

This is related to #1277 but doesn't fix that issue yet.
2017-06-14 16:49:40 +01:00
Nick Craig-Wood
3a431056e2 gcs, swift: increase directory listing chunk to 1000 to increase performance 2017-06-14 16:49:40 +01:00
Nick Craig-Wood
53c3e5f0ab Add placeholder support for ListR interface.
The ListR interface will be implemented by remotes that can do a
recursive directory listing more efficiently than just recursing
through the directories.  These include the bucket based remotes.
2017-06-14 16:49:40 +01:00
Nick Craig-Wood
0edb025257 Fixup tests with dirs vs bucket based fs 2017-06-14 16:49:40 +01:00
Nick Craig-Wood
fded4dbea2 yandex: correct error return for listing empty directory 2017-06-14 16:49:40 +01:00
Nick Craig-Wood
7e20e16cff core: Implement Walk directory listing and use in place of Lister
This is in preparation for removing the Lister code and replacing the
fundamental operation in the Fs with listing a single directory.
2017-06-14 16:49:40 +01:00
Nick Craig-Wood
1e88f0702a dropbox: fix oauth configuration
This was broken in c59a292719
2017-06-14 16:46:46 +01:00
Nick Craig-Wood
68333d34a1 dropbox: make setting mod time on existing files work properly
This is a fix left over from the v2 conversion.  Dropbox ignores the
client modification on an incoming file if it was identical to the
existing file.  This change deletes the existing file first before
re-uploading the new one.
2017-06-13 13:58:39 +01:00
Nick Craig-Wood
740b3f6ae2 Fix problems found with ineffassign 2017-06-13 11:52:36 +01:00
Nick Craig-Wood
28fcc53e45 mount test: retry umount as it fails occasionally
This is because of the background releasing of files which happens
after all the files are closed.
2017-06-13 10:52:10 +01:00
Nick Craig-Wood
2ca477c57f swift: make sensible error if the user forgets the container - fixes #1470 2017-06-10 14:44:56 +01:00
Nick Craig-Wood
9a11d3efd9 Revert "Start Cat tests from 2 as onedrive doesn't support ranging from 1"
Now that https://github.com/OneDrive/onedrive-api-docs/issues/543 is
fixed, this can be reverted.

This reverts commit 320c53eab0.
2017-06-10 13:48:00 +01:00
Nick Craig-Wood
10d5377ed8 acd: remove revoked credentials, allow oauth proxy config and update docs 2017-06-10 12:02:34 +01:00
Nick Craig-Wood
ee14efd3c2 config: fix menu selection when no remotes 2017-06-10 11:39:40 +01:00
Nick Craig-Wood
b4be7d65a6 Update build to go1.8.3 2017-06-09 12:06:28 +01:00
Nick Craig-Wood
52e1bfae2a oauth: Allow auth_url and token_url to be set in the config file
If set in the config file, these override the ones configured into the
remote.  This enables alternative oauth servers to be used for all
oauth remotes.  This can only be altered by editing the config file
for the moment.
2017-06-08 20:35:32 +01:00
Nick Craig-Wood
9c1e703777 swift, b2, gcs, s3: Fix moveto and copyto
We now make sure the container/bucket is created before creating any objects.
2017-06-07 14:34:59 +01:00
Nick Craig-Wood
b49821956a Fix copyto/moveto test error (see #1261) 2017-06-07 14:08:46 +01:00
Nick Craig-Wood
a61ba1e7c4 moveto, copyto: report transfers and checks as per move and copy 2017-06-07 13:02:21 +01:00
Nick Craig-Wood
d30cc1e119 Factor RemoteSplit into fs 2017-06-07 12:27:33 +01:00
Nick Craig-Wood
74a3dfc4e1 Fix TestHashSums 2017-06-06 23:21:47 +01:00
Nick Craig-Wood
3fe9448229 drive, acd, onedrive: Cache the directory IDs when reading the parent directory
This makes directory listings much more efficient (one less
transaction needed) and also fixes #1439 (which was caused by having
to look up a directory name with quotes in it, which isn't dealt with
well by the list routine) by not doing a directory lookup at all.
2017-06-05 12:26:30 +01:00
Nick Craig-Wood
a5cfdfd233 drive: add team drive support - fixes #885 2017-06-04 22:38:29 +01:00
Nick Craig-Wood
bdc19b7c8a fstests: fix -remote flag to override test target 2017-06-04 22:38:29 +01:00
Nick Craig-Wood
e92cc8fe2b Add Edward Q. Bridges to contributors 2017-06-04 22:38:10 +01:00
Edward Q. Bridges
6ee4c62cae Add section on required IAM permissions.
cf.: https://github.com/ncw/rclone/issues/1455
2017-06-04 22:37:17 +01:00
Nick Craig-Wood
b047402294 config: Fix save of temp file under Windows - fixes #1458 2017-06-01 16:38:19 +01:00
Nick Craig-Wood
7693cecd17 Add Fabian Möller to contributors 2017-06-01 16:23:48 +01:00
Fabian Möller
558f014d43 migrate Gopkg.toml and Gopkg.lock to new format
Update Gopkg.toml and Gopkg.lock to follow the breaking changes
introduced by https://github.com/golang/dep/pull/644
2017-06-01 16:23:13 +01:00
Nick Craig-Wood
48508cb5b7 Add Ruwbin to contributors 2017-06-01 09:03:56 +01:00
Ruwbin
44c98e8654 fix docs typos 2017-06-01 09:03:19 +01:00
Stefan Breunig
9782c264e9 hand dirCacheTime through again 2017-06-01 09:02:22 +01:00
Stefan
9cede6b372 fully write new config file before moving to target location (fixes #1287)
* fully write new config file before moving to target location (fixes #1287)
* do not fail if there is no previous config; print temporary config path on failure
2017-06-01 08:57:10 +01:00
Stefan Breunig
decd960867 make moveto/copyto no-ops when source and destination are the same (fixes #1261) 2017-05-30 23:01:19 +01:00
Nick Craig-Wood
71028e0f06 dropbox/dbhash: fix errcheck warning 2017-05-30 22:08:49 +01:00
Nick Craig-Wood
52e96bc0e2 dropbox: add missing dbhashsum command
This was missed from 6381959850
2017-05-30 19:26:06 +01:00
Nick Craig-Wood
178ff62d6a vendor: add github.com/ncw/dropbox-sdk-go-unofficial and remove github.com/stacktic/dropbox
In due course this will become github.com/dropbox/dropbox-sdk-go-unofficial
when the fate of https://github.com/dropbox/dropbox-sdk-go-unofficial/pull/14
has been decided.
2017-05-30 15:49:29 +01:00
Nick Craig-Wood
9d335eb5cb dropbox: add low level retries 2017-05-30 14:49:09 +01:00
Nick Craig-Wood
20da3e6352 Add options to Put, PutUnchecked and Update, add HashOption and speed up local
* Add options to Put, PutUnchecked and Update for all Fses
  * Use these to create HashOption
  * Implement this in local
  * Pass the option in fs.Copy

This has the effect that we only calculate hashes we need to in the
local Fs which speeds up transfers significantly.
2017-05-29 12:04:52 +01:00
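A minimal sketch of the idea behind the commit above, using hypothetical types rather than rclone's real fs API: options are passed variadically to Put, and the backend inspects them for a hash hint so it only computes the hash type it actually needs.

```go
package main

import "fmt"

// Option is implemented by anything that can tweak an upload.
type Option interface{ optionName() string }

// HashOption asks the backend to compute only the named hash type.
type HashOption struct{ Type string }

func (h HashOption) optionName() string { return "hash" }

// Put inspects its options and computes just the requested hash type,
// instead of unconditionally hashing with every supported algorithm.
func Put(name string, data []byte, options ...Option) {
    want := "none requested"
    for _, o := range options {
        if h, ok := o.(HashOption); ok {
            want = h.Type
        }
    }
    fmt.Printf("uploading %s (%d bytes), computing hash: %s\n", name, len(data), want)
}

func main() {
    Put("file.txt", []byte("hello"), HashOption{Type: "md5"})
}
```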
Nick Craig-Wood
6381959850 dropbox: support Dropbox content hashing scheme - fixes #1302
* add support to hashing module
  * add dbhashsum to list the hashes
  * add support to dropbox module

This means objects uploaded to and downloaded from Dropbox will have their
hashes checked.

Note after this change local objects are calculating MD5, SHA1 and
DBHASH which is excessive and needs to be fixed.
2017-05-29 12:04:44 +01:00
Nick Craig-Wood
8916455e4f dropbox: implement dropbox hasher #1302 2017-05-29 12:04:34 +01:00
Nick Craig-Wood
8e214e838e dropbox: Update dropbox to use the v2 API #349
This is feature complete with the old version but now supports modification times.
2017-05-29 12:04:33 +01:00
Nick Craig-Wood
23acd3ce01 oauthutil: Don't expect tokens to have refresh URL 2017-05-29 12:04:33 +01:00
Stefan Breunig
a2e3af0523 poll for Google Drive changes when mounted 2017-05-28 17:54:52 +01:00
Nick Craig-Wood
5455d34f8c Fix ssh agent on Windows - fixes #1279 2017-05-26 10:21:07 +01:00
Nick Craig-Wood
84512ac77d vendor: add github.com/xanzy/ssh-agent for #1279 2017-05-26 10:21:06 +01:00
Nick Craig-Wood
1ec0327ed7 vendor: update cgofuse (because dep wanted to!) 2017-05-26 10:15:14 +01:00
Nick Craig-Wood
0f07b63fd1 ftp: convert the old config style to the new config style 2017-05-25 10:16:51 +01:00
Nick Craig-Wood
88ef475629 config: allow keys to be deleted from the config file 2017-05-25 10:15:22 +01:00
Sjur Fredriksen
ade61fa756 Updated FTP to follow SFTP standards, updated documentation 2017-05-25 09:30:15 +01:00
Nick Craig-Wood
cfc5f7bb2d Document another file to edit when making a remote 2017-05-25 09:28:18 +01:00
Nick Craig-Wood
ae9f8304fa Attempt to make async buffer test more reliable 2017-05-24 16:24:06 +01:00
Nick Craig-Wood
55755a8e5b Add Sjur Fredriksen to contributors 2017-05-24 15:59:49 +01:00
Sjur Fredriksen
080050fac2 Update ftp.md
Added information regarding non-standard FTP ports.
2017-05-24 15:59:18 +01:00
Nick Craig-Wood
a243ea6353 sftp: fix under Windows #1432
This was caused by erroneous use of filepath to parse standard unix paths.
2017-05-24 15:39:17 +01:00
Nick Craig-Wood
51d2174c0b ftp: check connection before returning it to the pool #1435
If the last FTP command caused an error, and if the error wasn't a
regular FTP error code, then we check the connection is working using
a NOOP call before returning it to the connection pool.
2017-05-24 14:47:13 +01:00
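The connection-pool check described above can be sketched like this in Go; the conn interface and pool here are hypothetical stand-ins, not the jlaffaye/ftp API. After a suspicious error, a cheap NOOP decides whether the connection goes back into the pool or gets closed.

```go
package main

import (
    "errors"
    "fmt"
)

// conn is a hypothetical pooled connection.
type conn interface {
    NoOp() error // cheap "are you alive?" command
    Close() error
}

type pool struct{ free []conn }

// put returns a connection to the pool, but only after verifying it is
// still usable when the previous command ended in a non-protocol error.
func (p *pool) put(c conn, lastErr error) {
    if lastErr != nil {
        if err := c.NoOp(); err != nil {
            _ = c.Close() // broken transport: discard instead of pooling
            return
        }
    }
    p.free = append(p.free, c)
}

type fakeConn struct{ dead bool }

func (f *fakeConn) NoOp() error {
    if f.dead {
        return errors.New("connection reset")
    }
    return nil
}
func (f *fakeConn) Close() error { return nil }

func main() {
    p := &pool{}
    p.put(&fakeConn{dead: true}, errors.New("read timeout")) // discarded
    p.put(&fakeConn{}, nil)                                  // pooled
    fmt.Println("pooled connections:", len(p.free)) // 1
}
```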
Nick Craig-Wood
e75db0b14d Add Steven Lu to contributors 2017-05-24 08:44:42 +01:00
Steven Lu
c59a292719 Obtain a refresh token for GCD 2017-05-24 08:44:00 +01:00
Nick Craig-Wood
be5b8b8dff Add Bob Potter to contributors 2017-05-24 07:36:38 +01:00
Bob Potter
525220b14e Add --local-no-unicode-normalization flag
Fixes #1411
2017-05-24 07:36:06 +01:00
Nick Craig-Wood
a9d29c2264 ftp: don't pool the connection if file download failed 2017-05-19 17:45:22 +01:00
Nick Craig-Wood
8f54dc06a2 Use build tags to control when and where cmount is built 2017-05-19 17:08:04 +01:00
Nick Craig-Wood
7daf97f90a Add CircleCI badge to README 2017-05-19 16:06:43 +01:00
Nick Craig-Wood
2cae017738 mountlib: fix race condition in cache clear 2017-05-19 15:47:52 +01:00
Nick Craig-Wood
e172f00e0e ftp: fix errors from Close of a stream which hasn't been fully read 2017-05-19 12:28:47 +01:00
Nick Craig-Wood
412dacf8be Add a test for partial reads to all remotes 2017-05-19 12:28:47 +01:00
Nick Craig-Wood
cdacf026e4 ftp: implement server side move and directory move 2017-05-18 20:49:36 +01:00
Nick Craig-Wood
0ca6408580 ftp: rework mkdir to be more efficient 2017-05-18 20:49:36 +01:00
Nick Craig-Wood
9627a6142d ftp: support --contimeout 2017-05-18 20:49:36 +01:00
Nick Craig-Wood
6cc783f20b ftp: stop rmdir being recursive 2017-05-18 20:49:36 +01:00
Nick Craig-Wood
3136a75f4d ftp: add connection pool and remove excess locking 2017-05-18 20:49:36 +01:00
Nick Craig-Wood
a9101f8608 ftp: Fix for go1.6 and go1.7 2017-05-18 20:49:36 +01:00
Nick Craig-Wood
af043eda15 Vendor github.com/jlaffaye/ftp for ftp backend 2017-05-18 20:49:36 +01:00
Nick Craig-Wood
35c210d36f ftp: fix remaining issues to make tests work
* fix root
  * factor ftpConnection
  * fix path munging
  * fix recursive dir loops after update
  * use fs.Trace and comment out debugs
  * re-arrange and supplement docs
2017-05-18 20:49:36 +01:00
Nick Craig-Wood
3ed0440bd2 ftp: use path instead of filepath 2017-05-18 20:49:36 +01:00
Nick Craig-Wood
c13cff37ef ftp: replace URL parser with url.URL 2017-05-18 20:49:36 +01:00
Nick Craig-Wood
fce734662f ftp: fix golint/go vet/errchk errors and move methods into standard order 2017-05-18 20:49:36 +01:00
Nick Craig-Wood
e0ba1a2cd2 ftp: fix bitrot 2017-05-18 20:49:36 +01:00
Antonio Messina
c72fca2711 Add ftp backend - fixes #540 2017-05-18 20:49:36 +01:00
Nick Craig-Wood
ae17d88518 Add Bill Zissimopoulos to contributors 2017-05-18 20:48:47 +01:00
Bill Zissimopoulos
e19fc49a5f add circleci configuration 2017-05-18 20:45:08 +01:00
Bill Zissimopoulos
95c0378e3c update cgofuse dependency to v1.0.1 2017-05-18 20:45:08 +01:00
Nick Craig-Wood
7ee3cfd7c9 Add Igor Kharin to contributors 2017-05-15 21:03:16 +01:00
Igor Kharin
bd2cdeeeab sftp: specify HostKeyCallback in ClientConfig 2017-05-15 21:02:05 +01:00
Nick Craig-Wood
77cd93ef89 Fix tag to 8 digits of commit to make Appveyor and Travis consistent 2017-05-15 20:58:48 +01:00
Nick Craig-Wood
5b063679b5 travis: install libfuse for cmount build and disable on OS X 2017-05-15 17:41:16 +01:00
Nick Craig-Wood
09093a9954 Use appveyor to build the Windows beta releases 2017-05-15 17:41:16 +01:00
Nick Craig-Wood
df0cfa9735 Add -no-clean flag to cross-compile.go 2017-05-15 17:41:16 +01:00
Nick Craig-Wood
64d7489fd2 Add -include, -exclude -cgo to cross-compile.go 2017-05-15 17:41:16 +01:00
Nick Craig-Wood
ecedcd0e7f cmount: stop failing tests on Windows 2017-05-15 17:40:44 +01:00
Nick Craig-Wood
3dff91d691 mount: add missing build constraint to fix Windows build 2017-05-15 17:40:15 +01:00
Nick Craig-Wood
e131ef3714 Fix appveyor tests after vendor update 2017-05-15 16:56:47 +01:00
Nick Craig-Wood
ea0bc278ba cmount: Vendor github.com/billziss-gh/cgofuse 2017-05-15 16:56:47 +01:00
Nick Craig-Wood
b553c23d5b Automate production of zip files for Windows 2017-05-15 16:56:47 +01:00
Nick Craig-Wood
4f954896a8 appveyor: make build include WinFsp and test cmount 2017-05-15 16:56:47 +01:00
Nick Craig-Wood
b259f8b752 cmount, mount, mountlib: make --read-only reject modify operations
Normally mount/cmount use `-o ro` to get the kernel to mark the fs as
read only.  However, this is ignored by WinFsp, so in addition if
`--read-only` is in effect then return EROFS ("Read only File System")
from all methods which attempt to modify something.
2017-05-15 16:56:47 +01:00
Nick Craig-Wood
8be8a8e41b mountlib: on read only open of file, make open pending until first read
This fixes a problem with Windows which seems fond of opening files
just to read their attributes and closing them again.
2017-05-15 16:56:47 +01:00
Nick Craig-Wood
79aa060e21 win-build.bat example bat file for building with WinFsp under Windows 2017-05-15 16:56:46 +01:00
Nick Craig-Wood
f9500729b7 mountlib: fix cross platform tests 2017-05-15 16:56:46 +01:00
Nick Craig-Wood
204a19e67f cmount: Wait for mountpoint to appear on Windows before declaring mounted 2017-05-15 16:56:46 +01:00
Nick Craig-Wood
e6ffe3464c cmount: check for filesystem blowing up before Init is called 2017-05-15 16:56:46 +01:00
Nick Craig-Wood
0384364c3e cmount: pass --FileSystemName under windows 2017-05-15 16:56:46 +01:00
Nick Craig-Wood
763facfd78 cmount: implement --fuse-flag to pass commands to fuse library directly
Useful for `--fuse-flag -h` to see exactly which options the library supports.
2017-05-15 16:56:46 +01:00
Nick Craig-Wood
bc88f1dafa cmount: fix openFile leak 2017-05-15 16:56:46 +01:00
Nick Craig-Wood
0c055a1215 cmount: Statfs: reduce max size of volume for Windows 2017-05-15 16:56:46 +01:00
Nick Craig-Wood
938d7951ab cmount: allow extra options to pass to fuse with -o 2017-05-15 16:56:45 +01:00
Nick Craig-Wood
b4466bd9b1 Add -o uid=-1 -o gid=-1 for Windows/WinFsp 2017-05-15 16:56:45 +01:00
Nick Craig-Wood
31f76aa464 cmount: implement no-ops for Fsync, Chmod, Chown, Access, Fsyncdir and stop using fuse.FileSystemBase 2017-05-15 16:56:45 +01:00
Nick Craig-Wood
c887c164dc cmount: add function tracing 2017-05-15 16:56:45 +01:00
Nick Craig-Wood
115ac00222 mount, mountlib: move function tracing into mount 2017-05-15 16:56:45 +01:00
Nick Craig-Wood
50e79bc087 fs: Implement fs.Trace for tracing entry and exit of functions 2017-05-15 16:56:45 +01:00
Nick Craig-Wood
abda616f84 mountlib: make Nodes also be fmt.Stringer so they debug nicely 2017-05-15 16:56:45 +01:00
Nick Craig-Wood
9c3048580a cmount: fix code quality warnings 2017-05-15 16:56:45 +01:00
Nick Craig-Wood
c1d5faa32a mountlib: fix code quality warnings 2017-05-15 16:56:45 +01:00
Nick Craig-Wood
d127d8686a mountlib: pass options in fsys not as args 2017-05-15 16:56:44 +01:00
Nick Craig-Wood
bc9856b570 Forward port 930ff266f2 to cmount branch
compare checksums on upload/download via FUSE
2017-05-15 16:56:44 +01:00
Nick Craig-Wood
855071cc19 cmount: name the command mount under windows and cmount under linux 2017-05-15 16:56:44 +01:00
Nick Craig-Wood
b179540e80 cmount: fix Getattr to work on directories 2017-05-15 16:56:44 +01:00
Nick Craig-Wood
6a8e4690d3 mountlib: windows fixes for drive letter and timing 2017-05-15 16:56:44 +01:00
Nick Craig-Wood
917ea6ac57 mountlib: make tests work under all platforms 2017-05-15 16:56:44 +01:00
Nick Craig-Wood
7b47a1e842 cmount: set the correct values for uid, gid under Windows 2017-05-15 16:56:44 +01:00
Nick Craig-Wood
bcd87009e2 Fix docs typo 2017-05-15 16:56:44 +01:00
Nick Craig-Wood
caf85737c3 cmount: fix Windows compile (thanks Bill Zissimopoulos) 2017-05-15 16:56:44 +01:00
Nick Craig-Wood
e1516e0159 Forward port 58a82cd578 into cmount branch
allow the fuse directory cache to be cleaned manually
2017-05-15 16:56:43 +01:00
Nick Craig-Wood
ee1111e4c9 cmount: a new mount option based on cgofuse.
This, with the aid of WinFsp, should work on Windows.

Unfinished bits
  * 1 test doesn't pass
  * docs
  * build
2017-05-15 16:56:43 +01:00
Nick Craig-Wood
268fe0004c mount: factor filesystem code into mountlib and mounttest 2017-05-12 21:24:24 +01:00
Nick Craig-Wood
0c92a64bb3 vendor: update spf13/cobra to fix arg parsing 2017-05-12 19:49:32 +01:00
Nick Craig-Wood
8b61692754 vendor: update github.com/aws/aws-sdk-go to get plan9 build fix 2017-05-12 14:24:51 +01:00
Nick Craig-Wood
663e6f3ec0 vendor: patch github.com/aws/aws-sdk-go to fix the build
Temporary until https://github.com/aws/aws-sdk-go/pull/1262 is merged.
2017-05-11 17:11:35 +01:00
Nick Craig-Wood
17633f5460 Require go1.6 for building rclone
This is required because google.golang.org/grpc needs it.
2017-05-11 17:07:49 +01:00
Nick Craig-Wood
98c2d2c41b Switch to using the dep tool and update all the dependencies 2017-05-11 15:39:54 +01:00
Nick Craig-Wood
5135ff73cb Compile 386 builds with "GO386=387" for maximum compatibility #437 2017-05-09 11:58:29 +01:00
Stefan Breunig
58a82cd578 allow the fuse directory cache to be cleaned manually (fixes #803) 2017-05-07 12:08:59 +01:00
Nick Craig-Wood
d86ea8623b Add Yoni Jah second email to contributors 2017-05-02 22:54:11 +01:00
Yoni Jah
cdeeff988e Added RepeatableReader to fs; used in OneDrive with io.LimitedReader to display accurate speed 2017-05-02 22:31:05 +01:00
Stefan Breunig
930ff266f2 compare checksums on upload/download via FUSE 2017-05-02 22:27:38 +01:00
Nick Craig-Wood
d5c0fe632f Add Zahiar Ahmed to contributors 2017-05-02 22:16:16 +01:00
Zahiar Ahmed
3c5c5eeec2 Add us-east-2 (Ohio) and eu-west-2 (London) S3 regions 2017-05-02 22:07:50 +01:00
Martin Kristensen
56f017c60c drive: use explicit fields for all endpoints
Reuses the same fields for all endpoints for simplicity's sake.
Should solve remaining part of #1346
2017-05-02 21:30:45 +01:00
Nick Craig-Wood
b6517840ca Update build to go 1.8.1 2017-04-25 08:10:36 +01:00
Nick Craig-Wood
1ccfea5aa9 Add Anisse Astier to contributors 2017-04-25 08:08:33 +01:00
Anisse Astier
7e858f4b8d dropbox: typo
dropbix -> dropbox.
2017-04-25 08:07:37 +01:00
Martin Kristensen
7b4f368307 acd: fix typo in log message for temp link download 2017-04-25 08:07:00 +01:00
Nick Craig-Wood
06a3502ed8 Script to update authors.md automatically from the git changelog 2017-04-24 20:36:06 +01:00
Nick Craig-Wood
a9a43144ca Add Too Much IO to contributors 2017-04-24 20:33:51 +01:00
Martin Kristensen
dd968a8ccf drive: nextPageToken field was missing
Fixes the bug found by users in #1346
2017-04-24 19:50:51 +01:00
Martin Kristensen
0d6e1afe54 drive: only request owner field when using --drive-auth-owner-only
This fixes the note @ncw made in #1359
2017-04-24 10:35:42 +01:00
Nick Craig-Wood
7d9faffd4b Add Martin Kristensen to contributors 2017-04-23 17:03:20 +01:00
Martin Kristensen
d7df065320 drive: reduce bandwidth by adding fields for partial responses
Fixes #1346
2017-04-23 17:01:15 +01:00
Michael Ledin
84d4d7f9d9 oauthutil: Print redirection URI if using own credentials. 2017-04-22 10:37:46 +01:00
Nick Craig-Wood
733d6fe56c Add Michael Ledin to contributors 2017-04-22 10:24:33 +01:00
Michael Ledin
8350544092 onedrive: swap to using http://localhost:53682/ as redirect URL.
The previous redirect URL http://localhost.rclone.org:53682/ can't be
used any more in new OneDrive authentication which is a problem for
users trying to make their own credentials.
2017-04-22 10:08:18 +01:00
Nick Craig-Wood
6a63bc2788 Add Hraban Luyat to contributors 2017-04-22 09:39:46 +01:00
Hraban Luyat
66e8c1600e Print password prompts to stderr
This makes rclone with encrypted config better suited for use in
pipelines. E.g.:

$ rclone lsl mydrive:Some/Dir | sort -k 4

If the password prompt ("Enter configuration password") is printed to
stdout, it will be swallowed by sort. By printing it to stderr, you
still see the prompt, without sacrificing compatibility with the unix
pipeline.
2017-04-22 09:38:39 +01:00
Stephen Harris
82b8d68ffb crypt: report the name:root as specified by the user
Rather than the underlying Fs root (which may be encrypted when
filename_encryption is set).

Fixes #1305
2017-04-22 09:28:05 +01:00
Nick Craig-Wood
b86bbcd67e Add Jon Craton to contributors 2017-04-22 09:22:51 +01:00
Jon Craton
38b6d607aa fixed typo 2017-04-22 09:21:44 +01:00
Stephen Harris
e1647a5a08 crypt: Fix obfuscate filename encryption method
Fix issue #1315 where filenames calculated with a base distance of zero
(i.e. the characters add up to 0 (mod 256)) aren't de-obfuscated on reading.
This was due to overloading of "0" to also mean "invalid UTF8; no rotation",
so we remove that double meaning.
2017-04-22 09:16:00 +01:00
Nick Craig-Wood
bc25190fc7 Fix misleading log message with --dry-run - fixes #1309 2017-04-10 16:07:22 +01:00
Yoni Jah
e3a41321cc onedrive: changed QueryEscape to PathEscape - fixes #1296 2017-04-10 15:46:15 +01:00
Stefan Breunig
2fd86c93fc allow modTime to be changed even before all writers are closed (fixes #1197 -- again) 2017-03-31 01:28:08 +02:00
Nick Craig-Wood
2b8c461e04 Add Ihor Dvoretskyi to contributors 2017-03-29 18:12:13 +01:00
Ihor Dvoretskyi
a54692d165 OneDrive vs One Drive
It's better to call this service by its official name.
2017-03-29 18:11:33 +01:00
Nick Craig-Wood
4b4c59a4bb crypt: add integration tests for obfuscate name encryption 2017-03-29 17:57:10 +01:00
Nick Craig-Wood
81d688107e Add Stephen Harris to contributors 2017-03-29 17:57:03 +01:00
Stephen Harris
6e003934fc crypt: add an "obfuscate" option for filename encryption.
This is a simple "rotate" of the filename, with each file having a rot
distance based on the filename.  We store the distance at the beginning
of the filename.  So a file called "go" would become "37.KS".

This is not a strong encryption of filenames, but it should stop automated
scanning tools from picking up on filename patterns.  As such it's an
intermediate between "off" and "standard".  The advantage is that it
allows for longer path segment names.

We use the nameKey as an additional input to calculate the obfuscation
distance.  This should mean that two different passwords will result
in two different keys.

The obfuscation rotation works by splitting the ranges up and handling the cases:
  0-9
  A-Za-z
  0xA0-0xFF
  and anything greater in blocks of 256
2017-03-29 17:56:55 +01:00
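A toy Go sketch of the general rot-with-stored-distance idea described above. It is deliberately simplified (lowercase ASCII only, distance derived from a byte sum plus a key byte) and is not rclone's obfuscate implementation, which handles the full character ranges listed above.

```go
package main

import (
    "fmt"
    "strconv"
    "strings"
)

// obfuscate rotates lowercase letters by a distance derived from the
// name and a key byte, and prefixes the distance so it can be reversed.
func obfuscate(name string, key byte) string {
    dist := int(key)
    for i := 0; i < len(name); i++ {
        dist += int(name[i])
    }
    dist %= 26
    var b strings.Builder
    for _, r := range name {
        if r >= 'a' && r <= 'z' {
            r = 'a' + (r-'a'+rune(dist))%26
        }
        b.WriteRune(r)
    }
    return strconv.Itoa(dist) + "." + b.String()
}

// deobfuscate reverses the rotation using the stored prefix.
func deobfuscate(obf string) (string, error) {
    parts := strings.SplitN(obf, ".", 2)
    if len(parts) != 2 {
        return "", fmt.Errorf("malformed obfuscated name %q", obf)
    }
    dist, err := strconv.Atoi(parts[0])
    if err != nil {
        return "", err
    }
    var b strings.Builder
    for _, r := range parts[1] {
        if r >= 'a' && r <= 'z' {
            r = 'a' + (r-'a'+26-rune(dist%26))%26
        }
        b.WriteRune(r)
    }
    return b.String(), nil
}

func main() {
    obf := obfuscate("go", 42)
    plain, _ := deobfuscate(obf)
    fmt.Println(obf, plain) // e.g. "22.ck go"
}
```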
Dedsec1
37e1b20ec1 Updated .pkgr.yml file to use rclone as its own cli. 2017-03-29 17:48:53 +01:00
Nick Craig-Wood
d1787b50fd Add Yoni Jah to contributors 2017-03-29 17:38:14 +01:00
Yoni Jah
9dfc346998 onedrive: Retry on token expired error, reset upload body on retry 2017-03-29 17:38:07 +01:00
Nick Craig-Wood
9ab4c19945 Add Danny Tsai to contributors 2017-03-29 17:26:03 +01:00
Danny Tsai
3bab119fa5 drive: implement --drive-shared-with-me flag to view shared with me files 2017-03-29 17:23:30 +01:00
Nick Craig-Wood
1fdf3e2aae Add Marvin Watson to contributors 2017-03-29 17:12:17 +01:00
marvwatson
4810aa65a4 Update references from HTTP to HTTPS where possible 2017-03-29 05:38:34 -07:00
Nick Craig-Wood
f798552cf1 Update urls to https after site move 2017-03-29 10:06:22 +01:00
Stefan Breunig
4dc030d081 implement ModTime via FUSE for remotes that support it (fixes #1197) 2017-03-24 09:23:04 +01:00
Nick Craig-Wood
216499d78b Add Mike Tesch to authors 2017-03-19 08:26:41 +00:00
Mike Tesch
60f636ee15 Fix spelling of Unfortunately 2017-03-18 20:22:19 -04:00
Nick Craig-Wood
f0bf117a04 Add Jérôme Vizcaino to authors 2017-03-18 21:24:05 +00:00
Jérôme Vizcaino
788b6ce821 mount: umount dir when program ends with SIGINT (Ctrl+C) or SIGTERM 2017-03-18 21:24:05 +00:00
Nick Craig-Wood
503cd84919 Start v1.36-DEV development 2017-03-18 11:30:59 +00:00
Nick Craig-Wood
118e26f8e2 Version v1.36 2017-03-18 11:16:43 +00:00
Nick Craig-Wood
5355881332 local: fix unnormalised unicode causing problems reading directories #1212 2017-03-16 22:37:56 +00:00
Nick Craig-Wood
b94b50a808 Prepare website for https 2017-03-16 22:36:23 +00:00
Nick Craig-Wood
9b07d32c02 onedrive, drive, amazonclouddrive: make sure we find the root
This fixes copyto copying things to the wrong place - fixes #1231
2017-03-16 09:42:49 +00:00
Nick Craig-Wood
986a2851bf onedrive: make sure we create root for server side copy 2017-03-15 19:40:58 +00:00
Nick Craig-Wood
6474f2c7c2 onedrive: fix uploading empty files with go1.8 2017-03-15 14:01:08 +00:00
Nick Craig-Wood
99f7fe736a onedrive: implement Move #197 2017-03-15 14:01:08 +00:00
Nick Craig-Wood
e80d8db417 Fix typo in option name 2017-03-13 12:51:01 +00:00
Nick Craig-Wood
320c53eab0 Start Cat tests from 2 as onedrive doesn't support ranging from 1
This has been reported here: https://github.com/OneDrive/onedrive-api-docs/issues/543
2017-03-12 14:24:33 +00:00
Nick Craig-Wood
4d5b73df85 Fix TestListDirSorted eventual consistency listing problems 2017-03-12 14:00:22 +00:00
Nick Craig-Wood
0faf82702b onedrive: fix waitForJob to parse errors correctly #1224 2017-03-12 12:00:10 +00:00
Nick Craig-Wood
194a8f56e1 rest: Implement IgnoreStatus option to not parse the error return 2017-03-12 11:44:43 +00:00
Nick Craig-Wood
f046c00d3b onedrive: fix overwrite detection in Copy - fixes #1224 2017-03-11 22:22:13 +00:00
Nick Craig-Wood
488353c977 acd: Fix Move returning nil objects and nil error #1226 2017-03-09 21:32:50 +00:00
Nick Craig-Wood
c45c604997 onedrive: fix NewObject so it doesn't return an object when given a directory 2017-03-06 20:11:54 +00:00
Nick Craig-Wood
b2a4ea9304 fs/buffer: Fix panic on concurrent Read/Close - fixes #1213 2017-03-06 19:22:17 +00:00
Nick Craig-Wood
8dc7bf883d vendor: Update go-acd 2017-03-06 19:21:18 +00:00
Nick Craig-Wood
61f186c8a3 Add missing dependency 2017-03-05 10:24:46 +00:00
Nick Craig-Wood
e88623e3c8 Add cryptcheck and obscure to docs 2017-03-05 10:19:22 +00:00
Nick Craig-Wood
4652db34a4 Update config docs - fixes #1174 2017-03-05 10:14:57 +00:00
Dedsec1
05d72385b5 created .pkgr.yml file for automated apt-get 2017-03-04 12:19:23 +00:00
Dedsec1
9bb408e1a9 Update snapcraft.yaml 2017-03-04 12:05:48 +00:00
Nick Craig-Wood
10e532bce9 Fix --files-from with an empty file copying everything - fixes #1196 2017-03-04 10:12:54 +00:00
Nick Craig-Wood
4ab7e05e02 Fix MimeType propagation
In fs.Copy, don't wrap objects if possible, and if not, then add a
MimeType method into the wrapped object.
2017-03-04 10:10:55 +00:00
Nick Craig-Wood
1cc58e4e09 mount: fix logging for unimplemented file open modes #1195 2017-03-02 22:07:01 +00:00
Nick Craig-Wood
fdaac6df67 local: open files in write only mode so they can write to an rclone mount
Fixes #1195
2017-03-02 22:03:07 +00:00
Nick Craig-Wood
1d42a343d2 b2: fix inconsistent listings and rclone check
This was caused by re-using a variable for the results of a JSON
unmarshal and the unmarshaller picking up existing entries.

See https://forum.rclone.org/t/check-command-gives-unreliable-results/
2017-03-02 15:08:31 +00:00
Nick Craig-Wood
0ce34be41d Fix bulleted list doc formatting errors - fixes #1170 2017-03-02 15:07:25 +00:00
Nick Craig-Wood
5fba913207 local: fix detection of directories in new object creation
This stops the local listing erroring on all symlinks
2017-02-27 17:03:31 +00:00
Nick Craig-Wood
f7252645ba Make the Makefile build rclone with the correct version number by default 2017-02-27 11:53:03 +00:00
Nick Craig-Wood
e48d19f895 acd: make file size warning at Info level so it appears by default 2017-02-26 13:23:12 +00:00
Nick Craig-Wood
6bad0ad9c4 Fix installation instructions to install in /usr/bin not /usr/sbin 2017-02-26 10:09:23 +00:00
Nick Craig-Wood
dc5b7dc102 rest: don't duplicate headers on redirect now go1.8 does it 2017-02-25 21:41:03 +00:00
Nick Craig-Wood
55eafb3a9a gcs: fix depth 1 directory listings 2017-02-25 16:03:29 +00:00
Nick Craig-Wood
5b6dd36307 dropbox, yandex: fix return of wrapped nil introduced in 79e3c67bbd 2017-02-25 15:23:27 +00:00
Nick Craig-Wood
175c39e1d0 b2: fix uploading empty files with go1.8 2017-02-25 15:22:14 +00:00
Nick Craig-Wood
84b12574de sftp: fix detection of file vs directory 2017-02-25 14:31:27 +00:00
Nick Craig-Wood
efbb040e3f yandex: fix single level directory listing 2017-02-25 13:41:24 +00:00
Nick Craig-Wood
79e3c67bbd local, yandex, dropbox: fix NewObject succeeding on a directory #1079
Add tests to make it consistent across all remotes
2017-02-25 11:09:57 +00:00
Nick Craig-Wood
527099ae72 dropbox: normalise the case for single level directory listings #1165
This should fix directory-at-a-time syncs having strange case.
2017-02-24 22:49:29 +00:00
Nick Craig-Wood
e2f0feef3c Add debugging to print hash values on failed hash comparison 2017-02-23 11:23:19 +00:00
Nick Craig-Wood
30e97ad9ec Fix parsing of remotes in moveto and copyto - fixes #1079 2017-02-22 22:09:33 +00:00
Nick Craig-Wood
07dc76eff0 Remove unused test file 2017-02-22 20:58:24 +00:00
Nick Craig-Wood
e59dc81658 Stop --track-renames deleting case folded source files - fixes #1094
What was happening is that when Move was implemented as Copy + Delete,
MoveFile was seeing the files didn't need transferring (because they
were identical) and then deleted the source.

The fix uses Move instead and patches onedrive to disallow a case
folded identical copy (which errors with 500 error)
2017-02-22 19:28:22 +00:00
Nick Craig-Wood
f40443359d Fix exit code docs - fixes #1169 2017-02-22 18:00:56 +00:00
Nick Craig-Wood
6b0f2ef4bd Fix --delete-before deleting files on copy - fixes #1166 2017-02-22 13:17:38 +00:00
Nick Craig-Wood
12aa03f5b8 dropbox: fix depth 1 listing - fixes #1165 2017-02-22 12:48:16 +00:00
Nick Craig-Wood
73a96dc588 Improve directory listing tests to detect issue #1165 2017-02-22 11:53:40 +00:00
Nick Craig-Wood
980cd5bfd8 Put the -beta-latest files at the root of beta.rclone.org - fixes #1047 2017-02-20 18:03:02 +00:00
Nick Craig-Wood
86cc9f3dfb Include git-log.txt into beta releases - fixes #1047 2017-02-20 17:08:07 +00:00
Nick Craig-Wood
1ae604fcf4 cross-compile: make rclone-beta-latest* for download #1047 2017-02-20 16:58:46 +00:00
Nick Craig-Wood
5e93fe96d3 cross-compile: add missing .exe suffix to windows binaries 2017-02-20 16:36:54 +00:00
Nick Craig-Wood
31745320c8 Log the rclone version at the end of the run - fixes #847 2017-02-20 16:36:25 +00:00
Nick Craig-Wood
2da6cd7f84 Introduce AtExit to fix --cpuprofile and --memprofile to write profiles at end of run 2017-02-20 16:33:45 +00:00
Nick Craig-Wood
6e0e1ad9cb Add more description to the snapcraft files 2017-02-18 12:02:04 +00:00
Dedsec1
dd62c94d05 Create dev-snapcraft.yaml for current snapshot rclone 2017-02-18 11:53:12 +00:00
Nick Craig-Wood
ee70b99143 Add Hisham Zarka to contributors 2017-02-18 11:42:26 +00:00
Hisham Zarka
b3a526814e fix --ignore-checksum 2017-02-18 13:13:53 +04:00
Nick Craig-Wood
69a15ae173 Replace gox with a go script to do cross compiling
gox wasn't building the mips binaries for some reason.
2017-02-17 21:54:32 +00:00
Nick Craig-Wood
1d7f95da8e Support MIPS big and little endian - fixes #849 2017-02-17 19:11:08 +00:00
Nick Craig-Wood
8ec57d145e Update vendor directory
Re-added cobra patch 499475bb41
2017-02-17 16:49:51 +00:00
Nick Craig-Wood
3ef9f6f016 mount: add test scripts 2017-02-17 11:37:19 +00:00
Nick Craig-Wood
990b676e13 travis: only run go latest on OS X and include go tip, but allow failures
fixup
2017-02-17 10:34:29 +00:00
Nick Craig-Wood
5cdfe9c7ae Update to go1.8 2017-02-17 09:40:14 +00:00
Nick Craig-Wood
033d1eb7af Refactor Account interface 2017-02-17 09:15:24 +00:00
Nick Craig-Wood
ac62ef430d Prevent double closes on async buffer 2017-02-17 08:55:24 +00:00
Nick Craig-Wood
928be0f1fd mount: fix seek with buffering to use correct interface
Stop the pre-cache before seeking, which stops lots of excess data transfer.
2017-02-17 08:55:24 +00:00
Nick Craig-Wood
6f75290678 Make async buffering start slowly to improve seek performance 2017-02-17 08:26:14 +00:00
Dedsec1
8c2b50c7ed Update snapcraft.yaml 2017-02-16 22:25:44 +00:00
Nick Craig-Wood
2b1695e09b Add Dedsec1 to contributors 2017-02-16 22:22:47 +00:00
Nick Craig-Wood
ef604f6100 mount: implement renaming directories - fixes #954
This also fixes various caching issues renaming files.
2017-02-16 17:42:38 +00:00
Nick Craig-Wood
f3c5745468 Add srcRemote and dstRemote parameters to DirMove #954 2017-02-16 17:42:37 +00:00
Nick Craig-Wood
e4835f535d sftp: remove stray debug 2017-02-16 12:40:29 +00:00
Nick Craig-Wood
33c2873ae9 drive: Fix Rmdir on directories with trashed files - fixes #1040
When we try to delete a directory which is empty apart from trashed
files, we trash the directory rather than deleting it.
2017-02-16 12:29:37 +00:00
Nick Craig-Wood
dac4bb22d3 mount: Make include and exclude filters apply to mount - fixes #1060 2017-02-15 23:28:53 +00:00
Nick Craig-Wood
b52c80e85c sync: don't update mod times if --dry-run set - fixes #1100 2017-02-15 23:09:44 +00:00
Nick Craig-Wood
f15c6b68b6 Re-add the async buffer on seek - fixes #1137 2017-02-15 22:54:21 +00:00
Nick Craig-Wood
3f778d70f7 Add sync.Pool to async reader 2017-02-15 22:37:58 +00:00
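For illustration, a minimal Go sketch of pooling read-ahead buffers with sync.Pool, in the spirit of the async reader change above; the names and chunk size here are assumptions, not rclone's code.

```go
package main

import (
    "fmt"
    "sync"
)

const chunkSize = 1 << 20 // 1 MiB chunks (assumed size for the demo)

// bufPool hands out reusable chunk buffers instead of allocating a
// fresh one for every read.
var bufPool = sync.Pool{
    New: func() interface{} { return make([]byte, chunkSize) },
}

// readChunk borrows a buffer, lets fill write into it, and returns the
// filled prefix.
func readChunk(fill func([]byte) int) []byte {
    buf := bufPool.Get().([]byte)
    n := fill(buf)
    return buf[:n]
}

// release returns a buffer to the pool for reuse.
func release(buf []byte) {
    bufPool.Put(buf[:cap(buf)]) // restore full capacity before pooling
}

func main() {
    chunk := readChunk(func(b []byte) int { return copy(b, "hello") })
    fmt.Println(string(chunk))
    release(chunk)
}
```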
Dedsec1
6fc114d681 Create Ubuntu snap for rclone #1120 2017-02-15 09:56:55 +00:00
Nick Craig-Wood
9a9d09845c mount: put read and write async buffers back - control with --buffer-size #1043 2017-02-14 22:59:52 +00:00
Nick Craig-Wood
7fa687b3e1 fs: Async buffer: use ReadFill to fill the chunks and increase to 1MB 2017-02-14 22:36:37 +00:00
Nick Craig-Wood
493da54113 Add --buffer-size parameter to control buffer size for copy 2017-02-14 22:36:37 +00:00
Nick Craig-Wood
541929258b check: Add --download flag to check all the data, not just hashes 2017-02-13 10:48:26 +00:00
Nick Craig-Wood
370f242fa2 local: Fix interaction between -x flag and --max-depth - fixes #1126
This was causing the by-directory sync to ignore the -x flag because
it was putting directories into the listing which should have been
excluded.
2017-02-13 09:24:29 +00:00
Nick Craig-Wood
7047c67a5e sync: Fix log message containing <nil> 2017-02-13 09:23:21 +00:00
Nick Craig-Wood
18c75a81f9 Add notes on cryptcheck and backups to crypt docs 2017-02-12 16:49:31 +00:00
Nick Craig-Wood
01c747e7db Add cryptcheck command to check integrity of crypt remotes #1102 2017-02-12 16:30:18 +00:00
Nick Craig-Wood
186aedda98 Fix go vet on go 1.7 2017-02-12 12:43:13 +00:00
Nick Craig-Wood
ca0e25b1a1 Remove spurious comment 2017-02-12 10:56:52 +00:00
Nick Craig-Wood
f87a694d10 Make donation page easier to find and add bitcoin address 2017-02-11 23:03:05 +00:00
Nick Craig-Wood
006227baed Replace -v with -vv where necessary or change Debugf to Logf 2017-02-11 20:27:46 +00:00
Nick Craig-Wood
4d28b5ed22 Update list of commands in docs. 2017-02-11 20:27:46 +00:00
Nick Craig-Wood
499475bb41 Fix -vv by temporarily patching vendored cobra
This is a temporary fix until this pull request gets merged

https://github.com/spf13/cobra/pull/391

See original ticket

https://github.com/spf13/pflag/issues/112
2017-02-11 20:27:46 +00:00
Nick Craig-Wood
666dae4229 Add --syslog flag to optionally log to syslog on capable platforms 2017-02-11 20:27:46 +00:00
Nick Craig-Wood
ac1c041377 Redo log level flags
* -vv or --log-level DEBUG
  * -v or --log-level INFO
  * --log-level NOTICE (default)
  * -q --log-level ERROR

Replace Config.Verbose and Config.Quiet with Config.LogLevel

Fixes #739 Fixes #1108 Fixes #1000
2017-02-11 20:22:42 +00:00
Nick Craig-Wood
0366ea39c5 Reassign some logging levels 2017-02-11 17:56:05 +00:00
Nick Craig-Wood
80f53176d9 Rename log functions and factor into own file 2017-02-11 17:54:50 +00:00
Nick Craig-Wood
40c02989f1 acd: Fix panic on token expiry - fixes #1117 2017-02-11 17:49:59 +00:00
Nick Craig-Wood
50e190ff54 cat: don't allocate buffers if not needed to reduce memory usage 2017-02-09 11:46:53 +00:00
Nick Craig-Wood
dd20a297d6 cat: Fix go routine leak 2017-02-09 11:25:36 +00:00
Nick Craig-Wood
c0ad29c06c Clarify logging and docs for --no-traverse incompatibilities - fixes #1059 2017-02-08 22:35:12 +00:00
Nick Craig-Wood
d091d4a8bb rclone cat: add --head, --tail, --offset, --count and --discard
Fixes #819
2017-02-08 08:09:41 +00:00
Nick Craig-Wood
381b845307 acd: Fix nil pointer deref after Move #1098
Don't attempt to read the info in moveNode as there are code paths
which don't; read it again from the directory afterwards.
2017-02-04 12:56:21 +00:00
Nick Craig-Wood
48cdedc97b Re-implement sync routine to work a directory at a time
Multiple directories (up to --checkers worth) are scanned at once.

This uses much less memory than the previous scheme - only the amount
of memory needed to hold an entire directory listing of objects.

For directory based remotes the speed is unchanged.

For bucket based remotes, instead of doing one API call to list the
whole bucket, it does multiple calls, one for each pseudo directory.
However these are done in parallel so in practice this seems to speed
up directory listings.

This replaces the existing sync method as it performs faster and uses
less memory.

The old sync method is available with the temporary --old-sync-method
flag.

Fixes #517
Fixes #439
Fixes #236
Fixes #1067
2017-02-04 10:30:25 +00:00
Nick Craig-Wood
7c6cd3a9e1 Make --delete-after the default and refactor --delete-{before,during,after} parsing 2017-02-04 10:30:25 +00:00
Nick Craig-Wood
bcdd73369f Ignore --delete-before with --track-renames 2017-02-04 10:30:25 +00:00
Nick Craig-Wood
86bec20b56 sync: factor accumulating the rename checks 2017-02-04 10:30:25 +00:00
Nick Craig-Wood
c3b2b89473 Add ListDirSorted function to list a directory
* fix error return of readFilesFn also
2017-02-04 10:30:25 +00:00
Nick Craig-Wood
85f05c57d1 Clean empty directories between test runs 2017-02-04 10:30:25 +00:00
Nick Craig-Wood
16d91246c4 sftp: Fix remote race on creating directories
Because there is a period of time between checking a directory needs
creating and creating it, this leads to errors where the same
directory is attempted to be created twice.

Add locking on a per-directory basis while doing mkdir to fix this.
2017-02-04 10:29:46 +00:00
Nick Craig-Wood
726cb43be9 Complete SFTP remote #521
* Add unit tests
  * Fix up remote so it passes tests
  * Add docs
2017-02-04 10:29:46 +00:00
Nick Craig-Wood
288302c2cf Make fallback purge delete empty directories too.
This was implemented to make the SFTP unit tests pass.
2017-02-04 10:29:46 +00:00
Nick Craig-Wood
609671aabc Add Jack Schmidt to contributors 2017-02-04 10:29:46 +00:00
Jack Schmidt
b9a8315696 Basic SFTP support, Issue #521 2017-02-04 10:29:18 +00:00
Jack Schmidt
27e18b6efa sftp: add required packages to vendor 2017-02-04 10:29:18 +00:00
Nick Craig-Wood
9d331ce04b Implement --ignore-checksum flag
Fixes #793 Fixes #863 Fixes #981
2017-02-03 08:11:10 +00:00
Nick Craig-Wood
916569102c b2: constrain memory usage when doing multipart uploads #439
Each part of a multipart upload takes 96M of memory, so we make sure
that we don't use more than `--transfers` * 96M of memory buffering
the multipart uploads.

This has the consequence that some uploads may appear to be at 0% for
a while, however they will get going eventually so this won't
re-introduce #731.
2017-02-03 08:03:04 +00:00
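The memory cap described above can be sketched with a counting semaphore in Go: at most `transfers` part buffers exist at any time, so buffering stays under transfers × part size. This is an illustrative sketch rather than the b2 backend's code, and the demo uses 1 MiB parts instead of 96M.

```go
package main

import (
    "fmt"
    "sync"
)

// uploadParts uploads numParts parts while allowing only "transfers"
// part buffers (partSize bytes each) to exist at once.
func uploadParts(numParts, transfers, partSize int) {
    sem := make(chan struct{}, transfers) // counting semaphore
    var wg sync.WaitGroup
    for part := 1; part <= numParts; part++ {
        sem <- struct{}{} // block until a buffer slot is free
        wg.Add(1)
        go func(part int) {
            defer wg.Done()
            defer func() { <-sem }() // release the slot when done
            buf := make([]byte, partSize)
            _ = buf // fill and upload the buffer here
            fmt.Println("uploaded part", part)
        }(part)
    }
    wg.Wait()
}

func main() {
    // 8 parts, at most 4 in memory at once, 1 MiB each for the demo.
    uploadParts(8, 4, 1<<20)
}
```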
Nick Craig-Wood
28f9b9b611 drive: detect files using file size as well as md5 - fixes #980 2017-02-03 08:00:03 +00:00
Nick Craig-Wood
7679620f4b drive: Experimentally add --drive-list-chunk 2017-02-02 21:49:02 +00:00
Nick Craig-Wood
8a11da4e14 mount: Make fsync be a no-op for directories too #1045 2017-02-02 21:31:41 +00:00
Nick Craig-Wood
f11867d810 Add Jon Yergatian to contributors 2017-02-02 21:14:24 +00:00
Jon Yergatian
6f8501e9a1 s3: Added ca-central-1 region 2017-02-02 21:14:10 +00:00
Nick Craig-Wood
37fe6d56e5 mount: fix docs for umount flags - fixes #1036 2017-01-30 18:17:16 +00:00
Nick Craig-Wood
ff8f11d79c Add Károly Oláh to contributors 2017-01-29 20:55:21 +00:00
okaresz
cbc113492a Add Drive specific option: --drive-skip-gdocs - fixes #1035 2017-01-29 20:53:51 +00:00
Nick Craig-Wood
74702554da onedrive: use token renewer to stop auth errors on long uploads
Fixes #820
2017-01-29 20:45:45 +00:00
Nick Craig-Wood
bd29015022 Factor token renewer from amazonclouddrive to oauthutil 2017-01-29 20:45:44 +00:00
Nick Craig-Wood
2192805360 rclone config: when choosing from a list, allow the value to be entered 2017-01-29 15:51:26 +00:00
Nick Craig-Wood
db0b93c0ad rclone config: allow rename and copy of remotes - fixes #641 2017-01-29 15:37:44 +00:00
Nick Craig-Wood
94947f2523 Implement -L, --copy-links flag to allow rclone to follow symlinks
Fixes #40
2017-01-29 13:43:20 +00:00
Nick Craig-Wood
29c6e22024 mount: Make fsync be a no-op rather than returning an error - fixes #1045 2017-01-29 11:29:42 +00:00
Nick Craig-Wood
390f3cf35b crypt: Add --crypt-show-mapping to show encrypted file mapping
Fixes #1004
2017-01-29 10:14:17 +00:00
Nick Craig-Wood
20c033b484 Add a writing documentation section and update document 2017-01-29 09:47:47 +00:00
Nick Craig-Wood
8068ef96b6 Add Dario Giovannetti to contributors 2017-01-26 10:29:21 +00:00
Dario Giovannetti
9d36258923 Comply with XDG Base Directory specification
Fixes #868
2017-01-26 10:22:08 +00:00
Nick Craig-Wood
9fdeb82328 Fix tests under Windows 2017-01-20 17:12:05 +00:00
Nick Craig-Wood
2abfae283c crypt: fix crypt writer getting stuck in a loop #902
This happened when the underlying reader returned io.ErrUnexpectedEOF.
The error handling for the call to io.ReadFull failed to take this
into account.

io.ErrUnexpectedEOF is reasonably common when SSL connections go wrong
when communicating with ACD, so it manifested itself as transfers from
non-encrypted ACD to encrypted ACD getting stuck.
2017-01-20 16:00:55 +00:00
Nick Craig-Wood
b6848a3edb Fix race in Lister.Finished which was causing the tests to be unreliable 2017-01-19 20:11:17 +00:00
Nick Craig-Wood
e2bf9eb8e9 Implement --suffix for use with --backup-dir only #98
This also makes sure we remove files we are about to override in the
--backup-dir properly.
2017-01-19 20:11:17 +00:00
Nick Craig-Wood
a77659e47d Make directory listing checks more reliable and easier to read 2017-01-19 20:11:17 +00:00
Nick Craig-Wood
e9da14ac2e acd: After moving a file, wait for the file to no longer be in the directory
This fixes a Move followed quickly by a Copy updating the wrong file.
2017-01-19 20:11:17 +00:00
Nick Craig-Wood
a4bf22e620 b2: fix upload url not being refreshed properly #825 2017-01-17 17:34:21 +00:00
Nick Craig-Wood
a6b4065e13 mount: fix retry on network failure when reading off crypt - fixes #1042 2017-01-17 16:32:04 +00:00
Nick Craig-Wood
07ebf35987 Clarify what happens to files already in the --backup-dir DIR 2017-01-16 19:26:56 +00:00
Nick Craig-Wood
47ebd0789c Make "make quicktest" ignore a config file if present for local running
This means "make quicktest" should give the same result as when run by
Travis.
2017-01-16 17:54:18 +00:00
Nick Craig-Wood
166fd50451 oauthutil: copy the config before modifying it
This stops simultaneous use of oauth configs with different client IDs
causing a problem.
2017-01-16 17:33:25 +00:00
Nick Craig-Wood
0604d3dbf2 acd, onedrive: make sure we have found the root before purging
If we don't, purge can try to trash the root node which fortunately
doesn't succeed.
2017-01-16 17:33:25 +00:00
Nick Craig-Wood
1fa258c2b4 Define a new Features() method for Fs
Optional interfaces are becoming more important in rclone;
--track-renames and --backup-dir both rely on them.

Up to this point rclone has used interface upgrades to define optional
behaviour on Fs objects.  However when one Fs object wraps another it
is very difficult for this scheme to work accurately.  rclone has
relied on specific error messages being returned when the interface
isn't supported - this is unsatisfactory because it means you have to
call the interface to see whether it is supported.

This change enables accurate detection of optional interfaces by use
of a Features struct as returned by an obligatory Fs.Features()
method.  The Features struct contains flags and function pointers
which can be tested against nil to see whether they can be used.

As a result crypt and hubic can accurately reflect the capabilities of
the underlying Fs they are wrapping.
2017-01-16 17:33:25 +00:00
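A minimal Go sketch of the Features idea described above, with made-up field and type names rather than rclone's real ones: optional behaviour is advertised as flags and function pointers which callers test against nil, so a wrapping Fs can copy or mask them accurately instead of relying on interface upgrades and error messages.

```go
package main

import "fmt"

// Features advertises what an Fs can do; nil function pointers mean
// "not supported", which a wrapping Fs can copy or mask accurately.
type Features struct {
    CaseInsensitive bool
    Purge           func(dir string) error
    Move            func(src, dst string) error
}

// Fs is the obligatory interface: every backend must report its Features.
type Fs interface {
    Features() *Features
}

type memFs struct{ features Features }

func (f *memFs) Features() *Features { return &f.features }

func main() {
    var f Fs = &memFs{features: Features{
        CaseInsensitive: true,
        Purge:           func(dir string) error { return nil },
        // Move left nil: this backend cannot move server side.
    }}
    if do := f.Features().Move; do != nil {
        _ = do("a", "b")
    } else {
        fmt.Println("Move not supported - falling back to copy + delete")
    }
}
```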
ncw
3745c526f1 Implement --backup-dir - fixes #98
The parameter of backup-dir specifies a remote that all deleted or
overwritten files will be copied to.
2017-01-16 17:33:25 +00:00
Nick Craig-Wood
c123c702ab Fix fs.Overlapping and factor fs.SameConfig 2017-01-14 09:55:53 +00:00
ncw
4aae7bcca6 Factor server side move detection 2017-01-14 09:55:53 +00:00
Nick Craig-Wood
aa62e93094 acd: fix panic when renaming files - fixes #973
Fixed by no longer overwriting the parameters in a retry loop
2017-01-14 09:50:45 +00:00
Nick Craig-Wood
45862f4c16 Add Brandur to contributors 2017-01-12 10:08:40 +00:00
Brandur
3b1e0b66bb Return error on not found from ListFn
This changes `ListFn`'s implementation so that if it encounters a not
found error, instead of sending a fatal error to log, it coordinates the
return of the error between checker goroutines and sends it back to the
caller.

The main impetus here is that it allows an external program compiling
against rclone as a package to handle a not found, where currently it
cannot.

This does change error output on a not found a little bit; we go from
this:

    2017/01/09 21:14:03 directory not found

To this:

    2017/01/09 21:13:44 Failed to ls: directory not found
2017-01-12 10:07:59 +00:00
Nick Craig-Wood
a7d8ccd265 Add T.C. Ferguson to contributors 2017-01-10 13:22:51 +00:00
T.C. Ferguson
d4c923a5cc Add obscure command for generating encrypted passwords for rclone's config 2017-01-10 13:18:09 +00:00
Nick Craig-Wood
e426cb1d1a Add emyarod to contributors 2017-01-10 13:13:14 +00:00
emyarod
3c87a0d0dc Update remote docs to show correct setup process 2017-01-10 13:09:52 +00:00
0xJAKE
499766f6ab Update amazonclouddrive.md
Added details about Amazon Drive's latest trash retention policy.
See:
https://www.reddit.com/r/DataHoarder/comments/5dh96j/files_in_amazon_cloud_drive_trash_now_deleted/
https://www.amazon.com/gp/help/customer/display.html?nodeId=201376760
2017-01-08 10:24:03 -06:00
Nick Craig-Wood
35a6436983 mount: implement proper directory handling (mkdir, rmdir)
Before this change mount only simulated rmdir & mkdir; now it actually
runs mkdir & rmdir on the underlying remote, using the new parameters
to fs.Mkdir and fs.Rmdir.

Fixes #956
2017-01-06 11:24:22 +00:00
Nick Craig-Wood
341745d4d5 Update docs on server side copy 2017-01-05 21:11:46 +00:00
Nick Craig-Wood
78c1f2839e Fix filters to add ** rules to directory rules
This fixes `--exclude ".*{,/**}"` to exclude all . files and
. directories.
2017-01-05 19:33:49 +00:00
Nick Craig-Wood
de2d967abd Stop --track-renames hashing matching files - fixes #984
Also only hash files of the correct size.

This speeds it up a lot.
2017-01-05 17:58:01 +00:00
Marco Paganini
6611d92e21 Only start bandwidth ticker when necessary.
- Only start the token ticker when the timetable has more than one
  entry.
- This fixes the "Scheduled bandwidth change" log message when no
  bwlimit is specified.
- Fixes #987
2017-01-04 19:03:49 -08:00
Nick Craig-Wood
e1a49ca426 Document environment variable usage 2017-01-04 21:38:54 +00:00
Nick Craig-Wood
f73ee5eade Make all config file variables be settable in the environment
These are set in the form RCLONE_CONFIG_remote_option where remote is
the uppercased remote name and option is the uppercased config file
option name.  Note that RCLONE_CONFIG_remote_TYPE must be set if
defining a new remote.

Fixes #616
2017-01-03 22:42:47 +00:00
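A small illustrative Go helper (hypothetical, not rclone's code) showing the naming scheme described above: the remote and option names are upper-cased into RCLONE_CONFIG_<REMOTE>_<OPTION>, and an environment value, if present, overrides the config file value.

```go
package main

import (
    "fmt"
    "os"
    "strings"
)

// configValue prefers an environment override over the stored value.
func configValue(remote, option, fromFile string) string {
    key := "RCLONE_CONFIG_" + strings.ToUpper(remote) + "_" + strings.ToUpper(option)
    if v, ok := os.LookupEnv(key); ok {
        return v
    }
    return fromFile
}

func main() {
    os.Setenv("RCLONE_CONFIG_MYREMOTE_TYPE", "s3")
    fmt.Println(configValue("myremote", "type", "drive")) // prints "s3"
}
```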
Nick Craig-Wood
0d75d2585f Allow all options to be set from environment variables
The option names are munged by changing - to _, making them upper case and
prepending RCLONE_.  The values are as parsed by pflag.
2017-01-03 22:42:47 +00:00
Marco Paganini
3b0f944e23 Add time-based bandwidth limits.
- Change the --bwlimit command line parameter to accept both a limit (as
  before) or a full timetable (formatted as "hh:mm,limit
  hh:mm,limit...")
- The timetable is checked once a minute by a ticker function. A new
  tokenBucket is created every time a bandwidth change is necessary.
- This change is compatible with the SIGUSR2 change to toggle bandwidth
  limits.

This resolves #221.
2017-01-03 21:00:38 +00:00
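A rough Go sketch of the timetable idea described above. The parsing details here (a trailing "k" on the limit, "off" for unlimited, entries assumed to be sorted ascending by time) are assumptions for illustration, not rclone's exact --bwlimit syntax; the point is picking the limit whose start time most recently passed, wrapping to the last entry of the previous day.

```go
package main

import (
    "fmt"
    "strconv"
    "strings"
    "time"
)

type slot struct {
    minuteOfDay int // when this limit starts
    limitKiB    int // 0 means unlimited
}

// parseTimetable parses "hh:mm,limit hh:mm,limit ..." into slots.
func parseTimetable(s string) ([]slot, error) {
    var slots []slot
    for _, tok := range strings.Fields(s) {
        parts := strings.Split(tok, ",")
        if len(parts) != 2 {
            return nil, fmt.Errorf("bad timetable entry %q", tok)
        }
        var hh, mm int
        if _, err := fmt.Sscanf(parts[0], "%d:%d", &hh, &mm); err != nil {
            return nil, err
        }
        limit := 0 // "off" means unlimited
        if parts[1] != "off" {
            n, err := strconv.Atoi(strings.TrimSuffix(parts[1], "k"))
            if err != nil {
                return nil, err
            }
            limit = n
        }
        slots = append(slots, slot{hh*60 + mm, limit})
    }
    return slots, nil
}

// currentLimit returns the limit of the latest slot at or before now,
// falling back to the last slot of the previous day (assumes slots is
// non-empty and sorted by time of day).
func currentLimit(slots []slot, now time.Time) int {
    minute := now.Hour()*60 + now.Minute()
    active := slots[len(slots)-1]
    for _, s := range slots {
        if s.minuteOfDay <= minute {
            active = s
        }
    }
    return active.limitKiB
}

func main() {
    slots, err := parseTimetable("08:00,512k 19:00,10000k 23:00,off")
    if err != nil {
        panic(err)
    }
    fmt.Println(currentLimit(slots, time.Date(2017, 1, 3, 9, 30, 0, 0, time.UTC))) // 512
}
```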
Nick Craig-Wood
aaeab58ce6 Add Lukas Loesche to contributors 2017-01-03 20:49:04 +00:00
Lukas Loesche
5894c02a34 Typo: the the -> the in docs and comments 2017-01-03 20:48:26 +00:00
Nick Craig-Wood
f1221b510b Change --track-renames to use the length,hash pair stored in a map
This makes it much faster in the case of many files and uses less
memory.

This also detects use of --no-traverse and disables it.
2017-01-03 20:37:06 +00:00
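As an illustration of the (length, hash) pair map mentioned above, here is a minimal Go sketch with hypothetical types: deletions are indexed by a size+hash key, and each new file that matches a key is treated as a rename of one of those candidates.

```go
package main

import "fmt"

type object struct {
    Remote string
    Size   int64
    Hash   string
}

// renameKey pairs the length and hash so lookups are a single map access.
type renameKey struct {
    size int64
    hash string
}

// findRenames pairs each added object with a deleted object of the same
// size and hash, returning old->new name mappings.
func findRenames(deleted, added []object) map[string]string {
    bySig := make(map[renameKey][]string)
    for _, o := range deleted {
        k := renameKey{o.Size, o.Hash}
        bySig[k] = append(bySig[k], o.Remote)
    }
    renames := make(map[string]string)
    for _, o := range added {
        k := renameKey{o.Size, o.Hash}
        if olds := bySig[k]; len(olds) > 0 {
            renames[olds[0]] = o.Remote
            bySig[k] = olds[1:]
        }
    }
    return renames
}

func main() {
    deleted := []object{{"old/a.txt", 10, "abc"}}
    added := []object{{"new/a.txt", 10, "abc"}}
    fmt.Println(findRenames(deleted, added)) // map[old/a.txt:new/a.txt]
}
```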
Nick Craig-Wood
274ab349f4 sync: Only allow --track-renames if we have a common hash 2017-01-03 20:35:05 +00:00
Nick Craig-Wood
86392fb800 Add Bjørn Erik Pedersen to contributors 2017-01-03 20:35:05 +00:00
Bjørn Erik Pedersen
adc156ab2a docs: Document track-renames option
See #888
2017-01-03 20:35:05 +00:00
Bjørn Erik Pedersen
47d3a450a4 sync: Track and perform server-side renames
This commit adds support for tracking of file renames if the `track-renames` flag is set,
and it then performs server-side renames for remotes that support it, i.e.
remotes that implement either the `Mover` or the `Copier` interface.

Fixes #888
2017-01-03 20:35:05 +00:00
Nick Craig-Wood
5c89fd679d Fix incorrect vendoring for swift library
(vendored a feature branch by accident)
2017-01-03 17:39:56 +00:00
Nick Craig-Wood
1cad759306 Update vendor directory 2017-01-02 16:12:05 +00:00
Nick Craig-Wood
5b8b379feb Version v1.35 2017-01-02 15:33:06 +00:00
Nick Craig-Wood
f538fd8eb4 Update RELEASE procedure 2017-01-02 14:38:14 +00:00
Nick Craig-Wood
4dd5428b13 Fix rmdirs test and integration tests which depend on each other 2017-01-02 14:15:07 +00:00
Nick Craig-Wood
64ec220d5d Fix --no-update-modtime test on remotes which don't support hashes 2016-12-31 15:19:26 +00:00
Nick Craig-Wood
cbfec0d281 Fix tests for missing config file 2016-12-20 15:05:08 +00:00
Nick Craig-Wood
80c044f2d3 Stop overwriting global remote in tests 2016-12-20 14:15:11 +00:00
Nick Craig-Wood
1b2dda8c4c oauthutil: Reload config file off disk before updating token
This fixes the config file being overwritten when two rclones are running.

Fixes #682
2016-12-19 15:04:07 +00:00
Nick Craig-Wood
473bdad00b crypt: Prevent the user pointing crypt at itself - fixes #927
This would hopefully have stopped the issues reported in #784 & #929
2016-12-19 14:09:59 +00:00
Nick Craig-Wood
4482e75f38 Fix golint 2016-12-15 21:02:41 +00:00
Nick Craig-Wood
43c530922a Restore ability for any command to show stats by adding --stats flag
Make the default for the `mount` command not to show stats - this can be
re-enabled by adding a `--stats` parameter.
2016-12-15 17:40:17 +00:00
Nick Craig-Wood
dd60f088ed mount: retry reads on error #873 2016-12-15 17:16:55 +00:00
Nick Craig-Wood
0117aeafbf mount: this removes the async buffering as it was killing seek performance 2016-12-15 17:08:52 +00:00
Nick Craig-Wood
442861581a Update release process - fixes #855 2016-12-14 17:49:26 +00:00
Nick Craig-Wood
4e809c951d acd: Note that Move and DirMove are now supported - fixes #122 2016-12-14 17:45:20 +00:00
Nick Craig-Wood
215fd2a11d b2: use new prefix and delimiter parameters in directory listings
This makes --max-depth 1 directory listings much more efficient (it no
longer lists all the files) and simplifies the code, bringing it into
line with s3/swift/gcs

Fixes #944
2016-12-14 17:37:26 +00:00
Nick Craig-Wood
13b705e227 mount: report the modification times for directories from the remote #940 #950
This stops the modification times for directories just being the
current time and reads them from the remote instead.  This doesn't
take any extra transactions.
2016-12-14 15:26:04 +00:00
Nick Craig-Wood
8083804575 Make sure wrapped retry/fatal errors are never nil to avoid panic 2016-12-13 16:02:14 +00:00
Nick Craig-Wood
ec0916c59d crypt: return unexpected EOF instead of Failed to authenticate decrypted block #873
Streams which truncated early with an EOF message would return a
"Failed to authenticate decrypted block" error.  While technically
correct, this isn't a helpful error message as it masks the underlying
problem.  This changes it to return "unexpected EOF" instead.

The rest of rclone knows it should retry such errors.
2016-12-12 15:20:40 +00:00
Nick Craig-Wood
7392cd1a1a Add section on how to set RCLONE_CONFIG_PASS from a script 2016-12-12 12:33:43 +00:00
Nick Craig-Wood
2656a0e070 Update go to 1.7.4 and 1.6.4 in CI 2016-12-09 17:12:11 +00:00
Nick Craig-Wood
5b5df9ae8e acd: fix the corner cases in Move and DirMove and refactor 2016-12-09 16:57:07 +00:00
Nick Craig-Wood
fafbcc8e2f Make server side move more obvious in debug 2016-12-09 16:57:07 +00:00
Nick Craig-Wood
c55402caa2 drive: create destination directory on Move() 2016-12-09 16:57:07 +00:00
Nick Craig-Wood
d132dc7640 drive: make DirMove more efficient and complain about moving the root 2016-12-09 16:57:07 +00:00
Nick Craig-Wood
48a2e3844d Add optional interface DirCacheFlush for making the tests more reliable
This is defined for the users of dircache: drive, onedrive, and acd.

This helps fix the DirMove tests on acd.
2016-12-09 16:57:07 +00:00
Nick Craig-Wood
d911bf3889 Add links to the forum in the main pages 2016-12-08 10:42:42 +00:00
Nick Craig-Wood
dcf53a1d12 Allows multiple --include/--exclude/--filter options - fixes #875 2016-12-07 13:37:40 +00:00
Nick Craig-Wood
3bdfa284a9 Make rclone lsd obey the filters properly 2016-12-07 11:16:36 +00:00
Nick Craig-Wood
cb9f1eefd2 crypt: fix Mkdir/Rmdir with a dir parameter - fixes rmdirs command 2016-12-06 15:14:41 +00:00
Nick Craig-Wood
dd99a4b3dc Update golang.org/x/sys to enable mips compile #849 2016-12-06 15:12:29 +00:00
Nick Craig-Wood
e79a5de7df local: fix Mkdir/Rmdir with a dir on Windows 2016-12-05 18:09:45 +00:00
Nick Craig-Wood
c24da0b886 fuse: add stats printing and note which files are transferring 2016-12-04 16:59:46 +00:00
Nick Craig-Wood
be4fd51289 fuse: Add bandwidth accounting and buffering
This fixes rclone mount ignoring bwlimit and increases buffering which
should speed up transfers greatly.

Fixes #796
Fixes #690
2016-12-04 16:57:47 +00:00
Nick Craig-Wood
2cbdb95ce5 Only show transfer stats on commands which transfer stuff - fixes #849 2016-12-04 16:52:24 +00:00
Nick Craig-Wood
716ce49ce9 Patch vendored version of stretchr to use latest go-spew 2016-12-04 16:28:27 +00:00
Nick Craig-Wood
34b9ac8a5d Update vendor directory 2016-12-04 16:25:30 +00:00
Nick Craig-Wood
c265f451f2 Implement moveto and copyto commands for choosing a destination name on copy/move
Fixes #227
Fixes #476
2016-12-03 23:43:52 +00:00
Nick Craig-Wood
2058652fa4 Allow overlapping remotes in move when DirMove is supported 2016-12-03 09:08:40 +00:00
Nick Craig-Wood
50b3cfccb1 Factor Move out of sync.go and add remote parameter to Move and Copy 2016-12-03 09:08:40 +00:00
Nick Craig-Wood
5e35aeca9e Regularize the command definition names 2016-12-03 09:08:40 +00:00
Nick Craig-Wood
05798672c8 acd: Fix nil pointer deref - fixes #916 2016-11-30 21:05:35 +00:00
Nick Craig-Wood
7929b6e756 fuse: support R/W files only if truncate is set.
Any reads on the file handle will return an error.  This is to support
windows/samba writes.
2016-11-28 17:56:54 +00:00
Nick Craig-Wood
2756900749 Fix not transferring files that don't differ in size - fixes #911
Due to a logic error, files stored on remotes which support modtime but
not hashes weren't being transferred when updating with a file of the
same size but different modtime.  Instead the modtime of the remote
file was being set to that of the local file.

In practice this affected crypt with all remotes except Amazon Drive
and Dropbox.
2016-11-28 17:08:15 +00:00
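A minimal sketch (not rclone's actual sync code) of the corrected decision for remotes that support modtimes but not hashes: when sizes match but no hash is available, a differing modtime must trigger a transfer rather than just a modtime update.

```go
package main

import (
	"fmt"
	"time"
)

type item struct {
	size    int64
	modTime time.Time
	hash    string // "" when the remote cannot provide a hash
}

// needsTransfer reports whether dst should be overwritten by src.
func needsTransfer(src, dst item) bool {
	if src.size != dst.size {
		return true
	}
	if src.hash != "" && dst.hash != "" {
		return src.hash != dst.hash // hashes decide when available
	}
	// No hash available: fall back to comparing modtimes.
	return !src.modTime.Equal(dst.modTime)
}

func main() {
	a := item{size: 10, modTime: time.Unix(1000, 0)}
	b := item{size: 10, modTime: time.Unix(2000, 0)}
	fmt.Println(needsTransfer(a, b)) // true: same size, different modtime, no hash
}
```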
Nick Craig-Wood
539853df36 Fix rmdirs test 2016-11-28 12:23:24 +00:00
Nick Craig-Wood
651db36674 Add Scott McGillivray to authors 2016-11-28 12:18:30 +00:00
Scott McGillivray
f9df545e3c add --stats-unit option and improve alignment for --stats output 2016-11-28 12:18:30 +00:00
Scott McGillivray
5e62ede8d0 make the parameter format for --stats flag more obvious 2016-11-27 18:57:23 +00:00
Nick Craig-Wood
7f41c9a015 Add Thibault Molleman to contributors 2016-11-27 18:54:24 +00:00
Thibault Molleman
ac7727861e drive docs: update openoffice formats 2016-11-27 18:53:18 +00:00
Nick Craig-Wood
943a0938e7 Add 0xJAKE to contributors 2016-11-27 18:42:11 +00:00
0xJAKE
6580d9478e filtering docs: clarify / referencing root of remote in filters, not root of local drive 2016-11-27 18:41:10 +00:00
0xJAKE
36d411c25d acd docs: clarify --max-size only ignoring files (not splitting) 2016-11-27 18:40:46 +00:00
Nick Craig-Wood
8aae166a5b Add missing rmdirs command 2016-11-27 18:36:13 +00:00
Nick Craig-Wood
aaad0354e6 Clarify match rules in filter docs 2016-11-27 12:10:52 +00:00
Nick Craig-Wood
f3365dd251 Make rclone rmdirs command to delete empty directories - fixes #831 2016-11-27 11:49:31 +00:00
Nick Craig-Wood
aaa1370a36 Add directory parameter to Rmdir and Mkdir #100 #831
This will enable rclone to manage directories properly in the future.
2016-11-26 12:02:53 +00:00
Nick Craig-Wood
c41b67ea08 mount: Implement statfs interface so df works - fixes #894
The data returned is not related to the files on the remote, but
apparently samba needs it.
2016-11-20 22:54:03 +00:00
Nick Craig-Wood
0b562bcabc mount: Note that write is now supported on ACD 2016-11-19 10:54:37 +00:00
Stefan Breunig
1e41a015b5 just use one upload method, as go-acd can determine size itself now
Fixes #874
Fixes #669
2016-11-19 10:52:00 +00:00
Nick Craig-Wood
8b82cc7073 Patch vendored version of stretchr to use latest go-spew 2016-11-19 10:35:00 +00:00
Nick Craig-Wood
e19b30bd26 Add test dependencies back to vendor directory 2016-11-19 10:22:36 +00:00
Nick Craig-Wood
09897c8d0d Save test dependencies too on make update 2016-11-19 10:22:23 +00:00
Nick Craig-Wood
d4ddbcea96 Notes on the vendor directory 2016-11-19 10:09:50 +00:00
Nick Craig-Wood
00af021abb Update vendor dependencies 2016-11-19 10:05:20 +00:00
Nick Craig-Wood
8118623680 Rebuild the godeps from scratch on update and include godep as a build_dep 2016-11-19 10:05:20 +00:00
Nick Craig-Wood
2c594dd996 acd: fix docs for --max-size 2016-11-17 17:30:49 +00:00
Nick Craig-Wood
d8b7156b5c Add Alishan Ladhani to contributors 2016-11-15 16:22:40 +00:00
Alishan Ladhani
d4a609c6cd Update Onedrive doc to reflect file size limit 2016-11-13 23:23:26 -05:00
Stefan Breunig
bf243f30d3 report number of blocks in fuse 2016-11-12 14:10:36 +01:00
Nick Craig-Wood
3ce82facac Add Stefan Breunig to contributors 2016-11-11 18:06:27 +00:00
Stefan Breunig
fb1458815a acd: add support for server side DirMove #122 2016-11-11 18:05:24 +00:00
Stefan Breunig
2243b065e8 acd: filter out bogus children Amazon reports sometimes 2016-11-11 18:05:24 +00:00
Stefan Breunig
718694d5ee acd: server side move #122
This approach (ab)uses the fact that trashed items can have naming
conflicts and that one can change their parents, even though directly
replacing ("moving") them is forbidden.
2016-11-11 18:05:24 +00:00
Stefan Breunig
77f38cb6f1 acd: extend move test to check conflict cases for two step rename+move 2016-11-11 18:05:24 +00:00
Too Much IO
ca017980a3 Add support for server side move operations
Depends on pull request at https://github.com/ncw/go-acd/pull/1
2016-11-11 18:05:24 +00:00
Nick Craig-Wood
4105da206a b2: reauth the account while doing uploads too #825
Originally it was thought the upload URL expiring would provide 401
errors so it was excluded from reauth when doing uploads, but on
re-reading the docs and looking at this issue it seems that 401 errors
are only caused by the account token expiring and not the upload token
expiring.

We will refresh both the upload token and account token on a 401 error
while uploading, and just the account token when we get a 401 at any
other time.
2016-11-07 13:30:51 +00:00
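A minimal sketch of the policy described above, with hypothetical token-refresh helpers: a 401 during an upload refreshes both the upload token and the account token before retrying, while a 401 anywhere else refreshes only the account token.

```go
package main

import (
	"fmt"
	"net/http"
)

type client struct{ accountToken, uploadToken string }

func (c *client) refreshAccountToken() { c.accountToken = "new-account-token" } // hypothetical
func (c *client) refreshUploadToken()  { c.uploadToken = "new-upload-token" }   // hypothetical

// handleStatus reacts to an HTTP status code and reports whether to retry.
func (c *client) handleStatus(status int, uploading bool) (retry bool) {
	if status != http.StatusUnauthorized {
		return false
	}
	c.refreshAccountToken()
	if uploading {
		c.refreshUploadToken()
	}
	return true
}

func main() {
	c := &client{}
	fmt.Println(c.handleStatus(401, true))  // true: both tokens refreshed, retry the upload
	fmt.Println(c.handleStatus(200, false)) // false: nothing to do
}
```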
Nick Craig-Wood
34e7ca90fc Update go-acd vendor to fix error message - fixes #860 2016-11-07 10:20:26 +00:00
Nick Craig-Wood
687abe7803 Fix godep update 2016-11-06 14:50:52 +00:00
Nick Craig-Wood
9b1820a7ad Update go-acd dependency 2016-11-06 14:26:12 +00:00
Nick Craig-Wood
5f320cc540 Add missing vendor files 2016-11-06 10:40:40 +00:00
Nick Craig-Wood
23b8f008e0 Add missing docs changes 2016-11-06 10:40:11 +00:00
Nick Craig-Wood
d95288175f Version v1.34 2016-11-06 10:18:30 +00:00
Nick Craig-Wood
b83f7ac06b Update dependencies pre release 2016-11-05 18:35:34 +00:00
Nick Craig-Wood
f7af730b50 Use a vendor directory for repeatable builds - fixes #816
This is using godep to manage the vendor directory.
2016-11-05 18:18:08 +00:00
Nick Craig-Wood
01be5bff02 Fix ogier/pflag vs spf13/pflag 2016-11-05 18:18:08 +00:00
Nick Craig-Wood
e825df6448 Fix Check on crypted file systems 2016-11-05 18:17:21 +00:00
Nick Craig-Wood
ff41b0d435 Improve error message when source remote isn't found in sync #848 2016-11-05 18:03:55 +00:00
Nick Craig-Wood
e162377ca3 acd: Simplify the wait options into a single --acd-upload-wait-per-gb - fixes #262
This means the feature can be disabled by setting the time to 0.

This also logs the HTTP status for analysis purposes.

Thanks Felix Bünemann for extensive testing and data collection.
2016-11-05 13:57:03 +00:00
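A minimal sketch of the single scaled wait, assuming a hypothetical per-GiB duration in the spirit of --acd-upload-wait-per-gb; a value of 0 disables the wait entirely.

```go
package main

import (
	"fmt"
	"time"
)

// uploadWait returns how long to wait for an upload of the given size to
// appear, given a per-GiB wait duration. Zero disables the feature.
func uploadWait(size int64, perGB time.Duration) time.Duration {
	if perGB <= 0 {
		return 0
	}
	const gb = 1 << 30
	return time.Duration(float64(size) / gb * float64(perGB))
}

func main() {
	fmt.Println(uploadWait(5<<30, 3*time.Minute)) // 15m0s for a 5 GiB file
	fmt.Println(uploadWait(5<<30, 0))             // 0s: feature disabled
}
```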
Nick Craig-Wood
d1080d5456 crypt: fix panic on close after failed seek 2016-11-05 10:01:33 +00:00
Nick Craig-Wood
64b5a76bec mount: detect and deal with seeking beyond end of file - fixes #828 2016-11-05 09:59:36 +00:00
Nick Craig-Wood
7cfb1bdc70 fuse: tests: create the directory before starting tests 2016-11-05 09:57:45 +00:00
Nick Craig-Wood
441951a93b Stop removing failed upload to cloud storage remotes - fixes #559
We do remove a partially written file on local so we don't have
corrupted files lying around.
2016-11-04 21:34:25 +00:00
Nick Craig-Wood
154e91bb23 crypt: Fix data corruption caused by seeking - #828
The corruption was caused when the file was read to the end thus
setting io.EOF and returning the buffers to the pool.  Seek reset the
EOF and carried on using the buffers that had been returned to the
pool thus causing corruption when other goroutines claimed the buffers
simultaneously.

Fix by resetting the buffer pointers to nil when released and claiming
new ones when seek resets EOF.  Also added locking for Read and Seek
which shouldn't be run concurrently.
2016-11-03 22:55:05 +00:00
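A minimal sketch (a hypothetical reader, not rclone's crypt code) of the shape of the fix: the buffer pointer is nilled when the buffer is returned to the pool, a fresh buffer is claimed when Seek clears EOF, and a mutex stops Read and Seek running concurrently.

```go
package main

import (
	"fmt"
	"sync"
)

var bufPool = sync.Pool{New: func() interface{} { return make([]byte, 64*1024) }}

type reader struct {
	mu  sync.Mutex
	buf []byte
	eof bool
}

func (r *reader) Read(p []byte) (int, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.buf == nil && !r.eof {
		r.buf = bufPool.Get().([]byte)
	}
	// ... fill p from the decrypted stream via r.buf (elided) ...
	// On reaching the end of the stream, release the buffer exactly once.
	r.eof = true
	if r.buf != nil {
		bufPool.Put(r.buf)
		r.buf = nil // never keep using a buffer that has been returned
	}
	return 0, nil
}

func (r *reader) Seek(offset int64) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.eof {
		r.eof = false
		r.buf = bufPool.Get().([]byte) // claim a fresh buffer after EOF is reset
	}
	// ... repositioning logic elided ...
}

func main() {
	r := &reader{}
	r.Read(nil)
	r.Seek(0)
	fmt.Println("fresh buffer claimed after seek:", r.buf != nil)
}
```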
Nick Craig-Wood
cb40511807 s3: Allow command line to override acl (Thanks Radek Senfeld) 2016-11-03 21:05:30 +00:00
Nick Craig-Wood
452c68115f acd: Add 502 Bad Gateway to list of errors we retry 2016-11-03 18:56:21 +00:00
Nick Craig-Wood
b35123ba48 Make -x/--one-file-system compile under Windows and add docs 2016-11-03 11:53:49 +00:00
Nick Craig-Wood
978e06a623 Add Durval Menezes to contributors 2016-11-03 11:53:49 +00:00
Durval Menezes
15c9fed60f local: Implement -x/--one-file-system to stay on a single file system 2016-11-03 11:52:40 +00:00
Nick Craig-Wood
2302179237 acd: Fix overwriting a file with a zero length file 2016-11-02 16:39:55 +00:00
Nick Craig-Wood
318e335137 Remove Authorization: headers from --dump-headers output
Add in `--dump-auth` flag to put it back.
2016-11-02 15:53:43 +00:00
Nick Craig-Wood
11301a64fb Add Felix Bünemann to contributors 2016-11-02 13:18:26 +00:00
Felix Bünemann
1c912de9cc Fix ACD file size warning limit
The previous value of 50 GiB was too high; we need to use 50,000 MiB.

For a detailed discussion see issue #215.
2016-11-02 13:15:35 +00:00
Nick Craig-Wood
d1759fdfa9 Add request ID to HTTP debugging to make it easier to trace concurrent flows 2016-10-31 12:01:28 +00:00
Nick Craig-Wood
c102bf28e3 Add Marco Paganini to contributors 2016-10-31 12:01:03 +00:00
Nick Craig-Wood
e65059e431 Fix non-windows/non-unix builds for bwlimit/SIGUSR2 feature and add a mutex
The race detector complained whenever SIGUSR2 was sent to rclone so
this adds a mutex to prevent concurrent access.
2016-10-30 19:20:16 +00:00
Nick Craig-Wood
5454f2abd0 Fix race in checkServerTime 2016-10-30 19:16:27 +00:00
Marco Paganini
cc4f5ba7ba Add support to toggle bandwidth limits via SIGUSR2.
Sending rclone a SIGUSR2 signal will toggle the limiter between off and
the limit set with the --bwlimit command-line option.
2016-10-30 17:46:59 +00:00
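A minimal sketch of the SIGUSR2 toggle, with a mutex guarding the shared state as the race-detector fix above requires; the limiter type, field names, and values are illustrative, not rclone's internals.

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"sync"
	"syscall"
)

type bwLimiter struct {
	mu      sync.Mutex
	limit   int64 // bytes/s when enabled
	enabled bool
}

func (b *bwLimiter) toggle() {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.enabled = !b.enabled
}

func (b *bwLimiter) current() int64 {
	b.mu.Lock()
	defer b.mu.Unlock()
	if !b.enabled {
		return 0 // 0 means unlimited
	}
	return b.limit
}

func main() {
	limiter := &bwLimiter{limit: 1 << 20, enabled: true} // e.g. a 1M limit
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGUSR2)
	go func() {
		for range ch {
			limiter.toggle()
			fmt.Println("bandwidth limit now:", limiter.current())
		}
	}()
	select {} // block forever; send SIGUSR2 to this process to toggle the limit
}
```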
Nick Craig-Wood
062616e4dd mount: update code comments 2016-10-30 17:46:00 +00:00
Nick Craig-Wood
6846a1cc11 Add Tomasz Mazur to contributors 2016-10-27 12:14:33 +01:00
Tomasz Mazur
6fd5ef2d99 Update B2 docs with Data usage, and Crypt section 2016-10-27 12:11:51 +01:00
Nick Craig-Wood
87107413f5 fuse: add missing locking on filehandle read #823 #802 2016-10-27 09:57:52 +01:00
Nick Craig-Wood
5986953317 acd: Reset the headers on tempurl redirect #802 2016-10-26 18:42:41 +01:00
Nick Craig-Wood
9d2dd2c49a crypt: Fix data corruption on seek
This was caused by failing to reset the internal buffer on seek so old
data was read first before the new data.

The unit tests didn't detect this because they were reading to the end
of the file to check integrity and thus emptying the internal buffer.

Both code and unit tests were fixed up.
2016-10-25 15:15:44 +01:00
Nick Craig-Wood
54d99d6ab2 Add a link to the forum in the issue template 2016-10-24 12:34:18 +01:00
Nick Craig-Wood
77b975d16f Note Amazon Drive doesn't support uploads via FUSE yet 2016-10-23 21:46:48 +01:00
Nick Craig-Wood
c464cc6376 mount: fix alignment of 64 bit counter on ARM #813 2016-10-23 17:36:35 +01:00
Nick Craig-Wood
93e84403bb Remove io.SeekStart and replace with 0 as it is go 1.7 only 2016-10-22 12:07:51 +01:00
Nick Craig-Wood
5b8327038a acd: make upload timeouts scale by file size
Fixes #712
Fixes #262
2016-10-22 11:53:06 +01:00
Nick Craig-Wood
eba0a3633b crypt: speed up repeated seeking - fixes #804 2016-10-21 10:03:16 +01:00
Nick Craig-Wood
de73063977 Fix output of crypt objects in logs 2016-10-20 17:46:51 +01:00
Nick Craig-Wood
eca9e8eb70 Update go to 1.7.3 2016-10-20 11:00:15 +01:00
Nick Craig-Wood
a4a44a41ae acd: document non .com login process - fixes #781 2016-10-18 17:33:41 +01:00
Nick Craig-Wood
a02edb9e69 Add rclone mount --dir-cache-time to control caching of directory entries - fixes #680 2016-10-18 17:23:57 +01:00
Nick Craig-Wood
368cce93ff Ignore files with control characters in the names - fixes #689 2016-10-18 15:24:29 +01:00
Nick Craig-Wood
d8d11023d3 mount: update internal position on seek - fixes #774 2016-10-17 20:20:07 +01:00
Nick Craig-Wood
4803ce010e Make exponential backoff work exactly as per google specification - fixes #583 2016-10-17 17:57:09 +01:00
Nick Craig-Wood
b7875fc02a rclone check: show count of hashes that couldn't be checked #700 2016-10-17 16:48:11 +01:00
Nick Craig-Wood
544ca6035a b2: Make sure each upload has at least one upload slot - fixes #731 2016-10-17 16:48:11 +01:00
Nick Craig-Wood
0238558a4b Clarify bits vs bytes in --bwlimit docs 2016-10-14 09:24:50 +01:00
Radek Šenfeld
bc414b698d Command line argument for setting/overriding Amazon S3 ACL 2016-10-13 17:45:11 +01:00
Nick Craig-Wood
ace1e21894 Add listremotes command - fixes #558 2016-10-08 14:24:37 +01:00
Nick Craig-Wood
8a56a6836a Check server time against local time #654 2016-10-08 14:00:50 +01:00
Nick Craig-Wood
83849e0a36 Don't show encrypted password to stop confusion - fixes #656 2016-10-08 11:26:14 +01:00
Nick Craig-Wood
618f2e33e8 Show the BETA_URL in make vars 2016-10-08 11:23:21 +01:00
Nick Craig-Wood
fe53caf997 crypt: clarify docs about subdirectories - fixes #655 2016-10-08 10:52:29 +01:00
Nick Craig-Wood
d83074ae05 crypt: more docs for remote parameter - fixes #686 2016-10-08 10:34:59 +01:00
Nick Craig-Wood
0cef6bd0ac Put SSL download link onto downloads page - fixes #657 2016-10-08 10:21:07 +01:00
Nick Craig-Wood
d42b38699b Make ResponseHeaderTimeout be --timeout not --contimeout fixes #766
This was causing a problem with Amazon Drive which often pauses for a
long time after uploads before returning the response.
2016-10-08 10:12:19 +01:00
Nick Craig-Wood
98804cb860 b2: Fix seek producing corrupted file errors 2016-10-07 12:16:25 +01:00
Nick Craig-Wood
d033e92234 Stop single file and --files-from operations iterating through the source bucket.
This works by making sure directory listings that use a filter only
iterate the files provided in the filter (if any).

Single file copies now don't iterate the source or destination
buckets.

Note that this could potentially slow down very long `--files-from`
lists - this is easy to fix (with another flag probably) if it causes
anyone a problem.

Fixes #610
Fixes #769
2016-10-07 11:39:39 +01:00
Nick Craig-Wood
ec7cef98d8 Update installation docs with macOS walkthrough from Spencer Charest 2016-10-06 17:20:45 +01:00
Nick Craig-Wood
aedad89560 Fetch the tags for travis build 2016-10-06 15:15:21 +01:00
Nick Craig-Wood
f45b3c87bf mount: add --no-seek flag to disable seeking 2016-10-06 13:37:45 +01:00
Nick Craig-Wood
e94850f322 Fix timeouts not working when set to 0 and firing too often - #766 2016-10-06 10:23:23 +01:00
Nick Craig-Wood
de80a540a7 mount: attempt to speed up 2016-10-05 21:04:57 +01:00
Nick Craig-Wood
392a86f585 mount: Fix read flushing - fixes #638 2016-10-05 21:03:56 +01:00
Nick Craig-Wood
265f5b77a7 mount: make files opened for read seekable - fixes #707 2016-10-05 21:03:56 +01:00
Nick Craig-Wood
aef2ac5c04 Add options for Open and implement Range for all remotes 2016-10-05 21:03:56 +01:00
Nick Craig-Wood
75e5e59385 crypt: document mod times and hashes 2016-10-05 16:19:09 +01:00
Nick Craig-Wood
6c21009c76 Add links to forum on contact page and sidebar 2016-10-05 14:55:09 +01:00
Nick Craig-Wood
9192e0a28d Link to git log from beta downloads 2016-10-05 14:35:23 +01:00
Nick Craig-Wood
47e201837f Upgrade font awesome to 4.6.3 2016-10-05 14:31:10 +01:00
Nick Craig-Wood
4847c5695c Deploy beta from Linux build server only 2016-10-05 13:59:04 +01:00
Nick Craig-Wood
391feb698e Automatically upload betas on pushes to master
* Add links to betas on the download page
  * Encourage new issue submitters to use the beta
2016-10-05 12:47:57 +01:00
Nick Craig-Wood
a4714e5b75 Fix \ vs / confusion 2016-10-04 13:39:29 +01:00
Nick Craig-Wood
4dae5ee264 Move build scripts to bin sub-directory 2016-10-04 11:37:31 +01:00
Nick Craig-Wood
7e9739db57 Remove obsolete entry 2016-10-04 11:37:18 +01:00
Nick Craig-Wood
1e557f4bd9 New graphics used by forum.rclone.org 2016-10-04 11:31:42 +01:00
Nick Craig-Wood
ca19204cf4 Add missing doc pages 2016-10-04 11:30:48 +01:00
Nick Craig-Wood
03977354cb Fix golint warnings 2016-10-03 20:40:54 +01:00
Nick Craig-Wood
c43395fafa Add xor-zz to contributors 2016-10-03 20:29:47 +01:00
xor-zz
7cf6fe2209 acd: Fix docs for --max-size option suggestion 2016-10-03 20:28:24 +01:00
Nick Craig-Wood
9ea20bac42 Fix accidentally committed test in move code 2016-10-03 20:16:41 +01:00
Nick Craig-Wood
945f49ab5e Make ContentType be preserved for cloud -> cloud copies - fixes #733 2016-10-03 20:02:04 +01:00
Nick Craig-Wood
6c9a258d82 Fix move command
* Delete src files which already existed in dst - fixes #751
  * Fix deletion of src file when dst file older
2016-10-03 19:58:44 +01:00
Nick Craig-Wood
f2eeb4301c Make --dump-bodies imply --dump-headers 2016-09-22 08:40:37 +01:00
Nick Craig-Wood
c117eaf5a2 drive: add .epub, .odp and .tsv as export formats. 2016-09-19 18:08:10 +01:00
Nick Craig-Wood
3e43ff7414 local: windows - ignore the symlink bit on files
This allows files with reparse points to be backed up.

Fixes #614
2016-09-19 17:29:22 +01:00
Nick Craig-Wood
bb21cf6f0e local: ignore directory based junction points on windows
These are a kind of symlink and rclone doesn't follow symlinks.

Fixes #692
2016-09-19 17:29:22 +01:00
Nick Craig-Wood
bfe6f299d0 Revise list of OSes which can redirect stderr - fixes #698 2016-09-19 17:13:41 +01:00
Nick Craig-Wood
e19ba47875 swift: more docs for setup process - fixes #598 2016-09-19 16:36:36 +01:00
Nick Craig-Wood
7227a2653d Add Asko Tamm to contributors 2016-09-13 19:52:19 +01:00
Asko Tamm
61665ddd10 s3: add support for setting storage class in config and command line 2016-09-13 19:49:44 +01:00
Nick Craig-Wood
0caac70994 Fix build for < go1.7 2016-09-13 11:36:14 +01:00
Nick Craig-Wood
83ba59749f Make failed uploads not count as "Transferred" - fixes #708 2016-09-12 18:15:58 +01:00
Nick Craig-Wood
20a429c048 acd: Only wait for uploads to appear on 408,500,504 errors - fixes #712 2016-09-12 17:50:19 +01:00
Nick Craig-Wood
cf43ca2a7b Document which remotes support which optional features 2016-09-12 17:50:19 +01:00
Nick Craig-Wood
4001e21624 Make sure high level retries show with -q - fixes #648
Also update the exit code documentation describing that.
2016-09-12 17:50:19 +01:00
Nick Craig-Wood
bbf819e2d1 Move versioncheck so it happens earlier in the compile process. 2016-09-12 17:50:19 +01:00
Nick Craig-Wood
0cb9bb3b54 Redo http Transport code
* Insert User-Agent in Transport - fixes #199
  * Update timeouts to use Context
  * Modernise transport
2016-09-12 17:50:19 +01:00
Nick Craig-Wood
5c91623148 mount: Implement FUSE mount options - fixes #653 2016-09-10 09:50:46 +01:00
Nick Craig-Wood
5b913884cf crypt: fix Name and Root 2016-09-09 08:41:21 +01:00
Nick Craig-Wood
346d4c587c swift: don't read metadata for directory marker objects - fixes #703 2016-09-08 16:44:11 +01:00
Fredrik Fornwall
d5b16c8b1a Support linux/arm64 build
Fixes #699
2016-09-08 16:29:15 +01:00
Nick Craig-Wood
e78eeedc75 Add Fredrik Fornwall to contributors 2016-09-08 08:18:33 +01:00
Fredrik Fornwall
87db3cfad3 Use Dup2 library function instead of raw syscall
The Dup2 syscall does not exist on 64-bit arm Linux while its
replacement Dup3 does not exist on non-Linux systems.

Using the unix.Dup2 library function instead of raw syscalls
improves the portability across more platforms.
2016-09-08 08:17:31 +01:00
Nick Craig-Wood
54fdc6866e Make version tag include branch if not master 2016-09-08 08:04:13 +01:00
Nick Craig-Wood
2eaac80c86 b2 with crypt: fix crash when uploading large files - fixes #673 2016-09-05 18:10:01 +01:00
Nick Craig-Wood
b3d0848d09 b2: Fix download of large files - fixes #678
Large files were failing to download with an sha1 mismatch error.
Correct this by making sure we use the sha1 read from the info rather
than the header.
2016-09-05 17:26:04 +01:00
Nick Craig-Wood
0c6990bc95 gcs: Fix compile after removal of SetOpaque
It turns out that the SetOpaque call isn't needed any more, as Google
is no longer returning paths with `%2F` in them, so remove the whole
complication.

Fixes #676
Fixes #660
2016-09-05 16:08:17 +01:00
Nick Craig-Wood
d9bba67d18 b2: return error when we try to create a bucket which someone else owns #645 2016-08-25 21:43:43 +01:00
Nick Craig-Wood
140a3d0aef b2: Fix encrypted uploads #644
This was caused by accidentally letting b2 read the underlying object's SHA1.
2016-08-25 21:26:55 +01:00
Nick Craig-Wood
31fe800d6a Add crypt to the docs index plus a few docs tweaks 2016-08-24 23:48:37 +01:00
Nick Craig-Wood
3996bbb8cb Version v1.33 2016-08-24 23:02:05 +01:00
Nick Craig-Wood
c2599cb116 Fix crypt tests on Windows 2016-08-24 22:21:34 +01:00
Nick Craig-Wood
2c13074f6c drive: document how to make your own client_id - fixes #560 2016-08-24 22:06:41 +01:00
Nick Craig-Wood
059743a1b0 crypt: add to integration tests 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
73cd1f4e88 crypt: Implement DirMover 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
a54806e5c1 Fix Move when underlying remote returns ErrorCantMove 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
e6a0521ca2 Make it possible to test Fs multiple times and use this with crypt
We test both the filename encryption modes for crypt.
2016-08-23 17:45:37 +01:00
Nick Craig-Wood
43eadf278c Remove flattening and replace with {off, standard} name encryption 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
5f375a182d Create TestCrypt remote 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
663dd6ed8b crypt: ask for a second password for the salt 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
226c2a0d83 Implement crypt for encrypted remotes - #219 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
b4b4b6cb1c Allow Fs tests to declare new config items 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
9985fc40f4 Make Password parameters obey Optional flag and offer to generate random ones 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
b1de4c8cba Implement password Option and re-implement editing
Editing now shows all the options for the fs and asks one at a time
whether they should be changed.
2016-08-23 17:45:37 +01:00
Nick Craig-Wood
6a4e424630 Re-implement Obscure/Reveal so they use AES-CTR encryption 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
ebb67c135e Fix listToChan passing nil objects to DeleteFile 2016-08-23 17:45:37 +01:00
Nick Craig-Wood
326dcf2470 Add more troublesome symbols to test cases
These are from #623 #620 #218
2016-08-23 14:28:05 +01:00
Nick Craig-Wood
86eb80ecdc Add Radek Šenfeld to contributors. 2016-08-23 12:25:39 +01:00
Radek Šenfeld
2003ba356b User-configurable Amazon S3 ACL
fixes #413
2016-08-23 12:25:08 +01:00
Nick Craig-Wood
037a000cc8 b2: fix stats accounting for upload - fixes #602 2016-08-22 21:19:38 +01:00
Nick Craig-Wood
8a771450d2 docs: Add hover over links on headings 2016-08-22 17:21:06 +01:00
Nick Craig-Wood
1e7dc06ab8 Fix file encoding 2016-08-22 16:47:06 +01:00
Nick Craig-Wood
ca841c56a8 Disable smart dashes so --flag shows properly in the docs - fixes #632 2016-08-22 16:46:08 +01:00
Nick Craig-Wood
79eebf1993 onedrive: fix URL escaping in file names - eg uploading files with + in them.
Fixes #620
Fixes #218
2016-08-22 10:58:49 +01:00
Nick Craig-Wood
bbccf4acd5 Update go versions
Remove tip for the moment
2016-08-20 14:14:48 +01:00
Nick Craig-Wood
9e7ddd5efc Fix tests when FUSE isn't present 2016-08-20 14:11:21 +01:00
Nick Craig-Wood
6089f443b9 Fix windows build - fixes #628
Try to make clearer the distinction between OS paths and rclone paths
(remotes) so it is harder to muddle them up.
2016-08-20 12:29:54 +01:00
Nick Craig-Wood
84eb7031bb Implement the rclone cat command 2016-08-18 22:45:32 +01:00
Nick Craig-Wood
f22029bf3d Add mount command to implement FUSE mounting of remotes #494
This enables any rclone remote to be mounted and used as a filesystem
with some limitations.

Only supported for Linux, FreeBSD and OS X
2016-08-18 21:54:54 +01:00
Nick Craig-Wood
d7b79b4481 Mark the compiled from source version with -DEV - fixes #627 2016-08-18 21:31:10 +01:00
Nick Craig-Wood
b5faaf7116 Fix double close of abort channel - fixes #592 2016-08-18 18:56:57 +01:00
Nick Craig-Wood
b4f2ada820 b2: on cleanup delete hide marker if it is the current file #604 2016-08-18 18:36:00 +01:00
Nick Craig-Wood
8a66930bd7 acd: document --acd-upload-wait-time 2016-08-18 17:49:49 +01:00
Nick Craig-Wood
2ebeed6753 acd: Fix token expiry during large uploads
When rclone is busy doing lots of very long uploads it doesn't refresh
the token. Amazon will fail uploads that finish when the token is
more than 1 hour past expiry.

Fix this by keeping track of the number of uploads and refreshing the
token when the token expires if there is an upload in progress.
2016-08-18 17:39:23 +01:00
Nick Craig-Wood
23d8ba41d5 oauthutil: implement a timer for token expiry 2016-08-18 17:39:23 +01:00
Nick Craig-Wood
4f9e805d44 acd: Work around 408 REQUEST_TIMEOUT and 504 GATEWAY_TIMEOUT errors
Amazon Drive sometimes returns errors at the end of large uploads

  * 408 REQUEST_TIMEOUT
  * 504 GATEWAY_TIMEOUT
  * 500 Internal error

The file may have been uploaded correctly though, so, on error, wait
for up to 2 minutes for it to appear if it was fully
uploaded (configure timeout with --acd-upload-wait-time).

Issues: #601 #605 #606
2016-08-18 17:39:23 +01:00
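A minimal sketch of the workaround: after a timeout-style error at the end of an upload, poll for the object for up to the configured wait before treating the upload as failed. The exists function stands in for a metadata lookup and is purely illustrative.

```go
package main

import (
	"fmt"
	"time"
)

// waitForUpload polls exists() until it returns true or the wait expires.
func waitForUpload(exists func() bool, wait, interval time.Duration) bool {
	deadline := time.Now().Add(wait)
	for time.Now().Before(deadline) {
		if exists() {
			return true
		}
		time.Sleep(interval)
	}
	return exists() // one final check at the deadline
}

func main() {
	start := time.Now()
	ok := waitForUpload(func() bool {
		return time.Since(start) > 30*time.Millisecond // simulate late appearance
	}, 200*time.Millisecond, 10*time.Millisecond)
	fmt.Println("upload appeared:", ok)
}
```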
Nick Craig-Wood
3f7107839e Add Per Cederberg to contributors 2016-08-18 17:10:50 +01:00
Per Cederberg
bb62c49489 New B2 API endpoint
Backblaze will change the authentication API endpoint on August 16, 2016. The old endpoint will be removed Feb 2nd 2017.

See https://help.backblaze.com/hc/en-us/articles/224959187-B2-Domain-Migration-Plan
2016-08-15 15:59:19 +02:00
Nick Craig-Wood
ae6018355c Correct parameter order for copy/sync etc 2016-08-06 00:07:36 +01:00
Nick Craig-Wood
0805ec051f Add BasicInfo interface shared between Dir and Object 2016-08-05 17:45:27 +01:00
Nick Craig-Wood
e27b91ffb8 Factor each command into its own package 2016-08-05 17:13:54 +01:00
Nick Craig-Wood
0a7b34eefc Move internals of rclone command into cmd so it can be imported externally 2016-08-04 22:33:46 +01:00
Nick Craig-Wood
549cac90af Use cobra autogenerated docs
* put the most up to date docs into the code
  * generate command docs using rclone gendocs
  * put command docs into own directory
  * remake them into MANUAL.md
2016-08-04 21:47:14 +01:00
Nick Craig-Wood
ba0b41dd92 Add gendocs command to rclone 2016-08-04 21:47:14 +01:00
Nick Craig-Wood
2df261e42b Add genautocomplete command to make bash completion script. 2016-08-04 21:47:14 +01:00
Nick Craig-Wood
38adb35abe Make dedupe take an optional mode parameter 2016-08-04 21:47:14 +01:00
Nick Craig-Wood
520ded60e3 Add memtest command for debugging purposes 2016-08-04 21:47:14 +01:00
Nick Craig-Wood
ae56df7d4f Add --dedupe-mode only to dedupe command 2016-08-04 21:47:14 +01:00
Nick Craig-Wood
412591dfaf Make rclone use cobra for command line parsing 2016-08-03 17:16:27 +01:00
Nick Craig-Wood
57f8f1ec92 b2: set maximum backoff to 5 Minutes #597 2016-08-01 22:57:52 +01:00
Nick Craig-Wood
f0434789cf Encourage using the latest version before submitting an issue. 2016-07-28 10:38:16 +01:00
Nick Craig-Wood
c2f6decb9c swift: note that tenant isn't optional for > v1 auth - fixes #563 2016-07-15 18:25:59 +01:00
Nick Craig-Wood
9eeed25418 local: fix filenames with invalid UTF-8 not being uploaded #568 2016-07-15 14:18:09 +01:00
Nick Craig-Wood
67562081f7 Version v1.32 2016-07-13 17:32:39 +01:00
Nick Craig-Wood
41917eb1f2 b2: Fix upload of files large files not in root - fixes #582 2016-07-13 15:28:39 +01:00
Nick Craig-Wood
c3e996f10f b2 doc fixes 2016-07-13 14:50:47 +01:00
Nick Craig-Wood
63f6827a0d Version v1.31 2016-07-13 12:28:01 +01:00
Nick Craig-Wood
96e2271cce Factor commands into Makefile 2016-07-13 12:25:19 +01:00
Nick Craig-Wood
ac3c83f966 Fix integration tests for drive 2016-07-12 21:38:15 +01:00
Nick Craig-Wood
b9c8e61d39 Explicitly check the state in tests after writing files
...otherwise Amazon Drive will fail.
2016-07-12 21:36:39 +01:00
Nick Craig-Wood
a6056408dd Fix move command - stop it running for overlapping fses - fixes #577
* Make move command check for overlapping remotes and refuse to run
  * Do copy/delete rather than all the copies then all the deletes
  * Doesn't purge the source - this was unexpected behaviour see #512 and #416
  * Add -list-retries flag to test suite to control retries

This changes the semantics of `move` slightly.  However it now errs on
the side of not deleting stuff.
2016-07-12 10:49:37 +01:00
Nick Craig-Wood
b9479cf7ab Implement --no-update-modtime flag - fixes #511 2016-07-12 10:46:45 +01:00
Nick Craig-Wood
452a5badc1 Add Stefan Weichinger to contributors 2016-07-11 15:32:58 +01:00
Stefan G. Weichinger
d645bf0966 Add basic info how to use ansible role for installation 2016-07-11 15:31:36 +01:00
Nick Craig-Wood
50addaa91e Add Antonio Messina to contributors 2016-07-11 15:22:17 +01:00
Antonio Messina
02a3bbaa3d swift: add support for non-default project domain.
With Keystone V3 both users and projects (a.k.a. tenants) can belong
to different domains. This change allow specifying different domains
for the user and the project.
2016-07-11 15:16:58 +01:00
Nick Craig-Wood
a20d80565b Tidy stats output - fixes #541 2016-07-11 13:04:30 +01:00
Nick Craig-Wood
56adb52a21 Rename Amazon Cloud Drive to Amazon Drive - fixes #532 2016-07-11 12:42:44 +01:00
Nick Craig-Wood
8c2fc6daf8 s3: Add instructions on how to use rclone with minio 2016-07-11 12:12:28 +01:00
Nick Craig-Wood
4bd9932703 Fix wording in verbose copy logs - fixes #574 2016-07-09 10:11:57 +01:00
Nick Craig-Wood
2a1d4b7563 s3: Add ap-northeast-2 (Seoul) and ap-south-1 (Mumbai) regions - fixes #567 2016-07-06 11:14:59 +01:00
Nick Craig-Wood
b394431f18 Improve --files-from docs - fixes #547 2016-07-05 12:33:59 +01:00
Nick Craig-Wood
cc628717d8 b2: Add --b2-versions flag so old versions can be listed and retrieved. #420 2016-07-05 11:27:04 +01:00
Nick Craig-Wood
f3e00133a0 dropbox: Don't retry 461 errors - fixes #551
461 errors from dropbox indicate some sort of copyright violation.
2016-07-04 13:45:53 +01:00
Nick Craig-Wood
606961f49d b2: Treat 403 errors (eg cap exceeded) as fatal #420 2016-07-04 13:45:53 +01:00
Nick Craig-Wood
13591c7c00 Redo error handling for sync/copy/move
* Factor sync/copy/move into its own file
  * Make fatal errors abort the sync
  * Make Copy return errors
  * Make Sync/Copy/Move return the last Copy error if there was one
  * Prioritise returning Fatal errors
  * NoRetry errors are returned if no other types of errors
2016-07-04 13:45:53 +01:00
Nick Craig-Wood
28f4061892 Add two more classes of error Fatal and NoRetry
These are for remotes to signal that they have a fatal error and don't
want to continue (eg cap exceeded) or that a particular file shouldn't
be retried for some reason.
2016-07-04 13:45:52 +01:00
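A minimal sketch of the two extra error classes described in the two commits above: a Fatal error aborts the whole sync, a NoRetry error stops retries for a single file but lets the sync continue. The types and helpers are illustrative, not rclone's actual error package.

```go
package main

import (
	"errors"
	"fmt"
)

type fatalError struct{ error }
type noRetryError struct{ error }

func isFatal(err error) bool   { var e fatalError; return errors.As(err, &e) }
func isNoRetry(err error) bool { var e noRetryError; return errors.As(err, &e) }

// classify shows how a sync loop might react to each class.
func classify(err error) string {
	switch {
	case err == nil:
		return "ok"
	case isFatal(err):
		return "fatal: abort the sync"
	case isNoRetry(err):
		return "no-retry: skip this file"
	default:
		return "retriable: try again"
	}
}

func main() {
	fmt.Println(classify(fatalError{errors.New("cap exceeded")}))
	fmt.Println(classify(noRetryError{errors.New("file should not be retried")}))
	fmt.Println(classify(errors.New("connection reset")))
}
```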
Nick Craig-Wood
018fe80bcb b2: cleanup old file versions - fixes #462 2016-07-02 17:03:08 +01:00
Nick Craig-Wood
0a43ff9c13 Modify interface for accounting to take a string not an fs.Object 2016-07-02 16:58:50 +01:00
Nick Craig-Wood
9aae143833 Implement cleanup command for emptying trash / removing old versions of files 2016-07-01 16:35:36 +01:00
Nick Craig-Wood
c8e2531c8b b2: make error handling compliant 2016-07-01 16:23:23 +01:00
Nick Craig-Wood
9290004bb8 pacer: make sleep get-able and set-able 2016-07-01 16:22:51 +01:00
Nick Craig-Wood
cbebefebc4 b2: Fix handling of token expiry #420
Found with --b2-test-mode expire_some_account_authorization_tokens
2016-07-01 11:47:42 +01:00
Nick Craig-Wood
6f3897ce2c b2: implement --b2-test-mode to set X-Bz-Test-Mode header #420 2016-07-01 11:30:09 +01:00
Nick Craig-Wood
ea5878f590 b2: set cutoff for chunked upload to 200MB #420
This is the value recommended in the b2 integration checklist:

https://www.backblaze.com/b2/docs/integration_checklist.html
2016-07-01 10:08:09 +01:00
Nick Craig-Wood
46f8e50614 b2: Make upload multi-threaded - fixes #531 2016-07-01 10:04:52 +01:00
Nick Craig-Wood
70dc97231e Convert more tests to use assert/require 2016-06-30 15:45:30 +01:00
Nick Craig-Wood
f6a053df6e Automatically set --no-traverse when copying a single file 2016-06-29 17:38:56 +01:00
Nick Craig-Wood
af4ef8ad8d Implement --no-traverse flag to stop copy traversing the destination remote.
Refactor sync/copy/move
  * Don't load the src listing unless doing a sync and --delete-before
  * Don't load the dst listing if doing copy/move and --no-traverse is set

`rclone --no-traverse copy src dst` now won't load either of the
listings into memory so will use the minimum amount of memory.

This change will also dramatically reduce the amount of memory rclone
uses in normal operations (copy without --no-traverse, or sync), as it
no longer loads the source file listing into memory at all.

Fixes #8
Fixes #544
Fixes #546
2016-06-29 17:38:50 +01:00
Nick Craig-Wood
13797a1fb8 Make retry logs be debug in main copy routine 2016-06-28 08:51:57 +01:00
Nick Craig-Wood
3ad8fb8634 Make DeleteFile and DeleteFiles return errors 2016-06-28 08:51:57 +01:00
Nick Craig-Wood
ab43005422 Make NewObject return an error
* make it return an error
  * make a canonical error fs.ErrorNotFound
  * make a test for it
  * remove logs/debugs of error
2016-06-28 08:51:57 +01:00
Nick Craig-Wood
b1f131964e Rename NewFsObject to NewObject 2016-06-28 08:51:57 +01:00
Nick Craig-Wood
1a87b69376 Get rid of LimitedFs - FIXME needs docs on copying single files
If remote:path points to a file make NewFs return a sentinel error
fs.ErrorIsFile and an Fs which points to the parent.

Use this to remove the LimitedFs and just add this file to the
--files-from list.

This means that server side operations can be used also.

Fixes #518
Fixes #545
2016-06-28 08:51:43 +01:00
Nick Craig-Wood
5a3b109e25 Fix issues identified by go vet -shadow - fixes #530 2016-06-21 21:17:52 +01:00
Nick Craig-Wood
a67c7461ee s3: skip SetModTime for objects > 5GB - fixes #534 2016-06-19 17:26:44 +01:00
Klaus Post
e0aa4bb492 Fix incomplete local hashes.
Fixes #533
2016-06-19 16:51:49 +02:00
Nick Craig-Wood
ab0947ee37 Fix typo in changelog 2016-06-18 16:58:37 +01:00
Nick Craig-Wood
bd0227450e Version v1.30 2016-06-18 16:41:46 +01:00
Nick Craig-Wood
f438f1e9ef Fix stats print 2016-06-18 16:41:46 +01:00
Nick Craig-Wood
3f7b2c1ade Add Justin R. Wilson to contributors 2016-06-18 14:31:17 +01:00
Justin R. Wilson
6e35a3b3ce Add AES256 server-side encryption for s3 - Fixes #491
Add a configuration key and support for AES256 server-side encryption.
2016-06-18 14:28:38 +01:00
Nick Craig-Wood
d3dd672640 Document recursion requirements for Fses 2016-06-18 14:12:47 +01:00
Nick Craig-Wood
2a46be8cf3 b2: implement large file uploading - fixes #456 2016-06-18 13:38:05 +01:00
Nick Craig-Wood
1b4370bde1 Rework retry logic when copying objects
* Fix off by one retry logic - fixes #406
  * Retry any retriable errors
  * Restructure code
2016-06-18 10:55:58 +01:00
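A minimal sketch of the corrected retry shape: the operation is attempted exactly tries times, and only errors marked as retriable cause another loop. Names are illustrative.

```go
package main

import (
	"errors"
	"fmt"
)

type retriableError struct{ error }

func isRetriable(err error) bool {
	var r retriableError
	return errors.As(err, &r)
}

// withRetries runs op up to tries times, stopping early on success or on a
// non-retriable error.
func withRetries(tries int, op func() error) error {
	var err error
	for attempt := 1; attempt <= tries; attempt++ {
		err = op()
		if err == nil || !isRetriable(err) {
			return err
		}
		fmt.Printf("attempt %d/%d failed: %v\n", attempt, tries, err)
	}
	return err
}

func main() {
	n := 0
	err := withRetries(3, func() error {
		n++
		if n < 3 {
			return retriableError{errors.New("temporary failure")}
		}
		return nil
	})
	fmt.Println("result:", err, "after", n, "attempts")
}
```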
Nick Craig-Wood
cc6a776034 drive, acd: Tweak logging after changing Fs.Put so that it must cope with existing files 2016-06-18 10:54:42 +01:00
Nick Craig-Wood
2cfb3834f2 Log errors with %v 2016-06-18 09:36:47 +01:00
Nick Craig-Wood
46135d830e Add --ignore-size flag - fixes #399 2016-06-17 17:20:08 +01:00
Nick Craig-Wood
318e42e35b Add a section on quoting in the shell to the docs - fixes #473 2016-06-17 16:28:50 +01:00
Nick Craig-Wood
c7f04e24d3 Document that you can't repeat filter flags - fixes #506 2016-06-17 16:06:21 +01:00
Nick Craig-Wood
e4650eff58 drive: fix retry of multipart uploads - fixes #520
Reset the reader on retry otherwise it is empty when read again.
2016-06-15 21:48:30 +01:00
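A minimal sketch of why the reader must be reset: rewinding an io.ReadSeeker before each attempt means a retry never uploads an already-drained (empty) body. The upload callback is a stand-in for the real multipart call.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// uploadWithRetry retries the upload function, rewinding the body first so a
// retry never sends an empty stream.
func uploadWithRetry(body io.ReadSeeker, tries int, upload func(io.Reader) error) error {
	var err error
	for i := 0; i < tries; i++ {
		if _, err = body.Seek(0, io.SeekStart); err != nil {
			return err
		}
		if err = upload(body); err == nil {
			return nil
		}
	}
	return err
}

func main() {
	body := bytes.NewReader([]byte("chunk data"))
	calls := 0
	err := uploadWithRetry(body, 2, func(r io.Reader) error {
		calls++
		b, _ := io.ReadAll(r)
		fmt.Printf("attempt %d read %d bytes\n", calls, len(b)) // both attempts see the full body
		if calls == 1 {
			return fmt.Errorf("simulated server error")
		}
		return nil
	})
	fmt.Println("err:", err)
}
```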
Nick Craig-Wood
869d91269d Debug cause of low level retries 2016-06-15 21:48:14 +01:00
Nick Craig-Wood
df1092ef33 Change Fs.Put so that it must cope with existing files
This should fix duplicate files on drive and 409 errors on
amazonclouddrive however it will slow down the upload slightly as
another roundtrip will be needed.

None of the other Fses needed adjusting.

Fixes #483
2016-06-13 19:29:10 +01:00
Nick Craig-Wood
4c5b2833b3 Convert to using github.com/pkg/errors everywhere 2016-06-13 17:43:03 +01:00
Nick Craig-Wood
7fe653c350 Unwrap errors properly for platform specific connection retry code.
Include more possible errors for Windows.

For #442
2016-06-10 13:48:41 +01:00
Nick Craig-Wood
661715733a Make sure we don't use conflicting content types on upload - fixes #513 2016-06-09 17:52:58 +01:00
Nick Craig-Wood
f17cb1bf50 Fix retry of Windows wsasend errors #442
Make the test for the wsasend error less specific
2016-06-09 15:34:13 +01:00
Nick Craig-Wood
9ec06df79f Be explicit about which arch we support which fixes failure to build with new gox 2016-06-09 15:33:26 +01:00
Nick Craig-Wood
67d0375b98 Audit use of log.Print and change to Debug, Log, or ErrorLog as appropriate 2016-06-06 21:23:54 +01:00
Nick Craig-Wood
4882b8ba67 Tweak website footer 2016-06-06 21:23:22 +01:00
Nick Craig-Wood
108760e17b Log -v output to stdout by default - fixes #228 2016-06-04 18:49:27 +01:00
Nick Craig-Wood
f15e7e89d2 Add version string to debug startup message 2016-06-03 23:08:14 +01:00
Nick Craig-Wood
e2788aa729 Display the transfer stats in more human readable form - fixes #428 2016-06-03 22:49:50 +01:00
Nick Craig-Wood
772f99fd74 Make SizeSuffix output without b suffix for more useful printouts 2016-06-03 22:49:14 +01:00
Nick Craig-Wood
9bbcdeefd0 Start the logger earlier so all messages go there - fixes #486 2016-06-03 22:08:27 +01:00
Nick Craig-Wood
a21cc161de Make 0 size files specifiable with --max-size 0b - fixes #450 2016-06-03 21:54:27 +01:00
Nick Craig-Wood
e818b7c206 Represent -1 as "off" for SIZE values 2016-06-03 21:51:39 +01:00
Nick Craig-Wood
5723d788a4 Add b suffix so we can specify bytes in --bwlimit, --min-size etc
Fixes #449
2016-06-03 21:16:48 +01:00
Nick Craig-Wood
1d6698a754 Build tweaks - fixes #484
* disable CGO for static builds everywhere
  * override Version in release build script
  * don't output symbol table in release binaries
2016-06-03 20:34:19 +01:00
Nick Craig-Wood
1fce83b936 swift: add auth version parameter - fixes #407 2016-06-03 17:52:24 +01:00
Nick Craig-Wood
ccdd1ea6c4 Add --max-depth parameter
This will apply to ls/lsd/sync/copy etc

Fixes #412
Fixes #213
2016-06-03 17:05:39 +01:00
Nick Craig-Wood
348734584b Try OS X 10.11 to fix travis build 2016-05-30 20:32:35 +01:00
Nick Craig-Wood
c6a79ff72d Fix remaining places in listing where we were logging errors not returning them 2016-05-30 19:51:15 +01:00
Nick Craig-Wood
b6f1391da3 Fix new style directory listing on windows 2016-05-30 19:44:15 +01:00
Nick Craig-Wood
ce94c0e729 Update go versions in travis 2016-05-28 20:45:25 +01:00
Nick Craig-Wood
58befe280c Fix directory name normalisation on OS X 2016-05-28 20:23:37 +01:00
Nick Craig-Wood
4c0f4ccb65 Fix destination of Facebook share link - fixes #499 2016-05-28 17:27:25 +01:00
Nick Craig-Wood
085677d511 acd: Work around spurious 403 errors
Sometimes ACD gives this error on reauthentication

HTTP code 403: "403 Forbidden", response body: {"message":"Authorization header requires 'Credential' parameter. Authorization header requires 'Signature' parameter. Authorization header requires 'SignedHeaders' parameter. Authorization header requires existence of either a 'X-Amz-Date' or a 'Date' header. Authorization=Bearer"}

This code retries this error if it is received.
2016-05-28 16:49:26 +01:00
Nick Craig-Wood
0a922ad1dc acd: Reauth on 401 errors
Fixes #493
Fixes #501
2016-05-28 16:49:26 +01:00
Nick Craig-Wood
83c3bb2f1a Add Romain Lapray to contributors 2016-05-28 16:39:17 +01:00
rlapray
83087a45f0 Details about Hubic "default" folder 2016-05-28 16:36:47 +01:00
Nick Craig-Wood
cadf202107 Clarify filtering docs #489 2016-05-19 12:39:16 +01:00
Nick Craig-Wood
36700d36a7 Fix dropbox root directory listings 2016-05-16 17:54:59 +01:00
Nick Craig-Wood
ad85f6e413 Implement directory include filtering for efficiency
Fixes #395
2016-05-16 17:14:04 +01:00
Nick Craig-Wood
536526cc92 amazonclouddrive: Restart directory listings on error - fixes #475
Before this change rclone would retry only the page that was missing
from the directory listing.  However it turns out that on 429 errors
at least, that page is gone from the directory listing which results
in missing files in the list.  The workaround for this is to restart
the directory listing on any retryable errors.
2016-05-14 17:15:42 +01:00
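A minimal sketch of the restart-on-error behaviour: rather than retrying only the failed page, the partial listing is discarded and the whole listing restarted, since the missing page may no longer exist. The paging is abstracted into a single callback here.

```go
package main

import (
	"errors"
	"fmt"
)

// listAll restarts the whole listing on any error, up to tries attempts.
func listAll(listFromStart func() ([]string, error), tries int) ([]string, error) {
	var err error
	for i := 0; i < tries; i++ {
		var entries []string
		entries, err = listFromStart() // throw away partial results and start again
		if err == nil {
			return entries, nil
		}
	}
	return nil, err
}

func main() {
	calls := 0
	entries, err := listAll(func() ([]string, error) {
		calls++
		if calls == 1 {
			return nil, errors.New("429 Too Many Requests")
		}
		return []string{"a", "b", "c"}, nil
	}, 3)
	fmt.Println(entries, err, "after", calls, "full listings")
}
```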
Nick Craig-Wood
ac9c20b048 Make IsRetryError function 2016-05-14 17:11:19 +01:00
Nick Craig-Wood
2db35f0ce7 Dump out unexpected state in integration test 2016-05-07 21:19:26 +01:00
Nick Craig-Wood
dbfa7031d2 Factor Lister into own file, write tests and fix 2016-05-07 17:17:43 +01:00
Nick Craig-Wood
c2d0e86431 Add more tests for List() and fix resulting problems 2016-05-07 14:50:35 +01:00
Nick Craig-Wood
68ec6a9f5b Add a directory parameter to Fs.List() 2016-05-06 16:52:34 +01:00
Nick Craig-Wood
753b0717be Refactor the List and ListDir interface
Gives more accurate error propagation, control of depth of recursion
and short circuit recursion where possible.

Most of the heavy lifting is done in the "fs" package, making file
system implementations a bit simpler.

This commit contains some code originally by Klaus Post.

Fixes #316
2016-05-06 16:52:34 +01:00
Nick Craig-Wood
3bdad260b0 Fix typo (thanks Saverio Proto) 2016-05-06 14:09:12 +01:00
Nick Craig-Wood
d205dc23e9 Fix oddities using a file in the root - fixes #471
* Check return from NewFsObject which caused nil ptr deref
  * Correct root directory from "" to string(os.PathSeparator) in getDirFile
2016-05-06 13:52:50 +01:00
Nick Craig-Wood
bdd26d71b2 Clarify swift errors - fixes #460 2016-05-02 12:34:15 +01:00
Nick Craig-Wood
8b2f6faf18 Re-enable OS X in travis tests 2016-05-01 13:13:20 +01:00
Nick Craig-Wood
7c01bbddf8 Normalise path names for OSX local filesystem
Fixes #194 Fixes #451 Fixes #463
2016-05-01 13:13:20 +01:00
Nick Craig-Wood
1752ee3c8b Retry errors which indicate the connection closed prematurely.
See discussion in #442
2016-04-29 17:29:34 +01:00
Nick Craig-Wood
5c2d8ffe33 Retry only the failing tests in the integration tests 2016-04-26 10:20:07 +01:00
Nick Craig-Wood
7fecd5c8c6 Add Leigh Klotz to contributors 2016-04-22 21:12:45 +01:00
Leigh Klotz
19b7ff12ad Doc updates for password prompt changes 2016-04-22 21:11:36 +01:00
Nick Craig-Wood
b053234eb1 Add Fabian Ruff to contributors 2016-04-22 21:02:54 +01:00
Fabian Ruff
640d7bd365 Add domain option for openstack (v3 auth) 2016-04-22 21:00:54 +01:00
Nick Craig-Wood
8af68e779f Add Michal Witkowski to contributors 2016-04-22 20:09:16 +01:00
Nick Craig-Wood
3a1198cac5 gcs: Don't configure the oauth token if service_account_file is supplied 2016-04-22 20:07:10 +01:00
Michal Witkowski
022ab4516d Add service account support for GCS 2016-04-22 19:53:27 +01:00
Nick Craig-Wood
17aac9b15f Note certificates FAQ works on Solaris too 2016-04-22 11:53:56 +01:00
Klaus Post
6c0c9abd57 Use "password:" instead of "password>" prompt
Fixes #410
2016-04-21 19:39:46 +01:00
Nick Craig-Wood
70496c15e1 Add Jim Tittsler to contributors 2016-04-21 19:37:41 +01:00
Jim Tittsler
8b61e68bb7 Fix doc typos. 2016-04-20 11:50:28 +09:00
Nick Craig-Wood
bb75d80d33 Fix frontmatter 2016-04-18 18:55:07 +01:00
Nick Craig-Wood
157d7d45f5 Version v1.29 2016-04-18 18:30:29 +01:00
Nick Craig-Wood
b5cba73cc3 Make test more reliable 2016-04-18 17:48:52 +01:00
Nick Craig-Wood
dd36264aad Add FAQ All my uploaded docx/xlsx/pptx files appear as archive/zip
Fixes #417
2016-04-12 21:41:24 +01:00
Nick Craig-Wood
ddb47758f3 drive: increase default chunk size to 8 MB and document - fixes #397 2016-04-12 21:33:55 +01:00
Nick Craig-Wood
9539bbf78a Fix appveyor build after vet removal from tools repo 2016-04-07 20:07:00 +01:00
Nick Craig-Wood
0f8e7c3843 Make rclone check obey the --size-only flag - fixes #419 2016-04-07 15:01:45 +01:00
Nick Craig-Wood
b835330714 Use "application/octet-stream" if mime.TypeByExtension returns invalid type
Fixes #424
2016-04-07 14:32:01 +01:00
Nick Craig-Wood
310db14ed6 Notes on --transfers and B2 2016-04-04 17:58:36 +01:00
Klaus Post
7f2e9d9a6b Require go v1.5 for compilation
Google cloud package requires go v1.5 to compile, so we need to require the same for rclone.

Fixes #408
2016-04-04 17:34:39 +01:00
Nick Craig-Wood
6cc9c09610 drive: preserve mime type on file update - fixes #417 2016-04-04 16:58:42 +01:00
Nick Craig-Wood
93c60c34e1 b2: Fix incorrect value of Precision - should be 1ms not 1s 2016-03-24 15:23:27 +00:00
Klaus Post
02c11dd4a7 Don't de-reference swift connection
The connection object contains a mutex, so it is good practice not to dereference it to a value.

Reported by Go tip "go vet".
2016-03-23 17:09:05 +00:00
Klaus Post
40dc575aa4 Update Travis CI
- Only use golint if version is > Go 1.4
- Add Go 1.6 and tip as test targets.
2016-03-23 17:07:26 +00:00
Klaus Post
f8101771c9 Disable keepalive to keep server from serving stale results.
Fixes issue #402

Bonus fix: Fix "multiple header writes" warning when no code is received.
2016-03-23 16:57:56 +00:00
Klaus Post
8f4d6973fb Fix missing "quit" option when there are no remotes. 2016-03-23 16:57:56 +00:00
Nick Craig-Wood
ced3a4bc19 Implement -I, --ignore-times for unconditional upload - fixes #311 2016-03-22 17:02:27 +00:00
Nick Craig-Wood
cb22583212 b2: Enable mod time syncing - fixes #348 2016-03-22 15:56:44 +00:00
Nick Craig-Wood
414b35ea56 Change the interface of SetModTime to return an error - #348 2016-03-22 15:56:44 +00:00
Nick Craig-Wood
f469905d07 dropbox: Note 10,000 files limitation on purge - fixes #374 2016-03-22 14:46:43 +00:00
Nick Craig-Wood
20f4b2c91d b2: update API to new version - fixes #393
* Make reading mod time and SHA1 much more efficient
    * removes an HTTP transaction to increase speed
  * Reduce memory usage of the objects
2016-03-22 14:39:56 +00:00
Nick Craig-Wood
37543bd1d9 b2: Fix parsing of mod time when not in metadata
This fixes the error `Failed to parse mod time string "":
"src_last_modified_millis" not found in metadata`.
2016-03-22 10:26:37 +00:00
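A minimal sketch of tolerating a missing src_last_modified_millis entry: fall back to another timestamp instead of trying to parse an empty string. Only the metadata key comes from the error message above; the rest is illustrative.

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// modTime returns the modification time from metadata, or fallback when the
// key is absent or unparseable.
func modTime(metadata map[string]string, fallback time.Time) time.Time {
	s, ok := metadata["src_last_modified_millis"]
	if !ok || s == "" {
		return fallback // metadata absent: don't try to parse ""
	}
	ms, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return fallback
	}
	return time.Unix(ms/1000, (ms%1000)*int64(time.Millisecond))
}

func main() {
	uploadTime := time.Date(2016, 3, 22, 10, 26, 0, 0, time.UTC)
	fmt.Println(modTime(map[string]string{}, uploadTime))
	fmt.Println(modTime(map[string]string{"src_last_modified_millis": "1458642397000"}, uploadTime))
}
```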
Nick Craig-Wood
0dc0052e93 Note that filters must use / not \ - #394 2016-03-19 17:40:54 +00:00
Nick Craig-Wood
bd27473762 swift: Don't return an MD5SUM for static large objects - #392
* rename isManifest to isDynamicLargeObject for clarity
2016-03-17 17:36:20 +00:00
Nick Craig-Wood
9dccf91da7 swift/hubic: document segmented object MD5SUM limitations - fixes #392 2016-03-16 17:39:44 +00:00
Nick Craig-Wood
a1323eb204 s3: Fix uploading files bigger than 50GB - fixes #386 2016-03-10 16:48:55 +00:00
Klaus Post
e57c4406f3 Add mutex to "warned" map.
Fixes #385
2016-03-10 15:51:56 +01:00
Nick Craig-Wood
fdd4b4ee22 drive: Add missing retries for Move and DirMove 2016-03-06 18:15:01 +00:00
Nick Craig-Wood
8ef551bf9c Make dedupe remove identical copies without asking and add non interactive mode - fixes #338
* Now removes identical copies without asking
  * Now obeys `--dry-run`
  * Implement `--dedupe-mode` for non interactive running
    * `--dedupe-mode interactive` - interactive the default.
    * `--dedupe-mode skip` - removes identical files then skips anything left.
    * `--dedupe-mode first` - removes identical files then keeps the first one.
    * `--dedupe-mode newest` - removes identical files then keeps the newest one.
    * `--dedupe-mode oldest` - removes identical files then keeps the oldest one.
    * `--dedupe-mode rename` - removes identical files then renames the rest to be different.
  * Add tests which will only run on Google Drive.
2016-03-06 18:15:01 +00:00
Nick Craig-Wood
2119fb4314 drive: tweak pacer to speed up directory listings and make more reliable 2016-03-06 18:15:01 +00:00
Nick Craig-Wood
0166544319 Add Attack constant to pacer 2016-03-05 20:29:05 +00:00
Nick Craig-Wood
874a64e5f6 A script to make a directory hierarchy for testing 2016-03-05 20:26:15 +00:00
Nick Craig-Wood
e0c03a11ab Commit missing docs changes and adjust RELEASE.md to make sure it doesn't happen again 2016-03-01 17:42:27 +00:00
Nick Craig-Wood
3c7f80f58f Version v1.28 2016-03-01 09:00:01 +00:00
Nick Craig-Wood
229ea3f86c Stop --update tests running on remotes which don't do mod time 2016-03-01 07:26:33 +00:00
Nick Craig-Wood
41eb386063 Reset password/config path in config tests to fix other tests 2016-02-29 21:43:37 +00:00
Nick Craig-Wood
dfc7cd97a3 Optionally disable gzip compression on downloads with --no-gzip-encoding - fixes #353 2016-02-29 19:48:54 +00:00
Nick Craig-Wood
280ac26464 Implement -u/--update so creation times can be used on all remotes - #226 2016-02-29 17:46:40 +00:00
Nick Craig-Wood
88cca8a6eb Simplify literals (after running gofmt -s over the code) 2016-02-29 16:57:23 +00:00
Nick Craig-Wood
9c263e3e2b Commit missing tests 2016-02-28 20:25:51 +00:00
Nick Craig-Wood
7d4e143dee Make it obvious that the client secrets are encrypted 2016-02-28 19:57:19 +00:00
Nick Craig-Wood
3343c1afa4 Don't make directories if --dry-run set - fixes #342 2016-02-28 19:56:50 +00:00
Nick Craig-Wood
b279df2e67 Drive: disable copy and move for google docs - fixes #332 2016-02-28 09:35:28 +00:00
Nick Craig-Wood
e6f340d245 swift: Fix uploading of chunked files with non ascii characters - fixes #350 2016-02-27 18:59:16 +00:00
Nick Craig-Wood
bfc66cceaa Update b2 docs after temp file changes 2016-02-27 16:32:40 +00:00
Nick Craig-Wood
1105b6bd94 Add Jakub Gedeon to contributors 2016-02-27 13:58:00 +00:00
Jakub Gedeon
694d390710 s3: Check if directory exists during Mkdir
If you don't have privileges to create a bucket in S3 but it already
exists, don't fail with an auth error; instead detect that the mkdir was
not needed and return successfully.
2016-02-27 13:24:46 +00:00
Nick Craig-Wood
6b6b43402b b2: Use one upload URL per go routine
This fixes `more than one upload using auth token` errors.
2016-02-27 13:00:35 +00:00
Nick Craig-Wood
6f46270735 b2: Add pacing, retries and reauthentication - fixes #310 2016-02-27 12:04:45 +00:00
Nick Craig-Wood
ee5e34a19c b2: factor authorize account into its own method 2016-02-27 12:04:45 +00:00
Nick Craig-Wood
70902b4051 Make rest Set methods safe for concurrent calling 2016-02-27 12:04:45 +00:00
Nick Craig-Wood
f46304e8ae Update README from docs/content/about.md 2016-02-27 11:15:51 +00:00
Nick Craig-Wood
40252f0aa6 Make continuous integrations logs less noisy 2016-02-26 17:01:19 +00:00
Nick Craig-Wood
e7b9cc4705 Fix pacer tests 2016-02-26 16:59:52 +00:00
Nick Craig-Wood
867a26fe4f Implement --low-level-retries flag - fixes #266 2016-02-25 22:58:21 +00:00
Nick Craig-Wood
3890105cdc Add -run-only flag to run_all test 2016-02-25 22:05:57 +00:00
Nick Craig-Wood
d2219a800a Fix and document the move command - fixes #334
* Don't attempt to use server side Move unless they are on the same Fs
  * Fix move in the presence of filters
2016-02-25 20:05:34 +00:00
Nick Craig-Wood
ccb59480bd Add InActive method to Filter to detect when no filters are in use. 2016-02-25 19:58:00 +00:00
Nick Craig-Wood
b5c5209162 Fix redirecting stderr on unix-like OSes - fixes #363 2016-02-24 22:03:14 +00:00
Nick Craig-Wood
835b6761b7 Write about convmv in the docs for fixing non UTF-8 filesystems - fixes #300 2016-02-21 14:09:06 +00:00
Nick Craig-Wood
f30c836696 Note Linux version requirements for running rclone - fixes #346 2016-02-21 13:59:24 +00:00
Nick Craig-Wood
090ce00afc Clarify Dropbox docs on mod times - fixes #345 2016-02-21 13:52:00 +00:00
Nick Craig-Wood
377986d599 Update config walk throughs with new style choice menu 2016-02-21 13:40:16 +00:00
Nick Craig-Wood
95e4d837ef Make config chooser easier to understand 2016-02-21 13:40:16 +00:00
Nick Craig-Wood
e08e35984c Add help to remote chooser in rclone config - fixes #43 2016-02-21 13:40:16 +00:00
Nick Craig-Wood
a3b4c8a0f2 Add issue template for github 2016-02-21 10:32:44 +00:00
Nick Craig-Wood
700e47d6e2 Stub out ReadPassword on plan9 and solaris to fix compilation 2016-02-21 10:31:53 +00:00
Nick Craig-Wood
ea11f5ff3d Stop make beta remaking the docs 2016-02-21 10:29:48 +00:00
klauspost
758c7f2d84 Avoid b2 temporary file.
If the source can provide a SHA1 hash we don't copy the input to a temporary file.

Fixes #358
2016-02-19 18:07:15 +00:00
klauspost
ef06371c93 Create separate interface for object information.
Take out read-only information about a Fs in a separate struct to limit access.

See discussion at #282.
2016-02-19 13:31:09 +00:00
Nick Craig-Wood
85a0f25b95 b2: Fix reading metadata for all files when using a subdir - fixes #356
Also fix some confusion with Metadata prefix/root.
2016-02-19 12:11:30 +00:00
klauspost
84b00b362f Change back to original goconfig package.
Add documentation for `--ask-password`.
2016-02-17 11:45:05 +01:00
klauspost
bfd7601cf9 Add configuration file encryption
See #317 for details.

Use `rclone config` to add/change/remove password.

Tests that load the default configuration will now fail with a better error message, and a switch has been added that makes it possible to disable password prompts and fail instead.

Make it possible to use the "RCLONE_CONFIG_PASS" environment variable as password for configuration.
2016-02-16 16:32:05 +01:00
Nick Craig-Wood
4676a89963 Note that you may need curl --insecure when fetching root CA certificates 2016-02-16 14:55:26 +00:00
Nick Craig-Wood
8cd3c25b41 Amazon Cloud Drive: retry on 400, 401, 408, 504 and EOF errors - fixes #340 2016-02-16 14:45:22 +00:00
Nick Craig-Wood
5f97603684 Fix fetch test dependencies too. 2016-02-15 17:31:11 +00:00
Nick Craig-Wood
f1debd4701 Fetch test dependencies too. 2016-02-15 17:20:26 +00:00
Nick Craig-Wood
1cd0d9a1f2 Fix listing drive docs at root - fixes #336
* Remove full drive list code
    * it is slower and uses more data
    * having two directory listing routines is causing problems (including this one)
    * less code is more
  * Make sure we don't recurse into directories we don't own
  * Fix export extension handling and add tests
2016-02-15 16:46:43 +00:00
Nick Craig-Wood
a6320bbad3 Fix delete command to wait until all finished - fixes missing deletes.
This also could affect deletes at the end of the sync command.
2016-02-15 16:43:59 +00:00
Nick Craig-Wood
b1dd8e998b Yandex Disk: Use http.Client passed in for all operations - fixes logging. 2016-02-15 16:43:18 +00:00
Xavier Lucas
c2e8f06bfa Swift storageUrl overloading fixes #167 2016-02-09 22:17:13 +00:00
Nick Craig-Wood
08a8f7174a Add Brian Stengaard to contributors 2016-02-09 21:45:51 +00:00
Nick Craig-Wood
ce4c1d4f35 s3: Fix empty checks in auth 2016-02-09 17:19:33 +00:00
Nick Craig-Wood
a0b9bd527e Add both forms of env var to the docs 2016-02-09 17:19:13 +00:00
Brian Stengaard
ce05ef7110 Add IAM role and Env credentials
This makes the s3 provider authentication logic:

  - Configured credentials if both key and secret available
  - Anonymous if key and secret missing and env_auth not set
  - if env_auth is set to truthy (https://golang.org/pkg/strconv/#ParseBool)
    - AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables
    - IAM role credentials as fallback
2016-02-09 16:32:36 +00:00
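A minimal sketch of the selection order listed above; it models only the decision, not real AWS SDK calls.

```go
package main

import "fmt"

// pickAuth returns which authentication source would be used.
func pickAuth(accessKey, secretKey string, envAuth bool) string {
	switch {
	case accessKey != "" && secretKey != "":
		return "configured credentials"
	case !envAuth:
		return "anonymous"
	default:
		// env_auth set: environment variables first, IAM role as fallback.
		return "AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY, then IAM role"
	}
}

func main() {
	fmt.Println(pickAuth("AKIA-example", "secret", false))
	fmt.Println(pickAuth("", "", false))
	fmt.Println(pickAuth("", "", true))
}
```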
Werner Beroux
6a47d966a4 Update filtering documentation - fixes #306
Explains that filtering is done relative to the remote root.

Also removes a section that seems more about internal knowledge and is
more likely to confuse people. Instead, adds a section giving an
overview of how to perform filtering before going into details.
2016-02-09 16:25:19 +00:00
Nick Craig-Wood
85d99de26b Fix typo in error strings 2016-02-09 16:15:50 +00:00
Nick Craig-Wood
4a82251c62 Add man page to repository too (missed from #256) 2016-02-07 20:26:10 +00:00
Nick Craig-Wood
e62c0a58a7 Version 1.27 2016-01-31 17:50:13 +00:00
Nick Craig-Wood
1f3e48f18f Add manuals to repository - fixes #256 2016-01-31 16:34:30 +00:00
Nick Craig-Wood
bbbe11790b Update docs to make syncing from a directory more obvious - fixes #302 2016-01-31 16:27:19 +00:00
Nick Craig-Wood
13edf62824 Document rclone return codes - fixes #308 2016-01-31 16:15:25 +00:00
Nick Craig-Wood
558bc2e132 drive: Export Google documents - fixes #49
Rclone will download one format of a google doc. The choice of which
export format is controlled by the `--drive-formats` flag.
2016-01-31 16:10:43 +00:00
Nick Craig-Wood
0f73129ab7 dedupe command to deduplicate a remote. Useful with google drive - fixes #41 2016-01-31 16:09:42 +00:00
Nick Craig-Wood
1373efaa39 Delete command which does obey the filters - fixes #327 2016-01-31 16:06:04 +00:00
Nick Craig-Wood
5c37b777fc Make the --dry-run warnings into logs so they appear without the -v flag 2016-01-31 16:06:04 +00:00
Nick Craig-Wood
d4df3f2154 acd: Download files >= 9GB with their tempLink direct from s3
This fixes the problem of downloading files > 10GB.

Fixes #204 Fixes #313
2016-01-30 18:08:44 +00:00
Nick Craig-Wood
8ae424c5a3 Emphasize testing sync with --dry-run and -v 2016-01-29 07:59:33 +00:00
Nick Craig-Wood
cae19df058 s3: URL escape CopySource
This fixes metadata update and copy for files with `+` in the name

Fixes #315
2016-01-27 17:39:33 +00:00
Nick Craig-Wood
8c211fc8df Warn the user about files with same name but different case
Relates to #107 & #119.
2016-01-26 16:57:09 +00:00
Nick Craig-Wood
74a71f7824 Add tests for --delete-before, --delete-during and --delete-after 2016-01-26 16:57:09 +00:00
Nick Craig-Wood
12b51c5eb8 Remove duplicate check for filter IncludeObject 2016-01-26 16:57:09 +00:00
klauspost
14069fd8e6 Implement --delete-before, --delete-during, --delete-after - fixes #252. 2016-01-26 16:57:09 +00:00
Nick Craig-Wood
cd62f41606 Reduce number of logs and show hash type where appropriate 2016-01-24 18:06:57 +00:00
Nick Craig-Wood
109d4ee490 Prefix all test remotes with rclone-test- and make names more pronouncable 2016-01-24 12:37:46 +00:00
Nick Craig-Wood
18ebec8276 Check remote is empty between integration tests 2016-01-24 12:37:19 +00:00
Nick Craig-Wood
c47b4f828f acd: Fix deadlock in directory traversal code 2016-01-24 11:20:55 +00:00
Nick Craig-Wood
c3a0c0c451 swift: Fix upload from unprivileged user - fixes #273 2016-01-23 20:32:53 +00:00
Nick Craig-Wood
6cb0de43ce Deprecate compiling with go1.3 2016-01-23 17:27:00 +00:00
Nick Craig-Wood
83f0d3e03d acd: remove 409 conflict from error codes we will retry
This should fix the very long pauses or getting stuck that people have
seen in uploads.
2016-01-23 17:02:09 +00:00
Nick Craig-Wood
eda4130703 Fix integration tests so they can be run independently and out of order - fixes #291
* Make all integration tests start with an empty remote
  * Add an -individual flag so this can be a different bucket/container/directory
  * Fix up tests after changing the hashers
  * Add sha1sum test
  * Make directory checking in tests sleep more to fix acd inconsistencies
  * Factor integration tests to make more maintainable
  * Ensure remote writes have a fstest.CheckItems() before use
    * this fixes eventual consistency on the directory listings later
  * Call fs.Stats.ResetCounters() before every fs.Sync()

Note that the tests shouldn't be run concurrently as fs.Config is global state.
2016-01-23 17:02:09 +00:00
Nick Craig-Wood
ccba859812 Test all available hashes for each remote 2016-01-23 09:10:36 +00:00
Nick Craig-Wood
de3cf5e8d7 Add -verbose flag to unit tests and add some more eventual consistency retries 2016-01-20 20:06:05 +00:00
Nick Craig-Wood
ce305321b6 amazon cloud drive: Fix "Next token is expired" - Fixes #289 Fixes #263
This should also fix the consequent "409 Conflict" name already exists errors.
2016-01-20 20:05:52 +00:00
Nick Craig-Wood
e6117e978e Add Werner Beroux to contributors 2016-01-20 16:33:28 +00:00
Werner Beroux
4b40898743 Update filtering.md
Clarify by removing the extension, which is confusing if you are not careful.
2016-01-20 16:16:24 +01:00
Nick Craig-Wood
ae3a0ec27e b2: Don't re-read the SHA1 if we already have it 2016-01-19 08:21:20 +00:00
Nick Craig-Wood
d9458fb4ee b2: return error in Hash from readFileMetadata operation 2016-01-19 08:21:10 +00:00
Nick Craig-Wood
27f67edb1a Fix formatting problem in sha1sum 2016-01-17 13:56:42 +00:00
Nick Craig-Wood
3ffea738e6 Make hash constants start from 1 not 2 2016-01-17 10:47:24 +00:00
Nick Craig-Wood
a63dd6020c onedrive: fix incorrectly decoded SHA-1 2016-01-17 10:46:36 +00:00
Nick Craig-Wood
d0678bc3e5 local: report error on stat in Hash in case file disappeared 2016-01-17 10:46:19 +00:00
klauspost
ce04a073ef Update templates to changes in the latest hugo version
Fixes #295
2016-01-16 14:11:52 +00:00
Nick Craig-Wood
c337a367f3 Make make serve fail if make website would fail 2016-01-16 14:10:57 +00:00
klauspost
7ae40cb352 Update information on revised hash functionality. 2016-01-16 10:17:11 +00:00
Nick Craig-Wood
e8daab7971 Fix integration tests for remotes with unsupported hash schemes 2016-01-16 09:45:15 +00:00
klauspost
78c3a5ccfa Add support for multiple hash types.
Add support for multiple hash types with negotiation of common hash types for comparison.

Manually rebased version of #277 (see discussion there)
2016-01-11 13:39:33 +01:00
Nick Craig-Wood
2142c75846 Add missing docs for options - fixes #278 2016-01-10 12:04:20 +00:00
Nick Craig-Wood
c724d8f614 dropbox: Make file exclusion error controllable with -q #287 2016-01-10 11:49:04 +00:00
Nick Craig-Wood
af5f4ee724 Make --include rules add their implicit exclude * at the end of the filter list
This means you can mix `--include` and `--include-from` with the
other filters (eg `--exclude`) but you must include all the files you
want in the include statement.

Fixes #280
2016-01-10 11:42:53 +00:00
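A short sketch of the include semantics described above (paths are illustrative): with `--include` rules present, anything not explicitly included is excluded by the implicit `exclude *` added at the end of the list.

```
# Copy only jpg and png files; everything else is excluded by the implicit "exclude *"
rclone copy --include "*.jpg" --include "*.png" /src remote:photos
```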
Nick Craig-Wood
01aa4394a6 Explain that errored sync doesn't delete files - fixes #285 2016-01-10 10:44:33 +00:00
Nick Craig-Wood
2646519712 Add --memprofile flag 2016-01-09 15:25:48 +00:00
Nick Craig-Wood
5b2efd563a Add Xavier Lucas to contributors 2016-01-08 08:32:52 +00:00
xlucas
e7b7432079 OVH Swift authentication endpoint 2016-01-08 08:30:13 +00:00
Nick Craig-Wood
ea2ef4443b Remove -verbose from errcheck 2016-01-08 08:20:04 +00:00
klauspost
25f22ec561 Add "--ignore-existing" flag.
Add option to completely ignore existing files and not consider them for transfer.

Fixes #274
2016-01-08 08:20:04 +00:00
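A minimal usage sketch (paths are illustrative):

```
# Skip any file that already exists on the destination, even if it has changed on the source
rclone copy --ignore-existing /src remote:backup
```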
Nick Craig-Wood
5189231a34 Tweaks to rclone authorize
* Document the headless / remote setup procedure
  * Move Config constants into fs
  * Parse arguments in main for Authorize
2016-01-07 20:31:23 +00:00
klauspost
bcbd30bb8a Add easier headless configuration.
This allows setting up a remote on a headless machine by copying and pasting values, including pasting a token into the configuration.

Generating the token requires rclone on a machine with a proper browser. Custom client ids and secrets are supported.

To test token generation, use `rclone auth "fs type"`.
2016-01-07 20:31:23 +00:00
Nick Craig-Wood
c245183101 Stop errcheck running for go < 1.5 2016-01-07 16:37:51 +00:00
klauspost
4ce2a84df0 Document workaround for ACD maximum file size.
Document workaround for ACD maximum file size and display a warning in verbose mode before upload starts.

Fixes #215.
2016-01-05 17:12:16 +00:00
klauspost
3c31d711b3 Add local file system option to disable UNC on Windows.
This will add an option to disable UNC conversion on Windows to deal with buggy file system implementations like EncFS.

Fixes #261
2016-01-05 17:08:11 +00:00
Nick Craig-Wood
3f5d8390ba Add Björn Harrtell to contributors 2016-01-05 17:05:31 +00:00
Björn Harrtell
78edafcaac drive: add --drive-auth-owner-only to only consider files owned by the user. 2016-01-05 17:02:04 +00:00
Nick Craig-Wood
1ce3673006 Add -clean flag to test_all.go to clean left over test directories 2016-01-03 21:49:26 +00:00
Nick Craig-Wood
3423de65fa Make canonical place for all fs in fs/all/all.go 2016-01-03 14:12:45 +00:00
Nick Craig-Wood
0c81439bc3 Fix upload_github target 2016-01-02 12:18:32 +00:00
Nick Craig-Wood
77fb8ac240 Version 1.26 2016-01-02 12:04:32 +00:00
Nick Craig-Wood
979dfb8cc6 Add Joseph Spurrier to contributors 2016-01-02 11:50:49 +00:00
Joseph Spurrier
fe0289f2f5 s3: Fix corrupting Content-Type on mod time update
This fixes an issue where updating the modification time resets the
content-type to the S3 default of binary/octet-stream which breaks
static websites that expect an html file to have a content-type of
text/html.
2016-01-02 11:47:52 +00:00
Nick Craig-Wood
6a64567dd7 Add Dmitry Burdeev (dibu) to contributors 2016-01-02 11:45:30 +00:00
Nick Craig-Wood
8de8cd62ca yandex: stop create folder error being fatal 2015-12-30 21:07:42 +00:00
Nick Craig-Wood
cba27d2920 yandex: correct precision to 1ns 2015-12-30 20:47:44 +00:00
Nick Craig-Wood
9ade179407 yandex: Fix socket leaks 2015-12-30 14:30:16 +00:00
Nick Craig-Wood
82b85431bd yandex: Make it use our http client so logging, bwlimit etc works properly 2015-12-30 14:30:16 +00:00
Nick Craig-Wood
98778b1870 Docs for Yandex 2015-12-30 14:30:16 +00:00
Nick Craig-Wood
dfd46c23f9 Fix forgotten update for test_all.go 2015-12-30 12:12:24 +00:00
dibu28
3ac4407b88 Implement Yandex storage backend - fixes #234 2015-12-30 12:11:46 +00:00
Nick Craig-Wood
8ea0d5212f Add -verbose flag to test_all and fix tries count 2015-12-30 11:34:22 +00:00
Nick Craig-Wood
acd350d833 Add retry for eventual consistency in findObject test 2015-12-30 10:46:04 +00:00
Nick Craig-Wood
2f4b9f619d Add C. Bess to contributors 2015-12-30 10:13:11 +00:00
C. Bess
70efd0274c Add Contributing link to readme 2015-12-30 10:10:53 +00:00
Nick Craig-Wood
33b3eea6ec Implement Backblaze B2 - fixes #224 2015-12-30 10:05:07 +00:00
Nick Craig-Wood
113624691a Add -dump-headers and -dump-bodies flags for operations test debugging 2015-12-30 09:35:35 +00:00
Nick Craig-Wood
afaec1a2e9 Use test logger instead of log for test output 2015-12-30 09:35:25 +00:00
Nick Craig-Wood
ddf39f2d57 Replace test_all.sh with test_all.go which is cross platform and parallel 2015-12-30 09:26:34 +00:00
Nick Craig-Wood
2df5d95d70 Documentation for --min-age and --max-age 2015-12-29 19:34:10 +00:00
Nick Craig-Wood
64a808ac76 Add CONTRIBUTING file 2015-12-29 19:23:20 +00:00
Nick Craig-Wood
05dc7183cb onedrive: Don't mask HTTP error codes with JSON decode error 2015-12-28 15:15:12 +00:00
Nick Craig-Wood
e69e181090 Fix --min-age and --max-age when only one is present 2015-12-17 14:22:43 +00:00
Nick Craig-Wood
a1269fa669 Make sure we use bash as our shell 2015-12-17 13:30:58 +00:00
Nick Craig-Wood
8369b5209f swift: Make sure we read the size for 0 length files - Fixes #237
This was causing a problem with sync for chunked files.  The directory
listing would read their size back as 0, see that the size had
changed, and immediately resync them.
2015-12-17 13:30:58 +00:00
Nick Craig-Wood
2aa3c0a2af make beta announces destination URL 2015-12-17 13:30:58 +00:00
Nick Craig-Wood
ac65d8369e Make fs.CheckClose public to stop duplication 2015-12-17 13:30:58 +00:00
Nick Craig-Wood
7a24532224 Factor REST library out of onedrive 2015-12-17 13:30:58 +00:00
Nick Craig-Wood
8057d668bb Fix crash in http logging - fixes #223
A nil-pointer exception was caused if the http transaction ever
resulted in a go error while using `--dump-bodies`. Now the error is
not ignored and is logged instead of the http body.
2015-12-17 13:30:58 +00:00
Nick Craig-Wood
36f1bc4a8a Make ls/lsl/md5sum/size/check obey includes and excludes - fixes #169
* run check directory listings concurrently
2015-12-17 13:30:58 +00:00
Nick Craig-Wood
beb8098b0a Ignore current builds when uploading to github 2015-12-17 13:28:12 +00:00
Nick Craig-Wood
6e64a71382 Add Adriano Aurélio Meirelles to contributors 2015-12-17 13:28:12 +00:00
Adriano Aurélio Meirelles
3cbd57d9ad Add support to filter files based on their age 2015-12-17 09:52:38 -02:00
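A hedged sketch of the age filters this introduced (paths and durations are illustrative):

```
# Only consider files modified within the last 7 days...
rclone copy --max-age 7d /src remote:recent
# ...or only files older than 30 days
rclone copy --min-age 30d /src remote:archive
```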
Nick Craig-Wood
4f50b26af0 Add missing cloud storage systems 2015-11-23 22:19:50 +00:00
Nick Craig-Wood
cb651b5866 Upload releases to github too - fixes #225 2015-11-23 22:18:21 +00:00
Nick Craig-Wood
3c1069c815 onedrive: re-enable server side copy 2015-11-22 11:04:16 +00:00
Nick Craig-Wood
7f0020a407 Version v1.25 2015-11-14 13:06:39 +00:00
Nick Craig-Wood
c270c1c80c Increase retries for eventual consistency in tests 2015-11-14 12:57:17 +00:00
Nick Craig-Wood
29ecc2d8bb onedrive: disable server side copy as it seems to be broken 2015-11-14 12:11:38 +00:00
Nick Craig-Wood
13da1b8d28 Add docs for fs specific options - fixes #210 2015-11-14 11:38:35 +00:00
Nick Craig-Wood
0b338eaa28 Fix up sensitive vs insensitive in the docs and some formatting - fixes #214 2015-11-14 11:20:04 +00:00
Nick Craig-Wood
46696865fd Ignore golint errors that can't be fixed
Stop duplicating checkers in .travis.yml - use Makefile as definitive source
2015-11-14 10:08:52 +00:00
Nick Craig-Wood
fcea3777c0 Implement Hubic storage system - fixes #200 2015-11-14 08:08:52 +00:00
Nick Craig-Wood
5023050d95 Add RedirectLocalhostURL for another form of redirect URL 2015-11-14 08:08:51 +00:00
Nick Craig-Wood
bed01a303f Add UnWrapper interface and implement in LimitedFs 2015-11-14 08:08:51 +00:00
Nick Craig-Wood
2c2cb84ca7 Make it so optional interface Purge can fail so it can be wrapped 2015-11-14 08:08:51 +00:00
Nick Craig-Wood
e9dda25c60 Implement Move in limited fs 2015-11-14 08:08:51 +00:00
Nick Craig-Wood
80ffbade22 Fix deletion of some excluded files without --delete-excluded #205
This only happened if the destination file was present but the source
file was missing.
2015-11-12 11:46:04 +00:00
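A short sketch of the intended behaviour (paths and patterns are illustrative): excluded files already on the destination are left alone unless deletion is requested explicitly.

```
# Excluded files on the destination are not deleted by sync...
rclone sync --exclude "*.bak" /src remote:dst
# ...unless --delete-excluded is given
rclone sync --exclude "*.bak" --delete-excluded /src remote:dst
```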
Nick Craig-Wood
7beb50caa7 Remove go tip for the moment since it seems to be broken 2015-11-11 18:18:04 +00:00
Nick Craig-Wood
e8ba43c479 swift: Use ContentType from Object to avoid lookups in listings - fixes #208 2015-11-11 17:19:57 +00:00
Nick Craig-Wood
dcd6bedc27 make beta to compile and upload a beta release 2015-11-11 17:00:08 +00:00
Nick Craig-Wood
5bb76cc35c Stop SetModTime losing metadata (eg X-Object-Manifest) - fixes #203 2015-11-11 17:00:08 +00:00
Nick Craig-Wood
3e68d485f2 Use svg for build status like the other badges 2015-11-08 17:46:19 +00:00
Nick Craig-Wood
1945f09d06 Drop back to testing with go 1.4.2 as it includes go vet 2015-11-08 10:52:35 +00:00
Nick Craig-Wood
2c66bdd6bb Remove Go 1.5-ism to make compilable by go 1.3 & 1.4 - fixes #201 2015-11-08 10:42:50 +00:00
Nick Craig-Wood
a4f3548bbf Remove OS X build until #194 is fixed and update go versions 2015-11-08 10:31:40 +00:00
1107 changed files with 350144 additions and 30474 deletions

7
.gitattributes vendored Normal file

@@ -0,0 +1,7 @@
# Ignore generated files in GitHub language statistics and diffs
/MANUAL.* linguist-generated=true
/rclone.1 linguist-generated=true
# Don't fiddle with the line endings of test data
**/testdata/** -text
**/test/** -text

4
.github/FUNDING.yml vendored Normal file

@@ -0,0 +1,4 @@
github: [ncw]
patreon: njcw
liberapay: ncw
custom: ["https://rclone.org/donate/"]

31
.github/ISSUE_TEMPLATE.md vendored Normal file

@@ -0,0 +1,31 @@
<!--
Welcome :-) We understand you are having a problem with rclone; we want to help you with that!
If you've just got a question or aren't sure if you've found a bug then please use the rclone forum:
https://forum.rclone.org/
instead of filing an issue for a quick response.
If you are reporting a bug or asking for a new feature then please use one of the templates here:
https://github.com/rclone/rclone/issues/new
otherwise fill in the form below.
Thank you
The Rclone Developers
-->
#### Output of `rclone version`
#### Describe the issue

50
.github/ISSUE_TEMPLATE/Bug.md vendored Normal file

@@ -0,0 +1,50 @@
---
name: Bug report
about: Report a problem with rclone
---
<!--
Welcome :-) We understand you are having a problem with rclone; we want to help you with that!
If you've just got a question or aren't sure if you've found a bug then please use the rclone forum:
https://forum.rclone.org/
instead of filing an issue for a quick response.
If you think you might have found a bug, please can you try to replicate it with the latest beta?
https://beta.rclone.org/
If you can still replicate it with the latest beta, then please fill in the info below which makes our lives much easier. A log with -vv will make our day :-)
Thank you
The Rclone Developers
-->
#### What is the problem you are having with rclone?
#### What is your rclone version (output from `rclone version`)
#### Which OS you are using and how many bits (eg Windows 7, 64 bit)
#### Which cloud storage system are you using? (eg Google Drive)
#### The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
#### A log from the command with the `-vv` flag (eg output from `rclone -vv copy /tmp remote:tmp`)

36
.github/ISSUE_TEMPLATE/Feature.md vendored Normal file

@@ -0,0 +1,36 @@
---
name: Feature request
about: Suggest a new feature or enhancement for rclone
---
<!--
Welcome :-)
So you've got an idea to improve rclone? We love that! You'll be glad to hear we've incorporated hundreds of ideas from contributors already.
Here is a checklist of things to do:
1. Please search the old issues first for your idea and +1 or comment on an existing issue if possible.
2. Discuss on the forum first: https://forum.rclone.org/
3. Make a feature request issue (this is the right place!).
4. Be prepared to get involved making the feature :-)
Looking forward to your great idea!
The Rclone Developers
-->
#### What is your current rclone version (output from `rclone version`)?
#### What problem are you trying to solve?
#### How do you think rclone should be changed to solve that?

5
.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@@ -0,0 +1,5 @@
blank_issues_enabled: false
contact_links:
- name: Rclone Forum Community Support
url: https://forum.rclone.org/
about: Please ask and answer questions here.

29
.github/PULL_REQUEST_TEMPLATE.md vendored Normal file

@@ -0,0 +1,29 @@
<!--
Thank you very much for contributing code or documentation to rclone! Please
fill out the following questions to make it easier for us to review your
changes.
You do not need to check all the boxes below all at once, feel free to take
your time and add more commits. If you're done and ready for review, please
check the last box.
-->
#### What is the purpose of this change?
<!--
Describe the changes here
-->
#### Was the change discussed in an issue or in the forum before?
<!--
Link issues and relevant forum posts here.
-->
#### Checklist
- [ ] I have read the [contribution guidelines](https://github.com/rclone/rclone/blob/master/CONTRIBUTING.md#submitting-a-pull-request).
- [ ] I have added tests for all changes in this PR if appropriate.
- [ ] I have added documentation for the changes if appropriate.
- [ ] All commit messages are in [house style](https://github.com/rclone/rclone/blob/master/CONTRIBUTING.md#commit-messages).
- [ ] I'm done, this Pull Request is ready for review :-)

255
.github/workflows/build.yml vendored Normal file

@@ -0,0 +1,255 @@
---
# Github Actions build for rclone
# -*- compile-command: "yamllint -f parsable build.yml" -*-
name: build
# Trigger the workflow on push or pull request
on:
push:
branches:
- '*'
tags:
- '*'
pull_request:
jobs:
build:
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'mac', 'windows_amd64', 'windows_386', 'other_os', 'race', 'go1.11', 'go1.12', 'go1.13']
include:
- job_name: linux
os: ubuntu-latest
go: '1.14.x'
modules: 'on'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
quicktest: true
deploy: true
- job_name: mac
os: macOS-latest
go: '1.14.x'
modules: 'on'
gotags: '' # cmount doesn't work on osx travis for some reason
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
racequicktest: true
deploy: true
- job_name: windows_amd64
os: windows-latest
go: '1.14.x'
modules: 'on'
gotags: cmount
build_flags: '-include "^windows/amd64" -cgo'
quicktest: true
racequicktest: true
deploy: true
- job_name: windows_386
os: windows-latest
go: '1.14.x'
modules: 'on'
gotags: cmount
goarch: '386'
cgo: '1'
build_flags: '-include "^windows/386" -cgo'
quicktest: true
deploy: true
- job_name: other_os
os: ubuntu-latest
go: '1.14.x'
modules: 'on'
build_flags: '-exclude "^(windows/|darwin/amd64|linux/)"'
compile_all: true
deploy: true
- job_name: race
os: ubuntu-latest
go: '1.14.x'
modules: 'on'
quicktest: true
racequicktest: true
- job_name: go1.11
os: ubuntu-latest
go: '1.11.x'
modules: 'on'
quicktest: true
- job_name: go1.12
os: ubuntu-latest
go: '1.12.x'
modules: 'on'
quicktest: true
- job_name: go1.13
os: ubuntu-latest
go: '1.13.x'
modules: 'on'
quicktest: true
name: ${{ matrix.job_name }}
runs-on: ${{ matrix.os }}
steps:
- name: Checkout
uses: actions/checkout@v1
with:
# Checkout into a fixed path to avoid import path problems on go < 1.11
path: ./src/github.com/rclone/rclone
- name: Install Go
uses: actions/setup-go@v1
with:
go-version: ${{ matrix.go }}
- name: Set environment variables
shell: bash
run: |
echo '::set-env name=GOPATH::${{ runner.workspace }}'
echo '::add-path::${{ runner.workspace }}/bin'
echo '::set-env name=GO111MODULE::${{ matrix.modules }}'
echo '::set-env name=GOTAGS::${{ matrix.gotags }}'
echo '::set-env name=BUILD_FLAGS::${{ matrix.build_flags }}'
if [[ "${{ matrix.goarch }}" != "" ]]; then echo '::set-env name=GOARCH::${{ matrix.goarch }}' ; fi
if [[ "${{ matrix.cgo }}" != "" ]]; then echo '::set-env name=CGO_ENABLED::${{ matrix.cgo }}' ; fi
- name: Install Libraries on Linux
shell: bash
run: |
sudo modprobe fuse
sudo chmod 666 /dev/fuse
sudo chown root:$USER /etc/fuse.conf
sudo apt-get install fuse libfuse-dev rpm pkg-config
if: matrix.os == 'ubuntu-latest'
- name: Install Libraries on macOS
shell: bash
run: |
brew update
brew cask install osxfuse
if: matrix.os == 'macOS-latest'
- name: Install Libraries on Windows
shell: powershell
run: |
$ProgressPreference = 'SilentlyContinue'
choco install -y winfsp zip
Write-Host "::set-env name=CPATH::C:\Program Files\WinFsp\inc\fuse;C:\Program Files (x86)\WinFsp\inc\fuse"
if ($env:GOARCH -eq "386") {
choco install -y mingw --forcex86 --force
Write-Host "::add-path::C:\\ProgramData\\chocolatey\\lib\\mingw\\tools\\install\\mingw32\\bin"
}
# Copy mingw32-make.exe to make.exe so the same command line
# can be used on Windows as on macOS and Linux
$path = (get-command mingw32-make.exe).Path
Copy-Item -Path $path -Destination (Join-Path (Split-Path -Path $path) 'make.exe')
if: matrix.os == 'windows-latest'
- name: Print Go version and environment
shell: bash
run: |
printf "Using go at: $(which go)\n"
printf "Go version: $(go version)\n"
printf "\n\nGo environment:\n\n"
go env
printf "\n\nRclone environment:\n\n"
make vars
printf "\n\nSystem environment:\n\n"
env
- name: Run tests
shell: bash
run: |
make
make quicktest
if: matrix.quicktest
- name: Race test
shell: bash
run: |
make racequicktest
if: matrix.racequicktest
- name: Code quality test
shell: bash
run: |
make build_dep
make check
if: matrix.check
- name: Compile all architectures test
shell: bash
run: |
make
make compile_all
if: matrix.compile_all
- name: Deploy built binaries
shell: bash
run: |
if [[ "${{ matrix.os }}" == "ubuntu-latest" ]]; then make release_dep_linux ; fi
if [[ "${{ matrix.os }}" == "windows-latest" ]]; then make release_dep_windows ; fi
make ci_beta
env:
RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
# working-directory: '$(modulePath)'
# Deploy binaries if enabled in config && not a PR && not a fork
if: matrix.deploy && github.head_ref == '' && github.repository == 'rclone/rclone'
xgo:
timeout-minutes: 60
name: "xgo cross compile"
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v1
with:
# Checkout into a fixed path to avoid import path problems on go < 1.11
path: ./src/github.com/rclone/rclone
- name: Set environment variables
shell: bash
run: |
echo '::set-env name=GOPATH::${{ runner.workspace }}'
echo '::add-path::${{ runner.workspace }}/bin'
- name: Cross-compile rclone
run: |
docker pull billziss/xgo-cgofuse
GO111MODULE=off go get -v github.com/karalabe/xgo # don't add to go.mod
# xgo \
# -image=billziss/xgo-cgofuse \
# -targets=darwin/386,darwin/amd64,linux/386,linux/amd64,windows/386,windows/amd64 \
# -tags cmount \
# -dest build \
# .
xgo \
-image=billziss/xgo-cgofuse \
-targets=android/*,ios/* \
-dest build \
.
- name: Build rclone
run: |
docker pull golang
docker run --rm -v "$PWD":/usr/src/rclone -w /usr/src/rclone golang go build -mod=mod -v
- name: Upload artifacts
run: |
make ci_upload
env:
RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
# Upload artifacts if not a PR && not a fork
if: github.head_ref == '' && github.repository == 'rclone/rclone'


@@ -0,0 +1,25 @@
name: Docker beta build
on:
push:
branches:
- master
jobs:
build:
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Checkout master
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Build and publish image
uses: ilteoood/docker_buildx@439099796bfc03dd9cedeb72a0c7cb92be5cc92c
with:
tag: beta
imageName: rclone/rclone
platform: linux/amd64,linux/386,linux/arm64,linux/arm/v7
publish: true
dockerHubUser: ${{ secrets.DOCKER_HUB_USER }}
dockerHubPassword: ${{ secrets.DOCKER_HUB_PASSWORD }}


@@ -0,0 +1,33 @@
name: Docker release build
on:
release:
types: [published]
jobs:
build:
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Checkout master
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Get actual patch version
id: actual_patch_version
run: echo ::set-output name=ACTUAL_PATCH_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g')
- name: Get actual minor version
id: actual_minor_version
run: echo ::set-output name=ACTUAL_MINOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1,2)
- name: Get actual major version
id: actual_major_version
run: echo ::set-output name=ACTUAL_MAJOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1)
- name: Build and publish image
uses: ilteoood/docker_buildx@439099796bfc03dd9cedeb72a0c7cb92be5cc92c
with:
tag: latest,${{ steps.actual_patch_version.outputs.ACTUAL_PATCH_VERSION }},${{ steps.actual_minor_version.outputs.ACTUAL_MINOR_VERSION }},${{ steps.actual_major_version.outputs.ACTUAL_MAJOR_VERSION }}
imageName: rclone/rclone
platform: linux/amd64,linux/386,linux/arm64,linux/arm/v7
publish: true
dockerHubUser: ${{ secrets.DOCKER_HUB_USER }}
dockerHubPassword: ${{ secrets.DOCKER_HUB_PASSWORD }}

11
.gitignore vendored

@@ -1,10 +1,11 @@
*~
_junk/
rclone
rclonetest/rclonetest
build
docs/public
MANUAL.md
MANUAL.html
MANUAL.txt
rclone.1
rclone.iml
.idea
.history
*.test
*.log
*.iml

26
.golangci.yml Normal file

@@ -0,0 +1,26 @@
# golangci-lint configuration options
linters:
enable:
- deadcode
- errcheck
- goimports
- golint
- ineffassign
- structcheck
- varcheck
- govet
- unconvert
#- prealloc
#- maligned
disable-all: true
issues:
# Enable some lints excluded by default
exclude-use-default: false
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-per-linter: 0
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same-issues: 0


@@ -1,24 +0,0 @@
language: go
sudo: false
os:
- linux
- osx
go:
- 1.3.3
- 1.4.2
- 1.5
- tip
install:
- go get ./...
- go get -u github.com/golang/lint/golint
- go get -u golang.org/x/tools/cmd/goimports
script:
- go vet ./...
- diff <(goimports -d .) <(printf "")
- diff <(golint ./...) <(printf "")
- go test -v ./...
- go test -cpu=2 -race -v ./...

415
CONTRIBUTING.md Normal file

@@ -0,0 +1,415 @@
# Contributing to rclone #
This is a short guide on how to contribute things to rclone.
## Reporting a bug ##
If you've just got a question or aren't sure if you've found a bug
then please use the [rclone forum](https://forum.rclone.org/) instead
of filing an issue.
When filing an issue, please include the following information if
possible as well as a description of the problem. Make sure you test
with the [latest beta of rclone](https://beta.rclone.org/):
* Rclone version (eg output from `rclone -V`)
* Which OS you are using and how many bits (eg Windows 7, 64 bit)
* The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
* A log of the command with the `-vv` flag (eg output from `rclone -vv copy /tmp remote:tmp`)
* if the log contains secrets then edit the file with a text editor first to obscure them
## Submitting a pull request ##
If you find a bug that you'd like to fix, or a new feature that you'd
like to implement then please submit a pull request via GitHub.
If it is a big feature then make an issue first so it can be discussed.
You'll need a Go environment set up with GOPATH set. See [the Go
getting started docs](https://golang.org/doc/install) for more info.
First in your web browser press the fork button on [rclone's GitHub
page](https://github.com/rclone/rclone).
Now in your terminal
go get -u github.com/rclone/rclone
cd $GOPATH/src/github.com/rclone/rclone
git remote rename origin upstream
git remote add origin git@github.com:YOURUSER/rclone.git
Make a branch to add your new feature
git checkout -b my-new-feature
And get hacking.
When ready - run the unit tests for the code you changed
go test -v
Note that you may need to make a test remote, eg `TestSwift` for some
of the unit tests.
Note the top level Makefile targets
* make check
* make test
Both of these will be run by Travis when you make a pull request but
you can do this yourself locally too. These require some extra go
packages which you can install with
* make build_dep
Make sure you
* Add [documentation](#writing-documentation) for a new feature.
* Follow the [commit message guidelines](#commit-messages).
* Add [unit tests](#testing) for a new feature
* squash commits down to one per feature
* rebase to master with `git rebase master`
When you are done with that
git push origin my-new-feature
Go to the GitHub website and click [Create pull
request](https://help.github.com/articles/creating-a-pull-request/).
Your patch will get reviewed and you might get asked to fix some stuff.
If so, then make the changes in the same branch, squash the commits (make multiple commits one commit) by running:
```
git log # See how many commits you want to squash
git reset --soft HEAD~2 # This squashes the 2 latest commits together.
git status # Check what will happen, if you made a mistake resetting, you can run git reset 'HEAD@{1}' to undo.
git commit # Add a new commit message.
git push --force # Push the squashed commit to your GitHub repo.
# For more, see Stack Overflow, Git docs, or generally Duck around the web. jtagcat also reccommends wizardzines.com
```
## CI for your fork ##
rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions) to build and test the project, which should be automatically available for your fork too from the `Actions` tab in your repository.
## Testing ##
rclone's tests are run from the go testing framework, so at the top
level you can run this to run all the tests.
go test -v ./...
rclone contains a mixture of unit tests and integration tests.
Because it is difficult (and in some respects pointless) to test cloud
storage systems by mocking all their interfaces, rclone unit tests can
run against any of the backends. This is done by making specially
named remotes in the default config file.
If you wanted to test changes in the `drive` backend, then you would
need to make a remote called `TestDrive`.
You can then run the unit tests in the drive directory. These tests
are skipped if `TestDrive:` isn't defined.
cd backend/drive
go test -v
You can then run the integration tests which tests all of rclone's
operations. Normally these get run against the local filing system,
but they can be run against any of the remotes.
cd fs/sync
go test -v -remote TestDrive:
go test -v -remote TestDrive: -fast-list
cd fs/operations
go test -v -remote TestDrive:
If you want to use the integration test framework to run these tests
all together with an HTML report and test retries then from the
project root:
go install github.com/rclone/rclone/fstest/test_all
test_all -backend drive
If you want to run all the integration tests against all the remotes,
then change into the project root and run
make test
This command is run daily on the integration test server. You can
find the results at https://pub.rclone.org/integration-tests/
## Code Organisation ##
Rclone code is organised into a small number of top level directories
with modules beneath.
* backend - the rclone backends for interfacing to cloud providers -
* all - import this to load all the cloud providers
* ...providers
* bin - scripts for use while building or maintaining rclone
* cmd - the rclone commands
* all - import this to load all the commands
* ...commands
* docs - the documentation and website
* content - adjust these docs only - everything else is autogenerated
* command - these are auto generated - edit the corresponding .go file
* fs - main rclone definitions - minimal amount of code
* accounting - bandwidth limiting and statistics
* asyncreader - an io.Reader which reads ahead
* config - manage the config file and flags
* driveletter - detect if a name is a drive letter
* filter - implements include/exclude filtering
* fserrors - rclone specific error handling
* fshttp - http handling for rclone
* fspath - path handling for rclone
* hash - defines rclone's hash types and functions
* list - list a remote
* log - logging facilities
* march - iterates directories in lock step
* object - in memory Fs objects
* operations - primitives for sync, eg Copy, Move
* sync - sync directories
* walk - walk a directory
* fstest - provides integration test framework
* fstests - integration tests for the backends
* mockdir - mocks an fs.Directory
* mockobject - mocks an fs.Object
* test_all - Runs integration tests for everything
* graphics - the images used in the website etc
* lib - libraries used by the backend
* atexit - register functions to run when rclone exits
* dircache - directory ID to name caching
* oauthutil - helpers for using oauth
* pacer - retries with backoff and paces operations
* readers - a selection of useful io.Readers
* rest - a thin abstraction over net/http for REST
* vfs - Virtual FileSystem layer for implementing rclone mount and similar
## Writing Documentation ##
If you are adding a new feature then please update the documentation.
If you add a new general flag (not for a backend), then document it in
`docs/content/docs.md` - the flags there are supposed to be in
alphabetical order.
If you add a new backend option/flag, then it should be documented in
the source file in the `Help:` field. The first line of this is used
for the flag help, the remainder is shown to the user in `rclone
config` and is added to the docs with `make backenddocs`.
The only documentation you need to edit are the `docs/content/*.md`
files. The MANUAL.*, rclone.1, web site etc are all auto generated
from those during the release process. See the `make doc` and `make
website` targets in the Makefile if you are interested in how. You
don't need to run these when adding a feature.
Documentation for rclone sub commands is with their code, eg
`cmd/ls/ls.go`.
Note that you can use [GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
for small changes in the docs which makes it very easy.
## Making a release ##
There are separate instructions for making a release in the RELEASE.md
file.
## Commit messages ##
Please make the first line of your commit message a summary of the
change that a user (not a developer) of rclone would like to read, and
prefix it with the directory of the change followed by a colon. The
changelog gets made by looking at just these first lines so make it
good!
If you have more to say about the commit, then enter a blank line and
carry on the description. Remember to say why the change was needed -
the commit itself shows what was changed.
Writing more is better than less. Comparing the behaviour before the
change to that after the change is very useful. Imagine you are
writing to yourself in 12 months time when you've forgotten everything
about what you just did and you need to get up to speed quickly.
If the change fixes an issue then write `Fixes #1234` in the commit
message. This can be on the subject line if it will fit. If you
don't want to close the associated issue just put `#1234` and the
change will get linked into the issue.
Here is an example of a short commit message:
```
drive: add team drive support - fixes #885
```
And here is an example of a longer one:
```
mount: fix hang on errored upload
In certain circumstances if an upload failed then the mount could hang
indefinitely. This was fixed by closing the read pipe after the Put
completed. This will cause the write side to return a pipe closed
error fixing the hang.
Fixes #1498
```
## Adding a dependency ##
rclone uses the [go
modules](https://tip.golang.org/cmd/go/#hdr-Modules__module_versions__and_more)
support in go1.11 and later to manage its dependencies.
rclone can be built with modules outside of the GOPATH
To add a dependency `github.com/ncw/new_dependency` see the
instructions below. These will fetch the dependency and add it to
`go.mod` and `go.sum`.
GO111MODULE=on go get github.com/ncw/new_dependency
You can add constraints on that package when doing `go get` (see the
go docs linked above), but don't unless you really need to.
Please check in the changes generated by `go mod` including `go.mod`
and `go.sum` in the same commit as your other changes.
## Updating a dependency ##
If you need to update a dependency then run
GO111MODULE=on go get -u github.com/pkg/errors
Check in a single commit as above.
## Updating all the dependencies ##
In order to update all the dependencies then run `make update`. This
just uses the go modules to update all the modules to their latest
stable release. Check in the changes in a single commit as above.
This should be done early in the release cycle to pick up new versions
of packages in time for them to get some testing.
## Updating a backend ##
If you update a backend then please run the unit tests and the
integration tests for that backend.
Assuming the backend is called `remote`, create a config entry
called `TestRemote` for the tests to use.
Now `cd remote` and run `go test -v` to run the unit tests.
Then `cd fs` and run `go test -v -remote TestRemote:` to run the
integration tests.
The next section goes into more detail about the tests.
## Writing a new backend ##
Choose a name. The docs here will use `remote` as an example.
Note that in rclone terminology a file system backend is called a
remote or an fs.
Research
* Look at the interfaces defined in `fs/fs.go`
* Study one or more of the existing remotes
Getting going
* Create `backend/remote/remote.go` (copy this from a similar remote)
* box is a good one to start from if you have a directory based remote
* b2 is a good one to start from if you have a bucket based remote
* Add your remote to the imports in `backend/all/all.go`
* HTTP based remotes are easiest to maintain if they use rclone's rest module, but if there is a really good go SDK then use that instead.
* Try to implement as many optional methods as possible as it makes the remote more usable.
* Use lib/encoder to make sure we can encode any path name and `rclone info` to help determine the encodings needed
* `rclone purge -v TestRemote:rclone-info`
* `rclone info --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
* `go run cmd/info/internal/build_csv/main.go -o remote.csv remote.json`
* open `remote.csv` in a spreadsheet and examine
Unit tests
* Create a config entry called `TestRemote` for the unit tests to use
* Create a `backend/remote/remote_test.go` - copy and adjust your example remote
* Make sure all tests pass with `go test -v`
Integration tests
* Add your backend to `fstest/test_all/config.yaml`
* Once you've done that then you can use the integration test framework from the project root:
* go install ./...
* test_all -backends remote
Or if you want to run the integration tests manually:
* Make sure integration tests pass with
* `cd fs/operations`
* `go test -v -remote TestRemote:`
* `cd fs/sync`
* `go test -v -remote TestRemote:`
* If your remote defines `ListR` check with this also
* `go test -v -remote TestRemote: -fast-list`
See the [testing](#testing) section for more information on integration tests.
Add your fs to the docs - you'll need to pick an icon for it from
[fontawesome](http://fontawesome.io/icons/). Keep lists of remotes in
alphabetical order of full name of remote (eg `drive` is ordered as
`Google Drive`) but with the local file system last.
* `README.md` - main GitHub page
* `docs/content/remote.md` - main docs page (note the backend options are automatically added to this file with `make backenddocs`)
* make sure this has the `autogenerated options` comments in (see your reference backend docs)
* update them with `make backenddocs` - revert any changes in other backends
* `docs/content/overview.md` - overview docs
* `docs/content/docs.md` - list of remotes in config section
* `docs/content/_index.md` - front page of rclone.org
* `docs/layouts/chrome/navbar.html` - add it to the website navigation
* `bin/make_manual.py` - add the page to the `docs` constant
Once you've written the docs, run `make serve` and check they look OK
in the web browser and the links (internal and external) all work.
## Writing a plugin ##
New features (backends, commands) can also be added "out-of-tree", through Go plugins.
Changes will be kept in a dynamically loaded file instead of being compiled into the main binary.
This is useful if you can't merge your changes upstream or don't want to maintain a fork of rclone.
Usage
- Naming
- Plugins names must have the pattern `librcloneplugin_KIND_NAME.so`.
- `KIND` should be one of `backend`, `command` or `bundle`.
- Example: A plugin with backend support for PiFS would be called
`librcloneplugin_backend_pifs.so`.
- Loading
- Supported on macOS & Linux as of now. ([Go issue for Windows support](https://github.com/golang/go/issues/19282))
- Supported on rclone v1.50 or greater.
- All plugins in the folder specified by variable `$RCLONE_PLUGIN_PATH` are loaded.
- If this variable doesn't exist, plugin support is disabled.
- Plugins must be compiled against the exact version of rclone to work.
(The rclone used during building the plugin must be the same as the source of rclone)
Building
To turn your existing additions into a Go plugin, move them to an external repository
and change the top-level package name to `main`.
Check `rclone --version` and make sure that the plugin's rclone dependency and host Go version match.
Then, run `go build -buildmode=plugin -o PLUGIN_NAME.so .` to build the plugin.
[Go reference](https://godoc.org/github.com/rclone/rclone/lib/plugin)
[Minimal example](https://gist.github.com/terorie/21b517ee347828e899e1913efc1d684f)

22
Dockerfile Normal file

@@ -0,0 +1,22 @@
FROM golang AS builder
COPY . /go/src/github.com/rclone/rclone/
WORKDIR /go/src/github.com/rclone/rclone/
RUN \
CGO_ENABLED=0 \
make
RUN ./rclone version
# Begin final image
FROM alpine:latest
RUN apk --no-cache add ca-certificates fuse tzdata && \
echo "user_allow_other" >> /etc/fuse.conf
COPY --from=builder /go/src/github.com/rclone/rclone/rclone /usr/local/bin/
ENTRYPOINT [ "rclone" ]
WORKDIR /data
ENV XDG_CONFIG_HOME=/config

96
MAINTAINERS.md Normal file

@@ -0,0 +1,96 @@
# Maintainers guide for rclone #
Current active maintainers of rclone are:
| Name | GitHub ID | Specific Responsibilities |
| :--------------- | :---------------- | :-------------------------- |
| Nick Craig-Wood | @ncw | overall project health |
| Stefan Breunig | @breunigs | |
| Ishuah Kariuki | @ishuah | |
| Remus Bunduc | @remusb | cache backend |
| Fabian Möller | @B4dM4n | |
| Alex Chen | @Cnly | onedrive backend |
| Sandeep Ummadi | @sandeepkru | azureblob backend |
| Sebastian Bünger | @buengese | jottacloud & yandex backends |
| Ivan Andreev | @ivandeex | chunker & mailru backends |
| Max Sum | @Max-Sum | union backend |
| Fred | @creativeprojects | seafile backend |
| Caleb Case | @calebcase | tardigrade backend |
**This is a work in progress Draft**
This is a guide for how to be an rclone maintainer. This is mostly a writeup of what I (@ncw) attempt to do.
## Triaging Tickets ##
When a ticket comes in it should be triaged. This means it should be classified by adding labels and placed into a milestone. Quite a lot of tickets need a bit of back and forth to determine whether it is a valid ticket so tickets may remain without labels or milestone for a while.
Rclone uses the labels like this:
* `bug` - a definite verified bug
* `can't reproduce` - a problem which we can't reproduce
* `doc fix` - a bug in the documentation - if users need help understanding the docs add this label
* `duplicate` - normally close these and ask the user to subscribe to the original
* `enhancement: new remote` - a new rclone backend
* `enhancement` - a new feature
* `FUSE` - to do with `rclone mount` command
* `good first issue` - mark these if you find a small self contained issue - these get shown to new visitors to the project
* `help wanted` - mark these if you find a self contained issue - these get shown to new visitors to the project
* `IMPORTANT` - note to maintainers not to forget to fix this for the release
* `maintenance` - internal enhancement, code re-organisation etc
* `Needs Go 1.XX` - waiting for that version of Go to be released
* `question` - not a `bug` or `enhancement` - direct to the forum for next time
* `Remote: XXX` - which rclone backend this affects
* `thinking` - not decided on the course of action yet
If it turns out to be a bug or an enhancement it should be tagged as such, with the appropriate other tags. Don't forget the "good first issue" tag to give new contributors something easy to do to get going.
When a ticket is tagged it should be added to a milestone, either the next release, the one after, Soon or Help Wanted. Bugs can be added to the "Known Bugs" milestone if they aren't planned to be fixed or need to wait for something (eg the next go release).
The milestones have these meanings:
* v1.XX - stuff we would like to fit into this release
* v1.XX+1 - stuff we are leaving until the next release
* Soon - stuff we think is a good idea - waiting to be scheduled to a release
* Help wanted - blue sky stuff that might get moved up, or someone could help with
* Known bugs - bugs waiting on external factors or we aren't going to fix for the moment
Tickets [with no milestone](https://github.com/rclone/rclone/issues?utf8=✓&q=is%3Aissue%20is%3Aopen%20no%3Amile) are good candidates for ones that have slipped between the gaps and need following up.
## Closing Tickets ##
Close tickets as soon as you can - make sure they are tagged with a release. Post a link to a beta in the ticket with the fix in, asking for feedback.
## Pull requests ##
Try to process pull requests promptly!
Merging pull requests on GitHub itself works quite well nowadays, so you can squash and rebase or rebase pull requests. rclone doesn't use merge commits. Use the squash and rebase option if you need to edit the commit message.
After merging the commit, in your local master branch, do `git pull`, then run `bin/update-authors.py` to update the authors file, then `git push`.
Sometimes pull requests need to be left open for a while - this is especially true of contributions of new backends, which take a long time to get right.
## Merges ##
If you are merging a branch locally then do `git merge --ff-only branch-name` to avoid a merge commit. You'll need to rebase the branch if it doesn't merge cleanly.
## Release cycle ##
Rclone aims for a 6-8 week release cycle. Sometimes release cycles take longer if there is something big to merge that didn't stabilize properly or for personal reasons.
High impact regressions should be fixed before the next release.
Near the start of the release cycle the dependencies should be updated with `make update` to give time for bugs to surface.
Towards the end of the release cycle, try not to merge anything too big so things can settle down.
Follow the instructions in RELEASE.md for making the release. Note that the testing part is the most time consuming, often needing several rounds of test and fix depending on exactly how many new features rclone has gained.
## Mailing list ##
There is now an invite only mailing list for rclone developers `rclone-dev` on google groups.
## TODO ##
I should probably make a dev@rclone.org to register with cloud providers.

21281
MANUAL.html generated Normal file

File diff suppressed because it is too large

26822
MANUAL.md generated Normal file

File diff suppressed because it is too large

27558
MANUAL.txt generated Normal file

File diff suppressed because it is too large

244
Makefile

@@ -1,35 +1,141 @@
SHELL = /bin/bash
TAG := $(shell git describe --tags)
SHELL = bash
# Branch we are working on
BRANCH := $(or $(BUILD_SOURCEBRANCHNAME),$(lastword $(subst /, ,$(GITHUB_REF))),$(shell git rev-parse --abbrev-ref HEAD))
# Tag of the current commit, if any. If this is not "" then we are building a release
RELEASE_TAG := $(shell git tag -l --points-at HEAD)
# Version of last release (may not be on this branch)
VERSION := $(shell cat VERSION)
# Last tag on this branch
LAST_TAG := $(shell git describe --tags --abbrev=0)
NEW_TAG := $(shell echo $(LAST_TAG) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f", $$_)')
# If we are working on a release, override branch to master
ifdef RELEASE_TAG
BRANCH := master
endif
TAG_BRANCH := -$(BRANCH)
BRANCH_PATH := branch/
# If building HEAD or master then unset TAG_BRANCH and BRANCH_PATH
ifeq ($(subst HEAD,,$(subst master,,$(BRANCH))),)
TAG_BRANCH :=
BRANCH_PATH :=
endif
# Make version suffix -DDD-gCCCCCCCC (D=commits since last release, C=Commit) or blank
VERSION_SUFFIX := $(shell git describe --abbrev=8 --tags | perl -lpe 's/^v\d+\.\d+\.\d+//; s/^-(\d+)/"-".sprintf("%03d",$$1)/e;')
# TAG is current version + number of commits since last release + branch
TAG := $(VERSION)$(VERSION_SUFFIX)$(TAG_BRANCH)
NEXT_VERSION := $(shell echo $(VERSION) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f.0", $$_)')
ifndef RELEASE_TAG
TAG := $(TAG)-beta
endif
GO_VERSION := $(shell go version)
GO_FILES := $(shell go list ./... )
ifdef BETA_SUBDIR
BETA_SUBDIR := /$(BETA_SUBDIR)
endif
BETA_PATH := $(BRANCH_PATH)$(TAG)$(BETA_SUBDIR)
BETA_URL := https://beta.rclone.org/$(BETA_PATH)/
BETA_UPLOAD_ROOT := memstore:beta-rclone-org
BETA_UPLOAD := $(BETA_UPLOAD_ROOT)/$(BETA_PATH)
# Pass in GOTAGS=xyz on the make command line to set build tags
ifdef GOTAGS
BUILDTAGS=-tags "$(GOTAGS)"
LINTTAGS=--build-tags "$(GOTAGS)"
endif
.PHONY: rclone test_all vars version
rclone:
@go version
go install -v ./...
go build -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS)
mkdir -p `go env GOPATH`/bin/
cp -av rclone`go env GOEXE` `go env GOPATH`/bin/rclone`go env GOEXE`.new
mv -v `go env GOPATH`/bin/rclone`go env GOEXE`.new `go env GOPATH`/bin/rclone`go env GOEXE`
test: rclone
go test ./...
cd fs && ./test_all.sh
test_all:
go install --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) github.com/rclone/rclone/fstest/test_all
vars:
@echo SHELL="'$(SHELL)'"
@echo BRANCH="'$(BRANCH)'"
@echo TAG="'$(TAG)'"
@echo VERSION="'$(VERSION)'"
@echo NEXT_VERSION="'$(NEXT_VERSION)'"
@echo GO_VERSION="'$(GO_VERSION)'"
@echo BETA_URL="'$(BETA_URL)'"
btest:
@echo "[$(TAG)]($(BETA_URL)) on branch [$(BRANCH)](https://github.com/rclone/rclone/tree/$(BRANCH)) (uploaded in 15-30 mins)" | xclip -r -sel clip
@echo "Copied markdown of beta release to clip board"
version:
@echo '$(TAG)'
# Full suite of integration tests
test: rclone test_all
-test_all 2>&1 | tee test_all.log
@echo "Written logs in test_all.log"
# Quick test
quicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) $(GO_FILES)
racequicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race $(GO_FILES)
# Do source code quality checks
check: rclone
go vet ./...
errcheck ./...
golint ./...
diff <(goimports -d .) <(printf "")
@echo "-- START CODE QUALITY REPORT -------------------------------"
@golangci-lint run $(LINTTAGS) ./...
@echo "-- END CODE QUALITY REPORT ---------------------------------"
doc: rclone.1 MANUAL.html MANUAL.txt
# Get the build dependencies
build_dep:
go run bin/get-github-release.go -extract golangci-lint golangci/golangci-lint 'golangci-lint-.*\.tar\.gz'
# Get the release dependencies we only install on linux
release_dep_linux:
go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64.tar.gz'
go run bin/get-github-release.go -extract github-release aktau/github-release 'linux-amd64-github-release.tar.bz2'
# Get the release dependencies we only install on Windows
release_dep_windows:
GO111MODULE=off GOOS="" GOARCH="" go get github.com/josephspurrier/goversioninfo/cmd/goversioninfo
# Update dependencies
showupdates:
@echo "*** Direct dependencies that could be updated ***"
@GO111MODULE=on go list -u -f '{{if (and (not (or .Main .Indirect)) .Update)}}{{.Path}}: {{.Version}} -> {{.Update.Version}}{{end}}' -m all 2> /dev/null
# Update direct and indirect dependencies and test dependencies
update:
GO111MODULE=on go get -u -t ./...
-#GO111MODULE=on go get -d $(go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all)
GO111MODULE=on go mod tidy
# Tidy the module dependencies
tidy:
GO111MODULE=on go mod tidy
doc: rclone.1 MANUAL.html MANUAL.txt rcdocs commanddocs
rclone.1: MANUAL.md
pandoc -s --from markdown --to man MANUAL.md -o rclone.1
pandoc -s --from markdown-smart --to man MANUAL.md -o rclone.1
MANUAL.md: make_manual.py docs/content/*.md
./make_manual.py
MANUAL.md: bin/make_manual.py docs/content/*.md commanddocs backenddocs
./bin/make_manual.py
MANUAL.html: MANUAL.md
pandoc -s --from markdown --to html MANUAL.md -o MANUAL.html
pandoc -s --from markdown-smart --to html MANUAL.md -o MANUAL.html
MANUAL.txt: MANUAL.md
pandoc -s --from markdown --to plain MANUAL.md -o MANUAL.txt
pandoc -s --from markdown-smart --to plain MANUAL.md -o MANUAL.txt
commanddocs: rclone
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs docs/content/
backenddocs: rclone bin/make_backend_docs.py
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" ./bin/make_backend_docs.py
rcdocs: rclone
bin/make_rc_docs.sh
install: rclone
install -d ${DESTDIR}/usr/bin
@@ -39,38 +145,102 @@ clean:
go clean ./...
find . -name \*~ | xargs -r rm -f
rm -rf build docs/public
rm -f rclone rclonetest/rclonetest rclone.1 MANUAL.md MANUAL.html MANUAL.txt
rm -f rclone fs/operations/operations.test fs/sync/sync.test fs/test_all.log test.log
website:
rm -rf docs/public
cd docs && hugo
@if grep -R "raw HTML omitted" docs/public ; then echo "ERROR: found unescaped HTML - fix the markdown source" ; fi
upload_website: website
rclone -v sync docs/public memstore:www-rclone-org
upload_test_website: website
rclone -P sync docs/public test-rclone-org:
validate_website: website
find docs/public -type f -name "*.html" | xargs tidy --mute-id yes -errors --gnu-emacs yes --drop-empty-elements no --warn-proprietary-attributes no --mute MISMATCHED_ATTRIBUTE_WARN
tarball:
git archive -9 --format=tar.gz --prefix=rclone-$(TAG)/ -o build/rclone-$(TAG).tar.gz $(TAG)
sign_upload:
cd build && md5sum rclone-v* | gpg --clearsign > MD5SUMS
cd build && sha1sum rclone-v* | gpg --clearsign > SHA1SUMS
cd build && sha256sum rclone-v* | gpg --clearsign > SHA256SUMS
check_sign:
cd build && gpg --verify MD5SUMS && gpg --decrypt MD5SUMS | md5sum -c
cd build && gpg --verify SHA1SUMS && gpg --decrypt SHA1SUMS | sha1sum -c
cd build && gpg --verify SHA256SUMS && gpg --decrypt SHA256SUMS | sha256sum -c
upload:
rclone -v copy build/ memstore:downloads-rclone-org
rclone -P copy build/ memstore:downloads-rclone-org/$(TAG)
rclone lsf build --files-only --include '*.{zip,deb,rpm}' --include version.txt | xargs -i bash -c 'i={}; j="$$i"; [[ $$i =~ (.*)(-v[0-9\.]+-)(.*) ]] && j=$${BASH_REMATCH[1]}-current-$${BASH_REMATCH[3]}; rclone copyto -v "memstore:downloads-rclone-org/$(TAG)/$$i" "memstore:downloads-rclone-org/$$j"'
upload_github:
./bin/upload-github $(TAG)
cross: doc
./cross-compile $(TAG)
go run bin/cross-compile.go -release current $(BUILDTAGS) $(TAG)
serve:
cd docs && hugo server -v -w
beta:
go run bin/cross-compile.go $(BUILDTAGS) $(TAG)
rclone -v copy build/ memstore:pub-rclone-org/$(TAG)
@echo Beta release ready at https://pub.rclone.org/$(TAG)/
tag:
@echo "Old tag is $(LAST_TAG)"
@echo "New tag is $(NEW_TAG)"
echo -e "package fs\n\n// Version of rclone\nconst Version = \"$(NEW_TAG)\"\n" | gofmt > fs/version.go
perl -lpe 's/VERSION/${NEW_TAG}/g; s/DATE/'`date -I`'/g;' docs/content/downloads.md.in > docs/content/downloads.md
git tag $(NEW_TAG)
@echo "Add this to changelog in docs/content/changelog.md"
@echo " * $(NEW_TAG) -" `date -I`
@git log $(LAST_TAG)..$(NEW_TAG) --oneline
@echo "Then commit the changes"
@echo git commit -m \"Version $(NEW_TAG)\" -a -v
log_since_last_release:
git log $(LAST_TAG)..
compile_all:
go run bin/cross-compile.go -compile-only $(BUILDTAGS) $(TAG)
ci_upload:
sudo chown -R $$USER build
find build -type l -delete
gzip -r9v build
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
ifndef BRANCH_PATH
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest
endif
@echo Beta release ready at $(BETA_URL)/testbuilds
ci_beta:
git log $(LAST_TAG).. > /tmp/git-log.txt
go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt $(BUILD_FLAGS) $(BUILDTAGS) $(TAG)
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
ifndef BRANCH_PATH
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)$(BETA_SUBDIR)
endif
@echo Beta release ready at $(BETA_URL)
# Fetch the binary builds from GitHub actions
fetch_binaries:
rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server -v -w --disableFastRender
tag: doc
@echo "Old tag is $(VERSION)"
@echo "New tag is $(NEXT_VERSION)"
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(NEXT_VERSION)\"\n" | gofmt > fs/version.go
echo -n "$(NEXT_VERSION)" > docs/layouts/partials/version.html
echo "$(NEXT_VERSION)" > VERSION
git tag -s -m "Version $(NEXT_VERSION)" $(NEXT_VERSION)
bin/make_changelog.py $(LAST_TAG) $(NEXT_VERSION) > docs/content/changelog.md.new
mv docs/content/changelog.md.new docs/content/changelog.md
@echo "Edit the new changelog in docs/content/changelog.md"
@echo "Then commit all the changes"
@echo git commit -m \"Version $(NEXT_VERSION)\" -a -v
@echo "And finally run make retag before make cross etc"
retag:
git tag -f $(LAST_TAG)
git tag -f -s -m "Version $(VERSION)" $(VERSION)
gen_tests:
cd fstest/fstests && go generate
startdev:
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(VERSION)-DEV\"\n" | gofmt > fs/version.go
git commit -m "Start $(VERSION)-DEV development" fs/version.go
winzip:
zip -9 rclone-$(TAG).zip rclone.exe
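The `upload` target above rewrites each versioned artifact name to a `-current-` alias using a bash regex one-liner. As a rough illustration (not code from the repository, and with hypothetical file names), the Go sketch below mirrors that renaming rule so the intent of the one-liner is easier to follow.

```go
package main

import (
	"fmt"
	"regexp"
)

// currentName mirrors the bash rewrite in the Makefile upload target:
//   [[ $i =~ (.*)(-v[0-9\.]+-)(.*) ]] && j=${BASH_REMATCH[1]}-current-${BASH_REMATCH[3]}
// i.e. it replaces the embedded "-vX.Y.Z-" version segment with "-current-".
var versionRe = regexp.MustCompile(`(.*)(-v[0-9.]+-)(.*)`)

func currentName(name string) string {
	m := versionRe.FindStringSubmatch(name)
	if m == nil {
		return name // no version segment, leave the name alone
	}
	return m[1] + "-current-" + m[3]
}

func main() {
	// Illustrative file names only.
	fmt.Println(currentName("rclone-v1.52.2-linux-amd64.zip")) // rclone-current-linux-amd64.zip
	fmt.Println(currentName("rclone-v1.52.2-windows-386.zip")) // rclone-current-windows-386.zip
	fmt.Println(currentName("version.txt"))                    // version.txt (unchanged)
}
```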

README.md

@@ -1,41 +1,110 @@
[![Logo](http://rclone.org/img/rclone-120x120.png)](http://rclone.org/)
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/)
[Website](http://rclone.org) |
[Documentation](http://rclone.org/docs/) |
[Changelog](http://rclone.org/changelog/) |
[Installation](http://rclone.org/install/) |
[G+](https://google.com/+RcloneOrg)
[Website](https://rclone.org) |
[Documentation](https://rclone.org/docs/) |
[Download](https://rclone.org/downloads/) |
[Contributing](CONTRIBUTING.md) |
[Changelog](https://rclone.org/changelog/) |
[Installation](https://rclone.org/install/) |
[Forum](https://forum.rclone.org/)
[![Build Status](https://github.com/rclone/rclone/workflows/build/badge.svg)](https://github.com/rclone/rclone/actions?query=workflow%3Abuild)
[![Go Report Card](https://goreportcard.com/badge/github.com/rclone/rclone)](https://goreportcard.com/report/github.com/rclone/rclone)
[![GoDoc](https://godoc.org/github.com/rclone/rclone?status.svg)](https://godoc.org/github.com/rclone/rclone)
[![Docker Pulls](https://img.shields.io/docker/pulls/rclone/rclone)](https://hub.docker.com/r/rclone/rclone)
[![Build Status](https://travis-ci.org/ncw/rclone.png?branch=master)](https://travis-ci.org/ncw/rclone) [![Windows Build Status](https://ci.appveyor.com/api/projects/status/github/ncw/rclone?branch=master&passingText=windows%20-%20ok&svg=true)](https://ci.appveyor.com/project/ncw/rclone) [![GoDoc](https://godoc.org/github.com/ncw/rclone?status.svg)](https://godoc.org/github.com/ncw/rclone)
# Rclone
Rclone is a command line program to sync files and directories to and from
Rclone *("rsync for cloud storage")* is a command line program to sync files and directories to and from different cloud storage providers.
* Google Drive
* Amazon S3
* Openstack Swift / Rackspace cloud files / Memset Memstore
* Dropbox
* Google Cloud Storage
* Amazon Cloud Drive
* The local filesystem
## Storage providers
Features
* 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
* Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
* Amazon Drive [:page_facing_up:](https://rclone.org/amazonclouddrive/) ([See note](https://rclone.org/amazonclouddrive/#status))
* Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
* Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
* Box [:page_facing_up:](https://rclone.org/box/)
* Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
* Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
* DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
* Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* FTP [:page_facing_up:](https://rclone.org/ftp/)
* GetSky [:page_facing_up:](https://rclone.org/jottacloud/)
* Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
* Google Drive [:page_facing_up:](https://rclone.org/drive/)
* Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
* HTTP [:page_facing_up:](https://rclone.org/http/)
* Hubic [:page_facing_up:](https://rclone.org/hubic/)
* Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
* IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
* Koofr [:page_facing_up:](https://rclone.org/koofr/)
* Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
* Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
* Mega [:page_facing_up:](https://rclone.org/mega/)
* Memory [:page_facing_up:](https://rclone.org/memory/)
* Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
* Microsoft OneDrive [:page_facing_up:](https://rclone.org/onedrive/)
* Minio [:page_facing_up:](https://rclone.org/s3/#minio)
* Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)
* OVH [:page_facing_up:](https://rclone.org/swift/)
* OpenDrive [:page_facing_up:](https://rclone.org/opendrive/)
* OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
* Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
* ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
* pCloud [:page_facing_up:](https://rclone.org/pcloud/)
* premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
* put.io [:page_facing_up:](https://rclone.org/putio/)
* QingStor [:page_facing_up:](https://rclone.org/qingstor/)
* Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
* Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
* Seafile [:page_facing_up:](https://rclone.org/seafile/)
* SFTP [:page_facing_up:](https://rclone.org/sftp/)
* StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
* SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
* Tardigrade [:page_facing_up:](https://rclone.org/tardigrade/)
* Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
* WebDAV [:page_facing_up:](https://rclone.org/webdav/)
* Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
* The local filesystem [:page_facing_up:](https://rclone.org/local/)
Please see [the full list of all storage providers and their features](https://rclone.org/overview/)
* MD5SUMs checked at all times for file integrity
## Features
* MD5/SHA-1 hashes checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* Copy mode to just copy new/changed files
* Sync mode to make a directory identical
* Check mode to check all MD5SUMs
* Can sync to and from network, eg two different Drive accounts
* [Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed files
* [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory identical
* [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash equality
* Can sync to and from network, e.g. two different cloud accounts
* Optional large file chunking ([Chunker](https://rclone.org/chunker/))
* Optional encryption ([Crypt](https://rclone.org/crypt/))
* Optional cache ([Cache](https://rclone.org/cache/))
* Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
* Multi-threaded downloads to local disk
* Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files over HTTP/WebDav/FTP/SFTP/dlna
See the home page for installation, usage, documentation, changelog
and configuration walkthroughs.
## Installation & documentation
* http://rclone.org/
Please see the [rclone website](https://rclone.org/) for:
* [Installation](https://rclone.org/install/)
* [Documentation & configuration](https://rclone.org/docs/)
* [Changelog](https://rclone.org/changelog/)
* [FAQ](https://rclone.org/faq/)
* [Storage providers](https://rclone.org/overview/)
* [Forum](https://forum.rclone.org/)
* ...and more
## Downloads
* https://rclone.org/downloads/
License
-------
This is free software under the terms of the MIT license (check the
COPYING file included in this package).
[COPYING file](/COPYING) included in this package).


@@ -1,22 +1,111 @@
Required software for making a release
* [github-release](https://github.com/aktau/github-release) for uploading packages
* [gox](https://github.com/mitchellh/gox) for cross compiling
* Run `gox -build-toolchain`
* This assumes you have your own source checkout
* pandoc for making the html and man pages
* errcheck - go get github.com/kisielk/errcheck
* golint - go get github.com/golang/lint
# Release
Making a release
* go get -u -f -v ./...
* make check
* make test
This file describes how to make the various kinds of releases
## Extra required software for making a release
* [github-release](https://github.com/aktau/github-release) for uploading packages
* pandoc for making the html and man pages
## Making a release
* git checkout master
* git pull
* git status - make sure everything is checked in
* Check GitHub actions build for master is Green
* make test # see integration test server or run locally
* make tag
* edit docs/content/changelog.md
* git commit -a -v
* edit docs/content/changelog.md # make sure to remove duplicate logs from point releases
* make tidy
* make doc
* git status - to check for new man pages - git add them
* git commit -a -v -m "Version v1.XX.0"
* make retag
* # Set the GOPATH for a gox enabled compiler - . ~/bin/go-cross - not required for go >= 1.5
* make cross
* git push --tags origin master
* # Wait for the GitHub builds to complete then...
* make fetch_binaries
* make tarball
* make sign_upload
* make check_sign
* make upload
* make upload_website
* git push --tags origin master
* make upload_github
* make startdev
* # announce with forum post, twitter post, patreon post
Early in the next release cycle update the dependencies
* Review any pinned packages in go.mod and remove if possible
* make update
* git status
* git add new files
* git commit -a -v
If `make update` fails with errors like this:
```
# github.com/cpuguy83/go-md2man/md2man
../../../../pkg/mod/github.com/cpuguy83/go-md2man@v1.0.8/md2man/md2man.go:11:16: undefined: blackfriday.EXTENSION_NO_INTRA_EMPHASIS
../../../../pkg/mod/github.com/cpuguy83/go-md2man@v1.0.8/md2man/md2man.go:12:16: undefined: blackfriday.EXTENSION_TABLES
```
Can be fixed with
* GO111MODULE=on go get -u github.com/russross/blackfriday@v1.5.2
* GO111MODULE=on go mod tidy
## Making a point release
If rclone needs a point release due to some horrendous bug:
First make the release branch. If this is a second point release then
this will be done already.
* BASE_TAG=v1.XX # eg v1.52
* NEW_TAG=${BASE_TAG}.Y # eg v1.52.1
* echo $BASE_TAG $NEW_TAG # v1.52 v1.52.1
* git branch ${BASE_TAG} ${BASE_TAG}-stable
Now
* git co ${BASE_TAG}-stable
* git cherry-pick any fixes
* Test (see above)
* make NEXT_VERSION=${NEW_TAG} tag
* edit docs/content/changelog.md
* make TAG=${NEW_TAG} doc
* git commit -a -v -m "Version ${NEW_TAG}"
* git tag -d ${NEW_TAG}
* git tag -s -m "Version ${NEW_TAG}" ${NEW_TAG}
* git push --tags -u origin ${BASE_TAG}-stable
* Wait for builds to complete
* make BRANCH_PATH= TAG=${NEW_TAG} fetch_binaries
* make TAG=${NEW_TAG} tarball
* make TAG=${NEW_TAG} sign_upload
* make TAG=${NEW_TAG} check_sign
* make TAG=${NEW_TAG} upload
* make TAG=${NEW_TAG} upload_website
* make TAG=${NEW_TAG} upload_github
* NB this overwrites the current beta so we need to do this
* git co master
* make VERSION=${NEW_TAG} startdev
* # cherry pick the changes to the changelog and VERSION
* git checkout ${BASE_TAG}-stable VERSION docs/content/changelog.md
* git commit --amend
* git push
* Announce!
## Making a manual build of docker
The rclone docker image should autobuild via GitHub actions. If it doesn't,
or needs to be updated, then rebuild like this.
```
docker pull golang
docker build --rm --ulimit memlock=67108864 -t rclone/rclone:1.52.0 -t rclone/rclone:1.52 -t rclone/rclone:1 -t rclone/rclone:latest .
docker push rclone/rclone:1.52.0
docker push rclone/rclone:1.52
docker push rclone/rclone:1
docker push rclone/rclone:latest
```

VERSION

@@ -0,0 +1 @@
v1.52.2


@@ -1,706 +0,0 @@
// Package amazonclouddrive provides an interface to the Amazon Cloud
// Drive object storage system.
package amazonclouddrive
/*
FIXME make searching for directory in id and file in id more efficient
- use the name: search parameter - remember the escaping rules
- use Folder GetNode and GetFile
FIXME make the default for no files and no dirs be (FILE & FOLDER) so
we ignore assets completely!
*/
import (
"fmt"
"io"
"log"
"net/http"
"regexp"
"strings"
"sync"
"time"
"github.com/ncw/go-acd"
"github.com/ncw/rclone/dircache"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/oauthutil"
"github.com/ncw/rclone/pacer"
"golang.org/x/oauth2"
)
const (
rcloneClientID = "amzn1.application-oa2-client.6bf18d2d1f5b485c94c8988bb03ad0e7"
rcloneClientSecret = "k8/NyszKm5vEkZXAwsbGkd6C3NrbjIqMg4qEhIeF14Szub2wur+/teS3ubXgsLe9//+tr/qoqK+lq6mg8vWkoA=="
folderKind = "FOLDER"
fileKind = "FILE"
assetKind = "ASSET"
statusAvailable = "AVAILABLE"
timeFormat = time.RFC3339 // 2014-03-07T22:31:12.173Z
minSleep = 20 * time.Millisecond
)
// Globals
var (
// Description of how to auth for this app
acdConfig = &oauth2.Config{
Scopes: []string{"clouddrive:read_all", "clouddrive:write"},
Endpoint: oauth2.Endpoint{
AuthURL: "https://www.amazon.com/ap/oa",
TokenURL: "https://api.amazon.com/auth/o2/token",
},
ClientID: rcloneClientID,
ClientSecret: fs.Reveal(rcloneClientSecret),
RedirectURL: oauthutil.RedirectURL,
}
)
// Register with Fs
func init() {
fs.Register(&fs.Info{
Name: "amazon cloud drive",
NewFs: NewFs,
Config: func(name string) {
err := oauthutil.Config(name, acdConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: oauthutil.ConfigClientID,
Help: "Amazon Application Client Id - leave blank normally.",
}, {
Name: oauthutil.ConfigClientSecret,
Help: "Amazon Application Client Secret - leave blank normally.",
}},
})
}
// Fs represents a remote acd server
type Fs struct {
name string // name of this remote
c *acd.Client // the connection to the acd server
root string // the path we are working on
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // pacer for API calls
}
// Object describes a acd object
//
// Will definitely have info but maybe not meta
type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
info *acd.Node // Info from the acd object if known
}
// ------------------------------------------------------------
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("Amazon cloud drive root '%s'", f.root)
}
// Pattern to match a acd path
var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)
// parsePath parses an acd 'url'
func parsePath(path string) (root string) {
root = strings.Trim(path, "/")
return
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
429, // Rate exceeded.
500, // Get occasional 500 Internal Server Error
409, // Conflict - happens in the unit tests a lot
503, // Service Unavailable
}
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
return fs.ShouldRetry(err) || fs.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string) (fs.Fs, error) {
root = parsePath(root)
oAuthClient, err := oauthutil.NewClient(name, acdConfig)
if err != nil {
log.Fatalf("Failed to configure amazon cloud drive: %v", err)
}
c := acd.NewClient(oAuthClient)
c.UserAgent = fs.UserAgent
f := &Fs{
name: name,
root: root,
c: c,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.AmazonCloudDrivePacer),
}
// Update endpoints
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
_, resp, err = f.c.Account.GetEndpoints()
return shouldRetry(resp, err)
})
if err != nil {
return nil, fmt.Errorf("Failed to get endpoints: %v", err)
}
// Get rootID
var rootInfo *acd.Folder
err = f.pacer.Call(func() (bool, error) {
rootInfo, resp, err = f.c.Nodes.GetRoot()
return shouldRetry(resp, err)
})
if err != nil || rootInfo.Id == nil {
return nil, fmt.Errorf("Failed to get root: %v", err)
}
f.dirCache = dircache.New(root, *rootInfo.Id, f)
// Find the current root
err = f.dirCache.FindRoot(false)
if err != nil {
// Assume it is a file
newRoot, remote := dircache.SplitPath(root)
newF := *f
newF.dirCache = dircache.New(newRoot, *rootInfo.Id, &newF)
newF.root = newRoot
// Make new Fs which is the parent
err = newF.dirCache.FindRoot(false)
if err != nil {
// No root so return old f
return f, nil
}
obj := newF.newFsObjectWithInfo(remote, nil)
if obj == nil {
// File doesn't exist so return old f
return f, nil
}
// return a Fs Limited to this object
return fs.NewLimited(&newF, obj), nil
}
return f, nil
}
// Return an FsObject from a path
//
// May return nil if an error occurred
func (f *Fs) newFsObjectWithInfo(remote string, info *acd.Node) fs.Object {
o := &Object{
fs: f,
remote: remote,
}
if info != nil {
// Set info but not meta
o.info = info
} else {
err := o.readMetaData() // reads info and meta, returning an error
if err != nil {
// logged already FsDebug("Failed to read info: %s", err)
return nil
}
}
return o
}
// NewFsObject returns an FsObject from a path
//
// May return nil if an error occurred
func (f *Fs) NewFsObject(remote string) fs.Object {
return f.newFsObjectWithInfo(remote, nil)
}
// FindLeaf finds a directory of name leaf in the folder with ID pathID
func (f *Fs) FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err error) {
//fs.Debug(f, "FindLeaf(%q, %q)", pathID, leaf)
folder := acd.FolderFromId(pathID, f.c.Nodes)
var resp *http.Response
var subFolder *acd.Folder
err = f.pacer.Call(func() (bool, error) {
subFolder, resp, err = folder.GetFolder(leaf)
return shouldRetry(resp, err)
})
if err != nil {
if err == acd.ErrorNodeNotFound {
//fs.Debug(f, "...Not found")
return "", false, nil
}
//fs.Debug(f, "...Error %v", err)
return "", false, err
}
if subFolder.Status != nil && *subFolder.Status != statusAvailable {
fs.Debug(f, "Ignoring folder %q in state %q", *subFolder.Status)
time.Sleep(1 * time.Second) // FIXME wait for problem to go away!
return "", false, nil
}
//fs.Debug(f, "...Found(%q, %v)", *subFolder.Id, leaf)
return *subFolder.Id, true, nil
}
// CreateDir makes a directory with pathID as parent and name leaf
func (f *Fs) CreateDir(pathID, leaf string) (newID string, err error) {
//fmt.Printf("CreateDir(%q, %q)\n", pathID, leaf)
folder := acd.FolderFromId(pathID, f.c.Nodes)
var resp *http.Response
var info *acd.Folder
err = f.pacer.Call(func() (bool, error) {
info, resp, err = folder.CreateFolder(leaf)
return shouldRetry(resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
return "", err
}
//fmt.Printf("...Id %q\n", *info.Id)
return *info.Id, nil
}
// list the objects into the function supplied
//
// If directories is set it only sends directories
// User function to process a File item from listAll
//
// Should return true to finish processing
type listAllFn func(*acd.Node) bool
// Lists the directory required calling the user function on each item found
//
// If the user fn ever returns true then it early exits with found = true
func (f *Fs) listAll(dirID string, title string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
query := "parents:" + dirID
if directoriesOnly {
query += " AND kind:" + folderKind
} else if filesOnly {
query += " AND kind:" + fileKind
} else {
// FIXME none of these work
//query += " AND kind:(" + fileKind + " OR " + folderKind + ")"
//query += " AND (kind:" + fileKind + " OR kind:" + folderKind + ")"
}
opts := acd.NodeListOptions{
Filters: query,
}
var nodes []*acd.Node
//var resp *http.Response
OUTER:
for {
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
nodes, resp, err = f.c.Nodes.GetNodes(&opts)
return shouldRetry(resp, err)
})
if err != nil {
fs.Stats.Error()
fs.ErrorLog(f, "Couldn't list files: %v", err)
break
}
if nodes == nil {
break
}
for _, node := range nodes {
if node.Name != nil && node.Id != nil && node.Kind != nil && node.Status != nil {
// Ignore nodes if not AVAILABLE
if *node.Status != statusAvailable {
continue
}
if fn(node) {
found = true
break OUTER
}
}
}
}
return
}
// Path should be directory path either "" or "path/"
//
// List the directory using a recursive list from the root
//
// This fetches the minimum amount of stuff but does more API calls
// which makes it slow
func (f *Fs) listDirRecursive(dirID string, path string, out fs.ObjectsChan) error {
var subError error
// Make the API request
var wg sync.WaitGroup
_, err := f.listAll(dirID, "", false, false, func(node *acd.Node) bool {
// Recurse on directories
switch *node.Kind {
case folderKind:
wg.Add(1)
folder := path + *node.Name + "/"
fs.Debug(f, "Reading %s", folder)
go func() {
defer wg.Done()
err := f.listDirRecursive(*node.Id, folder, out)
if err != nil {
subError = err
fs.ErrorLog(f, "Error reading %s:%s", folder, err)
}
}()
return false
case fileKind:
if fs := f.newFsObjectWithInfo(path+*node.Name, node); fs != nil {
out <- fs
}
default:
// ignore ASSET etc
}
return false
})
wg.Wait()
fs.Debug(f, "Finished reading %s", path)
if err != nil {
return err
}
if subError != nil {
return subError
}
return nil
}
// List walks the path returning a channel of FsObjects
func (f *Fs) List() fs.ObjectsChan {
out := make(fs.ObjectsChan, fs.Config.Checkers)
go func() {
defer close(out)
err := f.dirCache.FindRoot(false)
if err != nil {
fs.Stats.Error()
fs.ErrorLog(f, "Couldn't find root: %s", err)
} else {
err = f.listDirRecursive(f.dirCache.RootID(), "", out)
if err != nil {
fs.Stats.Error()
fs.ErrorLog(f, "List failed: %s", err)
}
}
}()
return out
}
// ListDir lists the directories
func (f *Fs) ListDir() fs.DirChan {
out := make(fs.DirChan, fs.Config.Checkers)
go func() {
defer close(out)
err := f.dirCache.FindRoot(false)
if err != nil {
fs.Stats.Error()
fs.ErrorLog(f, "Couldn't find root: %s", err)
} else {
_, err := f.listAll(f.dirCache.RootID(), "", true, false, func(item *acd.Node) bool {
dir := &fs.Dir{
Name: *item.Name,
Bytes: -1,
Count: -1,
}
dir.When, _ = time.Parse(timeFormat, *item.ModifiedDate)
out <- dir
return false
})
if err != nil {
fs.Stats.Error()
fs.ErrorLog(f, "ListDir failed: %s", err)
}
}
}()
return out
}
// Put the object into the container
//
// Copy the reader in to the new object which is returned
//
// The new object may have been created if an error is returned
func (f *Fs) Put(in io.Reader, remote string, modTime time.Time, size int64) (fs.Object, error) {
// Temporary Object under construction
o := &Object{
fs: f,
remote: remote,
}
leaf, directoryID, err := f.dirCache.FindPath(remote, true)
if err != nil {
return nil, err
}
folder := acd.FolderFromId(directoryID, o.fs.c.Nodes)
var info *acd.File
var resp *http.Response
err = f.pacer.CallNoRetry(func() (bool, error) {
if size != 0 {
info, resp, err = folder.PutSized(in, size, leaf)
} else {
info, resp, err = folder.Put(in, leaf)
}
return shouldRetry(resp, err)
})
if err != nil {
return nil, err
}
o.info = info.Node
return o, nil
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir() error {
return f.dirCache.FindRoot(true)
}
// purgeCheck remotes the root directory, if check is set then it
// refuses to do so if it has anything in
func (f *Fs) purgeCheck(check bool) error {
if f.root == "" {
return fmt.Errorf("Can't purge root directory")
}
dc := f.dirCache
err := dc.FindRoot(false)
if err != nil {
return err
}
rootID := dc.RootID()
if check {
// check directory is empty
empty := true
_, err = f.listAll(rootID, "", false, false, func(node *acd.Node) bool {
switch *node.Kind {
case folderKind:
empty = false
return true
case fileKind:
empty = false
return true
default:
fs.Debug("Found ASSET %s", *node.Id)
}
return false
})
if err != nil {
return err
}
if !empty {
return fmt.Errorf("Directory not empty")
}
}
node := acd.NodeFromId(rootID, f.c.Nodes)
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = node.Trash()
return shouldRetry(resp, err)
})
if err != nil {
return err
}
f.dirCache.ResetRoot()
if err != nil {
return err
}
return nil
}
// Rmdir deletes the root folder
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir() error {
return f.purgeCheck(true)
}
// Precision return the precision of this Fs
func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
// Copy src to this remote using server side copy operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
//func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
// srcObj, ok := src.(*Object)
// if !ok {
// fs.Debug(src, "Can't copy - not same remote type")
// return nil, fs.ErrorCantCopy
// }
// srcFs := srcObj.fs
// _, err := f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil)
// if err != nil {
// return nil, err
// }
// return f.NewFsObject(remote), nil
//}
// Purge deletes all the files and the container
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge() error {
return f.purgeCheck(false)
}
// ------------------------------------------------------------
// Fs returns the parent Fs
func (o *Object) Fs() fs.Fs {
return o.fs
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Md5sum returns the Md5sum of an object returning a lowercase hex string
func (o *Object) Md5sum() (string, error) {
if o.info.ContentProperties.Md5 != nil {
return *o.info.ContentProperties.Md5, nil
}
return "", nil
}
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
return int64(*o.info.ContentProperties.Size)
}
// readMetaData gets the metadata if it hasn't already been fetched
//
// it also sets the info
func (o *Object) readMetaData() (err error) {
if o.info != nil {
return nil
}
leaf, directoryID, err := o.fs.dirCache.FindPath(o.remote, false)
if err != nil {
return err
}
folder := acd.FolderFromId(directoryID, o.fs.c.Nodes)
var resp *http.Response
var info *acd.File
err = o.fs.pacer.Call(func() (bool, error) {
info, resp, err = folder.GetFile(leaf)
return shouldRetry(resp, err)
})
if err != nil {
fs.Debug(o, "Failed to read info: %s", err)
return err
}
o.info = info.Node
return nil
}
// ModTime returns the modification time of the object
//
// It attempts to read the object's mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime() time.Time {
err := o.readMetaData()
if err != nil {
fs.Log(o, "Failed to read metadata: %s", err)
return time.Now()
}
modTime, err := time.Parse(timeFormat, *o.info.ModifiedDate)
if err != nil {
fs.Log(o, "Failed to read mtime from object: %s", err)
return time.Now()
}
return modTime
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(modTime time.Time) {
// FIXME not implemented
return
}
// Storable returns a boolean showing whether this object storable
func (o *Object) Storable() bool {
return true
}
// Open an object for read
func (o *Object) Open() (in io.ReadCloser, err error) {
file := acd.File{Node: o.info}
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
in, resp, err = file.Open()
return shouldRetry(resp, err)
})
return in, err
}
// Update the object with the contents of the io.Reader, modTime and size
//
// The new object may have been created if an error is returned
func (o *Object) Update(in io.Reader, modTime time.Time, size int64) error {
file := acd.File{Node: o.info}
var info *acd.File
var resp *http.Response
var err error
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
if size != 0 {
info, resp, err = file.OverwriteSized(in, size)
} else {
info, resp, err = file.Overwrite(in)
}
return shouldRetry(resp, err)
})
if err != nil {
return err
}
o.info = info.Node
return nil
}
// Remove an object
func (o *Object) Remove() error {
var resp *http.Response
var err error
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.info.Trash()
return shouldRetry(resp, err)
})
return err
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
// _ fs.Copier = (*Fs)(nil)
// _ fs.Mover = (*Fs)(nil)
// _ fs.DirMover = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
)


@@ -1,56 +0,0 @@
// Test AmazonCloudDrive filesystem interface
//
// Automatically generated - DO NOT EDIT
// Regenerate with: make gen_tests
package amazonclouddrive_test
import (
"testing"
"github.com/ncw/rclone/amazonclouddrive"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fstest/fstests"
)
func init() {
fstests.NilObject = fs.Object((*amazonclouddrive.Object)(nil))
fstests.RemoteName = "TestAmazonCloudDrive:"
}
// Generic tests for the Fs
func TestInit(t *testing.T) { fstests.TestInit(t) }
func TestFsString(t *testing.T) { fstests.TestFsString(t) }
func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
func TestFsNewFsObjectNotFound(t *testing.T) { fstests.TestFsNewFsObjectNotFound(t) }
func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
func TestFsListRoot(t *testing.T) { fstests.TestFsListRoot(t) }
func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
func TestFsNewFsObject(t *testing.T) { fstests.TestFsNewFsObject(t) }
func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
func TestObjectMd5sum(t *testing.T) { fstests.TestObjectMd5sum(t) }
func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
func TestLimitedFs(t *testing.T) { fstests.TestLimitedFs(t) }
func TestLimitedFsNotFound(t *testing.T) { fstests.TestLimitedFsNotFound(t) }
func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }


@@ -1,21 +0,0 @@
version: "{build}"
os: Windows Server 2012 R2
clone_folder: c:\gopath\src\github.com\ncw\rclone
environment:
GOPATH: c:\gopath
install:
- go get golang.org/x/tools/cmd/vet
- echo %PATH%
- echo %GOPATH%
- go version
- go env
- go get -d ./...
build_script:
- go vet ./...
- go test -v -cpu=2 ./...
- go test -cpu=2 -short -race ./...

backend/alias/alias.go

@@ -0,0 +1,54 @@
package alias
import (
"errors"
"strings"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fspath"
)
// Register with Fs
func init() {
fsi := &fs.RegInfo{
Name: "alias",
Description: "Alias for an existing remote",
NewFs: NewFs,
Options: []fs.Option{{
Name: "remote",
Help: "Remote or path to alias.\nCan be \"myremote:path/to/dir\", \"myremote:bucket\", \"myremote:\" or \"/local/path\".",
Required: true,
}},
}
fs.Register(fsi)
}
// Options defines the configuration for this backend
type Options struct {
Remote string `config:"remote"`
}
// NewFs constructs an Fs from the path.
//
// The returned Fs is the actual Fs, referenced by remote in the config
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.Remote == "" {
return nil, errors.New("alias can't point to an empty remote - check the value of the remote setting")
}
if strings.HasPrefix(opt.Remote, name+":") {
return nil, errors.New("can't point alias remote at itself - check the value of the remote setting")
}
fsInfo, configName, fsPath, config, err := fs.ConfigFs(opt.Remote)
if err != nil {
return nil, err
}
return fsInfo.NewFs(configName, fspath.JoinRootPath(fsPath, root), config)
}
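As a rough usage sketch (not from the repository), the snippet below configures an alias remote in code and resolves it with `fs.NewFs`, much as the test file that follows does. The remote name `myalias` and the target path `/tmp/data` are hypothetical.

```go
package main

import (
	"fmt"
	"log"

	_ "github.com/rclone/rclone/backend/alias" // register the alias backend
	_ "github.com/rclone/rclone/backend/local" // register the backend the alias points at
	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config"
)

func main() {
	config.LoadConfig()
	// Hypothetical remote pointing at a local directory.
	config.FileSet("myalias", "type", "alias")
	config.FileSet("myalias", "remote", "/tmp/data")

	// NewFs resolves through the alias: the configured remote and the path
	// given here are joined, so this is effectively a local Fs at /tmp/data/sub.
	f, err := fs.NewFs("myalias:sub")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(f.Root())
}
```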


@@ -0,0 +1,105 @@
package alias
import (
"context"
"fmt"
"path"
"path/filepath"
"sort"
"testing"
_ "github.com/rclone/rclone/backend/local" // pull in test backend
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/stretchr/testify/require"
)
var (
remoteName = "TestAlias"
)
func prepare(t *testing.T, root string) {
config.LoadConfig()
// Configure the remote
config.FileSet(remoteName, "type", "alias")
config.FileSet(remoteName, "remote", root)
}
func TestNewFS(t *testing.T) {
type testEntry struct {
remote string
size int64
isDir bool
}
for testi, test := range []struct {
remoteRoot string
fsRoot string
fsList string
wantOK bool
entries []testEntry
}{
{"", "", "", true, []testEntry{
{"four", -1, true},
{"one%.txt", 6, false},
{"three", -1, true},
{"two.html", 7, false},
}},
{"", "four", "", true, []testEntry{
{"five", -1, true},
{"under four.txt", 9, false},
}},
{"", "", "four", true, []testEntry{
{"four/five", -1, true},
{"four/under four.txt", 9, false},
}},
{"four", "..", "", true, []testEntry{
{"four", -1, true},
{"one%.txt", 6, false},
{"three", -1, true},
{"two.html", 7, false},
}},
{"four", "../three", "", true, []testEntry{
{"underthree.txt", 9, false},
}},
} {
what := fmt.Sprintf("test %d remoteRoot=%q, fsRoot=%q, fsList=%q", testi, test.remoteRoot, test.fsRoot, test.fsList)
remoteRoot, err := filepath.Abs(filepath.FromSlash(path.Join("test/files", test.remoteRoot)))
require.NoError(t, err, what)
prepare(t, remoteRoot)
f, err := fs.NewFs(fmt.Sprintf("%s:%s", remoteName, test.fsRoot))
require.NoError(t, err, what)
gotEntries, err := f.List(context.Background(), test.fsList)
require.NoError(t, err, what)
sort.Sort(gotEntries)
require.Equal(t, len(test.entries), len(gotEntries), what)
for i, gotEntry := range gotEntries {
what := fmt.Sprintf("%s, entry=%d", what, i)
wantEntry := test.entries[i]
require.Equal(t, wantEntry.remote, gotEntry.Remote(), what)
require.Equal(t, wantEntry.size, gotEntry.Size(), what)
_, isDir := gotEntry.(fs.Directory)
require.Equal(t, wantEntry.isDir, isDir, what)
}
}
}
func TestNewFSNoRemote(t *testing.T) {
prepare(t, "")
f, err := fs.NewFs(fmt.Sprintf("%s:", remoteName))
require.Error(t, err)
require.Nil(t, f)
}
func TestNewFSInvalidRemote(t *testing.T) {
prepare(t, "not_existing_test_remote:")
f, err := fs.NewFs(fmt.Sprintf("%s:", remoteName))
require.Error(t, err)
require.Nil(t, f)
}


@@ -0,0 +1 @@
apple


@@ -0,0 +1 @@
beetroot


@@ -0,0 +1 @@
hello


@@ -0,0 +1 @@
rutabaga


@@ -0,0 +1 @@
potato

43
backend/all/all.go Normal file
View File

@@ -0,0 +1,43 @@
package all
import (
// Active file systems
_ "github.com/rclone/rclone/backend/alias"
_ "github.com/rclone/rclone/backend/amazonclouddrive"
_ "github.com/rclone/rclone/backend/azureblob"
_ "github.com/rclone/rclone/backend/b2"
_ "github.com/rclone/rclone/backend/box"
_ "github.com/rclone/rclone/backend/cache"
_ "github.com/rclone/rclone/backend/chunker"
_ "github.com/rclone/rclone/backend/crypt"
_ "github.com/rclone/rclone/backend/drive"
_ "github.com/rclone/rclone/backend/dropbox"
_ "github.com/rclone/rclone/backend/fichier"
_ "github.com/rclone/rclone/backend/ftp"
_ "github.com/rclone/rclone/backend/googlecloudstorage"
_ "github.com/rclone/rclone/backend/googlephotos"
_ "github.com/rclone/rclone/backend/http"
_ "github.com/rclone/rclone/backend/hubic"
_ "github.com/rclone/rclone/backend/jottacloud"
_ "github.com/rclone/rclone/backend/koofr"
_ "github.com/rclone/rclone/backend/local"
_ "github.com/rclone/rclone/backend/mailru"
_ "github.com/rclone/rclone/backend/mega"
_ "github.com/rclone/rclone/backend/memory"
_ "github.com/rclone/rclone/backend/onedrive"
_ "github.com/rclone/rclone/backend/opendrive"
_ "github.com/rclone/rclone/backend/pcloud"
_ "github.com/rclone/rclone/backend/premiumizeme"
_ "github.com/rclone/rclone/backend/putio"
_ "github.com/rclone/rclone/backend/qingstor"
_ "github.com/rclone/rclone/backend/s3"
_ "github.com/rclone/rclone/backend/seafile"
_ "github.com/rclone/rclone/backend/sftp"
_ "github.com/rclone/rclone/backend/sharefile"
_ "github.com/rclone/rclone/backend/sugarsync"
_ "github.com/rclone/rclone/backend/swift"
_ "github.com/rclone/rclone/backend/tardigrade"
_ "github.com/rclone/rclone/backend/union"
_ "github.com/rclone/rclone/backend/webdav"
_ "github.com/rclone/rclone/backend/yandex"
)
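Each blank import above runs the backend's init() function, which registers it with the fs package so it can be referred to by name in the config. The standalone sketch below (not part of the repository, and assuming the exported fs.Registry slice in the fs package) shows the effect of importing backend/all.

```go
package main

import (
	"fmt"

	_ "github.com/rclone/rclone/backend/all" // blank import runs every backend's init(), which calls fs.Register
	"github.com/rclone/rclone/fs"
)

func main() {
	// After the blank import above, each backend listed in all.go has
	// registered itself and can be looked up by name (e.g. by fs.NewFs).
	// Here we simply print the registered backend names.
	for _, ri := range fs.Registry {
		fmt.Println(ri.Name)
	}
}
```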

File diff suppressed because it is too large

@@ -0,0 +1,20 @@
// Test AmazonCloudDrive filesystem interface
// +build acd
package amazonclouddrive_test
import (
"testing"
"github.com/rclone/rclone/backend/amazonclouddrive"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.NilObject = fs.Object((*amazonclouddrive.Object)(nil))
fstests.RemoteName = "TestAmazonCloudDrive:"
fstests.Run(t)
}

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,35 @@
// +build !plan9,!solaris,go1.13
package azureblob
import (
"testing"
"github.com/stretchr/testify/assert"
)
func (f *Fs) InternalTest(t *testing.T) {
// Check first feature flags are set on this
// remote
enabled := f.Features().SetTier
assert.True(t, enabled)
enabled = f.Features().GetTier
assert.True(t, enabled)
}
func TestIncrement(t *testing.T) {
for _, test := range []struct {
in []byte
want []byte
}{
{[]byte{0, 0, 0, 0}, []byte{1, 0, 0, 0}},
{[]byte{0xFE, 0, 0, 0}, []byte{0xFF, 0, 0, 0}},
{[]byte{0xFF, 0, 0, 0}, []byte{0, 1, 0, 0}},
{[]byte{0, 1, 0, 0}, []byte{1, 1, 0, 0}},
{[]byte{0xFF, 0xFF, 0xFF, 0xFE}, []byte{0, 0, 0, 0xFF}},
{[]byte{0xFF, 0xFF, 0xFF, 0xFF}, []byte{0, 0, 0, 0}},
} {
increment(test.in)
assert.Equal(t, test.want, test.in)
}
}
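The TestIncrement table above pins down the behaviour of the backend's increment helper: the byte slice is treated as a little-endian counter that wraps to zero on overflow. Below is a minimal sketch consistent with that table; it is an illustration only, not necessarily the backend's actual implementation.

```go
package main

import "fmt"

// increment treats the slice as a little-endian counter: bump the lowest
// byte and carry into the next one on overflow; a slice of all 0xFF wraps
// around to all zeros, matching the last case in the test table above.
func increment(xs []byte) {
	for i, x := range xs {
		if x < 0xFF {
			xs[i] = x + 1
			return
		}
		xs[i] = 0
	}
}

func main() {
	b := []byte{0xFF, 0, 0, 0}
	increment(b)
	fmt.Println(b) // [0 1 0 0]
}
```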


@@ -0,0 +1,37 @@
// Test AzureBlob filesystem interface
// +build !plan9,!solaris,go1.13
package azureblob
import (
"testing"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestAzureBlob:",
NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool"},
ChunkedUpload: fstests.ChunkedUploadConfig{
MaxChunkSize: maxChunkSize,
},
})
}
func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetUploadCutoffer = (*Fs)(nil)
)


@@ -0,0 +1,6 @@
// Build for azureblob for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build plan9 solaris !go1.13
package azureblob

backend/b2/api/types.go

@@ -0,0 +1,347 @@
package api
import (
"fmt"
"path"
"strconv"
"strings"
"time"
"github.com/rclone/rclone/fs/fserrors"
)
// Error describes a B2 error response
type Error struct {
Status int `json:"status"` // The numeric HTTP status code. Always matches the status in the HTTP response.
Code string `json:"code"` // A single-identifier code that identifies the error.
Message string `json:"message"` // A human-readable message, in English, saying what went wrong.
}
// Error satisfies the error interface
func (e *Error) Error() string {
return fmt.Sprintf("%s (%d %s)", e.Message, e.Status, e.Code)
}
// Fatal satisfies the Fatal interface
//
// It indicates which errors should be treated as fatal
func (e *Error) Fatal() bool {
return e.Status == 403 // 403 errors shouldn't be retried
}
var _ fserrors.Fataler = (*Error)(nil)
// Bucket describes a B2 bucket
type Bucket struct {
ID string `json:"bucketId"`
AccountID string `json:"accountId"`
Name string `json:"bucketName"`
Type string `json:"bucketType"`
}
// Timestamp is a UTC time when this file was uploaded. It is a base
// 10 number of milliseconds since midnight, January 1, 1970 UTC. This
// fits in a 64 bit integer such as the type "long" in the programming
// language Java. It is intended to be compatible with Java's time
// long. For example, it can be passed directly into the java call
// Date.setTime(long time).
type Timestamp time.Time
// MarshalJSON turns a Timestamp into JSON (in UTC)
func (t *Timestamp) MarshalJSON() (out []byte, err error) {
timestamp := (*time.Time)(t).UTC().UnixNano()
return []byte(strconv.FormatInt(timestamp/1e6, 10)), nil
}
// UnmarshalJSON turns JSON into a Timestamp
func (t *Timestamp) UnmarshalJSON(data []byte) error {
timestamp, err := strconv.ParseInt(string(data), 10, 64)
if err != nil {
return err
}
*t = Timestamp(time.Unix(timestamp/1e3, (timestamp%1e3)*1e6).UTC())
return nil
}
const versionFormat = "-v2006-01-02-150405.000"
// AddVersion adds the timestamp as a version string into the filename passed in.
func (t Timestamp) AddVersion(remote string) string {
ext := path.Ext(remote)
base := remote[:len(remote)-len(ext)]
s := time.Time(t).Format(versionFormat)
// Replace the '.' with a '-'
s = strings.Replace(s, ".", "-", -1)
return base + s + ext
}
// RemoveVersion removes the timestamp from a filename as a version string.
//
// It returns the new file name and a timestamp, or the old filename
// and a zero timestamp.
func RemoveVersion(remote string) (t Timestamp, newRemote string) {
newRemote = remote
ext := path.Ext(remote)
base := remote[:len(remote)-len(ext)]
if len(base) < len(versionFormat) {
return
}
versionStart := len(base) - len(versionFormat)
// Check it ends in -xxx
if base[len(base)-4] != '-' {
return
}
// Replace with .xxx for parsing
base = base[:len(base)-4] + "." + base[len(base)-3:]
newT, err := time.Parse(versionFormat, base[versionStart:])
if err != nil {
return
}
return Timestamp(newT), base[:versionStart] + ext
}
// IsZero returns true if the timestamp is uninitialized
func (t Timestamp) IsZero() bool {
return time.Time(t).IsZero()
}
// Equal compares two timestamps
//
// If either are !IsZero then it returns false
func (t Timestamp) Equal(s Timestamp) bool {
if time.Time(t).IsZero() {
return false
}
if time.Time(s).IsZero() {
return false
}
return time.Time(t).Equal(time.Time(s))
}
// File is info about a file
type File struct {
ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name.
Action string `json:"action"` // Either "upload" or "hide". "upload" means a file that was uploaded to B2 Cloud Storage. "hide" means a file version marking the file as hidden, so that it will not show up in b2_list_file_names. The result of b2_list_file_names will contain only "upload". The result of b2_list_file_versions may have both.
Size int64 `json:"size"` // The number of bytes in the file.
UploadTimestamp Timestamp `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded.
SHA1 string `json:"contentSha1"` // The SHA1 of the bytes stored in the file.
ContentType string `json:"contentType"` // The MIME type of the file.
Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
}
// AuthorizeAccountResponse is as returned from the b2_authorize_account call
type AuthorizeAccountResponse struct {
AbsoluteMinimumPartSize int `json:"absoluteMinimumPartSize"` // The smallest possible size of a part of a large file.
AccountID string `json:"accountId"` // The identifier for the account.
Allowed struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it.
BucketID string `json:"bucketId"` // When present, access is restricted to one bucket.
BucketName string `json:"bucketName"` // When present, name of bucket - may be empty
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has.
NamePrefix interface{} `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix
} `json:"allowed"`
APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
DownloadURL string `json:"downloadUrl"` // The base URL to use for downloading files.
MinimumPartSize int `json:"minimumPartSize"` // DEPRECATED: This field will always have the same value as recommendedPartSize. Use recommendedPartSize instead.
RecommendedPartSize int `json:"recommendedPartSize"` // The recommended size for each part of a large file. We recommend using this part size for optimal upload performance.
}
// ListBucketsRequest is parameters for b2_list_buckets call
type ListBucketsRequest struct {
AccountID string `json:"accountId"` // The identifier for the account.
BucketID string `json:"bucketId,omitempty"` // When specified, the result will be a list containing just this bucket.
BucketName string `json:"bucketName,omitempty"` // When specified, the result will be a list containing just this bucket.
BucketTypes []string `json:"bucketTypes,omitempty"` // If present, B2 will use it as a filter for bucket types returned in the list buckets response.
}
// ListBucketsResponse is as returned from the b2_list_buckets call
type ListBucketsResponse struct {
Buckets []Bucket `json:"buckets"`
}
// ListFileNamesRequest is as passed to b2_list_file_names or b2_list_file_versions
type ListFileNamesRequest struct {
BucketID string `json:"bucketId"` // required - The bucket to look for file names in.
StartFileName string `json:"startFileName,omitempty"` // optional - The first file name to return. If there is a file with this name, it will be returned in the list. If not, the first file name after this one will be returned.
MaxFileCount int `json:"maxFileCount,omitempty"` // optional - The maximum number of files to return from this call. The default value is 100, and the maximum allowed is 1000.
StartFileID string `json:"startFileId,omitempty"` // optional - What to pass in to startFileId for the next search to continue where this one left off.
Prefix string `json:"prefix,omitempty"` // optional - Files returned will be limited to those with the given prefix. Defaults to the empty string, which matches all files.
Delimiter string `json:"delimiter,omitempty"` // Files returned will be limited to those within the top folder, or any one subfolder. Defaults to NULL. Folder names will also be returned. The delimiter character will be used to "break" file names into folders.
}
// ListFileNamesResponse is as received from b2_list_file_names or b2_list_file_versions
type ListFileNamesResponse struct {
Files []File `json:"files"` // An array of objects, each one describing one file.
NextFileName *string `json:"nextFileName"` // What to pass in to startFileName for the next search to continue where this one left off, or null if there are no more files.
NextFileID *string `json:"nextFileId"` // What to pass in to startFileId for the next search to continue where this one left off, or null if there are no more files.
}
// GetUploadURLRequest is passed to b2_get_upload_url
type GetUploadURLRequest struct {
BucketID string `json:"bucketId"` // The ID of the bucket that you want to upload to.
}
// GetUploadURLResponse is received from b2_get_upload_url
type GetUploadURLResponse struct {
BucketID string `json:"bucketId"` // The unique ID of the bucket.
UploadURL string `json:"uploadUrl"` // The URL that can be used to upload files to this bucket, see b2_upload_file.
AuthorizationToken string `json:"authorizationToken"` // The authorizationToken that must be used when uploading files to this bucket, see b2_upload_file.
}
// GetDownloadAuthorizationRequest is passed to b2_get_download_authorization
type GetDownloadAuthorizationRequest struct {
BucketID string `json:"bucketId"` // The ID of the bucket that you want to upload to.
FileNamePrefix string `json:"fileNamePrefix"` // The file name prefix of files the download authorization token will allow access to.
ValidDurationInSeconds int64 `json:"validDurationInSeconds"` // The number of seconds before the authorization token will expire. The minimum value is 1 second. The maximum value is 604800 which is one week in seconds.
B2ContentDisposition string `json:"b2ContentDisposition,omitempty"` // optional - If this is present, download requests using the returned authorization must include the same value for b2ContentDisposition.
}
// GetDownloadAuthorizationResponse is received from b2_get_download_authorization
type GetDownloadAuthorizationResponse struct {
BucketID string `json:"bucketId"` // The unique ID of the bucket.
FileNamePrefix string `json:"fileNamePrefix"` // The file name prefix of files the download authorization token will allow access to.
AuthorizationToken string `json:"authorizationToken"` // The authorizationToken that must be used when downloading files, see b2_download_file_by_name.
}
// FileInfo is received from b2_upload_file, b2_get_file_info and b2_finish_large_file
type FileInfo struct {
ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name.
Action string `json:"action"` // Either "upload" or "hide". "upload" means a file that was uploaded to B2 Cloud Storage. "hide" means a file version marking the file as hidden, so that it will not show up in b2_list_file_names. The result of b2_list_file_names will contain only "upload". The result of b2_list_file_versions may have both.
AccountID string `json:"accountId"` // Your account ID.
BucketID string `json:"bucketId"` // The bucket that the file is in.
Size int64 `json:"contentLength"` // The number of bytes stored in the file.
UploadTimestamp Timestamp `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded.
SHA1 string `json:"contentSha1"` // The SHA1 of the bytes stored in the file.
ContentType string `json:"contentType"` // The MIME type of the file.
Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
}
// CreateBucketRequest is used to create a bucket
type CreateBucketRequest struct {
AccountID string `json:"accountId"`
Name string `json:"bucketName"`
Type string `json:"bucketType"`
}
// DeleteBucketRequest is used to create a bucket
type DeleteBucketRequest struct {
ID string `json:"bucketId"`
AccountID string `json:"accountId"`
}
// DeleteFileRequest is used to delete a file version
type DeleteFileRequest struct {
ID string `json:"fileId"` // The ID of the file, as returned by b2_upload_file, b2_list_file_names, or b2_list_file_versions.
Name string `json:"fileName"` // The name of this file.
}
// HideFileRequest is used to delete a file
type HideFileRequest struct {
BucketID string `json:"bucketId"` // The bucket containing the file to hide.
Name string `json:"fileName"` // The name of the file to hide.
}
// GetFileInfoRequest is used to return a FileInfo struct with b2_get_file_info
type GetFileInfoRequest struct {
ID string `json:"fileId"` // The ID of the file, as returned by b2_upload_file, b2_list_file_names, or b2_list_file_versions.
}
// StartLargeFileRequest (b2_start_large_file) Prepares for uploading the parts of a large file.
//
// If the original source of the file being uploaded has a last
// modified time concept, Backblaze recommends using
// src_last_modified_millis as the name, and a string holding the base
// 10 number of milliseconds since midnight, January 1, 1970
// UTC. This fits in a 64 bit integer such as the type "long" in the
// programming language Java. It is intended to be compatible with
// Java's time long. For example, it can be passed directly into the
// Java call Date.setTime(long time).
//
// If the caller knows the SHA1 of the entire large file being
// uploaded, Backblaze recommends using large_file_sha1 as the name,
// and a 40 byte hex string representing the SHA1.
//
// Example: { "src_last_modified_millis" : "1452802803026", "large_file_sha1" : "a3195dc1e7b46a2ff5da4b3c179175b75671e80d", "color": "blue" }
type StartLargeFileRequest struct {
BucketID string `json:"bucketId"` //The ID of the bucket that the file will go in.
Name string `json:"fileName"` // The name of the file. See Files for requirements on file names.
ContentType string `json:"contentType"` // The MIME type of the content of the file, which will be returned in the Content-Type header when downloading the file. Use the Content-Type b2/x-auto to automatically set the stored Content-Type post upload. In the case where a file extension is absent or the lookup fails, the Content-Type is set to application/octet-stream.
Info map[string]string `json:"fileInfo"` // A JSON object holding the name/value pairs for the custom file info.
}
// StartLargeFileResponse is the response to StartLargeFileRequest
type StartLargeFileResponse struct {
ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name.
AccountID string `json:"accountId"` // The identifier for the account.
BucketID string `json:"bucketId"` // The unique ID of the bucket.
ContentType string `json:"contentType"` // The MIME type of the file.
Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
UploadTimestamp Timestamp `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded.
}
// GetUploadPartURLRequest is passed to b2_get_upload_part_url
type GetUploadPartURLRequest struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
}
// GetUploadPartURLResponse is received from b2_get_upload_url
type GetUploadPartURLResponse struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
UploadURL string `json:"uploadUrl"` // The URL that can be used to upload files to this bucket, see b2_upload_part.
AuthorizationToken string `json:"authorizationToken"` // The authorizationToken that must be used when uploading files to this bucket, see b2_upload_part.
}
// UploadPartResponse is the response to b2_upload_part
type UploadPartResponse struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
PartNumber int64 `json:"partNumber"` // Which part this is (starting from 1)
Size int64 `json:"contentLength"` // The number of bytes stored in the file.
SHA1 string `json:"contentSha1"` // The SHA1 of the bytes stored in the file.
}
// FinishLargeFileRequest is passed to b2_finish_large_file
//
// The response is a FileInfo object (with extra AccountID and BucketID fields which we ignore).
//
// Large files do not have a SHA1 checksum. The value will always be "none".
type FinishLargeFileRequest struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
SHA1s []string `json:"partSha1Array"` // A JSON array of hex SHA1 checksums of the parts of the large file. This is a double-check that the right parts were uploaded in the right order, and that none were missed. Note that the part numbers start at 1, and the SHA1 of the part 1 is the first string in the array, at index 0.
}
// CancelLargeFileRequest is passed to b2_cancel_large_file
//
// The response is a CancelLargeFileResponse
type CancelLargeFileRequest struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
}
// CancelLargeFileResponse is the response to CancelLargeFileRequest
type CancelLargeFileResponse struct {
ID string `json:"fileId"` // The unique identifier of the file being uploaded.
Name string `json:"fileName"` // The name of this file.
AccountID string `json:"accountId"` // The identifier for the account.
BucketID string `json:"bucketId"` // The unique ID of the bucket.
}
// CopyFileRequest is as passed to b2_copy_file
type CopyFileRequest struct {
SourceID string `json:"sourceFileId"` // The ID of the source file being copied.
Name string `json:"fileName"` // The name of the new file being created.
Range string `json:"range,omitempty"` // The range of bytes to copy. If not provided, the whole source file will be copied.
MetadataDirective string `json:"metadataDirective,omitempty"` // The strategy for how to populate metadata for the new file: COPY or REPLACE
ContentType string `json:"contentType,omitempty"` // The MIME type of the content of the file (REPLACE only)
Info map[string]string `json:"fileInfo,omitempty"` // This field stores the metadata that will be stored with the file. (REPLACE only)
DestBucketID string `json:"destinationBucketId,omitempty"` // The destination ID of the bucket if set, if not the source bucket will be used
}
// CopyPartRequest is the request for b2_copy_part - the response is UploadPartResponse
type CopyPartRequest struct {
SourceID string `json:"sourceFileId"` // The ID of the source file being copied.
LargeFileID string `json:"largeFileId"` // The ID of the large file the part will belong to, as returned by b2_start_large_file.
PartNumber int64 `json:"partNumber"` // Which part this is (starting from 1)
Range string `json:"range,omitempty"` // The range of bytes to copy. If not provided, the whole source file will be copied.
}


@@ -0,0 +1,87 @@
package api_test
import (
"testing"
"time"
"github.com/rclone/rclone/backend/b2/api"
"github.com/rclone/rclone/fstest"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var (
emptyT api.Timestamp
t0 = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123456789Z"))
t0r = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123000000Z"))
t1 = api.Timestamp(fstest.Time("2001-02-03T04:05:06.123000000Z"))
)
func TestTimestampMarshalJSON(t *testing.T) {
resB, err := t0.MarshalJSON()
res := string(resB)
require.NoError(t, err)
assert.Equal(t, "3661123", res)
resB, err = t1.MarshalJSON()
res = string(resB)
require.NoError(t, err)
assert.Equal(t, "981173106123", res)
}
func TestTimestampUnmarshalJSON(t *testing.T) {
var tActual api.Timestamp
err := tActual.UnmarshalJSON([]byte("981173106123"))
require.NoError(t, err)
assert.Equal(t, (time.Time)(t1), (time.Time)(tActual))
}
func TestTimestampAddVersion(t *testing.T) {
for _, test := range []struct {
t api.Timestamp
in string
expected string
}{
{t0, "potato.txt", "potato-v1970-01-01-010101-123.txt"},
{t1, "potato", "potato-v2001-02-03-040506-123"},
{t1, "", "-v2001-02-03-040506-123"},
} {
actual := test.t.AddVersion(test.in)
assert.Equal(t, test.expected, actual, test.in)
}
}
func TestTimestampRemoveVersion(t *testing.T) {
for _, test := range []struct {
in string
expectedT api.Timestamp
expectedRemote string
}{
{"potato.txt", emptyT, "potato.txt"},
{"potato-v1970-01-01-010101-123.txt", t0r, "potato.txt"},
{"potato-v2001-02-03-040506-123", t1, "potato"},
{"-v2001-02-03-040506-123", t1, ""},
{"potato-v2A01-02-03-040506-123", emptyT, "potato-v2A01-02-03-040506-123"},
{"potato-v2001-02-03-040506=123", emptyT, "potato-v2001-02-03-040506=123"},
} {
actualT, actualRemote := api.RemoveVersion(test.in)
assert.Equal(t, test.expectedT, actualT, test.in)
assert.Equal(t, test.expectedRemote, actualRemote, test.in)
}
}
func TestTimestampIsZero(t *testing.T) {
assert.True(t, emptyT.IsZero())
assert.False(t, t0.IsZero())
assert.False(t, t1.IsZero())
}
func TestTimestampEqual(t *testing.T) {
assert.False(t, emptyT.Equal(emptyT))
assert.False(t, t0.Equal(emptyT))
assert.False(t, emptyT.Equal(t0))
assert.False(t, t0.Equal(t1))
assert.False(t, t1.Equal(t0))
assert.True(t, t0.Equal(t0))
assert.True(t, t1.Equal(t1))
}

backend/b2/b2.go (1941 lines): diff suppressed because it is too large


@@ -0,0 +1,170 @@
package b2
import (
"testing"
"time"
"github.com/rclone/rclone/fstest"
)
// Test b2 string encoding
// https://www.backblaze.com/b2/docs/string_encoding.html
var encodeTest = []struct {
fullyEncoded string
minimallyEncoded string
plainText string
}{
{fullyEncoded: "%20", minimallyEncoded: "+", plainText: " "},
{fullyEncoded: "%21", minimallyEncoded: "!", plainText: "!"},
{fullyEncoded: "%22", minimallyEncoded: "%22", plainText: "\""},
{fullyEncoded: "%23", minimallyEncoded: "%23", plainText: "#"},
{fullyEncoded: "%24", minimallyEncoded: "$", plainText: "$"},
{fullyEncoded: "%25", minimallyEncoded: "%25", plainText: "%"},
{fullyEncoded: "%26", minimallyEncoded: "%26", plainText: "&"},
{fullyEncoded: "%27", minimallyEncoded: "'", plainText: "'"},
{fullyEncoded: "%28", minimallyEncoded: "(", plainText: "("},
{fullyEncoded: "%29", minimallyEncoded: ")", plainText: ")"},
{fullyEncoded: "%2A", minimallyEncoded: "*", plainText: "*"},
{fullyEncoded: "%2B", minimallyEncoded: "%2B", plainText: "+"},
{fullyEncoded: "%2C", minimallyEncoded: "%2C", plainText: ","},
{fullyEncoded: "%2D", minimallyEncoded: "-", plainText: "-"},
{fullyEncoded: "%2E", minimallyEncoded: ".", plainText: "."},
{fullyEncoded: "%2F", minimallyEncoded: "/", plainText: "/"},
{fullyEncoded: "%30", minimallyEncoded: "0", plainText: "0"},
{fullyEncoded: "%31", minimallyEncoded: "1", plainText: "1"},
{fullyEncoded: "%32", minimallyEncoded: "2", plainText: "2"},
{fullyEncoded: "%33", minimallyEncoded: "3", plainText: "3"},
{fullyEncoded: "%34", minimallyEncoded: "4", plainText: "4"},
{fullyEncoded: "%35", minimallyEncoded: "5", plainText: "5"},
{fullyEncoded: "%36", minimallyEncoded: "6", plainText: "6"},
{fullyEncoded: "%37", minimallyEncoded: "7", plainText: "7"},
{fullyEncoded: "%38", minimallyEncoded: "8", plainText: "8"},
{fullyEncoded: "%39", minimallyEncoded: "9", plainText: "9"},
{fullyEncoded: "%3A", minimallyEncoded: ":", plainText: ":"},
{fullyEncoded: "%3B", minimallyEncoded: ";", plainText: ";"},
{fullyEncoded: "%3C", minimallyEncoded: "%3C", plainText: "<"},
{fullyEncoded: "%3D", minimallyEncoded: "=", plainText: "="},
{fullyEncoded: "%3E", minimallyEncoded: "%3E", plainText: ">"},
{fullyEncoded: "%3F", minimallyEncoded: "%3F", plainText: "?"},
{fullyEncoded: "%40", minimallyEncoded: "@", plainText: "@"},
{fullyEncoded: "%41", minimallyEncoded: "A", plainText: "A"},
{fullyEncoded: "%42", minimallyEncoded: "B", plainText: "B"},
{fullyEncoded: "%43", minimallyEncoded: "C", plainText: "C"},
{fullyEncoded: "%44", minimallyEncoded: "D", plainText: "D"},
{fullyEncoded: "%45", minimallyEncoded: "E", plainText: "E"},
{fullyEncoded: "%46", minimallyEncoded: "F", plainText: "F"},
{fullyEncoded: "%47", minimallyEncoded: "G", plainText: "G"},
{fullyEncoded: "%48", minimallyEncoded: "H", plainText: "H"},
{fullyEncoded: "%49", minimallyEncoded: "I", plainText: "I"},
{fullyEncoded: "%4A", minimallyEncoded: "J", plainText: "J"},
{fullyEncoded: "%4B", minimallyEncoded: "K", plainText: "K"},
{fullyEncoded: "%4C", minimallyEncoded: "L", plainText: "L"},
{fullyEncoded: "%4D", minimallyEncoded: "M", plainText: "M"},
{fullyEncoded: "%4E", minimallyEncoded: "N", plainText: "N"},
{fullyEncoded: "%4F", minimallyEncoded: "O", plainText: "O"},
{fullyEncoded: "%50", minimallyEncoded: "P", plainText: "P"},
{fullyEncoded: "%51", minimallyEncoded: "Q", plainText: "Q"},
{fullyEncoded: "%52", minimallyEncoded: "R", plainText: "R"},
{fullyEncoded: "%53", minimallyEncoded: "S", plainText: "S"},
{fullyEncoded: "%54", minimallyEncoded: "T", plainText: "T"},
{fullyEncoded: "%55", minimallyEncoded: "U", plainText: "U"},
{fullyEncoded: "%56", minimallyEncoded: "V", plainText: "V"},
{fullyEncoded: "%57", minimallyEncoded: "W", plainText: "W"},
{fullyEncoded: "%58", minimallyEncoded: "X", plainText: "X"},
{fullyEncoded: "%59", minimallyEncoded: "Y", plainText: "Y"},
{fullyEncoded: "%5A", minimallyEncoded: "Z", plainText: "Z"},
{fullyEncoded: "%5B", minimallyEncoded: "%5B", plainText: "["},
{fullyEncoded: "%5C", minimallyEncoded: "%5C", plainText: "\\"},
{fullyEncoded: "%5D", minimallyEncoded: "%5D", plainText: "]"},
{fullyEncoded: "%5E", minimallyEncoded: "%5E", plainText: "^"},
{fullyEncoded: "%5F", minimallyEncoded: "_", plainText: "_"},
{fullyEncoded: "%60", minimallyEncoded: "%60", plainText: "`"},
{fullyEncoded: "%61", minimallyEncoded: "a", plainText: "a"},
{fullyEncoded: "%62", minimallyEncoded: "b", plainText: "b"},
{fullyEncoded: "%63", minimallyEncoded: "c", plainText: "c"},
{fullyEncoded: "%64", minimallyEncoded: "d", plainText: "d"},
{fullyEncoded: "%65", minimallyEncoded: "e", plainText: "e"},
{fullyEncoded: "%66", minimallyEncoded: "f", plainText: "f"},
{fullyEncoded: "%67", minimallyEncoded: "g", plainText: "g"},
{fullyEncoded: "%68", minimallyEncoded: "h", plainText: "h"},
{fullyEncoded: "%69", minimallyEncoded: "i", plainText: "i"},
{fullyEncoded: "%6A", minimallyEncoded: "j", plainText: "j"},
{fullyEncoded: "%6B", minimallyEncoded: "k", plainText: "k"},
{fullyEncoded: "%6C", minimallyEncoded: "l", plainText: "l"},
{fullyEncoded: "%6D", minimallyEncoded: "m", plainText: "m"},
{fullyEncoded: "%6E", minimallyEncoded: "n", plainText: "n"},
{fullyEncoded: "%6F", minimallyEncoded: "o", plainText: "o"},
{fullyEncoded: "%70", minimallyEncoded: "p", plainText: "p"},
{fullyEncoded: "%71", minimallyEncoded: "q", plainText: "q"},
{fullyEncoded: "%72", minimallyEncoded: "r", plainText: "r"},
{fullyEncoded: "%73", minimallyEncoded: "s", plainText: "s"},
{fullyEncoded: "%74", minimallyEncoded: "t", plainText: "t"},
{fullyEncoded: "%75", minimallyEncoded: "u", plainText: "u"},
{fullyEncoded: "%76", minimallyEncoded: "v", plainText: "v"},
{fullyEncoded: "%77", minimallyEncoded: "w", plainText: "w"},
{fullyEncoded: "%78", minimallyEncoded: "x", plainText: "x"},
{fullyEncoded: "%79", minimallyEncoded: "y", plainText: "y"},
{fullyEncoded: "%7A", minimallyEncoded: "z", plainText: "z"},
{fullyEncoded: "%7B", minimallyEncoded: "%7B", plainText: "{"},
{fullyEncoded: "%7C", minimallyEncoded: "%7C", plainText: "|"},
{fullyEncoded: "%7D", minimallyEncoded: "%7D", plainText: "}"},
{fullyEncoded: "%7E", minimallyEncoded: "~", plainText: "~"},
{fullyEncoded: "%7F", minimallyEncoded: "%7F", plainText: "\u007f"},
{fullyEncoded: "%E8%87%AA%E7%94%B1", minimallyEncoded: "%E8%87%AA%E7%94%B1", plainText: "自由"},
{fullyEncoded: "%F0%90%90%80", minimallyEncoded: "%F0%90%90%80", plainText: "𐐀"},
}
func TestUrlEncode(t *testing.T) {
for _, test := range encodeTest {
got := urlEncode(test.plainText)
if got != test.minimallyEncoded && got != test.fullyEncoded {
t.Errorf("urlEncode(%q) got %q wanted %q or %q", test.plainText, got, test.minimallyEncoded, test.fullyEncoded)
}
}
}
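// urlEncodeSketch is an illustrative re-implementation written for this
// listing, not the backend's actual urlEncode (which lives in b2.go, whose
// diff is suppressed above). It follows the "minimally encoded" column of
// the table: space becomes "+", characters B2 documents as safe are kept
// literally, and every other byte (including each byte of a multi-byte
// UTF-8 sequence) is percent-encoded. It assumes "fmt" and "strings" are
// imported.
func urlEncodeSketch(in string) string {
	const safe = "!$'()*-./:;=@_~"
	var out strings.Builder
	for i := 0; i < len(in); i++ {
		c := in[i]
		switch {
		case c == ' ':
			out.WriteByte('+')
		case c >= 'A' && c <= 'Z', c >= 'a' && c <= 'z', c >= '0' && c <= '9',
			strings.IndexByte(safe, c) >= 0:
			out.WriteByte(c)
		default:
			fmt.Fprintf(&out, "%%%02X", c)
		}
	}
	return out.String()
}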
func TestTimeString(t *testing.T) {
for _, test := range []struct {
in time.Time
want string
}{
{fstest.Time("1970-01-01T00:00:00.000000000Z"), "0"},
{fstest.Time("2001-02-03T04:05:10.123123123Z"), "981173110123"},
{fstest.Time("2001-02-03T05:05:10.123123123+01:00"), "981173110123"},
} {
got := timeString(test.in)
if test.want != got {
t.Logf("%v: want %v got %v", test.in, test.want, got)
}
}
}
func TestParseTimeString(t *testing.T) {
for _, test := range []struct {
in string
want time.Time
wantError string
}{
{"0", fstest.Time("1970-01-01T00:00:00.000000000Z"), ""},
{"981173110123", fstest.Time("2001-02-03T04:05:10.123000000Z"), ""},
{"", time.Time{}, ""},
{"potato", time.Time{}, `strconv.ParseInt: parsing "potato": invalid syntax`},
} {
o := Object{}
err := o.parseTimeString(test.in)
got := o.modTime
var gotError string
if err != nil {
gotError = err.Error()
}
if test.want != got {
t.Logf("%v: want %v got %v", test.in, test.want, got)
}
if test.wantError != gotError {
t.Logf("%v: want error %v got error %v", test.in, test.wantError, gotError)
}
}
}

backend/b2/b2_test.go (34 lines)

@@ -0,0 +1,34 @@
// Test B2 filesystem interface
package b2
import (
"testing"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestB2:",
NilObject: (*Object)(nil),
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: minChunkSize,
NeedMultipleChunks: true,
},
})
}
func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetUploadCutoffer = (*Fs)(nil)
)

backend/b2/upload.go (482 lines)

@@ -0,0 +1,482 @@
// Upload large files for b2
//
// Docs - https://www.backblaze.com/b2/docs/large_files.html
package b2
import (
"bytes"
"context"
"crypto/sha1"
"encoding/hex"
"fmt"
gohash "hash"
"io"
"strings"
"sync"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/b2/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/rest"
"golang.org/x/sync/errgroup"
)
type hashAppendingReader struct {
h gohash.Hash
in io.Reader
hexSum string
hexReader io.Reader
}
// Read returns all the bytes from the original reader, then the hex sum
// of what was read, then EOF.
func (har *hashAppendingReader) Read(b []byte) (int, error) {
if har.hexReader == nil {
n, err := har.in.Read(b)
if err == io.EOF {
har.in = nil // allow GC
err = nil // allow reading hexSum before EOF
har.hexSum = hex.EncodeToString(har.h.Sum(nil))
har.hexReader = strings.NewReader(har.hexSum)
}
return n, err
}
return har.hexReader.Read(b)
}
// AdditionalLength returns how many bytes the appended hex sum will take up.
func (har *hashAppendingReader) AdditionalLength() int {
return hex.EncodedLen(har.h.Size())
}
// HexSum returns the hash sum as hex. It's only available after the original
// reader has EOF'd. It's an empty string before that.
func (har *hashAppendingReader) HexSum() string {
return har.hexSum
}
// newHashAppendingReader takes a Reader and a Hash and will append the hex sum
// after the original reader reaches EOF. The increased size depends on the
// given hash, which may be queried through AdditionalLength()
func newHashAppendingReader(in io.Reader, h gohash.Hash) *hashAppendingReader {
withHash := io.TeeReader(in, h)
return &hashAppendingReader{h: h, in: withHash}
}
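// transferBodySketch is an illustrative sketch added for this listing, not
// part of the original file. It shows how transferChunk (below) uses
// newHashAppendingReader with "hex_digits_at_end": the body sent is the
// chunk followed by its hex SHA1, the declared Content-Length includes that
// trailer, and HexSum() only becomes available once the reader is drained.
// Only identifiers defined in this file and its imports are used.
func transferBodySketch(chunk []byte) (declaredLen int64, body []byte, hexSHA1 string, err error) {
	in := newHashAppendingReader(bytes.NewReader(chunk), sha1.New())
	declaredLen = int64(len(chunk)) + int64(in.AdditionalLength())
	var buf bytes.Buffer
	_, err = io.Copy(&buf, in) // chunk bytes, then 40 hex characters of SHA1
	return declaredLen, buf.Bytes(), in.HexSum(), err
}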
// largeUpload is used to control the upload of large files which need chunking
type largeUpload struct {
f *Fs // parent Fs
o *Object // object being uploaded
doCopy bool // doing copy rather than upload
what string // text name of operation for logs
in io.Reader // read the data from here
wrap accounting.WrapFn // account parts being transferred
id string // ID of the file being uploaded
size int64 // total size
parts int64 // calculated number of parts, if known
sha1s []string // slice of SHA1s for each part
uploadMu sync.Mutex // lock for upload variable
uploads []*api.GetUploadPartURLResponse // result of get upload URL calls
chunkSize int64 // chunk size to use
src *Object // if copying, object we are reading from
}
// newLargeUpload starts an upload of object o from in with metadata in src
//
// If newInfo is set then metadata from that will be used instead of reading it from src
func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, chunkSize fs.SizeSuffix, doCopy bool, newInfo *api.File) (up *largeUpload, err error) {
remote := o.remote
size := src.Size()
parts := int64(0)
sha1SliceSize := int64(maxParts)
if size == -1 {
fs.Debugf(o, "Streaming upload with --b2-chunk-size %s allows uploads of up to %s and will fail only when that limit is reached.", f.opt.ChunkSize, maxParts*f.opt.ChunkSize)
} else {
parts = size / int64(chunkSize)
if size%int64(chunkSize) != 0 {
parts++
}
if parts > maxParts {
return nil, errors.Errorf("%q too big (%d bytes) makes too many parts %d > %d - increase --b2-chunk-size", remote, size, parts, maxParts)
}
sha1SliceSize = parts
}
opts := rest.Opts{
Method: "POST",
Path: "/b2_start_large_file",
}
bucket, bucketPath := o.split()
bucketID, err := f.getBucketID(ctx, bucket)
if err != nil {
return nil, err
}
var request = api.StartLargeFileRequest{
BucketID: bucketID,
Name: f.opt.Enc.FromStandardPath(bucketPath),
}
if newInfo == nil {
modTime := src.ModTime(ctx)
request.ContentType = fs.MimeType(ctx, src)
request.Info = map[string]string{
timeKey: timeString(modTime),
}
// Set the SHA1 if known
if !o.fs.opt.DisableCheckSum || doCopy {
if calculatedSha1, err := src.Hash(ctx, hash.SHA1); err == nil && calculatedSha1 != "" {
request.Info[sha1Key] = calculatedSha1
}
}
} else {
request.ContentType = newInfo.ContentType
request.Info = newInfo.Info
}
var response api.StartLargeFileResponse
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
}
up = &largeUpload{
f: f,
o: o,
doCopy: doCopy,
what: "upload",
id: response.ID,
size: size,
parts: parts,
sha1s: make([]string, sha1SliceSize),
chunkSize: int64(chunkSize),
}
// unwrap the accounting from the input, we use wrap to put it
// back on after the buffering
if doCopy {
up.what = "copy"
up.src = src.(*Object)
} else {
up.in, up.wrap = accounting.UnWrap(in)
}
return up, nil
}
// getUploadURL returns the upload info with the UploadURL and the AuthorizationToken
//
// This should be returned with returnUploadURL when finished
func (up *largeUpload) getUploadURL(ctx context.Context) (upload *api.GetUploadPartURLResponse, err error) {
up.uploadMu.Lock()
defer up.uploadMu.Unlock()
if len(up.uploads) == 0 {
opts := rest.Opts{
Method: "POST",
Path: "/b2_get_upload_part_url",
}
var request = api.GetUploadPartURLRequest{
ID: up.id,
}
err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &upload)
return up.f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to get upload URL")
}
} else {
upload, up.uploads = up.uploads[0], up.uploads[1:]
}
return upload, nil
}
// returnUploadURL returns the UploadURL to the cache
func (up *largeUpload) returnUploadURL(upload *api.GetUploadPartURLResponse) {
if upload == nil {
return
}
up.uploadMu.Lock()
up.uploads = append(up.uploads, upload)
up.uploadMu.Unlock()
}
// Transfer a chunk
func (up *largeUpload) transferChunk(ctx context.Context, part int64, body []byte) error {
err := up.f.pacer.Call(func() (bool, error) {
fs.Debugf(up.o, "Sending chunk %d length %d", part, len(body))
// Get upload URL
upload, err := up.getUploadURL(ctx)
if err != nil {
return false, err
}
in := newHashAppendingReader(bytes.NewReader(body), sha1.New())
size := int64(len(body)) + int64(in.AdditionalLength())
// Authorization
//
// An upload authorization token, from b2_get_upload_part_url.
//
// X-Bz-Part-Number
//
// A number from 1 to 10000. The parts uploaded for one file
// must have contiguous numbers, starting with 1.
//
// Content-Length
//
// The number of bytes in the file being uploaded. Note that
// this header is required; you cannot leave it out and just
// use chunked encoding. The minimum size of every part but
// the last one is 100MB.
//
// X-Bz-Content-Sha1
//
// The SHA1 checksum of this part of the file. B2 will
// check this when the part is uploaded, to make sure that the
// data arrived correctly. The same SHA1 checksum must be
// passed to b2_finish_large_file.
opts := rest.Opts{
Method: "POST",
RootURL: upload.UploadURL,
Body: up.wrap(in),
ExtraHeaders: map[string]string{
"Authorization": upload.AuthorizationToken,
"X-Bz-Part-Number": fmt.Sprintf("%d", part),
sha1Header: "hex_digits_at_end",
},
ContentLength: &size,
}
var response api.UploadPartResponse
resp, err := up.f.srv.CallJSON(ctx, &opts, nil, &response)
retry, err := up.f.shouldRetry(ctx, resp, err)
if err != nil {
fs.Debugf(up.o, "Error sending chunk %d (retry=%v): %v: %#v", part, retry, err, err)
}
// On retryable error clear PartUploadURL
if retry {
fs.Debugf(up.o, "Clearing part upload URL because of error: %v", err)
upload = nil
}
up.returnUploadURL(upload)
up.sha1s[part-1] = in.HexSum()
return retry, err
})
if err != nil {
fs.Debugf(up.o, "Error sending chunk %d: %v", part, err)
} else {
fs.Debugf(up.o, "Done sending chunk %d", part)
}
return err
}
// Copy a chunk
func (up *largeUpload) copyChunk(ctx context.Context, part int64, partSize int64) error {
err := up.f.pacer.Call(func() (bool, error) {
fs.Debugf(up.o, "Copying chunk %d length %d", part, partSize)
opts := rest.Opts{
Method: "POST",
Path: "/b2_copy_part",
}
offset := (part - 1) * up.chunkSize // where we are in the source file
var request = api.CopyPartRequest{
SourceID: up.src.id,
LargeFileID: up.id,
PartNumber: part,
Range: fmt.Sprintf("bytes=%d-%d", offset, offset+partSize-1),
}
var response api.UploadPartResponse
resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &response)
retry, err := up.f.shouldRetry(ctx, resp, err)
if err != nil {
fs.Debugf(up.o, "Error copying chunk %d (retry=%v): %v: %#v", part, retry, err, err)
}
up.sha1s[part-1] = response.SHA1
return retry, err
})
if err != nil {
fs.Debugf(up.o, "Error copying chunk %d: %v", part, err)
} else {
fs.Debugf(up.o, "Done copying chunk %d", part)
}
return err
}
// finish closes off the large upload
func (up *largeUpload) finish(ctx context.Context) error {
fs.Debugf(up.o, "Finishing large file %s with %d parts", up.what, up.parts)
opts := rest.Opts{
Method: "POST",
Path: "/b2_finish_large_file",
}
var request = api.FinishLargeFileRequest{
ID: up.id,
SHA1s: up.sha1s,
}
var response api.FileInfo
err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &response)
return up.f.shouldRetry(ctx, resp, err)
})
if err != nil {
return err
}
return up.o.decodeMetaDataFileInfo(&response)
}
// cancel aborts the large upload
func (up *largeUpload) cancel(ctx context.Context) error {
fs.Debugf(up.o, "Cancelling large file %s", up.what)
opts := rest.Opts{
Method: "POST",
Path: "/b2_cancel_large_file",
}
var request = api.CancelLargeFileRequest{
ID: up.id,
}
var response api.CancelLargeFileResponse
err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &response)
return up.f.shouldRetry(ctx, resp, err)
})
if err != nil {
fs.Errorf(up.o, "Failed to cancel large file %s: %v", up.what, err)
}
return err
}
// Stream uploads the chunks from the input, starting with a required initial
// chunk. Assumes the file size is unknown and will upload until the input
// reaches EOF.
//
// Note that initialUploadBlock must be returned to f.putBuf()
func (up *largeUpload) Stream(ctx context.Context, initialUploadBlock []byte) (err error) {
defer atexit.OnError(&err, func() { _ = up.cancel(ctx) })()
fs.Debugf(up.o, "Starting streaming of large file (id %q)", up.id)
var (
g, gCtx = errgroup.WithContext(ctx)
hasMoreParts = true
)
up.size = int64(len(initialUploadBlock))
g.Go(func() error {
for part := int64(1); hasMoreParts; part++ {
// Get a block of memory from the pool and token which limits concurrency.
var buf []byte
if part == 1 {
buf = initialUploadBlock
} else {
buf = up.f.getBuf(false)
}
// Fail fast: if an errgroup managed function has returned an error then
// gCtx is cancelled and there is no point in uploading the other parts.
if gCtx.Err() != nil {
up.f.putBuf(buf, false)
return nil
}
// Read the chunk
var n int
if part == 1 {
n = len(buf)
} else {
n, err = io.ReadFull(up.in, buf)
if err == io.ErrUnexpectedEOF {
fs.Debugf(up.o, "Read less than a full chunk, making this the last one.")
buf = buf[:n]
hasMoreParts = false
} else if err == io.EOF {
fs.Debugf(up.o, "Could not read any more bytes, previous chunk was the last.")
up.f.putBuf(buf, false)
return nil
} else if err != nil {
// other kinds of errors indicate failure
up.f.putBuf(buf, false)
return err
}
}
// Keep stats up to date
up.parts = part
up.size += int64(n)
if part > maxParts {
up.f.putBuf(buf, false)
return errors.Errorf("%q too big (%d bytes so far) makes too many parts %d > %d - increase --b2-chunk-size", up.o, up.size, up.parts, maxParts)
}
part := part // for the closure
g.Go(func() (err error) {
defer up.f.putBuf(buf, false)
return up.transferChunk(gCtx, part, buf)
})
}
return nil
})
err = g.Wait()
if err != nil {
return err
}
up.sha1s = up.sha1s[:up.parts]
return up.finish(ctx)
}
// Upload uploads the chunks from the input
func (up *largeUpload) Upload(ctx context.Context) (err error) {
defer atexit.OnError(&err, func() { _ = up.cancel(ctx) })()
fs.Debugf(up.o, "Starting %s of large file in %d chunks (id %q)", up.what, up.parts, up.id)
var (
g, gCtx = errgroup.WithContext(ctx)
remaining = up.size
)
g.Go(func() error {
for part := int64(1); part <= up.parts; part++ {
// Get a block of memory from the pool and token which limits concurrency.
buf := up.f.getBuf(up.doCopy)
// Fail fast: if an errgroup managed function has returned an error then
// gCtx is cancelled and there is no point in uploading the other parts.
if gCtx.Err() != nil {
up.f.putBuf(buf, up.doCopy)
return nil
}
reqSize := remaining
if reqSize >= up.chunkSize {
reqSize = up.chunkSize
}
if !up.doCopy {
// Read the chunk
buf = buf[:reqSize]
_, err = io.ReadFull(up.in, buf)
if err != nil {
up.f.putBuf(buf, up.doCopy)
return err
}
}
part := part // for the closure
g.Go(func() (err error) {
defer up.f.putBuf(buf, up.doCopy)
if !up.doCopy {
err = up.transferChunk(gCtx, part, buf)
} else {
err = up.copyChunk(gCtx, part, reqSize)
}
return err
})
remaining -= reqSize
}
return nil
})
err = g.Wait()
if err != nil {
return err
}
return up.finish(ctx)
}

backend/box/api/types.go (244 lines)

@@ -0,0 +1,244 @@
// Package api has type definitions for box
//
// Converted from the API docs with help from https://mholt.github.io/json-to-go/
package api
import (
"encoding/json"
"fmt"
"time"
)
const (
// 2017-05-03T07:26:10-07:00
timeFormat = `"` + time.RFC3339 + `"`
)
// Time represents date and time information for the
// box API, using RFC3339
type Time time.Time
// MarshalJSON turns a Time into JSON (in UTC)
func (t *Time) MarshalJSON() (out []byte, err error) {
timeString := (*time.Time)(t).Format(timeFormat)
return []byte(timeString), nil
}
// UnmarshalJSON turns JSON into a Time
func (t *Time) UnmarshalJSON(data []byte) error {
newT, err := time.Parse(timeFormat, string(data))
if err != nil {
return err
}
*t = Time(newT)
return nil
}
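// timeWireFormatSketch is an illustrative sketch added for this listing,
// not part of the original file: it shows the wire format the Time type
// above produces, an RFC3339 string wrapped in quotes, and that it round
// trips through UnmarshalJSON. The timestamp is hypothetical; only the
// "time" import already listed is needed.
func timeWireFormatSketch() (string, error) {
	t := Time(time.Date(2017, 5, 3, 7, 26, 10, 0, time.UTC))
	data, err := t.MarshalJSON() // `"2017-05-03T07:26:10Z"`
	if err != nil {
		return "", err
	}
	var back Time
	if err := back.UnmarshalJSON(data); err != nil {
		return "", err
	}
	return string(data), nil
}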
// Error is returned from box when things go wrong
type Error struct {
Type string `json:"type"`
Status int `json:"status"`
Code string `json:"code"`
ContextInfo json.RawMessage
HelpURL string `json:"help_url"`
Message string `json:"message"`
RequestID string `json:"request_id"`
}
// Error returns a string for the error and satisfies the error interface
func (e *Error) Error() string {
out := fmt.Sprintf("Error %q (%d)", e.Code, e.Status)
if e.Message != "" {
out += ": " + e.Message
}
if e.ContextInfo != nil {
out += fmt.Sprintf(" (%+v)", e.ContextInfo)
}
return out
}
// Check Error satisfies the error interface
var _ error = (*Error)(nil)
// ItemFields are the fields needed for FileInfo
var ItemFields = "type,id,sequence_id,etag,sha1,name,size,created_at,modified_at,content_created_at,content_modified_at,item_status,shared_link"
// Types of things in Item
const (
ItemTypeFolder = "folder"
ItemTypeFile = "file"
ItemStatusActive = "active"
ItemStatusTrashed = "trashed"
ItemStatusDeleted = "deleted"
)
// Item describes a folder or a file as returned by Get Folder Items and others
type Item struct {
Type string `json:"type"`
ID string `json:"id"`
SequenceID string `json:"sequence_id"`
Etag string `json:"etag"`
SHA1 string `json:"sha1"`
Name string `json:"name"`
Size float64 `json:"size"` // box returns this in xEyy format for very large numbers - see #2261
CreatedAt Time `json:"created_at"`
ModifiedAt Time `json:"modified_at"`
ContentCreatedAt Time `json:"content_created_at"`
ContentModifiedAt Time `json:"content_modified_at"`
ItemStatus string `json:"item_status"` // active, trashed if the file has been moved to the trash, and deleted if the file has been permanently deleted
SharedLink struct {
URL string `json:"url,omitempty"`
Access string `json:"access,omitempty"`
} `json:"shared_link"`
}
// ModTime returns the modification time of the item
func (i *Item) ModTime() (t time.Time) {
t = time.Time(i.ContentModifiedAt)
if t.IsZero() {
t = time.Time(i.ModifiedAt)
}
return t
}
// FolderItems is returned from the GetFolderItems call
type FolderItems struct {
TotalCount int `json:"total_count"`
Entries []Item `json:"entries"`
Offset int `json:"offset"`
Limit int `json:"limit"`
Order []struct {
By string `json:"by"`
Direction string `json:"direction"`
} `json:"order"`
}
// Parent defines the ID of the parent directory
type Parent struct {
ID string `json:"id"`
}
// CreateFolder is the request for Create Folder
type CreateFolder struct {
Name string `json:"name"`
Parent Parent `json:"parent"`
}
// UploadFile is the request for Upload File
type UploadFile struct {
Name string `json:"name"`
Parent Parent `json:"parent"`
ContentCreatedAt Time `json:"content_created_at"`
ContentModifiedAt Time `json:"content_modified_at"`
}
// UpdateFileModTime is used in Update File Info
type UpdateFileModTime struct {
ContentModifiedAt Time `json:"content_modified_at"`
}
// UpdateFileMove is the request used to change a file's name and parent
type UpdateFileMove struct {
Name string `json:"name"`
Parent Parent `json:"parent"`
}
// CopyFile is the request for Copy File
type CopyFile struct {
Name string `json:"name"`
Parent Parent `json:"parent"`
}
// CreateSharedLink is the request for Public Link
type CreateSharedLink struct {
SharedLink struct {
URL string `json:"url,omitempty"`
Access string `json:"access,omitempty"`
} `json:"shared_link"`
}
// UploadSessionRequest is used in Create Upload Session
type UploadSessionRequest struct {
FolderID string `json:"folder_id,omitempty"` // don't pass for update
FileSize int64 `json:"file_size"`
FileName string `json:"file_name,omitempty"` // optional for update
}
// UploadSessionResponse is returned from Create Upload Session
type UploadSessionResponse struct {
TotalParts int `json:"total_parts"`
PartSize int64 `json:"part_size"`
SessionEndpoints struct {
ListParts string `json:"list_parts"`
Commit string `json:"commit"`
UploadPart string `json:"upload_part"`
Status string `json:"status"`
Abort string `json:"abort"`
} `json:"session_endpoints"`
SessionExpiresAt Time `json:"session_expires_at"`
ID string `json:"id"`
Type string `json:"type"`
NumPartsProcessed int `json:"num_parts_processed"`
}
// Part defines the return from the upload part call, which is also passed to the commit upload call
type Part struct {
PartID string `json:"part_id"`
Offset int64 `json:"offset"`
Size int64 `json:"size"`
Sha1 string `json:"sha1"`
}
// UploadPartResponse is returned from the upload part call
type UploadPartResponse struct {
Part Part `json:"part"`
}
// CommitUpload is used in the Commit Upload call
type CommitUpload struct {
Parts []Part `json:"parts"`
Attributes struct {
ContentCreatedAt Time `json:"content_created_at"`
ContentModifiedAt Time `json:"content_modified_at"`
} `json:"attributes"`
}
// ConfigJSON defines the shape of a box config.json
type ConfigJSON struct {
BoxAppSettings AppSettings `json:"boxAppSettings"`
EnterpriseID string `json:"enterpriseID"`
}
// AppSettings defines the shape of the boxAppSettings within box config.json
type AppSettings struct {
ClientID string `json:"clientID"`
ClientSecret string `json:"clientSecret"`
AppAuth AppAuth `json:"appAuth"`
}
// AppAuth defines the shape of the appAuth within boxAppSettings in config.json
type AppAuth struct {
PublicKeyID string `json:"publicKeyID"`
PrivateKey string `json:"privateKey"`
Passphrase string `json:"passphrase"`
}
// User is returned from /users/me
type User struct {
Type string `json:"type"`
ID string `json:"id"`
Name string `json:"name"`
Login string `json:"login"`
CreatedAt time.Time `json:"created_at"`
ModifiedAt time.Time `json:"modified_at"`
Language string `json:"language"`
Timezone string `json:"timezone"`
SpaceAmount int64 `json:"space_amount"`
SpaceUsed int64 `json:"space_used"`
MaxUploadSize int64 `json:"max_upload_size"`
Status string `json:"status"`
JobTitle string `json:"job_title"`
Phone string `json:"phone"`
Address string `json:"address"`
AvatarURL string `json:"avatar_url"`
}

backend/box/box.go (1307 lines): diff suppressed because it is too large

backend/box/box_test.go (17 lines)

@@ -0,0 +1,17 @@
// Test Box filesystem interface
package box_test
import (
"testing"
"github.com/rclone/rclone/backend/box"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestBox:",
NilObject: (*box.Object)(nil),
})
}

backend/box/upload.go (276 lines)

@@ -0,0 +1,276 @@
// multipart upload for box
package box
import (
"bytes"
"context"
"crypto/sha1"
"encoding/base64"
"encoding/json"
"fmt"
"io"
"net/http"
"strconv"
"sync"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/box/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/rest"
)
// createUploadSession creates an upload session for the object
func (o *Object) createUploadSession(ctx context.Context, leaf, directoryID string, size int64) (response *api.UploadSessionResponse, err error) {
opts := rest.Opts{
Method: "POST",
Path: "/files/upload_sessions",
RootURL: uploadURL,
}
request := api.UploadSessionRequest{
FileSize: size,
}
// If object has an ID then it is existing so create a new version
if o.id != "" {
opts.Path = "/files/" + o.id + "/upload_sessions"
} else {
opts.Path = "/files/upload_sessions"
request.FolderID = directoryID
request.FileName = o.fs.opt.Enc.FromStandardName(leaf)
}
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, &response)
return shouldRetry(resp, err)
})
return
}
// sha1Digest produces a digest using sha1 as per RFC3230
func sha1Digest(digest []byte) string {
return "sha=" + base64.StdEncoding.EncodeToString(digest)
}
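// digestHeaderSketch is an illustrative sketch added for this listing, not
// part of the original file: it shows the RFC 3230 Digest header value that
// uploadPart and commitUpload (below) send, i.e. "sha=" followed by the
// base64 of the raw SHA1. For an empty chunk this yields
// "sha=2jmj7l5rSw0yVb/vlWAYkK/YBwk=". Only imports already listed are used.
func digestHeaderSketch(chunk []byte) string {
	sum := sha1.Sum(chunk) // raw 20-byte SHA1 of the chunk
	return sha1Digest(sum[:])
}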
// uploadPart uploads a part in an upload session
func (o *Object) uploadPart(ctx context.Context, SessionID string, offset, totalSize int64, chunk []byte, wrap accounting.WrapFn, options ...fs.OpenOption) (response *api.UploadPartResponse, err error) {
chunkSize := int64(len(chunk))
sha1sum := sha1.Sum(chunk)
opts := rest.Opts{
Method: "PUT",
Path: "/files/upload_sessions/" + SessionID,
RootURL: uploadURL,
ContentType: "application/octet-stream",
ContentLength: &chunkSize,
ContentRange: fmt.Sprintf("bytes %d-%d/%d", offset, offset+chunkSize-1, totalSize),
Options: options,
ExtraHeaders: map[string]string{
"Digest": sha1Digest(sha1sum[:]),
},
}
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
opts.Body = wrap(bytes.NewReader(chunk))
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &response)
return shouldRetry(resp, err)
})
if err != nil {
return nil, err
}
return response, nil
}
// commitUpload finishes an upload session
func (o *Object) commitUpload(ctx context.Context, SessionID string, parts []api.Part, modTime time.Time, sha1sum []byte) (result *api.FolderItems, err error) {
opts := rest.Opts{
Method: "POST",
Path: "/files/upload_sessions/" + SessionID + "/commit",
RootURL: uploadURL,
ExtraHeaders: map[string]string{
"Digest": sha1Digest(sha1sum),
},
}
request := api.CommitUpload{
Parts: parts,
}
request.Attributes.ContentModifiedAt = api.Time(modTime)
request.Attributes.ContentCreatedAt = api.Time(modTime)
var body []byte
var resp *http.Response
// For discussion of this value see:
// https://github.com/rclone/rclone/issues/2054
maxTries := o.fs.opt.CommitRetries
const defaultDelay = 10
var tries int
outer:
for tries = 0; tries < maxTries; tries++ {
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil)
if err != nil {
return shouldRetry(resp, err)
}
body, err = rest.ReadBody(resp)
return shouldRetry(resp, err)
})
delay := defaultDelay
var why string
if err != nil {
// Sometimes we get 400 Error with
// parts_mismatch immediately after uploading
// the last part. Ignore this error and wait.
if boxErr, ok := err.(*api.Error); ok && boxErr.Code == "parts_mismatch" {
why = err.Error()
} else {
return nil, err
}
} else {
switch resp.StatusCode {
case http.StatusOK, http.StatusCreated:
break outer
case http.StatusAccepted:
why = "not ready yet"
delayString := resp.Header.Get("Retry-After")
if delayString != "" {
delay, err = strconv.Atoi(delayString)
if err != nil {
fs.Debugf(o, "Couldn't decode Retry-After header %q: %v", delayString, err)
delay = defaultDelay
}
}
default:
return nil, errors.Errorf("unknown HTTP status return %q (%d)", resp.Status, resp.StatusCode)
}
}
fs.Debugf(o, "commit multipart upload failed %d/%d - trying again in %d seconds (%s)", tries+1, maxTries, delay, why)
time.Sleep(time.Duration(delay) * time.Second)
}
if tries >= maxTries {
return nil, errors.New("too many tries to commit multipart upload - increase --low-level-retries")
}
err = json.Unmarshal(body, &result)
if err != nil {
return nil, errors.Wrapf(err, "couldn't decode commit response: %q", body)
}
return result, nil
}
// abortUpload cancels an upload session
func (o *Object) abortUpload(ctx context.Context, SessionID string) (err error) {
opts := rest.Opts{
Method: "DELETE",
Path: "/files/upload_sessions/" + SessionID,
RootURL: uploadURL,
NoResponse: true,
}
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
return err
}
// uploadMultipart uploads a file using multipart upload
func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, leaf, directoryID string, size int64, modTime time.Time, options ...fs.OpenOption) (err error) {
// Create upload session
session, err := o.createUploadSession(ctx, leaf, directoryID, size)
if err != nil {
return errors.Wrap(err, "multipart upload create session failed")
}
chunkSize := session.PartSize
fs.Debugf(o, "Multipart upload session started for %d parts of size %v", session.TotalParts, fs.SizeSuffix(chunkSize))
// Cancel the session if something went wrong
defer atexit.OnError(&err, func() {
fs.Debugf(o, "Cancelling multipart upload: %v", err)
cancelErr := o.abortUpload(ctx, session.ID)
if cancelErr != nil {
fs.Logf(o, "Failed to cancel multipart upload: %v", cancelErr)
}
})()
// unwrap the accounting from the input, we use wrap to put it
// back on after the buffering
in, wrap := accounting.UnWrap(in)
// Upload the chunks
remaining := size
position := int64(0)
parts := make([]api.Part, session.TotalParts)
hash := sha1.New()
errs := make(chan error, 1)
var wg sync.WaitGroup
outer:
for part := 0; part < session.TotalParts; part++ {
// Check any errors
select {
case err = <-errs:
break outer
default:
}
reqSize := remaining
if reqSize >= chunkSize {
reqSize = chunkSize
}
// Make a block of memory
buf := make([]byte, reqSize)
// Read the chunk
_, err = io.ReadFull(in, buf)
if err != nil {
err = errors.Wrap(err, "multipart upload failed to read source")
break outer
}
// Make the global hash (must be done sequentially)
_, _ = hash.Write(buf)
// Transfer the chunk
wg.Add(1)
o.fs.uploadToken.Get()
go func(part int, position int64) {
defer wg.Done()
defer o.fs.uploadToken.Put()
fs.Debugf(o, "Uploading part %d/%d offset %v/%v part size %v", part+1, session.TotalParts, fs.SizeSuffix(position), fs.SizeSuffix(size), fs.SizeSuffix(chunkSize))
partResponse, err := o.uploadPart(ctx, session.ID, position, size, buf, wrap, options...)
if err != nil {
err = errors.Wrap(err, "multipart upload failed to upload part")
select {
case errs <- err:
default:
}
return
}
parts[part] = partResponse.Part
}(part, position)
// ready for next block
remaining -= chunkSize
position += chunkSize
}
wg.Wait()
if err == nil {
select {
case err = <-errs:
default:
}
}
if err != nil {
return err
}
// Finalise the upload session
result, err := o.commitUpload(ctx, session.ID, parts, modTime, hash.Sum(nil))
if err != nil {
return errors.Wrap(err, "multipart upload failed to finalize")
}
if result.TotalCount != 1 || len(result.Entries) != 1 {
return errors.Errorf("multipart upload failed %v - not sure why", o)
}
return o.setMetaData(&result.Entries[0])
}

backend/cache/cache.go (1943 lines, vendored): diff suppressed because it is too large

backend/cache/cache_internal_test.go (1632 lines, vendored): diff suppressed because it is too large

backend/cache/cache_mount_other_test.go (21 lines, vendored)

@@ -0,0 +1,21 @@
// +build !linux !go1.13
// +build !darwin !go1.13
// +build !freebsd !go1.13
// +build !windows
// +build !race
package cache_test
import (
"testing"
"github.com/rclone/rclone/fs"
)
func (r *run) mountFs(t *testing.T, f fs.Fs) {
panic("mountFs not defined for this platform")
}
func (r *run) unmountFs(t *testing.T, f fs.Fs) {
panic("unmountFs not defined for this platform")
}

backend/cache/cache_mount_unix_test.go (79 lines, vendored)

@@ -0,0 +1,79 @@
// +build linux,go1.13 darwin,go1.13 freebsd,go1.13
// +build !race
package cache_test
import (
"os"
"testing"
"time"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/rclone/rclone/cmd/mount"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/stretchr/testify/require"
)
func (r *run) mountFs(t *testing.T, f fs.Fs) {
device := f.Name() + ":" + f.Root()
var options = []fuse.MountOption{
fuse.MaxReadahead(uint32(mountlib.MaxReadAhead)),
fuse.Subtype("rclone"),
fuse.FSName(device), fuse.VolumeName(device),
fuse.NoAppleDouble(),
fuse.NoAppleXattr(),
//fuse.AllowOther(),
}
err := os.MkdirAll(r.mntDir, os.ModePerm)
require.NoError(t, err)
c, err := fuse.Mount(r.mntDir, options...)
require.NoError(t, err)
filesys := mount.NewFS(f)
server := fusefs.New(c, nil)
// Serve the mount point in the background returning error to errChan
r.unmountRes = make(chan error, 1)
go func() {
err := server.Serve(filesys)
closeErr := c.Close()
if err == nil {
err = closeErr
}
r.unmountRes <- err
}()
// check if the mount process has an error to report
<-c.Ready
require.NoError(t, c.MountError)
r.unmountFn = func() error {
// Shutdown the VFS
filesys.VFS.Shutdown()
return fuse.Unmount(r.mntDir)
}
r.vfs = filesys.VFS
r.isMounted = true
}
func (r *run) unmountFs(t *testing.T, f fs.Fs) {
var err error
for i := 0; i < 4; i++ {
err = r.unmountFn()
if err != nil {
//log.Printf("signal to umount failed - retrying: %v", err)
time.Sleep(3 * time.Second)
continue
}
break
}
require.NoError(t, err)
err = <-r.unmountRes
require.NoError(t, err)
err = r.vfs.CleanUp()
require.NoError(t, err)
r.isMounted = false
}


@@ -0,0 +1,125 @@
// +build windows
// +build !race
package cache_test
import (
"fmt"
"os"
"testing"
"time"
"github.com/billziss-gh/cgofuse/fuse"
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd/cmount"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/stretchr/testify/require"
)
// waitFor runs fn() until it returns true or the timeout expires
func waitFor(fn func() bool) (ok bool) {
const totalWait = 10 * time.Second
const individualWait = 10 * time.Millisecond
for i := 0; i < int(totalWait/individualWait); i++ {
ok = fn()
if ok {
return ok
}
time.Sleep(individualWait)
}
return false
}
func (r *run) mountFs(t *testing.T, f fs.Fs) {
// FIXME implement cmount
t.Skip("windows not supported yet")
device := f.Name() + ":" + f.Root()
options := []string{
"-o", "fsname=" + device,
"-o", "subtype=rclone",
"-o", fmt.Sprintf("max_readahead=%d", mountlib.MaxReadAhead),
"-o", "uid=-1",
"-o", "gid=-1",
"-o", "allow_other",
// This causes FUSE to supply O_TRUNC with the Open
// call which is more efficient for cmount. However
// it does not work with cgofuse on Windows with
// WinFSP so cmount must work with or without it.
"-o", "atomic_o_trunc",
"--FileSystemName=rclone",
}
fsys := cmount.NewFS(f)
host := fuse.NewFileSystemHost(fsys)
// Serve the mount point in the background returning error to errChan
r.unmountRes = make(chan error, 1)
go func() {
var err error
ok := host.Mount(r.mntDir, options)
if !ok {
err = errors.New("mount failed")
}
r.unmountRes <- err
}()
// unmount
r.unmountFn = func() error {
// Shutdown the VFS
fsys.VFS.Shutdown()
if host.Unmount() {
if !waitFor(func() bool {
_, err := os.Stat(r.mntDir)
return err != nil
}) {
t.Fatalf("mountpoint %q didn't disappear after unmount - continuing anyway", r.mntDir)
}
return nil
}
return errors.New("host unmount failed")
}
// Wait for the filesystem to become ready, checking the file
// system didn't blow up before starting
select {
case err := <-r.unmountRes:
require.NoError(t, err)
case <-time.After(time.Second * 3):
}
// Wait for the mount point to be available on Windows
// On Windows the Init signal comes slightly before the mount is ready
if !waitFor(func() bool {
_, err := os.Stat(r.mntDir)
return err == nil
}) {
t.Errorf("mountpoint %q didn't became available on mount", r.mntDir)
}
r.vfs = fsys.VFS
r.isMounted = true
}
func (r *run) unmountFs(t *testing.T, f fs.Fs) {
// FIXME implement cmount
t.Skip("windows not supported yet")
var err error
for i := 0; i < 4; i++ {
err = r.unmountFn()
if err != nil {
//log.Printf("signal to umount failed - retrying: %v", err)
time.Sleep(3 * time.Second)
continue
}
break
}
require.NoError(t, err)
err = <-r.unmountRes
require.NoError(t, err)
err = r.vfs.CleanUp()
require.NoError(t, err)
r.isMounted = false
}

backend/cache/cache_test.go (25 lines, vendored)

@@ -0,0 +1,25 @@
// Test Cache filesystem interface
// +build !plan9
// +build !race
package cache_test
import (
"testing"
"github.com/rclone/rclone/backend/cache"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier"},
SkipInvalidUTF8: true, // invalid UTF-8 confuses the cache
})
}

backend/cache/cache_unsupported.go (6 lines, vendored)

@@ -0,0 +1,6 @@
// Build stub for the cache backend on unsupported platforms to stop go
// complaining about "no buildable Go source files"
// +build plan9
package cache

backend/cache/cache_upload_test.go (456 lines, vendored)

@@ -0,0 +1,456 @@
// +build !plan9
// +build !race
package cache_test
import (
"context"
"fmt"
"math/rand"
"os"
"path"
"strconv"
"testing"
"time"
"github.com/rclone/rclone/backend/cache"
_ "github.com/rclone/rclone/backend/drive"
"github.com/rclone/rclone/fs"
"github.com/stretchr/testify/require"
)
func TestInternalUploadTempDirCreated(t *testing.T) {
id := fmt.Sprintf("tiutdc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id)})
defer runInstance.cleanupFs(t, rootFs, boltDb)
_, err := os.Stat(path.Join(runInstance.tmpUploadDir, id))
require.NoError(t, err)
}
func testInternalUploadQueueOneFile(t *testing.T, id string, rootFs fs.Fs, boltDb *cache.Persistent) {
// create some rand test data
testSize := int64(524288000)
testReader := runInstance.randomReader(t, testSize)
bu := runInstance.listenForBackgroundUpload(t, rootFs, "one")
runInstance.writeRemoteReader(t, rootFs, "one", testReader)
// validate that it exists in temp fs
ti, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.NoError(t, err)
if runInstance.rootIsCrypt {
require.Equal(t, int64(524416032), ti.Size())
} else {
require.Equal(t, testSize, ti.Size())
}
de1, err := runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Len(t, de1, 1)
runInstance.completeBackgroundUpload(t, "one", bu)
// check if it was removed from temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.True(t, os.IsNotExist(err))
// check if it can be read
data2, err := runInstance.readDataFromRemote(t, rootFs, "one", 0, int64(1024), false)
require.NoError(t, err)
require.Len(t, data2, 1024)
}
func TestInternalUploadQueueOneFileNoRest(t *testing.T) {
id := fmt.Sprintf("tiuqofnr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "0s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
func TestInternalUploadQueueOneFileWithRest(t *testing.T) {
id := fmt.Sprintf("tiuqofwr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
func TestInternalUploadMoveExistingFile(t *testing.T) {
id := fmt.Sprintf("tiumef%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "3s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir(context.Background(), "one")
require.NoError(t, err)
err = rootFs.Mkdir(context.Background(), "one/test")
require.NoError(t, err)
err = rootFs.Mkdir(context.Background(), "second")
require.NoError(t, err)
// create some rand test data
testSize := int64(10485760)
testReader := runInstance.randomReader(t, testSize)
runInstance.writeObjectReader(t, rootFs, "one/test/data.bin", testReader)
runInstance.completeAllBackgroundUploads(t, rootFs, "one/test/data.bin")
de1, err := runInstance.list(t, rootFs, "one/test")
require.NoError(t, err)
require.Len(t, de1, 1)
time.Sleep(time.Second * 5)
//_ = os.Remove(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one/test")))
//require.NoError(t, err)
err = runInstance.dirMove(t, rootFs, "one/test", "second/test")
require.NoError(t, err)
// check if it can be read
de1, err = runInstance.list(t, rootFs, "second/test")
require.NoError(t, err)
require.Len(t, de1, 1)
}
func TestInternalUploadTempPathCleaned(t *testing.T) {
id := fmt.Sprintf("tiutpc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "5s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir(context.Background(), "one")
require.NoError(t, err)
err = rootFs.Mkdir(context.Background(), "one/test")
require.NoError(t, err)
err = rootFs.Mkdir(context.Background(), "second")
require.NoError(t, err)
// create some rand test data
testSize := int64(1048576)
testReader := runInstance.randomReader(t, testSize)
testReader2 := runInstance.randomReader(t, testSize)
runInstance.writeObjectReader(t, rootFs, "one/test/data.bin", testReader)
runInstance.writeObjectReader(t, rootFs, "second/data.bin", testReader2)
runInstance.completeAllBackgroundUploads(t, rootFs, "one/test/data.bin")
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one/test")))
require.True(t, os.IsNotExist(err))
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "one")))
require.True(t, os.IsNotExist(err))
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second")))
require.False(t, os.IsNotExist(err))
runInstance.completeAllBackgroundUploads(t, rootFs, "second/data.bin")
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/data.bin")))
require.True(t, os.IsNotExist(err))
de1, err := runInstance.list(t, rootFs, "one/test")
require.NoError(t, err)
require.Len(t, de1, 1)
// check if it can be read
de1, err = runInstance.list(t, rootFs, "second")
require.NoError(t, err)
require.Len(t, de1, 1)
}
func TestInternalUploadQueueMoreFiles(t *testing.T) {
id := fmt.Sprintf("tiuqmf%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir(context.Background(), "test")
require.NoError(t, err)
minSize := 5242880
maxSize := 10485760
totalFiles := 10
rand.Seed(time.Now().Unix())
lastFile := ""
for i := 0; i < totalFiles; i++ {
size := int64(rand.Intn(maxSize-minSize) + minSize)
testReader := runInstance.randomReader(t, size)
remote := "test/" + strconv.Itoa(i) + ".bin"
runInstance.writeRemoteReader(t, rootFs, remote, testReader)
// validate that it exists in temp fs
ti, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, remote)))
require.NoError(t, err)
require.Equal(t, size, runInstance.cleanSize(t, ti.Size()))
if runInstance.wrappedIsExternal && i < totalFiles-1 {
time.Sleep(time.Second * 3)
}
lastFile = remote
}
// check if cache lists all files, likely temp upload didn't finish yet
de1, err := runInstance.list(t, rootFs, "test")
require.NoError(t, err)
require.Len(t, de1, totalFiles)
// wait for background uploader to do its thing
runInstance.completeAllBackgroundUploads(t, rootFs, lastFile)
// retry until we have no more temp files and fail if they don't go down to 0
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test")))
require.True(t, os.IsNotExist(err))
// check if cache lists all files
de1, err = runInstance.list(t, rootFs, "test")
require.NoError(t, err)
require.Len(t, de1, totalFiles)
}
func TestInternalUploadTempFileOperations(t *testing.T) {
id := "tiutfo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
// create some rand test data
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// check if it can be read
data1, err := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data1)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
// test DirMove - allowed
err = runInstance.dirMove(t, rootFs, "test", "second")
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject(context.Background(), "test/one")
require.Error(t, err)
_, err = rootFs.NewObject(context.Background(), "second/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/one")))
require.NoError(t, err)
_, err = boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "test/one")))
require.Error(t, err)
var started bool
started, err = boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "second/one")))
require.NoError(t, err)
require.False(t, started)
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
}
// test Rmdir - allowed
err = runInstance.rm(t, rootFs, "test")
require.Error(t, err)
require.Contains(t, err.Error(), "directory not empty")
_, err = rootFs.NewObject(context.Background(), "test/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
started, err := boltDb.SearchPendingUpload(runInstance.encryptRemoteIfNeeded(t, path.Join(id, "test/one")))
require.False(t, started)
require.NoError(t, err)
// test Move/Rename -- allowed
err = runInstance.move(t, rootFs, path.Join("test", "one"), path.Join("test", "second"))
if err != errNotSupported {
require.NoError(t, err)
// try to read from it
_, err = rootFs.NewObject(context.Background(), "test/one")
require.Error(t, err)
_, err = rootFs.NewObject(context.Background(), "test/second")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/second", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/second")))
require.NoError(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
}
// test Copy -- allowed
err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third"))
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject(context.Background(), "test/one")
require.NoError(t, err)
_, err = rootFs.NewObject(context.Background(), "test/third")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/third")))
require.NoError(t, err)
}
// test Remove -- allowed
err = runInstance.rm(t, rootFs, "test/one")
require.NoError(t, err)
_, err = rootFs.NewObject(context.Background(), "test/one")
require.Error(t, err)
// validate that it doesn't exist in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.Error(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// test Update -- allowed
firstModTime, err := runInstance.modTime(t, rootFs, "test/one")
require.NoError(t, err)
err = runInstance.updateData(t, rootFs, "test/one", "one content", " updated")
require.NoError(t, err)
obj2, err := rootFs.NewObject(context.Background(), "test/one")
require.NoError(t, err)
data2 := runInstance.readDataFromObj(t, obj2, 0, int64(len("one content updated")), false)
require.Equal(t, "one content updated", string(data2))
tmpInfo, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
if runInstance.rootIsCrypt {
require.Equal(t, int64(67), tmpInfo.Size())
} else {
require.Equal(t, int64(len(data2)), tmpInfo.Size())
}
// test SetModTime -- allowed
secondModTime, err := runInstance.modTime(t, rootFs, "test/one")
require.NoError(t, err)
require.NotEqual(t, secondModTime, firstModTime)
require.NotEqual(t, time.Time{}, firstModTime)
require.NotEqual(t, time.Time{}, secondModTime)
}
func TestInternalUploadUploadingFileOperations(t *testing.T) {
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
// create some random test data
runInstance.mkdir(t, rootFs, "test")
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// check if it can be read
data1, err := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data1)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
err = boltDb.SetPendingUploadToStarted(runInstance.encryptRemoteIfNeeded(t, path.Join(rootFs.Root(), "test/one")))
require.NoError(t, err)
// test DirMove
err = runInstance.dirMove(t, rootFs, "test", "second")
if err != errNotSupported {
require.Error(t, err)
_, err = rootFs.NewObject(context.Background(), "test/one")
require.NoError(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "second/one")))
require.Error(t, err)
}
// test Rmdir
err = runInstance.rm(t, rootFs, "test")
require.Error(t, err)
_, err = rootFs.NewObject(context.Background(), "test/one")
require.NoError(t, err)
// validate that it still exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
// test Move/Rename
err = runInstance.move(t, rootFs, path.Join("test", "one"), path.Join("test", "second"))
if err != errNotSupported {
require.Error(t, err)
// try to read from it
_, err = rootFs.NewObject(context.Background(), "test/one")
require.NoError(t, err)
_, err = rootFs.NewObject(context.Background(), "test/second")
require.Error(t, err)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/second")))
require.Error(t, err)
}
// test Copy -- allowed
err = runInstance.copy(t, rootFs, path.Join("test", "one"), path.Join("test", "third"))
if err != errNotSupported {
require.NoError(t, err)
_, err = rootFs.NewObject(context.Background(), "test/one")
require.NoError(t, err)
_, err = rootFs.NewObject(context.Background(), "test/third")
require.NoError(t, err)
data2, err := runInstance.readDataFromRemote(t, rootFs, "test/third", 0, int64(len([]byte("one content"))), false)
require.NoError(t, err)
require.Equal(t, []byte("one content"), data2)
// validate that it exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/third")))
require.NoError(t, err)
}
// test Remove
err = runInstance.rm(t, rootFs, "test/one")
require.Error(t, err)
_, err = rootFs.NewObject(context.Background(), "test/one")
require.NoError(t, err)
// validate that it still exists in temp fs
_, err = os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
require.NoError(t, err)
runInstance.writeRemoteString(t, rootFs, "test/one", "one content")
// test Update - this seems to work. Why? FIXME
//firstModTime, err := runInstance.modTime(t, rootFs, "test/one")
//require.NoError(t, err)
//err = runInstance.updateData(t, rootFs, "test/one", "one content", " updated", func() {
// data2 := runInstance.readDataFromRemote(t, rootFs, "test/one", 0, int64(len("one content updated")), true)
// require.Equal(t, "one content", string(data2))
//
// tmpInfo, err := os.Stat(path.Join(runInstance.tmpUploadDir, id, runInstance.encryptRemoteIfNeeded(t, "test/one")))
// require.NoError(t, err)
// if runInstance.rootIsCrypt {
// require.Equal(t, int64(67), tmpInfo.Size())
// } else {
// require.Equal(t, int64(len(data2)), tmpInfo.Size())
// }
//})
//require.Error(t, err)
// test SetModTime -- seems to work because of the previous step
//secondModTime, err := runInstance.modTime(t, rootFs, "test/one")
//require.NoError(t, err)
//require.Equal(t, secondModTime, firstModTime)
//require.NotEqual(t, time.Time{}, firstModTime)
//require.NotEqual(t, time.Time{}, secondModTime)
}
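For reference, the temp upload behaviour exercised by the two tests above is driven entirely by the backend options passed to newCacheFs. A minimal sketch of those settings (option names taken from the tests; tmpUploadDir and id stand in for the test run's values):

opts := map[string]string{
	"tmp_upload_path": path.Join(tmpUploadDir, id), // queue writes on local disk first
	"tmp_wait_time":   "1h",                        // hold uploads long enough for the tests to inspect the queue
}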

129
backend/cache/directory.go vendored Normal file

@@ -0,0 +1,129 @@
// +build !plan9
package cache
import (
"context"
"path"
"time"
"github.com/rclone/rclone/fs"
)
// Directory is a generic dir that stores basic information about it
type Directory struct {
Directory fs.Directory `json:"-"` // can be nil
CacheFs *Fs `json:"-"` // cache fs
Name string `json:"name"` // name of the directory
Dir string `json:"dir"` // abs path of the directory
CacheModTime int64 `json:"modTime"` // modification or creation time - IsZero for unknown
CacheSize int64 `json:"size"` // size of directory and contents or -1 if unknown
CacheItems int64 `json:"items"` // number of objects or -1 for unknown
CacheType string `json:"cacheType"` // object type
CacheTs *time.Time `json:",omitempty"`
}
// NewDirectory builds an empty dir which will be used to unmarshal data in it
func NewDirectory(f *Fs, remote string) *Directory {
cd := ShallowDirectory(f, remote)
t := time.Now()
cd.CacheTs = &t
return cd
}
// ShallowDirectory builds an empty dir (without a cache timestamp) which will be used to unmarshal data into it
func ShallowDirectory(f *Fs, remote string) *Directory {
var cd *Directory
fullRemote := cleanPath(path.Join(f.Root(), remote))
// build a new one
dir := cleanPath(path.Dir(fullRemote))
name := cleanPath(path.Base(fullRemote))
cd = &Directory{
CacheFs: f,
Name: name,
Dir: dir,
CacheModTime: time.Now().UnixNano(),
CacheSize: 0,
CacheItems: 0,
CacheType: "Directory",
}
return cd
}
// DirectoryFromOriginal builds one from a generic fs.Directory
func DirectoryFromOriginal(ctx context.Context, f *Fs, d fs.Directory) *Directory {
var cd *Directory
fullRemote := path.Join(f.Root(), d.Remote())
dir := cleanPath(path.Dir(fullRemote))
name := cleanPath(path.Base(fullRemote))
t := time.Now()
cd = &Directory{
Directory: d,
CacheFs: f,
Name: name,
Dir: dir,
CacheModTime: d.ModTime(ctx).UnixNano(),
CacheSize: d.Size(),
CacheItems: d.Items(),
CacheType: "Directory",
CacheTs: &t,
}
return cd
}
// Fs returns its FS info
func (d *Directory) Fs() fs.Info {
return d.CacheFs
}
// String returns a human-friendly name for this directory
func (d *Directory) String() string {
if d == nil {
return "<nil>"
}
return d.Remote()
}
// Remote returns the remote path
func (d *Directory) Remote() string {
return d.CacheFs.cleanRootFromPath(d.abs())
}
// abs returns the absolute path to the dir
func (d *Directory) abs() string {
return cleanPath(path.Join(d.Dir, d.Name))
}
// ModTime returns the cached ModTime
func (d *Directory) ModTime(ctx context.Context) time.Time {
return time.Unix(0, d.CacheModTime)
}
// Size returns the cached Size
func (d *Directory) Size() int64 {
return d.CacheSize
}
// Items returns the cached Items
func (d *Directory) Items() int64 {
return d.CacheItems
}
// ID returns the ID of the cached directory if known
func (d *Directory) ID() string {
if d.Directory == nil {
return ""
}
return d.Directory.ID()
}
var (
_ fs.Directory = (*Directory)(nil)
)
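To make the cached directory type concrete, a minimal sketch (inside the cache package; f is assumed to be an initialised *Fs, ctx a context.Context and wrappedDir an fs.Directory listed from the wrapped remote):

// convert an entry from the wrapped remote into a cacheable Directory
cd := DirectoryFromOriginal(ctx, f, wrappedDir)
// the cached metadata is then served without touching the wrapped remote again
fs.Debugf(cd, "modtime=%v size=%v items=%v", cd.ModTime(ctx), cd.Size(), cd.Items())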

638
backend/cache/handle.go vendored Normal file

@@ -0,0 +1,638 @@
// +build !plan9
package cache
import (
"context"
"fmt"
"io"
"path"
"runtime"
"strings"
"sync"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/operations"
)
var uploaderMap = make(map[string]*backgroundWriter)
var uploaderMapMx sync.Mutex
// initBackgroundUploader returns a single instance
func initBackgroundUploader(fs *Fs) (*backgroundWriter, error) {
// write lock to create one
uploaderMapMx.Lock()
defer uploaderMapMx.Unlock()
if b, ok := uploaderMap[fs.String()]; ok {
// if it was already started we close it so that it can be started again
if b.running {
b.close()
} else {
return b, nil
}
}
bb := newBackgroundWriter(fs)
uploaderMap[fs.String()] = bb
return uploaderMap[fs.String()], nil
}
// Handle manages the read/seek operations on an open handle
type Handle struct {
ctx context.Context
cachedObject *Object
cfs *Fs
memory *Memory
preloadQueue chan int64
preloadOffset int64
offset int64
seenOffsets map[int64]bool
mu sync.Mutex
workersWg sync.WaitGroup
confirmReading chan bool
workers int
maxWorkerID int
UseMemory bool
closed bool
reading bool
}
// NewObjectHandle returns a new Handle for an existing Object
func NewObjectHandle(ctx context.Context, o *Object, cfs *Fs) *Handle {
r := &Handle{
ctx: ctx,
cachedObject: o,
cfs: cfs,
offset: 0,
preloadOffset: -1, // -1 to trigger the first preload
UseMemory: !cfs.opt.ChunkNoMemory,
reading: false,
}
r.seenOffsets = make(map[int64]bool)
r.memory = NewMemory(-1)
// create a larger buffer to queue up requests
r.preloadQueue = make(chan int64, r.cfs.opt.TotalWorkers*10)
r.confirmReading = make(chan bool)
r.startReadWorkers()
return r
}
// cacheFs is a convenience method to get the parent cache FS of the object's manager
func (r *Handle) cacheFs() *Fs {
return r.cfs
}
// storage is a convenience method to get the persistent storage of the object's manager
func (r *Handle) storage() *Persistent {
return r.cacheFs().cache
}
// String representation of this reader
func (r *Handle) String() string {
return r.cachedObject.abs()
}
// startReadWorkers will start the worker pool
func (r *Handle) startReadWorkers() {
if r.workers > 0 {
return
}
totalWorkers := r.cacheFs().opt.TotalWorkers
if r.cacheFs().plexConnector.isConfigured() {
if !r.cacheFs().plexConnector.isConnected() {
err := r.cacheFs().plexConnector.authenticate()
if err != nil {
fs.Errorf(r, "failed to authenticate to Plex: %v", err)
}
}
if r.cacheFs().plexConnector.isConnected() {
totalWorkers = 1
}
}
r.scaleWorkers(totalWorkers)
}
// scaleWorkers will scale the worker pool up or down to the desired count
func (r *Handle) scaleWorkers(desired int) {
current := r.workers
if current == desired {
return
}
if current > desired {
// scale in gracefully
for r.workers > desired {
r.preloadQueue <- -1
r.workers--
}
} else {
// scale out
for r.workers < desired {
w := &worker{
r: r,
id: r.maxWorkerID,
}
r.workersWg.Add(1)
r.workers++
r.maxWorkerID++
go w.run()
}
}
// ignore first scale out from 0
if current != 0 {
fs.Debugf(r, "scale workers to %v", desired)
}
}
func (r *Handle) confirmExternalReading() {
// if we have already scaled beyond one worker, or Plex isn't configured,
// then we skip this step
if r.workers > 1 ||
!r.cacheFs().plexConnector.isConfigured() {
return
}
if !r.cacheFs().plexConnector.isPlaying(r.cachedObject) {
return
}
fs.Infof(r, "confirmed reading by external reader")
r.scaleWorkers(r.cacheFs().opt.TotalWorkers)
}
// queueOffset will send an offset to the workers if it's different from the last one
func (r *Handle) queueOffset(offset int64) {
if offset != r.preloadOffset {
// clean past in-memory chunks
if r.UseMemory {
go r.memory.CleanChunksByNeed(offset)
}
r.confirmExternalReading()
r.preloadOffset = offset
// clear the past seen chunks
// they will remain in our persistent storage but will be removed from transient
// so they need to be picked up by a worker
for k := range r.seenOffsets {
if k < offset {
r.seenOffsets[k] = false
}
}
for i := 0; i < r.workers; i++ {
o := r.preloadOffset + int64(r.cacheFs().opt.ChunkSize)*int64(i)
if o < 0 || o >= r.cachedObject.Size() {
continue
}
if v, ok := r.seenOffsets[o]; ok && v {
continue
}
r.seenOffsets[o] = true
r.preloadQueue <- o
}
}
}
// getChunk is called by the FS to retrieve a specific chunk of known start and size from wherever it can find it
// it can come from the transient or the persistent cache
// it aligns the request to the cache's chunk boundaries and returns the final desired chunk from a buffer
func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
var data []byte
var err error
// we calculate the modulus of the requested offset with the size of a chunk
offset := chunkStart % int64(r.cacheFs().opt.ChunkSize)
// we align the start offset of the first chunk to a likely chunk in the storage
chunkStart = chunkStart - offset
r.queueOffset(chunkStart)
found := false
if r.UseMemory {
data, err = r.memory.GetChunk(r.cachedObject, chunkStart)
if err == nil {
found = true
}
}
if !found {
// give the workers a chance to pick up the chunk
// and retry a couple of times
for i := 0; i < r.cacheFs().opt.ReadRetries*8; i++ {
data, err = r.storage().GetChunk(r.cachedObject, chunkStart)
if err == nil {
found = true
break
}
fs.Debugf(r, "%v: chunk retry storage: %v", chunkStart, i)
time.Sleep(time.Millisecond * 500)
}
}
// not found in RAM or
// the worker didn't manage to download the chunk in time so we abort and close the stream
if err != nil || len(data) == 0 || !found {
if r.workers == 0 {
fs.Errorf(r, "out of workers")
return nil, io.ErrUnexpectedEOF
}
return nil, errors.Errorf("chunk not found %v", chunkStart)
}
// first chunk will be aligned with the start
if offset > 0 {
if offset > int64(len(data)) {
fs.Errorf(r, "unexpected conditions during reading. current position: %v, current chunk position: %v, current chunk size: %v, offset: %v, chunk size: %v, file size: %v",
r.offset, chunkStart, len(data), offset, r.cacheFs().opt.ChunkSize, r.cachedObject.Size())
return nil, io.ErrUnexpectedEOF
}
data = data[int(offset):]
}
return data, nil
}
// Read will return at most len(p) bytes from the chunk at the current offset
func (r *Handle) Read(p []byte) (n int, err error) {
r.mu.Lock()
defer r.mu.Unlock()
var buf []byte
// first reading
if !r.reading {
r.reading = true
}
// reached EOF
if r.offset >= r.cachedObject.Size() {
return 0, io.EOF
}
currentOffset := r.offset
buf, err = r.getChunk(currentOffset)
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
fs.Errorf(r, "(%v/%v) error (%v) response", currentOffset, r.cachedObject.Size(), err)
}
if len(buf) == 0 && err != io.ErrUnexpectedEOF {
return 0, io.EOF
}
readSize := copy(p, buf)
newOffset := currentOffset + int64(readSize)
r.offset = newOffset
return readSize, err
}
// Close will tell the workers to stop
func (r *Handle) Close() error {
r.mu.Lock()
defer r.mu.Unlock()
if r.closed {
return errors.New("file already closed")
}
close(r.preloadQueue)
r.closed = true
// wait for workers to complete their jobs before returning
r.workersWg.Wait()
r.memory.db.Flush()
fs.Debugf(r, "cache reader closed %v", r.offset)
return nil
}
// Seek will move the current offset based on whence and instruct the workers to move there too
func (r *Handle) Seek(offset int64, whence int) (int64, error) {
r.mu.Lock()
defer r.mu.Unlock()
var err error
switch whence {
case io.SeekStart:
fs.Debugf(r, "moving offset set from %v to %v", r.offset, offset)
r.offset = offset
case io.SeekCurrent:
fs.Debugf(r, "moving offset cur from %v to %v", r.offset, r.offset+offset)
r.offset += offset
case io.SeekEnd:
fs.Debugf(r, "moving offset end (%v) from %v to %v", r.cachedObject.Size(), r.offset, r.cachedObject.Size()+offset)
r.offset = r.cachedObject.Size() + offset
default:
err = errors.Errorf("cache: unimplemented seek whence %v", whence)
}
chunkStart := r.offset - (r.offset % int64(r.cacheFs().opt.ChunkSize))
if chunkStart >= int64(r.cacheFs().opt.ChunkSize) {
chunkStart = chunkStart - int64(r.cacheFs().opt.ChunkSize)
}
r.queueOffset(chunkStart)
return r.offset, err
}
type worker struct {
r *Handle
rc io.ReadCloser
id int
}
// String is a representation of this worker
func (w *worker) String() string {
return fmt.Sprintf("worker-%v <%v>", w.id, w.r.cachedObject.Name)
}
// reader will return a reader depending on the capabilities of the source reader:
// - if it supports seeking it will seek to the desired offset and return the same reader
// - if it doesn't support seeking it will close a possible existing one and open at the desired offset
// - if there's no reader associated with this worker, it will create one
func (w *worker) reader(offset, end int64, closeOpen bool) (io.ReadCloser, error) {
var err error
r := w.rc
if w.rc == nil {
r, err = w.r.cacheFs().openRateLimited(func() (io.ReadCloser, error) {
return w.r.cachedObject.Object.Open(w.r.ctx, &fs.RangeOption{Start: offset, End: end - 1})
})
if err != nil {
return nil, err
}
return r, nil
}
if !closeOpen {
if do, ok := r.(fs.RangeSeeker); ok {
_, err = do.RangeSeek(w.r.ctx, offset, io.SeekStart, end-offset)
return r, err
} else if do, ok := r.(io.Seeker); ok {
_, err = do.Seek(offset, io.SeekStart)
return r, err
}
}
_ = w.rc.Close()
return w.r.cacheFs().openRateLimited(func() (io.ReadCloser, error) {
r, err = w.r.cachedObject.Object.Open(w.r.ctx, &fs.RangeOption{Start: offset, End: end - 1})
if err != nil {
return nil, err
}
return r, nil
})
}
// run is the main loop for the worker which receives offsets to preload
func (w *worker) run() {
var err error
var data []byte
defer func() {
if w.rc != nil {
_ = w.rc.Close()
}
w.r.workersWg.Done()
}()
for {
chunkStart, open := <-w.r.preloadQueue
if chunkStart < 0 || !open {
break
}
// skip if it exists
if w.r.UseMemory {
if w.r.memory.HasChunk(w.r.cachedObject, chunkStart) {
continue
}
// add it in ram if it's in the persistent storage
data, err = w.r.storage().GetChunk(w.r.cachedObject, chunkStart)
if err == nil {
err = w.r.memory.AddChunk(w.r.cachedObject.abs(), data, chunkStart)
if err != nil {
fs.Errorf(w, "failed caching chunk in ram %v: %v", chunkStart, err)
} else {
continue
}
}
} else {
if w.r.storage().HasChunk(w.r.cachedObject, chunkStart) {
continue
}
}
chunkEnd := chunkStart + int64(w.r.cacheFs().opt.ChunkSize)
// TODO: Remove this comment if it proves to be reliable for #1896
//if chunkEnd > w.r.cachedObject.Size() {
// chunkEnd = w.r.cachedObject.Size()
//}
w.download(chunkStart, chunkEnd, 0)
}
}
func (w *worker) download(chunkStart, chunkEnd int64, retry int) {
var err error
var data []byte
// stop retries
if retry >= w.r.cacheFs().opt.ReadRetries {
return
}
// back-off between retries
if retry > 0 {
time.Sleep(time.Second * time.Duration(retry))
}
closeOpen := false
if retry > 0 {
closeOpen = true
}
w.rc, err = w.reader(chunkStart, chunkEnd, closeOpen)
// the open failed - refresh the object from source and retry
if err != nil {
fs.Errorf(w, "object open failed %v: %v", chunkStart, err)
err = w.r.cachedObject.refreshFromSource(w.r.ctx, true)
if err != nil {
fs.Errorf(w, "%v", err)
}
w.download(chunkStart, chunkEnd, retry+1)
return
}
data = make([]byte, chunkEnd-chunkStart)
var sourceRead int
sourceRead, err = io.ReadFull(w.rc, data)
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
fs.Errorf(w, "failed to read chunk %v: %v", chunkStart, err)
err = w.r.cachedObject.refreshFromSource(w.r.ctx, true)
if err != nil {
fs.Errorf(w, "%v", err)
}
w.download(chunkStart, chunkEnd, retry+1)
return
}
data = data[:sourceRead] // reslice to remove extra garbage
if err == io.ErrUnexpectedEOF {
fs.Debugf(w, "partial downloaded chunk %v", fs.SizeSuffix(chunkStart))
} else {
fs.Debugf(w, "downloaded chunk %v", chunkStart)
}
if w.r.UseMemory {
err = w.r.memory.AddChunk(w.r.cachedObject.abs(), data, chunkStart)
if err != nil {
fs.Errorf(w, "failed caching chunk in ram %v: %v", chunkStart, err)
}
}
err = w.r.storage().AddChunk(w.r.cachedObject.abs(), data, chunkStart)
if err != nil {
fs.Errorf(w, "failed caching chunk in storage %v: %v", chunkStart, err)
}
}
const (
// BackgroundUploadStarted is a state for a temp file that has started upload
BackgroundUploadStarted = iota
// BackgroundUploadCompleted is a state for a temp file that has completed upload
BackgroundUploadCompleted
// BackgroundUploadError is a state for a temp file whose upload errored
BackgroundUploadError
)
// BackgroundUploadState is an entity that maps to an existing file which is stored on the temp fs
type BackgroundUploadState struct {
Remote string
Status int
Error error
}
type backgroundWriter struct {
fs *Fs
stateCh chan int
running bool
notifyCh chan BackgroundUploadState
mu sync.Mutex
}
func newBackgroundWriter(f *Fs) *backgroundWriter {
b := &backgroundWriter{
fs: f,
stateCh: make(chan int),
notifyCh: make(chan BackgroundUploadState),
}
return b
}
func (b *backgroundWriter) close() {
b.stateCh <- 2
b.mu.Lock()
defer b.mu.Unlock()
b.running = false
}
func (b *backgroundWriter) pause() {
b.stateCh <- 1
}
func (b *backgroundWriter) play() {
b.stateCh <- 0
}
func (b *backgroundWriter) isRunning() bool {
b.mu.Lock()
defer b.mu.Unlock()
return b.running
}
func (b *backgroundWriter) notify(remote string, status int, err error) {
state := BackgroundUploadState{
Remote: remote,
Status: status,
Error: err,
}
select {
case b.notifyCh <- state:
fs.Debugf(remote, "notified background upload state: %v", state.Status)
default:
}
}
func (b *backgroundWriter) run() {
state := 0
for {
b.mu.Lock()
b.running = true
b.mu.Unlock()
select {
case s := <-b.stateCh:
state = s
default:
//
}
switch state {
case 1:
runtime.Gosched()
time.Sleep(time.Millisecond * 500)
continue
case 2:
return
}
absPath, err := b.fs.cache.getPendingUpload(b.fs.Root(), time.Duration(b.fs.opt.TempWaitTime))
if err != nil || absPath == "" || !b.fs.isRootInPath(absPath) {
time.Sleep(time.Second)
continue
}
remote := b.fs.cleanRootFromPath(absPath)
b.notify(remote, BackgroundUploadStarted, nil)
fs.Infof(remote, "background upload: started upload")
err = operations.MoveFile(context.TODO(), b.fs.UnWrap(), b.fs.tempFs, remote, remote)
if err != nil {
b.notify(remote, BackgroundUploadError, err)
_ = b.fs.cache.rollbackPendingUpload(absPath)
fs.Errorf(remote, "background upload: %v", err)
continue
}
// clean empty dirs up to root
thisDir := cleanPath(path.Dir(remote))
for thisDir != "" {
thisList, err := b.fs.tempFs.List(context.TODO(), thisDir)
if err != nil {
break
}
if len(thisList) > 0 {
break
}
err = b.fs.tempFs.Rmdir(context.TODO(), thisDir)
fs.Debugf(thisDir, "cleaned from temp path")
if err != nil {
break
}
thisDir = cleanPath(path.Dir(thisDir))
}
fs.Infof(remote, "background upload: uploaded entry")
err = b.fs.cache.removePendingUpload(absPath)
if err != nil && !strings.Contains(err.Error(), "pending upload not found") {
fs.Errorf(remote, "background upload: %v", err)
}
parentCd := NewDirectory(b.fs, cleanPath(path.Dir(remote)))
err = b.fs.cache.ExpireDir(parentCd)
if err != nil {
fs.Errorf(parentCd, "background upload: cache expire error: %v", err)
}
b.fs.notifyChangeUpstream(remote, fs.EntryObject)
fs.Infof(remote, "finished background upload")
b.notify(remote, BackgroundUploadCompleted, nil)
}
}
// Check the interfaces are satisfied
var (
_ io.ReadCloser = (*Handle)(nil)
_ io.Seeker = (*Handle)(nil)
)
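As orientation for how the pieces above fit together, a minimal read sketch (inside the cache package; o is assumed to be a cached *Object, ctx a context.Context and consume a hypothetical callback):

h := NewObjectHandle(ctx, o, o.CacheFs) // starts the preload workers
buf := make([]byte, 128*1024)
for {
	n, err := h.Read(buf) // served from RAM or the persistent cache, downloaded by workers otherwise
	if n > 0 {
		consume(buf[:n])
	}
	if err != nil { // io.EOF once the cached object is exhausted
		break
	}
}
_ = h.Close() // stops the workers and clears the transient memory store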

380
backend/cache/object.go vendored Normal file

@@ -0,0 +1,380 @@
// +build !plan9
package cache
import (
"context"
"io"
"path"
"sync"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/readers"
)
const (
objectInCache = "Object"
objectPendingUpload = "TempObject"
)
// Object is a generic file-like object that stores basic information about it
type Object struct {
fs.Object `json:"-"`
ParentFs fs.Fs `json:"-"` // parent fs
CacheFs *Fs `json:"-"` // cache fs
Name string `json:"name"` // name of the object
Dir string `json:"dir"` // abs path of the parent directory
CacheModTime int64 `json:"modTime"` // modification or creation time - IsZero for unknown
CacheSize int64 `json:"size"` // size of the object or -1 if unknown
CacheStorable bool `json:"storable"` // says whether this object can be stored
CacheType string `json:"cacheType"`
CacheTs time.Time `json:"cacheTs"`
cacheHashesMu sync.Mutex
CacheHashes map[hash.Type]string // all supported hashes cached
refreshMutex sync.Mutex
}
// NewObject builds one from a generic fs.Object
func NewObject(f *Fs, remote string) *Object {
fullRemote := path.Join(f.Root(), remote)
dir, name := path.Split(fullRemote)
cacheType := objectInCache
parentFs := f.UnWrap()
if f.opt.TempWritePath != "" {
_, err := f.cache.SearchPendingUpload(fullRemote)
if err == nil { // queued for upload
cacheType = objectPendingUpload
parentFs = f.tempFs
fs.Debugf(fullRemote, "pending upload found")
}
}
co := &Object{
ParentFs: parentFs,
CacheFs: f,
Name: cleanPath(name),
Dir: cleanPath(dir),
CacheModTime: time.Now().UnixNano(),
CacheSize: 0,
CacheStorable: false,
CacheType: cacheType,
CacheTs: time.Now(),
}
return co
}
// ObjectFromOriginal builds one from a generic fs.Object
func ObjectFromOriginal(ctx context.Context, f *Fs, o fs.Object) *Object {
var co *Object
fullRemote := cleanPath(path.Join(f.Root(), o.Remote()))
dir, name := path.Split(fullRemote)
cacheType := objectInCache
parentFs := f.UnWrap()
if f.opt.TempWritePath != "" {
_, err := f.cache.SearchPendingUpload(fullRemote)
if err == nil { // queued for upload
cacheType = objectPendingUpload
parentFs = f.tempFs
fs.Debugf(fullRemote, "pending upload found")
}
}
co = &Object{
ParentFs: parentFs,
CacheFs: f,
Name: cleanPath(name),
Dir: cleanPath(dir),
CacheType: cacheType,
CacheTs: time.Now(),
}
co.updateData(ctx, o)
return co
}
func (o *Object) updateData(ctx context.Context, source fs.Object) {
o.Object = source
o.CacheModTime = source.ModTime(ctx).UnixNano()
o.CacheSize = source.Size()
o.CacheStorable = source.Storable()
o.CacheTs = time.Now()
o.cacheHashesMu.Lock()
o.CacheHashes = make(map[hash.Type]string)
o.cacheHashesMu.Unlock()
}
// Fs returns its FS info
func (o *Object) Fs() fs.Info {
return o.CacheFs
}
// String returns a human friendly name for this object
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.Remote()
}
// Remote returns the remote path
func (o *Object) Remote() string {
p := path.Join(o.Dir, o.Name)
return o.CacheFs.cleanRootFromPath(p)
}
// abs returns the absolute path to the object
func (o *Object) abs() string {
return path.Join(o.Dir, o.Name)
}
// ModTime returns the cached ModTime
func (o *Object) ModTime(ctx context.Context) time.Time {
_ = o.refresh(ctx)
return time.Unix(0, o.CacheModTime)
}
// Size returns the cached Size
func (o *Object) Size() int64 {
_ = o.refresh(context.TODO())
return o.CacheSize
}
// Storable returns the cached Storable
func (o *Object) Storable() bool {
_ = o.refresh(context.TODO())
return o.CacheStorable
}
// refresh will check if the object info is expired and request the info from source if it is
// all of the following conditions must be true to skip the refresh:
// 1. cache ts didn't expire yet
// 2. is not pending a notification from the wrapped fs
func (o *Object) refresh(ctx context.Context) error {
isNotified := o.CacheFs.isNotifiedRemote(o.Remote())
isExpired := time.Now().After(o.CacheTs.Add(time.Duration(o.CacheFs.opt.InfoAge)))
if !isExpired && !isNotified {
return nil
}
return o.refreshFromSource(ctx, true)
}
// refreshFromSource requests the original FS for the object in case it comes from a cached entry
func (o *Object) refreshFromSource(ctx context.Context, force bool) error {
o.refreshMutex.Lock()
defer o.refreshMutex.Unlock()
var err error
var liveObject fs.Object
if o.Object != nil && !force {
return nil
}
if o.isTempFile() {
liveObject, err = o.ParentFs.NewObject(ctx, o.Remote())
err = errors.Wrapf(err, "in parent fs %v", o.ParentFs)
} else {
liveObject, err = o.CacheFs.Fs.NewObject(ctx, o.Remote())
err = errors.Wrapf(err, "in cache fs %v", o.CacheFs.Fs)
}
if err != nil {
fs.Errorf(o, "error refreshing object in : %v", err)
return err
}
o.updateData(ctx, liveObject)
o.persist()
return nil
}
// SetModTime sets the ModTime of this object
func (o *Object) SetModTime(ctx context.Context, t time.Time) error {
if err := o.refreshFromSource(ctx, false); err != nil {
return err
}
err := o.Object.SetModTime(ctx, t)
if err != nil {
return err
}
o.CacheModTime = t.UnixNano()
o.persist()
fs.Debugf(o, "updated ModTime: %v", t)
return nil
}
// Open is used to request a specific part of the file using fs.RangeOption
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) {
var err error
if o.Object == nil {
err = o.refreshFromSource(ctx, true)
} else {
err = o.refresh(ctx)
}
if err != nil {
return nil, err
}
cacheReader := NewObjectHandle(ctx, o, o.CacheFs)
var offset, limit int64 = 0, -1
for _, option := range options {
switch x := option.(type) {
case *fs.SeekOption:
offset = x.Offset
case *fs.RangeOption:
offset, limit = x.Decode(o.Size())
}
_, err = cacheReader.Seek(offset, io.SeekStart)
if err != nil {
return nil, err
}
}
return readers.NewLimitedReadCloser(cacheReader, limit), nil
}
// Update will change the object data
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
if err := o.refreshFromSource(ctx, false); err != nil {
return err
}
// pause background uploads if active
if o.CacheFs.opt.TempWritePath != "" {
o.CacheFs.backgroundRunner.pause()
defer o.CacheFs.backgroundRunner.play()
// don't allow started uploads
if o.isTempFile() && o.tempFileStartedUpload() {
return errors.Errorf("%v is currently uploading, can't update", o)
}
}
fs.Debugf(o, "updating object contents with size %v", src.Size())
// FIXME use reliable upload
err := o.Object.Update(ctx, in, src, options...)
if err != nil {
fs.Errorf(o, "error updating source: %v", err)
return err
}
// deleting cached chunks and info to be replaced with new ones
_ = o.CacheFs.cache.RemoveObject(o.abs())
// advertise to ChangeNotify if wrapped doesn't do that
o.CacheFs.notifyChangeUpstreamIfNeeded(o.Remote(), fs.EntryObject)
o.CacheModTime = src.ModTime(ctx).UnixNano()
o.CacheSize = src.Size()
o.cacheHashesMu.Lock()
o.CacheHashes = make(map[hash.Type]string)
o.cacheHashesMu.Unlock()
o.CacheTs = time.Now()
o.persist()
return nil
}
// Remove deletes the object from both the cache and the source
func (o *Object) Remove(ctx context.Context) error {
if err := o.refreshFromSource(ctx, false); err != nil {
return err
}
// pause background uploads if active
if o.CacheFs.opt.TempWritePath != "" {
o.CacheFs.backgroundRunner.pause()
defer o.CacheFs.backgroundRunner.play()
// don't allow started uploads
if o.isTempFile() && o.tempFileStartedUpload() {
return errors.Errorf("%v is currently uploading, can't delete", o)
}
}
err := o.Object.Remove(ctx)
if err != nil {
return err
}
fs.Debugf(o, "removing object")
_ = o.CacheFs.cache.RemoveObject(o.abs())
_ = o.CacheFs.cache.removePendingUpload(o.abs())
parentCd := NewDirectory(o.CacheFs, cleanPath(path.Dir(o.Remote())))
_ = o.CacheFs.cache.ExpireDir(parentCd)
// advertise to ChangeNotify if wrapped doesn't do that
o.CacheFs.notifyChangeUpstreamIfNeeded(parentCd.Remote(), fs.EntryDirectory)
return nil
}
// Hash requests a hash of the object and stores it in the cache
// since it might or might not be called, it is lazily loaded
func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) {
_ = o.refresh(ctx)
o.cacheHashesMu.Lock()
if o.CacheHashes == nil {
o.CacheHashes = make(map[hash.Type]string)
}
cachedHash, found := o.CacheHashes[ht]
o.cacheHashesMu.Unlock()
if found {
return cachedHash, nil
}
if err := o.refreshFromSource(ctx, false); err != nil {
return "", err
}
liveHash, err := o.Object.Hash(ctx, ht)
if err != nil {
return "", err
}
o.cacheHashesMu.Lock()
o.CacheHashes[ht] = liveHash
o.cacheHashesMu.Unlock()
o.persist()
fs.Debugf(o, "object hash cached: %v", liveHash)
return liveHash, nil
}
// persist adds this object to the persistent cache
func (o *Object) persist() *Object {
err := o.CacheFs.cache.AddObject(o)
if err != nil {
fs.Errorf(o, "failed to cache object: %v", err)
}
return o
}
func (o *Object) isTempFile() bool {
_, err := o.CacheFs.cache.SearchPendingUpload(o.abs())
if err != nil {
o.CacheType = objectInCache
return false
}
o.CacheType = objectPendingUpload
return true
}
func (o *Object) tempFileStartedUpload() bool {
started, err := o.CacheFs.cache.SearchPendingUpload(o.abs())
if err != nil {
return false
}
return started
}
// UnWrap returns the Object that this Object is wrapping or
// nil if it isn't wrapping anything
func (o *Object) UnWrap() fs.Object {
return o.Object
}
var (
_ fs.Object = (*Object)(nil)
_ fs.ObjectUnWrapper = (*Object)(nil)
)
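A minimal usage sketch of the ranged Open shown above (inside the cache package; o is assumed to be a cached *Object and ctx a context.Context):

// request the first KiB of the object; the bytes are served chunk by chunk through a Handle
rc, err := o.Open(ctx, &fs.RangeOption{Start: 0, End: 1023})
if err == nil {
	buf := make([]byte, 1024)
	n, _ := io.ReadFull(rc, buf)
	fs.Debugf(o, "read %d bytes from the cached object", n)
	_ = rc.Close()
}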

298
backend/cache/plex.go vendored Normal file

@@ -0,0 +1,298 @@
// +build !plan9
package cache
import (
"bytes"
"crypto/tls"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"strings"
"sync"
"time"
cache "github.com/patrickmn/go-cache"
"github.com/rclone/rclone/fs"
"golang.org/x/net/websocket"
)
const (
// defPlexLoginURL is the default URL for Plex login
defPlexLoginURL = "https://plex.tv/users/sign_in.json"
defPlexNotificationURL = "%s/:/websockets/notifications?X-Plex-Token=%s"
)
// PlaySessionStateNotification is part of the API response of Plex
type PlaySessionStateNotification struct {
SessionKey string `json:"sessionKey"`
GUID string `json:"guid"`
Key string `json:"key"`
ViewOffset int64 `json:"viewOffset"`
State string `json:"state"`
TranscodeSession string `json:"transcodeSession"`
}
// NotificationContainer is part of the API response of Plex
type NotificationContainer struct {
Type string `json:"type"`
Size int `json:"size"`
PlaySessionState []PlaySessionStateNotification `json:"PlaySessionStateNotification"`
}
// PlexNotification is part of the API response of Plex
type PlexNotification struct {
Container NotificationContainer `json:"NotificationContainer"`
}
// plexConnector manages the cache integration with Plex
type plexConnector struct {
url *url.URL
username string
password string
token string
insecure bool
f *Fs
mu sync.Mutex
running bool
runningMu sync.Mutex
stateCache *cache.Cache
saveToken func(string)
}
// newPlexConnector connects to a Plex server and generates a token
func newPlexConnector(f *Fs, plexURL, username, password string, insecure bool, saveToken func(string)) (*plexConnector, error) {
u, err := url.ParseRequestURI(strings.TrimRight(plexURL, "/"))
if err != nil {
return nil, err
}
pc := &plexConnector{
f: f,
url: u,
username: username,
password: password,
token: "",
insecure: insecure,
stateCache: cache.New(time.Hour, time.Minute),
saveToken: saveToken,
}
return pc, nil
}
// newPlexConnectorWithToken connects to a Plex server using an existing token
func newPlexConnectorWithToken(f *Fs, plexURL, token string, insecure bool) (*plexConnector, error) {
u, err := url.ParseRequestURI(strings.TrimRight(plexURL, "/"))
if err != nil {
return nil, err
}
pc := &plexConnector{
f: f,
url: u,
token: token,
insecure: insecure,
stateCache: cache.New(time.Hour, time.Minute),
}
pc.listenWebsocket()
return pc, nil
}
func (p *plexConnector) closeWebsocket() {
p.runningMu.Lock()
defer p.runningMu.Unlock()
fs.Infof("plex", "stopped Plex watcher")
p.running = false
}
func (p *plexConnector) websocketDial() (*websocket.Conn, error) {
u := strings.TrimRight(strings.Replace(strings.Replace(
p.url.String(), "http://", "ws://", 1), "https://", "wss://", 1), "/")
url := fmt.Sprintf(defPlexNotificationURL, u, p.token)
config, err := websocket.NewConfig(url, "http://localhost")
if err != nil {
return nil, err
}
if p.insecure {
config.TlsConfig = &tls.Config{InsecureSkipVerify: true}
}
return websocket.DialConfig(config)
}
func (p *plexConnector) listenWebsocket() {
p.runningMu.Lock()
defer p.runningMu.Unlock()
conn, err := p.websocketDial()
if err != nil {
fs.Errorf("plex", "%v", err)
return
}
p.running = true
go func() {
for {
if !p.isConnected() {
break
}
notif := &PlexNotification{}
err := websocket.JSON.Receive(conn, notif)
if err != nil {
fs.Debugf("plex", "%v", err)
p.closeWebsocket()
break
}
// we're only interested in play events
if notif.Container.Type == "playing" {
// we loop through each of them
for _, v := range notif.Container.PlaySessionState {
// event type of playing
if v.State == "playing" {
// if it's not cached get the details and cache them
if _, found := p.stateCache.Get(v.Key); !found {
req, err := http.NewRequest("GET", fmt.Sprintf("%s%s", p.url.String(), v.Key), nil)
if err != nil {
continue
}
p.fillDefaultHeaders(req)
resp, err := http.DefaultClient.Do(req)
if err != nil {
continue
}
var data []byte
data, err = ioutil.ReadAll(resp.Body)
if err != nil {
continue
}
p.stateCache.Set(v.Key, data, cache.DefaultExpiration)
}
} else if v.State == "stopped" {
p.stateCache.Delete(v.Key)
}
}
}
}
}()
}
// fillDefaultHeaders will add common headers to requests
func (p *plexConnector) fillDefaultHeaders(req *http.Request) {
req.Header.Add("X-Plex-Client-Identifier", fmt.Sprintf("rclone (%v)", p.f.String()))
req.Header.Add("X-Plex-Product", fmt.Sprintf("rclone (%v)", p.f.Name()))
req.Header.Add("X-Plex-Version", fs.Version)
req.Header.Add("Accept", "application/json")
if p.token != "" {
req.Header.Add("X-Plex-Token", p.token)
}
}
// authenticate will generate a token based on a username/password
func (p *plexConnector) authenticate() error {
p.mu.Lock()
defer p.mu.Unlock()
form := url.Values{}
form.Set("user[login]", p.username)
form.Add("user[password]", p.password)
req, err := http.NewRequest("POST", defPlexLoginURL, strings.NewReader(form.Encode()))
if err != nil {
return err
}
p.fillDefaultHeaders(req)
resp, err := http.DefaultClient.Do(req)
if err != nil {
return err
}
var data map[string]interface{}
err = json.NewDecoder(resp.Body).Decode(&data)
if err != nil {
return fmt.Errorf("failed to obtain token: %v", err)
}
tokenGen, ok := get(data, "user", "authToken")
if !ok {
return fmt.Errorf("failed to obtain token: %v", data)
}
token, ok := tokenGen.(string)
if !ok {
return fmt.Errorf("failed to obtain token: %v", data)
}
p.token = token
if p.token != "" {
if p.saveToken != nil {
p.saveToken(p.token)
}
fs.Infof(p.f.Name(), "Connected to Plex server: %v", p.url.String())
}
p.listenWebsocket()
return nil
}
// isConnected checks if this rclone is authenticated to Plex
func (p *plexConnector) isConnected() bool {
p.runningMu.Lock()
defer p.runningMu.Unlock()
return p.running
}
// isConfigured checks if this rclone is configured to use a Plex server
func (p *plexConnector) isConfigured() bool {
return p.url != nil
}
func (p *plexConnector) isPlaying(co *Object) bool {
var err error
if !p.isConnected() {
p.listenWebsocket()
}
remote := co.Remote()
if cr, yes := p.f.isWrappedByCrypt(); yes {
remote, err = cr.DecryptFileName(co.Remote())
if err != nil {
fs.Debugf("plex", "can not decrypt wrapped file: %v", err)
return false
}
}
isPlaying := false
for _, v := range p.stateCache.Items() {
if bytes.Contains(v.Object.([]byte), []byte(remote)) {
isPlaying = true
break
}
}
return isPlaying
}
// adapted from: https://stackoverflow.com/a/28878037 (credit)
func get(m interface{}, path ...interface{}) (interface{}, bool) {
for _, p := range path {
switch idx := p.(type) {
case string:
if mm, ok := m.(map[string]interface{}); ok {
if val, found := mm[idx]; found {
m = val
continue
}
}
return nil, false
case int:
if mm, ok := m.([]interface{}); ok {
if len(mm) > idx {
m = mm[idx]
continue
}
}
return nil, false
}
}
return m, true
}
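The get helper above is the only JSON plumbing the connector needs. A small sketch of the same lookup pattern applied to a decoded login response (the literal JSON mirrors the shape authenticate expects and is illustrative only):

var data map[string]interface{}
_ = json.Unmarshal([]byte(`{"user":{"authToken":"abc123"}}`), &data)
if tokenGen, ok := get(data, "user", "authToken"); ok {
	if token, ok := tokenGen.(string); ok {
		fmt.Println(token) // prints "abc123"
	}
}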

98
backend/cache/storage_memory.go vendored Normal file

@@ -0,0 +1,98 @@
// +build !plan9
package cache
import (
"strconv"
"strings"
"time"
cache "github.com/patrickmn/go-cache"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
)
// Memory is a wrapper of transient storage for a go-cache store
type Memory struct {
db *cache.Cache
}
// NewMemory builds this cache storage
// defaultExpiration will set the expiry time of chunks in this storage
func NewMemory(defaultExpiration time.Duration) *Memory {
mem := &Memory{}
err := mem.Connect(defaultExpiration)
if err != nil {
fs.Errorf("cache", "can't open ram connection: %v", err)
}
return mem
}
// Connect will create a connection for the storage
func (m *Memory) Connect(defaultExpiration time.Duration) error {
m.db = cache.New(defaultExpiration, -1)
return nil
}
// HasChunk confirms the existence of a single chunk of an object
func (m *Memory) HasChunk(cachedObject *Object, offset int64) bool {
key := cachedObject.abs() + "-" + strconv.FormatInt(offset, 10)
_, found := m.db.Get(key)
return found
}
// GetChunk will retrieve a single chunk which belongs to a cached object or an error if it doesn't find it
func (m *Memory) GetChunk(cachedObject *Object, offset int64) ([]byte, error) {
key := cachedObject.abs() + "-" + strconv.FormatInt(offset, 10)
var data []byte
if x, found := m.db.Get(key); found {
data = x.([]byte)
return data, nil
}
return nil, errors.Errorf("couldn't get cached object data at offset %v", offset)
}
// AddChunk adds a new chunk of a cached object
func (m *Memory) AddChunk(fp string, data []byte, offset int64) error {
return m.AddChunkAhead(fp, data, offset, time.Second)
}
// AddChunkAhead adds a new chunk of a cached object (the expiry parameter t is currently unused)
func (m *Memory) AddChunkAhead(fp string, data []byte, offset int64, t time.Duration) error {
key := fp + "-" + strconv.FormatInt(offset, 10)
m.db.Set(key, data, cache.DefaultExpiration)
return nil
}
// CleanChunksByAge will clean up expired chunks; intended to run on a cron-like schedule
func (m *Memory) CleanChunksByAge(chunkAge time.Duration) {
m.db.DeleteExpired()
}
// CleanChunksByNeed will clean up all chunks before the given offset once the FS has read past them
func (m *Memory) CleanChunksByNeed(offset int64) {
items := m.db.Items()
for key := range items {
sepIdx := strings.LastIndex(key, "-")
keyOffset, err := strconv.ParseInt(key[sepIdx+1:], 10, 64)
if err != nil {
fs.Errorf("cache", "couldn't parse offset entry %v", key)
continue
}
if keyOffset < offset {
m.db.Delete(key)
}
}
}
// CleanChunksBySize will clean up chunks once the total size passes a certain point
func (m *Memory) CleanChunksBySize(maxSize int64) {
// NOOP
}
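A minimal sketch of how the transient store is used (inside the cache package; co is assumed to be an existing *Object and data one chunk of bytes):

mem := NewMemory(time.Hour)         // chunks expire an hour after being added
_ = mem.AddChunk(co.abs(), data, 0) // keys take the form "<abs path>-<offset>"
if mem.HasChunk(co, 0) {
	chunk, err := mem.GetChunk(co, 0)
	if err == nil {
		fs.Debugf(co, "got %d cached bytes", len(chunk))
	}
}
mem.CleanChunksByNeed(int64(len(data))) // drop every chunk before the given offset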

1051
backend/cache/storage_persistent.go vendored Normal file

File diff suppressed because it is too large

23
backend/cache/utils_test.go vendored Normal file

@@ -0,0 +1,23 @@
package cache
import bolt "go.etcd.io/bbolt"
// PurgeTempUploads will remove all the pending uploads from the queue
func (b *Persistent) PurgeTempUploads() {
b.tempQueueMux.Lock()
defer b.tempQueueMux.Unlock()
_ = b.db.Update(func(tx *bolt.Tx) error {
_ = tx.DeleteBucket([]byte(tempBucket))
_, _ = tx.CreateBucketIfNotExists([]byte(tempBucket))
return nil
})
}
// SetPendingUploadToStarted is a way to mark an entry as started (even if it hasn't actually started)
func (b *Persistent) SetPendingUploadToStarted(remote string) error {
return b.updatePendingUpload(remote, func(item *tempUploadInfo) error {
item.Started = true
return nil
})
}
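These two helpers are what the temp upload tests above lean on; a short sketch of the pattern (boltDb is the *Persistent handle returned by newCacheFs in the tests, t the testing.T and encryptedRemote a path prepared with encryptRemoteIfNeeded):

boltDb.PurgeTempUploads()                                             // start each test from an empty queue
require.NoError(t, boltDb.SetPendingUploadToStarted(encryptedRemote)) // simulate an upload already in flight
// the uploading-file tests then assert that DirMove, Move and Remove refuse to touch the started entry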

2280
backend/chunker/chunker.go Normal file

File diff suppressed because it is too large


@@ -0,0 +1,691 @@
package chunker
import (
"bytes"
"context"
"flag"
"fmt"
"io/ioutil"
"path"
"regexp"
"strings"
"testing"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Command line flags
var (
UploadKilobytes = flag.Int("upload-kilobytes", 0, "Upload size in Kilobytes, set this to test large uploads")
)
// test that chunking does not break large uploads
func testPutLarge(t *testing.T, f *Fs, kilobytes int) {
t.Run(fmt.Sprintf("PutLarge%dk", kilobytes), func(t *testing.T) {
fstests.TestPutLarge(context.Background(), t, f, &fstest.Item{
ModTime: fstest.Time("2001-02-03T04:05:06.499999999Z"),
Path: fmt.Sprintf("chunker-upload-%dk", kilobytes),
Size: int64(kilobytes) * int64(fs.KibiByte),
})
})
}
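// The UploadKilobytes flag above lets large-upload runs be enabled from the
// command line; an illustrative invocation (the exact test wiring lives
// elsewhere in this file) might be:
//
//	go test ./backend/chunker -upload-kilobytes 20480
//
// which would have testPutLarge upload a roughly 20 MiB object.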
// test chunk name parser
func testChunkNameFormat(t *testing.T, f *Fs) {
saveOpt := f.opt
defer func() {
// restore original settings (f is pointer, f.opt is struct)
f.opt = saveOpt
_ = f.setChunkNameFormat(f.opt.NameFormat)
}()
assertFormat := func(pattern, wantDataFormat, wantCtrlFormat, wantNameRegexp string) {
err := f.setChunkNameFormat(pattern)
assert.NoError(t, err)
assert.Equal(t, wantDataFormat, f.dataNameFmt)
assert.Equal(t, wantCtrlFormat, f.ctrlNameFmt)
assert.Equal(t, wantNameRegexp, f.nameRegexp.String())
}
assertFormatValid := func(pattern string) {
err := f.setChunkNameFormat(pattern)
assert.NoError(t, err)
}
assertFormatInvalid := func(pattern string) {
err := f.setChunkNameFormat(pattern)
assert.Error(t, err)
}
assertMakeName := func(wantChunkName, mainName string, chunkNo int, ctrlType, xactID string) {
gotChunkName := ""
assert.NotPanics(t, func() {
gotChunkName = f.makeChunkName(mainName, chunkNo, ctrlType, xactID)
}, "makeChunkName(%q,%d,%q,%q) must not panic", mainName, chunkNo, ctrlType, xactID)
if gotChunkName != "" {
assert.Equal(t, wantChunkName, gotChunkName)
}
}
assertMakeNamePanics := func(mainName string, chunkNo int, ctrlType, xactID string) {
assert.Panics(t, func() {
_ = f.makeChunkName(mainName, chunkNo, ctrlType, xactID)
}, "makeChunkName(%q,%d,%q,%q) should panic", mainName, chunkNo, ctrlType, xactID)
}
assertParseName := func(fileName, wantMainName string, wantChunkNo int, wantCtrlType, wantXactID string) {
gotMainName, gotChunkNo, gotCtrlType, gotXactID := f.parseChunkName(fileName)
assert.Equal(t, wantMainName, gotMainName)
assert.Equal(t, wantChunkNo, gotChunkNo)
assert.Equal(t, wantCtrlType, gotCtrlType)
assert.Equal(t, wantXactID, gotXactID)
}
const newFormatSupported = false // support for patterns not starting with base name (*)
// valid formats
assertFormat(`*.rclone_chunk.###`, `%s.rclone_chunk.%03d`, `%s.rclone_chunk._%s`, `^(.+?)\.rclone_chunk\.(?:([0-9]{3,})|_([a-z][a-z0-9]{2,6}))(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`)
assertFormat(`*.rclone_chunk.#`, `%s.rclone_chunk.%d`, `%s.rclone_chunk._%s`, `^(.+?)\.rclone_chunk\.(?:([0-9]+)|_([a-z][a-z0-9]{2,6}))(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`)
assertFormat(`*_chunk_#####`, `%s_chunk_%05d`, `%s_chunk__%s`, `^(.+?)_chunk_(?:([0-9]{5,})|_([a-z][a-z0-9]{2,6}))(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`)
assertFormat(`*-chunk-#`, `%s-chunk-%d`, `%s-chunk-_%s`, `^(.+?)-chunk-(?:([0-9]+)|_([a-z][a-z0-9]{2,6}))(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`)
assertFormat(`*-chunk-#-%^$()[]{}.+-!?:\`, `%s-chunk-%d-%%^$()[]{}.+-!?:\`, `%s-chunk-_%s-%%^$()[]{}.+-!?:\`, `^(.+?)-chunk-(?:([0-9]+)|_([a-z][a-z0-9]{2,6}))-%\^\$\(\)\[\]\{\}\.\+-!\?:\\(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`)
if newFormatSupported {
assertFormat(`_*-chunk-##,`, `_%s-chunk-%02d,`, `_%s-chunk-_%s,`, `^_(.+?)-chunk-(?:([0-9]{2,})|_([a-z][a-z0-9]{2,6})),(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`)
}
// invalid formats
assertFormatInvalid(`chunk-#`)
assertFormatInvalid(`*-chunk`)
assertFormatInvalid(`*-*-chunk-#`)
assertFormatInvalid(`*-chunk-#-#`)
assertFormatInvalid(`#-chunk-*`)
assertFormatInvalid(`*/#`)
assertFormatValid(`*#`)
assertFormatInvalid(`**#`)
assertFormatInvalid(`#*`)
assertFormatInvalid(``)
assertFormatInvalid(`-`)
// quick tests
if newFormatSupported {
assertFormat(`part_*_#`, `part_%s_%d`, `part_%s__%s`, `^part_(.+?)_(?:([0-9]+)|_([a-z][a-z0-9]{2,6}))(?:_([0-9][0-9a-z]{3,8})\.\.tmp_([0-9]{10,13}))?$`)
f.opt.StartFrom = 1
assertMakeName(`part_fish_1`, "fish", 0, "", "")
assertParseName(`part_fish_43`, "fish", 42, "", "")
assertMakeName(`part_fish__locks`, "fish", -2, "locks", "")
assertParseName(`part_fish__locks`, "fish", -1, "locks", "")
assertMakeName(`part_fish__x2y`, "fish", -2, "x2y", "")
assertParseName(`part_fish__x2y`, "fish", -1, "x2y", "")
assertMakeName(`part_fish_3_0004`, "fish", 2, "", "4")
assertParseName(`part_fish_4_0005`, "fish", 3, "", "0005")
assertMakeName(`part_fish__blkinfo_jj5fvo3wr`, "fish", -3, "blkinfo", "jj5fvo3wr")
assertParseName(`part_fish__blkinfo_zz9fvo3wr`, "fish", -1, "blkinfo", "zz9fvo3wr")
// old-style temporary suffix (parse only)
assertParseName(`part_fish_4..tmp_0000000011`, "fish", 3, "", "000b")
assertParseName(`part_fish__blkinfo_jj5fvo3wr`, "fish", -1, "blkinfo", "jj5fvo3wr")
}
// prepare format for long tests
assertFormat(`*.chunk.###`, `%s.chunk.%03d`, `%s.chunk._%s`, `^(.+?)\.chunk\.(?:([0-9]{3,})|_([a-z][a-z0-9]{2,6}))(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`)
f.opt.StartFrom = 2
// valid data chunks
assertMakeName(`fish.chunk.003`, "fish", 1, "", "")
assertParseName(`fish.chunk.003`, "fish", 1, "", "")
assertMakeName(`fish.chunk.021`, "fish", 19, "", "")
assertParseName(`fish.chunk.021`, "fish", 19, "", "")
// valid temporary data chunks
assertMakeName(`fish.chunk.011_4321`, "fish", 9, "", "4321")
assertParseName(`fish.chunk.011_4321`, "fish", 9, "", "4321")
assertMakeName(`fish.chunk.011_00bc`, "fish", 9, "", "00bc")
assertParseName(`fish.chunk.011_00bc`, "fish", 9, "", "00bc")
assertMakeName(`fish.chunk.1916_5jjfvo3wr`, "fish", 1914, "", "5jjfvo3wr")
assertParseName(`fish.chunk.1916_5jjfvo3wr`, "fish", 1914, "", "5jjfvo3wr")
assertMakeName(`fish.chunk.1917_zz9fvo3wr`, "fish", 1915, "", "zz9fvo3wr")
assertParseName(`fish.chunk.1917_zz9fvo3wr`, "fish", 1915, "", "zz9fvo3wr")
// valid temporary data chunks (old temporary suffix, only parse)
assertParseName(`fish.chunk.004..tmp_0000000047`, "fish", 2, "", "001b")
assertParseName(`fish.chunk.323..tmp_9994567890123`, "fish", 321, "", "3jjfvo3wr")
// parsing invalid data chunk names
assertParseName(`fish.chunk.3`, "", -1, "", "")
assertParseName(`fish.chunk.001`, "", -1, "", "")
assertParseName(`fish.chunk.21`, "", -1, "", "")
assertParseName(`fish.chunk.-21`, "", -1, "", "")
assertParseName(`fish.chunk.004abcd`, "", -1, "", "") // missing underscore delimiter
assertParseName(`fish.chunk.004__1234`, "", -1, "", "") // extra underscore delimiter
assertParseName(`fish.chunk.004_123`, "", -1, "", "") // too short temporary suffix
assertParseName(`fish.chunk.004_1234567890`, "", -1, "", "") // too long temporary suffix
assertParseName(`fish.chunk.004_-1234`, "", -1, "", "") // temporary suffix must be positive
assertParseName(`fish.chunk.004_123E`, "", -1, "", "") // uppercase not allowed
assertParseName(`fish.chunk.004_12.3`, "", -1, "", "") // punctuation not allowed
// parsing invalid data chunk names (old temporary suffix)
assertParseName(`fish.chunk.004.tmp_0000000021`, "", -1, "", "")
assertParseName(`fish.chunk.003..tmp_123456789`, "", -1, "", "")
assertParseName(`fish.chunk.003..tmp_012345678901234567890123456789`, "", -1, "", "")
assertParseName(`fish.chunk.323..tmp_12345678901234`, "", -1, "", "")
assertParseName(`fish.chunk.003..tmp_-1`, "", -1, "", "")
// valid control chunks
assertMakeName(`fish.chunk._info`, "fish", -1, "info", "")
assertMakeName(`fish.chunk._locks`, "fish", -2, "locks", "")
assertMakeName(`fish.chunk._blkinfo`, "fish", -3, "blkinfo", "")
assertMakeName(`fish.chunk._x2y`, "fish", -4, "x2y", "")
assertParseName(`fish.chunk._info`, "fish", -1, "info", "")
assertParseName(`fish.chunk._locks`, "fish", -1, "locks", "")
assertParseName(`fish.chunk._blkinfo`, "fish", -1, "blkinfo", "")
assertParseName(`fish.chunk._x2y`, "fish", -1, "x2y", "")
// valid temporary control chunks
assertMakeName(`fish.chunk._info_0001`, "fish", -1, "info", "1")
assertMakeName(`fish.chunk._locks_4321`, "fish", -2, "locks", "4321")
assertMakeName(`fish.chunk._uploads_abcd`, "fish", -3, "uploads", "abcd")
assertMakeName(`fish.chunk._blkinfo_xyzabcdef`, "fish", -4, "blkinfo", "xyzabcdef")
assertMakeName(`fish.chunk._x2y_1aaa`, "fish", -5, "x2y", "1aaa")
assertParseName(`fish.chunk._info_0001`, "fish", -1, "info", "0001")
assertParseName(`fish.chunk._locks_4321`, "fish", -1, "locks", "4321")
assertParseName(`fish.chunk._uploads_9abc`, "fish", -1, "uploads", "9abc")
assertParseName(`fish.chunk._blkinfo_xyzabcdef`, "fish", -1, "blkinfo", "xyzabcdef")
assertParseName(`fish.chunk._x2y_1aaa`, "fish", -1, "x2y", "1aaa")
// valid temporary control chunks (old temporary suffix, parse only)
assertParseName(`fish.chunk._info..tmp_0000000047`, "fish", -1, "info", "001b")
assertParseName(`fish.chunk._locks..tmp_0000054321`, "fish", -1, "locks", "15wx")
assertParseName(`fish.chunk._uploads..tmp_0000000000`, "fish", -1, "uploads", "0000")
assertParseName(`fish.chunk._blkinfo..tmp_9994567890123`, "fish", -1, "blkinfo", "3jjfvo3wr")
assertParseName(`fish.chunk._x2y..tmp_0000000000`, "fish", -1, "x2y", "0000")
// parsing invalid control chunk names
assertParseName(`fish.chunk.metadata`, "", -1, "", "") // must be prepended by underscore
assertParseName(`fish.chunk.info`, "", -1, "", "")
assertParseName(`fish.chunk.locks`, "", -1, "", "")
assertParseName(`fish.chunk.uploads`, "", -1, "", "")
assertParseName(`fish.chunk._os`, "", -1, "", "") // too short
assertParseName(`fish.chunk._metadata`, "", -1, "", "") // too long
assertParseName(`fish.chunk._blockinfo`, "", -1, "", "") // way too long
assertParseName(`fish.chunk._4me`, "", -1, "", "") // cannot start with digit
assertParseName(`fish.chunk._567`, "", -1, "", "") // cannot be all digits
assertParseName(`fish.chunk._me_ta`, "", -1, "", "") // punctuation not allowed
assertParseName(`fish.chunk._in-fo`, "", -1, "", "")
assertParseName(`fish.chunk._.bin`, "", -1, "", "")
assertParseName(`fish.chunk._.2xy`, "", -1, "", "")
// parsing invalid temporary control chunks
assertParseName(`fish.chunk._blkinfo1234`, "", -1, "", "") // missing underscore delimiter
assertParseName(`fish.chunk._info__1234`, "", -1, "", "") // extra underscore delimiter
assertParseName(`fish.chunk._info_123`, "", -1, "", "") // too short temporary suffix
assertParseName(`fish.chunk._info_1234567890`, "", -1, "", "") // too long temporary suffix
assertParseName(`fish.chunk._info_-1234`, "", -1, "", "") // temporary suffix must be positive
assertParseName(`fish.chunk._info_123E`, "", -1, "", "") // uppercase not allowed
assertParseName(`fish.chunk._info_12.3`, "", -1, "", "") // punctuation not allowed
assertParseName(`fish.chunk._locks..tmp_123456789`, "", -1, "", "")
assertParseName(`fish.chunk._meta..tmp_-1`, "", -1, "", "")
assertParseName(`fish.chunk._blockinfo..tmp_012345678901234567890123456789`, "", -1, "", "")
// short control chunk names: 3 letters ok, 1-2 letters not allowed
assertMakeName(`fish.chunk._ext`, "fish", -1, "ext", "")
assertParseName(`fish.chunk._int`, "fish", -1, "int", "")
assertMakeNamePanics("fish", -1, "in", "")
assertMakeNamePanics("fish", -1, "up", "4")
assertMakeNamePanics("fish", -1, "x", "")
assertMakeNamePanics("fish", -1, "c", "1z")
assertMakeName(`fish.chunk._ext_0000`, "fish", -1, "ext", "0")
assertMakeName(`fish.chunk._ext_0026`, "fish", -1, "ext", "26")
assertMakeName(`fish.chunk._int_0abc`, "fish", -1, "int", "abc")
assertMakeName(`fish.chunk._int_9xyz`, "fish", -1, "int", "9xyz")
assertMakeName(`fish.chunk._out_jj5fvo3wr`, "fish", -1, "out", "jj5fvo3wr")
assertParseName(`fish.chunk._ext_0000`, "fish", -1, "ext", "0000")
assertParseName(`fish.chunk._ext_0026`, "fish", -1, "ext", "0026")
assertParseName(`fish.chunk._int_0abc`, "fish", -1, "int", "0abc")
assertParseName(`fish.chunk._int_9xyz`, "fish", -1, "int", "9xyz")
assertParseName(`fish.chunk._out_jj5fvo3wr`, "fish", -1, "out", "jj5fvo3wr")
// base file name can sometimes look like a valid chunk name
assertParseName(`fish.chunk.003.chunk.004`, "fish.chunk.003", 2, "", "")
assertParseName(`fish.chunk.003.chunk._info`, "fish.chunk.003", -1, "info", "")
assertParseName(`fish.chunk.003.chunk._Meta`, "", -1, "", "")
assertParseName(`fish.chunk._info.chunk.004`, "fish.chunk._info", 2, "", "")
assertParseName(`fish.chunk._info.chunk._info`, "fish.chunk._info", -1, "info", "")
assertParseName(`fish.chunk._info.chunk._info.chunk._Meta`, "", -1, "", "")
// base file name looking like a valid chunk name (old temporary suffix)
assertParseName(`fish.chunk.003.chunk.005..tmp_0000000022`, "fish.chunk.003", 3, "", "000m")
assertParseName(`fish.chunk.003.chunk._x..tmp_0000054321`, "", -1, "", "")
assertParseName(`fish.chunk._info.chunk.005..tmp_0000000023`, "fish.chunk._info", 3, "", "000n")
assertParseName(`fish.chunk._info.chunk._info.chunk._x..tmp_0000054321`, "", -1, "", "")
assertParseName(`fish.chunk.003.chunk._blkinfo..tmp_9994567890123`, "fish.chunk.003", -1, "blkinfo", "3jjfvo3wr")
assertParseName(`fish.chunk._info.chunk._blkinfo..tmp_9994567890123`, "fish.chunk._info", -1, "blkinfo", "3jjfvo3wr")
assertParseName(`fish.chunk.004..tmp_0000000021.chunk.004`, "fish.chunk.004..tmp_0000000021", 2, "", "")
assertParseName(`fish.chunk.004..tmp_0000000021.chunk.005..tmp_0000000025`, "fish.chunk.004..tmp_0000000021", 3, "", "000p")
assertParseName(`fish.chunk.004..tmp_0000000021.chunk._info`, "fish.chunk.004..tmp_0000000021", -1, "info", "")
assertParseName(`fish.chunk.004..tmp_0000000021.chunk._blkinfo..tmp_9994567890123`, "fish.chunk.004..tmp_0000000021", -1, "blkinfo", "3jjfvo3wr")
assertParseName(`fish.chunk.004..tmp_0000000021.chunk._Meta`, "", -1, "", "")
assertParseName(`fish.chunk.004..tmp_0000000021.chunk._x..tmp_0000054321`, "", -1, "", "")
assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk.004`, "fish.chunk._blkinfo..tmp_9994567890123", 2, "", "")
assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk.005..tmp_0000000026`, "fish.chunk._blkinfo..tmp_9994567890123", 3, "", "000q")
assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk._info`, "fish.chunk._blkinfo..tmp_9994567890123", -1, "info", "")
assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk._blkinfo..tmp_9994567890123`, "fish.chunk._blkinfo..tmp_9994567890123", -1, "blkinfo", "3jjfvo3wr")
assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk._info.chunk._Meta`, "", -1, "", "")
assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk._info.chunk._x..tmp_0000054321`, "", -1, "", "")
assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk.004`, "fish.chunk._blkinfo..tmp_1234567890123456789", 2, "", "")
assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk.005..tmp_0000000022`, "fish.chunk._blkinfo..tmp_1234567890123456789", 3, "", "000m")
assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk._info`, "fish.chunk._blkinfo..tmp_1234567890123456789", -1, "info", "")
assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk._blkinfo..tmp_9994567890123`, "fish.chunk._blkinfo..tmp_1234567890123456789", -1, "blkinfo", "3jjfvo3wr")
assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk._info.chunk._Meta`, "", -1, "", "")
assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk._info.chunk._x..tmp_0000054321`, "", -1, "", "")
// attempts to make invalid chunk names
assertMakeNamePanics("fish", -1, "", "") // neither data nor control
assertMakeNamePanics("fish", 0, "info", "") // both data and control
assertMakeNamePanics("fish", -1, "metadata", "") // control type too long
assertMakeNamePanics("fish", -1, "blockinfo", "") // control type way too long
assertMakeNamePanics("fish", -1, "2xy", "") // first digit not allowed
assertMakeNamePanics("fish", -1, "123", "") // all digits not allowed
assertMakeNamePanics("fish", -1, "Meta", "") // only lower case letters allowed
assertMakeNamePanics("fish", -1, "in-fo", "") // punctuation not allowed
assertMakeNamePanics("fish", -1, "_info", "")
assertMakeNamePanics("fish", -1, "info_", "")
assertMakeNamePanics("fish", -2, ".bind", "")
assertMakeNamePanics("fish", -2, "bind.", "")
assertMakeNamePanics("fish", -1, "", "1") // neither data nor control
assertMakeNamePanics("fish", 0, "info", "23") // both data and control
assertMakeNamePanics("fish", -1, "metadata", "45") // control type too long
assertMakeNamePanics("fish", -1, "blockinfo", "7") // control type way too long
assertMakeNamePanics("fish", -1, "2xy", "abc") // first digit not allowed
assertMakeNamePanics("fish", -1, "123", "def") // all digits not allowed
assertMakeNamePanics("fish", -1, "Meta", "mnk") // only lower case letters allowed
assertMakeNamePanics("fish", -1, "in-fo", "xyz") // punctuation not allowed
assertMakeNamePanics("fish", -1, "_info", "5678")
assertMakeNamePanics("fish", -1, "info_", "999")
assertMakeNamePanics("fish", -2, ".bind", "0")
assertMakeNamePanics("fish", -2, "bind.", "0")
assertMakeNamePanics("fish", 0, "", "1234567890") // temporary suffix too long
assertMakeNamePanics("fish", 0, "", "123F4") // uppercase not allowed
assertMakeNamePanics("fish", 0, "", "123.") // punctuation not allowed
assertMakeNamePanics("fish", 0, "", "_123")
}
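The assertions above pin down the chunk naming grammar: data chunks look like <base>.chunk.NNN with a zero-padded decimal number, control chunks look like <base>.chunk._ctrl with a 3-7 character lower-case type that must not start with a digit or be all digits, and either form may carry a 4-9 character lower-case base-36 temporary suffix after an underscore. As a rough standalone sketch (not the backend's parser, which is driven by the configurable name format and start_from offset), one way to match that shape is:

// Rough illustration only: a regexp accepting the same shapes as the
// assertions above. The real parser is configuration driven and is not
// reproduced here.
package main

import (
	"fmt"
	"regexp"
)

// <base>.chunk.NNN       data chunk, NNN = zero-padded decimal number
// <base>.chunk._ctrl     control chunk, ctrl = 3-7 lower-case chars, no leading digit
// either form + "_ssss"  temporary variant, ssss = 4-9 lower-case base-36 chars
var chunkNameRE = regexp.MustCompile(
	`^(.+)\.chunk\.(?:([0-9]{3,})|_([a-z][a-z0-9]{2,6}))(?:_([0-9a-z]{4,9}))?$`)

func main() {
	for _, name := range []string{
		"fish.chunk.004",                // data chunk
		"fish.chunk._info",              // control chunk
		"fish.chunk._blkinfo_xyzabcdef", // temporary control chunk
		"fish.chunk._4me",               // rejected: control type starts with a digit
		"fish.chunk.004_123E",           // rejected: upper case in temporary suffix
	} {
		fmt.Printf("%-32s matched=%v\n", name, chunkNameRE.MatchString(name))
	}
}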
func testSmallFileInternals(t *testing.T, f *Fs) {
const dir = "small"
ctx := context.Background()
saveOpt := f.opt
defer func() {
f.opt.FailHard = false
_ = operations.Purge(ctx, f.base, dir)
f.opt = saveOpt
}()
f.opt.FailHard = false
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
checkSmallFileInternals := func(obj fs.Object) {
assert.NotNil(t, obj)
o, ok := obj.(*Object)
assert.True(t, ok)
assert.NotNil(t, o)
if o == nil {
return
}
switch {
case !f.useMeta:
// If the meta format is "none", a non-chunked file (even an empty one)
// is internally a single chunk without a meta object.
assert.Nil(t, o.main)
assert.True(t, o.isComposite()) // sorry, sometimes a name is misleading
assert.Equal(t, 1, len(o.chunks))
case f.hashAll:
// Consistent hashing forces meta object on small files too
assert.NotNil(t, o.main)
assert.True(t, o.isComposite())
assert.Equal(t, 1, len(o.chunks))
default:
// normally non-chunked file is kept in the Object's main field
assert.NotNil(t, o.main)
assert.False(t, o.isComposite())
assert.Equal(t, 0, len(o.chunks))
}
}
checkContents := func(obj fs.Object, contents string) {
assert.NotNil(t, obj)
assert.Equal(t, int64(len(contents)), obj.Size())
r, err := obj.Open(ctx)
assert.NoError(t, err)
assert.NotNil(t, r)
if r == nil {
return
}
data, err := ioutil.ReadAll(r)
assert.NoError(t, err)
assert.Equal(t, contents, string(data))
_ = r.Close()
}
checkHashsum := func(obj fs.Object) {
var ht hash.Type
switch {
case !f.hashAll:
return
case f.useMD5:
ht = hash.MD5
case f.useSHA1:
ht = hash.SHA1
default:
return
}
// even empty files must have hashsum in consistent mode
sum, err := obj.Hash(ctx, ht)
assert.NoError(t, err)
assert.NotEqual(t, sum, "")
}
checkSmallFile := func(name, contents string) {
filename := path.Join(dir, name)
item := fstest.Item{Path: filename, ModTime: modTime}
_, put := fstests.PutTestContents(ctx, t, f, &item, contents, false)
assert.NotNil(t, put)
checkSmallFileInternals(put)
checkContents(put, contents)
checkHashsum(put)
// objects returned by Put and NewObject must have similar structure
obj, err := f.NewObject(ctx, filename)
assert.NoError(t, err)
assert.NotNil(t, obj)
checkSmallFileInternals(obj)
checkContents(obj, contents)
checkHashsum(obj)
_ = obj.Remove(ctx)
_ = put.Remove(ctx) // for good
}
checkSmallFile("emptyfile", "")
checkSmallFile("smallfile", "Ok")
}
func testPreventCorruption(t *testing.T, f *Fs) {
if f.opt.ChunkSize > 50 {
t.Skip("this test requires small chunks")
}
const dir = "corrupted"
ctx := context.Background()
saveOpt := f.opt
defer func() {
f.opt.FailHard = false
_ = operations.Purge(ctx, f.base, dir)
f.opt = saveOpt
}()
f.opt.FailHard = true
contents := random.String(250)
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
const overlapMessage = "chunk overlap"
assertOverlapError := func(err error) {
assert.Error(t, err)
if err != nil {
assert.Contains(t, err.Error(), overlapMessage)
}
}
newFile := func(name string) fs.Object {
item := fstest.Item{Path: path.Join(dir, name), ModTime: modTime}
_, obj := fstests.PutTestContents(ctx, t, f, &item, contents, true)
require.NotNil(t, obj)
return obj
}
billyObj := newFile("billy")
billyChunkName := func(chunkNo int) string {
return f.makeChunkName(billyObj.Remote(), chunkNo, "", "")
}
err := f.Mkdir(ctx, billyChunkName(1))
assertOverlapError(err)
_, err = f.Move(ctx, newFile("silly1"), billyChunkName(2))
assert.Error(t, err)
assert.True(t, err == fs.ErrorCantMove || (err != nil && strings.Contains(err.Error(), overlapMessage)))
_, err = f.Copy(ctx, newFile("silly2"), billyChunkName(3))
assert.Error(t, err)
assert.True(t, err == fs.ErrorCantCopy || (err != nil && strings.Contains(err.Error(), overlapMessage)))
// accessing chunks in strict mode is prohibited
f.opt.FailHard = true
billyChunk4Name := billyChunkName(4)
billyChunk4, err := f.NewObject(ctx, billyChunk4Name)
assertOverlapError(err)
f.opt.FailHard = false
billyChunk4, err = f.NewObject(ctx, billyChunk4Name)
assert.NoError(t, err)
require.NotNil(t, billyChunk4)
f.opt.FailHard = true
_, err = f.Put(ctx, bytes.NewBufferString(contents), billyChunk4)
assertOverlapError(err)
// you can freely read chunks (if you have an object)
r, err := billyChunk4.Open(ctx)
assert.NoError(t, err)
var chunkContents []byte
assert.NotPanics(t, func() {
chunkContents, err = ioutil.ReadAll(r)
_ = r.Close()
})
assert.NoError(t, err)
assert.NotEqual(t, contents, string(chunkContents))
// but you can't change them
err = billyChunk4.Update(ctx, bytes.NewBufferString(contents), newFile("silly3"))
assertOverlapError(err)
// Remove isn't special, you can't corrupt files even if you have an object
err = billyChunk4.Remove(ctx)
assertOverlapError(err)
// create a fresh file in case billy was corrupted by the attempts above
willyObj := newFile("willy")
willyChunkName := f.makeChunkName(willyObj.Remote(), 1, "", "")
f.opt.FailHard = false
willyChunk, err := f.NewObject(ctx, willyChunkName)
f.opt.FailHard = true
assert.NoError(t, err)
require.NotNil(t, willyChunk)
_, err = operations.Copy(ctx, f, willyChunk, willyChunkName, newFile("silly4"))
assertOverlapError(err)
// operations.Move will return an error when chunker's Move refuses
// to corrupt the target file, but it then falls back to the copy/delete
// method, which still tries to delete the target chunk. Chunker must
// come to the rescue.
_, err = operations.Move(ctx, f, willyChunk, willyChunkName, newFile("silly5"))
assertOverlapError(err)
r, err = willyChunk.Open(ctx)
assert.NoError(t, err)
assert.NotPanics(t, func() {
_, err = ioutil.ReadAll(r)
_ = r.Close()
})
assert.NoError(t, err)
}
func testChunkNumberOverflow(t *testing.T, f *Fs) {
if f.opt.ChunkSize > 50 {
t.Skip("this test requires small chunks")
}
const dir = "wreaked"
const wreakNumber = 10200300
ctx := context.Background()
saveOpt := f.opt
defer func() {
f.opt.FailHard = false
_ = operations.Purge(ctx, f.base, dir)
f.opt = saveOpt
}()
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
contents := random.String(100)
newFile := func(f fs.Fs, name string) (fs.Object, string) {
filename := path.Join(dir, name)
item := fstest.Item{Path: filename, ModTime: modTime}
_, obj := fstests.PutTestContents(ctx, t, f, &item, contents, true)
require.NotNil(t, obj)
return obj, filename
}
f.opt.FailHard = false
file, fileName := newFile(f, "wreaker")
wreak, _ := newFile(f.base, f.makeChunkName("wreaker", wreakNumber, "", ""))
f.opt.FailHard = false
fstest.CheckListingWithRoot(t, f, dir, nil, nil, f.Precision())
_, err := f.NewObject(ctx, fileName)
assert.Error(t, err)
f.opt.FailHard = true
_, err = f.List(ctx, dir)
assert.Error(t, err)
_, err = f.NewObject(ctx, fileName)
assert.Error(t, err)
f.opt.FailHard = false
_ = wreak.Remove(ctx)
_ = file.Remove(ctx)
}
func testMetadataInput(t *testing.T, f *Fs) {
const minChunkForTest = 50
if f.opt.ChunkSize < minChunkForTest {
t.Skip("this test requires chunks that fit metadata")
}
const dir = "usermeta"
ctx := context.Background()
saveOpt := f.opt
defer func() {
f.opt.FailHard = false
_ = operations.Purge(ctx, f.base, dir)
f.opt = saveOpt
}()
f.opt.FailHard = false
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
putFile := func(f fs.Fs, name, contents, message string, check bool) fs.Object {
item := fstest.Item{Path: name, ModTime: modTime}
_, obj := fstests.PutTestContents(ctx, t, f, &item, contents, check)
assert.NotNil(t, obj, message)
return obj
}
runSubtest := func(contents, name string) {
description := fmt.Sprintf("file with %s metadata", name)
filename := path.Join(dir, name)
require.True(t, len(contents) > 2 && len(contents) < minChunkForTest, description+" test data is correct")
part := putFile(f.base, f.makeChunkName(filename, 0, "", ""), "oops", "", true)
_ = putFile(f, filename, contents, "upload "+description, false)
obj, err := f.NewObject(ctx, filename)
assert.NoError(t, err, "access "+description)
assert.NotNil(t, obj)
assert.Equal(t, int64(len(contents)), obj.Size(), "size "+description)
o, ok := obj.(*Object)
assert.True(t, ok)
if o != nil {
assert.True(t, o.isComposite() && len(o.chunks) == 1, description+" is forced composite")
o = nil
}
defer func() {
_ = obj.Remove(ctx)
_ = part.Remove(ctx)
}()
r, err := obj.Open(ctx)
assert.NoError(t, err, "open "+description)
assert.NotNil(t, r, "open stream of "+description)
if err == nil && r != nil {
data, err := ioutil.ReadAll(r)
assert.NoError(t, err, "read all of "+description)
assert.Equal(t, contents, string(data), description+" contents is ok")
_ = r.Close()
}
}
metaData, err := marshalSimpleJSON(ctx, 3, 1, "", "")
require.NoError(t, err)
todaysMeta := string(metaData)
runSubtest(todaysMeta, "today")
pastMeta := regexp.MustCompile(`"ver":[0-9]+`).ReplaceAllLiteralString(todaysMeta, `"ver":1`)
pastMeta = regexp.MustCompile(`"size":[0-9]+`).ReplaceAllLiteralString(pastMeta, `"size":0`)
runSubtest(pastMeta, "past")
futureMeta := regexp.MustCompile(`"ver":[0-9]+`).ReplaceAllLiteralString(todaysMeta, `"ver":999`)
futureMeta = regexp.MustCompile(`"nchunks":[0-9]+`).ReplaceAllLiteralString(futureMeta, `"nchunks":0,"x":"y"`)
runSubtest(futureMeta, "future")
}
// InternalTest dispatches all internal tests
func (f *Fs) InternalTest(t *testing.T) {
t.Run("PutLarge", func(t *testing.T) {
if *UploadKilobytes <= 0 {
t.Skip("-upload-kilobytes is not set")
}
testPutLarge(t, f, *UploadKilobytes)
})
t.Run("ChunkNameFormat", func(t *testing.T) {
testChunkNameFormat(t, f)
})
t.Run("SmallFileInternals", func(t *testing.T) {
testSmallFileInternals(t, f)
})
t.Run("PreventCorruption", func(t *testing.T) {
testPreventCorruption(t, f)
})
t.Run("ChunkNumberOverflow", func(t *testing.T) {
testChunkNumberOverflow(t, f)
})
t.Run("MetadataInput", func(t *testing.T) {
testMetadataInput(t, f)
})
}
var _ fstests.InternalTester = (*Fs)(nil)


@@ -0,0 +1,58 @@
// Test the Chunker filesystem interface
package chunker_test
import (
"flag"
"os"
"path/filepath"
"testing"
_ "github.com/rclone/rclone/backend/all" // for integration tests
"github.com/rclone/rclone/backend/chunker"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
// Command line flags
var (
// Invalid characters are not supported by some remotes, e.g. Mailru.
// We enable testing with invalid characters when -remote is not set, so
// chunker overlays a local directory, but invalid characters are disabled
// by default when -remote is set, e.g. when test_all runs backend tests.
// You can still test with invalid characters using the below flag.
UseBadChars = flag.Bool("bad-chars", false, "Set to test bad characters in file names when -remote is set")
)
// TestIntegration runs integration tests against a concrete remote
// set by the -remote flag. If the flag is not set, it creates a
// dynamic chunker overlay wrapping a local temporary directory.
func TestIntegration(t *testing.T) {
opt := fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*chunker.Object)(nil),
SkipBadWindowsCharacters: !*UseBadChars,
UnimplementableObjectMethods: []string{
"MimeType",
"GetTier",
"SetTier",
},
UnimplementableFsMethods: []string{
"PublicLink",
"OpenWriterAt",
"MergeDirs",
"DirCacheFlush",
"UserInfo",
"Disconnect",
},
}
if *fstest.RemoteName == "" {
name := "TestChunker"
opt.RemoteName = name + ":"
tempDir := filepath.Join(os.TempDir(), "rclone-chunker-test-standard")
opt.ExtraConfig = []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "chunker"},
{Name: name, Key: "remote", Value: tempDir},
}
}
fstests.Run(t, &opt)
}

backend/crypt/cipher.go (normal file, 1054 lines): file diff suppressed because it is too large
backend/crypt/cipher_test.go (normal file, 1224 lines): file diff suppressed because it is too large
backend/crypt/crypt.go (normal file, 1031 lines): file diff suppressed because it is too large


@@ -0,0 +1,143 @@
package crypt
import (
"bytes"
"context"
"crypto/md5"
"fmt"
"io"
"testing"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
type testWrapper struct {
fs.ObjectInfo
}
// UnWrap returns the Object that this Object is wrapping or nil if it
// isn't wrapping anything
func (o testWrapper) UnWrap() fs.Object {
if o, ok := o.ObjectInfo.(fs.Object); ok {
return o
}
return nil
}
// Create a temporary local fs to upload things from
func makeTempLocalFs(t *testing.T) (localFs fs.Fs, cleanup func()) {
localFs, err := fs.TemporaryLocalFs()
require.NoError(t, err)
cleanup = func() {
require.NoError(t, localFs.Rmdir(context.Background(), ""))
}
return localFs, cleanup
}
// Upload a file to a remote
func uploadFile(t *testing.T, f fs.Fs, remote, contents string) (obj fs.Object, cleanup func()) {
inBuf := bytes.NewBufferString(contents)
t1 := time.Date(2012, time.December, 17, 18, 32, 31, 0, time.UTC)
upSrc := object.NewStaticObjectInfo(remote, t1, int64(len(contents)), true, nil, nil)
obj, err := f.Put(context.Background(), inBuf, upSrc)
require.NoError(t, err)
cleanup = func() {
require.NoError(t, obj.Remove(context.Background()))
}
return obj, cleanup
}
// Test the ObjectInfo
func testObjectInfo(t *testing.T, f *Fs, wrap bool) {
var (
contents = random.String(100)
path = "hash_test_object"
ctx = context.Background()
)
if wrap {
path = "_wrap"
}
localFs, cleanupLocalFs := makeTempLocalFs(t)
defer cleanupLocalFs()
obj, cleanupObj := uploadFile(t, localFs, path, contents)
defer cleanupObj()
// encrypt the data
inBuf := bytes.NewBufferString(contents)
var outBuf bytes.Buffer
enc, err := f.cipher.newEncrypter(inBuf, nil)
require.NoError(t, err)
nonce := enc.nonce // read the nonce at the start
_, err = io.Copy(&outBuf, enc)
require.NoError(t, err)
var oi fs.ObjectInfo = obj
if wrap {
// wrap the object in an fs.ObjectUnwrapper if required
oi = testWrapper{oi}
}
// wrap the object in a crypt for upload using the nonce we
// saved from the encryptor
src := f.newObjectInfo(oi, nonce)
// Test ObjectInfo methods
assert.Equal(t, int64(outBuf.Len()), src.Size())
assert.Equal(t, f, src.Fs())
assert.NotEqual(t, path, src.Remote())
// Test ObjectInfo.Hash
wantHash := md5.Sum(outBuf.Bytes())
gotHash, err := src.Hash(ctx, hash.MD5)
require.NoError(t, err)
assert.Equal(t, fmt.Sprintf("%x", wantHash), gotHash)
}
func testComputeHash(t *testing.T, f *Fs) {
var (
contents = random.String(100)
path = "compute_hash_test"
ctx = context.Background()
hashType = f.Fs.Hashes().GetOne()
)
if hashType == hash.None {
t.Skipf("%v: does not support hashes", f.Fs)
}
localFs, cleanupLocalFs := makeTempLocalFs(t)
defer cleanupLocalFs()
// Upload a file to localFs as a test object
localObj, cleanupLocalObj := uploadFile(t, localFs, path, contents)
defer cleanupLocalObj()
// Upload the same data to the remote Fs also
remoteObj, cleanupRemoteObj := uploadFile(t, f, path, contents)
defer cleanupRemoteObj()
// Calculate the expected Hash of the remote object
computedHash, err := f.ComputeHash(ctx, remoteObj.(*Object), localObj, hashType)
require.NoError(t, err)
// Test computed hash matches remote object hash
remoteObjHash, err := remoteObj.(*Object).Object.Hash(ctx, hashType)
require.NoError(t, err)
assert.Equal(t, remoteObjHash, computedHash)
}
// InternalTest is called by fstests.Run to run extra tests
func (f *Fs) InternalTest(t *testing.T) {
t.Run("ObjectInfo", func(t *testing.T) { testObjectInfo(t, f, false) })
t.Run("ObjectInfoWrap", func(t *testing.T) { testObjectInfo(t, f, true) })
t.Run("ComputeHash", func(t *testing.T) { testComputeHash(t, f) })
}


@@ -0,0 +1,93 @@
// Test Crypt filesystem interface
package crypt_test
import (
"os"
"path/filepath"
"testing"
"github.com/rclone/rclone/backend/crypt"
_ "github.com/rclone/rclone/backend/drive" // for integration tests
_ "github.com/rclone/rclone/backend/local"
_ "github.com/rclone/rclone/backend/swift" // for integration tests
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
t.Skip("Skipping as -remote not set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*crypt.Object)(nil),
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
// TestStandard runs integration tests against the remote
func TestStandard(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-standard")
name := "TestCrypt"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*crypt.Object)(nil),
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "crypt"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "password", Value: obscure.MustObscure("potato")},
{Name: name, Key: "filename_encryption", Value: "standard"},
},
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
// TestOff runs integration tests against the remote
func TestOff(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-off")
name := "TestCrypt2"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*crypt.Object)(nil),
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "crypt"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name, Key: "filename_encryption", Value: "off"},
},
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
// TestObfuscate runs integration tests against the remote
func TestObfuscate(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-obfuscate")
name := "TestCrypt3"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*crypt.Object)(nil),
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "crypt"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name, Key: "filename_encryption", Value: "obfuscate"},
},
SkipBadWindowsCharacters: true,
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}


@@ -0,0 +1,63 @@
// Package pkcs7 implements PKCS#7 padding
//
// This is a standard way of encoding variable length buffers into
// buffers which are a multiple of an underlying crypto block size.
package pkcs7
import "github.com/pkg/errors"
// Errors Unpad can return
var (
ErrorPaddingNotFound = errors.New("Bad PKCS#7 padding - not padded")
ErrorPaddingNotAMultiple = errors.New("Bad PKCS#7 padding - not a multiple of blocksize")
ErrorPaddingTooLong = errors.New("Bad PKCS#7 padding - too long")
ErrorPaddingTooShort = errors.New("Bad PKCS#7 padding - too short")
ErrorPaddingNotAllTheSame = errors.New("Bad PKCS#7 padding - not all the same")
)
// Pad buf using PKCS#7 to a multiple of n.
//
// Appends the padding to buf - make a copy of it first if you don't
// want it modified.
func Pad(n int, buf []byte) []byte {
if n <= 1 || n >= 256 {
panic("bad multiple")
}
length := len(buf)
padding := n - (length % n)
for i := 0; i < padding; i++ {
buf = append(buf, byte(padding))
}
if (len(buf) % n) != 0 {
panic("padding failed")
}
return buf
}
// Unpad buf using PKCS#7 from a multiple of n returning a slice of
// buf or an error if malformed.
func Unpad(n int, buf []byte) ([]byte, error) {
if n <= 1 || n >= 256 {
panic("bad multiple")
}
length := len(buf)
if length == 0 {
return nil, ErrorPaddingNotFound
}
if (length % n) != 0 {
return nil, ErrorPaddingNotAMultiple
}
padding := int(buf[length-1])
if padding > n {
return nil, ErrorPaddingTooLong
}
if padding == 0 {
return nil, ErrorPaddingTooShort
}
for i := 0; i < padding; i++ {
if buf[length-1-i] != byte(padding) {
return nil, ErrorPaddingNotAllTheSame
}
}
return buf[:length-padding], nil
}
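For a quick feel of how Pad and Unpad compose, here is a minimal round-trip sketch written as an example test in the same package (not part of the diff): a 5 byte message padded to a 16 byte block gains eleven 0x0b bytes, and Unpad strips them again.

package pkcs7

import "fmt"

// ExamplePad is a minimal round-trip sketch at an AES-style block size of 16.
func ExamplePad() {
	padded := Pad(16, []byte("hello"))
	fmt.Printf("padded length: %d, last byte: %#x\n", len(padded), padded[len(padded)-1])

	original, err := Unpad(16, padded)
	fmt.Printf("unpadded: %q, err: %v\n", original, err)

	// Output:
	// padded length: 16, last byte: 0xb
	// unpadded: "hello", err: <nil>
}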


@@ -0,0 +1,73 @@
package pkcs7
import (
"fmt"
"testing"
"github.com/stretchr/testify/assert"
)
func TestPad(t *testing.T) {
for _, test := range []struct {
n int
in string
expected string
}{
{8, "", "\x08\x08\x08\x08\x08\x08\x08\x08"},
{8, "1", "1\x07\x07\x07\x07\x07\x07\x07"},
{8, "12", "12\x06\x06\x06\x06\x06\x06"},
{8, "123", "123\x05\x05\x05\x05\x05"},
{8, "1234", "1234\x04\x04\x04\x04"},
{8, "12345", "12345\x03\x03\x03"},
{8, "123456", "123456\x02\x02"},
{8, "1234567", "1234567\x01"},
{8, "abcdefgh", "abcdefgh\x08\x08\x08\x08\x08\x08\x08\x08"},
{8, "abcdefgh1", "abcdefgh1\x07\x07\x07\x07\x07\x07\x07"},
{8, "abcdefgh12", "abcdefgh12\x06\x06\x06\x06\x06\x06"},
{8, "abcdefgh123", "abcdefgh123\x05\x05\x05\x05\x05"},
{8, "abcdefgh1234", "abcdefgh1234\x04\x04\x04\x04"},
{8, "abcdefgh12345", "abcdefgh12345\x03\x03\x03"},
{8, "abcdefgh123456", "abcdefgh123456\x02\x02"},
{8, "abcdefgh1234567", "abcdefgh1234567\x01"},
{8, "abcdefgh12345678", "abcdefgh12345678\x08\x08\x08\x08\x08\x08\x08\x08"},
{16, "", "\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10\x10"},
{16, "a", "a\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f"},
} {
actual := Pad(test.n, []byte(test.in))
assert.Equal(t, test.expected, string(actual), fmt.Sprintf("Pad %d %q", test.n, test.in))
recovered, err := Unpad(test.n, actual)
assert.NoError(t, err)
assert.Equal(t, []byte(test.in), recovered, fmt.Sprintf("Unpad %d %q", test.n, test.in))
}
assert.Panics(t, func() { Pad(1, []byte("")) }, "bad multiple")
assert.Panics(t, func() { Pad(256, []byte("")) }, "bad multiple")
}
func TestUnpad(t *testing.T) {
// We've tested the OK decoding in TestPad, now test the error cases
for _, test := range []struct {
n int
in string
err error
}{
{8, "", ErrorPaddingNotFound},
{8, "1", ErrorPaddingNotAMultiple},
{8, "12", ErrorPaddingNotAMultiple},
{8, "123", ErrorPaddingNotAMultiple},
{8, "1234", ErrorPaddingNotAMultiple},
{8, "12345", ErrorPaddingNotAMultiple},
{8, "123456", ErrorPaddingNotAMultiple},
{8, "1234567", ErrorPaddingNotAMultiple},
{8, "1234567\xFF", ErrorPaddingTooLong},
{8, "1234567\x09", ErrorPaddingTooLong},
{8, "1234567\x00", ErrorPaddingTooShort},
{8, "123456\x01\x02", ErrorPaddingNotAllTheSame},
{8, "\x07\x08\x08\x08\x08\x08\x08\x08", ErrorPaddingNotAllTheSame},
} {
result, actualErr := Unpad(test.n, []byte(test.in))
assert.Equal(t, test.err, actualErr, fmt.Sprintf("Unpad %d %q", test.n, test.in))
assert.Equal(t, result, []byte(nil))
}
assert.Panics(t, func() { _, _ = Unpad(1, []byte("")) }, "bad multiple")
assert.Panics(t, func() { _, _ = Unpad(256, []byte("")) }, "bad multiple")
}

backend/drive/drive.go (executable file, 3514 lines): file diff suppressed because it is too large


@@ -0,0 +1,381 @@
package drive
import (
"bytes"
"context"
"encoding/json"
"io"
"io/ioutil"
"mime"
"path/filepath"
"strings"
"testing"
"github.com/pkg/errors"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/api/drive/v3"
)
func TestDriveScopes(t *testing.T) {
for _, test := range []struct {
in string
want []string
wantFlag bool
}{
{"", []string{
"https://www.googleapis.com/auth/drive",
}, false},
{" drive.file , drive.readonly", []string{
"https://www.googleapis.com/auth/drive.file",
"https://www.googleapis.com/auth/drive.readonly",
}, false},
{" drive.file , drive.appfolder", []string{
"https://www.googleapis.com/auth/drive.file",
"https://www.googleapis.com/auth/drive.appfolder",
}, true},
} {
got := driveScopes(test.in)
assert.Equal(t, test.want, got, test.in)
gotFlag := driveScopesContainsAppFolder(got)
assert.Equal(t, test.wantFlag, gotFlag, test.in)
}
}
/*
var additionalMimeTypes = map[string]string{
"application/vnd.ms-excel.sheet.macroenabled.12": ".xlsm",
"application/vnd.ms-excel.template.macroenabled.12": ".xltm",
"application/vnd.ms-powerpoint.presentation.macroenabled.12": ".pptm",
"application/vnd.ms-powerpoint.slideshow.macroenabled.12": ".ppsm",
"application/vnd.ms-powerpoint.template.macroenabled.12": ".potm",
"application/vnd.ms-powerpoint": ".ppt",
"application/vnd.ms-word.document.macroenabled.12": ".docm",
"application/vnd.ms-word.template.macroenabled.12": ".dotm",
"application/vnd.openxmlformats-officedocument.presentationml.template": ".potx",
"application/vnd.openxmlformats-officedocument.spreadsheetml.template": ".xltx",
"application/vnd.openxmlformats-officedocument.wordprocessingml.template": ".dotx",
"application/vnd.sun.xml.writer": ".sxw",
"text/richtext": ".rtf",
}
*/
// Load the example export formats into exportFormats for testing
func TestInternalLoadExampleFormats(t *testing.T) {
fetchFormatsOnce.Do(func() {})
buf, err := ioutil.ReadFile(filepath.FromSlash("test/about.json"))
var about struct {
ExportFormats map[string][]string `json:"exportFormats,omitempty"`
ImportFormats map[string][]string `json:"importFormats,omitempty"`
}
require.NoError(t, err)
require.NoError(t, json.Unmarshal(buf, &about))
_exportFormats = fixMimeTypeMap(about.ExportFormats)
_importFormats = fixMimeTypeMap(about.ImportFormats)
}
func TestInternalParseExtensions(t *testing.T) {
for _, test := range []struct {
in string
want []string
wantErr error
}{
{"doc", []string{".doc"}, nil},
{" docx ,XLSX, pptx,svg", []string{".docx", ".xlsx", ".pptx", ".svg"}, nil},
{"docx,svg,Docx", []string{".docx", ".svg"}, nil},
{"docx,potato,docx", []string{".docx"}, errors.New(`couldn't find MIME type for extension ".potato"`)},
} {
extensions, _, gotErr := parseExtensions(test.in)
if test.wantErr == nil {
assert.NoError(t, gotErr)
} else {
assert.EqualError(t, gotErr, test.wantErr.Error())
}
assert.Equal(t, test.want, extensions)
}
// Test it is appending
extensions, _, gotErr := parseExtensions("docx,svg", "docx,svg,xlsx")
assert.NoError(t, gotErr)
assert.Equal(t, []string{".docx", ".svg", ".xlsx"}, extensions)
}
func TestInternalFindExportFormat(t *testing.T) {
item := &drive.File{
Name: "file",
MimeType: "application/vnd.google-apps.document",
}
for _, test := range []struct {
extensions []string
wantExtension string
wantMimeType string
}{
{[]string{}, "", ""},
{[]string{".pdf"}, ".pdf", "application/pdf"},
{[]string{".pdf", ".rtf", ".xls"}, ".pdf", "application/pdf"},
{[]string{".xls", ".rtf", ".pdf"}, ".rtf", "application/rtf"},
{[]string{".xls", ".csv", ".svg"}, "", ""},
} {
f := new(Fs)
f.exportExtensions = test.extensions
gotExtension, gotFilename, gotMimeType, gotIsDocument := f.findExportFormat(item)
assert.Equal(t, test.wantExtension, gotExtension)
if test.wantExtension != "" {
assert.Equal(t, item.Name+gotExtension, gotFilename)
} else {
assert.Equal(t, "", gotFilename)
}
assert.Equal(t, test.wantMimeType, gotMimeType)
assert.Equal(t, true, gotIsDocument)
}
}
func TestMimeTypesToExtension(t *testing.T) {
for mimeType, extension := range _mimeTypeToExtension {
extensions, err := mime.ExtensionsByType(mimeType)
assert.NoError(t, err)
assert.Contains(t, extensions, extension)
}
}
func TestExtensionToMimeType(t *testing.T) {
for mimeType, extension := range _mimeTypeToExtension {
gotMimeType := mime.TypeByExtension(extension)
mediatype, _, err := mime.ParseMediaType(gotMimeType)
assert.NoError(t, err)
assert.Equal(t, mimeType, mediatype)
}
}
func TestExtensionsForExportFormats(t *testing.T) {
if _exportFormats == nil {
t.Error("exportFormats == nil")
}
for fromMT, toMTs := range _exportFormats {
for _, toMT := range toMTs {
if !isInternalMimeType(toMT) {
extensions, err := mime.ExtensionsByType(toMT)
assert.NoError(t, err, "invalid MIME type %q", toMT)
assert.NotEmpty(t, extensions, "No extension found for %q (from: %q)", fromMT, toMT)
}
}
}
}
func TestExtensionsForImportFormats(t *testing.T) {
t.Skip()
if _importFormats == nil {
t.Error("_importFormats == nil")
}
for fromMT := range _importFormats {
if !isInternalMimeType(fromMT) {
extensions, err := mime.ExtensionsByType(fromMT)
assert.NoError(t, err, "invalid MIME type %q", fromMT)
assert.NotEmpty(t, extensions, "No extension found for %q", fromMT)
}
}
}
func (f *Fs) InternalTestDocumentImport(t *testing.T) {
oldAllow := f.opt.AllowImportNameChange
f.opt.AllowImportNameChange = true
defer func() {
f.opt.AllowImportNameChange = oldAllow
}()
testFilesPath, err := filepath.Abs(filepath.FromSlash("test/files"))
require.NoError(t, err)
testFilesFs, err := fs.NewFs(testFilesPath)
require.NoError(t, err)
_, f.importMimeTypes, err = parseExtensions("odt,ods,doc")
require.NoError(t, err)
err = operations.CopyFile(context.Background(), f, testFilesFs, "example2.doc", "example2.doc")
require.NoError(t, err)
}
func (f *Fs) InternalTestDocumentUpdate(t *testing.T) {
testFilesPath, err := filepath.Abs(filepath.FromSlash("test/files"))
require.NoError(t, err)
testFilesFs, err := fs.NewFs(testFilesPath)
require.NoError(t, err)
_, f.importMimeTypes, err = parseExtensions("odt,ods,doc")
require.NoError(t, err)
err = operations.CopyFile(context.Background(), f, testFilesFs, "example2.xlsx", "example1.ods")
require.NoError(t, err)
}
func (f *Fs) InternalTestDocumentExport(t *testing.T) {
var buf bytes.Buffer
var err error
f.exportExtensions, _, err = parseExtensions("txt")
require.NoError(t, err)
obj, err := f.NewObject(context.Background(), "example2.txt")
require.NoError(t, err)
rc, err := obj.Open(context.Background())
require.NoError(t, err)
defer func() { require.NoError(t, rc.Close()) }()
_, err = io.Copy(&buf, rc)
require.NoError(t, err)
text := buf.String()
for _, excerpt := range []string{
"Lorem ipsum dolor sit amet, consectetur",
"porta at ultrices in, consectetur at augue.",
} {
require.Contains(t, text, excerpt)
}
}
func (f *Fs) InternalTestDocumentLink(t *testing.T) {
var buf bytes.Buffer
var err error
f.exportExtensions, _, err = parseExtensions("link.html")
require.NoError(t, err)
obj, err := f.NewObject(context.Background(), "example2.link.html")
require.NoError(t, err)
rc, err := obj.Open(context.Background())
require.NoError(t, err)
defer func() { require.NoError(t, rc.Close()) }()
_, err = io.Copy(&buf, rc)
require.NoError(t, err)
text := buf.String()
require.True(t, strings.HasPrefix(text, "<html>"))
require.True(t, strings.HasSuffix(text, "</html>\n"))
for _, excerpt := range []string{
`<meta http-equiv="refresh"`,
`Loading <a href="`,
} {
require.Contains(t, text, excerpt)
}
}
// TestIntegration/FsMkdir/FsPutFiles/Internal/Shortcuts
func (f *Fs) InternalTestShortcuts(t *testing.T) {
const (
// from fstest/fstests/fstests.go
existingDir = "hello? sausage"
existingFile = `hello? sausage/êé/Hello, 世界/ " ' @ < > & ? + ≠/z.txt`
existingSubDir = "êé"
)
ctx := context.Background()
srcObj, err := f.NewObject(ctx, existingFile)
require.NoError(t, err)
srcHash, err := srcObj.Hash(ctx, hash.MD5)
require.NoError(t, err)
assert.NotEqual(t, "", srcHash)
t.Run("Errors", func(t *testing.T) {
_, err := f.makeShortcut(ctx, "", f, "")
assert.Error(t, err)
assert.Contains(t, err.Error(), "can't be root")
_, err = f.makeShortcut(ctx, "notfound", f, "dst")
assert.Error(t, err)
assert.Contains(t, err.Error(), "can't find source")
_, err = f.makeShortcut(ctx, existingFile, f, existingFile)
assert.Error(t, err)
assert.Contains(t, err.Error(), "not overwriting")
assert.Contains(t, err.Error(), "existing file")
_, err = f.makeShortcut(ctx, existingFile, f, existingDir)
assert.Error(t, err)
assert.Contains(t, err.Error(), "not overwriting")
assert.Contains(t, err.Error(), "existing directory")
})
t.Run("File", func(t *testing.T) {
dstObj, err := f.makeShortcut(ctx, existingFile, f, "shortcut.txt")
require.NoError(t, err)
require.NotNil(t, dstObj)
assert.Equal(t, "shortcut.txt", dstObj.Remote())
dstHash, err := dstObj.Hash(ctx, hash.MD5)
require.NoError(t, err)
assert.Equal(t, srcHash, dstHash)
require.NoError(t, dstObj.Remove(ctx))
})
t.Run("Dir", func(t *testing.T) {
dstObj, err := f.makeShortcut(ctx, existingDir, f, "shortcutdir")
require.NoError(t, err)
require.Nil(t, dstObj)
entries, err := f.List(ctx, "shortcutdir")
require.NoError(t, err)
require.Equal(t, 1, len(entries))
require.Equal(t, "shortcutdir/"+existingSubDir, entries[0].Remote())
require.NoError(t, f.Rmdir(ctx, "shortcutdir"))
})
t.Run("Command", func(t *testing.T) {
_, err := f.Command(ctx, "shortcut", []string{"one"}, nil)
require.Error(t, err)
require.Contains(t, err.Error(), "need exactly 2 arguments")
_, err = f.Command(ctx, "shortcut", []string{"one", "two"}, map[string]string{
"target": "doesnotexistremote:",
})
require.Error(t, err)
require.Contains(t, err.Error(), "couldn't find target")
_, err = f.Command(ctx, "shortcut", []string{"one", "two"}, map[string]string{
"target": ".",
})
require.Error(t, err)
require.Contains(t, err.Error(), "target is not a drive backend")
dstObjI, err := f.Command(ctx, "shortcut", []string{existingFile, "shortcut2.txt"}, map[string]string{
"target": fs.ConfigString(f),
})
require.NoError(t, err)
dstObj := dstObjI.(*Object)
assert.Equal(t, "shortcut2.txt", dstObj.Remote())
dstHash, err := dstObj.Hash(ctx, hash.MD5)
require.NoError(t, err)
assert.Equal(t, srcHash, dstHash)
require.NoError(t, dstObj.Remove(ctx))
dstObjI, err = f.Command(ctx, "shortcut", []string{existingFile, "shortcut3.txt"}, nil)
require.NoError(t, err)
dstObj = dstObjI.(*Object)
assert.Equal(t, "shortcut3.txt", dstObj.Remote())
dstHash, err = dstObj.Hash(ctx, hash.MD5)
require.NoError(t, err)
assert.Equal(t, srcHash, dstHash)
require.NoError(t, dstObj.Remove(ctx))
})
}
func (f *Fs) InternalTest(t *testing.T) {
// These tests all depend on each other so run them as nested tests
t.Run("DocumentImport", func(t *testing.T) {
f.InternalTestDocumentImport(t)
t.Run("DocumentUpdate", func(t *testing.T) {
f.InternalTestDocumentUpdate(t)
t.Run("DocumentExport", func(t *testing.T) {
f.InternalTestDocumentExport(t)
t.Run("DocumentLink", func(t *testing.T) {
f.InternalTestDocumentLink(t)
})
})
})
})
t.Run("Shortcuts", f.InternalTestShortcuts)
}
var _ fstests.InternalTester = (*Fs)(nil)


@@ -0,0 +1,35 @@
// Test Drive filesystem interface
package drive
import (
"testing"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestDrive:",
NilObject: (*Object)(nil),
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: minChunkSize,
CeilChunkSize: fstests.NextPowerOfTwo,
},
})
}
func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetUploadCutoffer = (*Fs)(nil)
)


@@ -0,0 +1,178 @@
{
"importFormats": {
"text/tab-separated-values": [
"application/vnd.google-apps.spreadsheet"
],
"application/x-vnd.oasis.opendocument.presentation": [
"application/vnd.google-apps.presentation"
],
"image/jpeg": [
"application/vnd.google-apps.document"
],
"image/bmp": [
"application/vnd.google-apps.document"
],
"image/gif": [
"application/vnd.google-apps.document"
],
"application/vnd.ms-excel.sheet.macroenabled.12": [
"application/vnd.google-apps.spreadsheet"
],
"application/vnd.openxmlformats-officedocument.wordprocessingml.template": [
"application/vnd.google-apps.document"
],
"application/vnd.ms-powerpoint.presentation.macroenabled.12": [
"application/vnd.google-apps.presentation"
],
"application/vnd.ms-word.template.macroenabled.12": [
"application/vnd.google-apps.document"
],
"application/vnd.openxmlformats-officedocument.wordprocessingml.document": [
"application/vnd.google-apps.document"
],
"image/pjpeg": [
"application/vnd.google-apps.document"
],
"application/vnd.google-apps.script+text/plain": [
"application/vnd.google-apps.script"
],
"application/vnd.ms-excel": [
"application/vnd.google-apps.spreadsheet"
],
"application/vnd.sun.xml.writer": [
"application/vnd.google-apps.document"
],
"application/vnd.ms-word.document.macroenabled.12": [
"application/vnd.google-apps.document"
],
"application/vnd.ms-powerpoint.slideshow.macroenabled.12": [
"application/vnd.google-apps.presentation"
],
"text/rtf": [
"application/vnd.google-apps.document"
],
"text/plain": [
"application/vnd.google-apps.document"
],
"application/vnd.oasis.opendocument.spreadsheet": [
"application/vnd.google-apps.spreadsheet"
],
"application/x-vnd.oasis.opendocument.spreadsheet": [
"application/vnd.google-apps.spreadsheet"
],
"image/png": [
"application/vnd.google-apps.document"
],
"application/x-vnd.oasis.opendocument.text": [
"application/vnd.google-apps.document"
],
"application/msword": [
"application/vnd.google-apps.document"
],
"application/pdf": [
"application/vnd.google-apps.document"
],
"application/json": [
"application/vnd.google-apps.script"
],
"application/x-msmetafile": [
"application/vnd.google-apps.drawing"
],
"application/vnd.openxmlformats-officedocument.spreadsheetml.template": [
"application/vnd.google-apps.spreadsheet"
],
"application/vnd.ms-powerpoint": [
"application/vnd.google-apps.presentation"
],
"application/vnd.ms-excel.template.macroenabled.12": [
"application/vnd.google-apps.spreadsheet"
],
"image/x-bmp": [
"application/vnd.google-apps.document"
],
"application/rtf": [
"application/vnd.google-apps.document"
],
"application/vnd.openxmlformats-officedocument.presentationml.template": [
"application/vnd.google-apps.presentation"
],
"image/x-png": [
"application/vnd.google-apps.document"
],
"text/html": [
"application/vnd.google-apps.document"
],
"application/vnd.oasis.opendocument.text": [
"application/vnd.google-apps.document"
],
"application/vnd.openxmlformats-officedocument.presentationml.presentation": [
"application/vnd.google-apps.presentation"
],
"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet": [
"application/vnd.google-apps.spreadsheet"
],
"application/vnd.google-apps.script+json": [
"application/vnd.google-apps.script"
],
"application/vnd.openxmlformats-officedocument.presentationml.slideshow": [
"application/vnd.google-apps.presentation"
],
"application/vnd.ms-powerpoint.template.macroenabled.12": [
"application/vnd.google-apps.presentation"
],
"text/csv": [
"application/vnd.google-apps.spreadsheet"
],
"application/vnd.oasis.opendocument.presentation": [
"application/vnd.google-apps.presentation"
],
"image/jpg": [
"application/vnd.google-apps.document"
],
"text/richtext": [
"application/vnd.google-apps.document"
]
},
"exportFormats": {
"application/vnd.google-apps.document": [
"application/rtf",
"application/vnd.oasis.opendocument.text",
"text/html",
"application/pdf",
"application/epub+zip",
"application/zip",
"application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"text/plain"
],
"application/vnd.google-apps.spreadsheet": [
"application/x-vnd.oasis.opendocument.spreadsheet",
"text/tab-separated-values",
"application/pdf",
"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"text/csv",
"application/zip",
"application/vnd.oasis.opendocument.spreadsheet"
],
"application/vnd.google-apps.jam": [
"application/pdf"
],
"application/vnd.google-apps.script": [
"application/vnd.google-apps.script+json"
],
"application/vnd.google-apps.presentation": [
"application/vnd.oasis.opendocument.presentation",
"application/pdf",
"application/vnd.openxmlformats-officedocument.presentationml.presentation",
"text/plain"
],
"application/vnd.google-apps.form": [
"application/zip"
],
"application/vnd.google-apps.drawing": [
"image/svg+xml",
"image/png",
"application/pdf",
"image/jpeg"
]
}
}

Binary files not shown (3).


@@ -12,25 +12,24 @@ package drive
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"regexp"
"strconv"
"github.com/ncw/rclone/fs"
"google.golang.org/api/drive/v2"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/readers"
"google.golang.org/api/drive/v3"
"google.golang.org/api/googleapi"
)
const (
// statusResumeIncomplete is the code returned by the Google uploader when the transfer is not yet complete.
statusResumeIncomplete = 308
// Number of times to try each chunk
maxTries = 10
)
// resumableUpload is used by the generated APIs to provide resumable uploads.
@@ -51,40 +50,52 @@ type resumableUpload struct {
}
// Upload the io.Reader in of size bytes with contentType and info
func (f *Fs) Upload(in io.Reader, size int64, contentType string, info *drive.File, remote string) (*drive.File, error) {
fileID := info.Id
var body io.Reader
body, err := googleapi.WithoutDataWrapper.JSONReader(info)
if err != nil {
return nil, err
func (f *Fs) Upload(ctx context.Context, in io.Reader, size int64, contentType, fileID, remote string, info *drive.File) (*drive.File, error) {
params := url.Values{
"alt": {"json"},
"uploadType": {"resumable"},
"fields": {partialFields},
}
params := make(url.Values)
params.Set("alt", "json")
params.Set("uploadType", "resumable")
urls := "https://www.googleapis.com/upload/drive/v2/files"
params.Set("supportsAllDrives", "true")
if f.opt.KeepRevisionForever {
params.Set("keepRevisionForever", "true")
}
urls := "https://www.googleapis.com/upload/drive/v3/files"
method := "POST"
if fileID != "" {
params.Set("setModifiedDate", "true")
urls += "/{fileId}"
method = "PUT"
method = "PATCH"
}
urls += "?" + params.Encode()
req, _ := http.NewRequest(method, urls, body)
googleapi.Expand(req.URL, map[string]string{
"fileId": fileID,
})
req.Header.Set("Content-Type", "application/json; charset=UTF-8")
req.Header.Set("X-Upload-Content-Type", contentType)
req.Header.Set("X-Upload-Content-Length", fmt.Sprintf("%v", size))
req.Header.Set("User-Agent", fs.UserAgent)
var res *http.Response
var err error
err = f.pacer.Call(func() (bool, error) {
var body io.Reader
body, err = googleapi.WithoutDataWrapper.JSONReader(info)
if err != nil {
return false, err
}
var req *http.Request
req, err = http.NewRequest(method, urls, body)
if err != nil {
return false, err
}
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
googleapi.Expand(req.URL, map[string]string{
"fileId": fileID,
})
req.Header.Set("Content-Type", "application/json; charset=UTF-8")
req.Header.Set("X-Upload-Content-Type", contentType)
if size >= 0 {
req.Header.Set("X-Upload-Content-Length", fmt.Sprintf("%v", size))
}
res, err = f.client.Do(req)
if err == nil {
defer googleapi.CloseBody(res)
err = googleapi.CheckResponse(res)
}
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return nil, err
@@ -98,61 +109,31 @@ func (f *Fs) Upload(in io.Reader, size int64, contentType string, info *drive.Fi
MediaType: contentType,
ContentLength: size,
}
return rx.Upload()
return rx.Upload(ctx)
}
// Make an http.Request for the range passed in
func (rx *resumableUpload) makeRequest(start int64, body []byte) *http.Request {
reqSize := int64(len(body))
req, _ := http.NewRequest("POST", rx.URI, bytes.NewBuffer(body))
func (rx *resumableUpload) makeRequest(ctx context.Context, start int64, body io.ReadSeeker, reqSize int64) *http.Request {
req, _ := http.NewRequest("POST", rx.URI, body)
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
req.ContentLength = reqSize
totalSize := "*"
if rx.ContentLength >= 0 {
totalSize = strconv.FormatInt(rx.ContentLength, 10)
}
if reqSize != 0 {
req.Header.Set("Content-Range", fmt.Sprintf("bytes %v-%v/%v", start, start+reqSize-1, rx.ContentLength))
req.Header.Set("Content-Range", fmt.Sprintf("bytes %v-%v/%v", start, start+reqSize-1, totalSize))
} else {
req.Header.Set("Content-Range", fmt.Sprintf("bytes */%v", rx.ContentLength))
req.Header.Set("Content-Range", fmt.Sprintf("bytes */%v", totalSize))
}
req.Header.Set("Content-Type", rx.MediaType)
req.Header.Set("User-Agent", fs.UserAgent)
return req
}
// rangeRE matches the transfer status response from the server. $1 is
// the last byte index uploaded.
var rangeRE = regexp.MustCompile(`^0\-(\d+)$`)
// Query drive for the amount transferred so far
//
// If error is nil, then start should be valid
func (rx *resumableUpload) transferStatus() (start int64, err error) {
req := rx.makeRequest(0, nil)
res, err := rx.f.client.Do(req)
if err != nil {
return 0, err
}
defer googleapi.CloseBody(res)
if res.StatusCode == http.StatusCreated || res.StatusCode == http.StatusOK {
return rx.ContentLength, nil
}
if res.StatusCode != statusResumeIncomplete {
err = googleapi.CheckResponse(res)
if err != nil {
return 0, err
}
return 0, fmt.Errorf("unexpected http return code %v", res.StatusCode)
}
Range := res.Header.Get("Range")
if m := rangeRE.FindStringSubmatch(Range); len(m) == 2 {
start, err = strconv.ParseInt(m[1], 10, 64)
if err == nil {
return start, nil
}
}
return 0, fmt.Errorf("unable to parse range %q", Range)
}
// Transfer a chunk - caller must call googleapi.CloseBody(res) if err == nil || res != nil
func (rx *resumableUpload) transferChunk(start int64, body []byte) (int, error) {
req := rx.makeRequest(start, body)
func (rx *resumableUpload) transferChunk(ctx context.Context, start int64, chunk io.ReadSeeker, chunkSize int64) (int, error) {
_, _ = chunk.Seek(0, io.SeekStart)
req := rx.makeRequest(ctx, start, chunk, chunkSize)
res, err := rx.f.client.Do(req)
if err != nil {
return 599, err
@@ -174,7 +155,7 @@ func (rx *resumableUpload) transferChunk(start int64, body []byte) (int, error)
// been 200 OK.
//
// So parse the response out of the body. We aren't expecting
// any other 2xx codes, so we parse it unconditionaly on
// any other 2xx codes, so we parse it unconditionally on
// StatusCode
if err = json.NewDecoder(res.Body).Decode(&rx.ret); err != nil {
return 598, err
@@ -184,30 +165,46 @@ func (rx *resumableUpload) transferChunk(start int64, body []byte) (int, error)
}
// Upload uploads the chunks from the input
// It retries each chunk maxTries times (with a pause of uploadPause between attempts).
func (rx *resumableUpload) Upload() (*drive.File, error) {
// It retries each chunk using the pacer and --low-level-retries
func (rx *resumableUpload) Upload(ctx context.Context) (*drive.File, error) {
start := int64(0)
buf := make([]byte, chunkSize)
var StatusCode int
for start < rx.ContentLength {
reqSize := rx.ContentLength - start
if reqSize >= int64(chunkSize) {
reqSize = int64(chunkSize)
var err error
buf := make([]byte, int(rx.f.opt.ChunkSize))
for finished := false; !finished; {
var reqSize int64
var chunk io.ReadSeeker
if rx.ContentLength >= 0 {
// If size known use repeatable reader for smoother bwlimit
if start >= rx.ContentLength {
break
}
reqSize = rx.ContentLength - start
if reqSize >= int64(rx.f.opt.ChunkSize) {
reqSize = int64(rx.f.opt.ChunkSize)
}
chunk = readers.NewRepeatableLimitReaderBuffer(rx.Media, buf, reqSize)
} else {
buf = buf[:reqSize]
}
// Read the chunk
_, err := io.ReadFull(rx.Media, buf)
if err != nil {
return nil, err
// If size unknown read into buffer
var n int
n, err = readers.ReadFill(rx.Media, buf)
if err == io.EOF {
// Send the last chunk with the correct ContentLength
// otherwise Google doesn't know we've finished
rx.ContentLength = start + int64(n)
finished = true
} else if err != nil {
return nil, err
}
reqSize = int64(n)
chunk = bytes.NewReader(buf[:reqSize])
}
// Transfer the chunk
err = rx.f.pacer.Call(func() (bool, error) {
fs.Debug(rx.remote, "Sending chunk %d length %d", start, reqSize)
StatusCode, err = rx.transferChunk(start, buf)
again, err := shouldRetry(err)
fs.Debugf(rx.remote, "Sending chunk %d length %d", start, reqSize)
StatusCode, err = rx.transferChunk(ctx, start, chunk, reqSize)
again, err := rx.f.shouldRetry(err)
if StatusCode == statusResumeIncomplete || StatusCode == http.StatusCreated || StatusCode == http.StatusOK {
again = false
err = nil
@@ -241,7 +238,7 @@ func (rx *resumableUpload) Upload() (*drive.File, error) {
// Handle 404 Not Found errors when doing resumable uploads by starting
// the entire upload over from the beginning.
if rx.ret == nil {
return nil, fs.RetryErrorf("Incomplete upload - retry, last error %d", StatusCode)
return nil, fserrors.RetryErrorf("Incomplete upload - retry, last error %d", StatusCode)
}
return rx.ret, nil
}
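To make the Content-Range handling in makeRequest easier to follow, here is a standalone sketch of the same header construction (buildContentRange is a hypothetical helper, not part of the diff): a chunk sends bytes start-end/total, the total becomes "*" while the overall size is still unknown, and a zero-length request probes the upload status with bytes */total.

package main

import (
	"fmt"
	"strconv"
)

// buildContentRange mirrors the header format used by the resumable upload:
// "bytes <start>-<end>/<total>" for a data chunk, with total "*" while the
// overall size is unknown, and "bytes */<total>" for a status probe.
func buildContentRange(start, reqSize, contentLength int64) string {
	total := "*"
	if contentLength >= 0 {
		total = strconv.FormatInt(contentLength, 10)
	}
	if reqSize != 0 {
		return fmt.Sprintf("bytes %v-%v/%v", start, start+reqSize-1, total)
	}
	return fmt.Sprintf("bytes */%v", total)
}

func main() {
	fmt.Println(buildContentRange(0, 8<<20, 20<<20)) // bytes 0-8388607/20971520
	fmt.Println(buildContentRange(8<<20, 8<<20, -1)) // bytes 8388608-16777215/*
	fmt.Println(buildContentRange(0, 0, 20<<20))     // status probe: bytes */20971520
}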


@@ -0,0 +1,127 @@
// Package dbhash implements the dropbox hash as described in
//
// https://www.dropbox.com/developers/reference/content-hash
package dbhash
import (
"crypto/sha256"
"hash"
)
const (
// BlockSize of the checksum in bytes.
BlockSize = sha256.BlockSize
// Size of the checksum in bytes.
Size = sha256.BlockSize
bytesPerBlock = 4 * 1024 * 1024
hashReturnedError = "hash function returned error"
)
type digest struct {
n int // bytes written into blockHash so far
blockHash hash.Hash
totalHash hash.Hash
sumCalled bool
writtenMore bool
}
// New returns a new hash.Hash computing the Dropbox checksum.
func New() hash.Hash {
d := &digest{}
d.Reset()
return d
}
// writeBlockHash writes the current block hash into the total hash
func (d *digest) writeBlockHash() {
blockHash := d.blockHash.Sum(nil)
_, err := d.totalHash.Write(blockHash)
if err != nil {
panic(hashReturnedError)
}
// reset counters for blockhash
d.n = 0
d.blockHash.Reset()
}
// Write writes len(p) bytes from p to the underlying data stream. It returns
// the number of bytes written from p (0 <= n <= len(p)) and any error
// encountered that caused the write to stop early. Write must return a non-nil
// error if it returns n < len(p). Write must not modify the slice data, even
// temporarily.
//
// Implementations must not retain p.
func (d *digest) Write(p []byte) (n int, err error) {
n = len(p)
for len(p) > 0 {
d.writtenMore = true
toWrite := bytesPerBlock - d.n
if toWrite > len(p) {
toWrite = len(p)
}
_, err = d.blockHash.Write(p[:toWrite])
if err != nil {
panic(hashReturnedError)
}
d.n += toWrite
p = p[toWrite:]
// Accumulate the total hash
if d.n == bytesPerBlock {
d.writeBlockHash()
}
}
return n, nil
}
// Sum appends the current hash to b and returns the resulting slice.
// It does not change the underlying hash state.
//
// TODO(ncw) Sum() can only be called once for this type of hash.
// If you call Sum(), then Write() then Sum() it will result in
// a panic. Calling Write() then Sum(), then Sum() is OK.
func (d *digest) Sum(b []byte) []byte {
if d.sumCalled && d.writtenMore {
panic("digest.Sum() called more than once")
}
d.sumCalled = true
d.writtenMore = false
if d.n != 0 {
d.writeBlockHash()
}
return d.totalHash.Sum(b)
}
// Reset resets the Hash to its initial state.
func (d *digest) Reset() {
d.n = 0
d.totalHash = sha256.New()
d.blockHash = sha256.New()
d.sumCalled = false
d.writtenMore = false
}
// Size returns the number of bytes Sum will return.
func (d *digest) Size() int {
return d.totalHash.Size()
}
// BlockSize returns the hash's underlying block size.
// The Write method must be able to accept any amount
// of data, but it may operate more efficiently if all writes
// are a multiple of the block size.
func (d *digest) BlockSize() int {
return d.totalHash.BlockSize()
}
// Sum returns the Dropbox checksum of the data.
func Sum(data []byte) [Size]byte {
var d digest
d.Reset()
_, _ = d.Write(data)
var out [Size]byte
d.Sum(out[:0])
return out
}
// must implement this interface
var _ hash.Hash = (*digest)(nil)
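
The digest above implements the documented Dropbox content hash: SHA-256 over each 4 MiB block, then SHA-256 over the concatenation of the block hashes. As a hedged cross-check only, a standalone sketch using nothing but crypto/sha256 reproduces the same construction (the helper name and sample input are illustrative); for one byte 'A' it should print the same value as the {1, "1cd6ef71..."} entry in the test table that follows:

package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
)

const blockBytes = 4 * 1024 * 1024 // matches bytesPerBlock above

// dropboxContentHash hashes each 4 MiB block with SHA-256 and then hashes
// the concatenated block digests, mirroring the dbhash digest above.
func dropboxContentHash(data []byte) string {
    total := sha256.New()
    for len(data) > 0 {
        n := blockBytes
        if n > len(data) {
            n = len(data)
        }
        blockSum := sha256.Sum256(data[:n])
        total.Write(blockSum[:])
        data = data[n:]
    }
    return hex.EncodeToString(total.Sum(nil))
}

func main() {
    fmt.Println(dropboxContentHash([]byte{'A'}))
}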

88
backend/dropbox/dbhash/dbhash_test.go Normal file

@@ -0,0 +1,88 @@
package dbhash_test
import (
"encoding/hex"
"fmt"
"testing"
"github.com/rclone/rclone/backend/dropbox/dbhash"
"github.com/stretchr/testify/assert"
)
func testChunk(t *testing.T, chunk int) {
data := make([]byte, chunk)
for i := 0; i < chunk; i++ {
data[i] = 'A'
}
for _, test := range []struct {
n int
want string
}{
{0, "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
{1, "1cd6ef71e6e0ff46ad2609d403dc3fee244417089aa4461245a4e4fe23a55e42"},
{2, "01e0655fb754d10418a73760f57515f4903b298e6d67dda6bf0987fa79c22c88"},
{4096, "8620913d33852befe09f16fff8fd75f77a83160d29f76f07e0276e9690903035"},
{4194303, "647c8627d70f7a7d13ce96b1e7710a771a55d41a62c3da490d92e56044d311fa"},
{4194304, "d4d63bac5b866c71620185392a8a6218ac1092454a2d16f820363b69852befa3"},
{4194305, "8f553da8d00d0bf509d8470e242888be33019c20c0544811f5b2b89e98360b92"},
{8388607, "83b30cf4fb5195b04a937727ae379cf3d06673bf8f77947f6a92858536e8369c"},
{8388608, "e08b3ba1f538804075c5f939accdeaa9efc7b5c01865c94a41e78ca6550a88e7"},
{8388609, "02c8a4aefc2bfc9036f89a7098001865885938ca580e5c9e5db672385edd303c"},
} {
d := dbhash.New()
var toWrite int
for toWrite = test.n; toWrite >= chunk; toWrite -= chunk {
n, err := d.Write(data)
assert.Nil(t, err)
assert.Equal(t, chunk, n)
}
n, err := d.Write(data[:toWrite])
assert.Nil(t, err)
assert.Equal(t, toWrite, n)
got := hex.EncodeToString(d.Sum(nil))
assert.Equal(t, test.want, got, fmt.Sprintf("when testing length %d", n))
}
}
func TestHashChunk16M(t *testing.T) { testChunk(t, 16*1024*1024) }
func TestHashChunk8M(t *testing.T) { testChunk(t, 8*1024*1024) }
func TestHashChunk4M(t *testing.T) { testChunk(t, 4*1024*1024) }
func TestHashChunk2M(t *testing.T) { testChunk(t, 2*1024*1024) }
func TestHashChunk1M(t *testing.T) { testChunk(t, 1*1024*1024) }
func TestHashChunk64k(t *testing.T) { testChunk(t, 64*1024) }
func TestHashChunk32k(t *testing.T) { testChunk(t, 32*1024) }
func TestHashChunk2048(t *testing.T) { testChunk(t, 2048) }
func TestHashChunk2047(t *testing.T) { testChunk(t, 2047) }
func TestSumCalledTwice(t *testing.T) {
d := dbhash.New()
assert.NotPanics(t, func() { d.Sum(nil) })
d.Reset()
assert.NotPanics(t, func() { d.Sum(nil) })
assert.NotPanics(t, func() { d.Sum(nil) })
_, _ = d.Write([]byte{1})
assert.Panics(t, func() { d.Sum(nil) })
}
func TestSize(t *testing.T) {
d := dbhash.New()
assert.Equal(t, 32, d.Size())
}
func TestBlockSize(t *testing.T) {
d := dbhash.New()
assert.Equal(t, 64, d.BlockSize())
}
func TestSum(t *testing.T) {
assert.Equal(t,
[64]byte{
0x1c, 0xd6, 0xef, 0x71, 0xe6, 0xe0, 0xff, 0x46,
0xad, 0x26, 0x09, 0xd4, 0x03, 0xdc, 0x3f, 0xee,
0x24, 0x44, 0x17, 0x08, 0x9a, 0xa4, 0x46, 0x12,
0x45, 0xa4, 0xe4, 0xfe, 0x23, 0xa5, 0x5e, 0x42,
},
dbhash.Sum([]byte{'A'}),
)
}

1209
backend/dropbox/dropbox.go Executable file

File diff suppressed because it is too large

26
backend/dropbox/dropbox_test.go Normal file

@@ -0,0 +1,26 @@
// Test Dropbox filesystem interface
package dropbox
import (
"testing"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestDropbox:",
NilObject: (*Object)(nil),
ChunkedUpload: fstests.ChunkedUploadConfig{
MaxChunkSize: maxChunkSize,
},
})
}
func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
var _ fstests.SetUploadChunkSizer = (*Fs)(nil)

393
backend/fichier/api.go Normal file

@@ -0,0 +1,393 @@
package fichier
import (
"context"
"io"
"net/http"
"regexp"
"strconv"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/rest"
)
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
429, // Too Many Requests.
403, // Forbidden (may happen when request limit is exceeded)
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
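// Every API helper below follows the same idiom: wrap the HTTP call in
// f.pacer.Call so that errors classified by shouldRetry are retried with
// backoff up to --low-level-retries. A hedged, generic sketch of that
// idiom (callAPI is illustrative and not part of this backend):
func (f *Fs) callAPI(ctx context.Context, opts *rest.Opts, request, response interface{}) error {
    return f.pacer.Call(func() (bool, error) {
        resp, err := f.rest.CallJSON(ctx, opts, request, response)
        return shouldRetry(resp, err)
    })
}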
var isAlphaNumeric = regexp.MustCompile(`^[a-zA-Z0-9]+$`).MatchString
func (f *Fs) getDownloadToken(ctx context.Context, url string) (*GetTokenResponse, error) {
request := DownloadRequest{
URL: url,
Single: 1,
}
opts := rest.Opts{
Method: "POST",
Path: "/download/get_token.cgi",
}
var token GetTokenResponse
err := f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, &token)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't list files")
}
return &token, nil
}
func fileFromSharedFile(file *SharedFile) File {
return File{
URL: file.Link,
Filename: file.Filename,
Size: file.Size,
}
}
func (f *Fs) listSharedFiles(ctx context.Context, id string) (entries fs.DirEntries, err error) {
opts := rest.Opts{
Method: "GET",
RootURL: "https://1fichier.com/dir/",
Path: id,
Parameters: map[string][]string{"json": {"1"}},
}
var sharedFiles SharedFolderResponse
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, nil, &sharedFiles)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't list files")
}
entries = make([]fs.DirEntry, len(sharedFiles))
for i, sharedFile := range sharedFiles {
entries[i] = f.newObjectFromFile(ctx, "", fileFromSharedFile(&sharedFile))
}
return entries, nil
}
func (f *Fs) listFiles(ctx context.Context, directoryID int) (filesList *FilesList, err error) {
// fs.Debugf(f, "Requesting files for dir `%s`", directoryID)
request := ListFilesRequest{
FolderID: directoryID,
}
opts := rest.Opts{
Method: "POST",
Path: "/file/ls.cgi",
}
filesList = &FilesList{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, filesList)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't list files")
}
for i := range filesList.Items {
item := &filesList.Items[i]
item.Filename = f.opt.Enc.ToStandardName(item.Filename)
}
return filesList, nil
}
func (f *Fs) listFolders(ctx context.Context, directoryID int) (foldersList *FoldersList, err error) {
// fs.Debugf(f, "Requesting folders for id `%s`", directoryID)
request := ListFolderRequest{
FolderID: directoryID,
}
opts := rest.Opts{
Method: "POST",
Path: "/folder/ls.cgi",
}
foldersList = &FoldersList{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, foldersList)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't list folders")
}
foldersList.Name = f.opt.Enc.ToStandardName(foldersList.Name)
for i := range foldersList.SubFolders {
folder := &foldersList.SubFolders[i]
folder.Name = f.opt.Enc.ToStandardName(folder.Name)
}
// fs.Debugf(f, "Got FoldersList for id `%s`", directoryID)
return foldersList, err
}
func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return nil, err
}
folderID, err := strconv.Atoi(directoryID)
if err != nil {
return nil, err
}
files, err := f.listFiles(ctx, folderID)
if err != nil {
return nil, err
}
folders, err := f.listFolders(ctx, folderID)
if err != nil {
return nil, err
}
entries = make([]fs.DirEntry, len(files.Items)+len(folders.SubFolders))
for i, item := range files.Items {
entries[i] = f.newObjectFromFile(ctx, dir, item)
}
for i, folder := range folders.SubFolders {
createDate, err := time.Parse("2006-01-02 15:04:05", folder.CreateDate)
if err != nil {
return nil, err
}
fullPath := getRemote(dir, folder.Name)
folderID := strconv.Itoa(folder.ID)
entries[len(files.Items)+i] = fs.NewDir(fullPath, createDate).SetID(folderID)
// fs.Debugf(f, "Put Path `%s` for id `%d` into dircache", fullPath, folder.ID)
f.dirCache.Put(fullPath, folderID)
}
return entries, nil
}
func (f *Fs) newObjectFromFile(ctx context.Context, dir string, item File) *Object {
return &Object{
fs: f,
remote: getRemote(dir, item.Filename),
file: item,
}
}
func getRemote(dir, fileName string) string {
if dir == "" {
return fileName
}
return dir + "/" + fileName
}
func (f *Fs) makeFolder(ctx context.Context, leaf string, folderID int) (response *MakeFolderResponse, err error) {
name := f.opt.Enc.FromStandardName(leaf)
// fs.Debugf(f, "Creating folder `%s` in id `%s`", name, directoryID)
request := MakeFolderRequest{
FolderID: folderID,
Name: name,
}
opts := rest.Opts{
Method: "POST",
Path: "/folder/mkdir.cgi",
}
response = &MakeFolderResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, response)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't create folder")
}
// fs.Debugf(f, "Created Folder `%s` in id `%s`", name, directoryID)
return response, err
}
func (f *Fs) removeFolder(ctx context.Context, name string, folderID int) (response *GenericOKResponse, err error) {
// fs.Debugf(f, "Removing folder with id `%s`", directoryID)
request := &RemoveFolderRequest{
FolderID: folderID,
}
opts := rest.Opts{
Method: "POST",
Path: "/folder/rm.cgi",
}
response = &GenericOKResponse{}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't remove folder")
}
if response.Status != "OK" {
return nil, errors.New("Can't remove non-empty dir")
}
// fs.Debugf(f, "Removed Folder with id `%s`", directoryID)
return response, nil
}
func (f *Fs) deleteFile(ctx context.Context, url string) (response *GenericOKResponse, err error) {
request := &RemoveFileRequest{
Files: []RmFile{
{url},
},
}
opts := rest.Opts{
Method: "POST",
Path: "/file/rm.cgi",
}
response = &GenericOKResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't remove file")
}
// fs.Debugf(f, "Removed file with url `%s`", url)
return response, nil
}
func (f *Fs) getUploadNode(ctx context.Context) (response *GetUploadNodeResponse, err error) {
// fs.Debugf(f, "Requesting Upload node")
opts := rest.Opts{
Method: "GET",
ContentType: "application/json", // 1Fichier API is bad
Path: "/upload/get_upload_server.cgi",
}
response = &GetUploadNodeResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, nil, response)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "didnt got an upload node")
}
// fs.Debugf(f, "Got Upload node")
return response, err
}
func (f *Fs) uploadFile(ctx context.Context, in io.Reader, size int64, fileName, folderID, uploadID, node string, options ...fs.OpenOption) (response *http.Response, err error) {
// fs.Debugf(f, "Uploading File `%s`", fileName)
fileName = f.opt.Enc.FromStandardName(fileName)
if len(uploadID) > 10 || !isAlphaNumeric(uploadID) {
return nil, errors.New("Invalid UploadID")
}
opts := rest.Opts{
Method: "POST",
Path: "/upload.cgi",
Parameters: map[string][]string{
"id": {uploadID},
},
NoResponse: true,
Body: in,
ContentLength: &size,
Options: options,
MultipartContentName: "file[]",
MultipartFileName: fileName,
MultipartParams: map[string][]string{
"did": {folderID},
},
}
if node != "" {
opts.RootURL = "https://" + node
}
err = f.pacer.CallNoRetry(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, nil, nil)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't upload file")
}
// fs.Debugf(f, "Uploaded File `%s`", fileName)
return response, err
}
func (f *Fs) endUpload(ctx context.Context, uploadID string, nodeurl string) (response *EndFileUploadResponse, err error) {
// fs.Debugf(f, "Ending File Upload `%s`", uploadID)
if len(uploadID) > 10 || !isAlphaNumeric(uploadID) {
return nil, errors.New("Invalid UploadID")
}
opts := rest.Opts{
Method: "GET",
Path: "/end.pl",
RootURL: "https://" + nodeurl,
Parameters: map[string][]string{
"xid": {uploadID},
},
ExtraHeaders: map[string]string{
"JSON": "1",
},
}
response = &EndFileUploadResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, nil, response)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't finish file upload")
}
return response, err
}

425
backend/fichier/fichier.go Normal file

@@ -0,0 +1,425 @@
package fichier
import (
"context"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
)
const (
rootID = "0"
apiBaseURL = "https://api.1fichier.com/v1"
minSleep = 400 * time.Millisecond // api is extremely rate limited now
maxSleep = 5 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
attackConstant = 0 // start with max sleep
)
func init() {
fs.Register(&fs.RegInfo{
Name: "fichier",
Description: "1Fichier",
Config: func(name string, config configmap.Mapper) {
},
NewFs: NewFs,
Options: []fs.Option{{
Help: "Your API Key, get it from https://1fichier.com/console/params.pl",
Name: "api_key",
}, {
Help: "If you want to download a shared folder, add this parameter",
Name: "shared_folder",
Required: false,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// Characters that need escaping
//
// '\\': '＼', // FULLWIDTH REVERSE SOLIDUS
// '<': '＜', // FULLWIDTH LESS-THAN SIGN
// '>': '＞', // FULLWIDTH GREATER-THAN SIGN
// '"': '＂', // FULLWIDTH QUOTATION MARK - not on the list but seems to be reserved
// '\'': '＇', // FULLWIDTH APOSTROPHE
// '$': '＄', // FULLWIDTH DOLLAR SIGN
// '`': '｀', // FULLWIDTH GRAVE ACCENT
//
// Leading space and trailing space
Default: (encoder.Display |
encoder.EncodeBackSlash |
encoder.EncodeSingleQuote |
encoder.EncodeBackQuote |
encoder.EncodeDoubleQuote |
encoder.EncodeLtGt |
encoder.EncodeDollar |
encoder.EncodeLeftSpace |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
APIKey string `config:"api_key"`
SharedFolder string `config:"shared_folder"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs is the interface a cloud storage system must provide
type Fs struct {
root string
name string
features *fs.Features
opt Options
dirCache *dircache.DirCache
baseClient *http.Client
pacer *fs.Pacer
rest *rest.Client
}
// FindLeaf finds a directory of name leaf in the folder with ID pathID
func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) {
folderID, err := strconv.Atoi(pathID)
if err != nil {
return "", false, err
}
folders, err := f.listFolders(ctx, folderID)
if err != nil {
return "", false, err
}
for _, folder := range folders.SubFolders {
if folder.Name == leaf {
pathIDOut := strconv.Itoa(folder.ID)
return pathIDOut, true, nil
}
}
return "", false, nil
}
// CreateDir makes a directory with pathID as parent and name leaf
func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) {
folderID, err := strconv.Atoi(pathID)
if err != nil {
return "", err
}
resp, err := f.makeFolder(ctx, leaf, folderID)
if err != nil {
return "", err
}
return strconv.Itoa(resp.FolderID), err
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String returns a description of the FS
func (f *Fs) String() string {
return fmt.Sprintf("1Fichier root '%s'", f.root)
}
// Precision of the ModTimes in this Fs
func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
// Hashes returns the supported hash types of the filesystem
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.Whirlpool)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// NewFs makes a new Fs object from the path
//
// The path is of the form remote:path
//
// Remotes are looked up in the config file. If the remote isn't
// found then NotFoundInConfigFile will be returned.
//
// On Windows avoid single character remote names as they can be mixed
// up with drive letters.
func NewFs(name string, root string, config configmap.Mapper) (fs.Fs, error) {
opt := new(Options)
err := configstruct.Set(config, opt)
if err != nil {
return nil, err
}
// If using a Shared Folder override root
if opt.SharedFolder != "" {
root = ""
}
//workaround for wonky parser
root = strings.Trim(root, "/")
f := &Fs{
name: name,
root: root,
opt: *opt,
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant), pacer.AttackConstant(attackConstant))),
baseClient: &http.Client{},
}
f.features = (&fs.Features{
DuplicateFiles: true,
CanHaveEmptyDirectories: true,
}).Fill(f)
client := fshttp.NewClient(fs.Config)
f.rest = rest.NewClient(client).SetRoot(apiBaseURL)
f.rest.SetHeader("Authorization", "Bearer "+f.opt.APIKey)
f.dirCache = dircache.New(root, rootID, f)
ctx := context.Background()
// Find the current root
err = f.dirCache.FindRoot(ctx, false)
if err != nil {
// Assume it is a file
newRoot, remote := dircache.SplitPath(root)
tempF := *f
tempF.dirCache = dircache.New(newRoot, rootID, &tempF)
tempF.root = newRoot
// Make new Fs which is the parent
err = tempF.dirCache.FindRoot(ctx, false)
if err != nil {
// No root so return old f
return f, nil
}
_, err := tempF.NewObject(ctx, remote)
if err != nil {
if err == fs.ErrorObjectNotFound {
// File doesn't exist so return old f
return f, nil
}
return nil, err
}
f.features.Fill(&tempF)
// XXX: update the old f here instead of returning tempF, since
// `features` were already filled with functions having *f as a receiver.
// See https://github.com/rclone/rclone/issues/2182
f.dirCache = tempF.dirCache
f.root = tempF.root
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
return f, nil
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
if f.opt.SharedFolder != "" {
return f.listSharedFiles(ctx, f.opt.SharedFolder)
}
dirContent, err := f.listDir(ctx, dir)
if err != nil {
return nil, err
}
return dirContent, nil
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, false)
if err != nil {
if err == fs.ErrorDirNotFound {
return nil, fs.ErrorObjectNotFound
}
return nil, err
}
folderID, err := strconv.Atoi(directoryID)
if err != nil {
return nil, err
}
files, err := f.listFiles(ctx, folderID)
if err != nil {
return nil, err
}
for _, file := range files.Items {
if file.Filename == leaf {
path, ok := f.dirCache.GetInv(directoryID)
if !ok {
return nil, errors.New("Cannot find dir in dircache")
}
return f.newObjectFromFile(ctx, path, file), nil
}
}
return nil, fs.ErrorObjectNotFound
}
// Put in to the remote path with the modTime given of the given size
//
// When called from outside an Fs by rclone, src.Size() will always be >= 0.
// But for unknown-sized objects (indicated by src.Size() == -1), Put should either
// return an error or upload it properly (rather than e.g. calling panic).
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
existingObj, err := f.NewObject(ctx, src.Remote())
switch err {
case nil:
return existingObj, existingObj.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
// Not found so create it
return f.PutUnchecked(ctx, in, src, options...)
default:
return nil, err
}
}
// putUnchecked uploads the object with the given name and size
//
// This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that.
func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size int64, options ...fs.OpenOption) (fs.Object, error) {
if size > int64(100e9) {
return nil, errors.New("File too big, cant upload")
} else if size == 0 {
return nil, fs.ErrorCantUploadEmptyFiles
}
nodeResponse, err := f.getUploadNode(ctx)
if err != nil {
return nil, err
}
leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, true)
if err != nil {
return nil, err
}
_, err = f.uploadFile(ctx, in, size, leaf, directoryID, nodeResponse.ID, nodeResponse.URL, options...)
if err != nil {
return nil, err
}
fileUploadResponse, err := f.endUpload(ctx, nodeResponse.ID, nodeResponse.URL)
if err != nil {
return nil, err
}
if len(fileUploadResponse.Links) != 1 {
return nil, errors.New("unexpected amount of files")
}
link := fileUploadResponse.Links[0]
fileSize, err := strconv.ParseInt(link.Size, 10, 64)
if err != nil {
return nil, err
}
return &Object{
fs: f,
remote: remote,
file: File{
ACL: 0,
CDN: 0,
Checksum: link.Whirlpool,
ContentType: "",
Date: time.Now().Format("2006-01-02 15:04:05"),
Filename: link.Filename,
Pass: 0,
Size: fileSize,
URL: link.Download,
},
}, nil
}
// PutUnchecked uploads the object
//
// This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that.
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.putUnchecked(ctx, in, src.Remote(), src.Size(), options...)
}
// Mkdir makes the directory (container, bucket)
//
// Shouldn't return an error if it already exists
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
_, err := f.dirCache.FindDir(ctx, dir, true)
return err
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return err
}
folderID, err := strconv.Atoi(directoryID)
if err != nil {
return err
}
_, err = f.removeFolder(ctx, dir, folderID)
if err != nil {
return err
}
f.dirCache.FlushDir(dir)
return nil
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)
_ dircache.DirCacher = (*Fs)(nil)
)

17
backend/fichier/fichier_test.go Normal file

@@ -0,0 +1,17 @@
// Test 1Fichier filesystem interface
package fichier
import (
"testing"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fs.Config.LogLevel = fs.LogLevelDebug
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFichier:",
})
}

158
backend/fichier/object.go Normal file

@@ -0,0 +1,158 @@
package fichier
import (
"context"
"io"
"net/http"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/rest"
)
// Object is a filesystem like object provided by an Fs
type Object struct {
fs *Fs
remote string
file File
}
// String returns a description of the Object
func (o *Object) String() string {
return o.file.Filename
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// ModTime returns the modification date of the file
// It should return a best guess if one isn't available
func (o *Object) ModTime(ctx context.Context) time.Time {
modTime, err := time.Parse("2006-01-02 15:04:05", o.file.Date)
if err != nil {
return time.Now()
}
return modTime
}
// Size returns the size of the file
func (o *Object) Size() int64 {
return o.file.Size
}
// Fs returns read only access to the Fs that this object is part of
func (o *Object) Fs() fs.Info {
return o.fs
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.Whirlpool {
return "", hash.ErrUnsupported
}
return o.file.Checksum, nil
}
// Storable says whether this object can be stored
func (o *Object) Storable() bool {
return true
}
// SetModTime sets the metadata on the object to set the modification date
func (o *Object) SetModTime(context.Context, time.Time) error {
return fs.ErrorCantSetModTime
//return errors.New("setting modtime is not supported for 1fichier remotes")
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) {
fs.FixRangeOption(options, o.file.Size)
downloadToken, err := o.fs.getDownloadToken(ctx, o.file.URL)
if err != nil {
return nil, err
}
var resp *http.Response
opts := rest.Opts{
Method: "GET",
RootURL: downloadToken.URL,
Options: options,
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.rest.Call(ctx, &opts)
return shouldRetry(resp, err)
})
if err != nil {
return nil, err
}
return resp.Body, err
}
// Update in to the object with the modTime given of the given size
//
// When called from outside an Fs by rclone, src.Size() will always be >= 0.
// But for unknown-sized objects (indicated by src.Size() == -1), Upload should either
// return an error or update the object properly (rather than e.g. calling panic).
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
if src.Size() < 0 {
return errors.New("refusing to update with unknown size")
}
// upload with new size but old name
info, err := o.fs.putUnchecked(ctx, in, o.Remote(), src.Size(), options...)
if err != nil {
return err
}
// Delete duplicate after successful upload
err = o.Remove(ctx)
if err != nil {
return errors.Wrap(err, "failed to remove old version")
}
// Replace guts of old object with new one
*o = *info.(*Object)
return nil
}
// Remove removes this object
func (o *Object) Remove(ctx context.Context) error {
// fs.Debugf(f, "Removing file `%s` with url `%s`", o.file.Filename, o.file.URL)
_, err := o.fs.deleteFile(ctx, o.file.URL)
if err != nil {
return err
}
return nil
}
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType(ctx context.Context) string {
return o.file.ContentType
}
// ID returns the ID of the Object if known, or "" if not
func (o *Object) ID() string {
return o.file.URL
}
// Check the interfaces are satisfied
var (
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
)

120
backend/fichier/structs.go Normal file

@@ -0,0 +1,120 @@
package fichier
// ListFolderRequest is the request structure of the corresponding request
type ListFolderRequest struct {
FolderID int `json:"folder_id"`
}
// ListFilesRequest is the request structure of the corresponding request
type ListFilesRequest struct {
FolderID int `json:"folder_id"`
}
// DownloadRequest is the request structure of the corresponding request
type DownloadRequest struct {
URL string `json:"url"`
Single int `json:"single"`
}
// RemoveFolderRequest is the request structure of the corresponding request
type RemoveFolderRequest struct {
FolderID int `json:"folder_id"`
}
// RemoveFileRequest is the request structure of the corresponding request
type RemoveFileRequest struct {
Files []RmFile `json:"files"`
}
// RmFile is the request structure of the corresponding request
type RmFile struct {
URL string `json:"url"`
}
// GenericOKResponse is the response structure of the corresponding request
type GenericOKResponse struct {
Status string `json:"status"`
Message string `json:"message"`
}
// MakeFolderRequest is the request structure of the corresponding request
type MakeFolderRequest struct {
Name string `json:"name"`
FolderID int `json:"folder_id"`
}
// MakeFolderResponse is the response structure of the corresponding request
type MakeFolderResponse struct {
Name string `json:"name"`
FolderID int `json:"folder_id"`
}
// GetUploadNodeResponse is the response structure of the corresponding request
type GetUploadNodeResponse struct {
ID string `json:"id"`
URL string `json:"url"`
}
// GetTokenResponse is the response structure of the corresponding request
type GetTokenResponse struct {
URL string `json:"url"`
Status string `json:"Status"`
Message string `json:"Message"`
}
// SharedFolderResponse is the response structure of the corresponding request
type SharedFolderResponse []SharedFile
// SharedFile is the structure how 1Fichier returns a shared File
type SharedFile struct {
Filename string `json:"filename"`
Link string `json:"link"`
Size int64 `json:"size"`
}
// EndFileUploadResponse is the response structure of the corresponding request
type EndFileUploadResponse struct {
Incoming int `json:"incoming"`
Links []struct {
Download string `json:"download"`
Filename string `json:"filename"`
Remove string `json:"remove"`
Size string `json:"size"`
Whirlpool string `json:"whirlpool"`
} `json:"links"`
}
// File is the structure how 1Fichier returns a File
type File struct {
ACL int `json:"acl"`
CDN int `json:"cdn"`
Checksum string `json:"checksum"`
ContentType string `json:"content-type"`
Date string `json:"date"`
Filename string `json:"filename"`
Pass int `json:"pass"`
Size int64 `json:"size"`
URL string `json:"url"`
}
// FilesList is the structure how 1Fichier returns a list of files
type FilesList struct {
Items []File `json:"items"`
Status string `json:"Status"`
}
// Folder is the structure how 1Fichier returns a Folder
type Folder struct {
CreateDate string `json:"create_date"`
ID int `json:"id"`
Name string `json:"name"`
Pass int `json:"pass"`
}
// FoldersList is the structure how 1Fichier returns a list of Folders
type FoldersList struct {
FolderID int `json:"folder_id"`
Name string `json:"name"`
Status string `json:"Status"`
SubFolders []Folder `json:"sub_folders"`
}

977
backend/ftp/ftp.go Normal file

@@ -0,0 +1,977 @@
// Package ftp interfaces with FTP servers
package ftp
import (
"context"
"crypto/tls"
"io"
"net/textproto"
"os"
"path"
"runtime"
"strings"
"sync"
"time"
"github.com/jlaffaye/ftp"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "ftp",
Description: "FTP Connection",
NewFs: NewFs,
Options: []fs.Option{{
Name: "host",
Help: "FTP host to connect to",
Required: true,
Examples: []fs.OptionExample{{
Value: "ftp.example.com",
Help: "Connect to ftp.example.com",
}},
}, {
Name: "user",
Help: "FTP username, leave blank for current username, " + os.Getenv("USER"),
}, {
Name: "port",
Help: "FTP port, leave blank to use default (21)",
}, {
Name: "pass",
Help: "FTP password",
IsPassword: true,
Required: true,
}, {
Name: "tls",
Help: `Use FTPS over TLS (Implicit)
When using implicit FTP over TLS the client will connect using TLS
right from the start, which in turn breaks the compatibility with
non-TLS-aware servers. This is usually served over port 990 rather
than port 21. Cannot be used in combination with explicit FTP.`,
Default: false,
}, {
Name: "explicit_tls",
Help: `Use FTP over TLS (Explicit)
When using explicit FTP over TLS the client explicitly request
security from the server in order to upgrade a plain text connection
to an encrypted one. Cannot be used in combination with implicit FTP.`,
Default: false,
}, {
Name: "concurrency",
Help: "Maximum number of FTP simultaneous connections, 0 for unlimited",
Default: 0,
Advanced: true,
}, {
Name: "no_check_certificate",
Help: "Do not verify the TLS certificate of the server",
Default: false,
Advanced: true,
}, {
Name: "disable_epsv",
Help: "Disable using EPSV even if server advertises support",
Default: false,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// The FTP protocol can't handle trailing spaces (for instance
// pureftpd turns them into _)
//
// proftpd can't handle '*' in file names
// pureftpd can't handle '[', ']' or '*'
Default: (encoder.Display |
encoder.EncodeRightSpace),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
TLS bool `config:"tls"`
ExplicitTLS bool `config:"explicit_tls"`
Concurrency int `config:"concurrency"`
SkipVerifyTLSCert bool `config:"no_check_certificate"`
DisableEPSV bool `config:"disable_epsv"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote FTP server
type Fs struct {
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed options
features *fs.Features // optional features
url string
user string
pass string
dialAddr string
poolMu sync.Mutex
pool []*ftp.ServerConn
tokens *pacer.TokenDispenser
}
// Object describes an FTP file
type Object struct {
fs *Fs
remote string
info *FileInfo
}
// FileInfo is the metadata known about an FTP file
type FileInfo struct {
Name string
Size uint64
ModTime time.Time
IsDir bool
}
// ------------------------------------------------------------
// Name of this fs
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String returns a description of the FS
func (f *Fs) String() string {
return f.url
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Enable debugging output
type debugLog struct {
mu sync.Mutex
auth bool
}
// Write writes len(p) bytes from p to the underlying data stream. It returns
// the number of bytes written from p (0 <= n <= len(p)) and any error
// encountered that caused the write to stop early. Write must return a non-nil
// error if it returns n < len(p). Write must not modify the slice data, even
// temporarily.
//
// Implementations must not retain p.
//
// This writes debug info to the log
func (dl *debugLog) Write(p []byte) (n int, err error) {
dl.mu.Lock()
defer dl.mu.Unlock()
_, file, _, ok := runtime.Caller(1)
direction := "FTP Rx"
if ok && strings.Contains(file, "multi") {
direction = "FTP Tx"
}
lines := strings.Split(string(p), "\r\n")
if lines[len(lines)-1] == "" {
lines = lines[:len(lines)-1]
}
for _, line := range lines {
if !dl.auth && strings.HasPrefix(line, "PASS") {
fs.Debugf(direction, "PASS *****")
continue
}
fs.Debugf(direction, "%q", line)
}
return len(p), nil
}
// Open a new connection to the FTP server.
func (f *Fs) ftpConnection() (*ftp.ServerConn, error) {
fs.Debugf(f, "Connecting to FTP server")
ftpConfig := []ftp.DialOption{ftp.DialWithTimeout(fs.Config.ConnectTimeout)}
if f.opt.TLS && f.opt.ExplicitTLS {
fs.Errorf(f, "Implicit TLS and explicit TLS are mutually incompatible. Please revise your config")
return nil, errors.New("Implicit TLS and explicit TLS are mutually incompatible. Please revise your config")
} else if f.opt.TLS {
tlsConfig := &tls.Config{
ServerName: f.opt.Host,
InsecureSkipVerify: f.opt.SkipVerifyTLSCert,
}
ftpConfig = append(ftpConfig, ftp.DialWithTLS(tlsConfig))
} else if f.opt.ExplicitTLS {
tlsConfig := &tls.Config{
ServerName: f.opt.Host,
InsecureSkipVerify: f.opt.SkipVerifyTLSCert,
}
ftpConfig = append(ftpConfig, ftp.DialWithExplicitTLS(tlsConfig))
}
if f.opt.DisableEPSV {
ftpConfig = append(ftpConfig, ftp.DialWithDisabledEPSV(true))
}
if fs.Config.Dump&(fs.DumpHeaders|fs.DumpBodies|fs.DumpRequests|fs.DumpResponses) != 0 {
ftpConfig = append(ftpConfig, ftp.DialWithDebugOutput(&debugLog{auth: fs.Config.Dump&fs.DumpAuth != 0}))
}
c, err := ftp.Dial(f.dialAddr, ftpConfig...)
if err != nil {
fs.Errorf(f, "Error while Dialing %s: %s", f.dialAddr, err)
return nil, errors.Wrap(err, "ftpConnection Dial")
}
err = c.Login(f.user, f.pass)
if err != nil {
_ = c.Quit()
fs.Errorf(f, "Error while Logging in into %s: %s", f.dialAddr, err)
return nil, errors.Wrap(err, "ftpConnection Login")
}
return c, nil
}
// Get an FTP connection from the pool, or open a new one
func (f *Fs) getFtpConnection() (c *ftp.ServerConn, err error) {
if f.opt.Concurrency > 0 {
f.tokens.Get()
}
f.poolMu.Lock()
if len(f.pool) > 0 {
c = f.pool[0]
f.pool = f.pool[1:]
}
f.poolMu.Unlock()
if c != nil {
return c, nil
}
c, err = f.ftpConnection()
if err != nil && f.opt.Concurrency > 0 {
f.tokens.Put()
}
return c, err
}
// Return an FTP connection to the pool
//
// It nils the pointed to connection out so it can't be reused
//
// if err is not nil then it checks the connection is alive using a
// NOOP request
func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
if f.opt.Concurrency > 0 {
defer f.tokens.Put()
}
if pc == nil {
return
}
c := *pc
if c == nil {
return
}
*pc = nil
if err != nil {
// If not a regular FTP error code then check the connection
_, isRegularError := errors.Cause(err).(*textproto.Error)
if !isRegularError {
nopErr := c.NoOp()
if nopErr != nil {
fs.Debugf(f, "Connection failed, closing: %v", nopErr)
_ = c.Quit()
return
}
}
}
f.poolMu.Lock()
f.pool = append(f.pool, c)
f.poolMu.Unlock()
}
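// The pool above is bounded by the TokenDispenser created in NewFs:
// every getFtpConnection takes a token and every putFtpConnection returns
// one, so at most opt.Concurrency connections are in flight at once. A
// hedged sketch of that pattern (poolDemo and its constants are
// illustrative, not part of this backend):
func poolDemo() {
    const maxConcurrent = 2
    tokens := pacer.NewTokenDispenser(maxConcurrent)
    var wg sync.WaitGroup
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            tokens.Get()       // blocks until one of the maxConcurrent slots is free
            defer tokens.Put() // hand the slot back when done
            fs.Debugf(nil, "worker %d holds a connection slot", i)
        }(i)
    }
    wg.Wait()
}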
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
ctx := context.Background()
// defer fs.Trace(nil, "name=%q, root=%q", name, root)("fs=%v, err=%v", &ff, &err)
// Parse config into Options struct
opt := new(Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
}
pass, err := obscure.Reveal(opt.Pass)
if err != nil {
return nil, errors.Wrap(err, "NewFS decrypt password")
}
user := opt.User
if user == "" {
user = os.Getenv("USER")
}
port := opt.Port
if port == "" {
port = "21"
}
dialAddr := opt.Host + ":" + port
protocol := "ftp://"
if opt.TLS {
protocol = "ftps://"
}
u := protocol + path.Join(dialAddr+"/", root)
f := &Fs{
name: name,
root: root,
opt: *opt,
url: u,
user: user,
pass: pass,
dialAddr: dialAddr,
tokens: pacer.NewTokenDispenser(opt.Concurrency),
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(f)
// Make a connection and pool it to return errors early
c, err := f.getFtpConnection()
if err != nil {
return nil, errors.Wrap(err, "NewFs")
}
f.putFtpConnection(&c, nil)
if root != "" {
// Check to see if the root actually an existing file
remote := path.Base(root)
f.root = path.Dir(root)
if f.root == "." {
f.root = ""
}
_, err := f.NewObject(ctx, remote)
if err != nil {
if err == fs.ErrorObjectNotFound || errors.Cause(err) == fs.ErrorNotAFile {
// File doesn't exist so return old f
f.root = root
return f, nil
}
return nil, err
}
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
return f, err
}
// translateErrorFile turns FTP errors into rclone errors if possible for a file
func translateErrorFile(err error) error {
switch errX := err.(type) {
case *textproto.Error:
switch errX.Code {
case ftp.StatusFileUnavailable, ftp.StatusFileActionIgnored:
err = fs.ErrorObjectNotFound
}
}
return err
}
// translateErrorDir turns FTP errors into rclone errors if possible for a directory
func translateErrorDir(err error) error {
switch errX := err.(type) {
case *textproto.Error:
switch errX.Code {
case ftp.StatusFileUnavailable, ftp.StatusFileActionIgnored:
err = fs.ErrorDirNotFound
}
}
return err
}
// entryToStandard converts an incoming ftp.Entry to Standard encoding
func (f *Fs) entryToStandard(entry *ftp.Entry) {
// Skip . and .. as we don't want these encoded
if entry.Name == "." || entry.Name == ".." {
return
}
entry.Name = f.opt.Enc.ToStandardName(entry.Name)
entry.Target = f.opt.Enc.ToStandardPath(entry.Target)
}
// dirFromStandardPath returns dir in encoded form.
func (f *Fs) dirFromStandardPath(dir string) string {
// Skip . and .. as we don't want these encoded
if dir == "." || dir == ".." {
return dir
}
return f.opt.Enc.FromStandardPath(dir)
}
// findItem finds a directory entry for the name in its parent directory
func (f *Fs) findItem(remote string) (entry *ftp.Entry, err error) {
// defer fs.Trace(remote, "")("o=%v, err=%v", &o, &err)
fullPath := path.Join(f.root, remote)
if fullPath == "" || fullPath == "." || fullPath == "/" {
// if root, assume exists and synthesize an entry
return &ftp.Entry{
Name: "",
Type: ftp.EntryTypeFolder,
Time: time.Now(),
}, nil
}
dir := path.Dir(fullPath)
base := path.Base(fullPath)
c, err := f.getFtpConnection()
if err != nil {
return nil, errors.Wrap(err, "findItem")
}
files, err := c.List(f.dirFromStandardPath(dir))
f.putFtpConnection(&c, err)
if err != nil {
return nil, translateErrorFile(err)
}
for _, file := range files {
f.entryToStandard(file)
if file.Name == base {
return file, nil
}
}
return nil, nil
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (o fs.Object, err error) {
// defer fs.Trace(remote, "")("o=%v, err=%v", &o, &err)
entry, err := f.findItem(remote)
if err != nil {
return nil, err
}
if entry != nil && entry.Type != ftp.EntryTypeFolder {
o := &Object{
fs: f,
remote: remote,
}
info := &FileInfo{
Name: remote,
Size: entry.Size,
ModTime: entry.Time,
}
o.info = info
return o, nil
}
return nil, fs.ErrorObjectNotFound
}
// dirExists checks the directory pointed to by remote exists or not
func (f *Fs) dirExists(remote string) (exists bool, err error) {
entry, err := f.findItem(remote)
if err != nil {
return false, errors.Wrap(err, "dirExists")
}
if entry != nil && entry.Type == ftp.EntryTypeFolder {
return true, nil
}
return false, nil
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
// defer log.Trace(dir, "dir=%q", dir)("entries=%v, err=%v", &entries, &err)
c, err := f.getFtpConnection()
if err != nil {
return nil, errors.Wrap(err, "list")
}
var listErr error
var files []*ftp.Entry
resultchan := make(chan []*ftp.Entry, 1)
errchan := make(chan error, 1)
go func() {
result, err := c.List(f.dirFromStandardPath(path.Join(f.root, dir)))
f.putFtpConnection(&c, err)
if err != nil {
errchan <- err
return
}
resultchan <- result
}()
// Wait for List for up to Timeout seconds
timer := time.NewTimer(fs.Config.Timeout)
select {
case listErr = <-errchan:
timer.Stop()
return nil, translateErrorDir(listErr)
case files = <-resultchan:
timer.Stop()
case <-timer.C:
// if timer fired assume no error but connection dead
fs.Errorf(f, "Timeout when waiting for List")
return nil, errors.New("Timeout when waiting for List")
}
// Annoyingly FTP returns success for a directory which
// doesn't exist, so check it really doesn't exist if no
// entries found.
if len(files) == 0 {
exists, err := f.dirExists(dir)
if err != nil {
return nil, errors.Wrap(err, "list")
}
if !exists {
return nil, fs.ErrorDirNotFound
}
}
for i := range files {
object := files[i]
f.entryToStandard(object)
newremote := path.Join(dir, object.Name)
switch object.Type {
case ftp.EntryTypeFolder:
if object.Name == "." || object.Name == ".." {
continue
}
d := fs.NewDir(newremote, object.Time)
entries = append(entries, d)
default:
o := &Object{
fs: f,
remote: newremote,
}
info := &FileInfo{
Name: newremote,
Size: object.Size,
ModTime: object.Time,
}
o.info = info
entries = append(entries, o)
}
}
return entries, nil
}
// Hashes are not supported
func (f *Fs) Hashes() hash.Set {
return 0
}
// Precision shows Modified Time not supported
func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
// fs.Debugf(f, "Trying to put file %s", src.Remote())
err := f.mkParentDir(src.Remote())
if err != nil {
return nil, errors.Wrap(err, "Put mkParentDir failed")
}
o := &Object{
fs: f,
remote: src.Remote(),
}
err = o.Update(ctx, in, src, options...)
return o, err
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(ctx, in, src, options...)
}
// getInfo reads the FileInfo for a path
func (f *Fs) getInfo(remote string) (fi *FileInfo, err error) {
// defer fs.Trace(remote, "")("fi=%v, err=%v", &fi, &err)
dir := path.Dir(remote)
base := path.Base(remote)
c, err := f.getFtpConnection()
if err != nil {
return nil, errors.Wrap(err, "getInfo")
}
files, err := c.List(f.dirFromStandardPath(dir))
f.putFtpConnection(&c, err)
if err != nil {
return nil, translateErrorFile(err)
}
for i := range files {
file := files[i]
f.entryToStandard(file)
if file.Name == base {
info := &FileInfo{
Name: remote,
Size: file.Size,
ModTime: file.Time,
IsDir: file.Type == ftp.EntryTypeFolder,
}
return info, nil
}
}
return nil, fs.ErrorObjectNotFound
}
// mkdir makes the directory and parents using unrooted paths
func (f *Fs) mkdir(abspath string) error {
abspath = path.Clean(abspath)
if abspath == "." || abspath == "/" {
return nil
}
fi, err := f.getInfo(abspath)
if err == nil {
if fi.IsDir {
return nil
}
return fs.ErrorIsFile
} else if err != fs.ErrorObjectNotFound {
return errors.Wrapf(err, "mkdir %q failed", abspath)
}
parent := path.Dir(abspath)
err = f.mkdir(parent)
if err != nil {
return err
}
c, connErr := f.getFtpConnection()
if connErr != nil {
return errors.Wrap(connErr, "mkdir")
}
err = c.MakeDir(f.dirFromStandardPath(abspath))
f.putFtpConnection(&c, err)
switch errX := err.(type) {
case *textproto.Error:
switch errX.Code {
case ftp.StatusFileUnavailable: // dir already exists: see issue #2181
err = nil
case 521: // dir already exists: error number according to RFC 959: issue #2363
err = nil
}
}
return err
}
// mkParentDir makes the parent of remote if necessary and any
// directories above that
func (f *Fs) mkParentDir(remote string) error {
parent := path.Dir(remote)
return f.mkdir(path.Join(f.root, parent))
}
// Mkdir creates the directory if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
// defer fs.Trace(dir, "")("err=%v", &err)
root := path.Join(f.root, dir)
return f.mkdir(root)
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
c, err := f.getFtpConnection()
if err != nil {
return errors.Wrap(translateErrorFile(err), "Rmdir")
}
err = c.RemoveDir(f.dirFromStandardPath(path.Join(f.root, dir)))
f.putFtpConnection(&c, err)
return translateErrorDir(err)
}
// Move renames a remote file object
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
err := f.mkParentDir(remote)
if err != nil {
return nil, errors.Wrap(err, "Move mkParentDir failed")
}
c, err := f.getFtpConnection()
if err != nil {
return nil, errors.Wrap(err, "Move")
}
err = c.Rename(
f.opt.Enc.FromStandardPath(path.Join(srcObj.fs.root, srcObj.remote)),
f.opt.Enc.FromStandardPath(path.Join(f.root, remote)),
)
f.putFtpConnection(&c, err)
if err != nil {
return nil, errors.Wrap(err, "Move Rename failed")
}
dstObj, err := f.NewObject(ctx, remote)
if err != nil {
return nil, errors.Wrap(err, "Move NewObject failed")
}
return dstObj, nil
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error {
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
srcPath := path.Join(srcFs.root, srcRemote)
dstPath := path.Join(f.root, dstRemote)
// Check if destination exists
fi, err := f.getInfo(dstPath)
if err == nil {
if fi.IsDir {
return fs.ErrorDirExists
}
return fs.ErrorIsFile
} else if err != fs.ErrorObjectNotFound {
return errors.Wrapf(err, "DirMove getInfo failed")
}
// Make sure the parent directory exists
err = f.mkdir(path.Dir(dstPath))
if err != nil {
return errors.Wrap(err, "DirMove mkParentDir dst failed")
}
// Do the move
c, err := f.getFtpConnection()
if err != nil {
return errors.Wrap(err, "DirMove")
}
err = c.Rename(
f.dirFromStandardPath(srcPath),
f.dirFromStandardPath(dstPath),
)
f.putFtpConnection(&c, err)
if err != nil {
return errors.Wrapf(err, "DirMove Rename(%q,%q) failed", srcPath, dstPath)
}
return nil
}
// ------------------------------------------------------------
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// String version of o
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Hash returns the hash of an object returning a lowercase hex string
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
return int64(o.info.Size)
}
// ModTime returns the modification time of the object
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.info.ModTime
}
// SetModTime sets the modification time of the object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return nil
}
// Storable returns a boolean as to whether this object is storable
func (o *Object) Storable() bool {
return true
}
// ftpReadCloser implements io.ReadCloser for FTP objects.
type ftpReadCloser struct {
rc io.ReadCloser
c *ftp.ServerConn
f *Fs
err error // errors found during read
}
// Read bytes into p
func (f *ftpReadCloser) Read(p []byte) (n int, err error) {
n, err = f.rc.Read(p)
if err != nil && err != io.EOF {
f.err = err // store any errors for Close to examine
}
return
}
// Close the FTP reader and return the connection to the pool
func (f *ftpReadCloser) Close() error {
var err error
errchan := make(chan error, 1)
go func() {
errchan <- f.rc.Close()
}()
// Wait for Close for up to 60 seconds
timer := time.NewTimer(60 * time.Second)
select {
case err = <-errchan:
timer.Stop()
case <-timer.C:
// if timer fired assume no error but connection dead
fs.Errorf(f.f, "Timeout when waiting for connection Close")
f.f.putFtpConnection(nil, nil)
return nil
}
// if errors while reading or closing, dump the connection
if err != nil || f.err != nil {
_ = f.c.Quit()
f.f.putFtpConnection(nil, nil)
} else {
f.f.putFtpConnection(&f.c, nil)
}
// mask the error if it was caused by a premature close
// NB StatusAboutToSend is to work around a bug in pureftpd
// See: https://github.com/rclone/rclone/issues/3445#issuecomment-521654257
switch errX := err.(type) {
case *textproto.Error:
switch errX.Code {
case ftp.StatusTransfertAborted, ftp.StatusFileUnavailable, ftp.StatusAboutToSend:
err = nil
}
}
return err
}
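// Both List and Close above use the same shape: run the blocking library
// call in a goroutine and race it against a timer, treating the connection
// as dead if the timer wins. A hedged generic sketch of that pattern
// (callWithTimeout is illustrative, not part of this backend):
func callWithTimeout(timeout time.Duration, fn func() error) (err error, timedOut bool) {
    errchan := make(chan error, 1)
    go func() {
        errchan <- fn()
    }()
    timer := time.NewTimer(timeout)
    defer timer.Stop()
    select {
    case err = <-errchan:
        return err, false
    case <-timer.C:
        // The goroutine is left to finish on its own; the caller must
        // assume the underlying connection is unusable and discard it.
        return nil, true
    }
}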
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) {
// defer fs.Trace(o, "")("rc=%v, err=%v", &rc, &err)
path := path.Join(o.fs.root, o.remote)
var offset, limit int64 = 0, -1
for _, option := range options {
switch x := option.(type) {
case *fs.SeekOption:
offset = x.Offset
case *fs.RangeOption:
offset, limit = x.Decode(o.Size())
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
c, err := o.fs.getFtpConnection()
if err != nil {
return nil, errors.Wrap(err, "open")
}
fd, err := c.RetrFrom(o.fs.opt.Enc.FromStandardPath(path), uint64(offset))
if err != nil {
o.fs.putFtpConnection(&c, err)
return nil, errors.Wrap(err, "open")
}
rc = &ftpReadCloser{rc: readers.NewLimitedReadCloser(fd, limit), c: c, f: o.fs}
return rc, nil
}
// Update the already existing object
//
// Copy the reader into the object updating modTime and size
//
// The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
// defer fs.Trace(o, "src=%v", src)("err=%v", &err)
path := path.Join(o.fs.root, o.remote)
// remove the file if upload failed
remove := func() {
// Give the FTP server a chance to get its internal state in order after the error.
// The error may have been local in which case we closed the connection. The server
// may still be dealing with it for a moment. A sleep isn't ideal but I haven't been
// able to think of a better method to find out if the server has finished - ncw
time.Sleep(1 * time.Second)
removeErr := o.Remove(ctx)
if removeErr != nil {
fs.Debugf(o, "Failed to remove: %v", removeErr)
} else {
fs.Debugf(o, "Removed after failed upload: %v", err)
}
}
c, err := o.fs.getFtpConnection()
if err != nil {
return errors.Wrap(err, "Update")
}
err = c.Stor(o.fs.opt.Enc.FromStandardPath(path), in)
if err != nil {
_ = c.Quit() // toss this connection to avoid sync errors
remove()
o.fs.putFtpConnection(nil, err)
return errors.Wrap(err, "update stor")
}
o.fs.putFtpConnection(&c, nil)
o.info, err = o.fs.getInfo(path)
if err != nil {
return errors.Wrap(err, "update getinfo")
}
return nil
}
// Remove an object
func (o *Object) Remove(ctx context.Context) (err error) {
// defer fs.Trace(o, "")("err=%v", &err)
path := path.Join(o.fs.root, o.remote)
// Check if it's a directory or a file
info, err := o.fs.getInfo(path)
if err != nil {
return err
}
if info.IsDir {
err = o.fs.Rmdir(ctx, o.remote)
} else {
// use a separate variable for the connection error so that the result
// of Delete below is assigned to the function's named return value
// rather than to a shadowed local err
c, connErr := o.fs.getFtpConnection()
if connErr != nil {
return errors.Wrap(connErr, "Remove")
}
err = c.Delete(o.fs.opt.Enc.FromStandardPath(path))
o.fs.putFtpConnection(&c, err)
}
return err
}
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}
_ fs.Mover = &Fs{}
_ fs.DirMover = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.Object = &Object{}
)

backend/ftp/ftp_test.go (new file)

@@ -0,0 +1,48 @@
// Test FTP filesystem interface
package ftp_test
import (
"testing"
"github.com/rclone/rclone/backend/ftp"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFTPProftpd:",
NilObject: (*ftp.Object)(nil),
})
}
func TestIntegration2(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFTPRclone:",
NilObject: (*ftp.Object)(nil),
})
}
func TestIntegration3(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFTPPureftpd:",
NilObject: (*ftp.Object)(nil),
})
}
// func TestIntegration4(t *testing.T) {
// if *fstest.RemoteName != "" {
// t.Skip("skipping as -remote is set")
// }
// fstests.Run(t, &fstests.Opt{
// RemoteName: "TestFTPVsftpd:",
// NilObject: (*ftp.Object)(nil),
// })
// }

File diff suppressed because it is too large

@@ -0,0 +1,18 @@
// Test GoogleCloudStorage filesystem interface
package googlecloudstorage_test
import (
"testing"
"github.com/rclone/rclone/backend/googlecloudstorage"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestGoogleCloudStorage:",
NilObject: (*googlecloudstorage.Object)(nil),
})
}


@@ -0,0 +1,148 @@
// This file contains the albums abstraction
package googlephotos
import (
"path"
"strings"
"sync"
"github.com/rclone/rclone/backend/googlephotos/api"
)
// All the albums
type albums struct {
mu sync.Mutex
dupes map[string][]*api.Album // duplicated names
byID map[string]*api.Album //..indexed by ID
byTitle map[string]*api.Album //..indexed by Title
path map[string][]string // partial album names to directory
}
// Create a new album
func newAlbums() *albums {
return &albums{
dupes: map[string][]*api.Album{},
byID: map[string]*api.Album{},
byTitle: map[string]*api.Album{},
path: map[string][]string{},
}
}
// add an album
func (as *albums) add(album *api.Album) {
// Munge the name of the album into a sensible path name
album.Title = path.Clean(album.Title)
if album.Title == "." || album.Title == "/" {
album.Title = addID("", album.ID)
}
as.mu.Lock()
as._add(album)
as.mu.Unlock()
}
// _add an album - call with lock held
func (as *albums) _add(album *api.Album) {
// update dupes by title
dupes := as.dupes[album.Title]
dupes = append(dupes, album)
as.dupes[album.Title] = dupes
// Dedupe the album name if necessary
if len(dupes) >= 2 {
// If this is the first dupe, then we need to adjust the first one
if len(dupes) == 2 {
firstAlbum := dupes[0]
as._del(firstAlbum)
as._add(firstAlbum)
// undo add of firstAlbum to dupes
as.dupes[album.Title] = dupes
}
album.Title = addID(album.Title, album.ID)
}
// Store the new album
as.byID[album.ID] = album
as.byTitle[album.Title] = album
// Store the partial paths
dir, leaf := album.Title, ""
for dir != "" {
i := strings.LastIndex(dir, "/")
if i >= 0 {
dir, leaf = dir[:i], dir[i+1:]
} else {
dir, leaf = "", dir
}
dirs := as.path[dir]
found := false
for _, dir := range dirs {
if dir == leaf {
found = true
}
}
if !found {
as.path[dir] = append(as.path[dir], leaf)
}
}
}
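
The loop at the end of _add records every parent directory of the (possibly slash-containing) album title in as.path, which is what later lets getDirs synthesise directory listings for paths like "one/sub". The helper below extracts just that splitting logic as a hypothetical, stand-alone illustration; it omits the duplicate check the real loop performs.

package main

import (
	"fmt"
	"strings"
)

// parentEntries records, for a title like "a/b/c", the leaf "c" under
// "a/b", "b" under "a" and "a" under "" so that each level can be
// listed as a directory.
func parentEntries(title string) map[string][]string {
	out := map[string][]string{}
	dir, leaf := title, ""
	for dir != "" {
		if i := strings.LastIndex(dir, "/"); i >= 0 {
			dir, leaf = dir[:i], dir[i+1:]
		} else {
			dir, leaf = "", dir
		}
		out[dir] = append(out[dir], leaf)
	}
	return out
}

func main() {
	fmt.Println(parentEntries("one/sub"))
	// map[:[one] one:[sub]] - "sub" is listed under "one", "one" under the root
}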
// del an album
func (as *albums) del(album *api.Album) {
as.mu.Lock()
as._del(album)
as.mu.Unlock()
}
// _del an album - call with lock held
func (as *albums) _del(album *api.Album) {
// We leave in dupes so it doesn't cause albums to get renamed
// Remove from byID and byTitle
delete(as.byID, album.ID)
delete(as.byTitle, album.Title)
// Remove from paths
dir, leaf := album.Title, ""
for dir != "" {
// Can't delete if this dir exists anywhere in the path structure
if _, found := as.path[dir]; found {
break
}
i := strings.LastIndex(dir, "/")
if i >= 0 {
dir, leaf = dir[:i], dir[i+1:]
} else {
dir, leaf = "", dir
}
dirs := as.path[dir]
for i, dir := range dirs {
if dir == leaf {
dirs = append(dirs[:i], dirs[i+1:]...)
break
}
}
if len(dirs) == 0 {
delete(as.path, dir)
} else {
as.path[dir] = dirs
}
}
}
// get an album by title
func (as *albums) get(title string) (album *api.Album, ok bool) {
as.mu.Lock()
defer as.mu.Unlock()
album, ok = as.byTitle[title]
return album, ok
}
// getDirs gets directories below an album path
func (as *albums) getDirs(albumPath string) (dirs []string, ok bool) {
as.mu.Lock()
defer as.mu.Unlock()
dirs, ok = as.path[albumPath]
return dirs, ok
}


@@ -0,0 +1,311 @@
package googlephotos
import (
"testing"
"github.com/rclone/rclone/backend/googlephotos/api"
"github.com/stretchr/testify/assert"
)
func TestNewAlbums(t *testing.T) {
albums := newAlbums()
assert.NotNil(t, albums.dupes)
assert.NotNil(t, albums.byID)
assert.NotNil(t, albums.byTitle)
assert.NotNil(t, albums.path)
}
func TestAlbumsAdd(t *testing.T) {
albums := newAlbums()
assert.Equal(t, map[string][]*api.Album{}, albums.dupes)
assert.Equal(t, map[string]*api.Album{}, albums.byID)
assert.Equal(t, map[string]*api.Album{}, albums.byTitle)
assert.Equal(t, map[string][]string{}, albums.path)
a1 := &api.Album{
Title: "one",
ID: "1",
}
albums.add(a1)
assert.Equal(t, map[string][]*api.Album{
"one": {a1},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"1": a1,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"one": a1,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": {"one"},
}, albums.path)
a2 := &api.Album{
Title: "two",
ID: "2",
}
albums.add(a2)
assert.Equal(t, map[string][]*api.Album{
"one": {a1},
"two": {a2},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"1": a1,
"2": a2,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"one": a1,
"two": a2,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": {"one", "two"},
}, albums.path)
// Add a duplicate
a2a := &api.Album{
Title: "two",
ID: "2a",
}
albums.add(a2a)
assert.Equal(t, map[string][]*api.Album{
"one": {a1},
"two": {a2, a2a},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"1": a1,
"2": a2,
"2a": a2a,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"one": a1,
"two {2}": a2,
"two {2a}": a2a,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": {"one", "two {2}", "two {2a}"},
}, albums.path)
// Add a sub directory
a1sub := &api.Album{
Title: "one/sub",
ID: "1sub",
}
albums.add(a1sub)
assert.Equal(t, map[string][]*api.Album{
"one": {a1},
"two": {a2, a2a},
"one/sub": {a1sub},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"1": a1,
"2": a2,
"2a": a2a,
"1sub": a1sub,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"one": a1,
"one/sub": a1sub,
"two {2}": a2,
"two {2a}": a2a,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": {"one", "two {2}", "two {2a}"},
"one": {"sub"},
}, albums.path)
// Add a weird path
a0 := &api.Album{
Title: "/../././..////.",
ID: "0",
}
albums.add(a0)
assert.Equal(t, map[string][]*api.Album{
"{0}": {a0},
"one": {a1},
"two": {a2, a2a},
"one/sub": {a1sub},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"0": a0,
"1": a1,
"2": a2,
"2a": a2a,
"1sub": a1sub,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"{0}": a0,
"one": a1,
"one/sub": a1sub,
"two {2}": a2,
"two {2a}": a2a,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": {"one", "two {2}", "two {2a}", "{0}"},
"one": {"sub"},
}, albums.path)
}
func TestAlbumsDel(t *testing.T) {
albums := newAlbums()
a1 := &api.Album{
Title: "one",
ID: "1",
}
albums.add(a1)
a2 := &api.Album{
Title: "two",
ID: "2",
}
albums.add(a2)
// Add a duplicate
a2a := &api.Album{
Title: "two",
ID: "2a",
}
albums.add(a2a)
// Add a sub directory
a1sub := &api.Album{
Title: "one/sub",
ID: "1sub",
}
albums.add(a1sub)
assert.Equal(t, map[string][]*api.Album{
"one": {a1},
"two": {a2, a2a},
"one/sub": {a1sub},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"1": a1,
"2": a2,
"2a": a2a,
"1sub": a1sub,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"one": a1,
"one/sub": a1sub,
"two {2}": a2,
"two {2a}": a2a,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": {"one", "two {2}", "two {2a}"},
"one": {"sub"},
}, albums.path)
albums.del(a1)
assert.Equal(t, map[string][]*api.Album{
"one": {a1},
"two": {a2, a2a},
"one/sub": {a1sub},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"2": a2,
"2a": a2a,
"1sub": a1sub,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"one/sub": a1sub,
"two {2}": a2,
"two {2a}": a2a,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": {"one", "two {2}", "two {2a}"},
"one": {"sub"},
}, albums.path)
albums.del(a2)
assert.Equal(t, map[string][]*api.Album{
"one": {a1},
"two": {a2, a2a},
"one/sub": {a1sub},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"2a": a2a,
"1sub": a1sub,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"one/sub": a1sub,
"two {2a}": a2a,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": {"one", "two {2a}"},
"one": {"sub"},
}, albums.path)
albums.del(a2a)
assert.Equal(t, map[string][]*api.Album{
"one": {a1},
"two": {a2, a2a},
"one/sub": {a1sub},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{
"1sub": a1sub,
}, albums.byID)
assert.Equal(t, map[string]*api.Album{
"one/sub": a1sub,
}, albums.byTitle)
assert.Equal(t, map[string][]string{
"": {"one"},
"one": {"sub"},
}, albums.path)
albums.del(a1sub)
assert.Equal(t, map[string][]*api.Album{
"one": {a1},
"two": {a2, a2a},
"one/sub": {a1sub},
}, albums.dupes)
assert.Equal(t, map[string]*api.Album{}, albums.byID)
assert.Equal(t, map[string]*api.Album{}, albums.byTitle)
assert.Equal(t, map[string][]string{}, albums.path)
}
func TestAlbumsGet(t *testing.T) {
albums := newAlbums()
a1 := &api.Album{
Title: "one",
ID: "1",
}
albums.add(a1)
album, ok := albums.get("one")
assert.Equal(t, true, ok)
assert.Equal(t, a1, album)
album, ok = albums.get("notfound")
assert.Equal(t, false, ok)
assert.Nil(t, album)
}
func TestAlbumsGetDirs(t *testing.T) {
albums := newAlbums()
a1 := &api.Album{
Title: "one",
ID: "1",
}
albums.add(a1)
dirs, ok := albums.getDirs("")
assert.Equal(t, true, ok)
assert.Equal(t, []string{"one"}, dirs)
dirs, ok = albums.getDirs("notfound")
assert.Equal(t, false, ok)
assert.Nil(t, dirs)
}


@@ -0,0 +1,190 @@
package api
import (
"fmt"
"time"
)
// ErrorDetails are the internals of the Error type
type ErrorDetails struct {
Code int `json:"code"`
Message string `json:"message"`
Status string `json:"status"`
}
// Error is returned on errors
type Error struct {
Details ErrorDetails `json:"error"`
}
// Error satisfies error interface
func (e *Error) Error() string {
return fmt.Sprintf("%s (%d %s)", e.Details.Message, e.Details.Code, e.Details.Status)
}
// Album of photos
type Album struct {
ID string `json:"id,omitempty"`
Title string `json:"title"`
ProductURL string `json:"productUrl,omitempty"`
MediaItemsCount string `json:"mediaItemsCount,omitempty"`
CoverPhotoBaseURL string `json:"coverPhotoBaseUrl,omitempty"`
CoverPhotoMediaItemID string `json:"coverPhotoMediaItemId,omitempty"`
IsWriteable bool `json:"isWriteable,omitempty"`
}
// ListAlbums is returned from albums.list and sharedAlbums.list
type ListAlbums struct {
Albums []Album `json:"albums"`
SharedAlbums []Album `json:"sharedAlbums"`
NextPageToken string `json:"nextPageToken"`
}
// CreateAlbum creates an Album
type CreateAlbum struct {
Album *Album `json:"album"`
}
// MediaItem is a photo or video
type MediaItem struct {
ID string `json:"id"`
ProductURL string `json:"productUrl"`
BaseURL string `json:"baseUrl"`
MimeType string `json:"mimeType"`
MediaMetadata struct {
CreationTime time.Time `json:"creationTime"`
Width string `json:"width"`
Height string `json:"height"`
Photo struct {
} `json:"photo"`
} `json:"mediaMetadata"`
Filename string `json:"filename"`
}
// MediaItems is returned from mediaitems.list, mediaitems.search
type MediaItems struct {
MediaItems []MediaItem `json:"mediaItems"`
NextPageToken string `json:"nextPageToken"`
}
// Content categories
// NONE Default content category. This category is ignored when any other category is used in the filter.
// LANDSCAPES Media items containing landscapes.
// RECEIPTS Media items containing receipts.
// CITYSCAPES Media items containing cityscapes.
// LANDMARKS Media items containing landmarks.
// SELFIES Media items that are selfies.
// PEOPLE Media items containing people.
// PETS Media items containing pets.
// WEDDINGS Media items from weddings.
// BIRTHDAYS Media items from birthdays.
// DOCUMENTS Media items containing documents.
// TRAVEL Media items taken during travel.
// ANIMALS Media items containing animals.
// FOOD Media items containing food.
// SPORT Media items from sporting events.
// NIGHT Media items taken at night.
// PERFORMANCES Media items from performances.
// WHITEBOARDS Media items containing whiteboards.
// SCREENSHOTS Media items that are screenshots.
// UTILITY Media items that are considered to be utility. These include, but aren't limited to, documents, screenshots, whiteboards etc.
// ARTS Media items containing art.
// CRAFTS Media items containing crafts.
// FASHION Media items related to fashion.
// HOUSES Media items containing houses.
// GARDENS Media items containing gardens.
// FLOWERS Media items containing flowers.
// HOLIDAYS Media items taken of holidays.
// MediaTypes
// ALL_MEDIA Treated as if no filters are applied. All media types are included.
// VIDEO All media items that are considered videos. This also includes movies the user has created using the Google Photos app.
// PHOTO All media items that are considered photos. This includes .bmp, .gif, .ico, .jpg (and other spellings), .tiff, .webp and special photo types such as iOS live photos, Android motion photos, panoramas, photospheres.
// Features
// NONE Treated as if no filters are applied. All features are included.
// FAVORITES Media items that the user has marked as favorites in the Google Photos app.
// Date is used as part of SearchFilter
type Date struct {
Year int `json:"year,omitempty"`
Month int `json:"month,omitempty"`
Day int `json:"day,omitempty"`
}
// DateFilter is used to add date ranges to media item queries
type DateFilter struct {
Dates []Date `json:"dates,omitempty"`
Ranges []struct {
StartDate Date `json:"startDate,omitempty"`
EndDate Date `json:"endDate,omitempty"`
} `json:"ranges,omitempty"`
}
// ContentFilter is used to add content categories to media item queries
type ContentFilter struct {
IncludedContentCategories []string `json:"includedContentCategories,omitempty"`
ExcludedContentCategories []string `json:"excludedContentCategories,omitempty"`
}
// MediaTypeFilter is used to add media types to media item queries
type MediaTypeFilter struct {
MediaTypes []string `json:"mediaTypes,omitempty"`
}
// FeatureFilter is used to add features to media item queries
type FeatureFilter struct {
IncludedFeatures []string `json:"includedFeatures,omitempty"`
}
// Filters combines all the filter types for media item queries
type Filters struct {
DateFilter *DateFilter `json:"dateFilter,omitempty"`
ContentFilter *ContentFilter `json:"contentFilter,omitempty"`
MediaTypeFilter *MediaTypeFilter `json:"mediaTypeFilter,omitempty"`
FeatureFilter *FeatureFilter `json:"featureFilter,omitempty"`
IncludeArchivedMedia *bool `json:"includeArchivedMedia,omitempty"`
ExcludeNonAppCreatedData *bool `json:"excludeNonAppCreatedData,omitempty"`
}
// SearchFilter is used with mediaItems.search
type SearchFilter struct {
AlbumID string `json:"albumId,omitempty"`
PageSize int `json:"pageSize"`
PageToken string `json:"pageToken,omitempty"`
Filters *Filters `json:"filters,omitempty"`
}
// SimpleMediaItem is part of NewMediaItem
type SimpleMediaItem struct {
UploadToken string `json:"uploadToken"`
}
// NewMediaItem is a single media item for upload
type NewMediaItem struct {
Description string `json:"description"`
SimpleMediaItem SimpleMediaItem `json:"simpleMediaItem"`
}
// BatchCreateRequest creates media items from upload tokens
type BatchCreateRequest struct {
AlbumID string `json:"albumId,omitempty"`
NewMediaItems []NewMediaItem `json:"newMediaItems"`
}
// BatchCreateResponse is returned from BatchCreateRequest
type BatchCreateResponse struct {
NewMediaItemResults []struct {
UploadToken string `json:"uploadToken"`
Status struct {
Message string `json:"message"`
Code int `json:"code"`
} `json:"status"`
MediaItem MediaItem `json:"mediaItem"`
} `json:"newMediaItemResults"`
}
// BatchRemoveItems is for removing items from an album
type BatchRemoveItems struct {
MediaItemIds []string `json:"mediaItemIds"`
}
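
Since these request types are plain JSON-tagged structs, composing a mediaItems.search body is just a matter of nesting them and marshalling. The snippet below is illustrative only: the field values are made up, and it assumes the package is importable at the path used elsewhere in this change (github.com/rclone/rclone/backend/googlephotos/api).

package main

import (
	"encoding/json"
	"fmt"

	"github.com/rclone/rclone/backend/googlephotos/api"
)

func main() {
	includeArchived := true
	// Note: the Photos API does not allow albumId and filters in the
	// same search, so AlbumID is left empty here.
	filter := api.SearchFilter{
		PageSize: 100,
		Filters: &api.Filters{
			MediaTypeFilter:      &api.MediaTypeFilter{MediaTypes: []string{"PHOTO"}},
			ContentFilter:        &api.ContentFilter{IncludedContentCategories: []string{"LANDSCAPES"}},
			IncludeArchivedMedia: &includeArchived,
		},
	}
	out, err := json.MarshalIndent(&filter, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}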

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff