mirror of https://github.com/rclone/rclone.git synced 2026-01-22 04:13:14 +00:00

Compare commits


213 Commits

Author SHA1 Message Date
Nick Craig-Wood
48fa6f5700 vendor: Update github.com/t3rm1n4l/go-mega to fix base64 decoding issue
See: https://forum.rclone.org/t/problem-to-login-with-mega/12276
2019-10-14 11:24:03 +01:00
Nick Craig-Wood
b4b59c53f1 mount: fix "mount_fusefs: -o timeout=: option not supported" on FreeBSD
Before this change `rclone mount` would give this error on FreeBSD

    mount helper error: mount_fusefs: -o timeout=: option not supported

Because the default value of --daemon-timeout was set to 15m on
FreeBSD, and FreeBSD does not support the timeout option.

This change sets the default for --daemon-timeout to 0 on FreeBSD
which fixes the problem.

Fixes #3610
2019-10-13 11:36:51 +01:00
Ivan Andreev
77b42aa33a chunker: fix integration tests and hashsum issues 2019-10-13 10:43:46 +01:00
Ivan Andreev
910c80bd02 chunker: option to hash all files 2019-10-13 10:43:46 +01:00
Ivan Andreev
9049bb62ca chunker: prevent chunk corruption, survive meta-like input 2019-10-13 10:43:46 +01:00
Ivan Andreev
7aa2b4191c chunker: reservations for future extensions 2019-10-13 10:43:46 +01:00
Alex Chen
41ed33b08e docs: update onedrive/sharepoint docs on some known issues 2019-10-12 12:08:22 +01:00
Nick Craig-Wood
f3b0f8a9f0 sync: make --update/-u not transfer files that haven't changed - fixes #3232
Before this change --update would transfer any file which was newer
than the destination regardless of whether it had changed or not.
This is needlessly wasteful of bandwidth.

After this change --update will only transfer files if they are newer
**and** they are different (checked with checksum and size).
2019-10-12 11:54:56 +01:00
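
A rough sketch of the new decision logic (hypothetical names, not rclone's actual code):

```
// needTransfer sketches the post-change --update rule: being newer is no
// longer enough on its own, the file content must also differ.
func needTransfer(srcNewer, sizeDiffers, hashDiffers bool) bool {
	if !srcNewer {
		return false // destination is at least as new: leave it alone
	}
	return sizeDiffers || hashDiffers // newer AND different
}
```
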
Nick Craig-Wood
65a82fe77d dropbox: fix nil pointer exception on restricted files
See: https://forum.rclone.org/t/issues-syncing-dropbox/12233
2019-10-11 16:21:24 +01:00
Nick Craig-Wood
c892a6f8ef Add Michele Caci to contributors 2019-10-11 16:17:24 +01:00
Michele Caci
02c777ffbf filter: prevent mixing options when --files-from is present - fixes #3599 2019-10-11 16:17:02 +01:00
Nick Craig-Wood
bc45f6f952 Add Arijit Biswas to contributors 2019-10-11 15:25:20 +01:00
Arijit Biswas
3d807ab449 docs: update onedrive docs on creating own client ID
Updated the "creating your own Client ID and Key" section for the new portal (portal.azure.com).
2019-10-11 15:24:54 +01:00
Jon Fautley
5d33236050 ftp: allow disabling EPSV mode 2019-10-10 21:00:41 +01:00
Nick Craig-Wood
a4d572d004 Add Vighnesh SK to contributors 2019-10-10 16:05:29 +01:00
Cenk Alti
58f280b8a2 fserrors: fix a bug in Cause function 2019-10-10 16:05:15 +01:00
Vighnesh SK
ec09de1628 Change the Debug message in NeedTransfer (#3608)
'Couldn't find file - Need to Transfer' changed to 'Need to transfer -
File Not Found at Destination' because, when reading the debug logs, the
old wording could be confused with a failure in the operation.
2019-10-10 13:44:05 +01:00
Nick Craig-Wood
6abaa9e22c fstests: allow skipping of the broken UTF-8 test for the cache backend 2019-10-10 10:36:18 +01:00
Nick Craig-Wood
e8b92f4853 sftp: fix test failures
This was introduced by 50a3a96e27
2019-10-09 17:43:03 +01:00
Nick Craig-Wood
50a3a96e27 serve sftp: fix crash on unsupported operations (eg Readlink)
Before this change the sftp handler returned a nil error for unknown
operations which meant the server crashed when one was encountered.

In particular the "Readlink" operation was causing problems.

After this change the handler returns ErrSshFxOpUnsupported which
signals to the remote end that we don't support that operation.

See: https://forum.rclone.org/t/rclone-serve-sftp-not-working-in-windows/12209
2019-10-09 16:12:21 +01:00
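
A minimal sketch of the fix, assuming github.com/pkg/sftp's request-server API (the vfsHandler type and the list of supported methods are illustrative):

```
type vfsHandler struct{}

// Filecmd returns sftp.ErrSshFxOpUnsupported instead of nil for methods
// we don't handle, telling the client the operation is unsupported
// rather than crashing the server.
func (v vfsHandler) Filecmd(r *sftp.Request) error {
	switch r.Method {
	case "Setstat", "Rename", "Rmdir", "Mkdir", "Remove":
		// ... supported operations handled here ...
		return nil
	default: // eg "Readlink" or "Symlink"
		return sftp.ErrSshFxOpUnsupported
	}
}
```
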
Dan Walters
8950b586c4 dlna: associate subtitles with all possible media nodes
When there was both a .nfo and a .mp4, subtitles were being associated
only with the .nfo.
2019-10-09 11:57:42 +01:00
Nick Craig-Wood
3f40849343 Add Brett Dutro to contributors 2019-10-09 10:07:50 +01:00
Nick Craig-Wood
7271a404db Add Tyler to contributors 2019-10-09 10:07:50 +01:00
Brett Dutro
7d0d7e66ca vfs: move writeback of dirty data out of close() method into its own method (FlushWrites) and remove close() call from Flush()
If a file handle is duplicated with dup() and the duplicate handle is
flushed, rclone will go ahead and close the file, making the original
file handle stale. This change removes the close() call from Flush() and
replaces it with FlushWrites() so that the file only gets closed when
Release() is called. The new FlushWrites method takes care of actually
writing the file back to the underlying storage.

Fixes #3381
2019-10-09 10:07:29 +01:00
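
The shape of the change, as a hedged sketch (handle, flushWrites and close are illustrative names, not rclone's actual identifiers):

```
type handle struct{ /* wraps the open file and its dirty state */ }

func (h *handle) flushWrites() error { /* write dirty data back */ return nil }
func (h *handle) close() error       { /* close the underlying object */ return nil }

// Flush may run once per dup()ed descriptor, so it must not close.
func (h *handle) Flush() error { return h.flushWrites() }

// Release runs once, when the last reference goes away: close here.
func (h *handle) Release() error {
	if err := h.flushWrites(); err != nil {
		return err
	}
	return h.close()
}
```
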
Tyler
0cac9d9bd0 Fix 1fichier link in Readme 2019-10-08 21:55:12 +01:00
Nick Craig-Wood
8c1edf410c dropbox: make disallowed filenames return no retry error - fixes #3569
Before this change we silently skipped uploads to dropbox of
disallowed file names.  However this then caused "corrupted on
transfer" errors because the sizes were wrong.

After this change we return a no-retry error which means that the
sync fails (as it should - not all files were uploaded) but no
unnecessary retries happen.
2019-10-08 19:59:47 +01:00
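
A hedged sketch of returning a no-retry error with rclone's fs/fserrors package (isDisallowedName is a hypothetical helper):

```
// checkUploadName fails fast with a no-retry error for disallowed names,
// so the sync reports failure without pointless retries.
func checkUploadName(remote string) error {
	if isDisallowedName(remote) {
		return fserrors.NoRetryError(fmt.Errorf("%q: file name is disallowed", remote))
	}
	return nil
}
```
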
Nick Craig-Wood
1833167d10 vendor: run go mod tidy / go mod vendor with go1.13 2019-10-08 19:59:47 +01:00
Nick Craig-Wood
455b9280ba config: use alternating Red/Green in config to make more obvious 2019-10-08 19:59:47 +01:00
Nick Craig-Wood
45e440d356 vendor: remove github.com/Azure/go-ansiterm 2019-10-08 19:59:47 +01:00
Nick Craig-Wood
593de059be lib/terminal: factor from cmd/progress, swap Azure/go-ansiterm for mattn/go-colorable 2019-10-08 19:59:47 +01:00
Nick Craig-Wood
c78d1dd18b vendor: add github.com/mattn/go-colorable 2019-10-08 19:59:47 +01:00
Nick Craig-Wood
2a82aca225 Add Sezal Agrawal to contributors 2019-10-08 19:59:47 +01:00
SezalAgrawal
7712b780ba operations: display 'Deleted X extra copies' only if dedupe successful - fixes #3551 2019-10-08 16:35:53 +01:00
Sezal Agrawal
5c2dfeee46 operations: display 'All duplicates removed' only if dedupe successful -fixes rclone#3550 2019-10-08 16:34:13 +01:00
Dan Walters
572d302620 dlna: simplify search method for associating subtitles with media nodes
There seem to be some corner cases that are not being handled, so this
takes a different approach that should be a little more robust.

Also, changing resources to be served under a subpath:  We've been serving
media at /res?path=%2Fdir%2Ffilename.mp4; change that to be just /r/dir/filename.mp4.
It's cleaner, easier to reason about, and a necessary first step towards just
serving the resources via httplib anyway.
2019-10-08 07:49:39 +01:00
Henning Surmeier
eff11b44cf webdav: parse and return sharepoint error response
fixes #3176
2019-10-06 20:17:13 +01:00
Nick Craig-Wood
15b1feea9d mount: fix panic on File.Open - Fixes #3595
This problem was introduced in "mount: allow files of unknown size to
be read properly" 0baafb158f by failing to check whether the
DirEntry was nil or not.
2019-10-06 19:26:58 +01:00
Dan Walters
6337cc70d3 dlna: support for external srt subtitles
Allows for filename.srt, filename.en.srt, etc., to be automatically associated with video.mp4 (or whatever) when playing over dlna.

This is the "modern" method, which I've verified to work on VLC and in LG webOS 2.  There is a vendor specific mechanism for Samsung that I havn't been able to get working on my F series.

Also made some minor corrections to logging and container IDs.
2019-10-06 12:18:56 +01:00
Nick Craig-Wood
d210fecf3b Add Raphael to contributors 2019-10-05 17:07:02 +01:00
Nick Craig-Wood
f962fb9499 Add SwitchJS to contributors 2019-10-05 17:07:02 +01:00
Raphael
7f378ca8e3 documentation: add sharepoint required flags fixes #3564
Enhance the WebDAV documentation with information regarding the flags that are required to make Rclone work correctly with SharePoint.
fixes #3564
2019-10-05 17:06:44 +01:00
SwitchJS
9a5ea9c8a8 docs: fix spelling 2019-10-05 16:00:39 +01:00
Nick Craig-Wood
d15425e8c8 Start v1.49.5-DEV development 2019-10-05 12:42:28 +01:00
Nick Craig-Wood
b3faee9471 build: fix macOS build after brew changes 2019-10-05 11:51:28 +01:00
Nick Craig-Wood
5271fe3b3f yandex: use lib/encoder 2019-10-05 10:22:43 +01:00
Nick Craig-Wood
7da1c84a7f build: don't deploy xgo build on pull requests 2019-10-04 16:53:51 +01:00
Nick Craig-Wood
cbdab14057 Add 庄天翼 to contributors 2019-10-04 16:53:51 +01:00
庄天翼
7b1274e29a s3: support for multipart copy
Fixes #2375 Fixes #3579
2019-10-04 16:49:06 +01:00
Nick Craig-Wood
d21ddf280c mailru: comment out some debugging statements 2019-10-02 20:10:01 +01:00
Nick Craig-Wood
135717e12b mailru: use lib/encoder 2019-10-02 20:10:01 +01:00
Aleksandar Jankovic
6b55b8b133 s3: add option for multipart failure behaviour
This is needed for resuming uploads across different sessions.
2019-10-02 16:49:16 +01:00
Nick Craig-Wood
b94b2a3723 mega: fix after lib/encoder change 2019-10-02 12:41:52 +01:00
Nick Craig-Wood
e2914c0097 test_all: ignore some encoding tests with nextcloud integration test 2019-10-02 11:34:08 +01:00
Nick Craig-Wood
fd51f24906 putio: use lib/encoder
And in the process
- fix a bug with + and & in file name
- fix NewObject returning directories as files
2019-10-02 11:34:08 +01:00
Nick Craig-Wood
4615343b73 premiumizeme: use lib/encoder 2019-10-02 11:34:08 +01:00
Fionera
1dc8bcd48c Remove backend dependency from fs/hash 2019-10-01 16:29:58 +01:00
Nick Craig-Wood
def411da62 build: use the release builds not master of nfpm and github-release
Fixes #3580
2019-10-01 16:23:36 +01:00
Nick Craig-Wood
f73dae1e77 bin/get-github-release: support tar.bz2 files 2019-10-01 16:23:36 +01:00
Nick Craig-Wood
77a520c97c fichier: fix accessing files > 2GB on 32 bit systems - fixes #3581 2019-10-01 16:03:49 +01:00
Nick Craig-Wood
23bf6bb4d8 test_all: mark expected failures for minio, wasabi and FTP 2019-10-01 15:40:32 +01:00
Nick Craig-Wood
04eb96b50b fichier: fix NewObject after lib/encoder changes
This bug was introduced as part of the lib/encoder changes in
8d8fad724b.  It caused NewObject not to work for a file with
escaped characters in it.
2019-10-01 15:30:51 +01:00
Fabian Möller
b9bd15a8c9 koofr: use lib/encoder
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2019-09-30 22:00:25 +01:00
Nick Craig-Wood
b581f2de26 sharefile: use lib/encoder 2019-09-30 22:00:25 +01:00
Nick Craig-Wood
5cef5f8b49 lib/encoder: add LeftPeriod encoding 2019-09-30 22:00:25 +01:00
Nick Craig-Wood
8d8fad724b fichier: use lib/encoder 2019-09-30 22:00:25 +01:00
Nick Craig-Wood
4098907511 lib/encoder: add more encode symbols and split existing 2019-09-30 22:00:25 +01:00
Nick Craig-Wood
5b8a339baf docs: Add notes on how to find out the encodings used in a backend 2019-09-30 22:00:25 +01:00
Nick Craig-Wood
3e53376a49 build_csv: fix output of control characters 2019-09-30 22:00:25 +01:00
Nick Craig-Wood
d122d1d191 qingstor: use lib/encoder 2019-09-30 22:00:25 +01:00
Nick Craig-Wood
35d6ff89bf azureblob: use lib/encoder 2019-09-30 22:00:24 +01:00
Nick Craig-Wood
53bec33027 swift: use lib/encoder 2019-09-30 22:00:24 +01:00
Fabian Möller
3304bb7a56 googlecloudstorage: use lib/encoder
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2019-09-30 22:00:24 +01:00
Nick Craig-Wood
f55a99218c lib/encoder: add CrLf encoding 2019-09-30 22:00:24 +01:00
Nick Craig-Wood
6e053ecbd0 s3: only ask for URL encoded directory listings if we need them on Ceph
This works around a bug in Ceph which doesn't encode CommonPrefixes
when using URL encoded directory listings.

See: https://tracker.ceph.com/issues/41870
2019-09-30 22:00:24 +01:00
Nick Craig-Wood
7e738c9d71 fstest: remove WinPath as it is no longer needed 2019-09-30 22:00:24 +01:00
Fabian Möller
7689bd7e21 amazonclouddrive: use lib/encoder
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2019-09-30 22:00:24 +01:00
Fabian Möller
33f129fbbc s3: use lib/encoder
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2019-09-30 22:00:24 +01:00
Nick Craig-Wood
a8adce9c59 s3: fix encoding for control characters - Fixes #3345 2019-09-30 22:00:24 +01:00
Nick Craig-Wood
6ae7bd7914 local: encode invalid UTF-8 on macOS 2019-09-30 22:00:24 +01:00
Nick Craig-Wood
32af4cd6f3 ftp: use lib/encoder 2019-09-30 22:00:24 +01:00
Nick Craig-Wood
ced2616da5 fstests: allow Purge to fail with ErrorDirNotFound 2019-09-30 22:00:24 +01:00
Nick Craig-Wood
b90e4a8769 sftp: fix hashes of files with backslashes 2019-09-30 22:00:24 +01:00
Fabian Möller
00b2c02bf4 pcloud: use lib/encoder
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2019-09-30 22:00:24 +01:00
Fabian Möller
33aea5d43f mega: use lib/encoder
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2019-09-30 22:00:24 +01:00
Fabian Möller
13d8b7979d b2: use lib/encoder
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2019-09-30 22:00:24 +01:00
Nick Craig-Wood
57c1284df7 fstests: make integration tests check that all backends can store any file name
This tests the encoder is working properly
2019-09-30 22:00:24 +01:00
Fabian Möller
f0c2249086 encoder: add edge control characters and fix edge test generation 2019-09-30 14:05:49 +01:00
Fabian Möller
6ba08b8612 info: rewrite invalid character test and reporting 2019-09-30 14:05:49 +01:00
Fabian Möller
c8d3e57418 encodings: add . and .. to all backends, except Drive 2019-09-30 14:05:49 +01:00
Fabian Möller
d5cd026547 encoder: add option to encode . and .. names 2019-09-30 14:05:49 +01:00
Fabian Möller
6c0a749a42 crypt: remove checkValidString
Remove the usage of checkValidString in decryptSegment to allow all
paths which can be created by encryptSegment to be decryptable.
2019-09-30 14:05:49 +01:00
Fabian Möller
4b9fdb8475 opendrive: use lib/encoder 2019-09-30 14:05:49 +01:00
Fabian Möller
dac20093c5 onedrive: use lib/encoder 2019-09-30 14:05:49 +01:00
Fabian Möller
d211347d46 dropbox: use lib/encoder 2019-09-30 14:05:49 +01:00
Fabian Möller
4837bc3546 jottacloud: use lib/encoder 2019-09-30 14:05:49 +01:00
Fabian Möller
69c51325bb drive: use lib/encoder 2019-09-30 14:05:49 +01:00
Fabian Möller
05e4f10436 box: use lib/encoder 2019-09-30 14:05:49 +01:00
Fabian Möller
a98a750fc9 local: use lib/encoder 2019-09-30 14:05:49 +01:00
Fabian Möller
c09b62a088 encodings: add all known backend encodings 2019-09-30 14:05:49 +01:00
Fabian Möller
a56c9ab61d docs: add section for restricted filenames 2019-09-30 14:05:48 +01:00
Fabian Möller
97a218903c fstest: remove WinPath from fstest.Item 2019-09-30 14:05:48 +01:00
Nick Craig-Wood
4627ac5709 New backend for Citrix Sharefile - Fixes #1543
Many thanks to Bob Droog for organizing a test account and extensive
testing.
2019-09-30 12:28:33 +01:00
Nick Craig-Wood
1e7144eb63 docs: Add more notes on making backend docs 2019-09-30 12:27:03 +01:00
Nick Craig-Wood
f29e5b6e7d lib/oauthutil: refactor web server and allow an auth callback 2019-09-30 11:34:30 +01:00
Nick Craig-Wood
25a0e7e8aa lib/oauthutil: add a new redirect URL for oauth.rclone.org
This is for use with oauth providers which won't accept http: links.
2019-09-30 11:23:21 +01:00
Nick Craig-Wood
262ba28dec config: add ReadNonEmptyLine utility function 2019-09-30 11:23:21 +01:00
Nick Craig-Wood
74f6300875 Start v1.49.4-DEV development 2019-09-29 19:47:59 +01:00
Nick Craig-Wood
86dcb54c38 fstests: make tests pass when using -remote :backend: 2019-09-29 17:25:54 +01:00
Nick Craig-Wood
25a0703b45 Add Richard Patel to contributors 2019-09-29 11:14:33 +01:00
Richard Patel
32d5af8fb6 cmd/rcd: Address ZipSlip vulnerability
Don't create files outside of target
directory while unzipping.

Fixes #3529 reported by Nico Waisman at Semmle Security Team
2019-09-29 11:14:21 +01:00
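
The usual Go defence against ZipSlip, as a minimal sketch (safeExtractPath is illustrative, not the actual code in cmd/rcd):

```
// safeExtractPath rejects archive entries whose cleaned path would
// escape the target directory.
func safeExtractPath(targetDir, name string) (string, error) {
	p := filepath.Join(targetDir, name) // Join also cleans the path
	if !strings.HasPrefix(p, filepath.Clean(targetDir)+string(filepath.Separator)) {
		return "", fmt.Errorf("%q: illegal path outside %q", name, targetDir)
	}
	return p, nil
}
```
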
Richard Patel
44b603d2bd lib: add plugin support
This enables loading plugins from RCLONE_PLUGIN_PATH if set.
2019-09-29 11:05:10 +01:00
Nick Craig-Wood
349112df6b oauthutil: fix security problem when running with two users on the same machine
Before this change two users could run `rclone config` for the same
backend on the same machine at the same time.

User A would get as far as starting the web server.  User B would then
fail to start the webserver, but it would open the browser on the
/auth URL which would redirect the user to the login.  This would then
cause user B to authenticate to user A's rclone.

This change fixes the problem in two ways.

Firstly it passes the state to the /auth call before redirecting and
checks it there, erroring with a 403 error if it doesn't match.  This
would have fixed the problem on its own.

Secondly it delays the opening of the web browser until after the auth
webserver has started which prevents the user entering the credentials
if another auth server is running.

Fixes #3573
2019-09-29 10:42:02 +01:00
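
A sketch of the first fix under stated assumptions (authHandler and its wiring are hypothetical; the 403-on-mismatch behaviour comes from the commit):

```
// authHandler rejects /auth requests whose state parameter doesn't match
// the one generated for this config session, so a second user's browser
// can't complete someone else's oauth flow.
func authHandler(expectedState string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if r.FormValue("state") != expectedState {
			http.Error(w, "state did not match: auth failure", http.StatusForbidden)
			return
		}
		// ... redirect on to the provider's login page ...
	}
}
```
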
Nick Craig-Wood
fef8b98be2 ftp: fix listing of an empty root returning: error dir not found
Before this change if rclone listed an empty root directory then it
would return an error dir not found.

After this change we assume the root directory exists and don't
attempt to check it which was failing before.

See: https://forum.rclone.org/t/ftp-empty-directory-yields-directory-not-found-error/12069/
2019-09-28 18:01:12 +01:00
Nick Craig-Wood
6750af6167 build: make VERSION file be master of the last release - fixes #3570
Prior to this, beta releases would appear to be older than the point
release, eg v1.49.0-096-gc41812fc which was released after v1.49.3 and
contains all the patches from v1.49.3.
2019-09-26 16:51:44 +01:00
Nick Craig-Wood
8681ef36d6 build: replace Circle CI build and make GitHub actions the default CI 2019-09-25 16:38:10 +01:00
Nick Craig-Wood
ec9914205f build: remove Appveyor, Circle CI, Travis and Pkgr builds 2019-09-25 16:38:10 +01:00
Ivan Andreev
ccecfa9cb1 chunker: finish meta-format before release
changes:
- chunker: remove GetTier and SetTier
- remove wdmrcompat metaformat
- remove fastopen strategy
- make hash_type option non-advanced
- advertise hash support when possible
- add metadata field "ver", run strict checks
- describe internal behavior in comments
- improve documentation

note:
wdmrcompat used to write the file name in the metadata, so the maximum
metadata size was 1K; removing it allows the size to be capped at 200 bytes now.
2019-09-25 11:03:33 +01:00
Ivan Andreev
c41812fc88 tests: bring memory hungry tests close to end 2019-09-24 12:45:12 +01:00
Ivan Andreev
d98d1be3fe accounting: fix panic due to server-side copy fallback 2019-09-24 12:45:12 +01:00
Ivan Andreev
661dc568f3 fstest: let backends advertise maximum file size 2019-09-24 12:45:12 +01:00
Ivan Andreev
1e4691f951 tests/sync: adjust transfer counts for chunker 2019-09-24 12:45:12 +01:00
Ivan Andreev
be674faff1 tests/config: integration tests for chunker
Recommended `rclone.conf` snippets for this `config.yaml`:
```
[TestChunkerLocal]
type = chunker
meta_format = simplejson
remote = /tmp/rclone-chunker-test

[TestChunkerChunk3bLocal]
type = chunker
chunk_size = 3b
meta_format = simplejson
remote = /tmp/rclone-chunker-test

[TestChunkerNometaLocal]
type = chunker
meta_format = none
remote = /tmp/rclone-chunker-test

[TestChunkerChunk3bNometaLocal]
type = chunker
chunk_size = 3b
meta_format = none
remote = /tmp/rclone-chunker-test

[TestChunkerCompatLocal]
type = chunker
meta_format = wdmrcompat
remote = /tmp/rclone-chunker-test
```
2019-09-24 12:45:12 +01:00
Ivan Andreev
c68c919cea docs: chunker documentation 2019-09-24 12:45:12 +01:00
Ivan Andreev
59dba1de88 chunker: implementation + required fstest patch
Note: chunker implements many irrelevant methods (UserInfo, Disconnect etc),
but they are required by TestIntegration/FsCheckWrap and cannot be removed.

Dropped API methods: MergeDirs DirCacheFlush PublicLink UserInfo Disconnect OpenWriterAt

Meta formats:
- renamed old simplejson format to wdmrcompat.
- new simplejson format supports hash sums and verification of chunk size/count.

Change list:
- split-chunking overlay for mailru
- add to all
- fix linter errors
- fix integration tests
- support chunks without meta object
- fix package paths
- propagate context
- fix formatting
- implement new required wrapper interfaces
- also test large file uploads
- simplify options
- user friendly name pattern
- set default chunk size 2G
- fix building with golang 1.9
- fix ci/cd on a separate branch
- fix updated object name (SyncUTFNorm failed)
- fix panic in Box overlay
- workaround: Box rename failed if name taken
- enhance comments in unit test
- fix formatting
- embed wrapped remote rather than inherit
- require wrapped remote to support move (or copy)
- implement 3 (keep fstest)
- drop irrelevant file system interfaces
- factor out Object.mainChunk
- refactor TestLargeUpload as InternalTest
- add unit test for chunk name formats
- new improved simplejson meta format
- tricky case in test FsIsFile (fix+ignore)
- remove debugging print
- hide temporary objects from listings
- fix bugs in chunking reader:
  - return EOF immediately when all data is sent
  - handle case when wrapped remote puts by hash (bug detected by TestRcat)
- chunked file hashing (feature)
- server-side copy across configs (feature)
- robust cleanup of temporary chunks in Put
- linear download strategy (no read-ahead, feature)
- fix unexpected EOF in the box multipart uploader
- throw error if destination ignores data
2019-09-24 12:45:12 +01:00
Fionera
49d6d6425c serve/httplib: Write the template to a buffer to catch render errors
Fixes #3559
2019-09-22 21:31:11 +01:00
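
The standard Go idiom the commit describes, sketched (render is an illustrative name):

```
// render executes the template into a buffer first, so an execution error
// can still produce a clean HTTP 500 instead of a half-written page.
func render(w http.ResponseWriter, tmpl *template.Template, data interface{}) {
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	_, _ = buf.WriteTo(w)
}
```
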
Nick Craig-Wood
28cc2009d4 Add Anthony Rusdi to contributors 2019-09-21 14:39:03 +01:00
Nick Craig-Wood
dd4fe9ff60 Add David to contributors 2019-09-21 14:39:03 +01:00
Anthony Rusdi
899f285319 s3: fix signature v2_auth headers
When used with v2_auth = true, PresignRequest doesn't return
signed headers, so authentication against the remote destination would
fail. This commit copies HTTPRequest.Header back to headers.

Tested with RiakCS v2.1.0.

Signed-off-by: Anthony Rusdi <33247310+antrusd@users.noreply.github.com>
2019-09-21 14:38:51 +01:00
David
4788545b05 box: add options to get access token via JWT auth 2019-09-20 17:15:16 +01:00
David
1934426789 jwtutil: functionality to get an access token via JWT authentication 2019-09-20 17:15:16 +01:00
David
643192b347 vendor: add pkcs8 helpers for decrypting encrypted private keys 2019-09-20 17:15:16 +01:00
Nick Craig-Wood
1031bcfc5a build: remove azure pipelines build 2019-09-20 16:08:18 +01:00
Nick Craig-Wood
ce00c0a0d9 build: build rclone with github actions 2019-09-20 16:08:18 +01:00
Nick Craig-Wood
1164eed2af lib/pacer: make tests more reliable 2019-09-20 16:07:55 +01:00
Nick Craig-Wood
557edecd40 log: add Stack() function for debugging who calls what 2019-09-20 11:53:08 +01:00
Nick Craig-Wood
b242b0a078 lib/cache,rc/jobs: make tests more reliable 2019-09-20 11:53:08 +01:00
Nick Craig-Wood
08b86cc94b mount: skip tests on <= 2 CPUs to avoid lockup in #3154 2019-09-20 11:53:08 +01:00
Nick Craig-Wood
56544bb2fd accounting: fix file handle leak on errors - fixes #3547
In 53a1a0e3ef we introduced a problem where if there was an
error on the file being transferred then the file was re-opened and
the old one wasn't closed.

This was partially fixed in bfbddab46b, however that didn't
address closing the old file.

This is now fixed by
- marking the file as open again in UpdateReader
- moving the stopping of the accounting machinery to a new method Done
2019-09-19 16:20:07 +01:00
Matei David
70e043e641 Fixed typo in Docker image doc. 2019-09-19 16:17:57 +01:00
Dan Walters
c49a71f438 dlna: move root descriptor xml template to the static assets
Reduce binary size.
2019-09-17 12:52:32 +01:00
Dan Walters
5f07bbf8ce dlna: fake out implementation of X_MS_MediaReceiverRegistrar
Using the same responses as minidlna.

Fixes #3502.
2019-09-17 12:52:02 +01:00
Dan Walters
2f10472df3 dlna: count the number of children in the response to BrowseMetadata 2019-09-17 12:28:20 +01:00
Matei David
ab89e93968 Add Matei David to contributors 2019-09-17 10:12:32 +01:00
Matei David
070a8bfcd8 Dockerfile fixes
- ref: https://forum.rclone.org/t/run-docker-container-in-userspace/11734/7
- enable userspace operation
- enable Docker userspace mount exposed to the host
- add more Docker image usage documentation
2019-09-17 10:12:32 +01:00
Ivan Andreev
8fe87c8157 mailru: skip extra http request if data fits in hash 2019-09-17 10:04:51 +01:00
Ivan Andreev
8fb44a822d mailru: fix rare nil pointer panic 2019-09-17 10:04:51 +01:00
Nick Craig-Wood
3cff258577 sftp: fix --sftp-ask-password trying to contact the ssh agent
See: https://forum.rclone.org/t/rclone-command-line/11766
2019-09-16 11:16:27 +01:00
Nick Craig-Wood
66347aff2a fstest: calculate hashes for uploaded test files to fix minio integration tests
Before this change we didn't calculate any hashes for test files
created in the Run framework.

This means that files were uploaded to S3 without a `Content-MD5`
header.  This in turn caused minio to disengage `--compat` mode which
in turn caused the `TestSyncAfterChangingModtimeOnlyWithNoUpdateModTime`
test to fail in `fs/sync`.

After this change we supply all hashes supported by the destination Fs
on the upload object.

This means that the `Content-MD5` is set and minio engages `--compat`
mode to fix the problem.  Using `--compat` on the command line also
fixes the problem.

This much better replicates how objects are actually uploaded with
operations.Copy so should improve the integration tests.
2019-09-16 10:59:01 +01:00
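
For context, a hedged sketch of how a supplied MD5 becomes the Content-MD5 header on S3-compatible uploads (setContentMD5 is illustrative):

```
// setContentMD5 sets the header S3-style stores expect: the binary MD5
// digest of the body, base64 encoded.
func setContentMD5(req *http.Request, body []byte) {
	sum := md5.Sum(body)
	req.Header.Set("Content-MD5", base64.StdEncoding.EncodeToString(sum[:]))
}
```
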
Nick Craig-Wood
b8b12a4000 Start v1.49.3-DEV development 2019-09-15 18:10:08 +01:00
Dan Walters
8c038326b9 dlna: correct output for ContentDirectoryService#Browse with BrowseMetadata
We were marshalling the "cds object" instead of the "upnp object".

Fixes #3253  (I think)
2019-09-15 16:30:39 +01:00
pataquets
fd4b25932c Contrib: Add sample WebDAV server Docker Compose manifest. 2019-09-15 16:06:54 +01:00
pataquets
4374fd1df1 Contrib: Add sample DLNA server Docker Compose manifest. 2019-09-15 16:06:54 +01:00
Nick Craig-Wood
b6065561cf test_all: add ignores for tests which will never pass
- s3 backends which don't support SetTier
- mega which makes a duplicate for TestDirRename
2019-09-15 13:16:15 +01:00
Nick Craig-Wood
ef7bfd3f03 fs: Make prefix-free backend config read prefix-free env var also
Before this change you could only configure the local backend flags
which don't have the local prefix (eg `--copy-links`) with
`RCLONE_LOCAL_COPY_LINKS`.

This change makes `RCLONE_COPY_LINKS` valid too which is much more
logical for the users.

Fixes #3534
2019-09-14 18:26:07 +01:00
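
A sketch of the lookup order this implies (configEnv is a hypothetical helper, not rclone's actual function):

```
// configEnv prefers the backend-prefixed variable and falls back to the
// prefix-free one, eg RCLONE_LOCAL_COPY_LINKS then RCLONE_COPY_LINKS.
func configEnv(backend, option string) (string, bool) {
	key := strings.ToUpper(strings.ReplaceAll(option, "-", "_"))
	if v, ok := os.LookupEnv("RCLONE_" + strings.ToUpper(backend) + "_" + key); ok {
		return v, true
	}
	return os.LookupEnv("RCLONE_" + key)
}
```
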
Nick Craig-Wood
ae2edc3b5b help: add short options to backend documentation also 2019-09-14 18:24:05 +01:00
Nick Craig-Wood
0baafb158f mount: allow files of unknown size to be read properly
Before this change, files of unknown size (eg Google Docs) would
appear in file listings with 0 size and would only allow 0 bytes to be
read.

This change sets the direct_io flag in the FUSE return which bypasses
the cache for these files.  This means that they can be read properly.

This is compatible with some, but not all applications.
2019-09-14 13:22:33 +01:00
Nick Craig-Wood
ba121eddf0 vfs: make objects of unknown size readable through the VFS
These objects (eg Google Docs) appear with 0 length in the VFS.

Before this change, these only read 0 bytes.

After this change, even though the size appears to be 0, the objects
can be read to the end.  If the objects are read to the end then the
size on the handle will be updated.
2019-09-14 13:09:07 +01:00
Nick Craig-Wood
2e80e035c9 fstest/mockobject: add UnknownSize() method to make Size() return -1 2019-09-14 13:07:01 +01:00
Nick Craig-Wood
ea9b6087cf fstest/mockfs: allow fs.Objects to be added to the root 2019-09-14 13:05:36 +01:00
Nick Craig-Wood
6959c997e2 config: remove error: can't use --size-only and --ignore-size together.
Sometimes they are useful together, for example when the remote
changes the sizes of the uploaded files.

See: https://forum.rclone.org/t/files-upload-will-be-auto-compress-how-do-i-sync-a-file-to-remote/10578
2019-09-14 10:13:44 +01:00
Nick Craig-Wood
25786cafd3 s3: fix SetModTime on GLACIER/ARCHIVE objects and implement set/get tier
- Read the storage class for each object
- Implement SetTier/GetTier
- Check the storage class on the **object** before using SetModTime

This updates the fix in 1a2fb52 so that SetModTime works when you are
using objects which have been migrated to GLACIER but you aren't using
GLACIER as a storage class.

Fixes #3522
2019-09-14 09:18:55 +01:00
Nick Craig-Wood
23dc313fa5 azureblob: add missing type assertions for GetTier/SetTier 2019-09-14 09:18:55 +01:00
Nick Craig-Wood
1a16849df0 http: fix race introduced in 7982aaf151 2019-09-14 08:48:13 +01:00
Nick Craig-Wood
3b68340eac http: add --http-no-head to stop rclone doing HEAD in listings #3523 2019-09-14 00:17:39 +01:00
Nick Craig-Wood
7982aaf151 http: HEAD directory entries in parallel to speedup #3523 2019-09-14 00:16:44 +01:00
Nick Craig-Wood
7b29ed8ec1 Add Lars Lehtonen to contributors 2019-09-13 23:50:54 +01:00
Lars Lehtonen
c93e0ff8ee rest: fix missing error check 2019-09-13 23:50:39 +01:00
Nick Craig-Wood
3b91fb6a2f Add David Baumgold to contributors 2019-09-13 18:39:36 +01:00
David Baumgold
7d8c15c030 docs: GitHub has a capital H 2019-09-13 18:39:23 +01:00
Nick Craig-Wood
bfbddab46b fs/accounting: Fix "file already closed" on transfer retries
This was caused by the recent reworking of the accounting interface.
The Transfer object was recycling the Accounting object without
resetting the stream.

See: https://forum.rclone.org/t/error-file-already-closed/11469/
See: https://forum.rclone.org/t/rclone-b2-sync-post-error-method-not-supported/11718/
2019-09-13 18:35:02 +01:00
Nick Craig-Wood
e09a4ff019 cmd: Make --progress work in git bash on Windows - fixes #3531
This detects the presence of a VT100 terminal by using the TERM
environment variable and switches to using VT100 codes directly under
windows if it is found.

This makes --progress work correctly with git bash.
2019-09-13 15:24:47 +01:00
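
A hedged sketch of the detection (useVT100 is an illustrative name):

```
// useVT100 reports whether to emit VT100 codes directly: always on
// non-Windows, and on Windows only when TERM is set, as it is under
// git bash / mintty.
func useVT100() bool {
	if runtime.GOOS != "windows" {
		return true
	}
	return os.Getenv("TERM") != ""
}
```
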
Nick Craig-Wood
48e23d8c85 build: fix circleci build to use billziss/xgo-cgofuse container
Use built rclone to do upload
2019-09-12 21:59:50 +01:00
Aleksandar Jankovic
934440a9df accounting: fix total duration calculation
Fixes: #3498
2019-09-12 10:29:40 +01:00
Nick Craig-Wood
29b4f211ab gcs: add context to SDK calls #3257 2019-09-09 23:27:07 +01:00
Nick Craig-Wood
bd863f8868 drive: add context to SDK calls #3257 2019-09-09 23:27:07 +01:00
Nick Craig-Wood
66c23723e3 Add context to all http.NewRequest #3257
When we drop support for go1.12 we can use http.NewRequestWithContext
2019-09-09 23:27:07 +01:00
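
The go1.12-compatible pattern this refers to, sketched:

```
// fetch attaches the context via Request.WithContext, since
// http.NewRequestWithContext only arrived in go1.13.
func fetch(ctx context.Context, client *http.Client, url string) (*http.Response, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	return client.Do(req.WithContext(ctx))
}
```
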
Nick Craig-Wood
58a531a203 rest: add context propagation to rest library #3257
This fixes up the calling and propagates the contexts for the backends
which use lib/rest.
2019-09-09 23:27:07 +01:00
Ivan Andreev
ba1daea072 mailru: backend for mail.ru 2019-09-09 21:56:16 +01:00
Ivan Andreev
bdcd0b4c64 Add mailru hash (mrhash) 2019-09-09 21:34:15 +01:00
Nick Craig-Wood
94eb9a4014 Start v1.49.2-DEV development 2019-09-08 17:36:21 +01:00
Nick Craig-Wood
e028c006fc test_all: write index.json and add branch, commit and Go version to report 2019-09-08 11:35:56 +01:00
Nick Craig-Wood
3f3f038b73 build: make sure we add version info to test_all build 2019-09-08 11:35:36 +01:00
calisro
2298834e83 docs: spell out google photos download limitations 2019-09-07 11:47:53 +01:00
Nick Craig-Wood
07dfb3aa11 bin: convert python scripts to python3 2019-09-06 22:55:28 +01:00
Danil Semelenov
1382dba3c8 cmd: make autocomplete compatible with bash's posix mode #3489 2019-09-06 13:11:08 +01:00
Nick Craig-Wood
f1347139fa config: check config names more carefully and report errors - fixes #3506
Before this change it was possible to make a remote with an invalid
name in the config file, either manually or with `rclone config
create` (but not with `rclone config`).

When this remote was used, because it was invalid, rclone would
presume this remote name was a local directory, for a very surprising
user experience!

This change checks remote names more carefully and returns errors
- when the user tries to use an invalid remote name on the command line
- when an invalid remote name is used in `rclone config create/update/password`
- when the user tries to enter an invalid remote name in `rclone config`

This does not prevent the user entering a remote name with invalid
characters in the config manually, but such a remote will fail
immediately when it is used on the command line.
2019-09-06 12:07:09 +01:00
Nick Craig-Wood
27a730ef8f docs: Update changelog and RELEASE.md from v1.49.1 release 2019-09-06 12:07:09 +01:00
Ivan Andreev
d0c6e5cf5a vfs: skip TestCaseSensitivity on case insensitive backends 2019-09-06 10:44:59 +01:00
Nick Craig-Wood
cf9b973fe4 accounting: fix locking in Transfer to avoid deadlock with --progress
Before this change, using -P occasionally deadlocked between the
Transfer mutex (when Transfer.Done() was called with a non-nil error)
and the StatsInfo mutex, since they mutually call each other.

This was fixed by making sure that the Transfer mutex is always
released before calling any StatsInfo methods.

This improves on: 6f87267b34

Fixes #3505
2019-09-06 10:00:44 +01:00
Nick Craig-Wood
ffa1dac10b build: apply gofmt from go1.13 to change case of number literals 2019-09-05 13:59:06 +01:00
Nick Craig-Wood
7b0966880e Add Ivan Andreev to contributors 2019-09-04 21:32:16 +01:00
Ivan Andreev
1c4e33d4ad vfs: add flag --vfs-case-insensitive for windows/macOS mounts
rclone mount when run on Windows & macOS will now default to `--vfs-case-insensitive`.
This means that
2019-09-04 21:30:48 +01:00
Nick Craig-Wood
530ba66d35 operations: fix -u/--update with google photos / files of unknown size
Before this change if -u/--update was in effect we compared the size
of the files to see if the transfer should go ahead.  This was
comparing -1 with an actual size so the transfer always proceeded.

After this change we use the existing `sizeDiffers` function which
does the correct comparison with -1 for files of unknown length.

See: https://forum.rclone.org/t/sync-with-google-photos-to-local-drive-will-result-in-recoping/11605
2019-09-04 17:31:17 +01:00
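
A sketch of the comparison rule (simplified; the real sizeDiffers also honours flags such as --ignore-size):

```
// sizeDiffers treats a size of -1 as "unknown" and never reports unknown
// sizes as different, which is what makes -u/--update safe for files of
// unknown length.
func sizeDiffers(srcSize, dstSize int64) bool {
	if srcSize < 0 || dstSize < 0 {
		return false // unknown length: cannot usefully compare
	}
	return srcSize != dstSize
}
```
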
Danil Semelenov
b3db38ae31 Disable __rclone_custom_func if posix mode is on
A workaround for #3489. Code in `__rclone_custom_func` relies on process substitutions `<(...)` to preserve changes to variables within `while` bodies, which posix mode does not support.
2019-09-04 14:48:10 +01:00
Danil Semelenov
c0d1869204 Fix 'compopt: command not found' on autocomplete on macOS
As reported in #3489.
2019-09-04 14:47:26 +01:00
Nick Craig-Wood
89b6d89077 build: drop support for go1.9 2019-09-04 10:23:48 +01:00
Nick Craig-Wood
ef7b001626 build: update to use go1.13 for the build 2019-09-04 10:23:48 +01:00
Nick Craig-Wood
f97a3e853e Add Alfonso Montero to contributors 2019-09-03 17:26:13 +01:00
Denis
b71ac141cc copyurl: add --auto-filename flag for using file name from url in destination path (#3451) 2019-09-03 17:25:19 +01:00
Nick Craig-Wood
5932acfee3 rc: fix docs for config/create /update /password 2019-09-03 08:34:15 +01:00
Alfonso Montero
e2ce687f93 README.md: Add Docker pulls badge 2019-09-02 15:14:45 +01:00
Nick Craig-Wood
a3fb460c6b docs: add info on how to build and use the docker images 2019-09-02 14:30:11 +01:00
Nick Craig-Wood
8d296d0e1d Start v1.49.1-DEV development 2019-09-02 13:10:47 +01:00
Nick Craig-Wood
20a57aaccb gcs: fix need for elevated permissions on SetModTime - fixes #3493
Before this change we used PATCH on the object to update the metadata.

Apparently this requires the "full_control" scope which Google were
unhappy with in their oauth review.

This changes it to update the metadata by copying the object on top of
itself (which is the way s3 works).  This can be done with normal
permissions.
2019-09-02 09:26:33 +01:00
Nick Craig-Wood
50a4ed8fc4 operations: fix accounting for server side copies
See: https://forum.rclone.org/t/b2-server-side-copy-doesnt-show-cumulative-progress/11154
2019-09-02 09:26:33 +01:00
Cnly
e2b5ed6c7a docs: fix template argument for mktemp in install.sh 2019-09-02 13:01:45 +08:00
Alfonso Montero
16e7da2cb5 Add Docker workflow support #3460
* Use a multi-stage build to reduce final image size.
* Run 'quicktest' make target before building.
* Built binary won't run on Alpine unless statically linked.
2019-08-29 11:04:57 +01:00
yparitcher
52df19ad34 allow usage of -short in the testing framework 2019-08-29 09:53:23 +01:00
Nick Craig-Wood
693112d57e config: Fix generated passwords being stored as empty password - Fixes #3492 2019-08-28 12:11:03 +01:00
Nick Craig-Wood
0edbc9578d googlephotos,onedrive: fix crash on error response - fixes #3491
This fixes a crash on the google photos backend when an error is
returned from the rest.Call function.

This turned out to be a mis-understanding of the rest docs so
- improved rest.Call docs
- fixed mis-understanding in google photos backend
- fixed similar mis-understading in onedrive backend
2019-08-28 12:11:03 +01:00
Chaitanya
7211c2dca7 rcd: Added missing parameter for web-gui info logs. 2019-08-27 17:21:21 +01:00
Nick Craig-Wood
af192d2507 vendor: update all dependencies 2019-08-26 18:00:17 +01:00
Nick Craig-Wood
d1a39dcc4b Start v1.49.0-DEV development 2019-08-26 17:44:09 +01:00
657 changed files with 70473 additions and 20499 deletions


@@ -1,49 +0,0 @@
version: "{build}"
os: Windows Server 2012 R2
clone_folder: c:\gopath\src\github.com\rclone\rclone
cache:
  - '%LocalAppData%\go-build'
environment:
  GOPATH: C:\gopath
  CPATH: C:\Program Files (x86)\WinFsp\inc\fuse
  ORIGPATH: '%PATH%'
  NOCCPATH: C:\MinGW\bin;%GOPATH%\bin;%PATH%
  PATHCC64: C:\mingw-w64\x86_64-6.3.0-posix-seh-rt_v5-rev1\mingw64\bin;%NOCCPATH%
  PATHCC32: C:\mingw-w64\i686-6.3.0-posix-dwarf-rt_v5-rev1\mingw32\bin;%NOCCPATH%
  PATH: '%PATHCC64%'
  RCLONE_CONFIG_PASS:
    secure: sq9CPBbwaeKJv+yd24U44neORYPQVy6jsjnQptC+5yk=
install:
  - choco install winfsp -y
  - choco install zip -y
  - copy c:\MinGW\bin\mingw32-make.exe c:\MinGW\bin\make.exe
build_script:
  - echo %PATH%
  - echo %GOPATH%
  - go version
  - go env
  - go install
  - go build
  - make log_since_last_release > %TEMP%\git-log.txt
  - make version > %TEMP%\version
  - set /p RCLONE_VERSION=<%TEMP%\version
  - set PATH=%PATHCC32%
  - go run bin/cross-compile.go -release beta-latest -git-log %TEMP%\git-log.txt -include "^windows/386" -cgo -tags cmount %RCLONE_VERSION%
  - set PATH=%PATHCC64%
  - go run bin/cross-compile.go -release beta-latest -git-log %TEMP%\git-log.txt -include "^windows/amd64" -cgo -no-clean -tags cmount %RCLONE_VERSION%
test_script:
  - make GOTAGS=cmount quicktest
artifacts:
  - path: rclone.exe
  - path: build/*-v*.zip
deploy_script:
  - IF "%APPVEYOR_REPO_NAME%" == "rclone/rclone" IF "%APPVEYOR_PULL_REQUEST_NUMBER%" == "" make appveyor_upload


@@ -1,50 +0,0 @@
---
version: 2
jobs:
  build:
    machine: true
    working_directory: ~/.go_workspace/src/github.com/rclone/rclone
    steps:
      - checkout
      - run:
          name: Cross-compile rclone
          command: |
            docker pull rclone/xgo-cgofuse
            go get -v github.com/karalabe/xgo
            xgo \
                --image=rclone/xgo-cgofuse \
                --targets=darwin/386,darwin/amd64,linux/386,linux/amd64,windows/386,windows/amd64 \
                -tags cmount \
                .
            xgo \
                --targets=android/*,ios/* \
                .
      - run:
          name: Prepare artifacts
          command: |
            mkdir -p /tmp/rclone.dist
            cp -R rclone-* /tmp/rclone.dist
            mkdir build
            cp -R rclone-* build/
      - run:
          name: Build rclone
          command: |
            go version
            go build
      - run:
          name: Upload artifacts
          command: |
            if [[ $CIRCLE_PULL_REQUEST != "" ]]; then
              make circleci_upload
            fi
      - store_artifacts:
          path: /tmp/rclone.dist

.github/workflows/build.yml (new file)

@@ -0,0 +1,250 @@
---
# Github Actions build for rclone
# -*- compile-command: "yamllint -f parsable build.yml" -*-
name: build
# Trigger the workflow on push or pull request
on:
  push:
    branches:
      - '*'
    tags:
      - '*'
  pull_request:
jobs:
  build:
    timeout-minutes: 60
    strategy:
      fail-fast: false
      matrix:
        job_name: ['linux', 'mac', 'windows_amd64', 'windows_386', 'other_os', 'modules_race', 'go1.10', 'go1.11', 'go1.12']
        include:
          - job_name: linux
            os: ubuntu-latest
            go: '1.13.x'
            modules: 'off'
            gotags: cmount
            build_flags: '-include "^linux/"'
            check: true
            quicktest: true
            deploy: true
          - job_name: mac
            os: macOS-latest
            go: '1.13.x'
            modules: 'off'
            gotags: ''  # cmount doesn't work on osx travis for some reason
            build_flags: '-include "^darwin/amd64" -cgo'
            quicktest: true
            racequicktest: true
            deploy: true
          - job_name: windows_amd64
            os: windows-latest
            go: '1.13.x'
            modules: 'off'
            gotags: cmount
            build_flags: '-include "^windows/amd64" -cgo'
            quicktest: true
            racequicktest: true
            deploy: true
          - job_name: windows_386
            os: windows-latest
            go: '1.13.x'
            modules: 'off'
            gotags: cmount
            goarch: '386'
            cgo: '1'
            build_flags: '-include "^windows/386" -cgo'
            quicktest: true
            deploy: true
          - job_name: other_os
            os: ubuntu-latest
            go: '1.13.x'
            modules: 'off'
            build_flags: '-exclude "^(windows/|darwin/amd64|linux/)"'
            compile_all: true
            deploy: true
          - job_name: modules_race
            os: ubuntu-latest
            go: '1.13.x'
            modules: 'on'
            quicktest: true
            racequicktest: true
          - job_name: go1.10
            os: ubuntu-latest
            go: '1.10.x'
            modules: 'off'
            quicktest: true
          - job_name: go1.11
            os: ubuntu-latest
            go: '1.11.x'
            modules: 'off'
            quicktest: true
          - job_name: go1.12
            os: ubuntu-latest
            go: '1.12.x'
            modules: 'off'
            quicktest: true
    name: ${{ matrix.job_name }}
    runs-on: ${{ matrix.os }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
        with:
          path: ./src/github.com/${{ github.repository }}
      - name: Install Go
        uses: actions/setup-go@v1
        with:
          go-version: ${{ matrix.go }}
      - name: Set environment variables
        shell: bash
        run: |
          echo '::set-env name=GOPATH::${{ runner.workspace }}'
          echo '::add-path::${{ runner.workspace }}/bin'
          echo '::set-env name=GO111MODULE::${{ matrix.modules }}'
          echo '::set-env name=GOTAGS::${{ matrix.gotags }}'
          echo '::set-env name=BUILD_FLAGS::${{ matrix.build_flags }}'
          if [[ "${{ matrix.goarch }}" != "" ]]; then echo '::set-env name=GOARCH::${{ matrix.goarch }}' ; fi
          if [[ "${{ matrix.cgo }}" != "" ]]; then echo '::set-env name=CGO_ENABLED::${{ matrix.cgo }}' ; fi
      - name: Install Libraries on Linux
        shell: bash
        run: |
          sudo modprobe fuse
          sudo chmod 666 /dev/fuse
          sudo chown root:$USER /etc/fuse.conf
          sudo apt-get install fuse libfuse-dev rpm pkg-config
        if: matrix.os == 'ubuntu-latest'
      - name: Install Libraries on macOS
        shell: bash
        run: |
          brew update
          brew cask install osxfuse
        if: matrix.os == 'macOS-latest'
      - name: Install Libraries on Windows
        shell: powershell
        run: |
          $ProgressPreference = 'SilentlyContinue'
          choco install -y winfsp zip
          Write-Host "::set-env name=CPATH::C:\Program Files\WinFsp\inc\fuse;C:\Program Files (x86)\WinFsp\inc\fuse"
          if ($env:GOARCH -eq "386") {
            choco install -y mingw --forcex86 --force
            Write-Host "::add-path::C:\\ProgramData\\chocolatey\\lib\\mingw\\tools\\install\\mingw32\\bin"
          }
          # Copy mingw32-make.exe to make.exe so the same command line
          # can be used on Windows as on macOS and Linux
          $path = (get-command mingw32-make.exe).Path
          Copy-Item -Path $path -Destination (Join-Path (Split-Path -Path $path) 'make.exe')
        if: matrix.os == 'windows-latest'
      - name: Print Go version and environment
        shell: bash
        run: |
          printf "Using go at: $(which go)\n"
          printf "Go version: $(go version)\n"
          printf "\n\nGo environment:\n\n"
          go env
          printf "\n\nRclone environment:\n\n"
          make vars
          printf "\n\nSystem environment:\n\n"
          env
      - name: Run tests
        shell: bash
        run: |
          make
          make quicktest
        if: matrix.quicktest
      - name: Race test
        shell: bash
        run: |
          make racequicktest
        if: matrix.racequicktest
      - name: Code quality test
        shell: bash
        run: |
          make build_dep
          make check
        if: matrix.check
      - name: Compile all architectures test
        shell: bash
        run: |
          make
          make compile_all
        if: matrix.compile_all
      - name: Deploy built binaries
        shell: bash
        run: |
          if [[ "${{ matrix.os }}" == "ubuntu-latest" ]]; then make release_dep ; fi
          make travis_beta
        env:
          RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
        # working-directory: '$(modulePath)'
        if: matrix.deploy && github.head_ref == ''
  xgo:
    timeout-minutes: 60
    name: "xgo cross compile"
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@master
        with:
          path: ./src/github.com/${{ github.repository }}
      - name: Set environment variables
        shell: bash
        run: |
          echo '::set-env name=GOPATH::${{ runner.workspace }}'
          echo '::add-path::${{ runner.workspace }}/bin'
      - name: Cross-compile rclone
        run: |
          docker pull billziss/xgo-cgofuse
          go get -v github.com/karalabe/xgo
          xgo \
              -image=billziss/xgo-cgofuse \
              -targets=darwin/386,darwin/amd64,linux/386,linux/amd64,windows/386,windows/amd64 \
              -tags cmount \
              -dest build \
              .
          xgo \
              -image=billziss/xgo-cgofuse \
              -targets=android/*,ios/* \
              -dest build \
              .
      - name: Build rclone
        run: |
          docker pull golang
          docker run --rm -v "$PWD":/usr/src/rclone -w /usr/src/rclone golang go build -mod=vendor -v
      - name: Upload artifacts
        run: |
          make circleci_upload
        env:
          RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
        if: github.head_ref == ''


@@ -1,2 +0,0 @@
default_dependencies: false
cli: rclone


@@ -1,128 +0,0 @@
---
language: go
sudo: required
dist: xenial
os:
  - linux
go_import_path: github.com/rclone/rclone
before_install:
  - git fetch --unshallow --tags
  - |
    if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then
      sudo modprobe fuse
      sudo chmod 666 /dev/fuse
      sudo chown root:$USER /etc/fuse.conf
    fi
    if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
      brew update
      brew tap caskroom/cask
      brew cask install osxfuse
    fi
    if [[ "$TRAVIS_OS_NAME" == "windows" ]]; then
      choco install -y winfsp zip make
      cd ../..  # fix crlf in git checkout
      mv $TRAVIS_REPO_SLUG _old
      git config --global core.autocrlf false
      git clone _old $TRAVIS_REPO_SLUG
      cd $TRAVIS_REPO_SLUG
    fi
install:
  - make vars
env:
  global:
    - GOTAGS=cmount
    - GOMAXPROCS=8  # workaround for cmd/mount tests locking up - see #3154
    - GO111MODULE=off
    - GITHUB_USER=ncw
    - secure: gU8gCV9R8Kv/Gn0SmCP37edpfIbPoSvsub48GK7qxJdTU628H0KOMiZW/T0gtV5d67XJZ4eKnhJYlxwwxgSgfejO32Rh5GlYEKT/FuVoH0BD72dM1GDFLSrUiUYOdoHvf/BKIFA3dJFT4lk2ASy4Zh7SEoXHG6goBlqUpYx8hVA=
    - secure: Uaiveq+/rvQjO03GzvQZV2J6pZfedoFuhdXrLVhhHSeP4ZBca0olw7xaqkabUyP3LkVYXMDSX8EbyeuQT1jfEe5wp5sBdfaDtuYW6heFyjiHIIIbVyBfGXon6db4ETBjOaX/Xt8uktrgNge6qFlj+kpnmpFGxf0jmDLw1zgg7tk=
addons:
  apt:
    packages:
      - fuse
      - libfuse-dev
      - rpm
      - pkg-config
cache:
  directories:
    - $HOME/.cache/go-build
matrix:
  allow_failures:
    - go: tip
  include:
    - go: 1.9.x
      script:
        - make quicktest
    - go: 1.10.x
      script:
        - make quicktest
    - go: 1.11.x
      script:
        - make quicktest
    - go: 1.12.x
      name: Linux
      env:
        - GOTAGS=cmount
        - BUILD_FLAGS='-include "^linux/"'
        - DEPLOY=true
      script:
        - make build_dep
        - make check
        - make quicktest
    - go: 1.12.x
      name: Go Modules / Race
      env:
        - GO111MODULE=on
        - GOPROXY=https://proxy.golang.org
      script:
        - make quicktest
        - make racequicktest
    - go: 1.12.x
      name: Other OS
      env:
        - DEPLOY=true
        - BUILD_FLAGS='-exclude "^(windows|darwin|linux)/"'
      script:
        - make
        - make compile_all
    - go: 1.12.x
      name: macOS
      os: osx
      env:
        - GOTAGS=  # cmount doesn't work on osx travis for some reason
        - BUILD_FLAGS='-include "^darwin/" -cgo'
        - DEPLOY=true
      cache:
        directories:
          - $HOME/Library/Caches/go-build
      script:
        - make
        - make quicktest
        - make racequicktest
    # - os: windows
    #   name: Windows
    #   go: 1.12.x
    #   env:
    #     - GOTAGS=cmount
    #     - CPATH='C:\Program Files (x86)\WinFsp\inc\fuse'
    #     - BUILD_FLAGS='-include "^windows/amd64" -cgo'  # 386 doesn't build yet
    #   # filter_secrets: false  # works around a problem with secrets under windows
    #   cache:
    #     directories:
    #       - ${LocalAppData}/go-build
    #   script:
    #     - make
    #     - make quicktest
    #     - make racequicktest
    - go: tip
      script:
        - make quicktest
deploy:
  provider: script
  script: make travis_beta
  skip_cleanup: true
  on:
    repo: rclone/rclone
    all_branches: true
    condition: $TRAVIS_PULL_REQUEST == false && $DEPLOY == true


@@ -341,6 +341,12 @@ Getting going
* Add your remote to the imports in `backend/all/all.go`
* HTTP based remotes are easiest to maintain if they use rclone's rest module, but if there is a really good go SDK then use that instead.
* Try to implement as many optional methods as possible as it makes the remote more usable.
* Use fs/encoder to make sure we can encode any path name and `rclone info` to help determine the encodings needed
* `go install -tags noencode`
* `rclone purge -v TestRemote:rclone-info`
* `rclone info -vv --write-json remote.json TestRemote:rclone-info`
* `go run cmd/info/internal/build_csv/main.go -o remote.csv remote.json`
* open `remote.csv` in a spreadsheet and examine
Unit tests
@@ -367,12 +373,54 @@ Or if you want to run the integration tests manually:
See the [testing](#testing) section for more information on integration tests.
Add your fs to the docs - you'll need to pick an icon for it from [fontawesome](http://fontawesome.io/icons/). Keep lists of remotes in alphabetical order but with the local file system last.
Add your fs to the docs - you'll need to pick an icon for it from
[fontawesome](http://fontawesome.io/icons/). Keep lists of remotes in
alphabetical order of full name of remote (eg `drive` is ordered as
`Google Drive`) but with the local file system last.
* `README.md` - main GitHub page
* `docs/content/remote.md` - main docs page (note the backend options are automatically added to this file with `make backenddocs`)
* make sure this has the `autogenerated options` comments in (see your reference backend docs)
* update them with `make backenddocs` - revert any changes in other backends
* `docs/content/overview.md` - overview docs
* `docs/content/docs.md` - list of remotes in config section
* `docs/content/about.md` - front page of rclone.org
* `docs/layouts/chrome/navbar.html` - add it to the website navigation
* `bin/make_manual.py` - add the page to the `docs` constant
Once you've written the docs, run `make serve` and check they look OK
in the web browser and the links (internal and external) all work.
## Writing a plugin ##
New features (backends, commands) can also be added "out-of-tree", through Go plugins.
Changes will be kept in a dynamically loaded file instead of being compiled into the main binary.
This is useful if you can't merge your changes upstream or don't want to maintain a fork of rclone.
Usage
- Naming
- Plugin names must have the pattern `librcloneplugin_KIND_NAME.so`.
- `KIND` should be one of `backend`, `command` or `bundle`.
- Example: A plugin with backend support for PiFS would be called
`librcloneplugin_backend_pifs.so`.
- Loading
- Supported on macOS & Linux as of now. ([Go issue for Windows support](https://github.com/golang/go/issues/19282))
- Supported on rclone v1.50 or greater.
- All plugins in the folder specified by variable `$RCLONE_PLUGIN_PATH` are loaded.
- If this variable doesn't exist, plugin support is disabled.
- Plugins must be compiled against the exact version of rclone to work.
(The rclone source used to build the plugin must be the same as the host rclone's source.)
Building
To turn your existing additions into a Go plugin, move them to an external repository
and change the top-level package name to `main`.
Check `rclone --version` and make sure that the plugin's rclone dependency and host Go version match.
Then, run `go build -buildmode=plugin -o PLUGIN_NAME.so .` to build the plugin.
[Go reference](https://godoc.org/github.com/rclone/rclone/lib/plugin)
[Minimal example](https://gist.github.com/terorie/21b517ee347828e899e1913efc1d684f)
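
For a feel of the shape, a minimal hypothetical backend plugin (the pifs name follows the example above; a real plugin supplies a working constructor rather than nil):

```
package main

import "github.com/rclone/rclone/fs"

func init() {
	// Registering on load is what makes the plugin useful once rclone
	// picks it up from $RCLONE_PLUGIN_PATH.
	fs.Register(&fs.RegInfo{
		Name:        "pifs",
		Description: "PiFS - hypothetical example backend",
		NewFs:       nil, // a real plugin supplies its constructor here
	})
}
```
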


@@ -12,10 +12,11 @@ RUN ./rclone version
# Begin final image
FROM alpine:latest
RUN apk --no-cache add ca-certificates
RUN apk --no-cache add ca-certificates fuse
WORKDIR /root/
COPY --from=builder /go/src/github.com/rclone/rclone/rclone /usr/local/bin/
COPY --from=builder /go/src/github.com/rclone/rclone/rclone .
ENTRYPOINT [ "rclone" ]
ENTRYPOINT [ "./rclone" ]
WORKDIR /data
ENV XDG_CONFIG_HOME=/config

MANUAL.html (generated)

@@ -17,7 +17,7 @@
<header>
<h1 class="title">rclone(1) User Manual</h1>
<p class="author">Nick Craig-Wood</p>
<p class="date">Sep 08, 2019</p>
<p class="date">Aug 26, 2019</p>
</header>
<h1 id="rclone---rsync-for-cloud-storage">Rclone - rsync for cloud storage</h1>
<p>Rclone is a command line program to sync files and directories to and from:</p>
@@ -134,20 +134,6 @@ sudo mv rclone /usr/local/bin/</code></pre>
<pre><code>cd .. &amp;&amp; rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip</code></pre>
<p>Run <code>rclone config</code> to setup. See <a href="https://rclone.org/docs/">rclone config docs</a> for more details.</p>
<pre><code>rclone config</code></pre>
<h2 id="install-with-docker">Install with docker</h2>
<p>The rclone maintains a <a href="https://hub.docker.com/r/rclone/rclone">docker image for rclone</a>. These images are autobuilt by docker hub from the rclone source based on a minimal Alpine linux image.</p>
<p>The <code>:latest</code> tag will always point to the latest stable release. You can use the <code>:beta</code> tag to get the latest build from master. You can also use version tags, eg <code>:1.49.1</code>, <code>:1.49</code> or <code>:1</code>.</p>
<pre><code>$ docker pull rclone/rclone:latest
latest: Pulling from rclone/rclone
Digest: sha256:0e0ced72671989bb837fea8e88578b3fc48371aa45d209663683e24cfdaa0e11
...
$ docker run --rm rclone/rclone:latest version
rclone v1.49.1
- os/arch: linux/amd64
- go version: go1.12.9</code></pre>
<p>You will probably want to mount rclone's config file directory or file from the host, or configure rclone with environment variables.</p>
<p>Eg to share your local config with the container</p>
<pre><code>docker run -v ~/.config/rclone:/root/.config/rclone rclone/rclone:latest listremotes</code></pre>
<h2 id="install-from-source">Install from source</h2>
<p>Make sure you have at least <a href="https://golang.org/">Go</a> 1.7 installed. <a href="https://golang.org/dl/">Download go</a> if necessary. The latest release is recommended. Then</p>
<pre><code>git clone https://github.com/rclone/rclone.git
@@ -3315,16 +3301,12 @@ rclone rc cache/expire remote=/ withData=true</code></pre>
<p>This takes the following parameters</p>
<ul>
<li>name - name of remote</li>
<li>parameters - a map of { “key”: “value” } pairs</li>
<li>type - type of the new remote</li>
</ul>
<p>See the <a href="https://rclone.org/commands/rclone_config_create/">config create command</a> command for more information on the above.</p>
<p>Authentication is required for this call.</p>
<h3 id="configdelete-delete-a-remote-in-the-config-file.-configdelete">config/delete: Delete a remote in the config file. {#config/delete}</h3>
<p>Parameters:</p>
<ul>
<li>name - name of remote to delete</li>
</ul>
<p>Parameters: - name - name of remote to delete</p>
<p>See the <a href="https://rclone.org/commands/rclone_config_delete/">config delete command</a> command for more information on the above.</p>
<p>Authentication is required for this call.</p>
<h3 id="configdump-dumps-the-config-file.-configdump">config/dump: Dumps the config file. {#config/dump}</h3>
@@ -3344,7 +3326,6 @@ rclone rc cache/expire remote=/ withData=true</code></pre>
<p>This takes the following parameters</p>
<ul>
<li>name - name of remote</li>
<li>parameters - a map of { “key”: “value” } pairs</li>
</ul>
<p>See the <a href="https://rclone.org/commands/rclone_config_password/">config password command</a> command for more information on the above.</p>
<p>Authentication is required for this call.</p>
@@ -3356,7 +3337,6 @@ rclone rc cache/expire remote=/ withData=true</code></pre>
<p>This takes the following parameters</p>
<ul>
<li>name - name of remote</li>
<li>parameters - a map of { “key”: “value” } pairs</li>
</ul>
<p>See the <a href="https://rclone.org/commands/rclone_config_update/">config update command</a> command for more information on the above.</p>
<p>Authentication is required for this call.</p>
@@ -4610,7 +4590,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default &quot;rclone/v1.49.2&quot;)
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default &quot;rclone/v1.49.0&quot;)
-v, --verbose count Print lots more stuff (repeat for more)</code></pre>
<h2 id="backend-flags">Backend Flags</h2>
<p>These flags are available for every command. They control the backends and may be set in the config file.</p>
@@ -12308,41 +12288,6 @@ $ tree /tmp/b
</ul>
<!--- autogenerated options stop -->
<h1 id="changelog">Changelog</h1>
<h2 id="v1.49.2---2019-09-08">v1.49.2 - 2019-09-08</h2>
<ul>
<li>New Features
<ul>
<li>build: Add Docker workflow support (Alfonso Montero)</li>
</ul></li>
<li>Bug Fixes
<ul>
<li>accounting: Fix locking in Transfer to avoid deadlock with progress (Nick Craig-Wood)</li>
<li>docs: Fix template argument for mktemp in install.sh (Cnly)</li>
<li>operations: Fix -u/update with google photos / files of unknown size (Nick Craig-Wood)</li>
<li>rc: Fix docs for config/create /update /password (Nick Craig-Wood)</li>
</ul></li>
<li>Google Cloud Storage
<ul>
<li>Fix need for elevated permissions on SetModTime (Nick Craig-Wood)</li>
</ul></li>
</ul>
<h2 id="v1.49.1---2019-08-28">v1.49.1 - 2019-08-28</h2>
<p>Point release to fix config bug and google photos backend.</p>
<ul>
<li>Bug Fixes
<ul>
<li>config: Fix generated passwords being stored as empty password (Nick Craig-Wood)</li>
<li>rcd: Added missing parameter for web-gui info logs. (Chaitanya)</li>
</ul></li>
<li>Googlephotos
<ul>
<li>Fix crash on error response (Nick Craig-Wood)</li>
</ul></li>
<li>Onedrive
<ul>
<li>Fix crash on error response (Nick Craig-Wood)</li>
</ul></li>
</ul>
<h2 id="v1.49.0---2019-08-26">v1.49.0 - 2019-08-26</h2>
<ul>
<li>New backends
@@ -12357,10 +12302,8 @@ $ tree /tmp/b
<li>Experimental <a href="https://rclone.org/gui/">web GUI</a> (Chaitanya Bankanhal)</li>
<li>Implement <code>--compare-dest</code> &amp; <code>--copy-dest</code> (yparitcher)</li>
<li>Implement <code>--suffix</code> without <code>--backup-dir</code> for backup to current dir (yparitcher)</li>
<li><code>config reconnect</code> to re-login (re-run the oauth login) for the backend. (Nick Craig-Wood)</li>
<li><code>config userinfo</code> to discover which user you are logged in as. (Nick Craig-Wood)</li>
<li><code>config disconnect</code> to disconnect you (log out) from the backend. (Nick Craig-Wood)</li>
<li>Add <code>--use-json-log</code> for JSON logging (justinalin)</li>
<li>Add <code>config reconnect</code>, <code>config userinfo</code> and <code>config disconnect</code> subcommands. (Nick Craig-Wood)</li>
<li>Add context propagation to rclone (Aleksandar Jankovic)</li>
<li>Reworking internal statistics interfaces so they work with rc jobs (Aleksandar Jankovic)</li>
<li>Add Higher units for ETA (AbelThar)</li>
@@ -15342,7 +15285,7 @@ export NO_PROXY=$no_proxy</code></pre>
<pre><code>mkdir -p /etc/ssl/certs/
curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
ntpclient -s -h pool.ntp.org</code></pre>
<p>The two environment variables <code>SSL_CERT_FILE</code> and <code>SSL_CERT_DIR</code>, mentioned in the <a href="https://godoc.org/crypto/x509">x509 pacakge</a>, provide an additional way to provide the SSL root certificates.</p>
<p>The two environment variables <code>SSL_CERT_FILE</code> and <code>SSL_CERT_DIR</code>, mentioned in the <a href="https://godoc.org/crypto/x509">x509 package</a>, provide an additional way to provide the SSL root certificates.</p>
<p>Note that you may need to add the <code>--insecure</code> option to the <code>curl</code> command line if it doesn't work without.</p>
<pre><code>curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt</code></pre>
<h3 id="rclone-gives-failed-to-load-config-file-function-not-implemented-error">Rclone gives Failed to load config file: function not implemented error</h3>

MANUAL.md generated

@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
% Sep 08, 2019
% Aug 26, 2019
# Rclone - rsync for cloud storage
@@ -151,36 +151,6 @@ Run `rclone config` to setup. See [rclone config docs](https://rclone.org/docs/)
rclone config
## Install with docker ##
The rclone project maintains a [docker image for rclone](https://hub.docker.com/r/rclone/rclone).
These images are autobuilt by docker hub from the rclone source based
on a minimal Alpine linux image.
The `:latest` tag will always point to the latest stable release. You
can use the `:beta` tag to get the latest build from master. You can
also use version tags, eg `:1.49.1`, `:1.49` or `:1`.
```
$ docker pull rclone/rclone:latest
latest: Pulling from rclone/rclone
Digest: sha256:0e0ced72671989bb837fea8e88578b3fc48371aa45d209663683e24cfdaa0e11
...
$ docker run --rm rclone/rclone:latest version
rclone v1.49.1
- os/arch: linux/amd64
- go version: go1.12.9
```
You will probably want to mount rclone's config file directory or file
from the host, or configure rclone with environment variables.
Eg to share your local config with the container
```
docker run -v ~/.config/rclone:/root/.config/rclone rclone/rclone:latest listremotes
```
## Install from source ##
Make sure you have at least [Go](https://golang.org/) 1.7
@@ -7040,7 +7010,6 @@ Show statistics for the cache remote.
This takes the following parameters
- name - name of remote
- parameters - a map of \{ "key": "value" \} pairs
- type - type of the new remote
@@ -7051,7 +7020,6 @@ Authentication is required for this call.
### config/delete: Delete a remote in the config file. {#config/delete}
Parameters:
- name - name of remote to delete
See the [config delete command](https://rclone.org/commands/rclone_config_delete/) for more information on the above.
@@ -7092,7 +7060,6 @@ Authentication is required for this call.
This takes the following parameters
- name - name of remote
- parameters - a map of \{ "key": "value" \} pairs
See the [config password command](https://rclone.org/commands/rclone_config_password/) for more information on the above.
@@ -7113,7 +7080,6 @@ Authentication is required for this call.
This takes the following parameters
- name - name of remote
- parameters - a map of \{ "key": "value" \} pairs
See the [config update command](https://rclone.org/commands/rclone_config_update/) for more information on the above.
@@ -8279,7 +8245,7 @@ These flags are available for every command.
--use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.49.2")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.49.0")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -18500,30 +18466,6 @@ to override the default choice.
# Changelog
## v1.49.2 - 2019-09-08
* New Features
* build: Add Docker workflow support (Alfonso Montero)
* Bug Fixes
* accounting: Fix locking in Transfer to avoid deadlock with --progress (Nick Craig-Wood)
* docs: Fix template argument for mktemp in install.sh (Cnly)
* operations: Fix -u/--update with google photos / files of unknown size (Nick Craig-Wood)
* rc: Fix docs for config/create /update /password (Nick Craig-Wood)
* Google Cloud Storage
* Fix need for elevated permissions on SetModTime (Nick Craig-Wood)
## v1.49.1 - 2019-08-28
Point release to fix config bug and google photos backend.
* Bug Fixes
* config: Fix generated passwords being stored as empty password (Nick Craig-Wood)
* rcd: Added missing parameter for web-gui info logs. (Chaitanya)
* Googlephotos
* Fix crash on error response (Nick Craig-Wood)
* Onedrive
* Fix crash on error response (Nick Craig-Wood)
## v1.49.0 - 2019-08-26
* New backends
@@ -18535,10 +18477,8 @@ Point release to fix config bug and google photos backend.
* Experimental [web GUI](https://rclone.org/gui/) (Chaitanya Bankanhal)
* Implement `--compare-dest` & `--copy-dest` (yparitcher)
* Implement `--suffix` without `--backup-dir` for backup to current dir (yparitcher)
* `config reconnect` to re-login (re-run the oauth login) for the backend. (Nick Craig-Wood)
* `config userinfo` to discover which user you are logged in as. (Nick Craig-Wood)
* `config disconnect` to disconnect you (log out) from the backend. (Nick Craig-Wood)
* Add `--use-json-log` for JSON logging (justinalin)
* Add `config reconnect`, `config userinfo` and `config disconnect` subcommands. (Nick Craig-Wood)
* Add context propagation to rclone (Aleksandar Jankovic)
* Reworking internal statistics interfaces so they work with rc jobs (Aleksandar Jankovic)
* Add Higher units for ETA (AbelThar)
@@ -20686,7 +20626,7 @@ curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bag
ntpclient -s -h pool.ntp.org
```
The two environment variables `SSL_CERT_FILE` and `SSL_CERT_DIR`, mentioned in the [x509 pacakge](https://godoc.org/crypto/x509),
The two environment variables `SSL_CERT_FILE` and `SSL_CERT_DIR`, mentioned in the [x509 package](https://godoc.org/crypto/x509),
provide an additional way to provide the SSL root certificates.
Note that you may need to add the `--insecure` option to the `curl` command line if it doesn't work without.

MANUAL.txt generated

@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
Sep 08, 2019
Aug 26, 2019
@@ -164,33 +164,6 @@ Run rclone config to setup. See rclone config docs for more details.
rclone config
Install with docker
The rclone project maintains a docker image for rclone. These images are
autobuilt by docker hub from the rclone source based on a minimal Alpine
linux image.
The :latest tag will always point to the latest stable release. You can
use the :beta tag to get the latest build from master. You can also use
version tags, eg :1.49.1, :1.49 or :1.
$ docker pull rclone/rclone:latest
latest: Pulling from rclone/rclone
Digest: sha256:0e0ced72671989bb837fea8e88578b3fc48371aa45d209663683e24cfdaa0e11
...
$ docker run --rm rclone/rclone:latest version
rclone v1.49.1
- os/arch: linux/amd64
- go version: go1.12.9
You will probably want to mount rclone's config file directory or file
from the host, or configure rclone with environment variables.
Eg to share your local config with the container
docker run -v ~/.config/rclone:/root/.config/rclone rclone/rclone:latest listremotes
Install from source
Make sure you have at least Go 1.7 installed. Download go if necessary.
@@ -6677,7 +6650,6 @@ config/create: create the config for a remote. {#config/create}
This takes the following parameters
- name - name of remote
- parameters - a map of { “key”: “value” } pairs
- type - type of the new remote
See the config create command for more information on the above.
@@ -6686,9 +6658,7 @@ Authentication is required for this call.
config/delete: Delete a remote in the config file. {#config/delete}
Parameters:
- name - name of remote to delete
Parameters: - name - name of remote to delete
See the config delete command for more information on the above.
@@ -6725,7 +6695,6 @@ config/password: password the config for a remote. {#config/password}
This takes the following parameters
- name - name of remote
- parameters - a map of { “key”: “value” } pairs
See the config password command for more information on the above.
@@ -6746,7 +6715,6 @@ config/update: update the config for a remote. {#config/update}
This takes the following parameters
- name - name of remote
- parameters - a map of { “key”: “value” } pairs
See the config update command for more information on the above.
@@ -7856,7 +7824,7 @@ These flags are available for every command.
--use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.49.2")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.49.0")
-v, --verbose count Print lots more stuff (repeat for more)
@@ -17955,37 +17923,6 @@ override the default choice.
CHANGELOG
v1.49.2 - 2019-09-08
- New Features
- build: Add Docker workflow support (Alfonso Montero)
- Bug Fixes
- accounting: Fix locking in Transfer to avoid deadlock with
--progress (Nick Craig-Wood)
- docs: Fix template argument for mktemp in install.sh (Cnly)
- operations: Fix -u/--update with google photos / files of unknown
size (Nick Craig-Wood)
- rc: Fix docs for config/create /update /password (Nick
Craig-Wood)
- Google Cloud Storage
- Fix need for elevated permissions on SetModTime (Nick
Craig-Wood)
v1.49.1 - 2019-08-28
Point release to fix config bug and google photos backend.
- Bug Fixes
- config: Fix generated passwords being stored as empty password
(Nick Craig-Wood)
- rcd: Added missing parameter for web-gui info logs. (Chaitanya)
- Googlephotos
- Fix crash on error response (Nick Craig-Wood)
- Onedrive
- Fix crash on error response (Nick Craig-Wood)
v1.49.0 - 2019-08-26
- New backends
@@ -17998,13 +17935,9 @@ v1.49.0 - 2019-08-26
- Implement --compare-dest & --copy-dest (yparitcher)
- Implement --suffix without --backup-dir for backup to current
dir (yparitcher)
- config reconnect to re-login (re-run the oauth login) for the
backend. (Nick Craig-Wood)
- config userinfo to discover which user you are logged in as.
(Nick Craig-Wood)
- config disconnect to disconnect you (log out) from the backend.
(Nick Craig-Wood)
- Add --use-json-log for JSON logging (justinalin)
- Add config reconnect, config userinfo and config disconnect
subcommands. (Nick Craig-Wood)
- Add context propagation to rclone (Aleksandar Jankovic)
- Reworking internal statistics interfaces so they work with rc
jobs (Aleksandar Jankovic)
@@ -20637,7 +20570,7 @@ time which is important for SSL to work properly.
ntpclient -s -h pool.ntp.org
The two environment variables SSL_CERT_FILE and SSL_CERT_DIR, mentioned
in the x509 pacakge, provide an additional way to provide the SSL root
in the x509 package, provide an additional way to provide the SSL root
certificates.
Note that you may need to add the --insecure option to the curl command

Makefile

@@ -1,18 +1,29 @@
SHELL = bash
BRANCH := $(or $(APPVEYOR_REPO_BRANCH),$(TRAVIS_BRANCH),$(BUILD_SOURCEBRANCHNAME),$(shell git rev-parse --abbrev-ref HEAD))
# Branch we are working on
BRANCH := $(or $(APPVEYOR_REPO_BRANCH),$(TRAVIS_BRANCH),$(BUILD_SOURCEBRANCHNAME),$(lastword $(subst /, ,$(GITHUB_REF))),$(shell git rev-parse --abbrev-ref HEAD))
# Tag of the current commit, if any. If this is not "" then we are building a release
RELEASE_TAG := $(shell git tag -l --points-at HEAD)
# Version of last release (may not be on this branch)
VERSION := $(shell cat VERSION)
# Last tag on this branch
LAST_TAG := $(shell git describe --tags --abbrev=0)
ifeq ($(BRANCH),$(LAST_TAG))
# If we are working on a release, override branch to master
ifdef RELEASE_TAG
BRANCH := master
endif
TAG_BRANCH := -$(BRANCH)
BRANCH_PATH := branch/
# If building HEAD or master then unset TAG_BRANCH and BRANCH_PATH
ifeq ($(subst HEAD,,$(subst master,,$(BRANCH))),)
TAG_BRANCH :=
BRANCH_PATH :=
endif
TAG := $(shell echo $$(git describe --abbrev=8 --tags | sed 's/-\([0-9]\)-/-00\1-/; s/-\([0-9][0-9]\)-/-0\1-/'))$(TAG_BRANCH)
NEW_TAG := $(shell echo $(LAST_TAG) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f.0", $$_)')
ifneq ($(TAG),$(LAST_TAG))
# Make version suffix -DDD-gCCCCCCCC (D=commits since last release, C=Commit) or blank
VERSION_SUFFIX := $(shell git describe --abbrev=8 --tags | perl -lpe 's/^v\d+\.\d+\.\d+//; s/^-(\d+)/"-".sprintf("%03d",$$1)/e;')
# TAG is current version + number of commits since last release + branch
TAG := $(VERSION)$(VERSION_SUFFIX)$(TAG_BRANCH)
NEXT_VERSION := $(shell echo $(VERSION) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f.0", $$_)')
ifndef RELEASE_TAG
TAG := $(TAG)-beta
endif
GO_VERSION := $(shell go version)
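Worked example of the version arithmetic above, with a hypothetical commit hash and branch name:
```
# Suppose VERSION = v1.49.5, the current branch is "fix", RELEASE_TAG
# is empty, and `git describe --abbrev=8 --tags` prints
# v1.49.5-112-gabcdef12. Then:
#   VERSION_SUFFIX = -112-gabcdef12
#   TAG            = v1.49.5-112-gabcdef12-fix-beta
# NEXT_VERSION strips the "v", numifies, adds 0.01 and reformats:
#   v1.49.5 -> 1.49 -> 1.50 -> v1.50.0
```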
@@ -33,8 +44,9 @@ endif
.PHONY: rclone test_all vars version
rclone:
go install -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS)
cp -av `go env GOPATH`/bin/rclone .
go build -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS)
mkdir -p `go env GOPATH`/bin/
cp -av rclone`go env GOEXE` `go env GOPATH`/bin/
test_all:
go install --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) github.com/rclone/rclone/fstest/test_all
@@ -43,8 +55,8 @@ vars:
@echo SHELL="'$(SHELL)'"
@echo BRANCH="'$(BRANCH)'"
@echo TAG="'$(TAG)'"
@echo LAST_TAG="'$(LAST_TAG)'"
@echo NEW_TAG="'$(NEW_TAG)'"
@echo VERSION="'$(VERSION)'"
@echo NEXT_VERSION="'$(NEXT_VERSION)'"
@echo GO_VERSION="'$(GO_VERSION)'"
@echo BETA_URL="'$(BETA_URL)'"
@@ -75,8 +87,8 @@ build_dep:
# Get the release dependencies
release_dep:
go get -u github.com/goreleaser/nfpm/...
go get -u github.com/aktau/github-release
go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64.tar.gz'
go run bin/get-github-release.go -extract github-release aktau/github-release 'linux-amd64-github-release.tar.bz2'
# Update dependencies
update:
@@ -191,24 +203,25 @@ serve: website
cd docs && hugo server -v -w
tag: doc
@echo "Old tag is $(LAST_TAG)"
@echo "New tag is $(NEW_TAG)"
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(NEW_TAG)\"\n" | gofmt > fs/version.go
echo -n "$(NEW_TAG)" > docs/layouts/partials/version.html
git tag -s -m "Version $(NEW_TAG)" $(NEW_TAG)
bin/make_changelog.py $(LAST_TAG) $(NEW_TAG) > docs/content/changelog.md.new
@echo "Old tag is $(VERSION)"
@echo "New tag is $(NEXT_VERSION)"
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(NEXT_VERSION)\"\n" | gofmt > fs/version.go
echo -n "$(NEXT_VERSION)" > docs/layouts/partials/version.html
echo "$(NEXT_VERSION)" > VERSION
git tag -s -m "Version $(NEXT_VERSION)" $(NEXT_VERSION)
bin/make_changelog.py $(LAST_TAG) $(NEXT_VERSION) > docs/content/changelog.md.new
mv docs/content/changelog.md.new docs/content/changelog.md
@echo "Edit the new changelog in docs/content/changelog.md"
@echo "Then commit all the changes"
@echo git commit -m \"Version $(NEW_TAG)\" -a -v
@echo git commit -m \"Version $(NEXT_VERSION)\" -a -v
@echo "And finally run make retag before make cross etc"
retag:
git tag -f -s -m "Version $(LAST_TAG)" $(LAST_TAG)
git tag -f -s -m "Version $(VERSION)" $(VERSION)
startdev:
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(LAST_TAG)-DEV\"\n" | gofmt > fs/version.go
git commit -m "Start $(LAST_TAG)-DEV development" fs/version.go
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(VERSION)-DEV\"\n" | gofmt > fs/version.go
git commit -m "Start $(VERSION)-DEV development" fs/version.go
winzip:
zip -9 rclone-$(TAG).zip rclone.exe

README.md

@@ -14,6 +14,7 @@
[![CircleCI](https://circleci.com/gh/rclone/rclone/tree/master.svg?style=svg)](https://circleci.com/gh/rclone/rclone/tree/master)
[![Go Report Card](https://goreportcard.com/badge/github.com/rclone/rclone)](https://goreportcard.com/report/github.com/rclone/rclone)
[![GoDoc](https://godoc.org/github.com/rclone/rclone?status.svg)](https://godoc.org/github.com/rclone/rclone)
[![Docker Pulls](https://img.shields.io/docker/pulls/rclone/rclone)](https://hub.docker.com/r/rclone/rclone)
# Rclone
@@ -21,13 +22,14 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
## Storage providers
* 1Fichier [:page_facing_up:](https://rclone.org/ficher/)
* 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
* Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
* Amazon Drive [:page_facing_up:](https://rclone.org/amazonclouddrive/) ([See note](https://rclone.org/amazonclouddrive/#status))
* Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
* Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
* Box [:page_facing_up:](https://rclone.org/box/)
* Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
* Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
* DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
* Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
@@ -40,6 +42,7 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
* Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
* IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
* Koofr [:page_facing_up:](https://rclone.org/koofr/)
* Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
* Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
* Mega [:page_facing_up:](https://rclone.org/mega/)
* Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
@@ -74,6 +77,7 @@ Please see [the full list of all storage providers and their features](https://r
* [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory identical
* [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash equality
* Can sync to and from network, e.g. two different cloud accounts
* Optional large file chunking ([Chunker](https://rclone.org/chunker/))
* Optional encryption ([Crypt](https://rclone.org/crypt/))
* Optional cache ([Cache](https://rclone.org/cache/))
* Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))

RELEASE.md

@@ -1,8 +1,14 @@
Extra required software for making a release
# Release
This file describes how to make the various kinds of releases
## Extra required software for making a release
* [github-release](https://github.com/aktau/github-release) for uploading packages
* pandoc for making the html and man pages
Making a release
## Making a release
* git status - make sure everything is checked in
* Check travis & appveyor builds are green
* make check
@@ -26,8 +32,8 @@ Making a release
* # announce with forum post, twitter post, G+ post
Early in the next release cycle update the vendored dependencies
* Review any pinned packages in go.mod and remove if possible
* GO111MODULE=on go get -u github.com/spf13/cobra@master
* make update
* git status
* git add new files
@@ -48,26 +54,45 @@ Can be fixed with
* GO111MODULE=on go mod vendor
Making a point release. If rclone needs a point release due to some
horrendous bug, then
* git branch v1.XX v1.XX-fixes
## Making a point release
If rclone needs a point release due to some horrendous bug:
First make the release branch. If this is a second point release then
this will be done already.
* BASE_TAG=v1.XX # eg v1.49
* NEW_TAG=${BASE_TAG}.Y # eg v1.49.1
* echo $BASE_TAG $NEW_TAG # v1.49 v1.49.1
* git branch ${BASE_TAG} ${BASE_TAG}-fixes
Now
* git co ${BASE_TAG}-fixes
* git cherry-pick any fixes
* Test (see above)
* make NEW_TAG=v1.XX.1 tag
* make NEXT_VERSION=${NEW_TAG} tag
* edit docs/content/changelog.md
* make TAG=v1.43.1 doc
* git commit -a -v -m "Version v1.XX.1"
* git tag -d -v1.XX.1
* git tag -s -m "Version v1.XX.1" v1.XX.1
* git push --tags -u origin v1.XX-fixes
* make BRANCH_PATH= TAG=v1.43.1 fetch_binaries
* make TAG=v1.43.1 tarball
* make TAG=v1.43.1 sign_upload
* make TAG=v1.43.1 check_sign
* make TAG=v1.43.1 upload
* make TAG=v1.43.1 upload_website
* make TAG=v1.43.1 upload_github
* NB this overwrites the current beta so after the release, rebuild the last travis build
* make TAG=${NEW_TAG} doc
* git commit -a -v -m "Version ${NEW_TAG}"
* git tag -d ${NEW_TAG}
* git tag -s -m "Version ${NEW_TAG}" ${NEW_TAG}
* git push --tags -u origin ${BASE_TAG}-fixes
* Wait for builds to complete
* make BRANCH_PATH= TAG=${NEW_TAG} fetch_binaries
* make TAG=${NEW_TAG} tarball
* make TAG=${NEW_TAG} sign_upload
* make TAG=${NEW_TAG} check_sign
* make TAG=${NEW_TAG} upload
* make TAG=${NEW_TAG} upload_website
* make TAG=${NEW_TAG} upload_github
* NB this overwrites the current beta so we need to do this
* git co master
* make LAST_TAG=${NEW_TAG} startdev
* # cherry pick the changes to the changelog and VERSION
* git checkout ${BASE_TAG}-fixes VERSION docs/content/changelog.md
* git commit --amend
* git push
* Announce!
## Making a manual build of docker

VERSION Normal file

@@ -0,0 +1 @@
v1.49.5

azure-pipelines.yml deleted

@@ -1,239 +0,0 @@
---
# Azure pipelines build for rclone
# Parts stolen shamelessly from all round the Internet, especially Caddy
# -*- compile-command: "yamllint -f parsable azure-pipelines.yml" -*-
trigger:
branches:
include:
- '*'
tags:
include:
- '*'
variables:
GOROOT: $(gorootDir)/go
GOPATH: $(system.defaultWorkingDirectory)/gopath
GOCACHE: $(system.defaultWorkingDirectory)/gocache
GOBIN: $(GOPATH)/bin
modulePath: '$(GOPATH)/src/github.com/$(build.repository.name)'
GO111MODULE: 'off'
GOTAGS: cmount
GO_LATEST: false
CPATH: ''
GO_INSTALL_ARCH: amd64
strategy:
matrix:
linux:
imageName: ubuntu-16.04
gorootDir: /usr/local
GO_VERSION: latest
GOTAGS: cmount
BUILD_FLAGS: '-include "^linux/"'
MAKE_CHECK: true
MAKE_QUICKTEST: true
DEPLOY: true
mac:
imageName: macos-10.13
gorootDir: /usr/local
GO_VERSION: latest
GOTAGS: "" # cmount doesn't work on osx travis for some reason
BUILD_FLAGS: '-include "^darwin/" -cgo'
MAKE_QUICKTEST: true
MAKE_RACEQUICKTEST: true
DEPLOY: true
windows_amd64:
imageName: windows-2019
gorootDir: C:\
GO_VERSION: latest
BUILD_FLAGS: '-include "^windows/amd64" -cgo'
MAKE_QUICKTEST: true
DEPLOY: true
windows_386:
imageName: windows-2019
gorootDir: C:\
GO_VERSION: latest
GO_INSTALL_ARCH: 386
BUILD_FLAGS: '-include "^windows/386" -cgo'
MAKE_QUICKTEST: true
DEPLOY: true
other_os:
imageName: ubuntu-16.04
gorootDir: /usr/local
GO_VERSION: latest
BUILD_FLAGS: '-exclude "^(windows|darwin|linux)/"'
MAKE_COMPILE_ALL: true
DEPLOY: true
modules_race:
imageName: ubuntu-16.04
gorootDir: /usr/local
GO_VERSION: latest
GO111MODULE: on
GOPROXY: https://proxy.golang.org
MAKE_QUICKTEST: true
MAKE_RACEQUICKTEST: true
go1.9:
imageName: ubuntu-16.04
gorootDir: /usr/local
GOCACHE: '' # build caching only came in go1.10
GO_VERSION: go1.9.7
MAKE_QUICKTEST: true
go1.10:
imageName: ubuntu-16.04
gorootDir: /usr/local
GO_VERSION: go1.10.8
MAKE_QUICKTEST: true
go1.11:
imageName: ubuntu-16.04
gorootDir: /usr/local
GO_VERSION: go1.11.12
MAKE_QUICKTEST: true
pool:
vmImage: $(imageName)
steps:
- bash: |
latestGo=$(curl "https://golang.org/VERSION?m=text")
echo "##vso[task.setvariable variable=GO_VERSION]$latestGo"
echo "##vso[task.setvariable variable=GO_LATEST]true"
echo "Latest Go version: $latestGo"
condition: eq( variables['GO_VERSION'], 'latest' )
continueOnError: false
displayName: "Get latest Go version"
- bash: |
sudo rm -f $(which go)
echo '##vso[task.prependpath]$(GOBIN)'
echo '##vso[task.prependpath]$(GOROOT)/bin'
mkdir -p '$(modulePath)'
shopt -s extglob
shopt -s dotglob
mv !(gopath) '$(modulePath)'
continueOnError: false
displayName: Remove old Go, set GOBIN/GOROOT, and move project into GOPATH
- task: CacheBeta@0
inputs:
key: go-build-cache | "$(Agent.JobName)"
path: $(GOCACHE)
continueOnError: true
displayName: Cache go build
condition: ne( variables['GOCACHE'], '' )
# Install Libraries (varies by platform)
- bash: |
sudo modprobe fuse
sudo chmod 666 /dev/fuse
sudo chown root:$USER /etc/fuse.conf
sudo apt-get install fuse libfuse-dev rpm pkg-config
condition: eq( variables['Agent.OS'], 'Linux' )
continueOnError: false
displayName: Install Libraries on Linux
- bash: |
brew update
brew tap caskroom/cask
brew cask install osxfuse
condition: eq( variables['Agent.OS'], 'Darwin' )
continueOnError: false
displayName: Install Libraries on macOS
- powershell: |
$ProgressPreference = 'SilentlyContinue'
choco install -y winfsp zip
Write-Host "##vso[task.setvariable variable=CPATH]C:\Program Files\WinFsp\inc\fuse;C:\Program Files (x86)\WinFsp\inc\fuse"
if ($env:GO_INSTALL_ARCH -eq "386") {
choco install -y mingw --forcex86 --force
Write-Host "##vso[task.prependpath]C:\\ProgramData\\chocolatey\\lib\\mingw\\tools\\install\\mingw32\\bin"
}
# Copy mingw32-make.exe to make.exe so the same command line
# can be used on Windows as on macOS and Linux
$path = (get-command mingw32-make.exe).Path
Copy-Item -Path $path -Destination (Join-Path (Split-Path -Path $path) 'make.exe')
condition: eq( variables['Agent.OS'], 'Windows_NT' )
continueOnError: false
displayName: Install Libraries on Windows
# Install Go (this varies by platform)
- bash: |
wget "https://dl.google.com/go/$(GO_VERSION).linux-$(GO_INSTALL_ARCH).tar.gz"
sudo mkdir $(gorootDir)
sudo chown ${USER}:${USER} $(gorootDir)
tar -C $(gorootDir) -xzf "$(GO_VERSION).linux-$(GO_INSTALL_ARCH).tar.gz"
condition: eq( variables['Agent.OS'], 'Linux' )
continueOnError: false
displayName: Install Go on Linux
- bash: |
wget "https://dl.google.com/go/$(GO_VERSION).darwin-$(GO_INSTALL_ARCH).tar.gz"
sudo tar -C $(gorootDir) -xzf "$(GO_VERSION).darwin-$(GO_INSTALL_ARCH).tar.gz"
condition: eq( variables['Agent.OS'], 'Darwin' )
continueOnError: false
displayName: Install Go on macOS
- powershell: |
$ProgressPreference = 'SilentlyContinue'
Write-Host "Downloading Go $(GO_VERSION) for $(GO_INSTALL_ARCH)"
(New-Object System.Net.WebClient).DownloadFile("https://dl.google.com/go/$(GO_VERSION).windows-$(GO_INSTALL_ARCH).zip", "$(GO_VERSION).windows-$(GO_INSTALL_ARCH).zip")
Write-Host "Extracting Go"
Expand-Archive "$(GO_VERSION).windows-$(GO_INSTALL_ARCH).zip" -DestinationPath "$(gorootDir)"
condition: eq( variables['Agent.OS'], 'Windows_NT' )
continueOnError: false
displayName: Install Go on Windows
# Display environment for debugging
- bash: |
printf "Using go at: $(which go)\n"
printf "Go version: $(go version)\n"
printf "\n\nGo environment:\n\n"
go env
printf "\n\nRclone environment:\n\n"
make vars
printf "\n\nSystem environment:\n\n"
env
workingDirectory: '$(modulePath)'
displayName: Print Go version and environment
# Run Tests
- bash: |
make
make quicktest
workingDirectory: '$(modulePath)'
displayName: Run tests
condition: eq( variables['MAKE_QUICKTEST'], 'true' )
- bash: |
make racequicktest
workingDirectory: '$(modulePath)'
displayName: Race test
condition: eq( variables['MAKE_RACEQUICKTEST'], 'true' )
- bash: |
make build_dep
make check
workingDirectory: '$(modulePath)'
displayName: Code quality test
condition: eq( variables['MAKE_CHECK'], 'true' )
- bash: |
make
make compile_all
workingDirectory: '$(modulePath)'
displayName: Compile all architectures test
condition: eq( variables['MAKE_COMPILE_ALL'], 'true' )
- bash: |
make travis_beta
env:
RCLONE_CONFIG_PASS: $(RCLONE_CONFIG_PASS)
BETA_SUBDIR: 'azure_pipelines' # FIXME remove when removing travis/appveyor
workingDirectory: '$(modulePath)'
displayName: Deploy built binaries
condition: and( eq( variables['DEPLOY'], 'true' ), ne( variables['Build.Reason'], 'PullRequest' ) )

backend/all/all.go

@@ -8,6 +8,7 @@ import (
_ "github.com/rclone/rclone/backend/b2"
_ "github.com/rclone/rclone/backend/box"
_ "github.com/rclone/rclone/backend/cache"
_ "github.com/rclone/rclone/backend/chunker"
_ "github.com/rclone/rclone/backend/crypt"
_ "github.com/rclone/rclone/backend/drive"
_ "github.com/rclone/rclone/backend/dropbox"
@@ -20,6 +21,7 @@ import (
_ "github.com/rclone/rclone/backend/jottacloud"
_ "github.com/rclone/rclone/backend/koofr"
_ "github.com/rclone/rclone/backend/local"
_ "github.com/rclone/rclone/backend/mailru"
_ "github.com/rclone/rclone/backend/mega"
_ "github.com/rclone/rclone/backend/onedrive"
_ "github.com/rclone/rclone/backend/opendrive"
@@ -29,6 +31,7 @@ import (
_ "github.com/rclone/rclone/backend/qingstor"
_ "github.com/rclone/rclone/backend/s3"
_ "github.com/rclone/rclone/backend/sftp"
_ "github.com/rclone/rclone/backend/sharefile"
_ "github.com/rclone/rclone/backend/swift"
_ "github.com/rclone/rclone/backend/union"
_ "github.com/rclone/rclone/backend/webdav"

backend/amazonclouddrive/amazonclouddrive.go

@@ -28,6 +28,7 @@ import (
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
@@ -38,6 +39,7 @@ import (
)
const (
enc = encodings.AmazonCloudDrive
folderKind = "FOLDER"
fileKind = "FILE"
statusAvailable = "AVAILABLE"
@@ -384,7 +386,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
var resp *http.Response
var subFolder *acd.Folder
err = f.pacer.Call(func() (bool, error) {
subFolder, resp, err = folder.GetFolder(leaf)
subFolder, resp, err = folder.GetFolder(enc.FromStandardName(leaf))
return f.shouldRetry(resp, err)
})
if err != nil {
@@ -411,7 +413,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
var resp *http.Response
var info *acd.Folder
err = f.pacer.Call(func() (bool, error) {
info, resp, err = folder.CreateFolder(leaf)
info, resp, err = folder.CreateFolder(enc.FromStandardName(leaf))
return f.shouldRetry(resp, err)
})
if err != nil {
@@ -479,6 +481,7 @@ func (f *Fs) listAll(dirID string, title string, directoriesOnly bool, filesOnly
if !hasValidParent {
continue
}
*node.Name = enc.ToStandardName(*node.Name)
// Store the nodes up in case we have to retry the listing
out = append(out, node)
}
@@ -668,7 +671,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
err = f.pacer.CallNoRetry(func() (bool, error) {
start := time.Now()
f.tokenRenewer.Start()
info, resp, err = folder.Put(in, leaf)
info, resp, err = folder.Put(in, enc.FromStandardName(leaf))
f.tokenRenewer.Stop()
var ok bool
ok, info, err = f.checkUpload(ctx, resp, in, src, info, err, time.Since(start))
@@ -1038,7 +1041,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
var resp *http.Response
var info *acd.File
err = o.fs.pacer.Call(func() (bool, error) {
info, resp, err = folder.GetFile(leaf)
info, resp, err = folder.GetFile(enc.FromStandardName(leaf))
return o.fs.shouldRetry(resp, err)
})
if err != nil {
@@ -1158,7 +1161,7 @@ func (f *Fs) restoreNode(info *acd.Node) (newInfo *acd.Node, err error) {
func (f *Fs) renameNode(info *acd.Node, newName string) (newInfo *acd.Node, err error) {
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
newInfo, resp, err = info.Rename(newName)
newInfo, resp, err = info.Rename(enc.FromStandardName(newName))
return f.shouldRetry(resp, err)
})
return newInfo, err
@@ -1354,10 +1357,11 @@ func (f *Fs) changeNotifyRunner(notifyFunc func(string, fs.EntryType), checkpoin
if len(node.Parents) > 0 {
if path, ok := f.dirCache.GetInv(node.Parents[0]); ok {
// and append the drive file name to compute the full file name
name := enc.ToStandardName(*node.Name)
if len(path) > 0 {
path = path + "/" + *node.Name
path = path + "/" + name
} else {
path = *node.Name
path = name
}
// this will now clear the actual file too
pathsToClear = append(pathsToClear, entryType{path: path, entryType: fs.EntryObject})
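The enc conversions added above form a lossless round trip: names leave rclone's standard form just before each API call and return to it when listing. A small standalone illustration (the sample name is made up):
```
package main

import (
	"fmt"

	"github.com/rclone/rclone/fs/encodings"
)

func main() {
	enc := encodings.AmazonCloudDrive
	leaf := "report\tv2"               // a name in rclone's standard form
	wire := enc.FromStandardName(leaf) // form sent to the backend API
	fmt.Println(wire)
	fmt.Println(enc.ToStandardName(wire) == leaf) // true: round trip is lossless
}
```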

backend/azureblob/azureblob.go

@@ -28,6 +28,7 @@ import (
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
@@ -60,6 +61,8 @@ const (
emulatorBlobEndpoint = "http://127.0.0.1:10000/devstoreaccount1"
)
const enc = encodings.AzureBlob
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -208,7 +211,8 @@ func parsePath(path string) (root string) {
// split returns container and containerPath from the rootRelativePath
// relative to f.root
func (f *Fs) split(rootRelativePath string) (containerName, containerPath string) {
return bucket.Split(path.Join(f.root, rootRelativePath))
containerName, containerPath = bucket.Split(path.Join(f.root, rootRelativePath))
return enc.FromStandardName(containerName), enc.FromStandardPath(containerPath)
}
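// Illustration (hypothetical values): with f.root = "box/photos",
// f.split("2019/pic.jpg") returns containerName "box" and
// containerPath "photos/2019/pic.jpg", both already passed through
// enc.FromStandardName / enc.FromStandardPath for the Azure API.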
// split returns container and containerPath from the object
@@ -575,18 +579,18 @@ func (f *Fs) list(ctx context.Context, container, directory, prefix string, addC
}
// Advance marker to next
marker = response.NextMarker
for i := range response.Segment.BlobItems {
file := &response.Segment.BlobItems[i]
// Finish if file name no longer has prefix
// if prefix != "" && !strings.HasPrefix(file.Name, prefix) {
// return nil
// }
if !strings.HasPrefix(file.Name, prefix) {
fs.Debugf(f, "Odd name received %q", file.Name)
remote := enc.ToStandardPath(file.Name)
if !strings.HasPrefix(remote, prefix) {
fs.Debugf(f, "Odd name received %q", remote)
continue
}
remote := file.Name[len(prefix):]
remote = remote[len(prefix):]
if isDirectoryMarker(*file.Properties.ContentLength, file.Metadata, remote) {
continue // skip directory marker
}
@@ -602,6 +606,7 @@ func (f *Fs) list(ctx context.Context, container, directory, prefix string, addC
// Send the subdirectories
for _, remote := range response.Segment.BlobPrefixes {
remote := strings.TrimRight(remote.Name, "/")
remote = enc.ToStandardPath(remote)
if !strings.HasPrefix(remote, prefix) {
fs.Debugf(f, "Odd directory name received %q", remote)
continue
@@ -665,7 +670,7 @@ func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err err
return entries, nil
}
err = f.listContainersToFn(func(container *azblob.ContainerItem) error {
d := fs.NewDir(container.Name, container.Properties.LastModified)
d := fs.NewDir(enc.ToStandardName(container.Name), container.Properties.LastModified)
f.cache.MarkOK(container.Name)
entries = append(entries, d)
return nil
@@ -1115,7 +1120,7 @@ func (o *Object) parseTimeString(timeString string) (err error) {
fs.Debugf(o, "Failed to parse mod time string %q: %v", timeString, err)
return err
}
o.modTime = time.Unix(unixMilliseconds/1E3, (unixMilliseconds%1E3)*1E6).UTC()
o.modTime = time.Unix(unixMilliseconds/1e3, (unixMilliseconds%1e3)*1e6).UTC()
return nil
}
@@ -1508,4 +1513,6 @@ var (
_ fs.ListRer = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
_ fs.GetTierer = &Object{}
_ fs.SetTierer = &Object{}
)

backend/b2/api/types.go

@@ -50,7 +50,7 @@ type Timestamp time.Time
// MarshalJSON turns a Timestamp into JSON (in UTC)
func (t *Timestamp) MarshalJSON() (out []byte, err error) {
timestamp := (*time.Time)(t).UTC().UnixNano()
return []byte(strconv.FormatInt(timestamp/1E6, 10)), nil
return []byte(strconv.FormatInt(timestamp/1e6, 10)), nil
}
// UnmarshalJSON turns JSON into a Timestamp
@@ -59,7 +59,7 @@ func (t *Timestamp) UnmarshalJSON(data []byte) error {
if err != nil {
return err
}
*t = Timestamp(time.Unix(timestamp/1E3, (timestamp%1E3)*1E6).UTC())
*t = Timestamp(time.Unix(timestamp/1e3, (timestamp%1e3)*1e6).UTC())
return nil
}
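The 1E6 to 1e6 change is purely cosmetic; the millisecond round trip these two methods implement can be sanity-checked with a small standalone program (the date is arbitrary):
```
package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Date(2019, 9, 8, 12, 0, 0, 123e6, time.UTC)
	ms := t.UnixNano() / 1e6 // what MarshalJSON emits
	fmt.Println(ms)
	back := time.Unix(ms/1e3, (ms%1e3)*1e6).UTC() // what UnmarshalJSON does
	fmt.Println(back.Equal(t)) // true: millisecond precision survives
}
```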

backend/b2/b2.go

@@ -25,6 +25,7 @@ import (
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
@@ -34,6 +35,8 @@ import (
"github.com/rclone/rclone/lib/rest"
)
const enc = encodings.B2
const (
defaultEndpoint = "https://api.backblazeb2.com"
headerPrefix = "x-bz-info-" // lower case as that is what the server returns
@@ -273,11 +276,11 @@ func (f *Fs) shouldRetryNoReauth(resp *http.Response, err error) (bool, error) {
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) {
func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if resp != nil && resp.StatusCode == 401 {
fs.Debugf(f, "Unauthorized: %v", err)
// Reauth
authErr := f.authorizeAccount()
authErr := f.authorizeAccount(ctx)
if authErr != nil {
err = authErr
}
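Stripped of rclone's types, the reauthorize-on-401 flow that shouldRetry now implements looks roughly like this (a simplified sketch, not the real signatures):
```
package main

import (
	"context"
	"errors"
	"fmt"
	"net/http"
)

// On a 401, refresh credentials and tell the caller's retry loop to go
// again; any reauth failure replaces the original error.
func shouldRetry(ctx context.Context, resp *http.Response, err error,
	reauth func(context.Context) error) (bool, error) {
	if resp != nil && resp.StatusCode == http.StatusUnauthorized {
		if authErr := reauth(ctx); authErr != nil {
			err = authErr
		}
		return true, err
	}
	return false, err
}

func main() {
	retry, err := shouldRetry(context.Background(),
		&http.Response{StatusCode: 401}, errors.New("unauthorized"),
		func(context.Context) error { return nil })
	fmt.Println(retry, err)
}
```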
@@ -393,13 +396,13 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
fs.Debugf(f, "Setting test header \"%s: %s\"", testModeHeader, testMode)
}
f.fillBufferTokens()
err = f.authorizeAccount()
err = f.authorizeAccount(ctx)
if err != nil {
return nil, errors.Wrap(err, "failed to authorize account")
}
// If this is a key limited to a single bucket, it must exist already
if f.rootBucket != "" && f.info.Allowed.BucketID != "" {
allowedBucket := f.info.Allowed.BucketName
allowedBucket := enc.ToStandardName(f.info.Allowed.BucketName)
if allowedBucket == "" {
return nil, errors.New("bucket that application key is restricted to no longer exists")
}
@@ -431,7 +434,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// authorizeAccount gets the API endpoint and auth token. Can be used
// for reauthentication too.
func (f *Fs) authorizeAccount() error {
func (f *Fs) authorizeAccount(ctx context.Context) error {
f.authMu.Lock()
defer f.authMu.Unlock()
opts := rest.Opts{
@@ -443,7 +446,7 @@ func (f *Fs) authorizeAccount() error {
ExtraHeaders: map[string]string{"Authorization": ""}, // unset the Authorization for this request
}
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, nil, &f.info)
resp, err := f.srv.CallJSON(ctx, &opts, nil, &f.info)
return f.shouldRetryNoReauth(resp, err)
})
if err != nil {
@@ -466,10 +469,10 @@ func (f *Fs) hasPermission(permission string) bool {
// getUploadURL returns the upload info with the UploadURL and the AuthorizationToken
//
// This should be returned with returnUploadURL when finished
func (f *Fs) getUploadURL(bucket string) (upload *api.GetUploadURLResponse, err error) {
func (f *Fs) getUploadURL(ctx context.Context, bucket string) (upload *api.GetUploadURLResponse, err error) {
f.uploadMu.Lock()
defer f.uploadMu.Unlock()
bucketID, err := f.getBucketID(bucket)
bucketID, err := f.getBucketID(ctx, bucket)
if err != nil {
return nil, err
}
@@ -489,8 +492,8 @@ func (f *Fs) getUploadURL(bucket string) (upload *api.GetUploadURLResponse, err
BucketID: bucketID,
}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &request, &upload)
return f.shouldRetry(resp, err)
resp, err := f.srv.CallJSON(ctx, &opts, &request, &upload)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to get upload URL")
@@ -609,7 +612,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
if !recurse {
delimiter = "/"
}
bucketID, err := f.getBucketID(bucket)
bucketID, err := f.getBucketID(ctx, bucket)
if err != nil {
return err
}
@@ -620,11 +623,11 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
var request = api.ListFileNamesRequest{
BucketID: bucketID,
MaxFileCount: chunkSize,
Prefix: directory,
Prefix: enc.FromStandardPath(directory),
Delimiter: delimiter,
}
if directory != "" {
request.StartFileName = directory
request.StartFileName = enc.FromStandardPath(directory)
}
opts := rest.Opts{
Method: "POST",
@@ -636,14 +639,15 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
for {
var response api.ListFileNamesResponse
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &request, &response)
return f.shouldRetry(resp, err)
resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return err
}
for i := range response.Files {
file := &response.Files[i]
file.Name = enc.ToStandardPath(file.Name)
// Finish if file name no longer has prefix
if prefix != "" && !strings.HasPrefix(file.Name, prefix) {
return nil
@@ -727,7 +731,7 @@ func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addB
// listBuckets returns all the buckets to out
func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error) {
err = f.listBucketsToFn(func(bucket *api.Bucket) error {
err = f.listBucketsToFn(ctx, func(bucket *api.Bucket) error {
d := fs.NewDir(bucket.Name, time.Time{})
entries = append(entries, d)
return nil
@@ -820,7 +824,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
type listBucketFn func(*api.Bucket) error
// listBucketsToFn lists the buckets to the function supplied
func (f *Fs) listBucketsToFn(fn listBucketFn) error {
func (f *Fs) listBucketsToFn(ctx context.Context, fn listBucketFn) error {
var account = api.ListBucketsRequest{
AccountID: f.info.AccountID,
BucketID: f.info.Allowed.BucketID,
@@ -832,8 +836,8 @@ func (f *Fs) listBucketsToFn(fn listBucketFn) error {
Path: "/b2_list_buckets",
}
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &account, &response)
return f.shouldRetry(resp, err)
resp, err := f.srv.CallJSON(ctx, &opts, &account, &response)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return err
@@ -844,6 +848,7 @@ func (f *Fs) listBucketsToFn(fn listBucketFn) error {
f._bucketType = make(map[string]string, 1)
for i := range response.Buckets {
bucket := &response.Buckets[i]
bucket.Name = enc.ToStandardName(bucket.Name)
f.cache.MarkOK(bucket.Name)
f._bucketID[bucket.Name] = bucket.ID
f._bucketType[bucket.Name] = bucket.Type
@@ -862,14 +867,14 @@ func (f *Fs) listBucketsToFn(fn listBucketFn) error {
// getbucketType finds the bucketType for the current bucket name
// can be one of allPublic, allPrivate, or snapshot
func (f *Fs) getbucketType(bucket string) (bucketType string, err error) {
func (f *Fs) getbucketType(ctx context.Context, bucket string) (bucketType string, err error) {
f.bucketTypeMutex.Lock()
bucketType = f._bucketType[bucket]
f.bucketTypeMutex.Unlock()
if bucketType != "" {
return bucketType, nil
}
err = f.listBucketsToFn(func(bucket *api.Bucket) error {
err = f.listBucketsToFn(ctx, func(bucket *api.Bucket) error {
// listBucketsToFn reads bucket Types
return nil
})
@@ -897,14 +902,14 @@ func (f *Fs) clearBucketType(bucket string) {
}
// getBucketID finds the ID for the current bucket name
func (f *Fs) getBucketID(bucket string) (bucketID string, err error) {
func (f *Fs) getBucketID(ctx context.Context, bucket string) (bucketID string, err error) {
f.bucketIDMutex.Lock()
bucketID = f._bucketID[bucket]
f.bucketIDMutex.Unlock()
if bucketID != "" {
return bucketID, nil
}
err = f.listBucketsToFn(func(bucket *api.Bucket) error {
err = f.listBucketsToFn(ctx, func(bucket *api.Bucket) error {
// listBucketsToFn sets IDs
return nil
})
@@ -965,20 +970,20 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) error {
}
var request = api.CreateBucketRequest{
AccountID: f.info.AccountID,
Name: bucket,
Name: enc.FromStandardName(bucket),
Type: "allPrivate",
}
var response api.Bucket
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &request, &response)
return f.shouldRetry(resp, err)
resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok {
if apiErr.Code == "duplicate_bucket_name" {
// Check this is our bucket - buckets are globally unique and this
// might be someone else's.
_, getBucketErr := f.getBucketID(bucket)
_, getBucketErr := f.getBucketID(ctx, bucket)
if getBucketErr == nil {
// found so it is our bucket
return nil
@@ -1009,7 +1014,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
Method: "POST",
Path: "/b2_delete_bucket",
}
bucketID, err := f.getBucketID(bucket)
bucketID, err := f.getBucketID(ctx, bucket)
if err != nil {
return err
}
@@ -1019,8 +1024,8 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
}
var response api.Bucket
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &request, &response)
return f.shouldRetry(resp, err)
resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "failed to delete bucket")
@@ -1038,8 +1043,8 @@ func (f *Fs) Precision() time.Duration {
}
// hide hides a file on the remote
func (f *Fs) hide(bucket, bucketPath string) error {
bucketID, err := f.getBucketID(bucket)
func (f *Fs) hide(ctx context.Context, bucket, bucketPath string) error {
bucketID, err := f.getBucketID(ctx, bucket)
if err != nil {
return err
}
@@ -1049,12 +1054,12 @@ func (f *Fs) hide(bucket, bucketPath string) error {
}
var request = api.HideFileRequest{
BucketID: bucketID,
Name: bucketPath,
Name: enc.FromStandardPath(bucketPath),
}
var response api.File
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &request, &response)
return f.shouldRetry(resp, err)
resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok {
@@ -1070,19 +1075,19 @@ func (f *Fs) hide(bucket, bucketPath string) error {
}
// deleteByID deletes a file version given Name and ID
func (f *Fs) deleteByID(ID, Name string) error {
func (f *Fs) deleteByID(ctx context.Context, ID, Name string) error {
opts := rest.Opts{
Method: "POST",
Path: "/b2_delete_file_version",
}
var request = api.DeleteFileRequest{
ID: ID,
Name: Name,
Name: enc.FromStandardPath(Name),
}
var response api.File
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &request, &response)
return f.shouldRetry(resp, err)
resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrapf(err, "failed to delete %q", Name)
@@ -1132,7 +1137,7 @@ func (f *Fs) purge(ctx context.Context, bucket, directory string, oldOnly bool)
continue
}
tr := accounting.Stats(ctx).NewCheckingTransfer(oi)
err = f.deleteByID(object.ID, object.Name)
err = f.deleteByID(ctx, object.ID, object.Name)
checkErr(err)
tr.Done(err)
}
@@ -1205,7 +1210,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
destBucketID, err := f.getBucketID(dstBucket)
destBucketID, err := f.getBucketID(ctx, dstBucket)
if err != nil {
return nil, err
}
@@ -1215,14 +1220,14 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
var request = api.CopyFileRequest{
SourceID: srcObj.id,
Name: dstPath,
Name: enc.FromStandardPath(dstPath),
MetadataDirective: "COPY",
DestBucketID: destBucketID,
}
var response api.FileInfo
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &request, &response)
return f.shouldRetry(resp, err)
resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -1245,7 +1250,7 @@ func (f *Fs) Hashes() hash.Set {
// getDownloadAuthorization returns authorization token for downloading
// without account.
func (f *Fs) getDownloadAuthorization(bucket, remote string) (authorization string, err error) {
func (f *Fs) getDownloadAuthorization(ctx context.Context, bucket, remote string) (authorization string, err error) {
validDurationInSeconds := time.Duration(f.opt.DownloadAuthorizationDuration).Nanoseconds() / 1e9
if validDurationInSeconds <= 0 || validDurationInSeconds > 604800 {
return "", errors.New("--b2-download-auth-duration must be between 1 sec and 1 week")
@@ -1253,7 +1258,7 @@ func (f *Fs) getDownloadAuthorization(bucket, remote string) (authorization stri
if !f.hasPermission("shareFiles") {
return "", errors.New("sharing a file link requires the shareFiles permission")
}
bucketID, err := f.getBucketID(bucket)
bucketID, err := f.getBucketID(ctx, bucket)
if err != nil {
return "", err
}
@@ -1263,13 +1268,13 @@ func (f *Fs) getDownloadAuthorization(bucket, remote string) (authorization stri
}
var request = api.GetDownloadAuthorizationRequest{
BucketID: bucketID,
FileNamePrefix: path.Join(f.root, remote),
FileNamePrefix: enc.FromStandardPath(path.Join(f.root, remote)),
ValidDurationInSeconds: validDurationInSeconds,
}
var response api.GetDownloadAuthorizationResponse
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &request, &response)
return f.shouldRetry(resp, err)
resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return "", errors.Wrap(err, "failed to get download authorization")
@@ -1301,12 +1306,12 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
}
absPath := "/" + bucketPath
link = RootURL + "/file/" + urlEncode(bucket) + absPath
bucketType, err := f.getbucketType(bucket)
bucketType, err := f.getbucketType(ctx, bucket)
if err != nil {
return "", err
}
if bucketType == "allPrivate" || bucketType == "snapshot" {
AuthorizationToken, err := f.getDownloadAuthorization(bucket, remote)
AuthorizationToken, err := f.getDownloadAuthorization(ctx, bucket, remote)
if err != nil {
return "", err
}
@@ -1453,7 +1458,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
// timeString returns modTime as the number of milliseconds
// elapsed since January 1, 1970 UTC as a decimal string.
func timeString(modTime time.Time) string {
return strconv.FormatInt(modTime.UnixNano()/1E6, 10)
return strconv.FormatInt(modTime.UnixNano()/1e6, 10)
}
// parseTimeString converts a decimal string number of milliseconds
@@ -1468,7 +1473,7 @@ func (o *Object) parseTimeString(timeString string) (err error) {
fs.Debugf(o, "Failed to parse mod time string %q: %v", timeString, err)
return nil
}
o.modTime = time.Unix(unixMilliseconds/1E3, (unixMilliseconds%1E3)*1E6).UTC()
o.modTime = time.Unix(unixMilliseconds/1e3, (unixMilliseconds%1e3)*1e6).UTC()
return nil
}
@@ -1498,15 +1503,15 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
}
var request = api.CopyFileRequest{
SourceID: o.id,
Name: bucketPath, // copy to same name
Name: enc.FromStandardPath(bucketPath), // copy to same name
MetadataDirective: "REPLACE",
ContentType: info.ContentType,
Info: info.Info,
}
var response api.FileInfo
err = o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(&opts, &request, &response)
return o.fs.shouldRetry(resp, err)
resp, err := o.fs.srv.CallJSON(ctx, &opts, &request, &response)
return o.fs.shouldRetry(ctx, resp, err)
})
if err != nil {
return err
@@ -1600,12 +1605,12 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
opts.Path += "/b2api/v1/b2_download_file_by_id?fileId=" + urlEncode(o.id)
} else {
bucket, bucketPath := o.split()
opts.Path += "/file/" + urlEncode(bucket) + "/" + urlEncode(bucketPath)
opts.Path += "/file/" + urlEncode(enc.FromStandardName(bucket)) + "/" + urlEncode(enc.FromStandardPath(bucketPath))
}
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
return o.fs.shouldRetry(resp, err)
resp, err = o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to open for download")
@@ -1701,7 +1706,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
o.fs.putUploadBlock(buf)
return err
}
return up.Stream(buf)
return up.Stream(ctx, buf)
} else if err == io.EOF || err == io.ErrUnexpectedEOF {
fs.Debugf(o, "File has %d bytes, which makes only one chunk. Using direct upload.", n)
defer o.fs.putUploadBlock(buf)
@@ -1715,7 +1720,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err != nil {
return err
}
return up.Upload()
return up.Upload(ctx)
}
modTime := src.ModTime(ctx)
@@ -1729,7 +1734,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
// Get upload URL
upload, err := o.fs.getUploadURL(bucket)
upload, err := o.fs.getUploadURL(ctx, bucket)
if err != nil {
return err
}
@@ -1797,7 +1802,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
Body: in,
ExtraHeaders: map[string]string{
"Authorization": upload.AuthorizationToken,
"X-Bz-File-Name": urlEncode(bucketPath),
"X-Bz-File-Name": urlEncode(enc.FromStandardPath(bucketPath)),
"Content-Type": fs.MimeType(ctx, src),
sha1Header: calculatedSha1,
timeHeader: timeString(modTime),
@@ -1807,8 +1812,8 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var response api.FileInfo
// Don't retry, return a retry error instead
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(&opts, nil, &response)
retry, err := o.fs.shouldRetry(resp, err)
resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, &response)
retry, err := o.fs.shouldRetry(ctx, resp, err)
// On retryable error clear UploadURL
if retry {
fs.Debugf(o, "Clearing upload URL because of error: %v", err)
@@ -1829,9 +1834,9 @@ func (o *Object) Remove(ctx context.Context) error {
return errNotWithVersions
}
if o.fs.opt.HardDelete {
return o.fs.deleteByID(o.id, bucketPath)
return o.fs.deleteByID(ctx, o.id, bucketPath)
}
return o.fs.hide(bucket, bucketPath)
return o.fs.hide(ctx, bucket, bucketPath)
}
// MimeType of an Object if known, "" otherwise

backend/b2/upload.go

@@ -105,13 +105,13 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
Path: "/b2_start_large_file",
}
bucket, bucketPath := o.split()
bucketID, err := f.getBucketID(bucket)
bucketID, err := f.getBucketID(ctx, bucket)
if err != nil {
return nil, err
}
var request = api.StartLargeFileRequest{
BucketID: bucketID,
Name: bucketPath,
Name: enc.FromStandardPath(bucketPath),
ContentType: fs.MimeType(ctx, src),
Info: map[string]string{
timeKey: timeString(modTime),
@@ -125,8 +125,8 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
}
var response api.StartLargeFileResponse
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, &request, &response)
return f.shouldRetry(resp, err)
resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -150,7 +150,7 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
// getUploadURL returns the upload info with the UploadURL and the AuthorizationToken
//
// This should be returned with returnUploadURL when finished
func (up *largeUpload) getUploadURL() (upload *api.GetUploadPartURLResponse, err error) {
func (up *largeUpload) getUploadURL(ctx context.Context) (upload *api.GetUploadPartURLResponse, err error) {
up.uploadMu.Lock()
defer up.uploadMu.Unlock()
if len(up.uploads) == 0 {
@@ -162,8 +162,8 @@ func (up *largeUpload) getUploadURL() (upload *api.GetUploadPartURLResponse, err
ID: up.id,
}
err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(&opts, &request, &upload)
return up.f.shouldRetry(resp, err)
resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &upload)
return up.f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to get upload URL")
@@ -192,12 +192,12 @@ func (up *largeUpload) clearUploadURL() {
}
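The hunks above show a pattern worth calling out: b2 upload URLs are costly to fetch, so the large-upload code pools returned URLs behind uploadMu, hands one out per chunk, and clears the pool after a retryable error so the next chunk requests a fresh URL. A minimal sketch of the same check-out/check-in pool; uploadURL and fetch are hypothetical stand-ins for api.GetUploadPartURLResponse and the b2_get_upload_part_url call:

package sketch

import "sync"

// uploadURL is a hypothetical stand-in for api.GetUploadPartURLResponse.
type uploadURL struct{ URL, Token string }

type urlPool struct {
	mu    sync.Mutex
	free  []*uploadURL
	fetch func() (*uploadURL, error) // e.g. a b2_get_upload_part_url request
}

// get checks out an upload URL, fetching a fresh one when the pool is empty.
func (p *urlPool) get() (*uploadURL, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if n := len(p.free); n > 0 {
		u := p.free[n-1]
		p.free = p.free[:n-1]
		return u, nil
	}
	return p.fetch()
}

// put returns an upload URL to the pool for reuse by the next chunk.
func (p *urlPool) put(u *uploadURL) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.free = append(p.free, u)
}

// clear drops all pooled URLs, e.g. after a retryable upload error.
func (p *urlPool) clear() {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.free = nil
}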
// Transfer a chunk
func (up *largeUpload) transferChunk(part int64, body []byte) error {
func (up *largeUpload) transferChunk(ctx context.Context, part int64, body []byte) error {
err := up.f.pacer.Call(func() (bool, error) {
fs.Debugf(up.o, "Sending chunk %d length %d", part, len(body))
// Get upload URL
upload, err := up.getUploadURL()
upload, err := up.getUploadURL(ctx)
if err != nil {
return false, err
}
@@ -241,8 +241,8 @@ func (up *largeUpload) transferChunk(part int64, body []byte) error {
var response api.UploadPartResponse
resp, err := up.f.srv.CallJSON(&opts, nil, &response)
retry, err := up.f.shouldRetry(resp, err)
resp, err := up.f.srv.CallJSON(ctx, &opts, nil, &response)
retry, err := up.f.shouldRetry(ctx, resp, err)
if err != nil {
fs.Debugf(up.o, "Error sending chunk %d (retry=%v): %v: %#v", part, retry, err, err)
}
@@ -264,7 +264,7 @@ func (up *largeUpload) transferChunk(part int64, body []byte) error {
}
// finish closes off the large upload
func (up *largeUpload) finish() error {
func (up *largeUpload) finish(ctx context.Context) error {
fs.Debugf(up.o, "Finishing large file upload with %d parts", up.parts)
opts := rest.Opts{
Method: "POST",
@@ -276,8 +276,8 @@ func (up *largeUpload) finish() error {
}
var response api.FileInfo
err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(&opts, &request, &response)
return up.f.shouldRetry(resp, err)
resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &response)
return up.f.shouldRetry(ctx, resp, err)
})
if err != nil {
return err
@@ -286,7 +286,7 @@ func (up *largeUpload) finish() error {
}
// cancel aborts the large upload
func (up *largeUpload) cancel() error {
func (up *largeUpload) cancel(ctx context.Context) error {
opts := rest.Opts{
Method: "POST",
Path: "/b2_cancel_large_file",
@@ -296,18 +296,18 @@ func (up *largeUpload) cancel() error {
}
var response api.CancelLargeFileResponse
err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(&opts, &request, &response)
return up.f.shouldRetry(resp, err)
resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &response)
return up.f.shouldRetry(ctx, resp, err)
})
return err
}
func (up *largeUpload) managedTransferChunk(wg *sync.WaitGroup, errs chan error, part int64, buf []byte) {
func (up *largeUpload) managedTransferChunk(ctx context.Context, wg *sync.WaitGroup, errs chan error, part int64, buf []byte) {
wg.Add(1)
go func(part int64, buf []byte) {
defer wg.Done()
defer up.f.putUploadBlock(buf)
err := up.transferChunk(part, buf)
err := up.transferChunk(ctx, part, buf)
if err != nil {
select {
case errs <- err:
@@ -317,7 +317,7 @@ func (up *largeUpload) managedTransferChunk(wg *sync.WaitGroup, errs chan error,
}(part, buf)
}
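managedTransferChunk above is the fan-out idiom used by both Stream and Upload: each chunk transfers in its own goroutine, the WaitGroup counts in-flight parts, the buffer goes back to the pool when the goroutine exits, and an error channel of capacity one keeps only the first failure without blocking the rest. A condensed sketch of the idiom, with the actual chunk transfer stubbed out as a function value:

package sketch

import "sync"

// runChunks fans transfers out to goroutines and reports the first
// error, mirroring managedTransferChunk plus finishOrCancelOnError.
func runChunks(chunks [][]byte, transfer func(part int, buf []byte) error) error {
	var wg sync.WaitGroup
	errs := make(chan error, 1) // capacity 1: only the first error matters
	for part, buf := range chunks {
		wg.Add(1)
		go func(part int, buf []byte) {
			defer wg.Done()
			if err := transfer(part, buf); err != nil {
				select {
				case errs <- err: // record the first error
				default: // an error is already recorded; drop this one
				}
			}
		}(part, buf)
	}
	wg.Wait()
	select {
	case err := <-errs:
		return err
	default:
		return nil
	}
}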
func (up *largeUpload) finishOrCancelOnError(err error, errs chan error) error {
func (up *largeUpload) finishOrCancelOnError(ctx context.Context, err error, errs chan error) error {
if err == nil {
select {
case err = <-errs:
@@ -326,19 +326,19 @@ func (up *largeUpload) finishOrCancelOnError(err error, errs chan error) error {
}
if err != nil {
fs.Debugf(up.o, "Cancelling large file upload due to error: %v", err)
cancelErr := up.cancel()
cancelErr := up.cancel(ctx)
if cancelErr != nil {
fs.Errorf(up.o, "Failed to cancel large file upload: %v", cancelErr)
}
return err
}
return up.finish()
return up.finish(ctx)
}
// Stream uploads the chunks from the input, starting with a required initial
// chunk. Assumes the file size is unknown and will upload until the input
// reaches EOF.
func (up *largeUpload) Stream(initialUploadBlock []byte) (err error) {
func (up *largeUpload) Stream(ctx context.Context, initialUploadBlock []byte) (err error) {
fs.Debugf(up.o, "Starting streaming of large file (id %q)", up.id)
errs := make(chan error, 1)
hasMoreParts := true
@@ -346,7 +346,7 @@ func (up *largeUpload) Stream(initialUploadBlock []byte) (err error) {
// Transfer initial chunk
up.size = int64(len(initialUploadBlock))
up.managedTransferChunk(&wg, errs, 1, initialUploadBlock)
up.managedTransferChunk(ctx, &wg, errs, 1, initialUploadBlock)
outer:
for part := int64(2); hasMoreParts; part++ {
@@ -388,16 +388,16 @@ outer:
}
// Transfer the chunk
up.managedTransferChunk(&wg, errs, part, buf)
up.managedTransferChunk(ctx, &wg, errs, part, buf)
}
wg.Wait()
up.sha1s = up.sha1s[:up.parts]
return up.finishOrCancelOnError(err, errs)
return up.finishOrCancelOnError(ctx, err, errs)
}
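The choice between Stream and Upload is made in Object.Update, shown earlier in this diff: Update reads one full chunk first; a full buffer means more data may follow, so Stream carries on from that initial block, while io.EOF or io.ErrUnexpectedEOF means the whole file fits in a single chunk and the direct path is taken. A sketch of that decision; stream and direct are hypothetical stand-ins for up.Stream and the single-chunk upload:

package sketch

import "io"

// putChunked mirrors the decision in Object.Update: stream when the
// first read fills a whole chunk, upload directly when it does not.
func putChunked(in io.Reader, chunkSize int,
	stream func(first []byte) error, direct func(data []byte) error) error {
	buf := make([]byte, chunkSize)
	n, err := io.ReadFull(in, buf)
	switch {
	case err == nil:
		return stream(buf) // buffer full: more data may follow
	case err == io.EOF || err == io.ErrUnexpectedEOF:
		return direct(buf[:n]) // whole input fits in one chunk
	default:
		return err
	}
}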
// Upload uploads the chunks from the input
func (up *largeUpload) Upload() error {
func (up *largeUpload) Upload(ctx context.Context) error {
fs.Debugf(up.o, "Starting upload of large file in %d chunks (id %q)", up.parts, up.id)
remaining := up.size
errs := make(chan error, 1)
@@ -428,10 +428,10 @@ outer:
}
// Transfer the chunk
up.managedTransferChunk(&wg, errs, part, buf)
up.managedTransferChunk(ctx, &wg, errs, part, buf)
remaining -= reqSize
}
wg.Wait()
return up.finishOrCancelOnError(err, errs)
return up.finishOrCancelOnError(ctx, err, errs)
}


@@ -202,3 +202,23 @@ type CommitUpload struct {
ContentModifiedAt Time `json:"content_modified_at"`
} `json:"attributes"`
}
// ConfigJSON defines the shape of a box config.json
type ConfigJSON struct {
BoxAppSettings AppSettings `json:"boxAppSettings"`
EnterpriseID string `json:"enterpriseID"`
}
// AppSettings defines the shape of the boxAppSettings within box config.json
type AppSettings struct {
ClientID string `json:"clientID"`
ClientSecret string `json:"clientSecret"`
AppAuth AppAuth `json:"appAuth"`
}
// AppAuth defines the shape of the appAuth within boxAppSettings in config.json
type AppAuth struct {
PublicKeyID string `json:"publicKeyID"`
PrivateKey string `json:"privateKey"`
Passphrase string `json:"passphrase"`
}
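These structs map the config.json that Box generates for a JWT application. A parsing sketch with local mirror types; every value in the sample is a fabricated placeholder:

package sketch

import "encoding/json"

// sampleBoxConfig mirrors the layout of a Box app config.json;
// all values are made up for illustration.
const sampleBoxConfig = `{
  "boxAppSettings": {
    "clientID": "abc123",
    "clientSecret": "shhh",
    "appAuth": {
      "publicKeyID": "kid01",
      "privateKey": "-----BEGIN ENCRYPTED PRIVATE KEY-----\n...",
      "passphrase": "secret"
    }
  },
  "enterpriseID": "1234567"
}`

type appAuth struct {
	PublicKeyID string `json:"publicKeyID"`
	PrivateKey  string `json:"privateKey"`
	Passphrase  string `json:"passphrase"`
}

type appSettings struct {
	ClientID     string  `json:"clientID"`
	ClientSecret string  `json:"clientSecret"`
	AppAuth      appAuth `json:"appAuth"`
}

type configJSON struct {
	BoxAppSettings appSettings `json:"boxAppSettings"`
	EnterpriseID   string      `json:"enterpriseID"`
}

func parseSampleBoxConfig() (*configJSON, error) {
	var cfg configJSON
	if err := json.Unmarshal([]byte(sampleBoxConfig), &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}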


@@ -11,8 +11,12 @@ package box
import (
"context"
"crypto/rsa"
"encoding/json"
"encoding/pem"
"fmt"
"io"
"io/ioutil"
"log"
"net/http"
"net/url"
@@ -21,6 +25,10 @@ import (
"strings"
"time"
"github.com/rclone/rclone/lib/jwtutil"
"github.com/youmark/pkcs8"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/box/api"
"github.com/rclone/rclone/fs"
@@ -28,15 +36,20 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
"golang.org/x/oauth2"
"golang.org/x/oauth2/jws"
)
const enc = encodings.Box
const (
rcloneClientID = "d0374ba6pgmaguie02ge15sv1mllndho"
rcloneEncryptedClientSecret = "sYbJYm99WB8jzeaLPU0OPDMJKIkZvD2qOn3SyEMfiJr03RdtDt3xcZEIudRhbIDL"
@@ -49,6 +62,7 @@ const (
listChunks = 1000 // chunk size to read directory listings
minUploadCutoff = 50000000 // upload cutoff can be no lower than this
defaultUploadCutoff = 50 * 1024 * 1024
tokenURL = "https://api.box.com/oauth2/token"
)
// Globals
@@ -73,9 +87,34 @@ func init() {
Description: "Box",
NewFs: NewFs,
Config: func(name string, m configmap.Mapper) {
err := oauthutil.Config("box", name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
jsonFile, ok := m.Get("box_config_file")
boxSubType, boxSubTypeOk := m.Get("box_sub_type")
var err error
if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" {
boxConfig, err := getBoxConfig(jsonFile)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
privateKey, err := getDecryptedPrivateKey(boxConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
claims, err := getClaims(boxConfig, boxSubType)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
signingHeaders := getSigningHeaders(boxConfig)
queryParams := getQueryParams(boxConfig)
client := fshttp.NewClient(fs.Config)
err = jwtutil.Config("box", name, claims, signingHeaders, queryParams, privateKey, m, client)
if err != nil {
log.Fatalf("Failed to configure token with jwt authentication: %v", err)
}
} else {
err = oauthutil.Config("box", name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure token with oauth authentication: %v", err)
}
}
},
Options: []fs.Option{{
@@ -84,6 +123,19 @@ func init() {
}, {
Name: config.ConfigClientSecret,
Help: "Box App Client Secret\nLeave blank normally.",
}, {
Name: "box_config_file",
Help: "Box App config.json location\nLeave blank normally.",
}, {
Name: "box_sub_type",
Default: "user",
Examples: []fs.OptionExample{{
Value: "user",
Help: "Rclone should act on behalf of a user",
}, {
Value: "enterprise",
Help: "Rclone should act on behalf of a service account",
}},
}, {
Name: "upload_cutoff",
Help: "Cutoff for switching to multipart upload (>= 50MB).",
@@ -98,6 +150,74 @@ func init() {
})
}
func getBoxConfig(configFile string) (boxConfig *api.ConfigJSON, err error) {
file, err := ioutil.ReadFile(configFile)
if err != nil {
return nil, errors.Wrap(err, "box: failed to read Box config")
}
err = json.Unmarshal(file, &boxConfig)
if err != nil {
return nil, errors.Wrap(err, "box: failed to parse Box config")
}
return boxConfig, nil
}
func getClaims(boxConfig *api.ConfigJSON, boxSubType string) (claims *jws.ClaimSet, err error) {
val, err := jwtutil.RandomHex(20)
if err != nil {
return nil, errors.Wrap(err, "box: failed to generate random string for jti")
}
claims = &jws.ClaimSet{
Iss: boxConfig.BoxAppSettings.ClientID,
Sub: boxConfig.EnterpriseID,
Aud: tokenURL,
Iat: time.Now().Unix(),
Exp: time.Now().Add(time.Second * 45).Unix(),
PrivateClaims: map[string]interface{}{
"box_sub_type": boxSubType,
"aud": tokenURL,
"jti": val,
},
}
return claims, nil
}
func getSigningHeaders(boxConfig *api.ConfigJSON) *jws.Header {
signingHeaders := &jws.Header{
Algorithm: "RS256",
Typ: "JWT",
KeyID: boxConfig.BoxAppSettings.AppAuth.PublicKeyID,
}
return signingHeaders
}
func getQueryParams(boxConfig *api.ConfigJSON) map[string]string {
queryParams := map[string]string{
"client_id": boxConfig.BoxAppSettings.ClientID,
"client_secret": boxConfig.BoxAppSettings.ClientSecret,
}
return queryParams
}
func getDecryptedPrivateKey(boxConfig *api.ConfigJSON) (key *rsa.PrivateKey, err error) {
block, rest := pem.Decode([]byte(boxConfig.BoxAppSettings.AppAuth.PrivateKey))
if block == nil {
return nil, errors.New("box: failed to PEM decode private key")
}
if len(rest) > 0 {
// err is nil at this point, so errors.Wrap(err, ...) would return nil
return nil, errors.New("box: extra data included in private key")
}
rsaKey, err := pkcs8.ParsePKCS8PrivateKey(block.Bytes, []byte(boxConfig.BoxAppSettings.AppAuth.Passphrase))
if err != nil {
return nil, errors.Wrap(err, "box: failed to decrypt private key")
}
return rsaKey.(*rsa.PrivateKey), nil
}
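github.com/youmark/pkcs8 is used above because the key in a Box config.json is passphrase-encrypted PKCS#8, which the standard library cannot decrypt. For an unencrypted key the stdlib alone suffices; a self-contained round trip showing the same pem.Decode trailing-data check:

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"errors"
)

// roundTrip marshals a fresh RSA key to PEM-wrapped PKCS#8 and parses
// it back, including the extra-trailing-data check used above.
func roundTrip() (*rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	der, err := x509.MarshalPKCS8PrivateKey(key)
	if err != nil {
		return nil, err
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "PRIVATE KEY", Bytes: der})

	block, rest := pem.Decode(pemBytes)
	if block == nil {
		return nil, errors.New("no PEM block found")
	}
	if len(rest) > 0 {
		return nil, errors.New("extra data included in private key")
	}
	parsed, err := x509.ParsePKCS8PrivateKey(block.Bytes) // unencrypted keys only
	if err != nil {
		return nil, err
	}
	return parsed.(*rsa.PrivateKey), nil
}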
// Options defines the configuration for this backend
type Options struct {
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
@@ -181,18 +301,6 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
return authRetry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// substitute reserved characters for box
func replaceReservedChars(x string) string {
// Backslash for FULLWIDTH REVERSE SOLIDUS
return strings.Replace(x, "\\", "＼", -1)
}
// restore reserved characters for box
func restoreReservedChars(x string) string {
// FULLWIDTH REVERSE SOLIDUS for Backslash
return strings.Replace(x, "＼", "\\", -1)
}
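Both helpers are superseded by the fs/encodings layer declared at the top of this file: enc.FromStandardName and enc.FromStandardPath map characters Box cannot store to full-width lookalikes on the way out, and enc.ToStandardName reverses the mapping on listings. Assuming encodings.Box keeps the same backslash mapping as the deleted helpers, the round trip amounts to:

package sketch

import "strings"

// toWire and fromWire illustrate only the backslash mapping the deleted
// helpers performed; the real encodings.Box encoder covers more characters.
func toWire(name string) string {
	// U+FF3C FULLWIDTH REVERSE SOLIDUS stands in for backslash
	return strings.Replace(name, `\`, "＼", -1)
}

func fromWire(name string) string {
	return strings.Replace(name, "＼", `\`, -1)
}

// fromWire(toWire(`dir\file`)) == `dir\file`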
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.Item, err error) {
// defer fs.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err)
@@ -204,7 +312,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.It
return nil, err
}
found, err := f.listAll(directoryID, false, true, func(item *api.Item) bool {
found, err := f.listAll(ctx, directoryID, false, true, func(item *api.Item) bool {
if item.Name == leaf {
info = item
return true
@@ -352,7 +460,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// FindLeaf finds a directory of name leaf in the folder with ID pathID
func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) {
// Find the leaf in pathID
found, err = f.listAll(pathID, true, false, func(item *api.Item) bool {
found, err = f.listAll(ctx, pathID, true, false, func(item *api.Item) bool {
if item.Name == leaf {
pathIDOut = item.ID
return true
@@ -380,13 +488,13 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
Parameters: fieldsValue(),
}
mkdir := api.CreateFolder{
Name: replaceReservedChars(leaf),
Name: enc.FromStandardName(leaf),
Parent: api.Parent{
ID: pathID,
},
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &mkdir, &info)
resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info)
return shouldRetry(resp, err)
})
if err != nil {
@@ -408,7 +516,7 @@ type listAllFn func(*api.Item) bool
// Lists the directory required, calling the user function on each item found
//
// If the user fn ever returns true then it early exits with found = true
func (f *Fs) listAll(dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
opts := rest.Opts{
Method: "GET",
Path: "/folders/" + dirID + "/items",
@@ -423,7 +531,7 @@ OUTER:
var result api.FolderItems
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &result)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
@@ -446,7 +554,7 @@ OUTER:
if item.ItemStatus != api.ItemStatusActive {
continue
}
item.Name = restoreReservedChars(item.Name)
item.Name = enc.ToStandardName(item.Name)
if fn(item) {
found = true
break OUTER
@@ -479,7 +587,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return nil, err
}
var iErr error
_, err = f.listAll(directoryID, false, false, func(info *api.Item) bool {
_, err = f.listAll(ctx, directoryID, false, false, func(info *api.Item) bool {
remote := path.Join(dir, info.Name)
if info.Type == api.ItemTypeFolder {
// cache the directory ID for later lookups
@@ -581,14 +689,14 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
}
// deleteObject removes an object by ID
func (f *Fs) deleteObject(id string) error {
func (f *Fs) deleteObject(ctx context.Context, id string) error {
opts := rest.Opts{
Method: "DELETE",
Path: "/files/" + id,
NoResponse: true,
}
return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(&opts)
resp, err := f.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
}
@@ -619,7 +727,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
opts.Parameters.Set("recursive", strconv.FormatBool(!check))
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(&opts)
resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
if err != nil {
@@ -682,9 +790,8 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
Path: "/files/" + srcObj.id + "/copy",
Parameters: fieldsValue(),
}
replacedLeaf := replaceReservedChars(leaf)
copyFile := api.CopyFile{
Name: replacedLeaf,
Name: enc.FromStandardName(leaf),
Parent: api.Parent{
ID: directoryID,
},
@@ -692,7 +799,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
var resp *http.Response
var info *api.Item
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &copyFile, &info)
resp, err = f.srv.CallJSON(ctx, &opts, &copyFile, &info)
return shouldRetry(resp, err)
})
if err != nil {
@@ -715,7 +822,7 @@ func (f *Fs) Purge(ctx context.Context) error {
}
// move a file or folder
func (f *Fs) move(endpoint, id, leaf, directoryID string) (info *api.Item, err error) {
func (f *Fs) move(ctx context.Context, endpoint, id, leaf, directoryID string) (info *api.Item, err error) {
// Move the object
opts := rest.Opts{
Method: "PUT",
@@ -723,14 +830,14 @@ func (f *Fs) move(endpoint, id, leaf, directoryID string) (info *api.Item, err e
Parameters: fieldsValue(),
}
move := api.UpdateFileMove{
Name: replaceReservedChars(leaf),
Name: enc.FromStandardName(leaf),
Parent: api.Parent{
ID: directoryID,
},
}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &move, &info)
resp, err = f.srv.CallJSON(ctx, &opts, &move, &info)
return shouldRetry(resp, err)
})
if err != nil {
@@ -762,7 +869,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
// Do the move
info, err := f.move("/files/", srcObj.id, leaf, directoryID)
info, err := f.move(ctx, "/files/", srcObj.id, leaf, directoryID)
if err != nil {
return nil, err
}
@@ -845,7 +952,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
}
// Do the move
_, err = f.move("/folders/", srcID, leaf, directoryID)
_, err = f.move(ctx, "/folders/", srcID, leaf, directoryID)
if err != nil {
return err
}
@@ -887,7 +994,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (string, error) {
var info api.Item
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &shareLink, &info)
resp, err = f.srv.CallJSON(ctx, &opts, &shareLink, &info)
return shouldRetry(resp, err)
})
return info.SharedLink.URL, err
@@ -924,11 +1031,6 @@ func (o *Object) Remote() string {
return o.remote
}
// srvPath returns a path for use in server
func (o *Object) srvPath() string {
return replaceReservedChars(o.fs.rootSlash() + o.remote)
}
// Hash returns the SHA-1 of an object returning a lowercase hex string
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.SHA1 {
@@ -1006,7 +1108,7 @@ func (o *Object) setModTime(ctx context.Context, modTime time.Time) (*api.Item,
}
var info *api.Item
err := o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(&opts, &update, &info)
resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, &info)
return shouldRetry(resp, err)
})
return info, err
@@ -1039,7 +1141,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
Options: options,
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
if err != nil {
@@ -1051,9 +1153,9 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// upload does a single non-multipart upload
//
// This is recommended for less than 50 MB of content
func (o *Object) upload(in io.Reader, leaf, directoryID string, modTime time.Time) (err error) {
func (o *Object) upload(ctx context.Context, in io.Reader, leaf, directoryID string, modTime time.Time) (err error) {
upload := api.UploadFile{
Name: replaceReservedChars(leaf),
Name: enc.FromStandardName(leaf),
ContentModifiedAt: api.Time(modTime),
ContentCreatedAt: api.Time(modTime),
Parent: api.Parent{
@@ -1078,7 +1180,7 @@ func (o *Object) upload(in io.Reader, leaf, directoryID string, modTime time.Tim
opts.Path = "/files/content"
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, &upload, &result)
resp, err = o.fs.srv.CallJSON(ctx, &opts, &upload, &result)
return shouldRetry(resp, err)
})
if err != nil {
@@ -1111,16 +1213,16 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Upload with simple or multipart
if size <= int64(o.fs.opt.UploadCutoff) {
err = o.upload(in, leaf, directoryID, modTime)
err = o.upload(ctx, in, leaf, directoryID, modTime)
} else {
err = o.uploadMultipart(in, leaf, directoryID, size, modTime)
err = o.uploadMultipart(ctx, in, leaf, directoryID, size, modTime)
}
return err
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
return o.fs.deleteObject(o.id)
return o.fs.deleteObject(ctx, o.id)
}
// ID returns the ID of the Object if known, or "" if not


@@ -4,6 +4,7 @@ package box
import (
"bytes"
"context"
"crypto/sha1"
"encoding/base64"
"encoding/json"
@@ -22,7 +23,7 @@ import (
)
// createUploadSession creates an upload session for the object
func (o *Object) createUploadSession(leaf, directoryID string, size int64) (response *api.UploadSessionResponse, err error) {
func (o *Object) createUploadSession(ctx context.Context, leaf, directoryID string, size int64) (response *api.UploadSessionResponse, err error) {
opts := rest.Opts{
Method: "POST",
Path: "/files/upload_sessions",
@@ -37,11 +38,11 @@ func (o *Object) createUploadSession(leaf, directoryID string, size int64) (resp
} else {
opts.Path = "/files/upload_sessions"
request.FolderID = directoryID
request.FileName = replaceReservedChars(leaf)
request.FileName = enc.FromStandardName(leaf)
}
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, &request, &response)
resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, &response)
return shouldRetry(resp, err)
})
return
@@ -53,7 +54,7 @@ func sha1Digest(digest []byte) string {
}
// uploadPart uploads a part in an upload session
func (o *Object) uploadPart(SessionID string, offset, totalSize int64, chunk []byte, wrap accounting.WrapFn) (response *api.UploadPartResponse, err error) {
func (o *Object) uploadPart(ctx context.Context, SessionID string, offset, totalSize int64, chunk []byte, wrap accounting.WrapFn) (response *api.UploadPartResponse, err error) {
chunkSize := int64(len(chunk))
sha1sum := sha1.Sum(chunk)
opts := rest.Opts{
@@ -70,7 +71,7 @@ func (o *Object) uploadPart(SessionID string, offset, totalSize int64, chunk []b
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
opts.Body = wrap(bytes.NewReader(chunk))
resp, err = o.fs.srv.CallJSON(&opts, nil, &response)
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &response)
return shouldRetry(resp, err)
})
if err != nil {
@@ -80,7 +81,7 @@ func (o *Object) uploadPart(SessionID string, offset, totalSize int64, chunk []b
}
// commitUpload finishes an upload session
func (o *Object) commitUpload(SessionID string, parts []api.Part, modTime time.Time, sha1sum []byte) (result *api.FolderItems, err error) {
func (o *Object) commitUpload(ctx context.Context, SessionID string, parts []api.Part, modTime time.Time, sha1sum []byte) (result *api.FolderItems, err error) {
opts := rest.Opts{
Method: "POST",
Path: "/files/upload_sessions/" + SessionID + "/commit",
@@ -104,7 +105,7 @@ func (o *Object) commitUpload(SessionID string, parts []api.Part, modTime time.T
outer:
for tries = 0; tries < maxTries; tries++ {
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, &request, nil)
resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil)
if err != nil {
return shouldRetry(resp, err)
}
@@ -154,7 +155,7 @@ outer:
}
// abortUpload cancels an upload session
func (o *Object) abortUpload(SessionID string) (err error) {
func (o *Object) abortUpload(ctx context.Context, SessionID string) (err error) {
opts := rest.Opts{
Method: "DELETE",
Path: "/files/upload_sessions/" + SessionID,
@@ -163,16 +164,16 @@ func (o *Object) abortUpload(SessionID string) (err error) {
}
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
return err
}
// uploadMultipart uploads a file using multipart upload
func (o *Object) uploadMultipart(in io.Reader, leaf, directoryID string, size int64, modTime time.Time) (err error) {
func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, leaf, directoryID string, size int64, modTime time.Time) (err error) {
// Create upload session
session, err := o.createUploadSession(leaf, directoryID, size)
session, err := o.createUploadSession(ctx, leaf, directoryID, size)
if err != nil {
return errors.Wrap(err, "multipart upload create session failed")
}
@@ -183,7 +184,7 @@ func (o *Object) uploadMultipart(in io.Reader, leaf, directoryID string, size in
defer func() {
if err != nil {
fs.Debugf(o, "Cancelling multipart upload: %v", err)
cancelErr := o.abortUpload(session.ID)
cancelErr := o.abortUpload(ctx, session.ID)
if cancelErr != nil {
fs.Logf(o, "Failed to cancel multipart upload: %v", cancelErr)
}
@@ -235,7 +236,7 @@ outer:
defer wg.Done()
defer o.fs.uploadToken.Put()
fs.Debugf(o, "Uploading part %d/%d offset %v/%v part size %v", part+1, session.TotalParts, fs.SizeSuffix(position), fs.SizeSuffix(size), fs.SizeSuffix(chunkSize))
partResponse, err := o.uploadPart(session.ID, position, size, buf, wrap)
partResponse, err := o.uploadPart(ctx, session.ID, position, size, buf, wrap)
if err != nil {
err = errors.Wrap(err, "multipart upload failed to upload part")
select {
@@ -263,7 +264,7 @@ outer:
}
// Finalise the upload session
result, err := o.commitUpload(session.ID, parts, modTime, hash.Sum(nil))
result, err := o.commitUpload(ctx, session.ID, parts, modTime, hash.Sum(nil))
if err != nil {
return errors.Wrap(err, "multipart upload failed to finalize")
}
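End to end, the box multipart path above is: create an upload session, upload each part with its SHA-1 against the session, then commit with the SHA-1 of the whole file, aborting the session if anything fails. A condensed, sequential sketch of that control flow with the four HTTP helpers stubbed out (the real code uploads parts concurrently under the uploadToken limiter):

package sketch

// session is a stand-in for api.UploadSessionResponse; the four function
// parameters are stand-ins for createUploadSession, uploadPart,
// commitUpload and abortUpload above.
type session struct{ ID string }

func multipartUpload(
	create func() (*session, error),
	uploadPart func(id string, part int, chunk []byte) error,
	commit func(id string) error,
	abort func(id string) error,
	chunks [][]byte,
) (err error) {
	s, err := create()
	if err != nil {
		return err
	}
	defer func() {
		if err != nil {
			_ = abort(s.ID) // cancel the session so box can clean it up
		}
	}()
	for i, chunk := range chunks {
		if err = uploadPart(s.ID, i, chunk); err != nil {
			return err
		}
	}
	return commit(s.ID)
}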


@@ -19,5 +19,6 @@ func TestIntegration(t *testing.T) {
NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "MergeDirs", "OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier"},
SkipInvalidUTF8: true, // invalid UTF-8 confuses the cache
})
}

backend/chunker/chunker.go (new file, 2199 lines)

File diff suppressed because it is too large


@@ -0,0 +1,605 @@
package chunker
import (
"bytes"
"context"
"flag"
"fmt"
"io/ioutil"
"path"
"regexp"
"strings"
"testing"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Command line flags
var (
UploadKilobytes = flag.Int("upload-kilobytes", 0, "Upload size in Kilobytes, set this to test large uploads")
)
// test that chunking does not break large uploads
func testPutLarge(t *testing.T, f *Fs, kilobytes int) {
t.Run(fmt.Sprintf("PutLarge%dk", kilobytes), func(t *testing.T) {
fstests.TestPutLarge(context.Background(), t, f, &fstest.Item{
ModTime: fstest.Time("2001-02-03T04:05:06.499999999Z"),
Path: fmt.Sprintf("chunker-upload-%dk", kilobytes),
Size: int64(kilobytes) * int64(fs.KibiByte),
})
})
}
// test chunk name parser
func testChunkNameFormat(t *testing.T, f *Fs) {
saveOpt := f.opt
defer func() {
// restore original settings (f is pointer, f.opt is struct)
f.opt = saveOpt
_ = f.setChunkNameFormat(f.opt.NameFormat)
}()
assertFormat := func(pattern, wantDataFormat, wantCtrlFormat, wantNameRegexp string) {
err := f.setChunkNameFormat(pattern)
assert.NoError(t, err)
assert.Equal(t, wantDataFormat, f.dataNameFmt)
assert.Equal(t, wantCtrlFormat, f.ctrlNameFmt)
assert.Equal(t, wantNameRegexp, f.nameRegexp.String())
}
assertFormatValid := func(pattern string) {
err := f.setChunkNameFormat(pattern)
assert.NoError(t, err)
}
assertFormatInvalid := func(pattern string) {
err := f.setChunkNameFormat(pattern)
assert.Error(t, err)
}
assertMakeName := func(wantChunkName, mainName string, chunkNo int, ctrlType string, xactNo int64) {
gotChunkName := f.makeChunkName(mainName, chunkNo, ctrlType, xactNo)
assert.Equal(t, wantChunkName, gotChunkName)
}
assertMakeNamePanics := func(mainName string, chunkNo int, ctrlType string, xactNo int64) {
assert.Panics(t, func() {
_ = f.makeChunkName(mainName, chunkNo, ctrlType, xactNo)
}, "makeChunkName(%q,%d,%q,%d) should panic", mainName, chunkNo, ctrlType, xactNo)
}
assertParseName := func(fileName, wantMainName string, wantChunkNo int, wantCtrlType string, wantXactNo int64) {
gotMainName, gotChunkNo, gotCtrlType, gotXactNo := f.parseChunkName(fileName)
assert.Equal(t, wantMainName, gotMainName)
assert.Equal(t, wantChunkNo, gotChunkNo)
assert.Equal(t, wantCtrlType, gotCtrlType)
assert.Equal(t, wantXactNo, gotXactNo)
}
const newFormatSupported = false // support for patterns not starting with base name (*)
// valid formats
assertFormat(`*.rclone_chunk.###`, `%s.rclone_chunk.%03d`, `%s.rclone_chunk._%s`, `^(.+?)\.rclone_chunk\.(?:([0-9]{3,})|_([a-z]{3,9}))(?:\.\.tmp_([0-9]{10,19}))?$`)
assertFormat(`*.rclone_chunk.#`, `%s.rclone_chunk.%d`, `%s.rclone_chunk._%s`, `^(.+?)\.rclone_chunk\.(?:([0-9]+)|_([a-z]{3,9}))(?:\.\.tmp_([0-9]{10,19}))?$`)
assertFormat(`*_chunk_#####`, `%s_chunk_%05d`, `%s_chunk__%s`, `^(.+?)_chunk_(?:([0-9]{5,})|_([a-z]{3,9}))(?:\.\.tmp_([0-9]{10,19}))?$`)
assertFormat(`*-chunk-#`, `%s-chunk-%d`, `%s-chunk-_%s`, `^(.+?)-chunk-(?:([0-9]+)|_([a-z]{3,9}))(?:\.\.tmp_([0-9]{10,19}))?$`)
assertFormat(`*-chunk-#-%^$()[]{}.+-!?:\`, `%s-chunk-%d-%%^$()[]{}.+-!?:\`, `%s-chunk-_%s-%%^$()[]{}.+-!?:\`, `^(.+?)-chunk-(?:([0-9]+)|_([a-z]{3,9}))-%\^\$\(\)\[\]\{\}\.\+-!\?:\\(?:\.\.tmp_([0-9]{10,19}))?$`)
if newFormatSupported {
assertFormat(`_*-chunk-##,`, `_%s-chunk-%02d,`, `_%s-chunk-_%s,`, `^_(.+?)-chunk-(?:([0-9]{2,})|_([a-z]{3,9})),(?:\.\.tmp_([0-9]{10,19}))?$`)
}
// invalid formats
assertFormatInvalid(`chunk-#`)
assertFormatInvalid(`*-chunk`)
assertFormatInvalid(`*-*-chunk-#`)
assertFormatInvalid(`*-chunk-#-#`)
assertFormatInvalid(`#-chunk-*`)
assertFormatInvalid(`*/#`)
assertFormatValid(`*#`)
assertFormatInvalid(`**#`)
assertFormatInvalid(`#*`)
assertFormatInvalid(``)
assertFormatInvalid(`-`)
// quick tests
if newFormatSupported {
assertFormat(`part_*_#`, `part_%s_%d`, `part_%s__%s`, `^part_(.+?)_(?:([0-9]+)|_([a-z]{3,9}))(?:\.\.tmp_([0-9]{10,19}))?$`)
f.opt.StartFrom = 1
assertMakeName(`part_fish_1`, "fish", 0, "", -1)
assertParseName(`part_fish_43`, "fish", 42, "", -1)
assertMakeName(`part_fish_3..tmp_0000000004`, "fish", 2, "", 4)
assertParseName(`part_fish_4..tmp_0000000005`, "fish", 3, "", 5)
assertMakeName(`part_fish__locks`, "fish", -2, "locks", -3)
assertParseName(`part_fish__locks`, "fish", -1, "locks", -1)
assertMakeName(`part_fish__blockinfo..tmp_1234567890123456789`, "fish", -3, "blockinfo", 1234567890123456789)
assertParseName(`part_fish__blockinfo..tmp_1234567890123456789`, "fish", -1, "blockinfo", 1234567890123456789)
}
// prepare format for long tests
assertFormat(`*.chunk.###`, `%s.chunk.%03d`, `%s.chunk._%s`, `^(.+?)\.chunk\.(?:([0-9]{3,})|_([a-z]{3,9}))(?:\.\.tmp_([0-9]{10,19}))?$`)
f.opt.StartFrom = 2
// valid data chunks
assertMakeName(`fish.chunk.003`, "fish", 1, "", -1)
assertMakeName(`fish.chunk.011..tmp_0000054321`, "fish", 9, "", 54321)
assertMakeName(`fish.chunk.011..tmp_1234567890`, "fish", 9, "", 1234567890)
assertMakeName(`fish.chunk.1916..tmp_123456789012345`, "fish", 1914, "", 123456789012345)
assertParseName(`fish.chunk.003`, "fish", 1, "", -1)
assertParseName(`fish.chunk.004..tmp_0000000021`, "fish", 2, "", 21)
assertParseName(`fish.chunk.021`, "fish", 19, "", -1)
assertParseName(`fish.chunk.323..tmp_1234567890123456789`, "fish", 321, "", 1234567890123456789)
// parsing invalid data chunk names
assertParseName(`fish.chunk.3`, "", -1, "", -1)
assertParseName(`fish.chunk.001`, "", -1, "", -1)
assertParseName(`fish.chunk.21`, "", -1, "", -1)
assertParseName(`fish.chunk.-21`, "", -1, "", -1)
assertParseName(`fish.chunk.004.tmp_0000000021`, "", -1, "", -1)
assertParseName(`fish.chunk.003..tmp_123456789`, "", -1, "", -1)
assertParseName(`fish.chunk.003..tmp_012345678901234567890123456789`, "", -1, "", -1)
assertParseName(`fish.chunk.003..tmp_-1`, "", -1, "", -1)
// valid control chunks
assertMakeName(`fish.chunk._info`, "fish", -1, "info", -1)
assertMakeName(`fish.chunk._locks`, "fish", -2, "locks", -1)
assertMakeName(`fish.chunk._blockinfo`, "fish", -3, "blockinfo", -1)
assertParseName(`fish.chunk._info`, "fish", -1, "info", -1)
assertParseName(`fish.chunk._locks`, "fish", -1, "locks", -1)
assertParseName(`fish.chunk._blockinfo`, "fish", -1, "blockinfo", -1)
// valid temporary control chunks
assertMakeName(`fish.chunk._info..tmp_0000000021`, "fish", -1, "info", 21)
assertMakeName(`fish.chunk._locks..tmp_0000054321`, "fish", -2, "locks", 54321)
assertMakeName(`fish.chunk._uploads..tmp_0000000000`, "fish", -3, "uploads", 0)
assertMakeName(`fish.chunk._blockinfo..tmp_1234567890123456789`, "fish", -4, "blockinfo", 1234567890123456789)
assertParseName(`fish.chunk._info..tmp_0000000021`, "fish", -1, "info", 21)
assertParseName(`fish.chunk._locks..tmp_0000054321`, "fish", -1, "locks", 54321)
assertParseName(`fish.chunk._uploads..tmp_0000000000`, "fish", -1, "uploads", 0)
assertParseName(`fish.chunk._blockinfo..tmp_1234567890123456789`, "fish", -1, "blockinfo", 1234567890123456789)
// parsing invalid control chunk names
assertParseName(`fish.chunk.info`, "", -1, "", -1)
assertParseName(`fish.chunk.locks`, "", -1, "", -1)
assertParseName(`fish.chunk.uploads`, "", -1, "", -1)
assertParseName(`fish.chunk.blockinfo`, "", -1, "", -1)
assertParseName(`fish.chunk._os`, "", -1, "", -1)
assertParseName(`fish.chunk._futuredata`, "", -1, "", -1)
assertParseName(`fish.chunk._me_ta`, "", -1, "", -1)
assertParseName(`fish.chunk._in-fo`, "", -1, "", -1)
assertParseName(`fish.chunk._.bin`, "", -1, "", -1)
assertParseName(`fish.chunk._locks..tmp_123456789`, "", -1, "", -1)
assertParseName(`fish.chunk._meta..tmp_-1`, "", -1, "", -1)
assertParseName(`fish.chunk._blockinfo..tmp_012345678901234567890123456789`, "", -1, "", -1)
// short control chunk names: 3 letters ok, 1-2 letters not allowed
assertMakeName(`fish.chunk._ext`, "fish", -1, "ext", -1)
assertMakeName(`fish.chunk._ext..tmp_0000000021`, "fish", -1, "ext", 21)
assertParseName(`fish.chunk._int`, "fish", -1, "int", -1)
assertParseName(`fish.chunk._int..tmp_0000000021`, "fish", -1, "int", 21)
assertMakeNamePanics("fish", -1, "in", -1)
assertMakeNamePanics("fish", -1, "up", 4)
assertMakeNamePanics("fish", -1, "x", -1)
assertMakeNamePanics("fish", -1, "c", 4)
// base file name can sometimes look like a valid chunk name
assertParseName(`fish.chunk.003.chunk.004`, "fish.chunk.003", 2, "", -1)
assertParseName(`fish.chunk.003.chunk.005..tmp_0000000021`, "fish.chunk.003", 3, "", 21)
assertParseName(`fish.chunk.003.chunk._info`, "fish.chunk.003", -1, "info", -1)
assertParseName(`fish.chunk.003.chunk._blockinfo..tmp_1234567890123456789`, "fish.chunk.003", -1, "blockinfo", 1234567890123456789)
assertParseName(`fish.chunk.003.chunk._Meta`, "", -1, "", -1)
assertParseName(`fish.chunk.003.chunk._x..tmp_0000054321`, "", -1, "", -1)
assertParseName(`fish.chunk.004..tmp_0000000021.chunk.004`, "fish.chunk.004..tmp_0000000021", 2, "", -1)
assertParseName(`fish.chunk.004..tmp_0000000021.chunk.005..tmp_0000000021`, "fish.chunk.004..tmp_0000000021", 3, "", 21)
assertParseName(`fish.chunk.004..tmp_0000000021.chunk._info`, "fish.chunk.004..tmp_0000000021", -1, "info", -1)
assertParseName(`fish.chunk.004..tmp_0000000021.chunk._blockinfo..tmp_1234567890123456789`, "fish.chunk.004..tmp_0000000021", -1, "blockinfo", 1234567890123456789)
assertParseName(`fish.chunk.004..tmp_0000000021.chunk._Meta`, "", -1, "", -1)
assertParseName(`fish.chunk.004..tmp_0000000021.chunk._x..tmp_0000054321`, "", -1, "", -1)
assertParseName(`fish.chunk._info.chunk.004`, "fish.chunk._info", 2, "", -1)
assertParseName(`fish.chunk._info.chunk.005..tmp_0000000021`, "fish.chunk._info", 3, "", 21)
assertParseName(`fish.chunk._info.chunk._info`, "fish.chunk._info", -1, "info", -1)
assertParseName(`fish.chunk._info.chunk._blockinfo..tmp_1234567890123456789`, "fish.chunk._info", -1, "blockinfo", 1234567890123456789)
assertParseName(`fish.chunk._info.chunk._info.chunk._Meta`, "", -1, "", -1)
assertParseName(`fish.chunk._info.chunk._info.chunk._x..tmp_0000054321`, "", -1, "", -1)
assertParseName(`fish.chunk._blockinfo..tmp_1234567890123456789.chunk.004`, "fish.chunk._blockinfo..tmp_1234567890123456789", 2, "", -1)
assertParseName(`fish.chunk._blockinfo..tmp_1234567890123456789.chunk.005..tmp_0000000021`, "fish.chunk._blockinfo..tmp_1234567890123456789", 3, "", 21)
assertParseName(`fish.chunk._blockinfo..tmp_1234567890123456789.chunk._info`, "fish.chunk._blockinfo..tmp_1234567890123456789", -1, "info", -1)
assertParseName(`fish.chunk._blockinfo..tmp_1234567890123456789.chunk._blockinfo..tmp_1234567890123456789`, "fish.chunk._blockinfo..tmp_1234567890123456789", -1, "blockinfo", 1234567890123456789)
assertParseName(`fish.chunk._blockinfo..tmp_1234567890123456789.chunk._info.chunk._Meta`, "", -1, "", -1)
assertParseName(`fish.chunk._blockinfo..tmp_1234567890123456789.chunk._info.chunk._x..tmp_0000054321`, "", -1, "", -1)
// attempts to make invalid chunk names
assertMakeNamePanics("fish", -1, "", -1) // neither data nor control
assertMakeNamePanics("fish", 0, "info", -1) // both data and control
assertMakeNamePanics("fish", -1, "futuredata", -1) // control type too long
assertMakeNamePanics("fish", -1, "123", -1) // digits not allowed
assertMakeNamePanics("fish", -1, "Meta", -1) // only lower case letters allowed
assertMakeNamePanics("fish", -1, "in-fo", -1) // punctuation not allowed
assertMakeNamePanics("fish", -1, "_info", -1)
assertMakeNamePanics("fish", -1, "info_", -1)
assertMakeNamePanics("fish", -2, ".bind", -3)
assertMakeNamePanics("fish", -2, "bind.", -3)
assertMakeNamePanics("fish", -1, "", 1) // neither data nor control
assertMakeNamePanics("fish", 0, "info", 12) // both data and control
assertMakeNamePanics("fish", -1, "futuredata", 45) // control type too long
assertMakeNamePanics("fish", -1, "123", 123) // digits not allowed
assertMakeNamePanics("fish", -1, "Meta", 456) // only lower case letters allowed
assertMakeNamePanics("fish", -1, "in-fo", 321) // punctuation not allowed
assertMakeNamePanics("fish", -1, "_info", 15678)
assertMakeNamePanics("fish", -1, "info_", 999)
assertMakeNamePanics("fish", -2, ".bind", 0)
assertMakeNamePanics("fish", -2, "bind.", 0)
}
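The assertions above pin down the chunk naming grammar: a data chunk is the main name plus the configured pattern with the chunk number offset by StartFrom, a control chunk replaces the number with an underscore and a 3-9 letter lowercase type, and in-flight chunks carry a ..tmp_ suffix with a 10-19 digit transaction number. The data case reduces to a one-line format:

package sketch

import "fmt"

// dataChunkName reproduces the `*.chunk.###` case from the tests above:
// the format is "%s.chunk.%03d" and numbers are offset by StartFrom,
// so dataChunkName("fish", 1, 2) == "fish.chunk.003".
func dataChunkName(mainName string, chunkNo, startFrom int) string {
	return fmt.Sprintf("%s.chunk.%03d", mainName, chunkNo+startFrom)
}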
func testSmallFileInternals(t *testing.T, f *Fs) {
const dir = "small"
ctx := context.Background()
saveOpt := f.opt
defer func() {
f.opt.FailHard = false
_ = operations.Purge(ctx, f.base, dir)
f.opt = saveOpt
}()
f.opt.FailHard = false
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
checkSmallFileInternals := func(obj fs.Object) {
assert.NotNil(t, obj)
o, ok := obj.(*Object)
assert.True(t, ok)
assert.NotNil(t, o)
if o == nil {
return
}
switch {
case !f.useMeta:
// If meta format is "none", a non-chunked file (even an empty one)
// is internally a single chunk without a meta object.
assert.Nil(t, o.main)
assert.True(t, o.isComposite()) // sorry, sometimes a name is misleading
assert.Equal(t, 1, len(o.chunks))
case f.hashAll:
// Consistent hashing forces meta object on small files too
assert.NotNil(t, o.main)
assert.True(t, o.isComposite())
assert.Equal(t, 1, len(o.chunks))
default:
// normally non-chunked file is kept in the Object's main field
assert.NotNil(t, o.main)
assert.False(t, o.isComposite())
assert.Equal(t, 0, len(o.chunks))
}
}
checkContents := func(obj fs.Object, contents string) {
assert.NotNil(t, obj)
assert.Equal(t, int64(len(contents)), obj.Size())
r, err := obj.Open(ctx)
assert.NoError(t, err)
assert.NotNil(t, r)
if r == nil {
return
}
data, err := ioutil.ReadAll(r)
assert.NoError(t, err)
assert.Equal(t, contents, string(data))
_ = r.Close()
}
checkHashsum := func(obj fs.Object) {
var ht hash.Type
switch {
case !f.hashAll:
return
case f.useMD5:
ht = hash.MD5
case f.useSHA1:
ht = hash.SHA1
default:
return
}
// even empty files must have hashsum in consistent mode
sum, err := obj.Hash(ctx, ht)
assert.NoError(t, err)
assert.NotEqual(t, sum, "")
}
checkSmallFile := func(name, contents string) {
filename := path.Join(dir, name)
item := fstest.Item{Path: filename, ModTime: modTime}
_, put := fstests.PutTestContents(ctx, t, f, &item, contents, false)
assert.NotNil(t, put)
checkSmallFileInternals(put)
checkContents(put, contents)
checkHashsum(put)
// objects returned by Put and NewObject must have similar structure
obj, err := f.NewObject(ctx, filename)
assert.NoError(t, err)
assert.NotNil(t, obj)
checkSmallFileInternals(obj)
checkContents(obj, contents)
checkHashsum(obj)
_ = obj.Remove(ctx)
_ = put.Remove(ctx) // for good
}
checkSmallFile("emptyfile", "")
checkSmallFile("smallfile", "Ok")
}
func testPreventCorruption(t *testing.T, f *Fs) {
if f.opt.ChunkSize > 50 {
t.Skip("this test requires small chunks")
}
const dir = "corrupted"
ctx := context.Background()
saveOpt := f.opt
defer func() {
f.opt.FailHard = false
_ = operations.Purge(ctx, f.base, dir)
f.opt = saveOpt
}()
f.opt.FailHard = true
contents := random.String(250)
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
const overlapMessage = "chunk overlap"
assertOverlapError := func(err error) {
assert.Error(t, err)
if err != nil {
assert.Contains(t, err.Error(), overlapMessage)
}
}
newFile := func(name string) fs.Object {
item := fstest.Item{Path: path.Join(dir, name), ModTime: modTime}
_, obj := fstests.PutTestContents(ctx, t, f, &item, contents, true)
require.NotNil(t, obj)
return obj
}
billyObj := newFile("billy")
billyChunkName := func(chunkNo int) string {
return f.makeChunkName(billyObj.Remote(), chunkNo, "", -1)
}
err := f.Mkdir(ctx, billyChunkName(1))
assertOverlapError(err)
_, err = f.Move(ctx, newFile("silly1"), billyChunkName(2))
assert.Error(t, err)
assert.True(t, err == fs.ErrorCantMove || (err != nil && strings.Contains(err.Error(), overlapMessage)))
_, err = f.Copy(ctx, newFile("silly2"), billyChunkName(3))
assert.Error(t, err)
assert.True(t, err == fs.ErrorCantCopy || (err != nil && strings.Contains(err.Error(), overlapMessage)))
// accessing chunks in strict mode is prohibited
f.opt.FailHard = true
billyChunk4Name := billyChunkName(4)
billyChunk4, err := f.NewObject(ctx, billyChunk4Name)
assertOverlapError(err)
f.opt.FailHard = false
billyChunk4, err = f.NewObject(ctx, billyChunk4Name)
assert.NoError(t, err)
require.NotNil(t, billyChunk4)
f.opt.FailHard = true
_, err = f.Put(ctx, bytes.NewBufferString(contents), billyChunk4)
assertOverlapError(err)
// you can freely read chunks (if you have an object)
r, err := billyChunk4.Open(ctx)
assert.NoError(t, err)
var chunkContents []byte
assert.NotPanics(t, func() {
chunkContents, err = ioutil.ReadAll(r)
_ = r.Close()
})
assert.NoError(t, err)
assert.NotEqual(t, contents, string(chunkContents))
// but you can't change them
err = billyChunk4.Update(ctx, bytes.NewBufferString(contents), newFile("silly3"))
assertOverlapError(err)
// Remove isn't special, you can't corrupt files even if you have an object
err = billyChunk4.Remove(ctx)
assertOverlapError(err)
// recreate billy in case it was somehow corrupted
willyObj := newFile("willy")
willyChunkName := f.makeChunkName(willyObj.Remote(), 1, "", -1)
f.opt.FailHard = false
willyChunk, err := f.NewObject(ctx, willyChunkName)
f.opt.FailHard = true
assert.NoError(t, err)
require.NotNil(t, willyChunk)
_, err = operations.Copy(ctx, f, willyChunk, willyChunkName, newFile("silly4"))
assertOverlapError(err)
// operations.Move returns an error when chunker's Move refuses to
// corrupt the target file, but it then falls back to the copy/delete
// method, which still tries to delete the target chunk. Chunker must
// come to the rescue.
_, err = operations.Move(ctx, f, willyChunk, willyChunkName, newFile("silly5"))
assertOverlapError(err)
r, err = willyChunk.Open(ctx)
assert.NoError(t, err)
assert.NotPanics(t, func() {
_, err = ioutil.ReadAll(r)
_ = r.Close()
})
assert.NoError(t, err)
}
func testChunkNumberOverflow(t *testing.T, f *Fs) {
if f.opt.ChunkSize > 50 {
t.Skip("this test requires small chunks")
}
const dir = "wreaked"
const wreakNumber = 10200300
ctx := context.Background()
saveOpt := f.opt
defer func() {
f.opt.FailHard = false
_ = operations.Purge(ctx, f.base, dir)
f.opt = saveOpt
}()
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
contents := random.String(100)
newFile := func(f fs.Fs, name string) (fs.Object, string) {
filename := path.Join(dir, name)
item := fstest.Item{Path: filename, ModTime: modTime}
_, obj := fstests.PutTestContents(ctx, t, f, &item, contents, true)
require.NotNil(t, obj)
return obj, filename
}
f.opt.FailHard = false
file, fileName := newFile(f, "wreaker")
wreak, _ := newFile(f.base, f.makeChunkName("wreaker", wreakNumber, "", -1))
f.opt.FailHard = false
fstest.CheckListingWithRoot(t, f, dir, nil, nil, f.Precision())
_, err := f.NewObject(ctx, fileName)
assert.Error(t, err)
f.opt.FailHard = true
_, err = f.List(ctx, dir)
assert.Error(t, err)
_, err = f.NewObject(ctx, fileName)
assert.Error(t, err)
f.opt.FailHard = false
_ = wreak.Remove(ctx)
_ = file.Remove(ctx)
}
func testMetadataInput(t *testing.T, f *Fs) {
const minChunkForTest = 50
if f.opt.ChunkSize < minChunkForTest {
t.Skip("this test requires chunks that fit metadata")
}
const dir = "usermeta"
ctx := context.Background()
saveOpt := f.opt
defer func() {
f.opt.FailHard = false
_ = operations.Purge(ctx, f.base, dir)
f.opt = saveOpt
}()
f.opt.FailHard = false
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
putFile := func(f fs.Fs, name, contents, message string, check bool) fs.Object {
item := fstest.Item{Path: name, ModTime: modTime}
_, obj := fstests.PutTestContents(ctx, t, f, &item, contents, check)
assert.NotNil(t, obj, message)
return obj
}
runSubtest := func(contents, name string) {
description := fmt.Sprintf("file with %s metadata", name)
filename := path.Join(dir, name)
require.True(t, len(contents) > 2 && len(contents) < minChunkForTest, description+" test data is correct")
part := putFile(f.base, f.makeChunkName(filename, 0, "", -1), "oops", "", true)
_ = putFile(f, filename, contents, "upload "+description, false)
obj, err := f.NewObject(ctx, filename)
assert.NoError(t, err, "access "+description)
assert.NotNil(t, obj)
assert.Equal(t, int64(len(contents)), obj.Size(), "size "+description)
o, ok := obj.(*Object)
assert.True(t, ok)
if o != nil {
assert.True(t, o.isComposite() && len(o.chunks) == 1, description+" is forced composite")
o = nil
}
defer func() {
_ = obj.Remove(ctx)
_ = part.Remove(ctx)
}()
r, err := obj.Open(ctx)
assert.NoError(t, err, "open "+description)
assert.NotNil(t, r, "open stream of "+description)
if err == nil && r != nil {
data, err := ioutil.ReadAll(r)
assert.NoError(t, err, "read all of "+description)
assert.Equal(t, contents, string(data), description+" contents is ok")
_ = r.Close()
}
}
metaData, err := marshalSimpleJSON(ctx, 3, 1, "", "")
require.NoError(t, err)
todaysMeta := string(metaData)
runSubtest(todaysMeta, "today")
pastMeta := regexp.MustCompile(`"ver":[0-9]+`).ReplaceAllLiteralString(todaysMeta, `"ver":1`)
pastMeta = regexp.MustCompile(`"size":[0-9]+`).ReplaceAllLiteralString(pastMeta, `"size":0`)
runSubtest(pastMeta, "past")
futureMeta := regexp.MustCompile(`"ver":[0-9]+`).ReplaceAllLiteralString(todaysMeta, `"ver":999`)
futureMeta = regexp.MustCompile(`"nchunks":[0-9]+`).ReplaceAllLiteralString(futureMeta, `"nchunks":0,"x":"y"`)
runSubtest(futureMeta, "future")
}
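testMetadataInput fakes old and new metadata by regex-editing the JSON from marshalSimpleJSON: forcing "ver" to 1 and "size" to 0 simulates a past object, while "ver" 999 plus an unknown extra field simulates a future one, and the chunker must still read both. The rewriting is just a literal replace on one numeric field:

package sketch

import "regexp"

// mutateMeta rewrites one numeric field of a metadata JSON string the
// way the test does, e.g. mutateMeta(meta, "ver", "999").
func mutateMeta(meta, field, value string) string {
	re := regexp.MustCompile(`"` + field + `":[0-9]+`)
	return re.ReplaceAllLiteralString(meta, `"`+field+`":`+value)
}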
// InternalTest dispatches all internal tests
func (f *Fs) InternalTest(t *testing.T) {
t.Run("PutLarge", func(t *testing.T) {
if *UploadKilobytes <= 0 {
t.Skip("-upload-kilobytes is not set")
}
testPutLarge(t, f, *UploadKilobytes)
})
t.Run("ChunkNameFormat", func(t *testing.T) {
testChunkNameFormat(t, f)
})
t.Run("SmallFileInternals", func(t *testing.T) {
testSmallFileInternals(t, f)
})
t.Run("PreventCorruption", func(t *testing.T) {
testPreventCorruption(t, f)
})
t.Run("ChunkNumberOverflow", func(t *testing.T) {
testChunkNumberOverflow(t, f)
})
t.Run("MetadataInput", func(t *testing.T) {
testMetadataInput(t, f)
})
}
var _ fstests.InternalTester = (*Fs)(nil)


@@ -0,0 +1,58 @@
// Test the Chunker filesystem interface
package chunker_test
import (
"flag"
"os"
"path/filepath"
"testing"
_ "github.com/rclone/rclone/backend/all" // for integration tests
"github.com/rclone/rclone/backend/chunker"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
// Command line flags
var (
// Invalid characters are not supported by some remotes, eg. Mailru.
// We enable testing with invalid characters when -remote is not set, so
// chunker overlays a local directory, but invalid characters are disabled
// by default when -remote is set, eg. when test_all runs backend tests.
// You can still test with invalid characters using the below flag.
UseBadChars = flag.Bool("bad-chars", false, "Set to test bad characters in file names when -remote is set")
)
// TestIntegration runs integration tests against a concrete remote
// set by the -remote flag. If the flag is not set, it creates a
// dynamic chunker overlay wrapping a local temporary directory.
func TestIntegration(t *testing.T) {
opt := fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*chunker.Object)(nil),
SkipBadWindowsCharacters: !*UseBadChars,
UnimplementableObjectMethods: []string{
"MimeType",
"GetTier",
"SetTier",
},
UnimplementableFsMethods: []string{
"PublicLink",
"OpenWriterAt",
"MergeDirs",
"DirCacheFlush",
"UserInfo",
"Disconnect",
},
}
if *fstest.RemoteName == "" {
name := "TestChunker"
opt.RemoteName = name + ":"
tempDir := filepath.Join(os.TempDir(), "rclone-chunker-test-standard")
opt.ExtraConfig = []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "chunker"},
{Name: name, Key: "remote", Value: tempDir},
}
}
fstests.Run(t, &opt)
}


@@ -208,21 +208,6 @@ func (c *cipher) putBlock(buf []byte) {
c.buffers.Put(buf)
}
// check to see if the byte string is valid with no control characters
// from 0x00 to 0x1F and is a valid UTF-8 string
func checkValidString(buf []byte) error {
for i := range buf {
c := buf[i]
if c >= 0x00 && c < 0x20 || c == 0x7F {
return ErrorBadDecryptControlChar
}
}
if !utf8.Valid(buf) {
return ErrorBadDecryptUTF8
}
return nil
}
// encodeFileName encodes a filename using a modified version of
// standard base32 as described in RFC4648
//
@@ -294,10 +279,6 @@ func (c *cipher) decryptSegment(ciphertext string) (string, error) {
if err != nil {
return "", err
}
err = checkValidString(plaintext)
if err != nil {
return "", err
}
return string(plaintext), err
}
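With this hunk the post-decryption sanity check disappears: names that decrypt to control characters or invalid UTF-8 are now returned as-is rather than rejected, which lines up with the SkipInvalidUTF8 flag added to the cache tests earlier in this diff. For reference, what the deleted check rejected can be reproduced with the standard library:

package sketch

import "unicode/utf8"

// rejected reproduces what the deleted checkValidString refused:
// control characters 0x00-0x1F, DEL (0x7F), or invalid UTF-8.
func rejected(name []byte) bool {
	for _, c := range name {
		if c < 0x20 || c == 0x7F {
			return true
		}
	}
	return !utf8.Valid(name)
}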


@@ -44,69 +44,6 @@ func TestNewNameEncryptionModeString(t *testing.T) {
assert.Equal(t, NameEncryptionMode(3).String(), "Unknown mode #3")
}
func TestValidString(t *testing.T) {
for _, test := range []struct {
in string
expected error
}{
{"", nil},
{"\x01", ErrorBadDecryptControlChar},
{"a\x02", ErrorBadDecryptControlChar},
{"abc\x03", ErrorBadDecryptControlChar},
{"abc\x04def", ErrorBadDecryptControlChar},
{"\x05d", ErrorBadDecryptControlChar},
{"\x06def", ErrorBadDecryptControlChar},
{"\x07", ErrorBadDecryptControlChar},
{"\x08", ErrorBadDecryptControlChar},
{"\x09", ErrorBadDecryptControlChar},
{"\x0A", ErrorBadDecryptControlChar},
{"\x0B", ErrorBadDecryptControlChar},
{"\x0C", ErrorBadDecryptControlChar},
{"\x0D", ErrorBadDecryptControlChar},
{"\x0E", ErrorBadDecryptControlChar},
{"\x0F", ErrorBadDecryptControlChar},
{"\x10", ErrorBadDecryptControlChar},
{"\x11", ErrorBadDecryptControlChar},
{"\x12", ErrorBadDecryptControlChar},
{"\x13", ErrorBadDecryptControlChar},
{"\x14", ErrorBadDecryptControlChar},
{"\x15", ErrorBadDecryptControlChar},
{"\x16", ErrorBadDecryptControlChar},
{"\x17", ErrorBadDecryptControlChar},
{"\x18", ErrorBadDecryptControlChar},
{"\x19", ErrorBadDecryptControlChar},
{"\x1A", ErrorBadDecryptControlChar},
{"\x1B", ErrorBadDecryptControlChar},
{"\x1C", ErrorBadDecryptControlChar},
{"\x1D", ErrorBadDecryptControlChar},
{"\x1E", ErrorBadDecryptControlChar},
{"\x1F", ErrorBadDecryptControlChar},
{"\x20", nil},
{"\x7E", nil},
{"\x7F", ErrorBadDecryptControlChar},
{"£100", nil},
{`hello? sausage/êé/Hello, 世界/ " ' @ < > & ?/z.txt`, nil},
{"£100", nil},
// Following tests from https://secure.php.net/manual/en/reference.pcre.pattern.modifiers.php#54805
{"a", nil}, // Valid ASCII
{"\xc3\xb1", nil}, // Valid 2 Octet Sequence
{"\xc3\x28", ErrorBadDecryptUTF8}, // Invalid 2 Octet Sequence
{"\xa0\xa1", ErrorBadDecryptUTF8}, // Invalid Sequence Identifier
{"\xe2\x82\xa1", nil}, // Valid 3 Octet Sequence
{"\xe2\x28\xa1", ErrorBadDecryptUTF8}, // Invalid 3 Octet Sequence (in 2nd Octet)
{"\xe2\x82\x28", ErrorBadDecryptUTF8}, // Invalid 3 Octet Sequence (in 3rd Octet)
{"\xf0\x90\x8c\xbc", nil}, // Valid 4 Octet Sequence
{"\xf0\x28\x8c\xbc", ErrorBadDecryptUTF8}, // Invalid 4 Octet Sequence (in 2nd Octet)
{"\xf0\x90\x28\xbc", ErrorBadDecryptUTF8}, // Invalid 4 Octet Sequence (in 3rd Octet)
{"\xf0\x28\x8c\x28", ErrorBadDecryptUTF8}, // Invalid 4 Octet Sequence (in 4th Octet)
{"\xf8\xa1\xa1\xa1\xa1", ErrorBadDecryptUTF8}, // Valid 5 Octet Sequence (but not Unicode!)
{"\xfc\xa1\xa1\xa1\xa1\xa1", ErrorBadDecryptUTF8}, // Valid 6 Octet Sequence (but not Unicode!)
} {
actual := checkValidString([]byte(test.in))
assert.Equal(t, actual, test.expected, fmt.Sprintf("in=%q", test.in))
}
}
func TestEncodeFileName(t *testing.T) {
for _, test := range []struct {
in string
@@ -210,8 +147,6 @@ func TestDecryptSegment(t *testing.T) {
{encodeFileName([]byte("a")), ErrorNotAMultipleOfBlocksize},
{encodeFileName([]byte("123456789abcdef")), ErrorNotAMultipleOfBlocksize},
{encodeFileName([]byte("123456789abcdef0")), pkcs7.ErrorPaddingTooLong},
{c.encryptSegment("\x01"), ErrorBadDecryptControlChar},
{c.encryptSegment("\xc3\x28"), ErrorBadDecryptUTF8},
} {
actual, actualErr := c.decryptSegment(test.in)
assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("in=%q got actual=%q, err = %v %T", test.in, actual, actualErr, actualErr))
@@ -705,16 +640,16 @@ var (
// Test test infrastructure first!
func TestRandomSource(t *testing.T) {
source := newRandomSource(1E8)
sink := newRandomSource(1E8)
source := newRandomSource(1e8)
sink := newRandomSource(1e8)
n, err := io.Copy(sink, source)
assert.NoError(t, err)
assert.Equal(t, int64(1E8), n)
assert.Equal(t, int64(1e8), n)
source = newRandomSource(1E8)
source = newRandomSource(1e8)
buf := make([]byte, 16)
_, _ = source.Read(buf)
sink = newRandomSource(1E8)
sink = newRandomSource(1e8)
_, err = io.Copy(sink, source)
assert.Error(t, err, "Error in stream")
}
@@ -754,23 +689,23 @@ func testEncryptDecrypt(t *testing.T, bufSize int, copySize int64) {
}
func TestEncryptDecrypt1(t *testing.T) {
testEncryptDecrypt(t, 1, 1E7)
testEncryptDecrypt(t, 1, 1e7)
}
func TestEncryptDecrypt32(t *testing.T) {
testEncryptDecrypt(t, 32, 1E8)
testEncryptDecrypt(t, 32, 1e8)
}
func TestEncryptDecrypt4096(t *testing.T) {
testEncryptDecrypt(t, 4096, 1E8)
testEncryptDecrypt(t, 4096, 1e8)
}
func TestEncryptDecrypt65536(t *testing.T) {
testEncryptDecrypt(t, 65536, 1E8)
testEncryptDecrypt(t, 65536, 1e8)
}
func TestEncryptDecrypt65537(t *testing.T) {
testEncryptDecrypt(t, 65537, 1E8)
testEncryptDecrypt(t, 65537, 1e8)
}
var (
@@ -803,7 +738,7 @@ func TestEncryptData(t *testing.T) {
} {
c, err := newCipher(NameEncryptionStandard, "", "", true)
assert.NoError(t, err)
c.cryptoRand = newRandomSource(1E8) // nudge the crypto rand generator
c.cryptoRand = newRandomSource(1e8) // nudge the crypto rand generator
// Check encode works
buf := bytes.NewBuffer(test.in)
@@ -826,7 +761,7 @@ func TestEncryptData(t *testing.T) {
func TestNewEncrypter(t *testing.T) {
c, err := newCipher(NameEncryptionStandard, "", "", true)
assert.NoError(t, err)
c.cryptoRand = newRandomSource(1E8) // nudge the crypto rand generator
c.cryptoRand = newRandomSource(1e8) // nudge the crypto rand generator
z := &zeroes{}
@@ -853,7 +788,7 @@ func TestNewEncrypterErrUnexpectedEOF(t *testing.T) {
fh, err := c.newEncrypter(in, nil)
assert.NoError(t, err)
n, err := io.CopyN(ioutil.Discard, fh, 1E6)
n, err := io.CopyN(ioutil.Discard, fh, 1e6)
assert.Equal(t, io.ErrUnexpectedEOF, err)
assert.Equal(t, int64(32), n)
}
@@ -885,7 +820,7 @@ func (c *closeDetector) Close() error {
func TestNewDecrypter(t *testing.T) {
c, err := newCipher(NameEncryptionStandard, "", "", true)
assert.NoError(t, err)
c.cryptoRand = newRandomSource(1E8) // nudge the crypto rand generator
c.cryptoRand = newRandomSource(1e8) // nudge the crypto rand generator
cd := newCloseDetector(bytes.NewBuffer(file0))
fh, err := c.newDecrypter(cd)
@@ -936,7 +871,7 @@ func TestNewDecrypterErrUnexpectedEOF(t *testing.T) {
fh, err := c.newDecrypter(in)
assert.NoError(t, err)
n, err := io.CopyN(ioutil.Discard, fh, 1E6)
n, err := io.CopyN(ioutil.Discard, fh, 1e6)
assert.Equal(t, io.ErrUnexpectedEOF, err)
assert.Equal(t, int64(16), n)
}

View File

@@ -32,6 +32,7 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
@@ -47,6 +48,8 @@ import (
"google.golang.org/api/googleapi"
)
const enc = encodings.Drive
// Constants
const (
rcloneClientID = "202264815644.apps.googleusercontent.com"
@@ -156,6 +159,7 @@ func init() {
Description: "Google Drive",
NewFs: NewFs,
Config: func(name string, m configmap.Mapper) {
ctx := context.TODO()
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
@@ -177,7 +181,7 @@ func init() {
log.Fatalf("Failed to configure token: %v", err)
}
}
err = configTeamDrive(opt, m, name)
err = configTeamDrive(ctx, opt, m, name)
if err != nil {
log.Fatalf("Failed to configure team drive: %v", err)
}
@@ -598,11 +602,10 @@ func (f *Fs) list(ctx context.Context, dirIDs []string, title string, directorie
}
var stems []string
if title != "" {
searchTitle := enc.FromStandardName(title)
// Escaping the backslash isn't documented but seems to work
searchTitle := strings.Replace(title, `\`, `\\`, -1)
searchTitle = strings.Replace(searchTitle, `\`, `\\`, -1)
searchTitle = strings.Replace(searchTitle, `'`, `\'`, -1)
// Convert ／ to / for search
searchTitle = strings.Replace(searchTitle, "／", "/", -1)
var titleQuery bytes.Buffer
_, _ = fmt.Fprintf(&titleQuery, "(name='%s'", searchTitle)
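
For context: titles are escaped before interpolation into the Drive name='...' query term (backslash first, then single quote), and the removed lines additionally mapped the fullwidth ／ placeholder back to / so searches matched what was stored. A small sketch of that escaping order, using a hypothetical buildTitleQuery helper:

    package main

    import (
        "fmt"
        "strings"
    )

    // buildTitleQuery mirrors the order above: escape the backslash
    // first, then the single quote, then interpolate.
    func buildTitleQuery(title string) string {
        t := strings.Replace(title, `\`, `\\`, -1)
        t = strings.Replace(t, `'`, `\'`, -1)
        return fmt.Sprintf("(name='%s')", t)
    }

    func main() {
        fmt.Println(buildTitleQuery(`it's a \ test`))
        // (name='it\'s a \\ test')
    }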
@@ -663,18 +666,16 @@ OUTER:
for {
var files *drive.FileList
err = f.pacer.Call(func() (bool, error) {
files, err = list.Fields(googleapi.Field(fields)).Do()
files, err = list.Fields(googleapi.Field(fields)).Context(ctx).Do()
return shouldRetry(err)
})
if err != nil {
return false, errors.Wrap(err, "couldn't list directory")
}
for _, item := range files.Files {
// Convert / to ／ for listing purposes
item.Name = strings.Replace(item.Name, "/", "／", -1)
item.Name = enc.ToStandardName(item.Name)
// Check the case of items is correct since
// the `=` operator is case insensitive.
if title != "" && title != item.Name {
found := false
for _, stem := range stems {
@@ -778,7 +779,7 @@ func parseExtensions(extensionsIn ...string) (extensions, mimeTypes []string, er
}
// Figure out if the user wants to use a team drive
func configTeamDrive(opt *Options, m configmap.Mapper, name string) error {
func configTeamDrive(ctx context.Context, opt *Options, m configmap.Mapper, name string) error {
// Stop if we are running non-interactive config
if fs.Config.AutoConfirm {
return nil
@@ -806,7 +807,7 @@ func configTeamDrive(opt *Options, m configmap.Mapper, name string) error {
for {
var teamDrives *drive.TeamDriveList
err = newPacer(opt).Call(func() (bool, error) {
teamDrives, err = listTeamDrives.Do()
teamDrives, err = listTeamDrives.Context(ctx).Do()
return shouldRetry(err)
})
if err != nil {
@@ -1209,6 +1210,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
// CreateDir makes a directory with pathID as parent and name leaf
func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) {
leaf = enc.FromStandardName(leaf)
// fmt.Println("Making", path)
// Define the metadata for the directory we are going to create.
createInfo := &drive.File{
@@ -1644,6 +1646,7 @@ func (f *Fs) createFileInfo(ctx context.Context, remote string, modTime time.Tim
return nil, err
}
leaf = enc.FromStandardName(leaf)
// Define the metadata for the file we are going to create.
createInfo := &drive.File{
Name: leaf,
@@ -1734,7 +1737,7 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
}
} else {
// Upload the file in chunks
info, err = f.Upload(in, size, srcMimeType, "", remote, createInfo)
info, err = f.Upload(ctx, in, size, srcMimeType, "", remote, createInfo)
if err != nil {
return nil, err
}
@@ -1975,7 +1978,7 @@ func (f *Fs) Purge(ctx context.Context) error {
// CleanUp empties the trash
func (f *Fs) CleanUp(ctx context.Context) error {
err := f.pacer.Call(func() (bool, error) {
err := f.svc.Files.EmptyTrash().Do()
err := f.svc.Files.EmptyTrash().Context(ctx).Do()
return shouldRetry(err)
})
@@ -1994,7 +1997,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var about *drive.About
var err error
err = f.pacer.Call(func() (bool, error) {
about, err = f.svc.About.Get().Fields("storageQuota").Do()
about, err = f.svc.About.Get().Fields("storageQuota").Context(ctx).Do()
return shouldRetry(err)
})
if err != nil {
@@ -2253,7 +2256,7 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
}
}
fs.Debugf(f, "Checking for changes on remote")
startPageToken, err = f.changeNotifyRunner(notifyFunc, startPageToken)
startPageToken, err = f.changeNotifyRunner(ctx, notifyFunc, startPageToken)
if err != nil {
fs.Infof(f, "Change notify listener failure: %s", err)
}
@@ -2275,7 +2278,7 @@ func (f *Fs) changeNotifyStartPageToken() (pageToken string, err error) {
return startPageToken.StartPageToken, nil
}
func (f *Fs) changeNotifyRunner(notifyFunc func(string, fs.EntryType), startPageToken string) (newStartPageToken string, err error) {
func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.EntryType), startPageToken string) (newStartPageToken string, err error) {
pageToken := startPageToken
for {
var changeList *drive.ChangeList
@@ -2291,7 +2294,7 @@ func (f *Fs) changeNotifyRunner(notifyFunc func(string, fs.EntryType), startPage
if f.isTeamDrive {
changesCall.TeamDriveId(f.opt.TeamDriveID)
}
changeList, err = changesCall.Do()
changeList, err = changesCall.Context(ctx).Do()
return shouldRetry(err)
})
if err != nil {
@@ -2315,6 +2318,7 @@ func (f *Fs) changeNotifyRunner(notifyFunc func(string, fs.EntryType), startPage
// find the new path
if change.File != nil {
change.File.Name = enc.ToStandardName(change.File.Name)
changeType := fs.EntryDirectory
if change.File.MimeType != driveFolderType {
changeType = fs.EntryObject
@@ -2499,7 +2503,7 @@ func (o *baseObject) Storable() bool {
// httpResponse gets an http.Response object for the object
// using the url and method passed in
func (o *baseObject) httpResponse(url, method string, options []fs.OpenOption) (req *http.Request, res *http.Response, err error) {
func (o *baseObject) httpResponse(ctx context.Context, url, method string, options []fs.OpenOption) (req *http.Request, res *http.Response, err error) {
if url == "" {
return nil, nil, errors.New("forbidden to download - check sharing permission")
}
@@ -2507,6 +2511,7 @@ func (o *baseObject) httpResponse(url, method string, options []fs.OpenOption) (
if err != nil {
return req, nil, err
}
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
fs.OpenOptionAddHTTPHeaders(req.Header, options)
if o.bytes == 0 {
// Don't supply range requests for 0 length objects as they always fail
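
The req.WithContext(ctx) line recurs throughout this change set: it ties the request's lifetime to the caller's context so cancellation and timeouts propagate into the HTTP transport. As the comment notes, Go 1.13 can do the same in one step with http.NewRequestWithContext. A minimal sketch of both forms:

    package main

    import (
        "context"
        "net/http"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Pre-1.13 style, as used in this change set:
        req, _ := http.NewRequest("GET", "https://example.com", nil)
        req = req.WithContext(ctx)

        // Go 1.13+ equivalent:
        req2, _ := http.NewRequestWithContext(ctx, "GET", "https://example.com", nil)

        _, _ = http.DefaultClient.Do(req)  // cancelled if ctx expires
        _, _ = http.DefaultClient.Do(req2) // ditto
    }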
@@ -2577,8 +2582,8 @@ func isGoogleError(err error, what string) bool {
}
// open a url for reading
func (o *baseObject) open(url string, options ...fs.OpenOption) (in io.ReadCloser, err error) {
_, res, err := o.httpResponse(url, "GET", options)
func (o *baseObject) open(ctx context.Context, url string, options ...fs.OpenOption) (in io.ReadCloser, err error) {
_, res, err := o.httpResponse(ctx, url, "GET", options)
if err != nil {
if isGoogleError(err, "cannotDownloadAbusiveFile") {
if o.fs.opt.AcknowledgeAbuse {
@@ -2589,7 +2594,7 @@ func (o *baseObject) open(url string, options ...fs.OpenOption) (in io.ReadClose
url += "?"
}
url += "acknowledgeAbuse=true"
_, res, err = o.httpResponse(url, "GET", options)
_, res, err = o.httpResponse(ctx, url, "GET", options)
} else {
err = errors.Wrap(err, "Use the --drive-acknowledge-abuse flag to download this file")
}
@@ -2618,7 +2623,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
o.v2Download = false
}
}
return o.baseObject.open(o.url, options...)
return o.baseObject.open(ctx, o.url, options...)
}
func (o *documentObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
// Update the size with what we are reading as it can change from
@@ -2643,7 +2648,7 @@ func (o *documentObject) Open(ctx context.Context, options ...fs.OpenOption) (in
if offset != 0 {
return nil, errors.New("partial downloads are not supported while exporting Google Documents")
}
in, err = o.baseObject.open(o.url, options...)
in, err = o.baseObject.open(ctx, o.url, options...)
if in != nil {
in = &openDocumentFile{o: o, in: in}
}
@@ -2678,7 +2683,7 @@ func (o *linkObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.
return ioutil.NopCloser(bytes.NewReader(data)), nil
}
func (o *baseObject) update(updateInfo *drive.File, uploadMimeType string, in io.Reader,
func (o *baseObject) update(ctx context.Context, updateInfo *drive.File, uploadMimeType string, in io.Reader,
src fs.ObjectInfo) (info *drive.File, err error) {
// Make the API request to upload metadata and file data.
size := src.Size()
@@ -2696,7 +2701,7 @@ func (o *baseObject) update(updateInfo *drive.File, uploadMimeType string, in io
return
}
// Upload the file in chunks
return o.fs.Upload(in, size, uploadMimeType, o.id, o.remote, updateInfo)
return o.fs.Upload(ctx, in, size, uploadMimeType, o.id, o.remote, updateInfo)
}
// Update the already existing object
@@ -2710,7 +2715,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
MimeType: srcMimeType,
ModifiedTime: src.ModTime(ctx).Format(timeFormatOut),
}
info, err := o.baseObject.update(updateInfo, srcMimeType, in, src)
info, err := o.baseObject.update(ctx, updateInfo, srcMimeType, in, src)
if err != nil {
return err
}
@@ -2747,7 +2752,7 @@ func (o *documentObject) Update(ctx context.Context, in io.Reader, src fs.Object
}
updateInfo.MimeType = importMimeType
info, err := o.baseObject.update(updateInfo, srcMimeType, in, src)
info, err := o.baseObject.update(ctx, updateInfo, srcMimeType, in, src)
if err != nil {
return err
}

View File

@@ -11,6 +11,7 @@
package drive
import (
"context"
"encoding/json"
"fmt"
"io"
@@ -50,7 +51,7 @@ type resumableUpload struct {
}
// Upload the io.Reader in of size bytes with contentType and info
func (f *Fs) Upload(in io.Reader, size int64, contentType, fileID, remote string, info *drive.File) (*drive.File, error) {
func (f *Fs) Upload(ctx context.Context, in io.Reader, size int64, contentType, fileID, remote string, info *drive.File) (*drive.File, error) {
params := url.Values{
"alt": {"json"},
"uploadType": {"resumable"},
@@ -81,6 +82,7 @@ func (f *Fs) Upload(in io.Reader, size int64, contentType, fileID, remote string
if err != nil {
return false, err
}
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
googleapi.Expand(req.URL, map[string]string{
"fileId": fileID,
})
@@ -106,12 +108,13 @@ func (f *Fs) Upload(in io.Reader, size int64, contentType, fileID, remote string
MediaType: contentType,
ContentLength: size,
}
return rx.Upload()
return rx.Upload(ctx)
}
// Make an http.Request for the range passed in
func (rx *resumableUpload) makeRequest(start int64, body io.ReadSeeker, reqSize int64) *http.Request {
func (rx *resumableUpload) makeRequest(ctx context.Context, start int64, body io.ReadSeeker, reqSize int64) *http.Request {
req, _ := http.NewRequest("POST", rx.URI, body)
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
req.ContentLength = reqSize
if reqSize != 0 {
req.Header.Set("Content-Range", fmt.Sprintf("bytes %v-%v/%v", start, start+reqSize-1, rx.ContentLength))
@@ -129,8 +132,8 @@ var rangeRE = regexp.MustCompile(`^0\-(\d+)$`)
// Query drive for the amount transferred so far
//
// If error is nil, then start should be valid
func (rx *resumableUpload) transferStatus() (start int64, err error) {
req := rx.makeRequest(0, nil, 0)
func (rx *resumableUpload) transferStatus(ctx context.Context) (start int64, err error) {
req := rx.makeRequest(ctx, 0, nil, 0)
res, err := rx.f.client.Do(req)
if err != nil {
return 0, err
@@ -157,9 +160,9 @@ func (rx *resumableUpload) transferStatus() (start int64, err error) {
}
// Transfer a chunk - caller must call googleapi.CloseBody(res) if err == nil || res != nil
func (rx *resumableUpload) transferChunk(start int64, chunk io.ReadSeeker, chunkSize int64) (int, error) {
func (rx *resumableUpload) transferChunk(ctx context.Context, start int64, chunk io.ReadSeeker, chunkSize int64) (int, error) {
_, _ = chunk.Seek(0, io.SeekStart)
req := rx.makeRequest(start, chunk, chunkSize)
req := rx.makeRequest(ctx, start, chunk, chunkSize)
res, err := rx.f.client.Do(req)
if err != nil {
return 599, err
@@ -192,7 +195,7 @@ func (rx *resumableUpload) transferChunk(start int64, chunk io.ReadSeeker, chunk
// Upload uploads the chunks from the input
// It retries each chunk using the pacer and --low-level-retries
func (rx *resumableUpload) Upload() (*drive.File, error) {
func (rx *resumableUpload) Upload(ctx context.Context) (*drive.File, error) {
start := int64(0)
var StatusCode int
var err error
@@ -207,7 +210,7 @@ func (rx *resumableUpload) Upload() (*drive.File, error) {
// Transfer the chunk
err = rx.f.pacer.Call(func() (bool, error) {
fs.Debugf(rx.remote, "Sending chunk %d length %d", start, reqSize)
StatusCode, err = rx.transferChunk(start, chunk, reqSize)
StatusCode, err = rx.transferChunk(ctx, start, chunk, reqSize)
again, err := shouldRetry(err)
if StatusCode == statusResumeIncomplete || StatusCode == http.StatusCreated || StatusCode == http.StatusOK {
again = false

View File

@@ -39,11 +39,13 @@ import (
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/team"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/users"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/dropbox/dbhash"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/oauthutil"
@@ -52,6 +54,8 @@ import (
"golang.org/x/oauth2"
)
const enc = encodings.Dropbox
// Constants
const (
rcloneClientID = "5jcck7diasz0rqy"
@@ -102,10 +106,14 @@ var (
// A regexp matching path names for files Dropbox ignores
// See https://www.dropbox.com/en/help/145 - Ignored files
ignoredFiles = regexp.MustCompile(`(?i)(^|/)(desktop\.ini|thumbs\.db|\.ds_store|icon\r|\.dropbox|\.dropbox.attr)$`)
// DbHashType is the hash.Type for Dropbox
DbHashType hash.Type
)
// Register with Fs
func init() {
DbHashType = hash.RegisterHash("Dropbox", 64, dbhash.New)
fs.Register(&fs.RegInfo{
Name: "dropbox",
Description: "Dropbox",
@@ -372,14 +380,15 @@ func (f *Fs) setRoot(root string) {
// getMetadata gets the metadata for a file or directory
func (f *Fs) getMetadata(objPath string) (entry files.IsMetadata, notFound bool, err error) {
err = f.pacer.Call(func() (bool, error) {
entry, err = f.srv.GetMetadata(&files.GetMetadataArg{Path: objPath})
entry, err = f.srv.GetMetadata(&files.GetMetadataArg{
Path: enc.FromStandardPath(objPath),
})
return shouldRetry(err)
})
if err != nil {
switch e := err.(type) {
case files.GetMetadataAPIError:
switch e.EndpointError.Path.Tag {
case files.LookupErrorNotFound:
if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.LookupErrorNotFound {
notFound = true
err = nil
}
@@ -466,7 +475,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
for {
if !started {
arg := files.ListFolderArg{
Path: root,
Path: enc.FromStandardPath(root),
Recursive: false,
}
if root == "/" {
@@ -479,8 +488,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if err != nil {
switch e := err.(type) {
case files.ListFolderAPIError:
switch e.EndpointError.Path.Tag {
case files.LookupErrorNotFound:
if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.LookupErrorNotFound {
err = fs.ErrorDirNotFound
}
}
@@ -517,7 +525,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// Only the last element is reliably cased in PathDisplay
entryPath := metadata.PathDisplay
leaf := path.Base(entryPath)
leaf := enc.ToStandardName(path.Base(entryPath))
remote := path.Join(dir, leaf)
if folderInfo != nil {
d := fs.NewDir(remote, time.Now())
@@ -575,7 +583,7 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
// create it
arg2 := files.CreateFolderArg{
Path: root,
Path: enc.FromStandardPath(root),
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.srv.CreateFolderV2(&arg2)
@@ -601,6 +609,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return errors.Wrap(err, "Rmdir")
}
root = enc.FromStandardPath(root)
// check directory empty
arg := files.ListFolderArg{
Path: root,
@@ -657,9 +666,12 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
// Copy
arg := files.RelocationArg{}
arg.FromPath = srcObj.remotePath()
arg.ToPath = dstObj.remotePath()
arg := files.RelocationArg{
RelocationPath: files.RelocationPath{
FromPath: enc.FromStandardPath(srcObj.remotePath()),
ToPath: enc.FromStandardPath(dstObj.remotePath()),
},
}
var err error
var result *files.RelocationResult
err = f.pacer.Call(func() (bool, error) {
@@ -691,7 +703,9 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
func (f *Fs) Purge(ctx context.Context) (err error) {
// Let dropbox delete the filesystem tree
err = f.pacer.Call(func() (bool, error) {
_, err = f.srv.DeleteV2(&files.DeleteArg{Path: f.slashRoot})
_, err = f.srv.DeleteV2(&files.DeleteArg{
Path: enc.FromStandardPath(f.slashRoot),
})
return shouldRetry(err)
})
return err
@@ -720,9 +734,12 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
// Do the move
arg := files.RelocationArg{}
arg.FromPath = srcObj.remotePath()
arg.ToPath = dstObj.remotePath()
arg := files.RelocationArg{
RelocationPath: files.RelocationPath{
FromPath: enc.FromStandardPath(srcObj.remotePath()),
ToPath: enc.FromStandardPath(dstObj.remotePath()),
},
}
var err error
var result *files.RelocationResult
err = f.pacer.Call(func() (bool, error) {
@@ -747,7 +764,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err error) {
absPath := "/" + path.Join(f.Root(), remote)
absPath := enc.FromStandardPath(path.Join(f.slashRoot, remote))
fs.Debugf(f, "attempting to share '%s' (absolute path: %s)", remote, absPath)
createArg := sharing.CreateSharedLinkWithSettingsArg{
Path: absPath,
@@ -758,7 +775,8 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
return shouldRetry(err)
})
if err != nil && strings.Contains(err.Error(), sharing.CreateSharedLinkWithSettingsErrorSharedLinkAlreadyExists) {
if err != nil && strings.Contains(err.Error(),
sharing.CreateSharedLinkWithSettingsErrorSharedLinkAlreadyExists) {
fs.Debugf(absPath, "has a public link already, attempting to retrieve it")
listArg := sharing.ListSharedLinksArg{
Path: absPath,
@@ -820,9 +838,12 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
// ...apparently not necessary
// Do the move
arg := files.RelocationArg{}
arg.FromPath = srcPath
arg.ToPath = dstPath
arg := files.RelocationArg{
RelocationPath: files.RelocationPath{
FromPath: enc.FromStandardPath(srcPath),
ToPath: enc.FromStandardPath(dstPath),
},
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.srv.MoveV2(&arg)
return shouldRetry(err)
@@ -863,7 +884,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.Dropbox)
return hash.Set(DbHashType)
}
// ------------------------------------------------------------
@@ -888,7 +909,7 @@ func (o *Object) Remote() string {
// Hash returns the dropbox special hash
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.Dropbox {
if t != DbHashType {
return "", hash.ErrUnsupported
}
err := o.readMetaData()
@@ -977,7 +998,10 @@ func (o *Object) Storable() bool {
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
fs.FixRangeOption(options, o.bytes)
headers := fs.OpenOptionHeaders(options)
arg := files.DownloadArg{Path: o.remotePath(), ExtraHeaders: headers}
arg := files.DownloadArg{
Path: enc.FromStandardPath(o.remotePath()),
ExtraHeaders: headers,
}
err = o.fs.pacer.Call(func() (bool, error) {
_, in, err = o.fs.srv.Download(&arg)
return shouldRetry(err)
@@ -986,7 +1010,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
switch e := err.(type) {
case files.DownloadAPIError:
// Don't attempt to retry copyright violation errors
if e.EndpointError.Path.Tag == files.LookupErrorRestrictedContent {
if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.LookupErrorRestrictedContent {
return nil, fserrors.NoRetryError(err)
}
}
@@ -1104,10 +1128,9 @@ func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
remote := o.remotePath()
if ignoredFiles.MatchString(remote) {
fs.Logf(o, "File name disallowed - not uploading")
return nil
return fserrors.NoRetryError(errors.Errorf("file name %q is disallowed - not uploading", path.Base(remote)))
}
commitInfo := files.NewCommitInfo(o.remotePath())
commitInfo := files.NewCommitInfo(enc.FromStandardPath(o.remotePath()))
commitInfo.Mode.Tag = "overwrite"
// The Dropbox API only accepts timestamps in UTC with second precision.
commitInfo.ClientModified = src.ModTime(ctx).UTC().Round(time.Second)
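
The rounding here is forced by the API contract stated in the comment: Dropbox only accepts timestamps in UTC with second precision, so any sub-second detail has to be dropped before upload. A quick standard-library illustration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        t := time.Date(2019, 10, 13, 10, 43, 46, 789000000, time.Local)
        // What this change sends to Dropbox: UTC, rounded to seconds.
        fmt.Println(t.UTC().Round(time.Second).Format(time.RFC3339))
        // e.g. 2019-10-13T09:43:47Z - the 0.789s is rounded away
    }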
@@ -1132,7 +1155,9 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Remove an object
func (o *Object) Remove(ctx context.Context) (err error) {
err = o.fs.pacer.Call(func() (bool, error) {
_, err = o.fs.srv.DeleteV2(&files.DeleteArg{Path: o.remotePath()})
_, err = o.fs.srv.DeleteV2(&files.DeleteArg{
Path: enc.FromStandardPath(o.remotePath()),
})
return shouldRetry(err)
})
return err

View File

@@ -32,7 +32,7 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
var isAlphaNumeric = regexp.MustCompile(`^[a-zA-Z0-9]+$`).MatchString
func (f *Fs) getDownloadToken(url string) (*GetTokenResponse, error) {
func (f *Fs) getDownloadToken(ctx context.Context, url string) (*GetTokenResponse, error) {
request := DownloadRequest{
URL: url,
Single: 1,
@@ -44,7 +44,7 @@ func (f *Fs) getDownloadToken(url string) (*GetTokenResponse, error) {
var token GetTokenResponse
err := f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, &request, &token)
resp, err := f.rest.CallJSON(ctx, &opts, &request, &token)
return shouldRetry(resp, err)
})
if err != nil {
@@ -72,7 +72,7 @@ func (f *Fs) listSharedFiles(ctx context.Context, id string) (entries fs.DirEntr
var sharedFiles SharedFolderResponse
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, nil, &sharedFiles)
resp, err := f.rest.CallJSON(ctx, &opts, nil, &sharedFiles)
return shouldRetry(resp, err)
})
if err != nil {
@@ -88,7 +88,7 @@ func (f *Fs) listSharedFiles(ctx context.Context, id string) (entries fs.DirEntr
return entries, nil
}
func (f *Fs) listFiles(directoryID int) (filesList *FilesList, err error) {
func (f *Fs) listFiles(ctx context.Context, directoryID int) (filesList *FilesList, err error) {
// fs.Debugf(f, "Requesting files for dir `%s`", directoryID)
request := ListFilesRequest{
FolderID: directoryID,
@@ -101,17 +101,21 @@ func (f *Fs) listFiles(directoryID int) (filesList *FilesList, err error) {
filesList = &FilesList{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, &request, filesList)
resp, err := f.rest.CallJSON(ctx, &opts, &request, filesList)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't list files")
}
for i := range filesList.Items {
item := &filesList.Items[i]
item.Filename = enc.ToStandardName(item.Filename)
}
return filesList, nil
}
func (f *Fs) listFolders(directoryID int) (foldersList *FoldersList, err error) {
func (f *Fs) listFolders(ctx context.Context, directoryID int) (foldersList *FoldersList, err error) {
// fs.Debugf(f, "Requesting folders for id `%s`", directoryID)
request := ListFolderRequest{
@@ -125,12 +129,17 @@ func (f *Fs) listFolders(directoryID int) (foldersList *FoldersList, err error)
foldersList = &FoldersList{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, &request, foldersList)
resp, err := f.rest.CallJSON(ctx, &opts, &request, foldersList)
return shouldRetry(resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't list folders")
}
foldersList.Name = enc.ToStandardName(foldersList.Name)
for i := range foldersList.SubFolders {
folder := &foldersList.SubFolders[i]
folder.Name = enc.ToStandardName(folder.Name)
}
// fs.Debugf(f, "Got FoldersList for id `%s`", directoryID)
@@ -153,12 +162,12 @@ func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, er
return nil, err
}
files, err := f.listFiles(folderID)
files, err := f.listFiles(ctx, folderID)
if err != nil {
return nil, err
}
folders, err := f.listFolders(folderID)
folders, err := f.listFolders(ctx, folderID)
if err != nil {
return nil, err
}
@@ -166,7 +175,6 @@ func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, er
entries = make([]fs.DirEntry, len(files.Items)+len(folders.SubFolders))
for i, item := range files.Items {
item.Filename = restoreReservedChars(item.Filename)
entries[i] = f.newObjectFromFile(ctx, dir, item)
}
@@ -176,7 +184,6 @@ func (f *Fs) listDir(ctx context.Context, dir string) (entries fs.DirEntries, er
return nil, err
}
folder.Name = restoreReservedChars(folder.Name)
fullPath := getRemote(dir, folder.Name)
folderID := strconv.Itoa(folder.ID)
@@ -205,8 +212,8 @@ func getRemote(dir, fileName string) string {
return dir + "/" + fileName
}
func (f *Fs) makeFolder(leaf string, folderID int) (response *MakeFolderResponse, err error) {
name := replaceReservedChars(leaf)
func (f *Fs) makeFolder(ctx context.Context, leaf string, folderID int) (response *MakeFolderResponse, err error) {
name := enc.FromStandardName(leaf)
// fs.Debugf(f, "Creating folder `%s` in id `%s`", name, directoryID)
request := MakeFolderRequest{
@@ -221,7 +228,7 @@ func (f *Fs) makeFolder(leaf string, folderID int) (response *MakeFolderResponse
response = &MakeFolderResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, &request, response)
resp, err := f.rest.CallJSON(ctx, &opts, &request, response)
return shouldRetry(resp, err)
})
if err != nil {
@@ -233,7 +240,7 @@ func (f *Fs) makeFolder(leaf string, folderID int) (response *MakeFolderResponse
return response, err
}
func (f *Fs) removeFolder(name string, folderID int) (response *GenericOKResponse, err error) {
func (f *Fs) removeFolder(ctx context.Context, name string, folderID int) (response *GenericOKResponse, err error) {
// fs.Debugf(f, "Removing folder with id `%s`", directoryID)
request := &RemoveFolderRequest{
@@ -248,7 +255,7 @@ func (f *Fs) removeFolder(name string, folderID int) (response *GenericOKRespons
response = &GenericOKResponse{}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.rest.CallJSON(&opts, request, response)
resp, err = f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(resp, err)
})
if err != nil {
@@ -263,7 +270,7 @@ func (f *Fs) removeFolder(name string, folderID int) (response *GenericOKRespons
return response, nil
}
func (f *Fs) deleteFile(url string) (response *GenericOKResponse, err error) {
func (f *Fs) deleteFile(ctx context.Context, url string) (response *GenericOKResponse, err error) {
request := &RemoveFileRequest{
Files: []RmFile{
{url},
@@ -277,7 +284,7 @@ func (f *Fs) deleteFile(url string) (response *GenericOKResponse, err error) {
response = &GenericOKResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, request, response)
resp, err := f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(resp, err)
})
@@ -290,7 +297,7 @@ func (f *Fs) deleteFile(url string) (response *GenericOKResponse, err error) {
return response, nil
}
func (f *Fs) getUploadNode() (response *GetUploadNodeResponse, err error) {
func (f *Fs) getUploadNode(ctx context.Context) (response *GetUploadNodeResponse, err error) {
// fs.Debugf(f, "Requesting Upload node")
opts := rest.Opts{
@@ -301,7 +308,7 @@ func (f *Fs) getUploadNode() (response *GetUploadNodeResponse, err error) {
response = &GetUploadNodeResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, nil, response)
resp, err := f.rest.CallJSON(ctx, &opts, nil, response)
return shouldRetry(resp, err)
})
if err != nil {
@@ -313,10 +320,10 @@ func (f *Fs) getUploadNode() (response *GetUploadNodeResponse, err error) {
return response, err
}
func (f *Fs) uploadFile(in io.Reader, size int64, fileName, folderID, uploadID, node string) (response *http.Response, err error) {
func (f *Fs) uploadFile(ctx context.Context, in io.Reader, size int64, fileName, folderID, uploadID, node string) (response *http.Response, err error) {
// fs.Debugf(f, "Uploading File `%s`", fileName)
fileName = replaceReservedChars(fileName)
fileName = enc.FromStandardName(fileName)
if len(uploadID) > 10 || !isAlphaNumeric(uploadID) {
return nil, errors.New("Invalid UploadID")
@@ -343,7 +350,7 @@ func (f *Fs) uploadFile(in io.Reader, size int64, fileName, folderID, uploadID,
}
err = f.pacer.CallNoRetry(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, nil, nil)
resp, err := f.rest.CallJSON(ctx, &opts, nil, nil)
return shouldRetry(resp, err)
})
@@ -356,7 +363,7 @@ func (f *Fs) uploadFile(in io.Reader, size int64, fileName, folderID, uploadID,
return response, err
}
func (f *Fs) endUpload(uploadID string, nodeurl string) (response *EndFileUploadResponse, err error) {
func (f *Fs) endUpload(ctx context.Context, uploadID string, nodeurl string) (response *EndFileUploadResponse, err error) {
// fs.Debugf(f, "Ending File Upload `%s`", uploadID)
if len(uploadID) > 10 || !isAlphaNumeric(uploadID) {
@@ -377,7 +384,7 @@ func (f *Fs) endUpload(uploadID string, nodeurl string) (response *EndFileUpload
response = &EndFileUploadResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(&opts, nil, response)
resp, err := f.rest.CallJSON(ctx, &opts, nil, response)
return shouldRetry(resp, err)
})

View File

@@ -13,6 +13,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/dircache"
@@ -28,6 +29,8 @@ const (
decayConstant = 2 // bigger for slower decay, exponential
)
const enc = encodings.Fichier
func init() {
fs.Register(&fs.RegInfo{
Name: "fichier",
@@ -74,7 +77,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
if err != nil {
return "", false, err
}
folders, err := f.listFolders(folderID)
folders, err := f.listFolders(ctx, folderID)
if err != nil {
return "", false, err
}
@@ -95,7 +98,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
if err != nil {
return "", err
}
resp, err := f.makeFolder(leaf, folderID)
resp, err := f.makeFolder(ctx, leaf, folderID)
if err != nil {
return "", err
}
@@ -141,8 +144,7 @@ func (f *Fs) Features() *fs.Features {
//
// On Windows avoid single character remote names as they can be mixed
// up with drive letters.
func NewFs(name string, rootleaf string, config configmap.Mapper) (fs.Fs, error) {
root := replaceReservedChars(rootleaf)
func NewFs(name string, root string, config configmap.Mapper) (fs.Fs, error) {
opt := new(Options)
err := configstruct.Set(config, opt)
if err != nil {
@@ -251,7 +253,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
if err != nil {
return nil, err
}
files, err := f.listFiles(folderID)
files, err := f.listFiles(ctx, folderID)
if err != nil {
return nil, err
}
@@ -298,13 +300,13 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
// This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that.
func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size int64, options ...fs.OpenOption) (fs.Object, error) {
if size > int64(100E9) {
if size > int64(100e9) {
return nil, errors.New("File too big, cant upload")
} else if size == 0 {
return nil, fs.ErrorCantUploadEmptyFiles
}
nodeResponse, err := f.getUploadNode()
nodeResponse, err := f.getUploadNode(ctx)
if err != nil {
return nil, err
}
@@ -314,12 +316,12 @@ func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size
return nil, err
}
_, err = f.uploadFile(in, size, leaf, directoryID, nodeResponse.ID, nodeResponse.URL)
_, err = f.uploadFile(ctx, in, size, leaf, directoryID, nodeResponse.ID, nodeResponse.URL)
if err != nil {
return nil, err
}
fileUploadResponse, err := f.endUpload(nodeResponse.ID, nodeResponse.URL)
fileUploadResponse, err := f.endUpload(ctx, nodeResponse.ID, nodeResponse.URL)
if err != nil {
return nil, err
}
@@ -346,7 +348,7 @@ func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size
Date: time.Now().Format("2006-01-02 15:04:05"),
Filename: link.Filename,
Pass: 0,
Size: int(fileSize),
Size: fileSize,
URL: link.Download,
},
}, nil
@@ -393,7 +395,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return err
}
_, err = f.removeFolder(dir, folderID)
_, err = f.removeFolder(ctx, dir, folderID)
if err != nil {
return err
}

View File

@@ -43,7 +43,7 @@ func (o *Object) ModTime(ctx context.Context) time.Time {
// Size returns the size of the file
func (o *Object) Size() int64 {
return int64(o.file.Size)
return o.file.Size
}
// Fs returns read only access to the Fs that this object is part of
@@ -74,8 +74,8 @@ func (o *Object) SetModTime(context.Context, time.Time) error {
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) {
fs.FixRangeOption(options, int64(o.file.Size))
downloadToken, err := o.fs.getDownloadToken(o.file.URL)
fs.FixRangeOption(options, o.file.Size)
downloadToken, err := o.fs.getDownloadToken(ctx, o.file.URL)
if err != nil {
return nil, err
@@ -89,7 +89,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadClo
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.rest.Call(&opts)
resp, err = o.fs.rest.Call(ctx, &opts)
return shouldRetry(resp, err)
})
@@ -131,7 +131,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
func (o *Object) Remove(ctx context.Context) error {
// fs.Debugf(f, "Removing file `%s` with url `%s`", o.file.Filename, o.file.URL)
_, err := o.fs.deleteFile(o.file.URL)
_, err := o.fs.deleteFile(ctx, o.file.URL)
if err != nil {
return err

View File

@@ -1,71 +0,0 @@
/*
Translate file names for 1fichier
1Fichier reserved characters
The following characters are 1Fichier reserved characters, and can't
be used in 1Fichier folder and file names.
*/
package fichier
import (
"regexp"
"strings"
)
// charMap holds replacements for characters
//
// 1Fichier has a restricted set of characters compared to other cloud
// storage systems, so we map these to the FULLWIDTH unicode
// equivalents
//
// http://unicode-search.net/unicode-namesearch.pl?term=SOLIDUS
var (
charMap = map[rune]rune{
'\\': '＼', // FULLWIDTH REVERSE SOLIDUS
'<': '＜', // FULLWIDTH LESS-THAN SIGN
'>': '＞', // FULLWIDTH GREATER-THAN SIGN
'"': '＂', // FULLWIDTH QUOTATION MARK - not on the list but seems to be reserved
'\'': '＇', // FULLWIDTH APOSTROPHE
'$': '＄', // FULLWIDTH DOLLAR SIGN
'`': '｀', // FULLWIDTH GRAVE ACCENT
' ': '␠', // SYMBOL FOR SPACE
}
invCharMap map[rune]rune
fixStartingWithSpace = regexp.MustCompile(`(/|^) `)
)
func init() {
// Create inverse charMap
invCharMap = make(map[rune]rune, len(charMap))
for k, v := range charMap {
invCharMap[v] = k
}
}
// replaceReservedChars takes a path and substitutes any reserved
// characters in it
func replaceReservedChars(in string) string {
// file names can't start with space either
in = fixStartingWithSpace.ReplaceAllString(in, "$1"+string(charMap[' ']))
// Replace reserved characters
return strings.Map(func(c rune) rune {
if replacement, ok := charMap[c]; ok && c != ' ' {
return replacement
}
return c
}, in)
}
// restoreReservedChars takes a path and undoes any substitutions
// made by replaceReservedChars
func restoreReservedChars(in string) string {
return strings.Map(func(c rune) rune {
if replacement, ok := invCharMap[c]; ok {
return replacement
}
return c
}, in)
}
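
These removed helpers are the hand-rolled predecessor of the shared encodings package: reserved characters map to visually similar FULLWIDTH code points on the way to 1Fichier and back again on the way out, so names survive a round trip. A self-contained sketch of the same round-trip idea with a cut-down map:

    package main

    import (
        "fmt"
        "strings"
    )

    var charMap = map[rune]rune{
        '<':  '＜', // FULLWIDTH LESS-THAN SIGN
        '>':  '＞', // FULLWIDTH GREATER-THAN SIGN
        '$':  '＄', // FULLWIDTH DOLLAR SIGN
        '\\': '＼', // FULLWIDTH REVERSE SOLIDUS
    }

    // invCharMap is the inverse table, built once at startup.
    var invCharMap = func() map[rune]rune {
        m := make(map[rune]rune, len(charMap))
        for k, v := range charMap {
            m[v] = k
        }
        return m
    }()

    func mapRunes(in string, table map[rune]rune) string {
        return strings.Map(func(c rune) rune {
            if r, ok := table[c]; ok {
                return r
            }
            return c
        }, in)
    }

    func main() {
        name := `price<$5>.txt`
        encoded := mapRunes(name, charMap)
        decoded := mapRunes(encoded, invCharMap)
        fmt.Println(encoded, decoded == name) // price＜＄5＞.txt true
    }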

View File

@@ -1,24 +0,0 @@
package fichier
import "testing"
func TestReplace(t *testing.T) {
for _, test := range []struct {
in string
out string
}{
{"", ""},
{"abc 123", "abc 123"},
{"\"'<>/\\$`", `/`},
{" leading space", "␠leading space"},
} {
got := replaceReservedChars(test.in)
if got != test.out {
t.Errorf("replaceReservedChars(%q) want %q got %q", test.in, test.out, got)
}
got2 := restoreReservedChars(got)
if got2 != test.in {
t.Errorf("restoreReservedChars(%q) want %q got %q", got, test.in, got2)
}
}
}

View File

@@ -69,7 +69,7 @@ type SharedFolderResponse []SharedFile
type SharedFile struct {
Filename string `json:"filename"`
Link string `json:"link"`
Size int `json:"size"`
Size int64 `json:"size"`
}
// EndFileUploadResponse is the response structure of the corresponding request
@@ -93,7 +93,7 @@ type File struct {
Date string `json:"date"`
Filename string `json:"filename"`
Pass int `json:"pass"`
Size int `json:"size"`
Size int64 `json:"size"`
URL string `json:"url"`
}
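
Widening Size from int to int64 matters because Go's int is only 32 bits on 32-bit platforms, so any JSON size above 2 GiB (2^31-1 bytes) would overflow there; int64 decodes safely everywhere. A tiny sketch:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type File struct {
        Size int64 `json:"size"` // int would overflow >2GiB on 32-bit builds
    }

    func main() {
        var f File
        _ = json.Unmarshal([]byte(`{"size": 5368709120}`), &f) // a 5 GiB file
        fmt.Println(f.Size)
    }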

View File

@@ -17,11 +17,14 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
)
const enc = encodings.FTP
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -62,6 +65,11 @@ func init() {
Help: "Do not verify the TLS certificate of the server",
Default: false,
Advanced: true,
}, {
Name: "disable_epsv",
Help: "Disable using EPSV even if server advertises support",
Default: false,
Advanced: true,
},
},
})
@@ -76,6 +84,7 @@ type Options struct {
TLS bool `config:"tls"`
Concurrency int `config:"concurrency"`
SkipVerifyTLSCert bool `config:"no_check_certificate"`
DisableEPSV bool `config:"disable_epsv"`
}
// Fs represents a remote FTP server
@@ -141,6 +150,9 @@ func (f *Fs) ftpConnection() (*ftp.ServerConn, error) {
}
ftpConfig = append(ftpConfig, ftp.DialWithTLS(tlsConfig))
}
if f.opt.DisableEPSV {
ftpConfig = append(ftpConfig, ftp.DialWithDisabledEPSV(true))
}
c, err := ftp.Dial(f.dialAddr, ftpConfig...)
if err != nil {
fs.Errorf(f, "Error while Dialing %s: %s", f.dialAddr, err)
@@ -295,10 +307,37 @@ func translateErrorDir(err error) error {
return err
}
// entryToStandard converts an incoming ftp.Entry to Standard encoding
func entryToStandard(entry *ftp.Entry) {
// Skip . and .. as we don't want these encoded
if entry.Name == "." || entry.Name == ".." {
return
}
entry.Name = enc.ToStandardName(entry.Name)
entry.Target = enc.ToStandardPath(entry.Target)
}
// dirFromStandardPath returns dir in encoded form.
func dirFromStandardPath(dir string) string {
// Skip . and .. as we don't want these encoded
if dir == "." || dir == ".." {
return dir
}
return enc.FromStandardPath(dir)
}
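
Both helpers deliberately pass "." and ".." through untouched: the server treats them as literal directory references, so encoding them would change which directory a command addresses. A minimal sketch of the guard, with encodePath standing in for enc.FromStandardPath (hypothetical here):

    package main

    import (
        "fmt"
        "strings"
    )

    // encodePath stands in for enc.FromStandardPath: as an example it
    // encodes a leading dot, one substitution an encoder might make.
    func encodePath(p string) string {
        if strings.HasPrefix(p, ".") {
            return "．" + p[1:] // FULLWIDTH FULL STOP
        }
        return p
    }

    // dirToWire mirrors dirFromStandardPath above.
    func dirToWire(dir string) string {
        if dir == "." || dir == ".." {
            return dir // never encode literal directory references
        }
        return encodePath(dir)
    }

    func main() {
        fmt.Println(dirToWire(".."))      // ..
        fmt.Println(dirToWire(".hidden")) // ．hidden
    }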
// findItem finds a directory entry for the name in its parent directory
func (f *Fs) findItem(remote string) (entry *ftp.Entry, err error) {
// defer fs.Trace(remote, "")("o=%v, err=%v", &o, &err)
fullPath := path.Join(f.root, remote)
if fullPath == "" || fullPath == "." || fullPath == "/" {
// if root, assume exists and synthesize an entry
return &ftp.Entry{
Name: "",
Type: ftp.EntryTypeFolder,
Time: time.Now(),
}, nil
}
dir := path.Dir(fullPath)
base := path.Base(fullPath)
@@ -306,12 +345,13 @@ func (f *Fs) findItem(remote string) (entry *ftp.Entry, err error) {
if err != nil {
return nil, errors.Wrap(err, "findItem")
}
files, err := c.List(dir)
files, err := c.List(dirFromStandardPath(dir))
f.putFtpConnection(&c, err)
if err != nil {
return nil, translateErrorFile(err)
}
for _, file := range files {
entryToStandard(file)
if file.Name == base {
return file, nil
}
@@ -366,7 +406,7 @@ func (f *Fs) dirExists(remote string) (exists bool, err error) {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
// defer fs.Trace(dir, "curlevel=%d", curlevel)("")
// defer log.Trace(dir, "dir=%q", dir)("entries=%v, err=%v", &entries, &err)
c, err := f.getFtpConnection()
if err != nil {
return nil, errors.Wrap(err, "list")
@@ -378,7 +418,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
resultchan := make(chan []*ftp.Entry, 1)
errchan := make(chan error, 1)
go func() {
result, err := c.List(path.Join(f.root, dir))
result, err := c.List(dirFromStandardPath(path.Join(f.root, dir)))
f.putFtpConnection(&c, err)
if err != nil {
errchan <- err
@@ -415,6 +455,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
for i := range files {
object := files[i]
entryToStandard(object)
newremote := path.Join(dir, object.Name)
switch object.Type {
case ftp.EntryTypeFolder:
@@ -484,19 +525,21 @@ func (f *Fs) getInfo(remote string) (fi *FileInfo, err error) {
if err != nil {
return nil, errors.Wrap(err, "getInfo")
}
files, err := c.List(dir)
files, err := c.List(dirFromStandardPath(dir))
f.putFtpConnection(&c, err)
if err != nil {
return nil, translateErrorFile(err)
}
for i := range files {
if files[i].Name == base {
file := files[i]
entryToStandard(file)
if file.Name == base {
info := &FileInfo{
Name: remote,
Size: files[i].Size,
ModTime: files[i].Time,
IsDir: files[i].Type == ftp.EntryTypeFolder,
Size: file.Size,
ModTime: file.Time,
IsDir: file.Type == ftp.EntryTypeFolder,
}
return info, nil
}
@@ -506,6 +549,7 @@ func (f *Fs) getInfo(remote string) (fi *FileInfo, err error) {
// mkdir makes the directory and parents using unrooted paths
func (f *Fs) mkdir(abspath string) error {
abspath = path.Clean(abspath)
if abspath == "." || abspath == "/" {
return nil
}
@@ -527,7 +571,7 @@ func (f *Fs) mkdir(abspath string) error {
if connErr != nil {
return errors.Wrap(connErr, "mkdir")
}
err = c.MakeDir(abspath)
err = c.MakeDir(dirFromStandardPath(abspath))
f.putFtpConnection(&c, err)
switch errX := err.(type) {
case *textproto.Error:
@@ -563,7 +607,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
if err != nil {
return errors.Wrap(translateErrorFile(err), "Rmdir")
}
err = c.RemoveDir(path.Join(f.root, dir))
err = c.RemoveDir(dirFromStandardPath(path.Join(f.root, dir)))
f.putFtpConnection(&c, err)
return translateErrorDir(err)
}
@@ -584,8 +628,8 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, errors.Wrap(err, "Move")
}
err = c.Rename(
path.Join(srcObj.fs.root, srcObj.remote),
path.Join(f.root, remote),
enc.FromStandardPath(path.Join(srcObj.fs.root, srcObj.remote)),
enc.FromStandardPath(path.Join(f.root, remote)),
)
f.putFtpConnection(&c, err)
if err != nil {
@@ -638,8 +682,8 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return errors.Wrap(err, "DirMove")
}
err = c.Rename(
srcPath,
dstPath,
dirFromStandardPath(srcPath),
dirFromStandardPath(dstPath),
)
f.putFtpConnection(&c, err)
if err != nil {
@@ -765,7 +809,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.Read
if err != nil {
return nil, errors.Wrap(err, "open")
}
fd, err := c.RetrFrom(path, uint64(offset))
fd, err := c.RetrFrom(enc.FromStandardPath(path), uint64(offset))
if err != nil {
o.fs.putFtpConnection(&c, err)
return nil, errors.Wrap(err, "open")
@@ -800,7 +844,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err != nil {
return errors.Wrap(err, "Update")
}
err = c.Stor(path, in)
err = c.Stor(enc.FromStandardPath(path), in)
if err != nil {
_ = c.Quit() // toss this connection to avoid sync errors
remove()
@@ -830,7 +874,7 @@ func (o *Object) Remove(ctx context.Context) (err error) {
if err != nil {
return errors.Wrap(err, "Remove")
}
err = c.Delete(path)
err = c.Delete(enc.FromStandardPath(path))
o.fs.putFtpConnection(&c, err)
}
return err

View File

@@ -32,6 +32,7 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
@@ -68,6 +69,8 @@ var (
}
)
const enc = encodings.GoogleCloudStorage
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -349,7 +352,8 @@ func parsePath(path string) (root string) {
// split returns bucket and bucketPath from the rootRelativePath
// relative to f.root
func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
return bucket.Split(path.Join(f.root, rootRelativePath))
bucketName, bucketPath = bucket.Split(path.Join(f.root, rootRelativePath))
return enc.FromStandardName(bucketName), enc.FromStandardPath(bucketPath)
}
// split returns bucket and bucketPath from the object
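
bucket.Split separates the bucket name from the in-bucket path so each part can get its own encoding treatment (FromStandardName for the bucket, FromStandardPath for the rest). Its behaviour is essentially a first-slash split; a sketch of the semantics:

    package main

    import (
        "fmt"
        "strings"
    )

    // split mirrors what lib/bucket.Split does: everything before the
    // first "/" is the bucket, the remainder is the in-bucket path.
    func split(absPath string) (bucket, bucketPath string) {
        if i := strings.IndexRune(absPath, '/'); i >= 0 {
            return absPath[:i], absPath[i+1:]
        }
        return absPath, ""
    }

    func main() {
        fmt.Println(split("mybucket/dir/file.txt")) // mybucket dir/file.txt
        fmt.Println(split("mybucket"))              // mybucket
    }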
@@ -374,6 +378,7 @@ func (f *Fs) setRoot(root string) {
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
ctx := context.TODO()
var oAuthClient *http.Client
// Parse config into Options struct
@@ -437,8 +442,9 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if f.rootBucket != "" && f.rootDirectory != "" {
// Check to see if the object exists
encodedDirectory := enc.FromStandardPath(f.rootDirectory)
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Objects.Get(f.rootBucket, f.rootDirectory).Do()
_, err = f.svc.Objects.Get(f.rootBucket, encodedDirectory).Context(ctx).Do()
return shouldRetry(err)
})
if err == nil {
@@ -457,7 +463,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(remote string, info *storage.Object) (fs.Object, error) {
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *storage.Object) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
@@ -465,7 +471,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *storage.Object) (fs.Object,
if info != nil {
o.setMetaData(info)
} else {
err := o.readMetaData() // reads info and meta, returning an error
err := o.readMetaData(ctx) // reads info and meta, returning an error
if err != nil {
return nil, err
}
@@ -476,7 +482,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *storage.Object) (fs.Object,
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
return f.newObjectWithInfo(ctx, remote, nil)
}
// listFn is called from list to handle an object.
@@ -504,7 +510,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
for {
var objects *storage.Objects
err = f.pacer.Call(func() (bool, error) {
objects, err = list.Do()
objects, err = list.Context(ctx).Do()
return shouldRetry(err)
})
if err != nil {
@@ -521,6 +527,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
if !strings.HasSuffix(remote, "/") {
continue
}
remote = enc.ToStandardPath(remote)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", remote)
continue
@@ -536,11 +543,12 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
}
}
for _, object := range objects.Items {
if !strings.HasPrefix(object.Name, prefix) {
remote := enc.ToStandardPath(object.Name)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", object.Name)
continue
}
remote := object.Name[len(prefix):]
remote = remote[len(prefix):]
isDirectory := strings.HasSuffix(remote, "/")
if addBucket {
remote = path.Join(bucket, remote)
@@ -563,12 +571,12 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
}
// Convert a list item into a DirEntry
func (f *Fs) itemToDirEntry(remote string, object *storage.Object, isDirectory bool) (fs.DirEntry, error) {
func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *storage.Object, isDirectory bool) (fs.DirEntry, error) {
if isDirectory {
d := fs.NewDir(remote, time.Time{}).SetSize(int64(object.Size))
return d, nil
}
o, err := f.newObjectWithInfo(remote, object)
o, err := f.newObjectWithInfo(ctx, remote, object)
if err != nil {
return nil, err
}
@@ -579,7 +587,7 @@ func (f *Fs) itemToDirEntry(remote string, object *storage.Object, isDirectory b
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
// List the objects
err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, object *storage.Object, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
if err != nil {
return err
}
@@ -605,14 +613,14 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
for {
var buckets *storage.Buckets
err = f.pacer.Call(func() (bool, error) {
buckets, err = listBuckets.Do()
buckets, err = listBuckets.Context(ctx).Do()
return shouldRetry(err)
})
if err != nil {
return nil, err
}
for _, bucket := range buckets.Items {
d := fs.NewDir(bucket.Name, time.Time{})
d := fs.NewDir(enc.ToStandardName(bucket.Name), time.Time{})
entries = append(entries, d)
}
if buckets.NextPageToken == "" {
@@ -664,7 +672,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
list := walk.NewListRHelper(callback)
listR := func(bucket, directory, prefix string, addBucket bool) error {
return f.list(ctx, bucket, directory, prefix, addBucket, true, func(remote string, object *storage.Object, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
if err != nil {
return err
}
@@ -731,7 +739,7 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) (err error) {
// List something from the bucket to see if it exists. Doing it like this enables the use of a
// service account that only has the "Storage Object Admin" role. See #2193 for details.
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Objects.List(bucket).MaxResults(1).Do()
_, err = f.svc.Objects.List(bucket).MaxResults(1).Context(ctx).Do()
return shouldRetry(err)
})
if err == nil {
@@ -766,7 +774,7 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) (err error) {
if !f.opt.BucketPolicyOnly {
insertBucket.PredefinedAcl(f.opt.BucketACL)
}
_, err = insertBucket.Do()
_, err = insertBucket.Context(ctx).Do()
return shouldRetry(err)
})
}, nil)
@@ -783,7 +791,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
}
return f.cache.Remove(bucket, func() error {
return f.pacer.Call(func() (bool, error) {
err = f.svc.Buckets.Delete(bucket).Do()
err = f.svc.Buckets.Delete(bucket).Context(ctx).Do()
return shouldRetry(err)
})
})
@@ -828,7 +836,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
if !f.opt.BucketPolicyOnly {
copyObject.DestinationPredefinedAcl(f.opt.ObjectACL)
}
newObject, err = copyObject.Do()
newObject, err = copyObject.Context(ctx).Do()
return shouldRetry(err)
})
if err != nil {
@@ -912,10 +920,10 @@ func (o *Object) setMetaData(info *storage.Object) {
}
// readObjectInfo reads the definition for an object
func (o *Object) readObjectInfo() (object *storage.Object, err error) {
func (o *Object) readObjectInfo(ctx context.Context) (object *storage.Object, err error) {
bucket, bucketPath := o.split()
err = o.fs.pacer.Call(func() (bool, error) {
object, err = o.fs.svc.Objects.Get(bucket, bucketPath).Do()
object, err = o.fs.svc.Objects.Get(bucket, bucketPath).Context(ctx).Do()
return shouldRetry(err)
})
if err != nil {
@@ -932,11 +940,11 @@ func (o *Object) readObjectInfo() (object *storage.Object, err error) {
// readMetaData gets the metadata if it hasn't already been fetched
//
// it also sets the info
func (o *Object) readMetaData() (err error) {
func (o *Object) readMetaData(ctx context.Context) (err error) {
if !o.modTime.IsZero() {
return nil
}
object, err := o.readObjectInfo()
object, err := o.readObjectInfo(ctx)
if err != nil {
return err
}
@@ -949,7 +957,7 @@ func (o *Object) readMetaData() (err error) {
// It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) time.Time {
err := o.readMetaData()
err := o.readMetaData(ctx)
if err != nil {
// fs.Logf(o, "Failed to read metadata: %v", err)
return time.Now()
@@ -967,7 +975,7 @@ func metadataFromModTime(modTime time.Time) map[string]string {
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error) {
// read the complete existing object first
object, err := o.readObjectInfo()
object, err := o.readObjectInfo(ctx)
if err != nil {
return err
}
@@ -986,7 +994,7 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error)
if !o.fs.opt.BucketPolicyOnly {
copyObject.DestinationPredefinedAcl(o.fs.opt.ObjectACL)
}
newObject, err = copyObject.Do()
newObject, err = copyObject.Context(ctx).Do()
return shouldRetry(err)
})
if err != nil {
@@ -1007,6 +1015,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
if err != nil {
return nil, err
}
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
fs.FixRangeOption(options, o.bytes)
fs.OpenOptionAddHTTPHeaders(req.Header, options)
var res *http.Response
@@ -1054,7 +1063,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if !o.fs.opt.BucketPolicyOnly {
insertObject.PredefinedAcl(o.fs.opt.ObjectACL)
}
newObject, err = insertObject.Do()
newObject, err = insertObject.Context(ctx).Do()
return shouldRetry(err)
})
if err != nil {
@@ -1069,7 +1078,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
func (o *Object) Remove(ctx context.Context) (err error) {
bucket, bucketPath := o.split()
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.svc.Objects.Delete(bucket, bucketPath).Do()
err = o.fs.svc.Objects.Delete(bucket, bucketPath).Context(ctx).Do()
return shouldRetry(err)
})
return err

View File

@@ -290,7 +290,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
// fetchEndpoint gets the openid endpoint named from the Google config
func (f *Fs) fetchEndpoint(name string) (endpoint string, err error) {
func (f *Fs) fetchEndpoint(ctx context.Context, name string) (endpoint string, err error) {
// Get openID config without auth
opts := rest.Opts{
Method: "GET",
@@ -298,7 +298,7 @@ func (f *Fs) fetchEndpoint(name string) (endpoint string, err error) {
}
var openIDconfig map[string]interface{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.unAuth.CallJSON(&opts, nil, &openIDconfig)
resp, err := f.unAuth.CallJSON(ctx, &opts, nil, &openIDconfig)
return shouldRetry(resp, err)
})
if err != nil {
@@ -316,7 +316,7 @@ func (f *Fs) fetchEndpoint(name string) (endpoint string, err error) {
// UserInfo fetches info about the current user with oauth2
func (f *Fs) UserInfo(ctx context.Context) (userInfo map[string]string, err error) {
endpoint, err := f.fetchEndpoint("userinfo_endpoint")
endpoint, err := f.fetchEndpoint(ctx, "userinfo_endpoint")
if err != nil {
return nil, err
}
@@ -327,7 +327,7 @@ func (f *Fs) UserInfo(ctx context.Context) (userInfo map[string]string, err erro
RootURL: endpoint,
}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, nil, &userInfo)
resp, err := f.srv.CallJSON(ctx, &opts, nil, &userInfo)
return shouldRetry(resp, err)
})
if err != nil {
@@ -338,7 +338,7 @@ func (f *Fs) UserInfo(ctx context.Context) (userInfo map[string]string, err erro
// Disconnect kills the token and refresh token
func (f *Fs) Disconnect(ctx context.Context) (err error) {
endpoint, err := f.fetchEndpoint("revocation_endpoint")
endpoint, err := f.fetchEndpoint(ctx, "revocation_endpoint")
if err != nil {
return err
}
@@ -358,7 +358,7 @@ func (f *Fs) Disconnect(ctx context.Context) (err error) {
}
var res interface{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(&opts, nil, &res)
resp, err := f.srv.CallJSON(ctx, &opts, nil, &res)
return shouldRetry(resp, err)
})
if err != nil {
@@ -423,7 +423,7 @@ func findID(name string) string {
// list the albums into an internal cache
// FIXME cache invalidation
func (f *Fs) listAlbums(shared bool) (all *albums, err error) {
func (f *Fs) listAlbums(ctx context.Context, shared bool) (all *albums, err error) {
f.albumsMu.Lock()
defer f.albumsMu.Unlock()
all, ok := f.albums[shared]
@@ -445,7 +445,7 @@ func (f *Fs) listAlbums(shared bool) (all *albums, err error) {
var result api.ListAlbums
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &result)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
@@ -482,7 +482,7 @@ type listFn func(remote string, object *api.MediaItem, isDirectory bool) error
// dir is the starting directory, "" for root
//
// Set recurse to read sub directories
func (f *Fs) list(filter api.SearchFilter, fn listFn) (err error) {
func (f *Fs) list(ctx context.Context, filter api.SearchFilter, fn listFn) (err error) {
opts := rest.Opts{
Method: "POST",
Path: "/mediaItems:search",
@@ -494,7 +494,7 @@ func (f *Fs) list(filter api.SearchFilter, fn listFn) (err error) {
var result api.MediaItems
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &filter, &result)
resp, err = f.srv.CallJSON(ctx, &opts, &filter, &result)
return shouldRetry(resp, err)
})
if err != nil {
@@ -543,7 +543,7 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, item *api.MediaI
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, prefix string, filter api.SearchFilter) (entries fs.DirEntries, err error) {
// List the objects
err = f.list(filter, func(remote string, item *api.MediaItem, isDirectory bool) error {
err = f.list(ctx, filter, func(remote string, item *api.MediaItem, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, prefix+remote, item, isDirectory)
if err != nil {
return err
@@ -638,7 +638,7 @@ func (f *Fs) createAlbum(ctx context.Context, albumTitle string) (album *api.Alb
var result api.Album
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, request, &result)
resp, err = f.srv.CallJSON(ctx, &opts, request, &result)
return shouldRetry(resp, err)
})
if err != nil {
@@ -654,7 +654,7 @@ func (f *Fs) createAlbum(ctx context.Context, albumTitle string) (album *api.Alb
func (f *Fs) getOrCreateAlbum(ctx context.Context, albumTitle string) (album *api.Album, err error) {
f.createMu.Lock()
defer f.createMu.Unlock()
albums, err := f.listAlbums(false)
albums, err := f.listAlbums(ctx, false)
if err != nil {
return nil, err
}
@@ -708,7 +708,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
return err
}
albumTitle := match[1]
allAlbums, err := f.listAlbums(false)
allAlbums, err := f.listAlbums(ctx, false)
if err != nil {
return err
}
@@ -773,7 +773,7 @@ func (o *Object) Size() int64 {
RootURL: o.downloadURL(),
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
if err != nil {
@@ -824,7 +824,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
var item api.MediaItem
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, nil, &item)
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &item)
return shouldRetry(resp, err)
})
if err != nil {
@@ -901,7 +901,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
Options: options,
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
if err != nil {
@@ -954,7 +954,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var token []byte
var resp *http.Response
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
resp, err = o.fs.srv.Call(ctx, &opts)
if err != nil {
return shouldRetry(resp, err)
}
@@ -986,7 +986,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
var result api.BatchCreateResponse
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, request, &result)
resp, err = o.fs.srv.CallJSON(ctx, &opts, request, &result)
return shouldRetry(resp, err)
})
if err != nil {
@@ -1029,7 +1029,7 @@ func (o *Object) Remove(ctx context.Context) (err error) {
}
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, &request, nil)
resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil)
return shouldRetry(resp, err)
})
if err != nil {

View File
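The Google Photos hunks above move ctx into the first parameter of every rest.Client call. A hedged sketch of the new calling convention (the endpoint is real, but the request will fail without OAuth credentials; shown only for the signature):

package main

import (
	"context"
	"fmt"
	"net/http"

	"github.com/rclone/rclone/lib/rest"
)

func main() {
	ctx := context.Background()
	srv := rest.NewClient(http.DefaultClient).SetRoot("https://photoslibrary.googleapis.com/v1")
	opts := rest.Opts{
		Method: "GET",
		Path:   "/albums",
	}
	var result map[string]interface{}
	// ctx is now the first argument, so cancelling it aborts the request
	// mid-flight instead of leaking it.
	resp, err := srv.CallJSON(ctx, &opts, nil, &result)
	fmt.Println(resp, err)
}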

@@ -20,7 +20,7 @@ import (
// file pattern parsing
type lister interface {
listDir(ctx context.Context, prefix string, filter api.SearchFilter) (entries fs.DirEntries, err error)
listAlbums(shared bool) (all *albums, err error)
listAlbums(ctx context.Context, shared bool) (all *albums, err error)
listUploads(ctx context.Context, dir string) (entries fs.DirEntries, err error)
dirTime() time.Time
}
@@ -296,7 +296,7 @@ func yearMonthDayFilter(ctx context.Context, f lister, match []string) (sf api.S
// is a prefix of another album, or actual files, or a combination of
// the two.
func albumsToEntries(ctx context.Context, f lister, shared bool, prefix string, albumPath string) (entries fs.DirEntries, err error) {
albums, err := f.listAlbums(shared)
albums, err := f.listAlbums(ctx, shared)
if err != nil {
return nil, err
}

View File

@@ -44,7 +44,7 @@ func (f *testLister) listDir(ctx context.Context, prefix string, filter api.Sear
}
// mock listAlbums for testing
func (f *testLister) listAlbums(shared bool) (all *albums, err error) {
func (f *testLister) listAlbums(ctx context.Context, shared bool) (all *albums, err error) {
return f.albums, nil
}

View File

@@ -13,6 +13,7 @@ import (
"path"
"strconv"
"strings"
"sync"
"time"
"github.com/pkg/errors"
@@ -77,6 +78,26 @@ Note that this may cause rclone to confuse genuine HTML files with
directories.`,
Default: false,
Advanced: true,
}, {
Name: "no_head",
Help: `Don't use HEAD requests to find file sizes in dir listing
If your site is being very slow to load then you can try this option.
Normally rclone does a HEAD request for each potential file in a
directory listing to:
- find its size
- check it really exists
- check to see if it is a directory
If you set this option, rclone will not do the HEAD request. This will mean
- directory listings are much quicker
- rclone won't have the times or sizes of any files
- some files that don't exist may be in the listing
`,
Default: false,
Advanced: true,
}},
}
fs.Register(fsi)
@@ -86,6 +107,7 @@ directories.`,
type Options struct {
Endpoint string `config:"url"`
NoSlash bool `config:"no_slash"`
NoHead bool `config:"no_head"`
Headers fs.CommaSepList `config:"headers"`
}
@@ -124,6 +146,7 @@ func statusError(res *http.Response, err error) error {
// NewFs creates a new Fs object from the name and root. It connects to
// the host specified in the config file.
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
ctx := context.TODO()
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
@@ -162,6 +185,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// check to see if points to a file
req, err := http.NewRequest("HEAD", u.String(), nil)
if err == nil {
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
addHeaders(req, opt)
res, err := noRedir.Do(req)
err = statusError(res, err)
@@ -237,7 +261,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
fs: f,
remote: remote,
}
err := o.stat()
err := o.stat(ctx)
if err != nil {
return nil, err
}
@@ -355,7 +379,7 @@ func (f *Fs) addHeaders(req *http.Request) {
}
// Read the directory passed in
func (f *Fs) readDir(dir string) (names []string, err error) {
func (f *Fs) readDir(ctx context.Context, dir string) (names []string, err error) {
URL := f.url(dir)
u, err := url.Parse(URL)
if err != nil {
@@ -369,6 +393,7 @@ func (f *Fs) readDir(dir string) (names []string, err error) {
if err != nil {
return nil, errors.Wrap(err, "readDir failed")
}
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
f.addHeaders(req)
res, err := f.httpClient.Do(req)
if err == nil {
@@ -408,34 +433,53 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if !strings.HasSuffix(dir, "/") && dir != "" {
dir += "/"
}
names, err := f.readDir(dir)
names, err := f.readDir(ctx, dir)
if err != nil {
return nil, errors.Wrapf(err, "error listing %q", dir)
}
var (
entriesMu sync.Mutex // to protect entries
wg sync.WaitGroup
in = make(chan string, fs.Config.Checkers)
)
add := func(entry fs.DirEntry) {
entriesMu.Lock()
entries = append(entries, entry)
entriesMu.Unlock()
}
for i := 0; i < fs.Config.Checkers; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for remote := range in {
file := &Object{
fs: f,
remote: remote,
}
switch err := file.stat(ctx); err {
case nil:
add(file)
case fs.ErrorNotAFile:
// ...found a directory not a file
add(fs.NewDir(remote, timeUnset))
default:
fs.Debugf(remote, "skipping because of error: %v", err)
}
}
}()
}
for _, name := range names {
isDir := name[len(name)-1] == '/'
name = strings.TrimRight(name, "/")
remote := path.Join(dir, name)
if isDir {
dir := fs.NewDir(remote, timeUnset)
entries = append(entries, dir)
add(fs.NewDir(remote, timeUnset))
} else {
file := &Object{
fs: f,
remote: remote,
}
switch err = file.stat(); err {
case nil:
entries = append(entries, file)
case fs.ErrorNotAFile:
// ...found a directory not a file
dir := fs.NewDir(remote, timeUnset)
entries = append(entries, dir)
default:
fs.Debugf(remote, "skipping because of error: %v", err)
}
in <- remote
}
}
close(in)
wg.Wait()
return entries, nil
}
@@ -492,12 +536,19 @@ func (o *Object) url() string {
}
// stat updates the info field in the Object
func (o *Object) stat() error {
func (o *Object) stat(ctx context.Context) error {
if o.fs.opt.NoHead {
o.size = -1
o.modTime = timeUnset
o.contentType = fs.MimeType(ctx, o)
return nil
}
url := o.url()
req, err := http.NewRequest("HEAD", url, nil)
if err != nil {
return errors.Wrap(err, "stat failed")
}
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
o.fs.addHeaders(req)
res, err := o.fs.httpClient.Do(req)
if err == nil && res.StatusCode == http.StatusNotFound {
@@ -546,6 +597,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
if err != nil {
return nil, errors.Wrap(err, "Open failed")
}
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
// Add optional headers
for k, v := range fs.OpenOptionHeaders(options) {

View File
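The List rewrite above fans the per-file HEAD requests out to fs.Config.Checkers worker goroutines fed from a channel, appending to entries under a mutex. A simplified, generic sketch of the same bounded fan-out (names are illustrative, not the backend's):

package main

import (
	"fmt"
	"sync"
)

// statAll stats every name with at most workers concurrent calls and
// collects the results under a mutex, mirroring the rewritten List.
func statAll(names []string, workers int, stat func(string) string) []string {
	var (
		mu      sync.Mutex
		results []string
		wg      sync.WaitGroup
		in      = make(chan string, workers)
	)
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for name := range in {
				r := stat(name) // the expensive per-entry HEAD request
				mu.Lock()
				results = append(results, r)
				mu.Unlock()
			}
		}()
	}
	for _, name := range names {
		in <- name
	}
	close(in) // no more work: lets the workers drain and exit
	wg.Wait()
	return results
}

func main() {
	out := statAll([]string{"a", "b", "c"}, 2, func(s string) string { return s + ": ok" })
	fmt.Println(len(out), "entries")
}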

@@ -1,6 +1,7 @@
package hubic
import (
"context"
"net/http"
"time"
@@ -26,7 +27,7 @@ func newAuth(f *Fs) *auth {
func (a *auth) Request(*swift.Connection) (r *http.Request, err error) {
const retries = 10
for try := 1; try <= retries; try++ {
err = a.f.getCredentials()
err = a.f.getCredentials(context.TODO())
if err == nil {
break
}

View File
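getCredentials now requires a ctx, but the swift auth callback above has no caller-supplied context to forward, hence context.TODO(). A short sketch of that boundary pattern:

package main

import (
	"context"
	"fmt"
)

// fetch is fully context-aware; in real code cancelling ctx aborts the request.
func fetch(ctx context.Context) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	default:
		return nil
	}
}

// legacyCallback models an interface method whose signature predates context,
// like swift.Connection's auth hook: it has to conjure a context itself.
func legacyCallback() error {
	// context.TODO() documents that a real context should be plumbed in
	// once the upstream interface grows a ctx parameter.
	return fetch(context.TODO())
}

func main() { fmt.Println(legacyCallback()) }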

@@ -7,6 +7,7 @@ package hubic
// to be revisited after some actual experience.
import (
"context"
"encoding/json"
"fmt"
"io/ioutil"
@@ -115,11 +116,12 @@ func (f *Fs) String() string {
// getCredentials reads the OpenStack Credentials using the Hubic API
//
// The credentials are read into the Fs
func (f *Fs) getCredentials() (err error) {
func (f *Fs) getCredentials(ctx context.Context) (err error) {
req, err := http.NewRequest("GET", "https://api.hubic.com/1.0/account/credentials", nil)
if err != nil {
return err
}
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
resp, err := f.client.Do(req)
if err != nil {
return err

View File

@@ -26,6 +26,7 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
@@ -36,6 +37,8 @@ import (
"golang.org/x/oauth2"
)
const enc = encodings.JottaCloud
// Globals
const (
minSleep = 10 * time.Millisecond
@@ -77,6 +80,7 @@ func init() {
Description: "JottaCloud",
NewFs: NewFs,
Config: func(name string, m configmap.Mapper) {
ctx := context.TODO()
tokenString, ok := m.Get("token")
if ok && tokenString != "" {
fmt.Printf("Already have a token - refresh?\n")
@@ -88,7 +92,7 @@ func init() {
srv := rest.NewClient(fshttp.NewClient(fs.Config))
fmt.Printf("\nDo you want to create a machine specific API key?\n\nRclone has its own Jottacloud API KEY which works fine as long as one only uses rclone on a single machine. When you want to use rclone with this account on more than one machine it's recommended to create a machine specific API key. These keys can NOT be shared between machines.\n\n")
if config.Confirm() {
deviceRegistration, err := registerDevice(srv)
deviceRegistration, err := registerDevice(ctx, srv)
if err != nil {
log.Fatalf("Failed to register device: %v", err)
}
@@ -113,7 +117,7 @@ func init() {
username := config.ReadLine()
password := config.GetPassword("Your Jottacloud password is only required during setup and will not be stored.")
token, err := doAuth(srv, username, password)
token, err := doAuth(ctx, srv, username, password)
if err != nil {
log.Fatalf("Failed to get oauth token: %s", err)
}
@@ -132,7 +136,7 @@ func init() {
srv = rest.NewClient(oAuthClient).SetRoot(rootURL)
apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL)
device, mountpoint, err := setupMountpoint(srv, apiSrv)
device, mountpoint, err := setupMountpoint(ctx, srv, apiSrv)
if err != nil {
log.Fatalf("Failed to setup mountpoint: %s", err)
}
@@ -246,7 +250,7 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
}
// registerDevice register a new device for use with the jottacloud API
func registerDevice(srv *rest.Client) (reg *api.DeviceRegistrationResponse, err error) {
func registerDevice(ctx context.Context, srv *rest.Client) (reg *api.DeviceRegistrationResponse, err error) {
// random generator to generate random device names
seededRand := rand.New(rand.NewSource(time.Now().UnixNano()))
randonDeviceNamePartLength := 21
@@ -269,12 +273,12 @@ func registerDevice(srv *rest.Client) (reg *api.DeviceRegistrationResponse, err
}
var deviceRegistration *api.DeviceRegistrationResponse
_, err = srv.CallJSON(&opts, nil, &deviceRegistration)
_, err = srv.CallJSON(ctx, &opts, nil, &deviceRegistration)
return deviceRegistration, err
}
// doAuth runs the actual token request
func doAuth(srv *rest.Client, username, password string) (token oauth2.Token, err error) {
func doAuth(ctx context.Context, srv *rest.Client, username, password string) (token oauth2.Token, err error) {
// prepare our token request with username and password
values := url.Values{}
values.Set("grant_type", "PASSWORD")
@@ -291,7 +295,7 @@ func doAuth(srv *rest.Client, username, password string) (token oauth2.Token, er
// do the first request
var jsonToken api.TokenJSON
resp, err := srv.CallJSON(&opts, nil, &jsonToken)
resp, err := srv.CallJSON(ctx, &opts, nil, &jsonToken)
if err != nil {
// if 2fa is enabled the first request is expected to fail. We will do another request with the 2fa code as an additional http header
if resp != nil {
@@ -303,7 +307,7 @@ func doAuth(srv *rest.Client, username, password string) (token oauth2.Token, er
authCode = strings.Replace(authCode, "-", "", -1) // remove any "-" contained in the code so we have a 6 digit number
opts.ExtraHeaders = make(map[string]string)
opts.ExtraHeaders["X-Jottacloud-Otp"] = authCode
resp, err = srv.CallJSON(&opts, nil, &jsonToken)
resp, err = srv.CallJSON(ctx, &opts, nil, &jsonToken)
}
}
}
@@ -316,13 +320,13 @@ func doAuth(srv *rest.Client, username, password string) (token oauth2.Token, er
}
// setupMountpoint sets up a custom device and mountpoint if desired by the user
func setupMountpoint(srv *rest.Client, apiSrv *rest.Client) (device, mountpoint string, err error) {
cust, err := getCustomerInfo(apiSrv)
func setupMountpoint(ctx context.Context, srv *rest.Client, apiSrv *rest.Client) (device, mountpoint string, err error) {
cust, err := getCustomerInfo(ctx, apiSrv)
if err != nil {
return "", "", err
}
acc, err := getDriveInfo(srv, cust.Username)
acc, err := getDriveInfo(ctx, srv, cust.Username)
if err != nil {
return "", "", err
}
@@ -333,7 +337,7 @@ func setupMountpoint(srv *rest.Client, apiSrv *rest.Client) (device, mountpoint
fmt.Printf("Please select the device to use. Normally this will be Jotta\n")
device = config.Choose("Devices", deviceNames, nil, false)
dev, err := getDeviceInfo(srv, path.Join(cust.Username, device))
dev, err := getDeviceInfo(ctx, srv, path.Join(cust.Username, device))
if err != nil {
return "", "", err
}
@@ -351,13 +355,13 @@ func setupMountpoint(srv *rest.Client, apiSrv *rest.Client) (device, mountpoint
}
// getCustomerInfo queries general information about the account
func getCustomerInfo(srv *rest.Client) (info *api.CustomerInfo, err error) {
func getCustomerInfo(ctx context.Context, srv *rest.Client) (info *api.CustomerInfo, err error) {
opts := rest.Opts{
Method: "GET",
Path: "account/v1/customer",
}
_, err = srv.CallJSON(&opts, nil, &info)
_, err = srv.CallJSON(ctx, &opts, nil, &info)
if err != nil {
return nil, errors.Wrap(err, "couldn't get customer info")
}
@@ -366,13 +370,13 @@ func getCustomerInfo(srv *rest.Client) (info *api.CustomerInfo, err error) {
}
// getDriveInfo queries general information about the account and the available devices and mountpoints.
func getDriveInfo(srv *rest.Client, username string) (info *api.DriveInfo, err error) {
func getDriveInfo(ctx context.Context, srv *rest.Client, username string) (info *api.DriveInfo, err error) {
opts := rest.Opts{
Method: "GET",
Path: username,
}
_, err = srv.CallXML(&opts, nil, &info)
_, err = srv.CallXML(ctx, &opts, nil, &info)
if err != nil {
return nil, errors.Wrap(err, "couldn't get drive info")
}
@@ -381,13 +385,13 @@ func getDriveInfo(srv *rest.Client, username string) (info *api.DriveInfo, err e
}
// getDeviceInfo queries Information about a jottacloud device
func getDeviceInfo(srv *rest.Client, path string) (info *api.JottaDevice, err error) {
func getDeviceInfo(ctx context.Context, srv *rest.Client, path string) (info *api.JottaDevice, err error) {
opts := rest.Opts{
Method: "GET",
Path: urlPathEscape(path),
}
_, err = srv.CallXML(&opts, nil, &info)
_, err = srv.CallXML(ctx, &opts, nil, &info)
if err != nil {
return nil, errors.Wrap(err, "couldn't get device info")
}
@@ -407,7 +411,7 @@ func (f *Fs) setEndpointURL() {
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(path string) (info *api.JottaFile, err error) {
func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.JottaFile, err error) {
opts := rest.Opts{
Method: "GET",
Path: f.filePath(path),
@@ -415,7 +419,7 @@ func (f *Fs) readMetaDataForPath(path string) (info *api.JottaFile, err error) {
var result api.JottaFile
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &result)
resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
})
@@ -459,7 +463,7 @@ func urlPathEscape(in string) string {
// filePathRaw returns an unescaped file path (f.root, file)
func (f *Fs) filePathRaw(file string) string {
return path.Join(f.endpointURL, replaceReservedChars(path.Join(f.root, file)))
return path.Join(f.endpointURL, enc.FromStandardPath(path.Join(f.root, file)))
}
// filePath returns a escaped file path (f.root, file)
@@ -492,6 +496,7 @@ func grantTypeFilter(req *http.Request) {
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
ctx := context.TODO()
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
@@ -546,11 +551,11 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Renew the token in the background
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
_, err := f.readMetaDataForPath("")
_, err := f.readMetaDataForPath(ctx, "")
return err
})
cust, err := getCustomerInfo(f.apiSrv)
cust, err := getCustomerInfo(ctx, f.apiSrv)
if err != nil {
return nil, err
}
@@ -582,7 +587,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(remote string, info *api.JottaFile) (fs.Object, error) {
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.JottaFile) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
@@ -592,7 +597,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *api.JottaFile) (fs.Object, e
// Set info
err = o.setMetaData(info)
} else {
err = o.readMetaData(false) // reads info and meta, returning an error
err = o.readMetaData(ctx, false) // reads info and meta, returning an error
}
if err != nil {
return nil, err
@@ -603,11 +608,11 @@ func (f *Fs) newObjectWithInfo(remote string, info *api.JottaFile) (fs.Object, e
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
return f.newObjectWithInfo(ctx, remote, nil)
}
// CreateDir makes a directory
func (f *Fs) CreateDir(path string) (jf *api.JottaFolder, err error) {
func (f *Fs) CreateDir(ctx context.Context, path string) (jf *api.JottaFolder, err error) {
// fs.Debugf(f, "CreateDir(%q, %q)\n", pathID, leaf)
var resp *http.Response
opts := rest.Opts{
@@ -619,7 +624,7 @@ func (f *Fs) CreateDir(path string) (jf *api.JottaFolder, err error) {
opts.Parameters.Set("mkDir", "true")
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &jf)
resp, err = f.srv.CallXML(ctx, &opts, nil, &jf)
return shouldRetry(resp, err)
})
if err != nil {
@@ -648,7 +653,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
var resp *http.Response
var result api.JottaFolder
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &result)
resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
})
@@ -671,7 +676,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if item.Deleted {
continue
}
remote := path.Join(dir, restoreReservedChars(item.Name))
remote := path.Join(dir, enc.ToStandardName(item.Name))
d := fs.NewDir(remote, time.Time(item.ModifiedAt))
entries = append(entries, d)
}
@@ -681,8 +686,8 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if item.Deleted || item.State != "COMPLETED" {
continue
}
remote := path.Join(dir, restoreReservedChars(item.Name))
o, err := f.newObjectWithInfo(remote, item)
remote := path.Join(dir, enc.ToStandardName(item.Name))
o, err := f.newObjectWithInfo(ctx, remote, item)
if err != nil {
continue
}
@@ -696,7 +701,7 @@ type listFileDirFn func(fs.DirEntry) error
// List the objects and directories into entries, from a
// special kind of JottaFolder representing a FileDirList
func (f *Fs) listFileDir(remoteStartPath string, startFolder *api.JottaFolder, fn listFileDirFn) error {
func (f *Fs) listFileDir(ctx context.Context, remoteStartPath string, startFolder *api.JottaFolder, fn listFileDirFn) error {
pathPrefix := "/" + f.filePathRaw("") // Non-escaped prefix of API paths to be cut off, to be left with the remote path including the remoteStartPath
pathPrefixLength := len(pathPrefix)
startPath := path.Join(pathPrefix, remoteStartPath) // Non-escaped API path up to and including remoteStartPath, to decide if it should be created as a new dir object
@@ -706,7 +711,7 @@ func (f *Fs) listFileDir(remoteStartPath string, startFolder *api.JottaFolder, f
if folder.Deleted {
return nil
}
folderPath := restoreReservedChars(path.Join(folder.Path, folder.Name))
folderPath := enc.ToStandardPath(path.Join(folder.Path, folder.Name))
folderPathLength := len(folderPath)
var remoteDir string
if folderPathLength > pathPrefixLength {
@@ -724,8 +729,8 @@ func (f *Fs) listFileDir(remoteStartPath string, startFolder *api.JottaFolder, f
if file.Deleted || file.State != "COMPLETED" {
continue
}
remoteFile := path.Join(remoteDir, restoreReservedChars(file.Name))
o, err := f.newObjectWithInfo(remoteFile, file)
remoteFile := path.Join(remoteDir, enc.ToStandardName(file.Name))
o, err := f.newObjectWithInfo(ctx, remoteFile, file)
if err != nil {
return err
}
@@ -754,7 +759,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
var resp *http.Response
var result api.JottaFolder // Could be JottaFileDirList, but JottaFolder is close enough
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &result)
resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
@@ -767,7 +772,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
return errors.Wrap(err, "couldn't list files")
}
list := walk.NewListRHelper(callback)
err = f.listFileDir(dir, &result, func(entry fs.DirEntry) error {
err = f.listFileDir(ctx, dir, &result, func(entry fs.DirEntry) error {
return list.Add(entry)
})
if err != nil {
@@ -821,7 +826,7 @@ func (f *Fs) mkParentDir(ctx context.Context, dirPath string) error {
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
_, err := f.CreateDir(dir)
_, err := f.CreateDir(ctx, dir)
return err
}
@@ -860,7 +865,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(&opts)
resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
if err != nil {
@@ -888,18 +893,18 @@ func (f *Fs) Purge(ctx context.Context) error {
}
// copyOrMove copies or moves directories or files depending on the method parameter
func (f *Fs) copyOrMove(method, src, dest string) (info *api.JottaFile, err error) {
func (f *Fs) copyOrMove(ctx context.Context, method, src, dest string) (info *api.JottaFile, err error) {
opts := rest.Opts{
Method: "POST",
Path: src,
Parameters: url.Values{},
}
opts.Parameters.Set(method, "/"+path.Join(f.endpointURL, replaceReservedChars(path.Join(f.root, dest))))
opts.Parameters.Set(method, "/"+path.Join(f.endpointURL, enc.FromStandardPath(path.Join(f.root, dest))))
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &info)
resp, err = f.srv.CallXML(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
})
if err != nil {
@@ -928,13 +933,13 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
if err != nil {
return nil, err
}
info, err := f.copyOrMove("cp", srcObj.filePath(), remote)
info, err := f.copyOrMove(ctx, "cp", srcObj.filePath(), remote)
if err != nil {
return nil, errors.Wrap(err, "couldn't copy file")
}
return f.newObjectWithInfo(remote, info)
return f.newObjectWithInfo(ctx, remote, info)
//return f.newObjectWithInfo(remote, &result)
}
@@ -958,13 +963,13 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
if err != nil {
return nil, err
}
info, err := f.copyOrMove("mv", srcObj.filePath(), remote)
info, err := f.copyOrMove(ctx, "mv", srcObj.filePath(), remote)
if err != nil {
return nil, errors.Wrap(err, "couldn't move file")
}
return f.newObjectWithInfo(remote, info)
return f.newObjectWithInfo(ctx, remote, info)
//return f.newObjectWithInfo(remote, result)
}
@@ -1002,7 +1007,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return fs.ErrorDirExists
}
_, err = f.copyOrMove("mvDir", path.Join(f.endpointURL, replaceReservedChars(srcPath))+"/", dstRemote)
_, err = f.copyOrMove(ctx, "mvDir", path.Join(f.endpointURL, enc.FromStandardPath(srcPath))+"/", dstRemote)
if err != nil {
return errors.Wrap(err, "couldn't move directory")
@@ -1027,7 +1032,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
var resp *http.Response
var result api.JottaFile
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &result)
resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
})
@@ -1058,7 +1063,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
// About gets quota information
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
info, err := getDriveInfo(f.srv, f.user)
info, err := getDriveInfo(ctx, f.srv, f.user)
if err != nil {
return nil, err
}
@@ -1113,7 +1118,8 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
err := o.readMetaData(false)
ctx := context.TODO()
err := o.readMetaData(ctx, false)
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return 0
@@ -1137,11 +1143,11 @@ func (o *Object) setMetaData(info *api.JottaFile) (err error) {
}
// readMetaData reads and updates the metadata for an object
func (o *Object) readMetaData(force bool) (err error) {
func (o *Object) readMetaData(ctx context.Context, force bool) (err error) {
if o.hasMetaData && !force {
return nil
}
info, err := o.fs.readMetaDataForPath(o.remote)
info, err := o.fs.readMetaDataForPath(ctx, o.remote)
if err != nil {
return err
}
@@ -1156,7 +1162,7 @@ func (o *Object) readMetaData(force bool) (err error) {
// It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) time.Time {
err := o.readMetaData(false)
err := o.readMetaData(ctx, false)
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return time.Now()
@@ -1188,7 +1194,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
opts.Parameters.Set("mode", "bin")
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
if err != nil {
@@ -1292,13 +1298,13 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
Created: fileDate,
Modified: fileDate,
Md5: md5String,
Path: path.Join(o.fs.opt.Mountpoint, replaceReservedChars(path.Join(o.fs.root, o.remote))),
Path: path.Join(o.fs.opt.Mountpoint, enc.FromStandardPath(path.Join(o.fs.root, o.remote))),
}
// send it
var response api.AllocateFileResponse
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.apiSrv.CallJSON(&opts, &request, &response)
resp, err = o.fs.apiSrv.CallJSON(ctx, &opts, &request, &response)
return shouldRetry(resp, err)
})
if err != nil {
@@ -1329,7 +1335,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
// send the remaining bytes
resp, err = o.fs.apiSrv.CallJSON(&opts, nil, &result)
resp, err = o.fs.apiSrv.CallJSON(ctx, &opts, nil, &result)
if err != nil {
return err
}
@@ -1341,7 +1347,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
o.modTime = time.Unix(result.Modified/1000, 0)
} else {
// If the file state is COMPLETE we don't need to upload it because the file was already found but we still need to update our metadata
return o.readMetaData(true)
return o.readMetaData(ctx, true)
}
return nil
@@ -1363,7 +1369,7 @@ func (o *Object) Remove(ctx context.Context) error {
}
return o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallXML(&opts, nil, nil)
resp, err := o.fs.srv.CallXML(ctx, &opts, nil, nil)
return shouldRetry(resp, err)
})
}

View File
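The jottacloud hunks above swap the hand-rolled replaceReservedChars/restoreReservedChars pair (deleted below) for the shared encodings package. A minimal usage sketch; the exact output runes depend on the JottaCloud encoding table:

package main

import (
	"fmt"

	"github.com/rclone/rclone/fs/encodings"
)

const enc = encodings.JottaCloud

func main() {
	// FromStandardPath maps rclone's standard UTF-8 form to what the
	// backend accepts; ToStandardPath reverses it when listing.
	wire := enc.FromStandardPath(`photos/tag:summer?`)
	fmt.Println(wire)
	fmt.Println(enc.ToStandardPath(wire) == `photos/tag:summer?`) // round-trips
}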

@@ -1,77 +0,0 @@
/*
Translate file names for JottaCloud adapted from OneDrive
The following characters are JottaCloud reserved characters, and can't
be used in JottaCloud folder and file names.
jottacloud = "/" / "\" / "*" / "<" / ">" / "?" / "!" / "&" / ":" / ";" / "|" / "#" / "%" / """ / "'" / "." / "~"
*/
package jottacloud
import (
"regexp"
"strings"
)
// charMap holds replacements for characters
//
// Onedrive has a restricted set of characters compared to other cloud
// storage systems, so we map these to the FULLWIDTH unicode
// equivalents
//
// http://unicode-search.net/unicode-namesearch.pl?term=SOLIDUS
var (
charMap = map[rune]rune{
'\\': '＼', // FULLWIDTH REVERSE SOLIDUS
'*': '＊', // FULLWIDTH ASTERISK
'<': '＜', // FULLWIDTH LESS-THAN SIGN
'>': '＞', // FULLWIDTH GREATER-THAN SIGN
'?': '？', // FULLWIDTH QUESTION MARK
':': '：', // FULLWIDTH COLON
';': '；', // FULLWIDTH SEMICOLON
'|': '｜', // FULLWIDTH VERTICAL LINE
'"': '＂', // FULLWIDTH QUOTATION MARK - not on the list but seems to be reserved
' ': '␠', // SYMBOL FOR SPACE
}
invCharMap map[rune]rune
fixStartingWithSpace = regexp.MustCompile(`(/|^) `)
fixEndingWithSpace = regexp.MustCompile(` (/|$)`)
)
func init() {
// Create inverse charMap
invCharMap = make(map[rune]rune, len(charMap))
for k, v := range charMap {
invCharMap[v] = k
}
}
// replaceReservedChars takes a path and substitutes any reserved
// characters in it
func replaceReservedChars(in string) string {
// Filenames can't start with space
in = fixStartingWithSpace.ReplaceAllString(in, "$1"+string(charMap[' ']))
// Filenames can't end with space
in = fixEndingWithSpace.ReplaceAllString(in, string(charMap[' '])+"$1")
return strings.Map(func(c rune) rune {
if replacement, ok := charMap[c]; ok && c != ' ' {
return replacement
}
return c
}, in)
}
// restoreReservedChars takes a path and undoes any substitutions
// made by replaceReservedChars
func restoreReservedChars(in string) string {
return strings.Map(func(c rune) rune {
if replacement, ok := invCharMap[c]; ok {
return replacement
}
return c
}, in)
}

View File

@@ -1,28 +0,0 @@
package jottacloud
import "testing"
func TestReplace(t *testing.T) {
for _, test := range []struct {
in string
out string
}{
{"", ""},
{"abc 123", "abc 123"},
{`\*<>?:;|"`, `＼＊＜＞？：；｜＂`},
{`\*<>?:;|"\*<>?:;|"`, `＼＊＜＞？：；｜＂＼＊＜＞？：；｜＂`},
{" leading space", "␠leading space"},
{"trailing space ", "trailing space␠"},
{" leading space/ leading space/ leading space", "␠leading space/␠leading space/␠leading space"},
{"trailing space /trailing space /trailing space ", "trailing space␠/trailing space␠/trailing space␠"},
} {
got := replaceReservedChars(test.in)
if got != test.out {
t.Errorf("replaceReservedChars(%q) want %q got %q", test.in, test.out, got)
}
got2 := restoreReservedChars(got)
if got2 != test.in {
t.Errorf("restoreReservedChars(%q) want %q got %q", got, test.in, got2)
}
}
}

View File

@@ -15,12 +15,15 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/hash"
httpclient "github.com/koofr/go-httpclient"
koofrclient "github.com/koofr/go-koofrclient"
)
const enc = encodings.Koofr
// Register Fs with rclone
func init() {
fs.Register(&fs.RegInfo{
@@ -242,7 +245,7 @@ func (f *Fs) Hashes() hash.Set {
// fullPath constructs a full, absolute path from a Fs root relative path,
func (f *Fs) fullPath(part string) string {
return path.Join("/", f.root, part)
return enc.FromStandardPath(path.Join("/", f.root, part))
}
// NewFs constructs a new filesystem given a root path and configuration options
@@ -293,7 +296,7 @@ func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
}
return nil, errors.New("Failed to find mount " + opt.MountID)
}
rootFile, err := f.client.FilesInfo(f.mountID, "/"+f.root)
rootFile, err := f.client.FilesInfo(f.mountID, enc.FromStandardPath("/"+f.root))
if err == nil && rootFile.Type != "dir" {
f.root = dir(f.root)
err = fs.ErrorIsFile
@@ -311,13 +314,14 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
entries = make([]fs.DirEntry, len(files))
for i, file := range files {
remote := path.Join(dir, enc.ToStandardName(file.Name))
if file.Type == "dir" {
entries[i] = fs.NewDir(path.Join(dir, file.Name), time.Unix(0, 0))
entries[i] = fs.NewDir(remote, time.Unix(0, 0))
} else {
entries[i] = &Object{
fs: f,
info: file,
remote: path.Join(dir, file.Name),
remote: remote,
}
}
}

View File

@@ -0,0 +1,9 @@
//+build darwin
package local
import (
"github.com/rclone/rclone/fs/encodings"
)
const enc = encodings.LocalMacOS

View File

@@ -0,0 +1,9 @@
//+build !windows,!darwin
package local
import (
"github.com/rclone/rclone/fs/encodings"
)
const enc = encodings.LocalUnix

View File

@@ -0,0 +1,9 @@
//+build windows
package local
import (
"github.com/rclone/rclone/fs/encodings"
)
const enc = encodings.LocalWindows

View File

@@ -142,19 +142,19 @@ type Fs struct {
dev uint64 // device number of root node
precisionOk sync.Once // Whether we need to read the precision
precision time.Duration // precision of local filesystem
wmu sync.Mutex // used for locking access to 'warned'.
warnedMu sync.Mutex // used for locking access to 'warned'.
warned map[string]struct{} // whether we have warned about this string
// do os.Lstat or os.Stat
lstat func(name string) (os.FileInfo, error)
dirNames *mapper // directory name mapping
objectHashesMu sync.Mutex // global lock for Object.hashes
}
// Object represents a local filesystem object
type Object struct {
fs *Fs // The Fs this object is part of
remote string // The remote path - properly UTF-8 encoded - for rclone
path string // The local path - may not be properly UTF-8 encoded - for OS
remote string // The remote path (encoded path)
path string // The local path (OS path)
size int64 // file metadata - always present
mode os.FileMode
modTime time.Time
@@ -183,14 +183,13 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
f := &Fs{
name: name,
opt: *opt,
warned: make(map[string]struct{}),
dev: devUnset,
lstat: os.Lstat,
dirNames: newMapper(),
name: name,
opt: *opt,
warned: make(map[string]struct{}),
dev: devUnset,
lstat: os.Lstat,
}
f.root = f.cleanPath(root)
f.root = cleanRootPath(root, f.opt.NoUNC)
f.features = (&fs.Features{
CaseInsensitive: f.caseInsensitive(),
CanHaveEmptyDirectories: true,
@@ -235,12 +234,12 @@ func (f *Fs) Name() string {
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
return enc.ToStandardPath(filepath.ToSlash(f.root))
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("Local file system at %s", f.root)
return fmt.Sprintf("Local file system at %s", f.Root())
}
// Features returns the optional features of this Fs
@@ -268,33 +267,27 @@ func (f *Fs) caseInsensitive() bool {
// and returns a new path, removing the suffix as needed,
// It also returns whether this is a translated link at all
//
// for regular files, dstPath is returned unchanged
func translateLink(remote, dstPath string) (newDstPath string, isTranslatedLink bool) {
// for regular files, localPath is returned unchanged
func translateLink(remote, localPath string) (newLocalPath string, isTranslatedLink bool) {
isTranslatedLink = strings.HasSuffix(remote, linkSuffix)
newDstPath = strings.TrimSuffix(dstPath, linkSuffix)
return newDstPath, isTranslatedLink
newLocalPath = strings.TrimSuffix(localPath, linkSuffix)
return newLocalPath, isTranslatedLink
}
// newObject makes a half completed Object
//
// if dstPath is empty then it is made from remote
func (f *Fs) newObject(remote, dstPath string) *Object {
func (f *Fs) newObject(remote string) *Object {
translatedLink := false
if dstPath == "" {
dstPath = f.cleanPath(filepath.Join(f.root, remote))
}
remote = f.cleanRemote(remote)
localPath := f.localPath(remote)
if f.opt.TranslateSymlinks {
// Possibly receive a new name for dstPath
dstPath, translatedLink = translateLink(remote, dstPath)
// Possibly receive a new name for localPath
localPath, translatedLink = translateLink(remote, localPath)
}
return &Object{
fs: f,
remote: remote,
path: dstPath,
path: localPath,
translatedLink: translatedLink,
}
}
@@ -302,8 +295,8 @@ func (f *Fs) newObject(remote, dstPath string) *Object {
// Return an Object from a path
//
// May return nil if an error occurred
func (f *Fs) newObjectWithInfo(remote, dstPath string, info os.FileInfo) (fs.Object, error) {
o := f.newObject(remote, dstPath)
func (f *Fs) newObjectWithInfo(remote string, info os.FileInfo) (fs.Object, error) {
o := f.newObject(remote)
if info != nil {
o.setMetadata(info)
} else {
@@ -332,7 +325,7 @@ func (f *Fs) newObjectWithInfo(remote, dstPath string, info os.FileInfo) (fs.Obj
// NewObject finds the Object at remote. If it can't be found
// it returns the error ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, "", nil)
return f.newObjectWithInfo(remote, nil)
}
// List the objects and directories in dir into entries. The
@@ -345,10 +338,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
dir = f.dirNames.Load(dir)
fsDirPath := f.cleanPath(filepath.Join(f.root, dir))
remote := f.cleanRemote(dir)
fsDirPath := f.localPath(dir)
_, err = os.Stat(fsDirPath)
if err != nil {
return nil, fs.ErrorDirNotFound
@@ -410,11 +400,11 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
for _, fi := range fis {
name := fi.Name()
mode := fi.Mode()
newRemote := path.Join(remote, name)
newPath := filepath.Join(fsDirPath, name)
newRemote := f.cleanRemote(dir, name)
// Follow symlinks if required
if f.opt.FollowSymlinks && (mode&os.ModeSymlink) != 0 {
fi, err = os.Stat(newPath)
localPath := filepath.Join(fsDirPath, name)
fi, err = os.Stat(localPath)
if os.IsNotExist(err) {
// Skip bad symlinks
err = fserrors.NoRetryError(errors.Wrap(err, "symlink"))
@@ -431,7 +421,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// Ignore directories which are symlinks. These are junction points under windows which
// are kind of a souped up symlink. Unix doesn't have directories which are symlinks.
if (mode&os.ModeSymlink) == 0 && f.dev == readDevice(fi, f.opt.OneFileSystem) {
d := fs.NewDir(f.dirNames.Save(newRemote, f.cleanRemote(newRemote)), fi.ModTime())
d := fs.NewDir(newRemote, fi.ModTime())
entries = append(entries, d)
}
} else {
@@ -439,7 +429,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if f.opt.TranslateSymlinks && fi.Mode()&os.ModeSymlink != 0 {
newRemote += linkSuffix
}
fso, err := f.newObjectWithInfo(newRemote, newPath, fi)
fso, err := f.newObjectWithInfo(newRemote, fi)
if err != nil {
return nil, err
}
@@ -452,67 +442,28 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return entries, nil
}
// cleanRemote makes string a valid UTF-8 string for remote strings.
//
// Any invalid UTF-8 characters will be replaced with utf8.RuneError
// It also normalises the UTF-8 and converts the slashes if necessary.
func (f *Fs) cleanRemote(name string) string {
if !utf8.ValidString(name) {
f.wmu.Lock()
if _, ok := f.warned[name]; !ok {
fs.Logf(f, "Replacing invalid UTF-8 characters in %q", name)
f.warned[name] = struct{}{}
func (f *Fs) cleanRemote(dir, filename string) (remote string) {
remote = path.Join(dir, enc.ToStandardName(filename))
if !utf8.ValidString(filename) {
f.warnedMu.Lock()
if _, ok := f.warned[remote]; !ok {
fs.Logf(f, "Replacing invalid UTF-8 characters in %q", remote)
f.warned[remote] = struct{}{}
}
f.wmu.Unlock()
name = string([]rune(name))
f.warnedMu.Unlock()
}
name = filepath.ToSlash(name)
return name
return
}
// mapper maps raw to cleaned directory names
type mapper struct {
mu sync.RWMutex // mutex to protect the below
m map[string]string // map of un-normalised directory names
}
func newMapper() *mapper {
return &mapper{
m: make(map[string]string),
}
}
// Lookup a directory name to make a local name (reverses
// cleanDirName)
//
// FIXME this is temporary before we make a proper Directory object
func (m *mapper) Load(in string) string {
m.mu.RLock()
out, ok := m.m[in]
m.mu.RUnlock()
if ok {
return out
}
return in
}
// Cleans a directory name recording if it needed to be altered
//
// FIXME this is temporary before we make a proper Directory object
func (m *mapper) Save(in, out string) string {
if in != out {
m.mu.Lock()
m.m[out] = in
m.mu.Unlock()
}
return out
func (f *Fs) localPath(name string) string {
return filepath.Join(f.root, filepath.FromSlash(enc.FromStandardPath(name)))
}
// Put the Object to the local filesystem
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
remote := src.Remote()
// Temporary Object under construction - info filled in by Update()
o := f.newObject(remote, "")
o := f.newObject(src.Remote())
err := o.Update(ctx, in, src, options...)
if err != nil {
return nil, err
@@ -528,13 +479,13 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
// Mkdir creates the directory if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
// FIXME: https://github.com/syncthing/syncthing/blob/master/lib/osutil/mkdirall_windows.go
root := f.cleanPath(filepath.Join(f.root, dir))
err := os.MkdirAll(root, 0777)
localPath := f.localPath(dir)
err := os.MkdirAll(localPath, 0777)
if err != nil {
return err
}
if dir == "" {
fi, err := f.lstat(root)
fi, err := f.lstat(localPath)
if err != nil {
return err
}
@@ -547,8 +498,7 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
//
// If it isn't empty it will return an error
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
root := f.cleanPath(filepath.Join(f.root, dir))
return os.Remove(root)
return os.Remove(f.localPath(dir))
}
// Precision of the file system
@@ -644,7 +594,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
// Temporary Object under construction
dstObj := f.newObject(remote, "")
dstObj := f.newObject(remote)
// Check it is a file if it exists
err := dstObj.lstat()
@@ -701,8 +651,8 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
fs.Debugf(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
srcPath := f.cleanPath(filepath.Join(srcFs.root, srcRemote))
dstPath := f.cleanPath(filepath.Join(f.root, dstRemote))
srcPath := srcFs.localPath(srcRemote)
dstPath := f.localPath(dstRemote)
// Check if destination exists
_, err := os.Lstat(dstPath)
@@ -736,7 +686,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Supported
return hash.Supported()
}
// ------------------------------------------------------------
@@ -836,13 +786,6 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
// Storable returns a boolean showing if this object is storable
func (o *Object) Storable() bool {
// Check for control characters in the remote name and show non storable
for _, c := range o.Remote() {
if c >= 0x00 && c < 0x20 || c == 0x7F {
fs.Logf(o.fs, "Can't store file with control characters: %q", o.Remote())
return false
}
}
mode := o.mode
if mode&os.ModeSymlink != 0 && !o.fs.opt.TranslateSymlinks {
if !o.fs.opt.SkipSymlinks {
@@ -1087,7 +1030,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// It truncates any existing object
func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) {
// Temporary Object under construction
o := f.newObject(remote, "")
o := f.newObject(remote)
err := o.mkdirAll()
if err != nil {
@@ -1139,49 +1082,32 @@ func (o *Object) Remove(ctx context.Context) error {
return remove(o.path)
}
// cleanPathFragment cleans an OS path fragment which is part of a
// bigger path and not necessarily absolute
func cleanPathFragment(s string) string {
if s == "" {
return s
}
s = filepath.Clean(s)
func cleanRootPath(s string, noUNC bool) string {
if runtime.GOOS == "windows" {
s = strings.Replace(s, `/`, `\`, -1)
}
return s
}
s = filepath.ToSlash(s)
vol := filepath.VolumeName(s)
s = vol + enc.FromStandardPath(s[len(vol):])
s = filepath.FromSlash(s)
// cleanPath cleans and makes absolute the path passed in and returns
// an OS path.
//
// The input might be in OS form or rclone form or a mixture, but the
// output is in OS form.
//
// On windows it makes the path UNC also and replaces any characters
// Windows can't deal with, with their replacements.
func (f *Fs) cleanPath(s string) string {
s = cleanPathFragment(s)
if runtime.GOOS == "windows" {
if !filepath.IsAbs(s) && !strings.HasPrefix(s, "\\") {
s2, err := filepath.Abs(s)
if err == nil {
s = s2
}
}
if !f.opt.NoUNC {
if !noUNC {
// Convert to UNC
s = uncPath(s)
}
s = cleanWindowsName(f, s)
} else {
if !filepath.IsAbs(s) {
s2, err := filepath.Abs(s)
if err == nil {
s = s2
}
return s
}
if !filepath.IsAbs(s) {
s2, err := filepath.Abs(s)
if err == nil {
s = s2
}
}
s = enc.FromStandardPath(s)
return s
}
@@ -1190,63 +1116,21 @@ var isAbsWinDrive = regexp.MustCompile(`^[a-zA-Z]\:\\`)
// uncPath converts an absolute Windows path
// to a UNC long path.
func uncPath(s string) string {
// UNC can NOT use "/", so convert all to "\"
s = strings.Replace(s, `/`, `\`, -1)
func uncPath(l string) string {
// If prefix is "\\", we already have a UNC path or server.
if strings.HasPrefix(s, `\\`) {
if strings.HasPrefix(l, `\\`) {
// If already long path, just keep it
if strings.HasPrefix(s, `\\?\`) {
return s
if strings.HasPrefix(l, `\\?\`) {
return l
}
// Trim "\\" from path and add UNC prefix.
return `\\?\UNC\` + strings.TrimPrefix(s, `\\`)
return `\\?\UNC\` + strings.TrimPrefix(l, `\\`)
}
if isAbsWinDrive.MatchString(s) {
return `\\?\` + s
if isAbsWinDrive.MatchString(l) {
return `\\?\` + l
}
return s
}
// cleanWindowsName will clean invalid Windows characters replacing them with _
func cleanWindowsName(f *Fs, name string) string {
original := name
var name2 string
if strings.HasPrefix(name, `\\?\`) {
name2 = `\\?\`
name = strings.TrimPrefix(name, `\\?\`)
}
if strings.HasPrefix(name, `//?/`) {
name2 = `//?/`
name = strings.TrimPrefix(name, `//?/`)
}
// Colon is allowed as part of a drive name X:\
colonAt := strings.Index(name, ":")
if colonAt > 0 && colonAt < 3 && len(name) > colonAt+1 {
// Copy to name2, which is unfiltered
name2 += name[0 : colonAt+1]
name = name[colonAt+1:]
}
name2 += strings.Map(func(r rune) rune {
switch r {
case '<', '>', '"', '|', '?', '*', ':':
return '_'
}
return r
}, name)
if name2 != original && f != nil {
f.wmu.Lock()
if _, ok := f.warned[name]; !ok {
fs.Logf(f, "Replacing invalid characters in %q to %q", name, name2)
f.warned[name] = struct{}{}
}
f.wmu.Unlock()
}
return name2
return l
}
// Check the interfaces are satisfied

View File
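uncPath above prefixes absolute Windows paths with \\?\ so the OS accepts paths longer than MAX_PATH. A standalone sketch reproducing the helper with example conversions:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

var isAbsWinDrive = regexp.MustCompile(`^[a-zA-Z]\:\\`)

// uncPath mirrors the helper above: add the \\?\ long-path prefix to
// absolute drive paths and UNC shares, and pass everything else through.
func uncPath(l string) string {
	if strings.HasPrefix(l, `\\`) {
		if strings.HasPrefix(l, `\\?\`) {
			return l // already a long path
		}
		return `\\?\UNC\` + strings.TrimPrefix(l, `\\`)
	}
	if isAbsWinDrive.MatchString(l) {
		return `\\?\` + l
	}
	return l
}

func main() {
	fmt.Println(uncPath(`C:\Windows\Folder`))      // \\?\C:\Windows\Folder
	fmt.Println(uncPath(`\\server\share\Desktop`)) // \\?\UNC\server\share\Desktop
	fmt.Println(uncPath(`relative\path`))          // unchanged
}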

@@ -25,19 +25,6 @@ func TestMain(m *testing.M) {
fstest.TestMain(m)
}
func TestMapper(t *testing.T) {
m := newMapper()
assert.Equal(t, m.m, map[string]string{})
assert.Equal(t, "potato", m.Save("potato", "potato"))
assert.Equal(t, m.m, map[string]string{})
assert.Equal(t, "-r'áö", m.Save("-r?'a´o¨", "-r'áö"))
assert.Equal(t, m.m, map[string]string{
"-r'áö": "-r?'a´o¨",
})
assert.Equal(t, "potato", m.Load("potato"))
assert.Equal(t, "-r?'a´o¨", m.Load("-r'áö"))
}
// Test copy with source file that's updating
func TestUpdatingCheck(t *testing.T) {
r := fstest.NewRun(t)
@@ -57,7 +44,7 @@ func TestUpdatingCheck(t *testing.T) {
require.NoError(t, err)
o := &Object{size: fi.Size(), modTime: fi.ModTime(), fs: &Fs{}}
wrappedFd := readers.NewLimitedReadCloser(fd, -1)
hash, err := hash.NewMultiHasherTypes(hash.Supported)
hash, err := hash.NewMultiHasherTypes(hash.Supported())
require.NoError(t, err)
in := localOpenFile{
o: o,

View File

@@ -1,29 +1,26 @@
package local
import (
"runtime"
"testing"
)
var uncTestPaths = []string{
"C:\\Ba*d\\P|a?t<h>\\Windows\\Folder",
"C:/Ba*d/P|a?t<h>/Windows\\Folder",
"C:\\Windows\\Folder",
"\\\\?\\C:\\Windows\\Folder",
"//?/C:/Windows/Folder",
"\\\\?\\UNC\\server\\share\\Desktop",
"\\\\?\\unC\\server\\share\\Desktop\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path",
"\\\\server\\share\\Desktop\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path",
"C:\\Desktop\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path",
"C:\\AbsoluteToRoot\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path\\Very Long path",
"\\\\server\\share\\Desktop",
"\\\\?\\UNC\\\\share\\folder\\Desktop",
"\\\\server\\share",
`C:\Ba*d\P|a?t<h>\Windows\Folder`,
`C:\Windows\Folder`,
`\\?\C:\Windows\Folder`,
`\\?\UNC\server\share\Desktop`,
`\\?\unC\server\share\Desktop\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path`,
`\\server\share\Desktop\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path`,
`C:\Desktop\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path`,
`C:\AbsoluteToRoot\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path\Very Long path`,
`\\server\share\Desktop`,
`\\?\UNC\\share\folder\Desktop`,
`\\server\share`,
}
var uncTestPathsResults = []string{
`\\?\C:\Ba*d\P|a?t<h>\Windows\Folder`,
`\\?\C:\Ba*d\P|a?t<h>\Windows\Folder`,
`\\?\C:\Windows\Folder`,
`\\?\C:\Windows\Folder`,
`\\?\C:\Windows\Folder`,
`\\?\UNC\server\share\Desktop`,
@@ -51,38 +48,23 @@ func TestUncPaths(t *testing.T) {
}
}
var utf8Tests = [][2]string{
{"ABC", "ABC"},
{string([]byte{0x80}), "�"},
{string([]byte{'a', 0x80, 'b'}), "a�b"},
}
func TestCleanRemote(t *testing.T) {
f := &Fs{}
f.warned = make(map[string]struct{})
for _, test := range utf8Tests {
got := f.cleanRemote(test[0])
expect := test[1]
if got != expect {
t.Fatalf("got %q, expected %q", got, expect)
}
}
}
// Test Windows character replacements
var testsWindows = [][2]string{
{`c:\temp`, `c:\temp`},
{`\\?\UNC\theserver\dir\file.txt`, `\\?\UNC\theserver\dir\file.txt`},
{`//?/UNC/theserver/dir\file.txt`, `//?/UNC/theserver/dir\file.txt`},
{"c:/temp", "c:/temp"},
{"/temp/file.txt", "/temp/file.txt"},
{`!\"#¤%&/()=;:*^?+-`, "!\\_#¤%&/()=;__^_+-"},
{`<>"|?*:&\<>"|?*:&\<>"|?*:&`, "_______&\\_______&\\_______&"},
{`//?/UNC/theserver/dir\file.txt`, `\\?\UNC\theserver\dir\file.txt`},
{`c:/temp`, `c:\temp`},
{`/temp/file.txt`, `\temp\file.txt`},
{`c:\!\"#¤%&/()=;:*^?+-`, `c:\!\#¤%&\()=;^+-`},
{`c:\<>"|?*:&\<>"|?*:&\<>"|?*:&`, `c:\&\&\&`},
}
func TestCleanWindows(t *testing.T) {
if runtime.GOOS != "windows" {
t.Skipf("windows only")
}
for _, test := range testsWindows {
got := cleanWindowsName(nil, test[0])
got := cleanRootPath(test[0], true)
expect := test[1]
if got != expect {
t.Fatalf("got %q, expected %q", got, expect)

backend/mailru/api/bin.go Normal file

@@ -0,0 +1,107 @@
package api
// BIN protocol constants
const (
BinContentType = "application/x-www-form-urlencoded"
TreeIDLength = 12
DunnoNodeIDLength = 16
)
// Operations in binary protocol
const (
OperationAddFile = 103 // 0x67
OperationRename = 105 // 0x69
OperationCreateFolder = 106 // 0x6A
OperationFolderList = 117 // 0x75
OperationSharedFoldersList = 121 // 0x79
// TODO investigate opcodes below
Operation154MaybeItemInfo = 154 // 0x9A
Operation102MaybeAbout = 102 // 0x66
Operation104MaybeDelete = 104 // 0x68
)
// CreateDir protocol constants
const (
MkdirResultOK = 0
MkdirResultSourceNotExists = 1
MkdirResultAlreadyExists = 4
MkdirResultExistsDifferentCase = 9
MkdirResultInvalidName = 10
MkdirResultFailed254 = 254
)
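// mkdirError is a hypothetical sketch, not part of the backend: one way to
// map the CreateDir result codes above onto rclone errors. It assumes
// "github.com/rclone/rclone/fs" and "github.com/pkg/errors" are imported,
// and treating the different-case code as "directory exists" is a guess.
func mkdirError(code int) error {
	switch code {
	case MkdirResultOK:
		return nil
	case MkdirResultSourceNotExists:
		return fs.ErrorDirNotFound
	case MkdirResultAlreadyExists, MkdirResultExistsDifferentCase:
		return fs.ErrorDirExists
	case MkdirResultInvalidName:
		return errors.New("invalid folder name")
	default:
		return errors.Errorf("mkdir failed with code %d", code)
	}
}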
// Move result codes
const (
MoveResultOK = 0
MoveResultSourceNotExists = 1
MoveResultFailed002 = 2
MoveResultAlreadyExists = 4
MoveResultFailed005 = 5
MoveResultFailed254 = 254
)
// AddFile result codes
const (
AddResultOK = 0
AddResultError01 = 1
AddResultDunno04 = 4
AddResultWrongPath = 5
AddResultNoFreeSpace = 7
AddResultDunno09 = 9
AddResultInvalidName = 10
AddResultNotModified = 12
AddResultFailedA = 253
AddResultFailedB = 254
)
// List request options
const (
ListOptTotalSpace = 1
ListOptDelete = 2
ListOptFingerprint = 4
ListOptUnknown8 = 8
ListOptUnknown16 = 16
ListOptFolderSize = 32
ListOptUsedSpace = 64
ListOptUnknown128 = 128
ListOptUnknown256 = 256
)
// ListOptDefaults ...
const ListOptDefaults = ListOptUnknown128 | ListOptUnknown256 | ListOptFolderSize | ListOptTotalSpace | ListOptUsedSpace
// List parse flags
const (
ListParseDone = 0
ListParseReadItem = 1
ListParsePin = 2
ListParsePinUpper = 3
ListParseUnknown15 = 15
)
// List operation results
const (
ListResultOK = 0
ListResultNotExists = 1
ListResultDunno02 = 2
ListResultDunno03 = 3
ListResultAlreadyExists04 = 4
ListResultDunno05 = 5
ListResultDunno06 = 6
ListResultDunno07 = 7
ListResultDunno08 = 8
ListResultAlreadyExists09 = 9
ListResultDunno10 = 10
ListResultDunno11 = 11
ListResultDunno12 = 12
ListResultFailedB = 253
ListResultFailedA = 254
)
// Directory item types
const (
ListItemMountPoint = 0
ListItemFile = 1
ListItemFolder = 2
ListItemSharedFolder = 3
)


@@ -0,0 +1,225 @@
package api
// BIN protocol helpers
import (
"bufio"
"bytes"
"encoding/binary"
"io"
"log"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/lib/readers"
)
// protocol errors
var (
ErrorPrematureEOF = errors.New("Premature EOF")
ErrorInvalidLength = errors.New("Invalid length")
ErrorZeroTerminate = errors.New("String must end with zero")
)
// BinWriter is a binary protocol writer
type BinWriter struct {
b *bytes.Buffer // growing byte buffer
a []byte // temporary buffer for next varint
}
// NewBinWriter creates a binary protocol helper
func NewBinWriter() *BinWriter {
return &BinWriter{
b: new(bytes.Buffer),
a: make([]byte, binary.MaxVarintLen64),
}
}
// Bytes returns binary data
func (w *BinWriter) Bytes() []byte {
return w.b.Bytes()
}
// Reader returns io.Reader with binary data
func (w *BinWriter) Reader() io.Reader {
return bytes.NewReader(w.b.Bytes())
}
// WritePu16 writes a short as unsigned varint
func (w *BinWriter) WritePu16(val int) {
if val < 0 || val > 65535 {
log.Fatalf("Invalid UInt16 %v", val)
}
w.WritePu64(int64(val))
}
// WritePu32 writes a signed long as unsigned varint
func (w *BinWriter) WritePu32(val int64) {
if val < 0 || val > 4294967295 {
log.Fatalf("Invalid UInt32 %v", val)
}
w.WritePu64(val)
}
// WritePu64 writes a non-negative int64 as an unsigned varint
func (w *BinWriter) WritePu64(val int64) {
if val < 0 {
log.Fatalf("Invalid UInt64 %v", val)
}
w.b.Write(w.a[:binary.PutUvarint(w.a, uint64(val))])
}
// WriteString writes a zero-terminated string
func (w *BinWriter) WriteString(str string) {
buf := []byte(str)
w.WritePu64(int64(len(buf) + 1))
w.b.Write(buf)
w.b.WriteByte(0)
}
// Write writes a byte buffer
func (w *BinWriter) Write(buf []byte) {
w.b.Write(buf)
}
// WriteWithLength writes a byte buffer prepended with its length as varint
func (w *BinWriter) WriteWithLength(buf []byte) {
w.WritePu64(int64(len(buf)))
w.b.Write(buf)
}
// BinReader is a binary protocol reader helper
type BinReader struct {
b *bufio.Reader
count *readers.CountingReader
err error // keeps the first error encountered
}
// NewBinReader creates a binary protocol reader helper
func NewBinReader(reader io.Reader) *BinReader {
r := &BinReader{}
r.count = readers.NewCountingReader(reader)
r.b = bufio.NewReader(r.count)
return r
}
// Count returns number of bytes read
func (r *BinReader) Count() uint64 {
return r.count.BytesRead()
}
// Error returns first encountered error or nil
func (r *BinReader) Error() error {
return r.err
}
// check() keeps the first error encountered in a stream
func (r *BinReader) check(err error) bool {
if err == nil {
return true
}
if r.err == nil {
// keep the first error
r.err = err
}
if err != io.EOF {
log.Fatalf("Error parsing response: %v", err)
}
return false
}
// ReadByteAsInt reads a single byte as an int, returning -1 on EOF or error
func (r *BinReader) ReadByteAsInt() int {
if octet, err := r.b.ReadByte(); r.check(err) {
return int(octet)
}
return -1
}
// ReadByteAsShort reads a single byte as an int16, returning -1 on EOF or error
func (r *BinReader) ReadByteAsShort() int16 {
if octet, err := r.b.ReadByte(); r.check(err) {
return int16(octet)
}
return -1
}
// ReadIntSpl reads two bytes as little-endian uint16, returns -1 for EOF or errors
func (r *BinReader) ReadIntSpl() int {
var val uint16
if r.check(binary.Read(r.b, binary.LittleEndian, &val)) {
return int(val)
}
return -1
}
// ReadULong reads an unsigned varint, returning the uint64 equivalent of -1 on EOF or error
func (r *BinReader) ReadULong() uint64 {
if val, err := binary.ReadUvarint(r.b); r.check(err) {
return val
}
return 0xffffffffffffffff
}
// ReadPu32 reads an unsigned varint as an int64, returning -1 on EOF or error
func (r *BinReader) ReadPu32() int64 {
if val, err := binary.ReadUvarint(r.b); r.check(err) {
return int64(val)
}
return -1
}
// ReadNBytes reads the given number of bytes, returning invalid data on EOF or error
func (r *BinReader) ReadNBytes(len int) []byte {
buf := make([]byte, len)
n, err := r.b.Read(buf)
if r.check(err) {
return buf
}
if n != len {
r.check(ErrorPrematureEOF)
}
return buf
}
// ReadBytesByLength reads a varint buffer length followed by that many bytes
func (r *BinReader) ReadBytesByLength() []byte {
len := r.ReadPu32()
if len < 0 {
r.check(ErrorInvalidLength)
return []byte{}
}
return r.ReadNBytes(int(len))
}
// ReadString reads a length-prefixed, zero-terminated string
func (r *BinReader) ReadString() string {
len := int(r.ReadPu32())
if len < 1 {
r.check(ErrorInvalidLength)
return ""
}
buf := make([]byte, len-1)
n, err := r.b.Read(buf)
if !r.check(err) {
return ""
}
if n != len-1 {
r.check(ErrorPrematureEOF)
return ""
}
zeroByte, err := r.b.ReadByte()
if !r.check(err) {
return ""
}
if zeroByte != 0 {
r.check(ErrorZeroTerminate)
return ""
}
return string(buf)
}
// ReadDate reads a Unix encoded time
func (r *BinReader) ReadDate() time.Time {
return time.Unix(r.ReadPu32(), 0)
}
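// exampleRoundTrip is a hypothetical sketch, not part of the backend: it
// shows how the writer and reader above pair up. Numbers travel as unsigned
// varints and strings as a varint length plus a trailing zero byte, so
// whatever WritePu16/WritePu32 wrote can be read back with ReadPu32
// (assumes "fmt" is also imported; the path "/backups" is invented).
func exampleRoundTrip() {
	w := NewBinWriter()
	w.WritePu16(OperationFolderList) // opcode 117, one varint byte
	w.WriteString("/backups")        // length+1 varint, bytes, zero byte
	w.WritePu32(0)                   // revision

	r := NewBinReader(w.Reader())
	fmt.Println(r.ReadPu32(), r.ReadString(), r.ReadPu32()) // 117 /backups 0
	if err := r.Error(); err != nil {
		log.Fatalf("parse failed: %v", err) // check() latched the first error
	}
}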

backend/mailru/api/m1.go Normal file

@@ -0,0 +1,248 @@
package api
import (
"fmt"
)
// M1 protocol constants and structures
const (
APIServerURL = "https://cloud.mail.ru"
PublicLinkURL = "https://cloud.mail.ru/public/"
DispatchServerURL = "https://dispatcher.cloud.mail.ru"
OAuthURL = "https://o2.mail.ru/token"
OAuthClientID = "cloud-win"
)
// ServerErrorResponse represents an erroneous API response.
type ServerErrorResponse struct {
Message string `json:"body"`
Time int64 `json:"time"`
Status int `json:"status"`
}
func (e *ServerErrorResponse) Error() string {
return fmt.Sprintf("server error %d (%s)", e.Status, e.Message)
}
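// decodeServerError is a hypothetical sketch, not part of the backend:
// decoding an error payload into ServerErrorResponse using the JSON tags
// above (assumes "encoding/json" is imported; the payload is invented).
func decodeServerError() error {
	payload := []byte(`{"body":"access denied","time":1570000000,"status":403}`)
	apiErr := new(ServerErrorResponse)
	if err := json.Unmarshal(payload, apiErr); err != nil {
		return err
	}
	return apiErr // formats as: server error 403 (access denied)
}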
// FileErrorResponse represents an erroneous API response for a file
type FileErrorResponse struct {
Body struct {
Home struct {
Value string `json:"value"`
Error string `json:"error"`
} `json:"home"`
} `json:"body"`
Status int `json:"status"`
Account string `json:"email,omitempty"`
Time int64 `json:"time,omitempty"`
Message string // non-json, calculated field
}
func (e *FileErrorResponse) Error() string {
return fmt.Sprintf("file error %d (%s)", e.Status, e.Body.Home.Error)
}
// UserInfoResponse contains account metadata
type UserInfoResponse struct {
Body struct {
AccountType string `json:"account_type"`
AccountVerified bool `json:"account_verified"`
Cloud struct {
Beta struct {
Allowed bool `json:"allowed"`
Asked bool `json:"asked"`
} `json:"beta"`
Billing struct {
ActiveCostID string `json:"active_cost_id"`
ActiveRateID string `json:"active_rate_id"`
AutoProlong bool `json:"auto_prolong"`
Basequota int64 `json:"basequota"`
Enabled bool `json:"enabled"`
Expires int `json:"expires"`
Prolong bool `json:"prolong"`
Promocodes struct {
} `json:"promocodes"`
Subscription []interface{} `json:"subscription"`
Version string `json:"version"`
} `json:"billing"`
Bonuses struct {
CameraUpload bool `json:"camera_upload"`
Complete bool `json:"complete"`
Desktop bool `json:"desktop"`
Feedback bool `json:"feedback"`
Links bool `json:"links"`
Mobile bool `json:"mobile"`
Registration bool `json:"registration"`
} `json:"bonuses"`
Enable struct {
Sharing bool `json:"sharing"`
} `json:"enable"`
FileSizeLimit int64 `json:"file_size_limit"`
Space struct {
BytesTotal int64 `json:"bytes_total"`
BytesUsed int `json:"bytes_used"`
Overquota bool `json:"overquota"`
} `json:"space"`
} `json:"cloud"`
Cloudflags struct {
Exists bool `json:"exists"`
} `json:"cloudflags"`
Domain string `json:"domain"`
Login string `json:"login"`
Newbie bool `json:"newbie"`
UI struct {
ExpandLoader bool `json:"expand_loader"`
Kind string `json:"kind"`
Sidebar bool `json:"sidebar"`
Sort struct {
Order string `json:"order"`
Type string `json:"type"`
} `json:"sort"`
Thumbs bool `json:"thumbs"`
} `json:"ui"`
} `json:"body"`
Email string `json:"email"`
Status int `json:"status"`
Time int64 `json:"time"`
}
// ListItem ...
type ListItem struct {
Count struct {
Folders int `json:"folders"`
Files int `json:"files"`
} `json:"count,omitempty"`
Kind string `json:"kind"`
Type string `json:"type"`
Name string `json:"name"`
Home string `json:"home"`
Size int64 `json:"size"`
Mtime int64 `json:"mtime,omitempty"`
Hash string `json:"hash,omitempty"`
VirusScan string `json:"virus_scan,omitempty"`
Tree string `json:"tree,omitempty"`
Grev int `json:"grev,omitempty"`
Rev int `json:"rev,omitempty"`
}
// ItemInfoResponse ...
type ItemInfoResponse struct {
Email string `json:"email"`
Body ListItem `json:"body"`
Time int64 `json:"time"`
Status int `json:"status"`
}
// FolderInfoResponse ...
type FolderInfoResponse struct {
Body struct {
Count struct {
Folders int `json:"folders"`
Files int `json:"files"`
} `json:"count"`
Tree string `json:"tree"`
Name string `json:"name"`
Grev int `json:"grev"`
Size int64 `json:"size"`
Sort struct {
Order string `json:"order"`
Type string `json:"type"`
} `json:"sort"`
Kind string `json:"kind"`
Rev int `json:"rev"`
Type string `json:"type"`
Home string `json:"home"`
List []ListItem `json:"list"`
} `json:"body,omitempty"`
Time int64 `json:"time"`
Status int `json:"status"`
Email string `json:"email"`
}
// ShardInfoResponse ...
type ShardInfoResponse struct {
Email string `json:"email"`
Body struct {
Video []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"video"`
ViewDirect []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"view_direct"`
WeblinkView []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"weblink_view"`
WeblinkVideo []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"weblink_video"`
WeblinkGet []struct {
Count int `json:"count"`
URL string `json:"url"`
} `json:"weblink_get"`
Stock []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"stock"`
WeblinkThumbnails []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"weblink_thumbnails"`
PublicUpload []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"public_upload"`
Auth []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"auth"`
Web []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"web"`
View []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"view"`
Upload []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"upload"`
Get []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"get"`
Thumbnails []struct {
Count string `json:"count"`
URL string `json:"url"`
} `json:"thumbnails"`
} `json:"body"`
Time int64 `json:"time"`
Status int `json:"status"`
}
// CleanupResponse ...
type CleanupResponse struct {
Email string `json:"email"`
Time int64 `json:"time"`
StatusStr string `json:"status"`
}
// GenericResponse ...
type GenericResponse struct {
Email string `json:"email"`
Time int64 `json:"time"`
Status int `json:"status"`
// ignore other fields
}
// GenericBodyResponse ...
type GenericBodyResponse struct {
Email string `json:"email"`
Body string `json:"body"`
Time int64 `json:"time"`
Status int `json:"status"`
}

backend/mailru/mailru.go Normal file

File diff suppressed because it is too large


@@ -0,0 +1,18 @@
// Test Mailru filesystem interface
package mailru_test
import (
"testing"
"github.com/rclone/rclone/backend/mailru"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestMailru:",
NilObject: (*mailru.Object)(nil),
SkipBadWindowsCharacters: true,
})
}


@@ -0,0 +1,134 @@
// Package mrhash implements the mailru hash, which is a modified SHA1.
// If file size is less than or equal to the SHA1 digest size (20 bytes),
// its hash is simply its data right-padded with zero bytes.
// Hash sum of a larger file is computed as a SHA1 sum of the start
// string "mrCloud" followed by the file data bytes and a decimal
// representation of the data length.
package mrhash
import (
"crypto/sha1"
"encoding"
"encoding/hex"
"errors"
"hash"
"strconv"
)
const (
// BlockSize of the checksum in bytes.
BlockSize = sha1.BlockSize
// Size of the checksum in bytes.
Size = sha1.Size
startString = "mrCloud"
hashError = "hash function returned error"
)
// Global errors
var (
ErrorInvalidHash = errors.New("invalid hash")
)
type digest struct {
total int // bytes written into hash so far
sha hash.Hash // underlying SHA1
small []byte // small content
}
// New returns a new hash.Hash computing the Mailru checksum.
func New() hash.Hash {
d := &digest{}
d.Reset()
return d
}
// Write writes len(p) bytes from p to the underlying data stream. It returns
// the number of bytes written from p (0 <= n <= len(p)) and any error
// encountered that caused the write to stop early. Write must return a non-nil
// error if it returns n < len(p). Write must not modify the slice data, even
// temporarily.
//
// Implementations must not retain p.
func (d *digest) Write(p []byte) (n int, err error) {
n, err = d.sha.Write(p)
if err != nil {
panic(hashError)
}
d.total += n
if d.total <= Size {
d.small = append(d.small, p...)
}
return n, nil
}
// Sum appends the current hash to b and returns the resulting slice.
// It does not change the underlying hash state.
func (d *digest) Sum(b []byte) []byte {
// If content is small, return it padded to Size
if d.total <= Size {
padded := make([]byte, Size)
copy(padded, d.small)
return append(b, padded...)
}
endString := strconv.Itoa(d.total)
copy, err := cloneSHA1(d.sha)
if err == nil {
_, err = copy.Write([]byte(endString))
}
if err != nil {
panic(hashError)
}
return copy.Sum(b)
}
// cloneSHA1 clones state of SHA1 hash
func cloneSHA1(orig hash.Hash) (clone hash.Hash, err error) {
state, err := orig.(encoding.BinaryMarshaler).MarshalBinary()
if err != nil {
return nil, err
}
clone = sha1.New()
err = clone.(encoding.BinaryUnmarshaler).UnmarshalBinary(state)
return
}
// Reset resets the Hash to its initial state.
func (d *digest) Reset() {
d.sha = sha1.New()
_, _ = d.sha.Write([]byte(startString))
d.total = 0
}
// Size returns the number of bytes Sum will return.
func (d *digest) Size() int {
return Size
}
// BlockSize returns the hash's underlying block size.
// The Write method must be able to accept any amount
// of data, but it may operate more efficiently if all writes
// are a multiple of the block size.
func (d *digest) BlockSize() int {
return BlockSize
}
// Sum returns the Mailru checksum of the data.
func Sum(data []byte) []byte {
var d digest
d.Reset()
_, _ = d.Write(data)
return d.Sum(nil)
}
// DecodeString converts a string to the Mailru hash
func DecodeString(s string) ([]byte, error) {
b, err := hex.DecodeString(s)
if err != nil || len(b) != Size {
return nil, ErrorInvalidHash
}
return b, nil
}
// must implement this interface
var (
_ hash.Hash = (*digest)(nil)
)
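// mailruSum is a hypothetical standalone sketch of the algorithm in the
// package comment, computed without the streaming digest above: inputs of
// up to 20 bytes are zero-padded, larger ones hashed as
// SHA1(startString + data + decimal length). For a single Write this
// matches what digest.Sum produces.
func mailruSum(data []byte) []byte {
	if len(data) <= sha1.Size { // 20 bytes: pad with zeros
		padded := make([]byte, sha1.Size)
		copy(padded, data)
		return padded
	}
	h := sha1.New()
	_, _ = h.Write([]byte(startString)) // "mrCloud", as in Reset above
	_, _ = h.Write(data)
	_, _ = h.Write([]byte(strconv.Itoa(len(data)))) // decimal length suffix
	return h.Sum(nil)
}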


@@ -0,0 +1,81 @@
package mrhash_test
import (
"encoding/hex"
"fmt"
"testing"
"github.com/rclone/rclone/backend/mailru/mrhash"
"github.com/stretchr/testify/assert"
)
func testChunk(t *testing.T, chunk int) {
data := make([]byte, chunk)
for i := 0; i < chunk; i++ {
data[i] = 'A'
}
for _, test := range []struct {
n int
want string
}{
{0, "0000000000000000000000000000000000000000"},
{1, "4100000000000000000000000000000000000000"},
{2, "4141000000000000000000000000000000000000"},
{19, "4141414141414141414141414141414141414100"},
{20, "4141414141414141414141414141414141414141"},
{21, "eb1d05e78a18691a5aa196a6c2b60cd40b5faafb"},
{22, "037e6d960601118a0639afbeff30fe716c66ed2d"},
{4096, "45a16aa192502b010280fb5b44274c601a91fd9f"},
{4194303, "fa019d5bd26498cf6abe35e0d61801bf19bf704b"},
{4194304, "5ed0e07aa6ea5c1beb9402b4d807258f27d40773"},
{4194305, "67bd0b9247db92e0e7d7e29a0947a50fedcb5452"},
{8388607, "41a8e2eb044c2e242971b5445d7be2a13fc0dd84"},
{8388608, "267a970917c624c11fe624276ec60233a66dc2c0"},
{8388609, "37b60b308d553d2732aefb62b3ea88f74acfa13f"},
} {
d := mrhash.New()
var toWrite int
for toWrite = test.n; toWrite >= chunk; toWrite -= chunk {
n, err := d.Write(data)
assert.Nil(t, err)
assert.Equal(t, chunk, n)
}
n, err := d.Write(data[:toWrite])
assert.Nil(t, err)
assert.Equal(t, toWrite, n)
got1 := hex.EncodeToString(d.Sum(nil))
assert.Equal(t, test.want, got1, fmt.Sprintf("when testing length %d", n))
got2 := hex.EncodeToString(d.Sum(nil))
assert.Equal(t, test.want, got2, fmt.Sprintf("when testing length %d (2nd sum)", n))
}
}
func TestHashChunk16M(t *testing.T) { testChunk(t, 16*1024*1024) }
func TestHashChunk8M(t *testing.T) { testChunk(t, 8*1024*1024) }
func TestHashChunk4M(t *testing.T) { testChunk(t, 4*1024*1024) }
func TestHashChunk2M(t *testing.T) { testChunk(t, 2*1024*1024) }
func TestHashChunk1M(t *testing.T) { testChunk(t, 1*1024*1024) }
func TestHashChunk64k(t *testing.T) { testChunk(t, 64*1024) }
func TestHashChunk32k(t *testing.T) { testChunk(t, 32*1024) }
func TestHashChunk2048(t *testing.T) { testChunk(t, 2048) }
func TestHashChunk2047(t *testing.T) { testChunk(t, 2047) }
func TestSumCalledTwice(t *testing.T) {
d := mrhash.New()
assert.NotPanics(t, func() { d.Sum(nil) })
d.Reset()
assert.NotPanics(t, func() { d.Sum(nil) })
assert.NotPanics(t, func() { d.Sum(nil) })
_, _ = d.Write([]byte{1})
assert.NotPanics(t, func() { d.Sum(nil) })
}
func TestSize(t *testing.T) {
d := mrhash.New()
assert.Equal(t, 20, d.Size())
}
func TestBlockSize(t *testing.T) {
d := mrhash.New()
assert.Equal(t, 64, d.BlockSize())
}


@@ -29,6 +29,7 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/pacer"
@@ -36,6 +37,8 @@ import (
mega "github.com/t3rm1n4l/go-mega"
)
const enc = encodings.Mega
const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
@@ -245,14 +248,15 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
// splitNodePath splits nodePath into / separated parts, returning nil if it
// should refer to the root
// should refer to the root.
// It also encodes the parts into backend specific encoding
func splitNodePath(nodePath string) (parts []string) {
nodePath = path.Clean(nodePath)
parts = strings.Split(nodePath, "/")
if len(parts) == 1 && (parts[0] == "." || parts[0] == "/") {
if nodePath == "." || nodePath == "/" {
return nil
}
return parts
nodePath = enc.FromStandardPath(nodePath)
return strings.Split(nodePath, "/")
}
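// encRoundTrip is a hypothetical sketch, not part of the backend: the enc
// declared above translates between rclone's standard names and the
// backend's restricted character set, and the two directions invert each
// other (assumes "fmt" is imported; the example path is invented).
func encRoundTrip() {
	name := `dir/with "quotes"`        // rclone's standard form
	wire := enc.FromStandardPath(name) // form sent to the mega API
	back := enc.ToStandardPath(wire)   // restores the standard form
	fmt.Println(wire, back)
}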
// findNode looks up the node for the path of the name given from the root given
@@ -418,7 +422,7 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
errors := 0
// similar to f.deleteNode(trash) but with HardDelete as true
for _, item := range items {
fs.Debugf(f, "Deleting trash %q", item.GetName())
fs.Debugf(f, "Deleting trash %q", enc.ToStandardName(item.GetName()))
deleteErr := f.pacer.Call(func() (bool, error) {
err := f.srv.Delete(item, true)
return shouldRetry(err)
@@ -500,7 +504,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
var iErr error
_, err = f.list(ctx, dirNode, func(info *mega.Node) bool {
remote := path.Join(dir, info.GetName())
remote := path.Join(dir, enc.ToStandardName(info.GetName()))
switch info.GetType() {
case mega.FOLDER, mega.ROOT, mega.INBOX, mega.TRASH:
d := fs.NewDir(remote, info.GetTimeStamp()).SetID(info.GetHash())
@@ -722,7 +726,7 @@ func (f *Fs) move(dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node
if srcLeaf != dstLeaf {
//log.Printf("rename %q to %q", srcLeaf, dstLeaf)
err = f.pacer.Call(func() (bool, error) {
err = f.srv.Rename(info, dstLeaf)
err = f.srv.Rename(info, enc.FromStandardName(dstLeaf))
return shouldRetry(err)
})
if err != nil {
@@ -871,13 +875,13 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
}
// move them into place
for _, info := range infos {
fs.Infof(srcDir, "merging %q", info.GetName())
fs.Infof(srcDir, "merging %q", enc.ToStandardName(info.GetName()))
err = f.pacer.Call(func() (bool, error) {
err = f.srv.Move(info, dstDirNode)
return shouldRetry(err)
})
if err != nil {
return errors.Wrapf(err, "MergeDirs move failed on %q in %v", info.GetName(), srcDir)
return errors.Wrapf(err, "MergeDirs move failed on %q in %v", enc.ToStandardName(info.GetName()), srcDir)
}
}
// rmdir (into trash) the now empty source directory
@@ -1120,7 +1124,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var u *mega.Upload
err = o.fs.pacer.Call(func() (bool, error) {
u, err = o.fs.srv.NewUpload(dirNode, leaf, size)
u, err = o.fs.srv.NewUpload(dirNode, enc.FromStandardName(leaf), size)
return shouldRetry(err)
})
if err != nil {


@@ -15,17 +15,18 @@ import (
"strings"
"time"
"github.com/rclone/rclone/lib/atexit"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/onedrive/api"
"github.com/rclone/rclone/backend/onedrive/quickxorhash"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
@@ -34,6 +35,8 @@ import (
"golang.org/x/oauth2"
)
const enc = encodings.OneDrive
const (
rcloneClientID = "b15665d9-eda6-4092-8539-0eec376afd59"
rcloneEncryptedClientSecret = "_JUdzh3LnKNqSPcf4Wu5fgMFIQOI8glZu_akYgR8yf6egowNBg-R"
@@ -63,15 +66,20 @@ var (
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
}
// QuickXorHashType is the hash.Type for OneDrive
QuickXorHashType hash.Type
)
// Register with Fs
func init() {
QuickXorHashType = hash.RegisterHash("QuickXorHash", 40, quickxorhash.New)
fs.Register(&fs.RegInfo{
Name: "onedrive",
Description: "Microsoft OneDrive",
NewFs: NewFs,
Config: func(name string, m configmap.Mapper) {
ctx := context.TODO()
err := oauthutil.Config("onedrive", name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
@@ -143,7 +151,7 @@ func init() {
}
sites := siteResponse{}
_, err := srv.CallJSON(&opts, nil, &sites)
_, err := srv.CallJSON(ctx, &opts, nil, &sites)
if err != nil {
log.Fatalf("Failed to query available sites: %v", err)
}
@@ -172,7 +180,7 @@ func init() {
// query Microsoft Graph
if finalDriveID == "" {
drives := drivesResponse{}
_, err := srv.CallJSON(&opts, nil, &drives)
_, err := srv.CallJSON(ctx, &opts, nil, &drives)
if err != nil {
log.Fatalf("Failed to query available drives: %v", err)
}
@@ -194,7 +202,7 @@ func init() {
RootURL: graphURL,
Path: "/drives/" + finalDriveID + "/root"}
var rootItem api.Item
_, err = srv.CallJSON(&opts, nil, &rootItem)
_, err = srv.CallJSON(ctx, &opts, nil, &rootItem)
if err != nil {
log.Fatalf("Failed to query root for drive %s: %v", finalDriveID, err)
}
@@ -217,9 +225,9 @@ func init() {
Help: "Microsoft App Client Secret\nLeave blank normally.",
}, {
Name: "chunk_size",
Help: `Chunk size to upload files with - must be multiple of 320k.
Help: `Chunk size to upload files with - must be multiple of 320k (327,680 bytes).
Above this size files will be chunked - must be multiple of 320k. Note
Above this size files will be chunked - must be multiple of 320k (327,680 bytes). Note
that the chunks will be buffered into memory.`,
Default: defaultChunkSize,
Advanced: true,
@@ -343,10 +351,10 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
// instead of simply using `drives/driveID/root:/itemPath` because it works for
// "shared with me" folders in OneDrive Personal (See #2536, #2778)
// This path pattern comes from https://github.com/OneDrive/onedrive-api-docs/issues/908#issuecomment-417488480
func (f *Fs) readMetaDataForPathRelativeToID(normalizedID string, relPath string) (info *api.Item, resp *http.Response, err error) {
opts := newOptsCall(normalizedID, "GET", ":/"+withTrailingColon(rest.URLPathEscape(replaceReservedChars(relPath))))
func (f *Fs) readMetaDataForPathRelativeToID(ctx context.Context, normalizedID string, relPath string) (info *api.Item, resp *http.Response, err error) {
opts := newOptsCall(normalizedID, "GET", ":/"+withTrailingColon(rest.URLPathEscape(enc.FromStandardPath(relPath))))
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &info)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
})
@@ -367,11 +375,11 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.It
} else {
opts = rest.Opts{
Method: "GET",
Path: "/root:/" + rest.URLPathEscape(replaceReservedChars(path)),
Path: "/root:/" + rest.URLPathEscape(enc.FromStandardPath(path)),
}
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &info)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
})
return info, resp, err
@@ -426,7 +434,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.It
}
}
return f.readMetaDataForPathRelativeToID(baseNormalizedID, relPath)
return f.readMetaDataForPathRelativeToID(ctx, baseNormalizedID, relPath)
}
// errorHandler parses a non 2xx error response into an error
@@ -592,7 +600,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
if !ok {
return "", false, errors.New("couldn't find parent ID")
}
info, resp, err := f.readMetaDataForPathRelativeToID(pathID, leaf)
info, resp, err := f.readMetaDataForPathRelativeToID(ctx, pathID, leaf)
if err != nil {
if resp != nil && resp.StatusCode == http.StatusNotFound {
return "", false, nil
@@ -615,11 +623,11 @@ func (f *Fs) CreateDir(ctx context.Context, dirID, leaf string) (newID string, e
var info *api.Item
opts := newOptsCall(dirID, "POST", "/children")
mkdir := api.CreateItemRequest{
Name: replaceReservedChars(leaf),
Name: enc.FromStandardName(leaf),
ConflictBehavior: "fail",
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &mkdir, &info)
resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info)
return shouldRetry(resp, err)
})
if err != nil {
@@ -642,7 +650,7 @@ type listAllFn func(*api.Item) bool
// Lists the directory required calling the user function on each item found
//
// If the user fn ever returns true then it early exits with found = true
func (f *Fs) listAll(dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
// Top parameter asks for bigger pages of data
// https://dev.onedrive.com/odata/optional-query-parameters.htm
opts := newOptsCall(dirID, "GET", "/children?$top=1000")
@@ -651,7 +659,7 @@ OUTER:
var result api.ListChildrenResponse
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &result)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
@@ -675,7 +683,7 @@ OUTER:
if item.Deleted != nil {
continue
}
item.Name = restoreReservedChars(item.GetName())
item.Name = enc.ToStandardName(item.GetName())
if fn(item) {
found = true
break OUTER
@@ -709,7 +717,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return nil, err
}
var iErr error
_, err = f.listAll(directoryID, false, false, func(info *api.Item) bool {
_, err = f.listAll(ctx, directoryID, false, false, func(info *api.Item) bool {
if !f.opt.ExposeOneNoteFiles && info.GetPackageType() == api.PackageTypeOneNote {
fs.Debugf(info.Name, "OneNote file not shown in directory listing")
return false
@@ -793,12 +801,12 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
}
// deleteObject removes an object by ID
func (f *Fs) deleteObject(id string) error {
func (f *Fs) deleteObject(ctx context.Context, id string) error {
opts := newOptsCall(id, "DELETE", "")
opts.NoResponse = true
return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(&opts)
resp, err := f.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
}
@@ -821,7 +829,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
}
if check {
// check to see if there are any items
found, err := f.listAll(rootID, false, false, func(item *api.Item) bool {
found, err := f.listAll(ctx, rootID, false, false, func(item *api.Item) bool {
return true
})
if err != nil {
@@ -831,7 +839,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
return fs.ErrorDirectoryNotEmpty
}
}
err = f.deleteObject(rootID)
err = f.deleteObject(ctx, rootID)
if err != nil {
return err
}
@@ -912,8 +920,8 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
srcPath := srcObj.fs.rootSlash() + srcObj.remote
dstPath := f.rootSlash() + remote
srcPath := srcObj.rootPath()
dstPath := f.rootPath(remote)
if strings.ToLower(srcPath) == strings.ToLower(dstPath) {
return nil, errors.Errorf("can't copy %q -> %q as are same name when lowercase", srcPath, dstPath)
}
@@ -931,7 +939,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
id, dstDriveID, _ := parseNormalizedID(directoryID)
replacedLeaf := replaceReservedChars(leaf)
replacedLeaf := enc.FromStandardName(leaf)
copyReq := api.CopyItemRequest{
Name: &replacedLeaf,
ParentReference: api.ItemReference{
@@ -941,7 +949,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &copyReq, nil)
resp, err = f.srv.CallJSON(ctx, &opts, &copyReq, nil)
return shouldRetry(resp, err)
})
if err != nil {
@@ -1015,7 +1023,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
opts := newOptsCall(srcObj.id, "PATCH", "")
move := api.MoveItemRequest{
Name: replaceReservedChars(leaf),
Name: enc.FromStandardName(leaf),
ParentReference: &api.ItemReference{
DriveID: dstDriveID,
ID: id,
@@ -1029,7 +1037,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
var resp *http.Response
var info api.Item
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &move, &info)
resp, err = f.srv.CallJSON(ctx, &opts, &move, &info)
return shouldRetry(resp, err)
})
if err != nil {
@@ -1122,7 +1130,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
}
// Get timestamps of src so they can be preserved
srcInfo, _, err := srcFs.readMetaDataForPathRelativeToID(srcID, "")
srcInfo, _, err := srcFs.readMetaDataForPathRelativeToID(ctx, srcID, "")
if err != nil {
return err
}
@@ -1130,7 +1138,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
// Do the move
opts := newOptsCall(srcID, "PATCH", "")
move := api.MoveItemRequest{
Name: replaceReservedChars(leaf),
Name: enc.FromStandardName(leaf),
ParentReference: &api.ItemReference{
DriveID: dstDriveID,
ID: parsedDstDirID,
@@ -1144,7 +1152,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
var resp *http.Response
var info api.Item
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &move, &info)
resp, err = f.srv.CallJSON(ctx, &opts, &move, &info)
return shouldRetry(resp, err)
})
if err != nil {
@@ -1170,7 +1178,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &drive)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &drive)
return shouldRetry(resp, err)
})
if err != nil {
@@ -1191,12 +1199,12 @@ func (f *Fs) Hashes() hash.Set {
if f.driveType == driveTypePersonal {
return hash.Set(hash.SHA1)
}
return hash.Set(hash.QuickXorHash)
return hash.Set(QuickXorHashType)
}
// PublicLink returns a link for downloading without account.
func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err error) {
info, _, err := f.readMetaDataForPath(ctx, f.srvPath(remote))
info, _, err := f.readMetaDataForPath(ctx, f.rootPath(remote))
if err != nil {
return "", err
}
@@ -1210,7 +1218,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
var resp *http.Response
var result api.CreateShareLinkResponse
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, &share, &result)
resp, err = f.srv.CallJSON(ctx, &opts, &share, &result)
return shouldRetry(resp, err)
})
if err != nil {
@@ -1240,9 +1248,19 @@ func (o *Object) Remote() string {
return o.remote
}
// rootPath returns a path for use in server given a remote
func (f *Fs) rootPath(remote string) string {
return f.rootSlash() + remote
}
// rootPath returns a path for use in local functions
func (o *Object) rootPath() string {
return o.fs.rootPath(o.remote)
}
// srvPath returns a path for use in server given a remote
func (f *Fs) srvPath(remote string) string {
return replaceReservedChars(f.rootSlash() + remote)
return enc.FromStandardPath(f.rootSlash() + remote)
}
// srvPath returns a path for use in server
@@ -1257,7 +1275,7 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
return o.sha1, nil
}
} else {
if t == hash.QuickXorHash {
if t == QuickXorHashType {
return o.quickxorhash, nil
}
}
@@ -1319,7 +1337,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
if o.hasMetaData {
return nil
}
info, _, err := o.fs.readMetaDataForPath(ctx, o.srvPath())
info, _, err := o.fs.readMetaDataForPath(ctx, o.rootPath())
if err != nil {
if apiErr, ok := err.(*api.Error); ok {
if apiErr.ErrorInfo.Code == "itemNotFound" {
@@ -1354,7 +1372,7 @@ func (o *Object) setModTime(ctx context.Context, modTime time.Time) (*api.Item,
opts = rest.Opts{
Method: "PATCH",
RootURL: rootURL,
Path: "/" + drive + "/items/" + trueDirID + ":/" + withTrailingColon(rest.URLPathEscape(leaf)),
Path: "/" + drive + "/items/" + trueDirID + ":/" + withTrailingColon(rest.URLPathEscape(enc.FromStandardName(leaf))),
}
} else {
opts = rest.Opts{
@@ -1370,7 +1388,7 @@ func (o *Object) setModTime(ctx context.Context, modTime time.Time) (*api.Item,
}
var info *api.Item
err := o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(&opts, &update, &info)
resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, &info)
return shouldRetry(resp, err)
})
return info, err
@@ -1405,7 +1423,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
opts.Options = options
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
if err != nil {
@@ -1428,7 +1446,8 @@ func (o *Object) createUploadSession(ctx context.Context, modTime time.Time) (re
opts = rest.Opts{
Method: "POST",
RootURL: rootURL,
Path: "/" + drive + "/items/" + id + ":/" + rest.URLPathEscape(replaceReservedChars(leaf)) + ":/createUploadSession",
Path: fmt.Sprintf("/%s/items/%s:/%s:/createUploadSession",
drive, id, rest.URLPathEscape(enc.FromStandardName(leaf))),
}
} else {
opts = rest.Opts{
@@ -1441,7 +1460,7 @@ func (o *Object) createUploadSession(ctx context.Context, modTime time.Time) (re
createRequest.Item.FileSystemInfo.LastModifiedDateTime = api.Timestamp(modTime)
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, &createRequest, &response)
resp, err = o.fs.srv.CallJSON(ctx, &opts, &createRequest, &response)
if apiErr, ok := err.(*api.Error); ok {
if apiErr.ErrorInfo.Code == "nameAlreadyExists" {
// Make the error more user-friendly
@@ -1454,7 +1473,7 @@ func (o *Object) createUploadSession(ctx context.Context, modTime time.Time) (re
}
// uploadFragment uploads a part
func (o *Object) uploadFragment(url string, start int64, totalSize int64, chunk io.ReadSeeker, chunkSize int64) (info *api.Item, err error) {
func (o *Object) uploadFragment(ctx context.Context, url string, start int64, totalSize int64, chunk io.ReadSeeker, chunkSize int64) (info *api.Item, err error) {
opts := rest.Opts{
Method: "PUT",
RootURL: url,
@@ -1467,7 +1486,7 @@ func (o *Object) uploadFragment(url string, start int64, totalSize int64, chunk
var body []byte
err = o.fs.pacer.Call(func() (bool, error) {
_, _ = chunk.Seek(0, io.SeekStart)
resp, err = o.fs.srv.Call(&opts)
resp, err = o.fs.srv.Call(ctx, &opts)
if err != nil {
return shouldRetry(resp, err)
}
@@ -1487,7 +1506,7 @@ func (o *Object) uploadFragment(url string, start int64, totalSize int64, chunk
}
// cancelUploadSession cancels an upload session
func (o *Object) cancelUploadSession(url string) (err error) {
func (o *Object) cancelUploadSession(ctx context.Context, url string) (err error) {
opts := rest.Opts{
Method: "DELETE",
RootURL: url,
@@ -1495,7 +1514,7 @@ func (o *Object) cancelUploadSession(url string) (err error) {
}
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
return
@@ -1517,7 +1536,7 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, size int64,
}
fs.Debugf(o, "Cancelling multipart upload")
cancelErr := o.cancelUploadSession(uploadURL)
cancelErr := o.cancelUploadSession(ctx, uploadURL)
if cancelErr != nil {
fs.Logf(o, "Failed to cancel multipart upload: %v", cancelErr)
}
@@ -1553,7 +1572,7 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, size int64,
}
seg := readers.NewRepeatableReader(io.LimitReader(in, n))
fs.Debugf(o, "Uploading segment %d/%d size %d", position, size, n)
info, err = o.uploadFragment(uploadURL, position, size, seg, n)
info, err = o.uploadFragment(ctx, uploadURL, position, size, seg, n)
if err != nil {
return nil, err
}
@@ -1580,7 +1599,7 @@ func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, size int64,
opts = rest.Opts{
Method: "PUT",
RootURL: rootURL,
Path: "/" + drive + "/items/" + trueDirID + ":/" + rest.URLPathEscape(leaf) + ":/content",
Path: "/" + drive + "/items/" + trueDirID + ":/" + rest.URLPathEscape(enc.FromStandardName(leaf)) + ":/content",
ContentLength: &size,
Body: in,
}
@@ -1594,7 +1613,7 @@ func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, size int64,
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, nil, &info)
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &info)
if apiErr, ok := err.(*api.Error); ok {
if apiErr.ErrorInfo.Code == "nameAlreadyExists" {
// Make the error more user-friendly
@@ -1646,7 +1665,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
return o.fs.deleteObject(o.id)
return o.fs.deleteObject(ctx, o.id)
}
// MimeType of an Object if known, "" otherwise


@@ -1,91 +0,0 @@
/*
Translate file names for one drive
OneDrive reserved characters
The following characters are OneDrive reserved characters, and can't
be used in OneDrive folder and file names.
onedrive-reserved = "/" / "\" / "*" / "<" / ">" / "?" / ":" / "|"
onedrive-business-reserved
= "/" / "\" / "*" / "<" / ">" / "?" / ":" / "|" / "#" / "%"
Note: Folder names can't end with a period (.).
Note: OneDrive for Business file or folder names cannot begin with a
tilde ('~').
*/
package onedrive
import (
"regexp"
"strings"
)
// charMap holds replacements for characters
//
// Onedrive has a restricted set of characters compared to other cloud
// storage systems, so we map these to the FULLWIDTH unicode
// equivalents
//
// http://unicode-search.net/unicode-namesearch.pl?term=SOLIDUS
var (
charMap = map[rune]rune{
'\\': '＼', // FULLWIDTH REVERSE SOLIDUS
'*': '＊', // FULLWIDTH ASTERISK
'<': '＜', // FULLWIDTH LESS-THAN SIGN
'>': '＞', // FULLWIDTH GREATER-THAN SIGN
'?': '？', // FULLWIDTH QUESTION MARK
':': '：', // FULLWIDTH COLON
'|': '｜', // FULLWIDTH VERTICAL LINE
'#': '＃', // FULLWIDTH NUMBER SIGN
'%': '％', // FULLWIDTH PERCENT SIGN
'"': '＂', // FULLWIDTH QUOTATION MARK - not on the list but seems to be reserved
'.': '．', // FULLWIDTH FULL STOP
'~': '～', // FULLWIDTH TILDE
' ': '␠', // SYMBOL FOR SPACE
}
invCharMap map[rune]rune
fixEndingInPeriod = regexp.MustCompile(`\.(/|$)`)
fixStartingWithTilde = regexp.MustCompile(`(/|^)~`)
fixStartingWithSpace = regexp.MustCompile(`(/|^) `)
)
func init() {
// Create inverse charMap
invCharMap = make(map[rune]rune, len(charMap))
for k, v := range charMap {
invCharMap[v] = k
}
}
// replaceReservedChars takes a path and substitutes any reserved
// characters in it
func replaceReservedChars(in string) string {
// Folder names can't end with a period '.'
in = fixEndingInPeriod.ReplaceAllString(in, string(charMap['.'])+"$1")
// OneDrive for Business file or folder names cannot begin with a tilde '~'
in = fixStartingWithTilde.ReplaceAllString(in, "$1"+string(charMap['~']))
// Apparently file names can't start with space either
in = fixStartingWithSpace.ReplaceAllString(in, "$1"+string(charMap[' ']))
// Replace reserved characters
return strings.Map(func(c rune) rune {
if replacement, ok := charMap[c]; ok && c != '.' && c != '~' && c != ' ' {
return replacement
}
return c
}, in)
}
// restoreReservedChars takes a path and undoes any substitutions
// made by replaceReservedChars
func restoreReservedChars(in string) string {
return strings.Map(func(c rune) rune {
if replacement, ok := invCharMap[c]; ok {
return replacement
}
return c
}, in)
}


@@ -1,30 +0,0 @@
package onedrive
import "testing"
func TestReplace(t *testing.T) {
for _, test := range []struct {
in string
out string
}{
{"", ""},
{"abc 123", "abc 123"},
{`\*<>?:|#%".~`, `.~`},
{`\*<>?:|#%".~/\*<>?:|#%".~`, `.~/.~`},
{" leading space", "␠leading space"},
{"~leading tilde", "leading tilde"},
{"trailing dot.", "trailing dot"},
{" leading space/ leading space/ leading space", "␠leading space/␠leading space/␠leading space"},
{"~leading tilde/~leading tilde/~leading tilde", "leading tilde/leading tilde/leading tilde"},
{"trailing dot./trailing dot./trailing dot.", "trailing dot/trailing dot/trailing dot"},
} {
got := replaceReservedChars(test.in)
if got != test.out {
t.Errorf("replaceReservedChars(%q) want %q got %q", test.in, test.out, got)
}
got2 := restoreReservedChars(got)
if got2 != test.in {
t.Errorf("restoreReservedChars(%q) want %q got %q", got, test.in, got2)
}
}
}


@@ -16,6 +16,7 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
@@ -25,6 +26,8 @@ import (
"github.com/rclone/rclone/lib/rest"
)
const enc = encodings.OpenDrive
const (
defaultEndpoint = "https://dev.opendrive.com/api/v1"
minSleep = 10 * time.Millisecond
@@ -161,7 +164,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
Method: "POST",
Path: "/session/login.json",
}
resp, err = f.srv.CallJSON(&opts, &account, &f.session)
resp, err = f.srv.CallJSON(ctx, &opts, &account, &f.session)
return f.shouldRetry(resp, err)
})
if err != nil {
@@ -246,7 +249,7 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
}
// deleteObject removes an object by ID
func (f *Fs) deleteObject(id string) error {
func (f *Fs) deleteObject(ctx context.Context, id string) error {
return f.pacer.Call(func() (bool, error) {
removeDirData := removeFolder{SessionID: f.session.SessionID, FolderID: id}
opts := rest.Opts{
@@ -254,7 +257,7 @@ func (f *Fs) deleteObject(id string) error {
NoResponse: true,
Path: "/folder/remove.json",
}
resp, err := f.srv.CallJSON(&opts, &removeDirData, nil)
resp, err := f.srv.CallJSON(ctx, &opts, &removeDirData, nil)
return f.shouldRetry(resp, err)
})
}
@@ -275,14 +278,14 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
if err != nil {
return err
}
item, err := f.readMetaDataForFolderID(rootID)
item, err := f.readMetaDataForFolderID(ctx, rootID)
if err != nil {
return err
}
if check && len(item.Files) != 0 {
return errors.New("folder not empty")
}
err = f.deleteObject(rootID)
err = f.deleteObject(ctx, rootID)
if err != nil {
return err
}
@@ -353,7 +356,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
Method: "POST",
Path: "/file/move_copy.json",
}
resp, err = f.srv.CallJSON(&opts, &copyFileData, &response)
resp, err = f.srv.CallJSON(ctx, &opts, &copyFileData, &response)
return f.shouldRetry(resp, err)
})
if err != nil {
@@ -410,7 +413,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
Method: "POST",
Path: "/file/move_copy.json",
}
resp, err = f.srv.CallJSON(&opts, &copyFileData, &response)
resp, err = f.srv.CallJSON(ctx, &opts, &copyFileData, &response)
return f.shouldRetry(resp, err)
})
if err != nil {
@@ -509,7 +512,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
Method: "POST",
Path: "/folder/move_copy.json",
}
resp, err = f.srv.CallJSON(&opts, &moveFolderData, &response)
resp, err = f.srv.CallJSON(ctx, &opts, &moveFolderData, &response)
return f.shouldRetry(resp, err)
})
if err != nil {
@@ -585,18 +588,18 @@ func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time,
fs: f,
remote: remote,
}
return o, leaf, directoryID, nil
return o, enc.FromStandardName(leaf), directoryID, nil
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForFolderID(id string) (info *FolderList, err error) {
func (f *Fs) readMetaDataForFolderID(ctx context.Context, id string) (info *FolderList, err error) {
var resp *http.Response
opts := rest.Opts{
Method: "GET",
Path: "/folder/list.json/" + f.session.SessionID + "/" + id,
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &info)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return f.shouldRetry(resp, err)
})
if err != nil {
@@ -636,12 +639,16 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
var resp *http.Response
response := createFileResponse{}
err := o.fs.pacer.Call(func() (bool, error) {
createFileData := createFile{SessionID: o.fs.session.SessionID, FolderID: directoryID, Name: replaceReservedChars(leaf)}
createFileData := createFile{
SessionID: o.fs.session.SessionID,
FolderID: directoryID,
Name: leaf,
}
opts := rest.Opts{
Method: "POST",
Path: "/upload/create_file.json",
}
resp, err = o.fs.srv.CallJSON(&opts, &createFileData, &response)
resp, err = o.fs.srv.CallJSON(ctx, &opts, &createFileData, &response)
return o.fs.shouldRetry(resp, err)
})
if err != nil {
@@ -683,7 +690,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
err = f.pacer.Call(func() (bool, error) {
createDirData := createFolder{
SessionID: f.session.SessionID,
FolderName: replaceReservedChars(leaf),
FolderName: enc.FromStandardName(leaf),
FolderSubParent: pathID,
FolderIsPublic: 0,
FolderPublicUpl: 0,
@@ -694,7 +701,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
Method: "POST",
Path: "/folder.json",
}
resp, err = f.srv.CallJSON(&opts, &createDirData, &response)
resp, err = f.srv.CallJSON(ctx, &opts, &createDirData, &response)
return f.shouldRetry(resp, err)
})
if err != nil {
@@ -722,15 +729,15 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
Method: "GET",
Path: "/folder/list.json/" + f.session.SessionID + "/" + pathID,
}
resp, err = f.srv.CallJSON(&opts, nil, &folderList)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &folderList)
return f.shouldRetry(resp, err)
})
if err != nil {
return "", false, errors.Wrap(err, "failed to get folder list")
}
leaf = enc.FromStandardName(leaf)
for _, folder := range folderList.Folders {
folder.Name = restoreReservedChars(folder.Name)
// fs.Debugf(nil, "Folder: %s (%s)", folder.Name, folder.FolderID)
if leaf == folder.Name {
@@ -769,7 +776,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
folderList := FolderList{}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &folderList)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &folderList)
return f.shouldRetry(resp, err)
})
if err != nil {
@@ -777,7 +784,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
for _, folder := range folderList.Folders {
folder.Name = restoreReservedChars(folder.Name)
folder.Name = enc.ToStandardName(folder.Name)
// fs.Debugf(nil, "Folder: %s (%s)", folder.Name, folder.FolderID)
remote := path.Join(dir, folder.Name)
// cache the directory ID for later lookups
@@ -788,7 +795,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
for _, file := range folderList.Files {
file.Name = restoreReservedChars(file.Name)
file.Name = enc.ToStandardName(file.Name)
// fs.Debugf(nil, "File: %s (%s)", file.Name, file.FileID)
remote := path.Join(dir, file.Name)
o, err := f.newObjectWithInfo(ctx, remote, &file)
@@ -851,9 +858,13 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
NoResponse: true,
Path: "/file/filesettings.json",
}
update := modTimeFile{SessionID: o.fs.session.SessionID, FileID: o.id, FileModificationTime: strconv.FormatInt(modTime.Unix(), 10)}
update := modTimeFile{
SessionID: o.fs.session.SessionID,
FileID: o.id,
FileModificationTime: strconv.FormatInt(modTime.Unix(), 10),
}
err := o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(&opts, &update, nil)
resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, nil)
return o.fs.shouldRetry(resp, err)
})
@@ -873,7 +884,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
resp, err = o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(resp, err)
})
if err != nil {
@@ -892,7 +903,7 @@ func (o *Object) Remove(ctx context.Context) error {
NoResponse: true,
Path: "/file.json/" + o.fs.session.SessionID + "/" + o.id,
}
resp, err := o.fs.srv.Call(&opts)
resp, err := o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(resp, err)
})
}
@@ -920,7 +931,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
Method: "POST",
Path: "/upload/open_file_upload.json",
}
resp, err := o.fs.srv.CallJSON(&opts, &openUploadData, &openResponse)
resp, err := o.fs.srv.CallJSON(ctx, &opts, &openUploadData, &openResponse)
return o.fs.shouldRetry(resp, err)
})
if err != nil {
@@ -966,7 +977,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
MultipartFileName: o.remote, // ..name of the file for the attached file
}
resp, err = o.fs.srv.CallJSON(&opts, nil, &reply)
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &reply)
return o.fs.shouldRetry(resp, err)
})
if err != nil {
@@ -989,7 +1000,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
Method: "POST",
Path: "/upload/close_file_upload.json",
}
resp, err = o.fs.srv.CallJSON(&opts, &closeUploadData, &closeResponse)
resp, err = o.fs.srv.CallJSON(ctx, &opts, &closeUploadData, &closeResponse)
return o.fs.shouldRetry(resp, err)
})
if err != nil {
@@ -1015,7 +1026,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
NoResponse: true,
Path: "/file/access.json",
}
resp, err = o.fs.srv.CallJSON(&opts, &update, nil)
resp, err = o.fs.srv.CallJSON(ctx, &opts, &update, nil)
return o.fs.shouldRetry(resp, err)
})
if err != nil {
@@ -1038,9 +1049,10 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
err = o.fs.pacer.Call(func() (bool, error) {
opts := rest.Opts{
Method: "GET",
Path: "/folder/itembyname.json/" + o.fs.session.SessionID + "/" + directoryID + "?name=" + url.QueryEscape(replaceReservedChars(leaf)),
Path: fmt.Sprintf("/folder/itembyname.json/%s/%s?name=%s",
o.fs.session.SessionID, directoryID, url.QueryEscape(enc.FromStandardName(leaf))),
}
resp, err = o.fs.srv.CallJSON(&opts, nil, &folderList)
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &folderList)
return o.fs.shouldRetry(resp, err)
})
if err != nil {

View File

@@ -1,78 +0,0 @@
/*
Translate file names for OpenDrive
OpenDrive reserved characters
The following characters are OpenDrive reserved characters, and can't
be used in OpenDrive folder and file names.
\ / : * ? " < > |
OpenDrive folder and file names also can't have leading or trailing spaces.
*/
package opendrive
import (
"regexp"
"strings"
)
// charMap holds replacements for characters
//
// OpenDrive has a restricted set of characters compared to other cloud
// storage systems, so we map these to the FULLWIDTH unicode
// equivalents
//
// http://unicode-search.net/unicode-namesearch.pl?term=SOLIDUS
var (
charMap = map[rune]rune{
'\\': '＼', // FULLWIDTH REVERSE SOLIDUS
':': '：', // FULLWIDTH COLON
'*': '＊', // FULLWIDTH ASTERISK
'?': '？', // FULLWIDTH QUESTION MARK
'"': '＂', // FULLWIDTH QUOTATION MARK
'<': '＜', // FULLWIDTH LESS-THAN SIGN
'>': '＞', // FULLWIDTH GREATER-THAN SIGN
'|': '｜', // FULLWIDTH VERTICAL LINE
' ': '␠', // SYMBOL FOR SPACE
}
fixStartingWithSpace = regexp.MustCompile(`(/|^) `)
fixEndingWithSpace = regexp.MustCompile(` (/|$)`)
invCharMap map[rune]rune
)
func init() {
// Create inverse charMap
invCharMap = make(map[rune]rune, len(charMap))
for k, v := range charMap {
invCharMap[v] = k
}
}
// replaceReservedChars takes a path and substitutes any reserved
// characters in it
func replaceReservedChars(in string) string {
// Filenames can't start with space
in = fixStartingWithSpace.ReplaceAllString(in, "$1"+string(charMap[' ']))
// Filenames can't end with space
in = fixEndingWithSpace.ReplaceAllString(in, string(charMap[' '])+"$1")
return strings.Map(func(c rune) rune {
if replacement, ok := charMap[c]; ok && c != ' ' {
return replacement
}
return c
}, in)
}
// restoreReservedChars takes a path and undoes any substitutions
// made by replaceReservedChars
func restoreReservedChars(in string) string {
return strings.Map(func(c rune) rune {
if replacement, ok := invCharMap[c]; ok {
return replacement
}
return c
}, in)
}

View File

@@ -1,28 +0,0 @@
package opendrive
import "testing"
func TestReplace(t *testing.T) {
for _, test := range []struct {
in string
out string
}{
{"", ""},
{"abc 123", "abc 123"},
{`\*<>?:|#%".~`, `#%.~`},
{`\*<>?:|#%".~/\*<>?:|#%".~`, `#%.~/#%.~`},
{" leading space", "␠leading space"},
{" path/ leading spaces", "␠path/␠ leading spaces"},
{"trailing space ", "trailing space␠"},
{"trailing spaces /path ", "trailing spaces ␠/path␠"},
} {
got := replaceReservedChars(test.in)
if got != test.out {
t.Errorf("replaceReservedChars(%q) want %q got %q", test.in, test.out, got)
}
got2 := restoreReservedChars(got)
if got2 != test.in {
t.Errorf("restoreReservedChars(%q) want %q got %q", got, test.in, got2)
}
}
}
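
The two deleted files above were opendrive's private reserved-character translation layer; this changeset replaces such per-backend code with the shared fs/encodings tables (see the const enc = encodings.X lines added to pcloud, premiumizeme, putio, qingstor and s3 below). A minimal round-trip sketch of the new pattern, assuming opendrive gains a matching constant (encodings.OpenDrive) mirroring the table above:

package main

import (
	"fmt"

	"github.com/rclone/rclone/fs/encodings"
)

// Assumption: opendrive declares its encoder the same way the other
// backends in this changeset do (const enc = encodings.OpenDrive).
const enc = encodings.OpenDrive

func main() {
	name := `docs\draft?.txt`
	wire := enc.FromStandardName(name) // reserved chars -> FULLWIDTH equivalents
	back := enc.ToStandardName(wire)   // inverse mapping
	fmt.Println(wire, back == name)    // the mapping round-trips, so this prints true
}

This keeps the replace/restore property the deleted TestReplace exercised, but with one shared, table-driven implementation instead of a hand-written pair of functions per backend.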

View File

@@ -26,6 +26,7 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/dircache"
@@ -35,6 +36,8 @@ import (
"golang.org/x/oauth2"
)
const enc = encodings.Pcloud
const (
rcloneClientID = "DnONSzyJXpm"
rcloneEncryptedClientSecret = "ej1OIF39VOQQ0PXaSdK9ztkLw3tdLNscW2157TKNQdQKkICR4uU7aFg4eFM"
@@ -175,21 +178,6 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
return doRetry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// substitute reserved characters for pcloud
//
// Generally all characters are allowed in filenames, except the NULL
// byte, forward and backslash (/,\ and \0)
func replaceReservedChars(x string) string {
// Backslash for FULLWIDTH REVERSE SOLIDUS
return strings.Replace(x, "\\", "", -1)
}
// restore reserved characters for pcloud
func restoreReservedChars(x string) string {
// FULLWIDTH REVERSE SOLIDUS for Backslash
return strings.Replace(x, "", "\\", -1)
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.Item, err error) {
// defer fs.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err)
@@ -201,7 +189,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.It
return nil, err
}
found, err := f.listAll(directoryID, false, true, func(item *api.Item) bool {
found, err := f.listAll(ctx, directoryID, false, true, func(item *api.Item) bool {
if item.Name == leaf {
info = item
return true
@@ -334,7 +322,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// FindLeaf finds a directory of name leaf in the folder with ID pathID
func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) {
// Find the leaf in pathID
found, err = f.listAll(pathID, true, false, func(item *api.Item) bool {
found, err = f.listAll(ctx, pathID, true, false, func(item *api.Item) bool {
if item.Name == leaf {
pathIDOut = item.ID
return true
@@ -354,10 +342,10 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
Path: "/createfolder",
Parameters: url.Values{},
}
opts.Parameters.Set("name", replaceReservedChars(leaf))
opts.Parameters.Set("name", enc.FromStandardName(leaf))
opts.Parameters.Set("folderid", dirIDtoNumber(pathID))
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &result)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
})
@@ -400,7 +388,7 @@ type listAllFn func(*api.Item) bool
// Lists the directory required calling the user function on each item found
//
// If the user fn ever returns true then it early exits with found = true
func (f *Fs) listAll(dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
opts := rest.Opts{
Method: "GET",
Path: "/listfolder",
@@ -412,7 +400,7 @@ func (f *Fs) listAll(dirID string, directoriesOnly bool, filesOnly bool, fn list
var result api.ItemResult
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &result)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
})
@@ -430,7 +418,7 @@ func (f *Fs) listAll(dirID string, directoriesOnly bool, filesOnly bool, fn list
continue
}
}
item.Name = restoreReservedChars(item.Name)
item.Name = enc.ToStandardName(item.Name)
if fn(item) {
found = true
break
@@ -458,7 +446,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return nil, err
}
var iErr error
_, err = f.listAll(directoryID, false, false, func(info *api.Item) bool {
_, err = f.listAll(ctx, directoryID, false, false, func(info *api.Item) bool {
remote := path.Join(dir, info.Name)
if info.IsFolder {
// cache the directory ID for later lookups
@@ -563,7 +551,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
var resp *http.Response
var result api.ItemResult
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &result)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
})
@@ -622,13 +610,13 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
Parameters: url.Values{},
}
opts.Parameters.Set("fileid", fileIDtoNumber(srcObj.id))
opts.Parameters.Set("toname", replaceReservedChars(leaf))
opts.Parameters.Set("toname", enc.FromStandardName(leaf))
opts.Parameters.Set("tofolderid", dirIDtoNumber(directoryID))
opts.Parameters.Set("mtime", fmt.Sprintf("%d", srcObj.modTime.Unix()))
var resp *http.Response
var result api.ItemResult
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &result)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
})
@@ -666,7 +654,7 @@ func (f *Fs) CleanUp(ctx context.Context) error {
var resp *http.Response
var result api.Error
return f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &result)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Update(err)
return shouldRetry(resp, err)
})
@@ -701,12 +689,12 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
Parameters: url.Values{},
}
opts.Parameters.Set("fileid", fileIDtoNumber(srcObj.id))
opts.Parameters.Set("toname", replaceReservedChars(leaf))
opts.Parameters.Set("toname", enc.FromStandardName(leaf))
opts.Parameters.Set("tofolderid", dirIDtoNumber(directoryID))
var resp *http.Response
var result api.ItemResult
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &result)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
})
@@ -798,12 +786,12 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
Parameters: url.Values{},
}
opts.Parameters.Set("folderid", dirIDtoNumber(srcID))
opts.Parameters.Set("toname", replaceReservedChars(leaf))
opts.Parameters.Set("toname", enc.FromStandardName(leaf))
opts.Parameters.Set("tofolderid", dirIDtoNumber(directoryID))
var resp *http.Response
var result api.ItemResult
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &result)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
})
@@ -830,7 +818,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
var resp *http.Response
var q api.UserInfo
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &q)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &q)
err = q.Error.Update(err)
return shouldRetry(resp, err)
})
@@ -881,7 +869,7 @@ func (o *Object) getHashes(ctx context.Context) (err error) {
}
opts.Parameters.Set("fileid", fileIDtoNumber(o.id))
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, nil, &result)
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
})
@@ -984,7 +972,7 @@ func (o *Object) Storable() bool {
}
// downloadURL fetches the download link
func (o *Object) downloadURL() (URL string, err error) {
func (o *Object) downloadURL(ctx context.Context) (URL string, err error) {
if o.id == "" {
return "", errors.New("can't download - no id")
}
@@ -1000,7 +988,7 @@ func (o *Object) downloadURL() (URL string, err error) {
}
opts.Parameters.Set("fileid", fileIDtoNumber(o.id))
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, nil, &result)
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
})
@@ -1016,7 +1004,7 @@ func (o *Object) downloadURL() (URL string, err error) {
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
url, err := o.downloadURL()
url, err := o.downloadURL(ctx)
if err != nil {
return nil, err
}
@@ -1027,7 +1015,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
Options: options,
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
if err != nil {
@@ -1078,7 +1066,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
Parameters: url.Values{},
TransferEncoding: []string{"identity"}, // pcloud doesn't like chunked encoding
}
leaf = replaceReservedChars(leaf)
leaf = enc.FromStandardName(leaf)
opts.Parameters.Set("filename", leaf)
opts.Parameters.Set("folderid", dirIDtoNumber(directoryID))
opts.Parameters.Set("nopartial", "1")
@@ -1104,7 +1092,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, nil, &result)
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
})
@@ -1134,7 +1122,7 @@ func (o *Object) Remove(ctx context.Context) error {
var result api.ItemResult
opts.Parameters.Set("fileid", fileIDtoNumber(o.id))
return o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(&opts, nil, &result)
resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
})

View File

@@ -2,9 +2,7 @@
// object storage system.
package premiumizeme
/* FIXME
escaping needs fixing
/*
Run of rclone info
stringNeedsEscaping = []rune{
0x00, 0x0A, 0x0D, 0x22, 0x2F, 0x5C, 0xBF, 0xFE
@@ -36,6 +34,7 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
@@ -47,6 +46,8 @@ import (
"golang.org/x/oauth2"
)
const enc = encodings.PremiumizeMe
const (
rcloneClientID = "658922194"
rcloneEncryptedClientSecret = "B5YIvQoRIhcpAYs8HYeyjb9gK-ftmZEbqdh_gNfc4RgO9Q"
@@ -170,24 +171,6 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// substitute reserved characters
func replaceReservedChars(x string) string {
// Backslash for FULLWIDTH REVERSE SOLIDUS
x = strings.Replace(x, "\\", "", -1)
// Double quote for FULLWIDTH QUOTATION MARK
x = strings.Replace(x, `"`, "", -1)
return x
}
// restore reserved characters
func restoreReservedChars(x string) string {
// FULLWIDTH QUOTATION MARK for Double quote
x = strings.Replace(x, "", `"`, -1)
// FULLWIDTH REVERSE SOLIDUS for Backslash
x = strings.Replace(x, "", "\\", -1)
return x
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(ctx context.Context, path string, directoriesOnly bool, filesOnly bool) (info *api.Item, err error) {
// defer fs.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err)
@@ -200,7 +183,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string, directoriesOn
}
lcLeaf := strings.ToLower(leaf)
found, err := f.listAll(directoryID, directoriesOnly, filesOnly, func(item *api.Item) bool {
found, err := f.listAll(ctx, directoryID, directoriesOnly, filesOnly, func(item *api.Item) bool {
if strings.ToLower(item.Name) == lcLeaf {
info = item
return true
@@ -361,7 +344,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// FindLeaf finds a directory of name leaf in the folder with ID pathID
func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) {
// Find the leaf in pathID
found, err = f.listAll(pathID, true, false, func(item *api.Item) bool {
found, err = f.listAll(ctx, pathID, true, false, func(item *api.Item) bool {
if item.Name == leaf {
pathIDOut = item.ID
return true
@@ -381,12 +364,12 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
Path: "/folder/create",
Parameters: f.baseParams(),
MultipartParams: url.Values{
"name": {replaceReservedChars(leaf)},
"name": {enc.FromStandardName(leaf)},
"parent_id": {pathID},
},
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &info)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
})
if err != nil {
@@ -411,7 +394,7 @@ type listAllFn func(*api.Item) bool
// Lists the directory required calling the user function on each item found
//
// If the user fn ever returns true then it early exits with found = true
func (f *Fs) listAll(dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
opts := rest.Opts{
Method: "GET",
Path: "/folder/list",
@@ -423,7 +406,7 @@ func (f *Fs) listAll(dirID string, directoriesOnly bool, filesOnly bool, fn list
var result api.FolderListResponse
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &result)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
@@ -446,7 +429,7 @@ func (f *Fs) listAll(dirID string, directoriesOnly bool, filesOnly bool, fn list
fs.Debugf(f, "Ignoring %q - unknown type %q", item.Name, item.Type)
continue
}
item.Name = restoreReservedChars(item.Name)
item.Name = enc.ToStandardName(item.Name)
if fn(item) {
found = true
break
@@ -475,7 +458,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return nil, err
}
var iErr error
_, err = f.listAll(directoryID, false, false, func(info *api.Item) bool {
_, err = f.listAll(ctx, directoryID, false, false, func(info *api.Item) bool {
remote := path.Join(dir, info.Name)
if info.Type == api.ItemTypeFolder {
// cache the directory ID for later lookups
@@ -589,7 +572,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
// need to check if empty as it will delete recursively by default
if check {
found, err := f.listAll(rootID, false, false, func(item *api.Item) bool {
found, err := f.listAll(ctx, rootID, false, false, func(item *api.Item) bool {
return true
})
if err != nil {
@@ -611,7 +594,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
var resp *http.Response
var result api.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &result)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
@@ -654,8 +637,8 @@ func (f *Fs) Purge(ctx context.Context) error {
// between directories and a separate one to rename them. We try to
// call the minimum number of API calls.
func (f *Fs) move(ctx context.Context, isFile bool, id, oldLeaf, newLeaf, oldDirectoryID, newDirectoryID string) (err error) {
newLeaf = replaceReservedChars(newLeaf)
oldLeaf = replaceReservedChars(oldLeaf)
newLeaf = enc.FromStandardName(newLeaf)
oldLeaf = enc.FromStandardName(oldLeaf)
doRenameLeaf := oldLeaf != newLeaf
doMove := oldDirectoryID != newDirectoryID
@@ -686,11 +669,11 @@ func (f *Fs) move(ctx context.Context, isFile bool, id, oldLeaf, newLeaf, oldDir
} else {
opts.MultipartParams.Set("folders[]", id)
}
//replacedLeaf := replaceReservedChars(leaf)
//replacedLeaf := enc.FromStandardName(leaf)
var resp *http.Response
var result api.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &result)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
@@ -860,7 +843,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
Parameters: f.baseParams(),
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &info)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
})
if err != nil {
@@ -908,7 +891,7 @@ func (o *Object) Remote() string {
// srvPath returns a path for use in server
func (o *Object) srvPath() string {
return replaceReservedChars(o.fs.rootSlash() + o.remote)
return enc.FromStandardPath(o.fs.rootSlash() + o.remote)
}
// Hash returns the SHA-1 of an object returning a lowercase hex string
@@ -992,7 +975,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
Options: options,
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
if err != nil {
@@ -1023,7 +1006,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err != nil {
return err
}
leaf = replaceReservedChars(leaf)
leaf = enc.FromStandardName(leaf)
var resp *http.Response
var info api.FolderUploadinfoResponse
@@ -1036,7 +1019,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
},
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, nil, &info)
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &info)
if err != nil {
return shouldRetry(resp, err)
}
@@ -1096,7 +1079,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
var result api.Response
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, nil, &result)
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
@@ -1138,7 +1121,7 @@ func (f *Fs) renameLeaf(ctx context.Context, isFile bool, id string, newLeaf str
var resp *http.Response
var result api.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &result)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
@@ -1163,7 +1146,7 @@ func (f *Fs) remove(ctx context.Context, id string) (err error) {
var resp *http.Response
var result api.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &result)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {

View File

@@ -145,7 +145,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
var entry putio.File
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "creating folder. part: %s, parentID: %d", leaf, parentID)
entry, err = f.client.Files.CreateFolder(ctx, leaf, parentID)
entry, err = f.client.Files.CreateFolder(ctx, enc.FromStandardName(leaf), parentID)
return shouldRetry(err)
})
return itoa(entry.ID), err
@@ -172,11 +172,11 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
return
}
for _, child := range children {
if child.Name == leaf {
if enc.ToStandardName(child.Name) == leaf {
found = true
pathIDOut = itoa(child.ID)
if !child.IsDir() {
err = fs.ErrorNotAFile
err = fs.ErrorIsFile
}
return
}
@@ -214,7 +214,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return
}
for _, child := range children {
remote := path.Join(dir, child.Name)
remote := path.Join(dir, enc.ToStandardName(child.Name))
// fs.Debugf(f, "child: %s", remote)
if child.IsDir() {
f.dirCache.Put(remote, itoa(child.ID))
@@ -289,9 +289,10 @@ func (f *Fs) createUpload(ctx context.Context, name string, size int64, parentID
if err != nil {
return false, err
}
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
req.Header.Set("tus-resumable", "1.0.0")
req.Header.Set("upload-length", strconv.FormatInt(size, 10))
b64name := base64.StdEncoding.EncodeToString([]byte(name))
b64name := base64.StdEncoding.EncodeToString([]byte(enc.FromStandardName(name)))
b64true := base64.StdEncoding.EncodeToString([]byte("true"))
b64parentID := base64.StdEncoding.EncodeToString([]byte(parentID))
b64modifiedAt := base64.StdEncoding.EncodeToString([]byte(modTime.Format(time.RFC3339)))
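
For context, these base64 values feed the tus resumable-upload protocol spoken by put.io's upload endpoint: Upload-Metadata is a comma-separated list of "key base64(value)" pairs, sent alongside the tus-resumable and upload-length headers set above. A self-contained sketch of that encoding rule (the helper name is ours, and the exact metadata keys put.io expects are not shown in this hunk):

package main

import (
	"encoding/base64"
	"fmt"
)

// tusMetadata renders key/value pairs in the tus 1.0 Upload-Metadata
// header format: comma-separated "key base64(value)" pairs.
func tusMetadata(kv map[string]string) string {
	s := ""
	for k, v := range kv {
		if s != "" {
			s += ","
		}
		s += k + " " + base64.StdEncoding.EncodeToString([]byte(v))
	}
	return s
}

func main() {
	fmt.Println(tusMetadata(map[string]string{"name": "hello.txt"}))
	// Output: name aGVsbG8udHh0
}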
@@ -354,7 +355,7 @@ func (f *Fs) sendUpload(ctx context.Context, location string, size int64, in io.
func (f *Fs) transferChunk(ctx context.Context, location string, start int64, chunk io.ReadSeeker, chunkSize int64) (fileID int64, err error) {
// defer log.Trace(f, "location=%v, start=%v, chunkSize=%v", location, start, chunkSize)("fileID=%v, err=%v", fileID, &err)
_, _ = chunk.Seek(0, io.SeekStart)
req, err := f.makeUploadPatchRequest(location, chunk, start, chunkSize)
req, err := f.makeUploadPatchRequest(ctx, location, chunk, start, chunkSize)
if err != nil {
return 0, err
}
@@ -379,11 +380,12 @@ func (f *Fs) transferChunk(ctx context.Context, location string, start int64, ch
return fileID, nil
}
func (f *Fs) makeUploadPatchRequest(location string, in io.Reader, offset, length int64) (*http.Request, error) {
func (f *Fs) makeUploadPatchRequest(ctx context.Context, location string, in io.Reader, offset, length int64) (*http.Request, error) {
req, err := http.NewRequest("PATCH", location, in)
if err != nil {
return nil, err
}
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
req.Header.Set("tus-resumable", "1.0.0")
req.Header.Set("upload-offset", strconv.FormatInt(offset, 10))
req.Header.Set("content-length", strconv.FormatInt(length, 10))
@@ -503,7 +505,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (o fs.Objec
params := url.Values{}
params.Set("file_id", strconv.FormatInt(srcObj.file.ID, 10))
params.Set("parent_id", directoryID)
params.Set("name", leaf)
params.Set("name", enc.FromStandardName(leaf))
req, err := f.client.NewRequest(ctx, "POST", "/v2/files/copy", strings.NewReader(params.Encode()))
if err != nil {
return false, err
@@ -542,7 +544,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (o fs.Objec
params := url.Values{}
params.Set("file_id", strconv.FormatInt(srcObj.file.ID, 10))
params.Set("parent_id", directoryID)
params.Set("name", leaf)
params.Set("name", enc.FromStandardName(leaf))
req, err := f.client.NewRequest(ctx, "POST", "/v2/files/move", strings.NewReader(params.Encode()))
if err != nil {
return false, err
@@ -631,7 +633,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
params := url.Values{}
params.Set("file_id", srcID)
params.Set("parent_id", dstDirectoryID)
params.Set("name", leaf)
params.Set("name", enc.FromStandardName(leaf))
req, err := f.client.NewRequest(ctx, "POST", "/v2/files/move", strings.NewReader(params.Encode()))
if err != nil {
return false, err

View File

@@ -137,7 +137,7 @@ func (o *Object) readEntry(ctx context.Context) (f *putio.File, err error) {
}
err = o.fs.pacer.Call(func() (bool, error) {
// fs.Debugf(o, "requesting child. directoryID: %s, name: %s", directoryID, leaf)
req, err := o.fs.client.NewRequest(ctx, "GET", "/v2/files/"+directoryID+"/child?name="+url.PathEscape(leaf), nil)
req, err := o.fs.client.NewRequest(ctx, "GET", "/v2/files/"+directoryID+"/child?name="+url.QueryEscape(enc.FromStandardName(leaf)), nil)
if err != nil {
return false, err
}
@@ -147,6 +147,12 @@ func (o *Object) readEntry(ctx context.Context) (f *putio.File, err error) {
}
return shouldRetry(err)
})
if err != nil {
return nil, err
}
if resp.File.IsDir() {
return nil, fs.ErrorNotAFile
}
return &resp.File, err
}
@@ -223,7 +229,11 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var resp *http.Response
headers := fs.OpenOptionHeaders(options)
err = o.fs.pacer.Call(func() (bool, error) {
req, _ := http.NewRequest(http.MethodGet, storageURL, nil)
req, err := http.NewRequest(http.MethodGet, storageURL, nil)
if err != nil {
return shouldRetry(err)
}
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
req.Header.Set("User-Agent", o.fs.client.UserAgent)
// merge headers with extra headers

View File

@@ -8,11 +8,25 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/oauthutil"
"golang.org/x/oauth2"
)
/*
// TestPutio
stringNeedsEscaping = []rune{
'/', '\x00'
}
maxFileLength = 255
canWriteUnnormalized = true
canReadUnnormalized = true
canReadRenormalized = true
canStream = false
*/
const enc = encodings.Putio
// Constants
const (
rcloneClientID = "4131"

View File

@@ -20,6 +20,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
@@ -29,6 +30,8 @@ import (
qs "github.com/yunify/qingstor-sdk-go/v3/service"
)
const enc = encodings.QingStor
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -184,7 +187,8 @@ func parsePath(path string) (root string) {
// split returns bucket and bucketPath from the rootRelativePath
// relative to f.root
func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
return bucket.Split(path.Join(f.root, rootRelativePath))
bucketName, bucketPath = bucket.Split(path.Join(f.root, rootRelativePath))
return enc.FromStandardName(bucketName), enc.FromStandardPath(bucketPath)
}
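
The important detail in this split is the ordering: the join happens on decoded (standard) names, and only the final bucket name and bucket path are converted to the wire encoding. A standalone sketch of that shape, with placeholder encoders standing in for the real enc methods:

package main

import (
	"fmt"
	"path"
	"strings"
)

// Placeholders for enc.FromStandardName/FromStandardPath, which apply
// the backend's reserved-character mapping; identity here for brevity.
func fromStandardName(s string) string { return s }
func fromStandardPath(s string) string { return s }

// split joins root and the relative path in decoded form first, then
// encodes the resulting bucket name and bucket path separately.
func split(root, rel string) (bucketName, bucketPath string) {
	p := strings.Trim(path.Join(root, rel), "/")
	if i := strings.IndexByte(p, '/'); i >= 0 {
		bucketName, bucketPath = p[:i], p[i+1:]
	} else {
		bucketName = p
	}
	return fromStandardName(bucketName), fromStandardPath(bucketPath)
}

func main() {
	fmt.Println(split("bucket/dir", "sub/file.txt")) // bucket dir/sub/file.txt
}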
// split returns bucket and bucketPath from the object
@@ -353,7 +357,8 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, err
}
_, err = bucketInit.HeadObject(f.rootDirectory, &qs.HeadObjectInput{})
encodedDirectory := enc.FromStandardPath(f.rootDirectory)
_, err = bucketInit.HeadObject(encodedDirectory, &qs.HeadObjectInput{})
if err == nil {
newRoot := path.Dir(f.root)
if newRoot == "." {
@@ -550,6 +555,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
continue
}
remote := *commonPrefix
remote = enc.ToStandardPath(remote)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", remote)
continue
@@ -569,12 +575,13 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
}
for _, object := range resp.Keys {
key := qs.StringValue(object.Key)
if !strings.HasPrefix(key, prefix) {
fs.Logf(f, "Odd name received %q", key)
remote := qs.StringValue(object.Key)
remote = enc.ToStandardPath(remote)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", remote)
continue
}
remote := key[len(prefix):]
remote = remote[len(prefix):]
if addBucket {
remote = path.Join(bucket, remote)
}
@@ -646,7 +653,7 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
}
for _, bucket := range resp.Buckets {
d := fs.NewDir(qs.StringValue(bucket.Name), qs.TimeValue(bucket.Created))
d := fs.NewDir(enc.ToStandardName(qs.StringValue(bucket.Name)), qs.TimeValue(bucket.Created))
entries = append(entries, d)
}
return entries, nil

View File

@@ -17,11 +17,14 @@ import (
"context"
"encoding/base64"
"encoding/hex"
"encoding/xml"
"fmt"
"io"
"net/http"
"net/url"
"path"
"regexp"
"strconv"
"strings"
"time"
@@ -41,6 +44,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
@@ -50,6 +54,8 @@ import (
"github.com/rclone/rclone/lib/rest"
)
const enc = encodings.S3
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -748,6 +754,17 @@ Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.`,
See: [AWS S3 Transfer acceleration](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html)`,
Default: false,
Advanced: true,
}, {
Name: "leave_parts_on_error",
Provider: "AWS",
Help: `If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.
It should be set to true for resuming uploads across different sessions.
WARNING: Storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up.
`,
Default: false,
Advanced: true,
}},
})
}
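
The new option surfaces again further down this diff, where the multipart uploader is configured (u.LeavePartsOnError = o.fs.opt.LeavePartsOnError). A condensed sketch of that wiring, using only AWS SDK calls visible in this changeset:

package main

import (
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// newUploader shows how the option maps onto the SDK knob: when true,
// the SDK skips AbortMultipartUpload on failure, so already-uploaded
// parts stay on S3 and a later session can resume the transfer.
func newUploader(ses *session.Session, leavePartsOnError bool) *s3manager.Uploader {
	return s3manager.NewUploader(ses, func(u *s3manager.Uploader) {
		u.LeavePartsOnError = leavePartsOnError
	})
}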
@@ -788,6 +805,7 @@ type Options struct {
ForcePathStyle bool `config:"force_path_style"`
V2Auth bool `config:"v2_auth"`
UseAccelerateEndpoint bool `config:"use_accelerate_endpoint"`
LeavePartsOnError bool `config:"leave_parts_on_error"`
}
// Fs represents a remote s3 server
@@ -818,6 +836,7 @@ type Object struct {
lastModified time.Time // Last modified
meta map[string]*string // The object metadata if known - may be nil
mimeType string // MimeType of object - may be ""
storageClass string // eg GLACIER
}
// ------------------------------------------------------------
@@ -898,7 +917,8 @@ func parsePath(path string) (root string) {
// split returns bucket and bucketPath from the rootRelativePath
// relative to f.root
func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
return bucket.Split(path.Join(f.root, rootRelativePath))
bucketName, bucketPath = bucket.Split(path.Join(f.root, rootRelativePath))
return enc.FromStandardName(bucketName), enc.FromStandardPath(bucketPath)
}
// split returns bucket and bucketPath from the object
@@ -1089,12 +1109,15 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
WriteMimeType: true,
BucketBased: true,
BucketBasedRootOK: true,
SetTier: true,
GetTier: true,
}).Fill(f)
if f.rootBucket != "" && f.rootDirectory != "" {
// Check to see if the object exists
encodedDirectory := enc.FromStandardPath(f.rootDirectory)
req := s3.HeadObjectInput{
Bucket: &f.rootBucket,
Key: &f.rootDirectory,
Key: &encodedDirectory,
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.c.HeadObject(&req)
@@ -1132,6 +1155,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *s3.Obje
}
o.etag = aws.StringValue(info.ETag)
o.bytes = aws.Int64Value(info.Size)
o.storageClass = aws.StringValue(info.StorageClass)
} else {
err := o.readMetaData(ctx) // reads info and meta, returning an error
if err != nil {
@@ -1214,6 +1238,22 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
delimiter = "/"
}
var marker *string
// URL encode the listings so we can use control characters in object names
// See: https://github.com/aws/aws-sdk-go/issues/1914
//
// However this doesn't work perfectly under Ceph (and hence DigitalOcean/Dreamhost) because
// it doesn't encode CommonPrefixes.
// See: https://tracker.ceph.com/issues/41870
//
// This does not work under IBM COS also: See https://github.com/rclone/rclone/issues/3345
// though maybe it does on some versions.
//
// This does work with minio but was only added relatively recently
// https://github.com/minio/minio/pull/7265
//
// So we enable only on providers we know supports it properly, all others can retry when a
// XML Syntax error is detected.
var urlEncodeListings = (f.opt.Provider == "AWS" || f.opt.Provider == "Wasabi" || f.opt.Provider == "Alibaba")
for {
// FIXME need to implement ALL loop
req := s3.ListObjectsInput{
@@ -1223,10 +1263,26 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
MaxKeys: &maxKeys,
Marker: marker,
}
if urlEncodeListings {
req.EncodingType = aws.String(s3.EncodingTypeUrl)
}
var resp *s3.ListObjectsOutput
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.c.ListObjectsWithContext(ctx, &req)
if err != nil && !urlEncodeListings {
if awsErr, ok := err.(awserr.RequestFailure); ok {
if origErr := awsErr.OrigErr(); origErr != nil {
if _, ok := origErr.(*xml.SyntaxError); ok {
// Retry the listing with URL encoding as there were characters that XML can't encode
urlEncodeListings = true
req.EncodingType = aws.String(s3.EncodingTypeUrl)
fs.Debugf(f, "Retrying listing because of characters which can't be XML encoded")
return true, err
}
}
}
}
return f.shouldRetry(err)
})
if err != nil {
@@ -1255,6 +1311,14 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
continue
}
remote := *commonPrefix.Prefix
if urlEncodeListings {
remote, err = url.QueryUnescape(remote)
if err != nil {
fs.Logf(f, "failed to URL decode %q in listing common prefix: %v", *commonPrefix.Prefix, err)
continue
}
}
remote = enc.ToStandardPath(remote)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", remote)
continue
@@ -1274,6 +1338,14 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
}
for _, object := range resp.Contents {
remote := aws.StringValue(object.Key)
if urlEncodeListings {
remote, err = url.QueryUnescape(remote)
if err != nil {
fs.Logf(f, "failed to URL decode %q in listing: %v", aws.StringValue(object.Key), err)
continue
}
}
remote = enc.ToStandardPath(remote)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", remote)
continue
@@ -1358,7 +1430,7 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
return nil, err
}
for _, bucket := range resp.Buckets {
bucketName := aws.StringValue(bucket.Name)
bucketName := enc.ToStandardName(aws.StringValue(bucket.Name))
f.cache.MarkOK(bucketName)
d := fs.NewDir(bucketName, aws.TimeValue(bucket.CreationDate))
entries = append(entries, d)
@@ -1550,6 +1622,132 @@ func pathEscape(s string) string {
return strings.Replace(rest.URLPathEscape(s), "+", "%2B", -1)
}
// copy does a server side copy
//
// It adds the boiler plate to the req passed in and calls the s3
// method
func (f *Fs) copy(ctx context.Context, req *s3.CopyObjectInput, dstBucket, dstPath, srcBucket, srcPath string, srcSize int64) error {
req.Bucket = &dstBucket
req.ACL = &f.opt.ACL
req.Key = &dstPath
source := pathEscape(path.Join(srcBucket, srcPath))
req.CopySource = &source
if f.opt.ServerSideEncryption != "" {
req.ServerSideEncryption = &f.opt.ServerSideEncryption
}
if f.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &f.opt.SSEKMSKeyID
}
if req.StorageClass == nil && f.opt.StorageClass != "" {
req.StorageClass = &f.opt.StorageClass
}
if srcSize >= int64(f.opt.UploadCutoff) {
return f.copyMultipart(ctx, req, dstBucket, dstPath, srcBucket, srcPath, srcSize)
}
return f.pacer.Call(func() (bool, error) {
_, err := f.c.CopyObjectWithContext(ctx, req)
return f.shouldRetry(err)
})
}
func calculateRange(partSize, partIndex, numParts, totalSize int64) string {
start := partIndex * partSize
var ends string
if partIndex == numParts-1 {
if totalSize >= 1 {
ends = strconv.FormatInt(totalSize-1, 10)
}
} else {
ends = strconv.FormatInt(start+partSize-1, 10)
}
return fmt.Sprintf("bytes=%v-%v", start, ends)
}
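
A worked example of the ranges this helper emits; HTTP byte ranges are inclusive at both ends, which is why the final part stops at totalSize-1:

package main

import (
	"fmt"
	"strconv"
)

// calculateRange, as above, applied to a 10-byte object copied in
// 4-byte parts (3 parts in total).
func calculateRange(partSize, partIndex, numParts, totalSize int64) string {
	start := partIndex * partSize
	var ends string
	if partIndex == numParts-1 {
		if totalSize >= 1 {
			ends = strconv.FormatInt(totalSize-1, 10)
		}
	} else {
		ends = strconv.FormatInt(start+partSize-1, 10)
	}
	return fmt.Sprintf("bytes=%v-%v", start, ends)
}

func main() {
	for i := int64(0); i < 3; i++ {
		fmt.Println(calculateRange(4, i, 3, 10))
	}
	// Output:
	// bytes=0-3
	// bytes=4-7
	// bytes=8-9
}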
func (f *Fs) copyMultipart(ctx context.Context, req *s3.CopyObjectInput, dstBucket, dstPath, srcBucket, srcPath string, srcSize int64) (err error) {
var cout *s3.CreateMultipartUploadOutput
if err := f.pacer.Call(func() (bool, error) {
var err error
cout, err = f.c.CreateMultipartUploadWithContext(ctx, &s3.CreateMultipartUploadInput{
Bucket: &dstBucket,
Key: &dstPath,
})
return f.shouldRetry(err)
}); err != nil {
return err
}
uid := cout.UploadId
defer func() {
if err != nil {
// We can try to abort the upload, but ignore the error.
_ = f.pacer.Call(func() (bool, error) {
_, err := f.c.AbortMultipartUploadWithContext(ctx, &s3.AbortMultipartUploadInput{
Bucket: &dstBucket,
Key: &dstPath,
UploadId: uid,
RequestPayer: req.RequestPayer,
})
return f.shouldRetry(err)
})
}
}()
partSize := int64(f.opt.ChunkSize)
numParts := (srcSize-1)/partSize + 1
var parts []*s3.CompletedPart
for partNum := int64(1); partNum <= numParts; partNum++ {
if err := f.pacer.Call(func() (bool, error) {
partNum := partNum
uploadPartReq := &s3.UploadPartCopyInput{
Bucket: &dstBucket,
Key: &dstPath,
PartNumber: &partNum,
UploadId: uid,
CopySourceRange: aws.String(calculateRange(partSize, partNum-1, numParts, srcSize)),
// Args copy from req
CopySource: req.CopySource,
CopySourceIfMatch: req.CopySourceIfMatch,
CopySourceIfModifiedSince: req.CopySourceIfModifiedSince,
CopySourceIfNoneMatch: req.CopySourceIfNoneMatch,
CopySourceIfUnmodifiedSince: req.CopySourceIfUnmodifiedSince,
CopySourceSSECustomerAlgorithm: req.CopySourceSSECustomerAlgorithm,
CopySourceSSECustomerKey: req.CopySourceSSECustomerKey,
CopySourceSSECustomerKeyMD5: req.CopySourceSSECustomerKeyMD5,
RequestPayer: req.RequestPayer,
SSECustomerAlgorithm: req.SSECustomerAlgorithm,
SSECustomerKey: req.SSECustomerKey,
SSECustomerKeyMD5: req.SSECustomerKeyMD5,
}
uout, err := f.c.UploadPartCopyWithContext(ctx, uploadPartReq)
if err != nil {
return f.shouldRetry(err)
}
parts = append(parts, &s3.CompletedPart{
PartNumber: &partNum,
ETag: uout.CopyPartResult.ETag,
})
return false, nil
}); err != nil {
return err
}
}
return f.pacer.Call(func() (bool, error) {
_, err := f.c.CompleteMultipartUploadWithContext(ctx, &s3.CompleteMultipartUploadInput{
Bucket: &dstBucket,
Key: &dstPath,
MultipartUpload: &s3.CompletedMultipartUpload{
Parts: parts,
},
RequestPayer: req.RequestPayer,
UploadId: uid,
})
return f.shouldRetry(err)
})
}
// Copy src to this remote using server side copy operations.
//
// This is stored with the remote path given
@@ -1571,27 +1769,10 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fs.ErrorCantCopy
}
srcBucket, srcPath := srcObj.split()
source := pathEscape(path.Join(srcBucket, srcPath))
req := s3.CopyObjectInput{
Bucket: &dstBucket,
ACL: &f.opt.ACL,
Key: &dstPath,
CopySource: &source,
MetadataDirective: aws.String(s3.MetadataDirectiveCopy),
}
if f.opt.ServerSideEncryption != "" {
req.ServerSideEncryption = &f.opt.ServerSideEncryption
}
if f.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &f.opt.SSEKMSKeyID
}
if f.opt.StorageClass != "" {
req.StorageClass = &f.opt.StorageClass
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.c.CopyObjectWithContext(ctx, &req)
return f.shouldRetry(err)
})
err = f.copy(ctx, &req, dstBucket, dstPath, srcBucket, srcPath, srcObj.Size())
if err != nil {
return nil, err
}
@@ -1691,6 +1872,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
o.etag = aws.StringValue(resp.ETag)
o.bytes = size
o.meta = resp.Metadata
o.storageClass = aws.StringValue(resp.StorageClass)
if resp.LastModified == nil {
fs.Logf(o, "Failed to read last modified from HEAD: %v", err)
o.lastModified = time.Now()
@@ -1741,39 +1923,19 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return nil
}
// Guess the content type
mimeType := fs.MimeType(ctx, o)
// Can't update metadata here, so return this error to force a recopy
if o.storageClass == "GLACIER" || o.storageClass == "DEEP_ARCHIVE" {
return fs.ErrorCantSetModTime
}
// Copy the object to itself to update the metadata
bucket, bucketPath := o.split()
sourceKey := path.Join(bucket, bucketPath)
directive := s3.MetadataDirectiveReplace // replace metadata with that passed in
req := s3.CopyObjectInput{
Bucket: &bucket,
ACL: &o.fs.opt.ACL,
Key: &bucketPath,
ContentType: &mimeType,
CopySource: aws.String(pathEscape(sourceKey)),
ContentType: aws.String(fs.MimeType(ctx, o)), // Guess the content type
Metadata: o.meta,
MetadataDirective: &directive,
MetadataDirective: aws.String(s3.MetadataDirectiveReplace), // replace metadata with that passed in
}
if o.fs.opt.ServerSideEncryption != "" {
req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
}
if o.fs.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
}
if o.fs.opt.StorageClass == "GLACIER" || o.fs.opt.StorageClass == "DEEP_ARCHIVE" {
return fs.ErrorCantSetModTime
}
if o.fs.opt.StorageClass != "" {
req.StorageClass = &o.fs.opt.StorageClass
}
err = o.fs.pacer.Call(func() (bool, error) {
_, err := o.fs.c.CopyObjectWithContext(ctx, &req)
return o.fs.shouldRetry(err)
})
return err
return o.fs.copy(ctx, &req, bucket, bucketPath, bucket, bucketPath, o.bytes)
}
// Storable returns a boolean indicating if this object is storable
@@ -1832,7 +1994,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if multipart {
uploader = s3manager.NewUploader(o.fs.ses, func(u *s3manager.Uploader) {
u.Concurrency = o.fs.opt.UploadConcurrency
u.LeavePartsOnError = false
u.LeavePartsOnError = o.fs.opt.LeavePartsOnError
u.S3 = o.fs.c
u.PartSize = int64(o.fs.opt.ChunkSize)
@@ -1932,6 +2094,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return errors.Wrap(err, "s3 upload: sign request")
}
if o.fs.opt.V2Auth && headers == nil {
headers = putObj.HTTPRequest.Header
}
// Set the request body to nil if empty so as not to use chunked encoding
if size == 0 {
in = nil
@@ -1942,7 +2108,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err != nil {
return errors.Wrap(err, "s3 upload: new request")
}
httpReq = httpReq.WithContext(ctx)
httpReq = httpReq.WithContext(ctx) // go1.13 can use NewRequestWithContext
// set the headers we signed and the length
httpReq.Header = headers
@@ -1998,6 +2164,31 @@ func (o *Object) MimeType(ctx context.Context) string {
return o.mimeType
}
// SetTier performs changing storage class
func (o *Object) SetTier(tier string) (err error) {
ctx := context.TODO()
tier = strings.ToUpper(tier)
bucket, bucketPath := o.split()
req := s3.CopyObjectInput{
MetadataDirective: aws.String(s3.MetadataDirectiveCopy),
StorageClass: aws.String(tier),
}
err = o.fs.copy(ctx, &req, bucket, bucketPath, bucket, bucketPath, o.bytes)
if err != nil {
return err
}
o.storageClass = tier
return err
}
// GetTier returns storage class as string
func (o *Object) GetTier() string {
if o.storageClass == "" {
return "STANDARD"
}
return o.storageClass
}
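
A minimal sketch of how a caller reaches these methods through the optional interfaces asserted at the bottom of this file (fs.SetTierer and fs.GetTierer); the helper name is ours:

package main

import (
	"fmt"

	"github.com/rclone/rclone/fs"
)

// setTier changes the storage class of an object if, and only if, its
// backend implements the optional fs.SetTierer interface.
func setTier(o fs.Object, tier string) error {
	do, ok := o.(fs.SetTierer)
	if !ok {
		return fmt.Errorf("%v: SetTier not supported", o)
	}
	return do.SetTier(tier) // e.g. "STANDARD_IA" or "GLACIER"
}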
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}
@@ -2006,4 +2197,6 @@ var (
_ fs.ListRer = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
_ fs.GetTierer = &Object{}
_ fs.SetTierer = &Object{}
)

View File

@@ -11,8 +11,9 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestS3:",
NilObject: (*Object)(nil),
RemoteName: "TestS3:",
NilObject: (*Object)(nil),
TiersToTest: []string{"STANDARD", "STANDARD_IA"},
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: minChunkSize,
},

View File

@@ -103,9 +103,14 @@ when the ssh-agent contains many keys.`,
Default: false,
Help: "Disable the execution of SSH commands to determine if remote file hashing is available.\nLeave blank or set to false to enable hashing (recommended), set to true to disable hashing.",
}, {
Name: "ask_password",
Default: false,
Help: "Allow asking for SFTP password when needed.",
Name: "ask_password",
Default: false,
Help: `Allow asking for SFTP password when needed.
If this is set and no password is supplied then rclone will:
- ask for a password
- not contact the ssh agent
`,
Advanced: true,
}, {
Name: "path_override",
@@ -364,7 +369,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
keyFile := env.ShellExpand(opt.KeyFile)
// Add ssh agent-auth if no password or file specified
if (opt.Pass == "" && keyFile == "") || opt.KeyUseAgent {
if (opt.Pass == "" && keyFile == "" && !opt.AskPassword) || opt.KeyUseAgent {
sshAgentClient, _, err := sshagent.New()
if err != nil {
return nil, errors.Wrap(err, "couldn't connect to ssh-agent")
@@ -945,6 +950,7 @@ func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) {
escapedPath = shellEscape(path.Join(o.fs.opt.PathOverride, o.remote))
}
err = session.Run(hashCmd + " " + escapedPath)
fs.Debugf(nil, "sftp cmd = %s", escapedPath)
if err != nil {
_ = session.Close()
fs.Debugf(o, "Failed to calculate %v hash: %v (%s)", r, err, bytes.TrimSpace(stderr.Bytes()))
@@ -952,7 +958,10 @@ func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) {
}
_ = session.Close()
str := parseHash(stdout.Bytes())
b := stdout.Bytes()
fs.Debugf(nil, "sftp output = %q", b)
str := parseHash(b)
fs.Debugf(nil, "sftp hash = %q", str)
if r == hash.MD5 {
o.md5sum = &str
} else if r == hash.SHA1 {
@@ -961,7 +970,7 @@ func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) {
return str, nil
}
var shellEscapeRegex = regexp.MustCompile(`[^A-Za-z0-9_.,:/@\n-]`)
var shellEscapeRegex = regexp.MustCompile("[^A-Za-z0-9_.,:/\\@\u0080-\uFFFFFFFF\n-]")
// Escape a string s.t. it cannot cause unintended behavior
// when sending it to a shell.
@@ -974,7 +983,9 @@ func shellEscape(str string) string {
// an invocation of md5sum/sha1sum to a hash string
// as expected by the rest of this application
func parseHash(bytes []byte) string {
return strings.Split(string(bytes), " ")[0] // Split at hash / filename separator
// For strings with backslash *sum writes a leading \
// https://unix.stackexchange.com/q/313733/94054
return strings.Split(strings.TrimLeft(string(bytes), "\\"), " ")[0] // Split at hash / filename separator
}
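
The TrimLeft handles a GNU coreutils quirk: when a file name contains a backslash or newline, md5sum/sha1sum escape the name and mark the whole output line with a leading backslash. A tiny runnable check of the parse:

package main

import (
	"fmt"
	"strings"
)

// parseHash as above: drop the leading "\" marker that *sum tools
// prepend for escaped file names, then take the field before the
// first space.
func parseHash(b []byte) string {
	return strings.Split(strings.TrimLeft(string(b), "\\"), " ")[0]
}

func main() {
	out := []byte("\\d41d8cd98f00b204e9800998ecf8427e  dir\\\\file.txt\n")
	fmt.Println(parseHash(out)) // d41d8cd98f00b204e9800998ecf8427e
}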
// Parses the byte array output from the SSH session

View File

@@ -0,0 +1,152 @@
// Package api contains definitions for using the sharefile API
package api
import (
"fmt"
"time"
"github.com/pkg/errors"
)
// ListRequestSelect should be used in $select for Items/Children
const ListRequestSelect = "odata.count,FileCount,Name,FileName,CreationDate,IsHidden,FileSizeBytes,odata.type,Id,Hash,ClientModifiedDate"
// ListResponse is returned from the Items/Children call
type ListResponse struct {
OdataCount int `json:"odata.count"`
Value []Item `json:"value"`
}
// Item Types
const (
ItemTypeFolder = "ShareFile.Api.Models.Folder"
ItemTypeFile = "ShareFile.Api.Models.File"
)
// Item refers to a file or folder
type Item struct {
FileCount int32 `json:"FileCount,omitempty"`
Name string `json:"Name,omitempty"`
FileName string `json:"FileName,omitempty"`
CreatedAt time.Time `json:"CreationDate,omitempty"`
ModifiedAt time.Time `json:"ClientModifiedDate,omitempty"`
IsHidden bool `json:"IsHidden,omitempty"`
Size int64 `json:"FileSizeBytes,omitempty"`
Type string `json:"odata.type,omitempty"`
ID string `json:"Id,omitempty"`
Hash string `json:"Hash,omitempty"`
}
// Error is an odata error return
type Error struct {
Code string `json:"code"`
Message struct {
Lang string `json:"lang"`
Value string `json:"value"`
} `json:"message"`
Reason string `json:"reason"`
}
// Satisfy error interface
func (e *Error) Error() string {
return fmt.Sprintf("%s: %s: %s", e.Message.Value, e.Code, e.Reason)
}
// Check Error satisfies error interface
var _ error = &Error{}
// DownloadSpecification is the response to /Items/Download
type DownloadSpecification struct {
Token string `json:"DownloadToken"`
URL string `json:"DownloadUrl"`
Metadata string `json:"odata.metadata"`
Type string `json:"odata.type"`
}
// UploadRequest is set to /Items/Upload2 to receive an UploadSpecification
type UploadRequest struct {
Method string `json:"method"` // Upload method: one of: standard, streamed or threaded
Raw bool `json:"raw"` // Raw post if true or MIME upload if false
Filename string `json:"fileName"` // Uploaded item file name.
Filesize *int64 `json:"fileSize,omitempty"` // Uploaded item file size.
Overwrite bool `json:"overwrite"` // Indicates whether items with the same name will be overwritten or not.
CreatedDate time.Time `json:"ClientCreatedDate"` // Created Date of this Item.
ModifiedDate time.Time `json:"ClientModifiedDate"` // Modified Date of this Item.
BatchID string `json:"batchId,omitempty"` // Indicates part of a batch. Batched uploads do not send notification until the whole batch is completed.
BatchLast *bool `json:"batchLast,omitempty"` // Indicates is the last in a batch. Upload notifications for the whole batch are sent after this upload.
CanResume *bool `json:"canResume,omitempty"` // Indicates uploader supports resume.
StartOver *bool `json:"startOver,omitempty"` // Indicates uploader wants to restart the file - i.e., ignore previous failed upload attempts.
Tool string `json:"tool,omitempty"` // Identifies the uploader tool.
Title string `json:"title,omitempty"` // Item Title
Details string `json:"details,omitempty"` // Item description
IsSend *bool `json:"isSend,omitempty"` // Indicates that this upload is part of a Send operation
SendGUID string `json:"sendGuid,omitempty"` // Used if IsSend is true. Specifies which Send operation this upload is part of.
OpID string `json:"opid,omitempty"` // Used for Asynchronous copy/move operations - called by Zones to push files to other Zones
ThreadCount *int `json:"threadCount,omitempty"` // Specifies the number of threads the threaded uploader will use. Only used if method is threaded, ignored otherwise
Notify *bool `json:"notify,omitempty"` // Indicates whether users will be notified of this upload - based on folder preferences
ExpirationDays *int `json:"expirationDays,omitempty"` // File expiration days
BaseFileID string `json:"baseFileId,omitempty"` // Used to check conflict in file during File Upload.
}
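// A minimal sketch, using illustrative values, of how the struct above might
// be filled in for a streamed upload of a 10 MiB file (the field choices here
// are assumptions, not taken from this diff):
//
//	size := int64(10 * 1024 * 1024)
//	req := UploadRequest{
//		Method:    "streamed",
//		Raw:       true,
//		Filename:  "photo.jpg",
//		Filesize:  &size,
//		Overwrite: true,
//		Tool:      "rclone",
//	}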
// UploadSpecification is returned from /Items/Upload
type UploadSpecification struct {
Method string `json:"Method"` // The Upload method that must be used for this upload
PrepareURI string `json:"PrepareUri"` // If provided, clients must issue a request to this Uri before uploading any data.
ChunkURI string `json:"ChunkUri"` // Specifies the URI the client must send the file data to
FinishURI string `json:"FinishUri"` // If provided, specifies the final call the client must perform to finish the upload process
ProgressData string `json:"ProgressData"` // Allows the client to check progress of standard uploads
IsResume bool `json:"IsResume"` // Specifies whether a resumable upload is supported.
ResumeIndex int64 `json:"ResumeIndex"` // Specifies the initial index for resuming, if IsResume is true.
ResumeOffset int64 `json:"ResumeOffset"` // Specifies the initial file offset by bytes, if IsResume is true
ResumeFileHash string `json:"ResumeFileHash"` // Specifies the MD5 hash of the first ResumeOffset bytes of the partial file found at the server
MaxNumberOfThreads int `json:"MaxNumberOfThreads"` // Specifies the max number of chunks that can be sent simultaneously for threaded uploads
}
// UploadFinishResponse is returned from calling UploadSpecification.FinishURI
type UploadFinishResponse struct {
Error bool `json:"error"`
ErrorMessage string `json:"errorMessage"`
ErrorCode int `json:"errorCode"`
Value []struct {
UploadID string `json:"uploadid"`
ParentID string `json:"parentid"`
ID string `json:"id"`
StreamID string `json:"streamid"`
FileName string `json:"filename"`
DisplayName string `json:"displayname"`
Size int `json:"size"`
Md5 string `json:"md5"`
} `json:"value"`
}
// ID returns the ID of the first response if available
func (finish *UploadFinishResponse) ID() (string, error) {
if finish.Error {
return "", errors.Errorf("upload failed: %s (%d)", finish.ErrorMessage, finish.ErrorCode)
}
if len(finish.Value) == 0 {
return "", errors.New("upload failed: no results returned")
}
return finish.Value[0].ID, nil
}
// Parent is the ID of the parent folder
type Parent struct {
ID string `json:"Id,omitempty"`
}
// Zone is where the data is stored
type Zone struct {
ID string `json:"Id,omitempty"`
}
// UpdateItemRequest is sent to PATCH /v3/Items(id)
type UpdateItemRequest struct {
Name string `json:"Name,omitempty"`
FileName string `json:"FileName,omitempty"`
Description string `json:"Description,omitempty"`
ExpirationDate *time.Time `json:"ExpirationDate,omitempty"`
Parent *Parent `json:"Parent,omitempty"`
Zone *Zone `json:"Zone,omitempty"`
ModifiedAt *time.Time `json:"ClientModifiedDate,omitempty"`
}


@@ -0,0 +1,22 @@
// +build ignore
package main
import (
"log"
"net/http"
"github.com/shurcooL/vfsgen"
)
func main() {
var AssetDir http.FileSystem = http.Dir("./tzdata")
err := vfsgen.Generate(AssetDir, vfsgen.Options{
PackageName: "sharefile",
BuildTags: "!dev",
VariableName: "tzdata",
})
if err != nil {
log.Fatalln(err)
}
}

File diff suppressed because it is too large


@@ -0,0 +1,34 @@
// Test filesystem interface
package sharefile
import (
"testing"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestSharefile:",
NilObject: (*Object)(nil),
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: minChunkSize,
CeilChunkSize: fstests.NextPowerOfTwo,
},
})
}
func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetUploadCutoffer = (*Fs)(nil)
)

File diff suppressed because one or more lines are too long


@@ -0,0 +1,18 @@
#!/bin/bash
set -e
# Extract just the America/New_York timezone from the Go distribution's zoneinfo.zip
tzinfo=$(go env GOROOT)/lib/time/zoneinfo.zip
rm -rf tzdata
mkdir tzdata
cd tzdata
unzip ${tzinfo} America/New_York
cd ..
# Make the embedded assets
go run generate_tzdata.go
# tidy up
rm -rf tzdata

backend/sharefile/upload.go Normal file

@@ -0,0 +1,261 @@
// Upload large files for sharefile
//
// Docs - https://api.sharefile.com/rest/docs/resource.aspx?name=Items#Upload_File
package sharefile
import (
"bytes"
"context"
"crypto/md5"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"strings"
"sync"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/sharefile/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/lib/rest"
)
// largeUpload is used to control the upload of large files which need chunking
type largeUpload struct {
ctx context.Context
f *Fs // parent Fs
o *Object // object being uploaded
in io.Reader // read the data from here
wrap accounting.WrapFn // account parts being transferred
size int64 // total size
parts int64 // calculated number of parts, if known
info *api.UploadSpecification // where to post chunks etc
threads int // number of threads to use in upload
streamed bool // set if using streamed upload
}
// newLargeUpload starts an upload of object o from in with metadata in src
func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, info *api.UploadSpecification) (up *largeUpload, err error) {
size := src.Size()
parts := int64(-1)
if size >= 0 {
parts = size / int64(o.fs.opt.ChunkSize)
if size%int64(o.fs.opt.ChunkSize) != 0 {
parts++
}
}
var streamed bool
switch strings.ToLower(info.Method) {
case "streamed":
streamed = true
case "threaded":
streamed = false
default:
return nil, errors.Errorf("can't use method %q with newLargeUpload", info.Method)
}
threads := fs.Config.Transfers
if threads > info.MaxNumberOfThreads {
threads = info.MaxNumberOfThreads
}
// unwrap the accounting from the input, we use wrap to put it
// back on after the buffering
in, wrap := accounting.UnWrap(in)
up = &largeUpload{
ctx: ctx,
f: f,
o: o,
in: in,
wrap: wrap,
size: size,
threads: threads,
info: info,
parts: parts,
streamed: streamed,
}
return up, nil
}
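// Worked example for the parts calculation above: with size = 10 MiB
// (10485760 bytes) and ChunkSize = 4 MiB (4194304 bytes), 10485760/4194304 = 2
// with remainder 2097152, so parts is rounded up to 3. An unknown size
// (size < 0, streamed input) leaves parts at -1.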
// parse the api.UploadFinishResponse in respBody
func (up *largeUpload) parseUploadFinishResponse(respBody []byte) (err error) {
var finish api.UploadFinishResponse
err = json.Unmarshal(respBody, &finish)
if err != nil {
// Sometimes the unmarshal fails, in which case we return the body
return errors.Errorf("upload: bad response: %q", bytes.TrimSpace(respBody))
}
return up.o.checkUploadResponse(up.ctx, &finish)
}
// Transfer a chunk
func (up *largeUpload) transferChunk(ctx context.Context, part int64, offset int64, body []byte, fileHash string) error {
md5sumRaw := md5.Sum(body)
md5sum := hex.EncodeToString(md5sumRaw[:])
size := int64(len(body))
// Add some more parameters to the ChunkURI
u := up.info.ChunkURI
u += fmt.Sprintf("&index=%d&byteOffset=%d&hash=%s&fmt=json",
part, offset, md5sum,
)
if fileHash != "" {
u += fmt.Sprintf("&finish=true&fileSize=%d&fileHash=%s",
offset+int64(len(body)),
fileHash,
)
}
opts := rest.Opts{
Method: "POST",
RootURL: u,
ContentLength: &size,
}
var respBody []byte
err := up.f.pacer.Call(func() (bool, error) {
fs.Debugf(up.o, "Sending chunk %d length %d", part, len(body))
opts.Body = up.wrap(bytes.NewReader(body))
resp, err := up.f.srv.Call(ctx, &opts)
if err != nil {
fs.Debugf(up.o, "Error sending chunk %d: %v", part, err)
} else {
respBody, err = rest.ReadBody(resp)
}
// retry all errors now that the multipart upload has started
return err != nil, err
})
if err != nil {
fs.Debugf(up.o, "Error sending chunk %d: %v", part, err)
return err
}
// If last chunk and using "streamed" transfer, get the response back now
if up.streamed && fileHash != "" {
return up.parseUploadFinishResponse(respBody)
}
fs.Debugf(up.o, "Done sending chunk %d", part)
return nil
}
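// Continuing the worked example: the final chunk of the 10 MiB file is part 2
// at byteOffset 8388608, so the URI built above would end (hash values
// hypothetical) in:
//
//	&index=2&byteOffset=8388608&hash=<md5-of-chunk>&fmt=json&finish=true&fileSize=10485760&fileHash=<md5-of-file>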
// finish closes off the large upload and reads the metadata
func (up *largeUpload) finish(ctx context.Context) error {
fs.Debugf(up.o, "Finishing large file upload")
// For a streamed transfer we will already have read the info
if up.streamed {
return nil
}
opts := rest.Opts{
Method: "POST",
RootURL: up.info.FinishURI,
}
var respBody []byte
err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.Call(ctx, &opts)
if err != nil {
return shouldRetry(resp, err)
}
respBody, err = rest.ReadBody(resp)
// retry all errors now that the multipart upload has started
return err != nil, err
})
if err != nil {
return err
}
return up.parseUploadFinishResponse(respBody)
}
// Upload uploads the chunks from the input
func (up *largeUpload) Upload(ctx context.Context) error {
if up.parts >= 0 {
fs.Debugf(up.o, "Starting upload of large file in %d chunks", up.parts)
} else {
fs.Debugf(up.o, "Starting streaming upload of large file")
}
var (
offset int64
errs = make(chan error, 1)
wg sync.WaitGroup
err error
wholeFileHash = md5.New()
eof = false
)
outer:
for part := int64(0); !eof; part++ {
// Check any errors
select {
case err = <-errs:
break outer
default:
}
// Get a block of memory
buf := up.f.getUploadBlock()
// Read the chunk
var n int
n, err = readers.ReadFill(up.in, buf)
if err == io.EOF {
eof = true
buf = buf[:n]
err = nil
} else if err != nil {
up.f.putUploadBlock(buf)
break outer
}
// Hash it
_, _ = io.Copy(wholeFileHash, bytes.NewBuffer(buf))
// Get the file hash if this was the last chunk
fileHash := ""
if eof {
fileHash = hex.EncodeToString(wholeFileHash.Sum(nil))
}
// Transfer the chunk
wg.Add(1)
transferChunk := func(part, offset int64, buf []byte, fileHash string) {
defer wg.Done()
defer up.f.putUploadBlock(buf)
err := up.transferChunk(ctx, part, offset, buf, fileHash)
if err != nil {
select {
case errs <- err:
default:
}
}
}
if up.streamed {
transferChunk(part, offset, buf, fileHash) // streamed
} else {
go transferChunk(part, offset, buf, fileHash) // multithreaded
}
offset += int64(n)
}
wg.Wait()
// check size read is correct
if eof && err == nil && up.size >= 0 && up.size != offset {
err = errors.Errorf("upload: short read: read %d bytes expected %d", offset, up.size)
}
// read any errors
if err == nil {
select {
case err = <-errs:
default:
}
}
// finish regardless of errors
finishErr := up.finish(ctx)
if err == nil {
err = finishErr
}
return err
}
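// The error collection above relies on a buffered channel of size one plus a
// non-blocking send, so the first failing chunk records its error and later
// failures are dropped rather than blocking their goroutines. The idiom in
// isolation:
//
//	errs := make(chan error, 1)
//	// in each worker:
//	select {
//	case errs <- err: // first error wins
//	default: // an error is already recorded; drop this one
//	}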


@@ -17,6 +17,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
@@ -60,6 +61,8 @@ copy operations.`,
Advanced: true,
}}
const enc = encodings.Swift
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -320,7 +323,8 @@ func parsePath(path string) (root string) {
// split returns container and containerPath from the rootRelativePath
// relative to f.root
func (f *Fs) split(rootRelativePath string) (container, containerPath string) {
return bucket.Split(path.Join(f.root, rootRelativePath))
container, containerPath = bucket.Split(path.Join(f.root, rootRelativePath))
return enc.FromStandardName(container), enc.FromStandardPath(containerPath)
}
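// A sketch of the intended round trip (the exact character mapping is defined
// by encodings.Swift): names are encoded on the way to the Swift API and
// decoded again when listings come back, so
//
//	enc.ToStandardPath(enc.FromStandardPath(p)) == p
//
// and rclone-visible names stay in the standard form.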
// split returns container and containerPath from the object
@@ -441,9 +445,10 @@ func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, n
// Check to see if the object exists - ignoring directory markers
var info swift.Object
var err error
encodedDirectory := enc.FromStandardPath(f.rootDirectory)
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
info, rxHeaders, err = f.c.Object(f.rootContainer, f.rootDirectory)
info, rxHeaders, err = f.c.Object(f.rootContainer, encodedDirectory)
return shouldRetryHeaders(rxHeaders, err)
})
if err == nil && info.ContentType != directoryMarkerContentType {
@@ -553,17 +558,18 @@ func (f *Fs) listContainerRoot(container, directory, prefix string, addContainer
if !recurse {
isDirectory = strings.HasSuffix(object.Name, "/")
}
if !strings.HasPrefix(object.Name, prefix) {
fs.Logf(f, "Odd name received %q", object.Name)
remote := enc.ToStandardPath(object.Name)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", remote)
continue
}
if object.Name == prefix {
if remote == prefix {
// If we have zero length directory markers ending in / then swift
// will return them in the listing for the directory which causes
// duplicate directories. Ignore them here.
continue
}
remote := object.Name[len(prefix):]
remote = remote[len(prefix):]
if addContainer {
remote = path.Join(container, remote)
}
@@ -635,7 +641,7 @@ func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err err
}
for _, container := range containers {
f.cache.MarkOK(container.Name)
d := fs.NewDir(container.Name, time.Time{}).SetSize(container.Bytes).SetItems(container.Count)
d := fs.NewDir(enc.ToStandardName(container.Name), time.Time{}).SetSize(container.Bytes).SetItems(container.Count)
entries = append(entries, d)
}
return entries, nil


@@ -3,7 +3,9 @@ package odrvcookie
import (
"bytes"
"context"
"encoding/xml"
"fmt"
"html/template"
"net/http"
"net/http/cookiejar"
@@ -31,13 +33,13 @@ type CookieResponse struct {
FedAuth http.Cookie
}
// SuccessResponse hold a response from the sharepoint webdav
type SuccessResponse struct {
// SharepointSuccessResponse holds a response from a successful Microsoft login
type SharepointSuccessResponse struct {
XMLName xml.Name `xml:"Envelope"`
Succ SuccessResponseBody `xml:"Body"`
Body SuccessResponseBody `xml:"Body"`
}
// SuccessResponseBody is the body of a success response, it holds the token
// SuccessResponseBody is the body of a successful response, it holds the token
type SuccessResponseBody struct {
XMLName xml.Name
Type string `xml:"RequestSecurityTokenResponse>TokenType"`
@@ -46,6 +48,24 @@ type SuccessResponseBody struct {
Token string `xml:"RequestSecurityTokenResponse>RequestedSecurityToken>BinarySecurityToken"`
}
// SharepointError holds an error response from a Microsoft login
type SharepointError struct {
XMLName xml.Name `xml:"Envelope"`
Body ErrorResponseBody `xml:"Body"`
}
func (e *SharepointError) Error() string {
return fmt.Sprintf("%s: %s (%s)", e.Body.FaultCode, e.Body.Reason, e.Body.Detail)
}
// ErrorResponseBody contains the body of an erroneous response
type ErrorResponseBody struct {
XMLName xml.Name
FaultCode string `xml:"Fault>Code>Subcode>Value"`
Reason string `xml:"Fault>Reason>Text"`
Detail string `xml:"Fault>Detail>error>internalerror>text"`
}
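// Minimal sketch of how this type is used further down in getSPToken: when
// the success envelope carries no token, the same XML is re-parsed as a fault
// envelope (element paths per the struct tags above):
//
//	sErr := &SharepointError{}
//	if xml.Unmarshal(body, sErr) == nil {
//		return nil, sErr // renders as "FaultCode: Reason (Detail)"
//	}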
// reqString is a template that gets populated with the user data in order to retrieve a "BinarySecurityToken"
const reqString = `<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
xmlns:a="http://www.w3.org/2005/08/addressing"
@@ -91,15 +111,15 @@ func New(pUser, pPass, pEndpoint string) CookieAuth {
// Cookies creates a CookieResponse. It fetches the auth token and then
// retrieves the Cookies
func (ca *CookieAuth) Cookies() (*CookieResponse, error) {
tokenResp, err := ca.getSPToken()
func (ca *CookieAuth) Cookies(ctx context.Context) (*CookieResponse, error) {
tokenResp, err := ca.getSPToken(ctx)
if err != nil {
return nil, err
}
return ca.getSPCookie(tokenResp)
}
func (ca *CookieAuth) getSPCookie(conf *SuccessResponse) (*CookieResponse, error) {
func (ca *CookieAuth) getSPCookie(conf *SharepointSuccessResponse) (*CookieResponse, error) {
spRoot, err := url.Parse(ca.endpoint)
if err != nil {
return nil, errors.Wrap(err, "Error while constructing endpoint URL")
@@ -122,7 +142,7 @@ func (ca *CookieAuth) getSPCookie(conf *SuccessResponse) (*CookieResponse, error
}
// Send the previously acquired Token as a Post parameter
if _, err = client.Post(u.String(), "text/xml", strings.NewReader(conf.Succ.Token)); err != nil {
if _, err = client.Post(u.String(), "text/xml", strings.NewReader(conf.Body.Token)); err != nil {
return nil, errors.Wrap(err, "Error while grabbing cookies from endpoint: %v")
}
@@ -140,7 +160,7 @@ func (ca *CookieAuth) getSPCookie(conf *SuccessResponse) (*CookieResponse, error
return &cookieResponse, nil
}
func (ca *CookieAuth) getSPToken() (conf *SuccessResponse, err error) {
func (ca *CookieAuth) getSPToken(ctx context.Context) (conf *SharepointSuccessResponse, err error) {
reqData := map[string]interface{}{
"Username": ca.user,
"Password": ca.pass,
@@ -160,6 +180,7 @@ func (ca *CookieAuth) getSPToken() (conf *SuccessResponse, err error) {
if err != nil {
return nil, err
}
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
client := fshttp.NewClient(fs.Config)
resp, err := client.Do(req)
@@ -175,12 +196,21 @@ func (ca *CookieAuth) getSPToken() (conf *SuccessResponse, err error) {
}
s := respBuf.Bytes()
conf = &SuccessResponse{}
conf = &SharepointSuccessResponse{}
err = xml.Unmarshal(s, conf)
if err != nil {
// FIXME: Try to parse with FailedResponse struct (check for server error code)
return nil, errors.Wrap(err, "Error while reading endpoint response")
if conf.Body.Token == "" {
// xml Unmarshal won't fail if the response doesn't contain a token
// However, the token will be empty
sErr := &SharepointError{}
errSErr := xml.Unmarshal(s, sErr)
if errSErr == nil {
return nil, sErr
}
}
if err != nil {
return nil, errors.Wrap(err, "Error while reading endpoint response")
}
return
}


@@ -205,7 +205,7 @@ func itemIsDir(item *api.Response) bool {
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(path string, depth string) (info *api.Prop, err error) {
func (f *Fs) readMetaDataForPath(ctx context.Context, path string, depth string) (info *api.Prop, err error) {
// FIXME how do we read back additional properties?
opts := rest.Opts{
Method: "PROPFIND",
@@ -221,7 +221,7 @@ func (f *Fs) readMetaDataForPath(path string, depth string) (info *api.Prop, err
var result api.Multistatus
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &result)
resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err)
})
if apiErr, ok := err.(*api.Error); ok {
@@ -229,7 +229,7 @@ func (f *Fs) readMetaDataForPath(path string, depth string) (info *api.Prop, err
switch apiErr.StatusCode {
case http.StatusNotFound:
if f.retryWithZeroDepth && depth != "0" {
return f.readMetaDataForPath(path, "0")
return f.readMetaDataForPath(ctx, path, "0")
}
return nil, fs.ErrorObjectNotFound
case http.StatusMovedPermanently, http.StatusFound, http.StatusSeeOther:
@@ -353,7 +353,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
}
f.srv.SetErrorHandler(errorHandler)
err = f.setQuirks(opt.Vendor)
err = f.setQuirks(ctx, opt.Vendor)
if err != nil {
return nil, err
}
@@ -424,7 +424,7 @@ func (f *Fs) fetchAndSetBearerToken() error {
}
// setQuirks adjusts the Fs for the vendor passed in
func (f *Fs) setQuirks(vendor string) error {
func (f *Fs) setQuirks(ctx context.Context, vendor string) error {
switch vendor {
case "owncloud":
f.canStream = true
@@ -440,13 +440,13 @@ func (f *Fs) setQuirks(vendor string) error {
// They have to be set instead of BasicAuth
f.srv.RemoveHeader("Authorization") // We don't need this Header if using cookies
spCk := odrvcookie.New(f.opt.User, f.opt.Pass, f.endpointURL)
spCookies, err := spCk.Cookies()
spCookies, err := spCk.Cookies(ctx)
if err != nil {
return err
}
odrvcookie.NewRenew(12*time.Hour, func() {
spCookies, err := spCk.Cookies()
spCookies, err := spCk.Cookies(ctx)
if err != nil {
fs.Errorf("could not renew cookies: %s", err.Error())
return
@@ -477,7 +477,7 @@ func (f *Fs) setQuirks(vendor string) error {
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(remote string, info *api.Prop) (fs.Object, error) {
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Prop) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
@@ -487,7 +487,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *api.Prop) (fs.Object, error)
// Set info
err = o.setMetaData(info)
} else {
err = o.readMetaData() // reads info and meta, returning an error
err = o.readMetaData(ctx) // reads info and meta, returning an error
}
if err != nil {
return nil, err
@@ -498,7 +498,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *api.Prop) (fs.Object, error)
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
return f.newObjectWithInfo(ctx, remote, nil)
}
// Read the normal props, plus the checksums
@@ -528,7 +528,7 @@ type listAllFn func(string, bool, *api.Prop) bool
// Lists the directory required calling the user function on each item found
//
// If the user fn ever returns true then it early exits with found = true
func (f *Fs) listAll(dir string, directoriesOnly bool, filesOnly bool, depth string, fn listAllFn) (found bool, err error) {
func (f *Fs) listAll(ctx context.Context, dir string, directoriesOnly bool, filesOnly bool, depth string, fn listAllFn) (found bool, err error) {
opts := rest.Opts{
Method: "PROPFIND",
Path: f.dirPath(dir), // FIXME Should not start with /
@@ -542,7 +542,7 @@ func (f *Fs) listAll(dir string, directoriesOnly bool, filesOnly bool, depth str
var result api.Multistatus
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &result)
resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err)
})
if err != nil {
@@ -550,7 +550,7 @@ func (f *Fs) listAll(dir string, directoriesOnly bool, filesOnly bool, depth str
// does not exist
if apiErr.StatusCode == http.StatusNotFound {
if f.retryWithZeroDepth && depth != "0" {
return f.listAll(dir, directoriesOnly, filesOnly, "0", fn)
return f.listAll(ctx, dir, directoriesOnly, filesOnly, "0", fn)
}
return found, fs.ErrorDirNotFound
}
@@ -625,14 +625,14 @@ func (f *Fs) listAll(dir string, directoriesOnly bool, filesOnly bool, depth str
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
var iErr error
_, err = f.listAll(dir, false, false, defaultDepth, func(remote string, isDir bool, info *api.Prop) bool {
_, err = f.listAll(ctx, dir, false, false, defaultDepth, func(remote string, isDir bool, info *api.Prop) bool {
if isDir {
d := fs.NewDir(remote, time.Time(info.Modified))
// .SetID(info.ID)
// FIXME more info from dir? can set size, items?
entries = append(entries, d)
} else {
o, err := f.newObjectWithInfo(remote, info)
o, err := f.newObjectWithInfo(ctx, remote, info)
if err != nil {
iErr = err
return true
@@ -696,7 +696,7 @@ func (f *Fs) mkParentDir(ctx context.Context, dirPath string) error {
}
// low level mkdir, only makes the directory, doesn't attempt to create parents
func (f *Fs) _mkdir(dirPath string) error {
func (f *Fs) _mkdir(ctx context.Context, dirPath string) error {
// We assume the root is already created
if dirPath == "" {
return nil
@@ -711,7 +711,7 @@ func (f *Fs) _mkdir(dirPath string) error {
NoResponse: true,
}
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(&opts)
resp, err := f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err)
})
if apiErr, ok := err.(*api.Error); ok {
@@ -727,13 +727,13 @@ func (f *Fs) _mkdir(dirPath string) error {
// mkdir makes the directory and parents using native paths
func (f *Fs) mkdir(ctx context.Context, dirPath string) error {
// defer log.Trace(dirPath, "")("")
err := f._mkdir(dirPath)
err := f._mkdir(ctx, dirPath)
if apiErr, ok := err.(*api.Error); ok {
// parent does not exist so create it first then try again
if apiErr.StatusCode == http.StatusConflict {
err = f.mkParentDir(ctx, dirPath)
if err == nil {
err = f._mkdir(dirPath)
err = f._mkdir(ctx, dirPath)
}
}
}
@@ -749,17 +749,17 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
// dirNotEmpty returns true if the directory exists and is not Empty
//
// if the directory does not exist then err will be ErrorDirNotFound
func (f *Fs) dirNotEmpty(dir string) (found bool, err error) {
return f.listAll(dir, false, false, defaultDepth, func(remote string, isDir bool, info *api.Prop) bool {
func (f *Fs) dirNotEmpty(ctx context.Context, dir string) (found bool, err error) {
return f.listAll(ctx, dir, false, false, defaultDepth, func(remote string, isDir bool, info *api.Prop) bool {
return true
})
}
// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in it
func (f *Fs) purgeCheck(dir string, check bool) error {
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
if check {
notEmpty, err := f.dirNotEmpty(dir)
notEmpty, err := f.dirNotEmpty(ctx, dir)
if err != nil {
return err
}
@@ -775,7 +775,7 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
var resp *http.Response
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, nil)
resp, err = f.srv.CallXML(ctx, &opts, nil, nil)
return f.shouldRetry(resp, err)
})
if err != nil {
@@ -789,7 +789,7 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.purgeCheck(dir, true)
return f.purgeCheck(ctx, dir, true)
}
// Precision return the precision of this Fs
@@ -835,10 +835,10 @@ func (f *Fs) copyOrMove(ctx context.Context, src fs.Object, remote string, metho
},
}
if f.useOCMtime {
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime(ctx).UnixNano())/1E9)
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime(ctx).UnixNano())/1e9)
}
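// For example, a ModTime of 2019-10-13T12:00:00Z (Unix 1570968000) is sent as
// X-OC-Mtime: 1570968000.000000; lower-casing 1E9 to 1e9 only changes the
// literal's spelling, not its value.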
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(&opts)
resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err)
})
if err != nil {
@@ -870,7 +870,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck("", false)
return f.purgeCheck(ctx, "", false)
}
// Move src to this remote using server side move operations.
@@ -904,7 +904,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
dstPath := f.filePath(dstRemote)
// Check if destination exists
_, err := f.dirNotEmpty(dstRemote)
_, err := f.dirNotEmpty(ctx, dstRemote)
if err == nil {
return fs.ErrorDirExists
}
@@ -934,7 +934,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
},
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(&opts)
resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err)
})
if err != nil {
@@ -975,7 +975,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var resp *http.Response
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &q)
resp, err = f.srv.CallXML(ctx, &opts, nil, &q)
return f.shouldRetry(resp, err)
})
if err != nil {
@@ -1028,7 +1028,8 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
err := o.readMetaData()
ctx := context.TODO()
err := o.readMetaData(ctx)
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return 0
@@ -1052,11 +1053,11 @@ func (o *Object) setMetaData(info *api.Prop) (err error) {
// readMetaData gets the metadata if it hasn't already been fetched
//
// it also sets the info
func (o *Object) readMetaData() (err error) {
func (o *Object) readMetaData(ctx context.Context) (err error) {
if o.hasMetaData {
return nil
}
info, err := o.fs.readMetaDataForPath(o.remote, defaultDepth)
info, err := o.fs.readMetaDataForPath(ctx, o.remote, defaultDepth)
if err != nil {
return err
}
@@ -1068,7 +1069,7 @@ func (o *Object) readMetaData() (err error) {
// It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) time.Time {
err := o.readMetaData()
err := o.readMetaData(ctx)
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return time.Now()
@@ -1095,7 +1096,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
Options: options,
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
resp, err = o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(resp, err)
})
if err != nil {
@@ -1128,7 +1129,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if o.fs.useOCMtime || o.fs.hasChecksums {
opts.ExtraHeaders = map[string]string{}
if o.fs.useOCMtime {
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime(ctx).UnixNano())/1E9)
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime(ctx).UnixNano())/1e9)
}
if o.fs.hasChecksums {
// Set an upload checksum - prefer SHA1
@@ -1143,7 +1144,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
resp, err = o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(resp, err)
})
if err != nil {
@@ -1159,7 +1160,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
// read metadata from remote
o.hasMetaData = false
return o.readMetaData()
return o.readMetaData(ctx)
}
// Remove an object
@@ -1170,7 +1171,7 @@ func (o *Object) Remove(ctx context.Context) error {
NoResponse: true,
}
return o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.Call(&opts)
resp, err := o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(resp, err)
})
}


@@ -20,6 +20,7 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/oauthutil"
@@ -29,6 +30,8 @@ import (
"golang.org/x/oauth2"
)
const enc = encodings.Yandex
//oAuth
const (
rcloneClientID = "ac39b43b9eba4cae8ffb788c06d816a8"
@@ -200,14 +203,14 @@ func (f *Fs) dirPath(file string) string {
return path.Join(f.diskRoot, file) + "/"
}
func (f *Fs) readMetaDataForPath(path string, options *api.ResourceInfoRequestOptions) (*api.ResourceInfoResponse, error) {
func (f *Fs) readMetaDataForPath(ctx context.Context, path string, options *api.ResourceInfoRequestOptions) (*api.ResourceInfoResponse, error) {
opts := rest.Opts{
Method: "GET",
Path: "/resources",
Parameters: url.Values{},
}
opts.Parameters.Set("path", path)
opts.Parameters.Set("path", enc.FromStandardPath(path))
if options.SortMode != nil {
opts.Parameters.Set("sort", options.SortMode.String())
@@ -226,7 +229,7 @@ func (f *Fs) readMetaDataForPath(path string, options *api.ResourceInfoRequestOp
var info api.ResourceInfoResponse
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &info)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
})
@@ -234,11 +237,13 @@ func (f *Fs) readMetaDataForPath(path string, options *api.ResourceInfoRequestOp
return nil, err
}
info.Name = enc.ToStandardName(info.Name)
return &info, nil
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
ctx := context.TODO()
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
@@ -284,7 +289,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
//request object meta info
// Check to see if the object exists and is a file
//request object meta info
if info, err := f.readMetaDataForPath(f.diskRoot, &api.ResourceInfoRequestOptions{}); err != nil {
if info, err := f.readMetaDataForPath(ctx, f.diskRoot, &api.ResourceInfoRequestOptions{}); err != nil {
} else {
if info.ResourceType == "file" {
@@ -301,7 +306,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
// Convert a list item into a DirEntry
func (f *Fs) itemToDirEntry(remote string, object *api.ResourceInfoResponse) (fs.DirEntry, error) {
func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *api.ResourceInfoResponse) (fs.DirEntry, error) {
switch object.ResourceType {
case "dir":
t, err := time.Parse(time.RFC3339Nano, object.Modified)
@@ -311,7 +316,7 @@ func (f *Fs) itemToDirEntry(remote string, object *api.ResourceInfoResponse) (fs
d := fs.NewDir(remote, t).SetSize(object.Size)
return d, nil
case "file":
o, err := f.newObjectWithInfo(remote, object)
o, err := f.newObjectWithInfo(ctx, remote, object)
if err != nil {
return nil, err
}
@@ -343,7 +348,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
Limit: limit,
Offset: offset,
}
info, err := f.readMetaDataForPath(root, opts)
info, err := f.readMetaDataForPath(ctx, root, opts)
if err != nil {
if apiErr, ok := err.(*api.ErrorResponse); ok {
@@ -359,8 +364,9 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if info.ResourceType == "dir" {
//list all subdirs
for _, element := range info.Embedded.Items {
element.Name = enc.ToStandardName(element.Name)
remote := path.Join(dir, element.Name)
entry, err := f.itemToDirEntry(remote, &element)
entry, err := f.itemToDirEntry(ctx, remote, &element)
if err != nil {
return nil, err
}
@@ -386,7 +392,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(remote string, info *api.ResourceInfoResponse) (fs.Object, error) {
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.ResourceInfoResponse) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
@@ -395,7 +401,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *api.ResourceInfoResponse) (f
if info != nil {
err = o.setMetaData(info)
} else {
err = o.readMetaData()
err = o.readMetaData(ctx)
if apiErr, ok := err.(*api.ErrorResponse); ok {
// does not exist
if apiErr.ErrorName == "DiskNotFoundError" {
@@ -412,7 +418,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *api.ResourceInfoResponse) (f
// NewObject finds the Object at remote. If it can't be found it
// returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
return f.newObjectWithInfo(ctx, remote, nil)
}
// Creates from the parameters passed in a half finished Object which
@@ -446,7 +452,7 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
}
// CreateDir makes a directory
func (f *Fs) CreateDir(path string) (err error) {
func (f *Fs) CreateDir(ctx context.Context, path string) (err error) {
//fmt.Printf("CreateDir: %s\n", path)
var resp *http.Response
@@ -457,14 +463,18 @@ func (f *Fs) CreateDir(path string) (err error) {
NoResponse: true,
}
opts.Parameters.Set("path", path)
// If creating a directory with a : use (undocumented) disk: prefix
if strings.IndexRune(path, ':') >= 0 {
path = "disk:" + path
}
opts.Parameters.Set("path", enc.FromStandardPath(path))
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(&opts)
resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
if err != nil {
//fmt.Printf("CreateDir Error: %s\n", err.Error())
// fmt.Printf("CreateDir %q Error: %s\n", path, err.Error())
return err
}
// fmt.Printf("...Id %q\n", *info.Id)
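// Sketch of the colon rule above: the "disk:" prefix is undocumented (per the
// comment) and is only added when the path contains a colon, e.g.
//
//	"a:b"       -> sent as "disk:a:b"
//	"plain/dir" -> sent unchanged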
@@ -474,7 +484,7 @@ func (f *Fs) CreateDir(path string) (err error) {
// This really needs improvement and especially proper error checking
// but Yandex does not publish a List of possible errors and when they're
// expected to occur.
func (f *Fs) mkDirs(path string) (err error) {
func (f *Fs) mkDirs(ctx context.Context, path string) (err error) {
//trim filename from path
//dirString := strings.TrimSuffix(path, filepath.Base(path))
//trim "disk:" from path
@@ -483,7 +493,7 @@ func (f *Fs) mkDirs(path string) (err error) {
return nil
}
if err = f.CreateDir(dirString); err != nil {
if err = f.CreateDir(ctx, dirString); err != nil {
if apiErr, ok := err.(*api.ErrorResponse); ok {
// already exists
if apiErr.ErrorName != "DiskPathPointsToExistentDirectoryError" {
@@ -493,7 +503,7 @@ func (f *Fs) mkDirs(path string) (err error) {
for _, element := range dirs {
if element != "" {
mkdirpath += element + "/" //path separator /
if err = f.CreateDir(mkdirpath); err != nil {
if err = f.CreateDir(ctx, mkdirpath); err != nil {
// ignore errors while creating dirs
}
}
@@ -505,7 +515,7 @@ func (f *Fs) mkDirs(path string) (err error) {
return err
}
func (f *Fs) mkParentDirs(resPath string) error {
func (f *Fs) mkParentDirs(ctx context.Context, resPath string) error {
// defer log.Trace(dirPath, "")("")
// chop off trailing / if it exists
if strings.HasSuffix(resPath, "/") {
@@ -515,17 +525,17 @@ func (f *Fs) mkParentDirs(resPath string) error {
if parent == "." {
parent = ""
}
return f.mkDirs(parent)
return f.mkDirs(ctx, parent)
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
path := f.filePath(dir)
return f.mkDirs(path)
return f.mkDirs(ctx, path)
}
// waitForJob waits for the job with status in url to complete
func (f *Fs) waitForJob(location string) (err error) {
func (f *Fs) waitForJob(ctx context.Context, location string) (err error) {
opts := rest.Opts{
RootURL: location,
Method: "GET",
@@ -535,7 +545,7 @@ func (f *Fs) waitForJob(location string) (err error) {
var resp *http.Response
var body []byte
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(&opts)
resp, err = f.srv.Call(ctx, &opts)
if err != nil {
return fserrors.ShouldRetry(err), err
}
@@ -564,20 +574,20 @@ func (f *Fs) waitForJob(location string) (err error) {
return errors.Errorf("async operation didn't complete after %v", fs.Config.Timeout)
}
func (f *Fs) delete(path string, hardDelete bool) (err error) {
func (f *Fs) delete(ctx context.Context, path string, hardDelete bool) (err error) {
opts := rest.Opts{
Method: "DELETE",
Path: "/resources",
Parameters: url.Values{},
}
opts.Parameters.Set("path", path)
opts.Parameters.Set("path", enc.FromStandardPath(path))
opts.Parameters.Set("permanently", strconv.FormatBool(hardDelete))
var resp *http.Response
var body []byte
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(&opts)
resp, err = f.srv.Call(ctx, &opts)
if err != nil {
return fserrors.ShouldRetry(err), err
}
@@ -595,19 +605,19 @@ func (f *Fs) delete(path string, hardDelete bool) (err error) {
if err != nil {
return errors.Wrapf(err, "async info result not JSON: %q", body)
}
return f.waitForJob(info.HRef)
return f.waitForJob(ctx, info.HRef)
}
return nil
}
// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in it
func (f *Fs) purgeCheck(dir string, check bool) error {
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
root := f.filePath(dir)
if check {
//to comply with rclone logic we check if the directory is empty before deleting it.
//send request to get list of objects in this directory.
info, err := f.readMetaDataForPath(root, &api.ResourceInfoRequestOptions{})
info, err := f.readMetaDataForPath(ctx, root, &api.ResourceInfoRequestOptions{})
if err != nil {
return errors.Wrap(err, "rmdir failed")
}
@@ -616,14 +626,14 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
}
}
//delete directory
return f.delete(root, false)
return f.delete(ctx, root, false)
}
// Rmdir deletes the container
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.purgeCheck(dir, true)
return f.purgeCheck(ctx, dir, true)
}
// Purge deletes all the files and the container
@@ -632,25 +642,25 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck("", false)
return f.purgeCheck(ctx, "", false)
}
// copyOrMove copies or moves directories or files depending on the method parameter
func (f *Fs) copyOrMove(method, src, dst string, overwrite bool) (err error) {
func (f *Fs) copyOrMove(ctx context.Context, method, src, dst string, overwrite bool) (err error) {
opts := rest.Opts{
Method: "POST",
Path: "/resources/" + method,
Parameters: url.Values{},
}
opts.Parameters.Set("from", src)
opts.Parameters.Set("path", dst)
opts.Parameters.Set("from", enc.FromStandardPath(src))
opts.Parameters.Set("path", enc.FromStandardPath(dst))
opts.Parameters.Set("overwrite", strconv.FormatBool(overwrite))
var resp *http.Response
var body []byte
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(&opts)
resp, err = f.srv.Call(ctx, &opts)
if err != nil {
return fserrors.ShouldRetry(err), err
}
@@ -668,7 +678,7 @@ func (f *Fs) copyOrMove(method, src, dst string, overwrite bool) (err error) {
if err != nil {
return errors.Wrapf(err, "async info result not JSON: %q", body)
}
return f.waitForJob(info.HRef)
return f.waitForJob(ctx, info.HRef)
}
return nil
}
@@ -690,11 +700,11 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
dstPath := f.filePath(remote)
err := f.mkParentDirs(dstPath)
err := f.mkParentDirs(ctx, dstPath)
if err != nil {
return nil, err
}
err = f.copyOrMove("copy", srcObj.filePath(), dstPath, false)
err = f.copyOrMove(ctx, "copy", srcObj.filePath(), dstPath, false)
if err != nil {
return nil, errors.Wrap(err, "couldn't copy file")
@@ -720,11 +730,11 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
dstPath := f.filePath(remote)
err := f.mkParentDirs(dstPath)
err := f.mkParentDirs(ctx, dstPath)
if err != nil {
return nil, err
}
err = f.copyOrMove("move", srcObj.filePath(), dstPath, false)
err = f.copyOrMove(ctx, "move", srcObj.filePath(), dstPath, false)
if err != nil {
return nil, errors.Wrap(err, "couldn't move file")
@@ -758,12 +768,12 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return errors.New("can't move root directory")
}
err := f.mkParentDirs(dstPath)
err := f.mkParentDirs(ctx, dstPath)
if err != nil {
return err
}
_, err = f.readMetaDataForPath(dstPath, &api.ResourceInfoRequestOptions{})
_, err = f.readMetaDataForPath(ctx, dstPath, &api.ResourceInfoRequestOptions{})
if apiErr, ok := err.(*api.ErrorResponse); ok {
// does not exist
if apiErr.ErrorName == "DiskNotFoundError" {
@@ -775,7 +785,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return fs.ErrorDirExists
}
err = f.copyOrMove("move", srcPath, dstPath, false)
err = f.copyOrMove(ctx, "move", srcPath, dstPath, false)
if err != nil {
return errors.Wrap(err, "couldn't move directory")
@@ -793,16 +803,16 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
}
opts := rest.Opts{
Method: "PUT",
Path: path,
Path: enc.FromStandardPath(path),
Parameters: url.Values{},
NoResponse: true,
}
opts.Parameters.Set("path", f.filePath(remote))
opts.Parameters.Set("path", enc.FromStandardPath(f.filePath(remote)))
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(&opts)
resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
@@ -819,7 +829,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
return "", errors.Wrap(err, "couldn't create public link")
}
info, err := f.readMetaDataForPath(f.filePath(remote), &api.ResourceInfoRequestOptions{})
info, err := f.readMetaDataForPath(ctx, f.filePath(remote), &api.ResourceInfoRequestOptions{})
if err != nil {
return "", err
}
@@ -840,7 +850,7 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(&opts)
resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
return err
@@ -857,7 +867,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var info api.DiskInfo
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &info)
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
})
@@ -924,11 +934,11 @@ func (o *Object) setMetaData(info *api.ResourceInfoResponse) (err error) {
}
// readMetaData reads and sets the new metadata for a storage.Object
func (o *Object) readMetaData() (err error) {
func (o *Object) readMetaData(ctx context.Context) (err error) {
if o.hasMetaData {
return nil
}
info, err := o.fs.readMetaDataForPath(o.filePath(), &api.ResourceInfoRequestOptions{})
info, err := o.fs.readMetaDataForPath(ctx, o.filePath(), &api.ResourceInfoRequestOptions{})
if err != nil {
return err
}
@@ -943,7 +953,7 @@ func (o *Object) readMetaData() (err error) {
// It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) time.Time {
err := o.readMetaData()
err := o.readMetaData(ctx)
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return time.Now()
@@ -953,7 +963,8 @@ func (o *Object) ModTime(ctx context.Context) time.Time {
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
err := o.readMetaData()
ctx := context.TODO()
err := o.readMetaData(ctx)
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return 0
@@ -974,7 +985,7 @@ func (o *Object) Storable() bool {
return true
}
func (o *Object) setCustomProperty(property string, value string) (err error) {
func (o *Object) setCustomProperty(ctx context.Context, property string, value string) (err error) {
var resp *http.Response
opts := rest.Opts{
Method: "PATCH",
@@ -983,14 +994,14 @@ func (o *Object) setCustomProperty(property string, value string) (err error) {
NoResponse: true,
}
opts.Parameters.Set("path", o.filePath())
opts.Parameters.Set("path", enc.FromStandardPath(o.filePath()))
rcm := map[string]interface{}{
property: value,
}
cpr := api.CustomPropertyResponse{CustomProperties: rcm}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, &cpr, nil)
resp, err = o.fs.srv.CallJSON(ctx, &opts, &cpr, nil)
return shouldRetry(resp, err)
})
return err
@@ -1001,7 +1012,7 @@ func (o *Object) setCustomProperty(property string, value string) (err error) {
// Commits the datastore
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
// set custom_property 'rclone_modified' of object to modTime
err := o.setCustomProperty("rclone_modified", modTime.Format(time.RFC3339Nano))
err := o.setCustomProperty(ctx, "rclone_modified", modTime.Format(time.RFC3339Nano))
if err != nil {
return err
}
@@ -1020,10 +1031,10 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
Parameters: url.Values{},
}
opts.Parameters.Set("path", o.filePath())
opts.Parameters.Set("path", enc.FromStandardPath(o.filePath()))
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, nil, &dl)
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &dl)
return shouldRetry(resp, err)
})
@@ -1038,7 +1049,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
Options: options,
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
if err != nil {
@@ -1047,7 +1058,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
return resp.Body, err
}
func (o *Object) upload(in io.Reader, overwrite bool, mimeType string) (err error) {
func (o *Object) upload(ctx context.Context, in io.Reader, overwrite bool, mimeType string) (err error) {
// prepare upload
var resp *http.Response
var ur api.AsyncInfo
@@ -1057,11 +1068,11 @@ func (o *Object) upload(in io.Reader, overwrite bool, mimeType string) (err erro
Parameters: url.Values{},
}
opts.Parameters.Set("path", o.filePath())
opts.Parameters.Set("path", enc.FromStandardPath(o.filePath()))
opts.Parameters.Set("overwrite", strconv.FormatBool(overwrite))
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(&opts, nil, &ur)
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &ur)
return shouldRetry(resp, err)
})
@@ -1079,7 +1090,7 @@ func (o *Object) upload(in io.Reader, overwrite bool, mimeType string) (err erro
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(&opts)
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
@@ -1097,13 +1108,13 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
remote := o.filePath()
//create full path to file before upload.
err := o.fs.mkParentDirs(remote)
err := o.fs.mkParentDirs(ctx, remote)
if err != nil {
return err
}
//upload file
err = o.upload(in1, true, fs.MimeType(ctx, src))
err = o.upload(ctx, in1, true, fs.MimeType(ctx, src))
if err != nil {
return err
}
@@ -1120,7 +1131,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
return o.fs.delete(o.filePath(), false)
return o.fs.delete(ctx, o.filePath(), false)
}
// MimeType of an Object if known, "" otherwise


@@ -1,4 +1,4 @@
#!/usr/bin/env python
#!/usr/bin/env python3
"""
This is a tool to decrypt file names in rclone logs.
@@ -43,13 +43,13 @@ def map_log_file(crypt_map, log_file):
"""
with open(log_file) as fd:
for line in fd:
for cipher, plain in crypt_map.iteritems():
for cipher, plain in crypt_map.items():
line = line.replace(cipher, plain)
sys.stdout.write(line)
def main():
if len(sys.argv) < 3:
print "Syntax: %s <crypt-mapping-file> <log-file>" % sys.argv[0]
print("Syntax: %s <crypt-mapping-file> <log-file>" % sys.argv[0])
raise SystemExit(1)
mapping_file, log_file = sys.argv[1:]
crypt_map = read_crypt_map(mapping_file)


@@ -9,6 +9,7 @@ package main
import (
"archive/tar"
"compress/bzip2"
"compress/gzip"
"encoding/json"
"flag"
@@ -349,6 +350,8 @@ func untar(srcFile, fileName, extractDir string) {
log.Fatalf("Couldn't open gzip: %v", err)
}
in = gzf
} else if srcExt == ".bz2" {
in = bzip2.NewReader(f)
}
tarReader := tar.NewReader(in)


@@ -1,4 +1,4 @@
#!/usr/bin/env python2
#!/usr/bin/env python3
"""
Make backend documentation
"""
@@ -52,9 +52,9 @@ if __name__ == "__main__":
for backend in find_backends():
try:
alter_doc(backend)
except Exception, e:
print "Failed adding docs for %s backend: %s" % (backend, e)
except Exception as e:
print("Failed adding docs for %s backend: %s" % (backend, e))
failed += 1
else:
success += 1
print "Added docs for %d backends with %d failures" % (success, failed)
print("Added docs for %d backends with %d failures" % (success, failed))


@@ -1,4 +1,4 @@
#!/usr/bin/python
#!/usr/bin/python3
"""
Generate a markdown changelog for the rclone project
"""
@@ -99,10 +99,11 @@ def process_log(log):
def main():
if len(sys.argv) != 3:
print >>sys.stderr, "Syntax: %s vX.XX vX.XY" % sys.argv[0]
print("Syntax: %s vX.XX vX.XY" % sys.argv[0], file=sys.stderr)
sys.exit(1)
version, next_version = sys.argv[1], sys.argv[2]
log = subprocess.check_output(["git", "log", '''--pretty=format:%H|%an|%aI|%s'''] + [version+".."+next_version])
log = log.decode("utf-8")
by_category = process_log(log)
# Output backends first so remaining in by_category are core items
@@ -112,7 +113,7 @@ def main():
out("local", title="Local")
out("cache", title="Cache")
out("crypt", title="Crypt")
backend_names = sorted(x for x in by_category.keys() if x in backends)
backend_names = sorted(x for x in list(by_category.keys()) if x in backends)
for backend_name in backend_names:
if backend_name in backend_titles:
backend_title = backend_titles[backend_name]
@@ -123,7 +124,7 @@ def main():
# Split remaining in by_category into new features and fixes
new_features = defaultdict(list)
bugfixes = defaultdict(list)
for name, messages in by_category.iteritems():
for name, messages in by_category.items():
for message in messages:
if IS_FIX_RE.search(message):
bugfixes[name].append(message)


@@ -1,4 +1,4 @@
#!/usr/bin/env python2
#!/usr/bin/env python3
"""
Make single page versions of the documentation for release and
conversion into man pages etc.
@@ -31,6 +31,8 @@ docs = [
"b2.md",
"box.md",
"cache.md",
"chunker.md",
"sharefile.md",
"crypt.md",
"dropbox.md",
"ftp.md",
@@ -41,6 +43,7 @@ docs = [
"hubic.md",
"jottacloud.md",
"koofr.md",
"mailru.md",
"mega.md",
"azureblob.md",
"onedrive.md",
@@ -118,8 +121,8 @@ def check_docs(docpath):
docs_set = set(docs)
if files == docs_set:
return
print "Files on disk but not in docs variable: %s" % ", ".join(files - docs_set)
print "Files in docs variable but not on disk: %s" % ", ".join(docs_set - files)
print("Files on disk but not in docs variable: %s" % ", ".join(files - docs_set))
print("Files in docs variable but not on disk: %s" % ", ".join(docs_set - files))
raise ValueError("Missing files")
def read_command(command):
@@ -142,7 +145,7 @@ def read_commands(docpath):
def main():
check_docs(docpath)
command_docs = read_commands(docpath)
command_docs = read_commands(docpath).replace("\\", "\\\\") # escape \ so we can use command_docs in re.sub
with open(outfile, "w") as out:
out.write("""\
%% rclone(1) User Manual
@@ -156,7 +159,7 @@ def main():
if doc == "docs.md":
contents = re.sub(r"The main rclone commands.*?for the full list.", command_docs, contents, 0, re.S)
out.write(contents)
print "Written '%s'" % outfile
print("Written '%s'" % outfile)
if __name__ == "__main__":
main()


@@ -1,4 +1,4 @@
#!/usr/bin/env python
#!/usr/bin/env python3
"""
Update the authors.md file with the authors from the git log
"""
@@ -23,13 +23,14 @@ def add_email(name, email):
"""
adds the email passed in to the end of authors.md
"""
print "Adding %s <%s>" % (name, email)
print("Adding %s <%s>" % (name, email))
with open(AUTHORS, "a+") as fd:
print >>fd, " * %s <%s>" % (name, email)
print(" * %s <%s>" % (name, email), file=fd)
subprocess.check_call(["git", "commit", "-m", "Add %s to contributors" % name, AUTHORS])
def main():
out = subprocess.check_output(["git", "log", '--reverse', '--format=%an|%ae', "master"])
out = out.decode("utf-8")
previous = load()
for line in out.split("\n"):


@@ -174,7 +174,11 @@ func NewFsSrcDstFiles(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs
// If file exists then srcFileName != "", however if the file
// doesn't exist then we assume it is a directory...
if srcFileName != "" {
dstRemote, dstFileName = fspath.Split(dstRemote)
var err error
dstRemote, dstFileName, err = fspath.Split(dstRemote)
if err != nil {
log.Fatalf("Parsing %q failed: %v", args[1], err)
}
if dstRemote == "" {
dstRemote = "."
}
@@ -197,7 +201,10 @@ func NewFsSrcDstFiles(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs
// NewFsDstFile creates a new dst fs with a destination file name from the arguments
func NewFsDstFile(args []string) (fdst fs.Fs, dstFileName string) {
dstRemote, dstFileName := fspath.Split(args[0])
dstRemote, dstFileName, err := fspath.Split(args[0])
if err != nil {
log.Fatalf("Parsing %q failed: %v", args[0], err)
}
if dstRemote == "" {
dstRemote = "."
}
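// A hedged sketch of the three-value Split used above (behaviour inferred
// from these call sites: it now also reports a parse error):
//
//	remote, leaf, err := fspath.Split("remote:path/to/file.txt")
//	// on success: remote == "remote:path/to/", leaf == "file.txt"
//	if err != nil {
//		log.Fatalf("Parsing failed: %v", err)
//	}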


@@ -267,8 +267,8 @@ func (fsys *FS) Statfs(path string, stat *fuse.Statfs_t) (errc int) {
stat.Blocks = fsBlocks // Total data blocks in file system.
stat.Bfree = fsBlocks // Free blocks in file system.
stat.Bavail = fsBlocks // Free blocks in file system if you're not root.
stat.Files = 1E9 // Total files in file system.
stat.Ffree = 1E9 // Free files in file system.
stat.Files = 1e9 // Total files in file system.
stat.Ffree = 1e9 // Free files in file system.
stat.Bsize = blockSize // Block size
stat.Namemax = 255 // Maximum file name length?
stat.Frsize = blockSize // Fragment size, smallest addressable data size in the file system.
@@ -299,6 +299,9 @@ func (fsys *FS) Open(path string, flags int) (errc int, fh uint64) {
return translateError(err), fhUnset
}
// FIXME add support for unknown length files setting direct_io
// See: https://github.com/billziss-gh/cgofuse/issues/38
return 0, fsys.openHandle(handle)
}


@@ -4,12 +4,18 @@ import (
"context"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/operations"
"github.com/spf13/cobra"
)
var (
autoFilename = false
)
func init() {
cmd.Root.AddCommand(commandDefintion)
commandDefintion.Flags().BoolVarP(&autoFilename, "auto-filename", "a", autoFilename, "Get the file name from the url and use it for destination file path")
}
var commandDefintion = &cobra.Command{
@@ -18,13 +24,22 @@ var commandDefintion = &cobra.Command{
Long: `
Download the content of urls and copy it to the destination
without saving it in temporary storage.
Setting the --auto-filename flag will retrieve the file name from the url and use it in the destination path.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
fsdst, dstFileName := cmd.NewFsDstFile(args[1:])
var dstFileName string
var fsdst fs.Fs
if autoFilename {
fsdst = cmd.NewFsDir(args[1:])
} else {
fsdst, dstFileName = cmd.NewFsDstFile(args[1:])
}
cmd.Run(true, true, command, func() error {
_, err := operations.CopyURL(context.Background(), fsdst, dstFileName, args[0])
_, err := operations.CopyURL(context.Background(), fsdst, dstFileName, args[0], autoFilename)
return err
})
},

Some files were not shown because too many files have changed in this diff