mirror of https://github.com/rclone/rclone.git synced 2025-12-06 00:03:32 +00:00

Compare commits


182 Commits

Author SHA1 Message Date
Nick Craig-Wood
40b9e312c6 vfs: factor the vfs cache into its own package 2020-02-28 17:00:32 +00:00
Nick Craig-Wood
2b268f9724 build: fixup formatting after go1.14 go fmt changes 2020-02-28 16:58:33 +00:00
Nick Craig-Wood
7a5a74cecb crypt: clarify that directory_name_encryption depends on filename_encryption
See: https://forum.rclone.org/t/directory-name-encryption-is-set-to-always-false-when-choosing-filename-encryption-off/14600
2020-02-28 16:26:45 +00:00
Nick Craig-Wood
54a0c6b8ad Add valery1707 to contributors 2020-02-28 16:26:45 +00:00
valery1707
1ad23c4dc8 mailru: Describe 2FA requirements (#4015)
Fair enough
2020-02-28 16:54:09 +03:00
Dan Walters
7586a345ff dlna: cds: use modification time as date in dlna metadata
We haven't been outputting anything for this until now, which leads to
my Samsung showing an epoch/1970 date for all files.
2020-02-27 18:05:18 +01:00
Nick Craig-Wood
393b94bb70 vfs: add --vfs-read-wait and --vfs-write-wait flags
    --vfs-read-wait duration    Time to wait for in-sequence read before seeking. (default 5ms)
    --vfs-write-wait duration   Time to wait for in-sequence write before giving error. (default 1s)

See: https://forum.rclone.org/t/constantly-high-iowait-add-log/14156
2020-02-27 16:12:33 +00:00
Nick Craig-Wood
e3c11c9ca1 mount: add --async-read flag to disable asynchronous reads
See: https://forum.rclone.org/t/constantly-high-iowait-add-log/14156
2020-02-27 16:12:33 +00:00
Nick Craig-Wood
3c91abce74 vfs: fix race condition caused by unlocked reading of Dir.path 2020-02-27 15:50:41 +00:00
Nick Craig-Wood
87d856d71b cache: disable race tests until bbolt is fixed
bbolt fails with "unsafe pointer conversion" under the go1.14 race
detector.

Disable race tests until https://github.com/etcd-io/bbolt/issues/187
is fixed.
2020-02-27 08:05:28 +00:00
Nick Craig-Wood
3855c003ce build: update to use go1.14 for the build 2020-02-26 21:26:47 +00:00
Nick Craig-Wood
abb9f89f65 vendor: update all dependencies 2020-02-26 21:26:46 +00:00
Nick Craig-Wood
17b4058ee9 mount: constrain to go1.13 or above otherwise bazil.org/fuse fails to compile 2020-02-26 21:26:46 +00:00
Nick Craig-Wood
9663f9b2ab mount: ignore --allow-root flag with a warning as it has been removed upstream
For background see: https://github.com/bazil/fuse/issues/144
2020-02-26 21:11:25 +00:00
Nick Craig-Wood
d6e10dba33 docs: fix confusion over processor names in download table
See: https://forum.rclone.org/t/intel-processor-download-help/14558
2020-02-26 16:39:46 +00:00
Nick Craig-Wood
da5cbc194a ftp: fix lockup on Close failures when using concurrency limit #3984
Before this change if rclone failed to close a file download for some
reason it would leak a concurrency token. When all the tokens were
leaked then rclone would lock up.

This fix returns the concurrency token regardless of the error status.
2020-02-25 14:38:12 +00:00
Nick Craig-Wood
e8eb658ba5 ftp: fix lockup on failed upload when using concurrency limit #3984
Before this change if rclone failed to upload a file for some reason
it would leak a concurrency token. When all the tokens were leaked
then rclone would lock up.

The fix returns the concurrency token regardless of the error state.
2020-02-25 14:38:12 +00:00
Nick Craig-Wood
28f69f25a0 ftp: fix lockup when using concurrency limit on failed connections #3984
Before this change if rclone failed to make an FTP connection for some
reason it would leak a concurrency token. When all the tokens were
leaked then rclone would lock up.

The fix returns the concurrency token if creating the FTP connection
returns an error.
2020-02-25 14:38:12 +00:00
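The pattern behind these three fixes is the same: acquire a token from a counting semaphore, then guarantee its return on every path, including error paths. A minimal sketch of that pattern (hypothetical names, not rclone's actual ftp code):

```
package main

import (
	"errors"
	"fmt"
)

// tokens is a counting semaphore limiting concurrent FTP operations.
var tokens = make(chan struct{}, 4)

// withToken runs fn while holding a concurrency token. The deferred
// receive returns the token on every path, success or error, so a
// failed connection, upload or download can no longer leak a slot
// and eventually lock everything up.
func withToken(fn func() error) error {
	tokens <- struct{}{}        // acquire
	defer func() { <-tokens }() // always release
	return fn()
}

func main() {
	err := withToken(func() error {
		return errors.New("simulated connection failure")
	})
	fmt.Println("operation finished, token returned:", err)
}
```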
Nick Craig-Wood
07e4b9bb7f operations: fix multithread copy test to use the correct modify window
In bde0334bd8 "operations: fix setting the timestamp on Windows
for multithread copy" the test for multithread copy failed to take
into account the modify window of the remote under test.
2020-02-25 13:30:35 +00:00
Aleksandar Jankovic
708b967f15 backend/s3: fix multipart abort context
S3 couldn't abort a multipart upload when the context was cancelled,
because the cancelled context prevented the abort request from being
sent.
2020-02-25 12:11:32 +01:00
Dan Walters
7e2568a312 dlna: cds: don't specify childCount at all when unknown
Basically, solving #3541 with a different approach - bringing in
the upstream upnpav module, and changing ChildCount from int to a
*int to avoid childCount="0" in the XML output when that value is
simply unknown.

The current approach leads to some recursion issues, and according
to the DLNA spec it shouldn't be necessary anyway.
2020-02-25 08:41:00 +01:00
Nick Craig-Wood
bde0334bd8 operations: fix setting the timestamp on Windows for multithread copy
Before this fix we attempted to set the modification time on the file
while it was open. This works fine on Linux but not on Windows. The
test was also incorrect, testing the source file rather than the
destination file.

This closes the file before setting the modification time and fixes
the tests.

Fixes #3994
2020-02-24 17:30:09 +00:00
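A minimal illustration of the close-before-stamp ordering (file name and timestamp are assumptions; this is not the rclone code itself):

```
package main

import (
	"log"
	"os"
	"time"
)

func main() {
	f, err := os.Create("out.bin")
	if err != nil {
		log.Fatal(err)
	}
	if _, err := f.Write([]byte("data")); err != nil {
		f.Close()
		log.Fatal(err)
	}
	// Close the handle *before* setting the timestamp: on Windows,
	// setting the mod time on an open file may fail or be clobbered
	// when the last handle is closed.
	if err := f.Close(); err != nil {
		log.Fatal(err)
	}
	mtime := time.Date(2020, 2, 24, 12, 0, 0, 0, time.UTC)
	if err := os.Chtimes("out.bin", mtime, mtime); err != nil {
		log.Fatal(err)
	}
}
```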
Aleksandar Janković
5470d34740 backend/s3: use low-level-retries as the number of SDK retries
Amazon S3 is built to handle different kinds of workloads.
In rare cases where S3 is not able to scale, for whatever reason,
users will face status 500 errors. The main mechanism for handling
these errors is retries, and the number of retries needed varies for
each use case.

This change makes retries for the s3 backend configurable via the
--low-level-retries option.
2020-02-24 16:43:44 +01:00
Maciej Zimnoch
ac9cb50fdb backend/s3: use memory pool for buffer allocations
Previously each multipart upload allocated its own buffers, which were
garbage collected after the file upload. Subsequent files couldn't
reuse the already allocated memory, which resulted in inefficient
memory management. This change introduces a backend memory pool that
keeps memory chunks which can be reused during object operations.

Fixes #3967
2020-02-24 13:32:32 +01:00
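One common way to build such a pool in Go is sync.Pool; this sketch shows the general idea only, not rclone's actual pool implementation:

```
package main

import (
	"fmt"
	"sync"
)

const chunkSize = 5 * 1024 * 1024 // assumed multipart chunk size

// bufPool hands out reusable chunk-sized buffers so consecutive
// multipart uploads don't each allocate, and then garbage collect,
// their own memory.
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, chunkSize) },
}

func uploadPart(data []byte) { /* send chunk to the backend */ }

func main() {
	buf := bufPool.Get().([]byte)
	uploadPart(buf[:1024])
	bufPool.Put(buf) // return the chunk for the next upload to reuse
	fmt.Println("buffer recycled")
}
```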
buengese
4a8b548add jottacloud: use RawURLEncoding when decoding base64 encoded login token - fixes #3945 2020-02-22 23:12:56 +01:00
Lars Lehtonen
481c8a40ea backend/azureblob: fmt nit 2020-02-20 15:50:53 +01:00
Lars Lehtonen
25ef3a281b backend/azureblob: remove unused Object.parseTimeString() 2020-02-20 15:50:53 +01:00
Lars Lehtonen
219bd97e8a backend/qingstor: lint fix 2020-02-14 18:11:01 +00:00
Lars Lehtonen
8b14cd24aa backend/qingstor: prune multiUploader.list() 2020-02-14 18:11:01 +00:00
Nick Craig-Wood
3893c14889 operations: make rcat obey --ignore-checksum 2020-02-14 12:47:11 +00:00
Nick Craig-Wood
c41fbc0f90 operations: move Rcat tests back to main test file now S3 prob is fixed 2020-02-14 12:26:52 +00:00
Nick Craig-Wood
f45425e5a9 operations: factor CommonHash out of Copy for re-use elsewhere 2020-02-14 12:12:10 +00:00
Nick Craig-Wood
bd9fd629bc test: add TestS3MinioEdge to test leading edge minio too #3934 2020-02-13 11:01:06 +00:00
Nick Craig-Wood
3b19f48929 testserver: add provider to TestS3Minio #3934
This is necessary to pass the TestIntegration/FsMkdir/FsEncoding
listing tests.
2020-02-13 11:01:06 +00:00
Outvi V
4996edc030 oauthutil: make instructions for rclone authorize more robust
It appends "--" to the rclone authorize command line before client_id,
in case that the client_id or client_secret has a prefix of "-"
(OneDrive's does), which affects the argument parsing.
2020-02-13 10:18:23 +00:00
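The "--" terminator tells a flag parser to stop treating later arguments as options. A small Go illustration of why it matters when a credential starts with "-" (standard library flag package, not rclone's parser):

```
package main

import (
	"flag"
	"fmt"
)

func main() {
	fs := flag.NewFlagSet("demo", flag.ContinueOnError)
	verbose := fs.Bool("v", false, "verbose")

	// Without "--", a client_id such as "-abc123" would be parsed as
	// an (unknown) flag and cause an error. The "--" terminator forces
	// everything after it to be a positional argument.
	args := []string{"-v", "--", "-abc123", "secret"}
	if err := fs.Parse(args); err != nil {
		fmt.Println("parse error:", err)
		return
	}
	fmt.Println("verbose:", *verbose)
	fmt.Println("positional:", fs.Args()) // [-abc123 secret]
}
```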
Michał Matczuk
964f1f6a7e fs/accounting: Restore "Max number of stats groups reached" log line
Changed log level to debug.
2020-02-12 21:21:25 +00:00
Michał Matczuk
e75c1f70bb backend/s3: Added 500 as retryErrorCode
The error code 500 Internal Error indicates that Amazon S3 is unable to handle the request at that time. The error code 503 Slow Down typically indicates that the requests to the S3 bucket are very high, exceeding the request rates described in Request Rate and Performance Guidelines.

Because Amazon S3 is a distributed service, a very small percentage of 5xx errors are expected during normal use of the service. All requests that return 5xx errors from Amazon S3 can and should be retried, so we recommend that applications making requests to Amazon S3 have a fault-tolerance mechanism to recover from these errors.

https://aws.amazon.com/premiumsupport/knowledge-center/http-5xx-errors-s3/
2020-02-12 11:43:18 +00:00
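A hedged sketch of the idea, treating selected 5xx statuses as retryable (names hypothetical, not the rclone source):

```
package main

import "fmt"

// retryStatusCodes lists HTTP statuses taken to indicate a transient
// server-side problem; this change adds 500 alongside the usual
// 502/503/504.
var retryStatusCodes = map[int]bool{
	500: true, // Internal Error: S3 couldn't handle the request just now
	502: true, // Bad Gateway
	503: true, // Slow Down: request rate too high
	504: true, // Gateway Timeout
}

func shouldRetry(status int) bool { return retryStatusCodes[status] }

func main() {
	fmt.Println(shouldRetry(500), shouldRetry(404)) // true false
}
```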
Michał Matczuk
19a4d74ee7 backend/s3: Fail fast multipart upload
When a part upload request fails, an error is returned and gCtx is
cancelled. This does not prevent other parts from being tried. They
immediately fail due to the cancelled context, but are retried by
rclone anyway...

Example AWS debug output

```
-----------------------------------------------------
2020/02/11 14:12:17 DEBUG: Retrying Request s3/UploadPart, attempt 4
2020/02/11 14:12:17 DEBUG: Request s3/UploadPart Details:
---[ REQUEST POST-SIGN ]-----------------------------
PUT /backuptest-rclone/huge/file.db?partNumber=11&uploadId=190939b4-3c43-4b98-ac11-92303e3f11b0 HTTP/1.1
Host: 192.168.100.99:9000
User-Agent: aws-sdk-go/1.23.8 (go1.13.1; linux; amd64)
Content-Length: 5242880
Authorization: AWS4-HMAC-SHA256 Credential=miniouser/20200211/us-east-1/s3/aws4_request, SignedHeaders=content-length;content-md5;expect;host;x-amz-content-sha256;x-amz-date, Signature=3fc03a01f651cec09b05290459e9ceb26db9a8aa00c4e1b16e8cf5617eb81da8
Content-Md5: XzY+DlipXwbL6bvGYsXftg==
Expect: 100-Continue
X-Amz-Content-Sha256: c036cbb7553a909f8b8877d4461924307f27ecb66cff928eeeafd569c3887e29
X-Amz-Date: 20200211T131217Z
Accept-Encoding: gzip

-----------------------------------------------------
http://192.168.100.99:9000/backuptest-rclone/huge/file.db?partNumber=11&uploadId=190939b4-3c43-4b98-ac11-92303e3f11b0
2020/02/11 14:12:17 DEBUG: Response s3/UploadPart Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 500 InternalServerError
Content-Length: 0
-----------------------------------------------------
UploadPartWithContext() error InternalError: We encountered an internal error. Please try again
	status code: 500, request id: , host id:

2020/02/11 14:12:18 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
2020/02/11 14:12:20 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
2020/02/11 14:12:22 DEBUG ERROR: Request s3/UploadPart:
---[ REQUEST DUMP ERROR ]-----------------------------
context canceled
------------------------------------------------------
UploadPartWithContext() error RequestCanceled: request context canceled
caused by: context canceled
```

This adds a fail fast behaviour in case the context was cancelled.
2020-02-12 11:40:34 +00:00
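The essence of the fix is checking the context before trying (or retrying) a part, so sibling uploads stop immediately once one part has failed and cancelled the group. A minimal sketch with errgroup, using hypothetical names:

```
package main

import (
	"context"
	"errors"
	"fmt"

	"golang.org/x/sync/errgroup"
)

func uploadPart(ctx context.Context, n int) error {
	// Fail fast: if a sibling part already failed and cancelled the
	// group context, don't attempt or retry this part at all.
	if err := ctx.Err(); err != nil {
		return err
	}
	if n == 3 {
		return errors.New("part 3: 500 InternalServerError")
	}
	return nil
}

func main() {
	g, gCtx := errgroup.WithContext(context.Background())
	for n := 1; n <= 10; n++ {
		n := n
		g.Go(func() error { return uploadPart(gCtx, n) })
	}
	fmt.Println("upload finished:", g.Wait())
}
```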
Nick Craig-Wood
55b5eded23 Add Frederick Zhang to contributors 2020-02-12 11:33:56 +00:00
Lars Lehtonen
3dbcf0af2d backend/cache: Remove Unused Functions
This removes the unused functions run.writeRemoteRandomBytes() run.writeObjectRandomBytes() run.listPath() Directory.parentRemote() and Persistent.dumpRoot().
2020-02-12 11:23:57 +00:00
Lars Lehtonen
4e1a511f88 vfs: explicitly ignore unused variables 2020-02-12 11:20:54 +00:00
Frederick Zhang
b71e1a16b1 docs: update account type during OneDrive app registration 2020-02-12 11:04:49 +00:00
Nick Craig-Wood
ec1271818f mount2: hide mount2 command for the moment 2020-02-11 14:28:13 +00:00
Nick Craig-Wood
8318020387 Implement mount2 with go-fuse
This passes the tests and works efficiently with the non-sequential vfs ReadAt fix.
2020-02-11 14:28:13 +00:00
Nick Craig-Wood
c38d7be373 vendor: add github.com/hanwen/go-fuse/v2@master for mount2 2020-02-11 14:28:13 +00:00
Nick Craig-Wood
dc31212c3d Add Tim Gallant to contributors 2020-02-11 14:28:13 +00:00
Lars Lehtonen
ac60b36e77 backend/premiumizeme: prune unused functions 2020-02-11 12:17:35 +00:00
Tim Gallant
1d73f071f6 fs: improve log output when no changes are made - fixes #3454
- changes a few log messages to debug level
- adds a log message for when 0 bytes are transferred
2020-02-11 12:16:15 +00:00
Nick Craig-Wood
5c869d5bd3 cmd: make stats be printed on non-zero exit code
This also rationalises the exit sequence so that dumping
goroutines/open files happens regardless of exit error state.

See: https://forum.rclone.org/t/transfer-stats-on-non-0-exit/14211
2020-02-10 16:40:43 +00:00
Nick Craig-Wood
a54210a2e4 docs: restore missing mount --daemon docs
This was done as part of ebfeec9fb4 which unfortunately patched
the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
040d226028 docs: restore missing QingStor doc fix
This was done as part of 64fce8438b which unfortunately patched
the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
8b664c3ec5 docs: restore lost spelling fixes
These came from 3d424c6e08 which unfortunately added the docs to the
auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
102a38bb95 docs: restore lost VFS poll interval docs
These came from 3d475dc0ee which unfortunately added the docs to the
auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
7a54e13110 docs: restore lost VFS case insensitive docs
These came from 1c4e33d4ad which unfortunately
added the docs to the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
feee92c790 docs: restore lost mount share docs
These came from 162fdfe455 which unfortunately added the docs to
the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
de93852512 docs: restore lost auth proxy logs
These came from f2a789ea98 which unfortunately added the docs to
the auto generated files.
2020-02-10 15:29:39 +00:00
Nick Craig-Wood
dfb710eab7 gendocs: add autogenerated header to all docs 2020-02-10 15:29:39 +00:00
Nick Craig-Wood
25cfeb2a64 webdav: Fix X-OC-Mtime header for Transip compatibility - fixes #3126 2020-02-10 11:57:12 +00:00
Nick Craig-Wood
90377f5e65 s3: Specify that Minio supports URL encoding in listings
Thanks to @harshavardhana for pointing this out

See #3934 for background
2020-02-09 12:03:20 +00:00
Lars Lehtonen
f1d9bd5eab lib/oauthutil: replace deprecated oauth2.NoContext 2020-02-07 17:49:29 +00:00
Lars Lehtonen
4ee3c21a9d cmd/serve/ftp: replace deprecated os.SEEK_SET with io.SeekStart 2020-02-06 10:58:34 +00:00
Lars Lehtonen
fe6f4135b4 fs/rc: fix dropped error 2020-02-04 11:31:06 +00:00
Nick Craig-Wood
3dfa63b85c onedrive: fix occasional 416 errors on multipart uploads
Before this change, when uploading multipart files, onedrive would
sometimes return an unexpected 416 error and rclone would abort the
transfer.

This is usually after a 500 error which caused rclone to do a retry.

This change checks the upload position on a 416 error and works out
how much of the current chunk to skip, then retries (or skips) the
current chunk as appropriate.

If the position is before the current chunk or after the current chunk
then rclone will abort the transfer.

See: https://forum.rclone.org/t/fragment-overlap-error-with-encrypted-onedrive/14001

Fixes #3131
2020-02-01 21:15:07 +00:00
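A sketch of the recovery decision described above (offsets and names are assumptions for illustration, not rclone's code):

```
package main

import "fmt"

// recoverFrom416 decides how to continue a chunked upload after a 416
// response, given the server's next expected byte position.
func recoverFrom416(chunkStart, chunkEnd, serverPos int64) (skip int64, abort bool) {
	switch {
	case serverPos < chunkStart || serverPos > chunkEnd:
		// Position outside the current chunk: unrecoverable, abort.
		return 0, true
	case serverPos == chunkEnd:
		// The whole chunk was already received: skip it entirely.
		return chunkEnd - chunkStart, false
	default:
		// Part of the chunk arrived: skip that much, resend the rest.
		return serverPos - chunkStart, false
	}
}

func main() {
	skip, abort := recoverFrom416(1000, 2000, 1500)
	fmt.Println(skip, abort) // 500 false
}
```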
Gary Kim
ff2343475a docs: Update README.md shields for changed CI
Signed-off-by: Gary Kim <gary@garykim.dev>
2020-02-01 20:17:40 +00:00
Nick Craig-Wood
bffd7f0f14 docs: note how to use GitHub's online editor to edit rclone's docs 2020-02-01 13:44:03 +00:00
Nick Craig-Wood
7c55fafe33 Add Durval Menezes to contributors 2020-02-01 13:41:47 +00:00
Nick Craig-Wood
2e7fe06beb Add Dave Koston to contributors 2020-02-01 13:41:47 +00:00
Durval Menezes
8ff91ff31b docs: Update the "Making your own client_id" section for drive
...so it accurately describes the new "Enhanced Security" Google process to get your own Client ID and Client Secret to use with rclone.
2020-02-01 13:41:18 +00:00
Nick Craig-Wood
4d1c616e97 Start v1.51.0-DEV development 2020-02-01 12:32:21 +00:00
Nick Craig-Wood
43daecd89b Version v1.51.0 2020-02-01 10:40:01 +00:00
Dave Koston
9f99c20232 s3: Add StackPath Object Storage Support 2020-01-31 16:05:44 +00:00
Nick Craig-Wood
97ed8db75d drive: hide dangerous config from the configurator
This hides:

- "use_created_date"
- "use_shared_date"
- "size_as_quota"

from the configurator (rclone config) as they interfere with normal
operations and shouldn't be set in a backend config.

They can still be put in the config file by hand and will still work
as variables, etc.

This adds some more docs to "size_as_quota" also.

Fixes #3912
2020-01-31 10:09:33 +00:00
Nick Craig-Wood
f80d98553a dbhashsum: stop it returning UNSUPPORTED on dropbox 2020-01-29 19:49:42 +00:00
Nick Craig-Wood
b3e7a9d01c Add Benjapol Worakan to contributors 2020-01-29 13:38:21 +00:00
Benjapol Worakan
01fc063128 docs: fix spacing error under -P in docs.md 2020-01-29 13:38:03 +00:00
Gary Kim
e71edd5577 cmd: always print elapsed time to tenth place seconds in progress
Before this change, the elapsed time shown with the --progress flag
would not print ".0s" for whole-second values, so the width of the
line varied.

This change keeps the line width a bit more consistent by always
printing the elapsed time to a fixed decimal place.

This does change the displayed value when the elapsed time is less
than 1s: previously the value would have been shown in ms or smaller
units.

Signed-off-by: Gary Kim <gary@garykim.dev>
2020-01-29 12:28:01 +00:00
Nick Craig-Wood
27a34dd183 Add Motonori IWAMURO to contributors 2020-01-29 12:16:37 +00:00
Motonori IWAMURO
7662f15939 onedrive: add support "Retry-After" header 2020-01-29 12:16:18 +00:00
Nick Craig-Wood
bfd9f32188 lsjson: add --no-mimetype flag, speed up lsf
Before this change we unconditionally fetched the MimeType. On some
backends like s3 and swift this takes an extra transaction, which
meant that `lsf` on those backends was needlessly slow.

This adds an internal option so `lsf` can declare whether it wants
MimeTypes or not depending on whether the user asked for them and an
external flag `--no-mimetype` for `lsjson`.

See: https://forum.rclone.org/t/reliably-setup-incremental-updates/14006/8
2020-01-26 16:38:00 +00:00
Nick Craig-Wood
9c9cdf1712 fs: don't run tests for --max-duration on remote backends
This is a timing dependent test and to make it long enough so that it
would work with the remotes would make it too long for local tests.

The code paths are identical for local vs non-local so just run on
local.

This fixes the integration tests.
2020-01-26 09:23:03 +00:00
Nick Craig-Wood
0e5537cd25 Add unbelauscht to contributors 2020-01-26 09:23:03 +00:00
unbelauscht
151d0a274e swift: Update OVH API endpoint - Fixes #3890 2020-01-24 17:43:00 +00:00
Nick Craig-Wood
dc77ec4ba1 Add boosh to contributors 2020-01-24 13:29:36 +00:00
boosh
0d7573dd81 fs: Add --max-duration flag to control the maximum duration of a transfer session
This gives you more control over how long rclone will run for, making
it easier to script backups, e.g. via cron. Once the `--max-duration`
time limit is reached, no new transfers will be initiated, but those
already in-flight will be allowed to complete.

Fixes #985
2020-01-24 13:28:56 +00:00
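A minimal sketch of the scheduling check this implies (hypothetical helper, not the rclone source):

```
package main

import (
	"fmt"
	"time"
)

// canStartTransfer reports whether a new transfer may begin. Transfers
// already in flight are never interrupted; once the session deadline
// passes we simply stop initiating new ones.
func canStartTransfer(start time.Time, maxDuration time.Duration) bool {
	if maxDuration == 0 {
		return true // no --max-duration configured
	}
	return time.Since(start) < maxDuration
}

func main() {
	start := time.Now().Add(-90 * time.Minute)
	fmt.Println(canStartTransfer(start, time.Hour)) // false: session over
}
```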
Nick Craig-Wood
e4d2d228bd webdav: add Referer header to fix problems with WAFs - fixes #3868 2020-01-23 15:56:17 +00:00
Nick Craig-Wood
ede36b001b Add Damon Permezel to contributors 2020-01-23 15:40:23 +00:00
Nick Craig-Wood
3afb2a4798 config: use SpaceSepList for argument to --password-command
This is to enable arguments with spaces in them.
2020-01-23 15:39:15 +00:00
Nick Craig-Wood
62dbdcdbcc config: use the environment variable which goes with --password-command 2020-01-23 15:39:15 +00:00
Damon Permezel
06df133159 config: add --password-command to allow dynamic config password - fixes #3694 2020-01-23 15:39:15 +00:00
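A sketch of how a password command might be run to obtain the config password (hypothetical helper; the real space-separated list parsing handles quoting, which strings.Fields does not):

```
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// passwordFromCommand runs the user-supplied command line and uses its
// stdout as the config password. Splitting on spaces lets the command
// carry its own arguments.
func passwordFromCommand(cmdLine string) (string, error) {
	parts := strings.Fields(cmdLine)
	out, err := exec.Command(parts[0], parts[1:]...).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	pw, err := passwordFromCommand("echo s3cret")
	fmt.Println(pw, err) // s3cret <nil>
}
```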
Xiaoxing Ye
0ab2693da6 doc: add desc about gzip and http dump
Fix #3872
2020-01-23 12:42:44 +00:00
Nick Craig-Wood
4b1cb1be43 Add jtagcat to contributors 2020-01-23 12:38:56 +00:00
Nick Craig-Wood
9d96680329 Add thestigma to contributors 2020-01-23 12:38:56 +00:00
jtagcat
d694bb30e5 install.sh: create ~/.config/rclone directory
This allows people who already have a config to copy it to their new server(s), without having to create the directories first.
2020-01-23 12:37:46 +00:00
buengese
9c858c3228 crypt: correctly handle trailing dot 2020-01-22 01:40:04 +01:00
thestigma
7125cb10f5 doc: Add note to filtering.md about --files-from and rclone lsf 2020-01-20 17:37:24 +00:00
Nick Craig-Wood
ba421fd069 Add landall to contributors 2020-01-20 17:30:27 +00:00
landall
77e55b8265 hashsum: Add --base64 flag - fixes #3663
This flag can be used to output QuickXorHash in the same format as MS
Graph API.
2020-01-20 17:29:58 +00:00
Nick Craig-Wood
18d26e2ddb Revert "vendor: update x/crypto/ssh - to fix Windows password length issues fixes #3798"
This turned out to introduce a regression, not being able to press Enter.

See: #3888 and https://github.com/golang/go/issues/36609

This reverts commit 251cfc100e.
2020-01-20 12:46:52 +00:00
Nick Craig-Wood
f338a2d907 Add Benjamin Richter to contributors 2020-01-20 12:46:46 +00:00
Benjamin Richter
77fa8194f2 onedrive: add Sites.Read.All permission - Fixes #1770 2020-01-20 12:30:19 +00:00
Xiaoxing Ye
ccaca04a5d rcd: move webgui apart; option to disable browser
Fix #3601, #3785
2020-01-20 12:27:55 +00:00
Nick Craig-Wood
84191ac6dc vfs: fix incorrect modtime for mv into mount with --vfs-cache-modes write
When a file has its modtime set while it is open we delay setting the
modtime until the file is closed.

The file is then uploaded in Flush. In Release we check the cached
file has been uploaded by comparing modtimes and or hashes and upload
it again if it has changed.

Before this change we forgot to change the time on the cached file
when we updated the time on the remote object, which meant that
Release reset the time to the wrong value and uploaded the file again
on remotes which don't support hashes (eg crypt).

The fix was to set the modtime of the cached file at the same time we
set the modtime of the remote object. This means that the files check
as identical in Release so it doesn't try to upload the file.

This means that we avoid a double upload and the modtime is correct.

See: https://forum.rclone.org/t/modification-time-with-vfs-cache/13906/8
2020-01-19 12:52:48 +00:00
Nick Craig-Wood
7cf8ea354c dedupe: add missing modes to help string 2020-01-19 11:09:45 +00:00
Nick Craig-Wood
24ef00a258 build: implement a framework for starting test servers during tests
Test servers are implemented by docker containers and run real servers
for rclone to test against.
2020-01-18 16:47:37 +00:00
Nick Craig-Wood
00d30ce0d7 opendrive: implement --opendrive-chunk-size #3707 2020-01-18 11:56:01 +00:00
Nick Craig-Wood
db39adeb3e sftp: open files for update write only to fix AWS SFTP interop - fixes #3776 2020-01-18 11:46:56 +00:00
Nick Craig-Wood
ef7ac088c0 operations: make NewOverrideObjectInfo public and factor uses 2020-01-18 11:41:33 +00:00
Nick Craig-Wood
08a3957880 cache: fix fatal error: concurrent map writes - fixes #2378 2020-01-18 11:27:00 +00:00
Nick Craig-Wood
4499b08afc drive: log an ERROR if an incomplete search is returned 2020-01-18 11:22:26 +00:00
Nick Craig-Wood
422ad38e5b copyurl: add --stdout flag to write to stdout 2020-01-18 11:15:51 +00:00
Nick Craig-Wood
0b7f959433 cmount: when setting dates discard out of range dates
It appears that sometimes Windows/WinFSP/cgofuse sends dates which are
the epoch to rclone.  These dates appear as 1601-01-01 00:00:00 plus
or minus the timezone.

These dates aren't being sent from rclone.

This patch filters out dates before 1601-01-02 so rclone does not
attempt to set them.

See: https://forum.rclone.org/t/bug-corruption-of-modtime-via-vfs-layer/12204
See: https://forum.rclone.org/t/io-error-googleapi-error-403-insufficient-permission-insufficientpermissions/11372
See: https://github.com/billziss-gh/cgofuse/issues/35
2020-01-18 11:13:35 +00:00
Nick Craig-Wood
4b9da601be dropbox: treat insufficient_space errors as non retriable errors
Before this change rclone would keep trying to upload files after
dropbox had signalled it was full.

This change makes the relevant error a non-retriable error.

See: https://forum.rclone.org/t/why-does-a-file-transfer-continue-when-there-is-no-available-storage/13677
2020-01-18 11:10:18 +00:00
Nick Craig-Wood
c789436580 The memory backend
This is a bucket based remote
2020-01-18 10:41:08 +00:00
Nick Craig-Wood
277d94feac fshttp: add --expect-continue-timeout default 1s - fixes #3835
Before this change the expect/continue timeout was set to
--contimeout, which is 60s by default and far too long to wait.

This was noticed when using s3 with a proxy which apparently didn't
support expect / continue properly.

Set --expect-continue-timeout 0 to disable expect/continue.
2020-01-18 09:49:22 +00:00
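In Go's standard library this knob corresponds to http.Transport.ExpectContinueTimeout; a minimal sketch of setting it to the new default:

```
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	t := &http.Transport{
		// Wait at most 1s for the server's "100 Continue" before
		// sending the request body anyway; 0 disables the wait and
		// sends the body immediately.
		ExpectContinueTimeout: 1 * time.Second,
	}
	client := &http.Client{Transport: t}
	fmt.Println("client ready:", client != nil, "timeout:", t.ExpectContinueTimeout)
}
```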
Nick Craig-Wood
6757244918 drive: use multipart resumable uploads for streaming and uploads in mount
Before this change we used non multipart uploads for files of unknown
size (streaming and uploads in mount).  This is slower and less
reliable and is not recommended by Google for files smaller than 5MB.

After this change we use multipart resumable uploads for all files of
unknown length.  This will use an extra transaction so is less
efficient for files under the chunk size, however the natural
buffering in the operations.Rcat call specified by
`--streaming-upload-cutoff` will overcome this.

See: https://forum.rclone.org/t/upload-behaviour-and-speed-when-using-vfs-cache/9920/
2020-01-17 22:03:10 +00:00
Nick Craig-Wood
36157d8ae5 vendor: update t3rm1n4l/go-mega - fixes mega: couldn't login: crypto/aes: invalid key size 0
Fixes #3740
2020-01-17 21:42:32 +00:00
Nick Craig-Wood
251cfc100e vendor: update x/crypto/ssh - to fix Windows password length issues fixes #3798 2020-01-17 21:33:47 +00:00
Nick Craig-Wood
9fb10064ee vendor: run go mod tidy 2020-01-17 21:19:46 +00:00
Nick Craig-Wood
bedeaf23af sugarsync: new backend - fixes #622 2020-01-17 17:39:34 +00:00
Nick Craig-Wood
14e93bfd8a rest: call the Signer with the mutex unlocked
This enables the signer to adjust rest parameters and call rest again
if necessary.
2020-01-17 15:00:23 +00:00
Nick Craig-Wood
65071599a2 rest: don't canonicalise headers starting with *
This leaves a way of adding headers which shouldn't be canonicalised.
2020-01-17 15:00:23 +00:00
Nick Craig-Wood
5403e1c79a lib/encoder: remove noencode tag and update CONTRIBUTING 2020-01-17 15:00:01 +00:00
Nick Craig-Wood
5697caf20b Add Felix Hungenberg to contributors 2020-01-17 15:00:01 +00:00
Felix Hungenberg
68056f08ab docs: Use correct link for sftp wikipedia article 2020-01-17 13:20:42 +00:00
Nick Craig-Wood
81002747c5 dedupe: implement keep smallest too
This is to help deduping google docs and their exported versions if
they accidentally get uploaded to the source again.

See: https://forum.rclone.org/t/my-stupidity-or-a-bug/13861
2020-01-17 13:08:37 +00:00
Nick Craig-Wood
1bd9f522e0 build: compress the test builds
This is to use less space and to fix the caddy/rclone interaction
using too much bandwidth.

See: https://caddy.community/t/how-to-stop-caddy-sniffing-mime-types-and-return-a-default-mime-type/6819
2020-01-17 11:47:40 +00:00
Thomas Eales
3a1b41ac22 docs: fixed small typo
is -> it
2020-01-16 15:24:36 +00:00
Nick Craig-Wood
375d25f158 sync: implement --order-by flag to order transfers - fixes #1205 2020-01-16 15:24:36 +00:00
Nick Craig-Wood
0e57335396 docs: document new backend encoder parameter 2020-01-16 14:40:36 +00:00
Nick Craig-Wood
bafe7d5a73 backends: move encoding definitions from fs/encodings 2020-01-16 14:40:36 +00:00
Nick Craig-Wood
c555dc71c2 lib/encoder: move definitions here and remove uint casts 2020-01-16 14:40:36 +00:00
Nick Craig-Wood
3c620d521d backend: adjust backends to have encoding parameter
Fixes #3761
Fixes #3836
Fixes #3841
2020-01-16 14:40:36 +00:00
Nick Craig-Wood
0a5c83ece1 lib/encoder: add string rendering and parsing for encodings 2020-01-16 14:40:36 +00:00
Nick Craig-Wood
1ba5e99152 graphics: add more differently sized logos 2020-01-15 16:25:20 +00:00
Nick Craig-Wood
95c83b37fb vfs: only run TestRWCacheRename on remotes which can rename
This fixes the 1fichier integration tests.
2020-01-15 16:25:04 +00:00
Nick Craig-Wood
89634795b0 Add Paul Tinsley to contributors 2020-01-15 16:25:04 +00:00
Nick Craig-Wood
b88dec51e5 proxy: replace use of bcrypt with sha256
Unfortunately bcrypt only hashes the first 72 bytes of its input,
which meant that using it on ssh keys longer than 72 bytes was
incorrect.

This swaps over to using sha256, which should be adequate for the
purpose of protecting in-memory passwords where the unencrypted
password is likely in memory too.
2020-01-15 16:23:57 +00:00
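A sketch of the sha256-based check (hypothetical names; the point is that sha256 hashes the whole input, unlike bcrypt's 72-byte limit):

```
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"fmt"
)

// checkCredential compares a presented credential against a stored
// SHA-256 digest in constant time. Because sha256 hashes the whole
// input, ssh keys longer than 72 bytes are compared correctly. A fast
// hash is acceptable here: this protects secrets already in memory,
// it is not a password store.
func checkCredential(presented []byte, storedDigest [32]byte) bool {
	d := sha256.Sum256(presented)
	return subtle.ConstantTimeCompare(d[:], storedDigest[:]) == 1
}

func main() {
	key := []byte("ssh-ed25519 AAAA... an example key comfortably longer than seventy-two bytes")
	stored := sha256.Sum256(key)
	fmt.Println(checkCredential(key, stored)) // true
}
```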
Paul Tinsley
f2a789ea98 serve sftp: Add support for public key with auth proxy - fixes #3572 2020-01-15 16:23:57 +00:00
Nick Craig-Wood
63128834da vfs: fix open file renaming on drive when using --vfs-cache-mode writes
Before this change, when uploading files from the VFS cache which were
pending a rename, rclone would use the new path of the object when
specifying the destination remote. This didn't cause a problem with
most backends as the subsequent rename did nothing, however with the
drive backend, since it updates objects in place, the incorrect Remote
was embedded in the object. This caused the rename to apparently
succeed but left the object at the wrong location.

The fix for this was to make sure we upload to the path stored in the
object if available.

This problem was spotted by the new rename tests for the VFS layer.
2020-01-13 17:37:54 +00:00
Nick Craig-Wood
5f822f2660 sftp: fix "failed to parse private key file: ssh: not an encrypted key" error
This error started happening after updating golang/x/crypto which was
done as a side effect of:

3801b8109 vendor: update termbox-go to fix ncdu command on FreeBSD

This turned out to be a deliberate policy of making
ssh.ParsePrivateKeyWithPassphrase fail if the passphrase was empty.

See: https://go-review.googlesource.com/c/crypto/+/207599

This fix calls ssh.ParsePrivateKey if the passphrase is empty and
ssh.ParsePrivateKeyWithPassphrase otherwise which fixes the problem.
2020-01-13 11:05:16 +00:00
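A sketch of that branch using golang.org/x/crypto/ssh (the two parse functions are the real API; the wrapper is illustrative):

```
package main

import (
	"fmt"

	"golang.org/x/crypto/ssh"
)

// parseKey mirrors the fix: an empty passphrase means the key is not
// encrypted, so use ssh.ParsePrivateKey; otherwise decrypt with
// ssh.ParsePrivateKeyWithPassphrase.
func parseKey(pemBytes []byte, passphrase string) (ssh.Signer, error) {
	if passphrase == "" {
		return ssh.ParsePrivateKey(pemBytes)
	}
	return ssh.ParsePrivateKeyWithPassphrase(pemBytes, []byte(passphrase))
}

func main() {
	_, err := parseKey([]byte("not a real key"), "")
	fmt.Println("parse error (expected with dummy input):", err)
}
```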
Nick Craig-Wood
b81601baff test_all: ignore TestIntegration/FsMkdir/FsPutFiles/FsPutStream/0 on wasabi
This has been reported to Wasabi and they've confirmed it as a known
issue: multipart uploads can't be 0 sized, even though that is
incompatible with AWS S3.
2020-01-13 09:47:11 +00:00
Nick Craig-Wood
58064bdd2b drive: add --drive-stop-on-upload-limit flag to stop syncs when upload limit reached
If the --drive-stop-on-upload-limit flag is in effect this checks the
error string from Google Drive to see if it is the error you get when
you've breached your 750GB a day limit.

If so then it turns this error into a Fatal error which should stop
the sync.

Fixes #3857
2020-01-12 15:47:31 +00:00
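A sketch of upgrading a matching error to a fatal one (the wrapper type and the matched string are illustrative, not rclone's fserrors package or Google's exact message):

```
package main

import (
	"errors"
	"fmt"
	"strings"
)

// fatalError marks an error as unrecoverable so retry logic stops the
// sync instead of retrying.
type fatalError struct{ error }

// classifyDriveError upgrades the daily-upload-limit error to fatal
// when --drive-stop-on-upload-limit is in effect.
func classifyDriveError(err error, stopOnUploadLimit bool) error {
	if stopOnUploadLimit && err != nil &&
		strings.Contains(err.Error(), "userRateLimitExceeded") {
		return fatalError{err}
	}
	return err
}

func main() {
	err := classifyDriveError(errors.New("googleapi: Error 403: userRateLimitExceeded"), true)
	_, fatal := err.(fatalError)
	fmt.Println("fatal:", fatal) // fatal: true
}
```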
Nick Craig-Wood
ba01d5e8ab Add Thomas Eales to contributors 2020-01-12 14:23:57 +00:00
Nick Craig-Wood
e510d460c2 Add Kuang-che Wu to contributors 2020-01-12 14:23:57 +00:00
Thomas Eales
42de601fa6 crypt: reorder the filename encryption options
This brings the default `standard` to the top of the list to replace `off`
2020-01-12 14:23:35 +00:00
Kuang-che Wu
3801b8109e vendor: update termbox-go to fix ncdu command on FreeBSD
see 58d4fcbce2
2020-01-12 14:20:12 +00:00
Nick Craig-Wood
e0d41da3e3 vendor: add uncommitted files from previous change 2020-01-11 17:56:14 +00:00
Nick Craig-Wood
92662baceb vendor: update github.com/t3rm1n4l/go-mega to fix mega "illegal base64 data at input byte 22"
Thanks to Ajaja for figuring this out.

See: https://forum.rclone.org/t/problem-to-login-with-mega/12276
2020-01-11 16:47:06 +00:00
Nick Craig-Wood
87c844bce1 onedrive: clarify docs around making your own client ID
See: https://forum.rclone.org/t/onedrive-token-configuration-failure/13634
2020-01-11 16:30:13 +00:00
Nick Craig-Wood
ae340cf7d9 log: factor flags into logflags package - fixes #3792 2020-01-09 13:25:37 +00:00
Nick Craig-Wood
11f501bd44 operations: move interface assertion to tests to remove pflag dependency #3792 2020-01-09 13:25:37 +00:00
Nick Craig-Wood
a4bc4daf30 mounttest: fix unreliable tests on Windows CI
The failure is the following, which is not reproducible locally, only
on the CI servers.

    --- FAIL: TestMount/CacheMode=minimal/TestWriteFileOverwrite (1.01s)
        fs.go:351:
            Error Trace:    fs.go:351
                            write.go:65
            Error:          Received unexpected error:
                            open E:testwrite: The request could not be performed because of an I/O device error.
            Test:           TestMount/CacheMode=minimal/TestWriteFileOverwrite

The corresponding ERROR from the log is this:

    ERROR : IO error: truncate C:\Users\runneradmin\AppData\Local\rclone\vfs\local\C\Users\RUNNER~1\AppData\Local\Temp\rclone298719627\testwrite: Access is denied.

Instead of using ioutil.WriteFile this fix uses an equivalent based on
rclone's lib/file which doesn't set the exclusive flag on
Windows. This allows files to be deleted while they are open. It also
deletes existing files if an error is received, and retries.
2020-01-09 11:11:49 +00:00
Nick Craig-Wood
51dca8c8d4 bin: update windows test paths for new setup 2020-01-09 10:55:18 +00:00
Nick Craig-Wood
6b3021209a Add Ole Schütt to contributors 2020-01-09 10:36:44 +00:00
Ole Schütt
f263828edc operations: write debug message when hashes could not be checked 2020-01-09 10:35:31 +00:00
Maciej Zimnoch
b7019a91c2 fs/operations: Clear accounting before low level retry
Statistics of transfers which were interrupted were not cleared before
the retry iteration, so those transfers showed completion of over 100
percent.

This change clears transfer accounting before the next retry iteration
is done, in order to keep the numbers accurate.

Fixes #3861
2020-01-09 10:32:49 +00:00
Alex Chen
27c3481ea4 build: fix CI for forks and related docs (#3847) 2020-01-09 01:27:44 +08:00
Nick Craig-Wood
706da80d88 mount: don't build on go1.10 as bazil/fuse no longer supports it 2020-01-08 08:44:02 +00:00
Nick Craig-Wood
b6e86b2c7f s3: fix missing x-amz-meta-md5chksum headers for multipart uploads
This reverts "s3: fix DisableChecksum condition" which introduced the
problem.

This reverts commit c05bb63f96.

The code was correct as it stands - the comment was incorrect and this
commit updates it.

See: https://forum.rclone.org/t/s3-upload-md5-check-sum/13706
2020-01-07 19:39:39 +00:00
Nick Craig-Wood
4453fa4ba6 drive: fix --fast-list when using appDataFolder
In listings, if the ID `appDataFolder` is used to list a directory,
the parents of the items returned have the actual ID instead of the
alias `appDataFolder`.  This confused the ListR routine into ignoring
all these items.

This change makes the listing routine accept all parent IDs returned
if there was only one ID in the query.  This fixes the `appDataFolder`
problem. This means we are relying on Google Drive to only return the
items we asked for which is probably OK.

Fixes #3851
2020-01-05 19:57:13 +00:00
Nick Craig-Wood
540fd3f173 local: fix update of hidden files on Windows - fixes #3839 2020-01-05 19:52:22 +00:00
Nick Craig-Wood
1af4bb0c84 Add Tennix to contributors 2020-01-05 19:50:00 +00:00
Tennix
15d19131bd s3: use aws web identity role provider 2020-01-05 19:49:31 +00:00
Nick Craig-Wood
9d993e584b s3: force path style bucket access to off for AWS deprecation
AWS are deprecating path style bucket access so rclone should stop
using it by default for this provider.

This change shouldn't break any workflows as all AWS endpoints support
virtual hosted style lookups of buckets.  It may even improve
performance.

See: https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/
2020-01-05 17:53:45 +00:00
Nick Craig-Wood
21b17b14a9 vendor: update bazil.org/fuse to fix FreeBSD 12.1 - fixes #3697 2020-01-05 16:35:30 +00:00
Nick Craig-Wood
1b89b38a4c vfs: skip rename tests on remotes which can't rename 2020-01-05 12:34:47 +00:00
Nick Craig-Wood
7242c7ce95 s3: fix multipart upload uploading 0 length files
This regression was introduced by the recent re-write of the s3
multipart upload code.
2020-01-05 12:32:55 +00:00
Nick Craig-Wood
ad2bb86d8c fstests: add test for 0 sized streaming upload 2020-01-05 12:32:55 +00:00
Michał Matczuk
eb10ac346f fs/accounting: Added StatsInfo locking in statsGroups sum function (#3844)
Without the fix we can have a race, example:

```
Write at 0x00c000432039 by goroutine 187:
  github.com/rclone/rclone/fs/accounting.(*StatsInfo).Error()
      fs/accounting/stats.go:495 +0x3f1
  github.com/rclone/rclone/fs/accounting.(*StatsInfo).Error-fm()
      fs/accounting/stats.go:477 +0x55
  github.com/rclone/rclone/fs/walk.listRwalk.func1()
      fs/walk/walk.go:162 +0xd2
  github.com/rclone/rclone/fs/walk.walk.func2()
      fs/walk/walk.go:402 +0x30f

Previous read at 0x00c000432039 by goroutine 184:
  github.com/rclone/rclone/fs/accounting.(*statsGroups).sum()
      fs/accounting/stats_groups.go:351 +0xcae
  github.com/rclone/rclone/fs/accounting.rcTransferredStats()
      fs/accounting/stats_groups.go:132 +0x1f4
```

Fixes #3844
2020-01-04 16:45:24 +00:00
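A reduced sketch of the fix: every reader of another group's counters takes that group's lock first (field and type names are simplified stand-ins):

```
package main

import (
	"fmt"
	"sync"
)

// statsInfo holds per-group counters; mu guards every field.
type statsInfo struct {
	mu     sync.RWMutex
	errors int64
}

// Error records an error under the write lock.
func (s *statsInfo) Error() {
	s.mu.Lock()
	s.errors++
	s.mu.Unlock()
}

// sum takes each group's read lock before reading its fields, the
// step whose absence let the race detector fire above.
func sum(groups []*statsInfo) int64 {
	var total int64
	for _, s := range groups {
		s.mu.RLock()
		total += s.errors
		s.mu.RUnlock()
	}
	return total
}

func main() {
	s := &statsInfo{}
	s.Error()
	fmt.Println(sum([]*statsInfo{s})) // 1
}
```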
Nick Craig-Wood
7e6fac8b1e s3: re-implement multipart upload to fix memory issues
There have been quite a few reports of problems with the multipart
uploader using too much memory and not retrying possible errors.

Before this change the multipart uploader used the s3manager
abstraction in the AWS SDK.  There are numerous bug reports of this
using up too much memory.

This change re-implements a much simplified version of the s3manager
code specialized for rclone's purposes.

This should use much less memory and retry chunks properly.

See: https://forum.rclone.org/t/memory-usage-s3-alike-to-glacier-without-big-directories/13563
See: https://forum.rclone.org/t/copy-from-local-to-s3-has-high-memory-usage/13405
See: https://forum.rclone.org/t/big-file-upload-to-s3-fails/13575
2020-01-03 22:19:28 +00:00
Nick Craig-Wood
2e0774f3cf Add Thomas Kriechbaumer to contributors 2020-01-03 22:18:23 +00:00
Aleksandar Jankovic
b9fb313f71 fs/accounting: add option to delete stats
Removed PruneAllTransfers because it had no use. startedTransfers are
set to nil in ResetCounters.
2020-01-03 17:44:05 +00:00
Aleksandar Jankovic
0e64df4b4c fs/accounting: consistency cleanup 2020-01-03 17:44:05 +00:00
buengese
69ac04fec9 docs: add GetSky to list of supported providers 2020-01-02 15:37:33 +01:00
buengese
8a2d1dbe24 jottacloud: add support for whitelabel versions 2020-01-02 15:37:33 +01:00
Thomas Kriechbaumer
584e705c0c s3: introduce list_chunk option for bucket listing
The S3 ListObject API returns paginated bucket listings, with
"MaxKeys" items for each GET call.

The default value is 1000 entries, but for buckets with millions of
objects it might make sense to request more elements per request, if
the backend supports it. This commit adds a "list_chunk" option for
the user to specify a lower or higher value.

This commit does not add safeguards around this value: if a user
requests too large a list, it might result in connection timeouts
(on the server or client).

In AWS S3, there is a fixed limit of 1000; some other services might
have one too.  In Ceph, this can be configured in RadosGW.
2020-01-02 12:15:01 +00:00
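At the SDK level, list_chunk feeds the MaxKeys field of the ListObjects call; a sketch with the v1 AWS SDK (bucket name assumed):

```
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// MaxKeys caps how many objects each ListObjects page returns.
	// AWS enforces a hard limit of 1000; other S3-compatible servers
	// (e.g. Ceph RadosGW) can be configured to allow more.
	input := &s3.ListObjectsV2Input{
		Bucket:  aws.String("example-bucket"),
		MaxKeys: aws.Int64(1000),
	}
	fmt.Println("requesting up to", *input.MaxKeys, "keys per page")
}
```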
Nick Craig-Wood
32a3ba9e3f Add Outvi V to contributors 2020-01-02 11:52:43 +00:00
Outvi V
db1c7f9ca8 s3: Add new region Asia Pacific (Hong Kong) 2020-01-02 11:10:48 +00:00
Nick Craig-Wood
207474abab sync: add --no-check-dest flag - fixes #3616 2019-12-29 16:47:57 +00:00
Nick Craig-Wood
f754d897e5 Add Wei He to contributors 2019-12-28 13:29:08 +00:00
Wei He
4daecd3158 docs: fix in-page anchor navigation positioning 2019-12-22 23:33:12 +00:00
Cnly
59c75ba442 accounting: fix error count shown as checks - fixes #3814 2019-12-23 03:03:19 +08:00
891 changed files with 128558 additions and 86924 deletions

View File

@@ -19,12 +19,12 @@ jobs:
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'mac', 'windows_amd64', 'windows_386', 'other_os', 'modules_race', 'go1.10', 'go1.11', 'go1.12']
job_name: ['linux', 'mac', 'windows_amd64', 'windows_386', 'other_os', 'modules_race', 'go1.11', 'go1.12', 'go1.13']
include:
- job_name: linux
os: ubuntu-latest
go: '1.13.x'
go: '1.14.x'
modules: 'off'
gotags: cmount
build_flags: '-include "^linux/"'
@@ -34,7 +34,7 @@ jobs:
- job_name: mac
os: macOS-latest
go: '1.13.x'
go: '1.14.x'
modules: 'off'
gotags: '' # cmount doesn't work on osx travis for some reason
build_flags: '-include "^darwin/amd64" -cgo'
@@ -44,7 +44,7 @@ jobs:
- job_name: windows_amd64
os: windows-latest
go: '1.13.x'
go: '1.14.x'
modules: 'off'
gotags: cmount
build_flags: '-include "^windows/amd64" -cgo'
@@ -54,7 +54,7 @@ jobs:
- job_name: windows_386
os: windows-latest
go: '1.13.x'
go: '1.14.x'
modules: 'off'
gotags: cmount
goarch: '386'
@@ -65,7 +65,7 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '1.13.x'
go: '1.14.x'
modules: 'off'
build_flags: '-exclude "^(windows/|darwin/amd64|linux/)"'
compile_all: true
@@ -73,17 +73,11 @@ jobs:
- job_name: modules_race
os: ubuntu-latest
go: '1.13.x'
go: '1.14.x'
modules: 'on'
quicktest: true
racequicktest: true
- job_name: go1.10
os: ubuntu-latest
go: '1.10.x'
modules: 'off'
quicktest: true
- job_name: go1.11
os: ubuntu-latest
go: '1.11.x'
@@ -96,6 +90,12 @@ jobs:
modules: 'off'
quicktest: true
- job_name: go1.13
os: ubuntu-latest
go: '1.13.x'
modules: 'off'
quicktest: true
name: ${{ matrix.job_name }}
runs-on: ${{ matrix.os }}
@@ -104,7 +104,8 @@ jobs:
- name: Checkout
uses: actions/checkout@v1
with:
path: ./src/github.com/${{ github.repository }}
# Checkout into a fixed path to avoid import path problems on go < 1.11
path: ./src/github.com/rclone/rclone
- name: Install Go
uses: actions/setup-go@v1
@@ -201,7 +202,8 @@ jobs:
env:
RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
# working-directory: '$(modulePath)'
if: matrix.deploy && github.head_ref == ''
# Deploy binaries if enabled in config && not a PR && not a fork
if: matrix.deploy && github.head_ref == '' && github.repository == 'rclone/rclone'
xgo:
timeout-minutes: 60
@@ -213,7 +215,8 @@ jobs:
- name: Checkout
uses: actions/checkout@v1
with:
path: ./src/github.com/${{ github.repository }}
# Checkout into a fixed path to avoid import path problems on go < 1.11
path: ./src/github.com/rclone/rclone
- name: Set environment variables
shell: bash
@@ -247,4 +250,5 @@ jobs:
make circleci_upload
env:
RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
if: github.head_ref == ''
# Upload artifacts if not a PR && not a fork
if: github.head_ref == '' && github.repository == 'rclone/rclone'

.gitignore (vendored, 3 changed lines)

@@ -7,4 +7,5 @@ rclone.iml
.idea
.history
*.test
*.log
*.log
*.iml

View File

@@ -82,13 +82,9 @@ Your patch will get reviewed and you might get asked to fix some stuff.
If so, then make the changes in the same branch, squash the commits,
rebase it to master then push it to GitHub with `--force`.
## Enabling CI for your fork ##
## CI for your fork ##
The CI config files for rclone have taken care of forks of the project, so you can enable CI for your fork repo easily.
rclone currently uses [Travis CI](https://travis-ci.org/), [AppVeyor](https://ci.appveyor.com/), and
[Circle CI](https://circleci.com/) to build the project. To enable them for your fork, simply go into their
websites, find your fork of rclone, and enable building there.
rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions) to build and test the project, which should be automatically available for your fork too from the `Actions` tab in your repository.
## Testing ##
@@ -152,6 +148,7 @@ with modules beneath.
* ...commands
* docs - the documentation and website
* content - adjust these docs only - everything else is autogenerated
* command - these are auto generated - edit the corresponding .go file
* fs - main rclone definitions - minimal amount of code
* accounting - bandwidth limiting and statistics
* asyncreader - an io.Reader which reads ahead
@@ -207,6 +204,9 @@ don't need to run these when adding a feature.
Documentation for rclone sub commands is with their code, eg
`cmd/ls/ls.go`.
Note that you can use [GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
for small changes in the docs which makes it very easy.
## Making a release ##
There are separate instructions for making a release in the RELEASE.md
@@ -341,10 +341,9 @@ Getting going
* Add your remote to the imports in `backend/all/all.go`
* HTTP based remotes are easiest to maintain if they use rclone's rest module, but if there is a really good go SDK then use that instead.
* Try to implement as many optional methods as possible as it makes the remote more usable.
* Use fs/encoder to make sure we can encode any path name and `rclone info` to help determine the encodings needed
* `go install -tags noencode`
* Use lib/encoder to make sure we can encode any path name and `rclone info` to help determine the encodings needed
* `rclone purge -v TestRemote:rclone-info`
* `rclone info -vv --write-json remote.json TestRemote:rclone-info`
* `rclone info --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
* `go run cmd/info/internal/build_csv/main.go -o remote.csv remote.json`
* open `remote.csv` in a spreadsheet and examine
@@ -359,7 +358,7 @@ Integration tests
* Add your backend to `fstest/test_all/config.yaml`
* Once you've done that then you can use the integration test framework from the project root:
* go install ./...
* test_all -backend remote
* test_all -backends remote
Or if you want to run the integration tests manually:

MANUAL.html (generated, 15788 changed lines): diff suppressed because one or more lines are too long

MANUAL.md (generated, 1485 changed lines): diff suppressed because it is too large

MANUAL.txt (generated, 14875 changed lines): diff suppressed because it is too large

View File

@@ -183,6 +183,9 @@ endif
@echo Beta release ready at $(BETA_URL)
circleci_upload:
sudo chown -R $$USER build
find build -type l -delete
gzip -r9v build
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
ifndef BRANCH_PATH
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest

View File

@@ -8,10 +8,7 @@
[Installation](https://rclone.org/install/) |
[Forum](https://forum.rclone.org/)
[![Build Status](https://travis-ci.org/rclone/rclone.svg?branch=master)](https://travis-ci.org/rclone/rclone)
[![Windows Build Status](https://ci.appveyor.com/api/projects/status/github/rclone/rclone?branch=master&passingText=windows%20-%20ok&svg=true)](https://ci.appveyor.com/project/rclone/rclone)
[![Build Status](https://dev.azure.com/rclone/rclone/_apis/build/status/rclone.rclone?branchName=master)](https://dev.azure.com/rclone/rclone/_build/latest?definitionId=2&branchName=master)
[![CircleCI](https://circleci.com/gh/rclone/rclone/tree/master.svg?style=svg)](https://circleci.com/gh/rclone/rclone/tree/master)
[![Build Status](https://github.com/rclone/rclone/workflows/build/badge.svg)](https://github.com/rclone/rclone/actions?query=workflow%3Abuild)
[![Go Report Card](https://goreportcard.com/badge/github.com/rclone/rclone)](https://goreportcard.com/report/github.com/rclone/rclone)
[![GoDoc](https://godoc.org/github.com/rclone/rclone?status.svg)](https://godoc.org/github.com/rclone/rclone)
[![Docker Pulls](https://img.shields.io/docker/pulls/rclone/rclone)](https://hub.docker.com/r/rclone/rclone)
@@ -34,6 +31,7 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
* Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* FTP [:page_facing_up:](https://rclone.org/ftp/)
* GetSky [:page_facing_up:](https://rclone.org/jottacloud/)
* Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
* Google Drive [:page_facing_up:](https://rclone.org/drive/)
* Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
@@ -45,6 +43,7 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
* Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
* Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
* Mega [:page_facing_up:](https://rclone.org/mega/)
* Memory [:page_facing_up:](https://rclone.org/memory/)
* Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
* Microsoft OneDrive [:page_facing_up:](https://rclone.org/onedrive/)
* Minio [:page_facing_up:](https://rclone.org/s3/#minio)
@@ -61,6 +60,8 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
* Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
* Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
* SFTP [:page_facing_up:](https://rclone.org/sftp/)
* StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
* SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
* Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
* WebDAV [:page_facing_up:](https://rclone.org/webdav/)
* Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)

View File

@@ -9,19 +9,20 @@ This file describes how to make the various kinds of releases
## Making a release
* git checkout master
* git pull
* git status - make sure everything is checked in
* Check travis & appveyor builds are green
* make check
* Check GitHub actions build for master is Green
* make test # see integration test server or run locally
* make tag
* edit docs/content/changelog.md
* edit docs/content/changelog.md # make sure to remove duplicate logs from point releases
* make tidy
* make doc
* git status - to check for new man pages - git add them
* git commit -a -v -m "Version v1.XX.0"
* make retag
* git push --tags origin master
* # Wait for the appveyor and travis builds to complete then...
* # Wait for the GitHub builds to complete then...
* make fetch_binaries
* make tarball
* make sign_upload

View File

@@ -1 +1 @@
v1.50.2
v1.51.0

View File

@@ -23,6 +23,7 @@ import (
_ "github.com/rclone/rclone/backend/local"
_ "github.com/rclone/rclone/backend/mailru"
_ "github.com/rclone/rclone/backend/mega"
_ "github.com/rclone/rclone/backend/memory"
_ "github.com/rclone/rclone/backend/onedrive"
_ "github.com/rclone/rclone/backend/opendrive"
_ "github.com/rclone/rclone/backend/pcloud"
@@ -32,6 +33,7 @@ import (
_ "github.com/rclone/rclone/backend/s3"
_ "github.com/rclone/rclone/backend/sftp"
_ "github.com/rclone/rclone/backend/sharefile"
_ "github.com/rclone/rclone/backend/sugarsync"
_ "github.com/rclone/rclone/backend/swift"
_ "github.com/rclone/rclone/backend/union"
_ "github.com/rclone/rclone/backend/webdav"

View File

@@ -28,18 +28,17 @@ import (
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"golang.org/x/oauth2"
)
const (
enc = encodings.AmazonCloudDrive
folderKind = "FOLDER"
fileKind = "FILE"
statusAvailable = "AVAILABLE"
@@ -137,15 +136,23 @@ which downloads the file through a temporary URL directly from the
underlying S3 storage.`,
Default: defaultTempLinkThreshold,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// Encode invalid UTF-8 bytes as json doesn't handle them properly.
Default: (encoder.Base |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Checkpoint string `config:"checkpoint"`
UploadWaitPerGB fs.Duration `config:"upload_wait_per_gb"`
TempLinkThreshold fs.SizeSuffix `config:"templink_threshold"`
Checkpoint string `config:"checkpoint"`
UploadWaitPerGB fs.Duration `config:"upload_wait_per_gb"`
TempLinkThreshold fs.SizeSuffix `config:"templink_threshold"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote acd server
@@ -386,7 +393,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
var resp *http.Response
var subFolder *acd.Folder
err = f.pacer.Call(func() (bool, error) {
subFolder, resp, err = folder.GetFolder(enc.FromStandardName(leaf))
subFolder, resp, err = folder.GetFolder(f.opt.Enc.FromStandardName(leaf))
return f.shouldRetry(resp, err)
})
if err != nil {
@@ -413,7 +420,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
var resp *http.Response
var info *acd.Folder
err = f.pacer.Call(func() (bool, error) {
info, resp, err = folder.CreateFolder(enc.FromStandardName(leaf))
info, resp, err = folder.CreateFolder(f.opt.Enc.FromStandardName(leaf))
return f.shouldRetry(resp, err)
})
if err != nil {
@@ -481,7 +488,7 @@ func (f *Fs) listAll(dirID string, title string, directoriesOnly bool, filesOnly
if !hasValidParent {
continue
}
*node.Name = enc.ToStandardName(*node.Name)
*node.Name = f.opt.Enc.ToStandardName(*node.Name)
// Store the nodes up in case we have to retry the listing
out = append(out, node)
}
@@ -671,7 +678,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
err = f.pacer.CallNoRetry(func() (bool, error) {
start := time.Now()
f.tokenRenewer.Start()
info, resp, err = folder.Put(in, enc.FromStandardName(leaf))
info, resp, err = folder.Put(in, f.opt.Enc.FromStandardName(leaf))
f.tokenRenewer.Stop()
var ok bool
ok, info, err = f.checkUpload(ctx, resp, in, src, info, err, time.Since(start))
@@ -1041,7 +1048,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
var resp *http.Response
var info *acd.File
err = o.fs.pacer.Call(func() (bool, error) {
info, resp, err = folder.GetFile(enc.FromStandardName(leaf))
info, resp, err = folder.GetFile(o.fs.opt.Enc.FromStandardName(leaf))
return o.fs.shouldRetry(resp, err)
})
if err != nil {
@@ -1161,7 +1168,7 @@ func (f *Fs) restoreNode(info *acd.Node) (newInfo *acd.Node, err error) {
func (f *Fs) renameNode(info *acd.Node, newName string) (newInfo *acd.Node, err error) {
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
newInfo, resp, err = info.Rename(enc.FromStandardName(newName))
newInfo, resp, err = info.Rename(f.opt.Enc.FromStandardName(newName))
return f.shouldRetry(resp, err)
})
return newInfo, err
@@ -1357,7 +1364,7 @@ func (f *Fs) changeNotifyRunner(notifyFunc func(string, fs.EntryType), checkpoin
if len(node.Parents) > 0 {
if path, ok := f.dirCache.GetInv(node.Parents[0]); ok {
// and append the drive file name to compute the full file name
name := enc.ToStandardName(*node.Name)
name := f.opt.Enc.ToStandardName(*node.Name)
if len(path) > 0 {
path = path + "/" + name
} else {

View File

@@ -16,7 +16,6 @@ import (
"net/http"
"net/url"
"path"
"strconv"
"strings"
"sync"
"time"
@@ -26,14 +25,15 @@ import (
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
)
@@ -61,8 +61,6 @@ const (
emulatorBlobEndpoint = "http://127.0.0.1:10000/devstoreaccount1"
)
const enc = encodings.AzureBlob
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -127,21 +125,32 @@ If blobs are in "archive tier" at remote, trying to perform data transfer
operations from remote will not be allowed. User should first restore by
tiering blob to "Hot" or "Cool".`,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.EncodeInvalidUtf8 |
encoder.EncodeSlash |
encoder.EncodeCtl |
encoder.EncodeDel |
encoder.EncodeBackSlash |
encoder.EncodeRightPeriod),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Account string `config:"account"`
Key string `config:"key"`
Endpoint string `config:"endpoint"`
SASURL string `config:"sas_url"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
ListChunkSize uint `config:"list_chunk"`
AccessTier string `config:"access_tier"`
UseEmulator bool `config:"use_emulator"`
Account string `config:"account"`
Key string `config:"key"`
Endpoint string `config:"endpoint"`
SASURL string `config:"sas_url"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
ListChunkSize uint `config:"list_chunk"`
AccessTier string `config:"access_tier"`
UseEmulator bool `config:"use_emulator"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote azure server
@@ -189,7 +198,7 @@ func (f *Fs) Root() string {
// String converts this Fs to a string
func (f *Fs) String() string {
if f.rootContainer == "" {
return fmt.Sprintf("Azure root")
return "Azure root"
}
if f.rootDirectory == "" {
return fmt.Sprintf("Azure container %s", f.rootContainer)
@@ -212,7 +221,7 @@ func parsePath(path string) (root string) {
// relative to f.root
func (f *Fs) split(rootRelativePath string) (containerName, containerPath string) {
containerName, containerPath = bucket.Split(path.Join(f.root, rootRelativePath))
return enc.FromStandardName(containerName), enc.FromStandardPath(containerPath)
return f.opt.Enc.FromStandardName(containerName), f.opt.Enc.FromStandardPath(containerPath)
}
// split returns container and containerPath from the object
@@ -588,7 +597,7 @@ func (f *Fs) list(ctx context.Context, container, directory, prefix string, addC
// if prefix != "" && !strings.HasPrefix(file.Name, prefix) {
// return nil
// }
remote := enc.ToStandardPath(file.Name)
remote := f.opt.Enc.ToStandardPath(file.Name)
if !strings.HasPrefix(remote, prefix) {
fs.Debugf(f, "Odd name received %q", remote)
continue
@@ -609,7 +618,7 @@ func (f *Fs) list(ctx context.Context, container, directory, prefix string, addC
// Send the subdirectories
for _, remote := range response.Segment.BlobPrefixes {
remote := strings.TrimRight(remote.Name, "/")
remote = enc.ToStandardPath(remote)
remote = f.opt.Enc.ToStandardPath(remote)
if !strings.HasPrefix(remote, prefix) {
fs.Debugf(f, "Odd directory name received %q", remote)
continue
@@ -673,7 +682,7 @@ func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err err
return entries, nil
}
err = f.listContainersToFn(func(container *azblob.ContainerItem) error {
d := fs.NewDir(enc.ToStandardName(container.Name), container.Properties.LastModified)
d := fs.NewDir(f.opt.Enc.ToStandardName(container.Name), container.Properties.LastModified)
f.cache.MarkOK(container.Name)
entries = append(entries, d)
return nil
@@ -1111,22 +1120,6 @@ func (o *Object) readMetaData() (err error) {
return o.decodeMetaDataFromPropertiesResponse(blobProperties)
}
// parseTimeString converts a decimal string number of milliseconds
// elapsed since January 1, 1970 UTC into a time.Time and stores it in
// the modTime variable.
func (o *Object) parseTimeString(timeString string) (err error) {
if timeString == "" {
return nil
}
unixMilliseconds, err := strconv.ParseInt(timeString, 10, 64)
if err != nil {
fs.Debugf(o, "Failed to parse mod time string %q: %v", timeString, err)
return err
}
o.modTime = time.Unix(unixMilliseconds/1e3, (unixMilliseconds%1e3)*1e6).UTC()
return nil
}
// ModTime returns the modification time of the object
//
// It attempts to read the objects mtime and if that isn't present the


@@ -23,20 +23,19 @@ import (
"github.com/rclone/rclone/backend/b2/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
)
const enc = encodings.B2
const (
defaultEndpoint = "https://api.backblazeb2.com"
headerPrefix = "x-bz-info-" // lower case as that is what the server returns
@@ -146,23 +145,34 @@ The duration before the download authorization token will expire.
The minimum value is 1 second. The maximum value is one week.`,
Default: fs.Duration(7 * 24 * time.Hour),
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// See: https://www.backblaze.com/b2/docs/files.html
// Encode invalid UTF-8 bytes as json doesn't handle them properly.
// FIXME: allow /, but not leading, trailing or double
Default: (encoder.Display |
encoder.EncodeBackSlash |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Account string `config:"account"`
Key string `config:"key"`
Endpoint string `config:"endpoint"`
TestMode string `config:"test_mode"`
Versions bool `config:"versions"`
HardDelete bool `config:"hard_delete"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DisableCheckSum bool `config:"disable_checksum"`
DownloadURL string `config:"download_url"`
DownloadAuthorizationDuration fs.Duration `config:"download_auth_duration"`
Account string `config:"account"`
Key string `config:"key"`
Endpoint string `config:"endpoint"`
TestMode string `config:"test_mode"`
Versions bool `config:"versions"`
HardDelete bool `config:"hard_delete"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DisableCheckSum bool `config:"disable_checksum"`
DownloadURL string `config:"download_url"`
DownloadAuthorizationDuration fs.Duration `config:"download_auth_duration"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote b2 server
@@ -402,7 +412,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
// If this is a key limited to a single bucket, it must exist already
if f.rootBucket != "" && f.info.Allowed.BucketID != "" {
allowedBucket := enc.ToStandardName(f.info.Allowed.BucketName)
allowedBucket := f.opt.Enc.ToStandardName(f.info.Allowed.BucketName)
if allowedBucket == "" {
return nil, errors.New("bucket that application key is restricted to no longer exists")
}
@@ -623,11 +633,11 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
var request = api.ListFileNamesRequest{
BucketID: bucketID,
MaxFileCount: chunkSize,
Prefix: enc.FromStandardPath(directory),
Prefix: f.opt.Enc.FromStandardPath(directory),
Delimiter: delimiter,
}
if directory != "" {
request.StartFileName = enc.FromStandardPath(directory)
request.StartFileName = f.opt.Enc.FromStandardPath(directory)
}
opts := rest.Opts{
Method: "POST",
@@ -647,7 +657,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
}
for i := range response.Files {
file := &response.Files[i]
file.Name = enc.ToStandardPath(file.Name)
file.Name = f.opt.Enc.ToStandardPath(file.Name)
// Finish if file name no longer has prefix
if prefix != "" && !strings.HasPrefix(file.Name, prefix) {
return nil
@@ -848,7 +858,7 @@ func (f *Fs) listBucketsToFn(ctx context.Context, fn listBucketFn) error {
f._bucketType = make(map[string]string, 1)
for i := range response.Buckets {
bucket := &response.Buckets[i]
bucket.Name = enc.ToStandardName(bucket.Name)
bucket.Name = f.opt.Enc.ToStandardName(bucket.Name)
f.cache.MarkOK(bucket.Name)
f._bucketID[bucket.Name] = bucket.ID
f._bucketType[bucket.Name] = bucket.Type
@@ -970,7 +980,7 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) error {
}
var request = api.CreateBucketRequest{
AccountID: f.info.AccountID,
Name: enc.FromStandardName(bucket),
Name: f.opt.Enc.FromStandardName(bucket),
Type: "allPrivate",
}
var response api.Bucket
@@ -1054,7 +1064,7 @@ func (f *Fs) hide(ctx context.Context, bucket, bucketPath string) error {
}
var request = api.HideFileRequest{
BucketID: bucketID,
Name: enc.FromStandardPath(bucketPath),
Name: f.opt.Enc.FromStandardPath(bucketPath),
}
var response api.File
err = f.pacer.Call(func() (bool, error) {
@@ -1082,7 +1092,7 @@ func (f *Fs) deleteByID(ctx context.Context, ID, Name string) error {
}
var request = api.DeleteFileRequest{
ID: ID,
Name: enc.FromStandardPath(Name),
Name: f.opt.Enc.FromStandardPath(Name),
}
var response api.File
err := f.pacer.Call(func() (bool, error) {
@@ -1220,7 +1230,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
var request = api.CopyFileRequest{
SourceID: srcObj.id,
Name: enc.FromStandardPath(dstPath),
Name: f.opt.Enc.FromStandardPath(dstPath),
MetadataDirective: "COPY",
DestBucketID: destBucketID,
}
@@ -1268,7 +1278,7 @@ func (f *Fs) getDownloadAuthorization(ctx context.Context, bucket, remote string
}
var request = api.GetDownloadAuthorizationRequest{
BucketID: bucketID,
FileNamePrefix: enc.FromStandardPath(path.Join(f.root, remote)),
FileNamePrefix: f.opt.Enc.FromStandardPath(path.Join(f.root, remote)),
ValidDurationInSeconds: validDurationInSeconds,
}
var response api.GetDownloadAuthorizationResponse
@@ -1509,7 +1519,7 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
}
var request = api.CopyFileRequest{
SourceID: o.id,
Name: enc.FromStandardPath(bucketPath), // copy to same name
Name: o.fs.opt.Enc.FromStandardPath(bucketPath), // copy to same name
MetadataDirective: "REPLACE",
ContentType: info.ContentType,
Info: info.Info,
@@ -1611,7 +1621,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
opts.Path += "/b2api/v1/b2_download_file_by_id?fileId=" + urlEncode(o.id)
} else {
bucket, bucketPath := o.split()
opts.Path += "/file/" + urlEncode(enc.FromStandardName(bucket)) + "/" + urlEncode(enc.FromStandardPath(bucketPath))
opts.Path += "/file/" + urlEncode(o.fs.opt.Enc.FromStandardName(bucket)) + "/" + urlEncode(o.fs.opt.Enc.FromStandardPath(bucketPath))
}
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
@@ -1808,7 +1818,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
Body: in,
ExtraHeaders: map[string]string{
"Authorization": upload.AuthorizationToken,
"X-Bz-File-Name": urlEncode(enc.FromStandardPath(bucketPath)),
"X-Bz-File-Name": urlEncode(o.fs.opt.Enc.FromStandardPath(bucketPath)),
"Content-Type": fs.MimeType(ctx, src),
sha1Header: calculatedSha1,
timeHeader: timeString(modTime),


@@ -111,7 +111,7 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
}
var request = api.StartLargeFileRequest{
BucketID: bucketID,
Name: enc.FromStandardPath(bucketPath),
Name: f.opt.Enc.FromStandardPath(bucketPath),
ContentType: fs.MimeType(ctx, src),
Info: map[string]string{
timeKey: timeString(modTime),


@@ -25,6 +25,7 @@ import (
"strings"
"time"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/jwtutil"
"github.com/youmark/pkcs8"
@@ -36,7 +37,6 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
@@ -48,8 +48,6 @@ import (
"golang.org/x/oauth2/jws"
)
const enc = encodings.Box
const (
rcloneClientID = "d0374ba6pgmaguie02ge15sv1mllndho"
rcloneEncryptedClientSecret = "sYbJYm99WB8jzeaLPU0OPDMJKIkZvD2qOn3SyEMfiJr03RdtDt3xcZEIudRhbIDL"
@@ -146,6 +144,21 @@ func init() {
Help: "Max number of times to try committing a multipart file.",
Default: 100,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// From https://developer.box.com/docs/error-codes#section-400-bad-request :
// > Box only supports file or folder names that are 255 characters or less.
// > File names containing non-printable ascii, "/" or "\", names with leading
// > or trailing spaces, and the special names “.” and “..” are also unsupported.
//
// Testing revealed names with leading spaces work fine.
// Also encode invalid UTF-8 bytes as json doesn't handle them properly.
Default: (encoder.Display |
encoder.EncodeBackSlash |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8),
}},
})
}
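As an aside, a minimal sketch of what a default like the one above does to a name Box would otherwise reject (standalone snippet under stated assumptions; the backend itself calls the same methods through f.opt.Enc):

package main

import (
	"fmt"

	"github.com/rclone/rclone/lib/encoder"
)

func main() {
	enc := encoder.Display |
		encoder.EncodeBackSlash |
		encoder.EncodeRightSpace |
		encoder.EncodeInvalidUtf8
	// Box rejects backslashes and trailing spaces, so the encoder swaps
	// them for safe lookalikes on the way out...
	wire := enc.FromStandardName(`back\slash `)
	// ...and swaps them back when listing, so names round-trip unchanged.
	fmt.Println(wire, "->", enc.ToStandardName(wire))
}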
@@ -220,8 +233,9 @@ func getDecryptedPrivateKey(boxConfig *api.ConfigJSON) (key *rsa.PrivateKey, err
// Options defines the configuration for this backend
type Options struct {
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
CommitRetries int `config:"commit_retries"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
CommitRetries int `config:"commit_retries"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote box
@@ -488,7 +502,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
Parameters: fieldsValue(),
}
mkdir := api.CreateFolder{
Name: enc.FromStandardName(leaf),
Name: f.opt.Enc.FromStandardName(leaf),
Parent: api.Parent{
ID: pathID,
},
@@ -554,7 +568,7 @@ OUTER:
if item.ItemStatus != api.ItemStatusActive {
continue
}
item.Name = enc.ToStandardName(item.Name)
item.Name = f.opt.Enc.ToStandardName(item.Name)
if fn(item) {
found = true
break OUTER
@@ -791,7 +805,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
Parameters: fieldsValue(),
}
copyFile := api.CopyFile{
Name: enc.FromStandardName(leaf),
Name: f.opt.Enc.FromStandardName(leaf),
Parent: api.Parent{
ID: directoryID,
},
@@ -830,7 +844,7 @@ func (f *Fs) move(ctx context.Context, endpoint, id, leaf, directoryID string) (
Parameters: fieldsValue(),
}
move := api.UpdateFileMove{
Name: enc.FromStandardName(leaf),
Name: f.opt.Enc.FromStandardName(leaf),
Parent: api.Parent{
ID: directoryID,
},
@@ -1155,7 +1169,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// This is recommended for less than 50 MB of content
func (o *Object) upload(ctx context.Context, in io.Reader, leaf, directoryID string, modTime time.Time) (err error) {
upload := api.UploadFile{
Name: enc.FromStandardName(leaf),
Name: o.fs.opt.Enc.FromStandardName(leaf),
ContentModifiedAt: api.Time(modTime),
ContentCreatedAt: api.Time(modTime),
Parent: api.Parent{


@@ -38,7 +38,7 @@ func (o *Object) createUploadSession(ctx context.Context, leaf, directoryID stri
} else {
opts.Path = "/files/upload_sessions"
request.FolderID = directoryID
request.FileName = enc.FromStandardName(leaf)
request.FileName = o.fs.opt.Enc.FromStandardName(leaf)
}
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {


@@ -1,4 +1,5 @@
// +build !plan9
// +build !race
package cache_test
@@ -17,7 +18,6 @@ import (
"path/filepath"
"runtime"
"runtime/debug"
"strconv"
"strings"
"testing"
"time"
@@ -1139,23 +1139,6 @@ func (r *run) randomReader(t *testing.T, size int64) io.ReadCloser {
return f
}
func (r *run) writeRemoteRandomBytes(t *testing.T, f fs.Fs, p string, size int64) string {
remote := path.Join(p, strconv.Itoa(rand.Int())+".bin")
// create some rand test data
testData := randStringBytes(int(size))
r.writeRemoteBytes(t, f, remote, testData)
return remote
}
func (r *run) writeObjectRandomBytes(t *testing.T, f fs.Fs, p string, size int64) fs.Object {
remote := path.Join(p, strconv.Itoa(rand.Int())+".bin")
// create some rand test data
testData := randStringBytes(int(size))
return r.writeObjectBytes(t, f, remote, testData)
}
func (r *run) writeRemoteString(t *testing.T, f fs.Fs, remote, content string) {
r.writeRemoteBytes(t, f, remote, []byte(content))
}
@@ -1344,26 +1327,6 @@ func (r *run) list(t *testing.T, f fs.Fs, remote string) ([]interface{}, error)
return l, err
}
func (r *run) listPath(t *testing.T, f fs.Fs, remote string) []string {
var err error
var l []string
if r.useMount {
var list []os.FileInfo
list, err = ioutil.ReadDir(path.Join(r.mntDir, remote))
for _, ll := range list {
l = append(l, ll.Name())
}
} else {
var list fs.DirEntries
list, err = f.List(context.Background(), remote)
for _, ll := range list {
l = append(l, ll.Remote())
}
}
require.NoError(t, err)
return l
}
func (r *run) copyFile(t *testing.T, f fs.Fs, src, dst string) error {
in, err := os.Open(src)
if err != nil {

backend/cache/cache_mount_other_test.go (new file)

@@ -0,0 +1,21 @@
// +build !linux !go1.13
// +build !darwin !go1.13
// +build !freebsd !go1.13
// +build !windows
// +build !race
package cache_test
import (
"testing"
"github.com/rclone/rclone/fs"
)
func (r *run) mountFs(t *testing.T, f fs.Fs) {
panic("mountFs not defined for this platform")
}
func (r *run) unmountFs(t *testing.T, f fs.Fs) {
panic("unmountFs not defined for this platform")
}


@@ -1,4 +1,5 @@
// +build !plan9,!windows
// +build linux,go1.13 darwin,go1.13 freebsd,go1.13
// +build !race
package cache_test


@@ -1,4 +1,5 @@
// +build windows
// +build !race
package cache_test


@@ -1,6 +1,7 @@
// Test Cache filesystem interface
// +build !plan9
// +build !race
package cache_test


@@ -1,4 +1,5 @@
// +build !plan9
// +build !race
package cache_test


@@ -101,15 +101,6 @@ func (d *Directory) abs() string {
return cleanPath(path.Join(d.Dir, d.Name))
}
// parentRemote returns the absolute path parent remote
func (d *Directory) parentRemote() string {
absPath := d.abs()
if absPath == "" {
return ""
}
return cleanPath(path.Dir(absPath))
}
// ModTime returns the cached ModTime
func (d *Directory) ModTime(ctx context.Context) time.Time {
return time.Unix(0, d.CacheModTime)


@@ -24,15 +24,16 @@ const (
type Object struct {
fs.Object `json:"-"`
ParentFs fs.Fs `json:"-"` // parent fs
CacheFs *Fs `json:"-"` // cache fs
Name string `json:"name"` // name of the directory
Dir string `json:"dir"` // abs path of the object
CacheModTime int64 `json:"modTime"` // modification or creation time - IsZero for unknown
CacheSize int64 `json:"size"` // size of directory and contents or -1 if unknown
CacheStorable bool `json:"storable"` // says whether this object can be stored
CacheType string `json:"cacheType"`
CacheTs time.Time `json:"cacheTs"`
ParentFs fs.Fs `json:"-"` // parent fs
CacheFs *Fs `json:"-"` // cache fs
Name string `json:"name"` // name of the directory
Dir string `json:"dir"` // abs path of the object
CacheModTime int64 `json:"modTime"` // modification or creation time - IsZero for unknown
CacheSize int64 `json:"size"` // size of directory and contents or -1 if unknown
CacheStorable bool `json:"storable"` // says whether this object can be stored
CacheType string `json:"cacheType"`
CacheTs time.Time `json:"cacheTs"`
cacheHashesMu sync.Mutex
CacheHashes map[hash.Type]string // all supported hashes cached
refreshMutex sync.Mutex
@@ -103,7 +104,9 @@ func (o *Object) updateData(ctx context.Context, source fs.Object) {
o.CacheSize = source.Size()
o.CacheStorable = source.Storable()
o.CacheTs = time.Now()
o.cacheHashesMu.Lock()
o.CacheHashes = make(map[hash.Type]string)
o.cacheHashesMu.Unlock()
}
// Fs returns its FS info
@@ -268,7 +271,9 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
o.CacheModTime = src.ModTime(ctx).UnixNano()
o.CacheSize = src.Size()
o.cacheHashesMu.Lock()
o.CacheHashes = make(map[hash.Type]string)
o.cacheHashesMu.Unlock()
o.CacheTs = time.Now()
o.persist()
@@ -309,11 +314,12 @@ func (o *Object) Remove(ctx context.Context) error {
// since it might or might not be called, this is lazy loaded
func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) {
_ = o.refresh(ctx)
o.cacheHashesMu.Lock()
if o.CacheHashes == nil {
o.CacheHashes = make(map[hash.Type]string)
}
cachedHash, found := o.CacheHashes[ht]
o.cacheHashesMu.Unlock()
if found {
return cachedHash, nil
}
@@ -324,7 +330,9 @@ func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) {
if err != nil {
return "", err
}
o.cacheHashesMu.Lock()
o.CacheHashes[ht] = liveHash
o.cacheHashesMu.Unlock()
o.persist()
fs.Debugf(o, "object hash cached: %v", liveHash)
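The locking added above follows a standard guarded lazy-init pattern around the CacheHashes map; a compact, self-contained sketch of the same idea (types simplified, names hypothetical):

package main

import (
	"fmt"
	"sync"
)

type object struct {
	mu     sync.Mutex
	hashes map[string]string // stands in for CacheHashes
}

// cachedHash reads (and lazily initialises) the map entirely under the
// lock, mirroring the Hash method above, so concurrent readers and the
// writers in updateData/Update cannot race on the map.
func (o *object) cachedHash(ht string) (string, bool) {
	o.mu.Lock()
	defer o.mu.Unlock()
	if o.hashes == nil {
		o.hashes = make(map[string]string)
	}
	v, ok := o.hashes[ht]
	return v, ok
}

func main() {
	var o object
	fmt.Println(o.cachedHash("sha1")) // "" false: nothing cached yet
}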


@@ -767,31 +767,6 @@ func (b *Persistent) iterateBuckets(buk *bolt.Bucket, bucketFn func(name string)
return err
}
func (b *Persistent) dumpRoot() string {
var itBuckets func(buk *bolt.Bucket) map[string]interface{}
itBuckets = func(buk *bolt.Bucket) map[string]interface{} {
m := make(map[string]interface{})
c := buk.Cursor()
for k, v := c.First(); k != nil; k, v = c.Next() {
if v == nil {
buk2 := buk.Bucket(k)
m[string(k)] = itBuckets(buk2)
} else {
m[string(k)] = "-"
}
}
return m
}
var mm map[string]interface{}
_ = b.db.View(func(tx *bolt.Tx) error {
mm = itBuckets(tx.Bucket([]byte(RootBucket)))
return nil
})
raw, _ := json.MarshalIndent(mm, "", " ")
return string(raw)
}
// addPendingUpload adds a new file to the pending queue of uploads
func (b *Persistent) addPendingUpload(destPath string, started bool) error {
return b.db.Update(func(tx *bolt.Tx) error {


@@ -5,6 +5,7 @@ import (
"context"
"fmt"
"io"
"path"
"strings"
"time"
@@ -35,19 +36,21 @@ func init() {
Default: "standard",
Examples: []fs.OptionExample{
{
Value: "off",
Help: "Don't encrypt the file names. Adds a \".bin\" extension only.",
}, {
Value: "standard",
Help: "Encrypt the filenames see the docs for the details.",
}, {
Value: "obfuscate",
Help: "Very simple filename obfuscation.",
}, {
Value: "off",
Help: "Don't encrypt the file names. Adds a \".bin\" extension only.",
},
},
}, {
Name: "directory_name_encryption",
Help: "Option to either encrypt directory names or leave them intact.",
Name: "directory_name_encryption",
Help: `Option to either encrypt directory names or leave them intact.
NB If filename_encryption is "off" then this option will do nothing.`,
Default: true,
Examples: []fs.OptionExample{
{
@@ -144,6 +147,10 @@ func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, errors.Wrapf(err, "failed to parse remote %q to wrap", remote)
}
// Make sure to remove trailing . referring to the current dir
if path.Base(rpath) == "." {
rpath = strings.TrimSuffix(rpath, ".")
}
// Look for a file first
remotePath := fspath.JoinRootPath(wPath, cipher.EncryptFileName(rpath))
wrappedFs, err := wInfo.NewFs(wName, remotePath, wConfig)
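The trailing-dot trim above is terse; a self-contained sketch (hypothetical helper, standard library only) showing the two inputs it normalises:

package main

import (
	"fmt"
	"path"
	"strings"
)

// normalise mirrors the hunk above: a final "." element just means
// "the current directory", so it is dropped before encryption.
func normalise(rpath string) string {
	if path.Base(rpath) == "." {
		rpath = strings.TrimSuffix(rpath, ".")
	}
	return rpath
}

func main() {
	fmt.Printf("%q\n", normalise("dir/subdir/.")) // "dir/subdir/"
	fmt.Printf("%q\n", normalise("."))            // ""
}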


@@ -33,12 +33,12 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
@@ -49,8 +49,6 @@ import (
"google.golang.org/api/googleapi"
)
const enc = encodings.Drive
// Constants
const (
rcloneClientID = "202264815644.apps.googleusercontent.com"
@@ -326,6 +324,7 @@ Photos folder" option in your google drive settings. You can then copy
or move the photos locally and use the date the image was taken
(created) set as the modification date.`,
Advanced: true,
Hide: fs.OptionHideConfigurator,
}, {
Name: "use_shared_date",
Default: false,
@@ -337,6 +336,7 @@ unexpected consequences when uploading/downloading files.
If both this flag and "--drive-use-created-date" are set, the created
date is used.`,
Advanced: true,
Hide: fs.OptionHideConfigurator,
}, {
Name: "list_chunk",
Default: 1000,
@@ -395,11 +395,22 @@ will download it anyway.`,
}, {
Name: "size_as_quota",
Default: false,
Help: `Show storage quota usage for file size.
Help: `Show sizes as storage quota usage, not actual size.
The storage used by a file is the size of the current version plus any
older versions that have been set to keep forever.`,
Show the size of a file as the storage quota used. This is the
current version plus any older versions that have been set to keep
forever.
**WARNING**: This flag may have some unexpected consequences.
It is not recommended to set this flag in your config - the
recommended usage is using the flag form --drive-size-as-quota when
doing rclone ls/lsl/lsf/lsjson/etc only.
If you do use this flag for syncing (not recommended) then you will
need to use --ignore-size also.`,
Advanced: true,
Hide: fs.OptionHideConfigurator,
}, {
Name: "v2_download_min_size",
Default: fs.SizeSuffix(-1),
@@ -439,6 +450,30 @@ See: https://github.com/rclone/rclone/issues/3631
`,
Advanced: true,
}, {
Name: "stop_on_upload_limit",
Default: false,
Help: `Make upload limit errors be fatal
At the time of writing it is only possible to upload 750GB of data to
Google Drive a day (this is an undocumented limit). When this limit is
reached Google Drive produces a slightly different error message. When
this flag is set it causes these errors to be fatal. These will stop
the in-progress sync.
Note that this detection relies on error message strings which
Google don't document, so it may break in the future.
See: https://github.com/rclone/rclone/issues/3857
`,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// Encode invalid UTF-8 bytes as json doesn't handle them properly.
// Don't encode / as it's a valid name character in drive.
Default: encoder.EncodeInvalidUtf8,
}},
})
@@ -458,36 +493,38 @@ See: https://github.com/rclone/rclone/issues/3631
// Options defines the configuration for this backend
type Options struct {
Scope string `config:"scope"`
RootFolderID string `config:"root_folder_id"`
ServiceAccountFile string `config:"service_account_file"`
ServiceAccountCredentials string `config:"service_account_credentials"`
TeamDriveID string `config:"team_drive"`
AuthOwnerOnly bool `config:"auth_owner_only"`
UseTrash bool `config:"use_trash"`
SkipGdocs bool `config:"skip_gdocs"`
SkipChecksumGphotos bool `config:"skip_checksum_gphotos"`
SharedWithMe bool `config:"shared_with_me"`
TrashedOnly bool `config:"trashed_only"`
Extensions string `config:"formats"`
ExportExtensions string `config:"export_formats"`
ImportExtensions string `config:"import_formats"`
AllowImportNameChange bool `config:"allow_import_name_change"`
UseCreatedDate bool `config:"use_created_date"`
UseSharedDate bool `config:"use_shared_date"`
ListChunk int64 `config:"list_chunk"`
Impersonate string `config:"impersonate"`
AlternateExport bool `config:"alternate_export"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
AcknowledgeAbuse bool `config:"acknowledge_abuse"`
KeepRevisionForever bool `config:"keep_revision_forever"`
SizeAsQuota bool `config:"size_as_quota"`
V2DownloadMinSize fs.SizeSuffix `config:"v2_download_min_size"`
PacerMinSleep fs.Duration `config:"pacer_min_sleep"`
PacerBurst int `config:"pacer_burst"`
ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
DisableHTTP2 bool `config:"disable_http2"`
Scope string `config:"scope"`
RootFolderID string `config:"root_folder_id"`
ServiceAccountFile string `config:"service_account_file"`
ServiceAccountCredentials string `config:"service_account_credentials"`
TeamDriveID string `config:"team_drive"`
AuthOwnerOnly bool `config:"auth_owner_only"`
UseTrash bool `config:"use_trash"`
SkipGdocs bool `config:"skip_gdocs"`
SkipChecksumGphotos bool `config:"skip_checksum_gphotos"`
SharedWithMe bool `config:"shared_with_me"`
TrashedOnly bool `config:"trashed_only"`
Extensions string `config:"formats"`
ExportExtensions string `config:"export_formats"`
ImportExtensions string `config:"import_formats"`
AllowImportNameChange bool `config:"allow_import_name_change"`
UseCreatedDate bool `config:"use_created_date"`
UseSharedDate bool `config:"use_shared_date"`
ListChunk int64 `config:"list_chunk"`
Impersonate string `config:"impersonate"`
AlternateExport bool `config:"alternate_export"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
AcknowledgeAbuse bool `config:"acknowledge_abuse"`
KeepRevisionForever bool `config:"keep_revision_forever"`
SizeAsQuota bool `config:"size_as_quota"`
V2DownloadMinSize fs.SizeSuffix `config:"v2_download_min_size"`
PacerMinSleep fs.Duration `config:"pacer_min_sleep"`
PacerBurst int `config:"pacer_burst"`
ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
DisableHTTP2 bool `config:"disable_http2"`
StopOnUploadLimit bool `config:"stop_on_upload_limit"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote drive server
@@ -558,7 +595,7 @@ func (f *Fs) Features() *fs.Features {
}
// shouldRetry determines whether a given err rates being retried
func shouldRetry(err error) (bool, error) {
func (f *Fs) shouldRetry(err error) (bool, error) {
if err == nil {
return false, nil
}
@@ -574,6 +611,10 @@ func shouldRetry(err error) (bool, error) {
if len(gerr.Errors) > 0 {
reason := gerr.Errors[0].Reason
if reason == "rateLimitExceeded" || reason == "userRateLimitExceeded" {
if f.opt.StopOnUploadLimit && gerr.Errors[0].Message == "User rate limit exceeded." {
fs.Errorf(f, "Received upload limit error: %v", err)
return false, fserrors.FatalError(err)
}
return true, err
}
}
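fserrors.FatalError above is what turns the flag into a hard stop; a small sketch of the contract the retry machinery relies on (caller shape illustrative, not the actual sync engine code):

package main

import (
	"fmt"

	"github.com/pkg/errors"
	"github.com/rclone/rclone/fs/fserrors"
)

func main() {
	// What shouldRetry returns when stop_on_upload_limit trips:
	err := fserrors.FatalError(errors.New("User rate limit exceeded."))
	// Retry loops consult this before trying again; a fatal error
	// aborts the whole in-progress sync instead of being retried.
	fmt.Println(fserrors.IsFatalError(err)) // true
}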
@@ -610,7 +651,7 @@ func (f *Fs) getRootID() (string, error) {
Fields("id").
SupportsAllDrives(true).
Do()
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return "", errors.Wrap(err, "couldn't find root directory ID")
@@ -655,7 +696,7 @@ func (f *Fs) list(ctx context.Context, dirIDs []string, title string, directorie
}
var stems []string
if title != "" {
searchTitle := enc.FromStandardName(title)
searchTitle := f.opt.Enc.FromStandardName(title)
// Escaping the backslash isn't documented but seems to work
searchTitle = strings.Replace(searchTitle, `\`, `\\`, -1)
searchTitle = strings.Replace(searchTitle, `'`, `\'`, -1)
@@ -716,20 +757,23 @@ func (f *Fs) list(ctx context.Context, dirIDs []string, title string, directorie
fields += ",quotaBytesUsed"
}
fields = fmt.Sprintf("files(%s),nextPageToken", fields)
fields = fmt.Sprintf("files(%s),nextPageToken,incompleteSearch", fields)
OUTER:
for {
var files *drive.FileList
err = f.pacer.Call(func() (bool, error) {
files, err = list.Fields(googleapi.Field(fields)).Context(ctx).Do()
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return false, errors.Wrap(err, "couldn't list directory")
}
if files.IncompleteSearch {
fs.Errorf(f, "search result INCOMPLETE")
}
for _, item := range files.Files {
item.Name = enc.ToStandardName(item.Name)
item.Name = f.opt.Enc.ToStandardName(item.Name)
// Check the case of items is correct since
// the `=` operator is case insensitive.
if title != "" && title != item.Name {
@@ -860,11 +904,12 @@ func configTeamDrive(ctx context.Context, opt *Options, m configmap.Mapper, name
var driveIDs, driveNames []string
listTeamDrives := svc.Teamdrives.List().PageSize(100)
listFailed := false
var defaultFs Fs // default Fs with default Options
for {
var teamDrives *drive.TeamDriveList
err = newPacer(opt).Call(func() (bool, error) {
teamDrives, err = listTeamDrives.Context(ctx).Do()
return shouldRetry(err)
return defaultFs.shouldRetry(err)
})
if err != nil {
fmt.Printf("Listing team drives failed: %v\n", err)
@@ -1290,7 +1335,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
// CreateDir makes a directory with pathID as parent and name leaf
func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) {
leaf = enc.FromStandardName(leaf)
leaf = f.opt.Enc.FromStandardName(leaf)
// fmt.Println("Making", path)
// Define the metadata for the directory we are going to create.
createInfo := &drive.File{
@@ -1305,7 +1350,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
Fields("id").
SupportsAllDrives(true).
Do()
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return "", err
@@ -1348,7 +1393,7 @@ func (f *Fs) fetchFormats() {
about, err = f.svc.About.Get().
Fields("exportFormats,importFormats").
Do()
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
fs.Errorf(f, "Failed to get Drive exportFormats and importFormats: %v", err)
@@ -1546,15 +1591,23 @@ func (f *Fs) listRRunner(ctx context.Context, wg *sync.WaitGroup, in <-chan list
listRSlices{dirs, paths}.Sort()
var iErr error
_, err := f.list(ctx, dirs, "", false, false, false, func(item *drive.File) bool {
// shared with me items have no parents when at the root
if f.opt.SharedWithMe && len(item.Parents) == 0 && len(paths) == 1 && paths[0] == "" {
item.Parents = dirs
}
for _, parent := range item.Parents {
// only handle parents that are in the requested dirs list
i := sort.SearchStrings(dirs, parent)
if i == len(dirs) || dirs[i] != parent {
continue
var i int
// If only one item in paths then no need to search for the ID
// assuming google drive is doing its job properly.
//
// Note that we are at the root when len(paths) == 1 && paths[0] == ""
if len(paths) == 1 {
// don't check parents at root because
// - shared with me items have no parents at the root
// - if using a root alias, eg "root" or "appDataFolder" the ID won't match
i = 0
} else {
// only handle parents that are in the requested dirs list if not at root
i = sort.SearchStrings(dirs, parent)
if i == len(dirs) || dirs[i] != parent {
continue
}
}
remote := path.Join(paths[i], item.Name)
entry, err := f.itemToDirEntry(remote, item)
@@ -1740,7 +1793,7 @@ func (f *Fs) createFileInfo(ctx context.Context, remote string, modTime time.Tim
return nil, err
}
leaf = enc.FromStandardName(leaf)
leaf = f.opt.Enc.FromStandardName(leaf)
// Define the metadata for the file we are going to create.
createInfo := &drive.File{
Name: leaf,
@@ -1814,7 +1867,7 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
}
var info *drive.File
if size == 0 || size < int64(f.opt.UploadCutoff) {
if size >= 0 && size < int64(f.opt.UploadCutoff) {
// Make the API request to upload metadata and file data.
// Don't retry, return a retry error instead
err = f.pacer.CallNoRetry(func() (bool, error) {
@@ -1824,7 +1877,7 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
SupportsAllDrives(true).
KeepRevisionForever(f.opt.KeepRevisionForever).
Do()
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return nil, err
@@ -1867,7 +1920,7 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
Fields("").
SupportsAllDrives(true).
Do()
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return errors.Wrapf(err, "MergeDirs move failed on %q in %v", info.Name, srcDir)
@@ -1913,7 +1966,7 @@ func (f *Fs) rmdir(ctx context.Context, directoryID string, useTrash bool) error
SupportsAllDrives(true).
Do()
}
return shouldRetry(err)
return f.shouldRetry(err)
})
}
@@ -2011,7 +2064,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
SupportsAllDrives(true).
KeepRevisionForever(f.opt.KeepRevisionForever).
Do()
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return nil, err
@@ -2060,7 +2113,7 @@ func (f *Fs) Purge(ctx context.Context) error {
SupportsAllDrives(true).
Do()
}
return shouldRetry(err)
return f.shouldRetry(err)
})
f.dirCache.ResetRoot()
if err != nil {
@@ -2073,7 +2126,7 @@ func (f *Fs) Purge(ctx context.Context) error {
func (f *Fs) CleanUp(ctx context.Context) error {
err := f.pacer.Call(func() (bool, error) {
err := f.svc.Files.EmptyTrash().Context(ctx).Do()
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
@@ -2090,7 +2143,7 @@ func (f *Fs) teamDriveOK(ctx context.Context) (err error) {
var td *drive.Drive
err = f.pacer.Call(func() (bool, error) {
td, err = f.svc.Drives.Get(f.opt.TeamDriveID).Fields("name,id,capabilities,createdTime,restrictions").Context(ctx).Do()
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return errors.Wrap(err, "failed to get Team/Shared Drive info")
@@ -2113,7 +2166,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var err error
err = f.pacer.Call(func() (bool, error) {
about, err = f.svc.About.Get().Fields("storageQuota").Context(ctx).Do()
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to get Drive storageQuota")
@@ -2185,7 +2238,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
Fields(partialFields).
SupportsAllDrives(true).
Do()
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return nil, err
@@ -2221,7 +2274,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
Fields("").
SupportsAllDrives(true).
Do()
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return "", err
@@ -2321,7 +2374,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
Fields("").
SupportsAllDrives(true).
Do()
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return err
@@ -2387,7 +2440,7 @@ func (f *Fs) changeNotifyStartPageToken() (pageToken string, err error) {
changes.DriveId(f.opt.TeamDriveID)
}
startPageToken, err = changes.Do()
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return
@@ -2416,7 +2469,7 @@ func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.
changesCall.Spaces("appDataFolder")
}
changeList, err = changesCall.Context(ctx).Do()
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return
@@ -2439,7 +2492,7 @@ func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.
// find the new path
if change.File != nil {
change.File.Name = enc.ToStandardName(change.File.Name)
change.File.Name = f.opt.Enc.ToStandardName(change.File.Name)
changeType := fs.EntryDirectory
if change.File.MimeType != driveFolderType {
changeType = fs.EntryObject
@@ -2607,7 +2660,7 @@ func (o *baseObject) SetModTime(ctx context.Context, modTime time.Time) error {
Fields(partialFields).
SupportsAllDrives(true).
Do()
return shouldRetry(err)
return o.fs.shouldRetry(err)
})
if err != nil {
return err
@@ -2646,7 +2699,7 @@ func (o *baseObject) httpResponse(ctx context.Context, url, method string, optio
_ = res.Body.Close() // ignore error
}
}
return shouldRetry(err)
return o.fs.shouldRetry(err)
})
if err != nil {
return req, nil, err
@@ -2736,7 +2789,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
Fields("downloadUrl").
SupportsAllDrives(true).
Do()
return shouldRetry(err)
return o.fs.shouldRetry(err)
})
if err == nil {
fs.Debugf(o, "Using v2 download: %v", v2File.DownloadUrl)
@@ -2808,7 +2861,7 @@ func (o *baseObject) update(ctx context.Context, updateInfo *drive.File, uploadM
src fs.ObjectInfo) (info *drive.File, err error) {
// Make the API request to upload metadata and file data.
size := src.Size()
if size == 0 || size < int64(o.fs.opt.UploadCutoff) {
if size >= 0 && size < int64(o.fs.opt.UploadCutoff) {
// Don't retry, return a retry error instead
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
info, err = o.fs.svc.Files.Update(o.id, updateInfo).
@@ -2817,7 +2870,7 @@ func (o *baseObject) update(ctx context.Context, updateInfo *drive.File, uploadM
SupportsAllDrives(true).
KeepRevisionForever(o.fs.opt.KeepRevisionForever).
Do()
return shouldRetry(err)
return o.fs.shouldRetry(err)
})
return
}
@@ -2917,7 +2970,7 @@ func (o *baseObject) Remove(ctx context.Context) error {
SupportsAllDrives(true).
Do()
}
return shouldRetry(err)
return o.fs.shouldRetry(err)
})
return err
}


@@ -11,16 +11,15 @@
package drive
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"regexp"
"strconv"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/readers"
@@ -88,13 +87,15 @@ func (f *Fs) Upload(ctx context.Context, in io.Reader, size int64, contentType,
})
req.Header.Set("Content-Type", "application/json; charset=UTF-8")
req.Header.Set("X-Upload-Content-Type", contentType)
req.Header.Set("X-Upload-Content-Length", fmt.Sprintf("%v", size))
if size >= 0 {
req.Header.Set("X-Upload-Content-Length", fmt.Sprintf("%v", size))
}
res, err = f.client.Do(req)
if err == nil {
defer googleapi.CloseBody(res)
err = googleapi.CheckResponse(res)
}
return shouldRetry(err)
return f.shouldRetry(err)
})
if err != nil {
return nil, err
@@ -116,49 +117,19 @@ func (rx *resumableUpload) makeRequest(ctx context.Context, start int64, body io
req, _ := http.NewRequest("POST", rx.URI, body)
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
req.ContentLength = reqSize
totalSize := "*"
if rx.ContentLength >= 0 {
totalSize = strconv.FormatInt(rx.ContentLength, 10)
}
if reqSize != 0 {
req.Header.Set("Content-Range", fmt.Sprintf("bytes %v-%v/%v", start, start+reqSize-1, rx.ContentLength))
req.Header.Set("Content-Range", fmt.Sprintf("bytes %v-%v/%v", start, start+reqSize-1, totalSize))
} else {
req.Header.Set("Content-Range", fmt.Sprintf("bytes */%v", rx.ContentLength))
req.Header.Set("Content-Range", fmt.Sprintf("bytes */%v", totalSize))
}
req.Header.Set("Content-Type", rx.MediaType)
return req
}
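Since the Content-Range branch above carries the whole unknown-size protocol, here is a tiny sketch of the header values it produces (helper name and byte counts are illustrative):

package main

import (
	"fmt"
	"strconv"
)

// contentRange reproduces the logic above: the total is "*" until the
// stream length is known; a zero-size request only queries status.
func contentRange(start, reqSize, contentLength int64) string {
	totalSize := "*"
	if contentLength >= 0 {
		totalSize = strconv.FormatInt(contentLength, 10)
	}
	if reqSize != 0 {
		return fmt.Sprintf("bytes %v-%v/%v", start, start+reqSize-1, totalSize)
	}
	return fmt.Sprintf("bytes */%v", totalSize)
}

func main() {
	fmt.Println(contentRange(0, 8<<20, -1))          // bytes 0-8388607/* (size still unknown)
	fmt.Println(contentRange(8<<20, 100, 8<<20+100)) // bytes 8388608-8388707/8388708 (final chunk)
}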
// rangeRE matches the transfer status response from the server. $1 is
// the last byte index uploaded.
var rangeRE = regexp.MustCompile(`^0\-(\d+)$`)
// Query drive for the amount transferred so far
//
// If error is nil, then start should be valid
func (rx *resumableUpload) transferStatus(ctx context.Context) (start int64, err error) {
req := rx.makeRequest(ctx, 0, nil, 0)
res, err := rx.f.client.Do(req)
if err != nil {
return 0, err
}
defer googleapi.CloseBody(res)
if res.StatusCode == http.StatusCreated || res.StatusCode == http.StatusOK {
return rx.ContentLength, nil
}
if res.StatusCode != statusResumeIncomplete {
err = googleapi.CheckResponse(res)
if err != nil {
return 0, err
}
return 0, errors.Errorf("unexpected http return code %v", res.StatusCode)
}
Range := res.Header.Get("Range")
if m := rangeRE.FindStringSubmatch(Range); len(m) == 2 {
start, err = strconv.ParseInt(m[1], 10, 64)
if err == nil {
return start, nil
}
}
return 0, errors.Errorf("unable to parse range %q", Range)
}
// Transfer a chunk - caller must call googleapi.CloseBody(res) if err == nil || res != nil
func (rx *resumableUpload) transferChunk(ctx context.Context, start int64, chunk io.ReadSeeker, chunkSize int64) (int, error) {
_, _ = chunk.Seek(0, io.SeekStart)
@@ -200,18 +171,40 @@ func (rx *resumableUpload) Upload(ctx context.Context) (*drive.File, error) {
var StatusCode int
var err error
buf := make([]byte, int(rx.f.opt.ChunkSize))
for start < rx.ContentLength {
reqSize := rx.ContentLength - start
if reqSize >= int64(rx.f.opt.ChunkSize) {
reqSize = int64(rx.f.opt.ChunkSize)
for finished := false; !finished; {
var reqSize int64
var chunk io.ReadSeeker
if rx.ContentLength >= 0 {
// If size known use repeatable reader for smoother bwlimit
if start >= rx.ContentLength {
break
}
reqSize = rx.ContentLength - start
if reqSize >= int64(rx.f.opt.ChunkSize) {
reqSize = int64(rx.f.opt.ChunkSize)
}
chunk = readers.NewRepeatableLimitReaderBuffer(rx.Media, buf, reqSize)
} else {
// If size unknown read into buffer
var n int
n, err = readers.ReadFill(rx.Media, buf)
if err == io.EOF {
// Send the last chunk with the correct ContentLength
// otherwise Google doesn't know we've finished
rx.ContentLength = start + int64(n)
finished = true
} else if err != nil {
return nil, err
}
reqSize = int64(n)
chunk = bytes.NewReader(buf[:reqSize])
}
chunk := readers.NewRepeatableLimitReaderBuffer(rx.Media, buf, reqSize)
// Transfer the chunk
err = rx.f.pacer.Call(func() (bool, error) {
fs.Debugf(rx.remote, "Sending chunk %d length %d", start, reqSize)
StatusCode, err = rx.transferChunk(ctx, start, chunk, reqSize)
again, err := shouldRetry(err)
again, err := rx.f.shouldRetry(err)
if StatusCode == statusResumeIncomplete || StatusCode == http.StatusCreated || StatusCode == http.StatusOK {
again = false
err = nil


@@ -45,17 +45,15 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
"golang.org/x/oauth2"
)
const enc = encodings.Dropbox
// Constants
const (
rcloneClientID = "5jcck7diasz0rqy"
@@ -147,14 +145,28 @@ memory. It can be set smaller if you are tight on memory.`, maxChunkSize),
Help: "Impersonate this user when using a business account.",
Default: "",
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// https://www.dropbox.com/help/syncing-uploads/files-not-syncing lists / and \
// as invalid characters.
// Testing revealed names with trailing spaces and the DEL character don't work.
// Also encode invalid UTF-8 bytes as json doesn't handle them properly.
Default: (encoder.Base |
encoder.EncodeBackSlash |
encoder.EncodeDel |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Impersonate string `config:"impersonate"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Impersonate string `config:"impersonate"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote dropbox server
@@ -381,7 +393,7 @@ func (f *Fs) setRoot(root string) {
func (f *Fs) getMetadata(objPath string) (entry files.IsMetadata, notFound bool, err error) {
err = f.pacer.Call(func() (bool, error) {
entry, err = f.srv.GetMetadata(&files.GetMetadataArg{
Path: enc.FromStandardPath(objPath),
Path: f.opt.Enc.FromStandardPath(objPath),
})
return shouldRetry(err)
})
@@ -475,7 +487,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
for {
if !started {
arg := files.ListFolderArg{
Path: enc.FromStandardPath(root),
Path: f.opt.Enc.FromStandardPath(root),
Recursive: false,
}
if root == "/" {
@@ -525,7 +537,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// Only the last element is reliably cased in PathDisplay
entryPath := metadata.PathDisplay
leaf := enc.ToStandardName(path.Base(entryPath))
leaf := f.opt.Enc.ToStandardName(path.Base(entryPath))
remote := path.Join(dir, leaf)
if folderInfo != nil {
d := fs.NewDir(remote, time.Now())
@@ -583,7 +595,7 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
// create it
arg2 := files.CreateFolderArg{
Path: enc.FromStandardPath(root),
Path: f.opt.Enc.FromStandardPath(root),
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.srv.CreateFolderV2(&arg2)
@@ -609,7 +621,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return errors.Wrap(err, "Rmdir")
}
root = enc.FromStandardPath(root)
root = f.opt.Enc.FromStandardPath(root)
// check directory empty
arg := files.ListFolderArg{
Path: root,
@@ -668,8 +680,8 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
// Copy
arg := files.RelocationArg{
RelocationPath: files.RelocationPath{
FromPath: enc.FromStandardPath(srcObj.remotePath()),
ToPath: enc.FromStandardPath(dstObj.remotePath()),
FromPath: f.opt.Enc.FromStandardPath(srcObj.remotePath()),
ToPath: f.opt.Enc.FromStandardPath(dstObj.remotePath()),
},
}
var err error
@@ -704,7 +716,7 @@ func (f *Fs) Purge(ctx context.Context) (err error) {
// Let dropbox delete the filesystem tree
err = f.pacer.Call(func() (bool, error) {
_, err = f.srv.DeleteV2(&files.DeleteArg{
Path: enc.FromStandardPath(f.slashRoot),
Path: f.opt.Enc.FromStandardPath(f.slashRoot),
})
return shouldRetry(err)
})
@@ -736,8 +748,8 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
// Do the move
arg := files.RelocationArg{
RelocationPath: files.RelocationPath{
FromPath: enc.FromStandardPath(srcObj.remotePath()),
ToPath: enc.FromStandardPath(dstObj.remotePath()),
FromPath: f.opt.Enc.FromStandardPath(srcObj.remotePath()),
ToPath: f.opt.Enc.FromStandardPath(dstObj.remotePath()),
},
}
var err error
@@ -764,7 +776,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err error) {
absPath := enc.FromStandardPath(path.Join(f.slashRoot, remote))
absPath := f.opt.Enc.FromStandardPath(path.Join(f.slashRoot, remote))
fs.Debugf(f, "attempting to share '%s' (absolute path: %s)", remote, absPath)
createArg := sharing.CreateSharedLinkWithSettingsArg{
Path: absPath,
@@ -840,8 +852,8 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
// Do the move
arg := files.RelocationArg{
RelocationPath: files.RelocationPath{
FromPath: enc.FromStandardPath(srcPath),
ToPath: enc.FromStandardPath(dstPath),
FromPath: f.opt.Enc.FromStandardPath(srcPath),
ToPath: f.opt.Enc.FromStandardPath(dstPath),
},
}
err = f.pacer.Call(func() (bool, error) {
@@ -999,7 +1011,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
fs.FixRangeOption(options, o.bytes)
headers := fs.OpenOptionHeaders(options)
arg := files.DownloadArg{
Path: enc.FromStandardPath(o.remotePath()),
Path: o.fs.opt.Enc.FromStandardPath(o.remotePath()),
ExtraHeaders: headers,
}
err = o.fs.pacer.Call(func() (bool, error) {
@@ -1111,6 +1123,13 @@ func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size
return false, nil
}
entry, err = o.fs.srv.UploadSessionFinish(args, chunk)
// If error is insufficient space then don't retry
if e, ok := err.(files.UploadSessionFinishAPIError); ok {
if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.WriteErrorInsufficientSpace {
err = fserrors.NoRetryError(err)
return false, err
}
}
// after the first chunk is uploaded, we retry everything
return err != nil, err
})
@@ -1130,7 +1149,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if ignoredFiles.MatchString(remote) {
return fserrors.NoRetryError(errors.Errorf("file name %q is disallowed - not uploading", path.Base(remote)))
}
commitInfo := files.NewCommitInfo(enc.FromStandardPath(o.remotePath()))
commitInfo := files.NewCommitInfo(o.fs.opt.Enc.FromStandardPath(o.remotePath()))
commitInfo.Mode.Tag = "overwrite"
// The Dropbox API only accepts timestamps in UTC with second precision.
commitInfo.ClientModified = src.ModTime(ctx).UTC().Round(time.Second)
@@ -1156,7 +1175,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
func (o *Object) Remove(ctx context.Context) (err error) {
err = o.fs.pacer.Call(func() (bool, error) {
_, err = o.fs.srv.DeleteV2(&files.DeleteArg{
Path: enc.FromStandardPath(o.remotePath()),
Path: o.fs.opt.Enc.FromStandardPath(o.remotePath()),
})
return shouldRetry(err)
})


@@ -109,7 +109,7 @@ func (f *Fs) listFiles(ctx context.Context, directoryID int) (filesList *FilesLi
}
for i := range filesList.Items {
item := &filesList.Items[i]
item.Filename = enc.ToStandardName(item.Filename)
item.Filename = f.opt.Enc.ToStandardName(item.Filename)
}
return filesList, nil
@@ -135,10 +135,10 @@ func (f *Fs) listFolders(ctx context.Context, directoryID int) (foldersList *Fol
if err != nil {
return nil, errors.Wrap(err, "couldn't list folders")
}
foldersList.Name = enc.ToStandardName(foldersList.Name)
foldersList.Name = f.opt.Enc.ToStandardName(foldersList.Name)
for i := range foldersList.SubFolders {
folder := &foldersList.SubFolders[i]
folder.Name = enc.ToStandardName(folder.Name)
folder.Name = f.opt.Enc.ToStandardName(folder.Name)
}
// fs.Debugf(f, "Got FoldersList for id `%s`", directoryID)
@@ -213,7 +213,7 @@ func getRemote(dir, fileName string) string {
}
func (f *Fs) makeFolder(ctx context.Context, leaf string, folderID int) (response *MakeFolderResponse, err error) {
name := enc.FromStandardName(leaf)
name := f.opt.Enc.FromStandardName(leaf)
// fs.Debugf(f, "Creating folder `%s` in id `%s`", name, directoryID)
request := MakeFolderRequest{
@@ -323,7 +323,7 @@ func (f *Fs) getUploadNode(ctx context.Context) (response *GetUploadNodeResponse
func (f *Fs) uploadFile(ctx context.Context, in io.Reader, size int64, fileName, folderID, uploadID, node string) (response *http.Response, err error) {
// fs.Debugf(f, "Uploading File `%s`", fileName)
fileName = enc.FromStandardName(fileName)
fileName = f.opt.Enc.FromStandardName(fileName)
if len(uploadID) > 10 || !isAlphaNumeric(uploadID) {
return nil, errors.New("Invalid UploadID")


@@ -11,12 +11,13 @@ import (
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
)
@@ -29,8 +30,6 @@ const (
decayConstant = 2 // bigger for slower decay, exponential
)
const enc = encodings.Fichier
func init() {
fs.Register(&fs.RegInfo{
Name: "fichier",
@@ -38,25 +37,48 @@ func init() {
Config: func(name string, config configmap.Mapper) {
},
NewFs: NewFs,
Options: []fs.Option{
{
Help: "Your API Key, get it from https://1fichier.com/console/params.pl",
Name: "api_key",
},
{
Help: "If you want to download a shared folder, add this parameter",
Name: "shared_folder",
Required: false,
Advanced: true,
},
},
Options: []fs.Option{{
Help: "Your API Key, get it from https://1fichier.com/console/params.pl",
Name: "api_key",
}, {
Help: "If you want to download a shared folder, add this parameter",
Name: "shared_folder",
Required: false,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// Characters that need escaping
//
// '\\': '＼', // FULLWIDTH REVERSE SOLIDUS
// '<': '＜', // FULLWIDTH LESS-THAN SIGN
// '>': '＞', // FULLWIDTH GREATER-THAN SIGN
// '"': '＂', // FULLWIDTH QUOTATION MARK - not on the list but seems to be reserved
// '\'': '＇', // FULLWIDTH APOSTROPHE
// '$': '＄', // FULLWIDTH DOLLAR SIGN
// '`': '｀', // FULLWIDTH GRAVE ACCENT
//
// Leading space and trailing space
Default: (encoder.Display |
encoder.EncodeBackSlash |
encoder.EncodeSingleQuote |
encoder.EncodeBackQuote |
encoder.EncodeDoubleQuote |
encoder.EncodeLtGt |
encoder.EncodeDollar |
encoder.EncodeLeftSpace |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
APIKey string `config:"api_key"`
SharedFolder string `config:"shared_folder"`
APIKey string `config:"api_key"`
SharedFolder string `config:"shared_folder"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs is the interface a cloud storage system must provide
@@ -64,9 +86,9 @@ type Fs struct {
root string
name string
features *fs.Features
opt Options
dirCache *dircache.DirCache
baseClient *http.Client
options *Options
pacer *fs.Pacer
rest *rest.Client
}
@@ -162,7 +184,7 @@ func NewFs(name string, root string, config configmap.Mapper) (fs.Fs, error) {
f := &Fs{
name: name,
root: root,
options: opt,
opt: *opt,
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
baseClient: &http.Client{},
}
@@ -176,7 +198,7 @@ func NewFs(name string, root string, config configmap.Mapper) (fs.Fs, error) {
f.rest = rest.NewClient(client).SetRoot(apiBaseURL)
f.rest.SetHeader("Authorization", "Bearer "+f.options.APIKey)
f.rest.SetHeader("Authorization", "Bearer "+f.opt.APIKey)
f.dirCache = dircache.New(root, rootID, f)
@@ -226,8 +248,8 @@ func NewFs(name string, root string, config configmap.Mapper) (fs.Fs, error) {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
if f.options.SharedFolder != "" {
return f.listSharedFiles(ctx, f.options.SharedFolder)
if f.opt.SharedFolder != "" {
return f.listSharedFiles(ctx, f.opt.SharedFolder)
}
dirContent, err := f.listDir(ctx, dir)


@@ -14,77 +14,86 @@ import (
"github.com/jlaffaye/ftp"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
)
const enc = encodings.FTP
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "ftp",
Description: "FTP Connection",
NewFs: NewFs,
Options: []fs.Option{
{
Name: "host",
Help: "FTP host to connect to",
Required: true,
Examples: []fs.OptionExample{{
Value: "ftp.example.com",
Help: "Connect to ftp.example.com",
}},
}, {
Name: "user",
Help: "FTP username, leave blank for current username, " + os.Getenv("USER"),
}, {
Name: "port",
Help: "FTP port, leave blank to use default (21)",
}, {
Name: "pass",
Help: "FTP password",
IsPassword: true,
Required: true,
}, {
Name: "tls",
Help: "Use FTP over TLS (Implicit)",
Default: false,
}, {
Name: "concurrency",
Help: "Maximum number of FTP simultaneous connections, 0 for unlimited",
Default: 0,
Advanced: true,
}, {
Name: "no_check_certificate",
Help: "Do not verify the TLS certificate of the server",
Default: false,
Advanced: true,
}, {
Name: "disable_epsv",
Help: "Disable using EPSV even if server advertises support",
Default: false,
Advanced: true,
},
},
Options: []fs.Option{{
Name: "host",
Help: "FTP host to connect to",
Required: true,
Examples: []fs.OptionExample{{
Value: "ftp.example.com",
Help: "Connect to ftp.example.com",
}},
}, {
Name: "user",
Help: "FTP username, leave blank for current username, " + os.Getenv("USER"),
}, {
Name: "port",
Help: "FTP port, leave blank to use default (21)",
}, {
Name: "pass",
Help: "FTP password",
IsPassword: true,
Required: true,
}, {
Name: "tls",
Help: "Use FTP over TLS (Implicit)",
Default: false,
}, {
Name: "concurrency",
Help: "Maximum number of FTP simultaneous connections, 0 for unlimited",
Default: 0,
Advanced: true,
}, {
Name: "no_check_certificate",
Help: "Do not verify the TLS certificate of the server",
Default: false,
Advanced: true,
}, {
Name: "disable_epsv",
Help: "Disable using EPSV even if server advertises support",
Default: false,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// The FTP protocol can't handle trailing spaces (for instance
// pureftpd turns them into _)
//
// proftpd can't handle '*' in file names
// pureftpd can't handle '[', ']' or '*'
Default: (encoder.Display |
encoder.EncodeRightSpace),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
TLS bool `config:"tls"`
Concurrency int `config:"concurrency"`
SkipVerifyTLSCert bool `config:"no_check_certificate"`
DisableEPSV bool `config:"disable_epsv"`
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
TLS bool `config:"tls"`
Concurrency int `config:"concurrency"`
SkipVerifyTLSCert bool `config:"no_check_certificate"`
DisableEPSV bool `config:"disable_epsv"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote FTP server
@@ -181,7 +190,11 @@ func (f *Fs) getFtpConnection() (c *ftp.ServerConn, err error) {
if c != nil {
return c, nil
}
return f.ftpConnection()
c, err = f.ftpConnection()
if err != nil && f.opt.Concurrency > 0 {
f.tokens.Put()
}
return c, err
}
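The change above hands the concurrency token back whenever the dial fails; without it each failed connection permanently shrinks the pool and rclone eventually blocks in Get(). A sketch of the acquire/release discipline, where withToken is a hypothetical helper written only to show the shape (f.tokens is the backend's TokenDispenser, created with pacer.NewTokenDispenser(opt.Concurrency)):

func (f *Fs) withToken(dial func() (*ftp.ServerConn, error)) (*ftp.ServerConn, error) {
	if f.opt.Concurrency > 0 {
		f.tokens.Get() // blocks until a connection slot is free
	}
	c, err := dial()
	if err != nil && f.opt.Concurrency > 0 {
		f.tokens.Put() // hand the slot back, or every failure shrinks the pool
	}
	return c, err
}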
// Return an FTP connection to the pool
@@ -194,7 +207,13 @@ func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
if f.opt.Concurrency > 0 {
defer f.tokens.Put()
}
if pc == nil {
return
}
c := *pc
if c == nil {
return
}
*pc = nil
if err != nil {
// If not a regular FTP error code then check the connection
@@ -308,22 +327,22 @@ func translateErrorDir(err error) error {
}
// entryToStandard converts an incoming ftp.Entry to Standard encoding
func entryToStandard(entry *ftp.Entry) {
func (f *Fs) entryToStandard(entry *ftp.Entry) {
// Skip . and .. as we don't want these encoded
if entry.Name == "." || entry.Name == ".." {
return
}
entry.Name = enc.ToStandardName(entry.Name)
entry.Target = enc.ToStandardPath(entry.Target)
entry.Name = f.opt.Enc.ToStandardName(entry.Name)
entry.Target = f.opt.Enc.ToStandardPath(entry.Target)
}
// dirFromStandardPath returns dir in encoded form.
func dirFromStandardPath(dir string) string {
func (f *Fs) dirFromStandardPath(dir string) string {
// Skip . and .. as we don't want these encoded
if dir == "." || dir == ".." {
return dir
}
return enc.FromStandardPath(dir)
return f.opt.Enc.FromStandardPath(dir)
}
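With entryToStandard and dirFromStandardPath now methods on *Fs, the encoding comes from f.opt.Enc, so each configured remote can override it. A self-contained sketch of the round trip using the FTP default from the table above:

package main

import (
	"fmt"

	"github.com/rclone/rclone/lib/encoder"
)

func main() {
	enc := encoder.Display | encoder.EncodeRightSpace // the FTP default above
	wire := enc.FromStandardName("file ")             // the trailing space gets encoded
	fmt.Printf("%q -> %q, round-trips: %v\n", "file ", wire, enc.ToStandardName(wire) == "file ")
}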
// findItem finds a directory entry for the name in its parent directory
@@ -345,13 +364,13 @@ func (f *Fs) findItem(remote string) (entry *ftp.Entry, err error) {
if err != nil {
return nil, errors.Wrap(err, "findItem")
}
files, err := c.List(dirFromStandardPath(dir))
files, err := c.List(f.dirFromStandardPath(dir))
f.putFtpConnection(&c, err)
if err != nil {
return nil, translateErrorFile(err)
}
for _, file := range files {
entryToStandard(file)
f.entryToStandard(file)
if file.Name == base {
return file, nil
}
@@ -418,7 +437,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
resultchan := make(chan []*ftp.Entry, 1)
errchan := make(chan error, 1)
go func() {
result, err := c.List(dirFromStandardPath(path.Join(f.root, dir)))
result, err := c.List(f.dirFromStandardPath(path.Join(f.root, dir)))
f.putFtpConnection(&c, err)
if err != nil {
errchan <- err
@@ -455,7 +474,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
for i := range files {
object := files[i]
entryToStandard(object)
f.entryToStandard(object)
newremote := path.Join(dir, object.Name)
switch object.Type {
case ftp.EntryTypeFolder:
@@ -525,7 +544,7 @@ func (f *Fs) getInfo(remote string) (fi *FileInfo, err error) {
if err != nil {
return nil, errors.Wrap(err, "getInfo")
}
files, err := c.List(dirFromStandardPath(dir))
files, err := c.List(f.dirFromStandardPath(dir))
f.putFtpConnection(&c, err)
if err != nil {
return nil, translateErrorFile(err)
@@ -533,7 +552,7 @@ func (f *Fs) getInfo(remote string) (fi *FileInfo, err error) {
for i := range files {
file := files[i]
entryToStandard(file)
f.entryToStandard(file)
if file.Name == base {
info := &FileInfo{
Name: remote,
@@ -571,7 +590,7 @@ func (f *Fs) mkdir(abspath string) error {
if connErr != nil {
return errors.Wrap(connErr, "mkdir")
}
err = c.MakeDir(dirFromStandardPath(abspath))
err = c.MakeDir(f.dirFromStandardPath(abspath))
f.putFtpConnection(&c, err)
switch errX := err.(type) {
case *textproto.Error:
@@ -607,7 +626,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
if err != nil {
return errors.Wrap(translateErrorFile(err), "Rmdir")
}
err = c.RemoveDir(dirFromStandardPath(path.Join(f.root, dir)))
err = c.RemoveDir(f.dirFromStandardPath(path.Join(f.root, dir)))
f.putFtpConnection(&c, err)
return translateErrorDir(err)
}
@@ -628,8 +647,8 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, errors.Wrap(err, "Move")
}
err = c.Rename(
enc.FromStandardPath(path.Join(srcObj.fs.root, srcObj.remote)),
enc.FromStandardPath(path.Join(f.root, remote)),
f.opt.Enc.FromStandardPath(path.Join(srcObj.fs.root, srcObj.remote)),
f.opt.Enc.FromStandardPath(path.Join(f.root, remote)),
)
f.putFtpConnection(&c, err)
if err != nil {
@@ -682,8 +701,8 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return errors.Wrap(err, "DirMove")
}
err = c.Rename(
dirFromStandardPath(srcPath),
dirFromStandardPath(dstPath),
f.dirFromStandardPath(srcPath),
f.dirFromStandardPath(dstPath),
)
f.putFtpConnection(&c, err)
if err != nil {
@@ -769,11 +788,13 @@ func (f *ftpReadCloser) Close() error {
case <-timer.C:
// if timer fired assume no error but connection dead
fs.Errorf(f.f, "Timeout when waiting for connection Close")
f.f.putFtpConnection(nil, nil)
return nil
}
// if errors while reading or closing, dump the connection
if err != nil || f.err != nil {
_ = f.c.Quit()
f.f.putFtpConnection(nil, nil)
} else {
f.f.putFtpConnection(&f.c, nil)
}
@@ -809,7 +830,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.Read
if err != nil {
return nil, errors.Wrap(err, "open")
}
fd, err := c.RetrFrom(enc.FromStandardPath(path), uint64(offset))
fd, err := c.RetrFrom(o.fs.opt.Enc.FromStandardPath(path), uint64(offset))
if err != nil {
o.fs.putFtpConnection(&c, err)
return nil, errors.Wrap(err, "open")
@@ -844,10 +865,11 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err != nil {
return errors.Wrap(err, "Update")
}
err = c.Stor(enc.FromStandardPath(path), in)
err = c.Stor(o.fs.opt.Enc.FromStandardPath(path), in)
if err != nil {
_ = c.Quit() // toss this connection to avoid sync errors
remove()
o.fs.putFtpConnection(nil, err)
return errors.Wrap(err, "update stor")
}
o.fs.putFtpConnection(&c, nil)
@@ -874,7 +896,7 @@ func (o *Object) Remove(ctx context.Context) (err error) {
if err != nil {
return errors.Wrap(err, "Remove")
}
err = c.Delete(enc.FromStandardPath(path))
err = c.Delete(o.fs.opt.Enc.FromStandardPath(path))
o.fs.putFtpConnection(&c, err)
}
return err

View File

@@ -5,13 +5,44 @@ import (
"testing"
"github.com/rclone/rclone/backend/ftp"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFTP:",
RemoteName: "TestFTPProftpd:",
NilObject: (*ftp.Object)(nil),
})
}
func TestIntegration2(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFTPRclone:",
NilObject: (*ftp.Object)(nil),
})
}
func TestIntegration3(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFTPPureftpd:",
NilObject: (*ftp.Object)(nil),
})
}
// func TestIntegration4(t *testing.T) {
// if *fstest.RemoteName != "" {
// t.Skip("skipping as -remote is set")
// }
// fstests.Run(t, &fstests.Opt{
// RemoteName: "TestFTPVsftpd:",
// NilObject: (*ftp.Object)(nil),
// })
// }

View File

@@ -32,12 +32,12 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"golang.org/x/oauth2"
@@ -69,8 +69,6 @@ var (
}
)
const enc = encodings.GoogleCloudStorage
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -248,20 +246,28 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
Value: "DURABLE_REDUCED_AVAILABILITY",
Help: "Durable reduced availability storage class",
}},
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.Base |
encoder.EncodeCrLf |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
ProjectNumber string `config:"project_number"`
ServiceAccountFile string `config:"service_account_file"`
ServiceAccountCredentials string `config:"service_account_credentials"`
ObjectACL string `config:"object_acl"`
BucketACL string `config:"bucket_acl"`
BucketPolicyOnly bool `config:"bucket_policy_only"`
Location string `config:"location"`
StorageClass string `config:"storage_class"`
ProjectNumber string `config:"project_number"`
ServiceAccountFile string `config:"service_account_file"`
ServiceAccountCredentials string `config:"service_account_credentials"`
ObjectACL string `config:"object_acl"`
BucketACL string `config:"bucket_acl"`
BucketPolicyOnly bool `config:"bucket_policy_only"`
Location string `config:"location"`
StorageClass string `config:"storage_class"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote storage server
@@ -353,7 +359,7 @@ func parsePath(path string) (root string) {
// relative to f.root
func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
bucketName, bucketPath = bucket.Split(path.Join(f.root, rootRelativePath))
return enc.FromStandardName(bucketName), enc.FromStandardPath(bucketPath)
return f.opt.Enc.FromStandardName(bucketName), f.opt.Enc.FromStandardPath(bucketPath)
}
// split returns bucket and bucketPath from the object
@@ -442,7 +448,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if f.rootBucket != "" && f.rootDirectory != "" {
// Check to see if the object exists
encodedDirectory := enc.FromStandardPath(f.rootDirectory)
encodedDirectory := f.opt.Enc.FromStandardPath(f.rootDirectory)
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Objects.Get(f.rootBucket, encodedDirectory).Context(ctx).Do()
return shouldRetry(err)
@@ -527,7 +533,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
if !strings.HasSuffix(remote, "/") {
continue
}
remote = enc.ToStandardPath(remote)
remote = f.opt.Enc.ToStandardPath(remote)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", remote)
continue
@@ -543,7 +549,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
}
}
for _, object := range objects.Items {
remote := enc.ToStandardPath(object.Name)
remote := f.opt.Enc.ToStandardPath(object.Name)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", object.Name)
continue
@@ -620,7 +626,7 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
return nil, err
}
for _, bucket := range buckets.Items {
d := fs.NewDir(enc.ToStandardName(bucket.Name), time.Time{})
d := fs.NewDir(f.opt.Enc.ToStandardName(bucket.Name), time.Time{})
entries = append(entries, d)
}
if buckets.NextPageToken == "" {

View File

@@ -12,6 +12,7 @@ import (
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
@@ -26,17 +27,6 @@ const (
fileNameUpload = "rclone-test-image2.jpg"
)
// Wrapper to override the remote for an object
type overrideRemoteObject struct {
fs.Object
remote string
}
// Remote returns the overridden remote name
func (o *overrideRemoteObject) Remote() string {
return o.remote
}
func TestIntegration(t *testing.T) {
ctx := context.Background()
fstest.Initialise()
@@ -66,7 +56,7 @@ func TestIntegration(t *testing.T) {
require.NoError(t, err)
in, err := srcObj.Open(ctx)
require.NoError(t, err)
dstObj, err := f.Put(ctx, in, &overrideRemoteObject{srcObj, remote})
dstObj, err := f.Put(ctx, in, operations.NewOverrideRemote(srcObj, remote))
require.NoError(t, err)
assert.Equal(t, remote, dstObj.Remote())
_ = in.Close()
@@ -231,7 +221,7 @@ func TestIntegration(t *testing.T) {
require.NoError(t, err)
in, err := srcObj.Open(ctx)
require.NoError(t, err)
dstObj, err := f.Put(ctx, in, &overrideRemoteObject{srcObj, remote})
dstObj, err := f.Put(ctx, in, operations.NewOverrideRemote(srcObj, remote))
require.NoError(t, err)
assert.Equal(t, remote, dstObj.Remote())
_ = in.Close()

View File

@@ -54,6 +54,37 @@ type LoginToken struct {
AuthToken string `json:"auth_token"`
}
// WellKnown contains some configuration parameters for setting up endpoints
type WellKnown struct {
Issuer string `json:"issuer"`
AuthorizationEndpoint string `json:"authorization_endpoint"`
TokenEndpoint string `json:"token_endpoint"`
TokenIntrospectionEndpoint string `json:"token_introspection_endpoint"`
UserinfoEndpoint string `json:"userinfo_endpoint"`
EndSessionEndpoint string `json:"end_session_endpoint"`
JwksURI string `json:"jwks_uri"`
CheckSessionIframe string `json:"check_session_iframe"`
GrantTypesSupported []string `json:"grant_types_supported"`
ResponseTypesSupported []string `json:"response_types_supported"`
SubjectTypesSupported []string `json:"subject_types_supported"`
IDTokenSigningAlgValuesSupported []string `json:"id_token_signing_alg_values_supported"`
UserinfoSigningAlgValuesSupported []string `json:"userinfo_signing_alg_values_supported"`
RequestObjectSigningAlgValuesSupported []string `json:"request_object_signing_alg_values_supported"`
ResponseNodesSupported []string `json:"response_modes_supported"`
RegistrationEndpoint string `json:"registration_endpoint"`
TokenEndpointAuthMethodsSupported []string `json:"token_endpoint_auth_methods_supported"`
TokenEndpointAuthSigningAlgValuesSupported []string `json:"token_endpoint_auth_signing_alg_values_supported"`
ClaimsSupported []string `json:"claims_supported"`
ClaimTypesSupported []string `json:"claim_types_supported"`
ClaimsParameterSupported bool `json:"claims_parameter_supported"`
ScopesSupported []string `json:"scopes_supported"`
RequestParameterSupported bool `json:"request_parameter_supported"`
RequestURIParameterSupported bool `json:"request_uri_parameter_supported"`
CodeChallengeMethodsSupported []string `json:"code_challenge_methods_supported"`
TLSClientCertificateBoundAccessTokens bool `json:"tls_client_certificate_bound_access_tokens"`
IntrospectionEndpoint string `json:"introspection_endpoint"`
}
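WellKnown mirrors a standard OpenID Connect discovery document; the only field the backend consumes is TokenEndpoint. A minimal sketch of fetching one outside the rest client, assuming wellKnownURL holds the WellKnownLink carried in the login token (imports net/http and encoding/json):

resp, err := http.Get(wellKnownURL) // e.g. <issuer>/.well-known/openid-configuration
if err != nil {
	return err
}
defer resp.Body.Close()
var wk api.WellKnown
if err := json.NewDecoder(resp.Body).Decode(&wk); err != nil {
	return err
}
fmt.Println("token endpoint:", wk.TokenEndpoint)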
// TokenJSON is the struct representing the HTTP response from OAuth2
// providers returning a token in JSON form.
type TokenJSON struct {

View File

@@ -26,19 +26,17 @@ import (
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
"golang.org/x/oauth2"
)
const enc = encodings.JottaCloud
// Globals
const (
minSleep = 10 * time.Millisecond
@@ -49,10 +47,11 @@ const (
rootURL = "https://www.jottacloud.com/jfs/"
apiURL = "https://api.jottacloud.com/"
baseURL = "https://www.jottacloud.com/"
tokenURL = "https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token"
defaultTokenURL = "https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token"
cachePrefix = "rclone-jcmd5-"
configDevice = "device"
configMountpoint = "mountpoint"
configTokenURL = "tokenURL"
configVersion = 1
)
@@ -61,8 +60,8 @@ var (
oauthConfig = &oauth2.Config{
ClientID: "jottacli",
Endpoint: oauth2.Endpoint{
AuthURL: tokenURL,
TokenURL: tokenURL,
AuthURL: defaultTokenURL,
TokenURL: defaultTokenURL,
},
RedirectURL: oauthutil.RedirectLocalhostURL,
}
@@ -85,8 +84,6 @@ func init() {
log.Fatalf("Failed to parse config version - corrupted config")
}
refresh = ver != configVersion
} else {
refresh = true
}
if refresh {
@@ -109,7 +106,7 @@ func init() {
fmt.Printf("Login Token> ")
loginToken := config.ReadLine()
token, err := doAuth(ctx, srv, loginToken)
token, err := doAuth(ctx, srv, loginToken, m)
if err != nil {
log.Fatalf("Failed to get oauth token: %s", err)
}
@@ -158,18 +155,29 @@ func init() {
Help: "Files bigger than this can be resumed if the upload fail's.",
Default: fs.SizeSuffix(10 * 1024 * 1024),
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// Encode invalid UTF-8 bytes as xml doesn't handle them properly.
//
// Also: '*', '/', ':', '<', '>', '?', '\"', '\x00', '|'
Default: (encoder.Display |
encoder.EncodeWin | // :?"*<>|
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Device string `config:"device"`
Mountpoint string `config:"mountpoint"`
MD5MemoryThreshold fs.SizeSuffix `config:"md5_memory_limit"`
HardDelete bool `config:"hard_delete"`
Unlink bool `config:"unlink"`
UploadThreshold fs.SizeSuffix `config:"upload_resume_limit"`
Device string `config:"device"`
Mountpoint string `config:"mountpoint"`
MD5MemoryThreshold fs.SizeSuffix `config:"md5_memory_limit"`
HardDelete bool `config:"hard_delete"`
Unlink bool `config:"unlink"`
UploadThreshold fs.SizeSuffix `config:"upload_resume_limit"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote jottacloud
@@ -244,12 +252,13 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
}
// doAuth runs the actual token request
func doAuth(ctx context.Context, srv *rest.Client, loginTokenBase64 string) (token oauth2.Token, err error) {
loginTokenBytes, err := base64.StdEncoding.DecodeString(loginTokenBase64)
func doAuth(ctx context.Context, srv *rest.Client, loginTokenBase64 string, m configmap.Mapper) (token oauth2.Token, err error) {
loginTokenBytes, err := base64.RawURLEncoding.DecodeString(loginTokenBase64)
if err != nil {
return token, err
}
// decode login token
var loginToken api.LoginToken
decoder := json.NewDecoder(bytes.NewReader(loginTokenBytes))
err = decoder.Decode(&loginToken)
@@ -257,17 +266,22 @@ func doAuth(ctx context.Context, srv *rest.Client, loginTokenBase64 string) (tok
return token, err
}
// we don't seem to need any data from this link but the API is not happy if we skip it
// retrieve endpoint urls
opts := rest.Opts{
Method: "GET",
RootURL: loginToken.WellKnownLink,
NoResponse: true,
Method: "GET",
RootURL: loginToken.WellKnownLink,
}
_, err = srv.Call(ctx, &opts)
var wellKnown api.WellKnown
_, err = srv.CallJSON(ctx, &opts, nil, &wellKnown)
if err != nil {
return token, err
}
// save the tokenurl
oauthConfig.Endpoint.AuthURL = wellKnown.TokenEndpoint
oauthConfig.Endpoint.TokenURL = wellKnown.TokenEndpoint
m.Set(configTokenURL, wellKnown.TokenEndpoint)
// prepare our token request with username and password
values := url.Values{}
values.Set("client_id", "jottacli")
@@ -441,7 +455,7 @@ func urlPathEscape(in string) string {
// filePathRaw returns an unescaped file path (f.root, file)
func (f *Fs) filePathRaw(file string) string {
return path.Join(f.endpointURL, enc.FromStandardPath(path.Join(f.root, file)))
return path.Join(f.endpointURL, f.opt.Enc.FromStandardPath(path.Join(f.root, file)))
}
// filePath returns a escaped file path (f.root, file)
@@ -459,6 +473,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return nil, err
}
// Check config version
var ok bool
var version string
if version, ok = m.Get("configVersion"); ok {
@@ -472,15 +487,23 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return nil, errors.New("Outdated config - please reconfigure this backend")
}
rootIsDir := strings.HasSuffix(root, "/")
root = parsePath(root)
// if custom endpoints are set use them, else stick with the defaults
if tokenURL, ok := m.Get(configTokenURL); ok {
oauthConfig.Endpoint.TokenURL = tokenURL
// jottacloud is weird. we need to use the tokenURL as authURL
oauthConfig.Endpoint.AuthURL = tokenURL
}
// Create OAuth Client
baseClient := fshttp.NewClient(fs.Config)
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, oauthConfig, baseClient)
if err != nil {
return nil, errors.Wrap(err, "Failed to configure Jottacloud oauth client")
}
rootIsDir := strings.HasSuffix(root, "/")
root = parsePath(root)
f := &Fs{
name: name,
root: root,
@@ -624,7 +647,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if item.Deleted {
continue
}
remote := path.Join(dir, enc.ToStandardName(item.Name))
remote := path.Join(dir, f.opt.Enc.ToStandardName(item.Name))
d := fs.NewDir(remote, time.Time(item.ModifiedAt))
entries = append(entries, d)
}
@@ -634,7 +657,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if item.Deleted || item.State != "COMPLETED" {
continue
}
remote := path.Join(dir, enc.ToStandardName(item.Name))
remote := path.Join(dir, f.opt.Enc.ToStandardName(item.Name))
o, err := f.newObjectWithInfo(ctx, remote, item)
if err != nil {
continue
@@ -659,7 +682,7 @@ func (f *Fs) listFileDir(ctx context.Context, remoteStartPath string, startFolde
if folder.Deleted {
return nil
}
folderPath := enc.ToStandardPath(path.Join(folder.Path, folder.Name))
folderPath := f.opt.Enc.ToStandardPath(path.Join(folder.Path, folder.Name))
folderPathLength := len(folderPath)
var remoteDir string
if folderPathLength > pathPrefixLength {
@@ -677,7 +700,7 @@ func (f *Fs) listFileDir(ctx context.Context, remoteStartPath string, startFolde
if file.Deleted || file.State != "COMPLETED" {
continue
}
remoteFile := path.Join(remoteDir, enc.ToStandardName(file.Name))
remoteFile := path.Join(remoteDir, f.opt.Enc.ToStandardName(file.Name))
o, err := f.newObjectWithInfo(ctx, remoteFile, file)
if err != nil {
return err
@@ -848,7 +871,7 @@ func (f *Fs) copyOrMove(ctx context.Context, method, src, dest string) (info *ap
Parameters: url.Values{},
}
opts.Parameters.Set(method, "/"+path.Join(f.endpointURL, enc.FromStandardPath(path.Join(f.root, dest))))
opts.Parameters.Set(method, "/"+path.Join(f.endpointURL, f.opt.Enc.FromStandardPath(path.Join(f.root, dest))))
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
@@ -955,7 +978,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return fs.ErrorDirExists
}
_, err = f.copyOrMove(ctx, "mvDir", path.Join(f.endpointURL, enc.FromStandardPath(srcPath))+"/", dstRemote)
_, err = f.copyOrMove(ctx, "mvDir", path.Join(f.endpointURL, f.opt.Enc.FromStandardPath(srcPath))+"/", dstRemote)
if err != nil {
return errors.Wrap(err, "couldn't move directory")
@@ -1246,7 +1269,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
Created: fileDate,
Modified: fileDate,
Md5: md5String,
Path: path.Join(o.fs.opt.Mountpoint, enc.FromStandardPath(path.Join(o.fs.root, o.remote))),
Path: path.Join(o.fs.opt.Mountpoint, o.fs.opt.Enc.FromStandardPath(path.Join(o.fs.root, o.remote))),
}
// send it

View File

@@ -12,65 +12,71 @@ import (
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
httpclient "github.com/koofr/go-httpclient"
koofrclient "github.com/koofr/go-koofrclient"
)
const enc = encodings.Koofr
// Register Fs with rclone
func init() {
fs.Register(&fs.RegInfo{
Name: "koofr",
Description: "Koofr",
NewFs: NewFs,
Options: []fs.Option{
{
Name: "endpoint",
Help: "The Koofr API endpoint to use",
Default: "https://app.koofr.net",
Required: true,
Advanced: true,
}, {
Name: "mountid",
Help: "Mount ID of the mount to use. If omitted, the primary mount is used.",
Required: false,
Default: "",
Advanced: true,
}, {
Name: "setmtime",
Help: "Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend.",
Default: true,
Required: true,
Advanced: true,
}, {
Name: "user",
Help: "Your Koofr user name",
Required: true,
}, {
Name: "password",
Help: "Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)",
IsPassword: true,
Required: true,
},
},
Options: []fs.Option{{
Name: "endpoint",
Help: "The Koofr API endpoint to use",
Default: "https://app.koofr.net",
Required: true,
Advanced: true,
}, {
Name: "mountid",
Help: "Mount ID of the mount to use. If omitted, the primary mount is used.",
Required: false,
Default: "",
Advanced: true,
}, {
Name: "setmtime",
Help: "Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend.",
Default: true,
Required: true,
Advanced: true,
}, {
Name: "user",
Help: "Your Koofr user name",
Required: true,
}, {
Name: "password",
Help: "Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)",
IsPassword: true,
Required: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// Encode invalid UTF-8 bytes as json doesn't handle them properly.
Default: (encoder.Display |
encoder.EncodeBackSlash |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options represent the configuration of the Koofr backend
type Options struct {
Endpoint string `config:"endpoint"`
MountID string `config:"mountid"`
User string `config:"user"`
Password string `config:"password"`
SetMTime bool `config:"setmtime"`
Endpoint string `config:"endpoint"`
MountID string `config:"mountid"`
User string `config:"user"`
Password string `config:"password"`
SetMTime bool `config:"setmtime"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// A Fs is a representation of a remote Koofr Fs
@@ -246,7 +252,7 @@ func (f *Fs) Hashes() hash.Set {
// fullPath constructs a full, absolute path from a Fs root relative path,
func (f *Fs) fullPath(part string) string {
return enc.FromStandardPath(path.Join("/", f.root, part))
return f.opt.Enc.FromStandardPath(path.Join("/", f.root, part))
}
// NewFs constructs a new filesystem given a root path and configuration options
@@ -299,7 +305,7 @@ func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
}
return nil, errors.New("Failed to find mount " + opt.MountID)
}
rootFile, err := f.client.FilesInfo(f.mountID, enc.FromStandardPath("/"+f.root))
rootFile, err := f.client.FilesInfo(f.mountID, f.opt.Enc.FromStandardPath("/"+f.root))
if err == nil && rootFile.Type != "dir" {
f.root = dir(f.root)
err = fs.ErrorIsFile
@@ -317,7 +323,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
entries = make([]fs.DirEntry, len(files))
for i, file := range files {
remote := path.Join(dir, enc.ToStandardName(file.Name))
remote := path.Join(dir, f.opt.Enc.ToStandardName(file.Name))
if file.Type == "dir" {
entries[i] = fs.NewDir(remote, time.Unix(0, 0))
} else {

View File

@@ -2,8 +2,10 @@
package local
import (
"github.com/rclone/rclone/fs/encodings"
)
import "github.com/rclone/rclone/lib/encoder"
const enc = encodings.LocalMacOS
// This is the encoding used by the local backend for macOS
//
// macOS can't store invalid UTF-8, it converts them into %XX encoding
const defaultEnc = (encoder.Base |
encoder.EncodeInvalidUtf8)

View File

@@ -2,8 +2,7 @@
package local
import (
"github.com/rclone/rclone/fs/encodings"
)
import "github.com/rclone/rclone/lib/encoder"
const enc = encodings.LocalUnix
// This is the encoding used by the local backend for non windows platforms
const defaultEnc = encoder.Base

View File

@@ -2,8 +2,32 @@
package local
import (
"github.com/rclone/rclone/fs/encodings"
)
import "github.com/rclone/rclone/lib/encoder"
const enc = encodings.LocalWindows
// This is the encoding used by the local backend for windows platforms
//
// List of replaced characters:
// < (less than) -> '＜' // FULLWIDTH LESS-THAN SIGN
// > (greater than) -> '＞' // FULLWIDTH GREATER-THAN SIGN
// : (colon) -> '：' // FULLWIDTH COLON
// " (double quote) -> '＂' // FULLWIDTH QUOTATION MARK
// \ (backslash) -> '＼' // FULLWIDTH REVERSE SOLIDUS
// | (vertical line) -> '｜' // FULLWIDTH VERTICAL LINE
// ? (question mark) -> '？' // FULLWIDTH QUESTION MARK
// * (asterisk) -> '＊' // FULLWIDTH ASTERISK
//
// Additionally names can't end with a period (.) or space ( ).
// List of replaced characters:
// . (period) -> '．' // FULLWIDTH FULL STOP
// (space) -> '␠' // SYMBOL FOR SPACE
//
// Also encode invalid UTF-8 bytes as Go can't convert them to UTF-16.
//
// https://docs.microsoft.com/de-de/windows/desktop/FileIO/naming-a-file#naming-conventions
const defaultEnc = (encoder.Base |
encoder.EncodeWin |
encoder.EncodeBackSlash |
encoder.EncodeCtl |
encoder.EncodeRightSpace |
encoder.EncodeRightPeriod |
encoder.EncodeInvalidUtf8)
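A quick sketch of what defaultEnc does in practice; this is hypothetical helper code, not part of the backend. Windows-reserved characters and a trailing period come back as the fullwidth forms listed above, and the mapping is reversible:

name := `report:final?.`
fmt.Println(defaultEnc.FromStandardName(name))                                    // e.g. report：final？．
fmt.Println(defaultEnc.ToStandardName(defaultEnc.FromStandardName(name)) == name) // true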

View File

@@ -20,10 +20,12 @@ import (
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/file"
"github.com/rclone/rclone/lib/readers"
)
@@ -115,6 +117,11 @@ Windows/macOS and case sensitive for everything else. Use this flag
to override the default choice.`,
Default: false,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: defaultEnc,
}},
}
fs.Register(fsi)
@@ -122,15 +129,16 @@ to override the default choice.`,
// Options defines the configuration for this backend
type Options struct {
FollowSymlinks bool `config:"copy_links"`
TranslateSymlinks bool `config:"links"`
SkipSymlinks bool `config:"skip_links"`
NoUTFNorm bool `config:"no_unicode_normalization"`
NoCheckUpdated bool `config:"no_check_updated"`
NoUNC bool `config:"nounc"`
OneFileSystem bool `config:"one_file_system"`
CaseSensitive bool `config:"case_sensitive"`
CaseInsensitive bool `config:"case_insensitive"`
FollowSymlinks bool `config:"copy_links"`
TranslateSymlinks bool `config:"links"`
SkipSymlinks bool `config:"skip_links"`
NoUTFNorm bool `config:"no_unicode_normalization"`
NoCheckUpdated bool `config:"no_check_updated"`
NoUNC bool `config:"nounc"`
OneFileSystem bool `config:"one_file_system"`
CaseSensitive bool `config:"case_sensitive"`
CaseInsensitive bool `config:"case_insensitive"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a local filesystem rooted at root
@@ -189,7 +197,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
dev: devUnset,
lstat: os.Lstat,
}
f.root = cleanRootPath(root, f.opt.NoUNC)
f.root = cleanRootPath(root, f.opt.NoUNC, f.opt.Enc)
f.features = (&fs.Features{
CaseInsensitive: f.caseInsensitive(),
CanHaveEmptyDirectories: true,
@@ -234,7 +242,7 @@ func (f *Fs) Name() string {
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return enc.ToStandardPath(filepath.ToSlash(f.root))
return f.opt.Enc.ToStandardPath(filepath.ToSlash(f.root))
}
// String converts this Fs to a string
@@ -443,7 +451,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
func (f *Fs) cleanRemote(dir, filename string) (remote string) {
remote = path.Join(dir, enc.ToStandardName(filename))
remote = path.Join(dir, f.opt.Enc.ToStandardName(filename))
if !utf8.ValidString(filename) {
f.warnedMu.Lock()
@@ -457,7 +465,7 @@ func (f *Fs) cleanRemote(dir, filename string) (remote string) {
}
func (f *Fs) localPath(name string) string {
return filepath.Join(f.root, filepath.FromSlash(enc.FromStandardPath(name)))
return filepath.Join(f.root, filepath.FromSlash(f.opt.Enc.FromStandardPath(name)))
}
// Put the Object to the local filesystem
@@ -956,7 +964,17 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if !o.translatedLink {
f, err := file.OpenFile(o.path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
if err != nil {
return err
if runtime.GOOS == "windows" && os.IsPermission(err) {
// If permission denied on Windows might be trying to update a
// hidden file, in which case try opening without CREATE
// See: https://stackoverflow.com/questions/13215716/ioerror-errno-13-permission-denied-when-trying-to-open-hidden-file-in-w-mod
f, err = file.OpenFile(o.path, os.O_WRONLY|os.O_TRUNC, 0666)
if err != nil {
return err
}
} else {
return err
}
}
// Pre-allocate the file for performance reasons
err = preAllocate(src.Size(), f)
@@ -1082,7 +1100,7 @@ func (o *Object) Remove(ctx context.Context) error {
return remove(o.path)
}
func cleanRootPath(s string, noUNC bool) string {
func cleanRootPath(s string, noUNC bool, enc encoder.MultiEncoder) string {
if runtime.GOOS == "windows" {
if !filepath.IsAbs(s) && !strings.HasPrefix(s, "\\") {
s2, err := filepath.Abs(s)

View File

@@ -64,7 +64,7 @@ func TestCleanWindows(t *testing.T) {
t.Skipf("windows only")
}
for _, test := range testsWindows {
got := cleanRootPath(test[0], true)
got := cleanRootPath(test[0], true, defaultEnc)
expect := test[1]
if got != expect {
t.Fatalf("got %q, expected %q", got, expect)

View File

@@ -24,16 +24,17 @@ import (
"github.com/rclone/rclone/backend/mailru/mrhash"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
@@ -42,8 +43,6 @@ import (
"golang.org/x/oauth2"
)
const enc = encodings.Mailru
// Global constants
const (
minSleepPacer = 10 * time.Millisecond
@@ -193,21 +192,31 @@ facilitate remote troubleshooting of backend issues. Strict meaning of
flags is not documented and not guaranteed to persist between releases.
Quirks will be removed when the backend grows stable.
Supported quirks: atomicmkdir binlist gzip insecure retry400`,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// Encode invalid UTF-8 bytes as json doesn't handle them properly.
Default: (encoder.Display |
encoder.EncodeWin | // :?"*<>|
encoder.EncodeBackSlash |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Username string `config:"user"`
Password string `config:"pass"`
UserAgent string `config:"user_agent"`
CheckHash bool `config:"check_hash"`
SpeedupEnable bool `config:"speedup_enable"`
SpeedupPatterns string `config:"speedup_file_patterns"`
SpeedupMaxDisk fs.SizeSuffix `config:"speedup_max_disk"`
SpeedupMaxMem fs.SizeSuffix `config:"speedup_max_memory"`
Quirks string `config:"quirks"`
Username string `config:"user"`
Password string `config:"pass"`
UserAgent string `config:"user_agent"`
CheckHash bool `config:"check_hash"`
SpeedupEnable bool `config:"speedup_enable"`
SpeedupPatterns string `config:"speedup_file_patterns"`
SpeedupMaxDisk fs.SizeSuffix `config:"speedup_max_disk"`
SpeedupMaxMem fs.SizeSuffix `config:"speedup_max_memory"`
Quirks string `config:"quirks"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// retryErrorCodes is a slice of error codes that we will retry
@@ -607,7 +616,7 @@ func (f *Fs) readItemMetaData(ctx context.Context, path string) (entry fs.DirEnt
Path: "/api/m1/file",
Parameters: url.Values{
"access_token": {token},
"home": {enc.FromStandardPath(path)},
"home": {f.opt.Enc.FromStandardPath(path)},
"offset": {"0"},
"limit": {strconv.Itoa(maxInt32)},
},
@@ -642,7 +651,7 @@ func (f *Fs) readItemMetaData(ctx context.Context, path string) (entry fs.DirEnt
// =0 - for an empty directory
// >0 - for a non-empty directory
func (f *Fs) itemToDirEntry(ctx context.Context, item *api.ListItem) (entry fs.DirEntry, dirSize int, err error) {
remote, err := f.relPath(enc.ToStandardPath(item.Home))
remote, err := f.relPath(f.opt.Enc.ToStandardPath(item.Home))
if err != nil {
return nil, -1, err
}
@@ -708,7 +717,7 @@ func (f *Fs) listM1(ctx context.Context, dirPath string, offset int, limit int)
params.Set("limit", strconv.Itoa(limit))
data := url.Values{}
data.Set("home", enc.FromStandardPath(dirPath))
data.Set("home", f.opt.Enc.FromStandardPath(dirPath))
opts := rest.Opts{
Method: "POST",
@@ -756,7 +765,7 @@ func (f *Fs) listBin(ctx context.Context, dirPath string, depth int) (entries fs
req := api.NewBinWriter()
req.WritePu16(api.OperationFolderList)
req.WriteString(enc.FromStandardPath(dirPath))
req.WriteString(f.opt.Enc.FromStandardPath(dirPath))
req.WritePu32(int64(depth))
req.WritePu32(int64(options))
req.WritePu32(0)
@@ -892,7 +901,7 @@ func (t *treeState) NextRecord() (fs.DirEntry, error) {
if (head & 4096) != 0 {
t.dunnoNodeID = r.ReadNBytes(api.DunnoNodeIDLength)
}
name := enc.FromStandardPath(string(r.ReadBytesByLength()))
name := t.f.opt.Enc.FromStandardPath(string(r.ReadBytesByLength()))
t.dunno1 = int(r.ReadULong())
t.dunno2 = 0
t.dunno3 = 0
@@ -1031,7 +1040,7 @@ func (f *Fs) CreateDir(ctx context.Context, path string) error {
req := api.NewBinWriter()
req.WritePu16(api.OperationCreateFolder)
req.WritePu16(0) // revision
req.WriteString(enc.FromStandardPath(path))
req.WriteString(f.opt.Enc.FromStandardPath(path))
req.WritePu32(0)
token, err := f.accessToken()
@@ -1186,7 +1195,7 @@ func (f *Fs) delete(ctx context.Context, path string, hardDelete bool) error {
return err
}
data := url.Values{"home": {enc.FromStandardPath(path)}}
data := url.Values{"home": {f.opt.Enc.FromStandardPath(path)}}
opts := rest.Opts{
Method: "POST",
Path: "/api/m1/file/remove",
@@ -1243,8 +1252,8 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
data := url.Values{}
data.Set("home", enc.FromStandardPath(srcPath))
data.Set("folder", enc.FromStandardPath(parentDir(dstPath)))
data.Set("home", f.opt.Enc.FromStandardPath(srcPath))
data.Set("folder", f.opt.Enc.FromStandardPath(parentDir(dstPath)))
data.Set("email", f.opt.Username)
data.Set("x-email", f.opt.Username)
@@ -1282,7 +1291,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fmt.Errorf("copy failed with code %d", response.Status)
}
tmpPath := enc.ToStandardPath(response.Body)
tmpPath := f.opt.Enc.ToStandardPath(response.Body)
if tmpPath != dstPath {
// fs.Debugf(f, "rename temporary file %q -> %q\n", tmpPath, dstPath)
err = f.moveItemBin(ctx, tmpPath, dstPath, "rename temporary file")
@@ -1357,9 +1366,9 @@ func (f *Fs) moveItemBin(ctx context.Context, srcPath, dstPath, opName string) e
req := api.NewBinWriter()
req.WritePu16(api.OperationRename)
req.WritePu32(0) // old revision
req.WriteString(enc.FromStandardPath(srcPath))
req.WriteString(f.opt.Enc.FromStandardPath(srcPath))
req.WritePu32(0) // new revision
req.WriteString(enc.FromStandardPath(dstPath))
req.WriteString(f.opt.Enc.FromStandardPath(dstPath))
req.WritePu32(0) // dunno
opts := rest.Opts{
@@ -1450,7 +1459,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
}
data := url.Values{}
data.Set("home", enc.FromStandardPath(f.absPath(remote)))
data.Set("home", f.opt.Enc.FromStandardPath(f.absPath(remote)))
data.Set("email", f.opt.Username)
data.Set("x-email", f.opt.Username)
@@ -2015,7 +2024,7 @@ func (o *Object) addFileMetaData(ctx context.Context, overwrite bool) error {
req := api.NewBinWriter()
req.WritePu16(api.OperationAddFile)
req.WritePu16(0) // revision
req.WriteString(enc.FromStandardPath(o.absPath()))
req.WriteString(o.fs.opt.Enc.FromStandardPath(o.absPath()))
req.WritePu64(o.size)
req.WritePu64(o.modTime.Unix())
req.WritePu32(0)
@@ -2113,7 +2122,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
opts := rest.Opts{
Method: "GET",
Options: options,
Path: url.PathEscape(strings.TrimLeft(enc.FromStandardPath(o.absPath()), "/")),
Path: url.PathEscape(strings.TrimLeft(o.fs.opt.Enc.FromStandardPath(o.absPath()), "/")),
Parameters: url.Values{
"client_id": {api.OAuthClientID},
"token": {token},

View File

@@ -26,19 +26,18 @@ import (
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
mega "github.com/t3rm1n4l/go-mega"
)
const enc = encodings.Mega
const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
@@ -83,16 +82,24 @@ than permanently deleting them. If you specify this then rclone will
permanently delete objects instead.`,
Default: false,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// Encode invalid UTF-8 bytes as json doesn't handle them properly.
Default: (encoder.Base |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
User string `config:"user"`
Pass string `config:"pass"`
Debug bool `config:"debug"`
HardDelete bool `config:"hard_delete"`
User string `config:"user"`
Pass string `config:"pass"`
Debug bool `config:"debug"`
HardDelete bool `config:"hard_delete"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote mega
@@ -250,12 +257,12 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// splitNodePath splits nodePath into / separated parts, returning nil if it
// should refer to the root.
// It also encodes the parts into backend specific encoding
func splitNodePath(nodePath string) (parts []string) {
func (f *Fs) splitNodePath(nodePath string) (parts []string) {
nodePath = path.Clean(nodePath)
if nodePath == "." || nodePath == "/" {
return nil
}
nodePath = enc.FromStandardPath(nodePath)
nodePath = f.opt.Enc.FromStandardPath(nodePath)
return strings.Split(nodePath, "/")
}
@@ -263,7 +270,7 @@ func splitNodePath(nodePath string) (parts []string) {
//
// It returns mega.ENOENT if it wasn't found
func (f *Fs) findNode(rootNode *mega.Node, nodePath string) (*mega.Node, error) {
parts := splitNodePath(nodePath)
parts := f.splitNodePath(nodePath)
if parts == nil {
return rootNode, nil
}
@@ -320,7 +327,7 @@ func (f *Fs) mkdir(rootNode *mega.Node, dir string) (node *mega.Node, err error)
f.mkdirMu.Lock()
defer f.mkdirMu.Unlock()
parts := splitNodePath(dir)
parts := f.splitNodePath(dir)
if parts == nil {
return rootNode, nil
}
@@ -422,7 +429,7 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
errors := 0
// similar to f.deleteNode(trash) but with HardDelete as true
for _, item := range items {
fs.Debugf(f, "Deleting trash %q", enc.ToStandardName(item.GetName()))
fs.Debugf(f, "Deleting trash %q", f.opt.Enc.ToStandardName(item.GetName()))
deleteErr := f.pacer.Call(func() (bool, error) {
err := f.srv.Delete(item, true)
return shouldRetry(err)
@@ -504,7 +511,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
var iErr error
_, err = f.list(ctx, dirNode, func(info *mega.Node) bool {
remote := path.Join(dir, enc.ToStandardName(info.GetName()))
remote := path.Join(dir, f.opt.Enc.ToStandardName(info.GetName()))
switch info.GetType() {
case mega.FOLDER, mega.ROOT, mega.INBOX, mega.TRASH:
d := fs.NewDir(remote, info.GetTimeStamp()).SetID(info.GetHash())
@@ -726,7 +733,7 @@ func (f *Fs) move(dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node
if srcLeaf != dstLeaf {
//log.Printf("rename %q to %q", srcLeaf, dstLeaf)
err = f.pacer.Call(func() (bool, error) {
err = f.srv.Rename(info, enc.FromStandardName(dstLeaf))
err = f.srv.Rename(info, f.opt.Enc.FromStandardName(dstLeaf))
return shouldRetry(err)
})
if err != nil {
@@ -875,13 +882,13 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
}
// move them into place
for _, info := range infos {
fs.Infof(srcDir, "merging %q", enc.ToStandardName(info.GetName()))
fs.Infof(srcDir, "merging %q", f.opt.Enc.ToStandardName(info.GetName()))
err = f.pacer.Call(func() (bool, error) {
err = f.srv.Move(info, dstDirNode)
return shouldRetry(err)
})
if err != nil {
return errors.Wrapf(err, "MergeDirs move failed on %q in %v", enc.ToStandardName(info.GetName()), srcDir)
return errors.Wrapf(err, "MergeDirs move failed on %q in %v", f.opt.Enc.ToStandardName(info.GetName()), srcDir)
}
}
// rmdir (into trash) the now empty source directory
@@ -1124,7 +1131,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var u *mega.Upload
err = o.fs.pacer.Call(func() (bool, error) {
u, err = o.fs.srv.NewUpload(dirNode, enc.FromStandardName(leaf), size)
u, err = o.fs.srv.NewUpload(dirNode, o.fs.opt.Enc.FromStandardName(leaf), size)
return shouldRetry(err)
})
if err != nil {

backend/memory/memory.go (new file, 624 lines)
View File

@@ -0,0 +1,624 @@
// Package memory provides an interface to an in memory object storage system
package memory
import (
"bytes"
"context"
"crypto/md5"
"encoding/hex"
"fmt"
"io"
"io/ioutil"
"path"
"strings"
"sync"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/bucket"
)
var (
hashType = hash.MD5
// the object storage is persistent
buckets = newBucketsInfo()
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "memory",
Description: "In memory object storage system.",
NewFs: NewFs,
Options: []fs.Option{},
})
}
// Options defines the configuration for this backend
type Options struct {
}
// Fs represents a remote memory server
type Fs struct {
name string // name of this remote
root string // the path we are working on if any
opt Options // parsed config options
rootBucket string // bucket part of root (if any)
rootDirectory string // directory part of root (if any)
features *fs.Features // optional features
}
// bucketsInfo holds info about all the buckets
type bucketsInfo struct {
mu sync.RWMutex
buckets map[string]*bucketInfo
}
func newBucketsInfo() *bucketsInfo {
return &bucketsInfo{
buckets: make(map[string]*bucketInfo, 16),
}
}
// getBucket gets the named bucket or nil
func (bi *bucketsInfo) getBucket(name string) (b *bucketInfo) {
bi.mu.RLock()
b = bi.buckets[name]
bi.mu.RUnlock()
return b
}
// makeBucket returns the bucket or makes it
func (bi *bucketsInfo) makeBucket(name string) (b *bucketInfo) {
bi.mu.Lock()
defer bi.mu.Unlock()
b = bi.buckets[name]
if b != nil {
return b
}
b = newBucketInfo()
bi.buckets[name] = b
return b
}
// deleteBucket deletes the bucket or returns an error
func (bi *bucketsInfo) deleteBucket(name string) error {
bi.mu.Lock()
defer bi.mu.Unlock()
b := bi.buckets[name]
if b == nil {
return fs.ErrorDirNotFound
}
if !b.isEmpty() {
return fs.ErrorDirectoryNotEmpty
}
delete(bi.buckets, name)
return nil
}
// getObjectData gets an object from (bucketName, bucketPath) or nil
func (bi *bucketsInfo) getObjectData(bucketName, bucketPath string) (od *objectData) {
b := bi.getBucket(bucketName)
if b == nil {
return nil
}
return b.getObjectData(bucketPath)
}
// updateObjectData updates an object from (bucketName, bucketPath)
func (bi *bucketsInfo) updateObjectData(bucketName, bucketPath string, od *objectData) {
b := bi.makeBucket(bucketName)
b.mu.Lock()
b.objects[bucketPath] = od
b.mu.Unlock()
}
// removeObjectData removes an object from (bucketName, bucketPath) returning true if removed
func (bi *bucketsInfo) removeObjectData(bucketName, bucketPath string) (removed bool) {
b := bi.getBucket(bucketName)
if b != nil {
b.mu.Lock()
od := b.objects[bucketPath]
if od != nil {
delete(b.objects, bucketPath)
removed = true
}
b.mu.Unlock()
}
return removed
}
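bucketsInfo is a process-wide registry guarded by an RWMutex: readers take RLock, writers take Lock, and makeBucket is get-or-create under the write lock. A short sketch of the calling pattern (names are illustrative):

// Sketch: store and fetch an object through the shared registry.
od := &objectData{
	modTime: time.Now(),
	data:    []byte("hello"),
}
buckets.updateObjectData("demo-bucket", "dir/file.txt", od) // creates the bucket if needed
if got := buckets.getObjectData("demo-bucket", "dir/file.txt"); got != nil {
	fmt.Printf("%d bytes\n", len(got.data))
}
_ = buckets.removeObjectData("demo-bucket", "dir/file.txt")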
// bucketInfo holds info about a single bucket
type bucketInfo struct {
mu sync.RWMutex
objects map[string]*objectData
}
func newBucketInfo() *bucketInfo {
return &bucketInfo{
objects: make(map[string]*objectData, 16),
}
}
// getObjectData gets the data for the named object or nil
func (bi *bucketInfo) getObjectData(name string) (od *objectData) {
bi.mu.RLock()
od = bi.objects[name]
bi.mu.RUnlock()
return od
}
// isEmpty reports whether the bucket contains no objects
func (bi *bucketInfo) isEmpty() (empty bool) {
bi.mu.RLock()
empty = len(bi.objects) == 0
bi.mu.RUnlock()
return empty
}
// the object data and metadata
type objectData struct {
modTime time.Time
hash string
mimeType string
data []byte
}
// Object describes a memory object
type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
od *objectData // the object data
}
// ------------------------------------------------------------
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("Memory root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// parsePath parses a remote 'url'
func parsePath(path string) (root string) {
root = strings.Trim(path, "/")
return
}
// split returns bucket and bucketPath from the rootRelativePath
// relative to f.root
func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
return bucket.Split(path.Join(f.root, rootRelativePath))
}
// split returns bucket and bucketPath from the object
func (o *Object) split() (bucket, bucketPath string) {
return o.fs.split(o.remote)
}
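split joins the Fs root with the request path and then divides on the first slash, so the bucket is always the leading component. For example, using bucket.Split as above (values illustrative):

bucketName, bucketPath := bucket.Split(path.Join("mybucket/photos", "2020/pic.jpg"))
fmt.Println(bucketName, bucketPath) // mybucket photos/2020/pic.jpg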
// setRoot changes the root of the Fs
func (f *Fs) setRoot(root string) {
f.root = parsePath(root)
f.rootBucket, f.rootDirectory = bucket.Split(f.root)
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
root = strings.Trim(root, "/")
f := &Fs{
name: name,
root: root,
opt: *opt,
}
f.setRoot(root)
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
BucketBasedRootOK: true,
}).Fill(f)
if f.rootBucket != "" && f.rootDirectory != "" {
od := buckets.getObjectData(f.rootBucket, f.rootDirectory)
if od != nil {
newRoot := path.Dir(f.root)
if newRoot == "." {
newRoot = ""
}
f.setRoot(newRoot)
// return an error with an fs which points to the parent
err = fs.ErrorIsFile
}
}
return f, err
}
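NewFs follows the usual bucket-backend convention: if the root turns out to name an existing object, the Fs is re-rooted at the object's parent and fs.ErrorIsFile is returned alongside it. A sketch, under the assumption that the object was created earlier in the same process (the registry is in-memory only):

m := configmap.Simple{}
f, err := NewFs("memory", "bucket/dir/file.txt", m)
if err == fs.ErrorIsFile {
	// f is valid and rooted at "bucket/dir"; callers use it to address the object
}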
// newObject makes an object from a remote and an objectData
func (f *Fs) newObject(remote string, od *objectData) *Object {
return &Object{fs: f, remote: remote, od: od}
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
bucket, bucketPath := f.split(remote)
od := buckets.getObjectData(bucket, bucketPath)
if od == nil {
return nil, fs.ErrorObjectNotFound
}
return f.newObject(remote, od), nil
}
// listFn is called from list to handle an object.
type listFn func(remote string, entry fs.DirEntry, isDirectory bool) error
// list the objects and directories of the bucket into fn
func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBucket bool, recurse bool, fn listFn) (err error) {
if prefix != "" {
prefix += "/"
}
if directory != "" {
directory += "/"
}
b := buckets.getBucket(bucket)
if b == nil {
return fs.ErrorDirNotFound
}
b.mu.RLock()
defer b.mu.RUnlock()
dirs := make(map[string]struct{})
for absPath, od := range b.objects {
if strings.HasPrefix(absPath, directory) {
remote := absPath[len(prefix):]
if !recurse {
localPath := absPath[len(directory):]
slash := strings.IndexRune(localPath, '/')
if slash >= 0 {
// send a directory if the path has a slash
dir := directory + localPath[:slash]
if addBucket {
dir = path.Join(bucket, dir)
}
_, found := dirs[dir]
if !found {
err = fn(dir, fs.NewDir(dir, time.Time{}), true)
if err != nil {
return err
}
dirs[dir] = struct{}{}
}
continue // don't send this file if not recursing
}
}
// send an object
if addBucket {
remote = path.Join(bucket, remote)
}
err = fn(remote, f.newObject(remote, od), false)
if err != nil {
return err
}
}
}
return nil
}
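list walks every object in the bucket and synthesises directory entries from path prefixes, deduplicating through the dirs set so each directory is emitted once. A standalone sketch of the technique with illustrative paths (imports fmt and strings):

dirs := make(map[string]struct{})
for _, absPath := range []string{"a/b/c.txt", "a/b/d.txt", "a/e.txt"} {
	local := strings.TrimPrefix(absPath, "a/")
	if slash := strings.IndexRune(local, '/'); slash >= 0 {
		dir := "a/" + local[:slash]
		if _, seen := dirs[dir]; !seen {
			fmt.Println("dir:", dir) // each synthesised directory is sent once
			dirs[dir] = struct{}{}
		}
		continue // the object itself is skipped when not recursing
	}
	fmt.Println("obj:", absPath)
}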
// listDir lists the bucket to the entries
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
// List the objects and directories
err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, entry fs.DirEntry, isDirectory bool) error {
entries = append(entries, entry)
return nil
})
return entries, err
}
// listBuckets lists the buckets to entries
func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error) {
buckets.mu.RLock()
defer buckets.mu.RUnlock()
for name := range buckets.buckets {
entries = append(entries, fs.NewDir(name, time.Time{}))
}
return entries, nil
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
// defer fslog.Trace(dir, "")("entries = %q, err = %v", &entries, &err)
bucket, directory := f.split(dir)
if bucket == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
}
return f.listBuckets(ctx)
}
return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
}
// ListR lists the objects and directories of the Fs starting
// from dir recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
//
// Don't implement this unless you have a more efficient way
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
bucket, directory := f.split(dir)
list := walk.NewListRHelper(callback)
listR := func(bucket, directory, prefix string, addBucket bool) error {
return f.list(ctx, bucket, directory, prefix, addBucket, true, func(remote string, entry fs.DirEntry, isDirectory bool) error {
return list.Add(entry)
})
}
if bucket == "" {
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
bucket := entry.Remote()
err = listR(bucket, "", f.rootDirectory, true)
if err != nil {
return err
}
}
} else {
err = listR(bucket, directory, f.rootDirectory, f.rootBucket == "")
if err != nil {
return err
}
}
return list.Flush()
}
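// countEntries is a minimal caller-side sketch (not part of this
// backend) of driving ListR: the callback receives tranches of entries
// in no particular order and returning a non-nil error stops the
// listing.
func countEntries(ctx context.Context, f fs.ListRer) (n int, err error) {
	err = f.ListR(ctx, "", func(entries fs.DirEntries) error {
		n += len(entries)
		return nil
	})
	return n, err
}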
// Put the object into the bucket
//
// Copy the reader in to the new object which is returned
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
// Temporary Object under construction
o := &Object{
fs: f,
remote: src.Remote(),
od: &objectData{
modTime: src.ModTime(ctx),
},
}
return o, o.Update(ctx, in, src, options...)
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(ctx, in, src, options...)
}
// Mkdir creates the bucket if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
bucket, _ := f.split(dir)
buckets.makeBucket(bucket)
return nil
}
// Rmdir deletes the bucket if the fs is at the root
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
bucket, directory := f.split(dir)
if bucket == "" || directory != "" {
return nil
}
return buckets.deleteBucket(bucket)
}
// Precision of the remote
func (f *Fs) Precision() time.Duration {
return time.Nanosecond
}
// Copy src to this remote using server side copy operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
dstBucket, dstPath := f.split(remote)
_ = buckets.makeBucket(dstBucket)
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
srcBucket, srcPath := srcObj.split()
od := buckets.getObjectData(srcBucket, srcPath)
if od == nil {
return nil, fs.ErrorObjectNotFound
}
buckets.updateObjectData(dstBucket, dstPath, od)
return f.NewObject(ctx, remote)
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hashType)
}
// ------------------------------------------------------------
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.Remote()
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Hash returns the hash of an object returning a lowercase hex string
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hashType {
return "", hash.ErrUnsupported
}
if o.od.hash == "" {
sum := md5.Sum(o.od.data)
o.od.hash = hex.EncodeToString(sum[:])
}
return o.od.hash, nil
}
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
return int64(len(o.od.data))
}
// ModTime returns the modification time of the object
//
// The mod time is stored with the object data when it is written so
// this returns it directly.
func (o *Object) ModTime(ctx context.Context) (result time.Time) {
return o.od.modTime
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
o.od.modTime = modTime
return nil
}
// Storable returns if this object is storable
func (o *Object) Storable() bool {
return true
}
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
var offset, limit int64 = 0, -1
for _, option := range options {
switch x := option.(type) {
case *fs.RangeOption:
offset, limit = x.Decode(int64(len(o.od.data)))
case *fs.SeekOption:
offset = x.Offset
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
if offset > int64(len(o.od.data)) {
offset = int64(len(o.od.data))
}
data := o.od.data[offset:]
if limit >= 0 {
if limit > int64(len(data)) {
limit = int64(len(data))
}
data = data[:limit]
}
return ioutil.NopCloser(bytes.NewBuffer(data)), nil
}
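// readRange is a minimal sketch (not part of this backend) of the
// option handling above: fs.RangeOption uses inclusive HTTP-style
// offsets, so Start: 10, End: 19 decodes to offset 10 and limit 10,
// i.e. data[10:20].
func readRange(ctx context.Context, o fs.Object) (b []byte, err error) {
	in, err := o.Open(ctx, &fs.RangeOption{Start: 10, End: 19})
	if err != nil {
		return nil, err
	}
	defer fs.CheckClose(in, &err)
	return ioutil.ReadAll(in)
}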
// Update the object with the contents of the io.Reader, modTime and size
//
// The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
bucket, bucketPath := o.split()
data, err := ioutil.ReadAll(in)
if err != nil {
return errors.Wrap(err, "failed to update memory object")
}
o.od = &objectData{
data: data,
hash: "",
modTime: src.ModTime(ctx),
mimeType: fs.MimeType(ctx, o),
}
buckets.updateObjectData(bucket, bucketPath, o.od)
return nil
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
bucket, bucketPath := o.split()
removed := buckets.removeObjectData(bucket, bucketPath)
if !removed {
return fs.ErrorObjectNotFound
}
return nil
}
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType(ctx context.Context) string {
return o.od.mimeType
}
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}
_ fs.Copier = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
)

View File

@@ -0,0 +1,16 @@
// Test memory filesystem interface
package memory
import (
"testing"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: ":memory:",
NilObject: (*Object)(nil),
})
}

View File

@@ -12,6 +12,7 @@ import (
"log"
"net/http"
"path"
"strconv"
"strings"
"time"
@@ -23,11 +24,11 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
@@ -35,8 +36,6 @@ import (
"golang.org/x/oauth2"
)
const enc = encodings.OneDrive
const (
rcloneClientID = "b15665d9-eda6-4092-8539-0eec376afd59"
rcloneEncryptedClientSecret = "_JUdzh3LnKNqSPcf4Wu5fgMFIQOI8glZu_akYgR8yf6egowNBg-R"
@@ -61,7 +60,7 @@ var (
AuthURL: "https://login.microsoftonline.com/common/oauth2/v2.0/authorize",
TokenURL: "https://login.microsoftonline.com/common/oauth2/v2.0/token",
},
Scopes: []string{"Files.Read", "Files.ReadWrite", "Files.Read.All", "Files.ReadWrite.All", "offline_access"},
Scopes: []string{"Files.Read", "Files.ReadWrite", "Files.Read.All", "Files.ReadWrite.All", "offline_access", "Sites.Read.All"},
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
@@ -252,16 +251,63 @@ delete OneNote files or otherwise want them to show up in directory
listing, set this option.`,
Default: false,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// List of replaced characters:
// < (less than) -> '＜' // FULLWIDTH LESS-THAN SIGN
// > (greater than) -> '＞' // FULLWIDTH GREATER-THAN SIGN
// : (colon) -> '：' // FULLWIDTH COLON
// " (double quote) -> '＂' // FULLWIDTH QUOTATION MARK
// \ (backslash) -> '＼' // FULLWIDTH REVERSE SOLIDUS
// | (vertical line) -> '｜' // FULLWIDTH VERTICAL LINE
// ? (question mark) -> '？' // FULLWIDTH QUESTION MARK
// * (asterisk) -> '＊' // FULLWIDTH ASTERISK
// # (number sign) -> '＃' // FULLWIDTH NUMBER SIGN
// % (percent sign) -> '％' // FULLWIDTH PERCENT SIGN
//
// Folder names cannot begin with a tilde ('~')
// List of replaced characters:
// ~ (tilde) -> '～' // FULLWIDTH TILDE
//
// Additionally names can't begin with a space ( ) or end with a period (.) or space ( ).
// List of replaced characters:
// . (period) -> '．' // FULLWIDTH FULL STOP
// (space) -> '␠' // SYMBOL FOR SPACE
//
// Also encode invalid UTF-8 bytes as json doesn't handle them.
//
// The OneDrive API documentation lists the set of reserved characters, but
// testing showed this list is incomplete. These are the differences:
// - " (double quote) is rejected, but missing in the documentation
// - space at the end of file and folder names is rejected, but missing in the documentation
// - period at the end of file names is rejected, but missing in the documentation
//
// Adding these restrictions to the OneDrive API documentation yields exactly
// the same rules as the Windows naming conventions.
//
// https://docs.microsoft.com/en-us/onedrive/developer/rest-api/concepts/addressing-driveitems?view=odsp-graph-online#path-encoding
Default: (encoder.Display |
encoder.EncodeBackSlash |
encoder.EncodeHashPercent |
encoder.EncodeLeftSpace |
encoder.EncodeLeftTilde |
encoder.EncodeRightPeriod |
encoder.EncodeRightSpace |
encoder.EncodeWin |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DriveID string `config:"drive_id"`
DriveType string `config:"drive_type"`
ExposeOneNoteFiles bool `config:"expose_onenote_files"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DriveID string `config:"drive_id"`
DriveType string `config:"drive_type"`
ExposeOneNoteFiles bool `config:"expose_onenote_files"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote one drive
@@ -335,13 +381,30 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
authRetry := false
if resp != nil && resp.StatusCode == 401 && len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
authRetry = true
fs.Debugf(nil, "Should retry: %v", err)
retry := false
if resp != nil {
switch resp.StatusCode {
case 401:
if len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
retry = true
fs.Debugf(nil, "Should retry: %v", err)
}
case 429: // Too Many Requests.
// see https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online
if values := resp.Header["Retry-After"]; len(values) == 1 && values[0] != "" {
retryAfter, parseErr := strconv.Atoi(values[0])
if parseErr != nil {
fs.Debugf(nil, "Failed to parse Retry-After: %q: %v", values[0], parseErr)
} else {
duration := time.Second * time.Duration(retryAfter)
retry = true
err = pacer.RetryAfterError(err, duration)
fs.Debugf(nil, "Too many requests. Trying again in %d seconds.", retryAfter)
}
}
}
}
return authRetry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
return retry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
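// For illustration (header value assumed): a 429 response carrying
// "Retry-After: 30" parses to retryAfter = 30, so the error is wrapped
// with pacer.RetryAfterError(err, 30*time.Second) and the pacer sleeps
// 30s before the next attempt instead of using its normal backoff.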
// readMetaDataForPathRelativeToID reads the metadata for a path relative to an item that is addressed by its normalized ID.
@@ -355,7 +418,7 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
// If `relPath` == '', do not append the slash (See #3664)
func (f *Fs) readMetaDataForPathRelativeToID(ctx context.Context, normalizedID string, relPath string) (info *api.Item, resp *http.Response, err error) {
if relPath != "" {
relPath = "/" + withTrailingColon(rest.URLPathEscape(enc.FromStandardPath(relPath)))
relPath = "/" + withTrailingColon(rest.URLPathEscape(f.opt.Enc.FromStandardPath(relPath)))
}
opts := newOptsCall(normalizedID, "GET", ":"+relPath)
err = f.pacer.Call(func() (bool, error) {
@@ -380,7 +443,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.It
} else {
opts = rest.Opts{
Method: "GET",
Path: "/root:/" + rest.URLPathEscape(enc.FromStandardPath(path)),
Path: "/root:/" + rest.URLPathEscape(f.opt.Enc.FromStandardPath(path)),
}
}
err = f.pacer.Call(func() (bool, error) {
@@ -628,7 +691,7 @@ func (f *Fs) CreateDir(ctx context.Context, dirID, leaf string) (newID string, e
var info *api.Item
opts := newOptsCall(dirID, "POST", "/children")
mkdir := api.CreateItemRequest{
Name: enc.FromStandardName(leaf),
Name: f.opt.Enc.FromStandardName(leaf),
ConflictBehavior: "fail",
}
err = f.pacer.Call(func() (bool, error) {
@@ -688,7 +751,7 @@ OUTER:
if item.Deleted != nil {
continue
}
item.Name = enc.ToStandardName(item.GetName())
item.Name = f.opt.Enc.ToStandardName(item.GetName())
if fn(item) {
found = true
break OUTER
@@ -944,7 +1007,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
id, dstDriveID, _ := parseNormalizedID(directoryID)
replacedLeaf := enc.FromStandardName(leaf)
replacedLeaf := f.opt.Enc.FromStandardName(leaf)
copyReq := api.CopyItemRequest{
Name: &replacedLeaf,
ParentReference: api.ItemReference{
@@ -1028,7 +1091,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
opts := newOptsCall(srcObj.id, "PATCH", "")
move := api.MoveItemRequest{
Name: enc.FromStandardName(leaf),
Name: f.opt.Enc.FromStandardName(leaf),
ParentReference: &api.ItemReference{
DriveID: dstDriveID,
ID: id,
@@ -1143,7 +1206,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
// Do the move
opts := newOptsCall(srcID, "PATCH", "")
move := api.MoveItemRequest{
Name: enc.FromStandardName(leaf),
Name: f.opt.Enc.FromStandardName(leaf),
ParentReference: &api.ItemReference{
DriveID: dstDriveID,
ID: parsedDstDirID,
@@ -1265,7 +1328,7 @@ func (o *Object) rootPath() string {
// srvPath returns a path for use in server given a remote
func (f *Fs) srvPath(remote string) string {
return enc.FromStandardPath(f.rootSlash() + remote)
return f.opt.Enc.FromStandardPath(f.rootSlash() + remote)
}
// srvPath returns a path for use in server
@@ -1377,7 +1440,7 @@ func (o *Object) setModTime(ctx context.Context, modTime time.Time) (*api.Item,
opts = rest.Opts{
Method: "PATCH",
RootURL: rootURL,
Path: "/" + drive + "/items/" + trueDirID + ":/" + withTrailingColon(rest.URLPathEscape(enc.FromStandardName(leaf))),
Path: "/" + drive + "/items/" + trueDirID + ":/" + withTrailingColon(rest.URLPathEscape(o.fs.opt.Enc.FromStandardName(leaf))),
}
} else {
opts = rest.Opts{
@@ -1452,7 +1515,7 @@ func (o *Object) createUploadSession(ctx context.Context, modTime time.Time) (re
Method: "POST",
RootURL: rootURL,
Path: fmt.Sprintf("/%s/items/%s:/%s:/createUploadSession",
drive, id, rest.URLPathEscape(enc.FromStandardName(leaf))),
drive, id, rest.URLPathEscape(o.fs.opt.Enc.FromStandardName(leaf))),
}
} else {
opts = rest.Opts{
@@ -1477,21 +1540,74 @@ func (o *Object) createUploadSession(ctx context.Context, modTime time.Time) (re
return response, err
}
// getPosition gets the current position in a multipart upload
func (o *Object) getPosition(ctx context.Context, url string) (pos int64, err error) {
opts := rest.Opts{
Method: "GET",
RootURL: url,
}
var info api.UploadFragmentResponse
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
})
if err != nil {
return 0, err
}
if len(info.NextExpectedRanges) != 1 {
return 0, errors.Errorf("bad number of ranges in upload position: %v", info.NextExpectedRanges)
}
position := info.NextExpectedRanges[0]
i := strings.IndexByte(position, '-')
if i < 0 {
return 0, errors.Errorf("no '-' in next expected range: %q", position)
}
position = position[:i]
pos, err = strconv.ParseInt(position, 10, 64)
if err != nil {
return 0, errors.Wrapf(err, "bad expected range: %q", position)
}
return pos, nil
}
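// For illustration (value assumed): if the server reports
// NextExpectedRanges == ["26214400-104857599"], the text before the '-'
// parses to pos = 26214400, the offset of the first byte the server is
// still missing.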
// uploadFragment uploads a part
func (o *Object) uploadFragment(ctx context.Context, url string, start int64, totalSize int64, chunk io.ReadSeeker, chunkSize int64) (info *api.Item, err error) {
opts := rest.Opts{
Method: "PUT",
RootURL: url,
ContentLength: &chunkSize,
ContentRange: fmt.Sprintf("bytes %d-%d/%d", start, start+chunkSize-1, totalSize),
Body: chunk,
}
// var response api.UploadFragmentResponse
var resp *http.Response
var body []byte
var skip = int64(0)
err = o.fs.pacer.Call(func() (bool, error) {
_, _ = chunk.Seek(0, io.SeekStart)
toSend := chunkSize - skip
opts := rest.Opts{
Method: "PUT",
RootURL: url,
ContentLength: &toSend,
ContentRange: fmt.Sprintf("bytes %d-%d/%d", start+skip, start+chunkSize-1, totalSize),
Body: chunk,
}
_, _ = chunk.Seek(skip, io.SeekStart)
resp, err = o.fs.srv.Call(ctx, &opts)
if err != nil && resp != nil && resp.StatusCode == http.StatusRequestedRangeNotSatisfiable {
fs.Debugf(o, "Received 416 error - reading current position from server: %v", err)
pos, posErr := o.getPosition(ctx, url)
if posErr != nil {
fs.Debugf(o, "Failed to read position: %v", posErr)
return false, posErr
}
skip = pos - start
fs.Debugf(o, "Read position %d, chunk is %d..%d, bytes to skip = %d", pos, start, start+chunkSize, skip)
switch {
case skip < 0:
return false, errors.Wrapf(err, "sent block already (skip %d < 0), can't rewind", skip)
case skip > chunkSize:
return false, errors.Wrapf(err, "position is in the future (skip %d > chunkSize %d), can't skip forward", skip, chunkSize)
case skip == chunkSize:
fs.Debugf(o, "Skipping chunk as already sent (skip %d == chunkSize %d)", skip, chunkSize)
return false, nil
}
return true, errors.Wrapf(err, "retry this chunk skipping %d bytes", skip)
}
if err != nil {
return shouldRetry(resp, err)
}
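// Worked example of the skip arithmetic (numbers assumed): for a chunk
// covering bytes 1000..1999 (start = 1000, chunkSize = 1000), a 416 with
// server position 1300 gives skip = 300, so the retry seeks to offset
// 300 within the chunk and resends bytes 1300..1999 only; a position of
// 2000 would mean skip == chunkSize and the chunk is skipped entirely.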
@@ -1604,7 +1720,7 @@ func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, size int64,
opts = rest.Opts{
Method: "PUT",
RootURL: rootURL,
Path: "/" + drive + "/items/" + trueDirID + ":/" + rest.URLPathEscape(enc.FromStandardName(leaf)) + ":/content",
Path: "/" + drive + "/items/" + trueDirID + ":/" + rest.URLPathEscape(o.fs.opt.Enc.FromStandardName(leaf)) + ":/content",
ContentLength: &size,
Body: in,
}

View File

@@ -13,21 +13,20 @@ import (
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/lib/rest"
)
const enc = encodings.OpenDrive
const (
defaultEndpoint = "https://dev.opendrive.com/api/v1"
minSleep = 10 * time.Millisecond
@@ -50,14 +49,57 @@ func init() {
Help: "Password.",
IsPassword: true,
Required: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// List of replaced characters:
// < (less than) -> '＜' // FULLWIDTH LESS-THAN SIGN
// > (greater than) -> '＞' // FULLWIDTH GREATER-THAN SIGN
// : (colon) -> '：' // FULLWIDTH COLON
// " (double quote) -> '＂' // FULLWIDTH QUOTATION MARK
// \ (backslash) -> '＼' // FULLWIDTH REVERSE SOLIDUS
// | (vertical line) -> '｜' // FULLWIDTH VERTICAL LINE
// ? (question mark) -> '？' // FULLWIDTH QUESTION MARK
// * (asterisk) -> '＊' // FULLWIDTH ASTERISK
//
// Additionally names can't begin or end with an ASCII whitespace.
// List of replaced characters:
// (space) -> '␠' // SYMBOL FOR SPACE
// (horizontal tab) -> '␉' // SYMBOL FOR HORIZONTAL TABULATION
// (line feed) -> '␊' // SYMBOL FOR LINE FEED
// (vertical tab) -> '␋' // SYMBOL FOR VERTICAL TABULATION
// (carriage return) -> '␍' // SYMBOL FOR CARRIAGE RETURN
//
// Also encode invalid UTF-8 bytes as json doesn't handle them properly.
//
// https://www.opendrive.com/wp-content/uploads/guides/OpenDrive_API_guide.pdf
Default: (encoder.Base |
encoder.EncodeWin |
encoder.EncodeLeftCrLfHtVt |
encoder.EncodeRightCrLfHtVt |
encoder.EncodeBackSlash |
encoder.EncodeLeftSpace |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8),
}, {
Name: "chunk_size",
Help: `Files will be uploaded in chunks this size.
Note that these chunks are buffered in memory so increasing them will
increase memory use.`,
Default: 10 * fs.MebiByte,
Advanced: true,
}},
})
}
// Options defines the configuration for this backend
type Options struct {
UserName string `config:"username"`
Password string `config:"password"`
UserName string `config:"username"`
Password string `config:"password"`
Enc encoder.MultiEncoder `config:"encoding"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
}
// Fs represents a remote server
@@ -588,7 +630,7 @@ func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time,
fs: f,
remote: remote,
}
return o, enc.FromStandardName(leaf), directoryID, nil
return o, f.opt.Enc.FromStandardName(leaf), directoryID, nil
}
// readMetaDataForPath reads the metadata from the path
@@ -690,7 +732,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
err = f.pacer.Call(func() (bool, error) {
createDirData := createFolder{
SessionID: f.session.SessionID,
FolderName: enc.FromStandardName(leaf),
FolderName: f.opt.Enc.FromStandardName(leaf),
FolderSubParent: pathID,
FolderIsPublic: 0,
FolderPublicUpl: 0,
@@ -736,7 +778,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
return "", false, errors.Wrap(err, "failed to get folder list")
}
leaf = enc.FromStandardName(leaf)
leaf = f.opt.Enc.FromStandardName(leaf)
for _, folder := range folderList.Folders {
// fs.Debugf(nil, "Folder: %s (%s)", folder.Name, folder.FolderID)
@@ -784,7 +826,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
for _, folder := range folderList.Folders {
folder.Name = enc.ToStandardName(folder.Name)
folder.Name = f.opt.Enc.ToStandardName(folder.Name)
// fs.Debugf(nil, "Folder: %s (%s)", folder.Name, folder.FolderID)
remote := path.Join(dir, folder.Name)
// cache the directory ID for later lookups
@@ -795,7 +837,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
for _, file := range folderList.Files {
file.Name = enc.ToStandardName(file.Name)
file.Name = f.opt.Enc.ToStandardName(file.Name)
// fs.Debugf(nil, "File: %s (%s)", file.Name, file.FileID)
remote := path.Join(dir, file.Name)
o, err := f.newObjectWithInfo(ctx, remote, &file)
@@ -940,15 +982,13 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// resp.Body.Close()
// fs.Debugf(nil, "PostOpen: %#v", openResponse)
// upload in chunks of opt.ChunkSize (10 MiB by default)
chunkSize := int64(1024 * 1024 * 10)
buf := make([]byte, int(chunkSize))
buf := make([]byte, o.fs.opt.ChunkSize)
chunkOffset := int64(0)
remainingBytes := size
chunkCounter := 0
for remainingBytes > 0 {
currentChunkSize := chunkSize
currentChunkSize := int64(o.fs.opt.ChunkSize)
if currentChunkSize > remainingBytes {
currentChunkSize = remainingBytes
}
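// Worked example (sizes assumed): with the default 10 MiB chunk_size
// and a 25 MiB file the loop sends chunks of 10, 10 and 5 MiB; the
// final iteration shrinks currentChunkSize to the 5 MiB left in
// remainingBytes.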
@@ -1050,7 +1090,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
opts := rest.Opts{
Method: "GET",
Path: fmt.Sprintf("/folder/itembyname.json/%s/%s?name=%s",
o.fs.session.SessionID, directoryID, url.QueryEscape(enc.FromStandardName(leaf))),
o.fs.session.SessionID, directoryID, url.QueryEscape(o.fs.opt.Enc.FromStandardName(leaf))),
}
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &folderList)
return o.fs.shouldRetry(resp, err)

View File

@@ -26,18 +26,16 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
"golang.org/x/oauth2"
)
const enc = encodings.Pcloud
const (
rcloneClientID = "DnONSzyJXpm"
rcloneEncryptedClientSecret = "ej1OIF39VOQQ0PXaSdK9ztkLw3tdLNscW2157TKNQdQKkICR4uU7aFg4eFM"
@@ -81,12 +79,23 @@ func init() {
}, {
Name: config.ConfigClientSecret,
Help: "Pcloud App Client Secret\nLeave blank normally.",
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// Encode invalid UTF-8 bytes as json doesn't handle them properly.
//
// TODO: Investigate Unicode simplification (＼ gets converted to \ server-side)
Default: (encoder.Display |
encoder.EncodeBackSlash |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote pcloud
@@ -342,7 +351,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
Path: "/createfolder",
Parameters: url.Values{},
}
opts.Parameters.Set("name", enc.FromStandardName(leaf))
opts.Parameters.Set("name", f.opt.Enc.FromStandardName(leaf))
opts.Parameters.Set("folderid", dirIDtoNumber(pathID))
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
@@ -418,7 +427,7 @@ func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, fi
continue
}
}
item.Name = enc.ToStandardName(item.Name)
item.Name = f.opt.Enc.ToStandardName(item.Name)
if fn(item) {
found = true
break
@@ -610,7 +619,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
Parameters: url.Values{},
}
opts.Parameters.Set("fileid", fileIDtoNumber(srcObj.id))
opts.Parameters.Set("toname", enc.FromStandardName(leaf))
opts.Parameters.Set("toname", f.opt.Enc.FromStandardName(leaf))
opts.Parameters.Set("tofolderid", dirIDtoNumber(directoryID))
opts.Parameters.Set("mtime", fmt.Sprintf("%d", srcObj.modTime.Unix()))
var resp *http.Response
@@ -689,7 +698,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
Parameters: url.Values{},
}
opts.Parameters.Set("fileid", fileIDtoNumber(srcObj.id))
opts.Parameters.Set("toname", enc.FromStandardName(leaf))
opts.Parameters.Set("toname", f.opt.Enc.FromStandardName(leaf))
opts.Parameters.Set("tofolderid", dirIDtoNumber(directoryID))
var resp *http.Response
var result api.ItemResult
@@ -786,7 +795,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
Parameters: url.Values{},
}
opts.Parameters.Set("folderid", dirIDtoNumber(srcID))
opts.Parameters.Set("toname", enc.FromStandardName(leaf))
opts.Parameters.Set("toname", f.opt.Enc.FromStandardName(leaf))
opts.Parameters.Set("tofolderid", dirIDtoNumber(directoryID))
var resp *http.Response
var result api.ItemResult
@@ -1066,7 +1075,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
Parameters: url.Values{},
TransferEncoding: []string{"identity"}, // pcloud doesn't like chunked encoding
}
leaf = enc.FromStandardName(leaf)
leaf = o.fs.opt.Enc.FromStandardName(leaf)
opts.Parameters.Set("filename", leaf)
opts.Parameters.Set("folderid", dirIDtoNumber(directoryID))
opts.Parameters.Set("nopartial", "1")

View File

@@ -31,14 +31,15 @@ import (
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/premiumizeme/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/random"
@@ -46,8 +47,6 @@ import (
"golang.org/x/oauth2"
)
const enc = encodings.PremiumizeMe
const (
rcloneClientID = "658922194"
rcloneEncryptedClientSecret = "B5YIvQoRIhcpAYs8HYeyjb9gK-ftmZEbqdh_gNfc4RgO9Q"
@@ -93,13 +92,23 @@ This is not normally used - use oauth instead.
`,
Hide: fs.OptionHideBoth,
Default: "",
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// Encode invalid UTF-8 bytes as json doesn't handle them properly.
Default: (encoder.Display |
encoder.EncodeBackSlash |
encoder.EncodeDoubleQuote |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
APIKey string `config:"api_key"`
APIKey string `config:"api_key"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote cloud storage system
@@ -306,14 +315,6 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return f, nil
}
// rootSlash returns root with a slash on if it is empty, otherwise empty string
func (f *Fs) rootSlash() string {
if f.root == "" {
return f.root
}
return f.root + "/"
}
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
@@ -364,7 +365,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
Path: "/folder/create",
Parameters: f.baseParams(),
MultipartParams: url.Values{
"name": {enc.FromStandardName(leaf)},
"name": {f.opt.Enc.FromStandardName(leaf)},
"parent_id": {pathID},
},
}
@@ -429,7 +430,7 @@ func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, fi
fs.Debugf(f, "Ignoring %q - unknown type %q", item.Name, item.Type)
continue
}
item.Name = enc.ToStandardName(item.Name)
item.Name = f.opt.Enc.ToStandardName(item.Name)
if fn(item) {
found = true
break
@@ -637,8 +638,8 @@ func (f *Fs) Purge(ctx context.Context) error {
// between directories and a separate one to rename them. We try to
// call the minimum number of API calls.
func (f *Fs) move(ctx context.Context, isFile bool, id, oldLeaf, newLeaf, oldDirectoryID, newDirectoryID string) (err error) {
newLeaf = enc.FromStandardName(newLeaf)
oldLeaf = enc.FromStandardName(oldLeaf)
newLeaf = f.opt.Enc.FromStandardName(newLeaf)
oldLeaf = f.opt.Enc.FromStandardName(oldLeaf)
doRenameLeaf := oldLeaf != newLeaf
doMove := oldDirectoryID != newDirectoryID
@@ -889,11 +890,6 @@ func (o *Object) Remote() string {
return o.remote
}
// srvPath returns a path for use in server
func (o *Object) srvPath() string {
return enc.FromStandardPath(o.fs.rootSlash() + o.remote)
}
// Hash returns the SHA-1 of an object returning a lowercase hex string
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
return "", hash.ErrUnsupported
@@ -984,14 +980,6 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
return resp.Body, err
}
// metaHash returns a rough hash of metadata to detect if object has been updated
func (o *Object) metaHash() string {
if !o.hasMetaData {
return ""
}
return fmt.Sprintf("remote=%q, size=%d, modTime=%v, id=%q, mimeType=%q", o.remote, o.size, o.modTime, o.id, o.mimeType)
}
// Update the object with the contents of the io.Reader, modTime and size
//
// If existing is set then it updates the object rather than creating a new one
@@ -1006,7 +994,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err != nil {
return err
}
leaf = enc.FromStandardName(leaf)
leaf = o.fs.opt.Enc.FromStandardName(leaf)
var resp *http.Response
var info api.FolderUploadinfoResponse

View File

@@ -17,6 +17,7 @@ import (
"github.com/putdotio/go-putio/putio"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/oauthutil"
@@ -29,6 +30,7 @@ type Fs struct {
name string // name of this remote
root string // the path we are working on
features *fs.Features // optional features
opt Options // options for this Fs
client *putio.Client // client for making API calls to Put.io
pacer *fs.Pacer // To pace the API calls
dirCache *dircache.DirCache // Map of directory path to directory id
@@ -60,6 +62,12 @@ func (f *Fs) Features() *fs.Features {
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (f fs.Fs, err error) {
// defer log.Trace(name, "root=%v", root)("f=%+v, err=%v", &f, &err)
// Parse config into Options struct
opt := new(Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
}
oAuthClient, _, err := oauthutil.NewClient(name, m, putioConfig)
if err != nil {
return nil, errors.Wrap(err, "failed to configure putio")
@@ -67,6 +75,7 @@ func NewFs(name, root string, m configmap.Mapper) (f fs.Fs, err error) {
p := &Fs{
name: name,
root: root,
opt: *opt,
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
client: putio.NewClient(oAuthClient),
oAuthClient: oAuthClient,
@@ -127,7 +136,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
var entry putio.File
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "creating folder. part: %s, parentID: %d", leaf, parentID)
entry, err = f.client.Files.CreateFolder(ctx, enc.FromStandardName(leaf), parentID)
entry, err = f.client.Files.CreateFolder(ctx, f.opt.Enc.FromStandardName(leaf), parentID)
return shouldRetry(err)
})
return itoa(entry.ID), err
@@ -154,7 +163,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
return
}
for _, child := range children {
if enc.ToStandardName(child.Name) == leaf {
if f.opt.Enc.ToStandardName(child.Name) == leaf {
found = true
pathIDOut = itoa(child.ID)
if !child.IsDir() {
@@ -196,7 +205,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return
}
for _, child := range children {
remote := path.Join(dir, enc.ToStandardName(child.Name))
remote := path.Join(dir, f.opt.Enc.ToStandardName(child.Name))
// fs.Debugf(f, "child: %s", remote)
if child.IsDir() {
f.dirCache.Put(remote, itoa(child.ID))
@@ -274,7 +283,7 @@ func (f *Fs) createUpload(ctx context.Context, name string, size int64, parentID
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
req.Header.Set("tus-resumable", "1.0.0")
req.Header.Set("upload-length", strconv.FormatInt(size, 10))
b64name := base64.StdEncoding.EncodeToString([]byte(enc.FromStandardName(name)))
b64name := base64.StdEncoding.EncodeToString([]byte(f.opt.Enc.FromStandardName(name)))
b64true := base64.StdEncoding.EncodeToString([]byte("true"))
b64parentID := base64.StdEncoding.EncodeToString([]byte(parentID))
b64modifiedAt := base64.StdEncoding.EncodeToString([]byte(modTime.Format(time.RFC3339)))
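// For illustration: the tus metadata values are plain base64 of the raw
// bytes, so a file named "file.txt" (assuming no characters needing
// encoder replacement) is sent as base64("file.txt") == "ZmlsZS50eHQ=".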
@@ -546,7 +555,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (o fs.Objec
params := url.Values{}
params.Set("file_id", strconv.FormatInt(srcObj.file.ID, 10))
params.Set("parent_id", directoryID)
params.Set("name", enc.FromStandardName(leaf))
params.Set("name", f.opt.Enc.FromStandardName(leaf))
req, err := f.client.NewRequest(ctx, "POST", "/v2/files/copy", strings.NewReader(params.Encode()))
if err != nil {
return false, err
@@ -585,7 +594,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (o fs.Objec
params := url.Values{}
params.Set("file_id", strconv.FormatInt(srcObj.file.ID, 10))
params.Set("parent_id", directoryID)
params.Set("name", enc.FromStandardName(leaf))
params.Set("name", f.opt.Enc.FromStandardName(leaf))
req, err := f.client.NewRequest(ctx, "POST", "/v2/files/move", strings.NewReader(params.Encode()))
if err != nil {
return false, err
@@ -674,7 +683,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
params := url.Values{}
params.Set("file_id", srcID)
params.Set("parent_id", dstDirectoryID)
params.Set("name", enc.FromStandardName(leaf))
params.Set("name", f.opt.Enc.FromStandardName(leaf))
req, err := f.client.NewRequest(ctx, "POST", "/v2/files/move", strings.NewReader(params.Encode()))
if err != nil {
return false, err

View File

@@ -137,7 +137,7 @@ func (o *Object) readEntry(ctx context.Context) (f *putio.File, err error) {
}
err = o.fs.pacer.Call(func() (bool, error) {
// fs.Debugf(o, "requesting child. directoryID: %s, name: %s", directoryID, leaf)
req, err := o.fs.client.NewRequest(ctx, "GET", "/v2/files/"+directoryID+"/child?name="+url.QueryEscape(enc.FromStandardName(leaf)), nil)
req, err := o.fs.client.NewRequest(ctx, "GET", "/v2/files/"+directoryID+"/child?name="+url.QueryEscape(o.fs.opt.Enc.FromStandardName(leaf)), nil)
if err != nil {
return false, err
}

View File

@@ -6,10 +6,11 @@ import (
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"golang.org/x/oauth2"
)
@@ -25,7 +26,6 @@ canReadUnnormalized = true
canReadRenormalized = true
canStream = false
*/
const enc = encodings.Putio
// Constants
const (
@@ -65,9 +65,25 @@ func init() {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// Note that \ is renamed to -
//
// Encode invalid UTF-8 bytes as json doesn't handle them properly.
Default: (encoder.Display |
encoder.EncodeBackSlash |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Enc encoder.MultiEncoder `config:"encoding"`
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)

View File

@@ -18,20 +18,19 @@ import (
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/encoder"
qsConfig "github.com/yunify/qingstor-sdk-go/v3/config"
qsErr "github.com/yunify/qingstor-sdk-go/v3/request/errors"
qs "github.com/yunify/qingstor-sdk-go/v3/service"
)
const enc = encodings.QingStor
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -113,6 +112,13 @@ and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.`,
Default: 1,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.EncodeInvalidUtf8 |
encoder.EncodeCtl |
encoder.EncodeSlash),
}},
})
}
@@ -136,15 +142,16 @@ func timestampToTime(tp int64) time.Time {
// Options defines the configuration for this backend
type Options struct {
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Endpoint string `config:"endpoint"`
Zone string `config:"zone"`
ConnectionRetries int `config:"connection_retries"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
UploadConcurrency int `config:"upload_concurrency"`
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Endpoint string `config:"endpoint"`
Zone string `config:"zone"`
ConnectionRetries int `config:"connection_retries"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
UploadConcurrency int `config:"upload_concurrency"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote qingstor server
@@ -188,7 +195,7 @@ func parsePath(path string) (root string) {
// relative to f.root
func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
bucketName, bucketPath = bucket.Split(path.Join(f.root, rootRelativePath))
return enc.FromStandardName(bucketName), enc.FromStandardPath(bucketPath)
return f.opt.Enc.FromStandardName(bucketName), f.opt.Enc.FromStandardPath(bucketPath)
}
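// A worked example (paths assumed): with f.root == "mybucket/photos",
// split("2020/img.jpg") joins to "mybucket/photos/2020/img.jpg" and
// bucket.Split returns bucketName "mybucket" and bucketPath
// "photos/2020/img.jpg", which are then encoded for the API.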
// split returns bucket and bucketPath from the object
@@ -357,7 +364,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, err
}
encodedDirectory := enc.FromStandardPath(f.rootDirectory)
encodedDirectory := f.opt.Enc.FromStandardPath(f.rootDirectory)
_, err = bucketInit.HeadObject(encodedDirectory, &qs.HeadObjectInput{})
if err == nil {
newRoot := path.Dir(f.root)
@@ -385,7 +392,7 @@ func (f *Fs) Root() string {
// String converts this Fs to a string
func (f *Fs) String() string {
if f.rootBucket == "" {
return fmt.Sprintf("QingStor root")
return "QingStor root"
}
if f.rootDirectory == "" {
return fmt.Sprintf("QingStor bucket %s", f.rootBucket)
@@ -555,7 +562,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
continue
}
remote := *commonPrefix
remote = enc.ToStandardPath(remote)
remote = f.opt.Enc.ToStandardPath(remote)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", remote)
continue
@@ -576,7 +583,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
for _, object := range resp.Keys {
remote := qs.StringValue(object.Key)
remote = enc.ToStandardPath(remote)
remote = f.opt.Enc.ToStandardPath(remote)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", remote)
continue
@@ -653,7 +660,7 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
}
for _, bucket := range resp.Buckets {
d := fs.NewDir(enc.ToStandardName(qs.StringValue(bucket.Name)), qs.TimeValue(bucket.Created))
d := fs.NewDir(f.opt.Enc.ToStandardName(qs.StringValue(bucket.Name)), qs.TimeValue(bucket.Created))
entries = append(entries, d)
}
return entries, nil

View File

@@ -297,21 +297,6 @@ func (mu *multiUploader) send(c chunk) error {
return err
}
// list lists the ObjectParts of a multipart upload
func (mu *multiUploader) list() error {
bucketInit, _ := mu.bucketInit()
req := qs.ListMultipartInput{
UploadID: mu.uploadID,
}
fs.Debugf(mu, "Reading multi-part details")
rsp, err := bucketInit.ListMultipart(mu.cfg.key, &req)
if err == nil {
mu.objectParts = rsp.ObjectParts
}
return err
}
// complete completes a multipart upload
func (mu *multiUploader) complete() error {
var err error

View File

@@ -14,7 +14,9 @@ What happens if you CTRL-C a multipart upload
*/
import (
"bytes"
"context"
"crypto/md5"
"encoding/base64"
"encoding/hex"
"encoding/xml"
@@ -24,6 +26,7 @@ import (
"net/url"
"path"
"regexp"
"sort"
"strconv"
"strings"
"sync"
@@ -34,29 +37,31 @@ import (
"github.com/aws/aws-sdk-go/aws/corehandlers"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
"github.com/aws/aws-sdk-go/aws/defaults"
"github.com/aws/aws-sdk-go/aws/ec2metadata"
"github.com/aws/aws-sdk-go/aws/request"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/aws/aws-sdk-go/service/s3/s3manager"
"github.com/ncw/swift"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/pool"
"github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/lib/rest"
"golang.org/x/sync/errgroup"
)
const enc = encodings.S3
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -90,6 +95,9 @@ func init() {
}, {
Value: "Netease",
Help: "Netease Object Storage (NOS)",
}, {
Value: "StackPath",
Help: "StackPath Object Storage",
}, {
Value: "Wasabi",
Help: "Wasabi Object Storage",
@@ -160,6 +168,9 @@ func init() {
}, {
Value: "ap-south-1",
Help: "Asia Pacific (Mumbai)\nNeeds location constraint ap-south-1.",
}, {
Value: "ap-east-1",
Help: "Asia Patific (Hong Kong) Region\nNeeds location constraint ap-east-1.",
}, {
Value: "sa-east-1",
Help: "South America (Sao Paulo) Region\nNeeds location constraint sa-east-1.",
@@ -349,10 +360,24 @@ func init() {
Value: "oss-me-east-1.aliyuncs.com",
Help: "Middle East 1 (Dubai)",
}},
}, {
Name: "endpoint",
Help: "Endpoint for StackPath Object Storage.",
Provider: "StackPath",
Examples: []fs.OptionExample{{
Value: "s3.us-east-2.stackpathstorage.com",
Help: "US East Endpoint",
}, {
Value: "s3.us-west-1.stackpathstorage.com",
Help: "US West Endpoint",
}, {
Value: "s3.eu-central-1.stackpathstorage.com",
Help: "EU Endpoint",
}},
}, {
Name: "endpoint",
Help: "Endpoint for S3 API.\nRequired when using an S3 clone.",
Provider: "!AWS,IBMCOS,Alibaba",
Provider: "!AWS,IBMCOS,Alibaba,StackPath",
Examples: []fs.OptionExample{{
Value: "objects-us-east-1.dream.io",
Help: "Dream Objects endpoint",
@@ -428,6 +453,9 @@ func init() {
}, {
Value: "ap-south-1",
Help: "Asia Pacific (Mumbai)",
}, {
Value: "ap-east-1",
Help: "Asia Pacific (Hong Kong)",
}, {
Value: "sa-east-1",
Help: "South America (Sao Paulo) Region.",
@@ -536,7 +564,7 @@ func init() {
}, {
Name: "location_constraint",
Help: "Location constraint - must be set to match the Region.\nLeave blank if not sure. Used when creating buckets only.",
Provider: "!AWS,IBMCOS,Alibaba",
Provider: "!AWS,IBMCOS,Alibaba,StackPath",
}, {
Name: "acl",
Help: `Canned ACL used when creating buckets and storing or copying objects.
@@ -755,7 +783,9 @@ if false then rclone will use virtual path style. See [the AWS S3
docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
for more info.
Some providers (eg Aliyun OSS or Netease COS) require this set to false.`,
Some providers (eg AWS, Aliyun OSS or Netease COS) require this set to
false - rclone will do this automatically based on the provider
setting.`,
Default: true,
Advanced: true,
}, {
@@ -787,62 +817,110 @@ WARNING: Storing parts of an incomplete multipart upload counts towards space us
`,
Default: false,
Advanced: true,
}},
})
}, {
Name: "list_chunk",
Help: `Size of listing chunk (response list for each ListObject S3 request).
This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification.
Most services truncate the response list to 1000 objects even if more than that is requested.
In AWS S3 this is a global maximum and cannot be changed, see [AWS S3](https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html).
In Ceph, this can be increased with the "rgw list buckets max chunk" option.
`,
Default: 1000,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// Any UTF-8 character is valid in a key, however S3 can't handle
// invalid UTF-8, and / has a special meaning.
//
// The SDK can't seem to handle uploading files called '.'
//
// FIXME would be nice to add
// - initial / encoding
// - doubled / encoding
// - trailing / encoding
// so that AWS keys are always valid file names
Default: encoder.EncodeInvalidUtf8 |
encoder.EncodeSlash |
encoder.EncodeDot,
}, {
Name: "memory_pool_flush_time",
Default: memoryPoolFlushTime,
Advanced: true,
Help: `How often internal memory buffer pools will be flushed.
Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations.
This option controls how often unused buffers will be removed from the pool.`,
}, {
Name: "memory_pool_use_mmap",
Default: memoryPoolUseMmap,
Advanced: true,
Help: `Whether to use mmap buffers in internal memory pool.`,
},
}})
}
// Constants
const (
metaMtime = "Mtime" // the meta key to store mtime in - eg X-Amz-Meta-Mtime
metaMD5Hash = "Md5chksum" // the meta key to store md5hash in
listChunkSize = 1000 // number of items to read at once
maxRetries = 10 // number of retries to make of operations
maxSizeForCopy = 5 * 1024 * 1024 * 1024 // The maximum size of object we can COPY
minChunkSize = fs.SizeSuffix(s3manager.MinUploadPartSize)
maxUploadParts = 10000 // maximum allowed number of parts in a multi-part upload
minChunkSize = fs.SizeSuffix(1024 * 1024 * 5)
defaultUploadCutoff = fs.SizeSuffix(200 * 1024 * 1024)
maxUploadCutoff = fs.SizeSuffix(5 * 1024 * 1024 * 1024)
minSleep = 10 * time.Millisecond // In case of error, start at 10ms sleep.
memoryPoolFlushTime = fs.Duration(time.Minute) // flush the cached buffers after this long
memoryPoolUseMmap = false
)
// Options defines the configuration for this backend
type Options struct {
Provider string `config:"provider"`
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Region string `config:"region"`
Endpoint string `config:"endpoint"`
LocationConstraint string `config:"location_constraint"`
ACL string `config:"acl"`
BucketACL string `config:"bucket_acl"`
ServerSideEncryption string `config:"server_side_encryption"`
SSEKMSKeyID string `config:"sse_kms_key_id"`
StorageClass string `config:"storage_class"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
CopyCutoff fs.SizeSuffix `config:"copy_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DisableChecksum bool `config:"disable_checksum"`
SessionToken string `config:"session_token"`
UploadConcurrency int `config:"upload_concurrency"`
ForcePathStyle bool `config:"force_path_style"`
V2Auth bool `config:"v2_auth"`
UseAccelerateEndpoint bool `config:"use_accelerate_endpoint"`
LeavePartsOnError bool `config:"leave_parts_on_error"`
Provider string `config:"provider"`
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Region string `config:"region"`
Endpoint string `config:"endpoint"`
LocationConstraint string `config:"location_constraint"`
ACL string `config:"acl"`
BucketACL string `config:"bucket_acl"`
ServerSideEncryption string `config:"server_side_encryption"`
SSEKMSKeyID string `config:"sse_kms_key_id"`
StorageClass string `config:"storage_class"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
CopyCutoff fs.SizeSuffix `config:"copy_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DisableChecksum bool `config:"disable_checksum"`
SessionToken string `config:"session_token"`
UploadConcurrency int `config:"upload_concurrency"`
ForcePathStyle bool `config:"force_path_style"`
V2Auth bool `config:"v2_auth"`
UseAccelerateEndpoint bool `config:"use_accelerate_endpoint"`
LeavePartsOnError bool `config:"leave_parts_on_error"`
ListChunk int64 `config:"list_chunk"`
Enc encoder.MultiEncoder `config:"encoding"`
MemoryPoolFlushTime fs.Duration `config:"memory_pool_flush_time"`
MemoryPoolUseMmap bool `config:"memory_pool_use_mmap"`
}
// Fs represents a remote s3 server
type Fs struct {
name string // the name of the remote
root string // root of the bucket - ignore all objects above this
opt Options // parsed options
features *fs.Features // optional features
c *s3.S3 // the connection to the s3 server
ses *session.Session // the s3 session
rootBucket string // bucket part of root (if any)
rootDirectory string // directory part of root (if any)
cache *bucket.Cache // cache for bucket creation status
pacer *fs.Pacer // To pace the API calls
srv *http.Client // a plain http client
name string // the name of the remote
root string // root of the bucket - ignore all objects above this
opt Options // parsed options
features *fs.Features // optional features
c *s3.S3 // the connection to the s3 server
ses *session.Session // the s3 session
rootBucket string // bucket part of root (if any)
rootDirectory string // directory part of root (if any)
cache *bucket.Cache // cache for bucket creation status
pacer *fs.Pacer // To pace the API calls
srv *http.Client // a plain http client
poolMu sync.Mutex // mutex protecting memory pools map
pools map[int64]*pool.Pool // memory pools
}
// Object describes a s3 object
@@ -892,7 +970,7 @@ func (f *Fs) Features() *fs.Features {
// retryErrorCodes is a slice of error codes that we will retry
// See: https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
var retryErrorCodes = []int{
// 409, // Conflict - various states that could be resolved on a retry
500, // Internal Server Error - "We encountered an internal error. Please try again."
503, // Service Unavailable/Slow Down - "Reduce your request rate"
}
@@ -940,7 +1018,7 @@ func parsePath(path string) (root string) {
// relative to f.root
func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
bucketName, bucketPath = bucket.Split(path.Join(f.root, rootRelativePath))
return enc.FromStandardName(bucketName), enc.FromStandardPath(bucketPath)
return f.opt.Enc.FromStandardName(bucketName), f.opt.Enc.FromStandardPath(bucketPath)
}
// split returns bucket and bucketPath from the object
@@ -985,6 +1063,11 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
}),
ExpiryWindow: 3 * time.Minute,
},
// Pick up IAM role if we are in EKS
&stscreds.WebIdentityRoleProvider{
ExpiryWindow: 3 * time.Minute,
},
}
cred := credentials.NewChainCredentials(providers)
@@ -1006,11 +1089,11 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
if opt.Region == "" {
opt.Region = "us-east-1"
}
if opt.Provider == "Alibaba" || opt.Provider == "Netease" || opt.UseAccelerateEndpoint {
if opt.Provider == "AWS" || opt.Provider == "Alibaba" || opt.Provider == "Netease" || opt.UseAccelerateEndpoint {
opt.ForcePathStyle = false
}
awsConfig := aws.NewConfig().
WithMaxRetries(maxRetries).
WithMaxRetries(fs.Config.LowLevelRetries).
WithCredentials(cred).
WithHTTPClient(fshttp.NewClient(fs.Config)).
WithS3ForcePathStyle(opt.ForcePathStyle).
@@ -1116,15 +1199,23 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, err
}
pc := fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep)))
// Set pacer retries to 0 because we are relying on SDK retry mechanism.
// Setting it to 1 because in context of pacer it means 1 attempt.
pc.SetRetries(1)
f := &Fs{
name: name,
opt: *opt,
c: c,
ses: ses,
pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep))),
pacer: pc,
cache: bucket.NewCache(),
srv: fshttp.NewClient(fs.Config),
pools: make(map[int64]*pool.Pool),
}
f.setRoot(root)
f.features = (&fs.Features{
ReadMimeType: true,
@@ -1136,7 +1227,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}).Fill(f)
if f.rootBucket != "" && f.rootDirectory != "" {
// Check to see if the object exists
encodedDirectory := enc.FromStandardPath(f.rootDirectory)
encodedDirectory := f.opt.Enc.FromStandardPath(f.rootDirectory)
req := s3.HeadObjectInput{
Bucket: &f.rootBucket,
Key: &encodedDirectory,
@@ -1254,7 +1345,6 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
if directory != "" {
directory += "/"
}
maxKeys := int64(listChunkSize)
delimiter := ""
if !recurse {
delimiter = "/"
@@ -1275,14 +1365,14 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
//
// So we enable only on providers we know supports it properly, all others can retry when a
// XML Syntax error is detected.
var urlEncodeListings = (f.opt.Provider == "AWS" || f.opt.Provider == "Wasabi" || f.opt.Provider == "Alibaba")
var urlEncodeListings = (f.opt.Provider == "AWS" || f.opt.Provider == "Wasabi" || f.opt.Provider == "Alibaba" || f.opt.Provider == "Minio")
for {
// FIXME need to implement ALL loop
req := s3.ListObjectsInput{
Bucket: &bucket,
Delimiter: &delimiter,
Prefix: &directory,
MaxKeys: &maxKeys,
MaxKeys: &f.opt.ListChunk,
Marker: marker,
}
if urlEncodeListings {
@@ -1340,7 +1430,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
continue
}
}
remote = enc.ToStandardPath(remote)
remote = f.opt.Enc.ToStandardPath(remote)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", remote)
continue
@@ -1367,7 +1457,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
continue
}
}
remote = enc.ToStandardPath(remote)
remote = f.opt.Enc.ToStandardPath(remote)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", remote)
continue
@@ -1458,7 +1548,7 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
return nil, err
}
for _, bucket := range resp.Buckets {
bucketName := enc.ToStandardName(aws.StringValue(bucket.Name))
bucketName := f.opt.Enc.ToStandardName(aws.StringValue(bucket.Name))
f.cache.MarkOK(bucketName)
d := fs.NewDir(bucketName, aws.TimeValue(bucket.CreationDate))
entries = append(entries, d)
@@ -1709,8 +1799,9 @@ func (f *Fs) copyMultipart(ctx context.Context, req *s3.CopyObjectInput, dstBuck
defer func() {
if err != nil {
// We can try to abort the upload, but ignore the error.
fs.Debugf(nil, "Cancelling multipart copy")
_ = f.pacer.Call(func() (bool, error) {
_, err := f.c.AbortMultipartUploadWithContext(ctx, &s3.AbortMultipartUploadInput{
_, err := f.c.AbortMultipartUploadWithContext(context.Background(), &s3.AbortMultipartUploadInput{
Bucket: &dstBucket,
Key: &dstPath,
UploadId: uid,
@@ -1812,6 +1903,22 @@ func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
}
func (f *Fs) getMemoryPool(size int64) *pool.Pool {
f.poolMu.Lock()
defer f.poolMu.Unlock()
_, ok := f.pools[size]
if !ok {
f.pools[size] = pool.New(
time.Duration(f.opt.MemoryPoolFlushTime),
int(f.opt.ChunkSize),
f.opt.UploadConcurrency*fs.Config.Transfers,
f.opt.MemoryPoolUseMmap,
)
}
return f.pools[size]
}
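A short usage sketch of the pool machinery above, assuming rclone's lib/pool with the same constructor arguments the code passes (flush interval, buffer size, pool capacity, mmap flag); the values here are illustrative only:

package main

import (
	"fmt"
	"time"

	"github.com/rclone/rclone/lib/pool"
)

func main() {
	// One pool per buffer size, as in getMemoryPool above: buffers
	// are recycled between uploads and flushed after a minute idle.
	p := pool.New(time.Minute, 5*1024*1024, 8, false)
	buf := p.Get()        // borrow a 5 MiB buffer
	fmt.Println(len(buf)) // 5242880
	p.Put(buf)            // return it for reuse by the next part
}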
// ------------------------------------------------------------
// Fs returns the parent Fs
@@ -2007,6 +2114,191 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var warnStreamUpload sync.Once
func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, size int64, in io.Reader) (err error) {
f := o.fs
// make concurrency machinery
concurrency := f.opt.UploadConcurrency
if concurrency < 1 {
concurrency = 1
}
tokens := pacer.NewTokenDispenser(concurrency)
// calculate size of parts
partSize := int(f.opt.ChunkSize)
// size can be -1 here meaning we don't know the size of the incoming file. We use ChunkSize
// buffers (default 5MB), so with the maximum number of parts (10,000) the largest streamed
// file is about 48GB, which seems a reasonable limit.
if size == -1 {
warnStreamUpload.Do(func() {
fs.Logf(f, "Streaming uploads using chunk size %v will have maximum file size of %v",
f.opt.ChunkSize, fs.SizeSuffix(partSize*maxUploadParts))
})
} else {
// Adjust partSize until the number of parts is small enough.
if size/int64(partSize) >= maxUploadParts {
// Calculate partition size rounded up to the nearest MB
partSize = int((((size / maxUploadParts) >> 20) + 1) << 20)
}
}
memPool := f.getMemoryPool(int64(partSize))
var cout *s3.CreateMultipartUploadOutput
err = f.pacer.Call(func() (bool, error) {
var err error
cout, err = f.c.CreateMultipartUploadWithContext(ctx, &s3.CreateMultipartUploadInput{
Bucket: req.Bucket,
ACL: req.ACL,
Key: req.Key,
ContentType: req.ContentType,
Metadata: req.Metadata,
ServerSideEncryption: req.ServerSideEncryption,
SSEKMSKeyId: req.SSEKMSKeyId,
StorageClass: req.StorageClass,
})
return f.shouldRetry(err)
})
if err != nil {
return errors.Wrap(err, "multipart upload failed to initialise")
}
uid := cout.UploadId
defer func() {
if o.fs.opt.LeavePartsOnError {
return
}
if err != nil {
// We can try to abort the upload, but ignore the error.
fs.Debugf(o, "Cancelling multipart upload")
errCancel := f.pacer.Call(func() (bool, error) {
_, err := f.c.AbortMultipartUploadWithContext(context.Background(), &s3.AbortMultipartUploadInput{
Bucket: req.Bucket,
Key: req.Key,
UploadId: uid,
RequestPayer: req.RequestPayer,
})
return f.shouldRetry(err)
})
if errCancel != nil {
fs.Debugf(o, "Failed to cancel multipart upload: %v", errCancel)
}
}
}()
var (
g, gCtx = errgroup.WithContext(ctx)
finished = false
partsMu sync.Mutex // to protect parts
parts []*s3.CompletedPart
off int64
)
for partNum := int64(1); !finished; partNum++ {
// Get a block of memory from the pool and token which limits concurrency.
tokens.Get()
buf := memPool.Get()
// Fail fast, in case an errgroup managed function returns an error
// gCtx is cancelled. There is no point in uploading all the other parts.
if gCtx.Err() != nil {
break
}
// Read the chunk
var n int
n, err = readers.ReadFill(in, buf) // this can never return 0, nil
if err == io.EOF {
if n == 0 && partNum != 1 { // end if no data and if not first chunk
break
}
finished = true
} else if err != nil {
return errors.Wrap(err, "multipart upload failed to read source")
}
buf = buf[:n]
partNum := partNum
fs.Debugf(o, "multipart upload starting chunk %d size %v offset %v/%v", partNum, fs.SizeSuffix(n), fs.SizeSuffix(off), fs.SizeSuffix(size))
off += int64(n)
g.Go(func() (err error) {
partLength := int64(len(buf))
// create checksum of buffer for integrity checking
md5sumBinary := md5.Sum(buf)
md5sum := base64.StdEncoding.EncodeToString(md5sumBinary[:])
err = f.pacer.Call(func() (bool, error) {
uploadPartReq := &s3.UploadPartInput{
Body: bytes.NewReader(buf),
Bucket: req.Bucket,
Key: req.Key,
PartNumber: &partNum,
UploadId: uid,
ContentMD5: &md5sum,
ContentLength: &partLength,
RequestPayer: req.RequestPayer,
SSECustomerAlgorithm: req.SSECustomerAlgorithm,
SSECustomerKey: req.SSECustomerKey,
SSECustomerKeyMD5: req.SSECustomerKeyMD5,
}
uout, err := f.c.UploadPartWithContext(gCtx, uploadPartReq)
if err != nil {
if partNum <= int64(concurrency) {
return f.shouldRetry(err)
}
// retry all chunks once we have done the first batch
return true, err
}
partsMu.Lock()
parts = append(parts, &s3.CompletedPart{
PartNumber: &partNum,
ETag: uout.ETag,
})
partsMu.Unlock()
return false, nil
})
// return the memory and token
memPool.Put(buf[:partSize])
tokens.Put()
if err != nil {
return errors.Wrap(err, "multipart upload failed to upload part")
}
return nil
})
}
err = g.Wait()
if err != nil {
return err
}
// sort the completed parts by part number
sort.Slice(parts, func(i, j int) bool {
return *parts[i].PartNumber < *parts[j].PartNumber
})
err = f.pacer.Call(func() (bool, error) {
_, err := f.c.CompleteMultipartUploadWithContext(ctx, &s3.CompleteMultipartUploadInput{
Bucket: req.Bucket,
Key: req.Key,
MultipartUpload: &s3.CompletedMultipartUpload{
Parts: parts,
},
RequestPayer: req.RequestPayer,
UploadId: uid,
})
return f.shouldRetry(err)
})
if err != nil {
return errors.Wrap(err, "multipart upload failed to finalise")
}
return nil
}
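To make the part-size arithmetic above concrete, here is a standalone sketch of the same rounding: streamed uploads are capped at ChunkSize x 10,000 parts, while files of known size get their part size bumped up to the next whole MiB:

package main

import "fmt"

const maxUploadParts = 10000 // S3's hard limit on parts per upload

// adjustPartSize mirrors the logic in uploadMultipart: grow the part
// size to the next MiB whenever size/partSize would exceed the limit.
func adjustPartSize(size, partSize int64) int64 {
	if size/partSize >= maxUploadParts {
		partSize = ((size/maxUploadParts)>>20 + 1) << 20
	}
	return partSize
}

func main() {
	const MiB = int64(1 << 20)
	// 5 MiB chunks cap a streamed (unknown size) upload at ~48 GiB
	fmt.Println(5*MiB*maxUploadParts>>30, "GiB") // 48 GiB
	// a known 100 GiB file gets 11 MiB parts instead
	fmt.Println(adjustPartSize(100<<30, 5*MiB)/MiB, "MiB") // 11 MiB
}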
// Update the Object from in with modTime and size
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
bucket, bucketPath := o.split()
@@ -2018,41 +2310,19 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
size := src.Size()
multipart := size < 0 || size >= int64(o.fs.opt.UploadCutoff)
var uploader *s3manager.Uploader
if multipart {
uploader = s3manager.NewUploader(o.fs.ses, func(u *s3manager.Uploader) {
u.Concurrency = o.fs.opt.UploadConcurrency
u.LeavePartsOnError = o.fs.opt.LeavePartsOnError
u.S3 = o.fs.c
u.PartSize = int64(o.fs.opt.ChunkSize)
// size can be -1 here meaning we don't know the size of the incoming file. We use ChunkSize
// buffers here (default 5MB). With a maximum number of parts (10,000) this will be a file of
// 48GB which seems like a not too unreasonable limit.
if size == -1 {
warnStreamUpload.Do(func() {
fs.Logf(o.fs, "Streaming uploads using chunk size %v will have maximum file size of %v",
o.fs.opt.ChunkSize, fs.SizeSuffix(u.PartSize*s3manager.MaxUploadParts))
})
return
}
// Adjust PartSize until the number of parts is small enough.
if size/u.PartSize >= s3manager.MaxUploadParts {
// Calculate partition size rounded up to the nearest MB
u.PartSize = (((size / s3manager.MaxUploadParts) >> 20) + 1) << 20
}
})
}
// Set the mtime in the meta data
metadata := map[string]*string{
metaMtime: aws.String(swift.TimeToFloatString(modTime)),
}
// read the md5sum if available for non multipart and if
// disable checksum isn't present.
// read the md5sum if available
// - for non multipart
// - so we can add a ContentMD5
// - for multipart provided checksums aren't disabled
// - so we can add the md5sum in the metadata as metaMD5Hash
var md5sum string
if !multipart && !o.fs.opt.DisableChecksum {
if !multipart || !o.fs.opt.DisableChecksum {
hash, err := src.Hash(ctx, hash.MD5)
if err == nil && matchMd5.MatchString(hash) {
hashBytes, err := hex.DecodeString(hash)
@@ -2067,52 +2337,32 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Guess the content type
mimeType := fs.MimeType(ctx, src)
req := s3.PutObjectInput{
Bucket: &bucket,
ACL: &o.fs.opt.ACL,
Key: &bucketPath,
ContentType: &mimeType,
Metadata: metadata,
}
if md5sum != "" {
req.ContentMD5 = &md5sum
}
if o.fs.opt.ServerSideEncryption != "" {
req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
}
if o.fs.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
}
if o.fs.opt.StorageClass != "" {
req.StorageClass = &o.fs.opt.StorageClass
}
if multipart {
req := s3manager.UploadInput{
Bucket: &bucket,
ACL: &o.fs.opt.ACL,
Key: &bucketPath,
Body: in,
ContentType: &mimeType,
Metadata: metadata,
//ContentLength: &size,
}
if o.fs.opt.ServerSideEncryption != "" {
req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
}
if o.fs.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
}
if o.fs.opt.StorageClass != "" {
req.StorageClass = &o.fs.opt.StorageClass
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
_, err = uploader.UploadWithContext(ctx, &req)
return o.fs.shouldRetry(err)
})
err = o.uploadMultipart(ctx, &req, size, in)
if err != nil {
return err
}
} else {
req := s3.PutObjectInput{
Bucket: &bucket,
ACL: &o.fs.opt.ACL,
Key: &bucketPath,
ContentType: &mimeType,
Metadata: metadata,
}
if md5sum != "" {
req.ContentMD5 = &md5sum
}
if o.fs.opt.ServerSideEncryption != "" {
req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
}
if o.fs.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
}
if o.fs.opt.StorageClass != "" {
req.StorageClass = &o.fs.opt.StorageClass
}
// Create the request
putObj, _ := o.fs.c.PutObjectRequest(&req)


@@ -439,7 +439,12 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return nil, err
}
}
signer, err := ssh.ParsePrivateKeyWithPassphrase(key, []byte(clearpass))
var signer ssh.Signer
if clearpass == "" {
signer, err = ssh.ParsePrivateKey(key)
} else {
signer, err = ssh.ParsePrivateKeyWithPassphrase(key, []byte(clearpass))
}
if err != nil {
return nil, errors.Wrap(err, "failed to parse private key file")
}
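The branch above exists because ssh.ParsePrivateKeyWithPassphrase fails on unencrypted keys even when given an empty passphrase. A minimal sketch of the same dispatch, assuming golang.org/x/crypto/ssh; the key path is a hypothetical placeholder:

package main

import (
	"fmt"
	"io/ioutil"
	"log"

	"golang.org/x/crypto/ssh"
)

func loadSigner(keyFile, passphrase string) (ssh.Signer, error) {
	key, err := ioutil.ReadFile(keyFile)
	if err != nil {
		return nil, err
	}
	if passphrase == "" {
		// unencrypted key - must not use the passphrase variant
		return ssh.ParsePrivateKey(key)
	}
	return ssh.ParsePrivateKeyWithPassphrase(key, []byte(passphrase))
}

func main() {
	signer, err := loadSigner("/path/to/id_rsa", "") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(signer.PublicKey().Type())
}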
@@ -1210,7 +1215,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err != nil {
return errors.Wrap(err, "Update")
}
file, err := c.sftpClient.Create(o.path())
file, err := c.sftpClient.OpenFile(o.path(), os.O_WRONLY|os.O_CREATE|os.O_TRUNC)
o.fs.putSftpConnection(&c, err)
if err != nil {
return errors.Wrap(err, "Update Create failed")


@@ -8,13 +8,24 @@ import (
"testing"
"github.com/rclone/rclone/backend/sftp"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestSftp:",
RemoteName: "TestSFTPOpenssh:",
NilObject: (*sftp.Object)(nil),
})
}
func TestIntegration2(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: "TestSFTPRclone:",
NilObject: (*sftp.Object)(nil),
})
}


@@ -87,13 +87,14 @@ import (
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/sharefile/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/random"
@@ -101,8 +102,6 @@ import (
"golang.org/x/oauth2"
)
const enc = encodings.Sharefile
const (
rcloneClientID = "djQUPlHTUM9EvayYBWuKC5IrVIoQde46"
rcloneEncryptedClientSecret = "v7572bKhUindQL3yDnUAebmgP-QxiwT38JLxVPolcZBl6SSs329MtFzH73x7BeELmMVZtneUPvALSopUZ6VkhQ"
@@ -204,16 +203,30 @@ be set manually to something like: https://XXX.sharefile.com
`,
Advanced: true,
Default: "",
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.Base |
encoder.EncodeWin | // :?"*<>|
encoder.EncodeBackSlash | // \
encoder.EncodeCtl |
encoder.EncodeRightSpace |
encoder.EncodeRightPeriod |
encoder.EncodeLeftSpace |
encoder.EncodeLeftPeriod |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
RootFolderID string `config:"root_folder_id"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Endpoint string `config:"endpoint"`
RootFolderID string `config:"root_folder_id"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Endpoint string `config:"endpoint"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote cloud storage system
@@ -301,7 +314,7 @@ func (f *Fs) readMetaDataForIDPath(ctx context.Context, id, path string, directo
}
if path != "" {
opts.Path += "/ByPath"
opts.Parameters.Set("path", "/"+enc.FromStandardPath(path))
opts.Parameters.Set("path", "/"+f.opt.Enc.FromStandardPath(path))
}
var item api.Item
var resp *http.Response
@@ -595,7 +608,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
// CreateDir makes a directory with pathID as parent and name leaf
func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) {
var resp *http.Response
leaf = enc.FromStandardName(leaf)
leaf = f.opt.Enc.FromStandardName(leaf)
var req = api.Item{
Name: leaf,
FileName: leaf,
@@ -664,7 +677,7 @@ func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, fi
fs.Debugf(f, "Ignoring %q - unknown type %q", item.Name, item.Type)
continue
}
item.Name = enc.ToStandardName(item.Name)
item.Name = f.opt.Enc.ToStandardName(item.Name)
if fn(item) {
found = true
break
@@ -873,7 +886,7 @@ func (f *Fs) updateItem(ctx context.Context, id, leaf, directoryID string, modTi
"overwrite": {"false"},
},
}
leaf = enc.FromStandardName(leaf)
leaf = f.opt.Enc.FromStandardName(leaf)
// FIXME this appears to be a bug in the API
//
// If you set the modified time via PATCH then the server
@@ -1119,7 +1132,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dst fs.Obj
if err != nil {
return nil, err
}
srcLeaf = enc.FromStandardName(srcLeaf)
srcLeaf = f.opt.Enc.FromStandardName(srcLeaf)
_ = srcParentID
// Create temporary object
@@ -1127,7 +1140,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dst fs.Obj
if err != nil {
return nil, err
}
dstLeaf = enc.FromStandardName(dstLeaf)
dstLeaf = f.opt.Enc.FromStandardName(dstLeaf)
sameName := strings.ToLower(srcLeaf) == strings.ToLower(dstLeaf)
if sameName && srcParentID == dstParentID {
@@ -1390,7 +1403,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err != nil {
return err
}
leaf = enc.FromStandardName(leaf)
leaf = o.fs.opt.Enc.FromStandardName(leaf)
var req = api.UploadRequest{
Method: "standard",
Raw: true,


@@ -0,0 +1,160 @@
// Package api has type definitions for sugarsync
//
// Converted from the API docs with help from https://www.onlinetool.io/xmltogo/
package api
import (
"encoding/xml"
"time"
)
// AppAuthorization is used to request a refresh token
//
// The token is returned in the Location: field
type AppAuthorization struct {
XMLName xml.Name `xml:"appAuthorization"`
Username string `xml:"username"`
Password string `xml:"password"`
Application string `xml:"application"`
AccessKeyID string `xml:"accessKeyId"`
PrivateAccessKey string `xml:"privateAccessKey"`
}
// TokenAuthRequest is the request to get Authorization
type TokenAuthRequest struct {
XMLName xml.Name `xml:"tokenAuthRequest"`
AccessKeyID string `xml:"accessKeyId"`
PrivateAccessKey string `xml:"privateAccessKey"`
RefreshToken string `xml:"refreshToken"`
}
// Authorization is returned from the TokenAuthRequest
type Authorization struct {
XMLName xml.Name `xml:"authorization"`
Expiration time.Time `xml:"expiration"`
User string `xml:"user"`
}
// File represents a single file
type File struct {
Name string `xml:"displayName"`
Ref string `xml:"ref"`
DsID string `xml:"dsid"`
TimeCreated time.Time `xml:"timeCreated"`
Parent string `xml:"parent"`
Size int64 `xml:"size"`
LastModified time.Time `xml:"lastModified"`
MediaType string `xml:"mediaType"`
PresentOnServer bool `xml:"presentOnServer"`
FileData string `xml:"fileData"`
Versions string `xml:"versions"`
PublicLink PublicLink
}
// Collection represents
// - Workspace Collection
// - Sync Folders collection
// - Folder
type Collection struct {
Type string `xml:"type,attr"`
Name string `xml:"displayName"`
Ref string `xml:"ref"` // only for Folder
DsID string `xml:"dsid"`
TimeCreated time.Time `xml:"timeCreated"`
Parent string `xml:"parent"`
Collections string `xml:"collections"`
Files string `xml:"files"`
Contents string `xml:"contents"`
// Sharing bool `xml:"sharing>enabled,attr"`
}
// CollectionContents is the result of a list call
type CollectionContents struct {
//XMLName xml.Name `xml:"collectionContents"`
Start int `xml:"start,attr"`
HasMore bool `xml:"hasMore,attr"`
End int `xml:"end,attr"`
Collections []Collection `xml:"collection"`
Files []File `xml:"file"`
}
// User is returned from the /user call
type User struct {
XMLName xml.Name `xml:"user"`
Username string `xml:"username"`
Nickname string `xml:"nickname"`
Quota struct {
Limit int64 `xml:"limit"`
Usage int64 `xml:"usage"`
} `xml:"quota"`
Workspaces string `xml:"workspaces"`
SyncFolders string `xml:"syncfolders"`
Deleted string `xml:"deleted"`
MagicBriefcase string `xml:"magicBriefcase"`
WebArchive string `xml:"webArchive"`
MobilePhotos string `xml:"mobilePhotos"`
Albums string `xml:"albums"`
RecentActivities string `xml:"recentActivities"`
ReceivedShares string `xml:"receivedShares"`
PublicLinks string `xml:"publicLinks"`
MaximumPublicLinkSize int `xml:"maximumPublicLinkSize"`
}
// CreateFolder is posted to a folder URL to create a folder
type CreateFolder struct {
XMLName xml.Name `xml:"folder"`
Name string `xml:"displayName"`
}
// MoveFolder is posted to a folder URL to move a folder
type MoveFolder struct {
XMLName xml.Name `xml:"folder"`
Name string `xml:"displayName"`
Parent string `xml:"parent"`
}
// CreateSyncFolder is posted to the root folder URL to create a sync folder
type CreateSyncFolder struct {
XMLName xml.Name `xml:"syncFolder"`
Name string `xml:"displayName"`
}
// CreateFile is posted to a folder URL to create a file
type CreateFile struct {
XMLName xml.Name `xml:"file"`
Name string `xml:"displayName"`
MediaType string `xml:"mediaType"`
}
// MoveFile is posted to a file URL to move a file
type MoveFile struct {
XMLName xml.Name `xml:"file"`
Name string `xml:"displayName"`
Parent string `xml:"parent"`
}
// CopyFile copies a file from source
type CopyFile struct {
XMLName xml.Name `xml:"fileCopy"`
Source string `xml:"source,attr"`
Name string `xml:"displayName"`
}
// PublicLink is the URL and enabled flag for a public link
type PublicLink struct {
XMLName xml.Name `xml:"publicLink"`
URL string `xml:",chardata"`
Enabled bool `xml:"enabled,attr"`
}
// SetPublicLink can be used to enable the file for sharing
type SetPublicLink struct {
XMLName xml.Name `xml:"file"`
PublicLink PublicLink
}
// SetLastModified sets the modified time for a file
type SetLastModified struct {
XMLName xml.Name `xml:"file"`
LastModified time.Time `xml:"lastModified"`
}
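Since these types are plain encoding/xml mappings, decoding a listing is a single Unmarshal call. A hedged example using trimmed copies of the types above; the XML body is made up for illustration:

package main

import (
	"encoding/xml"
	"fmt"
)

// trimmed copies of the api types above, enough for the example
type File struct {
	Name string `xml:"displayName"`
	Size int64  `xml:"size"`
}

type CollectionContents struct {
	HasMore bool   `xml:"hasMore,attr"`
	Files   []File `xml:"file"`
}

const body = `<collectionContents hasMore="false">
  <file><displayName>a.txt</displayName><size>42</size></file>
</collectionContents>`

func main() {
	var contents CollectionContents
	if err := xml.Unmarshal([]byte(body), &contents); err != nil {
		panic(err)
	}
	fmt.Println(contents.Files[0].Name, contents.Files[0].Size) // a.txt 42
}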

File diff suppressed because it is too large


@@ -0,0 +1,59 @@
package sugarsync
import (
"bytes"
"io/ioutil"
"net/http"
"testing"
"github.com/stretchr/testify/assert"
)
func TestErrorHandler(t *testing.T) {
for _, test := range []struct {
name string
body string
code int
status string
want string
}{
{
name: "empty",
body: "",
code: 500,
status: "internal error",
want: `HTTP error 500 (internal error) returned body: ""`,
},
{
name: "unknown",
body: "<h1>unknown</h1>",
code: 500,
status: "internal error",
want: `HTTP error 500 (internal error) returned body: "<h1>unknown</h1>"`,
},
{
name: "blank",
body: "Nothing here <h3></h3>",
code: 500,
status: "internal error",
want: `HTTP error 500 (internal error) returned body: "Nothing here <h3></h3>"`,
},
{
name: "real",
body: "<h1>an error</h1>\n<h3>Can not move sync folder.</h3>\n<p>more stuff</p>",
code: 500,
status: "internal error",
want: `HTTP error 500 (internal error): Can not move sync folder.`,
},
} {
t.Run(test.name, func(t *testing.T) {
resp := http.Response{
Body: ioutil.NopCloser(bytes.NewBufferString(test.body)),
StatusCode: test.code,
Status: test.status,
}
got := errorHandler(&resp)
assert.Equal(t, test.want, got.Error())
})
}
}


@@ -0,0 +1,17 @@
// Test Sugarsync filesystem interface
package sugarsync_test
import (
"testing"
"github.com/rclone/rclone/backend/sugarsync"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestSugarSync:Test",
NilObject: (*sugarsync.Object)(nil),
})
}


@@ -16,15 +16,16 @@ import (
"github.com/ncw/swift"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
)
@@ -60,10 +61,14 @@ Rclone will still chunk files bigger than chunk_size when doing normal
copy operations.`,
Default: false,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.EncodeInvalidUtf8 |
encoder.EncodeSlash),
}}
const enc = encodings.Swift
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -109,7 +114,7 @@ func init() {
Value: "https://auth.storage.memset.com/v2.0",
}, {
Help: "OVH",
Value: "https://auth.cloud.ovh.net/v2.0",
Value: "https://auth.cloud.ovh.net/v3",
}},
}, {
Name: "user_id",
@@ -187,26 +192,27 @@ provider.`,
// Options defines the configuration for this backend
type Options struct {
EnvAuth bool `config:"env_auth"`
User string `config:"user"`
Key string `config:"key"`
Auth string `config:"auth"`
UserID string `config:"user_id"`
Domain string `config:"domain"`
Tenant string `config:"tenant"`
TenantID string `config:"tenant_id"`
TenantDomain string `config:"tenant_domain"`
Region string `config:"region"`
StorageURL string `config:"storage_url"`
AuthToken string `config:"auth_token"`
AuthVersion int `config:"auth_version"`
ApplicationCredentialID string `config:"application_credential_id"`
ApplicationCredentialName string `config:"application_credential_name"`
ApplicationCredentialSecret string `config:"application_credential_secret"`
StoragePolicy string `config:"storage_policy"`
EndpointType string `config:"endpoint_type"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
NoChunk bool `config:"no_chunk"`
EnvAuth bool `config:"env_auth"`
User string `config:"user"`
Key string `config:"key"`
Auth string `config:"auth"`
UserID string `config:"user_id"`
Domain string `config:"domain"`
Tenant string `config:"tenant"`
TenantID string `config:"tenant_id"`
TenantDomain string `config:"tenant_domain"`
Region string `config:"region"`
StorageURL string `config:"storage_url"`
AuthToken string `config:"auth_token"`
AuthVersion int `config:"auth_version"`
ApplicationCredentialID string `config:"application_credential_id"`
ApplicationCredentialName string `config:"application_credential_name"`
ApplicationCredentialSecret string `config:"application_credential_secret"`
StoragePolicy string `config:"storage_policy"`
EndpointType string `config:"endpoint_type"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
NoChunk bool `config:"no_chunk"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote swift server
@@ -325,7 +331,7 @@ func parsePath(path string) (root string) {
// relative to f.root
func (f *Fs) split(rootRelativePath string) (container, containerPath string) {
container, containerPath = bucket.Split(path.Join(f.root, rootRelativePath))
return enc.FromStandardName(container), enc.FromStandardPath(containerPath)
return f.opt.Enc.FromStandardName(container), f.opt.Enc.FromStandardPath(containerPath)
}
// split returns container and containerPath from the object
@@ -446,7 +452,7 @@ func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, n
// Check to see if the object exists - ignoring directory markers
var info swift.Object
var err error
encodedDirectory := enc.FromStandardPath(f.rootDirectory)
encodedDirectory := f.opt.Enc.FromStandardPath(f.rootDirectory)
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
info, rxHeaders, err = f.c.Object(f.rootContainer, encodedDirectory)
@@ -559,7 +565,7 @@ func (f *Fs) listContainerRoot(container, directory, prefix string, addContainer
if !recurse {
isDirectory = strings.HasSuffix(object.Name, "/")
}
remote := enc.ToStandardPath(object.Name)
remote := f.opt.Enc.ToStandardPath(object.Name)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", remote)
continue
@@ -642,7 +648,7 @@ func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err err
}
for _, container := range containers {
f.cache.MarkOK(container.Name)
d := fs.NewDir(enc.ToStandardName(container.Name), time.Time{}).SetSize(container.Bytes).SetItems(container.Count)
d := fs.NewDir(f.opt.Enc.ToStandardName(container.Name), time.Time{}).SetSize(container.Bytes).SetItems(container.Count)
entries = append(entries, d)
}
return entries, nil


@@ -20,7 +20,7 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestSwift:",
RemoteName: "TestSwiftAIO:",
NilObject: (*Object)(nil),
})
}


@@ -358,6 +358,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if err != nil {
return nil, err
}
f.srv.SetHeader("Referer", u.String())
if root != "" && !rootIsDir {
// Check to see if the root actually an existing file
@@ -837,7 +838,7 @@ func (f *Fs) copyOrMove(ctx context.Context, src fs.Object, remote string, metho
},
}
if f.useOCMtime {
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime(ctx).UnixNano())/1e9)
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%d", src.ModTime(ctx).Unix())
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
@@ -1137,7 +1138,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if o.fs.useOCMtime || o.fs.hasMD5 || o.fs.hasSHA1 {
opts.ExtraHeaders = map[string]string{}
if o.fs.useOCMtime {
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime(ctx).UnixNano())/1e9)
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%d", src.ModTime(ctx).Unix())
}
// Set one upload checksum
// Owncloud uses one checksum only to check the upload and stores its own SHA1 and MD5


@@ -5,13 +5,36 @@ import (
"testing"
"github.com/rclone/rclone/backend/webdav"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestWebdav:",
RemoteName: "TestWebdavNexcloud:",
NilObject: (*webdav.Object)(nil),
})
}
// TestIntegration runs integration tests against the remote
func TestIntegration2(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: "TestWebdavOwncloud:",
NilObject: (*webdav.Object)(nil),
})
}
// TestIntegration runs integration tests against the remote
func TestIntegration3(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: "TestWebdavRclone:",
NilObject: (*webdav.Object)(nil),
})
}


@@ -20,9 +20,9 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
@@ -30,8 +30,6 @@ import (
"golang.org/x/oauth2"
)
const enc = encodings.Yandex
//oAuth
const (
rcloneClientID = "ac39b43b9eba4cae8ffb788c06d816a8"
@@ -80,14 +78,23 @@ func init() {
Help: "Remove existing public link to file/folder with link command rather than creating.\nDefault is false, meaning link command will create or retrieve public link.",
Default: false,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// Of the control characters only \t \n \r are allowed;
// it doesn't seem worth making an exception for these
Default: (encoder.Display |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Token string `config:"token"`
Unlink bool `config:"unlink"`
Token string `config:"token"`
Unlink bool `config:"unlink"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote yandex
@@ -210,7 +217,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string, options *api.
Parameters: url.Values{},
}
opts.Parameters.Set("path", enc.FromStandardPath(path))
opts.Parameters.Set("path", f.opt.Enc.FromStandardPath(path))
if options.SortMode != nil {
opts.Parameters.Set("sort", options.SortMode.String())
@@ -237,7 +244,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string, options *api.
return nil, err
}
info.Name = enc.ToStandardName(info.Name)
info.Name = f.opt.Enc.ToStandardName(info.Name)
return &info, nil
}
@@ -364,7 +371,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if info.ResourceType == "dir" {
//list all subdirs
for _, element := range info.Embedded.Items {
element.Name = enc.ToStandardName(element.Name)
element.Name = f.opt.Enc.ToStandardName(element.Name)
remote := path.Join(dir, element.Name)
entry, err := f.itemToDirEntry(ctx, remote, &element)
if err != nil {
@@ -467,7 +474,7 @@ func (f *Fs) CreateDir(ctx context.Context, path string) (err error) {
if strings.IndexRune(path, ':') >= 0 {
path = "disk:" + path
}
opts.Parameters.Set("path", enc.FromStandardPath(path))
opts.Parameters.Set("path", f.opt.Enc.FromStandardPath(path))
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
@@ -581,7 +588,7 @@ func (f *Fs) delete(ctx context.Context, path string, hardDelete bool) (err erro
Parameters: url.Values{},
}
opts.Parameters.Set("path", enc.FromStandardPath(path))
opts.Parameters.Set("path", f.opt.Enc.FromStandardPath(path))
opts.Parameters.Set("permanently", strconv.FormatBool(hardDelete))
var resp *http.Response
@@ -653,8 +660,8 @@ func (f *Fs) copyOrMove(ctx context.Context, method, src, dst string, overwrite
Parameters: url.Values{},
}
opts.Parameters.Set("from", enc.FromStandardPath(src))
opts.Parameters.Set("path", enc.FromStandardPath(dst))
opts.Parameters.Set("from", f.opt.Enc.FromStandardPath(src))
opts.Parameters.Set("path", f.opt.Enc.FromStandardPath(dst))
opts.Parameters.Set("overwrite", strconv.FormatBool(overwrite))
var resp *http.Response
@@ -803,12 +810,12 @@ func (f *Fs) PublicLink(ctx context.Context, remote string) (link string, err er
}
opts := rest.Opts{
Method: "PUT",
Path: enc.FromStandardPath(path),
Path: f.opt.Enc.FromStandardPath(path),
Parameters: url.Values{},
NoResponse: true,
}
opts.Parameters.Set("path", enc.FromStandardPath(f.filePath(remote)))
opts.Parameters.Set("path", f.opt.Enc.FromStandardPath(f.filePath(remote)))
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
@@ -994,7 +1001,7 @@ func (o *Object) setCustomProperty(ctx context.Context, property string, value s
NoResponse: true,
}
opts.Parameters.Set("path", enc.FromStandardPath(o.filePath()))
opts.Parameters.Set("path", o.fs.opt.Enc.FromStandardPath(o.filePath()))
rcm := map[string]interface{}{
property: value,
}
@@ -1031,7 +1038,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
Parameters: url.Values{},
}
opts.Parameters.Set("path", enc.FromStandardPath(o.filePath()))
opts.Parameters.Set("path", o.fs.opt.Enc.FromStandardPath(o.filePath()))
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &dl)
@@ -1068,7 +1075,7 @@ func (o *Object) upload(ctx context.Context, in io.Reader, overwrite bool, mimeT
Parameters: url.Values{},
}
opts.Parameters.Set("path", enc.FromStandardPath(o.filePath()))
opts.Parameters.Set("path", o.fs.opt.Enc.FromStandardPath(o.filePath()))
opts.Parameters.Set("overwrite", strconv.FormatBool(overwrite))
err = o.fs.pacer.Call(func() (bool, error) {


@@ -45,6 +45,7 @@ docs = [
"koofr.md",
"mailru.md",
"mega.md",
"memory.md",
"azureblob.md",
"onedrive.md",
"opendrive.md",
@@ -54,6 +55,7 @@ docs = [
"premiumizeme.md",
"putio.md",
"sftp.md",
"sugarsync.md",
"union.md",
"webdav.md",
"yandex.md",


@@ -1,5 +1,6 @@
@echo off
echo Setting environment variables for mingw+WinFsp compile
set GOPATH=X:\go
set PATH=C:\Program Files\mingw-w64\i686-7.1.0-win32-dwarf-rt_v5-rev0\mingw32\bin;%PATH%
set GOPATH=Z:\go
rem set PATH=C:\Program Files\mingw-w64\i686-7.1.0-win32-dwarf-rt_v5-rev0\mingw32\bin;%PATH%
set PATH=C:\Program Files\mingw-w64\x86_64-8.1.0-win32-seh-rt_v6-rev0\mingw64\bin;%PATH%
set CPATH=C:\Program Files\WinFsp\inc\fuse;C:\Program Files (x86)\WinFsp\inc\fuse


@@ -36,6 +36,7 @@ import (
_ "github.com/rclone/rclone/cmd/memtest"
_ "github.com/rclone/rclone/cmd/mkdir"
_ "github.com/rclone/rclone/cmd/mount"
_ "github.com/rclone/rclone/cmd/mount2"
_ "github.com/rclone/rclone/cmd/move"
_ "github.com/rclone/rclone/cmd/moveto"
_ "github.com/rclone/rclone/cmd/ncdu"


@@ -42,11 +42,11 @@ You can use it like this to output a single file
rclone cat remote:path/to/file
Or like this to output any file in dir or subdirectories.
Or like this to output any file in dir or its subdirectories.
rclone cat remote:path/to/dir
Or like this to output any .txt files in dir or subdirectories.
Or like this to output any .txt files in dir or its subdirectories.
rclone --include "*.txt" cat remote:path/to/dir


@@ -226,7 +226,7 @@ func ShowStats() bool {
// Run the function with stats and retries if required
func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
var err error
var cmdErr error
stopStats := func() {}
if !showStats && ShowStats() {
showStats = true
@@ -238,11 +238,11 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
}
SigInfoHandler()
for try := 1; try <= *retries; try++ {
err = f()
err = fs.CountError(err)
cmdErr = f()
cmdErr = fs.CountError(cmdErr)
lastErr := accounting.GlobalStats().GetLastError()
if err == nil {
err = lastErr
if cmdErr == nil {
cmdErr = lastErr
}
if !Retry || !accounting.GlobalStats().Errored() {
if try > 1 {
@@ -278,15 +278,6 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
}
}
stopStats()
if err != nil {
nerrs := accounting.GlobalStats().GetErrors()
if nerrs <= 1 {
log.Printf("Failed to %s: %v", cmd.Name(), err)
} else {
log.Printf("Failed to %s with %d errors: last error was: %v", cmd.Name(), nerrs, err)
}
resolveExitCode(err)
}
if showStats && (accounting.GlobalStats().Errored() || *statsInterval > 0) {
accounting.GlobalStats().Log()
}
@@ -294,7 +285,7 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
// dump all running go-routines
if fs.Config.Dump&fs.DumpGoRoutines != 0 {
err = pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
err := pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
if err != nil {
fs.Errorf(nil, "Failed to dump goroutines: %v", err)
}
@@ -305,15 +296,22 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
c := exec.Command("lsof", "-p", strconv.Itoa(os.Getpid()))
c.Stdout = os.Stdout
c.Stderr = os.Stderr
err = c.Run()
err := c.Run()
if err != nil {
fs.Errorf(nil, "Failed to list open files: %v", err)
}
}
if accounting.GlobalStats().Errored() {
resolveExitCode(accounting.GlobalStats().GetLastError())
// Log the final error message and exit
if cmdErr != nil {
nerrs := accounting.GlobalStats().GetErrors()
if nerrs <= 1 {
log.Printf("Failed to %s: %v", cmd.Name(), cmdErr)
} else {
log.Printf("Failed to %s with %d errors: last error was: %v", cmd.Name(), nerrs, cmdErr)
}
}
resolveExitCode(cmdErr)
}
// CheckArgs checks there are enough arguments and prints a message if not


@@ -371,12 +371,7 @@ func (fsys *FS) Write(path string, buff []byte, ofst int64, fh uint64) (n int) {
if errc != 0 {
return errc
}
var err error
if fsys.VFS.Opt.CacheMode < vfs.CacheModeWrites || handle.Node().Mode()&os.ModeAppend == 0 {
n, err = handle.WriteAt(buff, ofst)
} else {
n, err = handle.Write(buff)
}
n, err := handle.WriteAt(buff, ofst)
if err != nil {
return translateError(err)
}
@@ -441,6 +436,11 @@ func (fsys *FS) Rename(oldPath string, newPath string) (errc int) {
return translateError(fsys.VFS.Rename(oldPath, newPath))
}
// Windows sometimes seems to send times that are the epoch which is
// 1601-01-01 +/- timezone so filter out times that are earlier than
// this.
var invalidDateCutoff = time.Date(1601, 1, 2, 0, 0, 0, 0, time.UTC)
// Utimens changes the access and modification times of a file.
func (fsys *FS) Utimens(path string, tmsp []fuse.Timespec) (errc int) {
defer log.Trace(path, "tmsp=%+v", tmsp)("errc=%d", &errc)
@@ -448,12 +448,16 @@ func (fsys *FS) Utimens(path string, tmsp []fuse.Timespec) (errc int) {
if errc != 0 {
return errc
}
var t time.Time
if tmsp == nil || len(tmsp) < 2 {
t = time.Now()
} else {
t = tmsp[1].Time()
fs.Debugf(path, "Utimens: Not setting time as timespec isn't complete: %v", tmsp)
return 0
}
t := tmsp[1].Time()
if t.Before(invalidDateCutoff) {
fs.Debugf(path, "Utimens: Not setting out of range time: %v", t)
return 0
}
fs.Debugf(path, "Utimens: SetModTime: %v", t)
return translateError(node.SetModTime(t))
}
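The cutoff works because an all-zero Windows FILETIME decodes as 1601-01-01 (shifted by at most a timezone), so anything earlier than 1601-01-02 UTC is treated as "no time supplied". A tiny self-contained check:

package main

import (
	"fmt"
	"time"
)

var invalidDateCutoff = time.Date(1601, 1, 2, 0, 0, 0, 0, time.UTC)

func main() {
	zeroFiletime := time.Date(1601, 1, 1, 0, 0, 0, 0, time.UTC)
	fmt.Println(zeroFiletime.Before(invalidDateCutoff)) // true - ignored
	fmt.Println(time.Now().Before(invalidDateCutoff))   // false - applied
}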


@@ -31,7 +31,7 @@ func init() {
if runtime.GOOS == "windows" {
name = "mount"
}
mountlib.NewMountCommand(name, Mount)
mountlib.NewMountCommand(name, false, Mount)
}
// mountOptions configures the options from the command line flags


@@ -146,7 +146,7 @@ you would do:
If any of the parameters passed is a password field, then rclone will
automatically obscure them before putting them in the config file.
If the remote uses oauth the token will be updated, if you don't
If the remote uses OAuth the token will be updated, if you don't
require this add an extra parameter thus:
rclone config update myremote swift env_auth true config_refresh_token false


@@ -2,6 +2,8 @@ package copyurl
import (
"context"
"errors"
"os"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs"
@@ -12,37 +14,55 @@ import (
var (
autoFilename = false
stdout = false
)
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &autoFilename, "auto-filename", "a", autoFilename, "Get the file name from the url and use it for destination file path")
flags.BoolVarP(cmdFlags, &autoFilename, "auto-filename", "a", autoFilename, "Get the file name from the URL and use it for destination file path")
flags.BoolVarP(cmdFlags, &stdout, "stdout", "", stdout, "Write the output to stdout rather than a file")
}
var commandDefinition = &cobra.Command{
Use: "copyurl https://example.com dest:path",
Short: `Copy url content to dest.`,
Long: `
Download urls content and copy it to destination
without saving it in tmp storage.
Download a URL's content and copy it to the destination without saving
it in temporary storage.
Setting --auto-filename flag will cause retrieving file name from url and using it in destination path.
Setting --auto-filename will cause the file name to be retrieved from
the URL (after any redirections) and used in the destination
path.
Setting --stdout or making the output file name "-" will cause the
output to be written to standard output.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
RunE: func(command *cobra.Command, args []string) (err error) {
cmd.CheckArgs(1, 2, command, args)
var dstFileName string
var fsdst fs.Fs
if autoFilename {
fsdst = cmd.NewFsDir(args[1:])
} else {
fsdst, dstFileName = cmd.NewFsDstFile(args[1:])
if !stdout {
if len(args) < 2 {
return errors.New("need 2 arguments if not using --stdout")
}
if args[1] == "-" {
stdout = true
} else if autoFilename {
fsdst = cmd.NewFsDir(args[1:])
} else {
fsdst, dstFileName = cmd.NewFsDstFile(args[1:])
}
}
cmd.Run(true, true, command, func() error {
_, err := operations.CopyURL(context.Background(), fsdst, dstFileName, args[0], autoFilename)
if stdout {
err = operations.CopyURLToWriter(context.Background(), args[0], os.Stdout)
} else {
_, err = operations.CopyURL(context.Background(), fsdst, dstFileName, args[0], autoFilename)
}
return err
})
return nil
},
}
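Putting the new flag handling together, some illustrative invocations (the URL and remote are placeholders):

rclone copyurl https://example.com/file.txt remote:dir/file.txt   # explicit destination
rclone copyurl -a https://example.com/file.txt remote:dir         # name taken from the URL
rclone copyurl --stdout https://example.com/file.txt              # write to standard output
rclone copyurl https://example.com/file.txt -                     # "-" also means stdout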


@@ -4,9 +4,8 @@ import (
"context"
"os"
"github.com/rclone/rclone/backend/dropbox/dbhash"
"github.com/rclone/rclone/backend/dropbox"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/spf13/cobra"
)
@@ -28,8 +27,7 @@ The output is in the same format as md5sum and sha1sum.
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, false, command, func() error {
dbHashType := hash.RegisterHash("Dropbox", 64, dbhash.New)
return operations.HashLister(context.Background(), dbHashType, fsrc, os.Stdout)
return operations.HashLister(context.Background(), dropbox.DbHashType, fsrc, os.Stdout)
})
},
}


@@ -17,7 +17,7 @@ var (
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlag := commandDefinition.Flags()
flags.FVarP(cmdFlag, &dedupeMode, "dedupe-mode", "", "Dedupe mode interactive|skip|first|newest|oldest|rename.")
flags.FVarP(cmdFlag, &dedupeMode, "dedupe-mode", "", "Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename.")
}
var commandDefinition = &cobra.Command{
@@ -94,6 +94,7 @@ Dedupe can be run non interactively using the ` + "`" + `--dedupe-mode` + "`" +
* ` + "`" + `--dedupe-mode newest` + "`" + ` - removes identical files then keeps the newest one.
* ` + "`" + `--dedupe-mode oldest` + "`" + ` - removes identical files then keeps the oldest one.
* ` + "`" + `--dedupe-mode largest` + "`" + ` - removes identical files then keeps the largest one.
* ` + "`" + `--dedupe-mode smallest` + "`" + ` - removes identical files then keeps the smallest one.
* ` + "`" + `--dedupe-mode rename` + "`" + ` - removes identical files then renames the rest to be different.
For example to rename all the identically named photos in your Google Photos directory, do


@@ -25,6 +25,7 @@ date: %s
title: "%s"
slug: %s
url: %s
# autogenerated - DO NOT EDIT, instead edit the source code in %s and as part of making a release run "make commanddocs"
---
`
@@ -67,7 +68,8 @@ rclone.org website.`,
name := filepath.Base(filename)
base := strings.TrimSuffix(name, path.Ext(name))
url := "/commands/" + strings.ToLower(base) + "/"
return fmt.Sprintf(gendocFrontmatterTemplate, now, strings.Replace(base, "_", " ", -1), base, url)
source := strings.Replace(strings.Replace(base, "rclone", "cmd", -1), "_", "/", -1) + "/"
return fmt.Sprintf(gendocFrontmatterTemplate, now, strings.Replace(base, "_", " ", -1), base, url, source)
}
linkHandler := func(name string) string {
base := strings.TrimSuffix(name, path.Ext(name))


@@ -7,13 +7,20 @@ import (
"os"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/spf13/cobra"
)
var (
outputBase64 = false
)
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &outputBase64, "base64", "", outputBase64, "Output base64 encoded hashsum")
}
var commandDefinition = &cobra.Command{
@@ -55,6 +62,9 @@ Then
}
fsrc := cmd.NewFsSrc(args[1:])
cmd.Run(false, false, command, func() error {
if outputBase64 {
return operations.HashListerBase64(context.Background(), ht, fsrc, os.Stdout)
}
return operations.HashLister(context.Background(), ht, fsrc, os.Stdout)
})
return nil
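For reference, the new flag changes only the output encoding of the digests, e.g. (hash name and remote are placeholders):

rclone hashsum MD5 remote:path             # hex digests, as before
rclone hashsum MD5 --base64 remote:path    # the same digests, base64 encoded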


@@ -10,6 +10,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configflags"
"github.com/rclone/rclone/fs/filter/filterflags"
"github.com/rclone/rclone/fs/log/logflags"
"github.com/rclone/rclone/fs/rc/rcflags"
"github.com/rclone/rclone/lib/atexit"
"github.com/spf13/cobra"
@@ -169,6 +170,7 @@ func setupRootCommand(rootCmd *cobra.Command) {
configflags.AddFlags(pflag.CommandLine)
filterflags.AddFlags(pflag.CommandLine)
rcflags.AddFlags(pflag.CommandLine)
logflags.AddFlags(pflag.CommandLine)
Root.Run = runRoot
Root.Flags().BoolVarP(&version, "version", "V", false, "Print the version number")


@@ -166,10 +166,11 @@ func Lsf(ctx context.Context, fsrc fs.Fs, out io.Writer) error {
list.SetDirSlash(dirSlash)
list.SetAbsolute(absolute)
var opt = operations.ListJSONOpt{
NoModTime: true,
DirsOnly: dirsOnly,
FilesOnly: filesOnly,
Recurse: recurse,
NoModTime: true,
NoMimeType: true,
DirsOnly: dirsOnly,
FilesOnly: filesOnly,
Recurse: recurse,
}
for _, char := range format {
@@ -188,6 +189,7 @@ func Lsf(ctx context.Context, fsrc fs.Fs, out io.Writer) error {
list.AddID()
case 'm':
list.AddMimeType()
opt.NoMimeType = false
case 'e':
list.AddEncrypted()
opt.ShowEncrypted = true


@@ -24,6 +24,7 @@ func init() {
flags.BoolVarP(cmdFlags, &opt.Recurse, "recursive", "R", false, "Recurse into the listing.")
flags.BoolVarP(cmdFlags, &opt.ShowHash, "hash", "", false, "Include hashes in the output (may take longer).")
flags.BoolVarP(cmdFlags, &opt.NoModTime, "no-modtime", "", false, "Don't read the modification time (can speed things up).")
flags.BoolVarP(cmdFlags, &opt.NoMimeType, "no-mimetype", "", false, "Don't read the mime type (can speed things up).")
flags.BoolVarP(cmdFlags, &opt.ShowEncrypted, "encrypted", "M", false, "Show the encrypted names.")
flags.BoolVarP(cmdFlags, &opt.ShowOrigIDs, "original", "", false, "Show the ID of the underlying Object.")
flags.BoolVarP(cmdFlags, &opt.FilesOnly, "files-only", "", false, "Show only files in the listing.")
@@ -59,7 +60,9 @@ The output is an array of Items, where each Item looks like this
If --hash is not specified the Hashes property won't be emitted.
If --no-modtime is specified then ModTime will be blank.
If --no-modtime is specified then ModTime will be blank. This can speed things up on remotes where reading the ModTime takes an extra request (eg s3, swift).
If --no-mimetype is specified then MimeType will be blank. This can speed things up on remotes where reading the MimeType takes an extra request (eg s3, swift).
If --encrypted is not specified the Encrypted won't be emitted.


@@ -1,4 +1,4 @@
// +build linux darwin freebsd
// +build linux,go1.13 darwin,go1.13 freebsd,go1.13
package mount


@@ -1,4 +1,4 @@
// +build linux darwin freebsd
// +build linux,go1.13 darwin,go1.13 freebsd,go1.13
package mount


@@ -1,6 +1,6 @@
// FUSE main Fs
// +build linux darwin freebsd
// +build linux,go1.13 darwin,go1.13 freebsd,go1.13
package mount


@@ -1,11 +1,10 @@
// +build linux darwin freebsd
// +build linux,go1.13 darwin,go1.13 freebsd,go1.13
package mount
import (
"context"
"io"
"os"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
@@ -42,12 +41,7 @@ var _ fusefs.HandleWriter = (*FileHandle)(nil)
// Write data to the file handle
func (fh *FileHandle) Write(ctx context.Context, req *fuse.WriteRequest, resp *fuse.WriteResponse) (err error) {
defer log.Trace(fh, "len=%d, offset=%d", len(req.Data), req.Offset)("written=%d, err=%v", &resp.Size, &err)
var n int
if fh.Handle.Node().VFS().Opt.CacheMode < vfs.CacheModeWrites || fh.Handle.Node().Mode()&os.ModeAppend == 0 {
n, err = fh.Handle.WriteAt(req.Data, req.Offset)
} else {
n, err = fh.Handle.Write(req.Data)
}
n, err := fh.Handle.WriteAt(req.Data, req.Offset)
if err != nil {
return translateError(err)
}

View File

@@ -1,6 +1,6 @@
// Package mount implements a FUSE mounting system for rclone remotes.
// +build linux darwin freebsd
// +build linux,go1.13 darwin,go1.13 freebsd,go1.13
package mount
@@ -22,7 +22,7 @@ import (
)
func init() {
mountlib.NewMountCommand("mount", Mount)
mountlib.NewMountCommand("mount", false, Mount)
}
// mountOptions configures the options from the command line flags
@@ -32,12 +32,14 @@ func mountOptions(device string) (options []fuse.MountOption) {
fuse.Subtype("rclone"),
fuse.FSName(device),
fuse.VolumeName(mountlib.VolumeName),
fuse.AsyncRead(),
// Options from benchmarking in the fuse module
//fuse.MaxReadahead(64 * 1024 * 1024),
//fuse.WritebackCache(),
}
if mountlib.AsyncRead {
options = append(options, fuse.AsyncRead())
}
if mountlib.NoAppleDouble {
options = append(options, fuse.NoAppleDouble())
}
@@ -51,7 +53,8 @@ func mountOptions(device string) (options []fuse.MountOption) {
options = append(options, fuse.AllowOther())
}
if mountlib.AllowRoot {
options = append(options, fuse.AllowRoot())
// options = append(options, fuse.AllowRoot())
fs.Errorf(nil, "Ignoring --allow-root. Support has been removed upstream - see https://github.com/bazil/fuse/issues/144 for more info")
}
if mountlib.DefaultPermissions {
options = append(options, fuse.DefaultPermissions())


@@ -1,4 +1,4 @@
// +build linux darwin freebsd
// +build linux,go1.13 darwin,go1.13 freebsd,go1.13
package mount


@@ -1,6 +1,14 @@
// Build for mount for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build !linux,!darwin,!freebsd
// Invert the build constraint: linux,go1.13 darwin,go1.13 freebsd,go1.13
//
// !((linux&&go1.13) || (darwin&&go1.13) || (freebsd&&go1.13))
// == !(linux&&go1.13) && !(darwin&&go1.13) && !(freebsd&&go1.13)
// == (!linux || !go1.13) && (!darwin || !go1.13) && (!freebsd || !go1.13)
// +build !linux !go1.13
// +build !darwin !go1.13
// +build !freebsd !go1.13
package mount

cmd/mount2/file.go (new file)

@@ -0,0 +1,149 @@
// +build linux darwin,amd64
package mount2
import (
"context"
"fmt"
"io"
"syscall"
fusefs "github.com/hanwen/go-fuse/v2/fs"
"github.com/hanwen/go-fuse/v2/fuse"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/vfs"
)
// FileHandle is a resource identifier for opened files. Usually, a
// FileHandle should implement some of the FileXxxx interfaces.
//
// All of the FileXxxx operations can also be implemented at the
// InodeEmbedder level, for example, one can implement NodeReader
// instead of FileReader.
//
// FileHandles are useful in two cases: First, if the underlying
// storage system needs a handle for reading/writing. This is the
// case with Unix system calls, which need a file descriptor (See also
// the function `NewLoopbackFile`). Second, it is useful for
// implementing files whose contents are not tied to an inode. For
// example, a file like `/proc/interrupts` has no fixed content, but
// changes on each open call. This means that each file handle must
// have its own view of the content; this view can be tied to a
// FileHandle. Files that have such dynamic content should return the
// FOPEN_DIRECT_IO flag from their `Open` method. See directio_test.go
// for an example.
type FileHandle struct {
h vfs.Handle
}
// Create a new FileHandle
func newFileHandle(h vfs.Handle) *FileHandle {
return &FileHandle{
h: h,
}
}
// Check interface satisfied
var _ fusefs.FileHandle = (*FileHandle)(nil)
// The String method is for debug printing.
func (f *FileHandle) String() string {
return fmt.Sprintf("fh=%p(%s)", f, f.h.Node().Path())
}
// Read data from a file. The data should be returned as
// ReadResult, which may be constructed from the incoming
// `dest` buffer.
func (f *FileHandle) Read(ctx context.Context, dest []byte, off int64) (res fuse.ReadResult, errno syscall.Errno) {
var n int
var err error
defer log.Trace(f, "off=%d", off)("n=%d, off=%d, errno=%v", &n, &off, &errno)
n, err = f.h.ReadAt(dest, off)
if err == io.EOF {
err = nil
}
return fuse.ReadResultData(dest[:n]), translateError(err)
}
var _ fusefs.FileReader = (*FileHandle)(nil)
// Write the data into the file handle at given offset. After
// returning, the data will be reused and may not be referenced.
func (f *FileHandle) Write(ctx context.Context, data []byte, off int64) (written uint32, errno syscall.Errno) {
var n int
var err error
defer log.Trace(f, "off=%d", off)("n=%d, off=%d, errno=%v", &n, &off, &errno)
n, err = f.h.WriteAt(data, off)
return uint32(n), translateError(err)
}
var _ fusefs.FileWriter = (*FileHandle)(nil)
// Flush is called for the close(2) call on a file descriptor. In case
// of a descriptor that was duplicated using dup(2), it may be called
// more than once for the same FileHandle.
func (f *FileHandle) Flush(ctx context.Context) syscall.Errno {
return translateError(f.h.Flush())
}
var _ fusefs.FileFlusher = (*FileHandle)(nil)
// Release is called before a FileHandle is forgotten. The
// kernel ignores the return value of this method,
// so any cleanup that requires specific synchronization or
// could fail with I/O errors should happen in Flush instead.
func (f *FileHandle) Release(ctx context.Context) syscall.Errno {
return translateError(f.h.Release())
}
var _ fusefs.FileReleaser = (*FileHandle)(nil)
// Fsync is a signal to ensure writes to the Inode are flushed
// to stable storage.
func (f *FileHandle) Fsync(ctx context.Context, flags uint32) (errno syscall.Errno) {
return translateError(f.h.Sync())
}
var _ fusefs.FileFsyncer = (*FileHandle)(nil)
// Getattr reads attributes for an Inode. The library will ensure that
// Mode and Ino are set correctly. For files that are not opened with
// FOPEN_DIRECTIO, Size should be set so it can be read correctly. If
// returning zeroed permissions, the default behavior is to change the
// mode of 0755 (directory) or 0644 (files). This can be switched off
// with the Options.NullPermissions setting. If blksize is unset, 4096
// is assumed, and the 'blocks' field is set accordingly.
func (f *FileHandle) Getattr(ctx context.Context, out *fuse.AttrOut) (errno syscall.Errno) {
defer log.Trace(f, "")("attr=%v, errno=%v", &out, &errno)
setAttrOut(f.h.Node(), out)
return 0
}
var _ fusefs.FileGetattrer = (*FileHandle)(nil)
// Setattr sets attributes for an Inode.
func (f *FileHandle) Setattr(ctx context.Context, in *fuse.SetAttrIn, out *fuse.AttrOut) (errno syscall.Errno) {
defer log.Trace(f, "in=%v", in)("attr=%v, errno=%v", &out, &errno)
var err error
setAttrOut(f.h.Node(), out)
size, ok := in.GetSize()
if ok {
err = f.h.Truncate(int64(size))
if err != nil {
return translateError(err)
}
out.Attr.Size = size
}
mtime, ok := in.GetMTime()
if ok {
err = f.h.Node().SetModTime(mtime)
if err != nil {
return translateError(err)
}
out.Attr.Mtime = uint64(mtime.Unix())
out.Attr.Mtimensec = uint32(mtime.Nanosecond())
}
return 0
}
var _ fusefs.FileSetattrer = (*FileHandle)(nil)

cmd/mount2/fs.go (new file)

@@ -0,0 +1,131 @@
// FUSE main Fs
// +build linux darwin,amd64
package mount2
import (
"os"
"syscall"
"github.com/hanwen/go-fuse/v2/fuse"
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
)
// FS represents the top level filing system
type FS struct {
VFS *vfs.VFS
f fs.Fs
}
// NewFS creates a new file system from the fs.Fs passed in
func NewFS(f fs.Fs) *FS {
fsys := &FS{
VFS: vfs.New(f, &vfsflags.Opt),
f: f,
}
return fsys
}
// Root returns the root node
func (f *FS) Root() (node *Node, err error) {
defer log.Trace("", "")("node=%+v, err=%v", &node, &err)
root, err := f.VFS.Root()
if err != nil {
return nil, err
}
return newNode(f, root), nil
}
// SetDebug, if called, provides debug output through the log package.
func (f *FS) SetDebug(debug bool) {
fs.Debugf(f.f, "SetDebug %v", debug)
}
// get the Mode from a vfs Node
func getMode(node os.FileInfo) uint32 {
Mode := node.Mode().Perm()
if node.IsDir() {
Mode |= fuse.S_IFDIR
} else {
Mode |= fuse.S_IFREG
}
return uint32(Mode)
}
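A small standalone sketch of what getMode produces (not part of the diff; the local constants stand in for fuse.S_IFDIR and fuse.S_IFREG, which share values with the syscall package):

package main

import (
	"fmt"
	"io/fs"
)

const (
	sIFDIR = 0040000 // same value as fuse.S_IFDIR
	sIFREG = 0100000 // same value as fuse.S_IFREG
)

func getMode(perm fs.FileMode, isDir bool) uint32 {
	m := uint32(perm.Perm())
	if isDir {
		return m | sIFDIR
	}
	return m | sIFREG
}

func main() {
	fmt.Printf("%o\n", getMode(0755, true))  // 40755 - a directory
	fmt.Printf("%o\n", getMode(0644, false)) // 100644 - a regular file
}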
// fill in attr from node
func setAttr(node vfs.Node, attr *fuse.Attr) {
Size := uint64(node.Size())
const BlockSize = 512
Blocks := (Size + BlockSize - 1) / BlockSize
modTime := node.ModTime()
// set attributes
vfs := node.VFS()
attr.Owner.Gid = vfs.Opt.GID
attr.Owner.Uid = vfs.Opt.UID
attr.Mode = getMode(node)
attr.Size = Size
attr.Nlink = 1
attr.Blocks = Blocks
// attr.Blksize = BlockSize // not supported in freebsd/darwin, defaults to 4k if not set
s := uint64(modTime.Unix())
ns := uint32(modTime.Nanosecond())
attr.Atime = s
attr.Atimensec = ns
attr.Mtime = s
attr.Mtimensec = ns
attr.Ctime = s
attr.Ctimensec = ns
//attr.Rdev
}
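The Blocks calculation above is the usual round-up division: st_blocks counts 512-byte units, so any trailing partial block still costs a full one. A quick sketch (not from the commit):

package main

import "fmt"

// blocks mirrors the computation in setAttr: ceil(size / 512).
func blocks(size uint64) uint64 {
	const blockSize = 512
	return (size + blockSize - 1) / blockSize
}

func main() {
	for _, size := range []uint64{0, 1, 512, 513} {
		fmt.Println(size, "->", blocks(size)) // 0->0, 1->1, 512->1, 513->2
	}
}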
// fill in AttrOut from node
func setAttrOut(node vfs.Node, out *fuse.AttrOut) {
setAttr(node, &out.Attr)
out.SetTimeout(mountlib.AttrTimeout)
}
// fill in EntryOut from node
func setEntryOut(node vfs.Node, out *fuse.EntryOut) {
setAttr(node, &out.Attr)
out.SetEntryTimeout(mountlib.AttrTimeout)
out.SetAttrTimeout(mountlib.AttrTimeout)
}
// Translate errors from mountlib into Syscall error numbers
func translateError(err error) syscall.Errno {
if err == nil {
return 0
}
switch errors.Cause(err) {
case vfs.OK:
return 0
case vfs.ENOENT:
return syscall.ENOENT
case vfs.EEXIST:
return syscall.EEXIST
case vfs.EPERM:
return syscall.EPERM
case vfs.ECLOSED:
return syscall.EBADF
case vfs.ENOTEMPTY:
return syscall.ENOTEMPTY
case vfs.ESPIPE:
return syscall.ESPIPE
case vfs.EBADF:
return syscall.EBADF
case vfs.EROFS:
return syscall.EROFS
case vfs.ENOSYS:
return syscall.ENOSYS
case vfs.EINVAL:
return syscall.EINVAL
}
fs.Errorf(nil, "IO error: %v", err)
return syscall.EIO
}
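translateError matches on errors.Cause so that errors wrapped with github.com/pkg/errors still map to the right errno. A self-contained sketch of that unwrapping (not from the commit; errNotExist is a stand-in for the vfs sentinel errors):

package main

import (
	"fmt"
	"syscall"

	"github.com/pkg/errors"
)

// errNotExist stands in for a sentinel error such as vfs.ENOENT.
var errNotExist = errors.New("file does not exist")

func translate(err error) syscall.Errno {
	if err == nil {
		return 0
	}
	// errors.Cause walks the chain built by errors.Wrap, so the
	// switch compares against the original sentinel value.
	switch errors.Cause(err) {
	case errNotExist:
		return syscall.ENOENT
	}
	return syscall.EIO
}

func main() {
	err := errors.Wrap(errNotExist, "open failed")
	fmt.Println(translate(err) == syscall.ENOENT) // true
}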

cmd/mount2/mount.go (new file, 277 lines)

@@ -0,0 +1,277 @@
// Package mount2 implements a FUSE mounting system for rclone remotes.
// +build linux darwin,amd64
package mount2
import (
"fmt"
"log"
"os"
"os/signal"
"runtime"
"syscall"
fusefs "github.com/hanwen/go-fuse/v2/fs"
"github.com/hanwen/go-fuse/v2/fuse"
"github.com/okzk/sdnotify"
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/vfs"
)
func init() {
mountlib.NewMountCommand("mount2", true, Mount)
}
// mountOptions configures the options from the command line flags
//
// man mount.fuse for more info and note the -o flag for other options
func mountOptions(fsys *FS, f fs.Fs) (mountOpts *fuse.MountOptions) {
device := f.Name() + ":" + f.Root()
mountOpts = &fuse.MountOptions{
AllowOther: mountlib.AllowOther,
FsName: device,
Name: "rclone",
DisableXAttrs: true,
Debug: mountlib.DebugFUSE,
MaxReadAhead: int(mountlib.MaxReadAhead),
// RememberInodes: true,
// SingleThreaded: true,
/*
AllowOther bool
// Options are passed as -o string to fusermount.
Options []string
// Default is _DEFAULT_BACKGROUND_TASKS, 12. This numbers
// controls the allowed number of requests that relate to
// async I/O. Concurrency for synchronous I/O is not limited.
MaxBackground int
// Write size to use. If 0, use default. This number is
// capped at the kernel maximum.
MaxWrite int
// Max read ahead to use. If 0, use default. This number is
// capped at the kernel maximum.
MaxReadAhead int
// If IgnoreSecurityLabels is set, all security related xattr
// requests will return NO_DATA without passing through the
// user defined filesystem. You should only set this if you
// file system implements extended attributes, and you are not
// interested in security labels.
IgnoreSecurityLabels bool // ignoring labels should be provided as a fusermount mount option.
// If RememberInodes is set, we will never forget inodes.
// This may be useful for NFS.
RememberInodes bool
// Values shown in "df -T" and friends
// First column, "Filesystem"
FsName string
// Second column, "Type", will be shown as "fuse." + Name
Name string
// If set, wrap the file system in a single-threaded locking wrapper.
SingleThreaded bool
// If set, return ENOSYS for Getxattr calls, so the kernel does not issue any
// Xattr operations at all.
DisableXAttrs bool
// If set, print debugging information.
Debug bool
// If set, ask kernel to forward file locks to FUSE. If using,
// you must implement the GetLk/SetLk/SetLkw methods.
EnableLocks bool
// If set, ask kernel not to do automatic data cache invalidation.
// The filesystem is fully responsible for invalidating data cache.
ExplicitDataCacheControl bool
*/
}
var opts []string
// FIXME doesn't work opts = append(opts, fmt.Sprintf("max_readahead=%d", maxReadAhead))
if mountlib.AllowNonEmpty {
opts = append(opts, "nonempty")
}
if mountlib.AllowOther {
opts = append(opts, "allow_other")
}
if mountlib.AllowRoot {
opts = append(opts, "allow_root")
}
if mountlib.DefaultPermissions {
opts = append(opts, "default_permissions")
}
if fsys.VFS.Opt.ReadOnly {
opts = append(opts, "ro")
}
if mountlib.WritebackCache {
log.Printf("FIXME --write-back-cache not supported")
// FIXME opts = append(opts,fuse.WritebackCache())
}
// Some OS X only options
if runtime.GOOS == "darwin" {
opts = append(opts,
// VolumeName sets the volume name shown in Finder.
fmt.Sprintf("volname=%s", device),
// NoAppleXattr makes OSXFUSE disallow extended attributes with the
// prefix "com.apple.". This disables persistent Finder state and
// other such information.
"noapplexattr",
// NoAppleDouble makes OSXFUSE disallow files with names used by OS X
// to store extended attributes on file systems that do not support
// them natively.
//
// Such file names are:
//
// ._*
// .DS_Store
"noappledouble",
)
}
mountOpts.Options = opts
return mountOpts
}
// mount the file system
//
// The mount point will be ready when this returns.
//
// returns an error, and an error channel for the serve process to
// report an error when fusermount is called.
func mount(f fs.Fs, mountpoint string) (*vfs.VFS, <-chan error, func() error, error) {
fs.Debugf(f, "Mounting on %q", mountpoint)
fsys := NewFS(f)
// nodeFsOpts := &fusefs.PathNodeFsOptions{
// ClientInodes: false,
// Debug: mountlib.DebugFUSE,
// }
// nodeFs := fusefs.NewPathNodeFs(fsys, nodeFsOpts)
//mOpts := fusefs.NewOptions() // default options
// FIXME
// mOpts.EntryTimeout = 10 * time.Second
// mOpts.AttrTimeout = 10 * time.Second
// mOpts.NegativeTimeout = 10 * time.Second
//mOpts.Debug = mountlib.DebugFUSE
//conn := fusefs.NewFileSystemConnector(nodeFs.Root(), mOpts)
mountOpts := mountOptions(fsys, f)
// FIXME fill out
opts := fusefs.Options{
MountOptions: *mountOpts,
EntryTimeout: &mountlib.AttrTimeout,
AttrTimeout: &mountlib.AttrTimeout,
// UID
// GID
}
root, err := fsys.Root()
if err != nil {
return nil, nil, nil, err
}
rawFS := fusefs.NewNodeFS(root, &opts)
server, err := fuse.NewServer(rawFS, mountpoint, &opts.MountOptions)
if err != nil {
return nil, nil, nil, err
}
//mountOpts := &fuse.MountOptions{}
//server, err := fusefs.Mount(mountpoint, fsys, &opts)
// server, err := fusefs.Mount(mountpoint, root, &opts)
// if err != nil {
// return nil, nil, nil, err
// }
umount := func() error {
// Shutdown the VFS
fsys.VFS.Shutdown()
return server.Unmount()
}
// serverSettings := server.KernelSettings()
// fs.Debugf(f, "Server settings %+v", serverSettings)
// Serve the mount point in the background returning error to errChan
errs := make(chan error, 1)
go func() {
server.Serve()
errs <- nil
}()
fs.Debugf(f, "Waiting for the mount to start...")
err = server.WaitMount()
if err != nil {
return nil, nil, nil, err
}
fs.Debugf(f, "Mount started")
return fsys.VFS, errs, umount, nil
}
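The mount function runs the blocking Serve loop in a goroutine and hands the caller a buffered error channel, so the mountpoint is usable as soon as WaitMount returns. A minimal sketch of that serve/err-channel shape (not from the commit; serve stands in for server.Serve):

package main

import (
	"errors"
	"fmt"
	"time"
)

// serve stands in for the blocking server.Serve call.
func serve() error {
	time.Sleep(10 * time.Millisecond)
	return errors.New("unmounted externally")
}

func main() {
	errs := make(chan error, 1) // buffered so the goroutine never blocks
	go func() {
		errs <- serve()
	}()
	fmt.Println("mount ready, serving in the background")
	fmt.Println("serve finished:", <-errs)
}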
// Mount mounts the remote at mountpoint.
func Mount(f fs.Fs, mountpoint string) error {
// Mount it
vfs, errChan, unmount, err := mount(f, mountpoint)
if err != nil {
return errors.Wrap(err, "failed to mount FUSE fs")
}
sigInt := make(chan os.Signal, 1)
signal.Notify(sigInt, syscall.SIGINT, syscall.SIGTERM)
sigHup := make(chan os.Signal, 1)
signal.Notify(sigHup, syscall.SIGHUP)
atexit.Register(func() {
_ = unmount()
})
if err := sdnotify.Ready(); err != nil && err != sdnotify.ErrSdNotifyNoSocket {
return errors.Wrap(err, "failed to notify systemd")
}
waitloop:
for {
select {
// umount triggered outside the app
case err = <-errChan:
break waitloop
// Program abort: umount
case <-sigInt:
err = unmount()
break waitloop
// user sent SIGHUP to clear the cache
case <-sigHup:
root, err := vfs.Root()
if err != nil {
fs.Errorf(f, "Error reading root: %v", err)
} else {
root.ForgetAll()
}
}
}
_ = sdnotify.Stopping()
if err != nil {
return errors.Wrap(err, "failed to umount FUSE fs")
}
return nil
}
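Mount's wait loop multiplexes three events: the serve goroutine finishing, SIGINT/SIGTERM triggering an unmount, and SIGHUP flushing the directory cache. A compressed sketch of the same select loop (not from the commit; the timer stands in for errChan):

package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	sigInt := make(chan os.Signal, 1)
	signal.Notify(sigInt, syscall.SIGINT, syscall.SIGTERM)
	sigHup := make(chan os.Signal, 1)
	signal.Notify(sigHup, syscall.SIGHUP)
	done := time.After(100 * time.Millisecond) // stands in for errChan

waitloop:
	for {
		select {
		case <-done: // umount triggered outside the app
			break waitloop
		case <-sigInt: // program abort: unmount and exit
			fmt.Println("unmounting")
			break waitloop
		case <-sigHup: // clear the cache and keep serving
			fmt.Println("clearing directory cache")
		}
	}
}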

cmd/mount2/mount_test.go (new file, 13 lines)

@@ -0,0 +1,13 @@
// +build linux darwin,amd64
package mount2
import (
"testing"
"github.com/rclone/rclone/cmd/mountlib/mounttest"
)
func TestMount(t *testing.T) {
mounttest.RunTests(t, mount)
}

(new file, name not shown in this diff)

@@ -0,0 +1,7 @@
// Build for mount2 on unsupported platforms to stop go complaining
// about "no buildable Go source files"
// +build !linux
// +build !darwin !amd64
package mount2

cmd/mount2/node.go (new file, 400 lines)

@@ -0,0 +1,400 @@
// +build linux darwin,amd64
package mount2
import (
"context"
"os"
"path"
"syscall"
fusefs "github.com/hanwen/go-fuse/v2/fs"
"github.com/hanwen/go-fuse/v2/fuse"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/vfs"
)
// Node represents a directory or file
type Node struct {
fusefs.Inode
node vfs.Node
fsys *FS
}
// Node types must be InodeEmbedders
var _ fusefs.InodeEmbedder = (*Node)(nil)
// newNode creates a new fusefs.Node from a vfs Node
func newNode(fsys *FS, node vfs.Node) *Node {
return &Node{
node: node,
fsys: fsys,
}
}
// String used for pretty printing.
func (n *Node) String() string {
return n.node.Path()
}
// lookup a Node in a directory
func (n *Node) lookupVfsNodeInDir(leaf string) (vfsNode vfs.Node, errno syscall.Errno) {
dir, ok := n.node.(*vfs.Dir)
if !ok {
return nil, syscall.ENOTDIR
}
vfsNode, err := dir.Stat(leaf)
return vfsNode, translateError(err)
}
// // lookup a Dir given a path
// func (n *Node) lookupDir(path string) (dir *vfs.Dir, code fuse.Status) {
// node, code := fsys.lookupVfsNodeInDir(path)
// if !code.Ok() {
// return nil, code
// }
// dir, ok := n.(*vfs.Dir)
// if !ok {
// return nil, fuse.ENOTDIR
// }
// return dir, fuse.OK
// }
// // lookup a parent Dir given a path returning the dir and the leaf
// func (n *Node) lookupParentDir(filePath string) (leaf string, dir *vfs.Dir, code fuse.Status) {
// parentDir, leaf := path.Split(filePath)
// dir, code = fsys.lookupDir(parentDir)
// return leaf, dir, code
// }
// Statfs implements statistics for the filesystem that holds this
// Inode. If not defined, the `out` argument will be zeroed with an OK
// result. This is because OSX filesystems must implement Statfs, or the mount
// will not work.
func (n *Node) Statfs(ctx context.Context, out *fuse.StatfsOut) syscall.Errno {
defer log.Trace(n, "")("out=%+v", &out)
const blockSize = 4096
const fsBlocks = (1 << 50) / blockSize
out.Blocks = fsBlocks // Total data blocks in file system.
out.Bfree = fsBlocks // Free blocks in file system.
out.Bavail = fsBlocks // Free blocks in file system if you're not root.
out.Files = 1e9 // Total files in file system.
out.Ffree = 1e9 // Free files in file system.
out.Bsize = blockSize // Block size
out.NameLen = 255 // Maximum file name length?
out.Frsize = blockSize // Fragment size, smallest addressable data size in the file system.
mountlib.ClipBlocks(&out.Blocks)
mountlib.ClipBlocks(&out.Bfree)
mountlib.ClipBlocks(&out.Bavail)
return 0
}
var _ = (fusefs.NodeStatfser)((*Node)(nil))
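Since the VFS does not generally know the remote's true capacity, Statfs above advertises a fixed-size filesystem. A quick check of the numbers it reports (a sketch, not from the commit):

package main

import "fmt"

func main() {
	const blockSize = 4096
	var fsBlocks uint64 = (1 << 50) / blockSize // as in Statfs above
	fmt.Println(fsBlocks)                   // 274877906944 4 KiB blocks
	fmt.Println(fsBlocks * blockSize >> 40) // 1024 TiB, i.e. 1 PiB advertised
}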
// Getattr reads attributes for an Inode. The library will ensure that
// Mode and Ino are set correctly. For files that are not opened with
// FOPEN_DIRECTIO, Size should be set so it can be read correctly. If
// returning zeroed permissions, the default behavior is to change the
// mode to 0755 (directory) or 0644 (files). This can be switched off
// with the Options.NullPermissions setting. If blksize is unset, 4096
// is assumed, and the 'blocks' field is set accordingly.
func (n *Node) Getattr(ctx context.Context, f fusefs.FileHandle, out *fuse.AttrOut) syscall.Errno {
setAttrOut(n.node, out)
return 0
}
var _ = (fusefs.NodeGetattrer)((*Node)(nil))
// Setattr sets attributes for an Inode.
func (n *Node) Setattr(ctx context.Context, f fusefs.FileHandle, in *fuse.SetAttrIn, out *fuse.AttrOut) (errno syscall.Errno) {
defer log.Trace(n, "in=%v", in)("out=%#v, errno=%v", &out, &errno)
var err error
setAttrOut(n.node, out)
size, ok := in.GetSize()
if ok {
err = n.node.Truncate(int64(size))
if err != nil {
return translateError(err)
}
out.Attr.Size = size
}
mtime, ok := in.GetMTime()
if ok {
err = n.node.SetModTime(mtime)
if err != nil {
return translateError(err)
}
out.Attr.Mtime = uint64(mtime.Unix())
out.Attr.Mtimensec = uint32(mtime.Nanosecond())
}
return 0
}
var _ = (fusefs.NodeSetattrer)((*Node)(nil))
// Open opens an Inode (of regular file type) for reading. It
// is optional but recommended to return a FileHandle.
func (n *Node) Open(ctx context.Context, flags uint32) (fh fusefs.FileHandle, fuseFlags uint32, errno syscall.Errno) {
defer log.Trace(n, "flags=%#o", flags)("errno=%v", &errno)
// fuse flags are based off syscall flags as are os flags, so
// should be compatible
handle, err := n.node.Open(int(flags))
if err != nil {
return nil, 0, translateError(err)
}
// If size unknown then use direct io to read
if entry := n.node.DirEntry(); entry != nil && entry.Size() < 0 {
fuseFlags |= fuse.FOPEN_DIRECT_IO
}
return newFileHandle(handle), fuseFlags, 0
}
var _ = (fusefs.NodeOpener)((*Node)(nil))
// Lookup should find a direct child of a directory by the child's name. If
// the entry does not exist, it should return ENOENT and optionally
// set a NegativeTimeout in `out`. If it does exist, it should return
// attribute data in `out` and return the Inode for the child. A new
// inode can be created using `Inode.NewInode`. The new Inode will be
// added to the FS tree automatically if the return status is OK.
//
// If a directory does not implement NodeLookuper, the library looks
// for an existing child with the given name.
//
// The input to a Lookup is {parent directory, name string}.
//
// Lookup, if successful, must return an *Inode. Once the Inode is
// returned to the kernel, the kernel can issue further operations,
// such as Open or Getxattr on that node.
//
// A successful Lookup also returns an EntryOut. Among others, this
// contains file attributes (mode, size, mtime, etc.).
//
// FUSE supports other operations that modify the namespace. For
// example, the Symlink, Create, Mknod, Link methods all create new
// children in directories. Hence, they also return *Inode and must
// populate their fuse.EntryOut arguments.
func (n *Node) Lookup(ctx context.Context, name string, out *fuse.EntryOut) (inode *fusefs.Inode, errno syscall.Errno) {
defer log.Trace(n, "name=%q", name)("inode=%v, attr=%v, errno=%v", &inode, &out, &errno)
vfsNode, errno := n.lookupVfsNodeInDir(name)
if errno != 0 {
return nil, errno
}
newNode := &Node{
node: vfsNode,
fsys: n.fsys,
}
// FIXME
// out.SetEntryTimeout(dt time.Duration)
// out.SetAttrTimeout(dt time.Duration)
setEntryOut(vfsNode, out)
return n.NewInode(ctx, newNode, fusefs.StableAttr{Mode: out.Attr.Mode}), 0
}
var _ = (fusefs.NodeLookuper)((*Node)(nil))
// Opendir opens a directory Inode for reading its
// contents. The actual reading is driven from Readdir, so
// this method is just for performing sanity/permission
// checks. The default is to return success.
func (n *Node) Opendir(ctx context.Context) syscall.Errno {
if !n.node.IsDir() {
return syscall.ENOTDIR
}
return 0
}
var _ = (fusefs.NodeOpendirer)((*Node)(nil))
type dirStream struct {
nodes []os.FileInfo
i int
}
// HasNext indicates if there are further entries. HasNext
// might be called on already closed streams.
func (ds *dirStream) HasNext() bool {
return ds.i < len(ds.nodes)
}
// Next retrieves the next entry. It is only called if HasNext
// has previously returned true. The Errno return may be used to
// indicate I/O errors
func (ds *dirStream) Next() (de fuse.DirEntry, errno syscall.Errno) {
// defer log.Trace(nil, "")("de=%+v, errno=%v", &de, &errno)
fi := ds.nodes[ds.i]
de = fuse.DirEntry{
// Mode is the file's mode. Only the high bits (eg. S_IFDIR)
// are considered.
Mode: getMode(fi),
// Name is the basename of the file in the directory.
Name: path.Base(fi.Name()),
// Ino is the inode number.
Ino: 0, // FIXME
}
ds.i++
return de, 0
}
// Close releases resources related to this directory
// stream.
func (ds *dirStream) Close() {
}
var _ fusefs.DirStream = (*dirStream)(nil)
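dirStream is just a cursor over the already-fetched []os.FileInfo slice. The shape is easier to see stripped of the FUSE types (a sketch, not part of the commit):

package main

import "fmt"

// stream is a stripped-down dirStream: a slice plus an index,
// drained with HasNext/Next by the caller.
type stream struct {
	items []string
	i     int
}

func (s *stream) HasNext() bool { return s.i < len(s.items) }

func (s *stream) Next() string {
	item := s.items[s.i] // only called after HasNext returned true
	s.i++
	return item
}

func main() {
	ds := &stream{items: []string{"a", "b", "c"}}
	for ds.HasNext() {
		fmt.Println(ds.Next())
	}
}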
// Readdir opens a stream of directory entries.
//
// Readdir essentially returns a list of strings, and it is allowed
// for Readdir to return different results from Lookup. For example,
// you can return nothing for Readdir ("ls my-fuse-mount" is empty),
// while still implementing Lookup ("ls my-fuse-mount/a-specific-file"
// shows a single file).
//
// If a directory does not implement NodeReaddirer, a list of
// currently known children from the tree is returned. This means that
// static in-memory file systems need not implement NodeReaddirer.
func (n *Node) Readdir(ctx context.Context) (ds fusefs.DirStream, errno syscall.Errno) {
defer log.Trace(n, "")("ds=%v, errno=%v", &ds, &errno)
if !n.node.IsDir() {
return nil, syscall.ENOTDIR
}
fh, err := n.node.Open(os.O_RDONLY)
if err != nil {
return nil, translateError(err)
}
defer func() {
closeErr := fh.Close()
if errno == 0 && closeErr != nil {
errno = translateError(closeErr)
}
}()
items, err := fh.Readdir(-1)
if err != nil {
return nil, translateError(err)
}
return &dirStream{
nodes: items,
}, 0
}
var _ = (fusefs.NodeReaddirer)((*Node)(nil))
// Mkdir is similar to Lookup, but must create a directory entry and Inode.
// Default is to return EROFS.
func (n *Node) Mkdir(ctx context.Context, name string, mode uint32, out *fuse.EntryOut) (inode *fusefs.Inode, errno syscall.Errno) {
defer log.Trace(name, "mode=0%o", mode)("inode=%v, errno=%v", &inode, &errno)
dir, ok := n.node.(*vfs.Dir)
if !ok {
return nil, syscall.ENOTDIR
}
newDir, err := dir.Mkdir(name)
if err != nil {
return nil, translateError(err)
}
newNode := newNode(n.fsys, newDir)
setEntryOut(newNode.node, out)
newInode := n.NewInode(ctx, newNode, fusefs.StableAttr{Mode: out.Attr.Mode})
return newInode, 0
}
var _ = (fusefs.NodeMkdirer)((*Node)(nil))
// Create is similar to Lookup, but should create a new
// child. It typically also returns a FileHandle as a
// reference for future reads/writes.
// Default is to return EROFS.
func (n *Node) Create(ctx context.Context, name string, flags uint32, mode uint32, out *fuse.EntryOut) (node *fusefs.Inode, fh fusefs.FileHandle, fuseFlags uint32, errno syscall.Errno) {
defer log.Trace(n, "name=%q, flags=%#o, mode=%#o", name, flags, mode)("node=%v, fh=%v, flags=%#o, errno=%v", &node, &fh, &fuseFlags, &errno)
dir, ok := n.node.(*vfs.Dir)
if !ok {
return nil, nil, 0, syscall.ENOTDIR
}
// translate the fuse flags to os flags
osFlags := int(flags) | os.O_CREATE
file, err := dir.Create(name, osFlags)
if err != nil {
return nil, nil, 0, translateError(err)
}
handle, err := file.Open(osFlags)
if err != nil {
return nil, nil, 0, translateError(err)
}
fh = newFileHandle(handle)
// FIXME
// fh = &fusefs.WithFlags{
// File: fh,
// //FuseFlags: fuse.FOPEN_NONSEEKABLE,
// OpenFlags: flags,
// }
// Find the created node
vfsNode, errno := n.lookupVfsNodeInDir(name)
if errno != 0 {
return nil, nil, 0, errno
}
setEntryOut(vfsNode, out)
newNode := newNode(n.fsys, vfsNode)
fs.Debugf(nil, "attr=%#v", out.Attr)
newInode := n.NewInode(ctx, newNode, fusefs.StableAttr{Mode: out.Attr.Mode})
return newInode, fh, 0, 0
}
var _ = (fusefs.NodeCreater)((*Node)(nil))
// Unlink should remove a child from this directory. If the
// return status is OK, the Inode is removed as child in the
// FS tree automatically. Default is to return EROFS.
func (n *Node) Unlink(ctx context.Context, name string) (errno syscall.Errno) {
defer log.Trace(n, "name=%q", name)("errno=%v", &errno)
vfsNode, errno := n.lookupVfsNodeInDir(name)
if errno != 0 {
return errno
}
return translateError(vfsNode.Remove())
}
var _ = (fusefs.NodeUnlinker)((*Node)(nil))
// Rmdir is like Unlink but for directories.
// Default is to return EROFS.
func (n *Node) Rmdir(ctx context.Context, name string) (errno syscall.Errno) {
defer log.Trace(n, "name=%q", name)("errno=%v", &errno)
vfsNode, errno := n.lookupVfsNodeInDir(name)
if errno != 0 {
return errno
}
return translateError(vfsNode.Remove())
}
var _ = (fusefs.NodeRmdirer)((*Node)(nil))
// Rename should move a child from one directory to a different
// one. The change is effected in the FS tree if the return status is
// OK. Default is to return EROFS.
func (n *Node) Rename(ctx context.Context, oldName string, newParent fusefs.InodeEmbedder, newName string, flags uint32) (errno syscall.Errno) {
defer log.Trace(n, "oldName=%q, newParent=%v, newName=%q", oldName, newParent, newName)("errno=%v", &errno)
oldDir, ok := n.node.(*vfs.Dir)
if !ok {
return syscall.ENOTDIR
}
newParentNode, ok := newParent.(*Node)
if !ok {
fs.Errorf(n, "newParent was not a *Node")
return syscall.EIO
}
newDir, ok := newParentNode.node.(*vfs.Dir)
if !ok {
return syscall.ENOTDIR
}
return translateError(oldDir.Rename(oldName, newName, newDir))
}
var _ = (fusefs.NodeRenamer)((*Node)(nil))

(changed file, name not shown in this diff)

@@ -36,6 +36,7 @@ var (
NoAppleDouble = true // use noappledouble by default
NoAppleXattr = false // do not use noapplexattr by default
DaemonTimeout time.Duration // OSXFUSE only
+AsyncRead = true // do async reads by default
)
// Global constants
@@ -98,10 +99,11 @@ func checkMountpointOverlap(root, mountpoint string) error {
}
// NewMountCommand makes a mount command with the given name and Mount function
-func NewMountCommand(commandName string, Mount func(f fs.Fs, mountpoint string) error) *cobra.Command {
+func NewMountCommand(commandName string, hidden bool, Mount func(f fs.Fs, mountpoint string) error) *cobra.Command {
var commandDefinition = &cobra.Command{
-Use: commandName + " remote:path /path/to/mountpoint",
-Short: `Mount the remote as file system on a mountpoint.`,
+Use: commandName + " remote:path /path/to/mountpoint",
+Hidden: hidden,
+Short: `Mount the remote as file system on a mountpoint.`,
Long: `
rclone ` + commandName + ` allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
@@ -109,6 +111,11 @@ FUSE.
First set up your remote using ` + "`rclone config`" + `. Check it works with ` + "`rclone ls`" + ` etc.
+You can either run mount in foreground mode or background (daemon) mode. Mount runs in
+foreground mode by default, use the --daemon flag to specify background mode.
+Background mode is only supported on Linux and OSX, you can only run mount in
+foreground mode on Windows.
Start the mount like this
rclone ` + commandName + ` remote:path/to/files /path/to/local/mount
@@ -117,11 +124,15 @@ Or on Windows like this where X: is an unused drive letter
rclone ` + commandName + ` remote:path/to/files X:
-When the program ends, either via Ctrl+C or receiving a SIGINT or SIGTERM signal,
-the mount is automatically stopped.
+When running in background mode the user will have to stop the mount manually (specified below).
+When the program ends while in foreground mode, either via Ctrl+C or receiving
+a SIGINT or SIGTERM signal, the mount is automatically stopped.
The umount operation can fail, for example when the mountpoint is busy.
-When that happens, it is the user's responsibility to stop the mount manually with
+When that happens, it is the user's responsibility to stop the mount manually.
+Stopping the mount manually:
# Linux
fusermount -u /path/to/local/mount
@@ -157,6 +168,34 @@ infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Archit
which creates drives accessible for everyone on the system or
alternatively using [the nssm service manager](https://nssm.cc/usage).
#### Mount as a network drive
By default, rclone will mount the remote as a normal drive. However,
you can also mount it as a **Network Drive** (or **Network Share**, as
mentioned in some places).
Unlike other systems, Windows provides a different filesystem type for
network drives. Windows and other programs treat the network drives
and fixed/removable drives differently: In network drives, many I/O
operations are optimized, as the high latency and low reliability
(compared to a normal drive) of a network is expected.
Although many people prefer network shares to be mounted as normal
system drives, this might cause some issues, such as programs not
working as expected or freezes and errors while operating with the
mounted remote in Windows Explorer. If you experience any of those,
consider mounting rclone remotes as network shares, as Windows expects
normal drives to be fast and reliable, while cloud storage is far from
that. See also [Limitations](#limitations) section below for more
info.
Add "--fuse-flag --VolumePrefix=\server\share" to your "mount"
command, **replacing "share" with any other name of your choice if you
are mounting more than one remote**. Otherwise, the mountpoints will
conflict and your mounted filesystems will overlap.
[Read more about drive mapping](https://en.wikipedia.org/wiki/Drive_mapping)
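For example (hypothetical share names; pick your own), two remotes could be mounted as separate network shares like this
rclone ` + commandName + ` remote:path/to/files X: --fuse-flag --VolumePrefix=\server\files
rclone ` + commandName + ` second:path Y: --fuse-flag --VolumePrefix=\server\backup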
### Limitations
Without the use of "--vfs-cache-mode" this can only write files
@@ -318,6 +357,7 @@ be copied to the vfs cache before opening with --vfs-cache-mode full.
flags.BoolVarP(cmdFlags, &Daemon, "daemon", "", Daemon, "Run mount as a daemon (background mode).")
flags.StringVarP(cmdFlags, &VolumeName, "volname", "", VolumeName, "Set the volume name (not supported by all OSes).")
flags.DurationVarP(cmdFlags, &DaemonTimeout, "daemon-timeout", "", DaemonTimeout, "Time limit for rclone to respond to kernel (not supported by all OSes).")
+flags.BoolVarP(cmdFlags, &AsyncRead, "async-read", "", AsyncRead, "Use asynchronous reads.")
if runtime.GOOS == "darwin" {
flags.BoolVarP(cmdFlags, &NoAppleDouble, "noappledouble", "", NoAppleDouble, "Sets the OSXFUSE option noappledouble.")

Some files were not shown because too many files have changed in this diff.