mirror of https://github.com/rclone/rclone.git synced 2025-12-06 00:03:32 +00:00

Compare commits

...

199 Commits

Author SHA1 Message Date
Nick Craig-Wood
298cda4163 mount test: die after 60 seconds with traceback to debug #3154 2019-06-05 16:50:08 +01:00
Nick Craig-Wood
0408abc20e add -v to tests to debug #3154 2019-06-05 16:50:08 +01:00
Nick Craig-Wood
ac4c8d8dfc cmd/providers: add DefaultStr, ValueStr and Type fields
These fields are auto generated:
- DefaultStr - a string rendering of Default
- ValueStr - a string rendering of Value
- Type - the type of the option
2019-06-05 16:23:42 +01:00
Nick Craig-Wood
e2b6172f7d build: stop using a GITHUB_USER as secure variable to fix build log
Before this change any occurrences of `ncw` were escaped as `[secure]`
in the build log, which is needless obfuscation and quite confusing.
2019-06-05 16:23:16 +01:00
Nick Craig-Wood
32f2895472 Add garry415 to contributors 2019-06-03 21:12:55 +01:00
garry415
1124c423ee fs: add --ignore-case-sync for forced case insensitivity - fixes #2773 2019-06-03 21:12:10 +01:00
Nick Craig-Wood
cd5a2d80ca Add JorisE to contributors 2019-06-03 21:08:15 +01:00
JorisE
1fe0773da6 docs: Update Microsoft OneDrive numbering
I think an option has been added and now the manual for OneDrive is off by one.
2019-06-03 21:08:00 +01:00
Nick Craig-Wood
5a941cdcdc local: fix preallocate warning on Linux with ZFS
Under Linux, rclone attempts to preallocate files for efficiency.

Before this change, pre-allocation would fail on ZFS with the error

    Failed to pre-allocate: operation not supported

After this change rclone tries a different flag combination for ZFS,
then disables pre-allocation if that doesn't work.

Fixes #3066
2019-06-03 18:02:26 +01:00
Nick Craig-Wood
62681e45fb Add Philip Harvey to contributors 2019-06-03 17:34:44 +01:00
Philip Harvey
1a2fb52266 s3: make SetModTime work for GLACIER while syncing - Fixes #3224
Before this change rclone would fail with

    Failed to set modification time: InvalidObjectState: Operation is not valid for the source object's storage class

when attempting to set the modification time of an object in GLACIER.

After this change rclone will re-upload the object as part of a sync if it needs to change the modification time.

See: https://forum.rclone.org/t/suspected-bug-in-s3-or-compatible-sync-logic-to-glacier/10187
2019-06-03 15:28:19 +01:00
Nick Craig-Wood
ec4e7316f2 docs: update advice for avoiding mega logins 2019-05-29 17:37:53 +01:00
buengese
11264c4fb8 jottacloud: update docs regarding devices and mountpoints 2019-05-29 17:16:42 +01:00
buengese
25f7f2b60a jottacloud: Add support for selecting device and mountpoint. fixes #3069 2019-05-29 17:16:42 +01:00
Nick Craig-Wood
e7c20e0bce operations: make move and copy individual files obey --backup-dir
Before this change, when using rclone copy or move with --backup-dir
and the source was a single file, rclone would fail to use the backup
directory.

This change looks up the backup directory in the Fs cache and uses it
as appropriate.

This affects any commands which call operations.MoveFile or
operations.CopyFile which includes rclone move/moveto/copy/copyto
where the source is a single file.

Fixes #3219
2019-05-27 16:14:55 +01:00
Nick Craig-Wood
8ee6034b23 Look for Fs in the cache rather than calling NewFs directly
This will save operations when rclone is used in remote control mode
or with the same remote multiple times in the command line.
2019-05-27 16:14:55 +01:00
Nick Craig-Wood
206e1caa99 fs/cache: factor Fs caching from fs/rc into its own package 2019-05-27 16:14:55 +01:00
Dan Walters
f0e439de0d dlna: improve logging and error handling
Mostly trying to get logging to happen through rclone's log methods.
Added request logging, and a trace parameter that will dump the
entire request/response for debugging when dealing with poorly
written clients.

Also added a flag to specify the device's "Friendly Name" explicitly,
and made an attempt at allowing mime types in addition to video.
2019-05-27 14:42:33 +01:00
Dan Walters
e5464a2a35 dlna: add some additional metadata, headers, and samsung extensions
Again, mostly just copying what I see in other implementations.  This
does seem to have done the trick so that I can now pause, fast forward,
rewind, etc., on my Samsung F series.
2019-05-27 14:42:33 +01:00
Dan Walters
78d38dda56 dlna: icons and compatibility improvements
Brings in icons for devices to display.  Based on what some
other open implementations have done, it's worth having a simple
stub implementation of ConnectionManagerService.  Advertise
X_MS_MediaReceiverRegistrar as well, which sounds like it
is necessary for certain MSFT devices (like the Xbox).
2019-05-27 14:42:33 +01:00
Dan Walters
60bb01b22c dlna: refactor the serve mux
Trying to make it a little easier to understand and work on all the
available routes, etc.
2019-05-27 14:42:33 +01:00
Dan Walters
95a74e02c7 dlna: use a template to render the root service descriptor
For various reasons, it seems to make sense to move away from generating
the XML with objects.  Namespace support is minimal in Go, the objects we
have are in an upstream project, and some subtleties seem likely to
cause problems with poorly written clients.

This removes the empty <iconList></iconList>, but is otherwise the
same output.
2019-05-27 14:42:33 +01:00
Dan Walters
d014aef011 dlna: reformat descriptors with tabs
Reduces size of embedded assets.
2019-05-27 14:42:33 +01:00
Dan Walters
be8c23f0b4 dlna: use vfsgen for static assets
As more assets are added, using vfsgen makes things a bit easier.
2019-05-27 14:42:33 +01:00
Nick Craig-Wood
da3b685cd8 vendor: update github.com/pkg/sftp to fix sftp client issues
See: https://forum.rclone.org/t/failed-to-copy-sftp-folder-not-found-c-ftpsites-ssh-fx-failure/9778
See: https://github.com/pkg/sftp/issues/288
2019-05-24 15:03:23 +01:00
Nick Craig-Wood
9aac2d6965 drive: add notes that cleanup works in the background on drive 2019-05-22 20:48:59 +01:00
Gary Kim
81fad0f0e3 docs: fix typo in cache documentation 2019-05-20 19:20:59 +01:00
Nick Craig-Wood
cff85f0b95 docs: add note on failure to login to mega docs 2019-05-16 10:10:58 +01:00
Nick Craig-Wood
9c0dac4ccd Add Robert Marko to contributors 2019-05-15 13:36:15 +01:00
Robert Marko
5ccc2dcb8f s3: add config info for Wasabi's EU Central endpoint
Wasabi has had an EU Central endpoint for a couple of months now, so add it to the list.

Signed-off-by: Robert Marko <robimarko@gmail.com>
2019-05-15 13:35:55 +01:00
Gary Kim
8c5503631a docs: add sftp about documentation 2019-05-14 15:22:43 +01:00
Nick Craig-Wood
2f3d794ec6 Add id01 to contributors 2019-05-14 07:55:38 +01:00
id01
abeb12c6df lib/env: Make env_test.go support Windows 2019-05-14 07:55:08 +01:00
Nick Craig-Wood
9c6f3ae82c local: log errors when listing instead of returning an error
Before this change, rclone would return an error from the listing if
there was an unreadable directory, or if there was a problem stat-ing
a directory entry.  This was frustrating because the command aborted
completely at that point even though there was work it could still do.

After this change rclone lists the directories and reports ERRORs for
unreadable directories or problems stat-ing files, but does not return
an error from the listing.  It does set the error flag which means the
command will fail (and objects won't be deleted with `rclone sync`).

This brings rclone's behaviour exactly into line with rsync's
behaviour.  It does as much as possible, but doesn't let the errors
pass silently.

Fixes #3179
2019-05-13 18:30:33 +01:00
Nick Craig-Wood
870b15313e cmd: log an ERROR for all commands which exit with non-zero status
Before this change, rclone didn't report errors for commands which
didn't return an error directly.  For example `rclone ls` could
encounter an error and rclone would log nothing, even though the exit
code was non-zero.

After this change we always log a message if we are exiting with a
non-zero exit code.
2019-05-13 18:28:21 +01:00
Nick Craig-Wood
62a7e44e86 Add didil to contributors 2019-05-13 17:37:46 +01:00
didil
296e4936a0 install: linux skip man pages if no mandb - fixes #3175 2019-05-13 17:37:30 +01:00
Nick Craig-Wood
a0b9d4a239 docs: fix doc generators home directory in backend docs
Thanks @ctlaltdefeat for spotting this
2019-05-13 17:28:36 +01:00
Nick Craig-Wood
99bc013c0a crypt: remove stray debug in ChangeNotify 2019-05-12 16:50:03 +01:00
Nick Craig-Wood
d9cad9d10b mega: cleanup: add logs for -v and -vv 2019-05-12 10:46:21 +01:00
Nick Craig-Wood
0e23c4542f sync: fix integrations tests
2eb31a4f1d broke the integration tests for remotes which use
Copy+Delete as server side Move.
2019-05-12 09:50:20 +01:00
Nick Craig-Wood
f544234e26 gendocs: remove global flags from command help pages 2019-05-11 23:39:50 +01:00
Nick Craig-Wood
dbf9800cbc docs: add serve to README, main page and menu 2019-05-11 23:39:50 +01:00
Nick Craig-Wood
1f19b63264 serve sftp: serve an rclone remote over SFTP 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
5c0e5b85f7 Factor ShellExpand from sftp backend to lib/env 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
edda6d91cd Use go-homedir to read the home directory more reliably 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
1fefa6adfd vendor: add github.com/mitchellh/go-homedir 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
af030f74f5 vfs: make WriteAt for non cached files work with non-sequential writes
This makes WriteAt for non cached files wait a short time if it gets
an out of order write (which would normally cause an error) to see if
the gap will be filled with an in order write.

The makes the SFTP backend work fine with --vfs-cache-mode off
2019-05-11 23:39:04 +01:00
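
A minimal sketch of the idea described above, using a hypothetical sequential writer rather than rclone's actual vfs types: an out-of-order WriteAt polls briefly, waiting for earlier writes to close the gap, before giving up.

    // Hypothetical sketch: tolerate slightly out-of-order WriteAt calls by
    // waiting briefly for the gap to be filled by an in-order write.
    // Not rclone's actual vfs code.
    package main

    import (
        "errors"
        "fmt"
        "sync"
        "time"
    )

    type seqWriter struct {
        mu     sync.Mutex
        offset int64  // next expected write offset
        buf    []byte // data written so far
    }

    func (w *seqWriter) WriteAt(p []byte, off int64) (int, error) {
        deadline := time.Now().Add(100 * time.Millisecond)
        for {
            w.mu.Lock()
            if w.offset == off {
                w.buf = append(w.buf, p...)
                w.offset += int64(len(p))
                w.mu.Unlock()
                return len(p), nil
            }
            w.mu.Unlock()
            if time.Now().After(deadline) {
                return 0, errors.New("out of order write timed out")
            }
            time.Sleep(time.Millisecond)
        }
    }

    func main() {
        w := &seqWriter{}
        go w.WriteAt([]byte(" world"), 5) // arrives before the bytes at offset 0
        if _, err := w.WriteAt([]byte("hello"), 0); err != nil {
            panic(err)
        }
        time.Sleep(10 * time.Millisecond)
        w.mu.Lock()
        fmt.Println(string(w.buf)) // hello world
        w.mu.Unlock()
    }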
Nick Craig-Wood
ada8c22a97 sftp: send custom client version and debug server version 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
610466c18c sftp: fix about parsing of df results so it can cope with -ve results
This is useful when interacting with "serve sftp" which returns -ve
results when the corresponding value is unknown.
2019-05-11 23:39:04 +01:00
Nick Craig-Wood
9950bb9b7c about: fix crash if backend returns a nil usage 2019-05-11 23:39:04 +01:00
Nick Craig-Wood
7d70e92664 operations: enable multi threaded downloads - Fixes #2252
This implements the --multi-thread-cutoff and --multi-thread-streams
flags to control multi thread downloading to the local backend.
2019-05-11 23:35:19 +01:00
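
A sketch of the general technique (not rclone's implementation; the flag names above come from the commit, everything below is illustrative): split the object into ranges, fetch each range with an HTTP Range request in its own goroutine, and write each chunk at its offset.

    // Illustrative multi-threaded ranged download, assuming the server honours
    // HTTP Range requests. Not rclone's actual implementation.
    package sketch

    import (
        "fmt"
        "io"
        "net/http"
        "os"
        "sync"
    )

    // downloadRanged fetches size bytes of url into dest using `streams`
    // concurrent range requests (cf. --multi-thread-streams).
    func downloadRanged(url, dest string, size int64, streams int) error {
        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()
        chunk := size / int64(streams)
        errs := make(chan error, streams)
        var wg sync.WaitGroup
        for i := 0; i < streams; i++ {
            start := int64(i) * chunk
            end := start + chunk - 1
            if i == streams-1 {
                end = size - 1 // last stream takes the remainder
            }
            wg.Add(1)
            go func(start, end int64) {
                defer wg.Done()
                req, err := http.NewRequest("GET", url, nil)
                if err != nil {
                    errs <- err
                    return
                }
                req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", start, end))
                resp, err := http.DefaultClient.Do(req)
                if err != nil {
                    errs <- err
                    return
                }
                defer resp.Body.Close()
                buf, err := io.ReadAll(resp.Body)
                if err != nil {
                    errs <- err
                    return
                }
                _, err = out.WriteAt(buf, start) // *os.File supports concurrent WriteAt
                errs <- err
            }(start, end)
        }
        wg.Wait()
        close(errs)
        for err := range errs {
            if err != nil {
                return err
            }
        }
        return nil
    }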
Nick Craig-Wood
687cbf3ded operations: if --ignore-checksum is in effect, don't calculate checksum
Before this change we calculated the checksum which is potentially
time consuming and then ignored the result.  After the change we don't
calculate the checksum if we are about to ignore it.
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
c3af0a1eca local: only calculate the required hashes for big speedup
Before this change we calculated all possible hashes for the file when
the `Hashes` method was called.

After we only calculate the Hash requested.

Almost all uses of `Hash` just need one checksum.  This will slow down
`rclone lsjson` with the `--hash` flag.  Perhaps lsjson should have a
`--hash-type` flag.

However it will speed up sync/copy/move/check/md5sum/sha1sum etc.

Before it took 12.4 seconds to md5sum a 1GB file, after it takes 3.1
seconds which is the same time the md5sum utility takes.
2019-05-11 23:35:19 +01:00
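
The gist, as a standalone sketch (not the local backend's code): pick the hasher for the one requested hash type instead of feeding the data through every hasher.

    // Sketch of computing only the requested hash instead of all of them.
    // Not the actual local backend code.
    package main

    import (
        "crypto/md5"
        "crypto/sha1"
        "fmt"
        "hash"
        "io"
        "os"
    )

    func hashFile(path, hashType string) (string, error) {
        var h hash.Hash
        switch hashType {
        case "md5":
            h = md5.New()
        case "sha1":
            h = sha1.New()
        default:
            return "", fmt.Errorf("unsupported hash type %q", hashType)
        }
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        if _, err := io.Copy(h, f); err != nil {
            return "", err
        }
        return fmt.Sprintf("%x", h.Sum(nil)), nil
    }

    func main() {
        sum, err := hashFile("big.file", "md5") // only MD5 is calculated
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println(sum)
    }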
Nick Craig-Wood
822483aac5 accounting: enable accounting without passing through the stream #2252
This is in preparation for multithreaded downloads
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
2eb31a4f1d sync: move Transferring into operations.Copy
This makes the code more consistent with the operations code setting
the transfer statistics up.
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
0655738da6 operations: re-work reopen framework so it can take a RangeOption #2252
This is in preparation for multipart downloads.
2019-05-11 23:35:19 +01:00
Nick Craig-Wood
7c4fe3eb75 local: define OpenWriterAt interface and test and implement it #2252
This will enable multipart downloads in future commits
2019-05-11 23:35:19 +01:00
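
A hypothetical shape for such an optional interface (the exact name and signature in rclone may differ): a backend that can open a file for random-access writes returns an io.WriterAt, which is what a multipart downloader needs.

    // Hypothetical sketch of an "open for writing at offsets" optional
    // interface. The real rclone interface may differ.
    package sketch

    import "io"

    // OpenWriterAter is implemented by backends that can open a remote file
    // for concurrent, offset-addressed writes.
    type OpenWriterAter interface {
        // OpenWriterAt opens the file at remote with the given size and
        // returns something the downloader can call WriteAt on.
        OpenWriterAt(remote string, size int64) (io.WriterAt, error)
    }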
Stefan Breunig
72721f4c8d copyurl: honor --no-check-certificate 2019-05-11 17:44:58 +01:00
Nick Craig-Wood
0c60c00187 Add Peter Berbec to contributors 2019-05-11 17:40:17 +01:00
Peter Berbec
0d511b7878 cmd: implement --stats-one-line-date and --stats-one-line-date-format 2019-05-11 17:39:57 +01:00
Nick Craig-Wood
bd2a7ffcf4 Add Jeff Quinn to contributors 2019-05-11 17:26:56 +01:00
Nick Craig-Wood
7a5ee968e7 Add Jon to contributors 2019-05-11 17:26:56 +01:00
Jeff Quinn
c809334b3d ftp: add FTP List timeout, fixes #3086
The timeout is controlled by the --timeout flag
2019-05-11 17:26:23 +01:00
Animosity022
b88e50cc36 docs: Typo fixes with "a existing"
Fixed a typo with "a existing" to "an existing"
2019-05-11 16:49:48 +01:00
Jon
bbe28df800 docs: Fix typo: Dump HTTP bodies -> Dump HTTP headers 2019-05-11 16:40:40 +01:00
calistri
f865280afa Adds a public IP flag for ftp. Closes #3158
Fixed variable names
2019-05-09 22:52:21 +01:00
Nick Craig-Wood
8beab1aaf2 build: more pre go1.8 workarounds removed 2019-05-08 15:14:51 +01:00
Fionera
b9e16b36e5 Fix Multipart upload check
In the Documentation it states:
// If (opts.MultipartParams or opts.MultipartContentName) and
// opts.Body are set then CallJSON will do a multipart upload with a
// file attached.
2019-05-02 16:22:17 +01:00
Nick Craig-Wood
b68c3ce74d s3: support S3 Accelerated endpoints with --s3-use-accelerate-endpoint
Fixes #3123
2019-05-02 14:00:00 +01:00
Fabian Möller
d04b0b856a fserrors: use errors.Walk for the wrapped error types 2019-05-01 16:56:08 +01:00
Gary Kim
d0ff07bdb0 mega: add cleanup support
Fixes #3138
2019-05-01 16:32:34 +01:00
Nick Craig-Wood
577fda059d rc: fix race in tests 2019-05-01 16:09:50 +01:00
Nick Craig-Wood
49d2ab512d test_all: run restic integration tests against local backend 2019-05-01 16:09:50 +01:00
Nick Craig-Wood
9df322e889 tests: make test servers choose a random port to make more reliable
Tests have been randomly failing with messages like

    listen tcp 127.0.0.1:51778: bind: address already in use

Rework all the test servers so they choose a random free port on
startup and use that for the tests, to avoid this.
2019-05-01 16:09:50 +01:00
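
The usual trick, sketched below in isolation: ask the kernel for port 0 and read back the port it actually assigned, so parallel test runs can't collide.

    // Sketch: let the OS pick a free port for a test server instead of
    // hard-coding one, so parallel test runs don't clash.
    package main

    import (
        "fmt"
        "net"
        "net/http"
    )

    func main() {
        l, err := net.Listen("tcp", "127.0.0.1:0") // port 0 = any free port
        if err != nil {
            panic(err)
        }
        fmt.Println("test server listening on", l.Addr().String())
        _ = http.Serve(l, http.NotFoundHandler())
    }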
Nick Craig-Wood
8f89b03d7b vendor: update github.com/t3rm1n4l/go-mega and dependencies
This is to fix a crash reported in #3140
2019-05-01 16:09:50 +01:00
Fabian Möller
48c09608ea fix spelling 2019-04-30 14:12:18 +02:00
Nick Craig-Wood
7963320a29 lib/errors: don't panic on uncomparable errors #3123
Same fix as c5775cf73d
2019-04-26 09:56:20 +01:00
Nick Craig-Wood
81f8a5e0d9 Use golangci-lint to check everything
Now that this issue is fixed: https://github.com/golangci/golangci-lint/issues/204

We can use golangci-lint to check the printfuncs too.
2019-04-25 15:58:49 +01:00
Nick Craig-Wood
2763598bd1 Add Gary Kim to contributors 2019-04-25 10:51:47 +01:00
Gary Kim
49d7b0d278 sftp: add About support - fixes #3107
This adds support for using About with SFTP remotes. This works by calling the df command remotely.
2019-04-25 10:51:15 +01:00
Animosity022
3d475dc0ee mount: Fix poll interval documentation 2019-04-24 18:21:04 +01:00
Fionera
2657d70567 drive: fix move and copy from TeamDrive to GDrive 2019-04-24 18:11:34 +01:00
Nick Craig-Wood
45df37f55f Add Kyle E. Mitchell to contributors 2019-04-24 18:09:40 +01:00
Kyle E. Mitchell
81007c10cb doc: Fix "conververt" typo 2019-04-24 18:09:24 +01:00
Nick Craig-Wood
aba15f11d8 cache: note unsupported optional methods 2019-04-16 13:34:06 +01:00
Nick Craig-Wood
a57756a05c lsjson, lsf: support showing the Tier of the object 2019-04-16 13:34:06 +01:00
Nick Craig-Wood
eeab7a0a43 crypt: Implement Optional methods SetTier, GetTier - fixes #2895
This implements optional methods on Object
- ID
- SetTier
- GetTier

And declares that it will not implement MimeType for the FsCheck test.
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
ac8d1db8d3 crypt: support PublicLink (rclone link) of underlying backend - fixes #3042 2019-04-16 13:33:10 +01:00
Nick Craig-Wood
cd0d43fffb fs: add missing PublicLink to mask
This enables wrapping file systems to declare that they don't support
PublicLink if the underlying fs doesn't.
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
cdf12b1fc8 crypt: Fix wrapping of ChangeNotify to decrypt directories properly
Also change the way it is added to make the FsCheckWrap test pass
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
7981e450a4 crypt: make rclone dedupe work through crypt
Implement these optional methods:

- WrapFs
- SetWrapper
- MergeDirs
- DirCacheFlush

Fixes #2233 Fixes #2689
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
e7fc3dcd31 fs: copy the ID too when we copy a Directory object
This means that crypt which wraps directory objects will retain the ID
of the underlying object.
2019-04-16 13:33:10 +01:00
Nick Craig-Wood
2386c5adc1 hubic: fix tests for optional methods 2019-04-16 13:33:10 +01:00
Nick Craig-Wood
2f21aa86b4 fstest: add tests for coverage of optional methods for wrapping Fs 2019-04-16 13:33:10 +01:00
Nick Craig-Wood
16d8014cbb build: drop support for go1.8 2019-04-15 21:49:58 +01:00
Nick Craig-Wood
613a9bb86b vendor: update all dependencies 2019-04-15 20:12:56 +01:00
calisro
8190a81201 lsjson: added EncryptedPath to output - fixes #3094 2019-04-15 18:12:09 +01:00
Nick Craig-Wood
f5795db6d2 build: fix fetch_binaries not to fetch test binaries 2019-04-13 13:08:53 +01:00
Nick Craig-Wood
e2a2eb349f Start v1.47.0-DEV development 2019-04-13 13:08:37 +01:00
Nick Craig-Wood
a0d4fdb2fa Version v1.47.0 2019-04-13 11:01:58 +01:00
Nick Craig-Wood
a28239f005 filter: Make --files-from traverse as before unless --no-traverse is set
In c5ac96e9e7 we made --files-from only read the objects specified and
not scan directories.

This caused problems with Google drive (very very slow) and B2
(excessive API consumption) so it was decided to make the old
behaviour (traversing the directories) the default with --files-from
and use the existing --no-traverse flag (which has exactly the right
semantics) to enable the new non scanning behaviour.

See: https://forum.rclone.org/t/using-files-from-with-drive-hammers-the-api/8726

Fixes #3102 Fixes #3095
2019-04-12 17:16:49 +01:00
Nick Craig-Wood
b05da61c82 build: move linter build tags into Makefile to fix golangci-lint 2019-04-12 15:48:36 +01:00
Nick Craig-Wood
41f01da625 docs: add link to Go report card 2019-04-12 15:28:37 +01:00
Nick Craig-Wood
901811bb26 docs: Remove references to Google+ 2019-04-12 15:25:17 +01:00
Oliver Heyme
0d4a3520ad jottacloud: add device registration
jottacloud: Updated documentation
2019-04-11 16:31:27 +01:00
calistri
5855714474 lsjson: added --files-only and --dirs-only flags
Factored common code from lsf/lsjson into operations.ListJSON
2019-04-11 11:43:25 +01:00
Nick Craig-Wood
120de505a9 Add Manu to contributors 2019-04-11 10:22:17 +01:00
Manu
6e86526c9d s3: add support for "Glacier Deep Archive" storage class - fixes #3088 2019-04-11 10:21:41 +01:00
Nick Craig-Wood
0862dc9b2b build: update to Xenial in travis build to fix link errors 2019-04-10 15:22:21 +01:00
Nick Craig-Wood
1c301f9f7a drive: Fix creation of duplicates with server side copy - fixes #3067 2019-03-29 16:58:19 +00:00
Nick Craig-Wood
9f6b09dfaf Add Ben Boeckel to contributors 2019-03-28 15:13:17 +00:00
Ben Boeckel
3d424c6e08 docs: fix various typos 2019-03-28 15:12:51 +00:00
Nick Craig-Wood
6fb1c8f51c b2: ignore malformed src_last_modified_millis
This fixes rclone returning `listing failed: strconv.ParseInt` errors
when listing files which have a malformed `src_last_modified_millis`.
This is uploaded by the client so care is needed in interpreting it as
it can be malformed.

Fixes #3065
2019-03-25 15:51:45 +00:00
Nick Craig-Wood
626f0d1886 copy: account for server side copy bytes and obey --max-transfer 2019-03-25 15:36:38 +00:00
Nick Craig-Wood
9ee9fe3885 swift: obey Retry-After to fix OVH restore from cold storage
In as many methods as possible we attempt to obey the Retry-After
header where it is provided.

This means that when objects are being requested from OVH cold storage
rclone will sleep the correct amount of time before retrying.

If the sleeps are short it does them immediately; if long, it returns
an ErrorRetryAfter which will cause the outer retry to sleep before
retrying.

Fixes #3041
2019-03-25 13:41:34 +00:00
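
A generic sketch of the idea (illustrative names, not the swift backend's code): read the Retry-After header, sleep inline when the delay is short, and otherwise surface it so the outer retry loop can wait.

    // Generic sketch of honouring a Retry-After header. The names here are
    // illustrative, not rclone's swift backend.
    package sketch

    import (
        "fmt"
        "net/http"
        "strconv"
        "time"
    )

    // errRetryAfter tells the outer retry loop how long to wait.
    type errRetryAfter struct {
        wait time.Duration
        err  error
    }

    func (e errRetryAfter) Error() string {
        return fmt.Sprintf("%v: retry after %v", e.err, e.wait)
    }

    // shouldRetry sleeps short Retry-After delays inline and returns an
    // errRetryAfter for long ones so the high level retry can sleep instead.
    func shouldRetry(resp *http.Response, err error) (bool, error) {
        if resp == nil || resp.Header.Get("Retry-After") == "" {
            return false, err
        }
        secs, perr := strconv.Atoi(resp.Header.Get("Retry-After"))
        if perr != nil {
            return false, err
        }
        wait := time.Duration(secs) * time.Second
        if wait <= 5*time.Second {
            time.Sleep(wait) // short: just wait and retry here
            return true, err
        }
        return true, errRetryAfter{wait: wait, err: err} // long: defer to outer retry
    }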
Nick Craig-Wood
b0380aad95 vendor: update github.com/ncw/swift to help with #3041 2019-03-25 13:41:34 +00:00
Nick Craig-Wood
2065e73d0b cmd: implement RetryAfter errors which cause a sleep before a retry
Use NewRetryAfterError to return an error which will cause a high
level retry after the delay specified.
2019-03-25 13:41:34 +00:00
Nick Craig-Wood
d3e3bbedf3 docs: Fix typo - fixes #3071 2019-03-25 13:18:41 +00:00
Cnly
8d29d69ade onedrive: Implement graceful cancel 2019-03-19 15:54:51 +08:00
Nick Craig-Wood
6e70d88f54 swift: work around token expiry on CEPH
This implements the Expiry interface so token expiry works properly

This change makes sure that this change from the swift library works
correctly with rclone's custom authenticator.

> Renew the token 60s before the expiry time
>
> The v2 and v3 auth schemes both return the expiry time of the token,
> so instead of waiting for a 401 error, renew the token 60s before this
> time.
>
> This makes transfers more efficient and also works around a bug in
> CEPH which returns 403 instead of 401 when the token expires.
>
> http://tracker.ceph.com/issues/22223
2019-03-18 13:30:59 +00:00
Nick Craig-Wood
595fea757d vendor: update github.com/ncw/swift to bring in Expires changes 2019-03-18 13:30:59 +00:00
Nick Craig-Wood
bb80586473 bin/get-github-release: fetch the most recent not the least recent 2019-03-18 11:29:37 +00:00
Nick Craig-Wood
0d475958c7 Fix errors discovered with go vet nilness tool 2019-03-18 11:23:00 +00:00
Nick Craig-Wood
2728948fb0 Add xopez to contributors 2019-03-18 11:04:10 +00:00
Nick Craig-Wood
3756f211b5 Add Danil Semelenov to contributors 2019-03-18 11:04:10 +00:00
xopez
2faf2aed80 docs: Update Copyright to current Year 2019-03-18 11:03:45 +00:00
Nick Craig-Wood
1bd8183af1 build: use matrix build for travis
This makes the build more efficient, the .travis.yml file more
comprehensible and reduces the Makefile spaghetti.

Windows support is commented out for the moment as it isn't very
reliable yet.
2019-03-17 14:58:18 +00:00
Nick Craig-Wood
5aa706831f b2: ignore already_hidden error on remove
Sometimes (possibly through eventual consistency) b2 returns an
already_hidden error on a delete.  Ignore this since it is harmless.
2019-03-17 14:56:17 +00:00
Nick Craig-Wood
ac7e1dbf62 test_all: add the vfs tests to the integration tests
Fix failing tests for some remotes
2019-03-17 14:56:17 +00:00
Nick Craig-Wood
14ef4437e5 dedupe: fix bug introduced when converting to use walk.ListR #2902
Before the fix we were only de-duping the ListR batches.

Afterwards we dedupe everything.

This will have the consequence that rclone uses more memory as it will
build a map of all the directory names, not just the names in a given
directory.
2019-03-17 11:01:20 +00:00
Danil Semelenov
a0d2ab5b4f cmd: Fix autocompletion of remote paths with spaces - fixes #3047 2019-03-17 10:15:20 +00:00
Nick Craig-Wood
3bfde5f52a ftp: add --ftp-concurrency to limit maximum number of connections
Fixes #2166
2019-03-17 09:57:14 +00:00
Nick Craig-Wood
2b05bd9a08 rc: implement operations/publiclink the equivalent of rclone link
Fixes #3042
2019-03-17 09:41:31 +00:00
Nick Craig-Wood
1318be3b0a vendor: update github.com/goftp/server to fix hang while reading a file from the server
See: https://forum.rclone.org/t/minor-issue-with-linux-ftp-client-and-rclone-ftp-access-denied/8959
2019-03-17 09:30:57 +00:00
Nick Craig-Wood
f4a754a36b drive: add --skip-checksum-gphotos to ignore incorrect checksums on Google Photos
First implementation by @jammin84, re-written by @ncw

Fixes #2207
2019-03-17 09:10:51 +00:00
Nick Craig-Wood
fef73763aa lib/atexit: add SIGTERM to signals which run the exit handlers on unix 2019-03-16 17:47:02 +00:00
Nick Craig-Wood
7267d19ad8 fstest: Use walk.ListR for listing 2019-03-16 17:41:12 +00:00
Nick Craig-Wood
47099466c0 cache: Use walk.ListR for listing the temporary Fs. 2019-03-16 17:41:12 +00:00
Nick Craig-Wood
4376019062 dedupe: Use walk.ListR for listing commands.
This dramatically increases the speed (7x in my tests) of the de-dupe
as google drive supports ListR directly and dedupe did not work with
`--fast-list`.

Fixes #2902
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
e5f4210b09 serve restic: use walk.ListR for listing
This is effectively what the old code did anyway so this should not
make any functional changes.
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
d5f2df2f3d Use walk.ListR for listing operations
This will increase speed for backends which support ListR and will not
have the memory overhead of using --fast-list.

It also means that errors are queued until the end, so as much of the
remote as possible will be listed before returning an error.

Commands affected are:
- lsf
- ls
- lsl
- lsjson
- lsd
- md5sum/sha1sum/hashsum
- size
- delete
- cat
- settier
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
efd720b533 walk: Implement walk.ListR which will use ListR if at all possible
It otherwise has nearly the same interface as walk.Walk, which it
will fall back to if it can't use ListR.

Using walk.ListR will speed up file system operations by default, use
much less memory, and start immediately compared to supplying
--fast-list.
2019-03-16 17:41:12 +00:00
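
A sketch of a typical call site, modelled on the hunks changed in this commit (for example the cache backend diffs further down this page); treat those hunks as the authoritative signature.

    // Sketch of a walk.ListR call, modelled on the call sites changed in
    // this commit.
    package sketch

    import (
        "github.com/ncw/rclone/fs"
        "github.com/ncw/rclone/fs/walk"
    )

    // listAllObjects collects every object under dir. walk.ListR uses the
    // backend's ListR when available and falls back to walking otherwise.
    func listAllObjects(f fs.Fs, dir string) (objects []fs.Object, err error) {
        err = walk.ListR(f, dir, true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
            for _, entry := range entries {
                if o, ok := entry.(fs.Object); ok {
                    objects = append(objects, o)
                }
            }
            return nil
        })
        return objects, err
    }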
Nick Craig-Wood
047f00a411 filter: Add BoundedRecursion method
This indicates that the filter set could be satisfied by a bounded
directory recursion.
2019-03-16 17:41:12 +00:00
Nick Craig-Wood
bb5ac8efbe http: fix socket leak on 404 errors 2019-03-15 17:04:28 +00:00
Nick Craig-Wood
e62bbf761b http: add --http-no-slash for websites with directories with no slashes #3053
See: https://forum.rclone.org/t/is-there-a-way-to-log-into-an-htpp-server/8484
2019-03-15 17:04:06 +00:00
Nick Craig-Wood
54a2e99d97 http: remove duplicates from listings 2019-03-15 16:59:36 +00:00
Nick Craig-Wood
28230d93b4 sync: Implement --suffix-keep-extension for use with --suffix - fixes #3032 2019-03-15 14:21:39 +00:00
Florian Gamböck
3c4407442d cmd: fix completion of remotes
The previous behavior of the remotes completion was that only
alphanumeric characters were allowed in a remote name. This limitation
has been lifted somewhat by #2985, which also allowed an underscore.

With the new implementation introduced in this commit, the completion of
the remote name has been simplified: if there is no colon (":") in the
current word, then complete the remote name. Otherwise, complete the path
inside the specified remote. This allows correct completion of all
remote names that are allowed by the config (including - and _).
Actually it matches much more than that, even remote names that are not
allowed by the config, but in that case there would already be an
invalid identifier in the configuration file.

With this simpler string comparison, we can get rid of the regular
expression, which makes the completion multiple times faster. For a
sample benchmark, try the following:

     # Old way
     $ time bash -c 'for _ in {1..1000000}; do
         [[ remote:path =~ ^[[:alnum:]]*$ ]]; done'

     real    0m15,637s
     user    0m15,613s
     sys     0m0,024s

     # New way
     $ time bash -c 'for _ in {1..1000000}; do
         [[ remote:path != *:* ]]; done'

     real    0m1,324s
     user    0m1,304s
     sys     0m0,020s
2019-03-15 13:16:42 +00:00
Dan Walters
caf318d499 dlna: add connection manager service description
The UPnP MediaServer spec says that the ConnectionManager service is
required, and adding it was enough to get dlna support working on my
other TV (LG webOS 2.2.1).
2019-03-15 13:14:31 +00:00
Nick Craig-Wood
2fbb504b66 webdav: fix About/df when reading the available/total returns 0
Some WebDAV servers return an empty Available and Used which parses as 0.

This caused About to return the Total as 0 which can confuse mounted
file systems.

After this change we ignore the result if Available and Used are both 0.

See: https://forum.rclone.org/t/windows-mounted-webdav-drive-has-no-free-space/8938
2019-03-15 12:03:04 +00:00
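
Sketched in isolation (illustrative types, not the webdav backend's code): if both Available and Used parse as 0, treat the quota as unknown rather than reporting a total of 0.

    // Sketch: ignore a WebDAV quota report where both values are zero, since
    // that usually means "unknown" rather than "full". Illustrative types only.
    package sketch

    type usage struct {
        Total, Used, Free *int64 // nil means unknown
    }

    func aboutFromQuota(available, used int64) usage {
        if available == 0 && used == 0 {
            return usage{} // leave everything unknown instead of reporting Total == 0
        }
        total := used + available
        return usage{Total: &total, Used: &used, Free: &available}
    }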
Alex Chen
2b58d1a46f docs: onedrive: Add guide to refreshing token after MFA is enabled 2019-03-14 00:21:05 +08:00
Cnly
1582a21408 onedrive: Always add trailing colon to path when addressing items - #2720, #3039 2019-03-13 11:30:15 +08:00
Nick Craig-Wood
229898dcee Add Dan Walters to contributors 2019-03-11 17:31:46 +00:00
Dan Walters
95194adfd5 dlna: fix root XML service descriptor
The SCPD URL was being set after marshalling the XML, and thus coming
out blank.  Now works on my Samsung TV, and likely fixes some issues
reported by others in #2648.
2019-03-11 17:31:32 +00:00
Nick Craig-Wood
4827496234 webdav: fix race when creating directories - fixes #3035
Before this change a race condition existed in mkdir
- the directory was attempted to be created
- the parent didn't exist so it failed
- the parent was created
- the directory was created again

The last step failed as the directory was created in a different thread.

This was fixed by checking the error messages of MKCOL for both
directory creations, rather than only the first.
2019-03-11 16:20:05 +00:00
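
The shape of the fix, as a hypothetical helper (not the webdav backend's code): treat an "already exists" style error from either MKCOL call as success, because another thread may have created the directory in between.

    // Sketch of the race-tolerant Mkdir: ignore "already exists" errors from
    // both MKCOL calls, not only the first. Hypothetical helper.
    package sketch

    import "strings"

    // mkdirWithParent creates parent then dir using the supplied mkcol
    // function, ignoring "already exists" errors from either call.
    func mkdirWithParent(mkcol func(path string) error, parent, dir string) error {
        ignoreExists := func(err error) error {
            if err != nil && strings.Contains(err.Error(), "already exists") {
                return nil // lost the race to another thread; that's fine
            }
            return err
        }
        if err := ignoreExists(mkcol(parent)); err != nil {
            return err
        }
        return ignoreExists(mkcol(dir))
    }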
Nick Craig-Wood
415eeca6cf drive: fix range requests on 0 length files
Before this change a range request on a 0 length file would fail

    $ rclone cat --head 128 drive:test/emptyfile
    ERROR : open file failed: googleapi: Error 416: Request range not satisfiable, requestedRangeNotSatisfiable

To fix this we remove Range: headers on requests for zero length files.
2019-03-10 15:47:34 +00:00
Nick Craig-Wood
58d9a3e1b5 filter: reload filter when the options are set via the rc - fixes #3018 2019-03-10 13:09:44 +00:00
Nick Craig-Wood
cccadfa7ae rc: add ability for options blocks to register reload functions 2019-03-10 13:09:44 +00:00
ishuah
1b52f8d2a5 copy/sync/move: add --create-empty-src-dirs flag - fixes #2869 2019-03-10 11:56:38 +00:00
Nick Craig-Wood
2078ad68a5 gcs: Allow bucket policy only buckets - fixes #3014
This introduces a new config variable bucket_policy_only.  If this is
set then rclone:

- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set
2019-03-10 11:45:42 +00:00
Nick Craig-Wood
368ed9e67d docs: add a FAQ entry about --max-backlog 2019-03-09 16:19:24 +00:00
Nick Craig-Wood
7c30993bb7 Add Fionera to contributors 2019-03-09 16:19:24 +00:00
Fionera
55b9a4ed30 Add ServerSideAcrossConfig Flag and check for it. fixes #2728 2019-03-09 16:18:45 +00:00
jaKa
118a8b949e koofr: implemented a backend for Koofr cloud storage service.
Implemented a Koofr REST API backend.
Added said backend to tests.
Added documentation for said backend.
2019-03-06 13:41:43 +00:00
jaKa
1d14e30383 vendor: add github.com/koofr/go-koofrclient
* added koofr client SDK dep for koofr backend
2019-03-06 13:41:43 +00:00
Nick Craig-Wood
27714e29c3 s3: note incompatibility with CEPH Jewel - fixes #3015 2019-03-06 11:50:37 +00:00
Nick Craig-Wood
9f8e1a1dc5 drive: fix imports of text files
Before this change text file imports were ignored.  This was because
the mime type wasn't matched.

Fix this by adjusting the keys in the mime type maps as well as the
values.

See: https://forum.rclone.org/t/how-to-upload-text-files-to-google-drive-as-google-docs/9014
2019-03-05 17:20:31 +00:00
Nick Craig-Wood
1692c6bd0a vfs: shorten the locking window for vfs/refresh
Before this change we locked the root directory, recursively fetched
the listing, applied it, then unlocked the root directory.
After this change we recursively fetch the listing then apply it with
the root directory locked which shortens the time that the root
directory is locked greatly.

With the original method and the new method the subdirectories are
left unlocked and so could potentially be changed, leading to
inconsistencies.  This change makes the potential for inconsistencies
slightly worse by leaving the root directory unlocked, in exchange for
a much more responsive system while running vfs/refresh.

See: https://forum.rclone.org/t/rclone-rc-vfs-refresh-locking-directory-being-refreshed/9004
2019-03-05 14:17:42 +00:00
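
The pattern, in a generic sketch (not the vfs code): do the slow recursive fetch outside the lock, then take the lock only to apply the result.

    // Generic sketch of shortening a lock window: fetch outside the lock,
    // apply under the lock. Illustrative only; not rclone's vfs code.
    package sketch

    import "sync"

    type dirTree map[string][]string // path -> entries (illustrative)

    type root struct {
        mu      sync.Mutex
        entries dirTree
    }

    // refresh fetches the whole listing first (slow, unlocked) and only
    // locks the root while swapping the result in (fast).
    func (r *root) refresh(fetch func() (dirTree, error)) error {
        tree, err := fetch() // slow part runs without holding the lock
        if err != nil {
            return err
        }
        r.mu.Lock()
        r.entries = tree // quick apply while locked
        r.mu.Unlock()
        return nil
    }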
Nick Craig-Wood
d233efbf63 Add marcintustin to contributors 2019-03-01 17:10:26 +00:00
marcintustin
e9a45a5a34 googlecloudstorage: fall back to default application credentials
Fall back to default application credentials when all other credential sources fail

This change allows users with default application credentials
configured (notably when running on Google Compute Engine instances) to
dispense with explicitly configuring Google Cloud Storage credentials
in rclone's own configuration.
2019-03-01 18:05:31 +01:00
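
The fallback can be sketched with the oauth2/google helpers for Application Default Credentials; the exact wiring inside rclone differs, and the scope below is an assumption.

    // Sketch: fall back to Application Default Credentials (e.g. the metadata
    // server on a Compute Engine instance) when no explicit credentials are
    // configured. The scope constant is an assumption.
    package sketch

    import (
        "context"
        "net/http"

        "golang.org/x/oauth2/google"
    )

    func gcsClient(ctx context.Context, serviceAccountJSON []byte) (*http.Client, error) {
        const scope = "https://www.googleapis.com/auth/devstorage.full_control"
        if len(serviceAccountJSON) > 0 {
            conf, err := google.JWTConfigFromJSON(serviceAccountJSON, scope)
            if err != nil {
                return nil, err
            }
            return conf.Client(ctx), nil
        }
        // No explicit credentials: use whatever the environment provides.
        return google.DefaultClient(ctx, scope)
    }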
Nick Craig-Wood
f6eb5c6983 lib/pacer: fix test on macOS 2019-03-01 12:27:33 +00:00
Nick Craig-Wood
2bf19787d5 Add Dr.Rx to contributors 2019-03-01 12:25:16 +00:00
Dr.Rx
0ea3a57ecb azureblob: Enable MD5 checksums when uploading files bigger than the "Cutoff"
This enables MD5 checksum calculation and publication when uploading files above the "Cutoff" limit.
It was explicitly ignored in the case of multi-block (a.k.a. multipart) uploads to Azure Blob Storage.
2019-03-01 11:12:23 +01:00
Nick Craig-Wood
b353c730d8 vfs: make tests work on remotes which don't support About 2019-02-28 14:05:21 +00:00
Nick Craig-Wood
173dfbd051 vfs: read directory and check for a file before mkdir
Before this change when doing Mkdir the VFS layer could add the new
item to an unread directory which caused confusion.

It could also do mkdir on a file when run on a bucket based remote
which would temporarily overwrite the file with a directory.

Fixes #2993
2019-02-28 14:05:17 +00:00
Nick Craig-Wood
e3bceb9083 operations: fix Overlapping test for Windows native paths 2019-02-28 11:39:32 +00:00
Nick Craig-Wood
52c6b373cc Add calisro to contributors 2019-02-28 10:20:35 +00:00
calisro
0bc0f62277 Recommendation for creating own client ID 2019-02-28 11:20:08 +01:00
Cnly
12c8ee4b4b atexit: allow functions to be unregistered 2019-02-27 23:37:24 +01:00
Nick Craig-Wood
5240f9d1e5 sync: fix integration tests to check correct error 2019-02-27 22:05:16 +00:00
Nick Craig-Wood
997654d77d ncdu: fix display corruption with Chinese characters - #2989 2019-02-27 09:55:28 +00:00
Nick Craig-Wood
f1809451f6 docs: add more examples of config-less usage 2019-02-27 09:41:40 +00:00
Nick Craig-Wood
84c650818e sync: don't allow syncs on overlapping remotes - fixes #2932 2019-02-26 19:25:52 +00:00
Nick Craig-Wood
c5775cf73d fserrors: don't panic on uncomparable errors 2019-02-26 15:39:16 +00:00
Nick Craig-Wood
dca482e058 Add Alexandru Bumbacea to contributors 2019-02-26 15:39:16 +00:00
Nick Craig-Wood
6943169cef Add Six to contributors 2019-02-26 15:38:25 +00:00
Alexandru Bumbacea
4fddec113c sftp: allow custom ssh client config 2019-02-26 16:37:54 +01:00
Six
2114fd8f26 cmd: Fix tab-completion for remotes with underscores in their names 2019-02-26 16:25:45 +01:00
Nick Craig-Wood
63bb6de491 build: update to use go1.12 for the build 2019-02-26 13:18:31 +00:00
Nick Craig-Wood
0a56a168ff bin/get-github-release.go: scrape the downloads page to avoid the API limit
This should fix pull requests build failures which can't use the
github token.
2019-02-25 21:34:59 +00:00
Nick Craig-Wood
88e22087a8 Add Nestar47 to contributors 2019-02-25 21:34:59 +00:00
Nestar47
9404ed703a drive: add docs on team drives and --fast-list eventual consistency 2019-02-25 21:46:27 +01:00
Nick Craig-Wood
c7ecccd5ca mount: remove an obsolete EXPERIMENTAL tag from the docs 2019-02-25 17:53:53 +00:00
Sebastian Bünger
972e27a861 jottacloud: fix token refresh - fixes #2992 2019-02-21 19:26:18 +01:00
Fabian Möller
8f4ea77c07 fs: remove unnecessary pacer warning 2019-02-18 08:42:36 +01:00
Fabian Möller
61616ba864 pacer: make pacer more flexible
Make the pacer package more flexible by extracting the pace calculation
functions into a separate interface. This also allows moving features
that require the fs package, like logging and custom errors, into the fs
package.

Also add a RetryAfterError sentinel error that can be used to signal a
desired retry time to the Calculator.
2019-02-16 14:38:07 +00:00
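
A generic sketch of the "extract the pace calculation into an interface" idea; the names are illustrative, and the construction the backends use after this change looks like the fs.NewPacer(pacer.NewDefault(...)) calls visible in the diffs below.

    // Generic sketch of a pluggable pace calculator. Illustrative names only,
    // not rclone's actual pacer package.
    package sketch

    import "time"

    // Calculator decides the next sleep from the current one, whether the
    // last call failed, and any server-requested delay.
    type Calculator interface {
        Calculate(lastSleep time.Duration, failed bool, retryAfter time.Duration) time.Duration
    }

    // defaultCalculator doubles the sleep on failure and decays it on
    // success, clamped between min and max.
    type defaultCalculator struct {
        min, max time.Duration
    }

    func (c defaultCalculator) Calculate(last time.Duration, failed bool, retryAfter time.Duration) time.Duration {
        switch {
        case retryAfter > 0:
            return retryAfter // obey an explicit retry-after style error
        case failed:
            if next := last * 2; next <= c.max {
                return next
            }
            return c.max
        default:
            if next := last / 2; next >= c.min {
                return next
            }
            return c.min
        }
    }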
Fabian Möller
9ed721a3f6 errors: add lib/errors package 2019-02-16 14:38:07 +00:00
Nick Craig-Wood
0b9d7fec0c lsf: add 'e' format to show encrypted names and 'o' for original IDs
This brings it up to par with lsjson.

This commit also reworks the framework to use ListJSON internally
which removes duplicated code and makes testing easier.
2019-02-14 14:45:35 +00:00
854 changed files with 108977 additions and 41392 deletions

View File

@@ -1,9 +1,5 @@
# golangci-lint configuration options
run:
build-tags:
- cmount
linters:
enable:
- deadcode

View File

@@ -1,54 +1,101 @@
---
language: go
sudo: required
dist: trusty
dist: xenial
os:
- linux
go:
- 1.8.x
- 1.9.x
- 1.10.x
- 1.11.x
- 1.12rc1
- tip
- linux
go_import_path: github.com/ncw/rclone
before_install:
- if [[ $TRAVIS_OS_NAME == linux ]]; then sudo modprobe fuse ; sudo chmod 666 /dev/fuse ; sudo chown root:$USER /etc/fuse.conf ; fi
- if [[ $TRAVIS_OS_NAME == osx ]]; then brew update && brew tap caskroom/cask && brew cask install osxfuse ; fi
- git fetch --unshallow --tags
- |
if [[ "$TRAVIS_OS_NAME" == "linux" ]]; then
sudo modprobe fuse
sudo chmod 666 /dev/fuse
sudo chown root:$USER /etc/fuse.conf
fi
if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
brew update
brew tap caskroom/cask
brew cask install osxfuse
fi
if [[ "$TRAVIS_OS_NAME" == "windows" ]]; then
choco install -y winfsp zip make
cd ../.. # fix crlf in git checkout
mv $TRAVIS_REPO_SLUG _old
git config --global core.autocrlf false
git clone _old $TRAVIS_REPO_SLUG
cd $TRAVIS_REPO_SLUG
fi
install:
- git fetch --unshallow --tags
- make vars
- make build_dep
script:
- make check
- make quicktest
- make compile_all
- make vars
env:
global:
- GOTAGS=cmount
- GO111MODULE=off
- GITHUB_USER=ncw
- GOTRACEBACK=all
- secure: gU8gCV9R8Kv/Gn0SmCP37edpfIbPoSvsub48GK7qxJdTU628H0KOMiZW/T0gtV5d67XJZ4eKnhJYlxwwxgSgfejO32Rh5GlYEKT/FuVoH0BD72dM1GDFLSrUiUYOdoHvf/BKIFA3dJFT4lk2ASy4Zh7SEoXHG6goBlqUpYx8hVA=
- secure: AMjrMAksDy3QwqGqnvtUg8FL/GNVgNqTqhntLF9HSU0njHhX6YurGGnfKdD9vNHlajPQOewvmBjwNLcDWGn2WObdvmh9Ohep0EmOjZ63kliaRaSSQueSd8y0idfqMQAxep0SObOYbEDVmQh0RCAE9wOVKRaPgw98XvgqWGDq5Tw=
- secure: Uaiveq+/rvQjO03GzvQZV2J6pZfedoFuhdXrLVhhHSeP4ZBca0olw7xaqkabUyP3LkVYXMDSX8EbyeuQT1jfEe5wp5sBdfaDtuYW6heFyjiHIIIbVyBfGXon6db4ETBjOaX/Xt8uktrgNge6qFlj+kpnmpFGxf0jmDLw1zgg7tk=
addons:
apt:
packages:
- fuse
- libfuse-dev
- rpm
- pkg-config
- fuse
- libfuse-dev
- rpm
- pkg-config
cache:
directories:
- $HOME/.cache/go-build
matrix:
allow_failures:
- go: tip
- go: tip
include:
- os: osx
go: 1.12rc1
env: GOTAGS=""
cache:
directories:
- $HOME/Library/Caches/go-build
- go: 1.9.x
script:
- make quicktest
- go: 1.10.x
script:
- make quicktest
- go: 1.11.x
script:
- make quicktest
- go: 1.12.x
env:
- GOTAGS=cmount
script:
- make build_dep
- make check
- make quicktest
- make racequicktest
- make compile_all
- os: osx
go: 1.12.x
env:
- GOTAGS= # cmount doesn't work on osx travis for some reason
cache:
directories:
- $HOME/Library/Caches/go-build
script:
- make
- make quicktest
- make racequicktest
# - os: windows
# go: 1.12.x
# env:
# - GOTAGS=cmount
# - CPATH='C:\Program Files (x86)\WinFsp\inc\fuse'
# #filter_secrets: false # works around a problem with secrets under windows
# cache:
# directories:
# - ${LocalAppData}/go-build
# script:
# - make
# - make quicktest
# - make racequicktest
- go: tip
script:
- make quicktest
deploy:
provider: script
script: make travis_beta
@@ -56,5 +103,5 @@ deploy:
on:
repo: ncw/rclone
all_branches: true
go: 1.12rc1
condition: $TRAVIS_PULL_REQUEST == false
go: 1.12.x
condition: $TRAVIS_PULL_REQUEST == false && $TRAVIS_OS_NAME != "windows"

View File

@@ -135,7 +135,7 @@ then change into the project root and run
make test
This command is run daily on the the integration test server. You can
This command is run daily on the integration test server. You can
find the results at https://pub.rclone.org/integration-tests/
## Code Organisation ##

File diff suppressed because it is too large

692
MANUAL.md

File diff suppressed because it is too large

3775
MANUAL.txt

File diff suppressed because it is too large

View File

@@ -17,8 +17,6 @@ ifneq ($(TAG),$(LAST_TAG))
endif
GO_VERSION := $(shell go version)
GO_FILES := $(shell go list ./... | grep -v /vendor/ )
# Run full tests if go >= go1.11
FULL_TESTS := $(shell go version | perl -lne 'print "go$$1.$$2" if /go(\d+)\.(\d+)/ && ($$1 > 1 || $$2 >= 11)')
BETA_PATH := $(BRANCH_PATH)$(TAG)
BETA_URL := https://beta.rclone.org/$(BETA_PATH)/
BETA_UPLOAD_ROOT := memstore:beta-rclone-org
@@ -26,6 +24,7 @@ BETA_UPLOAD := $(BETA_UPLOAD_ROOT)/$(BETA_PATH)
# Pass in GOTAGS=xyz on the make command line to set build tags
ifdef GOTAGS
BUILDTAGS=-tags "$(GOTAGS)"
LINTTAGS=--build-tags "$(GOTAGS)"
endif
.PHONY: rclone vars version
@@ -42,7 +41,6 @@ vars:
@echo LAST_TAG="'$(LAST_TAG)'"
@echo NEW_TAG="'$(NEW_TAG)'"
@echo GO_VERSION="'$(GO_VERSION)'"
@echo FULL_TESTS="'$(FULL_TESTS)'"
@echo BETA_URL="'$(BETA_URL)'"
version:
@@ -56,29 +54,20 @@ test: rclone
# Quick test
quicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) $(GO_FILES)
ifdef FULL_TESTS
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race $(GO_FILES)
endif
RCLONE_CONFIG="/notfound" go test -v $(BUILDTAGS) $(GO_FILES)
racequicktest:
RCLONE_CONFIG="/notfound" go test -v $(BUILDTAGS) -cpu=2 -race $(GO_FILES)
# Do source code quality checks
check: rclone
ifdef FULL_TESTS
@# we still run go vet for -printfuncs which golangci-lint doesn't do yet
@# see: https://github.com/golangci/golangci-lint/issues/204
@echo "-- START CODE QUALITY REPORT -------------------------------"
@go vet $(BUILDTAGS) -printfuncs Debugf,Infof,Logf,Errorf ./...
@golangci-lint run ./...
@golangci-lint run $(LINTTAGS) ./...
@echo "-- END CODE QUALITY REPORT ---------------------------------"
else
@echo Skipping source quality tests as version of go too old
endif
# Get the build dependencies
build_dep:
ifdef FULL_TESTS
go run bin/get-github-release.go -extract golangci-lint golangci/golangci-lint 'golangci-lint-.*\.tar\.gz'
endif
# Get the release dependencies
release_dep:
@@ -109,7 +98,7 @@ commanddocs: rclone
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs docs/content/commands/
backenddocs: rclone bin/make_backend_docs.py
./bin/make_backend_docs.py
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" ./bin/make_backend_docs.py
rcdocs: rclone
bin/make_rc_docs.sh
@@ -162,11 +151,7 @@ log_since_last_release:
git log $(LAST_TAG)..
compile_all:
ifdef FULL_TESTS
go run bin/cross-compile.go -parallel 8 -compile-only $(BUILDTAGS) $(TAG)
else
@echo Skipping compile all as version of go too old
endif
appveyor_upload:
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
@@ -186,6 +171,11 @@ BUILD_FLAGS := -exclude "^(windows|darwin)/"
ifeq ($(TRAVIS_OS_NAME),osx)
BUILD_FLAGS := -include "^darwin/" -cgo
endif
ifeq ($(TRAVIS_OS_NAME),windows)
# BUILD_FLAGS := -include "^windows/" -cgo
# 386 doesn't build yet
BUILD_FLAGS := -include "^windows/amd64" -cgo
endif
travis_beta:
ifeq ($(TRAVIS_OS_NAME),linux)
@@ -201,7 +191,7 @@ endif
# Fetch the binary builds from travis and appveyor
fetch_binaries:
rclone -P sync $(BETA_UPLOAD) build/
rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server -v -w

View File

@@ -7,11 +7,11 @@
[Changelog](https://rclone.org/changelog/) |
[Installation](https://rclone.org/install/) |
[Forum](https://forum.rclone.org/) |
[G+](https://google.com/+RcloneOrg)
[![Build Status](https://travis-ci.org/ncw/rclone.svg?branch=master)](https://travis-ci.org/ncw/rclone)
[![Windows Build Status](https://ci.appveyor.com/api/projects/status/github/ncw/rclone?branch=master&passingText=windows%20-%20ok&svg=true)](https://ci.appveyor.com/project/ncw/rclone)
[![CircleCI](https://circleci.com/gh/ncw/rclone/tree/master.svg?style=svg)](https://circleci.com/gh/ncw/rclone/tree/master)
[![Go Report Card](https://goreportcard.com/badge/github.com/ncw/rclone)](https://goreportcard.com/report/github.com/ncw/rclone)
[![GoDoc](https://godoc.org/github.com/ncw/rclone?status.svg)](https://godoc.org/github.com/ncw/rclone)
# Rclone
@@ -36,6 +36,7 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
* Hubic [:page_facing_up:](https://rclone.org/hubic/)
* Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
* IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
* Koofr [:page_facing_up:](https://rclone.org/koofr/)
* Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
* Mega [:page_facing_up:](https://rclone.org/mega/)
* Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
@@ -72,6 +73,8 @@ Please see [the full list of all storage providers and their features](https://r
* Optional encryption ([Crypt](https://rclone.org/crypt/))
* Optional cache ([Cache](https://rclone.org/cache/))
* Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
* Multi-threaded downloads to local disk
* Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files over HTTP/WebDav/FTP/SFTP/dlna
## Installation & documentation

View File

@@ -11,7 +11,7 @@ Making a release
* edit docs/content/changelog.md
* make doc
* git status - to check for new man pages - git add them
* git commit -a -v -m "Version v1.XX"
* git commit -a -v -m "Version v1.XX.0"
* make retag
* git push --tags origin master
* # Wait for the appveyor and travis builds to complete then...
@@ -27,6 +27,7 @@ Making a release
Early in the next release cycle update the vendored dependencies
* Review any pinned packages in go.mod and remove if possible
* GO111MODULE=on go get -u github.com/spf13/cobra@master
* make update
* git status
* git add new files

View File

@@ -14,7 +14,7 @@ import (
func init() {
fsi := &fs.RegInfo{
Name: "alias",
Description: "Alias for a existing remote",
Description: "Alias for an existing remote",
NewFs: NewFs,
Options: []fs.Option{{
Name: "remote",

View File

@@ -16,6 +16,7 @@ import (
_ "github.com/ncw/rclone/backend/http"
_ "github.com/ncw/rclone/backend/hubic"
_ "github.com/ncw/rclone/backend/jottacloud"
_ "github.com/ncw/rclone/backend/koofr"
_ "github.com/ncw/rclone/backend/local"
_ "github.com/ncw/rclone/backend/mega"
_ "github.com/ncw/rclone/backend/onedrive"

View File

@@ -32,7 +32,6 @@ import (
"github.com/ncw/rclone/lib/dircache"
"github.com/ncw/rclone/lib/oauthutil"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/rest"
"github.com/pkg/errors"
"golang.org/x/oauth2"
)
@@ -155,7 +154,7 @@ type Fs struct {
noAuthClient *http.Client // unauthenticated http client
root string // the path we are working on
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // pacer for API calls
pacer *fs.Pacer // pacer for API calls
trueRootID string // ID of true root directory
tokenRenewer *oauthutil.Renew // renew the token on expiry
}
@@ -273,7 +272,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root: root,
opt: *opt,
c: c,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.AmazonCloudDrivePacer),
pacer: fs.NewPacer(pacer.NewAmazonCloudDrive(pacer.MinSleep(minSleep))),
noAuthClient: fshttp.NewClient(fs.Config),
}
f.features = (&fs.Features{
@@ -1093,7 +1092,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
if !bigObject {
in, resp, err = file.OpenHeaders(headers)
} else {
in, resp, err = file.OpenTempURLHeaders(rest.ClientWithHeaderReset(o.fs.noAuthClient, headers), headers)
in, resp, err = file.OpenTempURLHeaders(o.fs.noAuthClient, headers)
}
return o.fs.shouldRetry(resp, err)
})

View File

@@ -1,6 +1,6 @@
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
// +build !plan9,!solaris,go1.8
// +build !plan9,!solaris
package azureblob
@@ -144,7 +144,7 @@ type Fs struct {
containerOKMu sync.Mutex // mutex to protect container OK
containerOK bool // true if we have created the container
containerDeleted bool // true if we have deleted the container
pacer *pacer.Pacer // To pace and retry the API calls
pacer *fs.Pacer // To pace and retry the API calls
uploadToken *pacer.TokenDispenser // control concurrency
}
@@ -347,7 +347,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
opt: *opt,
container: container,
root: directory,
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant).SetPacer(pacer.S3Pacer),
pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
client: fshttp.NewClient(fs.Config),
}
@@ -1386,16 +1386,16 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
blob := o.getBlobReference()
httpHeaders := azblob.BlobHTTPHeaders{}
httpHeaders.ContentType = fs.MimeType(o)
// Multipart upload doesn't support MD5 checksums at put block calls, hence calculate
// MD5 only for PutBlob requests
if size < int64(o.fs.opt.UploadCutoff) {
if sourceMD5, _ := src.Hash(hash.MD5); sourceMD5 != "" {
sourceMD5bytes, err := hex.DecodeString(sourceMD5)
if err == nil {
httpHeaders.ContentMD5 = sourceMD5bytes
} else {
fs.Debugf(o, "Failed to decode %q as MD5: %v", sourceMD5, err)
}
// Compute the Content-MD5 of the file, for multiparts uploads it
// will be set in PutBlockList API call using the 'x-ms-blob-content-md5' header
// Note: If multipart, a MD5 checksum will also be computed for each uploaded block
// in order to validate its integrity during transport
if sourceMD5, _ := src.Hash(hash.MD5); sourceMD5 != "" {
sourceMD5bytes, err := hex.DecodeString(sourceMD5)
if err == nil {
httpHeaders.ContentMD5 = sourceMD5bytes
} else {
fs.Debugf(o, "Failed to decode %q as MD5: %v", sourceMD5, err)
}
}

View File

@@ -1,4 +1,4 @@
// +build !plan9,!solaris,go1.8
// +build !plan9,!solaris
package azureblob

View File

@@ -1,6 +1,6 @@
// Test AzureBlob filesystem interface
// +build !plan9,!solaris,go1.8
// +build !plan9,!solaris
package azureblob

View File

@@ -1,6 +1,6 @@
// Build for azureblob for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build plan9 solaris !go1.8
// +build plan9 solaris
package azureblob

View File

@@ -167,7 +167,7 @@ type Fs struct {
uploadMu sync.Mutex // lock for upload variable
uploads []*api.GetUploadURLResponse // result of get upload URL calls
authMu sync.Mutex // lock for authorizing the account
pacer *pacer.Pacer // To pace and retry the API calls
pacer *fs.Pacer // To pace and retry the API calls
bufferTokens chan []byte // control concurrency of multipart uploads
}
@@ -251,13 +251,7 @@ func (f *Fs) shouldRetryNoReauth(resp *http.Response, err error) (bool, error) {
fs.Errorf(f, "Malformed %s header %q: %v", retryAfterHeader, retryAfterString, err)
}
}
retryAfterDuration := time.Duration(retryAfter) * time.Second
if f.pacer.GetSleep() < retryAfterDuration {
fs.Debugf(f, "Setting sleep to %v after error: %v", retryAfterDuration, err)
// We set 1/2 the value here because the pacer will double it immediately
f.pacer.SetSleep(retryAfterDuration / 2)
}
return true, err
return true, pacer.RetryAfterError(err, time.Duration(retryAfter)*time.Second)
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
@@ -363,7 +357,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
bucket: bucket,
root: directory,
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
ReadMimeType: true,
@@ -958,6 +952,13 @@ func (f *Fs) hide(Name string) error {
return f.shouldRetry(resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok {
if apiErr.Code == "already_hidden" {
// sometimes eventual consistency causes this, so
// ignore this error since it is harmless
return nil
}
}
return errors.Wrapf(err, "failed to hide %q", Name)
}
return nil
@@ -1212,7 +1213,7 @@ func (o *Object) parseTimeString(timeString string) (err error) {
unixMilliseconds, err := strconv.ParseInt(timeString, 10, 64)
if err != nil {
fs.Debugf(o, "Failed to parse mod time string %q: %v", timeString, err)
return err
return nil
}
o.modTime = time.Unix(unixMilliseconds/1E3, (unixMilliseconds%1E3)*1E6).UTC()
return nil

View File

@@ -111,7 +111,7 @@ type Fs struct {
features *fs.Features // optional features
srv *rest.Client // the connection to the one drive server
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // pacer for API calls
pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry
uploadToken *pacer.TokenDispenser // control concurrency
}
@@ -260,7 +260,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root: root,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
}
f.features = (&fs.Features{

View File

@@ -20,6 +20,7 @@ import (
"github.com/ncw/rclone/backend/crypt"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/cache"
"github.com/ncw/rclone/fs/config"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
@@ -481,7 +482,7 @@ func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
return nil, errors.Wrapf(err, "failed to create cache directory %v", f.opt.TempWritePath)
}
f.opt.TempWritePath = filepath.ToSlash(f.opt.TempWritePath)
f.tempFs, err = fs.NewFs(f.opt.TempWritePath)
f.tempFs, err = cache.Get(f.opt.TempWritePath)
if err != nil {
return nil, errors.Wrapf(err, "failed to create temp fs: %v", err)
}
@@ -1191,7 +1192,7 @@ func (f *Fs) Rmdir(dir string) error {
}
var queuedEntries []*Object
err = walk.Walk(f.tempFs, dir, true, -1, func(path string, entries fs.DirEntries, err error) error {
err = walk.ListR(f.tempFs, dir, true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, o := range entries {
if oo, ok := o.(fs.Object); ok {
co := ObjectFromOriginal(f, oo)
@@ -1287,7 +1288,7 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
}
var queuedEntries []*Object
err := walk.Walk(f.tempFs, srcRemote, true, -1, func(path string, entries fs.DirEntries, err error) error {
err := walk.ListR(f.tempFs, srcRemote, true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, o := range entries {
if oo, ok := o.(fs.Object); ok {
co := ObjectFromOriginal(f, oo)

View File

@@ -15,7 +15,9 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "MergeDirs", "OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier"},
})
}

View File

@@ -1023,7 +1023,7 @@ func (b *Persistent) ReconcileTempUploads(cacheFs *Fs) error {
}
var queuedEntries []fs.Object
err = walk.Walk(cacheFs.tempFs, "", true, -1, func(path string, entries fs.DirEntries, err error) error {
err = walk.ListR(cacheFs.tempFs, "", true, -1, walk.ListObjects, func(entries fs.DirEntries) error {
for _, o := range entries {
if oo, ok := o.(fs.Object); ok {
queuedEntries = append(queuedEntries, oo)

View File

@@ -169,23 +169,10 @@ func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) {
WriteMimeType: false,
BucketBased: true,
CanHaveEmptyDirectories: true,
SetTier: true,
GetTier: true,
}).Fill(f).Mask(wrappedFs).WrapsFs(f, wrappedFs)
doChangeNotify := wrappedFs.Features().ChangeNotify
if doChangeNotify != nil {
f.features.ChangeNotify = func(notifyFunc func(string, fs.EntryType), pollInterval <-chan time.Duration) {
wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
decrypted, err := f.DecryptFileName(path)
if err != nil {
fs.Logf(f, "ChangeNotify was unable to decrypt %q: %s", path, err)
return
}
notifyFunc(decrypted, entryType)
}
doChangeNotify(wrappedNotifyFunc, pollInterval)
}
}
return f, err
}
@@ -202,6 +189,7 @@ type Options struct {
// Fs represents a wrapped fs.Fs
type Fs struct {
fs.Fs
wrapper fs.Fs
name string
root string
opt Options
@@ -544,6 +532,16 @@ func (f *Fs) UnWrap() fs.Fs {
return f.Fs
}
// WrapFs returns the Fs that is wrapping this Fs
func (f *Fs) WrapFs() fs.Fs {
return f.wrapper
}
// SetWrapper sets the Fs that is wrapping this Fs
func (f *Fs) SetWrapper(wrapper fs.Fs) {
f.wrapper = wrapper
}
// EncryptFileName returns an encrypted file name
func (f *Fs) EncryptFileName(fileName string) string {
return f.cipher.EncryptFileName(fileName)
@@ -616,6 +614,75 @@ func (f *Fs) ComputeHash(o *Object, src fs.Object, hashType hash.Type) (hashStr
return m.Sums()[hashType], nil
}
// MergeDirs merges the contents of all the directories passed
// in into the first one and rmdirs the other directories.
func (f *Fs) MergeDirs(dirs []fs.Directory) error {
do := f.Fs.Features().MergeDirs
if do == nil {
return errors.New("MergeDirs not supported")
}
out := make([]fs.Directory, len(dirs))
for i, dir := range dirs {
out[i] = fs.NewDirCopy(dir).SetRemote(f.cipher.EncryptDirName(dir.Remote()))
}
return do(out)
}
// DirCacheFlush resets the directory cache - used in testing
// as an optional interface
func (f *Fs) DirCacheFlush() {
do := f.Fs.Features().DirCacheFlush
if do != nil {
do()
}
}
// PublicLink generates a public link to the remote path (usually readable by anyone)
func (f *Fs) PublicLink(remote string) (string, error) {
do := f.Fs.Features().PublicLink
if do == nil {
return "", errors.New("PublicLink not supported")
}
o, err := f.NewObject(remote)
if err != nil {
// assume it is a directory
return do(f.cipher.EncryptDirName(remote))
}
return do(o.(*Object).Object.Remote())
}
// ChangeNotify calls the passed function with a path
// that has had changes. If the implementation
// uses polling, it should adhere to the given interval.
func (f *Fs) ChangeNotify(notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) {
do := f.Fs.Features().ChangeNotify
if do == nil {
return
}
wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
// fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
var (
err error
decrypted string
)
switch entryType {
case fs.EntryDirectory:
decrypted, err = f.cipher.DecryptDirName(path)
case fs.EntryObject:
decrypted, err = f.cipher.DecryptFileName(path)
default:
fs.Errorf(path, "crypt ChangeNotify: ignoring unknown EntryType %d", entryType)
return
}
if err != nil {
fs.Logf(f, "ChangeNotify was unable to decrypt %q: %s", path, err)
return
}
notifyFunc(decrypted, entryType)
}
do(wrappedNotifyFunc, pollIntervalChan)
}
// Object describes a wrapped Object for being read from the Fs
//
// This decrypts the remote name and decrypts the data
@@ -774,6 +841,34 @@ func (o *ObjectInfo) Hash(hash hash.Type) (string, error) {
return "", nil
}
// ID returns the ID of the Object if known, or "" if not
func (o *Object) ID() string {
do, ok := o.Object.(fs.IDer)
if !ok {
return ""
}
return do.ID()
}
// SetTier performs changing storage tier of the Object if
// multiple storage classes supported
func (o *Object) SetTier(tier string) error {
do, ok := o.Object.(fs.SetTierer)
if !ok {
return errors.New("crypt: underlying remote does not support SetTier")
}
return do.SetTier(tier)
}
// GetTier returns storage tier or class of the Object
func (o *Object) GetTier() string {
do, ok := o.Object.(fs.GetTierer)
if !ok {
return ""
}
return do.GetTier()
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
@@ -787,7 +882,15 @@ var (
_ fs.UnWrapper = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.ObjectInfo = (*ObjectInfo)(nil)
_ fs.Object = (*Object)(nil)
_ fs.ObjectUnWrapper = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
_ fs.SetTierer = (*Object)(nil)
_ fs.GetTierer = (*Object)(nil)
)

View File

@@ -21,8 +21,10 @@ func TestIntegration(t *testing.T) {
t.Skip("Skipping as -remote not set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*crypt.Object)(nil),
RemoteName: *fstest.RemoteName,
NilObject: (*crypt.Object)(nil),
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
@@ -42,6 +44,8 @@ func TestStandard(t *testing.T) {
{Name: name, Key: "password", Value: obscure.MustObscure("potato")},
{Name: name, Key: "filename_encryption", Value: "standard"},
},
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
@@ -61,6 +65,8 @@ func TestOff(t *testing.T) {
{Name: name, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name, Key: "filename_encryption", Value: "off"},
},
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
@@ -80,6 +86,8 @@ func TestObfuscate(t *testing.T) {
{Name: name, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name, Key: "filename_encryption", Value: "obfuscate"},
},
SkipBadWindowsCharacters: true,
SkipBadWindowsCharacters: true,
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}

View File

@@ -1,7 +1,4 @@
// Package drive interfaces with the Google Drive object storage system
// +build go1.9
package drive
// FIXME need to deal with some corner cases
@@ -186,10 +183,10 @@ func init() {
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Google Application Client Id\nLeave blank normally.",
Help: "Google Application Client Id\nSetting your own is recommended.\nSee https://rclone.org/drive/#making-your-own-client-id for how to create your own.\nIf you leave this blank, it will use an internal key which is low performance.",
}, {
Name: config.ConfigClientSecret,
Help: "Google Application Client Secret\nLeave blank normally.",
Help: "Google Application Client Secret\nSetting your own is recommended.",
}, {
Name: "scope",
Help: "Scope that rclone should use when requesting access from drive.",
@@ -240,6 +237,22 @@ func init() {
Default: false,
Help: "Skip google documents in all listings.\nIf given, gdocs practically become invisible to rclone.",
Advanced: true,
}, {
Name: "skip_checksum_gphotos",
Default: false,
Help: `Skip MD5 checksum on Google photos and videos only.
Use this if you get checksum errors when transferring Google photos or
videos.
Setting this flag will cause Google photos and videos to return a
blank MD5 checksum.
Google photos are identified by being in the "photos" space.
Corrupted checksums are caused by Google modifying the image/video but
not updating the checksum.`,
Advanced: true,
}, {
Name: "shared_with_me",
Default: false,
@@ -396,6 +409,7 @@ type Options struct {
AuthOwnerOnly bool `config:"auth_owner_only"`
UseTrash bool `config:"use_trash"`
SkipGdocs bool `config:"skip_gdocs"`
SkipChecksumGphotos bool `config:"skip_checksum_gphotos"`
SharedWithMe bool `config:"shared_with_me"`
TrashedOnly bool `config:"trashed_only"`
Extensions string `config:"formats"`
@@ -426,7 +440,7 @@ type Fs struct {
client *http.Client // authorized client
rootFolderID string // the id of the root folder
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // To pace the API calls
pacer *fs.Pacer // To pace the API calls
exportExtensions []string // preferred extensions to download docs
importMimeTypes []string // MIME types to convert to docs
isTeamDrive bool // true if this is a team drive
@@ -615,6 +629,9 @@ func (f *Fs) list(dirIDs []string, title string, directoriesOnly, filesOnly, inc
if f.opt.AuthOwnerOnly {
fields += ",owners"
}
if f.opt.SkipChecksumGphotos {
fields += ",spaces"
}
fields = fmt.Sprintf("files(%s),nextPageToken", fields)
@@ -676,28 +693,33 @@ func isPowerOfTwo(x int64) bool {
}
// add a charset parameter to all text/* MIME types
func fixMimeType(mimeType string) string {
mediaType, param, err := mime.ParseMediaType(mimeType)
func fixMimeType(mimeTypeIn string) string {
if mimeTypeIn == "" {
return ""
}
mediaType, param, err := mime.ParseMediaType(mimeTypeIn)
if err != nil {
return mimeType
return mimeTypeIn
}
if strings.HasPrefix(mimeType, "text/") && param["charset"] == "" {
mimeTypeOut := mimeTypeIn
if strings.HasPrefix(mediaType, "text/") && param["charset"] == "" {
param["charset"] = "utf-8"
mimeType = mime.FormatMediaType(mediaType, param)
mimeTypeOut = mime.FormatMediaType(mediaType, param)
}
return mimeType
if mimeTypeOut == "" {
panic(errors.Errorf("unable to fix MIME type %q", mimeTypeIn))
}
return mimeTypeOut
}
func fixMimeTypeMap(m map[string][]string) map[string][]string {
for _, v := range m {
func fixMimeTypeMap(in map[string][]string) (out map[string][]string) {
out = make(map[string][]string, len(in))
for k, v := range in {
for i, mt := range v {
fixed := fixMimeType(mt)
if fixed == "" {
panic(errors.Errorf("unable to fix MIME type %q", mt))
}
v[i] = fixed
v[i] = fixMimeType(mt)
}
out[fixMimeType(k)] = v
}
return m
return out
}
func isInternalMimeType(mimeType string) bool {
return strings.HasPrefix(mimeType, "application/vnd.google-apps.")
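For reference, the charset fix-up that the rewritten fixMimeType performs can be reproduced with the standard library alone. This standalone sketch (addCharset is a hypothetical name, not the drive backend itself) mirrors the logic above, minus the panic guard:

package main

import (
    "fmt"
    "mime"
    "strings"
)

// addCharset appends charset=utf-8 to text/* MIME types that lack one.
func addCharset(mimeTypeIn string) string {
    if mimeTypeIn == "" {
        return ""
    }
    mediaType, param, err := mime.ParseMediaType(mimeTypeIn)
    if err != nil {
        return mimeTypeIn
    }
    if strings.HasPrefix(mediaType, "text/") && param["charset"] == "" {
        param["charset"] = "utf-8"
        return mime.FormatMediaType(mediaType, param)
    }
    return mimeTypeIn
}

func main() {
    fmt.Println(addCharset("text/plain"))       // text/plain; charset=utf-8
    fmt.Println(addCharset("application/json")) // unchanged
}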
@@ -789,8 +811,8 @@ func configTeamDrive(opt *Options, m configmap.Mapper, name string) error {
}
// newPacer makes a pacer configured for drive
func newPacer(opt *Options) *pacer.Pacer {
return pacer.New().SetMinSleep(time.Duration(opt.PacerMinSleep)).SetBurst(opt.PacerBurst).SetPacer(pacer.GoogleDrivePacer)
func newPacer(opt *Options) *fs.Pacer {
return fs.NewPacer(pacer.NewGoogleDrive(pacer.MinSleep(opt.PacerMinSleep), pacer.Burst(opt.PacerBurst)))
}
func getServiceAccountClient(opt *Options, credentialsData []byte) (*http.Client, error) {
@@ -902,6 +924,7 @@ func NewFs(name, path string, m configmap.Mapper) (fs.Fs, error) {
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: true,
}).Fill(f)
// Create a new authorized Drive client.
@@ -996,6 +1019,15 @@ func (f *Fs) newBaseObject(remote string, info *drive.File) baseObject {
// newRegularObject creates a fs.Object for a normal drive.File
func (f *Fs) newRegularObject(remote string, info *drive.File) fs.Object {
// wipe checksum if SkipChecksumGphotos and file is type Photo or Video
if f.opt.SkipChecksumGphotos {
for _, space := range info.Spaces {
if space == "photos" {
info.Md5Checksum = ""
break
}
}
}
return &Object{
baseObject: f.newBaseObject(remote, info),
url: fmt.Sprintf("%sfiles/%s?alt=media", f.svc.BasePath, info.Id),
@@ -1837,16 +1869,24 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
remote = remote[:len(remote)-len(ext)]
}
// Look to see if there is an existing object
existingObject, _ := f.NewObject(remote)
createInfo, err := f.createFileInfo(remote, src.ModTime())
if err != nil {
return nil, err
}
supportTeamDrives, err := f.ShouldSupportTeamDrives(src)
if err != nil {
return nil, err
}
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Copy(srcObj.id, createInfo).
Fields(partialFields).
SupportsTeamDrives(f.isTeamDrive).
SupportsTeamDrives(supportTeamDrives).
KeepRevisionForever(f.opt.KeepRevisionForever).
Do()
return shouldRetry(err)
@@ -1854,7 +1894,17 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
if err != nil {
return nil, err
}
return f.newObjectWithInfo(remote, info)
newObject, err := f.newObjectWithInfo(remote, info)
if err != nil {
return nil, err
}
if existingObject != nil {
err = existingObject.Remove()
if err != nil {
fs.Errorf(existingObject, "Failed to remove existing object after copy: %v", err)
}
}
return newObject, nil
}
// Purge deletes all the files and the container
@@ -1980,6 +2030,11 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
dstParents := strings.Join(dstInfo.Parents, ",")
dstInfo.Parents = nil
supportTeamDrives, err := f.ShouldSupportTeamDrives(src)
if err != nil {
return nil, err
}
// Do the move
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
@@ -1987,7 +2042,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
RemoveParents(srcParentID).
AddParents(dstParents).
Fields(partialFields).
SupportsTeamDrives(f.isTeamDrive).
SupportsTeamDrives(supportTeamDrives).
Do()
return shouldRetry(err)
})
@@ -1998,6 +2053,20 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, info)
}
// ShouldSupportTeamDrives returns the request should support TeamDrives
func (f *Fs) ShouldSupportTeamDrives(src fs.Object) (bool, error) {
srcIsTeamDrive := false
if srcFs, ok := src.Fs().(*Fs); ok {
srcIsTeamDrive = srcFs.isTeamDrive
}
if f.isTeamDrive {
return true, nil
}
return srcIsTeamDrive, nil
}
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(remote string) (link string, err error) {
id, err := f.dirCache.FindDir(remote, false)
@@ -2430,6 +2499,10 @@ func (o *baseObject) httpResponse(url, method string, options []fs.OpenOption) (
return req, nil, err
}
fs.OpenOptionAddHTTPHeaders(req.Header, options)
if o.bytes == 0 {
// Don't supply range requests for 0 length objects as they always fail
delete(req.Header, "Range")
}
err = o.fs.pacer.Call(func() (bool, error) {
res, err = o.fs.client.Do(req)
if err == nil {

View File

@@ -1,5 +1,3 @@
// +build go1.9
package drive
import (

View File

@@ -1,7 +1,5 @@
// Test Drive filesystem interface
// +build go1.9
package drive
import (

View File

@@ -1,6 +0,0 @@
// Build for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build !go1.9
package drive

View File

@@ -8,8 +8,6 @@
//
// This contains code adapted from google.golang.org/api (C) the GO AUTHORS
// +build go1.9
package drive
import (

View File

@@ -160,7 +160,7 @@ type Fs struct {
team team.Client // for the Teams API
slashRoot string // root with "/" prefix, lowercase
slashRootSlash string // root with "/" prefix and postfix, lowercase
pacer *pacer.Pacer // To pace the API calls
pacer *fs.Pacer // To pace the API calls
ns string // The namespace we are using or "" for none
}
@@ -209,7 +209,7 @@ func shouldRetry(err error) (bool, error) {
case auth.RateLimitAPIError:
if e.RateLimitError.RetryAfter > 0 {
fs.Debugf(baseErrString, "Too many requests or write operations. Trying again in %d seconds.", e.RateLimitError.RetryAfter)
time.Sleep(time.Duration(e.RateLimitError.RetryAfter) * time.Second)
err = pacer.RetryAfterError(err, time.Duration(e.RateLimitError.RetryAfter)*time.Second)
}
return true, err
}
@@ -273,7 +273,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
f := &Fs{
name: name,
opt: *opt,
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
config := dropbox.Config{
LogLevel: dropbox.LogOff, // logging in the SDK: LogOff, LogDebug, LogInfo

View File

@@ -15,6 +15,7 @@ import (
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/pacer"
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
)
@@ -45,6 +46,11 @@ func init() {
Help: "FTP password",
IsPassword: true,
Required: true,
}, {
Name: "concurrency",
Help: "Maximum number of FTP simultaneous connections, 0 for unlimited",
Default: 0,
Advanced: true,
},
},
})
@@ -52,10 +58,11 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
Concurrency int `config:"concurrency"`
}
// Fs represents a remote FTP server
@@ -70,6 +77,7 @@ type Fs struct {
dialAddr string
poolMu sync.Mutex
pool []*ftp.ServerConn
tokens *pacer.TokenDispenser
}
// Object describes an FTP file
@@ -128,6 +136,9 @@ func (f *Fs) ftpConnection() (*ftp.ServerConn, error) {
// Get an FTP connection from the pool, or open a new one
func (f *Fs) getFtpConnection() (c *ftp.ServerConn, err error) {
if f.opt.Concurrency > 0 {
f.tokens.Get()
}
f.poolMu.Lock()
if len(f.pool) > 0 {
c = f.pool[0]
@@ -147,6 +158,9 @@ func (f *Fs) getFtpConnection() (c *ftp.ServerConn, err error) {
// if err is not nil then it checks the connection is alive using a
// NOOP request
func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
if f.opt.Concurrency > 0 {
defer f.tokens.Put()
}
c := *pc
*pc = nil
if err != nil {
@@ -198,6 +212,7 @@ func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
user: user,
pass: pass,
dialAddr: dialAddr,
tokens: pacer.NewTokenDispenser(opt.Concurrency),
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
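The new concurrency option limits simultaneous FTP connections with a pacer.TokenDispenser: a token is taken before a connection is fetched and returned when the connection goes back into the pool. A minimal sketch of that pattern, assuming the Get/Put behaviour used in the hunks above (the worker loop is illustrative):

package main

import (
    "fmt"
    "sync"
    "time"

    "github.com/ncw/rclone/lib/pacer"
)

func main() {
    const concurrency = 2
    tokens := pacer.NewTokenDispenser(concurrency)
    var wg sync.WaitGroup
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            tokens.Get()       // blocks until a connection slot is free
            defer tokens.Put() // release the slot for the next worker
            fmt.Println("worker", n, "has a connection")
            time.Sleep(50 * time.Millisecond) // stand-in for FTP traffic
        }(i)
    }
    wg.Wait()
}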
@@ -330,11 +345,36 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
if err != nil {
return nil, errors.Wrap(err, "list")
}
files, err := c.List(path.Join(f.root, dir))
f.putFtpConnection(&c, err)
if err != nil {
return nil, translateErrorDir(err)
var listErr error
var files []*ftp.Entry
resultchan := make(chan []*ftp.Entry, 1)
errchan := make(chan error, 1)
go func() {
result, err := c.List(path.Join(f.root, dir))
f.putFtpConnection(&c, err)
if err != nil {
errchan <- err
return
}
resultchan <- result
}()
// Wait for List for up to Timeout seconds
timer := time.NewTimer(fs.Config.Timeout)
select {
case listErr = <-errchan:
timer.Stop()
return nil, translateErrorDir(listErr)
case files = <-resultchan:
timer.Stop()
case <-timer.C:
// if timer fired assume no error but connection dead
fs.Errorf(f, "Timeout when waiting for List")
return nil, errors.New("Timeout when waiting for List")
}
// Annoyingly FTP returns success for a directory which
// doesn't exist, so check it really doesn't exist if no
// entries found.
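The List change above runs the FTP LIST in a goroutine and races it against fs.Config.Timeout, treating a fired timer as a dead connection. A generic, standard-library-only sketch of that goroutine-plus-timer pattern (callWithTimeout is a hypothetical helper):

package main

import (
    "errors"
    "fmt"
    "time"
)

// callWithTimeout runs fn in a goroutine and waits up to timeout for either
// its result or its error; if the timer fires first a timeout error is returned.
func callWithTimeout(timeout time.Duration, fn func() ([]string, error)) ([]string, error) {
    resultchan := make(chan []string, 1)
    errchan := make(chan error, 1)
    go func() {
        result, err := fn()
        if err != nil {
            errchan <- err
            return
        }
        resultchan <- result
    }()
    timer := time.NewTimer(timeout)
    defer timer.Stop()
    select {
    case err := <-errchan:
        return nil, err
    case result := <-resultchan:
        return result, nil
    case <-timer.C:
        return nil, errors.New("timeout waiting for listing")
    }
}

func main() {
    files, err := callWithTimeout(time.Second, func() ([]string, error) {
        time.Sleep(100 * time.Millisecond) // stand-in for the FTP LIST call
        return []string{"a.txt", "b.txt"}, nil
    })
    fmt.Println(files, err)
}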

View File

@@ -1,7 +1,4 @@
// Package googlecloudstorage provides an interface to Google Cloud Storage
// +build go1.9
package googlecloudstorage
/*
@@ -16,6 +13,7 @@ FIXME Patch/Delete/Get isn't working with files with spaces in - giving 404 erro
*/
import (
"context"
"encoding/base64"
"encoding/hex"
"fmt"
@@ -45,6 +43,8 @@ import (
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
"google.golang.org/api/googleapi"
// NOTE: This API is deprecated
storage "google.golang.org/api/storage/v1"
)
@@ -144,6 +144,22 @@ func init() {
Value: "publicReadWrite",
Help: "Project team owners get OWNER access, and all Users get WRITER access.",
}},
}, {
Name: "bucket_policy_only",
Help: `Access checks should use bucket-level IAM policies.
If you want to upload objects to a bucket with Bucket Policy Only set
then you will need to set this.
When it is set, rclone:
- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set
Docs: https://cloud.google.com/storage/docs/bucket-policy-only
`,
Default: false,
}, {
Name: "location",
Help: "Location for the newly created buckets.",
@@ -241,6 +257,7 @@ type Options struct {
ServiceAccountCredentials string `config:"service_account_credentials"`
ObjectACL string `config:"object_acl"`
BucketACL string `config:"bucket_acl"`
BucketPolicyOnly bool `config:"bucket_policy_only"`
Location string `config:"location"`
StorageClass string `config:"storage_class"`
}
@@ -256,7 +273,7 @@ type Fs struct {
bucket string // the bucket we are working on
bucketOKMu sync.Mutex // mutex to protect bucket OK
bucketOK bool // true if we have created the bucket
pacer *pacer.Pacer // To pace the API calls
pacer *fs.Pacer // To pace the API calls
}
// Object describes a storage object
@@ -381,7 +398,11 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
} else {
oAuthClient, _, err = oauthutil.NewClient(name, m, storageConfig)
if err != nil {
return nil, errors.Wrap(err, "failed to configure Google Cloud Storage")
ctx := context.Background()
oAuthClient, err = google.DefaultClient(ctx, storage.DevstorageFullControlScope)
if err != nil {
return nil, errors.Wrap(err, "failed to configure Google Cloud Storage")
}
}
}
@@ -395,7 +416,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
bucket: bucket,
root: directory,
opt: *opt,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.GoogleDrivePacer),
pacer: fs.NewPacer(pacer.NewGoogleDrive(pacer.MinSleep(minSleep))),
}
f.features = (&fs.Features{
ReadMimeType: true,
@@ -709,8 +730,19 @@ func (f *Fs) Mkdir(dir string) (err error) {
Location: f.opt.Location,
StorageClass: f.opt.StorageClass,
}
if f.opt.BucketPolicyOnly {
bucket.IamConfiguration = &storage.BucketIamConfiguration{
BucketPolicyOnly: &storage.BucketIamConfigurationBucketPolicyOnly{
Enabled: true,
},
}
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Buckets.Insert(f.opt.ProjectNumber, &bucket).PredefinedAcl(f.opt.BucketACL).Do()
insertBucket := f.svc.Buckets.Insert(f.opt.ProjectNumber, &bucket)
if !f.opt.BucketPolicyOnly {
insertBucket.PredefinedAcl(f.opt.BucketACL)
}
_, err = insertBucket.Do()
return shouldRetry(err)
})
if err == nil {
@@ -976,7 +1008,11 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
}
var newObject *storage.Object
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
newObject, err = o.fs.svc.Objects.Insert(o.fs.bucket, &object).Media(in, googleapi.ContentType("")).Name(object.Name).PredefinedAcl(o.fs.opt.ObjectACL).Do()
insertObject := o.fs.svc.Objects.Insert(o.fs.bucket, &object).Media(in, googleapi.ContentType("")).Name(object.Name)
if !o.fs.opt.BucketPolicyOnly {
insertObject.PredefinedAcl(o.fs.opt.ObjectACL)
}
newObject, err = insertObject.Do()
return shouldRetry(err)
})
if err != nil {

View File

@@ -1,7 +1,5 @@
// Test GoogleCloudStorage filesystem interface
// +build go1.9
package googlecloudstorage_test
import (

View File

@@ -1,6 +0,0 @@
// Build for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build !go1.9
package googlecloudstorage

View File

@@ -6,6 +6,7 @@ package http
import (
"io"
"mime"
"net/http"
"net/url"
"path"
@@ -44,6 +45,22 @@ func init() {
Value: "https://user:pass@example.com",
Help: "Connect to example.com using a username and password",
}},
}, {
Name: "no_slash",
Help: `Set this if the site doesn't end directories with /
Use this if your target website does not use / on the end of
directories.
A / on the end of a path is how rclone normally tells the difference
between files and directories. If this flag is set, then rclone will
treat all files with Content-Type: text/html as directories and read
URLs from them rather than downloading them.
Note that this may cause rclone to confuse genuine HTML files with
directories.`,
Default: false,
Advanced: true,
}},
}
fs.Register(fsi)
@@ -52,6 +69,7 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
Endpoint string `config:"url"`
NoSlash bool `config:"no_slash"`
}
// Fs stores the interface to the remote HTTP files
@@ -270,14 +288,20 @@ func parse(base *url.URL, in io.Reader) (names []string, err error) {
if err != nil {
return nil, err
}
var walk func(*html.Node)
var (
walk func(*html.Node)
seen = make(map[string]struct{})
)
walk = func(n *html.Node) {
if n.Type == html.ElementNode && n.Data == "a" {
for _, a := range n.Attr {
if a.Key == "href" {
name, err := parseName(base, a.Val)
if err == nil {
names = append(names, name)
if _, found := seen[name]; !found {
names = append(names, name)
seen[name] = struct{}{}
}
}
break
}
@@ -302,14 +326,16 @@ func (f *Fs) readDir(dir string) (names []string, err error) {
return nil, errors.Errorf("internal error: readDir URL %q didn't end in /", URL)
}
res, err := f.httpClient.Get(URL)
if err == nil && res.StatusCode == http.StatusNotFound {
return nil, fs.ErrorDirNotFound
if err == nil {
defer fs.CheckClose(res.Body, &err)
if res.StatusCode == http.StatusNotFound {
return nil, fs.ErrorDirNotFound
}
}
err = statusError(res, err)
if err != nil {
return nil, errors.Wrap(err, "failed to readDir")
}
defer fs.CheckClose(res.Body, &err)
contentType := strings.SplitN(res.Header.Get("Content-Type"), ";", 2)[0]
switch contentType {
@@ -353,11 +379,16 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
fs: f,
remote: remote,
}
if err = file.stat(); err != nil {
switch err = file.stat(); err {
case nil:
entries = append(entries, file)
case fs.ErrorNotAFile:
// ...found a directory not a file
dir := fs.NewDir(remote, timeUnset)
entries = append(entries, dir)
default:
fs.Debugf(remote, "skipping because of error: %v", err)
continue
}
entries = append(entries, file)
}
}
return entries, nil
@@ -433,6 +464,16 @@ func (o *Object) stat() error {
o.size = parseInt64(res.Header.Get("Content-Length"), -1)
o.modTime = t
o.contentType = res.Header.Get("Content-Type")
// If NoSlash is set then check ContentType to see if it is a directory
if o.fs.opt.NoSlash {
mediaType, _, err := mime.ParseMediaType(o.contentType)
if err != nil {
return errors.Wrapf(err, "failed to parse Content-Type: %q", o.contentType)
}
if mediaType == "text/html" {
return fs.ErrorNotAFile
}
}
return nil
}

View File

@@ -1,5 +1,3 @@
// +build go1.8
package http
import (
@@ -65,7 +63,7 @@ func prepare(t *testing.T) (fs.Fs, func()) {
return f, tidy
}
func testListRoot(t *testing.T, f fs.Fs) {
func testListRoot(t *testing.T, f fs.Fs, noSlash bool) {
entries, err := f.List("")
require.NoError(t, err)
@@ -93,15 +91,29 @@ func testListRoot(t *testing.T, f fs.Fs) {
e = entries[3]
assert.Equal(t, "two.html", e.Remote())
assert.Equal(t, int64(7), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
if noSlash {
assert.Equal(t, int64(-1), e.Size())
_, ok = e.(fs.Directory)
assert.True(t, ok)
} else {
assert.Equal(t, int64(41), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
}
}
func TestListRoot(t *testing.T) {
f, tidy := prepare(t)
defer tidy()
testListRoot(t, f)
testListRoot(t, f, false)
}
func TestListRootNoSlash(t *testing.T) {
f, tidy := prepare(t)
f.(*Fs).opt.NoSlash = true
defer tidy()
testListRoot(t, f, true)
}
func TestListSubDir(t *testing.T) {
@@ -194,7 +206,7 @@ func TestIsAFileRoot(t *testing.T) {
f, err := NewFs(remoteName, "one%.txt", m)
assert.Equal(t, err, fs.ErrorIsFile)
testListRoot(t, f)
testListRoot(t, f, false)
}
func TestIsAFileSubDir(t *testing.T) {

View File

@@ -1 +1 @@
potato
<a href="two.html/file.txt">file.txt</a>

View File

@@ -11,7 +11,9 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestHubic:",
NilObject: (*hubic.Object)(nil),
RemoteName: "TestHubic:",
NilObject: (*hubic.Object)(nil),
SkipFsCheckWrap: true,
SkipObjectCheckWrap: true,
})
}

View File

@@ -314,3 +314,9 @@ type UploadResponse struct {
Deleted interface{} `json:"deleted"`
Mime string `json:"mime"`
}
// DeviceRegistrationResponse is the response to registering a device
type DeviceRegistrationResponse struct {
ClientID string `json:"client_id"`
ClientSecret string `json:"client_secret"`
}

View File

@@ -8,6 +8,7 @@ import (
"io"
"io/ioutil"
"log"
"math/rand"
"net/http"
"net/url"
"os"
@@ -40,15 +41,21 @@ const (
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
defaultDevice = "Jotta"
defaultMountpoint = "Sync" // nolint
defaultMountpoint = "Archive"
rootURL = "https://www.jottacloud.com/jfs/"
apiURL = "https://api.jottacloud.com/files/v1/"
baseURL = "https://www.jottacloud.com/"
tokenURL = "https://api.jottacloud.com/auth/v1/token"
registerURL = "https://api.jottacloud.com/auth/v1/register"
cachePrefix = "rclone-jcmd5-"
rcloneClientID = "nibfk8biu12ju7hpqomr8b1e40"
rcloneEncryptedClientSecret = "Vp8eAv7eVElMnQwN-kgU9cbhgApNDaMqWdlDi5qFydlQoji4JBxrGMF2"
configUsername = "user"
configClientID = "client_id"
configClientSecret = "client_secret"
configDevice = "device"
configMountpoint = "mountpoint"
charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
)
var (
@@ -58,14 +65,13 @@ var (
AuthURL: tokenURL,
TokenURL: tokenURL,
},
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
RedirectURL: oauthutil.RedirectLocalhostURL,
}
)
// Register with Fs
func init() {
// needs to be done early so we can use oauth during config
fs.Register(&fs.RegInfo{
Name: "jottacloud",
Description: "JottaCloud",
@@ -79,14 +85,62 @@ func init() {
}
}
srv := rest.NewClient(fshttp.NewClient(fs.Config))
// ask if we should create a device specific token: https://github.com/ncw/rclone/issues/2995
fmt.Printf("\nDo you want to create a machine specific API key?\n\nRclone has its own Jottacloud API key which works fine as long as one only uses rclone on a single machine. When you want to use rclone with this account on more than one machine it's recommended to create a machine specific API key. These keys can NOT be shared between machines.\n\n")
if config.Confirm() {
// random generator to generate random device names
seededRand := rand.New(rand.NewSource(time.Now().UnixNano()))
randomDeviceNamePartLength := 21
randomDeviceNamePart := make([]byte, randomDeviceNamePartLength)
for i := range randomDeviceNamePart {
randomDeviceNamePart[i] = charset[seededRand.Intn(len(charset))]
}
randomDeviceName := "rclone-" + string(randomDeviceNamePart)
fs.Debugf(nil, "Trying to register device '%s'", randomDeviceName)
values := url.Values{}
values.Set("device_id", randomDeviceName)
// all information comes from https://github.com/ttyridal/aiojotta/wiki/Jotta-protocol-3.-Authentication#token-authentication
opts := rest.Opts{
Method: "POST",
RootURL: registerURL,
ContentType: "application/x-www-form-urlencoded",
ExtraHeaders: map[string]string{"Authorization": "Bearer c2xrZmpoYWRsZmFramhkc2xma2phaHNkbGZramhhc2xkZmtqaGFzZGxrZmpobGtq"},
Parameters: values,
}
var deviceRegistration api.DeviceRegistrationResponse
_, err := srv.CallJSON(&opts, nil, &deviceRegistration)
if err != nil {
log.Fatalf("Failed to register device: %v", err)
}
m.Set(configClientID, deviceRegistration.ClientID)
m.Set(configClientSecret, obscure.MustObscure(deviceRegistration.ClientSecret))
fs.Debugf(nil, "Got clientID '%s' and clientSecret '%s'", deviceRegistration.ClientID, deviceRegistration.ClientSecret)
}
clientID, ok := m.Get(configClientID)
if !ok {
clientID = rcloneClientID
}
clientSecret, ok := m.Get(configClientSecret)
if !ok {
clientSecret = rcloneEncryptedClientSecret
}
oauthConfig.ClientID = clientID
oauthConfig.ClientSecret = obscure.MustReveal(clientSecret)
username, ok := m.Get(configUsername)
if !ok {
log.Fatalf("No username defined")
}
password := config.GetPassword("Your Jottacloud password is only required during config and will not be stored.")
password := config.GetPassword("Your Jottacloud password is only required during setup and will not be stored.")
// prepare our token request with username and password
srv := rest.NewClient(fshttp.NewClient(fs.Config))
values := url.Values{}
values.Set("grant_type", "PASSWORD")
values.Set("password", password)
@@ -106,7 +160,7 @@ func init() {
// if 2fa is enabled the first request is expected to fail. We will do another request with the 2fa code as an additional http header
if resp != nil {
if resp.Header.Get("X-JottaCloud-OTP") == "required; SMS" {
fmt.Printf("This account has 2 factor authentication enabled you will receive a verification code via SMS.\n")
fmt.Printf("This account uses 2 factor authentication you will receive a verification code via SMS.\n")
fmt.Printf("Enter verification code> ")
authCode := config.ReadLine()
authCode = strings.Replace(authCode, "-", "", -1) // the SMS received contains a pair of 3 digit numbers separated by '-' but a single 6 digit number is wanted
@@ -129,23 +183,49 @@ func init() {
// finally save them in the config
err = oauthutil.PutToken(name, m, &token, true)
if err != nil {
log.Fatalf("Error while setting token: %s", err)
log.Fatalf("Error while saving token: %s", err)
}
fmt.Printf("\nDo you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?\n\n")
if config.Confirm() {
oAuthClient, _, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to load oAuthClient: %s", err)
}
srv = rest.NewClient(oAuthClient).SetRoot(rootURL)
acc, err := getAccountInfo(srv, username)
if err != nil {
log.Fatalf("Error getting devices: %s", err)
}
fmt.Printf("Please select the device to use. Normally this will be Jotta\n")
var deviceNames []string
for i := range acc.Devices {
deviceNames = append(deviceNames, acc.Devices[i].Name)
}
result := config.Choose("Devices", deviceNames, nil, false)
m.Set(configDevice, result)
dev, err := getDeviceInfo(srv, path.Join(username, result))
if err != nil {
log.Fatalf("Error getting Mountpoint: %s", err)
}
if len(dev.MountPoints) == 0 {
log.Fatalf("No Mountpoints found for this device.")
}
fmt.Printf("Please select the mountpoint to user. Normally this will be Archive\n")
var mountpointNames []string
for i := range dev.MountPoints {
mountpointNames = append(mountpointNames, dev.MountPoints[i].Name)
}
result = config.Choose("Mountpoints", mountpointNames, nil, false)
m.Set(configMountpoint, result)
}
},
Options: []fs.Option{{
Name: configUsername,
Help: "User Name:",
}, {
Name: "mountpoint",
Help: "The mountpoint to use.",
Required: true,
Examples: []fs.OptionExample{{
Value: "Sync",
Help: "Will be synced by the official client.",
}, {
Value: "Archive",
Help: "Archive",
}},
}, {
Name: "md5_memory_limit",
Help: "Files bigger than this will be cached on disk to calculate the MD5 if required.",
@@ -173,6 +253,7 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
User string `config:"user"`
Device string `config:"device"`
Mountpoint string `config:"mountpoint"`
MD5MemoryThreshold fs.SizeSuffix `config:"md5_memory_limit"`
HardDelete bool `config:"hard_delete"`
@@ -190,7 +271,7 @@ type Fs struct {
endpointURL string
srv *rest.Client
apiSrv *rest.Client
pacer *pacer.Pacer
pacer *fs.Pacer
tokenRenewer *oauthutil.Renew // renew the token on expiry
}
@@ -280,18 +361,31 @@ func (f *Fs) readMetaDataForPath(path string) (info *api.JottaFile, err error) {
return &result, nil
}
// getAccountInfo retrieves account information
func (f *Fs) getAccountInfo() (info *api.AccountInfo, err error) {
// getAccountInfo queries general information about the account.
// Takes rest.Client and username as parameters to be easily usable
// during config
func getAccountInfo(srv *rest.Client, username string) (info *api.AccountInfo, err error) {
opts := rest.Opts{
Method: "GET",
Path: urlPathEscape(f.user),
Path: urlPathEscape(username),
}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(&opts, nil, &info)
return shouldRetry(resp, err)
})
_, err = srv.CallXML(&opts, nil, &info)
if err != nil {
return nil, err
}
return info, nil
}
// getDeviceInfo queries information about a Jottacloud device
func getDeviceInfo(srv *rest.Client, path string) (info *api.JottaDevice, err error) {
opts := rest.Opts{
Method: "GET",
Path: urlPathEscape(path),
}
_, err = srv.CallXML(&opts, nil, &info)
if err != nil {
return nil, err
}
@@ -300,12 +394,18 @@ func (f *Fs) getAccountInfo() (info *api.AccountInfo, err error) {
}
// setEndpointUrl reads the account id and generates the API endpoint URL
func (f *Fs) setEndpointURL(mountpoint string) (err error) {
info, err := f.getAccountInfo()
func (f *Fs) setEndpointURL() (err error) {
info, err := getAccountInfo(f.srv, f.user)
if err != nil {
return errors.Wrap(err, "failed to get endpoint url")
}
f.endpointURL = urlPathEscape(path.Join(info.Username, defaultDevice, mountpoint))
if f.opt.Device == "" {
f.opt.Device = defaultDevice
}
if f.opt.Mountpoint == "" {
f.opt.Mountpoint = defaultMountpoint
}
f.endpointURL = urlPathEscape(path.Join(info.Username, f.opt.Device, f.opt.Mountpoint))
return nil
}
@@ -381,6 +481,17 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
rootIsDir := strings.HasSuffix(root, "/")
root = parsePath(root)
clientID, ok := m.Get(configClientID)
if !ok {
clientID = rcloneClientID
}
clientSecret, ok := m.Get(configClientSecret)
if !ok {
clientSecret = rcloneEncryptedClientSecret
}
oauthConfig.ClientID = clientID
oauthConfig.ClientSecret = obscure.MustReveal(clientSecret)
// the oauth client for the api servers needs
// a filter to fix the grant_type issues (see above)
baseClient := fshttp.NewClient(fs.Config)
@@ -403,7 +514,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
apiSrv: rest.NewClient(oAuthClient).SetRoot(apiURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
CaseInsensitive: true,
@@ -419,7 +530,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return err
})
err = f.setEndpointURL(opt.Mountpoint)
err = f.setEndpointURL()
if err != nil {
return nil, errors.Wrap(err, "couldn't get account info")
}
@@ -677,6 +788,9 @@ func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Obje
//
// The new object may have been created if an error is returned
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
if f.opt.Device != "Jotta" {
return nil, errors.New("upload not supported for devices other than Jotta")
}
o := f.createObject(src.Remote(), src.ModTime(), src.Size())
return o, o.Update(in, src, options...)
}
@@ -940,7 +1054,7 @@ func (f *Fs) PublicLink(remote string) (link string, err error) {
// About gets quota information
func (f *Fs) About() (*fs.Usage, error) {
info, err := f.getAccountInfo()
info, err := getAccountInfo(f.srv, f.user)
if err != nil {
return nil, err
}

backend/koofr/koofr.go (new file, 589 lines added)
View File

@@ -0,0 +1,589 @@
package koofr
import (
"encoding/base64"
"errors"
"fmt"
"io"
"net/http"
"path"
"strings"
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/hash"
httpclient "github.com/koofr/go-httpclient"
koofrclient "github.com/koofr/go-koofrclient"
)
// Register Fs with rclone
func init() {
fs.Register(&fs.RegInfo{
Name: "koofr",
Description: "Koofr",
NewFs: NewFs,
Options: []fs.Option{
{
Name: "endpoint",
Help: "The Koofr API endpoint to use",
Default: "https://app.koofr.net",
Required: true,
Advanced: true,
}, {
Name: "mountid",
Help: "Mount ID of the mount to use. If omitted, the primary mount is used.",
Required: false,
Default: "",
Advanced: true,
}, {
Name: "user",
Help: "Your Koofr user name",
Required: true,
}, {
Name: "password",
Help: "Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)",
IsPassword: true,
Required: true,
},
},
})
}
// Options represent the configuration of the Koofr backend
type Options struct {
Endpoint string `config:"endpoint"`
MountID string `config:"mountid"`
User string `config:"user"`
Password string `config:"password"`
}
// A Fs is a representation of a remote Koofr Fs
type Fs struct {
name string
mountID string
root string
opt Options
features *fs.Features
client *koofrclient.KoofrClient
}
// An Object on the remote Koofr Fs
type Object struct {
fs *Fs
remote string
info koofrclient.FileInfo
}
func base(pth string) string {
rv := path.Base(pth)
if rv == "" || rv == "." {
rv = "/"
}
return rv
}
func dir(pth string) string {
rv := path.Dir(pth)
if rv == "" || rv == "." {
rv = "/"
}
return rv
}
// String returns a string representation of the remote Object
func (o *Object) String() string {
return o.remote
}
// Remote returns the remote path of the Object, relative to Fs root
func (o *Object) Remote() string {
return o.remote
}
// ModTime returns the modification time of the Object
func (o *Object) ModTime() time.Time {
return time.Unix(o.info.Modified/1000, (o.info.Modified%1000)*1000*1000)
}
// Size return the size of the Object in bytes
func (o *Object) Size() int64 {
return o.info.Size
}
// Fs returns a reference to the Koofr Fs containing the Object
func (o *Object) Fs() fs.Info {
return o.fs
}
// Hash returns an MD5 hash of the Object
func (o *Object) Hash(typ hash.Type) (string, error) {
if typ == hash.MD5 {
return o.info.Hash, nil
}
return "", nil
}
// fullPath returns full path of the remote Object (including Fs root)
func (o *Object) fullPath() string {
return o.fs.fullPath(o.remote)
}
// Storable returns true if the Object is storable
func (o *Object) Storable() bool {
return true
}
// SetModTime is not supported
func (o *Object) SetModTime(mtime time.Time) error {
return nil
}
// Open opens the Object for reading
func (o *Object) Open(options ...fs.OpenOption) (io.ReadCloser, error) {
var sOff, eOff int64 = 0, -1
for _, option := range options {
switch x := option.(type) {
case *fs.SeekOption:
sOff = x.Offset
case *fs.RangeOption:
sOff = x.Start
eOff = x.End
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
if sOff == 0 && eOff < 0 {
return o.fs.client.FilesGet(o.fs.mountID, o.fullPath())
}
if sOff < 0 {
sOff = o.Size() - eOff
eOff = o.Size()
}
if eOff > o.Size() {
eOff = o.Size()
}
span := &koofrclient.FileSpan{
Start: sOff,
End: eOff,
}
return o.fs.client.FilesGetRange(o.fs.mountID, o.fullPath(), span)
}
// Update updates the Object contents
func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
putopts := &koofrclient.PutFilter{
ForceOverwrite: true,
NoRename: true,
IgnoreNonExisting: true,
}
fullPath := o.fullPath()
dirPath := dir(fullPath)
name := base(fullPath)
err := o.fs.mkdir(dirPath)
if err != nil {
return err
}
info, err := o.fs.client.FilesPutOptions(o.fs.mountID, dirPath, name, in, putopts)
if err != nil {
return err
}
o.info = *info
return nil
}
// Remove deletes the remote Object
func (o *Object) Remove() error {
return o.fs.client.FilesDelete(o.fs.mountID, o.fullPath())
}
// Name returns the name of the Fs
func (f *Fs) Name() string {
return f.name
}
// Root returns the root path of the Fs
func (f *Fs) Root() string {
return f.root
}
// String returns a string representation of the Fs
func (f *Fs) String() string {
return "koofr:" + f.mountID + ":" + f.root
}
// Features returns the optional features supported by this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Precision denotes that setting modification times is not supported
func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
// Hashes returns the set of hash types provided by the Fs
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
}
// fullPath constructs a full, absolute path from a Fs root relative path
func (f *Fs) fullPath(part string) string {
return path.Join("/", f.root, part)
}
// NewFs constructs a new filesystem given a root path and configuration options
func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
opt := new(Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
}
pass, err := obscure.Reveal(opt.Password)
if err != nil {
return nil, err
}
client := koofrclient.NewKoofrClient(opt.Endpoint, false)
basicAuth := fmt.Sprintf("Basic %s",
base64.StdEncoding.EncodeToString([]byte(opt.User+":"+pass)))
client.HTTPClient.Headers.Set("Authorization", basicAuth)
mounts, err := client.Mounts()
if err != nil {
return nil, err
}
f := &Fs{
name: name,
root: root,
opt: *opt,
client: client,
}
f.features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: false,
BucketBased: false,
CanHaveEmptyDirectories: true,
}).Fill(f)
for _, m := range mounts {
if opt.MountID != "" {
if m.Id == opt.MountID {
f.mountID = m.Id
break
}
} else if m.IsPrimary {
f.mountID = m.Id
break
}
}
if f.mountID == "" {
if opt.MountID == "" {
return nil, errors.New("Failed to find primary mount")
}
return nil, errors.New("Failed to find mount " + opt.MountID)
}
rootFile, err := f.client.FilesInfo(f.mountID, "/"+f.root)
if err == nil && rootFile.Type != "dir" {
f.root = dir(f.root)
err = fs.ErrorIsFile
} else {
err = nil
}
return f, err
}
// List returns a list of items in a directory
func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
files, err := f.client.FilesList(f.mountID, f.fullPath(dir))
if err != nil {
return nil, translateErrorsDir(err)
}
entries = make([]fs.DirEntry, len(files))
for i, file := range files {
if file.Type == "dir" {
entries[i] = fs.NewDir(path.Join(dir, file.Name), time.Unix(0, 0))
} else {
entries[i] = &Object{
fs: f,
info: file,
remote: path.Join(dir, file.Name),
}
}
}
return entries, nil
}
// NewObject creates a new remote Object for a given remote path
func (f *Fs) NewObject(remote string) (obj fs.Object, err error) {
info, err := f.client.FilesInfo(f.mountID, f.fullPath(remote))
if err != nil {
return nil, translateErrorsObject(err)
}
if info.Type == "dir" {
return nil, fs.ErrorNotAFile
}
return &Object{
fs: f,
info: info,
remote: remote,
}, nil
}
// Put updates a remote Object
func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (obj fs.Object, err error) {
putopts := &koofrclient.PutFilter{
ForceOverwrite: true,
NoRename: true,
IgnoreNonExisting: true,
}
fullPath := f.fullPath(src.Remote())
dirPath := dir(fullPath)
name := base(fullPath)
err = f.mkdir(dirPath)
if err != nil {
return nil, err
}
info, err := f.client.FilesPutOptions(f.mountID, dirPath, name, in, putopts)
if err != nil {
return nil, translateErrorsObject(err)
}
return &Object{
fs: f,
info: *info,
remote: src.Remote(),
}, nil
}
// PutStream updates a remote Object with a stream of unknown size
func (f *Fs) PutStream(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(in, src, options...)
}
// isBadRequest is a predicate which holds true iff the error returned was
// HTTP status 400
func isBadRequest(err error) bool {
switch err := err.(type) {
case httpclient.InvalidStatusError:
if err.Got == http.StatusBadRequest {
return true
}
}
return false
}
// translateErrorsDir translates koofr errors to rclone errors (for a dir
// operation)
func translateErrorsDir(err error) error {
switch err := err.(type) {
case httpclient.InvalidStatusError:
if err.Got == http.StatusNotFound {
return fs.ErrorDirNotFound
}
}
return err
}
// translateErrorsObject translates Koofr errors to rclone errors (for an object operation)
func translateErrorsObject(err error) error {
switch err := err.(type) {
case httpclient.InvalidStatusError:
if err.Got == http.StatusNotFound {
return fs.ErrorObjectNotFound
}
}
return err
}
// mkdir creates a directory at the given remote path. Creates ancestors if
// necessary
func (f *Fs) mkdir(fullPath string) error {
if fullPath == "/" {
return nil
}
info, err := f.client.FilesInfo(f.mountID, fullPath)
if err == nil && info.Type == "dir" {
return nil
}
err = translateErrorsDir(err)
if err != nil && err != fs.ErrorDirNotFound {
return err
}
dirs := strings.Split(fullPath, "/")
parent := "/"
for _, part := range dirs {
if part == "" {
continue
}
info, err = f.client.FilesInfo(f.mountID, path.Join(parent, part))
if err != nil || info.Type != "dir" {
err = translateErrorsDir(err)
if err != nil && err != fs.ErrorDirNotFound {
return err
}
err = f.client.FilesNewFolder(f.mountID, parent, part)
if err != nil && !isBadRequest(err) {
return err
}
}
parent = path.Join(parent, part)
}
return nil
}
// Mkdir creates a directory at the given remote path. Creates ancestors if
// necessary
func (f *Fs) Mkdir(dir string) error {
fullPath := f.fullPath(dir)
return f.mkdir(fullPath)
}
// Rmdir removes an (empty) directory at the given remote path
func (f *Fs) Rmdir(dir string) error {
files, err := f.client.FilesList(f.mountID, f.fullPath(dir))
if err != nil {
return translateErrorsDir(err)
}
if len(files) > 0 {
return fs.ErrorDirectoryNotEmpty
}
err = f.client.FilesDelete(f.mountID, f.fullPath(dir))
if err != nil {
return translateErrorsDir(err)
}
return nil
}
// Copy copies a remote Object to the given path
func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
dstFullPath := f.fullPath(remote)
dstDir := dir(dstFullPath)
err := f.mkdir(dstDir)
if err != nil {
return nil, fs.ErrorCantCopy
}
err = f.client.FilesCopy((src.(*Object)).fs.mountID,
(src.(*Object)).fs.fullPath((src.(*Object)).remote),
f.mountID, dstFullPath)
if err != nil {
return nil, fs.ErrorCantCopy
}
return f.NewObject(remote)
}
// Move moves a remote Object to the given path
func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
srcObj := src.(*Object)
dstFullPath := f.fullPath(remote)
dstDir := dir(dstFullPath)
err := f.mkdir(dstDir)
if err != nil {
return nil, fs.ErrorCantMove
}
err = f.client.FilesMove(srcObj.fs.mountID,
srcObj.fs.fullPath(srcObj.remote), f.mountID, dstFullPath)
if err != nil {
return nil, fs.ErrorCantMove
}
return f.NewObject(remote)
}
// DirMove moves a remote directory to the given path
func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
srcFs := src.(*Fs)
srcFullPath := srcFs.fullPath(srcRemote)
dstFullPath := f.fullPath(dstRemote)
if srcFs.mountID == f.mountID && srcFullPath == dstFullPath {
return fs.ErrorDirExists
}
dstDir := dir(dstFullPath)
err := f.mkdir(dstDir)
if err != nil {
return fs.ErrorCantDirMove
}
err = f.client.FilesMove(srcFs.mountID, srcFullPath, f.mountID, dstFullPath)
if err != nil {
return fs.ErrorCantDirMove
}
return nil
}
// About reports space usage (with MB precision)
func (f *Fs) About() (*fs.Usage, error) {
mount, err := f.client.MountsDetails(f.mountID)
if err != nil {
return nil, err
}
return &fs.Usage{
Total: fs.NewUsageValue(mount.SpaceTotal * 1024 * 1024),
Used: fs.NewUsageValue(mount.SpaceUsed * 1024 * 1024),
Trashed: nil,
Other: nil,
Free: fs.NewUsageValue((mount.SpaceTotal - mount.SpaceUsed) * 1024 * 1024),
Objects: nil,
}, nil
}
// Purge purges the complete Fs
func (f *Fs) Purge() error {
err := translateErrorsDir(f.client.FilesDelete(f.mountID, f.fullPath("")))
return err
}
// linkCreate is a Koofr API request for creating a public link
type linkCreate struct {
Path string `json:"path"`
}
// link is a Koofr API response to creating a public link
type link struct {
ID string `json:"id"`
Name string `json:"name"`
Path string `json:"path"`
Counter int64 `json:"counter"`
URL string `json:"url"`
ShortURL string `json:"shortUrl"`
Hash string `json:"hash"`
Host string `json:"host"`
HasPassword bool `json:"hasPassword"`
Password string `json:"password"`
ValidFrom int64 `json:"validFrom"`
ValidTo int64 `json:"validTo"`
PasswordRequired bool `json:"passwordRequired"`
}
// createLink makes a Koofr API call to create a public link
func createLink(c *koofrclient.KoofrClient, mountID string, path string) (*link, error) {
linkCreate := linkCreate{
Path: path,
}
linkData := link{}
request := httpclient.RequestData{
Method: "POST",
Path: "/api/v2/mounts/" + mountID + "/links",
ExpectedStatus: []int{http.StatusOK, http.StatusCreated},
ReqEncoding: httpclient.EncodingJSON,
ReqValue: linkCreate,
RespEncoding: httpclient.EncodingJSON,
RespValue: &linkData,
}
_, err := c.Request(&request)
if err != nil {
return nil, err
}
return &linkData, nil
}
// PublicLink creates a public link to the remote path
func (f *Fs) PublicLink(remote string) (string, error) {
linkData, err := createLink(f.client, f.mountID, f.fullPath(remote))
if err != nil {
return "", translateErrorsDir(err)
}
return linkData.ShortURL, nil
}

View File

@@ -0,0 +1,14 @@
package koofr_test
import (
"testing"
"github.com/ncw/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestKoofr:",
})
}

View File

@@ -28,8 +28,9 @@ import (
)
// Constants
const devUnset = 0xdeadbeefcafebabe // a device id meaning it is unset
const linkSuffix = ".rclonelink" // The suffix added to a translated symbolic link
const devUnset = 0xdeadbeefcafebabe // a device id meaning it is unset
const linkSuffix = ".rclonelink" // The suffix added to a translated symbolic link
const useReadDir = (runtime.GOOS == "windows" || runtime.GOOS == "plan9") // these OSes read FileInfos directly
// Register with Fs
func init() {
@@ -327,7 +328,14 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
fd, err := os.Open(fsDirPath)
if err != nil {
return nil, errors.Wrapf(err, "failed to open directory %q", dir)
isPerm := os.IsPermission(err)
err = errors.Wrapf(err, "failed to open directory %q", dir)
fs.Errorf(dir, "%v", err)
if isPerm {
accounting.Stats.Error(fserrors.NoRetryError(err))
err = nil // ignore error but fail sync
}
return nil, err
}
defer func() {
cerr := fd.Close()
@@ -337,12 +345,38 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
}()
for {
fis, err := fd.Readdir(1024)
if err == io.EOF && len(fis) == 0 {
break
var fis []os.FileInfo
if useReadDir {
// Windows and Plan9 read the directory entries with the stat information in which
// shouldn't fail because of unreadable entries.
fis, err = fd.Readdir(1024)
if err == io.EOF && len(fis) == 0 {
break
}
} else {
// For other OSes we read the names only (which shouldn't fail) then stat the
// individual entries ourselves so we can log errors but not fail the directory read.
var names []string
names, err = fd.Readdirnames(1024)
if err == io.EOF && len(names) == 0 {
break
}
if err == nil {
for _, name := range names {
namepath := filepath.Join(fsDirPath, name)
fi, fierr := os.Lstat(namepath)
if fierr != nil {
err = errors.Wrapf(err, "failed to read directory %q", namepath)
fs.Errorf(dir, "%v", fierr)
accounting.Stats.Error(fserrors.NoRetryError(fierr)) // fail the sync
continue
}
fis = append(fis, fi)
}
}
}
if err != nil {
return nil, errors.Wrapf(err, "failed to read directory %q", dir)
return nil, errors.Wrap(err, "failed to read directory entry")
}
for _, fi := range fis {
@@ -709,9 +743,10 @@ func (o *Object) Hash(r hash.Type) (string, error) {
o.fs.objectHashesMu.Lock()
hashes := o.hashes
hashValue, hashFound := o.hashes[r]
o.fs.objectHashesMu.Unlock()
if !o.modTime.Equal(oldtime) || oldsize != o.size || hashes == nil {
if !o.modTime.Equal(oldtime) || oldsize != o.size || hashes == nil || !hashFound {
var in io.ReadCloser
if !o.translatedLink {
@@ -722,7 +757,7 @@ func (o *Object) Hash(r hash.Type) (string, error) {
if err != nil {
return "", errors.Wrap(err, "hash: failed to open")
}
hashes, err = hash.Stream(in)
hashes, err = hash.StreamTypes(in, hash.NewHashSet(r))
closeErr := in.Close()
if err != nil {
return "", errors.Wrap(err, "hash: failed to read")
@@ -730,11 +765,16 @@ func (o *Object) Hash(r hash.Type) (string, error) {
if closeErr != nil {
return "", errors.Wrap(closeErr, "hash: failed to close")
}
hashValue = hashes[r]
o.fs.objectHashesMu.Lock()
o.hashes = hashes
if o.hashes == nil {
o.hashes = hashes
} else {
o.hashes[r] = hashValue
}
o.fs.objectHashesMu.Unlock()
}
return hashes[r], nil
return hashValue, nil
}
// Size returns the size of an object in bytes
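The Hash change above computes only the requested hash type via hash.StreamTypes instead of hashing the stream with every supported algorithm, and caches the single result. A small sketch, assuming the fs/hash API as used in the diff:

package main

import (
    "fmt"
    "log"
    "strings"

    "github.com/ncw/rclone/fs/hash"
)

func main() {
    in := strings.NewReader("hello world")
    // Restrict hashing to MD5 only rather than computing every hash type.
    sums, err := hash.StreamTypes(in, hash.NewHashSet(hash.MD5))
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("md5:", sums[hash.MD5])
}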
@@ -998,6 +1038,36 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
return o.lstat()
}
// OpenWriterAt opens with a handle for random access writes
//
// Pass in the remote desired and the size if known.
//
// It truncates any existing object
func (f *Fs) OpenWriterAt(remote string, size int64) (fs.WriterAtCloser, error) {
// Temporary Object under construction
o := f.newObject(remote, "")
err := o.mkdirAll()
if err != nil {
return nil, err
}
if o.translatedLink {
return nil, errors.New("can't open a symlink for random writing")
}
out, err := file.OpenFile(o.path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
if err != nil {
return nil, err
}
// Pre-allocate the file for performance reasons
err = preAllocate(size, out)
if err != nil {
fs.Debugf(o, "Failed to pre-allocate: %v", err)
}
return out, nil
}
// setMetadata sets the file info from the os.FileInfo passed in
func (o *Object) setMetadata(info os.FileInfo) {
// Don't overwrite the info if we don't need to
@@ -1139,10 +1209,11 @@ func cleanWindowsName(f *Fs, name string) string {
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}
_ fs.Purger = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.Mover = &Fs{}
_ fs.DirMover = &Fs{}
_ fs.Object = &Object{}
_ fs.Fs = &Fs{}
_ fs.Purger = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.Mover = &Fs{}
_ fs.DirMover = &Fs{}
_ fs.OpenWriterAter = &Fs{}
_ fs.Object = &Object{}
)

View File

@@ -4,16 +4,40 @@ package local
import (
"os"
"sync/atomic"
"github.com/ncw/rclone/fs"
"golang.org/x/sys/unix"
)
var (
fallocFlags = [...]uint32{
unix.FALLOC_FL_KEEP_SIZE, // Default
unix.FALLOC_FL_KEEP_SIZE | unix.FALLOC_FL_PUNCH_HOLE, // for ZFS #3066
}
fallocFlagsIndex int32
)
// preAllocate the file for performance reasons
func preAllocate(size int64, out *os.File) error {
if size <= 0 {
return nil
}
err := unix.Fallocate(int(out.Fd()), unix.FALLOC_FL_KEEP_SIZE, 0, size)
index := atomic.LoadInt32(&fallocFlagsIndex)
again:
if index >= int32(len(fallocFlags)) {
return nil // Fallocate is disabled
}
flags := fallocFlags[index]
err := unix.Fallocate(int(out.Fd()), flags, 0, size)
if err == unix.ENOTSUP {
// Try the next flags combination
index++
atomic.StoreInt32(&fallocFlagsIndex, index)
fs.Debugf(nil, "preAllocate: got error on fallocate, trying combination %d/%d: %v", index, len(fallocFlags), err)
goto again
}
// FIXME could be doing something here
// if err == unix.ENOSPC {
// log.Printf("No space")
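// Illustrative only (not part of this change): the fallback pattern above in a
// standalone sketch. fallocate, flagSets and flagIndex are hypothetical
// stand-ins for unix.Fallocate, fallocFlags and fallocFlagsIndex.
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

var errNotSup = errors.New("operation not supported") // stands in for unix.ENOTSUP

// flagSets mirrors fallocFlags: the default combination first, then the ZFS-friendly one.
var flagSets = [...]uint32{1, 1 | 2}

// flagIndex remembers the first combination that worked, or that all failed.
var flagIndex int32

// fallocate only "supports" the second combination, like ZFS in #3066.
func fallocate(flags uint32) error {
	if flags != flagSets[1] {
		return errNotSup
	}
	return nil
}

func preAllocateSketch() error {
	index := atomic.LoadInt32(&flagIndex)
again:
	if index >= int32(len(flagSets)) {
		return nil // all combinations failed so pre-allocation is disabled
	}
	err := fallocate(flagSets[index])
	if err == errNotSup {
		// Remember the failure and try the next combination
		index++
		atomic.StoreInt32(&flagIndex, index)
		goto again
	}
	return err
}

func main() {
	fmt.Println(preAllocateSketch()) // <nil> - the second combination succeeded
}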

View File

@@ -98,7 +98,7 @@ type Fs struct {
opt Options // parsed config options
features *fs.Features // optional features
srv *mega.Mega // the connection to the server
pacer *pacer.Pacer // pacer for API calls
pacer *fs.Pacer // pacer for API calls
rootNodeMu sync.Mutex // mutex for _rootNode
_rootNode *mega.Node // root node - call findRoot to use this
mkdirMu sync.Mutex // used to serialize calls to mkdir / rmdir
@@ -217,7 +217,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root: root,
opt: *opt,
srv: srv,
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
DuplicateFiles: true,
@@ -402,6 +402,35 @@ func (f *Fs) clearRoot() {
//log.Printf("cleared root directory")
}
// CleanUp deletes all files currently in trash
func (f *Fs) CleanUp() (err error) {
trash := f.srv.FS.GetTrash()
items := []*mega.Node{}
_, err = f.list(trash, func(item *mega.Node) bool {
items = append(items, item)
return false
})
if err != nil {
return errors.Wrap(err, "CleanUp failed to list items in trash")
}
fs.Infof(f, "Deleting %d items from the trash", len(items))
errors := 0
// similar to f.deleteNode(trash) but with HardDelete as true
for _, item := range items {
fs.Debugf(f, "Deleting trash %q", item.GetName())
deleteErr := f.pacer.Call(func() (bool, error) {
err := f.srv.Delete(item, true)
return shouldRetry(err)
})
if deleteErr != nil {
err = deleteErr
errors++
}
}
fs.Infof(f, "Deleted %d items from the trash with %d errors", len(items), errors)
return err
}
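// Illustrative only (not part of this change): a minimal sketch of how a
// caller might reach CleanUp through an optional-interface check. The
// cleanUpper interface and emptyTrash helper are declared here for the
// sketch and are not taken from the fs package.
type cleanUpper interface {
	CleanUp() error
}

func emptyTrash(f fs.Fs) error {
	do, ok := f.(cleanUpper)
	if !ok {
		return errors.New("this remote doesn't support cleanup")
	}
	// For this backend this deletes everything in the mega trash, which is
	// what running cleanup against the remote ends up doing.
	return do.CleanUp()
}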
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.

View File

@@ -14,6 +14,8 @@ import (
"strings"
"time"
"github.com/ncw/rclone/lib/atexit"
"github.com/ncw/rclone/backend/onedrive/api"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/config"
@@ -261,7 +263,7 @@ type Fs struct {
features *fs.Features // optional features
srv *rest.Client // the connection to the one drive server
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // pacer for API calls
pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry
driveID string // ID to use for querying Microsoft Graph
driveType string // https://developer.microsoft.com/en-us/graph/docs/api-reference/v1.0/resources/drive
@@ -335,8 +337,13 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
// readMetaDataForPathRelativeToID reads the metadata for a path relative to an item that is addressed by its normalized ID.
// if `relPath` == "", it reads the metadata for the item with that ID.
//
// We address items using the pattern `drives/driveID/items/itemID:/relativePath`
// instead of simply using `drives/driveID/root:/itemPath` because it works for
// "shared with me" folders in OneDrive Personal (See #2536, #2778)
// This path pattern comes from https://github.com/OneDrive/onedrive-api-docs/issues/908#issuecomment-417488480
func (f *Fs) readMetaDataForPathRelativeToID(normalizedID string, relPath string) (info *api.Item, resp *http.Response, err error) {
opts := newOptsCall(normalizedID, "GET", ":/"+rest.URLPathEscape(replaceReservedChars(relPath)))
opts := newOptsCall(normalizedID, "GET", ":/"+withTrailingColon(rest.URLPathEscape(replaceReservedChars(relPath))))
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(&opts, nil, &info)
return shouldRetry(resp, err)
@@ -475,7 +482,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
driveID: opt.DriveID,
driveType: opt.DriveType,
srv: rest.NewClient(oAuthClient).SetRoot(graphURL + "/drives/" + opt.DriveID),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
CaseInsensitive: true,
@@ -703,9 +710,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
id := info.GetID()
f.dirCache.Put(remote, id)
d := fs.NewDir(remote, time.Time(info.GetLastModifiedDateTime())).SetID(id)
if folder != nil {
d.SetItems(folder.ChildCount)
}
d.SetItems(folder.ChildCount)
entries = append(entries, d)
} else {
o, err := f.newObjectWithInfo(remote, info)
@@ -819,9 +824,6 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
return err
}
f.dirCache.FlushDir(dir)
if err != nil {
return err
}
return nil
}
@@ -1340,12 +1342,12 @@ func (o *Object) setModTime(modTime time.Time) (*api.Item, error) {
opts = rest.Opts{
Method: "PATCH",
RootURL: rootURL,
Path: "/" + drive + "/items/" + trueDirID + ":/" + rest.URLPathEscape(leaf),
Path: "/" + drive + "/items/" + trueDirID + ":/" + withTrailingColon(rest.URLPathEscape(leaf)),
}
} else {
opts = rest.Opts{
Method: "PATCH",
Path: "/root:/" + rest.URLPathEscape(o.srvPath()),
Path: "/root:/" + withTrailingColon(rest.URLPathEscape(o.srvPath())),
}
}
update := api.SetFileSystemInfo{
@@ -1491,23 +1493,40 @@ func (o *Object) uploadMultipart(in io.Reader, size int64, modTime time.Time) (i
return nil, errors.New("unknown-sized upload not supported")
}
uploadURLChan := make(chan string, 1)
gracefulCancel := func() {
uploadURL, ok := <-uploadURLChan
// Reading from uploadURLChan blocks the atexit process until
// we are able to use uploadURL to cancel the upload
if !ok { // createUploadSession failed - no need to cancel upload
return
}
fs.Debugf(o, "Cancelling multipart upload")
cancelErr := o.cancelUploadSession(uploadURL)
if cancelErr != nil {
fs.Logf(o, "Failed to cancel multipart upload: %v", cancelErr)
}
}
cancelFuncHandle := atexit.Register(gracefulCancel)
// Create upload session
fs.Debugf(o, "Starting multipart upload")
session, err := o.createUploadSession(modTime)
if err != nil {
close(uploadURLChan)
atexit.Unregister(cancelFuncHandle)
return nil, err
}
uploadURL := session.UploadURL
uploadURLChan <- uploadURL
// Cancel the session if something went wrong
defer func() {
if err != nil {
fs.Debugf(o, "Cancelling multipart upload: %v", err)
cancelErr := o.cancelUploadSession(uploadURL)
if cancelErr != nil {
fs.Logf(o, "Failed to cancel multipart upload: %v", err)
}
fs.Debugf(o, "Error encountered during upload: %v", err)
gracefulCancel()
}
atexit.Unregister(cancelFuncHandle)
}()
// Upload the chunks
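// Illustrative only (not part of this change): the hand-off above in a
// standalone sketch - the upload URL is passed to the cleanup function
// through a buffered channel so the function can be registered before the
// URL exists. The URL and the direct cleanup() call are made up; in the
// real code atexit runs the handler on interrupt.
package main

import "fmt"

func main() {
	uploadURLChan := make(chan string, 1) // buffered so the sender never blocks

	cleanup := func() {
		url, ok := <-uploadURLChan
		if !ok {
			return // channel closed: no session was created, nothing to cancel
		}
		fmt.Println("cancelling upload session at", url)
	}

	// Session creation succeeded, so publish the URL for the cleanup handler.
	// On failure we would close(uploadURLChan) instead.
	uploadURLChan <- "https://example.invalid/upload-session"

	cleanup()
}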
@@ -1668,6 +1687,21 @@ func getRelativePathInsideBase(base, target string) (string, bool) {
return "", false
}
// Adds a ":" at the end of `remotePath` in a proper manner.
// If `remotePath` already ends with "/", change it to ":/"
// If `remotePath` is "", return "".
// A workaround for #2720 and #3039
func withTrailingColon(remotePath string) string {
if remotePath == "" {
return ""
}
if strings.HasSuffix(remotePath, "/") {
return remotePath[:len(remotePath)-1] + ":/"
}
return remotePath + ":"
}
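// Illustrative only (not part of this change): a hypothetical test showing the
// intended behaviour, assuming the usual "testing" and testify assert imports.
func TestWithTrailingColon(t *testing.T) {
	assert.Equal(t, "", withTrailingColon(""))
	assert.Equal(t, "Documents/report.docx:", withTrailingColon("Documents/report.docx"))
	assert.Equal(t, "Documents/folder:/", withTrailingColon("Documents/folder/"))
}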
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)

View File

@@ -65,7 +65,7 @@ type Fs struct {
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the server
pacer *pacer.Pacer // To pace and retry the API calls
pacer *fs.Pacer // To pace and retry the API calls
session UserSessionInfo // contains the session data
dirCache *dircache.DirCache // Map of directory path to directory id
}
@@ -144,7 +144,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root: root,
opt: *opt,
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.dirCache = dircache.New(root, "0", f)
@@ -287,9 +287,6 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
return err
}
f.dirCache.FlushDir(dir)
if err != nil {
return err
}
return nil
}

View File

@@ -95,7 +95,7 @@ type Fs struct {
features *fs.Features // optional features
srv *rest.Client // the connection to the server
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *pacer.Pacer // pacer for API calls
pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry
}
@@ -254,7 +254,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
root: root,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
CaseInsensitive: false,

View File

@@ -369,6 +369,10 @@ func init() {
Value: "s3.us-west-1.wasabisys.com",
Help: "Wasabi US West endpoint",
Provider: "Wasabi",
}, {
Value: "s3.eu-central-1.wasabisys.com",
Help: "Wasabi EU Central endpoint",
Provider: "Wasabi",
}},
}, {
Name: "location_constraint",
@@ -645,6 +649,9 @@ isn't set then "acl" is used instead.`,
}, {
Value: "GLACIER",
Help: "Glacier storage class",
}, {
Value: "DEEP_ARCHIVE",
Help: "Glacier Deep Archive storage class",
}},
}, {
// Mapping from here: https://www.alibabacloud.com/help/doc-detail/64919.htm
@@ -729,6 +736,14 @@ If it is set then rclone will use v2 authentication.
Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.`,
Default: false,
Advanced: true,
}, {
Name: "use_accelerate_endpoint",
Provider: "AWS",
Help: `If true use the AWS S3 accelerated endpoint.
See: [AWS S3 Transfer acceleration](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html)`,
Default: false,
Advanced: true,
}},
})
}
@@ -749,25 +764,26 @@ const (
// Options defines the configuration for this backend
type Options struct {
Provider string `config:"provider"`
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Region string `config:"region"`
Endpoint string `config:"endpoint"`
LocationConstraint string `config:"location_constraint"`
ACL string `config:"acl"`
BucketACL string `config:"bucket_acl"`
ServerSideEncryption string `config:"server_side_encryption"`
SSEKMSKeyID string `config:"sse_kms_key_id"`
StorageClass string `config:"storage_class"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DisableChecksum bool `config:"disable_checksum"`
SessionToken string `config:"session_token"`
UploadConcurrency int `config:"upload_concurrency"`
ForcePathStyle bool `config:"force_path_style"`
V2Auth bool `config:"v2_auth"`
Provider string `config:"provider"`
EnvAuth bool `config:"env_auth"`
AccessKeyID string `config:"access_key_id"`
SecretAccessKey string `config:"secret_access_key"`
Region string `config:"region"`
Endpoint string `config:"endpoint"`
LocationConstraint string `config:"location_constraint"`
ACL string `config:"acl"`
BucketACL string `config:"bucket_acl"`
ServerSideEncryption string `config:"server_side_encryption"`
SSEKMSKeyID string `config:"sse_kms_key_id"`
StorageClass string `config:"storage_class"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DisableChecksum bool `config:"disable_checksum"`
SessionToken string `config:"session_token"`
UploadConcurrency int `config:"upload_concurrency"`
ForcePathStyle bool `config:"force_path_style"`
V2Auth bool `config:"v2_auth"`
UseAccelerateEndpoint bool `config:"use_accelerate_endpoint"`
}
// Fs represents a remote s3 server
@@ -782,7 +798,7 @@ type Fs struct {
bucketOKMu sync.Mutex // mutex to protect bucket OK
bucketOK bool // true if we have created the bucket
bucketDeleted bool // true if we have deleted the bucket
pacer *pacer.Pacer // To pace the API calls
pacer *fs.Pacer // To pace the API calls
srv *http.Client // a plain http client
}
@@ -941,14 +957,15 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
if opt.Region == "" {
opt.Region = "us-east-1"
}
if opt.Provider == "Alibaba" || opt.Provider == "Netease" {
if opt.Provider == "Alibaba" || opt.Provider == "Netease" || opt.UseAccelerateEndpoint {
opt.ForcePathStyle = false
}
awsConfig := aws.NewConfig().
WithMaxRetries(maxRetries).
WithCredentials(cred).
WithHTTPClient(fshttp.NewClient(fs.Config)).
WithS3ForcePathStyle(opt.ForcePathStyle)
WithS3ForcePathStyle(opt.ForcePathStyle).
WithS3UseAccelerate(opt.UseAccelerateEndpoint)
if opt.Region != "" {
awsConfig.WithRegion(opt.Region)
}
@@ -1055,7 +1072,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
c: c,
bucket: bucket,
ses: ses,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.S3Pacer),
pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep))),
srv: fshttp.NewClient(fs.Config),
}
f.features = (&fs.Features{
@@ -1719,6 +1736,9 @@ func (o *Object) SetModTime(modTime time.Time) error {
if o.fs.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
}
if o.fs.opt.StorageClass == "GLACIER" || o.fs.opt.StorageClass == "DEEP_ARCHIVE" {
return fs.ErrorCantSetModTime
}
if o.fs.opt.StorageClass != "" {
req.StorageClass = &o.fs.opt.StorageClass
}

View File

@@ -1,6 +1,6 @@
// Package sftp provides a filesystem interface using github.com/pkg/sftp
// +build !plan9,go1.9
// +build !plan9
package sftp
@@ -14,6 +14,7 @@ import (
"os/user"
"path"
"regexp"
"strconv"
"strings"
"sync"
"time"
@@ -25,6 +26,7 @@ import (
"github.com/ncw/rclone/fs/config/obscure"
"github.com/ncw/rclone/fs/fshttp"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/lib/env"
"github.com/ncw/rclone/lib/readers"
"github.com/pkg/errors"
"github.com/pkg/sftp"
@@ -186,10 +188,10 @@ func readCurrentUser() (userName string) {
return os.Getenv("LOGNAME")
}
// Dial starts a client connection to the given SSH server. It is a
// dial starts a client connection to the given SSH server. It is a
// convenience function that connects to the given network address,
// initiates the SSH handshake, and then sets up a Client.
func Dial(network, addr string, sshConfig *ssh.ClientConfig) (*ssh.Client, error) {
func (f *Fs) dial(network, addr string, sshConfig *ssh.ClientConfig) (*ssh.Client, error) {
dialer := fshttp.NewDialer(fs.Config)
conn, err := dialer.Dial(network, addr)
if err != nil {
@@ -199,6 +201,7 @@ func Dial(network, addr string, sshConfig *ssh.ClientConfig) (*ssh.Client, error
if err != nil {
return nil, err
}
fs.Debugf(f, "New connection %s->%s to %q", c.LocalAddr(), c.RemoteAddr(), c.ServerVersion())
return ssh.NewClient(c, chans, reqs), nil
}
@@ -244,7 +247,7 @@ func (f *Fs) sftpConnection() (c *conn, err error) {
c = &conn{
err: make(chan error, 1),
}
c.sshClient, err = Dial("tcp", f.opt.Host+":"+f.opt.Port, f.config)
c.sshClient, err = f.dial("tcp", f.opt.Host+":"+f.opt.Port, f.config)
if err != nil {
return nil, errors.Wrap(err, "couldn't connect SSH")
}
@@ -315,18 +318,6 @@ func (f *Fs) putSftpConnection(pc **conn, err error) {
f.poolMu.Unlock()
}
// shellExpand replaces a leading "~" with "${HOME}" and expands all environment
// variables afterwards.
func shellExpand(s string) string {
if s != "" {
if s[0] == '~' {
s = "${HOME}" + s[1:]
}
s = os.ExpandEnv(s)
}
return s
}
// NewFs creates a new Fs object from the name and root. It connects to
// the host specified in the config file.
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
@@ -347,6 +338,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
Auth: []ssh.AuthMethod{},
HostKeyCallback: ssh.InsecureIgnoreHostKey(),
Timeout: fs.Config.ConnectTimeout,
ClientVersion: "SSH-2.0-" + fs.Config.UserAgent,
}
if opt.UseInsecureCipher {
@@ -354,7 +346,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
sshConfig.Config.Ciphers = append(sshConfig.Config.Ciphers, "aes128-cbc")
}
keyFile := shellExpand(opt.KeyFile)
keyFile := env.ShellExpand(opt.KeyFile)
// Add ssh agent-auth if no password or file specified
if (opt.Pass == "" && keyFile == "") || opt.KeyUseAgent {
sshAgentClient, _, err := sshagent.New()
@@ -427,6 +419,12 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
sshConfig.Auth = append(sshConfig.Auth, ssh.Password(clearpass))
}
return NewFsWithConnection(name, root, opt, sshConfig)
}
// NewFsWithConnection creates a new Fs object from the name and root and a ssh.ClientConfig. It connects to
// the host specified in the ssh.ClientConfig
func NewFsWithConnection(name string, root string, opt *Options, sshConfig *ssh.ClientConfig) (fs.Fs, error) {
f := &Fs{
name: name,
root: root,
@@ -807,6 +805,49 @@ func (f *Fs) Hashes() hash.Set {
return set
}
// About gets usage stats
func (f *Fs) About() (*fs.Usage, error) {
c, err := f.getSftpConnection()
if err != nil {
return nil, errors.Wrap(err, "About get SFTP connection")
}
session, err := c.sshClient.NewSession()
f.putSftpConnection(&c, err)
if err != nil {
return nil, errors.Wrap(err, "About put SFTP connection")
}
var stdout, stderr bytes.Buffer
session.Stdout = &stdout
session.Stderr = &stderr
escapedPath := shellEscape(f.root)
if f.opt.PathOverride != "" {
escapedPath = shellEscape(path.Join(f.opt.PathOverride, f.root))
}
if len(escapedPath) == 0 {
escapedPath = "/"
}
err = session.Run("df -k " + escapedPath)
if err != nil {
_ = session.Close()
return nil, errors.Wrap(err, "About invocation of df failed. Your remote may not support about.")
}
_ = session.Close()
usageTotal, usageUsed, usageAvail := parseUsage(stdout.Bytes())
usage := &fs.Usage{}
if usageTotal >= 0 {
usage.Total = fs.NewUsageValue(usageTotal)
}
if usageUsed >= 0 {
usage.Used = fs.NewUsageValue(usageUsed)
}
if usageAvail >= 0 {
usage.Free = fs.NewUsageValue(usageAvail)
}
return usage, nil
}
// Fs is the filesystem this remote sftp file object is located within
func (o *Object) Fs() fs.Info {
return o.fs
@@ -897,6 +938,35 @@ func parseHash(bytes []byte) string {
return strings.Split(string(bytes), " ")[0] // Split at hash / filename separator
}
// Parses the byte array output from the SSH session
// returned by an invocation of df into
// the disk size, used space, and available space on the disk, in that order.
// Only works when `df` has output info on only one disk
func parseUsage(bytes []byte) (spaceTotal int64, spaceUsed int64, spaceAvail int64) {
spaceTotal, spaceUsed, spaceAvail = -1, -1, -1
lines := strings.Split(string(bytes), "\n")
if len(lines) < 2 {
return
}
split := strings.Fields(lines[1])
if len(split) < 6 {
return
}
spaceTotal, err := strconv.ParseInt(split[1], 10, 64)
if err != nil {
spaceTotal = -1
}
spaceUsed, err = strconv.ParseInt(split[2], 10, 64)
if err != nil {
spaceUsed = -1
}
spaceAvail, err = strconv.ParseInt(split[3], 10, 64)
if err != nil {
spaceAvail = -1
}
return spaceTotal * 1024, spaceUsed * 1024, spaceAvail * 1024
}
// Size returns the size in bytes of the remote sftp file
func (o *Object) Size() int64 {
return o.size

View File

@@ -1,4 +1,4 @@
// +build !plan9,go1.9
// +build !plan9
package sftp
@@ -35,3 +35,17 @@ func TestParseHash(t *testing.T) {
assert.Equal(t, test.checksum, got, fmt.Sprintf("Test %d sshOutput = %q", i, test.sshOutput))
}
}
func TestParseUsage(t *testing.T) {
for i, test := range []struct {
sshOutput string
usage [3]int64
}{
{"Filesystem 1K-blocks Used Available Use% Mounted on\n/dev/root 91283092 81111888 10154820 89% /", [3]int64{93473886208, 83058573312, 10398535680}},
{"Filesystem 1K-blocks Used Available Use% Mounted on\ntmpfs 818256 1636 816620 1% /run", [3]int64{837894144, 1675264, 836218880}},
{"Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted on\n/dev/disk0s2 244277768 94454848 149566920 39% 997820 4293969459 0% /", [3]int64{250140434432, 96721764352, 153156526080}},
} {
gotSpaceTotal, gotSpaceUsed, gotSpaceAvail := parseUsage([]byte(test.sshOutput))
assert.Equal(t, test.usage, [3]int64{gotSpaceTotal, gotSpaceUsed, gotSpaceAvail}, fmt.Sprintf("Test %d sshOutput = %q", i, test.sshOutput))
}
}

View File

@@ -1,6 +1,6 @@
// Test Sftp filesystem interface
// +build !plan9,go1.9
// +build !plan9
package sftp_test

View File

@@ -1,6 +1,6 @@
// Build for sftp for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build plan9 !go1.9
// +build plan9
package sftp

View File

@@ -1,4 +1,4 @@
// +build !plan9,go1.9
// +build !plan9
package sftp

View File

@@ -1,4 +1,4 @@
// +build !plan9,go1.9
// +build !plan9
package sftp

View File

@@ -2,6 +2,7 @@ package swift
import (
"net/http"
"time"
"github.com/ncw/swift"
)
@@ -65,6 +66,14 @@ func (a *auth) Token() string {
return a.parentAuth.Token()
}
// Expires returns the time the token expires if known or Zero if not.
func (a *auth) Expires() (t time.Time) {
if do, ok := a.parentAuth.(swift.Expireser); ok {
t = do.Expires()
}
return t
}
// The CDN url if available
func (a *auth) CdnUrl() string { // nolint
if a.parentAuth == nil {
@@ -74,4 +83,7 @@ func (a *auth) CdnUrl() string { // nolint
}
// Check the interfaces are satisfied
var _ swift.Authenticator = (*auth)(nil)
var (
_ swift.Authenticator = (*auth)(nil)
_ swift.Expireser = (*auth)(nil)
)

View File

@@ -216,7 +216,7 @@ type Fs struct {
containerOK bool // true if we have created the container
segmentsContainer string // container to store the segments (if any) in
noCheckContainer bool // don't check the container before creating it
pacer *pacer.Pacer // To pace the API calls
pacer *fs.Pacer // To pace the API calls
}
// Object describes a swift object
@@ -286,6 +286,31 @@ func shouldRetry(err error) (bool, error) {
return fserrors.ShouldRetry(err), err
}
// shouldRetryHeaders returns a boolean as to whether this err
// deserves to be retried. It reads the headers passed in looking for
// `Retry-After`. It returns the err as a convenience
func shouldRetryHeaders(headers swift.Headers, err error) (bool, error) {
if swiftError, ok := err.(*swift.Error); ok && swiftError.StatusCode == 429 {
if value := headers["Retry-After"]; value != "" {
retryAfter, parseErr := strconv.Atoi(value)
if parseErr != nil {
fs.Errorf(nil, "Failed to parse Retry-After: %q: %v", value, parseErr)
} else {
duration := time.Second * time.Duration(retryAfter)
if duration <= 60*time.Second {
// Do a short sleep immediately
fs.Debugf(nil, "Sleeping for %v to obey Retry-After", duration)
time.Sleep(duration)
return true, err
}
// For a longer delay return a retry error and let the caller do the sleep
return false, fserrors.NewErrorRetryAfter(duration)
}
}
}
return shouldRetry(err)
}
// Pattern to match a swift path
var matcher = regexp.MustCompile(`^/*([^/]*)(.*)$`)
@@ -401,7 +426,7 @@ func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, n
segmentsContainer: container + "_segments",
root: directory,
noCheckContainer: noCheckContainer,
pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.S3Pacer),
pacer: fs.NewPacer(pacer.NewS3(pacer.MinSleep(minSleep))),
}
f.features = (&fs.Features{
ReadMimeType: true,
@@ -413,8 +438,9 @@ func NewFsWithConnection(opt *Options, name, root string, c *swift.Connection, n
// Check to see if the object exists - ignoring directory markers
var info swift.Object
err = f.pacer.Call(func() (bool, error) {
info, _, err = f.c.Object(container, directory)
return shouldRetry(err)
var rxHeaders swift.Headers
info, rxHeaders, err = f.c.Object(container, directory)
return shouldRetryHeaders(rxHeaders, err)
})
if err == nil && info.ContentType != directoryMarkerContentType {
f.root = path.Dir(directory)
@@ -723,8 +749,9 @@ func (f *Fs) Mkdir(dir string) error {
var err error = swift.ContainerNotFound
if !f.noCheckContainer {
err = f.pacer.Call(func() (bool, error) {
_, _, err = f.c.Container(f.container)
return shouldRetry(err)
var rxHeaders swift.Headers
_, rxHeaders, err = f.c.Container(f.container)
return shouldRetryHeaders(rxHeaders, err)
})
}
if err == swift.ContainerNotFound {
@@ -816,8 +843,9 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
}
srcFs := srcObj.fs
err = f.pacer.Call(func() (bool, error) {
_, err = f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil)
return shouldRetry(err)
var rxHeaders swift.Headers
rxHeaders, err = f.c.ObjectCopy(srcFs.container, srcFs.root+srcObj.remote, f.container, f.root+remote, nil)
return shouldRetryHeaders(rxHeaders, err)
})
if err != nil {
return nil, err
@@ -927,7 +955,7 @@ func (o *Object) readMetaData() (err error) {
var h swift.Headers
err = o.fs.pacer.Call(func() (bool, error) {
info, h, err = o.fs.c.Object(o.fs.container, o.fs.root+o.remote)
return shouldRetry(err)
return shouldRetryHeaders(h, err)
})
if err != nil {
if err == swift.ObjectNotFound {
@@ -1002,8 +1030,9 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
headers := fs.OpenOptionHeaders(options)
_, isRanging := headers["Range"]
err = o.fs.pacer.Call(func() (bool, error) {
in, _, err = o.fs.c.ObjectOpen(o.fs.container, o.fs.root+o.remote, !isRanging, headers)
return shouldRetry(err)
var rxHeaders swift.Headers
in, rxHeaders, err = o.fs.c.ObjectOpen(o.fs.container, o.fs.root+o.remote, !isRanging, headers)
return shouldRetryHeaders(rxHeaders, err)
})
return
}
@@ -1074,8 +1103,9 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
// Create the segmentsContainer if it doesn't exist
var err error
err = o.fs.pacer.Call(func() (bool, error) {
_, _, err = o.fs.c.Container(o.fs.segmentsContainer)
return shouldRetry(err)
var rxHeaders swift.Headers
_, rxHeaders, err = o.fs.c.Container(o.fs.segmentsContainer)
return shouldRetryHeaders(rxHeaders, err)
})
if err == swift.ContainerNotFound {
headers := swift.Headers{}
@@ -1115,8 +1145,9 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
segmentPath := fmt.Sprintf("%s/%08d", segmentsPath, i)
fs.Debugf(o, "Uploading segment file %q into %q", segmentPath, o.fs.segmentsContainer)
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
_, err = o.fs.c.ObjectPut(o.fs.segmentsContainer, segmentPath, segmentReader, true, "", "", headers)
return shouldRetry(err)
var rxHeaders swift.Headers
rxHeaders, err = o.fs.c.ObjectPut(o.fs.segmentsContainer, segmentPath, segmentReader, true, "", "", headers)
return shouldRetryHeaders(rxHeaders, err)
})
if err != nil {
return "", err
@@ -1129,8 +1160,9 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
emptyReader := bytes.NewReader(nil)
manifestName := o.fs.root + o.remote
err = o.fs.pacer.Call(func() (bool, error) {
_, err = o.fs.c.ObjectPut(o.fs.container, manifestName, emptyReader, true, "", contentType, headers)
return shouldRetry(err)
var rxHeaders swift.Headers
rxHeaders, err = o.fs.c.ObjectPut(o.fs.container, manifestName, emptyReader, true, "", contentType, headers)
return shouldRetryHeaders(rxHeaders, err)
})
return uniquePrefix + "/", err
}
@@ -1174,7 +1206,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
var rxHeaders swift.Headers
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
rxHeaders, err = o.fs.c.ObjectPut(o.fs.container, o.fs.root+o.remote, in, true, "", contentType, headers)
return shouldRetry(err)
return shouldRetryHeaders(rxHeaders, err)
})
if err != nil {
return err

View File

@@ -1,6 +1,13 @@
package swift
import "testing"
import (
"testing"
"time"
"github.com/ncw/rclone/fs/fserrors"
"github.com/ncw/swift"
"github.com/stretchr/testify/assert"
)
func TestInternalUrlEncode(t *testing.T) {
for _, test := range []struct {
@@ -23,3 +30,37 @@ func TestInternalUrlEncode(t *testing.T) {
}
}
}
func TestInternalShouldRetryHeaders(t *testing.T) {
headers := swift.Headers{
"Content-Length": "64",
"Content-Type": "text/html; charset=UTF-8",
"Date": "Mon: 18 Mar 2019 12:11:23 GMT",
"Retry-After": "1",
}
err := &swift.Error{
StatusCode: 429,
Text: "Too Many Requests",
}
// Short sleep should just do the sleep
start := time.Now()
retry, gotErr := shouldRetryHeaders(headers, err)
dt := time.Since(start)
assert.True(t, retry)
assert.Equal(t, err, gotErr)
assert.True(t, dt > time.Second/2)
// Long sleep should return RetryError
headers["Retry-After"] = "3600"
start = time.Now()
retry, gotErr = shouldRetryHeaders(headers, err)
dt = time.Since(start)
assert.True(t, dt < time.Second)
assert.False(t, retry)
assert.Equal(t, true, fserrors.IsRetryAfterError(gotErr))
after := gotErr.(fserrors.RetryAfter).RetryAfter()
dt = after.Sub(start)
assert.True(t, dt >= time.Hour-time.Second && dt <= time.Hour+time.Second)
}

View File

@@ -9,6 +9,7 @@ import (
"time"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/cache"
"github.com/ncw/rclone/fs/config/configmap"
"github.com/ncw/rclone/fs/config/configstruct"
"github.com/ncw/rclone/fs/hash"
@@ -342,7 +343,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
if configName != "local" {
rootString = configName + ":" + rootString
}
myFs, err := fs.NewFs(rootString)
myFs, err := cache.Get(rootString)
if err != nil {
if err == fs.ErrorIsFile {
return myFs, err

View File

@@ -101,7 +101,7 @@ type Fs struct {
endpoint *url.URL // URL of the host
endpointURL string // endpoint as a string
srv *rest.Client // the connection to the one drive server
pacer *pacer.Pacer // pacer for API calls
pacer *fs.Pacer // pacer for API calls
precision time.Duration // mod time precision
canStream bool // set if can stream
useOCMtime bool // set if can use X-OC-Mtime
@@ -318,7 +318,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
endpoint: u,
endpointURL: u.String(),
srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(u.String()),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
precision: fs.ModTimeNotSupported,
}
f.features = (&fs.Features{
@@ -644,10 +644,18 @@ func (f *Fs) _mkdir(dirPath string) error {
Path: dirPath,
NoResponse: true,
}
return f.pacer.Call(func() (bool, error) {
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(&opts)
return shouldRetry(resp, err)
})
if apiErr, ok := err.(*api.Error); ok {
// already exists
// owncloud returns 423/StatusLocked if the create is already in progress
if apiErr.StatusCode == http.StatusMethodNotAllowed || apiErr.StatusCode == http.StatusNotAcceptable || apiErr.StatusCode == http.StatusLocked {
return nil
}
}
return err
}
// mkdir makes the directory and parents using native paths
@@ -655,12 +663,7 @@ func (f *Fs) mkdir(dirPath string) error {
// defer log.Trace(dirPath, "")("")
err := f._mkdir(dirPath)
if apiErr, ok := err.(*api.Error); ok {
// already exists
// owncloud returns 423/StatusLocked if the create is already in progress
if apiErr.StatusCode == http.StatusMethodNotAllowed || apiErr.StatusCode == http.StatusNotAcceptable || apiErr.StatusCode == http.StatusLocked {
return nil
}
// parent does not exist
// parent does not exist so create it first then try again
if apiErr.StatusCode == http.StatusConflict {
err = f.mkParentDir(dirPath)
if err == nil {
@@ -913,11 +916,13 @@ func (f *Fs) About() (*fs.Usage, error) {
return nil, errors.Wrap(err, "about call failed")
}
usage := &fs.Usage{}
if q.Available >= 0 && q.Used >= 0 {
usage.Total = fs.NewUsageValue(q.Available + q.Used)
}
if q.Used >= 0 {
usage.Used = fs.NewUsageValue(q.Used)
if q.Available != 0 || q.Used != 0 {
if q.Available >= 0 && q.Used >= 0 {
usage.Total = fs.NewUsageValue(q.Available + q.Used)
}
if q.Used >= 0 {
usage.Used = fs.NewUsageValue(q.Used)
}
}
return usage, nil
}

View File

@@ -93,7 +93,7 @@ type Fs struct {
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the yandex server
pacer *pacer.Pacer // pacer for API calls
pacer *fs.Pacer // pacer for API calls
diskRoot string // root path with "disk:/" container name
}
@@ -269,7 +269,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
name: name,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.setRoot(root)
f.features = (&fs.Features{

View File

@@ -17,14 +17,18 @@ import (
"io/ioutil"
"log"
"net/http"
"net/url"
"os"
"os/exec"
"path"
"path/filepath"
"regexp"
"runtime"
"strings"
"time"
"github.com/ncw/rclone/lib/rest"
"golang.org/x/net/html"
"golang.org/x/sys/unix"
)
@@ -33,6 +37,7 @@ var (
install = flag.Bool("install", false, "Install the downloaded package using sudo dpkg -i.")
extract = flag.String("extract", "", "Extract the named executable from the .tar.gz and install into bindir.")
bindir = flag.String("bindir", defaultBinDir(), "Directory to install files downloaded with -extract.")
useAPI = flag.Bool("use-api", false, "Use the API for finding the release instead of scraping the page.")
// Globals
matchProject = regexp.MustCompile(`^([\w-]+)/([\w-]+)$`)
osAliases = map[string][]string{
@@ -209,6 +214,57 @@ func getAsset(project string, matchName *regexp.Regexp) (string, string) {
return "", ""
}
// Get an asset URL and name by scraping the downloads page
//
// This doesn't use the API so isn't rate limited when not using GITHUB login details
func getAssetFromReleasesPage(project string, matchName *regexp.Regexp) (assetURL string, assetName string) {
baseURL := "https://github.com/" + project + "/releases"
log.Printf("Fetching asset info for %q from %q", project, baseURL)
base, err := url.Parse(baseURL)
if err != nil {
log.Fatalf("URL Parse failed: %v", err)
}
resp, err := http.Get(baseURL)
if err != nil {
log.Fatalf("Failed to fetch release info %q: %v", baseURL, err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
log.Printf("Error: %s", readBody(resp.Body))
log.Fatalf("Bad status %d when fetching %q release info: %s", resp.StatusCode, baseURL, resp.Status)
}
doc, err := html.Parse(resp.Body)
if err != nil {
log.Fatalf("Failed to parse web page: %v", err)
}
var walk func(*html.Node)
walk = func(n *html.Node) {
if n.Type == html.ElementNode && n.Data == "a" {
for _, a := range n.Attr {
if a.Key == "href" {
if name := path.Base(a.Val); matchName.MatchString(name) && isOurOsArch(name) {
if u, err := rest.URLJoin(base, a.Val); err == nil {
if assetName == "" {
assetName = name
assetURL = u.String()
}
}
}
break
}
}
}
for c := n.FirstChild; c != nil; c = c.NextSibling {
walk(c)
}
}
walk(doc)
if assetName == "" || assetURL == "" {
log.Fatalf("Didn't find URL in page")
}
return assetURL, assetName
}
// isOurOsArch returns true if s contains our OS and our Arch
func isOurOsArch(s string) bool {
s = strings.ToLower(s)
@@ -346,7 +402,12 @@ func main() {
log.Fatalf("Invalid regexp for name %q: %v", nameRe, err)
}
assetURL, assetName := getAsset(project, matchName)
var assetURL, assetName string
if *useAPI {
assetURL, assetName = getAsset(project, matchName)
} else {
assetURL, assetName = getAssetFromReleasesPage(project, matchName)
}
fileName := filepath.Join(os.TempDir(), assetName)
getFile(assetURL, fileName)

View File

@@ -36,6 +36,7 @@ docs = [
"http.md",
"hubic.md",
"jottacloud.md",
"koofr.md",
"mega.md",
"azureblob.md",
"onedrive.md",

View File

@@ -95,6 +95,9 @@ Use the --json flag for a computer readable output, eg
if err != nil {
return errors.Wrap(err, "About call failed")
}
if u == nil {
return errors.New("nil usage returned")
}
if jsonOutput {
out := json.NewEncoder(os.Stdout)
out.SetIndent("", "\t")

View File

@@ -22,6 +22,7 @@ import (
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/accounting"
"github.com/ncw/rclone/fs/cache"
"github.com/ncw/rclone/fs/config/configflags"
"github.com/ncw/rclone/fs/config/flags"
"github.com/ncw/rclone/fs/filter"
@@ -83,7 +84,7 @@ func NewFsFile(remote string) (fs.Fs, string) {
fs.CountError(err)
log.Fatalf("Failed to create file system for %q: %v", remote, err)
}
f, err := fs.NewFs(remote)
f, err := cache.Get(remote)
switch err {
case fs.ErrorIsFile:
return f, path.Base(fsPath)
@@ -131,7 +132,7 @@ func NewFsSrc(args []string) fs.Fs {
//
// This must point to a directory
func newFsDir(remote string) fs.Fs {
f, err := fs.NewFs(remote)
f, err := cache.Get(remote)
if err != nil {
fs.CountError(err)
log.Fatalf("Failed to create file system for %q: %v", remote, err)
@@ -180,7 +181,7 @@ func NewFsSrcDstFiles(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs
log.Fatalf("%q is a directory", args[1])
}
}
fdst, err := fs.NewFs(dstRemote)
fdst, err := cache.Get(dstRemote)
switch err {
case fs.ErrorIsFile:
fs.CountError(err)
@@ -230,22 +231,34 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
SigInfoHandler()
for try := 1; try <= *retries; try++ {
err = f()
if !Retry || (err == nil && !accounting.Stats.Errored()) {
fs.CountError(err)
lastErr := accounting.Stats.GetLastError()
if err == nil {
err = lastErr
}
if !Retry || !accounting.Stats.Errored() {
if try > 1 {
fs.Errorf(nil, "Attempt %d/%d succeeded", try, *retries)
}
break
}
if fserrors.IsFatalError(err) || accounting.Stats.HadFatalError() {
if accounting.Stats.HadFatalError() {
fs.Errorf(nil, "Fatal error received - not attempting retries")
break
}
if fserrors.IsNoRetryError(err) || (accounting.Stats.Errored() && !accounting.Stats.HadRetryError()) {
if accounting.Stats.Errored() && !accounting.Stats.HadRetryError() {
fs.Errorf(nil, "Can't retry this error - not attempting retries")
break
}
if err != nil {
fs.Errorf(nil, "Attempt %d/%d failed with %d errors and: %v", try, *retries, accounting.Stats.GetErrors(), err)
if retryAfter := accounting.Stats.RetryAfter(); !retryAfter.IsZero() {
d := retryAfter.Sub(time.Now())
if d > 0 {
fs.Logf(nil, "Received retry after error - sleeping until %s (%v)", retryAfter.Format(time.RFC3339Nano), d)
time.Sleep(d)
}
}
if lastErr != nil {
fs.Errorf(nil, "Attempt %d/%d failed with %d errors and: %v", try, *retries, accounting.Stats.GetErrors(), lastErr)
} else {
fs.Errorf(nil, "Attempt %d/%d failed with %d errors", try, *retries, accounting.Stats.GetErrors())
}
@@ -258,7 +271,12 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
}
stopStats()
if err != nil {
log.Printf("Failed to %s: %v", cmd.Name(), err)
nerrs := accounting.Stats.GetErrors()
if nerrs <= 1 {
log.Printf("Failed to %s: %v", cmd.Name(), err)
} else {
log.Printf("Failed to %s with %d errors: last error was: %v", cmd.Name(), nerrs, err)
}
resolveExitCode(err)
}
if showStats && (accounting.Stats.Errored() || *statsInterval > 0) {
@@ -341,8 +359,7 @@ func initConfig() {
configflags.SetFlags()
// Load filters
var err error
filter.Active, err = filter.NewFilter(&filterflags.Opt)
err := filterflags.Reload()
if err != nil {
log.Fatalf("Failed to load filters: %v", err)
}

View File

@@ -127,7 +127,7 @@ func waitFor(fn func() bool) (ok bool) {
func mount(f fs.Fs, mountpoint string) (*vfs.VFS, <-chan error, func() error, error) {
fs.Debugf(f, "Mounting on %q", mountpoint)
// Check the mountpoint - in Windows the mountpoint musn't exist before the mount
// Check the mountpoint - in Windows the mountpoint mustn't exist before the mount
if runtime.GOOS != "windows" {
fi, err := os.Stat(mountpoint)
if err != nil {

View File

@@ -7,8 +7,13 @@ import (
"github.com/spf13/cobra"
)
var (
createEmptySrcDirs = false
)
func init() {
cmd.Root.AddCommand(commandDefintion)
commandDefintion.Flags().BoolVarP(&createEmptySrcDirs, "create-empty-src-dirs", "", createEmptySrcDirs, "Create empty source dirs on destination after copy")
}
var commandDefintion = &cobra.Command{
@@ -69,7 +74,7 @@ changed recently very efficiently like this:
fsrc, srcFileName, fdst := cmd.NewFsSrcFileDst(args)
cmd.Run(true, true, command, func() error {
if srcFileName == "" {
return sync.CopyDir(fdst, fsrc)
return sync.CopyDir(fdst, fsrc, createEmptySrcDirs)
}
return operations.CopyFile(fdst, fsrc, srcFileName, srcFileName)
})

View File

@@ -48,7 +48,7 @@ destination.
fsrc, srcFileName, fdst, dstFileName := cmd.NewFsSrcDstFiles(args)
cmd.Run(true, true, command, func() error {
if srcFileName == "" {
return sync.CopyDir(fdst, fsrc)
return sync.CopyDir(fdst, fsrc, false)
}
return operations.CopyFile(fdst, fsrc, dstFileName, srcFileName)
})

View File

@@ -11,6 +11,7 @@ import (
"github.com/ncw/rclone/cmd"
"github.com/spf13/cobra"
"github.com/spf13/cobra/doc"
"github.com/spf13/pflag"
)
func init() {
@@ -50,6 +51,10 @@ rclone.org website.`,
base := strings.TrimSuffix(name, path.Ext(name))
return "/commands/" + strings.ToLower(base) + "/"
}
// Hide all of the root entries flags
cmd.Root.Flags().VisitAll(func(flag *pflag.Flag) {
flag.Hidden = true
})
return doc.GenMarkdownTreeCustom(cmd.Root, out, prepender, linkHandler)
},
}

View File

@@ -45,7 +45,7 @@ __rclone_custom_func() {
else
__rclone_init_completion -n : || return
fi
if [[ $cur =~ ^[[:alnum:]]*$ ]]; then
if [[ $cur != *:* ]]; then
local remote
while IFS= read -r remote; do
[[ $remote != $cur* ]] || COMPREPLY+=("$remote")
@@ -54,10 +54,10 @@ __rclone_custom_func() {
local paths=("$cur"*)
[[ ! -f ${paths[0]} ]] || COMPREPLY+=("${paths[@]}")
fi
elif [[ $cur =~ ^[[:alnum:]]+: ]]; then
else
local path=${cur#*:}
if [[ $path == */* ]]; then
local prefix=${path%/*}
local prefix=$(eval printf '%s' "${path%/*}")
else
local prefix=
fi
@@ -66,6 +66,7 @@ __rclone_custom_func() {
local reply=${prefix:+$prefix/}$line
[[ $reply != $path* ]] || COMPREPLY+=("$reply")
done < <(rclone lsf "${cur%%:*}:$prefix" 2>/dev/null)
[[ ! ${COMPREPLY[@]} ]] || compopt -o filenames
fi
[[ ! ${COMPREPLY[@]} ]] || compopt -o nospace
fi

View File

@@ -2,7 +2,7 @@ package lshelp
// Help describes the common help for all the list commands
var Help = `
Any of the filtering options can be applied to this commmand.
Any of the filtering options can be applied to this command.
There are several related list commands

View File

@@ -10,7 +10,6 @@ import (
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/fs/hash"
"github.com/ncw/rclone/fs/operations"
"github.com/ncw/rclone/fs/walk"
"github.com/pkg/errors"
"github.com/spf13/cobra"
)
@@ -67,8 +66,11 @@ output:
s - size
t - modification time
h - hash
i - ID of object if known
i - ID of object
o - Original ID of underlying object
m - MimeType of object if known
e - encrypted name
T - tier of storage if known, eg "Hot" or "Cool"
So if you wanted the path, size and modification time, you would use
--format "pst", or maybe --format "tsp" to put the path last.
@@ -161,6 +163,12 @@ func Lsf(fsrc fs.Fs, out io.Writer) error {
list.SetCSV(csv)
list.SetDirSlash(dirSlash)
list.SetAbsolute(absolute)
var opt = operations.ListJSONOpt{
NoModTime: true,
DirsOnly: dirsOnly,
FilesOnly: filesOnly,
Recurse: recurse,
}
for _, char := range format {
switch char {
@@ -168,38 +176,31 @@ func Lsf(fsrc fs.Fs, out io.Writer) error {
list.AddPath()
case 't':
list.AddModTime()
opt.NoModTime = false
case 's':
list.AddSize()
case 'h':
list.AddHash(hashType)
opt.ShowHash = true
case 'i':
list.AddID()
case 'm':
list.AddMimeType()
case 'e':
list.AddEncrypted()
opt.ShowEncrypted = true
case 'o':
list.AddOrigID()
opt.ShowOrigIDs = true
case 'T':
list.AddTier()
default:
return errors.Errorf("Unknown format character %q", char)
}
}
return walk.Walk(fsrc, "", false, operations.ConfigMaxDepth(recurse), func(path string, entries fs.DirEntries, err error) error {
if err != nil {
fs.CountError(err)
fs.Errorf(path, "error listing: %v", err)
return nil
}
for _, entry := range entries {
_, isDir := entry.(fs.Directory)
if isDir {
if filesOnly {
continue
}
} else {
if dirsOnly {
continue
}
}
_, _ = fmt.Fprintln(out, list.Format(entry))
}
return operations.ListJSON(fsrc, "", &opt, func(item *operations.ListJSONItem) error {
_, _ = fmt.Fprintln(out, list.Format(item))
return nil
})
}

View File

@@ -23,6 +23,8 @@ func init() {
commandDefintion.Flags().BoolVarP(&opt.NoModTime, "no-modtime", "", false, "Don't read the modification time (can speed things up).")
commandDefintion.Flags().BoolVarP(&opt.ShowEncrypted, "encrypted", "M", false, "Show the encrypted names.")
commandDefintion.Flags().BoolVarP(&opt.ShowOrigIDs, "original", "", false, "Show the ID of the underlying Object.")
commandDefintion.Flags().BoolVarP(&opt.FilesOnly, "files-only", "", false, "Show only files in the listing.")
commandDefintion.Flags().BoolVarP(&opt.DirsOnly, "dirs-only", "", false, "Show only directories in the listing.")
}
var commandDefintion = &cobra.Command{
@@ -55,6 +57,10 @@ If --no-modtime is specified then ModTime will be blank.
If --encrypted is not specified the Encrypted won't be emitted.
If --dirs-only is not specified files in addition to directories are returned.
If --files-only is not specified directories in addition to the files will be returned.
The Path field will only show paths below the remote path being listed.
If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt"
will be "subfolder/file.txt", not "remote:path/subfolder/file.txt".

View File

@@ -3,6 +3,7 @@
package mount
import (
"context"
"os"
"time"
@@ -12,7 +13,6 @@ import (
"github.com/ncw/rclone/fs/log"
"github.com/ncw/rclone/vfs"
"github.com/pkg/errors"
"golang.org/x/net/context" // switch to "context" when we stop supporting go1.8
)
// Dir represents a directory entry
@@ -20,7 +20,7 @@ type Dir struct {
*vfs.Dir
}
// Check interface satsified
// Check interface satisfied
var _ fusefs.Node = (*Dir)(nil)
// Attr updates the attributes of a directory

View File

@@ -3,6 +3,7 @@
package mount
import (
"context"
"io"
"time"
@@ -11,7 +12,6 @@ import (
"github.com/ncw/rclone/cmd/mountlib"
"github.com/ncw/rclone/fs/log"
"github.com/ncw/rclone/vfs"
"golang.org/x/net/context" // switch to "context" when we stop supporting go1.8
)
// File represents a file

View File

@@ -5,6 +5,7 @@
package mount
import (
"context"
"syscall"
"bazil.org/fuse"
@@ -15,7 +16,6 @@ import (
"github.com/ncw/rclone/vfs"
"github.com/ncw/rclone/vfs/vfsflags"
"github.com/pkg/errors"
"golang.org/x/net/context" // switch to "context" when we stop supporting go1.8
)
// FS represents the top level filing system
@@ -24,7 +24,7 @@ type FS struct {
f fs.Fs
}
// Check interface satistfied
// Check interface satisfied
var _ fusefs.FS = (*FS)(nil)
// NewFS makes a new FS
@@ -46,7 +46,7 @@ func (f *FS) Root() (node fusefs.Node, err error) {
return &Dir{root}, nil
}
// Check interface satsified
// Check interface satisfied
var _ fusefs.FSStatfser = (*FS)(nil)
// Statfs is called to obtain file system metadata.

View File

@@ -3,13 +3,13 @@
package mount
import (
"context"
"io"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/ncw/rclone/fs/log"
"github.com/ncw/rclone/vfs"
"golang.org/x/net/context" // switch to "context" when we stop supporting go1.8
)
// FileHandle is an open for read file handle on a File

View File

@@ -39,6 +39,11 @@ var (
// RunTests runs all the tests against all the VFS cache modes
func RunTests(t *testing.T, fn MountFn) {
// Kill everything if the timer elapses
timer := time.AfterFunc(60*time.Second, func() {
panic("mount has locked up")
})
defer timer.Stop()
mountFn = fn
flag.Parse()
cacheModes := []vfs.CacheMode{

View File

@@ -10,11 +10,13 @@ import (
// Globals
var (
deleteEmptySrcDirs = false
createEmptySrcDirs = false
)
func init() {
cmd.Root.AddCommand(commandDefintion)
commandDefintion.Flags().BoolVarP(&deleteEmptySrcDirs, "delete-empty-src-dirs", "", deleteEmptySrcDirs, "Delete empty source dirs after move")
commandDefintion.Flags().BoolVarP(&createEmptySrcDirs, "create-empty-src-dirs", "", createEmptySrcDirs, "Create empty source dirs on destination after move")
}
var commandDefintion = &cobra.Command{
@@ -52,7 +54,7 @@ can speed transfers up greatly.
fsrc, srcFileName, fdst := cmd.NewFsSrcFileDst(args)
cmd.Run(true, true, command, func() error {
if srcFileName == "" {
return sync.MoveDir(fdst, fsrc, deleteEmptySrcDirs)
return sync.MoveDir(fdst, fsrc, deleteEmptySrcDirs, createEmptySrcDirs)
}
return operations.MoveFile(fdst, fsrc, srcFileName, srcFileName)
})

View File

@@ -19,7 +19,7 @@ If source:path is a file or directory then it moves it to a file or
directory named dest:path.
This can be used to rename files or upload single files to other than
their existing name. If the source is a directory then it acts exacty
their existing name. If the source is a directory then it acts exactly
like the move command.
So
@@ -52,7 +52,7 @@ transfer.
cmd.Run(true, true, command, func() error {
if srcFileName == "" {
return sync.MoveDir(fdst, fsrc, false)
return sync.MoveDir(fdst, fsrc, false, false)
}
return operations.MoveFile(fdst, fsrc, dstFileName, srcFileName)
})

View File

@@ -10,6 +10,7 @@ import (
"sort"
"strings"
runewidth "github.com/mattn/go-runewidth"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/cmd/ncdu/scan"
"github.com/ncw/rclone/fs"
@@ -122,7 +123,7 @@ func Printf(x, y int, fg, bg termbox.Attribute, format string, args ...interface
func Line(x, y, xmax int, fg, bg termbox.Attribute, spacer rune, msg string) {
for _, c := range msg {
termbox.SetCell(x, y, c, fg, bg)
x++
x += runewidth.RuneWidth(c)
if x >= xmax {
return
}
@@ -360,7 +361,7 @@ func (u *UI) Draw() error {
Linef(0, h-1, w, termbox.ColorBlack, termbox.ColorWhite, ' ', "Total usage: %v, Objects: %d%s", fs.SizeSuffix(size), count, message)
}
// Show the box on top if requred
// Show the box on top if required
if u.showBox {
u.Box()
}

View File

@@ -82,7 +82,7 @@ var (
progressMu sync.Mutex
)
// printProgress prings the progress with an optional log
// printProgress prints the progress with an optional log
func printProgress(logMessage string) {
progressMu.Lock()
defer progressMu.Unlock()

View File

@@ -1,451 +0,0 @@
package dlna
const contentDirectoryServiceDescription = `<?xml version="1.0"?>
<scpd xmlns="urn:schemas-upnp-org:service-1-0">
<specVersion>
<major>1</major>
<minor>0</minor>
</specVersion>
<actionList>
<action>
<name>GetSearchCapabilities</name>
<argumentList>
<argument>
<name>SearchCaps</name>
<direction>out</direction>
<relatedStateVariable>SearchCapabilities</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetSortCapabilities</name>
<argumentList>
<argument>
<name>SortCaps</name>
<direction>out</direction>
<relatedStateVariable>SortCapabilities</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetSortExtensionCapabilities</name>
<argumentList>
<argument>
<name>SortExtensionCaps</name>
<direction>out</direction>
<relatedStateVariable>SortExtensionCapabilities</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetFeatureList</name>
<argumentList>
<argument>
<name>FeatureList</name>
<direction>out</direction>
<relatedStateVariable>FeatureList</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetSystemUpdateID</name>
<argumentList>
<argument>
<name>Id</name>
<direction>out</direction>
<relatedStateVariable>SystemUpdateID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>Browse</name>
<argumentList>
<argument>
<name>ObjectID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>BrowseFlag</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_BrowseFlag</relatedStateVariable>
</argument>
<argument>
<name>Filter</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Filter</relatedStateVariable>
</argument>
<argument>
<name>StartingIndex</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Index</relatedStateVariable>
</argument>
<argument>
<name>RequestedCount</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>SortCriteria</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_SortCriteria</relatedStateVariable>
</argument>
<argument>
<name>Result</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Result</relatedStateVariable>
</argument>
<argument>
<name>NumberReturned</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>TotalMatches</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>UpdateID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_UpdateID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>Search</name>
<argumentList>
<argument>
<name>ContainerID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>SearchCriteria</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_SearchCriteria</relatedStateVariable>
</argument>
<argument>
<name>Filter</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Filter</relatedStateVariable>
</argument>
<argument>
<name>StartingIndex</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Index</relatedStateVariable>
</argument>
<argument>
<name>RequestedCount</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>SortCriteria</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_SortCriteria</relatedStateVariable>
</argument>
<argument>
<name>Result</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Result</relatedStateVariable>
</argument>
<argument>
<name>NumberReturned</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>TotalMatches</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>UpdateID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_UpdateID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>CreateObject</name>
<argumentList>
<argument>
<name>ContainerID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>Elements</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Result</relatedStateVariable>
</argument>
<argument>
<name>ObjectID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>Result</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Result</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>DestroyObject</name>
<argumentList>
<argument>
<name>ObjectID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>UpdateObject</name>
<argumentList>
<argument>
<name>ObjectID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>CurrentTagValue</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_TagValueList</relatedStateVariable>
</argument>
<argument>
<name>NewTagValue</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_TagValueList</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>MoveObject</name>
<argumentList>
<argument>
<name>ObjectID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>NewParentID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>NewObjectID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>ImportResource</name>
<argumentList>
<argument>
<name>SourceURI</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_URI</relatedStateVariable>
</argument>
<argument>
<name>DestinationURI</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_URI</relatedStateVariable>
</argument>
<argument>
<name>TransferID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_TransferID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>ExportResource</name>
<argumentList>
<argument>
<name>SourceURI</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_URI</relatedStateVariable>
</argument>
<argument>
<name>DestinationURI</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_URI</relatedStateVariable>
</argument>
<argument>
<name>TransferID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_TransferID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>StopTransferResource</name>
<argumentList>
<argument>
<name>TransferID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_TransferID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>DeleteResource</name>
<argumentList>
<argument>
<name>ResourceURI</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_URI</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetTransferProgress</name>
<argumentList>
<argument>
<name>TransferID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_TransferID</relatedStateVariable>
</argument>
<argument>
<name>TransferStatus</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_TransferStatus</relatedStateVariable>
</argument>
<argument>
<name>TransferLength</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_TransferLength</relatedStateVariable>
</argument>
<argument>
<name>TransferTotal</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_TransferTotal</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>CreateReference</name>
<argumentList>
<argument>
<name>ContainerID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>ObjectID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>NewID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
</argumentList>
</action>
</actionList>
<serviceStateTable>
<stateVariable sendEvents="no">
<name>SearchCapabilities</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>SortCapabilities</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>SortExtensionCapabilities</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>SystemUpdateID</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>ContainerUpdateIDs</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>TransferIDs</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>FeatureList</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ObjectID</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Result</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_SearchCriteria</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_BrowseFlag</name>
<dataType>string</dataType>
<allowedValueList>
<allowedValue>BrowseMetadata</allowedValue>
<allowedValue>BrowseDirectChildren</allowedValue>
</allowedValueList>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Filter</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_SortCriteria</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Index</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Count</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_UpdateID</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_TransferID</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_TransferStatus</name>
<dataType>string</dataType>
<allowedValueList>
<allowedValue>COMPLETED</allowedValue>
<allowedValue>ERROR</allowedValue>
<allowedValue>IN_PROGRESS</allowedValue>
<allowedValue>STOPPED</allowedValue>
</allowedValueList>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_TransferLength</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_TransferTotal</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_TagValueList</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_URI</name>
<dataType>uri</dataType>
</stateVariable>
</serviceStateTable>
</scpd>`


@@ -9,11 +9,13 @@ import (
"os"
"path"
"path/filepath"
"regexp"
"sort"
"github.com/anacrolix/dms/dlna"
"github.com/anacrolix/dms/upnp"
"github.com/anacrolix/dms/upnpav"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/vfs"
"github.com/pkg/errors"
)
@@ -27,6 +29,8 @@ func (cds *contentDirectoryService) updateIDString() string {
return fmt.Sprintf("%d", uint32(os.Getpid()))
}
var mediaMimeTypeRegexp = regexp.MustCompile("^(video|audio|image)/")
// Turns the given entry and DMS host into a UPnP object. A nil object is
// returned if the entry is not of interest.
func (cds *contentDirectoryService) cdsObjectToUpnpavObject(cdsObject object, fileInfo os.FileInfo, host string) (ret interface{}, err error) {
@@ -47,8 +51,13 @@ func (cds *contentDirectoryService) cdsObjectToUpnpavObject(cdsObject object, fi
return
}
// Hardcode "videoItem" so that files show up in VLC.
obj.Class = "object.item.videoItem"
mimeType := fs.MimeTypeFromName(fileInfo.Name())
mediaType := mediaMimeTypeRegexp.FindStringSubmatch(mimeType)
if mediaType == nil {
return
}
obj.Class = "object.item." + mediaType[1] + "Item"
obj.Title = fileInfo.Name()
item := upnpav.Item{
@@ -65,8 +74,7 @@ func (cds *contentDirectoryService) cdsObjectToUpnpavObject(cdsObject object, fi
"path": {cdsObject.Path},
}.Encode(),
}).String(),
// Hardcode "video/x-matroska" so that files show up in VLC.
ProtocolInfo: fmt.Sprintf("http-get:*:video/x-matroska:%s", dlna.ContentFeatures{
ProtocolInfo: fmt.Sprintf("http-get:*:%s:%s", mimeType, dlna.ContentFeatures{
SupportRange: true,
}.String()),
Bitrate: 0,
@@ -106,14 +114,14 @@ func (cds *contentDirectoryService) readContainer(o object, host string) (ret []
}
obj, err := cds.cdsObjectToUpnpavObject(child, de, host)
if err != nil {
log.Printf("error with %s: %s", child.FilePath(), err)
fs.Errorf(cds, "error with %s: %s", child.FilePath(), err)
continue
}
if obj != nil {
ret = append(ret, obj)
} else {
log.Printf("bad %s", de)
if obj == nil {
fs.Debugf(cds, "unrecognized file type: %s", de)
continue
}
ret = append(ret, obj)
}
return
@@ -192,6 +200,14 @@ func (cds *contentDirectoryService) Handle(action string, argsXML []byte, r *htt
"Result": didlLite(string(result)),
"UpdateID": cds.updateIDString(),
}, nil
case "BrowseMetadata":
result, err := xml.Marshal(obj)
if err != nil {
return nil, err
}
return map[string]string{
"Result": didlLite(string(result)),
}, nil
default:
return nil, upnp.Errorf(upnp.ArgumentValueInvalidErrorCode, "unhandled browse flag: %v", browse.BrowseFlag)
}
@@ -199,6 +215,19 @@ func (cds *contentDirectoryService) Handle(action string, argsXML []byte, r *htt
return map[string]string{
"SearchCaps": "",
}, nil
// Samsung Extensions
case "X_GetFeatureList":
return map[string]string{
"FeatureList": `<Features xmlns="urn:schemas-upnp-org:av:avs" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:schemas-upnp-org:av:avs http://www.upnp.org/schemas/av/avs.xsd">
<Feature name="samsung.com_BASICVIEW" version="1">
<container id="/" type="object.item.imageItem"/>
<container id="/" type="object.item.audioItem"/>
<container id="/" type="object.item.videoItem"/>
</Feature>
</Features>`}, nil
case "X_SetBookmark":
// just ignore
return map[string]string{}, nil
default:
return nil, upnp.InvalidActionError
}

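The class-selection change above replaces the hardcoded "videoItem" with a class derived from the file's MIME type, so audio and images show up correctly and unrecognized files are skipped. A minimal standalone sketch of the same idea; upnpClass is a hypothetical helper, and the real code inlines this logic in cdsObjectToUpnpavObject and obtains the MIME type via fs.MimeTypeFromName rather than taking it as an argument:

package main

import (
	"fmt"
	"regexp"
)

// Only audio, video and image MIME types are of interest; anything else is
// skipped, which is what makes unrecognized files disappear from listings.
var mediaMimeTypeRegexp = regexp.MustCompile("^(video|audio|image)/")

// upnpClass maps a MIME type to a UPnP object class, for example
// "video/x-matroska" -> "object.item.videoItem".
func upnpClass(mimeType string) (string, bool) {
	m := mediaMimeTypeRegexp.FindStringSubmatch(mimeType)
	if m == nil {
		return "", false
	}
	return "object.item." + m[1] + "Item", true
}

func main() {
	for _, mt := range []string{"video/x-matroska", "audio/mpeg", "image/jpeg", "text/plain"} {
		if class, ok := upnpClass(mt); ok {
			fmt.Printf("%s -> %s\n", mt, class)
		} else {
			fmt.Printf("%s -> skipped\n", mt)
		}
	}
}
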
cmd/serve/dlna/cms.go Normal file

@@ -0,0 +1,26 @@
package dlna
import (
"net/http"
"github.com/anacrolix/dms/upnp"
)
const defaultProtocolInfo = "http-get:*:video/mpeg:*,http-get:*:video/mp4:*,http-get:*:video/vnd.dlna.mpeg-tts:*,http-get:*:video/avi:*,http-get:*:video/x-matroska:*,http-get:*:video/x-ms-wmv:*,http-get:*:video/wtv:*,http-get:*:audio/mpeg:*,http-get:*:audio/mp3:*,http-get:*:audio/mp4:*,http-get:*:audio/x-ms-wma*,http-get:*:audio/wav:*,http-get:*:audio/L16:*,http-get:*image/jpeg:*,http-get:*image/png:*,http-get:*image/gif:*,http-get:*image/tiff:*"
type connectionManagerService struct {
*server
upnp.Eventing
}
func (cms *connectionManagerService) Handle(action string, argsXML []byte, r *http.Request) (map[string]string, error) {
switch action {
case "GetProtocolInfo":
return map[string]string{
"Source": defaultProtocolInfo,
"Sink": "",
}, nil
default:
return nil, upnp.InvalidActionError
}
}

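The ConnectionManager handler above follows the same shape as the ContentDirectory one: the SOAP action name is switched on and the out-arguments come back as a name-to-value map. A self-contained sketch of that dispatch pattern, simplified in that the real handler returns upnp.InvalidActionError and advertises the much longer defaultProtocolInfo string:

package main

import (
	"errors"
	"fmt"
)

// handleAction mimics the service handlers above: one case per supported
// SOAP action, with the out-arguments returned as a map keyed by name.
func handleAction(action string) (map[string]string, error) {
	switch action {
	case "GetProtocolInfo":
		return map[string]string{
			"Source": "http-get:*:video/mp4:*,http-get:*:audio/mpeg:*",
			"Sink":   "",
		}, nil
	default:
		return nil, errors.New("invalid action: " + action)
	}
}

func main() {
	resp, err := handleAction("GetProtocolInfo")
	if err != nil {
		panic(err)
	}
	fmt.Println(resp["Source"])
}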

@@ -0,0 +1,24 @@
//go:generate go run assets_generate.go
// The "go:generate" directive compiles static assets by running assets_generate.go
// +build ignore
package main
import (
"log"
"net/http"
"github.com/shurcooL/vfsgen"
)
func main() {
var AssetDir http.FileSystem = http.Dir("./static")
err := vfsgen.Generate(AssetDir, vfsgen.Options{
PackageName: "data",
BuildTags: "!dev",
VariableName: "Assets",
})
if err != nil {
log.Fatalln(err)
}
}

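vfsgen turns the ./static directory into a generated Go file that exposes an http.FileSystem variable named Assets (the suppressed diff below is presumably that generated file). A minimal sketch of how such a variable is consumed, with a plain directory standing in for the generated data so it runs without code generation; the DLNA server itself mounts it under /static/ with a Cache-Control header:

package main

import "net/http"

// Assets stands in for the vfsgen-generated http.FileSystem; a plain
// directory is used here so the sketch is runnable on its own.
var Assets http.FileSystem = http.Dir("./static")

func main() {
	// Mount the assets under /static/, mirroring how the DLNA server serves
	// its SCPD XML files and icons.
	http.Handle("/static/", http.StripPrefix("/static/", http.FileServer(Assets)))
	_ = http.ListenAndServe("localhost:8080", nil)
}
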
File diff suppressed because one or more lines are too long


@@ -0,0 +1,4 @@
//go:generate go run assets_generate.go
// The "go:generate" directive compiles static assets by running assets_generate.go
package data


@@ -0,0 +1,182 @@
<?xml version="1.0" encoding="UTF-8"?>
<scpd xmlns="urn:schemas-upnp-org:service-1-0">
<specVersion>
<major>1</major>
<minor>0</minor>
</specVersion>
<actionList>
<action>
<name>GetProtocolInfo</name>
<argumentList>
<argument>
<name>Source</name>
<direction>out</direction>
<relatedStateVariable>SourceProtocolInfo</relatedStateVariable>
</argument>
<argument>
<name>Sink</name>
<direction>out</direction>
<relatedStateVariable>SinkProtocolInfo</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>PrepareForConnection</name>
<argumentList>
<argument>
<name>RemoteProtocolInfo</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ProtocolInfo</relatedStateVariable>
</argument>
<argument>
<name>PeerConnectionManager</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionManager</relatedStateVariable>
</argument>
<argument>
<name>PeerConnectionID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
<argument>
<name>Direction</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Direction</relatedStateVariable>
</argument>
<argument>
<name>ConnectionID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
<argument>
<name>AVTransportID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_AVTransportID</relatedStateVariable>
</argument>
<argument>
<name>RcsID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_RcsID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>ConnectionComplete</name>
<argumentList>
<argument>
<name>ConnectionID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetCurrentConnectionIDs</name>
<argumentList>
<argument>
<name>ConnectionIDs</name>
<direction>out</direction>
<relatedStateVariable>CurrentConnectionIDs</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetCurrentConnectionInfo</name>
<argumentList>
<argument>
<name>ConnectionID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
<argument>
<name>RcsID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_RcsID</relatedStateVariable>
</argument>
<argument>
<name>AVTransportID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_AVTransportID</relatedStateVariable>
</argument>
<argument>
<name>ProtocolInfo</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ProtocolInfo</relatedStateVariable>
</argument>
<argument>
<name>PeerConnectionManager</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionManager</relatedStateVariable>
</argument>
<argument>
<name>PeerConnectionID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionID</relatedStateVariable>
</argument>
<argument>
<name>Direction</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Direction</relatedStateVariable>
</argument>
<argument>
<name>Status</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ConnectionStatus</relatedStateVariable>
</argument>
</argumentList>
</action>
</actionList>
<serviceStateTable>
<stateVariable sendEvents="yes">
<name>SourceProtocolInfo</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>SinkProtocolInfo</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>CurrentConnectionIDs</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ConnectionStatus</name>
<dataType>string</dataType>
<allowedValueList>
<allowedValue>OK</allowedValue>
<allowedValue>ContentFormatMismatch</allowedValue>
<allowedValue>InsufficientBandwidth</allowedValue>
<allowedValue>UnreliableChannel</allowedValue>
<allowedValue>Unknown</allowedValue>
</allowedValueList>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ConnectionManager</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Direction</name>
<dataType>string</dataType>
<allowedValueList>
<allowedValue>Input</allowedValue>
<allowedValue>Output</allowedValue>
</allowedValueList>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ProtocolInfo</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ConnectionID</name>
<dataType>i4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_AVTransportID</name>
<dataType>i4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_RcsID</name>
<dataType>i4</dataType>
</stateVariable>
</serviceStateTable>
</scpd>


@@ -0,0 +1,504 @@
<?xml version="1.0"?>
<scpd xmlns="urn:schemas-upnp-org:service-1-0">
<specVersion>
<major>1</major>
<minor>0</minor>
</specVersion>
<actionList>
<action>
<name>GetSearchCapabilities</name>
<argumentList>
<argument>
<name>SearchCaps</name>
<direction>out</direction>
<relatedStateVariable>SearchCapabilities</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetSortCapabilities</name>
<argumentList>
<argument>
<name>SortCaps</name>
<direction>out</direction>
<relatedStateVariable>SortCapabilities</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetSortExtensionCapabilities</name>
<argumentList>
<argument>
<name>SortExtensionCaps</name>
<direction>out</direction>
<relatedStateVariable>SortExtensionCapabilities</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetFeatureList</name>
<argumentList>
<argument>
<name>FeatureList</name>
<direction>out</direction>
<relatedStateVariable>FeatureList</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetSystemUpdateID</name>
<argumentList>
<argument>
<name>Id</name>
<direction>out</direction>
<relatedStateVariable>SystemUpdateID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>Browse</name>
<argumentList>
<argument>
<name>ObjectID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>BrowseFlag</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_BrowseFlag</relatedStateVariable>
</argument>
<argument>
<name>Filter</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Filter</relatedStateVariable>
</argument>
<argument>
<name>StartingIndex</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Index</relatedStateVariable>
</argument>
<argument>
<name>RequestedCount</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>SortCriteria</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_SortCriteria</relatedStateVariable>
</argument>
<argument>
<name>Result</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Result</relatedStateVariable>
</argument>
<argument>
<name>NumberReturned</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>TotalMatches</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>UpdateID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_UpdateID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>Search</name>
<argumentList>
<argument>
<name>ContainerID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>SearchCriteria</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_SearchCriteria</relatedStateVariable>
</argument>
<argument>
<name>Filter</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Filter</relatedStateVariable>
</argument>
<argument>
<name>StartingIndex</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Index</relatedStateVariable>
</argument>
<argument>
<name>RequestedCount</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>SortCriteria</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_SortCriteria</relatedStateVariable>
</argument>
<argument>
<name>Result</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Result</relatedStateVariable>
</argument>
<argument>
<name>NumberReturned</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>TotalMatches</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Count</relatedStateVariable>
</argument>
<argument>
<name>UpdateID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_UpdateID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>CreateObject</name>
<argumentList>
<argument>
<name>ContainerID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>Elements</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_Result</relatedStateVariable>
</argument>
<argument>
<name>ObjectID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>Result</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Result</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>DestroyObject</name>
<argumentList>
<argument>
<name>ObjectID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>UpdateObject</name>
<argumentList>
<argument>
<name>ObjectID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>CurrentTagValue</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_TagValueList</relatedStateVariable>
</argument>
<argument>
<name>NewTagValue</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_TagValueList</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>MoveObject</name>
<argumentList>
<argument>
<name>ObjectID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>NewParentID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>NewObjectID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>ImportResource</name>
<argumentList>
<argument>
<name>SourceURI</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_URI</relatedStateVariable>
</argument>
<argument>
<name>DestinationURI</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_URI</relatedStateVariable>
</argument>
<argument>
<name>TransferID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_TransferID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>ExportResource</name>
<argumentList>
<argument>
<name>SourceURI</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_URI</relatedStateVariable>
</argument>
<argument>
<name>DestinationURI</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_URI</relatedStateVariable>
</argument>
<argument>
<name>TransferID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_TransferID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>StopTransferResource</name>
<argumentList>
<argument>
<name>TransferID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_TransferID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>DeleteResource</name>
<argumentList>
<argument>
<name>ResourceURI</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_URI</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>GetTransferProgress</name>
<argumentList>
<argument>
<name>TransferID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_TransferID</relatedStateVariable>
</argument>
<argument>
<name>TransferStatus</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_TransferStatus</relatedStateVariable>
</argument>
<argument>
<name>TransferLength</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_TransferLength</relatedStateVariable>
</argument>
<argument>
<name>TransferTotal</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_TransferTotal</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>CreateReference</name>
<argumentList>
<argument>
<name>ContainerID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>ObjectID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>NewID</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>X_GetFeatureList</name>
<argumentList>
<argument>
<name>FeatureList</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Featurelist</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>X_SetBookmark</name>
<argumentList>
<argument>
<name>CategoryType</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_CategoryType</relatedStateVariable>
</argument>
<argument>
<name>RID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_RID</relatedStateVariable>
</argument>
<argument>
<name>ObjectID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_ObjectID</relatedStateVariable>
</argument>
<argument>
<name>PosSecond</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_PosSec</relatedStateVariable>
</argument>
</argumentList>
</action>
</actionList>
<serviceStateTable>
<stateVariable sendEvents="no">
<name>SearchCapabilities</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>SortCapabilities</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>SortExtensionCapabilities</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>SystemUpdateID</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>ContainerUpdateIDs</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>TransferIDs</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>FeatureList</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_ObjectID</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Result</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_SearchCriteria</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_BrowseFlag</name>
<dataType>string</dataType>
<allowedValueList>
<allowedValue>BrowseMetadata</allowedValue>
<allowedValue>BrowseDirectChildren</allowedValue>
</allowedValueList>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Filter</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_SortCriteria</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Index</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Count</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_UpdateID</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_TransferID</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_TransferStatus</name>
<dataType>string</dataType>
<allowedValueList>
<allowedValue>COMPLETED</allowedValue>
<allowedValue>ERROR</allowedValue>
<allowedValue>IN_PROGRESS</allowedValue>
<allowedValue>STOPPED</allowedValue>
</allowedValueList>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_TransferLength</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_TransferTotal</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_TagValueList</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_URI</name>
<dataType>uri</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_CategoryType</name>
<dataType>ui4</dataType>
<defaultValue />
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_RID</name>
<dataType>ui4</dataType>
<defaultValue />
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_PosSec</name>
<dataType>ui4</dataType>
<defaultValue />
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Featurelist</name>
<dataType>string</dataType>
<defaultValue />
</stateVariable>
</serviceStateTable>
</scpd>


@@ -0,0 +1,88 @@
<?xml version="1.0" ?>
<scpd xmlns="urn:schemas-upnp-org:service-1-0">
<specVersion>
<major>1</major>
<minor>0</minor>
</specVersion>
<actionList>
<action>
<name>IsAuthorized</name>
<argumentList>
<argument>
<name>DeviceID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_DeviceID</relatedStateVariable>
</argument>
<argument>
<name>Result</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Result</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>RegisterDevice</name>
<argumentList>
<argument>
<name>RegistrationReqMsg</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_RegistrationReqMsg</relatedStateVariable>
</argument>
<argument>
<name>RegistrationRespMsg</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_RegistrationRespMsg</relatedStateVariable>
</argument>
</argumentList>
</action>
<action>
<name>IsValidated</name>
<argumentList>
<argument>
<name>DeviceID</name>
<direction>in</direction>
<relatedStateVariable>A_ARG_TYPE_DeviceID</relatedStateVariable>
</argument>
<argument>
<name>Result</name>
<direction>out</direction>
<relatedStateVariable>A_ARG_TYPE_Result</relatedStateVariable>
</argument>
</argumentList>
</action>
</actionList>
<serviceStateTable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_DeviceID</name>
<dataType>string</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_Result</name>
<dataType>int</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_RegistrationReqMsg</name>
<dataType>bin.base64</dataType>
</stateVariable>
<stateVariable sendEvents="no">
<name>A_ARG_TYPE_RegistrationRespMsg</name>
<dataType>bin.base64</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>AuthorizationGrantedUpdateID</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>AuthorizationDeniedUpdateID</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>ValidationSucceededUpdateID</name>
<dataType>ui4</dataType>
</stateVariable>
<stateVariable sendEvents="yes">
<name>ValidationRevokedUpdateID</name>
<dataType>ui4</dataType>
</stateVariable>
</serviceStateTable>
</scpd>

Binary file not shown (20 KiB).

Binary file not shown (5.6 KiB).


@@ -4,20 +4,21 @@ import (
"bytes"
"encoding/xml"
"fmt"
"log"
"net"
"net/http"
"net/url"
"os"
"path"
"strconv"
"strings"
"text/template"
"time"
dms_dlna "github.com/anacrolix/dms/dlna"
"github.com/anacrolix/dms/soap"
"github.com/anacrolix/dms/ssdp"
"github.com/anacrolix/dms/upnp"
"github.com/ncw/rclone/cmd"
"github.com/ncw/rclone/cmd/serve/dlna/data"
"github.com/ncw/rclone/cmd/serve/dlna/dlnaflags"
"github.com/ncw/rclone/fs"
"github.com/ncw/rclone/vfs"
@@ -51,7 +52,7 @@ players might show files that they are not able to play back correctly.
cmd.Run(false, false, command, func() error {
s := newServer(f, &dlnaflags.Opt)
if err := s.Serve(); err != nil {
log.Fatal(err)
return err
}
s.Wait()
return nil
@@ -60,45 +61,12 @@ players might show files that they are not able to play back correctly.
}
const (
serverField = "Linux/3.4 DLNADOC/1.50 UPnP/1.0 DMS/1.0"
rootDeviceType = "urn:schemas-upnp-org:device:MediaServer:1"
rootDeviceModelName = "rclone"
resPath = "/res"
rootDescPath = "/rootDesc.xml"
serviceControlURL = "/ctl"
serverField = "Linux/3.4 DLNADOC/1.50 UPnP/1.0 DMS/1.0"
rootDescPath = "/rootDesc.xml"
resPath = "/res"
serviceControlURL = "/ctl"
)
// Groups the service definition with its XML description.
type service struct {
upnp.Service
SCPD string
}
// Exposed UPnP AV services.
var services = []*service{
{
Service: upnp.Service{
ServiceType: "urn:schemas-upnp-org:service:ContentDirectory:1",
ServiceId: "urn:upnp-org:serviceId:ContentDirectory",
ControlURL: serviceControlURL,
},
SCPD: contentDirectoryServiceDescription,
},
}
func devices() []string {
return []string{
"urn:schemas-upnp-org:device:MediaServer:1",
}
}
func serviceTypes() (ret []string) {
for _, s := range services {
ret = append(ret, s.ServiceType)
}
return
}
type server struct {
// The service SOAP handler keyed by service URN.
services map[string]UPnPService
@@ -107,10 +75,9 @@ type server struct {
HTTPConn net.Listener
httpListenAddr string
httpServeMux *http.ServeMux
handler http.Handler
rootDeviceUUID string
rootDescXML []byte
RootDeviceUUID string
FriendlyName string
@@ -125,16 +92,16 @@ type server struct {
}
func newServer(f fs.Fs, opt *dlnaflags.Options) *server {
hostName, err := os.Hostname()
if err != nil {
hostName = ""
} else {
hostName = " (" + hostName + ")"
friendlyName := opt.FriendlyName
if friendlyName == "" {
friendlyName = makeDefaultFriendlyName()
}
s := &server{
AnnounceInterval: 10 * time.Second,
FriendlyName: "rclone" + hostName,
FriendlyName: friendlyName,
RootDeviceUUID: makeDeviceUUID(friendlyName),
Interfaces: listInterfaces(),
httpListenAddr: opt.ListenAddr,
@@ -142,35 +109,29 @@ func newServer(f fs.Fs, opt *dlnaflags.Options) *server {
vfs: vfs.New(f, &vfsflags.Opt),
}
s.initServicesMap()
s.listInterfaces()
s.httpServeMux = http.NewServeMux()
s.rootDeviceUUID = makeDeviceUUID(s.FriendlyName)
s.rootDescXML, err = xml.MarshalIndent(
upnp.DeviceDesc{
SpecVersion: upnp.SpecVersion{Major: 1, Minor: 0},
Device: upnp.Device{
DeviceType: rootDeviceType,
FriendlyName: s.FriendlyName,
Manufacturer: "rclone (rclone.org)",
ModelName: rootDeviceModelName,
UDN: s.rootDeviceUUID,
ServiceList: func() (ss []upnp.Service) {
for _, s := range services {
ss = append(ss, s.Service)
}
return
}(),
},
s.services = map[string]UPnPService{
"ContentDirectory": &contentDirectoryService{
server: s,
},
"ConnectionManager": &connectionManagerService{
server: s,
},
" ", " ")
if err != nil {
// Contents are hardcoded, so this will never happen in production.
log.Panicf("Marshal root descriptor XML: %v", err)
}
s.rootDescXML = append([]byte(`<?xml version="1.0"?>`), s.rootDescXML...)
s.initMux(s.httpServeMux)
// Setup the various http routes.
r := http.NewServeMux()
r.HandleFunc(resPath, s.resourceHandler)
if opt.LogTrace {
r.Handle(rootDescPath, traceLogging(http.HandlerFunc(s.rootDescHandler)))
r.Handle(serviceControlURL, traceLogging(http.HandlerFunc(s.serviceControlHandler)))
} else {
r.HandleFunc(rootDescPath, s.rootDescHandler)
r.HandleFunc(serviceControlURL, s.serviceControlHandler)
}
r.Handle("/static/", http.StripPrefix("/static/",
withHeader("Cache-Control", "public, max-age=86400",
http.FileServer(data.Assets))))
s.handler = logging(withHeader("Server", serverField, r))
return s
}
@@ -182,117 +143,137 @@ type UPnPService interface {
Unsubscribe(sid string) error
}
// initServicesMap is called during initialization of the server to prepare some internal datastructures.
func (s *server) initServicesMap() {
urn, err := upnp.ParseServiceType(services[0].ServiceType)
if err != nil {
// The service type is hardcoded, so this error should never happen.
log.Panicf("ParseServiceType: %v", err)
}
s.services = map[string]UPnPService{
urn.Type: &contentDirectoryService{
server: s,
},
}
return
// Formats the server as a string (used for logging.)
func (s *server) String() string {
return fmt.Sprintf("DLNA server on %v", s.httpListenAddr)
}
// listInterfaces is called during initialization of the server to list the network interfaces
// on the machine.
func (s *server) listInterfaces() {
ifs, err := net.Interfaces()
// Returns rclone version number as the model number.
func (s *server) ModelNumber() string {
return fs.Version
}
// Template used to generate the root device XML descriptor.
//
// Due to the use of namespaces and various subtleties with device compatibility,
// it turns out to be easier to use a template than to marshal XML.
//
// For rendering, it is passed the server object for context.
var rootDescTmpl = template.Must(template.New("rootDesc").Parse(`<?xml version="1.0"?>
<root xmlns="urn:schemas-upnp-org:device-1-0"
xmlns:dlna="urn:schemas-dlna-org:device-1-0"
xmlns:sec="http://www.sec.co.kr/dlna">
<specVersion>
<major>1</major>
<minor>0</minor>
</specVersion>
<device>
<deviceType>urn:schemas-upnp-org:device:MediaServer:1</deviceType>
<friendlyName>{{.FriendlyName}}</friendlyName>
<manufacturer>rclone (rclone.org)</manufacturer>
<manufacturerURL>https://rclone.org/</manufacturerURL>
<modelDescription>rclone</modelDescription>
<modelName>rclone</modelName>
<modelNumber>{{.ModelNumber}}</modelNumber>
<modelURL>https://rclone.org/</modelURL>
<serialNumber>00000000</serialNumber>
<UDN>{{.RootDeviceUUID}}</UDN>
<dlna:X_DLNACAP/>
<dlna:X_DLNADOC>DMS-1.50</dlna:X_DLNADOC>
<dlna:X_DLNADOC>M-DMS-1.50</dlna:X_DLNADOC>
<sec:ProductCap>smi,DCM10,getMediaInfo.sec,getCaptionInfo.sec</sec:ProductCap>
<sec:X_ProductCap>smi,DCM10,getMediaInfo.sec,getCaptionInfo.sec</sec:X_ProductCap>
<iconList>
<icon>
<mimetype>image/png</mimetype>
<width>48</width>
<height>48</height>
<depth>8</depth>
<url>/static/rclone-48x48.png</url>
</icon>
<icon>
<mimetype>image/png</mimetype>
<width>120</width>
<height>120</height>
<depth>8</depth>
<url>/static/rclone-120x120.png</url>
</icon>
</iconList>
<serviceList>
<service>
<serviceType>urn:schemas-upnp-org:service:ContentDirectory:1</serviceType>
<serviceId>urn:upnp-org:serviceId:ContentDirectory</serviceId>
<SCPDURL>/static/ContentDirectory.xml</SCPDURL>
<controlURL>/ctl</controlURL>
<eventSubURL></eventSubURL>
</service>
<service>
<serviceType>urn:schemas-upnp-org:service:ConnectionManager:1</serviceType>
<serviceId>urn:upnp-org:serviceId:ConnectionManager</serviceId>
<SCPDURL>/static/ConnectionManager.xml</SCPDURL>
<controlURL>/ctl</controlURL>
<eventSubURL></eventSubURL>
</service>
<service>
<serviceType>urn:microsoft.com:service:X_MS_MediaReceiverRegistrar:1</serviceType>
<serviceId>urn:microsoft.com:serviceId:X_MS_MediaReceiverRegistrar</serviceId>
<SCPDURL>/static/X_MS_MediaReceiverRegistrar.xml</SCPDURL>
<controlURL>/ctl</controlURL>
<eventSubURL></eventSubURL>
</service>
</serviceList>
<presentationURL>/</presentationURL>
</device>
</root>`))
// Renders the root device descriptor.
func (s *server) rootDescHandler(w http.ResponseWriter, r *http.Request) {
buffer := new(bytes.Buffer)
err := rootDescTmpl.Execute(buffer, s)
if err != nil {
fs.Errorf(s.f, "list network interfaces: %v", err)
serveError(s, w, "Failed to create root descriptor XML", err)
return
}
var tmp []net.Interface
for _, intf := range ifs {
if intf.Flags&net.FlagUp == 0 || intf.MTU <= 0 {
continue
}
s.Interfaces = append(s.Interfaces, intf)
tmp = append(tmp, intf)
w.Header().Set("content-type", `text/xml; charset="utf-8"`)
w.Header().Set("cache-control", "private, max-age=60")
w.Header().Set("content-length", strconv.FormatInt(int64(buffer.Len()), 10))
_, err = buffer.WriteTo(w)
if err != nil {
// Network error
fs.Debugf(s, "Error writing rootDesc: %v", err)
}
}
func (s *server) initMux(mux *http.ServeMux) {
mux.HandleFunc(resPath, func(w http.ResponseWriter, r *http.Request) {
remotePath := r.URL.Query().Get("path")
node, err := s.vfs.Stat(remotePath)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set("Content-Length", strconv.FormatInt(node.Size(), 10))
file := node.(*vfs.File)
in, err := file.Open(os.O_RDONLY)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
}
defer fs.CheckClose(in, &err)
http.ServeContent(w, r, remotePath, node.ModTime(), in)
return
})
mux.HandleFunc(rootDescPath, func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("content-type", `text/xml; charset="utf-8"`)
w.Header().Set("content-length", fmt.Sprint(len(s.rootDescXML)))
w.Header().Set("server", serverField)
_, err := w.Write(s.rootDescXML)
if err != nil {
fs.Errorf(s, "Failed to serve root descriptor XML: %v", err)
}
})
// Install handlers to serve SCPD for each UPnP service.
for _, s := range services {
p := path.Join("/scpd", s.ServiceId)
s.SCPDURL = p
mux.HandleFunc(s.SCPDURL, func(serviceDesc string) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("content-type", `text/xml; charset="utf-8"`)
http.ServeContent(w, r, ".xml", time.Time{}, bytes.NewReader([]byte(serviceDesc)))
}
}(s.SCPD))
}
mux.HandleFunc(serviceControlURL, s.serviceControlHandler)
}
// Handle a service control HTTP request.
func (s *server) serviceControlHandler(w http.ResponseWriter, r *http.Request) {
soapActionString := r.Header.Get("SOAPACTION")
soapAction, err := upnp.ParseActionHTTPHeader(soapActionString)
if err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
serveError(s, w, "Could not parse SOAPACTION header", err)
return
}
var env soap.Envelope
if err := xml.NewDecoder(r.Body).Decode(&env); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
serveError(s, w, "Could not parse SOAP request body", err)
return
}
w.Header().Set("Content-Type", `text/xml; charset="utf-8"`)
w.Header().Set("Ext", "")
w.Header().Set("server", serverField)
soapRespXML, code := func() ([]byte, int) {
respArgs, err := s.soapActionResponse(soapAction, env.Body.Action, r)
if err != nil {
fs.Errorf(s, "Error invoking %v: %v", soapAction, err)
upnpErr := upnp.ConvertError(err)
return mustMarshalXML(soap.NewFault("UPnPError", upnpErr)), 500
return mustMarshalXML(soap.NewFault("UPnPError", upnpErr)), http.StatusInternalServerError
}
return marshalSOAPResponse(soapAction, respArgs), 200
return marshalSOAPResponse(soapAction, respArgs), http.StatusOK
}()
bodyStr := fmt.Sprintf(`<?xml version="1.0" encoding="utf-8" standalone="yes"?><s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"><s:Body>%s</s:Body></s:Envelope>`, soapRespXML)
w.WriteHeader(code)
if _, err := w.Write([]byte(bodyStr)); err != nil {
log.Print(err)
fs.Infof(s, "Error writing response: %v", err)
}
}
@@ -306,6 +287,36 @@ func (s *server) soapActionResponse(sa upnp.SoapAction, actionRequestXML []byte,
return service.Handle(sa.Action, actionRequestXML, r)
}
// Serves actual resources (media files).
func (s *server) resourceHandler(w http.ResponseWriter, r *http.Request) {
remotePath := r.URL.Query().Get("path")
node, err := s.vfs.Stat(remotePath)
if err != nil {
http.NotFound(w, r)
return
}
w.Header().Set("Content-Length", strconv.FormatInt(node.Size(), 10))
// add some DLNA specific headers
if r.Header.Get("getContentFeatures.dlna.org") != "" {
w.Header().Set("contentFeatures.dlna.org", dms_dlna.ContentFeatures{
SupportRange: true,
}.String())
}
w.Header().Set("transferMode.dlna.org", "Streaming")
file := node.(*vfs.File)
in, err := file.Open(os.O_RDONLY)
if err != nil {
serveError(node, w, "Could not open resource", err)
return
}
defer fs.CheckClose(in, &err)
http.ServeContent(w, r, remotePath, node.ModTime(), in)
}
// Serve runs the server - returns the error only if
// the listener was not started; does not block, so
// use s.Wait() to block on the listener indefinitely.
@@ -381,13 +392,19 @@ func (s *server) ssdpInterface(intf net.Interface) {
return url.String()
}
// Note that the devices and services advertised here via SSDP should be
// in agreement with the rootDesc XML descriptor that is defined above.
ssdpServer := ssdp.Server{
Interface: intf,
Devices: devices(),
Services: serviceTypes(),
Interface: intf,
Devices: []string{
"urn:schemas-upnp-org:device:MediaServer:1"},
Services: []string{
"urn:schemas-upnp-org:service:ContentDirectory:1",
"urn:schemas-upnp-org:service:ConnectionManager:1",
"urn:microsoft.com:service:X_MS_MediaReceiverRegistrar:1"},
Location: advertiseLocationFn,
Server: serverField,
UUID: s.rootDeviceUUID,
UUID: s.RootDeviceUUID,
NotifyInterval: s.AnnounceInterval,
}
@@ -405,16 +422,16 @@ func (s *server) ssdpInterface(intf net.Interface) {
// good.
return
}
log.Printf("Error creating ssdp server on %s: %s", intf.Name, err)
fs.Errorf(s, "Error creating ssdp server on %s: %s", intf.Name, err)
return
}
defer ssdpServer.Close()
log.Println("Started SSDP on", intf.Name)
fs.Infof(s, "Started SSDP on %v", intf.Name)
stopped := make(chan struct{})
go func() {
defer close(stopped)
if err := ssdpServer.Serve(); err != nil {
log.Printf("%q: %q\n", intf.Name, err)
fs.Errorf(s, "%q: %q\n", intf.Name, err)
}
}()
select {
@@ -426,9 +443,7 @@ func (s *server) ssdpInterface(intf net.Interface) {
func (s *server) serveHTTP() error {
srv := &http.Server{
Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
s.httpServeMux.ServeHTTP(w, r)
}),
Handler: s.handler,
}
err := srv.Serve(s.HTTPConn)
select {


@@ -1,5 +1,3 @@
// +build go1.8
package dlna
import (
@@ -22,11 +20,11 @@ import (
var (
dlnaServer *server
testURL string
)
const (
testBindAddress = "localhost:51777"
testURL = "http://" + testBindAddress + "/"
testBindAddress = "localhost:0"
)
func startServer(t *testing.T, f fs.Fs) {
@@ -34,6 +32,7 @@ func startServer(t *testing.T, f fs.Fs) {
opt.ListenAddr = testBindAddress
dlnaServer = newServer(f, &opt)
assert.NoError(t, dlnaServer.Serve())
testURL = "http://" + dlnaServer.HTTPConn.Addr().String() + "/"
}
func TestInit(t *testing.T) {
@@ -59,6 +58,11 @@ func TestRootSCPD(t *testing.T) {
// Make sure that the SCPD contains a CDS service.
require.Contains(t, string(body),
"<serviceType>urn:schemas-upnp-org:service:ContentDirectory:1</serviceType>")
// Make sure that the SCPD contains a CM service.
require.Contains(t, string(body),
"<serviceType>urn:schemas-upnp-org:service:ConnectionManager:1</serviceType>")
// Ensure that the SCPD url is configured.
require.Regexp(t, "<SCPDURL>/.*</SCPDURL>", string(body))
}
// Make sure that it serves content from the remote.

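The test changes above drop the fixed localhost:51777 address in favour of localhost:0, letting the OS pick a free port and then reading the real address back from the listener, which avoids port clashes between test runs. A minimal sketch of the technique:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Asking for port 0 makes the kernel choose any free port.
	l, err := net.Listen("tcp", "localhost:0")
	if err != nil {
		panic(err)
	}
	defer func() { _ = l.Close() }()
	// The actual address (e.g. 127.0.0.1:54321) is read back afterwards,
	// just as the test builds testURL from dlnaServer.HTTPConn.Addr().
	fmt.Println("listening on", l.Addr().String())
}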

@@ -6,11 +6,28 @@ import (
"fmt"
"io"
"log"
"net"
"net/http"
"net/http/httptest"
"net/http/httputil"
"os"
"github.com/anacrolix/dms/soap"
"github.com/anacrolix/dms/upnp"
"github.com/ncw/rclone/fs"
)
// Return a default "friendly name" for the server.
func makeDefaultFriendlyName() string {
hostName, err := os.Hostname()
if err != nil {
hostName = ""
} else {
hostName = " (" + hostName + ")"
}
return "rclone" + hostName
}
func makeDeviceUUID(unique string) string {
h := md5.New()
if _, err := io.WriteString(h, unique); err != nil {
@@ -20,6 +37,24 @@ func makeDeviceUUID(unique string) string {
return upnp.FormatUUID(buf)
}
// Get all available active network interfaces.
func listInterfaces() []net.Interface {
ifs, err := net.Interfaces()
if err != nil {
log.Printf("list network interfaces: %v", err)
return []net.Interface{}
}
var active []net.Interface
for _, intf := range ifs {
if intf.Flags&net.FlagUp == 0 || intf.MTU <= 0 {
continue
}
active = append(active, intf)
}
return active
}
func didlLite(chardata string) string {
return `<DIDL-Lite` +
` xmlns:dc="http://purl.org/dc/elements/1.1/"` +
@@ -50,3 +85,99 @@ func marshalSOAPResponse(sa upnp.SoapAction, args map[string]string) []byte {
return []byte(fmt.Sprintf(`<u:%[1]sResponse xmlns:u="%[2]s">%[3]s</u:%[1]sResponse>`,
sa.Action, sa.ServiceURN.String(), mustMarshalXML(soapArgs)))
}
type loggingResponseWriter struct {
http.ResponseWriter
request *http.Request
committed bool
}
func (lrw *loggingResponseWriter) logRequest(code int, err interface{}) {
// Choose appropriate log level based on response status code.
var level fs.LogLevel
if code < 400 && err == nil {
level = fs.LogLevelInfo
} else {
level = fs.LogLevelError
}
fs.LogPrintf(level, lrw.request.URL.Path, "%s %s %d %s %s",
lrw.request.RemoteAddr, lrw.request.Method, code,
lrw.request.Header.Get("SOAPACTION"), err)
}
func (lrw *loggingResponseWriter) WriteHeader(code int) {
lrw.committed = true
lrw.logRequest(code, nil)
lrw.ResponseWriter.WriteHeader(code)
}
// HTTP handler that logs requests and any errors or panics.
func logging(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
lrw := &loggingResponseWriter{ResponseWriter: w, request: r}
defer func() {
err := recover()
if err != nil {
if !lrw.committed {
lrw.logRequest(http.StatusInternalServerError, err)
http.Error(w, fmt.Sprint(err), http.StatusInternalServerError)
} else {
// Too late to send the error to client, but at least log it.
fs.Errorf(r.URL.Path, "Recovered panic: %v", err)
}
}
}()
next.ServeHTTP(lrw, r)
})
}
// HTTP handler that logs complete request and response bodies for debugging.
// Error recovery and general request logging are left to logging().
func traceLogging(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
dump, err := httputil.DumpRequest(r, true)
if err != nil {
serveError(nil, w, "error dumping request", err)
return
}
fs.Debugf(nil, "%s", dump)
recorder := httptest.NewRecorder()
next.ServeHTTP(recorder, r)
dump, err = httputil.DumpResponse(recorder.Result(), true)
if err != nil {
// log the error but ignore it
fs.Errorf(nil, "error dumping response: %v", err)
} else {
fs.Debugf(nil, "%s", dump)
}
// copy from recorder to the real response writer
for k, v := range recorder.Header() {
w.Header()[k] = v
}
w.WriteHeader(recorder.Code)
_, err = recorder.Body.WriteTo(w)
if err != nil {
// Network error
fs.Debugf(nil, "Error writing response: %v", err)
}
})
}
// HTTP handler that sets headers.
func withHeader(name string, value string, next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set(name, value)
next.ServeHTTP(w, r)
})
}
// serveError returns an http.StatusInternalServerError and logs the error
func serveError(what interface{}, w http.ResponseWriter, text string, err error) {
fs.CountError(err)
fs.Errorf(what, "%s: %v", text, err)
http.Error(w, text+".", http.StatusInternalServerError)
}

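makeDeviceUUID above hashes the friendly name (by default "rclone (hostname)" from makeDefaultFriendlyName), so the advertised device UUID stays stable across restarts as long as the name does not change. A rough sketch of that idea with a hypothetical deviceUUID helper; the real code formats the digest with upnp.FormatUUID, so the exact layout below is only illustrative:

package main

import (
	"crypto/md5"
	"fmt"
	"io"
)

// deviceUUID derives a stable, UUID-shaped identifier from a name: the same
// name always hashes to the same bytes, so the device identity survives
// restarts of the server.
func deviceUUID(unique string) string {
	h := md5.New()
	_, _ = io.WriteString(h, unique)
	b := h.Sum(nil)
	return fmt.Sprintf("uuid:%x-%x-%x-%x-%x", b[:4], b[4:6], b[6:8], b[8:10], b[10:])
}

func main() {
	fmt.Println(deviceUUID("rclone (myhost)"))
}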

@@ -14,16 +14,25 @@ Use --addr to specify which IP address and port the server should
listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
IPs.
Use --name to choose the friendly server name, which is by
default "rclone (hostname)".
Use --log-trace in conjunction with -vv to enable additional debug
logging of all UPNP traffic.
`
// Options is the type for DLNA serving options.
type Options struct {
ListenAddr string
ListenAddr string
FriendlyName string
LogTrace bool
}
// DefaultOpt contains the defaults options for DLNA serving.
var DefaultOpt = Options{
ListenAddr: ":7879",
ListenAddr: ":7879",
FriendlyName: "",
LogTrace: false,
}
// Opt contains the options for DLNA serving.
@@ -34,6 +43,8 @@ var (
func addFlagsPrefix(flagSet *pflag.FlagSet, prefix string, Opt *Options) {
rc.AddOption("dlna", &Opt)
flags.StringVarP(flagSet, &Opt.ListenAddr, prefix+"addr", "", Opt.ListenAddr, "ip:port or :port to bind the DLNA http server to.")
flags.StringVarP(flagSet, &Opt.FriendlyName, prefix+"name", "", Opt.FriendlyName, "name of DLNA server")
flags.BoolVarP(flagSet, &Opt.LogTrace, prefix+"log-trace", "", Opt.LogTrace, "enable trace logging of SOAP traffic")
}
// AddFlags add the command line flags for DLNA serving.


@@ -78,6 +78,7 @@ func newServer(f fs.Fs, opt *ftpopt.Options) (*server, error) {
},
Hostname: host,
Port: portNum,
PublicIp: opt.PublicIP,
PassivePorts: opt.PassivePorts,
Auth: &Auth{
BasicUser: opt.BasicUser,
@@ -155,7 +156,7 @@ func (f *DriverFactory) NewDriver() (ftp.Driver, error) {
}, nil
}
//Driver impletation of ftp server
//Driver implementation of ftp server
type Driver struct {
vfs *vfs.VFS
lock sync.Mutex
@@ -378,7 +379,7 @@ func (d *Driver) PutFile(path string, data io.Reader, appendData bool) (n int64,
return bytes, nil
}
//FileInfo struct ot hold file infor for ftp server
//FileInfo struct to hold file info for ftp server
type FileInfo struct {
os.FileInfo
@@ -387,7 +388,7 @@ type FileInfo struct {
group uint32
}
//Mode return êrm mode of file.
//Mode return mode of file.
func (f *FileInfo) Mode() os.FileMode {
return f.mode
}
@@ -407,7 +408,7 @@ func (f *FileInfo) Group() string {
str := fmt.Sprint(f.group)
g, err := user.LookupGroupId(str)
if err != nil {
return str //Group not found default to numrical value
return str //Group not found default to numerical value
}
return g.Name
}


@@ -16,6 +16,7 @@ var (
func AddFlagsPrefix(flagSet *pflag.FlagSet, prefix string, Opt *ftpopt.Options) {
rc.AddOption("ftp", &Opt)
flags.StringVarP(flagSet, &Opt.ListenAddr, prefix+"addr", "", Opt.ListenAddr, "IPaddress:Port or :Port to bind server to.")
flags.StringVarP(flagSet, &Opt.PublicIP, prefix+"public-ip", "", Opt.PublicIP, "Public IP address to advertise for passive connections.")
flags.StringVarP(flagSet, &Opt.PassivePorts, prefix+"passive-port", "", Opt.PassivePorts, "Passive port range to use.")
flags.StringVarP(flagSet, &Opt.BasicUser, prefix+"user", "", Opt.BasicUser, "User name for authentication.")
flags.StringVarP(flagSet, &Opt.BasicPass, prefix+"pass", "", Opt.BasicPass, "Password for authentication. (empty value allow every password)")


@@ -24,6 +24,7 @@ You can set a single username and password with the --user and --pass flags.
type Options struct {
//TODO add more options
ListenAddr string // Port to listen on
PublicIP string // Passive ports range
PassivePorts string // Passive ports range
BasicUser string // single username for basic auth if not using Htpasswd
BasicPass string // password for BasicUser
@@ -32,6 +33,7 @@ type Options struct {
// DefaultOpt is the default values used for Options
var DefaultOpt = Options{
ListenAddr: "localhost:2121",
PublicIP: "",
PassivePorts: "30000-32000",
BasicUser: "anonymous",
BasicPass: "",


@@ -1,11 +1,8 @@
// +build go1.8
package http
import (
"flag"
"io/ioutil"
"net"
"net/http"
"strings"
"testing"
@@ -23,11 +20,11 @@ import (
var (
updateGolden = flag.Bool("updategolden", false, "update golden files for regression test")
httpServer *server
testURL string
)
const (
testBindAddress = "localhost:51777"
testURL = "http://" + testBindAddress + "/"
testBindAddress = "localhost:0"
)
func startServer(t *testing.T, f fs.Fs) {
@@ -35,13 +32,14 @@ func startServer(t *testing.T, f fs.Fs) {
opt.ListenAddr = testBindAddress
httpServer = newServer(f, &opt)
assert.NoError(t, httpServer.Serve())
testURL = httpServer.Server.URL()
// try to connect to the test server
pause := time.Millisecond
for i := 0; i < 10; i++ {
conn, err := net.Dial("tcp", testBindAddress)
resp, err := http.Head(testURL)
if err == nil {
_ = conn.Close()
_ = resp.Body.Close()
return
}
// t.Logf("couldn't connect, sleeping for %v: %v", pause, err)

Some files were not shown because too many files have changed in this diff.