Mirror of https://github.com/rclone/rclone.git (synced 2025-12-06 00:03:32 +00:00)

Compare commits: fix-no-rem...fix-sftp-d (57 commits)
| SHA1 |
|---|
| 97ade36d8c |
| 6545755758 |
| c86a55c798 |
| 1d280081d4 |
| f48cb5985f |
| 55e766f4e8 |
| 63a24255f8 |
| bc74f0621e |
| f39a08c9d7 |
| 675548070d |
| 37ff05a5fa |
| c67c1ab4ee |
| 76f8095bc5 |
| f646cd0a2a |
| d38f6bb0ab |
| 11d86c74b2 |
| feb6046a8a |
| 807102ada2 |
| 770b3496a1 |
| da36ce08e4 |
| 8652cfe575 |
| 94b1439299 |
| 97c9e55ddb |
| c0b2832509 |
| 7436768d62 |
| 55153403aa |
| daf449b5f2 |
| 221dfc3882 |
| aab29353d1 |
| c24504b793 |
| 6338d0026e |
| ba836d45ff |
| 367cf984af |
| 6b7d7d0441 |
| cf19073ac9 |
| ba5c559fec |
| abb8fe8ba1 |
| 765af387e6 |
| d05cf6aba8 |
| 76a3fef24d |
| b40d9bd4c4 |
| 4680c0776d |
| fb305b5976 |
| 5e91b93e59 |
| 58c99427b3 |
| fee0abf513 |
| 40024990b7 |
| 04aa6969a4 |
| d2050523de |
| 1cc6dd349e |
| 721bae11c3 |
| b439199578 |
| 0bfd6f793b |
| 76ea716abf |
| e635f4c0be |
| 0cb973f127 |
| 96ace599a8 |
@@ -32,3 +32,40 @@ jobs:
          publish: true
          dockerHubUser: ${{ secrets.DOCKER_HUB_USER }}
          dockerHubPassword: ${{ secrets.DOCKER_HUB_PASSWORD }}

  build_docker_volume_plugin:
    if: github.repository == 'rclone/rclone'
    needs: build
    runs-on: ubuntu-latest
    name: Build and publish docker volume plugin
    steps:
      - name: Checkout master
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - name: Set plugin parameters
        shell: bash
        run: |
          GITHUB_REF=${{ github.ref }}

          PLUGIN_IMAGE_USER=rclone
          PLUGIN_IMAGE_NAME=docker-volume-rclone
          PLUGIN_IMAGE_TAG=${GITHUB_REF#refs/tags/}
          PLUGIN_IMAGE=${PLUGIN_IMAGE_USER}/${PLUGIN_IMAGE_NAME}:${PLUGIN_IMAGE_TAG}
          PLUGIN_IMAGE_LATEST=${PLUGIN_IMAGE_USER}/${PLUGIN_IMAGE_NAME}:latest

          echo "PLUGIN_IMAGE_USER=${PLUGIN_IMAGE_USER}" >> $GITHUB_ENV
          echo "PLUGIN_IMAGE_NAME=${PLUGIN_IMAGE_NAME}" >> $GITHUB_ENV
          echo "PLUGIN_IMAGE_TAG=${PLUGIN_IMAGE_TAG}" >> $GITHUB_ENV
          echo "PLUGIN_IMAGE=${PLUGIN_IMAGE}" >> $GITHUB_ENV
          echo "PLUGIN_IMAGE_LATEST=${PLUGIN_IMAGE_LATEST}" >> $GITHUB_ENV
      - name: Build image
        shell: bash
        run: |
          make docker-plugin
      - name: Push image
        shell: bash
        run: |
          docker login -u ${{ secrets.DOCKER_HUB_USER }} -p ${{ secrets.DOCKER_HUB_PASSWORD }}
          make docker-plugin-push PLUGIN_IMAGE=${PLUGIN_IMAGE}
          make docker-plugin-push PLUGIN_IMAGE=${PLUGIN_IMAGE_LATEST}
.gitignore (vendored, 1 line changed)
@@ -13,3 +13,4 @@ rclone.iml
fuzz-build.zip
*.orig
*.rej
Thumbs.db
CONTRIBUTING.md (155 lines changed)
@@ -12,95 +12,162 @@ When filing an issue, please include the following information if
possible as well as a description of the problem. Make sure you test
with the [latest beta of rclone](https://beta.rclone.org/):

* Rclone version (e.g. output from `rclone -V`)
* Which OS you are using and how many bits (e.g. Windows 7, 64 bit)
* Rclone version (e.g. output from `rclone version`)
* Which OS you are using and how many bits (e.g. Windows 10, 64 bit)
* The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`)
* A log of the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)
* if the log contains secrets then edit the file with a text editor first to obscure them

## Submitting a pull request ##
## Submitting a new feature or bug fix ##

If you find a bug that you'd like to fix, or a new feature that you'd
like to implement then please submit a pull request via GitHub.

If it is a big feature then make an issue first so it can be discussed.
If it is a big feature, then [make an issue](https://github.com/rclone/rclone/issues) first so it can be discussed.

You'll need a Go environment set up with GOPATH set. See [the Go
getting started docs](https://golang.org/doc/install) for more info.

First in your web browser press the fork button on [rclone's GitHub
To prepare your pull request first press the fork button on [rclone's GitHub
page](https://github.com/rclone/rclone).

Now in your terminal
Then [install Git](https://git-scm.com/downloads) and set your public contribution [name](https://docs.github.com/en/github/getting-started-with-github/setting-your-username-in-git) and [email](https://docs.github.com/en/github/setting-up-and-managing-your-github-user-account/setting-your-commit-email-address#setting-your-commit-email-address-in-git).

Next open your terminal, change directory to your preferred folder and initialise your local rclone project:

    git clone https://github.com/rclone/rclone.git
    cd rclone
    git remote rename origin upstream
    # if you have SSH keys setup in your GitHub account:
    git remote add origin git@github.com:YOURUSER/rclone.git
    go build
    # otherwise:
    git remote add origin https://github.com/YOURUSER/rclone.git

Make a branch to add your new feature
Note that most of the terminal commands in the rest of this guide must be executed from the rclone folder created above.

Now [install Go](https://golang.org/doc/install) and verify your installation:

    go version

Great, you can now compile and execute your own version of rclone:

    go build
    ./rclone version

Finally make a branch to add your new feature

    git checkout -b my-new-feature

And get hacking.

When ready - run the unit tests for the code you changed
You may like one of the [popular editors/IDE's for Go](https://github.com/golang/go/wiki/IDEsAndTextEditorPlugins) and a quick view on the rclone [code organisation](#code-organisation).

When ready - test the affected functionality and run the unit tests for the code you changed

    cd folder/with/changed/files
    go test -v

Note that you may need to make a test remote, e.g. `TestSwift` for some
of the unit tests.

Note the top level Makefile targets

* make check
* make test

Both of these will be run by Travis when you make a pull request but
you can do this yourself locally too. These require some extra go
packages which you can install with

* make build_dep
This is typically enough if you made a simple bug fix, otherwise please read the rclone [testing](#testing) section too.

Make sure you

* Add [unit tests](#testing) for a new feature.
* Add [documentation](#writing-documentation) for a new feature.
* Follow the [commit message guidelines](#commit-messages).
* Add [unit tests](#testing) for a new feature
* squash commits down to one per feature
* rebase to master with `git rebase master`
* [Commit your changes](#committing-your-changes) using the [message guideline](#commit-messages).

When you are done with that
When you are done with that push your changes to Github:

    git push -u origin my-new-feature

Go to the GitHub website and click [Create pull
and open the GitHub website to [create your pull
request](https://help.github.com/articles/creating-a-pull-request/).

You patch will get reviewed and you might get asked to fix some stuff.
Your changes will then get reviewed and you might get asked to fix some stuff. If so, then make the changes in the same branch, commit and push your updates to GitHub.

If so, then make the changes in the same branch, squash the commits (make multiple commits one commit) by running:
```
git log # See how many commits you want to squash
git reset --soft HEAD~2 # This squashes the 2 latest commits together.
git status # Check what will happen, if you made a mistake resetting, you can run git reset 'HEAD@{1}' to undo.
git commit # Add a new commit message.
git push --force # Push the squashed commit to your GitHub repo.
# For more, see Stack Overflow, Git docs, or generally Duck around the web. jtagcat also recommends wizardzines.com
```
You may sometimes be asked to [base your changes on the latest master](#basing-your-changes-on-the-latest-master) or [squash your commits](#squashing-your-commits).

## CI for your fork ##
## Using Git and Github ##

### Committing your changes ###

Follow the guideline for [commit messages](#commit-messages) and then:

    git checkout my-new-feature # To switch to your branch
    git status # To see the new and changed files
    git add FILENAME # To select FILENAME for the commit
    git status # To verify the changes to be committed
    git commit # To do the commit
    git log # To verify the commit. Use q to quit the log

You can modify the message or changes in the latest commit using:

    git commit --amend

If you amend to commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).

### Replacing your previously pushed commits ###

Note that you are about to rewrite the GitHub history of your branch. It is good practice to involve your collaborators before modifying commits that have been pushed to GitHub.

Your previously pushed commits are replaced by:

    git push --force origin my-new-feature

### Basing your changes on the latest master ###

To base your changes on the latest version of the [rclone master](https://github.com/rclone/rclone/tree/master) (upstream):

    git checkout master
    git fetch upstream
    git merge --ff-only
    git push origin --follow-tags # optional update of your fork in GitHub
    git checkout my-new-feature
    git rebase master

If you rebase commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).

### Squashing your commits ###

To combine your commits into one commit:

    git log # To count the commits to squash, e.g. the last 2
    git reset --soft HEAD~2 # To undo the 2 latest commits
    git status # To check everything is as expected

If everything is fine, then make the new combined commit:

    git commit # To commit the undone commits as one

otherwise, you may roll back using:

    git reflog # To check that HEAD{1} is your previous state
    git reset --soft 'HEAD@{1}' # To roll back to your previous state

If you squash commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).

Tip: You may like to use `git rebase -i master` if you are experienced or have a more complex situation.

### GitHub Continuous Integration ###

rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions) to build and test the project, which should be automatically available for your fork too from the `Actions` tab in your repository.

## Testing ##

### Quick testing ###

rclone's tests are run from the go testing framework, so at the top
level you can run this to run all the tests.

    go test -v ./...

You can also use `make`, if supported by your platform

    make quicktest

The quicktest is [automatically run by GitHub](#github-continuous-integration) when you push your branch to GitHub.

### Backend testing ###

rclone contains a mixture of unit tests and integration tests.
Because it is difficult (and in some respects pointless) to test cloud
storage systems by mocking all their interfaces, rclone unit tests can

@@ -134,12 +201,19 @@ project root:

    go install github.com/rclone/rclone/fstest/test_all
    test_all -backend drive

### Full integration testing ###

If you want to run all the integration tests against all the remotes,
then change into the project root and run

    make check
    make test

This command is run daily on the integration test server. You can
The commands may require some extra go packages which you can install with

    make build_dep

The full integration tests are run daily on the integration test server. You can
find the results at https://pub.rclone.org/integration-tests/

## Code Organisation ##

@@ -154,6 +228,7 @@ with modules beneath.

* cmd - the rclone commands
  * all - import this to load all the commands
  * ...commands
  * cmdtest - end-to-end tests of commands, flags, environment variables,...
* docs - the documentation and website
  * content - adjust these docs only - everything else is autogenerated
    * command - these are auto generated - edit the corresponding .go file
MANUAL.html (generated, 2451 lines changed): file diff suppressed because it is too large
MANUAL.txt (generated, 3139 lines changed): file diff suppressed because it is too large
Makefile (33 lines changed)
@@ -256,3 +256,36 @@ startstable:
winzip:
	zip -9 rclone-$(TAG).zip rclone.exe

# docker volume plugin
PLUGIN_IMAGE_USER ?= rclone
PLUGIN_IMAGE_TAG ?= latest
PLUGIN_IMAGE_NAME ?= docker-volume-rclone
PLUGIN_IMAGE ?= $(PLUGIN_IMAGE_USER)/$(PLUGIN_IMAGE_NAME):$(PLUGIN_IMAGE_TAG)

PLUGIN_BASE_IMAGE := rclone/rclone:latest
PLUGIN_BUILD_DIR := ./build/docker-plugin
PLUGIN_CONTRIB_DIR := ./cmd/serve/docker/contrib/plugin
PLUGIN_CONFIG := $(PLUGIN_CONTRIB_DIR)/config.json
PLUGIN_DOCKERFILE := $(PLUGIN_CONTRIB_DIR)/Dockerfile
PLUGIN_CONTAINER := docker-volume-rclone-dev-$(shell date +'%Y%m%d-%H%M%S')

docker-plugin: docker-plugin-rootfs docker-plugin-create

docker-plugin-image: rclone
	docker build --no-cache --pull --build-arg BASE_IMAGE=${PLUGIN_BASE_IMAGE} -t ${PLUGIN_IMAGE} -f ${PLUGIN_DOCKERFILE} .

docker-plugin-rootfs: docker-plugin-image
	mkdir -p ${PLUGIN_BUILD_DIR}/rootfs
	docker create --name ${PLUGIN_CONTAINER} ${PLUGIN_IMAGE}
	docker export ${PLUGIN_CONTAINER} | tar -x -C ${PLUGIN_BUILD_DIR}/rootfs
	docker rm -vf ${PLUGIN_CONTAINER}
	cp ${PLUGIN_CONFIG} ${PLUGIN_BUILD_DIR}/config.json

docker-plugin-create:
	docker plugin rm -f ${PLUGIN_IMAGE} 2>/dev/null || true
	docker plugin create ${PLUGIN_IMAGE} ${PLUGIN_BUILD_DIR}

docker-plugin-push: docker-plugin-create
	docker plugin push ${PLUGIN_IMAGE}
	docker plugin rm ${PLUGIN_IMAGE}
@@ -80,13 +80,12 @@ func init() {
Leave blank normally. Needed only if you want to use a service principal instead of interactive login.

    $ az sp create-for-rbac --name "<name>" \
    $ az ad sp create-for-rbac --name "<name>" \
      --role "Storage Blob Data Owner" \
      --scopes "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container>" \
      > azure-principal.json

See [Use Azure CLI to assign an Azure role for access to blob and queue data](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli)
for more details.
See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and ["Assign an Azure role for access to blob data"](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details.
`,
}, {
    Name: "key",
@@ -210,12 +210,19 @@ func init() {
    if opt.TeamDriveID == "" {
        return fs.ConfigConfirm("teamdrive_ok", false, "config_change_team_drive", "Configure this as a Shared Drive (Team Drive)?\n")
    }
    return fs.ConfigConfirm("teamdrive_ok", false, "config_change_team_drive", fmt.Sprintf("Change current Shared Drive (Team Drive) ID %q?\n", opt.TeamDriveID))
    return fs.ConfigConfirm("teamdrive_change", false, "config_change_team_drive", fmt.Sprintf("Change current Shared Drive (Team Drive) ID %q?\n", opt.TeamDriveID))
case "teamdrive_ok":
    if config.Result == "false" {
        m.Set("team_drive", "")
        return nil, nil
    }
    return fs.ConfigGoto("teamdrive_config")
case "teamdrive_change":
    if config.Result == "false" {
        return nil, nil
    }
    return fs.ConfigGoto("teamdrive_config")
case "teamdrive_config":
    f, err := newFs(ctx, name, "", m)
    if err != nil {
        return nil, errors.Wrap(err, "failed to make Fs to list Shared Drives")

@@ -1321,8 +1328,8 @@ func (f *Fs) newLinkObject(remote string, info *drive.File, extension, exportMim
//
// When the drive.File cannot be represented as an fs.Object it will return (nil, nil).
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *drive.File) (fs.Object, error) {
    // If item has MD5 sum or a length it is a file stored on drive
    if info.Md5Checksum != "" || info.Size > 0 {
    // If item has MD5 sum it is a file stored on drive
    if info.Md5Checksum != "" {
        return f.newRegularObject(remote, info), nil
    }

@@ -1355,8 +1362,8 @@ func (f *Fs) newObjectWithExportInfo(
    // Pretend a dangling shortcut is a regular object
    // It will error if used, but appear in listings so it can be deleted
    return f.newRegularObject(remote, info), nil
case info.Md5Checksum != "" || info.Size > 0:
    // If item has MD5 sum or a length it is a file stored on drive
case info.Md5Checksum != "":
    // If item has MD5 sum it is a file stored on drive
    return f.newRegularObject(remote, info), nil
case f.opt.SkipGdocs:
    fs.Debugf(remote, "Skipping google document type %q", info.MimeType)
@@ -87,6 +87,11 @@ func (f *Fs) readFileInfo(ctx context.Context, url string) (*File, error) {
    return &file, err
}

// maybe do some actual validation later if necessary
func validToken(token *GetTokenResponse) bool {
    return token.Status == "OK"
}

func (f *Fs) getDownloadToken(ctx context.Context, url string) (*GetTokenResponse, error) {
    request := DownloadRequest{
        URL: url,

@@ -101,7 +106,8 @@ func (f *Fs) getDownloadToken(ctx context.Context, url string) (*GetTokenRespons
    var token GetTokenResponse
    err := f.pacer.Call(func() (bool, error) {
        resp, err := f.rest.CallJSON(ctx, &opts, &request, &token)
        return shouldRetry(ctx, resp, err)
        doretry, err := shouldRetry(ctx, resp, err)
        return doretry || !validToken(&token), err
    })
    if err != nil {
        return nil, errors.Wrap(err, "couldn't list files")
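The interesting change here is that token validity is folded into the retry predicate, so the pacer keeps retrying until the API call both succeeds and returns a usable token. A self-contained sketch of that pattern follows; `tokenResponse` and `callWithRetry` are simplified stand-ins for fichier's `GetTokenResponse` and rclone's pacer, not the real types:

```go
package main

import "fmt"

// tokenResponse stands in for fichier's GetTokenResponse.
type tokenResponse struct{ Status string }

// validToken mirrors the check added above.
func validToken(t *tokenResponse) bool { return t.Status == "OK" }

// callWithRetry retries fn while it reports a retryable condition,
// up to maxTries attempts - a simplified stand-in for pacer.Call.
func callWithRetry(maxTries int, fn func() (bool, error)) error {
    var err error
    for i := 0; i < maxTries; i++ {
        var retry bool
        retry, err = fn()
        if !retry {
            return err
        }
    }
    return err
}

func main() {
    attempts := 0
    var token tokenResponse
    err := callWithRetry(5, func() (bool, error) {
        attempts++
        // Simulate the API returning a not-ready token twice before succeeding.
        if attempts < 3 {
            token = tokenResponse{Status: "PENDING"}
        } else {
            token = tokenResponse{Status: "OK"}
        }
        // As in the hunk above: retry when the call failed OR the token is invalid.
        return !validToken(&token), nil
    })
    fmt.Println(attempts, token.Status, err) // 3 OK <nil>
}
```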
@@ -1050,6 +1050,16 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
    return errors.Wrap(err, "Update")
}
err = c.Stor(o.fs.opt.Enc.FromStandardPath(path), in)
// Ignore error 250 here - send by some servers
if err != nil {
    switch errX := err.(type) {
    case *textproto.Error:
        switch errX.Code {
        case ftp.StatusRequestedFileActionOK:
            err = nil
        }
    }
}
if err != nil {
    _ = c.Quit() // toss this connection to avoid sync errors
    remove()
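For reference, a minimal runnable sketch of the tolerance added above. `*textproto.Error` is the error type the ftp library surfaces, and 250 is the value of `ftp.StatusRequestedFileActionOK`; the sample messages are made up:

```go
package main

import (
    "fmt"
    "net/textproto"
)

// ignore250 mirrors the switch in the hunk above: treat FTP status 250,
// which some servers send after a successful STOR, as success.
func ignore250(err error) error {
    if errX, ok := err.(*textproto.Error); ok && errX.Code == 250 {
        return nil
    }
    return err
}

func main() {
    err := &textproto.Error{Code: 250, Msg: "Requested file action okay, completed."}
    fmt.Println(ignore250(err)) // <nil>
    // Any other code is still an error.
    fmt.Println(ignore250(&textproto.Error{Code: 550, Msg: "Permission denied."}))
}
```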
@@ -53,6 +53,7 @@ const (
    minSleep = 10 * time.Millisecond
    scopeReadOnly = "https://www.googleapis.com/auth/photoslibrary.readonly"
    scopeReadWrite = "https://www.googleapis.com/auth/photoslibrary"
    scopeAccess = 2 // position of access scope in list
)

var (

@@ -61,7 +62,7 @@ var (
    Scopes: []string{
        "openid",
        "profile",
        scopeReadWrite,
        scopeReadWrite, // this must be at position scopeAccess
    },
    Endpoint: google.Endpoint,
    ClientID: rcloneClientID,

@@ -89,9 +90,9 @@ func init() {
case "":
    // Fill in the scopes
    if opt.ReadOnly {
        oauthConfig.Scopes[0] = scopeReadOnly
        oauthConfig.Scopes[scopeAccess] = scopeReadOnly
    } else {
        oauthConfig.Scopes[0] = scopeReadWrite
        oauthConfig.Scopes[scopeAccess] = scopeReadWrite
    }
    return oauthutil.ConfigOut("warning", &oauthutil.Options{
        OAuth2Config: oauthConfig,
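The point of the new `scopeAccess` constant is easy to miss: the scope list gained two leading entries ("openid", "profile"), so code that kept overwriting index 0 would now clobber "openid" rather than the access scope. A small sketch using the scope URLs from the hunk:

```go
package main

import "fmt"

const (
    scopeReadOnly  = "https://www.googleapis.com/auth/photoslibrary.readonly"
    scopeReadWrite = "https://www.googleapis.com/auth/photoslibrary"
    scopeAccess    = 2 // position of access scope in list
)

func main() {
    scopes := []string{"openid", "profile", scopeReadWrite}
    // The old code assigned to scopes[0], which after the list grew would
    // have replaced "openid". Indexing via the named constant keeps the
    // substitution pointed at the access scope.
    scopes[scopeAccess] = scopeReadOnly
    fmt.Println(scopes) // [openid profile https://www.googleapis.com/auth/photoslibrary.readonly]
}
```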
@@ -37,7 +37,7 @@ func init() {
Help: `Kerberos service principal name for the namenode

Enables KERBEROS authentication. Specifies the Service Principal Name
(<SERVICE>/<FQDN>) for the namenode.`,
(SERVICE/FQDN) for the namenode.`,
Required: false,
Examples: []fs.OptionExample{{
    Value: "hdfs/namenode.hadoop.docker",
@@ -99,6 +99,11 @@ func init() {
    Help: "Files bigger than this can be resumed if the upload fail's.",
    Default: fs.SizeSuffix(10 * 1024 * 1024),
    Advanced: true,
}, {
    Name: "no_versions",
    Help: "Avoid server side versioning by deleting files and recreating files instead of overwriting them.",
    Default: false,
    Advanced: true,
}, {
    Name: config.ConfigEncoding,
    Help: config.ConfigEncodingHelp,

@@ -297,6 +302,7 @@ type Options struct {
    MD5MemoryThreshold fs.SizeSuffix `config:"md5_memory_limit"`
    TrashedOnly bool `config:"trashed_only"`
    HardDelete bool `config:"hard_delete"`
    NoVersions bool `config:"no_versions"`
    UploadThreshold fs.SizeSuffix `config:"upload_resume_limit"`
    Enc encoder.MultiEncoder `config:"encoding"`
}

@@ -1494,6 +1500,20 @@ func readMD5(in io.Reader, size, threshold int64) (md5sum string, out io.Reader,
//
// The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
    if o.fs.opt.NoVersions {
        err := o.readMetaData(ctx, false)
        if err == nil {
            // if the object exists delete it
            err = o.remove(ctx, true)
            if err != nil {
                return errors.Wrap(err, "failed to remove old object")
            }
        }
        // if the object does not exist we can just continue but if the error is something different we should report that
        if err != fs.ErrorObjectNotFound {
            return err
        }
    }
    o.fs.tokenRenewer.Start()
    defer o.fs.tokenRenewer.Stop()
    size := src.Size()

@@ -1584,8 +1604,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
    return nil
}

// Remove an object
func (o *Object) Remove(ctx context.Context) error {
func (o *Object) remove(ctx context.Context, hard bool) error {
    opts := rest.Opts{
        Method: "POST",
        Path: o.filePath(),

@@ -1593,7 +1612,7 @@ func (o *Object) Remove(ctx context.Context) error {
        NoResponse: true,
    }

    if o.fs.opt.HardDelete {
    if hard {
        opts.Parameters.Set("rm", "true")
    } else {
        opts.Parameters.Set("dl", "true")

@@ -1605,6 +1624,11 @@ func (o *Object) Remove(ctx context.Context) error {
    })
}

// Remove an object
func (o *Object) Remove(ctx context.Context) error {
    return o.remove(ctx, o.fs.opt.HardDelete)
}

// Check the interfaces are satisfied
var (
    _ fs.Fs = (*Fs)(nil)
@@ -467,6 +467,10 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
for _, name := range names {
    namepath := filepath.Join(fsDirPath, name)
    fi, fierr := os.Lstat(namepath)
    if os.IsNotExist(fierr) {
        // skip entry removed by a concurrent goroutine
        continue
    }
    if fierr != nil {
        err = errors.Wrapf(err, "failed to read directory %q", namepath)
        fs.Errorf(dir, "%v", fierr)
@@ -1500,10 +1500,85 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
        return shouldRetry(ctx, resp, err)
    })
    if err != nil {
        fmt.Println(err)
        if resp != nil && resp.StatusCode == 400 && f.driveType != driveTypePersonal {
            return "", errors.Errorf("%v (is making public links permitted by the org admin?)", err)
        }
        return "", err
    }
    return result.Link.WebURL, nil

    shareURL := result.Link.WebURL

    // Convert share link to direct download link if target is not a folder
    // Not attempting to do the conversion for regional versions, just to be safe
    if f.opt.Region != regionGlobal {
        return shareURL, nil
    }
    if info.Folder != nil {
        fs.Debugf(nil, "Can't convert share link for folder to direct link - returning the link as is")
        return shareURL, nil
    }

    cnvFailMsg := "Don't know how to convert share link to direct link - returning the link as is"
    directURL := ""
    segments := strings.Split(shareURL, "/")
    switch f.driveType {
    case driveTypePersonal:
        // Method: https://stackoverflow.com/questions/37951114/direct-download-link-to-onedrive-file
        if len(segments) != 5 {
            fs.Logf(f, cnvFailMsg)
            return shareURL, nil
        }
        enc := base64.StdEncoding.EncodeToString([]byte(shareURL))
        enc = strings.ReplaceAll(enc, "/", "_")
        enc = strings.ReplaceAll(enc, "+", "-")
        enc = strings.ReplaceAll(enc, "=", "")
        directURL = fmt.Sprintf("https://api.onedrive.com/v1.0/shares/u!%s/root/content", enc)
    case driveTypeBusiness:
        // Method: https://docs.microsoft.com/en-us/sharepoint/dev/spfx/shorter-share-link-format
        // Example:
        // https://{tenant}-my.sharepoint.com/:t:/g/personal/{user_email}/{Opaque_String}
        // --convert to->
        // https://{tenant}-my.sharepoint.com/personal/{user_email}/_layouts/15/download.aspx?share={Opaque_String}
        if len(segments) != 8 {
            fs.Logf(f, cnvFailMsg)
            return shareURL, nil
        }
        directURL = fmt.Sprintf("https://%s/%s/%s/_layouts/15/download.aspx?share=%s",
            segments[2], segments[5], segments[6], segments[7])
    case driveTypeSharepoint:
        // Method: Similar to driveTypeBusiness
        // Example:
        // https://{tenant}.sharepoint.com/:t:/s/{site_name}/{Opaque_String}
        // --convert to->
        // https://{tenant}.sharepoint.com/sites/{site_name}/_layouts/15/download.aspx?share={Opaque_String}
        //
        // https://{tenant}.sharepoint.com/:t:/t/{team_name}/{Opaque_String}
        // --convert to->
        // https://{tenant}.sharepoint.com/teams/{team_name}/_layouts/15/download.aspx?share={Opaque_String}
        //
        // https://{tenant}.sharepoint.com/:t:/g/{Opaque_String}
        // --convert to->
        // https://{tenant}.sharepoint.com/_layouts/15/download.aspx?share={Opaque_String}
        if len(segments) < 6 || len(segments) > 7 {
            fs.Logf(f, cnvFailMsg)
            return shareURL, nil
        }
        pathPrefix := ""
        switch segments[4] {
        case "s": // Site
            pathPrefix = "/sites/" + segments[5]
        case "t": // Team
            pathPrefix = "/teams/" + segments[5]
        case "g": // Root site
        default:
            fs.Logf(f, cnvFailMsg)
            return shareURL, nil
        }
        directURL = fmt.Sprintf("https://%s%s/_layouts/15/download.aspx?share=%s",
            segments[2], pathPrefix, segments[len(segments)-1])
    }

    return directURL, nil
}

// CleanUp deletes all the hidden files.
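As a standalone illustration of the personal-drive branch above, the sketch below applies the same URL-safe base64 `u!` encoding to a share link. The input URL is hypothetical; real links come back from the Graph API call shown in the hunk:

```go
package main

import (
    "encoding/base64"
    "fmt"
    "strings"
)

// directLink mirrors the driveTypePersonal case above: base64-encode the
// share URL, swap to the URL-safe alphabet, drop the padding, then wrap
// it in the shares/u!{token}/root/content endpoint.
func directLink(shareURL string) string {
    enc := base64.StdEncoding.EncodeToString([]byte(shareURL))
    enc = strings.ReplaceAll(enc, "/", "_")
    enc = strings.ReplaceAll(enc, "+", "-")
    enc = strings.ReplaceAll(enc, "=", "")
    return fmt.Sprintf("https://api.onedrive.com/v1.0/shares/u!%s/root/content", enc)
}

func main() {
    // Hypothetical share link, for illustration only.
    fmt.Println(directLink("https://1drv.ms/t/s!ExampleOpaqueString"))
}
```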
@@ -430,6 +430,12 @@ func init() {
Help: "Endpoint for OSS API.",
Provider: "Alibaba",
Examples: []fs.OptionExample{{
    Value: "oss-accelerate.aliyuncs.com",
    Help: "Global Accelerate",
}, {
    Value: "oss-accelerate-overseas.aliyuncs.com",
    Help: "Global Accelerate (outside mainland China)",
}, {
    Value: "oss-cn-hangzhou.aliyuncs.com",
    Help: "East China 1 (Hangzhou)",
}, {

@@ -446,10 +452,22 @@
    Help: "North China 3 (Zhangjiakou)",
}, {
    Value: "oss-cn-huhehaote.aliyuncs.com",
    Help: "North China 5 (Huhehaote)",
    Help: "North China 5 (Hohhot)",
}, {
    Value: "oss-cn-wulanchabu.aliyuncs.com",
    Help: "North China 6 (Ulanqab)",
}, {
    Value: "oss-cn-shenzhen.aliyuncs.com",
    Help: "South China 1 (Shenzhen)",
}, {
    Value: "oss-cn-heyuan.aliyuncs.com",
    Help: "South China 2 (Heyuan)",
}, {
    Value: "oss-cn-guangzhou.aliyuncs.com",
    Help: "South China 3 (Guangzhou)",
}, {
    Value: "oss-cn-chengdu.aliyuncs.com",
    Help: "West China 1 (Chengdu)",
}, {
    Value: "oss-cn-hongkong.aliyuncs.com",
    Help: "Hong Kong (Hong Kong)",
@@ -313,6 +313,13 @@ type Object struct {
    sha1sum *string // Cached SHA1 checksum
}

// debugf calls fs.Debugf if --dump bodies or --dump headers is set
func (f *Fs) debugf(o interface{}, text string, args ...interface{}) {
    if f.ci.Dump&(fs.DumpHeaders|fs.DumpBodies|fs.DumpRequests|fs.DumpResponses) != 0 {
        fs.Debugf(o, text, args...)
    }
}
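The `debugf` helper above only logs when at least one of rclone's `--dump` flags is set. A minimal sketch of that bitmask gate; the constants here are stand-ins for the real `fs.Dump*` flags, whose actual values are not shown in this diff:

```go
package main

import "fmt"

// Stand-ins for rclone's fs.Dump* bit flags; the real values live in the fs package.
const (
    DumpHeaders = 1 << iota
    DumpBodies
    DumpRequests
    DumpResponses
)

// shouldLog mirrors the gate inside debugf: log only when at least one
// of the four dump flags is set in the bitmask.
func shouldLog(dump int) bool {
    return dump&(DumpHeaders|DumpBodies|DumpRequests|DumpResponses) != 0
}

func main() {
    fmt.Println(shouldLog(0))                        // false: no --dump flags
    fmt.Println(shouldLog(DumpHeaders))              // true: --dump headers
    fmt.Println(shouldLog(DumpBodies | DumpHeaders)) // true: --dump bodies,headers
}
```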
// dial starts a client connection to the given SSH server. It is a
// convenience function that connects to the given network address,
// initiates the SSH handshake, and then sets up a Client.

@@ -429,10 +436,6 @@ func (f *Fs) newSftpClient(conn *ssh.Client, opts ...sftp.ClientOption) (*sftp.C
        sftp.UseConcurrentReads(!f.opt.DisableConcurrentReads),
        sftp.UseConcurrentWrites(!f.opt.DisableConcurrentWrites),
    )
    if f.opt.DisableConcurrentReads { // FIXME
        fs.Errorf(f, "Ignoring disable_concurrent_reads after library reversion - see #5197")
    }

    return sftp.NewClientPipe(pr, pw, opts...)
}

@@ -768,7 +771,9 @@ func NewFsWithConnection(ctx context.Context, f *Fs, name string, root string, m
    if err != nil {
        return nil, errors.Wrap(err, "NewFs")
    }
    f.debugf(f, "> Getwd")
    cwd, err := c.sftpClient.Getwd()
    f.debugf(f, "< Getwd: %q, err=%#v", cwd, err)
    f.putSftpConnection(&c, nil)
    if err != nil {
        fs.Debugf(f, "Failed to read current directory - using relative paths: %v", err)

@@ -849,7 +854,9 @@ func (f *Fs) dirExists(ctx context.Context, dir string) (bool, error) {
    if err != nil {
        return false, errors.Wrap(err, "dirExists")
    }
    f.debugf(f, "> Stat dirExists: %q", dir)
    info, err := c.sftpClient.Stat(dir)
    f.debugf(f, "< Stat dirExists: %#v, err=%#v", info, err)
    f.putSftpConnection(&c, err)
    if err != nil {
        if os.IsNotExist(err) {

@@ -889,7 +896,9 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
    if err != nil {
        return nil, errors.Wrap(err, "List")
    }
    f.debugf(f, "> ReadDir: %q", sftpDir)
    infos, err := c.sftpClient.ReadDir(sftpDir)
    f.debugf(f, "< ReadDir: %#v, err=%#v", infos, err)
    f.putSftpConnection(&c, err)
    if err != nil {
        return nil, errors.Wrapf(err, "error listing %q", dir)

@@ -980,7 +989,9 @@ func (f *Fs) mkdir(ctx context.Context, dirPath string) error {
    if err != nil {
        return errors.Wrap(err, "mkdir")
    }
    f.debugf(f, "> Mkdir: %q", dirPath)
    err = c.sftpClient.Mkdir(dirPath)
    f.debugf(f, "< Mkdir: err=%#v", err)
    f.putSftpConnection(&c, err)
    if err != nil {
        return errors.Wrapf(err, "mkdir %q failed", dirPath)

@@ -1011,7 +1022,9 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
    if err != nil {
        return errors.Wrap(err, "Rmdir")
    }
    f.debugf(f, "> Rmdir: %q", root)
    err = c.sftpClient.RemoveDirectory(root)
    f.debugf(f, "< Rmdir: err=%#v", err)
    f.putSftpConnection(&c, err)
    return err
}

@@ -1031,10 +1044,10 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
    if err != nil {
        return nil, errors.Wrap(err, "Move")
    }
    err = c.sftpClient.Rename(
        srcObj.path(),
        path.Join(f.absRoot, remote),
    )
    srcPath, dstPath := srcObj.path(), path.Join(f.absRoot, remote)
    f.debugf(f, "> Rename file: src=%q, dst=%q", srcPath, dstPath)
    err = c.sftpClient.Rename(srcPath, dstPath)
    f.debugf(f, "< Rename file: err=%#v", err)
    f.putSftpConnection(&c, err)
    if err != nil {
        return nil, errors.Wrap(err, "Move Rename failed")

@@ -1083,10 +1096,12 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
    if err != nil {
        return errors.Wrap(err, "DirMove")
    }
    f.debugf(f, "> Rename dir: src=%q, dst=%q", srcPath, dstPath)
    err = c.sftpClient.Rename(
        srcPath,
        dstPath,
    )
    f.debugf(f, "< Rename dir: err=%#v", err)
    f.putSftpConnection(&c, err)
    if err != nil {
        return errors.Wrapf(err, "DirMove Rename(%q,%q) failed", srcPath, dstPath)

@@ -1102,7 +1117,9 @@ func (f *Fs) run(ctx context.Context, cmd string) ([]byte, error) {
    }
    defer f.putSftpConnection(&c, err)

    f.debugf(f, "> NewSession run")
    session, err := c.sshClient.NewSession()
    f.debugf(f, "< NewSession run: %#v, err=%#v", session, err)
    if err != nil {
        return nil, errors.Wrap(err, "run: get SFTP session")
    }

@@ -1114,7 +1131,9 @@ func (f *Fs) run(ctx context.Context, cmd string) ([]byte, error) {
    session.Stdout = &stdout
    session.Stderr = &stderr

    f.debugf(f, "> Run cmd: %q", cmd)
    err = session.Run(cmd)
    f.debugf(f, "< Run cmd: err=%#v", err)
    if err != nil {
        return nil, errors.Wrapf(err, "failed to run %q: %s", cmd, stderr.Bytes())
    }

@@ -1261,7 +1280,9 @@ func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) {
    if err != nil {
        return "", errors.Wrap(err, "Hash get SFTP connection")
    }
    o.fs.debugf(o, "> NewSession hash")
    session, err := c.sshClient.NewSession()
    o.fs.debugf(o, "< NewSession hash: %#v, err=%#v", session, err)
    o.fs.putSftpConnection(&c, err)
    if err != nil {
        return "", errors.Wrap(err, "Hash put SFTP connection")

@@ -1371,7 +1392,9 @@ func (f *Fs) stat(ctx context.Context, remote string) (info os.FileInfo, err err
        return nil, errors.Wrap(err, "stat")
    }
    absPath := path.Join(f.absRoot, remote)
    f.debugf(f, "> Stat file: %q", absPath)
    info, err = c.sftpClient.Stat(absPath)
    f.debugf(f, "< Stat file: %#v, err=%#v", info, err)
    f.putSftpConnection(&c, err)
    return info, err
}

@@ -1403,7 +1426,9 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
    if err != nil {
        return errors.Wrap(err, "SetModTime")
    }
    o.fs.debugf(o, "> Chtimes: %q, %v", o.path(), modTime)
    err = c.sftpClient.Chtimes(o.path(), modTime, modTime)
    o.fs.debugf(o, "< Chtimes: err=%#v", err)
    o.fs.putSftpConnection(&c, err)
    if err != nil {
        return errors.Wrap(err, "SetModTime failed")

@@ -1491,7 +1516,9 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
    if err != nil {
        return nil, errors.Wrap(err, "Open")
    }
    o.fs.debugf(o, "> Open read: %q", o.path())
    sftpFile, err := c.sftpClient.Open(o.path())
    o.fs.debugf(o, "< Open read: %#v, err=%#v", sftpFile, err)
    o.fs.putSftpConnection(&c, err)
    if err != nil {
        return nil, errors.Wrap(err, "Open failed")

@@ -1530,7 +1557,9 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
    if err != nil {
        return errors.Wrap(err, "Update")
    }
    o.fs.debugf(o, "> OpenFile write: %q", o.path())
    file, err := c.sftpClient.OpenFile(o.path(), os.O_WRONLY|os.O_CREATE|os.O_TRUNC)
    o.fs.debugf(o, "< OpenFile write: %#v, err=%#v", file, err)
    o.fs.putSftpConnection(&c, err)
    if err != nil {
        return errors.Wrap(err, "Update Create failed")

@@ -1542,7 +1571,9 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
        fs.Debugf(src, "Failed to open new SSH connection for delete: %v", removeErr)
        return
    }
    o.fs.debugf(o, "> Remove file: %q", o.path())
    removeErr = c.sftpClient.Remove(o.path())
    o.fs.debugf(o, "< Remove file: err=%#v", removeErr)
    o.fs.putSftpConnection(&c, removeErr)
    if removeErr != nil {
        fs.Debugf(src, "Failed to remove: %v", removeErr)

@@ -1591,7 +1622,9 @@ func (o *Object) Remove(ctx context.Context) error {
    if err != nil {
        return errors.Wrap(err, "Remove")
    }
    o.fs.debugf(o, "> Remove: %q", o.path())
    err = c.sftpClient.Remove(o.path())
    o.fs.debugf(o, "< Remove: err=%#v", err)
    o.fs.putSftpConnection(&c, err)
    return err
}
@@ -1,3 +1,5 @@
# Email addresses to ignore in the git log when making the authors.md file
<nick@raig-wood.com>
<anaghk.dos@gmail.com>
<33207650+sp31415t1@users.noreply.github.com>
<unknown>
@@ -23,6 +23,7 @@ docs = [
    "rc.md",
    "overview.md",
    "flags.md",
    "docker.md",

    # Keep these alphabetical by full name
    "fichier.md",
@@ -10,6 +10,7 @@ import (
    _ "github.com/rclone/rclone/cmd/cachestats"
    _ "github.com/rclone/rclone/cmd/cat"
    _ "github.com/rclone/rclone/cmd/check"
    _ "github.com/rclone/rclone/cmd/checksum"
    _ "github.com/rclone/rclone/cmd/cleanup"
    _ "github.com/rclone/rclone/cmd/cmount"
    _ "github.com/rclone/rclone/cmd/config"
@@ -2,6 +2,7 @@ package check
import (
    "context"
    "fmt"
    "io"
    "os"
    "strings"

@@ -17,20 +18,22 @@ import (

// Globals
var (
    download = false
    oneway = false
    combined = ""
    missingOnSrc = ""
    missingOnDst = ""
    match = ""
    differ = ""
    errFile = ""
    download          = false
    oneway            = false
    combined          = ""
    missingOnSrc      = ""
    missingOnDst      = ""
    match             = ""
    differ            = ""
    errFile           = ""
    checkFileHashType = ""
)

func init() {
    cmd.Root.AddCommand(commandDefinition)
    cmdFlags := commandDefinition.Flags()
    flags.BoolVarP(cmdFlags, &download, "download", "", download, "Check by downloading rather than with hash.")
    flags.StringVarP(cmdFlags, &checkFileHashType, "checkfile", "C", checkFileHashType, "Treat source:path as a SUM file with hashes of given type")
    AddFlags(cmdFlags)
}

@@ -126,7 +129,6 @@ func GetCheckOpt(fsrc, fdst fs.Fs) (opt *operations.CheckOpt, close func(), err
    }

    return opt, close, nil

}

var commandDefinition = &cobra.Command{

@@ -144,16 +146,39 @@ If you supply the |--download| flag, it will download the data from
both remotes and check them against each other on the fly. This can
be useful for remotes that don't support hashes or if you really want
to check all the data.

If you supply the |--checkfile HASH| flag with a valid hash name,
the |source:path| must point to a text file in the SUM format.
`, "|", "`") + FlagsHelp,
    Run: func(command *cobra.Command, args []string) {
    RunE: func(command *cobra.Command, args []string) error {
        cmd.CheckArgs(2, 2, command, args)
        fsrc, fdst := cmd.NewFsSrcDst(args)
        var (
            fsrc, fdst fs.Fs
            hashType   hash.Type
            fsum       fs.Fs
            sumFile    string
        )
        if checkFileHashType != "" {
            if err := hashType.Set(checkFileHashType); err != nil {
                fmt.Println(hash.HelpString(0))
                return err
            }
            fsum, sumFile, fsrc = cmd.NewFsSrcFileDst(args)
        } else {
            fsrc, fdst = cmd.NewFsSrcDst(args)
        }

        cmd.Run(false, true, command, func() error {
            opt, close, err := GetCheckOpt(fsrc, fdst)
            if err != nil {
                return err
            }
            defer close()

            if checkFileHashType != "" {
                return operations.CheckSum(context.Background(), fsrc, fsum, sumFile, hashType, opt, download)
            }

            if download {
                return operations.CheckDownload(context.Background(), opt)
            }

@@ -165,5 +190,6 @@ to check all the data.
            }
            return operations.Check(context.Background(), opt)
        })
        return nil
    },
}
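Behind the `Run` to `RunE` switch: `RunE` lets the handler return the hash-parsing error to cobra instead of having to exit from inside the callback. A tiny self-contained cobra sketch of the difference (the `demo` command is hypothetical):

```go
package main

import (
    "errors"
    "fmt"

    "github.com/spf13/cobra"
)

func main() {
    // RunE, unlike Run, lets the handler return an error that cobra
    // propagates out of Execute() - which is what allows the hash type
    // parsing above to fail cleanly instead of exiting mid-handler.
    root := &cobra.Command{
        Use: "demo [args]",
        RunE: func(cmd *cobra.Command, args []string) error {
            if len(args) == 0 {
                return errors.New("need at least one argument")
            }
            return nil
        },
    }
    if err := root.Execute(); err != nil {
        fmt.Println("error:", err)
    }
}
```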
cmd/checksum/checksum.go (new file, 57 lines)
@@ -0,0 +1,57 @@
package checksum

import (
    "context"
    "fmt"
    "strings"

    "github.com/rclone/rclone/cmd"
    "github.com/rclone/rclone/cmd/check" // for common flags
    "github.com/rclone/rclone/fs/config/flags"
    "github.com/rclone/rclone/fs/hash"
    "github.com/rclone/rclone/fs/operations"
    "github.com/spf13/cobra"
)

var download = false

func init() {
    cmd.Root.AddCommand(commandDefinition)
    cmdFlags := commandDefinition.Flags()
    flags.BoolVarP(cmdFlags, &download, "download", "", download, "Check by hashing the contents.")
    check.AddFlags(cmdFlags)
}

var commandDefinition = &cobra.Command{
    Use:   "checksum <hash> sumfile src:path",
    Short: `Checks the files in the source against a SUM file.`,
    Long: strings.ReplaceAll(`
Checks that hashsums of source files match the SUM file.
It compares hashes (MD5, SHA1, etc) and logs a report of files which
don't match. It doesn't alter the file system.

If you supply the |--download| flag, it will download the data from remote
and calculate the contents hash on the fly. This can be useful for remotes
that don't support hashes or if you really want to check all the data.
`, "|", "`") + check.FlagsHelp,
    RunE: func(command *cobra.Command, args []string) error {
        cmd.CheckArgs(3, 3, command, args)
        var hashType hash.Type
        if err := hashType.Set(args[0]); err != nil {
            fmt.Println(hash.HelpString(0))
            return err
        }
        fsum, sumFile, fsrc := cmd.NewFsSrcFileDst(args[1:])

        cmd.Run(false, true, command, func() error {
            opt, close, err := check.GetCheckOpt(nil, fsrc)
            if err != nil {
                return err
            }
            defer close()

            return operations.CheckSum(context.Background(), fsrc, fsum, sumFile, hashType, opt, download)
        })
        return nil
    },
}
cmd/cmd.go (37 lines changed)
@@ -37,6 +37,7 @@ import (
    "github.com/rclone/rclone/fs/rc/rcserver"
    "github.com/rclone/rclone/lib/atexit"
    "github.com/rclone/rclone/lib/buildinfo"
    "github.com/rclone/rclone/lib/exitcode"
    "github.com/rclone/rclone/lib/random"
    "github.com/rclone/rclone/lib/terminal"
    "github.com/spf13/cobra"

@@ -60,19 +61,6 @@ var (
    errorTooManyArguments = errors.New("too many arguments")
)

const (
    exitCodeSuccess = iota
    exitCodeUsageError
    exitCodeUncategorizedError
    exitCodeDirNotFound
    exitCodeFileNotFound
    exitCodeRetryError
    exitCodeNoRetryError
    exitCodeFatalError
    exitCodeTransferExceeded
    exitCodeNoFilesTransferred
)

// ShowVersion prints the version to stdout
func ShowVersion() {
    osVersion, osKernel := buildinfo.GetOSVersion()

@@ -484,31 +472,31 @@ func resolveExitCode(err error) {
    if err == nil {
        if ci.ErrorOnNoTransfer {
            if accounting.GlobalStats().GetTransfers() == 0 {
                os.Exit(exitCodeNoFilesTransferred)
                os.Exit(exitcode.NoFilesTransferred)
            }
        }
        os.Exit(exitCodeSuccess)
        os.Exit(exitcode.Success)
    }

    _, unwrapped := fserrors.Cause(err)

    switch {
    case unwrapped == fs.ErrorDirNotFound:
        os.Exit(exitCodeDirNotFound)
        os.Exit(exitcode.DirNotFound)
    case unwrapped == fs.ErrorObjectNotFound:
        os.Exit(exitCodeFileNotFound)
        os.Exit(exitcode.FileNotFound)
    case unwrapped == errorUncategorized:
        os.Exit(exitCodeUncategorizedError)
        os.Exit(exitcode.UncategorizedError)
    case unwrapped == accounting.ErrorMaxTransferLimitReached:
        os.Exit(exitCodeTransferExceeded)
        os.Exit(exitcode.TransferExceeded)
    case fserrors.ShouldRetry(err):
        os.Exit(exitCodeRetryError)
        os.Exit(exitcode.RetryError)
    case fserrors.IsNoRetryError(err):
        os.Exit(exitCodeNoRetryError)
        os.Exit(exitcode.NoRetryError)
    case fserrors.IsFatalError(err):
        os.Exit(exitCodeFatalError)
        os.Exit(exitcode.FatalError)
    default:
        os.Exit(exitCodeUsageError)
        os.Exit(exitcode.UsageError)
    }
}
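The constants themselves move to a shared `lib/exitcode` package whose body is not part of this diff. A plausible sketch, assuming it simply re-exports the deleted iota sequence under the names used above:

```go
// Package exitcode - a sketch only; the order copies the const block
// deleted above, so the numeric values stay identical.
package exitcode

const (
    Success = iota
    UsageError
    UncategorizedError
    DirNotFound
    FileNotFound
    RetryError
    NoRetryError
    FatalError
    TransferExceeded
    NoFilesTransferred
)
```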
@@ -539,7 +527,8 @@ func AddBackendFlags() {
    if opt.IsPassword {
        help += " (obscured)"
    }
    flag := flags.VarPF(pflag.CommandLine, opt, name, opt.ShortOpt, help)
    flag := pflag.CommandLine.VarPF(opt, name, opt.ShortOpt, help)
    flags.SetDefaultFromEnv(pflag.CommandLine, name)
    if _, isBool := opt.Default.(bool); isBool {
        flag.NoOptDefVal = "true"
    }
@@ -4,7 +4,6 @@ import (
    "context"
    "fmt"
    "os"
    "strings"

    "github.com/pkg/errors"
    "github.com/rclone/rclone/cmd"

@@ -21,6 +20,7 @@ var (
    OutputBase64 = false
    DownloadFlag = false
    HashsumOutfile = ""
    ChecksumFile = ""
)

func init() {

@@ -33,6 +33,7 @@ func init() {
func AddHashFlags(cmdFlags *pflag.FlagSet) {
    flags.BoolVarP(cmdFlags, &OutputBase64, "base64", "", OutputBase64, "Output base64 encoded hashsum")
    flags.StringVarP(cmdFlags, &HashsumOutfile, "output-file", "", HashsumOutfile, "Output hashsums to a file rather than the terminal")
    flags.StringVarP(cmdFlags, &ChecksumFile, "checkfile", "C", ChecksumFile, "Validate hashes against a given SUM file instead of printing them")
    flags.BoolVarP(cmdFlags, &DownloadFlag, "download", "", DownloadFlag, "Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote")
}

@@ -70,7 +71,7 @@ hashed locally enabling any hash for any remote.
Run without a hash to see the list of all supported hashes, e.g.

    $ rclone hashsum
` + hashListHelp(" ") + `
` + hash.HelpString(4) + `
Then

    $ rclone hashsum MD5 remote:path

@@ -80,7 +81,7 @@ Note that hash names are case insensitive.
    RunE: func(command *cobra.Command, args []string) error {
        cmd.CheckArgs(0, 2, command, args)
        if len(args) == 0 {
            fmt.Print(hashListHelp(""))
            fmt.Print(hash.HelpString(0))
            return nil
        } else if len(args) == 1 {
            return errors.New("need hash type and remote")

@@ -88,12 +89,16 @@ Note that hash names are case insensitive.
        var ht hash.Type
        err := ht.Set(args[0])
        if err != nil {
            fmt.Println(hashListHelp(""))
            fmt.Println(hash.HelpString(0))
            return err
        }
        fsrc := cmd.NewFsSrc(args[1:])

        cmd.Run(false, false, command, func() error {
            if ChecksumFile != "" {
                fsum, sumFile := cmd.NewFsFile(ChecksumFile)
                return operations.CheckSum(context.Background(), fsrc, fsum, sumFile, ht, nil, DownloadFlag)
            }
            if HashsumOutfile == "" {
                return operations.HashLister(context.Background(), ht, OutputBase64, DownloadFlag, fsrc, nil)
            }

@@ -107,14 +112,3 @@ Note that hash names are case insensitive.
        return nil
    },
}

func hashListHelp(indent string) string {
    var help strings.Builder
    help.WriteString(indent)
    help.WriteString("Supported hashes are:\n")
    for _, ht := range hash.Supported().Array() {
        help.WriteString(indent)
        fmt.Fprintf(&help, " * %v\n", ht.String())
    }
    return help.String()
}
@@ -32,6 +32,10 @@ hashed locally enabling MD5 for any remote.
    cmd.CheckArgs(1, 1, command, args)
    fsrc := cmd.NewFsSrc(args)
    cmd.Run(false, false, command, func() error {
        if hashsum.ChecksumFile != "" {
            fsum, sumFile := cmd.NewFsFile(hashsum.ChecksumFile)
            return operations.CheckSum(context.Background(), fsrc, fsum, sumFile, hash.MD5, nil, hashsum.DownloadFlag)
        }
        if hashsum.HashsumOutfile == "" {
            return operations.HashLister(context.Background(), hash.MD5, hashsum.OutputBase64, hashsum.DownloadFlag, fsrc, nil)
        }
@@ -1,6 +1,6 @@
// Daemonization interface for non-Unix variants only

// +build windows
// +build windows plan9 js

package mountlib
@@ -1,6 +1,6 @@
// Daemonization interface for Unix variants only

// +build !windows
// +build !windows,!plan9,!js

package mountlib
cmd/mountlib/help.go (new file, 302 lines)
@@ -0,0 +1,302 @@
|
||||
package mountlib
|
||||
|
||||
// "@" will be replaced by the command name, "|" will be replaced by backticks
|
||||
var mountHelp = `
|
||||
rclone @ allows Linux, FreeBSD, macOS and Windows to
|
||||
mount any of Rclone's cloud storage systems as a file system with
|
||||
FUSE.
|
||||
|
||||
First set up your remote using |rclone config|. Check it works with |rclone ls| etc.
|
||||
|
||||
On Linux and OSX, you can either run mount in foreground mode or background (daemon) mode.
|
||||
Mount runs in foreground mode by default, use the |--daemon| flag to specify background mode.
|
||||
You can only run mount in foreground mode on Windows.
|
||||
|
||||
On Linux/macOS/FreeBSD start the mount like this, where |/path/to/local/mount|
|
||||
is an **empty** **existing** directory:
|
||||
|
||||
rclone @ remote:path/to/files /path/to/local/mount
|
||||
|
||||
On Windows you can start a mount in different ways. See [below](#mounting-modes-on-windows)
|
||||
for details. The following examples will mount to an automatically assigned drive,
|
||||
to specific drive letter |X:|, to path |C:\path\parent\mount|
|
||||
(where parent directory or drive must exist, and mount must **not** exist,
|
||||
and is not supported when [mounting as a network drive](#mounting-modes-on-windows)), and
|
||||
the last example will mount as network share |\\cloud\remote| and map it to an
|
||||
automatically assigned drive:
|
||||
|
||||
rclone @ remote:path/to/files *
|
||||
rclone @ remote:path/to/files X:
|
||||
rclone @ remote:path/to/files C:\path\parent\mount
|
||||
rclone @ remote:path/to/files \\cloud\remote
|
||||
|
||||
When the program ends while in foreground mode, either via Ctrl+C or receiving
|
||||
a SIGINT or SIGTERM signal, the mount should be automatically stopped.
|
||||
|
||||
When running in background mode the user will have to stop the mount manually:
|
||||
|
||||
# Linux
|
||||
fusermount -u /path/to/local/mount
|
||||
# OS X
|
||||
umount /path/to/local/mount
|
||||
|
||||
The umount operation can fail, for example when the mountpoint is busy.
|
||||
When that happens, it is the user's responsibility to stop the mount manually.
|
||||
|
||||
The size of the mounted file system will be set according to information retrieved
|
||||
from the remote, the same as returned by the [rclone about](https://rclone.org/commands/rclone_about/)
|
||||
command. Remotes with unlimited storage may report the used size only,
|
||||
then an additional 1 PiB of free space is assumed. If the remote does not
|
||||
[support](https://rclone.org/overview/#optional-features) the about feature
|
||||
at all, then 1 PiB is set as both the total and the free size.
|
||||
|
||||
**Note**: As of |rclone| 1.52.2, |rclone mount| now requires Go version 1.13
|
||||
or newer on some platforms depending on the underlying FUSE library in use.
|
||||
|
||||
### Installing on Windows
|
||||
|
||||
To run rclone @ on Windows, you will need to
|
||||
download and install [WinFsp](http://www.secfs.net/winfsp/).
|
||||
|
||||
[WinFsp](https://github.com/billziss-gh/winfsp) is an open source
|
||||
Windows File System Proxy which makes it easy to write user space file
|
||||
systems for Windows. It provides a FUSE emulation layer which rclone
|
||||
uses combination with [cgofuse](https://github.com/billziss-gh/cgofuse).
|
||||
Both of these packages are by Bill Zissimopoulos who was very helpful
|
||||
during the implementation of rclone @ for Windows.
|
||||
|
||||
#### Mounting modes on windows
|
||||
|
||||
Unlike other operating systems, Microsoft Windows provides a different filesystem
|
||||
type for network and fixed drives. It optimises access on the assumption fixed
|
||||
disk drives are fast and reliable, while network drives have relatively high latency
|
||||
and less reliability. Some settings can also be differentiated between the two types,
|
||||
for example that Windows Explorer should just display icons and not create preview
|
||||
thumbnails for image and video files on network drives.
|
||||
|
||||
In most cases, rclone will mount the remote as a normal, fixed disk drive by default.
|
||||
However, you can also choose to mount it as a remote network drive, often described
|
||||
as a network share. If you mount an rclone remote using the default, fixed drive mode
|
||||
and experience unexpected program errors, freezes or other issues, consider mounting
|
||||
as a network drive instead.
|
||||
|
||||
When mounting as a fixed disk drive you can either mount to an unused drive letter,
|
||||
or to a path representing a **non-existent** subdirectory of an **existing** parent
|
||||
directory or drive. Using the special value |*| will tell rclone to
|
||||
automatically assign the next available drive letter, starting with Z: and moving backward.
|
||||
Examples:
|
||||
|
||||
rclone @ remote:path/to/files *
|
||||
rclone @ remote:path/to/files X:
|
||||
rclone @ remote:path/to/files C:\path\parent\mount
|
||||
rclone @ remote:path/to/files X:
|
||||
|
||||
Option |--volname| can be used to set a custom volume name for the mounted
|
||||
file system. The default is to use the remote name and path.
|
||||
|
||||
To mount as network drive, you can add option |--network-mode|
|
||||
to your @ command. Mounting to a directory path is not supported in
|
||||
this mode, it is a limitation Windows imposes on junctions, so the remote must always
|
||||
be mounted to a drive letter.
|
||||
|
||||
rclone @ remote:path/to/files X: --network-mode
|
||||
|
||||
A volume name specified with |--volname| will be used to create the network share path.
A complete UNC path, such as |\\cloud\remote|, optionally with path
|\\cloud\remote\madeup\path|, will be used as is. Any other
string will be used as the share part, after a default prefix |\\server\|.
If no volume name is specified then |\\server\share| will be used.
You must make sure the volume name is unique when you are mounting more than one drive,
or else the mount command will fail. The share name will be treated as the volume label for
the mapped drive, shown in Windows Explorer etc., while the complete
|\\server\share| will be reported as the remote UNC path by
|net use| etc., just like a normal network drive mapping.

If you specify a full network share UNC path with |--volname|, this will implicitly
set the |--network-mode| option, so the following two examples have the same result:

    rclone @ remote:path/to/files X: --network-mode
    rclone @ remote:path/to/files X: --volname \\server\share

You may also specify the network share UNC path as the mountpoint itself. Then rclone
will automatically assign a drive letter, same as with |*|, and use that as
mountpoint, and instead use the UNC path specified as the volume name, as if it were
specified with the |--volname| option. This will also implicitly set
the |--network-mode| option. This means the following two examples have the same result:

    rclone @ remote:path/to/files \\cloud\remote
    rclone @ remote:path/to/files * --volname \\cloud\remote

There is yet another way to enable network mode, and to set the share path,
and that is to pass the "native" libfuse/WinFsp option directly:
|--fuse-flag --VolumePrefix=\server\share|. Note that the path
must be written with just a single backslash prefix in this case.
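
For example, the following is equivalent to the |--network-mode| examples
above (shown here for illustration):

    rclone @ remote:path/to/files X: --fuse-flag --VolumePrefix=\server\share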

*Note:* In previous versions of rclone this was the only supported method.

[Read more about drive mapping](https://en.wikipedia.org/wiki/Drive_mapping)

See also the [Limitations](#limitations) section below.

#### Windows filesystem permissions

The FUSE emulation layer on Windows must convert between the POSIX-based
permission model used in FUSE and the permission model used in Windows,
which is based on access-control lists (ACL).

The mounted filesystem will normally get three entries in its access-control list (ACL),
representing permissions for the POSIX permission scopes: owner, group and others.
By default, the owner and group will be taken from the current user, and the built-in
group "Everyone" will be used to represent others. The user/group can be customized
with FUSE options "UserName" and "GroupName",
e.g. |-o UserName=user123 -o GroupName="Authenticated Users"|.

The permissions on each entry will be set according to
[options](#options) |--dir-perms| and |--file-perms|,
which take a value in traditional [numeric notation](https://en.wikipedia.org/wiki/File-system_permissions#Numeric_notation),
where the default corresponds to |--file-perms 0666 --dir-perms 0777|.
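
As an illustrative example, to restrict access on the mounted files to the
owner you could tighten both values (a hypothetical invocation):

    rclone @ remote:path/to/files X: --file-perms 0600 --dir-perms 0700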

Note that the mapping of permissions is not always trivial, and the result
you see in Windows Explorer may not be exactly what you expect.
For example, when setting a value that includes write access, this will be
mapped to the individual permissions "write attributes", "write data" and "append data",
but not "write extended attributes". Windows will then show this as the basic
permission "Special" instead of "Write", because "Write" includes the
"write extended attributes" permission.
|
||||
If you set POSIX permissions for only allowing access to the owner, using
|
||||
|--file-perms 0600 --dir-perms 0700|, the user group and the built-in "Everyone"
|
||||
group will still be given some special permissions, such as "read attributes"
|
||||
and "read permissions", in Windows. This is done for compatibility reasons,
|
||||
e.g. to allow users without additional permissions to be able to read basic
|
||||
metadata about files like in UNIX. One case that may arise is that other programs
|
||||
(incorrectly) interprets this as the file being accessible by everyone. For example
|
||||
an SSH client may warn about "unprotected private key file".
|
||||
|
||||
WinFsp 2021 (version 1.9) introduces a new FUSE option, "FileSecurity",
that allows the complete specification of file security descriptors using
[SDDL](https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format).
With this you can work around issues such as the mentioned "unprotected private key file"
by specifying |-o FileSecurity="D:P(A;;FA;;;OW)"|, for file all access (FA) to the owner (OW).
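
Putting this together, a mount command using the option could look like the
following (illustrative only):

    rclone @ remote:path/to/files X: -o FileSecurity="D:P(A;;FA;;;OW)"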

#### Windows caveats

Drives created as Administrator are not visible to other accounts,
not even an account that was elevated to Administrator with the
User Account Control (UAC) feature. A consequence of this is that if you mount
to a drive letter from a Command Prompt run as Administrator, and then try
to access the same drive from Windows Explorer (which does not run as
Administrator), you will not be able to see the mounted drive.

If you don't need to access the drive from applications running with
administrative privileges, the easiest way around this is to always
create the mount from a non-elevated command prompt.

To make mapped drives available to the user account that created them,
regardless of whether it is elevated or not, there is a special Windows setting called
[linked connections](https://docs.microsoft.com/en-us/troubleshoot/windows-client/networking/mapped-drives-not-available-from-elevated-command#detail-to-configure-the-enablelinkedconnections-registry-entry)
that can be enabled.

It is also possible to make a drive mount available to everyone on the system,
by running the process creating it as the built-in SYSTEM account.
There are several ways to do this: one is to use the command-line
utility [PsExec](https://docs.microsoft.com/en-us/sysinternals/downloads/psexec),
from Microsoft's Sysinternals suite, which has option |-s| to start
processes as the SYSTEM account. Another alternative is to run the mount
command from a Windows Scheduled Task, or a Windows Service, configured
to run as the SYSTEM account. A third alternative is to use the
[WinFsp.Launcher infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture).
Note that when running rclone as another user, it will not use
the configuration file from your profile unless you tell it to
with the [|--config|](https://rclone.org/docs/#config-config-file) option.
Read more in the [install documentation](https://rclone.org/install/).

Note that mapping to a directory path, instead of a drive letter,
does not suffer from the same limitations.

### Limitations

Without the use of |--vfs-cache-mode| this can only write files
sequentially, and it can only seek when reading. This means that many
applications won't work with their files on an rclone mount without
|--vfs-cache-mode writes| or |--vfs-cache-mode full|.
See the [VFS File Caching](#vfs-file-caching) section for more info.

The bucket-based remotes (e.g. Swift, S3, Google Cloud Storage, B2,
Hubic) do not support the concept of empty directories, so empty
directories will have a tendency to disappear once they fall out of
the directory cache.

Only supported on Linux, FreeBSD, OS X and Windows at the moment.

### rclone @ vs rclone sync/copy

File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However, rclone @
can't use retries in the same way without making local copies of the
uploads. Look at the [VFS File Caching](#vfs-file-caching) section
for solutions to make @ more reliable.

### Attribute caching

You can use the flag |--attr-timeout| to set the time the kernel caches
the attributes (size, modification time, etc.) for directory entries.

The default is |1s|, which caches files just long enough to avoid
too many callbacks to rclone from the kernel.

In theory |0s| should be the correct value for filesystems which can
change outside the control of the kernel. However, this causes quite a
few problems, such as
[rclone using too much memory](https://github.com/rclone/rclone/issues/2157),
[rclone not serving files to samba](https://forum.rclone.org/t/rclone-1-39-vs-1-40-mount-issue/5112)
and [excessive time listing directories](https://github.com/rclone/rclone/issues/2095#issuecomment-371141147).

The kernel can cache the info about a file for the time given by
|--attr-timeout|. You may see corruption if the remote file changes
length during this window. It will show up as either a truncated file
or a file with garbage on the end. With |--attr-timeout 1s| this is
very unlikely but not impossible. The higher you set |--attr-timeout|,
the more likely it is. The default setting of |1s| is the lowest
setting which mitigates the problems above.

If you set it higher (|10s| or |1m| say), then the kernel will call
back to rclone less often, making it more efficient; however, there is
more chance of the corruption issue above.

If files don't change on the remote outside of the control of rclone
then there is no chance of corruption.

This is the same as setting the attr_timeout option in mount.fuse.

### Filters

Note that all the rclone filters can be used to select a subset of the
files to be visible in the mount.

### systemd

When running rclone @ as a systemd service, it is possible
to use Type=notify. In this case the service will enter the started state
after the mountpoint has been successfully set up.
Units having the rclone @ service specified as a requirement
will see all files and folders immediately in this mode. A minimal
unit sketch is shown below.
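
A minimal unit sketch, assuming the rclone binary at /usr/bin/rclone and
placeholder remote and mountpoint paths (adapt these to your system; this
is an illustration, not a shipped unit file):

    [Unit]
    Description=rclone @ for remote:path/to/files
    After=network-online.target

    [Service]
    Type=notify
    ExecStart=/usr/bin/rclone @ remote:path/to/files /path/to/local/mount
    ExecStop=/bin/fusermount -u /path/to/local/mount

    [Install]
    WantedBy=multi-user.target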

### chunked reading

|--vfs-read-chunk-size| will enable reading the source objects in parts.
This can reduce the used download quota for some remotes, by requesting only the chunks
that are actually read, at the cost of an increased number of requests.

When |--vfs-read-chunk-size-limit| is also specified and greater than
|--vfs-read-chunk-size|, the chunk size for each open file will get doubled
for each chunk read, until the specified value is reached. A value of |-1| will disable
the limit and the chunk size will grow indefinitely.

With |--vfs-read-chunk-size 100M| and |--vfs-read-chunk-size-limit 0|
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
When |--vfs-read-chunk-size-limit 500M| is specified, the result would be
0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
`
@@ -1,19 +1,14 @@
package mountlib

import (
    "io"
    "log"
    "os"
    "os/signal"
    "path/filepath"
    "runtime"
    "strings"
    "sync"
    "syscall"
    "time"

    sysdnotify "github.com/iguanesolutions/go-systemd/v5/notify"
    "github.com/pkg/errors"
    "github.com/rclone/rclone/cmd"
    "github.com/rclone/rclone/fs"
    "github.com/rclone/rclone/fs/config"
@@ -21,7 +16,11 @@ import (
    "github.com/rclone/rclone/fs/rc"
    "github.com/rclone/rclone/lib/atexit"
    "github.com/rclone/rclone/vfs"
    "github.com/rclone/rclone/vfs/vfscommon"
    "github.com/rclone/rclone/vfs/vfsflags"

    sysdnotify "github.com/iguanesolutions/go-systemd/v5/notify"
    "github.com/pkg/errors"
    "github.com/spf13/cobra"
    "github.com/spf13/pflag"
)
@@ -63,6 +62,19 @@ type (
    MountFn func(VFS *vfs.VFS, mountpoint string, opt *Options) (<-chan error, func() error, error)
)

// MountPoint represents a mount with options and runtime state
type MountPoint struct {
    MountPoint string
    MountedOn  time.Time
    MountOpt   Options
    VFSOpt     vfscommon.Options
    Fs         fs.Fs
    VFS        *vfs.VFS
    MountFn    MountFn
    UnmountFn  UnmountFn
    ErrChan    <-chan error
}

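To make the new structure concrete, here is a sketch of how a command is
expected to drive MountPoint after this refactor, mirroring the Run handler
shown later in this diff (imports of mountlib, fs and vfsflags assumed,
error handling trimmed):

    // runMount mounts fdst at mountpoint and blocks until unmounted.
    func runMount(mountFn mountlib.MountFn, fdst fs.Fs, mountpoint string) error {
        mnt := &mountlib.MountPoint{
            MountFn:    mountFn,
            MountPoint: mountpoint,
            Fs:         fdst,
            MountOpt:   mountlib.Opt,
            VFSOpt:     vfsflags.Opt,
        }
        // Mount checks overlap/emptiness, applies the volume name and mounts.
        daemonized, err := mnt.Mount()
        if !daemonized && err == nil {
            // Wait blocks until the mount ends, reloading the VFS cache on SIGHUP.
            err = mnt.Wait()
        }
        return err
    }
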
// Global constants
const (
    MaxLeafSize = 1024 // don't pass file names longer than this

@@ -106,424 +118,37 @@ func AddFlags(flagSet *pflag.FlagSet) {
    flags.BoolVarP(flagSet, &Opt.NetworkMode, "network-mode", "", Opt.NetworkMode, "Mount as remote network drive, instead of fixed disk drive. Supported on Windows only")
}

// Check if folder is empty
func checkMountEmpty(mountpoint string) error {
    fp, fpErr := os.Open(mountpoint)
    if fpErr != nil {
        return errors.Wrap(fpErr, "Can not open: "+mountpoint)
    }
    defer fs.CheckClose(fp, &fpErr)

    _, fpErr = fp.Readdirnames(1)

    // directory is not empty
    if fpErr != io.EOF {
        var e error
        var errorMsg = "Directory is not empty: " + mountpoint + " If you want to mount it anyway use: --allow-non-empty option"
        if fpErr == nil {
            e = errors.New(errorMsg)
        } else {
            e = errors.Wrap(fpErr, errorMsg)
        }
        return e
    }
    return nil
}

// Check the root doesn't overlap the mountpoint
func checkMountpointOverlap(root, mountpoint string) error {
    abs := func(x string) string {
        if absX, err := filepath.EvalSymlinks(x); err == nil {
            x = absX
        }
        if absX, err := filepath.Abs(x); err == nil {
            x = absX
        }
        x = filepath.ToSlash(x)
        if !strings.HasSuffix(x, "/") {
            x += "/"
        }
        return x
    }
    rootAbs, mountpointAbs := abs(root), abs(mountpoint)
    if strings.HasPrefix(rootAbs, mountpointAbs) || strings.HasPrefix(mountpointAbs, rootAbs) {
        return errors.Errorf("mount point %q and directory to be mounted %q mustn't overlap", mountpoint, root)
    }
    return nil
}

// NewMountCommand makes a mount command with the given name and Mount function
func NewMountCommand(commandName string, hidden bool, mount MountFn) *cobra.Command {
    var commandDefinition = &cobra.Command{
        Use:    commandName + " remote:path /path/to/mountpoint",
        Hidden: hidden,
        Short:  `Mount the remote as file system on a mountpoint.`,
        // Warning! "|" will be replaced by backticks below
        // "@" will be replaced by the command name
        Long: strings.ReplaceAll(strings.ReplaceAll(mountHelp, "|", "`"), "@", commandName) + vfs.Help,
        Run: func(command *cobra.Command, args []string) {
            cmd.CheckArgs(2, 2, command, args)
            opt := Opt // make a copy of the options

            if opt.Daemon {
            if Opt.Daemon {
                config.PassConfigKeyForDaemonization = true
            }

            mountpoint := args[1]
            fdst := cmd.NewFsDir(args)
            if fdst.Name() == "" || fdst.Name() == "local" {
                err := checkMountpointOverlap(fdst.Root(), mountpoint)
                if err != nil {
                    log.Fatalf("Fatal error: %v", err)
                }
            }

            // Show stats if the user has specifically requested them
            if cmd.ShowStats() {
                defer cmd.StartStats()()
            }

            // Inform about ignored flags on Windows,
            // and if not on Windows and not --allow-non-empty flag is used
            // verify that mountpoint is empty.
            if runtime.GOOS == "windows" {
                if opt.AllowNonEmpty {
                    fs.Logf(nil, "--allow-non-empty flag does nothing on Windows")
                }
                if opt.AllowRoot {
                    fs.Logf(nil, "--allow-root flag does nothing on Windows")
                }
                if opt.AllowOther {
                    fs.Logf(nil, "--allow-other flag does nothing on Windows")
                }
            } else if !opt.AllowNonEmpty {
                err := checkMountEmpty(mountpoint)
                if err != nil {
                    log.Fatalf("Fatal error: %v", err)
                }
            mnt := &MountPoint{
                MountFn:    mount,
                MountPoint: args[1],
                Fs:         cmd.NewFsDir(args),
                MountOpt:   Opt,
                VFSOpt:     vfsflags.Opt,
            }

            // Work out the volume name, removing special
            // characters from it if necessary
            if opt.VolumeName == "" {
                opt.VolumeName = fdst.Name() + ":" + fdst.Root()
            daemonized, err := mnt.Mount()
            if !daemonized && err == nil {
                err = mnt.Wait()
            }
            opt.VolumeName = strings.Replace(opt.VolumeName, ":", " ", -1)
            opt.VolumeName = strings.Replace(opt.VolumeName, "/", " ", -1)
            opt.VolumeName = strings.TrimSpace(opt.VolumeName)
            if runtime.GOOS == "windows" && len(opt.VolumeName) > 32 {
                opt.VolumeName = opt.VolumeName[:32]
            }

            // Start background task if --background is specified
            if opt.Daemon {
                daemonized := startBackgroundMode()
                if daemonized {
                    return
                }
            }

            VFS := vfs.New(fdst, &vfsflags.Opt)
            err := Mount(VFS, mountpoint, mount, &opt)
            if err != nil {
                log.Fatalf("Fatal error: %v", err)
            }
@@ -541,49 +166,94 @@ When |--vfs-read-chunk-size-limit 500M| is specified, the result would be
    return commandDefinition
}

// ClipBlocks clips the blocks pointed to the OS max
func ClipBlocks(b *uint64) {
    var max uint64
    switch runtime.GOOS {
    case "windows":
        if runtime.GOARCH == "386" {
            max = (1 << 32) - 1
        } else {
            max = (1 << 43) - 1
// Mount the remote at mountpoint
func (m *MountPoint) Mount() (daemonized bool, err error) {
    if err = m.CheckOverlap(); err != nil {
        return false, err
    }

    if err = m.CheckAllowings(); err != nil {
        return false, err
    }
    m.SetVolumeName(m.MountOpt.VolumeName)

    // Start background task if --daemon is specified
    if m.MountOpt.Daemon {
        daemonized = startBackgroundMode()
        if daemonized {
            return true, nil
        }
    case "darwin":
        // OSX FUSE only supports 32 bit number of blocks
        // https://github.com/osxfuse/osxfuse/issues/396
        max = (1 << 32) - 1
    default:
        // no clipping
        return
    }
    if *b > max {
        *b = max

    m.VFS = vfs.New(m.Fs, &m.VFSOpt)

    m.ErrChan, m.UnmountFn, err = m.MountFn(m.VFS, m.MountPoint, &m.MountOpt)
    if err != nil {
        return false, errors.Wrap(err, "failed to mount FUSE fs")
    }
    return false, nil
}

// Mount mounts the remote at mountpoint.
//
// If noModTime is set then it
func Mount(VFS *vfs.VFS, mountpoint string, mount MountFn, opt *Options) error {
    if opt == nil {
        opt = &DefaultOpt
// CheckOverlap checks that root doesn't overlap with mountpoint
func (m *MountPoint) CheckOverlap() error {
    name := m.Fs.Name()
    if name != "" && name != "local" {
        return nil
    }

    // Mount it
    errChan, unmount, err := mount(VFS, mountpoint, opt)
    if err != nil {
        return errors.Wrap(err, "failed to mount FUSE fs")
    rootAbs := absPath(m.Fs.Root())
    mountpointAbs := absPath(m.MountPoint)
    if strings.HasPrefix(rootAbs, mountpointAbs) || strings.HasPrefix(mountpointAbs, rootAbs) {
        const msg = "mount point %q and directory to be mounted %q mustn't overlap"
        return errors.Errorf(msg, m.MountPoint, m.Fs.Root())
    }
    return nil
}

// absPath is a helper function for MountPoint.CheckOverlap
func absPath(path string) string {
    if abs, err := filepath.EvalSymlinks(path); err == nil {
        path = abs
    }
    if abs, err := filepath.Abs(path); err == nil {
        path = abs
    }
    path = filepath.ToSlash(path)
    if !strings.HasSuffix(path, "/") {
        path += "/"
    }
    return path
}

// CheckAllowings informs about ignored flags on Windows. If not on Windows
// and not --allow-non-empty flag is used, verify that mountpoint is empty.
func (m *MountPoint) CheckAllowings() error {
    opt := &m.MountOpt
    if runtime.GOOS == "windows" {
        if opt.AllowNonEmpty {
            fs.Logf(nil, "--allow-non-empty flag does nothing on Windows")
        }
        if opt.AllowRoot {
            fs.Logf(nil, "--allow-root flag does nothing on Windows")
        }
        if opt.AllowOther {
            fs.Logf(nil, "--allow-other flag does nothing on Windows")
        }
        return nil
    }
    if !opt.AllowNonEmpty {
        return CheckMountEmpty(m.MountPoint)
    }
    return nil
}

// Wait for mount end
func (m *MountPoint) Wait() error {
    // Unmount on exit
    var finaliseOnce sync.Once
    finalise := func() {
        finaliseOnce.Do(func() {
            _ = sysdnotify.Stopping()
            _ = unmount()
            _ = m.UnmountFn()
        })
    }
    fnHandle := atexit.Register(finalise)
@@ -596,19 +266,20 @@ func Mount(VFS *vfs.VFS, mountpoint string, mount MountFn, opt *Options) error {

    // Reload VFS cache on SIGHUP
    sigHup := make(chan os.Signal, 1)
    signal.Notify(sigHup, syscall.SIGHUP)
    NotifyOnSigHup(sigHup)
    var err error

waitloop:
    for {
    waiting := true
    for waiting {
        select {
        // umount triggered outside the app
        case err = <-errChan:
            break waitloop
        case err = <-m.ErrChan:
            waiting = false
        // user sent SIGHUP to clear the cache
        case <-sigHup:
            root, err := VFS.Root()
            root, err := m.VFS.Root()
            if err != nil {
                fs.Errorf(VFS.Fs(), "Error reading root: %v", err)
                fs.Errorf(m.VFS.Fs(), "Error reading root: %v", err)
            } else {
                root.ForgetAll()
            }

@@ -620,6 +291,29 @@ waitloop:
    if err != nil {
        return errors.Wrap(err, "failed to umount FUSE fs")
    }

    return nil
}

// Unmount the specified mountpoint
func (m *MountPoint) Unmount() (err error) {
    return m.UnmountFn()
}

// SetVolumeName with sensible default
func (m *MountPoint) SetVolumeName(vol string) {
    if vol == "" {
        vol = m.Fs.Name() + ":" + m.Fs.Root()
    }
    m.MountOpt.SetVolumeName(vol)
}

// SetVolumeName removes special characters from volume name if necessary
func (opt *Options) SetVolumeName(vol string) {
    vol = strings.ReplaceAll(vol, ":", " ")
    vol = strings.ReplaceAll(vol, "/", " ")
    vol = strings.TrimSpace(vol)
    if runtime.GOOS == "windows" && len(vol) > 32 {
        vol = vol[:32]
    }
    opt.VolumeName = vol
}
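
For illustration, the sanitisation above maps a hypothetical volume name as
follows:

    // Hypothetical example of Options.SetVolumeName:
    var opt Options
    opt.SetVolumeName("gdrive:backup/photos")
    // opt.VolumeName == "gdrive backup photos"
    // (":" and "/" become spaces; on Windows the result is also
    // truncated to 32 characters)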

@@ -11,29 +11,33 @@ import (
    "github.com/rclone/rclone/fs"
    "github.com/rclone/rclone/fs/rc"
    "github.com/rclone/rclone/vfs"
    "github.com/rclone/rclone/vfs/vfscommon"
    "github.com/rclone/rclone/vfs/vfsflags"
)

// MountInfo defines the configuration for a mount
type MountInfo struct {
    unmountFn  UnmountFn
    MountPoint string    `json:"MountPoint"`
    MountedOn  time.Time `json:"MountedOn"`
    Fs         string    `json:"Fs"`
    MountOpt   *Options
    VFSOpt     *vfscommon.Options
}

var (
    // mutex to protect all the variables in this block
    mountMu sync.Mutex
    // Mount functions available
    mountFns = map[string]MountFn{}
    // Map of mounted path => MountInfo
    liveMounts = map[string]MountInfo{}
    liveMounts = map[string]*MountPoint{}
    // Supported mount types
    supportedMountTypes = []string{"mount", "cmount", "mount2"}
)

// ResolveMountMethod returns mount function by name
func ResolveMountMethod(mountType string) (string, MountFn) {
    if mountType != "" {
        return mountType, mountFns[mountType]
    }
    for _, mountType := range supportedMountTypes {
        if mountFns[mountType] != nil {
            return mountType, mountFns[mountType]
        }
    }
    return "", nil
}

// AddRc adds mount and unmount functionality to rc
func AddRc(mountUtilName string, mountFunction MountFn) {
    mountMu.Lock()
@@ -99,14 +103,12 @@ func mountRc(ctx context.Context, in rc.Params) (out rc.Params, err error) {
    mountMu.Lock()
    defer mountMu.Unlock()

    if err != nil || mountType == "" {
        if mountFns["mount"] != nil {
            mountType = "mount"
        } else if mountFns["cmount"] != nil {
            mountType = "cmount"
        } else if mountFns["mount2"] != nil {
            mountType = "mount2"
        }
    if err != nil {
        mountType = ""
    }
    mountType, mountFn := ResolveMountMethod(mountType)
    if mountFn == nil {
        return nil, errors.New("Mount Option specified is not registered, or is invalid")
    }

    // Get Fs.fs to be mounted from fs parameter in the params
@@ -115,28 +117,26 @@ func mountRc(ctx context.Context, in rc.Params) (out rc.Params, err error) {
        return nil, err
    }

    if mountFns[mountType] != nil {
        VFS := vfs.New(fdst, &vfsOpt)
        _, unmountFn, err := mountFns[mountType](VFS, mountPoint, &mountOpt)
        if err != nil {
            log.Printf("mount FAILED: %v", err)
            return nil, err
        }
        // Add mount to list if mount point was successfully created
        liveMounts[mountPoint] = MountInfo{
            unmountFn:  unmountFn,
            MountedOn:  time.Now(),
            Fs:         fdst.Name(),
            MountPoint: mountPoint,
            VFSOpt:     &vfsOpt,
            MountOpt:   &mountOpt,
        }

        fs.Debugf(nil, "Mount for %s created at %s using %s", fdst.String(), mountPoint, mountType)
        return nil, nil
    VFS := vfs.New(fdst, &vfsOpt)
    _, unmountFn, err := mountFn(VFS, mountPoint, &mountOpt)
    if err != nil {
        log.Printf("mount FAILED: %v", err)
        return nil, err
    }
    return nil, errors.New("Mount Option specified is not registered, or is invalid")

    // Add mount to list if mount point was successfully created
    liveMounts[mountPoint] = &MountPoint{
        MountPoint: mountPoint,
        MountedOn:  time.Now(),
        MountFn:    mountFn,
        UnmountFn:  unmountFn,
        MountOpt:   mountOpt,
        VFSOpt:     vfsOpt,
        Fs:         fdst,
    }

    fs.Debugf(nil, "Mount for %s created at %s using %s", fdst.String(), mountPoint, mountType)
    return nil, nil
}

func init() {
@@ -169,10 +169,14 @@ func unMountRc(_ context.Context, in rc.Params) (out rc.Params, err error) {
    }
    mountMu.Lock()
    defer mountMu.Unlock()
    err = performUnMount(mountPoint)
    if err != nil {
    mountInfo, found := liveMounts[mountPoint]
    if !found {
        return nil, errors.New("mount not found")
    }
    if err = mountInfo.Unmount(); err != nil {
        return nil, err
    }
    delete(liveMounts, mountPoint)
    return nil, nil
}

@@ -231,16 +235,34 @@ Eg
    })
}

// listMountsRc returns a list of current mounts
// MountInfo is a transitional structure for json marshaling
type MountInfo struct {
    Fs         string    `json:"Fs"`
    MountPoint string    `json:"MountPoint"`
    MountedOn  time.Time `json:"MountedOn"`
}

// listMountsRc returns a list of current mounts sorted by mount path
func listMountsRc(_ context.Context, in rc.Params) (out rc.Params, err error) {
    var mountTypes = []MountInfo{}
    mountMu.Lock()
    defer mountMu.Unlock()
    for _, a := range liveMounts {
        mountTypes = append(mountTypes, a)
    var keys []string
    for key := range liveMounts {
        keys = append(keys, key)
    }
    sort.Strings(keys)
    mountPoints := []MountInfo{}
    for _, k := range keys {
        m := liveMounts[k]
        info := MountInfo{
            Fs:         m.Fs.Name(),
            MountPoint: m.MountPoint,
            MountedOn:  m.MountedOn,
        }
        mountPoints = append(mountPoints, info)
    }
    return rc.Params{
        "mountPoints": mountTypes,
        "mountPoints": mountPoints,
    }, nil
}
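
As a usage illustration, with an rc server running you could list the active
mounts from the command line; the JSON below is hypothetical output following
the MountInfo fields above:

    rclone rc mount/listmounts
    {
        "mountPoints": [
            {
                "Fs": "gdrive",
                "MountPoint": "/mnt/gdrive",
                "MountedOn": "2021-06-01T12:00:00Z"
            }
        ]
    }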

@@ -265,27 +287,12 @@ Eg
func unmountAll(_ context.Context, in rc.Params) (out rc.Params, err error) {
    mountMu.Lock()
    defer mountMu.Unlock()
    for key, mountInfo := range liveMounts {
        err = performUnMount(mountInfo.MountPoint)
        if err != nil {
            fs.Debugf(nil, "Couldn't unmount : %s", key)
    for mountPoint, mountInfo := range liveMounts {
        if err = mountInfo.Unmount(); err != nil {
            fs.Debugf(nil, "Couldn't unmount : %s", mountPoint)
            return nil, err
        }
        delete(liveMounts, mountPoint)
    }
    return nil, nil
}

// performUnMount unmounts the specified mountPoint
func performUnMount(mountPoint string) (err error) {
    mountInfo, ok := liveMounts[mountPoint]
    if ok {
        err := mountInfo.unmountFn()
        if err != nil {
            return err
        }
        delete(liveMounts, mountPoint)
    } else {
        return errors.New("mount not found")
    }
    return nil
}

@@ -13,6 +13,7 @@ import (
    _ "github.com/rclone/rclone/cmd/cmount"
    _ "github.com/rclone/rclone/cmd/mount"
    _ "github.com/rclone/rclone/cmd/mount2"
    "github.com/rclone/rclone/cmd/mountlib"
    "github.com/rclone/rclone/fs/config/configfile"
    "github.com/rclone/rclone/fs/rc"
    "github.com/stretchr/testify/assert"

@@ -95,6 +96,22 @@ func TestRc(t *testing.T) {
        assert.Equal(t, os.FileMode(0400), fi.Mode())
    }

    // check mount point list
    checkMountList := func() []mountlib.MountInfo {
        listCall := rc.Calls.Get("mount/listmounts")
        require.NotNil(t, listCall)
        listReply, err := listCall.Fn(ctx, rc.Params{})
        require.NoError(t, err)
        mountPointsReply, err := listReply.Get("mountPoints")
        require.NoError(t, err)
        mountPoints, ok := mountPointsReply.([]mountlib.MountInfo)
        require.True(t, ok)
        return mountPoints
    }
    mountPoints := checkMountList()
    require.Equal(t, 1, len(mountPoints))
    require.Equal(t, mountPoint, mountPoints[0].MountPoint)

    // FIXME the OS sometimes appears to be using the mount
    // immediately after it appears so wait a moment
    time.Sleep(100 * time.Millisecond)

@@ -102,6 +119,7 @@ func TestRc(t *testing.T) {
    t.Run("Unmount", func(t *testing.T) {
        _, err := unmount.Fn(ctx, in)
        require.NoError(t, err)
        assert.Equal(t, 0, len(checkMountList()))
    })
    })
}

cmd/mountlib/sighup.go (new file)
@@ -0,0 +1,14 @@
// +build !plan9,!js

package mountlib

import (
    "os"
    "os/signal"
    "syscall"
)

// NotifyOnSigHup makes SIGHUP notify given channel on supported systems
func NotifyOnSigHup(sighupChan chan os.Signal) {
    signal.Notify(sighupChan, syscall.SIGHUP)
}

cmd/mountlib/sighup_unsupported.go (new file)
@@ -0,0 +1,10 @@
// +build plan9 js

package mountlib

import (
    "os"
)

// NotifyOnSigHup makes SIGHUP notify given channel on supported systems
func NotifyOnSigHup(sighupChan chan os.Signal) {}

cmd/mountlib/utils.go (new file)
@@ -0,0 +1,55 @@
package mountlib

import (
    "io"
    "os"
    "runtime"

    "github.com/pkg/errors"
    "github.com/rclone/rclone/fs"
)

// CheckMountEmpty checks if folder is empty
func CheckMountEmpty(mountpoint string) error {
    fp, fpErr := os.Open(mountpoint)
    if fpErr != nil {
        return errors.Wrap(fpErr, "Can not open: "+mountpoint)
    }
    defer fs.CheckClose(fp, &fpErr)

    _, fpErr = fp.Readdirnames(1)

    if fpErr == io.EOF {
        return nil
    }

    msg := "Directory is not empty: " + mountpoint + " If you want to mount it anyway use: --allow-non-empty option"
    if fpErr == nil {
        return errors.New(msg)
    }
    return errors.Wrap(fpErr, msg)
}

// ClipBlocks clips the blocks pointed to the OS max
func ClipBlocks(b *uint64) {
    var max uint64
    switch runtime.GOOS {
    case "windows":
        if runtime.GOARCH == "386" {
            max = (1 << 32) - 1
        } else {
            max = (1 << 43) - 1
        }
    case "darwin":
        // OSX FUSE only supports 32 bit number of blocks
        // https://github.com/osxfuse/osxfuse/issues/396
        max = (1 << 32) - 1
    default:
        // no clipping
        return
    }
    if *b > max {
        *b = max
    }
}
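
The Readdirnames(1) call above is the idiomatic cheap emptiness test in Go:
it requests at most one directory entry and treats io.EOF as "empty". A
standalone sketch of the same idiom (assumes the io and os imports):

    // isEmptyDir reports whether dir has no entries, using the same
    // Readdirnames(1) / io.EOF idiom as CheckMountEmpty above.
    func isEmptyDir(dir string) (bool, error) {
        f, err := os.Open(dir)
        if err != nil {
            return false, err
        }
        defer f.Close()
        _, err = f.Readdirnames(1)
        if err == io.EOF {
            return true, nil
        }
        return false, err
    }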

cmd/serve/docker/api.go (new file)
@@ -0,0 +1,175 @@
package docker

import (
    "encoding/json"
    "net/http"

    "github.com/go-chi/chi/v5"
    "github.com/rclone/rclone/fs"
)

const (
    contentType  = "application/vnd.docker.plugins.v1.1+json"
    activatePath = "/Plugin.Activate"
    createPath   = "/VolumeDriver.Create"
    getPath      = "/VolumeDriver.Get"
    listPath     = "/VolumeDriver.List"
    removePath   = "/VolumeDriver.Remove"
    pathPath     = "/VolumeDriver.Path"
    mountPath    = "/VolumeDriver.Mount"
    unmountPath  = "/VolumeDriver.Unmount"
    capsPath     = "/VolumeDriver.Capabilities"
)

// CreateRequest is the structure that docker's requests are deserialized to.
type CreateRequest struct {
    Name    string
    Options map[string]string `json:"Opts,omitempty"`
}

// RemoveRequest structure for a volume remove request
type RemoveRequest struct {
    Name string
}

// MountRequest structure for a volume mount request
type MountRequest struct {
    Name string
    ID   string
}

// MountResponse structure for a volume mount response
type MountResponse struct {
    Mountpoint string
}

// UnmountRequest structure for a volume unmount request
type UnmountRequest struct {
    Name string
    ID   string
}

// PathRequest structure for a volume path request
type PathRequest struct {
    Name string
}

// PathResponse structure for a volume path response
type PathResponse struct {
    Mountpoint string
}

// GetRequest structure for a volume get request
type GetRequest struct {
    Name string
}

// GetResponse structure for a volume get response
type GetResponse struct {
    Volume *VolInfo
}

// ListResponse structure for a volume list response
type ListResponse struct {
    Volumes []*VolInfo
}

// CapabilitiesResponse structure for a volume capability response
type CapabilitiesResponse struct {
    Capabilities Capability
}

// Capability represents the list of capabilities a volume driver can return
type Capability struct {
    Scope string
}

// ErrorResponse is a formatted error message that docker can understand
type ErrorResponse struct {
    Err string
}

func newRouter(drv *Driver) http.Handler {
    r := chi.NewRouter()
    r.Post(activatePath, func(w http.ResponseWriter, r *http.Request) {
        res := map[string]interface{}{
            "Implements": []string{"VolumeDriver"},
        }
        encodeResponse(w, res, nil, activatePath)
    })
    r.Post(createPath, func(w http.ResponseWriter, r *http.Request) {
        var req CreateRequest
        if decodeRequest(w, r, &req) {
            err := drv.Create(&req)
            encodeResponse(w, nil, err, createPath)
        }
    })
    r.Post(removePath, func(w http.ResponseWriter, r *http.Request) {
        var req RemoveRequest
        if decodeRequest(w, r, &req) {
            err := drv.Remove(&req)
            encodeResponse(w, nil, err, removePath)
        }
    })
    r.Post(mountPath, func(w http.ResponseWriter, r *http.Request) {
        var req MountRequest
        if decodeRequest(w, r, &req) {
            res, err := drv.Mount(&req)
            encodeResponse(w, res, err, mountPath)
        }
    })
    r.Post(pathPath, func(w http.ResponseWriter, r *http.Request) {
        var req PathRequest
        if decodeRequest(w, r, &req) {
            res, err := drv.Path(&req)
            encodeResponse(w, res, err, pathPath)
        }
    })
    r.Post(getPath, func(w http.ResponseWriter, r *http.Request) {
        var req GetRequest
        if decodeRequest(w, r, &req) {
            res, err := drv.Get(&req)
            encodeResponse(w, res, err, getPath)
        }
    })
    r.Post(unmountPath, func(w http.ResponseWriter, r *http.Request) {
        var req UnmountRequest
        if decodeRequest(w, r, &req) {
            err := drv.Unmount(&req)
            encodeResponse(w, nil, err, unmountPath)
        }
    })
    r.Post(listPath, func(w http.ResponseWriter, r *http.Request) {
        res, err := drv.List()
        encodeResponse(w, res, err, listPath)
    })
    r.Post(capsPath, func(w http.ResponseWriter, r *http.Request) {
        res := &CapabilitiesResponse{
            Capabilities: Capability{Scope: pluginScope},
        }
        encodeResponse(w, res, nil, capsPath)
    })
    return r
}

func decodeRequest(w http.ResponseWriter, r *http.Request, req interface{}) bool {
|
||||
if err := json.NewDecoder(r.Body).Decode(req); err != nil {
|
||||
http.Error(w, err.Error(), http.StatusBadRequest)
|
||||
return false
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
func encodeResponse(w http.ResponseWriter, res interface{}, err error, path string) {
|
||||
w.Header().Set("Content-Type", contentType)
|
||||
if err != nil {
|
||||
fs.Debugf(path, "Request returned error: %v", err)
|
||||
w.WriteHeader(http.StatusInternalServerError)
|
||||
res = &ErrorResponse{Err: err.Error()}
|
||||
} else if res == nil {
|
||||
res = struct{}{}
|
||||
}
|
||||
if err = json.NewEncoder(w).Encode(res); err != nil {
|
||||
fs.Debugf(path, "Response encoding failed: %v", err)
|
||||
}
|
||||
}
|
||||
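The router above pins down a small JSON-over-HTTP protocol: every endpoint is a POST of a JSON body to /VolumeDriver.<Method> (or /Plugin.Activate), answered with JSON or an ErrorResponse. A minimal sketch of the client side, assuming the plugin listens on the default unix socket path (the socket path and the URL host are illustrative assumptions, not from the source):

package main

import (
	"bytes"
	"context"
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
)

func main() {
	// Dial the plugin's unix socket instead of a TCP host
	// (the path below is an assumption matching the default).
	tr := &http.Transport{
		DialContext: func(_ context.Context, _, _ string) (net.Conn, error) {
			return net.Dial("unix", "/run/docker/plugins/rclone.sock")
		},
	}
	cli := &http.Client{Transport: tr}

	// The host in the URL is ignored once DialContext overrides the dial.
	body := bytes.NewBufferString(`{}`)
	res, err := cli.Post("http://plugin/VolumeDriver.List", "application/json", body)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
	out, _ := ioutil.ReadAll(res.Body)
	// Expect {"Volumes": [...]} on success or an ErrorResponse JSON otherwise.
	fmt.Printf("%d %s\n", res.StatusCode, out)
}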
31
cmd/serve/docker/contrib/plugin/Dockerfile
Normal file
@@ -0,0 +1,31 @@
ARG BASE_IMAGE=rclone/rclone:latest
ARG BUILD_PLATFORM=linux/amd64
ARG TARGET_PLATFORM=linux/amd64

# temporary build image
FROM --platform=${BUILD_PLATFORM} golang:alpine AS BUILD_ENV

COPY . /src
WORKDIR /src

RUN apk add --no-cache make git bash && \
    CGO_ENABLED=0 \
    GOARCH=$(echo ${TARGET_PLATFORM} | cut -d '/' -f2) \
    make rclone

# plugin image
FROM ${BASE_IMAGE}

COPY --from=BUILD_ENV /src/rclone /usr/local/bin/rclone

RUN mkdir -p /data/config /data/cache /mnt \
    && /usr/local/bin/rclone version

ENV RCLONE_CONFIG=/data/config/rclone.conf
ENV RCLONE_CACHE_DIR=/data/cache
ENV RCLONE_BASE_DIR=/mnt
ENV RCLONE_VERBOSE=0

WORKDIR /data
ENTRYPOINT ["/usr/local/bin/rclone"]
CMD ["serve", "docker"]
66
cmd/serve/docker/contrib/plugin/config.json
Normal file
@@ -0,0 +1,66 @@
{
  "description": "Rclone volume plugin for Docker",
  "documentation": "https://rclone.org/",
  "interface": {
    "socket": "rclone.sock",
    "types": ["docker.volumedriver/1.0"]
  },
  "linux": {
    "capabilities": [
      "CAP_SYS_ADMIN"
    ],
    "devices": [
      {
        "path": "/dev/fuse"
      }
    ]
  },
  "network": {
    "type": "host"
  },
  "entrypoint": ["/usr/local/bin/rclone", "serve", "docker"],
  "workdir": "/data",
  "args": {
    "name": "args",
    "value": [],
    "settable": ["value"]
  },
  "env": [
    {
      "name": "RCLONE_VERBOSE",
      "value": "0",
      "settable": ["value"]
    },
    {
      "name": "RCLONE_CONFIG",
      "value": "/data/config/rclone.conf"
    },
    {
      "name": "RCLONE_CACHE_DIR",
      "value": "/data/cache"
    },
    {
      "name": "RCLONE_BASE_DIR",
      "value": "/mnt"
    }
  ],
  "mounts": [
    {
      "name": "config",
      "source": "/var/lib/docker-plugins/rclone/config",
      "destination": "/data/config",
      "type": "bind",
      "options": ["rbind"],
      "settable": ["source"]
    },
    {
      "name": "cache",
      "source": "/var/lib/docker-plugins/rclone/cache",
      "destination": "/data/cache",
      "type": "bind",
      "options": ["rbind"],
      "settable": ["source"]
    }
  ],
  "propagatedMount": "/mnt"
}
@@ -0,0 +1,19 @@
[Unit]
Description=Docker Volume Plugin for rclone
Requires=docker.service
Before=docker.service
After=network.target
Requires=docker-volume-rclone.socket
After=docker-volume-rclone.socket

[Service]
ExecStart=/usr/bin/rclone serve docker
ExecStartPre=/bin/mkdir -p /var/lib/docker-volumes/rclone
ExecStartPre=/bin/mkdir -p /var/lib/docker-plugins/rclone/config
ExecStartPre=/bin/mkdir -p /var/lib/docker-plugins/rclone/cache
Environment=RCLONE_CONFIG=/var/lib/docker-plugins/rclone/config/rclone.conf
Environment=RCLONE_CACHE_DIR=/var/lib/docker-plugins/rclone/cache
Environment=RCLONE_VERBOSE=1

[Install]
WantedBy=multi-user.target
@@ -0,0 +1,8 @@
[Unit]
Description=Docker Volume Plugin for rclone

[Socket]
ListenStream=/run/docker/plugins/rclone.sock

[Install]
WantedBy=sockets.target
72
cmd/serve/docker/docker.go
Normal file
@@ -0,0 +1,72 @@
// Package docker serves a remote suitable for use with docker volume api
package docker

import (
	"context"
	"path/filepath"
	"strings"
	"syscall"

	"github.com/spf13/cobra"

	"github.com/rclone/rclone/cmd"
	"github.com/rclone/rclone/cmd/mountlib"
	"github.com/rclone/rclone/fs/config/flags"
	"github.com/rclone/rclone/vfs"
	"github.com/rclone/rclone/vfs/vfsflags"
)

var (
	pluginName  = "rclone"
	pluginScope = "local"
	baseDir     = "/var/lib/docker-volumes/rclone"
	sockDir     = "/run/docker/plugins"
	defSpecDir  = "/etc/docker/plugins"
	stateFile   = "docker-plugin.state"
	socketAddr  = "" // TCP listening address or empty string for Unix socket
	socketGid   = syscall.Getgid()
	canPersist  = false // allows writing to config file
	forgetState = false
	noSpec      = false
)

func init() {
	cmdFlags := Command.Flags()
	// Add command specific flags
	flags.StringVarP(cmdFlags, &baseDir, "base-dir", "", baseDir, "base directory for volumes")
	flags.StringVarP(cmdFlags, &socketAddr, "socket-addr", "", socketAddr, "<host:port> or absolute path (default: /run/docker/plugins/rclone.sock)")
	flags.IntVarP(cmdFlags, &socketGid, "socket-gid", "", socketGid, "GID for unix socket (default: current process GID)")
	flags.BoolVarP(cmdFlags, &forgetState, "forget-state", "", forgetState, "skip restoring previous state")
	flags.BoolVarP(cmdFlags, &noSpec, "no-spec", "", noSpec, "do not write spec file")
	// Add common mount/vfs flags
	mountlib.AddFlags(cmdFlags)
	vfsflags.AddFlags(cmdFlags)
}

// Command definition for cobra
var Command = &cobra.Command{
	Use:   "docker",
	Short: `Serve any remote on docker's volume plugin API.`,
	Long:  strings.ReplaceAll(longHelp, "|", "`") + vfs.Help,

	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(0, 0, command, args)
		cmd.Run(false, false, command, func() error {
			ctx := context.Background()
			drv, err := NewDriver(ctx, baseDir, nil, nil, false, forgetState)
			if err != nil {
				return err
			}
			srv := NewServer(drv)
			if socketAddr == "" {
				// Listen on unix socket at /run/docker/plugins/<pluginName>.sock
				return srv.ServeUnix(pluginName, socketGid)
			}
			if filepath.IsAbs(socketAddr) {
				// Listen on unix socket at given path
				return srv.ServeUnix(socketAddr, socketGid)
			}
			return srv.ServeTCP(socketAddr, "", nil, noSpec)
		})
	},
}
414
cmd/serve/docker/docker_test.go
Normal file
@@ -0,0 +1,414 @@
package docker_test

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
	"os"
	"path/filepath"
	"runtime"
	"strings"
	"testing"
	"time"

	"github.com/rclone/rclone/cmd/mountlib"
	"github.com/rclone/rclone/cmd/serve/docker"
	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config"
	"github.com/rclone/rclone/fstest"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	_ "github.com/rclone/rclone/backend/local"
	_ "github.com/rclone/rclone/backend/memory"
	_ "github.com/rclone/rclone/cmd/cmount"
	_ "github.com/rclone/rclone/cmd/mount"
)

func initialise(ctx context.Context, t *testing.T) (string, fs.Fs) {
	fstest.Initialise()

	// Make test cache directory
	testDir, err := fstest.LocalRemote()
	require.NoError(t, err)
	err = os.MkdirAll(testDir, 0755)
	require.NoError(t, err)

	// Make test file system
	testFs, err := fs.NewFs(ctx, testDir)
	require.NoError(t, err)
	return testDir, testFs
}

func assertErrorContains(t *testing.T, err error, errString string, msgAndArgs ...interface{}) {
	assert.Error(t, err)
	if err != nil {
		assert.Contains(t, err.Error(), errString, msgAndArgs...)
	}
}

func assertVolumeInfo(t *testing.T, v *docker.VolInfo, name, path string) {
	assert.Equal(t, name, v.Name)
	assert.Equal(t, path, v.Mountpoint)
	assert.NotEmpty(t, v.CreatedAt)
	_, err := time.Parse(time.RFC3339, v.CreatedAt)
	assert.NoError(t, err)
}

func TestDockerPluginLogic(t *testing.T) {
	ctx := context.Background()
	oldCacheDir := config.CacheDir
	testDir, testFs := initialise(ctx, t)
	config.CacheDir = testDir
	defer func() {
		config.CacheDir = oldCacheDir
		if !t.Failed() {
			fstest.Purge(testFs)
			_ = os.RemoveAll(testDir)
		}
	}()

	// Create dummy volume driver
	drv, err := docker.NewDriver(ctx, testDir, nil, nil, true, true)
	require.NoError(t, err)
	require.NotNil(t, drv)

	// 1st volume request
	volReq := &docker.CreateRequest{
		Name:    "vol1",
		Options: docker.VolOpts{},
	}
	assertErrorContains(t, drv.Create(volReq), "volume must have either remote or backend")

	volReq.Options["remote"] = testDir
	assert.NoError(t, drv.Create(volReq))
	path1 := filepath.Join(testDir, "vol1")

	assert.ErrorIs(t, drv.Create(volReq), docker.ErrVolumeExists)

	getReq := &docker.GetRequest{Name: "vol1"}
	getRes, err := drv.Get(getReq)
	assert.NoError(t, err)
	require.NotNil(t, getRes)
	assertVolumeInfo(t, getRes.Volume, "vol1", path1)

	// 2nd volume request
	volReq.Name = "vol2"
	assert.NoError(t, drv.Create(volReq))
	path2 := filepath.Join(testDir, "vol2")

	listRes, err := drv.List()
	require.NoError(t, err)
	require.Equal(t, 2, len(listRes.Volumes))
	assertVolumeInfo(t, listRes.Volumes[0], "vol1", path1)
	assertVolumeInfo(t, listRes.Volumes[1], "vol2", path2)

	// Try prohibited volume options
	volReq.Name = "vol99"
	volReq.Options["remote"] = testDir
	volReq.Options["type"] = "memory"
	err = drv.Create(volReq)
	assertErrorContains(t, err, "volume must have either remote or backend")

	volReq.Options["persist"] = "WrongBoolean"
	err = drv.Create(volReq)
	assertErrorContains(t, err, "cannot parse option")

	volReq.Options["persist"] = "true"
	delete(volReq.Options, "remote")
	err = drv.Create(volReq)
	assertErrorContains(t, err, "persist remotes is prohibited")

	volReq.Options["persist"] = "false"
	volReq.Options["memory-option-broken"] = "some-value"
	err = drv.Create(volReq)
	assertErrorContains(t, err, "unsupported backend option")

	getReq.Name = "vol99"
	getRes, err = drv.Get(getReq)
	assert.Error(t, err)
	assert.Nil(t, getRes)

	// Test mount requests
	mountReq := &docker.MountRequest{
		Name: "vol2",
		ID:   "id1",
	}
	mountRes, err := drv.Mount(mountReq)
	assert.NoError(t, err)
	require.NotNil(t, mountRes)
	assert.Equal(t, path2, mountRes.Mountpoint)

	mountRes, err = drv.Mount(mountReq)
	assert.Error(t, err)
	assert.Nil(t, mountRes)
	assertErrorContains(t, err, "already mounted by this id")

	mountReq.ID = "id2"
	mountRes, err = drv.Mount(mountReq)
	assert.NoError(t, err)
	require.NotNil(t, mountRes)
	assert.Equal(t, path2, mountRes.Mountpoint)

	unmountReq := &docker.UnmountRequest{
		Name: "vol2",
		ID:   "id1",
	}
	err = drv.Unmount(unmountReq)
	assert.NoError(t, err)

	err = drv.Unmount(unmountReq)
	assert.Error(t, err)
	assertErrorContains(t, err, "not mounted by this id")

	// Simulate plugin restart
	drv2, err := docker.NewDriver(ctx, testDir, nil, nil, true, false)
	assert.NoError(t, err)
	require.NotNil(t, drv2)

	// New plugin instance should pick up the saved state
	listRes, err = drv2.List()
	require.NoError(t, err)
	require.Equal(t, 2, len(listRes.Volumes))
	assertVolumeInfo(t, listRes.Volumes[0], "vol1", path1)
	assertVolumeInfo(t, listRes.Volumes[1], "vol2", path2)

	rmReq := &docker.RemoveRequest{Name: "vol2"}
	err = drv.Remove(rmReq)
	assertErrorContains(t, err, "volume is in use")

	unmountReq.ID = "id1"
	err = drv.Unmount(unmountReq)
	assert.Error(t, err)
	assertErrorContains(t, err, "not mounted by this id")

	unmountReq.ID = "id2"
	err = drv.Unmount(unmountReq)
	assert.NoError(t, err)

	err = drv.Unmount(unmountReq)
	assert.EqualError(t, err, "volume is not mounted")

	err = drv.Remove(rmReq)
	assert.NoError(t, err)
}

const (
	httpTimeout = 2 * time.Second
	tempDelay   = 10 * time.Millisecond
)

type APIClient struct {
	t    *testing.T
	cli  *http.Client
	host string
}

func newAPIClient(t *testing.T, host, unixPath string) *APIClient {
	tr := &http.Transport{
		MaxIdleConns:       1,
		IdleConnTimeout:    httpTimeout,
		DisableCompression: true,
	}

	if unixPath != "" {
		tr.DialContext = func(_ context.Context, _, _ string) (net.Conn, error) {
			return net.Dial("unix", unixPath)
		}
	} else {
		dialer := &net.Dialer{
			Timeout:   httpTimeout,
			KeepAlive: httpTimeout,
		}
		tr.DialContext = dialer.DialContext
	}

	cli := &http.Client{
		Transport: tr,
		Timeout:   httpTimeout,
	}
	return &APIClient{
		t:    t,
		cli:  cli,
		host: host,
	}
}

func (a *APIClient) request(path string, in, out interface{}, wantErr bool) {
	t := a.t
	var (
		dataIn  []byte
		dataOut []byte
		err     error
	)

	realm := "VolumeDriver"
	if path == "Activate" {
		realm = "Plugin"
	}
	url := fmt.Sprintf("http://%s/%s.%s", a.host, realm, path)

	if str, isString := in.(string); isString {
		dataIn = []byte(str)
	} else {
		dataIn, err = json.Marshal(in)
		require.NoError(t, err)
	}
	fs.Logf(path, "<-- %s", dataIn)

	req, err := http.NewRequest("POST", url, bytes.NewBuffer(dataIn))
	require.NoError(t, err)
	req.Header.Set("Content-Type", "application/json")

	res, err := a.cli.Do(req)
	require.NoError(t, err)

	wantStatus := http.StatusOK
	if wantErr {
		wantStatus = http.StatusInternalServerError
	}
	assert.Equal(t, wantStatus, res.StatusCode)

	dataOut, err = ioutil.ReadAll(res.Body)
	require.NoError(t, err)
	err = res.Body.Close()
	require.NoError(t, err)

	if strPtr, isString := out.(*string); isString || wantErr {
		require.True(t, isString, "must use string for error response")
		if wantErr {
			var errRes docker.ErrorResponse
			err = json.Unmarshal(dataOut, &errRes)
			require.NoError(t, err)
			*strPtr = errRes.Err
		} else {
			*strPtr = strings.TrimSpace(string(dataOut))
		}
	} else {
		err = json.Unmarshal(dataOut, out)
		require.NoError(t, err)
	}
	fs.Logf(path, "--> %s", dataOut)
	time.Sleep(tempDelay)
}

func testMountAPI(t *testing.T, sockAddr string) {
	if _, mountFn := mountlib.ResolveMountMethod(""); mountFn == nil {
		t.Skip("Test requires working mount command")
	}

	ctx := context.Background()
	oldCacheDir := config.CacheDir
	testDir, testFs := initialise(ctx, t)
	config.CacheDir = testDir
	defer func() {
		config.CacheDir = oldCacheDir
		if !t.Failed() {
			fstest.Purge(testFs)
			_ = os.RemoveAll(testDir)
		}
	}()

	// Prepare API client
	var cli *APIClient
	var unixPath string
	if sockAddr != "" {
		cli = newAPIClient(t, sockAddr, "")
	} else {
		unixPath = filepath.Join(testDir, "rclone.sock")
		cli = newAPIClient(t, "localhost", unixPath)
	}

	// Create mounting volume driver and listen for requests
	drv, err := docker.NewDriver(ctx, testDir, nil, nil, false, true)
	require.NoError(t, err)
	require.NotNil(t, drv)
	defer drv.Exit()

	srv := docker.NewServer(drv)
	go func() {
		var errServe error
		if unixPath != "" {
			errServe = srv.ServeUnix(unixPath, os.Getgid())
		} else {
			errServe = srv.ServeTCP(sockAddr, testDir, nil, false)
		}
		assert.ErrorIs(t, errServe, http.ErrServerClosed)
	}()
	defer func() {
		err := srv.Shutdown(ctx)
		assert.NoError(t, err)
		fs.Logf(nil, "Server stopped")
		time.Sleep(tempDelay)
	}()
	time.Sleep(tempDelay) // Let server start

	// Run test sequence
	path1 := filepath.Join(testDir, "path1")
	require.NoError(t, os.MkdirAll(path1, 0755))
	mount1 := filepath.Join(testDir, "vol1")
	res := ""

	cli.request("Activate", "{}", &res, false)
	assert.Contains(t, res, `"VolumeDriver"`)

	createReq := docker.CreateRequest{
		Name:    "vol1",
		Options: docker.VolOpts{"remote": path1},
	}
	cli.request("Create", createReq, &res, false)
	assert.Equal(t, "{}", res)
	cli.request("Create", createReq, &res, true)
	assert.Contains(t, res, "volume already exists")

	mountReq := docker.MountRequest{Name: "vol1", ID: "id1"}
	var mountRes docker.MountResponse
	cli.request("Mount", mountReq, &mountRes, false)
	assert.Equal(t, mount1, mountRes.Mountpoint)
	cli.request("Mount", mountReq, &res, true)
	assert.Contains(t, res, "already mounted by this id")

	removeReq := docker.RemoveRequest{Name: "vol1"}
	cli.request("Remove", removeReq, &res, true)
	assert.Contains(t, res, "volume is in use")

	text := []byte("banana")
	err = ioutil.WriteFile(filepath.Join(mount1, "txt"), text, 0644)
	assert.NoError(t, err)
	time.Sleep(tempDelay)

	text2, err := ioutil.ReadFile(filepath.Join(path1, "txt"))
	assert.NoError(t, err)
	assert.Equal(t, text, text2)

	unmountReq := docker.UnmountRequest{Name: "vol1", ID: "id1"}
	cli.request("Unmount", unmountReq, &res, false)
	assert.Equal(t, "{}", res)
	cli.request("Unmount", unmountReq, &res, true)
	assert.Equal(t, "volume is not mounted", res)

	cli.request("Remove", removeReq, &res, false)
	assert.Equal(t, "{}", res)
	cli.request("Remove", removeReq, &res, true)
	assert.Equal(t, "volume not found", res)

	var listRes docker.ListResponse
	cli.request("List", "{}", &listRes, false)
	assert.Empty(t, listRes.Volumes)
}

func TestDockerPluginMountTCP(t *testing.T) {
	testMountAPI(t, "localhost:53789")
}

func TestDockerPluginMountUnix(t *testing.T) {
	if runtime.GOOS != "linux" {
		t.Skip("Test is Linux-only")
	}
	testMountAPI(t, "")
}
360
cmd/serve/docker/driver.go
Normal file
@@ -0,0 +1,360 @@
package docker

import (
	"context"
	"encoding/json"
	"io/ioutil"
	"os"
	"path/filepath"
	"reflect"
	"sort"
	"sync"

	sysdnotify "github.com/iguanesolutions/go-systemd/v5/notify"
	"github.com/pkg/errors"

	"github.com/rclone/rclone/cmd/mountlib"
	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config"
	"github.com/rclone/rclone/lib/atexit"
	"github.com/rclone/rclone/vfs/vfscommon"
	"github.com/rclone/rclone/vfs/vfsflags"
)

// Driver implements docker driver api
type Driver struct {
	root      string
	volumes   map[string]*Volume
	statePath string
	dummy     bool // disables real mounting
	mntOpt    mountlib.Options
	vfsOpt    vfscommon.Options
	mu        sync.Mutex
	exitOnce  sync.Once
	hupChan   chan os.Signal
	monChan   chan bool // exit if true, refresh if false
}

// NewDriver makes a new docker driver
func NewDriver(ctx context.Context, root string, mntOpt *mountlib.Options, vfsOpt *vfscommon.Options, dummy, forgetState bool) (*Driver, error) {
	// setup directories
	cacheDir, err := filepath.Abs(config.CacheDir)
	if err != nil {
		return nil, errors.Wrap(err, "failed to make --cache-dir absolute")
	}
	err = os.MkdirAll(cacheDir, 0700)
	if err != nil {
		return nil, errors.Wrapf(err, "failed to create cache directory: %s", cacheDir)
	}

	err = os.MkdirAll(root, 0755)
	if err != nil {
		return nil, errors.Wrapf(err, "failed to create mount root: %s", root)
	}

	// setup driver state
	if mntOpt == nil {
		mntOpt = &mountlib.Opt
	}
	if vfsOpt == nil {
		vfsOpt = &vfsflags.Opt
	}
	drv := &Driver{
		root:      root,
		statePath: filepath.Join(cacheDir, stateFile),
		volumes:   map[string]*Volume{},
		mntOpt:    *mntOpt,
		vfsOpt:    *vfsOpt,
		dummy:     dummy,
	}
	drv.mntOpt.Daemon = false

	// restore from saved state
	if !forgetState {
		if err = drv.restoreState(ctx); err != nil {
			return nil, errors.Wrap(err, "failed to restore state")
		}
	}

	// start mount monitoring
	drv.hupChan = make(chan os.Signal, 1)
	drv.monChan = make(chan bool, 1)
	mountlib.NotifyOnSigHup(drv.hupChan)
	go drv.monitor()

	// unmount all volumes on exit
	atexit.Register(func() {
		drv.exitOnce.Do(drv.Exit)
	})

	// notify systemd
	if err := sysdnotify.Ready(); err != nil {
		return nil, errors.Wrap(err, "failed to notify systemd")
	}

	return drv, nil
}

// Exit will unmount all currently mounted volumes
func (drv *Driver) Exit() {
	fs.Debugf(nil, "Unmount all volumes")
	drv.mu.Lock()
	defer drv.mu.Unlock()

	reportErr(sysdnotify.Stopping())
	drv.monChan <- true // ask monitor to exit
	for _, vol := range drv.volumes {
		reportErr(vol.unmountAll())
		vol.Mounts = []string{} // never persist mounts at exit
	}
	reportErr(drv.saveState())
	drv.dummy = true // no more mounts
}

// monitor all mounts
func (drv *Driver) monitor() {
	for {
		// https://stackoverflow.com/questions/19992334/how-to-listen-to-n-channels-dynamic-select-statement
		monChan := reflect.SelectCase{
			Dir:  reflect.SelectRecv,
			Chan: reflect.ValueOf(drv.monChan),
		}
		hupChan := reflect.SelectCase{
			Dir:  reflect.SelectRecv,
			Chan: reflect.ValueOf(drv.hupChan),
		}
		sources := []reflect.SelectCase{monChan, hupChan}
		volumes := []*Volume{nil, nil}

		drv.mu.Lock()
		for _, vol := range drv.volumes {
			if vol.mnt.ErrChan != nil {
				errSource := reflect.SelectCase{
					Dir:  reflect.SelectRecv,
					Chan: reflect.ValueOf(vol.mnt.ErrChan),
				}
				sources = append(sources, errSource)
				volumes = append(volumes, vol)
			}
		}
		drv.mu.Unlock()

		fs.Debugf(nil, "Monitoring %d volumes", len(sources)-2)
		idx, val, _ := reflect.Select(sources)
		switch idx {
		case 0:
			if val.Bool() {
				fs.Debugf(nil, "Monitoring stopped")
				return
			}
		case 1:
			// user sent SIGHUP to clear the cache
			drv.clearCache()
		default:
			vol := volumes[idx]
			if err := val.Interface(); err != nil {
				fs.Logf(nil, "Volume %q unmounted externally: %v", vol.Name, err)
			} else {
				fs.Infof(nil, "Volume %q unmounted externally", vol.Name)
			}
			drv.mu.Lock()
			reportErr(vol.unmountAll())
			drv.mu.Unlock()
		}
	}
}

// clearCache will clear cache of all volumes
func (drv *Driver) clearCache() {
	fs.Debugf(nil, "Clear all caches")
	drv.mu.Lock()
	defer drv.mu.Unlock()

	for _, vol := range drv.volumes {
		reportErr(vol.clearCache())
	}
}

func reportErr(err error) {
	if err != nil {
		fs.Errorf("docker plugin", "%v", err)
	}
}

// Create volume
// To use subpath we are limited to defining a new volume definition via alias
func (drv *Driver) Create(req *CreateRequest) error {
	ctx := context.Background()
	drv.mu.Lock()
	defer drv.mu.Unlock()

	name := req.Name
	fs.Debugf(nil, "Create volume %q", name)

	if vol, _ := drv.getVolume(name); vol != nil {
		return ErrVolumeExists
	}

	vol, err := newVolume(ctx, name, req.Options, drv)
	if err != nil {
		return err
	}
	drv.volumes[name] = vol
	return drv.saveState()
}

// Remove volume
func (drv *Driver) Remove(req *RemoveRequest) error {
	ctx := context.Background()
	drv.mu.Lock()
	defer drv.mu.Unlock()
	vol, err := drv.getVolume(req.Name)
	if err != nil {
		return err
	}
	if err = vol.remove(ctx); err != nil {
		return err
	}
	delete(drv.volumes, vol.Name)
	return drv.saveState()
}

// List volumes handled by the driver
func (drv *Driver) List() (*ListResponse, error) {
	drv.mu.Lock()
	defer drv.mu.Unlock()

	volumeList := drv.listVolumes()
	fs.Debugf(nil, "List: %v", volumeList)

	res := &ListResponse{
		Volumes: []*VolInfo{},
	}
	for _, name := range volumeList {
		vol := drv.volumes[name]
		res.Volumes = append(res.Volumes, vol.getInfo())
	}
	return res, nil
}

// Get volume info
func (drv *Driver) Get(req *GetRequest) (*GetResponse, error) {
	drv.mu.Lock()
	defer drv.mu.Unlock()
	vol, err := drv.getVolume(req.Name)
	if err != nil {
		return nil, err
	}
	return &GetResponse{Volume: vol.getInfo()}, nil
}

// Path returns path of the requested volume
func (drv *Driver) Path(req *PathRequest) (*PathResponse, error) {
	drv.mu.Lock()
	defer drv.mu.Unlock()
	vol, err := drv.getVolume(req.Name)
	if err != nil {
		return nil, err
	}
	return &PathResponse{Mountpoint: vol.MountPoint}, nil
}

// Mount volume
func (drv *Driver) Mount(req *MountRequest) (*MountResponse, error) {
	drv.mu.Lock()
	defer drv.mu.Unlock()
	vol, err := drv.getVolume(req.Name)
	if err == nil {
		err = vol.mount(req.ID)
	}
	if err == nil {
		err = drv.saveState()
	}
	if err != nil {
		return nil, err
	}
	return &MountResponse{Mountpoint: vol.MountPoint}, nil
}

// Unmount volume
func (drv *Driver) Unmount(req *UnmountRequest) error {
	drv.mu.Lock()
	defer drv.mu.Unlock()
	vol, err := drv.getVolume(req.Name)
	if err == nil {
		err = vol.unmount(req.ID)
	}
	if err == nil {
		err = drv.saveState()
	}
	return err
}

// getVolume returns volume by name
func (drv *Driver) getVolume(name string) (*Volume, error) {
	vol := drv.volumes[name]
	if vol == nil {
		return nil, ErrVolumeNotFound
	}
	return vol, nil
}

// listVolumes returns a sorted list of volume names
func (drv *Driver) listVolumes() []string {
	names := []string{}
	for key := range drv.volumes {
		names = append(names, key)
	}
	sort.Strings(names)
	return names
}

// saveState saves volumes handled by driver to persistent store
func (drv *Driver) saveState() error {
	volumeList := drv.listVolumes()
	fs.Debugf(nil, "Save state %v to %s", volumeList, drv.statePath)

	state := []*Volume{}
	for _, key := range volumeList {
		vol := drv.volumes[key]
		vol.prepareState()
		state = append(state, vol)
	}

	data, err := json.Marshal(state)
	if err == nil {
		err = ioutil.WriteFile(drv.statePath, data, 0600)
	}
	if err != nil {
		return errors.Wrap(err, "failed to write state")
	}
	return nil
}

// restoreState recreates volumes from saved driver state
func (drv *Driver) restoreState(ctx context.Context) error {
	fs.Debugf(nil, "Restore state from %s", drv.statePath)

	data, err := ioutil.ReadFile(drv.statePath)
	if os.IsNotExist(err) {
		return nil
	}

	var state []*Volume
	if err == nil {
		err = json.Unmarshal(data, &state)
	}
	if err != nil {
		fs.Logf(nil, "Failed to restore plugin state: %v", err)
		return nil
	}

	for _, vol := range state {
		if err := vol.restoreState(ctx, drv); err != nil {
			fs.Logf(nil, "Failed to restore volume %q: %v", vol.Name, err)
			continue
		}
		drv.volumes[vol.Name] = vol
	}
	return nil
}
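The monitor loop above relies on reflect.Select to wait on a set of channels whose size changes as volumes are mounted and unmounted, which an ordinary select statement cannot do. A minimal standalone sketch of that pattern (channel contents here are illustrative, not from the source):

package main

import (
	"fmt"
	"reflect"
)

func main() {
	// Build the select cases dynamically -- one per channel.
	chans := []chan string{make(chan string), make(chan string), make(chan string)}
	cases := make([]reflect.SelectCase, len(chans))
	for i, ch := range chans {
		cases[i] = reflect.SelectCase{
			Dir:  reflect.SelectRecv,
			Chan: reflect.ValueOf(ch),
		}
	}

	go func() { chans[1] <- "hello from channel 1" }()

	// reflect.Select blocks like a select statement over all cases and
	// reports which case fired, so the set can be rebuilt per iteration.
	idx, val, ok := reflect.Select(cases)
	fmt.Println(idx, val.String(), ok) // 1 hello from channel 1 true
}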
43
cmd/serve/docker/help.go
Normal file
@@ -0,0 +1,43 @@
package docker

// Note: "|" will be replaced by backticks
var longHelp = `
This command implements the Docker volume plugin API, allowing docker to use
rclone as a data storage mechanism for various cloud providers.
Rclone provides a [docker volume plugin](/docker) based on it.

A docker plugin must create a Unix or TCP socket that Docker looks for
when the plugin is used. The plugin then listens for commands from the
docker daemon and runs the corresponding code when necessary.
Docker plugins can run as a managed plugin under control of the docker daemon
or as an independent native service. For testing, you can just run it directly
from the command line, for example:
|||
sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
|||

Running |rclone serve docker| will create the socket and listen for
commands from Docker to create the necessary volumes. Normally you need not
give the |--socket-addr| flag. The API will listen on the unix domain socket
at |/run/docker/plugins/rclone.sock|. In the example above rclone will create
a TCP socket and a small file |/etc/docker/plugins/rclone.spec| containing
the socket address. We use |sudo| because both paths are writable only by
the root user.

If you later decide to change the listening socket, the docker daemon must be
restarted to reconnect to |/run/docker/plugins/rclone.sock|
or parse the new |/etc/docker/plugins/rclone.spec|. Until you restart, any
volume related docker commands will time out trying to access the old socket.
Running directly is supported on **Linux only**, not on Windows or macOS.
This is not a problem with the managed plugin mode described in detail
in the [full documentation](https://rclone.org/docker).

The command will create volume mounts under the path given by |--base-dir|
(by default |/var/lib/docker-volumes/rclone|, available only to root)
and maintain the JSON-formatted file |docker-plugin.state| in the rclone cache
directory with book-keeping records of created and mounted volumes.

All mount and VFS options are submitted by the docker daemon via API, but
you can also provide defaults on the command line, as well as set the path to
the config file and cache directory or adjust logging verbosity.
`
307
cmd/serve/docker/options.go
Normal file
@@ -0,0 +1,307 @@
package docker

import (
	"strings"

	"github.com/rclone/rclone/cmd/mountlib"
	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configmap"
	"github.com/rclone/rclone/fs/fspath"
	"github.com/rclone/rclone/fs/rc"
	"github.com/rclone/rclone/vfs/vfscommon"
	"github.com/rclone/rclone/vfs/vfsflags"

	"github.com/pkg/errors"
	"github.com/spf13/pflag"
)

// applyOptions configures volume from request options.
//
// There are 5 special options:
// - "remote" aka "fs" determines existing remote from config file
//   with a path or on-the-fly remote using the ":backend:" syntax.
//   It is usually named "remote" in documentation but can be aliased as
//   "fs" to avoid confusion with the "remote" option of some backends.
// - "type" is equivalent to the ":backend:" syntax (optional).
// - "path" provides explicit on-remote path for "type" (optional).
// - "mount-type" can be "mount", "cmount" or "mount2", defaults to
//   first found (optional).
// - "persist" is reserved for future use to create remotes persisted
//   in rclone.conf similar to rcd (optional).
//
// Unlike rcd we use the flat naming scheme for mount, vfs and backend
// options without substructures. Dashes, underscores and mixed case
// in option names can be used interchangeably. Option name conflicts
// can be resolved in a manner similar to rclone CLI by adding prefixes:
// "vfs-", primary mount backend type like "sftp-", and so on.
//
// After triaging the options are put in MountOpt, VFSOpt or connect
// string for actual filesystem setup and in volume.Options for saving
// the state.
func (vol *Volume) applyOptions(volOpt VolOpts) error {
	// copy options to override later
	mntOpt := &vol.mnt.MountOpt
	vfsOpt := &vol.mnt.VFSOpt
	*mntOpt = vol.drv.mntOpt
	*vfsOpt = vol.drv.vfsOpt

	// vol.Options has all options except "remote" and "type"
	vol.Options = VolOpts{}
	vol.fsString = ""

	var fsName, fsPath, fsType string
	var explicitPath string
	var fsOpt configmap.Simple

	// parse "remote" or "type"
	for key, str := range volOpt {
		switch key {
		case "":
			continue
		case "remote", "fs":
			p, err := fspath.Parse(str)
			if err != nil || p.Name == ":" {
				return errors.Wrapf(err, "cannot parse path %q", str)
			}
			fsName, fsPath, fsOpt = p.Name, p.Path, p.Config
			vol.Fs = str
		case "type":
			fsType = str
			vol.Type = str
		case "path":
			explicitPath = str
			vol.Path = str
		default:
			vol.Options[key] = str
		}
	}

	// find options supported by backend
	if strings.HasPrefix(fsName, ":") {
		fsType = fsName[1:]
		fsName = ""
	}
	if fsType == "" {
		fsType = "local"
		if fsName != "" {
			var ok bool
			fsType, ok = fs.ConfigMap(nil, fsName, nil).Get("type")
			if !ok {
				return fs.ErrorNotFoundInConfigFile
			}
		}
	}
	if explicitPath != "" {
		if fsPath != "" {
			fs.Logf(nil, "Explicit path will override connection string")
		}
		fsPath = explicitPath
	}
	fsInfo, err := fs.Find(fsType)
	if err != nil {
		return errors.Errorf("unknown filesystem type %q", fsType)
	}

	// handle remaining options, override fsOpt
	if fsOpt == nil {
		fsOpt = configmap.Simple{}
	}
	opt := rc.Params{}
	for key, val := range vol.Options {
		opt[key] = val
	}
	for key := range opt {
		var ok bool
		var err error

		switch normalOptName(key) {
		case "persist":
			vol.persist, err = opt.GetBool(key)
			ok = true
		case "mount-type":
			vol.mountType, err = opt.GetString(key)
			ok = true
		}
		if err != nil {
			return errors.Wrapf(err, "cannot parse option %q", key)
		}

		if !ok {
			// try to use as a mount option in mntOpt
			ok, err = getMountOption(mntOpt, opt, key)
			if ok && err != nil {
				return errors.Wrapf(err, "cannot parse mount option %q", key)
			}
		}
		if !ok {
			// try as a vfs option in vfsOpt
			ok, err = getVFSOption(vfsOpt, opt, key)
			if ok && err != nil {
				return errors.Wrapf(err, "cannot parse vfs option %q", key)
			}
		}

		if !ok {
			// try as a backend option in fsOpt (backends use "_" instead of "-")
			optWithPrefix := strings.ReplaceAll(normalOptName(key), "-", "_")
			fsOptName := strings.TrimPrefix(optWithPrefix, fsType+"_")
			hasFsPrefix := optWithPrefix != fsOptName
			if !hasFsPrefix || fsInfo.Options.Get(fsOptName) == nil {
				fs.Logf(nil, "Option %q is not supported by backend %q", key, fsType)
				return errors.Errorf("unsupported backend option %q", key)
			}
			fsOpt[fsOptName], err = opt.GetString(key)
			if err != nil {
				return errors.Wrapf(err, "cannot parse backend option %q", key)
			}
		}
	}

	// build remote string from fsName, fsType, fsOpt, fsPath
	colon := ":"
	comma := ","
	if fsName == "" {
		fsName = ":" + fsType
	}
	connString := fsOpt.String()
	if fsName == "" && fsType == "" {
		colon = ""
		connString = ""
	}
	if connString == "" {
		comma = ""
	}
	vol.fsString = fsName + comma + connString + colon + fsPath

	return vol.validate()
}

func getMountOption(mntOpt *mountlib.Options, opt rc.Params, key string) (ok bool, err error) {
	ok = true
	switch normalOptName(key) {
	case "debug-fuse":
		mntOpt.DebugFUSE, err = opt.GetBool(key)
	case "attr-timeout":
		mntOpt.AttrTimeout, err = opt.GetDuration(key)
	case "option":
		mntOpt.ExtraOptions, err = getStringArray(opt, key)
	case "fuse-flag":
		mntOpt.ExtraFlags, err = getStringArray(opt, key)
	case "daemon":
		mntOpt.Daemon, err = opt.GetBool(key)
	case "daemon-timeout":
		mntOpt.DaemonTimeout, err = opt.GetDuration(key)
	case "default-permissions":
		mntOpt.DefaultPermissions, err = opt.GetBool(key)
	case "allow-non-empty":
		mntOpt.AllowNonEmpty, err = opt.GetBool(key)
	case "allow-root":
		mntOpt.AllowRoot, err = opt.GetBool(key)
	case "allow-other":
		mntOpt.AllowOther, err = opt.GetBool(key)
	case "async-read":
		mntOpt.AsyncRead, err = opt.GetBool(key)
	case "max-read-ahead":
		err = getFVarP(&mntOpt.MaxReadAhead, opt, key)
	case "write-back-cache":
		mntOpt.WritebackCache, err = opt.GetBool(key)
	case "volname":
		mntOpt.VolumeName, err = opt.GetString(key)
	case "noappledouble":
		mntOpt.NoAppleDouble, err = opt.GetBool(key)
	case "noapplexattr":
		mntOpt.NoAppleXattr, err = opt.GetBool(key)
	case "network-mode":
		mntOpt.NetworkMode, err = opt.GetBool(key)
	default:
		ok = false
	}
	return
}

func getVFSOption(vfsOpt *vfscommon.Options, opt rc.Params, key string) (ok bool, err error) {
	var intVal int64
	ok = true
	switch normalOptName(key) {

	// options prefixed with "vfs-"
	case "vfs-cache-mode":
		err = getFVarP(&vfsOpt.CacheMode, opt, key)
	case "vfs-cache-poll-interval":
		vfsOpt.CachePollInterval, err = opt.GetDuration(key)
	case "vfs-cache-max-age":
		vfsOpt.CacheMaxAge, err = opt.GetDuration(key)
	case "vfs-cache-max-size":
		err = getFVarP(&vfsOpt.CacheMaxSize, opt, key)
	case "vfs-read-chunk-size":
		err = getFVarP(&vfsOpt.ChunkSize, opt, key)
	case "vfs-read-chunk-size-limit":
		err = getFVarP(&vfsOpt.ChunkSizeLimit, opt, key)
	case "vfs-case-insensitive":
		vfsOpt.CaseInsensitive, err = opt.GetBool(key)
	case "vfs-write-wait":
		vfsOpt.WriteWait, err = opt.GetDuration(key)
	case "vfs-read-wait":
		vfsOpt.ReadWait, err = opt.GetDuration(key)
	case "vfs-write-back":
		vfsOpt.WriteBack, err = opt.GetDuration(key)
	case "vfs-read-ahead":
		err = getFVarP(&vfsOpt.ReadAhead, opt, key)
	case "vfs-used-is-size":
		vfsOpt.UsedIsSize, err = opt.GetBool(key)

	// unprefixed vfs options
	case "no-modtime":
		vfsOpt.NoModTime, err = opt.GetBool(key)
	case "no-checksum":
		vfsOpt.NoChecksum, err = opt.GetBool(key)
	case "dir-cache-time":
		vfsOpt.DirCacheTime, err = opt.GetDuration(key)
	case "poll-interval":
		vfsOpt.PollInterval, err = opt.GetDuration(key)
	case "read-only":
		vfsOpt.ReadOnly, err = opt.GetBool(key)
	case "dir-perms":
		perms := &vfsflags.FileMode{Mode: &vfsOpt.DirPerms}
		err = getFVarP(perms, opt, key)
	case "file-perms":
		perms := &vfsflags.FileMode{Mode: &vfsOpt.FilePerms}
		err = getFVarP(perms, opt, key)

	// unprefixed unix-only vfs options
	case "umask":
		intVal, err = opt.GetInt64(key)
		vfsOpt.Umask = int(intVal)
	case "uid":
		intVal, err = opt.GetInt64(key)
		vfsOpt.UID = uint32(intVal)
	case "gid":
		intVal, err = opt.GetInt64(key)
		vfsOpt.GID = uint32(intVal)

	// non-vfs options
	default:
		ok = false
	}
	return
}

func getFVarP(pvalue pflag.Value, opt rc.Params, key string) error {
	str, err := opt.GetString(key)
	if err != nil {
		return err
	}
	return pvalue.Set(str)
}

func getStringArray(opt rc.Params, key string) ([]string, error) {
	str, err := opt.GetString(key)
	if err != nil {
		return nil, err
	}
	return strings.Split(str, ","), nil
}

func normalOptName(key string) string {
	return strings.ReplaceAll(strings.TrimPrefix(strings.ToLower(key), "--"), "_", "-")
}
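As a worked illustration of the flat naming scheme, the normalOptName helper above makes UPPER_CASE, --flag-style and snake_case spellings collapse to one canonical option name. A minimal sketch with the helper copied verbatim (the main wrapper exists only for demonstration):

package main

import (
	"fmt"
	"strings"
)

// normalOptName mirrors the helper in options.go: lower-case the key,
// strip a leading "--" and turn underscores into dashes.
func normalOptName(key string) string {
	return strings.ReplaceAll(strings.TrimPrefix(strings.ToLower(key), "--"), "_", "-")
}

func main() {
	// All three spellings resolve to the same option name.
	for _, key := range []string{"VFS_CACHE_MODE", "--vfs-cache-mode", "vfs_cache_mode"} {
		fmt.Println(normalOptName(key)) // vfs-cache-mode
	}
}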
100
cmd/serve/docker/serve.go
Normal file
@@ -0,0 +1,100 @@
package docker

import (
	"context"
	"crypto/tls"
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
	"os"
	"path/filepath"
	"runtime"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/lib/atexit"
)

// Server connects plugin with docker daemon by protocol
type Server http.Server

// NewServer creates new docker plugin server
func NewServer(drv *Driver) *Server {
	return &Server{Handler: newRouter(drv)}
}

// Shutdown the server
func (s *Server) Shutdown(ctx context.Context) error {
	hs := (*http.Server)(s)
	return hs.Shutdown(ctx)
}

func (s *Server) serve(listener net.Listener, addr, tempFile string) error {
	if tempFile != "" {
		atexit.Register(func() {
			// remove spec file or self-created unix socket
			fs.Debugf(nil, "Removing stale file %s", tempFile)
			_ = os.Remove(tempFile)
		})
	}
	hs := (*http.Server)(s)
	return hs.Serve(listener)
}

// ServeUnix makes the handler listen for requests on a unix socket.
// It also creates the socket file in the right directory for docker to read.
func (s *Server) ServeUnix(path string, gid int) error {
	listener, socketPath, err := newUnixListener(path, gid)
	if err != nil {
		return err
	}
	if socketPath != "" {
		path = socketPath
		fs.Infof(nil, "Serving unix socket: %s", path)
	} else {
		fs.Infof(nil, "Serving systemd socket")
	}
	return s.serve(listener, path, socketPath)
}

// ServeTCP makes the handler listen for requests on a given TCP address.
// It also writes the spec file in the right directory for docker to read.
func (s *Server) ServeTCP(addr, specDir string, tlsConfig *tls.Config, noSpec bool) error {
	listener, err := net.Listen("tcp", addr)
	if err != nil {
		return err
	}
	if tlsConfig != nil {
		tlsConfig.NextProtos = []string{"http/1.1"}
		listener = tls.NewListener(listener, tlsConfig)
	}
	addr = listener.Addr().String()
	specFile := ""
	if !noSpec {
		specFile, err = writeSpecFile(addr, "tcp", specDir)
		if err != nil {
			return err
		}
	}
	fs.Infof(nil, "Serving TCP socket: %s", addr)
	return s.serve(listener, addr, specFile)
}

func writeSpecFile(addr, proto, specDir string) (string, error) {
	if specDir == "" && runtime.GOOS == "windows" {
		specDir = os.TempDir()
	}
	if specDir == "" {
		specDir = defSpecDir
	}
	if err := os.MkdirAll(specDir, 0755); err != nil {
		return "", err
	}
	specFile := filepath.Join(specDir, "rclone.spec")
	url := fmt.Sprintf("%s://%s", proto, addr)
	if err := ioutil.WriteFile(specFile, []byte(url), 0644); err != nil {
		return "", err
	}
	fs.Debugf(nil, "Plugin spec has been written to %s", specFile)
	return specFile, nil
}
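serve.go leans on a small Go idiom worth spelling out: Server is a defined type whose underlying type is http.Server, so a pointer conversion (*http.Server)(s) gives access to the stdlib methods without embedding or copying. A minimal standalone illustration (method and field names below are for demonstration only):

package main

import (
	"fmt"
	"net/http"
)

// Server has http.Server as its underlying type, as in serve.go.
type Server http.Server

// ListenAddr delegates to the stdlib struct via a zero-cost pointer conversion.
func (s *Server) ListenAddr() string {
	hs := (*http.Server)(s)
	return hs.Addr
}

func main() {
	s := &Server{Addr: ":8787"}
	fmt.Println(s.ListenAddr()) // :8787
}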
17
cmd/serve/docker/systemd.go
Normal file
@@ -0,0 +1,17 @@
// +build linux,!android

package docker

import (
	"os"

	"github.com/coreos/go-systemd/activation"
	"github.com/coreos/go-systemd/util"
)

func systemdActivationFiles() []*os.File {
	if util.IsRunningSystemd() {
		return activation.Files(false)
	}
	return nil
}
11
cmd/serve/docker/systemd_unsupported.go
Normal file
@@ -0,0 +1,11 @@
// +build !linux android

package docker

import (
	"os"
)

func systemdActivationFiles() []*os.File {
	return nil
}
56
cmd/serve/docker/unix.go
Normal file
@@ -0,0 +1,56 @@
// +build linux freebsd

package docker

import (
	"fmt"
	"net"
	"os"
	"path/filepath"
)

func newUnixListener(path string, gid int) (net.Listener, string, error) {
	// try systemd socket activation
	fds := systemdActivationFiles()
	switch len(fds) {
	case 0:
		// fall through
	case 1:
		listener, err := net.FileListener(fds[0])
		return listener, "", err
	default:
		return nil, "", fmt.Errorf("expected only one socket from systemd, got %d", len(fds))
	}

	// create the socket ourselves
	if filepath.Ext(path) == "" {
		path += ".sock"
	}
	if !filepath.IsAbs(path) {
		path = filepath.Join(sockDir, path)
	}

	if err := os.MkdirAll(filepath.Dir(path), 0755); err != nil {
		return nil, "", err
	}
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return nil, "", err
	}

	listener, err := net.Listen("unix", path)
	if err != nil {
		return nil, "", err
	}

	if err = os.Chmod(path, 0660); err != nil {
		return nil, "", err
	}
	if os.Geteuid() == 0 {
		if err = os.Chown(path, 0, gid); err != nil {
			return nil, "", err
		}
	}

	// we don't use a spec file with unix sockets
	return listener, path, nil
}
12
cmd/serve/docker/unix_unsupported.go
Normal file
@@ -0,0 +1,12 @@
// +build !linux,!freebsd

package docker

import (
	"errors"
	"net"
)

func newUnixListener(path string, gid int) (net.Listener, string, error) {
	return nil, "", errors.New("unix sockets require Linux or FreeBSD")
}
326
cmd/serve/docker/volume.go
Normal file
326
cmd/serve/docker/volume.go
Normal file
@@ -0,0 +1,326 @@
|
||||
package docker
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"sort"
|
||||
"time"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
|
||||
"github.com/rclone/rclone/cmd/mountlib"
|
||||
"github.com/rclone/rclone/fs"
|
||||
"github.com/rclone/rclone/fs/config"
|
||||
"github.com/rclone/rclone/fs/rc"
|
||||
)
|
||||
|
||||
// Errors
|
||||
var (
|
||||
ErrVolumeNotFound = errors.New("volume not found")
|
||||
ErrVolumeExists = errors.New("volume already exists")
|
||||
ErrMountpointExists = errors.New("non-empty mountpoint already exists")
|
||||
)
|
||||
|
||||
// Volume keeps volume runtime state
|
||||
// Public members get persisted in saved state
|
||||
type Volume struct {
|
||||
Name string `json:"name"`
|
||||
MountPoint string `json:"mountpoint"`
|
||||
CreatedAt time.Time `json:"created"`
|
||||
Fs string `json:"fs"` // remote[,connectString]:path
|
||||
Type string `json:"type,omitempty"` // same as ":backend:"
|
||||
Path string `json:"path,omitempty"` // for "remote:path" or ":backend:path"
|
||||
Options VolOpts `json:"options"` // all options together
|
||||
Mounts []string `json:"mounts"` // mountReqs as a string list
|
||||
mountReqs map[string]interface{}
|
||||
fsString string // result of merging Fs, Type and Options
|
||||
persist bool
|
||||
mountType string
|
||||
drv *Driver
|
||||
mnt *mountlib.MountPoint
|
||||
}
|
||||
|
||||
// VolOpts keeps volume options
|
||||
type VolOpts map[string]string
|
||||
|
||||
// VolInfo represents a volume for Get and List requests
|
||||
type VolInfo struct {
|
||||
Name string
|
||||
Mountpoint string `json:",omitempty"`
|
||||
CreatedAt string `json:",omitempty"`
|
||||
Status map[string]interface{} `json:",omitempty"`
|
||||
}
|
||||
|
||||
func newVolume(ctx context.Context, name string, volOpt VolOpts, drv *Driver) (*Volume, error) {
|
||||
path := filepath.Join(drv.root, name)
|
||||
mnt := &mountlib.MountPoint{
|
||||
MountPoint: path,
|
||||
}
|
||||
vol := &Volume{
|
||||
Name: name,
|
||||
MountPoint: path,
|
||||
CreatedAt: time.Now(),
|
||||
drv: drv,
|
||||
mnt: mnt,
|
||||
mountReqs: make(map[string]interface{}),
|
||||
}
|
||||
err := vol.applyOptions(volOpt)
|
||||
if err == nil {
|
||||
err = vol.setup(ctx)
|
||||
}
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return vol, nil
|
||||
}
|
||||
|
||||
// getInfo returns short digest about volume
|
||||
func (vol *Volume) getInfo() *VolInfo {
|
||||
vol.prepareState()
|
||||
return &VolInfo{
|
||||
Name: vol.Name,
|
||||
CreatedAt: vol.CreatedAt.Format(time.RFC3339),
|
||||
Mountpoint: vol.MountPoint,
|
||||
Status: rc.Params{"Mounts": vol.Mounts},
|
||||
}
|
||||
}
|
||||
|
||||
// prepareState prepares volume for saving state
|
||||
func (vol *Volume) prepareState() {
|
||||
vol.Mounts = []string{}
|
||||
for id := range vol.mountReqs {
|
||||
vol.Mounts = append(vol.Mounts, id)
|
||||
}
|
||||
sort.Strings(vol.Mounts)
|
||||
}
|
||||
|
||||
// restoreState updates volume from saved state
|
||||
func (vol *Volume) restoreState(ctx context.Context, drv *Driver) error {
|
||||
vol.drv = drv
|
||||
vol.mnt = &mountlib.MountPoint{
|
||||
MountPoint: vol.MountPoint,
|
||||
}
|
||||
volOpt := vol.Options
|
||||
volOpt["fs"] = vol.Fs
|
||||
volOpt["type"] = vol.Type
|
||||
if err := vol.applyOptions(volOpt); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := vol.validate(); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := vol.setup(ctx); err != nil {
|
||||
return err
|
||||
}
|
||||
for _, id := range vol.Mounts {
|
||||
if err := vol.mount(id); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// validate volume
|
||||
func (vol *Volume) validate() error {
|
||||
if vol.Name == "" {
|
||||
return errors.New("volume name is required")
|
||||
}
|
||||
if (vol.Type != "" && vol.Fs != "") || (vol.Type == "" && vol.Fs == "") {
|
||||
return errors.New("volume must have either remote or backend type")
|
||||
}
|
||||
if vol.persist && vol.Type == "" {
|
||||
return errors.New("backend type is required to persist remotes")
|
||||
}
|
||||
if vol.persist && !canPersist {
|
||||
return errors.New("using backend type to persist remotes is prohibited")
|
||||
}
|
||||
if vol.MountPoint == "" {
|
||||
return errors.New("mount point is required")
|
||||
}
|
||||
if vol.mountReqs == nil {
|
||||
vol.mountReqs = make(map[string]interface{})
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// checkMountpoint verifies that mount point is an existing empty directory
|
||||
func (vol *Volume) checkMountpoint() error {
|
||||
path := vol.mnt.MountPoint
|
||||
if runtime.GOOS == "windows" {
|
||||
path = filepath.Dir(path)
|
||||
}
|
||||
_, err := os.Lstat(path)
|
||||
if os.IsNotExist(err) {
|
||||
if err = os.MkdirAll(path, 0700); err != nil {
|
||||
return errors.Wrapf(err, "failed to create mountpoint: %s", path)
|
||||
}
|
||||
} else if err != nil {
|
||||
return err
|
||||
}
|
||||
if runtime.GOOS != "windows" {
|
||||
if err := mountlib.CheckMountEmpty(path); err != nil {
|
||||
return ErrMountpointExists
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}

// setup volume filesystem
func (vol *Volume) setup(ctx context.Context) error {
	fs.Debugf(nil, "Setup volume %q as %q at path %s", vol.Name, vol.fsString, vol.MountPoint)

	if err := vol.checkMountpoint(); err != nil {
		return err
	}
	if vol.drv.dummy {
		return nil
	}

	_, mountFn := mountlib.ResolveMountMethod(vol.mountType)
	if mountFn == nil {
		if vol.mountType != "" {
			return errors.Errorf("unsupported mount type %q", vol.mountType)
		}
		return errors.New("mount command unsupported by this build")
	}
	vol.mnt.MountFn = mountFn

	if vol.persist {
		// Add remote to config file
		params := rc.Params{}
		for key, val := range vol.Options {
			params[key] = val
		}
		updateMode := config.UpdateRemoteOpt{}
		_, err := config.CreateRemote(ctx, vol.Name, vol.Type, params, updateMode)
		if err != nil {
			return err
		}
	}

	// Use existing remote
	f, err := fs.NewFs(ctx, vol.fsString)
	if err == nil {
		vol.mnt.Fs = f
	}
	return err
}

// remove volume filesystem and mounts
func (vol *Volume) remove(ctx context.Context) error {
	count := len(vol.mountReqs)
	fs.Debugf(nil, "Remove volume %q (count %d)", vol.Name, count)

	if count > 0 {
		return errors.New("volume is in use")
	}

	if !vol.drv.dummy {
		shutdownFn := vol.mnt.Fs.Features().Shutdown
		if shutdownFn != nil {
			if err := shutdownFn(ctx); err != nil {
				return err
			}
		}
	}

	if vol.persist {
		// Remove remote from config file
		config.DeleteRemote(vol.Name)
	}
	return nil
}

// clearCache will clear VFS cache for the volume
func (vol *Volume) clearCache() error {
	VFS := vol.mnt.VFS
	if VFS == nil {
		return nil
	}
	root, err := VFS.Root()
	if err != nil {
		return errors.Wrapf(err, "error reading root: %v", VFS.Fs())
	}
	root.ForgetAll()
	return nil
}

// mount volume filesystem
func (vol *Volume) mount(id string) error {
	drv := vol.drv
	count := len(vol.mountReqs)
	fs.Debugf(nil, "Mount volume %q for id %q at path %s (count %d)",
		vol.Name, id, vol.MountPoint, count)

	if _, found := vol.mountReqs[id]; found {
		return errors.New("volume is already mounted by this id")
	}

	if count > 0 { // already mounted
		vol.mountReqs[id] = nil
		return nil
	}
	if drv.dummy {
		vol.mountReqs[id] = nil
		return nil
	}
	if vol.mnt.Fs == nil {
		return errors.New("volume filesystem is not ready")
	}

	if _, err := vol.mnt.Mount(); err != nil {
		return err
	}
	vol.mnt.MountedOn = time.Now()
	vol.mountReqs[id] = nil
	vol.drv.monChan <- false // ask monitor to refresh channels
	return nil
}

// unmount volume
func (vol *Volume) unmount(id string) error {
	count := len(vol.mountReqs)
	fs.Debugf(nil, "Unmount volume %q from id %q at path %s (count %d)",
		vol.Name, id, vol.MountPoint, count)

	if count == 0 {
		return errors.New("volume is not mounted")
	}
	if _, found := vol.mountReqs[id]; !found {
		return errors.New("volume is not mounted by this id")
	}

	delete(vol.mountReqs, id)
	if len(vol.mountReqs) > 0 {
		return nil // more mounts left
	}

	if vol.drv.dummy {
		return nil
	}

	mnt := vol.mnt
	if mnt.UnmountFn != nil {
		if err := mnt.UnmountFn(); err != nil {
			return err
		}
	}
	mnt.ErrChan = nil
	mnt.UnmountFn = nil
	mnt.VFS = nil
	vol.drv.monChan <- false // ask monitor to refresh channels
	return nil
}

func (vol *Volume) unmountAll() error {
	var firstErr error
	for id := range vol.mountReqs {
		err := vol.unmount(id)
		if firstErr == nil {
			firstErr = err
		}
	}
	return firstErr
}
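
// The mount/unmount pair above implements reference counting with a set of
// requester ids rather than a bare counter, so double mounts and unbalanced
// unmounts from the same id can be rejected. A minimal standalone sketch of
// the same idea (illustration only, not rclone code):
//
//	type refSet map[string]struct{}
//
//	func (r refSet) acquire(id string) (first bool, err error) {
//		if _, dup := r[id]; dup {
//			return false, fmt.Errorf("id %q already holds a reference", id)
//		}
//		r[id] = struct{}{}
//		return len(r) == 1, nil // true => caller should do the real mount
//	}
//
//	func (r refSet) release(id string) (last bool, err error) {
//		if _, ok := r[id]; !ok {
//			return false, fmt.Errorf("id %q holds no reference", id)
//		}
//		delete(r, id)
//		return len(r) == 0, nil // true => caller should do the real unmount
//	}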

@@ -69,6 +69,7 @@ control the stats printing.
			return err
		}
		s.Bind(router)
		httplib.Wait()
		return nil
	})
},

@@ -5,6 +5,7 @@ import (

	"github.com/rclone/rclone/cmd"
	"github.com/rclone/rclone/cmd/serve/dlna"
	"github.com/rclone/rclone/cmd/serve/docker"
	"github.com/rclone/rclone/cmd/serve/ftp"
	"github.com/rclone/rclone/cmd/serve/http"
	"github.com/rclone/rclone/cmd/serve/restic"
@@ -30,6 +31,9 @@ func init() {
	if sftp.Command != nil {
		Command.AddCommand(sftp.Command)
	}
	if docker.Command != nil {
		Command.AddCommand(docker.Command)
	}
	cmd.Root.AddCommand(Command)
}
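
// The nil checks above let sub-commands drop out of restricted builds
// cleanly: a file excluded by build tags never assigns the package-level
// Command variable, and init() simply skips registering it. A hedged sketch
// of the pattern with a hypothetical "nodocker" build tag (illustration
// only, not the project's exact tag names):
//
//	// +build !nodocker
//	// in package docker:
//	var Command = &cobra.Command{Use: "docker"}
//
//	// in the aggregating serve package, Command stays nil when excluded:
//	if docker.Command != nil {
//		Command.AddCommand(docker.Command)
//	}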

@@ -32,6 +32,10 @@ hashed locally enabling SHA-1 for any remote.
	cmd.CheckArgs(1, 1, command, args)
	fsrc := cmd.NewFsSrc(args)
	cmd.Run(false, false, command, func() error {
		if hashsum.ChecksumFile != "" {
			fsum, sumFile := cmd.NewFsFile(hashsum.ChecksumFile)
			return operations.CheckSum(context.Background(), fsrc, fsum, sumFile, hash.SHA1, nil, hashsum.DownloadFlag)
		}
		if hashsum.HashsumOutfile == "" {
			return operations.HashLister(context.Background(), hash.SHA1, hashsum.OutputBase64, hashsum.DownloadFlag, fsrc, nil)
		}

@@ -51,7 +51,7 @@ func init() {

var commandDefinition = &cobra.Command{
	Use:   "makefiles <dir>",
	Short: `Make a random file hierarchy in <dir>`,
	Short: `Make a random file hierarchy in a directory`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		if seed == 0 {

19
cmdtest/cmdtest.go
Normal file
@@ -0,0 +1,19 @@
// Package cmdtest creates a testable interface to rclone main
//
// The interface is used to perform end-to-end test of
// commands, flags, environment variables etc.
//
package cmdtest

// The rest of this file is a 1:1 copy from rclone.go

import (
	_ "github.com/rclone/rclone/backend/all" // import all backends
	"github.com/rclone/rclone/cmd"
	_ "github.com/rclone/rclone/cmd/all"    // import all commands
	_ "github.com/rclone/rclone/lib/plugin" // import plugins
)

func main() {
	cmd.Main()
}

228
cmdtest/cmdtest_test.go
Normal file
@@ -0,0 +1,228 @@
// cmdtest_test creates a testable interface to rclone main
//
// The interface is used to perform end-to-end test of
// commands, flags, environment variables etc.

package cmdtest

import (
	"io/ioutil"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// TestMain is initially called by go test to initiate the testing.
// TestMain is also called during the tests to start rclone main in a fresh context (using exec.Command).
// The context is determined by setting/finding the environment variable RCLONE_TEST_MAIN
func TestMain(m *testing.M) {
	_, found := os.LookupEnv(rcloneTestMain)
	if !found {
		// started by Go test => execute tests
		err := os.Setenv(rcloneTestMain, "true")
		if err != nil {
			log.Fatalf("Unable to set %s: %s", rcloneTestMain, err.Error())
		}
		os.Exit(m.Run())
	} else {
		// started by func rcloneExecMain => call rclone main in cmdtest.go
		err := os.Unsetenv(rcloneTestMain)
		if err != nil {
			log.Fatalf("Unable to unset %s: %s", rcloneTestMain, err.Error())
		}
		main()
	}
}

const rcloneTestMain = "RCLONE_TEST_MAIN"

// rcloneExecMain calls rclone with the given environment and arguments.
// The environment variables are in a single string separated by ;
// The terminal output is returned as a string.
func rcloneExecMain(env string, args ...string) (string, error) {
	_, found := os.LookupEnv(rcloneTestMain)
	if !found {
		log.Fatalf("Unexpected execution path: %s is missing.", rcloneTestMain)
	}
	// make a call to self to execute rclone main in a predefined environment (enters TestMain above)
	command := exec.Command(os.Args[0], args...)
	command.Env = getEnvInitial()
	if env != "" {
		command.Env = append(command.Env, strings.Split(env, ";")...)
	}
	out, err := command.CombinedOutput()
	return string(out), err
}

// rcloneEnv calls rclone with the given environment and arguments.
// The environment variables are in a single string separated by ;
// The test config file is automatically configured in RCLONE_CONFIG.
// The terminal output is returned as a string.
func rcloneEnv(env string, args ...string) (string, error) {
	envConfig := env
	if testConfig != "" {
		if envConfig != "" {
			envConfig += ";"
		}
		envConfig += "RCLONE_CONFIG=" + testConfig
	}
	return rcloneExecMain(envConfig, args...)
}

// rclone calls rclone with the given arguments, e.g. "version", "--help".
// The test config file is automatically configured in RCLONE_CONFIG.
// The terminal output is returned as a string.
func rclone(args ...string) (string, error) {
	return rcloneEnv("", args...)
}

// getEnvInitial returns the os environment variables cleaned for RCLONE_ vars (except RCLONE_TEST_MAIN).
func getEnvInitial() []string {
	if envInitial == nil {
		// Set initial environment variables
		osEnv := os.Environ()
		for i := range osEnv {
			if !strings.HasPrefix(osEnv[i], "RCLONE_") || strings.HasPrefix(osEnv[i], rcloneTestMain) {
				envInitial = append(envInitial, osEnv[i])
			}
		}
	}
	return envInitial
}

var envInitial []string

// createTestEnvironment creates a temporary testFolder and
// sets testConfig to testFolder/rclone.config.
func createTestEnvironment(t *testing.T) {
	// Set temporary folder for config and test data
	tempFolder, err := ioutil.TempDir("", "rclone_cmdtest_")
	require.NoError(t, err)
	testFolder = filepath.ToSlash(tempFolder)

	// Set path to temporary config file
	testConfig = testFolder + "/rclone.config"
}

var testFolder string
var testConfig string

// removeTestEnvironment removes the test environment created by createTestEnvironment
func removeTestEnvironment(t *testing.T) {
	// Remove temporary folder with all contents
	err := os.RemoveAll(testFolder)
	require.NoError(t, err)
}

// createTestFile creates the file testFolder/name
func createTestFile(name string, t *testing.T) string {
	err := ioutil.WriteFile(testFolder+"/"+name, []byte("content_of_"+name), 0666)
	require.NoError(t, err)
	return testFolder + "/" + name
}

// createTestFolder creates the folder testFolder/name
func createTestFolder(name string, t *testing.T) string {
	err := os.Mkdir(testFolder+"/"+name, 0777)
	require.NoError(t, err)
	return testFolder + "/" + name
}

// createSimpleTestData creates simple test data in testFolder/subFolder
func createSimpleTestData(t *testing.T) string {
	createTestFolder("testdata", t)
	createTestFile("testdata/file1.txt", t)
	createTestFile("testdata/file2.txt", t)
	createTestFolder("testdata/folderA", t)
	createTestFile("testdata/folderA/fileA1.txt", t)
	createTestFile("testdata/folderA/fileA2.txt", t)
	createTestFolder("testdata/folderA/folderAA", t)
	createTestFile("testdata/folderA/folderAA/fileAA1.txt", t)
	createTestFile("testdata/folderA/folderAA/fileAA2.txt", t)
	createTestFolder("testdata/folderB", t)
	createTestFile("testdata/folderB/fileB1.txt", t)
	createTestFile("testdata/folderB/fileB2.txt", t)
	return testFolder + "/testdata"
}

// removeSimpleTestData removes the test data created by createSimpleTestData
func removeSimpleTestData(t *testing.T) {
	err := os.RemoveAll(testFolder + "/testdata")
	require.NoError(t, err)
}

// TestCmdTest demonstrates and verifies the test functions for end-to-end testing of rclone
func TestCmdTest(t *testing.T) {
	createTestEnvironment(t)
	defer removeTestEnvironment(t)

	// Test simple call and output from rclone
	out, err := rclone("version")
	t.Logf("rclone version\n" + out)
	if assert.NoError(t, err) {
		assert.Contains(t, out, "rclone v")
		assert.Contains(t, out, "version: ")
		assert.NotContains(t, out, "Error:")
		assert.NotContains(t, out, "--help")
		assert.NotContains(t, out, " DEBUG : ")
		assert.Regexp(t, "rclone\\s+v\\d+\\.\\d+", out) // rclone v_.__
	}

	// Test multiple arguments and DEBUG output
	out, err = rclone("version", "-vv")
	if assert.NoError(t, err) {
		assert.Contains(t, out, "rclone v")
		assert.Contains(t, out, " DEBUG : ")
	}

	// Test error and error output
	out, err = rclone("version", "--provoke-an-error")
	if assert.Error(t, err) {
		assert.Contains(t, err.Error(), "exit status 1")
		assert.Contains(t, out, "Error: unknown flag")
	}

	// Test effect of environment variable
	env := "RCLONE_LOG_LEVEL=DEBUG"
	out, err = rcloneEnv(env, "version")
	if assert.NoError(t, err) {
		assert.Contains(t, out, "rclone v")
		assert.Contains(t, out, " DEBUG : ")
	}

	// Test effect of multiple environment variables, including one with ,
	env = "RCLONE_LOG_LEVEL=DEBUG;RCLONE_LOG_FORMAT=date,shortfile;RCLONE_STATS=173ms"
	out, err = rcloneEnv(env, "version")
	if assert.NoError(t, err) {
		assert.Contains(t, out, "rclone v")
		assert.Contains(t, out, " DEBUG : ")
		assert.Regexp(t, "[^\\s]+\\.go:\\d+:", out) // ___.go:__:
		assert.Contains(t, out, "173ms")
	}

	// Test setup of config file
	out, err = rclone("config", "create", "myLocal", "local")
	if assert.NoError(t, err) {
		assert.Contains(t, out, "[myLocal]")
		assert.Contains(t, out, "type = local")
	}

	// Test creation of simple test data
	createSimpleTestData(t)
	defer removeSimpleTestData(t)

	// Test access to config file and simple test data
	out, err = rclone("lsl", "myLocal:"+testFolder)
	t.Logf("rclone lsl myLocal:testFolder\n" + out)
	if assert.NoError(t, err) {
		assert.Contains(t, out, "rclone.config")
		assert.Contains(t, out, "testdata/folderA/fileA1.txt")
	}

}
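
The TestMain construction above is the heart of these tests: the test binary
re-executes itself, so every rclone invocation gets a fresh process, flag set
and environment, exactly like a user running the CLI. A minimal self-contained
sketch of the same re-exec pattern, independent of rclone (the marker name is
illustrative; this goes in a `_test.go` file):

```
package selfexec

import (
	"os"
	"os/exec"
	"testing"
)

const childMarker = "SELF_EXEC_CHILD" // hypothetical marker variable

func TestMain(m *testing.M) {
	if os.Getenv(childMarker) != "" {
		// Child process: stand in for the real main() and exit.
		os.Stdout.WriteString("hello from child\n")
		os.Exit(0)
	}
	// Parent process: run the tests as usual.
	os.Exit(m.Run())
}

func TestChildOutput(t *testing.T) {
	// Re-exec the running test binary; TestMain above takes the child branch.
	cmd := exec.Command(os.Args[0])
	cmd.Env = append(os.Environ(), childMarker+"=1")
	out, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("child failed: %v (output %q)", err, out)
	}
	if string(out) != "hello from child\n" {
		t.Fatalf("unexpected child output: %q", out)
	}
}
```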

276
cmdtest/environment_test.go
Normal file
@@ -0,0 +1,276 @@
// environment_test tests the use and precedence of environment variables
//
// The tests rely on functions defined in cmdtest_test.go

package cmdtest

import (
	"os"
	"runtime"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// TestEnvironmentVariables demonstrates and verifies the use and precedence of environment variables
func TestEnvironmentVariables(t *testing.T) {

	createTestEnvironment(t)
	defer removeTestEnvironment(t)

	testdataPath := createSimpleTestData(t)
	defer removeSimpleTestData(t)

	// Non backend flags
	// =================

	// First verify default behaviour of the implicit max_depth=-1
	env := ""
	out, err := rcloneEnv(env, "lsl", testFolder)
	//t.Logf("\n" + out)
	if assert.NoError(t, err) {
		assert.Contains(t, out, "rclone.config") // depth 1
		assert.Contains(t, out, "file1.txt")     // depth 2
		assert.Contains(t, out, "fileA1.txt")    // depth 3
		assert.Contains(t, out, "fileAA1.txt")   // depth 4
	}

	// Test of flag.Value
	env = "RCLONE_MAX_DEPTH=2"
	out, err = rcloneEnv(env, "lsl", testFolder)
	if assert.NoError(t, err) {
		assert.Contains(t, out, "file1.txt")     // depth 2
		assert.NotContains(t, out, "fileA1.txt") // depth 3
	}

	// Test of flag.Changed (tests #5341 Issue1)
	env = "RCLONE_LOG_LEVEL=DEBUG"
	out, err = rcloneEnv(env, "version", "--quiet")
	if assert.Error(t, err) {
		assert.Contains(t, out, " DEBUG : ")
		assert.Contains(t, out, "Can't set -q and --log-level")
		assert.Contains(t, "exit status 1", err.Error())
	}

	// Test of flag.DefValue
	env = "RCLONE_STATS=173ms"
	out, err = rcloneEnv(env, "help", "flags")
	if assert.NoError(t, err) {
		assert.Contains(t, out, "(default 173ms)")
	}

	// Test of command line flags overriding environment flags
	env = "RCLONE_MAX_DEPTH=2"
	out, err = rcloneEnv(env, "lsl", testFolder, "--max-depth", "3")
	if assert.NoError(t, err) {
		assert.Contains(t, out, "fileA1.txt")     // depth 3
		assert.NotContains(t, out, "fileAA1.txt") // depth 4
	}

	// Test of debug logging while initialising flags from environment (tests #5241 Enhance1)
	env = "RCLONE_STATS=173ms"
	out, err = rcloneEnv(env, "version", "-vv")
	if assert.NoError(t, err) {
		assert.Contains(t, out, " DEBUG : ")
		assert.Contains(t, out, "--stats")
		assert.Contains(t, out, "173ms")
		assert.Contains(t, out, "RCLONE_STATS=")
	}

	// Backend flags and option precedence
	// ===================================

	// Test approach:
	// Verify no symlink warning when skip_links=true on the level with highest precedence
	// and skip_links=false on all levels with lower precedence
	//
	// Reference: https://rclone.org/docs/#precedence

	// Create a symlink in test data
	err = os.Symlink(testdataPath+"/folderA", testdataPath+"/symlinkA")
	if runtime.GOOS == "windows" {
		errNote := "The policy settings on Windows often prohibit the creation of symlinks due to security issues.\n"
		errNote += "You can safely ignore this test, if your change didn't affect environment variables."
		require.NoError(t, err, errNote)
	} else {
		require.NoError(t, err)
	}

	// Create a local remote with explicit skip_links=false
	out, err = rclone("config", "create", "myLocal", "local", "skip_links", "false")
	if assert.NoError(t, err) {
		assert.Contains(t, out, "[myLocal]")
		assert.Contains(t, out, "type = local")
		assert.Contains(t, out, "skip_links = false")
	}

	// Verify symlink warning when skip_links=false on all levels
	env = "RCLONE_SKIP_LINKS=false;RCLONE_LOCAL_SKIP_LINKS=false;RCLONE_CONFIG_MYLOCAL_SKIP_LINKS=false"
	out, err = rcloneEnv(env, "lsd", "myLocal,skip_links=false:"+testdataPath, "--skip-links=false")
	//t.Logf("\n" + out)
	if assert.NoError(t, err) {
		assert.Contains(t, out, "NOTICE: symlinkA:")
		assert.Contains(t, out, "folderA")
	}

	// Test precedence of connection strings
	env = "RCLONE_SKIP_LINKS=false;RCLONE_LOCAL_SKIP_LINKS=false;RCLONE_CONFIG_MYLOCAL_SKIP_LINKS=false"
	out, err = rcloneEnv(env, "lsd", "myLocal,skip_links:"+testdataPath, "--skip-links=false")
	if assert.NoError(t, err) {
		assert.NotContains(t, out, "symlinkA")
		assert.Contains(t, out, "folderA")
	}

	// Test precedence of command line flags
	env = "RCLONE_SKIP_LINKS=false;RCLONE_LOCAL_SKIP_LINKS=false;RCLONE_CONFIG_MYLOCAL_SKIP_LINKS=false"
	out, err = rcloneEnv(env, "lsd", "myLocal:"+testdataPath, "--skip-links")
	if assert.NoError(t, err) {
		assert.NotContains(t, out, "symlinkA")
		assert.Contains(t, out, "folderA")
	}

	// Test precedence of remote specific environment variables (tests #5341 Issue2)
	env = "RCLONE_SKIP_LINKS=false;RCLONE_LOCAL_SKIP_LINKS=false;RCLONE_CONFIG_MYLOCAL_SKIP_LINKS=true"
	out, err = rcloneEnv(env, "lsd", "myLocal:"+testdataPath)
	if assert.NoError(t, err) {
		assert.NotContains(t, out, "symlinkA")
		assert.Contains(t, out, "folderA")
	}

	// Test precedence of backend specific environment variables (tests #5341 Issue3)
	env = "RCLONE_SKIP_LINKS=false;RCLONE_LOCAL_SKIP_LINKS=true"
	out, err = rcloneEnv(env, "lsd", "myLocal:"+testdataPath)
	if assert.NoError(t, err) {
		assert.NotContains(t, out, "symlinkA")
		assert.Contains(t, out, "folderA")
	}

	// Test precedence of backend generic environment variables
	env = "RCLONE_SKIP_LINKS=true"
	out, err = rcloneEnv(env, "lsd", "myLocal:"+testdataPath)
	if assert.NoError(t, err) {
		assert.NotContains(t, out, "symlinkA")
		assert.Contains(t, out, "folderA")
	}

	// Recreate the test remote with explicit skip_links=true
	out, err = rclone("config", "create", "myLocal", "local", "skip_links", "true")
	if assert.NoError(t, err) {
		assert.Contains(t, out, "[myLocal]")
		assert.Contains(t, out, "type = local")
		assert.Contains(t, out, "skip_links = true")
	}

	// Test precedence of config file options
	env = ""
	out, err = rcloneEnv(env, "lsd", "myLocal:"+testdataPath)
	if assert.NoError(t, err) {
		assert.NotContains(t, out, "symlinkA")
		assert.Contains(t, out, "folderA")
	}

	// Recreate the test remote with rclone defaults, that is implicit skip_links=false
	out, err = rclone("config", "create", "myLocal", "local")
	if assert.NoError(t, err) {
		assert.Contains(t, out, "[myLocal]")
		assert.Contains(t, out, "type = local")
		assert.NotContains(t, out, "skip_links")
	}

	// Verify the rclone default value (implicit skip_links=false)
	env = ""
	out, err = rcloneEnv(env, "lsd", "myLocal:"+testdataPath)
	if assert.NoError(t, err) {
		assert.Contains(t, out, "NOTICE: symlinkA:")
		assert.Contains(t, out, "folderA")
	}

	// Display of backend defaults (tests #4659)
	//------------------------------------------

	env = "RCLONE_DRIVE_CHUNK_SIZE=111M"
	out, err = rcloneEnv(env, "help", "flags")
	if assert.NoError(t, err) {
		assert.Regexp(t, "--drive-chunk-size[^\\(]+\\(default 111M\\)", out)
	}

	// Options on referencing remotes (alias, crypt, etc.)
	//----------------------------------------------------

	// Create alias remote on myLocal having implicit skip_links=false
	out, err = rclone("config", "create", "myAlias", "alias", "remote", "myLocal:"+testdataPath)
	if assert.NoError(t, err) {
		assert.Contains(t, out, "[myAlias]")
		assert.Contains(t, out, "type = alias")
		assert.Contains(t, out, "remote = myLocal:")
	}

	// Verify symlink warnings on the alias
	env = ""
	out, err = rcloneEnv(env, "lsd", "myAlias:")
	if assert.NoError(t, err) {
		assert.Contains(t, out, "NOTICE: symlinkA:")
		assert.Contains(t, out, "folderA")
	}

	// Test backend generic flags
	// having effect on the underlying local remote
	env = "RCLONE_SKIP_LINKS=true"
	out, err = rcloneEnv(env, "lsd", "myAlias:")
	if assert.NoError(t, err) {
		assert.NotContains(t, out, "symlinkA")
		assert.Contains(t, out, "folderA")
	}

	// Test backend specific flags
	// having effect on the underlying local remote
	env = "RCLONE_LOCAL_SKIP_LINKS=true"
	out, err = rcloneEnv(env, "lsd", "myAlias:")
	if assert.NoError(t, err) {
		assert.NotContains(t, out, "symlinkA")
		assert.Contains(t, out, "folderA")
	}

	// Test remote specific flags
	// having no effect unless supported by the immediate remote (alias)
	env = "RCLONE_CONFIG_MYALIAS_SKIP_LINKS=true"
	out, err = rcloneEnv(env, "lsd", "myAlias:")
	if assert.NoError(t, err) {
		assert.Contains(t, out, "NOTICE: symlinkA:")
		assert.Contains(t, out, "folderA")
	}

	env = "RCLONE_CONFIG_MYALIAS_REMOTE=" + "myLocal:" + testdataPath + "/folderA"
	out, err = rcloneEnv(env, "lsl", "myAlias:")
	if assert.NoError(t, err) {
		assert.Contains(t, out, "fileA1.txt")
		assert.NotContains(t, out, "fileB1.txt")
	}

	// Test command line flags
	// having effect on the underlying local remote
	env = ""
	out, err = rcloneEnv(env, "lsd", "myAlias:", "--skip-links")
	if assert.NoError(t, err) {
		assert.NotContains(t, out, "symlinkA")
		assert.Contains(t, out, "folderA")
	}

	// Test connection specific flags
	// having no effect unless supported by the immediate remote (alias)
	env = ""
	out, err = rcloneEnv(env, "lsd", "myAlias,skip_links:")
	if assert.NoError(t, err) {
		assert.Contains(t, out, "NOTICE: symlinkA:")
		assert.Contains(t, out, "folderA")
	}

	env = ""
	out, err = rcloneEnv(env, "lsl", "myAlias,remote='myLocal:"+testdataPath+"/folderA':", "-vv")
	if assert.NoError(t, err) {
		assert.Contains(t, out, "fileA1.txt")
		assert.NotContains(t, out, "fileB1.txt")
	}

}
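
Read together, the assertions above trace rclone's documented precedence
chain from highest to lowest: connection string parameters, command line
flags, remote specific environment variables, backend specific environment
variables, backend generic environment variables, config file values, and
finally rclone's defaults. As a small illustration of how a remote specific
variable name is composed (a hedged sketch, not rclone's actual
implementation, which lives in the fs/config code):

```
package main

import (
	"fmt"
	"strings"
)

// remoteEnvVar derives the remote specific environment variable name,
// e.g. myLocal + skip_links => RCLONE_CONFIG_MYLOCAL_SKIP_LINKS.
func remoteEnvVar(remote, option string) string {
	canon := strings.ToUpper(strings.ReplaceAll(remote, "-", "_"))
	opt := strings.ToUpper(strings.ReplaceAll(option, "-", "_"))
	return "RCLONE_CONFIG_" + canon + "_" + opt
}

func main() {
	fmt.Println(remoteEnvVar("myLocal", "skip_links")) // RCLONE_CONFIG_MYLOCAL_SKIP_LINKS
}
```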

@@ -18,6 +18,11 @@
  ],
  "enableGitInfo": true,
  "markup": {
    "tableOfContents": {
      "endLevel": 3,
      "ordered": false,
      "startLevel": 2
    },
    "goldmark": {
      "extensions": {
        "typographer": false

@@ -2,6 +2,7 @@
title: "Rclone"
description: "Rclone syncs your files to cloud storage: Google Drive, S3, Swift, Dropbox, Google Cloud Storage, Azure, Box and many more."
type: page
notoc: true
---

# Rclone syncs your files to cloud storage

@@ -3,8 +3,7 @@ title: "Alias"
description: "Remote Aliases"
---

{{< icon "fa fa-link" >}} Alias
-----------------------------------------
# {{< icon "fa fa-link" >}} Alias

The `alias` remote provides a new name for another remote.

@@ -3,8 +3,7 @@ title: "Amazon Drive"
description: "Rclone docs for Amazon Drive"
---

{{< icon "fab fa-amazon" >}} Amazon Drive
-----------------------------------------
# {{< icon "fab fa-amazon" >}} Amazon Drive

Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage
service run by Amazon for consumers.
@@ -260,7 +259,7 @@ Files >= this size will be downloaded via their tempLink.

Files this size or more will be downloaded via their "tempLink". This
is to work around a problem with Amazon Drive which blocks downloads
of files bigger than about 10 GiB. The default for this is 9 GiB which
shouldn't need to be changed.

To download files above this threshold, rclone requests a "tempLink"
@@ -270,7 +269,7 @@ underlying S3 storage.
- Config: templink_threshold
- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
- Type: SizeSuffix
- Default: 9G
- Default: 9Gi

#### --acd-encoding

@@ -431,7 +431,7 @@ put them back in again.` >}}
* Laurens Janssen <BD69BM@insim.biz>
* Bob Bagwill <bobbagwill@gmail.com>
* Nathan Collins <colli372@msu.edu>
* lostheli <unknown>
* lostheli
* kelv <kelvin@acks.org>
* Milly <milly.ca@gmail.com>
* gtorelly <gtorelly@gmail.com>
@@ -495,7 +495,7 @@ put them back in again.` >}}
* Chris Macklin <chris.macklin@10xgenomics.com>
* Antoon Prins <antoon.prins@surfsara.nl>
* Alexey Ivanov <rbtz@dropbox.com>
* sp31415t1 <33207650+sp31415t1@users.noreply.github.com>
* Serge Pouliquen <sp31415@free.fr>
* acsfer <carlos@reendex.com>
* Tom <tom@tom-fitzhenry.me.uk>
* Tyson Moore <tyson@tyson.me>
@@ -504,3 +504,12 @@ put them back in again.` >}}
* Reid Buzby <reid@rethink.software>
* darrenrhs <darrenrhs@gmail.com>
* Florian Penzkofer <fp@nullptr.de>
* Xuanchen Wu <117010292@link.cuhk.edu.cn>
* partev <petrosyan@gmail.com>
* Dmitry Sitnikov <fo2@inbox.ru>
* Haochen Tong <i@hexchain.org>
* Michael Hanselmann <public@hansmi.ch>
* Chuan Zh <zhchuan7@gmail.com>
* Antoine GIRARD <antoine.girard@sapk.fr>
* Justin Winokur (Jwink3101) <Jwink3101@users.noreply.github.com>
* Mariano Absatz (git) <scm@baby.com.ar>

@@ -3,8 +3,7 @@ title: "Microsoft Azure Blob Storage"
description: "Rclone docs for Microsoft Azure Blob Storage"
---

{{< icon "fab fa-windows" >}} Microsoft Azure Blob Storage
-----------------------------------------
# {{< icon "fab fa-windows" >}} Microsoft Azure Blob Storage

Paths are specified as `remote:container` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g.
@@ -166,13 +165,12 @@ Path to file containing credentials for use with a service principal.

Leave blank normally. Needed only if you want to use a service principal instead of interactive login.

    $ az sp create-for-rbac --name "<name>" \
    $ az ad sp create-for-rbac --name "<name>" \
      --role "Storage Blob Data Owner" \
      --scopes "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container>" \
      > azure-principal.json

See [Use Azure CLI to assign an Azure role for access to blob and queue data](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli)
for more details.
See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and ["Assign an Azure role for access to blob data"](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details.

- Config: service_principal_file
@@ -286,7 +284,7 @@ Note that this is stored in memory and there may be up to
- Config: chunk_size
- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
- Type: SizeSuffix
- Default: 4M
- Default: 4Mi

#### --azureblob-list-chunk

@@ -3,8 +3,7 @@ title: "B2"
description: "Backblaze B2"
---

{{< icon "fa fa-fire" >}} Backblaze B2
----------------------------------------
# {{< icon "fa fa-fire" >}} Backblaze B2

B2 is [Backblaze's cloud storage system](https://www.backblaze.com/b2/).

@@ -406,7 +405,7 @@ This value should be set no larger than 4.657 GiB (== 5 GB).
- Config: upload_cutoff
- Env Var: RCLONE_B2_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 200M
- Default: 200Mi

#### --b2-copy-cutoff

@@ -420,7 +419,7 @@ The minimum is 0 and the maximum is 4.6 GiB.
- Config: copy_cutoff
- Env Var: RCLONE_B2_COPY_CUTOFF
- Type: SizeSuffix
- Default: 4G
- Default: 4Gi

#### --b2-chunk-size

@@ -434,7 +433,7 @@ minimum size.
- Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE
- Type: SizeSuffix
- Default: 96M
- Default: 96Mi

#### --b2-disable-checksum

@@ -3,8 +3,7 @@ title: "Box"
description: "Rclone docs for Box"
---

{{< icon "fa fa-archive" >}} Box
-----------------------------------------
# {{< icon "fa fa-archive" >}} Box

Paths are specified as `remote:path`

@@ -374,7 +373,7 @@ Cutoff for switching to multipart upload (>= 50 MiB).
- Config: upload_cutoff
- Env Var: RCLONE_BOX_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 50M
- Default: 50Mi

#### --box-commit-retries

@@ -3,8 +3,7 @@ title: "Cache"
description: "Rclone docs for cache remote"
---

{{< icon "fa fa-archive" >}} Cache (DEPRECATED)
-----------------------------------------
# {{< icon "fa fa-archive" >}} Cache (DEPRECATED)

The `cache` remote wraps another existing remote and stores file structure
and its data for long running tasks like `rclone mount`.
@@ -361,9 +360,9 @@ will need to be cleared or unexpected EOF errors will occur.
- Config: chunk_size
- Env Var: RCLONE_CACHE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 5M
- Default: 5Mi
- Examples:
    - "1m"
    - "1M"
        - 1 MiB
    - "5M"
        - 5 MiB
@@ -398,7 +397,7 @@ oldest chunks until it goes under this value.
- Config: chunk_total_size
- Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
- Type: SizeSuffix
- Default: 10G
- Default: 10Gi
- Examples:
    - "500M"
        - 500 MiB

@@ -5,6 +5,149 @@ description: "Rclone Changelog"

# Changelog

## v1.56.0 - 2021-07-20

[See commits](https://github.com/rclone/rclone/compare/v1.55.0...v1.56.0)

* New backends
    * [Uptobox](/uptobox/) (buengese)
* New commands
    * [serve docker](/commands/rclone_serve_docker/) (Antoine GIRARD) (Ivan Andreev)
        * and accompanying [docker volume plugin](/docker/)
    * [checksum](/commands/rclone_checksum/) to check files against a file of checksums (Ivan Andreev)
        * this is also available as `rclone md5sum -C` etc
    * [config touch](/commands/rclone_config_touch/): ensure config exists at configured location (albertony)
    * [test changenotify](/commands/rclone_test_changenotify/): command to help debugging changenotify (Nick Craig-Wood)
* Deprecations
    * `dbhashsum`: Remove command deprecated a year ago (Ivan Andreev)
    * `cache`: Deprecate cache backend (Ivan Andreev)
* New Features
    * rework config system so it can be used non-interactively via cli and rc API.
        * See docs in [config create](/commands/rclone_config_create/)
        * This is a very big change to all the backends so may cause breakages - please file bugs!
    * librclone - export the rclone RC as a C library (lewisxy) (Nick Craig-Wood)
        * Link a C-API rclone shared object into your project
        * Use the RC as an in memory interface
        * Python example supplied
        * Also supports Android and gomobile
    * fs
        * Add `--disable-http2` for global http2 disable (Nick Craig-Wood)
        * Make `--dump` imply `-vv` (Alex Chen)
        * Use binary prefixes for size and rate units (albertony)
        * Use decimal prefixes for counts (albertony)
    * Add google search widget to rclone.org (Ivan Andreev)
    * accounting: Calculate rolling average speed (Haochen Tong)
    * atexit: Terminate with non-zero status after receiving signal (Michael Hanselmann)
    * build
        * Only run event-based workflow scripts under rclone repo with manual override (Mathieu Carbou)
        * Add Android build with gomobile (x0b)
    * check: Log the hash in use like cryptcheck does (Nick Craig-Wood)
    * version: Print os/version, kernel and bitness (Ivan Andreev)
    * config
        * Prevent use of Windows reserved names in config file name (albertony)
        * Create config file in windows appdata directory by default (albertony)
        * Treat any config file paths with filename notfound as memory-only config (albertony)
        * Delay load config file (albertony)
        * Replace defaultConfig with a thread-safe in-memory implementation (Chris Macklin)
        * Allow `config create` and friends to take `key=value` parameters (Nick Craig-Wood)
        * Fixed issues with flags/options set by environment vars. (Ole Frost)
    * fshttp: Implement graceful DSCP error handling (Tyson Moore)
    * lib/http - provides an abstraction for a central http server that services can bind routes to (Nolan Woods)
        * Add `--template` config and flags to serve/data (Nolan Woods)
        * Add default 404 handler (Nolan Woods)
    * link: Use "off" value for unset expiry (Nick Craig-Wood)
    * oauthutil: Raise fatal error if token expired without refresh token (Alex Chen)
    * rcat: Add `--size` flag for more efficient uploads of known size (Nazar Mishturak)
    * serve sftp: Add `--stdio` flag to serve via stdio (Tom)
    * sync: Don't warn about `--no-traverse` when `--files-from` is set (Nick Gaya)
    * `test makefiles`
        * Add `--seed` flag and make data generated repeatable (Nick Craig-Wood)
        * Add log levels and speed summary (Nick Craig-Wood)
* Bug Fixes
    * accounting: Fix startTime of statsGroups.sum (Haochen Tong)
    * cmd/ncdu: Fix out of range panic in delete (buengese)
    * config
        * Fix issues with memory-only config file paths (albertony)
        * Fix in memory config not saving on the fly backend config (Nick Craig-Wood)
    * fshttp: Fix address parsing for DSCP (Tyson Moore)
    * ncdu: Update termbox-go library to fix crash (Nick Craig-Wood)
    * oauthutil: Fix old authorize result not recognised (Cnly)
    * operations: Don't update timestamps of files in `--compare-dest` (Nick Gaya)
    * selfupdate: fix archive name on macos (Ivan Andreev)
* Mount
    * Refactor before adding serve docker (Antoine GIRARD)
* VFS
    * Add cache reset for `--vfs-cache-max-size` handling at cache poll interval (Leo Luan)
    * Fix modtime changing when reading file into cache (Nick Craig-Wood)
    * Avoid unnecessary subdir in cache path (albertony)
    * Fix that umask option cannot be set as environment variable (albertony)
    * Do not print notice about missing poll-interval support when set to 0 (albertony)
* Local
    * Always use readlink to read symlink size for better compatibility (Nick Craig-Wood)
    * Add `--local-unicode-normalization` (and remove `--local-no-unicode-normalization`) (Nick Craig-Wood)
    * Skip entries removed concurrently with List() (Ivan Andreev)
* Crypt
    * Support timestamped filenames from `--b2-versions` (Dominik Mydlil)
* B2
    * Don't include the bucket name in public link file prefixes (Jeffrey Tolar)
    * Fix versions and .files with no extension (Nick Craig-Wood)
    * Factor version handling into lib/version (Dominik Mydlil)
* Box
    * Use upload preflight check to avoid listings in file uploads (Nick Craig-Wood)
    * Return errors instead of calling log.Fatal with them (Nick Craig-Wood)
* Drive
    * Switch to the Drives API for looking up shared drives (Nick Craig-Wood)
    * Fix some google docs being treated as files (Nick Craig-Wood)
* Dropbox
    * Add `--dropbox-batch-mode` flag to speed up uploading (Nick Craig-Wood)
        * Read the [batch mode](/dropbox/#batch-mode) docs for more info
    * Set visibility in link sharing when `--expire` is set (Nick Craig-Wood)
    * Simplify chunked uploads (Alexey Ivanov)
    * Improve "own App IP" instructions (Ivan Andreev)
* Fichier
    * Check if more than one upload link is returned (Nick Craig-Wood)
    * Support downloading password protected files and folders (Florian Penzkofer)
    * Make error messages report text from the API (Nick Craig-Wood)
    * Fix move of files in the same directory (Nick Craig-Wood)
    * Check that we actually got a download token and retry if we didn't (buengese)
* Filefabric
    * Fix listing after change of from field from "int" to int. (Nick Craig-Wood)
* FTP
    * Make upload error 250 indicate success (Nick Craig-Wood)
* GCS
    * Make compatible with gsutil's mtime metadata (database64128)
    * Clean up time format constants (database64128)
* Google Photos
    * Fix read only scope not being used properly (Nick Craig-Wood)
* HTTP
    * Replace httplib with lib/http (Nolan Woods)
    * Clean up Bind to better use middleware (Nolan Woods)
* Jottacloud
    * Fix legacy auth with state based config system (buengese)
    * Fix invalid url in output from link command (albertony)
    * Add no versions option (buengese)
* Onedrive
    * Add `list_chunk option` (Nick Gaya)
    * Also report root error if unable to cancel multipart upload (Cnly)
    * Fix failed to configure: empty token found error (Nick Craig-Wood)
    * Make link return direct download link (Xuanchen Wu)
* S3
    * Add `--s3-no-head-object` (Tatsuya Noyori)
    * Remove WebIdentityRoleProvider to fix crash on auth (Nick Craig-Wood)
    * Don't check to see if remote is object if it ends with / (Nick Craig-Wood)
    * Add SeaweedFS (Chris Lu)
    * Update Alibaba OSS endpoints (Chuan Zh)
* SFTP
    * Fix performance regression by re-enabling concurrent writes (Nick Craig-Wood)
    * Expand tilde and environment variables in configured `known_hosts_file` (albertony)
* Tardigrade
    * Upgrade to uplink v1.4.6 (Caleb Case)
    * Use negative offset (Caleb Case)
    * Add warning about `too many open files` (acsfer)
* WebDAV
    * Fix sharepoint auth over http (Nick Craig-Wood)
    * Add headers option (Antoon Prins)

## v1.55.1 - 2021-04-26

[See commits](https://github.com/rclone/rclone/compare/v1.55.0...v1.55.1)

@@ -3,8 +3,7 @@ title: "Chunker"
description: "Split-chunking overlay remote"
---

{{< icon "fa fa-cut" >}}Chunker (BETA)
----------------------------------------
# {{< icon "fa fa-cut" >}}Chunker (BETA)

The `chunker` overlay transparently splits large files into smaller chunks
during upload to wrapped remote and transparently assembles them back
@@ -332,7 +331,7 @@ Files larger than chunk size will be split in chunks.
- Config: chunk_size
- Env Var: RCLONE_CHUNKER_CHUNK_SIZE
- Type: SizeSuffix
- Default: 2G
- Default: 2Gi

#### --chunker-hash-type

@@ -39,6 +39,7 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone backend](/commands/rclone_backend/) - Run a backend specific command.
* [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout.
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
* [rclone checksum](/commands/rclone_checksum/) - Checks the files in the source against a SUM file.
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible.
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied.

@@ -24,6 +24,9 @@ both remotes and check them against each other on the fly. This can
be useful for remotes that don't support hashes or if you really want
to check all the data.

If you supply the `--checkfile HASH` flag with a valid hash name,
the `source:path` must point to a text file in the SUM format.

If you supply the `--one-way` flag, it will only check that files in
the source match the files in the destination, not the other way
around. This means that extra files in the destination that are not in
@@ -53,6 +56,7 @@ rclone check source:path dest:path [flags]
## Options

```
  -C, --checkfile string   Treat source:path as a SUM file with hashes of given type
      --combined string    Make a combined report of changes to this file
      --differ string      Report all non-matching files to this file
      --download           Check by downloading rather than with hash.

68
docs/content/commands/rclone_checksum.md
Normal file
@@ -0,0 +1,68 @@
---
title: "rclone checksum"
description: "Checks the files in the source against a SUM file."
slug: rclone_checksum
url: /commands/rclone_checksum/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/checksum/ and as part of making a release run "make commanddocs"
---
# rclone checksum

Checks the files in the source against a SUM file.

## Synopsis

Checks that hashsums of source files match the SUM file.
It compares hashes (MD5, SHA1, etc) and logs a report of files which
don't match. It doesn't alter the file system.
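
For reference, a SUM file uses the same layout as the output of tools like
`md5sum` and `sha1sum`: one hash, whitespace, then the file path, per line.
A hypothetical example (hashes and paths are illustrative only):

    9e107d9d372bb6826bd81d3542a419d6  file1.txt
    e4d909c290d0fb1ca068ffaddf22cbd0  folderA/fileA1.txt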

If you supply the `--download` flag, it will download the data from remote
and calculate the contents hash on the fly. This can be useful for remotes
that don't support hashes or if you really want to check all the data.

If you supply the `--one-way` flag, it will only check that files in
the source match the files in the destination, not the other way
around. This means that extra files in the destination that are not in
the source will not be detected.

The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match`
and `--error` flags write paths, one per line, to the file name (or
stdout if it is `-`) supplied. What they write is described in the
help below. For example `--differ` will write all paths which are
present on both the source and destination but different.

The `--combined` flag will write a file (or stdout) which contains all
file paths with a symbol and then a space and then the path to tell
you what happened to it. These are reminiscent of diff files.

- `= path` means path was found in source and destination and was identical
- `- path` means path was missing on the source, so only in the destination
- `+ path` means path was missing on the destination, so only in the source
- `* path` means path was present in source and destination but different.
- `! path` means there was an error reading or hashing the source or dest.

```
rclone checksum <hash> sumfile src:path [flags]
```

## Options

```
      --combined string         Make a combined report of changes to this file
      --differ string           Report all non-matching files to this file
      --download                Check by hashing the contents.
      --error string            Report all files with errors (hashing or reading) to this file
  -h, --help                    help for checksum
      --match string            Report all matching files to this file
      --missing-on-dst string   Report all files missing from the destination to this file
      --missing-on-src string   Report all files missing from the source to this file
      --one-way                 Check one way only, source files must exist on remote
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

@@ -24,7 +24,7 @@ you would do:
If the remote uses OAuth the token will be updated, if you don't
require this add an extra parameter thus:

    rclone config update myremote swift env_auth=true config_refresh_token=false
    rclone config update myremote env_auth=true config_refresh_token=false

Note that if the config process would normally ask a question the
default is taken (unless `--non-interactive` is used). Each time

@@ -48,6 +48,7 @@ rclone hashsum <hash> remote:path [flags]

```
      --base64               Output base64 encoded hashsum
  -C, --checkfile string     Validate hashes against a given SUM file instead of printing them
      --download             Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
  -h, --help                 help for hashsum
      --output-file string   Output hashsums to a file rather than the terminal

@@ -29,6 +29,7 @@ rclone md5sum remote:path [flags]

```
      --base64               Output base64 encoded hashsum
  -C, --checkfile string     Validate hashes against a given SUM file instead of printing them
      --download             Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
  -h, --help                 help for md5sum
      --output-file string   Output hashsums to a file rather than the terminal

@@ -608,7 +608,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
      --poll-interval duration          Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
      --read-only                       Mount read-only.
      --uid uint32                      Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
      --umask int                       Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
      --umask int                       Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
      --vfs-cache-max-age duration      Max age of objects in the cache. (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix   Max total size of objects in the cache. (default off)
      --vfs-cache-mode CacheMode        Cache mode off|minimal|writes|full (default off)

@@ -33,6 +33,7 @@ Here are the keys - press '?' to toggle the help on and off
     a toggle average size in directory
     n,s,C,A sort by name,size,count,average size
     d delete file/directory
     y copy current path to clipboard
     Y display current path
     ^L refresh screen
     ? to toggle help on and off

@@ -35,6 +35,7 @@ See the [global flags page](/flags/) for global options not listed here.

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone serve dlna](/commands/rclone_serve_dlna/) - Serve remote:path over DLNA
* [rclone serve docker](/commands/rclone_serve_docker/) - Serve any remote on docker's volume plugin API.
* [rclone serve ftp](/commands/rclone_serve_ftp/) - Serve remote:path over FTP.
* [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP.
* [rclone serve restic](/commands/rclone_serve_restic/) - Serve the remote for restic's REST API.

@@ -319,7 +319,7 @@ rclone serve dlna remote:path [flags]
      --poll-interval duration          Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
      --read-only                       Mount read-only.
      --uid uint32                      Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
      --umask int                       Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
      --umask int                       Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
      --vfs-cache-max-age duration      Max age of objects in the cache. (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix   Max total size of objects in the cache. (default off)
      --vfs-cache-mode CacheMode        Cache mode off|minimal|writes|full (default off)

@@ -9,13 +9,49 @@ url: /commands/rclone_serve_docker/

Serve any remote on docker's volume plugin API.

# Synopsis
## Synopsis

rclone serve docker implements docker's volume plugin API.
This allows docker to use rclone as a data storage mechanism for various cloud providers.
This command implements the Docker volume plugin API allowing docker to use
rclone as a data storage mechanism for various cloud providers.
rclone provides [docker volume plugin](/docker) based on it.

# VFS - Virtual File System
To create a docker plugin, one must create a Unix or TCP socket that Docker
will look for when you use the plugin. The plugin then listens for commands
from the docker daemon and runs the corresponding code when necessary.
Docker plugins can run as a managed plugin under control of the docker daemon
or as an independent native service. For testing, you can just run it directly
from the command line, for example:
```
sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
```

Running `rclone serve docker` will create the said socket, listening for
commands from Docker to create the necessary Volumes. Normally you need not
give the `--socket-addr` flag. The API will listen on the unix domain socket
at `/run/docker/plugins/rclone.sock`. In the example above rclone will create
a TCP socket and a small file `/etc/docker/plugins/rclone.spec` containing
the socket address. We use `sudo` because both paths are writeable only by
the root user.

If you later decide to change the listening socket, the docker daemon must be
restarted to reconnect to `/run/docker/plugins/rclone.sock`
or parse the new `/etc/docker/plugins/rclone.spec`. Until you restart, any
volume related docker commands will time out trying to access the old socket.
Running directly is supported on **Linux only**, not on Windows or MacOS.
This is not a problem with managed plugin mode described in detail
in the [full documentation](https://rclone.org/docker).

The command will create volume mounts under the path given by `--base-dir`
(by default `/var/lib/docker-volumes/rclone` available only to root)
and maintain the JSON formatted file `docker-plugin.state` in the rclone cache
directory with book-keeping records of created and mounted volumes.

All mount and VFS options are submitted by the docker daemon via API, but
you can also provide defaults on the command line as well as set path to the
config file and cache directory or adjust logging verbosity.
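
Under the hood the volume plugin API is plain HTTP over that socket: Docker
first POSTs to `/Plugin.Activate` and then drives the plugin through
`/VolumeDriver.*` endpoints. A minimal Go sketch of the handshake, using only
the standard library (an illustration of the protocol, not rclone's
implementation; the socket path is an example):

```
package main

import (
	"net"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	// Docker's first call: the plugin announces which APIs it implements.
	mux.HandleFunc("/Plugin.Activate", func(w http.ResponseWriter, r *http.Request) {
		// Content type per the docker plugin docs (hedged from memory).
		w.Header().Set("Content-Type", "application/vnd.docker.plugins.v1+json")
		w.Write([]byte(`{"Implements": ["VolumeDriver"]}`))
	})
	// A real driver also handles /VolumeDriver.Create, .Mount, .Unmount,
	// .Remove, .Path, .List, .Get and .Capabilities.
	l, err := net.Listen("unix", "/run/docker/plugins/example.sock") // illustrative path
	if err != nil {
		panic(err)
	}
	if err := http.Serve(l, mux); err != nil {
		panic(err)
	}
}
```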
|
||||
|
||||
## VFS - Virtual File System
|
||||
|
||||
This command uses the VFS layer. This adapts the cloud storage objects
|
||||
that rclone uses into something which looks much more like a disk
|
||||
@@ -29,7 +65,7 @@ doing this there are various options explained below.
|
||||
The VFS layer also implements a directory cache - this caches info
|
||||
about files and directories (but not the data) in memory.
|
||||
|
||||
# VFS Directory Cache
|
||||
## VFS Directory Cache
|
||||
|
||||
Using the `--dir-cache-time` flag, you can control how long a
|
||||
directory should be considered up to date and not refreshed from the
|
||||
@@ -37,7 +73,7 @@ backend. Changes made through the mount will appear immediately or
|
||||
invalidate the cache.
|
||||
|
||||
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
|
||||
--poll-interval duration Time to wait between polling for changes.
|
||||
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
|
||||
|
||||
However, changes made directly on the cloud storage by the web
|
||||
interface or a different copy of rclone will only be picked up once
|
||||
@@ -60,7 +96,7 @@ Or individual files or directories:
|
||||
|
||||
rclone rc vfs/forget file=path/to/file dir=path/to/dir
|
||||
|
||||
# VFS File Buffering
|
||||
## VFS File Buffering
|
||||
|
||||
The `--buffer-size` flag determines the amount of memory,
|
||||
that will be used to buffer data in advance.
|
||||
@@ -77,7 +113,7 @@ be used.
|
||||
The maximum memory used by rclone for buffering can be up to
|
||||
`--buffer-size * open files`.
|
||||
|
||||
# VFS File Caching
## VFS File Caching

These flags control the VFS file caching options. File caching is
necessary to make the VFS layer appear compatible with a normal file

@@ -123,7 +159,7 @@ around this by giving each rclone its own cache hierarchy with
`--cache-dir`. You don't need to worry about this if the remotes in
use don't overlap.

## --vfs-cache-mode off
### --vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write
directly to the remote without caching anything on disk.

@@ -138,7 +174,7 @@ This will mean some operations are not possible
* Open modes O_APPEND, O_TRUNC are ignored
* If an upload fails it can't be retried

## --vfs-cache-mode minimal
### --vfs-cache-mode minimal

This is very similar to "off" except that files opened for read AND
write will be buffered to disk. This means that files opened for

@@ -151,7 +187,7 @@ These operations are not possible
* Files opened for write only will ignore O_APPEND, O_TRUNC
* If an upload fails it can't be retried

## --vfs-cache-mode writes
### --vfs-cache-mode writes

In this mode files opened for read only are still read directly from
the remote, write only and read/write files are buffered to disk

@@ -162,7 +198,7 @@ This mode should support all normal file system operations.
If an upload fails it will be retried at exponentially increasing
intervals up to 1 minute.

## --vfs-cache-mode full
### --vfs-cache-mode full

In this mode all reads and writes are buffered to and from disk. When
data is read from the remote this is buffered to disk as well.

@@ -190,7 +226,7 @@ FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.

# VFS Performance
## VFS Performance

These flags may be used to enable/disable features of the VFS for
performance or other reasons.
@@ -231,7 +267,7 @@ modified files from cache (the related global flag --checkers have no effect on

    --transfers int Number of file transfers to run in parallel. (default 4)

# VFS Case Sensitivity
## VFS Case Sensitivity

Linux file systems are case-sensitive: two files can differ only
by case, and the exact case must be used when opening a file.

@@ -266,7 +302,7 @@ If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".

# Alternate report of used bytes
## Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes used.
If you need this information to be available when running `df` on the

@@ -284,7 +320,7 @@ calls resulting in extra charges. Use it as a last resort and only with caching.
rclone serve docker [flags]
```

# Options
## Options

```
--allow-non-empty Allow mounting over a non-empty directory. Not supported on Windows.

@@ -292,7 +328,7 @@ rclone serve docker [flags]
--allow-root Allow access to root user. Not supported on Windows.
--async-read Use asynchronous reads. Not supported on Windows. (default true)
--attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
--base-dir string base directory for volumes (default "/var/lib/docker-plugins/rclone/volumes")
--base-dir string base directory for volumes (default "/var/lib/docker-volumes/rclone")
--daemon Run mount as a daemon (background mode). Not supported on Windows.
--daemon-timeout duration Time limit for rclone to respond to kernel. Not supported on Windows.
--debug-fuse Debug the FUSE internals - needs -v.

@@ -318,7 +354,7 @@ rclone serve docker [flags]
--socket-addr string <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)
--socket-gid int GID for unix socket (default: current process GID) (default 1000)
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)

@@ -337,7 +373,7 @@ rclone serve docker [flags]

See the [global flags page](/flags/) for global options not listed here.

# SEE ALSO
## SEE ALSO

* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
@@ -403,7 +403,7 @@ rclone serve ftp remote:path [flags]
--public-ip string Public IP address to advertise for passive connections.
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication. (default "anonymous")
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)

@@ -398,7 +398,7 @@ rclone serve http remote:path [flags]
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--template string User Specified Template.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)

@@ -419,7 +419,7 @@ rclone serve sftp remote:path [flags]
--read-only Mount read-only.
--stdio Run an sftp server on stdin/stdout
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)

@@ -491,7 +491,7 @@ rclone serve webdav remote:path [flags]
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--template string User Specified Template.
--uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 18)
--umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
--user string User name for authentication.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
@@ -29,6 +29,7 @@ rclone sha1sum remote:path [flags]

```
--base64 Output base64 encoded hashsum
-C, --checkfile string Validate hashes against a given SUM file instead of printing them
--download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
-h, --help help for sha1sum
--output-file string Output hashsums to a file rather than the terminal

@@ -37,6 +37,6 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone test changenotify](/commands/rclone_test_changenotify/) - Log any change notify requests for the remote passed in.
* [rclone test histogram](/commands/rclone_test_histogram/) - Makes a histogram of file name characters.
* [rclone test info](/commands/rclone_test_info/) - Discovers file name or other limitations for paths.
* [rclone test makefiles](/commands/rclone_test_makefiles/) - Make a random file hierarchy in <dir>
* [rclone test makefiles](/commands/rclone_test_makefiles/) - Make a random file hierarchy in a directory
* [rclone test memory](/commands/rclone_test_memory/) - Load all the objects at remote:path into memory and report memory stats.
@@ -1,13 +1,13 @@
---
title: "rclone test makefiles"
description: "Make a random file hierarchy in <dir>"
description: "Make a random file hierarchy in a directory"
slug: rclone_test_makefiles
url: /commands/rclone_test_makefiles/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/test/makefiles/ and as part of making a release run "make commanddocs"
---
# rclone test makefiles

Make a random file hierarchy in <dir>
Make a random file hierarchy in a directory

```
rclone test makefiles <dir> [flags]
@@ -3,8 +3,7 @@ title: "Compress"
description: "Compression Remote"
---

{{< icon "fas fa-compress" >}}Compress (Experimental)
-----------------------------------------
# {{< icon "fas fa-compress" >}}Compress (Experimental)

### Warning
This remote is currently **experimental**. Things may break and data may be lost. Anything you do with this remote is

@@ -142,6 +141,6 @@ Some remotes don't allow the upload of files with unknown size.
- Config: ram_cache_limit
- Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT
- Type: SizeSuffix
- Default: 20M
- Default: 20Mi

{{< rem autogenerated options stop >}}

@@ -3,8 +3,7 @@ title: "Crypt"
description: "Encryption overlay remote"
---

{{< icon "fa fa-lock" >}}Crypt
----------------------------------------
# {{< icon "fa fa-lock" >}}Crypt

Rclone `crypt` remotes encrypt and decrypt other remotes.

docs/content/docker.md (new file, 526 lines)

@@ -0,0 +1,526 @@
---
title: "Docker Volume Plugin"
description: "Docker Volume Plugin"
---

# Docker Volume Plugin

## Introduction

Docker 1.9 added support for creating
[named volumes](https://docs.docker.com/storage/volumes/) via the
[command-line interface](https://docs.docker.com/engine/reference/commandline/volume_create/)
and mounting them in containers as a way to share data between them.
Since Docker 1.10 you can create named volumes with
[Docker Compose](https://docs.docker.com/compose/) by descriptions in
[docker-compose.yml](https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference)
files for use by container groups on a single host.
As of Docker 1.12 volumes are supported by
[Docker Swarm](https://docs.docker.com/engine/swarm/key-concepts/)
included with Docker Engine and created from descriptions in
[swarm compose v3](https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference)
files for use with _swarm stacks_ across multiple cluster nodes.

[Docker Volume Plugins](https://docs.docker.com/engine/extend/plugins_volume/)
augment the default `local` volume driver included in Docker with stateful
volumes shared across containers and hosts. Unlike local volumes, your
data will _not_ be deleted when such a volume is removed. Plugins can run
managed by the docker daemon, as a native system service
(under systemd, _sysv_ or _upstart_) or as a standalone executable.
Rclone can run as a docker volume plugin in all these modes.
It interacts with the local docker daemon
via the [plugin API](https://docs.docker.com/engine/extend/plugin_api/) and
handles mounting of remote file systems into docker containers, so it must
run on the same host as the docker daemon or on every Swarm node.
## Getting started

In the first example we will use the [SFTP](/sftp/)
rclone volume with the Docker engine on a standalone Ubuntu machine.

Start by [installing Docker](https://docs.docker.com/engine/install/)
on the host.

The _FUSE_ driver is a prerequisite for rclone mounting and should be
installed on the host:
```
sudo apt-get -y install fuse
```

Create the two directories required by the rclone docker plugin:
```
sudo mkdir -p /var/lib/docker-plugins/rclone/config
sudo mkdir -p /var/lib/docker-plugins/rclone/cache
```

Install the managed rclone docker plugin:
```
docker plugin install rclone/docker-volume-rclone args="-v" --alias rclone --grant-all-permissions
docker plugin list
```

Create your [SFTP volume](/sftp/#standard-options):
```
docker volume create firstvolume -d rclone -o type=sftp -o sftp-host=_hostname_ -o sftp-user=_username_ -o sftp-pass=_password_ -o allow-other=true
```

Note that since all options are static, you don't even have to run
`rclone config` or create the `rclone.conf` file (but the `config` directory
should still be present). In the simplest case you can use `localhost`
as _hostname_ and your SSH credentials as _username_ and _password_.
You can also change the remote path to your home directory on the host,
for example `-o path=/home/username`.

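Putting those hints together, a home-directory volume might be created
like this (a sketch only: the user name and password are placeholders to
replace with your own values):
```
docker volume create homedir -d rclone -o type=sftp -o sftp-host=localhost -o sftp-user=username -o sftp-pass=password -o path=/home/username -o allow-other=true
```
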
Time to create a test container and mount the volume into it:
```
docker run --rm -it -v firstvolume:/mnt --workdir /mnt ubuntu:latest bash
```

If all goes well, you will enter the new container and land right in
the mounted SFTP remote. You can type `ls` to list the mounted directory
or otherwise play with it. Type `exit` when you are done.
The container will stop but the volume will stay, ready to be reused.
When it's not needed anymore, remove it:
```
docker volume list
docker volume remove firstvolume
```

Now let us try **something more elaborate**:
a [Google Drive](/drive/) volume on a multi-node Docker Swarm.

You should start by installing Docker and FUSE, creating the plugin
directories and installing the rclone plugin on _every_ swarm node.
Then [set up the Swarm](https://docs.docker.com/engine/swarm/swarm-mode/).

Google Drive volumes need an access token, which can be set up via a web
browser and will be periodically renewed by rclone. The managed
plugin cannot run a browser so we will use a technique similar to the
[rclone setup on a headless box](/remote_setup/).

Run [rclone config](/commands/rclone_config_create/)
on _another_ machine equipped with a _web browser_ and graphical user interface.
Create the [Google Drive remote](/drive/#standard-options).
When done, transfer the resulting `rclone.conf` to the Swarm cluster
and save it as `/var/lib/docker-plugins/rclone/config/rclone.conf`
on _every_ node. By default this location is accessible only to the
root user, so you will need appropriate privileges. The resulting config
will look like this:
```
[gdrive]
type = drive
scope = drive
drive_id = 1234567...
root_folder_id = 0Abcd...
token = {"access_token":...}
```

Now create a file named `example.yml` with a swarm stack description
like this:
```
version: '3'
services:
  heimdall:
    image: linuxserver/heimdall:latest
    ports: [8080:80]
    volumes: [configdata:/config]
volumes:
  configdata:
    driver: rclone
    driver_opts:
      remote: 'gdrive:heimdall'
      allow_other: 'true'
      vfs_cache_mode: full
      poll_interval: 0
```

and run the stack:
```
docker stack deploy example -c ./example.yml
```

After a few seconds docker will spread the parsed stack description
over the cluster, create the `example_heimdall` service on port _8080_,
run service containers on one or more cluster nodes and request
the `example_configdata` volume from rclone plugins on the node hosts.
You can use the following commands to confirm the results:
```
docker service ls
docker service ps example_heimdall
docker volume ls
```

Point your browser to `http://cluster.host.address:8080` and play with
the service. Stop it with `docker stack remove example` when you are done.
Note that the `example_configdata` volume(s) created on demand at the
cluster nodes will not be automatically removed together with the stack
but will stay for future reuse. You can remove them manually by invoking
the `docker volume remove example_configdata` command on every node.
## Creating Volumes via CLI

Volumes can be created with [docker volume create](https://docs.docker.com/engine/reference/commandline/volume_create/).
Here are a few examples:
```
docker volume create vol1 -d rclone -o remote=storj: -o vfs-cache-mode=full
docker volume create vol2 -d rclone -o remote=:tardigrade,access_grant=xxx:heimdall
docker volume create vol3 -d rclone -o type=tardigrade -o path=heimdall -o tardigrade-access-grant=xxx -o poll-interval=0
```

Note the `-d rclone` flag that tells docker to request the volume from the
rclone driver. This works even if you installed the managed driver by its full
name `rclone/docker-volume-rclone`, because you provided the `--alias rclone`
option.

Volumes can be inspected as follows:
```
docker volume list
docker volume inspect vol1
```

## Volume Configuration

Rclone flags and volume options are set via the `-o` flag to the
`docker volume create` command. They include backend-specific parameters
as well as mount and _VFS_ options. There are also a few
special `-o` options:
`remote`, `fs`, `type`, `path`, `mount-type` and `persist`.

`remote` determines an existing remote name from the config file, with
a trailing colon and optionally with a remote path. See the full syntax in
the [rclone documentation](/docs/#syntax-of-remote-paths).
This option can be aliased as `fs` to prevent confusion with the
_remote_ parameter of such backends as _crypt_ or _alias_.

The `remote=:backend:dir/subdir` syntax can be used to create
[on-the-fly (config-less) remotes](/docs/#backend-path-to-dir),
while the `type` and `path` options provide a simpler alternative for this.
Using the two split options
```
-o type=backend -o path=dir/subdir
```
is equivalent to the combined syntax
```
-o remote=:backend:dir/subdir
```
but is arguably easier to parameterize in scripts.
The `path` part is optional.

[Mount and VFS options](/commands/rclone_serve_docker/#options)
as well as [backend parameters](/flags/#backend-flags) are named
like their twin command-line flags without the `--` CLI prefix.
Optionally you can use underscores instead of dashes in option names.
For example, `--vfs-cache-mode full` becomes
`-o vfs-cache-mode=full` or `-o vfs_cache_mode=full`.
Boolean CLI flags without a value will gain the `true` value, e.g.
`--allow-other` becomes `-o allow-other=true` or `-o allow_other=true`.

Please note that you can provide parameters only for the backend immediately
referenced by the backend type of the mounted `remote`.
If this is a wrapping backend like _alias, chunker or crypt_, you cannot
provide options for the wrapped remote or backend. This limitation is
imposed by the rclone connection string parser. The only workaround is to
feed the plugin an `rclone.conf` or configure plugin arguments (see below);
the config file route is sketched just below.

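As an illustrative sketch of that workaround (remote names and the
encrypted path are made up here, not mandated by the plugin), the wrapped
remote goes into the plugin's `rclone.conf` and the volume then references
it by name:
```
# /var/lib/docker-plugins/rclone/config/rclone.conf
[gdrive]
type = drive
scope = drive
token = {"access_token":...}

[gdriveCrypt]
type = crypt
remote = gdrive:encrypted
password = ...
```
```
docker volume create cryptvol -d rclone -o remote=gdriveCrypt: -o allow-other=true
```
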
## Special Volume Options

`mount-type` determines the mount method and in general can be one of:
`mount`, `cmount`, or `mount2`. This can be aliased as `mount_type`.
It should be noted that the managed rclone docker plugin currently does
not support the `cmount` method and `mount2` is rarely needed.
This option defaults to the first found method, which is usually `mount`,
so you generally won't need it; an illustrative use is shown below.

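For example (the volume and remote names here are placeholders), forcing
the `mount2` method would look like:
```
docker volume create vol4 -d rclone -o remote=gdrive: -o mount-type=mount2
```
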
`persist` is a reserved boolean (true/false) option.
In the future it will allow on-the-fly remotes to be persisted in the
plugin's `rclone.conf` file.
## Connection Strings

The `remote` value can be extended
with [connection strings](/docs/#connection-strings)
as an alternative way to supply backend parameters. This is equivalent
to the `-o` backend options with one _syntactic difference_.
Inside a connection string the backend prefix must be dropped from parameter
names, but in the `-o param=value` array it must be present.
For instance, compare the following option array
```
-o remote=:sftp:/home -o sftp-host=localhost
```
with the equivalent connection string:
```
-o remote=:sftp,host=localhost:/home
```
This difference exists because flag options `-o key=val` include not only
backend parameters but also mount/VFS flags and possibly other settings.
It also allows the `remote` option to be discriminated from `crypt-remote`
(or similarly named backend parameters) and arguably simplifies scripting
due to clearer value substitution.
## Using with Swarm or Compose

Both _Docker Swarm_ and _Docker Compose_ use
[YAML](http://yaml.org/spec/1.2/spec.html)-formatted text files to describe
groups (stacks) of containers, their properties, networks and volumes.
_Compose_ uses the [compose v2](https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference) format,
_Swarm_ uses the [compose v3](https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference) format.
They are mostly similar; the differences are explained in the
[docker documentation](https://docs.docker.com/compose/compose-file/compose-versioning/#upgrading).

Volumes are described by the children of the top-level `volumes:` node.
Each of them should be named after its volume and have at least two
elements, the self-explanatory `driver: rclone` value and the
`driver_opts:` structure playing the same role as `-o key=val` CLI flags:

```
volumes:
  volume_name_1:
    driver: rclone
    driver_opts:
      remote: 'gdrive:'
      allow_other: 'true'
      vfs_cache_mode: full
      token: '{"type": "borrower", "expires": "2021-12-31"}'
      poll_interval: 0
```

Notice a few important details:
- YAML prefers `_` in option names instead of `-`.
- YAML treats single and double quotes interchangeably.
  Simple strings and integers can be left unquoted.
- Boolean values must be quoted like `'true'` or `"false"` because
  these two words are reserved by YAML.
- The filesystem string is keyed with `remote` (or with `fs`).
  Normally you can omit quotes here, but if the string ends with a colon,
  you **must** quote it like `remote: "storage_box:"`.
- YAML is picky about surrounding braces in values, as this is in fact
  another [syntax for key/value mappings](http://yaml.org/spec/1.2/spec.html#id2790832).
  For example, JSON access tokens usually contain double quotes and
  surrounding braces, so you must put them in single quotes.
## Installing as Managed Plugin

The docker daemon can install plugins from an image registry and run them
in managed mode. We maintain the
[docker-volume-rclone](https://hub.docker.com/p/rclone/docker-volume-rclone/)
plugin image on [Docker Hub](https://hub.docker.com).

The plugin requires the presence of two directories on the host before it can
be installed. Note that the plugin will **not** create them automatically.
By default they must exist on the host at the following locations
(though you can tweak the paths), and can be created as shown below:
- `/var/lib/docker-plugins/rclone/config`
  is reserved for the `rclone.conf` config file and **must** exist
  even if it's empty and the config file is not present.
- `/var/lib/docker-plugins/rclone/cache`
  holds the plugin state file as well as optional VFS caches.

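For instance, matching the default locations above:
```
sudo mkdir -p /var/lib/docker-plugins/rclone/config
sudo mkdir -p /var/lib/docker-plugins/rclone/cache
```
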
You can [install the managed plugin](https://docs.docker.com/engine/reference/commandline/plugin_install/)
with default settings as follows:
```
docker plugin install rclone/docker-volume-rclone:latest --grant-all-permissions --alias rclone
```

The managed plugin is in fact a special container running in a namespace
separate from normal docker containers. Inside it runs the `rclone serve docker`
command. The config and cache directories are bind-mounted into the
container at start. The docker daemon connects to a unix socket created
by the command inside the container. The command creates on-demand remote
mounts right inside, then docker machinery propagates them through kernel
mount namespaces and bind-mounts into requesting user containers.
You can tweak a few plugin settings after installation when it's disabled
(not in use), for instance:
```
docker plugin disable rclone
docker plugin set rclone RCLONE_VERBOSE=2 config=/etc/rclone args="--vfs-cache-mode=writes --allow-other"
docker plugin enable rclone
docker plugin inspect rclone
```

Note that if docker refuses to disable the plugin, you should find and
remove all active volumes connected with it as well as containers and
swarm services that use them. This is rather tedious, so please plan
carefully in advance.

You can tweak the following settings:
`args`, `config`, `cache`, and `RCLONE_VERBOSE`.
It's _your_ task to keep plugin settings in sync across swarm cluster nodes.

`args` sets command-line arguments for the `rclone serve docker` command
(_none_ by default). Arguments should be separated by spaces, so you will
normally want to put them in quotes on the
[docker plugin set](https://docs.docker.com/engine/reference/commandline/plugin_set/)
command line. Both [serve docker flags](/commands/rclone_serve_docker/#options)
and [generic rclone flags](/flags/) are supported, including backend
parameters that will be used as defaults for volume creation.
Note that the plugin will fail (due to [this docker bug](https://github.com/moby/moby/blob/v20.10.7/plugin/v2/plugin.go#L195))
if the `args` value is empty. Use e.g. `args="-v"` as a workaround.
`config=/host/dir` sets an alternative host location for the config directory.
The plugin will look for `rclone.conf` here. It's not an error if the config
file is not present, but the directory must exist. Please note that the plugin
can periodically rewrite the config file, for example when it renews
storage access tokens. Keep this in mind and try to avoid races between
the plugin and other instances of rclone on the host that might try to
change the config simultaneously, resulting in a corrupted `rclone.conf`.
You can also put stuff like private key files for SFTP remotes in this
directory. Just note that it's bind-mounted inside the plugin container
at the predefined path `/data/config`. For example, if your key file is
named `sftp-box1.key` on the host, the corresponding volume config option
should read `-o sftp-key-file=/data/config/sftp-box1.key`.

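A volume using that key file might then be created like this (the host
name and user are placeholders):
```
docker volume create sshvol -d rclone -o type=sftp -o sftp-host=example.com -o sftp-user=username -o sftp-key-file=/data/config/sftp-box1.key
```
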
`cache=/host/dir` sets an alternative host location for the _cache_ directory.
The plugin will keep VFS caches here. It will also create and maintain
the `docker-plugin.state` file in this directory. When the plugin is
restarted or reinstalled, it will look in this file to recreate any volumes
that existed previously. However, they will not be re-mounted into
consuming containers after restart. Usually this is not a problem, as
the docker daemon will normally restart affected user containers after
failures, daemon restarts or host reboots.

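For example (the host path here is illustrative):
```
docker plugin disable rclone
docker plugin set rclone cache=/var/cache/rclone-plugin
docker plugin enable rclone
```
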
`RCLONE_VERBOSE` sets plugin verbosity from `0` (errors only, by default)
to `2` (debugging). Verbosity can also be tweaked via `args="-v [-v] ..."`.
Since arguments are more generic, you will rarely need this setting.
The plugin output by default feeds the docker daemon log on the local host.
Log entries are reflected as _errors_ in the docker log but retain their
actual level assigned by rclone in the encapsulated message string.

You can set custom plugin options right when you install it, _in one go_:
```
docker plugin remove rclone
docker plugin install rclone/docker-volume-rclone:latest \
       --alias rclone --grant-all-permissions \
       args="-v --allow-other" config=/etc/rclone
docker plugin inspect rclone
```
## Healthchecks

The docker plugin volume protocol doesn't provide a way for plugins
to inform the docker daemon that a volume is (un-)available.
As a workaround you can set up a healthcheck to verify that the mount
is responding, for example:
```
services:
  my_service:
    image: my_image
    healthcheck:
      test: ls /path/to/rclone/mount || exit 1
      interval: 1m
      timeout: 15s
      retries: 3
      start_period: 15s
```
## Running Plugin under Systemd

In most cases you should prefer managed mode. Moreover, macOS and Windows
do not support native Docker plugins. Please use managed mode on these
systems. Proceed further only if you are on Linux.

First, [install rclone](/install/).
You can just run it (type `rclone serve docker` and hit enter) as a test.

Install _FUSE_:
```
sudo apt-get -y install fuse
```

Download the two systemd configuration files:
[docker-volume-rclone.service](https://raw.githubusercontent.com/rclone/rclone/master/cmd/serve/docker/contrib/systemd/docker-volume-rclone.service)
and [docker-volume-rclone.socket](https://raw.githubusercontent.com/rclone/rclone/master/cmd/serve/docker/contrib/systemd/docker-volume-rclone.socket).

Put them in the `/etc/systemd/system/` directory:
```
cp docker-volume-rclone.service /etc/systemd/system/
cp docker-volume-rclone.socket /etc/systemd/system/
```

Please note that all commands in this section must be run as _root_, but
we omit the `sudo` prefix for brevity.
Now create the directories required by the service:
```
mkdir -p /var/lib/docker-volumes/rclone
mkdir -p /var/lib/docker-plugins/rclone/config
mkdir -p /var/lib/docker-plugins/rclone/cache
```
Run the docker plugin service in socket activated mode:
```
systemctl daemon-reload
systemctl start docker-volume-rclone.service
systemctl enable docker-volume-rclone.socket
systemctl start docker-volume-rclone.socket
systemctl restart docker
```

Or run the service directly:
- run `systemctl daemon-reload` to let systemd pick up the new config
- run `systemctl enable docker-volume-rclone.service` to make the new
  service start automatically when you power on your machine.
- run `systemctl start docker-volume-rclone.service`
  to start the service now.
- run `systemctl restart docker` to restart the docker daemon and let it
  detect the new plugin socket. Note that this step is not needed in
  managed mode, where docker knows about plugin state changes.

The two methods are equivalent from the user perspective, but I personally
prefer socket activation. Either way, you can check the result as shown below.

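For example (the socket path matches the `--socket-addr` default listed
earlier):
```
systemctl status docker-volume-rclone.socket
ls -l /run/docker/plugins/rclone.sock
```
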
## Troubleshooting

You can [see managed plugin settings](https://docs.docker.com/engine/extend/#debugging-plugins)
with
```
docker plugin list
docker plugin inspect rclone
```
Note that docker (including the latest 20.10.7) will not show the actual
values of `args`, just the defaults.

Use `journalctl --unit docker` to see managed plugin output as part of
the docker daemon log. Note that docker reflects plugin lines as _errors_,
but their actual level can be seen from the encapsulated message string.

You will usually install the latest version of the managed plugin.
Use the following commands to print the actual installed version:
```
PLUGID=$(docker plugin list --no-trunc | awk '/rclone/{print$1}')
sudo runc --root /run/docker/runtime-runc/plugins.moby exec $PLUGID rclone version
```

You can even use `runc` to run a shell inside the plugin container:
```
sudo runc --root /run/docker/runtime-runc/plugins.moby exec --tty $PLUGID bash
```

You can also use curl to check the plugin socket connectivity:
```
docker plugin list --no-trunc
PLUGID=123abc...
sudo curl -H Content-Type:application/json -XPOST -d {} --unix-socket /run/docker/plugins/$PLUGID/rclone.sock http://localhost/Plugin.Activate
```
though this is rarely needed.

Finally I'd like to mention a _caveat with updating volume settings_.
Docker CLI does not have a dedicated command like `docker volume update`.
It may be tempting to invoke `docker volume create` with updated options
on an existing volume, but there is a gotcha. The command will do nothing,
it won't even return an error. I hope that the docker maintainers will fix
this some day. In the meantime be aware that you must remove your volume
before recreating it with new settings:
```
docker volume remove my_vol
docker volume create my_vol -d rclone -o opt1=new_val1 ...
```

and verify that the settings did update:
```
docker volume list
docker volume inspect my_vol
```

If docker refuses to remove the volume, you should find the containers
or swarm services that use it and stop them first.
@@ -239,6 +239,13 @@ However using the connection string syntax, this does work.

    rclone copy "gdrive,shared_with_me:shared-file.txt" gdrive:

Note that the connection string only affects the options of the immediate
backend. If for example gdriveCrypt is a crypt based on gdrive, then the
following command **will not work** as intended, because
`shared_with_me` is ignored by the crypt backend:

    rclone copy "gdriveCrypt,shared_with_me:shared-file.txt" gdriveCrypt:

The connection strings have the following syntax

    remote,parameter=value,parameter2=value2:path/to/dir

@@ -2129,6 +2136,8 @@ Or to always use the trash in drive `--drive-use-trash`, set
The same parser is used for the options and the environment variables
so they take exactly the same form.

The options set by environment variables can be seen with the `-vv` flag, e.g. `rclone version -vv`.

### Config file ###

You can set defaults for values in the config file on an individual

@@ -2155,6 +2164,11 @@ mys3:
Note that if you want to create a remote using environment variables
you must create the `..._TYPE` variable as above.

Note that you can only set the options of the immediate backend,
so RCLONE_CONFIG_MYS3CRYPT_ACCESS_KEY_ID has no effect, if myS3Crypt is
a crypt remote based on an S3 remote. However RCLONE_S3_ACCESS_KEY_ID will
set the access key of all remotes using S3, including myS3Crypt.

Note also that now rclone has [connection strings](#connection-strings),
it is probably easier to use those instead which makes the above example
@@ -2165,16 +2179,20 @@ it is probably easier to use those instead which makes the above example
The various different methods of backend configuration are read in
this order and the first one with a value is used.

- Flag values as supplied on the command line, e.g. `--drive-use-trash`.
- Remote specific environment vars, e.g. `RCLONE_CONFIG_MYREMOTE_USE_TRASH` (see above).
- Backend specific environment vars, e.g. `RCLONE_DRIVE_USE_TRASH`.
- Config file, e.g. `use_trash = false`.
- Default values, e.g. `true` - these can't be changed.
- Parameters in connection strings, e.g. `myRemote,skip_links:`
- Flag values as supplied on the command line, e.g. `--skip-links`
- Remote specific environment vars, e.g. `RCLONE_CONFIG_MYREMOTE_SKIP_LINKS` (see above).
- Backend specific environment vars, e.g. `RCLONE_LOCAL_SKIP_LINKS`.
- Backend generic environment vars, e.g. `RCLONE_SKIP_LINKS`.
- Config file, e.g. `skip_links = true`.
- Default values, e.g. `false` - these can't be changed.

So if both `--drive-use-trash` is supplied on the config line and an
environment variable `RCLONE_DRIVE_USE_TRASH` is set, the command line
So if both `--skip-links` is supplied on the command line and an
environment variable `RCLONE_LOCAL_SKIP_LINKS` is set, the command line
flag will take preference.

The backend configurations set by environment variables can be seen with the `-vv` flag, e.g. `rclone about myRemote: -vv`.

For non backend configuration the order is as follows:
- Flag values as supplied on the command line, e.g. `--stats 5s`.

@@ -2187,4 +2205,7 @@ For non backend configuration the order is as follows:
- `HTTP_PROXY`, `HTTPS_PROXY` and `NO_PROXY` (or the lowercase versions thereof).
- `HTTPS_PROXY` takes precedence over `HTTP_PROXY` for https requests.
- The environment values may be either a complete URL or a "host[:port]", in which case the "http" scheme is assumed.
- `USER` and `LOGNAME` values are used as fallbacks for the current username. The primary method for looking up the username is OS-specific: Windows API on Windows, real user ID in /etc/passwd on Unix systems. In the documentation the current username is simply referred to as `$USER`.
- `RCLONE_CONFIG_DIR` - rclone **sets** this variable for use in config files and sub processes to point to the directory holding the config file.

The options set by environment variables can be seen with the `-vv` and `--log-level=DEBUG` flags, e.g. `rclone version -vv`.
@@ -3,8 +3,7 @@ title: "Google drive"
description: "Rclone docs for Google drive"
---

{{< icon "fab fa-google" >}} Google Drive
-----------------------------------------
# {{< icon "fab fa-google" >}} Google Drive

Paths are specified as `drive:path`

@@ -868,7 +867,7 @@ Cutoff for switching to chunked upload
- Config: upload_cutoff
- Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 8M
- Default: 8Mi

#### --drive-chunk-size

@@ -882,7 +881,7 @@ Reducing this will reduce memory usage but decrease performance.
- Config: chunk_size
- Env Var: RCLONE_DRIVE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 8M
- Default: 8Mi

#### --drive-acknowledge-abuse
@@ -3,8 +3,7 @@ title: "Dropbox"
description: "Rclone docs for Dropbox"
---

{{< icon "fab fa-dropbox" >}} Dropbox
---------------------------------
# {{< icon "fab fa-dropbox" >}} Dropbox

Paths are specified as `remote:path`

@@ -238,7 +237,7 @@ Leave blank to use the provider defaults.

#### --dropbox-chunk-size

Upload chunk size. (< 150M).
Upload chunk size. (< 150Mi).

Any files larger than this will be uploaded in chunks of this size.

@@ -250,7 +249,7 @@ memory. It can be set smaller if you are tight on memory.
- Config: chunk_size
- Env Var: RCLONE_DROPBOX_CHUNK_SIZE
- Type: SizeSuffix
- Default: 48M
- Default: 48Mi

#### --dropbox-impersonate

@@ -309,6 +308,75 @@ shared folder.
- Type: bool
- Default: false

#### --dropbox-batch-mode

Upload file batching sync|async|off.

This sets the batch mode used by rclone.

For full info see [the main docs](https://rclone.org/dropbox/#batch-mode)

This has 3 possible values

- off - no batching
- sync - batch uploads and check completion (default)
- async - batch upload and don't check completion

Rclone will close any outstanding batches when it exits, which may cause
a delay on quit.

- Config: batch_mode
- Env Var: RCLONE_DROPBOX_BATCH_MODE
- Type: string
- Default: "sync"
#### --dropbox-batch-size

Max number of files in upload batch.

This sets the batch size of files to upload. It has to be less than 1000.

By default this is 0, which means rclone will calculate the batch size
depending on the setting of batch_mode.

- batch_mode: async - default batch_size is 100
- batch_mode: sync - default batch_size is the same as --transfers
- batch_mode: off - not in use

Rclone will close any outstanding batches when it exits, which may cause
a delay on quit.

Setting this is a great idea if you are uploading lots of small files
as it will make them a lot quicker. You can use --transfers 32 to
maximise throughput.

- Config: batch_size
- Env Var: RCLONE_DROPBOX_BATCH_SIZE
- Type: int
- Default: 0

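As a sketch combining the hints above (the source path and remote name
are placeholders), a bulk upload of small files might use:

    rclone copy /path/to/small-files dropbox:backup --transfers 32 --dropbox-batch-mode async --dropbox-batch-size 100
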
#### --dropbox-batch-timeout

Max time to allow an idle upload batch before uploading

If an upload batch is idle for more than this long then it will be
uploaded.

The default for this is 0, which means rclone will choose a sensible
default based on the batch_mode in use.

- batch_mode: async - default batch_timeout is 500ms
- batch_mode: sync - default batch_timeout is 10s
- batch_mode: off - not in use

- Config: batch_timeout
- Env Var: RCLONE_DROPBOX_BATCH_TIMEOUT
- Type: Duration
- Default: 0s

#### --dropbox-encoding

This sets the encoding for the backend.
@@ -3,8 +3,7 @@ title: "1Fichier"
description: "Rclone docs for 1Fichier"
---

{{< icon "fa fa-archive" >}} 1Fichier
-----------------------------------------
# {{< icon "fa fa-archive" >}} 1Fichier

This is a backend for the [1fichier](https://1fichier.com) cloud
storage service. Note that a Premium subscription is required to use

@@ -139,6 +138,28 @@ If you want to download a shared folder, add this parameter
- Type: string
- Default: ""

#### --fichier-file-password

If you want to download a shared file that is password protected, add this parameter

**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).

- Config: file_password
- Env Var: RCLONE_FICHIER_FILE_PASSWORD
- Type: string
- Default: ""

#### --fichier-folder-password

If you want to list the files in a shared folder that is password protected, add this parameter

**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).

- Config: folder_password
- Env Var: RCLONE_FICHIER_FOLDER_PASSWORD
- Type: string
- Default: ""

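For example, you can generate the obscured form of a password (the value
here is illustrative) and paste the output into your config:

    rclone obscure 'folder-secret'
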
#### --fichier-encoding

This sets the encoding for the backend.
@@ -3,8 +3,7 @@ title: "Enterprise File Fabric"
description: "Rclone docs for the Enterprise File Fabric backend"
---

{{< icon "fa fa-cloud" >}} Enterprise File Fabric
-----------------------------------------
# {{< icon "fa fa-cloud" >}} Enterprise File Fabric

This backend supports [Storage Made Easy's Enterprise File
Fabric™](https://storagemadeeasy.com/about/) which provides a software

@@ -154,7 +154,7 @@ These flags are available for every command.
--use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.56.0-beta.5531.41f561bf2.pr-commanddocs")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.56.0")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -311,6 +311,8 @@ and may be set in the config file.
--dropbox-token-url string Token server url.
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
--fichier-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
--fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
--fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
--filefabric-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token

@@ -375,6 +377,7 @@ and may be set in the config file.
--jottacloud-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10Mi)
--jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them.
--jottacloud-trashed-only Only show files that are in the trash.
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails. (default 10Mi)
--koofr-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)

@@ -587,7 +590,7 @@ and may be set in the config file.
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-encoding MultiEncoder This sets the encoding for the backend. (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to. You'll have to use the region your organization is registered in.
--zoho-region string Zoho region to connect to.
--zoho-token string OAuth Access Token as a JSON blob.
--zoho-token-url string Token server url.
```
@@ -3,8 +3,7 @@ title: "FTP"
description: "Rclone docs for FTP backend"
---

{{< icon "fa fa-file" >}} FTP
------------------------------
# {{< icon "fa fa-file" >}} FTP

FTP is the File Transfer Protocol. Rclone FTP support is provided using the
[github.com/jlaffaye/ftp](https://godoc.org/github.com/jlaffaye/ftp)

@@ -3,8 +3,7 @@ title: "Google Cloud Storage"
description: "Rclone docs for Google Cloud Storage"
---

{{< icon "fab fa-google" >}} Google Cloud Storage
-------------------------------------------------
# {{< icon "fab fa-google" >}} Google Cloud Storage

Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
@@ -3,8 +3,7 @@ title: "Google Photos"
description: "Rclone docs for Google Photos"
---

{{< icon "fa fa-images" >}} Google Photos
-------------------------------------------------
# {{< icon "fa fa-images" >}} Google Photos

The rclone backend for [Google Photos](https://www.google.com/photos/about/) is
a specialized backend for transferring photos and videos to and from

@@ -3,8 +3,7 @@ title: "HDFS Remote"
description: "Remote for Hadoop Distributed Filesystem"
---

{{< icon "fa fa-globe" >}} HDFS
-------------------------------------------------
# {{< icon "fa fa-globe" >}} HDFS

[HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) is a
distributed file-system, part of the [Apache Hadoop](https://hadoop.apache.org/) framework.

@@ -190,7 +189,7 @@ Here are the advanced options specific to hdfs (Hadoop distributed file system).
Kerberos service principal name for the namenode

Enables KERBEROS authentication. Specifies the Service Principal Name
(<SERVICE>/<FQDN>) for the namenode.
(SERVICE/FQDN) for the namenode.

- Config: service_principal_name
- Env Var: RCLONE_HDFS_SERVICE_PRINCIPAL_NAME
Some files were not shown because too many files have changed in this diff.