Mirror of https://github.com/rclone/rclone.git (synced 2025-12-06 00:03:32 +00:00)

Compare commits: fix-log-fa...fix-bitrix (1 commit)

| Author | SHA1 | Date |
|---|---|---|
| | 9ea209a9ab | |

.github/ISSUE_TEMPLATE/Bug.md (vendored): 33 lines changed
@@ -5,31 +5,19 @@ about: Report a problem with rclone

<!--

We understand you are having a problem with rclone; we want to help you with that!
Welcome :-) We understand you are having a problem with rclone; we want to help you with that!

**STOP and READ**
**YOUR POST WILL BE REMOVED IF IT IS LOW QUALITY**:
Please show the effort you've put in to solving the problem and please be specific.
People are volunteering their time to help! Low effort posts are not likely to get good answers!

If you think you might have found a bug, try to replicate it with the latest beta (or stable).
The update instructions are available at https://rclone.org/commands/rclone_selfupdate/

If you can still replicate it or just got a question then please use the rclone forum:
If you've just got a question or aren't sure if you've found a bug then please use the rclone forum:

https://forum.rclone.org/

for a quick response instead of filing an issue on this repo.
instead of filing an issue for a quick response.

If nothing else helps, then please fill in the info below which helps us help you.
If you think you might have found a bug, please can you try to replicate it with the latest beta?

**DO NOT REDACT** any information except passwords/keys/personal info.

You should use 3 backticks to begin and end your paste to make it readable.

Make sure to include a log obtained with '-vv'.

You can also use '-vv --log-file bug.log' and a service such as https://pastebin.com or https://gist.github.com/
https://beta.rclone.org/

If you can still replicate it with the latest beta, then please fill in the info below which makes our lives much easier. A log with -vv will make our day :-)

Thank you

@@ -37,11 +25,6 @@ The Rclone Developers

-->

#### The associated forum post URL from `https://forum.rclone.org`

#### What is the problem you are having with rclone?

@@ -54,7 +37,7 @@ The Rclone Developers

#### Which cloud storage system are you using? (e.g. Google Drive)
#### Which cloud storage system are you using? (e.g. Google Drive)
.github/ISSUE_TEMPLATE/Feature.md (vendored): 16 lines changed

@@ -7,16 +7,12 @@ about: Suggest a new feature or enhancement for rclone

Welcome :-)

So you've got an idea to improve rclone? We love that!
You'll be glad to hear we've incorporated hundreds of ideas from contributors already.
So you've got an idea to improve rclone? We love that! You'll be glad to hear we've incorporated hundreds of ideas from contributors already.

Probably the latest beta (or stable) release has your feature, so try to update your rclone.
The update instructions are available at https://rclone.org/commands/rclone_selfupdate/
Here is a checklist of things to do:

If it still isn't there, here is a checklist of things to do:

1. Search the old issues for your idea and +1 or comment on an existing issue if possible.
2. Discuss on the forum: https://forum.rclone.org/
1. Please search the old issues first for your idea and +1 or comment on an existing issue if possible.
2. Discuss on the forum first: https://forum.rclone.org/
3. Make a feature request issue (this is the right place!).
4. Be prepared to get involved making the feature :-)

@@ -27,10 +23,6 @@ The Rclone Developers

-->

#### The associated forum post URL from `https://forum.rclone.org`

#### What is your current rclone version (output from `rclone version`)?
.github/workflows/build.yml (vendored): 2 lines changed

@@ -221,8 +221,6 @@ jobs:
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      # Upgrade together with NDK version
      - name: Set up Go 1.14
@@ -33,11 +33,10 @@ page](https://github.com/rclone/rclone).

Now in your terminal

    git clone https://github.com/rclone/rclone.git
    cd rclone
    go get -u github.com/rclone/rclone
    cd $GOPATH/src/github.com/rclone/rclone
    git remote rename origin upstream
    git remote add origin git@github.com:YOURUSER/rclone.git
    go build

Make a branch to add your new feature
MANUAL.html (generated): 1289 lines changed (file diff suppressed because it is too large)

MANUAL.txt (generated): 1867 lines changed (file diff suppressed because it is too large)
@@ -20,7 +20,7 @@ var (
)

func prepare(t *testing.T, root string) {
    require.NoError(t, configfile.LoadConfig(context.Background()))
    configfile.LoadConfig(context.Background())

    // Configure the remote
    config.FileSet(remoteName, "type", "alias")
@@ -41,7 +41,6 @@ import (
    _ "github.com/rclone/rclone/backend/swift"
    _ "github.com/rclone/rclone/backend/tardigrade"
    _ "github.com/rclone/rclone/backend/union"
    _ "github.com/rclone/rclone/backend/uptobox"
    _ "github.com/rclone/rclone/backend/webdav"
    _ "github.com/rclone/rclone/backend/yandex"
    _ "github.com/rclone/rclone/backend/zoho"
@@ -16,6 +16,7 @@ import (
    "encoding/json"
    "fmt"
    "io"
    "log"
    "net/http"
    "path"
    "strings"

@@ -69,12 +70,11 @@ func init() {
    Prefix:      "acd",
    Description: "Amazon Drive",
    NewFs:       NewFs,
    Config: func(ctx context.Context, name string, m configmap.Mapper) error {
    Config: func(ctx context.Context, name string, m configmap.Mapper) {
        err := oauthutil.Config(ctx, "amazon cloud drive", name, m, acdConfig, nil)
        if err != nil {
            return errors.Wrap(err, "failed to configure token")
            log.Fatalf("Failed to configure token: %v", err)
        }
        return nil
    },
    Options: append(oauthutil.SharedOptions, []fs.Option{{
        Name: "checkpoint",
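The pair of lines in this hunk contrasts two error-handling styles for a backend's Config callback: returning a wrapped error to the caller versus aborting the whole process with log.Fatalf. Below is a minimal, self-contained sketch of the difference; the callback types and oauthSetup are simplified stand-ins, not rclone's actual API.

```go
package main

import (
	"context"
	"fmt"
	"log"
)

// configFunc mirrors the error-returning style: failures propagate to
// the caller, which can report, retry, or test against them.
type configFunc func(ctx context.Context) error

// fatalConfigFunc mirrors the log.Fatalf style: a failure kills the
// process, which is hard to test and hostile to library callers.
type fatalConfigFunc func(ctx context.Context)

func oauthSetup(ctx context.Context) error {
	return fmt.Errorf("token exchange failed") // stand-in failure
}

func main() {
	ctx := context.Background()

	// Error-returning style: the caller decides what happens next.
	var cfg configFunc = func(ctx context.Context) error {
		if err := oauthSetup(ctx); err != nil {
			return fmt.Errorf("failed to configure token: %w", err)
		}
		return nil
	}
	if err := cfg(ctx); err != nil {
		fmt.Println("recoverable:", err)
	}

	// Fatal style: no recovery is possible past this point.
	var fatalCfg fatalConfigFunc = func(ctx context.Context) {
		if err := oauthSetup(ctx); err != nil {
			log.Fatalf("Failed to configure token: %v", err)
		}
	}
	_ = fatalCfg // actually calling this would exit the program
}
```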
@@ -2,11 +2,12 @@ package api

import (
    "fmt"
    "path"
    "strconv"
    "strings"
    "time"

    "github.com/rclone/rclone/fs/fserrors"
    "github.com/rclone/rclone/lib/version"
)

// Error describes a B2 error response

@@ -62,17 +63,16 @@ func (t *Timestamp) UnmarshalJSON(data []byte) error {
    return nil
}

// HasVersion returns true if it looks like the passed filename has a timestamp on it.
//
// Note that the passed filename's timestamp may still be invalid even if this
// function returns true.
func HasVersion(remote string) bool {
    return version.Match(remote)
}
const versionFormat = "-v2006-01-02-150405.000"

// AddVersion adds the timestamp as a version string into the filename passed in.
func (t Timestamp) AddVersion(remote string) string {
    return version.Add(remote, time.Time(t))
    ext := path.Ext(remote)
    base := remote[:len(remote)-len(ext)]
    s := time.Time(t).Format(versionFormat)
    // Replace the '.' with a '-'
    s = strings.Replace(s, ".", "-", -1)
    return base + s + ext
}

// RemoveVersion removes the timestamp from a filename as a version string.
@@ -80,9 +80,24 @@ func (t Timestamp) AddVersion(remote string) string {
// It returns the new file name and a timestamp, or the old filename
// and a zero timestamp.
func RemoveVersion(remote string) (t Timestamp, newRemote string) {
    time, newRemote := version.Remove(remote)
    t = Timestamp(time)
    return
    newRemote = remote
    ext := path.Ext(remote)
    base := remote[:len(remote)-len(ext)]
    if len(base) < len(versionFormat) {
        return
    }
    versionStart := len(base) - len(versionFormat)
    // Check it ends in -xxx
    if base[len(base)-4] != '-' {
        return
    }
    // Replace with .xxx for parsing
    base = base[:len(base)-4] + "." + base[len(base)-3:]
    newT, err := time.Parse(versionFormat, base[versionStart:])
    if err != nil {
        return
    }
    return Timestamp(newT), base[:versionStart] + ext
}

// IsZero returns true if the timestamp is uninitialized
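These hunks swap a hand-rolled timestamp-version implementation for the shared lib/version package. The sketch below is a standalone approximation of the same technique, using the reference-time layout string from the diff; the addVersion/removeVersion names are illustrative, not rclone's API.

```go
package main

import (
	"fmt"
	"path"
	"strings"
	"time"
)

// versionFormat is the reference-time layout from the diff; the '.'
// is swapped for '-' after formatting so the name stays filesystem-safe.
const versionFormat = "-v2006-01-02-150405.000"

// addVersion inserts the timestamp before the file extension.
func addVersion(remote string, t time.Time) string {
	ext := path.Ext(remote)
	base := remote[:len(remote)-len(ext)]
	s := strings.Replace(t.Format(versionFormat), ".", "-", -1)
	return base + s + ext
}

// removeVersion parses the timestamp back out, undoing the '-' for '.'
// substitution before calling time.Parse.
func removeVersion(remote string) (time.Time, string) {
	ext := path.Ext(remote)
	base := remote[:len(remote)-len(ext)]
	if len(base) < len(versionFormat) || base[len(base)-4] != '-' {
		return time.Time{}, remote
	}
	versionStart := len(base) - len(versionFormat)
	candidate := base[:len(base)-4] + "." + base[len(base)-3:]
	t, err := time.Parse(versionFormat, candidate[versionStart:])
	if err != nil {
		return time.Time{}, remote
	}
	return t, candidate[:versionStart] + ext
}

func main() {
	t := time.Date(2001, 2, 3, 4, 5, 6, 123000000, time.UTC)
	v := addVersion("potato.txt", t)
	fmt.Println(v) // potato-v2001-02-03-040506-123.txt
	back, name := removeVersion(v)
	fmt.Println(back, name) // round-trips to the original name
}
```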
@@ -13,6 +13,7 @@ import (
var (
    emptyT api.Timestamp
    t0     = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123456789Z"))
    t0r    = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123000000Z"))
    t1     = api.Timestamp(fstest.Time("2001-02-03T04:05:06.123000000Z"))
)

@@ -35,6 +36,40 @@ func TestTimestampUnmarshalJSON(t *testing.T) {
    assert.Equal(t, (time.Time)(t1), (time.Time)(tActual))
}

func TestTimestampAddVersion(t *testing.T) {
    for _, test := range []struct {
        t        api.Timestamp
        in       string
        expected string
    }{
        {t0, "potato.txt", "potato-v1970-01-01-010101-123.txt"},
        {t1, "potato", "potato-v2001-02-03-040506-123"},
        {t1, "", "-v2001-02-03-040506-123"},
    } {
        actual := test.t.AddVersion(test.in)
        assert.Equal(t, test.expected, actual, test.in)
    }
}

func TestTimestampRemoveVersion(t *testing.T) {
    for _, test := range []struct {
        in             string
        expectedT      api.Timestamp
        expectedRemote string
    }{
        {"potato.txt", emptyT, "potato.txt"},
        {"potato-v1970-01-01-010101-123.txt", t0r, "potato.txt"},
        {"potato-v2001-02-03-040506-123", t1, "potato"},
        {"-v2001-02-03-040506-123", t1, ""},
        {"potato-v2A01-02-03-040506-123", emptyT, "potato-v2A01-02-03-040506-123"},
        {"potato-v2001-02-03-040506=123", emptyT, "potato-v2001-02-03-040506=123"},
    } {
        actualT, actualRemote := api.RemoveVersion(test.in)
        assert.Equal(t, test.expectedT, actualT, test.in)
        assert.Equal(t, test.expectedRemote, actualRemote, test.in)
    }
}

func TestTimestampIsZero(t *testing.T) {
    assert.True(t, emptyT.IsZero())
    assert.False(t, t0.IsZero())
@@ -1353,7 +1353,7 @@ func (f *Fs) getDownloadAuthorization(ctx context.Context, bucket, remote string
    }
    var request = api.GetDownloadAuthorizationRequest{
        BucketID:               bucketID,
        FileNamePrefix:         f.opt.Enc.FromStandardPath(path.Join(f.rootDirectory, remote)),
        FileNamePrefix:         f.opt.Enc.FromStandardPath(path.Join(f.root, remote)),
        ValidDurationInSeconds: validDurationInSeconds,
    }
    var response api.GetDownloadAuthorizationResponse
@@ -36,13 +36,13 @@ func (t *Time) UnmarshalJSON(data []byte) error {

// Error is returned from box when things go wrong
type Error struct {
    Type        string          `json:"type"`
    Status      int             `json:"status"`
    Code        string          `json:"code"`
    ContextInfo json.RawMessage `json:"context_info"`
    HelpURL     string          `json:"help_url"`
    Message     string          `json:"message"`
    RequestID   string          `json:"request_id"`
    Type        string `json:"type"`
    Status      int    `json:"status"`
    Code        string `json:"code"`
    ContextInfo json.RawMessage
    HelpURL     string `json:"help_url"`
    Message     string `json:"message"`
    RequestID   string `json:"request_id"`
}

// Error returns a string for the error and satisfies the error interface

@@ -132,38 +132,6 @@ type UploadFile struct {
    ContentModifiedAt Time `json:"content_modified_at"`
}

// PreUploadCheck is the request for upload preflight check
type PreUploadCheck struct {
    Name   string `json:"name"`
    Parent Parent `json:"parent"`
    Size   *int64 `json:"size,omitempty"`
}

// PreUploadCheckResponse is the response from upload preflight check
// if successful
type PreUploadCheckResponse struct {
    UploadToken string `json:"upload_token"`
    UploadURL   string `json:"upload_url"`
}

// PreUploadCheckConflict is returned in the ContextInfo error field
// from PreUploadCheck when the error code is "item_name_in_use"
type PreUploadCheckConflict struct {
    Conflicts struct {
        Type        string `json:"type"`
        ID          string `json:"id"`
        FileVersion struct {
            Type string `json:"type"`
            ID   string `json:"id"`
            Sha1 string `json:"sha1"`
        } `json:"file_version"`
        SequenceID string `json:"sequence_id"`
        Etag       string `json:"etag"`
        Sha1       string `json:"sha1"`
        Name       string `json:"name"`
    } `json:"conflicts"`
}

// UpdateFileModTime is used in Update File Info
type UpdateFileModTime struct {
    ContentModifiedAt Time `json:"content_modified_at"`
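ContextInfo is kept as json.RawMessage so the error envelope can be decoded first and the payload interpreted later, once the Code field says what it contains. A minimal sketch of that two-stage decode, with a trimmed-down conflict struct and made-up response body rather than Box's full schema:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// apiError mirrors the envelope: ContextInfo stays raw until we know
// what the Code field says it holds.
type apiError struct {
	Code        string          `json:"code"`
	Message     string          `json:"message"`
	ContextInfo json.RawMessage `json:"context_info"`
}

// conflict is a trimmed stand-in for the "item_name_in_use" payload.
type conflict struct {
	Conflicts struct {
		Type string `json:"type"`
		ID   string `json:"id"`
	} `json:"conflicts"`
}

func main() {
	body := []byte(`{
		"code": "item_name_in_use",
		"message": "Item with the same name already exists",
		"context_info": {"conflicts": {"type": "file", "id": "12345"}}
	}`)

	var e apiError
	if err := json.Unmarshal(body, &e); err != nil {
		panic(err)
	}
	// Second stage: only decode ContextInfo for codes we understand.
	if e.Code == "item_name_in_use" {
		var c conflict
		if err := json.Unmarshal(e.ContextInfo, &c); err != nil {
			panic(err)
		}
		fmt.Printf("conflicting %s id=%s\n", c.Conflicts.Type, c.Conflicts.ID)
	}
}
```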
@@ -17,6 +17,7 @@ import (
    "fmt"
    "io"
    "io/ioutil"
    "log"
    "net/http"
    "net/url"
    "path"

@@ -83,7 +84,7 @@ func init() {
    Name:        "box",
    Description: "Box",
    NewFs:       NewFs,
    Config: func(ctx context.Context, name string, m configmap.Mapper) error {
    Config: func(ctx context.Context, name string, m configmap.Mapper) {
        jsonFile, ok := m.Get("box_config_file")
        boxSubType, boxSubTypeOk := m.Get("box_sub_type")
        boxAccessToken, boxAccessTokenOk := m.Get("access_token")

@@ -92,16 +93,15 @@ func init() {
    if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" {
        err = refreshJWTToken(ctx, jsonFile, boxSubType, name, m)
        if err != nil {
            return errors.Wrap(err, "failed to configure token with jwt authentication")
            log.Fatalf("Failed to configure token with jwt authentication: %v", err)
        }
    // Else, if not using an access token, use oauth2
    } else if boxAccessToken == "" || !boxAccessTokenOk {
        err = oauthutil.Config(ctx, "box", name, m, oauthConfig, nil)
        if err != nil {
            return errors.Wrap(err, "failed to configure token with oauth authentication")
            log.Fatalf("Failed to configure token with oauth authentication: %v", err)
        }
    }
    return nil
    },
    Options: append(oauthutil.SharedOptions, []fs.Option{{
        Name: "root_folder_id",

@@ -157,15 +157,15 @@ func refreshJWTToken(ctx context.Context, jsonFile string, boxSubType string, na
    jsonFile = env.ShellExpand(jsonFile)
    boxConfig, err := getBoxConfig(jsonFile)
    if err != nil {
        return errors.Wrap(err, "get box config")
        log.Fatalf("Failed to configure token: %v", err)
    }
    privateKey, err := getDecryptedPrivateKey(boxConfig)
    if err != nil {
        return errors.Wrap(err, "get decrypted private key")
        log.Fatalf("Failed to configure token: %v", err)
    }
    claims, err := getClaims(boxConfig, boxSubType)
    if err != nil {
        return errors.Wrap(err, "get claims")
        log.Fatalf("Failed to configure token: %v", err)
    }
    signingHeaders := getSigningHeaders(boxConfig)
    queryParams := getQueryParams(boxConfig)

@@ -686,80 +686,22 @@ func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time,
    return o, leaf, directoryID, nil
}

// preUploadCheck checks to see if a file can be uploaded
//
// It returns "", nil if the file is good to go
// It returns "ID", nil if the file must be updated
func (f *Fs) preUploadCheck(ctx context.Context, leaf, directoryID string, size int64) (ID string, err error) {
    check := api.PreUploadCheck{
        Name: f.opt.Enc.FromStandardName(leaf),
        Parent: api.Parent{
            ID: directoryID,
        },
    }
    if size >= 0 {
        check.Size = &size
    }
    opts := rest.Opts{
        Method: "OPTIONS",
        Path:   "/files/content/",
    }
    var result api.PreUploadCheckResponse
    var resp *http.Response
    err = f.pacer.Call(func() (bool, error) {
        resp, err = f.srv.CallJSON(ctx, &opts, &check, &result)
        return shouldRetry(ctx, resp, err)
    })
    if err != nil {
        if apiErr, ok := err.(*api.Error); ok && apiErr.Code == "item_name_in_use" {
            var conflict api.PreUploadCheckConflict
            err = json.Unmarshal(apiErr.ContextInfo, &conflict)
            if err != nil {
                return "", errors.Wrap(err, "pre-upload check: JSON decode failed")
            }
            if conflict.Conflicts.Type != api.ItemTypeFile {
                return "", errors.Wrap(err, "pre-upload check: can't overwrite non file with file")
            }
            return conflict.Conflicts.ID, nil
        }
        return "", errors.Wrap(err, "pre-upload check")
    }
    return "", nil
}

// Put the object
//
// Copy the reader in to the new object which is returned
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
    // If directory doesn't exist, file doesn't exist so can upload
    remote := src.Remote()
    leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, false)
    if err != nil {
        if err == fs.ErrorDirNotFound {
            return f.PutUnchecked(ctx, in, src, options...)
        }
    existingObj, err := f.newObjectWithInfo(ctx, src.Remote(), nil)
    switch err {
    case nil:
        return existingObj, existingObj.Update(ctx, in, src, options...)
    case fs.ErrorObjectNotFound:
        // Not found so create it
        return f.PutUnchecked(ctx, in, src)
    default:
        return nil, err
    }

    // Preflight check the upload, which returns the ID if the
    // object already exists
    ID, err := f.preUploadCheck(ctx, leaf, directoryID, src.Size())
    if err != nil {
        return nil, err
    }
    if ID == "" {
        return f.PutUnchecked(ctx, in, src, options...)
    }

    // If object exists then create a skeleton one with just id
    o := &Object{
        fs:     f,
        remote: remote,
        id:     ID,
    }
    return o, o.Update(ctx, in, src, options...)
}

// PutStream uploads to the remote path with the modTime given of indeterminate size
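The Put hunk above swaps a lookup-then-update flow for a preflight check that returns an existing file's ID in one round trip. A compact sketch of that create-or-update decision, with a hypothetical storage client standing in for the Box API:

```go
package main

import "fmt"

// store is a hypothetical client: Preflight reports the existing ID
// (empty if the name is free); Create and Update are the two outcomes.
type store struct{ existing map[string]string }

func (s *store) Preflight(name string) (string, error) {
	return s.existing[name], nil // "" means good to create
}

func (s *store) Create(name string) error { fmt.Println("create", name); return nil }

func (s *store) Update(id string) error { fmt.Println("update id", id); return nil }

// put is the upsert: one preflight call decides between the two paths,
// instead of fetching full object metadata first.
func put(s *store, name string) error {
	id, err := s.Preflight(name)
	if err != nil {
		return err
	}
	if id == "" {
		return s.Create(name)
	}
	return s.Update(id)
}

func main() {
	s := &store{existing: map[string]string{"old.txt": "12345"}}
	_ = put(s, "new.txt") // create new.txt
	_ = put(s, "old.txt") // update id 12345
}
```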
backend/cache/cache_internal_test.go (vendored): 2 lines changed

@@ -836,7 +836,7 @@ func newRun() *run {
    if uploadDir == "" {
        r.tmpUploadDir, err = ioutil.TempDir("", "rclonecache-tmp")
        if err != nil {
            panic(fmt.Sprintf("Failed to create temp dir: %v", err))
            log.Fatalf("Failed to create temp dir: %v", err)
        }
    } else {
        r.tmpUploadDir = uploadDir
@@ -53,7 +53,7 @@ const (
    Gzip = 2
)

var nameRegexp = regexp.MustCompile("^(.+?)\\.([A-Za-z0-9-_]{11})$")
var nameRegexp = regexp.MustCompile("^(.+?)\\.([A-Za-z0-9+_]{11})$")

// Register with Fs
func init() {
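The two regexp variants differ in one character class: `[A-Za-z0-9-_]` covers the URL-safe base64 alphabet, while `[A-Za-z0-9+_]` mixes the standard alphabet's '+' with the URL-safe '_'. An 11-character suffix is what raw (unpadded) base64 of 8 bytes produces, which the sketch below demonstrates; the regexp is from the diff, the sample data is made up.

```go
package main

import (
	"encoding/base64"
	"fmt"
	"regexp"
)

var nameRegexp = regexp.MustCompile(`^(.+?)\.([A-Za-z0-9-_]{11})$`)

func main() {
	// 8 bytes -> ceil(64/6) = 11 base64 characters when unpadded.
	suffix := base64.RawURLEncoding.EncodeToString([]byte("\x01\x23\x45\x67\x89\xab\xcd\xef"))
	name := "archive.tar." + suffix
	fmt.Println(name, len(suffix)) // the suffix is exactly 11 chars

	if m := nameRegexp.FindStringSubmatch(name); m != nil {
		fmt.Println("base:", m[1], "suffix:", m[2])
	}
}
```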
@@ -12,14 +12,12 @@ import (
    "strconv"
    "strings"
    "sync"
    "time"
    "unicode/utf8"

    "github.com/pkg/errors"
    "github.com/rclone/rclone/backend/crypt/pkcs7"
    "github.com/rclone/rclone/fs"
    "github.com/rclone/rclone/fs/accounting"
    "github.com/rclone/rclone/lib/version"
    "github.com/rfjakob/eme"
    "golang.org/x/crypto/nacl/secretbox"
    "golang.org/x/crypto/scrypt"

@@ -444,32 +442,11 @@ func (c *Cipher) encryptFileName(in string) string {
    if !c.dirNameEncrypt && i != (len(segments)-1) {
        continue
    }

    // Strip version string so that only the non-versioned part
    // of the file name gets encrypted/obfuscated
    hasVersion := false
    var t time.Time
    if i == (len(segments)-1) && version.Match(segments[i]) {
        var s string
        t, s = version.Remove(segments[i])
        // version.Remove can fail, in which case it returns segments[i]
        if s != segments[i] {
            segments[i] = s
            hasVersion = true
        }
    }

    if c.mode == NameEncryptionStandard {
        segments[i] = c.encryptSegment(segments[i])
    } else {
        segments[i] = c.obfuscateSegment(segments[i])
    }

    // Add back a version to the encrypted/obfuscated
    // file name, if we stripped it off earlier
    if hasVersion {
        segments[i] = version.Add(segments[i], t)
    }
}
return strings.Join(segments, "/")
}
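The removed block implements a strip-transform-restore pattern: peel a trailing version timestamp off the last path segment, encrypt or obfuscate the bare name, then splice the version back on so it stays readable. A minimal standalone sketch of the same shape, using a simple suffix marker and ROT13 instead of rclone's version and cipher machinery:

```go
package main

import (
	"fmt"
	"strings"
)

const marker = "-v1" // stand-in for a parsed version timestamp

// rot13 is a toy stand-in for encryptSegment/obfuscateSegment.
func rot13(s string) string {
	return strings.Map(func(r rune) rune {
		switch {
		case r >= 'a' && r <= 'z':
			return 'a' + (r-'a'+13)%26
		case r >= 'A' && r <= 'Z':
			return 'A' + (r-'A'+13)%26
		}
		return r
	}, s)
}

// transformLastSegment strips a trailing version marker, transforms the
// bare name, then re-appends the marker, so the version survives the
// transformation unencrypted.
func transformLastSegment(name string) string {
	hasVersion := strings.HasSuffix(name, marker)
	if hasVersion {
		name = strings.TrimSuffix(name, marker)
	}
	name = rot13(name)
	if hasVersion {
		name += marker
	}
	return name
}

func main() {
	fmt.Println(transformLastSegment("potato-v1")) // cbgngb-v1
	fmt.Println(transformLastSegment("potato"))    // cbgngb
}
```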
@@ -500,21 +477,6 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
    if !c.dirNameEncrypt && i != (len(segments)-1) {
        continue
    }

    // Strip version string so that only the non-versioned part
    // of the file name gets decrypted/deobfuscated
    hasVersion := false
    var t time.Time
    if i == (len(segments)-1) && version.Match(segments[i]) {
        var s string
        t, s = version.Remove(segments[i])
        // version.Remove can fail, in which case it returns segments[i]
        if s != segments[i] {
            segments[i] = s
            hasVersion = true
        }
    }

    if c.mode == NameEncryptionStandard {
        segments[i], err = c.decryptSegment(segments[i])
    } else {

@@ -524,12 +486,6 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
    if err != nil {
        return "", err
    }

    // Add back a version to the decrypted/deobfuscated
    // file name, if we stripped it off earlier
    if hasVersion {
        segments[i] = version.Add(segments[i], t)
    }
}
return strings.Join(segments, "/"), nil
}

@@ -538,18 +494,10 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
func (c *Cipher) DecryptFileName(in string) (string, error) {
    if c.mode == NameEncryptionOff {
        remainingLength := len(in) - len(encryptedSuffix)
        if remainingLength == 0 || !strings.HasSuffix(in, encryptedSuffix) {
            return "", ErrorNotAnEncryptedFile
        if remainingLength > 0 && strings.HasSuffix(in, encryptedSuffix) {
            return in[:remainingLength], nil
        }
        decrypted := in[:remainingLength]
        if version.Match(decrypted) {
            _, unversioned := version.Remove(decrypted)
            if unversioned == "" {
                return "", ErrorNotAnEncryptedFile
            }
        }
        // Leave the version string on, if it was there
        return decrypted, nil
        return "", ErrorNotAnEncryptedFile
    }
    return c.decryptFileName(in)
}
@@ -160,29 +160,22 @@ func TestEncryptFileName(t *testing.T) {
    assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptFileName("1"))
    assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptFileName("1/12"))
    assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", c.EncryptFileName("1/12/123"))
    assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s-v2001-02-03-040506-123", c.EncryptFileName("1-v2001-02-03-040506-123"))
    assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng-v2001-02-03-040506-123", c.EncryptFileName("1/12-v2001-02-03-040506-123"))
    // Standard mode with directory name encryption off
    c, _ = newCipher(NameEncryptionStandard, "", "", false)
    assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptFileName("1"))
    assert.Equal(t, "1/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptFileName("1/12"))
    assert.Equal(t, "1/12/qgm4avr35m5loi1th53ato71v0", c.EncryptFileName("1/12/123"))
    assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s-v2001-02-03-040506-123", c.EncryptFileName("1-v2001-02-03-040506-123"))
    assert.Equal(t, "1/l42g6771hnv3an9cgc8cr2n1ng-v2001-02-03-040506-123", c.EncryptFileName("1/12-v2001-02-03-040506-123"))
    // Now off mode
    c, _ = newCipher(NameEncryptionOff, "", "", true)
    assert.Equal(t, "1/12/123.bin", c.EncryptFileName("1/12/123"))
    // Obfuscation mode
    c, _ = newCipher(NameEncryptionObfuscated, "", "", true)
    assert.Equal(t, "49.6/99.23/150.890/53.!!lipps", c.EncryptFileName("1/12/123/!hello"))
    assert.Equal(t, "49.6/99.23/150.890/53-v2001-02-03-040506-123.!!lipps", c.EncryptFileName("1/12/123/!hello-v2001-02-03-040506-123"))
    assert.Equal(t, "49.6/99.23/150.890/162.uryyB-v2001-02-03-040506-123.GKG", c.EncryptFileName("1/12/123/hello-v2001-02-03-040506-123.txt"))
    assert.Equal(t, "161.\u00e4", c.EncryptFileName("\u00a1"))
    assert.Equal(t, "160.\u03c2", c.EncryptFileName("\u03a0"))
    // Obfuscation mode with directory name encryption off
    c, _ = newCipher(NameEncryptionObfuscated, "", "", false)
    assert.Equal(t, "1/12/123/53.!!lipps", c.EncryptFileName("1/12/123/!hello"))
    assert.Equal(t, "1/12/123/53-v2001-02-03-040506-123.!!lipps", c.EncryptFileName("1/12/123/!hello-v2001-02-03-040506-123"))
    assert.Equal(t, "161.\u00e4", c.EncryptFileName("\u00a1"))
    assert.Equal(t, "160.\u03c2", c.EncryptFileName("\u03a0"))
}

@@ -201,19 +194,14 @@ func TestDecryptFileName(t *testing.T) {
    {NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil},
    {NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1/qgm4avr35m5loi1th53ato71v0", "", ErrorNotAMultipleOfBlocksize},
    {NameEncryptionStandard, false, "1/12/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil},
    {NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s-v2001-02-03-040506-123", "1-v2001-02-03-040506-123", nil},
    {NameEncryptionOff, true, "1/12/123.bin", "1/12/123", nil},
    {NameEncryptionOff, true, "1/12/123.bix", "", ErrorNotAnEncryptedFile},
    {NameEncryptionOff, true, ".bin", "", ErrorNotAnEncryptedFile},
    {NameEncryptionOff, true, "1/12/123-v2001-02-03-040506-123.bin", "1/12/123-v2001-02-03-040506-123", nil},
    {NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123", nil},
    {NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt", nil},
    {NameEncryptionObfuscated, true, "!.hello", "hello", nil},
    {NameEncryptionObfuscated, true, "hello", "", ErrorNotAnEncryptedFile},
    {NameEncryptionObfuscated, true, "161.\u00e4", "\u00a1", nil},
    {NameEncryptionObfuscated, true, "160.\u03c2", "\u03a0", nil},
    {NameEncryptionObfuscated, false, "1/12/123/53.!!lipps", "1/12/123/!hello", nil},
    {NameEncryptionObfuscated, false, "1/12/123/53-v2001-02-03-040506-123.!!lipps", "1/12/123/!hello-v2001-02-03-040506-123", nil},
} {
    c, _ := newCipher(test.mode, "", "", test.dirNameEncrypt)
    actual, actualErr := c.DecryptFileName(test.in)
@@ -14,6 +14,7 @@ import (
    "fmt"
    "io"
    "io/ioutil"
    "log"
    "mime"
    "net/http"
    "path"

@@ -182,12 +183,13 @@ func init() {
    Description: "Google Drive",
    NewFs:       NewFs,
    CommandHelp: commandHelp,
    Config: func(ctx context.Context, name string, m configmap.Mapper) error {
    Config: func(ctx context.Context, name string, m configmap.Mapper) {
        // Parse config into Options struct
        opt := new(Options)
        err := configstruct.Set(m, opt)
        if err != nil {
            return errors.Wrap(err, "couldn't parse config into struct")
            fs.Errorf(nil, "Couldn't parse config into struct: %v", err)
            return
        }

        // Fill in the scopes

@@ -197,17 +199,16 @@ func init() {
    m.Set("root_folder_id", "appDataFolder")
    }

    if opt.ServiceAccountFile == "" && opt.ServiceAccountCredentials == "" {
    if opt.ServiceAccountFile == "" {
        err = oauthutil.Config(ctx, "drive", name, m, driveConfig, nil)
        if err != nil {
            return errors.Wrap(err, "failed to configure token")
            log.Fatalf("Failed to configure token: %v", err)
        }
    }
    err = configTeamDrive(ctx, opt, m, name)
    if err != nil {
        return errors.Wrap(err, "failed to configure Shared Drive")
        log.Fatalf("Failed to configure Shared Drive: %v", err)
    }
    return nil
    },
    Options: append(driveOAuthOptions(), []fs.Option{{
        Name: "scope",

@@ -521,7 +522,7 @@ If this flag is set then rclone will ignore shortcut files completely.
    } {
        for mimeType, extension := range m {
            if err := mime.AddExtensionType(extension, mimeType); err != nil {
                fs.Errorf("Failed to register MIME type %q: %v", mimeType, err)
                log.Fatalf("Failed to register MIME type %q: %v", mimeType, err)
            }
        }
    }

@@ -2958,12 +2959,12 @@ func (f *Fs) makeShortcut(ctx context.Context, srcPath string, dstFs *Fs, dstPat
}

// List all team drives
func (f *Fs) listTeamDrives(ctx context.Context) (drives []*drive.Drive, err error) {
    drives = []*drive.Drive{}
    listTeamDrives := f.svc.Drives.List().PageSize(100)
func (f *Fs) listTeamDrives(ctx context.Context) (drives []*drive.TeamDrive, err error) {
    drives = []*drive.TeamDrive{}
    listTeamDrives := f.svc.Teamdrives.List().PageSize(100)
    var defaultFs Fs // default Fs with default Options
    for {
        var teamDrives *drive.DriveList
        var teamDrives *drive.TeamDriveList
        err = f.pacer.Call(func() (bool, error) {
            teamDrives, err = listTeamDrives.Context(ctx).Do()
            return defaultFs.shouldRetry(ctx, err)

@@ -2971,7 +2972,7 @@ func (f *Fs) listTeamDrives(ctx context.Context) (drives []*drive.Drive, err err
    if err != nil {
        return drives, errors.Wrap(err, "listing Team Drives failed")
    }
    drives = append(drives, teamDrives.Drives...)
    drives = append(drives, teamDrives.TeamDrives...)
    if teamDrives.NextPageToken == "" {
        break
    }

@@ -3068,7 +3069,7 @@ func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
    return err
}
if destLeaf == "" {
    destLeaf = path.Base(o.Remote())
    destLeaf = info.Name
}
if destDir == "" {
    destDir = "."
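listTeamDrives above follows the usual Google API paging loop: call, append one page of results, and repeat until NextPageToken comes back empty. The same loop in isolation, against a hypothetical pager rather than the real Drive client:

```go
package main

import "fmt"

// page is one response from a hypothetical list API.
type page struct {
	Items         []string
	NextPageToken string
}

// fetch is a stand-in for listTeamDrives.Context(ctx).Do().
func fetch(token string) page {
	data := map[string]page{
		"":   {Items: []string{"drive-a", "drive-b"}, NextPageToken: "p2"},
		"p2": {Items: []string{"drive-c"}, NextPageToken: ""},
	}
	return data[token]
}

func listAll() []string {
	var all []string
	token := ""
	for {
		p := fetch(token)
		all = append(all, p.Items...)
		// An empty token marks the last page.
		if p.NextPageToken == "" {
			break
		}
		token = p.NextPageToken
	}
	return all
}

func main() {
	fmt.Println(listAll()) // [drive-a drive-b drive-c]
}
```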
@@ -25,6 +25,7 @@ import (
    "context"
    "fmt"
    "io"
    "log"
    "path"
    "regexp"
    "strings"

@@ -98,10 +99,8 @@ var (
    "files.content.write",
    "files.content.read",
    "sharing.write",
    "account_info.read", // needed for About
    // "file_requests.write",
    // "members.read", // needed for impersonate - but causes app to need to be approved by Dropbox Team Admin during the flow
    // "team_data.member"
},
// Endpoint: oauth2.Endpoint{
//     AuthURL:  "https://www.dropbox.com/1/oauth2/authorize",

@@ -131,8 +130,8 @@ func getOauthConfig(m configmap.Mapper) *oauth2.Config {
}
// Make a copy of the config
config := *dropboxConfig
// Make a copy of the scopes with extra scopes requires appended
config.Scopes = append(config.Scopes, "members.read", "team_data.member")
// Make a copy of the scopes with "members.read" appended
config.Scopes = append(config.Scopes, "members.read")
return &config
}
@@ -143,7 +142,7 @@ func init() {
    Name:        "dropbox",
    Description: "Dropbox",
    NewFs:       NewFs,
    Config: func(ctx context.Context, name string, m configmap.Mapper) error {
    Config: func(ctx context.Context, name string, m configmap.Mapper) {
        opt := oauthutil.Options{
            NoOffline: true,
            OAuth2Opts: []oauth2.AuthCodeOption{

@@ -152,9 +151,8 @@ func init() {
    }
    err := oauthutil.Config(ctx, "dropbox", name, m, getOauthConfig(m), &opt)
    if err != nil {
        return errors.Wrap(err, "failed to configure token")
        log.Fatalf("Failed to configure token: %v", err)
    }
    return nil
    },
    Options: append(oauthutil.SharedOptions, []fs.Option{{
        Name: "chunk_size",
@@ -1086,30 +1084,13 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
    fs.Debugf(f, "attempting to share '%s' (absolute path: %s)", remote, absPath)
    createArg := sharing.CreateSharedLinkWithSettingsArg{
        Path: absPath,
        Settings: &sharing.SharedLinkSettings{
            RequestedVisibility: &sharing.RequestedVisibility{
                Tagged: dropbox.Tagged{Tag: sharing.RequestedVisibilityPublic},
            },
            Audience: &sharing.LinkAudience{
                Tagged: dropbox.Tagged{Tag: sharing.LinkAudiencePublic},
            },
            Access: &sharing.RequestedLinkAccessLevel{
                Tagged: dropbox.Tagged{Tag: sharing.RequestedLinkAccessLevelViewer},
            },
        },
        // FIXME this gives settings_error/not_authorized/.. errors
        // and the expires setting isn't in the documentation so remove
        // for now.
        // Settings: &sharing.SharedLinkSettings{
        //     Expires: time.Now().Add(time.Duration(expire)).UTC().Round(time.Second),
        // },
    }
    if expire < fs.DurationOff {
        expiryTime := time.Now().Add(time.Duration(expire)).UTC().Round(time.Second)
        createArg.Settings.Expires = expiryTime
    }
    // FIXME note we can't set Settings for non enterprise dropbox
    // because of https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75
    // however this only goes wrong when we set Expires, so as a
    // work-around remove Settings unless expire is set.
    if expire == fs.DurationOff {
        createArg.Settings = nil
    }

    var linkRes sharing.IsSharedLinkMetadata
    err = f.pacer.Call(func() (bool, error) {
        linkRes, err = f.sharing.CreateSharedLinkWithSettings(&createArg)
@@ -1353,13 +1334,13 @@ func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.
    switch info := entry.(type) {
    case *files.FolderMetadata:
        entryType = fs.EntryDirectory
        entryPath = strings.TrimPrefix(info.PathDisplay, f.slashRootSlash)
        entryPath = strings.TrimLeft(info.PathDisplay, f.slashRootSlash)
    case *files.FileMetadata:
        entryType = fs.EntryObject
        entryPath = strings.TrimPrefix(info.PathDisplay, f.slashRootSlash)
        entryPath = strings.TrimLeft(info.PathDisplay, f.slashRootSlash)
    case *files.DeletedMetadata:
        entryType = fs.EntryObject
        entryPath = strings.TrimPrefix(info.PathDisplay, f.slashRootSlash)
        entryPath = strings.TrimLeft(info.PathDisplay, f.slashRootSlash)
    default:
        fs.Errorf(entry, "dropbox ChangeNotify: ignoring unknown EntryType %T", entry)
        continue
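The swap between strings.TrimPrefix and strings.TrimLeft in this hunk is worth pausing on: TrimLeft treats its second argument as a set of characters to strip, not as a literal prefix, so the two calls behave very differently whenever the path shares characters with the root string. A quick demonstration with invented sample paths:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	slashRootSlash := "/root/"

	// TrimPrefix removes the string "/root/" only if the path starts
	// with exactly that prefix.
	fmt.Println(strings.TrimPrefix("/root/rota.txt", slashRootSlash)) // "rota.txt"

	// TrimLeft removes *any* leading run of the characters
	// '/', 'r', 'o', 't', eating into the filename itself.
	fmt.Println(strings.TrimLeft("/root/rota.txt", slashRootSlash)) // "a.txt"
}
```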
@@ -35,7 +35,9 @@ func init() {
    fs.Register(&fs.RegInfo{
        Name:        "fichier",
        Description: "1Fichier",
        NewFs:       NewFs,
        Config: func(ctx context.Context, name string, config configmap.Mapper) {
        },
        NewFs: NewFs,
        Options: []fs.Option{{
            Help: "Your API Key, get it from https://1fichier.com/console/params.pl",
            Name: "api_key",

@@ -346,10 +348,8 @@ func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size
    return nil, err
}

if len(fileUploadResponse.Links) == 0 {
    return nil, errors.New("upload response not found")
} else if len(fileUploadResponse.Links) > 1 {
    fs.Debugf(remote, "Multiple upload responses found, using the first")
if len(fileUploadResponse.Links) != 1 {
    return nil, errors.New("unexpected amount of files")
}

link := fileUploadResponse.Links[0]
@@ -241,6 +241,23 @@ func (dl *debugLog) Write(p []byte) (n int, err error) {
    return len(p), nil
}

type dialCtx struct {
    f   *Fs
    ctx context.Context
}

// dial a new connection with fshttp dialer
func (d *dialCtx) dial(network, address string) (net.Conn, error) {
    conn, err := fshttp.NewDialer(d.ctx).Dial(network, address)
    if err != nil {
        return nil, err
    }
    if d.f.tlsConf != nil {
        conn = tls.Client(conn, d.f.tlsConf)
    }
    return conn, err
}

// shouldRetry returns a boolean as to whether this err deserve to be
// retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, err error) (bool, error) {

@@ -260,22 +277,9 @@ func shouldRetry(ctx context.Context, err error) (bool, error) {
// Open a new connection to the FTP server.
func (f *Fs) ftpConnection(ctx context.Context) (c *ftp.ServerConn, err error) {
    fs.Debugf(f, "Connecting to FTP server")

    // Make ftp library dial with fshttp dialer optionally using TLS
    dial := func(network, address string) (conn net.Conn, err error) {
        conn, err = fshttp.NewDialer(ctx).Dial(network, address)
        if f.tlsConf != nil && err == nil {
            conn = tls.Client(conn, f.tlsConf)
        }
        return
    }
    ftpConfig := []ftp.DialOption{ftp.DialWithDialFunc(dial)}

    if f.opt.TLS {
        // Our dialer takes care of TLS but ftp library also needs tlsConf
        // as a trigger for sending PSBZ and PROT options to server.
        ftpConfig = append(ftpConfig, ftp.DialWithTLS(f.tlsConf))
    } else if f.opt.ExplicitTLS {
    dCtx := dialCtx{f, ctx}
    ftpConfig := []ftp.DialOption{ftp.DialWithDialFunc(dCtx.dial)}
    if f.opt.ExplicitTLS {
        ftpConfig = append(ftpConfig, ftp.DialWithExplicitTLS(f.tlsConf))
        // Initial connection needs to be cleartext for explicit TLS
        conn, err := fshttp.NewDialer(ctx).Dial("tcp", f.dialAddr)
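Both versions of the dialer share one idea: dial a plain TCP connection first, then wrap it in tls.Client only when a TLS config is present (implicit TLS), leaving it bare for cleartext or explicit-TLS upgrades. A standalone sketch of that wrapper using only the standard library; the endpoint is a placeholder:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

// dialMaybeTLS dials TCP and optionally layers TLS on top. With a nil
// tlsConf the caller gets a cleartext connection (e.g. for explicit
// FTPS, where the upgrade happens after the protocol handshake).
func dialMaybeTLS(network, address string, tlsConf *tls.Config) (net.Conn, error) {
	d := net.Dialer{Timeout: 10 * time.Second}
	conn, err := d.Dial(network, address)
	if err != nil {
		return nil, err
	}
	if tlsConf != nil {
		conn = tls.Client(conn, tlsConf) // implicit TLS from byte one
	}
	return conn, nil
}

func main() {
	// Placeholder endpoint; swap in a real server to try it.
	conn, err := dialMaybeTLS("tcp", "example.com:443", &tls.Config{ServerName: "example.com"})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}
```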
@@ -19,6 +19,7 @@ import (
    "fmt"
    "io"
    "io/ioutil"
    "log"
    "net/http"
    "path"
    "strings"

@@ -75,18 +76,17 @@ func init() {
    Prefix:      "gcs",
    Description: "Google Cloud Storage (this is not Google Drive)",
    NewFs:       NewFs,
    Config: func(ctx context.Context, name string, m configmap.Mapper) error {
    Config: func(ctx context.Context, name string, m configmap.Mapper) {
        saFile, _ := m.Get("service_account_file")
        saCreds, _ := m.Get("service_account_credentials")
        anonymous, _ := m.Get("anonymous")
        if saFile != "" || saCreds != "" || anonymous == "true" {
            return nil
            return
        }
        err := oauthutil.Config(ctx, "google cloud storage", name, m, storageConfig, nil)
        if err != nil {
            return errors.Wrap(err, "failed to configure token")
            log.Fatalf("Failed to configure token: %v", err)
        }
        return nil
    },
    Options: append(oauthutil.SharedOptions, []fs.Option{{
        Name: "project_number",
@@ -8,6 +8,7 @@ import (
    "encoding/json"
    "fmt"
    "io"
    golog "log"
    "net/http"
    "net/url"
    "path"

@@ -77,12 +78,13 @@ func init() {
    Prefix:      "gphotos",
    Description: "Google Photos",
    NewFs:       NewFs,
    Config: func(ctx context.Context, name string, m configmap.Mapper) error {
    Config: func(ctx context.Context, name string, m configmap.Mapper) {
        // Parse config into Options struct
        opt := new(Options)
        err := configstruct.Set(m, opt)
        if err != nil {
            return errors.Wrap(err, "couldn't parse config into struct")
            fs.Errorf(nil, "Couldn't parse config into struct: %v", err)
            return
        }

        // Fill in the scopes

@@ -95,7 +97,7 @@ func init() {
    // Do the oauth
    err = oauthutil.Config(ctx, "google photos", name, m, oauthConfig, nil)
    if err != nil {
        return errors.Wrap(err, "failed to configure token")
        golog.Fatalf("Failed to configure token: %v", err)
    }

    // Warn the user

@@ -106,7 +108,6 @@ func init() {

    `)

    return nil
    },
    Options: append(oauthutil.SharedOptions, []fs.Option{{
        Name: "read_only",
@@ -47,7 +47,7 @@ func prepareServer(t *testing.T) (configmap.Simple, func()) {
    ts := httptest.NewServer(handler)

    // Configure the remote
    require.NoError(t, configfile.LoadConfig(context.Background()))
    configfile.LoadConfig(context.Background())
    // fs.Config.LogLevel = fs.LogLevelDebug
    // fs.Config.DumpHeaders = true
    // fs.Config.DumpBodies = true
@@ -11,6 +11,7 @@ import (
    "encoding/json"
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
    "strings"
    "time"

@@ -55,12 +56,11 @@ func init() {
    Name:        "hubic",
    Description: "Hubic",
    NewFs:       NewFs,
    Config: func(ctx context.Context, name string, m configmap.Mapper) error {
    Config: func(ctx context.Context, name string, m configmap.Mapper) {
        err := oauthutil.Config(ctx, "hubic", name, m, oauthConfig, nil)
        if err != nil {
            return errors.Wrap(err, "failed to configure token")
            log.Fatalf("Failed to configure token: %v", err)
        }
        return nil
    },
    Options: append(oauthutil.SharedOptions, swift.SharedOptions...),
})
@@ -10,6 +10,7 @@ import (
    "fmt"
    "io"
    "io/ioutil"
    "log"
    "math/rand"
    "net/http"
    "net/url"

@@ -86,12 +87,12 @@ func init() {
    Name:        "jottacloud",
    Description: "Jottacloud",
    NewFs:       NewFs,
    Config: func(ctx context.Context, name string, m configmap.Mapper) error {
    Config: func(ctx context.Context, name string, m configmap.Mapper) {
        refresh := false
        if version, ok := m.Get("configVersion"); ok {
            ver, err := strconv.Atoi(version)
            if err != nil {
                return errors.Wrap(err, "failed to parse config version - corrupted config")
                log.Fatalf("Failed to parse config version - corrupted config")
            }
            refresh = (ver != configVersion) && (ver != v1configVersion)
        }

@@ -103,7 +104,7 @@ func init() {
    if ok && tokenString != "" {
        fmt.Printf("Already have a token - refresh?\n")
        if !config.Confirm(false) {
            return nil
            return
        }
    }
}

@@ -115,13 +116,11 @@ func init() {

    switch config.ChooseNumber("Your choice", 1, 3) {
    case 1:
        return v2config(ctx, name, m)
        v2config(ctx, name, m)
    case 2:
        return v1config(ctx, name, m)
        v1config(ctx, name, m)
    case 3:
        return teliaCloudConfig(ctx, name, m)
    default:
        return errors.New("unknown config choice")
        teliaCloudConfig(ctx, name, m)
    }
},
Options: []fs.Option{{

@@ -243,7 +242,7 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
    return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}

func teliaCloudConfig(ctx context.Context, name string, m configmap.Mapper) error {
func teliaCloudConfig(ctx context.Context, name string, m configmap.Mapper) {
    teliaCloudOauthConfig := &oauth2.Config{
        Endpoint: oauth2.Endpoint{
            AuthURL: teliaCloudAuthURL,

@@ -256,14 +255,15 @@ func teliaCloudConfig(ctx context.Context, name string, m configmap.Mapper) erro

    err := oauthutil.Config(ctx, "jottacloud", name, m, teliaCloudOauthConfig, nil)
    if err != nil {
        return errors.Wrap(err, "failed to configure token")
        log.Fatalf("Failed to configure token: %v", err)
        return
    }

    fmt.Printf("\nDo you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?\n\n")
    if config.Confirm(false) {
        oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, teliaCloudOauthConfig)
        if err != nil {
            return errors.Wrap(err, "failed to load oAuthClient")
            log.Fatalf("Failed to load oAuthClient: %s", err)
        }

        srv := rest.NewClient(oAuthClient).SetRoot(rootURL)

@@ -271,7 +271,7 @@ func teliaCloudConfig(ctx context.Context, name string, m configmap.Mapper) erro

    device, mountpoint, err := setupMountpoint(ctx, srv, apiSrv)
    if err != nil {
        return errors.Wrap(err, "failed to setup mountpoint")
        log.Fatalf("Failed to setup mountpoint: %s", err)
    }
    m.Set(configDevice, device)
    m.Set(configMountpoint, mountpoint)

@@ -280,18 +280,17 @@ func teliaCloudConfig(ctx context.Context, name string, m configmap.Mapper) erro
    m.Set("configVersion", strconv.Itoa(configVersion))
    m.Set(configClientID, teliaCloudClientID)
    m.Set(configTokenURL, teliaCloudTokenURL)
    return nil
}

// v1config configure a jottacloud backend using legacy authentication
func v1config(ctx context.Context, name string, m configmap.Mapper) error {
func v1config(ctx context.Context, name string, m configmap.Mapper) {
    srv := rest.NewClient(fshttp.NewClient(ctx))

    fmt.Printf("\nDo you want to create a machine specific API key?\n\nRclone has it's own Jottacloud API KEY which works fine as long as one only uses rclone on a single machine. When you want to use rclone with this account on more than one machine it's recommended to create a machine specific API key. These keys can NOT be shared between machines.\n\n")
    if config.Confirm(false) {
        deviceRegistration, err := registerDevice(ctx, srv)
        if err != nil {
            return errors.Wrap(err, "failed to register device")
            log.Fatalf("Failed to register device: %v", err)
        }

        m.Set(configClientID, deviceRegistration.ClientID)

@@ -319,18 +318,18 @@ func v1config(ctx context.Context, name string, m configmap.Mapper) error {

    token, err := doAuthV1(ctx, srv, username, password)
    if err != nil {
        return errors.Wrap(err, "failed to get oauth token")
        log.Fatalf("Failed to get oauth token: %s", err)
    }
    err = oauthutil.PutToken(name, m, &token, true)
    if err != nil {
        return errors.Wrap(err, "error while saving token")
        log.Fatalf("Error while saving token: %s", err)
    }

    fmt.Printf("\nDo you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?\n\n")
    if config.Confirm(false) {
        oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
        if err != nil {
            return errors.Wrap(err, "failed to load oAuthClient")
            log.Fatalf("Failed to load oAuthClient: %s", err)
        }

        srv = rest.NewClient(oAuthClient).SetRoot(rootURL)

@@ -338,14 +337,13 @@ func v1config(ctx context.Context, name string, m configmap.Mapper) error {

    device, mountpoint, err := setupMountpoint(ctx, srv, apiSrv)
    if err != nil {
        return errors.Wrap(err, "failed to setup mountpoint")
        log.Fatalf("Failed to setup mountpoint: %s", err)
    }
    m.Set(configDevice, device)
    m.Set(configMountpoint, mountpoint)
}

m.Set("configVersion", strconv.Itoa(v1configVersion))
return nil
}

// registerDevice register a new device for use with the jottacloud API

@@ -420,7 +418,7 @@ func doAuthV1(ctx context.Context, srv *rest.Client, username, password string)
}

// v2config configure a jottacloud backend using the modern JottaCli token based authentication
func v2config(ctx context.Context, name string, m configmap.Mapper) error {
func v2config(ctx context.Context, name string, m configmap.Mapper) {
    srv := rest.NewClient(fshttp.NewClient(ctx))

    fmt.Printf("Generate a personal login token here: https://www.jottacloud.com/web/secure\n")

@@ -432,32 +430,31 @@ func v2config(ctx context.Context, name string, m configmap.Mapper) error {

    token, err := doAuthV2(ctx, srv, loginToken, m)
    if err != nil {
        return errors.Wrap(err, "failed to get oauth token")
        log.Fatalf("Failed to get oauth token: %s", err)
    }
    err = oauthutil.PutToken(name, m, &token, true)
    if err != nil {
        return errors.Wrap(err, "error while saving token")
        log.Fatalf("Error while saving token: %s", err)
    }

    fmt.Printf("\nDo you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?\n\n")
    if config.Confirm(false) {
        oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
        if err != nil {
            return errors.Wrap(err, "failed to load oAuthClient")
            log.Fatalf("Failed to load oAuthClient: %s", err)
        }

        srv = rest.NewClient(oAuthClient).SetRoot(rootURL)
        apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL)
        device, mountpoint, err := setupMountpoint(ctx, srv, apiSrv)
        if err != nil {
            return errors.Wrap(err, "failed to setup mountpoint")
            log.Fatalf("Failed to setup mountpoint: %s", err)
        }
        m.Set(configDevice, device)
        m.Set(configMountpoint, mountpoint)
    }

    m.Set("configVersion", strconv.Itoa(configVersion))
    return nil
}

// doAuthV2 runs the actual token request for V2 authentication
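The jottacloud Config callback stores a configVersion number in the remote's config and refreshes credentials whenever the stored version no longer matches a supported one. The gate itself is only a few lines; here is a hedged standalone rendering of it, with illustrative constants and a plain map in place of rclone's configmap:

```go
package main

import (
	"fmt"
	"strconv"
)

const (
	configVersion   = 1 // current layout
	v1configVersion = 0 // legacy layout that is still accepted
)

// needsRefresh mirrors the gate: parse the stored version and refresh
// unless it matches a supported one. A parse failure means the config
// is corrupted.
func needsRefresh(stored map[string]string) (bool, error) {
	version, ok := stored["configVersion"]
	if !ok {
		return false, nil // nothing stored yet
	}
	ver, err := strconv.Atoi(version)
	if err != nil {
		return false, fmt.Errorf("failed to parse config version - corrupted config: %w", err)
	}
	return ver != configVersion && ver != v1configVersion, nil
}

func main() {
	for _, m := range []map[string]string{
		{},
		{"configVersion": "1"},
		{"configVersion": "7"},
	} {
		refresh, err := needsRefresh(m)
		fmt.Println(m, "refresh:", refresh, "err:", err)
	}
}
```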
@@ -6,8 +6,8 @@ import (
    "bufio"
    "bytes"
    "encoding/binary"
    "fmt"
    "io"
    "log"
    "time"

    "github.com/pkg/errors"

@@ -48,7 +48,7 @@ func (w *BinWriter) Reader() io.Reader {
// WritePu16 writes a short as unsigned varint
func (w *BinWriter) WritePu16(val int) {
    if val < 0 || val > 65535 {
        panic(fmt.Sprintf("Invalid UInt16 %v", val))
        log.Fatalf("Invalid UInt16 %v", val)
    }
    w.WritePu64(int64(val))
}

@@ -56,7 +56,7 @@ func (w *BinWriter) WritePu16(val int) {
// WritePu32 writes a signed long as unsigned varint
func (w *BinWriter) WritePu32(val int64) {
    if val < 0 || val > 4294967295 {
        panic(fmt.Sprintf("Invalid UInt32 %v", val))
        log.Fatalf("Invalid UInt32 %v", val)
    }
    w.WritePu64(val)
}

@@ -64,7 +64,7 @@ func (w *BinWriter) WritePu32(val int64) {
// WritePu64 writes an unsigned (actually, signed) long as unsigned varint
func (w *BinWriter) WritePu64(val int64) {
    if val < 0 {
        panic(fmt.Sprintf("Invalid UInt64 %v", val))
        log.Fatalf("Invalid UInt64 %v", val)
    }
    w.b.Write(w.a[:binary.PutUvarint(w.a, uint64(val))])
}

@@ -123,7 +123,7 @@ func (r *BinReader) check(err error) bool {
    r.err = err
}
if err != io.EOF {
    panic(fmt.Sprintf("Error parsing response: %v", err))
    log.Fatalf("Error parsing response: %v", err)
}
return false
}
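WritePu64 above packs non-negative integers with binary.PutUvarint, the standard library's LEB128-style variable-length encoding. A small round trip showing how few bytes small values take:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	var buf [binary.MaxVarintLen64]byte

	for _, val := range []uint64{1, 127, 128, 65535, 4294967295} {
		n := binary.PutUvarint(buf[:], val)
		decoded, m := binary.Uvarint(buf[:n])
		// Small values cost one byte; each extra 7 bits adds another.
		fmt.Printf("%10d -> % x (%d bytes, decodes to %d, read %d)\n",
			val, buf[:n], n, decoded, m)
	}
}
```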
@@ -9,6 +9,7 @@ import (
    "encoding/json"
    "fmt"
    "io"
    "log"
    "net/http"
    "net/url"
    "path"

@@ -98,7 +99,7 @@ func init() {
    Name:        "onedrive",
    Description: "Microsoft OneDrive",
    NewFs:       NewFs,
    Config: func(ctx context.Context, name string, m configmap.Mapper) error {
    Config: func(ctx context.Context, name string, m configmap.Mapper) {
        region, _ := m.Get("region")
        graphURL := graphAPIEndpoint[region] + "/v1.0"
        oauthConfig.Endpoint = oauth2.Endpoint{

@@ -108,12 +109,13 @@ func init() {
    ci := fs.GetConfig(ctx)
    err := oauthutil.Config(ctx, "onedrive", name, m, oauthConfig, nil)
    if err != nil {
        return errors.Wrap(err, "failed to configure token")
        log.Fatalf("Failed to configure token: %v", err)
        return
    }

    // Stop if we are running non-interactive config
    if ci.AutoConfirm {
        return nil
        return
    }

    type driveResource struct {

@@ -136,7 +138,7 @@ func init() {

    oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
    if err != nil {
        return errors.Wrap(err, "failed to configure OneDrive")
        log.Fatalf("Failed to configure OneDrive: %v", err)
    }
    srv := rest.NewClient(oAuthClient)

@@ -201,17 +203,18 @@ func init() {
    sites := siteResponse{}
    _, err := srv.CallJSON(ctx, &opts, nil, &sites)
    if err != nil {
        return errors.Wrap(err, "failed to query available sites")
        log.Fatalf("Failed to query available sites: %v", err)
    }

    if len(sites.Sites) == 0 {
        return errors.Errorf("search for %q returned no results", searchTerm)
        log.Fatalf("Search for '%s' returned no results", searchTerm)
    } else {
        fmt.Printf("Found %d sites, please select the one you want to use:\n", len(sites.Sites))
        for index, site := range sites.Sites {
            fmt.Printf("%d: %s (%s) id=%s\n", index, site.SiteName, site.SiteURL, site.SiteID)
        }
        siteID = sites.Sites[config.ChooseNumber("Chose drive to use:", 0, len(sites.Sites)-1)].SiteID
    }
    fmt.Printf("Found %d sites, please select the one you want to use:\n", len(sites.Sites))
    for index, site := range sites.Sites {
        fmt.Printf("%d: %s (%s) id=%s\n", index, site.SiteName, site.SiteURL, site.SiteID)
    }
    siteID = sites.Sites[config.ChooseNumber("Chose drive to use:", 0, len(sites.Sites)-1)].SiteID
}
// if we use server-relative URL for finding the drive
@@ -224,7 +227,7 @@ func init() {
    site := siteResource{}
    _, err := srv.CallJSON(ctx, &opts, nil, &site)
    if err != nil {
        return errors.Wrap(err, "failed to query available site by relative path")
        log.Fatalf("Failed to query available site by relative path: %v", err)
    }
    siteID = site.SiteID
}

@@ -244,7 +247,7 @@ func init() {
    drives := drivesResponse{}
    _, err := srv.CallJSON(ctx, &opts, nil, &drives)
    if err != nil {
        return errors.Wrap(err, "failed to query available drives")
        log.Fatalf("Failed to query available drives: %v", err)
    }

    // Also call /me/drive as sometimes /me/drives doesn't return it #4068

@@ -253,7 +256,7 @@ func init() {
    meDrive := driveResource{}
    _, err := srv.CallJSON(ctx, &opts, nil, &meDrive)
    if err != nil {
        return errors.Wrap(err, "failed to query available drives")
        log.Fatalf("Failed to query available drives: %v", err)
    }
    found := false
    for _, drive := range drives.Drives {

@@ -270,13 +273,14 @@ func init() {
    }

    if len(drives.Drives) == 0 {
        return errors.New("no drives found")
        log.Fatalf("No drives found")
    } else {
        fmt.Printf("Found %d drives, please select the one you want to use:\n", len(drives.Drives))
        for index, drive := range drives.Drives {
            fmt.Printf("%d: %s (%s) id=%s\n", index, drive.DriveName, drive.DriveType, drive.DriveID)
        }
        finalDriveID = drives.Drives[config.ChooseNumber("Chose drive to use:", 0, len(drives.Drives)-1)].DriveID
    }
    fmt.Printf("Found %d drives, please select the one you want to use:\n", len(drives.Drives))
    for index, drive := range drives.Drives {
        fmt.Printf("%d: %s (%s) id=%s\n", index, drive.DriveName, drive.DriveType, drive.DriveID)
    }
    finalDriveID = drives.Drives[config.ChooseNumber("Chose drive to use:", 0, len(drives.Drives)-1)].DriveID
}

// Test the driveID and get drive type
@@ -287,18 +291,17 @@ func init() {
    var rootItem api.Item
    _, err = srv.CallJSON(ctx, &opts, nil, &rootItem)
    if err != nil {
        return errors.Wrapf(err, "failed to query root for drive %s", finalDriveID)
        log.Fatalf("Failed to query root for drive %s: %v", finalDriveID, err)
    }

    fmt.Printf("Found drive '%s' of type '%s', URL: %s\nIs that okay?\n", rootItem.Name, rootItem.ParentReference.DriveType, rootItem.WebURL)
    // This does not work, YET :)
    if !config.ConfirmWithConfig(ctx, m, "config_drive_ok", true) {
        return errors.New("cancelled by user")
        log.Fatalf("Cancelled by user")
    }

    m.Set(configDriveID, finalDriveID)
    m.Set(configDriveType, rootItem.ParentReference.DriveType)
    return nil
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
    Name: "region",
@@ -358,11 +361,6 @@ This will only work if you are copying between two OneDrive *Personal* drives AN
|
||||
the files to copy are already shared between them. In other cases, rclone will
|
||||
fall back to normal copy (which will be slightly slower).`,
|
||||
Advanced: true,
|
||||
}, {
|
||||
Name: "list_chunk",
|
||||
Help: "Size of listing chunk.",
|
||||
Default: 1000,
|
||||
Advanced: true,
|
||||
}, {
|
||||
Name: "no_versions",
|
||||
Default: false,
|
||||
@@ -470,7 +468,6 @@ type Options struct {
|
||||
DriveType string `config:"drive_type"`
|
||||
ExposeOneNoteFiles bool `config:"expose_onenote_files"`
|
||||
ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
|
||||
ListChunk int64 `config:"list_chunk"`
|
||||
NoVersions bool `config:"no_versions"`
|
||||
LinkScope string `config:"link_scope"`
|
||||
LinkType string `config:"link_type"`
|
||||
@@ -563,9 +560,6 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
|
||||
if len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
|
||||
retry = true
|
||||
fs.Debugf(nil, "Should retry: %v", err)
|
||||
} else if err != nil && strings.Contains(err.Error(), "Unable to initialize RPS") {
|
||||
retry = true
|
||||
fs.Debugf(nil, "HTTP 401: Unable to initialize RPS. Trying again.")
|
||||
}
|
||||
case 429: // Too Many Requests.
|
||||
// see https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online
|
||||
@@ -902,7 +896,7 @@ type listAllFn func(*api.Item) bool
|
||||
func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
|
||||
// Top parameter asks for bigger pages of data
|
||||
// https://dev.onedrive.com/odata/optional-query-parameters.htm
|
||||
opts := f.newOptsCall(dirID, "GET", fmt.Sprintf("/children?$top=%d", f.opt.ListChunk))
|
||||
opts := f.newOptsCall(dirID, "GET", "/children?$top=1000")
|
||||
OUTER:
|
||||
for {
|
||||
var result api.ListChildrenResponse
|
||||
@@ -1429,7 +1423,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
|
||||
Password: f.opt.LinkPassword,
|
||||
}
|
||||
|
||||
if expire < fs.DurationOff {
|
||||
if expire < fs.Duration(time.Hour*24*365*100) {
|
||||
expiry := time.Now().Add(time.Duration(expire))
|
||||
share.Expiry = &expiry
|
||||
}
|
||||
@@ -1857,7 +1851,7 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, size int64,
|
||||
fs.Debugf(o, "Cancelling multipart upload: %v", err)
|
||||
cancelErr := o.cancelUploadSession(ctx, uploadURL)
|
||||
if cancelErr != nil {
|
||||
fs.Logf(o, "Failed to cancel multipart upload: %v (upload failed due to: %v)", cancelErr, err)
|
||||
fs.Logf(o, "Failed to cancel multipart upload: %v", cancelErr)
|
||||
}
|
||||
})()
|
||||
|
||||
|
||||
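Aside: every backend hunk in this compare makes the same mechanical swap, replacing a Config callback that returns an error with one that calls log.Fatalf directly. A minimal, self-contained sketch of the two styles (illustrative names only, not rclone's actual API):

    package main

    import (
        "errors"
        "fmt"
        "log"
    )

    // configure reports failure to its caller, as on the error-returning side.
    func configure(token string) error {
        if token == "" {
            return errors.New("failed to configure token")
        }
        return nil
    }

    // configureFatal logs and exits the whole process on failure, as on the
    // log.Fatalf side; the caller gets no chance to clean up or retry.
    func configureFatal(token string) {
        if token == "" {
            log.Fatalf("Failed to configure token: %v", errors.New("empty token"))
        }
    }

    func main() {
        if err := configure("abc"); err != nil {
            log.Fatalf("config failed: %v", err) // here the caller decides how to react
        }
        configureFatal("abc")
        fmt.Println("configured")
    }

The error-returning style is what lets a caller such as `rclone config reconnect` surface the failure instead of aborting the whole process.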
@@ -12,6 +12,7 @@ import (
 	"context"
 	"fmt"
 	"io"
+	"log"
 	"net/http"
 	"net/url"
 	"path"
@@ -71,7 +72,7 @@ func init() {
 		Name:        "pcloud",
 		Description: "Pcloud",
 		NewFs:       NewFs,
-		Config: func(ctx context.Context, name string, m configmap.Mapper) error {
+		Config: func(ctx context.Context, name string, m configmap.Mapper) {
 			optc := new(Options)
 			err := configstruct.Set(m, optc)
 			if err != nil {
@@ -99,9 +100,8 @@ func init() {
 			}
 			err = oauthutil.Config(ctx, "pcloud", name, m, oauthConfig, &opt)
 			if err != nil {
-				return errors.Wrap(err, "failed to configure token")
+				log.Fatalf("Failed to configure token: %v", err)
 			}
-			return nil
 		},
 		Options: append(oauthutil.SharedOptions, []fs.Option{{
 			Name: config.ConfigEncoding,
@@ -20,6 +20,7 @@ import (
 	"encoding/json"
 	"fmt"
 	"io"
+	"log"
 	"net"
 	"net/http"
 	"net/url"
@@ -77,12 +78,11 @@ func init() {
 		Name:        "premiumizeme",
 		Description: "premiumize.me",
 		NewFs:       NewFs,
-		Config: func(ctx context.Context, name string, m configmap.Mapper) error {
+		Config: func(ctx context.Context, name string, m configmap.Mapper) {
 			err := oauthutil.Config(ctx, "premiumizeme", name, m, oauthConfig, nil)
 			if err != nil {
-				return errors.Wrap(err, "failed to configure token")
+				log.Fatalf("Failed to configure token: %v", err)
 			}
-			return nil
 		},
 		Options: []fs.Option{{
 			Name: "api_key",
@@ -2,10 +2,10 @@ package putio

 import (
 	"context"
+	"log"
 	"regexp"
 	"time"

-	"github.com/pkg/errors"
 	"github.com/rclone/rclone/fs"
 	"github.com/rclone/rclone/fs/config"
 	"github.com/rclone/rclone/fs/config/configmap"
@@ -60,15 +60,14 @@ func init() {
 		Name:        "putio",
 		Description: "Put.io",
 		NewFs:       NewFs,
-		Config: func(ctx context.Context, name string, m configmap.Mapper) error {
+		Config: func(ctx context.Context, name string, m configmap.Mapper) {
 			opt := oauthutil.Options{
 				NoOffline: true,
 			}
 			err := oauthutil.Config(ctx, "putio", name, m, putioConfig, &opt)
 			if err != nil {
-				return errors.Wrap(err, "failed to configure token")
+				log.Fatalf("Failed to configure token: %v", err)
 			}
-			return nil
 		},
 		Options: []fs.Option{{
 			Name: config.ConfigEncoding,
@@ -26,6 +26,7 @@ import (
 	"github.com/aws/aws-sdk-go/aws/corehandlers"
 	"github.com/aws/aws-sdk-go/aws/credentials"
 	"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
+	"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
 	"github.com/aws/aws-sdk-go/aws/defaults"
 	"github.com/aws/aws-sdk-go/aws/ec2metadata"
 	"github.com/aws/aws-sdk-go/aws/endpoints"
@@ -1510,6 +1511,11 @@ func s3Connection(ctx context.Context, opt *Options, client *http.Client) (*s3.S
 			}),
 			ExpiryWindow: 3 * time.Minute,
 		},
+
+		// Pick up IAM role if we are in EKS
+		&stscreds.WebIdentityRoleProvider{
+			ExpiryWindow: 3 * time.Minute,
+		},
 	}
 	cred := credentials.NewChainCredentials(providers)

@@ -296,32 +296,36 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 }

 // Config callback for 2FA
-func Config(ctx context.Context, name string, m configmap.Mapper) error {
+func Config(ctx context.Context, name string, m configmap.Mapper) {
 	ci := fs.GetConfig(ctx)
 	serverURL, ok := m.Get(configURL)
 	if !ok || serverURL == "" {
 		// If there's no server URL, it means we're trying an operation at the backend level, like a "rclone authorize seafile"
-		return errors.New("operation not supported on this remote. If you need a 2FA code on your account, use the command: \nrclone config reconnect <remote name>: ")
+		fmt.Print("\nOperation not supported on this remote.\nIf you need a 2FA code on your account, use the command:\n\nrclone config reconnect <remote name>:\n\n")
+		return
 	}

 	// Stop if we are running non-interactive config
 	if ci.AutoConfirm {
-		return nil
+		return
 	}

 	u, err := url.Parse(serverURL)
 	if err != nil {
-		return errors.Errorf("invalid server URL %s", serverURL)
+		fs.Errorf(nil, "Invalid server URL %s", serverURL)
+		return
 	}

 	is2faEnabled, _ := m.Get(config2FA)
 	if is2faEnabled != "true" {
-		return errors.New("two-factor authentication is not enabled on this account")
+		fmt.Println("Two-factor authentication is not enabled on this account.")
+		return
 	}

 	username, _ := m.Get(configUser)
 	if username == "" {
-		return errors.New("a username is required")
+		fs.Errorf(nil, "A username is required")
+		return
 	}

 	password, _ := m.Get(configPassword)
@@ -372,7 +376,6 @@ func Config(ctx context.Context, name string, m configmap.Mapper) error {
 			break
 		}
 	}
-	return nil
 }

 // sets the AuthorizationToken up
@@ -16,7 +16,6 @@ import (
 	"strconv"
 	"strings"
 	"sync"
-	"sync/atomic"
 	"time"

 	"github.com/pkg/errors"
@@ -224,17 +223,6 @@ have a server which returns
Then you may need to enable this flag.

If concurrent reads are disabled, the use_fstat option is ignored.
 `,
 			Advanced: true,
-		}, {
-			Name:    "disable_concurrent_writes",
-			Default: false,
-			Help: `If set don't use concurrent writes
-
-Normally rclone uses concurrent writes to upload files. This improves
-the performance greatly, especially for distant servers.
-
-This option disables concurrent writes should that be necessary.
-`,
-			Advanced: true,
 		}, {
@@ -255,30 +243,29 @@ Set to 0 to keep connections indefinitely.

 // Options defines the configuration for this backend
 type Options struct {
-	Host                    string      `config:"host"`
-	User                    string      `config:"user"`
-	Port                    string      `config:"port"`
-	Pass                    string      `config:"pass"`
-	KeyPem                  string      `config:"key_pem"`
-	KeyFile                 string      `config:"key_file"`
-	KeyFilePass             string      `config:"key_file_pass"`
-	PubKeyFile              string      `config:"pubkey_file"`
-	KnownHostsFile          string      `config:"known_hosts_file"`
-	KeyUseAgent             bool        `config:"key_use_agent"`
-	UseInsecureCipher       bool        `config:"use_insecure_cipher"`
-	DisableHashCheck        bool        `config:"disable_hashcheck"`
-	AskPassword             bool        `config:"ask_password"`
-	PathOverride            string      `config:"path_override"`
-	SetModTime              bool        `config:"set_modtime"`
-	Md5sumCommand           string      `config:"md5sum_command"`
-	Sha1sumCommand          string      `config:"sha1sum_command"`
-	SkipLinks               bool        `config:"skip_links"`
-	Subsystem               string      `config:"subsystem"`
-	ServerCommand           string      `config:"server_command"`
-	UseFstat                bool        `config:"use_fstat"`
-	DisableConcurrentReads  bool        `config:"disable_concurrent_reads"`
-	DisableConcurrentWrites bool        `config:"disable_concurrent_writes"`
-	IdleTimeout             fs.Duration `config:"idle_timeout"`
+	Host                   string      `config:"host"`
+	User                   string      `config:"user"`
+	Port                   string      `config:"port"`
+	Pass                   string      `config:"pass"`
+	KeyPem                 string      `config:"key_pem"`
+	KeyFile                string      `config:"key_file"`
+	KeyFilePass            string      `config:"key_file_pass"`
+	PubKeyFile             string      `config:"pubkey_file"`
+	KnownHostsFile         string      `config:"known_hosts_file"`
+	KeyUseAgent            bool        `config:"key_use_agent"`
+	UseInsecureCipher      bool        `config:"use_insecure_cipher"`
+	DisableHashCheck       bool        `config:"disable_hashcheck"`
+	AskPassword            bool        `config:"ask_password"`
+	PathOverride           string      `config:"path_override"`
+	SetModTime             bool        `config:"set_modtime"`
+	Md5sumCommand          string      `config:"md5sum_command"`
+	Sha1sumCommand         string      `config:"sha1sum_command"`
+	SkipLinks              bool        `config:"skip_links"`
+	Subsystem              string      `config:"subsystem"`
+	ServerCommand          string      `config:"server_command"`
+	UseFstat               bool        `config:"use_fstat"`
+	DisableConcurrentReads bool        `config:"disable_concurrent_reads"`
+	IdleTimeout            fs.Duration `config:"idle_timeout"`
 }

 // Fs stores the interface to the remote SFTP files
@@ -299,7 +286,6 @@ type Fs struct {
 	drain     *time.Timer // used to drain the pool when we stop using the connections
 	pacer     *fs.Pacer   // pacer for operations
 	savedpswd string
-	transfers int32 // count in use references
 }

 // Object is a remote SFTP file that has been stat'd (so it exists, but is not necessarily open for reading)
@@ -362,23 +348,6 @@ func (c *conn) closed() error {
 	return nil
 }

-// Show that we are doing an upload or download
-//
-// Call removeTransfer() when done
-func (f *Fs) addTransfer() {
-	atomic.AddInt32(&f.transfers, 1)
-}
-
-// Show the upload or download done
-func (f *Fs) removeTransfer() {
-	atomic.AddInt32(&f.transfers, -1)
-}
-
-// getTransfers shows whether there are any transfers in progress
-func (f *Fs) getTransfers() int32 {
-	return atomic.LoadInt32(&f.transfers)
-}
-
 // Open a new connection to the SFTP server.
 func (f *Fs) sftpConnection(ctx context.Context) (c *conn, err error) {
 	// Rate limit rate of new connections
@@ -427,11 +396,7 @@ func (f *Fs) newSftpClient(conn *ssh.Client, opts ...sftp.ClientOption) (*sftp.C
 	opts = append(opts,
 		sftp.UseFstat(f.opt.UseFstat),
 		sftp.UseConcurrentReads(!f.opt.DisableConcurrentReads),
-		sftp.UseConcurrentWrites(!f.opt.DisableConcurrentWrites),
 	)
-	if f.opt.DisableConcurrentReads { // FIXME
-		fs.Errorf(f, "Ignoring disable_concurrent_reads after library reversion - see #5197")
-	}

 	return sftp.NewClientPipe(pr, pw, opts...)
 }
@@ -509,13 +474,6 @@ func (f *Fs) putSftpConnection(pc **conn, err error) {
 func (f *Fs) drainPool(ctx context.Context) (err error) {
 	f.poolMu.Lock()
 	defer f.poolMu.Unlock()
-	if transfers := f.getTransfers(); transfers != 0 {
-		fs.Debugf(f, "Not closing %d unused connections as %d transfers in progress", len(f.pool), transfers)
-		if f.opt.IdleTimeout > 0 {
-			f.drain.Reset(time.Duration(f.opt.IdleTimeout)) // nudge on the pool emptying timer
-		}
-		return nil
-	}
 	if f.opt.IdleTimeout > 0 {
 		f.drain.Stop()
 	}
@@ -1422,22 +1380,18 @@ func (o *Object) Storable() bool {

 // objectReader represents a file open for reading on the SFTP server
 type objectReader struct {
-	f          *Fs
 	sftpFile   *sftp.File
 	pipeReader *io.PipeReader
 	done       chan struct{}
 }

-func (f *Fs) newObjectReader(sftpFile *sftp.File) *objectReader {
+func newObjectReader(sftpFile *sftp.File) *objectReader {
 	pipeReader, pipeWriter := io.Pipe()
 	file := &objectReader{
-		f:          f,
 		sftpFile:   sftpFile,
 		pipeReader: pipeReader,
 		done:       make(chan struct{}),
 	}
-	// Show connection in use
-	f.addTransfer()

 	go func() {
 		// Use sftpFile.WriteTo to pump data so that it gets a
@@ -1467,8 +1421,6 @@ func (file *objectReader) Close() (err error) {
 	_ = file.pipeReader.Close()
 	// Wait for the background process to finish
 	<-file.done
-	// Show connection no longer in use
-	file.f.removeTransfer()
 	return err
 }

@@ -1502,27 +1454,12 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
 			return nil, errors.Wrap(err, "Open Seek failed")
 		}
 	}
-	in = readers.NewLimitedReadCloser(o.fs.newObjectReader(sftpFile), limit)
+	in = readers.NewLimitedReadCloser(newObjectReader(sftpFile), limit)
 	return in, nil
 }

-type sizeReader struct {
-	io.Reader
-	size int64
-}
-
-// Size returns the expected size of the stream
-//
-// It is used in sftpFile.ReadFrom as a hint to work out the
-// concurrency needed
-func (sr *sizeReader) Size() int64 {
-	return sr.size
-}
-
 // Update a remote sftp file using the data <in> and ModTime from <src>
 func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
-	o.fs.addTransfer() // Show transfer in progress
-	defer o.fs.removeTransfer()
 	// Clear the hash cache since we are about to update the object
 	o.md5sum = nil
 	o.sha1sum = nil
@@ -1550,7 +1487,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 			fs.Debugf(src, "Removed after failed upload: %v", err)
 		}
 	}
-	_, err = file.ReadFrom(&sizeReader{Reader: in, size: src.Size()})
+	_, err = file.ReadFrom(in)
 	if err != nil {
 		remove()
 		return errors.Wrap(err, "Update ReadFrom failed")
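Aside: the sizeReader removed above exists so that sftpFile.ReadFrom can learn the upload's length. The pattern stands on its own: wrap the stream, expose Size(), and let the consumer type-assert for the hint. A hedged sketch (the interface probe mirrors what the pkg/sftp library does internally; the names here are illustrative):

    package main

    import (
        "fmt"
        "io"
        "strings"
    )

    // sizeReader wraps an io.Reader and carries the expected stream length.
    type sizeReader struct {
        io.Reader
        size int64
    }

    // Size returns the expected size of the stream, usable as a concurrency hint.
    func (sr *sizeReader) Size() int64 { return sr.size }

    // consume stands in for a ReadFrom-style consumer that probes for the hint.
    func consume(r io.Reader) {
        if s, ok := r.(interface{ Size() int64 }); ok {
            fmt.Printf("size hint: %d bytes\n", s.Size())
        }
        _, _ = io.Copy(io.Discard, r) // drain the stream as ReadFrom would
    }

    func main() {
        data := "hello sftp"
        consume(&sizeReader{Reader: strings.NewReader(data), size: int64(len(data))})
    }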
@@ -77,6 +77,7 @@ import (
 	"fmt"
 	"io"
 	"io/ioutil"
+	"log"
 	"net/http"
 	"net/url"
 	"path"
@@ -135,7 +136,7 @@ func init() {
 		Name:        "sharefile",
 		Description: "Citrix Sharefile",
 		NewFs:       NewFs,
-		Config: func(ctx context.Context, name string, m configmap.Mapper) error {
+		Config: func(ctx context.Context, name string, m configmap.Mapper) {
 			oauthConfig := newOauthConfig("")
 			checkAuth := func(oauthConfig *oauth2.Config, auth *oauthutil.AuthResult) error {
 				if auth == nil || auth.Form == nil {
@@ -156,9 +157,8 @@ func init() {
 			}
 			err := oauthutil.Config(ctx, "sharefile", name, m, oauthConfig, &opt)
 			if err != nil {
-				return errors.Wrap(err, "failed to configure token")
+				log.Fatalf("Failed to configure token: %v", err)
 			}
-			return nil
 		},
 		Options: []fs.Option{{
 			Name: "upload_cutoff",
@@ -16,6 +16,7 @@ import (
 	"context"
 	"fmt"
 	"io"
+	"log"
 	"net/http"
 	"net/url"
 	"path"
@@ -75,17 +76,17 @@ func init() {
 		Name:        "sugarsync",
 		Description: "Sugarsync",
 		NewFs:       NewFs,
-		Config: func(ctx context.Context, name string, m configmap.Mapper) error {
+		Config: func(ctx context.Context, name string, m configmap.Mapper) {
 			opt := new(Options)
 			err := configstruct.Set(m, opt)
 			if err != nil {
-				return errors.Wrap(err, "failed to read options")
+				log.Fatalf("Failed to read options: %v", err)
 			}

 			if opt.RefreshToken != "" {
 				fmt.Printf("Already have a token - refresh?\n")
 				if !config.ConfirmWithConfig(ctx, m, "config_refresh_token", true) {
-					return nil
+					return
 				}
 			}
 			fmt.Printf("Username (email address)> ")
@@ -113,11 +114,10 @@ func init() {
 			//	return shouldRetry(ctx, resp, err)
 			//})
 			if err != nil {
-				return errors.Wrap(err, "failed to get token")
+				log.Fatalf("Failed to get token: %v", err)
 			}
 			opt.RefreshToken = resp.Header.Get("Location")
 			m.Set("refresh_token", opt.RefreshToken)
-			return nil
 		},
 		Options: []fs.Option{{
 			Name: "app_id",
@@ -7,6 +7,7 @@ import (
 	"context"
 	"fmt"
 	"io"
+	"log"
 	"path"
 	"strings"
 	"time"
@@ -41,7 +42,7 @@ func init() {
 		Name:        "tardigrade",
 		Description: "Tardigrade Decentralized Cloud Storage",
 		NewFs:       NewFs,
-		Config: func(ctx context.Context, name string, configMapper configmap.Mapper) error {
+		Config: func(ctx context.Context, name string, configMapper configmap.Mapper) {
 			provider, _ := configMapper.Get(fs.ConfigProvider)

 			config.FileDeleteKey(name, fs.ConfigProvider)
@@ -53,7 +54,7 @@ func init() {

 			// satelliteString contains always default and passphrase can be empty
 			if apiKey == "" {
-				return nil
+				return
 			}

 			satellite, found := satMap[satelliteString]
@@ -63,12 +64,12 @@ func init() {

 			access, err := uplink.RequestAccessWithPassphrase(context.TODO(), satellite, apiKey, passphrase)
 			if err != nil {
-				return errors.Wrap(err, "couldn't create access grant")
+				log.Fatalf("Couldn't create access grant: %v", err)
 			}

 			serializedAccess, err := access.Serialize()
 			if err != nil {
-				return errors.Wrap(err, "couldn't serialize access grant")
+				log.Fatalf("Couldn't serialize access grant: %v", err)
 			}
 			configMapper.Set("satellite_address", satellite)
 			configMapper.Set("access_grant", serializedAccess)
@@ -77,9 +78,8 @@ func init() {
 				config.FileDeleteKey(name, "api_key")
 				config.FileDeleteKey(name, "passphrase")
 			} else {
-				return errors.Errorf("invalid provider type: %s", provider)
+				log.Fatalf("Invalid provider type: %s", provider)
 			}
-			return nil
 		},
 		Options: []fs.Option{
 			{
@@ -1,170 +0,0 @@
-package api
-
-import "fmt"
-
-// Error contains the error code and message returned by the API
-type Error struct {
-	Success    bool   `json:"success,omitempty"`
-	StatusCode int    `json:"statusCode,omitempty"`
-	Message    string `json:"message,omitempty"`
-	Data       string `json:"data,omitempty"`
-}
-
-// Error returns a string for the error and satisfies the error interface
-func (e Error) Error() string {
-	out := fmt.Sprintf("api error %d", e.StatusCode)
-	if e.Message != "" {
-		out += ": " + e.Message
-	}
-	if e.Data != "" {
-		out += ": " + e.Data
-	}
-	return out
-}
-
-// FolderEntry represents a Uptobox subfolder when listing folder contents
-type FolderEntry struct {
-	FolderID    uint64 `json:"fld_id"`
-	Description string `json:"fld_descr"`
-	Password    string `json:"fld_password"`
-	FullPath    string `json:"fullPath"`
-	Path        string `json:"fld_name"`
-	Name        string `json:"name"`
-	Hash        string `json:"hash"`
-}
-
-// FolderInfo represents the current folder when listing folder contents
-type FolderInfo struct {
-	FolderID      uint64 `json:"fld_id"`
-	Hash          string `json:"hash"`
-	FileCount     uint64 `json:"fileCount"`
-	TotalFileSize int64  `json:"totalFileSize"`
-}
-
-// FileInfo represents a file when listing folder contents
-type FileInfo struct {
-	Name         string `json:"file_name"`
-	Description  string `json:"file_descr"`
-	Created      string `json:"file_created"`
-	Size         int64  `json:"file_size"`
-	Downloads    uint64 `json:"file_downloads"`
-	Code         string `json:"file_code"`
-	Password     string `json:"file_password"`
-	Public       int    `json:"file_public"`
-	LastDownload string `json:"file_last_download"`
-	ID           uint64 `json:"id"`
-}
-
-// ReadMetadataResponse is the response when listing folder contents
-type ReadMetadataResponse struct {
-	StatusCode int    `json:"statusCode"`
-	Message    string `json:"message"`
-	Data       struct {
-		CurrentFolder  FolderInfo    `json:"currentFolder"`
-		Folders        []FolderEntry `json:"folders"`
-		Files          []FileInfo    `json:"files"`
-		PageCount      int           `json:"pageCount"`
-		TotalFileCount int           `json:"totalFileCount"`
-		TotalFileSize  int64         `json:"totalFileSize"`
-	} `json:"data"`
-}
-
-// UploadInfo is the response when initiating an upload
-type UploadInfo struct {
-	StatusCode int    `json:"statusCode"`
-	Message    string `json:"message"`
-	Data       struct {
-		UploadLink string `json:"uploadLink"`
-		MaxUpload  string `json:"maxUpload"`
-	} `json:"data"`
-}
-
-// UploadResponse is the respnse to a successful upload
-type UploadResponse struct {
-	Files []struct {
-		Name      string `json:"name"`
-		Size      int64  `json:"size"`
-		URL       string `json:"url"`
-		DeleteURL string `json:"deleteUrl"`
-	} `json:"files"`
-}
-
-// UpdateResponse is a generic response to various action on files (rename/copy/move)
-type UpdateResponse struct {
-	Message    string `json:"message"`
-	StatusCode int    `json:"statusCode"`
-}
-
-// Download is the response when requesting a download link
-type Download struct {
-	StatusCode int    `json:"statusCode"`
-	Message    string `json:"message"`
-	Data       struct {
-		DownloadLink string `json:"dlLink"`
-	} `json:"data"`
-}
-
-// MetadataRequestOptions represents all the options when listing folder contents
-type MetadataRequestOptions struct {
-	Limit       uint64
-	Offset      uint64
-	SearchField string
-	Search      string
-}
-
-// CreateFolderRequest is used for creating a folder
-type CreateFolderRequest struct {
-	Token string `json:"token"`
-	Path  string `json:"path"`
-	Name  string `json:"name"`
-}
-
-// DeleteFolderRequest is used for deleting a folder
-type DeleteFolderRequest struct {
-	Token    string `json:"token"`
-	FolderID uint64 `json:"fld_id"`
-}
-
-// CopyMoveFileRequest is used for moving/copying a file
-type CopyMoveFileRequest struct {
-	Token               string `json:"token"`
-	FileCodes           string `json:"file_codes"`
-	DestinationFolderID uint64 `json:"destination_fld_id"`
-	Action              string `json:"action"`
-}
-
-// MoveFolderRequest is used for moving a folder
-type MoveFolderRequest struct {
-	Token               string `json:"token"`
-	FolderID            uint64 `json:"fld_id"`
-	DestinationFolderID uint64 `json:"destination_fld_id"`
-	Action              string `json:"action"`
-}
-
-// RenameFolderRequest is used for renaming a folder
-type RenameFolderRequest struct {
-	Token    string `json:"token"`
-	FolderID uint64 `json:"fld_id"`
-	NewName  string `json:"new_name"`
-}
-
-// UpdateFileInformation is used for renaming a file
-type UpdateFileInformation struct {
-	Token       string `json:"token"`
-	FileCode    string `json:"file_code"`
-	NewName     string `json:"new_name,omitempty"`
-	Description string `json:"description,omitempty"`
-	Password    string `json:"password,omitempty"`
-	Public      string `json:"public,omitempty"`
-}
-
-// RemoveFileRequest is used for deleting a file
-type RemoveFileRequest struct {
-	Token     string `json:"token"`
-	FileCodes string `json:"file_codes"`
-}
-
-// Token represents the authentication token
-type Token struct {
-	Token string `json:"token"`
-}

[ File diff suppressed because it is too large ]
@@ -1,21 +0,0 @@
-// Test Uptobox filesystem interface
-package uptobox_test
-
-import (
-	"testing"
-
-	"github.com/rclone/rclone/backend/uptobox"
-	"github.com/rclone/rclone/fstest"
-	"github.com/rclone/rclone/fstest/fstests"
-)
-
-// TestIntegration runs integration tests against the remote
-func TestIntegration(t *testing.T) {
-	if *fstest.RemoteName == "" {
-		*fstest.RemoteName = "TestUptobox:"
-	}
-	fstests.Run(t, &fstests.Opt{
-		RemoteName: *fstest.RemoteName,
-		NilObject:  (*uptobox.Object)(nil),
-	})
-}
@@ -125,7 +125,7 @@ func (ca *CookieAuth) getSPCookie(conf *SharepointSuccessResponse) (*CookieRespo
 		return nil, errors.Wrap(err, "Error while constructing endpoint URL")
 	}

-	u, err := url.Parse(spRoot.Scheme + "://" + spRoot.Host + "/_forms/default.aspx?wa=wsignin1.0")
+	u, err := url.Parse("https://" + spRoot.Host + "/_forms/default.aspx?wa=wsignin1.0")
 	if err != nil {
 		return nil, errors.Wrap(err, "Error while constructing login URL")
 	}

@@ -1067,7 +1067,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
 		Path:       addSlash(srcPath),
 		NoResponse: true,
 		ExtraHeaders: map[string]string{
-			"Destination": addSlash(destinationURL.String()),
+			"Destination": destinationURL.String(),
 			"Overwrite":   "F",
 		},
 	}
|
||||
Name: "yandex",
|
||||
Description: "Yandex Disk",
|
||||
NewFs: NewFs,
|
||||
Config: func(ctx context.Context, name string, m configmap.Mapper) error {
|
||||
Config: func(ctx context.Context, name string, m configmap.Mapper) {
|
||||
err := oauthutil.Config(ctx, "yandex", name, m, oauthConfig, nil)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "failed to configure token")
|
||||
log.Fatalf("Failed to configure token: %v", err)
|
||||
return
|
||||
}
|
||||
return nil
|
||||
},
|
||||
Options: append(oauthutil.SharedOptions, []fs.Option{{
|
||||
Name: config.ConfigEncoding,
|
||||
@@ -251,22 +251,22 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
|
||||
|
||||
token, err := oauthutil.GetToken(name, m)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "couldn't read OAuth token")
|
||||
log.Fatalf("Couldn't read OAuth token (this should never happen).")
|
||||
}
|
||||
if token.RefreshToken == "" {
|
||||
return nil, errors.New("unable to get RefreshToken. If you are upgrading from older versions of rclone, please run `rclone config` and re-configure this backend")
|
||||
log.Fatalf("Unable to get RefreshToken. If you are upgrading from older versions of rclone, please run `rclone config` and re-configure this backend.")
|
||||
}
|
||||
if token.TokenType != "OAuth" {
|
||||
token.TokenType = "OAuth"
|
||||
err = oauthutil.PutToken(name, m, token, false)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "couldn't save OAuth token")
|
||||
log.Fatalf("Couldn't save OAuth token (this should never happen).")
|
||||
}
|
||||
log.Printf("Automatically upgraded OAuth config.")
|
||||
}
|
||||
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "failed to configure Yandex")
|
||||
log.Fatalf("Failed to configure Yandex: %v", err)
|
||||
}
|
||||
|
||||
ci := fs.GetConfig(ctx)
|
||||
|
||||
@@ -7,6 +7,7 @@ import (
 	"fmt"
 	"io"
 	"io/ioutil"
+	"log"
 	"net/http"
 	"net/url"
 	"path"
@@ -72,41 +73,32 @@ func init() {
 		Name:        "zoho",
 		Description: "Zoho",
 		NewFs:       NewFs,
-		Config: func(ctx context.Context, name string, m configmap.Mapper) error {
+		Config: func(ctx context.Context, name string, m configmap.Mapper) {
 			// Need to setup region before configuring oauth
-			err := setupRegion(m)
-			if err != nil {
-				return err
-			}
+			setupRegion(m)
 			opt := oauthutil.Options{
 				// No refresh token unless ApprovalForce is set
 				OAuth2Opts: []oauth2.AuthCodeOption{oauth2.ApprovalForce},
 			}
 			if err := oauthutil.Config(ctx, "zoho", name, m, oauthConfig, &opt); err != nil {
-				return errors.Wrap(err, "failed to configure token")
+				log.Fatalf("Failed to configure token: %v", err)
 			}
 			// We need to rewrite the token type to "Zoho-oauthtoken" because Zoho wants
 			// it's own custom type
 			token, err := oauthutil.GetToken(name, m)
 			if err != nil {
-				return errors.Wrap(err, "failed to read token")
+				log.Fatalf("Failed to read token: %v", err)
 			}
 			if token.TokenType != "Zoho-oauthtoken" {
 				token.TokenType = "Zoho-oauthtoken"
 				err = oauthutil.PutToken(name, m, token, false)
 				if err != nil {
-					return errors.Wrap(err, "failed to configure token")
+					log.Fatalf("Failed to configure token: %v", err)
 				}
 			}

-			if fs.GetConfig(ctx).AutoConfirm {
-				return nil
-			}
-
 			if err = setupRoot(ctx, name, m); err != nil {
-				return errors.Wrap(err, "failed to configure root directory")
+				log.Fatalf("Failed to configure root directory: %v", err)
 			}
-			return nil
 		},
 		Options: append(oauthutil.SharedOptions, []fs.Option{{
 			Name: "region",
@@ -167,16 +159,15 @@ type Object struct {

 // ------------------------------------------------------------

-func setupRegion(m configmap.Mapper) error {
+func setupRegion(m configmap.Mapper) {
 	region, ok := m.Get("region")
-	if !ok || region == "" {
-		return errors.New("no region set")
+	if !ok {
+		log.Fatalf("No region set\n")
 	}
 	rootURL = fmt.Sprintf("https://workdrive.zoho.%s/api/v1", region)
 	accountsURL = fmt.Sprintf("https://accounts.zoho.%s", region)
 	oauthConfig.Endpoint.AuthURL = fmt.Sprintf("https://accounts.zoho.%s/oauth/v2/auth", region)
 	oauthConfig.Endpoint.TokenURL = fmt.Sprintf("https://accounts.zoho.%s/oauth/v2/token", region)
-	return nil
 }

 // ------------------------------------------------------------
@@ -212,7 +203,7 @@ func listWorkspaces(ctx context.Context, teamID string, srv *rest.Client) ([]api
 func setupRoot(ctx context.Context, name string, m configmap.Mapper) error {
 	oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
 	if err != nil {
-		return errors.Wrap(err, "failed to load oAuthClient")
+		log.Fatalf("Failed to load oAuthClient: %s", err)
 	}
 	authSrv := rest.NewClient(oAuthClient).SetRoot(accountsURL)
 	opts := rest.Opts{
@@ -381,10 +372,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 	if err := configstruct.Set(m, opt); err != nil {
 		return nil, err
 	}
-	err := setupRegion(m)
-	if err != nil {
-		return nil, err
-	}
+	setupRegion(m)

 	root = parsePath(root)
 	oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
@@ -62,7 +62,6 @@ docs = [
     "sftp.md",
     "sugarsync.md",
     "tardigrade.md",
-    "uptobox.md",
     "union.md",
     "webdav.md",
     "yandex.md",
@@ -44,10 +44,10 @@ var commandDefinition = &cobra.Command{
 	Use:   "about remote:",
 	Short: `Get quota information from the remote.`,
 	Long: `
-` + "`rclone about`" + ` prints quota information about a remote to standard
+` + "`rclone about`" + `prints quota information about a remote to standard
 output. The output is typically used, free, quota and trash contents.

-E.g. Typical output from ` + "`rclone about remote:`" + ` is:
+E.g. Typical output from` + "`rclone about remote:`" + `is:

     Total:   17G
     Used:    7.444G
@@ -75,7 +75,7 @@ Applying a ` + "`--full`" + ` flag to the command prints the bytes in full, e.g.
     Trashed: 104857602
     Other:   8849156022

-A ` + "`--json`" + ` flag generates conveniently computer readable output, e.g.
+A ` + "`--json`" + `flag generates conveniently computer readable output, e.g.

     {
         "total": 18253611008,
@@ -54,7 +54,6 @@ import (
 	_ "github.com/rclone/rclone/cmd/size"
 	_ "github.com/rclone/rclone/cmd/sync"
 	_ "github.com/rclone/rclone/cmd/test"
-	_ "github.com/rclone/rclone/cmd/test/changenotify"
 	_ "github.com/rclone/rclone/cmd/test/histogram"
 	_ "github.com/rclone/rclone/cmd/test/info"
 	_ "github.com/rclone/rclone/cmd/test/makefiles"
cmd/cmd.go
@@ -75,19 +75,8 @@ const (

 // ShowVersion prints the version to stdout
 func ShowVersion() {
-	osVersion, osKernel := buildinfo.GetOSVersion()
-	if osVersion == "" {
-		osVersion = "unknown"
-	}
-	if osKernel == "" {
-		osKernel = "unknown"
-	}
-
 	linking, tagString := buildinfo.GetLinkingAndTags()

 	fmt.Printf("rclone %s\n", fs.Version)
-	fmt.Printf("- os/version: %s\n", osVersion)
-	fmt.Printf("- os/kernel: %s\n", osKernel)
 	fmt.Printf("- os/type: %s\n", runtime.GOOS)
 	fmt.Printf("- os/arch: %s\n", runtime.GOARCH)
 	fmt.Printf("- go/version: %s\n", runtime.Version())
@@ -400,10 +389,7 @@ func initConfig() {
 	configflags.SetFlags(ci)

 	// Load the config
-	err := configfile.LoadConfig(ctx)
-	if err != nil {
-		log.Fatalf("Failed to load config: %v", err)
-	}
+	configfile.LoadConfig(ctx)

 	// Start accounting
 	accounting.Start(ctx)
@@ -414,7 +400,7 @@ func initConfig() {
 	}

 	// Load filters
-	err = filterflags.Reload(ctx)
+	err := filterflags.Reload(ctx)
 	if err != nil {
 		log.Fatalf("Failed to load filters: %v", err)
 	}
@@ -567,7 +553,7 @@ func Main() {
 	setupRootCommand(Root)
 	AddBackendFlags()
 	if err := Root.Execute(); err != nil {
-		if strings.HasPrefix(err.Error(), "unknown command") && selfupdateEnabled {
+		if strings.HasPrefix(err.Error(), "unknown command") {
 			Root.PrintErrf("You could use '%s selfupdate' to get latest features.\n\n", Root.CommandPath())
 		}
 		log.Fatalf("Fatal error: %v", err)
@@ -21,7 +21,6 @@ import (
 	"github.com/rclone/rclone/cmd/mountlib"
 	"github.com/rclone/rclone/fs"
 	"github.com/rclone/rclone/lib/atexit"
-	"github.com/rclone/rclone/lib/buildinfo"
 	"github.com/rclone/rclone/vfs"
 )

@@ -36,7 +35,6 @@ func init() {
 		cmd.Aliases = append(cmd.Aliases, "cmount")
 	}
 	mountlib.AddRc("cmount", mount)
-	buildinfo.Tags = append(buildinfo.Tags, "cmount")
 }

 // Find the option string in the current options
@@ -22,7 +22,6 @@ func init() {
 	cmd.Root.AddCommand(configCommand)
 	configCommand.AddCommand(configEditCommand)
 	configCommand.AddCommand(configFileCommand)
-	configCommand.AddCommand(configTouchCommand)
 	configCommand.AddCommand(configShowCommand)
 	configCommand.AddCommand(configDumpCommand)
 	configCommand.AddCommand(configProvidersCommand)
@@ -42,9 +41,9 @@ var configCommand = &cobra.Command{
 remotes and manage existing ones. You may also set or remove a
 password to protect your configuration.
 `,
-	RunE: func(command *cobra.Command, args []string) error {
+	Run: func(command *cobra.Command, args []string) {
 		cmd.CheckArgs(0, 0, command, args)
-		return config.EditConfig(context.Background())
+		config.EditConfig(context.Background())
 	},
 }

@@ -64,15 +63,6 @@ var configFileCommand = &cobra.Command{
 	},
 }

-var configTouchCommand = &cobra.Command{
-	Use:   "touch",
-	Short: `Ensure configuration file exists.`,
-	Run: func(command *cobra.Command, args []string) {
-		cmd.CheckArgs(0, 0, command, args)
-		config.SaveConfig()
-	},
-}
-
 var configShowCommand = &cobra.Command{
 	Use:   "show [<remote>]",
 	Short: `Print (decrypted) config file, or the config for a single remote.`,
@@ -272,7 +262,8 @@ This normally means going through the interactive oauth flow again.
 		if fsInfo.Config == nil {
 			return errors.Errorf("%s: doesn't support Reconnect", configName)
 		}
-		return fsInfo.Config(ctx, configName, config)
+		fsInfo.Config(ctx, configName, config)
+		return nil
 	},
 }
@@ -36,7 +36,7 @@ var commandDefinition = &cobra.Command{
Download a URL's content and copy it to the destination without saving
it in temporary storage.

-Setting ` + "`--auto-filename`" + ` will cause the file name to be retrieved from
+Setting ` + "`--auto-filename`" + `will cause the file name to be retrieved from
the from URL (after any redirections) and used in the destination
path. With ` + "`--print-filename`" + ` in addition, the resuling file name will
be printed.
@@ -3,6 +3,7 @@ package link
 import (
 	"context"
 	"fmt"
+	"time"

 	"github.com/rclone/rclone/cmd"
 	"github.com/rclone/rclone/fs"
@@ -12,7 +13,7 @@ import (
 )

 var (
-	expire = fs.DurationOff
+	expire = fs.Duration(time.Hour * 24 * 365 * 100)
 	unlink = false
 )
@@ -334,7 +334,7 @@ metadata about files like in UNIX. One case that may arise is that other program
(incorrectly) interprets this as the file being accessible by everyone. For example
an SSH client may warn about "unprotected private key file".

-WinFsp 2021 (version 1.9) introduces a new FUSE option "FileSecurity",
+WinFsp 2021 (version 1.9, still in beta) introduces a new FUSE option "FileSecurity",
that allows the complete specification of file security descriptors using
[SDDL](https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format).
With this you can work around issues such as the mentioned "unprotected private key file"
@@ -342,38 +342,19 @@ by specifying |-o FileSecurity="D:P(A;;FA;;;OW)"|, for file all access (FA) to t

#### Windows caveats

-Drives created as Administrator are not visible to other accounts,
-not even an account that was elevated to Administrator with the
-User Account Control (UAC) feature. A result of this is that if you mount
-to a drive letter from a Command Prompt run as Administrator, and then try
-to access the same drive from Windows Explorer (which does not run as
-Administrator), you will not be able to see the mounted drive.
+Note that drives created as Administrator are not visible by other
+accounts (including the account that was elevated as
+Administrator). So if you start a Windows drive from an Administrative
+Command Prompt and then try to access the same drive from Explorer
+(which does not run as Administrator), you will not be able to see the
+new drive.

-If you don't need to access the drive from applications running with
-administrative privileges, the easiest way around this is to always
-create the mount from a non-elevated command prompt.
-
-To make mapped drives available to the user account that created them
-regardless if elevated or not, there is a special Windows setting called
-[linked connections](https://docs.microsoft.com/en-us/troubleshoot/windows-client/networking/mapped-drives-not-available-from-elevated-command#detail-to-configure-the-enablelinkedconnections-registry-entry)
-that can be enabled.
-
-It is also possible to make a drive mount available to everyone on the system,
-by running the process creating it as the built-in SYSTEM account.
-There are several ways to do this: One is to use the command-line
-utility [PsExec](https://docs.microsoft.com/en-us/sysinternals/downloads/psexec),
-from Microsoft's Sysinternals suite, which has option |-s| to start
-processes as the SYSTEM account. Another alternative is to run the mount
-command from a Windows Scheduled Task, or a Windows Service, configured
-to run as the SYSTEM account. A third alternative is to use the
-[WinFsp.Launcher infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture)).
-Note that when running rclone as another user, it will not use
-the configuration file from your profile unless you tell it to
-with the [|--config|](https://rclone.org/docs/#config-config-file) option.
-Read more in the [install documentation](https://rclone.org/install/).
-
-Note that mapping to a directory path, instead of a drive letter,
-does not suffer from the same limitations.
+The easiest way around this is to start the drive from a normal
+command prompt. It is also possible to start a drive from the SYSTEM
+account (using [the WinFsp.Launcher
+infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture))
+which creates drives accessible for everyone on the system or
+alternatively using [the nssm service manager](https://nssm.cc/usage).

### Limitations

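Aside: the removed paragraph above mentions PsExec's |-s| option for starting the mount as the SYSTEM account. Assuming a remote named remote: and a free drive letter X: (both placeholders), a typical invocation would look something like:

    psexec -s rclone mount remote: X: --config C:\path\to\rclone.conf

The explicit --config matters because, as the removed text notes, rclone running as another user will not pick up the configuration file from your own profile.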
@@ -21,7 +21,7 @@ import (

 func TestRc(t *testing.T) {
 	ctx := context.Background()
-	require.NoError(t, configfile.LoadConfig(ctx))
+	configfile.LoadConfig(ctx)
 	mount := rc.Calls.Get("mount/mount")
 	assert.NotNil(t, mount)
 	unmount := rc.Calls.Get("mount/unmount")
@@ -485,15 +485,11 @@ func (u *UI) removeEntry(pos int) {

 // delete the entry at the current position
 func (u *UI) delete() {
-	if u.d == nil || len(u.entries) == 0 {
-		return
-	}
 	ctx := context.Background()
-	cursorPos := u.dirPosMap[u.path]
-	dirPos := u.sortPerm[cursorPos.entry]
-	dirEntry := u.entries[dirPos]
+	dirPos := u.sortPerm[u.dirPosMap[u.path].entry]
+	entry := u.entries[dirPos]
 	u.boxMenu = []string{"cancel", "confirm"}
-	if obj, isFile := dirEntry.(fs.Object); isFile {
+	if obj, isFile := entry.(fs.Object); isFile {
 		u.boxMenuHandler = func(f fs.Fs, p string, o int) (string, error) {
 			if o != 1 {
 				return "Aborted!", nil
@@ -503,33 +499,27 @@ func (u *UI) delete() {
 				return "", err
 			}
 			u.removeEntry(dirPos)
-			if cursorPos.entry >= len(u.entries) {
-				u.move(-1) // move back onto a valid entry
-			}
 			return "Successfully deleted file!", nil
 		}
 		u.popupBox([]string{
 			"Delete this file?",
-			u.fsName + dirEntry.String()})
+			u.fsName + entry.String()})
 	} else {
 		u.boxMenuHandler = func(f fs.Fs, p string, o int) (string, error) {
 			if o != 1 {
 				return "Aborted!", nil
 			}
-			err := operations.Purge(ctx, f, dirEntry.String())
+			err := operations.Purge(ctx, f, entry.String())
 			if err != nil {
 				return "", err
 			}
 			u.removeEntry(dirPos)
-			if cursorPos.entry >= len(u.entries) {
-				u.move(-1) // move back onto a valid entry
-			}
 			return "Successfully purged folder!", nil
 		}
 		u.popupBox([]string{
 			"Purge this directory?",
 			"ALL files in it will be deleted",
-			u.fsName + dirEntry.String()})
+			u.fsName + entry.String()})
 	}
 }

@@ -7,19 +7,12 @@ import (
 	"time"

 	"github.com/rclone/rclone/cmd"
-	"github.com/rclone/rclone/fs/config/flags"
 	"github.com/rclone/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )

-var (
-	size = int64(-1)
-)
-
 func init() {
 	cmd.Root.AddCommand(commandDefinition)
-	cmdFlags := commandDefinition.Flags()
-	flags.Int64VarP(cmdFlags, &size, "size", "", size, "File size hint to preallocate")
 }

 var commandDefinition = &cobra.Command{
@@ -44,13 +37,6 @@ must fit into RAM. The cutoff needs to be small enough to adhere
the limits of your remote, please see there. Generally speaking,
setting this cutoff too high will decrease your performance.

-Use the |--size| flag to preallocate the file in advance at the remote end
-and actually stream it, even if remote backend doesn't support streaming.
-
-|--size| should be the exact size of the input stream in bytes. If the
-size of the stream is different in length to the |--size| passed in
-then the transfer will likely fail.
-
Note that the upload can also not be retried because the data is
not kept around until the upload succeeds. If you need to transfer
a lot of data, you're better off caching locally and then
@@ -65,7 +51,7 @@ a lot of data, you're better off caching locally and then

 	fdst, dstFileName := cmd.NewFsDstFile(args)
 	cmd.Run(false, false, command, func() error {
-		_, err := operations.RcatSize(context.Background(), fdst, dstFileName, os.Stdin, size, time.Now())
+		_, err := operations.Rcat(context.Background(), fdst, dstFileName, os.Stdin, time.Now())
 		return err
 	})
 },
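Aside: both sides of this hunk keep the basic streaming form of the command; only the removed code adds the |--size| preallocation variant. Typical invocations (the paths are placeholders):

    echo "hello" | rclone rcat remote:path/to/file
    cat big.iso | rclone rcat --size 4700000000 remote:path/to/big.iso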
@@ -1,5 +1,3 @@
-// +build !noselfupdate
-
 package selfupdate

 // Note: "|" will be replaced by backticks in the help string below
@@ -29,7 +27,7 @@ If the old version contains only dots and digits (for example |v1.54.0|)
then it's a stable release so you won't need the |--beta| flag. Beta releases
have an additional information similar to |v1.54.0-beta.5111.06f1c0c61|.
(if you are a developer and use a locally built rclone, the version number
-will end with |-DEV|, you will have to rebuild it as it obviously can't
+will end with |-DEV|, you will have to rebuild it as it obvisously can't
be distributed).

If you previously installed rclone via a package manager, the package may
@@ -1,11 +0,0 @@
-// +build noselfupdate
-
-package selfupdate
-
-import (
-	"github.com/rclone/rclone/lib/buildinfo"
-)
-
-func init() {
-	buildinfo.Tags = append(buildinfo.Tags, "noselfupdate")
-}
@@ -1,5 +1,3 @@
-// +build !noselfupdate
-
 package selfupdate

 import (
@@ -145,9 +143,14 @@ func InstallUpdate(ctx context.Context, opt *Options) error {
 		return errors.New("--stable and --beta are mutually exclusive")
 	}

-	// The `cmount` tag is added by cmd/cmount/mount.go only if build is static.
-	_, tags := buildinfo.GetLinkingAndTags()
-	if strings.Contains(" "+tags+" ", " cmount ") && !cmount.ProvidedBy(runtime.GOOS) {
+	gotCmount := false
+	for _, tag := range buildinfo.Tags {
+		if tag == "cmount" {
+			gotCmount = true
+			break
+		}
+	}
+	if gotCmount && !cmount.ProvidedBy(runtime.GOOS) {
 		return errors.New("updating would discard the mount FUSE capability, aborting")
 	}

@@ -1,5 +1,3 @@
-// +build !noselfupdate
-
 package selfupdate

 import (

@@ -1,5 +1,3 @@
-// +build !noselfupdate
-
 package selfupdate

 import (

@@ -1,5 +1,4 @@
 // +build !windows,!plan9,!js
-// +build !noselfupdate

 package selfupdate


@@ -1,5 +1,4 @@
 // +build plan9 js
-// +build !noselfupdate

 package selfupdate


@@ -1,5 +1,4 @@
 // +build windows
-// +build !noselfupdate

 package selfupdate


@@ -1,5 +0,0 @@
-// +build noselfupdate
-
-package cmd
-
-const selfupdateEnabled = false
@@ -1,7 +0,0 @@
-// +build !noselfupdate
-
-package cmd
-
-// This constant must be in the `cmd` package rather than `cmd/selfupdate`
-// to prevent build failure due to dependency loop.
-const selfupdateEnabled = true
@@ -41,7 +41,7 @@ func startServer(t *testing.T, f fs.Fs) {
 }

 func TestInit(t *testing.T) {
-	require.NoError(t, configfile.LoadConfig(context.Background()))
+	configfile.LoadConfig(context.Background())

 	f, err := fs.NewFs(context.Background(), "testdata/files")
 	l, _ := f.List(context.Background(), "")
@@ -61,7 +61,7 @@ var (
 func TestInit(t *testing.T) {
 	ctx := context.Background()
 	// Configure the remote
-	require.NoError(t, configfile.LoadConfig(context.Background()))
+	configfile.LoadConfig(context.Background())
 	// fs.Config.LogLevel = fs.LogLevelDebug
 	// fs.Config.DumpHeaders = true
 	// fs.Config.DumpBodies = true
@@ -66,7 +66,7 @@ func createOverwriteDeleteSeq(t testing.TB, path string) []TestRequest {
 // TestResticHandler runs tests on the restic handler code, especially in append-only mode.
 func TestResticHandler(t *testing.T) {
 	ctx := context.Background()
-	require.NoError(t, configfile.LoadConfig(ctx))
+	configfile.LoadConfig(ctx)
 	buf := make([]byte, 32)
 	_, err := io.ReadFull(rand.Reader, buf)
 	require.NoError(t, err)
@@ -1,54 +0,0 @@
-// Package changenotify tests rclone's changenotify support
-package changenotify
-
-import (
-	"context"
-	"errors"
-	"time"
-
-	"github.com/rclone/rclone/cmd"
-	"github.com/rclone/rclone/cmd/test"
-	"github.com/rclone/rclone/fs"
-	"github.com/rclone/rclone/fs/config/flags"
-	"github.com/spf13/cobra"
-)
-
-var (
-	pollInterval = 10 * time.Second
-)
-
-func init() {
-	test.Command.AddCommand(commandDefinition)
-	cmdFlags := commandDefinition.Flags()
-	flags.DurationVarP(cmdFlags, &pollInterval, "poll-interval", "", pollInterval, "Time to wait between polling for changes.")
-}
-
-var commandDefinition = &cobra.Command{
-	Use:   "changenotify remote:",
-	Short: `Log any change notify requests for the remote passed in.`,
-	RunE: func(command *cobra.Command, args []string) error {
-		cmd.CheckArgs(1, 1, command, args)
-		f := cmd.NewFsSrc(args)
-		ctx := context.Background()
-
-		// Start polling function
-		features := f.Features()
-		if do := features.ChangeNotify; do != nil {
-			pollChan := make(chan time.Duration)
-			do(ctx, changeNotify, pollChan)
-			pollChan <- pollInterval
-			fs.Logf(nil, "Waiting for changes, polling every %v", pollInterval)
-		} else {
-			return errors.New("poll-interval is not supported by this remote")
-		}
-		select {}
-	},
-}
-
-// changeNotify invalidates the directory cache for the relativePath
-// passed in.
-//
-// if entryType is a directory it invalidates the parent of the directory too.
-func changeNotify(relativePath string, entryType fs.EntryType) {
-	fs.Logf(nil, "%q: %v", relativePath, entryType)
-}
@@ -3,12 +3,12 @@
package makefiles

import (
	cryptrand "crypto/rand"
	"io"
	"log"
	"math/rand"
	"os"
	"path/filepath"
	"time"

	"github.com/rclone/rclone/cmd"
	"github.com/rclone/rclone/cmd/test"
@@ -27,10 +27,8 @@ var (
	maxFileSize       = fs.SizeSuffix(100)
	minFileNameLength = 4
	maxFileNameLength = 12
	seed              = int64(1)

	// Globals
	randSource          *rand.Rand
	directoriesToCreate int
	totalDirectories    int
	fileNames           = map[string]struct{}{} // keep a note of which file names we've used already
@@ -46,7 +44,6 @@ func init() {
	flags.FVarP(cmdFlags, &maxFileSize, "max-file-size", "", "Maximum size of files to create")
	flags.IntVarP(cmdFlags, &minFileNameLength, "min-name-length", "", minFileNameLength, "Minimum size of file names")
	flags.IntVarP(cmdFlags, &maxFileNameLength, "max-name-length", "", maxFileNameLength, "Maximum size of file names")
	flags.Int64VarP(cmdFlags, &seed, "seed", "", seed, "Seed for the random number generator (0 for random)")
}

var commandDefinition = &cobra.Command{
@@ -54,36 +51,28 @@ var commandDefinition = &cobra.Command{
	Short: `Make a random file hierarchy in <dir>`,
	Run: func(command *cobra.Command, args []string) {
		cmd.CheckArgs(1, 1, command, args)
		if seed == 0 {
			seed = time.Now().UnixNano()
			fs.Logf(nil, "Using random seed = %d", seed)
		}
		randSource = rand.New(rand.NewSource(seed))
		outputDirectory := args[0]
		directoriesToCreate = numberOfFiles / averageFilesPerDirectory
		averageSize := (minFileSize + maxFileSize) / 2
		start := time.Now()
		fs.Logf(nil, "Creating %d files of average size %v in %d directories in %q.", numberOfFiles, averageSize, directoriesToCreate, outputDirectory)
		log.Printf("Creating %d files of average size %v in %d directories in %q.", numberOfFiles, averageSize, directoriesToCreate, outputDirectory)
		root := &dir{name: outputDirectory, depth: 1}
		for totalDirectories < directoriesToCreate {
			root.createDirectories()
		}
		dirs := root.list("", []string{})
		totalBytes := int64(0)
		for i := 0; i < numberOfFiles; i++ {
			dir := dirs[randSource.Intn(len(dirs))]
			totalBytes += writeFile(dir, fileName())
			dir := dirs[rand.Intn(len(dirs))]
			writeFile(dir, fileName())
		}
		dt := time.Since(start)
		fs.Logf(nil, "Written %viB in %v at %viB/s.", fs.SizeSuffix(totalBytes), dt.Round(time.Millisecond), fs.SizeSuffix((totalBytes*int64(time.Second))/int64(dt)))
		log.Printf("Done.")
	},
}

// fileName creates a unique random file or directory name
func fileName() (name string) {
	for {
		length := randSource.Intn(maxFileNameLength-minFileNameLength) + minFileNameLength
		name = random.StringFn(length, randSource.Intn)
		length := rand.Intn(maxFileNameLength-minFileNameLength) + minFileNameLength
		name = random.String(length)
		if _, found := fileNames[name]; !found {
			break
		}
@@ -110,7 +99,7 @@ func (d *dir) createDirectories() {
	}
	d.children = append(d.children, newDir)
	totalDirectories++
	switch randSource.Intn(4) {
	switch rand.Intn(4) {
	case 0:
		if d.depth < maxDepth {
			newDir.createDirectories()
@@ -133,7 +122,7 @@ func (d *dir) list(path string, output []string) []string {
}

// writeFile writes a random file at dir/name
func writeFile(dir, name string) int64 {
func writeFile(dir, name string) {
	err := os.MkdirAll(dir, 0777)
	if err != nil {
		log.Fatalf("Failed to make directory %q: %v", dir, err)
@@ -143,8 +132,8 @@ func writeFile(dir, name string) int64 {
	if err != nil {
		log.Fatalf("Failed to open file %q: %v", path, err)
	}
	size := randSource.Int63n(int64(maxFileSize-minFileSize)) + int64(minFileSize)
	_, err = io.CopyN(fd, randSource, size)
	size := rand.Int63n(int64(maxFileSize-minFileSize)) + int64(minFileSize)
	_, err = io.CopyN(fd, cryptrand.Reader, size)
	if err != nil {
		log.Fatalf("Failed to write %v bytes to file %q: %v", size, path, err)
	}
@@ -152,6 +141,4 @@ func writeFile(dir, name string) int64 {
	if err != nil {
		log.Fatalf("Failed to close file %q: %v", path, err)
	}
	fs.Infof(path, "Written file size %v", fs.SizeSuffix(size))
	return size
}
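
The removed side of this hunk routes both file sizes and file contents through a single seeded `*rand.Rand`; since `*rand.Rand` implements `io.Reader`, it can feed `io.CopyN` directly, which makes an entire generated hierarchy reproducible from one `--seed` value. A minimal self-contained sketch of that pattern (the seed, size bound and file name are illustrative, not taken from the diff):

```go
package main

import (
	"io"
	"log"
	"math/rand"
	"os"
)

func main() {
	// A fixed seed makes the byte stream, and therefore the
	// generated fixture, reproducible across runs.
	src := rand.New(rand.NewSource(42))

	f, err := os.Create("fixture.bin") // illustrative output path
	if err != nil {
		log.Fatalf("Failed to open file: %v", err)
	}
	defer f.Close()

	// *rand.Rand satisfies io.Reader, so it can be copied from directly.
	size := src.Int63n(1024) + 1
	if _, err := io.CopyN(f, src, size); err != nil {
		log.Fatalf("Failed to write %d bytes: %v", size, err)
	}
	log.Printf("wrote %d deterministic bytes", size)
}
```

The added side of the hunk swaps this for `crypto/rand.Reader`, which trades reproducibility for unpredictable content.
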
@@ -29,16 +29,13 @@ var commandDefinition = &cobra.Command{
	Use:   "version",
	Short: `Show the version number.`,
	Long: `
Show the rclone version number, the go version, the build target
OS and architecture, the runtime OS and kernel version and bitness,
build tags and the type of executable (static or dynamic).
Show the rclone version number, the go version, the build target OS and
architecture, build tags and the type of executable (static or dynamic).

For example:

$ rclone version
rclone v1.55.0
- os/version: ubuntu 18.04 (64 bit)
- os/kernel: 4.15.0-136-generic (x86_64)
rclone v1.54
- os/type: linux
- os/arch: amd64
- go/version: go1.16
@@ -26,12 +26,12 @@ func TestVersionWorksWithoutAccessibleConfigFile(t *testing.T) {
	}
	// re-wire
	oldOsStdout := os.Stdout
	oldConfigPath := config.GetConfigPath()
	assert.NoError(t, config.SetConfigPath(path))
	oldConfigPath := config.ConfigPath
	config.ConfigPath = path
	os.Stdout = nil
	defer func() {
		os.Stdout = oldOsStdout
		assert.NoError(t, config.SetConfigPath(oldConfigPath))
		config.ConfigPath = oldConfigPath
	}()

	cmd.Root.SetArgs([]string{"version"})
@@ -152,7 +152,6 @@ WebDAV or S3, that work out of the box.)
{{< provider name="SugarSync" home="https://sugarsync.com/" config="/sugarsync/" >}}
{{< provider name="Tardigrade" home="https://tardigrade.io/" config="/tardigrade/" >}}
{{< provider name="Tencent Cloud Object Storage (COS)" home="https://intl.cloud.tencent.com/product/cos" config="/s3/#tencent-cos" >}}
{{< provider name="Uptobox" home="https://uptobox.com" config="/uptobox/" >}}
{{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" >}}
{{< provider name="WebDAV" home="https://en.wikipedia.org/wiki/WebDAV" config="/webdav/" >}}
{{< provider name="Yandex Disk" home="https://disk.yandex.com/" config="/yandex/" >}}
@@ -372,7 +372,7 @@ put them back in again.` >}}
* Fred <fred@creativeprojects.tech>
* Sébastien Gross <renard@users.noreply.github.com>
* Maxime Suret <11944422+msuret@users.noreply.github.com>
* Caleb Case <caleb@storj.io> <calebcase@gmail.com>
* Caleb Case <caleb@storj.io>
* Ben Zenker <imbenzenker@gmail.com>
* Martin Michlmayr <tbm@cyrius.com>
* Brandon McNama <bmcnama@pagerduty.com>
@@ -477,13 +477,3 @@ put them back in again.` >}}
* Lucas Messenger <lmesseng@cisco.com>
* Manish Kumar <krmanish260@gmail.com>
* x0b <x0bdev@gmail.com>
* CERN through the CS3MESH4EOSC Project
* Nick Gaya <nicholasgaya+github@gmail.com>
* Ashok Gelal <401055+ashokgelal@users.noreply.github.com>
* Dominik Mydlil <dominik.mydlil@outlook.com>
* Nazar Mishturak <nazarmx@gmail.com>
* Ansh Mittal <iamAnshMittal@gmail.com>
* noabody <noabody@yahoo.com>
* OleFrost <82263101+olefrost@users.noreply.github.com>
* Kenny Parsons <kennyparsons93@gmail.com>
* Jeffrey Tolar <tolar.jeffrey@gmail.com>
@@ -392,22 +392,6 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8

#### --azureblob-public-access

Public access level of a container: blob, container.

- Config: public_access
- Env Var: RCLONE_AZUREBLOB_PUBLIC_ACCESS
- Type: string
- Default: ""
- Examples:
    - ""
        - The container and its blobs can be accessed only with an authorized request. This is the default value.
    - "blob"
        - Blob data within this container can be read via anonymous request.
    - "container"
        - Allow full public read access for container and blob data.
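
As a sketch only, a remote whose newly created containers allow anonymous blob reads might be configured like this in `rclone.conf` (the remote name, account and key are placeholders; `public_access` takes the values listed above):

```
[azblob]
type = azureblob
account = myaccount
key = XXXX
public_access = blob
```
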
{{< rem autogenerated options stop >}}

### Limitations ###
@@ -172,6 +172,11 @@ the file instead of hiding it.
Old versions of files, where available, are visible using the
`--b2-versions` flag.
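
For example, to list old versions alongside the current files (bucket name illustrative):

    rclone ls --b2-versions remote:bucket
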
**NB** `--b2-versions` does not work with crypt at the
moment [#1627](https://github.com/rclone/rclone/issues/1627). Using
[--backup-dir](/docs/#backup-dir-dir) with rclone is the recommended
way of working around this.

If you wish to remove all the old versions then you can use the
`rclone cleanup remote:bucket` command which will delete all the old
versions of files, leaving the current ones intact. You can also
@@ -5,198 +5,6 @@ description: "Rclone Changelog"

# Changelog

## v1.55.1 - 2021-04-26

[See commits](https://github.com/rclone/rclone/compare/v1.55.0...v1.55.1)

* Bug Fixes
    * selfupdate
        * Don't detect FUSE if build is static (Ivan Andreev)
        * Add build tag noselfupdate (Ivan Andreev)
    * sync: Fix incorrect error reported by graceful cutoff (Nick Craig-Wood)
    * install.sh: fix macOS arm64 download (Nick Craig-Wood)
    * build: Fix version numbers in android branch builds (Nick Craig-Wood)
    * docs
        * Contributing.md: update setup instructions for go1.16 (Nick Gaya)
        * WinFsp 2021 is out of beta (albertony)
        * Minor cleanup of space around code section (albertony)
        * Fixed some typos (albertony)
* VFS
    * Fix a code path which allows dirty data to be removed causing data loss (Nick Craig-Wood)
* Compress
    * Fix compressed name regexp (buengese)
* Drive
    * Fix backend copyid of google doc to directory (Nick Craig-Wood)
    * Don't open browser when service account... (Ansh Mittal)
* Dropbox
    * Add missing team_data.member scope for use with --impersonate (Nick Craig-Wood)
    * Fix About after scopes changes - rclone config reconnect needed (Nick Craig-Wood)
    * Fix Unable to decrypt returned paths from changeNotify (Nick Craig-Wood)
* FTP
    * Fix implicit TLS (Ivan Andreev)
* Onedrive
    * Work around for random "Unable to initialize RPS" errors (OleFrost)
* SFTP
    * Revert sftp library to v1.12.0 from v1.13.0 to fix performance regression (Nick Craig-Wood)
    * Fix Update ReadFrom failed: failed to send packet: EOF errors (Nick Craig-Wood)
* Zoho
    * Fix error when region isn't set (buengese)
    * Do not ask for mountpoint twice when using headless setup (buengese)

## v1.55.0 - 2021-03-31

[See commits](https://github.com/rclone/rclone/compare/v1.54.0...v1.55.0)

* New commands
    * [selfupdate](/commands/rclone_selfupdate/) (Ivan Andreev)
        * Allows rclone to update itself in-place or via a package (using `--package` flag)
        * Reads cryptographically signed signatures for non beta releases
        * Works on all OSes.
    * [test](/commands/rclone_test/) - these are test commands - use with care!
        * `histogram` - Makes a histogram of file name characters.
        * `info` - Discovers file name or other limitations for paths.
        * `makefiles` - Make a random file hierarchy for testing.
        * `memory` - Load all the objects at remote:path into memory and report memory stats.
* New Features
    * [Connection strings](/docs/#connection-strings)
        * Config parameters can now be passed as part of the remote name as a connection string.
        * For example to do the equivalent of `--drive-shared-with-me` use `drive,shared_with_me:`
        * Make sure we don't save on the fly remote config to the config file (Nick Craig-Wood)
        * Make sure backends with additional config have a different name for caching (Nick Craig-Wood)
        * This work was sponsored by CERN, through the [CS3MESH4EOSC Project](https://cs3mesh4eosc.eu/).
            * CS3MESH4EOSC has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement no. 863353.
    * build
        * Update go build version to go1.16 and raise minimum go version to go1.13 (Nick Craig-Wood)
        * Make a macOS ARM64 build to support Apple Silicon (Nick Craig-Wood)
        * Install macfuse 4.x instead of osxfuse 3.x (Nick Craig-Wood)
        * Use `GO386=softfloat` instead of deprecated `GO386=387` for 386 builds (Nick Craig-Wood)
        * Disable IOS builds for the time being (Nick Craig-Wood)
        * Android builds made with up-to-date NDK (x0b)
        * Add an rclone user to the Docker image but don't use it by default (cynthia kwok)
    * dedupe: Make largest directory primary to minimize data moved (Saksham Khanna)
    * config
        * Wrap config library in an interface (Fionera)
        * Make config file system pluggable (Nick Craig-Wood)
        * `--config ""` or `"/notfound"` for in memory config only (Nick Craig-Wood)
        * Clear fs cache of stale entries when altering config (Nick Craig-Wood)
    * copyurl: Add option to print resulting auto-filename (albertony)
    * delete: Make `--rmdirs` obey the filters (Nick Craig-Wood)
    * docs - many fixes and reworks from edwardxml, albertony, pvalls, Ivan Andreev, Evan Harris, buengese, Alexey Tabakman
    * encoder/filename - add SCSU as tables (Klaus Post)
    * Add multiple paths support to `--compare-dest` and `--copy-dest` flag (K265)
    * filter: Make `--exclude "dir/"` equivalent to `--exclude "dir/**"` (Nick Craig-Wood)
    * fshttp: Add DSCP support with `--dscp` for QoS with differentiated services (Max Sum)
    * lib/cache: Add Delete and DeletePrefix methods (Nick Craig-Wood)
    * lib/file
        * Make pre-allocate detect disk full errors and return them (Nick Craig-Wood)
        * Don't run preallocate concurrently (Nick Craig-Wood)
        * Retry preallocate on EINTR (Nick Craig-Wood)
    * operations: Made copy and sync operations obey a RetryAfterError (Ankur Gupta)
    * rc
        * Add string alternatives for setting options over the rc (Nick Craig-Wood)
        * Add `options/local` to see the options configured in the context (Nick Craig-Wood)
        * Add `_config` parameter to set global config for just this rc call (Nick Craig-Wood)
        * Implement passing filter config with `_filter` parameter (Nick Craig-Wood)
        * Add `fscache/clear` and `fscache/entries` to control the fs cache (Nick Craig-Wood)
        * Avoid +Inf value for speed in `core/stats` (albertony)
        * Add a full set of stats to `core/stats` (Nick Craig-Wood)
        * Allow `fs=` params to be a JSON blob (Nick Craig-Wood)
    * rcd: Added systemd notification during the `rclone rcd` command. (Naveen Honest Raj)
    * rmdirs: Make `--rmdirs` obey the filters (Nick Craig-Wood)
    * version: Show build tags and type of executable (Ivan Andreev)
* Bug Fixes
    * install.sh: make it fail on download errors (Ivan Andreev)
    * Fix excessive retries missing `--max-duration` timeout (Nick Craig-Wood)
    * Fix crash when `--low-level-retries=0` (Nick Craig-Wood)
    * Fix failed token refresh on mounts created via the rc (Nick Craig-Wood)
    * fshttp: Fix bandwidth limiting after bad merge (Nick Craig-Wood)
    * lib/atexit
        * Unregister interrupt handler once it has fired so users can interrupt again (Nick Craig-Wood)
        * Fix occasional failure to unmount with CTRL-C (Nick Craig-Wood)
        * Fix deadlock calling Finalise while Run is running (Nick Craig-Wood)
    * lib/rest: Fix multipart uploads not stopping on context cancel (Nick Craig-Wood)
* Mount
    * Allow mounting to root directory on windows (albertony)
    * Improved handling of relative paths on windows (albertony)
    * Fix unicode issues with accented characters on macOS (Nick Craig-Wood)
    * Docs: document the new FileSecurity option in WinFsp 2021 (albertony)
    * Docs: add note about volume path syntax on windows (albertony)
    * Fix caching of old directories after renaming them (Nick Craig-Wood)
    * Update cgofuse to the latest version to bring in macfuse 4 fix (Nick Craig-Wood)
* VFS
    * `--vfs-used-is-size` to report used space using recursive scan (tYYGH)
    * Don't set modification time if it was already correct (Nick Craig-Wood)
    * Fix Create causing windows explorer to truncate files on CTRL-C CTRL-V (Nick Craig-Wood)
    * Fix modtimes not updating when writing via cache (Nick Craig-Wood)
    * Fix modtimes changing by fractional seconds after upload (Nick Craig-Wood)
    * Fix modtime set if `--vfs-cache-mode writes`/`full` and no write (Nick Craig-Wood)
    * Rename files in cache and cancel uploads on directory rename (Nick Craig-Wood)
    * Fix directory renaming by renaming dirs cached in memory (Nick Craig-Wood)
* Local
    * Add flag `--local-no-preallocate` (David Sze)
    * Make `nounc` an advanced option except on Windows (albertony)
    * Don't ignore preallocate disk full errors (Nick Craig-Wood)
* Cache
    * Add `--fs-cache-expire-duration` to control the fs cache (Nick Craig-Wood)
* Crypt
    * Add option to not encrypt data (Vesnyx)
    * Log hash ok on upload (albertony)
* Azure Blob
    * Add container public access level support. (Manish Kumar)
* B2
    * Fix HTML files downloaded via cloudflare (Nick Craig-Wood)
* Box
    * Fix transfers getting stuck on token expiry after API change (Nick Craig-Wood)
* Chunker
    * Partially implement no-rename transactions (Maxwell Calman)
* Drive
    * Don't stop server side copy if couldn't read description (Nick Craig-Wood)
    * Pass context on to drive SDK - to help with cancellation (Nick Craig-Wood)
* Dropbox
    * Add polling for changes support (Robert Thomas)
    * Make `--timeout 0` work properly (Nick Craig-Wood)
    * Raise priority of rate limited message to INFO to make it more noticeable (Nick Craig-Wood)
* Fichier
    * Implement copy & move (buengese)
    * Implement public link (buengese)
* FTP
    * Implement Shutdown method (Nick Craig-Wood)
    * Close idle connections after `--ftp-idle-timeout` (1m by default) (Nick Craig-Wood)
    * Make `--timeout 0` work properly (Nick Craig-Wood)
    * Add `--ftp-close-timeout` flag for use with awkward ftp servers (Nick Craig-Wood)
    * Retry connections and logins on 421 errors (Nick Craig-Wood)
* Hdfs
    * Fix permissions for when directory is created (Lucas Messenger)
* Onedrive
    * Make `--timeout 0` work properly (Nick Craig-Wood)
* S3
    * Fix `--s3-profile` which wasn't working (Nick Craig-Wood)
* SFTP
    * Close idle connections after `--sftp-idle-timeout` (1m by default) (Nick Craig-Wood)
    * Fix "file not found" errors for read once servers (Nick Craig-Wood)
    * Fix SetModTime stat failed: object not found with `--sftp-set-modtime=false` (Nick Craig-Wood)
* Swift
    * Update github.com/ncw/swift to v2.0.0 (Nick Craig-Wood)
    * Implement copying large objects (nguyenhuuluan434)
* Union
    * Fix crash when using epff policy (Nick Craig-Wood)
    * Fix union attempting to update files on a read only file system (Nick Craig-Wood)
    * Refactor to use fspath.SplitFs instead of fs.ParseRemote (Nick Craig-Wood)
    * Fix initialisation broken in refactor (Nick Craig-Wood)
* WebDAV
    * Add support for sharepoint with NTLM authentication (Rauno Ots)
    * Make sharepoint-ntlm docs more consistent (Alex Chen)
    * Improve terminology in sharepoint-ntlm docs (Ivan Andreev)
    * Disable HTTP/2 for NTLM authentication (georne)
    * Fix sharepoint-ntlm error 401 for parallel actions (Ivan Andreev)
    * Check that purged directory really exists (Ivan Andreev)
* Yandex
    * Make `--timeout 0` work properly (Nick Craig-Wood)
* Zoho
    * Replace client id - you will need to `rclone config reconnect` after this (buengese)
    * Add forgotten setupRegion() to NewFs - this finally fixes regions other than EU (buengese)

## v1.54.1 - 2021-03-08

[See commits](https://github.com/rclone/rclone/compare/v1.54.0...v1.54.1)
@@ -416,27 +416,4 @@ Choose how chunker should handle files with missing or invalid chunks.
    - "false"
        - Warn user, skip incomplete file and proceed.

#### --chunker-transactions

Choose how chunker should handle temporary files during transactions.

- Config: transactions
- Env Var: RCLONE_CHUNKER_TRANSACTIONS
- Type: string
- Default: "rename"
- Examples:
    - "rename"
        - Rename temporary files after a successful transaction.
    - "norename"
        - Leave temporary file names and write transaction ID to metadata file.
        - Metadata is required for no rename transactions (meta format cannot be "none").
        - If you are using norename transactions you should be careful not to downgrade Rclone, as older versions of Rclone don't support this transaction style and will misinterpret files manipulated by norename transactions.
        - This method is EXPERIMENTAL, don't use on production systems.
    - "auto"
        - Rename or norename will be used depending on capabilities of the backend.
        - If meta format is set to "none", rename transactions will always be used.
        - This method is EXPERIMENTAL, don't use on production systems.
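
A sketch of what a chunker remote opting into norename transactions might look like in `rclone.conf` (remote names are placeholders; per the note above, the metadata format must not be "none"):

```
[mychunks]
type = chunker
remote = s3:bucket/chunks
transactions = norename
```
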
{{< rem autogenerated options stop >}}
@@ -72,13 +72,11 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone rcd](/commands/rclone_rcd/) - Run rclone listening to remote control commands only.
* [rclone rmdir](/commands/rclone_rmdir/) - Remove the empty directory at path.
* [rclone rmdirs](/commands/rclone_rmdirs/) - Remove empty directories under the path.
* [rclone selfupdate](/commands/rclone_selfupdate/) - Update the rclone binary.
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
* [rclone settier](/commands/rclone_settier/) - Changes storage class/tier of objects in remote.
* [rclone sha1sum](/commands/rclone_sha1sum/) - Produces an sha1sum file for all the objects in the path.
* [rclone size](/commands/rclone_size/) - Prints the total size and number of objects in remote:path.
* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only.
* [rclone test](/commands/rclone_test/) - Run a test command
* [rclone touch](/commands/rclone_touch/) - Create new file or change file modification time.
* [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion.
* [rclone version](/commands/rclone_version/) - Show the version number.
@@ -15,16 +15,15 @@ Copy url content to dest.
Download a URL's content and copy it to the destination without saving
it in temporary storage.

Setting `--auto-filename` will cause the file name to be retrieved from
Setting --auto-filename will cause the file name to be retrieved from
the URL (after any redirections) and used in the destination
path. With `--print-filename` in addition, the resulting file name will
be printed.
path.

Setting `--no-clobber` will prevent overwriting file on the
Setting --no-clobber will prevent overwriting file on the
destination if there is one with the same name.

Setting `--stdout` or making the output file name `-`
will cause the output to be written to standard output.
Setting --stdout or making the output file name "-" will cause the
output to be written to standard output.


```
rclone copyurl https://example.com dest:path [flags]
@@ -34,11 +33,10 @@ rclone copyurl https://example.com dest:path [flags]
## Options

```
  -a, --auto-filename    Get the file name from the URL and use it for destination file path
  -h, --help             help for copyurl
      --no-clobber       Prevent overwriting file with same name
  -p, --print-filename   Print the resulting name from --auto-filename
      --stdout           Write the output to stdout rather than a file
  -a, --auto-filename   Get the file name from the URL and use it for destination file path
  -h, --help            help for copyurl
      --no-clobber      Prevent overwriting file with same name
      --stdout          Write the output to stdout rather than a file
```

See the [global flags page](/flags/) for global options not listed here.
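
Putting those flags together, a typical invocation looks like this (URL and destination are illustrative):

    rclone copyurl --auto-filename --print-filename https://example.com/file.zip dest:path
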
@@ -17,8 +17,8 @@ By default `dedupe` interactively finds files with duplicate
names and offers to delete all but one or rename them to be
different. This is known as deduping by name.

Deduping by name is only useful with a small group of backends (e.g. Google Drive,
Opendrive) that can have duplicate file names. It can be run on wrapping backends
Deduping by name is only useful with backends like Google Drive which
can have duplicate file names. It can be run on wrapping backends
(e.g. crypt) if they wrap a backend which supports duplicate file
names.
@@ -29,15 +29,15 @@ is an **empty** **existing** directory:

On Windows you can start a mount in different ways. See [below](#mounting-modes-on-windows)
for details. The following examples will mount to an automatically assigned drive,
to specific drive letter `X:`, to path `C:\path\parent\mount`
(where parent directory or drive must exist, and mount must **not** exist,
to specific drive letter `X:`, to path `C:\path\to\nonexistent\directory`
(which must be a **non-existent** subdirectory of an **existing** parent directory or drive,
and is not supported when [mounting as a network drive](#mounting-modes-on-windows)), and
the last example will mount as network share `\\cloud\remote` and map it to an
automatically assigned drive:

    rclone mount remote:path/to/files *
    rclone mount remote:path/to/files X:
    rclone mount remote:path/to/files C:\path\parent\mount
    rclone mount remote:path/to/files C:\path\to\nonexistent\directory
    rclone mount remote:path/to/files \\cloud\remote

When the program ends while in foreground mode, either via Ctrl+C or receiving
@@ -91,14 +91,14 @@ and experience unexpected program errors, freezes or other issues, consider mounting
as a network drive instead.

When mounting as a fixed disk drive you can either mount to an unused drive letter,
or to a path representing a **non-existent** subdirectory of an **existing** parent
or to a path - which must be a **non-existent** subdirectory of an **existing** parent
directory or drive. Using the special value `*` will tell rclone to
automatically assign the next available drive letter, starting with Z: and moving backward.
Examples:

    rclone mount remote:path/to/files *
    rclone mount remote:path/to/files X:
    rclone mount remote:path/to/files C:\path\parent\mount
    rclone mount remote:path/to/files C:\path\to\nonexistent\directory
    rclone mount remote:path/to/files X:

Option `--volname` can be used to set a custom volume name for the mounted
@@ -171,24 +171,10 @@ Note that the mapping of permissions is not always trivial, and the result
you see in Windows Explorer may not be exactly like you expected.
For example, when setting a value that includes write access, this will be
mapped to individual permissions "write attributes", "write data" and "append data",
but not "write extended attributes". Windows will then show this as basic
permission "Special" instead of "Write", because "Write" includes the
"write extended attributes" permission.

If you set POSIX permissions for only allowing access to the owner, using
`--file-perms 0600 --dir-perms 0700`, the user group and the built-in "Everyone"
group will still be given some special permissions, such as "read attributes"
and "read permissions", in Windows. This is done for compatibility reasons,
e.g. to allow users without additional permissions to be able to read basic
metadata about files like in UNIX. One case that may arise is that other programs
(incorrectly) interpret this as the file being accessible by everyone. For example
an SSH client may warn about "unprotected private key file".

WinFsp 2021 (version 1.9, still in beta) introduces a new FUSE option "FileSecurity",
that allows the complete specification of file security descriptors using
[SDDL](https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format).
With this you can work around issues such as the mentioned "unprotected private key file"
by specifying `-o FileSecurity="D:P(A;;FA;;;OW)"`, for file all access (FA) to the owner (OW).
but not "write extended attributes" (WinFsp does not support extended attributes,
see [this](https://github.com/billziss-gh/winfsp/wiki/NTFS-Compatibility)).
Windows will then show this as basic permission "Special" instead of "Write",
because "Write" includes the "write extended attributes" permission.

### Windows caveats
@@ -392,13 +378,6 @@ for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.

You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
This can potentially cause data corruption if you do. You can work
around this by giving each rclone its own cache hierarchy with
`--cache-dir`. You don't need to worry about this if the remotes in
use don't overlap.
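
For example, two rclone mounts of overlapping remotes can run side by side safely if each gets its own cache directory (paths illustrative):

    rclone mount remote:a /mnt/a --vfs-cache-mode writes --cache-dir ~/.cache/rclone-a
    rclone mount remote:b /mnt/b --vfs-cache-mode writes --cache-dir ~/.cache/rclone-b
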
### --vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write
@@ -542,19 +521,6 @@ If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".

## Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes used.
If you need this information to be available when running `df` on the
filesystem, then pass the flag `--vfs-used-is-size` to rclone.
With this flag set, instead of relying on the backend to report this
information, rclone will scan the whole remote similar to `rclone size`
and compute the total used space itself.

_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
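
As a quick illustration (mount point illustrative):

    rclone mount remote: /mnt/remote --vfs-used-is-size
    df -h /mnt/remote

`df` will then show the total computed by the recursive scan rather than whatever the backend reports.
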
```
rclone mount remote:path /path/to/mountpoint [flags]
@@ -599,7 +565,6 @@ rclone mount remote:path /path/to/mountpoint [flags]
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size.
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)
      --volname string                         Set the volume name. Supported on Windows and OSX only.
@@ -1,84 +0,0 @@
---
title: "rclone selfupdate"
description: "Update the rclone binary."
slug: rclone_selfupdate
url: /commands/rclone_selfupdate/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/selfupdate/ and as part of making a release run "make commanddocs"
---
# rclone selfupdate

Update the rclone binary.

## Synopsis

This command downloads the latest release of rclone and replaces
the currently running binary. The download is verified with a hashsum
and cryptographically signed signature.

If used without flags (or with implied `--stable` flag), this command
will install the latest stable release. However, some issues may be fixed
(or features added) only in the latest beta release. In such cases you should
run the command with the `--beta` flag, i.e. `rclone selfupdate --beta`.
You can check in advance what version would be installed by adding the
`--check` flag, then repeat the command without it when you are satisfied.

Sometimes the rclone team may recommend a specific beta or stable
rclone release to troubleshoot your issue or add a bleeding edge feature.
The `--version VER` flag, if given, will update to that specific version
instead of the latest one. If you omit the micro version from `VER` (for
example `1.53`), the latest matching micro version will be used.

Upon successful update rclone will print a message that contains the previous
version number. You will need it if you later decide to revert your update;
in that case, run the following command: `rclone selfupdate [--beta] OLDVER`.
If the old version contains only dots and digits (for example `v1.54.0`)
then it's a stable release so you won't need the `--beta` flag. Beta releases
have additional information similar to `v1.54.0-beta.5111.06f1c0c61`.
(If you are a developer using a locally built rclone, the version number
will end with `-DEV`; you will have to rebuild it, as a `-DEV` build obviously
can't be distributed.)

If you previously installed rclone via a package manager, the package may
include local documentation or configure services. You may wish to update
with the flag `--package deb` or `--package rpm` (whichever is correct for
your OS) to update these too. This command with the default `--package zip`
will update only the rclone executable so the local manual may become
inaccurate after it.

The `rclone mount` command (https://rclone.org/commands/rclone_mount/) may
or may not support extended FUSE options depending on the build and OS.
`selfupdate` will refuse to update if the capability would be discarded.

Note: Windows forbids deletion of a currently running executable so this
command will rename the old executable to 'rclone.old.exe' upon success.

Please note that this command was not available before rclone version 1.55.
If it fails for you with the message `unknown command "selfupdate"` then
you will need to update manually following the install instructions located
at https://rclone.org/install/


```
rclone selfupdate [flags]
```

## Options

```
    --beta             Install beta release.
    --check            Check for latest release, do not download.
-h, --help             help for selfupdate
    --output string    Save the downloaded binary at a given path (default: replace running binary)
    --package string   Package format: zip|deb|rpm (default: zip)
    --stable           Install stable release (this is the default)
    --version string   Install the given rclone version (default: latest)
```
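
A few representative invocations built only from the flags above (the version number is illustrative):

    rclone selfupdate --check
    rclone selfupdate --beta
    rclone selfupdate --version 1.53
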
See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
@@ -134,13 +134,6 @@ for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.

You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
This can potentially cause data corruption if you do. You can work
around this by giving each rclone its own cache hierarchy with
`--cache-dir`. You don't need to worry about this if the remotes in
use don't overlap.

### --vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write
@@ -284,19 +277,6 @@ If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".

## Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes used.
If you need this information to be available when running `df` on the
filesystem, then pass the flag `--vfs-used-is-size` to rclone.
With this flag set, instead of relying on the backend to report this
information, rclone will scan the whole remote similar to `rclone size`
and compute the total used space itself.

_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.


```
rclone serve dlna remote:path [flags]
@@ -329,7 +309,6 @@ rclone serve dlna remote:path [flags]
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size.
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)
```
@@ -133,13 +133,6 @@ for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.

You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
This can potentially cause data corruption if you do. You can work
around this by giving each rclone its own cache hierarchy with
`--cache-dir`. You don't need to worry about this if the remotes in
use don't overlap.

### --vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write
@@ -283,19 +276,6 @@ If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".

## Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes used.
If you need this information to be available when running `df` on the
filesystem, then pass the flag `--vfs-used-is-size` to rclone.
With this flag set, instead of relying on the backend to report this
information, rclone will scan the whole remote similar to `rclone size`
and compute the total used space itself.

_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.

## Auth Proxy

If you supply the parameter `--auth-proxy /path/to/program` then
@@ -414,7 +394,6 @@ rclone serve ftp remote:path [flags]
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size.
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)
```
@@ -205,13 +205,6 @@ for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.

You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
This can potentially cause data corruption if you do. You can work
around this by giving each rclone its own cache hierarchy with
`--cache-dir`. You don't need to worry about this if the remotes in
use don't overlap.

### --vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write
@@ -355,19 +348,6 @@ If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".

## Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes used.
If you need this information to be available when running `df` on the
filesystem, then pass the flag `--vfs-used-is-size` to rclone.
With this flag set, instead of relying on the backend to report this
information, rclone will scan the whole remote similar to `rclone size`
and compute the total used space itself.

_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.


```
rclone serve http remote:path [flags]
@@ -410,7 +390,6 @@ rclone serve http remote:path [flags]
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size.
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)
```
@@ -144,13 +144,6 @@ for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.

You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
This can potentially cause data corruption if you do. You can work
around this by giving each rclone its own cache hierarchy with
`--cache-dir`. You don't need to worry about this if the remotes in
use don't overlap.

### --vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write
@@ -294,19 +287,6 @@ If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".

## Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes used.
If you need this information to be available when running `df` on the
filesystem, then pass the flag `--vfs-used-is-size` to rclone.
With this flag set, instead of relying on the backend to report this
information, rclone will scan the whole remote similar to `rclone size`
and compute the total used space itself.

_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.

## Auth Proxy

If you supply the parameter `--auth-proxy /path/to/program` then
@@ -424,7 +404,6 @@ rclone serve sftp remote:path [flags]
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size.
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)
```
@@ -213,13 +213,6 @@ for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.

You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
This can potentially cause data corruption if you do. You can work
around this by giving each rclone its own cache hierarchy with
`--cache-dir`. You don't need to worry about this if the remotes in
use don't overlap.

### --vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write
@@ -363,19 +356,6 @@ If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".

## Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes used.
If you need this information to be available when running `df` on the
filesystem, then pass the flag `--vfs-used-is-size` to rclone.
With this flag set, instead of relying on the backend to report this
information, rclone will scan the whole remote similar to `rclone size`
and compute the total used space itself.

_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.

## Auth Proxy

If you supply the parameter `--auth-proxy /path/to/program` then
@@ -502,7 +482,6 @@ rclone serve webdav remote:path [flags]
      --vfs-read-chunk-size SizeSuffix         Read the source objects in chunks. (default 128M)
      --vfs-read-chunk-size-limit SizeSuffix   If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
      --vfs-read-wait duration                 Time to wait for in-sequence read before seeking. (default 20ms)
      --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size.
      --vfs-write-back duration                Time to writeback files after last use when using cache. (default 5s)
      --vfs-write-wait duration                Time to wait for in-sequence write before giving error. (default 1s)
```
@@ -15,8 +15,7 @@ Make source and dest identical, modifying destination only.
Sync the source to the destination, changing the destination
only. Doesn't transfer unchanged files, testing by size and
modification time or MD5SUM. Destination is updated to match
source, including deleting files if necessary (except duplicate
objects, see below).
source, including deleting files if necessary.

**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
@@ -24,8 +23,7 @@ objects, see below).

    rclone sync -i SOURCE remote:DESTINATION

Note that files in the destination won't be deleted if there were any
errors at any point. Duplicate objects (files with the same name, on
those providers that support it) are also not yet handled.
errors at any point.

It is always the contents of the directory that is synced, not the
directory so when source:path is a directory, it's the contents of
@@ -37,9 +35,6 @@ go there.

**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics

**Note**: Use the `rclone dedupe` command to deal with "Duplicate object/directory found in source/destination - ignoring" errors.
See [this forum post](https://forum.rclone.org/t/sync-not-clearing-duplicates/14372) for more info.


```
rclone sync source:path dest:path [flags]
@@ -1,41 +0,0 @@
---
title: "rclone test"
description: "Run a test command"
slug: rclone_test
url: /commands/rclone_test/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/test/ and as part of making a release run "make commanddocs"
---
# rclone test

Run a test command

## Synopsis

Rclone test is used to run test commands.

Select which test command you want with the subcommand, e.g.

    rclone test memory remote:

Each subcommand has its own options which you can see in their help.

**NB** Be careful running these commands, they may do strange things
so reading their documentation first is recommended.


## Options

```
-h, --help   help for test
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone test histogram](/commands/rclone_test_histogram/) - Makes a histogram of file name characters.
* [rclone test info](/commands/rclone_test_info/) - Discovers file name or other limitations for paths.
* [rclone test makefiles](/commands/rclone_test_makefiles/) - Make a random file hierarchy in <dir>
* [rclone test memory](/commands/rclone_test_memory/) - Load all the objects at remote:path into memory and report memory stats.
@@ -1,36 +0,0 @@
---
title: "rclone test histogram"
description: "Makes a histogram of file name characters."
slug: rclone_test_histogram
url: /commands/rclone_test_histogram/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/test/histogram/ and as part of making a release run "make commanddocs"
---
# rclone test histogram

Makes a histogram of file name characters.

## Synopsis

This command outputs JSON which shows the histogram of characters used
in filenames in the remote:path specified.

The data doesn't contain any identifying information but is useful for
the rclone developers when developing filename compression.


```
rclone test histogram [remote:path] [flags]
```

## Options

```
-h, --help   help for histogram
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone test](/commands/rclone_test/) - Run a test command
@@ -1,44 +0,0 @@
---
title: "rclone test info"
description: "Discovers file name or other limitations for paths."
slug: rclone_test_info
url: /commands/rclone_test_info/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/test/info/ and as part of making a release run "make commanddocs"
---
# rclone test info

Discovers file name or other limitations for paths.

## Synopsis

rclone info discovers what filenames and upload methods are possible
to write to the paths passed in and how long they can be. It can take some
time. It will write test files into the remote:path passed in. It outputs
a bit of go code for each one.

**NB** this can create undeletable files and other hazards - use with care


```
rclone test info [remote:path]+ [flags]
```

## Options

```
    --all                    Run all tests.
    --check-control          Check control characters.
    --check-length           Check max filename length.
    --check-normalization    Check UTF-8 Normalization.
    --check-streaming        Check uploads with indeterminate file size.
-h, --help                   help for info
    --upload-wait duration   Wait after writing a file.
    --write-json string      Write results to file.
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone test](/commands/rclone_test/) - Run a test command
@@ -1,33 +0,0 @@
---
title: "rclone test makefiles"
description: "Make a random file hierarchy in <dir>"
slug: rclone_test_makefiles
url: /commands/rclone_test_makefiles/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/test/makefiles/ and as part of making a release run "make commanddocs"
---
# rclone test makefiles

Make a random file hierarchy in <dir>

```
rclone test makefiles <dir> [flags]
```

## Options

```
    --files int                   Number of files to create (default 1000)
    --files-per-directory int     Average number of files per directory (default 10)
-h, --help                        help for makefiles
    --max-file-size SizeSuffix    Maximum size of files to create (default 100)
    --max-name-length int         Maximum size of file names (default 12)
    --min-file-size SizeSuffix    Minimum size of file to create
    --min-name-length int         Minimum size of file names (default 4)
```
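
For instance, to generate a small tree of test files locally using only the flags above (directory path illustrative):

    rclone test makefiles --files 100 --max-file-size 10k /tmp/rclone-test-files
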
See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone test](/commands/rclone_test/) - Run a test command
@@ -1,27 +0,0 @@
---
title: "rclone test memory"
description: "Load all the objects at remote:path into memory and report memory stats."
slug: rclone_test_memory
url: /commands/rclone_test_memory/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/test/memory/ and as part of making a release run "make commanddocs"
---
# rclone test memory

Load all the objects at remote:path into memory and report memory stats.

```
rclone test memory remote:path [flags]
```

## Options

```
-h, --help   help for memory
```

See the [global flags page](/flags/) for global options not listed here.

## SEE ALSO

* [rclone test](/commands/rclone_test/) - Run a test command
@@ -12,21 +12,14 @@ Show the version number.
## Synopsis


Show the rclone version number, the go version, the build target OS and
architecture, build tags and the type of executable (static or dynamic).
Show the version number, the go version and the architecture.

For example:
E.g.

    $ rclone version
    rclone v1.54
    - os/type: linux
    - os/arch: amd64
    - go/version: go1.16
    - go/linking: static
    - go/tags: none

Note: before rclone version 1.55 the os/type and os/arch lines were merged,
and the "go/version" line was tagged as "go version".
    rclone v1.41
    - os/arch: linux/amd64
    - go version: go1.10

If you supply the --check flag, then it will do an online check to
compare your version with the latest release and the latest beta.
@@ -517,20 +517,6 @@ names, or for debugging purposes.
- Type: bool
- Default: false

#### --crypt-no-data-encryption

Option to either encrypt file data or leave it unencrypted.

- Config: no_data_encryption
- Env Var: RCLONE_CRYPT_NO_DATA_ENCRYPTION
- Type: bool
- Default: false
- Examples:
    - "true"
        - Don't encrypt file data, leave it unencrypted.
    - "false"
        - Encrypt file data.
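
A sketch of a crypt remote with data encryption turned off (names and password are placeholders; file and directory name encryption is still governed by the usual crypt options):

```
[mycrypt]
type = crypt
remote = remote:encrypted
password = XXXX
no_data_encryption = true
```
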
### Backend commands

Here are the commands specific to the crypt backend.
Some files were not shown because too many files have changed in this diff.