mirror of https://github.com/rclone/rclone.git synced 2026-02-05 03:03:17 +00:00

Compare commits


11 Commits

Author SHA1 Message Date
Nick Craig-Wood
373fb01725 Version v1.50.2 2019-11-19 16:03:01 +00:00
Nick Craig-Wood
7766c5c90b accounting: fix memory leak on retried operations
Before this change, if an operation was retried in operations.Copy and
the operation was large enough to use an async buffer, then an async
buffer was leaked on the retry.  This leaked memory, a file handle and
a goroutine.

After this change, if Account.WithBuffer is called and there is already
a buffer, a new one won't be allocated.
2019-11-19 12:12:47 +00:00
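The guard described above can be sketched in a few lines of Go. This is
a minimal illustration of the idempotent-wrap pattern, not rclone's
actual accounting code - the Account type and withBuf field are
invented for the example:

    package main

    import "fmt"

    // Account is a simplified stand-in for rclone's accounting.Account.
    type Account struct {
        withBuf bool // records whether an async buffer is already attached
    }

    // WithBuffer attaches an async buffer. Making it idempotent means a
    // retry reuses the existing buffer instead of allocating (and then
    // leaking) a second one.
    func (acc *Account) WithBuffer() *Account {
        if acc.withBuf {
            return acc // already buffered - nothing to allocate on retry
        }
        acc.withBuf = true
        fmt.Println("allocating async buffer")
        return acc
    }

    func main() {
        acc := &Account{}
        acc.WithBuffer() // first attempt allocates
        acc.WithBuffer() // retry: reuses the buffer, no leak
    }
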
Nick Craig-Wood
473a437163 drive: fix --drive-root-folder-id with team/shared drives
Before this change rclone used the team_drive ID as the root if it was
set, even if root_folder_id was set too.

This change uses root_folder_id in preference to team_drive, which
restores the previous functionality.

This problem was introduced by ba7c2ac443

Fixes #3742
2019-11-17 14:50:44 +00:00
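The fix boils down to a precedence check when choosing the root. A
minimal sketch, with an Options struct cut down to the two relevant
config keys (the real struct in backend/drive has many more fields):

    package main

    import "fmt"

    // Options holds the two config keys involved in the fix.
    type Options struct {
        RootFolderID string // root_folder_id
        TeamDriveID  string // team_drive
    }

    // rootID applies the corrected precedence: an explicit
    // root_folder_id wins over the team_drive ID, which is only a
    // fallback.
    func rootID(opt Options) string {
        if opt.RootFolderID != "" {
            return opt.RootFolderID
        }
        if opt.TeamDriveID != "" {
            return opt.TeamDriveID
        }
        return "root"
    }

    func main() {
        opt := Options{TeamDriveID: "0ABcdTeamDrive", RootFolderID: "1XyzFolder"}
        fmt.Println(rootID(opt)) // prints the folder ID, not the team drive ID
    }
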
Nick Craig-Wood
9662554f53 drive: fix listing of the root directory with drive.files scope
We attempt to find the ID of the root folder by doing a GET on the
folder ID "root". With the "drive.files" scope this fails with a 404
error.

After this change, if we get the 404, we just carry on using "root" as
the root folder ID and cache that for future lookups.

This means that ChangeNotify messages will not work correctly in the
root folder, but otherwise the change has only minor consequences.

See: https://forum.rclone.org/t/fresh-raspberry-pi-build-google-drive-404-error-failed-to-ls-googleapi-error-404-file-not-found/12791
2019-11-11 09:07:54 +00:00
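A sketch of that fallback against the official Go Drive client.
rclone's real implementation goes through its own pacer and caches the
result, so the function below is illustrative only:

    package driveroot

    import (
        drive "google.golang.org/api/drive/v3"
        "google.golang.org/api/googleapi"
    )

    // findRootID resolves the root folder ID, falling back to the
    // "root" alias when the drive.files scope can't read it.
    func findRootID(svc *drive.Service) (string, error) {
        f, err := svc.Files.Get("root").Fields("id").Do()
        if gerr, ok := err.(*googleapi.Error); ok && gerr.Code == 404 {
            // scope too narrow to resolve the real ID - carry on with
            // the alias and let the caller cache it for future lookups
            return "root", nil
        }
        if err != nil {
            return "", err
        }
        return f.Id, nil
    }
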
Nick Craig-Wood
db930850cc Version v1.50.1 2019-11-02 14:26:50 +00:00
Nick Craig-Wood
6f8558f61a local: fix listings of . on Windows - fixes #3676 2019-10-30 16:03:13 +00:00
Nick Craig-Wood
d4fe62ec08 hash: fix hash names for DropboxHash and CRC-32
These were unintentionally renamed as part of 1dc8bcd48c

Fixes #3679
2019-10-30 16:02:57 +00:00
Nick Craig-Wood
9d69bc0b48 fshttp: don't print token bucket errors on context cancelled
These happen as a natural part of exceeding --max-transfer, and we
don't need to worry the user with them.
2019-10-30 16:02:47 +00:00
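The shape of the fix can be shown with golang.org/x/time/rate, which
rclone's fshttp uses for --tpslimit; the wrapper name here is invented:

    package tpslimit

    import (
        "context"
        "log"

        "golang.org/x/time/rate"
    )

    // waitForToken blocks until the limiter grants a token, but stays
    // quiet when the wait fails only because the context was cancelled
    // (e.g. --max-transfer aborting the transfer) - that is expected,
    // not something to show the user.
    func waitForToken(ctx context.Context, tb *rate.Limiter) {
        if err := tb.Wait(ctx); err != nil {
            if ctx.Err() != nil {
                return // cancelled or timed out - don't log
            }
            log.Printf("rate limiter: %v", err)
        }
    }
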
Xiaoxing Ye
f91b120be7 onedrive: no trailing slash reading metadata...
Don't use a trailing slash when reading the metadata of an item given its item ID.

This should fix #3664.
2019-10-30 16:02:31 +00:00
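The change itself is just the URL shape. A hypothetical illustration -
the path below is not the exact Graph endpoint rclone builds:

    package onedrive

    // itemMetadataPath builds the path for a metadata read of an item
    // by ID. Note the absence of a trailing slash; appending one is
    // what broke metadata reads (#3664).
    func itemMetadataPath(itemID string) string {
        return "/items/" + itemID // not "/items/" + itemID + "/"
    }
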
Nick Craig-Wood
fb25a926d7 fshttp: fix error reporting on tpslimit token bucket errors 2019-10-30 16:02:23 +00:00
Nick Craig-Wood
6c10b162ea rc: fix formatting of docs 2019-10-27 10:44:29 +00:00
641 changed files with 40572 additions and 52087 deletions


@@ -102,7 +102,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v1
uses: actions/checkout@master
with:
path: ./src/github.com/${{ github.repository }}
@@ -211,7 +211,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v1
uses: actions/checkout@master
with:
path: ./src/github.com/${{ github.repository }}

MANUAL.html generated

File diff suppressed because one or more lines are too long

MANUAL.md generated

@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
% Oct 26, 2019
% Nov 19, 2019
# Rclone - rsync for cloud storage
@@ -7133,6 +7133,7 @@ Authentication is required for this call.
### config/get: Get a remote in the config file. {#config/get}
Parameters:
- name - name of remote to get
See the [config dump command](https://rclone.org/commands/rclone_config_dump/) command for more information on the above.
@@ -7275,6 +7276,7 @@ If group is not provided then summed up stats for all groups will be
returned.
Parameters
- group - name of the stats group (string)
Returns the following values:
@@ -7316,8 +7318,8 @@ This clears counters and errors for all stats or specific stats group if group
is provided.
Parameters
- group - name of the stats group (string)
```
### core/transferred: Returns stats about completed transfers. {#core/transferred}
@@ -7331,6 +7333,7 @@ returned.
Note only the last 100 completed transfers are returned.
Parameters
- group - name of the stats group (string)
Returns the following values:
@@ -7354,6 +7357,7 @@ Returns the following values:
### core/version: Shows the current version of rclone and the go runtime. {#core/version}
This shows the current version of go and the go runtime
- version - rclone version, eg "v1.44"
- decomposed - version number as [major, minor, patch, subpatch]
- note patch and subpatch will be 999 for a git compiled version
@@ -7367,14 +7371,17 @@ This shows the current version of go and the go runtime
Parameters - None
Results
- jobids - array of integer job ids
### job/status: Reads the status of the job ID {#job/status}
Parameters
- jobid - id of the job (integer)
Results
- finished - boolean
- duration - time in seconds that the job ran for
- endTime - time the job finished (eg "2018-10-26T18:50:20.528746884+01:00")
@@ -7389,6 +7396,7 @@ Results
### job/stop: Stop the running job {#job/stop}
Parameters
- jobid - id of the job (integer)
### operations/about: Return the space used on the remote {#operations/about}
@@ -8452,7 +8460,7 @@ These flags are available for every command.
--use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.50.0")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.50.2")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -20181,6 +20189,25 @@ to override the default choice.
# Changelog
## v1.50.2 - 2019-11-19
* Bug Fixes
* accounting: Fix memory leak on retries operations (Nick Craig-Wood)
* Drive
* Fix listing of the root directory with drive.files scope (Nick Craig-Wood)
* Fix --drive-root-folder-id with team/shared drives (Nick Craig-Wood)
## v1.50.1 - 2019-11-02
* Bug Fixes
* hash: Fix accidentally changed hash names for `DropboxHash` and `CRC-32` (Nick Craig-Wood)
* fshttp: Fix error reporting on tpslimit token bucket errors (Nick Craig-Wood)
* fshttp: Don't print token bucket errors on context cancelled (Nick Craig-Wood)
* Local
* Fix listings of . on Windows (Nick Craig-Wood)
* Onedrive
* Fix DirMove/Move after Onedrive change (Xiaoxing Ye)
## v1.50.0 - 2019-10-26
* New backends

MANUAL.txt generated

File diff suppressed because it is too large


@@ -46,8 +46,7 @@ endif
rclone:
go build -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS)
mkdir -p `go env GOPATH`/bin/
cp -av rclone`go env GOEXE` `go env GOPATH`/bin/rclone`go env GOEXE`.new
mv -v `go env GOPATH`/bin/rclone`go env GOEXE`.new `go env GOPATH`/bin/rclone`go env GOEXE`
cp -av rclone`go env GOEXE` `go env GOPATH`/bin/
test_all:
go install --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) github.com/rclone/rclone/fstest/test_all


@@ -89,7 +89,7 @@ Now
* make TAG=${NEW_TAG} upload_github
* NB this overwrites the current beta so we need to do this
* git co master
* make VERSION=${NEW_TAG} startdev
* make LAST_TAG=${NEW_TAG} startdev
* # cherry pick the changes to the changelog and VERSION
* git checkout ${BASE_TAG}-fixes VERSION docs/content/changelog.md
* git commit --amend


@@ -16,7 +16,7 @@ import (
"sync"
"time"
bolt "github.com/etcd-io/bbolt"
bolt "github.com/coreos/bbolt"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/walk"


@@ -63,7 +63,6 @@ func init() {
Name: "password",
Help: "Password or pass phrase for encryption.",
IsPassword: true,
Required: true,
}, {
Name: "password2",
Help: "Password or pass phrase for salt. Optional but recommended.\nShould be different to the previous password.",


@@ -326,17 +326,6 @@ Photos folder" option in your google drive settings. You can then copy
or move the photos locally and use the date the image was taken
(created) set as the modification date.`,
Advanced: true,
}, {
Name: "use_shared_date",
Default: false,
Help: `Use date file was shared instead of modified date.
Note that, as with "--drive-use-created-date", this flag may have
unexpected consequences when uploading/downloading files.
If both this flag and "--drive-use-created-date" are set, the created
date is used.`,
Advanced: true,
}, {
Name: "list_chunk",
Default: 1000,
@@ -474,7 +463,6 @@ type Options struct {
ImportExtensions string `config:"import_formats"`
AllowImportNameChange bool `config:"allow_import_name_change"`
UseCreatedDate bool `config:"use_created_date"`
UseSharedDate bool `config:"use_shared_date"`
ListChunk int64 `config:"list_chunk"`
Impersonate string `config:"impersonate"`
AlternateExport bool `config:"alternate_export"`
@@ -706,9 +694,6 @@ func (f *Fs) list(ctx context.Context, dirIDs []string, title string, directorie
if f.opt.AuthOwnerOnly {
fields += ",owners"
}
if f.opt.UseSharedDate {
fields += ",sharedWithMeTime"
}
if f.opt.SkipChecksumGphotos {
fields += ",spaces"
}
@@ -845,7 +830,7 @@ func configTeamDrive(ctx context.Context, opt *Options, m configmap.Mapper, name
} else {
fmt.Printf("Change current team drive ID %q?\n", opt.TeamDriveID)
}
if !config.Confirm(false) {
if !config.Confirm() {
return nil
}
client, err := createOAuthClient(opt, name, m)
@@ -1110,8 +1095,6 @@ func (f *Fs) newBaseObject(remote string, info *drive.File) baseObject {
modifiedDate := info.ModifiedTime
if f.opt.UseCreatedDate {
modifiedDate = info.CreatedTime
} else if f.opt.UseSharedDate && info.SharedWithMeTime != "" {
modifiedDate = info.SharedWithMeTime
}
size := info.Size
if f.opt.SizeAsQuota {
@@ -1480,14 +1463,6 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if iErr != nil {
return nil, iErr
}
// If listing the root of a teamdrive and got no entries,
// double check we have access
if f.isTeamDrive && len(entries) == 0 && f.root == "" && dir == "" {
err = f.teamDriveOK(ctx)
if err != nil {
return nil, err
}
}
return entries, nil
}
@@ -1625,7 +1600,6 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
out := make(chan error, fs.Config.Checkers)
list := walk.NewListRHelper(callback)
overflow := []listREntry{}
listed := 0
cb := func(entry fs.DirEntry) error {
mu.Lock()
@@ -1638,7 +1612,6 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
overflow = append(overflow, listREntry{d.ID(), d.Remote()})
}
}
listed++
return list.Add(entry)
}
@@ -1695,21 +1668,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
return err
}
err = list.Flush()
if err != nil {
return err
}
// If listing the root of a teamdrive and got no entries,
// double check we have access
if f.isTeamDrive && listed == 0 && f.root == "" && dir == "" {
err = f.teamDriveOK(ctx)
if err != nil {
return err
}
}
return nil
return list.Flush()
}
// itemToDirEntry converts a drive.File to a fs.DirEntry.
@@ -2082,30 +2041,9 @@ func (f *Fs) CleanUp(ctx context.Context) error {
return nil
}
// teamDriveOK checks to see if we can access the team drive
func (f *Fs) teamDriveOK(ctx context.Context) (err error) {
if !f.isTeamDrive {
return nil
}
var td *drive.Drive
err = f.pacer.Call(func() (bool, error) {
td, err = f.svc.Drives.Get(f.opt.TeamDriveID).Fields("name,id,capabilities,createdTime,restrictions").Context(ctx).Do()
return shouldRetry(err)
})
if err != nil {
return errors.Wrap(err, "failed to get Team/Shared Drive info")
}
fs.Debugf(f, "read info from team drive %q", td.Name)
return err
}
// About gets quota information
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
if f.isTeamDrive {
err := f.teamDriveOK(ctx)
if err != nil {
return nil, err
}
// Teamdrives don't appear to have a usage API so just return empty
return &fs.Usage{}, nil
}


@@ -46,26 +46,13 @@ func (t Time) String() string { return time.Time(t).Format(timeFormat) }
// APIString returns Time string in Jottacloud API format
func (t Time) APIString() string { return time.Time(t).Format(apiTimeFormat) }
// LoginToken is struct representing the login token generated in the WebUI
type LoginToken struct {
Username string `json:"username"`
Realm string `json:"realm"`
WellKnownLink string `json:"well_known_link"`
AuthToken string `json:"auth_token"`
}
// TokenJSON is the struct representing the HTTP response from OAuth2
// providers returning a token in JSON form.
type TokenJSON struct {
AccessToken string `json:"access_token"`
ExpiresIn int32 `json:"expires_in"` // at least PayPal returns string, while most return number
RefreshExpiresIn int32 `json:"refresh_expires_in"`
RefreshToken string `json:"refresh_token"`
TokenType string `json:"token_type"`
IDToken string `json:"id_token"`
NotBeforePolicy int32 `json:"not-before-policy"`
SessionState string `json:"session_state"`
Scope string `json:"scope"`
AccessToken string `json:"access_token"`
TokenType string `json:"token_type"`
RefreshToken string `json:"refresh_token"`
ExpiresIn int32 `json:"expires_in"` // at least PayPal returns string, while most return number
}
// JSON structures returned by new API


@@ -4,13 +4,12 @@ import (
"bytes"
"context"
"crypto/md5"
"encoding/base64"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"log"
"math/rand"
"net/http"
"net/url"
"os"
@@ -26,6 +25,7 @@ import (
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
@@ -41,25 +41,29 @@ const enc = encodings.JottaCloud
// Globals
const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
defaultDevice = "Jotta"
defaultMountpoint = "Archive"
rootURL = "https://www.jottacloud.com/jfs/"
apiURL = "https://api.jottacloud.com/"
baseURL = "https://www.jottacloud.com/"
tokenURL = "https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token"
cachePrefix = "rclone-jcmd5-"
configDevice = "device"
configMountpoint = "mountpoint"
configVersion = 1
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
defaultDevice = "Jotta"
defaultMountpoint = "Archive"
rootURL = "https://www.jottacloud.com/jfs/"
apiURL = "https://api.jottacloud.com/"
baseURL = "https://www.jottacloud.com/"
tokenURL = "https://api.jottacloud.com/auth/v1/token"
registerURL = "https://api.jottacloud.com/auth/v1/register"
cachePrefix = "rclone-jcmd5-"
rcloneClientID = "nibfk8biu12ju7hpqomr8b1e40"
rcloneEncryptedClientSecret = "Vp8eAv7eVElMnQwN-kgU9cbhgApNDaMqWdlDi5qFydlQoji4JBxrGMF2"
configClientID = "client_id"
configClientSecret = "client_secret"
configDevice = "device"
configMountpoint = "mountpoint"
charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
)
var (
// Description of how to auth for this app for a personal account
oauthConfig = &oauth2.Config{
ClientID: "jottacli",
Endpoint: oauth2.Endpoint{
AuthURL: tokenURL,
TokenURL: tokenURL,
@@ -77,39 +81,43 @@ func init() {
NewFs: NewFs,
Config: func(name string, m configmap.Mapper) {
ctx := context.TODO()
tokenString, ok := m.Get("token")
if ok && tokenString != "" {
fmt.Printf("Already have a token - refresh?\n")
if !config.Confirm() {
return
}
}
refresh := false
if version, ok := m.Get("configVersion"); ok {
ver, err := strconv.Atoi(version)
srv := rest.NewClient(fshttp.NewClient(fs.Config))
fmt.Printf("\nDo you want to create a machine specific API key?\n\nRclone has it's own Jottacloud API KEY which works fine as long as one only uses rclone on a single machine. When you want to use rclone with this account on more than one machine it's recommended to create a machine specific API key. These keys can NOT be shared between machines.\n\n")
if config.Confirm() {
deviceRegistration, err := registerDevice(ctx, srv)
if err != nil {
log.Fatalf("Failed to parse config version - corrupted config")
log.Fatalf("Failed to register device: %v", err)
}
refresh = ver != configVersion
} else {
refresh = true
m.Set(configClientID, deviceRegistration.ClientID)
m.Set(configClientSecret, obscure.MustObscure(deviceRegistration.ClientSecret))
fs.Debugf(nil, "Got clientID '%s' and clientSecret '%s'", deviceRegistration.ClientID, deviceRegistration.ClientSecret)
}
if refresh {
fmt.Printf("Config outdated - refreshing\n")
} else {
tokenString, ok := m.Get("token")
if ok && tokenString != "" {
fmt.Printf("Already have a token - refresh?\n")
if !config.Confirm(false) {
return
}
}
clientID, ok := m.Get(configClientID)
if !ok {
clientID = rcloneClientID
}
clientSecret, ok := m.Get(configClientSecret)
if !ok {
clientSecret = rcloneEncryptedClientSecret
}
oauthConfig.ClientID = clientID
oauthConfig.ClientSecret = obscure.MustReveal(clientSecret)
clientConfig := *fs.Config
clientConfig.UserAgent = "JottaCli 0.6.18626 windows-amd64"
srv := rest.NewClient(fshttp.NewClient(&clientConfig))
fmt.Printf("Username> ")
username := config.ReadLine()
password := config.GetPassword("Your Jottacloud password is only required during setup and will not be stored.")
fmt.Printf("Generate a personal login token here: https://www.jottacloud.com/web/secure\n")
fmt.Printf("Login Token> ")
loginToken := config.ReadLine()
token, err := doAuth(ctx, srv, loginToken)
token, err := doAuth(ctx, srv, username, password)
if err != nil {
log.Fatalf("Failed to get oauth token: %s", err)
}
@@ -119,7 +127,7 @@ func init() {
}
fmt.Printf("\nDo you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?\n\n")
if config.Confirm(false) {
if config.Confirm() {
oAuthClient, _, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to load oAuthClient: %s", err)
@@ -135,8 +143,6 @@ func init() {
m.Set(configDevice, device)
m.Set(configMountpoint, mountpoint)
}
m.Set("configVersion", strconv.Itoa(configVersion))
},
Options: []fs.Option{{
Name: "md5_memory_limit",
@@ -243,51 +249,67 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// doAuth runs the actual token request
func doAuth(ctx context.Context, srv *rest.Client, loginTokenBase64 string) (token oauth2.Token, err error) {
loginTokenBytes, err := base64.StdEncoding.DecodeString(loginTokenBase64)
if err != nil {
return token, err
// registerDevice register a new device for use with the jottacloud API
func registerDevice(ctx context.Context, srv *rest.Client) (reg *api.DeviceRegistrationResponse, err error) {
// random generator to generate random device names
seededRand := rand.New(rand.NewSource(time.Now().UnixNano()))
randonDeviceNamePartLength := 21
randomDeviceNamePart := make([]byte, randonDeviceNamePartLength)
for i := range randomDeviceNamePart {
randomDeviceNamePart[i] = charset[seededRand.Intn(len(charset))]
}
randomDeviceName := "rclone-" + string(randomDeviceNamePart)
fs.Debugf(nil, "Trying to register device '%s'", randomDeviceName)
var loginToken api.LoginToken
decoder := json.NewDecoder(bytes.NewReader(loginTokenBytes))
err = decoder.Decode(&loginToken)
if err != nil {
return token, err
}
values := url.Values{}
values.Set("device_id", randomDeviceName)
// we don't seem to need any data from this link but the API is not happy if skip it
opts := rest.Opts{
Method: "GET",
RootURL: loginToken.WellKnownLink,
NoResponse: true,
}
_, err = srv.Call(ctx, &opts)
if err != nil {
return token, err
Method: "POST",
RootURL: registerURL,
ContentType: "application/x-www-form-urlencoded",
ExtraHeaders: map[string]string{"Authorization": "Bearer c2xrZmpoYWRsZmFramhkc2xma2phaHNkbGZramhhc2xkZmtqaGFzZGxrZmpobGtq"},
Parameters: values,
}
var deviceRegistration *api.DeviceRegistrationResponse
_, err = srv.CallJSON(ctx, &opts, nil, &deviceRegistration)
return deviceRegistration, err
}
// doAuth runs the actual token request
func doAuth(ctx context.Context, srv *rest.Client, username, password string) (token oauth2.Token, err error) {
// prepare out token request with username and password
values := url.Values{}
values.Set("client_id", "jottacli")
values.Set("grant_type", "password")
values.Set("password", loginToken.AuthToken)
values.Set("scope", "offline_access+openid")
values.Set("username", loginToken.Username)
values.Encode()
opts = rest.Opts{
values.Set("grant_type", "PASSWORD")
values.Set("password", password)
values.Set("username", username)
values.Set("client_id", oauthConfig.ClientID)
values.Set("client_secret", oauthConfig.ClientSecret)
opts := rest.Opts{
Method: "POST",
RootURL: oauthConfig.Endpoint.AuthURL,
ContentType: "application/x-www-form-urlencoded",
Body: strings.NewReader(values.Encode()),
Parameters: values,
}
// do the first request
var jsonToken api.TokenJSON
_, err = srv.CallJSON(ctx, &opts, nil, &jsonToken)
resp, err := srv.CallJSON(ctx, &opts, nil, &jsonToken)
if err != nil {
return token, err
// if 2fa is enabled the first request is expected to fail. We will do another request with the 2fa code as an additional http header
if resp != nil {
if resp.Header.Get("X-JottaCloud-OTP") == "required; SMS" {
fmt.Printf("This account uses 2 factor authentication you will receive a verification code via SMS.\n")
fmt.Printf("Enter verification code> ")
authCode := config.ReadLine()
authCode = strings.Replace(authCode, "-", "", -1) // remove any "-" contained in the code so we have a 6 digit number
opts.ExtraHeaders = make(map[string]string)
opts.ExtraHeaders["X-Jottacloud-Otp"] = authCode
resp, err = srv.CallJSON(ctx, &opts, nil, &jsonToken)
}
}
}
token.AccessToken = jsonToken.AccessToken
@@ -449,6 +471,29 @@ func (f *Fs) filePath(file string) string {
return urlPathEscape(f.filePathRaw(file))
}
// Jottacloud requires the grant_type 'refresh_token' string
// to be uppercase and throws a 400 Bad Request if we use the
// lower case used by the oauth2 module
//
// This filter catches all refresh requests, reads the body,
// changes the case and then sends it on
func grantTypeFilter(req *http.Request) {
if tokenURL == req.URL.String() {
// read the entire body
refreshBody, err := ioutil.ReadAll(req.Body)
if err != nil {
return
}
_ = req.Body.Close()
// make the refresh token upper case
refreshBody = []byte(strings.Replace(string(refreshBody), "grant_type=refresh_token", "grant_type=REFRESH_TOKEN", 1))
// set the new ReadCloser (with a dummy Close())
req.Body = ioutil.NopCloser(bytes.NewReader(refreshBody))
}
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
ctx := context.TODO()
@@ -459,23 +504,30 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return nil, err
}
var ok bool
var version string
if version, ok = m.Get("configVersion"); ok {
ver, err := strconv.Atoi(version)
if err != nil {
return nil, errors.New("Failed to parse config version")
}
ok = ver == configVersion
}
if !ok {
return nil, errors.New("Outdated config - please reconfigure this backend")
}
rootIsDir := strings.HasSuffix(root, "/")
root = parsePath(root)
clientID, ok := m.Get(configClientID)
if !ok {
clientID = rcloneClientID
}
clientSecret, ok := m.Get(configClientSecret)
if !ok {
clientSecret = rcloneEncryptedClientSecret
}
oauthConfig.ClientID = clientID
oauthConfig.ClientSecret = obscure.MustReveal(clientSecret)
// the oauth client for the api servers needs
// a filter to fix the grant_type issues (see above)
baseClient := fshttp.NewClient(fs.Config)
if do, ok := baseClient.Transport.(interface {
SetRequestFilter(f func(req *http.Request))
}); ok {
do.SetRequestFilter(grantTypeFilter)
} else {
fs.Debugf(name+":", "Couldn't add request filter - uploads will fail")
}
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, oauthConfig, baseClient)
if err != nil {
return nil, errors.Wrap(err, "Failed to configure Jottacloud oauth client")


@@ -16,7 +16,6 @@ import (
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
httpclient "github.com/koofr/go-httpclient"
@@ -260,9 +259,7 @@ func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
if err != nil {
return nil, err
}
httpClient := httpclient.New()
httpClient.Client = fshttp.NewClient(fs.Config)
client := koofrclient.NewKoofrClientWithHTTPClient(opt.Endpoint, httpClient)
client := koofrclient.NewKoofrClient(opt.Endpoint, false)
basicAuth := fmt.Sprintf("Basic %s",
base64.StdEncoding.EncodeToString([]byte(opt.User+":"+pass)))
client.HTTPClient.Headers.Set("Authorization", basicAuth)


@@ -350,7 +350,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
err = errors.Wrapf(err, "failed to open directory %q", dir)
fs.Errorf(dir, "%v", err)
if isPerm {
_ = accounting.Stats(ctx).Error(fserrors.NoRetryError(err))
accounting.Stats(ctx).Error(fserrors.NoRetryError(err))
err = nil // ignore error but fail sync
}
return nil, err
@@ -386,7 +386,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if fierr != nil {
err = errors.Wrapf(err, "failed to read directory %q", namepath)
fs.Errorf(dir, "%v", fierr)
_ = accounting.Stats(ctx).Error(fserrors.NoRetryError(fierr)) // fail the sync
accounting.Stats(ctx).Error(fserrors.NoRetryError(fierr)) // fail the sync
continue
}
fis = append(fis, fi)
@@ -409,7 +409,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// Skip bad symlinks
err = fserrors.NoRetryError(errors.Wrap(err, "symlink"))
fs.Errorf(newRemote, "Listing error: %v", err)
err = accounting.Stats(ctx).Error(err)
accounting.Stats(ctx).Error(err)
continue
}
if err != nil {
@@ -820,10 +820,10 @@ func (file *localOpenFile) Read(p []byte) (n int, err error) {
return 0, errors.Wrap(err, "can't read status of source file while transferring")
}
if file.o.size != fi.Size() {
return 0, fserrors.NoLowLevelRetryError(errors.Errorf("can't copy - source file is being updated (size changed from %d to %d)", file.o.size, fi.Size()))
return 0, errors.Errorf("can't copy - source file is being updated (size changed from %d to %d)", file.o.size, fi.Size())
}
if !file.o.modTime.Equal(fi.ModTime()) {
return 0, fserrors.NoLowLevelRetryError(errors.Errorf("can't copy - source file is being updated (mod time changed from %v to %v)", file.o.modTime, fi.ModTime()))
return 0, errors.Errorf("can't copy - source file is being updated (mod time changed from %v to %v)", file.o.modTime, fi.ModTime())
}
}


@@ -269,7 +269,7 @@ func qsServiceConnection(opt *Options) (*qs.Service, error) {
cf.Protocol = protocol
cf.Host = host
cf.Port = port
// unsupported in v3.1: cf.ConnectionRetries = opt.ConnectionRetries
cf.ConnectionRetries = opt.ConnectionRetries
cf.Connection = fshttp.NewClient(fs.Config)
return qs.Init(cf)


@@ -26,7 +26,6 @@ import (
"regexp"
"strconv"
"strings"
"sync"
"time"
"github.com/aws/aws-sdk-go/aws"
@@ -694,37 +693,16 @@ The minimum is 0 and the maximum is 5GB.`,
Name: "chunk_size",
Help: `Chunk size to use for uploading.
When uploading files larger than upload_cutoff or files with unknown
size (eg from "rclone rcat" or uploaded with "rclone mount" or google
photos or google docs) they will be uploaded as multipart uploads
using this chunk size.
When uploading files larger than upload_cutoff they will be uploaded
as multipart uploads using this chunk size.
Note that "--s3-upload-concurrency" chunks of this size are buffered
in memory per transfer.
If you are transferring large files over high speed links and you have
enough memory, then increasing this will speed up the transfers.
Rclone will automatically increase the chunk size when uploading a
large file of known size to stay below the 10,000 chunks limit.
Files of unknown size are uploaded with the configured
chunk_size. Since the default chunk size is 5MB and there can be at
most 10,000 chunks, this means that by default the maximum size of
file you can stream upload is 48GB. If you wish to stream upload
larger files then you will need to increase chunk_size.`,
enough memory, then increasing this will speed up the transfers.`,
Default: minChunkSize,
Advanced: true,
}, {
Name: "copy_cutoff",
Help: `Cutoff for switching to multipart copy
Any files larger than this that need to be server side copied will be
copied in chunks of this size.
The minimum is 0 and the maximum is 5GB.`,
Default: fs.SizeSuffix(maxSizeForCopy),
Advanced: true,
}, {
Name: "disable_checksum",
Help: "Don't store MD5 checksum with object metadata",
@@ -793,11 +771,12 @@ WARNING: Storing parts of an incomplete multipart upload counts towards space us
// Constants
const (
metaMtime = "Mtime" // the meta key to store mtime in - eg X-Amz-Meta-Mtime
metaMD5Hash = "Md5chksum" // the meta key to store md5hash in
listChunkSize = 1000 // number of items to read at once
maxRetries = 10 // number of retries to make of operations
maxSizeForCopy = 5 * 1024 * 1024 * 1024 // The maximum size of object we can COPY
metaMtime = "Mtime" // the meta key to store mtime in - eg X-Amz-Meta-Mtime
metaMD5Hash = "Md5chksum" // the meta key to store md5hash in
listChunkSize = 1000 // number of items to read at once
maxRetries = 10 // number of retries to make of operations
maxSizeForCopy = 5 * 1024 * 1024 * 1024 // The maximum size of object we can COPY
maxFileSize = 5 * 1024 * 1024 * 1024 * 1024 // largest possible upload file size
minChunkSize = fs.SizeSuffix(s3manager.MinUploadPartSize)
defaultUploadCutoff = fs.SizeSuffix(200 * 1024 * 1024)
maxUploadCutoff = fs.SizeSuffix(5 * 1024 * 1024 * 1024)
@@ -819,7 +798,6 @@ type Options struct {
SSEKMSKeyID string `config:"sse_kms_key_id"`
StorageClass string `config:"storage_class"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
CopyCutoff fs.SizeSuffix `config:"copy_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DisableChecksum bool `config:"disable_checksum"`
SessionToken string `config:"session_token"`
@@ -983,7 +961,7 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
Client: ec2metadata.New(session.New(), &aws.Config{
HTTPClient: lowTimeoutClient,
}),
ExpiryWindow: 3 * time.Minute,
ExpiryWindow: 3,
},
}
cred := credentials.NewChainCredentials(providers)
@@ -1664,7 +1642,7 @@ func (f *Fs) copy(ctx context.Context, req *s3.CopyObjectInput, dstBucket, dstPa
req.StorageClass = &f.opt.StorageClass
}
if srcSize >= int64(f.opt.CopyCutoff) {
if srcSize >= int64(f.opt.UploadCutoff) {
return f.copyMultipart(ctx, req, dstBucket, dstPath, srcBucket, srcPath, srcSize)
}
return f.pacer.Call(func() (bool, error) {
@@ -1677,8 +1655,8 @@ func calculateRange(partSize, partIndex, numParts, totalSize int64) string {
start := partIndex * partSize
var ends string
if partIndex == numParts-1 {
if totalSize >= 1 {
ends = strconv.FormatInt(totalSize-1, 10)
if totalSize >= 0 {
ends = strconv.FormatInt(totalSize, 10)
}
} else {
ends = strconv.FormatInt(start+partSize-1, 10)
@@ -1715,7 +1693,7 @@ func (f *Fs) copyMultipart(ctx context.Context, req *s3.CopyObjectInput, dstBuck
}
}()
partSize := int64(f.opt.CopyCutoff)
partSize := int64(f.opt.ChunkSize)
numParts := (srcSize-1)/partSize + 1
var parts []*s3.CompletedPart
@@ -1943,6 +1921,11 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
}
o.meta[metaMtime] = aws.String(swift.TimeToFloatString(modTime))
if o.bytes >= maxSizeForCopy {
fs.Debugf(o, "SetModTime is unsupported for objects bigger than %v bytes", fs.SizeSuffix(maxSizeForCopy))
return nil
}
// Can't update metadata here, so return this error to force a recopy
if o.storageClass == "GLACIER" || o.storageClass == "DEEP_ARCHIVE" {
return fs.ErrorCantSetModTime
@@ -1999,8 +1982,6 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
return resp.Body, nil
}
var warnStreamUpload sync.Once
// Update the Object from in with modTime and size
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
bucket, bucketPath := o.split()
@@ -2020,14 +2001,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
u.S3 = o.fs.c
u.PartSize = int64(o.fs.opt.ChunkSize)
// size can be -1 here meaning we don't know the size of the incoming file. We use ChunkSize
// buffers here (default 5MB). With a maximum number of parts (10,000) this will be a file of
// 48GB which seems like a not too unreasonable limit.
if size == -1 {
warnStreamUpload.Do(func() {
fs.Logf(o.fs, "Streaming uploads using chunk size %v will have maximum file size of %v",
o.fs.opt.ChunkSize, fs.SizeSuffix(u.PartSize*s3manager.MaxUploadParts))
})
// Make parts as small as possible while still being able to upload to the
// S3 file size limit. Rounded up to nearest MB.
u.PartSize = (((maxFileSize / s3manager.MaxUploadParts) >> 20) + 1) << 20
return
}
// Adjust PartSize until the number of parts is small enough.
@@ -2046,7 +2023,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// read the md5sum if available for non multpart and if
// disable checksum isn't present.
var md5sum string
if !multipart && !o.fs.opt.DisableChecksum {
if !multipart || !o.fs.opt.DisableChecksum {
hash, err := src.Hash(ctx, hash.MD5)
if err == nil && matchMd5.MatchString(hash) {
hashBytes, err := hex.DecodeString(hash)


@@ -29,17 +29,15 @@ import (
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/env"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
sshagent "github.com/xanzy/ssh-agent"
"golang.org/x/crypto/ssh"
"golang.org/x/time/rate"
)
const (
connectionsPerSecond = 10 // don't make more than this many ssh connections/s
hashCommandNotSupported = "none"
minSleep = 100 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
)
var (
@@ -156,11 +154,6 @@ Home directory can be found in a shared folder called "home"
Default: "",
Help: "The command used to read sha1 hashes. Leave blank for autodetect.",
Advanced: true,
}, {
Name: "skip_links",
Default: false,
Help: "Set to skip any symlinks and any other non regular files.",
Advanced: true,
}},
}
fs.Register(fsi)
@@ -182,7 +175,6 @@ type Options struct {
SetModTime bool `config:"set_modtime"`
Md5sumCommand string `config:"md5sum_command"`
Sha1sumCommand string `config:"sha1sum_command"`
SkipLinks bool `config:"skip_links"`
}
// Fs stores the interface to the remote SFTP files
@@ -198,7 +190,7 @@ type Fs struct {
cachedHashes *hash.Set
poolMu sync.Mutex
pool []*conn
pacer *fs.Pacer // pacer for operations
connLimit *rate.Limiter // for limiting number of connections per second
}
// Object is a remote SFTP file that has been stat'd (so it exists, but is not necessarily open for reading)
@@ -278,6 +270,10 @@ func (c *conn) closed() error {
// Open a new connection to the SFTP server.
func (f *Fs) sftpConnection() (c *conn, err error) {
// Rate limit rate of new connections
err = f.connLimit.Wait(context.Background())
if err != nil {
return nil, errors.Wrap(err, "limiter failed in connect")
}
c = &conn{
err: make(chan error, 1),
}
@@ -311,14 +307,7 @@ func (f *Fs) getSftpConnection() (c *conn, err error) {
if c != nil {
return c, nil
}
err = f.pacer.Call(func() (bool, error) {
c, err = f.sftpConnection()
if err != nil {
return true, err
}
return false, nil
})
return c, err
return f.sftpConnection()
}
// Return an SFTP connection to the pool
@@ -476,7 +465,7 @@ func NewFsWithConnection(ctx context.Context, name string, root string, m config
config: sshConfig,
url: "sftp://" + opt.User + "@" + opt.Host + ":" + opt.Port + "/" + root,
mkdirLock: newStringLock(),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
connLimit: rate.NewLimiter(rate.Limit(connectionsPerSecond), 1),
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
@@ -606,16 +595,12 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
remote := path.Join(dir, info.Name())
// If file is a symlink (not a regular file is the best cross platform test we can do), do a stat to
// pick up the size and type of the destination, instead of the size and type of the symlink.
if !info.Mode().IsRegular() && !info.IsDir() {
if f.opt.SkipLinks {
// skip non regular file if SkipLinks is set
continue
}
if !info.Mode().IsRegular() {
oldInfo := info
info, err = f.stat(remote)
if err != nil {
if !os.IsNotExist(err) {
fs.Errorf(remote, "stat of non-regular file failed: %v", err)
fs.Errorf(remote, "stat of non-regular file/dir failed: %v", err)
}
info = oldInfo
}


@@ -7,7 +7,6 @@ import (
"context"
"fmt"
"io"
"net/url"
"path"
"strconv"
"strings"
@@ -531,10 +530,10 @@ type listFn func(remote string, object *swift.Object, isDirectory bool) error
//
// Set recurse to read sub directories
func (f *Fs) listContainerRoot(container, directory, prefix string, addContainer bool, recurse bool, fn listFn) error {
if prefix != "" && !strings.HasSuffix(prefix, "/") {
if prefix != "" {
prefix += "/"
}
if directory != "" && !strings.HasSuffix(directory, "/") {
if directory != "" {
directory += "/"
}
// Options for ObjectsWalk
@@ -953,18 +952,6 @@ func (o *Object) isStaticLargeObject() (bool, error) {
return o.hasHeader("X-Static-Large-Object")
}
func (o *Object) isInContainerVersioning(container string) (bool, error) {
_, headers, err := o.fs.c.Container(container)
if err != nil {
return false, err
}
xHistoryLocation := headers["X-History-Location"]
if len(xHistoryLocation) > 0 {
return true, nil
}
return false, nil
}
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
return o.size
@@ -1096,8 +1083,9 @@ func min(x, y int64) int64 {
//
// if except is passed in then segments with that prefix won't be deleted
func (o *Object) removeSegments(except string) error {
segmentsContainer, prefix, err := o.getSegmentsDlo()
err = o.fs.listContainerRoot(segmentsContainer, prefix, "", false, true, func(remote string, object *swift.Object, isDirectory bool) error {
container, containerPath := o.split()
segmentsContainer := container + "_segments"
err := o.fs.listContainerRoot(segmentsContainer, containerPath, "", false, true, func(remote string, object *swift.Object, isDirectory bool) error {
if isDirectory {
return nil
}
@@ -1126,23 +1114,6 @@ func (o *Object) removeSegments(except string) error {
return nil
}
func (o *Object) getSegmentsDlo() (segmentsContainer string, prefix string, err error) {
if err = o.readMetaData(); err != nil {
return
}
dirManifest := o.headers["X-Object-Manifest"]
dirManifest, err = url.PathUnescape(dirManifest)
if err != nil {
return
}
delimiter := strings.Index(dirManifest, "/")
if len(dirManifest) == 0 || delimiter < 0 {
err = errors.New("Missing or wrong structure of manifest of Dynamic large object")
return
}
return dirManifest[:delimiter], dirManifest[delimiter+1:], nil
}
// urlEncode encodes a string so that it is a valid URL
//
// We don't use any of Go's standard methods as we need `/` not
@@ -1329,9 +1300,12 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
// Remove an object
func (o *Object) Remove(ctx context.Context) (err error) {
func (o *Object) Remove(ctx context.Context) error {
container, containerPath := o.split()
isDynamicLargeObject, err := o.isDynamicLargeObject()
if err != nil {
return err
}
// Remove file/manifest first
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectDelete(container, containerPath)
@@ -1340,22 +1314,12 @@ func (o *Object) Remove(ctx context.Context) (err error) {
if err != nil {
return err
}
isDynamicLargeObject, err := o.isDynamicLargeObject()
if err != nil {
return err
}
// ...then segments if required
if isDynamicLargeObject {
isInContainerVersioning, err := o.isInContainerVersioning(container)
err = o.removeSegments("")
if err != nil {
return err
}
if !isInContainerVersioning {
err = o.removeSegments("")
if err != nil {
return err
}
}
}
return nil
}


@@ -113,8 +113,7 @@ type Fs struct {
canStream bool // set if can stream
useOCMtime bool // set if can use X-OC-Mtime
retryWithZeroDepth bool // some vendors (sharepoint) won't list files when Depth is 1 (our default)
hasMD5 bool // set if can use owncloud style checksums for MD5
hasSHA1 bool // set if can use owncloud style checksums for SHA1
hasChecksums bool // set if can use owncloud style checksums
}
// Object describes a webdav object
@@ -216,7 +215,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string, depth string)
},
NoRedirect: true,
}
if f.hasMD5 || f.hasSHA1 {
if f.hasChecksums {
opts.Body = bytes.NewBuffer(owncloudProps)
}
var result api.Multistatus
@@ -384,7 +383,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// sets the BearerToken up
func (f *Fs) setBearerToken(token string) {
f.opt.BearerToken = token
f.srv.SetHeader("Authorization", "Bearer "+token)
f.srv.SetHeader("Authorization", "BEARER "+token)
}
// fetch the bearer token using the command
@@ -431,12 +430,11 @@ func (f *Fs) setQuirks(ctx context.Context, vendor string) error {
f.canStream = true
f.precision = time.Second
f.useOCMtime = true
f.hasMD5 = true
f.hasSHA1 = true
f.hasChecksums = true
case "nextcloud":
f.precision = time.Second
f.useOCMtime = true
f.hasSHA1 = true
f.hasChecksums = true
case "sharepoint":
// To mount sharepoint, two Cookies are required
// They have to be set instead of BasicAuth
@@ -538,7 +536,7 @@ func (f *Fs) listAll(ctx context.Context, dir string, directoriesOnly bool, file
"Depth": depth,
},
}
if f.hasMD5 || f.hasSHA1 {
if f.hasChecksums {
opts.Body = bytes.NewBuffer(owncloudProps)
}
var result api.Multistatus
@@ -947,14 +945,10 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
hashes := hash.Set(hash.None)
if f.hasMD5 {
hashes.Add(hash.MD5)
if f.hasChecksums {
return hash.NewHashSet(hash.MD5, hash.SHA1)
}
if f.hasSHA1 {
hashes.Add(hash.SHA1)
}
return hashes
return hash.Set(hash.None)
}
// About gets quota information
@@ -1021,11 +1015,13 @@ func (o *Object) Remote() string {
// Hash returns the SHA1 or MD5 of an object returning a lowercase hex string
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t == hash.MD5 && o.fs.hasMD5 {
return o.md5, nil
}
if t == hash.SHA1 && o.fs.hasSHA1 {
return o.sha1, nil
if o.fs.hasChecksums {
switch t {
case hash.SHA1:
return o.sha1, nil
case hash.MD5:
return o.md5, nil
}
}
return "", hash.ErrUnsupported
}
@@ -1046,14 +1042,10 @@ func (o *Object) setMetaData(info *api.Prop) (err error) {
o.hasMetaData = true
o.size = info.Size
o.modTime = time.Time(info.Modified)
if o.fs.hasMD5 || o.fs.hasSHA1 {
if o.fs.hasChecksums {
hashes := info.Hashes()
if o.fs.hasSHA1 {
o.sha1 = hashes[hash.SHA1]
}
if o.fs.hasMD5 {
o.md5 = hashes[hash.MD5]
}
o.sha1 = hashes[hash.SHA1]
o.md5 = hashes[hash.MD5]
}
return nil
}
@@ -1134,21 +1126,19 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
ContentLength: &size, // FIXME this isn't necessary with owncloud - See https://github.com/nextcloud/nextcloud-snap/issues/365
ContentType: fs.MimeType(ctx, src),
}
if o.fs.useOCMtime || o.fs.hasMD5 || o.fs.hasSHA1 {
if o.fs.useOCMtime || o.fs.hasChecksums {
opts.ExtraHeaders = map[string]string{}
if o.fs.useOCMtime {
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime(ctx).UnixNano())/1e9)
}
// Set one upload checksum
// Owncloud uses one checksum only to check the upload and stores its own SHA1 and MD5
// Nextcloud stores the checksum you supply (SHA1 or MD5) but only stores one
if o.fs.hasSHA1 {
if o.fs.hasChecksums {
// Set an upload checksum - prefer SHA1
//
// This is used as an upload integrity test. If we set
// only SHA1 here, owncloud will calculate the MD5 too.
if sha1, _ := src.Hash(ctx, hash.SHA1); sha1 != "" {
opts.ExtraHeaders["OC-Checksum"] = "SHA1:" + sha1
}
}
if o.fs.hasMD5 && opts.ExtraHeaders["OC-Checksum"] == "" {
if md5, _ := src.Hash(ctx, hash.MD5); md5 != "" {
} else if md5, _ := src.Hash(ctx, hash.MD5); md5 != "" {
opts.ExtraHeaders["OC-Checksum"] = "MD5:" + md5
}
}


@@ -3,18 +3,11 @@ package authorize
import (
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/flags"
"github.com/spf13/cobra"
)
var (
noAutoBrowser bool
)
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &noAutoBrowser, "auth-no-open-browser", "", false, "Do not automatically open auth link in default browser")
}
var commandDefinition = &cobra.Command{
@@ -23,12 +16,9 @@ var commandDefinition = &cobra.Command{
Long: `
Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.
Use the --auth-no-open-browser to prevent rclone to open auth
link in default browser automatically.`,
rclone config.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 3, command, args)
config.Authorize(args, noAutoBrowser)
config.Authorize(args)
},
}


@@ -82,7 +82,7 @@ func ShowVersion() {
func NewFsFile(remote string) (fs.Fs, string) {
_, _, fsPath, err := fs.ParseRemote(remote)
if err != nil {
err = fs.CountError(err)
fs.CountError(err)
log.Fatalf("Failed to create file system for %q: %v", remote, err)
}
f, err := cache.Get(remote)
@@ -92,7 +92,7 @@ func NewFsFile(remote string) (fs.Fs, string) {
case nil:
return f, ""
default:
err = fs.CountError(err)
fs.CountError(err)
log.Fatalf("Failed to create file system for %q: %v", remote, err)
}
return nil, ""
@@ -107,13 +107,13 @@ func newFsFileAddFilter(remote string) (fs.Fs, string) {
if fileName != "" {
if !filter.Active.InActive() {
err := errors.Errorf("Can't limit to single files when using filters: %v", remote)
err = fs.CountError(err)
fs.CountError(err)
log.Fatalf(err.Error())
}
// Limit transfers to this file
err := filter.Active.AddFile(fileName)
if err != nil {
err = fs.CountError(err)
fs.CountError(err)
log.Fatalf("Failed to limit to single file %q: %v", remote, err)
}
}
@@ -135,7 +135,7 @@ func NewFsSrc(args []string) fs.Fs {
func newFsDir(remote string) fs.Fs {
f, err := cache.Get(remote)
if err != nil {
err = fs.CountError(err)
fs.CountError(err)
log.Fatalf("Failed to create file system for %q: %v", remote, err)
}
return f
@@ -189,11 +189,11 @@ func NewFsSrcDstFiles(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs
fdst, err := cache.Get(dstRemote)
switch err {
case fs.ErrorIsFile:
_ = fs.CountError(err)
fs.CountError(err)
log.Fatalf("Source doesn't exist or is a directory and destination is a file")
case nil:
default:
_ = fs.CountError(err)
fs.CountError(err)
log.Fatalf("Failed to create file system for destination %q: %v", dstRemote, err)
}
return
@@ -239,7 +239,7 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
SigInfoHandler()
for try := 1; try <= *retries; try++ {
err = f()
err = fs.CountError(err)
fs.CountError(err)
lastErr := accounting.GlobalStats().GetLastError()
if err == nil {
err = lastErr
@@ -386,12 +386,12 @@ func initConfig() {
fs.Infof(nil, "Creating CPU profile %q\n", *cpuProfile)
f, err := os.Create(*cpuProfile)
if err != nil {
err = fs.CountError(err)
fs.CountError(err)
log.Fatal(err)
}
err = pprof.StartCPUProfile(f)
if err != nil {
err = fs.CountError(err)
fs.CountError(err)
log.Fatal(err)
}
atexit.Register(func() {
@@ -405,17 +405,17 @@ func initConfig() {
fs.Infof(nil, "Saving Memory profile %q\n", *memProfile)
f, err := os.Create(*memProfile)
if err != nil {
err = fs.CountError(err)
fs.CountError(err)
log.Fatal(err)
}
err = pprof.WriteHeapProfile(f)
if err != nil {
err = fs.CountError(err)
fs.CountError(err)
log.Fatal(err)
}
err = f.Close()
if err != nil {
err = fs.CountError(err)
fs.CountError(err)
log.Fatal(err)
}
})


@@ -371,12 +371,7 @@ func (fsys *FS) Write(path string, buff []byte, ofst int64, fh uint64) (n int) {
if errc != 0 {
return errc
}
var err error
if fsys.VFS.Opt.CacheMode < vfs.CacheModeWrites || handle.Node().Mode()&os.ModeAppend == 0 {
n, err = handle.WriteAt(buff, ofst)
} else {
n, err = handle.Write(buff)
}
n, err := handle.WriteAt(buff, ofst)
if err != nil {
return translateError(err)
}


@@ -21,7 +21,6 @@ import (
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
)
@@ -208,7 +207,7 @@ func mount(f fs.Fs, mountpoint string) (*vfs.VFS, <-chan error, func() error, er
// If noModTime is set then it
func Mount(f fs.Fs, mountpoint string) error {
// Mount it
FS, errChan, unmount, err := mount(f, mountpoint)
FS, errChan, _, err := mount(f, mountpoint)
if err != nil {
return errors.Wrap(err, "failed to mount FUSE fs")
}
@@ -218,10 +217,6 @@ func Mount(f fs.Fs, mountpoint string) error {
sigHup := make(chan os.Signal, 1)
signal.Notify(sigHup, syscall.SIGHUP)
atexit.Register(func() {
_ = unmount()
})
if err := sdnotify.Ready(); err != nil && err != sdnotify.ErrSdNotifyNoSocket {
return errors.Wrap(err, "failed to notify systemd")
}


@@ -88,7 +88,7 @@ func cryptCheck(ctx context.Context, fdst, fsrc fs.Fs) error {
underlyingDst := cryptDst.UnWrap()
underlyingHash, err := underlyingDst.Hash(ctx, hashType)
if err != nil {
err = fs.CountError(err)
fs.CountError(err)
fs.Errorf(dst, "Error reading hash from underlying %v: %v", underlyingDst, err)
return true, false
}
@@ -97,7 +97,7 @@ func cryptCheck(ctx context.Context, fdst, fsrc fs.Fs) error {
}
cryptHash, err := fcrypt.ComputeHash(ctx, cryptDst, src, hashType)
if err != nil {
err = fs.CountError(err)
fs.CountError(err)
fs.Errorf(dst, "Error computing hash: %v", err)
return true, false
}
@@ -106,7 +106,7 @@ func cryptCheck(ctx context.Context, fdst, fsrc fs.Fs) error {
}
if cryptHash != underlyingHash {
err = errors.Errorf("hashes differ (%s:%s) %q vs (%s:%s) %q", fdst.Name(), fdst.Root(), cryptHash, fsrc.Name(), fsrc.Root(), underlyingHash)
err = fs.CountError(err)
fs.CountError(err)
fs.Errorf(src, err.Error())
return true, false
}


@@ -46,11 +46,10 @@ __rclone_custom_func() {
else
__rclone_init_completion -n : || return
fi
local rclone=(command rclone --ask-password=false)
if [[ $cur != *:* ]]; then
local ifs=$IFS
IFS=$'\n'
local remotes=($("${rclone[@]}" listremotes 2> /dev/null))
local remotes=($(command rclone listremotes))
IFS=$ifs
local remote
for remote in "${remotes[@]}"; do
@@ -69,7 +68,7 @@ __rclone_custom_func() {
fi
local ifs=$IFS
IFS=$'\n'
local lines=($("${rclone[@]}" lsf "${cur%%:*}:$prefix" 2> /dev/null))
local lines=($(rclone lsf "${cur%%:*}:$prefix" 2>/dev/null))
IFS=$ifs
local line
for line in "${lines[@]}"; do


@@ -5,7 +5,6 @@ package mount
import (
"context"
"io"
"os"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
@@ -42,12 +41,7 @@ var _ fusefs.HandleWriter = (*FileHandle)(nil)
// Write data to the file handle
func (fh *FileHandle) Write(ctx context.Context, req *fuse.WriteRequest, resp *fuse.WriteResponse) (err error) {
defer log.Trace(fh, "len=%d, offset=%d", len(req.Data), req.Offset)("written=%d, err=%v", &resp.Size, &err)
var n int
if fh.Handle.Node().VFS().Opt.CacheMode < vfs.CacheModeWrites || fh.Handle.Node().Mode()&os.ModeAppend == 0 {
n, err = fh.Handle.WriteAt(req.Data, req.Offset)
} else {
n, err = fh.Handle.Write(req.Data)
}
n, err := fh.Handle.WriteAt(req.Data, req.Offset)
if err != nil {
return translateError(err)
}


@@ -32,10 +32,12 @@ func mountOptions(device string) (options []fuse.MountOption) {
fuse.Subtype("rclone"),
fuse.FSName(device),
fuse.VolumeName(mountlib.VolumeName),
fuse.AsyncRead(),
// Options from benchmarking in the fuse module
//fuse.MaxReadahead(64 * 1024 * 1024),
//fuse.AsyncRead(), - FIXME this causes
// ReadFileHandle.Read error: read /home/files/ISOs/xubuntu-15.10-desktop-amd64.iso: bad file descriptor
// which is probably related to errors people are having
//fuse.WritebackCache(),
}
if mountlib.NoAppleDouble {
@@ -137,9 +139,6 @@ func Mount(f fs.Fs, mountpoint string) error {
sigHup := make(chan os.Signal, 1)
signal.Notify(sigHup, syscall.SIGHUP)
atexit.IgnoreSignals()
atexit.Register(func() {
_ = unmount()
})
if err := sdnotify.Ready(); err != nil && err != sdnotify.ErrSdNotifyNoSocket {
return errors.Wrap(err, "failed to notify systemd")


@@ -50,8 +50,6 @@ func TestRenameOpenHandle(t *testing.T) {
err = file.Close()
require.NoError(t, err)
run.waitForWriters()
// verify file was renamed properly
run.checkDir(t, "renamebla 9")


@@ -34,11 +34,6 @@ func osCreate(name string) (*os.File, error) {
return os.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
}
// os.Create with append
func osAppend(name string) (*os.File, error) {
return os.OpenFile(name, os.O_WRONLY|os.O_APPEND, 0666)
}
// TestFileModTimeWithOpenWriters tests mod time on open files
func TestFileModTimeWithOpenWriters(t *testing.T) {
run.skipIfNoFUSE(t)


@@ -78,7 +78,6 @@ func RunTests(t *testing.T, fn MountFn) {
t.Run("TestWriteFileDoubleClose", TestWriteFileDoubleClose)
t.Run("TestWriteFileFsync", TestWriteFileFsync)
t.Run("TestWriteFileDup", TestWriteFileDup)
t.Run("TestWriteFileAppend", TestWriteFileAppend)
})
log.Printf("Finished test run with cache mode %v (ok=%v)", cacheMode, ok)
if !ok {


@@ -2,7 +2,6 @@ package mounttest
import (
"os"
"runtime"
"testing"
"github.com/stretchr/testify/assert"
@@ -131,48 +130,3 @@ func TestWriteFileDup(t *testing.T) {
run.waitForWriters()
run.rm(t, "to be synced")
}
// TestWriteFileAppend tests that O_APPEND works on cache backends >= writes
func TestWriteFileAppend(t *testing.T) {
run.skipIfNoFUSE(t)
if run.vfs.Opt.CacheMode < vfs.CacheModeWrites {
t.Skip("not supported on vfs-cache-mode < writes")
return
}
// TODO: Windows needs the v1.5 release of WinFsp to handle O_APPEND properly.
// Until it gets released, skip this test on Windows.
if runtime.GOOS == "windows" {
t.Skip("currently unsupported on Windows")
}
filepath := run.path("to be synced")
fh, err := osCreate(filepath)
require.NoError(t, err)
testData := []byte("0123456789")
appendData := []byte("10")
_, err = fh.Write(testData)
require.NoError(t, err)
err = fh.Close()
require.NoError(t, err)
fh, err = osAppend(filepath)
require.NoError(t, err)
_, err = fh.Write(appendData)
require.NoError(t, err)
err = fh.Close()
require.NoError(t, err)
info, err := os.Stat(filepath)
require.NoError(t, err)
require.EqualValues(t, len(testData)+len(appendData), info.Size())
run.waitForWriters()
run.rm(t, "to be synced")
}


@@ -214,7 +214,7 @@ func withHeader(name string, value string, next http.Handler) http.Handler {
// serveError returns an http.StatusInternalServerError and logs the error
func serveError(what interface{}, w http.ResponseWriter, text string, err error) {
err = fs.CountError(err)
fs.CountError(err)
fs.Errorf(what, "%s: %v", text, err)
http.Error(w, text+".", http.StatusInternalServerError)
}


@@ -15,6 +15,7 @@ import (
"strconv"
"sync"
ftp "github.com/goftp/server"
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/cmd/serve/proxy"
@@ -28,7 +29,6 @@ import (
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
ftp "goftp.io/server"
)
// Options contains options for the http Server
@@ -155,7 +155,7 @@ func newServer(f fs.Fs, opt *Options) (*server, error) {
PassivePorts: opt.PassivePorts,
Auth: s, // implemented by CheckPasswd method
Logger: &Logger{},
//TODO implement a maximum of https://godoc.org/goftp.io/server#ServerOpts
//TODO implement a maximum of https://godoc.org/github.com/goftp/server#ServerOpts
}
s.srv = ftp.NewServer(ftpopt)
return s, nil
@@ -210,8 +210,8 @@ func (l *Logger) PrintResponse(sessionID string, code int, message string) {
// CheckPassword is called with the connection.
func findID(callerName []byte) (string, error) {
// Dump the stack in this format
// github.com/rclone/rclone/vendor/goftp.io/server.(*Conn).Serve(0xc0000b2680)
// /home/ncw/go/src/github.com/rclone/rclone/vendor/goftp.io/server/conn.go:116 +0x11d
// github.com/rclone/rclone/vendor/github.com/goftp/server.(*Conn).Serve(0xc0000b2680)
// /home/ncw/go/src/github.com/rclone/rclone/vendor/github.com/goftp/server/conn.go:116 +0x11d
buf := make([]byte, 4096)
n := runtime.Stack(buf, false)
buf = buf[:n]
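
The hunk above recovers a per-session ID by scanning the text that `runtime.Stack` produces for the current goroutine. As a minimal standalone sketch of that primitive (the `main` wrapper is hypothetical and not part of the diff):

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Ask the runtime for the current goroutine's stack trace only
	// (the second argument false excludes all other goroutines).
	buf := make([]byte, 4096)
	n := runtime.Stack(buf, false)
	buf = buf[:n]

	// The output starts with a line like "goroutine 1 [running]:",
	// followed by a function/location line pair per frame — the text
	// that findID parses for the caller's pointer.
	fmt.Printf("%s", buf)
}
```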


@@ -11,6 +11,7 @@ import (
"fmt"
"testing"
ftp "github.com/goftp/server"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/cmd/serve/servetest"
"github.com/rclone/rclone/fs"
@@ -18,7 +19,6 @@ import (
"github.com/rclone/rclone/fs/config/obscure"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
ftp "goftp.io/server"
)
const (


@@ -68,7 +68,7 @@ func (d *Directory) AddEntry(remote string, isDir bool) {
// Error logs the error and if a ResponseWriter is given it writes a http.StatusInternalServerError
func Error(what interface{}, w http.ResponseWriter, text string, err error) {
err = fs.CountError(err)
fs.CountError(err)
fs.Errorf(what, "%s: %v", text, err)
if w != nil {
http.Error(w, text+".", http.StatusInternalServerError)


@@ -208,10 +208,7 @@ func (p *Proxy) call(user, pass string, passwordBytes []byte) (value interface{}
if err != nil {
return nil, false, err
}
// The bcrypt cost is a compromise between security and speed. The password is looked up on every
// transaction for WebDAV so we store it lightly hashed. An attacker would find it easier to go after
// the unencrypted password in memory most likely.
pwHash, err := bcrypt.GenerateFromPassword(passwordBytes, bcrypt.MinCost)
pwHash, err := bcrypt.GenerateFromPassword(passwordBytes, bcrypt.DefaultCost)
if err != nil {
return nil, false, err
}
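
The comment in the hunk above records the design choice: with `bcrypt.MinCost` the proxy trades hash strength for speed, because the password is re-checked on every WebDAV transaction. A standalone sketch of that trade-off (the timing loop and `main` wrapper are illustrative, not from the diff; assumes the standard `golang.org/x/crypto/bcrypt` package):

```go
package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/bcrypt"
)

func main() {
	password := []byte("example password")
	// MinCost is 4 and DefaultCost is 10; each extra cost point
	// roughly doubles the hashing work.
	for _, cost := range []int{bcrypt.MinCost, bcrypt.DefaultCost} {
		start := time.Now()
		hash, err := bcrypt.GenerateFromPassword(password, cost)
		if err != nil {
			panic(err)
		}
		// Verify the round trip that is performed on every lookup.
		ok := bcrypt.CompareHashAndPassword(hash, password) == nil
		fmt.Printf("cost %2d: hashed in %v, verify ok=%v\n", cost, time.Since(start), ok)
	}
}
```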


@@ -271,7 +271,7 @@ func (s *server) postObject(w http.ResponseWriter, r *http.Request, remote strin
_, err := operations.RcatSize(r.Context(), s.f, remote, r.Body, r.ContentLength, time.Now())
if err != nil {
err = accounting.Stats(r.Context()).Error(err)
accounting.Stats(r.Context()).Error(err)
fs.Errorf(remote, "Post request rcat error: %v", err)
http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)


@@ -192,7 +192,7 @@ Contributors
* Sheldon Rupp <me@shel.io>
* albertony <12441419+albertony@users.noreply.github.com>
* cron410 <cron410@gmail.com>
* Anagh Kumar Baranwal <anaghk.dos@gmail.com> <6824881+darthShadow@users.noreply.github.com>
* Anagh Kumar Baranwal <anaghk.dos@gmail.com>
* Felix Brucker <felix@felixbrucker.com>
* Santiago Rodríguez <scollazo@users.noreply.github.com>
* Craig Miskell <craig.miskell@fluxfederation.com>
@@ -263,7 +263,7 @@ Contributors
* garry415 <garry.415@gmail.com>
* forgems <forgems@gmail.com>
* Florian Apolloner <florian@apolloner.eu>
* Aleksandar Janković <office@ajankovic.com> <ajankovic@users.noreply.github.com>
* Aleksandar Jankovic <office@ajankovic.com>
* Maran <maran@protonmail.com>
* nguyenhuuluan434 <nguyenhuuluan434@gmail.com>
* Laura Hausmann <zotan@zotan.pw> <laura@hausmann.dev>
@@ -306,13 +306,3 @@ Contributors
* Carlos Ferreyra <crypticmind@gmail.com>
* Saksham Khanna <sakshamkhanna@outlook.com>
* dausruddin <5763466+dausruddin@users.noreply.github.com>
* zero-24 <zero-24@users.noreply.github.com>
* Xiaoxing Ye <ye@xiaoxing.us>
* Barry Muldrey <barry@muldrey.net>
* Sebastian Brandt <sebastian.brandt@friday.de>
* Marco Molteni <marco.molteni@mailbox.org>
* Ankur Gupta <ankur0493@gmail.com>
* Maciej Zimnoch <maciej@scylladb.com>
* anuar45 <serdaliyev.anuar@gmail.com>
* Fernando <ferferga@users.noreply.github.com>
* David Cole <david.cole@sohonet.com>


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone"
slug: rclone
url: /commands/rclone/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone about"
slug: rclone_about
url: /commands/rclone_about/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
@@ -22,8 +22,7 @@ rclone authorize [flags]
### Options
```
--auth-no-open-browser Do not automatically open auth link in default browser
-h, --help help for authorize
-h, --help help for authorize
```
See the [global flags page](/flags/) for global options not listed here.


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone cachestats"
slug: rclone_cachestats
url: /commands/rclone_cachestats/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone cat"
slug: rclone_cat
url: /commands/rclone_cat/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone config create"
slug: rclone_config_create
url: /commands/rclone_config_create/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone config delete"
slug: rclone_config_delete
url: /commands/rclone_config_delete/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone config disconnect"
slug: rclone_config_disconnect
url: /commands/rclone_config_disconnect/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone config dump"
slug: rclone_config_dump
url: /commands/rclone_config_dump/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone config edit"
slug: rclone_config_edit
url: /commands/rclone_config_edit/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone config file"
slug: rclone_config_file
url: /commands/rclone_config_file/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone config password"
slug: rclone_config_password
url: /commands/rclone_config_password/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone config providers"
slug: rclone_config_providers
url: /commands/rclone_config_providers/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone config reconnect"
slug: rclone_config_reconnect
url: /commands/rclone_config_reconnect/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone config show"
slug: rclone_config_show
url: /commands/rclone_config_show/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone config update"
slug: rclone_config_update
url: /commands/rclone_config_update/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone config userinfo"
slug: rclone_config_userinfo
url: /commands/rclone_config_userinfo/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone copyto"
slug: rclone_copyto
url: /commands/rclone_copyto/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone copyurl"
slug: rclone_copyurl
url: /commands/rclone_copyurl/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone cryptcheck"
slug: rclone_cryptcheck
url: /commands/rclone_cryptcheck/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone cryptdecode"
slug: rclone_cryptdecode
url: /commands/rclone_cryptdecode/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone dbhashsum"
slug: rclone_dbhashsum
url: /commands/rclone_dbhashsum/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone dedupe"
slug: rclone_dedupe
url: /commands/rclone_dedupe/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone delete"
slug: rclone_delete
url: /commands/rclone_delete/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone deletefile"
slug: rclone_deletefile
url: /commands/rclone_deletefile/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone genautocomplete"
slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone genautocomplete bash"
slug: rclone_genautocomplete_bash
url: /commands/rclone_genautocomplete_bash/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone genautocomplete zsh"
slug: rclone_genautocomplete_zsh
url: /commands/rclone_genautocomplete_zsh/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone gendocs"
slug: rclone_gendocs
url: /commands/rclone_gendocs/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone hashsum"
slug: rclone_hashsum
url: /commands/rclone_hashsum/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone link"
slug: rclone_link
url: /commands/rclone_link/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone listremotes"
slug: rclone_listremotes
url: /commands/rclone_listremotes/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone ls"
slug: rclone_ls
url: /commands/rclone_ls/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone lsd"
slug: rclone_lsd
url: /commands/rclone_lsd/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone lsf"
slug: rclone_lsf
url: /commands/rclone_lsf/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone lsjson"
slug: rclone_lsjson
url: /commands/rclone_lsjson/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone lsl"
slug: rclone_lsl
url: /commands/rclone_lsl/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone md5sum"
slug: rclone_md5sum
url: /commands/rclone_md5sum/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone mkdir"
slug: rclone_mkdir
url: /commands/rclone_mkdir/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone mount"
slug: rclone_mount
url: /commands/rclone_mount/
@@ -65,28 +65,6 @@ infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Archit
which creates drives accessible for everyone on the system or
alternatively using [the nssm service manager](https://nssm.cc/usage).
#### Mount as a network drive
By default, rclone will mount the remote as a normal drive. However, you can also mount it as a **Network Drive**
(or **Network Share**, as it is sometimes called).
Unlike other systems, Windows provides a different filesystem type for network drives.
Windows and other programs treat the network drives and fixed/removable drives differently:
For network drives, many I/O operations are optimized, since the high latency and low reliability
of a network (compared to a local drive) are expected.
Although many people prefer network shares to be mounted as normal system drives, this might cause
some issues, such as programs not working as expected or freezes and errors while operating with the
mounted remote in Windows Explorer. If you experience any of those, consider mounting rclone remotes as network shares,
as Windows expects normal drives to be fast and reliable, while cloud storage is far from that.
See also the [Limitations](#limitations) section below for more info.
Add `--fuse-flag --VolumePrefix=\server\share` to your `mount` command, **replacing `share` with any other
name of your choice if you are mounting more than one remote**. Otherwise, the mountpoints will conflict and
your mounted filesystems will overlap.
[Read more about drive mapping](https://en.wikipedia.org/wiki/Drive_mapping)
### Limitations
Without the use of "--vfs-cache-mode" this can only write files
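
As an illustration of the network-drive instructions in the hunk above, a hypothetical invocation (the remote name, drive letter and share name are placeholders) might look like:

```
rclone mount remote:path/to/files X: --fuse-flag --VolumePrefix=\server\share
```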


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone move"
slug: rclone_move
url: /commands/rclone_move/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone moveto"
slug: rclone_moveto
url: /commands/rclone_moveto/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone ncdu"
slug: rclone_ncdu
url: /commands/rclone_ncdu/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone obscure"
slug: rclone_obscure
url: /commands/rclone_obscure/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone purge"
slug: rclone_purge
url: /commands/rclone_purge/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone rc"
slug: rclone_rc
url: /commands/rclone_rc/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone rcat"
slug: rclone_rcat
url: /commands/rclone_rcat/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone rcd"
slug: rclone_rcd
url: /commands/rclone_rcd/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone rmdir"
slug: rclone_rmdir
url: /commands/rclone_rmdir/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone rmdirs"
slug: rclone_rmdirs
url: /commands/rclone_rmdirs/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone serve"
slug: rclone_serve
url: /commands/rclone_serve/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone serve dlna"
slug: rclone_serve_dlna
url: /commands/rclone_serve_dlna/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone serve ftp"
slug: rclone_serve_ftp
url: /commands/rclone_serve_ftp/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone serve http"
slug: rclone_serve_http
url: /commands/rclone_serve_http/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone serve restic"
slug: rclone_serve_restic
url: /commands/rclone_serve_restic/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone serve sftp"
slug: rclone_serve_sftp
url: /commands/rclone_serve_sftp/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone serve webdav"
slug: rclone_serve_webdav
url: /commands/rclone_serve_webdav/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone settier"
slug: rclone_settier
url: /commands/rclone_settier/


@@ -1,5 +1,5 @@
---
date: 2019-10-26T11:04:03+01:00
date: 2019-11-19T16:02:36Z
title: "rclone sha1sum"
slug: rclone_sha1sum
url: /commands/rclone_sha1sum/

Some files were not shown because too many files have changed in this diff.