mirror of https://github.com/rclone/rclone.git synced 2026-01-21 03:43:26 +00:00

Compare commits


32 Commits

Author SHA1 Message Date
Paul Collins
8d80df2c85 swift: add workarounds for bad listings in Ceph RGW
Ceph's Swift API emulation does not fully conform to the API spec.
As a result, it sometimes returns fewer items in a container than
the requested limit, which according to the spec should mean
that there are no more objects left in the container.  (Note that
python-swiftclient always fetches unless the current page is empty.)

This commit adds a pair of new Swift backend settings to handle this.

Set `fetch_until_empty_page` to true to always fetch another
page of the container listing unless there are no items left.

Alternatively, set `partial_page_fetch_threshold` to an integer
percentage.  In this case rclone will fetch a new page only when
the current page is within this percentage of the limit.
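
For example, with a listing limit of 1,000, a threshold of 90 means
rclone fetches another page whenever the current page returned 900 or
more items. A minimal sketch of either workaround in rclone.conf,
assuming a remote named `ceph` pointing at a Ceph RGW Swift endpoint
(the remote name and values are illustrative):

    [ceph]
    type = swift
    fetch_until_empty_page = true

or, alternatively:

    [ceph]
    type = swift
    partial_page_fetch_threshold = 90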

Swift API reference: https://docs.openstack.org/swift/latest/api/pagination.html

PR against ncw/swift with research and discussion: https://github.com/ncw/swift/pull/167

Fixes #7924
2024-06-28 09:54:44 +01:00
Nick Craig-Wood
754e53dbcc docs: remove warp as silver sponsor 2024-06-24 10:33:18 +01:00
Nick Craig-Wood
5511fa441a onedrive: fix nil pointer error when uploading small files
Before this fix when uploading a single part file, if the
o.fetchAndUpdateMetadata() call failed rclone would call
o.setMetaData() with a nil info which caused a crash.

This fixes the problem by returning the error from
o.fetchAndUpdateMetadata() explicitly.

See: https://forum.rclone.org/t/serve-webdav-is-crashing-fatal-error-sync-unlock-of-unlocked-mutex/46300
2024-06-24 09:30:59 +01:00
Nick Craig-Wood
4ed4483bbc vfs: fix fatal error: sync: unlock of unlocked mutex in panics
Before this change a panic could be overwritten with the message

    fatal error: sync: unlock of unlocked mutex

This was because we temporarily unlocked the mutex, but failed to lock
it again if there was a panic.

This code is never the cause of an error, but it masks the
underlying error by overwriting the panic cause.

See: https://forum.rclone.org/t/serve-webdav-is-crashing-fatal-error-sync-unlock-of-unlocked-mutex/46300
2024-06-24 09:30:59 +01:00
Nick Craig-Wood
0e85ba5080 Add Filipe Herculano to contributors 2024-06-24 09:30:59 +01:00
Nick Craig-Wood
e5095a7d7b Add Thearas to contributors 2024-06-24 09:30:59 +01:00
wiserain
300851e8bf pikpak: implement custom hash to replace wrong sha1
This improves PikPak's file integrity verification by implementing a custom 
hash function named gcid and replacing the previously used SHA-1 hash.
2024-06-20 00:57:21 +09:00
wiserain
cbccad9491 pikpak: improves data consistency by ensuring async tasks complete
Similar to uploads implemented in commit ce5024bf33, 
this change ensures most asynchronous file operations (copy, move, delete, 
purge, and cleanup) complete before proceeding with subsequent actions. 
This reduces the risk of data inconsistencies and improves overall reliability.
2024-06-20 00:07:05 +09:00
dependabot[bot]
9f1a7cfa67 build(deps): bump docker/build-push-action from 5 to 6
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-18 14:48:30 +01:00
Filipe Herculano
d84a4c9ac1 s3: fix incorrect region for Magalu provider 2024-06-15 17:40:28 +01:00
Thearas
1c9da8c96a docs: recommend no_check_bucket = true for Alibaba - fixes #7889
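A minimal sketch of the recommended setting in rclone.conf, assuming
a remote named `alibaba` (the remote name is illustrative):

    [alibaba]
    type = s3
    provider = Alibaba
    no_check_bucket = true
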
Change-Id: Ib6246e416ce67dddc3cb69350de69129a8826ce3
2024-06-15 17:39:05 +01:00
Nick Craig-Wood
af9c5fef93 docs: tidy .gitignore for docs 2024-06-15 13:08:20 +01:00
Nick Craig-Wood
7060777d1d docs: fix hugo warning: found no layout file for "html" for kind "term"
Hugo has been emitting this warning for a while

WARN found no layout file for "html" for kind "term": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.

This turned out to be the addition of the `groups:` keyword to the
command frontmatter. Hugo does something with this keyword, though
this isn't documented in the frontmatter documentation.

The fix was removing the `groups:` keyword from the frontmatter since
it was never used by Hugo.
2024-06-15 12:59:49 +01:00
Nick Craig-Wood
0197e7f4e5 docs: remove slug and url from command pages since they are no longer needed 2024-06-15 12:37:43 +01:00
Nick Craig-Wood
c1c9e209f3 docs: fix hugo warning: found no layout file for "html" for kind "section"
Hugo has been emitting this warning for a while

WARN found no layout file for "html" for kind "section": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.

It turned out to be
- the arrangement of the oracle object storage docs and sub page
- the fact that a section template was missing
2024-06-15 12:29:37 +01:00
Nick Craig-Wood
fd182af866 serve dlna: fix panic: invalid argument to Int63n
This updates the upstream github.com/anacrolix/dms to master to fix
the problem.

Fixes #7911
2024-06-15 10:58:57 +01:00
Nick Craig-Wood
4ea629446f Start v1.68.0-DEV development 2024-06-14 17:54:27 +01:00
Nick Craig-Wood
93e8a976ef Version v1.67.0 2024-06-14 16:04:51 +01:00
nielash
8470bdf810 s3: fix 405 error on HEAD for delete marker with versionId
When getting an object by specifying a versionId in the request, if the
specified version is a delete marker, it returns 405 (Method Not Allowed),
instead of 404 (Not Found) which would be returned without a versionId. See
https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html

Before this change, we were only looking for 404 (and not 405) to determine
whether the object exists. This meant that in some circumstances (ex. when
Versioning is enabled for the bucket and we have a non-null X-Amz-Version-Id), we
deemed the object to exist when we should not have.

After this change, 405 (Method Not Allowed) is treated the same as 404 (Not
Found) for the purposes of headObject.

See https://forum.rclone.org/t/bisync-rename-failed-method-not-allowed/45723/13
2024-06-13 18:09:29 +01:00
Nick Craig-Wood
1aa3a37a28 gitannex: make tests run more quietly - use go test -v for more info
These tests were generating 1000s of lines of logs and making it
difficult to figure out what was failing in other tests.
2024-06-13 17:33:56 +01:00
albertony
ae887ad042 jottacloud: set metadata on server side copy and move - fixes #7900 2024-06-13 16:19:36 +01:00
Nick Craig-Wood
d279fea44a qingstor: disable integration tests as test account suspended
QingStor support have disabled the integration test account with this message

尊敬的用户您好:依据监管部门相关内容安全合规要求,QingStor即日起限制对
个人客户提供对象存储服务,您的对象存储服务将被系统置于禁用状态,如需继
续使用QingsStor对象存储服务,您可以通过工单或者拨打400热线申请开通,未
解封期间您的数据将不受影响,感谢您的谅解和支持。

Which Google Translate renders as

> Dear user: In accordance with the relevant content security
> compliance requirements of the regulatory authorities, QingStor will
> limit the provision of object storage services to individual
> customers from now on. Your object storage service will be disabled
> by the system. If you need to continue to use the QingsStor object
> storage service, you can apply for activation through a work order
> or by calling the 400 hotline. Your data will not be affected during
> the period of unblocking. Thank you for your understanding and
> support.
2024-06-13 12:50:35 +01:00
Nick Craig-Wood
282e34f2d5 operations: add operations.ReadFile to read the contents of a file into memory 2024-06-13 12:48:46 +01:00
Nick Craig-Wood
021f25a748 fs: make ConfigFs take an fs.Info which makes it more useful 2024-06-13 12:48:46 +01:00
Nick Craig-Wood
18e9d039ad touch: fix using -R on certain backends
On backends which return a valid object for "" with NewObject,
touch was going wrong as it thought it was passed an object.

This should not happen normally but s3 can be configured with
--s3-no-head where it is happy to believe that all objects exist.
2024-06-12 17:57:28 +01:00
Nick Craig-Wood
cbcfb90d9a serve s3: fix XML of error message
This updates the s3 library to fix the XML of the error response

Fixes #7749
2024-06-12 17:53:57 +01:00
Nick Craig-Wood
caba22a585 fs/logger: make the tests deterministic
Previously this used `rclone test makefiles --seed 0` which sets a
random seed, and every now and again we get this error

    Failed to open file "$WORK\\src\\moru": open $WORK\src\moru: is a directory

Because a file with the same name was created as a file in the src and
a dir in the dst.

This fixes it by using deterministic seeds each time.
2024-06-12 16:39:30 +01:00
Nick Craig-Wood
3fef8016b5 zoho: sleep for 60 seconds if rate limit error received 2024-06-12 16:34:30 +01:00
Nick Craig-Wood
edf6537c61 zoho: remove simple file names complication which is no longer needed 2024-06-12 16:34:27 +01:00
Nick Craig-Wood
00f0e9df9d zoho: retry reading info if size wasn't returned 2024-06-12 16:34:24 +01:00
Nick Craig-Wood
e6ab644350 zoho: fix throttling problem when uploading files
Before this change rclone checked to see if a file existed before
uploading it. It did this to avoid making duplicate files. This
involved listing the destination directory to see if the file existed,
which was rate limited by Zoho.

However, Zoho can't have duplicate files anyway, so this fix just
removes that check and the PutUnchecked method, which is no longer needed.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697
See: https://forum.rclone.org/t/followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/44794
2024-06-12 16:34:18 +01:00
Nick Craig-Wood
61c18e3b60 zoho: use cursor listing for improved performance
Cursor listing enables us to list up to 1,000 items per call
(previously it was 10) and uses one less transaction per call.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697/4
2024-06-12 16:34:11 +01:00
183 changed files with 47679 additions and 37824 deletions


@@ -56,7 +56,7 @@ jobs:
run: |
df -h .
- name: Build and publish image
uses: docker/build-push-action@v5
uses: docker/build-push-action@v6
with:
file: Dockerfile
context: .

.gitignore vendored

@@ -3,7 +3,9 @@ _junk/
rclone
rclone.exe
build
docs/public
/docs/public/
/docs/.hugo_build.lock
/docs/static/img/logos/
rclone.iml
.idea
.history
@@ -16,6 +18,5 @@ fuzz-build.zip
Thumbs.db
__pycache__
.DS_Store
/docs/static/img/logos/
resource_windows_*.syso
.devcontainer

MANUAL.html generated

File diff suppressed because it is too large

MANUAL.md generated

File diff suppressed because it is too large

MANUAL.txt generated

File diff suppressed because it is too large


@@ -239,7 +239,7 @@ fetch_binaries:
rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server -v -w --disableFastRender
cd docs && hugo server --logLevel info -w --disableFastRender
tag: retag doc
bin/make_changelog.py $(LAST_TAG) $(VERSION) > docs/content/changelog.md.new


@@ -1 +1 @@
v1.67.0
v1.68.0


@@ -3776,7 +3776,7 @@ file named "foo ' \.txt":
The result is a JSON array of matches, for example:
[
[
{
"createdTime": "2017-06-29T19:58:28.537Z",
"id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD",
@@ -3792,7 +3792,7 @@ The result is a JSON array of matches, for example:
"size": "311",
"webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
}
]`,
]`,
}}
// Command the backend to run a named command


@@ -1487,16 +1487,38 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fs.ErrorCantMove
}
err := f.mkParentDir(ctx, remote)
meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
if err != nil {
return nil, err
}
if err := f.mkParentDir(ctx, remote); err != nil {
return nil, err
}
info, err := f.copyOrMove(ctx, "cp", srcObj.filePath(), remote)
// if destination was a trashed file then after a successful copy the copied file is still in trash (bug in api?)
if err == nil && bool(info.Deleted) && !f.opt.TrashedOnly && info.State == "COMPLETED" {
fs.Debugf(src, "Server-side copied to trashed destination, restoring")
info, err = f.createOrUpdate(ctx, remote, srcObj.createTime, srcObj.modTime, srcObj.size, srcObj.md5)
if err == nil {
var createTime time.Time
var createTimeMeta bool
var modTime time.Time
var modTimeMeta bool
if meta != nil {
createTime, createTimeMeta = srcObj.parseFsMetadataTime(meta, "btime")
if !createTimeMeta {
createTime = srcObj.createTime
}
modTime, modTimeMeta = srcObj.parseFsMetadataTime(meta, "mtime")
if !modTimeMeta {
modTime = srcObj.modTime
}
}
if bool(info.Deleted) && !f.opt.TrashedOnly && info.State == "COMPLETED" {
// Workaround necessary when destination was a trashed file, to avoid the copied file also being in trash (bug in api?)
fs.Debugf(src, "Server-side copied to trashed destination, restoring")
info, err = f.createOrUpdate(ctx, remote, createTime, modTime, info.Size, info.MD5)
} else if createTimeMeta || modTimeMeta {
info, err = f.createOrUpdate(ctx, remote, createTime, modTime, info.Size, info.MD5)
}
}
if err != nil {
@@ -1523,12 +1545,30 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fs.ErrorCantMove
}
err := f.mkParentDir(ctx, remote)
meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
if err != nil {
return nil, err
}
if err := f.mkParentDir(ctx, remote); err != nil {
return nil, err
}
info, err := f.copyOrMove(ctx, "mv", srcObj.filePath(), remote)
if err != nil && meta != nil {
createTime, createTimeMeta := srcObj.parseFsMetadataTime(meta, "btime")
if !createTimeMeta {
createTime = srcObj.createTime
}
modTime, modTimeMeta := srcObj.parseFsMetadataTime(meta, "mtime")
if !modTimeMeta {
modTime = srcObj.modTime
}
if createTimeMeta || modTimeMeta {
info, err = f.createOrUpdate(ctx, remote, createTime, modTime, info.Size, info.MD5)
}
}
if err != nil {
return nil, fmt.Errorf("couldn't move file: %w", err)
}
@@ -1786,6 +1826,20 @@ func (o *Object) readMetaData(ctx context.Context, force bool) (err error) {
return o.setMetaData(info)
}
// parseFsMetadataTime parses a time string from fs.Metadata with key
func (o *Object) parseFsMetadataTime(m fs.Metadata, key string) (t time.Time, ok bool) {
value, ok := m[key]
if ok {
var err error
t, err = time.Parse(time.RFC3339Nano, value) // metadata stores RFC3339Nano timestamps
if err != nil {
fs.Debugf(o, "failed to parse metadata %s: %q: %v", key, value, err)
ok = false
}
}
return t, ok
}
// ModTime returns the modification time of the object
//
// It attempts to read the objects mtime and if that isn't present the
@@ -1957,21 +2011,11 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var createdTime string
var modTime string
if meta != nil {
if v, ok := meta["btime"]; ok {
t, err := time.Parse(time.RFC3339Nano, v) // metadata stores RFC3339Nano timestamps
if err != nil {
fs.Debugf(o, "failed to parse metadata btime: %q: %v", v, err)
} else {
createdTime = api.Rfc3339Time(t).String() // jottacloud api wants RFC3339 timestamps
}
if t, ok := o.parseFsMetadataTime(meta, "btime"); ok {
createdTime = api.Rfc3339Time(t).String() // jottacloud api wants RFC3339 timestamps
}
if v, ok := meta["mtime"]; ok {
t, err := time.Parse(time.RFC3339Nano, v)
if err != nil {
fs.Debugf(o, "failed to parse metadata mtime: %q: %v", v, err)
} else {
modTime = api.Rfc3339Time(t).String()
}
if t, ok := o.parseFsMetadataTime(meta, "mtime"); ok {
modTime = api.Rfc3339Time(t).String()
}
}
if modTime == "" { // prefer mtime in meta as Modified time, fallback to source ModTime


@@ -2538,6 +2538,9 @@ func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, src fs.Obje
}
// Set the mod time now and read metadata
info, err = o.fs.fetchAndUpdateMetadata(ctx, src, options, o)
if err != nil {
return nil, fmt.Errorf("failed to fetch and update metadata: %w", err)
}
return info, o.setMetaData(info)
}


@@ -176,7 +176,7 @@ type File struct {
FileCategory string `json:"file_category,omitempty"` // "AUDIO", "VIDEO"
FileExtension string `json:"file_extension,omitempty"`
FolderType string `json:"folder_type,omitempty"`
Hash string `json:"hash,omitempty"` // sha1 but NOT a valid file hash. looks like a torrent hash
Hash string `json:"hash,omitempty"` // custom hash with a form of sha1sum
IconLink string `json:"icon_link,omitempty"`
ID string `json:"id,omitempty"`
Kind string `json:"kind,omitempty"` // "drive#file"
@@ -486,7 +486,7 @@ type RequestNewFile struct {
ParentID string `json:"parent_id"`
FolderType string `json:"folder_type"`
// only when uploading a new file
Hash string `json:"hash,omitempty"` // sha1sum
Hash string `json:"hash,omitempty"` // gcid
Resumable map[string]string `json:"resumable,omitempty"` // {"provider": "PROVIDER_ALIYUN"}
Size int64 `json:"size,omitempty"`
UploadType string `json:"upload_type,omitempty"` // "UPLOAD_TYPE_FORM" or "UPLOAD_TYPE_RESUMABLE"


@@ -12,6 +12,7 @@ import (
"net/url"
"os"
"strconv"
"time"
"github.com/rclone/rclone/backend/pikpak/api"
"github.com/rclone/rclone/lib/rest"
@@ -19,7 +20,7 @@ import (
// Globals
const (
cachePrefix = "rclone-pikpak-sha1sum-"
cachePrefix = "rclone-pikpak-gcid-"
)
// requestDecompress requests decompress of compressed files
@@ -82,19 +83,21 @@ func (f *Fs) getVIPInfo(ctx context.Context) (info *api.VIP, err error) {
// action can be one of batch{Copy,Delete,Trash,Untrash}
func (f *Fs) requestBatchAction(ctx context.Context, action string, req *api.RequestBatch) (err error) {
opts := rest.Opts{
Method: "POST",
Path: "/drive/v1/files:" + action,
NoResponse: true, // Only returns `{"task_id":""}
Method: "POST",
Path: "/drive/v1/files:" + action,
}
info := struct {
TaskID string `json:"task_id"`
}{}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.rst.CallJSON(ctx, &opts, &req, nil)
resp, err = f.rst.CallJSON(ctx, &opts, &req, &info)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return fmt.Errorf("batch action %q failed: %w", action, err)
}
return nil
return f.waitTask(ctx, info.TaskID)
}
// requestNewTask requests a new api.NewTask and returns api.Task
@@ -148,6 +151,9 @@ func (f *Fs) getFile(ctx context.Context, ID string) (info *api.File, err error)
}
return f.shouldRetry(ctx, resp, err)
})
if err == nil {
info.Name = f.opt.Enc.ToStandardName(info.Name)
}
return
}
@@ -179,8 +185,8 @@ func (f *Fs) getTask(ctx context.Context, ID string, checkPhase bool) (info *api
resp, err = f.rst.CallJSON(ctx, &opts, nil, &info)
if checkPhase {
if err == nil && info.Phase != api.PhaseTypeComplete {
// could be pending right after file is created/uploaded.
return true, errors.New(info.Phase)
// could be pending right after the task is created
return true, fmt.Errorf("%s (%s) is still in %s", info.Name, info.Type, info.Phase)
}
}
return f.shouldRetry(ctx, resp, err)
@@ -188,6 +194,18 @@ func (f *Fs) getTask(ctx context.Context, ID string, checkPhase bool) (info *api
return
}
// waitTask waits for async tasks to be completed
func (f *Fs) waitTask(ctx context.Context, ID string) (err error) {
time.Sleep(taskWaitTime)
if info, err := f.getTask(ctx, ID, true); err != nil {
if info == nil {
return fmt.Errorf("can't verify the task is completed: %q", ID)
}
return fmt.Errorf("can't verify the task is completed: %#v", info)
}
return
}
// deleteTask remove a task having the specified ID
func (f *Fs) deleteTask(ctx context.Context, ID string, deleteFiles bool) (err error) {
params := url.Values{}
@@ -235,16 +253,11 @@ func (f *Fs) requestShare(ctx context.Context, req *api.RequestShare) (info *api
return
}
// Read the sha1 of in returning a reader which will read the same contents
// Read the gcid of in returning a reader which will read the same contents
//
// The cleanup function should be called when out is finished with
// regardless of whether this function returned an error or not.
func readSHA1(in io.Reader, size, threshold int64) (sha1sum string, out io.Reader, cleanup func(), err error) {
// we need an SHA1
hash := sha1.New()
// use the teeReader to write to the local file AND calculate the SHA1 while doing so
teeReader := io.TeeReader(in, hash)
func readGcid(in io.Reader, size, threshold int64) (gcid string, out io.Reader, cleanup func(), err error) {
// nothing to clean up by default
cleanup = func() {}
@@ -267,8 +280,11 @@ func readSHA1(in io.Reader, size, threshold int64) (sha1sum string, out io.Reade
_ = os.Remove(tempFile.Name()) // delete the cache file after we are done - may be deleted already
}
// copy the ENTIRE file to disc and calculate the SHA1 in the process
if _, err = io.Copy(tempFile, teeReader); err != nil {
// use the teeReader to write to the local file AND calculate the gcid while doing so
teeReader := io.TeeReader(in, tempFile)
// copy the ENTIRE file to disk and calculate the gcid in the process
if gcid, err = calcGcid(teeReader, size); err != nil {
return
}
// jump to the start of the local file so we can pass it along
@@ -279,15 +295,38 @@ func readSHA1(in io.Reader, size, threshold int64) (sha1sum string, out io.Reade
// replace the already read source with a reader of our cached file
out = tempFile
} else {
// that's a small file, just read it into memory
var inData []byte
inData, err = io.ReadAll(teeReader)
if err != nil {
buf := &bytes.Buffer{}
teeReader := io.TeeReader(in, buf)
if gcid, err = calcGcid(teeReader, size); err != nil {
return
}
// set the reader to our read memory block
out = bytes.NewReader(inData)
out = buf
}
return hex.EncodeToString(hash.Sum(nil)), out, cleanup, nil
return
}
func calcGcid(r io.Reader, size int64) (string, error) {
calcBlockSize := func(j int64) int64 {
var psize int64 = 0x40000
for float64(j)/float64(psize) > 0x200 && psize < 0x200000 {
psize = psize << 1
}
return psize
}
totalHash := sha1.New()
blockHash := sha1.New()
readSize := calcBlockSize(size)
for {
blockHash.Reset()
if n, err := io.CopyN(blockHash, r, readSize); err != nil && n == 0 {
if err != io.EOF {
return "", err
}
break
}
totalHash.Write(blockHash.Sum(nil))
}
return hex.EncodeToString(totalHash.Sum(nil)), nil
}
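
A minimal sketch of how calcGcid could be driven, assuming it is
called from within the same package; exampleGcid is a hypothetical
helper, not part of this change:

    // exampleGcid computes the gcid of a local file, e.g. before upload.
    // calcBlockSize keeps blocks at 256 KiB for files up to 128 MiB and
    // then doubles the block size with file size, capped at 2 MiB; the
    // gcid is the SHA-1 of the concatenated per-block SHA-1 digests.
    func exampleGcid(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        fi, err := f.Stat()
        if err != nil {
            return "", err
        }
        return calcGcid(f, fi.Size())
    }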


@@ -7,8 +7,6 @@ package pikpak
// md5sum is not always available, sometimes given empty.
// sha1sum used for upload differs from the one with official apps.
// Trashed files are not restored to the original location when using `batchUntrash`
// Can't stream without `--vfs-cache-mode=full`
@@ -69,7 +67,7 @@ const (
rcloneEncryptedClientSecret = "aqrmB6M1YJ1DWCBxVxFSjFo7wzWEky494YMmkqgAl1do1WKOe2E"
minSleep = 100 * time.Millisecond
maxSleep = 2 * time.Second
waitTime = 500 * time.Millisecond
taskWaitTime = 500 * time.Millisecond
decayConstant = 2 // bigger for slower decay, exponential
rootURL = "https://api-drive.mypikpak.com"
minChunkSize = fs.SizeSuffix(s3manager.MinUploadPartSize)
@@ -291,6 +289,7 @@ type Object struct {
modTime time.Time // modification time of the object
mimeType string // The object MIME type
parent string // ID of the parent directories
gcid string // custom hash of the object
md5sum string // md5sum of the object
link *api.Link // link to download the object
linkMu *sync.Mutex
@@ -917,19 +916,21 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
// CleanUp empties the trash
func (f *Fs) CleanUp(ctx context.Context) (err error) {
opts := rest.Opts{
Method: "PATCH",
Path: "/drive/v1/files/trash:empty",
NoResponse: true, // Only returns `{"task_id":""}
Method: "PATCH",
Path: "/drive/v1/files/trash:empty",
}
info := struct {
TaskID string `json:"task_id"`
}{}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.rst.Call(ctx, &opts)
resp, err = f.rst.CallJSON(ctx, &opts, nil, &info)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return fmt.Errorf("couldn't empty trash: %w", err)
}
return nil
return f.waitTask(ctx, info.TaskID)
}
// Move the object
@@ -1222,7 +1223,7 @@ func (f *Fs) uploadByResumable(ctx context.Context, in io.Reader, name string, s
return
}
func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, sha1Str string, size int64, options ...fs.OpenOption) (info *api.File, err error) {
func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, gcid string, size int64, options ...fs.OpenOption) (info *api.File, err error) {
// determine upload type
uploadType := api.UploadTypeResumable
// if size >= 0 && size < int64(5*fs.Mebi) {
@@ -1237,7 +1238,7 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, sha1Str stri
ParentID: parentIDForRequest(dirID),
FolderType: "NORMAL",
Size: size,
Hash: strings.ToUpper(sha1Str),
Hash: strings.ToUpper(gcid),
UploadType: uploadType,
}
if uploadType == api.UploadTypeResumable {
@@ -1262,8 +1263,8 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, sha1Str stri
if cancelErr := f.deleteTask(ctx, new.Task.ID, false); cancelErr != nil {
fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
}
fs.Debugf(leaf, "waiting %v for the cancellation to be effective", waitTime)
time.Sleep(waitTime)
fs.Debugf(leaf, "waiting %v for the cancellation to be effective", taskWaitTime)
time.Sleep(taskWaitTime)
})()
if uploadType == api.UploadTypeForm && new.Form != nil {
@@ -1277,12 +1278,7 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, sha1Str stri
if err != nil {
return nil, fmt.Errorf("failed to upload: %w", err)
}
fs.Debugf(leaf, "sleeping for %v before checking upload status", waitTime)
time.Sleep(waitTime)
if _, err = f.getTask(ctx, new.Task.ID, true); err != nil {
return nil, fmt.Errorf("unable to complete the upload: %w", err)
}
return new.File, nil
return new.File, f.waitTask(ctx, new.Task.ID)
}
// Put the object
@@ -1506,6 +1502,7 @@ func (o *Object) setMetaData(info *api.File) (err error) {
} else {
o.parent = info.ParentID
}
o.gcid = info.Hash
o.md5sum = info.Md5Checksum
if info.Links.ApplicationOctetStream != nil {
o.link = info.Links.ApplicationOctetStream
@@ -1579,9 +1576,6 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 {
return "", hash.ErrUnsupported
}
if o.md5sum == "" {
return "", nil
}
return strings.ToLower(o.md5sum), nil
}
@@ -1705,25 +1699,23 @@ func (o *Object) upload(ctx context.Context, in io.Reader, src fs.ObjectInfo, wi
return err
}
// Calculate sha1sum; grabbed from package jottacloud
hashStr, err := src.Hash(ctx, hash.SHA1)
if err != nil || hashStr == "" {
// unwrap the accounting from the input, we use wrap to put it
// back on after the buffering
var wrap accounting.WrapFn
in, wrap = accounting.UnWrap(in)
var cleanup func()
hashStr, in, cleanup, err = readSHA1(in, size, int64(o.fs.opt.HashMemoryThreshold))
defer cleanup()
if err != nil {
return fmt.Errorf("failed to calculate SHA1: %w", err)
}
// Wrap the accounting back onto the stream
in = wrap(in)
// Calculate gcid; grabbed from package jottacloud
var gcid string
// unwrap the accounting from the input, we use wrap to put it
// back on after the buffering
var wrap accounting.WrapFn
in, wrap = accounting.UnWrap(in)
var cleanup func()
gcid, in, cleanup, err = readGcid(in, size, int64(o.fs.opt.HashMemoryThreshold))
defer cleanup()
if err != nil {
return fmt.Errorf("failed to calculate gcid: %w", err)
}
// Wrap the accounting back onto the stream
in = wrap(in)
if !withTemp {
info, err := o.fs.upload(ctx, in, leaf, dirID, hashStr, size, options...)
info, err := o.fs.upload(ctx, in, leaf, dirID, gcid, size, options...)
if err != nil {
return err
}
@@ -1732,7 +1724,7 @@ func (o *Object) upload(ctx context.Context, in io.Reader, src fs.ObjectInfo, wi
// We have to fall back to upload + rename
tempName := "rcloneTemp" + random.String(8)
info, err := o.fs.upload(ctx, in, tempName, dirID, hashStr, size, options...)
info, err := o.fs.upload(ctx, in, tempName, dirID, gcid, size, options...)
if err != nil {
return err
}


@@ -1415,8 +1415,8 @@ func init() {
Help: "Magalu BR Southeast 1 endpoint",
Provider: "Magalu",
}, {
Value: "br-se1.magaluobjects.com",
Help: "Magalu BR Northest 1 endpoint",
Value: "br-ne1.magaluobjects.com",
Help: "Magalu BR Northeast 1 endpoint",
Provider: "Magalu",
}},
}, {
@@ -5422,7 +5422,7 @@ func (f *Fs) headObject(ctx context.Context, req *s3.HeadObjectInput) (resp *s3.
})
if err != nil {
if awsErr, ok := err.(awserr.RequestFailure); ok {
if awsErr.StatusCode() == http.StatusNotFound {
if awsErr.StatusCode() == http.StatusNotFound || awsErr.StatusCode() == http.StatusMethodNotAllowed {
return nil, fs.ErrorObjectNotFound
}
}


@@ -278,6 +278,36 @@ provider.`,
Value: "pca",
Help: "OVH Public Cloud Archive",
}},
}, {
Name: "fetch_until_empty_page",
Help: `When paginating, always fetch unless we received an empty page.
Consider using this option if rclone listings show fewer objects
than expected, or if repeated syncs copy unchanged objects.
It is safe to enable this, but rclone may make more API calls than
necessary.
This is one of a pair of workarounds to handle implementations
of the Swift API that do not implement pagination as expected. See
also "partial_page_fetch_threshold".`,
Default: false,
Advanced: true,
}, {
Name: "partial_page_fetch_threshold",
Help: `When paginating, fetch if the current page is within this percentage of the limit.
Consider using this option if rclone listings show fewer objects
than expected, or if repeated syncs copy unchanged objects.
It is safe to enable this, but rclone may make more API calls than
necessary.
This is one of a pair of workarounds to handle implementations
of the Swift API that do not implement pagination as expected. See
also "fetch_until_empty_page".`,
Default: 0,
Advanced: true,
}}, SharedOptions...),
})
}
@@ -308,6 +338,8 @@ type Options struct {
NoLargeObjects bool `config:"no_large_objects"`
UseSegmentsContainer fs.Tristate `config:"use_segments_container"`
Enc encoder.MultiEncoder `config:"encoding"`
FetchUntilEmptyPage bool `config:"fetch_until_empty_page"`
PartialPageFetchThreshold int `config:"partial_page_fetch_threshold"`
}
// Fs represents a remote swift server
@@ -462,6 +494,8 @@ func swiftConnection(ctx context.Context, opt *Options, name string) (*swift.Con
ConnectTimeout: 10 * ci.ConnectTimeout, // Use the timeouts in the transport
Timeout: 10 * ci.Timeout, // Use the timeouts in the transport
Transport: fshttp.NewTransport(ctx),
FetchUntilEmptyPage: opt.FetchUntilEmptyPage,
PartialPageFetchThreshold: opt.PartialPageFetchThreshold,
}
if opt.EnvAuth {
err := c.ApplyEnvironment()


@@ -70,8 +70,17 @@ type ItemInfo struct {
Item Item `json:"data"`
}
// Links contains Cursor information
type Links struct {
Cursor struct {
HasNext bool `json:"has_next"`
Next string `json:"next"`
} `json:"cursor"`
}
// ItemList contains multiple Zoho Items
type ItemList struct {
Links Links `json:"links"`
Items []Item `json:"data"`
}
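
A sketch of the listing response shape these structs decode, with
illustrative values; listAll below extracts the page[next] token from
the next URL's query string and feeds it back as a request parameter:

    {
      "links": {
        "cursor": {
          "has_next": true,
          "next": "https://.../files/{dirID}/files?page[next]=NEXT_TOKEN"
        }
      },
      "data": []
    }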


@@ -289,6 +289,10 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
authRetry = true
fs.Debugf(nil, "Should retry: %v", err)
}
if resp != nil && resp.StatusCode == 429 {
fs.Errorf(nil, "zoho: rate limit error received, sleeping for 60s: %v", err)
time.Sleep(60 * time.Second)
}
return authRetry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
@@ -332,7 +336,7 @@ func parsePath(path string) (root string) {
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.Item, err error) {
// defer fs.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err)
// defer log.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err)
leaf, directoryID, err := f.dirCache.FindPath(ctx, path, false)
if err != nil {
if err == fs.ErrorDirNotFound {
@@ -454,18 +458,18 @@ type listAllFn func(*api.Item) bool
//
// If the user fn ever returns true then it early exits with found = true
func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
const listItemsLimit = 1000
opts := rest.Opts{
Method: "GET",
Path: "/files/" + dirID + "/files",
ExtraHeaders: map[string]string{"Accept": "application/vnd.api+json"},
Parameters: url.Values{},
Parameters: url.Values{
"page[limit]": {strconv.Itoa(listItemsLimit)},
"page[next]": {"0"},
},
}
opts.Parameters.Set("page[limit]", strconv.Itoa(10))
offset := 0
OUTER:
for {
opts.Parameters.Set("page[offset]", strconv.Itoa(offset))
var result api.ItemList
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
@@ -495,7 +499,15 @@ OUTER:
break OUTER
}
}
offset += 10
if !result.Links.Cursor.HasNext {
break
}
// Fetch the next from the URL in the response
nextURL, err := url.Parse(result.Links.Cursor.Next)
if err != nil {
return found, fmt.Errorf("failed to parse next link as URL: %w", err)
}
opts.Parameters.Set("page[next]", nextURL.Query().Get("page[next]"))
}
return
}
@@ -631,33 +643,6 @@ func (f *Fs) createObject(ctx context.Context, remote string, size int64, modTim
return
}
// Put the object
//
// Copy the reader in to the new object which is returned.
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
existingObj, err := f.newObjectWithInfo(ctx, src.Remote(), nil)
switch err {
case nil:
return existingObj, existingObj.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
// Not found so create it
return f.PutUnchecked(ctx, in, src)
default:
return nil, err
}
}
func isSimpleName(s string) bool {
for _, r := range s {
if (r < 'a' || r > 'z') && (r < 'A' || r > 'Z') && (r != '.') {
return false
}
}
return true
}
func (f *Fs) upload(ctx context.Context, name string, parent string, size int64, in io.Reader, options ...fs.OpenOption) (*api.Item, error) {
params := url.Values{}
params.Set("filename", name)
@@ -693,22 +678,32 @@ func (f *Fs) upload(ctx context.Context, name string, parent string, size int64,
return nil, errors.New("upload: invalid response")
}
// Received meta data is missing size so we have to read it again.
info, err := f.readMetaDataForID(ctx, uploadResponse.Uploads[0].Attributes.RessourceID)
if err != nil {
return nil, err
// It doesn't always appear on first read so try again if necessary
var info *api.Item
const maxTries = 10
sleepTime := 100 * time.Millisecond
for i := 0; i < maxTries; i++ {
info, err = f.readMetaDataForID(ctx, uploadResponse.Uploads[0].Attributes.RessourceID)
if err != nil {
return nil, err
}
if info.Attributes.StorageInfo.Size != 0 || size == 0 {
break
}
fs.Debugf(f, "Size not available yet for %q - try again in %v (try %d/%d)", name, sleepTime, i+1, maxTries)
time.Sleep(sleepTime)
sleepTime *= 2
}
return info, nil
}
// PutUnchecked the object into the container
//
// This will produce an error if the object already exists.
// Put the object into the container
//
// Copy the reader in to the new object which is returned.
//
// The new object may have been created if an error is returned
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
size := src.Size()
remote := src.Remote()
@@ -718,25 +713,12 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
return nil, err
}
if isSimpleName(leaf) {
info, err := f.upload(ctx, f.opt.Enc.FromStandardName(leaf), directoryID, size, in, options...)
if err != nil {
return nil, err
}
return f.newObjectWithInfo(ctx, remote, info)
}
tempName := "rcloneTemp" + random.String(8)
info, err := f.upload(ctx, tempName, directoryID, size, in, options...)
// Upload the file
info, err := f.upload(ctx, f.opt.Enc.FromStandardName(leaf), directoryID, size, in, options...)
if err != nil {
return nil, err
}
o, err := f.newObjectWithInfo(ctx, remote, info)
if err != nil {
return nil, err
}
return o, o.(*Object).rename(ctx, leaf)
return f.newObjectWithInfo(ctx, remote, info)
}
// Mkdir creates the container if it doesn't exist
@@ -1200,32 +1182,12 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return err
}
if isSimpleName(leaf) {
// Simple name we can just overwrite the old file
info, err := o.fs.upload(ctx, o.fs.opt.Enc.FromStandardName(leaf), directoryID, size, in, options...)
if err != nil {
return err
}
return o.setMetaData(info)
}
// We have to fall back to upload + rename
tempName := "rcloneTemp" + random.String(8)
info, err := o.fs.upload(ctx, tempName, directoryID, size, in, options...)
// Overwrite the old file
info, err := o.fs.upload(ctx, o.fs.opt.Enc.FromStandardName(leaf), directoryID, size, in, options...)
if err != nil {
return err
}
// upload was successful, need to delete old object before rename
if err = o.Remove(ctx); err != nil {
return fmt.Errorf("failed to remove old object: %w", err)
}
if err = o.setMetaData(info); err != nil {
return err
}
// rename also updates metadata
return o.rename(ctx, leaf)
return o.setMetaData(info)
}
// Remove an object


@@ -29,8 +29,6 @@ type frontmatter struct {
Date string
Title string
Description string
Slug string
URL string
Source string
Annotations map[string]string
}
@@ -38,8 +36,6 @@ type frontmatter struct {
var frontmatterTemplate = template.Must(template.New("frontmatter").Parse(`---
title: "{{ .Title }}"
description: "{{ .Description }}"
slug: {{ .Slug }}
url: {{ .URL }}
{{- range $key, $value := .Annotations }}
{{ $key }}: {{ $value }}
{{- end }}
@@ -112,10 +108,14 @@ rclone.org website.`,
Date: now,
Title: strings.ReplaceAll(base, "_", " "),
Description: commands[name].Short,
Slug: base,
URL: "/commands/" + strings.ToLower(base) + "/",
Source: strings.ReplaceAll(strings.ReplaceAll(base, "rclone", "cmd"), "_", "/") + "/",
Annotations: commands[name].Annotations,
Annotations: map[string]string{},
}
// Filter out annotations that confuse hugo from the frontmatter
for k, v := range commands[name].Annotations {
if k != "groups" {
data.Annotations[k] = v
}
}
var buf bytes.Buffer
err := frontmatterTemplate.Execute(&buf, data)


@@ -93,6 +93,7 @@ func findFileWithContents(t *testing.T, dir string, wantContents []byte) bool {
}
type e2eTestingContext struct {
t *testing.T
tempDir string
binDir string
homeDir string
@@ -126,7 +127,7 @@ func makeE2eTestingContext(t *testing.T) e2eTestingContext {
require.NoError(t, os.Mkdir(dir, 0700))
}
return e2eTestingContext{tempDir, binDir, homeDir, configDir, rcloneConfigDir, ephemeralRepoDir}
return e2eTestingContext{t, tempDir, binDir, homeDir, configDir, rcloneConfigDir, ephemeralRepoDir}
}
// Install the symlink that enables git-annex to invoke "rclone gitannex"
@@ -154,16 +155,17 @@ func (e *e2eTestingContext) installRcloneConfig(t *testing.T) {
// variable to a subdirectory of the temp directory. It also ensures that the
// git-annex-remote-rclone-builtin symlink will be found by extending the PATH.
func (e *e2eTestingContext) runInRepo(t *testing.T, command string, args ...string) {
fmt.Printf("+ %s %v\n", command, args)
if testing.Verbose() {
t.Logf("Running %s %v\n", command, args)
}
cmd := exec.Command(command, args...)
cmd.Dir = e.ephemeralRepoDir
cmd.Env = []string{
"HOME=" + e.homeDir,
"PATH=" + os.Getenv("PATH") + ":" + e.binDir,
}
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
require.NoError(t, cmd.Run())
buf, err := cmd.CombinedOutput()
require.NoError(t, err, fmt.Sprintf("+ %s %v failed:\n%s\n", command, args, buf))
}
// createGitRepo creates an empty git repository in the ephemeral repo


@@ -139,7 +139,12 @@ func Touch(ctx context.Context, f fs.Fs, remote string) error {
return err
}
fs.Debugf(nil, "Touch time %v", t)
file, err := f.NewObject(ctx, remote)
var file fs.Object
if remote == "" {
err = fs.ErrorIsDir
} else {
file, err = f.NewObject(ctx, remote)
}
if err != nil {
if errors.Is(err, fs.ErrorObjectNotFound) {
// Touching non-existent path, possibly creating it as new file


@@ -9,8 +9,7 @@
"description": "rclone - rsync for cloud storage: google drive, s3, gcs, azure, dropbox, box...",
"canonifyurls": false,
"disableKinds": [
"taxonomy",
"taxonomyTerm"
"taxonomy"
],
"ignoreFiles": [
"~$",


@@ -118,7 +118,7 @@ Here are the Advanced options specific to alias (Alias for an existing remote).
#### --alias-description
Description of the remote
Description of the remote.
Properties:


@@ -865,3 +865,5 @@ put them back in again.` >}}
* Michał Dzienisiewicz <michal.piotr.dz@gmail.com>
* Florian Klink <flokli@flokli.de>
* Bill Fraser <bill@wfraser.dev>
* Thearas <thearas850@gmail.com>
* Filipe Herculano <fifo_@live.com>


@@ -851,7 +851,7 @@ Properties:
#### --azureblob-description
Description of the remote
Description of the remote.
Properties:


@@ -689,7 +689,7 @@ Properties:
#### --azurefiles-description
Description of the remote
Description of the remote.
Properties:


@@ -647,7 +647,7 @@ Properties:
#### --b2-description
Description of the remote
Description of the remote.
Properties:


@@ -475,7 +475,7 @@ Properties:
#### --box-description
Description of the remote
Description of the remote.
Properties:


@@ -666,7 +666,7 @@ Properties:
#### --cache-description
Description of the remote
Description of the remote.
Properties:


@@ -5,6 +5,159 @@ description: "Rclone Changelog"
# Changelog
## v1.67.0 - 2024-06-14
[See commits](https://github.com/rclone/rclone/compare/v1.66.0...v1.67.0)
* New backends
* [uloz.to](/ulozto/) (iotmaestro)
* New S3 providers
* [Magalu Object Storage](/s3/#magalu) (Bruno Fernandes)
* New commands
* [gitannex](/commands/rclone_gitannex/): Enables git-annex to store and retrieve content from an rclone remote (Dan McArdle)
* New Features
* accounting: Add deleted files total size to status summary line (Kyle Reynolds)
* build
* Fix `CVE-2023-45288` by upgrading `golang.org/x/net` (Nick Craig-Wood)
* Fix `CVE-2024-35255` by upgrading `github.com/Azure/azure-sdk-for-go/sdk/azidentity` to 1.6.0 (dependabot)
* Convert source files with CRLF to LF (albertony)
* Update all dependencies (Nick Craig-Wood)
* doc updates (albertony, Alex Garel, Dave Nicolson, Dominik Joe Pantůček, Eric Wolf, Erisa A, Evan Harris, Evan McBeth, Gachoud Philippe, hidewrong, jakzoe, jumbi77, kapitainsky, Kyle Reynolds, Lewis Hook, Nick Craig-Wood, overallteach, pawsey-kbuckley, Pieter van Oostrum, psychopatt, racerole, static-moonlight, Warrentheo, yudrywet, yumeiyin)
* ncdu: Do not quit on Esc to aid usability (Katia Esposito)
* rcserver: Set `ModTime` for dirs and files served by `--rc-serve` (Nikita Shoshin)
* Bug Fixes
* bisync: Add integration tests against all backends and fix many many problems (nielash)
* config: Fix default value for `description` (Nick Craig-Wood)
* copy: Fix `nil` pointer dereference when corrupted on transfer with `nil` dst (nielash)
* fs
* Improve JSON Unmarshalling for `Duration` types (Kyle Reynolds)
* Close the CPU profile on exit (guangwu)
* Replace `/bin/bash` with `/usr/bin/env bash` (Florian Klink)
* oauthutil: Clear client secret if client ID is set (Michael Terry)
* operations
* Rework `rcat` so that it doesn't call the `--metadata-mapper` twice (Nick Craig-Wood)
* Ensure `SrcFsType` is set correctly when using `--metadata-mapper` (Nick Craig-Wood)
* Fix "optional feature not implemented" error with a crypted sftp bug (Nick Craig-Wood)
* Fix very long file names when using copy with `--partial` (Nick Craig-Wood)
* Fix retries downloading too much data with certain backends (Nick Craig-Wood)
* Fix move when dst is nil and fdst is case-insensitive (nielash)
* Fix lsjson `--encrypted` when using `--crypt-XXX` parameters (Nick Craig-Wood)
* Fix missing metadata for multipart transfers to local disk (Nick Craig-Wood)
* Fix incorrect modtime on some multipart transfers (Nick Craig-Wood)
* Fix hashing problem in integration tests (Nick Craig-Wood)
* rc
* Fix stats groups being ignored in `operations/check` (Nick Craig-Wood)
* Fix incorrect `Content-Type` in HTTP API (Kyle Reynolds)
* serve s3
* Fix `Last-Modified` header format (Butanediol)
* Fix in-memory metadata storing wrong modtime (nielash)
* Fix XML of error message (Nick Craig-Wood)
* serve webdav: Fix webdav with `--baseurl` under Windows (Nick Craig-Wood)
* serve dlna: Make `BrowseMetadata` more compliant (albertony)
* serve http: Added `Content-Length` header when HTML directory is served (Sunny)
* sync
* Don't sync directories if they haven't been modified (Nick Craig-Wood)
* Don't test reading metadata if we can't write it (Nick Craig-Wood)
* Fix case normalisation (problem on s3) (Nick Craig-Wood)
* Fix management of empty directories to make it more accurate (Nick Craig-Wood)
* Fix creation of empty directories when `--create-empty-src-dirs=false` (Nick Craig-Wood)
* Fix directory modification times not being set (Nick Craig-Wood)
* Fix "failed to update directory timestamp or metadata: directory not found" (Nick Craig-Wood)
* Fix expecting SFTP to have MkdirMetadata method: optional feature not implemented (Nick Craig-Wood)
* test info: Improve cleanup of temp files (Kyle Reynolds)
* touch: Fix using `-R` on certain backends (Nick Craig-Wood)
* Mount
* Add `--direct-io` flag to force uncached access (Nick Craig-Wood)
* VFS
* Fix download loop when file size shrunk (Nick Craig-Wood)
* Fix renaming a directory (nielash)
* Local
* Add `--local-time-type` to use `mtime`/`atime`/`btime`/`ctime` as the time (Nick Craig-Wood)
* Allow `SeBackupPrivilege` and/or `SeRestorePrivilege` to work on Windows (Charles Hamilton)
* Azure Blob
* Fix encoding issue with dir path comparison (nielash)
* B2
* Add new [cleanup](/b2/#cleanup) and [cleanup-hidden](/b2/#cleanup-hidden) backend commands (Pat Patterson)
* Update B2 URLs to new home (Nick Craig-Wood)
* Chunker
* Fix startup when root points to composite multi-chunk file without metadata (nielash)
* Fix case-insensitive comparison on local without metadata (nielash)
* Fix "finalizer already set" error (nielash)
* Drive
* Add [backend query](/drive/#query) command for general purpose querying of files (John-Paul Smith)
* Stop sending notification emails when setting permissions (Nick Craig-Wood)
* Fix server side copy with metadata from my drive to shared drive (Nick Craig-Wood)
* Set all metadata permissions and return error summary instead of stopping on the first error (Nick Craig-Wood)
* Make errors setting permissions into no retry errors (Nick Craig-Wood)
* Fix description being overwritten on server side moves (Nick Craig-Wood)
* Allow setting metadata to fail if `failok` flag is set (Nick Craig-Wood)
* Fix panic when using `--metadata-mapper` on large google doc files (Nick Craig-Wood)
* Dropbox
* Add `--dropbox-root-namespace` to override the root namespace (Bill Fraser)
* Google Cloud Storage
* Fix encoding issue with dir path comparison (nielash)
* Hdfs
* Fix f.String() not including subpath (nielash)
* Http
* Add `--http-no-escape` to not escape URL metacharacters in path names (Kyle Reynolds)
* Jottacloud
* Set metadata on server side copy and move (albertony)
* Linkbox
* Fix working with names longer than 8-25 Unicode chars (Vitaly)
* Fix list paging and optimized synchronization (gvitali)
* Mailru
* Attempt to fix throttling by increasing min sleep to 100ms (Nick Craig-Wood)
* Memory
* Fix dst mutating src after server-side copy (nielash)
* Fix deadlock in operations.Purge (nielash)
* Fix incorrect list entries when rooted at subdirectory (nielash)
* Onedrive
* Add `--onedrive-hard-delete` to permanently delete files (Nick Craig-Wood)
* Make server-side copy work in more scenarios (YukiUnHappy)
* Fix "unauthenticated: Unauthenticated" errors when downloading (Nick Craig-Wood)
* Fix `--metadata-mapper` being called twice if writing permissions (nielash)
* Set all metadata permissions and return error summary instead of stopping on the first error (nielash)
* Make errors setting permissions into no retry errors (Nick Craig-Wood)
* Skip writing permissions with 'owner' role (nielash)
* Fix references to deprecated permissions properties (nielash)
* Add support for group permissions (nielash)
* Allow setting permissions to fail if `failok` flag is set (Nick Craig-Wood)
* Pikpak
* Make getFile() usage more efficient to avoid the download limit (wiserain)
* Improve upload reliability and resolve potential file conflicts (wiserain)
* Implement configurable chunk size for multipart upload (wiserain)
* Protondrive
* Don't auth with an empty access token (Michał Dzienisiewicz)
* Qingstor
* Disable integration tests as test account suspended (Nick Craig-Wood)
* Quatrix
* Fix f.String() not including subpath (nielash)
* S3
* Add new AWS region `il-central-1` Tel Aviv (yoelvini)
* Update Scaleway's configuration options (Alexandre Lavigne)
* Ceph: fix quirks when creating buckets to fix trying to create an existing bucket (Thomas Schneider)
* Fix encoding issue with dir path comparison (nielash)
* Fix 405 error on HEAD for delete marker with versionId (nielash)
* Validate `--s3-copy-cutoff` size before copy (hoyho)
* SFTP
* Add `--sftp-connections` to limit the maximum number of connections (Tomasz Melcer)
* Storj
* Update `storj.io/uplink` to latest release (JT Olio)
* Update bio on request (Nick Craig-Wood)
* Swift
* Implement `--swift-use-segments-container` to allow >5G files on Blomp (Nick Craig-Wood)
* Union
* Fix deleting dirs when all remotes can't have empty dirs (Nick Craig-Wood)
* WebDAV
* Fix setting modification times erasing checksums on owncloud and nextcloud (nielash)
* owncloud: Add `--webdav-owncloud-exclude-mounts` which allows excluding mounted folders when listing remote resources (Thomas Müller)
* Zoho
* Fix throttling problem when uploading files (Nick Craig-Wood)
* Use cursor listing for improved performance (Nick Craig-Wood)
* Retry reading info after upload if size wasn't returned (Nick Craig-Wood)
* Remove simple file names complication which is no longer needed (Nick Craig-Wood)
* Sleep for 60 seconds if rate limit error received (Nick Craig-Wood)
## v1.66.0 - 2024-03-10
[See commits](https://github.com/rclone/rclone/compare/v1.65.0...v1.66.0)


@@ -479,7 +479,7 @@ Properties:
#### --chunker-description
Description of the remote
Description of the remote.
Properties:


@@ -160,7 +160,7 @@ Here are the Advanced options specific to combine (Combine several remotes into
#### --combine-description
Description of the remote
Description of the remote.
Properties:


@@ -1,8 +1,6 @@
---
title: "rclone"
description: "Show help for rclone commands, flags and backends."
slug: rclone
url: /commands/rclone/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/ and as part of making a release run "make commanddocs"
---
## rclone
@@ -257,6 +255,7 @@ rclone [flags]
--dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--dropbox-root-namespace string Specify a different Dropbox namespace ID to use as the root for all paths
--dropbox-shared-files Instructs rclone to work on individual shared files
--dropbox-shared-folders Instructs rclone to work on shared folders
--dropbox-token string OAuth Access Token as a JSON blob
@@ -384,6 +383,7 @@ rclone [flags]
--hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi)
--http-description string Description of the remote
--http-headers CommaSepList Set HTTP headers for all transactions
--http-no-escape Do not escape URL metacharacters in path names
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
@@ -432,7 +432,7 @@ rclone [flags]
--koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use
--koofr-mountid string Mount ID of the mount to use
--koofr-password string Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
--koofr-password string Your password for rclone generate one at https://app.koofr.net/app/admin/preferences/password (obscured)
--koofr-provider string Choose your storage provider
--koofr-setmtime Does the backend support setting modification time (default true)
--koofr-user string Your user name
@@ -449,6 +449,7 @@ rclone [flags]
--local-no-set-modtime Disable setting modtime
--local-no-sparse Disable sparse files for multi-thread downloads
--local-nounc Disable UNC (long path names) conversion on Windows
--local-time-type mtime|atime|btime|ctime Set what kind of time is returned (default mtime)
--local-unicode-normalization Apply unicode NFC normalization to paths and filenames
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--log-file string Log everything to this file
@@ -530,6 +531,7 @@ rclone [flags]
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
--onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
--onedrive-hard-delete Permanently delete files on removal
--onedrive-hash-type string Specify the hash in use for the backend (default "auto")
--onedrive-link-password string Set the password for links created by the link command
--onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous")
@@ -587,6 +589,7 @@ rclone [flags]
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
--pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
@@ -597,6 +600,7 @@ rclone [flags]
--pikpak-token string OAuth Access Token as a JSON blob
--pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
--premiumizeme-auth-url string Auth server URL
@@ -665,6 +669,7 @@ rclone [flags]
--rc-realm string Realm for authentication
--rc-salt string Password hashing salt (default "dlPL2MqE")
--rc-serve Enable the serving of remote objects
--rc-serve-no-modtime Don't read the modification time (can speed things up)
--rc-server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
@@ -745,6 +750,7 @@ rclone [flags]
--sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki)
--sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference
--sftp-concurrency int The maximum number of outstanding requests for one file (default 64)
--sftp-connections int Maximum number of SFTP simultaneous connections, 0 for unlimited
--sftp-copy-is-hardlink Set to enable server side copies using hardlinks
--sftp-description string Description of the remote
--sftp-disable-concurrent-reads If set don't use concurrent reads
@@ -840,7 +846,7 @@ rclone [flags]
--swift-auth string Authentication URL for server (OS_AUTH_URL)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--swift-chunk-size SizeSuffix Above this size files will be chunked (default 5Gi)
--swift-description string Description of the remote
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8)
@@ -856,6 +862,7 @@ rclone [flags]
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-use-segments-container Tristate Choose destination for large object segments (default unset)
--swift-user string User name to log in (OS_USERNAME)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID)
--syslog Use Syslog for logging
@@ -867,6 +874,13 @@ rclone [flags]
--track-renames When synchronizing, track file renames and do a server-side move if possible
--track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash")
--transfers int Number of file transfers to run in parallel (default 4)
--ulozto-app-token string The application token identifying the app. An app API key can be either found in the API
--ulozto-description string Description of the remote
--ulozto-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--ulozto-list-page-size int The size of a single page for list commands. 1-500 (default 500)
--ulozto-password string The password for the user (obscured)
--ulozto-root-folder-slug string If set, rclone will use this folder as the root folder for all operations. For example,
--ulozto-username string The username of the principal to operate as
--union-action-policy string Policy to choose upstream on ACTION category (default "epall")
--union-cache-time int Cache time of usage and free space (in seconds) (default 120)
--union-create-policy string Policy to choose upstream on CREATE category (default "epmfs")
@@ -883,7 +897,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.66.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.67.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
@@ -892,6 +906,7 @@ rclone [flags]
--webdav-encoding string The encoding for the backend
--webdav-headers CommaSepList Set HTTP headers for all transactions
--webdav-nextcloud-chunk-size SizeSuffix Nextcloud upload chunk size (default 10Mi)
--webdav-owncloud-exclude-mounts Exclude ownCloud mounted storages
--webdav-owncloud-exclude-shares Exclude ownCloud shares
--webdav-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--webdav-pass string Password (obscured)
@@ -937,6 +952,7 @@ rclone [flags]
* [rclone delete](/commands/rclone_delete/) - Remove the files in path.
* [rclone deletefile](/commands/rclone_deletefile/) - Remove a single file from remote.
* [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied.
* [rclone gitannex](/commands/rclone_gitannex/) - Speaks with git-annex over stdin/stdout.
* [rclone hashsum](/commands/rclone_hashsum/) - Produces a hashsum file for all the objects in the path.
* [rclone link](/commands/rclone_link/) - Generate public link to file/folder.
* [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file and defined in environment variables.

View File

@@ -1,8 +1,6 @@
---
title: "rclone about"
description: "Get quota information from the remote."
slug: rclone_about
url: /commands/rclone_about/
versionIntroduced: v1.41
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/about/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone authorize"
description: "Remote authorization."
slug: rclone_authorize
url: /commands/rclone_authorize/
versionIntroduced: v1.27
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/authorize/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone backend"
description: "Run a backend-specific command."
slug: rclone_backend
url: /commands/rclone_backend/
groups: Important
versionIntroduced: v1.52
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/backend/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone bisync"
description: "Perform bidirectional synchronization between two paths."
slug: rclone_bisync
url: /commands/rclone_bisync/
groups: Filter,Copy,Important
status: Beta
versionIntroduced: v1.58
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/bisync/ and as part of making a release run "make commanddocs"

View File

@@ -1,9 +1,6 @@
---
title: "rclone cat"
description: "Concatenates any files and sends them to stdout."
slug: rclone_cat
url: /commands/rclone_cat/
groups: Filter,Listing
versionIntroduced: v1.33
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/cat/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone check"
description: "Checks the files in the source and destination match."
slug: rclone_check
url: /commands/rclone_check/
groups: Filter,Listing,Check
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/check/ and as part of making a release run "make commanddocs"
---
# rclone check

View File

@@ -1,9 +1,6 @@
---
title: "rclone checksum"
description: "Checks the files in the destination against a SUM file."
slug: rclone_checksum
url: /commands/rclone_checksum/
groups: Filter,Listing
versionIntroduced: v1.56
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/checksum/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone cleanup"
description: "Clean up the remote if possible."
slug: rclone_cleanup
url: /commands/rclone_cleanup/
groups: Important
versionIntroduced: v1.31
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/cleanup/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone completion"
description: "Output completion script for a given shell."
slug: rclone_completion
url: /commands/rclone_completion/
versionIntroduced: v1.33
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone completion bash"
description: "Output bash completion script for rclone."
slug: rclone_completion_bash
url: /commands/rclone_completion_bash/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/bash/ and as part of making a release run "make commanddocs"
---
# rclone completion bash
@@ -14,21 +12,32 @@ Output bash completion script for rclone.
Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default so will
probably need to be run with sudo or as root, e.g.

    sudo rclone genautocomplete bash

Logout and login again to use the autocompletion scripts, or source
them directly

    . /etc/bash_completion

If you supply a command line argument the script will be written
there.

By default, when run without any arguments,

    rclone genautocomplete bash

the generated script will be written to

    /etc/bash_completion.d/rclone

and so rclone will probably need to be run as root, or with sudo.

If you supply a path to a file as the command line argument, then
the generated script will be written to that file, in which case
you should not need root privileges.

If output_file is "-", then the output will be written to stdout.

If you have installed the script into the default location, you
can logout and login again to use the autocompletion script.

Alternatively, you can source the script directly

    . /path/to/my_bash_completion_scripts/rclone

and the autocompletion functionality will be added to your
current shell.
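For example, to write the script to a user-local file and source it straight away (the path here is an illustrative assumption, not a required location):

```sh
# Generate the completion script into a user-writable file.
rclone completion bash ~/.local/share/rclone-completion.bash

# Load it into the current shell.
. ~/.local/share/rclone-completion.bash
```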
```
rclone completion bash [output_file] [flags]

View File

@@ -1,8 +1,6 @@
---
title: "rclone completion fish"
description: "Output fish completion script for rclone."
slug: rclone_completion_fish
url: /commands/rclone_completion_fish/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/fish/ and as part of making a release run "make commanddocs"
---
# rclone completion fish

View File

@@ -1,8 +1,6 @@
---
title: "rclone completion powershell"
description: "Output powershell completion script for rclone."
slug: rclone_completion_powershell
url: /commands/rclone_completion_powershell/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/powershell/ and as part of making a release run "make commanddocs"
---
# rclone completion powershell

View File

@@ -1,8 +1,6 @@
---
title: "rclone completion zsh"
description: "Output zsh completion script for rclone."
slug: rclone_completion_zsh
url: /commands/rclone_completion_zsh/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/zsh/ and as part of making a release run "make commanddocs"
---
# rclone completion zsh

View File

@@ -1,8 +1,6 @@
---
title: "rclone config"
description: "Enter an interactive configuration session."
slug: rclone_config
url: /commands/rclone_config/
versionIntroduced: v1.39
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone config create"
description: "Create a new remote with name, type and options."
slug: rclone_config_create
url: /commands/rclone_config_create/
versionIntroduced: v1.39
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/create/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone config delete"
description: "Delete an existing remote."
slug: rclone_config_delete
url: /commands/rclone_config_delete/
versionIntroduced: v1.39
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/delete/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone config disconnect"
description: "Disconnects user from remote"
slug: rclone_config_disconnect
url: /commands/rclone_config_disconnect/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/disconnect/ and as part of making a release run "make commanddocs"
---
# rclone config disconnect

View File

@@ -1,8 +1,6 @@
---
title: "rclone config dump"
description: "Dump the config file as JSON."
slug: rclone_config_dump
url: /commands/rclone_config_dump/
versionIntroduced: v1.39
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/dump/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone config edit"
description: "Enter an interactive configuration session."
slug: rclone_config_edit
url: /commands/rclone_config_edit/
versionIntroduced: v1.39
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/edit/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone config file"
description: "Show path of configuration file in use."
slug: rclone_config_file
url: /commands/rclone_config_file/
versionIntroduced: v1.38
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/file/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone config password"
description: "Update password in an existing remote."
slug: rclone_config_password
url: /commands/rclone_config_password/
versionIntroduced: v1.39
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/password/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone config paths"
description: "Show paths used for configuration, cache, temp etc."
slug: rclone_config_paths
url: /commands/rclone_config_paths/
versionIntroduced: v1.57
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/paths/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone config providers"
description: "List in JSON format all the providers and options."
slug: rclone_config_providers
url: /commands/rclone_config_providers/
versionIntroduced: v1.39
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/providers/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone config reconnect"
description: "Re-authenticates user with remote."
slug: rclone_config_reconnect
url: /commands/rclone_config_reconnect/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/reconnect/ and as part of making a release run "make commanddocs"
---
# rclone config reconnect

View File

@@ -1,8 +1,6 @@
---
title: "rclone config redacted"
description: "Print redacted (decrypted) config file, or the redacted config for a single remote."
slug: rclone_config_redacted
url: /commands/rclone_config_redacted/
versionIntroduced: v1.64
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/redacted/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone config show"
description: "Print (decrypted) config file, or the config for a single remote."
slug: rclone_config_show
url: /commands/rclone_config_show/
versionIntroduced: v1.38
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/show/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone config touch"
description: "Ensure configuration file exists."
slug: rclone_config_touch
url: /commands/rclone_config_touch/
versionIntroduced: v1.56
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/touch/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone config update"
description: "Update options in an existing remote."
slug: rclone_config_update
url: /commands/rclone_config_update/
versionIntroduced: v1.39
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/update/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone config userinfo"
description: "Prints info about logged in user of remote."
slug: rclone_config_userinfo
url: /commands/rclone_config_userinfo/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/userinfo/ and as part of making a release run "make commanddocs"
---
# rclone config userinfo

View File

@@ -1,9 +1,6 @@
---
title: "rclone copy"
description: "Copy files from source to dest, skipping identical files."
slug: rclone_copy
url: /commands/rclone_copy/
groups: Copy,Filter,Listing,Important
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/copy/ and as part of making a release run "make commanddocs"
---
# rclone copy

View File

@@ -1,9 +1,6 @@
---
title: "rclone copyto"
description: "Copy files from source to dest, skipping identical files."
slug: rclone_copyto
url: /commands/rclone_copyto/
groups: Copy,Filter,Listing,Important
versionIntroduced: v1.35
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/copyto/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone copyurl"
description: "Copy the contents of the URL supplied content to dest:path."
slug: rclone_copyurl
url: /commands/rclone_copyurl/
groups: Important
versionIntroduced: v1.43
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/copyurl/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone cryptcheck"
description: "Cryptcheck checks the integrity of an encrypted remote."
slug: rclone_cryptcheck
url: /commands/rclone_cryptcheck/
groups: Filter,Listing,Check
versionIntroduced: v1.36
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/cryptcheck/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone cryptdecode"
description: "Cryptdecode returns unencrypted file names."
slug: rclone_cryptdecode
url: /commands/rclone_cryptdecode/
versionIntroduced: v1.38
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/cryptdecode/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone dedupe"
description: "Interactively find duplicate filenames and delete/rename them."
slug: rclone_dedupe
url: /commands/rclone_dedupe/
groups: Important
versionIntroduced: v1.27
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/dedupe/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone delete"
description: "Remove the files in path."
slug: rclone_delete
url: /commands/rclone_delete/
groups: Important,Filter,Listing
versionIntroduced: v1.27
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/delete/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone deletefile"
description: "Remove a single file from remote."
slug: rclone_deletefile
url: /commands/rclone_deletefile/
groups: Important
versionIntroduced: v1.42
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/deletefile/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone gendocs"
description: "Output markdown docs for rclone to the directory supplied."
slug: rclone_gendocs
url: /commands/rclone_gendocs/
versionIntroduced: v1.33
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/gendocs/ and as part of making a release run "make commanddocs"
---

View File

@@ -0,0 +1,100 @@
---
title: "rclone gitannex"
description: "Speaks with git-annex over stdin/stdout."
versionIntroduced: v1.67.0
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/gitannex/ and as part of making a release run "make commanddocs"
---
# rclone gitannex
Speaks with git-annex over stdin/stdout.
## Synopsis
Rclone's `gitannex` subcommand enables [git-annex] to store and retrieve content
from an rclone remote. It is meant to be run by git-annex, not directly by
users.
[git-annex]: https://git-annex.branchable.com/
Installation on Linux
---------------------
1. Skip this step if your version of git-annex is [10.20240430] or newer.
Otherwise, you must create a symlink somewhere on your PATH with a particular
name. This symlink helps git-annex tell rclone it wants to run the "gitannex"
subcommand.
```sh
# Create the helper symlink in "$HOME/bin".
ln -s "$(realpath rclone)" "$HOME/bin/git-annex-remote-rclone-builtin"
# Verify the new symlink is on your PATH.
which git-annex-remote-rclone-builtin
```
[10.20240430]: https://git-annex.branchable.com/news/version_10.20240430/
2. Add a new remote to your git-annex repo. This new remote will connect
git-annex with the `rclone gitannex` subcommand.
Start by asking git-annex to describe the remote's available configuration
parameters.
```sh
# If you skipped step 1:
git annex initremote MyRemote type=rclone --whatelse
# If you created a symlink in step 1:
git annex initremote MyRemote type=external externaltype=rclone-builtin --whatelse
```
> **NOTE**: If you're porting an existing [git-annex-remote-rclone] remote to
> use `rclone gitannex`, you can probably reuse the configuration parameters
> verbatim without renaming them. Check parameter synonyms with `--whatelse`
> as shown above.
>
> [git-annex-remote-rclone]: https://github.com/git-annex-remote-rclone/git-annex-remote-rclone
The following example creates a new git-annex remote named "MyRemote" that
will use the rclone remote named "SomeRcloneRemote". That rclone remote must
be one configured in your rclone.conf file, which can be located with `rclone
config file`.
```sh
git annex initremote MyRemote \
type=external \
externaltype=rclone-builtin \
encryption=none \
rcloneremotename=SomeRcloneRemote \
rcloneprefix=git-annex-content \
rclonelayout=nodir
```
3. Before you trust this command with your precious data, be sure to **test the
remote**. This command is very new and has not been tested on many rclone
backends. Caveat emptor!
```sh
git annex testremote MyRemote
```
Happy annexing!
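For reference, the whole setup condenses into a few commands. This is only a sketch: it assumes the rclone remote `SomeRcloneRemote` already exists in your rclone.conf and that you run it from inside a git-annex repository.

```sh
# Confirm the rclone remote you plan to use is configured.
rclone listremotes

# Create the git-annex remote backed by rclone, then exercise it.
git annex initremote MyRemote type=external externaltype=rclone-builtin \
    encryption=none rcloneremotename=SomeRcloneRemote \
    rcloneprefix=git-annex-content rclonelayout=nodir
git annex testremote MyRemote
```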
```
rclone gitannex [flags]
```
## Options
```
-h, --help help for gitannex
```
See the [global flags page](/flags/) for global options not listed here.
# SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

View File

@@ -1,9 +1,6 @@
---
title: "rclone hashsum"
description: "Produces a hashsum file for all the objects in the path."
slug: rclone_hashsum
url: /commands/rclone_hashsum/
groups: Filter,Listing
versionIntroduced: v1.41
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/hashsum/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone link"
description: "Generate public link to file/folder."
slug: rclone_link
url: /commands/rclone_link/
versionIntroduced: v1.41
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/link/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone listremotes"
description: "List all the remotes in the config file and defined in environment variables."
slug: rclone_listremotes
url: /commands/rclone_listremotes/
versionIntroduced: v1.34
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/listremotes/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone ls"
description: "List the objects in the path with size and path."
slug: rclone_ls
url: /commands/rclone_ls/
groups: Filter,Listing
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/ls/ and as part of making a release run "make commanddocs"
---
# rclone ls

View File

@@ -1,9 +1,6 @@
---
title: "rclone lsd"
description: "List all directories/containers/buckets in the path."
slug: rclone_lsd
url: /commands/rclone_lsd/
groups: Filter,Listing
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/lsd/ and as part of making a release run "make commanddocs"
---
# rclone lsd

View File

@@ -1,9 +1,6 @@
---
title: "rclone lsf"
description: "List directories and objects in remote:path formatted for parsing."
slug: rclone_lsf
url: /commands/rclone_lsf/
groups: Filter,Listing
versionIntroduced: v1.40
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/lsf/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone lsjson"
description: "List directories and objects in the path in JSON format."
slug: rclone_lsjson
url: /commands/rclone_lsjson/
groups: Filter,Listing
versionIntroduced: v1.37
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/lsjson/ and as part of making a release run "make commanddocs"
---
@@ -37,7 +34,7 @@ The output is an array of Items, where each Item looks like this
"Tier" : "hot",
}
If `--hash` is not specified the Hashes property won't be emitted. The
If `--hash` is not specified, the Hashes property will be omitted. The
types of hash can be specified with the `--hash-type` parameter (which
may be repeated). If `--hash-type` is set then it implies `--hash`.
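For example, to include both MD5 and SHA-1 hashes in the listing (assuming the remote can supply them; the hash names are the same ones `rclone hashsum` accepts):

```sh
rclone lsjson --hash-type md5 --hash-type sha1 remote:path
```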
@@ -49,7 +46,7 @@ If `--no-mimetype` is specified then MimeType will be blank. This can
speed things up on remotes where reading the MimeType takes an extra
request (e.g. s3, swift).
If `--encrypted` is not specified the Encrypted won't be emitted.
If `--encrypted` is not specified, the Encrypted property will be omitted.
If `--dirs-only` is not specified, files in addition to directories are
returned.

View File

@@ -1,9 +1,6 @@
---
title: "rclone lsl"
description: "List the objects in path with modification time, size and path."
slug: rclone_lsl
url: /commands/rclone_lsl/
groups: Filter,Listing
versionIntroduced: v1.02
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/lsl/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone md5sum"
description: "Produces an md5sum file for all the objects in the path."
slug: rclone_md5sum
url: /commands/rclone_md5sum/
groups: Filter,Listing
versionIntroduced: v1.02
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/md5sum/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone mkdir"
description: "Make the path if it doesn't already exist."
slug: rclone_mkdir
url: /commands/rclone_mkdir/
groups: Important
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/mkdir/ and as part of making a release run "make commanddocs"
---
# rclone mkdir

View File

@@ -1,9 +1,6 @@
---
title: "rclone mount"
description: "Mount the remote as file system on a mountpoint."
slug: rclone_mount
url: /commands/rclone_mount/
groups: Filter
versionIntroduced: v1.33
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/mount/ and as part of making a release run "make commanddocs"
---
@@ -13,9 +10,8 @@ Mount the remote as file system on a mountpoint.
## Synopsis
rclone mount allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
FUSE.
Rclone mount allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with FUSE.
First set up your remote using `rclone config`. Check it works with `rclone ls` etc.
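As a minimal sketch of a mount session on Linux (the remote name and mountpoint are illustrative):

```sh
# Create a mountpoint and mount the remote in the background.
mkdir -p ~/mnt/remote
rclone mount remote:path ~/mnt/remote --daemon

# When finished, unmount it.
fusermount -u ~/mnt/remote
```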
@@ -830,6 +826,7 @@ result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
```
rclone mount remote:path /path/to/mountpoint [flags]
```
@@ -850,6 +847,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--devname string Set the device name - default is remote:path
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--direct-io Use Direct IO, disables caching of data
--file-perms FileMode File permissions (default 0666)
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)

View File

@@ -1,9 +1,6 @@
---
title: "rclone move"
description: "Move files from source to dest."
slug: rclone_move
url: /commands/rclone_move/
groups: Filter,Listing,Important,Copy
versionIntroduced: v1.19
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/move/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone moveto"
description: "Move file or directory from source to dest."
slug: rclone_moveto
url: /commands/rclone_moveto/
groups: Filter,Listing,Important,Copy
versionIntroduced: v1.35
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/moveto/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone ncdu"
description: "Explore a remote with a text based user interface."
slug: rclone_ncdu
url: /commands/rclone_ncdu/
groups: Filter,Listing
versionIntroduced: v1.37
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/ncdu/ and as part of making a release run "make commanddocs"
---
@@ -46,7 +43,8 @@ press '?' to toggle the help on and off. The supported keys are:
^L refresh screen (fix screen corruption)
r recalculate file sizes
? to toggle help on and off
q/ESC/^c to quit
ESC to close the menu box
q/^c to quit
Listed files/directories may be prefixed by a one-character flag,
some of them combined with a description in brackets at the end of the line.

View File

@@ -1,9 +1,6 @@
---
title: "rclone nfsmount"
description: "Mount the remote as file system on a mountpoint."
slug: rclone_nfsmount
url: /commands/rclone_nfsmount/
groups: Filter
status: Experimental
versionIntroduced: v1.65
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/nfsmount/ and as part of making a release run "make commanddocs"
@@ -14,9 +11,8 @@ Mount the remote as file system on a mountpoint.
## Synopsis
rclone nfsmount allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
FUSE.
Rclone nfsmount allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with FUSE.
First set up your remote using `rclone config`. Check it works with `rclone ls` etc.
@@ -831,6 +827,7 @@ result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
```
rclone nfsmount remote:path /path/to/mountpoint [flags]
```
@@ -852,6 +849,7 @@ rclone nfsmount remote:path /path/to/mountpoint [flags]
--devname string Set the device name - default is remote:path
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--direct-io Use Direct IO, disables caching of data
--file-perms FileMode File permissions (default 0666)
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)

View File

@@ -1,8 +1,6 @@
---
title: "rclone obscure"
description: "Obscure password for use in the rclone config file."
slug: rclone_obscure
url: /commands/rclone_obscure/
versionIntroduced: v1.36
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/obscure/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone purge"
description: "Remove the path and all of its contents."
slug: rclone_purge
url: /commands/rclone_purge/
groups: Important
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/purge/ and as part of making a release run "make commanddocs"
---
# rclone purge

View File

@@ -1,8 +1,6 @@
---
title: "rclone rc"
description: "Run a command against a running rclone."
slug: rclone_rc
url: /commands/rclone_rc/
versionIntroduced: v1.40
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/rc/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone rcat"
description: "Copies standard input to file on remote."
slug: rclone_rcat
url: /commands/rclone_rcat/
groups: Important
versionIntroduced: v1.38
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/rcat/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone rcd"
description: "Run rclone listening to remote control commands only."
slug: rclone_rcd
url: /commands/rclone_rcd/
groups: RC
versionIntroduced: v1.45
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/rcd/ and as part of making a release run "make commanddocs"
---
@@ -13,7 +10,6 @@ Run rclone listening to remote control commands only.
## Synopsis
This runs rclone so that it only listens to remote control commands.
This is useful if you are controlling rclone via the rc API.
@@ -67,7 +63,7 @@ of that with the CA certificate. `--rc-key` should be the PEM encoded
private key and `--rc-client-ca` should be the PEM encoded client
certificate authority certificate.
--rc-min-tls-version is minimum TLS version that is acceptable. Valid
`--rc-min-tls-version` is the minimum TLS version that is acceptable. Valid
values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
"tls1.0").
@@ -135,6 +131,7 @@ Use `--rc-realm` to set the authentication realm.
Use `--rc-salt` to change the password hashing salt from the default.
```
rclone rcd <path to files to serve>* [flags]
```
@@ -170,6 +167,7 @@ Flags to control the Remote Control API.
--rc-realm string Realm for authentication
--rc-salt string Password hashing salt (default "dlPL2MqE")
--rc-serve Enable the serving of remote objects
--rc-serve-no-modtime Don't read the modification time (can speed things up)
--rc-server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template

View File

@@ -1,9 +1,6 @@
---
title: "rclone rmdir"
description: "Remove the empty directory at path."
slug: rclone_rmdir
url: /commands/rclone_rmdir/
groups: Important
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/rmdir/ and as part of making a release run "make commanddocs"
---
# rclone rmdir

View File

@@ -1,9 +1,6 @@
---
title: "rclone rmdirs"
description: "Remove empty directories under the path."
slug: rclone_rmdirs
url: /commands/rclone_rmdirs/
groups: Important
versionIntroduced: v1.35
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/rmdirs/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone selfupdate"
description: "Update the rclone binary."
slug: rclone_selfupdate
url: /commands/rclone_selfupdate/
versionIntroduced: v1.55
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/selfupdate/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,8 +1,6 @@
---
title: "rclone serve"
description: "Serve a remote over a protocol."
slug: rclone_serve
url: /commands/rclone_serve/
versionIntroduced: v1.39
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/ and as part of making a release run "make commanddocs"
---

View File

@@ -1,9 +1,6 @@
---
title: "rclone serve dlna"
description: "Serve remote:path over DLNA"
slug: rclone_serve_dlna
url: /commands/rclone_serve_dlna/
groups: Filter
versionIntroduced: v1.46
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/dlna/ and as part of making a release run "make commanddocs"
---
@@ -24,7 +21,6 @@ based on media formats or file extensions. Additionally, there is no
media transcoding support. This means that some players might show
files that they are not able to play back correctly.
## Server options
Use `--addr` to specify which IP address and port the server should
@@ -391,6 +387,7 @@ result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
```
rclone serve dlna remote:path [flags]
```

View File

@@ -1,9 +1,6 @@
---
title: "rclone serve docker"
description: "Serve any remote on docker's volume plugin API."
slug: rclone_serve_docker
url: /commands/rclone_serve_docker/
groups: Filter
versionIntroduced: v1.56
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/docker/ and as part of making a release run "make commanddocs"
---
@@ -406,6 +403,7 @@ result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
```
rclone serve docker [flags]
```
@@ -427,6 +425,7 @@ rclone serve docker [flags]
--devname string Set the device name - default is remote:path
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--direct-io Use Direct IO, disables caching of data
--file-perms FileMode File permissions (default 0666)
--forget-state Skip restoring previous state
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)

View File

@@ -1,9 +1,6 @@
---
title: "rclone serve ftp"
description: "Serve remote:path over FTP."
slug: rclone_serve_ftp
url: /commands/rclone_serve_ftp/
groups: Filter
versionIntroduced: v1.44
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/ftp/ and as part of making a release run "make commanddocs"
---
@@ -13,7 +10,6 @@ Serve remote:path over FTP.
## Synopsis
Run a basic FTP server to serve a remote over the FTP protocol.
This can be viewed with an FTP client or you can make a remote of
type FTP to read and write it.
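For example, a quick authenticated server on a custom port might look like this sketch (the address, username and password are illustrative):

```sh
rclone serve ftp remote:path --addr :2121 --user myuser --pass mypass
```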
@@ -469,6 +465,7 @@ This can be used to build general purpose proxies to any kind of
backend that rclone supports.
```
rclone serve ftp remote:path [flags]
```

View File

@@ -1,9 +1,6 @@
---
title: "rclone serve http"
description: "Serve the remote over HTTP."
slug: rclone_serve_http
url: /commands/rclone_serve_http/
groups: Filter
versionIntroduced: v1.39
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/http/ and as part of making a release run "make commanddocs"
---
@@ -68,7 +65,7 @@ of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
certificate authority certificate.
--min-tls-version is minimum TLS version that is acceptable. Valid
`--min-tls-version` is the minimum TLS version that is acceptable. Valid
values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
"tls1.0").
@@ -570,6 +567,7 @@ This can be used to build general purpose proxies to any kind of
backend that rclone supports.
```
rclone serve http remote:path [flags]
```

View File

@@ -1,9 +1,6 @@
---
title: "rclone serve nfs"
description: "Serve the remote as an NFS mount"
slug: rclone_serve_nfs
url: /commands/rclone_serve_nfs/
groups: Filter
status: Experimental
versionIntroduced: v1.65
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/nfs/ and as part of making a release run "make commanddocs"
@@ -399,6 +396,7 @@ result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
```
rclone serve nfs remote:path [flags]
```

Some files were not shown because too many files have changed in this diff.