mirror of https://github.com/rclone/rclone.git synced 2026-01-29 15:53:26 +00:00

Compare commits


3 Commits

Author SHA1 Message Date
albertony
ed85edef50 docs: fix markdownlint issue MD013/line-length Line length 2025-12-22 22:22:44 +01:00
albertony
3885800959 docs: fix markdownlint issue MD060/table-column-style 2025-12-22 22:22:44 +01:00
dependabot[bot]
698373fd5c build: bump DavidAnson/markdownlint-cli2-action from 20 to 22
Bumps [DavidAnson/markdownlint-cli2-action](https://github.com/davidanson/markdownlint-cli2-action) from 20 to 22.
- [Release notes](https://github.com/davidanson/markdownlint-cli2-action/releases)
- [Commits](https://github.com/davidanson/markdownlint-cli2-action/compare/v20...v22)

---
updated-dependencies:
- dependency-name: DavidAnson/markdownlint-cli2-action
  dependency-version: '22'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-22 22:22:37 +01:00
149 changed files with 1885 additions and 6440 deletions

View File

@@ -283,7 +283,7 @@ jobs:
run: govulncheck ./...
- name: Check Markdown format
uses: DavidAnson/markdownlint-cli2-action@v20
uses: DavidAnson/markdownlint-cli2-action@v22
with:
globs: |
CONTRIBUTING.md

View File

@@ -412,8 +412,8 @@ the source file in the `Help:` field:
- The `backenddocs` make target runs the Python script `bin/make_backend_docs.py`,
and you can also run this directly, optionally with the name of a backend
as argument to only update the docs for a specific backend.
- **Do not** commit the updated Markdown files. This operation is run as part of
the release process. Since any manual changes in the autogenerated sections
- **Do not** commit the updated Markdown files. This operation is run as part
of the release process. Since any manual changes in the autogenerated sections
of the Markdown files will then be lost, we have a pull request check that
reports error for any changes within the autogenerated sections. Should you
have done manual changes outside of the autogenerated sections they must be
@@ -580,7 +580,8 @@ remote or an fs.
make sure we can encode any path name and `rclone info` to help determine the
encodings needed
- `rclone purge -v TestRemote:rclone-info`
- `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
- `rclone test info --all --remote-encoding None -vv --write-json remote.json
TestRemote:rclone-info`
- `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
- open `remote.csv` in a spreadsheet and examine
@@ -632,22 +633,14 @@ Add your backend to the docs - you'll need to pick an icon for it from
alphabetical order of full name of remote (e.g. `drive` is ordered as
`Google Drive`) but with the local file system last.
First add a data file about your backend in
`docs/data/backends/remote.yaml` - this is used to build the overview
tables and the tiering info.
- Create it with: `bin/manage_backends.py create docs/data/backends/remote.yaml`
- Edit it to fill in the blanks. Look at the [tiers docs](https://rclone.org/tiers/).
- Run this command to fill in the features: `bin/manage_backends.py features docs/data/backends/remote.yaml`
Next edit these files:
- `README.md` - main GitHub page
- `docs/content/remote.md` - main docs page (note the backend options are
automatically added to this file with `make backenddocs`)
- make sure this has the `autogenerated options` comments in (see your
reference backend docs)
- update them in your backend with `bin/make_backend_docs.py remote`
- `docs/content/overview.md` - overview docs - add an entry into the Features
table and the Optional Features table.
- `docs/content/docs.md` - list of remotes in config section
- `docs/content/_index.md` - front page of rclone.org
- `docs/layouts/chrome/navbar.html` - add it to the website navigation

View File

@@ -2,27 +2,27 @@
Current active maintainers of rclone are:
| Name | GitHub ID | Specific Responsibilities |
| :--------------- | :---------------- | :-------------------------- |
| Nick Craig-Wood | @ncw | overall project health |
| Stefan Breunig | @breunigs | |
| Ishuah Kariuki | @ishuah | |
| Remus Bunduc | @remusb | cache backend |
| Fabian Möller | @B4dM4n | |
| Alex Chen | @Cnly | onedrive backend |
| Sandeep Ummadi | @sandeepkru | azureblob backend |
| Name | GitHub ID | Specific Responsibilities |
| :--------------- | :---------------- | :------------------------------------- |
| Nick Craig-Wood | @ncw | overall project health |
| Stefan Breunig | @breunigs | |
| Ishuah Kariuki | @ishuah | |
| Remus Bunduc | @remusb | cache backend |
| Fabian Möller | @B4dM4n | |
| Alex Chen | @Cnly | onedrive backend |
| Sandeep Ummadi | @sandeepkru | azureblob backend |
| Sebastian Bünger | @buengese | jottacloud, yandex & compress backends |
| Ivan Andreev | @ivandeex | chunker & mailru backends |
| Max Sum | @Max-Sum | union backend |
| Fred | @creativeprojects | seafile backend |
| Caleb Case | @calebcase | storj backend |
| wiserain | @wiserain | pikpak backend |
| albertony | @albertony | |
| Chun-Hung Tseng | @henrybear327 | Proton Drive Backend |
| Hideo Aoyama | @boukendesho | snap packaging |
| nielash | @nielash | bisync |
| Dan McArdle | @dmcardle | gitannex |
| Sam Harrison | @childish-sambino | filescom |
| Ivan Andreev | @ivandeex | chunker & mailru backends |
| Max Sum | @Max-Sum | union backend |
| Fred | @creativeprojects | seafile backend |
| Caleb Case | @calebcase | storj backend |
| wiserain | @wiserain | pikpak backend |
| albertony | @albertony | |
| Chun-Hung Tseng | @henrybear327 | Proton Drive Backend |
| Hideo Aoyama | @boukendesho | snap packaging |
| nielash | @nielash | bisync |
| Dan McArdle | @dmcardle | gitannex |
| Sam Harrison | @childish-sambino | filescom |
## This is a work in progress draft

View File

@@ -28,25 +28,21 @@ directories to and from different cloud storage providers.
- Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
- Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
- ArvanCloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
- Bizfly Cloud Simple Storage [:page_facing_up:](https://rclone.org/s3/#bizflycloud)
- Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
- Box [:page_facing_up:](https://rclone.org/box/)
- Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
- China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
- Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
- Cloudflare R2 [:page_facing_up:](https://rclone.org/s3/#cloudflare-r2)
- Cloudinary [:page_facing_up:](https://rclone.org/cloudinary/)
- Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
- Cubbit DS3 [:page_facing_up:](https://rclone.org/s3/#Cubbit)
- DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
- Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)
- Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
- Drime [:page_facing_up:](https://rclone.org/s3/#drime)
- Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
- Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
- Exaba [:page_facing_up:](https://rclone.org/s3/#exaba)
- Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
- FileLu [:page_facing_up:](https://rclone.org/filelu/)
- Filen [:page_facing_up:](https://rclone.org/filen/)
- Files.com [:page_facing_up:](https://rclone.org/filescom/)
- FlashBlade [:page_facing_up:](https://rclone.org/s3/#pure-storage-flashblade)
- FTP [:page_facing_up:](https://rclone.org/ftp/)

View File

@@ -16,13 +16,11 @@ import (
_ "github.com/rclone/rclone/backend/compress"
_ "github.com/rclone/rclone/backend/crypt"
_ "github.com/rclone/rclone/backend/doi"
_ "github.com/rclone/rclone/backend/drime"
_ "github.com/rclone/rclone/backend/drive"
_ "github.com/rclone/rclone/backend/dropbox"
_ "github.com/rclone/rclone/backend/fichier"
_ "github.com/rclone/rclone/backend/filefabric"
_ "github.com/rclone/rclone/backend/filelu"
_ "github.com/rclone/rclone/backend/filen"
_ "github.com/rclone/rclone/backend/filescom"
_ "github.com/rclone/rclone/backend/ftp"
_ "github.com/rclone/rclone/backend/gofile"
@@ -66,6 +64,7 @@ import (
_ "github.com/rclone/rclone/backend/swift"
_ "github.com/rclone/rclone/backend/ulozto"
_ "github.com/rclone/rclone/backend/union"
_ "github.com/rclone/rclone/backend/uptobox"
_ "github.com/rclone/rclone/backend/webdav"
_ "github.com/rclone/rclone/backend/yandex"
_ "github.com/rclone/rclone/backend/zoho"

View File

@@ -77,7 +77,7 @@ The DOI provider can be set when rclone does not automatically recognize a suppo
Name: "doi_resolver_api_url",
Help: `The URL of the DOI resolver API to use.
The DOI resolver can be set for testing or for cases when the canonical DOI resolver API cannot be used.
The DOI resolver can be set for testing or for cases when the the canonical DOI resolver API cannot be used.
Defaults to "https://doi.org/api".`,
Required: false,

View File

@@ -1,237 +0,0 @@
// Package api has type definitions for drime
//
// Converted from the API docs with help from https://mholt.github.io/json-to-go/
package api
import (
"encoding/json"
"fmt"
"time"
)
// Types of things in Item
const (
ItemTypeFolder = "folder"
)
// User information
type User struct {
Email string `json:"email"`
ID json.Number `json:"id"`
Avatar string `json:"avatar"`
ModelType string `json:"model_type"`
OwnsEntry bool `json:"owns_entry"`
EntryPermissions []any `json:"entry_permissions"`
DisplayName string `json:"display_name"`
}
// Permissions for a file
type Permissions struct {
FilesUpdate bool `json:"files.update"`
FilesCreate bool `json:"files.create"`
FilesDownload bool `json:"files.download"`
FilesDelete bool `json:"files.delete"`
}
// Item describes a folder or a file as returned by /drive/file-entries
type Item struct {
ID json.Number `json:"id"`
Name string `json:"name"`
Description any `json:"description"`
FileName string `json:"file_name"`
Mime string `json:"mime"`
Color any `json:"color"`
Backup bool `json:"backup"`
Tracked int `json:"tracked"`
FileSize int64 `json:"file_size"`
UserID json.Number `json:"user_id"`
ParentID json.Number `json:"parent_id"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
DeletedAt any `json:"deleted_at"`
IsDeleted int `json:"is_deleted"`
Path string `json:"path"`
DiskPrefix any `json:"disk_prefix"`
Type string `json:"type"`
Extension any `json:"extension"`
FileHash any `json:"file_hash"`
Public bool `json:"public"`
Thumbnail bool `json:"thumbnail"`
MuxStatus any `json:"mux_status"`
ThumbnailURL any `json:"thumbnail_url"`
WorkspaceID int `json:"workspace_id"`
IsEncrypted int `json:"is_encrypted"`
Iv any `json:"iv"`
VaultID any `json:"vault_id"`
OwnerID int `json:"owner_id"`
Hash string `json:"hash"`
URL string `json:"url"`
Users []User `json:"users"`
Tags []any `json:"tags"`
Permissions Permissions `json:"permissions"`
}
// Listing response
type Listing struct {
CurrentPage int `json:"current_page"`
Data []Item `json:"data"`
From int `json:"from"`
LastPage int `json:"last_page"`
NextPage int `json:"next_page"`
PerPage int `json:"per_page"`
PrevPage int `json:"prev_page"`
To int `json:"to"`
Total int `json:"total"`
}
// UploadResponse for a file
type UploadResponse struct {
Status string `json:"status"`
FileEntry Item `json:"fileEntry"`
}
// CreateFolderRequest for a folder
type CreateFolderRequest struct {
Name string `json:"name"`
ParentID json.Number `json:"parentId,omitempty"`
}
// CreateFolderResponse for a folder
type CreateFolderResponse struct {
Status string `json:"status"`
Folder Item `json:"folder"`
}
// Error is returned from drime when things go wrong
type Error struct {
Message string `json:"message"`
}
// Error returns a string for the error and satisfies the error interface
func (e Error) Error() string {
out := fmt.Sprintf("Error %q", e.Message)
return out
}
// Check Error satisfies the error interface
var _ error = (*Error)(nil)
// DeleteRequest is the input to DELETE /file-entries
type DeleteRequest struct {
EntryIDs []string `json:"entryIds"`
DeleteForever bool `json:"deleteForever"`
}
// DeleteResponse is the input to DELETE /file-entries
type DeleteResponse struct {
Status string `json:"status"`
Message string `json:"message"`
Errors map[string]string `json:"errors"`
}
// UpdateItemRequest describes the updates to be done to an item for PUT /file-entries/{id}/
type UpdateItemRequest struct {
Name string `json:"name,omitempty"`
Description string `json:"description,omitempty"`
}
// UpdateItemResponse is returned by PUT /file-entries/{id}/
type UpdateItemResponse struct {
Status string `json:"status"`
FileEntry Item `json:"fileEntry"`
}
// MoveRequest is the input to /file-entries/move
type MoveRequest struct {
EntryIDs []string `json:"entryIds"`
DestinationID string `json:"destinationId"`
}
// MoveResponse is returned by POST /file-entries/move
type MoveResponse struct {
Status string `json:"status"`
Entries []Item `json:"entries"`
}
// CopyRequest is the input to /file-entries/duplicate
type CopyRequest struct {
EntryIDs []string `json:"entryIds"`
DestinationID string `json:"destinationId"`
}
// CopyResponse is returned by POST /file-entries/duplicate
type CopyResponse struct {
Status string `json:"status"`
Entries []Item `json:"entries"`
}
// MultiPartCreateRequest is the input of POST /s3/multipart/create
type MultiPartCreateRequest struct {
Filename string `json:"filename"`
Mime string `json:"mime"`
Size int64 `json:"size"`
Extension string `json:"extension"`
ParentID json.Number `json:"parent_id"`
RelativePath string `json:"relativePath"`
}
// MultiPartCreateResponse is returned by POST /s3/multipart/create
type MultiPartCreateResponse struct {
UploadID string `json:"uploadId"`
Key string `json:"key"`
}
// CompletedPart Type for completed parts when making a multipart upload.
type CompletedPart struct {
ETag string `json:"ETag"`
PartNumber int32 `json:"PartNumber"`
}
// MultiPartGetURLsRequest is the input of POST /s3/multipart/batch-sign-part-urls
type MultiPartGetURLsRequest struct {
UploadID string `json:"uploadId"`
Key string `json:"key"`
PartNumbers []int `json:"partNumbers"`
}
// MultiPartGetURLsResponse is the result of POST /s3/multipart/batch-sign-part-urls
type MultiPartGetURLsResponse struct {
URLs []struct {
URL string `json:"url"`
PartNumber int32 `json:"partNumber"`
} `json:"urls"`
}
// MultiPartCompleteRequest is the input to POST /s3/multipart/complete
type MultiPartCompleteRequest struct {
UploadID string `json:"uploadId"`
Key string `json:"key"`
Parts []CompletedPart `json:"parts"`
}
// MultiPartCompleteResponse is the result of POST /s3/multipart/complete
type MultiPartCompleteResponse struct {
Location string `json:"location"`
}
// MultiPartEntriesRequest is the input to POST /s3/entries
type MultiPartEntriesRequest struct {
ClientMime string `json:"clientMime"`
ClientName string `json:"clientName"`
Filename string `json:"filename"`
Size int64 `json:"size"`
ClientExtension string `json:"clientExtension"`
ParentID json.Number `json:"parent_id"`
RelativePath string `json:"relativePath"`
}
// MultiPartEntriesResponse is the result of POST /s3/entries
type MultiPartEntriesResponse struct {
FileEntry Item `json:"fileEntry"`
}
// MultiPartAbort is the input of POST /s3/multipart/abort
type MultiPartAbort struct {
UploadID string `json:"uploadId"`
Key string `json:"key"`
}

File diff suppressed because it is too large.

View File

@@ -1,33 +0,0 @@
// Drime filesystem interface
package drime
import (
"testing"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestDrime:",
NilObject: (*Object)(nil),
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: minChunkSize,
},
})
}
func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetUploadCutoffer = (*Fs)(nil)
)

File diff suppressed because it is too large.

View File

@@ -1,14 +0,0 @@
package filen
import (
"testing"
"github.com/rclone/rclone/fstest/fstests"
)
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFilen:",
NilObject: (*Object)(nil),
})
}

View File

@@ -204,12 +204,6 @@ Example:
Help: `URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
Supports the format http://user:pass@host:port, http://host:port, http://host.
Example:
http://myUser:myPass@proxyhostname.example.com:8000
`,
Advanced: true,
}, {
@@ -898,7 +892,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
resultchan := make(chan []*ftp.Entry, 1)
errchan := make(chan error, 1)
go func(c *ftp.ServerConn) {
go func() {
result, err := c.List(f.dirFromStandardPath(path.Join(f.root, dir)))
f.putFtpConnection(&c, err)
if err != nil {
@@ -906,7 +900,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return
}
resultchan <- result
}(c)
}()
// Wait for List for up to Timeout seconds
timer := time.NewTimer(f.ci.TimeoutOrInfinite())
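
The hunk above keeps the existing pattern of running a blocking List call in a goroutine and racing it against a timer; it only changes how the connection is captured. For reference, a minimal, self-contained sketch of that timeout pattern, with illustrative names rather than rclone's:

```go
// Sketch of the goroutine-plus-timer timeout pattern used above.
// Names are illustrative, not rclone's.
package main

import (
	"errors"
	"fmt"
	"time"
)

func listWithTimeout(list func() ([]string, error), timeout time.Duration) ([]string, error) {
	// Buffered channels so the goroutine can always deliver its
	// result and exit, even after a timeout.
	resultchan := make(chan []string, 1)
	errchan := make(chan error, 1)
	go func() {
		result, err := list()
		if err != nil {
			errchan <- err
			return
		}
		resultchan <- result
	}()
	timer := time.NewTimer(timeout)
	defer timer.Stop()
	select {
	case result := <-resultchan:
		return result, nil
	case err := <-errchan:
		return nil, err
	case <-timer.C:
		return nil, errors.New("listing timed out")
	}
}

func main() {
	slow := func() ([]string, error) {
		time.Sleep(50 * time.Millisecond)
		return []string{"a", "b"}, nil
	}
	entries, err := listWithTimeout(slow, time.Second)
	fmt.Println(entries, err)
}
```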

View File

@@ -72,7 +72,7 @@ func (ik *ImageKit) Upload(ctx context.Context, file io.Reader, param UploadPara
response := &UploadResult{}
formReader, contentType, _, err := rest.MultipartUpload(ctx, file, formParams, "file", param.FileName, "application/octet-stream")
formReader, contentType, _, err := rest.MultipartUpload(ctx, file, formParams, "file", param.FileName)
if err != nil {
return nil, nil, fmt.Errorf("failed to make multipart upload: %w", err)
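
This hunk, like several others in this compare, drops the trailing content-type argument from the rest.MultipartUpload call. As background only, and not rclone's implementation, here is a minimal stdlib sketch of what a streaming multipart helper of this shape can do: write the form through an io.Pipe so the file body is never buffered in memory.

```go
// Background sketch (not rclone's rest.MultipartUpload): stream form
// parameters plus one file field as a multipart body.
package main

import (
	"fmt"
	"io"
	"mime/multipart"
	"net/url"
	"strings"
)

func multipartUpload(in io.Reader, params url.Values, fileField, fileName string) (io.Reader, string) {
	pr, pw := io.Pipe()
	w := multipart.NewWriter(pw)
	go func() {
		var err error
		defer func() {
			_ = w.Close()           // flush the closing boundary
			pw.CloseWithError(err) // nil err closes the pipe cleanly
		}()
		for key, vals := range params {
			for _, v := range vals {
				if err = w.WriteField(key, v); err != nil {
					return
				}
			}
		}
		var part io.Writer
		if part, err = w.CreateFormFile(fileField, fileName); err != nil {
			return
		}
		_, err = io.Copy(part, in)
	}()
	return pr, w.FormDataContentType()
}

func main() {
	body, contentType := multipartUpload(strings.NewReader("hello"),
		url.Values{"replace": {"1"}}, "file", "greeting.txt")
	b, _ := io.ReadAll(body)
	fmt.Println(contentType, len(b))
}
```

Because the writing happens in a goroutine, the returned reader can be handed straight to an HTTP request body.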

View File

@@ -17,10 +17,12 @@ Improvements:
import (
"context"
"crypto/tls"
"encoding/base64"
"errors"
"fmt"
"io"
"net/http"
"path"
"slices"
"strings"
@@ -254,7 +256,25 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
defer megaCacheMu.Unlock()
srv := megaCache[opt.User]
if srv == nil {
srv = mega.New().SetClient(fshttp.NewClient(ctx))
// srv = mega.New().SetClient(fshttp.NewClient(ctx))
// Workaround for Mega's use of insecure cipher suites which are no longer supported by default since Go 1.22.
// Relevant issues:
// https://github.com/rclone/rclone/issues/8565
// https://github.com/meganz/webclient/issues/103
clt := fshttp.NewClient(ctx)
clt.Transport = fshttp.NewTransportCustom(ctx, func(t *http.Transport) {
var ids []uint16
// Read default ciphers
for _, cs := range tls.CipherSuites() {
ids = append(ids, cs.ID)
}
// Insecure but Mega uses TLS_RSA_WITH_AES_128_GCM_SHA256 for storage endpoints
// (e.g. https://gfs302n114.userstorage.mega.co.nz) as of June 18, 2025.
t.TLSClientConfig.CipherSuites = append(ids, tls.TLS_RSA_WITH_AES_128_GCM_SHA256)
})
srv = mega.New().SetClient(clt)
srv.SetRetries(ci.LowLevelRetries) // let mega do the low level retries
srv.SetHTTPS(opt.UseHTTPS)
srv.SetLogger(func(format string, v ...any) {
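
The workaround above appends one legacy suite to Go's secure defaults. A self-contained sketch of just that cipher-suite construction, using only crypto/tls (the rclone-specific fshttp transport hook is omitted):

```go
// Extend Go's default TLS cipher suites with a single insecure suite
// that a legacy server still requires.
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Collect Go's current secure defaults.
	var ids []uint16
	for _, cs := range tls.CipherSuites() {
		ids = append(ids, cs.ID)
	}
	// TLS_RSA_WITH_AES_128_GCM_SHA256 lives in tls.InsecureCipherSuites(),
	// so it must be re-added explicitly, opt-in only.
	ids = append(ids, tls.TLS_RSA_WITH_AES_128_GCM_SHA256)
	// Note: Config.CipherSuites applies to TLS 1.2 and below; TLS 1.3
	// suites are not configurable.
	cfg := &tls.Config{CipherSuites: ids}
	fmt.Printf("client will offer %d suites\n", len(cfg.CipherSuites))
}
```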

View File

@@ -403,7 +403,7 @@ This is why this flag is not set as the default.
As a rule of thumb if nearly all of your data is under rclone's root
directory (the |root/directory| in |onedrive:root/directory|) then
using this flag will be a big performance win. If your data is
using this flag will be be a big performance win. If your data is
mostly not under the root then using this flag will be a big
performance loss.

View File

@@ -60,6 +60,9 @@ type StateChangeConf struct {
func (conf *StateChangeConf) WaitForStateContext(ctx context.Context, entityType string) (any, error) {
// fs.Debugf(entityType, "Waiting for state to become: %s", conf.Target)
notfoundTick := 0
targetOccurrence := 0
// Set a default for times to check for not found
if conf.NotFoundChecks == 0 {
conf.NotFoundChecks = 20
@@ -81,11 +84,9 @@ func (conf *StateChangeConf) WaitForStateContext(ctx context.Context, entityType
// cancellation channel for the refresh loop
cancelCh := make(chan struct{})
go func() {
notfoundTick := 0
targetOccurrence := 0
result := Result{}
result := Result{}
go func() {
defer close(resCh)
select {

View File

@@ -1459,7 +1459,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// opts.Body=0), so upload it as a multipart form POST with
// Content-Length set.
if size == 0 {
formReader, contentType, overhead, err := rest.MultipartUpload(ctx, in, opts.Parameters, "content", leaf, opts.ContentType)
formReader, contentType, overhead, err := rest.MultipartUpload(ctx, in, opts.Parameters, "content", leaf)
if err != nil {
return fmt.Errorf("failed to make multipart upload for 0 length file: %w", err)
}

View File

@@ -1384,7 +1384,7 @@ func (f *Fs) uploadByForm(ctx context.Context, in io.Reader, name string, size i
for i := range iVal.NumField() {
params.Set(iTyp.Field(i).Tag.Get("json"), iVal.Field(i).String())
}
formReader, contentType, overhead, err := rest.MultipartUpload(ctx, in, params, "file", name, "application/octet-stream")
formReader, contentType, overhead, err := rest.MultipartUpload(ctx, in, params, "file", name)
if err != nil {
return fmt.Errorf("failed to make multipart upload: %w", err)
}

View File

@@ -10,8 +10,8 @@ import (
"strings"
"time"
protonDriveAPI "github.com/rclone/Proton-API-Bridge"
"github.com/rclone/go-proton-api"
protonDriveAPI "github.com/henrybear327/Proton-API-Bridge"
"github.com/henrybear327/go-proton-api"
"github.com/pquerna/otp/totp"

View File

@@ -1,15 +0,0 @@
name: BizflyCloud
description: Bizfly Cloud Simple Storage
region:
hn: Ha Noi
hcm: Ho Chi Minh
endpoint:
hn.ss.bfcplatform.vn: Hanoi endpoint
hcm.ss.bfcplatform.vn: Ho Chi Minh endpoint
acl: {}
bucket_acl: true
quirks:
force_path_style: true
list_url_encode: false
use_multipart_etag: false
use_already_exists: false

View File

@@ -688,7 +688,7 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, uploadLink, filePath stri
"need_idx_progress": {"true"},
"replace": {"1"},
}
formReader, contentType, _, err := rest.MultipartUpload(ctx, in, parameters, "file", f.opt.Enc.FromStandardName(filename), "application/octet-stream")
formReader, contentType, _, err := rest.MultipartUpload(ctx, in, parameters, "file", f.opt.Enc.FromStandardName(filename))
if err != nil {
return nil, fmt.Errorf("failed to make multipart upload: %w", err)
}

View File

@@ -519,12 +519,6 @@ Example:
Help: `URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
Supports the format http://user:pass@host:port, http://host:port, http://host.
Example:
http://myUser:myPass@proxyhostname.example.com:8000
`,
Advanced: true,
}, {
@@ -925,8 +919,15 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
opt.Port = "22"
}
// Set up sshConfig here from opt
// **NB** everything else should be setup in NewFsWithConnection
// get proxy URL if set
if opt.HTTPProxy != "" {
proxyURL, err := url.Parse(opt.HTTPProxy)
if err != nil {
return nil, fmt.Errorf("failed to parse HTTP Proxy URL: %w", err)
}
f.proxyURL = proxyURL
}
sshConfig := &ssh.ClientConfig{
User: opt.User,
Auth: []ssh.AuthMethod{},
@@ -1174,21 +1175,11 @@ func NewFsWithConnection(ctx context.Context, f *Fs, name string, root string, m
f.mkdirLock = newStringLock()
f.pacer = fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant)))
f.savedpswd = ""
// set the pool drainer timer going
if f.opt.IdleTimeout > 0 {
f.drain = time.AfterFunc(time.Duration(f.opt.IdleTimeout), func() { _ = f.drainPool(ctx) })
}
// get proxy URL if set
if opt.HTTPProxy != "" {
proxyURL, err := url.Parse(opt.HTTPProxy)
if err != nil {
return nil, fmt.Errorf("failed to parse HTTP Proxy URL: %w", err)
}
f.proxyURL = proxyURL
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
SlowHash: true,
@@ -1258,7 +1249,7 @@ func NewFsWithConnection(ctx context.Context, f *Fs, name string, root string, m
fs.Debugf(f, "Failed to resolve path using RealPath: %v", err)
cwd, err := c.sftpClient.Getwd()
if err != nil {
fs.Debugf(f, "Failed to read current directory - using relative paths: %v", err)
fs.Debugf(f, "Failed to to read current directory - using relative paths: %v", err)
} else {
f.absRoot = path.Join(cwd, f.root)
fs.Debugf(f, "Relative path joined with current directory to get absolute path %q", f.absRoot)

View File

@@ -54,7 +54,7 @@ var SharedOptions = []fs.Option{{
Name: "chunk_size",
Help: strings.ReplaceAll(`Above this size files will be chunked.
Above this size files will be chunked into a |`+segmentsContainerSuffix+`| container
Above this size files will be chunked into a a |`+segmentsContainerSuffix+`| container
or a |`+segmentsDirectory+`| directory. (See the |use_segments_container| option
for more info). Default for this is 5 GiB which is its maximum value, which
means only files above this size will be chunked.

View File

@@ -0,0 +1,171 @@
// Package api provides types used by the Uptobox API.
package api
import "fmt"
// Error contains the error code and message returned by the API
type Error struct {
Success bool `json:"success,omitempty"`
StatusCode int `json:"statusCode,omitempty"`
Message string `json:"message,omitempty"`
Data string `json:"data,omitempty"`
}
// Error returns a string for the error and satisfies the error interface
func (e Error) Error() string {
out := fmt.Sprintf("api error %d", e.StatusCode)
if e.Message != "" {
out += ": " + e.Message
}
if e.Data != "" {
out += ": " + e.Data
}
return out
}
// FolderEntry represents a Uptobox subfolder when listing folder contents
type FolderEntry struct {
FolderID uint64 `json:"fld_id"`
Description string `json:"fld_descr"`
Password string `json:"fld_password"`
FullPath string `json:"fullPath"`
Path string `json:"fld_name"`
Name string `json:"name"`
Hash string `json:"hash"`
}
// FolderInfo represents the current folder when listing folder contents
type FolderInfo struct {
FolderID uint64 `json:"fld_id"`
Hash string `json:"hash"`
FileCount uint64 `json:"fileCount"`
TotalFileSize int64 `json:"totalFileSize"`
}
// FileInfo represents a file when listing folder contents
type FileInfo struct {
Name string `json:"file_name"`
Description string `json:"file_descr"`
Created string `json:"file_created"`
Size int64 `json:"file_size"`
Downloads uint64 `json:"file_downloads"`
Code string `json:"file_code"`
Password string `json:"file_password"`
Public int `json:"file_public"`
LastDownload string `json:"file_last_download"`
ID uint64 `json:"id"`
}
// ReadMetadataResponse is the response when listing folder contents
type ReadMetadataResponse struct {
StatusCode int `json:"statusCode"`
Message string `json:"message"`
Data struct {
CurrentFolder FolderInfo `json:"currentFolder"`
Folders []FolderEntry `json:"folders"`
Files []FileInfo `json:"files"`
PageCount int `json:"pageCount"`
TotalFileCount int `json:"totalFileCount"`
TotalFileSize int64 `json:"totalFileSize"`
} `json:"data"`
}
// UploadInfo is the response when initiating an upload
type UploadInfo struct {
StatusCode int `json:"statusCode"`
Message string `json:"message"`
Data struct {
UploadLink string `json:"uploadLink"`
MaxUpload string `json:"maxUpload"`
} `json:"data"`
}
// UploadResponse is the response to a successful upload
type UploadResponse struct {
Files []struct {
Name string `json:"name"`
Size int64 `json:"size"`
URL string `json:"url"`
DeleteURL string `json:"deleteUrl"`
} `json:"files"`
}
// UpdateResponse is a generic response to various action on files (rename/copy/move)
type UpdateResponse struct {
Message string `json:"message"`
StatusCode int `json:"statusCode"`
}
// Download is the response when requesting a download link
type Download struct {
StatusCode int `json:"statusCode"`
Message string `json:"message"`
Data struct {
DownloadLink string `json:"dlLink"`
} `json:"data"`
}
// MetadataRequestOptions represents all the options when listing folder contents
type MetadataRequestOptions struct {
Limit uint64
Offset uint64
SearchField string
Search string
}
// CreateFolderRequest is used for creating a folder
type CreateFolderRequest struct {
Token string `json:"token"`
Path string `json:"path"`
Name string `json:"name"`
}
// DeleteFolderRequest is used for deleting a folder
type DeleteFolderRequest struct {
Token string `json:"token"`
FolderID uint64 `json:"fld_id"`
}
// CopyMoveFileRequest is used for moving/copying a file
type CopyMoveFileRequest struct {
Token string `json:"token"`
FileCodes string `json:"file_codes"`
DestinationFolderID uint64 `json:"destination_fld_id"`
Action string `json:"action"`
}
// MoveFolderRequest is used for moving a folder
type MoveFolderRequest struct {
Token string `json:"token"`
FolderID uint64 `json:"fld_id"`
DestinationFolderID uint64 `json:"destination_fld_id"`
Action string `json:"action"`
}
// RenameFolderRequest is used for renaming a folder
type RenameFolderRequest struct {
Token string `json:"token"`
FolderID uint64 `json:"fld_id"`
NewName string `json:"new_name"`
}
// UpdateFileInformation is used for renaming a file
type UpdateFileInformation struct {
Token string `json:"token"`
FileCode string `json:"file_code"`
NewName string `json:"new_name,omitempty"`
Description string `json:"description,omitempty"`
Password string `json:"password,omitempty"`
Public string `json:"public,omitempty"`
}
// RemoveFileRequest is used for deleting a file
type RemoveFileRequest struct {
Token string `json:"token"`
FileCodes string `json:"file_codes"`
}
// Token represents the authentication token
type Token struct {
Token string `json:"token"`
}

backend/uptobox/uptobox.go (new file, 1087 lines)

File diff suppressed because it is too large.

View File

@@ -0,0 +1,21 @@
// Test Uptobox filesystem interface
package uptobox_test
import (
"testing"
"github.com/rclone/rclone/backend/uptobox"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
*fstest.RemoteName = "TestUptobox:"
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*uptobox.Object)(nil),
})
}

View File

@@ -817,7 +817,7 @@ func (f *Fs) upload(ctx context.Context, name string, parent string, size int64,
params.Set("filename", url.QueryEscape(name))
params.Set("parent_id", parent)
params.Set("override-name-exist", strconv.FormatBool(true))
formReader, contentType, overhead, err := rest.MultipartUpload(ctx, in, nil, "content", name, "application/octet-stream")
formReader, contentType, overhead, err := rest.MultipartUpload(ctx, in, nil, "content", name)
if err != nil {
return nil, fmt.Errorf("failed to make multipart upload: %w", err)
}

View File

@@ -43,11 +43,9 @@ docs = [
"compress.md",
"combine.md",
"doi.md",
"drime.md"
"dropbox.md",
"filefabric.md",
"filelu.md",
"filen.md",
"filescom.md",
"ftp.md",
"gofile.md",
@@ -91,6 +89,7 @@ docs = [
"storj.md",
"sugarsync.md",
"ulozto.md",
"uptobox.md",
"union.md",
"webdav.md",
"yandex.md",

View File

@@ -1,300 +0,0 @@
#!/usr/bin/env python3
"""
Manage the backend yaml files in docs/data/backends
usage: manage_backends.py [-h] {create,features,update,help} [files ...]
Manage rclone backend YAML files.
positional arguments:
{create,features,update,help}
Action to perform
files List of YAML files to operate on
options:
-h, --help show this help message and exit
"""
import argparse
import sys
import os
import yaml
import json
import subprocess
import time
import socket
from contextlib import contextmanager
from pprint import pprint
# --- Configuration ---
# The order in which keys should appear in the YAML file
CANONICAL_ORDER = [
"backend",
"name",
"tier",
"maintainers",
"features_score",
"integration_tests",
"data_integrity",
"performance",
"adoption",
"docs",
"security",
"virtual",
"remote",
"features",
"hashes",
"precision"
]
# Default values for fields when creating/updating
DEFAULTS = {
"tier": None,
"maintainers": None,
"features_score": None,
"integration_tests": None,
"data_integrity": None,
"performance": None,
"adoption": None,
"docs": None,
"security": None,
"virtual": False,
"remote": None,
"features": [],
"hashes": [],
"precision": None
}
# --- Test server management ---
def wait_for_tcp(address_str, delay=1, timeout=2, tries=60):
"""
Blocks until the specified TCP address (e.g., '172.17.0.3:21') is reachable.
"""
host, port = address_str.split(":")
port = int(port)
print(f"Waiting for {host}:{port}...")
for tri in range(tries):
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
sock.settimeout(timeout)
result = sock.connect_ex((host, port))
if result == 0:
print(f"Connected to {host}:{port} successfully!")
break
else:
print(f"Failed to connect to {host}:{port} try {tri} !")
time.sleep(delay)
def parse_init_output(binary_input):
"""
Parse the output of the init script
"""
decoded_str = binary_input.decode('utf-8')
result = {}
for line in decoded_str.splitlines():
if '=' in line:
key, value = line.split('=', 1)
result[key.strip()] = value.strip()
return result
@contextmanager
def test_server(remote):
"""Start the test server for remote if needed"""
remote_name = remote.split(":",1)[0]
init_script = "fstest/testserver/init.d/" + remote_name
if not os.path.isfile(init_script):
yield
return
print(f"--- Starting {init_script} ---")
out = subprocess.check_output([init_script, "start"])
out = parse_init_output(out)
pprint(out)
# Configure the server with environment variables
env_keys = []
for key, value in out.items():
env_key = f"RCLONE_CONFIG_{remote_name.upper()}_{key.upper()}"
env_keys.append(env_key)
os.environ[env_key] = value
for key,var in os.environ.items():
if key.startswith("RCLON"):
print(key, var)
if "_connect" in out:
wait_for_tcp(out["_connect"])
try:
yield
finally:
print(f"--- Stopping {init_script} ---")
subprocess.run([init_script, "stop"], check=True)
# Remove the env vars
for env_key in env_keys:
del os.environ[env_key]
# --- Helper Functions ---
def load_yaml(filepath):
if not os.path.exists(filepath):
return {}
with open(filepath, 'r', encoding='utf-8') as f:
return yaml.safe_load(f) or {}
def save_yaml(filepath, data):
# Reconstruct dictionary in canonical order
ordered_data = {}
# Add known keys in order
for key in CANONICAL_ORDER:
if key in data:
ordered_data[key] = data[key]
# Add any other keys that might exist (custom fields)
for key in data:
if key not in CANONICAL_ORDER:
ordered_data[key] = data[key]
# Ensure features are a sorted list (if present)
if 'features' in ordered_data and isinstance(ordered_data['features'], list):
ordered_data['features'].sort()
with open(filepath, 'w', encoding='utf-8') as f:
yaml.dump(ordered_data, f, default_flow_style=False, sort_keys=False, allow_unicode=True)
print(f"Saved {filepath}")
def get_backend_name_from_file(filepath):
"""
s3.yaml -> S3
azureblob.yaml -> Azureblob
"""
basename = os.path.basename(filepath)
name, _ = os.path.splitext(basename)
return name.title()
def fetch_rclone_features(remote_str):
"""
Runs `rclone backend features remote:` and returns the JSON object.
"""
cmd = ["rclone", "backend", "features", remote_str]
try:
with test_server(remote_str):
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
return json.loads(result.stdout)
except subprocess.CalledProcessError as e:
print(f"Error running rclone: {e.stderr}")
return None
except FileNotFoundError:
print("Error: 'rclone' command not found in PATH.")
sys.exit(1)
# --- Verbs ---
def do_create(files):
for filepath in files:
if os.path.exists(filepath):
print(f"Skipping {filepath} (already exists)")
continue
data = DEFAULTS.copy()
# Set a default name based on filename
data['name'] = get_backend_name_from_file(filepath)
save_yaml(filepath, data)
def do_update(files):
for filepath in files:
if not os.path.exists(filepath):
print(f"Warning: {filepath} does not exist. Use 'create' first.")
continue
data = load_yaml(filepath)
modified = False
# Inject the filename as the 'backend'
file_backend = os.path.splitext(os.path.basename(filepath))[0]
if data.get('backend') != file_backend:
data['backend'] = file_backend
modified = True
print(f"[{filepath}] Updated backend to: {file_backend}")
# Add missing default fields
for key, default_val in DEFAULTS.items():
if key not in data:
data[key] = default_val
modified = True
print(f"[{filepath}] Added missing field: {key}")
# Special handling for 'name' if it was just added as None or didn't exist
if data.get('name') is None:
data['name'] = get_backend_name_from_file(filepath)
modified = True
print(f"[{filepath}] Set default name: {data['name']}")
if modified:
save_yaml(filepath, data)
else:
# We save anyway to enforce canonical order if the file was messy
save_yaml(filepath, data)
def do_features(files):
for filepath in files:
if not os.path.exists(filepath):
print(f"Error: {filepath} not found.")
continue
data = load_yaml(filepath)
remote = data.get('remote')
if not remote:
print(f"Error: [{filepath}] 'remote' field is missing or empty. Cannot fetch features.")
continue
print(f"[{filepath}] Fetching features for remote: '{remote}'...")
rclone_data = fetch_rclone_features(remote)
if not rclone_data:
print(f"Failed to fetch data for {filepath}")
continue
# Process Features (Dict -> Sorted List of True keys)
features_dict = rclone_data.get('Features', {})
# Filter only true values and sort keys
feature_list = sorted([k for k, v in features_dict.items() if v])
# Process Hashes
hashes_list = rclone_data.get('Hashes', [])
# Process Precision
precision = rclone_data.get('Precision')
# Update data
data['features'] = feature_list
data['hashes'] = hashes_list
data['precision'] = precision
save_yaml(filepath, data)
# --- Main CLI ---
def main():
parser = argparse.ArgumentParser(description="Manage rclone backend YAML files.")
parser.add_argument("verb", choices=["create", "features", "update", "help"], help="Action to perform")
parser.add_argument("files", nargs="*", help="List of YAML files to operate on")
args = parser.parse_args()
if args.verb == "help":
parser.print_help()
sys.exit(0)
if not args.files:
print("Error: No files specified.")
parser.print_help()
sys.exit(1)
if args.verb == "create":
do_create(args.files)
elif args.verb == "update":
do_update(args.files)
elif args.verb == "features":
do_features(args.files)
if __name__ == "__main__":
main()

View File

@@ -341,7 +341,7 @@ func (h *testState) preconfigureServer() {
// The `\\?\` prefix tells Windows APIs to pass strings unmodified to the
// filesystem without additional parsing [1]. Our workaround is roughly to add
// the prefix to whichever parameter doesn't have it (when the OS is Windows).
// I'm not sure this generalizes, but it works for the kinds of inputs we're
// I'm not sure this generalizes, but it works for the the kinds of inputs we're
// throwing at it.
//
// [1]: https://learn.microsoft.com/en-us/windows/win32/fileio/naming-a-file?redirectedfrom=MSDN#win32-file-namespaces
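
As a small illustration of the comment above (not the actual gitannex test code), normalizing one side to carry the verbatim prefix might look like this:

```go
// Ensure a Windows path carries the `\\?\` verbatim prefix so both
// sides of a comparison use the same form. Illustrative only.
package main

import (
	"fmt"
	"strings"
)

func addVerbatimPrefix(p string) string {
	if strings.HasPrefix(p, `\\?\`) {
		return p
	}
	return `\\?\` + p
}

func main() {
	fmt.Println(addVerbatimPrefix(`C:\Users\me\repo`))
}
```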

View File

@@ -97,7 +97,7 @@ with the following options:
- If ` + "`--files-only`" + ` is specified then files will be returned only,
no directories.
If ` + "`--stat`" + ` is set then the output is not an array of items,
If ` + "`--stat`" + ` is set then the the output is not an array of items,
but instead a single JSON blob will be returned about the item pointed to.
This will return an error if the item isn't found, however on bucket based
backends (like s3, gcs, b2, azureblob etc) if the item isn't found it will

View File

@@ -71,7 +71,7 @@ rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=m
rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'
` + "```" + `
The vfsOpt are as described in options/get and can be seen in the
The vfsOpt are as described in options/get and can be seen in the the
"vfs" section when running and the mountOpt can be seen in the "mount" section:
` + "```console" + `

View File

@@ -34,7 +34,7 @@ argument by passing a hyphen as an argument. This will use the first
line of STDIN as the password not including the trailing newline.
` + "```console" + `
echo 'secretpassword' | rclone obscure -
echo "secretpassword" | rclone obscure -
` + "```" + `
If there is no data on STDIN to read, rclone obscure will default to

View File

@@ -291,7 +291,7 @@ func (c *conn) handleChannel(newChannel ssh.NewChannel) {
}
}
fs.Debugf(c.what, " - accepted: %v\n", ok)
err := req.Reply(ok, reply)
err = req.Reply(ok, reply)
if err != nil {
fs.Errorf(c.what, "Failed to Reply to request: %v", err)
return

View File

@@ -116,7 +116,6 @@ WebDAV or S3, that work out of the box.)
{{< provider name="Akamai Netstorage" home="https://www.akamai.com/us/en/products/media-delivery/netstorage.jsp" config="/netstorage/" >}}
{{< provider name="Alibaba Cloud (Aliyun) Object Storage System (OSS)" home="https://www.alibabacloud.com/product/oss/" config="/s3/#alibaba-oss" >}}
{{< provider name="Amazon S3" home="https://aws.amazon.com/s3/" config="/s3/" >}}
{{< provider name="Bizfly Cloud Simple Storage" home="https://bizflycloud.vn/" config="/s3/#bizflycloud" >}}
{{< provider name="Backblaze B2" home="https://www.backblaze.com/cloud-storage" config="/b2/" >}}
{{< provider name="Box" home="https://www.box.com/" config="/box/" >}}
{{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
@@ -129,14 +128,12 @@ WebDAV or S3, that work out of the box.)
{{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
{{< provider name="Digi Storage" home="https://storage.rcs-rds.ro/" config="/koofr/#digi-storage" >}}
{{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
{{< provider name="Drime" home="https://www.drime.cloud/" config="/drime/" >}}
{{< provider name="Dropbox" home="https://www.dropbox.com/" config="/dropbox/" >}}
{{< provider name="Enterprise File Fabric" home="https://storagemadeeasy.com/about/" config="/filefabric/" >}}
{{< provider name="Exaba" home="https://exaba.com/" config="/s3/#exaba" >}}
{{< provider name="Fastmail Files" home="https://www.fastmail.com/" config="/webdav/#fastmail-files" >}}
{{< provider name="FileLu Cloud Storage" home="https://filelu.com/" config="/filelu/" >}}
{{< provider name="FileLu S5 (S3-Compatible Object Storage)" home="https://s5lu.com/" config="/s3/#filelu-s5" >}}
{{< provider name="Filen" home="https://www.filen.io/" config="/filen/" >}}
{{< provider name="Files.com" home="https://www.files.com/" config="/filescom/" >}}
{{< provider name="FlashBlade" home="https://www.purestorage.com/products/unstructured-data-storage.html" config="/s3/#pure-storage-flashblade" >}}
{{< provider name="FTP" home="https://en.wikipedia.org/wiki/File_Transfer_Protocol" config="/ftp/" >}}
@@ -215,6 +212,7 @@ WebDAV or S3, that work out of the box.)
{{< provider name="SugarSync" home="https://sugarsync.com/" config="/sugarsync/" >}}
{{< provider name="Tencent Cloud Object Storage (COS)" home="https://intl.cloud.tencent.com/product/cos" config="/s3/#tencent-cos" >}}
{{< provider name="Uloz.to" home="https://uloz.to" config="/ulozto/" >}}
{{< provider name="Uptobox" home="https://uptobox.com" config="/uptobox/" >}}
{{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" >}}
{{< provider name="WebDAV" home="https://en.wikipedia.org/wiki/WebDAV" config="/webdav/" >}}
{{< provider name="Yandex Disk" home="https://disk.yandex.com/" config="/yandex/" >}}

View File

@@ -1060,13 +1060,3 @@ put them back in again. -->
- jhasse-shade <jacob@shade.inc>
- vyv03354 <VYV03354@nifty.ne.jp>
- masrlinu <masrlinu@users.noreply.github.com> <5259918+masrlinu@users.noreply.github.com>
- vupn0712 <126212736+vupn0712@users.noreply.github.com>
- darkdragon-001 <darkdragon-001@users.noreply.github.com>
- sys6101 <csvmen@gmail.com>
- Nicolas Dessart <nds@outsight.tech>
- Qingwei Li <332664203@qq.com>
- yy <yhymmt37@gmail.com>
- Marc-Philip <marc-philip.werner@sap.com>
- Mikel Olasagasti Uranga <mikel@olasagasti.info>
- Nick Owens <mischief@offblast.org>
- hyusap <paulayush@gmail.com>

View File

@@ -1015,6 +1015,10 @@ rclone [flags]
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams
-u, --update Skip files that are newer on the destination
--uptobox-access-token string Your access token
--uptobox-description string Description of the remote
--uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private
--use-cookies Enable session cookiejar
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)

View File

@@ -43,11 +43,9 @@ See the following for detailed instructions for
- [Crypt](/crypt/) - to encrypt other remotes
- [DigitalOcean Spaces](/s3/#digitalocean-spaces)
- [Digi Storage](/koofr/#digi-storage)
- [Drime](/drime/)
- [Dropbox](/dropbox/)
- [Enterprise File Fabric](/filefabric/)
- [FileLu Cloud Storage](/filelu/)
- [Filen](/filen/)
- [Files.com](/filescom/)
- [FTP](/ftp/)
- [Gofile](/gofile/)
@@ -91,6 +89,7 @@ See the following for detailed instructions for
- [SugarSync](/sugarsync/)
- [Union](/union/)
- [Uloz.to](/ulozto/)
- [Uptobox](/uptobox/)
- [WebDAV](/webdav/)
- [Yandex Disk](/yandex/)
- [Zoho WorkDrive](/zoho/)
@@ -752,21 +751,21 @@ object also.
Here is a table of standard system metadata which, if appropriate, a
backend may implement.
| key | description | example |
|---------------------|-------------|---------|
| mode | File type and mode: octal, unix style | 0100664 |
| uid | User ID of owner: decimal number | 500 |
| gid | Group ID of owner: decimal number | 500 |
| rdev | Device ID (if special file) => hexadecimal | 0 |
| atime | Time of last access: RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 |
| mtime | Time of last modification: RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 |
| btime | Time of file creation (birth): RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 |
| utime | Time of file upload: RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 |
| cache-control | Cache-Control header | no-cache |
| key | description | example |
| --- | ----------- | ------- |
| mode | File type and mode: octal, unix style | 0100664 |
| uid | User ID of owner: decimal number | 500 |
| gid | Group ID of owner: decimal number | 500 |
| rdev | Device ID (if special file) => hexadecimal | 0 |
| atime | Time of last access: RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 |
| mtime | Time of last modification: RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 |
| btime | Time of file creation (birth): RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 |
| utime | Time of file upload: RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 |
| cache-control | Cache-Control header | no-cache |
| content-disposition | Content-Disposition header | inline |
| content-encoding | Content-Encoding header | gzip |
| content-language | Content-Language header | en-US |
| content-type | Content-Type header | text/plain |
| content-encoding | Content-Encoding header | gzip |
| content-language | Content-Language header | en-US |
| content-type | Content-Type header | text/plain |
The metadata keys `mtime` and `content-type` will take precedence if
supplied in the metadata over reading the `Content-Type` or
@@ -1192,7 +1191,8 @@ on any OS, and the value is defined as following:
- On Windows: `%HOME%` if defined, else `%USERPROFILE%`, or else `%HOMEDRIVE%\%HOMEPATH%`.
- On Unix: `$HOME` if defined, else by looking up current user in OS-specific user
database (e.g. passwd file), or else use the result from shell command `cd && pwd`.
database (e.g. passwd file), or else use the result from shell command
`cd && pwd`.
If you run `rclone config file` you will see where the default location is for
you. Running `rclone config touch` will ensure a configuration file exists,
@@ -3439,7 +3439,7 @@ many items, the input is treated as a [CSV encoded](https://godoc.org/encoding/c
string. For example
| Environment variable | Equivalent options |
|----------------------|--------------------|
| -------------------- | ------------------ |
| `RCLONE_EXCLUDE="*.jpg"` | `--exclude "*.jpg"` |
| `RCLONE_EXCLUDE="*.jpg,*.png"` | `--exclude "*.jpg"` `--exclude "*.png"` |
| `RCLONE_EXCLUDE='"*.jpg","*.png"'` | `--exclude "*.jpg"` `--exclude "*.png"` |

View File

@@ -16,7 +16,7 @@ image](https://securebuild.com/images/rclone) through our partner
## Release {{% version %}} OS requirements {#osrequirements}
| OS | Minimum Version |
|:-------:|:-------:|
| :---: | :---: |
| Linux | Kernel 3.2 |
| macOS | 12 (Monterey) |
| Windows | 10, Server 2016 |
@@ -31,7 +31,7 @@ in the Go Wiki.
## Release {{% version %}} {#release}
| Arch-OS | Windows | macOS | Linux | .deb | .rpm | FreeBSD | NetBSD | OpenBSD | Plan9 | Solaris |
|:-------:|:-------:|:-----:|:-----:|:----:|:----:|:-------:|:------:|:-------:|:-----:|:-------:|
| :-----: | :-----: | :---: | :---: | :--: | :--: | :-----: | :----: | :-----: | :---: | :-----: |
| Intel/AMD - 64 Bit | {{< download windows amd64 >}} | {{< download osx amd64 >}} | {{< download linux amd64 >}} | {{< download linux amd64 deb >}} | {{< download linux amd64 rpm >}} | {{< download freebsd amd64 >}} | {{< download netbsd amd64 >}} | {{< download openbsd amd64 >}} | {{< download plan9 amd64 >}} | {{< download solaris amd64 >}} |
| Intel/AMD - 32 Bit | {{< download windows 386 >}} | - | {{< download linux 386 >}} | {{< download linux 386 deb >}} | {{< download linux 386 rpm >}} | {{< download freebsd 386 >}} | {{< download netbsd 386 >}} | {{< download openbsd 386 >}} | {{< download plan9 386 >}} | - |
| ARMv5 - 32 Bit NOHF | - | - | {{< download linux arm >}} | {{< download linux arm deb >}} | {{< download linux arm rpm >}} | {{< download freebsd arm >}} | {{< download netbsd arm >}} | - | - | - |
@@ -120,7 +120,7 @@ If you would like to download the current version (maybe from a
script) from a URL which doesn't change then you can use these links.
| Arch-OS | Windows | macOS | Linux | .deb | .rpm | FreeBSD | NetBSD | OpenBSD | Plan9 | Solaris |
|:-------:|:-------:|:-----:|:-----:|:----:|:----:|:-------:|:------:|:-------:|:-----:|:-------:|
| :-----: | :-----: | :---: | :---: | :--: | :--: | :-----: | :----: | :-----: | :---: | :-----: |
| Intel/AMD - 64 Bit | {{< cdownload windows amd64 >}} | {{< cdownload osx amd64 >}} | {{< cdownload linux amd64 >}} | {{< cdownload linux amd64 deb >}} | {{< cdownload linux amd64 rpm >}} | {{< cdownload freebsd amd64 >}} | {{< cdownload netbsd amd64 >}} | {{< cdownload openbsd amd64 >}} | {{< cdownload plan9 amd64 >}} | {{< cdownload solaris amd64 >}} |
| Intel/AMD - 32 Bit | {{< cdownload windows 386 >}} | - | {{< cdownload linux 386 >}} | {{< cdownload linux 386 deb >}} | {{< cdownload linux 386 rpm >}} | {{< cdownload freebsd 386 >}} | {{< cdownload netbsd 386 >}} | {{< cdownload openbsd 386 >}} | {{< cdownload plan9 386 >}} | - |
| ARMv5 - 32 Bit NOHF | - | - | {{< cdownload linux arm >}} | {{< cdownload linux arm deb >}} | {{< cdownload linux arm rpm >}} | {{< cdownload freebsd arm >}} | {{< cdownload netbsd arm >}} | - | - | - |
@@ -137,7 +137,7 @@ Older downloads can be found at <https://downloads.rclone.org/>
The latest `rclone` version working for:
| OS | Maximum rclone version |
|:-------:|:-------:|
| :---: | :---: |
| Windows 7 | v1.63.1 |
| Windows Server 2008 | v1.63.1 |
| Windows Server 2012 | v1.63.1 |

View File

@@ -1,244 +0,0 @@
---
title: "Drime"
description: "Rclone docs for Drime"
versionIntroduced: "v1.73"
---
# {{< icon "fa fa-cloud" >}} Drime
[Drime](https://drime.cloud/) is a cloud storage and transfer service focused
on fast, resilient file delivery. It offers both free and paid tiers with
emphasis on high-speed uploads and link sharing.
To setup Drime you need to log in, navigate to Settings, Developer, and create a
token to use as an API access key. Give it a sensible name and copy the token
for use in the config.
## Configuration
Here is a run through of `rclone config` to make a remote called `remote`.
Firstly run:
```console
rclone config
```
Then follow through the interactive setup:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
Enter name for new remote.
name> remote
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
XX / Drime
\ (drime)
Storage> drime
Option access_token.
API Access token
You can get this from the web control panel.
Enter a value. Press Enter to leave empty.
access_token> YOUR_API_ACCESS_TOKEN
Edit advanced config?
y) Yes
n) No (default)
y/n> n
Configuration complete.
Options:
- type: drime
- access_token: YOUR_API_ACCESS_TOKEN
Keep this "remote" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
Once configured you can then use `rclone` like this (replace `remote` with the
name you gave your remote):
List directories and files in the top level of your Drime
```console
rclone lsf remote:
```
To copy a local directory to a Drime directory called backup
```console
rclone copy /home/source remote:backup
```
### Modification times and hashes
Drime does not support modification times or hashes.
This means that by default syncs will only use the size of the file to determine
if it needs updating.
You can use the `--update` flag which will use the time the object was uploaded.
For many operations this is sufficient to determine if it has changed. However
files created with timestamps in the past will be missed by the sync if using
`--update`.
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| \ | 0x5C | |
File names can also not start or end with the following characters. These only
get replaced if they are the first or last character in the name:
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| SP | 0x20 | ␠ |
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
### Root folder ID
You can set the `root_folder_id` for rclone. This is the directory
(identified by its `Folder ID`) that rclone considers to be the root
of your Drime drive.
Normally you will leave this blank and rclone will determine the
correct root to use itself and fill in the value in the config file.
However you can set this to restrict rclone to a specific folder
hierarchy.
In order to do this you will have to find the `Folder ID` of the
directory you wish rclone to display.
You can do this with rclone
```console
$ rclone lsf -Fip --dirs-only remote:
d6341f53-ee65-4f29-9f59-d11e8070b2a0;Files/
f4f5c9b8-6ece-478b-b03e-4538edfe5a1c;Photos/
d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/
```
The ID to use is the part before the `;` so you could set
```text
root_folder_id = d6341f53-ee65-4f29-9f59-d11e8070b2a0
```
To restrict rclone to the `Files` directory.
<!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/drime/drime.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
### Standard options
Here are the Standard options specific to drime (Drime).
#### --drime-access-token
API Access token
You can get this from the web control panel.
Properties:
- Config: access_token
- Env Var: RCLONE_DRIME_ACCESS_TOKEN
- Type: string
- Required: false
### Advanced options
Here are the Advanced options specific to drime (Drime).
#### --drime-root-folder-id
ID of the root folder
Leave this blank normally, rclone will fill it in automatically.
If you want rclone to be restricted to a particular folder you can
fill it in - see the docs for more info.
Properties:
- Config: root_folder_id
- Env Var: RCLONE_DRIME_ROOT_FOLDER_ID
- Type: string
- Required: false
#### --drime-workspace-id
Account ID
Leave this blank normally, rclone will fill it in automatically.
Properties:
- Config: workspace_id
- Env Var: RCLONE_DRIME_WORKSPACE_ID
- Type: string
- Required: false
#### --drime-list-chunk
Number of items to list in each call
Properties:
- Config: list_chunk
- Env Var: RCLONE_DRIME_LIST_CHUNK
- Type: int
- Default: 1000
#### --drime-encoding
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Properties:
- Config: encoding
- Env Var: RCLONE_DRIME_ENCODING
- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot
#### --drime-description
Description of the remote.
Properties:
- Config: description
- Env Var: RCLONE_DRIME_DESCRIPTION
- Type: string
- Required: false
<!-- autogenerated options stop -->
## Limitations
Drime only supports filenames up to 255 bytes in length, where filenames are
encoded in UTF-8.
View File
@@ -316,47 +316,3 @@ back again when transferring to a different storage system where the
original characters are supported. When the same Unicode characters
are intentionally used in file names, this replacement strategy leads
to unwanted renames. Read more under section [caveats](/overview/#restricted-filenames-caveats).
### Why does rclone fail to connect over TLS but another client works?
If you see TLS handshake failures (or packet captures show the server
rejecting all offered ciphers), the server/proxy may only support
legacy TLS cipher suites (for example RSA key-exchange ciphers
such as `RSA_WITH_AES_256_CBC_SHA256`, or old 3DES ciphers). Recent Go
versions (which rclone is built with) have **removed insecure ciphers
from the default list**, so rclone may refuse to negotiate them even
if other tools still do.
If you can't update/reconfigure the server/proxy to support modern TLS
(TLS 1.2/1.3) and ECDHE-based cipher suites you can re-enable legacy
ciphers via `GODEBUG`:
- Windows (cmd.exe):
```bat
set GODEBUG=tlsrsakex=1
rclone copy ...
```
- Windows (PowerShell):
```powershell
$env:GODEBUG="tlsrsakex=1"
rclone copy ...
```
- Linux/macOS:
```sh
GODEBUG=tlsrsakex=1 rclone copy ...
```
If the server only supports 3DES, try:
```sh
GODEBUG=tls3des=1 rclone ...
```
This applies to **any rclone feature using TLS** (HTTPS, FTPS, WebDAV
over TLS, proxies with TLS interception, etc.). Use these workarounds
only long enough to get the server/proxy updated.
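If a server needs both workarounds, `GODEBUG` settings can be combined with
commas, e.g. on Linux/macOS (a sketch; substitute your own command):
```sh
GODEBUG=tlsrsakex=1,tls3des=1 rclone lsd remote:
```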
View File
@@ -1,244 +0,0 @@
---
title: "Filen"
description: "Rclone docs for Filen"
versionIntroduced: "1.73"
---
# {{< icon "fa fa-solid fa-f" >}} Filen
## Configuration
The initial setup for Filen requires that you get an API key for your account;
currently this is only possible using the [Filen CLI](https://github.com/FilenCloudDienste/filen-cli).
This means you must first download the CLI, log in, and then run the `export-api-key` command.
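As a rough sketch, the CLI steps might look like this (the exact subcommand names
depend on your version of the Filen CLI; `login` here is illustrative):
```console
filen login
filen export-api-key
```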
Here is an example of how to make a remote called `FilenRemote`. First run:
```console
rclone config
```
This will guide you through an interactive setup process:
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> FilenRemote
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Filen
\ "filen"
[snip]
Storage> filen
Option Email.
The email of your Filen account
Enter a value.
Email> youremail@provider.com
Option Password.
The password of your Filen account
Choose an alternative below.
y) Yes, type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
Option API Key.
An API Key for your Filen account
Get this using the Filen CLI export-api-key command
You can download the Filen CLI from https://github.com/FilenCloudDienste/filen-cli
Choose an alternative below.
y) Yes, type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
Edit advanced config?
y) Yes
n) No (default)
y/n> n
Configuration complete.
Options:
- type: filen
- Email: youremail@provider.com
- Password: *** ENCRYPTED ***
- API Key: *** ENCRYPTED ***
Keep this "FilenRemote" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
### Modification times and hashes
Modification times are fully supported for files; for directories, only the creation time matters.
Filen supports Blake3 hashes.
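This means recent rclone versions can checksum files directly on the remote, e.g.
(a sketch; run `rclone hashsum` with no arguments to list the hash names your
build supports):
```console
rclone hashsum blake3 FilenRemote:path/to/dir
```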
### Restricted filename characters
Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8).
### API Key
<!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/filen/filen.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
### Standard options
Here are the Standard options specific to filen (Filen).
#### --filen-email
Email of your Filen account
Properties:
- Config: email
- Env Var: RCLONE_FILEN_EMAIL
- Type: string
- Required: true
#### --filen-password
Password of your Filen account
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
Properties:
- Config: password
- Env Var: RCLONE_FILEN_PASSWORD
- Type: string
- Required: true
#### --filen-api-key
API Key for your Filen account
Get this using the Filen CLI export-api-key command
You can download the Filen CLI from https://github.com/FilenCloudDienste/filen-cli
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
Properties:
- Config: api_key
- Env Var: RCLONE_FILEN_API_KEY
- Type: string
- Required: true
### Advanced options
Here are the Advanced options specific to filen (Filen).
#### --filen-encoding
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Properties:
- Config: encoding
- Env Var: RCLONE_FILEN_ENCODING
- Type: Encoding
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
#### --filen-upload-concurrency
Concurrency for multipart uploads.
This is the number of chunks of the same file that are uploaded
concurrently for multipart uploads.
Note that chunks are stored in memory and there may be up to
"--transfers" * "--filen-upload-concurrency" chunks stored at once
in memory.
If you are uploading small numbers of large files over high-speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.
Properties:
- Config: upload_concurrency
- Env Var: RCLONE_FILEN_UPLOAD_CONCURRENCY
- Type: int
- Default: 16
#### --filen-master-keys
Master Keys (internal use only)
Properties:
- Config: master_keys
- Env Var: RCLONE_FILEN_MASTER_KEYS
- Type: string
- Required: false
#### --filen-private-key
Private RSA Key (internal use only)
Properties:
- Config: private_key
- Env Var: RCLONE_FILEN_PRIVATE_KEY
- Type: string
- Required: false
#### --filen-public-key
Public RSA Key (internal use only)
Properties:
- Config: public_key
- Env Var: RCLONE_FILEN_PUBLIC_KEY
- Type: string
- Required: false
#### --filen-auth-version
Authentication Version (internal use only)
Properties:
- Config: auth_version
- Env Var: RCLONE_FILEN_AUTH_VERSION
- Type: string
- Required: false
#### --filen-base-folder-uuid
UUID of Account Root Directory (internal use only)
Properties:
- Config: base_folder_uuid
- Env Var: RCLONE_FILEN_BASE_FOLDER_UUID
- Type: string
- Required: false
#### --filen-description
Description of the remote.
Properties:
- Config: description
- Env Var: RCLONE_FILEN_DESCRIPTION
- Type: string
- Required: false
<!-- autogenerated options stop -->
View File
@@ -202,28 +202,28 @@ them into regular expressions.
## Filter pattern examples {#examples}
| Description | Pattern | Matches | Does not match |
| ----------- |-------- | ------- | -------------- |
| Wildcard | `*.jpg` | `/file.jpg` | `/file.png` |
| | | `/dir/file.jpg` | `/dir/file.png` |
| Rooted | `/*.jpg` | `/file.jpg` | `/file.png` |
| | | `/file2.jpg` | `/dir/file.jpg` |
| Alternates | `*.{jpg,png}` | `/file.jpg` | `/file.gif` |
| | | `/dir/file.png` | `/dir/file.gif` |
| Path Wildcard | `dir/**` | `/dir/anyfile` | `file.png` |
| | | `/subdir/dir/subsubdir/anyfile` | `/subdir/file.png` |
| Any Char | `*.t?t` | `/file.txt` | `/file.qxt` |
| | | `/dir/file.tzt` | `/dir/file.png` |
| Range | `*.[a-z]` | `/file.a` | `/file.0` |
| | | `/dir/file.b` | `/dir/file.1` |
| Escape | `*.\?\?\?` | `/file.???` | `/file.abc` |
| | | `/dir/file.???` | `/dir/file.def` |
| Class | `*.\d\d\d` | `/file.012` | `/file.abc` |
| | | `/dir/file.345` | `/dir/file.def` |
| Regexp | `*.{{jpe?g}}` | `/file.jpeg` | `/file.png` |
| | | `/dir/file.jpg` | `/dir/file.jpeeg` |
| Rooted Regexp | `/{{.*\.jpe?g}}` | `/file.jpeg` | `/file.png` |
| | | `/file.jpg` | `/dir/file.jpg` |
| Description | Pattern | Matches | Does not match |
| ----------- | ---------------- | ------------------------------- | ------------------ |
| Wildcard | `*.jpg` | `/file.jpg` | `/file.png` |
| | | `/dir/file.jpg` | `/dir/file.png` |
| Rooted | `/*.jpg` | `/file.jpg` | `/file.png` |
| | | `/file2.jpg` | `/dir/file.jpg` |
| Alternates | `*.{jpg,png}` | `/file.jpg` | `/file.gif` |
| | | `/dir/file.png` | `/dir/file.gif` |
| Path Wildcard | `dir/**` | `/dir/anyfile` | `file.png` |
| | | `/subdir/dir/subsubdir/anyfile` | `/subdir/file.png` |
| Any Char | `*.t?t` | `/file.txt` | `/file.qxt` |
| | | `/dir/file.tzt` | `/dir/file.png` |
| Range | `*.[a-z]` | `/file.a` | `/file.0` |
| | | `/dir/file.b` | `/dir/file.1` |
| Escape | `*.\?\?\?` | `/file.???` | `/file.abc` |
| | | `/dir/file.???` | `/dir/file.def` |
| Class | `*.\d\d\d` | `/file.012` | `/file.abc` |
| | | `/dir/file.345` | `/dir/file.def` |
| Regexp | `*.{{jpe?g}}` | `/file.jpeg` | `/file.png` |
| | | `/dir/file.jpg` | `/dir/file.jpeeg` |
| Rooted Regexp | `/{{.*\.jpe?g}}` | `/file.jpeg` | `/file.png` |
| | | `/file.jpg` | `/dir/file.jpg` |
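As a usage sketch, any pattern from the table can be supplied via `--include`,
`--exclude` or `--filter`, for example:
```console
rclone ls remote: --include "*.{jpg,png}"
```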
## How filter rules are applied to files {#how-filter-rules-work}
View File
@@ -1138,6 +1138,10 @@ Backend-only flags (these can be set in the config file also).
--union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi)
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams
--uptobox-access-token string Your access token
--uptobox-description string Description of the remote
--uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private
--webdav-auth-redirect Preserve authentication on redirect
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
View File
@@ -498,12 +498,6 @@ URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
Supports the format http://user:pass@host:port, http://host:port, http://host.
Example:
http://myUser:myPass@proxyhostname.example.com:8000
Properties:
View File
@@ -659,14 +659,8 @@ second that each client_id can do set by Google.
If there is a problem with this client_id (eg quota too low or the
client_id stops working) then you can make your own.
Please follow the steps in [the google drive docs](https://rclone.org/drive/#making-your-own-client-id)
with the following differences:
- At step 3, instead of enabling the "Google Drive API", search for and
enable the "Photos Library API".
- At step 5, you will need to add different scopes. Use these scopes
instead of the drive ones:
Please follow the steps in [the google drive docs](https://rclone.org/drive/#making-your-own-client-id).
You will need these scopes instead of the drive ones detailed:
```text
https://www.googleapis.com/auth/photoslibrary.appendonly
View File
@@ -285,8 +285,8 @@ rclone v1.49.1
- go version: go1.12.9
```
There are a few command line options to consider when starting an rclone Docker container
from the rclone image.
There are a few command line options to consider when starting an rclone Docker
container from the rclone image.
- You need to mount the host rclone config dir at `/config/rclone` into the Docker
container. Due to the fact that rclone updates tokens inside its config file,
@@ -300,8 +300,8 @@ from the rclone image.
data files reside on the host with a non-root UID:GID, you need to pass these
on the container start command line.
- If you want to access the RC interface (either via the API or the Web UI), it is
required to set the `--rc-addr` to `:5572` in order to connect to it from outside
- If you want to access the RC interface (either via the API or the Web UI), it
is required to set the `--rc-addr` to `:5572` in order to connect to it from outside
the container. An explanation about why this is necessary can be found in an old
[pythonspeed.com](https://web.archive.org/web/20200808071950/https://pythonspeed.com/articles/docker-connection-refused/)
article.
@@ -309,9 +309,9 @@ from the rclone image.
probably set it to listen to localhost only, with `127.0.0.1:5572` as the
value for `--rc-addr`
- It is possible to use `rclone mount` inside a userspace Docker container, and expose
the resulting fuse mount to the host. The exact `docker run` options to do that
might vary slightly between hosts. See, e.g. the discussion in this
- It is possible to use `rclone mount` inside a userspace Docker container, and
expose the resulting fuse mount to the host. The exact `docker run` options to
do that might vary slightly between hosts. See, e.g. the discussion in this
[thread](https://github.com/moby/moby/issues/9448).
You also need to mount the host `/etc/passwd` and `/etc/group` for fuse to work
@@ -542,8 +542,8 @@ To override them set the corresponding options (as command-line arguments, or as
After installing and configuring rclone, as described above, you are ready to use
rclone as an interactive command line utility. If your goal is to perform *periodic*
operations, such as a regular [sync](https://rclone.org/commands/rclone_sync/), you
will probably want to configure your rclone command in your operating system's
operations, such as a regular [sync](https://rclone.org/commands/rclone_sync/),
you will probably want to configure your rclone command in your operating system's
scheduler. If you need to expose *service*-like features, such as
[remote control](https://rclone.org/rc/), [GUI](https://rclone.org/gui/),
[serve](https://rclone.org/commands/rclone_serve/) or [mount](https://rclone.org/commands/rclone_mount/),
@@ -583,9 +583,9 @@ c:\rclone\rclone.exe sync c:\files remote:/files --no-console --log-file c:\rclo
As mentioned in the [mount](https://rclone.org/commands/rclone_mount/) documentation,
mounted drives created as Administrator are not visible to other accounts, not even
the account that was elevated as Administrator. By running the mount command as the
built-in `SYSTEM` user account, it will create drives accessible for everyone on
the system. Both scheduled task and Windows service can be used to achieve this.
the account that was elevated as Administrator. By running the mount command as
the built-in `SYSTEM` user account, it will create drives accessible for everyone
on the system. Both scheduled task and Windows service can be used to achieve this.
NOTE: Remember that when rclone runs as the `SYSTEM` user, the user profile
that it sees will not be yours. This means that if you normally run rclone with
@@ -615,8 +615,8 @@ will often give you better results.
#### Start from Task Scheduler
Task Scheduler is an administrative tool built into Windows, and it can be used to
configure rclone to be started automatically in a highly configurable way, e.g.
Task Scheduler is an administrative tool built into Windows, and it can be used
to configure rclone to be started automatically in a highly configurable way, e.g.
periodically on a schedule, on user log on, or at system startup. It can
be configured to run as the current user, or for a mount command that needs to
be available to all users it can run as the `SYSTEM` user.
@@ -656,18 +656,18 @@ To Windows service running any rclone command, the excellent third-party utility
[NSSM](http://nssm.cc), the "Non-Sucking Service Manager", can be used.
It includes some advanced features such as adjusting process priority, defining
process environment variables, redirect to file anything written to stdout, and
customized response to different exit codes, with a GUI to configure everything from
(although it can also be used from the command line).
customized response to different exit codes, with a GUI to configure everything
from (although it can also be used from the command line).
There are also several other alternatives. To mention one more,
[WinSW](https://github.com/winsw/winsw), "Windows Service Wrapper", is worth checking
out. It requires .NET Framework, but it is preinstalled on newer versions of Windows,
and it also provides alternative standalone distributions which include the necessary
runtime (.NET 5). WinSW is a command-line only utility, where you have to manually
create an XML file with service configuration. This may be a drawback for some, but
it can also be an advantage as it is easy to back up and reuse the configuration
settings, without having to go through manual steps in a GUI. One thing to note is that
by default it does not restart the service on error; one has to explicitly enable
create an XML file with service configuration. This may be a drawback for some,
but it can also be an advantage as it is easy to back up and reuse the configuration
settings, without having to go through manual steps in a GUI. One thing to note is
that by default it does not restart the service on error; one has to explicitly enable
this in the configuration file (via the "onfailure" parameter).
### Autostart on Linux
@@ -676,8 +676,8 @@ this in the configuration file (via the "onfailure" parameter).
To always run rclone in background, relevant for mount commands etc,
you can use systemd to set up rclone as a system or user service. Running as a
system service ensures that it is run at startup even if the user it is running as
has no active session. Running rclone as a user service ensures that it only
system service ensures that it is run at startup even if the user it is running
as has no active session. Running rclone as a user service ensures that it only
starts after the configured user has logged into the system.
#### Run periodically from cron
View File
@@ -14,7 +14,104 @@ show through.
Here is an overview of the major features of each cloud storage system.
{{< features-table >}}
| Name | Hash | ModTime | Case Insensitive | Duplicate Files | MIME Type | Metadata |
| ----------------------------- | :---------------: | :-----: | :--------------: | :-------------: | :-------: | :------: |
| 1Fichier | Whirlpool | - | No | Yes | R | - |
| Akamai Netstorage | MD5, SHA256 | R/W | No | No | R | - |
| Amazon S3 (or S3 compatible) | MD5 | R/W | No | No | R/W | RWU |
| Backblaze B2 | SHA1 | R/W | No | No | R/W | - |
| Box | SHA1 | R/W | Yes | No | - | - |
| Citrix ShareFile | MD5 | R/W | Yes | No | - | - |
| Cloudinary | MD5 | R | No | Yes | - | - |
| Dropbox | DBHASH ¹ | R | Yes | No | - | - |
| Enterprise File Fabric | - | R/W | Yes | No | R/W | - |
| FileLu Cloud Storage | MD5 | R/W | No | Yes | R | - |
| Files.com | MD5, CRC32 | DR/W | Yes | No | R | - |
| FTP | - | R/W ¹⁰ | No | No | - | - |
| Gofile | MD5 | DR/W | No | Yes | R | - |
| Google Cloud Storage | MD5 | R/W | No | No | R/W | - |
| Google Drive | MD5, SHA1, SHA256 | DR/W | No | Yes | R/W | DRWU |
| Google Photos | - | - | No | Yes | R | - |
| HDFS | - | R/W | No | No | - | - |
| HiDrive | HiDrive ¹² | R/W | No | No | - | - |
| HTTP | - | R | No | No | R | R |
| iCloud Drive | - | R | No | No | - | - |
| Internet Archive | MD5, SHA1, CRC32 | R/W ¹¹ | No | No | - | RWU |
| Jottacloud | MD5 | R/W | Yes | No | R | RW |
| Koofr | MD5 | - | Yes | No | - | - |
| Linkbox | - | R | No | No | - | - |
| Mail.ru Cloud | Mailru ⁶ | R/W | Yes | No | - | - |
| Mega | - | - | No | Yes | - | - |
| Memory | MD5 | R/W | No | No | - | - |
| Microsoft Azure Blob Storage | MD5 | R/W | No | No | R/W | - |
| Microsoft Azure Files Storage | MD5 | R/W | Yes | No | R/W | - |
| Microsoft OneDrive | QuickXorHash ⁵ | DR/W | Yes | No | R | DRW |
| OpenDrive | MD5 | R/W | Yes | Partial ⁸ | - | - |
| OpenStack Swift | MD5 | R/W | No | No | R/W | - |
| Oracle Object Storage | MD5 | R/W | No | No | R/W | RU |
| pCloud | MD5, SHA1 ⁷ | R/W | No | No | W | - |
| PikPak | MD5 | R | No | No | R | - |
| Pixeldrain | SHA256 | R/W | No | No | R | RW |
| premiumize.me | - | - | Yes | No | R | - |
| put.io | CRC-32 | R/W | No | Yes | R | - |
| Proton Drive | SHA1 | R/W | No | No | R | - |
| QingStor | MD5 | - ⁹ | No | No | R/W | - |
| Quatrix by Maytech | - | R/W | No | No | - | - |
| Seafile | - | - | No | No | - | - |
| SFTP | MD5, SHA1 ² | DR/W | Depends | No | - | - |
| Shade | - | - | Yes | No | - | - |
| Sia | - | - | No | No | - | - |
| SMB | - | R/W | Yes | No | - | - |
| SugarSync | - | - | No | No | - | - |
| Storj | - | R | No | No | - | - |
| Uloz.to | MD5, SHA256 ¹³ | - | No | Yes | - | - |
| Uptobox | - | - | No | Yes | - | - |
| WebDAV | MD5, SHA1 ³ | R ⁴ | Depends | No | - | - |
| Yandex Disk | MD5 | R/W | No | No | R | - |
| Zoho WorkDrive | - | - | No | No | - | - |
| The local filesystem | All | DR/W | Depends | No | - | DRWU |
¹ Dropbox supports [its own custom
hash](https://www.dropbox.com/developers/reference/content-hash).
This is an SHA256 sum of all the 4 MiB block SHA256s.
² SFTP supports checksums if the same login has shell access and
`md5sum` or `sha1sum` as well as `echo` are in the remote's PATH.
³ WebDAV supports hashes when used with Fastmail Files, Owncloud and Nextcloud only.
⁴ WebDAV supports modtimes when used with Fastmail Files, Owncloud and Nextcloud
only.
⁵ [QuickXorHash](https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash)
is Microsoft's own hash.
⁶ Mail.ru uses its own modified SHA1 hash
⁷ pCloud only supports SHA1 (not MD5) in its EU region
⁸ Opendrive does not support creation of duplicate files using
their web client interface or other stock clients, but the underlying
storage platform has been determined to allow duplicate files, and it
is possible to create them with `rclone`. It may be that this is a
mistake or an unsupported feature.
⁹ QingStor does not support SetModTime for objects bigger than 5 GiB.
¹⁰ FTP supports modtimes for the major FTP servers, and also others
if they advertised required protocol extensions. See [this](/ftp/#modification-times)
for more details.
¹¹ Internet Archive requires option `wait_archive` to be set to a non-zero value
for full modtime support.
¹² HiDrive supports [its own custom
hash](https://static.hidrive.com/dev/0001).
It combines SHA1 sums for each 4 KiB block hierarchically to a single
top-level sum.
¹³ Uloz.to provides server-calculated MD5 hash upon file upload. MD5 and SHA256
hashes are client-calculated and stored as metadata fields.
### Hash
@@ -39,7 +136,7 @@ size by default, though can be configured to check the file hash
change the timestamp of an existing file without having to re-upload it.
| Key | Explanation |
|-----|-------------|
| --- | ----------- |
| `-` | ModTimes not supported - times likely the upload time |
| `R` | ModTimes supported on files but can't be changed without re-upload |
| `R/W` | Read and Write ModTimes fully supported on files |
@@ -186,8 +283,8 @@ will be escaped with the ``‛`` character to avoid ambiguous file names.
Each cloud storage backend can use a different set of characters,
which will be specified in the documentation for each backend.
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| Character | Value | Replacement |
| --------- | :---: | :---------: |
| NUL | 0x00 | ␀ |
| SOH | 0x01 | ␁ |
| STX | 0x02 | ␂ |
@@ -227,9 +324,9 @@ The default encoding will also encode these file names as they are
problematic with many cloud storage systems.
| File name | Replacement |
| --------- |:-----------:|
| --------- | :---------: |
| . | ． |
| .. | ．． |
| .. | ．． |
#### Invalid UTF-8 bytes {#invalid-utf8}
@@ -269,8 +366,8 @@ list of all possible values by passing an invalid value to this
flag, e.g. `--local-encoding "help"`. The command `rclone help flags encoding`
will show you the defaults for the backends.
| Encoding | Characters | Encoded as |
| --------- | ---------- | ---------- |
| Encoding | Characters | Encoded as |
| -------- | ---------- | ---------- |
| Asterisk | `*` | `＊` |
| BackQuote | `` ` `` | `｀` |
| BackSlash | `\` | `＼` |
@@ -395,12 +492,12 @@ that backend) and/or user metadata (general purpose metadata).
The levels of metadata support are
| Key | Explanation |
|-----|-------------|
| `R` | Read only System Metadata on files only|
| `RW` | Read and write System Metadata on files only|
| `RWU` | Read and write System Metadata and read and write User Metadata on files only|
| --- | ----------- |
| `R` | Read only System Metadata on files only |
| `RW` | Read and write System Metadata on files only |
| `RWU` | Read and write System Metadata and read and write User Metadata on files only |
| `DR` | Read only System Metadata on files and directories |
| `DRW` | Read and write System Metadata on files and directories|
| `DRW` | Read and write System Metadata on files and directories |
| `DRWU` | Read and write System Metadata and read and write User Metadata on files and directories |
See [the metadata docs](/docs/#metadata) for more info.
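As an illustration, the metadata of a single object can be inspected with
`rclone lsjson` (a sketch; the path is a placeholder):
```console
rclone lsjson --metadata remote:path/file.txt
```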
@@ -410,7 +507,73 @@ See [the metadata docs](/docs/#metadata) for more info.
All rclone remotes support a base command set. Other features depend
upon backend-specific capabilities.
{{< optional-features-table >}}
| Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | MultithreadUpload | LinkSharing | About | EmptyDir |
| ----------------------------- | :---: | :--: | :--: | :-----: | :-----: | :---: | :----------: | :---------------: | :----------: | :---: | :------: |
| 1Fichier | No | Yes | Yes | No | No | No | No | No | Yes | No | Yes |
| Akamai Netstorage | Yes | No | No | No | No | Yes | Yes | No | No | No | Yes |
| Amazon S3 (or S3 compatible) | No | Yes | No | No | Yes | Yes | Yes | Yes | Yes | No | No |
| Backblaze B2 | No | Yes | No | No | Yes | Yes | Yes | Yes | Yes | No | No |
| Box | Yes | Yes | Yes | Yes | Yes | No | Yes | No | Yes | Yes | Yes |
| Citrix ShareFile | Yes | Yes | Yes | Yes | No | No | No | No | No | No | Yes |
| Dropbox | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes |
| Cloudinary | No | No | No | No | No | No | Yes | No | No | No | No |
| Enterprise File Fabric | Yes | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes |
| Files.com | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | No | Yes |
| FTP | No | No | Yes | Yes | No | No | Yes | No | No | No | Yes |
| Gofile | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes |
| Google Cloud Storage | Yes | Yes | No | No | No | No | Yes | No | No | No | No |
| Google Drive | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes |
| Google Photos | No | No | No | No | No | No | No | No | No | No | No |
| HDFS | Yes | No | Yes | Yes | No | No | Yes | No | No | Yes | Yes |
| HiDrive | Yes | Yes | Yes | Yes | No | No | Yes | No | No | No | Yes |
| HTTP | No | No | No | No | No | No | No | No | No | No | Yes |
| iCloud Drive | Yes | Yes | Yes | Yes | No | No | No | No | No | No | Yes |
| ImageKit | Yes | No | Yes | No | No | No | No | No | No | No | Yes |
| Internet Archive | No | Yes | No | No | Yes | Yes | No | No | Yes | Yes | No |
| Jottacloud | Yes | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes |
| Koofr | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes |
| Mail.ru Cloud | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
| Mega | Yes | No | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
| Memory | No | Yes | No | No | No | Yes | Yes | No | No | No | No |
| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | Yes | Yes | No | No | No |
| Microsoft Azure Files Storage | No | Yes | Yes | Yes | No | No | Yes | Yes | No | Yes | Yes |
| Microsoft OneDrive | Yes | Yes | Yes | Yes | Yes | Yes ⁵ | No | No | Yes | Yes | Yes |
| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | Yes |
| OpenStack Swift | Yes ¹ | Yes | No | No | No | Yes | Yes | No | No | Yes | No |
| Oracle Object Storage | No | Yes | No | No | Yes | Yes | Yes | Yes | No | No | No |
| pCloud | Yes | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes |
| PikPak | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
| Pixeldrain | Yes | No | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes |
| premiumize.me | Yes | No | Yes | Yes | No | No | No | No | Yes | Yes | Yes |
| put.io | Yes | No | Yes | Yes | Yes | No | Yes | No | No | Yes | Yes |
| Proton Drive | Yes | No | Yes | Yes | Yes | No | No | No | No | Yes | Yes |
| QingStor | No | Yes | No | No | Yes | Yes | No | No | No | No | No |
| Quatrix by Maytech | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | Yes |
| Seafile | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes |
| SFTP | No | Yes ⁴ | Yes | Yes | No | No | Yes | No | No | Yes | Yes |
| Sia | No | No | No | No | No | No | Yes | No | No | No | Yes |
| SMB | No | No | Yes | Yes | No | No | Yes | Yes | No | No | Yes |
| SugarSync | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | No | Yes |
| Storj | Yes ² | Yes | Yes | No | No | Yes | Yes | No | Yes | No | No |
| Uloz.to | No | No | Yes | Yes | No | No | No | No | No | No | Yes |
| Uptobox | No | Yes | Yes | Yes | No | No | No | No | No | No | No |
| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ³ | No | No | Yes | Yes |
| Yandex Disk | Yes | Yes | Yes | Yes | Yes | No | Yes | No | Yes | Yes | Yes |
| Zoho WorkDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | Yes |
| The local filesystem | No | No | Yes | Yes | No | No | Yes | Yes | No | Yes | Yes |
¹ Note Swift implements this in order to delete directory markers but
it doesn't actually have a quicker way of deleting files other than
deleting them individually.
² Storj implements this efficiently only for entire buckets. If
purging a directory inside a bucket, files are deleted individually.
³ StreamUpload is not supported with Nextcloud.
⁴ Use the `--sftp-copy-is-hardlink` flag to enable.
⁵ Use the `--onedrive-delta` flag to enable.
### Purge
View File
@@ -18,7 +18,6 @@ The S3 backend can be used with a number of different providers:
{{< provider name="China Mobile Ecloud Elastic Object Storage (EOS)" home="https://ecloud.10086.cn/home/product-introduction/eos/" config="/s3/#china-mobile-ecloud-eos" >}}
{{< provider name="Cloudflare R2" home="https://blog.cloudflare.com/r2-open-beta/" config="/s3/#cloudflare-r2" >}}
{{< provider name="Arvan Cloud Object Storage (AOS)" home="https://www.arvancloud.com/en/products/cloud-storage" config="/s3/#arvan-cloud" >}}
{{< provider name="Bizfly Cloud Simple Storage" home="https://bizflycloud.vn/" config="/s3/#bizflycloud" >}}
{{< provider name="Cubbit DS3" home="https://cubbit.io/ds3-cloud" config="/s3/#Cubbit" >}}
{{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
{{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
@@ -4537,36 +4536,6 @@ server_side_encryption =
storage_class =
```
### BizflyCloud {#bizflycloud}
[Bizfly Cloud Simple Storage](https://bizflycloud.vn/simple-storage) is an
S3-compatible service with regions in Hanoi (HN) and Ho Chi Minh City (HCM).
Use the endpoint for your region:
- HN: `hn.ss.bfcplatform.vn`
- HCM: `hcm.ss.bfcplatform.vn`
A minimal configuration looks like this.
```ini
[bizfly]
type = s3
provider = BizflyCloud
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region = HN
endpoint = hn.ss.bfcplatform.vn
location_constraint =
acl =
server_side_encryption =
storage_class =
```
Switch `region` and `endpoint` to `HCM` and `hcm.ss.bfcplatform.vn` for Ho Chi
Minh City.
### Ceph
[Ceph](https://ceph.com/) is an open-source, unified, distributed
View File
@@ -1186,12 +1186,6 @@ URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
Supports the format http://user:pass@host:port, http://host:port, http://host.
Example:
http://myUser:myPass@proxyhostname.example.com:8000
Properties:
View File
@@ -97,7 +97,7 @@ Shade uses multipart uploads by default. This means that files will be chunked a
Please note that when deleting files in Shade via rclone it will delete the file instantly, instead of sending it to the trash. This means that it will not be recoverable.
<!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/shade/shade.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/box/box.go then run make backenddocs" >}}
### Standard options
Here are the Standard options specific to shade (Shade FS).
@@ -183,7 +183,7 @@ Properties:
- Type: string
- Required: false
<!-- autogenerated options stop -->
{{< rem autogenerated options stop >}}
## Limitations
View File
@@ -13,7 +13,7 @@ Thank you to our sponsors:
<!-- markdownlint-capture -->
<!-- markdownlint-disable line-length no-bare-urls -->
{{< sponsor src="/img/logos/rabata.svg" width="300" height="200" title="Visit our sponsor Rabata.io" link="https://rabata.io/?utm_source=banner&utm_medium=rclone&utm_content=general">}}
{{< sponsor src="/img/logos/rabata/txt_1_300x114.png" width="300" height="200" title="Visit our sponsor Rabata.io" link="https://rabata.io/?utm_source=banner&utm_medium=rclone&utm_content=general">}}
{{< sponsor src="/img/logos/idrive_e2.svg" width="300" height="200" title="Visit our sponsor IDrive e2" link="https://www.idrive.com/e2/?refer=rclone">}}
{{< sponsor src="/img/logos/filescom-enterprise-grade-workflows.png" width="300" height="200" title="Start Your Free Trial Today" link="https://files.com/?utm_source=rclone&utm_medium=referral&utm_campaign=banner&utm_term=rclone">}}
{{< sponsor src="/img/logos/mega-s4.svg" width="300" height="200" title="MEGA S4: New S3 compatible object storage. High scale. Low cost. Free egress." link="https://mega.io/objectstorage?utm_source=rclone&utm_medium=referral&utm_campaign=rclone-mega-s4&mct=rclonepromo">}}
View File
@@ -1,58 +0,0 @@
---
title: "Backend Support Tiers"
description: "A complete list of supported backends and their stability tiers."
---
# Tiers
Rclone backends are divided into tiers to give users an idea of the stability of each backend.
| Tier | Label | Intended meaning |
|--------|---------------|------------------|
| {{< tier tier="Tier 1" >}} | Core | Production-grade, first-class |
| {{< tier tier="Tier 2" >}} | Stable | Well-supported, minor gaps |
| {{< tier tier="Tier 3" >}} | Supported | Works for many uses; known caveats |
| {{< tier tier="Tier 4" >}} | Experimental | Use with care; expect gaps/changes |
| {{< tier tier="Tier 5" >}} | Deprecated | No longer maintained or supported |
## Overview
Here is a summary of all backends:
{{< tiers-table >}}
## Scoring
Here is how the backends are scored.
### Features
These are useful optional features a backend should have in rough
order of importance. Each one of these scores a point for the Features
column.
- F1: Hash(es)
- F2: Modtime
- F3: Stream upload
- F4: Copy/Move
- F5: DirMove
- F6: Metadata
- F7: MultipartUpload
### Tier
The tier is decided after determining these attributes. Some discretion is allowed in tiering as some of these attributes are more important than others.
| Attr | T1: Core | T2: Stable | T3: Supported | T4: Experimental | T5: Incubator |
|------|----------|------------|---------------|------------------|---------------|
| Maintainers | >=2 | >=1 | >=1 | >=0 | >=0 |
| API source | Official | Official | Either | Either | Either |
| Features (F1-F7) | >=5/7 | >=4/7 | >=3/7 | >=2/7 | N/A |
| Integration tests | All Green | All green | Nearly all green | Some Flaky | N/A |
| Error handling | Pacer | Pacer | Retries | Retries | N/A |
| Data integrity | Hashes, alt, modtime | Hashes or alt | Hash OR modtime | Best-effort | N/A |
| Perf baseline | Bench within 2x S3 | Bench doc | Anecdotal OK | Optional | N/A |
| Adoption | widely used | often used | some use | N/A | N/A |
| Docs completeness | Full | Full | Basic | Minimal | Minimal |
| Security | Principle-of-least-privilege | Reasonable scopes | Basic auth | Works | Works |
docs/content/uptobox.md Normal file
View File
@@ -0,0 +1,179 @@
---
title: "Uptobox"
description: "Rclone docs for Uptobox"
versionIntroduced: "v1.56"
---
# {{< icon "fa fa-archive" >}} Uptobox
This is a backend for the Uptobox file storage service. Uptobox is closer to a
one-click hoster than a traditional cloud storage provider and is therefore not
suitable for long-term storage.
Paths are specified as `remote:path`
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
## Configuration
To configure an Uptobox backend you'll need your personal API token. You'll find
it in your [account settings](https://uptobox.com/my_account).
Here is an example of how to make a remote called `remote` with the default setup.
First run:
```console
rclone config
```
This will guide you through an interactive setup process:
```text
Current remotes:
Name Type
==== ====
TestUptobox uptobox
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
name> uptobox
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[...]
37 / Uptobox
\ "uptobox"
[...]
Storage> uptobox
** See help for uptobox backend at: https://rclone.org/uptobox/ **
Your API Key, get it from https://uptobox.com/my_account
Enter a string value. Press Enter for the default ("").
api_key> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[uptobox]
type = uptobox
api_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d>
```
Once configured you can then use `rclone` like this (replace `remote` with the
name you gave your remote):
List directories in top level of your Uptobox
```console
rclone lsd remote:
```
List all the files in your Uptobox
```console
rclone ls remote:
```
To copy a local directory to an Uptobox directory called backup
```console
rclone copy /home/source remote:backup
```
### Modification times and hashes
Uptobox supports neither modified times nor checksums. All timestamps
will read as the time set by `--default-time`.
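If the fallback time matters for your workflow, it can be overridden per command,
e.g. (a sketch):
```console
rclone lsl --default-time 2020-01-02T03:04:05Z remote:
```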
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| " | 0x22 | |
| ` | 0x41 | |
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in XML strings.
<!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/uptobox/uptobox.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
### Standard options
Here are the Standard options specific to uptobox (Uptobox).
#### --uptobox-access-token
Your access token.
Get it from https://uptobox.com/my_account.
Properties:
- Config: access_token
- Env Var: RCLONE_UPTOBOX_ACCESS_TOKEN
- Type: string
- Required: false
### Advanced options
Here are the Advanced options specific to uptobox (Uptobox).
#### --uptobox-private
Set to make uploaded files private
Properties:
- Config: private
- Env Var: RCLONE_UPTOBOX_PRIVATE
- Type: bool
- Default: false
#### --uptobox-encoding
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Properties:
- Config: encoding
- Env Var: RCLONE_UPTOBOX_ENCODING
- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
#### --uptobox-description
Description of the remote.
Properties:
- Config: description
- Env Var: RCLONE_UPTOBOX_DESCRIPTION
- Type: string
- Required: false
<!-- autogenerated options stop -->
## Limitations
Uptobox will delete inactive files that have not been accessed in 60 days.
`rclone about` is not supported by this backend; an overview of used space can,
however, be seen in the Uptobox web interface.
View File
@@ -1,16 +0,0 @@
backend: alias
name: Alias
tier: Tier 1
maintainers: Core
features_score: 7
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: true
remote: null
features: null
hashes: null
precision: null
View File
@@ -1,43 +0,0 @@
backend: archive
name: Archive
tier: Tier 3
maintainers: Core
features_score: 3
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Some use
docs: Full
security: High
virtual: true
remote: 'TestArchive:'
features:
- About
- CanHaveEmptyDirectories
- DirMove
- Move
- OpenWriterAt
- Overlay
- PartialUploads
- PutStream
- ReadMetadata
- SetWrapper
- UnWrap
- UserMetadata
- WrapFs
- WriteMetadata
hashes:
- md5
- sha1
- whirlpool
- crc32
- sha256
- sha512
- blake3
- xxh3
- xxh128
- dropbox
- hidrive
- mailru
- quickxor
precision: 1000000000
View File
@@ -1,34 +0,0 @@
backend: azureblob
name: Azure Blob
tier: Tier 1
maintainers: Core
features_score: 7
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: false
remote: 'TestAzureBlob:'
features:
- BucketBased
- BucketBasedRootOK
- Copy
- DoubleSlash
- GetTier
- ListP
- ListR
- OpenChunkWriter
- Purge
- PutStream
- ReadMetadata
- ReadMimeType
- ServerSideAcrossConfigs
- SetTier
- UserMetadata
- WriteMetadata
- WriteMimeType
hashes:
- md5
precision: 1
View File
@@ -1,30 +0,0 @@
backend: azurefiles
name: Azure Files
tier: Tier 2
maintainers: Core
features_score: 6
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: false
remote: 'TestAzureFiles:'
features:
- About
- CanHaveEmptyDirectories
- CaseInsensitive
- Copy
- DirMove
- ListP
- Move
- OpenWriterAt
- PartialUploads
- PutStream
- ReadMimeType
- SlowHash
- WriteMimeType
hashes:
- md5
precision: 1000000000
View File
@@ -1,31 +0,0 @@
backend: b2
name: B2
tier: Tier 1
maintainers: Core
features_score: 6
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: false
remote: 'TestB2:'
features:
- BucketBased
- BucketBasedRootOK
- ChunkWriterDoesntSeek
- CleanUp
- Command
- Copy
- ListP
- ListR
- OpenChunkWriter
- PublicLink
- Purge
- PutStream
- ReadMimeType
- WriteMimeType
hashes:
- sha1
precision: 1000000
View File
@@ -1,32 +0,0 @@
backend: box
name: Box
tier: Tier 1
maintainers: Core
features_score: 6
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: false
remote: 'TestBox:'
features:
- About
- CanHaveEmptyDirectories
- CaseInsensitive
- ChangeNotify
- CleanUp
- Copy
- DirCacheFlush
- DirMove
- ListP
- Move
- PublicLink
- Purge
- PutStream
- PutUnchecked
- Shutdown
hashes:
- sha1
precision: 1000000000
View File
@@ -1,40 +0,0 @@
backend: cache
name: Cache
tier: Tier 5
maintainers: None
features_score: 7
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Some use
docs: Full
security: High
virtual: true
remote: 'TestCache:'
features:
- About
- CanHaveEmptyDirectories
- Command
- DirCacheFlush
- DirMove
- Move
- Overlay
- PutStream
- SetWrapper
- UnWrap
- WrapFs
hashes:
- md5
- sha1
- whirlpool
- crc32
- sha256
- sha512
- blake3
- xxh3
- xxh128
- dropbox
- hidrive
- mailru
- quickxor
precision: 1
View File
@@ -1,16 +0,0 @@
backend: chunker
name: Chunker
tier: Tier 4
maintainers: None
features_score: 7
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Some use
docs: Full
security: High
virtual: true
remote: 'TestChunkerLocal:'
features: null
hashes: null
precision: null
View File
@@ -1,18 +0,0 @@
backend: cloudinary
name: Cloudinary
tier: Tier 3
maintainers: Core
features_score: 3
integration_tests: Failing
data_integrity: Hash
performance: High
adoption: Often used
docs: Full
security: High
virtual: false
remote: 'TestCloudinary:'
features:
- CanHaveEmptyDirectories
hashes:
- md5
precision: 3153600000000000000
View File
@@ -1,49 +0,0 @@
backend: combine
name: Combine
tier: Tier 1
maintainers: Core
features_score: 7
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Often used
docs: Full
security: High
virtual: true
remote: TestCombine:dir1
features:
- About
- CanHaveEmptyDirectories
- DirModTimeUpdatesOnWrite
- DirMove
- DirSetModTime
- ListP
- MkdirMetadata
- Move
- OpenWriterAt
- Overlay
- PartialUploads
- PutStream
- ReadDirMetadata
- ReadMetadata
- SlowHash
- UserDirMetadata
- UserMetadata
- WriteDirMetadata
- WriteDirSetModTime
- WriteMetadata
hashes:
- md5
- sha1
- whirlpool
- crc32
- sha256
- sha512
- blake3
- xxh3
- xxh128
- dropbox
- hidrive
- mailru
- quickxor
precision: 1
View File
@@ -1,39 +0,0 @@
backend: compress
name: Compress
tier: Tier 4
maintainers: Core
features_score: 5
integration_tests: Failing
data_integrity: Hash
performance: Medium
adoption: Some use
docs: Full
security: High
virtual: true
remote: 'TestCompress:'
features:
- About
- CanHaveEmptyDirectories
- DirModTimeUpdatesOnWrite
- DirMove
- DirSetModTime
- ListP
- MkdirMetadata
- Move
- Overlay
- PartialUploads
- PutStream
- ReadDirMetadata
- ReadMetadata
- ReadMimeType
- SetWrapper
- UnWrap
- UserDirMetadata
- UserMetadata
- WrapFs
- WriteDirMetadata
- WriteDirSetModTime
- WriteMetadata
hashes:
- md5
precision: 1
View File
@@ -1,16 +0,0 @@
backend: crypt
name: Crypt
tier: Tier 1
maintainers: Core
features_score: 7
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: true
remote: 'TestCryptLocal:'
features: null
hashes: null
precision: null
View File
@@ -1,16 +0,0 @@
backend: doi
name: Doi
tier: Tier 2
maintainers: External
features_score: 0
integration_tests: N/A
data_integrity: Other
performance: High
adoption: Some use
docs: Full
security: High
virtual: false
remote: null
features: null
hashes: null
precision: null
View File
@@ -1,27 +0,0 @@
backend: drime
name: Drime
tier: Tier 1
maintainers: Core
features_score: 4
integration_tests: Passing
data_integrity: High
performance: High
adoption: Some use
docs: Full
security: High
virtual: false
remote: 'TestDrime:'
features:
- CanHaveEmptyDirectories
- Copy
- DirCacheFlush
- DirMove
- Move
- OpenChunkWriter
- Purge
- PutStream
- PutUnchecked
- ReadMimeType
- WriteMimeType
hashes: []
precision: 3153600000000000000
View File
@@ -1,48 +0,0 @@
backend: drive
name: Drive
tier: Tier 1
maintainers: Core
features_score: 7
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: false
remote: 'TestDrive:'
features:
- About
- CanHaveEmptyDirectories
- ChangeNotify
- CleanUp
- Command
- Copy
- DirCacheFlush
- DirMove
- DirSetModTime
- DuplicateFiles
- FilterAware
- ListP
- ListR
- MergeDirs
- MkdirMetadata
- Move
- PublicLink
- Purge
- PutStream
- PutUnchecked
- ReadDirMetadata
- ReadMetadata
- ReadMimeType
- UserDirMetadata
- UserMetadata
- WriteDirMetadata
- WriteDirSetModTime
- WriteMetadata
- WriteMimeType
hashes:
- md5
- sha1
- sha256
precision: 1000000
View File
@@ -1,29 +0,0 @@
backend: dropbox
name: Dropbox
tier: Tier 1
maintainers: Core
features_score: 6
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: false
remote: 'TestDropbox:'
features:
- About
- CanHaveEmptyDirectories
- CaseInsensitive
- ChangeNotify
- Copy
- DirMove
- ListP
- Move
- PublicLink
- Purge
- PutStream
- Shutdown
hashes:
- dropbox
precision: 1000000000
View File
@@ -1,26 +0,0 @@
backend: fichier
name: Fichier
tier: Tier 3
maintainers: Core
features_score: 2
integration_tests: Passing
data_integrity: Hash
performance: Medium
adoption: Some use
docs: Full
security: High
virtual: false
remote: 'TestFichier:'
features:
- About
- CanHaveEmptyDirectories
- Copy
- DirMove
- DuplicateFiles
- Move
- PublicLink
- PutUnchecked
- ReadMimeType
hashes:
- whirlpool
precision: 3153600000000000000
View File
@@ -1,26 +0,0 @@
backend: filefabric
name: Filefabric
tier: Tier 4
maintainers: Core
features_score: 3
integration_tests: Failing
data_integrity: Modtime
performance: Medium
adoption: Some use
docs: Full
security: High
virtual: false
remote: 'TestFileFabric:'
features:
- CanHaveEmptyDirectories
- CaseInsensitive
- CleanUp
- Copy
- DirCacheFlush
- DirMove
- Move
- Purge
- ReadMimeType
- WriteMimeType
hashes: []
precision: 1000000000
View File
@@ -1,21 +0,0 @@
backend: filelu
name: Filelu
tier: Tier 1
maintainers: Core
features_score: 1
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: false
remote: 'TestFileLu:'
features:
- About
- CanHaveEmptyDirectories
- Move
- Purge
- SlowHash
hashes: []
precision: 3153600000000000000
View File
@@ -1,29 +0,0 @@
backend: filen
name: Filen
tier: Tier 1
maintainers: External
features_score: 6
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Some use
docs: Full
security: High
virtual: false
remote: 'TestFilen:'
features:
- About
- CanHaveEmptyDirectories
- ChunkWriterDoesntSeek
- CleanUp
- DirMove
- ListR
- Move
- OpenChunkWriter
- Purge
- PutStream
- ReadMimeType
- WriteMimeType
hashes:
- blake3
precision: 1000000
View File
@@ -1,29 +0,0 @@
backend: filescom
name: Filescom
tier: Tier 1
maintainers: Core
features_score: 5
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: false
remote: 'TestFilesCom:'
features:
- CanHaveEmptyDirectories
- CaseInsensitive
- Copy
- DirModTimeUpdatesOnWrite
- DirMove
- DirSetModTime
- Move
- PublicLink
- Purge
- PutStream
- ReadMimeType
hashes:
- md5
- crc32
precision: 1000000000
View File
@@ -1,22 +0,0 @@
backend: ftp
name: FTP
tier: Tier 1
maintainers: Core
features_score: 4
integration_tests: Passing
data_integrity: Modtime
performance: High
adoption: Widely used
docs: Full
security: Varies
virtual: false
remote: 'TestFTPProftpd:'
features:
- CanHaveEmptyDirectories
- DirMove
- Move
- PartialUploads
- PutStream
- Shutdown
hashes: []
precision: 1000000000
View File
@@ -1,34 +0,0 @@
backend: gofile
name: Gofile
tier: Tier 1
maintainers: Core
features_score: 5
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Often used
docs: Full
security: High
virtual: false
remote: 'TestGoFile:'
features:
- About
- CanHaveEmptyDirectories
- Copy
- DirCacheFlush
- DirModTimeUpdatesOnWrite
- DirMove
- DirSetModTime
- DuplicateFiles
- ListR
- MergeDirs
- Move
- PublicLink
- Purge
- PutStream
- PutUnchecked
- ReadMimeType
- WriteDirSetModTime
hashes:
- md5
precision: 1000000000
View File
@@ -1,26 +0,0 @@
backend: googlecloudstorage
name: Google Cloud Storage
tier: Tier 1
maintainers: Core
features_score: 6
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: false
remote: 'TestGoogleCloudStorage:'
features:
- BucketBased
- BucketBasedRootOK
- CanHaveEmptyDirectories
- Copy
- ListP
- ListR
- PutStream
- ReadMimeType
- WriteMimeType
hashes:
- md5
precision: 1
View File
@@ -1,20 +0,0 @@
backend: googlephotos
name: Google Photos
tier: Tier 5
maintainers: None
features_score: 0
integration_tests: Failing
data_integrity: Other
performance: Low
adoption: Some use
docs: Full
security: High
virtual: false
remote: 'TestGooglePhotos:'
features:
- Disconnect
- ReadMimeType
- Shutdown
- UserInfo
hashes: []
precision: 3153600000000000000
View File
@@ -1,16 +0,0 @@
backend: hasher
name: Hasher
tier: Tier 4
maintainers: Core
features_score: 7
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Some use
docs: Full
security: High
virtual: true
remote: null
features: null
hashes: null
precision: null
View File
@@ -1,22 +0,0 @@
backend: hdfs
name: HDFS
tier: Tier 2
maintainers: Core
features_score: 4
integration_tests: Passing
data_integrity: Modtime
performance: Medium
adoption: Some use
docs: Full
security: High
virtual: false
remote: 'TestHdfs:'
features:
- About
- CanHaveEmptyDirectories
- DirMove
- Move
- Purge
- PutStream
hashes: []
precision: 1000000000
View File
@@ -1,25 +0,0 @@
backend: hidrive
name: Hidrive
tier: Tier 1
maintainers: Core
features_score: 5
integration_tests: Passing
data_integrity: Hash
performance: Medium
adoption: Often used
docs: Full
security: High
virtual: false
remote: 'TestHiDrive:'
features:
- CanHaveEmptyDirectories
- Copy
- DirMove
- Move
- Purge
- PutStream
- PutUnchecked
- Shutdown
hashes:
- hidrive
precision: 1000000000
View File
@@ -1,20 +0,0 @@
backend: http
name: HTTP
tier: Tier 3
maintainers: Core
features_score: 0
integration_tests: N/A
data_integrity: Other
performance: High
adoption: Widely used
docs: Full
security: Varies
virtual: false
remote: ':http,url=''http://downloads.rclone.org'':'
features:
- CanHaveEmptyDirectories
- Command
- PutStream
- ReadMetadata
hashes: []
View File

@@ -1,16 +0,0 @@
backend: iclouddrive
name: Iclouddrive
tier: Tier 4
maintainers: External
features_score: 3
integration_tests: Flaky
data_integrity: Modtime
performance: Low
adoption: Some use
docs: Full
security: High
virtual: false
remote: 'TestICloudDrive:'
features: null
hashes: null
precision: null
View File
@@ -1,23 +0,0 @@
backend: imagekit
name: Imagekit
tier: Tier 1
maintainers: External
features_score: 0
integration_tests: Passing
data_integrity: Other
performance: High
adoption: Some use
docs: Full
security: High
virtual: false
remote: 'TestImageKit:'
features:
- CanHaveEmptyDirectories
- FilterAware
- PublicLink
- Purge
- ReadMetadata
- ReadMimeType
- SlowHash
hashes: []
precision: 3153600000000000000
View File
@@ -1,28 +0,0 @@
backend: internetarchive
name: Internet Archive
tier: Tier 3
maintainers: Core
features_score: 5
integration_tests: Failing
data_integrity: Hash
performance: Medium
adoption: Widely used
docs: Full
security: High
virtual: false
remote: TestIA:rclone-integration-test
features:
- About
- BucketBased
- CleanUp
- Copy
- ListR
- PublicLink
- ReadMetadata
- UserMetadata
- WriteMetadata
hashes:
- md5
- sha1
- crc32
precision: 1
View File
@@ -1,32 +0,0 @@
backend: jottacloud
name: Jottacloud
tier: Tier 1
maintainers: Core
features_score: 6
integration_tests: Flaky
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: false
remote: 'TestJottacloud:'
features:
- About
- CanHaveEmptyDirectories
- CaseInsensitive
- CleanUp
- Copy
- DirMove
- ListR
- Move
- PublicLink
- Purge
- ReadMetadata
- ReadMimeType
- Shutdown
- UserInfo
- WriteMetadata
hashes:
- md5
precision: 1000000000
View File
@@ -1,25 +0,0 @@
backend: koofr
name: Koofr
tier: Tier 2
maintainers: Core
features_score: 5
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: false
remote: 'TestKoofr:'
features:
- About
- CanHaveEmptyDirectories
- CaseInsensitive
- Copy
- DirMove
- Move
- PublicLink
- PutStream
hashes:
- md5
precision: 1000000
View File
@@ -1,20 +0,0 @@
backend: linkbox
name: Linkbox
tier: Tier 5
maintainers: Core
features_score: 2
integration_tests: Failing
data_integrity: Modtime
performance: High
adoption: Often used
docs: Basic
security: High
virtual: false
remote: 'TestLinkbox:'
features:
- CanHaveEmptyDirectories
- CaseInsensitive
- DirCacheFlush
- Purge
hashes: []
View File

@@ -1,50 +0,0 @@
backend: local
name: Local
tier: Tier 1
maintainers: Core
features_score: 7
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: false
remote: .
features:
- About
- CanHaveEmptyDirectories
- Command
- DirModTimeUpdatesOnWrite
- DirMove
- DirSetModTime
- FilterAware
- IsLocal
- MkdirMetadata
- Move
- OpenWriterAt
- PartialUploads
- PutStream
- ReadDirMetadata
- ReadMetadata
- SlowHash
- UserDirMetadata
- UserMetadata
- WriteDirMetadata
- WriteDirSetModTime
- WriteMetadata
hashes:
- md5
- sha1
- whirlpool
- crc32
- sha256
- sha512
- blake3
- xxh3
- xxh128
- dropbox
- hidrive
- mailru
- quickxor
View File

@@ -1,27 +0,0 @@
backend: mailru
name: Mailru
tier: Tier 1
maintainers: External
features_score: 6
integration_tests: Passing
data_integrity: Hash
performance: Medium
adoption: Often used
docs: Full
security: High
virtual: false
remote: 'TestMailru:'
features:
- About
- CanHaveEmptyDirectories
- CaseInsensitive
- CleanUp
- Copy
- DirMove
- Move
- PublicLink
- Purge
- ServerSideAcrossConfigs
hashes:
- mailru
precision: 1000000000

docs/data/backends/mega.yaml

@@ -1,27 +0,0 @@
backend: mega
name: Mega
tier: Tier 2
maintainers: Core
features_score: 3
integration_tests: Flaky
data_integrity: Other
performance: High
adoption: Widely used
docs: Full
security: High
virtual: false
remote: 'TestMega:'
features:
- About
- CanHaveEmptyDirectories
- CleanUp
- DirCacheFlush
- DirMove
- DuplicateFiles
- MergeDirs
- Move
- PublicLink
- Purge
- PutUnchecked
hashes: []
precision: 3153600000000000000

docs/data/backends/memory.yaml

@@ -1,25 +0,0 @@
backend: memory
name: Memory
tier: Tier 1
maintainers: Core
features_score: 4
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: false
remote: ':memory:'
features:
- BucketBased
- BucketBasedRootOK
- Copy
- ListP
- ListR
- PutStream
- ReadMimeType
- WriteMimeType
hashes:
- md5
precision: 1

docs/data/backends/netstorage.yaml

@@ -1,22 +0,0 @@
backend: netstorage
name: Netstorage
tier: Tier 1
maintainers: Core
features_score: 3
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Some use
docs: Full
security: High
virtual: false
remote: 'TestnStorage:'
features:
- CanHaveEmptyDirectories
- Command
- ListR
- Purge
- PutStream
hashes:
- md5
precision: 1000000000

docs/data/backends/onedrive.yaml

@@ -1,38 +0,0 @@
backend: onedrive
name: Onedrive
tier: Tier 1
maintainers: Core
features_score: 6
integration_tests: Flaky
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: false
remote: 'TestOneDrive:'
features:
- About
- CanHaveEmptyDirectories
- CaseInsensitive
- ChangeNotify
- CleanUp
- Copy
- DirCacheFlush
- DirMove
- DirSetModTime
- ListP
- MkdirMetadata
- Move
- PublicLink
- Purge
- ReadDirMetadata
- ReadMetadata
- ReadMimeType
- Shutdown
- WriteDirMetadata
- WriteDirSetModTime
- WriteMetadata
hashes:
- quickxor
precision: 1000000000

docs/data/backends/opendrive.yaml

@@ -1,25 +0,0 @@
backend: opendrive
name: Opendrive
tier: Tier 1
maintainers: Core
features_score: 5
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Often used
docs: Full
security: High
virtual: false
remote: 'TestOpenDrive:'
features:
- About
- CanHaveEmptyDirectories
- CaseInsensitive
- Copy
- DirCacheFlush
- DirMove
- Move
- Purge
hashes:
- md5
precision: 1000000000

docs/data/backends/oracleobjectstorage.yaml

@@ -1,32 +0,0 @@
backend: oracleobjectstorage
name: Oracle Object Storage
tier: Tier 1
maintainers: Core
features_score: 6
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: false
remote: 'TestOracleObjectStorage:'
features:
- BucketBased
- BucketBasedRootOK
- CleanUp
- Command
- Copy
- GetTier
- ListP
- ListR
- OpenChunkWriter
- PutStream
- ReadMetadata
- ReadMimeType
- SetTier
- SlowModTime
- WriteMimeType
hashes:
- md5
precision: 1000000

docs/data/backends/pcloud.yaml

@@ -1,31 +0,0 @@
backend: pcloud
name: Pcloud
tier: Tier 1
maintainers: Core
features_score: 5
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Often used
docs: Full
security: High
virtual: false
remote: 'TestPcloud:'
features:
- About
- CanHaveEmptyDirectories
- ChangeNotify
- Copy
- DirCacheFlush
- DirMove
- ListP
- ListR
- Move
- PartialUploads
- PublicLink
- Purge
- Shutdown
hashes:
- sha1
- sha256
precision: 1000000000

docs/data/backends/pikpak.yaml

@@ -1,30 +0,0 @@
backend: pikpak
name: Pikpak
tier: Tier 1
maintainers: External
features_score: 5
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Often used
docs: Full
security: High
virtual: false
remote: 'TestPikPak:'
features:
- About
- CanHaveEmptyDirectories
- CleanUp
- Command
- Copy
- DirCacheFlush
- DirMove
- Move
- NoMultiThreading
- PublicLink
- Purge
- ReadMimeType
- UserInfo
hashes:
- md5
precision: 3153600000000000000

docs/data/backends/pixeldrain.yaml

@@ -1,29 +0,0 @@
backend: pixeldrain
name: Pixeldrain
tier: Tier 1
maintainers: Core
features_score: 7
integration_tests: Passing
data_integrity: Hash
performance: High
adoption: Widely used
docs: Full
security: High
virtual: false
remote: 'TestPixeldrain:'
features:
- About
- CanHaveEmptyDirectories
- ChangeNotify
- DirMove
- DirSetModTime
- Move
- PublicLink
- Purge
- PutStream
- ReadMetadata
- ReadMimeType
- WriteMetadata
hashes:
- sha256
precision: 1000000

Some files were not shown because too many files have changed in this diff.
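For reference, every deleted file above follows the same small schema. The sketch below is a minimal, illustrative Go program (not part of rclone) that parses one of these files using the `gopkg.in/yaml.v3` package; the `BackendInfo` type name and the program itself are assumptions, but the field set is taken verbatim from the diffs above. Note that `precision` is the backend's modification-time precision in nanoseconds: `1000000000` is one second, `1` is one nanosecond, and `3153600000000000000` (about 100 years) appears to be the sentinel for backends with no usable modtime precision.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// BackendInfo mirrors the fields of the deleted
// docs/data/backends/*.yaml files shown in this diff.
// The type name and this program are illustrative only.
type BackendInfo struct {
	Backend          string   `yaml:"backend"`
	Name             string   `yaml:"name"`
	Tier             string   `yaml:"tier"`
	Maintainers      string   `yaml:"maintainers"`
	FeaturesScore    int      `yaml:"features_score"`
	IntegrationTests string   `yaml:"integration_tests"`
	DataIntegrity    string   `yaml:"data_integrity"`
	Performance      string   `yaml:"performance"`
	Adoption         string   `yaml:"adoption"`
	Docs             string   `yaml:"docs"`
	Security         string   `yaml:"security"`
	Virtual          bool     `yaml:"virtual"`
	Remote           string   `yaml:"remote"`
	Features         []string `yaml:"features"`  // nil when the file says "features: null"
	Hashes           []string `yaml:"hashes"`    // empty ([]) or nil when no hashes are supported
	Precision        int64    `yaml:"precision"` // modtime precision in nanoseconds
}

func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	var info BackendInfo
	if err := yaml.Unmarshal(data, &info); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s (%s, %s): %d features, hashes=%v, precision=%dns\n",
		info.Name, info.Tier, info.Maintainers,
		len(info.Features), info.Hashes, info.Precision)
}
```

Run against the jottacloud file above, this would print something like `Jottacloud (Tier 1, Core): 15 features, hashes=[md5], precision=1000000000ns`.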