Mirror of https://github.com/rclone/rclone.git, synced 2026-01-22 20:33:17 +00:00

Compare commits: v1.53.1...fix-dropbo

16 Commits
| SHA1 |
|---|
| c741c02fb6 |
| 56e8d75cab |
| e74f5b8906 |
| e4a7686444 |
| c3884aafd9 |
| 0a9785a4ff |
| 8140f67092 |
| 4a001b8a02 |
| 525433e6dd |
| f71f6c57d7 |
| e35623c72e |
| 344bce7e2a |
| 3a4322a7ba |
| 27b9ae4fc3 |
| 7e2488af10 |
| 41ecb586c4 |
MANUAL.html (generated, 17291 changed lines): file diff suppressed because one or more lines are too long.
MANUAL.md (generated, 315 changed lines):
@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
% Sep 13, 2020
% Sep 02, 2020

# Rclone syncs your files to cloud storage

@@ -146,7 +146,6 @@ WebDAV or S3, that work out of the box.)
- StackPath
- SugarSync
- Tardigrade
- Tencent Cloud Object Storage (COS)
- Wasabi
- WebDAV
- Yandex Disk
@@ -504,7 +503,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
|
||||
|
||||
# rclone copy
|
||||
|
||||
Copy files from source to dest, skipping already copied.
|
||||
Copy files from source to dest, skipping already copied
|
||||
|
||||
## Synopsis
|
||||
|
||||
@@ -834,7 +833,7 @@ the source match the files in the destination, not the other way
|
||||
around. This means that extra files in the destination that are not in
|
||||
the source will not be detected.
|
||||
|
||||
The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match`
|
||||
The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--src-only`
|
||||
and `--error` flags write paths, one per line, to the file name (or
|
||||
stdout if it is `-`) supplied. What they write is described in the
|
||||
help below. For example `--differ` will write all paths which are
|
||||
@@ -860,7 +859,6 @@ rclone check source:path dest:path [flags]
|
||||
```
|
||||
--combined string Make a combined report of changes to this file
|
||||
--differ string Report all non-matching files to this file
|
||||
--download Check by downloading rather than with hash.
|
||||
--error string Report all files with errors (hashing or reading) to this file
|
||||
-h, --help help for check
|
||||
--match string Report all matching files to this file
|
||||
@@ -1193,7 +1191,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
|
||||
|
||||
# rclone cleanup
|
||||
|
||||
Clean up the remote if possible.
|
||||
Clean up the remote if possible
|
||||
|
||||
## Synopsis
|
||||
|
||||
@@ -1917,7 +1915,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
|
||||
|
||||
# rclone copyto
|
||||
|
||||
Copy files from source to dest, skipping already copied.
|
||||
Copy files from source to dest, skipping already copied
|
||||
|
||||
## Synopsis
|
||||
|
||||
@@ -2042,7 +2040,7 @@ the source match the files in the destination, not the other way
|
||||
around. This means that extra files in the destination that are not in
|
||||
the source will not be detected.
|
||||
|
||||
The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match`
|
||||
The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--src-only`
|
||||
and `--error` flags write paths, one per line, to the file name (or
|
||||
stdout if it is `-`) supplied. What they write is described in the
|
||||
help below. For example `--differ` will write all paths which are
|
||||
@@ -2436,7 +2434,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
|
||||
|
||||
# rclone lsf
|
||||
|
||||
List directories and objects in remote:path formatted for parsing.
|
||||
List directories and objects in remote:path formatted for parsing
|
||||
|
||||
## Synopsis
|
||||
|
||||
@@ -2746,9 +2744,6 @@ Stopping the mount manually:
|
||||
# OS X
|
||||
umount /path/to/local/mount
|
||||
|
||||
**Note**: As of `rclone` 1.52.2, `rclone mount` now requires Go version 1.13
|
||||
or newer on some platforms depending on the underlying FUSE library in use.
|
||||
|
||||
## Installing on Windows
|
||||
|
||||
To run rclone mount on Windows, you will need to
|
||||
@@ -3057,11 +3052,6 @@ whereas the --vfs-read-ahead is buffered on disk.
|
||||
When using this mode it is recommended that --buffer-size is not set
|
||||
too big and --vfs-read-ahead is set large if required.
|
||||
|
||||
**IMPORTANT** not all file systems support sparse files. In particular
|
||||
FAT/exFAT do not. Rclone will perform very badly if the cache
|
||||
directory is on a filesystem which doesn't support sparse files and it
|
||||
will log an ERROR message if one is detected.
|
||||
|
||||
## VFS Performance
|
||||
|
||||
These flags may be used to enable/disable features of the VFS for
|
||||
@@ -3299,7 +3289,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
|
||||
|
||||
# rclone obscure
|
||||
|
||||
Obscure password for use in the rclone config file.
|
||||
Obscure password for use in the rclone config file
|
||||
|
||||
## Synopsis
|
||||
|
||||
@@ -3764,11 +3754,6 @@ whereas the --vfs-read-ahead is buffered on disk.
|
||||
When using this mode it is recommended that --buffer-size is not set
|
||||
too big and --vfs-read-ahead is set large if required.
|
||||
|
||||
**IMPORTANT** not all file systems support sparse files. In particular
|
||||
FAT/exFAT do not. Rclone will perform very badly if the cache
|
||||
directory is on a filesystem which doesn't support sparse files and it
|
||||
will log an ERROR message if one is detected.
|
||||
|
||||
## VFS Performance
|
||||
|
||||
These flags may be used to enable/disable features of the VFS for
|
||||
@@ -4071,11 +4056,6 @@ whereas the --vfs-read-ahead is buffered on disk.
|
||||
When using this mode it is recommended that --buffer-size is not set
|
||||
too big and --vfs-read-ahead is set large if required.
|
||||
|
||||
**IMPORTANT** not all file systems support sparse files. In particular
|
||||
FAT/exFAT do not. Rclone will perform very badly if the cache
|
||||
directory is on a filesystem which doesn't support sparse files and it
|
||||
will log an ERROR message if one is detected.
|
||||
|
||||
## VFS Performance
|
||||
|
||||
These flags may be used to enable/disable features of the VFS for
|
||||
@@ -4534,11 +4514,6 @@ whereas the --vfs-read-ahead is buffered on disk.
|
||||
When using this mode it is recommended that --buffer-size is not set
|
||||
too big and --vfs-read-ahead is set large if required.
|
||||
|
||||
**IMPORTANT** not all file systems support sparse files. In particular
|
||||
FAT/exFAT do not. Rclone will perform very badly if the cache
|
||||
directory is on a filesystem which doesn't support sparse files and it
|
||||
will log an ERROR message if one is detected.
|
||||
|
||||
## VFS Performance
|
||||
|
||||
These flags may be used to enable/disable features of the VFS for
|
||||
@@ -5060,11 +5035,6 @@ whereas the --vfs-read-ahead is buffered on disk.
|
||||
When using this mode it is recommended that --buffer-size is not set
|
||||
too big and --vfs-read-ahead is set large if required.
|
||||
|
||||
**IMPORTANT** not all file systems support sparse files. In particular
|
||||
FAT/exFAT do not. Rclone will perform very badly if the cache
|
||||
directory is on a filesystem which doesn't support sparse files and it
|
||||
will log an ERROR message if one is detected.
|
||||
|
||||
## VFS Performance
|
||||
|
||||
These flags may be used to enable/disable features of the VFS for
|
||||
@@ -5532,11 +5502,6 @@ whereas the --vfs-read-ahead is buffered on disk.
|
||||
When using this mode it is recommended that --buffer-size is not set
|
||||
too big and --vfs-read-ahead is set large if required.
|
||||
|
||||
**IMPORTANT** not all file systems support sparse files. In particular
|
||||
FAT/exFAT do not. Rclone will perform very badly if the cache
|
||||
directory is on a filesystem which doesn't support sparse files and it
|
||||
will log an ERROR message if one is detected.
|
||||
|
||||
## VFS Performance
|
||||
|
||||
These flags may be used to enable/disable features of the VFS for
|
||||
@@ -6537,8 +6502,6 @@ This can be useful for tracking down problems with syncs in
|
||||
combination with the `-v` flag. See the [Logging section](#logging)
|
||||
for more info.
|
||||
|
||||
If FILE exists then rclone will append to it.
|
||||
|
||||
Note that if you are using the `logrotate` program to manage rclone's
|
||||
logs, then you should use the `copytruncate` option as rclone doesn't
|
||||
have a signal to rotate logs.
|
||||
@@ -8930,8 +8893,6 @@ OR
|
||||
"result": "<Raw command line output>"
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
**Authentication is required for this call.**
|
||||
|
||||
### core/gc: Runs a garbage collection. {#core-gc}
|
||||
@@ -9607,7 +9568,7 @@ This allows you to remove a plugin using it's name
|
||||
|
||||
This takes parameters
|
||||
|
||||
- name: name of the plugin in the format `author`/`plugin_name`
|
||||
- name: name of the plugin in the format <author>/<plugin_name>
|
||||
|
||||
Eg
|
||||
|
||||
@@ -9621,7 +9582,7 @@ This allows you to remove a plugin using it's name
|
||||
|
||||
This takes the following parameters
|
||||
|
||||
- name: name of the plugin in the format `author`/`plugin_name`
|
||||
- name: name of the plugin in the format <author>/<plugin_name>
|
||||
|
||||
Eg
|
||||
|
||||
@@ -10566,7 +10527,7 @@ These flags are available for every command.
|
||||
--use-json-log Use json log format.
|
||||
--use-mmap Use mmap allocator (see docs).
|
||||
--use-server-modtime Use server modified time instead of object metadata
|
||||
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.53.1")
|
||||
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.53.0")
|
||||
-v, --verbose count Print lots more stuff (repeat for more)
|
||||
```
|
||||
|
||||
@@ -10665,7 +10626,7 @@ and may be set in the config file.
|
||||
--drive-auth-owner-only Only consider files owned by the authenticated user.
|
||||
--drive-auth-url string Auth server URL.
|
||||
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
|
||||
--drive-client-id string Google Application Client Id
|
||||
--drive-client-id string OAuth Client Id
|
||||
--drive-client-secret string OAuth Client Secret
|
||||
--drive-disable-http2 Disable drive using http2 (default true)
|
||||
--drive-encoding MultiEncoder This sets the encoding for the backend. (default InvalidUtf8)
|
||||
@@ -11514,7 +11475,6 @@ The S3 backend can be used with a number of different providers:
|
||||
- Minio
|
||||
- Scaleway
|
||||
- StackPath
|
||||
- Tencent Cloud Object Storage (COS)
|
||||
- Wasabi
|
||||
|
||||
|
||||
@@ -11952,7 +11912,7 @@ Vault API, so rclone cannot directly access Glacier Vaults.
|
||||
|
||||
### Standard Options
|
||||
|
||||
Here are the standard options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)).
|
||||
Here are the standard options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).
|
||||
|
||||
#### --s3-provider
|
||||
|
||||
@@ -11983,8 +11943,6 @@ Choose your S3 provider.
|
||||
- Scaleway Object Storage
|
||||
- "StackPath"
|
||||
- StackPath Object Storage
|
||||
- "TencentCOS"
|
||||
- Tencent Cloud Object Storage (COS)
|
||||
- "Wasabi"
|
||||
- Wasabi Object Storage
|
||||
- "Other"
|
||||
@@ -12338,54 +12296,6 @@ Endpoint for StackPath Object Storage.
|
||||
|
||||
#### --s3-endpoint
|
||||
|
||||
Endpoint for Tencent COS API.
|
||||
|
||||
- Config: endpoint
|
||||
- Env Var: RCLONE_S3_ENDPOINT
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Examples:
|
||||
- "cos.ap-beijing.myqcloud.com"
|
||||
- Beijing Region.
|
||||
- "cos.ap-nanjing.myqcloud.com"
|
||||
- Nanjing Region.
|
||||
- "cos.ap-shanghai.myqcloud.com"
|
||||
- Shanghai Region.
|
||||
- "cos.ap-guangzhou.myqcloud.com"
|
||||
- Guangzhou Region.
|
||||
- "cos.ap-nanjing.myqcloud.com"
|
||||
- Nanjing Region.
|
||||
- "cos.ap-chengdu.myqcloud.com"
|
||||
- Chengdu Region.
|
||||
- "cos.ap-chongqing.myqcloud.com"
|
||||
- Chongqing Region.
|
||||
- "cos.ap-hongkong.myqcloud.com"
|
||||
- Hong Kong (China) Region.
|
||||
- "cos.ap-singapore.myqcloud.com"
|
||||
- Singapore Region.
|
||||
- "cos.ap-mumbai.myqcloud.com"
|
||||
- Mumbai Region.
|
||||
- "cos.ap-seoul.myqcloud.com"
|
||||
- Seoul Region.
|
||||
- "cos.ap-bangkok.myqcloud.com"
|
||||
- Bangkok Region.
|
||||
- "cos.ap-tokyo.myqcloud.com"
|
||||
- Tokyo Region.
|
||||
- "cos.na-siliconvalley.myqcloud.com"
|
||||
- Silicon Valley Region.
|
||||
- "cos.na-ashburn.myqcloud.com"
|
||||
- Virginia Region.
|
||||
- "cos.na-toronto.myqcloud.com"
|
||||
- Toronto Region.
|
||||
- "cos.eu-frankfurt.myqcloud.com"
|
||||
- Frankfurt Region.
|
||||
- "cos.eu-moscow.myqcloud.com"
|
||||
- Moscow Region.
|
||||
- "cos.accelerate.myqcloud.com"
|
||||
- Use Tencent COS Accelerate Endpoint.
|
||||
|
||||
#### --s3-endpoint
|
||||
|
||||
Endpoint for S3 API.
|
||||
Required when using an S3 clone.
|
||||
|
||||
@@ -12553,8 +12463,6 @@ doesn't copy the ACL from the source but rather writes a fresh one.
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Examples:
|
||||
- "default"
|
||||
- Owner gets Full_CONTROL. No one else has access rights (default).
|
||||
- "private"
|
||||
- Owner gets FULL_CONTROL. No one else has access rights (default).
|
||||
- "public-read"
|
||||
@@ -12655,24 +12563,6 @@ The storage class to use when storing new objects in OSS.
|
||||
|
||||
#### --s3-storage-class
|
||||
|
||||
The storage class to use when storing new objects in Tencent COS.
|
||||
|
||||
- Config: storage_class
|
||||
- Env Var: RCLONE_S3_STORAGE_CLASS
|
||||
- Type: string
|
||||
- Default: ""
|
||||
- Examples:
|
||||
- ""
|
||||
- Default
|
||||
- "STANDARD"
|
||||
- Standard storage class
|
||||
- "ARCHIVE"
|
||||
- Archive storage mode.
|
||||
- "STANDARD_IA"
|
||||
- Infrequent access storage mode.
|
||||
|
||||
#### --s3-storage-class
|
||||
|
||||
The storage class to use when storing new objects in S3.
|
||||
|
||||
- Config: storage_class
|
||||
@@ -12689,7 +12579,7 @@ The storage class to use when storing new objects in S3.
|
||||
|
||||
### Advanced Options
|
||||
|
||||
Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)).
|
||||
Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).
|
||||
|
||||
#### --s3-bucket-acl
|
||||
|
||||
@@ -12910,7 +12800,7 @@ if false then rclone will use virtual path style. See [the AWS S3
|
||||
docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
|
||||
for more info.
|
||||
|
||||
Some providers (eg AWS, Aliyun OSS, Netease COS or Tencent COS) require this set to
|
||||
Some providers (eg AWS, Aliyun OSS or Netease COS) require this set to
|
||||
false - rclone will do this automatically based on the provider
|
||||
setting.
|
||||
|
||||
@@ -13779,138 +13669,6 @@ d) Delete this remote
|
||||
y/e/d> y
|
||||
```
|
||||
|
||||
### Tencent COS {#tencent-cos}
|
||||
|
||||
[Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost.
|
||||
|
||||
To configure access to Tencent COS, follow the steps below:
|
||||
|
||||
1. Run `rclone config` and select `n` for a new remote.
|
||||
|
||||
```
|
||||
rclone config
|
||||
No remotes found - make a new one
|
||||
n) New remote
|
||||
s) Set configuration password
|
||||
q) Quit config
|
||||
n/s/q> n
|
||||
```
|
||||
|
||||
2. Give the name of the configuration. For example, name it 'cos'.
|
||||
|
||||
```
|
||||
name> cos
|
||||
```
|
||||
|
||||
3. Select `s3` storage.
|
||||
|
||||
```
|
||||
Choose a number from below, or type in your own value
|
||||
1 / 1Fichier
|
||||
\ "fichier"
|
||||
2 / Alias for an existing remote
|
||||
\ "alias"
|
||||
3 / Amazon Drive
|
||||
\ "amazon cloud drive"
|
||||
4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)
|
||||
\ "s3"
|
||||
[snip]
|
||||
Storage> s3
|
||||
```
|
||||
|
||||
4. Select `TencentCOS` provider.
|
||||
```
|
||||
Choose a number from below, or type in your own value
|
||||
1 / Amazon Web Services (AWS) S3
|
||||
\ "AWS"
|
||||
[snip]
|
||||
11 / Tencent Cloud Object Storage (COS)
|
||||
\ "TencentCOS"
|
||||
[snip]
|
||||
provider> TencentCOS
|
||||
```
|
||||
|
||||
5. Enter your SecretId and SecretKey of Tencent Cloud.
|
||||
|
||||
```
|
||||
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
|
||||
Only applies if access_key_id and secret_access_key is blank.
|
||||
Enter a boolean value (true or false). Press Enter for the default ("false").
|
||||
Choose a number from below, or type in your own value
|
||||
1 / Enter AWS credentials in the next step
|
||||
\ "false"
|
||||
2 / Get AWS credentials from the environment (env vars or IAM)
|
||||
\ "true"
|
||||
env_auth> 1
|
||||
AWS Access Key ID.
|
||||
Leave blank for anonymous access or runtime credentials.
|
||||
Enter a string value. Press Enter for the default ("").
|
||||
access_key_id> AKIDxxxxxxxxxx
|
||||
AWS Secret Access Key (password)
|
||||
Leave blank for anonymous access or runtime credentials.
|
||||
Enter a string value. Press Enter for the default ("").
|
||||
secret_access_key> xxxxxxxxxxx
|
||||
```
|
||||
|
||||
6. Select endpoint for Tencent COS. This is the standard endpoint for different region.
|
||||
|
||||
```
|
||||
1 / Beijing Region.
|
||||
\ "cos.ap-beijing.myqcloud.com"
|
||||
2 / Nanjing Region.
|
||||
\ "cos.ap-nanjing.myqcloud.com"
|
||||
3 / Shanghai Region.
|
||||
\ "cos.ap-shanghai.myqcloud.com"
|
||||
4 / Guangzhou Region.
|
||||
\ "cos.ap-guangzhou.myqcloud.com"
|
||||
[snip]
|
||||
endpoint> 4
|
||||
```
|
||||
|
||||
7. Choose acl and storage class.
|
||||
|
||||
```
|
||||
Note that this ACL is applied when server side copying objects as S3
|
||||
doesn't copy the ACL from the source but rather writes a fresh one.
|
||||
Enter a string value. Press Enter for the default ("").
|
||||
Choose a number from below, or type in your own value
|
||||
1 / Owner gets Full_CONTROL. No one else has access rights (default).
|
||||
\ "default"
|
||||
[snip]
|
||||
acl> 1
|
||||
The storage class to use when storing new objects in Tencent COS.
|
||||
Enter a string value. Press Enter for the default ("").
|
||||
Choose a number from below, or type in your own value
|
||||
1 / Default
|
||||
\ ""
|
||||
[snip]
|
||||
storage_class> 1
|
||||
Edit advanced config? (y/n)
|
||||
y) Yes
|
||||
n) No (default)
|
||||
y/n> n
|
||||
Remote config
|
||||
--------------------
|
||||
[cos]
|
||||
type = s3
|
||||
provider = TencentCOS
|
||||
env_auth = false
|
||||
access_key_id = xxx
|
||||
secret_access_key = xxx
|
||||
endpoint = cos.ap-guangzhou.myqcloud.com
|
||||
acl = default
|
||||
--------------------
|
||||
y) Yes this is OK (default)
|
||||
e) Edit this remote
|
||||
d) Delete this remote
|
||||
y/e/d> y
|
||||
Current remotes:
|
||||
|
||||
Name Type
|
||||
==== ====
|
||||
cos s3
|
||||
```
|
||||
|
||||
### Netease NOS ###
|
||||
|
||||
For Netease NOS configure as per the configurator `rclone config`
|
||||
@@ -18239,10 +17997,8 @@ Here are the standard options specific to drive (Google Drive).
|
||||
|
||||
#### --drive-client-id
|
||||
|
||||
Google Application Client Id
|
||||
Setting your own is recommended.
|
||||
See https://rclone.org/drive/#making-your-own-client-id for how to create your own.
|
||||
If you leave this blank, it will use an internal key which is low performance.
|
||||
OAuth Client Id
|
||||
Leave blank normally.
|
||||
|
||||
- Config: client_id
|
||||
- Env Var: RCLONE_DRIVE_CLIENT_ID
|
||||
@@ -19932,13 +19688,8 @@ flag.
|
||||
Note that Jottacloud requires the MD5 hash before upload so if the
|
||||
source does not have an MD5 checksum then the file will be cached
|
||||
temporarily on disk (wherever the `TMPDIR` environment variable points
|
||||
to) before it is uploaded. Small files will be cached in memory - see
|
||||
to) before it is uploaded. Small files will be cached in memory - see
|
||||
the [--jottacloud-md5-memory-limit](#jottacloud-md5-memory-limit) flag.
|
||||
When uploading from local disk the source checksum is always available,
|
||||
so this does not apply. Starting with rclone version 1.52 the same is
|
||||
true for crypted remotes (in older versions the crypt backend would not
|
||||
calculate hashes for uploads from local disk, so the Jottacloud
|
||||
backend had to do it as described above).
|
||||
|
||||
#### Restricted filename characters
|
||||
|
||||
@@ -25681,36 +25432,6 @@ Options:
|
||||
|
||||
# Changelog
|
||||
|
||||
## v1.53.1 - 2020-09-13
|
||||
|
||||
[See commits](https://github.com/rclone/rclone/compare/v1.53.0...v1.53.1)
|
||||
|
||||
* Bug Fixes
|
||||
* accounting: Remove new line from end of --stats-one-line display (Nick Craig-Wood)
|
||||
* check
|
||||
* Add back missing --download flag (Nick Craig-Wood)
|
||||
* Fix docs (Nick Craig-Wood)
|
||||
* docs
|
||||
* Note --log-file does append (Nick Craig-Wood)
|
||||
* Add full stops for consistency in rclone --help (edwardxml)
|
||||
* Add Tencent COS to s3 provider list (wjielai)
|
||||
* Updated mount command to reflect that it requires Go 1.13 or newer (Evan Harris)
|
||||
* jottacloud: Mention that uploads from local disk will not need to cache files to disk for md5 calculation (albertony)
|
||||
* Fix formatting of rc docs page (Nick Craig-Wood)
|
||||
* build
|
||||
* Include vendor tar ball in release and fix startdev (Nick Craig-Wood)
|
||||
* Fix "Illegal instruction" error for ARMv6 builds (Nick Craig-Wood)
|
||||
* Fix architecture name in ARMv7 build (Nick Craig-Wood)
|
||||
* VFS
|
||||
* Fix spurious error "vfs cache: failed to _ensure cache EOF" (Nick Craig-Wood)
|
||||
* Log an ERROR if we fail to set the file to be sparse (Nick Craig-Wood)
|
||||
* Local
|
||||
* Log an ERROR if we fail to set the file to be sparse (Nick Craig-Wood)
|
||||
* Drive
|
||||
* Re-adds special oauth help text (Tim Gallant)
|
||||
* Opendrive
|
||||
* Do not retry 400 errors (Evan Harris)
|
||||
|
||||
## v1.53.0 - 2020-09-02
|
||||
|
||||
[See commits](https://github.com/rclone/rclone/compare/v1.52.0...v1.53.0)
|
||||
|
||||
MANUAL.txt (generated, 16656 changed lines): file diff suppressed because it is too large.
@@ -64,7 +64,6 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
* StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
* SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
* Tardigrade [:page_facing_up:](https://rclone.org/tardigrade/)
* Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos)
* Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
* WebDAV [:page_facing_up:](https://rclone.org/webdav/)
* Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
@@ -22,6 +22,7 @@ of path_display and all will be well.
*/

import (
    "bytes"
    "context"
    "fmt"
    "io"
@@ -29,9 +30,11 @@ import (
    "path"
    "regexp"
    "strings"
    "sync"
    "time"

    "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox"
    "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/async"
    "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/auth"
    "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/common"
    "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/files"
@@ -47,6 +50,7 @@ import (
    "github.com/rclone/rclone/fs/config/obscure"
    "github.com/rclone/rclone/fs/fserrors"
    "github.com/rclone/rclone/fs/hash"
    "github.com/rclone/rclone/lib/atexit"
    "github.com/rclone/rclone/lib/encoder"
    "github.com/rclone/rclone/lib/oauthutil"
    "github.com/rclone/rclone/lib/pacer"
@@ -61,6 +65,7 @@ const (
    minSleep      = 10 * time.Millisecond
    maxSleep      = 2 * time.Second
    decayConstant = 2 // bigger for slower decay, exponential
    maxBatchSize  = 1000
    // Upload chunk size - setting too small makes uploads slow.
    // Chunks are buffered into memory for retries.
    //
@@ -142,6 +147,23 @@ memory. It can be set smaller if you are tight on memory.`, maxChunkSize),
            Help:     "Impersonate this user when using a business account.",
            Default:  "",
            Advanced: true,
        }, {
            Name: "batch",
            Help: `Enable batching of files if non-zero.

This sets the batch size of files to upload. It has to be less than 1000. A
sensible setting is probably 1000 if you are using this feature.

Rclone will close any outstanding batches when it exits.

Setting this is a great idea if you are uploading lots of small files as it will
make them a lot quicker. You can use --transfers 32 to maximise throughput.

It has the downside that rclone can't check the hash of the file after upload,
so using "rclone check" after the transfer completes is recommended.
`,
            Default:  0,
            Advanced: true,
        }, {
            Name:    config.ConfigEncoding,
            Help:    config.ConfigEncodingHelp,
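The help text above is the only user-facing description of the new option; the machinery it switches on is the batcher type added further down in this diff. As an orientation aid, here is a minimal, self-contained sketch of the general pattern being enabled (illustrative names only, none of this is code from the branch): commit arguments are queued per upload, sent once the queue reaches the configured size, and any remainder is flushed when the program shuts down.

```go
// Illustration only. A stripped-down version of size-threshold batching
// with a final flush, the behaviour the "batch" option describes.
package main

import (
	"fmt"
	"sync"
)

type uploadBatcher struct {
	mu       sync.Mutex
	maxBatch int      // batch size; in the sketch 0 means "commit only on flush"
	items    []string // stand-in for per-file commit arguments
}

func (b *uploadBatcher) add(item string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.items = append(b.items, item)
	if b.maxBatch > 0 && len(b.items) >= b.maxBatch {
		b.commitLocked()
	}
}

// commitLocked sends whatever is queued; callers must hold mu.
func (b *uploadBatcher) commitLocked() {
	if len(b.items) == 0 {
		return
	}
	fmt.Printf("committing batch of %d items\n", len(b.items))
	b.items = nil
}

// flush is analogous to closing any outstanding batches at exit.
func (b *uploadBatcher) flush() {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.commitLocked()
}

func main() {
	b := &uploadBatcher{maxBatch: 3}
	for i := 0; i < 7; i++ {
		b.add(fmt.Sprintf("file-%d", i))
	}
	b.flush() // the final partial batch (1 item) goes out here
}
```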
@@ -163,6 +185,7 @@ memory. It can be set smaller if you are tight on memory.`, maxChunkSize),
type Options struct {
    ChunkSize   fs.SizeSuffix        `config:"chunk_size"`
    Impersonate string               `config:"impersonate"`
    Batch       int                  `config:"batch"`
    Enc         encoder.MultiEncoder `config:"encoding"`
}

@@ -180,6 +203,7 @@ type Fs struct {
    slashRootSlash string    // root with "/" prefix and postfix, lowercase
    pacer          *fs.Pacer // To pace the API calls
    ns             string    // The namespace we are using or "" for none
    batcher        *batcher  // batch builder
}

// Object describes a dropbox object
@@ -195,6 +219,165 @@ type Object struct {

// ------------------------------------------------------------

// batcher holds info about the current items waiting for upload
type batcher struct {
    f        *Fs                             // Fs this batch is part of
    mu       sync.Mutex                      // lock for vars below
    commitMu sync.Mutex                      // lock for waiting for batch
    maxBatch int                             // maximum size for batch
    active   int                             // number of batches being sent
    items    []*files.UploadSessionFinishArg // current uncommitted files
    atexit   atexit.FnHandle                 // atexit handle
}

// newBatcher creates a new batcher structure
func newBatcher(f *Fs, maxBatch int) *batcher {
    return &batcher{
        f:        f,
        maxBatch: maxBatch,
    }
}

// Start starts adding an item to a batch returning true if it was
// successfully started
//
// This should be paired with End
func (b *batcher) Start() bool {
    if b.maxBatch <= 0 {
        return false
    }
    b.mu.Lock()
    defer b.mu.Unlock()
    b.active++
    // FIXME set a timer or something
    return true
}
// End ends adding an item
func (b *batcher) End(started bool) error {
    if !started {
        return nil
    }
    b.mu.Lock()
    defer b.mu.Unlock()
    b.active--
    if len(b.items) < b.maxBatch {
        return nil
    }
    return b._commit(false)
}

// Waits for the batch to complete - call with batchMu held
func (b *batcher) _waitForBatchResult(res *files.UploadSessionFinishBatchLaunch) (batchResult *files.UploadSessionFinishBatchResult, err error) {
    if res.AsyncJobId == "" {
        return res.Complete, nil
    }
    var batchStatus *files.UploadSessionFinishBatchJobStatus
    sleepTime := time.Second
    const maxTries = 120
    for try := 1; try <= maxTries; try++ {
        err = b.f.pacer.Call(func() (bool, error) {
            batchStatus, err = b.f.srv.UploadSessionFinishBatchCheck(&async.PollArg{
                AsyncJobId: res.AsyncJobId,
            })
            return shouldRetry(err)
        })
        if err != nil {
            fs.Errorf(b.f, "failed to wait for batch: %v", err)
            break
        }
        if batchStatus.Tag == "complete" {
            break
        }
        fs.Debugf(b.f, "sleeping for %v to wait for batch to complete, try %d/%d", sleepTime, try, maxTries)
        time.Sleep(sleepTime)
    }
    return batchStatus.Complete, nil
}
// commit a batch - call with batchMu held
//
// if finalizing is true then it doesn't unregister Finalize as this
// causes a deadlock during finalization.
func (b *batcher) _commit(finalizing bool) (err error) {
    b.commitMu.Lock()
    batch := "batch"
    if finalizing {
        batch = "last batch"
    }
    fs.Debugf(b.f, "comitting %s length %d", batch, len(b.items))
    var arg = &files.UploadSessionFinishBatchArg{
        Entries: b.items,
    }
    var res *files.UploadSessionFinishBatchLaunch
    err = b.f.pacer.Call(func() (bool, error) {
        res, err = b.f.srv.UploadSessionFinishBatch(arg)
        // If error is insufficient space then don't retry
        if e, ok := err.(files.UploadSessionFinishAPIError); ok {
            if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.WriteErrorInsufficientSpace {
                err = fserrors.NoRetryError(err)
                return false, err
            }
        }
        // after the first chunk is uploaded, we retry everything
        return err != nil, err
    })
    if err != nil {
        b.commitMu.Unlock()
        return err
    }

    // Clear batch
    b.items = nil

    // If finalizing, don't unregister or get result
    if finalizing {
        b.commitMu.Unlock()
        return nil
    }

    // Unregister the atexit since queue is empty
    atexit.Unregister(b.atexit)
    b.atexit = nil

    // Wait for the batch to finish before we proceed in the background
    go func() {
        defer b.commitMu.Unlock()
        _, err = b._waitForBatchResult(res)
        if err != nil {
            fs.Errorf(b.f, "Error waiting for batch to finish: %v", err)
        }
    }()

    return nil
}
// Add adds a finished item to the batch
func (b *batcher) Add(commitInfo *files.UploadSessionFinishArg) {
    fs.Debugf(b.f, "adding %q to batch", commitInfo.Commit.Path)
    b.mu.Lock()
    defer b.mu.Unlock()
    b.items = append(b.items, commitInfo)
    if b.atexit == nil {
        b.atexit = atexit.Register(b.Finalize)
    }
}

// Finalize finishes any pending batches
func (b *batcher) Finalize() {
    b.mu.Lock()
    defer b.mu.Unlock()
    if len(b.items) == 0 {
        return
    }
    err := b._commit(true)
    if err != nil {
        fs.Errorf(b.f, "Failed to finalize last batch: %v", err)
    }
}

// ------------------------------------------------------------

// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
    return f.name
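A detail of the batcher above that is easy to miss: `mu` only guards the queue, while `commitMu` is held for the entire life of a batch commit, including the background goroutine that polls for the asynchronous result. Uploads can therefore continue as soon as a commit is launched, but the next commit cannot start until the previous one has been seen to finish. A self-contained toy model of just that locking scheme (illustrative only, not code from the branch):

```go
// Illustration only: one lock for the queue, one held across the whole
// commit so that at most one batch is ever in flight.
package main

import (
	"fmt"
	"sync"
	"time"
)

type committer struct {
	mu       sync.Mutex // guards items (the queue lock)
	commitMu sync.Mutex // serialises batch commits and their result waits
	items    []string
	wg       sync.WaitGroup
}

func (c *committer) add(item string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items = append(c.items, item)
	if len(c.items) >= 2 { // tiny threshold just for the demo
		c.commit()
	}
}

// commit hands the batch off and returns, but keeps commitMu locked until
// the (simulated) remote job has finished in the background.
func (c *committer) commit() {
	c.commitMu.Lock()
	batch := c.items
	c.items = nil
	c.wg.Add(1)
	go func() {
		defer c.commitMu.Unlock()
		defer c.wg.Done()
		time.Sleep(50 * time.Millisecond) // stand-in for polling the async job
		fmt.Printf("batch of %d finished\n", len(batch))
	}()
}

func main() {
	c := &committer{}
	for i := 0; i < 5; i++ {
		c.add(fmt.Sprintf("file-%d", i))
	}
	c.wg.Wait() // in the real code an atexit handler flushes leftovers instead
}
```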
@@ -230,7 +413,7 @@ func shouldRetry(err error) (bool, error) {
    switch e := err.(type) {
    case auth.RateLimitAPIError:
        if e.RateLimitError.RetryAfter > 0 {
            fs.Debugf(baseErrString, "Too many requests or write operations. Trying again in %d seconds.", e.RateLimitError.RetryAfter)
            fs.Logf(baseErrString, "Too many requests or write operations. Trying again in %d seconds.", e.RateLimitError.RetryAfter)
            err = pacer.RetryAfterError(err, time.Duration(e.RateLimitError.RetryAfter)*time.Second)
        }
        return true, err
@@ -273,6 +456,9 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
    if err != nil {
        return nil, errors.Wrap(err, "dropbox: chunk size")
    }
    if opt.Batch > maxBatchSize || opt.Batch < 0 {
        return nil, errors.Errorf("dropbox: batch must be < %d and >= 0 - it is currently %d", maxBatchSize, opt.Batch)
    }

    // Convert the old token if it exists. The old token was just
    // just a string, the new one is a JSON blob
@@ -297,6 +483,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
        opt:   *opt,
        pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
    }
    f.batcher = newBatcher(f, f.opt.Batch)
    config := dropbox.Config{
        LogLevel: dropbox.LogOff, // logging in the SDK: LogOff, LogDebug, LogInfo
        Client:   oAuthClient,    // maybe???
@@ -1044,6 +1231,13 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// unknown (i.e. -1) or smaller than uploadChunkSize, the method incurs an
// avoidable request to the Dropbox API that does not carry payload.
func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size int64) (entry *files.FileMetadata, err error) {
    batching := o.fs.batcher.Start()
    defer func() {
        batchErr := o.fs.batcher.End(batching)
        if err != nil {
            err = batchErr
        }
    }()
    chunkSize := int64(o.fs.opt.ChunkSize)
    chunks := 0
    if size != -1 {
@@ -1057,11 +1251,15 @@ func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size
            fs.Debugf(o, "Streaming chunk %d/%d", cur, cur)
        } else if chunks == 0 {
            fs.Debugf(o, "Streaming chunk %d/unknown", cur)
        } else {
        } else if chunks != 1 {
            fs.Debugf(o, "Uploading chunk %d/%d", cur, chunks)
        }
    }

    appendArg := files.UploadSessionAppendArg{
        Close: chunks == 1,
    }

    // write the first chunk
    fmtChunk(1, false)
    var res *files.UploadSessionStartResult
@@ -1071,7 +1269,10 @@ func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size
        if _, err = chunk.Seek(0, io.SeekStart); err != nil {
            return false, nil
        }
        res, err = o.fs.srv.UploadSessionStart(&files.UploadSessionStartArg{}, chunk)
        arg := files.UploadSessionStartArg{
            Close: appendArg.Close,
        }
        res, err = o.fs.srv.UploadSessionStart(&arg, chunk)
        return shouldRetry(err)
    })
    if err != nil {
@@ -1082,22 +1283,34 @@ func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size
        SessionId: res.SessionId,
        Offset:    0,
    }
    appendArg := files.UploadSessionAppendArg{
        Cursor: &cursor,
        Close:  false,
    }
    appendArg.Cursor = &cursor

    // write more whole chunks (if any)
    // write more whole chunks (if any, and if !batching), if
    // batching write the last chunk also.
    currentChunk := 2
    for {
        if chunks > 0 && currentChunk >= chunks {
            // if the size is known, only upload full chunks. Remaining bytes are uploaded with
            // the UploadSessionFinish request.
            break
        } else if chunks == 0 && in.BytesRead()-cursor.Offset < uint64(chunkSize) {
            // if the size is unknown, upload as long as we can read full chunks from the reader.
            // The UploadSessionFinish request will not contain any payload.
            break
        if chunks > 0 {
            // Size known
            if currentChunk == chunks {
                // Last chunk
                if !batching {
                    // if the size is known, only upload full chunks. Remaining bytes are uploaded with
                    // the UploadSessionFinish request.
                    break
                }
                appendArg.Close = true
            } else if currentChunk > chunks {
                break
            }
        } else {
            // Size unknown
            lastReadWasShort := in.BytesRead()-cursor.Offset < uint64(chunkSize)
            if lastReadWasShort {
                // if the size is unknown, upload as long as we can read full chunks from the reader.
                // The UploadSessionFinish request will not contain any payload.
                // This is also what we want if batching
                break
            }
        }
        cursor.Offset = in.BytesRead()
        fmtChunk(currentChunk, false)
@@ -1123,6 +1336,26 @@ func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size
        Cursor: &cursor,
        Commit: commitInfo,
    }
    // If we are batching then we should have written all the data now
    // store the commit info now for a batch commit
    if batching {
        // If we haven't closed the session then we need to
        if !appendArg.Close {
            fs.Debugf(o, "Closing session")
            var empty bytes.Buffer
            err = o.fs.pacer.Call(func() (bool, error) {
                err = o.fs.srv.UploadSessionAppendV2(&appendArg, &empty)
                // after the first chunk is uploaded, we retry everything
                return err != nil, err
            })
            if err != nil {
                return nil, err
            }
        }
        o.fs.batcher.Add(args)
        return nil, nil
    }

    fmtChunk(currentChunk, true)
    chunk = readers.NewRepeatableReaderBuffer(in, buf)
    err = o.fs.pacer.Call(func() (bool, error) {
@@ -1165,7 +1398,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
    size := src.Size()
    var err error
    var entry *files.FileMetadata
    if size > int64(o.fs.opt.ChunkSize) || size == -1 {
    if size > int64(o.fs.opt.ChunkSize) || size == -1 || o.fs.opt.Batch > 0 {
        entry, err = o.uploadChunked(in, commitInfo, size)
    } else {
        err = o.fs.pacer.CallNoRetry(func() (bool, error) {
@@ -1176,6 +1409,13 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
    if err != nil {
        return errors.Wrap(err, "upload failed")
    }
    // If we haven't received data back from batch upload then fake it
    if entry == nil {
        o.bytes = size
        o.modTime = commitInfo.ClientModified
        o.hash = "" // we don't have this
        return nil
    }
    return o.setMetadataFromEntry(entry)
}
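The net effect of the Update changes above: whenever batching is enabled every upload is routed through uploadChunked, even files smaller than one chunk, and because a batched commit returns no file entry straight away, the object metadata is filled in from what the caller already knows, with the hash left empty. A tiny, self-contained restatement of the routing predicate (simplified names and an arbitrary chunk size, not the real signatures or defaults):

```go
// Illustration only: the widened upload-path condition, pulled out as a
// standalone predicate.
package main

import "fmt"

func useChunkedUpload(size, chunkSize int64, batch int) bool {
	return size > chunkSize || size == -1 || batch > 0
}

func main() {
	const chunkSize = 8 << 20 // arbitrary example value

	fmt.Println(useChunkedUpload(10*1024, chunkSize, 0))    // false: small file, no batching
	fmt.Println(useChunkedUpload(10*1024, chunkSize, 1000)) // true: batching forces the chunked path
	fmt.Println(useChunkedUpload(-1, chunkSize, 0))         // true: unknown size always streams in chunks
}
```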
@@ -1213,7 +1213,7 @@ func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.Wr
        // Set the file to be a sparse file (important on Windows)
        err = file.SetSparse(out)
        if err != nil {
            fs.Errorf(o, "Failed to set sparse: %v", err)
            fs.Debugf(o, "Failed to set sparse: %v", err)
        }
    }
@@ -646,6 +646,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .

// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
    400, // Bad request (seen in "Next token is expired")
    401, // Unauthorized (seen in "Token has expired")
    408, // Request Timeout
    423, // Locked - get this on folders sometimes
backend/s3/s3.go (104 changed lines):
@@ -58,7 +58,7 @@ import (
|
||||
func init() {
|
||||
fs.Register(&fs.RegInfo{
|
||||
Name: "s3",
|
||||
Description: "Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)",
|
||||
Description: "Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)",
|
||||
NewFs: NewFs,
|
||||
CommandHelp: commandHelp,
|
||||
Options: []fs.Option{{
|
||||
@@ -94,9 +94,6 @@ func init() {
|
||||
}, {
|
||||
Value: "StackPath",
|
||||
Help: "StackPath Object Storage",
|
||||
}, {
|
||||
Value: "TencentCOS",
|
||||
Help: "Tencent Cloud Object Storage (COS)",
|
||||
}, {
|
||||
Value: "Wasabi",
|
||||
Help: "Wasabi Object Storage",
|
||||
@@ -188,7 +185,7 @@ func init() {
|
||||
}, {
|
||||
Name: "region",
|
||||
Help: "Region to connect to.\nLeave blank if you are using an S3 clone and you don't have a region.",
|
||||
Provider: "!AWS,Alibaba,Scaleway,TencentCOS",
|
||||
Provider: "!AWS,Alibaba,Scaleway",
|
||||
Examples: []fs.OptionExample{{
|
||||
Value: "",
|
||||
Help: "Use this if unsure. Will use v4 signatures and an empty region.",
|
||||
@@ -479,73 +476,10 @@ func init() {
|
||||
Value: "s3.eu-central-1.stackpathstorage.com",
|
||||
Help: "EU Endpoint",
|
||||
}},
|
||||
}, {
|
||||
// cos endpoints: https://intl.cloud.tencent.com/document/product/436/6224
|
||||
Name: "endpoint",
|
||||
Help: "Endpoint for Tencent COS API.",
|
||||
Provider: "TencentCOS",
|
||||
Examples: []fs.OptionExample{{
|
||||
Value: "cos.ap-beijing.myqcloud.com",
|
||||
Help: "Beijing Region.",
|
||||
}, {
|
||||
Value: "cos.ap-nanjing.myqcloud.com",
|
||||
Help: "Nanjing Region.",
|
||||
}, {
|
||||
Value: "cos.ap-shanghai.myqcloud.com",
|
||||
Help: "Shanghai Region.",
|
||||
}, {
|
||||
Value: "cos.ap-guangzhou.myqcloud.com",
|
||||
Help: "Guangzhou Region.",
|
||||
}, {
|
||||
Value: "cos.ap-nanjing.myqcloud.com",
|
||||
Help: "Nanjing Region.",
|
||||
}, {
|
||||
Value: "cos.ap-chengdu.myqcloud.com",
|
||||
Help: "Chengdu Region.",
|
||||
}, {
|
||||
Value: "cos.ap-chongqing.myqcloud.com",
|
||||
Help: "Chongqing Region.",
|
||||
}, {
|
||||
Value: "cos.ap-hongkong.myqcloud.com",
|
||||
Help: "Hong Kong (China) Region.",
|
||||
}, {
|
||||
Value: "cos.ap-singapore.myqcloud.com",
|
||||
Help: "Singapore Region.",
|
||||
}, {
|
||||
Value: "cos.ap-mumbai.myqcloud.com",
|
||||
Help: "Mumbai Region.",
|
||||
}, {
|
||||
Value: "cos.ap-seoul.myqcloud.com",
|
||||
Help: "Seoul Region.",
|
||||
}, {
|
||||
Value: "cos.ap-bangkok.myqcloud.com",
|
||||
Help: "Bangkok Region.",
|
||||
}, {
|
||||
Value: "cos.ap-tokyo.myqcloud.com",
|
||||
Help: "Tokyo Region.",
|
||||
}, {
|
||||
Value: "cos.na-siliconvalley.myqcloud.com",
|
||||
Help: "Silicon Valley Region.",
|
||||
}, {
|
||||
Value: "cos.na-ashburn.myqcloud.com",
|
||||
Help: "Virginia Region.",
|
||||
}, {
|
||||
Value: "cos.na-toronto.myqcloud.com",
|
||||
Help: "Toronto Region.",
|
||||
}, {
|
||||
Value: "cos.eu-frankfurt.myqcloud.com",
|
||||
Help: "Frankfurt Region.",
|
||||
}, {
|
||||
Value: "cos.eu-moscow.myqcloud.com",
|
||||
Help: "Moscow Region.",
|
||||
}, {
|
||||
Value: "cos.accelerate.myqcloud.com",
|
||||
Help: "Use Tencent COS Accelerate Endpoint.",
|
||||
}},
|
||||
}, {
|
||||
Name: "endpoint",
|
||||
Help: "Endpoint for S3 API.\nRequired when using an S3 clone.",
|
||||
Provider: "!AWS,IBMCOS,TencentCOS,Alibaba,Scaleway,StackPath",
|
||||
Provider: "!AWS,IBMCOS,Alibaba,Scaleway,StackPath",
|
||||
Examples: []fs.OptionExample{{
|
||||
Value: "objects-us-east-1.dream.io",
|
||||
Help: "Dream Objects endpoint",
|
||||
@@ -732,7 +666,7 @@ func init() {
|
||||
}, {
|
||||
Name: "location_constraint",
|
||||
Help: "Location constraint - must be set to match the Region.\nLeave blank if not sure. Used when creating buckets only.",
|
||||
Provider: "!AWS,IBMCOS,Alibaba,Scaleway,StackPath,TencentCOS",
|
||||
Provider: "!AWS,IBMCOS,Alibaba,Scaleway,StackPath",
|
||||
}, {
|
||||
Name: "acl",
|
||||
Help: `Canned ACL used when creating buckets and storing or copying objects.
|
||||
@@ -744,13 +678,9 @@ For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview
|
||||
Note that this ACL is applied when server side copying objects as S3
|
||||
doesn't copy the ACL from the source but rather writes a fresh one.`,
|
||||
Examples: []fs.OptionExample{{
|
||||
Value: "default",
|
||||
Help: "Owner gets Full_CONTROL. No one else has access rights (default).",
|
||||
Provider: "TencentCOS",
|
||||
}, {
|
||||
Value: "private",
|
||||
Help: "Owner gets FULL_CONTROL. No one else has access rights (default).",
|
||||
Provider: "!IBMCOS,TencentCOS",
|
||||
Provider: "!IBMCOS",
|
||||
}, {
|
||||
Value: "public-read",
|
||||
Help: "Owner gets FULL_CONTROL. The AllUsers group gets READ access.",
|
||||
@@ -912,24 +842,6 @@ isn't set then "acl" is used instead.`,
|
||||
Value: "STANDARD_IA",
|
||||
Help: "Infrequent access storage mode.",
|
||||
}},
|
||||
}, {
|
||||
// Mapping from here: https://intl.cloud.tencent.com/document/product/436/30925
|
||||
Name: "storage_class",
|
||||
Help: "The storage class to use when storing new objects in Tencent COS.",
|
||||
Provider: "TencentCOS",
|
||||
Examples: []fs.OptionExample{{
|
||||
Value: "",
|
||||
Help: "Default",
|
||||
}, {
|
||||
Value: "STANDARD",
|
||||
Help: "Standard storage class",
|
||||
}, {
|
||||
Value: "ARCHIVE",
|
||||
Help: "Archive storage mode.",
|
||||
}, {
|
||||
Value: "STANDARD_IA",
|
||||
Help: "Infrequent access storage mode.",
|
||||
}},
|
||||
}, {
|
||||
// Mapping from here: https://www.scaleway.com/en/docs/object-storage-glacier/#-Scaleway-Storage-Classes
|
||||
Name: "storage_class",
|
||||
@@ -1063,7 +975,7 @@ if false then rclone will use virtual path style. See [the AWS S3
|
||||
docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
|
||||
for more info.
|
||||
|
||||
Some providers (eg AWS, Aliyun OSS, Netease COS or Tencent COS) require this set to
|
||||
Some providers (eg AWS, Aliyun OSS or Netease COS) require this set to
|
||||
false - rclone will do this automatically based on the provider
|
||||
setting.`,
|
||||
Default: true,
|
||||
@@ -1393,7 +1305,7 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
|
||||
if opt.Region == "" {
|
||||
opt.Region = "us-east-1"
|
||||
}
|
||||
if opt.Provider == "AWS" || opt.Provider == "Alibaba" || opt.Provider == "Netease" || opt.Provider == "Scaleway" || opt.Provider == "TencentCOS" || opt.UseAccelerateEndpoint {
|
||||
if opt.Provider == "AWS" || opt.Provider == "Alibaba" || opt.Provider == "Netease" || opt.Provider == "Scaleway" || opt.UseAccelerateEndpoint {
|
||||
opt.ForcePathStyle = false
|
||||
}
|
||||
if opt.Provider == "Scaleway" && opt.MaxUploadParts > 1000 {
|
||||
@@ -1675,7 +1587,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
|
||||
//
|
||||
// So we enable only on providers we know supports it properly, all others can retry when a
|
||||
// XML Syntax error is detected.
|
||||
var urlEncodeListings = (f.opt.Provider == "AWS" || f.opt.Provider == "Wasabi" || f.opt.Provider == "Alibaba" || f.opt.Provider == "Minio" || f.opt.Provider == "TencentCOS")
|
||||
var urlEncodeListings = (f.opt.Provider == "AWS" || f.opt.Provider == "Wasabi" || f.opt.Provider == "Alibaba" || f.opt.Provider == "Minio")
|
||||
for {
|
||||
// FIXME need to implement ALL loop
|
||||
req := s3.ListObjectsInput{
|
||||
|
||||
@@ -324,7 +324,7 @@ func compileArch(version, goos, goarch, dir string) bool {
|
||||
artifacts := []string{buildZip(dir)}
|
||||
// build a .deb and .rpm if appropriate
|
||||
if goos == "linux" {
|
||||
artifacts = append(artifacts, buildDebAndRpm(dir, version, stripVersion(goarch))...)
|
||||
artifacts = append(artifacts, buildDebAndRpm(dir, version, goarch)...)
|
||||
}
|
||||
if *copyAs != "" {
|
||||
for _, artifact := range artifacts {
|
||||
|
||||
@@ -14,7 +14,7 @@ func init() {
|
||||
|
||||
var commandDefinition = &cobra.Command{
|
||||
Use: "cleanup remote:path",
|
||||
Short: `Clean up the remote if possible.`,
|
||||
Short: `Clean up the remote if possible`,
|
||||
Long: `
|
||||
Clean up the remote if possible. Empty the trash or delete old file
|
||||
versions. Not supported by all remotes.
|
||||
|
||||
@@ -22,7 +22,7 @@ func init() {
|
||||
|
||||
var commandDefinition = &cobra.Command{
|
||||
Use: "copy source:path dest:path",
|
||||
Short: `Copy files from source to dest, skipping already copied.`,
|
||||
Short: `Copy files from source to dest, skipping already copied`,
|
||||
Long: `
|
||||
Copy the source to the destination. Doesn't transfer
|
||||
unchanged files, testing by size and modification time or
|
||||
|
||||
@@ -15,7 +15,7 @@ func init() {
|
||||
|
||||
var commandDefinition = &cobra.Command{
|
||||
Use: "copyto source:path dest:path",
|
||||
Short: `Copy files from source to dest, skipping already copied.`,
|
||||
Short: `Copy files from source to dest, skipping already copied`,
|
||||
Long: `
|
||||
If source:path is a file or directory then it copies it to a file or
|
||||
directory named dest:path.
|
||||
|
||||
@@ -44,7 +44,7 @@ func init() {
|
||||
|
||||
var commandDefinition = &cobra.Command{
|
||||
Use: "lsf remote:path",
|
||||
Short: `List directories and objects in remote:path formatted for parsing.`,
|
||||
Short: `List directories and objects in remote:path formatted for parsing`,
|
||||
Long: `
|
||||
List the contents of the source path (directories and objects) to
|
||||
standard output in a form which is easy to parse by scripts. By
|
||||
|
||||
@@ -192,9 +192,6 @@ Stopping the mount manually:
|
||||
# OS X
|
||||
umount /path/to/local/mount
|
||||
|
||||
**Note**: As of ` + "`rclone` 1.52.2, `rclone mount`" + ` now requires Go version 1.13
|
||||
or newer on some platforms depending on the underlying FUSE library in use.
|
||||
|
||||
### Installing on Windows
|
||||
|
||||
To run rclone ` + commandName + ` on Windows, you will need to
|
||||
|
||||
@@ -17,7 +17,7 @@ func init() {
|
||||
|
||||
var commandDefinition = &cobra.Command{
|
||||
Use: "obscure password",
|
||||
Short: `Obscure password for use in the rclone config file.`,
|
||||
Short: `Obscure password for use in the rclone config file`,
|
||||
Long: `In the rclone config file, human readable passwords are
|
||||
obscured. Obscuring them is done by encrypting them and writing them
|
||||
out in base64. This is **not** a secure way of encrypting these
|
||||
|
||||
@@ -148,7 +148,6 @@ WebDAV or S3, that work out of the box.)
|
||||
{{< provider name="StackPath" home="https://www.stackpath.com/products/object-storage/" config="/s3/#stackpath" >}}
|
||||
{{< provider name="SugarSync" home="https://sugarsync.com/" config="/sugarsync/" >}}
|
||||
{{< provider name="Tardigrade" home="https://tardigrade.io/" config="/tardigrade/" >}}
|
||||
{{< provider name="Tencent Cloud Object Storage (COS)" home="https://intl.cloud.tencent.com/product/cos" config="/s3/#tencent-cos" >}}
|
||||
{{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" >}}
|
||||
{{< provider name="WebDAV" home="https://en.wikipedia.org/wiki/WebDAV" config="/webdav/" >}}
|
||||
{{< provider name="Yandex Disk" home="https://disk.yandex.com/" config="/yandex/" >}}
|
||||
|
||||
@@ -5,36 +5,6 @@ description: "Rclone Changelog"

# Changelog

## v1.53.1 - 2020-09-13

[See commits](https://github.com/rclone/rclone/compare/v1.53.0...v1.53.1)

* Bug Fixes
    * accounting: Remove new line from end of --stats-one-line display (Nick Craig-Wood)
    * check
        * Add back missing --download flag (Nick Craig-Wood)
        * Fix docs (Nick Craig-Wood)
    * docs
        * Note --log-file does append (Nick Craig-Wood)
        * Add full stops for consistency in rclone --help (edwardxml)
        * Add Tencent COS to s3 provider list (wjielai)
        * Updated mount command to reflect that it requires Go 1.13 or newer (Evan Harris)
        * jottacloud: Mention that uploads from local disk will not need to cache files to disk for md5 calculation (albertony)
        * Fix formatting of rc docs page (Nick Craig-Wood)
    * build
        * Include vendor tar ball in release and fix startdev (Nick Craig-Wood)
        * Fix "Illegal instruction" error for ARMv6 builds (Nick Craig-Wood)
        * Fix architecture name in ARMv7 build (Nick Craig-Wood)
* VFS
    * Fix spurious error "vfs cache: failed to _ensure cache EOF" (Nick Craig-Wood)
    * Log an ERROR if we fail to set the file to be sparse (Nick Craig-Wood)
* Local
    * Log an ERROR if we fail to set the file to be sparse (Nick Craig-Wood)
* Drive
    * Re-adds special oauth help text (Tim Gallant)
* Opendrive
    * Do not retry 400 errors (Evan Harris)

## v1.53.0 - 2020-09-02

[See commits](https://github.com/rclone/rclone/compare/v1.52.0...v1.53.0)
@@ -39,10 +39,10 @@ See the [global flags page](/flags/) for global options not listed here.
|
||||
* [rclone backend](/commands/rclone_backend/) - Run a backend specific command.
|
||||
* [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout.
|
||||
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
|
||||
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible.
|
||||
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible
|
||||
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
|
||||
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied.
|
||||
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping already copied.
|
||||
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied
|
||||
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping already copied
|
||||
* [rclone copyurl](/commands/rclone_copyurl/) - Copy url content to dest.
|
||||
* [rclone cryptcheck](/commands/rclone_cryptcheck/) - Cryptcheck checks the integrity of a crypted remote.
|
||||
* [rclone cryptdecode](/commands/rclone_cryptdecode/) - Cryptdecode returns unencrypted file names.
|
||||
@@ -56,7 +56,7 @@ See the [global flags page](/flags/) for global options not listed here.
|
||||
* [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file.
|
||||
* [rclone ls](/commands/rclone_ls/) - List the objects in the path with size and path.
|
||||
* [rclone lsd](/commands/rclone_lsd/) - List all directories/containers/buckets in the path.
|
||||
* [rclone lsf](/commands/rclone_lsf/) - List directories and objects in remote:path formatted for parsing.
|
||||
* [rclone lsf](/commands/rclone_lsf/) - List directories and objects in remote:path formatted for parsing
|
||||
* [rclone lsjson](/commands/rclone_lsjson/) - List directories and objects in the path in JSON format.
|
||||
* [rclone lsl](/commands/rclone_lsl/) - List the objects in path with modification time, size and path.
|
||||
* [rclone md5sum](/commands/rclone_md5sum/) - Produces an md5sum file for all the objects in the path.
|
||||
@@ -65,7 +65,7 @@ See the [global flags page](/flags/) for global options not listed here.
|
||||
* [rclone move](/commands/rclone_move/) - Move files from source to dest.
|
||||
* [rclone moveto](/commands/rclone_moveto/) - Move file or directory from source to dest.
|
||||
* [rclone ncdu](/commands/rclone_ncdu/) - Explore a remote with a text based user interface.
|
||||
* [rclone obscure](/commands/rclone_obscure/) - Obscure password for use in the rclone config file.
|
||||
* [rclone obscure](/commands/rclone_obscure/) - Obscure password for use in the rclone config file
|
||||
* [rclone purge](/commands/rclone_purge/) - Remove the path and all of its contents.
|
||||
* [rclone rc](/commands/rclone_rc/) - Run a command against a running rclone.
|
||||
* [rclone rcat](/commands/rclone_rcat/) - Copies standard input to file on remote.
|
||||
|
||||
@@ -29,7 +29,7 @@ the source match the files in the destination, not the other way
around. This means that extra files in the destination that are not in
the source will not be detected.

The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match`
The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--src-only`
and `--error` flags write paths, one per line, to the file name (or
stdout if it is `-`) supplied. What they write is described in the
help below. For example `--differ` will write all paths which are
@@ -55,7 +55,6 @@ rclone check source:path dest:path [flags]
```
--combined string Make a combined report of changes to this file
--differ string Report all non-matching files to this file
--download Check by downloading rather than with hash.
--error string Report all files with errors (hashing or reading) to this file
-h, --help help for check
--match string Report all matching files to this file

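As an illustration of how these report flags combine, a single `rclone check` run can write each category to its own file; the local path, remote name and file names below are only placeholders:

```
rclone check /path/to/local remote:backup \
    --missing-on-src missing-on-src.txt \
    --missing-on-dst missing-on-dst.txt \
    --differ differ.txt \
    --error errors.txt \
    --combined -
```

With `--combined -` the combined report goes to stdout, one path per line, each prefixed with a marker such as `=` for matching files or `*` for files which differ.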
@@ -1,13 +1,13 @@
---
title: "rclone cleanup"
description: "Clean up the remote if possible."
description: "Clean up the remote if possible"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/cleanup/ and as part of making a release run "make commanddocs"
---
# rclone cleanup

Clean up the remote if possible.
Clean up the remote if possible

## Synopsis

@@ -1,13 +1,13 @@
---
title: "rclone copy"
description: "Copy files from source to dest, skipping already copied."
description: "Copy files from source to dest, skipping already copied"
slug: rclone_copy
url: /commands/rclone_copy/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/copy/ and as part of making a release run "make commanddocs"
---
# rclone copy

Copy files from source to dest, skipping already copied.
Copy files from source to dest, skipping already copied

## Synopsis

@@ -1,13 +1,13 @@
---
title: "rclone copyto"
description: "Copy files from source to dest, skipping already copied."
description: "Copy files from source to dest, skipping already copied"
slug: rclone_copyto
url: /commands/rclone_copyto/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/copyto/ and as part of making a release run "make commanddocs"
---
# rclone copyto

Copy files from source to dest, skipping already copied.
Copy files from source to dest, skipping already copied

## Synopsis

@@ -40,7 +40,7 @@ the source match the files in the destination, not the other way
around. This means that extra files in the destination that are not in
the source will not be detected.

The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match`
The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--src-only`
and `--error` flags write paths, one per line, to the file name (or
stdout if it is `-`) supplied. What they write is described in the
help below. For example `--differ` will write all paths which are

@@ -1,13 +1,13 @@
---
title: "rclone lsf"
description: "List directories and objects in remote:path formatted for parsing."
description: "List directories and objects in remote:path formatted for parsing"
slug: rclone_lsf
url: /commands/rclone_lsf/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/lsf/ and as part of making a release run "make commanddocs"
---
# rclone lsf

List directories and objects in remote:path formatted for parsing.
List directories and objects in remote:path formatted for parsing

## Synopsis

@@ -49,9 +49,6 @@ Stopping the mount manually:
    # OS X
    umount /path/to/local/mount

**Note**: As of `rclone` 1.52.2, `rclone mount` now requires Go version 1.13
or newer on some platforms depending on the underlying FUSE library in use.

## Installing on Windows

To run rclone mount on Windows, you will need to
@@ -360,11 +357,6 @@ whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.

**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.

## VFS Performance

These flags may be used to enable/disable features of the VFS for

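To make the advice above concrete, an `rclone mount` invocation along the following lines keeps `--buffer-size` small and lets `--vfs-read-ahead` do the buffering on disk; the remote, mount point and cache directory are placeholders, and the cache directory should live on a filesystem that supports sparse files:

```
rclone mount remote:media /mnt/media \
    --vfs-cache-mode full \
    --buffer-size 16M \
    --vfs-read-ahead 512M \
    --cache-dir /var/cache/rclone
```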
@@ -1,13 +1,13 @@
---
title: "rclone obscure"
description: "Obscure password for use in the rclone config file."
description: "Obscure password for use in the rclone config file"
slug: rclone_obscure
url: /commands/rclone_obscure/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/obscure/ and as part of making a release run "make commanddocs"
---
# rclone obscure

Obscure password for use in the rclone config file.
Obscure password for use in the rclone config file

## Synopsis

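As a quick example of the command documented here, the password is obscured on the command line and the printed value can then be pasted into a password field in the config file (the password shown is just a placeholder):

```
rclone obscure 'MySecretPassword123'
```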
@@ -196,11 +196,6 @@ whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.

**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.

## VFS Performance

These flags may be used to enable/disable features of the VFS for

@@ -195,11 +195,6 @@ whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.

**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.

## VFS Performance

These flags may be used to enable/disable features of the VFS for

@@ -267,11 +267,6 @@ whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.

**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.

## VFS Performance

These flags may be used to enable/disable features of the VFS for

@@ -206,11 +206,6 @@ whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.

**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.

## VFS Performance

These flags may be used to enable/disable features of the VFS for

@@ -275,11 +275,6 @@ whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.

**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.

## VFS Performance

These flags may be used to enable/disable features of the VFS for

@@ -757,8 +757,6 @@ This can be useful for tracking down problems with syncs in
combination with the `-v` flag. See the [Logging section](#logging)
for more info.

If FILE exists then rclone will append to it.

Note that if you are using the `logrotate` program to manage rclone's
logs, then you should use the `copytruncate` option as rclone doesn't
have a signal to rotate logs.

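For reference, a minimal `logrotate` stanza along the lines recommended above might look like the following; the log path is a placeholder and assumes rclone was started with `--log-file /var/log/rclone.log`:

```
/var/log/rclone.log {
    weekly
    rotate 4
    compress
    missingok
    copytruncate
}
```

`copytruncate` makes logrotate copy the log and truncate the original in place, so the file handle rclone keeps open stays valid.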
@@ -547,10 +547,8 @@ Here are the standard options specific to drive (Google Drive).

#### --drive-client-id

Google Application Client Id
Setting your own is recommended.
See https://rclone.org/drive/#making-your-own-client-id for how to create your own.
If you leave this blank, it will use an internal key which is low performance.
OAuth Client Id
Leave blank normally.

- Config: client_id
- Env Var: RCLONE_DRIVE_CLIENT_ID

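To illustrate the option above: once you have made your own client ID and secret, they can be written into an existing drive remote or supplied through the environment. The remote name `gdrive` and the credential values are placeholders:

```
# Store the credentials in the config file
rclone config update gdrive client_id 123456789.apps.googleusercontent.com client_secret XXXXXXXX

# Or override them for a single run via environment variables
export RCLONE_DRIVE_CLIENT_ID=123456789.apps.googleusercontent.com
export RCLONE_DRIVE_CLIENT_SECRET=XXXXXXXX
```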
@@ -147,7 +147,7 @@ These flags are available for every command.
--use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.53.1")
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.53.0")
-v, --verbose count Print lots more stuff (repeat for more)
```

@@ -246,7 +246,7 @@ and may be set in the config file.
--drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-auth-url string Auth server URL.
--drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-client-id string Google Application Client Id
--drive-client-id string OAuth Client Id
--drive-client-secret string OAuth Client Secret
--drive-disable-http2 Disable drive using http2 (default true)
--drive-encoding MultiEncoder This sets the encoding for the backend. (default InvalidUtf8)

@@ -18,7 +18,6 @@ The S3 backend can be used with a number of different providers:
{{< provider name="Minio" home="https://www.minio.io/" config="/s3/#minio" >}}
{{< provider name="Scaleway" home="https://www.scaleway.com/en/object-storage/" config="/s3/#scaleway" >}}
{{< provider name="StackPath" home="https://www.stackpath.com/products/object-storage/" config="/s3/#stackpath" >}}
{{< provider name="Tencent Cloud Object Storage (COS)" home="https://intl.cloud.tencent.com/product/cos" config="/s3/#tencent-cos" >}}
{{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" end="true" >}}
{{< /provider_list >}}

@@ -456,7 +455,7 @@ Vault API, so rclone cannot directly access Glacier Vaults.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
### Standard Options

Here are the standard options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)).
Here are the standard options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).

#### --s3-provider

@@ -487,8 +486,6 @@ Choose your S3 provider.
- Scaleway Object Storage
- "StackPath"
- StackPath Object Storage
- "TencentCOS"
- Tencent Cloud Object Storage (COS)
- "Wasabi"
- Wasabi Object Storage
- "Other"
@@ -842,54 +839,6 @@ Endpoint for StackPath Object Storage.

#### --s3-endpoint

Endpoint for Tencent COS API.

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Type: string
- Default: ""
- Examples:
- "cos.ap-beijing.myqcloud.com"
- Beijing Region.
- "cos.ap-nanjing.myqcloud.com"
- Nanjing Region.
- "cos.ap-shanghai.myqcloud.com"
- Shanghai Region.
- "cos.ap-guangzhou.myqcloud.com"
- Guangzhou Region.
- "cos.ap-nanjing.myqcloud.com"
- Nanjing Region.
- "cos.ap-chengdu.myqcloud.com"
- Chengdu Region.
- "cos.ap-chongqing.myqcloud.com"
- Chongqing Region.
- "cos.ap-hongkong.myqcloud.com"
- Hong Kong (China) Region.
- "cos.ap-singapore.myqcloud.com"
- Singapore Region.
- "cos.ap-mumbai.myqcloud.com"
- Mumbai Region.
- "cos.ap-seoul.myqcloud.com"
- Seoul Region.
- "cos.ap-bangkok.myqcloud.com"
- Bangkok Region.
- "cos.ap-tokyo.myqcloud.com"
- Tokyo Region.
- "cos.na-siliconvalley.myqcloud.com"
- Silicon Valley Region.
- "cos.na-ashburn.myqcloud.com"
- Virginia Region.
- "cos.na-toronto.myqcloud.com"
- Toronto Region.
- "cos.eu-frankfurt.myqcloud.com"
- Frankfurt Region.
- "cos.eu-moscow.myqcloud.com"
- Moscow Region.
- "cos.accelerate.myqcloud.com"
- Use Tencent COS Accelerate Endpoint.

#### --s3-endpoint

Endpoint for S3 API.
Required when using an S3 clone.

@@ -1057,8 +1006,6 @@ doesn't copy the ACL from the source but rather writes a fresh one.
- Type: string
- Default: ""
- Examples:
- "default"
- Owner gets Full_CONTROL. No one else has access rights (default).
- "private"
- Owner gets FULL_CONTROL. No one else has access rights (default).
- "public-read"
@@ -1159,24 +1106,6 @@ The storage class to use when storing new objects in OSS.

#### --s3-storage-class

The storage class to use when storing new objects in Tencent COS.

- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Type: string
- Default: ""
- Examples:
- ""
- Default
- "STANDARD"
- Standard storage class
- "ARCHIVE"
- Archive storage mode.
- "STANDARD_IA"
- Infrequent access storage mode.

#### --s3-storage-class

The storage class to use when storing new objects in S3.

- Config: storage_class
@@ -1193,7 +1122,7 @@ The storage class to use when storing new objects in S3.

### Advanced Options

Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)).
Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).

#### --s3-bucket-acl

@@ -1414,7 +1343,7 @@ if false then rclone will use virtual path style. See [the AWS S3
docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
for more info.

Some providers (eg AWS, Aliyun OSS, Netease COS or Tencent COS) require this set to
Some providers (eg AWS, Aliyun OSS or Netease COS) require this set to
false - rclone will do this automatically based on the provider
setting.

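The paragraph above documents path-style versus virtual-hosted addressing (the `force_path_style` option). As a rough sketch, with the remote name and endpoint as placeholders, a config section for a generic S3-compatible server that needs path-style addressing could look like this; for providers such as AWS, rclone chooses the right value automatically, so the option can normally be left unset:

```
[mys3]
type = s3
provider = Other
endpoint = https://s3.example.com
force_path_style = true
```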
@@ -2283,138 +2212,6 @@ d) Delete this remote
y/e/d> y
```

### Tencent COS {#tencent-cos}

[Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, low-latency and low-cost, and scales to very large amounts of data.

To configure access to Tencent COS, follow the steps below:

1. Run `rclone config` and select `n` for a new remote.

```
rclone config
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
```

2. Give the name of the configuration. For example, name it 'cos'.

```
name> cos
```

3. Select `s3` storage.

```
Choose a number from below, or type in your own value
1 / 1Fichier
   \ "fichier"
2 / Alias for an existing remote
   \ "alias"
3 / Amazon Drive
   \ "amazon cloud drive"
4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)
   \ "s3"
[snip]
Storage> s3
```

4. Select `TencentCOS` provider.
```
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
   \ "AWS"
[snip]
11 / Tencent Cloud Object Storage (COS)
   \ "TencentCOS"
[snip]
provider> TencentCOS
```

5. Enter the SecretId and SecretKey for your Tencent Cloud account.

```
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
   \ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> AKIDxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> xxxxxxxxxxx
```

6. Select the endpoint for Tencent COS. These are the standard endpoints for the different regions.

```
1 / Beijing Region.
   \ "cos.ap-beijing.myqcloud.com"
2 / Nanjing Region.
   \ "cos.ap-nanjing.myqcloud.com"
3 / Shanghai Region.
   \ "cos.ap-shanghai.myqcloud.com"
4 / Guangzhou Region.
   \ "cos.ap-guangzhou.myqcloud.com"
[snip]
endpoint> 4
```

7. Choose the ACL and storage class.

```
Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Owner gets Full_CONTROL. No one else has access rights (default).
   \ "default"
[snip]
acl> 1
The storage class to use when storing new objects in Tencent COS.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Default
   \ ""
[snip]
storage_class> 1
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[cos]
type = s3
provider = TencentCOS
env_auth = false
access_key_id = xxx
secret_access_key = xxx
endpoint = cos.ap-guangzhou.myqcloud.com
acl = default
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name Type
==== ====
cos s3
```

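The interactive session above can also be reproduced non-interactively with `rclone config create`; this is only a sketch, and the access key, secret key and endpoint are placeholders to replace with your own values:

```
rclone config create cos s3 \
    provider TencentCOS \
    env_auth false \
    access_key_id AKIDxxxxxxxxxx \
    secret_access_key xxxxxxxxxxx \
    endpoint cos.ap-guangzhou.myqcloud.com \
    acl default
```

Running `rclone config show cos` afterwards should print a section equivalent to the `[cos]` block shown above.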
### Netease NOS ###

For Netease NOS configure as per the configurator `rclone config`

@@ -1 +1 @@
v1.53.1
v1.54.0
@@ -1,4 +1,4 @@
package fs

// Version of rclone
var Version = "v1.53.1-DEV"
var Version = "v1.54.0-DEV"

@@ -166,11 +166,6 @@ whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set
too big and --vfs-read-ahead is set large if required.

**IMPORTANT** not all file systems support sparse files. In particular
FAT/exFAT do not. Rclone will perform very badly if the cache
directory is on a filesystem which doesn't support sparse files and it
will log an ERROR message if one is detected.

### VFS Performance

These flags may be used to enable/disable features of the VFS for

@@ -249,7 +249,7 @@ func (item *Item) _truncate(size int64) (err error) {

	err = file.SetSparse(fd)
	if err != nil {
		fs.Errorf(item.name, "vfs cache: truncate: failed to set as a sparse file: %v", err)
		fs.Debugf(item.name, "vfs cache: truncate: failed to set as a sparse file: %v", err)
	}
}

@@ -446,7 +446,7 @@ func (item *Item) _createFile(osPath string) (err error) {
	}
	err = file.SetSparse(fd)
	if err != nil {
		fs.Errorf(item.name, "vfs cache: failed to set as a sparse file: %v", err)
		fs.Debugf(item.name, "vfs cache: failed to set as a sparse file: %v", err)
	}
	item.fd = fd