docs: cleanup backend hashes sections
commit a7faf05393
parent 98a96596df
committed by Nick Craig-Wood
@@ -271,7 +271,9 @@ d) Delete this remote
 y/e/d>
 ```
 
-### Modified time
+### Modification times and hashes
 
+#### Modification times
+
 The modified time is stored as metadata on the object as
 `X-Amz-Meta-Mtime` as floating point since the epoch, accurate to 1 ns.
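
An illustrative sketch (not from the commit itself) of reading that metadata back, assuming the `aws` CLI is configured for the bucket; the bucket and key names are hypothetical. As the next hunk notes, user metadata is only returned by per-object requests, not listings.

```
# Hypothetical bucket and key names; assumes the aws CLI has access.
# head-object returns the user metadata, where the stored modification
# time appears as fractional seconds since the epoch.
aws s3api head-object \
    --bucket my-bucket \
    --key path/to/file.txt \
    --query Metadata
```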
@@ -284,6 +286,29 @@ storage the object will be uploaded rather than copied.
 Note that reading this from the object takes an additional `HEAD`
 request as the metadata isn't returned in object listings.
 
+#### Hashes
+
+For small objects which weren't uploaded as multipart uploads (objects
+sized below `--s3-upload-cutoff` if uploaded with rclone) rclone uses
+the `ETag:` header as an MD5 checksum.
+
+However for objects which were uploaded as multipart uploads or with
+server side encryption (SSE-AWS or SSE-C) the `ETag` header is no
+longer the MD5 sum of the data, so rclone adds an additional piece of
+metadata `X-Amz-Meta-Md5chksum` which is a base64 encoded MD5 hash (in
+the same format as is required for `Content-MD5`). You can use base64 -d and hexdump to check this value manually:
+
+    echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
+
+or you can use `rclone check` to verify the hashes are OK.
+
+For large objects, calculating this hash can take some time so the
+addition of this hash can be disabled with `--s3-disable-checksum`.
+This will mean that these objects do not have an MD5 checksum.
+
+Note that reading this from the object takes an additional `HEAD`
+request as the metadata isn't returned in object listings.
+
 ### Reducing costs
 
 #### Avoiding HEAD requests to read the modification time
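
To make the hunk above concrete, here is a sketch (not from the commit itself) of checking the stored hash end to end. It uses the example base64 value from the docs; the local and remote paths are hypothetical, and the `--s3-disable-checksum` line is a plain usage example of the flag the text describes.

```
# Decode the base64 MD5 that rclone stores in X-Amz-Meta-Md5chksum
# (example value from the docs) into a hex digest:
echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump -e '16/1 "%02x" "\n"'

# Compare against the MD5 of the local copy (hypothetical file name):
md5sum file.bin

# Or let rclone do the comparison for a whole path:
rclone check /local/path remote:bucket/path

# For large objects, skip the upfront MD5 calculation entirely
# (such objects will then have no MD5 checksum stored):
rclone copy --s3-disable-checksum /local/big remote:bucket/path
```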
@@ -375,29 +400,6 @@ there for more details.
 
 Setting this flag increases the chance for undetected upload failures.
 
-### Hashes
-
-For small objects which weren't uploaded as multipart uploads (objects
-sized below `--s3-upload-cutoff` if uploaded with rclone) rclone uses
-the `ETag:` header as an MD5 checksum.
-
-However for objects which were uploaded as multipart uploads or with
-server side encryption (SSE-AWS or SSE-C) the `ETag` header is no
-longer the MD5 sum of the data, so rclone adds an additional piece of
-metadata `X-Amz-Meta-Md5chksum` which is a base64 encoded MD5 hash (in
-the same format as is required for `Content-MD5`). You can use base64 -d and hexdump to check this value manually:
-
-    echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
-
-or you can use `rclone check` to verify the hashes are OK.
-
-For large objects, calculating this hash can take some time so the
-addition of this hash can be disabled with `--s3-disable-checksum`.
-This will mean that these objects do not have an MD5 checksum.
-
-Note that reading this from the object takes an additional `HEAD`
-request as the metadata isn't returned in object listings.
-
 ### Versions
 
 When bucket versioning is enabled (this can be done with rclone with
@@ -660,7 +662,8 @@ According to AWS's [documentation on S3 Object Lock](https://docs.aws.amazon.com
 
 > If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header.
 
-As mentioned in the [Hashes](#hashes) section, small files that are not uploaded as multipart, use a different tag, causing the upload to fail.
+As mentioned in the [Modification times and hashes](#modification-times-and-hashes) section,
+small files that are not uploaded as multipart, use a different tag, causing the upload to fail.
 A simple solution is to set the `--s3-upload-cutoff 0` and force all the files to be uploaded as multipart.
 
 {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
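
A usage sketch of the workaround the hunk above describes (not from the commit itself; the remote and path names are hypothetical):

```
# Force all uploads to be multipart so that Object Lock's Content-MD5
# requirement doesn't reject small single-part uploads:
rclone copy --s3-upload-cutoff 0 /local/path remote:locked-bucket/path
```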