
config,s3: hierarchical configuration support #2140

This introduces a method of making provider specific configuration
within a remote.  This is useful particularly in s3.

This commit does the basic configuration in S3 for IBM COS.
Author: Giri Badanahatti
Date: 2018-04-12 11:05:53 -05:00
Committed by: Nick Craig-Wood
Parent: 9e4cd55477
Commit: acd5d4377e

5 changed files with 623 additions and 306 deletions
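In practice, the new provider question means a remote's configuration records which S3-compatible service it talks to, and the remaining questions (endpoint, location constraint, ACL) adapt to that choice. A minimal sketch of a resulting remote, modelled on the IBM COS example in the s3.md changes below (remote name and credential values are placeholders):

```
[my-ibm-cos]
type = s3
Provider = IBMCOS
access_key_id = <ACCESS_KEY>
secret_access_key = <SECRET_KEY>
endpoint = s3-api.us-geo.objectstorage.softlayer.net
location_constraint = us-standard
acl = private
```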


@@ -26,7 +26,7 @@ Rclone is a command line program to sync files and directories to and from:
* {{< provider name="Google Drive" home="https://www.google.com/drive/" config="/drive/" >}}
* {{< provider name="HTTP" home="https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol" config="/http/" >}}
* {{< provider name="Hubic" home="https://hubic.com/" config="/hubic/" >}}
* {{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/" >}}
* {{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}}
* {{< provider name="Memset Memstore" home="https://www.memset.com/cloud/storage/" config="/swift/" >}}
* {{< provider name="Microsoft Azure Blob Storage" home="https://azure.microsoft.com/en-us/services/storage/blobs/" config="/azureblob/" >}}
* {{< provider name="Microsoft OneDrive" home="https://onedrive.live.com/" config="/onedrive/" >}}


@@ -4,8 +4,17 @@ description: "Rclone docs for Amazon S3"
date: "2016-07-11"
---
<i class="fa fa-amazon"></i> Amazon S3
---------------------------------------
<i class="fa fa-amazon"></i> Amazon S3 Storage Providers
--------------------------------------------------------
* {{< provider name="AWS S3" home="https://aws.amazon.com/s3/" config="/s3/" >}}
* {{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
* {{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
* {{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}}
* {{< provider name="Minio" home="https://www.minio.io/" config="/s3/#minio" >}}
* {{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" >}}
## AWS S3 {#amazon-s3}
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`.
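For instance, assuming a remote named `remote` and a bucket called `bucket` (both placeholders), typical commands look like:

```
rclone mkdir remote:bucket
rclone ls remote:bucket
rclone sync /home/local/directory remote:bucket/path/to/dir
```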
@@ -29,14 +38,27 @@ Choose a number from below, or type in your own value
\ "alias"
2 / Amazon Drive
\ "amazon cloud drive"
3 / Amazon S3 (also Dreamhost, Ceph, Minio)
3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, Minio, IBM COS)
\ "s3"
4 / Backblaze B2
\ "b2"
[snip]
23 / http Connection
\ "http"
Storage> s3
Storage> 3
Choose the S3 provider.
Choose a number from below, or type in your own value
1 / Choose this option to configure Storage to AWS S3
\ "AWS"
2 / Choose this option to configure Storage to Ceph Systems
\ "Ceph"
3 / Choose this option to configure Storage to Dreamhost
\ "Dreamhost"
4 / Choose this option to the configure Storage to IBM COS S3
\ "IBMCOS"
5 / Choose this option to the configure Storage to Minio
\ "Minio"
Provider>1
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
@@ -100,7 +122,7 @@ region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia or Pacific Northwest.
@@ -178,11 +200,11 @@ env_auth = false
access_key_id = XXX
secret_access_key = YYY
region = us-east-1
endpoint =
location_constraint =
acl = private
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -268,7 +290,7 @@ credentials then S3 interaction will be non-authenticated (see below).
### S3 Permissions ###
When using the `sync` subcommand of `rclone` the following minimum
permissions are required to be available on the bucket being written to:
* `ListBucket`
@@ -308,10 +330,10 @@ Notes on above:
1. This is a policy that can be used when creating bucket. It assumes
that `USER_NAME` has been created.
2. The Resource entry must include both resource ARNs, as one implies
the bucket and the other implies the bucket's objects.
For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
that will generate one or more buckets that will work with `rclone sync`.
### Key Management System (KMS) ###
@@ -365,6 +387,7 @@ Note that 2 chunks of this size are buffered in memory per transfer.
If you are transferring large files over high speed links and you have
enough memory, then increasing this will speed up the transfers.
### Anonymous access to public buckets ###
If you want to use rclone to access a public bucket, configure with a
@@ -427,12 +450,12 @@ type = s3
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region =
endpoint = https://ceph.endpoint.example.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```
Note also that Ceph sometimes puts `/` in the passwords it gives
@@ -498,11 +521,11 @@ Storage> s3
env_auth> 1
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
region>
endpoint> nyc3.digitaloceanspaces.com
location_constraint>
acl>
storage_class>
```
The resulting configuration file should look like:
@@ -513,12 +536,12 @@ type = s3
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region =
endpoint = nyc3.digitaloceanspaces.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```
Once configured, you can create a new Space and begin copying files. For example:
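A sketch of what those first commands might look like, assuming the remote was named `spaces` and using a placeholder Space name:

```
rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
```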
@@ -545,30 +568,41 @@ To configure access to IBM COS S3, follow the steps below:
2. Enter the name for the configuration
```
name> IBM-COS-XREGION
name> <YOUR NAME>
```
3. Select "s3" storage.
```
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
Choose a number from below, or type in your own value
1 / Alias for a existing remote
\ "alias"
2 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph, Minio, IBM COS(S3))
3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, Minio, IBM COS)
\ "s3"
3 / Backblaze B2
Storage> 2
4 / Backblaze B2
\ "b2"
[snip]
23 / http Connection
\ "http"
Storage> 3
```
4. Select "Enter AWS credentials…"
4. Select IBM COS as the S3 Storage Provider.
```
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
env_auth> 1
Choose the S3 provider.
Choose a number from below, or type in your own value
1 / Choose this option to configure Storage to AWS S3
\ "AWS"
2 / Choose this option to configure Storage to Ceph Systems
\ "Ceph"
3 / Choose this option to configure Storage to Dreamhost
\ "Dreamhost"
4 / Choose this option to the configure Storage to IBM COS S3
\ "IBMCOS"
5 / Choose this option to the configure Storage to Minio
\ "Minio"
Provider>4
```
5. Enter the Access Key and Secret.
@@ -579,111 +613,94 @@ To configure access to IBM COS S3, follow the steps below:
secret_access_key> <>
```
6. Select "other-v4-signature" region.
6. Specify the endpoint for IBM COS. For public IBM COS, choose from the options below. For on-premise IBM COS, enter an endpoint address.
```
Region to connect to.
Endpoint for IBM COS S3 API.
Specify if using an IBM COS On Premise.
Choose a number from below, or type in your own value
/ The default endpoint - a good choice if you are unsure.
1 | US Region, Northern Virginia or Pacific Northwest.
| Leave location constraint empty.
\ "us-east-1"
/ US East (Ohio) Region
2 | Needs location constraint us-east-2.
\ "us-east-2"
/ US West (Oregon) Region
…<omitted>…
15 | eg Ceph/Dreamhost
| set this and make sure you set the endpoint.
\ "other-v2-signature"
/ If using an S3 clone that understands v4 signatures set this
16 | and make sure you set the endpoint.
\ "other-v4-signature
region> 16
1 / US Cross Region Endpoint
\ "s3-api.us-geo.objectstorage.softlayer.net"
2 / US Cross Region Dallas Endpoint
\ "s3-api.dal.us-geo.objectstorage.softlayer.net"
3 / US Cross Region Washington DC Endpoint
\ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
4 / US Cross Region San Jose Endpoint
\ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
5 / US Cross Region Private Endpoint
\ "s3-api.us-geo.objectstorage.service.networklayer.com"
6 / US Cross Region Dallas Private Endpoint
\ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
7 / US Cross Region Washington DC Private Endpoint
\ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
8 / US Cross Region San Jose Private Endpoint
\ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
9 / US Region East Endpoint
\ "s3.us-east.objectstorage.softlayer.net"
10 / US Region East Private Endpoint
\ "s3.us-east.objectstorage.service.networklayer.com"
11 / US Region South Endpoint
[snip]
34 / Toronto Single Site Private Endpoint
\ "s3.tor01.objectstorage.service.networklayer.com"
endpoint>1
```
7. Enter the endpoint FQDN.
7. Specify an IBM COS Location Constraint. The location constraint must match the endpoint when using IBM Cloud Public. For on-premise COS, do not make a selection from this list; just press Enter.
```
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> s3-api.us-geo.objectstorage.softlayer.net
1 / US Cross Region Standard
\ "us-standard"
2 / US Cross Region Vault
\ "us-vault"
3 / US Cross Region Cold
\ "us-cold"
4 / US Cross Region Flex
\ "us-flex"
5 / US East Region Standard
\ "us-east-standard"
6 / US East Region Vault
\ "us-east-vault"
7 / US East Region Cold
\ "us-east-cold"
8 / US East Region Flex
\ "us-east-flex"
9 / US South Region Standard
\ "us-south-standard"
10 / US South Region Vault
\ "us-south-vault"
[snip]
32 / Toronto Flex
\ "tor01-flex"
location_constraint>1
```
8. Specify a IBM COS Location Constraint.
a. Currently, the only IBM COS values for LocationConstraint are:
us-standard / us-vault / us-cold / us-flex
us-east-standard / us-east-vault / us-east-cold / us-east-flex
us-south-standard / us-south-vault / us-south-cold / us-south-flex
eu-standard / eu-vault / eu-cold / eu-flex
9. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud (Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.
```
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia or Pacific Northwest.
\ ""
2 / US East (Ohio) Region.
\ "us-east-2"
…<omitted>…
location_constraint> us-standard
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
\ "private"
2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
\ "public-read"
3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
\ "public-read-write"
4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
\ "authenticated-read"
acl> 1
```
9. Specify a canned ACL. IBM COS on Bluemix (IBM Cloud) supports "public-read" and "private". IBM COS Infrastructure on Bluemix (IBM Cloud) supports all the canned ACLs. On-Prem COS supports all the canned ACLs.
```
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
\ "public-read"
/ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
3 | Granting this on a bucket is generally not recommended.
\ "public-read-write"
4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
\ "authenticated-read"
/ Object owner gets FULL_CONTROL. Bucket owner gets READ access.
5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-read"
/ Both the object owner and the bucket owner get FULL_CONTROL over the object.
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
\ "bucket-owner-full-control"
acl> 1
```
10. Set the SSE option to "None".
```
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
server_side_encryption> 1
```
11. Set the storage class to "None" (IBM COS uses the LocationConstraint at the bucket level).
```
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Standard storage class
\ "STANDARD"
3 / Reduced redundancy storage class
\ "REDUCED_REDUNDANCY"
4 / Standard Infrequent Access storage class
\ "STANDARD_IA"
storage_class>
```
12. Review the displayed configuration and accept it to save the remote, then quit. The config file should look like this:
```
env_auth = false
access_key_id = <>
secret_access_key = <>
region = other-v4-signature
[xxx]
type = s3
Provider = IBMCOS
access_key_id = xxx
secret_access_key = yyy
endpoint = s3-api.us-geo.objectstorage.softlayer.net
location_constraint = us-standard
acl = private
server_side_encryption =
storage_class =
```
13. Execute rclone commands
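For example, assuming the remote was saved as `ibm-cos` and using a placeholder bucket name, commands along these lines exercise the new remote:

```
rclone mkdir ibm-cos:newbucket
rclone lsd ibm-cos:
rclone copy /path/to/files ibm-cos:newbucket
rclone ls ibm-cos:newbucket
```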
@@ -822,21 +839,21 @@ Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia or Pacific Northwest.
\ ""
[snip]
location_constraint>
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
[snip]
acl>
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
1 / Default
@@ -847,7 +864,7 @@ Choose a number from below, or type in your own value
\ "REDUCED_REDUNDANCY"
4 / Standard Infrequent Access storage class
\ "STANDARD_IA"
storage_class>
Remote config
--------------------
[wasabi]
@@ -856,10 +873,10 @@ access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = us-east-1
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -876,8 +893,8 @@ access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = us-east-1
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```
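Once saved, the Wasabi remote behaves like any other S3 remote; for instance (bucket name is a placeholder):

```
rclone lsd wasabi:
rclone copy /path/to/files wasabi:my-bucket
```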