mirror of https://github.com/rclone/rclone.git
synced 2026-02-28 10:23:19 +00:00
Version v1.73.0

6170 MANUAL.html (generated): file diff suppressed because it is too large
2409 MANUAL.txt (generated): file diff suppressed because it is too large
@@ -23,6 +23,7 @@ docs = [
     "gui.md",
     "rc.md",
     "overview.md",
+    "tiers.md",
    "flags.md",
     "docker.md",
     "bisync.md",
@@ -43,7 +44,7 @@ docs = [
     "compress.md",
     "combine.md",
     "doi.md",
-    "drime.md"
+    "drime.md",
     "dropbox.md",
     "filefabric.md",
     "filelu.md",
@@ -143,7 +144,7 @@ def read_doc(doc):
     contents = fd.read()
     parts = contents.split("---\n", 2)
     if len(parts) != 3:
-        raise ValueError("Couldn't find --- markers: found %d parts" % len(parts))
+        raise ValueError(f"{doc}: Couldn't find --- markers: found {len(parts)} parts")
     contents = parts[2].strip()+"\n\n"
     # Remove icons
     contents = re.sub(r'<i class="fa.*?</i>\s*', "", contents)
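The `read_doc` change above hinges on splitting a docs page on its front-matter markers. A minimal standalone sketch of that parsing (a hypothetical `strip_front_matter` helper mirroring the script's logic, not the script itself):

```python
import re

def strip_front_matter(contents):
    # Split on the first two "---\n" markers: parts[1] is the YAML
    # front matter, parts[2] is the document body.
    parts = contents.split("---\n", 2)
    if len(parts) != 3:
        raise ValueError(f"Couldn't find --- markers: found {len(parts)} parts")
    body = parts[2].strip() + "\n\n"
    # Drop Font Awesome icon tags, as the build script does.
    return re.sub(r'<i class="fa.*?</i>\s*', "", body)

doc = '---\ntitle: Example\n---\nHello <i class="fa fa-star"></i> world\n'
print(strip_front_matter(doc))  # Hello world
```

The f-string change in the diff simply prefixes this error with the offending filename, which makes batch doc builds much easier to debug.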
@@ -451,6 +451,20 @@ Properties:
 - Type: string
 - Required: false
 
+#### --azureblob-connection-string
+
+Storage Connection String.
+
+Connection string for the storage. Leave blank if using other auth methods.
+
+
+Properties:
+
+- Config: connection_string
+- Env Var: RCLONE_AZUREBLOB_CONNECTION_STRING
+- Type: string
+- Required: false
+
 #### --azureblob-tenant
 
 ID of the service principal's tenant. Also called its directory ID.
@@ -1060,6 +1074,24 @@ Properties:
 - Type: string
 - Required: false
 
+### Metadata
+
+User metadata is stored as x-ms-meta- keys. Azure metadata keys are case insensitive and are always returned in lower case.
+
+Here are the possible system metadata items for the azureblob backend.
+
+| Name | Help | Type | Example | Read Only |
+|------|------|------|---------|-----------|
+| cache-control | Cache-Control header | string | no-cache | N |
+| content-disposition | Content-Disposition header | string | inline | N |
+| content-encoding | Content-Encoding header | string | gzip | N |
+| content-language | Content-Language header | string | en-US | N |
+| content-type | Content-Type header | string | text/plain | N |
+| mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
+| tier | Tier of the object | string | Hot | **Y** |
+
+See the [metadata](/docs/#metadata) docs for more info.
+
 <!-- autogenerated options stop -->
 
 ### Custom upload headers
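Because Azure returns metadata keys in lower case regardless of how they were set, client code that round-trips metadata should normalize keys before comparing. A small illustration (hypothetical helper, not rclone code):

```python
def normalize_azure_meta(meta):
    # Azure metadata keys are case insensitive and always come back
    # in lower case, so compare against a lower-cased view of what
    # was uploaded.
    return {k.lower(): v for k, v in meta.items()}

uploaded = {"Cache-Control": "no-cache", "Content-Type": "text/plain"}
print(normalize_azure_meta(uploaded))  # keys become cache-control, content-type
```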
@@ -359,7 +359,7 @@ Azure Storage Account Name.
 
 Set this to the Azure Storage Account Name in use.
 
-Leave blank to use SAS URL or connection string, otherwise it needs to be set.
+Leave blank to use SAS URL or Emulator, otherwise it needs to be set.
 
 If this is blank and if env_auth is set it will be read from the
 environment variable `AZURE_STORAGE_ACCOUNT_NAME` if possible.
@@ -372,25 +372,11 @@ Properties:
 - Type: string
 - Required: false
 
-#### --azurefiles-share-name
-
-Azure Files Share Name.
-
-This is required and is the name of the share to access.
-
-
-Properties:
-
-- Config: share_name
-- Env Var: RCLONE_AZUREFILES_SHARE_NAME
-- Type: string
-- Required: false
-
 #### --azurefiles-env-auth
 
 Read credentials from runtime (environment variables, CLI or MSI).
 
-See the [authentication docs](/azurefiles#authentication) for full info.
+See the [authentication docs](/azureblob#authentication) for full info.
 
 
 Properties:
@@ -403,7 +389,7 @@ Properties:
 
 Storage Account Shared Key.
 
-Leave blank to use SAS URL or connection string.
+Leave blank to use SAS URL or Emulator.
 
 Properties:
 
@@ -414,9 +400,9 @@ Properties:
 
 #### --azurefiles-sas-url
 
-SAS URL.
+SAS URL for container level access only.
 
-Leave blank if using account/key or connection string.
+Leave blank if using account/key or Emulator.
 
 Properties:
 
@@ -427,7 +413,10 @@ Properties:
 
 #### --azurefiles-connection-string
 
-Azure Files Connection String.
+Storage Connection String.
+
+Connection string for the storage. Leave blank if using other auth methods.
+
 
 Properties:
 
@@ -519,6 +508,20 @@ Properties:
 - Type: string
 - Required: false
 
+#### --azurefiles-share-name
+
+Azure Files Share Name.
+
+This is required and is the name of the share to access.
+
+
+Properties:
+
+- Config: share_name
+- Env Var: RCLONE_AZUREFILES_SHARE_NAME
+- Type: string
+- Required: false
+
 ### Advanced options
 
 Here are the Advanced options specific to azurefiles (Microsoft Azure Files).
@@ -581,13 +584,11 @@ Path to file containing credentials for use with a service principal.
 
 Leave blank normally. Needed only if you want to use a service principal instead of interactive login.
 
     $ az ad sp create-for-rbac --name "<name>" \
-      --role "Storage Files Data Owner" \
+      --role "Storage Blob Data Owner" \
       --scopes "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container>" \
       > azure-principal.json
 
-See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and ["Assign an Azure role for access to files data"](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details.
-
-**NB** this section needs updating for Azure Files - pull requests appreciated!
+See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and ["Assign an Azure role for access to blob data"](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details.
 
 It may be more convenient to put the credentials directly into the
 rclone config file under the `client_id`, `tenant` and `client_secret`
@@ -601,6 +602,28 @@ Properties:
 - Type: string
 - Required: false
 
+#### --azurefiles-disable-instance-discovery
+
+Skip requesting Microsoft Entra instance metadata
+
+This should be set true only by applications authenticating in
+disconnected clouds, or private clouds such as Azure Stack.
+
+It determines whether rclone requests Microsoft Entra instance
+metadata from `https://login.microsoft.com/` before
+authenticating.
+
+Setting this to true will skip this request, making you responsible
+for ensuring the configured authority is valid and trustworthy.
+
+
+Properties:
+
+- Config: disable_instance_discovery
+- Env Var: RCLONE_AZUREFILES_DISABLE_INSTANCE_DISCOVERY
+- Type: bool
+- Default: false
+
 #### --azurefiles-use-msi
 
 Use a managed service identity to authenticate (only works in Azure).
@@ -660,32 +683,29 @@ Properties:
 - Type: string
 - Required: false
 
-#### --azurefiles-disable-instance-discovery
+#### --azurefiles-use-emulator
 
-Skip requesting Microsoft Entra instance metadata
-
-This should be set true only by applications authenticating in
-disconnected clouds, or private clouds such as Azure Stack.
-
-It determines whether rclone requests Microsoft Entra instance
-metadata from `https://login.microsoft.com/` before
-authenticating.
-
-Setting this to true will skip this request, making you responsible
-for ensuring the configured authority is valid and trustworthy.
+Uses local storage emulator if provided as 'true'.
 
+Leave blank if using real azure storage endpoint.
 
 Properties:
 
-- Config: disable_instance_discovery
-- Env Var: RCLONE_AZUREFILES_DISABLE_INSTANCE_DISCOVERY
+- Config: use_emulator
+- Env Var: RCLONE_AZUREFILES_USE_EMULATOR
 - Type: bool
 - Default: false
 
 #### --azurefiles-use-az
 
 Use Azure CLI tool az for authentication
 
 Set to use the [Azure CLI tool az](https://learn.microsoft.com/en-us/cli/azure/)
 as the sole means of authentication.
 
 Setting this can be useful if you wish to use the az CLI on a host with
 a System Managed Identity that you do not want to use.
 
 Don't set env_auth at the same time.
 
@@ -1049,7 +1049,11 @@ The following backends have known issues that need more investigation:
 
 <!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
+- `TestDropbox` (`dropbox`)
+  - [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
+  - Updated: 2025-11-21-010037
 - `TestSeafile` (`seafile`)
   - [`TestBisyncLocalRemote/volatile`](https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafile-1.txt)
 - `TestSeafileV6` (`seafile`)
   - [`TestBisyncLocalRemote/volatile`](https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafileV6-1.txt)
+  - Updated: 2026-01-30-010015
 <!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
 
 The following backends either have not been tested recently or have known issues
@@ -1058,6 +1062,7 @@ that are deemed unfixable for the time being:
 
 <!--- start list_ignores - DO NOT EDIT THIS SECTION - use make commanddocs --->
 - `TestArchive` (`archive`)
 - `TestCache` (`cache`)
+- `TestDrime` (`drime`)
 - `TestFileLu` (`filelu`)
 - `TestFilesCom` (`filescom`)
 - `TestImageKit` (`imagekit`)
@@ -6,6 +6,64 @@ description: "Rclone Changelog"
 
 # Changelog
 
+## v1.73.0 - 2026-01-30
+
+[See commits](https://github.com/rclone/rclone/compare/v1.72.0...v1.73.0)
+
+- New backends
+  - [Shade](/shade/) (jhasse-shade)
+  - [Drime](/drime/) (dougal)
+  - [Filen](/filen/) (Enduriel)
+  - [Internxt](/internxt/) (jzunigax2)
+- New S3 providers
+  - [Bizfly Cloud Simple Storage](/s3/#bizflycloud) (vupn0712)
+- New Features
+  - docs: Add [Support Tiers](/tiers/) to the documentation (Nick Craig-Wood)
+  - rc: Add [operations/hashsumfile](/rc/#operations-hashsumfile) to sum a single file only (Nick Craig-Wood)
+  - serve webdav: Implement download directory as Zip (Leo)
+- Bug Fixes
+  - fs: fix bwlimit: correct reporting (Mikel Olasagasti Uranga)
+  - log: fix systemd adding extra newline (dougal)
+  - docs: fixes (albertony, darkdragon-001, Duncan Smart, hyusap, Marc-Philip, Nick Craig-Wood, vicerace, vyv03354, yuval-cloudinary, yy)
+  - serve s3: Make errors in `--s3-auth-key` fatal (Nick Craig-Wood)
+- Mount
+  - Fix OpenBSD mount support (Nick Owens)
+- Azure Blob
+  - Add metadata and tags support across upload and copy paths (Cliff Frey)
+  - Factor the common auth into a library (Nick Craig-Wood)
+- Azurefiles
+  - Factor the common auth into a library (Nick Craig-Wood)
+- B2
+  - Support authentication with new bucket restricted application keys (DianaNites)
+- Drive
+  - Add `--drive-metadata-enforce-expansive-access` flag (Nick Craig-Wood)
+  - Fix crash when trying to create a shortcut to a Google doc (Nick Craig-Wood)
+- FTP
+  - Add http proxy authentication support (Nicolas Dessart)
+- Mega
+  - Revert TLS workaround (necaran)
+- Memory
+  - Add `--memory-discard` flag for speed testing (Nick Craig-Wood)
+- OneDrive
+  - Fix cancelling multipart upload (Nick Craig-Wood)
+  - Fix setting modification time on directories for OneDrive Personal (Nick Craig-Wood)
+  - Fix OneDrive Personal no longer supports description (Nick Craig-Wood)
+  - Fix require sign in for OneDrive Personal (Nick Craig-Wood)
+  - Fix permissions on OneDrive Personal (Nick Craig-Wood)
+- Oracle Object Storage
+  - Eliminate unnecessary heap allocation (Qingwei Li)
+- Pcloud
+  - Add support for `ChangeNotify` to enable real-time updates in mount (masrlinu)
+- Protondrive
+  - Update to use forks of upstream modules to unblock development (Nick Craig-Wood)
+- S3
+  - Add ability to specify an IAM role for cross-account interaction (Vladislav Tropnikov)
+  - Linode: updated endpoints to use ISO 3166-1 alpha-2 standard (jbagwell-akamai)
+  - Fix Copy ignoring storage class (vupn0712)
+- SFTP
+  - Add http proxy authentication support (Nicolas Dessart)
+  - Eliminate unnecessary heap allocation (Qingwei Li)
+
 ## v1.72.1 - 2025-12-10
 
 [See commits](https://github.com/rclone/rclone/compare/v1.72.0...v1.72.1)
@@ -37,6 +37,7 @@ rclone [flags]
 --azureblob-client-id string The ID of the client in use
 --azureblob-client-secret string One of the service principal's client secrets
 --azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
+--azureblob-connection-string string Storage Connection String
 --azureblob-copy-concurrency int Concurrency for multipart copy (default 512)
 --azureblob-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 8Mi)
 --azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion
@@ -73,7 +74,7 @@ rclone [flags]
 --azurefiles-client-id string The ID of the client in use
 --azurefiles-client-secret string One of the service principal's client secrets
 --azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
---azurefiles-connection-string string Azure Files Connection String
+--azurefiles-connection-string string Storage Connection String
 --azurefiles-description string Description of the remote
 --azurefiles-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
 --azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
@@ -85,12 +86,13 @@ rclone [flags]
 --azurefiles-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
 --azurefiles-msi-object-id string Object ID of the user-assigned MSI to use, if any
 --azurefiles-password string The user's password (obscured)
---azurefiles-sas-url string SAS URL
+--azurefiles-sas-url string SAS URL for container level access only
 --azurefiles-service-principal-file string Path to file containing credentials for use with a service principal
 --azurefiles-share-name string Azure Files Share Name
 --azurefiles-tenant string ID of the service principal's tenant. Also called its directory ID
 --azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
 --azurefiles-use-az Use Azure CLI tool az for authentication
+--azurefiles-use-emulator Uses local storage emulator if provided as 'true'
 --azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
 --azurefiles-username string User name (usually an email address)
 --b2-account string Account ID or Application Key ID
@@ -220,6 +222,16 @@ rclone [flags]
 --doi-doi string The DOI or the doi.org URL
 --doi-doi-resolver-api-url string The URL of the DOI resolver API to use
 --doi-provider string DOI provider
+--drime-access-token string API Access token
+--drime-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
+--drime-description string Description of the remote
+--drime-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+--drime-hard-delete Delete files permanently rather than putting them into the trash
+--drime-list-chunk int Number of items to list in each call (default 1000)
+--drime-root-folder-id string ID of the root folder
+--drime-upload-concurrency int Concurrency for multipart uploads and copies (default 4)
+--drime-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
+--drime-workspace-id string Account ID
 --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
 --drive-allow-import-name-change Allow the filetype to change when uploading Google docs
 --drive-auth-owner-only Only consider files owned by the authenticated user
@@ -240,6 +252,7 @@ rclone [flags]
 --drive-import-formats string Comma separated list of preferred formats for uploading Google docs
 --drive-keep-revision-forever Keep new head revision of each file forever
 --drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
+--drive-metadata-enforce-expansive-access Whether the request should enforce expansive access rules
 --drive-metadata-labels Bits Control whether labels should be read or written in metadata (default off)
 --drive-metadata-owner Bits Control whether owner should be read or written in metadata (default read)
 --drive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off)
@@ -319,6 +332,17 @@ rclone [flags]
 --filelu-description string Description of the remote
 --filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
 --filelu-key string Your FileLu Rclone key from My Account
+--filen-api-key string API Key for your Filen account (obscured)
+--filen-auth-version string Authentication Version (internal use only)
+--filen-base-folder-uuid string UUID of Account Root Directory (internal use only)
+--filen-description string Description of the remote
+--filen-email string Email of your Filen account
+--filen-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+--filen-master-keys string Master Keys (internal use only)
+--filen-password string Password of your Filen account (obscured)
+--filen-private-key string Private RSA Key (internal use only)
+--filen-public-key string Public RSA Key (internal use only)
+--filen-upload-concurrency int Concurrency for chunked uploads (default 16)
 --files-from stringArray Read list of source-file names from file (use - to read from stdin)
 --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
 --filescom-api-key string The API key used to authenticate with Files.com
@@ -369,7 +393,7 @@ rclone [flags]
 --gcs-description string Description of the remote
 --gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
 --gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
---gcs-endpoint string Endpoint for the service
+--gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
 --gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
 --gcs-location string Location for the newly created buckets
 --gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
@@ -477,6 +501,11 @@ rclone [flags]
 --internetarchive-item-metadata stringArray Metadata to be set on the IA item, this is different from file-level metadata that can be set using --metadata-set
 --internetarchive-secret-access-key string IAS3 Secret Key (password)
 --internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
+--internxt-description string Description of the remote
+--internxt-email string Email of your Internxt account
+--internxt-encoding Encoding The encoding for the backend (default Slash,BackSlash,CrLf,RightPeriod,InvalidUtf8,Dot)
+--internxt-pass string Password (obscured)
+--internxt-skip-hash-validation Skip hash validation when downloading files (default true)
 --jottacloud-auth-url string Auth server URL
 --jottacloud-client-credentials Use client credentials OAuth flow
 --jottacloud-client-id string OAuth Client Id
@@ -562,6 +591,7 @@ rclone [flags]
 --mega-use-https Use HTTPS for transfers
 --mega-user string User name
 --memory-description string Description of the remote
+--memory-discard If set all writes will be discarded and reads will return an error
 --memprofile string Write memory profile to file
 -M, --metadata If set, preserve metadata when copying objects
 --metadata-exclude stringArray Exclude metadatas matching pattern
@@ -819,6 +849,10 @@ rclone [flags]
 --s3-provider string Choose your S3 provider
 --s3-region string Region to connect to
 --s3-requester-pays Enables requester pays option when interacting with S3 bucket
+--s3-role-arn string ARN of the IAM role to assume
+--s3-role-external-id string External ID for assumed role
+--s3-role-session-duration string Session duration for assumed role
+--s3-role-session-name string Session name for assumed role
 --s3-sdk-log-mode Bits Set to debug the SDK (default Off)
 --s3-secret-access-key string AWS Secret Access Key (password)
 --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
@@ -903,6 +937,16 @@ rclone [flags]
 --sftp-user string SSH username (default "$USER")
 --sftp-xxh128sum-command string The command used to read XXH128 hashes
 --sftp-xxh3sum-command string The command used to read XXH3 hashes
+--shade-api-key string An API key for your account
+--shade-chunk-size SizeSuffix Chunk size to use for uploading (default 64Mi)
+--shade-description string Description of the remote
+--shade-drive-id string The ID of your drive, see this in the drive settings. Individual rclone configs must be made per drive
+--shade-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+--shade-endpoint string Endpoint for the service
+--shade-max-upload-parts int Maximum amount of parts in a multipart upload (default 10000)
+--shade-token string JWT Token for performing Shade FS operations. Don't set this value - rclone will set it automatically
+--shade-token-expiry string JWT Token Expiration time. Don't set this value - rclone will set it automatically
+--shade-upload-concurrency int Concurrency for multipart uploads and copies. This is the number of chunks of the same file that are uploaded concurrently for multipart uploads and copies (default 4)
 --sharefile-auth-url string Auth server URL
 --sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
 --sharefile-client-credentials Use client credentials OAuth flow
@@ -1019,7 +1063,7 @@ rclone [flags]
 --use-json-log Use json log format
 --use-mmap Use mmap allocator (see docs)
 --use-server-modtime Use server modified time instead of object metadata
---user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
+--user-agent string Set the user-agent to a specified string (default "rclone/v1.73.0")
 -v, --verbose count Print lots more stuff (repeat for more)
 -V, --version Print the version number
 --webdav-auth-redirect Preserve authentication on redirect
@@ -231,12 +231,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=echo"
 
 ```console
 rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
-// Output: stories/The Quick Brown Fox!-20251121
+// Output: stories/The Quick Brown Fox!-20260130
 ```
 
 ```console
 rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
-// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
+// Output: stories/The Quick Brown Fox!-2026-01-30 0825PM
 ```
 
 ```console
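The `date=-{YYYYMMDD}` transform above just appends a release-date timestamp to the name; an equivalent rendering in Python (illustrative only, not rclone's implementation):

```python
from datetime import datetime

stem = "stories/The Quick Brown Fox!"
ts = datetime(2026, 1, 30, 20, 25)
# date=-{YYYYMMDD} appends the current date as -YYYYMMDD
print(stem + ts.strftime("-%Y%m%d"))  # stories/The Quick Brown Fox!-20260130
```

This is why the expected outputs in the docs change on every release: they are regenerated with the build date.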
@@ -41,6 +41,10 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
 The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default -
 use `-R` to make them recurse.
 
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use `--disable ListR` to suppress the behavior.
+See [`--fast-list`](/docs/#fast-list) for more details.
+
 Listing a nonexistent directory will produce an error except for
 remotes which can't have empty directories (e.g. s3, swift, or gcs -
 the bucket-based remotes).
@@ -53,6 +53,10 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
 The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default -
 use `-R` to make them recurse.
 
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use `--disable ListR` to suppress the behavior.
+See [`--fast-list`](/docs/#fast-list) for more details.
+
 Listing a nonexistent directory will produce an error except for
 remotes which can't have empty directories (e.g. s3, swift, or gcs -
 the bucket-based remotes).
@@ -158,6 +158,10 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
 The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default -
 use `-R` to make them recurse.
 
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use `--disable ListR` to suppress the behavior.
+See [`--fast-list`](/docs/#fast-list) for more details.
+
 Listing a nonexistent directory will produce an error except for
 remotes which can't have empty directories (e.g. s3, swift, or gcs -
 the bucket-based remotes).
@@ -68,7 +68,7 @@ with the following options:
 
 - If `--files-only` is specified then files will be returned only,
   no directories.
 
-If `--stat` is set then the the output is not an array of items,
+If `--stat` is set then the output is not an array of items,
 but instead a single JSON blob will be returned about the item pointed to.
 This will return an error if the item isn't found, however on bucket based
 backends (like s3, gcs, b2, azureblob etc) if the item isn't found it will
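The `--stat` distinction matters when parsing `lsjson` output: without it you get a JSON array, with it a single object. A hedged sketch of consuming both shapes (assumes the command output has already been captured as a string):

```python
import json

def parse_lsjson(output):
    # Without --stat the output is a JSON array of items;
    # with --stat it is one JSON object describing a single item.
    data = json.loads(output)
    return data if isinstance(data, list) else [data]

print(parse_lsjson('[{"Name": "a.txt"}, {"Name": "b.txt"}]'))
print(parse_lsjson('{"Name": "a.txt"}'))
```

Normalizing both shapes to a list keeps downstream code identical whichever flag was used.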
@@ -111,6 +111,10 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
 The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default -
 use `-R` to make them recurse.
 
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use `--disable ListR` to suppress the behavior.
+See [`--fast-list`](/docs/#fast-list) for more details.
+
 Listing a nonexistent directory will produce an error except for
 remotes which can't have empty directories (e.g. s3, swift, or gcs -
 the bucket-based remotes).
@@ -42,6 +42,10 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
 The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default -
 use `-R` to make them recurse.
 
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use `--disable ListR` to suppress the behavior.
+See [`--fast-list`](/docs/#fast-list) for more details.
+
 Listing a nonexistent directory will produce an error except for
 remotes which can't have empty directories (e.g. s3, swift, or gcs -
 the bucket-based remotes).
@@ -78,7 +78,7 @@ at all, then 1 PiB is set as both the total and the free size.
 
 ## Installing on Windows
 
 To run `rclone mount on Windows`, you will need to
-download and install [WinFsp](http://www.secfs.net/winfsp/).
+download and install [WinFsp](https://winfsp.dev).
 
 [WinFsp](https://github.com/winfsp/winfsp) is an open-source
 Windows File System Proxy which makes it easy to write user space file
@@ -336,7 +336,7 @@ full new copy of the file.
 
 When mounting with `--read-only`, attempts to write to files will fail *silently*
 as opposed to with a clear warning as in macFUSE.
 
-## Mounting on Linux
+# Mounting on Linux
 
 On newer versions of Ubuntu, you may encounter the following error when running
 `rclone mount`:
@@ -79,7 +79,7 @@ at all, then 1 PiB is set as both the total and the free size.
 
 ## Installing on Windows
 
 To run `rclone nfsmount on Windows`, you will need to
-download and install [WinFsp](http://www.secfs.net/winfsp/).
+download and install [WinFsp](https://winfsp.dev).
 
 [WinFsp](https://github.com/winfsp/winfsp) is an open-source
 Windows File System Proxy which makes it easy to write user space file
@@ -25,7 +25,7 @@ argument by passing a hyphen as an argument. This will use the first
 line of STDIN as the password not including the trailing newline.
 
 ```console
-echo "secretpassword" | rclone obscure -
+echo 'secretpassword' | rclone obscure -
 ```
 
 If there is no data on STDIN to read, rclone obscure will default to
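The switch to single quotes above matters because the shell rewrites some characters inside double quotes before the command ever sees them. A quick illustration with plain `echo` (no rclone involved):

```shell
# The password we want the command to receive, containing shell-special characters.
want='pa$$word'

# Single quotes hand the string over untouched.
got=$(echo 'pa$$word')
echo "$got"

# With double quotes the shell would rewrite it first: $$ expands to the
# shell's PID (and ! can trigger history expansion in interactive shells),
# so "pa$$word" no longer matches what you typed.
```

For passwords containing `$`, backticks, or `!`, single quotes are the safe default.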
@@ -26,6 +26,26 @@ docs](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)).
|
||||
`--auth-key` is not provided then `serve s3` will allow anonymous
|
||||
access.
|
||||
|
||||
Like all rclone flags `--auth-key` can be set via environment
|
||||
variables, in this case `RCLONE_AUTH_KEY`. Since this flag can be
|
||||
repeated, the input to `RCLONE_AUTH_KEY` is CSV encoded. Because the
|
||||
`accessKey,secretKey` has a comma in, this means it needs to be in
|
||||
quotes.
|
||||
|
||||
```console
|
||||
export RCLONE_AUTH_KEY='"user,pass"'
|
||||
rclone serve s3 ...
|
||||
```
|
||||
|
||||
Or to supply multiple identities:
|
||||
|
||||
```console
|
||||
export RCLONE_AUTH_KEY='"user1,pass1","user2,pass2"'
|
||||
rclone serve s3 ...
|
||||
```
|
||||
|
||||
Setting this variable without quotes will produce an error.
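The quoting can be sanity-checked in the shell before starting the server; the credentials below are placeholders:

```console
# The outer single quotes are consumed by the shell; the inner double
# quotes survive and tell rclone's CSV parser that each "user,pass"
# pair is one identity, not two separate values.
export RCLONE_AUTH_KEY='"user1,pass1","user2,pass2"'
echo "$RCLONE_AUTH_KEY"
```

If the inner double quotes are missing from the output, rclone will see four CSV fields instead of two identities and refuse to start.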

Please note that some clients may require HTTPS endpoints. See [the
SSL docs](#tls-ssl) for more information.

@@ -803,6 +803,7 @@ rclone serve webdav remote:path [flags]
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--disable-dir-list Disable HTML directory list on GET request for a directory
--disable-zip Disable zip download of directories
--etag-hash string Which hash to use for the ETag, or auto or blank for off
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)

@@ -113,7 +113,7 @@ Properties:

The URL of the DOI resolver API to use.

-The DOI resolver can be set for testing or for cases when the the canonical DOI resolver API cannot be used.
+The DOI resolver can be set for testing or for cases when the canonical DOI resolver API cannot be used.

Defaults to "https://doi.org/api".
@@ -190,7 +190,7 @@ Properties:

Account ID

-Leave this blank normally, rclone will fill it in automatically.
+Leave this blank normally unless you wish to specify a Workspace ID.

Properties:

@@ -211,6 +211,81 @@ Properties:
- Type: int
- Default: 1000

#### --drime-hard-delete

Delete files permanently rather than putting them into the trash.

Properties:

- Config: hard_delete
- Env Var: RCLONE_DRIME_HARD_DELETE
- Type: bool
- Default: false

#### --drime-upload-cutoff

Cutoff for switching to chunked upload.

Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5 GiB.

Properties:

- Config: upload_cutoff
- Env Var: RCLONE_DRIME_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 200Mi

#### --drime-chunk-size

Chunk size to use for uploading.

When uploading files larger than upload_cutoff or files with unknown
size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google
photos or google docs) they will be uploaded as multipart uploads
using this chunk size.

Note that "--drime-upload-concurrency" chunks of this size are buffered
in memory per transfer.

If you are transferring large files over high-speed links and you have
enough memory, then increasing this will speed up the transfers.

Rclone will automatically increase the chunk size when uploading a
large file of known size to stay below the 10,000 chunks limit.

Files of unknown size are uploaded with the configured
chunk_size. Since the default chunk size is 5 MiB and there can be at
most 10,000 chunks, this means that by default the maximum size of
a file you can stream upload is 48 GiB. If you wish to stream upload
larger files then you will need to increase chunk_size.
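The 48 GiB ceiling follows directly from chunk_size × 10,000 chunks; a quick shell check of the arithmetic (chunk sizes in MiB, results in GiB):

```console
# max streamed upload = chunk_size * 10,000 chunks
echo "$((5 * 10000 / 1024)) GiB"    # default 5 MiB chunks
echo "$((64 * 10000 / 1024)) GiB"   # 64 MiB chunks
```

So raising the chunk size to 64 MiB, for example, lifts the streaming ceiling to roughly 625 GiB, at the cost of more memory buffered per transfer.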

Properties:

- Config: chunk_size
- Env Var: RCLONE_DRIME_CHUNK_SIZE
- Type: SizeSuffix
- Default: 5Mi

#### --drime-upload-concurrency

Concurrency for multipart uploads and copies.

This is the number of chunks of the same file that are uploaded
concurrently for multipart uploads and copies.

If you are uploading small numbers of large files over high-speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.

Properties:

- Config: upload_concurrency
- Env Var: RCLONE_DRIME_UPLOAD_CONCURRENCY
- Type: int
- Default: 4

#### --drime-encoding

The encoding for the backend.

@@ -1420,6 +1420,23 @@ Properties:
- "read,write"
- Read and Write the value.

#### --drive-metadata-enforce-expansive-access

Whether the request should enforce expansive access rules.

From Feb 2026 this flag will be set by default, so it can be used for
testing before then.

See: https://developers.google.com/workspace/drive/api/guides/limited-expansive-access

Properties:

- Config: metadata_enforce_expansive_access
- Env Var: RCLONE_DRIVE_METADATA_ENFORCE_EXPANSIVE_ACCESS
- Type: bool
- Default: false

#### --drive-encoding

The encoding for the backend.
@@ -140,6 +140,24 @@ Properties:

Here are the Advanced options specific to filen (Filen).

#### --filen-upload-concurrency

Concurrency for chunked uploads.

This is the upper limit for how many transfers for the same file are running concurrently.
Setting this to a value smaller than 1 will cause uploads to deadlock.

If you are uploading small numbers of large files over high-speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.

Properties:

- Config: upload_concurrency
- Env Var: RCLONE_FILEN_UPLOAD_CONCURRENCY
- Type: int
- Default: 16

#### --filen-encoding

The encoding for the backend.
@@ -153,28 +171,6 @@ Properties:
- Type: Encoding
- Default: Slash,Del,Ctl,InvalidUtf8,Dot

#### --filen-upload-concurrency

Concurrency for multipart uploads.

This is the number of chunks of the same file that are uploaded
concurrently for multipart uploads.

Note that chunks are stored in memory and there may be up to
"--transfers" * "--filen-upload-concurrency" chunks stored at once
in memory.

If you are uploading small numbers of large files over high-speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.

Properties:

- Config: upload_concurrency
- Env Var: RCLONE_FILEN_UPLOAD_CONCURRENCY
- Type: int
- Default: 16

#### --filen-master-keys

Master Keys (internal use only)
@@ -121,7 +121,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
---user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
+--user-agent string Set the user-agent to a specified string (default "rclone/v1.73.0")
```
@@ -352,6 +352,7 @@ Backend-only flags (these can be set in the config file also).
--azureblob-client-id string The ID of the client in use
--azureblob-client-secret string One of the service principal's client secrets
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
--azureblob-connection-string string Storage Connection String
--azureblob-copy-concurrency int Concurrency for multipart copy (default 512)
--azureblob-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 8Mi)
--azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion

@@ -388,7 +389,7 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-client-id string The ID of the client in use
--azurefiles-client-secret string One of the service principal's client secrets
--azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
---azurefiles-connection-string string Azure Files Connection String
+--azurefiles-connection-string string Storage Connection String
--azurefiles-description string Description of the remote
--azurefiles-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)

@@ -400,12 +401,13 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
--azurefiles-msi-object-id string Object ID of the user-assigned MSI to use, if any
--azurefiles-password string The user's password (obscured)
---azurefiles-sas-url string SAS URL
+--azurefiles-sas-url string SAS URL for container level access only
--azurefiles-service-principal-file string Path to file containing credentials for use with a service principal
--azurefiles-share-name string Azure Files Share Name
--azurefiles-tenant string ID of the service principal's tenant. Also called its directory ID
--azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
--azurefiles-use-az Use Azure CLI tool az for authentication
--azurefiles-use-emulator Uses local storage emulator if provided as 'true'
--azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
--azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID

@@ -507,6 +509,16 @@ Backend-only flags (these can be set in the config file also).
--doi-doi string The DOI or the doi.org URL
--doi-doi-resolver-api-url string The URL of the DOI resolver API to use
--doi-provider string DOI provider
--drime-access-token string API Access token
--drime-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--drime-description string Description of the remote
--drime-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
--drime-hard-delete Delete files permanently rather than putting them into the trash
--drime-list-chunk int Number of items to list in each call (default 1000)
--drime-root-folder-id string ID of the root folder
--drime-upload-concurrency int Concurrency for multipart uploads and copies (default 4)
--drime-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--drime-workspace-id string Account ID
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs
--drive-auth-owner-only Only consider files owned by the authenticated user

@@ -527,6 +539,7 @@ Backend-only flags (these can be set in the config file also).
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs
--drive-keep-revision-forever Keep new head revision of each file forever
--drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
--drive-metadata-enforce-expansive-access Whether the request should enforce expansive access rules
--drive-metadata-labels Bits Control whether labels should be read or written in metadata (default off)
--drive-metadata-owner Bits Control whether owner should be read or written in metadata (default read)
--drive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off)

@@ -595,6 +608,17 @@ Backend-only flags (these can be set in the config file also).
--filelu-description string Description of the remote
--filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
--filelu-key string Your FileLu Rclone key from My Account
--filen-api-key string API Key for your Filen account (obscured)
--filen-auth-version string Authentication Version (internal use only)
--filen-base-folder-uuid string UUID of Account Root Directory (internal use only)
--filen-description string Description of the remote
--filen-email string Email of your Filen account
--filen-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filen-master-keys string Master Keys (internal use only)
--filen-password string Password of your Filen account (obscured)
--filen-private-key string Private RSA Key (internal use only)
--filen-public-key string Public RSA Key (internal use only)
--filen-upload-concurrency int Concurrency for chunked uploads (default 16)
--filescom-api-key string The API key used to authenticate with Files.com
--filescom-description string Description of the remote
--filescom-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)

@@ -638,7 +662,7 @@ Backend-only flags (these can be set in the config file also).
--gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
---gcs-endpoint string Endpoint for the service
+--gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it

@@ -727,6 +751,11 @@ Backend-only flags (these can be set in the config file also).
--internetarchive-item-metadata stringArray Metadata to be set on the IA item, this is different from file-level metadata that can be set using --metadata-set
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
--internxt-description string Description of the remote
--internxt-email string Email of your Internxt account
--internxt-encoding Encoding The encoding for the backend (default Slash,BackSlash,CrLf,RightPeriod,InvalidUtf8,Dot)
--internxt-pass string Password (obscured)
--internxt-skip-hash-validation Skip hash validation when downloading files (default true)
--jottacloud-auth-url string Auth server URL
--jottacloud-client-credentials Use client credentials OAuth flow
--jottacloud-client-id string OAuth Client Id

@@ -789,6 +818,7 @@ Backend-only flags (these can be set in the config file also).
--mega-use-https Use HTTPS for transfers
--mega-user string User name
--memory-description string Description of the remote
--memory-discard If set all writes will be discarded and reads will return an error
--netstorage-account string Set the NetStorage account name
--netstorage-description string Description of the remote
--netstorage-host string Domain+path of NetStorage host to connect to

@@ -964,6 +994,10 @@ Backend-only flags (these can be set in the config file also).
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to
--s3-requester-pays Enables requester pays option when interacting with S3 bucket
--s3-role-arn string ARN of the IAM role to assume
--s3-role-external-id string External ID for assumed role
--s3-role-session-duration string Session duration for assumed role
--s3-role-session-name string Session name for assumed role
--s3-sdk-log-mode Bits Set to debug the SDK (default Off)
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3

@@ -1047,6 +1081,16 @@ Backend-only flags (these can be set in the config file also).
--sftp-user string SSH username (default "$USER")
--sftp-xxh128sum-command string The command used to read XXH128 hashes
--sftp-xxh3sum-command string The command used to read XXH3 hashes
--shade-api-key string An API key for your account
--shade-chunk-size SizeSuffix Chunk size to use for uploading (default 64Mi)
--shade-description string Description of the remote
--shade-drive-id string The ID of your drive, see this in the drive settings. Individual rclone configs must be made per drive
--shade-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--shade-endpoint string Endpoint for the service
--shade-max-upload-parts int Maximum amount of parts in a multipart upload (default 10000)
--shade-token string JWT Token for performing Shade FS operations. Don't set this value - rclone will set it automatically
--shade-token-expiry string JWT Token Expiration time. Don't set this value - rclone will set it automatically
--shade-upload-concurrency int Concurrency for multipart uploads and copies. This is the number of chunks of the same file that are uploaded concurrently for multipart uploads and copies (default 4)
--sharefile-auth-url string Auth server URL
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-client-credentials Use client credentials OAuth flow
@@ -785,9 +785,14 @@ Properties:

#### --gcs-endpoint

-Endpoint for the service.
+Custom endpoint for the storage API. Leave blank to use the provider default.

-Leave blank normally.
+When using a custom endpoint that includes a subpath (e.g. example.org/custom/endpoint),
+the subpath will be ignored during upload operations due to a limitation in the
+underlying Google API Go client library.
+Download and listing operations will work correctly with the full endpoint path.
+If you require subpath support for uploads, avoid using subpaths in your custom
+endpoint configuration.

Properties:

@@ -795,6 +800,13 @@ Properties:
- Env Var: RCLONE_GCS_ENDPOINT
- Type: string
- Required: false
- Examples:
- "storage.example.org"
- Specify a custom endpoint
- "storage.example.org:4443"
- Specifying a custom endpoint with port
- "storage.example.org:4443/gcs/api"
- Specifying a subpath, see the note, uploads won't use the custom path!

#### --gcs-encoding
@@ -70,6 +70,30 @@ set](/overview/#restricted-characters).

Here are the Advanced options specific to memory (In memory object storage system.).

#### --memory-discard

If set all writes will be discarded and reads will return an error

If set then when files are uploaded the contents will not be saved. The
files will appear to have been uploaded but will give an error on
read. Files will have their MD5 sum calculated on upload which takes
very little CPU time and allows the transfers to be checked.

This can be useful for testing performance.

Probably most easily used by using the connection string syntax:

    :memory,discard:bucket

Properties:

- Config: discard
- Env Var: RCLONE_MEMORY_DISCARD
- Type: bool
- Default: false
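For example, the connection string above gives a rough test of upload-path throughput with nothing actually stored (`mybucket` and the source path are arbitrary placeholders):

```console
rclone copy /path/to/testdata :memory,discard:mybucket -P
```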

#### --memory-description

Description of the remote.

@@ -788,7 +788,7 @@ This is why this flag is not set as the default.

As a rule of thumb if nearly all of your data is under rclone's root
directory (the `root/directory` in `onedrive:root/directory`) then
-using this flag will be be a big performance win. If your data is
+using this flag will be a big performance win. If your data is
mostly not under the root then using this flag will be a big
performance loss.

@@ -995,7 +995,7 @@ Here are the possible system metadata items for the onedrive backend.
| content-type | The MIME type of the file. | string | text/plain | **Y** |
| created-by-display-name | Display name of the user that created the item. | string | John Doe | **Y** |
| created-by-id | ID of the user that created the item. | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | **Y** |
-| description | A short description of the file. Max 1024 characters. Only supported for OneDrive Personal. | string | Contract for signing | N |
+| description | A short description of the file. Max 1024 characters. No longer supported by Microsoft. | string | Contract for signing | N |
| id | The unique identifier of the item within OneDrive. | string | 01BYE5RZ6QN3ZWBTUFOFD3GSPGOHDJD36K | **Y** |
| last-modified-by-display-name | Display name of the user that last modified the item. | string | John Doe | **Y** |
| last-modified-by-id | ID of the user that last modified the item. | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | **Y** |
@@ -1372,7 +1372,7 @@ rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=m
rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'
```

-The vfsOpt are as described in options/get and can be seen in the the
+The vfsOpt are as described in options/get and can be seen in the
"vfs" section when running and the mountOpt can be seen in the "mount" section:

```console

@@ -1703,6 +1703,40 @@ See the [hashsum](/commands/rclone_hashsum/) command for more information on the

**Authentication is required for this call.**

### operations/hashsumfile: Produces a hash for a single file. {#operations-hashsumfile}

Produces a hash for a single file using the hash named.

This takes the following parameters:

- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "file.txt"
- hashType - type of hash to be used
- download - check by downloading rather than with hash (boolean)
- base64 - output the hashes in base64 rather than hex (boolean)

If you supply the download flag, it will download the data from the
remote and create the hash on the fly. This can be useful for remotes
that don't support the given hash or if you really want to read all
the data.

Returns:

- hash - hash for the file
- hashType - type of hash used

Example:

    $ rclone rc --loopback operations/hashsumfile fs=/ remote=/bin/bash hashType=MD5 download=true base64=true
    {
        "hashType": "md5",
        "hash": "MDMw-fG2YXs7Uz5Nz-H68A=="
    }

See the [hashsum](/commands/rclone_hashsum/) command for more information on the above.

**Authentication is required for this call.**
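The same call can be made over HTTP against a running rc server (a sketch, assuming a server started with `rclone rcd` on the default port; the user, password, and paths are placeholders — rc parameters may be sent as a JSON POST body):

```console
curl -u user:pass -H "Content-Type: application/json" \
  -d '{"fs": "/", "remote": "bin/bash", "hashType": "MD5"}' \
  http://localhost:5572/operations/hashsumfile
```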

### operations/list: List the given remote and path in JSON format {#operations-list}

This takes the following parameters:

@@ -906,7 +906,7 @@ all the files to be uploaded as multipart.
<!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
### Standard options

-Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other).
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, BizflyCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other).

#### --s3-provider

@@ -925,6 +925,8 @@ Properties:
- Alibaba Cloud Object Storage System (OSS) formerly Aliyun
- "ArvanCloud"
- Arvan Cloud Object Storage (AOS)
- "BizflyCloud"
- Bizfly Cloud Simple Storage
- "Ceph"
- Ceph Object Storage
- "ChinaMobile"

@@ -1066,7 +1068,7 @@ Properties:

- Config: region
- Env Var: RCLONE_S3_REGION
-- Provider: AWS,Ceph,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,LyveCloud,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Scaleway,SeaweedFS,Selectel,Servercore,StackPath,Synology,Wasabi,Zata,Other
+- Provider: AWS,BizflyCloud,Ceph,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,LyveCloud,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Scaleway,SeaweedFS,Selectel,Servercore,StackPath,Synology,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:

@@ -1175,6 +1177,12 @@ Properties:
- AWS GovCloud (US) Region.
- Needs location constraint us-gov-west-1.
- Provider: AWS
- "hn"
- Ha Noi
- Provider: BizflyCloud
- "hcm"
- Ho Chi Minh
- Provider: BizflyCloud
- ""
- Use this if unsure.
- Will use v4 signatures and an empty region.
@@ -1446,12 +1454,21 @@ Properties:
- "ru-1"
- St. Petersburg
- Provider: Selectel,Servercore
- "gis-1"
- Moscow
- Provider: Servercore
- "ru-3"
- St. Petersburg
- Provider: Selectel
- "ru-7"
- Moscow
-- Provider: Servercore
+- Provider: Selectel,Servercore
- "gis-1"
- Moscow
- Provider: Selectel,Servercore
- "kz-1"
- Kazakhstan
- Provider: Selectel
- "uz-2"
- Uzbekistan
- Provider: Selectel
- "uz-2"
- Tashkent, Uzbekistan
- Provider: Servercore

@@ -1487,7 +1504,7 @@ Properties:

- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
-- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,FlashBlade,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Rclone,Scaleway,SeaweedFS,Selectel,Servercore,SpectraLogic,StackPath,Storj,Synology,TencentCOS,Wasabi,Zata,Other
+- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,FlashBlade,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Rclone,Scaleway,SeaweedFS,Selectel,Servercore,SpectraLogic,StackPath,Storj,Synology,TencentCOS,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:

@@ -1573,6 +1590,12 @@ Properties:
- "s3.ir-tbz-sh1.arvanstorage.ir"
- Tabriz Iran (Shahriar)
- Provider: ArvanCloud
- "hn.ss.bfcplatform.vn"
- Hanoi endpoint
- Provider: BizflyCloud
- "hcm.ss.bfcplatform.vn"
- Ho Chi Minh endpoint
- Provider: BizflyCloud
- "eos-wuxi-1.cmecloud.cn"
- The default endpoint - a good choice if you are unsure.
- East China (Suzhou)
@@ -1979,67 +2002,67 @@ Properties:
|
||||
- Iran
|
||||
- Provider: Liara
|
||||
- "nl-ams-1.linodeobjects.com"
|
||||
- Amsterdam (Netherlands), nl-ams-1
|
||||
- Amsterdam, NL (nl-ams-1)
|
||||
- Provider: Linode
|
||||
- "us-southeast-1.linodeobjects.com"
|
||||
- Atlanta, GA (USA), us-southeast-1
|
||||
- Atlanta, GA, US (us-southeast-1)
|
||||
- Provider: Linode
|
||||
- "in-maa-1.linodeobjects.com"
|
||||
- Chennai (India), in-maa-1
|
||||
- Chennai, IN (in-maa-1)
|
||||
- Provider: Linode
|
||||
- "us-ord-1.linodeobjects.com"
|
||||
- Chicago, IL (USA), us-ord-1
|
||||
- Chicago, IL, US (us-ord-1)
|
||||
- Provider: Linode
|
||||
- "eu-central-1.linodeobjects.com"
- Frankfurt (Germany), eu-central-1
- Frankfurt, DE (eu-central-1)
- Provider: Linode
- "id-cgk-1.linodeobjects.com"
- Jakarta (Indonesia), id-cgk-1
- Jakarta, ID (id-cgk-1)
- Provider: Linode
- "gb-lon-1.linodeobjects.com"
- London 2 (Great Britain), gb-lon-1
- London 2, UK (gb-lon-1)
- Provider: Linode
- "us-lax-1.linodeobjects.com"
- Los Angeles, CA (USA), us-lax-1
- Los Angeles, CA, US (us-lax-1)
- Provider: Linode
- "es-mad-1.linodeobjects.com"
- Madrid (Spain), es-mad-1
- Provider: Linode
- "au-mel-1.linodeobjects.com"
- Melbourne (Australia), au-mel-1
- Madrid, ES (es-mad-1)
- Provider: Linode
- "us-mia-1.linodeobjects.com"
- Miami, FL (USA), us-mia-1
- Miami, FL, US (us-mia-1)
- Provider: Linode
- "it-mil-1.linodeobjects.com"
- Milan (Italy), it-mil-1
- Milan, IT (it-mil-1)
- Provider: Linode
- "us-east-1.linodeobjects.com"
- Newark, NJ (USA), us-east-1
- Newark, NJ, US (us-east-1)
- Provider: Linode
- "jp-osa-1.linodeobjects.com"
- Osaka (Japan), jp-osa-1
- Osaka, JP (jp-osa-1)
- Provider: Linode
- "fr-par-1.linodeobjects.com"
- Paris (France), fr-par-1
- Paris, FR (fr-par-1)
- Provider: Linode
- "br-gru-1.linodeobjects.com"
- São Paulo (Brazil), br-gru-1
- Sao Paulo, BR (br-gru-1)
- Provider: Linode
- "us-sea-1.linodeobjects.com"
- Seattle, WA (USA), us-sea-1
- Seattle, WA, US (us-sea-1)
- Provider: Linode
- "ap-south-1.linodeobjects.com"
- Singapore, ap-south-1
- Singapore, SG (ap-south-1)
- Provider: Linode
- "sg-sin-1.linodeobjects.com"
- Singapore 2, sg-sin-1
- Singapore 2, SG (sg-sin-1)
- Provider: Linode
- "se-sto-1.linodeobjects.com"
- Stockholm (Sweden), se-sto-1
- Stockholm, SE (se-sto-1)
- Provider: Linode
- "us-iad-1.linodeobjects.com"
- Washington, DC (USA), us-iad-1
- "jp-tyo-1.linodeobjects.com"
- Tokyo 3, JP (jp-tyo-1)
- Provider: Linode
- "us-iad-10.linodeobjects.com"
- Washington, DC, US (us-iad-10)
- Provider: Linode
- "s3.us-west-1.{account_name}.lyve.seagate.com"
- US West 1 - California
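Any of the endpoints above can be wired into an s3 remote via the `endpoint` option; a minimal sketch (the remote name `linode` and the credential placeholders are assumptions):

```console
# Create an s3 remote pointing at Linode's Frankfurt endpoint, then list buckets.
rclone config create linode s3 provider=Linode \
    access_key_id=XXX secret_access_key=YYY \
    endpoint=eu-central-1.linodeobjects.com
rclone lsd linode:
```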
@@ -2243,13 +2266,25 @@ Properties:

- SeaweedFS S3 localhost
- Provider: SeaweedFS
- "s3.ru-1.storage.selcloud.ru"
- Saint Petersburg
- St. Petersburg
- Provider: Selectel
- "s3.ru-3.storage.selcloud.ru"
- St. Petersburg
- Provider: Selectel
- "s3.ru-7.storage.selcloud.ru"
- Moscow
- Provider: Selectel,Servercore
- "s3.gis-1.storage.selcloud.ru"
- Moscow
- Provider: Servercore
- "s3.ru-7.storage.selcloud.ru"
- Moscow
- Provider: Selectel,Servercore
- "s3.kz-1.storage.selcloud.ru"
- Kazakhstan
- Provider: Selectel
- "s3.uz-2.storage.selcloud.ru"
- Uzbekistan
- Provider: Selectel
- "s3.ru-1.storage.selcloud.ru"
- Saint Petersburg
- Provider: Servercore
- "s3.uz-2.srvstorage.uz"
- Tashkent, Uzbekistan
@@ -2775,36 +2810,36 @@ Properties:

- Config: acl
- Env Var: RCLONE_S3_ACL
- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:
- "private"
- Owner gets FULL_CONTROL.
- No one else has access rights (default).
- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,Wasabi,Zata,Other
- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,Wasabi,Zata,Other
- "public-read"
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ access.
- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "public-read-write"
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ and WRITE access.
- Granting this on a bucket is generally not recommended.
- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "authenticated-read"
- Owner gets FULL_CONTROL.
- The AuthenticatedUsers group gets READ access.
- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "bucket-owner-read"
- Object owner gets FULL_CONTROL.
- Bucket owner gets READ access.
- If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
- Provider: AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "bucket-owner-full-control"
- Both the object owner and the bucket owner get FULL_CONTROL over the object.
- If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
- Provider: AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "private"
- Owner gets FULL_CONTROL.
- No one else has access rights (default).
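The canned ACLs listed above are applied as objects are created; a hedged usage sketch (the remote name `s3remote` and the bucket path are placeholders):

```console
# Apply a canned ACL per upload via the flag...
rclone copy ./report.pdf s3remote:my-bucket/docs --s3-acl public-read
# ...or via the environment variable shown in the Properties above.
RCLONE_S3_ACL=public-read rclone copy ./report.pdf s3remote:my-bucket/docs
```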
@@ -2969,7 +3004,7 @@ Properties:

### Advanced options

Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other).
Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, BizflyCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other).

#### --s3-bucket-acl

@@ -2988,7 +3023,7 @@ Properties:

- Config: bucket_acl
- Env Var: RCLONE_S3_BUCKET_ACL
- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Servercore,StackPath,TencentCOS,Wasabi,Zata,Other
- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Servercore,StackPath,TencentCOS,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:
@@ -3242,6 +3277,58 @@ Properties:

- Type: string
- Required: false

#### --s3-role-arn

ARN of the IAM role to assume.

Leave blank if not using assume role.

Properties:

- Config: role_arn
- Env Var: RCLONE_S3_ROLE_ARN
- Type: string
- Required: false

#### --s3-role-session-name

Session name for assumed role.

If empty, a session name will be generated automatically.

Properties:

- Config: role_session_name
- Env Var: RCLONE_S3_ROLE_SESSION_NAME
- Type: string
- Required: false

#### --s3-role-session-duration

Session duration for assumed role.

If empty, the default session duration will be used.

Properties:

- Config: role_session_duration
- Env Var: RCLONE_S3_ROLE_SESSION_DURATION
- Type: string
- Required: false

#### --s3-role-external-id

External ID for assumed role.

Leave blank if not using an external ID.

Properties:

- Config: role_external_id
- Env Var: RCLONE_S3_ROLE_EXTERNAL_ID
- Type: string
- Required: false
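The role options above combine to make rclone assume an IAM role for its S3 calls; a sketch using the environment variables from the Properties lists (the ARN, remote name, and duration format are assumptions):

```console
# Assume a role for all S3 operations in this shell session.
export RCLONE_S3_ROLE_ARN=arn:aws:iam::123456789012:role/rclone-backup
export RCLONE_S3_ROLE_SESSION_NAME=rclone-nightly
export RCLONE_S3_ROLE_SESSION_DURATION=1h
rclone ls s3remote:my-bucket
```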
#### --s3-upload-concurrency

Concurrency for multipart uploads and copies.
@@ -1,3 +1,9 @@

---
title: "Shade"
description: "Shade Backend Docs"
versionIntroduced: "v1.73"
---

# {{< icon "fa fa-moon" >}} Shade

This is a backend for the [Shade](https://shade.inc/) platform
@@ -115,7 +121,7 @@ Properties:

#### --shade-api-key

An API key for your account. You can find this under Settings > API Keys
An API key for your account.

Properties:
@@ -159,6 +165,50 @@ Properties:

- Type: SizeSuffix
- Default: 64Mi

#### --shade-upload-concurrency

Concurrency for multipart uploads and copies. This is the number of chunks of the same file that are uploaded concurrently for multipart uploads and copies.

Properties:

- Config: upload_concurrency
- Env Var: RCLONE_SHADE_UPLOAD_CONCURRENCY
- Type: int
- Default: 4

#### --shade-max-upload-parts

Maximum amount of parts in a multipart upload.

Properties:

- Config: max_upload_parts
- Env Var: RCLONE_SHADE_MAX_UPLOAD_PARTS
- Type: int
- Default: 10000

#### --shade-token

JWT Token for performing Shade FS operations. Don't set this value - rclone will set it automatically

Properties:

- Config: token
- Env Var: RCLONE_SHADE_TOKEN
- Type: string
- Required: false

#### --shade-token-expiry

JWT Token Expiration time. Don't set this value - rclone will set it automatically

Properties:

- Config: token_expiry
- Env Var: RCLONE_SHADE_TOKEN_EXPIRY
- Type: string
- Required: false

#### --shade-encoding

The encoding for the backend.
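The Shade upload options above can also be tuned per invocation; a hedged example (the remote path is a placeholder):

```console
# Upload with more parallel chunks than the default of 4.
rclone copy ./renders shade:projects/renders --shade-upload-concurrency 8
```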
@@ -564,7 +564,7 @@ Properties:

Above this size files will be chunked.

Above this size files will be chunked into a a `_segments` container
Above this size files will be chunked into a `_segments` container
or a `.file-segments` directory. (See the `use_segments_container` option
for more info). Default for this is 5 GiB which is its maximum value, which
means only files above this size will be chunked.
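For the Swift chunking threshold described above, the corresponding flag can lower it below the 5 GiB default; a sketch (the remote name is a placeholder):

```console
# Segment anything over 1 GiB instead of the default 5 GiB.
rclone copy ./backup.tar swift:archive --swift-chunk-size 1G
```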
@@ -218,12 +218,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e

```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20251121
// Output: stories/The Quick Brown Fox!-20260130
```

```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2025-11-21 0508PM
// Output: stories/The Quick Brown Fox!-2026-01-30 0852PM
```

```console