mirror of https://github.com/rclone/rclone.git synced 2025-12-19 17:53:16 +00:00

Compare commits


57 Commits

Author SHA1 Message Date
Nick Craig-Wood
b141a553be fshttp: add --dump curl for dumping HTTP requests as curl commands 2025-12-18 17:53:24 +00:00
Nick Craig-Wood
f81cd7d279 serve s3: make errors in --s3-auth-key fatal - fixes #9044
Previously, if auth keys were provided without a comma, rclone would only log an INFO message, which could mean it went on to serve without any auth.

The parsing for environment variables was changed in v1.70.0 to make
them work properly with multiple inputs. This means the input is
treated like a mini CSV file, which works well except in this case
where the input itself contains commas. This meant `user,auth` without
quotes was treated as two key pairs, `user` and `auth`. The correct
syntax is `"user,auth"`. This updates the documentation accordingly.
2025-12-18 10:17:41 +00:00
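A minimal, illustrative sketch (not rclone's actual parser) of the CSV-style splitting described above, using Go's encoding/csv to show why `user,auth` needs quoting:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

func main() {
	for _, input := range []string{`user,auth`, `"user,auth"`} {
		fields, err := csv.NewReader(strings.NewReader(input)).Read()
		if err != nil {
			fmt.Println(err)
			continue
		}
		// `user,auth`   -> 2 fields ("user", "auth"): two separate key pairs
		// `"user,auth"` -> 1 field  ("user,auth"):    one access-key,secret pair
		fmt.Printf("%s -> %d field(s): %q\n", input, len(fields), fields)
	}
}
```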
Nick Craig-Wood
1a0a4628d7 Add masrlinu to contributors 2025-12-18 10:17:41 +00:00
masrlinu
c10a4d465c pcloud: add support for real-time updates in mount
Co-authored-by: masrlinu <5259918+masrlinu@users.noreply.github.com>
2025-12-17 15:13:25 +00:00
Nick Craig-Wood
3a6e07a613 memory: add --memory-discard flag for speed testing - fixes #9037 2025-12-17 10:21:12 +00:00
Nick Craig-Wood
c36f99d343 Add vyv03354 to contributors 2025-12-17 10:21:12 +00:00
jhasse-shade
3e21a7261b shade: Fix VFS test issues 2025-12-16 17:21:22 +00:00
vyv03354
fd439fab62 docs: mention use of ListR feature in ls docs 2025-12-15 09:11:00 +01:00
dependabot[bot]
976aa6b416 build: bump actions/download-artifact from 6 to 7
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 6 to 7.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v6...v7)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '7'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-13 11:01:27 +01:00
dependabot[bot]
b3a0383ca3 build: bump actions/upload-artifact from 5 to 6
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 5 to 6.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-13 11:00:59 +01:00
dependabot[bot]
c13f129339 build: bump actions/cache from 4 to 5
Bumps [actions/cache](https://github.com/actions/cache) from 4 to 5.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-12 14:52:57 +01:00
vyv03354
748d8c8957 docs: reflects the fact that pCloud supports ListR 2025-12-11 20:32:53 +01:00
jbagwell-akamai
4d379efcbb S3: Linode: updated endpoints to use ISO 3166-1 alpha-2 standard
Use the ISO 3166-1 alpha-2 standard for country codes, with the region short name in parentheses instead of separated by another comma.
2025-12-11 17:20:34 +00:00
dougal
e5e6a4b5ae sync: fix error propagation in tests (#9025)
This commit fixes the sync transform test IO errors by resetting the
error flag, which stops subsequent tests from failing.
2025-12-10 15:43:22 +00:00
Nick Craig-Wood
df18e8c55b Changelog updates from Version v1.72.1 2025-12-10 15:31:48 +00:00
Nick Craig-Wood
f4e17d8b0b s3: add more regions for Selectel 2025-12-10 15:31:48 +00:00
Nick Craig-Wood
e5c69511bc Add jhasse-shade to contributors 2025-12-10 15:31:48 +00:00
jhasse-shade
175d4bc553 Add Shade backend 2025-12-09 17:08:57 +00:00
Nick Craig-Wood
4851f1796c log: fix backtrace not going to the --log-file #9014
Before the log re-organisation in:

8d353039a6 log: add log rotation to --log-file

rclone would write any backtraces to the --log-file which was very
convenient for users.

This got accidentally disabled due to a typo which meant backtraces
started going to stderr even if --log-file was supplied.

This fixes the problem.
2025-12-09 16:35:07 +00:00
Nick Craig-Wood
4ff8899b2c build: fix lint warning after linter upgrade 2025-12-09 16:15:17 +00:00
Nick Craig-Wood
8f29a0b0a1 Add Jonas Tingeborn to contributors 2025-12-09 16:15:17 +00:00
Nick Craig-Wood
8b0e76e53b Add Tingsong Xu to contributors 2025-12-09 16:15:17 +00:00
Jonas Tingeborn
233fef5c4d configfile: add piped config support - fixes #9012 2025-12-08 18:42:17 +00:00
Tingsong Xu
b9586c3e03 fs/log: fix PID not included in JSON log output
When using `--log-format pid,json`, the PID was not being added to the JSON log output. This fix adds PID support to JSON logging.
2025-12-08 18:41:58 +00:00
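For illustration only (this is not rclone's logging code): including the PID in JSON log output with Go's standard log/slog package looks roughly like this:

```go
package main

import (
	"log/slog"
	"os"
)

func main() {
	// Attach the process ID once so every JSON record carries a "pid" field.
	logger := slog.New(slog.NewJSONHandler(os.Stderr, nil)).With("pid", os.Getpid())
	logger.Info("transfer complete", "objects", 3)
	// e.g. {"time":"...","level":"INFO","msg":"transfer complete","pid":12345,"objects":3}
}
```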
Nick Craig-Wood
0dc0ab1330 build: adjust lint rules to exclude new errors from linter update 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
a6bbdb35a0 proxy: fix error handling in tests spotted by the linter 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
b33cb77b6c Add Johannes Rothe to contributors 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
d51322bb5f Add Leo to contributors 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
e718ab6091 Add Vladislav Tropnikov to contributors 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
0a9e6e130f Add Cliff Frey to contributors 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
3358b9049c Add vicerace to contributors 2025-12-08 14:45:06 +00:00
DianaNites
847734d421 b2: Fix listing root buckets with unrestricted API key
Fixes previous pull request #8978

An oversight meant that unrestricted API keys
never called b2_list_buckets,
meaning the root remote could not be listed.

The call is now made when there are no allowed buckets,
which indicates an unrestricted API key.

Fixes #9007
2025-12-04 15:55:17 +00:00
Johannes Rothe
f7b255d4ec googlecloudstorage: improve endpoint parameter docs
When specifying a custom endpoint with a subpath, there is a limitation
in the Google Cloud Storage integration: the subpath is ignored during
upload operations. For example, with the custom endpoint
"example.org/custom/endpoint", the /custom/endpoint part is not used
on upload.

As this is most likely an issue with the underlying API client, there is
no way to fix this in rclone. By extending the documentation, rclone
users are at least made aware of this limitation.

Related forum thread: https://forum.rclone.org/t/googlecloudstorage-custom-endpoint-subpath-removed-for-upload/53059
2025-12-01 19:04:02 +00:00
Leo
24c752ed9e serve webdav: implement download-directory-as-zip
Signed-off-by: Leo <i@hardrain980.com>
2025-12-01 15:42:16 +00:00
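Not the rclone implementation, but a minimal sketch of the general technique: streaming a directory as a zip archive over HTTP with Go's archive/zip (the handler name, directory, and port below are made up for the example):

```go
package main

import (
	"archive/zip"
	"io"
	"net/http"
	"os"
	"path/filepath"
)

// zipDirHandler streams every regular file under dir into a zip archive
// written directly to the HTTP response, so nothing is buffered on disk.
func zipDirHandler(dir string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/zip")
		w.Header().Set("Content-Disposition", `attachment; filename="dir.zip"`)
		zw := zip.NewWriter(w)
		defer zw.Close()
		_ = filepath.WalkDir(dir, func(path string, d os.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, err := filepath.Rel(dir, path)
			if err != nil {
				return err
			}
			entry, err := zw.Create(filepath.ToSlash(rel))
			if err != nil {
				return err
			}
			f, err := os.Open(path)
			if err != nil {
				return err
			}
			defer f.Close()
			_, err = io.Copy(entry, f)
			return err
		})
	}
}

func main() {
	http.Handle("/download.zip", zipDirHandler("."))
	_ = http.ListenAndServe(":8080", nil)
}
```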
Vladislav Tropnikov
a99d155fd4 s3: add the ability to specify an IAM role for cross-account interaction 2025-11-29 13:53:00 +00:00
Cliff Frey
f72b32b470 azureblob: add metadata and tags support across upload and copy paths
This change adds first-class metadata support to the Azure Blob backend,
including headers, user metadata, tags, and modtime overrides, and wires
it through uploads and server-side copies.

There is a behavior change in that rclone will now set the "mtime"
custom metadata when doing server side copies to azure and the
`--metadata` argument is given.

- Map standard headers: cache-control, content-disposition, content-encoding,
  content-language, content-type to corresponding x-ms-blob-* HTTP headers.
- Map user metadata: any non-reserved keys (excluding x-ms-*) are sent as
  blob user metadata. Keys are normalized to lowercase for consistency.
- Support tags: parse `x-ms-tags` as a comma-separated list of key=value
  pairs and apply them on uploads and copies.
- Support mtime override: accept `mtime` in metadata (RFC3339/RFC3339Nano)
  to override the stored modtime persisted in user metadata.
2025-11-27 16:58:07 +00:00
vicerace
9be7f99bf8 refactor: use strings.Cut to simplify code
Signed-off-by: vicerace <vicerace@sohu.com>
2025-11-27 14:42:11 +00:00
Nick Craig-Wood
6858bf242e docs: note where a provider has an S3 compatible alternative 2025-11-26 12:22:48 +00:00
Nick Craig-Wood
e8c6867e4c Add Shade as sponsor 2025-11-26 12:22:48 +00:00
Nick Craig-Wood
50fbd6b049 Add Duncan Smart to contributors 2025-11-26 12:22:48 +00:00
Nick Craig-Wood
0783cab952 Add Diana to contributors 2025-11-26 12:22:48 +00:00
Duncan Smart
886ac7af1d docs: Clarify OAuth scopes for readonly Google Drive access 2025-11-24 15:58:53 +00:00
Diana
3c40238f02 b2: support authentication with new bucket restricted application keys
Backblaze has updated its b2_authorize_account API endpoint: newly created
application keys are now "multi-bucket" keys, capable of being limited to
multiple buckets. These keys can only be used with the v4 endpoint, not v1,
which returns an HTTP 400.

This commit switches authorization to the v4 endpoint, allowing such keys to
work with any of the allowed buckets.

With multi-bucket keys, missing restricted buckets can be non-fatal.

Supports listing root with multi-bucket API keys
2025-11-24 15:46:41 +00:00
Nick Craig-Wood
46ca0dd7fe docs: update sponsor logos 2025-11-24 14:58:33 +00:00
Nick Craig-Wood
2e968e7ce0 docs: fix lint error in changelog 2025-11-21 18:23:16 +00:00
Nick Craig-Wood
1886c552db Start v1.73.0-DEV development 2025-11-21 18:23:07 +00:00
Nick Craig-Wood
38ab3dd5b1 Version v1.72.0 2025-11-21 17:10:17 +00:00
Nick Craig-Wood
1d02e1219a rc: fix formatting in job/batch 2025-11-21 17:06:18 +00:00
Nick Craig-Wood
035d3f344c test speed: fix formatting of help 2025-11-21 17:02:45 +00:00
Nick Craig-Wood
7d45aee70f docs: update sponsor logos 2025-11-21 12:48:29 +00:00
dependabot[bot]
f30789180d build: bump actions/checkout from 5 to 6
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-20 23:16:24 +00:00
Sean Turner
7cb05a84e9 s3: add multi-part-upload support for If-Match and If-None-Match
#8947 implemented support for the If-Match and If-None-Match headers for S3 PUT
Object requests; however, this support did not extend to multi-part copy and
upload requests. These headers are implemented via inclusion in the
CompleteMultipartUpload request.

This also updates the auto-generated code, which was needed for multipart copy.
2025-11-20 17:31:15 +00:00
Nick Craig-Wood
6d4c625bfb rc: config/unlock: rename parameter to configPassword accept old as well
We accidentally added a non-`camelCase` parameter to the rc
(`config_password`). This fixes it (to `configPassword`) but accepts
the old name too, as it has been in a release.
2025-11-20 16:46:01 +00:00
Nick Craig-Wood
4eccc40168 rc: correct names of parameters in job/list output
These were accidentally committed as snake_case whereas we use
camelCase elsewhere.

This corrects the issue before the first release in v1.72.0
2025-11-20 16:46:01 +00:00
Nick Craig-Wood
e451f9c999 Add Nikolay Kiryanov to contributors 2025-11-20 16:46:01 +00:00
Nikolay Kiryanov
321488441e rc: add executeId to job statuses - fixes #8972 2025-11-20 13:15:22 +00:00
dependabot[bot]
bd99e05ff0 build: bump golang.org/x/crypto from 0.43.0 to 0.45.0 to fix CVE-2025-58181
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-20 13:09:29 +00:00
209 changed files with 47649 additions and 17000 deletions

View File

@@ -95,7 +95,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
fetch-depth: 0
@@ -216,7 +216,7 @@ jobs:
echo "runner-os-version=$ImageOS" >> $GITHUB_OUTPUT
- name: Checkout
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
fetch-depth: 0
@@ -229,7 +229,7 @@ jobs:
cache: false
- name: Cache
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: |
~/go/pkg/mod
@@ -307,7 +307,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
fetch-depth: 0

View File

@@ -52,7 +52,7 @@ jobs:
df -h .
- name: Checkout Repository
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
fetch-depth: 0
@@ -129,7 +129,7 @@ jobs:
- name: Load Go Build Cache for Docker
id: go-cache
uses: actions/cache@v4
uses: actions/cache@v5
with:
key: ${{ runner.os }}-${{ steps.imageos.outputs.result }}-go-${{ env.CACHE_NAME }}-${{ env.PLATFORM }}-${{ hashFiles('**/go.mod') }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
@@ -183,7 +183,7 @@ jobs:
touch "/tmp/digests/${digest#sha256:}"
- name: Upload Image Digest
uses: actions/upload-artifact@v5
uses: actions/upload-artifact@v6
with:
name: digests-${{ env.PLATFORM }}
path: /tmp/digests/*
@@ -198,7 +198,7 @@ jobs:
steps:
- name: Download Image Digests
uses: actions/download-artifact@v6
uses: actions/download-artifact@v7
with:
path: /tmp/digests
pattern: digests-*

View File

@@ -30,7 +30,7 @@ jobs:
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v5
uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Build and publish docker plugin

View File

@@ -17,6 +17,14 @@ linters:
#- prealloc # TODO
- revive
- unconvert
exclusions:
rules:
- linters:
- revive
text: 'var-naming: avoid meaningless package names'
- linters:
- revive
text: 'var-naming: avoid package names that conflict with Go standard library package names'
# Configure checks. Mostly using defaults but with some commented exceptions.
settings:
govet:
@@ -136,6 +144,7 @@ linters:
- name: var-naming
disabled: false
formatters:
enable:
- goimports

MANUAL.html generated (15185): file diff suppressed because it is too large

MANUAL.md generated (17750): file diff suppressed because it is too large

MANUAL.txt generated (7896): file diff suppressed because it is too large

View File

@@ -109,6 +109,7 @@ directories to and from different cloud storage providers.
- Selectel Object Storage [:page_facing_up:](https://rclone.org/s3/#selectel)
- Servercore Object Storage [:page_facing_up:](https://rclone.org/s3/#servercore)
- SFTP [:page_facing_up:](https://rclone.org/sftp/)
- Shade [:page_facing_up:](https://rclone.org/shade/)
- SMB / CIFS [:page_facing_up:](https://rclone.org/smb/)
- Spectra Logic [:page_facing_up:](https://rclone.org/s3/#spectralogic)
- StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)

View File

@@ -21,6 +21,7 @@ This file describes how to make the various kinds of releases
- make doc
- git status - to check for new man pages - git add them
- git commit -a -v -m "Version v1.XX.0"
- make check
- make retag
- git push origin # without --follow-tags so it doesn't push the tag if it fails
- git push --follow-tags origin

View File

@@ -1 +1 @@
v1.72.0
v1.73.0

View File

@@ -55,6 +55,7 @@ import (
_ "github.com/rclone/rclone/backend/s3"
_ "github.com/rclone/rclone/backend/seafile"
_ "github.com/rclone/rclone/backend/sftp"
_ "github.com/rclone/rclone/backend/shade"
_ "github.com/rclone/rclone/backend/sharefile"
_ "github.com/rclone/rclone/backend/sia"
_ "github.com/rclone/rclone/backend/smb"

View File

@@ -86,12 +86,56 @@ var (
metadataMu sync.Mutex
)
// system metadata keys which this backend owns
var systemMetadataInfo = map[string]fs.MetadataHelp{
"cache-control": {
Help: "Cache-Control header",
Type: "string",
Example: "no-cache",
},
"content-disposition": {
Help: "Content-Disposition header",
Type: "string",
Example: "inline",
},
"content-encoding": {
Help: "Content-Encoding header",
Type: "string",
Example: "gzip",
},
"content-language": {
Help: "Content-Language header",
Type: "string",
Example: "en-US",
},
"content-type": {
Help: "Content-Type header",
Type: "string",
Example: "text/plain",
},
"tier": {
Help: "Tier of the object",
Type: "string",
Example: "Hot",
ReadOnly: true,
},
"mtime": {
Help: "Time of last modification, read from rclone metadata",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05.999999999Z07:00",
},
}
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "azureblob",
Description: "Microsoft Azure Blob Storage",
NewFs: NewFs,
MetadataInfo: &fs.MetadataInfo{
System: systemMetadataInfo,
Help: `User metadata is stored as x-ms-meta- keys. Azure metadata keys are case insensitive and are always returned in lower case.`,
},
Options: []fs.Option{{
Name: "account",
Help: `Azure Storage Account Name.
@@ -810,6 +854,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
BucketBased: true,
BucketBasedRootOK: true,
SetTier: true,
@@ -1157,6 +1204,289 @@ func (o *Object) updateMetadataWithModTime(modTime time.Time) {
o.meta[modTimeKey] = modTime.Format(timeFormatOut)
}
// parseXMsTags parses the value of the x-ms-tags header into a map.
// It expects comma-separated key=value pairs. Whitespace around keys and
// values is trimmed. Empty pairs and empty keys are rejected.
func parseXMsTags(s string) (map[string]string, error) {
if strings.TrimSpace(s) == "" {
return map[string]string{}, nil
}
out := make(map[string]string)
parts := strings.Split(s, ",")
for _, p := range parts {
p = strings.TrimSpace(p)
if p == "" {
continue
}
kv := strings.SplitN(p, "=", 2)
if len(kv) != 2 {
return nil, fmt.Errorf("invalid tag %q", p)
}
k := strings.TrimSpace(kv[0])
v := strings.TrimSpace(kv[1])
if k == "" {
return nil, fmt.Errorf("invalid tag key in %q", p)
}
out[k] = v
}
return out, nil
}
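// Illustrative examples (inputs match the tests further below):
//   parseXMsTags("env=dev,team=sync")      -> map[env:dev team:sync], nil
//   parseXMsTags("badpair-without-equals") -> nil, error "invalid tag ..."
//   parseXMsTags("")                       -> empty map, nil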
// mapMetadataToAzure maps a generic metadata map to Azure HTTP headers,
// user metadata, tags and optional modTime override.
// Reserved x-ms-* keys (except x-ms-tags) are ignored for user metadata.
//
// Pass a logger to surface non-fatal parsing issues (e.g. bad mtime).
func mapMetadataToAzure(meta map[string]string, logf func(string, ...any)) (headers blob.HTTPHeaders, userMeta map[string]*string, tags map[string]string, modTime *time.Time, err error) {
if meta == nil {
return headers, nil, nil, nil, nil
}
tmp := make(map[string]string)
for k, v := range meta {
lowerKey := strings.ToLower(k)
switch lowerKey {
case "cache-control":
headers.BlobCacheControl = pString(v)
case "content-disposition":
headers.BlobContentDisposition = pString(v)
case "content-encoding":
headers.BlobContentEncoding = pString(v)
case "content-language":
headers.BlobContentLanguage = pString(v)
case "content-type":
headers.BlobContentType = pString(v)
case "x-ms-tags":
parsed, perr := parseXMsTags(v)
if perr != nil {
return headers, nil, nil, nil, perr
}
// allocate only if there are tags
if len(parsed) > 0 {
tags = parsed
}
case "mtime":
// Accept multiple layouts for tolerance
var parsed time.Time
var pErr error
for _, layout := range []string{time.RFC3339Nano, time.RFC3339, timeFormatOut} {
parsed, pErr = time.Parse(layout, v)
if pErr == nil {
modTime = &parsed
break
}
}
// Log and ignore if unparseable
if modTime == nil && logf != nil {
logf("metadata: couldn't parse mtime %q: %v", v, pErr)
}
case "tier":
// ignore - handled elsewhere
default:
// Filter out other reserved headers so they don't end up as user metadata
if strings.HasPrefix(lowerKey, "x-ms-") {
continue
}
tmp[lowerKey] = v
}
}
userMeta = toAzureMetaPtr(tmp)
return headers, userMeta, tags, modTime, nil
}
// toAzureMetaPtr converts a map[string]string to map[string]*string as used by Azure SDK
func toAzureMetaPtr(in map[string]string) map[string]*string {
if len(in) == 0 {
return nil
}
out := make(map[string]*string, len(in))
for k, v := range in {
vv := v
out[k] = &vv
}
return out
}
// assembleCopyParams prepares headers, metadata and tags for copy operations.
//
// It starts from the source properties, optionally overlays mapped metadata
// from rclone's metadata options, ensures mtime presence when mapping is
// enabled, and returns whether mapping was actually requested (hadMapping).
//
// If includeBaseMeta is true, start user metadata from the source's metadata
// and overlay mapped values. This matches multipart copy commit behavior.
// If false, only include mapped user metadata (no source baseline) which
// matches previous singlepart StartCopyFromURL semantics.
func assembleCopyParams(ctx context.Context, f *Fs, src fs.Object, srcProps *blob.GetPropertiesResponse, includeBaseMeta bool) (headers blob.HTTPHeaders, meta map[string]*string, tags map[string]string, hadMapping bool, err error) {
// Start from source properties
headers = blob.HTTPHeaders{
BlobCacheControl: srcProps.CacheControl,
BlobContentDisposition: srcProps.ContentDisposition,
BlobContentEncoding: srcProps.ContentEncoding,
BlobContentLanguage: srcProps.ContentLanguage,
BlobContentMD5: srcProps.ContentMD5,
BlobContentType: srcProps.ContentType,
}
// Optionally deep copy user metadata pointers from source. Normalise keys to
// lower-case to avoid duplicate x-ms-meta headers when we later inject/overlay
// metadata (Azure treats keys case-insensitively but Go's http.Header will
// join duplicate keys into a comma separated list, which breaks shared-key
// signing).
if includeBaseMeta && len(srcProps.Metadata) > 0 {
meta = make(map[string]*string, len(srcProps.Metadata))
for k, v := range srcProps.Metadata {
if v != nil {
vv := *v
meta[strings.ToLower(k)] = &vv
}
}
}
// Only consider mapping if metadata pipeline is enabled
if fs.GetConfig(ctx).Metadata {
mapped, mapErr := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
if mapErr != nil {
return headers, meta, nil, false, fmt.Errorf("failed to map metadata: %w", mapErr)
}
if mapped != nil {
// Map rclone metadata to Azure shapes
mappedHeaders, userMeta, mappedTags, mappedModTime, herr := mapMetadataToAzure(mapped, func(format string, args ...any) { fs.Debugf(f, format, args...) })
if herr != nil {
return headers, meta, nil, false, fmt.Errorf("metadata mapping: %w", herr)
}
hadMapping = true
// Overlay headers (only non-nil)
if mappedHeaders.BlobCacheControl != nil {
headers.BlobCacheControl = mappedHeaders.BlobCacheControl
}
if mappedHeaders.BlobContentDisposition != nil {
headers.BlobContentDisposition = mappedHeaders.BlobContentDisposition
}
if mappedHeaders.BlobContentEncoding != nil {
headers.BlobContentEncoding = mappedHeaders.BlobContentEncoding
}
if mappedHeaders.BlobContentLanguage != nil {
headers.BlobContentLanguage = mappedHeaders.BlobContentLanguage
}
if mappedHeaders.BlobContentType != nil {
headers.BlobContentType = mappedHeaders.BlobContentType
}
// Overlay user metadata
if len(userMeta) > 0 {
if meta == nil {
meta = make(map[string]*string, len(userMeta))
}
for k, v := range userMeta {
meta[k] = v
}
}
// Apply tags if any
if len(mappedTags) > 0 {
tags = mappedTags
}
// Ensure mtime present using mapped or source time
if _, ok := meta[modTimeKey]; !ok {
when := src.ModTime(ctx)
if mappedModTime != nil {
when = *mappedModTime
}
val := when.Format(time.RFC3339Nano)
if meta == nil {
meta = make(map[string]*string, 1)
}
meta[modTimeKey] = &val
}
// Ensure content-type fallback to source if not set by mapper
if headers.BlobContentType == nil {
headers.BlobContentType = srcProps.ContentType
}
} else {
// Mapping enabled but not provided: ensure mtime present based on source ModTime
if _, ok := meta[modTimeKey]; !ok {
when := src.ModTime(ctx)
val := when.Format(time.RFC3339Nano)
if meta == nil {
meta = make(map[string]*string, 1)
}
meta[modTimeKey] = &val
}
}
}
return headers, meta, tags, hadMapping, nil
}
// applyMappedMetadata applies mapped metadata and headers to the object state for uploads.
//
// It reads `--metadata`, `--metadata-set`, and `--metadata-mapper` outputs via fs.GetMetadataOptions
// and updates o.meta, o.tags and ui.httpHeaders accordingly.
func (o *Object) applyMappedMetadata(ctx context.Context, src fs.ObjectInfo, ui *uploadInfo, options []fs.OpenOption) (modTime time.Time, err error) {
// Start from the source modtime; may be overridden by metadata
modTime = src.ModTime(ctx)
// Fetch mapped metadata if --metadata is enabled
meta, err := fs.GetMetadataOptions(ctx, o.fs, src, options)
if err != nil {
return modTime, err
}
if meta == nil {
// No metadata processing requested
return modTime, nil
}
// Map metadata using common helper
headers, userMeta, tags, mappedModTime, err := mapMetadataToAzure(meta, func(format string, args ...any) { fs.Debugf(o, format, args...) })
if err != nil {
return modTime, err
}
// Merge headers into ui
if headers.BlobCacheControl != nil {
ui.httpHeaders.BlobCacheControl = headers.BlobCacheControl
}
if headers.BlobContentDisposition != nil {
ui.httpHeaders.BlobContentDisposition = headers.BlobContentDisposition
}
if headers.BlobContentEncoding != nil {
ui.httpHeaders.BlobContentEncoding = headers.BlobContentEncoding
}
if headers.BlobContentLanguage != nil {
ui.httpHeaders.BlobContentLanguage = headers.BlobContentLanguage
}
if headers.BlobContentType != nil {
ui.httpHeaders.BlobContentType = headers.BlobContentType
}
// Apply user metadata to o.meta with a single critical section
if len(userMeta) > 0 {
metadataMu.Lock()
if o.meta == nil {
o.meta = make(map[string]string, len(userMeta))
}
for k, v := range userMeta {
if v != nil {
o.meta[k] = *v
}
}
metadataMu.Unlock()
}
// Apply tags
if len(tags) > 0 {
if o.tags == nil {
o.tags = make(map[string]string, len(tags))
}
for k, v := range tags {
o.tags[k] = v
}
}
if mappedModTime != nil {
modTime = *mappedModTime
}
return modTime, nil
}
// Returns whether file is a directory marker or not
func isDirectoryMarker(size int64, metadata map[string]*string, remote string) bool {
// Directory markers are 0 length
@@ -1951,18 +2281,19 @@ func (f *Fs) copyMultipart(ctx context.Context, remote, dstContainer, dstPath st
return nil, err
}
// Convert metadata from source object
// Prepare metadata/headers/tags for destination
// For multipart commit, include base metadata from source then overlay mapped
commitHeaders, commitMeta, commitTags, _, err := assembleCopyParams(ctx, f, src, srcProperties, true)
if err != nil {
return nil, fmt.Errorf("multipart copy: %w", err)
}
// Convert metadata from source or mapper
options := blockblob.CommitBlockListOptions{
Metadata: srcProperties.Metadata,
Tier: parseTier(f.opt.AccessTier),
HTTPHeaders: &blob.HTTPHeaders{
BlobCacheControl: srcProperties.CacheControl,
BlobContentDisposition: srcProperties.ContentDisposition,
BlobContentEncoding: srcProperties.ContentEncoding,
BlobContentLanguage: srcProperties.ContentLanguage,
BlobContentMD5: srcProperties.ContentMD5,
BlobContentType: srcProperties.ContentType,
},
Metadata: commitMeta,
Tags: commitTags,
Tier: parseTier(f.opt.AccessTier),
HTTPHeaders: &commitHeaders,
}
// Finalise the upload session
@@ -1993,10 +2324,36 @@ func (f *Fs) copySinglepart(ctx context.Context, remote, dstContainer, dstPath s
return nil, fmt.Errorf("single part copy: source auth: %w", err)
}
// Start the copy
// Prepare mapped metadata/tags/headers if requested
options := blob.StartCopyFromURLOptions{
Tier: parseTier(f.opt.AccessTier),
}
var postHeaders *blob.HTTPHeaders
// Read source properties and assemble params; this also handles the case when mapping is disabled
srcProps, err := src.readMetaDataAlways(ctx)
if err != nil {
return nil, fmt.Errorf("single part copy: read source properties: %w", err)
}
// For singlepart copy, do not include base metadata from source in StartCopyFromURL
headers, meta, tags, hadMapping, aerr := assembleCopyParams(ctx, f, src, srcProps, false)
if aerr != nil {
return nil, fmt.Errorf("single part copy: %w", aerr)
}
// Apply tags and post-copy headers only when mapping requested changes
if len(tags) > 0 {
options.BlobTags = make(map[string]string, len(tags))
for k, v := range tags {
options.BlobTags[k] = v
}
}
if hadMapping {
// Only set metadata explicitly when mapping was requested; otherwise
// let the service copy source metadata (including mtime) automatically.
if len(meta) > 0 {
options.Metadata = meta
}
postHeaders = &headers
}
var startCopy blob.StartCopyFromURLResponse
err = f.pacer.Call(func() (bool, error) {
startCopy, err = dstBlobSVC.StartCopyFromURL(ctx, srcURL, &options)
@@ -2026,6 +2383,16 @@ func (f *Fs) copySinglepart(ctx context.Context, remote, dstContainer, dstPath s
pollTime = min(2*pollTime, time.Second)
}
// If mapper requested header changes, set them post-copy
if postHeaders != nil {
blb := f.getBlobSVC(dstContainer, dstPath)
_, setErr := blb.SetHTTPHeaders(ctx, *postHeaders, nil)
if setErr != nil {
return nil, fmt.Errorf("single part copy: failed to set headers: %w", setErr)
}
}
// Metadata (when requested) is set via StartCopyFromURL options.Metadata
return f.NewObject(ctx, remote)
}
@@ -2157,6 +2524,35 @@ func (o *Object) getMetadata() (metadata map[string]*string) {
return metadata
}
// Metadata returns metadata for an object
//
// It returns a combined view of system and user metadata.
func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
// Ensure metadata is loaded
if err := o.readMetaData(ctx); err != nil {
return nil, err
}
m := fs.Metadata{}
// System metadata we expose
if !o.modTime.IsZero() {
m["mtime"] = o.modTime.Format(time.RFC3339Nano)
}
if o.accessTier != "" {
m["tier"] = string(o.accessTier)
}
// Merge user metadata (already lower-cased keys)
metadataMu.Lock()
for k, v := range o.meta {
m[k] = v
}
metadataMu.Unlock()
return m, nil
}
// decodeMetaDataFromPropertiesResponse sets the metadata from the data passed in
//
// Sets
@@ -2995,17 +3391,19 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
// containerPath = containerPath[:len(containerPath)-1]
// }
// Update Mod time
o.updateMetadataWithModTime(src.ModTime(ctx))
if err != nil {
return ui, err
}
// Create the HTTP headers for the upload
// Start with default content-type based on source
ui.httpHeaders = blob.HTTPHeaders{
BlobContentType: pString(fs.MimeType(ctx, src)),
}
// Apply mapped metadata/headers/tags if requested
modTime, err := o.applyMappedMetadata(ctx, src, &ui, options)
if err != nil {
return ui, err
}
// Ensure mtime is set in metadata based on possibly overridden modTime
o.updateMetadataWithModTime(modTime)
// Compute the Content-MD5 of the file. As we stream all uploads it
// will be set in PutBlockList API call using the 'x-ms-blob-content-md5' header
if !o.fs.opt.DisableCheckSum {

View File

@@ -5,11 +5,16 @@ package azureblob
import (
"context"
"encoding/base64"
"fmt"
"net/http"
"strings"
"testing"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/random"
@@ -148,4 +153,417 @@ func (f *Fs) testWriteUncommittedBlocks(t *testing.T) {
func (f *Fs) InternalTest(t *testing.T) {
t.Run("Features", f.testFeatures)
t.Run("WriteUncommittedBlocks", f.testWriteUncommittedBlocks)
t.Run("Metadata", f.testMetadataPaths)
}
// helper to read blob properties for an object
func getProps(ctx context.Context, t *testing.T, o fs.Object) *blob.GetPropertiesResponse {
ao := o.(*Object)
props, err := ao.readMetaDataAlways(ctx)
require.NoError(t, err)
return props
}
// helper to assert select headers and user metadata
func assertHeadersAndMetadata(t *testing.T, props *blob.GetPropertiesResponse, want map[string]string, wantUserMeta map[string]string) {
// Headers
get := func(p *string) string {
if p == nil {
return ""
}
return *p
}
if v, ok := want["content-type"]; ok {
assert.Equal(t, v, get(props.ContentType), "content-type")
}
if v, ok := want["cache-control"]; ok {
assert.Equal(t, v, get(props.CacheControl), "cache-control")
}
if v, ok := want["content-disposition"]; ok {
assert.Equal(t, v, get(props.ContentDisposition), "content-disposition")
}
if v, ok := want["content-encoding"]; ok {
assert.Equal(t, v, get(props.ContentEncoding), "content-encoding")
}
if v, ok := want["content-language"]; ok {
assert.Equal(t, v, get(props.ContentLanguage), "content-language")
}
// User metadata (case-insensitive keys from service)
norm := make(map[string]*string, len(props.Metadata))
for kk, vv := range props.Metadata {
norm[strings.ToLower(kk)] = vv
}
for k, v := range wantUserMeta {
pv, ok := norm[strings.ToLower(k)]
if assert.True(t, ok, fmt.Sprintf("missing user metadata key %q", k)) {
if pv == nil {
assert.Equal(t, v, "", k)
} else {
assert.Equal(t, v, *pv, k)
}
} else {
// Log available keys for diagnostics
keys := make([]string, 0, len(props.Metadata))
for kk := range props.Metadata {
keys = append(keys, kk)
}
t.Logf("available user metadata keys: %v", keys)
}
}
}
// helper to read blob tags for an object
func getTagsMap(ctx context.Context, t *testing.T, o fs.Object) map[string]string {
ao := o.(*Object)
blb := ao.getBlobSVC()
resp, err := blb.GetTags(ctx, nil)
require.NoError(t, err)
out := make(map[string]string)
for _, tag := range resp.BlobTagSet {
if tag.Key != nil {
k := *tag.Key
v := ""
if tag.Value != nil {
v = *tag.Value
}
out[k] = v
}
}
return out
}
// Test metadata across different write paths
func (f *Fs) testMetadataPaths(t *testing.T) {
ctx := context.Background()
if testing.Short() {
t.Skip("skipping in short mode")
}
// Common expected metadata and headers
baseMeta := fs.Metadata{
"cache-control": "no-cache",
"content-disposition": "inline",
"content-language": "en-US",
// Note: Don't set content-encoding here to avoid download decoding differences
// We will set a custom user metadata key
"potato": "royal",
// and modtime
"mtime": fstest.Time("2009-05-06T04:05:06.499999999Z").Format(time.RFC3339Nano),
}
// Singlepart upload
t.Run("PutSinglepart", func(t *testing.T) {
// size less than chunk size
contents := random.String(int(f.opt.ChunkSize / 2))
item := fstest.NewItem("meta-single.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
// override content-type via metadata mapping
meta := fs.Metadata{}
meta.Merge(baseMeta)
meta["content-type"] = "text/plain"
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, "text/html", meta)
defer func() { _ = obj.Remove(ctx) }()
props := getProps(ctx, t, obj)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "text/plain",
"cache-control": "no-cache",
"content-disposition": "inline",
"content-language": "en-US",
}, map[string]string{
"potato": "royal",
})
_ = http.StatusOK // keep import for parity but don't inspect RawResponse
})
// Multipart upload
t.Run("PutMultipart", func(t *testing.T) {
// size greater than chunk size to force multipart
contents := random.String(int(f.opt.ChunkSize + 1024))
item := fstest.NewItem("meta-multipart.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
meta := fs.Metadata{}
meta.Merge(baseMeta)
meta["content-type"] = "application/json"
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, "text/html", meta)
defer func() { _ = obj.Remove(ctx) }()
props := getProps(ctx, t, obj)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "application/json",
"cache-control": "no-cache",
"content-disposition": "inline",
"content-language": "en-US",
}, map[string]string{
"potato": "royal",
})
// Tags: Singlepart upload
t.Run("PutSinglepartTags", func(t *testing.T) {
contents := random.String(int(f.opt.ChunkSize / 2))
item := fstest.NewItem("tags-single.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
meta := fs.Metadata{
"x-ms-tags": "env=dev,team=sync",
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, "text/plain", meta)
defer func() { _ = obj.Remove(ctx) }()
tags := getTagsMap(ctx, t, obj)
assert.Equal(t, "dev", tags["env"])
assert.Equal(t, "sync", tags["team"])
})
// Tags: Multipart upload
t.Run("PutMultipartTags", func(t *testing.T) {
contents := random.String(int(f.opt.ChunkSize + 2048))
item := fstest.NewItem("tags-multipart.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
meta := fs.Metadata{
"x-ms-tags": "project=alpha,release=2025-08",
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, "application/octet-stream", meta)
defer func() { _ = obj.Remove(ctx) }()
tags := getTagsMap(ctx, t, obj)
assert.Equal(t, "alpha", tags["project"])
assert.Equal(t, "2025-08", tags["release"])
})
})
// Singlepart copy with metadata-set mapping; omit content-type to exercise fallback
t.Run("CopySinglepart", func(t *testing.T) {
// create small source
contents := random.String(int(f.opt.ChunkSize / 2))
srcItem := fstest.NewItem("meta-copy-single-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "text/plain", nil)
defer func() { _ = srcObj.Remove(ctx) }()
// set mapping via MetadataSet
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
ci.MetadataSet = fs.Metadata{
"cache-control": "private, max-age=60",
"content-disposition": "attachment; filename=foo.txt",
"content-language": "fr",
// no content-type: should fallback to source
"potato": "maris",
}
// do copy
dstName := "meta-copy-single-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
props := getProps(ctx2, t, dst)
// content-type should fallback to source (text/plain)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "text/plain",
"cache-control": "private, max-age=60",
"content-disposition": "attachment; filename=foo.txt",
"content-language": "fr",
}, map[string]string{
"potato": "maris",
})
// mtime should be populated on copy when --metadata is used
// and should equal the source ModTime (RFC3339Nano)
// Read user metadata (case-insensitive)
m := props.Metadata
var gotMtime string
for k, v := range m {
if strings.EqualFold(k, "mtime") && v != nil {
gotMtime = *v
break
}
}
if assert.NotEmpty(t, gotMtime, "mtime not set on destination metadata") {
// parse and compare times ignoring formatting differences
parsed, err := time.Parse(time.RFC3339Nano, gotMtime)
require.NoError(t, err)
assert.True(t, srcObj.ModTime(ctx2).Equal(parsed), "dst mtime should equal src ModTime")
}
})
// CopySinglepart with only --metadata (no MetadataSet) must inject mtime and preserve src content-type
t.Run("CopySinglepart_MetadataOnly", func(t *testing.T) {
contents := random.String(int(f.opt.ChunkSize / 2))
srcItem := fstest.NewItem("meta-copy-single-only-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "text/plain", nil)
defer func() { _ = srcObj.Remove(ctx) }()
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
dstName := "meta-copy-single-only-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
props := getProps(ctx2, t, dst)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "text/plain",
}, map[string]string{})
// Assert mtime injected
m := props.Metadata
var gotMtime string
for k, v := range m {
if strings.EqualFold(k, "mtime") && v != nil {
gotMtime = *v
break
}
}
if assert.NotEmpty(t, gotMtime, "mtime not set on destination metadata") {
parsed, err := time.Parse(time.RFC3339Nano, gotMtime)
require.NoError(t, err)
assert.True(t, srcObj.ModTime(ctx2).Equal(parsed), "dst mtime should equal src ModTime")
}
})
// Multipart copy with metadata-set mapping; omit content-type to exercise fallback
t.Run("CopyMultipart", func(t *testing.T) {
// create large source to force multipart
contents := random.String(int(f.opt.CopyCutoff + 1024))
srcItem := fstest.NewItem("meta-copy-multi-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "application/octet-stream", nil)
defer func() { _ = srcObj.Remove(ctx) }()
// set mapping via MetadataSet
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
ci.MetadataSet = fs.Metadata{
"cache-control": "max-age=0, no-cache",
// omit content-type to trigger fallback
"content-language": "de",
"potato": "desiree",
}
dstName := "meta-copy-multi-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
props := getProps(ctx2, t, dst)
// content-type should fallback to source (application/octet-stream)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "application/octet-stream",
"cache-control": "max-age=0, no-cache",
"content-language": "de",
}, map[string]string{
"potato": "desiree",
})
// mtime should be populated on copy when --metadata is used
m := props.Metadata
var gotMtime string
for k, v := range m {
if strings.EqualFold(k, "mtime") && v != nil {
gotMtime = *v
break
}
}
if assert.NotEmpty(t, gotMtime, "mtime not set on destination metadata") {
parsed, err := time.Parse(time.RFC3339Nano, gotMtime)
require.NoError(t, err)
assert.True(t, srcObj.ModTime(ctx2).Equal(parsed), "dst mtime should equal src ModTime")
}
})
// CopyMultipart with only --metadata must inject mtime and preserve src content-type
t.Run("CopyMultipart_MetadataOnly", func(t *testing.T) {
contents := random.String(int(f.opt.CopyCutoff + 2048))
srcItem := fstest.NewItem("meta-copy-multi-only-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "application/octet-stream", nil)
defer func() { _ = srcObj.Remove(ctx) }()
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
dstName := "meta-copy-multi-only-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
props := getProps(ctx2, t, dst)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "application/octet-stream",
}, map[string]string{})
m := props.Metadata
var gotMtime string
for k, v := range m {
if strings.EqualFold(k, "mtime") && v != nil {
gotMtime = *v
break
}
}
if assert.NotEmpty(t, gotMtime, "mtime not set on destination metadata") {
parsed, err := time.Parse(time.RFC3339Nano, gotMtime)
require.NoError(t, err)
assert.True(t, srcObj.ModTime(ctx2).Equal(parsed), "dst mtime should equal src ModTime")
}
})
// Tags: Singlepart copy
t.Run("CopySinglepartTags", func(t *testing.T) {
// create small source
contents := random.String(int(f.opt.ChunkSize / 2))
srcItem := fstest.NewItem("tags-copy-single-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "text/plain", nil)
defer func() { _ = srcObj.Remove(ctx) }()
// set mapping via MetadataSet including tags
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
ci.MetadataSet = fs.Metadata{
"x-ms-tags": "copy=single,mode=test",
}
dstName := "tags-copy-single-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
tags := getTagsMap(ctx2, t, dst)
assert.Equal(t, "single", tags["copy"])
assert.Equal(t, "test", tags["mode"])
})
// Tags: Multipart copy
t.Run("CopyMultipartTags", func(t *testing.T) {
// create large source to force multipart
contents := random.String(int(f.opt.CopyCutoff + 4096))
srcItem := fstest.NewItem("tags-copy-multi-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "application/octet-stream", nil)
defer func() { _ = srcObj.Remove(ctx) }()
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
ci.MetadataSet = fs.Metadata{
"x-ms-tags": "copy=multi,mode=test",
}
dstName := "tags-copy-multi-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
tags := getTagsMap(ctx2, t, dst)
assert.Equal(t, "multi", tags["copy"])
assert.Equal(t, "test", tags["mode"])
})
// Negative: invalid x-ms-tags must error
t.Run("InvalidXMsTags", func(t *testing.T) {
contents := random.String(32)
item := fstest.NewItem("tags-invalid.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
// construct ObjectInfo with invalid x-ms-tags
buf := strings.NewReader(contents)
// Build obj info with metadata
meta := fs.Metadata{
"x-ms-tags": "badpair-without-equals",
}
// force metadata on
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
obji := object.NewStaticObjectInfo(item.Path, item.ModTime, int64(len(contents)), true, nil, nil)
obji = obji.WithMetadata(meta).WithMimeType("text/plain")
_, err := f.Put(ctx2, buf, obji)
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid tag")
})
}

View File

@@ -133,23 +133,32 @@ type File struct {
Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
}
// AuthorizeAccountResponse is as returned from the b2_authorize_account call
type AuthorizeAccountResponse struct {
// StorageAPI is as returned from the b2_authorize_account call
type StorageAPI struct {
AbsoluteMinimumPartSize int `json:"absoluteMinimumPartSize"` // The smallest possible size of a part of a large file.
AccountID string `json:"accountId"` // The identifier for the account.
Allowed struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it.
BucketID string `json:"bucketId"` // When present, access is restricted to one bucket.
BucketName string `json:"bucketName"` // When present, name of bucket - may be empty
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has.
Buckets []struct { // When present, access is restricted to one or more buckets.
ID string `json:"id"` // ID of bucket
Name string `json:"name"` // When present, name of bucket - may be empty
} `json:"buckets"`
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has for every bucket.
NamePrefix any `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix
} `json:"allowed"`
APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
DownloadURL string `json:"downloadUrl"` // The base URL to use for downloading files.
MinimumPartSize int `json:"minimumPartSize"` // DEPRECATED: This field will always have the same value as recommendedPartSize. Use recommendedPartSize instead.
RecommendedPartSize int `json:"recommendedPartSize"` // The recommended size for each part of a large file. We recommend using this part size for optimal upload performance.
}
// AuthorizeAccountResponse is as returned from the b2_authorize_account call
type AuthorizeAccountResponse struct {
AccountID string `json:"accountId"` // The identifier for the account.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
APIs struct { // Supported APIs for this account / key. These are API-dependent JSON objects.
Storage StorageAPI `json:"storageApi"`
} `json:"apiInfo"`
}
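// Abridged v4 response shape implied by the struct tags above (illustrative):
//   {"accountId": "...", "authorizationToken": "...",
//    "apiInfo": {"storageApi": {"apiUrl": "...", "downloadUrl": "...",
//      "allowed": {"buckets": [{"id": "...", "name": "..."}],
//                  "capabilities": ["listBuckets"], "namePrefix": null}}}}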
// ListBucketsRequest is parameters for b2_list_buckets call
type ListBucketsRequest struct {
AccountID string `json:"accountId"` // The identifier for the account.

View File

@@ -607,17 +607,29 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("failed to authorize account: %w", err)
}
// If this is a key limited to a single bucket, it must exist already
if f.rootBucket != "" && f.info.Allowed.BucketID != "" {
allowedBucket := f.opt.Enc.ToStandardName(f.info.Allowed.BucketName)
if allowedBucket == "" {
return nil, errors.New("bucket that application key is restricted to no longer exists")
// If this is a key limited to one or more buckets, one of them must exist
// and be ours.
if f.rootBucket != "" && len(f.info.APIs.Storage.Allowed.Buckets) != 0 {
buckets := f.info.APIs.Storage.Allowed.Buckets
var rootFound = false
var rootID string
for _, b := range buckets {
allowedBucket := f.opt.Enc.ToStandardName(b.Name)
if allowedBucket == "" {
fs.Debugf(f, "bucket %q that application key is restricted to no longer exists", b.ID)
continue
}
if allowedBucket == f.rootBucket {
rootFound = true
rootID = b.ID
}
}
if allowedBucket != f.rootBucket {
return nil, fmt.Errorf("you must use bucket %q with this application key", allowedBucket)
if !rootFound {
return nil, fmt.Errorf("you must use bucket(s) %q with this application key", buckets)
}
f.cache.MarkOK(f.rootBucket)
f.setBucketID(f.rootBucket, f.info.Allowed.BucketID)
f.setBucketID(f.rootBucket, rootID)
}
if f.rootBucket != "" && f.rootDirectory != "" {
// Check to see if the (bucket,directory) is actually an existing file
@@ -643,7 +655,7 @@ func (f *Fs) authorizeAccount(ctx context.Context) error {
defer f.authMu.Unlock()
opts := rest.Opts{
Method: "GET",
Path: "/b2api/v1/b2_authorize_account",
Path: "/b2api/v4/b2_authorize_account",
RootURL: f.opt.Endpoint,
UserName: f.opt.Account,
Password: f.opt.Key,
@@ -656,13 +668,13 @@ func (f *Fs) authorizeAccount(ctx context.Context) error {
if err != nil {
return fmt.Errorf("failed to authenticate: %w", err)
}
f.srv.SetRoot(f.info.APIURL+"/b2api/v1").SetHeader("Authorization", f.info.AuthorizationToken)
f.srv.SetRoot(f.info.APIs.Storage.APIURL+"/b2api/v1").SetHeader("Authorization", f.info.AuthorizationToken)
return nil
}
// hasPermission returns if the current AuthorizationToken has the selected permission
func (f *Fs) hasPermission(permission string) bool {
return slices.Contains(f.info.Allowed.Capabilities, permission)
return slices.Contains(f.info.APIs.Storage.Allowed.Capabilities, permission)
}
// getUploadURL returns the upload info with the UploadURL and the AuthorizationToken
@@ -1067,44 +1079,83 @@ type listBucketFn func(*api.Bucket) error
// listBucketsToFn lists the buckets to the function supplied
func (f *Fs) listBucketsToFn(ctx context.Context, bucketName string, fn listBucketFn) error {
var account = api.ListBucketsRequest{
AccountID: f.info.AccountID,
BucketID: f.info.Allowed.BucketID,
}
if bucketName != "" && account.BucketID == "" {
account.BucketName = f.opt.Enc.FromStandardName(bucketName)
responses := make([]api.ListBucketsResponse, len(f.info.APIs.Storage.Allowed.Buckets))[:0]
call := func(id string) error {
var account = api.ListBucketsRequest{
AccountID: f.info.AccountID,
BucketID: id,
}
if bucketName != "" && account.BucketID == "" {
account.BucketName = f.opt.Enc.FromStandardName(bucketName)
}
var response api.ListBucketsResponse
opts := rest.Opts{
Method: "POST",
Path: "/b2_list_buckets",
}
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, &account, &response)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return err
}
responses = append(responses, response)
return nil
}
var response api.ListBucketsResponse
opts := rest.Opts{
Method: "POST",
Path: "/b2_list_buckets",
for i := range f.info.APIs.Storage.Allowed.Buckets {
b := &f.info.APIs.Storage.Allowed.Buckets[i]
// Empty names indicate a bucket that no longer exists, this is non-fatal
// for multi-bucket API keys.
if b.Name == "" {
continue
}
// When requesting a specific bucket skip over non-matching names
if bucketName != "" && b.Name != bucketName {
continue
}
err := call(b.ID)
if err != nil {
return err
}
}
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, &account, &response)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return err
if len(f.info.APIs.Storage.Allowed.Buckets) == 0 {
err := call("")
if err != nil {
return err
}
}
f.bucketIDMutex.Lock()
f.bucketTypeMutex.Lock()
f._bucketID = make(map[string]string, 1)
f._bucketType = make(map[string]string, 1)
for i := range response.Buckets {
bucket := &response.Buckets[i]
bucket.Name = f.opt.Enc.ToStandardName(bucket.Name)
f.cache.MarkOK(bucket.Name)
f._bucketID[bucket.Name] = bucket.ID
f._bucketType[bucket.Name] = bucket.Type
for ri := range responses {
response := &responses[ri]
for i := range response.Buckets {
bucket := &response.Buckets[i]
bucket.Name = f.opt.Enc.ToStandardName(bucket.Name)
f.cache.MarkOK(bucket.Name)
f._bucketID[bucket.Name] = bucket.ID
f._bucketType[bucket.Name] = bucket.Type
}
}
f.bucketTypeMutex.Unlock()
f.bucketIDMutex.Unlock()
for i := range response.Buckets {
bucket := &response.Buckets[i]
err = fn(bucket)
if err != nil {
return err
for ri := range responses {
response := &responses[ri]
for i := range response.Buckets {
bucket := &response.Buckets[i]
err := fn(bucket)
if err != nil {
return err
}
}
}
return nil
@@ -1606,7 +1657,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
bucket, bucketPath := f.split(remote)
var RootURL string
if f.opt.DownloadURL == "" {
RootURL = f.info.DownloadURL
RootURL = f.info.APIs.Storage.DownloadURL
} else {
RootURL = f.opt.DownloadURL
}
@@ -1957,7 +2008,7 @@ func (o *Object) getOrHead(ctx context.Context, method string, options []fs.Open
// Use downloadUrl from backblaze if downloadUrl is not set
// otherwise use the custom downloadUrl
if o.fs.opt.DownloadURL == "" {
opts.RootURL = o.fs.info.DownloadURL
opts.RootURL = o.fs.info.APIs.Storage.DownloadURL
} else {
opts.RootURL = o.fs.opt.DownloadURL
}

View File

@@ -403,14 +403,14 @@ func (c *Cipher) deobfuscateSegment(ciphertext string) (string, error) {
if ciphertext == "" {
return "", nil
}
pos := strings.Index(ciphertext, ".")
if pos == -1 {
before, after, ok := strings.Cut(ciphertext, ".")
if !ok {
return "", ErrorNotAnEncryptedFile
} // No .
num := ciphertext[:pos]
num := before
if num == "!" {
// No rotation; probably original was not valid unicode
return ciphertext[pos+1:], nil
return after, nil
}
dir, err := strconv.Atoi(num)
if err != nil {
@@ -425,7 +425,7 @@ func (c *Cipher) deobfuscateSegment(ciphertext string) (string, error) {
var result bytes.Buffer
inQuote := false
for _, runeValue := range ciphertext[pos+1:] {
for _, runeValue := range after {
switch {
case inQuote:
_, _ = result.WriteRune(runeValue)

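A self-contained sketch (not rclone code) of the strings.Cut behaviour the refactor above relies on; it replaces the strings.Index plus manual slicing pattern:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Cut splits around the first "." and reports whether it was found.
	before, after, ok := strings.Cut("12.Obfuscated", ".")
	fmt.Println(before, after, ok) // "12" "Obfuscated" true

	_, _, ok = strings.Cut("NotEncrypted", ".")
	fmt.Println(ok) // false - maps to ErrorNotAnEncryptedFile in the code above
}
```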
View File

@@ -346,9 +346,26 @@ can't check the size and hash but the file contents will be decompressed.
Advanced: true,
Default: false,
}, {
Name: "endpoint",
Help: "Endpoint for the service.\n\nLeave blank normally.",
Name: "endpoint",
Help: `Custom endpoint for the storage API. Leave blank to use the provider default.
When using a custom endpoint that includes a subpath (e.g. example.org/custom/endpoint),
the subpath will be ignored during upload operations due to a limitation in the
underlying Google API Go client library.
Download and listing operations will work correctly with the full endpoint path.
If you require subpath support for uploads, avoid using subpaths in your custom
endpoint configuration.`,
Advanced: true,
Examples: []fs.OptionExample{{
Value: "storage.example.org",
Help: "Specify a custom endpoint",
}, {
Value: "storage.example.org:4443",
Help: "Specifying a custom endpoint with port",
}, {
Value: "storage.example.org:4443/gcs/api",
Help: "Specifying a subpath, see the note, uploads won't use the custom path!",
}},
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,

View File

@@ -6,6 +6,7 @@ import (
"context"
"crypto/md5"
"encoding/hex"
"errors"
"fmt"
"io"
"path"
@@ -24,7 +25,8 @@ import (
var (
hashType = hash.MD5
// the object storage is persistent
buckets = newBucketsInfo()
buckets = newBucketsInfo()
errWriteOnly = errors.New("can't read when using --memory-discard")
)
// Register with Fs
@@ -33,12 +35,32 @@ func init() {
Name: "memory",
Description: "In memory object storage system.",
NewFs: NewFs,
Options: []fs.Option{},
Options: []fs.Option{{
Name: "discard",
Default: false,
Advanced: true,
Help: `If set all writes will be discarded and reads will return an error
If set then when files are uploaded the contents will not be saved. The
files will appear to have been uploaded but will give an error on
read. Files will have their MD5 sum calculated on upload which takes
very little CPU time and allows the transfers to be checked.
This can be useful for testing performance.
Probably most easily used by using the connection string syntax:
:memory,discard:bucket
`,
}},
})
}
// Options defines the configuration for this backend
type Options struct{}
type Options struct {
Discard bool `config:"discard"`
}
// Fs represents a remote memory server
type Fs struct {
@@ -164,6 +186,7 @@ type objectData struct {
hash string
mimeType string
data []byte
size int64
}
// Object describes a memory object
@@ -558,7 +581,7 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hashType {
return "", hash.ErrUnsupported
}
if o.od.hash == "" {
if o.od.hash == "" && !o.fs.opt.Discard {
sum := md5.Sum(o.od.data)
o.od.hash = hex.EncodeToString(sum[:])
}
@@ -567,7 +590,7 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
return int64(len(o.od.data))
return o.od.size
}
// ModTime returns the modification time of the object
@@ -593,6 +616,9 @@ func (o *Object) Storable() bool {
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
if o.fs.opt.Discard {
return nil, errWriteOnly
}
var offset, limit int64 = 0, -1
for _, option := range options {
switch x := option.(type) {
@@ -624,13 +650,24 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
bucket, bucketPath := o.split()
data, err := io.ReadAll(in)
var data []byte
var size int64
var hash string
if o.fs.opt.Discard {
h := md5.New()
size, err = io.Copy(h, in)
hash = hex.EncodeToString(h.Sum(nil))
} else {
data, err = io.ReadAll(in)
size = int64(len(data))
}
if err != nil {
return fmt.Errorf("failed to update memory object: %w", err)
}
o.od = &objectData{
data: data,
hash: "",
size: size,
hash: hash,
modTime: src.ModTime(ctx),
mimeType: fs.MimeType(ctx, src),
}
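A minimal usage sketch for the `discard` option registered above (source path and bucket name are illustrative placeholders): uploading via the connection string syntax quoted in the help measures raw upload speed while the MD5 check still runs, and reading the data back returns the write-only error defined above.

```console
# contents are discarded on the remote, but MD5 sums are still checked
rclone copy /path/to/testdata :memory,discard:bucket -P
```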

View File

@@ -222,3 +222,11 @@ type UserInfo struct {
} `json:"steps"`
} `json:"journey"`
}
// DiffResult is the response from /diff
type DiffResult struct {
Result int `json:"result"`
DiffID int64 `json:"diffid"`
Entries []map[string]any `json:"entries"`
Error string `json:"error"`
}

View File

@@ -171,6 +171,7 @@ type Fs struct {
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry
lastDiffID int64 // change tracking state for diff long-polling
}
// Object describes a pcloud object
@@ -1033,6 +1034,137 @@ func (f *Fs) Shutdown(ctx context.Context) error {
return nil
}
// ChangeNotify implements fs.Features.ChangeNotify
func (f *Fs) ChangeNotify(ctx context.Context, notify func(string, fs.EntryType), ch <-chan time.Duration) {
// Start long-poll loop in background
go f.changeNotifyLoop(ctx, notify, ch)
}
// changeNotifyLoop contains the blocking long-poll logic.
func (f *Fs) changeNotifyLoop(ctx context.Context, notify func(string, fs.EntryType), ch <-chan time.Duration) {
// Standard polling interval
interval := 30 * time.Second
// Start with diffID = 0 to get the current state
var diffID int64
// Helper to process changes from the diff API
handleChanges := func(entries []map[string]any) {
notifiedPaths := make(map[string]bool)
for _, entry := range entries {
meta, ok := entry["metadata"].(map[string]any)
if !ok {
continue
}
// Robust extraction of ParentFolderID
var pid int64
if val, ok := meta["parentfolderid"]; ok {
switch v := val.(type) {
case float64:
pid = int64(v)
case int64:
pid = v
case int:
pid = int64(v)
}
}
// Resolve the path using dirCache.GetInv
// pCloud uses "d" prefix for directory IDs in cache, but API returns numbers
dirID := fmt.Sprintf("d%d", pid)
parentPath, ok := f.dirCache.GetInv(dirID)
if !ok {
// Parent not in cache, so we can ignore this change as it is outside
// of what the mount has seen or cares about.
continue
}
name, _ := meta["name"].(string)
fullPath := path.Join(parentPath, name)
// Determine EntryType (File or Directory)
entryType := fs.EntryObject
if isFolder, ok := meta["isfolder"].(bool); ok && isFolder {
entryType = fs.EntryDirectory
}
// Deduplicate notifications for this batch
if !notifiedPaths[fullPath] {
fs.Debugf(f, "ChangeNotify: detected change in %q (type: %v)", fullPath, entryType)
notify(fullPath, entryType)
notifiedPaths[fullPath] = true
}
}
}
for {
// Check context and channel
select {
case <-ctx.Done():
return
case newInterval, ok := <-ch:
if !ok {
return
}
interval = newInterval
default:
}
// Setup /diff Request
opts := rest.Opts{
Method: "GET",
Path: "/diff",
Parameters: url.Values{},
}
if diffID != 0 {
opts.Parameters.Set("diffid", strconv.FormatInt(diffID, 10))
opts.Parameters.Set("block", "1")
} else {
opts.Parameters.Set("last", "0")
}
// Perform Long-Poll
// Timeout set to 90s (server usually blocks for 60s max)
reqCtx, cancel := context.WithTimeout(ctx, 90*time.Second)
var result api.DiffResult
_, err := f.srv.CallJSON(reqCtx, &opts, nil, &result)
cancel()
if err != nil {
if errors.Is(err, context.Canceled) {
return
}
// Ignore timeout errors as they are normal for long-polling
if !errors.Is(err, context.DeadlineExceeded) {
fs.Infof(f, "ChangeNotify: polling error: %v. Waiting %v.", err, interval)
time.Sleep(interval)
}
continue
}
// If result is not 0, reset DiffID to resync
if result.Result != 0 {
diffID = 0
time.Sleep(2 * time.Second)
continue
}
if result.DiffID != 0 {
diffID = result.DiffID
f.lastDiffID = diffID
}
if len(result.Entries) > 0 {
handleChanges(result.Entries)
}
}
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
// EU region supports SHA1 and SHA256 (but rclone doesn't
@@ -1401,6 +1533,7 @@ var (
_ fs.ListPer = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
)

View File

@@ -1,26 +1,26 @@
name: Linode
description: Linode Object Storage
endpoint:
nl-ams-1.linodeobjects.com: Amsterdam (Netherlands), nl-ams-1
us-southeast-1.linodeobjects.com: Atlanta, GA (USA), us-southeast-1
in-maa-1.linodeobjects.com: Chennai (India), in-maa-1
us-ord-1.linodeobjects.com: Chicago, IL (USA), us-ord-1
eu-central-1.linodeobjects.com: Frankfurt (Germany), eu-central-1
id-cgk-1.linodeobjects.com: Jakarta (Indonesia), id-cgk-1
gb-lon-1.linodeobjects.com: London 2 (Great Britain), gb-lon-1
us-lax-1.linodeobjects.com: Los Angeles, CA (USA), us-lax-1
es-mad-1.linodeobjects.com: Madrid (Spain), es-mad-1
au-mel-1.linodeobjects.com: Melbourne (Australia), au-mel-1
us-mia-1.linodeobjects.com: Miami, FL (USA), us-mia-1
it-mil-1.linodeobjects.com: Milan (Italy), it-mil-1
us-east-1.linodeobjects.com: Newark, NJ (USA), us-east-1
jp-osa-1.linodeobjects.com: Osaka (Japan), jp-osa-1
fr-par-1.linodeobjects.com: Paris (France), fr-par-1
br-gru-1.linodeobjects.com: São Paulo (Brazil), br-gru-1
us-sea-1.linodeobjects.com: Seattle, WA (USA), us-sea-1
ap-south-1.linodeobjects.com: Singapore, ap-south-1
sg-sin-1.linodeobjects.com: Singapore 2, sg-sin-1
se-sto-1.linodeobjects.com: Stockholm (Sweden), se-sto-1
us-iad-1.linodeobjects.com: Washington, DC, (USA), us-iad-1
nl-ams-1.linodeobjects.com: Amsterdam, NL (nl-ams-1)
us-southeast-1.linodeobjects.com: Atlanta, GA, US (us-southeast-1)
in-maa-1.linodeobjects.com: Chennai, IN (in-maa-1)
us-ord-1.linodeobjects.com: Chicago, IL, US (us-ord-1)
eu-central-1.linodeobjects.com: Frankfurt, DE (eu-central-1)
id-cgk-1.linodeobjects.com: Jakarta, ID (id-cgk-1)
gb-lon-1.linodeobjects.com: London 2, UK (gb-lon-1)
us-lax-1.linodeobjects.com: Los Angeles, CA, US (us-lax-1)
es-mad-1.linodeobjects.com: Madrid, ES (es-mad-1)
us-mia-1.linodeobjects.com: Miami, FL, US (us-mia-1)
it-mil-1.linodeobjects.com: Milan, IT (it-mil-1)
us-east-1.linodeobjects.com: Newark, NJ, US (us-east-1)
jp-osa-1.linodeobjects.com: Osaka, JP (jp-osa-1)
fr-par-1.linodeobjects.com: Paris, FR (fr-par-1)
br-gru-1.linodeobjects.com: Sao Paulo, BR (br-gru-1)
us-sea-1.linodeobjects.com: Seattle, WA, US (us-sea-1)
ap-south-1.linodeobjects.com: Singapore, SG (ap-south-1)
sg-sin-1.linodeobjects.com: Singapore 2, SG (sg-sin-1)
se-sto-1.linodeobjects.com: Stockholm, SE (se-sto-1)
jp-tyo-1.linodeobjects.com: Tokyo 3, JP (jp-tyo-1)
us-iad-10.linodeobjects.com: Washington, DC, US (us-iad-10)
acl: {}
bucket_acl: true

View File

@@ -2,7 +2,17 @@ name: Selectel
description: Selectel Object Storage
region:
ru-1: St. Petersburg
ru-3: St. Petersburg
ru-7: Moscow
gis-1: Moscow
kz-1: Kazakhstan
uz-2: Uzbekistan
endpoint:
s3.ru-1.storage.selcloud.ru: Saint Petersburg
s3.ru-1.storage.selcloud.ru: St. Petersburg
s3.ru-3.storage.selcloud.ru: St. Petersburg
s3.ru-7.storage.selcloud.ru: Moscow
s3.gis-1.storage.selcloud.ru: Moscow
s3.kz-1.storage.selcloud.ru: Kazakhstan
s3.uz-2.storage.selcloud.ru: Uzbekistan
quirks:
list_url_encode: false

View File

@@ -30,9 +30,11 @@ import (
v4signer "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
awsconfig "github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/credentials"
"github.com/aws/aws-sdk-go-v2/credentials/stscreds"
"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/aws/aws-sdk-go-v2/service/sts"
"github.com/aws/smithy-go"
"github.com/aws/smithy-go/logging"
"github.com/aws/smithy-go/middleware"
@@ -325,6 +327,30 @@ If empty it will default to the environment variable "AWS_PROFILE" or
Help: "An AWS session token.",
Advanced: true,
Sensitive: true,
}, {
Name: "role_arn",
Help: `ARN of the IAM role to assume.
Leave blank if not using assume role.`,
Advanced: true,
}, {
Name: "role_session_name",
Help: `Session name for assumed role.
If empty, a session name will be generated automatically.`,
Advanced: true,
}, {
Name: "role_session_duration",
Help: `Session duration for assumed role.
If empty, the default session duration will be used.`,
Advanced: true,
}, {
Name: "role_external_id",
Help: `External ID for assumed role.
Leave blank if not using an external ID.`,
Advanced: true,
}, {
Name: "upload_concurrency",
Help: `Concurrency for multipart uploads and copies.
@@ -927,6 +953,10 @@ type Options struct {
SharedCredentialsFile string `config:"shared_credentials_file"`
Profile string `config:"profile"`
SessionToken string `config:"session_token"`
RoleARN string `config:"role_arn"`
RoleSessionName string `config:"role_session_name"`
RoleSessionDuration fs.Duration `config:"role_session_duration"`
RoleExternalID string `config:"role_external_id"`
UploadConcurrency int `config:"upload_concurrency"`
ForcePathStyle bool `config:"force_path_style"`
V2Auth bool `config:"v2_auth"`
@@ -1290,6 +1320,34 @@ func s3Connection(ctx context.Context, opt *Options, client *http.Client) (s3Cli
opt.Region = "us-east-1"
}
// Handle assume role if RoleARN is specified
if opt.RoleARN != "" {
fs.Debugf(nil, "Using assume role with ARN: %s", opt.RoleARN)
// Set region for the config before creating STS client
awsConfig.Region = opt.Region
// Create STS client using the base credentials
stsClient := sts.NewFromConfig(awsConfig)
// Configure AssumeRole options
assumeRoleOptions := func(aro *stscreds.AssumeRoleOptions) {
// Set session name if provided, otherwise use a default
if opt.RoleSessionName != "" {
aro.RoleSessionName = opt.RoleSessionName
}
if opt.RoleSessionDuration != 0 {
aro.Duration = time.Duration(opt.RoleSessionDuration)
}
if opt.RoleExternalID != "" {
aro.ExternalID = &opt.RoleExternalID
}
}
// Create AssumeRole credentials provider
awsConfig.Credentials = stscreds.NewAssumeRoleProvider(stsClient, opt.RoleARN, assumeRoleOptions)
}
provider = loadProvider(opt.Provider)
if provider == nil {
fs.Logf("s3", "s3 provider %q not known - please set correctly", opt.Provider)
@@ -2835,6 +2893,8 @@ func (f *Fs) copyMultipart(ctx context.Context, copyReq *s3.CopyObjectInput, dst
SSECustomerKey: req.SSECustomerKey,
SSECustomerKeyMD5: req.SSECustomerKeyMD5,
UploadId: uid,
IfMatch: copyReq.IfMatch,
IfNoneMatch: copyReq.IfNoneMatch,
})
return f.shouldRetry(ctx, err)
})
@@ -2869,13 +2929,20 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
MetadataDirective: types.MetadataDirectiveCopy,
}
// Update the metadata if it is in use
if ci := fs.GetConfig(ctx); ci.Metadata {
ui, err := srcObj.prepareUpload(ctx, src, fs.MetadataAsOpenOptions(ctx), true)
if err != nil {
return nil, fmt.Errorf("failed to prepare upload: %w", err)
}
setFrom_s3CopyObjectInput_s3PutObjectInput(&req, ui.req)
// Build upload options including headers and metadata
ci := fs.GetConfig(ctx)
uploadOptions := fs.MetadataAsOpenOptions(ctx)
for _, option := range ci.UploadHeaders {
uploadOptions = append(uploadOptions, option)
}
ui, err := srcObj.prepareUpload(ctx, src, uploadOptions, true)
if err != nil {
return nil, fmt.Errorf("failed to prepare upload: %w", err)
}
setFrom_s3CopyObjectInput_s3PutObjectInput(&req, ui.req)
if ci.Metadata {
req.MetadataDirective = types.MetadataDirectiveReplace
}
@@ -4284,6 +4351,8 @@ func (w *s3ChunkWriter) Close(ctx context.Context) (err error) {
SSECustomerKey: w.multiPartUploadInput.SSECustomerKey,
SSECustomerKeyMD5: w.multiPartUploadInput.SSECustomerKeyMD5,
UploadId: w.uploadID,
IfMatch: w.ui.req.IfMatch,
IfNoneMatch: w.ui.req.IfNoneMatch,
})
return w.f.shouldRetry(ctx, err)
})

View File

@@ -70,6 +70,7 @@ func setFrom_s3ListObjectsV2Output_s3ListObjectVersionsOutput(a *s3.ListObjectsV
// setFrom_typesObject_typesObjectVersion copies matching elements from a to b
func setFrom_typesObject_typesObjectVersion(a *types.Object, b *types.ObjectVersion) {
a.ChecksumAlgorithm = b.ChecksumAlgorithm
a.ChecksumType = b.ChecksumType
a.ETag = b.ETag
a.Key = b.Key
a.LastModified = b.LastModified
@@ -82,6 +83,7 @@ func setFrom_typesObject_typesObjectVersion(a *types.Object, b *types.ObjectVers
func setFrom_s3CreateMultipartUploadInput_s3HeadObjectOutput(a *s3.CreateMultipartUploadInput, b *s3.HeadObjectOutput) {
a.BucketKeyEnabled = b.BucketKeyEnabled
a.CacheControl = b.CacheControl
a.ChecksumType = b.ChecksumType
a.ContentDisposition = b.ContentDisposition
a.ContentEncoding = b.ContentEncoding
a.ContentLanguage = b.ContentLanguage
@@ -160,12 +162,15 @@ func setFrom_s3HeadObjectOutput_s3GetObjectOutput(a *s3.HeadObjectOutput, b *s3.
a.CacheControl = b.CacheControl
a.ChecksumCRC32 = b.ChecksumCRC32
a.ChecksumCRC32C = b.ChecksumCRC32C
a.ChecksumCRC64NVME = b.ChecksumCRC64NVME
a.ChecksumSHA1 = b.ChecksumSHA1
a.ChecksumSHA256 = b.ChecksumSHA256
a.ChecksumType = b.ChecksumType
a.ContentDisposition = b.ContentDisposition
a.ContentEncoding = b.ContentEncoding
a.ContentLanguage = b.ContentLanguage
a.ContentLength = b.ContentLength
a.ContentRange = b.ContentRange
a.ContentType = b.ContentType
a.DeleteMarker = b.DeleteMarker
a.ETag = b.ETag
@@ -187,6 +192,7 @@ func setFrom_s3HeadObjectOutput_s3GetObjectOutput(a *s3.HeadObjectOutput, b *s3.
a.SSEKMSKeyId = b.SSEKMSKeyId
a.ServerSideEncryption = b.ServerSideEncryption
a.StorageClass = b.StorageClass
a.TagCount = b.TagCount
a.VersionId = b.VersionId
a.WebsiteRedirectLocation = b.WebsiteRedirectLocation
a.ResultMetadata = b.ResultMetadata
@@ -232,6 +238,7 @@ func setFrom_s3HeadObjectOutput_s3PutObjectInput(a *s3.HeadObjectOutput, b *s3.P
a.CacheControl = b.CacheControl
a.ChecksumCRC32 = b.ChecksumCRC32
a.ChecksumCRC32C = b.ChecksumCRC32C
a.ChecksumCRC64NVME = b.ChecksumCRC64NVME
a.ChecksumSHA1 = b.ChecksumSHA1
a.ChecksumSHA256 = b.ChecksumSHA256
a.ContentDisposition = b.ContentDisposition
@@ -270,6 +277,8 @@ func setFrom_s3CopyObjectInput_s3PutObjectInput(a *s3.CopyObjectInput, b *s3.Put
a.GrantRead = b.GrantRead
a.GrantReadACP = b.GrantReadACP
a.GrantWriteACP = b.GrantWriteACP
a.IfMatch = b.IfMatch
a.IfNoneMatch = b.IfNoneMatch
a.Metadata = b.Metadata
a.ObjectLockLegalHoldStatus = b.ObjectLockLegalHoldStatus
a.ObjectLockMode = b.ObjectLockMode

View File

@@ -0,0 +1,27 @@
// Package api has type definitions for shade
package api
// ListDirResponse is the format returned by the shade api for directory listings
type ListDirResponse struct {
Type string `json:"type"` // "file" or "tree"
Path string `json:"path"` // Full path including root
Ino int `json:"ino"` // inode number
Mtime int64 `json:"mtime"` // Modified time in milliseconds
Ctime int64 `json:"ctime"` // Created time in milliseconds
Size int64 `json:"size"` // Size in bytes
Hash string `json:"hash"` // MD5 hash
Draft bool `json:"draft"` // Whether this is a draft file
}
// PartURL Type for multipart upload/download
type PartURL struct {
URL string `json:"url"`
Headers map[string]string `json:"headers,omitempty"`
}
// CompletedPart Type for completed parts when making a multipart upload.
type CompletedPart struct {
ETag string
PartNumber int32
}

backend/shade/shade.go (new file, 1039 lines): diff suppressed because it is too large

View File

@@ -0,0 +1,21 @@
package shade_test
import (
"testing"
"github.com/rclone/rclone/backend/shade"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
name := "TestShade"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*shade.Object)(nil),
SkipInvalidUTF8: true,
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "eventually_consistent_delay", Value: "7"},
},
})
}

backend/shade/upload.go (new file, 336 lines)
View File

@@ -0,0 +1,336 @@
// Multipart upload for shade
package shade
import (
"bytes"
"context"
"fmt"
"io"
"net/http"
"net/url"
"path"
"sort"
"sync"
"github.com/rclone/rclone/backend/shade/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/chunksize"
"github.com/rclone/rclone/lib/multipart"
"github.com/rclone/rclone/lib/rest"
)
var warnStreamUpload sync.Once
type shadeChunkWriter struct {
initToken string
chunkSize int64
size int64
f *Fs
o *Object
completedParts []api.CompletedPart
completedPartsMu sync.Mutex
}
// uploadMultipart handles multipart upload for larger files
func (o *Object) uploadMultipart(ctx context.Context, src fs.ObjectInfo, in io.Reader, options ...fs.OpenOption) error {
chunkWriter, err := multipart.UploadMultipart(ctx, src, in, multipart.UploadMultipartOptions{
Open: o.fs,
OpenOptions: options,
})
if err != nil {
return err
}
var shadeWriter = chunkWriter.(*shadeChunkWriter)
o.size = shadeWriter.size
return nil
}
// OpenChunkWriter returns the chunk size and a ChunkWriter
//
// Pass in the remote and the src object
// You can also use options to hint at the desired chunk size
func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectInfo, options ...fs.OpenOption) (info fs.ChunkWriterInfo, writer fs.ChunkWriter, err error) {
// Temporary Object under construction
o := &Object{
fs: f,
remote: remote,
}
uploadParts := f.opt.MaxUploadParts
if uploadParts < 1 {
uploadParts = 1
} else if uploadParts > maxUploadParts {
uploadParts = maxUploadParts
}
size := src.Size()
fs.FixRangeOption(options, size)
// calculate size of parts
chunkSize := f.opt.ChunkSize
// size can be -1 here meaning we don't know the size of the incoming file. We use ChunkSize
// buffers here (default 64 MB). With a maximum number of parts (10,000) this will be a file of
// 640 GB.
if size == -1 {
warnStreamUpload.Do(func() {
fs.Logf(f, "Streaming uploads using chunk size %v will have maximum file size of %v",
chunkSize, fs.SizeSuffix(int64(chunkSize)*int64(uploadParts)))
})
} else {
chunkSize = chunksize.Calculator(src, size, uploadParts, chunkSize)
}
token, err := o.fs.refreshJWTToken(ctx)
if err != nil {
return info, nil, fmt.Errorf("failed to get token: %w", err)
}
err = f.ensureParentDirectories(ctx, remote)
if err != nil {
return info, nil, fmt.Errorf("failed to ensure parent directories: %w", err)
}
fullPath := remote
if f.root != "" {
fullPath = path.Join(f.root, remote)
}
// Initiate multipart upload
type initRequest struct {
Path string `json:"path"`
PartSize int64 `json:"partSize"`
}
reqBody := initRequest{
Path: fullPath,
PartSize: int64(chunkSize),
}
var initResp struct {
Token string `json:"token"`
}
opts := rest.Opts{
Method: "POST",
Path: fmt.Sprintf("/%s/upload/multipart", o.fs.drive),
RootURL: o.fs.endpoint,
ExtraHeaders: map[string]string{
"Authorization": "Bearer " + token,
},
Options: options,
}
err = o.fs.pacer.Call(func() (bool, error) {
res, err := o.fs.srv.CallJSON(ctx, &opts, reqBody, &initResp)
if err != nil {
return res != nil && res.StatusCode == http.StatusTooManyRequests, err
}
return false, nil
})
if err != nil {
return info, nil, fmt.Errorf("failed to initiate multipart upload: %w", err)
}
chunkWriter := &shadeChunkWriter{
initToken: initResp.Token,
chunkSize: int64(chunkSize),
size: size,
f: f,
o: o,
}
info = fs.ChunkWriterInfo{
ChunkSize: int64(chunkSize),
Concurrency: f.opt.Concurrency,
LeavePartsOnError: false,
}
return info, chunkWriter, err
}
// WriteChunk will write chunk number with reader bytes, where chunk number >= 0
func (s *shadeChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader io.ReadSeeker) (bytesWritten int64, err error) {
token, err := s.f.refreshJWTToken(ctx)
if err != nil {
return 0, err
}
// Read chunk
var chunk bytes.Buffer
n, err := io.Copy(&chunk, reader)
if n == 0 {
return 0, nil
}
if err != nil {
return 0, fmt.Errorf("failed to read chunk: %w", err)
}
// Get presigned URL for this part
var partURL api.PartURL
partOpts := rest.Opts{
Method: "POST",
Path: fmt.Sprintf("/%s/upload/multipart/part/%d?token=%s", s.f.drive, chunkNumber+1, url.QueryEscape(s.initToken)),
RootURL: s.f.endpoint,
ExtraHeaders: map[string]string{
"Authorization": "Bearer " + token,
},
}
err = s.f.pacer.Call(func() (bool, error) {
res, err := s.f.srv.CallJSON(ctx, &partOpts, nil, &partURL)
if err != nil {
return res != nil && res.StatusCode == http.StatusTooManyRequests, err
}
return false, nil
})
if err != nil {
return 0, fmt.Errorf("failed to get part URL: %w", err)
}
opts := rest.Opts{
Method: "PUT",
RootURL: partURL.URL,
Body: &chunk,
ContentType: "",
ContentLength: &n,
}
// Add headers
var uploadRes *http.Response
if len(partURL.Headers) > 0 {
opts.ExtraHeaders = make(map[string]string)
for k, v := range partURL.Headers {
opts.ExtraHeaders[k] = v
}
}
err = s.f.pacer.Call(func() (bool, error) {
uploadRes, err = s.f.srv.Call(ctx, &opts)
if err != nil {
return uploadRes != nil && uploadRes.StatusCode == http.StatusTooManyRequests, err
}
return false, nil
})
if err != nil {
return 0, fmt.Errorf("failed to upload part %d: %w", chunk, err)
}
if uploadRes.StatusCode != http.StatusOK && uploadRes.StatusCode != http.StatusCreated {
body, _ := io.ReadAll(uploadRes.Body)
fs.CheckClose(uploadRes.Body, &err)
return 0, fmt.Errorf("part upload failed with status %d: %s", uploadRes.StatusCode, string(body))
}
// Get ETag from response
etag := uploadRes.Header.Get("ETag")
fs.CheckClose(uploadRes.Body, &err)
s.completedPartsMu.Lock()
defer s.completedPartsMu.Unlock()
s.completedParts = append(s.completedParts, api.CompletedPart{
PartNumber: int32(chunkNumber + 1),
ETag: etag,
})
return n, nil
}
// Close complete chunked writer finalising the file.
func (s *shadeChunkWriter) Close(ctx context.Context) error {
// Complete multipart upload
sort.Slice(s.completedParts, func(i, j int) bool {
return s.completedParts[i].PartNumber < s.completedParts[j].PartNumber
})
type completeRequest struct {
Parts []api.CompletedPart `json:"parts"`
}
var completeBody completeRequest
if s.completedParts == nil {
completeBody = completeRequest{Parts: []api.CompletedPart{}}
} else {
completeBody = completeRequest{Parts: s.completedParts}
}
token, err := s.f.refreshJWTToken(ctx)
if err != nil {
return err
}
completeOpts := rest.Opts{
Method: "POST",
Path: fmt.Sprintf("/%s/upload/multipart/complete?token=%s", s.f.drive, url.QueryEscape(s.initToken)),
RootURL: s.f.endpoint,
ExtraHeaders: map[string]string{
"Authorization": "Bearer " + token,
},
}
var response http.Response
err = s.f.pacer.Call(func() (bool, error) {
res, err := s.f.srv.CallJSON(ctx, &completeOpts, completeBody, &response)
if err != nil && res == nil {
return false, err
}
if res.StatusCode == http.StatusTooManyRequests {
return true, err // Retry on 429
}
if res.StatusCode != http.StatusOK && res.StatusCode != http.StatusCreated {
body, _ := io.ReadAll(res.Body)
return false, fmt.Errorf("complete multipart failed with status %d: %s", res.StatusCode, string(body))
}
return false, nil
})
if err != nil {
return fmt.Errorf("failed to complete multipart upload: %w", err)
}
return nil
}
// Abort chunk write
//
// You can and should call Abort without calling Close.
func (s *shadeChunkWriter) Abort(ctx context.Context) error {
token, err := s.f.refreshJWTToken(ctx)
if err != nil {
return err
}
opts := rest.Opts{
Method: "POST",
Path: fmt.Sprintf("/%s/upload/abort/multipart?token=%s", s.f.drive, url.QueryEscape(s.initToken)),
RootURL: s.f.endpoint,
ExtraHeaders: map[string]string{
"Authorization": "Bearer " + token,
},
}
err = s.f.pacer.Call(func() (bool, error) {
res, err := s.f.srv.Call(ctx, &opts)
if err != nil {
fs.Debugf(s.f, "Failed to abort multipart upload: %v", err)
return false, nil // Don't retry abort
}
if res.StatusCode != http.StatusOK && res.StatusCode != http.StatusCreated {
fs.Debugf(s.f, "Abort returned status %d", res.StatusCode)
}
return false, nil
})
if err != nil {
return fmt.Errorf("failed to abort multipart upload: %w", err)
}
return nil
}

View File

@@ -84,6 +84,7 @@ docs = [
"protondrive.md",
"seafile.md",
"sftp.md",
"shade.md",
"smb.md",
"storj.md",
"sugarsync.md",

View File

@@ -389,8 +389,8 @@ func parseHash(str string) (string, string, error) {
if str == "-" {
return "", "", nil
}
if pos := strings.Index(str, ":"); pos > 0 {
name, val := str[:pos], str[pos+1:]
if before, after, ok := strings.Cut(str, ":"); ok {
name, val := before, after
if name != "" && val != "" {
return name, val, nil
}

View File

@@ -26,6 +26,10 @@ Note that |ls| and |lsl| recurse by default - use |--max-depth 1| to stop the re
The other list commands |lsd|,|lsf|,|lsjson| do not recurse by default -
use |-R| to make them recurse.
By default, list commands prefer a recursive listing method (ListR) that uses
more memory but fewer transactions. Use |--disable ListR| to suppress this
behavior. See [|--fast-list|](/docs/#fast-list) for more details.
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).`, "|", "`")
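To illustrate the flags mentioned in this help text (remote and path are placeholders):

```console
# recursive listing without the ListR optimisation (more transactions, less memory)
rclone ls --disable ListR remote:path

# stop the recursion at the top level
rclone ls --max-depth 1 remote:path
```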

View File

@@ -153,7 +153,7 @@ func TestRun(t *testing.T) {
fs.Fatal(nil, "error generating test private key "+privateKeyErr.Error())
}
publicKey, publicKeyError := ssh.NewPublicKey(&privateKey.PublicKey)
if privateKeyErr != nil {
if publicKeyError != nil {
fs.Fatal(nil, "error generating test public key "+publicKeyError.Error())
}

View File

@@ -13,6 +13,26 @@ docs](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)).
`--auth-key` is not provided then `serve s3` will allow anonymous
access.
Like all rclone flags `--auth-key` can be set via environment
variables, in this case `RCLONE_AUTH_KEY`. Since this flag can be
repeated, the input to `RCLONE_AUTH_KEY` is CSV encoded. Because the
`accessKey,secretKey` value contains a comma, it needs to be wrapped in
quotes.
```console
export RCLONE_AUTH_KEY='"user,pass"'
rclone serve s3 ...
```
Or to supply multiple identities:
```console
export RCLONE_AUTH_KEY='"user1,pass1","user2,pass2"'
rclone serve s3 ...
```
Setting this variable without quotes will produce an error.
Please note that some clients may require HTTPS endpoints. See [the
SSL docs](#tls-ssl) for more information.

View File

@@ -70,6 +70,11 @@ func newServer(ctx context.Context, f fs.Fs, opt *Options, vfsOpt *vfscommon.Opt
w.s3Secret = getAuthSecret(opt.AuthKey)
}
authList, err := authlistResolver(opt.AuthKey)
if err != nil {
return nil, fmt.Errorf("parsing auth list failed: %q", err)
}
var newLogger logger
w.faker = gofakes3.New(
newBackend(w),
@@ -77,7 +82,7 @@ func newServer(ctx context.Context, f fs.Fs, opt *Options, vfsOpt *vfscommon.Opt
gofakes3.WithLogger(newLogger),
gofakes3.WithRequestID(rand.Uint64()),
gofakes3.WithoutVersioning(),
gofakes3.WithV4Auth(authlistResolver(opt.AuthKey)),
gofakes3.WithV4Auth(authList),
gofakes3.WithIntegrityCheck(true), // Check Content-MD5 if supplied
)
@@ -92,7 +97,7 @@ func newServer(ctx context.Context, f fs.Fs, opt *Options, vfsOpt *vfscommon.Opt
w._vfs = vfs.New(f, vfsOpt)
if len(opt.AuthKey) > 0 {
w.faker.AddAuthKeys(authlistResolver(opt.AuthKey))
w.faker.AddAuthKeys(authList)
}
}

View File

@@ -3,6 +3,7 @@ package s3
import (
"context"
"encoding/hex"
"errors"
"io"
"os"
"path"
@@ -125,15 +126,14 @@ func rmdirRecursive(p string, VFS *vfs.VFS) {
}
}
func authlistResolver(list []string) map[string]string {
func authlistResolver(list []string) (map[string]string, error) {
authList := make(map[string]string)
for _, v := range list {
parts := strings.Split(v, ",")
if len(parts) != 2 {
fs.Infof(nil, "Ignored: invalid auth pair %s", v)
continue
return nil, errors.New("invalid auth pair: expecting a single comma")
}
authList[parts[0]] = parts[1]
}
return authList
return authList, nil
}

View File

@@ -58,10 +58,10 @@ type conn struct {
// interoperate with the rclone sftp backend
func (c *conn) execCommand(ctx context.Context, out io.Writer, command string) (err error) {
binary, args := command, ""
space := strings.Index(command, " ")
if space >= 0 {
binary = command[:space]
args = strings.TrimLeft(command[space+1:], " ")
before, after, ok := strings.Cut(command, " ")
if ok {
binary = before
args = strings.TrimLeft(after, " ")
}
args = shellUnEscape(args)
fs.Debugf(c.what, "exec command: binary = %q, args = %q", binary, args)

View File

@@ -45,6 +45,10 @@ var OptionsInfo = fs.Options{{
Name: "disable_dir_list",
Default: false,
Help: "Disable HTML directory list on GET request for a directory",
}, {
Name: "disable_zip",
Default: false,
Help: "Disable zip download of directories",
}}.
Add(libhttp.ConfigInfo).
Add(libhttp.AuthConfigInfo).
@@ -57,6 +61,7 @@ type Options struct {
Template libhttp.TemplateConfig
EtagHash string `config:"etag_hash"`
DisableDirList bool `config:"disable_dir_list"`
DisableZip bool `config:"disable_zip"`
}
// Opt is options set by command line flags
@@ -408,6 +413,24 @@ func (w *WebDAV) serveDir(rw http.ResponseWriter, r *http.Request, dirRemote str
return
}
dir := node.(*vfs.Dir)
if r.URL.Query().Get("download") == "zip" && !w.opt.DisableZip {
fs.Infof(dirRemote, "%s: Zipping directory", r.RemoteAddr)
zipName := path.Base(dirRemote)
if dirRemote == "" {
zipName = "root"
}
rw.Header().Set("Content-Disposition", "attachment; filename=\""+zipName+".zip\"")
rw.Header().Set("Content-Type", "application/zip")
rw.Header().Set("Last-Modified", time.Now().UTC().Format(http.TimeFormat))
err := vfs.CreateZip(ctx, dir, rw)
if err != nil {
serve.Error(ctx, dirRemote, rw, "Failed to create zip", err)
return
}
return
}
dirEntries, err := dir.ReadDirAll()
if err != nil {
@@ -417,6 +440,7 @@ func (w *WebDAV) serveDir(rw http.ResponseWriter, r *http.Request, dirRemote str
// Make the entries for display
directory := serve.NewDirectory(dirRemote, w.server.HTMLTemplate())
directory.DisableZip = w.opt.DisableZip
for _, node := range dirEntries {
if vfscommon.Opt.NoModTime {
directory.AddHTMLEntry(node.Path(), node.IsDir(), node.Size(), time.Time{})

View File

@@ -56,22 +56,22 @@ var speedCmd = &cobra.Command{
Short: `Run a speed test to the remote`,
Long: `Run a speed test to the remote.
This command runs a series of uploads and downloads to the remote, measuring
and printing the speed of each test using varying file sizes and numbers of
files.
This command runs a series of uploads and downloads to the remote, measuring
and printing the speed of each test using varying file sizes and numbers of
files.
Test time can be inaccurate with small file caps and large files, as it
uses the results of an initial test to determine how many files to use in
each subsequent test.
Test time can be inaccurate with small file caps and large files, as it
uses the results of an initial test to determine how many files to use in
each subsequent test.
It is recommended to use -q flag for a simpler output. e.g.:
rlone test speed remote: -q
It is recommended to use -q flag for a simpler output. e.g.:
**NB** This command will create and delete files on the remote in a randomly
named directory which should be tidied up after.
rclone test speed remote: -q
You can use the --json flag to only print the results in JSON format.`,
**NB** This command will create and delete files on the remote in a randomly
named directory which will be automatically removed on a clean exit.
You can use the --json flag to only print the results in JSON format.`,
Annotations: map[string]string{
"versionIntroduced": "v1.72",
},

View File

@@ -202,6 +202,7 @@ WebDAV or S3, that work out of the box.)
{{< provider name="Selectel" home="https://selectel.ru/services/cloud/storage/" config="/s3/#selectel" >}}
{{< provider name="Servercore Object Storage" home="https://servercore.com/services/object-storage/" config="/s3/#servercore" >}}
{{< provider name="SFTP" home="https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol" config="/sftp/" >}}
{{< provider name="Shade" home="https://shade.inc" config="/shade/" >}}
{{< provider name="Sia" home="https://sia.tech/" config="/sia/" >}}
{{< provider name="SMB / CIFS" home="https://en.wikipedia.org/wiki/Server_Message_Block" config="/smb/" >}}
{{< provider name="Spectra Logic" home="https://spectralogic.com/blackpearl-nearline-object-gateway/" config="/s3/#spectralogic" >}}

View File

@@ -237,7 +237,6 @@ It would be possible to add ISO support fairly easily as the library we use ([go
It would be possible to add write support, but this would only be for creating new archives, not for updating existing archives.
<!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/archive/archive.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
### Standard options
Here are the Standard options specific to archive (Read archives).

View File

@@ -1047,3 +1047,16 @@ put them back in again. -->
- Sean Turner <30396892+seanturner026@users.noreply.github.com>
- jijamik <30904953+jijamik@users.noreply.github.com>
- Dominik Sander <git@dsander.de>
- Nikolay Kiryanov <nikolay@kiryanov.ru>
- Diana <5275194+DianaNites@users.noreply.github.com>
- Duncan Smart <duncan.smart@gmail.com>
- vicerace <vicerace@sohu.com>
- Cliff Frey <cliff@openai.com>
- Vladislav Tropnikov <vtr.name@gmail.com>
- Leo <i@hardrain980.com>
- Johannes Rothe <mail@johannes-rothe.de>
- Tingsong Xu <tingsong.xu@rightcapital.com>
- Jonas Tingeborn <134889+jojje@users.noreply.github.com>
- jhasse-shade <jacob@shade.inc>
- vyv03354 <VYV03354@nifty.ne.jp>
- masrlinu <masrlinu@users.noreply.github.com> <5259918+masrlinu@users.noreply.github.com>

View File

@@ -103,6 +103,26 @@ MD5 hashes are stored with blobs. However blobs that were uploaded in
chunks only have an MD5 if the source remote was capable of MD5
hashes, e.g. the local disk.
### Metadata and tags
Rclone can map arbitrary metadata to Azure Blob headers, user metadata, and tags
when `--metadata` is enabled (or when using `--metadata-set` / `--metadata-mapper`).
- Headers: Set these keys in metadata to map to the corresponding blob headers:
- `cache-control`, `content-disposition`, `content-encoding`, `content-language`, `content-type`.
- User metadata: Any other non-reserved keys are written as user metadata
(keys are normalized to lowercase). Keys starting with `x-ms-` are reserved and
are not stored as user metadata.
- Tags: Provide `x-ms-tags` as a comma-separated list of `key=value` pairs, e.g.
`x-ms-tags=env=dev,team=sync`. These are applied as blob tags on upload and on
server-side copies. Whitespace around keys/values is ignored.
- Modtime override: Provide `mtime` in RFC3339/RFC3339Nano format to override the
stored modtime persisted in user metadata. If `mtime` cannot be parsed, rclone
logs a debug message and ignores the override.
Notes:
- Rclone ignores reserved `x-ms-*` keys (except `x-ms-tags`) for user metadata.
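An illustrative invocation combining the mappings above (remote name, file and values are placeholders):

```console
rclone copyto report.txt azblob:container/report.txt \
  --metadata-set "content-type=text/plain" \
  --metadata-set "x-ms-tags=env=dev,team=sync" \
  --metadata-set "mtime=2024-01-02T15:04:05Z"
```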
### Performance
When uploading large files, increasing the value of
@@ -959,13 +979,13 @@ Properties:
- Type: string
- Required: false
- Examples:
- ""
- The container and its blobs can be accessed only with an authorized request.
- It's a default value.
- "blob"
- Blob data within this container can be read via anonymous request.
- "container"
- Allow full public read access for container and blob data.
- ""
- The container and its blobs can be accessed only with an authorized request.
- It's a default value.
- "blob"
- Blob data within this container can be read via anonymous request.
- "container"
- Allow full public read access for container and blob data.
#### --azureblob-directory-markers
@@ -1022,12 +1042,12 @@ Properties:
- Type: string
- Required: false
- Choices:
- ""
- By default, the delete operation fails if a blob has snapshots
- "include"
- Specify 'include' to remove the root blob and all its snapshots
- "only"
- Specify 'only' to remove only the snapshots but keep the root blob.
- ""
- By default, the delete operation fails if a blob has snapshots
- "include"
- Specify 'include' to remove the root blob and all its snapshots
- "only"
- Specify 'only' to remove only the snapshots but keep the root blob.
#### --azureblob-description

View File

@@ -283,7 +283,7 @@ It is useful to know how many requests are sent to the server in different scena
All copy commands send the following 4 requests:
```text
/b2api/v1/b2_authorize_account
/b2api/v4/b2_authorize_account
/b2api/v1/b2_create_bucket
/b2api/v1/b2_list_buckets
/b2api/v1/b2_list_file_names
@@ -667,6 +667,71 @@ Properties:
- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
#### --b2-sse-customer-algorithm
If using SSE-C, the server-side encryption algorithm used when storing this object in B2.
Properties:
- Config: sse_customer_algorithm
- Env Var: RCLONE_B2_SSE_CUSTOMER_ALGORITHM
- Type: string
- Required: false
- Examples:
- ""
- None
- "AES256"
- Advanced Encryption Standard (256 bits key length)
#### --b2-sse-customer-key
To use SSE-C, you may provide the secret encryption key encoded in a UTF-8 compatible string to encrypt/decrypt your data
Alternatively you can provide --sse-customer-key-base64.
Properties:
- Config: sse_customer_key
- Env Var: RCLONE_B2_SSE_CUSTOMER_KEY
- Type: string
- Required: false
- Examples:
- ""
- None
#### --b2-sse-customer-key-base64
To use SSE-C, you may provide the secret encryption key encoded in Base64 format to encrypt/decrypt your data
Alternatively you can provide --sse-customer-key.
Properties:
- Config: sse_customer_key_base64
- Env Var: RCLONE_B2_SSE_CUSTOMER_KEY_BASE64
- Type: string
- Required: false
- Examples:
- ""
- None
#### --b2-sse-customer-key-md5
If using SSE-C you may provide the secret encryption key MD5 checksum (optional).
If you leave it blank, this is calculated automatically from the sse_customer_key provided.
Properties:
- Config: sse_customer_key_md5
- Env Var: RCLONE_B2_SSE_CUSTOMER_KEY_MD5
- Type: string
- Required: false
- Examples:
- ""
- None
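For reference, a minimal config sketch using these SSE-C options (account details and the key are placeholders; set only one of the key forms):

```ini
[b2sse]
type = b2
account = <application key id>
key = <application key>
sse_customer_algorithm = AES256
sse_customer_key_base64 = <base64-encoded encryption key>
```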
#### --b2-description
Description of the remote.
@@ -682,9 +747,11 @@ Properties:
Here are the commands specific to the b2 backend.
Run them with
Run them with:
rclone backend COMMAND remote:
```console
rclone backend COMMAND remote:
```
The help below will explain what arguments each command takes.
@@ -696,35 +763,41 @@ These can be run on a running backend using the rc command
### lifecycle
Read or set the lifecycle for a bucket
Read or set the lifecycle for a bucket.
rclone backend lifecycle remote: [options] [<arguments>+]
```console
rclone backend lifecycle remote: [options] [<arguments>+]
```
This command can be used to read or set the lifecycle for a bucket.
Usage Examples:
To show the current lifecycle rules:
rclone backend lifecycle b2:bucket
```console
rclone backend lifecycle b2:bucket
```
This will dump something like this showing the lifecycle rules.
[
{
"daysFromHidingToDeleting": 1,
"daysFromUploadingToHiding": null,
"daysFromStartingToCancelingUnfinishedLargeFiles": null,
"fileNamePrefix": ""
}
]
```json
[
{
"daysFromHidingToDeleting": 1,
"daysFromUploadingToHiding": null,
"daysFromStartingToCancelingUnfinishedLargeFiles": null,
"fileNamePrefix": ""
}
]
```
If there are no lifecycle rules (the default) then it will just return [].
If there are no lifecycle rules (the default) then it will just return `[]`.
To reset the current lifecycle rules:
rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30
rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1
```console
rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30
rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1
```
This will run and then print the new lifecycle rules as above.
@@ -736,22 +809,27 @@ the daysFromHidingToDeleting to 1 day. You can enable hard_delete in
the config also which will mean deletions won't cause versions but
overwrites will still cause versions to be made.
rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1
See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
```console
rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1
```
See: <https://www.backblaze.com/docs/cloud-storage-lifecycle-rules>
Options:
- "daysFromHidingToDeleting": After a file has been hidden for this many days it is deleted. 0 is off.
- "daysFromStartingToCancelingUnfinishedLargeFiles": Cancels any unfinished large file versions after this many days
- "daysFromUploadingToHiding": This many days after uploading a file is hidden
- "daysFromHidingToDeleting": After a file has been hidden for this many days
it is deleted. 0 is off.
- "daysFromStartingToCancelingUnfinishedLargeFiles": Cancels any unfinished
large file versions after this many days.
- "daysFromUploadingToHiding": This many days after uploading a file is hidden.
### cleanup
Remove unfinished large file uploads.
rclone backend cleanup remote: [options] [<arguments>+]
```console
rclone backend cleanup remote: [options] [<arguments>+]
```
This command removes unfinished large file uploads of age greater than
max-age, which defaults to 24 hours.
@@ -759,29 +837,33 @@ max-age, which defaults to 24 hours.
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
rclone backend cleanup b2:bucket/path/to/object
rclone backend cleanup -o max-age=7w b2:bucket/path/to/object
```console
rclone backend cleanup b2:bucket/path/to/object
rclone backend cleanup -o max-age=7w b2:bucket/path/to/object
```
Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
Options:
- "max-age": Max age of upload to delete
- "max-age": Max age of upload to delete.
### cleanup-hidden
Remove old versions of files.
rclone backend cleanup-hidden remote: [options] [<arguments>+]
```console
rclone backend cleanup-hidden remote: [options] [<arguments>+]
```
This command removes any old hidden versions of files.
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
rclone backend cleanup-hidden b2:bucket/path/to/dir
```console
rclone backend cleanup-hidden b2:bucket/path/to/dir
```
<!-- autogenerated options stop -->

View File

@@ -1047,20 +1047,16 @@ encodings.)
The following backends have known issues that need more investigation:
<!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
- `TestGoFile` (`gofile`)
- [`TestBisyncRemoteLocal/all_changed`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [`TestBisyncRemoteLocal/backupdir`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [`TestBisyncRemoteLocal/basic`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [`TestBisyncRemoteLocal/changes`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [`TestBisyncRemoteLocal/check_access`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [78 more](https://pub.rclone.org/integration-tests/current/)
- Updated: 2025-08-21-010015
- `TestDropbox` (`dropbox`)
- [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
- Updated: 2025-11-21-010037
<!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
The following backends either have not been tested recently or have known issues
that are deemed unfixable for the time being:
<!--- start list_ignores - DO NOT EDIT THIS SECTION - use make commanddocs --->
- `TestArchive` (`archive`)
- `TestCache` (`cache`)
- `TestFileLu` (`filelu`)
- `TestFilesCom` (`filescom`)

View File

@@ -323,6 +323,19 @@ Properties:
- Type: string
- Required: false
#### --box-config-credentials
Box App config.json contents.
Leave blank normally.
Properties:
- Config: config_credentials
- Env Var: RCLONE_BOX_CONFIG_CREDENTIALS
- Type: string
- Required: false
#### --box-access-token
Box App Primary Access Token
@@ -347,10 +360,10 @@ Properties:
- Type: string
- Default: "user"
- Examples:
- "user"
- Rclone should act on behalf of a user.
- "enterprise"
- Rclone should act on behalf of a service account.
- "user"
- Rclone should act on behalf of a user.
- "enterprise"
- Rclone should act on behalf of a service account.
### Advanced options

View File

@@ -394,12 +394,12 @@ Properties:
- Type: SizeSuffix
- Default: 5Mi
- Examples:
- "1M"
- 1 MiB
- "5M"
- 5 MiB
- "10M"
- 10 MiB
- "1M"
- 1 MiB
- "5M"
- 5 MiB
- "10M"
- 10 MiB
#### --cache-info-age
@@ -414,12 +414,12 @@ Properties:
- Type: Duration
- Default: 6h0m0s
- Examples:
- "1h"
- 1 hour
- "24h"
- 24 hours
- "48h"
- 48 hours
- "1h"
- 1 hour
- "24h"
- 24 hours
- "48h"
- 48 hours
#### --cache-chunk-total-size
@@ -435,12 +435,12 @@ Properties:
- Type: SizeSuffix
- Default: 10Gi
- Examples:
- "500M"
- 500 MiB
- "1G"
- 1 GiB
- "10G"
- 10 GiB
- "500M"
- 500 MiB
- "1G"
- 1 GiB
- "10G"
- 10 GiB
### Advanced options
@@ -698,9 +698,11 @@ Properties:
Here are the commands specific to the cache backend.
Run them with
Run them with:
rclone backend COMMAND remote:
```console
rclone backend COMMAND remote:
```
The help below will explain what arguments each command takes.
@@ -714,6 +716,8 @@ These can be run on a running backend using the rc command
Print stats on the cache backend in JSON format.
rclone backend stats remote: [options] [<arguments>+]
```console
rclone backend stats remote: [options] [<arguments>+]
```
<!-- autogenerated options stop -->

View File

@@ -6,6 +6,146 @@ description: "Rclone Changelog"
# Changelog
## v1.72.1 - 2025-12-10
[See commits](https://github.com/rclone/rclone/compare/v1.72.0...v1.72.1)
- Bug Fixes
- build: update to go1.25.5 to fix [CVE-2025-61729](https://pkg.go.dev/vuln/GO-2025-4155)
- doc fixes (Duncan Smart, Nick Craig-Wood)
- configfile: Fix piped config support (Jonas Tingeborn)
- log
- Fix PID not included in JSON log output (Tingsong Xu)
- Fix backtrace not going to the --log-file (Nick Craig-Wood)
- Google Cloud Storage
- Improve endpoint parameter docs (Johannes Rothe)
- S3
- Add missing regions for Selectel provider (Nick Craig-Wood)
## v1.72.0 - 2025-11-21
[See commits](https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0)
- New backends
- [Archive](/archive) backend to read archives on cloud storage. (Nick Craig-Wood)
- New S3 providers
- [Cubbit Object Storage](/s3/#Cubbit) (Marco Ferretti)
- [FileLu S5 Object Storage](/s3/#filelu-s5) (kingston125)
- [Hetzner Object Storage](/s3/#hetzner) (spiffytech)
- [Intercolo Object Storage](/s3/#intercolo) (Robin Rolf)
- [Rabata S3-compatible secure cloud storage](/s3/#Rabata) (dougal)
- [Servercore Object Storage](/s3/#servercore) (dougal)
- [SpectraLogic](/s3/#spectralogic) (dougal)
- New commands
- [rclone archive](/commands/rclone_archive/): command to create and read archive files (Fawzib Rojas)
- [rclone config string](/commands/rclone_config_string/): for making connection strings (Nick Craig-Wood)
- [rclone test speed](/commands/rclone_test_speed/): Add command to test a specified remotes speed (dougal)
- New Features
- backends: many backends have had a paged listing (`ListP`) interface added
- this enables progress when listing large directories and reduced memory usage
- build
- Bump golang.org/x/crypto from 0.43.0 to 0.45.0 to fix CVE-2025-58181 (dependabot[bot])
- Modernize code and tests (Nick Craig-Wood, russcoss, juejinyuxitu, reddaisyy, dulanting, Oleksandr Redko)
- Update all dependencies (Nick Craig-Wood)
- Enable support for `aix/ppc64` (Lakshmi-Surekha)
- check: Improved reporting of differences in sizes and contents (albertony)
- copyurl: Added `--url` to read URLs from CSV file (S-Pegg1, dougal)
- docs:
- markdown linting (albertony)
- fixes (albertony, Andrew Gunnerson, anon-pradip, Claudius Ellsel, dougal, iTrooz, Jean-Christophe Cura, Joseph Brownlee, kapitainsky, Matt LaPaglia, n4n5, Nick Craig-Wood, nielash, SublimePeace, Ted Robertson, vastonus)
- fs: remove unnecessary Seek call on log file (Aneesh Agrawal)
- hashsum: Improved output format when listing algorithms (albertony)
- lib/http: Cleanup indentation and other whitespace in http serve template (albertony)
- lsf: Add support for `unix` and `unixnano` time formats (Motte)
- oauthutil: Improved debug logs from token refresh (albertony)
- rc
- Add [job/batch](/rc/#job-batch) for sending batches of rc commands to run concurrently (Nick Craig-Wood)
- Add `runningIds` and `finishedIds` to [job/list](/rc/#job-list) (n4n5)
- Add `osVersion`, `osKernel` and `osArch` to [core/version](/rc/#core-version) (Nick Craig-Wood)
- Make sure fatal errors run via the rc don't crash rclone (Nick Craig-Wood)
- Add `executeId` to job statuses in [job/list](/rc/#job-list) (Nikolay Kiryanov)
- `config/unlock`: rename parameter to `configPassword`, accepting the old name as well (Nick Craig-Wood)
- serve http: Download folders as zip (dougal)
- Bug Fixes
- build
- Fix tls: failed to verify certificate: x509: negative serial number (Nick Craig-Wood)
- march
- Fix `--no-traverse` being very slow (Nick Craig-Wood)
- serve s3: Fix log output to remove the EXTRA messages (iTrooz)
- Mount
- Windows: improve error message on missing WinFSP (divinity76)
- Local
- Add `--skip-specials` to ignore special files (Adam Dinwoodie)
- Azure Blob
- Add ListP interface (dougal)
- Azurefiles
- Add ListP interface (Nick Craig-Wood)
- B2
- Add ListP interface (dougal)
- Add Server-Side encryption support (fries1234)
- Fix "expected a FileSseMode but found: ''" (dougal)
- Allow individual old versions to be deleted with `--b2-versions` (dougal)
- Box
- Add ListP interface (Nick Craig-Wood)
- Allow configuration with config file contents (Dominik Sander)
- Compress
- Add zstd compression (Alex)
- Drive
- Add ListP interface (Nick Craig-Wood)
- Dropbox
- Add ListP interface (Nick Craig-Wood)
- Fix error moving just created objects (Nick Craig-Wood)
- FTP
- Fix SOCKS proxy support (dougal)
- Fix transfers from servers that return 250 ok messages (jijamik)
- Google Cloud Storage
- Add ListP interface (dougal)
- Fix `--gcs-storage-class` to work with server side copy for objects (Riaz Arbi)
- HTTP
- Add basic metadata and provide it via serve (Oleg Kunitsyn)
- Jottacloud
- Add support for Let's Go Cloud (from MediaMarkt) as a whitelabel service (albertony)
- Add support for MediaMarkt Cloud as a whitelabel service (albertony)
- Added support for traditional oauth authentication also for the main service (albertony)
- Abort attempts to run unsupported rclone authorize command (albertony)
- Improved token refresh handling (albertony)
- Fix legacy authentication (albertony)
- Fix authentication for whitelabel services from Elkjøp subsidiaries (albertony)
- Mega
- Implement 2FA login (iTrooz)
- Memory
- Add ListP interface (dougal)
- Onedrive
- Add ListP interface (Nick Craig-Wood)
- Oracle Object Storage
- Add ListP interface (dougal)
- Pcloud
- Add ListP interface (Nick Craig-Wood)
- Proton Drive
- Automated 2FA login with OTP secret key (Microscotch)
- S3
- Make it easier to add new S3 providers (dougal)
- Add `--s3-use-data-integrity-protections` quirk to fix BadDigest error in Alibaba, Tencent (hunshcn)
- Add support for `--upload-header`, `If-Match` and `If-None-Match` (Sean Turner)
- Fix single file copying behavior with low permission (hunshcn)
- SFTP
- Fix zombie SSH processes with `--sftp-ssh` (Copilot)
- Smb
- Optimize smb mount performance by avoiding stat checks during initialization (Sudipto Baral)
- Swift
- Add ListP interface (dougal)
- If storage_policy isn't set, use the root containers policy (Andrew Ruthven)
- Report disk usage in segment containers (Andrew Ruthven)
- Ulozto
- Implement the About functionality (Lukas Krejci)
- Fix downloads returning HTML error page (aliaj1)
- WebDAV
- Optimize bearer token fetching with singleflight (hunshcn)
- Add ListP interface (Nick Craig-Wood)
- Use SpaceSepList to parse bearer token command (hunshcn)
- Add `Access-Control-Max-Age` header for CORS preflight caching (viocha)
- Fix out of memory with sharepoint-ntlm when uploading large file (Nick Craig-Wood)
## v1.71.2 - 2025-10-20
[See commits](https://github.com/rclone/rclone/compare/v1.71.1...v1.71.2)

View File

@@ -356,22 +356,22 @@ Properties:
- Type: string
- Default: "md5"
- Examples:
- "none"
- Pass any hash supported by wrapped remote for non-chunked files.
- Return nothing otherwise.
- "md5"
- MD5 for composite files.
- "sha1"
- SHA1 for composite files.
- "md5all"
- MD5 for all files.
- "sha1all"
- SHA1 for all files.
- "md5quick"
- Copying a file to chunker will request MD5 from the source.
- Falling back to SHA1 if unsupported.
- "sha1quick"
- Similar to "md5quick" but prefers SHA1 over MD5.
- "none"
- Pass any hash supported by wrapped remote for non-chunked files.
- Return nothing otherwise.
- "md5"
- MD5 for composite files.
- "sha1"
- SHA1 for composite files.
- "md5all"
- MD5 for all files.
- "sha1all"
- SHA1 for all files.
- "md5quick"
- Copying a file to chunker will request MD5 from the source.
- Falling back to SHA1 if unsupported.
- "sha1quick"
- Similar to "md5quick" but prefers SHA1 over MD5.
### Advanced options
@@ -421,13 +421,13 @@ Properties:
- Type: string
- Default: "simplejson"
- Examples:
- "none"
- Do not use metadata files at all.
- Requires hash type "none".
- "simplejson"
- Simple JSON supports hash sums and chunk validation.
-
- It has the following fields: ver, size, nchunks, md5, sha1.
- "none"
- Do not use metadata files at all.
- Requires hash type "none".
- "simplejson"
- Simple JSON supports hash sums and chunk validation.
-
- It has the following fields: ver, size, nchunks, md5, sha1.
#### --chunker-fail-hard
@@ -440,10 +440,10 @@ Properties:
- Type: bool
- Default: false
- Examples:
- "true"
- Report errors and abort current command.
- "false"
- Warn user, skip incomplete file and proceed.
- "true"
- Report errors and abort current command.
- "false"
- Warn user, skip incomplete file and proceed.
#### --chunker-transactions
@@ -456,19 +456,19 @@ Properties:
- Type: string
- Default: "rename"
- Examples:
- "rename"
- Rename temporary files after a successful transaction.
- "norename"
- Leave temporary file names and write transaction ID to metadata file.
- Metadata is required for no rename transactions (meta format cannot be "none").
- If you are using norename transactions you should be careful not to downgrade Rclone
- as older versions of Rclone don't support this transaction style and will misinterpret
- files manipulated by norename transactions.
- This method is EXPERIMENTAL, don't use on production systems.
- "auto"
- Rename or norename will be used depending on capabilities of the backend.
- If meta format is set to "none", rename transactions will always be used.
- This method is EXPERIMENTAL, don't use on production systems.
- "rename"
- Rename temporary files after a successful transaction.
- "norename"
- Leave temporary file names and write transaction ID to metadata file.
- Metadata is required for no rename transactions (meta format cannot be "none").
- If you are using norename transactions you should be careful not to downgrade Rclone
- as older versions of Rclone don't support this transaction style and will misinterpret
- files manipulated by norename transactions.
- This method is EXPERIMENTAL, don't use on production systems.
- "auto"
- Rename or norename will be used depending on capabilities of the backend.
- If meta format is set to "none", rename transactions will always be used.
- This method is EXPERIMENTAL, don't use on production systems.
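As a sketch (the remote name `mychunker` is hypothetical), the transaction style of an existing chunker remote could be changed with:
```sh
rclone config update mychunker transactions norename
```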
#### --chunker-description

View File

@@ -15,8 +15,6 @@ mounting them, listing them in lots of different ways.
See the home page (https://rclone.org/) for installation, usage,
documentation, changelog and configuration walkthroughs.
```
rclone [flags]
```
@@ -26,6 +24,8 @@ rclone [flags]
```
--alias-description string Description of the remote
--alias-remote string Remote or path to alias
--archive-description string Description of the remote
--archive-remote string Remote to wrap to read archives from
--ask-password Allow prompt for password for encrypted configuration (default true)
--auto-confirm If enabled, do not request console confirmation
--azureblob-access-tier string Access tier of blob: hot, cool, cold or archive
@@ -105,6 +105,10 @@ rclone [flags]
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files
--b2-key string Application Key
--b2-lifecycle int Set the number of days deleted files should be kept when creating a bucket
--b2-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in B2
--b2-sse-customer-key string To use SSE-C, you may provide the secret encryption key encoded in a UTF-8 compatible string to encrypt/decrypt your data
--b2-sse-customer-key-base64 string To use SSE-C, you may provide the secret encryption key encoded in Base64 format to encrypt/decrypt your data
--b2-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
--b2-upload-concurrency int Concurrency for multipart uploads (default 4)
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
@@ -181,7 +185,7 @@ rclone [flags]
--combine-upstreams SpaceSepList Upstreams for combining
--compare-dest stringArray Include additional server-side paths during comparison
--compress-description string Description of the remote
--compress-level int GZIP compression level (-2 to 9) (default -1)
--compress-level string GZIP (levels -2 to 9):
--compress-mode string Compression mode (default "gzip")
--compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi)
--compress-remote string Remote to compress
@@ -549,6 +553,7 @@ rclone [flags]
--max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
--max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
--mega-2fa string The 2FA code of your MEGA account if the account is set up with one
--mega-debug Output more debug from Mega
--mega-description string Description of the remote
--mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
@@ -715,6 +720,7 @@ rclone [flags]
--protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
--protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured)
--protondrive-original-file-size Return the file size before encryption (default true)
--protondrive-otp-secret-key string The OTP secret key (obscured)
--protondrive-password string The password of your proton account (obscured)
--protondrive-replace-existing-draft Create a new revision when filename conflict is detected
--protondrive-username string The username of your proton account
@@ -831,6 +837,7 @@ rclone [flags]
--s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset)
--s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset)
--s3-use-arn-region If true, enables arn region support for the service
--s3-use-data-integrity-protections Tristate If true use AWS S3 data integrity protections (default unset)
--s3-use-dual-stack If true use AWS S3 dual-stack endpoint (IPv6 support)
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
--s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
@@ -915,6 +922,7 @@ rclone [flags]
--sia-user-agent string Siad User Agent (default "Sia-Agent")
--size-only Skip based on size only, not modtime or checksum
--skip-links Don't warn about skipped symlinks
--skip-specials Don't warn about skipped pipes, sockets and device objects
--smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
--smb-description string Description of the remote
--smb-domain string Domain name for NTLM authentication (default "WORKGROUP")
@@ -1015,7 +1023,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.71.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-auth-redirect Preserve authentication on redirect
@@ -1057,7 +1065,11 @@ rclone [flags]
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone about](/commands/rclone_about/) - Get quota information from the remote.
* [rclone archive](/commands/rclone_archive/) - Perform an action on an archive.
* [rclone authorize](/commands/rclone_authorize/) - Remote authorization.
* [rclone backend](/commands/rclone_backend/) - Run a backend-specific command.
* [rclone bisync](/commands/rclone_bisync/) - Perform bidirectional synchronization between two paths.
@@ -1111,3 +1123,5 @@ rclone [flags]
* [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion.
* [rclone version](/commands/rclone_version/) - Show the version number.
<!-- markdownlint-restore -->

View File

@@ -15,40 +15,46 @@ output. The output is typically used, free, quota and trash contents.
E.g. Typical output from `rclone about remote:` is:
Total: 17 GiB
Used: 7.444 GiB
Free: 1.315 GiB
Trashed: 100.000 MiB
Other: 8.241 GiB
```text
Total: 17 GiB
Used: 7.444 GiB
Free: 1.315 GiB
Trashed: 100.000 MiB
Other: 8.241 GiB
```
Where the fields are:
* Total: Total size available.
* Used: Total size used.
* Free: Total space available to this user.
* Trashed: Total space used by trash.
* Other: Total amount in other storage (e.g. Gmail, Google Photos).
* Objects: Total number of objects in the storage.
- Total: Total size available.
- Used: Total size used.
- Free: Total space available to this user.
- Trashed: Total space used by trash.
- Other: Total amount in other storage (e.g. Gmail, Google Photos).
- Objects: Total number of objects in the storage.
All sizes are in number of bytes.
Applying a `--full` flag to the command prints the bytes in full, e.g.
Total: 18253611008
Used: 7993453766
Free: 1411001220
Trashed: 104857602
Other: 8849156022
```text
Total: 18253611008
Used: 7993453766
Free: 1411001220
Trashed: 104857602
Other: 8849156022
```
A `--json` flag generates conveniently machine-readable output, e.g.
{
"total": 18253611008,
"used": 7993453766,
"trashed": 104857602,
"other": 8849156022,
"free": 1411001220
}
```json
{
"total": 18253611008,
"used": 7993453766,
"trashed": 104857602,
"other": 8849156022,
"free": 1411001220
}
```
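For example, the machine-readable output above can be piped into a JSON tool; `jq` here is illustrative and not part of rclone:
```sh
rclone about remote: --json | jq .used
```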
Not all backends print all fields. Information is not included if it is not
provided by a backend. Where the value is unlimited it is omitted.
@@ -56,7 +62,6 @@ provided by a backend. Where the value is unlimited it is omitted.
Some backends do not support the `rclone about` command at all,
see complete list in [documentation](https://rclone.org/overview/#optional-features).
```
rclone about remote: [flags]
```
@@ -73,5 +78,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

View File

@@ -0,0 +1,47 @@
---
title: "rclone archive"
description: "Perform an action on an archive."
versionIntroduced: v1.72
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/archive/ and as part of making a release run "make commanddocs"
---
# rclone archive
Perform an action on an archive.
## Synopsis
Perform an action on an archive. Requires the use of a
subcommand to specify the protocol, e.g.
rclone archive list remote:file.zip
Each subcommand has its own options which you can see in their help.
See [rclone archive create](/commands/rclone_archive_create/) for the
archive formats supported.
```
rclone archive <action> [opts] <source> [<destination>] [flags]
```
## Options
```
-h, --help help for archive
```
See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone archive create](/commands/rclone_archive_create/) - Archive source file(s) to destination.
* [rclone archive extract](/commands/rclone_archive_extract/) - Extract archives from source to destination.
* [rclone archive list](/commands/rclone_archive_list/) - List archive contents from source.
<!-- markdownlint-restore -->

View File

@@ -0,0 +1,95 @@
---
title: "rclone archive create"
description: "Archive source file(s) to destination."
versionIntroduced: v1.72
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/archive/create/ and as part of making a release run "make commanddocs"
---
# rclone archive create
Archive source file(s) to destination.
## Synopsis
Creates an archive from the files in source:path and saves the archive to
dest:path. If dest:path is missing, it will write to the console.
The valid formats for the `--format` flag are listed below. If
`--format` is not set rclone will guess it from the extension of dest:path.
| Format | Extensions |
|:-------|:-----------|
| zip | .zip |
| tar | .tar |
| tar.gz | .tar.gz, .tgz, .taz |
| tar.bz2| .tar.bz2, .tb2, .tbz, .tbz2, .tz2 |
| tar.lz | .tar.lz |
| tar.lz4| .tar.lz4 |
| tar.xz | .tar.xz, .txz |
| tar.zst| .tar.zst, .tzst |
| tar.br | .tar.br |
| tar.sz | .tar.sz |
| tar.mz | .tar.mz |
The `--prefix` and `--full-path` flags control the prefix for the files
in the archive.
If the flag `--full-path` is set then the files will have the full source
path as the prefix.
If the flag `--prefix=<value>` is set then the files will have
`<value>` as prefix. It's possible to create invalid file names with
`--prefix=<value>`, so use it with caution. The `--prefix` flag takes
priority over `--full-path`.
Given a directory `/sourcedir` with the following:
file1.txt
dir1/file2.txt
Running the command `rclone archive create /sourcedir /dest.tar.gz`
will make an archive with the contents:
file1.txt
dir1/
dir1/file2.txt
Running the command `rclone archive create --full-path /sourcedir /dest.tar.gz`
will make an archive with the contents:
sourcedir/file1.txt
sourcedir/dir1/
sourcedir/dir1/file2.txt
Running the command `rclone archive create --prefix=my_new_path /sourcedir /dest.tar.gz`
will make an archive with the contents:
my_new_path/file1.txt
my_new_path/dir1/
my_new_path/dir1/file2.txt
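When the destination extension does not indicate the format, it can be given explicitly with `--format`; the paths below are placeholders:
```sh
rclone archive create --format tar.gz /sourcedir remote:backups/sourcedir-archive
```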
```
rclone archive create [flags] <source> [<destination>]
```
## Options
```
--format string Create the archive with format or guess from extension.
--full-path Set prefix for files in archive to source path
-h, --help help for create
--prefix string Set prefix for files in archive to entered value or source path
```
See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone archive](/commands/rclone_archive/) - Perform an action on an archive.
<!-- markdownlint-restore -->

View File

@@ -0,0 +1,81 @@
---
title: "rclone archive extract"
description: "Extract archives from source to destination."
versionIntroduced: v1.72
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/archive/extract/ and as part of making a release run "make commanddocs"
---
# rclone archive extract
Extract archives from source to destination.
## Synopsis
Extract the archive contents to a destination directory auto detecting
the format. See [rclone archive create](/commands/rclone_archive_create/)
for the archive formats supported.
For example on this archive:
```
$ rclone archive list --long remote:archive.zip
6 2025-10-30 09:46:23.000000000 file.txt
0 2025-10-30 09:46:57.000000000 dir/
4 2025-10-30 09:46:57.000000000 dir/bye.txt
```
You can run extract like this
```
$ rclone archive extract remote:archive.zip remote:extracted
```
Which gives this result
```
$ rclone tree remote:extracted
/
├── dir
│ └── bye.txt
└── file.txt
```
The source or destination or both can be local or remote.
Filters can be used to only extract certain files:
```
$ rclone archive extract archive.zip partial --include "bye.*"
$ rclone tree partial
/
└── dir
└── bye.txt
```
The [archive backend](/archive/) can also be used to extract files. It
can also be used to mount archives read-only, but it supports a
different set of archive formats from the archive commands.
```
rclone archive extract [flags] <source> <destination>
```
## Options
```
-h, --help help for extract
```
See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone archive](/commands/rclone_archive/) - Perform an action on an archive.
<!-- markdownlint-restore -->

View File

@@ -0,0 +1,96 @@
---
title: "rclone archive list"
description: "List archive contents from source."
versionIntroduced: v1.72
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/archive/list/ and as part of making a release run "make commanddocs"
---
# rclone archive list
List archive contents from source.
## Synopsis
List the contents of an archive to the console, auto detecting the
format. See [rclone archive create](/commands/rclone_archive_create/)
for the archive formats supported.
For example:
```
$ rclone archive list remote:archive.zip
6 file.txt
0 dir/
4 dir/bye.txt
```
Or with `--long` flag for more info:
```
$ rclone archive list --long remote:archive.zip
6 2025-10-30 09:46:23.000000000 file.txt
0 2025-10-30 09:46:57.000000000 dir/
4 2025-10-30 09:46:57.000000000 dir/bye.txt
```
Or with `--plain` flag which is useful for scripting:
```
$ rclone archive list --plain /path/to/archive.zip
file.txt
dir/
dir/bye.txt
```
Or with `--dirs-only`:
```
$ rclone archive list --plain --dirs-only /path/to/archive.zip
dir/
```
Or with `--files-only`:
```
$ rclone archive list --plain --files-only /path/to/archive.zip
file.txt
dir/bye.txt
```
Filters may also be used:
```
$ rclone archive list --long archive.zip --include "bye.*"
4 2025-10-30 09:46:57.000000000 dir/bye.txt
```
The [archive backend](/archive/) can also be used to list files. It
can also be used to mount archives read-only, but it supports a
different set of archive formats from the archive commands.
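For scripting, the `--plain --files-only` output can be fed into a shell loop; this sketch is illustrative only:
```sh
rclone archive list --plain --files-only /path/to/archive.zip | while read -r name; do
  echo "found: $name"
done
```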
```
rclone archive list [flags] <source>
```
## Options
```
--dirs-only Only list directories
--files-only Only list files
-h, --help help for list
--long List extra attributes
--plain Only list file names
```
See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone archive](/commands/rclone_archive/) - Perform an action on an archive.
<!-- markdownlint-restore -->

View File

@@ -11,21 +11,23 @@ Remote authorization.
## Synopsis
Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.
rclone from a machine with a browser. Use as instructed by rclone config.
See also the [remote setup documentation](/remote_setup).
The command requires 1-3 arguments:
- fs name (e.g., "drive", "s3", etc.)
- Either a base64 encoded JSON blob obtained from a previous rclone config session
- Or a client_id and client_secret pair obtained from the remote service
- Name of a backend (e.g. "drive", "s3")
- Either a base64 encoded JSON blob obtained from a previous rclone config session
- Or a client_id and client_secret pair obtained from the remote service
Use --auth-no-open-browser to prevent rclone from automatically opening the
auth link in the default browser.
Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.
Use --template to generate HTML output via a custom Go template. If a blank
string is provided as an argument to this flag, the default template is used.
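A typical invocation, using "drive" purely as an example backend name, is:
```sh
rclone authorize "drive"
```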
```
rclone authorize <fs name> [base64_json_blob | client_id client_secret] [flags]
rclone authorize <backendname> [base64_json_blob | client_id client_secret] [flags]
```
## Options
@@ -40,5 +42,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

View File

@@ -16,27 +16,34 @@ see the backend docs for definitions.
You can discover what commands a backend implements by using
rclone backend help remote:
rclone backend help <backendname>
```console
rclone backend help remote:
rclone backend help <backendname>
```
You can also discover information about the backend using (see
[operations/fsinfo](/rc/#operations-fsinfo) in the remote control docs
for more info).
rclone backend features remote:
```console
rclone backend features remote:
```
Pass options to the backend command with -o. This should be key=value or key, e.g.:
rclone backend stats remote:path stats -o format=json -o long
```console
rclone backend stats remote:path stats -o format=json -o long
```
Pass arguments to the backend by placing them on the end of the line
rclone backend cleanup remote:path file1 file2 file3
```console
rclone backend cleanup remote:path file1 file2 file3
```
Note: to run these commands on a running backend, see
[backend/command](/rc/#backend-command) in the rc docs.
```
rclone backend <command> remote:path [opts] <args> [flags]
```
@@ -56,7 +63,7 @@ See the [global flags page](/flags/) for global options not listed here.
Important flags useful for most commands
```
```text
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
@@ -64,5 +71,10 @@ Important flags useful for most commands
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

View File

@@ -16,18 +16,19 @@ Perform bidirectional synchronization between two paths.
bidirectional cloud sync solution in rclone.
It retains the Path1 and Path2 filesystem listings from the prior run.
On each successive run it will:
- list files on Path1 and Path2, and check for changes on each side.
Changes include `New`, `Newer`, `Older`, and `Deleted` files.
- Propagate changes on Path1 to Path2, and vice-versa.
Bisync is considered an **advanced command**, so use with care.
Make sure you have read and understood the entire [manual](https://rclone.org/bisync)
(especially the [Limitations](https://rclone.org/bisync/#limitations) section) before using,
or data loss can result. Questions can be asked in the [Rclone Forum](https://forum.rclone.org/).
(especially the [Limitations](https://rclone.org/bisync/#limitations) section)
before using, or data loss can result. Questions can be asked in the
[Rclone Forum](https://forum.rclone.org/).
See [full bisync description](https://rclone.org/bisync/) for details.
```
rclone bisync remote1:path1 remote2:path2 [flags]
```
@@ -69,7 +70,7 @@ See the [global flags page](/flags/) for global options not listed here.
Flags for anything which can copy a file
```
```text
--check-first Do all the checks before starting transfers
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only)
--compare-dest stringArray Include additional server-side paths during comparison
@@ -110,7 +111,7 @@ Flags for anything which can copy a file
Important flags useful for most commands
```
```text
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
@@ -120,7 +121,7 @@ Important flags useful for most commands
Flags for filtering directory listings
```
```text
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
@@ -148,5 +149,10 @@ Flags for filtering directory listings
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

View File

@@ -14,15 +14,21 @@ Sends any files to standard output.
You can use it like this to output a single file
rclone cat remote:path/to/file
```sh
rclone cat remote:path/to/file
```
Or like this to output any file in dir or its subdirectories.
rclone cat remote:path/to/dir
```sh
rclone cat remote:path/to/dir
```
Or like this to output any .txt files in dir or its subdirectories.
rclone --include "*.txt" cat remote:path/to/dir
```sh
rclone --include "*.txt" cat remote:path/to/dir
```
Use the `--head` flag to print characters only at the start, `--tail` for
the end and `--offset` and `--count` to print a section in the middle.
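For instance, to print only the first 100 bytes of a file (the path is a placeholder):
```sh
rclone cat --head 100 remote:path/to/file
```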
@@ -33,14 +39,17 @@ Use the `--separator` flag to print a separator value between files. Be sure to
shell-escape special characters. For example, to print a newline between
files, use:
* bash:
- bash:
rclone --include "*.txt" --separator $'\n' cat remote:path/to/dir
```sh
rclone --include "*.txt" --separator $'\n' cat remote:path/to/dir
```
* powershell:
rclone --include "*.txt" --separator "`n" cat remote:path/to/dir
- powershell:
```powershell
rclone --include "*.txt" --separator "`n" cat remote:path/to/dir
```
```
rclone cat remote:path [flags]
@@ -65,7 +74,7 @@ See the [global flags page](/flags/) for global options not listed here.
Flags for filtering directory listings
```
```text
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
@@ -95,12 +104,17 @@ Flags for filtering directory listings
Flags for listing directories
```
```text
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
```
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

View File

@@ -52,7 +52,6 @@ you what happened to it. These are reminiscent of diff files.
The default number of parallel checks is 8. See the [--checkers](/docs/#checkers-int)
option for more information.
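For example, the number of parallel checks can be raised with the global `--checkers` flag (paths are placeholders):
```sh
rclone check source:path dest:path --checkers 16
```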
```
rclone check source:path dest:path [flags]
```
@@ -79,7 +78,7 @@ See the [global flags page](/flags/) for global options not listed here.
Flags used for check commands
```
```text
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
```
@@ -87,7 +86,7 @@ Flags used for check commands
Flags for filtering directory listings
```
```text
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
@@ -117,12 +116,17 @@ Flags for filtering directory listings
Flags for listing directories
```
```text
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
```
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

View File

@@ -47,7 +47,6 @@ you what happened to it. These are reminiscent of diff files.
The default number of parallel checks is 8. See the [--checkers](/docs/#checkers-int)
option for more information.
```
rclone checksum <hash> sumfile dst:path [flags]
```
@@ -73,7 +72,7 @@ See the [global flags page](/flags/) for global options not listed here.
Flags for filtering directory listings
```
```text
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
@@ -103,12 +102,17 @@ Flags for filtering directory listings
Flags for listing directories
```
```text
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
```
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

View File

@@ -13,7 +13,6 @@ Clean up the remote if possible.
Clean up the remote if possible. Empty the trash or delete old file
versions. Not supported by all remotes.
```
rclone cleanup remote:path [flags]
```
@@ -31,7 +30,7 @@ See the [global flags page](/flags/) for global options not listed here.
Important flags useful for most commands
```
```text
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
@@ -39,5 +38,10 @@ Important flags useful for most commands
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

View File

@@ -15,7 +15,6 @@ Output completion script for a given shell.
Generates a shell completion script for rclone.
Run with `--help` to list the supported shells.
## Options
```
@@ -26,9 +25,14 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone completion bash](/commands/rclone_completion_bash/) - Output bash completion script for rclone.
* [rclone completion fish](/commands/rclone_completion_fish/) - Output fish completion script for rclone.
* [rclone completion powershell](/commands/rclone_completion_powershell/) - Output powershell completion script for rclone.
* [rclone completion zsh](/commands/rclone_completion_zsh/) - Output zsh completion script for rclone.
<!-- markdownlint-restore -->

View File

@@ -13,17 +13,21 @@ Output bash completion script for rclone.
Generates a bash shell autocompletion script for rclone.
By default, when run without any arguments,
By default, when run without any arguments,
rclone completion bash
```console
rclone completion bash
```
the generated script will be written to
/etc/bash_completion.d/rclone
```console
/etc/bash_completion.d/rclone
```
and so rclone will probably need to be run as root, or with sudo.
If you supply a path to a file as the command line argument, then
If you supply a path to a file as the command line argument, then
the generated script will be written to that file, in which case
you should not need root privileges.
@@ -34,12 +38,13 @@ can logout and login again to use the autocompletion script.
Alternatively, you can source the script directly
. /path/to/my_bash_completion_scripts/rclone
```console
. /path/to/my_bash_completion_scripts/rclone
```
and the autocompletion functionality will be added to your
current shell.
```
rclone completion bash [output_file] [flags]
```
@@ -54,5 +59,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell.
<!-- markdownlint-restore -->

View File

@@ -16,19 +16,22 @@ Generates a fish autocompletion script for rclone.
This writes to /etc/fish/completions/rclone.fish by default so will
probably need to be run with sudo or as root, e.g.
sudo rclone completion fish
```console
sudo rclone completion fish
```
Logout and login again to use the autocompletion scripts, or source
them directly
. /etc/fish/completions/rclone.fish
```console
. /etc/fish/completions/rclone.fish
```
If you supply a command line argument the script will be written
there.
If output_file is "-", then the output will be written to stdout.
```
rclone completion fish [output_file] [flags]
```
@@ -43,5 +46,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell.
<!-- markdownlint-restore -->

View File

@@ -15,14 +15,15 @@ Generate the autocompletion script for powershell.
To load completions in your current shell session:
rclone completion powershell | Out-String | Invoke-Expression
```console
rclone completion powershell | Out-String | Invoke-Expression
```
To load completions for every new session, add the output of the above command
to your powershell profile.
If output_file is "-" or missing, then the output will be written to stdout.
```
rclone completion powershell [output_file] [flags]
```
@@ -37,5 +38,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell.
<!-- markdownlint-restore -->

View File

@@ -16,19 +16,22 @@ Generates a zsh autocompletion script for rclone.
This writes to /usr/share/zsh/vendor-completions/_rclone by default so will
probably need to be run with sudo or as root, e.g.
sudo rclone completion zsh
```console
sudo rclone completion zsh
```
Logout and login again to use the autocompletion scripts, or source
them directly
autoload -U compinit && compinit
```console
autoload -U compinit && compinit
```
If you supply a command line argument the script will be written
there.
If output_file is "-", then the output will be written to stdout.
```
rclone completion zsh [output_file] [flags]
```
@@ -43,5 +46,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell.
<!-- markdownlint-restore -->

View File

@@ -14,7 +14,6 @@ Enter an interactive configuration session where you can setup new
remotes and manage existing ones. You may also set or remove a
password to protect your configuration.
```
rclone config [flags]
```
@@ -29,6 +28,9 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone config create](/commands/rclone_config_create/) - Create a new remote with name, type and options.
* [rclone config delete](/commands/rclone_config_delete/) - Delete an existing remote.
@@ -43,7 +45,10 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone config reconnect](/commands/rclone_config_reconnect/) - Re-authenticates user with remote.
* [rclone config redacted](/commands/rclone_config_redacted/) - Print redacted (decrypted) config file, or the redacted config for a single remote.
* [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.
* [rclone config string](/commands/rclone_config_string/) - Print connection string for a single remote.
* [rclone config touch](/commands/rclone_config_touch/) - Ensure configuration file exists.
* [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote.
* [rclone config userinfo](/commands/rclone_config_userinfo/) - Prints info about logged in user of remote.
<!-- markdownlint-restore -->

View File

@@ -16,13 +16,17 @@ should be passed in pairs of `key` `value` or as `key=value`.
For example, to make a swift remote of name myremote using auto config
you would do:
rclone config create myremote swift env_auth true
rclone config create myremote swift env_auth=true
```sh
rclone config create myremote swift env_auth true
rclone config create myremote swift env_auth=true
```
So for example if you wanted to configure a Google Drive remote but
using remote authorization you would do this:
rclone config create mydrive drive config_is_local=false
```sh
rclone config create mydrive drive config_is_local=false
```
Note that if the config process would normally ask a question the
default is taken (unless `--non-interactive` is used). Each time
@@ -50,29 +54,29 @@ it.
This will look something like (some irrelevant detail removed):
```
```json
{
"State": "*oauth-islocal,teamdrive,,",
"Option": {
"Name": "config_is_local",
"Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n",
"Default": true,
"Examples": [
{
"Value": "true",
"Help": "Yes"
},
{
"Value": "false",
"Help": "No"
}
],
"Required": false,
"IsPassword": false,
"Type": "bool",
"Exclusive": true,
},
"Error": "",
"State": "*oauth-islocal,teamdrive,,",
"Option": {
"Name": "config_is_local",
"Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n",
"Default": true,
"Examples": [
{
"Value": "true",
"Help": "Yes"
},
{
"Value": "false",
"Help": "No"
}
],
"Required": false,
"IsPassword": false,
"Type": "bool",
"Exclusive": true,
},
"Error": "",
}
```
@@ -95,7 +99,9 @@ The keys of `Option` are used as follows:
If `Error` is set then it should be shown to the user at the same
time as the question.
rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"
```sh
rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"
```
Note that when using `--continue` all passwords should be passed in
the clear (not obscured). Any default config values should be passed
@@ -111,7 +117,6 @@ defaults for questions as usual.
Note that `bin/config.py` in the rclone source implements this protocol
as a readable demonstration.
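As a sketch, the state machine described above could be started non-interactively like this, reusing the example remote from earlier; rclone then prints a JSON blob like the one shown above for the caller to act on:
```sh
rclone config create mydrive drive config_is_local=false --non-interactive
```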
```
rclone config create name type [key value]* [flags]
```
@@ -134,5 +139,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
<!-- markdownlint-restore -->

View File

@@ -22,5 +22,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
<!-- markdownlint-restore -->

View File

@@ -15,7 +15,6 @@ This normally means revoking the oauth token.
To reconnect use "rclone config reconnect".
```
rclone config disconnect remote: [flags]
```
@@ -30,5 +29,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
<!-- markdownlint-restore -->

View File

@@ -22,5 +22,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
<!-- markdownlint-restore -->

View File

@@ -14,7 +14,6 @@ Enter an interactive configuration session where you can setup new
remotes and manage existing ones. You may also set or remove a
password to protect your configuration.
```
rclone config edit [flags]
```
@@ -29,5 +28,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
<!-- markdownlint-restore -->

View File

@@ -12,7 +12,6 @@ set, remove and check the encryption for the config file
This command sets, clears and checks the encryption for the config file using
the subcommands below.
## Options
```
@@ -23,8 +22,13 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone config encryption check](/commands/rclone_config_encryption_check/) - Check that the config file is encrypted
* [rclone config encryption remove](/commands/rclone_config_encryption_remove/) - Remove the config file encryption password
* [rclone config encryption set](/commands/rclone_config_encryption_set/) - Set or change the config file encryption password
<!-- markdownlint-restore -->

View File

@@ -18,7 +18,6 @@ If decryption fails it will return a non-zero exit code if using
If the config file is not encrypted it will return a non zero exit code.
```
rclone config encryption check [flags]
```
@@ -33,5 +32,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config encryption](/commands/rclone_config_encryption/) - set, remove and check the encryption for the config file
<!-- markdownlint-restore -->

View File

@@ -19,7 +19,6 @@ password.
If the config was not encrypted then no error will be returned and
this command will do nothing.
```
rclone config encryption remove [flags]
```
@@ -34,5 +33,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config encryption](/commands/rclone_config_encryption/) - set, remove and check the encryption for the config file
<!-- markdownlint-restore -->

View File

@@ -29,7 +29,6 @@ encryption remove`), then set it again with this command which may be
easier if you don't mind the unencrypted config file being on the disk
briefly.
```
rclone config encryption set [flags]
```
@@ -44,5 +43,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config encryption](/commands/rclone_config_encryption/) - set, remove and check the encryption for the config file
<!-- markdownlint-restore -->

View File

@@ -22,5 +22,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
<!-- markdownlint-restore -->

View File

@@ -16,13 +16,14 @@ The `password` should be passed in in clear (unobscured).
For example, to set password of a remote of name myremote you would do:
rclone config password myremote fieldname mypassword
rclone config password myremote fieldname=mypassword
```sh
rclone config password myremote fieldname mypassword
rclone config password myremote fieldname=mypassword
```
This command is obsolete now that "config update" and "config create"
both support obscuring passwords directly.
```
rclone config password name [key value]+ [flags]
```
@@ -37,5 +38,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
<!-- markdownlint-restore -->

View File

@@ -22,5 +22,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
<!-- markdownlint-restore -->

View File

@@ -22,5 +22,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
<!-- markdownlint-restore -->

View File

@@ -15,7 +15,6 @@ To disconnect the remote use "rclone config disconnect".
This normally means going through the interactive oauth flow again.
```
rclone config reconnect remote: [flags]
```
@@ -30,5 +29,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
<!-- markdownlint-restore -->

View File

@@ -20,8 +20,6 @@ This makes the config file suitable for posting online for support.
It should be double checked before posting as the redaction may not be perfect.
```
rclone config redacted [<remote>] [flags]
```
@@ -36,5 +34,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
<!-- markdownlint-restore -->

View File

@@ -22,5 +22,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
<!-- markdownlint-restore -->

View File

@@ -0,0 +1,55 @@
---
title: "rclone config string"
description: "Print connection string for a single remote."
versionIntroduced: v1.72
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/string/ and as part of making a release run "make commanddocs"
---
# rclone config string
Print connection string for a single remote.
## Synopsis
Print a connection string for a single remote.
The [connection strings](/docs/#connection-strings) can be used
wherever a remote is needed and can be more convenient than using the
config file, especially if using the RC API.
Backend parameters may be provided to the command also.
Example:
```sh
$ rclone config string s3:rclone --s3-no-check-bucket
:s3,access_key_id=XXX,no_check_bucket,provider=AWS,region=eu-west-2,secret_access_key=YYY:rclone
```
**NB** the strings are not quoted for use in shells (eg bash,
powershell, windows cmd). Most will work if enclosed in "double
quotes", however connection strings that contain double quotes will
require further quoting which is very shell dependent.
```
rclone config string <remote> [flags]
```
## Options
```
-h, --help help for string
```
See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
<!-- markdownlint-restore -->

View File

@@ -22,5 +22,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
<!-- markdownlint-restore -->

View File

@@ -16,13 +16,17 @@ pairs of `key` `value` or as `key=value`.
For example, to update the env_auth field of a remote of name myremote
you would do:
rclone config update myremote env_auth true
rclone config update myremote env_auth=true
```sh
rclone config update myremote env_auth true
rclone config update myremote env_auth=true
```
If the remote uses OAuth the token will be updated, if you don't
require this add an extra parameter thus:
rclone config update myremote env_auth=true config_refresh_token=false
```sh
rclone config update myremote env_auth=true config_refresh_token=false
```
Note that if the config process would normally ask a question the
default is taken (unless `--non-interactive` is used). Each time
@@ -50,29 +54,29 @@ it.
This will look something like (some irrelevant detail removed):
```
```json
{
"State": "*oauth-islocal,teamdrive,,",
"Option": {
"Name": "config_is_local",
"Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n",
"Default": true,
"Examples": [
{
"Value": "true",
"Help": "Yes"
},
{
"Value": "false",
"Help": "No"
}
],
"Required": false,
"IsPassword": false,
"Type": "bool",
"Exclusive": true,
},
"Error": "",
"State": "*oauth-islocal,teamdrive,,",
"Option": {
"Name": "config_is_local",
"Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n",
"Default": true,
"Examples": [
{
"Value": "true",
"Help": "Yes"
},
{
"Value": "false",
"Help": "No"
}
],
"Required": false,
"IsPassword": false,
"Type": "bool",
"Exclusive": true,
},
"Error": "",
}
```
@@ -95,7 +99,9 @@ The keys of `Option` are used as follows:
If `Error` is set then it should be shown to the user at the same
time as the question.
rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"
```sh
rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"
```
Note that when using `--continue` all passwords should be passed in
the clear (not obscured). Any default config values should be passed
@@ -111,7 +117,6 @@ defaults for questions as usual.
Note that `bin/config.py` in the rclone source implements this protocol
as a readable demonstration.
```
rclone config update name [key value]+ [flags]
```
@@ -134,5 +139,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
<!-- markdownlint-restore -->

View File

@@ -12,7 +12,6 @@ Prints info about logged in user of remote.
This prints the details of the person logged in to the cloud storage
system.
```
rclone config userinfo remote: [flags]
```
@@ -28,5 +27,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
<!-- markdownlint-restore -->

View File

@@ -10,8 +10,8 @@ Convert file and directory names in place.
## Synopsis
convmv supports advanced path name transformations for converting and renaming files and directories by applying prefixes, suffixes, and other alterations.
convmv supports advanced path name transformations for converting and renaming
files and directories by applying prefixes, suffixes, and other alterations.
| Command | Description |
|------|------|
@@ -20,10 +20,13 @@ convmv supports advanced path name transformations for converting and renaming f
| `--name-transform suffix_keep_extension=XXXX` | Appends XXXX to the file name while preserving the original file extension. |
| `--name-transform trimprefix=XXXX` | Removes XXXX if it appears at the start of the file name. |
| `--name-transform trimsuffix=XXXX` | Removes XXXX if it appears at the end of the file name. |
| `--name-transform regex=/pattern/replacement/` | Applies a regex-based transformation. |
| `--name-transform regex=pattern/replacement` | Applies a regex-based transformation. |
| `--name-transform replace=old:new` | Replaces occurrences of old with new in the file name. |
| `--name-transform date={YYYYMMDD}` | Appends or prefixes the specified date format. |
| `--name-transform truncate=N` | Truncates the file name to a maximum of N characters. |
| `--name-transform truncate_keep_extension=N` | Truncates the file name to a maximum of N characters while preserving the original file extension. |
| `--name-transform truncate_bytes=N` | Truncates the file name to a maximum of N bytes (not characters). |
| `--name-transform truncate_bytes_keep_extension=N` | Truncates the file name to a maximum of N bytes (not characters) while preserving the original file extension. |
| `--name-transform base64encode` | Encodes the file name in Base64. |
| `--name-transform base64decode` | Decodes a Base64-encoded file name. |
| `--name-transform encoder=ENCODING` | Converts the file name to the specified encoding (e.g., ISO-8859-1, Windows-1252, Macintosh). |
@@ -38,211 +41,227 @@ convmv supports advanced path name transformations for converting and renaming f
| `--name-transform nfd` | Converts the file name to NFD Unicode normalization form. |
| `--name-transform nfkc` | Converts the file name to NFKC Unicode normalization form. |
| `--name-transform nfkd` | Converts the file name to NFKD Unicode normalization form. |
| `--name-transform command=/path/to/my/programfile names.` | Executes an external program to transform |
| `--name-transform command=/path/to/my/program` | Executes an external program to transform file names. |
Conversion modes:
Conversion modes:
```text
none
nfc
nfd
nfkc
nfkd
replace
prefix
suffix
suffix_keep_extension
trimprefix
trimsuffix
index
date
truncate
truncate_keep_extension
truncate_bytes
truncate_bytes_keep_extension
base64encode
base64decode
encoder
decoder
ISO-8859-1
Windows-1252
Macintosh
charmap
lowercase
uppercase
titlecase
ascii
url
regex
command
```
none
nfc
nfd
nfkc
nfkd
replace
prefix
suffix
suffix_keep_extension
trimprefix
trimsuffix
index
date
truncate
base64encode
base64decode
encoder
decoder
ISO-8859-1
Windows-1252
Macintosh
charmap
lowercase
uppercase
titlecase
ascii
url
regex
command
```
Char maps:
```
IBM-Code-Page-037
IBM-Code-Page-437
IBM-Code-Page-850
IBM-Code-Page-852
IBM-Code-Page-855
Windows-Code-Page-858
IBM-Code-Page-860
IBM-Code-Page-862
IBM-Code-Page-863
IBM-Code-Page-865
IBM-Code-Page-866
IBM-Code-Page-1047
IBM-Code-Page-1140
ISO-8859-1
ISO-8859-2
ISO-8859-3
ISO-8859-4
ISO-8859-5
ISO-8859-6
ISO-8859-7
ISO-8859-8
ISO-8859-9
ISO-8859-10
ISO-8859-13
ISO-8859-14
ISO-8859-15
ISO-8859-16
KOI8-R
KOI8-U
Macintosh
Macintosh-Cyrillic
Windows-874
Windows-1250
Windows-1251
Windows-1252
Windows-1253
Windows-1254
Windows-1255
Windows-1256
Windows-1257
Windows-1258
X-User-Defined
```
Encoding masks:
```
Asterisk
BackQuote
BackSlash
Colon
CrLf
Ctl
Del
Dollar
Dot
DoubleQuote
Exclamation
Hash
InvalidUtf8
LeftCrLfHtVt
LeftPeriod
LeftSpace
LeftTilde
LtGt
None
Percent
Pipe
Question
Raw
RightCrLfHtVt
RightPeriod
RightSpace
Semicolon
SingleQuote
Slash
SquareBracket
```
Examples:
Char maps:
```text
IBM-Code-Page-037
IBM-Code-Page-437
IBM-Code-Page-850
IBM-Code-Page-852
IBM-Code-Page-855
Windows-Code-Page-858
IBM-Code-Page-860
IBM-Code-Page-862
IBM-Code-Page-863
IBM-Code-Page-865
IBM-Code-Page-866
IBM-Code-Page-1047
IBM-Code-Page-1140
ISO-8859-1
ISO-8859-2
ISO-8859-3
ISO-8859-4
ISO-8859-5
ISO-8859-6
ISO-8859-7
ISO-8859-8
ISO-8859-9
ISO-8859-10
ISO-8859-13
ISO-8859-14
ISO-8859-15
ISO-8859-16
KOI8-R
KOI8-U
Macintosh
Macintosh-Cyrillic
Windows-874
Windows-1250
Windows-1251
Windows-1252
Windows-1253
Windows-1254
Windows-1255
Windows-1256
Windows-1257
Windows-1258
X-User-Defined
```
Encoding masks:
```text
Asterisk
BackQuote
BackSlash
Colon
CrLf
Ctl
Del
Dollar
Dot
DoubleQuote
Exclamation
Hash
InvalidUtf8
LeftCrLfHtVt
LeftPeriod
LeftSpace
LeftTilde
LtGt
None
Percent
Pipe
Question
Raw
RightCrLfHtVt
RightPeriod
RightSpace
Semicolon
SingleQuote
Slash
SquareBracket
```
Examples:
```console
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,uppercase"
// Output: STORIES/THE QUICK BROWN FOX!.TXT
```
```
```console
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,replace=Fox:Turtle" --name-transform "all,replace=Quick:Slow"
// Output: stories/The Slow Brown Turtle!.txt
```
```
```console
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,base64encode"
// Output: c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0
```
```
```console
rclone convmv "c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0" --name-transform "all,base64decode"
// Output: stories/The Quick Brown Fox!.txt
```
```
```console
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfc"
// Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt
```
```
```console
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfd"
// Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt
```
```
```console
rclone convmv "stories/The Quick Brown 🦊 Fox!.txt" --name-transform "all,ascii"
// Output: stories/The Quick Brown Fox!.txt
```
```
```console
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,trimsuffix=.txt"
// Output: stories/The Quick Brown Fox!
```
```
```console
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,prefix=OLD_"
// Output: OLD_stories/OLD_The Quick Brown Fox!.txt
```
```
```console
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,charmap=ISO-8859-7"
// Output: stories/The Quick Brown _ Fox Went to the Caf_!.txt
```
```
```console
rclone convmv "stories/The Quick Brown Fox: A Memoir [draft].txt" --name-transform "all,encoder=Colon,SquareBracket"
// Output: stories/The Quick Brown Fox A Memoir draft.txt
```
```
```console
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,truncate=21"
// Output: stories/The Quick Brown 🦊 Fox
```
```
```console
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=echo"
// Output: stories/The Quick Brown Fox!.txt
```
```
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20250618
// Output: stories/The Quick Brown Fox!-20251121
```
```
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2025-06-18 0148PM
// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
```
```
```console
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
// Output: ababababababab/ababab ababababab ababababab ababab!abababab
```
Multiple transformations can be used in sequence, applied in the order they are specified on the command line.
The regex command generally accepts Perl-style regular expressions; the exact
syntax is defined in the [Go regular expression reference](https://golang.org/pkg/regexp/syntax/).
The replacement string may contain capturing group variables, referencing
capturing groups using the syntax `$name` or `${name}`, where the name can
refer to a named capturing group or it can simply be the index as a number.
To insert a literal $, use $$.
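For instance, a capturing-group rename might look like this (an illustrative,
untested sketch; single quotes stop the shell from expanding `$1` and `$2`, and
the output shown is the expected result rather than a captured run):
```console
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform 'all,regex=(Quick) (Brown)/$2 $1'
// Output: stories/The Brown Quick Fox!.txt
```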
The `--name-transform` flag is also available in `sync`, `copy`, and `move`.
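For instance, a minimal sketch of applying a transform during a copy (the
remote names are placeholders):
```console
rclone copy source:path dest:path --name-transform "all,nfc"
```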
## Files vs Directories
By default `--name-transform` will only apply to file names. This means only
the leaf file name will be transformed. However some of the transforms would be
better applied to the whole path or just directories. To choose which part of
the file path is affected some tags can be added to the `--name-transform`.
| Tag | Effect |
|------|------|
| `file` | Only transform the leaf name of files - this is the default |
| `dir` | Only transform name of directories - these may appear anywhere in the path |
| `all` | Transform the entire path for files and directories |
This is used by adding the tag into the transform name like this:
`--name-transform file,prefix=ABC` or `--name-transform dir,prefix=DEF`.
For some conversions using all is more likely to be useful, for example
`--name-transform all,nfc`.
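For example, a hypothetical run that prefixes only the directory part of the
path (the output shown is the expected result, not captured from a run):
```console
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "dir,prefix=OLD_"
// Output: OLD_stories/The Quick Brown Fox!.txt
```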
Note that `--name-transform` may not add path separators `/` to the name.
This will cause an error.
## Ordering and Conflicts
- Transformations will be applied in the order specified by the user.
- If the `file` tag is in use (the default) then only the leaf name of files
will be transformed.
- If the `dir` tag is in use then directories anywhere in the path will be
transformed.
- If the `all` tag is in use then directories and files anywhere in the path
will be transformed.
- Each transformation will be run one path segment at a time.
- If a transformation adds a `/` or ends up with an empty path segment then
that will be an error.
- It is up to the user to put the transformations in a sensible order.
- Conflicting transformations, such as `prefix` followed by `trimprefix` or
`nfc` followed by `nfd`, are possible.
- Instead of enforcing mutual exclusivity, transformations are applied in
sequence as specified by the user, allowing for intentional use cases
(e.g., trimming one prefix before adding another).
- Users should be aware that certain combinations may lead to unexpected
results and should verify transformations using `--dry-run` before execution.
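As an illustration of the ordering described above, trimming one prefix before
adding another (the file name and expected output are made up for the example):
```console
rclone convmv "stories/OLD_The Quick Brown Fox!.txt" --name-transform "file,trimprefix=OLD_" --name-transform "file,prefix=NEW_"
// Output: stories/NEW_The Quick Brown Fox!.txt
```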
## Race Conditions and Non-Deterministic Behavior
Some transformations, such as `replace=old:new`, may introduce conflicts where
multiple source files map to the same destination name. This can lead to race
conditions when performing concurrent transfers. It is up to the user to
anticipate these.
- If two files from the source are transformed into the same name at the
destination, the final state may be non-deterministic.
- Running rclone check after a sync using such transformations may erroneously
report missing or differing files due to overwritten results.
To minimize risks, users should:
- Carefully review transformations that may introduce conflicts.
- Use `--dry-run` to inspect changes before executing a sync (but keep in mind
that it won't show the effect of non-deterministic transformations).
- Avoid transformations that cause multiple distinct source files to map to the
same destination name.
- Consider disabling concurrency with `--transfers=1` if necessary.
- Certain transformations (e.g. `prefix`) will have a multiplying effect every
time they are used. Avoid these when using `bisync`.
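For example, combining two of the suggestions above - previewing a potentially
conflicting rename with `--dry-run`, then running it without concurrency (the
paths are placeholders):
```console
rclone sync source:path dest:path --name-transform "all,replace=draft:final" --dry-run
rclone sync source:path dest:path --name-transform "all,replace=draft:final" --transfers=1
```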
```
rclone convmv dest:path --name-transform XXX [flags]
```
@@ -306,7 +341,7 @@ See the [global flags page](/flags/) for global options not listed here.
Flags for anything which can copy a file
```text
--check-first Do all the checks before starting transfers
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only)
--compare-dest stringArray Include additional server-side paths during comparison
@@ -347,7 +382,7 @@ Flags for anything which can copy a file
Important flags useful for most commands
```text
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
@@ -357,7 +392,7 @@ Important flags useful for most commands
Flags for filtering directory listings
```text
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
@@ -387,12 +422,17 @@ Flags for filtering directory listings
Flags for listing directories
```text
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
```
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

@@ -28,22 +28,30 @@ go there.
For example
```sh
rclone copy source:sourcepath dest:destpath
```
Let's say there are two files in sourcepath
```text
sourcepath/one.txt
sourcepath/two.txt
```
This copies them to
```text
destpath/one.txt
destpath/two.txt
```
Not to
```text
destpath/sourcepath/one.txt
destpath/sourcepath/two.txt
```
If you are familiar with `rsync`, rclone always works as if you had
written a trailing `/` - meaning "copy the contents of this directory".
@@ -59,27 +67,30 @@ For example, if you have many files in /path/to/src but only a few of
them change every day, you can copy all the files which have changed
recently very efficiently like this:
```sh
rclone copy --max-age 24h --no-traverse /path/to/src remote:
```
Rclone will sync the modification times of files and directories if
the backend supports it. If metadata syncing is required then use the
`--metadata` flag.
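For example, a minimal sketch of a copy that also syncs metadata (the remote
names are placeholders):
```sh
rclone copy --metadata source:path dest:path
```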
Note that the modification time and metadata for the root directory
will **not** be synced. See [issue #7652](https://github.com/rclone/rclone/issues/7652)
for more info.
**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics.
**Note**: Use the `--dry-run` or the `--interactive`/`-i` flag to test without
copying anything.
## Logger Flags
The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error`
flags write paths, one per line, to the file name (or stdout if it is `-`)
supplied. What they write is described in the help below. For example
`--differ` will write all paths which are present on both the source and
destination but different.
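For example, a hypothetical run that writes two of these reports to local
files (the output file names are arbitrary):
```sh
rclone copy source:path dest:path --differ differ.txt --missing-on-dst missing-on-dst.txt
```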
The `--combined` flag will write a file (or stdout) which contains all
file paths with a symbol and then a space and then the path to tell
@@ -112,9 +123,7 @@ are not currently supported:
Note also that each file is logged during execution, as opposed to after, so it
is most useful as a predictor of what SHOULD happen to each file
(which may or may not match what actually DID).
```
rclone copy source:path dest:path [flags]
```
@@ -140,7 +149,7 @@ rclone copy source:path dest:path [flags]
--missing-on-dst string Report all files missing from the destination to this file
--missing-on-src string Report all files missing from the source to this file
-s, --separator string Separator for the items in the format (default ";")
-t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05)
```
Options shared with other commands are described next.
@@ -150,7 +159,7 @@ See the [global flags page](/flags/) for global options not listed here.
Flags for anything which can copy a file
```text
--check-first Do all the checks before starting transfers
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only)
--compare-dest stringArray Include additional server-side paths during comparison
@@ -191,7 +200,7 @@ Flags for anything which can copy a file
Important flags useful for most commands
```
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
@@ -201,7 +210,7 @@ Important flags useful for most commands
Flags for filtering directory listings
```
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
@@ -231,12 +240,17 @@ Flags for filtering directory listings
Flags for listing directories
```
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
```
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

@@ -19,33 +19,40 @@ name. If the source is a directory then it acts exactly like the
So
```console
rclone copyto src dst
```
where src and dst are rclone paths, either `remote:path` or
`/path/to/local` or `C:\windows\path\if\on\windows`.
This will:
```text
if src is file
copy it to dst, overwriting an existing file if it exists
if src is directory
copy it to dst, overwriting existing files if they exist
see copy command for full details
```
This doesn't transfer files that are identical on src and dst, testing
by size and modification time or MD5SUM. It doesn't delete files from
the destination.
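For example, a sketch of copying a single file to a different name (the paths
are placeholders):
```console
rclone copyto remote:dir/report.txt remote:backup/report-archive.txt
```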
*If you are looking to copy just a byte range of a file, please see
`rclone cat --offset X --count Y`.*
**Note**: Use the `-P`/`--progress` flag to view
real-time transfer statistics.
## Logger Flags
The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error`
flags write paths, one per line, to the file name (or stdout if it is `-`)
supplied. What they write is described in the help below. For example
`--differ` will write all paths which are present on both the source and
destination but different.
The `--combined` flag will write a file (or stdout) which contains all
file paths with a symbol and then a space and then the path to tell
@@ -78,9 +85,7 @@ are not currently supported:
Note also that each file is logged during execution, as opposed to after, so it
is most useful as a predictor of what SHOULD happen to each file
(which may or may not match what actually DID).
```
rclone copyto source:path dest:path [flags]
```
@@ -105,7 +110,7 @@ rclone copyto source:path dest:path [flags]
--missing-on-dst string Report all files missing from the destination to this file
--missing-on-src string Report all files missing from the source to this file
-s, --separator string Separator for the items in the format (default ";")
-t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05)
```
Options shared with other commands are described next.
@@ -115,7 +120,7 @@ See the [global flags page](/flags/) for global options not listed here.
Flags for anything which can copy a file
```text
--check-first Do all the checks before starting transfers
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only)
--compare-dest stringArray Include additional server-side paths during comparison
@@ -156,7 +161,7 @@ Flags for anything which can copy a file
Important flags useful for most commands
```text
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
@@ -166,7 +171,7 @@ Important flags useful for most commands
Flags for filtering directory listings
```text
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
@@ -196,12 +201,17 @@ Flags for filtering directory listings
Flags for listing directories
```text
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
```
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

@@ -22,12 +22,23 @@ set in HTTP headers, it will be used instead of the name from the URL.
With `--print-filename` in addition, the resulting file name will be
printed.
Setting `--no-clobber` will prevent overwriting a file on the
destination if there is one with the same name.
Setting `--stdout` or making the output file name `-`
will cause the output to be written to standard output.
Setting `--urls` allows you to input a CSV file of URLs in format: URL,
FILENAME. If `--urls` is in use then replace the URL in the arguments with the
file containing the URLs, e.g.:
```sh
rclone copyurl --urls myurls.csv remote:dir
```
Missing filenames will be autogenerated equivalent to using `--auto-filename`.
Note that `--stdout` and `--print-filename` are incompatible with `--urls`.
This will do `--transfers` copies in parallel. Note that if `--auto-filename`
is desired for all URLs then a file with only URLs and no filename can be used.
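An illustrative `myurls.csv` might look like this (the URLs and names are made
up; the second row omits the filename, so it is auto-generated):
```text
https://example.com/releases/file-1.zip,first.zip
https://example.com/releases/file-2.zip
```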
## Troubleshooting
If you can't get `rclone copyurl` to work then here are some things you can try:
@@ -38,8 +49,6 @@ If you can't get `rclone copyurl` to work then here are some things you can try:
- `--user-agent curl` - some sites have whitelists for curl's user-agent - try that
- Make sure the site works with `curl` directly
```
rclone copyurl https://example.com dest:path [flags]
```
@@ -53,6 +62,7 @@ rclone copyurl https://example.com dest:path [flags]
--no-clobber Prevent overwriting file with same name
-p, --print-filename Print the resulting name from --auto-filename
--stdout Write the output to stdout rather than a file
--urls Use a CSV file of links to process multiple URLs
```
Options shared with other commands are described next.
@@ -62,7 +72,7 @@ See the [global flags page](/flags/) for global options not listed here.
Important flags useful for most commands
```text
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
@@ -70,5 +80,10 @@ Important flags useful for most commands
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

@@ -10,7 +10,7 @@ Cryptcheck checks the integrity of an encrypted remote.
## Synopsis
Checks a remote against an [encrypted](/crypt/) remote. This is the equivalent
of running rclone [check](/commands/rclone_check/), but able to check the
checksums of the encrypted remote.
@@ -24,14 +24,18 @@ checksum of the file it has just encrypted.
Use it like this
```console
rclone cryptcheck /path/to/files encryptedremote:path
```
You can use it like this also, but that will involve downloading all
the files in `remote:path`.
```console
rclone cryptcheck remote:path encryptedremote:path
```
After it has run it will log the status of the `encryptedremote:`.
If you supply the `--one-way` flag, it will only check that files in
the source match the files in the destination, not the other way
@@ -57,7 +61,6 @@ you what happened to it. These are reminiscent of diff files.
The default number of parallel checks is 8. See the [--checkers](/docs/#checkers-int)
option for more information.
```
rclone cryptcheck remote:path cryptedremote:path [flags]
```
@@ -82,7 +85,7 @@ See the [global flags page](/flags/) for global options not listed here.
Flags used for check commands
```text
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
```
@@ -90,7 +93,7 @@ Flags used for check commands
Flags for filtering directory listings
```text
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
@@ -120,12 +123,17 @@ Flags for filtering directory listings
Flags for listing directories
```text
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
```
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

@@ -17,13 +17,13 @@ If you supply the `--reverse` flag, it will return encrypted file names.
use it like this
```console
rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
rclone cryptdecode --reverse encryptedremote: filename1 filename2
```
Another way to accomplish this is by using the `rclone backend encode` (or `decode`)
command. See the documentation on the [crypt](/crypt/) overlay for more info.
```
rclone cryptdecode encryptedremote: encryptedfilename [flags]
```
@@ -40,5 +40,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

@@ -30,14 +30,15 @@ directories have been merged.
Next, if deduping by name, for every group of duplicate file names /
hashes, it will delete all but one identical file it finds without
confirmation. This means that for most duplicated files the
`dedupe` command will not be interactive.
`dedupe` considers files to be identical if they have the
same file path and the same hash. If the backend does not support
hashes (e.g. crypt wrapping Google Drive) then they will never be found
to be identical. If you use the `--size-only` flag then files
will be considered identical if they have the same size (any hash will be
ignored). This can be useful on crypt backends which do not support hashes.
Next rclone will resolve the remaining duplicates. Exactly which
action is taken depends on the dedupe mode. By default, rclone will
@@ -50,71 +51,82 @@ Here is an example run.
Before - with duplicates
```console
$ rclone lsl drive:dupes
6048320 2016-03-05 16:23:16.798000000 one.txt
6048320 2016-03-05 16:23:11.775000000 one.txt
564374 2016-03-05 16:23:06.731000000 one.txt
6048320 2016-03-05 16:18:26.092000000 one.txt
6048320 2016-03-05 16:22:46.185000000 two.txt
1744073 2016-03-05 16:22:38.104000000 two.txt
564374 2016-03-05 16:22:52.118000000 two.txt
```
Now the `dedupe` session
```console
$ rclone dedupe drive:dupes
2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
one.txt: Found 4 files with duplicate names
one.txt: Deleting 2/3 identical duplicates (MD5 "1eedaa9fe86fd4b8632e2ac549403b36")
one.txt: 2 duplicates remain
1: 6048320 bytes, 2016-03-05 16:23:16.798000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36
2: 564374 bytes, 2016-03-05 16:23:06.731000000, MD5 7594e7dc9fc28f727c42ee3e0749de81
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> k
Enter the number of the file to keep> 1
one.txt: Deleted 1 extra copies
two.txt: Found 3 files with duplicate names
two.txt: 3 duplicates remain
1: 564374 bytes, 2016-03-05 16:22:52.118000000, MD5 7594e7dc9fc28f727c42ee3e0749de81
2: 6048320 bytes, 2016-03-05 16:22:46.185000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36
3: 1744073 bytes, 2016-03-05 16:22:38.104000000, MD5 851957f7fb6f0bc4ce76be966d336802
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> r
two-1.txt: renamed from: two.txt
two-2.txt: renamed from: two.txt
two-3.txt: renamed from: two.txt
```
The result being
```console
$ rclone lsl drive:dupes
6048320 2016-03-05 16:23:16.798000000 one.txt
564374 2016-03-05 16:22:52.118000000 two-1.txt
6048320 2016-03-05 16:22:46.185000000 two-2.txt
1744073 2016-03-05 16:22:38.104000000 two-3.txt
```
Dedupe can be run non-interactively using the `--dedupe-mode` flag
or by using an extra parameter with the same value:
- `--dedupe-mode interactive` - interactive as above.
- `--dedupe-mode skip` - removes identical files then skips anything left.
- `--dedupe-mode first` - removes identical files then keeps the first one.
- `--dedupe-mode newest` - removes identical files then keeps the newest one.
- `--dedupe-mode oldest` - removes identical files then keeps the oldest one.
- `--dedupe-mode largest` - removes identical files then keeps the largest one.
- `--dedupe-mode smallest` - removes identical files then keeps the smallest one.
- `--dedupe-mode rename` - removes identical files then renames the rest to be different.
- `--dedupe-mode list` - lists duplicate dirs and files only and changes nothing.
For example, to rename all the identically named photos in your Google Photos
directory, do
```console
rclone dedupe --dedupe-mode rename "drive:Google Photos"
```
Or
```console
rclone dedupe rename "drive:Google Photos"
```
```
rclone dedupe [mode] remote:path [flags]
```
@@ -135,7 +147,7 @@ See the [global flags page](/flags/) for global options not listed here.
Important flags useful for most commands
```text
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
@@ -143,5 +155,10 @@ Important flags useful for most commands
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

View File

@@ -17,19 +17,23 @@ obeys include/exclude filters so can be used to selectively delete files.
alone. If you want to delete a directory and all of its contents use
the [purge](/commands/rclone_purge/) command.
If you supply the `--rmdirs` flag, it will remove all empty directories along
with it. You can also use the separate command [rmdir](/commands/rclone_rmdir/)
or [rmdirs](/commands/rclone_rmdirs/) to delete empty directories only.
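For example, a minimal sketch that deletes the matching files and then removes
any directories left empty (the remote path is a placeholder):
```sh
rclone delete remote:path --rmdirs
```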
For example, to delete all files bigger than 100 MiB, you may first want to
check what would be deleted (use either):
```sh
rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path
```
Then proceed with the actual delete:
```sh
rclone --min-size 100M delete remote:path
```
That reads "delete everything with a minimum size of 100 MiB", hence
delete all files bigger than 100 MiB.
@@ -37,7 +41,6 @@ delete all files bigger than 100 MiB.
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
```
rclone delete remote:path [flags]
```
@@ -56,7 +59,7 @@ See the [global flags page](/flags/) for global options not listed here.
Important flags useful for most commands
```text
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
@@ -66,7 +69,7 @@ Important flags useful for most commands
Flags for filtering directory listings
```text
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
@@ -96,12 +99,17 @@ Flags for filtering directory listings
Flags for listing directories
```text
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
```
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

@@ -11,9 +11,8 @@ Remove a single file from remote.
## Synopsis
Remove a single file from remote. Unlike `delete` it cannot be used to
remove a directory and it doesn't obey include/exclude filters - if the
specified file exists, it will always be removed.
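For example, a minimal sketch (the path is a placeholder):
```console
rclone deletefile remote:dir/file.txt
```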
```
rclone deletefile remote:path [flags]
```
@@ -32,7 +31,7 @@ See the [global flags page](/flags/) for global options not listed here.
Important flags useful for most commands
```text
-n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
@@ -40,5 +39,10 @@ Important flags useful for most commands
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

View File

@@ -28,5 +28,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

View File

@@ -18,19 +18,21 @@ users.
[git-annex]: https://git-annex.branchable.com/
## Installation on Linux
1. Skip this step if your version of git-annex is [10.20240430] or newer.
Otherwise, you must create a symlink somewhere on your PATH with a particular
name. This symlink helps git-annex tell rclone it wants to run the "gitannex"
subcommand.
Create the helper symlink in "$HOME/bin":
```console
ln -s "$(realpath rclone)" "$HOME/bin/git-annex-remote-rclone-builtin"
```
Verify the new symlink is on your PATH:
```console
which git-annex-remote-rclone-builtin
```
@@ -42,11 +44,15 @@ Installation on Linux
Start by asking git-annex to describe the remote's available configuration
parameters.
If you skipped step 1:
```console
git annex initremote MyRemote type=rclone --whatelse
```
If you created a symlink in step 1:
```console
git annex initremote MyRemote type=external externaltype=rclone-builtin --whatelse
```
@@ -62,7 +68,7 @@ Installation on Linux
be one configured in your rclone.conf file, which can be located with `rclone
config file`.
```console
git annex initremote MyRemote \
type=external \
externaltype=rclone-builtin \
@@ -76,13 +82,12 @@ Installation on Linux
remote**. This command is very new and has not been tested on many rclone
backends. Caveat emptor!
```console
git annex testremote MyRemote
```
Happy annexing!
```
rclone gitannex [flags]
```
@@ -97,5 +102,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

View File

@@ -29,25 +29,28 @@ as a relative path).
Run without a hash to see the list of all supported hashes, e.g.
```console
$ rclone hashsum
Supported hashes are:
- md5
- sha1
- whirlpool
- crc32
- sha256
- sha512
- blake3
- xxh3
- xxh128
```
Then
```console
rclone hashsum MD5 remote:path
```
Note that hash names are case insensitive and values are output in lower case.
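Illustrative output (the hash values and file names below are made up):
```console
$ rclone hashsum MD5 remote:path
3e25960a79dbc69b674cd4ec67a72c62  file1.txt
d41d8cd98f00b204e9800998ecf8427e  dir/file2.txt
```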
```
rclone hashsum [<hash> remote:path] [flags]
```
@@ -69,7 +72,7 @@ See the [global flags page](/flags/) for global options not listed here.
Flags for filtering directory listings
```text
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
@@ -99,12 +102,17 @@ Flags for filtering directory listings
Flags for listing directories
```text
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactions
```
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

@@ -12,10 +12,12 @@ Generate public link to file/folder.
Create, retrieve or remove a public link to the given file or folder.
```console
rclone link remote:path/to/file
rclone link remote:path/to/folder/
rclone link --unlink remote:path/to/folder/
rclone link --expire 1d remote:path/to/file
```
If you supply the --expire flag, it will set the expiration time
otherwise it will use the default (100 years). **Note** not all
@@ -28,10 +30,9 @@ don't will just ignore it.
If successful, the last line of the output will contain the
link. Exact capabilities depend on the remote, but the link will
always by default be created with the least constraints - e.g. no
expiry, no password protection, accessible without account.
```
rclone link remote:path [flags]
```
@@ -48,5 +49,10 @@ See the [global flags page](/flags/) for global options not listed here.
## See Also
<!-- markdownlint-capture -->
<!-- markdownlint-disable ul-style line-length -->
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
<!-- markdownlint-restore -->

Some files were not shown because too many files have changed in this diff.