mirror of https://github.com/rclone/rclone.git synced 2025-12-21 10:43:37 +00:00

Compare commits


55 Commits

Author SHA1 Message Date
Nick Craig-Wood
69ae5f2aaf serve: fix auth proxy using stale config parameters when making a backend
Before this change, if the auth proxy script returned updated config
parameters for a backend (e.g. the api_key changed for the backend),
rclone would continue to reuse the old backend with the old config
parameters from the fscache.

This fixes the problem by adding a short config hash to the fs names
created by the auth proxy. They used to be `proxy-user` (where user
was the value supplied to the auth proxy) and they will now be
`proxy-user-hash`, where hash is a base64-encoded partial MD5 hash of
the config.

These new config names will be visible in the logs, so this is a
user-visible change.
2025-01-27 19:20:10 +00:00
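
A minimal sketch of the naming idea described in the commit above, assuming a helper along these lines (the function and field names are illustrative, not rclone's actual code): the fs name gains a short base64-encoded partial MD5 of the config, so changed credentials map to a new cache entry.

```go
// Illustrative sketch only of the "proxy-user-hash" naming idea.
package main

import (
	"crypto/md5"
	"encoding/base64"
	"fmt"
	"sort"
)

// proxyFsName builds a name like "proxy-user-hash" where hash is a
// base64-encoded partial MD5 of the (sorted) config parameters.
func proxyFsName(user string, config map[string]string) string {
	keys := make([]string, 0, len(config))
	for k := range config {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := md5.New()
	for _, k := range keys {
		fmt.Fprintf(h, "%s=%s\n", k, config[k])
	}
	sum := h.Sum(nil)
	// keep only the first few bytes so the name stays short
	hash := base64.RawURLEncoding.EncodeToString(sum[:6])
	return fmt.Sprintf("proxy-%s-%s", user, hash)
}

func main() {
	fmt.Println(proxyFsName("alice", map[string]string{"api_key": "old"}))
	fmt.Println(proxyFsName("alice", map[string]string{"api_key": "new"}))
}
```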
Nick Craig-Wood
c837664653 sync: fix cpu spinning when empty directory finding with leading slashes
Before this change, the logic that makes sure we create all
directories could get confused by directories whose names started with
slashes and go into an infinite loop, consuming 100% of the CPU.
2025-01-22 11:56:05 +00:00
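
A tiny illustration (not rclone's code) of why a leading slash can make a "create all parent directories" loop spin: `path.Dir` of `/` is `/` again, so a loop that only stops on an empty string never terminates.

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	dir := "/a/b"
	for i := 0; dir != "" && i < 5; i++ { // i guard added so this example stops
		fmt.Println("would create:", dir)
		dir = path.Dir(dir) // "/a/b" -> "/a" -> "/" -> "/" -> ...
		if dir == "." {
			dir = ""
		}
	}
}
```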
Nick Craig-Wood
77429b154e s3: fix handling of objects with // in #5858 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
39b8f17ebb azureblob: fix handling of objects with // in #5858 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
81ecfb0f64 fstest: add integration tests objects with // on bucket based backends #5858 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
656e789c5b fs/list: tweak directory listing assertions after allowing // names 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
fe19184084 lib/bucket: fix tidying of // in object keys #5858
Before this change, bucket.Join would tidy up object keys by removing
repeated / in them. This meant we couldn't access objects with // in
them, which is valid for object keys (but not for file system paths).

This could have consequences for users who are relying on rclone to
fix improper paths for them.
2025-01-22 11:56:05 +00:00
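
A sketch of the difference, using a simplified join helper: `path.Join` cleans the result and collapses `//`, while a raw join keeps the key byte-for-byte. This only illustrates the idea behind the fix, not the actual lib/bucket implementation.

```go
package main

import (
	"fmt"
	"path"
)

// rawJoin concatenates bucket and key without cleaning, so "a//b" survives.
func rawJoin(bucket, key string) string {
	if key == "" {
		return bucket
	}
	return bucket + "/" + key
}

func main() {
	fmt.Println(path.Join("bucket", "a//b")) // "bucket/a/b" - the // is collapsed
	fmt.Println(rawJoin("bucket", "a//b"))   // "bucket/a//b" - the // is kept
}
```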
Nick Craig-Wood
b4990cd858 lib/bucket: add IsAllSlashes function 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
8e955c6b13 azureblob: remove uncommitted blocks on InvalidBlobOrBlock error
When doing a multipart upload or copy, if an InvalidBlobOrBlock error
is received, it can mean that there are uncommitted blocks from a
previous failed attempt with a different ID length.

This patch makes rclone attempt to clear the uncommitted blocks and
retry if it receives this error.
2025-01-22 11:56:05 +00:00
Nick Craig-Wood
3a5ddfcd3c azureblob: implement multipart server side copy
This implements multipart server side copy to improve copying from one
azure region to another by orders of magnitude (from 30s for a 100M
file to 10s for a 10G file with --azureblob-upload-concurrency 500).

- Add `--azureblob-copy-cutoff` to control the cutoff from single to multipart copy
- Add `--azureblob-copy-concurrency` to control the copy concurrency
- Add ServerSideAcrossConfigs flag as this now works properly
- Implement multipart copy using put block list API
- Shortcut multipart copy for same storage account
- Override with `--azureblob-use-copy-blob`

Fixes #8249
2025-01-22 11:56:05 +00:00
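
A schematic sketch of how such a multipart server-side copy could be structured: split the source into chunks, stage each chunk on the destination concurrently (bounded like `--azureblob-copy-concurrency`), then commit the block list. `stageBlockFromURL` and `commitBlockList` are hypothetical stand-ins for the real Azure SDK calls.

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical stand-in for the Azure "put block from URL" style call.
func stageBlockFromURL(blockID string, offset, length int64) error {
	fmt.Printf("stage %s: bytes %d-%d\n", blockID, offset, offset+length-1)
	return nil
}

// Hypothetical stand-in for committing the staged block list.
func commitBlockList(blockIDs []string) error {
	fmt.Println("commit", len(blockIDs), "blocks")
	return nil
}

func multipartCopy(size, chunkSize int64, concurrency int) error {
	var (
		wg   sync.WaitGroup
		sem  = make(chan struct{}, concurrency) // limits concurrent stages
		mu   sync.Mutex
		errs []error
	)
	var blockIDs []string
	for offset := int64(0); offset < size; offset += chunkSize {
		length := chunkSize
		if offset+length > size {
			length = size - offset
		}
		id := fmt.Sprintf("block-%08d", len(blockIDs))
		blockIDs = append(blockIDs, id)
		wg.Add(1)
		sem <- struct{}{}
		go func(id string, offset, length int64) {
			defer wg.Done()
			defer func() { <-sem }()
			if err := stageBlockFromURL(id, offset, length); err != nil {
				mu.Lock()
				errs = append(errs, err)
				mu.Unlock()
			}
		}(id, offset, length)
	}
	wg.Wait()
	if len(errs) > 0 {
		return errs[0]
	}
	return commitBlockList(blockIDs)
}

func main() {
	_ = multipartCopy(10<<20, 4<<20, 4)
}
```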
Nick Craig-Wood
ac3f7a87c3 azureblob: speed up server side copies for small files #8249
This speeds up server-side copies for small files, which need to
check the copy status, by using an exponential ramp-up of the time
between checks of the copy status endpoint.
2025-01-22 11:56:05 +00:00
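
A minimal sketch of the exponential ramp-up idea, assuming a hypothetical `copyFinished` check: poll quickly at first so small copies complete promptly, then back off towards a maximum interval.

```go
package main

import (
	"fmt"
	"time"
)

func waitForCopy(copyFinished func() bool) {
	delay := 10 * time.Millisecond
	const maxDelay = time.Second
	for !copyFinished() {
		time.Sleep(delay)
		delay *= 2 // exponential ramp up
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	polls := 0
	waitForCopy(func() bool {
		polls++
		return polls > 3
	})
	fmt.Println("copy finished after", polls, "status checks")
}
```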
Nick Craig-Wood
4e9b63e141 azureblob: cleanup uncommitted blocks on upload errors
Before this change, if a multipart upload was aborted, then rclone
would leave uncommitted blocks lying around. Azure has a limit of
100,000 uncommitted blocks per storage account, so when you then try
to upload other stuff into that account, or simply the same file
again, you can run into this limit. This causes errors like the
following:

BlockCountExceedsLimit: The uncommitted block count cannot exceed the
maximum limit of 100,000 blocks.

This change removes the uncommitted blocks if a multipart upload is
aborted or fails.

If there was an existing destination file, it takes care not to
overwrite it by recommitting already committed blocks.

This means that the scheme for allocating block IDs had to change to
make them different for each block and each upload.

Fixes #5583
2025-01-22 11:56:05 +00:00
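
One possible way to make block IDs unique per block and per upload while keeping them all the same length (Azure requires equal-length block IDs within a blob): combine a fresh random per-upload token with a fixed-width block number and base64-encode the result. This illustrates the idea only, not rclone's exact scheme.

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"encoding/binary"
	"fmt"
)

type blockIDAllocator struct {
	uploadToken [8]byte // fresh random token per upload attempt
}

func newBlockIDAllocator() (*blockIDAllocator, error) {
	a := &blockIDAllocator{}
	if _, err := rand.Read(a.uploadToken[:]); err != nil {
		return nil, err
	}
	return a, nil
}

// id returns a constant-length, base64-encoded ID unique to this upload and block.
func (a *blockIDAllocator) id(n uint64) string {
	buf := make([]byte, 16)
	copy(buf, a.uploadToken[:])
	binary.BigEndian.PutUint64(buf[8:], n)
	return base64.StdEncoding.EncodeToString(buf)
}

func main() {
	a, _ := newBlockIDAllocator()
	fmt.Println(a.id(0), a.id(1)) // same length, unique per block and per upload
}
```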
Nick Craig-Wood
7fd7fe3c82 azureblob: factor readMetaData into readMetaDataAlways returning blob properties 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
9dff45563d Add b-wimmer to contributors 2025-01-22 11:56:05 +00:00
b-wimmer
83cf8fb821 azurefiles: add --azurefiles-use-az and --azurefiles-disable-instance-discovery
Adds additional authentication options from azureblob to azurefiles as well

See rclone#8078
2025-01-22 11:11:18 +00:00
Nick Craig-Wood
32e79a5c5c onedrive: mark German (de) region as deprecated
See: https://learn.microsoft.com/en-us/previous-versions/azure/germany/
2025-01-22 11:00:37 +00:00
Nick Craig-Wood
fc44a8114e Add Trevor Starick to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
657172ef77 Add hiddenmarten to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
71eb4199c3 Add Corentin Barreau to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
ac3c21368d Add Bruno Fernandes to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
db71b2bd5f Add Moises Lima to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
8cfe42d09f Add izouxv to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
e673a28a72 Add Robin Schneider to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
59889ce46b Add Tim White to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
62e8a01e7e Add Christoph Berger to contributors 2025-01-22 11:00:37 +00:00
Trevor Starick
87eaf37629 azureblob: add support for x-ms-tags header 2025-01-17 19:37:56 +00:00
hiddenmarten
7c7606a6cf rc: disable the metrics server when running rclone rc
Fixes #8248
2025-01-17 17:46:22 +00:00
Corentin Barreau
dbb21165d4 internetarchive: add --internetarchive-metadata="key=value" for setting item metadata
Added the ability to include an item's metadata on uploads via the
Internet Archive backend using the `--internetarchive-metadata="key=value"`
argument. This is hidden from the configurator as it should only
really be used on the command line.

Before this change, metadata had to be manually added after uploads.
With this new feature, users can specify metadata directly during the
upload process.
2025-01-17 16:00:34 +00:00
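
A small sketch of how repeated `key=value` flags of a `stringArray` option like this might be parsed into item metadata (names here are illustrative only):

```go
package main

import (
	"fmt"
	"strings"
)

func parseMetadata(flags []string) (map[string]string, error) {
	meta := make(map[string]string)
	for _, f := range flags {
		key, value, ok := strings.Cut(f, "=")
		if !ok {
			return nil, fmt.Errorf("invalid metadata %q: expected key=value", f)
		}
		meta[key] = value
	}
	return meta, nil
}

func main() {
	meta, err := parseMetadata([]string{"collection=opensource", "mediatype=texts"})
	fmt.Println(meta, err)
}
```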
Dan McArdle
375953cba3 lib/batcher: Deprecate unused option: batch_commit_timeout 2025-01-17 15:56:09 +00:00
Bruno Fernandes
af5385b344 s3: Added new storage class to magalu provider 2025-01-17 15:54:34 +00:00
Moises Lima
347be176af http servers: add --user-from-header to use for authentication
Retrieve the username from a specified HTTP header if no
other authentication methods are configured
(ideal for proxied setups)
2025-01-17 15:53:23 +00:00
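
A sketch of the fallback described above, with hypothetical wiring: when no other authentication is configured, the username is read from a header set by a trusted reverse proxy, and the request is rejected if the header is missing.

```go
package main

import (
	"fmt"
	"net/http"
)

// userFromHeader wraps a handler and takes the username from headerName.
func userFromHeader(headerName string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		user := r.Header.Get(headerName)
		if user == "" {
			http.Error(w, "no user supplied", http.StatusUnauthorized)
			return
		}
		fmt.Printf("authenticated %q from header %s\n", user, headerName)
		next.ServeHTTP(w, r)
	})
}

func main() {
	handler := userFromHeader("X-Remote-User", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello")
	}))
	_ = http.ListenAndServe("127.0.0.1:8080", handler)
}
```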
Pat Patterson
bf5a4774c6 b2: add SkipDestructive handling to backend commands - fixes #8194 2025-01-17 15:47:01 +00:00
izouxv
0275d3edf2 vfs: close the change notify channel on Shutdown 2025-01-17 15:38:09 +00:00
Robin Schneider
be53ae98f8 Docker image: Add label org.opencontainers.image.source for release notes in Renovate dependency updates 2025-01-17 15:29:36 +00:00
Tim White
0d9fe51632 docs: add OneDrive Impersonate instructions - fixes #5610 2025-01-17 14:30:51 +00:00
Christoph Berger
03bd795221 docs: explain the stringArray flag parameter descriptor 2025-01-17 09:50:22 +01:00
Nick Craig-Wood
5a4026ccb4 iclouddrive: add notes on ADP and Missing PCS cookies - fixes #8310 2025-01-16 10:14:52 +00:00
Dimitri Papadopoulos
b1d4de69c2 docs: fix typos found by codespell in docs and code comments 2025-01-16 10:39:01 +01:00
Nick Craig-Wood
5316acd046 fs: fix confusing "didn't find section in config file" error
This change decorates the error with the name of the section that was
not found, which will hopefully save user confusion.

Fixes #8170
2025-01-15 16:32:59 +00:00
Nick Craig-Wood
2c72842c10 vfs: fix race detected by race detector
This race would only happen when --dir-cache-time was very small.

This was noticed in the VFS tests when --dir-cache-time was 100 ms, so
it is unlikely to affect normal users.
2025-01-14 20:46:27 +00:00
Nick Craig-Wood
4a81f12c26 Add Jonathan Giannuzzi to contributors 2025-01-14 20:46:27 +00:00
Nick Craig-Wood
aabda1cda2 Add Spencer McCullough to contributors 2025-01-14 20:46:27 +00:00
Nick Craig-Wood
572fe20f8e Add Matt Ickstadt to contributors 2025-01-14 20:46:27 +00:00
Jonathan Giannuzzi
2fd4c45b34 smb: add support for kerberos authentication
Fixes #7800
2025-01-14 19:24:31 +00:00
Spencer McCullough
ec5489e23f drive: added backend moveid command 2025-01-14 19:21:13 +00:00
Matt Ickstadt
6898375a2d docs: fix reference to serves3 setting disable_multipart_uploads which was renamed 2025-01-14 18:51:19 +01:00
Matt Ickstadt
d413443a6a docs: fix link to Rclone Serve S3 2025-01-14 18:51:19 +01:00
Nick Craig-Wood
5039747f26 serve s3: fix list objects encoding-type
Before this change, rclone would always use encoding-type url even if
the client hadn't asked for it.

This confused some clients.

This fixes the problem by leaving the URL encoding to the gofakes3
library, which has also been fixed.

Fixes #7836
2025-01-14 16:08:18 +00:00
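
A minimal sketch of the intended behaviour: only URL-encode listed keys when the client explicitly asked for `encoding-type=url`. In rclone itself the encoding is delegated to the gofakes3 library, so this is just an illustration.

```go
package main

import (
	"fmt"
	"net/url"
)

// listKey returns the key as it should appear in a list response.
func listKey(key string, encodingType string) string {
	if encodingType == "url" {
		return url.QueryEscape(key)
	}
	return key // leave untouched unless the client asked for it
}

func main() {
	fmt.Println(listKey("dir/file name.txt", ""))    // dir/file name.txt
	fmt.Println(listKey("dir/file name.txt", "url")) // dir%2Ffile+name.txt
}
```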
Nick Craig-Wood
11ba4ac539 build: update gopkg.in/yaml.v2 to v3 2025-01-14 15:25:10 +00:00
Nick Craig-Wood
b4ed7fb7d7 build: update all dependencies 2025-01-14 15:25:10 +00:00
Nick Craig-Wood
719473565e bisync: fix go vet problems with go1.24 2025-01-14 15:25:10 +00:00
Nick Craig-Wood
bd7278d7e9 build: update to go1.24rc1 and make go1.22 the minimum required version 2025-01-14 12:13:14 +00:00
Nick Craig-Wood
45ba81c726 version: add --deps flag to show dependencies and other build info 2025-01-14 12:08:49 +00:00
Nick Craig-Wood
530658e0cc doc: make man page well formed for whatis - fixes #7430 2025-01-13 18:35:27 +00:00
Nick Craig-Wood
b742705d0c Start v1.70.0-DEV development 2025-01-12 16:31:12 +00:00
150 changed files with 2902 additions and 3315 deletions

View File

@@ -26,12 +26,12 @@ jobs:
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os']
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.22', 'go1.23']
include:
- job_name: linux
os: ubuntu-latest
go: '>=1.23.0-rc.1'
go: '>=1.24.0-rc.1'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@@ -42,14 +42,14 @@ jobs:
- job_name: linux_386
os: ubuntu-latest
go: '>=1.23.0-rc.1'
go: '>=1.24.0-rc.1'
goarch: 386
gotags: cmount
quicktest: true
- job_name: mac_amd64
os: macos-latest
go: '>=1.23.0-rc.1'
go: '>=1.24.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
@@ -58,14 +58,14 @@ jobs:
- job_name: mac_arm64
os: macos-latest
go: '>=1.23.0-rc.1'
go: '>=1.24.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows
os: windows-latest
go: '>=1.23.0-rc.1'
go: '>=1.24.0-rc.1'
gotags: cmount
cgo: '0'
build_flags: '-include "^windows/"'
@@ -75,11 +75,23 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '>=1.23.0-rc.1'
go: '>=1.24.0-rc.1'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true
deploy: true
- job_name: go1.22
os: ubuntu-latest
go: '1.22'
quicktest: true
racequicktest: true
- job_name: go1.23
os: ubuntu-latest
go: '1.23'
quicktest: true
racequicktest: true
name: ${{ matrix.job_name }}
runs-on: ${{ matrix.os }}
@@ -299,7 +311,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '>=1.23.0-rc.1'
go-version: '>=1.24.0-rc.1'
- name: Set global environment variables
shell: bash

View File

@@ -0,0 +1,77 @@
name: Docker beta build
on:
push:
branches:
- master
jobs:
build:
if: github.repository == 'rclone/rclone'
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: ghcr.io/${{ github.repository }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
# This is the user that triggered the Workflow. In this case, it will
# either be the user whom created the Release or manually triggered
# the workflow_dispatch.
username: ${{ github.actor }}
# `secrets.GITHUB_TOKEN` is a secret that's automatically generated by
# GitHub Actions at the start of a workflow run to identify the job.
# This is used to authenticate against GitHub Container Registry.
# See https://docs.github.com/en/actions/security-guides/automatic-token-authentication#about-the-github_token-secret
# for more detailed information.
password: ${{ secrets.GITHUB_TOKEN }}
- name: Show disk usage
shell: bash
run: |
df -h .
- name: Build and publish image
uses: docker/build-push-action@v6
with:
file: Dockerfile
context: .
push: true # push the image to ghcr
tags: |
ghcr.io/rclone/rclone:beta
rclone/rclone:beta
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, mode=max, scope=${{ github.workflow }}
provenance: false
# Eventually cache will need to be cleared if builds more frequent than once a week
# https://github.com/docker/build-push-action/issues/252
- name: Show disk usage
shell: bash
run: |
df -h .

View File

@@ -1,294 +0,0 @@
---
# Github Actions release for rclone
# -*- compile-command: "yamllint -f parsable build_publish_docker_image.yml" -*-
name: Build & Push Docker Images
# Trigger the workflow on push or pull request
on:
push:
branches:
- '**'
tags:
- '**'
workflow_dispatch:
inputs:
manual:
description: Manual run (bypass default conditions)
type: boolean
default: true
jobs:
build-image:
if: inputs.manual || (github.repository == 'rclone/rclone' && github.event_name != 'pull_request')
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
include:
- platform: linux/amd64
runs-on: ubuntu-24.04
- platform: linux/386
runs-on: ubuntu-24.04
- platform: linux/arm64
runs-on: ubuntu-24.04-arm
- platform: linux/arm/v7
runs-on: ubuntu-24.04-arm
- platform: linux/arm/v6
runs-on: ubuntu-24.04-arm
name: Build Docker Image for ${{ matrix.platform }}
runs-on: ${{ matrix.runs-on }}
steps:
- name: Free Space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout Repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set REPO_NAME Variable
run: |
echo "REPO_NAME=`echo ${{github.repository}} | tr '[:upper:]' '[:lower:]'`" >> ${GITHUB_ENV}
- name: Set PLATFORM Variable
run: |
platform=${{ matrix.platform }}
echo "PLATFORM=${platform//\//-}" >> $GITHUB_ENV
- name: Set CACHE_NAME Variable
shell: python
run: |
import os, re
def slugify(input_string, max_length=63):
slug = input_string.lower()
slug = re.sub(r'[^a-z0-9 -]', ' ', slug)
slug = slug.strip()
slug = re.sub(r'\s+', '-', slug)
slug = re.sub(r'-+', '-', slug)
slug = slug[:max_length]
slug = re.sub(r'[-]+$', '', slug)
return slug
ref_name_slug = "cache"
if os.environ.get("GITHUB_REF_NAME") and os.environ['GITHUB_EVENT_NAME'] == "pull_request":
ref_name_slug += "-pr-" + slugify(os.environ['GITHUB_REF_NAME'])
with open(os.environ['GITHUB_ENV'], 'a') as env:
env.write(f"CACHE_NAME={ref_name_slug}\n")
- name: Get ImageOS
# There's no way around this, because "ImageOS" is only available to
# processes, but the setup-go action uses it in its key.
id: imageos
uses: actions/github-script@v7
with:
result-encoding: string
script: |
return process.env.ImageOS
- name: Extract Metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: manifest,manifest-descriptor # Important for digest annotation (used by Github packages)
with:
images: |
ghcr.io/${{ env.REPO_NAME }}
labels: |
org.opencontainers.image.url=https://github.com/rclone/rclone/pkgs/container/rclone
org.opencontainers.image.vendor=${{ github.repository_owner }}
org.opencontainers.image.authors=rclone <https://github.com/rclone>
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
tags: |
type=sha
type=ref,event=pr
type=ref,event=branch
type=semver,pattern={{version}}
type=semver,pattern={{major}}
type=semver,pattern={{major}}.{{minor}}
type=raw,value=beta,enable={{is_default_branch}}
- name: Setup QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Load Go Build Cache for Docker
id: go-cache
uses: actions/cache@v4
with:
key: ${{ runner.os }}-${{ steps.imageos.outputs.result }}-go-${{ env.CACHE_NAME }}-${{ env.PLATFORM }}-${{ hashFiles('**/go.mod') }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-${{ steps.imageos.outputs.result }}-go-${{ env.CACHE_NAME }}-${{ env.PLATFORM }}
# Cache only the go builds, the module download is cached via the docker layer caching
path: |
go-build-cache
- name: Inject Go Build Cache into Docker
uses: reproducible-containers/buildkit-cache-dance@v3
with:
cache-map: |
{
"go-build-cache": "/root/.cache/go-build"
}
skip-extraction: ${{ steps.go-cache.outputs.cache-hit }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
# This is the user that triggered the Workflow. In this case, it will
# either be the user whom created the Release or manually triggered
# the workflow_dispatch.
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and Publish Image Digest
id: build
uses: docker/build-push-action@v6
with:
file: Dockerfile
context: .
provenance: false
# don't specify 'tags' here (error "get can't push tagged ref by digest")
# tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
annotations: ${{ steps.meta.outputs.annotations }}
platforms: ${{ matrix.platform }}
outputs: |
type=image,name=ghcr.io/${{ env.REPO_NAME }},push-by-digest=true,name-canonical=true,push=true
cache-from: |
type=registry,ref=ghcr.io/${{ env.REPO_NAME }}:build-${{ env.CACHE_NAME }}-${{ env.PLATFORM }}
cache-to: |
type=registry,ref=ghcr.io/${{ env.REPO_NAME }}:build-${{ env.CACHE_NAME }}-${{ env.PLATFORM }},image-manifest=true,mode=max,compression=zstd
- name: Export Image Digest
run: |
mkdir -p /tmp/digests
digest="${{ steps.build.outputs.digest }}"
touch "/tmp/digests/${digest#sha256:}"
- name: Upload Image Digest
uses: actions/upload-artifact@v4
with:
name: digests-${{ env.PLATFORM }}
path: /tmp/digests/*
retention-days: 1
if-no-files-found: error
merge-image:
name: Merge & Push Final Docker Image
runs-on: ubuntu-24.04
needs:
- build-image
steps:
- name: Download Image Digests
uses: actions/download-artifact@v4
with:
path: /tmp/digests
pattern: digests-*
merge-multiple: true
- name: Set REPO_NAME Variable
run: |
echo "REPO_NAME=`echo ${{github.repository}} | tr '[:upper:]' '[:lower:]'`" >> ${GITHUB_ENV}
- name: Extract Metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: index
with:
images: |
${{ env.REPO_NAME }}
ghcr.io/${{ env.REPO_NAME }}
labels: |
org.opencontainers.image.url=https://github.com/rclone/rclone/pkgs/container/rclone
org.opencontainers.image.vendor=${{ github.repository_owner }}
org.opencontainers.image.authors=rclone <https://github.com/rclone>
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
tags: |
type=sha
type=ref,event=pr
type=ref,event=branch
type=semver,pattern={{version}}
type=semver,pattern={{major}}
type=semver,pattern={{major}}.{{minor}}
type=raw,value=beta,enable={{is_default_branch}}
- name: Extract Tags
shell: python
run: |
import json, os
metadata_json = os.environ['DOCKER_METADATA_OUTPUT_JSON']
metadata = json.loads(metadata_json)
tags = [f"--tag '{tag}'" for tag in metadata["tags"]]
tags_string = " ".join(tags)
with open(os.environ['GITHUB_ENV'], 'a') as env:
env.write(f"TAGS={tags_string}\n")
- name: Extract Annotations
shell: python
run: |
import json, os
metadata_json = os.environ['DOCKER_METADATA_OUTPUT_JSON']
metadata = json.loads(metadata_json)
annotations = [f"--annotation '{annotation}'" for annotation in metadata["annotations"]]
annotations_string = " ".join(annotations)
with open(os.environ['GITHUB_ENV'], 'a') as env:
env.write(f"ANNOTATIONS={annotations_string}\n")
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
# This is the user that triggered the Workflow. In this case, it will
# either be the user whom created the Release or manually triggered
# the workflow_dispatch.
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create & Push Manifest List
working-directory: /tmp/digests
run: |
docker buildx imagetools create \
${{ env.TAGS }} \
${{ env.ANNOTATIONS }} \
$(printf 'ghcr.io/${{ env.REPO_NAME }}@sha256:%s ' *)
- name: Inspect and Run Multi-Platform Image
run: |
docker buildx imagetools inspect --raw ${{ env.REPO_NAME }}:${{ steps.meta.outputs.version }}
docker buildx imagetools inspect --raw ghcr.io/${{ env.REPO_NAME }}:${{ steps.meta.outputs.version }}
docker run --rm ghcr.io/${{ env.REPO_NAME }}:${{ steps.meta.outputs.version }} version

View File

@@ -1,49 +0,0 @@
---
# Github Actions release for rclone
# -*- compile-command: "yamllint -f parsable build_publish_docker_plugin.yml" -*-
name: Release Build for Docker Plugin
on:
release:
types: [published]
workflow_dispatch:
inputs:
manual:
description: Manual run (bypass default conditions)
type: boolean
default: true
jobs:
build_docker_volume_plugin:
if: inputs.manual || github.repository == 'rclone/rclone'
name: Build docker plugin job
runs-on: ubuntu-latest
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Build and publish docker plugin
shell: bash
run: |
VER=${GITHUB_REF#refs/tags/}
PLUGIN_USER=rclone
docker login --username ${{ secrets.DOCKER_HUB_USER }} \
--password-stdin <<< "${{ secrets.DOCKER_HUB_PASSWORD }}"
for PLUGIN_ARCH in amd64 arm64 arm/v7 arm/v6 ;do
export PLUGIN_USER PLUGIN_ARCH
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}-${VER#v}
done
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=latest
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=${VER#v}

View File

@@ -0,0 +1,89 @@
name: Docker release build
on:
release:
types: [published]
jobs:
build:
if: github.repository == 'rclone/rclone'
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Get actual patch version
id: actual_patch_version
run: echo ::set-output name=ACTUAL_PATCH_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g')
- name: Get actual minor version
id: actual_minor_version
run: echo ::set-output name=ACTUAL_MINOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1,2)
- name: Get actual major version
id: actual_major_version
run: echo ::set-output name=ACTUAL_MAJOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1)
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USER }}
password: ${{ secrets.DOCKER_HUB_PASSWORD }}
- name: Build and publish image
uses: docker/build-push-action@v6
with:
file: Dockerfile
context: .
platforms: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
push: true
tags: |
rclone/rclone:latest
rclone/rclone:${{ steps.actual_patch_version.outputs.ACTUAL_PATCH_VERSION }}
rclone/rclone:${{ steps.actual_minor_version.outputs.ACTUAL_MINOR_VERSION }}
rclone/rclone:${{ steps.actual_major_version.outputs.ACTUAL_MAJOR_VERSION }}
build_docker_volume_plugin:
if: github.repository == 'rclone/rclone'
needs: build
runs-on: ubuntu-latest
name: Build docker plugin job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Build and publish docker plugin
shell: bash
run: |
VER=${GITHUB_REF#refs/tags/}
PLUGIN_USER=rclone
docker login --username ${{ secrets.DOCKER_HUB_USER }} \
--password-stdin <<< "${{ secrets.DOCKER_HUB_PASSWORD }}"
for PLUGIN_ARCH in amd64 arm64 arm/v7 arm/v6 ;do
export PLUGIN_USER PLUGIN_ARCH
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}-${VER#v}
done
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=latest
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=${VER#v}

View File

@@ -1,47 +1,21 @@
FROM golang:alpine AS builder
ARG CGO_ENABLED=0
COPY . /go/src/github.com/rclone/rclone/
WORKDIR /go/src/github.com/rclone/rclone/
RUN echo "**** Set Go Environment Variables ****" && \
go env -w GOCACHE=/root/.cache/go-build
RUN echo "**** Install Dependencies ****" && \
apk add --no-cache \
make \
bash \
gawk \
git
COPY go.mod .
COPY go.sum .
RUN echo "**** Download Go Dependencies ****" && \
go mod download -x
RUN echo "**** Verify Go Dependencies ****" && \
go mod verify
COPY . .
RUN --mount=type=cache,target=/root/.cache/go-build,sharing=locked \
echo "**** Build Binary ****" && \
make
RUN echo "**** Print Version Binary ****" && \
./rclone version
RUN apk add --no-cache make bash gawk git
RUN \
CGO_ENABLED=0 \
make
RUN ./rclone version
# Begin final image
FROM alpine:latest
RUN echo "**** Install Dependencies ****" && \
apk add --no-cache \
ca-certificates \
fuse3 \
tzdata && \
echo "Enable user_allow_other in fuse" && \
echo "user_allow_other" >> /etc/fuse.conf
LABEL org.opencontainers.image.source="https://github.com/rclone/rclone"
RUN apk --no-cache add ca-certificates fuse3 tzdata && \
echo "user_allow_other" >> /etc/fuse.conf
COPY --from=builder /go/src/github.com/rclone/rclone/rclone /usr/local/bin/

MANUAL.html generated (762 lines changed)

File diff suppressed because it is too large

MANUAL.md generated (354 lines changed)
View File

@@ -1,78 +1,7 @@
% rclone(1) User Manual
% Nick Craig-Wood
% May 21, 2025
% Jan 12, 2025
# NAME
rclone - manage files on cloud storage
# SYNOPSIS
```
Usage:
rclone [flags]
rclone [command]
Available commands:
about Get quota information from the remote.
authorize Remote authorization.
backend Run a backend-specific command.
bisync Perform bidirectional synchronization between two paths.
cat Concatenates any files and sends them to stdout.
check Checks the files in the source and destination match.
checksum Checks the files in the destination against a SUM file.
cleanup Clean up the remote if possible.
completion Output completion script for a given shell.
config Enter an interactive configuration session.
copy Copy files from source to dest, skipping identical files.
copyto Copy files from source to dest, skipping identical files.
copyurl Copy the contents of the URL supplied content to dest:path.
cryptcheck Cryptcheck checks the integrity of an encrypted remote.
cryptdecode Cryptdecode returns unencrypted file names.
dedupe Interactively find duplicate filenames and delete/rename them.
delete Remove the files in path.
deletefile Remove a single file from remote.
gendocs Output markdown docs for rclone to the directory supplied.
gitannex Speaks with git-annex over stdin/stdout.
hashsum Produces a hashsum file for all the objects in the path.
help Show help for rclone commands, flags and backends.
link Generate public link to file/folder.
listremotes List all the remotes in the config file and defined in environment variables.
ls List the objects in the path with size and path.
lsd List all directories/containers/buckets in the path.
lsf List directories and objects in remote:path formatted for parsing.
lsjson List directories and objects in the path in JSON format.
lsl List the objects in path with modification time, size and path.
md5sum Produces an md5sum file for all the objects in the path.
mkdir Make the path if it doesn't already exist.
mount Mount the remote as file system on a mountpoint.
move Move files from source to dest.
moveto Move file or directory from source to dest.
ncdu Explore a remote with a text based user interface.
nfsmount Mount the remote as file system on a mountpoint.
obscure Obscure password for use in the rclone config file.
purge Remove the path and all of its contents.
rc Run a command against a running rclone.
rcat Copies standard input to file on remote.
rcd Run rclone listening to remote control commands only.
rmdir Remove the empty directory at path.
rmdirs Remove empty directories under the path.
selfupdate Update the rclone binary.
serve Serve a remote over a protocol.
settier Changes storage class/tier of objects in remote.
sha1sum Produces an sha1sum file for all the objects in the path.
size Prints the total size and number of objects in remote:path.
sync Make source and dest identical, modifying destination only.
test Run a test command
touch Create new file or change file modification time.
tree List the contents of the remote in a tree like fashion.
version Show the version number.
Use "rclone [command] --help" for more information about a command.
Use "rclone help flags" for to see the global flags.
Use "rclone help backends" for a list of supported services.
```
# Rclone syncs your files to cloud storage
<img width="50%" src="https://rclone.org/img/logo_on_light__horizontal_color.svg" alt="rclone logo" style="float:right; padding: 5px;" >
@@ -1761,9 +1690,6 @@ include/exclude filters - everything will be removed. Use the
delete files. To delete empty directories only, use command
[rmdir](https://rclone.org/commands/rclone_rmdir/) or [rmdirs](https://rclone.org/commands/rclone_rmdirs/).
The concurrency of this operation is controlled by the `--checkers` global flag. However, some backends will
implement this command directly, in which case `--checkers` will be ignored.
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
@@ -2858,18 +2784,13 @@ Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.
The command requires 1-3 arguments:
- fs name (e.g., "drive", "s3", etc.)
- Either a base64 encoded JSON blob obtained from a previous rclone config session
- Or a client_id and client_secret pair obtained from the remote service
Use --auth-no-open-browser to prevent rclone to open auth
link in default browser automatically.
Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.
```
rclone authorize <fs name> [base64_json_blob | client_id client_secret] [flags]
rclone authorize [flags]
```
## Options
@@ -3824,12 +3745,12 @@ password to re-encrypt the config.
When `--password-command` is called to change the password then the
environment variable `RCLONE_PASSWORD_CHANGE=1` will be set. So if
changing passwords programmatically you can use the environment
changing passwords programatically you can use the environment
variable to distinguish which password you must supply.
Alternatively you can remove the password first (with `rclone config
encryption remove`), then set it again with this command which may be
easier if you don't mind the unencrypted config file being on the disk
easier if you don't mind the unecrypted config file being on the disk
briefly.
@@ -4237,8 +4158,6 @@ This doesn't transfer files that are identical on src and dst, testing
by size and modification time or MD5SUM. It doesn't delete files from
the destination.
*If you are looking to copy just a byte range of a file, please see 'rclone cat --offset X --count Y'*
**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics
@@ -4360,7 +4279,7 @@ Setting `--auto-filename` will attempt to automatically determine the
filename from the URL (after any redirections) and used in the
destination path.
With `--header-filename` in addition, if a specific filename is
With `--auto-filename-header` in addition, if a specific filename is
set in HTTP headers, it will be used instead of the name from the URL.
With `--print-filename` in addition, the resulting file name will be
printed.
@@ -4371,7 +4290,7 @@ destination if there is one with the same name.
Setting `--stdout` or making the output file name `-`
will cause the output to be written to standard output.
## Troubleshooting
## Troublshooting
If you can't get `rclone copyurl` to work then here are some things you can try:
@@ -5868,11 +5787,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -7130,11 +7049,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -8258,11 +8177,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -8813,11 +8732,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -9370,11 +9289,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -10108,11 +10027,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -10662,7 +10581,7 @@ that it uses an on disk cache, but the cache entries are held as
symlinks. Rclone will use the handle of the underlying file as the NFS
handle which improves performance. This sort of cache can't be backed
up and restored as the underlying handles will change. This is Linux
only. It requires running rclone as root or with `CAP_DAC_READ_SEARCH`.
only. It requres running rclone as root or with `CAP_DAC_READ_SEARCH`.
You can run rclone with this extra permission by doing this to the
rclone binary `sudo setcap cap_dac_read_search+ep /path/to/rclone`.
@@ -10785,11 +10704,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -11434,7 +11353,7 @@ docs](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)).
access.
Please note that some clients may require HTTPS endpoints. See [the
SSL docs](#tls-ssl) for more information.
SSL docs](#ssl-tls) for more information.
This command uses the [VFS directory cache](#vfs-virtual-file-system).
All the functionality will work with `--vfs-cache-mode off`. Using
@@ -11489,7 +11408,7 @@ secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
```
Note that setting `use_multipart_uploads = false` is to work around
Note that setting `disable_multipart_uploads = true` is to work around
[a bug](#bugs) which will be fixed in due course.
## Bugs
@@ -11741,11 +11660,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -12335,11 +12254,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -13116,11 +13035,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant
@@ -14525,11 +14444,6 @@ it to `false`. It is also possible to specify `--boolean=false` or
parsed as `--boolean` and the `false` is parsed as an extra command
line argument for rclone.
Options documented to take a `stringArray` parameter accept multiple
values. To pass more than one value, repeat the option; for example:
`--include value1 --include value2`.
### Time or duration options {#time-option}
TIME or DURATION options can be specified as a duration string or a
@@ -16841,7 +16755,7 @@ so they take exactly the same form.
The options set by environment variables can be seen with the `-vv` flag, e.g. `rclone version -vv`.
Options that can appear multiple times (type `stringArray`) are
treated slightly differently as environment variables can only be
treated slighly differently as environment variables can only be
defined once. In order to allow a simple mechanism for adding one or
many items, the input is treated as a [CSV encoded](https://godoc.org/encoding/csv)
string. For example
@@ -20023,7 +19937,7 @@ the `--vfs-cache-mode` is off, it will return an empty result.
],
}
The `expiry` time is the time until the file is eligible for being
The `expiry` time is the time until the file is elegible for being
uploaded in floating point seconds. This may go negative. As rclone
only transfers `--transfers` files at once, only the lowest
`--transfers` expiry times will have `uploading` as `true`. So there
@@ -21104,7 +21018,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.69.3")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
```
@@ -22148,7 +22062,7 @@ on the host.
The _FUSE_ driver is a prerequisite for rclone mounting and should be
installed on host:
```
sudo apt-get -y install fuse3
sudo apt-get -y install fuse
```
Create two directories required by rclone docker plugin:
@@ -23152,7 +23066,7 @@ See the [bisync filters](#filtering) section and generic
[--filter-from](https://rclone.org/filtering/#filter-from-read-filtering-patterns-from-a-file)
documentation.
An [example filters file](#example-filters-file) contains filters for
non-allowed files for syncing with Dropbox.
non-allowed files for synching with Dropbox.
If you make changes to your filters file then bisync requires a run
with `--resync`. This is a safety feature, which prevents existing files
@@ -23329,7 +23243,7 @@ Using `--check-sync=false` will disable it and may significantly reduce the
sync run times for very large numbers of files.
The check may be run manually with `--check-sync=only`. It runs only the
integrity check and terminates without actually syncing.
integrity check and terminates without actually synching.
Note that currently, `--check-sync` **only checks listing snapshots and NOT the
actual files on the remotes.** Note also that the listing snapshots will not
@@ -23806,7 +23720,7 @@ The `--include*`, `--exclude*`, and `--filter` flags are also supported.
### How to filter directories
Filtering portions of the directory tree is a critical feature for syncing.
Filtering portions of the directory tree is a critical feature for synching.
Examples of directory trees (always beneath the Path1/Path2 root level)
you may want to exclude from your sync:
@@ -23915,7 +23829,7 @@ quashed by adding `--quiet` to the bisync command line.
## Example exclude-style filters files for use with Dropbox {#exclude-filters}
- Dropbox disallows syncing the listed temporary and configuration/data files.
- Dropbox disallows synching the listed temporary and configuration/data files.
The `- <filename>` filters exclude these files where ever they may occur
in the sync tree. Consider adding similar exclusions for file types
you don't need to sync, such as core dump and software build files.
@@ -24249,7 +24163,7 @@ test command flags can be equally prefixed by a single `-` or double dash.
- `go test . -case basic -remote local -remote2 local`
runs the `test_basic` test case using only the local filesystem,
syncing one local directory with another local directory.
synching one local directory with another local directory.
Test script output is to the console, while commands within scenario.txt
have their output sent to the `.../workdir/test.log` file,
which is finally compared to the golden copy.
@@ -24480,9 +24394,6 @@ about _Unison_ and synchronization in general.
## Changelog
### `v1.69.1`
* Fixed an issue causing listings to not capture concurrent modifications under certain conditions
### `v1.68`
* Fixed an issue affecting backends that round modtimes to a lower precision.
@@ -25769,7 +25680,7 @@ Notes on above:
that `USER_NAME` has been created.
2. The Resource entry must include both resource ARNs, as one implies
the bucket and the other implies the bucket's objects.
3. When using [s3-no-check-bucket](#s3-no-check-bucket) and the bucket already exists, the `"arn:aws:s3:::BUCKET_NAME"` doesn't have to be included.
3. When using [s3-no-check-bucket](#s3-no-check-bucket) and the bucket already exsits, the `"arn:aws:s3:::BUCKET_NAME"` doesn't have to be included.
For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
that will generate one or more buckets that will work with `rclone sync`.
@@ -25790,8 +25701,7 @@ tries to access data from the glacier storage class you will see an error like b
2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
In this case you need to [restore](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html)
the object(s) in question before accessing object contents.
The [restore](#restore) section below shows how to do this with rclone.
the object(s) in question before using rclone.
Note that rclone only speaks the S3 API it does not speak the Glacier
Vault API, so rclone cannot directly access Glacier Vaults.
@@ -27194,7 +27104,7 @@ or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Fre
Usage Examples:
rclone backend restore s3:bucket/path/to/ --include /object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY
@@ -28748,7 +28658,7 @@ location_constraint = au-nsw
### Rclone Serve S3 {#rclone}
Rclone can serve any remote over the S3 protocol. For details see the
[rclone serve s3](https://rclone.org/commands/rclone_serve_s3/) documentation.
[rclone serve s3](https://rclone.org/commands/rclone_serve_http/) documentation.
For example, to serve `remote:path` over s3, run the server like this:
@@ -28768,8 +28678,8 @@ secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
```
Note that setting `use_multipart_uploads = false` is to work around
[a bug](https://rclone.org/commands/rclone_serve_s3/#bugs) which will be fixed in due course.
Note that setting `disable_multipart_uploads = true` is to work around
[a bug](https://rclone.org/commands/rclone_serve_http/#bugs) which will be fixed in due course.
### Scaleway
@@ -29865,49 +29775,27 @@ Option endpoint.
Endpoint for Linode Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Amsterdam (Netherlands), nl-ams-1
\ (nl-ams-1.linodeobjects.com)
2 / Atlanta, GA (USA), us-southeast-1
1 / Atlanta, GA (USA), us-southeast-1
\ (us-southeast-1.linodeobjects.com)
3 / Chennai (India), in-maa-1
\ (in-maa-1.linodeobjects.com)
4 / Chicago, IL (USA), us-ord-1
2 / Chicago, IL (USA), us-ord-1
\ (us-ord-1.linodeobjects.com)
5 / Frankfurt (Germany), eu-central-1
3 / Frankfurt (Germany), eu-central-1
\ (eu-central-1.linodeobjects.com)
6 / Jakarta (Indonesia), id-cgk-1
\ (id-cgk-1.linodeobjects.com)
7 / London 2 (Great Britain), gb-lon-1
\ (gb-lon-1.linodeobjects.com)
8 / Los Angeles, CA (USA), us-lax-1
\ (us-lax-1.linodeobjects.com)
9 / Madrid (Spain), es-mad-1
\ (es-mad-1.linodeobjects.com)
10 / Melbourne (Australia), au-mel-1
\ (au-mel-1.linodeobjects.com)
11 / Miami, FL (USA), us-mia-1
\ (us-mia-1.linodeobjects.com)
12 / Milan (Italy), it-mil-1
4 / Milan (Italy), it-mil-1
\ (it-mil-1.linodeobjects.com)
13 / Newark, NJ (USA), us-east-1
5 / Newark, NJ (USA), us-east-1
\ (us-east-1.linodeobjects.com)
14 / Osaka (Japan), jp-osa-1
\ (jp-osa-1.linodeobjects.com)
15 / Paris (France), fr-par-1
6 / Paris (France), fr-par-1
\ (fr-par-1.linodeobjects.com)
16 / São Paulo (Brazil), br-gru-1
\ (br-gru-1.linodeobjects.com)
17 / Seattle, WA (USA), us-sea-1
7 / Seattle, WA (USA), us-sea-1
\ (us-sea-1.linodeobjects.com)
18 / Singapore, ap-south-1
8 / Singapore ap-south-1
\ (ap-south-1.linodeobjects.com)
19 / Singapore 2, sg-sin-1
\ (sg-sin-1.linodeobjects.com)
20 / Stockholm (Sweden), se-sto-1
9 / Stockholm (Sweden), se-sto-1
\ (se-sto-1.linodeobjects.com)
21 / Washington, DC, (USA), us-iad-1
10 / Washington, DC, (USA), us-iad-1
\ (us-iad-1.linodeobjects.com)
endpoint> 5
endpoint> 3
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
@@ -31600,7 +31488,7 @@ machine with no Internet browser available.
Note that rclone runs a webserver on your local machine to collect the
token as returned from Box. This only runs from the moment it opens
your browser to the moment you get back the verification code. This
is on `http://127.0.0.1:53682/` and this may require you to unblock
is on `http://127.0.0.1:53682/` and this it may require you to unblock
it temporarily if you are running a host firewall.
Once configured you can then use `rclone` like this,
@@ -34527,7 +34415,7 @@ strong random number generator. The nonce is incremented for each
chunk read making sure each nonce is unique for each block written.
The chance of a nonce being reused is minuscule. If you wrote an
exabyte of data (10¹⁸ bytes) you would have a probability of
approximately 2×10⁻³² of reusing a nonce.
approximately 2×10⁻³² of re-using a nonce.
#### Chunk
@@ -41673,7 +41561,7 @@ Enter a value.
config_2fa> 2FACODE
Remote config
--------------------
[iclouddrive]
[koofr]
- type: iclouddrive
- apple_id: APPLEID
- password: *** ENCRYPTED ***
@@ -41690,20 +41578,6 @@ y/e/d> y
ADP is currently unsupported and need to be disabled
On iPhone, Settings `>` Apple Account `>` iCloud `>` 'Access iCloud Data on the Web' must be ON, and 'Advanced Data Protection' OFF.
## Troubleshooting
### Missing PCS cookies from the request
This means you have Advanced Data Protection (ADP) turned on. This is not supported at the moment. If you want to use rclone you will have to turn it off. See above for how to turn it off.
You will need to clear the `cookies` and the `trust_token` fields in the config. Or you can delete the remote config and start again.
You should then run `rclone reconnect remote:`.
Note that changing the ADP setting may not take effect immediately - you may need to wait a few hours or a day before you can get rclone to work - keep clearing the config entry and running `rclone reconnect remote:` until rclone functions properly.
### Standard options
@@ -46161,7 +46035,7 @@ Properties:
- "us"
- Microsoft Cloud for US Government
- "de"
- Microsoft Cloud Germany (deprecated - try global region first).
- Microsoft Cloud Germany
- "cn"
- Azure and Office 365 operated by Vnet Group in China
@@ -46778,28 +46652,6 @@ See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
### Impersonate other users as Admin
Unlike Google Drive and impersonating any domain user via service accounts, OneDrive requires you to authenticate as an admin account, and manually setup a remote per user you wish to impersonate.
1. In [Microsoft 365 Admin Center](https://admin.microsoft.com), open each user you need to "impersonate" and go to the OneDrive section. There is a heading called "Get access to files", you need to click to create the link, this creates the link of the format `https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/` but also changes the permissions so you your admin user has access.
2. Then in powershell run the following commands:
```console
Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force
Import-Module Microsoft.Graph.Files
Connect-MgGraph -Scopes "Files.ReadWrite.All"
# Follow the steps to allow access to your admin user
# Then run this for each user you want to impersonate to get the Drive ID
Get-MgUserDefaultDrive -UserId '{emailaddress}'
# This will give you output of the format:
# Name Id DriveType CreatedDateTime
# ---- -- --------- ---------------
# OneDrive b!XYZ123 business 14/10/2023 1:00:58pm
```
3. Then in rclone add a onedrive remote type, and use the `Type in driveID` with the DriveID you got in the previous step. One remote per user. It will then confirm the drive ID, and hopefully give you a message of `Found drive "root" of type "business"` and then include the URL of the format `https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents`
## Limitations
If you don't use rclone for 90 days the refresh token will
@@ -53121,8 +52973,7 @@ Properties:
On some SFTP servers (e.g. Synology) the paths are different
for SSH and SFTP so the hashes can't be calculated properly.
You can either use [`--sftp-path-override`](#--sftp-path-override)
or [`disable_hashcheck`](#--sftp-disable-hashcheck).
For them using `disable_hashcheck` is a good idea.
The only ssh agent supported under Windows is Putty's pageant.
@@ -56658,84 +56509,6 @@ Options:
# Changelog
## v1.69.3 - 2025-05-21
[See commits](https://github.com/rclone/rclone/compare/v1.69.2...v1.69.3)
* Bug Fixes
* build: Reapply update github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 to fix CVE-2025-30204 (dependabot[bot])
* build: Update github.com/ebitengine/purego to work around bug in go1.24.3 (Nick Craig-Wood)
## v1.69.2 - 2025-05-01
[See commits](https://github.com/rclone/rclone/compare/v1.69.1...v1.69.2)
* Bug fixes
* accounting: Fix percentDiff calculation -- (Anagh Kumar Baranwal)
* build
* Update github.com/golang-jwt/jwt/v4 from 4.5.1 to 4.5.2 to fix CVE-2025-30204 (dependabot[bot])
* Update github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 to fix CVE-2025-30204 (dependabot[bot])
* Update golang.org/x/crypto to v0.35.0 to fix CVE-2025-22869 (Nick Craig-Wood)
* Update golang.org/x/net from 0.36.0 to 0.38.0 to fix CVE-2025-22870 (dependabot[bot])
* Update golang.org/x/net to 0.36.0. to fix CVE-2025-22869 (dependabot[bot])
* Stop building with go < go1.23 as security updates forbade it (Nick Craig-Wood)
* Fix docker plugin build (Anagh Kumar Baranwal)
* cmd: Fix crash if rclone is invoked without any arguments (Janne Hellsten)
* config: Read configuration passwords from stdin even when terminated with EOF (Samantha Bowen)
* doc fixes (Andrew Kreimer, Danny Garside, eccoisle, Ed Craig-Wood, emyarod, jack, Jugal Kishore, Markus Gerstel, Michael Kebe, Nick Craig-Wood, simonmcnair, simwai, Zachary Vorhies)
* fs: Fix corruption of SizeSuffix with "B" suffix in config (eg --min-size) (Nick Craig-Wood)
* lib/http: Fix race between Serve() and Shutdown() (Nick Craig-Wood)
* object: Fix memory object out of bounds Seek (Nick Craig-Wood)
* operations: Fix call fmt.Errorf with wrong err (alingse)
* rc
* Disable the metrics server when running `rclone rc` (hiddenmarten)
* Fix debug/* commands not being available over unix sockets (Nick Craig-Wood)
* serve nfs: Fix unlikely crash (Nick Craig-Wood)
* stats: Fix the speed not getting updated after a pause in the processing (Anagh Kumar Baranwal)
* sync
* Fix cpu spinning when empty directory finding with leading slashes (Nick Craig-Wood)
* Copy dir modtimes even when copyEmptySrcDirs is false (ll3006)
* VFS
* Fix directory cache serving stale data (Lorenz Brun)
* Fix inefficient directory caching when directory reads are slow (huanghaojun)
* Fix integration test failures (Nick Craig-Wood)
* Drive
* Metadata: fix error when setting copy-requires-writer-permission on a folder (Nick Craig-Wood)
* Dropbox
* Retry link without expiry (Dave Vasilevsky)
* HTTP
* Correct root if definitely pointing to a file (nielash)
* Iclouddrive
* Fix so created files are writable (Ben Alex)
* Onedrive
* Fix metadata ordering in permissions (Nick Craig-Wood)
## v1.69.1 - 2025-02-14
[See commits](https://github.com/rclone/rclone/compare/v1.69.0...v1.69.1)
* Bug Fixes
* lib/oauthutil: Fix redirect URL mismatch errors (Nick Craig-Wood)
* bisync: Fix listings missing concurrent modifications (nielash)
* serve s3: Fix list objects encoding-type (Nick Craig-Wood)
* fs: Fix confusing "didn't find section in config file" error (Nick Craig-Wood)
* doc fixes (Christoph Berger, Dimitri Papadopoulos, Matt Ickstadt, Nick Craig-Wood, Tim White, Zachary Vorhies)
* build: Added parallel docker builds and caching for go build in the container (Anagh Kumar Baranwal)
* VFS
* Fix the cache failing to upload symlinks when `--links` was specified (Nick Craig-Wood)
* Fix race detected by race detector (Nick Craig-Wood)
* Close the change notify channel on Shutdown (izouxv)
* B2
* Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
* Iclouddrive
* Add notes on ADP and Missing PCS cookies (Nick Craig-Wood)
* Onedrive
* Mark German (de) region as deprecated (Nick Craig-Wood)
* S3
* Added new storage class to magalu provider (Bruno Fernandes)
* Add DigitalOcean regions SFO2, LON1, TOR1, BLR1 (jkpe)
* Add latest Linode Object Storage endpoints (jbagwell-akamai)
## v1.69.0 - 2025-01-12
[See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.69.0)
@@ -56765,7 +56538,7 @@ Options:
* fs: Make `--links` flag global and add new `--local-links` and `--vfs-links` flags (Nick Craig-Wood)
* http servers: Disable automatic authentication skipping for unix sockets in http servers (Moises Lima)
* This was making it impossible to use unix sockets with a proxy
* This might now cause rclone to need authentication where it didn't before
* This might now cause rclone to need authenticaton where it didn't before
* oauthutil: add support for OAuth client credential flow (Martin Hassack, Nick Craig-Wood)
* operations: make log messages consistent for mkdir/rmdir at INFO level (Nick Craig-Wood)
* rc: Add `relative` to [vfs/queue-set-expiry](https://rclone.org/rc/#vfs-queue-set-expiry) (Nick Craig-Wood)
@@ -57443,7 +57216,7 @@ instead of of `--size-only`, when `check` is not available.
* Update all dependencies (Nick Craig-Wood)
* Refactor version info and icon resource handling on windows (albertony)
* doc updates (albertony, alfish2000, asdffdsazqqq, Dimitri Papadopoulos, Herby Gillot, Joda Stößer, Manoj Ghosh, Nick Craig-Wood)
* Implement `--metadata-mapper` to transform metadata with a user supplied program (Nick Craig-Wood)
* Implement `--metadata-mapper` to transform metatadata with a user supplied program (Nick Craig-Wood)
* Add `ChunkWriterDoesntSeek` feature flag and set it for b2 (Nick Craig-Wood)
* lib/http: Export basic go string functions for use in `--template` (Gabriel Espinoza)
* makefile: Use POSIX compatible install arguments (Mina Galić)
@@ -57558,7 +57331,7 @@ instead of of `--size-only`, when `check` is not available.
* Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
* B2
* Fix multipart upload: corrupted on transfer: sizes differ XXX vs 0 (Nick Craig-Wood)
* Fix locking window when getting multipart upload URL (Nick Craig-Wood)
* Fix locking window when getting mutipart upload URL (Nick Craig-Wood)
* Fix server side copies greater than 4GB (Nick Craig-Wood)
* Fix chunked streaming uploads (Nick Craig-Wood)
* Reduce default `--b2-upload-concurrency` to 4 to reduce memory usage (Nick Craig-Wood)
@@ -63465,6 +63238,7 @@ put them back in again.` >}}
* ben-ba <benjamin.brauner@gmx.de>
* Eli Orzitzer <e_orz@yahoo.com>
* Anthony Metzidis <anthony.metzidis@gmail.com>
* emyarod <afw5059@gmail.com>
* keongalvin <keongalvin@gmail.com>
* rarspace01 <rarspace01@users.noreply.github.com>
* Paul Stern <paulstern45@gmail.com>

MANUAL.txt generated
@@ -1,75 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
May 21, 2025
NAME
rclone - manage files on cloud storage
SYNOPSIS
Usage:
rclone [flags]
rclone [command]
Available commands:
about Get quota information from the remote.
authorize Remote authorization.
backend Run a backend-specific command.
bisync Perform bidirectional synchronization between two paths.
cat Concatenates any files and sends them to stdout.
check Checks the files in the source and destination match.
checksum Checks the files in the destination against a SUM file.
cleanup Clean up the remote if possible.
completion Output completion script for a given shell.
config Enter an interactive configuration session.
copy Copy files from source to dest, skipping identical files.
copyto Copy files from source to dest, skipping identical files.
copyurl Copy the contents of the URL supplied content to dest:path.
cryptcheck Cryptcheck checks the integrity of an encrypted remote.
cryptdecode Cryptdecode returns unencrypted file names.
dedupe Interactively find duplicate filenames and delete/rename them.
delete Remove the files in path.
deletefile Remove a single file from remote.
gendocs Output markdown docs for rclone to the directory supplied.
gitannex Speaks with git-annex over stdin/stdout.
hashsum Produces a hashsum file for all the objects in the path.
help Show help for rclone commands, flags and backends.
link Generate public link to file/folder.
listremotes List all the remotes in the config file and defined in environment variables.
ls List the objects in the path with size and path.
lsd List all directories/containers/buckets in the path.
lsf List directories and objects in remote:path formatted for parsing.
lsjson List directories and objects in the path in JSON format.
lsl List the objects in path with modification time, size and path.
md5sum Produces an md5sum file for all the objects in the path.
mkdir Make the path if it doesn't already exist.
mount Mount the remote as file system on a mountpoint.
move Move files from source to dest.
moveto Move file or directory from source to dest.
ncdu Explore a remote with a text based user interface.
nfsmount Mount the remote as file system on a mountpoint.
obscure Obscure password for use in the rclone config file.
purge Remove the path and all of its contents.
rc Run a command against a running rclone.
rcat Copies standard input to file on remote.
rcd Run rclone listening to remote control commands only.
rmdir Remove the empty directory at path.
rmdirs Remove empty directories under the path.
selfupdate Update the rclone binary.
serve Serve a remote over a protocol.
settier Changes storage class/tier of objects in remote.
sha1sum Produces an sha1sum file for all the objects in the path.
size Prints the total size and number of objects in remote:path.
sync Make source and dest identical, modifying destination only.
test Run a test command
touch Create new file or change file modification time.
tree List the contents of the remote in a tree like fashion.
version Show the version number.
Use "rclone [command] --help" for more information about a command.
Use "rclone help flags" for to see the global flags.
Use "rclone help backends" for a list of supported services.
Jan 12, 2025
Rclone syncs your files to cloud storage
@@ -1669,10 +1600,6 @@ include/exclude filters - everything will be removed. Use the delete
command if you want to selectively delete files. To delete empty
directories only, use command rmdir or rmdirs.
The concurrency of this operation is controlled by the --checkers global
flag. However, some backends will implement this command directly, in
which case --checkers will be ignored.
Important: Since this can cause data loss, test first with the --dry-run
or the --interactive/-i flag.
@@ -2668,11 +2595,6 @@ Synopsis
Remote authorization. Used to authorize a remote or headless rclone from
a machine with a browser - use as instructed by rclone config.
The command requires 1-3 arguments: - fs name (e.g., "drive", "s3",
etc.) - Either a base64 encoded JSON blob obtained from a previous
rclone config session - Or a client_id and client_secret pair obtained
from the remote service
Use --auth-no-open-browser to prevent rclone to open auth link in
default browser automatically.
@@ -2680,7 +2602,7 @@ Use --template to generate HTML output via a custom Go template. If a
blank string is provided as an argument to this flag, the default
template is used.
rclone authorize <fs name> [base64_json_blob | client_id client_secret] [flags]
rclone authorize [flags]
Options
@@ -3545,12 +3467,12 @@ re-encrypt the config.
When --password-command is called to change the password then the
environment variable RCLONE_PASSWORD_CHANGE=1 will be set. So if
changing passwords programmatically you can use the environment variable
changing passwords programatically you can use the environment variable
to distinguish which password you must supply.
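As an illustration only (not taken from the manual), a password-command helper might branch on that variable; MY_RCLONE_PASS and MY_NEW_RCLONE_PASS below are hypothetical placeholders for wherever the passwords are actually kept.

// Hypothetical password-command helper: prints the current config
// password normally, and the new one while rclone signals a password
// change via RCLONE_PASSWORD_CHANGE=1.
package main

import (
	"fmt"
	"os"
)

func main() {
	if os.Getenv("RCLONE_PASSWORD_CHANGE") == "1" {
		fmt.Println(os.Getenv("MY_NEW_RCLONE_PASS")) // placeholder source of the new password
	} else {
		fmt.Println(os.Getenv("MY_RCLONE_PASS")) // placeholder source of the current password
	}
}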
Alternatively you can remove the password first (with
rclone config encryption remove), then set it again with this command
which may be easier if you don't mind the unencrypted config file being
which may be easier if you don't mind the unecrypted config file being
on the disk briefly.
rclone config encryption set [flags]
@@ -3909,9 +3831,6 @@ This doesn't transfer files that are identical on src and dst, testing
by size and modification time or MD5SUM. It doesn't delete files from
the destination.
If you are looking to copy just a byte range of a file, please see
'rclone cat --offset X --count Y'
Note: Use the -P/--progress flag to view real-time transfer statistics
rclone copyto source:path dest:path [flags]
@@ -4020,8 +3939,8 @@ Setting --auto-filename will attempt to automatically determine the
filename from the URL (after any redirections) and used in the
destination path.
With --header-filename in addition, if a specific filename is set in
HTTP headers, it will be used instead of the name from the URL. With
With --auto-filename-header in addition, if a specific filename is set
in HTTP headers, it will be used instead of the name from the URL. With
--print-filename in addition, the resulting file name will be printed.
Setting --no-clobber will prevent overwriting file on the destination if
@@ -4030,7 +3949,7 @@ there is one with the same name.
Setting --stdout or making the output file name - will cause the output
to be written to standard output.
Troubleshooting
Troublshooting
If you can't get rclone copyurl to work then here are some things you
can try:
@@ -5449,11 +5368,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
the cache may exceed these quotas for two reasons. Firstly because it is
If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
--vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
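A rough Go sketch of that eviction order, assuming only what the paragraph above states (oldest access first, open files skipped); it is illustrative and not rclone's implementation.

// Illustrative sketch of the described eviction order: closed files are
// removed oldest-access first until the cache is back under its limit.
package main

import (
	"fmt"
	"sort"
	"time"
)

type cacheItem struct {
	name       string
	size       int64
	lastAccess time.Time
	open       bool
}

// evict returns names to remove so that total drops to maxSize or below,
// never touching open files.
func evict(items []cacheItem, total, maxSize int64) []string {
	sort.Slice(items, func(i, j int) bool {
		return items[i].lastAccess.Before(items[j].lastAccess)
	})
	var victims []string
	for _, it := range items {
		if total <= maxSize {
			break
		}
		if it.open {
			continue // open files cannot be evicted
		}
		victims = append(victims, it.name)
		total -= it.size
	}
	return victims
}

func main() {
	now := time.Now()
	fmt.Println(evict([]cacheItem{
		{"a", 100, now.Add(-3 * time.Hour), false},
		{"b", 200, now.Add(-1 * time.Hour), true},
		{"c", 300, now.Add(-2 * time.Hour), false},
	}, 600, 250)) // [a c]
}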
@@ -6681,11 +6600,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
the cache may exceed these quotas for two reasons. Firstly because it is
If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
--vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -7797,11 +7716,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
the cache may exceed these quotas for two reasons. Firstly because it is
If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
--vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -8342,11 +8261,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
the cache may exceed these quotas for two reasons. Firstly because it is
If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
--vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -8891,11 +8810,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
the cache may exceed these quotas for two reasons. Firstly because it is
If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
--vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -9648,11 +9567,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
the cache may exceed these quotas for two reasons. Firstly because it is
If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
--vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -10183,7 +10102,7 @@ uses an on disk cache, but the cache entries are held as symlinks.
Rclone will use the handle of the underlying file as the NFS handle
which improves performance. This sort of cache can't be backed up and
restored as the underlying handles will change. This is Linux only. It
requires running rclone as root or with CAP_DAC_READ_SEARCH. You can run
requres running rclone as root or with CAP_DAC_READ_SEARCH. You can run
rclone with this extra permission by doing this to the rclone binary
sudo setcap cap_dac_read_search+ep /path/to/rclone.
@@ -10304,11 +10223,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
the cache may exceed these quotas for two reasons. Firstly because it is
If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
--vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -10984,8 +10903,8 @@ which is defined like this:
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
Note that setting use_multipart_uploads = false is to work around a bug
which will be fixed in due course.
Note that setting disable_multipart_uploads = true is to work around a
bug which will be fixed in due course.
Bugs
@@ -11232,11 +11151,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
the cache may exceed these quotas for two reasons. Firstly because it is
If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
--vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -11819,11 +11738,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
the cache may exceed these quotas for two reasons. Firstly because it is
If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
--vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -12619,11 +12538,11 @@ and if they haven't been accessed for --vfs-write-back seconds. If
rclone is quit or dies with files that haven't been uploaded, these will
be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
the cache may exceed these quotas for two reasons. Firstly because it is
If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the
cache may exceed these quotas for two reasons. Firstly because it is
only checked every --vfs-cache-poll-interval. Secondly because open
files cannot be evicted from the cache. When --vfs-cache-max-size or
--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
--vfs-cache-min-free-size is exceeded, rclone will attempt to evict the
least accessed files from the cache first. rclone will start with files
that haven't been accessed for the longest. This cache flushing strategy
is efficient and more relevant files are likely to remain cached.
@@ -13976,10 +13895,6 @@ also possible to specify --boolean=false or --boolean=true. Note that
--boolean false is not valid - this is parsed as --boolean and the false
is parsed as an extra command line argument for rclone.
Options documented to take a stringArray parameter accept multiple
values. To pass more than one value, repeat the option; for example:
--include value1 --include value2.
Time or duration options
TIME or DURATION options can be specified as a duration string or a time
@@ -16262,7 +16177,7 @@ The options set by environment variables can be seen with the -vv flag,
e.g. rclone version -vv.
Options that can appear multiple times (type stringArray) are treated
slightly differently as environment variables can only be defined once.
slighly differently as environment variables can only be defined once.
In order to allow a simple mechanism for adding one or many items, the
input is treated as a CSV encoded string. For example
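The manual's own example lies outside this hunk; as an illustrative sketch (not rclone's code) of how a CSV encoded value maps onto a repeated option, consider:

// Illustrative only: CSV-decode one environment variable value into one
// entry per repeated option occurrence.
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

func splitEnvList(value string) ([]string, error) {
	return csv.NewReader(strings.NewReader(value)).Read()
}

func main() {
	vals, err := splitEnvList(`"*.jpg","*.png"`)
	if err != nil {
		panic(err)
	}
	fmt.Println(vals) // [*.jpg *.png] - as if the option had been repeated twice
}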
@@ -19505,7 +19420,7 @@ This is only useful if --vfs-cache-mode > off. If you call it when the
],
}
The expiry time is the time until the file is eligible for being
The expiry time is the time until the file is elegible for being
uploaded in floating point seconds. This may go negative. As rclone only
transfers --transfers files at once, only the lowest --transfers expiry
times will have uploading as true. So there may be files with negative
@@ -20654,7 +20569,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.69.3")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
Performance
@@ -21659,7 +21574,7 @@ Start from installing Docker on the host.
The FUSE driver is a prerequisite for rclone mounting and should be
installed on host:
sudo apt-get -y install fuse3
sudo apt-get -y install fuse
Create two directories required by rclone docker plugin:
@@ -22616,7 +22531,7 @@ Also see the all files changed check.
By using rclone filter features you can exclude file types or directory
sub-trees from the sync. See the bisync filters section and generic
--filter-from documentation. An example filters file contains filters
for non-allowed files for syncing with Dropbox.
for non-allowed files for synching with Dropbox.
If you make changes to your filters file then bisync requires a run with
--resync. This is a safety feature, which prevents existing files on the
@@ -22789,7 +22704,7 @@ of a sync. Using --check-sync=false will disable it and may
significantly reduce the sync run times for very large numbers of files.
The check may be run manually with --check-sync=only. It runs only the
integrity check and terminates without actually syncing.
integrity check and terminates without actually synching.
Note that currently, --check-sync only checks listing snapshots and NOT
the actual files on the remotes. Note also that the listing snapshots
@@ -23322,7 +23237,7 @@ supported.
How to filter directories
Filtering portions of the directory tree is a critical feature for
syncing.
synching.
Examples of directory trees (always beneath the Path1/Path2 root level)
you may want to exclude from your sync: - Directory trees containing
@@ -23433,7 +23348,7 @@ This noise can be quashed by adding --quiet to the bisync command line.
Example exclude-style filters files for use with Dropbox
- Dropbox disallows syncing the listed temporary and
- Dropbox disallows synching the listed temporary and
configuration/data files. The `- ` filters exclude these files where
ever they may occur in the sync tree. Consider adding similar
exclusions for file types you don't need to sync, such as core dump
@@ -23753,7 +23668,7 @@ dash.
Running tests
- go test . -case basic -remote local -remote2 local runs the
test_basic test case using only the local filesystem, syncing one
test_basic test case using only the local filesystem, synching one
local directory with another local directory. Test script output is
to the console, while commands within scenario.txt have their output
sent to the .../workdir/test.log file, which is finally compared to
@@ -23986,11 +23901,6 @@ Unison and synchronization in general.
Changelog
v1.69.1
- Fixed an issue causing listings to not capture concurrent
modifications under certain conditions
v1.68
- Fixed an issue affecting backends that round modtimes to a lower
@@ -25282,7 +25192,7 @@ Notes on above:
that USER_NAME has been created.
2. The Resource entry must include both resource ARNs, as one implies
the bucket and the other implies the bucket's objects.
3. When using s3-no-check-bucket and the bucket already exists, the
3. When using s3-no-check-bucket and the bucket already exsits, the
"arn:aws:s3:::BUCKET_NAME" doesn't have to be included.
For reference, here's an Ansible script that will generate one or more
@@ -25304,9 +25214,8 @@ glacier storage class you will see an error like below.
2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
In this case you need to restore the object(s) in question before
accessing object contents. The restore section below shows how to do
this with rclone.
In this case you need to restore the object(s) in question before using
rclone.
Note that rclone only speaks the S3 API it does not speak the Glacier
Vault API, so rclone cannot directly access Glacier Vaults.
@@ -26737,7 +26646,7 @@ Access tier to the Frequent Access tier.
Usage Examples:
rclone backend restore s3:bucket/path/to/ --include /object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY
@@ -28246,8 +28155,8 @@ this:
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
Note that setting use_multipart_uploads = false is to work around a bug
which will be fixed in due course.
Note that setting disable_multipart_uploads = true is to work around a
bug which will be fixed in due course.
Scaleway
@@ -29294,49 +29203,27 @@ This will guide you through an interactive setup process.
Endpoint for Linode Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Amsterdam (Netherlands), nl-ams-1
\ (nl-ams-1.linodeobjects.com)
2 / Atlanta, GA (USA), us-southeast-1
1 / Atlanta, GA (USA), us-southeast-1
\ (us-southeast-1.linodeobjects.com)
3 / Chennai (India), in-maa-1
\ (in-maa-1.linodeobjects.com)
4 / Chicago, IL (USA), us-ord-1
2 / Chicago, IL (USA), us-ord-1
\ (us-ord-1.linodeobjects.com)
5 / Frankfurt (Germany), eu-central-1
3 / Frankfurt (Germany), eu-central-1
\ (eu-central-1.linodeobjects.com)
6 / Jakarta (Indonesia), id-cgk-1
\ (id-cgk-1.linodeobjects.com)
7 / London 2 (Great Britain), gb-lon-1
\ (gb-lon-1.linodeobjects.com)
8 / Los Angeles, CA (USA), us-lax-1
\ (us-lax-1.linodeobjects.com)
9 / Madrid (Spain), es-mad-1
\ (es-mad-1.linodeobjects.com)
10 / Melbourne (Australia), au-mel-1
\ (au-mel-1.linodeobjects.com)
11 / Miami, FL (USA), us-mia-1
\ (us-mia-1.linodeobjects.com)
12 / Milan (Italy), it-mil-1
4 / Milan (Italy), it-mil-1
\ (it-mil-1.linodeobjects.com)
13 / Newark, NJ (USA), us-east-1
5 / Newark, NJ (USA), us-east-1
\ (us-east-1.linodeobjects.com)
14 / Osaka (Japan), jp-osa-1
\ (jp-osa-1.linodeobjects.com)
15 / Paris (France), fr-par-1
6 / Paris (France), fr-par-1
\ (fr-par-1.linodeobjects.com)
16 / São Paulo (Brazil), br-gru-1
\ (br-gru-1.linodeobjects.com)
17 / Seattle, WA (USA), us-sea-1
7 / Seattle, WA (USA), us-sea-1
\ (us-sea-1.linodeobjects.com)
18 / Singapore, ap-south-1
8 / Singapore ap-south-1
\ (ap-south-1.linodeobjects.com)
19 / Singapore 2, sg-sin-1
\ (sg-sin-1.linodeobjects.com)
20 / Stockholm (Sweden), se-sto-1
9 / Stockholm (Sweden), se-sto-1
\ (se-sto-1.linodeobjects.com)
21 / Washington, DC, (USA), us-iad-1
10 / Washington, DC, (USA), us-iad-1
\ (us-iad-1.linodeobjects.com)
endpoint> 5
endpoint> 3
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
@@ -30961,7 +30848,7 @@ Internet browser available.
Note that rclone runs a webserver on your local machine to collect the
token as returned from Box. This only runs from the moment it opens your
browser to the moment you get back the verification code. This is on
http://127.0.0.1:53682/ and this may require you to unblock it
http://127.0.0.1:53682/ and this it may require you to unblock it
temporarily if you are running a host firewall.
Once configured you can then use rclone like this,
@@ -33870,7 +33757,7 @@ The initial nonce is generated from the operating systems crypto strong
random number generator. The nonce is incremented for each chunk read
making sure each nonce is unique for each block written. The chance of a
nonce being reused is minuscule. If you wrote an exabyte of data (10¹⁸
bytes) you would have a probability of approximately 2×10⁻³² of reusing
bytes) you would have a probability of approximately 2×10⁻³² of re-using
a nonce.
Chunk
@@ -41091,7 +40978,7 @@ This will guide you through an interactive setup process:
config_2fa> 2FACODE
Remote config
--------------------
[iclouddrive]
[koofr]
- type: iclouddrive
- apple_id: APPLEID
- password: *** ENCRYPTED ***
@@ -41107,27 +40994,6 @@ Advanced Data Protection
ADP is currently unsupported and needs to be disabled
On iPhone, Settings > Apple Account > iCloud > 'Access iCloud Data on
the Web' must be ON, and 'Advanced Data Protection' OFF.
Troubleshooting
Missing PCS cookies from the request
This means you have Advanced Data Protection (ADP) turned on. This is
not supported at the moment. If you want to use rclone you will have to
turn it off. See above for how to turn it off.
You will need to clear the cookies and the trust_token fields in the
config. Or you can delete the remote config and start again.
You should then run rclone reconnect remote:.
Note that changing the ADP setting may not take effect immediately - you
may need to wait a few hours or a day before you can get rclone to work
- keep clearing the config entry and running rclone reconnect remote:
until rclone functions properly.
Standard options
Here are the Standard options specific to iclouddrive (iCloud Drive).
@@ -45723,8 +45589,7 @@ Properties:
- "us"
- Microsoft Cloud for US Government
- "de"
- Microsoft Cloud Germany (deprecated - try global region
first).
- Microsoft Cloud Germany
- "cn"
- Azure and Office 365 operated by Vnet Group in China
@@ -46383,38 +46248,6 @@ Here are the possible system metadata items for the onedrive backend.
See the metadata docs for more info.
Impersonate other users as Admin
Unlike Google Drive and impersonating any domain user via service
accounts, OneDrive requires you to authenticate as an admin account, and
manually setup a remote per user you wish to impersonate.
1. In Microsoft 365 Admin Center, open each user you need to
"impersonate" and go to the OneDrive section. There is a heading
called "Get access to files", you need to click to create the link,
this creates the link of the format
https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/
but also changes the permissions so that your admin user has access.
2. Then in powershell run the following commands:
Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force
Import-Module Microsoft.Graph.Files
Connect-MgGraph -Scopes "Files.ReadWrite.All"
# Follow the steps to allow access to your admin user
# Then run this for each user you want to impersonate to get the Drive ID
Get-MgUserDefaultDrive -UserId '{emailaddress}'
# This will give you output of the format:
# Name Id DriveType CreatedDateTime
# ---- -- --------- ---------------
# OneDrive b!XYZ123 business 14/10/2023 1:00:58pm
3. Then in rclone add a onedrive remote type, and use the
Type in driveID with the DriveID you got in the previous step. One
remote per user. It will then confirm the drive ID, and hopefully
give you a message of Found drive "root" of type "business" and then
include the URL of the format
https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents
Limitations
If you don't use rclone for 90 days the refresh token will expire. This
@@ -52743,8 +52576,8 @@ Properties:
Limitations
On some SFTP servers (e.g. Synology) the paths are different for SSH and
SFTP so the hashes can't be calculated properly. You can either use
--sftp-path-override or disable_hashcheck.
SFTP so the hashes can't be calculated properly. For them using
disable_hashcheck is a good idea.
The only ssh agent supported under Windows is Putty's pageant.
@@ -56324,112 +56157,6 @@ Options:
Changelog
v1.69.3 - 2025-05-21
See commits
- Bug Fixes
- build: Reapply update github.com/golang-jwt/jwt/v5 from 5.2.1 to
5.2.2 to fix CVE-2025-30204 (dependabot[bot])
- build: Update github.com/ebitengine/purego to work around bug in
go1.24.3 (Nick Craig-Wood)
v1.69.2 - 2025-05-01
See commits
- Bug fixes
- accounting: Fix percentDiff calculation -- (Anagh Kumar
Baranwal)
- build
- Update github.com/golang-jwt/jwt/v4 from 4.5.1 to 4.5.2 to
fix CVE-2025-30204 (dependabot[bot])
- Update github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 to
fix CVE-2025-30204 (dependabot[bot])
- Update golang.org/x/crypto to v0.35.0 to fix CVE-2025-22869
(Nick Craig-Wood)
- Update golang.org/x/net from 0.36.0 to 0.38.0 to fix
CVE-2025-22870 (dependabot[bot])
- Update golang.org/x/net to 0.36.0. to fix CVE-2025-22869
(dependabot[bot])
- Stop building with go < go1.23 as security updates forbade
it (Nick Craig-Wood)
- Fix docker plugin build (Anagh Kumar Baranwal)
- cmd: Fix crash if rclone is invoked without any arguments (Janne
Hellsten)
- config: Read configuration passwords from stdin even when
terminated with EOF (Samantha Bowen)
- doc fixes (Andrew Kreimer, Danny Garside, eccoisle, Ed
Craig-Wood, emyarod, jack, Jugal Kishore, Markus Gerstel,
Michael Kebe, Nick Craig-Wood, simonmcnair, simwai, Zachary
Vorhies)
- fs: Fix corruption of SizeSuffix with "B" suffix in config (eg
--min-size) (Nick Craig-Wood)
- lib/http: Fix race between Serve() and Shutdown() (Nick
Craig-Wood)
- object: Fix memory object out of bounds Seek (Nick Craig-Wood)
- operations: Fix call fmt.Errorf with wrong err (alingse)
- rc
- Disable the metrics server when running rclone rc
(hiddenmarten)
- Fix debug/* commands not being available over unix sockets
(Nick Craig-Wood)
- serve nfs: Fix unlikely crash (Nick Craig-Wood)
- stats: Fix the speed not getting updated after a pause in the
processing (Anagh Kumar Baranwal)
- sync
- Fix cpu spinning when empty directory finding with leading
slashes (Nick Craig-Wood)
- Copy dir modtimes even when copyEmptySrcDirs is false
(ll3006)
- VFS
- Fix directory cache serving stale data (Lorenz Brun)
- Fix inefficient directory caching when directory reads are slow
(huanghaojun)
- Fix integration test failures (Nick Craig-Wood)
- Drive
- Metadata: fix error when setting copy-requires-writer-permission
on a folder (Nick Craig-Wood)
- Dropbox
- Retry link without expiry (Dave Vasilevsky)
- HTTP
- Correct root if definitely pointing to a file (nielash)
- Iclouddrive
- Fix so created files are writable (Ben Alex)
- Onedrive
- Fix metadata ordering in permissions (Nick Craig-Wood)
v1.69.1 - 2025-02-14
See commits
- Bug Fixes
- lib/oauthutil: Fix redirect URL mismatch errors (Nick
Craig-Wood)
- bisync: Fix listings missing concurrent modifications (nielash)
- serve s3: Fix list objects encoding-type (Nick Craig-Wood)
- fs: Fix confusing "didn't find section in config file" error
(Nick Craig-Wood)
- doc fixes (Christoph Berger, Dimitri Papadopoulos, Matt
Ickstadt, Nick Craig-Wood, Tim White, Zachary Vorhies)
- build: Added parallel docker builds and caching for go build in
the container (Anagh Kumar Baranwal)
- VFS
- Fix the cache failing to upload symlinks when --links was
specified (Nick Craig-Wood)
- Fix race detected by race detector (Nick Craig-Wood)
- Close the change notify channel on Shutdown (izouxv)
- B2
- Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
- Iclouddrive
- Add notes on ADP and Missing PCS cookies (Nick Craig-Wood)
- Onedrive
- Mark German (de) region as deprecated (Nick Craig-Wood)
- S3
- Added new storage class to magalu provider (Bruno Fernandes)
- Add DigitalOcean regions SFO2, LON1, TOR1, BLR1 (jkpe)
- Add latest Linode Object Storage endpoints (jbagwell-akamai)
v1.69.0 - 2025-01-12
See commits
@@ -56475,7 +56202,7 @@ See commits
sockets in http servers (Moises Lima)
- This was making it impossible to use unix sockets with a
proxy
- This might now cause rclone to need authentication where it
- This might now cause rclone to need authenticaton where it
didn't before
- oauthutil: add support for OAuth client credential flow (Martin
Hassack, Nick Craig-Wood)
@@ -57420,7 +57147,7 @@ See commits
- doc updates (albertony, alfish2000, asdffdsazqqq, Dimitri
Papadopoulos, Herby Gillot, Joda Stößer, Manoj Ghosh, Nick
Craig-Wood)
- Implement --metadata-mapper to transform metadata with a user
- Implement --metadata-mapper to transform metatadata with a user
supplied program (Nick Craig-Wood)
- Add ChunkWriterDoesntSeek feature flag and set it for b2 (Nick
Craig-Wood)
@@ -57582,7 +57309,7 @@ See commits
- B2
- Fix multipart upload: corrupted on transfer: sizes differ XXX vs
0 (Nick Craig-Wood)
- Fix locking window when getting multipart upload URL (Nick
- Fix locking window when getting mutipart upload URL (Nick
Craig-Wood)
- Fix server side copies greater than 4GB (Nick Craig-Wood)
- Fix chunked streaming uploads (Nick Craig-Wood)
@@ -64972,6 +64699,7 @@ email addresses removed from here need to be added to bin/.ignore-emails to make
- ben-ba benjamin.brauner@gmx.de
- Eli Orzitzer e_orz@yahoo.com
- Anthony Metzidis anthony.metzidis@gmail.com
- emyarod afw5059@gmail.com
- keongalvin keongalvin@gmail.com
- rarspace01 rarspace01@users.noreply.github.com
- Paul Stern paulstern45@gmail.com


@@ -1,4 +1,20 @@
<div align="center">
<sup>Special thanks to our sponsor:</sup>
<br>
<br>
<a href="https://www.warp.dev/?utm_source=github&utm_medium=referral&utm_campaign=rclone_20231103">
<div>
<img src="https://rclone.org/img/logos/warp-github.svg" width="300" alt="Warp">
</div>
<b>Warp is a modern, Rust-based terminal with AI built in so you and your team can build great software, faster.</b>
<div>
<sup>Visit warp.dev to learn more.</sup>
</div>
</a>
<br>
<hr>
</div>
<br>
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-light-mode-only)
[<img src="https://rclone.org/img/logo_on_dark__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-dark-mode-only)


@@ -47,13 +47,20 @@ Early in the next release cycle update the dependencies.
* `git commit -a -v -m "build: update all dependencies"`
If the `make updatedirect` upgrades the version of go in the `go.mod`
then go to manual mode. `go1.20` here is the lowest supported version
go 1.22.0
then go to manual mode. `go1.22` here is the lowest supported version
in the `go.mod`.
If `make updatedirect` added a `toolchain` directive then remove it.
We don't want to force a toolchain on our users. Linux packagers are
often using a version of Go that is a few versions out of date.
```
go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all > /tmp/potential-upgrades
go get -d $(cat /tmp/potential-upgrades)
go mod tidy -go=1.20 -compat=1.20
go mod tidy -go=1.22 -compat=1.22
```
If the `go mod tidy` fails use the output from it to remove the
@@ -124,8 +131,8 @@ Now
* git co ${BASE_TAG}-stable
* git cherry-pick any fixes
* make startstable
* Do the steps as above
* make startstable
* git co master
* `#` cherry pick the changes to the changelog - check the diff to make sure it is correct
* git checkout ${BASE_TAG}-stable docs/content/changelog.md


@@ -1 +1 @@
v1.69.3
v1.70.0

File diff suppressed because it is too large

@@ -3,16 +3,149 @@
package azureblob
import (
"context"
"encoding/base64"
"strings"
"testing"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func (f *Fs) InternalTest(t *testing.T) {
// Check first feature flags are set on this
// remote
func TestBlockIDCreator(t *testing.T) {
// Check creation and random number
bic, err := newBlockIDCreator()
require.NoError(t, err)
bic2, err := newBlockIDCreator()
require.NoError(t, err)
assert.NotEqual(t, bic.random, bic2.random)
assert.NotEqual(t, bic.random, [8]byte{})
// Set random to known value for tests
bic.random = [8]byte{1, 2, 3, 4, 5, 6, 7, 8}
chunkNumber := uint64(0xFEDCBA9876543210)
// Check creation of ID
want := base64.StdEncoding.EncodeToString([]byte{0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10, 1, 2, 3, 4, 5, 6, 7, 8})
assert.Equal(t, "/ty6mHZUMhABAgMEBQYHCA==", want)
got := bic.newBlockID(chunkNumber)
assert.Equal(t, want, got)
assert.Equal(t, "/ty6mHZUMhABAgMEBQYHCA==", got)
// Test checkID is working
assert.NoError(t, bic.checkID(chunkNumber, got))
assert.ErrorContains(t, bic.checkID(chunkNumber, "$"+got), "illegal base64")
assert.ErrorContains(t, bic.checkID(chunkNumber, "AAAA"+got), "bad block ID length")
assert.ErrorContains(t, bic.checkID(chunkNumber+1, got), "expecting decoded")
assert.ErrorContains(t, bic2.checkID(chunkNumber, got), "random bytes")
}
func (f *Fs) testFeatures(t *testing.T) {
// Check first feature flags are set on this remote
enabled := f.Features().SetTier
assert.True(t, enabled)
enabled = f.Features().GetTier
assert.True(t, enabled)
}
type ReadSeekCloser struct {
*strings.Reader
}
func (r *ReadSeekCloser) Close() error {
return nil
}
// Stage a block at remote but don't commit it
func (f *Fs) stageBlockWithoutCommit(ctx context.Context, t *testing.T, remote string) {
var (
containerName, blobPath = f.split(remote)
containerClient = f.cntSVC(containerName)
blobClient = containerClient.NewBlockBlobClient(blobPath)
data = "uncommitted data"
blockID = "1"
blockIDBase64 = base64.StdEncoding.EncodeToString([]byte(blockID))
)
r := &ReadSeekCloser{strings.NewReader(data)}
_, err := blobClient.StageBlock(ctx, blockIDBase64, r, nil)
require.NoError(t, err)
// Verify the block is staged but not committed
blockList, err := blobClient.GetBlockList(ctx, blockblob.BlockListTypeAll, nil)
require.NoError(t, err)
found := false
for _, block := range blockList.UncommittedBlocks {
if *block.Name == blockIDBase64 {
found = true
break
}
}
require.True(t, found, "Block ID not found in uncommitted blocks")
}
// This tests uploading a blob where it has uncommitted blocks with a different ID size.
//
// https://gauravmantri.com/2013/05/18/windows-azure-blob-storage-dealing-with-the-specified-blob-or-block-content-is-invalid-error/
//
// TestIntegration/FsMkdir/FsPutFiles/Internal/WriteUncommittedBlocks
func (f *Fs) testWriteUncommittedBlocks(t *testing.T) {
var (
ctx = context.Background()
remote = "testBlob"
)
// Multipart copy the blob please
oldUseCopyBlob, oldCopyCutoff := f.opt.UseCopyBlob, f.opt.CopyCutoff
f.opt.UseCopyBlob = false
f.opt.CopyCutoff = f.opt.ChunkSize
defer func() {
f.opt.UseCopyBlob, f.opt.CopyCutoff = oldUseCopyBlob, oldCopyCutoff
}()
// Create a blob with uncommitted blocks
f.stageBlockWithoutCommit(ctx, t, remote)
// Now attempt to overwrite the block with a different sized block ID to provoke this error
// Check the object does not exist
_, err := f.NewObject(ctx, remote)
require.Equal(t, fs.ErrorObjectNotFound, err)
// Upload a multipart file over the block with uncommitted chunks of a different ID size
size := 4*int(f.opt.ChunkSize) - 1
contents := random.String(size)
item := fstest.NewItem(remote, contents, fstest.Time("2001-05-06T04:05:06.499Z"))
o := fstests.PutTestContents(ctx, t, f, &item, contents, true)
// Check size
assert.Equal(t, int64(size), o.Size())
// Create a new blob with uncommitted blocks
newRemote := "testBlob2"
f.stageBlockWithoutCommit(ctx, t, newRemote)
// Copy over that block
dst, err := f.Copy(ctx, o, newRemote)
require.NoError(t, err)
// Check basics
assert.Equal(t, int64(size), dst.Size())
assert.Equal(t, newRemote, dst.Remote())
// Check contents
gotContents := fstests.ReadObject(ctx, t, dst, -1)
assert.Equal(t, contents, gotContents)
// Remove the object
require.NoError(t, dst.Remove(ctx))
}
func (f *Fs) InternalTest(t *testing.T) {
t.Run("Features", f.testFeatures)
t.Run("WriteUncommittedBlocks", f.testWriteUncommittedBlocks)
}


@@ -15,13 +15,17 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
name := "TestAzureBlob"
fstests.Run(t, &fstests.Opt{
RemoteName: "TestAzureBlob:",
RemoteName: name + ":",
NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool", "Cold"},
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: defaultChunkSize,
},
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "use_copy_blob", Value: "false"},
},
})
}
@@ -40,6 +44,7 @@ func TestIntegration2(t *testing.T) {
},
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "directory_markers", Value: "true"},
{Name: name, Key: "use_copy_blob", Value: "false"},
},
})
}
@@ -48,8 +53,13 @@ func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetCopyCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setCopyCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetCopyCutoffer = (*Fs)(nil)
)
func TestValidateAccessTier(t *testing.T) {


@@ -237,6 +237,30 @@ msi_client_id, or msi_mi_res_id parameters.`,
Help: "Azure resource ID of the user-assigned MSI to use, if any.\n\nLeave blank if msi_client_id or msi_object_id specified.",
Advanced: true,
Sensitive: true,
}, {
Name: "disable_instance_discovery",
Help: `Skip requesting Microsoft Entra instance metadata
This should be set true only by applications authenticating in
disconnected clouds, or private clouds such as Azure Stack.
It determines whether rclone requests Microsoft Entra instance
metadata from ` + "`https://login.microsoft.com/`" + ` before
authenticating.
Setting this to true will skip this request, making you responsible
for ensuring the configured authority is valid and trustworthy.
`,
Default: false,
Advanced: true,
}, {
Name: "use_az",
Help: `Use Azure CLI tool az for authentication
Set to use the [Azure CLI tool az](https://learn.microsoft.com/en-us/cli/azure/)
as the sole means of authentication.
Setting this can be useful if you wish to use the az CLI on a host with
a System Managed Identity that you do not want to use.
Don't set env_auth at the same time.
`,
Default: false,
Advanced: true,
}, {
Name: "endpoint",
Help: "Endpoint for the service.\n\nLeave blank normally.",
@@ -319,10 +343,12 @@ type Options struct {
Username string `config:"username"`
Password string `config:"password"`
ServicePrincipalFile string `config:"service_principal_file"`
DisableInstanceDiscovery bool `config:"disable_instance_discovery"`
UseMSI bool `config:"use_msi"`
MSIObjectID string `config:"msi_object_id"`
MSIClientID string `config:"msi_client_id"`
MSIResourceID string `config:"msi_mi_res_id"`
UseAZ bool `config:"use_az"`
Endpoint string `config:"endpoint"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
MaxStreamSize fs.SizeSuffix `config:"max_stream_size"`
@@ -414,7 +440,8 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
}
// Read credentials from the environment
options := azidentity.DefaultAzureCredentialOptions{
ClientOptions: policyClientOptions,
ClientOptions: policyClientOptions,
DisableInstanceDiscovery: opt.DisableInstanceDiscovery,
}
cred, err = azidentity.NewDefaultAzureCredential(&options)
if err != nil {
@@ -425,6 +452,13 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
if err != nil {
return nil, fmt.Errorf("create new shared key credential failed: %w", err)
}
case opt.UseAZ:
var options = azidentity.AzureCLICredentialOptions{}
cred, err = azidentity.NewAzureCLICredential(&options)
fmt.Println(cred)
if err != nil {
return nil, fmt.Errorf("failed to create Azure CLI credentials: %w", err)
}
case opt.SASURL != "":
client, err = service.NewClientWithNoCredential(opt.SASURL, &clientOpt)
if err != nil {


@@ -30,6 +30,7 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/encoder"
@@ -299,13 +300,14 @@ type Fs struct {
// Object describes a b2 object
type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
id string // b2 id of the file
modTime time.Time // The modified time of the object if known
sha1 string // SHA-1 hash if known
size int64 // Size of the object
mimeType string // Content-Type of the object
fs *Fs // what this object is part of
remote string // The remote path
id string // b2 id of the file
modTime time.Time // The modified time of the object if known
sha1 string // SHA-1 hash if known
size int64 // Size of the object
mimeType string // Content-Type of the object
meta map[string]string // The object metadata if known - may be nil - with lower case keys
}
// ------------------------------------------------------------
@@ -1317,16 +1319,22 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool, deleteHidden b
// Check current version of the file
if deleteHidden && object.Action == "hide" {
fs.Debugf(remote, "Deleting current version (id %q) as it is a hide marker", object.ID)
toBeDeleted <- object
if !operations.SkipDestructive(ctx, object.Name, "remove hide marker") {
toBeDeleted <- object
}
} else if deleteUnfinished && object.Action == "start" && isUnfinishedUploadStale(object.UploadTimestamp) {
fs.Debugf(remote, "Deleting current version (id %q) as it is a start marker (upload started at %s)", object.ID, time.Time(object.UploadTimestamp).Local())
toBeDeleted <- object
if !operations.SkipDestructive(ctx, object.Name, "remove pending upload") {
toBeDeleted <- object
}
} else {
fs.Debugf(remote, "Not deleting current version (id %q) %q dated %v (%v ago)", object.ID, object.Action, time.Time(object.UploadTimestamp).Local(), time.Since(time.Time(object.UploadTimestamp)))
}
} else {
fs.Debugf(remote, "Deleting (id %q)", object.ID)
toBeDeleted <- object
if !operations.SkipDestructive(ctx, object.Name, "delete") {
toBeDeleted <- object
}
}
last = remote
tr.Done(ctx, nil)
@@ -1597,6 +1605,9 @@ func (o *Object) decodeMetaDataRaw(ID, SHA1 string, Size int64, UploadTimestamp
if err != nil {
return err
}
// For now, just set "mtime" in metadata
o.meta = make(map[string]string, 1)
o.meta["mtime"] = o.modTime.Format(time.RFC3339Nano)
return nil
}
@@ -1876,6 +1887,13 @@ func (o *Object) getOrHead(ctx context.Context, method string, options []fs.Open
Info: Info,
}
// Embryonic metadata support - just mtime
o.meta = make(map[string]string, 1)
modTime, err := parseTimeStringHelper(info.Info[timeKey])
if err == nil {
o.meta["mtime"] = modTime.Format(time.RFC3339Nano)
}
// When reading files from B2 via cloudflare using
// --b2-download-url cloudflare strips the Content-Length
// headers (presumably so it can inject stuff) so use the old
@@ -2282,8 +2300,10 @@ func (f *Fs) lifecycleCommand(ctx context.Context, name string, arg []string, op
}
skip := operations.SkipDestructive(ctx, name, "update lifecycle rules")
var bucket *api.Bucket
if newRule.DaysFromHidingToDeleting != nil || newRule.DaysFromUploadingToHiding != nil || newRule.DaysFromStartingToCancelingUnfinishedLargeFiles != nil {
if !skip && (newRule.DaysFromHidingToDeleting != nil || newRule.DaysFromUploadingToHiding != nil || newRule.DaysFromStartingToCancelingUnfinishedLargeFiles != nil) {
bucketID, err := f.getBucketID(ctx, bucketName)
if err != nil {
return nil, err


@@ -5,6 +5,7 @@ import (
"crypto/sha1"
"fmt"
"path"
"sort"
"strings"
"testing"
"time"
@@ -13,6 +14,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/bucket"
@@ -256,6 +258,12 @@ func (f *Fs) internalTestMetadata(t *testing.T, size string, uploadCutoff string
assert.Equal(t, v, got, k)
}
// mtime
for k, v := range metadata {
got := o.meta[k]
assert.Equal(t, v, got, k)
}
assert.Equal(t, mimeType, gotMetadata.ContentType, "Content-Type")
// Modification time from the x-bz-info-src_last_modified_millis header
@@ -457,24 +465,161 @@ func (f *Fs) InternalTestVersions(t *testing.T) {
})
t.Run("Cleanup", func(t *testing.T) {
require.NoError(t, f.cleanUp(ctx, true, false, 0))
items := append([]fstest.Item{newItem}, fstests.InternalTestFiles...)
fstest.CheckListing(t, f, items)
// Set --b2-versions for this test
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
fstest.CheckListing(t, f, items)
t.Run("DryRun", func(t *testing.T) {
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
// Listing should be unchanged after dry run
before := listAllFiles(ctx, t, f, dirName)
ctx, ci := fs.AddConfig(ctx)
ci.DryRun = true
require.NoError(t, f.cleanUp(ctx, true, false, 0))
after := listAllFiles(ctx, t, f, dirName)
assert.Equal(t, before, after)
})
t.Run("RealThing", func(t *testing.T) {
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
// Listing should reflect current state after cleanup
require.NoError(t, f.cleanUp(ctx, true, false, 0))
items := append([]fstest.Item{newItem}, fstests.InternalTestFiles...)
fstest.CheckListing(t, f, items)
})
})
// Purge gets tested later
}
func (f *Fs) InternalTestCleanupUnfinished(t *testing.T) {
ctx := context.Background()
// CleanupUnfinished tests cleaning up stale unfinished large file uploads
t.Run("CleanupUnfinished", func(t *testing.T) {
dirName := "unfinished"
fileCount := 5
expectedFiles := []string{}
for i := 1; i < fileCount; i++ {
fileName := fmt.Sprintf("%s/unfinished-%d", dirName, i)
expectedFiles = append(expectedFiles, fileName)
obj := &Object{
fs: f,
remote: fileName,
}
objInfo := object.NewStaticObjectInfo(fileName, fstest.Time("2002-02-03T04:05:06.499999999Z"), -1, true, nil, nil)
_, err := f.newLargeUpload(ctx, obj, nil, objInfo, f.opt.ChunkSize, false, nil)
require.NoError(t, err)
}
checkListing(ctx, t, f, dirName, expectedFiles)
t.Run("DryRun", func(t *testing.T) {
// Listing should not change after dry run
ctx, ci := fs.AddConfig(ctx)
ci.DryRun = true
require.NoError(t, f.cleanUp(ctx, false, true, 0))
checkListing(ctx, t, f, dirName, expectedFiles)
})
t.Run("RealThing", func(t *testing.T) {
// Listing should be empty after real cleanup
require.NoError(t, f.cleanUp(ctx, false, true, 0))
checkListing(ctx, t, f, dirName, []string{})
})
})
}
func listAllFiles(ctx context.Context, t *testing.T, f *Fs, dirName string) []string {
bucket, directory := f.split(dirName)
foundFiles := []string{}
require.NoError(t, f.list(ctx, bucket, directory, "", false, true, 0, true, false, func(remote string, object *api.File, isDirectory bool) error {
if !isDirectory {
foundFiles = append(foundFiles, object.Name)
}
return nil
}))
sort.Strings(foundFiles)
return foundFiles
}
func checkListing(ctx context.Context, t *testing.T, f *Fs, dirName string, expectedFiles []string) {
foundFiles := listAllFiles(ctx, t, f, dirName)
sort.Strings(expectedFiles)
assert.Equal(t, expectedFiles, foundFiles)
}
func (f *Fs) InternalTestLifecycleRules(t *testing.T) {
ctx := context.Background()
opt := map[string]string{}
t.Run("InitState", func(t *testing.T) {
// There should be no lifecycle rules at the outset
lifecycleRulesIf, err := f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules := lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 0, len(lifecycleRules))
})
t.Run("DryRun", func(t *testing.T) {
// There should still be no lifecycle rules after each dry run operation
ctx, ci := fs.AddConfig(ctx)
ci.DryRun = true
opt["daysFromHidingToDeleting"] = "30"
lifecycleRulesIf, err := f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules := lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 0, len(lifecycleRules))
delete(opt, "daysFromHidingToDeleting")
opt["daysFromUploadingToHiding"] = "40"
lifecycleRulesIf, err = f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules = lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 0, len(lifecycleRules))
opt["daysFromHidingToDeleting"] = "30"
lifecycleRulesIf, err = f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules = lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 0, len(lifecycleRules))
})
t.Run("RealThing", func(t *testing.T) {
opt["daysFromHidingToDeleting"] = "30"
lifecycleRulesIf, err := f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules := lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 1, len(lifecycleRules))
assert.Equal(t, 30, *lifecycleRules[0].DaysFromHidingToDeleting)
delete(opt, "daysFromHidingToDeleting")
opt["daysFromUploadingToHiding"] = "40"
lifecycleRulesIf, err = f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules = lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 1, len(lifecycleRules))
assert.Equal(t, 40, *lifecycleRules[0].DaysFromUploadingToHiding)
opt["daysFromHidingToDeleting"] = "30"
lifecycleRulesIf, err = f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules = lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 1, len(lifecycleRules))
assert.Equal(t, 30, *lifecycleRules[0].DaysFromHidingToDeleting)
assert.Equal(t, 40, *lifecycleRules[0].DaysFromUploadingToHiding)
})
}
// -run TestIntegration/FsMkdir/FsPutFiles/Internal
func (f *Fs) InternalTest(t *testing.T) {
t.Run("Metadata", f.InternalTestMetadata)
t.Run("Versions", f.InternalTestVersions)
t.Run("CleanupUnfinished", f.InternalTestCleanupUnfinished)
t.Run("LifecycleRules", f.InternalTestLifecycleRules)
}
var _ fstests.InternalTester = (*Fs)(nil)

View File

@@ -2480,7 +2480,7 @@ func unmarshalSimpleJSON(ctx context.Context, metaObject fs.Object, data []byte)
if len(data) > maxMetadataSizeWritten {
return nil, false, ErrMetaTooBig
}
if len(data) < 2 || data[0] != '{' || data[len(data)-1] != '}' {
if data == nil || len(data) < 2 || data[0] != '{' || data[len(data)-1] != '}' {
return nil, false, errors.New("invalid json")
}
var metadata metaSimpleJSON

View File

@@ -203,6 +203,7 @@ func driveScopesContainsAppFolder(scopes []string) bool {
if scope == scopePrefix+"drive.appfolder" {
return true
}
}
return false
}
@@ -1211,7 +1212,6 @@ func fixMimeType(mimeTypeIn string) string {
}
return mimeTypeOut
}
func fixMimeTypeMap(in map[string][]string) (out map[string][]string) {
out = make(map[string][]string, len(in))
for k, v := range in {
@@ -1222,11 +1222,9 @@ func fixMimeTypeMap(in map[string][]string) (out map[string][]string) {
}
return out
}
func isInternalMimeType(mimeType string) bool {
return strings.HasPrefix(mimeType, "application/vnd.google-apps.")
}
func isLinkMimeType(mimeType string) bool {
return strings.HasPrefix(mimeType, "application/x-link-")
}
@@ -1659,8 +1657,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *drive.F
// When the drive.File cannot be represented as an fs.Object it will return (nil, nil).
func (f *Fs) newObjectWithExportInfo(
ctx context.Context, remote string, info *drive.File,
extension, exportName, exportMimeType string, isDocument bool,
) (o fs.Object, err error) {
extension, exportName, exportMimeType string, isDocument bool) (o fs.Object, err error) {
// Note that resolveShortcut will have been called already if
// we are being called from a listing. However the drive.Item
// will have been resolved so this will do nothing.
@@ -1763,7 +1760,7 @@ func (f *Fs) createDir(ctx context.Context, pathID, leaf string, metadata fs.Met
}
var updateMetadata updateMetadataFn
if len(metadata) > 0 {
updateMetadata, err = f.updateMetadata(ctx, createInfo, metadata, true, true)
updateMetadata, err = f.updateMetadata(ctx, createInfo, metadata, true)
if err != nil {
return nil, fmt.Errorf("create dir: failed to update metadata: %w", err)
}
@@ -1794,7 +1791,7 @@ func (f *Fs) updateDir(ctx context.Context, dirID string, metadata fs.Metadata)
}
dirID = actualID(dirID)
updateInfo := &drive.File{}
updateMetadata, err := f.updateMetadata(ctx, updateInfo, metadata, true, true)
updateMetadata, err := f.updateMetadata(ctx, updateInfo, metadata, true)
if err != nil {
return nil, fmt.Errorf("update dir: failed to update metadata from source object: %w", err)
}
@@ -1851,7 +1848,6 @@ func linkTemplate(mt string) *template.Template {
})
return _linkTemplates[mt]
}
func (f *Fs) fetchFormats(ctx context.Context) {
fetchFormatsOnce.Do(func() {
var about *drive.About
@@ -1897,8 +1893,7 @@ func (f *Fs) importFormats(ctx context.Context) map[string][]string {
// Look through the exportExtensions and find the first format that can be
// converted. If none found then return ("", "", false)
func (f *Fs) findExportFormatByMimeType(ctx context.Context, itemMimeType string) (
extension, mimeType string, isDocument bool,
) {
extension, mimeType string, isDocument bool) {
exportMimeTypes, isDocument := f.exportFormats(ctx)[itemMimeType]
if isDocument {
for _, _extension := range f.exportExtensions {
@@ -2694,7 +2689,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
if shortcutID != "" {
return f.delete(ctx, shortcutID, f.opt.UseTrash)
}
trashedFiles := false
var trashedFiles = false
if check {
found, err := f.list(ctx, []string{directoryID}, "", false, false, f.opt.TrashedOnly, true, func(item *drive.File) bool {
if !item.Trashed {
@@ -2931,6 +2926,7 @@ func (f *Fs) CleanUp(ctx context.Context) error {
err := f.svc.Files.EmptyTrash().Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return err
}
@@ -3191,7 +3187,6 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
}
}()
}
func (f *Fs) changeNotifyStartPageToken(ctx context.Context) (pageToken string, err error) {
var startPageToken *drive.StartPageToken
err = f.pacer.Call(func() (bool, error) {
@@ -3530,14 +3525,14 @@ func (f *Fs) unTrashDir(ctx context.Context, dir string, recurse bool) (r unTras
return f.unTrash(ctx, dir, directoryID, true)
}
// copy file with id to dest
func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
// copy or move file with id to dest
func (f *Fs) copyOrMoveID(ctx context.Context, operation string, id, dest string) (err error) {
info, err := f.getFile(ctx, id, f.getFileFields(ctx))
if err != nil {
return fmt.Errorf("couldn't find id: %w", err)
}
if info.MimeType == driveFolderType {
return fmt.Errorf("can't copy directory use: rclone copy --drive-root-folder-id %s %s %s", id, fs.ConfigString(f), dest)
return fmt.Errorf("can't %s directory use: rclone %s --drive-root-folder-id %s %s %s", operation, operation, id, fs.ConfigString(f), dest)
}
info.Name = f.opt.Enc.ToStandardName(info.Name)
o, err := f.newObjectWithInfo(ctx, info.Name, info)
@@ -3558,9 +3553,15 @@ func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
if err != nil {
return err
}
_, err = operations.Copy(ctx, dstFs, nil, destLeaf, o)
if err != nil {
return fmt.Errorf("copy failed: %w", err)
var opErr error
if operation == "moveid" {
_, opErr = operations.Move(ctx, dstFs, nil, destLeaf, o)
} else {
_, opErr = operations.Copy(ctx, dstFs, nil, destLeaf, o)
}
if opErr != nil {
return fmt.Errorf("%s failed: %w", operation, opErr)
}
return nil
}
@@ -3797,6 +3798,28 @@ attempted if possible.
Use the --interactive/-i or --dry-run flag to see what would be copied before copying.
`,
}, {
Name: "moveid",
Short: "Move files by ID",
Long: `This command moves files by ID
Usage:
rclone backend moveid drive: ID path
rclone backend moveid drive: ID1 path1 ID2 path2
It moves the drive file with ID given to the path (an rclone path which
will be passed internally to rclone moveto).
The path should end with a / to indicate moving the file as named into
this directory. If it doesn't end with a / then the last path
component will be used as the file name.
If the destination is a drive backend then server-side moving will be
attempted if possible.
Use the --interactive/-i or --dry-run flag to see what would be moved beforehand.
`,
}, {
Name: "exportformats",
Short: "Dump the export formats for debug purposes",
@@ -3975,16 +3998,16 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
dir = arg[0]
}
return f.unTrashDir(ctx, dir, true)
case "copyid":
case "copyid", "moveid":
if len(arg)%2 != 0 {
return nil, errors.New("need an even number of arguments")
}
for len(arg) > 0 {
id, dest := arg[0], arg[1]
arg = arg[2:]
err = f.copyID(ctx, id, dest)
err = f.copyOrMoveID(ctx, name, id, dest)
if err != nil {
return nil, fmt.Errorf("failed copying %q to %q: %w", id, dest, err)
return nil, fmt.Errorf("failed %s %q to %q: %w", name, id, dest, err)
}
}
return nil, nil
@@ -3995,13 +4018,14 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
case "query":
if len(arg) == 1 {
query := arg[0]
results, err := f.query(ctx, query)
var results, err = f.query(ctx, query)
if err != nil {
return nil, fmt.Errorf("failed to execute query: %q, error: %w", query, err)
}
return results, nil
} else {
return nil, errors.New("need a query argument")
}
return nil, errors.New("need a query argument")
case "rescue":
dirID := ""
_, delete := opt["delete"]
@@ -4061,7 +4085,6 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
}
return "", hash.ErrUnsupported
}
func (o *baseObject) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 && t != hash.SHA1 && t != hash.SHA256 {
return "", hash.ErrUnsupported
@@ -4076,8 +4099,7 @@ func (o *baseObject) Size() int64 {
// getRemoteInfoWithExport returns a drive.File and the export settings for the remote
func (f *Fs) getRemoteInfoWithExport(ctx context.Context, remote string) (
info *drive.File, extension, exportName, exportMimeType string, isDocument bool, err error,
) {
info *drive.File, extension, exportName, exportMimeType string, isDocument bool, err error) {
leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, false)
if err != nil {
if err == fs.ErrorDirNotFound {
@@ -4290,13 +4312,12 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
return o.baseObject.open(ctx, o.url, options...)
}
func (o *documentObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
// Update the size with what we are reading as it can change from
// the HEAD in the listing to this GET. This stops rclone marking
// the transfer as corrupted.
var offset, end int64 = 0, -1
newOptions := options[:0]
var newOptions = options[:0]
for _, o := range options {
// Note that Range requests don't work on Google docs:
// https://developers.google.com/drive/v3/web/manage-downloads#partial_download
@@ -4323,10 +4344,9 @@ func (o *documentObject) Open(ctx context.Context, options ...fs.OpenOption) (in
}
return
}
func (o *linkObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
var offset, limit int64 = 0, -1
data := o.content
var data = o.content
for _, option := range options {
switch x := option.(type) {
case *fs.SeekOption:
@@ -4351,8 +4371,7 @@ func (o *linkObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.
}
func (o *baseObject) update(ctx context.Context, updateInfo *drive.File, uploadMimeType string, in io.Reader,
src fs.ObjectInfo,
) (info *drive.File, err error) {
src fs.ObjectInfo) (info *drive.File, err error) {
// Make the API request to upload metadata and file data.
size := src.Size()
if size >= 0 && size < int64(o.fs.opt.UploadCutoff) {
@@ -4430,7 +4449,6 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return nil
}
func (o *documentObject) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
srcMimeType := fs.MimeType(ctx, src)
importMimeType := ""
@@ -4526,7 +4544,6 @@ func (o *baseObject) Metadata(ctx context.Context) (metadata fs.Metadata, err er
func (o *documentObject) ext() string {
return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:]
}
func (o *linkObject) ext() string {
return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:]
}

View File

@@ -479,8 +479,8 @@ func (f *Fs) InternalTestUnTrash(t *testing.T) {
require.NoError(t, f.Purge(ctx, "trashDir"))
}
// TestIntegration/FsMkdir/FsPutFiles/Internal/CopyID
func (f *Fs) InternalTestCopyID(t *testing.T) {
// TestIntegration/FsMkdir/FsPutFiles/Internal/CopyOrMoveID
func (f *Fs) InternalTestCopyOrMoveID(t *testing.T) {
ctx := context.Background()
obj, err := f.NewObject(ctx, existingFile)
require.NoError(t, err)
@@ -498,7 +498,7 @@ func (f *Fs) InternalTestCopyID(t *testing.T) {
}
t.Run("BadID", func(t *testing.T) {
err = f.copyID(ctx, "ID-NOT-FOUND", dir+"/")
err = f.copyOrMoveID(ctx, "moveid", "ID-NOT-FOUND", dir+"/")
require.Error(t, err)
assert.Contains(t, err.Error(), "couldn't find id")
})
@@ -506,19 +506,31 @@ func (f *Fs) InternalTestCopyID(t *testing.T) {
t.Run("Directory", func(t *testing.T) {
rootID, err := f.dirCache.RootID(ctx, false)
require.NoError(t, err)
err = f.copyID(ctx, rootID, dir+"/")
err = f.copyOrMoveID(ctx, "moveid", rootID, dir+"/")
require.Error(t, err)
assert.Contains(t, err.Error(), "can't copy directory")
assert.Contains(t, err.Error(), "can't moveid directory")
})
t.Run("WithoutDestName", func(t *testing.T) {
err = f.copyID(ctx, o.id, dir+"/")
t.Run("MoveWithoutDestName", func(t *testing.T) {
err = f.copyOrMoveID(ctx, "moveid", o.id, dir+"/")
require.NoError(t, err)
checkFile(path.Base(existingFile))
})
t.Run("WithDestName", func(t *testing.T) {
err = f.copyID(ctx, o.id, dir+"/potato.txt")
t.Run("CopyWithoutDestName", func(t *testing.T) {
err = f.copyOrMoveID(ctx, "copyid", o.id, dir+"/")
require.NoError(t, err)
checkFile(path.Base(existingFile))
})
t.Run("MoveWithDestName", func(t *testing.T) {
err = f.copyOrMoveID(ctx, "moveid", o.id, dir+"/potato.txt")
require.NoError(t, err)
checkFile("potato.txt")
})
t.Run("CopyWithDestName", func(t *testing.T) {
err = f.copyOrMoveID(ctx, "copyid", o.id, dir+"/potato.txt")
require.NoError(t, err)
checkFile("potato.txt")
})
@@ -647,7 +659,7 @@ func (f *Fs) InternalTest(t *testing.T) {
})
t.Run("Shortcuts", f.InternalTestShortcuts)
t.Run("UnTrash", f.InternalTestUnTrash)
t.Run("CopyID", f.InternalTestCopyID)
t.Run("CopyOrMoveID", f.InternalTestCopyOrMoveID)
t.Run("Query", f.InternalTestQuery)
t.Run("AgeQuery", f.InternalTestAgeQuery)
t.Run("ShouldRetry", f.InternalTestShouldRetry)

View File

@@ -508,7 +508,7 @@ type updateMetadataFn func(context.Context, *drive.File) error
//
// It returns a callback which should be called to finish the updates
// after the data is uploaded.
func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs.Metadata, update, isFolder bool) (callback updateMetadataFn, err error) {
func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs.Metadata, update bool) (callback updateMetadataFn, err error) {
callbackFns := []updateMetadataFn{}
callback = func(ctx context.Context, info *drive.File) error {
for _, fn := range callbackFns {
@@ -533,9 +533,7 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
}
switch k {
case "copy-requires-writer-permission":
if isFolder {
fs.Debugf(f, "Ignoring %s=%s as can't set on folders", k, v)
} else if err := parseBool(&updateInfo.CopyRequiresWriterPermission); err != nil {
if err := parseBool(&updateInfo.CopyRequiresWriterPermission); err != nil {
return nil, err
}
case "writers-can-share":
@@ -632,7 +630,7 @@ func (f *Fs) fetchAndUpdateMetadata(ctx context.Context, src fs.ObjectInfo, opti
if err != nil {
return nil, fmt.Errorf("failed to read metadata from source object: %w", err)
}
callback, err = f.updateMetadata(ctx, updateInfo, meta, update, false)
callback, err = f.updateMetadata(ctx, updateInfo, meta, update)
if err != nil {
return nil, fmt.Errorf("failed to update metadata from source object: %w", err)
}

View File

@@ -1174,16 +1174,6 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
return shouldRetry(ctx, err)
})
if err != nil && createArg.Settings.Expires != nil && strings.Contains(err.Error(), sharing.SharedLinkSettingsErrorNotAuthorized) {
// Some plans can't create links with expiry
fs.Debugf(absPath, "can't create link with expiry, trying without")
createArg.Settings.Expires = nil
err = f.pacer.Call(func() (bool, error) {
linkRes, err = f.sharing.CreateSharedLinkWithSettings(&createArg)
return shouldRetry(ctx, err)
})
}
if err != nil && strings.Contains(err.Error(),
sharing.CreateSharedLinkWithSettingsErrorSharedLinkAlreadyExists) {
fs.Debugf(absPath, "has a public link already, attempting to retrieve it")

View File

@@ -180,6 +180,7 @@ func getFsEndpoint(ctx context.Context, client *http.Client, url string, opt *Op
}
addHeaders(req, opt)
res, err := noRedir.Do(req)
if err != nil {
fs.Debugf(nil, "Assuming path is a file as HEAD request could not be sent: %v", err)
return createFileResult()
@@ -248,14 +249,6 @@ func (f *Fs) httpConnection(ctx context.Context, opt *Options) (isFile bool, err
f.httpClient = client
f.endpoint = u
f.endpointURL = u.String()
if isFile {
// Correct root if definitely pointing to a file
f.root = path.Dir(f.root)
if f.root == "." || f.root == "/" {
f.root = ""
}
}
return isFile, nil
}

View File

@@ -631,7 +631,7 @@ func NewUpdateFileInfo() UpdateFileInfo {
FileFlags: FileFlags{
IsExecutable: true,
IsHidden: false,
IsWritable: true,
IsWritable: false,
},
}
}

View File

@@ -151,6 +151,19 @@ Owner is able to add custom keys. Metadata feature grabs all the keys including
Help: "Host of InternetArchive Frontend.\n\nLeave blank for default value.",
Default: "https://archive.org",
Advanced: true,
}, {
Name: "item_metadata",
Help: `Metadata to be set on the IA item. This is different from file-level metadata, which can be set using --metadata-set.
Format is key=value and the 'x-archive-meta-' prefix is automatically added.`,
Default: []string{},
Hide: fs.OptionHideConfigurator,
Advanced: true,
}, {
Name: "item_derive",
Help: `Whether to trigger derive on the IA item or not. If set to false, the item will not be derived by IA upon upload.
The derive process produces a number of secondary files from an upload to make it more usable on the web.
Setting this to false is useful for uploading files that are already in a format that IA can display, or to reduce the burden on IA's infrastructure.
Default: true,
}, {
Name: "disable_checksum",
Help: `Don't ask the server to test against MD5 checksum calculated by rclone.
@@ -201,6 +214,8 @@ type Options struct {
Endpoint string `config:"endpoint"`
FrontEndpoint string `config:"front_endpoint"`
DisableChecksum bool `config:"disable_checksum"`
ItemMetadata []string `config:"item_metadata"`
ItemDerive bool `config:"item_derive"`
WaitArchive fs.Duration `config:"wait_archive"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -790,17 +805,23 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
"x-amz-filemeta-rclone-update-track": updateTracker,
// we add some more headers for intuitive actions
"x-amz-auto-make-bucket": "1", // create an item if does not exist, do nothing if already
"x-archive-auto-make-bucket": "1", // same as above in IAS3 original way
"x-archive-keep-old-version": "0", // do not keep old versions (a.k.a. trashes in other clouds)
"x-archive-meta-mediatype": "data", // mark media type of the uploading file as "data"
"x-archive-queue-derive": "0", // skip derivation process (e.g. encoding to smaller files, OCR on PDFs)
"x-archive-cascade-delete": "1", // enable "cascate delete" (delete all derived files in addition to the file itself)
"x-amz-auto-make-bucket": "1", // create an item if does not exist, do nothing if already
"x-archive-auto-make-bucket": "1", // same as above in IAS3 original way
"x-archive-keep-old-version": "0", // do not keep old versions (a.k.a. trashes in other clouds)
"x-archive-cascade-delete": "1", // enable "cascate delete" (delete all derived files in addition to the file itself)
}
if size >= 0 {
headers["Content-Length"] = fmt.Sprintf("%d", size)
headers["x-archive-size-hint"] = fmt.Sprintf("%d", size)
}
// This is IA's ITEM metadata, not file metadata
headers, err = o.appendItemMetadataHeaders(headers, o.fs.opt)
if err != nil {
return err
}
var mdata fs.Metadata
mdata, err = fs.GetMetadataOptions(ctx, o.fs, src, options)
if err == nil && mdata != nil {
@@ -863,6 +884,51 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return err
}
func (o *Object) appendItemMetadataHeaders(headers map[string]string, options Options) (newHeaders map[string]string, err error) {
metadataCounter := make(map[string]int)
metadataValues := make(map[string][]string)
// First pass: count occurrences and collect values
for _, v := range options.ItemMetadata {
parts := strings.SplitN(v, "=", 2)
if len(parts) != 2 {
return newHeaders, errors.New("item metadata should be in the form key=value")
}
key, value := parts[0], parts[1]
metadataCounter[key]++
metadataValues[key] = append(metadataValues[key], value)
}
// Second pass: add headers with appropriate prefixes
for key, count := range metadataCounter {
if count == 1 {
// Only one occurrence, use x-archive-meta-
headers[fmt.Sprintf("x-archive-meta-%s", key)] = metadataValues[key][0]
} else {
// Multiple occurrences, use x-archive-meta01-, x-archive-meta02-, etc.
for i, value := range metadataValues[key] {
headers[fmt.Sprintf("x-archive-meta%02d-%s", i+1, key)] = value
}
}
}
if o.fs.opt.ItemDerive {
headers["x-archive-queue-derive"] = "1"
} else {
headers["x-archive-queue-derive"] = "0"
}
fs.Debugf(o, "Setting IA item derive: %t", o.fs.opt.ItemDerive)
for k, v := range headers {
if strings.HasPrefix(k, "x-archive-meta") {
fs.Debugf(o, "Setting IA item metadata: %s=%s", k, v)
}
}
return headers, nil
}
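Illustrative sketch (not part of the diff) of the numbering scheme implemented above: a key that appears once becomes x-archive-meta-<key>, while repeated keys become x-archive-meta01-<key>, x-archive-meta02-<key>, and so on. The key/value pairs below are made up for the example:

package main

import (
	"fmt"
	"strings"
)

func main() {
	itemMetadata := []string{"title=My Collection", "subject=books", "subject=texts"}

	// First pass: count occurrences and collect values per key.
	counts := map[string]int{}
	values := map[string][]string{}
	for _, kv := range itemMetadata {
		parts := strings.SplitN(kv, "=", 2)
		counts[parts[0]]++
		values[parts[0]] = append(values[parts[0]], parts[1])
	}

	// Second pass: single keys get the plain prefix, repeated keys get numbered prefixes.
	headers := map[string]string{}
	for key, n := range counts {
		if n == 1 {
			headers["x-archive-meta-"+key] = values[key][0]
			continue
		}
		for i, v := range values[key] {
			headers[fmt.Sprintf("x-archive-meta%02d-%s", i+1, key)] = v
		}
	}
	fmt.Println(headers)
	// map[x-archive-meta-title:My Collection x-archive-meta01-subject:books x-archive-meta02-subject:texts]
}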
// Remove an object
func (o *Object) Remove(ctx context.Context) (err error) {
bucket, bucketPath := o.split()

View File

@@ -396,57 +396,10 @@ func (m *Metadata) WritePermissions(ctx context.Context) (err error) {
return nil
}
// Order the permissions so that any with users come first.
//
// This is to work around a quirk with Graph:
//
// 1. You are adding permissions for both a group and a user.
// 2. The user is a member of the group.
// 3. The permissions for the group and user are the same.
// 4. You are adding the group permission before the user permission.
//
// When all of the above are true, Graph indicates it has added the
// user permission, but it immediately drops it
//
// See: https://github.com/rclone/rclone/issues/8465
func (m *Metadata) orderPermissions(xs []*api.PermissionsType) {
// Return true if identity has any user permissions
hasUserIdentity := func(identity *api.IdentitySet) bool {
if identity == nil {
return false
}
return identity.User.ID != "" || identity.User.DisplayName != "" || identity.User.Email != "" || identity.User.LoginName != ""
}
// Return true if p has any user permissions
hasUser := func(p *api.PermissionsType) bool {
if hasUserIdentity(p.GetGrantedTo(m.fs.driveType)) {
return true
}
for _, identity := range p.GetGrantedToIdentities(m.fs.driveType) {
if hasUserIdentity(identity) {
return true
}
}
return false
}
// Put Permissions with a user first, leaving unsorted otherwise
slices.SortStableFunc(xs, func(a, b *api.PermissionsType) int {
aHasUser := hasUser(a)
bHasUser := hasUser(b)
if aHasUser && !bHasUser {
return -1
} else if !aHasUser && bHasUser {
return 1
}
return 0
})
}
// sortPermissions sorts the permissions (to be written) into add, update, and remove queues
func (m *Metadata) sortPermissions() (add, update, remove []*api.PermissionsType) {
new, old := m.queuedPermissions, m.permissions
if len(old) == 0 || m.permsAddOnly {
m.orderPermissions(new)
return new, nil, nil // they must all be "add"
}
@@ -494,9 +447,6 @@ func (m *Metadata) sortPermissions() (add, update, remove []*api.PermissionsType
remove = append(remove, o)
}
}
m.orderPermissions(add)
m.orderPermissions(update)
m.orderPermissions(remove)
return add, update, remove
}

View File

@@ -1,125 +0,0 @@
package onedrive
import (
"encoding/json"
"testing"
"github.com/rclone/rclone/backend/onedrive/api"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestOrderPermissions(t *testing.T) {
tests := []struct {
name string
input []*api.PermissionsType
expected []string
}{
{
name: "empty",
input: []*api.PermissionsType{},
expected: []string(nil),
},
{
name: "users first, then group, then none",
input: []*api.PermissionsType{
{ID: "1", GrantedTo: &api.IdentitySet{Group: api.Identity{DisplayName: "Group1"}}},
{ID: "2", GrantedToIdentities: []*api.IdentitySet{{User: api.Identity{DisplayName: "Alice"}}}},
{ID: "3", GrantedTo: &api.IdentitySet{User: api.Identity{DisplayName: "Alice"}}},
{ID: "4"},
},
expected: []string{"2", "3", "1", "4"},
},
{
name: "same type unsorted",
input: []*api.PermissionsType{
{ID: "b", GrantedTo: &api.IdentitySet{Group: api.Identity{DisplayName: "Group B"}}},
{ID: "a", GrantedTo: &api.IdentitySet{Group: api.Identity{DisplayName: "Group A"}}},
{ID: "c", GrantedToIdentities: []*api.IdentitySet{{Group: api.Identity{DisplayName: "Group A"}}, {User: api.Identity{DisplayName: "Alice"}}}},
},
expected: []string{"c", "b", "a"},
},
{
name: "all user identities",
input: []*api.PermissionsType{
{ID: "c", GrantedTo: &api.IdentitySet{User: api.Identity{DisplayName: "Bob"}}},
{ID: "a", GrantedTo: &api.IdentitySet{User: api.Identity{Email: "alice@example.com"}}},
{ID: "b", GrantedToIdentities: []*api.IdentitySet{{User: api.Identity{LoginName: "user3"}}}},
},
expected: []string{"c", "a", "b"},
},
{
name: "no user or group info",
input: []*api.PermissionsType{
{ID: "z"},
{ID: "x"},
{ID: "y"},
},
expected: []string{"z", "x", "y"},
},
}
for _, driveType := range []string{driveTypePersonal, driveTypeBusiness} {
t.Run(driveType, func(t *testing.T) {
for _, tt := range tests {
m := &Metadata{fs: &Fs{driveType: driveType}}
t.Run(tt.name, func(t *testing.T) {
if driveType == driveTypeBusiness {
for i := range tt.input {
tt.input[i].GrantedToV2 = tt.input[i].GrantedTo
tt.input[i].GrantedTo = nil
tt.input[i].GrantedToIdentitiesV2 = tt.input[i].GrantedToIdentities
tt.input[i].GrantedToIdentities = nil
}
}
m.orderPermissions(tt.input)
var gotIDs []string
for _, p := range tt.input {
gotIDs = append(gotIDs, p.ID)
}
assert.Equal(t, tt.expected, gotIDs)
})
}
})
}
}
func TestOrderPermissionsJSON(t *testing.T) {
testJSON := `[
{
"id": "1",
"grantedToV2": {
"group": {
"id": "group@example.com"
}
},
"roles": [
"write"
]
},
{
"id": "2",
"grantedToV2": {
"user": {
"id": "user@example.com"
}
},
"roles": [
"write"
]
}
]`
var testPerms []*api.PermissionsType
err := json.Unmarshal([]byte(testJSON), &testPerms)
require.NoError(t, err)
m := &Metadata{fs: &Fs{driveType: driveTypeBusiness}}
m.orderPermissions(testPerms)
var gotIDs []string
for _, p := range testPerms {
gotIDs = append(gotIDs, p.ID)
}
assert.Equal(t, []string{"2", "1"}, gotIDs)
}

View File

@@ -934,67 +934,34 @@ func init() {
Help: "The default endpoint\nIran",
}},
}, {
// Linode endpoints: https://techdocs.akamai.com/cloud-computing/docs/object-storage-product-limits#supported-endpoint-types-by-region
// Linode endpoints: https://www.linode.com/docs/products/storage/object-storage/guides/urls/#cluster-url-s3-endpoint
Name: "endpoint",
Help: "Endpoint for Linode Object Storage API.",
Provider: "Linode",
Examples: []fs.OptionExample{{
Value: "nl-ams-1.linodeobjects.com",
Help: "Amsterdam (Netherlands), nl-ams-1",
}, {
Value: "us-southeast-1.linodeobjects.com",
Help: "Atlanta, GA (USA), us-southeast-1",
}, {
Value: "in-maa-1.linodeobjects.com",
Help: "Chennai (India), in-maa-1",
}, {
Value: "us-ord-1.linodeobjects.com",
Help: "Chicago, IL (USA), us-ord-1",
}, {
Value: "eu-central-1.linodeobjects.com",
Help: "Frankfurt (Germany), eu-central-1",
}, {
Value: "id-cgk-1.linodeobjects.com",
Help: "Jakarta (Indonesia), id-cgk-1",
}, {
Value: "gb-lon-1.linodeobjects.com",
Help: "London 2 (Great Britain), gb-lon-1",
}, {
Value: "us-lax-1.linodeobjects.com",
Help: "Los Angeles, CA (USA), us-lax-1",
}, {
Value: "es-mad-1.linodeobjects.com",
Help: "Madrid (Spain), es-mad-1",
}, {
Value: "au-mel-1.linodeobjects.com",
Help: "Melbourne (Australia), au-mel-1",
}, {
Value: "us-mia-1.linodeobjects.com",
Help: "Miami, FL (USA), us-mia-1",
}, {
Value: "it-mil-1.linodeobjects.com",
Help: "Milan (Italy), it-mil-1",
}, {
Value: "us-east-1.linodeobjects.com",
Help: "Newark, NJ (USA), us-east-1",
}, {
Value: "jp-osa-1.linodeobjects.com",
Help: "Osaka (Japan), jp-osa-1",
}, {
Value: "fr-par-1.linodeobjects.com",
Help: "Paris (France), fr-par-1",
}, {
Value: "br-gru-1.linodeobjects.com",
Help: "São Paulo (Brazil), br-gru-1",
}, {
Value: "us-sea-1.linodeobjects.com",
Help: "Seattle, WA (USA), us-sea-1",
}, {
Value: "ap-south-1.linodeobjects.com",
Help: "Singapore, ap-south-1",
}, {
Value: "sg-sin-1.linodeobjects.com",
Help: "Singapore 2, sg-sin-1",
Help: "Singapore ap-south-1",
}, {
Value: "se-sto-1.linodeobjects.com",
Help: "Stockholm (Sweden), se-sto-1",
@@ -1389,10 +1356,6 @@ func init() {
Value: "sfo3.digitaloceanspaces.com",
Help: "DigitalOcean Spaces San Francisco 3",
Provider: "DigitalOcean",
}, {
Value: "sfo2.digitaloceanspaces.com",
Help: "DigitalOcean Spaces San Francisco 2",
Provider: "DigitalOcean",
}, {
Value: "fra1.digitaloceanspaces.com",
Help: "DigitalOcean Spaces Frankfurt 1",
@@ -1409,18 +1372,6 @@ func init() {
Value: "sgp1.digitaloceanspaces.com",
Help: "DigitalOcean Spaces Singapore 1",
Provider: "DigitalOcean",
}, {
Value: "lon1.digitaloceanspaces.com",
Help: "DigitalOcean Spaces London 1",
Provider: "DigitalOcean",
}, {
Value: "tor1.digitaloceanspaces.com",
Help: "DigitalOcean Spaces Toronto 1",
Provider: "DigitalOcean",
}, {
Value: "blr1.digitaloceanspaces.com",
Help: "DigitalOcean Spaces Bangalore 1",
Provider: "DigitalOcean",
}, {
Value: "localhost:8333",
Help: "SeaweedFS S3 localhost",
@@ -3726,6 +3677,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if opt.Provider == "IDrive" {
f.features.SetTier = false
}
if opt.Provider == "AWS" {
f.features.DoubleSlash = true
}
if opt.DirectoryMarkers {
f.features.CanHaveEmptyDirectories = true
}
@@ -4197,7 +4151,7 @@ func (f *Fs) list(ctx context.Context, opt listOpt, fn listFn) error {
opt.prefix += "/"
}
if !opt.findFile {
if opt.directory != "" {
if opt.directory != "" && (opt.prefix == "" && !bucket.IsAllSlashes(opt.directory) || opt.prefix != "" && !strings.HasSuffix(opt.directory, "/")) {
opt.directory += "/"
}
}
@@ -4294,14 +4248,18 @@ func (f *Fs) list(ctx context.Context, opt listOpt, fn listFn) error {
}
remote = f.opt.Enc.ToStandardPath(remote)
if !strings.HasPrefix(remote, opt.prefix) {
fs.Logf(f, "Odd name received %q", remote)
fs.Logf(f, "Odd directory name received %q", remote)
continue
}
remote = remote[len(opt.prefix):]
// Trim one slash off the remote name
remote, _ = strings.CutSuffix(remote, "/")
if remote == "" || bucket.IsAllSlashes(remote) {
remote += "/"
}
if opt.addBucket {
remote = bucket.Join(opt.bucket, remote)
}
remote = strings.TrimSuffix(remote, "/")
err = fn(remote, &types.Object{Key: &remote}, nil, true)
if err != nil {
if err == errEndList {
@@ -4976,7 +4934,7 @@ or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Fre
Usage Examples:
rclone backend restore s3:bucket/path/to/ --include /object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY

View File

@@ -31,13 +31,29 @@ func (f *Fs) dial(ctx context.Context, network, addr string) (*conn, error) {
}
}
d := &smb2.Dialer{
Initiator: &smb2.NTLMInitiator{
d := &smb2.Dialer{}
if f.opt.UseKerberos {
cl, err := getKerberosClient()
if err != nil {
return nil, err
}
spn := f.opt.SPN
if spn == "" {
spn = "cifs/" + f.opt.Host
}
d.Initiator = &smb2.Krb5Initiator{
Client: cl,
TargetSPN: spn,
}
} else {
d.Initiator = &smb2.NTLMInitiator{
User: f.opt.User,
Password: pass,
Domain: f.opt.Domain,
TargetSPN: f.opt.SPN,
},
}
}
session, err := d.DialConn(ctx, tconn, addr)

backend/smb/kerberos.go Normal file
View File

@@ -0,0 +1,78 @@
package smb
import (
"fmt"
"os"
"os/user"
"path/filepath"
"strings"
"sync"
"github.com/jcmturner/gokrb5/v8/client"
"github.com/jcmturner/gokrb5/v8/config"
"github.com/jcmturner/gokrb5/v8/credentials"
)
var (
kerberosClient *client.Client
kerberosErr error
kerberosOnce sync.Once
)
// getKerberosClient returns a Kerberos client that can be used to authenticate.
func getKerberosClient() (*client.Client, error) {
if kerberosClient == nil || kerberosErr == nil {
kerberosOnce.Do(func() {
kerberosClient, kerberosErr = createKerberosClient()
})
}
return kerberosClient, kerberosErr
}
// createKerberosClient creates a new Kerberos client.
func createKerberosClient() (*client.Client, error) {
cfgPath := os.Getenv("KRB5_CONFIG")
if cfgPath == "" {
cfgPath = "/etc/krb5.conf"
}
cfg, err := config.Load(cfgPath)
if err != nil {
return nil, err
}
// Determine the ccache location from the environment, falling back to the
// default location.
ccachePath := os.Getenv("KRB5CCNAME")
switch {
case strings.Contains(ccachePath, ":"):
parts := strings.SplitN(ccachePath, ":", 2)
switch parts[0] {
case "FILE":
ccachePath = parts[1]
case "DIR":
primary, err := os.ReadFile(filepath.Join(parts[1], "primary"))
if err != nil {
return nil, err
}
ccachePath = filepath.Join(parts[1], strings.TrimSpace(string(primary)))
default:
return nil, fmt.Errorf("unsupported KRB5CCNAME: %s", ccachePath)
}
case ccachePath == "":
u, err := user.Current()
if err != nil {
return nil, err
}
ccachePath = "/tmp/krb5cc_" + u.Uid
}
ccache, err := credentials.LoadCCache(ccachePath)
if err != nil {
return nil, err
}
return client.NewFromCCache(ccache, cfg)
}
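Illustrative sketch (not part of the diff; paths are assumptions): the credential cache location is taken from KRB5CCNAME, and both the FILE: and DIR: forms are understood by createKerberosClient above; with the DIR: form the "primary" file inside the directory names the cache to load.

package main

import "os"

func main() {
	// Point rclone at an explicit Kerberos setup before using an SMB remote
	// configured with use_kerberos = true.
	//   FILE:/tmp/krb5cc_1000    - a single credential cache file
	//   DIR:/home/user/.krb5cc   - a directory cache; its "primary" file
	//                              names the active cache
	os.Setenv("KRB5_CONFIG", "/etc/krb5.conf")
	os.Setenv("KRB5CCNAME", "FILE:/tmp/krb5cc_1000")
}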

View File

@@ -76,6 +76,16 @@ authentication, and it often needs to be set for clusters. For example:
Leave blank if not sure.
`,
Sensitive: true,
}, {
Name: "use_kerberos",
Help: `Use Kerberos authentication.
If set, rclone will use Kerberos authentication instead of NTLM. This
requires a valid Kerberos configuration and credentials cache to be
available, either in the default locations or as specified by the
KRB5_CONFIG and KRB5CCNAME environment variables.
`,
Default: false,
}, {
Name: "idle_timeout",
Default: fs.Duration(60 * time.Second),
@@ -126,6 +136,7 @@ type Options struct {
Pass string `config:"pass"`
Domain string `config:"domain"`
SPN string `config:"spn"`
UseKerberos bool `config:"use_kerberos"`
HideSpecial bool `config:"hide_special_share"`
CaseInsensitive bool `config:"case_insensitive"`
IdleTimeout fs.Duration `config:"idle_timeout"`

View File

@@ -2,6 +2,7 @@
package smb_test
import (
"path/filepath"
"testing"
"github.com/rclone/rclone/backend/smb"
@@ -15,3 +16,13 @@ func TestIntegration(t *testing.T) {
NilObject: (*smb.Object)(nil),
})
}
func TestIntegration2(t *testing.T) {
krb5Dir := t.TempDir()
t.Setenv("KRB5_CONFIG", filepath.Join(krb5Dir, "krb5.conf"))
t.Setenv("KRB5CCNAME", filepath.Join(krb5Dir, "ccache"))
fstests.Run(t, &fstests.Opt{
RemoteName: "TestSMBKerberos:rclone",
NilObject: (*smb.Object)(nil),
})
}

View File

@@ -120,7 +120,7 @@ func init() {
srv := rest.NewClient(fshttp.NewClient(ctx)).SetRoot(rootURL) // FIXME
// FIXME
// err = f.pacer.Call(func() (bool, error) {
//err = f.pacer.Call(func() (bool, error) {
resp, err = srv.CallXML(context.Background(), &opts, &authRequest, nil)
// return shouldRetry(ctx, resp, err)
//})
@@ -327,7 +327,7 @@ func (f *Fs) readMetaDataForID(ctx context.Context, ID string) (info *api.File,
func (f *Fs) getAuthToken(ctx context.Context) error {
fs.Debugf(f, "Renewing token")
authRequest := api.TokenAuthRequest{
var authRequest = api.TokenAuthRequest{
AccessKeyID: withDefault(f.opt.AccessKeyID, accessKeyID),
PrivateAccessKey: withDefault(f.opt.PrivateAccessKey, obscure.MustReveal(encryptedPrivateAccessKey)),
RefreshToken: f.opt.RefreshToken,
@@ -509,7 +509,7 @@ func errorHandler(resp *http.Response) (err error) {
return fmt.Errorf("error reading error out of body: %w", err)
}
match := findError.FindSubmatch(body)
if len(match) < 2 || len(match[1]) == 0 {
if match == nil || len(match) < 2 || len(match[1]) == 0 {
return fmt.Errorf("HTTP error %v (%v) returned body: %q", resp.StatusCode, resp.Status, body)
}
return fmt.Errorf("HTTP error %v (%v): %s", resp.StatusCode, resp.Status, match[1])
@@ -552,7 +552,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// FindLeaf finds a directory of name leaf in the folder with ID pathID
func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) {
// fs.Debugf(f, "FindLeaf(%q, %q)", pathID, leaf)
//fs.Debugf(f, "FindLeaf(%q, %q)", pathID, leaf)
// Find the leaf in pathID
found, err = f.listAll(ctx, pathID, nil, func(item *api.Collection) bool {
if strings.EqualFold(item.Name, leaf) {

View File

@@ -11,5 +11,4 @@
<services+github@simjo.st>
<seb•ɑƬ•chezwam•ɖɵʈ•org>
<allllaboutyou@gmail.com>
<psycho@feltzv.fr>
<afw5059@gmail.com>
<psycho@feltzv.fr>

View File

@@ -23,23 +23,19 @@ func init() {
}
var commandDefinition = &cobra.Command{
Use: "authorize <fs name> [base64_json_blob | client_id client_secret]",
Use: "authorize",
Short: `Remote authorization.`,
Long: `Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.
The command requires 1-3 arguments:
- fs name (e.g., "drive", "s3", etc.)
- Either a base64 encoded JSON blob obtained from a previous rclone config session
- Or a client_id and client_secret pair obtained from the remote service
Use --auth-no-open-browser to prevent rclone to open auth
link in default browser automatically.
Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.`,
Annotations: map[string]string{
"versionIntroduced": "v1.27",
// "groups": "",
},
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 3, command, args)

View File

@@ -1,32 +0,0 @@
package authorize
import (
"bytes"
"strings"
"testing"
"github.com/spf13/cobra"
)
func TestAuthorizeCommand(t *testing.T) {
// Test that the Use string is correctly formatted
if commandDefinition.Use != "authorize <fs name> [base64_json_blob | client_id client_secret]" {
t.Errorf("Command Use string doesn't match expected format: %s", commandDefinition.Use)
}
// Test that help output contains the argument information
buf := &bytes.Buffer{}
cmd := &cobra.Command{}
cmd.AddCommand(commandDefinition)
cmd.SetOut(buf)
cmd.SetArgs([]string{"authorize", "--help"})
err := cmd.Execute()
if err != nil {
t.Fatalf("Failed to execute help command: %v", err)
}
helpOutput := buf.String()
if !strings.Contains(helpOutput, "authorize <fs name>") {
t.Errorf("Help output doesn't contain correct usage information")
}
}

View File

@@ -9,7 +9,7 @@ import (
"github.com/rclone/rclone/cmd/bisync/bilib"
"github.com/rclone/rclone/fs"
"github.com/stretchr/testify/assert"
"gopkg.in/yaml.v2"
"gopkg.in/yaml.v3"
)
const configFile = "../../fstest/test_all/config.yaml"

View File

@@ -746,16 +746,6 @@ func (b *bisyncTest) runTestStep(ctx context.Context, line string) (err error) {
case "test-func":
b.TestFn = testFunc
return
case "concurrent-func":
b.TestFn = func() {
src := filepath.Join(b.dataDir, "file7.txt")
dst := "file1.txt"
err := b.copyFile(ctx, src, b.replaceHex(b.path2), dst)
if err != nil {
fs.Errorf(src, "error copying file: %v", err)
}
}
return
case "fix-names":
// in case the local os converted any filenames
ci.NoUnicodeNormalization = true
@@ -881,9 +871,10 @@ func (b *bisyncTest) runTestStep(ctx context.Context, line string) (err error) {
if !ok || err != nil {
fs.Logf(remotePath, "Can't find expected file %s (was it renamed by the os?) %v", args[1], err)
return
} else {
// include hash of filename to make unicode form differences easier to see in logs
fs.Debugf(remotePath, "verified file exists at correct path. filename hash: %s", stringToHash(leaf))
}
// include hash of filename to make unicode form differences easier to see in logs
fs.Debugf(remotePath, "verified file exists at correct path. filename hash: %s", stringToHash(leaf))
return
default:
return fmt.Errorf("unknown command: %q", args[0])

View File

@@ -63,40 +63,40 @@ func (b *bisyncRun) setCompareDefaults(ctx context.Context) error {
}
if b.opt.Compare.SlowHashSyncOnly && b.opt.Compare.SlowHashDetected && b.opt.Resync {
fs.Logf(nil, Color(terminal.Dim, "Ignoring checksums during --resync as --slow-hash-sync-only is set.")) ///nolint:govet
fs.Log(nil, Color(terminal.Dim, "Ignoring checksums during --resync as --slow-hash-sync-only is set."))
ci.CheckSum = false
// note not setting b.opt.Compare.Checksum = false as we still want to build listings on the non-slow side, if any
} else if b.opt.Compare.Checksum && !ci.CheckSum {
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: Checksums will be compared for deltas but not during sync as --checksum is not set.")) //nolint:govet
fs.Log(nil, Color(terminal.YellowFg, "WARNING: Checksums will be compared for deltas but not during sync as --checksum is not set."))
}
if b.opt.Compare.Modtime && (b.fs1.Precision() == fs.ModTimeNotSupported || b.fs2.Precision() == fs.ModTimeNotSupported) {
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: Modtime compare was requested but at least one remote does not support it. It is recommended to use --checksum or --size-only instead.")) //nolint:govet
fs.Log(nil, Color(terminal.YellowFg, "WARNING: Modtime compare was requested but at least one remote does not support it. It is recommended to use --checksum or --size-only instead."))
}
if (ci.CheckSum || b.opt.Compare.Checksum) && b.opt.IgnoreListingChecksum {
if (b.opt.Compare.HashType1 == hash.None || b.opt.Compare.HashType2 == hash.None) && !b.opt.Compare.DownloadHash {
fs.Logf(nil, Color(terminal.YellowFg, `WARNING: Checksum compare was requested but at least one remote does not support checksums (or checksums are being ignored) and --ignore-listing-checksum is set.
Ignoring Checksums globally and falling back to --compare modtime,size for sync. (Use --compare size or --size-only to ignore modtime). Path1 (%s): %s, Path2 (%s): %s`),
b.fs1.String(), b.opt.Compare.HashType1.String(), b.fs2.String(), b.opt.Compare.HashType2.String()) //nolint:govet
b.fs1.String(), b.opt.Compare.HashType1.String(), b.fs2.String(), b.opt.Compare.HashType2.String())
b.opt.Compare.Modtime = true
b.opt.Compare.Size = true
ci.CheckSum = false
b.opt.Compare.Checksum = false
} else {
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: Ignoring checksum for deltas as --ignore-listing-checksum is set")) //nolint:govet
fs.Log(nil, Color(terminal.YellowFg, "WARNING: Ignoring checksum for deltas as --ignore-listing-checksum is set"))
// note: --checksum will still affect the internal sync calls
}
}
if !ci.CheckSum && !b.opt.Compare.Checksum && !b.opt.IgnoreListingChecksum {
fs.Infof(nil, Color(terminal.Dim, "Setting --ignore-listing-checksum as neither --checksum nor --compare checksum are set.")) //nolint:govet
fs.Infoc(nil, Color(terminal.Dim, "Setting --ignore-listing-checksum as neither --checksum nor --compare checksum are set."))
b.opt.IgnoreListingChecksum = true
}
if !b.opt.Compare.Size && !b.opt.Compare.Modtime && !b.opt.Compare.Checksum {
return errors.New(Color(terminal.RedFg, "must set a Compare method. (size, modtime, and checksum can't all be false.)")) //nolint:govet
return errors.New(Color(terminal.RedFg, "must set a Compare method. (size, modtime, and checksum can't all be false.)"))
}
notSupported := func(label string, value bool, opt *bool) {
if value {
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: %s is set but bisync does not support it. It will be ignored."), label) //nolint:govet
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: %s is set but bisync does not support it. It will be ignored."), label)
*opt = false
}
}
@@ -123,13 +123,13 @@ func sizeDiffers(a, b int64) bool {
func hashDiffers(a, b string, ht1, ht2 hash.Type, size1, size2 int64) bool {
if a == "" || b == "" {
if ht1 != hash.None && ht2 != hash.None && !(size1 <= 0 || size2 <= 0) {
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: hash unexpectedly blank despite Fs support (%s, %s) (you may need to --resync!)"), a, b) //nolint:govet
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: hash unexpectedly blank despite Fs support (%s, %s) (you may need to --resync!)"), a, b)
}
return false
}
if ht1 != ht2 {
if !(downloadHash && ((ht1 == hash.MD5 && ht2 == hash.None) || (ht1 == hash.None && ht2 == hash.MD5))) {
fs.Infof(nil, Color(terminal.YellowFg, "WARNING: Can't compare hashes of different types (%s, %s)"), ht1.String(), ht2.String()) //nolint:govet
fs.Infof(nil, Color(terminal.YellowFg, "WARNING: Can't compare hashes of different types (%s, %s)"), ht1.String(), ht2.String())
return false
}
}
@@ -151,7 +151,7 @@ func (b *bisyncRun) setHashType(ci *fs.ConfigInfo) {
return
}
} else if b.opt.Compare.SlowHashSyncOnly && b.opt.Compare.SlowHashDetected {
fs.Logf(b.fs2, Color(terminal.YellowFg, "Ignoring --slow-hash-sync-only and falling back to --no-slow-hash as Path1 and Path2 have no hashes in common.")) //nolint:govet
fs.Log(b.fs2, Color(terminal.YellowFg, "Ignoring --slow-hash-sync-only and falling back to --no-slow-hash as Path1 and Path2 have no hashes in common."))
b.opt.Compare.SlowHashSyncOnly = false
b.opt.Compare.NoSlowHash = true
ci.CheckSum = false
@@ -159,7 +159,7 @@ func (b *bisyncRun) setHashType(ci *fs.ConfigInfo) {
}
if !b.opt.Compare.DownloadHash && !b.opt.Compare.SlowHashSyncOnly {
fs.Logf(b.fs2, Color(terminal.YellowFg, "--checksum is in use but Path1 and Path2 have no hashes in common; falling back to --compare modtime,size for sync. (Use --compare size or --size-only to ignore modtime)")) //nolint:govet
fs.Log(b.fs2, Color(terminal.YellowFg, "--checksum is in use but Path1 and Path2 have no hashes in common; falling back to --compare modtime,size for sync. (Use --compare size or --size-only to ignore modtime)"))
fs.Infof("Path1 hashes", "%v", b.fs1.Hashes().String())
fs.Infof("Path2 hashes", "%v", b.fs2.Hashes().String())
b.opt.Compare.Modtime = true
@@ -167,25 +167,25 @@ func (b *bisyncRun) setHashType(ci *fs.ConfigInfo) {
ci.CheckSum = false
}
if (b.opt.Compare.NoSlowHash || b.opt.Compare.SlowHashSyncOnly) && b.fs1.Features().SlowHash {
fs.Infof(nil, Color(terminal.YellowFg, "Slow hash detected on Path1. Will ignore checksum due to slow-hash settings")) //nolint:govet
fs.Infoc(nil, Color(terminal.YellowFg, "Slow hash detected on Path1. Will ignore checksum due to slow-hash settings"))
b.opt.Compare.HashType1 = hash.None
} else {
b.opt.Compare.HashType1 = b.fs1.Hashes().GetOne()
if b.opt.Compare.HashType1 != hash.None {
fs.Logf(b.fs1, Color(terminal.YellowFg, "will use %s for same-side diffs on Path1 only"), b.opt.Compare.HashType1) //nolint:govet
fs.Logf(b.fs1, Color(terminal.YellowFg, "will use %s for same-side diffs on Path1 only"), b.opt.Compare.HashType1)
}
}
if (b.opt.Compare.NoSlowHash || b.opt.Compare.SlowHashSyncOnly) && b.fs2.Features().SlowHash {
fs.Infof(nil, Color(terminal.YellowFg, "Slow hash detected on Path2. Will ignore checksum due to slow-hash settings")) //nolint:govet
fs.Infoc(nil, Color(terminal.YellowFg, "Slow hash detected on Path2. Will ignore checksum due to slow-hash settings"))
b.opt.Compare.HashType1 = hash.None
} else {
b.opt.Compare.HashType2 = b.fs2.Hashes().GetOne()
if b.opt.Compare.HashType2 != hash.None {
fs.Logf(b.fs2, Color(terminal.YellowFg, "will use %s for same-side diffs on Path2 only"), b.opt.Compare.HashType2) //nolint:govet
fs.Logf(b.fs2, Color(terminal.YellowFg, "will use %s for same-side diffs on Path2 only"), b.opt.Compare.HashType2)
}
}
if b.opt.Compare.HashType1 == hash.None && b.opt.Compare.HashType2 == hash.None && !b.opt.Compare.DownloadHash {
fs.Logf(nil, Color(terminal.YellowFg, "WARNING: Ignoring checksums globally as hashes are ignored or unavailable on both sides.")) //nolint:govet
fs.Log(nil, Color(terminal.YellowFg, "WARNING: Ignoring checksums globally as hashes are ignored or unavailable on both sides."))
b.opt.Compare.Checksum = false
ci.CheckSum = false
b.opt.IgnoreListingChecksum = true
@@ -232,7 +232,7 @@ func (b *bisyncRun) setFromCompareFlag(ctx context.Context) error {
b.opt.Compare.Checksum = true
CompareFlag.Checksum = true
default:
return fmt.Errorf(Color(terminal.RedFg, "unknown compare option: %s (must be size, modtime, or checksum)"), opt) //nolint:govet
return fmt.Errorf(Color(terminal.RedFg, "unknown compare option: %s (must be size, modtime, or checksum)"), opt)
}
}
@@ -284,14 +284,14 @@ func tryDownloadHash(ctx context.Context, o fs.DirEntry, hashVal string) (string
}
if o.Size() < 0 {
downloadHashWarn.Do(func() {
fs.Logf(o, Color(terminal.YellowFg, "Skipping hash download as checksum not reliable with files of unknown length.")) //nolint:govet
fs.Log(o, Color(terminal.YellowFg, "Skipping hash download as checksum not reliable with files of unknown length."))
})
fs.Debugf(o, "Skipping hash download as checksum not reliable with files of unknown length.")
return hashVal, hash.ErrUnsupported
}
firstDownloadHash.Do(func() {
fs.Infof(obj.Fs().Name(), Color(terminal.Dim, "Downloading hashes...")) //nolint:govet
fs.Infoc(obj.Fs().Name(), Color(terminal.Dim, "Downloading hashes..."))
})
tr := accounting.Stats(ctx).NewCheckingTransfer(o, "computing hash with --download-hash")
defer func() {

View File

@@ -161,7 +161,9 @@ func (b *bisyncRun) findDeltas(fctx context.Context, f fs.Fs, oldListing string,
return
}
err = b.checkListing(now, newListing, "current "+msg)
if err == nil {
err = b.checkListing(now, newListing, "current "+msg)
}
if err != nil {
return
}
@@ -284,7 +286,7 @@ func (b *bisyncRun) findDeltas(fctx context.Context, f fs.Fs, oldListing string,
}
// applyDeltas
func (b *bisyncRun) applyDeltas(ctx context.Context, ds1, ds2 *deltaSet) (results2to1, results1to2 []Results, queues queues, err error) {
func (b *bisyncRun) applyDeltas(ctx context.Context, ds1, ds2 *deltaSet) (changes1, changes2 bool, results2to1, results1to2 []Results, queues queues, err error) {
path1 := bilib.FsPath(b.fs1)
path2 := bilib.FsPath(b.fs2)
@@ -365,7 +367,7 @@ func (b *bisyncRun) applyDeltas(ctx context.Context, ds1, ds2 *deltaSet) (result
}
}
// if there are potential conflicts to check, check them all here (outside the loop) in one fell swoop
//if there are potential conflicts to check, check them all here (outside the loop) in one fell swoop
matches, err := b.checkconflicts(ctxCheck, filterCheck, b.fs1, b.fs2)
for _, file := range ds1.sort() {
@@ -390,7 +392,7 @@ func (b *bisyncRun) applyDeltas(ctx context.Context, ds1, ds2 *deltaSet) (result
} else if d2.is(deltaOther) {
b.indent("!WARNING", file, "New or changed in both paths")
// if files are identical, leave them alone instead of renaming
//if files are identical, leave them alone instead of renaming
if (dirs1.has(file) || dirs1.has(alias)) && (dirs2.has(file) || dirs2.has(alias)) {
fs.Infof(nil, "This is a directory, not a file. Skipping equality check and will not rename: %s", file)
ls1.getPut(file, skippedDirs1)
@@ -484,6 +486,7 @@ func (b *bisyncRun) applyDeltas(ctx context.Context, ds1, ds2 *deltaSet) (result
// Do the batch operation
if copy2to1.NotEmpty() && !b.InGracefulShutdown {
changes1 = true
b.indent("Path2", "Path1", "Do queued copies to")
ctx = b.setBackupDir(ctx, 1)
results2to1, err = b.fastCopy(ctx, b.fs2, b.fs1, copy2to1, "copy2to1")
@@ -495,11 +498,12 @@ func (b *bisyncRun) applyDeltas(ctx context.Context, ds1, ds2 *deltaSet) (result
return
}
// copy empty dirs from path2 to path1 (if --create-empty-src-dirs)
//copy empty dirs from path2 to path1 (if --create-empty-src-dirs)
b.syncEmptyDirs(ctx, b.fs1, copy2to1, dirs2, &results2to1, "make")
}
if copy1to2.NotEmpty() && !b.InGracefulShutdown {
changes2 = true
b.indent("Path1", "Path2", "Do queued copies to")
ctx = b.setBackupDir(ctx, 2)
results1to2, err = b.fastCopy(ctx, b.fs1, b.fs2, copy1to2, "copy1to2")
@@ -511,7 +515,7 @@ func (b *bisyncRun) applyDeltas(ctx context.Context, ds1, ds2 *deltaSet) (result
return
}
// copy empty dirs from path1 to path2 (if --create-empty-src-dirs)
//copy empty dirs from path1 to path2 (if --create-empty-src-dirs)
b.syncEmptyDirs(ctx, b.fs2, copy1to2, dirs1, &results1to2, "make")
}
@@ -519,7 +523,7 @@ func (b *bisyncRun) applyDeltas(ctx context.Context, ds1, ds2 *deltaSet) (result
if err = b.saveQueue(delete1, "delete1"); err != nil {
return
}
// propagate deletions of empty dirs from path2 to path1 (if --create-empty-src-dirs)
//propagate deletions of empty dirs from path2 to path1 (if --create-empty-src-dirs)
b.syncEmptyDirs(ctx, b.fs1, delete1, dirs1, &results2to1, "remove")
}
@@ -527,7 +531,7 @@ func (b *bisyncRun) applyDeltas(ctx context.Context, ds1, ds2 *deltaSet) (result
if err = b.saveQueue(delete2, "delete2"); err != nil {
return
}
// propagate deletions of empty dirs from path1 to path2 (if --create-empty-src-dirs)
//propagate deletions of empty dirs from path1 to path2 (if --create-empty-src-dirs)
b.syncEmptyDirs(ctx, b.fs2, delete2, dirs2, &results1to2, "remove")
}

View File

@@ -78,6 +78,15 @@ func Color(style string, s string) string {
return style + s + terminal.Reset
}
// ColorX handles terminal colors for bisync
func ColorX(style string, s string) string {
if !Colors {
return s
}
terminal.Start()
return style + s + terminal.Reset
}
func encode(s string) string {
return encoder.OS.ToStandardPath(encoder.OS.FromStandardPath(s))
}

View File

@@ -131,18 +131,18 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
finaliseOnce.Do(func() {
if atexit.Signalled() {
if b.opt.Resync {
fs.Logf(nil, Color(terminal.GreenFg, "No need to gracefully shutdown during --resync (just run it again.)")) //nolint:govet
fs.Log(nil, Color(terminal.GreenFg, "No need to gracefully shutdown during --resync (just run it again.)"))
} else {
fs.Logf(nil, Color(terminal.YellowFg, "Attempting to gracefully shutdown. (Send exit signal again for immediate un-graceful shutdown.)")) //nolint:govet
fs.Log(nil, Color(terminal.YellowFg, "Attempting to gracefully shutdown. (Send exit signal again for immediate un-graceful shutdown.)"))
b.InGracefulShutdown = true
if b.SyncCI != nil {
fs.Infof(nil, Color(terminal.YellowFg, "Telling Sync to wrap up early.")) //nolint:govet
fs.Infoc(nil, Color(terminal.YellowFg, "Telling Sync to wrap up early."))
b.SyncCI.MaxTransfer = 1
b.SyncCI.MaxDuration = 1 * time.Second
b.SyncCI.CutoffMode = fs.CutoffModeSoft
gracePeriod := 30 * time.Second // TODO: flag to customize this?
if !waitFor("Canceling Sync if not done in", gracePeriod, func() bool { return b.CleanupCompleted }) {
fs.Logf(nil, Color(terminal.YellowFg, "Canceling sync and cleaning up")) //nolint:govet
fs.Log(nil, Color(terminal.YellowFg, "Canceling sync and cleaning up"))
b.CancelSync()
waitFor("Aborting Bisync if not done in", 60*time.Second, func() bool { return b.CleanupCompleted })
}
@@ -150,13 +150,13 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
// we haven't started to sync yet, so we're good.
// no need to worry about the listing files, as we haven't overwritten them yet.
b.CleanupCompleted = true
fs.Logf(nil, Color(terminal.GreenFg, "Graceful shutdown completed successfully.")) //nolint:govet
fs.Log(nil, Color(terminal.GreenFg, "Graceful shutdown completed successfully."))
}
}
if !b.CleanupCompleted {
if !b.opt.Resync {
fs.Logf(nil, Color(terminal.HiRedFg, "Graceful shutdown failed.")) //nolint:govet
fs.Logf(nil, Color(terminal.RedFg, "Bisync interrupted. Must run --resync to recover.")) //nolint:govet
fs.Log(nil, Color(terminal.HiRedFg, "Graceful shutdown failed."))
fs.Log(nil, Color(terminal.RedFg, "Bisync interrupted. Must run --resync to recover."))
}
markFailed(b.listing1)
markFailed(b.listing2)
@@ -180,14 +180,14 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
b.critical = false
}
if err == nil {
fs.Logf(nil, Color(terminal.GreenFg, "Graceful shutdown completed successfully.")) //nolint:govet
fs.Log(nil, Color(terminal.GreenFg, "Graceful shutdown completed successfully."))
}
}
if b.critical {
if b.retryable && b.opt.Resilient {
fs.Errorf(nil, Color(terminal.RedFg, "Bisync critical error: %v"), err) //nolint:govet
fs.Errorf(nil, Color(terminal.YellowFg, "Bisync aborted. Error is retryable without --resync due to --resilient mode.")) //nolint:govet
fs.Errorf(nil, Color(terminal.RedFg, "Bisync critical error: %v"), err)
fs.Error(nil, Color(terminal.YellowFg, "Bisync aborted. Error is retryable without --resync due to --resilient mode."))
} else {
if bilib.FileExists(b.listing1) {
_ = os.Rename(b.listing1, b.listing1+"-err")
@@ -196,15 +196,15 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
_ = os.Rename(b.listing2, b.listing2+"-err")
}
fs.Errorf(nil, Color(terminal.RedFg, "Bisync critical error: %v"), err)
fs.Errorf(nil, Color(terminal.RedFg, "Bisync aborted. Must run --resync to recover.")) //nolint:govet
fs.Error(nil, Color(terminal.RedFg, "Bisync aborted. Must run --resync to recover."))
}
return ErrBisyncAborted
}
if b.abort && !b.InGracefulShutdown {
fs.Logf(nil, Color(terminal.RedFg, "Bisync aborted. Please try again.")) //nolint:govet
fs.Log(nil, Color(terminal.RedFg, "Bisync aborted. Please try again."))
}
if err == nil {
fs.Infof(nil, Color(terminal.GreenFg, "Bisync successful")) //nolint:govet
fs.Infoc(nil, Color(terminal.GreenFg, "Bisync successful"))
}
return err
}
@@ -270,7 +270,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
if b.opt.Recover && bilib.FileExists(b.listing1+"-old") && bilib.FileExists(b.listing2+"-old") {
errTip := fmt.Sprintf(Color(terminal.CyanFg, "Path1: %s\n"), Color(terminal.HiBlueFg, b.listing1))
errTip += fmt.Sprintf(Color(terminal.CyanFg, "Path2: %s"), Color(terminal.HiBlueFg, b.listing2))
fs.Logf(nil, Color(terminal.YellowFg, "Listings not found. Reverting to prior backup as --recover is set. \n")+errTip) //nolint:govet
fs.Log(nil, Color(terminal.YellowFg, "Listings not found. Reverting to prior backup as --recover is set. \n")+errTip)
if opt.CheckSync != CheckSyncFalse {
// Run CheckSync to ensure old listing is valid (garbage in, garbage out!)
fs.Infof(nil, "Validating backup listings for Path1 %s vs Path2 %s", quotePath(path1), quotePath(path2))
@@ -279,7 +279,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
b.retryable = true
return err
}
fs.Infof(nil, Color(terminal.GreenFg, "Backup listing is valid.")) //nolint:govet
fs.Infoc(nil, Color(terminal.GreenFg, "Backup listing is valid."))
}
b.revertToOldListings()
} else {
@@ -299,7 +299,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
fs.Infof(nil, "Building Path1 and Path2 listings")
ls1, ls2, err = b.makeMarchListing(fctx)
if err != nil || accounting.Stats(fctx).Errored() {
fs.Errorf(nil, Color(terminal.RedFg, "There were errors while building listings. Aborting as it is too dangerous to continue.")) //nolint:govet
fs.Error(nil, Color(terminal.RedFg, "There were errors while building listings. Aborting as it is too dangerous to continue."))
b.critical = true
b.retryable = true
return err
@@ -359,6 +359,8 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
// Determine and apply changes to Path1 and Path2
noChanges := ds1.empty() && ds2.empty()
changes1 := false // 2to1
changes2 := false // 1to2
results2to1 := []Results{}
results1to2 := []Results{}
@@ -368,7 +370,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
fs.Infof(nil, "No changes found")
} else {
fs.Infof(nil, "Applying changes")
results2to1, results1to2, queues, err = b.applyDeltas(octx, ds1, ds2)
changes1, changes2, results2to1, results1to2, queues, err = b.applyDeltas(octx, ds1, ds2)
if err != nil {
if b.InGracefulShutdown && (err == context.Canceled || err == accounting.ErrorMaxTransferLimitReachedGraceful || strings.Contains(err.Error(), "context canceled")) {
fs.Infof(nil, "Ignoring sync error due to Graceful Shutdown: %v", err)
@@ -393,11 +395,21 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
}
b.saveOldListings()
// save new listings
// NOTE: "changes" in this case does not mean this run vs. last run, it means start of this run vs. end of this run.
// i.e. whether we can use the March lst-new as this side's lst without modifying it.
if noChanges {
b.replaceCurrentListings()
} else {
err1 = b.modifyListing(fctx, b.fs2, b.fs1, results2to1, queues, false) // 2to1
err2 = b.modifyListing(fctx, b.fs1, b.fs2, results1to2, queues, true) // 1to2
if changes1 || b.InGracefulShutdown { // 2to1
err1 = b.modifyListing(fctx, b.fs2, b.fs1, results2to1, queues, false)
} else {
err1 = bilib.CopyFileIfExists(b.newListing1, b.listing1)
}
if changes2 || b.InGracefulShutdown { // 1to2
err2 = b.modifyListing(fctx, b.fs1, b.fs2, results1to2, queues, true)
} else {
err2 = bilib.CopyFileIfExists(b.newListing2, b.listing2)
}
}
if b.DebugName != "" {
l1, _ := b.loadListing(b.listing1)
@@ -611,7 +623,7 @@ func (b *bisyncRun) checkSyntax() error {
func (b *bisyncRun) debug(nametocheck, msgiftrue string) {
if b.DebugName != "" && b.DebugName == nametocheck {
fs.Infof(Color(terminal.MagentaBg, "DEBUGNAME "+b.DebugName), Color(terminal.MagentaBg, msgiftrue)) //nolint:govet
fs.Infoc(Color(terminal.MagentaBg, "DEBUGNAME "+b.DebugName), Color(terminal.MagentaBg, msgiftrue))
}
}

View File

@@ -161,7 +161,7 @@ func WriteResults(ctx context.Context, sigil operations.Sigil, src, dst fs.DirEn
prettyprint(result, "writing result", fs.LogLevelDebug)
if result.Size < 0 && result.Flags != "d" && ((queueCI.CheckSum && !downloadHash) || queueCI.SizeOnly) {
once.Do(func() {
fs.Logf(result.Name, Color(terminal.YellowFg, "Files of unknown size (such as Google Docs) do not sync reliably with --checksum or --size-only. Consider using modtime instead (the default) or --drive-skip-gdocs")) //nolint:govet
fs.Log(result.Name, Color(terminal.YellowFg, "Files of unknown size (such as Google Docs) do not sync reliably with --checksum or --size-only. Consider using modtime instead (the default) or --drive-skip-gdocs"))
})
}

View File

@@ -142,7 +142,7 @@ func (b *bisyncRun) resolve(ctxMove context.Context, path1, path2, file, alias s
if winningPath > 0 {
fs.Infof(file, Color(terminal.GreenFg, "The winner is: Path%d"), winningPath)
} else {
fs.Infof(file, Color(terminal.RedFg, "A winner could not be determined.")) //nolint:govet
fs.Infoc(file, Color(terminal.RedFg, "A winner could not be determined."))
}
}

View File

@@ -15,7 +15,7 @@ import (
// and either flag is sufficient without the other.
func (b *bisyncRun) setResyncDefaults() {
if b.opt.Resync && b.opt.ResyncMode == PreferNone {
fs.Debugf(nil, Color(terminal.Dim, "defaulting to --resync-mode path1 as --resync is set")) //nolint:govet
fs.Debug(nil, Color(terminal.Dim, "defaulting to --resync-mode path1 as --resync is set"))
b.opt.ResyncMode = PreferPath1
}
if b.opt.ResyncMode != PreferNone {

View File

@@ -1,10 +0,0 @@
# bisync listing v1 from test
- 109 - - 2000-01-01T00:00:00.000000000+0000 "RCLONE_TEST"
- 19 - - 2023-08-26T00:00:00.000000000+0000 "file1.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file2.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file3.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file4.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file5.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file6.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file7.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file8.txt"

View File

@@ -1,10 +0,0 @@
# bisync listing v1 from test
- 109 - - 2000-01-01T00:00:00.000000000+0000 "RCLONE_TEST"
- 19 - - 2023-08-26T00:00:00.000000000+0000 "file1.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file2.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file3.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file4.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file5.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file6.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file7.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file8.txt"

View File

@@ -1,10 +0,0 @@
# bisync listing v1 from test
- 109 - - 2000-01-01T00:00:00.000000000+0000 "RCLONE_TEST"
- 19 - - 2023-08-26T00:00:00.000000000+0000 "file1.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file2.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file3.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file4.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file5.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file6.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file7.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file8.txt"

View File

@@ -1,10 +0,0 @@
# bisync listing v1 from test
- 109 - - 2000-01-01T00:00:00.000000000+0000 "RCLONE_TEST"
- 19 - - 2023-08-26T00:00:00.000000000+0000 "file1.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file2.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file3.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file4.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file5.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file6.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file7.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file8.txt"

View File

@@ -1,10 +0,0 @@
# bisync listing v1 from test
- 109 - - 2000-01-01T00:00:00.000000000+0000 "RCLONE_TEST"
- 19 - - 2023-08-26T00:00:00.000000000+0000 "file1.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file2.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file3.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file4.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file5.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file6.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file7.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file8.txt"

View File

@@ -1,10 +0,0 @@
# bisync listing v1 from test
- 109 - - 2000-01-01T00:00:00.000000000+0000 "RCLONE_TEST"
- 19 - - 2023-08-26T00:00:00.000000000+0000 "file1.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file2.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file3.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file4.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file5.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file6.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file7.txt"
- 0 - - 2000-01-01T00:00:00.000000000+0000 "file8.txt"

View File

@@ -1,73 +0,0 @@
(01) : test concurrent
(02) : test initial bisync
(03) : bisync resync
INFO : Setting --ignore-listing-checksum as neither --checksum nor --compare checksum are set.
INFO : Bisyncing with Comparison Settings:
{
"Modtime": true,
"Size": true,
"Checksum": false,
"NoSlowHash": false,
"SlowHashSyncOnly": false,
"DownloadHash": false
}
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1
INFO : - Path1 Resync is copying files to - Path2
INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
(04) : test changed on one path - file1
(05) : touch-glob 2001-01-02 {datadir/} file5R.txt
(06) : touch-glob 2023-08-26 {datadir/} file7.txt
(07) : copy-as {datadir/}file5R.txt {path2/} file1.txt
(08) : test bisync with file changed during
(09) : concurrent-func
(10) : bisync
INFO : Setting --ignore-listing-checksum as neither --checksum nor --compare checksum are set.
INFO : Bisyncing with Comparison Settings:
{
"Modtime": true,
"Size": true,
"Checksum": false,
"NoSlowHash": false,
"SlowHashSyncOnly": false,
"DownloadHash": false
}
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Building Path1 and Path2 listings
INFO : Path1 checking for diffs
INFO : Path2 checking for diffs
INFO : - Path2 File changed: size (larger), time (newer) - file1.txt
INFO : Path2: 1 changes:  0 new,  1 modified,  0 deleted
INFO : (Modified:  1 newer,  0 older,  1 larger,  0 smaller)
INFO : Applying changes
INFO : - Path2 Queue copy to Path1 - {path1/}file1.txt
INFO : - Path2 Do queued copies to - Path1
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful
(11) : bisync
INFO : Setting --ignore-listing-checksum as neither --checksum nor --compare checksum are set.
INFO : Bisyncing with Comparison Settings:
{
"Modtime": true,
"Size": true,
"Checksum": false,
"NoSlowHash": false,
"SlowHashSyncOnly": false,
"DownloadHash": false
}
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Building Path1 and Path2 listings
INFO : Path1 checking for diffs
INFO : Path2 checking for diffs
INFO : No changes found
INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful

View File

@@ -1 +0,0 @@
This file is used for testing the health of rclone accesses to the local/remote file system. Do not delete.

View File

@@ -1 +0,0 @@
This file is newer

View File

@@ -1 +0,0 @@
This file is newer

View File

@@ -1 +0,0 @@
This file is newer

View File

@@ -1 +0,0 @@
Newer version

View File

@@ -1 +0,0 @@
This file is newer and not equal to 5R

View File

@@ -1 +0,0 @@
This file is newer and not equal to 5L

View File

@@ -1 +0,0 @@
This file is newer

View File

@@ -1 +0,0 @@
This file is newer

View File

@@ -1,15 +0,0 @@
test concurrent
test initial bisync
bisync resync
test changed on one path - file1
touch-glob 2001-01-02 {datadir/} file5R.txt
touch-glob 2023-08-26 {datadir/} file7.txt
copy-as {datadir/}file5R.txt {path2/} file1.txt
test bisync with file changed during
concurrent-func
bisync
bisync

View File

@@ -430,7 +430,7 @@ func initConfig() {
}
// Start the metrics server if configured and not running the "rc" command
if len(os.Args) >= 2 && os.Args[1] != "rc" {
if os.Args[1] != "rc" {
_, err = rcserver.MetricsStart(ctx, &rc.Opt)
if err != nil {
fs.Fatalf(nil, "Failed to start metrics server: %v", err)

View File

@@ -43,8 +43,6 @@ This doesn't transfer files that are identical on src and dst, testing
by size and modification time or MD5SUM. It doesn't delete files from
the destination.
*If you are looking to copy just a byte range of a file, please see 'rclone cat --offset X --count Y'*
**Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view real-time transfer statistics
`,
Annotations: map[string]string{

View File

@@ -43,7 +43,7 @@ Setting |--auto-filename| will attempt to automatically determine the
filename from the URL (after any redirections) and used in the
destination path.
With |--header-filename| in addition, if a specific filename is
With |--auto-filename-header| in addition, if a specific filename is
set in HTTP headers, it will be used instead of the name from the URL.
With |--print-filename| in addition, the resulting file name will be
printed.

View File

@@ -22,9 +22,6 @@ include/exclude filters - everything will be removed. Use the
delete files. To delete empty directories only, use command
[rmdir](/commands/rclone_rmdir/) or [rmdirs](/commands/rclone_rmdirs/).
The concurrency of this operation is controlled by the ` + "`--checkers`" + ` global flag. However, some backends will
implement this command directly, in which case ` + "`--checkers`" + ` will be ignored.
**Important**: Since this can cause data loss, test first with the
` + "`--dry-run` or the `--interactive`/`-i`" + ` flag.
`,

View File

@@ -29,7 +29,6 @@ func setSys(fi os.FileInfo) {
node, ok := fi.(vfs.Node)
if !ok {
fs.Errorf(fi, "internal error: %T is not a vfs.Node", fi)
return
}
vfs := node.VFS()
// Set the UID and GID for the node passed in from the VFS defaults.

View File

@@ -4,8 +4,10 @@ package proxy
import (
"bytes"
"context"
"crypto/md5"
"crypto/sha256"
"crypto/subtle"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
@@ -217,8 +219,13 @@ func (p *Proxy) call(user, auth string, isPublicKey bool) (value interface{}, er
return nil, fmt.Errorf("proxy: couldn't find backend for %q: %w", fsName, err)
}
// Add a config hash to ensure configs with different values have different names.
// 5 characters length is 5*6 = 30 bits of base64
md5sumBinary := md5.Sum([]byte(config.String()))
configHash := base64.RawURLEncoding.EncodeToString(md5sumBinary[:])[:5]
// base name of config on user name. This may appear in logs
name := "proxy-" + user
name := "proxy-" + user + "-" + configHash
fsString := name + ":" + root
// Look for fs in the VFS cache

View File

@@ -90,7 +90,8 @@ func TestRun(t *testing.T) {
require.NotNil(t, entry.vfs)
f := entry.vfs.Fs()
require.NotNil(t, f)
assert.Equal(t, "proxy-"+testUser, f.Name())
assert.True(t, strings.HasPrefix(f.Name(), "proxy-"+testUser+"-"))
assert.Equal(t, len("proxy-"+testUser+"-")+5, len(f.Name()))
assert.True(t, strings.HasPrefix(f.String(), "Local file system"))
// check it is in the cache
@@ -108,7 +109,7 @@ func TestRun(t *testing.T) {
vfs, vfsKey, err := p.Call(testUser, testPass, false)
require.NoError(t, err)
require.NotNil(t, vfs)
assert.Equal(t, "proxy-"+testUser, vfs.Fs().Name())
assert.True(t, strings.HasPrefix(vfs.Fs().Name(), "proxy-"+testUser+"-"))
assert.Equal(t, testUser, vfsKey)
// check it is in the cache
@@ -129,7 +130,7 @@ func TestRun(t *testing.T) {
vfs, vfsKey, err = p.Call(testUser, testPass, false)
require.NoError(t, err)
require.NotNil(t, vfs)
assert.Equal(t, "proxy-"+testUser, vfs.Fs().Name())
assert.True(t, strings.HasPrefix(vfs.Fs().Name(), "proxy-"+testUser+"-"))
assert.Equal(t, testUser, vfsKey)
// check cache is at the same level
@@ -173,7 +174,7 @@ func TestRun(t *testing.T) {
require.NotNil(t, entry.vfs)
f := entry.vfs.Fs()
require.NotNil(t, f)
assert.Equal(t, "proxy-"+testUser, f.Name())
assert.True(t, strings.HasPrefix(f.Name(), "proxy-"+testUser+"-"))
assert.True(t, strings.HasPrefix(f.String(), "Local file system"))
// check it is in the cache
@@ -195,7 +196,7 @@ func TestRun(t *testing.T) {
)
require.NoError(t, err)
require.NotNil(t, vfs)
assert.Equal(t, "proxy-"+testUser, vfs.Fs().Name())
assert.True(t, strings.HasPrefix(vfs.Fs().Name(), "proxy-"+testUser+"-"))
assert.Equal(t, testUser, vfsKey)
// check it is in the cache
@@ -216,7 +217,7 @@ func TestRun(t *testing.T) {
vfs, vfsKey, err = p.Call(testUser, publicKeyString, true)
require.NoError(t, err)
require.NotNil(t, vfs)
assert.Equal(t, "proxy-"+testUser, vfs.Fs().Name())
assert.True(t, strings.HasPrefix(vfs.Fs().Name(), "proxy-"+testUser+"-"))
assert.Equal(t, testUser, vfsKey)
// check cache is at the same level

View File

@@ -14,7 +14,7 @@ docs](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)).
access.
Please note that some clients may require HTTPS endpoints. See [the
SSL docs](#tls-ssl) for more information.
SSL docs](#ssl-tls) for more information.
This command uses the [VFS directory cache](#vfs-virtual-file-system).
All the functionality will work with `--vfs-cache-mode off`. Using

View File

@@ -3,10 +3,13 @@ package version
import (
"context"
"debug/buildinfo"
"errors"
"fmt"
"io"
"net/http"
"os"
"runtime/debug"
"strings"
"time"
@@ -20,12 +23,14 @@ import (
var (
check = false
deps = false
)
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &check, "check", "", false, "Check for new version", "")
flags.BoolVarP(cmdFlags, &deps, "deps", "", false, "Show the Go dependencies", "")
}
var commandDefinition = &cobra.Command{
@@ -67,18 +72,25 @@ Or
beta: 1.42.0.5 (released 2018-06-17)
upgrade: https://beta.rclone.org/v1.42-005-g56e1e820
If you supply the --deps flag then rclone will print a list of all the
packages it depends on and their versions along with some other
information about the build.
`,
Annotations: map[string]string{
"versionIntroduced": "v1.33",
},
Run: func(command *cobra.Command, args []string) {
RunE: func(command *cobra.Command, args []string) error {
ctx := context.Background()
cmd.CheckArgs(0, 0, command, args)
if deps {
return printDependencies()
}
if check {
CheckVersion(ctx)
} else {
cmd.ShowVersion()
}
return nil
},
}
@@ -151,3 +163,36 @@ func CheckVersion(ctx context.Context) {
fmt.Println("Your version is compiled from git so comparisons may be wrong.")
}
}
// Print info about a build module
func printModule(module *debug.Module) {
if module.Replace != nil {
fmt.Printf("- %s %s (replaced by %s %s)\n",
module.Path, module.Version, module.Replace.Path, module.Replace.Version)
} else {
fmt.Printf("- %s %s\n", module.Path, module.Version)
}
}
// printDependencies shows the packages we use in a format like go.mod
func printDependencies() error {
info, err := buildinfo.ReadFile(os.Args[0])
if err != nil {
return fmt.Errorf("error reading build info: %w", err)
}
fmt.Println("Go Version:")
fmt.Printf("- %s\n", info.GoVersion)
fmt.Println("Main package:")
printModule(&info.Main)
fmt.Println("Binary path:")
fmt.Printf("- %s\n", info.Path)
fmt.Println("Settings:")
for _, setting := range info.Settings {
fmt.Printf("- %s: %s\n", setting.Key, setting.Value)
}
fmt.Println("Dependencies:")
for _, dep := range info.Deps {
printModule(dep)
}
return nil
}
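As a point of comparison (not part of the change above), the same embedded module information can also be read from the running binary itself via `runtime/debug`, which is roughly what `debug/buildinfo.ReadFile(os.Args[0])` does for an on-disk binary. A minimal sketch:
```go
// Minimal sketch, for illustration only: print the embedded module info of
// the current binary using runtime/debug instead of debug/buildinfo.
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	info, ok := debug.ReadBuildInfo()
	if !ok {
		fmt.Println("no build info embedded in this binary")
		return
	}
	fmt.Println("Go Version:")
	fmt.Printf("- %s\n", info.GoVersion)
	fmt.Println("Dependencies:")
	for _, dep := range info.Deps {
		if dep.Replace != nil {
			fmt.Printf("- %s %s (replaced by %s %s)\n", dep.Path, dep.Version, dep.Replace.Path, dep.Replace.Version)
		} else {
			fmt.Printf("- %s %s\n", dep.Path, dep.Version)
		}
	}
}
```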

View File

@@ -809,6 +809,7 @@ put them back in again.` >}}
* ben-ba <benjamin.brauner@gmx.de>
* Eli Orzitzer <e_orz@yahoo.com>
* Anthony Metzidis <anthony.metzidis@gmail.com>
* emyarod <afw5059@gmail.com>
* keongalvin <keongalvin@gmail.com>
* rarspace01 <rarspace01@users.noreply.github.com>
* Paul Stern <paulstern45@gmail.com>
@@ -924,3 +925,16 @@ put them back in again.` >}}
* ToM <thomas.faucher@bibliosansfrontieres.org>
* TAKEI Yuya <853320+takei-yuya@users.noreply.github.com>
* Francesco Frassinelli <fraph24@gmail.com> <francesco.frassinelli@nina.no>
* Matt Ickstadt <mattico8@gmail.com> <matt@beckenterprises.com>
* Spencer McCullough <mccullough.spencer@gmail.com>
* Jonathan Giannuzzi <jonathan@giannuzzi.me>
* Christoph Berger <github@christophberger.com>
* Tim White <tim.white@su.org.au>
* Robin Schneider <robin.schneider@stackit.cloud>
* izouxv <izouxv@users.noreply.github.com>
* Moises Lima <mozlima@users.noreply.github.com>
* Bruno Fernandes <bruno.fernandes1996@hotmail.com>
* Corentin Barreau <corentin@archive.org>
* hiddenmarten <hiddenmarten@gmail.com>
* Trevor Starick <trevor.starick@gmail.com>
* b-wimmer <132347192+b-wimmer@users.noreply.github.com>

View File

@@ -938,8 +938,9 @@ You can set custom upload headers with the `--header-upload` flag.
- Content-Encoding
- Content-Language
- Content-Type
- X-MS-Tags
Eg `--header-upload "Content-Type: text/potato"`
Eg `--header-upload "Content-Type: text/potato"` or `--header-upload "X-MS-Tags: foo=bar"`
## Limitations

View File

@@ -206,6 +206,13 @@ If the resource has multiple user-assigned identities you will need to
unset `env_auth` and set `use_msi` instead. See the [`use_msi`
section](#use_msi).
If you are operating in disconnected clouds, or private clouds such as
Azure Stack you may want to set `disable_instance_discovery = true`.
This determines whether rclone requests Microsoft Entra instance
metadata from `https://login.microsoft.com/` before authenticating.
Setting this to `true` will skip this request, making you responsible
for ensuring the configured authority is valid and trustworthy.
##### Env Auth: 3. Azure CLI credentials (as used by the az tool)
Credentials created with the `az` tool can be picked up using `env_auth`.
@@ -288,6 +295,13 @@ be explicitly specified using exactly one of the `msi_object_id`,
If none of `msi_object_id`, `msi_client_id`, or `msi_mi_res_id` is
set, this is is equivalent to using `env_auth`.
#### Azure CLI tool `az` {#use_az}
Set to use the [Azure CLI tool `az`](https://learn.microsoft.com/en-us/cli/azure/)
as the sole means of authentication.
Setting this can be useful if you wish to use the `az` CLI on a host with
a System Managed Identity that you do not want to use.
Don't set `env_auth` at the same time.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/azurefiles/azurefiles.go then run make backenddocs" >}}
### Standard options

View File

@@ -1815,9 +1815,6 @@ about _Unison_ and synchronization in general.
## Changelog
### `v1.69.1`
* Fixed an issue causing listings to not capture concurrent modifications under certain conditions
### `v1.68`
* Fixed an issue affecting backends that round modtimes to a lower precision.

View File

@@ -87,7 +87,7 @@ machine with no Internet browser available.
Note that rclone runs a webserver on your local machine to collect the
token as returned from Box. This only runs from the moment it opens
your browser to the moment you get back the verification code. This
is on `http://127.0.0.1:53682/` and this may require you to unblock
is on `http://127.0.0.1:53682/` and this it may require you to unblock
it temporarily if you are running a host firewall.
Once configured you can then use `rclone` like this,

View File

@@ -5,84 +5,6 @@ description: "Rclone Changelog"
# Changelog
## v1.69.3 - 2025-05-21
[See commits](https://github.com/rclone/rclone/compare/v1.69.2...v1.69.3)
* Bug Fixes
* build: Reapply update github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 to fix CVE-2025-30204 (dependabot[bot])
* build: Update github.com/ebitengine/purego to work around bug in go1.24.3 (Nick Craig-Wood)
## v1.69.2 - 2025-05-01
[See commits](https://github.com/rclone/rclone/compare/v1.69.1...v1.69.2)
* Bug fixes
* accounting: Fix percentDiff calculation -- (Anagh Kumar Baranwal)
* build
* Update github.com/golang-jwt/jwt/v4 from 4.5.1 to 4.5.2 to fix CVE-2025-30204 (dependabot[bot])
* Update github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 to fix CVE-2025-30204 (dependabot[bot])
* Update golang.org/x/crypto to v0.35.0 to fix CVE-2025-22869 (Nick Craig-Wood)
* Update golang.org/x/net from 0.36.0 to 0.38.0 to fix CVE-2025-22870 (dependabot[bot])
* Update golang.org/x/net to 0.36.0. to fix CVE-2025-22869 (dependabot[bot])
* Stop building with go < go1.23 as security updates forbade it (Nick Craig-Wood)
* Fix docker plugin build (Anagh Kumar Baranwal)
* cmd: Fix crash if rclone is invoked without any arguments (Janne Hellsten)
* config: Read configuration passwords from stdin even when terminated with EOF (Samantha Bowen)
* doc fixes (Andrew Kreimer, Danny Garside, eccoisle, Ed Craig-Wood, emyarod, jack, Jugal Kishore, Markus Gerstel, Michael Kebe, Nick Craig-Wood, simonmcnair, simwai, Zachary Vorhies)
* fs: Fix corruption of SizeSuffix with "B" suffix in config (eg --min-size) (Nick Craig-Wood)
* lib/http: Fix race between Serve() and Shutdown() (Nick Craig-Wood)
* object: Fix memory object out of bounds Seek (Nick Craig-Wood)
* operations: Fix call fmt.Errorf with wrong err (alingse)
* rc
* Disable the metrics server when running `rclone rc` (hiddenmarten)
* Fix debug/* commands not being available over unix sockets (Nick Craig-Wood)
* serve nfs: Fix unlikely crash (Nick Craig-Wood)
* stats: Fix the speed not getting updated after a pause in the processing (Anagh Kumar Baranwal)
* sync
* Fix cpu spinning when empty directory finding with leading slashes (Nick Craig-Wood)
* Copy dir modtimes even when copyEmptySrcDirs is false (ll3006)
* VFS
* Fix directory cache serving stale data (Lorenz Brun)
* Fix inefficient directory caching when directory reads are slow (huanghaojun)
* Fix integration test failures (Nick Craig-Wood)
* Drive
* Metadata: fix error when setting copy-requires-writer-permission on a folder (Nick Craig-Wood)
* Dropbox
* Retry link without expiry (Dave Vasilevsky)
* HTTP
* Correct root if definitely pointing to a file (nielash)
* Iclouddrive
* Fix so created files are writable (Ben Alex)
* Onedrive
* Fix metadata ordering in permissions (Nick Craig-Wood)
## v1.69.1 - 2025-02-14
[See commits](https://github.com/rclone/rclone/compare/v1.69.0...v1.69.1)
* Bug Fixes
* lib/oauthutil: Fix redirect URL mismatch errors (Nick Craig-Wood)
* bisync: Fix listings missing concurrent modifications (nielash)
* serve s3: Fix list objects encoding-type (Nick Craig-Wood)
* fs: Fix confusing "didn't find section in config file" error (Nick Craig-Wood)
* doc fixes (Christoph Berger, Dimitri Papadopoulos, Matt Ickstadt, Nick Craig-Wood, Tim White, Zachary Vorhies)
* build: Added parallel docker builds and caching for go build in the container (Anagh Kumar Baranwal)
* VFS
* Fix the cache failing to upload symlinks when `--links` was specified (Nick Craig-Wood)
* Fix race detected by race detector (Nick Craig-Wood)
* Close the change notify channel on Shutdown (izouxv)
* B2
* Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
* Iclouddrive
* Add notes on ADP and Missing PCS cookies (Nick Craig-Wood)
* Onedrive
* Mark German (de) region as deprecated (Nick Craig-Wood)
* S3
* Added new storage class to magalu provider (Bruno Fernandes)
* Add DigitalOcean regions SFO2, LON1, TOR1, BLR1 (jkpe)
* Add latest Linode Object Storage endpoints (jbagwell-akamai)
## v1.69.0 - 2025-01-12
[See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.69.0)
@@ -112,7 +34,7 @@ description: "Rclone Changelog"
* fs: Make `--links` flag global and add new `--local-links` and `--vfs-links` flags (Nick Craig-Wood)
* http servers: Disable automatic authentication skipping for unix sockets in http servers (Moises Lima)
* This was making it impossible to use unix sockets with an proxy
* This might now cause rclone to need authentication where it didn't before
* This might now cause rclone to need authenticaton where it didn't before
* oauthutil: add support for OAuth client credential flow (Martin Hassack, Nick Craig-Wood)
* operations: make log messages consistent for mkdir/rmdir at INFO level (Nick Craig-Wood)
* rc: Add `relative` to [vfs/queue-set-expiry](/rc/#vfs-queue-set-expiry) (Nick Craig-Wood)
@@ -790,7 +712,7 @@ instead of of `--size-only`, when `check` is not available.
* Update all dependencies (Nick Craig-Wood)
* Refactor version info and icon resource handling on windows (albertony)
* doc updates (albertony, alfish2000, asdffdsazqqq, Dimitri Papadopoulos, Herby Gillot, Joda Stößer, Manoj Ghosh, Nick Craig-Wood)
* Implement `--metadata-mapper` to transform metadata with a user supplied program (Nick Craig-Wood)
* Implement `--metadata-mapper` to transform metatadata with a user supplied program (Nick Craig-Wood)
* Add `ChunkWriterDoesntSeek` feature flag and set it for b2 (Nick Craig-Wood)
* lib/http: Export basic go string functions for use in `--template` (Gabriel Espinoza)
* makefile: Use POSIX compatible install arguments (Mina Galić)
@@ -905,7 +827,7 @@ instead of of `--size-only`, when `check` is not available.
* Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
* B2
* Fix multipart upload: corrupted on transfer: sizes differ XXX vs 0 (Nick Craig-Wood)
* Fix locking window when getting multipart upload URL (Nick Craig-Wood)
* Fix locking window when getting mutipart upload URL (Nick Craig-Wood)
* Fix server side copies greater than 4GB (Nick Craig-Wood)
* Fix chunked streaming uploads (Nick Craig-Wood)
* Reduce default `--b2-upload-concurrency` to 4 to reduce memory usage (Nick Craig-Wood)

View File

@@ -965,7 +965,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.69.3")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-auth-redirect Preserve authentication on redirect

View File

@@ -14,18 +14,13 @@ Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by
rclone config.
The command requires 1-3 arguments:
- fs name (e.g., "drive", "s3", etc.)
- Either a base64 encoded JSON blob obtained from a previous rclone config session
- Or a client_id and client_secret pair obtained from the remote service
Use --auth-no-open-browser to prevent rclone to open auth
link in default browser automatically.
Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.
```
rclone authorize <fs name> [base64_json_blob | client_id client_secret] [flags]
rclone authorize [flags]
```
## Options

View File

@@ -21,12 +21,12 @@ password to re-encrypt the config.
When `--password-command` is called to change the password then the
environment variable `RCLONE_PASSWORD_CHANGE=1` will be set. So if
changing passwords programmatically you can use the environment
changing passwords programatically you can use the environment
variable to distinguish which password you must supply.
Alternatively you can remove the password first (with `rclone config
encryption remove`), then set it again with this command which may be
easier if you don't mind the unencrypted config file being on the disk
easier if you don't mind the unecrypted config file being on the disk
briefly.

View File

@@ -36,8 +36,6 @@ This doesn't transfer files that are identical on src and dst, testing
by size and modification time or MD5SUM. It doesn't delete files from
the destination.
*If you are looking to copy just a byte range of a file, please see 'rclone cat --offset X --count Y'*
**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics

View File

@@ -17,7 +17,7 @@ Setting `--auto-filename` will attempt to automatically determine the
filename from the URL (after any redirections) and used in the
destination path.
With `--header-filename` in addition, if a specific filename is
With `--auto-filename-header` in addition, if a specific filename is
set in HTTP headers, it will be used instead of the name from the URL.
With `--print-filename` in addition, the resulting file name will be
printed.
@@ -28,7 +28,7 @@ destination if there is one with the same name.
Setting `--stdout` or making the output file name `-`
will cause the output to be written to standard output.
## Troubleshooting
## Troublshooting
If you can't get `rclone copyurl` to work then here are some things you can try:

View File

@@ -571,11 +571,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant

View File

@@ -572,11 +572,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant

View File

@@ -15,9 +15,6 @@ include/exclude filters - everything will be removed. Use the
delete files. To delete empty directories only, use command
[rmdir](/commands/rclone_rmdir/) or [rmdirs](/commands/rclone_rmdirs/).
The concurrency of this operation is controlled by the `--checkers` global flag. However, some backends will
implement this command directly, in which case `--checkers` will be ignored.
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.

View File

@@ -134,11 +134,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant

View File

@@ -146,11 +146,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant

View File

@@ -127,11 +127,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant

View File

@@ -245,11 +245,11 @@ seconds. If rclone is quit or dies with files that haven't been
uploaded, these will be uploaded next time rclone is run with the same
flags.
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
If using `--vfs-cache-max-size` or `--vfs-cache-min-free-size` note
that the cache may exceed these quotas for two reasons. Firstly
because it is only checked every `--vfs-cache-poll-interval`. Secondly
because open files cannot be evicted from the cache. When
`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
`--vfs-cache-max-size` or `--vfs-cache-min-free-size` is exceeded,
rclone will attempt to evict the least accessed files from the cache
first. rclone will start with files that haven't been accessed for the
longest. This cache flushing strategy is efficient and more relevant

Some files were not shown because too many files have changed in this diff.