mirror of https://github.com/rclone/rclone.git synced 2026-01-01 16:13:35 +00:00

Compare commits


25 Commits

Author SHA1 Message Date
Nick Craig-Wood
1d1d847f18 union: change epff policy to search local disks first
It's always been random which remote epff/ff finds first. Make it so
that we search local disks first, which will save on network resources.

See: https://forum.rclone.org/t/rclone-union-no-longer-preferring-local-copies-windows/32002/3
2022-07-21 17:23:56 +01:00
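The ordering change above can be sketched in a few lines of Go. This is an illustrative reconstruction, not the actual union backend code: the `upstream` type and `orderLocalFirst` helper are hypothetical stand-ins showing how a stable sort puts local upstreams ahead of remote ones while preserving the configured order within each group.

```go
package main

import (
	"fmt"
	"sort"
)

// upstream is a hypothetical stand-in for a union-backend upstream.
type upstream struct {
	name    string
	isLocal bool // true for local disks, false for network remotes
}

// orderLocalFirst returns a copy of ups with local disks first, so an
// epff/ff search hits local storage before spending network resources.
// The stable sort keeps the original relative order within each group.
func orderLocalFirst(ups []upstream) []upstream {
	sorted := append([]upstream(nil), ups...)
	sort.SliceStable(sorted, func(i, j int) bool {
		return sorted[i].isLocal && !sorted[j].isLocal
	})
	return sorted
}

func main() {
	ups := []upstream{{"s3:", false}, {"/mnt/disk", true}, {"gdrive:", false}}
	for _, u := range orderLocalFirst(ups) {
		fmt.Println(u.name)
	}
	// prints /mnt/disk, then s3:, then gdrive:
}
```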
Nick Craig-Wood
7a24c173f6 build: disable revive linter pending a fix in golangci-lint
The revive linter got extremely slow in golangci-lint 1.47.1 causing
the CI to time out.

Disable for the time being until it is fixed.

See: https://github.com/golangci/golangci-lint/issues/2997
2022-07-20 23:07:20 +01:00
Nick Craig-Wood
fb60aeddae Add Jordi Gonzalez Muñoz to contributors 2022-07-20 23:07:02 +01:00
Nick Craig-Wood
695736d1e4 Add Steve Kowalik to contributors 2022-07-20 23:07:02 +01:00
albertony
f0396070eb sftp: fix issue with WS_FTP by working around failing RealPath 2022-07-20 18:07:50 +01:00
Jordi Gonzalez Muñoz
f1166757ba librclone: add PHP bindings and test program 2022-07-20 17:20:12 +01:00
Steve Kowalik
9b76434ad5 drive: make --drive-stop-on-upload-limit obey quota exceeded error
Extend the shouldRetry function by also checking for the quotaExceeded
reason, and since this function appeared to be untested, add a test case
for the existing errors and this new one.

Fixes #615
2022-07-20 10:37:34 +01:00
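The shape of the shouldRetry extension can be sketched as follows. This is a minimal illustration, not the real drive backend: the `apiError` type and `shouldStopUpload` function are hypothetical, showing how a 403 response is scanned for per-error reason strings so that "quotaExceeded" is now treated as a stop condition alongside the storage-quota reason.

```go
package main

import "fmt"

// apiError is a hypothetical stand-in for a Google API error, which
// carries an HTTP code plus one reason string per sub-error.
type apiError struct {
	Code    int
	Reasons []string
}

// shouldStopUpload reports whether --drive-stop-on-upload-limit style
// handling should kick in: a 403 whose reasons include a quota error.
func shouldStopUpload(err *apiError) bool {
	if err == nil || err.Code != 403 {
		return false
	}
	for _, reason := range err.Reasons {
		switch reason {
		case "storageQuotaExceeded", "quotaExceeded":
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(shouldStopUpload(&apiError{Code: 403, Reasons: []string{"quotaExceeded"}}))
	fmt.Println(shouldStopUpload(&apiError{Code: 403, Reasons: []string{"rateLimitExceeded"}}))
	// prints true, then false
}
```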
Nick Craig-Wood
440d0cd179 s3: fix --s3-no-head panic: reflect: Elem of invalid type s3.PutObjectInput
In

22abd785eb s3: implement reading and writing of metadata #111

The reading of object information was refactored to use the
s3.HeadObjectOutput structure.

Unfortunately the code branch with `--s3-no-head` was not tested,
otherwise this panic would have been discovered.

This shows that this path is not integration tested, so this adds a
new integration test.

Fixes #6322
2022-07-18 23:38:50 +01:00
Nick Craig-Wood
a047d30eca Add Yen Hu to contributors 2022-07-18 23:38:50 +01:00
Yen Hu
03d0f331f7 onedrive: rename Onedrive(cn) 21Vianet to Vnet Group
The old site has shown a redirect page to the new one since 2021-04-21.
https://www.21vianet.com
The official site has also been renamed to Vnet Group.
https://www.vnet.com/en/about
2022-07-17 17:07:23 +01:00
Lesmiscore
049674aeab backend/internetarchive: ignore checksums for files using the different method 2022-07-17 14:02:40 +01:00
Nick Craig-Wood
50f053cada dropbox: fix hang on quit with --dropbox-batch-mode off
This problem was created by the fact that we are much more diligent
about calling Shutdown now, and the dropbox backend's Shutdown method
hung when batch mode was "off".

See: https://forum.rclone.org/t/dropbox-lsjson-in-1-59-stuck-on-commiting-upload/31853
2022-07-17 12:51:44 +01:00
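The hang described above can be reconstructed with a small sketch. This is an illustrative model, not the actual dropbox backend: the `batcher` type and channel names are hypothetical. With batching off, the goroutine that would close the done channel is never started, so a Shutdown that unconditionally waits on it blocks forever; the fix is to return early when batching is disabled.

```go
package main

import (
	"fmt"
	"time"
)

// batcher is a hypothetical model of a batch uploader.
type batcher struct {
	mode   string        // "sync", "async" or "off"
	closed chan struct{} // closed by the batch goroutine when it exits
}

// Shutdown waits for the batch goroutine to drain, but returns
// immediately when batching is off, since that goroutine never ran
// and the channel would never be closed.
func (b *batcher) Shutdown() error {
	if b.mode == "off" {
		return nil // nothing to flush; waiting here would hang forever
	}
	select {
	case <-b.closed:
		return nil
	case <-time.After(time.Second):
		return fmt.Errorf("timed out waiting for batcher")
	}
}

func main() {
	b := &batcher{mode: "off", closed: make(chan struct{})}
	fmt.Println(b.Shutdown()) // <nil>
}
```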
Nick Craig-Wood
140af43c26 build: add 32 bit test runner to avoid problems like #6311 2022-07-14 20:13:03 +01:00
Nick Craig-Wood
f467188876 Add Evan Spensley to contributors 2022-07-14 20:13:03 +01:00
Evan Spensley
4a4379b312 jobs: add ability to stop group
Adds new rc call to stop all running jobs in a group. Fixes #5561
2022-07-13 18:13:31 +01:00
Nick Naumann
8c02fe7b89 sync: update docs and error messages to reflect fixes to overlap checks 2022-07-13 16:04:53 +01:00
Nick Naumann
11be920e90 sync: add filter-sensitivity to --backup-dir option
The old Overlapping function and corresponding tests have been removed, as it has been completely replaced by the OverlappingFilterCheck function.
2022-07-13 16:04:53 +01:00
albertony
8c19b355a5 docs: fix links to mount command from install docs 2022-07-13 12:33:54 +02:00
r-ricci
67fd60275a union: fix panic due to misalignment of struct field in 32 bit architectures
`FS.cacheExpiry` is accessed through sync/atomic.
According to the documentation, "On ARM, 386, and 32-bit MIPS, it is
the caller's responsibility to arrange for 64-bit alignment of 64-bit
words accessed atomically. The first word in a variable or in an
allocated struct, array, or slice can be relied upon to be 64-bit
aligned."
Before commit 1d2fe0d856 this field was
aligned, but then a new field was added to the structure, causing the
test suite to panic on linux/386.
No other field is used with sync/atomic, so `cacheExpiry` can just be
placed at the beginning of the struct to ensure it is always aligned.
2022-07-11 18:34:06 +01:00
Nick Craig-Wood
b310490fa5 union: fix multiple files being uploaded when roots don't exist
See: https://forum.rclone.org/t/union-backend-copying-to-all-remotes-while-it-shouldnt/31781
2022-07-11 18:19:36 +01:00
Nick Craig-Wood
0ee0812a2b union: fix duplicated files when using directories with leading /
See: https://forum.rclone.org/t/union-backend-copying-to-all-remotes-while-it-shouldnt/31781
2022-07-11 18:19:36 +01:00
Nick Craig-Wood
55bbff6346 operations: add --server-side-across-configs global flag for any backend 2022-07-11 18:17:42 +01:00
Nick Craig-Wood
9c6cfc1ff0 combine: throw error if duplicate directory name is specified
See: https://forum.rclone.org/t/v1-59-combine-qs/31814
2022-07-10 15:40:30 +01:00
Nick Craig-Wood
f753d7cd42 combine: fix docs showing remote= instead of upstreams=
See: https://forum.rclone.org/t/v1-59-combine-qs/31814
2022-07-10 15:34:48 +01:00
Nick Craig-Wood
f5be1d6b65 Start v1.60.0-DEV development 2022-07-09 20:43:17 +01:00
44 changed files with 631 additions and 716 deletions


@@ -25,7 +25,7 @@ jobs:
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.16', 'go1.17']
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.16', 'go1.17']
include:
- job_name: linux
@@ -39,6 +39,13 @@ jobs:
librclonetest: true
deploy: true
- job_name: linux_386
os: ubuntu-latest
go: '1.18.x'
goarch: 386
gotags: cmount
quicktest: true
- job_name: mac_amd64
os: macos-11
go: '1.18.x'
@@ -245,6 +252,10 @@ jobs:
with:
go-version: 1.18.x
# Upgrade together with Go version. Using a GitHub-provided version saves around 2 minutes.
- name: Force NDK version
run: echo "y" | sudo ${ANDROID_HOME}/tools/bin/sdkmanager --install "ndk;23.1.7779620" | grep -v = || true
- name: Go module cache
uses: actions/cache@v2
with:
@@ -267,29 +278,27 @@ jobs:
go install golang.org/x/mobile/cmd/gobind@latest
go install golang.org/x/mobile/cmd/gomobile@latest
env PATH=$PATH:~/go/bin gomobile init
echo "RCLONE_NDK_VERSION=21" >> $GITHUB_ENV
- name: arm-v7a gomobile build
run: env PATH=$PATH:~/go/bin gomobile bind -androidapi ${RCLONE_NDK_VERSION} -v -target=android/arm -javapkg=org.rclone -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} github.com/rclone/rclone/librclone/gomobile
run: env PATH=$PATH:~/go/bin gomobile bind -v -target=android/arm -javapkg=org.rclone -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} github.com/rclone/rclone/librclone/gomobile
- name: arm-v7a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC=$(echo $ANDROID_HOME/ndk/23.1.7779620/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi16-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=arm' >> $GITHUB_ENV
echo 'GOARM=7' >> $GITHUB_ENV
echo 'CGO_ENABLED=1' >> $GITHUB_ENV
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: arm-v7a build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv7a .
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-16-armv7a .
- name: arm64-v8a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC=$(echo $ANDROID_HOME/ndk/23.1.7779620/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android21-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=arm64' >> $GITHUB_ENV
@@ -297,12 +306,12 @@ jobs:
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: arm64-v8a build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv8a .
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-21-armv8a .
- name: x86 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC=$(echo $ANDROID_HOME/ndk/23.1.7779620/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android16-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=386' >> $GITHUB_ENV
@@ -310,12 +319,12 @@ jobs:
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: x86 build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-x86 .
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-16-x86 .
- name: x64 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC=$(echo $ANDROID_HOME/ndk/23.1.7779620/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android21-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=amd64' >> $GITHUB_ENV
@@ -323,7 +332,7 @@ jobs:
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: x64 build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-x64 .
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-21-x64 .
- name: Upload artifacts
run: |


@@ -4,7 +4,7 @@ linters:
enable:
- deadcode
- errcheck
#- goimports
- goimports
#- revive
- ineffassign
- structcheck

MANUAL.html generated

@@ -19,7 +19,7 @@
<header id="title-block-header">
<h1 class="title">rclone(1) User Manual</h1>
<p class="author">Nick Craig-Wood</p>
<p class="date">Aug 08, 2022</p>
<p class="date">Jul 09, 2022</p>
</header>
<h1 id="rclone-syncs-your-files-to-cloud-storage">Rclone syncs your files to cloud storage</h1>
<p><img width="50%" src="https://rclone.org/img/logo_on_light__horizontal_color.svg" alt="rclone logo" style="float:right; padding: 5px;" ></p>
@@ -300,7 +300,7 @@ go build</code></pre>
<p>Run the <a href="https://rclone.org/commands/rclone_config_paths/">config paths</a> command to see the locations that rclone will use.</p>
<p>To override them set the corresponding options (as command-line arguments, or as <a href="https://rclone.org/docs/#environment-variables">environment variables</a>): - <a href="https://rclone.org/docs/#config-config-file">--config</a> - <a href="https://rclone.org/docs/#cache-dir-dir">--cache-dir</a> - <a href="https://rclone.org/docs/#temp-dir-dir">--temp-dir</a></p>
<h2 id="autostart">Autostart</h2>
<p>After installing and configuring rclone, as described above, you are ready to use rclone as an interactive command line utility. If your goal is to perform <em>periodic</em> operations, such as a regular <a href="https://rclone.org/commands/rclone_sync/">sync</a>, you will probably want to configure your rclone command in your operating system's scheduler. If you need to expose <em>service</em>-like features, such as <a href="https://rclone.org/rc/">remote control</a>, <a href="https://rclone.org/gui/">GUI</a>, <a href="https://rclone.org/commands/rclone_serve/">serve</a> or <a href="https://rclone.org/commands/rclone_mount/">mount</a>, you will often want an rclone command always running in the background, and configuring it to run in a service infrastructure may be a better option. Below are some alternatives on how to achieve this on different operating systems.</p>
<p>After installing and configuring rclone, as described above, you are ready to use rclone as an interactive command line utility. If your goal is to perform <em>periodic</em> operations, such as a regular <a href="https://rclone.org/commands/rclone_sync/">sync</a>, you will probably want to configure your rclone command in your operating system's scheduler. If you need to expose <em>service</em>-like features, such as <a href="https://rclone.org/rc/">remote control</a>, <a href="https://rclone.org/gui/">GUI</a>, <a href="https://rclone.org/commands/rclone_serve/">serve</a> or <a href="https://rclone.org/commands/rclone_move/">mount</a>, you will often want an rclone command always running in the background, and configuring it to run in a service infrastructure may be a better option. Below are some alternatives on how to achieve this on different operating systems.</p>
<p>NOTE: Before setting up autorun it is highly recommended that you have tested your command manually from a Command Prompt first.</p>
<h3 id="autostart-on-windows">Autostart on Windows</h3>
<p>The most relevant alternatives for autostart on Windows are: - Run at user log on using the Startup folder - Run at user log on, at system startup or at schedule using Task Scheduler - Run at system startup using Windows service</p>
@@ -309,7 +309,7 @@ go build</code></pre>
<p>Example command to run a sync in background:</p>
<pre><code>c:\rclone\rclone.exe sync c:\files remote:/files --no-console --log-file c:\rclone\logs\sync_files.txt</code></pre>
<h4 id="user-account">User account</h4>
<p>As mentioned in the <a href="https://rclone.org/commands/rclone_mount/">mount</a> documentation, mounted drives created as Administrator are not visible to other accounts, not even the account that was elevated as Administrator. By running the mount command as the built-in <code>SYSTEM</code> user account, it will create drives accessible for everyone on the system. Both scheduled task and Windows service can be used to achieve this.</p>
<p>As mentioned in the <a href="https://rclone.org/commands/rclone_move/">mount</a> documentation, mounted drives created as Administrator are not visible to other accounts, not even the account that was elevated as Administrator. By running the mount command as the built-in <code>SYSTEM</code> user account, it will create drives accessible for everyone on the system. Both scheduled task and Windows service can be used to achieve this.</p>
<p>NOTE: Remember that when rclone runs as the <code>SYSTEM</code> user, the user profile that it sees will not be yours. This means that if you normally run rclone with configuration file in the default location, to be able to use the same configuration when running as the system user you must explicitely tell rclone where to find it with the <a href="https://rclone.org/docs/#config-config-file"><code>--config</code></a> option, or else it will look in the system users profile path (<code>C:\Windows\System32\config\systemprofile</code>). To test your command manually from a Command Prompt, you can run it with the <a href="https://docs.microsoft.com/en-us/sysinternals/downloads/psexec">PsExec</a> utility from Microsoft's Sysinternals suite, which takes option <code>-s</code> to execute commands as the <code>SYSTEM</code> user.</p>
<h4 id="start-from-startup-folder">Start from Startup folder</h4>
<p>To quickly execute an rclone command you can simply create a standard Windows Explorer shortcut for the complete rclone command you want to run. If you store this shortcut in the special "Startup" start-menu folder, Windows will automatically run it at login. To open this folder in Windows Explorer, enter path <code>%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup</code>, or <code>C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp</code> if you want the command to start for <em>every</em> user that logs in.</p>
@@ -469,7 +469,6 @@ destpath/sourcepath/two.txt</code></pre>
<p>Note that files in the destination won't be deleted if there were any errors at any point. Duplicate objects (files with the same name, on those providers that support it) are also not yet handled.</p>
<p>It is always the contents of the directory that is synced, not the directory itself. So when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the <a href="https://rclone.org/commands/rclone_copy/">copy</a> command if unsure.</p>
<p>If dest:path doesn't exist, it is created and the source:path contents go there.</p>
<p>It is not possible to sync overlapping remotes. However, you may exclude the destination from the sync with a filter rule or by putting an exclude-if-present file inside the destination directory and sync to a destination that is inside the source directory.</p>
<p><strong>Note</strong>: Use the <code>-P</code>/<code>--progress</code> flag to view real-time transfer statistics</p>
<p><strong>Note</strong>: Use the <code>rclone dedupe</code> command to deal with "Duplicate object/directory found in source/destination - ignoring" errors. See <a href="https://forum.rclone.org/t/sync-not-clearing-duplicates/14372">this forum post</a> for more info.</p>
<pre><code>rclone sync source:path dest:path [flags]</code></pre>
@@ -4237,7 +4236,7 @@ rclone sync -i /path/to/files remote:current-backup</code></pre>
<h3 id="backup-dirdir">--backup-dir=DIR</h3>
<p>When using <code>sync</code>, <code>copy</code> or <code>move</code> any files which would have been overwritten or deleted are moved in their original hierarchy into this directory.</p>
<p>If <code>--suffix</code> is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.</p>
<p>The remote in use must support server-side move or copy and you must use the same remote as the destination of the sync. The backup directory must not overlap the destination directory without it being excluded by a filter rule.</p>
<p>The remote in use must support server-side move or copy and you must use the same remote as the destination of the sync. The backup directory must not overlap the destination directory.</p>
<p>For example</p>
<pre><code>rclone sync -i /path/to/local remote:current --backup-dir remote:old</code></pre>
<p>will sync <code>/path/to/local</code> to <code>remote:current</code>, but for any files which would have been updated or deleted will be stored in <code>remote:old</code>.</p>
@@ -8379,7 +8378,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default &quot;rclone/v1.59.1&quot;)
--user-agent string Set the user-agent to a specified string (default &quot;rclone/v1.59.0&quot;)
-v, --verbose count Print lots more stuff (repeat for more)</code></pre>
<h2 id="backend-flags">Backend Flags</h2>
<p>These flags are available for every command. They control the backends and may be set in the config file.</p>
@@ -17157,7 +17156,7 @@ remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
[AllDrives]
type = combine
upstreams = &quot;My Drive=My Drive:&quot; &quot;Test Drive=Test Drive:&quot;</code></pre>
remote = &quot;My Drive=My Drive:&quot; &quot;Test Drive=Test Drive:&quot;</code></pre>
<p>If you then add that config to your config file (find it with <code>rclone config file</code>) then you can access all the shared drives in one place with the <code>AllDrives:</code> remote.</p>
<p>See <a href="https://rclone.org/drive/#drives">the Google Drive docs</a> for full info.</p>
<h3 id="standard-options-11">Standard options</h3>
@@ -19658,7 +19657,7 @@ remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
[AllDrives]
type = combine
upstreams = &quot;My Drive=My Drive:&quot; &quot;Test Drive=Test Drive:&quot;</code></pre>
remote = &quot;My Drive=My Drive:&quot; &quot;Test Drive=Test Drive:&quot;</code></pre>
<p>Adding this to the rclone config file will cause those team drives to be accessible with the aliases shown. Any illegal charactes will be substituted with "_" and duplicate names will have numbers suffixed. It will also add a remote called AllDrives which shows all the shared drives combined into one directory tree.</p>
<h3 id="untrash">untrash</h3>
<p>Untrash files and directories</p>
@@ -20953,8 +20952,8 @@ y/e/d&gt; y</code></pre>
<p>The Internet Archive backend utilizes Items on <a href="https://archive.org/">archive.org</a></p>
<p>Refer to <a href="https://archive.org/services/docs/api/ias3.html">IAS3 API documentation</a> for the API this backend uses.</p>
<p>Paths are specified as <code>remote:bucket</code> (or <code>remote:</code> for the <code>lsd</code> command.) You may put subdirectories in too, e.g. <code>remote:item/path/to/dir</code>.</p>
<p>Once you have made a remote (see the provider specific section above) you can use it like this:</p>
<p>Unlike S3, listing up all items uploaded by you isn't supported.</p>
<p>Once you have made a remote, you can use it like this:</p>
<p>Make a new item</p>
<pre><code>rclone mkdir remote:item</code></pre>
<p>List the contents of a item</p>
@@ -20966,7 +20965,7 @@ y/e/d&gt; y</code></pre>
<p>You can optionally wait for the server's processing to finish, by setting non-zero value to <code>wait_archive</code> key. By making it wait, rclone can do normal file comparison. Make sure to set a large enough value (e.g. <code>30m0s</code> for smaller files) as it can take a long time depending on server's queue.</p>
<h2 id="about-metadata">About metadata</h2>
<p>This backend supports setting, updating and reading metadata of each file. The metadata will appear as file metadata on Internet Archive. However, some fields are reserved by both Internet Archive and rclone.</p>
<p>The following are reserved by Internet Archive: - <code>name</code> - <code>source</code> - <code>size</code> - <code>md5</code> - <code>crc32</code> - <code>sha1</code> - <code>format</code> - <code>old_version</code> - <code>viruscheck</code> - <code>summation</code></p>
<p>The following are reserved by Internet Archive: - <code>name</code> - <code>source</code> - <code>size</code> - <code>md5</code> - <code>crc32</code> - <code>sha1</code> - <code>format</code> - <code>old_version</code> - <code>viruscheck</code></p>
<p>Trying to set values to these keys is ignored with a warning. Only setting <code>mtime</code> is an exception. Doing so make it the identical behavior as setting ModTime.</p>
<p>rclone reserves all the keys starting with <code>rclone-</code>. Setting value for these keys will give you warnings, but values are set according to request.</p>
<p>If there are multiple values for a key, only the first one is returned. This is a limitation of rclone, that supports one value per one key. It can be triggered when you did a server-side copy.</p>
@@ -21139,42 +21138,42 @@ y/e/d&gt; y</code></pre>
<td>CRC32 calculated by Internet Archive</td>
<td>string</td>
<td>01234567</td>
<td><strong>Y</strong></td>
<td>N</td>
</tr>
<tr class="even">
<td>format</td>
<td>Name of format identified by Internet Archive</td>
<td>string</td>
<td>Comma-Separated Values</td>
<td><strong>Y</strong></td>
<td>N</td>
</tr>
<tr class="odd">
<td>md5</td>
<td>MD5 hash calculated by Internet Archive</td>
<td>string</td>
<td>01234567012345670123456701234567</td>
<td><strong>Y</strong></td>
<td>N</td>
</tr>
<tr class="even">
<td>mtime</td>
<td>Time of last modification, managed by Rclone</td>
<td>RFC 3339</td>
<td>2006-01-02T15:04:05.999999999Z</td>
<td><strong>Y</strong></td>
<td>N</td>
</tr>
<tr class="odd">
<td>name</td>
<td>Full file path, without the bucket part</td>
<td>filename</td>
<td>backend/internetarchive/internetarchive.go</td>
<td><strong>Y</strong></td>
<td>N</td>
</tr>
<tr class="even">
<td>old_version</td>
<td>Whether the file was replaced and moved by keep-old-version flag</td>
<td>boolean</td>
<td>true</td>
<td><strong>Y</strong></td>
<td>N</td>
</tr>
<tr class="odd">
<td>rclone-ia-mtime</td>
@@ -21202,35 +21201,28 @@ y/e/d&gt; y</code></pre>
<td>SHA1 hash calculated by Internet Archive</td>
<td>string</td>
<td>0123456701234567012345670123456701234567</td>
<td><strong>Y</strong></td>
<td>N</td>
</tr>
<tr class="odd">
<td>size</td>
<td>File size in bytes</td>
<td>decimal number</td>
<td>123456</td>
<td><strong>Y</strong></td>
<td>N</td>
</tr>
<tr class="even">
<td>source</td>
<td>The source of the file</td>
<td>string</td>
<td>original</td>
<td><strong>Y</strong></td>
<td>N</td>
</tr>
<tr class="odd">
<td>summation</td>
<td>Check https://forum.rclone.org/t/31922 for how it is used</td>
<td>string</td>
<td>md5</td>
<td><strong>Y</strong></td>
</tr>
<tr class="even">
<td>viruscheck</td>
<td>The last time viruscheck process was run for the file (?)</td>
<td>unixtime</td>
<td>1654191352</td>
<td><strong>Y</strong></td>
<td>N</td>
</tr>
</tbody>
</table>
@@ -27765,60 +27757,6 @@ $ tree /tmp/b
<li>"error": return an error based on option value</li>
</ul>
<h1 id="changelog">Changelog</h1>
<h2 id="v1.59.1---2022-08-08">v1.59.1 - 2022-08-08</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.59.0...v1.59.1">See commits</a></p>
<ul>
<li>Bug Fixes
<ul>
<li>accounting: Fix panic in core/stats-reset with unknown group (Nick Craig-Wood)</li>
<li>build: Fix android build after GitHub actions change (Nick Craig-Wood)</li>
<li>dlna: Fix SOAP action header parsing (Joram Schrijver)</li>
<li>docs: Fix links to mount command from install docs (albertony)</li>
<li>dropox: Fix ChangeNotify was unable to decrypt errors (Nick Craig-Wood)</li>
<li>fs: Fix parsing of times and durations of the form "YYYY-MM-DD HH:MM:SS" (Nick Craig-Wood)</li>
<li>serve sftp: Fix checksum detection (Nick Craig-Wood)</li>
<li>sync: Add accidentally missed filter-sensitivity to --backup-dir option (Nick Naumann)</li>
</ul></li>
<li>Combine
<ul>
<li>Fix docs showing <code>remote=</code> instead of <code>upstreams=</code> (Nick Craig-Wood)</li>
<li>Throw error if duplicate directory name is specified (Nick Craig-Wood)</li>
<li>Fix errors with backends shutting down while in use (Nick Craig-Wood)</li>
</ul></li>
<li>Dropbox
<ul>
<li>Fix hang on quit with --dropbox-batch-mode off (Nick Craig-Wood)</li>
<li>Fix infinite loop on uploading a corrupted file (Nick Craig-Wood)</li>
</ul></li>
<li>Internetarchive
<ul>
<li>Ignore checksums for files using the different method (Lesmiscore)</li>
<li>Handle hash symbol in the middle of filename (Lesmiscore)</li>
</ul></li>
<li>Jottacloud
<ul>
<li>Fix working with whitelabel Elgiganten Cloud</li>
<li>Do not store username in config when using standard auth (albertony)</li>
</ul></li>
<li>Mega
<ul>
<li>Fix nil pointer exception when bad node received (Nick Craig-Wood)</li>
</ul></li>
<li>S3
<ul>
<li>Fix --s3-no-head panic: reflect: Elem of invalid type s3.PutObjectInput (Nick Craig-Wood)</li>
</ul></li>
<li>SFTP
<ul>
<li>Fix issue with WS_FTP by working around failing RealPath (albertony)</li>
</ul></li>
<li>Union
<ul>
<li>Fix duplicated files when using directories with leading / (Nick Craig-Wood)</li>
<li>Fix multiple files being uploaded when roots don't exist (Nick Craig-Wood)</li>
<li>Fix panic due to misalignment of struct field in 32 bit architectures (r-ricci)</li>
</ul></li>
</ul>
<h2 id="v1.59.0---2022-07-09">v1.59.0 - 2022-07-09</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.58.0...v1.59.0">See commits</a></p>
<ul>

MANUAL.md generated

@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
% Aug 08, 2022
% Jul 09, 2022
# Rclone syncs your files to cloud storage
@@ -506,7 +506,7 @@ such as a regular [sync](https://rclone.org/commands/rclone_sync/), you will pro
to configure your rclone command in your operating system's scheduler. If you need to
expose *service*-like features, such as [remote control](https://rclone.org/rc/),
[GUI](https://rclone.org/gui/), [serve](https://rclone.org/commands/rclone_serve/)
or [mount](https://rclone.org/commands/rclone_mount/), you will often want an rclone
or [mount](https://rclone.org/commands/rclone_move/), you will often want an rclone
command always running in the background, and configuring it to run in a service infrastructure
may be a better option. Below are some alternatives on how to achieve this on
different operating systems.
@@ -539,7 +539,7 @@ c:\rclone\rclone.exe sync c:\files remote:/files --no-console --log-file c:\rclo
#### User account
As mentioned in the [mount](https://rclone.org/commands/rclone_mount/) documentation,
As mentioned in the [mount](https://rclone.org/commands/rclone_move/) documentation,
mounted drives created as Administrator are not visible to other accounts, not even the
account that was elevated as Administrator. By running the mount command as the
built-in `SYSTEM` user account, it will create drives accessible for everyone on
@@ -897,11 +897,6 @@ extended explanation in the [copy](https://rclone.org/commands/rclone_copy/) com
If dest:path doesn't exist, it is created and the source:path contents
go there.
It is not possible to sync overlapping remotes. However, you may exclude
the destination from the sync with a filter rule or by putting an
exclude-if-present file inside the destination directory and sync to a
destination that is inside the source directory.
**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics
**Note**: Use the `rclone dedupe` command to deal with "Duplicate object/directory found in source/destination - ignoring" errors.
@@ -8740,8 +8735,7 @@ been added) in DIR, then it will be overwritten.
The remote in use must support server-side move or copy and you must
use the same remote as the destination of the sync. The backup
directory must not overlap the destination directory without it being
excluded by a filter rule.
directory must not overlap the destination directory.
For example
@@ -14342,7 +14336,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.59.1")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.59.0")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -25142,7 +25136,7 @@ This would produce something like this:
[AllDrives]
type = combine
upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
If you then add that config to your config file (find it with `rclone
config file`) then you can access all the shared drives in one place
@@ -28349,7 +28343,7 @@ drives found and a combined drive.
[AllDrives]
type = combine
upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
Adding this to the rclone config file will cause those team drives to
be accessible with the aliases shown. Any illegal charactes will be
@@ -30501,9 +30495,10 @@ Refer to [IAS3 API documentation](https://archive.org/services/docs/api/ias3.htm
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:item/path/to/dir`.
Unlike S3, listing up all items uploaded by you isn't supported.
Once you have made a remote (see the provider specific section above)
you can use it like this:
Once you have made a remote, you can use it like this:
Unlike S3, listing all the items you have uploaded isn't supported.
Make a new item
@@ -30541,7 +30536,6 @@ The following are reserved by Internet Archive:
- `format`
- `old_version`
- `viruscheck`
- `summation`
Trying to set values to these keys is ignored with a warning.
Only setting `mtime` is an exception. Doing so makes the behavior identical to setting ModTime.
@@ -30747,20 +30741,19 @@ Here are the possible system metadata items for the internetarchive backend.
| Name | Help | Type | Example | Read Only |
|------|------|------|---------|-----------|
| crc32 | CRC32 calculated by Internet Archive | string | 01234567 | **Y** |
| format | Name of format identified by Internet Archive | string | Comma-Separated Values | **Y** |
| md5 | MD5 hash calculated by Internet Archive | string | 01234567012345670123456701234567 | **Y** |
| mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | **Y** |
| name | Full file path, without the bucket part | filename | backend/internetarchive/internetarchive.go | **Y** |
| old_version | Whether the file was replaced and moved by keep-old-version flag | boolean | true | **Y** |
| crc32 | CRC32 calculated by Internet Archive | string | 01234567 | N |
| format | Name of format identified by Internet Archive | string | Comma-Separated Values | N |
| md5 | MD5 hash calculated by Internet Archive | string | 01234567012345670123456701234567 | N |
| mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
| name | Full file path, without the bucket part | filename | backend/internetarchive/internetarchive.go | N |
| old_version | Whether the file was replaced and moved by keep-old-version flag | boolean | true | N |
| rclone-ia-mtime | Time of last modification, managed by Internet Archive | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
| rclone-mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
| rclone-update-track | Random value used by Rclone for tracking changes inside Internet Archive | string | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | N |
| sha1 | SHA1 hash calculated by Internet Archive | string | 0123456701234567012345670123456701234567 | **Y** |
| size | File size in bytes | decimal number | 123456 | **Y** |
| source | The source of the file | string | original | **Y** |
| summation | Check https://forum.rclone.org/t/31922 for how it is used | string | md5 | **Y** |
| viruscheck | The last time viruscheck process was run for the file (?) | unixtime | 1654191352 | **Y** |
| sha1 | SHA1 hash calculated by Internet Archive | string | 0123456701234567012345670123456701234567 | N |
| size | File size in bytes | decimal number | 123456 | N |
| source | The source of the file | string | original | N |
| viruscheck | The last time viruscheck process was run for the file (?) | unixtime | 1654191352 | N |
See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
@@ -39420,43 +39413,6 @@ Options:
# Changelog
## v1.59.1 - 2022-08-08
[See commits](https://github.com/rclone/rclone/compare/v1.59.0...v1.59.1)
* Bug Fixes
* accounting: Fix panic in core/stats-reset with unknown group (Nick Craig-Wood)
* build: Fix android build after GitHub actions change (Nick Craig-Wood)
* dlna: Fix SOAP action header parsing (Joram Schrijver)
* docs: Fix links to mount command from install docs (albertony)
* dropbox: Fix ChangeNotify was unable to decrypt errors (Nick Craig-Wood)
* fs: Fix parsing of times and durations of the form "YYYY-MM-DD HH:MM:SS" (Nick Craig-Wood)
* serve sftp: Fix checksum detection (Nick Craig-Wood)
* sync: Add accidentally missed filter-sensitivity to --backup-dir option (Nick Naumann)
* Combine
* Fix docs showing `remote=` instead of `upstreams=` (Nick Craig-Wood)
* Throw error if duplicate directory name is specified (Nick Craig-Wood)
* Fix errors with backends shutting down while in use (Nick Craig-Wood)
* Dropbox
* Fix hang on quit with --dropbox-batch-mode off (Nick Craig-Wood)
* Fix infinite loop on uploading a corrupted file (Nick Craig-Wood)
* Internetarchive
* Ignore checksums for files using the different method (Lesmiscore)
* Handle hash symbol in the middle of filename (Lesmiscore)
* Jottacloud
* Fix working with whitelabel Elgiganten Cloud
* Do not store username in config when using standard auth (albertony)
* Mega
* Fix nil pointer exception when bad node received (Nick Craig-Wood)
* S3
* Fix --s3-no-head panic: reflect: Elem of invalid type s3.PutObjectInput (Nick Craig-Wood)
* SFTP
* Fix issue with WS_FTP by working around failing RealPath (albertony)
* Union
* Fix duplicated files when using directories with leading / (Nick Craig-Wood)
* Fix multiple files being uploaded when roots don't exist (Nick Craig-Wood)
* Fix panic due to misalignment of struct field in 32 bit architectures (r-ricci)
## v1.59.0 - 2022-07-09
[See commits](https://github.com/rclone/rclone/compare/v1.58.0...v1.59.0)

MANUAL.txt generated
View File

@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
Aug 08, 2022
Jul 09, 2022
Rclone syncs your files to cloud storage
@@ -857,11 +857,6 @@ extended explanation in the copy command if unsure.
If dest:path doesn't exist, it is created and the source:path contents
go there.
It is not possible to sync overlapping remotes. However, you may exclude
the destination from the sync with a filter rule or by putting an
exclude-if-present file inside the destination directory and sync to a
destination that is inside the source directory.
Note: Use the -P/--progress flag to view real-time transfer statistics
Note: Use the rclone dedupe command to deal with "Duplicate object/directory found in source/destination - ignoring" errors.
@@ -8337,8 +8332,7 @@ added) in DIR, then it will be overwritten.
The remote in use must support server-side move or copy and you must use
the same remote as the destination of the sync. The backup directory
must not overlap the destination directory without it being excluded by
a filter rule.
must not overlap the destination directory.
For example
@@ -13893,7 +13887,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.59.1")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.59.0")
-v, --verbose count Print lots more stuff (repeat for more)
Backend Flags
@@ -24550,7 +24544,7 @@ This would produce something like this:
[AllDrives]
type = combine
upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
If you then add that config to your config file (find it with
rclone config file) then you can access all the shared drives in one
@@ -27758,7 +27752,7 @@ found and a combined drive.
[AllDrives]
type = combine
upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
Adding this to the rclone config file will cause those team drives to be
accessible with the aliases shown. Any illegal characters will be
@@ -29892,9 +29886,10 @@ Refer to IAS3 API documentation for the API this backend uses.
Paths are specified as remote:bucket (or remote: for the lsd command.)
You may put subdirectories in too, e.g. remote:item/path/to/dir.
Unlike S3, listing all the items you have uploaded isn't supported.
Once you have made a remote (see the provider specific section above)
you can use it like this:
Once you have made a remote, you can use it like this:
Unlike S3, listing all the items you have uploaded isn't supported.
Make a new item
@@ -29934,7 +29929,7 @@ file. The metadata will appear as file metadata on Internet Archive.
However, some fields are reserved by both Internet Archive and rclone.
The following are reserved by Internet Archive: - name - source - size -
md5 - crc32 - sha1 - format - old_version - viruscheck - summation
md5 - crc32 - sha1 - format - old_version - viruscheck
Trying to set values to these keys is ignored with a warning. Only
setting mtime is an exception. Doing so makes the behavior identical to setting ModTime.
@@ -30145,52 +30140,65 @@ including them.
Here are the possible system metadata items for the internetarchive
backend.
--------------------------------------------------------------------------------------------------------------------------------------
Name Help Type Example Read Only
--------------------- ---------------------------------- ----------- -------------------------------------------- --------------------
crc32 CRC32 calculated by Internet string 01234567 Y
Archive
----------------------------------------------------------------------------------------------------------------------
Name Help Type Example Read Only
--------------------- ------------------ ----------- -------------------------------------------- --------------------
crc32 CRC32 calculated string 01234567 N
by Internet
Archive
format Name of format identified by string Comma-Separated Values Y
Internet Archive
format Name of format string Comma-Separated Values N
identified by
Internet Archive
md5 MD5 hash calculated by Internet string 01234567012345670123456701234567 Y
Archive
md5 MD5 hash string 01234567012345670123456701234567 N
calculated by
Internet Archive
mtime Time of last modification, managed RFC 3339 2006-01-02T15:04:05.999999999Z Y
by Rclone
mtime Time of last RFC 3339 2006-01-02T15:04:05.999999999Z N
modification,
managed by Rclone
name Full file path, without the bucket filename backend/internetarchive/internetarchive.go Y
part
name Full file path, filename backend/internetarchive/internetarchive.go N
without the bucket
part
old_version Whether the file was replaced and boolean true Y
moved by keep-old-version flag
old_version Whether the file boolean true N
was replaced and
moved by
keep-old-version
flag
rclone-ia-mtime Time of last modification, managed RFC 3339 2006-01-02T15:04:05.999999999Z N
by Internet Archive
rclone-ia-mtime Time of last RFC 3339 2006-01-02T15:04:05.999999999Z N
modification,
managed by
Internet Archive
rclone-mtime Time of last modification, managed RFC 3339 2006-01-02T15:04:05.999999999Z N
by Rclone
rclone-mtime Time of last RFC 3339 2006-01-02T15:04:05.999999999Z N
modification,
managed by Rclone
rclone-update-track Random value used by Rclone for string aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa N
tracking changes inside Internet
Archive
rclone-update-track Random value used string aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa N
by Rclone for
tracking changes
inside Internet
Archive
sha1 SHA1 hash calculated by Internet string 0123456701234567012345670123456701234567 Y
Archive
sha1 SHA1 hash string 0123456701234567012345670123456701234567 N
calculated by
Internet Archive
size File size in bytes decimal 123456 Y
number
size File size in bytes decimal 123456 N
number
source The source of the file string original Y
source The source of the string original N
file
summation Check string md5 Y
https://forum.rclone.org/t/31922
for how it is used
viruscheck The last time viruscheck process unixtime 1654191352 Y
was run for the file (?)
--------------------------------------------------------------------------------------------------------------------------------------
viruscheck The last time unixtime 1654191352 N
viruscheck process
was run for the
file (?)
----------------------------------------------------------------------------------------------------------------------
See the metadata docs for more info.
@@ -38931,59 +38939,6 @@ Options:
Changelog
v1.59.1 - 2022-08-08
See commits
- Bug Fixes
- accounting: Fix panic in core/stats-reset with unknown group
(Nick Craig-Wood)
- build: Fix android build after GitHub actions change (Nick
Craig-Wood)
- dlna: Fix SOAP action header parsing (Joram Schrijver)
- docs: Fix links to mount command from install docs (albertony)
- dropbox: Fix ChangeNotify was unable to decrypt errors (Nick
Craig-Wood)
- fs: Fix parsing of times and durations of the form "YYYY-MM-DD
HH:MM:SS" (Nick Craig-Wood)
- serve sftp: Fix checksum detection (Nick Craig-Wood)
- sync: Add accidentally missed filter-sensitivity to --backup-dir
option (Nick Naumann)
- Combine
- Fix docs showing remote= instead of upstreams= (Nick Craig-Wood)
- Throw error if duplicate directory name is specified (Nick
Craig-Wood)
- Fix errors with backends shutting down while in use (Nick
Craig-Wood)
- Dropbox
- Fix hang on quit with --dropbox-batch-mode off (Nick Craig-Wood)
- Fix infinite loop on uploading a corrupted file (Nick
Craig-Wood)
- Internetarchive
- Ignore checksums for files using the different method
(Lesmiscore)
- Handle hash symbol in the middle of filename (Lesmiscore)
- Jottacloud
- Fix working with whitelabel Elgiganten Cloud
- Do not store username in config when using standard auth
(albertony)
- Mega
- Fix nil pointer exception when bad node received (Nick
Craig-Wood)
- S3
- Fix --s3-no-head panic: reflect: Elem of invalid type
s3.PutObjectInput (Nick Craig-Wood)
- SFTP
- Fix issue with WS_FTP by working around failing RealPath
(albertony)
- Union
- Fix duplicated files when using directories with leading / (Nick
Craig-Wood)
- Fix multiple files being uploaded when roots don't exist (Nick
Craig-Wood)
- Fix panic due to misalignment of struct field in 32 bit
architectures (r-ricci)
v1.59.0 - 2022-07-09
See commits

View File

@@ -1 +1 @@
v1.59.1
v1.60.0

View File

@@ -145,7 +145,6 @@ func (f *Fs) newUpstream(ctx context.Context, dir, remote string) (*upstream, er
dir: dir,
pathAdjustment: newAdjustment(f.root, dir),
}
cache.PinUntilFinalized(u.f, u)
return u, nil
}

View File

@@ -758,6 +758,9 @@ func (f *Fs) shouldRetry(ctx context.Context, err error) (bool, error) {
} else if f.opt.StopOnDownloadLimit && reason == "downloadQuotaExceeded" {
fs.Errorf(f, "Received download limit error: %v", err)
return false, fserrors.FatalError(err)
} else if f.opt.StopOnUploadLimit && reason == "quotaExceeded" {
fs.Errorf(f, "Received upload limit error: %v", err)
return false, fserrors.FatalError(err)
} else if f.opt.StopOnUploadLimit && reason == "teamDriveFileLimitExceeded" {
fs.Errorf(f, "Received Shared Drive file limit error: %v", err)
return false, fserrors.FatalError(err)

View File

@@ -19,6 +19,7 @@ import (
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/filter"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/sync"
@@ -28,6 +29,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/api/drive/v3"
"google.golang.org/api/googleapi"
)
func TestDriveScopes(t *testing.T) {
@@ -190,6 +192,60 @@ func TestExtensionsForImportFormats(t *testing.T) {
}
}
func (f *Fs) InternalTestShouldRetry(t *testing.T) {
ctx := context.Background()
gatewayTimeout := googleapi.Error{
Code: 503,
}
timeoutRetry, timeoutError := f.shouldRetry(ctx, &gatewayTimeout)
assert.True(t, timeoutRetry)
assert.Equal(t, &gatewayTimeout, timeoutError)
generic403 := googleapi.Error{
Code: 403,
}
rLEItem := googleapi.ErrorItem{
Reason: "rateLimitExceeded",
Message: "User rate limit exceeded.",
}
generic403.Errors = append(generic403.Errors, rLEItem)
oldStopUpload := f.opt.StopOnUploadLimit
oldStopDownload := f.opt.StopOnDownloadLimit
f.opt.StopOnUploadLimit = true
f.opt.StopOnDownloadLimit = true
defer func() {
f.opt.StopOnUploadLimit = oldStopUpload
f.opt.StopOnDownloadLimit = oldStopDownload
}()
expectedRLError := fserrors.FatalError(&generic403)
rateLimitRetry, rateLimitErr := f.shouldRetry(ctx, &generic403)
assert.False(t, rateLimitRetry)
assert.Equal(t, rateLimitErr, expectedRLError)
dQEItem := googleapi.ErrorItem{
Reason: "downloadQuotaExceeded",
}
generic403.Errors[0] = dQEItem
expectedDQError := fserrors.FatalError(&generic403)
downloadQuotaRetry, downloadQuotaError := f.shouldRetry(ctx, &generic403)
assert.False(t, downloadQuotaRetry)
assert.Equal(t, downloadQuotaError, expectedDQError)
tDFLEItem := googleapi.ErrorItem{
Reason: "teamDriveFileLimitExceeded",
}
generic403.Errors[0] = tDFLEItem
expectedTDFLError := fserrors.FatalError(&generic403)
teamDriveFileLimitRetry, teamDriveFileLimitError := f.shouldRetry(ctx, &generic403)
assert.False(t, teamDriveFileLimitRetry)
assert.Equal(t, teamDriveFileLimitError, expectedTDFLError)
qEItem := googleapi.ErrorItem{
Reason: "quotaExceeded",
}
generic403.Errors[0] = qEItem
expectedQuotaError := fserrors.FatalError(&generic403)
quotaExceededRetry, quotaExceededError := f.shouldRetry(ctx, &generic403)
assert.False(t, quotaExceededRetry)
assert.Equal(t, quotaExceededError, expectedQuotaError)
}
func (f *Fs) InternalTestDocumentImport(t *testing.T) {
oldAllow := f.opt.AllowImportNameChange
f.opt.AllowImportNameChange = true
@@ -545,6 +601,7 @@ func (f *Fs) InternalTest(t *testing.T) {
t.Run("UnTrash", f.InternalTestUnTrash)
t.Run("CopyID", f.InternalTestCopyID)
t.Run("AgeQuery", f.InternalTestAgeQuery)
t.Run("ShouldRetry", f.InternalTestShouldRetry)
}
var _ fstests.InternalTester = (*Fs)(nil)

View File

@@ -1435,7 +1435,7 @@ func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.
}
if entryPath != "" {
notifyFunc(f.opt.Enc.ToStandardPath(entryPath), entryType)
notifyFunc(entryPath, entryType)
}
}
if !changeList.HasMore {
@@ -1697,9 +1697,6 @@ func (o *Object) uploadChunked(ctx context.Context, in0 io.Reader, commitInfo *f
if size > 0 {
// if size is known, check if next chunk is final
appendArg.Close = uint64(size)-in.BytesRead() <= uint64(chunkSize)
if in.BytesRead() > uint64(size) {
return nil, fmt.Errorf("expected %d bytes in input, but have read %d so far", size, in.BytesRead())
}
} else {
// if size is unknown, upload as long as we can read full chunks from the reader
appendArg.Close = in.BytesRead()-cursor.Offset < uint64(chunkSize)

View File

@@ -572,7 +572,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
return "", err
}
bucket, bucketPath := f.split(remote)
return path.Join(f.opt.FrontEndpoint, "/download/", bucket, quotePath(bucketPath)), nil
return path.Join(f.opt.FrontEndpoint, "/download/", bucket, bucketPath), nil
}
// Copy src to this remote using server-side copy operations.
@@ -760,7 +760,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// make a GET request to (frontend)/download/:item/:path
opts := rest.Opts{
Method: "GET",
Path: path.Join("/download/", o.fs.root, quotePath(o.fs.opt.Enc.FromStandardPath(o.remote))),
Path: path.Join("/download/", o.fs.root, o.fs.opt.Enc.FromStandardPath(o.remote)),
Options: optionsFixed,
}
err = o.fs.pacer.Call(func() (bool, error) {

View File

@@ -152,7 +152,7 @@ func Config(ctx context.Context, name string, m configmap.Mapper, config fs.Conf
m.Set(configClientSecret, "")
srv := rest.NewClient(fshttp.NewClient(ctx))
token, tokenEndpoint, err := doTokenAuth(ctx, srv, loginToken)
token, tokenEndpoint, username, err := doTokenAuth(ctx, srv, loginToken)
if err != nil {
return nil, fmt.Errorf("failed to get oauth token: %w", err)
}
@@ -161,6 +161,7 @@ func Config(ctx context.Context, name string, m configmap.Mapper, config fs.Conf
if err != nil {
return nil, fmt.Errorf("error while saving token: %w", err)
}
m.Set(configUsername, username)
return fs.ConfigGoto("choose_device")
case "legacy": // configure a jottacloud backend using legacy authentication
m.Set("configVersion", fmt.Sprint(legacyConfigVersion))
@@ -271,21 +272,30 @@ sync or the backup section, for example, you must choose yes.`)
if config.Result != "true" {
m.Set(configDevice, "")
m.Set(configMountpoint, "")
}
username, userOk := m.Get(configUsername)
if userOk && config.Result != "true" {
return fs.ConfigGoto("end")
}
oAuthClient, _, err := getOAuthClient(ctx, name, m)
if err != nil {
return nil, err
}
jfsSrv := rest.NewClient(oAuthClient).SetRoot(jfsURL)
apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL)
cust, err := getCustomerInfo(ctx, apiSrv)
if err != nil {
return nil, err
if !userOk {
apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL)
cust, err := getCustomerInfo(ctx, apiSrv)
if err != nil {
return nil, err
}
username = cust.Username
m.Set(configUsername, username)
if config.Result != "true" {
return fs.ConfigGoto("end")
}
}
acc, err := getDriveInfo(ctx, jfsSrv, cust.Username)
jfsSrv := rest.NewClient(oAuthClient).SetRoot(jfsURL)
acc, err := getDriveInfo(ctx, jfsSrv, username)
if err != nil {
return nil, err
}
@@ -316,14 +326,10 @@ a new by entering a unique name.`, defaultDevice)
return nil, err
}
jfsSrv := rest.NewClient(oAuthClient).SetRoot(jfsURL)
apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL)
cust, err := getCustomerInfo(ctx, apiSrv)
if err != nil {
return nil, err
}
username, _ := m.Get(configUsername)
acc, err := getDriveInfo(ctx, jfsSrv, cust.Username)
acc, err := getDriveInfo(ctx, jfsSrv, username)
if err != nil {
return nil, err
}
@@ -338,7 +344,7 @@ a new by entering a unique name.`, defaultDevice)
var dev *api.JottaDevice
if isNew {
fs.Debugf(nil, "Creating new device: %s", device)
dev, err = createDevice(ctx, jfsSrv, path.Join(cust.Username, device))
dev, err = createDevice(ctx, jfsSrv, path.Join(username, device))
if err != nil {
return nil, err
}
@@ -346,7 +352,7 @@ a new by entering a unique name.`, defaultDevice)
m.Set(configDevice, device)
if !isNew {
dev, err = getDeviceInfo(ctx, jfsSrv, path.Join(cust.Username, device))
dev, err = getDeviceInfo(ctx, jfsSrv, path.Join(username, device))
if err != nil {
return nil, err
}
@@ -376,16 +382,11 @@ You may create a new by entering a unique name.`, device)
return nil, err
}
jfsSrv := rest.NewClient(oAuthClient).SetRoot(jfsURL)
apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL)
cust, err := getCustomerInfo(ctx, apiSrv)
if err != nil {
return nil, err
}
username, _ := m.Get(configUsername)
device, _ := m.Get(configDevice)
dev, err := getDeviceInfo(ctx, jfsSrv, path.Join(cust.Username, device))
dev, err := getDeviceInfo(ctx, jfsSrv, path.Join(username, device))
if err != nil {
return nil, err
}
@@ -403,7 +404,7 @@ You may create a new by entering a unique name.`, device)
return nil, fmt.Errorf("custom mountpoints not supported on built-in %s device: %w", defaultDevice, err)
}
fs.Debugf(nil, "Creating new mountpoint: %s", mountpoint)
_, err := createMountPoint(ctx, jfsSrv, path.Join(cust.Username, device, mountpoint))
_, err := createMountPoint(ctx, jfsSrv, path.Join(username, device, mountpoint))
if err != nil {
return nil, err
}
@@ -590,10 +591,10 @@ func doLegacyAuth(ctx context.Context, srv *rest.Client, oauthConfig *oauth2.Con
}
// doTokenAuth runs the actual token request for V2 authentication
func doTokenAuth(ctx context.Context, apiSrv *rest.Client, loginTokenBase64 string) (token oauth2.Token, tokenEndpoint string, err error) {
func doTokenAuth(ctx context.Context, apiSrv *rest.Client, loginTokenBase64 string) (token oauth2.Token, tokenEndpoint string, username string, err error) {
loginTokenBytes, err := base64.RawURLEncoding.DecodeString(loginTokenBase64)
if err != nil {
return token, "", err
return token, "", "", err
}
// decode login token
@@ -601,7 +602,7 @@ func doTokenAuth(ctx context.Context, apiSrv *rest.Client, loginTokenBase64 stri
decoder := json.NewDecoder(bytes.NewReader(loginTokenBytes))
err = decoder.Decode(&loginToken)
if err != nil {
return token, "", err
return token, "", "", err
}
// retrieve endpoint urls
@@ -612,7 +613,7 @@ func doTokenAuth(ctx context.Context, apiSrv *rest.Client, loginTokenBase64 stri
var wellKnown api.WellKnown
_, err = apiSrv.CallJSON(ctx, &opts, nil, &wellKnown)
if err != nil {
return token, "", err
return token, "", "", err
}
// prepare out token request with username and password
@@ -634,14 +635,14 @@ func doTokenAuth(ctx context.Context, apiSrv *rest.Client, loginTokenBase64 stri
var jsonToken api.TokenJSON
_, err = apiSrv.CallJSON(ctx, &opts, nil, &jsonToken)
if err != nil {
return token, "", err
return token, "", "", err
}
token.AccessToken = jsonToken.AccessToken
token.RefreshToken = jsonToken.RefreshToken
token.TokenType = jsonToken.TokenType
token.Expiry = time.Now().Add(time.Duration(jsonToken.ExpiresIn) * time.Second)
return token, wellKnown.TokenEndpoint, err
return token, wellKnown.TokenEndpoint, loginToken.Username, err
}
// getCustomerInfo queries general information about the account
@@ -943,11 +944,17 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return err
})
cust, err := getCustomerInfo(ctx, f.apiSrv)
if err != nil {
return nil, err
user, userOk := m.Get(configUsername)
if userOk {
f.user = user
} else {
fs.Infof(nil, "Username not found in config and must be looked up, reconfigure to avoid the extra request")
cust, err := getCustomerInfo(ctx, f.apiSrv)
if err != nil {
return nil, err
}
f.user = cust.Username
}
f.user = cust.Username
f.setEndpoints()
if root != "" && !rootIsDir {

View File

@@ -118,7 +118,7 @@ func init() {
Help: "Microsoft Cloud Germany",
}, {
Value: regionCN,
Help: "Azure and Office 365 operated by 21Vianet in China",
Help: "Azure and Office 365 operated by Vnet Group in China",
},
},
}, {
@@ -2184,7 +2184,7 @@ func (o *Object) ID() string {
* 3. To avoid region-related issues, please don't manually build rest.Opts from scratch.
* Instead, use these helper function, and customize the URL afterwards if needed.
*
* currently, the 21ViaNet's API differs in the following places:
* currently, the Vnet Group's API differs in the following places:
* - https://{Endpoint}/drives/{driveID}/items/{leaf}:/{route}
* - this API doesn't work (gives invalid request)
* - can be replaced with the following API:
@@ -2233,7 +2233,7 @@ func escapeSingleQuote(str string) string {
// newOptsCallWithIDPath build the rest.Opts structure with *a normalizedID (driveID#fileID, or simply fileID) and leaf*
// using url template https://{Endpoint}/drives/{driveID}/items/{leaf}:/{route} (for international OneDrive)
// or https://{Endpoint}/drives/{driveID}/items/children('{leaf}')/{route}
// and https://{Endpoint}/drives/{driveID}/items/children('@a1')/{route}?@a1=URLEncode("'{leaf}'") (for 21ViaNet)
// and https://{Endpoint}/drives/{driveID}/items/children('@a1')/{route}?@a1=URLEncode("'{leaf}'") (for Vnet Group)
// if isPath is false, this function will only work when the leaf is "" or a child name (i.e. it doesn't accept multi-level leaf)
// if isPath is true, multi-level leaf like a/b/c can be passed
func (f *Fs) newOptsCallWithIDPath(normalizedID string, leaf string, isPath bool, method string, route string) (opts rest.Opts, ok bool) {

View File

@@ -16,11 +16,14 @@ func init() {
// Given the order of the candidates, act on the first one found where the relative path exists.
type EpFF struct{}
func (p *EpFF) epff(ctx context.Context, upstreams []*upstream.Fs, filePath string) (*upstream.Fs, error) {
func (p *EpFF) epffIsLocal(ctx context.Context, upstreams []*upstream.Fs, filePath string, isLocal bool) (*upstream.Fs, error) {
ch := make(chan *upstream.Fs, len(upstreams))
ctx, cancel := context.WithCancel(ctx)
defer cancel()
for _, u := range upstreams {
if u.IsLocal() != isLocal {
continue
}
u := u // Closure
go func() {
rfs := u.RootFs
@@ -32,7 +35,10 @@ func (p *EpFF) epff(ctx context.Context, upstreams []*upstream.Fs, filePath stri
}()
}
var u *upstream.Fs
for range upstreams {
for _, upstream := range upstreams {
if upstream.IsLocal() != isLocal {
continue
}
u = <-ch
if u != nil {
break
@@ -44,6 +50,15 @@ func (p *EpFF) epff(ctx context.Context, upstreams []*upstream.Fs, filePath stri
return u, nil
}
func (p *EpFF) epff(ctx context.Context, upstreams []*upstream.Fs, filePath string) (*upstream.Fs, error) {
// search local disks first
u, err := p.epffIsLocal(ctx, upstreams, filePath, true)
if err == fs.ErrorObjectNotFound {
u, err = p.epffIsLocal(ctx, upstreams, filePath, false)
}
return u, err
}
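The new `epff` above is a two-pass search: try every local upstream first, and only fall back to remote ones if the object was not found. Stripped of rclone's types and concurrency, the policy amounts to this sketch (the `Upstream` struct here is hypothetical):

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("object not found")

// Upstream is a hypothetical stand-in for rclone's upstream.Fs.
type Upstream struct {
	Name  string
	Local bool
	Files map[string]bool
}

// findIn returns the first upstream in the given class (local or
// remote) that contains the path, mirroring epffIsLocal.
func findIn(ups []*Upstream, path string, local bool) (*Upstream, error) {
	for _, u := range ups {
		if u.Local != local {
			continue
		}
		if u.Files[path] {
			return u, nil
		}
	}
	return nil, errNotFound
}

// find searches local upstreams first, then remote ones, like epff.
func find(ups []*Upstream, path string) (*Upstream, error) {
	u, err := findIn(ups, path, true)
	if errors.Is(err, errNotFound) {
		u, err = findIn(ups, path, false)
	}
	return u, err
}

func main() {
	ups := []*Upstream{
		{Name: "s3", Local: false, Files: map[string]bool{"a.txt": true}},
		{Name: "disk", Local: true, Files: map[string]bool{"a.txt": true}},
	}
	u, err := find(ups, "a.txt")
	if err != nil {
		panic(err)
	}
	fmt.Println(u.Name) // the local disk wins even though s3 is listed first
}
```

This makes the previously racy "first found" result deterministic in the common case: when a file exists both locally and remotely, the local copy is preferred and no network round trip is spent.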
// Action category policy, governing the modification of files and directories
func (p *EpFF) Action(ctx context.Context, upstreams []*upstream.Fs, path string) ([]*upstream.Fs, error) {
if len(upstreams) == 0 {

View File

@@ -34,6 +34,7 @@ type Fs struct {
Opt *common.Options
writable bool
creatable bool
isLocal bool
usage *fs.Usage // Cache the usage
cacheTime time.Duration // cache duration
cacheMutex sync.RWMutex
@@ -95,6 +96,7 @@ func New(ctx context.Context, remote, root string, opt *common.Options) (*Fs, er
return nil, err
}
f.RootFs = rFs
f.isLocal = rFs.Features().IsLocal
rootString := fspath.JoinRootPath(remote, root)
myFs, err := cache.Get(ctx, rootString)
if err != nil && err != fs.ErrorIsFile {
@@ -142,6 +144,11 @@ func (f *Fs) WrapEntry(e fs.DirEntry) (Entry, error) {
}
}
// IsLocal true if the upstream Fs is a local disk
func (f *Fs) IsLocal() bool {
return f.isLocal
}
// UpstreamFs get the upstream Fs the entry is stored in
func (e *Directory) UpstreamFs() *Fs {
return e.f

View File

@@ -186,7 +186,7 @@ func (s *server) rootDescHandler(w http.ResponseWriter, r *http.Request) {
// Handle a service control HTTP request.
func (s *server) serviceControlHandler(w http.ResponseWriter, r *http.Request) {
soapActionString := r.Header.Get("SOAPACTION")
soapAction, err := upnp.ParseActionHTTPHeader(soapActionString)
soapAction, err := parseActionHTTPHeader(soapActionString)
if err != nil {
serveError(s, w, "Could not parse SOAPACTION header", err)
return

View File

@@ -119,8 +119,6 @@ func TestContentDirectoryBrowseMetadata(t *testing.T) {
assert.Equal(t, http.StatusOK, resp.StatusCode)
body, err := ioutil.ReadAll(resp.Body)
require.NoError(t, err)
// should contain an appropriate URN
require.Contains(t, string(body), "urn:schemas-upnp-org:service:ContentDirectory:1")
// expect a <container> element
require.Contains(t, string(body), html.EscapeString("<container "))
require.NotContains(t, string(body), html.EscapeString("<item "))

View File

@@ -3,6 +3,7 @@ package dlna
import (
"crypto/md5"
"encoding/xml"
"errors"
"fmt"
"io"
"log"
@@ -11,6 +12,9 @@ import (
"net/http/httptest"
"net/http/httputil"
"os"
"regexp"
"strconv"
"strings"
"github.com/anacrolix/dms/soap"
"github.com/anacrolix/dms/upnp"
@@ -85,6 +89,36 @@ func marshalSOAPResponse(sa upnp.SoapAction, args map[string]string) []byte {
sa.Action, sa.ServiceURN.String(), mustMarshalXML(soapArgs)))
}
var serviceURNRegexp = regexp.MustCompile(`:service:(\w+):(\d+)$`)
func parseServiceType(s string) (ret upnp.ServiceURN, err error) {
matches := serviceURNRegexp.FindStringSubmatch(s)
if matches == nil {
err = errors.New(s)
return
}
if len(matches) != 3 {
log.Panicf("Invalid serviceURNRegexp ?")
}
ret.Type = matches[1]
ret.Version, err = strconv.ParseUint(matches[2], 0, 0)
return
}
func parseActionHTTPHeader(s string) (ret upnp.SoapAction, err error) {
if s[0] != '"' || s[len(s)-1] != '"' {
return
}
s = s[1 : len(s)-1]
hashIndex := strings.LastIndex(s, "#")
if hashIndex == -1 {
return
}
ret.Action = s[hashIndex+1:]
ret.ServiceURN, err = parseServiceType(s[:hashIndex])
return
}
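A SOAPACTION header is a quoted string of the form `"urn:…:service:Type:Version#Action"`; the replacement parser above splits on the last `#` and matches the service URN with a regexp. A self-contained sketch of the same parse, using plain return values instead of the upnp package's types:

```go
package main

import (
	"errors"
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

var serviceURNRe = regexp.MustCompile(`:service:(\w+):(\d+)$`)

// parseSoapAction splits a quoted SOAPACTION value such as
// `"urn:schemas-upnp-org:service:ContentDirectory:1#Browse"` into
// service type, version and action name.
func parseSoapAction(s string) (typ string, version uint64, action string, err error) {
	if len(s) < 2 || s[0] != '"' || s[len(s)-1] != '"' {
		return "", 0, "", errors.New("SOAPACTION not quoted")
	}
	s = s[1 : len(s)-1]
	hash := strings.LastIndex(s, "#")
	if hash == -1 {
		return "", 0, "", errors.New("missing # separator")
	}
	action = s[hash+1:]
	m := serviceURNRe.FindStringSubmatch(s[:hash])
	if m == nil {
		return "", 0, "", errors.New("bad service URN")
	}
	version, err = strconv.ParseUint(m[2], 10, 64)
	return m[1], version, action, err
}

func main() {
	typ, ver, action, err := parseSoapAction(`"urn:schemas-upnp-org:service:ContentDirectory:1#Browse"`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s v%d %s\n", typ, ver, action) // ContentDirectory v1 Browse
}
```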
type loggingResponseWriter struct {
http.ResponseWriter
request *http.Request

View File

@@ -101,9 +101,6 @@ func (c *conn) execCommand(ctx context.Context, out io.Writer, command string) (
if binary == "sha1sum" {
ht = hash.SHA1
}
if !c.vfs.Fs().Hashes().Contains(ht) {
return fmt.Errorf("%v hash not supported", ht)
}
var hashSum string
if args == "" {
// empty hash for no input

View File

@@ -626,3 +626,7 @@ put them back in again.` >}}
* Lorenzo Maiorfi <maiorfi@gmail.com>
* Claudio Maradonna <penguyman@stronzi.org>
* Ovidiu Victor Tatar <ovi.tatar@googlemail.com>
* Evan Spensley <epspensley@gmail.com>
* Yen Hu <61753151+0x59656e@users.noreply.github.com>
* Steve Kowalik <steven@wedontsleep.org>
* Jordi Gonzalez Muñoz <jordigonzm@gmail.com>

View File

@@ -5,43 +5,6 @@ description: "Rclone Changelog"
# Changelog
## v1.59.1 - 2022-08-08
[See commits](https://github.com/rclone/rclone/compare/v1.59.0...v1.59.1)
* Bug Fixes
* accounting: Fix panic in core/stats-reset with unknown group (Nick Craig-Wood)
* build: Fix android build after GitHub actions change (Nick Craig-Wood)
* dlna: Fix SOAP action header parsing (Joram Schrijver)
* docs: Fix links to mount command from install docs (albertony)
* dropbox: Fix ChangeNotify was unable to decrypt errors (Nick Craig-Wood)
* fs: Fix parsing of times and durations of the form "YYYY-MM-DD HH:MM:SS" (Nick Craig-Wood)
* serve sftp: Fix checksum detection (Nick Craig-Wood)
* sync: Add accidentally missed filter-sensitivity to --backup-dir option (Nick Naumann)
* Combine
* Fix docs showing `remote=` instead of `upstreams=` (Nick Craig-Wood)
* Throw error if duplicate directory name is specified (Nick Craig-Wood)
* Fix errors with backends shutting down while in use (Nick Craig-Wood)
* Dropbox
* Fix hang on quit with --dropbox-batch-mode off (Nick Craig-Wood)
* Fix infinite loop on uploading a corrupted file (Nick Craig-Wood)
* Internetarchive
* Ignore checksums for files using the different method (Lesmiscore)
* Handle hash symbol in the middle of filename (Lesmiscore)
* Jottacloud
* Fix working with whitelabel Elgiganten Cloud
* Do not store username in config when using standard auth (albertony)
* Mega
* Fix nil pointer exception when bad node received (Nick Craig-Wood)
* S3
* Fix --s3-no-head panic: reflect: Elem of invalid type s3.PutObjectInput (Nick Craig-Wood)
* SFTP
* Fix issue with WS_FTP by working around failing RealPath (albertony)
* Union
* Fix duplicated files when using directories with leading / (Nick Craig-Wood)
* Fix multiple files being uploaded when roots don't exist (Nick Craig-Wood)
* Fix panic due to misalignment of struct field in 32 bit architectures (r-ricci)
## v1.59.0 - 2022-07-09
[See commits](https://github.com/rclone/rclone/compare/v1.58.0...v1.59.0)


@@ -37,11 +37,6 @@ extended explanation in the [copy](/commands/rclone_copy/) command if unsure.
If dest:path doesn't exist, it is created and the source:path contents
go there.
It is not possible to sync overlapping remotes. However, you may exclude
the destination from the sync with a filter rule or by putting an
exclude-if-present file inside the destination directory and sync to a
destination that is inside the source directory.
**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics
**Note**: Use the `rclone dedupe` command to deal with "Duplicate object/directory found in source/destination - ignoring" errors.


@@ -1644,6 +1644,18 @@ This sets the interval between each retry specified by `--retries`
The default is `0`. Use `0` to disable.
### --server-side-across-configs ###
Allow server-side operations (e.g. copy or move) to work across
different configurations.
This can be useful if you wish to do a server-side copy or move
between two remotes which use the same backend but are configured
differently.
Note that this isn't enabled by default because it isn't easy for
rclone to tell if it will work between any two configurations.
### --size-only ###
Normally rclone will look at modification time and size of files to


@@ -160,7 +160,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.59.1")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.59.0")
-v, --verbose count Print lots more stuff (repeat for more)
```


@@ -263,7 +263,7 @@ Properties:
- "de"
- Microsoft Cloud Germany
- "cn"
- Azure and Office 365 operated by 21Vianet in China
- Azure and Office 365 operated by Vnet Group in China
### Advanced options


@@ -1 +1 @@
v1.59.1
v1.60.0


@@ -2,7 +2,6 @@ package accounting
import (
"context"
"fmt"
"sync"
"github.com/rclone/rclone/fs/rc"
@@ -191,9 +190,6 @@ func rcResetStats(ctx context.Context, in rc.Params) (rc.Params, error) {
if group != "" {
stats := groups.get(group)
if stats == nil {
return rc.Params{}, fmt.Errorf("group %q not found", group)
}
stats.ResetErrors()
stats.ResetCounters()
} else {


@@ -7,10 +7,8 @@ import (
"testing"
"time"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/fstest/testy"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestStatsGroupOperations(t *testing.T) {
@@ -119,89 +117,6 @@ func TestStatsGroupOperations(t *testing.T) {
t.Errorf("HeapObjects = %d, expected %d", end.HeapObjects, start.HeapObjects)
}
})
testGroupStatsInfo := NewStatsGroup(ctx, "test-group")
testGroupStatsInfo.Deletes(1)
GlobalStats().Deletes(41)
t.Run("core/group-list", func(t *testing.T) {
call := rc.Calls.Get("core/group-list")
require.NotNil(t, call)
got, err := call.Fn(ctx, rc.Params{})
require.NoError(t, err)
require.Equal(t, rc.Params{
"groups": []string{
"test-group",
},
}, got)
})
t.Run("core/stats", func(t *testing.T) {
call := rc.Calls.Get("core/stats")
require.NotNil(t, call)
gotNoGroup, err := call.Fn(ctx, rc.Params{})
require.NoError(t, err)
gotGroup, err := call.Fn(ctx, rc.Params{"group": "test-group"})
require.NoError(t, err)
assert.Equal(t, int64(42), gotNoGroup["deletes"])
assert.Equal(t, int64(1), gotGroup["deletes"])
})
t.Run("core/transferred", func(t *testing.T) {
call := rc.Calls.Get("core/transferred")
require.NotNil(t, call)
gotNoGroup, err := call.Fn(ctx, rc.Params{})
require.NoError(t, err)
gotGroup, err := call.Fn(ctx, rc.Params{"group": "test-group"})
require.NoError(t, err)
assert.Equal(t, rc.Params{
"transferred": []TransferSnapshot{},
}, gotNoGroup)
assert.Equal(t, rc.Params{
"transferred": []TransferSnapshot{},
}, gotGroup)
})
t.Run("core/stats-reset", func(t *testing.T) {
call := rc.Calls.Get("core/stats-reset")
require.NotNil(t, call)
assert.Equal(t, int64(41), GlobalStats().deletes)
assert.Equal(t, int64(1), testGroupStatsInfo.deletes)
_, err := call.Fn(ctx, rc.Params{"group": "test-group"})
require.NoError(t, err)
assert.Equal(t, int64(41), GlobalStats().deletes)
assert.Equal(t, int64(0), testGroupStatsInfo.deletes)
_, err = call.Fn(ctx, rc.Params{})
require.NoError(t, err)
assert.Equal(t, int64(0), GlobalStats().deletes)
assert.Equal(t, int64(0), testGroupStatsInfo.deletes)
_, err = call.Fn(ctx, rc.Params{"group": "not-found"})
require.ErrorContains(t, err, `group "not-found" not found`)
})
testGroupStatsInfo = NewStatsGroup(ctx, "test-group")
t.Run("core/stats-delete", func(t *testing.T) {
call := rc.Calls.Get("core/stats-delete")
require.NotNil(t, call)
assert.Equal(t, []string{"test-group"}, groups.names())
_, err := call.Fn(ctx, rc.Params{"group": "test-group"})
require.NoError(t, err)
assert.Equal(t, []string{}, groups.names())
_, err = call.Fn(ctx, rc.Params{"group": "not-found"})
require.NoError(t, err)
})
}
func percentDiff(start, end uint64) uint64 {


@@ -45,96 +45,97 @@ var (
// ConfigInfo is filesystem config options
type ConfigInfo struct {
LogLevel LogLevel
StatsLogLevel LogLevel
UseJSONLog bool
DryRun bool
Interactive bool
CheckSum bool
SizeOnly bool
IgnoreTimes bool
IgnoreExisting bool
IgnoreErrors bool
ModifyWindow time.Duration
Checkers int
Transfers int
ConnectTimeout time.Duration // Connect timeout
Timeout time.Duration // Data channel timeout
ExpectContinueTimeout time.Duration
Dump DumpFlags
InsecureSkipVerify bool // Skip server certificate verification
DeleteMode DeleteMode
MaxDelete int64
TrackRenames bool // Track file renames.
TrackRenamesStrategy string // Comma separated list of strategies used to track renames
LowLevelRetries int
UpdateOlder bool // Skip files that are newer on the destination
NoGzip bool // Disable compression
MaxDepth int
IgnoreSize bool
IgnoreChecksum bool
IgnoreCaseSync bool
NoTraverse bool
CheckFirst bool
NoCheckDest bool
NoUnicodeNormalization bool
NoUpdateModTime bool
DataRateUnit string
CompareDest []string
CopyDest []string
BackupDir string
Suffix string
SuffixKeepExtension bool
UseListR bool
BufferSize SizeSuffix
BwLimit BwTimetable
BwLimitFile BwTimetable
TPSLimit float64
TPSLimitBurst int
BindAddr net.IP
DisableFeatures []string
UserAgent string
Immutable bool
AutoConfirm bool
StreamingUploadCutoff SizeSuffix
StatsFileNameLength int
AskPassword bool
PasswordCommand SpaceSepList
UseServerModTime bool
MaxTransfer SizeSuffix
MaxDuration time.Duration
CutoffMode CutoffMode
MaxBacklog int
MaxStatsGroups int
StatsOneLine bool
StatsOneLineDate bool // If we want a date prefix at all
StatsOneLineDateFormat string // If we want to customize the prefix
ErrorOnNoTransfer bool // Set appropriate exit code if no files transferred
Progress bool
ProgressTerminalTitle bool
Cookie bool
UseMmap bool
CaCert string // Client Side CA
ClientCert string // Client Side Cert
ClientKey string // Client Side Key
MultiThreadCutoff SizeSuffix
MultiThreadStreams int
MultiThreadSet bool // whether MultiThreadStreams was set (set in fs/config/configflags)
OrderBy string // instructions on how to order the transfer
UploadHeaders []*HTTPOption
DownloadHeaders []*HTTPOption
Headers []*HTTPOption
MetadataSet Metadata // extra metadata to write when uploading
RefreshTimes bool
NoConsole bool
TrafficClass uint8
FsCacheExpireDuration time.Duration
FsCacheExpireInterval time.Duration
DisableHTTP2 bool
HumanReadable bool
KvLockTime time.Duration // maximum time to keep key-value database locked by process
DisableHTTPKeepAlives bool
Metadata bool
ServerSideAcrossConfigs bool
}
// NewConfig creates a new config with everything set to the default


@@ -141,6 +141,7 @@ func AddFlags(ci *fs.ConfigInfo, flagSet *pflag.FlagSet) {
flags.DurationVarP(flagSet, &ci.KvLockTime, "kv-lock-time", "", ci.KvLockTime, "Maximum time to keep key-value database locked by process")
flags.BoolVarP(flagSet, &ci.DisableHTTPKeepAlives, "disable-http-keep-alives", "", ci.DisableHTTPKeepAlives, "Disable HTTP keep-alives and use each connection once.")
flags.BoolVarP(flagSet, &ci.Metadata, "metadata", "M", ci.Metadata, "If set, preserve metadata when copying objects")
flags.BoolVarP(flagSet, &ci.ServerSideAcrossConfigs, "server-side-across-configs", "", ci.ServerSideAcrossConfigs, "Allow server-side operations (e.g. copy) to work across different configs")
}
// ParseHeaders converts the strings passed in via the header flags into HTTPOptions


@@ -424,7 +424,7 @@ func Copy(ctx context.Context, f fs.Fs, dst fs.Object, remote string, src fs.Obj
return nil, accounting.ErrorMaxTransferLimitReachedGraceful
}
}
if doCopy := f.Features().Copy; doCopy != nil && (SameConfig(src.Fs(), f) || (SameRemoteType(src.Fs(), f) && f.Features().ServerSideAcrossConfigs)) {
if doCopy := f.Features().Copy; doCopy != nil && (SameConfig(src.Fs(), f) || (SameRemoteType(src.Fs(), f) && (f.Features().ServerSideAcrossConfigs || ci.ServerSideAcrossConfigs))) {
in := tr.Account(ctx, nil) // account the transfer
in.ServerSideCopyStart()
newDst, err = doCopy(ctx, src, remote)
@@ -604,6 +604,7 @@ func SameObject(src, dst fs.Object) bool {
// It returns the destination object if possible. Note that this may
// be nil.
func Move(ctx context.Context, fdst fs.Fs, dst fs.Object, remote string, src fs.Object) (newDst fs.Object, err error) {
ci := fs.GetConfig(ctx)
tr := accounting.Stats(ctx).NewCheckingTransfer(src)
defer func() {
if err == nil {
@@ -618,7 +619,7 @@ func Move(ctx context.Context, fdst fs.Fs, dst fs.Object, remote string, src fs.
return newDst, nil
}
// See if we have Move available
if doMove := fdst.Features().Move; doMove != nil && (SameConfig(src.Fs(), fdst) || (SameRemoteType(src.Fs(), fdst) && fdst.Features().ServerSideAcrossConfigs)) {
if doMove := fdst.Features().Move; doMove != nil && (SameConfig(src.Fs(), fdst) || (SameRemoteType(src.Fs(), fdst) && (fdst.Features().ServerSideAcrossConfigs || ci.ServerSideAcrossConfigs))) {
// Delete destination if it exists and is not the same file as src (could be same file while seemingly different if the remote is case insensitive)
if dst != nil && !SameObject(src, dst) {
err = DeleteFile(ctx, dst)
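The combined condition in `Copy` and `Move` reduces to a small predicate: a server-side operation is attempted when the two remotes share a config, or share a backend type and either the backend's `ServerSideAcrossConfigs` feature or the new global `--server-side-across-configs` flag is set. A sketch of that logic (the names are local stand-ins, not rclone's API):

```go
package main

import "fmt"

// canServerSide mirrors the predicate in operations.Copy/Move: same
// config always qualifies; otherwise the backend types must match and
// either the backend feature or the global flag must be enabled.
func canServerSide(sameConfig, sameRemoteType, backendAllows, globalFlag bool) bool {
	return sameConfig || (sameRemoteType && (backendAllows || globalFlag))
}

func main() {
	// Two differently-configured remotes of the same backend type; the
	// backend doesn't advertise the feature, but the user passed
	// --server-side-across-configs:
	fmt.Println(canServerSide(false, true, false, true)) // true
}
```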


@@ -126,7 +126,7 @@ func parseDurationFromNow(age string, getNow func() time.Time) (d time.Duration,
// ParseDuration parses a duration string. Accept ms|s|m|h|d|w|M|y suffixes. Defaults to second if not provided
func ParseDuration(age string) (time.Duration, error) {
return parseDurationFromNow(age, timeNowFunc)
return parseDurationFromNow(age, time.Now)
}
// ReadableString parses d into a human-readable duration.
@@ -216,7 +216,7 @@ func (d *Duration) UnmarshalJSON(in []byte) error {
// Scan implements the fmt.Scanner interface
func (d *Duration) Scan(s fmt.ScanState, ch rune) error {
token, err := s.Token(true, func(rune) bool { return true })
token, err := s.Token(true, nil)
if err != nil {
return err
}


@@ -145,28 +145,11 @@ func TestDurationReadableString(t *testing.T) {
}
func TestDurationScan(t *testing.T) {
now := time.Date(2020, 9, 5, 8, 15, 5, 250, time.UTC)
oldTimeNowFunc := timeNowFunc
timeNowFunc = func() time.Time { return now }
defer func() { timeNowFunc = oldTimeNowFunc }()
for _, test := range []struct {
in string
want Duration
}{
{"17m", Duration(17 * time.Minute)},
{"-12h", Duration(-12 * time.Hour)},
{"0", Duration(0)},
{"off", DurationOff},
{"2022-03-26T17:48:19Z", Duration(now.Sub(time.Date(2022, 03, 26, 17, 48, 19, 0, time.UTC)))},
{"2022-03-26 17:48:19", Duration(now.Sub(time.Date(2022, 03, 26, 17, 48, 19, 0, time.Local)))},
} {
var got Duration
n, err := fmt.Sscan(test.in, &got)
require.NoError(t, err)
assert.Equal(t, 1, n)
assert.Equal(t, test.want, got)
}
var v Duration
n, err := fmt.Sscan(" 17m ", &v)
require.NoError(t, err)
assert.Equal(t, 1, n)
assert.Equal(t, Duration(17*60*time.Second), v)
}
func TestParseUnmarshalJSON(t *testing.T) {


@@ -83,7 +83,7 @@ func (t *Time) UnmarshalJSON(in []byte) error {
// Scan implements the fmt.Scanner interface
func (t *Time) Scan(s fmt.ScanState, ch rune) error {
token, err := s.Token(true, func(rune) bool { return true })
token, err := s.Token(true, nil)
if err != nil {
return err
}


@@ -93,23 +93,15 @@ func TestTimeScan(t *testing.T) {
timeNowFunc = func() time.Time { return now }
defer func() { timeNowFunc = oldTimeNowFunc }()
for _, test := range []struct {
in string
want Time
}{
{"17m", Time(now.Add(-17 * time.Minute))},
{"-12h", Time(now.Add(12 * time.Hour))},
{"0", Time(now)},
{"off", Time(time.Time{})},
{"2022-03-26T17:48:19Z", Time(time.Date(2022, 03, 26, 17, 48, 19, 0, time.UTC))},
{"2022-03-26 17:48:19", Time(time.Date(2022, 03, 26, 17, 48, 19, 0, time.Local))},
} {
var got Time
n, err := fmt.Sscan(test.in, &got)
require.NoError(t, err)
assert.Equal(t, 1, n)
assert.Equal(t, test.want, got)
}
var v1, v2, v3, v4, v5 Time
n, err := fmt.Sscan(" 17m -12h 0 off 2022-03-26T17:48:19Z ", &v1, &v2, &v3, &v4, &v5)
require.NoError(t, err)
assert.Equal(t, 5, n)
assert.Equal(t, Time(now.Add(-17*time.Minute)), v1)
assert.Equal(t, Time(now.Add(12*time.Hour)), v2)
assert.Equal(t, Time(now), v3)
assert.Equal(t, Time(time.Time{}), v4)
assert.Equal(t, Time(time.Date(2022, 03, 26, 17, 48, 19, 0, time.UTC)), v5)
}
func TestParseTimeUnmarshalJSON(t *testing.T) {


@@ -406,3 +406,34 @@ func rcJobStop(ctx context.Context, in rc.Params) (out rc.Params, err error) {
job.Stop()
return out, nil
}
func init() {
rc.Add(rc.Call{
Path: "job/stopgroup",
Fn: rcGroupStop,
Title: "Stop all running jobs in a group",
Help: `Parameters:
- group - name of the group (string).
`,
})
}
// Stops all running jobs in a group
func rcGroupStop(ctx context.Context, in rc.Params) (out rc.Params, err error) {
group, err := in.GetString("group")
if err != nil {
return nil, err
}
running.mu.RLock()
defer running.mu.RUnlock()
for _, job := range running.jobs {
if job.Group == group {
job.mu.Lock()
job.Stop()
job.mu.Unlock()
}
}
out = make(rc.Params)
return out, nil
}


@@ -452,6 +452,48 @@ func TestRcSyncJobStop(t *testing.T) {
assert.Equal(t, false, out["success"])
}
func TestRcJobStopGroup(t *testing.T) {
ctx := context.Background()
jobID = 0
_, _, err := NewJob(ctx, ctxFn, rc.Params{
"_async": true,
"_group": "myparty",
})
require.NoError(t, err)
_, _, err = NewJob(ctx, ctxFn, rc.Params{
"_async": true,
"_group": "myparty",
})
require.NoError(t, err)
call := rc.Calls.Get("job/stopgroup")
assert.NotNil(t, call)
in := rc.Params{"group": "myparty"}
out, err := call.Fn(context.Background(), in)
require.NoError(t, err)
require.Empty(t, out)
in = rc.Params{}
_, err = call.Fn(context.Background(), in)
require.Error(t, err)
assert.Contains(t, err.Error(), "Didn't find key")
time.Sleep(10 * time.Millisecond)
call = rc.Calls.Get("job/status")
assert.NotNil(t, call)
for i := 1; i <= 2; i++ {
in = rc.Params{"jobid": i}
out, err = call.Fn(context.Background(), in)
require.NoError(t, err)
require.NotNil(t, out)
assert.Equal(t, "myparty", out["group"])
assert.Equal(t, "context canceled", out["error"])
assert.Equal(t, true, out["finished"])
assert.Equal(t, false, out["success"])
}
}
func TestOnFinish(t *testing.T) {
jobID = 0
done := make(chan struct{})


@@ -1,4 +1,4 @@
package fs
// VersionTag of rclone
var VersionTag = "v1.59.1"
var VersionTag = "v1.60.0"

go.mod

@@ -51,7 +51,7 @@ require (
github.com/spf13/cobra v1.4.0
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.7.2
github.com/t3rm1n4l/go-mega v0.0.0-20220725095014-c4e0c2b5debf
github.com/t3rm1n4l/go-mega v0.0.0-20200416171014-ffad7fcb44b8
github.com/winfsp/cgofuse v1.5.1-0.20220421173602-ce7e5a65cac7
github.com/xanzy/ssh-agent v0.3.1
github.com/youmark/pkcs8 v0.0.0-20201027041543-1326539a0a0a

go.sum

@@ -591,8 +591,8 @@ github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.2 h1:4jaiDzPyXQvSd7D0EjG45355tLlV3VOECpq10pLC+8s=
github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals=
github.com/t3rm1n4l/go-mega v0.0.0-20220725095014-c4e0c2b5debf h1:Y43S3e9P1NPs/QF4R5/SdlXj2d31540hP4Gk8VKNvDg=
github.com/t3rm1n4l/go-mega v0.0.0-20220725095014-c4e0c2b5debf/go.mod h1:c+cGNU1qi9bO7ZF4IRMYk+KaZTNiQ/gQrSbyMmGFq1Q=
github.com/t3rm1n4l/go-mega v0.0.0-20200416171014-ffad7fcb44b8 h1:IGJQmLBLYBdAknj21W3JsVof0yjEXfy1Q0K3YZebDOg=
github.com/t3rm1n4l/go-mega v0.0.0-20200416171014-ffad7fcb44b8/go.mod h1:XWL4vDyd3JKmJx+hZWUVgCNmmhZ2dTBcaNDcxH465s0=
github.com/tarm/serial v0.0.0-20180830185346-98f6abe2eb07/go.mod h1:kDXzergiv9cbyO7IOYJZWg1U88JhDg3PB6klq9Hg2pA=
github.com/tinylib/msgp v1.0.2/go.mod h1:+d+yLhGm8mzTaHzB+wgMYrodPfmZrzkirds8fDWklFE=
github.com/tklauser/go-sysconf v0.3.10 h1:IJ1AZGZRWbY8T5Vfk04D9WOA5WSejdflXxP03OUqALw=


@@ -217,6 +217,14 @@ This needs expanding and submitting to pypi...
Rust bindings are available in the `librclone` crate: https://crates.io/crates/librclone
## PHP
The `php` subdirectory shows how to use the C library librclone from PHP through its
foreign function interface (FFI).
Useful docs:
- [PHP / FFI](https://www.php.net/manual/en/book.ffi.php)
## TODO
- Async jobs must currently be cancelled manually - RcloneFinalize doesn't do it.

librclone/php/rclone.php (new file)

@@ -0,0 +1,53 @@
<?php
/*
PHP interface to librclone.so, using FFI ( Foreign Function Interface )
Create an rclone object
$rc = new Rclone( __DIR__ . '/librclone.so' );
Then call rpc calls on it
$rc->rpc( "config/listremotes", "{}" );
When finished, close it
$rc->close();
*/
class Rclone {
protected $rclone;
private $out;
public function __construct( $libshared )
{
$this->rclone = \FFI::cdef("
struct RcloneRPCResult {
char* Output;
int Status;
};
extern void RcloneInitialize();
extern void RcloneFinalize();
extern struct RcloneRPCResult RcloneRPC(char* method, char* input);
extern void RcloneFreeString(char* str);
", $libshared);
$this->rclone->RcloneInitialize();
}
public function rpc( $method, $input ): array
{
$this->out = $this->rclone->RcloneRPC( $method, $input );
$response = [
'output' => \FFI::string( $this->out->Output ),
'status' => $this->out->Status
];
$this->rclone->RcloneFreeString( $this->out->Output );
return $response;
}
public function close( ): void
{
$this->rclone->RcloneFinalize();
}
}

librclone/php/test.php (new file)

@@ -0,0 +1,55 @@
<?php
/*
Test program for librclone
*/
include_once ( "rclone.php" );
const REMOTE = 'gdrive:/';
const FOLDER = "rcloneTest";
const FILE = "testFile.txt";
$rc = new Rclone( __DIR__ . '/librclone.so' );
$response = $rc->rpc( "config/listremotes", "{}" );
print_r( $response );
$response = $rc->rpc("operations/mkdir",
json_encode( [
'fs' => REMOTE,
'remote'=> FOLDER
]));
print_r( $response );
$response = $rc->rpc("operations/list",
json_encode( [
'fs' => REMOTE,
'remote'=> ''
]));
print_r( $response );
file_put_contents("./" . FILE, "Success!!!");
$response = $rc->rpc("operations/copyfile",
json_encode( [
'srcFs' => getcwd(),
'srcRemote'=> FILE,
'dstFs' => REMOTE . FOLDER,
'dstRemote' => FILE
]));
print_r( $response );
$response = $rc->rpc("operations/list",
json_encode( [
'fs' => REMOTE . FOLDER,
'remote'=> ''
]));
print_r( $response );
if ( $response['output'] ) {
$array = @json_decode( $response['output'], true );
if ( $response['status'] == 200 && ( $array['list'] ?? 0 ) ) {
$valid = $array['list'][0]['Name'] == FILE ? "SUCCESS" : "FAIL";
print_r("The test seems: " . $valid . "\n");
}
}
$rc->close();

rclone.1 (generated)

@@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 2.9.2.1
.\"
.TH "rclone" "1" "Aug 08, 2022" "User Manual" ""
.TH "rclone" "1" "Jul 09, 2022" "User Manual" ""
.hy
.SH Rclone syncs your files to cloud storage
.PP
@@ -732,9 +732,9 @@ system\[aq]s scheduler.
If you need to expose \f[I]service\f[R]-like features, such as remote
control (https://rclone.org/rc/), GUI (https://rclone.org/gui/),
serve (https://rclone.org/commands/rclone_serve/) or
mount (https://rclone.org/commands/rclone_mount/), you will often want
an rclone command always running in the background, and configuring it
to run in a service infrastructure may be a better option.
mount (https://rclone.org/commands/rclone_move/), you will often want an
rclone command always running in the background, and configuring it to
run in a service infrastructure may be a better option.
Below are some alternatives on how to achieve this on different
operating systems.
.PP
@@ -770,7 +770,7 @@ c:\[rs]rclone\[rs]rclone.exe sync c:\[rs]files remote:/files --no-console --log-
.fi
.SS User account
.PP
As mentioned in the mount (https://rclone.org/commands/rclone_mount/)
As mentioned in the mount (https://rclone.org/commands/rclone_move/)
documentation, mounted drives created as Administrator are not visible
to other accounts, not even the account that was elevated as
Administrator.
@@ -1271,11 +1271,6 @@ copy (https://rclone.org/commands/rclone_copy/) command if unsure.
If dest:path doesn\[aq]t exist, it is created and the source:path
contents go there.
.PP
It is not possible to sync overlapping remotes.
However, you may exclude the destination from the sync with a filter
rule or by putting an exclude-if-present file inside the destination
directory and sync to a destination that is inside the source directory.
.PP
\f[B]Note\f[R]: Use the \f[C]-P\f[R]/\f[C]--progress\f[R] flag to view
real-time transfer statistics
.PP
@@ -10978,8 +10973,7 @@ in DIR, then it will be overwritten.
.PP
The remote in use must support server-side move or copy and you must use
the same remote as the destination of the sync.
The backup directory must not overlap the destination directory without
it being excluded by a filter rule.
The backup directory must not overlap the destination directory.
.PP
For example
.IP
@@ -19713,7 +19707,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.59.1\[dq])
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.59.0\[dq])
-v, --verbose count Print lots more stuff (repeat for more)
\f[R]
.fi
@@ -34707,7 +34701,7 @@ remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
[AllDrives]
type = combine
upstreams = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
remote = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
\f[R]
.fi
.PP
@@ -39087,7 +39081,7 @@ remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
[AllDrives]
type = combine
upstreams = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
remote = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
\f[R]
.fi
.PP
@@ -41688,9 +41682,10 @@ Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for
the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g.
\f[C]remote:item/path/to/dir\f[R].
.PP
Unlike S3, listing up all items uploaded by you isn\[aq]t supported.
Once you have made a remote (see the provider specific section above)
you can use it like this:
.PP
Once you have made a remote, you can use it like this:
Unlike S3, listing up all items uploaded by you isn\[aq]t supported.
.PP
Make a new item
.IP
@@ -41746,7 +41741,7 @@ However, some fields are reserved by both Internet Archive and rclone.
The following are reserved by Internet Archive: - \f[C]name\f[R] -
\f[C]source\f[R] - \f[C]size\f[R] - \f[C]md5\f[R] - \f[C]crc32\f[R] -
\f[C]sha1\f[R] - \f[C]format\f[R] - \f[C]old_version\f[R] -
\f[C]viruscheck\f[R] - \f[C]summation\f[R]
\f[C]viruscheck\f[R]
.PP
Trying to set values to these keys is ignored with a warning.
Only setting \f[C]mtime\f[R] is an exception.
@@ -42004,7 +41999,7 @@ string
T}@T{
01234567
T}@T{
\f[B]Y\f[R]
N
T}
T{
format
@@ -42015,7 +42010,7 @@ string
T}@T{
Comma-Separated Values
T}@T{
\f[B]Y\f[R]
N
T}
T{
md5
@@ -42026,7 +42021,7 @@ string
T}@T{
01234567012345670123456701234567
T}@T{
\f[B]Y\f[R]
N
T}
T{
mtime
@@ -42037,7 +42032,7 @@ RFC 3339
T}@T{
2006-01-02T15:04:05.999999999Z
T}@T{
\f[B]Y\f[R]
N
T}
T{
name
@@ -42048,7 +42043,7 @@ filename
T}@T{
backend/internetarchive/internetarchive.go
T}@T{
\f[B]Y\f[R]
N
T}
T{
old_version
@@ -42059,7 +42054,7 @@ boolean
T}@T{
true
T}@T{
\f[B]Y\f[R]
N
T}
T{
rclone-ia-mtime
@@ -42103,7 +42098,7 @@ string
T}@T{
0123456701234567012345670123456701234567
T}@T{
\f[B]Y\f[R]
N
T}
T{
size
@@ -42114,7 +42109,7 @@ decimal number
T}@T{
123456
T}@T{
\f[B]Y\f[R]
N
T}
T{
source
@@ -42125,18 +42120,7 @@ string
T}@T{
original
T}@T{
\f[B]Y\f[R]
T}
T{
summation
T}@T{
Check https://forum.rclone.org/t/31922 for how it is used
T}@T{
string
T}@T{
md5
T}@T{
\f[B]Y\f[R]
N
T}
T{
viruscheck
@@ -42147,7 +42131,7 @@ unixtime
T}@T{
1654191352
T}@T{
\f[B]Y\f[R]
N
T}
.TE
.PP
@@ -53981,99 +53965,6 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: return an error based on option value
.SH Changelog
.SS v1.59.1 - 2022-08-08
.PP
See commits (https://github.com/rclone/rclone/compare/v1.59.0...v1.59.1)
.IP \[bu] 2
Bug Fixes
.RS 2
.IP \[bu] 2
accounting: Fix panic in core/stats-reset with unknown group (Nick
Craig-Wood)
.IP \[bu] 2
build: Fix android build after GitHub actions change (Nick Craig-Wood)
.IP \[bu] 2
dlna: Fix SOAP action header parsing (Joram Schrijver)
.IP \[bu] 2
docs: Fix links to mount command from install docs (albertony)
.IP \[bu] 2
dropbox: Fix ChangeNotify was unable to decrypt errors (Nick Craig-Wood)
.IP \[bu] 2
fs: Fix parsing of times and durations of the form \[dq]YYYY-MM-DD
HH:MM:SS\[dq] (Nick Craig-Wood)
.IP \[bu] 2
serve sftp: Fix checksum detection (Nick Craig-Wood)
.IP \[bu] 2
sync: Add accidentally missed filter-sensitivity to --backup-dir option
(Nick Naumann)
.RE
.IP \[bu] 2
Combine
.RS 2
.IP \[bu] 2
Fix docs showing \f[C]remote=\f[R] instead of \f[C]upstreams=\f[R] (Nick
Craig-Wood)
.IP \[bu] 2
Throw error if duplicate directory name is specified (Nick Craig-Wood)
.IP \[bu] 2
Fix errors with backends shutting down while in use (Nick Craig-Wood)
.RE
.IP \[bu] 2
Dropbox
.RS 2
.IP \[bu] 2
Fix hang on quit with --dropbox-batch-mode off (Nick Craig-Wood)
.IP \[bu] 2
Fix infinite loop on uploading a corrupted file (Nick Craig-Wood)
.RE
.IP \[bu] 2
Internetarchive
.RS 2
.IP \[bu] 2
Ignore checksums for files using the different method (Lesmiscore)
.IP \[bu] 2
Handle hash symbol in the middle of filename (Lesmiscore)
.RE
.IP \[bu] 2
Jottacloud
.RS 2
.IP \[bu] 2
Fix working with whitelabel Elgiganten Cloud
.IP \[bu] 2
Do not store username in config when using standard auth (albertony)
.RE
.IP \[bu] 2
Mega
.RS 2
.IP \[bu] 2
Fix nil pointer exception when bad node received (Nick Craig-Wood)
.RE
.IP \[bu] 2
S3
.RS 2
.IP \[bu] 2
Fix --s3-no-head panic: reflect: Elem of invalid type s3.PutObjectInput
(Nick Craig-Wood)
.RE
.IP \[bu] 2
SFTP
.RS 2
.IP \[bu] 2
Fix issue with WS_FTP by working around failing RealPath (albertony)
.RE
.IP \[bu] 2
Union
.RS 2
.IP \[bu] 2
Fix duplicated files when using directories with leading / (Nick
Craig-Wood)
.IP \[bu] 2
Fix multiple files being uploaded when roots don\[aq]t exist (Nick
Craig-Wood)
.IP \[bu] 2
Fix panic due to misalignment of struct field in 32 bit architectures
(r-ricci)
.RE
.SS v1.59.0 - 2022-07-09
.PP
See commits (https://github.com/rclone/rclone/compare/v1.58.0...v1.59.0)