mirror of https://github.com/rclone/rclone.git synced 2025-12-30 15:13:55 +00:00

Compare commits


501 Commits

Author SHA1 Message Date
Paul Collins
8d80df2c85 swift: add workarounds for bad listings in Ceph RGW
Ceph's Swift API emulation does not fully conform to the API spec.
As a result, it sometimes returns fewer items in a container than
the requested limit, which according to the spec should mean
that there are no more objects left in the container.  (Note that
python-swiftclient always fetches unless the current page is empty.)

This commit adds a pair of new Swift backend settings to handle this.

Set `fetch_until_empty_page` to true to always fetch another
page of the container listing unless there are no items left.

Alternatively, set `partial_page_fetch_threshold` to an integer
percentage.  In this case rclone will fetch a new page only when
the current page is within this percentage of the limit.
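
A minimal config sketch, assuming a Swift remote named `ceph` (values are
illustrative and normally only one of the two settings would be used):

    [ceph]
    type = swift
    fetch_until_empty_page = true
    partial_page_fetch_threshold = 90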

Swift API reference: https://docs.openstack.org/swift/latest/api/pagination.html

PR against ncw/swift with research and discussion: https://github.com/ncw/swift/pull/167

Fixes #7924
2024-06-28 09:54:44 +01:00
Nick Craig-Wood
754e53dbcc docs: remove warp as silver sponsor 2024-06-24 10:33:18 +01:00
Nick Craig-Wood
5511fa441a onedrive: fix nil pointer error when uploading small files
Before this fix, when uploading a single part file, if the
o.fetchAndUpdateMetadata() call failed rclone would call
o.setMetaData() with a nil info, which caused a crash.

This fixes the problem by returning the error from
o.fetchAndUpdateMetadata() explicitly.

See: https://forum.rclone.org/t/serve-webdav-is-crashing-fatal-error-sync-unlock-of-unlocked-mutex/46300
2024-06-24 09:30:59 +01:00
Nick Craig-Wood
4ed4483bbc vfs: fix fatal error: sync: unlock of unlocked mutex in panics
Before this change a panic could be overwritten with the message

    fatal error: sync: unlock of unlocked mutex

This was because we temporarily unlocked the mutex, but failed to lock
it again if there was a panic.

This code is never the cause of an error but it masks the
underlying error by overwriting the panic cause.

See: https://forum.rclone.org/t/serve-webdav-is-crashing-fatal-error-sync-unlock-of-unlocked-mutex/46300
2024-06-24 09:30:59 +01:00
Nick Craig-Wood
0e85ba5080 Add Filipe Herculano to contributors 2024-06-24 09:30:59 +01:00
Nick Craig-Wood
e5095a7d7b Add Thearas to contributors 2024-06-24 09:30:59 +01:00
wiserain
300851e8bf pikpak: implement custom hash to replace wrong sha1
This improves PikPak's file integrity verification by implementing a custom 
hash function named gcid and replacing the previously used SHA-1 hash.
2024-06-20 00:57:21 +09:00
wiserain
cbccad9491 pikpak: improves data consistency by ensuring async tasks complete
Similar to uploads implemented in commit ce5024bf33, 
this change ensures most asynchronous file operations (copy, move, delete, 
purge, and cleanup) complete before proceeding with subsequent actions. 
This reduces the risk of data inconsistencies and improves overall reliability.
2024-06-20 00:07:05 +09:00
dependabot[bot]
9f1a7cfa67 build(deps): bump docker/build-push-action from 5 to 6
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-18 14:48:30 +01:00
Filipe Herculano
d84a4c9ac1 s3: fix incorrect region for Magalu provider 2024-06-15 17:40:28 +01:00
Thearas
1c9da8c96a docs: recommend no_check_bucket = true for Alibaba - fixes #7889
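A hedged config sketch of the recommendation (remote name and credentials are illustrative):

    [alibaba]
    type = s3
    provider = Alibaba
    no_check_bucket = true
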
Change-Id: Ib6246e416ce67dddc3cb69350de69129a8826ce3
2024-06-15 17:39:05 +01:00
Nick Craig-Wood
af9c5fef93 docs: tidy .gitignore for docs 2024-06-15 13:08:20 +01:00
Nick Craig-Wood
7060777d1d docs: fix hugo warning: found no layout file for "html" for kind "term"
Hugo has been making this warning for a while

WARN found no layout file for "html" for kind "term": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.

This turned out to be the addition of the `groups:` keyword to the
command frontmatter. Hugo is doing something with this keyword though
this isn't documented in the frontmatter documentation.

The fix was removing the `groups:` keyword from the frontmatter since
it was never used by Hugo.
2024-06-15 12:59:49 +01:00
Nick Craig-Wood
0197e7f4e5 docs: remove slug and url from command pages since they are no longer needed 2024-06-15 12:37:43 +01:00
Nick Craig-Wood
c1c9e209f3 docs: fix hugo warning: found no layout file for "html" for kind "section"
Hugo has been making this warning for a while

WARN found no layout file for "html" for kind "section": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.

It turned out to be
- the arrangement of the oracle object storage docs and sub page
- the fact that a section template was missing
2024-06-15 12:29:37 +01:00
Nick Craig-Wood
fd182af866 serve dlna: fix panic: invalid argument to Int63n
This updates the upstream github.com/anacrolix/dms to master to fix
the problem.

Fixes #7911
2024-06-15 10:58:57 +01:00
Nick Craig-Wood
4ea629446f Start v1.68.0-DEV development 2024-06-14 17:54:27 +01:00
Nick Craig-Wood
93e8a976ef Version v1.67.0 2024-06-14 16:04:51 +01:00
nielash
8470bdf810 s3: fix 405 error on HEAD for delete marker with versionId
When getting an object by specifying a versionId in the request, if the
specified version is a delete marker, it returns 405 (Method Not Allowed),
instead of 404 (Not Found) which would be returned without a versionId. See
https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html

Before this change, we were only looking for 404 (and not 405) to determine
whether the object exists. This meant that in some circumstances (ex. when
Versioning is enabled for the bucket and we have a non-null X-Amz-Version-Id), we
deemed the object to exist when we should not have.

After this change, 405 (Method Not Allowed) is treated the same as 404 (Not
Found) for the purposes of headObject.

See https://forum.rclone.org/t/bisync-rename-failed-method-not-allowed/45723/13
2024-06-13 18:09:29 +01:00
Nick Craig-Wood
1aa3a37a28 gitannex: make tests run more quietly - use go test -v for more info
These tests were generating 1000s of lines of logs and making it
difficult to figure out what was failing in other tests.
2024-06-13 17:33:56 +01:00
albertony
ae887ad042 jottacloud: set metadata on server side copy and move - fixes #7900 2024-06-13 16:19:36 +01:00
Nick Craig-Wood
d279fea44a qingstor: disable integration tests as test account suspended
QingStor support have disabled the integration test account with this message

尊敬的用户您好:依据监管部门相关内容安全合规要求,QingStor即日起限制对
个人客户提供对象存储服务,您的对象存储服务将被系统置于禁用状态,如需继
续使用QingsStor对象存储服务,您可以通过工单或者拨打400热线申请开通,未
解封期间您的数据将不受影响,感谢您的谅解和支持。

Which google translate renders as

> Dear user: In accordance with the relevant content security
> compliance requirements of the regulatory authorities, QingStor will
> limit the provision of object storage services to individual
> customers from now on. Your object storage service will be disabled
> by the system. If you need to continue to use the QingsStor object
> storage service, you can apply for activation through a work order
> or by calling the 400 hotline. Your data will not be affected during
> the period of unblocking. Thank you for your understanding and
> support.
2024-06-13 12:50:35 +01:00
Nick Craig-Wood
282e34f2d5 operations: add operations.ReadFile to read the contents of a file into memory 2024-06-13 12:48:46 +01:00
Nick Craig-Wood
021f25a748 fs: make ConfigFs take an fs.Info which makes it more useful 2024-06-13 12:48:46 +01:00
Nick Craig-Wood
18e9d039ad touch: fix using -R on certain backends
On backends which return a valid object for "" with NewObject,
touch was going wrong as it thought it was passed an object.

This should not happen normally but s3 can be configured with
--s3-no-head where it is happy to believe that all objects exist.
2024-06-12 17:57:28 +01:00
Nick Craig-Wood
cbcfb90d9a serve s3: fix XML of error message
This updates the s3 library to fix the XML of the error response

Fixes #7749
2024-06-12 17:53:57 +01:00
Nick Craig-Wood
caba22a585 fs/logger: make the tests deterministic
Previously this used `rclone test makefiles --seed 0` which sets a
random seed and every now and again we get this error

    Failed to open file "$WORK\\src\\moru": open $WORK\src\moru: is a directory

Because a file with the same name was created as a file in the src and
a dir in the dst.

This fixes it by using deterministic seeds each time.
2024-06-12 16:39:30 +01:00
Nick Craig-Wood
3fef8016b5 zoho: sleep for 60 seconds if rate limit error received 2024-06-12 16:34:30 +01:00
Nick Craig-Wood
edf6537c61 zoho: remove simple file names complication which is no longer needed 2024-06-12 16:34:27 +01:00
Nick Craig-Wood
00f0e9df9d zoho: retry reading info if size wasn't returned 2024-06-12 16:34:24 +01:00
Nick Craig-Wood
e6ab644350 zoho: fix throttling problem when uploading files
Before this change rclone checked to see if a file existed before
uploading it. It did this to avoid making duplicate files. This
involved listing the destination directory to see if the file existed
which was rate limited by Zoho.

However Zoho can't have duplicate files anyway so this fix just
removes that check and the PutUnchecked method which isn't needed.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697
See: https://forum.rclone.org/t/followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/44794
2024-06-12 16:34:18 +01:00
Nick Craig-Wood
61c18e3b60 zoho: use cursor listing for improved performance
Cursor listing enables us to list up to 1,000 items per call
(previously it was 10) and uses one less transaction per call.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697/4
2024-06-12 16:34:11 +01:00
Nick Craig-Wood
d068e0b1a9 operations: fix hashing problem in integration tests
Before this change backends which supported more than one hash (eg
pcloud) or backends which wrapped backends supporting more than one
hash (combine) would fail the TestMultithreadCopy and
TestMultithreadCopyAbort with an error like

    Failed to make new multi hasher: requested set 000001ff contains unknown hash types

This was caused by the tests limiting the globally available hashes to
the first hash supplied by the backend.

This was added in this commit

d5d28a7513 operations: fix overwrite of destination when multi-thread transfer fails

to overcome the tests taking >100s on the local backend because they
made every single hash that the local backend supports. It brought this time
down to 20s.

This commit fixes the problem and retains the CPU speedup by only
applying the fix from the original commit if the destination backend
is the local backend. This fixes the common case (testing on the local
backend). This does not fix the problem for a backend which wraps the
local backend (eg combine) but this is run only on the integration
test machine and not on all the CI.
2024-06-12 11:11:54 +01:00
Nick Craig-Wood
a341065b8d Add Bill Fraser to contributors 2024-06-12 11:11:54 +01:00
Nick Craig-Wood
0c29a1fe31 Add Florian Klink to contributors 2024-06-12 11:11:54 +01:00
Nick Craig-Wood
1a40300b5f Add Michał Dzienisiewicz to contributors 2024-06-12 11:11:54 +01:00
dependabot[bot]
44be27729a build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity
Bumps [github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://github.com/Azure/azure-sdk-for-go) from 1.5.2 to 1.6.0.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/internal/v1.5.2...sdk/azcore/v1.6.0)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azidentity
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-12 09:33:04 +01:00
wiserain
b7624287ac pikpak: implement configurable chunk size for multipart upload
Previously, the fixed 10MB chunk size could lead to exceeding the maximum 
allowed number of parts for very large files. Similar to other backends, options for 
chunk size and upload concurrency are now user-configurable. Additionally, 
the internal library is used to automatically adjust chunk size to prevent exceeding 
upload part limitations.
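
As a sketch of how this might be tuned, assuming the options follow rclone's
usual flag naming (`--pikpak-chunk-size` and `--pikpak-upload-concurrency` are
not named in this message and the values are illustrative):

    rclone copy --pikpak-chunk-size 64M --pikpak-upload-concurrency 4 bigfile pikpak:backup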

Fixes #7850
2024-06-12 13:19:25 +09:00
Evan Harris
6db9f7180f docs: added info about --progress terminal width 2024-06-11 21:37:40 +01:00
wiserain
b601961e54 pikpak: remove PublicLink from integration tests
This commit removes the test for PublicLink as it is not currently supported in the test environment.

This removes it from the integration tests to avoid meaningless retries.
2024-06-12 04:43:49 +09:00
Nick Craig-Wood
51aca9cf9d onedrive: add --onedrive-hard-delete to permanently delete files
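A usage sketch, assuming the flag applies to ordinary delete operations (path is illustrative):

    rclone delete --onedrive-hard-delete onedrive:old-files
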
Fixes #7812
2024-06-11 17:48:15 +01:00
Bill Fraser
7c0645dda9 dropbox: add option to override root namespace
This lets you, for example, use shared folders without mounting them
into your home namespace first, as long as you know their namespace ID.

(The --dropbox-shared-folders flag could thus be changed to not need to
mount the shared folder first, but I'm not doing that here as it's a
behavior change, who knows, maybe somebody relies on it.)
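
A hedged config sketch, assuming the option is exposed as `root_namespace`
(the name is not stated here and the ID is a placeholder):

    [dropbox]
    type = dropbox
    root_namespace = 1234567890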
2024-06-11 12:49:01 +01:00
Florian Klink
aed77a8fb2 tree-wide: replace /bin/bash with /usr/bin/env bash
The latter is more portable, while the former only works on systems
where /bin/bash exists (or is symlinked appropriately).
2024-06-11 12:47:47 +01:00
Michał Dzienisiewicz
4250dd98f3 protondrive: don't auth with an empty access token 2024-06-11 12:46:00 +01:00
nielash
c13118246c serve s3: fix in-memory metadata storing wrong modtime
Before this change, serve s3 did not consistently save the correct modtime value
in memory after putting or copying an object, which could sometimes cause an
incorrect modtime to be returned. This change fixes the issue by ensuring that
both "mtime" and "X-Amz-Meta-Mtime" are updated in b.meta when we have fresh data.

The issue was discovered on the TestBisyncRemoteRemote/ext_paths test.
2024-06-11 12:10:43 +01:00
nielash
a56cd52025 vfs: fix renaming a directory
Before this change, renaming a directory d failed to rename its key in
d.parent.items, which caused trouble later when doing Dir.Stat on a
subdirectory. This change fixes the issue.
2024-06-11 12:09:19 +01:00
nielash
3ae4534ce6 fstest: make RandomRemoteName shorter
Use 12 random characters instead of 24, to avoid trouble on the bisync tests
2024-06-11 12:02:48 +01:00
nielash
9c287c72d6 googlephotos: remove unnecessary nil check
golangci-lint was complaining about this. `entry` can never be nil because
itemToDirEntry never returns a nil interface value
2024-06-11 11:55:42 +01:00
nielash
862d5d6086 s3, googlecloudstorage, azureblob: fix encoding issue with dir path comparison
`remote` has been converted ToStandardPath a few lines above, so `directory`
needs to be converted the same way in order to be compared properly. This was
spotted on `TestBisyncRemoteRemote/extended_filenames` for
`TestS3,directory_markers:` and `TestGoogleCloudStorage,directory_markers:`
which tripped over a directory name containing a Line Feed symbol.
2024-06-11 11:54:54 +01:00
Nick Craig-Wood
003f4531fe sync: don't test reading metadata if we can't write it 2024-06-11 11:31:35 +01:00
Nick Craig-Wood
a52e887ddd linkbox: ignore TestListDirSorted test until encoding is implemented 2024-06-11 11:31:35 +01:00
Nick Craig-Wood
b7681e72bf Add Tomasz Melcer to contributors 2024-06-11 11:31:35 +01:00
wiserain
ce5024bf33 pikpak: improve upload reliability and resolve potential file conflicts
This attempts to resolve upload conflicts by implementing cancel/cleanup on failed
uploads

* fix panic error on defer cancel upload
* increase pacer min sleep from 10 to 100 ms
* stop using uploadByForm()
* introduce force sleep before and after async tasks
* use pacer's retry scheme instead of manual implementation

Fixes #7787
2024-06-10 18:08:07 +01:00
Tomasz Melcer
d2af114139 sftp: --sftp-connections to limit maximum number of connections
Done based on a similar feature in the ftp remote. However, the switch
name is different, as `concurrency` is already taken by a different
feature.
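
A usage sketch (values and paths are illustrative):

    rclone copy --sftp-connections 4 sftp:remote/dir /local/dir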
2024-06-09 18:36:30 +01:00
Nick Craig-Wood
c8d6b02dd6 ulozto: fix panic in various integration tests
Before this change some of the integration tests were producing this error

    panic: runtime error: invalid memory address or nil pointer dereference

This was caused by an `fs.Object` of which the type (`*Object`) was
not `nil`, but the value within was `nil`. These do not compare as
`nil` leading to the panic.

This is a classic Go gotcha: https://go.dev/doc/faq#nil_error

This was easily fixed by changing the type of one function to return
fs.Object instead of *Object.
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
55cac4c34d swift: fix integration tester with use_segments_container=false 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
7ce60a47e8 drive: fix tests for backend query command
The tests assumed that there would be only one match, but on the
integration test server there are multiple matches due to failed test
runs.
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
27496fb26d mailru: attempt to fix throttling by increasing min sleep to 100ms
Before this change we waited a minimum of 10ms between API calls for
mailru.

The tests no longer pass at this rate, so this increases the time to
100ms.

See #7768
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
39f8d039fe sync: fix expecting SFTP to have MkdirMetadata method: optional feature not implemented
Before this fix we attempted to copy metadata to SFTP backends despite
them not being capable of it.

This fixes the problem by making the need to copy metadata explicit
rather than implicit in a value being present or not.
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
57f5ad188b operations: fix incorrect modtime on some multipart transfers
In this commit

6a0a54ab97 operations: fix missing metadata for multipart transfers to local disk

We broke the setting of modification times when doing multipart
transfers from a backend which didn't support metadata to a backend
which did support metadata.

This was fixed by setting the "mtime" in the metadata if it was
missing.
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
76798d5bb1 sync: fix tests on backends which can't have empty directories 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
5921bb0efd cache: fix tests when testing for Object.SetMetadata 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
ce0d8a70a3 Add Charles Hamilton to contributors 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
c23a40cb2a Add Thomas Schneider to contributors 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
86a1951a56 Add Bruno Fernandes to contributors 2024-06-08 17:44:11 +01:00
Charles Hamilton
b778ec0142 windows: make rclone work with SeBackupPrivilege and/or SeRestorePrivilege
On Windows, this change includes the `FILE_FLAG_BACKUP_SEMANTICS` in
all calls to `CreateFile`.

Adding this flag is useful when rclone is running within a
security context that has `SeBackupPrivilege` and/or `SeRestorePrivilege`
token privileges enabled.

Without this flag, rclone cannot properly leverage special security
groups such as Backup Operators, who possess these privileges.

See: https://forum.rclone.org/t/rclone-sebackupprivilege-file-flag-backup-semantics/45339
See: https://github.com/rclone/rclone/pull/7877.
2024-06-07 13:26:30 +01:00
Dan McArdle
dac7f76b14 cmd/gitannex: Update command docs
Mentioned the possibility of skipping the symlink for new versions of
git-annex. (Probably deserves a test once the new git-annex trickles
down to CI platforms.)

I stopped trying to explain each config parameter here. Rather, the doc
now shows the user how to ask git-annex to describe config parameters
with `--whatelse`.
2024-06-06 17:42:27 +01:00
Dan McArdle
446d6b28b8 cmd/gitannex: Support synonyms of config values 2024-06-06 17:42:27 +01:00
Thomas Schneider
7e04ff9528 S3: Ceph Backend use already exist changed to true (now tested) - fixes #7871 2024-06-06 11:27:07 +01:00
Bruno Fernandes
4568feb5f9 s3: Add Magalu S3 Object Storage as provider 2024-06-06 11:25:45 +01:00
Nick Craig-Wood
b9a2d3b6b9 config: fix default value for description 2024-06-06 09:25:17 +01:00
Nick Craig-Wood
775e567a7b b2: update URLs to new home 2024-06-06 09:25:17 +01:00
Nick Craig-Wood
59fc7ac193 Add yumeiyin to contributors 2024-06-06 09:25:17 +01:00
albertony
fea61cac9e serve dlna: make BrowseMetadata more compliant - fixes #7883 2024-06-02 14:07:45 +02:00
albertony
3f3e4b055e Fix new lint issues reported by golangci-lint v1.59.0
Error return value of `fmt.Fprintf` is not checked (errcheck)
2024-05-31 09:48:32 +02:00
yumeiyin
2257c03391 docs: fix some comments 2024-05-24 21:39:40 +02:00
Nick Craig-Wood
8f1c309c81 build: update all dependencies 2024-05-22 15:50:31 +01:00
Nick Craig-Wood
8e2f596fd0 drive: debug when we are ignoring permissions #7853 2024-05-21 15:32:26 +01:00
Nick Craig-Wood
de742ffc67 Add Dominik Joe Pantůček to contributors 2024-05-21 15:32:26 +01:00
Dominik Joe Pantůček
181ed55662 docs: crypt: fix incorrect terminology
This fixes the misuse of the key-derivation term (salt) used in place
of symmetric cipher nonce (IV) in the crypt remote documentation.
2024-05-20 23:21:21 +01:00
Nick Craig-Wood
a5700a4a53 operations: rework rcat so that it doesn't call the --metadata-mapper twice
The --metadata-mapper was being called twice for files that rclone
needed to stream to disk.

This happened only for:
- files bigger than --streaming-upload-cutoff
- on backends which didn't support PutStream

This also meant that these were being logged as two transfers which
was a little strange.

This fixes the problem by not using operations.Copy to upload the file
once it has been streamed to disk, instead using the Put method on the
backend.

This should have no effect on reliability of the transfers as we retry
Put if possible.

This also tidies up the Rcat function to make the different ways of
uploading the data clearer and make it easy to see that it gets
verified on all those paths.

See #7848
2024-05-20 18:16:54 +01:00
Nick Craig-Wood
faa58315c5 operations: ensure SrcFsType is set correctly when using --metadata-mapper
Before this change on files which have unknown length (like Google
Documents) the SrcFsType would be set to "memoryFs".

This change fixes the problem by getting the Copy function to pass the
src Fs into a variant of Rcat.

Fixes #7848
2024-05-20 18:16:54 +01:00
Nick Craig-Wood
7b89735ae7 onedrive: allow setting permissions to fail if failok flag is set
For example using

    --onedrive-metadata-permissions read,write,failok

Will allow permissions to be read and written but if the writing
fails, then only an ERROR will be written in the log and the transfer
won't fail.
2024-05-17 11:03:46 +01:00
Nick Craig-Wood
91192c2c5e Add Evan McBeth to contributors 2024-05-17 11:03:46 +01:00
Evan McBeth
96e39ea486 docs: improve readability in faq 2024-05-16 15:35:42 +02:00
Nick Craig-Wood
488ed28635 fs: fix panic when using --metadata-mapper on large google doc files
Before this change, attempting to copy a large google doc while using
the metadata mapper caused a panic. Google doc files use Rcat to
download as they have an unknown size, and when the size of the doc
file got above --streaming-upload-cutoff it used a
object.NewStaticObjectInfo with a `nil` Fs to upload the file which
caused the crash in the metadata mapper code.

This change makes sure that the Fs in object.NewStaticObjectInfo is
never nil, and it returns MemoryFs which is consistent with the Rcat
code when the source is sized below the --streaming-upload-cutoff
threshold.

Fixes #7845
2024-05-16 10:05:08 +01:00
Nick Craig-Wood
b059c96322 Add JT Olio to contributors 2024-05-16 10:05:08 +01:00
Nick Craig-Wood
6d22168a8c Add overallteach to contributors 2024-05-16 10:05:08 +01:00
JT Olio
e34e2df600 go.mod: update storj.io/uplink to latest release
significant performance and stability improvements
2024-05-16 08:43:57 +01:00
overallteach
6607102034 chore: fix function name in comment
Signed-off-by: overallteach <cricis@foxmail.com>
2024-05-15 19:30:17 +01:00
Nick Craig-Wood
c6c327e4e7 build: update issue label notification machinery 2024-05-15 15:07:45 +01:00
Nick Craig-Wood
6a0a54ab97 operations: fix missing metadata for multipart transfers to local disk
Before this change multipart downloads to the local disk with
--metadata failed to have their metadata set properly.

This was because the OpenWriterAt interface doesn't receive metadata
when creating the object.

This patch fixes the problem by using the recently introduced
Object.SetMetadata method to set the metadata on the object after the
download has completed (when using --metadata). If the backend we are
copying to is using OpenWriterAt but the Object doesn't support
SetMetadata then it will write an ERROR level log but complete
successfully. This should not happen at the moment as only the local
backend supports metadata and OpenWriterAt but it may in the future.

It also adds a test to check metadata is preserved when doing
multipart transfers.

Fixes #7424
2024-05-14 12:51:03 +01:00
Nick Craig-Wood
629e895da8 local: implement Object.SetMetadata 2024-05-14 12:51:03 +01:00
Nick Craig-Wood
cc634213a5 fs: define the optional interface SetMetadata and implement it in wrapping backends
This also implements backend integration tests for the feature
2024-05-14 12:51:03 +01:00
Nick Craig-Wood
e9e9feb21e drive: allow setting metadata to fail if failok flag is set
For example using

    --drive-metadata-permissions read,write,failok

Will allow metadata to be read and written but if the writing fails,
then only an ERROR will be written in the log and the transfer won't
fail.
2024-05-13 19:44:03 +01:00
Dan McArdle
f26fc8f07c cmd/gitannex: When tags do not match, run e2e tests anyway
Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
96703bb31e build: Inject rclone version tag when testing
This enables gitannex end-to-end tests to run on CI. Otherwise, the
version would not match and tests that check the rclone version would
fail like so:

```
=== RUN   TestEndToEnd
    e2e_test.go:199: Skipping due to rclone version: expected version "v1.67.0-DEV", but got "v1.67.0-beta.7905.220bbe24d.merge"
--- SKIP: TestEndToEnd (0.07s)
```

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
96d3adc771 cmd/gitannex: Remove assumption in e2e test version check 2024-05-13 18:44:31 +01:00
Dan McArdle
f82822baca .github/workflows: Install git-annex-remote-rclone on Linux and macOS
Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
af33a4f822 cmd/gitannex: Add TestEndToEndMigration tests
For each layout mode, these tests start with a git-annex-remote-rclone
remote, migrate it to a git-annex-remote-rclone-builtin remote. They
verify that a file copied pre-migration is still present and that `git
annex testremote` passes.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
a675cc6677 cmd/gitannex: Describe new rclonelayout config in help
Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
ad605ee356 cmd/gitannex: Drop chdir from e2e tests
Now that e2e tests are running in parallel, undoing the chdir to the
temp dir was causing flaky failures on cleanup. We don't need it anyway
because the worrisome subcommands have their working directory
controlled by `runInRepo()`.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
4ab235c06c cmd/gitannex: Repeat TestEndToEnd for all layout modes
I'm hopeful that running these in parallel will not impact CI runtime
very much, but that likely depends on the number of CPU cores and
whether the tmp filesystem is backed by memory vs a physical disk.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
9a2b85d71c cmd/gitannex: Refactor e2e tests, add layout compat tests
TestEndToEndRepoLayoutCompat exercises git-annex-remote-rclone-builtin
and git-annex-remote-rclone on the same rclone remote to ensure they are
compatible. It repeats the same test for all known layout modes.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
29b58dd4c5 cmd/gitannex: Add support for different layouts
This commit adds support for the same repo layouts supported by
git-annex-remote-rclone. This should enable git-annex users with remotes
of type "rclone" to switch to a "rclone-builtin" without needing to
retransfer content.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
36ad4eb145 cmd/gitannex: Simplify messageParser's finalParameter() func
Issue #7625
2024-05-13 18:44:31 +01:00
nielash
61ab519791 chunker: fix finalizer already set error
Before this change, cache.PinUntilFinalized was called twice if the root pointed
to a composite multi-chunk file without metadata, resulting in a fatal "finalizer
already set" error. This change fixes the issue.
2024-05-13 18:33:54 +01:00
nielash
678941afc1 mailru: use --tpslimit 10 on bisync tests
see https://github.com/rclone/rclone/issues/7768#issuecomment-2060888980
2024-05-13 18:33:11 +01:00
nielash
b153254b3a bisync: ignore "Implicitly create directory" messages on tests 2024-05-13 18:32:55 +01:00
nielash
17cd7a9496 quatrix: fix f.String() not including subpath 2024-05-13 18:32:41 +01:00
Nick Craig-Wood
0735f44f91 operations: fix lsjson --encrypted when using --crypt-XXX parameters
Before this change an `rclone lsjson --encrypted` command where
additional `--crypt-` parameters were supplied on the command line:

    rclone lsjson --crypt-description XXX --encrypted secret:

Produced an error like this:

    Failed to lsjson: ListJSON failed to load config for crypt remote: config name contains invalid characters...

This was due to an incorrect lookup of the crypt config to create the
encrypted mapping.

Fixes #7833
2024-05-13 17:59:58 +01:00
Nick Craig-Wood
04c69959b8 Add Sunny to contributors 2024-05-13 17:59:58 +01:00
Nick Craig-Wood
25cc8c927a Add Michael Terry to contributors 2024-05-13 17:59:58 +01:00
Sunny
6356b51b33 serve http: added content-length header when html directory is served 2024-05-13 17:24:54 +01:00
albertony
1890608f55 docs: minor formatting improvement 2024-05-13 12:50:22 +02:00
Michael Terry
cd76fd9219 oauthutil: clear client secret if client ID is set
When an external OAuth flow is being used (i.e. a client ID and an
OAuth token are set in the config), a client secret should not be set.
If one is, the server may reject a token refresh attempt.

But there's no way to clear out a backend's default client secret via
configuration, since empty-string config values are ignored.

So instead, when a client ID is set, we should clear out any default
client secret, since it wouldn't apply anyway.
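
A sketch of the kind of config this targets, i.e. an external OAuth flow with
a custom client ID and token but no client secret (remote name and values are
placeholders):

    [mydrive]
    type = drive
    client_id = 1234.apps.googleusercontent.com
    token = {"access_token":"...","token_type":"Bearer","expiry":"..."}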
2024-05-11 16:03:32 +01:00
Nick Craig-Wood
5b8cdaff39 drive: fix description being overwritten on server side moves
Before this the description for files got overwritten on server side
moves.

This change stops rclone setting the description to the leaf name
completely.

See: https://forum.rclone.org/t/file-descriptions-in-google-drive-not-shown-after-dedupe/45510
Fixes: #7770
2024-05-11 15:59:40 +01:00
albertony
f2f559230c bump golangci/golangci-lint-action from 4 to 6
Version 5 removed go cache management, and therefore also options skip-pkg-cache and
skip-build-cache, because the cache related to go itself is already handled by
actions/setup-go, and now it only caches golangci-lint analysis. Since we run multiple
golangci-lint-action steps for different goos, we want to cache package and build cache
and golangci-lint results from all of them, and therefore this commit now changes the
approach by disabling all built-in caching and introducing a separate cache step to
handle it properly.
2024-05-11 14:43:29 +02:00
nielash
e0b38cc9ac onedrive: add support for group permissions
This change adds support for "group" identities, and SharePoint variants
"siteUser" and "siteGroup". It also adds support for using any identity type
(including "application" and "device") as a recipient source when adding
permissions.
2024-05-10 16:25:08 +01:00
nielash
68dc79eddd onedrive: fix references to deprecated permissions properties
Before this change, metadata permissions used the `grantedTo` and
`grantedToIdentities` properties, which are deprecated on OneDrive Business in
favor of `grantedToV2` and `grantedToIdentitiesV2`. After this change, OneDrive
Business uses the new V2 versions, while OneDrive Personal still uses the
originals, as the V2 versions are not available for OneDrive Personal. (see
https://learn.microsoft.com/en-us/answers/questions/1079737/inconsistency-between-grantedtov2-and-grantedto-re)
2024-05-10 16:25:08 +01:00
nielash
76cea0c704 onedrive: skip writing permissions with 'owner' role
The 'owner' role is an implicit role that can't be removed, so don't try to.
2024-05-10 16:25:08 +01:00
Nick Craig-Wood
41d5d8b88a build: add issue label notification machinery 2024-05-10 16:15:28 +01:00
Nick Craig-Wood
aa2746d0de union: fix deleting dirs when all remotes can't have empty dirs 2024-05-10 11:15:12 +01:00
wiserain
b2f6aac754 pikpak: improve getFile() usage
Previously, `getFile()` was called indiscriminately during uploads, moves,
and download link generation. For users with a download limit, this could
cause subsequent operations like uploads and moves to fail.
This PR optimizes the use of getFile() by calling it only
when it is strictly necessary.
2024-05-08 09:09:56 +09:00
Eric Wolf
a0dacf4930 docs: exit code 9 requires --error-on-no-transfer
Updated exit code 9 definition to include that it requires the use of the "--error-on-no-transfer" flag with a link to that section.
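For example (paths illustrative), exit code 9 is only produced with the flag:

    rclone copy --error-on-no-transfer /src remote:dst
    echo $?   # prints 9 if no files needed transferring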
2024-05-07 09:18:05 +02:00
IoT Maestro
c5ff5afc21 ulozto: Fix handling of root paths with leading / trailing slashes.
This fixes #7796
2024-05-04 20:04:30 +01:00
Nick Craig-Wood
bd8523f208 fstest: reduce precision of directory time checks on CI
For unknown reasons the precision of modification times of directories
on the CI is > 15 ms compared to files which are 100 ns. The tests
work fine when run in Virtualbox though so I conjecture this is
something to do with the file system used there.
2024-05-03 12:29:18 +01:00
nielash
0bfd70c405 sync: remove now superfluous copyEmptyDirectories function 2024-05-03 12:29:18 +01:00
Nick Craig-Wood
47735d8fe1 sync: fix failed to update directory timestamp or metadata: directory not found
See: https://forum.rclone.org/t/empty-dirs-not-wanted/45059/14
Co-authored-by: nielash <nielronash@gmail.com>
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
617534112b sync: fix directory modification times not being set
Co-authored-by: nielash <nielronash@gmail.com>
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
271ec43189 sync: don't need to sync directories if they haven't been modified
Before this change we synced directories regardless of whether the source
directory existed. It is irrelevant whether the source directory
exists or not, what we need to know is has the directory been
modified.

Co-authored-by: nielash <nielronash@gmail.com>
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
10eb4742dd sync: fix creation of empty directories when --create-empty-src-dirs=false
In v1.66.0 the changes to enable metadata preservation on directories
introduced a regression, namely that empty directories were created
despite the state of the --create-empty-src-dirs flag.

This patch fixes the problem by letting the normal rclone directory
creation create the directories and fixing up their timestamps and
metadata afterwards if --create-empty-src-dirs=false.

Fixes #7689
See: https://forum.rclone.org/t/empty-dirs-not-wanted/45059/
See: https://forum.rclone.org/t/how-to-ignore-empty-directories-when-uploading-from-windows/45057/
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
2a2ec06ec1 sync: fix management of empty directories to make it more accurate
Before this change we used the same data structure for managing empty
directories for both --create-empty-src-dirs in sync/copy/move and for
the --delete-empty-src-dirs flag in move.

These two uses are subtly incompatible and this change uses a separate
data structure for each use. This makes it more accurate and easier to
understand.
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
7237b142fa drive: be more explicit in debug when setting permissions fail 2024-05-02 18:10:16 +01:00
Nick Craig-Wood
254e514330 onedrive,drive: make errors setting permissions into no retry errors 2024-04-30 09:34:33 +01:00
Nick Craig-Wood
9fa610088f docs: add Backblaze as a sponsor 2024-04-29 12:43:13 +01:00
Nick Craig-Wood
d2fa45acf3 storj: update bio on request 2024-04-29 11:49:13 +01:00
albertony
a86eb7ad50 docs: note that newer linux kernel version is required for ARMv5 2024-04-27 22:21:40 +02:00
Nick Craig-Wood
1fef8e667c build: migrate bucket storage for the project to new provider
This changes

- beta.rclone.org
- www.rclone.org
- pub.rclone.org
- downloads.rclone.org
2024-04-25 17:04:18 +01:00
Nick Craig-Wood
a5daef3892 Add hidewrong to contributors 2024-04-25 17:04:18 +01:00
Nick Craig-Wood
5bf70c68f1 swift: implement --swift-use-segments-container to allow >5G files on Blomp
This switches between storing chunks in a separate container suffixed
with `_segments` (the default) and a directory in the root
(`.file-segments`).

By default the `.file-segments` mode will be auto selected if
`auth_url`s that require it are detected.

If the `.file-segments` mode is in use then rclone will omit that
directory from listings.
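
A usage sketch forcing the `.file-segments` mode explicitly (remote name is illustrative):

    rclone copy --swift-use-segments-container=false bigfile.bin blomp:bucket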

See: https://forum.rclone.org/t/blomp-unable-to-upload-5gb-files/42498/
2024-04-25 11:14:14 +01:00
Nick Craig-Wood
8a18c29835 random: update Password docs 2024-04-25 11:14:14 +01:00
albertony
29ed17d19c build: add linting for different values of GOOS 2024-04-22 19:29:12 +02:00
albertony
7ee22fcdf9 build: fix linting issues reported by running golangci-lint with different GOOS 2024-04-22 19:29:12 +02:00
albertony
159e274921 build: fix linting issues reported by golangci-lint on windows 2024-04-22 19:29:12 +02:00
albertony
fdc56b21c1 log: fix lint issue SA1019: syscall.Syscall has been deprecated since Go 1.18: Use SyscallN instead. 2024-04-22 19:29:12 +02:00
albertony
1ca825b6f0 build: run go mod tidy 2024-04-22 19:29:12 +02:00
Kyle Reynolds
d36bc8833c backend http: Adding no-escape flag for option to not escape URL metacharacters in path names - fixes issue #7637 2024-04-22 17:57:09 +01:00
nielash
8977655869 bisync: avoid starting tests we don't have time to finish
This prevents all-or-nothing retries for tests that take longer (in total) than
the -timeout but less than the -timeout * -maxtries.

https://github.com/rclone/rclone/pull/7743#issuecomment-2057250848
2024-04-19 22:29:41 +01:00
nielash
58e09e1cd4 bisync: skip test if config string contains a space 2024-04-19 22:29:41 +01:00
Kyle Reynolds
64734dfe41 fs accounting: Add deleted files total size to status summary line - fixes issue #7190 2024-04-18 22:09:23 +01:00
albertony
68bf6aa584 build: remove build constraint syntax for go 1.16 and older 2024-04-18 16:53:55 +02:00
albertony
db17aaf7cd build: remove separate go module cache step as its done by setup-go 2024-04-18 12:43:26 +02:00
albertony
9531cd2c46 Convert source files with crlf to lf 2024-04-18 11:32:45 +02:00
hidewrong
c09426bcfe fix spelling 2024-04-17 18:02:44 +02:00
nielash
30517698aa bisync: make session path even shorter on tests
The .lck file filename length needs to be less than 255 bytes (not symbols) on
linux, and it was still too long on this test, because of the
subdir=測試_Русский_{spc}_{spc}_ě_áñ
on remotes with long names, such as TestChunkerChunk3bNoRenameLocal:
2024-04-16 14:45:54 -04:00
Nick Craig-Wood
8dc4c01209 build: make integration tests run better on macOS and Windows
This changes as many of the integration tests as possible so that they
use port forwarding rather than the docker IP directly.

Using the docker IP directly does not work on macOS and Windows as the
docker images are running in a VM rather than a container.

This adds the PORTS.md document to document which port numbers we are
using for which service as they need to be unique.
2024-04-16 10:48:48 +01:00
Nick Craig-Wood
807a7dabaa docs: fix heading anchor 2024-04-16 10:48:48 +01:00
Nick Craig-Wood
416324c047 Add pawsey-kbuckley to contributors 2024-04-16 10:48:48 +01:00
Nick Craig-Wood
524137f78a Add Katia Esposito to contributors 2024-04-16 10:48:48 +01:00
Evan Harris
f4c033a6a6 lsjson: small docs change to clarify options 2024-04-15 17:11:35 +01:00
pawsey-kbuckley
d459fb0cb8 genautocomplete: remove Ubuntu-ism from docs and clarify non-root use 2024-04-15 17:00:43 +01:00
Dave Nicolson
205745313d docs: fix macOS install from source link 2024-04-15 16:33:41 +01:00
Katia Esposito
79c00879ff ncdu: Do not quit on Esc 2024-04-15 16:18:27 +01:00
Nick Craig-Wood
2cff5514aa fix: test_all re-running too much stuff
This re-works the code which works out which tests need re-running to
be more accurate.
2024-04-15 16:08:27 +01:00
Nick Craig-Wood
88322f3eb2 Add Dave Nicolson to contributors 2024-04-15 16:08:27 +01:00
Nick Craig-Wood
036690c060 Add Butanediol to contributors 2024-04-15 16:08:27 +01:00
Nick Craig-Wood
805584a8dd Add yudrywet to contributors 2024-04-15 16:08:27 +01:00
Dave Nicolson
cc3ae931db docs: Add left and right padding to prevent icon truncation 2024-04-14 17:51:10 +01:00
Butanediol
0c0d64c316 serve s3: fix Last-Modified header format 2024-04-14 17:49:51 +01:00
yudrywet
50aa677934 chore: fix function names in comment
Signed-off-by: yudrywet <yudeyao@yeah.net>
2024-04-14 14:38:01 +01:00
nielash
51582e36e8 onedrive: set all metadata permissions and return error summary
Before this change when setting permissions from the metadata rclone
would stop on the first error.

This change causes rclone to attempt to set all the permissions and
return an error summary at the end.
2024-04-13 19:57:30 +01:00
Kyle Reynolds
47cbddbd27 fs rc: fixes incorrect Content-Type in HTTP API - fixes #7726 2024-04-13 19:56:34 +01:00
nielash
5323a21898 operations: fix move when dst is nil and fdst is case-insensitive
Before this change, the MoveCaseInsensitive logic in operations.move made the
assumption that dst != nil && remote != "". After this change, it should work
correctly when either one is present without the other.
2024-04-13 19:28:09 +01:00
Nick Craig-Wood
f2e693f722 sync: fix case normalisation on s3
Before this change when the sync routine attempted to normalise a
case, say from "FiLe.txt" to "file.txt" this caused a 400 Bad Request
error:

> This copy request is illegal because it is trying to copy an object
> to itself without changing the object's metadata, storage class,
> website redirect location or encryption attributes.

This was caused by passing the same object as the source and
destination to the move routine, whereas the destination object had a
different case and didn't exist, so should have been passed as nil.

See: https://github.com/rclone/rclone/pull/7743#discussion_r1557345906
2024-04-13 19:28:09 +01:00
Nick Craig-Wood
93955b755f operations: fix retries downloading too much data with certain backends
Before this fix if more than one retry happened on a file that rclone
had opened for read with a backend that uses fs.FixRangeOption then
rclone would read too much data and the transfer would fail.

Backends affected:

- azureblob, azurefiles, b2, box, dropbox, fichier, filefabric
- googlecloudstorage, hidrive, imagekit, jottacloud, koofr, netstorage
- onedrive, opendrive, oracleobjectstorage, pikpak, premiumizeme
- protondrive, qingstor, quatrix, s3, sharefile, sugarsync, swift
- uptobox, webdav, zoho

This was because rclone was emitting Range requests for the wrong data
range on the second and subsequent retries.

This was caused by fs.FixRangeOption modifying the options and the
reopen code relying on them not being modified.

This fix makes a copy of the fs.FixRangeOption in the reopen code to
fix the problem.

In future it might be best to change fs.FixRangeOption so it returns a
new options slice.

Fixes #7759
2024-04-13 19:25:15 +01:00
Nick Craig-Wood
a4fc5edc5e operations: add more assertions to ReOpen tests to check seek positions 2024-04-13 19:25:15 +01:00
Nick Craig-Wood
8b73dcb95d Add static-moonlight to contributors 2024-04-13 19:25:15 +01:00
static-moonlight
3ba57cabce doc: add example how to run serve s3 2024-04-13 19:22:26 +01:00
Nick Craig-Wood
c87097109b serve s3: adjust to move of Mikubill/gofakes3 to rclone/gofakes3
This also updates the interface which has gained a ctx parameter in
the mean time.
2024-04-13 18:25:41 +01:00
Nick Craig-Wood
ae76498a38 Add guangwu to contributors 2024-04-13 18:25:41 +01:00
Nick Craig-Wood
10f730c49f Add jakzoe to contributors 2024-04-13 18:25:41 +01:00
albertony
88d96d133b Add go mod and sum to gitattributes for consistent line endings 2024-04-13 11:16:42 +02:00
nielash
2c7680050b bisync: rename extended_char_paths test
The .lck file filename length needs to be less than 255 bytes (not symbols) on
linux, and it was still too long on this test, because of the
subdir=測試_Русский_{spc}_{spc}_ě_áñ
2024-04-11 16:27:20 +01:00
nielash
fe6c9aa4da chunker: fix case-insensitive comparison on local without metadata
Before this change, chunker would erroneously consider two different paths to be
equal if, due to special characters, they normalized to equal-folding strings in
Standard Encoding, but not otherwise. This caused base objects to get moved when
they should not have been. This change fixes the issue, which was discovered on
the bisync integration tests.

Ideally it should also be fixed when the base Fs is non-local, but there's not an
easy way at the moment to reference the wrapped Fs's encoding, at least without
breaking encapsulation.
2024-04-11 16:27:20 +01:00
nielash
8524afa9ce chunker: fix NewFs when root points to composite multi-chunk file without metadata
Before this change, calling NewFs on a composite multi-chunk file with
--chunker-meta-format "none"
would fail due to f.base pointing to the wrong Fs. This change fixes the issue,
which was discovered on the bisync integration tests.
2024-04-11 16:27:20 +01:00
nielash
21f3ba13f6 bisync: more fixes for integration tests
- use fs.ConfigStringFull instead of bilib.StripHexString to properly reverse
connection string remotes
2024-04-11 16:27:20 +01:00
nielash
04128f97ee bisync: fix endless loop if lockfile decoder errors
Before this change, the decoder looked only for `io.EOF`, and if any other error
was returned, it could cause an infinite loop. This change fixes the issue by
breaking for any non-nil error.
2024-04-10 16:33:05 +01:00
nielash
bef9fd0bc3 bisync: make tempDir path shorter
to avoid exceeding linux filename length limits
2024-04-10 16:33:05 +01:00
guangwu
2ab2ec29f9 fix: close cpu profile
Signed-off-by: guoguangwu <guoguangwug@gmail.com>
2024-04-09 11:23:55 +01:00
jakzoe
8817ee25ae docs: fix typo in filtering.md
Fix typo: moved misplaced double quotation mark.
2024-04-09 11:16:59 +01:00
Nick Craig-Wood
46b3854330 drive: set all metadata permissions and return error summary
Before this change when setting permissions from the metadata rclone
would stop on the first error.

This change causes rclone to attempt to set all the permissions and
return an error summary at the end.
2024-04-08 17:23:22 +01:00
Nick Craig-Wood
efbaca3a95 crypt: fix max suggested length of filenames 2024-04-08 17:23:22 +01:00
nielash
f995ece64d bisync: fix io.PipeWriter not getting closed on tests 2024-04-07 21:55:26 -04:00
jumbi77
68c2ba74dd pikpak: fix a typo in a comment
The last still-open fix from PR #6970
2024-04-06 11:27:24 +01:00
albertony
e739ee2c27 docs: ensure empty line between text and a following heading 2024-04-05 21:39:44 +02:00
Dan McArdle
05e5712bc4 .github/workflows: Upgrade deprecated macos-11 to macos-latest
See https://github.com/actions/runner-images
2024-04-05 18:01:39 +01:00
Dan McArdle
a2e38e9883 cmd/gitannex: Downgrade to protocol version 1
This enables compatibility with versions of git-annex currently
available on GitHub's "ubuntu-latest" image, aka Ubuntu 22.04 Jammy.
Currently, Jammy is shipping git-annex 8.20210223-2ubuntu2.
https://packages.ubuntu.com/jammy/git-annex

Issue #7625
2024-04-05 18:01:39 +01:00
Dan McArdle
ef42c32cc6 cmd/gitannex: Replace e2e test script with Go test
This commit implements milestone 2.1 for the gitannex subcommand:
https://github.com/rclone/rclone/issues/7625#issuecomment-1951403856

This rewrite makes a few improvements over the old shell script:

(1) It no longer uses the system's rclone.conf. Now, it writes the
    rclone.conf file in an ephemeral directory.

(2) It no longer makes any assumptions about the contents of /tmp.

However, it now assumes that an rclone built from the HEAD commit is on
the PATH. It makes a best-effort attempt to verify this assumption, but
I'm not sure it's bulletproof.

I'm hoping that writing this in Go will enable more cross-platform
support in the future, but for now we're still restricted to Unixy
systems due to reliance on the HOME environment variable.

Issue #7625
2024-04-05 18:01:39 +01:00
Nick Craig-Wood
6a5c0065ef docs: clarify option syntax
See: https://forum.rclone.org/t/seeming-documentation-problem-rclones-syntax-a-problem-with-the-categories-on-this-forum/45395/
2024-04-05 15:59:32 +01:00
Nick Craig-Wood
6da27db844 build: fix CVE-2023-45288 by upgrading golang.org/x/net
See: https://pkg.go.dev/vuln/GO-2024-2687
2024-04-05 15:59:32 +01:00
Nick Craig-Wood
c0497d46d5 ulozto: remove use of github.com/pkg/errors 2024-04-05 15:59:32 +01:00
Nick Craig-Wood
df3df06d2e Add Pieter van Oostrum to contributors 2024-04-05 15:59:32 +01:00
Pieter van Oostrum
1e3ab7acfd docs: fix MANUAL formatting problems
1) Missing closing code backticks (```) in s3.md causes formatting problems
2) Pandoc requires blank lines before ATX headings
2024-04-05 10:41:47 +02:00
Kyle Reynolds
339bc1d1a3 backend koofr: remove trailing bracket - fixes #7600 2024-04-04 20:03:26 +01:00
nielash
71069ed5c1 webdav: fix SetModTime erasing checksums on owncloud and nextcloud
Before this change, calling SetModTime on owncloud and nextcloud would
inadvertently erase the object's stored hashes. This change fixes the issue,
which was discovered by the bisync integration tests.
2024-04-03 16:43:11 -04:00
nielash
75df38f6ee bisync: use fstest.RandomRemote on tests
- use fstest.RandomRemote to create the root level directory
- fix parsing of canonical name for connection string remotes
2024-04-03 16:43:11 -04:00
nielash
ce4064aabf hdfs: fix f.String() not including subpath 2024-04-03 16:43:11 -04:00
Nick Craig-Wood
7c9f1b8917 local: disable unreliable test
In this commit we merged an unreliable test

e053c8a1c0 copy: fix nil pointer dereference when corrupted on transfer with nil dst

It is a good idea but very hard to implement so that it always works.

Hence this disables it for the moment.
2024-04-02 18:48:34 +01:00
Nick Craig-Wood
e71c95a554 docs: update warp sponsorship 2024-04-02 16:32:24 +01:00
nielash
e053c8a1c0 copy: fix nil pointer dereference when corrupted on transfer with nil dst 2024-04-02 15:34:58 +01:00
Nick Craig-Wood
c2d96113ac Add Erisa A to contributors 2024-04-02 15:34:58 +01:00
Nick Craig-Wood
5ff961d2ea Add yoelvini to contributors 2024-04-02 15:34:58 +01:00
Nick Craig-Wood
eedeaf7cbb Add Alexandre Lavigne to contributors 2024-04-02 15:34:58 +01:00
Kyle Reynolds
f6e716543a test info: improve cleanup of temp files - fixes #7209
Co-authored-by: Kyle Reynolds <kyle.reynolds@bridgerphotonics.com>
2024-04-02 15:03:33 +01:00
nielash
998df26ceb onedrive: fix --metadata-mapper called twice if writing permissions
Before this change, the --metadata-mapper was called twice if an object was
uploaded via multipart upload with --metadata and --onedrive-metadata-permissions
"write" or "read,write". This change fixes the issue.
2024-04-02 14:57:43 +01:00
Pat Patterson
93c960df59 b2: Add tests for new cleanup and cleanup-hidden backend commands. 2024-04-02 12:36:43 +01:00
Nikita Shoshin
92368f6d2b rcserver: set ModTime for dirs and files served by --rc-serve 2024-04-02 12:10:45 +01:00
Erisa A
08bf5228a7 docs: Add R2 note about no_check_bucket 2024-04-02 12:08:56 +01:00
yoelvini
76f3eb3ed2 s3: add new AWS region il-central-1 Tel Aviv 2024-04-01 18:17:16 +01:00
nielash
0d43da7655 bisync: more fixes for integration tests
- fix parsing of connection string remotes (comma in name)
- skip remotes that can't upload empty files
- Mkdir the test case subdir before cache.Get-ing it
	(only storj seems to need this... bug?)
2024-03-31 17:56:28 -04:00
Alexandre Lavigne
f9429de807 s3: update Scaleway's configuration options - fixes #7507
In order to handle special characters, the configuration must tell
rclone to use `list_url_encode`.
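
A hedged config sketch (remote name is illustrative):

    [scaleway]
    type = s3
    provider = Scaleway
    list_url_encode = true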
2024-03-31 17:42:20 +01:00
nielash
bce80be2f8 bisync: several fixes for integration tests
Several fixes for the bisync integration tests:

- use unique initdir and datadir for each subtest so concurrent tests don't interfere with each other
- remove dots from dir names for bucket backends
- ignore messages specific to cache backend
- skip fix-case tests on backends that can't fix-case
- don't expect "{hashtype} differ" messages on backends with no hash types
- print timestamps in UTC local

More fixes will still be needed, but this should hopefully fix a good portion of them.
2024-03-30 13:39:44 -04:00
Nick Craig-Wood
679f4fdfa9 ulozto: make password config item be obscured 2024-03-30 17:32:32 +00:00
Nick Craig-Wood
7c828ffe09 operations: fix very long file names when using copy with --partial
Before this change we were using the wrong variable to read the
filename length from. This meant that very long filenames were not
being truncated as intended.

This problem was spotted by Wang Zhiwei on the forum in a code review.

See: https://forum.rclone.org/t/why-use-c-remoteforcopy-instead-of-c-remote-to-check-length-in-copy-operation/45099
2024-03-30 09:06:58 +00:00
Nick Craig-Wood
1cf1f4fab2 Add Warrentheo to contributors 2024-03-30 09:06:58 +00:00
Nick Craig-Wood
853e802d8d Add Alex Garel to contributors 2024-03-30 09:06:30 +00:00
Warrentheo
3052d026ce onedrive: fix typo 2024-03-30 08:22:32 +00:00
albertony
9c9487365f config: show more user friendly names of custom types in ui 2024-03-29 21:20:19 +01:00
albertony
0f6c10ca02 config: add ending period on description option help text 2024-03-29 17:11:21 +01:00
Alex Garel
a075654f20 docs: add an indication in case of recursive shortcuts in drive
Help people handle an issue which might be difficult to understand
otherwise.

If you have recursive shortcuts (pointing to a parent folder) in a
Google Drive, rclone recurses infinitely, never finishing and
filling the disk, even if you ask it not to fetch shortcut contents.
2024-03-29 12:35:47 +01:00
IoT Maestro
571d20d126 ulozto: implement Mover and DirMover interfaces. 2024-03-29 09:09:12 +00:00
IoT Maestro
c9ce384ec7 ulozto: revert the temporary file size limitations 2024-03-29 09:07:44 +00:00
IoT Maestro
748c43d525 ulozto: set Content-Length header if the file size is known. 2024-03-29 09:07:44 +00:00
Nick Craig-Wood
7c20ec3772 local: fix and update -l docs
See: https://forum.rclone.org/t/rclone-l-cloning-symlink-file-problem/45286/
2024-03-28 11:56:22 +00:00
Nick Craig-Wood
42914bc0b0 serve webdav: fix webdav with --baseurl under Windows
Windows webdav does an OPTIONS request on the root even when given a
path and if we return 404 here then Windows refuses to use the path.

This patch allows OPTIONS requests only on the root to fix this.

This affects all the HTTP servers.
2024-03-28 10:06:04 +00:00
nielash
f62e7b5b30 memory: fix incorrect list entries when rooted at subdirectory
Before this change, List would return incorrect directory paths (relative to the
wrong root) if the Fs root pointed to a subdirectory. For example, listing dir
"a/b/c/d" of remote :memory: would work correctly, but listing dir "c/d" of
remote :memory:a/b would not, and would result in "Entry doesn't belong in
directory %q (contains subdir)" errors.

This change fixes the issue and adds a test to detect any other backends that
might have the same issue.
2024-03-27 11:43:26 -04:00
nielash
2b0a25a64d memory: fix deadlock in operations.Purge
Before this change, the Memory backend had the potential to deadlock under
certain conditions, if the ListR callback required locking the b.mu mutex. This
was the case with operations.Purge, because Memory has no Purge method, and the
fallback option does:

	err = DeleteFiles(ctx, listToChan(ctx, f, dir))

which potentially starts removing objects before the listing has completed.

This change fixes the issue by batching all the entries before calling the
callback on them.
2024-03-27 11:42:49 -04:00
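A minimal sketch of the batching pattern (invented types, not the memory backend's code): collect the entries while holding the lock, release it, and only then run the callback, so the callback is free to take the same lock:

```go
package main

import (
    "fmt"
    "sync"
)

type store struct {
    mu      sync.Mutex
    objects []string
}

// listR batches the entries before calling the callback, instead of
// invoking it while mu is held.
func (s *store) listR(callback func(batch []string) error) error {
    s.mu.Lock()
    batch := append([]string(nil), s.objects...) // copy under the lock
    s.mu.Unlock()
    return callback(batch) // safe: callback may lock s.mu itself
}

func (s *store) remove(name string) {
    s.mu.Lock()
    defer s.mu.Unlock()
    for i, o := range s.objects {
        if o == name {
            s.objects = append(s.objects[:i], s.objects[i+1:]...)
            return
        }
    }
}

func main() {
    s := &store{objects: []string{"a", "b"}}
    _ = s.listR(func(batch []string) error {
        for _, name := range batch {
            s.remove(name) // would deadlock if listR still held the lock
        }
        return nil
    })
    fmt.Println(s.objects)
}
```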
nielash
2bebbfaded bisync: add to integration tests - fixes #7665
This change officially adds bisync to the nightly integration tests for all
backends.

This will be part of giving us the confidence to take bisync out of beta.

A number of fixes have been added to account for features which can differ on
different backends -- for example, hash types / modtime support, empty
directories, unicode normalization, and unimportant differences in log output.
We will likely find that more of these are needed once we start running these
with the full set of remotes.

Additionally, bisync's extremely sensitive tests revealed a few bugs in other
backends that weren't previously covered by other tests. Fixes for those issues
have been submitted on the following separate PRs (and bisync test failures will
be expected until they are merged):

- #7670 memory: fix deadlock in operations.Purge
- #7688 memory: fix incorrect list entries when rooted at subdirectory
- #7690 memory: fix dst mutating src after server-side copy
- #7692 dropbox: fix chunked uploads when size <= chunkSize

Relatedly, workarounds have been put in place for the following backend
limitations that are unsolvable for the time being:

- #3262 drive is sometimes aware of trashed files/folders when it shouldn't be
- #6199 dropbox can't handle emojis and certain other characters
- #4590 onedrive API has longstanding bug for conflictBehavior=replace in
	server-side copy/move
2024-03-27 10:50:14 -04:00
nielash
fecce67ac6 memory: fix dst mutating src after server-side copy
Before this change, the Memory backend's Copy method created a dst object that
referenced the src's objectData by pointer instead of making a copy. While this
minimized memory usage, an unintended consequence was that subsequently mutating
the src (such as changing the modtime) would inadvertently also mutate the dst,
and vice versa.

This change fixes the issue and adds a test.
2024-03-26 20:40:06 -04:00
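A minimal sketch of the aliasing problem and the fix (invented names, not the backend's real types): the copy must duplicate the underlying data rather than share a pointer:

```go
package main

import (
    "fmt"
    "time"
)

type objectData struct {
    data    []byte
    modTime time.Time
}

// copyObject returns an independent copy of src, so later changes to src
// (such as setting a new modtime) do not leak into the copy.
func copyObject(src *objectData) *objectData {
    return &objectData{
        data:    append([]byte(nil), src.data...), // copy, don't alias
        modTime: src.modTime,
    }
}

func main() {
    src := &objectData{data: []byte("hello"), modTime: time.Now()}
    dst := copyObject(src)
    src.modTime = src.modTime.Add(time.Hour)    // mutating src...
    fmt.Println(dst.modTime.Equal(src.modTime)) // ...no longer affects dst: false
}
```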
Nick Craig-Wood
a67688dcc7 mount,cmount,mount2: add --direct-io flag to force uncached access
This change adds the --direct-io flag to the mount. This means the
page cache is completely bypassed for reads and writes. No read-ahead
takes place. Shared mmap is disabled.

This is useful to accurately read files which may change length
frequently on the source.
2024-03-26 17:32:11 +00:00
Nick Craig-Wood
f3f743c3f9 vfs: fix download loop when file size shrunk
Before this change, if a file shrunk in size on the remote then rclone
could get into a loop trying to download the file forever.

The symptom was repeating errors like this:

    vfs cache: restart download failed: failed to start downloader: failed to open downloader: vfs reader: failed to open source file: invalid seek position

The fix was to check the file size in various places and make sure
that we weren't trying to download too much data.

This was a problem with backends (like s3) which update the size of
the object on Open to the actual size of the object.
2024-03-26 17:32:10 +00:00
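A minimal sketch of the clamping idea (hypothetical helper, not the VFS code): any requested range is limited to the size the object is now known to have, so a shrunken file cannot produce an impossible seek position:

```go
package main

import (
    "errors"
    "fmt"
)

var errInvalidSeek = errors.New("invalid seek position")

// clampRange limits a requested (offset, length) to the known object size.
func clampRange(offset, length, size int64) (int64, int64, error) {
    if offset >= size {
        return 0, 0, errInvalidSeek // nothing left to download
    }
    if offset+length > size {
        length = size - offset // don't try to download past the end
    }
    return offset, length, nil
}

func main() {
    // File used to be 1000 bytes but has shrunk to 400 on the remote.
    fmt.Println(clampRange(300, 500, 400)) // 300 100 <nil>
    fmt.Println(clampRange(800, 100, 400)) // 0 0 invalid seek position
}
```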
Nick Craig-Wood
ac6ba11d22 local: add --local-time-type to use mtime/atime/btime/ctime as the time
Fixes #7484
2024-03-26 11:58:28 +00:00
Nick Craig-Wood
854a36c4ab Add psychopatt to contributors 2024-03-26 11:58:28 +00:00
psychopatt
522ab1de6d docs: remove email from authors 2024-03-26 11:45:22 +00:00
Nick Craig-Wood
215ae17272 rc: fix stats groups being ignored in operations/check
Before this change operations/check was using a background context for
the checking which was causing the stats group to be ignored.

This fixes the problem and also a similar problem in backend/command

See: https://forum.rclone.org/t/operations-check-only-reports-to-global-stats-not-per-job-group/45254
2024-03-26 11:23:40 +00:00
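A minimal sketch of the context change (hypothetical names): stats are recorded against whatever group travels in the context, so the inbound rc request context must be passed through rather than replaced with context.Background():

```go
package main

import (
    "context"
    "fmt"
)

type groupKey struct{}

// statsGroup returns the stats group carried in ctx, or "global" if none.
func statsGroup(ctx context.Context) string {
    if g, ok := ctx.Value(groupKey{}).(string); ok {
        return g
    }
    return "global"
}

// Before: runCheck(context.Background()) -> stats land in the global group.
// After: the rc request context is passed through unchanged.
func runCheck(ctx context.Context) {
    fmt.Println("recording stats in group:", statsGroup(ctx))
}

func main() {
    reqCtx := context.WithValue(context.Background(), groupKey{}, "job/1")
    runCheck(context.Background()) // old behaviour: "global"
    runCheck(reqCtx)               // fixed behaviour: "job/1"
}
```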
Nick Craig-Wood
efed6b01d2 drive: fix server side copy with metadata from my drive to shared drive
Before this change trying to server side copy an object from a my
drive to a shared drive using --metadata caused this error:

    Sharing restrictions cannot be set on a shared drive item., teamDrivesSharingRestrictionNotAllowed

This was because we were setting the "writers-can-share" metadata
which isn't allowed on shared drives.
2024-03-26 11:16:22 +00:00
Nick Craig-Wood
d11fe9779e drive: stop sending notification emails when setting permissions 2024-03-26 11:11:18 +00:00
Nick Craig-Wood
f167846fb9 Add iotmaestro to contributors 2024-03-26 11:11:18 +00:00
Nick Craig-Wood
1f4b433ace Add Vitaly to contributors 2024-03-26 11:11:18 +00:00
Nick Craig-Wood
4d09320b2b Add hoyho to contributors 2024-03-26 11:11:18 +00:00
Nick Craig-Wood
af313d66d5 Add Lewis Hook to contributors 2024-03-26 11:11:18 +00:00
iotmaestro
4b5c10f72e Add a new backend for uloz.to
Note that this temporarily skips uploads of files over 2.5 GB.

See https://github.com/rclone/rclone/pull/7552#issuecomment-1956316492
for details.
2024-03-26 09:46:47 +00:00
Dan McArdle
dfc329c036 cmd/gitannex: Add the gitannex subcommand
This commit adds a new subcommand named "gitannex", aka
"git-annex-remote-rclone-builtin" when invoked via a symlink.

This accomplishes milestone 1 from issue #7625: "minimal support for the
external special remote protocol".

Issue #7625
2024-03-26 09:43:43 +00:00
gvitali
d9601c78b1 linkbox: fix list paging and optimized synchronization.
1. The maximum number of objects on a page should be no more than
1000. Currently it is 1024, so the listing always ends on the first
page with the error “object not found”, rclone tries to upload the
file again, Linkbox stores it with the name “filename(N)”, and the
storage fills up indefinitely.

2. A hyphen is added to the list of allowed characters, which makes
queries more efficient (there is no need to load all files in a
directory for an entity containing a hyphen).
2024-03-24 12:05:58 +00:00
Vitaly
4258ad705e linkbox: fix working with names longer than 8-25 Unicode chars.
The LinkBox API does not allow searching by more than 25 Unicode
characters in the name. For this reason it is currently impossible to
work with files and folders whose names are longer than 8 Unicode
chars (if encoded in base32).

This fix queries all files in a directory when the name is too long
and checks their names one by one, thus solving the issue.

Fixes #7542
2024-03-24 12:05:58 +00:00
Pat Patterson
070cff8a65 b2: Add new cleanup and cleanup-hidden backend commands. 2024-03-23 18:07:02 +00:00
hoyho
a24aeba495 s3: validate CopyCutoff size before copy
Signed-off-by: hoyho <luohaihao@gmail.com>
2024-03-23 15:09:38 +00:00
Lewis Hook
bf494d48d6 Improve error messages when objects have been corrupted on transfer - fixes #5268 2024-03-23 12:35:35 +00:00
Nick Craig-Wood
aee8d909b3 onedrive: fix "unauthenticated: Unauthenticated" errors when downloading
Before this change we would pass the Authorization header on to the
download server. This is allowed according to the docs, but on some
onedrive servers this sometimes causes an error with the text
"unauthenticated: Unauthenticated".

This is a similar fix to

dedad9f071 onedrive: fix "unauthenticated: Unauthenticated" errors when uploading

See: https://forum.rclone.org/t/cryptcheck-on-encrypted-onedrive-personal-failed-with-unauthenticated-error/44581/
2024-03-23 12:08:35 +00:00
Nick Craig-Wood
48262849df lib/rest: Add Client.Do function to call http.Client.Do 2024-03-23 12:08:23 +00:00
Nick Craig-Wood
09cc8179cc lib/rest: add CheckRedirect function for redirect management 2024-03-23 12:08:23 +00:00
Nick Craig-Wood
ff855fe1fb operations: Fix "optional feature not implemented" error with a crypted sftp
Before this change operations.SetDirModTime could return the error
"optional feature not implemented" when attempting to set modification
times on crypted sftp backends.

This was because crypt wraps the directories using fs.DirWrapper but
these return fs.ErrorNotImplemented for the SetModTime method.

The fix is to recognise that error and fall back to using the
DirSetModTime method on the backend which does work.

Fixes #7673
2024-03-22 17:36:04 +00:00
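A minimal sketch of the fallback logic (invented interfaces, not rclone's fs package): if the wrapped directory reports the operation as not implemented, the backend's DirSetModTime is used instead:

```go
package main

import (
    "errors"
    "fmt"
    "time"
)

var errNotImplemented = errors.New("optional feature not implemented")

type dir interface {
    SetModTime(t time.Time) error
}

type backend interface {
    DirSetModTime(path string, t time.Time) error
}

// setDirModTime tries the wrapped directory first and falls back to the
// backend method when the wrapper can't do it.
func setDirModTime(d dir, b backend, path string, t time.Time) error {
    err := d.SetModTime(t)
    if errors.Is(err, errNotImplemented) && b != nil {
        return b.DirSetModTime(path, t)
    }
    return err
}

type wrappedDir struct{}

func (wrappedDir) SetModTime(time.Time) error { return errNotImplemented }

type sftpLike struct{}

func (sftpLike) DirSetModTime(path string, t time.Time) error {
    fmt.Println("set modtime on", path, "to", t.Format(time.RFC3339))
    return nil
}

func main() {
    _ = setDirModTime(wrappedDir{}, sftpLike{}, "dir", time.Now())
}
```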
Nick Craig-Wood
5ee89bdcf8 Add Kyle Reynolds to contributors 2024-03-22 17:36:04 +00:00
Nick Craig-Wood
f7bf28806c Add YukiUnHappy to contributors 2024-03-22 17:36:04 +00:00
Nick Craig-Wood
df6c573c99 Add Gachoud Philippe to contributors 2024-03-22 17:36:04 +00:00
Nick Craig-Wood
b7c06e5eb9 Add racerole to contributors 2024-03-22 17:36:04 +00:00
Nick Craig-Wood
e0e9ac50d3 Add John-Paul Smith to contributors 2024-03-22 17:36:04 +00:00
YukiUnHappy
f68d962c86 onedrive: make server-side copy to work in more scenarios 2024-03-22 17:29:38 +00:00
kapitainsky
6232cc123f docs: Proton Drive, correct typo
Proton Drive correct typo
2024-03-22 16:36:21 +00:00
Gachoud Philippe
a33576af7d docs: drive: corrected relative path of scopes to absolute
and added some links to the reference
2024-03-22 12:26:34 +00:00
kapitainsky
2591703494 docs: clarify shell_type = none and ssh = behaviour
Discussed on the forum:

https://forum.rclone.org/t/can-rclone-be-made-to-work-with-an-sftp-server-confining-users-to-an-sftp-jail-and-no-login/44931
2024-03-21 15:08:56 +01:00
Kyle Reynolds
7803b4ed6c fs: improve JSON Unmarshalling for Duration
Enhanced the UnmarshalJSON method for the Duration type to correctly
handle the special string 'off' and ensure large integers are parsed
accurately without floating-point rounding errors. This resolves
issues with setting and removing the MinAge filter through the rclone
rc command.

Fixes #3783

Co-authored-by: Kyle Reynolds <kyle.reynolds@bridgerphotonics.com>
2024-03-13 18:08:59 +00:00
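A minimal sketch of the parsing approach (a stand-in type, not rclone's fs.Duration): the special string "off" is handled explicitly, and numbers are decoded via json.Number so large integers are not rounded through float64:

```go
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "math"
    "strconv"
    "time"
)

type Duration time.Duration

const DurationOff = Duration(math.MaxInt64)

func (d *Duration) UnmarshalJSON(in []byte) error {
    dec := json.NewDecoder(bytes.NewReader(in))
    dec.UseNumber() // keep full integer precision
    var v interface{}
    if err := dec.Decode(&v); err != nil {
        return err
    }
    switch x := v.(type) {
    case string:
        if x == "off" {
            *d = DurationOff
            return nil
        }
        parsed, err := time.ParseDuration(x)
        if err != nil {
            return err
        }
        *d = Duration(parsed)
        return nil
    case json.Number:
        n, err := strconv.ParseInt(string(x), 10, 64)
        if err != nil {
            return err
        }
        *d = Duration(n)
        return nil
    }
    return fmt.Errorf("unexpected duration %q", string(in))
}

func main() {
    var d Duration
    _ = json.Unmarshal([]byte(`"off"`), &d)
    fmt.Println(d == DurationOff) // true

    _ = json.Unmarshal([]byte(`9007199254740993`), &d) // 2^53+1 would be rounded via float64
    fmt.Println(int64(d))                              // exact value preserved
}
```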
racerole
00fb847662 docs: remove repeated words 2024-03-13 17:12:39 +00:00
Thomas Müller
c7bfadd10a owncloud: add config owncloud_exclude_mounts which allows to exclude mounted folders when listing remote resources 2024-03-13 17:09:10 +00:00
John-Paul Smith
ca903b9872 drive: backend query command
This command executes a list query in Google Drive’s native query
language and returns a JSON dump of matches. It’s useful for locating
files quickly in folders with a large number of files, where rclone’s
normal list command is slow due to client-side filtering.
2024-03-11 20:16:13 +00:00
Nick Craig-Wood
b7783f75a4 Start v1.67.0-DEV development 2024-03-10 12:14:00 +00:00
Nick Craig-Wood
b6013a5e68 Version v1.66.0 2024-03-10 11:22:43 +00:00
Nick Craig-Wood
b7422a4fc8 docs: update metadata docs with Move and Copy support 2024-03-09 14:13:18 +00:00
nielash
9b650d3517 hasher: look for cached hash if passed hash unexpectedly blank
Before this change, Hasher did not check whether a "passed hash" (hashtype
natively supported by the wrapped backend) returned from a backend was blank,
and would sometimes return a blank hash to the caller even when a non-blank hash
was already stored in the db. This caused issues with, for example, Google
Drive, which has SHA1 / SHA256 hashes for some files but not others
(https://rclone.org/drive/#sha1-or-sha256-hashes-may-be-missing) and sometimes also
does not have hashes for very recently modified files.

After this change, Hasher will check if the received "passed hash" is
unexpectedly blank, and if so, it will continue to try other enabled methods,
such as retrieving a value from the database, or possibly regenerating it.

https://forum.rclone.org/t/hasher-with-gdrive-backend-does-not-return-sha1-sha256-for-old-files/44680/9?u=nielash
2024-03-09 11:58:02 +00:00
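A minimal sketch of the fallback order (invented helper functions, not the hasher backend): an unexpectedly blank passed hash is treated as missing, and the cached value (or regeneration) is tried instead:

```go
package main

import "fmt"

func passedHash(name string) string { return "" }            // backend returned blank
func cachedHash(name string) string { return "da39a3ee..." } // value stored in the db
func regenerate(name string) string { return "computed" }

func hashOf(name string) string {
    if h := passedHash(name); h != "" {
        return h // trust the wrapped backend when it actually has a hash
    }
    if h := cachedHash(name); h != "" {
        return h // fall back to the cached value
    }
    return regenerate(name) // last resort: recompute
}

func main() {
    fmt.Println(hashOf("file.txt"))
}
```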
nielash
ff0acfb568 hasher: fix error from trying to stop an already-stopped db
Before this change, Hasher would sometimes try to stop a bolt db that was
already stopped, resulting in an error. This change fixes the issue by checking
first whether the db is already stopped.

https://forum.rclone.org/t/hasher-with-gdrive-backend-does-not-return-sha1-sha256-for-old-files/44680/11?u=nielash
2024-03-09 11:58:02 +00:00
Nick Craig-Wood
ac830ddd42 sync: don't sync directory modtimes from backends which don't have directories
Some backends (like s3, swift, gcs, azureblob) don't have directories
(this can be overridden on some using the directory markers feature).

It therefore makes no sense to sync directory times from them as they
will all be a value made up by rclone (--default-time)

We use the feature flag CanHaveEmptyDirectories to mark backends
without real directory support and disable the directory modification
time syncing on those.
2024-03-09 11:28:15 +00:00
Nick Craig-Wood
f491efc85d sync: fix integration tests on chunker
The tests added in this commit needed a tweak for chunker

8c69455c37 sync: don't set dir modtimes if already set
2024-03-08 15:04:35 +00:00
Nick Craig-Wood
fcb182efce docs: add current sponsor logos in 2024-03-08 15:04:35 +00:00
nielash
1473de3f04 onedrive: add metadata support
This change adds support for metadata on OneDrive. Metadata (including
permissions) is supported for both files and directories.

OneDrive supports System Metadata (not User Metadata, as of this writing.) Much
of the metadata is read-only, and there are some differences between OneDrive
Personal and Business (see table in OneDrive backend docs for details).

Permissions are also supported, if --onedrive-metadata-permissions is set. The
accepted values for --onedrive-metadata-permissions are read, write, read,write, and
off (the default). write supports adding new permissions, updating the "role" of
existing permissions, and removing permissions. Updating and removing require
the Permission ID to be known, so it is recommended to use read,write instead of
write if you wish to update/remove permissions.

Permissions are read/written in JSON format using the same schema as the
OneDrive API, which differs slightly between OneDrive Personal and Business.
(See OneDrive backend docs for examples.)

To write permissions, pass in a "permissions" metadata key using this same
format. The --metadata-mapper tool can be very helpful for this.

When adding permissions, an email address can be provided in the User.ID or
DisplayName properties of grantedTo or grantedToIdentities. Alternatively, an
ObjectID can be provided in User.ID. At least one valid recipient must be
provided in order to add a permission for a user. Creating a Public Link is also
supported, if Link.Scope is set to "anonymous".

Note that adding a permission can fail if a conflicting permission already
exists for the file/folder.

To update an existing permission, include both the Permission ID and the new
roles to be assigned. roles is the only property that can be changed.

To remove permissions, pass in a blob containing only the permissions you wish
to keep (which can be empty, to remove all.)

Note that both reading and writing permissions require extra API calls, so if
you don't need to read or write permissions it is recommended to omit
--onedrive-metadata-permissions.

Metadata and permissions are supported for Folders (directories) as well as
Files. Note that setting the mtime or btime on a Folder requires one extra API
call on OneDrive Business only.

OneDrive does not currently support User Metadata. When writing metadata, only
writeable system properties will be written -- any read-only or unrecognized keys
passed in will be ignored.

TIP: to see the metadata and permissions for any file or folder, run:

rclone lsjson remote:path --stat -M --onedrive-metadata-permissions read

See the OneDrive backend docs for a table of all the supported metadata
properties.
2024-03-08 14:48:54 +00:00
Nick Craig-Wood
4e07a72dc7 fs: Implement --no-update-dir-modtime to disable setting modification times on dirs 2024-03-07 17:20:24 +00:00
Nick Craig-Wood
99acee7ba0 operations: remove stray debug 2024-03-07 17:15:43 +00:00
Nick Craig-Wood
bda4f25baa s3: support metadata setting and mapping on server side Copy
Before this change the backend would not run the metadata mapper and
it would ignore metadata set when doing server side copies.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
9f2ce2c7fc drive: support metadata setting and mapping on server side Move,Copy
Before this change the backend would not run the metadata mapper and
it would ignore metadata set when doing server side moves or copies.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
6e85a39e99 local: support metadata setting and mapping on server side Move
Before this change the backend would not run the metadata mapper and
it would ignore metadata set when doing server side moves.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
24b4148b5e fs: add MetadataAsOpenOptions 2024-03-07 14:44:45 +00:00
Nick Craig-Wood
41b1250eaf fstests: add tests for Metadata on server side Move and Copy 2024-03-07 14:44:45 +00:00
Nick Craig-Wood
339d3e8ee6 netstorage,quatrix,seafile: fix Root to return correct directory when pointing to a file
This fixes the TestIntegration/FsMkdir/FsPutFiles/FsIsFile/FsRoot
integration test.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
5750795324 protondrive: fix encoding of Root method
This fixes the TestIntegration/FsMkdir/FsPutFiles/FsIsFile/FsRoot
integration test.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
cdcb8b2a0a Add huajin tong to contributors 2024-03-07 14:44:45 +00:00
huajin tong
b1ae7df556 docs: fix some comments
Signed-off-by: thirdkeyword <fliterdashen@gmail.com>
2024-03-07 12:57:15 +00:00
nielash
431524445e combine: fix operations.DirMove across upstreams - fixes #7661
Before this change, operations.DirMove would fail when moving a directory, if
the src and dest were on different upstreams of a combine remote.

The issue only affected operations.DirMove, and not sync.MoveDir, because they
checked for server-side-move support in different ways.

MoveDir checks by just trying it and seeing what error comes back. This works
fine for combine because combine returns fs.ErrorCantDirMove which MoveDir
understands what to do with.

DirMove, however, only checked whether the function pointer is nil. This is an
unreliable way to check for combine, because combine does advertise support for
DirMove, despite not always being able to do it.

This change fixes the issue by checking the returned error in a manner similar
to sync.MoveDir and falling back to individual file moves (copy + delete)
depending on which error was returned.
2024-03-07 11:11:46 +00:00
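A minimal sketch of the error-based check (simplified, not rclone's operations package): the returned error decides whether to fall back, rather than the mere presence of a DirMove method:

```go
package main

import (
    "errors"
    "fmt"
)

var errCantDirMove = errors.New("can't dir move") // stands in for fs.ErrorCantDirMove

func backendDirMove(src, dst string) error {
    // combine advertises DirMove but may refuse moves across upstreams
    return errCantDirMove
}

func moveFilesIndividually(src, dst string) error {
    fmt.Println("copy+delete each file from", src, "to", dst)
    return nil
}

func dirMove(src, dst string) error {
    err := backendDirMove(src, dst)
    if errors.Is(err, errCantDirMove) {
        // Don't trust "the method exists" -- trust the returned error.
        return moveFilesIndividually(src, dst)
    }
    return err
}

func main() {
    _ = dirMove("upstream1:dir", "upstream2:dir")
}
```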
nielash
252562d00a combine: fix CopyDirMetadata error on upstream root
Before this change, operations.CopyDirMetadata would fail with: `internal error:
expecting directory string from combine root '' to have SetMetadata method:
optional feature not implemented` if the dst was the root directory of a combine
upstream. This is because combine was returning a *fs.Dir, which does not
satisfy the fs.SetMetadataer interface.

While it is true that combine cannot set metadata on the root of an upstream
(see also #7652), this should not be considered an error that causes sync to do
high-level retries, abort without doing deletes, etc.

This change addresses the issue by creating a new type of DirWrapper that is
allowed to fail silently, for exceptional cases such as this where certain
special directories have more limited abilities than what the Fs usually
supports.

It is possible that other similar wrapping backends (Union?) may need this same
fix.
2024-03-07 11:09:07 +00:00
nielash
6a72cfd6e1 operations: fix typo in log messages
I assume this must be a typo as %T of dir would only ever print "string"
2024-03-07 11:09:07 +00:00
nielash
354ea6fff3 docs: update to reflect dir modtime/metadata support 2024-03-07 11:09:07 +00:00
nielash
8c69455c37 sync: don't set dir modtimes if already set
Before this change, directory modtimes (and metadata) were always synced from
src to dst, even if already in sync (i.e. their modtimes already matched.) This
potentially required excessive API calls, made logs noisy, and was potentially
problematic for backends that create "versions" or otherwise log activity
updates when modtime/metadata is updated.

After this change, a new DirsEqual function is added to check whether dirs are
equal based on a number of factors such as ModifyWindow and sync flags in use.
If the dirs are equal, the modtime/metadata update is skipped.

For backends that require setDirModTimeAfter, the "after" sync is performed only
for dirs that could have been changed by the sync (i.e. dirs containing files
that were created/updated.)

Note that dir metadata (other than modtime) is not currently considered by
DirsEqual, consistent with how object metadata is synced (only when objects are
unequal for reasons other than metadata).

To sync dir modtimes and metadata unconditionally (the previous behavior), use
--ignore-times.
2024-03-07 09:57:11 +00:00
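A minimal sketch of a DirsEqual-style modtime comparison (invented signature; the real function also considers sync flags and other factors): directories whose modtimes agree within the modify window are skipped:

```go
package main

import (
    "fmt"
    "time"
)

// dirsEqual reports whether two directory modtimes match within the
// backend's modify window.
func dirsEqual(srcModTime, dstModTime time.Time, modifyWindow time.Duration) bool {
    diff := srcModTime.Sub(dstModTime)
    if diff < 0 {
        diff = -diff
    }
    return diff <= modifyWindow
}

func main() {
    src := time.Date(2024, 3, 7, 10, 0, 0, 0, time.UTC)
    dst := src.Add(500 * time.Millisecond)
    if dirsEqual(src, dst, time.Second) {
        fmt.Println("skipping directory modtime update") // already in sync
    }
}
```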
nielash
fd8faeb0e6 vfs: fix unicode normalization on macOS - fixes #7072
Before this change, the VFS layer did not properly handle unicode normalization,
which caused problems particularly for users of macOS. While attempts were made
to handle it with various `-o modules=iconv` combinations, this was an imperfect
solution, as no one combination allowed both NFC and NFD content to
simultaneously be both visible and editable via Finder.

After this change, the VFS supports `--no-unicode-normalization` (default `false`)
via the existing `--vfs-case-insensitive` logic, which is extended to apply to both
case insensitivity and unicode normalization form.

This change also adds an additional flag, `--vfs-block-norm-dupes`, to address a
probably rare but possible scenario where a directory contains
multiple duplicate filenames after applying case and unicode normalization
settings. In such a scenario, this flag (disabled by default) hides the
duplicates. This comes with a performance tradeoff, as rclone will have to scan
the entire directory for duplicates when listing a directory. For this reason,
it is recommended to leave this disabled if not needed. However, macOS users may
wish to consider using it, as otherwise, if a remote directory contains both NFC
and NFD versions of the same filename, an odd situation will occur: both
versions of the file will be visible in the mount, and both will appear to be
editable, however, editing either version will actually result in only the NFD
version getting edited under the hood. `--vfs-block-norm-dupes` prevents this
confusion by detecting this scenario, hiding the duplicates, and logging an
error, similar to how this is handled in `rclone sync`.
2024-03-06 16:12:13 +00:00
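A minimal sketch of normalization-aware name matching (not the VFS code; it assumes the golang.org/x/text/unicode/norm package): names are compared after NFC normalization and optional case folding, so NFC and NFD spellings of the same name collide as intended:

```go
package main

import (
    "fmt"
    "strings"

    "golang.org/x/text/unicode/norm"
)

func normalizeName(name string, caseInsensitive, normalizeUnicode bool) string {
    if normalizeUnicode {
        name = norm.NFC.String(name)
    }
    if caseInsensitive {
        name = strings.ToLower(name)
    }
    return name
}

func main() {
    nfc := "caf\u00e9.txt"  // é as a single precomposed rune
    nfd := "cafe\u0301.txt" // e followed by a combining acute accent
    fmt.Println(nfc == nfd) // false: the raw byte strings differ
    fmt.Println(normalizeName(nfc, false, true) == normalizeName(nfd, false, true)) // true
}
```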
Kyle Reynolds
dcdbad3554 bisync: clarify file operation directions in dry-run logs - fixes #7029
Before this change, NOTICE log messages during bisync dry runs were unclear as
to the direction of the skipped operation (Path1 to 2 vs. 2 to 1.) This change
adjusts the cmd/bisync/log.go indent function to be more expressive about
direction.
2024-03-06 09:26:53 -05:00
Nick Craig-Wood
effad3fe4b build: fix CVE-2024-24786 by upgrading google.golang.org/protobuf
See: https://pkg.go.dev/vuln/GO-2024-2611
2024-03-06 12:42:38 +00:00
Nick Craig-Wood
692af42858 operations: fix TestSetDirModTime for backends with SetDirModTime but not Metadata 2024-03-01 11:39:21 +00:00
Nick Craig-Wood
1693d7ad0f sftp: set DirModTimeUpdatesOnWrite to fix integration tests 2024-03-01 11:29:08 +00:00
Nick Craig-Wood
3bb9394ae5 operations: fix TestMkdirModTime test
This was failing on backends that didn't support metadata but did
support setting directory modtimes.
2024-03-01 11:18:24 +00:00
Nick Craig-Wood
be39e99918 sync: fix TestMoveEmptyDirectories so they work on backends which don't support DirModTimes 2024-03-01 10:56:48 +00:00
Nick Craig-Wood
6e28edeb9a cache: fix crash in tests which assumed local could Purge 2024-02-29 17:55:36 +00:00
Nick Craig-Wood
d50572b108 operations: add operations/hashsum to the rc as rclone hashsum equivalent
Fixes #7569
2024-02-29 16:21:42 +00:00
Nick Craig-Wood
0b8689dc28 rc: Add GetFsNamedFileOK to get an fs which could also be a file 2024-02-29 16:21:42 +00:00
Nick Craig-Wood
5994fcfed8 fs/cache: add PutErr to add an fs.Fs with an fs.ErrorIsFile error to the cache 2024-02-29 16:21:41 +00:00
Nick Craig-Wood
e3f6f68885 lib/cache: add PutErr to put a value with an error into the cache 2024-02-29 16:21:41 +00:00
Nick Craig-Wood
6ff1b6c505 local: delete backend implementation of Purge to speed up and make stats
In this commit (2014 for v1.02) Purge was implemented for the local
backend:

1527e64ee7 local: Implement Purger interface

This appears to have been implemented just to provide a Purge method
and doesn't do anything useful.

It is in fact significantly worse than the rclone fallback purge since
it doesn't operate in parallel or update stats.

This patch removes the Purge routine, which speeds things up and makes
the stats show properly.

See: https://forum.rclone.org/t/progress-flag-for-rclone-purge/44416
2024-02-29 15:04:51 +00:00
Nick Craig-Wood
4a049c12fe copyurl: add troubleshooting section to the docs
See: https://forum.rclone.org/t/copyurl-fails-with-stream-error-wget-and-curl-works/44382/2
2024-02-29 14:58:12 +00:00
Nick Craig-Wood
15890b7ce7 cmd: make auto completion work for all shells and reduce the size
This updates the bash completion to work with GenBashCompletionV2
which cuts down the size of the completion file dramatically.

See: https://forum.rclone.org/t/request-make-remote-path-completion-work-for-fish-and-zsh/42982/
See: #7000
2024-02-29 14:46:50 +00:00
Nick Craig-Wood
186bb85c44 crypt: add missing error check spotted by linter 2024-02-29 14:46:50 +00:00
nielash
4c6d2c5410 crypt: improve handling of undecryptable file names - fixes #5787 fixes #6439 fixes #6437
Before this change, undecryptable file names would be skipped very quietly
(there was a log warning, but only at DEBUG level),
failing to alert users of a potentially serious issue that needs attention.

After this change, the log level is raised to NOTICE by default and a new
--crypt-strict-names flag allows raising an error, for users who may prefer not
to proceed if such an issue is detected.

See https://forum.rclone.org/t/skipping-undecryptable-file-name-should-be-an-error/27115
https://github.com/rclone/rclone/issues/5787
2024-02-29 12:11:02 +00:00
Nick Craig-Wood
f5f86786b2 sync: implement directory sync for mod times and metadata
Directory mod times are synced by default if the backend is capable
and directory metadata is synced if the --metadata flag is provided
and the backend is capable.

This updates the bisync golden tests also which were affected by
--dry-run setting of directory modtimes.

Fixes #6685
2024-02-28 16:26:14 +00:00
Nick Craig-Wood
15579c2195 fstests: factor out fstest.NewObject function 2024-02-28 16:26:14 +00:00
Nick Craig-Wood
e8fe0b0553 operations: Implement CopyDirMetadata, CopyDirModTime and SetDirModTime 2024-02-28 16:26:14 +00:00
Nick Craig-Wood
09953d77b5 lsjson,lsf: make sure metadata appears for directories 2024-02-28 16:26:14 +00:00
Nick Craig-Wood
e4d0055b3e drive: implement modtime and metadata setting for directories 2024-02-28 16:26:14 +00:00
Nick Craig-Wood
a60da2ef38 local: fix setting of btime on directories on Windows
Before this change this would give errors like this

    failed to set metadata on directory: failed to set birth (creation) time: Access is denied.

This was caused by opening the directory in the wrong mode.
2024-02-28 16:25:59 +00:00
Nick Craig-Wood
7b01564f83 local: implement modtime and metadata for directories
A consequence of this is that fs.Directory returned by the local
backend will now have a correct size (rather than -1). Some tests
depended on this and have been fixed by this commit too.
2024-02-28 16:09:04 +00:00
Nick Craig-Wood
39db8caff1 cache,chunker,combine,compress,crypt,hasher,union: implement MkdirMetadata and related Features 2024-02-28 16:09:04 +00:00
nielash
0297542f6b cache,chunker,combine,compress,crypt,hasher,union: implement DirSetModTime (if supported by wrapped remote) 2024-02-28 16:09:04 +00:00
nielash
17c0ecc72c sftp: implement DirSetModTime 2024-02-28 16:09:04 +00:00
nielash
cbcb295185 drive: implement DirSetModTime 2024-02-27 19:59:13 +00:00
nielash
67e3725205 local: implement DirSetModTime 2024-02-27 19:59:13 +00:00
Nick Craig-Wood
61d76ae47d fstests: add integration tests for Directory Metadata and ModTime 2024-02-27 19:59:13 +00:00
Nick Craig-Wood
fd1ca2dfe8 fs: allow Metadata calls to be called with Directory or Object
This involved adding the Fs() method to DirEntry as it is needed in
the metadata mapper.

Unspecialised fs.Dir objects will return a new fs.Unknown from their
Fs() methods as they are not specific to any given Fs.
2024-02-27 10:56:19 +00:00
Nick Craig-Wood
e1032f693f fs: add DirWrapper for wrapping Directory-s with optional methods 2024-02-27 10:56:19 +00:00
Nick Craig-Wood
a4cadd1128 fs: add Directory Metadata flags for backends and interfaces
Add backend flags
- ReadDirMetadata
- WriteDirMetadata
- WriteDirSetModTime
- UserDirMetadata
- DirModTimeUpdatesOnWrite

Add Metadata/SetMetadata for directories.

Add MkdirMetadata optional feature
2024-02-27 10:56:19 +00:00
nielash
6da52d76a7 fs: implement DirSetModTime optional feature 2024-02-22 11:13:54 +00:00
Nick Craig-Wood
71a1bbb2be errcount: factor errcount abstraction from operations 2024-02-22 11:13:54 +00:00
Nick Craig-Wood
8f0e9f9f6b mega: fix panic with go1.22
Before this fix rclone would crash with

    panic: encoding alphabet includes duplicate symbols

When compiled with go1.22. This was fixed upstream in

https://github.com/t3rm1n4l/go-mega/issues/48

And this just pulls in the fix.

Fixes #7639
2024-02-21 18:41:44 +00:00
Nick Craig-Wood
072d1f10ab serve webdav: fix --baseurl without leading /
The webdav server needs the prefix passed to it with a leading /
otherwise it does not remove it properly.

The docs state that a leading slash is optional so this patch adds one
if not present.

See: https://forum.rclone.org/t/cant-rename-files-in-rclone-serve-webdav-with-baseurl-maybe-wrong-handling-of-move-request-method/44637
2024-02-21 18:08:44 +00:00
Nick Craig-Wood
5014348229 Add Anders Swanson to contributors 2024-02-21 18:08:44 +00:00
Nick Craig-Wood
ed78ac7c92 Add Joe Cai to contributors 2024-02-21 18:08:44 +00:00
Nick Craig-Wood
53d873d60d Add Dan McArdle to contributors 2024-02-21 18:08:44 +00:00
Nick Craig-Wood
f2c35fdec6 Add Gabriel Ramos to contributors 2024-02-21 18:08:44 +00:00
Nick Craig-Wood
1c69b20ed7 Add Jack Provance to contributors 2024-02-21 18:08:44 +00:00
nielash
547c635552 mailru: add override for TestApplyTransforms - #7591
mailru is unable to handle filenames with certain combining characters (for
example: йěáñ), and is therefore incapable of testing ApplyTransforms. (It is
also therefore incapable of fully supporting --no-unicode-normalization.)

The same override is applied to chunker when wrapping mailru.
2024-02-21 18:02:19 +00:00
nielash
f0d9117ff3 linkbox: add override for TestFixCase - #7591
linkbox already has an override for TestCaseInsensitiveMoveFile, and being able
to handle case-insensitive moves is a prerequisite for TestFixCase.
2024-02-21 18:02:19 +00:00
nielash
9d2bd163c7 opendrive: fix moving file/folder within the same parent dir - #7591
Before this change, moving (renaming) a file or folder to a different name
within the same parent directory would fail, due to using the wrong API
operation ("/file/move_copy.json" and "/folder/move_copy.json", instead of the
separate "/file/rename.json" and "/folder/rename.json" that opendrive has for
this purpose.)

After this change, Move and DirMove check whether the move is within the same
parent dir. If so, "rename" is used. If not, "move_copy" is used, like before.
2024-02-21 18:02:19 +00:00
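A minimal sketch of the decision (the endpoint paths come from the commit message; everything else is invented): the parent directories of source and destination determine which API operation is used:

```go
package main

import (
    "fmt"
    "path"
)

// endpointFor picks the opendrive-style operation: a pure rename when the
// parent directory is unchanged, otherwise a real move.
func endpointFor(srcRemote, dstRemote string) string {
    if path.Dir(srcRemote) == path.Dir(dstRemote) {
        return "/file/rename.json"
    }
    return "/file/move_copy.json"
}

func main() {
    fmt.Println(endpointFor("docs/a.txt", "docs/b.txt"))    // rename
    fmt.Println(endpointFor("docs/a.txt", "archive/a.txt")) // move_copy
}
```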
Anders Swanson
db8fb5ceda oracleobjectstorage: supports workload identity authentication for OKE
Signed-off-by: Anders Swanson <anders.swanson@oracle.com>
2024-02-20 16:25:59 +00:00
Joe Cai
a1e66cc5e8 swift: Avoid unnecessary container versioning check
Container versioning check is only needed for non-empty large objects.
2024-02-20 15:52:25 +00:00
nielash
7b8bbe531e nfsmount: fix --volname being ignored #7503
Before this change, nfsmount ignored the --volname flag. After this change,
the --volname flag is respected, making it possible to set a custom volume name.

macOS users should note that Finder will show the correct volume name in most
places, but a notable exception is the sidebar, which will show "localhost".
This seems to be a system limitation (at least without `sudo`), but see the
discussion at https://github.com/rclone/rclone/issues/7503#issuecomment-1933997678
for some possible workarounds.
2024-02-18 05:08:59 -05:00
nielash
0e2f1d64e3 nfsmount: fix exit after external unmount #7503
Before this change, if a user unmounted externally (for example, via the Finder
UI), rclone would not be aware of this and wait forever to exit -- effectively
causing a deadlock that would require Ctrl+C to terminate.

After this change, when the handler detects an external unmount, it calls a
function which allows rclone to cleanly shutdown the VFS and exit.
2024-02-18 05:08:59 -05:00
nielash
5638a3841f serve nfs: fix writing files via Finder on macOS - fixes #7503
Before this change, writing files to an `nfsmount` via Finder on macOS would
cause critical errors, rendering `nfsmount` effectively unusable on macOS. This
change fixes the issue so that writes via Finder should be possible.

The issue was primarily caused by the handler's HandleLimit being set to -1. -1 is
the correct default for a NullAuthHandler, but not for a CachingHandler, which
interprets -1 not as "no limit" but as "no cache".

This change sets a high default of 1000000, and gives the user control over it
with a new --nfs-cache-handle-limit flag (available in both `serve nfs` and
`nfsmount`). A minimum of 5 is enforced, as any lower than this will be
insufficient to support directory listing.
2024-02-18 05:08:59 -05:00
Dan McArdle
6986a43b68 bisync: delete flushCache() function from tests
The flushCache() function has a bug that causes it to never actually
flush the cache. Specifically, it checks whether DirCacheFlush is nil,
but never calls it.

The tests are already passing without flushing the dir cache, so this
commit just deletes flushCache() and its call sites.

Fixes rclone/rclone#7623
2024-02-18 04:14:51 -05:00
Oksana Zhykina
11c6489fd1 quatrix: add option to skip project folders 2024-02-18 07:38:19 +01:00
Gabriel Ramos
43823bc925 webdav: reduce priority of chunks upload log 2024-02-18 07:29:23 +01:00
dependabot[bot]
a3b661be0d build(deps): bump golangci/golangci-lint-action from 3 to 4
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 3 to 4.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-18 07:25:50 +01:00
Jack Provance
f113c68b13 docs: Fix a heading level in webdav.md documentation (#7631)
This fixes a heading problem under the "Provider Notes" section.
2024-02-18 07:16:23 +01:00
nielash
137f7f62fb sync: use operations.DirMove instead of sync.MoveDir for --fix-case - #7591
This should be more efficient for the purposes of --fix-case, as operations.DirMove
accepts `srcRemote` and `dstRemote` arguments, while sync.MoveDir does not.

This also factors the two-step-move logic to operations.DirMoveCaseInsensitive, so
that it is reusable by other commands.
2024-02-13 15:07:41 -05:00
nielash
dfe76570a1 operations: skip backends incapable of testing TestApplyTransforms - #7591
This adds a step to detect whether the backend is capable of supporting the
feature, and skips the test if not. A backend can be incapable if, for example,
it is non-case-preserving or automatically converts NFD to NFC.
2024-02-13 15:07:41 -05:00
nielash
f4c058e13e bisync: use global --retries and --retries-sleep flags instead of overriding 2024-02-12 13:24:54 -05:00
nielash
407a0f3733 cmd: refactor --retries and --retries-sleep to global config
This change moves the --retries and --retries-sleep flags/variables from cmd to
config (consistent with --low-level-retries), so that they can be more easily
referenced from subcommands.
2024-02-12 13:24:54 -05:00
nielash
b14269fd23 bisync: add support for --retries-sleep - fixes #7555
Before this change, bisync supported --retries but not --retries-sleep.
This change adds support for --retries-sleep.
2024-02-12 13:24:54 -05:00
nielash
76b7bcd4d7 bisync: reset errors between retries
Before this change, in the event of a retryable error, bisync would always retry
the maximum number of times allowed by the `--retries` flag, even if one of the
retries was successful. This change fixes the issue, so that bisync moves on
after the first successful retry.
2024-02-12 13:24:54 -05:00
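A minimal sketch of the retry behaviour described above (invented function names): the loop returns as soon as an attempt succeeds instead of always consuming every allowed retry:

```go
package main

import (
    "errors"
    "fmt"
)

func runOnce(attempt int) error {
    if attempt < 2 {
        return errors.New("temporary failure")
    }
    return nil // second attempt succeeds
}

func runWithRetries(maxRetries int) error {
    var err error
    for attempt := 1; attempt <= maxRetries; attempt++ {
        err = runOnce(attempt)
        if err == nil {
            fmt.Println("succeeded on attempt", attempt)
            return nil // move on after the first success
        }
    }
    return err
}

func main() {
    _ = runWithRetries(5)
}
```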
nielash
782ab3f582 bisync: clean up docs
(as the flags in docs/content/bisync.md do not update automatically, unlike
docs/content/commands/rclone_bisync.md)
2024-02-12 13:24:54 -05:00
nielash
9c6325c131 backend: rename variables to fix CI lint test failures 2024-02-12 12:49:00 -05:00
Volodymyr
2abeda5961 quatrix: fix Content-Range header
This change does not actually affect uploads. It is just to be correct according to the definition of Content-Range in
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Range#range-end
2024-02-09 16:44:45 +00:00
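For reference, a minimal sketch of a spec-compliant Content-Range value: the range end is the index of the last byte sent (inclusive), not the byte count:

```go
package main

import "fmt"

// contentRange builds "bytes start-end/total" where end is the index of the
// last byte in this chunk.
func contentRange(offset, chunkSize, totalSize int64) string {
    return fmt.Sprintf("bytes %d-%d/%d", offset, offset+chunkSize-1, totalSize)
}

func main() {
    // Sending the first 1000 bytes of a 5000 byte file.
    fmt.Println(contentRange(0, 1000, 5000)) // "bytes 0-999/5000"
}
```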
nielash
885a543023 operations: use --download for TestApplyTransforms #7591
This makes it possible to run the test even on remotes without MD5 support.
2024-02-08 16:08:05 +00:00
nielash
f3680d222c operations: fix TestCaseInsensitiveMoveFileDryRun on chunker integration tests #7591
It appears that ci.DryRun = true affects the behavior of r.WriteObject on
chunker only, and no other remotes. This change puts a quick bandaid on it by
setting it later on in the test, but perhaps the underlying issue warrants a
closer look at some point... is chunker checking ci.DryRun itself in a way that
no other remote does? If so, should it? (Does this break encapsulation?)
2024-02-08 16:08:02 +00:00
nielash
d2b37cf61e operations: fix case-insensitive moves in operations.Move #7591
Before this change, operations.moveOrCopyFile had a special section to detect
and handle changing case of a file on a case insensitive remote, but
operations.Move did not. This caused operations.Move to fail for certain
backends that are incapable of renaming a file in-place to an equal-folding name.
(Not all case-insensitive backends have this limitation -- for example, Dropbox
does but macOS local does not.)

After this change, the special two-part-move section from
operations.moveOrCopyFile is factored out to its own function,
moveCaseInsensitive, which is then called from both operations.moveOrCopyFile
and operations.Move.
2024-02-08 16:07:57 +00:00
Nick Craig-Wood
83f61a9cfb s3: GCS provider: fix server side copy of files bigger than 5G
GCS gives NotImplemented errors for multi-part server side copies. The
threshold for these is currently set just below 5G so any files bigger
than 5G that rclone attempts to server side copy will fail.

This patch works around the problem by adding a quirk for GCS raising
--s3-copy-cutoff to the maximum. This means that rclone will never use
multi-part copies for files in GCS. This includes files bigger than
5GB which (according to AWS documentation) must be copied with
multi-part copy. However this seems to work with GCS.

See: https://forum.rclone.org/t/chunker-uploads-to-gcs-s3-fail-if-the-chunk-size-is-greater-than-the-max-part-size/44349/
See: https://issuetracker.google.com/issues/323465186
2024-02-08 14:53:30 +00:00
Nick Craig-Wood
b206496f63 b2: clarify exactly what --b2-download-auth-duration does in the docs
See: https://forum.rclone.org/t/what-does-b2-download-auth-duration-mean/44504/
2024-02-08 09:39:53 +00:00
Nick Craig-Wood
24fdecf107 ftp: fix mkdir with rsftp which is returning the wrong code
On a successful MKD, rsftp seems to return code 250 whereas we and
the RFC expect 257.

This patch makes rclone accept 250 here as well.

See: https://forum.rclone.org/t/rclone-pop-up-an-i-o-error-when-creating-a-folder-in-a-mounted-ftp-drive/44368/3
2024-02-07 22:09:56 +00:00
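A minimal sketch of tolerating both reply codes (invented helper, not the ftp backend's code):

```go
package main

import "fmt"

func mkdirSucceeded(code int) bool {
    switch code {
    case 257: // RFC 959: "PATHNAME created"
        return true
    case 250: // some servers (e.g. rsftp) reply with this instead
        return true
    }
    return false
}

func main() {
    fmt.Println(mkdirSucceeded(257), mkdirSucceeded(250), mkdirSucceeded(550))
}
```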
Nick Craig-Wood
9bd7262dfc Add DanielEgbers to contributors 2024-02-07 22:09:56 +00:00
DanielEgbers
a0dff2dd9c Seafile: Fix download/upload error when FILE_SERVER_ROOT is relative
A seafile server can be configured to use a relative URL as
FILE_SERVER_ROOT in order to support more than one hostname/ip. (see
https://github.com/haiwen/seahub/issues/3398#issuecomment-506920360 )

The previous backend implementation always expected an absolute
download/upload URL, resulting in an "unsupported protocol scheme"
error.

With this commit it supports both absolute and relative URLs.
2024-02-05 11:48:51 +00:00
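A minimal sketch of handling both cases with net/url (invented helper and host name): resolving the returned link against the configured base URL leaves absolute URLs untouched and turns relative ones into absolute ones:

```go
package main

import (
    "fmt"
    "net/url"
)

// resolveLink resolves a (possibly relative) download/upload link against
// the configured base URL.
func resolveLink(baseURL, link string) (string, error) {
    base, err := url.Parse(baseURL)
    if err != nil {
        return "", err
    }
    ref, err := url.Parse(link)
    if err != nil {
        return "", err
    }
    return base.ResolveReference(ref).String(), nil
}

func main() {
    fmt.Println(resolveLink("https://seafile.example.com/", "/seafhttp/files/abc"))
    fmt.Println(resolveLink("https://seafile.example.com/", "https://other.example.com/seafhttp/files/abc"))
}
```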
Nick Craig-Wood
91b54aafcc rc: add srcFs and dstFs to core/stats and core/transferred stats
Before this change it wasn't possible to see where transfers were
going from and to in core/stats and core/transferred.

When use in rclone mount in particular this made interpreting the
stats very hard.
2024-02-02 11:43:10 +00:00
Nick Craig-Wood
81a29e6895 Add Thomas Müller to contributors 2024-02-02 11:43:10 +00:00
Nick Craig-Wood
f762ef668f Add Michael Eischer to contributors 2024-02-02 11:43:10 +00:00
Thomas Müller
99b9062551 owncloud: add config owncloud_exclude_shares which allows to exclude shared files and folders when listing remote resources 2024-01-31 14:47:24 +00:00
Michael Eischer
ef2c5a1998 serve restic: fix error handling
* serve restic: return internal error if listing failed

If listing a remote failed, then rclone returned http status "not
found". This has become a problem since restic 0.16.0 which ignores "not
found"-errors while listing a directory.

Just return an internal server error if something unexpected happens while
listing a directory.

* serve restic: fix error handling if getting a file fails

If the call to `newObject` in `serveObject` fails, then rclone always
returned a "not found" error. This prevents restic from distinguishing
permanent "not found" errors from everything else.

Thus, only return "not found" if the object is not found and an internal
server error otherwise.
2024-01-29 17:54:23 +00:00
Nick Craig-Wood
6e4dd2ab96 docs: ignore amazon cloud drive doc stub when building the docs 2024-01-25 16:35:33 +00:00
Nick Craig-Wood
0c17a17e19 Changelog updates from Version v1.65.2 2024-01-24 16:40:47 +00:00
Nick Craig-Wood
03295bbc3c azureblob: fix data corruption bug #7590
It was reported that rclone copy occasionally uploaded corrupted data
to azure blob.

This turned out to be a race condition updating the block count which
caused blocks to be duplicated.

This bug was introduced in this commit in v1.64.0 and will be fixed in v1.65.2

0427177857 azureblob: implement OpenChunkWriter and multi-thread uploads #7056

This race only seems to happen if `--checksum` is used but can happen otherwise.

Unfortunately Azure blob does not check the MD5 that we send them so
despite sending incorrect data this corruption is not detected. The
corruption is detected when rclone tries to download the file, so
attempting to copy the files back to local disk will result in errors
such as:

    ERROR : file.pokosuf5.partial: corrupted on transfer: md5 hash differ "XXX" vs "YYY"

This adds a check to test that the blocklist we upload is as we expected,
which would have caught the problem had it been in place earlier.
2024-01-24 11:28:05 +00:00
Nick Craig-Wood
b3a1f66759 build: add -race flag to integration tester test_all 2024-01-24 11:27:43 +00:00
Nick Craig-Wood
a947f75d3b Add Kyle Reynolds to contributors 2024-01-24 11:27:43 +00:00
Nick Craig-Wood
ae0a4c8bbf Add Tera to contributors 2024-01-24 11:27:43 +00:00
Kyle Reynolds
7835991147 fs: add more detailed logging for file includes/excludes
This makes a DEBUG log to show why files were included or excluded.

Fixes #7463
2024-01-22 16:46:26 +00:00
nielash
810644e873 bisync: add --resync-mode for customizing --resync - fixes #5681
Before this change, the path1 version of a file always prevailed during
--resync, and many users requested options to automatically select the winner
based on characteristics such as newer, older, larger, and smaller. This change
adds support for such options.

Note that ideally this feature would have been implemented by allowing the
existing `--resync` flag to optionally accept string values such as `--resync
newer`. However, this would have been a breaking change, as the existing flag
is a `bool` and it does not seem to be possible to have a `string` flag that
accepts both `--resync newer` and `--resync` (with no argument.) (`NoOptDefVal`
does not work for this, as it would force an `=` like `--resync=newer`.) So
instead, the best compromise to avoid a breaking change was to add a new
`--resync-mode CHOICE` flag that implies `--resync`, while maintaining the
existing behavior of `--resync` (which implies `--resync-mode path1`). I.e. both
flags are now valid, and either can be used without the other.

--resync-mode CHOICE

In the event that a file differs on both sides during a `--resync`,
`--resync-mode` controls which version will overwrite the other. The supported
options are similar to `--conflict-resolve`. For all of the following options,
the version that is kept is referred to as the "winner", and the version that
is overwritten (deleted) is referred to as the "loser". The options are named
after the "winner":

- `path1` - (the default) - the version from Path1 is unconditionally
considered the winner (regardless of `modtime` and `size`, if any). This can be
useful if one side is more trusted or up-to-date than the other, at the time of
the `--resync`.
- `path2` - same as `path1`, except the path2 version is considered the winner.
- `newer` - the newer file (by `modtime`) is considered the winner, regardless
of which side it came from. This may result in having a mix of some winners
from Path1, and some winners from Path2. (The implementation is analogous to
running `rclone copy --update` in both directions.)
- `older` - same as `newer`, except the older file is considered the winner,
and the newer file is considered the loser.
- `larger` - the larger file (by `size`) is considered the winner (regardless
of `modtime`, if any). This can be a useful option for remotes without
`modtime` support, or with the kinds of files (such as logs) that tend to grow
but not shrink, over time.
- `smaller` - the smaller file (by `size`) is considered the winner (regardless
of `modtime`, if any).

For all of the above options, note the following:
- If either of the underlying remotes lacks support for the chosen method, it
will be ignored and will fall back to the default of `path1`. (For example, if
`--resync-mode newer` is set, but one of the paths uses a remote that doesn't
support `modtime`.)
- If a winner can't be determined because the chosen method's attribute is
missing or equal, it will be ignored, and bisync will instead try to determine
whether the files differ by looking at the other `--compare` methods in effect.
(For example, if `--resync-mode newer` is set, but the Path1 and Path2 modtimes
are identical, bisync will compare the sizes.) If bisync concludes that they
differ, preference is given to whichever is the "source" at that moment. (In
practice, this gives a slight advantage to Path2, as the 2to1 copy comes before
the 1to2 copy.) If the files _do not_ differ, nothing is copied (as both sides
are already correct).
- These options apply only to files that exist on both sides (with the same
name and relative path). Files that exist *only* on one side and not the other
are *always* copied to the other, during `--resync` (this is one of the main
differences between resync and non-resync runs).
- `--conflict-resolve`, `--conflict-loser`, and `--conflict-suffix` do not
apply during `--resync`, and unlike these flags, nothing is renamed during
`--resync`. When a file differs on both sides during `--resync`, one version
always overwrites the other (much like in `rclone copy`.) (Consider using
`--backup-dir` to retain a backup of the losing version.)
- Unlike for `--conflict-resolve`, `--resync-mode none` is not a valid option
(or rather, it will be interpreted as "no resync", unless `--resync` has also
been specified, in which case it will be ignored.)
- Winners and losers are decided at the individual file-level only (there is
not currently an option to pick an entire winning directory atomically,
although the `path1` and `path2` options typically produce a similar result.)
- To maintain backward-compatibility, the `--resync` flag implies
`--resync-mode path1` unless a different `--resync-mode` is explicitly
specified. Similarly, all `--resync-mode` options (except `none`) imply
`--resync`, so it is not necessary to use both the `--resync` and
`--resync-mode` flags simultaneously -- either one is sufficient without the
other.
2024-01-20 17:17:01 -05:00
nielash
8d3bcc025a bisync: fix --colors flag
quick fix to get around lack of support in fs.Infof etc.
2024-01-20 17:17:01 -05:00
nielash
0f549520ef bisync: factor resync to separate file 2024-01-20 17:17:01 -05:00
nielash
ba16fcfaf5 bisync: skip empty test case dirs 2024-01-20 17:17:01 -05:00
nielash
68f0998699 bisync: add options to auto-resolve conflicts - fixes #7471
Before this change, when a file was new/changed on both paths (relative to the
prior sync), and the versions on each side were not identical, bisync would
keep both versions, renaming them with ..path1 and ..path2 suffixes,
respectively. Many users have requested more control over how bisync handles
such conflicts -- including an option to automatically select one version as
the "winner" and rename or delete the "loser". This change introduces support
for such options.

--conflict-resolve CHOICE

In bisync, a "conflict" is a file that is *new* or *changed* on *both sides*
(relative to the prior run) AND is *not currently identical* on both sides.
`--conflict-resolve` controls how bisync handles such a scenario. The currently
supported options are:

- `none` - (the default) - do not attempt to pick a winner, keep and rename
both files according to `--conflict-loser` and
`--conflict-suffix` settings. For example, with the default
settings, `file.txt` on Path1 is renamed `file.txt.conflict1` and `file.txt` on
Path2 is renamed `file.txt.conflict2`. Both are copied to the opposite path
during the run, so both sides end up with a copy of both files. (As `none` is
the default, it is not necessary to specify `--conflict-resolve none` -- you
can just omit the flag.)
- `newer` - the newer file (by `modtime`) is considered the winner and is
copied without renaming. The older file (the "loser") is handled according to
`--conflict-loser` and `--conflict-suffix` settings (either renamed or
deleted.) For example, if `file.txt` on Path1 is newer than `file.txt` on
Path2, the result on both sides (with other default settings) will be `file.txt`
(winner from Path1) and `file.txt.conflict1` (loser from Path2).
- `older` - same as `newer`, except the older file is considered the winner,
and the newer file is considered the loser.
- `larger` - the larger file (by `size`) is considered the winner (regardless
of `modtime`, if any).
- `smaller` - the smaller file (by `size`) is considered the winner (regardless
of `modtime`, if any).
- `path1` - the version from Path1 is unconditionally considered the winner
(regardless of `modtime` and `size`, if any). This can be useful if one side is
usually more trusted or up-to-date than the other.
- `path2` - same as `path1`, except the path2 version is considered the
winner.

For all of the above options, note the following:
- If either of the underlying remotes lacks support for the chosen method, it
will be ignored and fall back to `none`. (For example, if `--conflict-resolve
newer` is set, but one of the paths uses a remote that doesn't support
`modtime`.)
- If a winner can't be determined because the chosen method's attribute is
missing or equal, it will be ignored and fall back to `none`. (For example, if
`--conflict-resolve newer` is set, but the Path1 and Path2 modtimes are
identical, even if the sizes may differ.)
- If the file's content is currently identical on both sides, it is not
considered a "conflict", even if new or changed on both sides since the prior
sync. (For example, if you made a change on one side and then synced it to the
other side by other means.) Therefore, none of the conflict resolution flags
apply in this scenario.
- The conflict resolution flags do not apply during a `--resync`, as there is
no "prior run" to speak of (but see `--resync-mode` for similar
options.)

--conflict-loser CHOICE

`--conflict-loser` determines what happens to the "loser" of a sync conflict
(when `--conflict-resolve` determines a winner) or to both
files (when there is no winner.) The currently supported options are:

- `num` - (the default) - auto-number the conflicts by automatically appending
the next available number to the `--conflict-suffix`, in chronological order.
For example, with the default settings, the first conflict for `file.txt` will
be renamed `file.txt.conflict1`. If `file.txt.conflict1` already exists,
`file.txt.conflict2` will be used instead (etc., up to a maximum of
9223372036854775807 conflicts.)
- `pathname` - rename the conflicts according to which side they came from,
which was the default behavior prior to `v1.66`. For example, with
`--conflict-suffix path`, `file.txt` from Path1 will be renamed
`file.txt.path1`, and `file.txt` from Path2 will be renamed `file.txt.path2`.
If two non-identical suffixes are provided (ex. `--conflict-suffix
cloud,local`), the trailing digit is omitted. Importantly, note that with
`pathname`, there is no auto-numbering beyond `2`, so if `file.txt.path2`
somehow already exists, it will be overwritten. Using a dynamic date variable
in your `--conflict-suffix` (see below) is one possible way to avoid this. Note
also that conflicts-of-conflicts are possible, if the original conflict is not
manually resolved -- for example, if for some reason you edited
`file.txt.path1` on both sides, and those edits were different, the result
would be `file.txt.path1.path1` and `file.txt.path1.path2` (in addition to
`file.txt.path2`.)
- `delete` - keep the winner only and delete the loser, instead of renaming it.
If a winner cannot be determined (see `--conflict-resolve` for details on how
this could happen), `delete` is ignored and the default `num` is used instead
(i.e. both versions are kept and renamed, and neither is deleted.) `delete` is
inherently the most destructive option, so use it only with care.

For all of the above options, note that if a winner cannot be determined (see
`--conflict-resolve` for details on how this could happen), or if
`--conflict-resolve` is not in use, *both* files will be renamed.

--conflict-suffix STRING[,STRING]

`--conflict-suffix` controls the suffix that is appended when bisync renames a
`--conflict-loser` (default: `conflict`).
`--conflict-suffix` will accept either one string or two comma-separated
strings to assign different suffixes to Path1 vs. Path2. This may be helpful
later in identifying the source of the conflict. (For example,
`--conflict-suffix dropboxconflict,laptopconflict`)

With `--conflict-loser num`, a number is always appended to the suffix. With
`--conflict-loser pathname`, a number is appended only when one suffix is
specified (or when two identical suffixes are specified.) i.e. with
`--conflict-loser pathname`, all of the following would produce exactly the
same result:

```
--conflict-suffix path
--conflict-suffix path,path
--conflict-suffix path1,path2
```

Suffixes may be as short as 1 character. By default, the suffix is appended
after any other extensions (ex. `file.jpg.conflict1`), however, this can be
changed with the `--suffix-keep-extension` flag (i.e. to instead result in
`file.conflict1.jpg`).

`--conflict-suffix` supports several *dynamic date variables* when enclosed in
curly braces as globs. This can be helpful to track the date and/or time that
each conflict was handled by bisync. For example:

```
--conflict-suffix {DateOnly}-conflict
// result: myfile.txt.2006-01-02-conflict1
```

All of the formats described [here](https://pkg.go.dev/time#pkg-constants) and
[here](https://pkg.go.dev/time#example-Time.Format) are supported, but take
care to ensure that your chosen format does not use any characters that are
illegal on your remotes (for example, macOS does not allow colons in
filenames, and slashes are also best avoided as they are often interpreted as
directory separators.) To address this particular issue, an additional
`{MacFriendlyTime}` (or just `{mac}`) option is supported, which results in
`2006-01-02 0304PM`.

Note that `--conflict-suffix` is entirely separate from rclone's main `--suffix`
flag. This is intentional, as users may wish to use both flags simultaneously,
if also using `--backup-dir`.

Finally, note that the default in bisync prior to `v1.66` was to rename
conflicts with `..path1` and `..path2` (with two periods, and `path` instead of
`conflict`.) Bisync now defaults to a single dot instead of a double dot, but
additional dots can be added by including them in the specified suffix string.
For example, for behavior equivalent to the previous default, use:

```
[--conflict-resolve none] --conflict-loser pathname --conflict-suffix .path
```
2024-01-20 17:17:01 -05:00
nielash
d031cc138d bisync: check for syntax errors in path args - fixes #7511
Before this change, certain shell quoting / escaping errors (particularly on
Windows) were not detected by Bisync, possibly resulting in incorrect expansion
and confusing errors. In particular, Windows paths with a single trailing
backslash followed by a quote would be interpreted as an escaped quote --
resulting in the quote and subsequent flags being erroneously considered part
of the path.

After this change, Bisync specifically checks for a few of the most common
patterns, and if detected, exits with a more helpful error message before doing
any damage.
2024-01-20 16:54:12 -05:00
nielash
e71b252b65 bisync: add overlapping paths check
Before this change, Bisync did not check to make sure that Path1 and Path2 do
not overlap, nor did it check for overlaps with `--backup-dir`. While `sync`
does check for these things, it can sometimes be fooled because of the way
Bisync calls it with `--files-from` filters. Relying on sync could also leave a
run in a half-finished state if it were to error in one direction but not the
other (`--backup-dir` only checks for overlaps with the dest.)

After this change, Bisync does its own check up front, so we can quickly return
an error and exit before any changes are made.
2024-01-20 16:54:12 -05:00
nielash
e9cd3e5986 bisync: allow lock file expiration/renewal with --max-lock - #7470
Background: Bisync uses lock files as a safety feature to prevent
interference from other bisync runs while it is running. Bisync normally
removes these lock files at the end of a run, but if bisync is abruptly
interrupted, these files will be left behind. By default, they will lock out
all future runs, until the user has a chance to manually check things out and
remove the lock.

Before this change, lock files blocked future runs indefinitely, so a single
interrupted run would lock out all future runs forever (absent user
intervention), and there was no way to change this behavior.

After this change, a new --max-lock flag can be used to make lock files
automatically expire after a certain period of time, so that future runs are
not locked out forever, and auto-recovery is possible. --max-lock can be any
duration 2m or greater (or 0 to disable). If set, lock files older than this
will be considered "expired", and future runs will be allowed to disregard them
and proceed. (Note that the --max-lock duration must be set by the process that
left the lock file -- not the later one interpreting it.)

If set, bisync will also "renew" these lock files every
--max-lock minus one minute throughout a run, for extra safety. (For example,
with --max-lock 5m, bisync would renew the lock file (for another 5 minutes)
every 4 minutes until the run has completed.) In other words, it should not be
possible for a lock file to pass its expiration time while the process that
created it is still running -- and you can therefore be reasonably sure that
any _expired_ lock file you may find was left there by an interrupted run, not
one that is still running and just taking awhile.

If --max-lock is 0 or not set, the default is that lock files will never
expire, and will block future runs (of these same two bisync paths)
indefinitely.

For maximum resilience from disruptions, consider setting a relatively short
duration like --max-lock 2m along with --resilient and --recover, and a
relatively frequent cron schedule. The result will be a very robust
"set-it-and-forget-it" bisync run that can automatically bounce back from
almost any interruption it might encounter, without requiring the user to get
involved and run a --resync.
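
For example, a scheduled setup along these lines (the paths are hypothetical)
should be able to recover on its own from most interruptions:

    rclone bisync /path/to/local remote:folder --max-lock 2m --resilient --recover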
2024-01-20 16:31:28 -05:00
nielash
4025f42bd9 bisync: Graceful Shutdown, --recover from interruptions without --resync - fixes #7470
Before this change, bisync had no mechanism to gracefully cancel a sync early
and exit in a clean state. Additionally, there was no way to recover on the
next run -- any interruption at all would cause bisync to require a --resync,
which made bisync more difficult to use as a scheduled background process.

This change introduces a "Graceful Shutdown" mode and --recover flag to
robustly recover from even un-graceful shutdowns.

If --recover is set, in the event of a sudden interruption or other un-graceful
shutdown, bisync will attempt to automatically recover on the next run, instead
of requiring --resync. Bisync is able to recover robustly by keeping one
"backup" listing at all times, representing the state of both paths after the
last known successful sync. Bisync can then compare the current state with this
snapshot to determine which changes it needs to retry. Changes that were synced
after this snapshot (during the run that was later interrupted) will appear to
bisync as if they are "new or changed on both sides", but in most cases this is
not a problem, as bisync will simply do its usual "equality check" and learn
that no action needs to be taken on these files, since they are already
identical on both sides.

In the rare event that a file is synced successfully during a run that later
aborts, and then that same file changes AGAIN before the next run, bisync will
think it is a sync conflict, and handle it accordingly. (From bisync's
perspective, the file has changed on both sides since the last trusted sync,
and the files on either side are not currently identical.) Therefore, --recover
carries with it a slightly increased chance of having conflicts -- though in
practice this is pretty rare, as the conditions required to cause it are quite
specific. This risk can be reduced by using bisync's "Graceful Shutdown" mode
(triggered by sending SIGINT or Ctrl+C), when you have the choice, instead of
forcing a sudden termination.

--recover and --resilient are similar, but distinct -- the main difference is
that --resilient is about _retrying_, while --recover is about _recovering_.
Most users will probably want both. --resilient allows retrying when bisync has
chosen to abort itself due to safety features such as failing --check-access or
detecting a filter change. --resilient does not cover external interruptions
such as a user shutting down their computer in the middle of a sync -- that is
what --recover is for.

"Graceful Shutdown" mode is activated by sending SIGINT or pressing Ctrl+C
during a run. Once triggered, bisync will use best efforts to exit cleanly
before the timer runs out. If bisync is in the middle of transferring files, it
will attempt to cleanly empty its queue by finishing what it has started but
not taking more. If it cannot do so within 30 seconds, it will cancel the
in-progress transfers at that point and then give itself a maximum of 60
seconds to wrap up, save its state for next time, and exit. With the -vP flags
you will see constant status updates and a final confirmation of whether or not
the graceful shutdown was successful.

At any point during the "Graceful Shutdown" sequence, a second SIGINT or Ctrl+C
will trigger an immediate, un-graceful exit, which will leave things in a
messier state. Usually a robust recovery will still be possible if using
--recover mode, otherwise you will need to do a --resync.

If you plan to use Graceful Shutdown mode, it is recommended to use --resilient
and --recover, and it is important to NOT use --inplace, otherwise you risk
leaving partially-written files on one side, which may be confused for real
files on the next run. Note also that in the event of an abrupt interruption, a
lock file will be left behind to block concurrent runs. You will need to delete
it before you can proceed with the next run (or wait for it to expire on its
own, if using --max-lock.)
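
A minimal sketch of the recommended combination (hypothetical paths):

    rclone bisync /path/to/local remote:folder --recover --resilient -vP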
2024-01-20 16:31:28 -05:00
nielash
b4216648e4 bisync: full support for comparing checksum, size, modtime - fixes #5679 fixes #5683 fixes #5684 fixes #5675
Before this change, bisync could only detect changes based on modtime, and
would refuse to run if either path lacked modtime support. This made bisync
unavailable for many of rclone's backends. Additionally, bisync did not account
for the Fs's precision when comparing modtimes, meaning that they could only be
reliably compared within the same side -- not against the opposite side. Size
and checksum (even when available) were ignored completely for deltas.

After this change, bisync now fully supports comparing based on any combination
of size, modtime, and checksum, lifting the prior restriction on backends
without modtime support. The comparison logic considers the backend's
precision, hash types, and other features as appropriate.

The comparison features optionally use a new --compare flag (which takes any
combination of size,modtime,checksum) and even supports some combinations not
otherwise supported in `sync` (like comparing all three at the same time.) By
default (without the --compare flag), bisync inherits the same comparison
options as `sync` (that is: size and modtime by default, unless modified with
flags such as --checksum or --size-only.) If the --compare flag is set, it will
override these defaults.

If --compare includes checksum and both remotes support checksums but have no
hash types in common with each other, checksums will be considered only for
comparisons within the same side (to determine what has changed since the prior
sync), but not for comparisons against the opposite side. If one side supports
checksums and the other does not, checksums will only be considered on the side
that supports them. When comparing with checksum and/or size without modtime,
bisync cannot determine whether a file is newer or older -- only whether it is
changed or unchanged. (If it is changed on both sides, bisync still does the
standard equality-check to avoid declaring a sync conflict unless it absolutely
has to.)

Also included are some new flags to customize the checksum comparison behavior
on backends where hashes are slow or unavailable. --no-slow-hash and
--slow-hash-sync-only allow selectively ignoring checksums on backends such as
local where they are slow. --download-hash allows computing them by downloading
when (and only when) they're otherwise not available. Of course, this option
probably won't be practical with large files, but may be a good option for
syncing small-but-important files with maximum accuracy (for example, a source
code repo on a crypt remote.) An additional advantage over methods like
cryptcheck is that the original file is not required for comparison (for
example, --download-hash can be used to bisync two different crypt remotes with
different passwords.)

Additionally, all of the above are now considered during the final --check-sync
for much-improved accuracy (before this change, it only compared filenames!)

Many other details are explained in the included docs.
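
For example (hypothetical paths), comparing all three attributes while limiting
slow local hashes with --slow-hash-sync-only:

    rclone bisync /path/to/local remote:folder --compare size,modtime,checksum --slow-hash-sync-only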
2024-01-20 16:08:06 -05:00
nielash
d8e07bfd8e bisync: document beta status more clearly - fixes #6082 2024-01-20 15:38:26 -05:00
nielash
199d82969b bisync: normalize session name to non-canonical - fixes #7423
Before this change, bisync used the "canonical" Fs name in the filename for its
listing files, including any {hexstring} suffix. An unintended consequence of
this was that if a user added a backend-specific flag from the command line
(thus "overriding" the config), bisync would fail to find the listing files it
created during the prior run without this flag, due to the path now having a
{hexstring} suffix that wasn't there before (or vice versa, if the flag was
present when the session was established, and later removed.) This would
sometimes cause bisync to fail with a critical error (if no listing existed
with the alternate name), or worse -- it would sometimes cause bisync to use an
old, incorrect listing (if old listings with the alternate name DID still
exist, from before the user changed their flags.)

After this change, the issue is fixed by always normalizing the SessionName to
the non-canonical version (no {hexstring} suffix), regardless of the flags. To
avoid a breaking change, we first check if a suffixed listing exists. If so, we
rename it (and overwrite the non-suffixed version, if any.) If not, we carry on
with the non-suffixed version. (We should only find a suffixed version if
created prior to this commit.)

The result for the user is that the same pair of paths will always use the same
.lst filenames, with or without backend-specific flags.
2024-01-20 15:38:26 -05:00
nielash
bb74a13c07 bisync: update version number in docs
as these changes did not make it in time for 1.65
2024-01-20 15:38:26 -05:00
nielash
57624629d6 bisync: account for differences in backend features on integration tests - see #5679
Before this change, integration tests often could not be run on backends with
differing features from the local system that goldenized them. In particular,
differences in modtime precision, checksum support, and encoding would cause
false positives. After this change, the tests more accurately account for the
features of the backend being tested, which allows us to see true positives
more clearly, and more meaningfully assess whether a backend is supported.
2024-01-20 14:50:08 -05:00
nielash
7c6f0cc455 operations: fix renaming a file on macOS
Before this change, a file would sometimes be silently deleted instead of
renamed on macOS, due to its unique handling of unicode normalization. Rclone
already had a SameObject check in place for case insensitivity before deleting
the source (for example if "hello.txt" was renamed to "HELLO.txt"), but had no
such check for unicode normalization. After this change, the delete is skipped
on macOS if the src and dst filenames normalize to the same NFC string.

Example of the previous behavior:

 ~ % rclone touch /Users/nielash/rename_test/ö
 ~ % rclone lsl /Users/nielash/rename_test/ö
        0 2023-11-21 17:28:06.170486000 ö
 ~ % rclone moveto /Users/nielash/rename_test/ö /Users/nielash/rename_test/ö -vv
2023/11/21 17:28:51 DEBUG : rclone: Version "v1.64.0" starting with parameters ["rclone" "moveto" "/Users/nielash/rename_test/ö" "/Users/nielash/rename_test/ö" "-vv"]
2023/11/21 17:28:51 DEBUG : Creating backend with remote "/Users/nielash/rename_test/ö"
2023/11/21 17:28:51 DEBUG : Using config file from "/Users/nielash/.config/rclone/rclone.conf"
2023/11/21 17:28:51 DEBUG : fs cache: adding new entry for parent of "/Users/nielash/rename_test/ö", "/Users/nielash/rename_test"
2023/11/21 17:28:51 DEBUG : Creating backend with remote "/Users/nielash/rename_test/"
2023/11/21 17:28:51 DEBUG : fs cache: renaming cache item "/Users/nielash/rename_test/" to be canonical "/Users/nielash/rename_test"
2023/11/21 17:28:51 DEBUG : ö: Size and modification time the same (differ by 0s, within tolerance 1ns)
2023/11/21 17:28:51 DEBUG : ö: Unchanged skipping
2023/11/21 17:28:51 INFO  : ö: Deleted
2023/11/21 17:28:51 INFO  :
Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
Checks:                 1 / 1, 100%
Deleted:                1 (files), 0 (dirs)
Elapsed time:         0.0s

2023/11/21 17:28:51 DEBUG : 5 go routines active
 ~ % rclone lsl /Users/nielash/rename_test/
 ~ %
2024-01-20 14:50:08 -05:00
nielash
422b037087 bisync: fallback to cryptcheck or --download when can't check hash
Bisync checks file equality before renaming sync conflicts by comparing
checksums. Before this change, backends without checksum support (notably
Crypt) would fall back to --size-only for these checks, which is not a very
safe method (differing files can sometimes have the same size, especially if
they're small.) After this change, Crypt remotes fallback to using Cryptcheck
so that checksums can be compared. As a last resort when neither Check nor
Cryptcheck are available, files are compared using --download so that we can be
certain the files are identical regardless of checksum support.
2024-01-20 14:50:08 -05:00
nielash
7f854acb05 local: fix cleanRootPath on Windows after go1.21.4 stdlib update
Similar to
acf1e2df84,
go1.21.4 appears to have broken sync.MoveDir on Windows because
filepath.VolumeName() returns `\\?` instead of `\\?\C:` in cleanRootPath. It
looks like the Go team is aware of the issue and planning a fix, so this may
only be needed temporarily.
2024-01-20 14:50:08 -05:00
nielash
bbf9b1b3d2 bisync: support two --backup-dir paths on different remotes
Before this change, bisync supported `--backup-dir` only when `Path1` and
`Path2` were different paths on the same remote. With this change, bisync
introduces new `--backup-dir1` and `--backup-dir2` flags to support separate
backup-dirs for `Path1` and `Path2`.

`--backup-dir1` and `--backup-dir2` can use different remotes from each other,
but `--backup-dir1` must use the same remote as `Path1`, and `--backup-dir2`
must use the same remote as `Path2`. Each backup directory must not overlap its
respective bisync Path without being excluded by a filter rule.

The standard `--backup-dir` will also work, if both paths use the same remote
(but note that deleted files from both paths would be mixed together in the
same dir). If either `--backup-dir1` and `--backup-dir2` are set, they will
override `--backup-dir`.
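
For example (hypothetical paths), with each side backing up to its own location:

    rclone bisync /path/to/local remote:folder --backup-dir1 /path/to/local-backups --backup-dir2 remote:backups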
2024-01-20 14:50:08 -05:00
nielash
9cf783677e bisync: support files with unknown length, including Google Docs - fixes #5696
Before this change, bisync intentionally ignored Google Docs (albeit in a
buggy way that caused problems during --resync.) After this change, Google Docs
(including Google Sheets, Slides, etc.) are now supported in bisync, subject to
the same options, defaults, and limitations as in `rclone sync`. When bisyncing
drive with non-drive backends, the drive -> non-drive direction is controlled
by `--drive-export-formats` (default `"docx,xlsx,pptx,svg"`) and the non-drive
-> drive direction is controlled by `--drive-import-formats` (default none.)

For example, with the default export/import formats, a Google Sheet on the
drive side will be synced to an `.xlsx` file on the non-drive side. In the
reverse direction, `.xlsx` files with filenames that match an existing Google
Sheet will be synced to that Google Sheet, while `.xlsx` files that do NOT
match an existing Google Sheet will be copied to drive as normal `.xlsx` files
(without conversion to Sheets, although the Google Drive web browser UI may
still give you the option to open it as one.)

If `--drive-import-formats` is set (it's not, by default), then all of the
specified formats will be converted to Google Docs, if there is no existing
Google Doc with a matching name. Caution: such conversion can be quite lossy,
and in most cases it's probably not what you want!

To bisync Google Docs as URL shortcut links (in a manner similar to "Drive for
Desktop"), use: `--drive-export-formats url` (or alternatives.)

Note that these link files cannot be edited on the non-drive side -- you will
get errors if you try to sync an edited link file back to drive. They CAN be
deleted (it will result in deleting the corresponding Google Doc.) If you
create a `.url` file on the non-drive side that does not match an existing
Google Doc, bisyncing it will just result in copying the literal `.url` file
over to drive (no Google Doc will be created.) So, as a general rule of thumb,
think of them as read-only placeholders on the non-drive side, and make all
your changes on the drive side.

Likewise, even with other export-formats, it is best to only move/rename Google
Docs on the drive side. This is because otherwise, bisync will interpret this
as a file deleted and another created, and accordingly, it will delete the
Google Doc and create a new file at the new path. (Whether or not that new file
is a Google Doc depends on `--drive-import-formats`.)

Lastly, take note that all Google Docs on the drive side have a size of `-1`
and no checksum. Therefore, they cannot be reliably synced with the
`--checksum` or `--size-only` flags. (To be exact: they will still get
created/deleted, and bisync's delta engine will notice changes and queue them
for syncing, but the underlying sync function will consider them identical and
skip them.) To work around this, use the default (modtime and size) instead of
`--checksum` or `--size-only`.

To ignore Google Docs entirely, use `--drive-skip-gdocs`.

Nearly all of the Google Docs logic is outsourced to the Drive backend, so
future changes should also be supported by bisync.
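
For example (the remote name and local path are hypothetical), using the default
export formats or URL shortcut links:

    rclone bisync gdrive: /path/to/local --drive-export-formats docx,xlsx,pptx,svg
    rclone bisync gdrive: /path/to/local --drive-export-formats url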
2024-01-20 14:50:08 -05:00
nielash
4d5d6ee61b bisync: provide more info in critical error msgs 2024-01-20 14:50:08 -05:00
nielash
44637dcd7f bisync: high-level retries if --resilient
Before this change, bisync had no ability to retry in the event of sync errors.
After this change, bisync will retry if --resilient is passed, but only in one
direction at a time. We can safely retry in one direction because the source is
still intact, even if the dest was left in a messy state. If the first
direction still fails after our final retry, we abort and do NOT continue in
the other direction, to prevent the messy dest from polluting the source. If
the first direction succeeds, we do then allow retries in the other direction.

The number of retries is controllable by --retries (default 3)
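
For example (hypothetical paths):

    rclone bisync /path/to/local remote:folder --resilient --retries 3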

2024-01-20 14:50:08 -05:00
nielash
98f539de8f bisync: refactor normalization code, fix deltas - fixes #7270
Refactored the case / unicode normalization logic to be much more efficient,
 and fix the last outstanding issue from #7270. Before this change, we were
 doing lots of for loops and re-normalizing strings we had already normalized
 earlier. Now, we leave the normalizing entirely to March and avoid
 re-transforming later, which seems to make a large difference in terms of
 performance.
2024-01-20 14:50:08 -05:00
nielash
58fd6d7b94 docs: add bisync to index 2024-01-20 14:50:08 -05:00
nielash
9c96c13a35 bisync: optimize --resync performance -- partially addresses #5681
Before this change, --resync was handled in three steps, and needed to do a lot
of unnecessary work to implement its own --ignore-existing logic, which also
caused problems with unicode normalization, in addition to being pretty slow.
After this change, it is refactored to produce the same result much more
efficiently, by reducing the three steps to two and letting ci.IgnoreExisting
do the work instead of reinventing the wheel.

The behavior and sync order remain unchanged for now -- just faster (but see
the ongoing lively discussions about potential future changes in #5681!)
2024-01-20 14:50:08 -05:00
nielash
f7f4651828 bisync: handle unicode and case normalization consistently - mostly-fixes #7270
Before this change, Bisync sometimes normalized NFD to NFC and sometimes
did not, causing errors in some scenarios (particularly for users of macOS).
It was similarly inconsistent in its handling of case-insensitivity.

There were three main places where Bisync should have normalized, but didn't:

1. When building the list of files that need to be transferred during --resync
2. When building the list of deltas during a non-resync
3. When comparing Path1 to Path2 during --check-sync

After this change, 1 and 3 are resolved, and bisync supports
--no-unicode-normalization and --ignore-case-sync in the same way as sync.
2 will be addressed in a future update.
2024-01-20 14:50:08 -05:00
nielash
11afc3dde0 sync: --fix-case flag to rename case insensitive dest - fixes #4854
Before this change, a sync to a case insensitive dest (such as macOS / Windows)
would not result in a matching filename if the source and dest had casing
differences but were otherwise equal. For example, syncing `hello.txt` to
`HELLO.txt` would result in the dest filename remaining `HELLO.txt`.
Furthermore, `--local-case-sensitive` did not solve this, as it actually caused
`HELLO.txt` to get deleted!

After this change, `HELLO.txt` is renamed to `hello.txt` to match the source,
only if the `--fix-case` flag is specified. (The old behavior remains the
default.)
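
For example (hypothetical paths):

    rclone sync /path/to/src remote:dst --fix-case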
2024-01-20 14:50:08 -05:00
nielash
88e516adee moveOrCopyFile: avoid panic on --dry-run
Before this change, changing the case of a file on a case insensitive remote
would fatally panic when `--dry-run` was set, due to `moveOrCopyFile`
attempting to access the non-existent `tmpObj` it (would normally have)
created. After this change, the panic is avoided by skipping this step during
a `--dry-run` (with the usual "skipped as --dry-run is set" log message.)
2024-01-20 14:50:08 -05:00
nielash
fd95511091 bisync: generate listings concurrently with march -- fixes #7332
Before this change, bisync needed to build a full listing for Path1, then a
full listing for Path2, then compare them -- and each of those tasks needed to
finish before the next one could start. In addition to being slow and
inefficient, it also caused real problems if a file changed between the time
bisync checked it on Path1 and the time it checked the corresponding file on
Path2.

This change solves these problems by listing both paths concurrently, using
the same March infrastructure that check and sync use to traverse two
directories in lock-step, optimized by Go's robust concurrency support.
Listings should now be much faster, and any given path is now checked
nearly-instantaneously on both sides, minimizing room for error.

Further discussion:
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=4.%20Listings%20should%20alternate%20between%20paths%20to%20minimize%20errors
2024-01-20 14:50:08 -05:00
nielash
0cac5d67ab bisync: introduce terminal colors
This introduces a few basic color codings to make the terminal output more
readable (and more fun). Rclone's standard --color flag is supported.
(AUTO|NEVER|ALWAYS)

Only a few lines have colors right now -- more will probably be added in
future versions.
2024-01-20 14:50:08 -05:00
nielash
6d6dc00abb bisync: rollback listing on error
Before this change, bisync had no mechanism for "retrying" a file again next
time, in the event of an unexpected and possibly temporary error. After this
change, bisync is now essentially able to mark a file as needing to be
rechecked next time. Bisync does this by keeping one prior listing on hand at
all times. In a low-confidence situation, bisync can revert a given file row
back to its state at the end of the last known successful sync, ensuring that
any subsequent changes will be re-noticed on the next run.
This can potentially be helpful for a dynamically changing file system, where
files may be changing quickly while bisync is working with them.
2024-01-20 14:50:08 -05:00
nielash
079763f09a bisync: isDir check for deltas
Before this change, if --create-empty-src-dirs was specified, bisync would
include directories in the list of deltas to evaluate by their modtime,
relative to the prior sync. This was unnecessary, as rclone does not yet
support setting modtime for directories.

After this change, we skip directories when comparing modtimes. (In other
words, we care only if a directory is created or deleted, not whether it is
newer or older.)
2024-01-20 14:50:08 -05:00
nielash
978cbf9360 bisync: generate final listing from sync results, not relisting -- fixes #5676
Before this change, if there were changes to sync, bisync listed each path
twice: once before the sync and once after. The second listing caused quite
a lot of problems, in addition to making each run much slower and more
expensive. A serious side-effect was that file changes could slip through
undetected, if they happened to occur while a sync was running (between the
first and second listing snapshots.)

After this change, the second listing is eliminated by getting the underlying
sync operation to report back a list of what it changed. Not only is this more
efficient, but also much more robust to concurrent modifications. It should no
longer be necessary to avoid making changes while it's running -- bisync will
simply learn about those changes next time and handle them on the next run.
Additionally, this also makes --check-sync usable again.

For further discussion, see:
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=5.%20Final%20listings%20should%20be%20created%20from%20initial%20snapshot%20%2B%20deltas%2C%20not%20full%20re%2Dscans%2C%20to%20avoid%20errors%20if%20files%20changed%20during%20sync
2024-01-20 14:50:08 -05:00
nielash
3a50f35df9 sync: report list of synced paths to file -- see #7282
Allows rclone sync to accept the same output file flags as rclone check,
for the purpose of writing results to a file.
A new --dest-after option is also supported, which writes a list file using
the same ListFormat flags as lsf (including customizable options for hash,
modtime, etc.) Conceptually it is similar to rsync's --itemize-changes, but
not identical -- it should output an accurate list of what will be on the
destination after the sync.

Note that it has a few limitations, and certain scenarios
are not currently supported:

--max-duration / CutoffModeHard
--compare-dest / --copy-dest (because equal() is called multiple times for the
    same file)
server-side moves of an entire dir at once (because we never get the individual
file objects in the dir)
High-level retries, because there would be dupes
Possibly some error scenarios that didn't come up on the tests

Note also that each file is logged during the sync, as opposed to after, so it
is most useful as a predictor of what SHOULD happen to each file
(which may or may not match what actually DID.)

Only rclone sync is currently supported -- support for copy and move may be
added in the future.
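
For example (hypothetical paths and output filename):

    rclone sync /path/to/src remote:dst --dest-after dest-after.txt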
2024-01-20 14:50:08 -05:00
nielash
c0968a0987 operations: add logger to log list of sync results -- fixes #7282
Logger instruments the Sync routine with a status report for each file pair,
making it possible to output a list of the synced files, along with their
attributes and sigil categorization (match/differ/missing/etc.)
It is very customizable by passing in a custom LoggerFn, options, and
io.Writers to be written to. Possible uses include:
- allow sync to write path lists to a file, in the same format as rclone check
- allow sync to output a --dest-after file using the same format flags as lsf
- receive results as JSON when calling sync from an internal function
- predict the post-sync state of the destination

For usage examples, see bisync.WriteResults() or sync.SyncLoggerFn()
2024-01-20 14:50:08 -05:00
nielash
932f9ec34a bisync: document support for atomic uploads 2024-01-20 14:50:08 -05:00
nielash
0e5f12126f bisync: merge copies and deletes, support --track-renames and --backup-dir -- fixes #5690 fixes #5685
Before this change, bisync handled copies and deletes in separate operations.
After this change, they are combined in one sync operation, which is faster
and also allows bisync to support --track-renames and --backup-dir.

Bisync uses a --files-from filter containing only the paths bisync has
determined need to be synced. Just like in sync (but in both directions),
if a path is present on the dst but not the src, it's interpreted as a delete
rather than a copy.
2024-01-20 14:50:08 -05:00
nielash
5c7ba0bfd3 bisync: fix tests on macOS
normalizes unicode and ignores .DS_Store files to make testing possible
on macOS
2024-01-20 14:50:08 -05:00
nielash
9933d6c071 check: respect --no-unicode-normalization and --ignore-case-sync for --checkfile
Before this change, --no-unicode-normalization and --ignore-case-sync
were respected for rclone check but not for rclone check --checkfile,
causing them to give different results.

This change adds support for --checkfile so that the behavior is consistent.
2024-01-20 14:50:08 -05:00
nielash
66929416d4 lsf: add --time-format flag
Before this change, lsf's time format was hard-coded to "2006-01-02 15:04:05",
regardless of the Fs's precision. After this change, a new optional
--time-format flag is added to allow customizing the format (the default is
unchanged).

Examples:
	rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)'
	rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000'
	rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00'
	rclone lsf remote:path --format pt --time-format RFC3339
	rclone lsf remote:path --format pt --time-format DateOnly
	rclone lsf remote:path --format pt --time-format max

--time-format max will automatically truncate '2006-01-02 15:04:05.000000000'
to the maximum precision supported by the remote.
2024-01-20 14:50:08 -05:00
dependabot[bot]
b06935a12e build(deps): bump actions/cache from 3 to 4
Bumps [actions/cache](https://github.com/actions/cache) from 3 to 4.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-19 17:19:08 +00:00
Tera
806f6ab1eb add missing backtick 2024-01-19 11:17:36 +00:00
Nick Craig-Wood
c482624a6c config: add config/paths to the rc as rclone config paths equivalent
Fixes #7568
2024-01-18 17:47:39 +00:00
kapitainsky
17fea90ac9 docs: add rclone OS requirements
Adds rclone OS requirements list and latest rclone versions known to be working with specific historical OS versions.

Discussed on the forum:
https://forum.rclone.org/t/rclone-1-65-1-runtime-exception-error-crash-immediately-after-running-the-command/44051

Fixes: #7571
2024-01-17 16:42:33 +00:00
Harshit Budhraja
78176d39fd imagekit: updated overview - supported operations 2024-01-17 16:38:54 +00:00
Nick Craig-Wood
ae3c73f610 stats: fix race between ResetCounters and stopAverageLoop called from time.AfterFunc
Before this change StatsInfo.ResetCounters() and stopAverageLoop()
(when called from time.AfterFunc) could race on StatsInfo.average.
This was because the deferred stopAverageLoop accessed
StatsInfo.average without locking.

For some reason this only ever happened on macOS. This caused the CI
to fail on macOS thus causing the macOS builds not to appear.

This commit fixes the problem with a bit of extra locking.

It also renames all StatsInfo methods that should be called without
the lock to start with an initial underscore as this is the convention
we use elsewhere.

Fixes #7567
2024-01-17 10:23:50 +00:00
Nick Craig-Wood
d20f647487 Add Harshit Budhraja to contributors 2024-01-17 10:23:50 +00:00
Harshit Budhraja
6521394865 imagekit: Updated docs and web content 2024-01-16 18:25:25 +00:00
Nick Craig-Wood
42cac4cf53 build: use API when fetching golangci-lint as it is more reliable
This was turned off previously because we used it in the CI and it
got rate limited.
2024-01-15 16:22:07 +00:00
Nick Craig-Wood
223d8c5fe3 serve dlna: now only supported on go1.21 or later
This is due to use of go1.21 only constructs in github.com/anacrolix/log
2024-01-15 16:22:07 +00:00
Nick Craig-Wood
dd0e5b9a7f operations: use built in io.OffsetWriter for go1.20 2024-01-15 16:22:07 +00:00
Nick Craig-Wood
da244a3709 ssh: shorten wait delay for external ssh binaries now that we are using go1.20
Now we are guaranteed to have go1.20 or later we can use the WaitDelay
flag when running external ssh binaries.
2024-01-15 16:22:07 +00:00
Nick Craig-Wood
938b43c26c build: remove random.Seed since random generator is seeded automatically in go1.20
Now that the minimum version is go1.20 we can stop seeding the random
number generator.
2024-01-15 16:22:07 +00:00
Nick Craig-Wood
13fb2fb2ec build: update to go1.22rc1 and make go1.20 the minimum required version 2024-01-15 16:22:07 +00:00
Nick Craig-Wood
43cc2435c3 build: update indirect dependencies where possible 2024-01-15 16:18:42 +00:00
Nick Craig-Wood
1b1e43074f build: update direct dependencies and fix serve nfs
This updates the direct dependencies.

The latest github.com/willscott/go-nfs has changed the interface
slightly so this implements a dummy InvalidateHandle method in order
to satisfy it.
2024-01-15 16:18:42 +00:00
Nick Craig-Wood
cacfc100de docs: add warp.dev sponsorship to github home page 2024-01-15 11:57:27 +00:00
Nick Craig-Wood
f8c5695aed docs: add warp.dev as a sponsor 2024-01-15 11:55:38 +00:00
Nick Craig-Wood
a5972fe0d1 docs: update website footer 2024-01-15 11:55:38 +00:00
Nick Craig-Wood
184459ba8f vfs: fix stale data when using --vfs-cache-mode full
Before this change the VFS cache could get into a state where when an
object was updated remotely, the fingerprint of the item was correct
for the new object but the data in the VFS cache was for the old
object.

This fixes the problem by updating the fingerprint of the item at the
point we remove the stale data. The empty cache item now represents
the new item even though it has no data in it.

This stops the fallback code for an empty fingerprint running (used
when we are writing items to the cache instead of reading them) which
was causing the problem.

Fixes #6053
See: https://forum.rclone.org/t/cached-webdav-mount-fingerprints-get-nuked-on-ls/43974/
2024-01-15 11:12:59 +00:00
Nick Craig-Wood
519fe98e6e azureblob: implement --azureblob-delete-snapshots
This flag controls what happens when we try to delete a blob with a
snapshot. The UI follows the azcopy tool.

See: https://forum.rclone.org/t/how-to-delete-undeleted-blobs-on-azure/43911/
2024-01-13 14:27:54 +00:00
Nick Craig-Wood
3df6518006 Add Nikhil Ahuja to contributors 2024-01-13 14:27:54 +00:00
Nikhil Ahuja
1045f54128 oracleobjectstorage: Support "backend restore" command - fixes #7371 2024-01-09 09:43:36 +00:00
dependabot[bot]
0563cc6314 build(deps): bump github.com/cloudflare/circl from 1.3.6 to 1.3.7
Bumps [github.com/cloudflare/circl](https://github.com/cloudflare/circl) from 1.3.6 to 1.3.7.
- [Release notes](https://github.com/cloudflare/circl/releases)
- [Commits](https://github.com/cloudflare/circl/compare/v1.3.6...v1.3.7)

---
updated-dependencies:
- dependency-name: github.com/cloudflare/circl
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-08 17:38:09 +00:00
Nick Craig-Wood
e20f2eee59 Changelog updates from Version v1.65.1 2024-01-08 11:54:02 +00:00
Vincent Murphy
41b8935a6c docs: Fix broken test_proxy.py link again
The previous fix fixed the auto generated output - this fixes the source.
2024-01-08 11:54:02 +00:00
Nick Craig-Wood
fbdf71ab64 operations: fix files moved by rclone move not being counted as transfers
Before this change we were only counting moves as checks. This means
that when using `rclone move` the `Transfers` stat did not count up
like it should do.

This change introduces a new primitive operations.MoveTransfers which
counts moves as Transfers for use where that is appropriate, such as
rclone move/moveto. Otherwise moves are counted as checks and their
bytes are not accounted.

See: #7183
See: https://forum.rclone.org/t/stats-one-line-date-broken-in-1-64-0-and-later/43263/
2024-01-07 11:26:09 +00:00
Nick Craig-Wood
d392f9fcd8 accounting: fix stats to show server side transfers
Before this fix we were not counting transferred files nor transferred
bytes for server side moves/copies.

If the server side move/copy has been marked as a transfer and not a
checker then this accounts for transferred files and transferred bytes.

The transferred bytes are not accounted to the network though so this
should not affect the network stats.
2024-01-07 11:26:09 +00:00
Nick Craig-Wood
dedad9f071 onedrive: fix "unauthenticated: Unauthenticated" errors when uploading
Before this change, sometimes when uploading files the onedrive
servers return 401 Unauthorized errors with the text "unauthenticated:
Unauthenticated".

This is because we are sending the Authorization header with the
request and it says in the docs that we shouldn't.

https://learn.microsoft.com/en-us/graph/api/driveitem-createuploadsession?view=graph-rest-1.0#remarks

> If you include the Authorization header when issuing the PUT call,
> it may result in an HTTP 401 Unauthorized response. Only send the
> Authorization header and bearer token when issuing the POST during
> the first step. Don't include it when you issue the PUT call.

This patch fixes the problem by doing the PUT request with an
unauthenticated client.

Fixes #7405
See: https://forum.rclone.org/t/onedrive-unauthenticated-when-trying-to-copy-sync-but-can-use-lsd/41149/
See: https://forum.rclone.org/t/onedrive-unauthenticated-issue/43792/
2024-01-07 11:14:08 +00:00
Nick Craig-Wood
1f6271fa15 s3: copy parts in parallel when doing chunked server side copy
Before this change rclone copied each chunk serially.

After this change it does --s3-upload-concurrency at once.
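
For example (hypothetical buckets):

    rclone copy s3:src-bucket/big.bin s3:dst-bucket/ --s3-upload-concurrency 8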

See: https://forum.rclone.org/t/transfer-big-files-50gb-from-s3-bucket-to-another-s3-bucket-doesnt-starts/43209
2024-01-05 15:54:52 +00:00
Nick Craig-Wood
c16c22d6e1 s3: fix crash if no UploadId in multipart upload
Before this change if the S3 API returned a multipart upload with no
UploadId then rclone would crash.

This detects the problem and attempts to retry the multipart upload
creation.

See: https://forum.rclone.org/t/panic-runtime-error-invalid-memory-address-or-nil-pointer-dereference/43425
2024-01-05 15:52:52 +00:00
Nick Craig-Wood
486a10bec5 serve s3: fix listing oddities
Before this change, listing a subdirectory gave errors like this:

    Entry doesn't belong in directory "" (contains subdir) - ignoring

It also did full recursive listings when it didn't need to.

This was caused by the code using the underlying Fs to do recursive
listings on bucket based backends.

Using both the VFS and the underlying Fs is a mistake so this patch
removes the code which uses the underlying Fs and just uses the VFS.

Fixes #7500
2024-01-05 15:51:13 +00:00
Nick Craig-Wood
5fa13e3e31 protondrive: fix CVE-2023-45286 / GHSA-xwh9-gc39-5298
A race condition in go-resty can result in HTTP request body
disclosure across requests.

See: https://pkg.go.dev/vuln/GO-2023-2328
Fixes: #7491
2024-01-04 17:14:53 +00:00
Nick Craig-Wood
0e746f25a3 amazonclouddrive: remove Amazon Drive backend code and docs #7539
The Amazon Drive backend is closed from 2023-12-31.

See: https://www.amazon.com/b?ie=UTF8&node=23943055011
2024-01-04 17:05:54 +00:00
Nick Craig-Wood
578b9df6ea build: fix docker build on arm/v6
Unexpectedly the team which runs the Go docker images have removed the
arm/v6 image which means that the rclone docker images no longer
build.

One of the recommended fixes is what we've done here - switch to the
alpine builder. This has the advantage that it actually builds arm/v6
architecture, unlike the previous builder which built arm/v5.

See: https://github.com/docker-library/golang/issues/502
2024-01-03 17:43:23 +00:00
Nick Craig-Wood
208e49ce4b fs: update use of math/rand to modern practice 2024-01-03 16:14:40 +00:00
Nick Craig-Wood
7aa066cff8 Add Paul Stern to contributors 2024-01-03 16:14:40 +00:00
dependabot[bot]
64df4cf2db build(deps): bump golang.org/x/crypto to fix ssh terrapin CVE-2023-48795
Fixes SSH terrapin attack: see https://terrapin-attack.com.

Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.14.0 to 0.17.0.
- [Commits](https://github.com/golang/crypto/compare/v0.14.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-03 15:47:34 +00:00
rkonfj
451d7badf7 oauthutil: avoid panic when *token and *ts.token are the same
The field `raw` of `oauth2.Token` may be an uncomparable type (often map[string]interface{}), causing the `*token != *ts.token` expression to panic (comparing uncomparable type ...).

The semantics of comparing whether two tokens are the same can be achieved by comparing accessToken, refreshToken and expiry instead, which avoids the panic.
2024-01-03 15:15:14 +00:00
WeidiDeng
d977fa25fa ftp: fix multi-thread copy
Before this change multi-thread copies using the FTP backend used to error with

    551 Error reading file

This was caused by a spurious error being reported which this code silences.

Fixes #7532
See #3942
2024-01-03 12:21:08 +00:00
Paul Stern
bb679a9def backend: add description field for all backends
Fixes #4391
2024-01-03 10:57:59 +00:00
Nick Craig-Wood
a3d19942bd googlephotos: fix nil pointer exception when batch failed
This was a simple error check that was missing. Interestingly the
errcheck linter did not spot this.

See: https://forum.rclone.org/t/invalid-memory-address-or-nil-pointer-dereference-error-when-copy-to-google-photos/43634/
2024-01-03 10:57:59 +00:00
Nick Craig-Wood
394195cfdf Add rarspace01 to contributors 2024-01-03 10:57:59 +00:00
nielash
3ca766b2f1 hasher: fix invalid memory address error when MaxAge == 0
When f.opt.MaxAge == 0, f.db is never set, however several methods later assume
it is set and attempt to access it, causing an invalid memory address error.
This change fixes the issue in a few spots (there may still be others I haven't
yet encountered.)
2024-01-02 18:14:01 +00:00
albertony
3bf8c877c3 docs/librclone: the newer and recommended ucrt64 subsystem of msys2 can now be used for building on windows 2024-01-01 21:56:45 +01:00
rarspace01
fba2d4c4a7 docs: fix broken link in serve webdav 2023-12-30 18:10:27 +01:00
Oksana
8503282a5a azure-files: fix storage base url
Documented in https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview
2023-12-18 14:15:13 +00:00
Manoj Ghosh
743ea6ac26 oracle object storage: fix object storage endpoint for custom endpoints 2023-12-15 10:13:35 +00:00
Nick Craig-Wood
c69eb84573 chunker,compress,crypt,hasher,union: fix rclone move a file over itself deleting the file
This fixes the Root() returned by the backend when it has returned
fs.ErrorIsFile.

Before this change it returned a root which included the file path.

Because Root() was wrong, the check which detects a file being moved over
itself failed.

This adds an integration test to check it for all backends.

See: https://forum.rclone.org/t/rclone-move-chunker-dir-file-chunker-dir-deletes-all-file-chunks/43333/
2023-12-10 22:29:57 +00:00
Nick Craig-Wood
f98e672f37 selfupdate: fix crash in tests if beta not found 2023-12-10 22:29:57 +00:00
Nick Craig-Wood
242fe96b18 Add keongalvin to contributors 2023-12-10 22:29:57 +00:00
rkonfj
3f159bac16 backend: fs implements the Shutdowner interface
Since `tokenRenewer` adds a Shutdown method, we should call it to
clean up resources.

changes backends:
onedrive, box, pcloud, amazonclouddrive, hidrive, jottacloud, sharefile, premiumizeme

Signed-off-by: rkonfj <rkonfj@gmail.com>
2023-12-09 11:44:50 +00:00
rkonfj
6c58e9976c oauthutil: add Shutdown method
Before this change, calling the `oauthutil.NewRenew` func may
cause goroutine leaks.

This change adds a `Shutdown` method to allow the caller to exit
the goroutine to avoid leaks.

Signed-off-by: rkonfj <rkonfj@gmail.com>
2023-12-09 11:44:50 +00:00
keongalvin
110d07548f docs: fix broken link 2023-12-08 16:21:09 +00:00
Nick Craig-Wood
f45cee831f dropbox: fix used space on dropbox team accounts
Before this change we were not using the used space from the team
stats.

This patch uses that as the used space if available as it seems to
include the user stats in it.

See: https://forum.rclone.org/t/rclone-about-with-dropbox-reporte-size-incorrectly/43269/
2023-12-08 14:26:46 +00:00
Nick Craig-Wood
ef0f3020e4 vfs: note that --vfs-refresh runs in the background #6830 2023-12-08 14:26:46 +00:00
Nick Craig-Wood
113b2b648c Add emyarod to contributors 2023-12-08 14:26:46 +00:00
Nick Craig-Wood
57ab4d279e Add Anthony Metzidis to contributors 2023-12-08 14:26:46 +00:00
Nick Craig-Wood
8e21c77ead Add Eli Orzitzer to contributors 2023-12-08 14:26:46 +00:00
emyarod
4751980659 docs: update contributor email 2023-12-08 11:21:26 +00:00
Anthony Metzidis
9fe343b725 s3: S3 IPv6 support with option "use_dual_stack" (bool)
dualstack_endpoint=true enables IPv6 DNS lookup for S3 endpoints.
In s3.go, add Options.DualstackEndpoint to support IPv6 on S3.
2023-12-08 11:11:47 +00:00
dependabot[bot]
2f5685b405 build(deps): bump actions/setup-go from 4 to 5
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 4 to 5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-07 16:48:50 +00:00
Eli Orzitzer
c3117d9efb Doc change: Add the CreateBucket permission requirement for AWS S3 2023-12-07 16:46:04 +00:00
Nick Craig-Wood
1ebbc74f1d nfsmount: compile for all unix oses, add --sudo and fix error/option handling
- make compile on all unix OSes - this will make the docs appear on linux and rclone.org!
- add --sudo flag for using with mount
- improve error reporting
- fix option handling
2023-12-05 10:44:53 +00:00
Nick Craig-Wood
aee787d33e serve nfs: Mark as experimental 2023-12-05 10:44:53 +00:00
Anagh Kumar Baranwal
298c13e719 systemd: Fix detection and switch to the coreos package everywhere
rather than having 2 separate libraries

Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-12-02 14:17:15 +00:00
Nick Craig-Wood
f0c774156e onedrive: fix error listing: unknown object type <nil>
This error was introduced in this commit when refactoring the list
routine.

b8591b230d onedrive: implement ListR method which gives --fast-list support

The error was caused by OneNote files not being skipped properly.
2023-12-02 10:49:15 +00:00
Nick Craig-Wood
08c460dd1a Add ben-ba to contributors 2023-12-02 10:49:15 +00:00
ben-ba
e3d0bff9ca docs: fix typo in docs.md
- OpenChunkedWriter
+ OpenChunkWriter
2023-12-01 20:45:48 +01:00
Nick Craig-Wood
caf5dd9d5e mount: notice daemon dying much quicker
Before this change we waited until the timeout to check whether the
daemon was alive.

Now we check it every 100ms like we do the mount status.

This also fixes compiling on all platforms which was broken by the
previous change

9bfbf2a4a mount: fix macOS not noticing errors with --daemon

See: https://forum.rclone.org/t/rclone-mount-daemon-exits-successfully-even-when-mount-fails/43146
2023-12-01 09:36:05 +00:00
Nick Craig-Wood
97d7945cef Add halms to contributors 2023-12-01 09:36:05 +00:00
Manoj Ghosh
9061e81850 multipart copy: create bucket if it doesn't exist. 2023-11-29 15:47:56 +00:00
halms
58339845f4 smb: fix shares not listed by updating go-smb2
Before this change the IP address of the server was used in the SMB
connect request (see CloudSoda/go-smb2#18).
The updated library now can pass the hostname instead.

The update requires a small change in the dial method call.

Fixes rclone#6672
2023-11-29 15:39:27 +00:00
Nick Craig-Wood
4d4f3de5a5 s3: add --s3-version-deleted to show delete markers in listings when using versions.
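
For example (hypothetical bucket; this assumes the existing --s3-versions listing flag):

    rclone lsl s3:bucket --s3-versions --s3-version-deleted
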
See: https://forum.rclone.org/t/s3-object-deletion-times/42781
2023-11-29 09:44:40 +00:00
Nick Craig-Wood
9bfbf2a4ae mount: fix macOS not noticing errors with --daemon
See: https://forum.rclone.org/t/rclone-mount-daemon-exits-successfully-even-when-mount-fails/43146
2023-11-28 19:42:00 +00:00
Nick Craig-Wood
96f8b7c827 install.sh: fix harmless error message on install
This was caused by trying to write to a non-existent file, and
changing the order of the cleanup fixed it.

https://forum.rclone.org/t/rclone-v1-65-0-release/43100/18
2023-11-28 19:10:04 +00:00
Nick Craig-Wood
85f142a206 Start v1.66.0-DEV development 2023-11-26 17:14:38 +00:00
1232 changed files with 159525 additions and 58236 deletions

4
.gitattributes vendored
View File

@@ -1,3 +1,7 @@
# Go writes go.mod and go.sum with lf even on windows
go.mod text eol=lf
go.sum text eol=lf
# Ignore generated files in GitHub language statistics and diffs
/MANUAL.* linguist-generated=true
/rclone.1 linguist-generated=true

View File

@@ -27,12 +27,12 @@ jobs:
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.19', 'go1.20']
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.20', 'go1.21']
include:
- job_name: linux
os: ubuntu-latest
go: '1.21'
go: '>=1.22.0-rc.1'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@@ -43,14 +43,14 @@ jobs:
- job_name: linux_386
os: ubuntu-latest
go: '1.21'
go: '>=1.22.0-rc.1'
goarch: 386
gotags: cmount
quicktest: true
- job_name: mac_amd64
os: macos-11
go: '1.21'
os: macos-latest
go: '>=1.22.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
@@ -58,15 +58,15 @@ jobs:
deploy: true
- job_name: mac_arm64
os: macos-11
go: '1.21'
os: macos-latest
go: '>=1.22.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows
os: windows-latest
go: '1.21'
go: '>=1.22.0-rc.1'
gotags: cmount
cgo: '0'
build_flags: '-include "^windows/"'
@@ -76,23 +76,23 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '1.21'
go: '>=1.22.0-rc.1'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true
deploy: true
- job_name: go1.19
os: ubuntu-latest
go: '1.19'
quicktest: true
racequicktest: true
- job_name: go1.20
os: ubuntu-latest
go: '1.20'
quicktest: true
racequicktest: true
- job_name: go1.21
os: ubuntu-latest
go: '1.21'
quicktest: true
racequicktest: true
name: ${{ matrix.job_name }}
runs-on: ${{ matrix.os }}
@@ -124,7 +124,7 @@ jobs:
sudo modprobe fuse
sudo chmod 666 /dev/fuse
sudo chown root:$USER /etc/fuse.conf
sudo apt-get install fuse3 libfuse-dev rpm pkg-config
sudo apt-get install fuse3 libfuse-dev rpm pkg-config git-annex git-annex-remote-rclone
if: matrix.os == 'ubuntu-latest'
- name: Install Libraries on macOS
@@ -137,7 +137,8 @@ jobs:
brew untap --force homebrew/cask
brew update
brew install --cask macfuse
if: matrix.os == 'macos-11'
brew install git-annex git-annex-remote-rclone
if: matrix.os == 'macos-latest'
- name: Install Libraries on Windows
shell: powershell
@@ -167,14 +168,6 @@ jobs:
printf "\n\nSystem environment:\n\n"
env
- name: Go module cache
uses: actions/cache@v4
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Build rclone
shell: bash
run: |
@@ -230,21 +223,71 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Get runner parameters
id: get-runner-parameters
shell: bash
run: |
echo "year-week=$(/bin/date -u "+%Y%V")" >> $GITHUB_OUTPUT
echo "runner-os-version=$ImageOS" >> $GITHUB_OUTPUT
- name: Checkout
uses: actions/checkout@v4
- name: Code quality test
uses: golangci/golangci-lint-action@v3
with:
# Optional: version of golangci-lint to use in form of v1.2 or v1.2.3 or `latest` to use the latest version
version: latest
# Run govulncheck on the latest go version, the one we build binaries with
- name: Install Go
id: setup-go
uses: actions/setup-go@v5
with:
go-version: '1.21'
go-version: '>=1.22.0-rc.1'
check-latest: true
cache: false
- name: Cache
uses: actions/cache@v4
with:
path: |
~/go/pkg/mod
~/.cache/go-build
~/.cache/golangci-lint
key: golangci-lint-${{ steps.get-runner-parameters.outputs.runner-os-version }}-go${{ steps.setup-go.outputs.go-version }}-${{ steps.get-runner-parameters.outputs.year-week }}-${{ hashFiles('go.sum') }}
restore-keys: golangci-lint-${{ steps.get-runner-parameters.outputs.runner-os-version }}-go${{ steps.setup-go.outputs.go-version }}-${{ steps.get-runner-parameters.outputs.year-week }}-
- name: Code quality test (Linux)
uses: golangci/golangci-lint-action@v6
with:
version: latest
skip-cache: true
- name: Code quality test (Windows)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "windows"
with:
version: latest
skip-cache: true
- name: Code quality test (macOS)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "darwin"
with:
version: latest
skip-cache: true
- name: Code quality test (FreeBSD)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "freebsd"
with:
version: latest
skip-cache: true
- name: Code quality test (OpenBSD)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "openbsd"
with:
version: latest
skip-cache: true
- name: Install govulncheck
run: go install golang.org/x/vuln/cmd/govulncheck@latest
@@ -268,15 +311,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '1.21'
- name: Go module cache
uses: actions/cache@v4
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
go-version: '>=1.22.0-rc.1'
- name: Set global environment variables
shell: bash


@@ -56,7 +56,7 @@ jobs:
run: |
df -h .
- name: Build and publish image
uses: docker/build-push-action@v5
uses: docker/build-push-action@v6
with:
file: Dockerfile
context: .

.github/workflows/notify.yml (new file, 15 lines)

@@ -0,0 +1,15 @@
name: Notify users based on issue labels
on:
issues:
types: [labeled]
jobs:
notify:
runs-on: ubuntu-latest
steps:
- uses: jenschelkopf/issue-label-notification-action@1.3
with:
token: ${{ secrets.NOTIFY_ACTION_TOKEN }}
recipients: |
Support Contract=@rclone/support


@@ -1,14 +1,14 @@
name: Publish to Winget
on:
release:
types: [released]
jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: vedantmgoyal2009/winget-releaser@v2
with:
identifier: Rclone.Rclone
installers-regex: '-windows-\w+\.zip$'
token: ${{ secrets.WINGET_TOKEN }}
name: Publish to Winget
on:
release:
types: [released]
jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: vedantmgoyal2009/winget-releaser@v2
with:
identifier: Rclone.Rclone
installers-regex: '-windows-\w+\.zip$'
token: ${{ secrets.WINGET_TOKEN }}

.gitignore vendored (6 lines changed)

@@ -3,10 +3,13 @@ _junk/
rclone
rclone.exe
build
docs/public
/docs/public/
/docs/.hugo_build.lock
/docs/static/img/logos/
rclone.iml
.idea
.history
.vscode
*.test
*.iml
fuzz-build.zip
@@ -15,6 +18,5 @@ fuzz-build.zip
Thumbs.db
__pycache__
.DS_Store
/docs/static/img/logos/
resource_windows_*.syso
.devcontainer

MANUAL.html generated (33310 lines changed)
File diff suppressed because it is too large

MANUAL.md generated (4556 lines changed)
File diff suppressed because it is too large

MANUAL.txt generated (33844 lines changed)
File diff suppressed because it is too large


@@ -36,13 +36,14 @@ ifdef BETA_SUBDIR
endif
BETA_PATH := $(BRANCH_PATH)$(TAG)$(BETA_SUBDIR)
BETA_URL := https://beta.rclone.org/$(BETA_PATH)/
BETA_UPLOAD_ROOT := memstore:beta-rclone-org
BETA_UPLOAD_ROOT := beta.rclone.org:
BETA_UPLOAD := $(BETA_UPLOAD_ROOT)/$(BETA_PATH)
# Pass in GOTAGS=xyz on the make command line to set build tags
ifdef GOTAGS
BUILDTAGS=-tags "$(GOTAGS)"
LINTTAGS=--build-tags "$(GOTAGS)"
endif
LDFLAGS=--ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)"
.PHONY: rclone test_all vars version
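The refactored LDFLAGS variable is what stamps the release tag into the binary through the Go linker's `-X` flag, and the later hunks pass the same flags to `go test` so test builds carry the same version stamp. A minimal stand-alone sketch of the mechanism, with `main.Version` standing in for rclone's real `github.com/rclone/rclone/fs.Version` variable:

```
package main

import "fmt"

// Version has a default for a plain `go build`; the Makefile-style invocation
//   go build -ldflags "-s -X main.Version=v1.68.0"
// overwrites it at link time, exactly as LDFLAGS does for fs.Version.
var Version = "dev"

func main() {
	fmt.Println("version:", Version)
}
```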
@@ -50,7 +51,7 @@ rclone:
ifeq ($(GO_OS),windows)
go run bin/resource_windows.go -version $(TAG) -syso resource_windows_`go env GOARCH`.syso
endif
go build -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) $(BUILD_ARGS)
go build -v $(LDFLAGS) $(BUILDTAGS) $(BUILD_ARGS)
ifeq ($(GO_OS),windows)
rm resource_windows_`go env GOARCH`.syso
endif
@@ -59,7 +60,7 @@ endif
mv -v `go env GOPATH`/bin/rclone`go env GOEXE`.new `go env GOPATH`/bin/rclone`go env GOEXE`
test_all:
go install --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) $(BUILD_ARGS) github.com/rclone/rclone/fstest/test_all
go install $(LDFLAGS) $(BUILDTAGS) $(BUILD_ARGS) github.com/rclone/rclone/fstest/test_all
vars:
@echo SHELL="'$(SHELL)'"
@@ -87,13 +88,13 @@ test: rclone test_all
# Quick test
quicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) ./...
RCLONE_CONFIG="/notfound" go test $(LDFLAGS) $(BUILDTAGS) ./...
racequicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race ./...
RCLONE_CONFIG="/notfound" go test $(LDFLAGS) $(BUILDTAGS) -cpu=2 -race ./...
compiletest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -run XXX ./...
RCLONE_CONFIG="/notfound" go test $(LDFLAGS) $(BUILDTAGS) -run XXX ./...
# Do source code quality checks
check: rclone
@@ -103,7 +104,7 @@ check: rclone
# Get the build dependencies
build_dep:
go run bin/get-github-release.go -extract golangci-lint golangci/golangci-lint 'golangci-lint-.*\.tar\.gz'
go run bin/get-github-release.go -use-api -extract golangci-lint golangci/golangci-lint 'golangci-lint-.*\.tar\.gz'
# Get the release dependencies we only install on linux
release_dep_linux:
@@ -167,7 +168,7 @@ website:
@if grep -R "raw HTML omitted" docs/public ; then echo "ERROR: found unescaped HTML - fix the markdown source" ; fi
upload_website: website
rclone -v sync docs/public memstore:www-rclone-org
rclone -v sync docs/public www.rclone.org:
upload_test_website: website
rclone -P sync docs/public test-rclone-org:
@@ -194,8 +195,8 @@ check_sign:
cd build && gpg --verify SHA256SUMS && gpg --decrypt SHA256SUMS | sha256sum -c
upload:
rclone -P copy build/ memstore:downloads-rclone-org/$(TAG)
rclone lsf build --files-only --include '*.{zip,deb,rpm}' --include version.txt | xargs -i bash -c 'i={}; j="$$i"; [[ $$i =~ (.*)(-v[0-9\.]+-)(.*) ]] && j=$${BASH_REMATCH[1]}-current-$${BASH_REMATCH[3]}; rclone copyto -v "memstore:downloads-rclone-org/$(TAG)/$$i" "memstore:downloads-rclone-org/$$j"'
rclone -P copy build/ downloads.rclone.org:/$(TAG)
rclone lsf build --files-only --include '*.{zip,deb,rpm}' --include version.txt | xargs -i bash -c 'i={}; j="$$i"; [[ $$i =~ (.*)(-v[0-9\.]+-)(.*) ]] && j=$${BASH_REMATCH[1]}-current-$${BASH_REMATCH[3]}; rclone copyto -v "downloads.rclone.org:/$(TAG)/$$i" "downloads.rclone.org:/$$j"'
upload_github:
./bin/upload-github $(TAG)
@@ -205,7 +206,7 @@ cross: doc
beta:
go run bin/cross-compile.go $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
rclone -v copy build/ memstore:pub-rclone-org/$(TAG)
rclone -v copy build/ pub.rclone.org:/$(TAG)
@echo Beta release ready at https://pub.rclone.org/$(TAG)/
log_since_last_release:
@@ -218,18 +219,18 @@ ci_upload:
sudo chown -R $$USER build
find build -type l -delete
gzip -r9v build
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
./rclone --no-check-dest --config bin/ci.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
ifeq ($(or $(BRANCH_PATH),$(RELEASE_TAG)),)
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest
./rclone --no-check-dest --config bin/ci.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest
endif
@echo Beta release ready at $(BETA_URL)/testbuilds
ci_beta:
git log $(LAST_TAG).. > /tmp/git-log.txt
go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
rclone --no-check-dest --config bin/ci.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
ifeq ($(or $(BRANCH_PATH),$(RELEASE_TAG)),)
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)$(BETA_SUBDIR)
rclone --no-check-dest --config bin/ci.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)$(BETA_SUBDIR)
endif
@echo Beta release ready at $(BETA_URL)
@@ -238,7 +239,7 @@ fetch_binaries:
rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server -v -w --disableFastRender
cd docs && hugo server --logLevel info -w --disableFastRender
tag: retag doc
bin/make_changelog.py $(LAST_TAG) $(VERSION) > docs/content/changelog.md.new


@@ -1,7 +1,23 @@
<div align="center">
<sup>Special thanks to our sponsor:</sup>
<br>
<br>
<a href="https://www.warp.dev/?utm_source=github&utm_medium=referral&utm_campaign=rclone_20231103">
<div>
<img src="https://rclone.org/img/logos/warp-github.svg" width="300" alt="Warp">
</div>
<b>Warp is a modern, Rust-based terminal with AI built in so you and your team can build great software, faster.</b>
<div>
<sup>Visit warp.dev to learn more.</sup>
</div>
</a>
<br>
<hr>
</div>
<br>
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-light-mode-only)
[<img src="https://rclone.org/img/logo_on_dark__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-dark-mode-only)
[<img src="https://rclone.org/img/logos/warp-github-light.svg" title="Visit warp.dev to learn more." align="right">](https://www.warp.dev/?utm_source=github&utm_medium=referral&utm_campaign=rclone_20231103#gh-light-mode-only)
[<img src="https://rclone.org/img/logos/warp-github-dark.svg" title="Visit warp.dev to learn more." align="right">](https://www.warp.dev/?utm_source=github&utm_medium=referral&utm_campaign=rclone_20231103#gh-dark-mode-only)
[Website](https://rclone.org) |
[Documentation](https://rclone.org/docs/) |
@@ -25,7 +41,6 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
* Akamai Netstorage [:page_facing_up:](https://rclone.org/netstorage/)
* Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
* Amazon Drive [:page_facing_up:](https://rclone.org/amazonclouddrive/) ([See note](https://rclone.org/amazonclouddrive/#status))
* Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
* ArvanCloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
* Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
@@ -58,6 +73,7 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Liara Object Storage [:page_facing_up:](https://rclone.org/s3/#liara-object-storage)
* Linkbox [:page_facing_up:](https://rclone.org/linkbox)
* Linode Object Storage [:page_facing_up:](https://rclone.org/s3/#linode)
* Magalu Object Storage [:page_facing_up:](https://rclone.org/s3/#magalu)
* Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
* Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
* Mega [:page_facing_up:](https://rclone.org/mega/)
@@ -95,6 +111,7 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
* Synology C2 Object Storage [:page_facing_up:](https://rclone.org/s3/#synology-c2)
* Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos)
* Uloz.to [:page_facing_up:](https://rclone.org/ulozto/)
* Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
* WebDAV [:page_facing_up:](https://rclone.org/webdav/)
* Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)


@@ -37,18 +37,44 @@ This file describes how to make the various kinds of releases
## Update dependencies
Early in the next release cycle update the dependencies
Early in the next release cycle update the dependencies.
* Review any pinned packages in go.mod and remove if possible
* make updatedirect
* make GOTAGS=cmount
* make compiletest
* git commit -a -v
* make update
* make GOTAGS=cmount
* make compiletest
* `make updatedirect`
* `make GOTAGS=cmount`
* `make compiletest`
* Fix anything which doesn't compile at this point and commit changes here
* `git commit -a -v -m "build: update all dependencies"`
If the `make updatedirect` upgrades the version of go in the `go.mod`
then go to manual mode. `go1.20` here is the lowest supported version
in the `go.mod`.
```
go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all > /tmp/potential-upgrades
go get -d $(cat /tmp/potential-upgrades)
go mod tidy -go=1.20 -compat=1.20
```
If the `go mod tidy` fails, use its output to remove the packages which
can't be upgraded from `/tmp/potential-upgrades`. When done, reset the
module files
```
git co go.mod go.sum
```
And try again.
Optionally upgrade the direct and indirect dependencies. This is very
likely to fail if the manual method was used above - in that case
ignore it as it is too time consuming to fix.
* `make update`
* `make GOTAGS=cmount`
* `make compiletest`
* roll back any updates which didn't compile
* git commit -a -v --amend
* `git commit -a -v --amend`
* **NB** watch out for this changing the default go version in `go.mod`
Note that `make update` updates all direct and indirect dependencies
@@ -57,6 +83,9 @@ doing that so it may be necessary to roll back dependencies to the
version specified by `make updatedirect` in order to get rclone to
build.
Once it compiles locally, push it on a test branch and commit fixes
until the tests pass.
## Tidy beta
At some point after the release run


@@ -1 +1 @@
v1.65.2
v1.68.0


@@ -81,10 +81,12 @@ func TestNewFS(t *testing.T) {
for i, gotEntry := range gotEntries {
what := fmt.Sprintf("%s, entry=%d", what, i)
wantEntry := test.entries[i]
_, isDir := gotEntry.(fs.Directory)
require.Equal(t, wantEntry.remote, gotEntry.Remote(), what)
require.Equal(t, wantEntry.size, gotEntry.Size(), what)
_, isDir := gotEntry.(fs.Directory)
if !isDir {
require.Equal(t, wantEntry.size, gotEntry.Size(), what)
}
require.Equal(t, wantEntry.isDir, isDir, what)
}
}


@@ -4,7 +4,6 @@ package all
import (
// Active file systems
_ "github.com/rclone/rclone/backend/alias"
_ "github.com/rclone/rclone/backend/amazonclouddrive"
_ "github.com/rclone/rclone/backend/azureblob"
_ "github.com/rclone/rclone/backend/azurefiles"
_ "github.com/rclone/rclone/backend/b2"
@@ -54,6 +53,7 @@ import (
_ "github.com/rclone/rclone/backend/storj"
_ "github.com/rclone/rclone/backend/sugarsync"
_ "github.com/rclone/rclone/backend/swift"
_ "github.com/rclone/rclone/backend/ulozto"
_ "github.com/rclone/rclone/backend/union"
_ "github.com/rclone/rclone/backend/uptobox"
_ "github.com/rclone/rclone/backend/webdav"

File diff suppressed because it is too large


@@ -1,21 +0,0 @@
// Test AmazonCloudDrive filesystem interface
//go:build acd
// +build acd
package amazonclouddrive_test
import (
"testing"
"github.com/rclone/rclone/backend/amazonclouddrive"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.NilObject = fs.Object((*amazonclouddrive.Object)(nil))
fstests.RemoteName = "TestAmazonCloudDrive:"
fstests.Run(t)
}


@@ -1,5 +1,4 @@
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
package azureblob
@@ -402,6 +401,24 @@ rclone does if you know the container exists already.
Help: `If set, do not do HEAD before GET when getting objects.`,
Default: false,
Advanced: true,
}, {
Name: "delete_snapshots",
Help: `Set to specify how to deal with snapshots on blob deletion.`,
Examples: []fs.OptionExample{
{
Value: "",
Help: "By default, the delete operation fails if a blob has snapshots",
}, {
Value: string(blob.DeleteSnapshotsOptionTypeInclude),
Help: "Specify 'include' to remove the root blob and all its snapshots",
}, {
Value: string(blob.DeleteSnapshotsOptionTypeOnly),
Help: "Specify 'only' to remove only the snapshots but keep the root blob.",
},
},
Default: "",
Exclusive: true,
Advanced: true,
}},
})
}
@@ -438,6 +455,7 @@ type Options struct {
DirectoryMarkers bool `config:"directory_markers"`
NoCheckContainer bool `config:"no_check_container"`
NoHeadObject bool `config:"no_head_object"`
DeleteSnapshots string `config:"delete_snapshots"`
}
// Fs represents a remote azure server
@@ -1070,7 +1088,7 @@ func (f *Fs) list(ctx context.Context, containerName, directory, prefix string,
isDirectory := isDirectoryMarker(*file.Properties.ContentLength, file.Metadata, remote)
if isDirectory {
// Don't insert the root directory
if remote == directory {
if remote == f.opt.Enc.ToStandardPath(directory) {
continue
}
// process directory markers as directories
@@ -2356,9 +2374,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
blb := o.getBlobSVC()
//only := blob.DeleteSnapshotsOptionTypeOnly
opt := blob.DeleteOptions{
//DeleteSnapshots: &only,
opt := blob.DeleteOptions{}
if o.fs.opt.DeleteSnapshots != "" {
action := blob.DeleteSnapshotsOptionType(o.fs.opt.DeleteSnapshots)
opt.DeleteSnapshots = &action
}
return o.fs.pacer.Call(func() (bool, error) {
_, err := blb.Delete(ctx, &opt)
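For reference, a minimal sketch of the string-to-SDK mapping the new option relies on: the configured value (presumably surfaced as `delete_snapshots` on the remote, i.e. `--azureblob-delete-snapshots` on the command line) is cast straight to the Azure SDK's enum type before the delete call.

```
package main

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob"
)

func main() {
	// cfg plays the role of f.opt.DeleteSnapshots: "", "include" or "only".
	cfg := string(blob.DeleteSnapshotsOptionTypeInclude)

	opt := blob.DeleteOptions{}
	if cfg != "" {
		action := blob.DeleteSnapshotsOptionType(cfg)
		opt.DeleteSnapshots = &action
	}
	if opt.DeleteSnapshots != nil {
		fmt.Println("Delete will be called with DeleteSnapshots =", *opt.DeleteSnapshots)
	} else {
		fmt.Println("Delete keeps the default behaviour (fails if the blob has snapshots)")
	}
}
```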


@@ -1,5 +1,4 @@
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
package azureblob


@@ -1,7 +1,6 @@
// Test AzureBlob filesystem interface
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
package azureblob


@@ -2,6 +2,6 @@
// about "no buildable Go source files "
//go:build plan9 || solaris || js
// +build plan9 solaris js
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
package azureblob


@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
// Package azurefiles provides an interface to Microsoft Azure Files
package azurefiles


@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package azurefiles


@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package azurefiles


@@ -2,6 +2,6 @@
// about "no buildable Go source files "
//go:build plan9 || js
// +build plan9 js
// Package azurefiles provides an interface to Microsoft Azure Files
package azurefiles


@@ -60,6 +60,7 @@ const (
defaultChunkSize = 96 * fs.Mebi
defaultUploadCutoff = 200 * fs.Mebi
largeFileCopyCutoff = 4 * fs.Gibi // 5E9 is the max
defaultMaxAge = 24 * time.Hour
)
// Globals
@@ -101,7 +102,7 @@ below will cause b2 to return specific errors:
* "force_cap_exceeded"
These will be set in the "X-Bz-Test-Mode" header which is documented
in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).`,
in the [b2 integrations checklist](https://www.backblaze.com/docs/cloud-storage-integration-checklist).`,
Default: "",
Hide: fs.OptionHideConfigurator,
Advanced: true,
@@ -193,9 +194,12 @@ Example:
Advanced: true,
}, {
Name: "download_auth_duration",
Help: `Time before the authorization token will expire in s or suffix ms|s|m|h|d.
Help: `Time before the public link authorization token will expire in s or suffix ms|s|m|h|d.
This is used in combination with "rclone link" for making files
accessible to the public and sets the duration before the download
authorization token will expire.
The duration before the download authorization token will expire.
The minimum value is 1 second. The maximum value is one week.`,
Default: fs.Duration(7 * 24 * time.Hour),
Advanced: true,
@@ -240,7 +244,7 @@ See: [rclone backend lifecycle](#lifecycle) for setting lifecycles after bucket
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// See: https://www.backblaze.com/b2/docs/files.html
// See: https://www.backblaze.com/docs/cloud-storage-files
// Encode invalid UTF-8 bytes as json doesn't handle them properly.
// FIXME: allow /, but not leading, trailing or double
Default: (encoder.Display |
@@ -359,7 +363,7 @@ var retryErrorCodes = []int{
504, // Gateway Time-out
}
// shouldRetryNoAuth returns a boolean as to whether this resp and err
// shouldRetryNoReauth returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetryNoReauth(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
@@ -1245,7 +1249,7 @@ func (f *Fs) deleteByID(ctx context.Context, ID, Name string) error {
// if oldOnly is true then it deletes only non current files.
//
// Implemented here so we can make sure we delete old versions.
func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool, deleteHidden bool, deleteUnfinished bool, maxAge time.Duration) error {
bucket, directory := f.split(dir)
if bucket == "" {
return errors.New("can't purge from root")
@@ -1263,7 +1267,7 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
}
}
var isUnfinishedUploadStale = func(timestamp api.Timestamp) bool {
return time.Since(time.Time(timestamp)).Hours() > 24
return time.Since(time.Time(timestamp)) > maxAge
}
// Delete Config.Transfers in parallel
@@ -1286,6 +1290,21 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
}
}()
}
if oldOnly {
if deleteHidden && deleteUnfinished {
fs.Infof(f, "cleaning bucket %q of all hidden files, and pending multipart uploads older than %v", bucket, maxAge)
} else if deleteHidden {
fs.Infof(f, "cleaning bucket %q of all hidden files", bucket)
} else if deleteUnfinished {
fs.Infof(f, "cleaning bucket %q of pending multipart uploads older than %v", bucket, maxAge)
} else {
fs.Errorf(f, "cleaning bucket %q of nothing. This should never happen!", bucket)
return nil
}
} else {
fs.Infof(f, "cleaning bucket %q of all files", bucket)
}
last := ""
checkErr(f.list(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", true, 0, true, false, func(remote string, object *api.File, isDirectory bool) error {
if !isDirectory {
@@ -1296,14 +1315,14 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
tr := accounting.Stats(ctx).NewCheckingTransfer(oi, "checking")
if oldOnly && last != remote {
// Check current version of the file
if object.Action == "hide" {
if deleteHidden && object.Action == "hide" {
fs.Debugf(remote, "Deleting current version (id %q) as it is a hide marker", object.ID)
toBeDeleted <- object
} else if object.Action == "start" && isUnfinishedUploadStale(object.UploadTimestamp) {
} else if deleteUnfinished && object.Action == "start" && isUnfinishedUploadStale(object.UploadTimestamp) {
fs.Debugf(remote, "Deleting current version (id %q) as it is a start marker (upload started at %s)", object.ID, time.Time(object.UploadTimestamp).Local())
toBeDeleted <- object
} else {
fs.Debugf(remote, "Not deleting current version (id %q) %q", object.ID, object.Action)
fs.Debugf(remote, "Not deleting current version (id %q) %q dated %v (%v ago)", object.ID, object.Action, time.Time(object.UploadTimestamp).Local(), time.Since(time.Time(object.UploadTimestamp)))
}
} else {
fs.Debugf(remote, "Deleting (id %q)", object.ID)
@@ -1325,12 +1344,17 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
// Purge deletes all the files and directories including the old versions.
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purge(ctx, dir, false)
return f.purge(ctx, dir, false, false, false, defaultMaxAge)
}
// CleanUp deletes all the hidden files.
// CleanUp deletes all hidden files and pending multipart uploads older than 24 hours.
func (f *Fs) CleanUp(ctx context.Context) error {
return f.purge(ctx, "", true)
return f.purge(ctx, "", true, true, true, defaultMaxAge)
}
// cleanUp deletes all hidden files and/or pending multipart uploads older than the specified age.
func (f *Fs) cleanUp(ctx context.Context, deleteHidden bool, deleteUnfinished bool, maxAge time.Duration) (err error) {
return f.purge(ctx, "", true, deleteHidden, deleteUnfinished, maxAge)
}
// copy does a server-side copy from dstObj <- srcObj
@@ -1542,7 +1566,7 @@ func (o *Object) Size() int64 {
//
// Make sure it is lower case.
//
// Remove unverified prefix - see https://www.backblaze.com/b2/docs/uploading.html
// Remove unverified prefix - see https://www.backblaze.com/docs/cloud-storage-upload-files-with-the-native-api
// Some tools (e.g. Cyberduck) use this
func cleanSHA1(sha1 string) string {
const unverified = "unverified:"
@@ -1760,14 +1784,14 @@ func (file *openFile) Close() (err error) {
// Check to see we read the correct number of bytes
if file.o.Size() != file.bytes {
return fmt.Errorf("object corrupted on transfer - length mismatch (want %d got %d)", file.o.Size(), file.bytes)
return fmt.Errorf("corrupted on transfer: lengths differ want %d vs got %d", file.o.Size(), file.bytes)
}
// Check the SHA1
receivedSHA1 := file.o.sha1
calculatedSHA1 := fmt.Sprintf("%x", file.hash.Sum(nil))
if receivedSHA1 != "" && receivedSHA1 != calculatedSHA1 {
return fmt.Errorf("object corrupted on transfer - SHA1 mismatch (want %q got %q)", receivedSHA1, calculatedSHA1)
return fmt.Errorf("corrupted on transfer: SHA1 hashes differ want %q vs got %q", receivedSHA1, calculatedSHA1)
}
return nil
@@ -2240,8 +2264,56 @@ func (f *Fs) lifecycleCommand(ctx context.Context, name string, arg []string, op
return bucket.LifecycleRules, nil
}
var cleanupHelp = fs.CommandHelp{
Name: "cleanup",
Short: "Remove unfinished large file uploads.",
Long: `This command removes unfinished large file uploads of age greater than
max-age, which defaults to 24 hours.
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
rclone backend cleanup b2:bucket/path/to/object
rclone backend cleanup -o max-age=7w b2:bucket/path/to/object
Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
`,
Opts: map[string]string{
"max-age": "Max age of upload to delete",
},
}
func (f *Fs) cleanupCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
maxAge := defaultMaxAge
if opt["max-age"] != "" {
maxAge, err = fs.ParseDuration(opt["max-age"])
if err != nil {
return nil, fmt.Errorf("bad max-age: %w", err)
}
}
return nil, f.cleanUp(ctx, false, true, maxAge)
}
var cleanupHiddenHelp = fs.CommandHelp{
Name: "cleanup-hidden",
Short: "Remove old versions of files.",
Long: `This command removes any old hidden versions of files.
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
rclone backend cleanup-hidden b2:bucket/path/to/dir
`,
}
func (f *Fs) cleanupHiddenCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
return nil, f.cleanUp(ctx, true, false, 0)
}
var commandHelp = []fs.CommandHelp{
lifecycleHelp,
cleanupHelp,
cleanupHiddenHelp,
}
// Command the backend to run a named command
@@ -2257,6 +2329,10 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
switch name {
case "lifecycle":
return f.lifecycleCommand(ctx, name, arg, opt)
case "cleanup":
return f.cleanupCommand(ctx, name, arg, opt)
case "cleanup-hidden":
return f.cleanupHiddenCommand(ctx, name, arg, opt)
default:
return nil, fs.ErrorCommandNotFound
}
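To show how the new backend commands are reached programmatically (rather than via `rclone backend cleanup -o max-age=7d b2:bucket`), here is a minimal sketch. It assumes a remote named `b2:bucket` exists in your rclone config; the local `commander` interface simply mirrors the `Command` method shown above.

```
package main

import (
	"context"
	"log"

	_ "github.com/rclone/rclone/backend/b2"
	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configfile"
)

// commander mirrors the Command method implemented by the b2 Fs above.
type commander interface {
	Command(ctx context.Context, name string, arg []string, opt map[string]string) (interface{}, error)
}

func main() {
	ctx := context.Background()
	configfile.Install() // load the default rclone config so named remotes resolve

	f, err := fs.NewFs(ctx, "b2:bucket") // placeholder remote name
	if err != nil {
		log.Fatal(err)
	}
	c, ok := f.(commander)
	if !ok {
		log.Fatal("backend does not implement Command")
	}
	// Equivalent of: rclone backend cleanup -o max-age=7d b2:bucket
	if _, err := c.Command(ctx, "cleanup", nil, map[string]string{"max-age": "7d"}); err != nil {
		log.Fatal(err)
	}
}
```

The same pattern reaches the `cleanup-hidden` and `lifecycle` commands dispatched above.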


@@ -1,15 +1,29 @@
package b2
import (
"context"
"crypto/sha1"
"fmt"
"path"
"strings"
"testing"
"time"
"github.com/rclone/rclone/backend/b2/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/version"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Test b2 string encoding
// https://www.backblaze.com/b2/docs/string_encoding.html
// https://www.backblaze.com/docs/cloud-storage-native-api-string-encoding
var encodeTest = []struct {
fullyEncoded string
@@ -170,9 +184,234 @@ func TestParseTimeString(t *testing.T) {
}
// This is adapted from the s3 equivalent.
func (f *Fs) InternalTestMetadata(t *testing.T) {
ctx := context.Background()
original := random.String(1000)
contents := fstest.Gz(t, original)
mimeType := "text/html"
item := fstest.NewItem("test-metadata", contents, fstest.Time("2001-05-06T04:05:06.499Z"))
btime := time.Now()
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, contents, true, mimeType, nil)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()
o := obj.(*Object)
gotMetadata, err := o.getMetaData(ctx)
require.NoError(t, err)
// We currently have a limited amount of metadata to test with B2
assert.Equal(t, mimeType, gotMetadata.ContentType, "Content-Type")
// Modification time from the x-bz-info-src_last_modified_millis header
var mtime api.Timestamp
err = mtime.UnmarshalJSON([]byte(gotMetadata.Info[timeKey]))
if err != nil {
fs.Debugf(o, "Bad "+timeHeader+" header: %v", err)
}
assert.Equal(t, item.ModTime, time.Time(mtime), "Modification time")
// Upload time
gotBtime := time.Time(gotMetadata.UploadTimestamp)
dt := gotBtime.Sub(btime)
assert.True(t, dt < time.Minute && dt > -time.Minute, fmt.Sprintf("btime more than 1 minute out want %v got %v delta %v", btime, gotBtime, dt))
t.Run("GzipEncoding", func(t *testing.T) {
// Test that the gzipped file we uploaded can be
// downloaded
checkDownload := func(wantContents string, wantSize int64, wantHash string) {
gotContents := fstests.ReadObject(ctx, t, o, -1)
assert.Equal(t, wantContents, gotContents)
assert.Equal(t, wantSize, o.Size())
gotHash, err := o.Hash(ctx, hash.SHA1)
require.NoError(t, err)
assert.Equal(t, wantHash, gotHash)
}
t.Run("NoDecompress", func(t *testing.T) {
checkDownload(contents, int64(len(contents)), sha1Sum(t, contents))
})
})
}
func sha1Sum(t *testing.T, s string) string {
hash := sha1.Sum([]byte(s))
return fmt.Sprintf("%x", hash)
}
// This is adapted from the s3 equivalent.
func (f *Fs) InternalTestVersions(t *testing.T) {
ctx := context.Background()
// Small pause to make the LastModified different since AWS
// only seems to track them to 1 second granularity
time.Sleep(2 * time.Second)
// Create an object
const dirName = "versions"
const fileName = dirName + "/" + "test-versions.txt"
contents := random.String(100)
item := fstest.NewItem(fileName, contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
obj := fstests.PutTestContents(ctx, t, f, &item, contents, true)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()
objMetadata, err := obj.(*Object).getMetaData(ctx)
require.NoError(t, err)
// Small pause
time.Sleep(2 * time.Second)
// Remove it
assert.NoError(t, obj.Remove(ctx))
// Small pause to make the LastModified different since AWS only seems to track them to 1 second granularity
time.Sleep(2 * time.Second)
// And create it with different size and contents
newContents := random.String(101)
newItem := fstest.NewItem(fileName, newContents, fstest.Time("2002-05-06T04:05:06.499999999Z"))
newObj := fstests.PutTestContents(ctx, t, f, &newItem, newContents, true)
newObjMetadata, err := newObj.(*Object).getMetaData(ctx)
require.NoError(t, err)
t.Run("Versions", func(t *testing.T) {
// Set --b2-versions for this test
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
// Read the contents
entries, err := f.List(ctx, dirName)
require.NoError(t, err)
tests := 0
var fileNameVersion string
for _, entry := range entries {
t.Log(entry)
remote := entry.Remote()
if remote == fileName {
t.Run("ReadCurrent", func(t *testing.T) {
assert.Equal(t, newContents, fstests.ReadObject(ctx, t, entry.(fs.Object), -1))
})
tests++
} else if versionTime, p := version.Remove(remote); !versionTime.IsZero() && p == fileName {
t.Run("ReadVersion", func(t *testing.T) {
assert.Equal(t, contents, fstests.ReadObject(ctx, t, entry.(fs.Object), -1))
})
assert.WithinDuration(t, time.Time(objMetadata.UploadTimestamp), versionTime, time.Second, "object time must be within 1 second of version time")
fileNameVersion = remote
tests++
}
}
assert.Equal(t, 2, tests, "object missing from listing")
// Check we can read the object with a version suffix
t.Run("NewObject", func(t *testing.T) {
o, err := f.NewObject(ctx, fileNameVersion)
require.NoError(t, err)
require.NotNil(t, o)
assert.Equal(t, int64(100), o.Size(), o.Remote())
})
// Check we can make a NewFs from that object with a version suffix
t.Run("NewFs", func(t *testing.T) {
newPath := bucket.Join(fs.ConfigStringFull(f), fileNameVersion)
// Make sure --b2-versions is set in the config of the new remote
fs.Debugf(nil, "oldPath = %q", newPath)
lastColon := strings.LastIndex(newPath, ":")
require.True(t, lastColon >= 0)
newPath = newPath[:lastColon] + ",versions" + newPath[lastColon:]
fs.Debugf(nil, "newPath = %q", newPath)
fNew, err := cache.Get(ctx, newPath)
// This should return pointing to a file
require.Equal(t, fs.ErrorIsFile, err)
require.NotNil(t, fNew)
// With the directory above
assert.Equal(t, dirName, path.Base(fs.ConfigStringFull(fNew)))
})
})
t.Run("VersionAt", func(t *testing.T) {
// We set --b2-version-at for this test so make sure we reset it at the end
defer func() {
f.opt.VersionAt = fs.Time{}
}()
var (
firstObjectTime = time.Time(objMetadata.UploadTimestamp)
secondObjectTime = time.Time(newObjMetadata.UploadTimestamp)
)
for _, test := range []struct {
what string
at time.Time
want []fstest.Item
wantErr error
wantSize int64
}{
{
what: "Before",
at: firstObjectTime.Add(-time.Second),
want: fstests.InternalTestFiles,
wantErr: fs.ErrorObjectNotFound,
},
{
what: "AfterOne",
at: firstObjectTime.Add(time.Second),
want: append([]fstest.Item{item}, fstests.InternalTestFiles...),
wantSize: 100,
},
{
what: "AfterDelete",
at: secondObjectTime.Add(-time.Second),
want: fstests.InternalTestFiles,
wantErr: fs.ErrorObjectNotFound,
},
{
what: "AfterTwo",
at: secondObjectTime.Add(time.Second),
want: append([]fstest.Item{newItem}, fstests.InternalTestFiles...),
wantSize: 101,
},
} {
t.Run(test.what, func(t *testing.T) {
f.opt.VersionAt = fs.Time(test.at)
t.Run("List", func(t *testing.T) {
fstest.CheckListing(t, f, test.want)
})
// b2 NewObject doesn't work with VersionAt
//t.Run("NewObject", func(t *testing.T) {
// gotObj, gotErr := f.NewObject(ctx, fileName)
// assert.Equal(t, test.wantErr, gotErr)
// if gotErr == nil {
// assert.Equal(t, test.wantSize, gotObj.Size())
// }
//})
})
}
})
t.Run("Cleanup", func(t *testing.T) {
require.NoError(t, f.cleanUp(ctx, true, false, 0))
items := append([]fstest.Item{newItem}, fstests.InternalTestFiles...)
fstest.CheckListing(t, f, items)
// Set --b2-versions for this test
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
fstest.CheckListing(t, f, items)
})
// Purge gets tested later
}
// -run TestIntegration/FsMkdir/FsPutFiles/Internal
func (f *Fs) InternalTest(t *testing.T) {
// Internal tests go here
t.Run("Metadata", f.InternalTestMetadata)
t.Run("Versions", f.InternalTestVersions)
}
var _ fstests.InternalTester = (*Fs)(nil)


@@ -1,6 +1,6 @@
// Upload large files for b2
//
// Docs - https://www.backblaze.com/b2/docs/large_files.html
// Docs - https://www.backblaze.com/docs/cloud-storage-large-files
package b2


@@ -1207,6 +1207,12 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
return err
}
// Shutdown shutdown the fs
func (f *Fs) Shutdown(ctx context.Context) error {
f.tokenRenewer.Shutdown()
return nil
}
// ChangeNotify calls the passed function with a path that has had changes.
// If the implementation uses polling, it should adhere to the given interval.
//
@@ -1719,6 +1725,7 @@ var (
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
)


@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
// Package cache implements a virtual provider to cache existing remotes.
package cache


@@ -1,5 +1,4 @@
//go:build !plan9 && !js && !race
// +build !plan9,!js,!race
package cache_test
@@ -30,6 +29,7 @@ import (
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/testy"
"github.com/rclone/rclone/lib/random"
@@ -935,8 +935,7 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
}
if purge {
_ = f.Features().Purge(context.Background(), "")
require.NoError(t, err)
_ = operations.Purge(context.Background(), f, "")
}
err = f.Mkdir(context.Background(), "")
require.NoError(t, err)
@@ -949,7 +948,7 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
}
func (r *run) cleanupFs(t *testing.T, f fs.Fs) {
err := f.Features().Purge(context.Background(), "")
err := operations.Purge(context.Background(), f, "")
require.NoError(t, err)
cfs, err := r.getCacheFs(f)
require.NoError(t, err)


@@ -1,7 +1,6 @@
// Test Cache filesystem interface
//go:build !plan9 && !js && !race
// +build !plan9,!js,!race
package cache_test
@@ -16,10 +15,11 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "OpenWriterAt", "OpenChunkWriter"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier", "Metadata"},
SkipInvalidUTF8: true, // invalid UTF-8 confuses the cache
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "OpenWriterAt", "OpenChunkWriter", "DirSetModTime", "MkdirMetadata"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier", "Metadata", "SetMetadata"},
UnimplementableDirectoryMethods: []string{"Metadata", "SetMetadata", "SetModTime"},
SkipInvalidUTF8: true, // invalid UTF-8 confuses the cache
})
}


@@ -2,6 +2,6 @@
// about "no buildable Go source files "
//go:build plan9 || js
// +build plan9 js
// Package cache implements a virtual provider to cache existing remotes.
package cache


@@ -1,5 +1,4 @@
//go:build !plan9 && !js && !race
// +build !plan9,!js,!race
package cache_test


@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache


@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache
@@ -119,7 +118,7 @@ func (r *Handle) startReadWorkers() {
r.scaleWorkers(totalWorkers)
}
// scaleOutWorkers will increase the worker pool count by the provided amount
// scaleWorkers will increase the worker pool count by the provided amount
func (r *Handle) scaleWorkers(desired int) {
current := r.workers
if current == desired {


@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache


@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache


@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache


@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache


@@ -1,3 +1,6 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache
import bolt "go.etcd.io/bbolt"


@@ -29,6 +29,7 @@ import (
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/lib/encoder"
)
// Chunker's composite files have one or more chunks
@@ -101,8 +102,10 @@ var (
//
// And still chunker's primary function is to chunk large files
// rather than serve as a generic metadata container.
const maxMetadataSize = 1023
const maxMetadataSizeWritten = 255
const (
maxMetadataSize = 1023
maxMetadataSizeWritten = 255
)
// Current/highest supported metadata format.
const metadataVersion = 2
@@ -305,7 +308,6 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
root: rpath,
opt: *opt,
}
cache.PinUntilFinalized(f.base, f)
f.dirSort = true // processEntries requires that meta Objects prerun data chunks atm.
if err := f.configure(opt.NameFormat, opt.MetaFormat, opt.HashType, opt.Transactions); err != nil {
@@ -317,13 +319,15 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
// i.e. `rpath` does not exist in the wrapped remote, but chunker
// detects a composite file because it finds the first chunk!
// (yet can't satisfy fstest.CheckListing, will ignore)
if err == nil && !f.useMeta && strings.Contains(rpath, "/") {
if err == nil && !f.useMeta {
firstChunkPath := f.makeChunkName(remotePath, 0, "", "")
_, testErr := cache.Get(ctx, baseName+firstChunkPath)
newBase, testErr := cache.Get(ctx, baseName+firstChunkPath)
if testErr == fs.ErrorIsFile {
f.base = newBase
err = testErr
}
}
cache.PinUntilFinalized(f.base, f)
// Correct root if definitely pointing to a file
if err == fs.ErrorIsFile {
@@ -338,13 +342,18 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
// Note 2: features.Fill() points features.PutStream to our PutStream,
// but features.Mask() will nullify it if wrappedFs does not have it.
f.features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: true,
ReadMimeType: false, // Object.MimeType not supported
WriteMimeType: true,
BucketBased: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: true,
CaseInsensitive: true,
DuplicateFiles: true,
ReadMimeType: false, // Object.MimeType not supported
WriteMimeType: true,
BucketBased: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
}).Fill(ctx, f).Mask(ctx, baseFs).WrapsFs(f, baseFs)
f.features.Disable("ListR") // Recursive listing may cause chunker skip files
@@ -821,8 +830,7 @@ func (f *Fs) processEntries(ctx context.Context, origEntries fs.DirEntries, dirP
}
case fs.Directory:
isSubdir[entry.Remote()] = true
wrapDir := fs.NewDirCopy(ctx, entry)
wrapDir.SetRemote(entry.Remote())
wrapDir := fs.NewDirWrapper(entry.Remote(), entry)
tempEntries = append(tempEntries, wrapDir)
default:
if f.opt.FailHard {
@@ -955,6 +963,11 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
}
if caseInsensitive {
sameMain = strings.EqualFold(mainRemote, remote)
if sameMain && f.base.Features().IsLocal {
// on local, make sure the EqualFold still holds true when accounting for encoding.
// sometimes paths with special characters will only normalize the same way in Standard Encoding.
sameMain = strings.EqualFold(encoder.OS.FromStandardPath(mainRemote), encoder.OS.FromStandardPath(remote))
}
} else {
sameMain = mainRemote == remote
}
@@ -968,7 +981,7 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
}
continue
}
//fs.Debugf(f, "%q belongs to %q as chunk %d", entryRemote, mainRemote, chunkNo)
// fs.Debugf(f, "%q belongs to %q as chunk %d", entryRemote, mainRemote, chunkNo)
if err := o.addChunk(entry, chunkNo); err != nil {
return nil, err
}
@@ -1130,8 +1143,8 @@ func (o *Object) readXactID(ctx context.Context) (xactID string, err error) {
// put implements Put, PutStream, PutUnchecked, Update
func (f *Fs) put(
ctx context.Context, in io.Reader, src fs.ObjectInfo, remote string, options []fs.OpenOption,
basePut putFn, action string, target fs.Object) (obj fs.Object, err error) {
basePut putFn, action string, target fs.Object,
) (obj fs.Object, err error) {
// Perform consistency checks
if err := f.forbidChunk(src, remote); err != nil {
return nil, fmt.Errorf("%s refused: %w", action, err)
@@ -1571,6 +1584,14 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.base.Mkdir(ctx, dir)
}
// MkdirMetadata makes the root directory of the Fs object
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
if do := f.base.Features().MkdirMetadata; do != nil {
return do(ctx, dir, metadata)
}
return nil, fs.ErrorNotImplemented
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
@@ -1888,6 +1909,14 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return do(ctx, srcFs.base, srcRemote, dstRemote)
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
if do := f.base.Features().DirSetModTime; do != nil {
return do(ctx, dir, modTime)
}
return fs.ErrorNotImplemented
}
// CleanUp the trash in the Fs
//
// Implement this if you have a way of emptying the trash or
@@ -1936,7 +1965,7 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
return
}
wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
//fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
// fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
if entryType == fs.EntryObject {
mainPath, _, _, xactID := f.parseChunkName(path)
metaXactID := ""
@@ -2548,6 +2577,8 @@ var (
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)


@@ -36,6 +36,7 @@ func TestIntegration(t *testing.T) {
"GetTier",
"SetTier",
"Metadata",
"SetMetadata",
},
UnimplementableFsMethods: []string{
"PublicLink",


@@ -222,18 +222,23 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
}
// check features
var features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
BucketBased: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
BucketBased: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
PartialUploads: true,
}).Fill(ctx, f)
canMove := true
for _, u := range f.upstreams {
@@ -440,6 +445,32 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return u.f.Mkdir(ctx, uRemote)
}
// MkdirMetadata makes the root directory of the Fs object
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
u, uRemote, err := f.findUpstream(dir)
if err != nil {
return nil, err
}
do := u.f.Features().MkdirMetadata
if do == nil {
return nil, fs.ErrorNotImplemented
}
newDir, err := do(ctx, uRemote, metadata)
if err != nil {
return nil, err
}
entries := fs.DirEntries{newDir}
entries, err = u.wrapEntries(ctx, entries)
if err != nil {
return nil, err
}
newDir, ok := entries[0].(fs.Directory)
if !ok {
return nil, fmt.Errorf("internal error: expecting %T to be fs.Directory", entries[0])
}
return newDir, nil
}
// purge the upstream or fallback to a slow way
func (u *upstream) purge(ctx context.Context, dir string) (err error) {
if do := u.f.Features().Purge; do != nil {
@@ -755,12 +786,11 @@ func (u *upstream) wrapEntries(ctx context.Context, entries fs.DirEntries) (fs.D
case fs.Object:
entries[i] = u.newObject(x)
case fs.Directory:
newDir := fs.NewDirCopy(ctx, x)
newPath, err := u.pathAdjustment.do(newDir.Remote())
newPath, err := u.pathAdjustment.do(x.Remote())
if err != nil {
return nil, err
}
newDir.SetRemote(newPath)
newDir := fs.NewDirWrapper(newPath, x)
entries[i] = newDir
default:
return nil, fmt.Errorf("unknown entry type %T", entry)
@@ -783,7 +813,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if f.root == "" && dir == "" {
entries = make(fs.DirEntries, 0, len(f.upstreams))
for combineDir := range f.upstreams {
d := fs.NewDir(combineDir, f.when)
d := fs.NewLimitedDirWrapper(combineDir, fs.NewDir(combineDir, f.when))
entries = append(entries, d)
}
return entries, nil
@@ -965,6 +995,22 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
return do(ctx, uDirs)
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
u, uDir, err := f.findUpstream(dir)
if err != nil {
return err
}
if uDir == "" {
fs.Debugf(dir, "Can't set modtime on upstream root. skipping.")
return nil
}
if do := u.f.Features().DirSetModTime; do != nil {
return do(ctx, uDir, modTime)
}
return fs.ErrorNotImplemented
}
// CleanUp the trash in the Fs
//
// Implement this if you have a way of emptying the trash or
@@ -1073,6 +1119,17 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
// SetTier performs changing storage tier of the Object if
// multiple storage classes supported
func (o *Object) SetTier(tier string) error {
@@ -1099,6 +1156,8 @@ var (
_ fs.PublicLinker = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.OpenWriterAter = (*Fs)(nil)
_ fs.FullObject = (*Object)(nil)


@@ -183,18 +183,23 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
f.features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: false,
WriteMimeType: false,
GetTier: true,
SetTier: true,
BucketBased: true,
CanHaveEmptyDirectories: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: false,
WriteMimeType: false,
GetTier: true,
SetTier: true,
BucketBased: true,
CanHaveEmptyDirectories: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
PartialUploads: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
// We support reading MIME types no matter the wrapped fs
f.features.ReadMimeType = true
@@ -450,7 +455,7 @@ func (f *Fs) verifyObjectHash(ctx context.Context, o fs.Object, hasher *hash.Mul
if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
}
return fmt.Errorf("corrupted on transfer: %v compressed hashes differ %q vs %q", ht, srcHash, dstHash)
return fmt.Errorf("corrupted on transfer: %v compressed hashes differ src(%s) %q vs dst(%s) %q", ht, f.Fs, srcHash, o.Fs(), dstHash)
}
return nil
}
@@ -784,6 +789,14 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.Fs.Mkdir(ctx, dir)
}
// MkdirMetadata makes the root directory of the Fs object
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
if do := f.Fs.Features().MkdirMetadata; do != nil {
return do(ctx, dir, metadata)
}
return nil, fs.ErrorNotImplemented
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
@@ -927,6 +940,14 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return do(ctx, srcFs.Fs, srcRemote, dstRemote)
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
if do := f.Fs.Features().DirSetModTime; do != nil {
return do(ctx, dir, modTime)
}
return fs.ErrorNotImplemented
}
// CleanUp the trash in the Fs
//
// Implement this if you have a way of emptying the trash or
@@ -1265,6 +1286,17 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) {
@@ -1497,6 +1529,8 @@ var (
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.UnWrapper = (*Fs)(nil)


@@ -130,6 +130,16 @@ trying to recover an encrypted file with errors and it is desired to
recover as much of the file as possible.`,
Default: false,
Advanced: true,
}, {
Name: "strict_names",
Help: `If set, this will raise an error when crypt comes across a filename that can't be decrypted.
(By default, rclone will just log a NOTICE and continue as normal.)
This can happen if encrypted and unencrypted files are stored in the same
directory (which is not recommended.) It may also indicate a more serious
problem that should be investigated.`,
Default: false,
Advanced: true,
}, {
Name: "filename_encoding",
Help: `How to encode the encrypted filename to text string.
@@ -263,19 +273,24 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
f.features = (&fs.Features{
CaseInsensitive: !cipher.dirNameEncrypt || cipher.NameEncryptionMode() == NameEncryptionOff,
DuplicateFiles: true,
ReadMimeType: false, // MimeTypes not supported with crypt
WriteMimeType: false,
BucketBased: true,
CanHaveEmptyDirectories: true,
SetTier: true,
GetTier: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
CaseInsensitive: !cipher.dirNameEncrypt || cipher.NameEncryptionMode() == NameEncryptionOff,
DuplicateFiles: true,
ReadMimeType: false, // MimeTypes not supported with crypt
WriteMimeType: false,
BucketBased: true,
CanHaveEmptyDirectories: true,
SetTier: true,
GetTier: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
PartialUploads: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
return f, err
@@ -294,6 +309,7 @@ type Options struct {
PassBadBlocks bool `config:"pass_bad_blocks"`
FilenameEncoding string `config:"filename_encoding"`
Suffix string `config:"suffix"`
StrictNames bool `config:"strict_names"`
}
// Fs represents a wrapped fs.Fs
@@ -328,45 +344,64 @@ func (f *Fs) String() string {
}
// Encrypt an object file name to entries.
func (f *Fs) add(entries *fs.DirEntries, obj fs.Object) {
func (f *Fs) add(entries *fs.DirEntries, obj fs.Object) error {
remote := obj.Remote()
decryptedRemote, err := f.cipher.DecryptFileName(remote)
if err != nil {
fs.Debugf(remote, "Skipping undecryptable file name: %v", err)
return
if f.opt.StrictNames {
return fmt.Errorf("%s: undecryptable file name detected: %v", remote, err)
}
fs.Logf(remote, "Skipping undecryptable file name: %v", err)
return nil
}
if f.opt.ShowMapping {
fs.Logf(decryptedRemote, "Encrypts to %q", remote)
}
*entries = append(*entries, f.newObject(obj))
return nil
}
// Encrypt a directory file name to entries.
func (f *Fs) addDir(ctx context.Context, entries *fs.DirEntries, dir fs.Directory) {
func (f *Fs) addDir(ctx context.Context, entries *fs.DirEntries, dir fs.Directory) error {
remote := dir.Remote()
decryptedRemote, err := f.cipher.DecryptDirName(remote)
if err != nil {
fs.Debugf(remote, "Skipping undecryptable dir name: %v", err)
return
if f.opt.StrictNames {
return fmt.Errorf("%s: undecryptable dir name detected: %v", remote, err)
}
fs.Logf(remote, "Skipping undecryptable dir name: %v", err)
return nil
}
if f.opt.ShowMapping {
fs.Logf(decryptedRemote, "Encrypts to %q", remote)
}
*entries = append(*entries, f.newDir(ctx, dir))
return nil
}
// Encrypt some directory entries. This alters entries returning it as newEntries.
func (f *Fs) encryptEntries(ctx context.Context, entries fs.DirEntries) (newEntries fs.DirEntries, err error) {
newEntries = entries[:0] // in place filter
errors := 0
var firsterr error
for _, entry := range entries {
switch x := entry.(type) {
case fs.Object:
f.add(&newEntries, x)
err = f.add(&newEntries, x)
case fs.Directory:
f.addDir(ctx, &newEntries, x)
err = f.addDir(ctx, &newEntries, x)
default:
return nil, fmt.Errorf("unknown object type %T", entry)
}
if err != nil {
errors++
if firsterr == nil {
firsterr = err
}
}
}
if firsterr != nil {
return nil, fmt.Errorf("there were %v undecryptable name errors. first error: %v", errors, firsterr)
}
return newEntries, nil
}
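A minimal standalone sketch (not rclone code) of the skip-or-fail pattern used above: by default an undecryptable name is logged and dropped, while in strict mode the listing fails with the error count and the first error. The flag spelling --crypt-strict-names is an assumption based on the strict_names option name.

package main

import (
	"errors"
	"fmt"
	"log"
	"strings"
)

var errUndecryptable = errors.New("undecryptable file name")

// decryptName stands in for cipher.DecryptFileName: names prefixed with
// "bad-" are treated as undecryptable for the purposes of this sketch.
func decryptName(name string) (string, error) {
	if strings.HasPrefix(name, "bad-") {
		return "", errUndecryptable
	}
	return "decrypted-" + name, nil
}

// filterNames mimics encryptEntries: in strict mode it fails with a count and
// the first error, otherwise it logs the bad name and keeps going.
func filterNames(names []string, strict bool) ([]string, error) {
	out := names[:0] // in place filter, as in the backend
	nErrors := 0
	var firstErr error
	for _, name := range names {
		decrypted, err := decryptName(name)
		if err != nil {
			if strict {
				nErrors++
				if firstErr == nil {
					firstErr = fmt.Errorf("%s: %w", name, err)
				}
				continue
			}
			log.Printf("Skipping %q: %v", name, err)
			continue
		}
		out = append(out, decrypted)
	}
	if firstErr != nil {
		return nil, fmt.Errorf("there were %d undecryptable name errors. first error: %w", nErrors, firstErr)
	}
	return out, nil
}

func main() {
	fmt.Println(filterNames([]string{"good1", "bad-one", "good2"}, false)) // lenient: skips bad-one
	fmt.Println(filterNames([]string{"good1", "bad-one", "good2"}, true))  // strict: returns an error
}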
@@ -485,7 +520,7 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options [
if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
}
return nil, fmt.Errorf("corrupted on transfer: %v encrypted hash differ src %q vs dst %q", ht, srcHash, dstHash)
return nil, fmt.Errorf("corrupted on transfer: %v encrypted hashes differ src(%s) %q vs dst(%s) %q", ht, f.Fs, srcHash, o.Fs(), dstHash)
}
fs.Debugf(src, "%v = %s OK", ht, srcHash)
}
@@ -520,6 +555,37 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.Fs.Mkdir(ctx, f.cipher.EncryptDirName(dir))
}
// MkdirMetadata makes the directory passed in as dir, setting its metadata if supplied
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
do := f.Fs.Features().MkdirMetadata
if do == nil {
return nil, fs.ErrorNotImplemented
}
newDir, err := do(ctx, f.cipher.EncryptDirName(dir), metadata)
if err != nil {
return nil, err
}
var entries = make(fs.DirEntries, 0, 1)
err = f.addDir(ctx, &entries, newDir)
if err != nil {
return nil, err
}
newDir, ok := entries[0].(fs.Directory)
if !ok {
return nil, fmt.Errorf("internal error: expecting %T to be fs.Directory", entries[0])
}
return newDir, nil
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
do := f.Fs.Features().DirSetModTime
if do == nil {
return fs.ErrorNotImplemented
}
return do(ctx, f.cipher.EncryptDirName(dir), modTime)
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
@@ -761,7 +827,7 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
}
out := make([]fs.Directory, len(dirs))
for i, dir := range dirs {
out[i] = fs.NewDirCopy(ctx, dir).SetRemote(f.cipher.EncryptDirName(dir.Remote()))
out[i] = fs.NewDirWrapper(f.cipher.EncryptDirName(dir.Remote()), dir)
}
return do(ctx, out)
}
@@ -997,14 +1063,14 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// newDir returns a dir with the Name decrypted
func (f *Fs) newDir(ctx context.Context, dir fs.Directory) fs.Directory {
newDir := fs.NewDirCopy(ctx, dir)
remote := dir.Remote()
decryptedRemote, err := f.cipher.DecryptDirName(remote)
if err != nil {
fs.Debugf(remote, "Undecryptable dir name: %v", err)
} else {
newDir.SetRemote(decryptedRemote)
remote = decryptedRemote
}
newDir := fs.NewDirWrapper(remote, dir)
return newDir
}
@@ -1182,6 +1248,17 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
// MimeType returns the content type of the Object if
// known, or "" if not
//
@@ -1207,6 +1284,8 @@ var (
_ fs.Abouter = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)

View File

@@ -151,6 +151,7 @@ func (rwChoices) Choices() []fs.BitsChoicesInfo {
{Bit: uint64(rwOff), Name: "off"},
{Bit: uint64(rwRead), Name: "read"},
{Bit: uint64(rwWrite), Name: "write"},
{Bit: uint64(rwFailOK), Name: "failok"},
}
}
@@ -160,6 +161,7 @@ type rwChoice = fs.Bits[rwChoices]
const (
rwRead rwChoice = 1 << iota
rwWrite
rwFailOK
rwOff rwChoice = 0
)
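For context, rwChoice values are bit flags, so the new failok choice composes with read and write rather than replacing them. Below is a small self-contained sketch of the same pattern using plain iota bits instead of rclone's fs.Bits; the flag spelling --drive-metadata-permissions write,failok in the comment is an assumption based on the option names further down.

package main

import (
	"fmt"
	"strings"
)

type rw uint64

const (
	rwRead rw = 1 << iota
	rwWrite
	rwFailOK
	rwOff rw = 0
)

// IsSet reports whether all bits in flag are set, mirroring fs.Bits.IsSet.
func (b rw) IsSet(flag rw) bool { return b&flag == flag }

func (b rw) String() string {
	if b == rwOff {
		return "off"
	}
	var parts []string
	if b.IsSet(rwRead) {
		parts = append(parts, "read")
	}
	if b.IsSet(rwWrite) {
		parts = append(parts, "write")
	}
	if b.IsSet(rwFailOK) {
		parts = append(parts, "failok")
	}
	return strings.Join(parts, ",")
}

func main() {
	perms := rwWrite | rwFailOK        // e.g. parsed from "write,failok"
	fmt.Println(perms)                 // write,failok
	fmt.Println(perms.IsSet(rwFailOK)) // true: log write errors instead of failing
	fmt.Println(perms.IsSet(rwRead))   // false
}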
@@ -173,6 +175,9 @@ var rwExamples = fs.OptionExamples{{
}, {
Value: rwWrite.String(),
Help: "Write the value only",
}, {
Value: rwFailOK.String(),
Help: "If writing fails log errors only, don't fail the transfer",
}, {
Value: (rwRead | rwWrite).String(),
Help: "Read and Write the value.",
@@ -287,7 +292,10 @@ func init() {
},
MetadataInfo: &fs.MetadataInfo{
System: systemMetadataInfo,
Help: `User metadata is stored in the properties field of the drive object.`,
Help: `User metadata is stored in the properties field of the drive object.
Metadata is supported on files and directories.
`,
},
Options: append(driveOAuthOptions(), []fs.Option{{
Name: "scope",
@@ -870,6 +878,11 @@ type Object struct {
v2Download bool // generate v2 download link ondemand
}
// Directory describes a drive directory
type Directory struct {
baseObject
}
// ------------------------------------------------------------
// Name of the remote (as passed into NewFs)
@@ -1374,15 +1387,20 @@ func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, err
}
f.isTeamDrive = opt.TeamDriveID != ""
f.features = (&fs.Features{
DuplicateFiles: true,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
FilterAware: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
DuplicateFiles: true,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
FilterAware: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: false, // FIXME need to check!
}).Fill(ctx, f)
// Create a new authorized Drive client.
@@ -1729,26 +1747,72 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
return pathIDOut, found, err
}
// CreateDir makes a directory with pathID as parent and name leaf
func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) {
// createDir makes a directory with pathID as parent and name leaf with optional metadata
func (f *Fs) createDir(ctx context.Context, pathID, leaf string, metadata fs.Metadata) (info *drive.File, err error) {
leaf = f.opt.Enc.FromStandardName(leaf)
// fmt.Println("Making", path)
// Define the metadata for the directory we are going to create.
pathID = actualID(pathID)
createInfo := &drive.File{
Name: leaf,
Description: leaf,
MimeType: driveFolderType,
Parents: []string{pathID},
Name: leaf,
MimeType: driveFolderType,
Parents: []string{pathID},
}
var updateMetadata updateMetadataFn
if len(metadata) > 0 {
updateMetadata, err = f.updateMetadata(ctx, createInfo, metadata, true)
if err != nil {
return nil, fmt.Errorf("create dir: failed to update metadata: %w", err)
}
}
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Create(createInfo).
Fields("id").
Fields(f.getFileFields(ctx)).
SupportsAllDrives(true).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
if updateMetadata != nil {
err = updateMetadata(ctx, info)
if err != nil {
return nil, err
}
}
return info, nil
}
// updateDir updates an existing a directory with the metadata passed in
func (f *Fs) updateDir(ctx context.Context, dirID string, metadata fs.Metadata) (info *drive.File, err error) {
if len(metadata) == 0 {
return f.getFile(ctx, dirID, f.getFileFields(ctx))
}
dirID = actualID(dirID)
updateInfo := &drive.File{}
updateMetadata, err := f.updateMetadata(ctx, updateInfo, metadata, true)
if err != nil {
return nil, fmt.Errorf("update dir: failed to update metadata from source object: %w", err)
}
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Update(dirID, updateInfo).
Fields(f.getFileFields(ctx)).
SupportsAllDrives(true).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
err = updateMetadata(ctx, info)
if err != nil {
return nil, err
}
return info, nil
}
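A schematic, standalone sketch of the two-phase metadata pattern used by createDir and updateDir above: set what can go in the create/update request itself, and return a callback for properties (owner, permissions, labels) that can only be applied once the object exists and has an ID. The names in this sketch are illustrative, not the drive backend's API.

package main

import (
	"context"
	"fmt"
)

// file stands in for *drive.File in this sketch.
type file struct {
	ID          string
	Name        string
	Description string
	Owner       string
}

type updateFn func(ctx context.Context, f *file) error

// prepareMetadata fills in what can be sent with the create request and
// returns a callback for anything that needs the created object's ID.
func prepareMetadata(meta map[string]string, createInfo *file) (updateFn, error) {
	var callbacks []updateFn
	for k, v := range meta {
		switch k {
		case "description":
			createInfo.Description = v
		case "owner":
			v := v // capture for the closure
			callbacks = append(callbacks, func(ctx context.Context, f *file) error {
				f.Owner = v // in the real backend this is a separate API call
				return nil
			})
		default:
			return nil, fmt.Errorf("unknown metadata key %q", k)
		}
	}
	return func(ctx context.Context, f *file) error {
		for _, cb := range callbacks {
			if err := cb(ctx, f); err != nil {
				return err
			}
		}
		return nil
	}, nil
}

func main() {
	ctx := context.Background()
	createInfo := &file{Name: "new dir"}
	update, err := prepareMetadata(map[string]string{"description": "docs", "owner": "alice@example.com"}, createInfo)
	if err != nil {
		panic(err)
	}
	created := *createInfo
	created.ID = "id123" // pretend the create API returned this
	if err := update(ctx, &created); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", created)
}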
// CreateDir makes a directory with pathID as parent and name leaf
func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) {
info, err := f.createDir(ctx, pathID, leaf, nil)
if err != nil {
return "", err
}
@@ -1859,7 +1923,7 @@ func (f *Fs) findExportFormatByMimeType(ctx context.Context, itemMimeType string
return "", "", isDocument
}
// findExportFormatByMimeType works out the optimum export settings
// findExportFormat works out the optimum export settings
// for the given drive.File.
//
// Look through the exportExtensions and find the first format that can be
@@ -2161,7 +2225,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
// Send the entry to the caller, queueing any directories as new jobs
cb := func(entry fs.DirEntry) error {
if d, isDir := entry.(*fs.Dir); isDir {
if d, isDir := entry.(fs.Directory); isDir {
job := listREntry{actualID(d.ID()), d.Remote()}
sendJob(job)
}
@@ -2338,11 +2402,11 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, item *drive.File
if item.ResourceKey != "" {
f.dirResourceKeys.Store(item.Id, item.ResourceKey)
}
when, _ := time.Parse(timeFormatIn, item.ModifiedTime)
d := fs.NewDir(remote, when).SetID(item.Id)
if len(item.Parents) > 0 {
d.SetParentID(item.Parents[0])
baseObject, err := f.newBaseObject(ctx, remote, item)
if err != nil {
return nil, err
}
d := &Directory{baseObject: baseObject}
return d, nil
case f.opt.AuthOwnerOnly && !isAuthOwned(item):
// ignore object
@@ -2370,7 +2434,6 @@ func (f *Fs) createFileInfo(ctx context.Context, remote string, modTime time.Tim
// Define the metadata for the file we are going to create.
createInfo := &drive.File{
Name: leaf,
Description: leaf,
Parents: []string{directoryID},
ModifiedTime: modTime.Format(timeFormatOut),
}
@@ -2535,6 +2598,59 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return err
}
// MkdirMetadata makes the directory passed in as dir.
//
// It shouldn't return an error if it already exists.
//
// If the metadata is not nil it is set.
//
// It returns the directory that was created.
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
var info *drive.File
dirID, err := f.dirCache.FindDir(ctx, dir, false)
if err == fs.ErrorDirNotFound {
// Directory does not exist so create it
var leaf, parentID string
leaf, parentID, err = f.dirCache.FindPath(ctx, dir, true)
if err != nil {
return nil, err
}
info, err = f.createDir(ctx, parentID, leaf, metadata)
} else if err == nil {
// Directory exists and needs updating
info, err = f.updateDir(ctx, dirID, metadata)
}
if err != nil {
return nil, err
}
// Convert the info into a directory entry
entry, err := f.itemToDirEntry(ctx, dir, info)
if err != nil {
return nil, err
}
dirEntry, ok := entry.(fs.Directory)
if !ok {
return nil, fmt.Errorf("internal error: expecting %T to be an fs.Directory", entry)
}
return dirEntry, nil
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
dirID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return err
}
o := baseObject{
fs: f,
remote: dir,
id: dirID,
}
return o.SetModTime(ctx, modTime)
}
// delete a file or directory unconditionally by ID
func (f *Fs) delete(ctx context.Context, id string, useTrash bool) error {
return f.pacer.Call(func() (bool, error) {
@@ -2678,6 +2794,12 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
createInfo.Description = ""
}
// Adjust metadata if required
updateMetadata, err := f.fetchAndUpdateMetadata(ctx, src, fs.MetadataAsOpenOptions(ctx), createInfo, false)
if err != nil {
return nil, err
}
// get the ID of the thing to copy
// copy the contents if CopyShortcutContent
// else copy the shortcut only
@@ -2691,7 +2813,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
copy := f.svc.Files.Copy(id, createInfo).
Fields(partialFields).
Fields(f.getFileFields(ctx)).
SupportsAllDrives(true).
KeepRevisionForever(f.opt.KeepRevisionForever)
srcObj.addResourceKey(copy.Header())
@@ -2711,7 +2833,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
// FIXME remove this when google fixes the problem!
if isDoc {
// A short sleep is needed here in order to make the
// change effective, without it is is ignored. This is
// change effective, without it is ignored. This is
// probably some eventual consistency nastiness.
sleepTime := 2 * time.Second
fs.Debugf(f, "Sleeping for %v before setting the modtime to work around drive bug - see #4517", sleepTime)
@@ -2727,6 +2849,11 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Errorf(existingObject, "Failed to remove existing object after copy: %v", err)
}
}
// Finalise metadata
err = updateMetadata(ctx, info)
if err != nil {
return nil, err
}
return newObject, nil
}
@@ -2900,13 +3027,19 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
dstParents := strings.Join(dstInfo.Parents, ",")
dstInfo.Parents = nil
// Adjust metadata if required
updateMetadata, err := f.fetchAndUpdateMetadata(ctx, src, fs.MetadataAsOpenOptions(ctx), dstInfo, true)
if err != nil {
return nil, err
}
// Do the move
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Update(shortcutID(srcObj.id), dstInfo).
RemoveParents(srcParentID).
AddParents(dstParents).
Fields(partialFields).
Fields(f.getFileFields(ctx)).
SupportsAllDrives(true).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
@@ -2915,6 +3048,11 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
// Finalise metadata
err = updateMetadata(ctx, info)
if err != nil {
return nil, err
}
return f.newObjectWithInfo(ctx, remote, info)
}
@@ -3420,6 +3558,50 @@ func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
return nil
}
func (f *Fs) query(ctx context.Context, query string) (entries []*drive.File, err error) {
list := f.svc.Files.List()
if query != "" {
list.Q(query)
}
if f.opt.ListChunk > 0 {
list.PageSize(f.opt.ListChunk)
}
list.SupportsAllDrives(true)
list.IncludeItemsFromAllDrives(true)
if f.isTeamDrive && !f.opt.SharedWithMe {
list.DriveId(f.opt.TeamDriveID)
list.Corpora("drive")
}
// If using appDataFolder then need to add Spaces
if f.rootFolderID == "appDataFolder" {
list.Spaces("appDataFolder")
}
fields := fmt.Sprintf("files(%s),nextPageToken,incompleteSearch", f.getFileFields(ctx))
var results []*drive.File
for {
var files *drive.FileList
err = f.pacer.Call(func() (bool, error) {
files, err = list.Fields(googleapi.Field(fields)).Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("failed to execute query: %w", err)
}
if files.IncompleteSearch {
fs.Errorf(f, "search result INCOMPLETE")
}
results = append(results, files.Files...)
if files.NextPageToken == "" {
break
}
list.PageToken(files.NextPageToken)
}
return results, nil
}
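The pagination loop above follows the usual Drive v3 pattern: request a page, append files.Files, and continue while files.NextPageToken is non-empty. A stripped-down, library-style sketch using only calls visible above (no pacer, retries or shared-drive handling), assuming an already-authorised *drive.Service:

package driveexample

import (
	"context"
	"fmt"

	drive "google.golang.org/api/drive/v3"
)

// listAll pages through a Drive query, collecting every matching file.
func listAll(ctx context.Context, svc *drive.Service, query string) ([]*drive.File, error) {
	list := svc.Files.List().Q(query).PageSize(1000).
		Fields("files(id,name,mimeType),nextPageToken")
	var results []*drive.File
	for {
		page, err := list.Context(ctx).Do()
		if err != nil {
			return nil, fmt.Errorf("failed to execute query: %w", err)
		}
		results = append(results, page.Files...)
		if page.NextPageToken == "" {
			break
		}
		// Ask for the next page on the following iteration.
		list.PageToken(page.NextPageToken)
	}
	return results, nil
}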
var commandHelp = []fs.CommandHelp{{
Name: "get",
Short: "Get command for fetching the drive config parameters",
@@ -3570,6 +3752,47 @@ Use the --interactive/-i or --dry-run flag to see what would be copied before co
}, {
Name: "importformats",
Short: "Dump the import formats for debug purposes",
}, {
Name: "query",
Short: "List files using Google Drive query language",
Long: `This command lists files based on a query
Usage:
rclone backend query drive: query
The query syntax is documented at [Google Drive Search query terms and
operators](https://developers.google.com/drive/api/guides/ref-search-terms).
For example:
rclone backend query drive: "'0ABc9DEFGHIJKLMNop0QRatUVW3X' in parents and name contains 'foo'"
If the query contains literal ' or \ characters, these need to be escaped with
\ characters. "'" becomes "\'" and "\" becomes "\\\", for example to match a
file named "foo ' \.txt":
rclone backend query drive: "name = 'foo \' \\\.txt'"
The result is a JSON array of matches, for example:
[
{
"createdTime": "2017-06-29T19:58:28.537Z",
"id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD",
"md5Checksum": "68518d16be0c6fbfab918be61d658032",
"mimeType": "text/plain",
"modifiedTime": "2024-02-02T10:40:02.874Z",
"name": "foo ' \\.txt",
"parents": [
"0BxAe_BCDE4zkFGZpcWJGek0xbzC"
],
"resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC",
"sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893",
"size": "311",
"webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
}
]`,
}}
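If the new query command is driven from a script, the JSON array it prints can be decoded directly. A hedged sketch below shells out to rclone and unmarshals a subset of the fields shown in the example output above (the struct and field names are taken from that example, not from a published schema).

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// queryResult holds a subset of the fields shown in the example output.
type queryResult struct {
	ID           string   `json:"id"`
	Name         string   `json:"name"`
	MimeType     string   `json:"mimeType"`
	ModifiedTime string   `json:"modifiedTime"`
	Size         string   `json:"size"` // Drive returns sizes as strings
	Parents      []string `json:"parents"`
}

func main() {
	query := "name contains 'foo' and trashed=false"
	out, err := exec.Command("rclone", "backend", "query", "drive:", query).Output()
	if err != nil {
		panic(err)
	}
	var results []queryResult
	if err := json.Unmarshal(out, &results); err != nil {
		panic(err)
	}
	for _, r := range results {
		fmt.Printf("%s\t%s\t%s bytes\n", r.ID, r.Name, r.Size)
	}
}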
// Command the backend to run a named command
@@ -3687,6 +3910,17 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
return f.exportFormats(ctx), nil
case "importformats":
return f.importFormats(ctx), nil
case "query":
if len(arg) == 1 {
query := arg[0]
var results, err = f.query(ctx, query)
if err != nil {
return nil, fmt.Errorf("failed to execute query: %q, error: %w", query, err)
}
return results, nil
} else {
return nil, errors.New("need a query argument")
}
default:
return nil, fs.ErrorCommandNotFound
}
@@ -4193,6 +4427,37 @@ func (o *linkObject) ext() string {
return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:]
}
// Items returns the count of items in this directory or this
// directory and subdirectories if known, -1 for unknown
func (d *Directory) Items() int64 {
return -1
}
// SetMetadata sets metadata for a Directory
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (d *Directory) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
info, err := d.fs.updateDir(ctx, d.id, metadata)
if err != nil {
return fmt.Errorf("failed to update directory info: %w", err)
}
// Update directory from info returned
baseObject, err := d.fs.newBaseObject(ctx, d.remote, info)
if err != nil {
return fmt.Errorf("failed to process directory info: %w", err)
}
d.baseObject = baseObject
return err
}
// Hash does nothing on a directory
//
// This method is implemented with the incorrect type signature to
// stop the Directory type asserting to fs.Object or fs.ObjectInfo
func (d *Directory) Hash() {
// Does nothing
}
// templates for document link files
const (
urlTemplate = `[InternetShortcut]{{"\r"}}
@@ -4242,6 +4507,8 @@ var (
_ fs.PublicLinker = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
@@ -4256,4 +4523,8 @@ var (
_ fs.MimeTyper = (*linkObject)(nil)
_ fs.IDer = (*linkObject)(nil)
_ fs.ParentIDer = (*linkObject)(nil)
_ fs.Directory = (*Directory)(nil)
_ fs.SetModTimer = (*Directory)(nil)
_ fs.SetMetadataer = (*Directory)(nil)
_ fs.ParentIDer = (*Directory)(nil)
)

View File

@@ -524,6 +524,43 @@ func (f *Fs) InternalTestCopyID(t *testing.T) {
})
}
// TestIntegration/FsMkdir/FsPutFiles/Internal/Query
func (f *Fs) InternalTestQuery(t *testing.T) {
ctx := context.Background()
var err error
t.Run("BadQuery", func(t *testing.T) {
_, err = f.query(ctx, "this is a bad query")
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to execute query")
})
t.Run("NoMatch", func(t *testing.T) {
results, err := f.query(ctx, fmt.Sprintf("name='%s' and name!='%s'", existingSubDir, existingSubDir))
require.NoError(t, err)
assert.Len(t, results, 0)
})
t.Run("GoodQuery", func(t *testing.T) {
pathSegments := strings.Split(existingFile, "/")
var parent string
for _, item := range pathSegments {
// the file name contains ' characters which must be escaped
escapedItem := f.opt.Enc.FromStandardName(item)
escapedItem = strings.ReplaceAll(escapedItem, `\`, `\\`)
escapedItem = strings.ReplaceAll(escapedItem, `'`, `\'`)
results, err := f.query(ctx, fmt.Sprintf("%strashed=false and name='%s'", parent, escapedItem))
require.NoError(t, err)
require.True(t, len(results) > 0)
for _, result := range results {
assert.True(t, len(result.Id) > 0)
assert.Equal(t, result.Name, item)
}
parent = fmt.Sprintf("'%s' in parents and ", results[0].Id)
}
})
}
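The escaping done in GoodQuery mirrors the rule in the query command help: double backslashes first, then escape single quotes. A tiny standalone helper showing the order of operations (the function name is illustrative):

package main

import (
	"fmt"
	"strings"
)

// escapeDriveQueryLiteral prepares a literal for use inside single quotes in
// a Drive query: backslashes are doubled first, then quotes are escaped.
func escapeDriveQueryLiteral(s string) string {
	s = strings.ReplaceAll(s, `\`, `\\`)
	s = strings.ReplaceAll(s, `'`, `\'`)
	return s
}

func main() {
	name := `foo ' \.txt`
	fmt.Printf("name = '%s'\n", escapeDriveQueryLiteral(name))
	// prints: name = 'foo \' \\.txt'
}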
// TestIntegration/FsMkdir/FsPutFiles/Internal/AgeQuery
func (f *Fs) InternalTestAgeQuery(t *testing.T) {
// Check set up for filtering
@@ -611,6 +648,7 @@ func (f *Fs) InternalTest(t *testing.T) {
t.Run("Shortcuts", f.InternalTestShortcuts)
t.Run("UnTrash", f.InternalTestUnTrash)
t.Run("CopyID", f.InternalTestCopyID)
t.Run("Query", f.InternalTestQuery)
t.Run("AgeQuery", f.InternalTestAgeQuery)
t.Run("ShouldRetry", f.InternalTestShouldRetry)
}

View File

@@ -9,6 +9,8 @@ import (
"sync"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/errcount"
"golang.org/x/sync/errgroup"
drive "google.golang.org/api/drive/v3"
"google.golang.org/api/googleapi"
@@ -37,7 +39,7 @@ var systemMetadataInfo = map[string]fs.MetadataHelp{
Example: "true",
},
"writers-can-share": {
Help: "Whether users with only writer permission can modify the file's permissions. Not populated for items in shared drives.",
Help: "Whether users with only writer permission can modify the file's permissions. Not populated and ignored when setting for items in shared drives.",
Type: "boolean",
Example: "false",
},
@@ -135,23 +137,30 @@ func (f *Fs) getPermission(ctx context.Context, fileID, permissionID string, use
// Set the permissions on the info
func (f *Fs) setPermissions(ctx context.Context, info *drive.File, permissions []*drive.Permission) (err error) {
errs := errcount.New()
for _, perm := range permissions {
if perm.Role == "owner" {
// ignore owner permissions - these are set with owner
continue
}
cleanPermissionForWrite(perm)
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Permissions.Create(info.Id, perm).
err := f.pacer.Call(func() (bool, error) {
_, err := f.svc.Permissions.Create(info.Id, perm).
SupportsAllDrives(true).
SendNotificationEmail(false).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("failed to set permission: %w", err)
fs.Errorf(f, "Failed to set permission %s for %q: %v", perm.Role, perm.EmailAddress, err)
errs.Add(err)
}
}
return nil
err = errs.Err("failed to set permission")
if err != nil {
err = fserrors.NoRetryError(err)
}
return err
}
// Clean attributes from permissions which we can't write
@@ -253,7 +262,7 @@ func (f *Fs) setLabels(ctx context.Context, info *drive.File, labels []*drive.La
return f.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("failed to set owner: %w", err)
return fmt.Errorf("failed to set labels: %w", err)
}
return nil
}
@@ -363,6 +372,7 @@ func (o *baseObject) parseMetadata(ctx context.Context, info *drive.File) (err e
// shared drives.
if o.fs.isTeamDrive && !info.HasAugmentedPermissions {
// Don't process permissions if there aren't any specifically set
fs.Debugf(o, "Ignoring %d permissions and %d permissionIds as is shared drive with hasAugmentedPermissions false", len(info.Permissions), len(info.PermissionIds))
info.Permissions = nil
info.PermissionIds = nil
}
@@ -527,8 +537,12 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
return nil, err
}
case "writers-can-share":
if err := parseBool(&updateInfo.WritersCanShare); err != nil {
return nil, err
if !f.isTeamDrive {
if err := parseBool(&updateInfo.WritersCanShare); err != nil {
return nil, err
}
} else {
fs.Debugf(f, "Ignoring %s=%s as can't set on shared drives", k, v)
}
case "viewed-by-me":
// Can't write this
@@ -540,7 +554,12 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
}
// Can't set Owner on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
return f.setOwner(ctx, info, v)
err := f.setOwner(ctx, info, v)
if err != nil && f.opt.MetadataOwner.IsSet(rwFailOK) {
fs.Errorf(f, "Ignoring error as failok is set: %v", err)
return nil
}
return err
})
case "permissions":
if !f.opt.MetadataPermissions.IsSet(rwWrite) {
@@ -553,7 +572,13 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
}
// Can't set Permissions on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
return f.setPermissions(ctx, info, perms)
err := f.setPermissions(ctx, info, perms)
if err != nil && f.opt.MetadataPermissions.IsSet(rwFailOK) {
// We've already logged the permissions errors individually here
fs.Debugf(f, "Ignoring error as failok is set: %v", err)
return nil
}
return err
})
case "labels":
if !f.opt.MetadataLabels.IsSet(rwWrite) {
@@ -566,7 +591,12 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
}
// Can't set Labels on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
return f.setLabels(ctx, info, labels)
err := f.setLabels(ctx, info, labels)
if err != nil && f.opt.MetadataLabels.IsSet(rwFailOK) {
fs.Errorf(f, "Ignoring error as failok is set: %v", err)
return nil
}
return err
})
case "folder-color-rgb":
updateInfo.FolderColorRgb = v

View File

@@ -216,7 +216,10 @@ are supported.
Note that we don't unmount the shared folder afterwards so the
--dropbox-shared-folders can be omitted after the first use of a particular
shared folder.`,
shared folder.
See also --dropbox-root-namespace for an alternative way to work with shared
folders.`,
Default: false,
Advanced: true,
}, {
@@ -237,6 +240,11 @@ shared folder.`,
encoder.EncodeDel |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8,
}, {
Name: "root_namespace",
Help: "Specify a different Dropbox namespace ID to use as the root for all paths.",
Default: "",
Advanced: true,
}}...), defaultBatcherOptions.FsOptions("For full info see [the main docs](https://rclone.org/dropbox/#batch-mode)\n\n")...),
})
}
@@ -253,6 +261,7 @@ type Options struct {
AsyncBatch bool `config:"async_batch"`
PacerMinSleep fs.Duration `config:"pacer_min_sleep"`
Enc encoder.MultiEncoder `config:"encoding"`
RootNsid string `config:"root_namespace"`
}
// Fs represents a remote dropbox server
@@ -428,15 +437,15 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
members := []*team.UserSelectorArg{&user}
args := team.NewMembersGetInfoArgs(members)
memberIds, err := f.team.MembersGetInfo(args)
memberIDs, err := f.team.MembersGetInfo(args)
if err != nil {
return nil, fmt.Errorf("invalid dropbox team member: %q: %w", opt.Impersonate, err)
}
if len(memberIds) == 0 || memberIds[0].MemberInfo == nil || memberIds[0].MemberInfo.Profile == nil {
if len(memberIDs) == 0 || memberIDs[0].MemberInfo == nil || memberIDs[0].MemberInfo.Profile == nil {
return nil, fmt.Errorf("dropbox team member not found: %q", opt.Impersonate)
}
cfg.AsMemberID = memberIds[0].MemberInfo.Profile.MemberProfile.TeamMemberId
cfg.AsMemberID = memberIDs[0].MemberInfo.Profile.MemberProfile.TeamMemberId
}
f.srv = files.New(cfg)
@@ -502,8 +511,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.features.Fill(ctx, f)
// If root starts with / then use the actual root
if strings.HasPrefix(root, "/") {
if f.opt.RootNsid != "" {
f.ns = f.opt.RootNsid
fs.Debugf(f, "Overriding root namespace to %q", f.ns)
} else if strings.HasPrefix(root, "/") {
// If root starts with / then use the actual root
var acc *users.FullAccount
err = f.pacer.Call(func() (bool, error) {
acc, err = f.users.GetCurrentAccount()
@@ -644,7 +656,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(ctx, remote, nil)
}
// listSharedFoldersApi lists all available shared folders mounted and not mounted
// listSharedFolders lists all available shared folders mounted and not mounted
// we'll need the id later so we have to return them in original format
func (f *Fs) listSharedFolders(ctx context.Context) (entries fs.DirEntries, err error) {
started := false
@@ -1231,7 +1243,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
return nil, err
}
var total uint64
var used = q.Used
used := q.Used
if q.Allocation != nil {
if q.Allocation.Individual != nil {
total += q.Allocation.Individual.Allocated

View File

@@ -970,6 +970,8 @@ func (f *Fs) mkdir(ctx context.Context, abspath string) error {
f.putFtpConnection(&c, err)
if errX := textprotoError(err); errX != nil {
switch errX.Code {
case ftp.StatusRequestedFileActionOK: // some ftp servers apparently return 250 instead of 257
err = nil // see: https://forum.rclone.org/t/rclone-pop-up-an-i-o-error-when-creating-a-folder-in-a-mounted-ftp-drive/44368/
case ftp.StatusFileUnavailable: // dir already exists: see issue #2181
err = nil
case 521: // dir already exists: error number according to RFC 959: issue #2363

View File

@@ -697,7 +697,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
// is this a directory marker?
if isDirectory {
// Don't insert the root directory
if remote == directory {
if remote == f.opt.Enc.ToStandardPath(directory) {
continue
}
// process directory markers as directories

View File

@@ -56,8 +56,7 @@ type MediaItem struct {
CreationTime time.Time `json:"creationTime"`
Width string `json:"width"`
Height string `json:"height"`
Photo struct {
} `json:"photo"`
Photo struct{} `json:"photo"`
} `json:"mediaMetadata"`
Filename string `json:"filename"`
}
@@ -68,7 +67,7 @@ type MediaItems struct {
NextPageToken string `json:"nextPageToken"`
}
//Content categories
// Content categories
// NONE Default content category. This category is ignored when any other category is used in the filter.
// LANDSCAPES Media items containing landscapes.
// RECEIPTS Media items containing receipts.
@@ -187,5 +186,5 @@ type BatchCreateResponse struct {
// BatchRemoveItems is for removing items from an album
type BatchRemoveItems struct {
MediaItemIds []string `json:"mediaItemIds"`
MediaItemIDs []string `json:"mediaItemIds"`
}

View File

@@ -280,7 +280,7 @@ func errorHandler(resp *http.Response) error {
if strings.HasPrefix(resp.Header.Get("Content-Type"), "image/") {
body = []byte("Image not found or broken")
}
var e = api.Error{
e := api.Error{
Details: api.ErrorDetails{
Code: resp.StatusCode,
Message: string(body),
@@ -620,9 +620,7 @@ func (f *Fs) listDir(ctx context.Context, prefix string, filter api.SearchFilter
if err != nil {
return err
}
if entry != nil {
entries = append(entries, entry)
}
entries = append(entries, entry)
return nil
})
if err != nil {
@@ -702,7 +700,7 @@ func (f *Fs) createAlbum(ctx context.Context, albumTitle string) (album *api.Alb
Path: "/albums",
Parameters: url.Values{},
}
var request = api.CreateAlbum{
request := api.CreateAlbum{
Album: &api.Album{
Title: albumTitle,
},
@@ -1002,7 +1000,7 @@ func (f *Fs) commitBatchAlbumID(ctx context.Context, items []uploadedItem, resul
Method: "POST",
Path: "/mediaItems:batchCreate",
}
var request = api.BatchCreateRequest{
request := api.BatchCreateRequest{
AlbumID: albumID,
}
itemsInBatch := 0
@@ -1174,8 +1172,8 @@ func (o *Object) Remove(ctx context.Context) (err error) {
Path: "/albums/" + album.ID + ":batchRemoveMediaItems",
NoResponse: true,
}
var request = api.BatchRemoveItems{
MediaItemIds: []string{o.id},
request := api.BatchRemoveItems{
MediaItemIDs: []string{o.id},
}
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {

View File

@@ -38,7 +38,7 @@ type dirPattern struct {
toEntries func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error)
}
// dirPatters is a slice of all the directory patterns
// dirPatterns is a slice of all the directory patterns
type dirPatterns []dirPattern
// patterns describes the layout of the google photos backend file system.

View File

@@ -164,16 +164,21 @@ func NewFs(ctx context.Context, fsname, rpath string, cmap configmap.Mapper) (fs
}
stubFeatures := &fs.Features{
CanHaveEmptyDirectories: true,
IsLocal: true,
ReadMimeType: true,
WriteMimeType: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
CanHaveEmptyDirectories: true,
IsLocal: true,
ReadMimeType: true,
WriteMimeType: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
PartialUploads: true,
}
f.features = stubFeatures.Fill(ctx, f).Mask(ctx, f.Fs).WrapsFs(f, f.Fs)
@@ -341,6 +346,22 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
return errors.New("MergeDirs not supported")
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
if do := f.Fs.Features().DirSetModTime; do != nil {
return do(ctx, dir, modTime)
}
return fs.ErrorNotImplemented
}
// MkdirMetadata makes the directory passed in as dir, setting its metadata if supplied
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
if do := f.Fs.Features().MkdirMetadata; do != nil {
return do(ctx, dir, metadata)
}
return nil, fs.ErrorNotImplemented
}
// DirCacheFlush resets the directory cache - used in testing
// as an optional interface
func (f *Fs) DirCacheFlush() {
@@ -418,7 +439,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
// Shutdown the backend, closing any background tasks and any cached connections.
func (f *Fs) Shutdown(ctx context.Context) (err error) {
if f.db != nil {
if f.db != nil && !f.db.IsStopped() {
err = f.db.Stop(false)
}
if do := f.Fs.Features().Shutdown; do != nil {
@@ -514,6 +535,17 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
@@ -530,6 +562,8 @@ var (
_ fs.Abouter = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)

View File

@@ -71,7 +71,14 @@ func (o *Object) Hash(ctx context.Context, hashType hash.Type) (hashVal string,
f := o.f
if f.passHashes.Contains(hashType) {
fs.Debugf(o, "pass %s", hashType)
return o.Object.Hash(ctx, hashType)
hashVal, err = o.Object.Hash(ctx, hashType)
if hashVal != "" {
return hashVal, err
}
if err != nil {
fs.Debugf(o, "error passing %s: %v", hashType, err)
}
fs.Debugf(o, "passed %s is blank -- trying other methods", hashType)
}
if !f.suppHashes.Contains(hashType) {
fs.Debugf(o, "unsupp %s", hashType)

View File

@@ -1,5 +1,4 @@
//go:build !plan9
// +build !plan9
package hdfs
@@ -150,7 +149,7 @@ func (f *Fs) Root() string {
// String returns a description of the FS
func (f *Fs) String() string {
return fmt.Sprintf("hdfs://%s", f.opt.Namenode)
return fmt.Sprintf("hdfs://%s/%s", f.opt.Namenode, f.root)
}
// Features returns the optional features of this Fs
@@ -210,7 +209,8 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
fs: f,
remote: remote,
size: x.Size(),
modTime: x.ModTime()})
modTime: x.ModTime(),
})
}
}
return entries, nil

View File

@@ -1,5 +1,4 @@
//go:build !plan9
// +build !plan9
// Package hdfs provides an interface to the HDFS storage system.
package hdfs

View File

@@ -1,7 +1,6 @@
// Test HDFS filesystem interface
//go:build !plan9
// +build !plan9
package hdfs_test

View File

@@ -2,6 +2,6 @@
// about "no buildable Go source files "
//go:build plan9
// +build plan9
// Package hdfs provides an interface to the HDFS storage system.
package hdfs

View File

@@ -1,5 +1,4 @@
//go:build !plan9
// +build !plan9
package hdfs

View File

@@ -762,6 +762,12 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return nil
}
// Shutdown shutdown the fs
func (f *Fs) Shutdown(ctx context.Context) error {
f.tokenRenewer.Shutdown()
return nil
}
// ------------------------------------------------------------
// Fs returns the parent Fs.
@@ -997,6 +1003,7 @@ var (
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
)

View File

@@ -89,6 +89,10 @@ that directory listings are much quicker, but rclone won't have the times or
sizes of any files, and some files that don't exist may be in the listing.`,
Default: false,
Advanced: true,
}, {
Name: "no_escape",
Help: "Do not escape URL metacharacters in path names.",
Default: false,
}},
}
fs.Register(fsi)
@@ -100,6 +104,7 @@ type Options struct {
NoSlash bool `config:"no_slash"`
NoHead bool `config:"no_head"`
Headers fs.CommaSepList `config:"headers"`
NoEscape bool `config:"no_escape"`
}
// Fs stores the interface to the remote HTTP files
@@ -326,6 +331,11 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// Join's the remote onto the base URL
func (f *Fs) url(remote string) string {
if f.opt.NoEscape {
// Directly concatenate without escaping, no_escape behavior
return f.endpointURL + remote
}
// Default behavior
return f.endpointURL + rest.URLPathEscape(remote)
}
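The effect of no_escape in url() above can be seen with the standard library: by default path metacharacters are percent-encoded, while raw concatenation leaves them for the server to interpret verbatim. A small sketch using net/url's PathEscape on each segment (roughly what escaping a remote path needs to do; the base URL and remote name are made up for illustration):

package main

import (
	"fmt"
	"net/url"
	"strings"
)

// escapeSegments percent-encodes each path segment while keeping the "/"
// separators intact.
func escapeSegments(p string) string {
	segs := strings.Split(p, "/")
	for i, s := range segs {
		segs[i] = url.PathEscape(s)
	}
	return strings.Join(segs, "/")
}

func main() {
	base := "https://example.com/files/"
	remote := "dir/file name#1?.txt"

	fmt.Println(base + escapeSegments(remote)) // escaped: %20, %23, %3F
	fmt.Println(base + remote)                 // no_escape: sent verbatim
}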

View File

@@ -1487,16 +1487,38 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fs.ErrorCantMove
}
err := f.mkParentDir(ctx, remote)
meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
if err != nil {
return nil, err
}
if err := f.mkParentDir(ctx, remote); err != nil {
return nil, err
}
info, err := f.copyOrMove(ctx, "cp", srcObj.filePath(), remote)
// if destination was a trashed file then after a successful copy the copied file is still in trash (bug in api?)
if err == nil && bool(info.Deleted) && !f.opt.TrashedOnly && info.State == "COMPLETED" {
fs.Debugf(src, "Server-side copied to trashed destination, restoring")
info, err = f.createOrUpdate(ctx, remote, srcObj.createTime, srcObj.modTime, srcObj.size, srcObj.md5)
if err == nil {
var createTime time.Time
var createTimeMeta bool
var modTime time.Time
var modTimeMeta bool
if meta != nil {
createTime, createTimeMeta = srcObj.parseFsMetadataTime(meta, "btime")
if !createTimeMeta {
createTime = srcObj.createTime
}
modTime, modTimeMeta = srcObj.parseFsMetadataTime(meta, "mtime")
if !modTimeMeta {
modTime = srcObj.modTime
}
}
if bool(info.Deleted) && !f.opt.TrashedOnly && info.State == "COMPLETED" {
// Workaround necessary when destination was a trashed file, to avoid the copied file also being in trash (bug in api?)
fs.Debugf(src, "Server-side copied to trashed destination, restoring")
info, err = f.createOrUpdate(ctx, remote, createTime, modTime, info.Size, info.MD5)
} else if createTimeMeta || modTimeMeta {
info, err = f.createOrUpdate(ctx, remote, createTime, modTime, info.Size, info.MD5)
}
}
if err != nil {
@@ -1523,12 +1545,30 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fs.ErrorCantMove
}
err := f.mkParentDir(ctx, remote)
meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
if err != nil {
return nil, err
}
if err := f.mkParentDir(ctx, remote); err != nil {
return nil, err
}
info, err := f.copyOrMove(ctx, "mv", srcObj.filePath(), remote)
if err != nil && meta != nil {
createTime, createTimeMeta := srcObj.parseFsMetadataTime(meta, "btime")
if !createTimeMeta {
createTime = srcObj.createTime
}
modTime, modTimeMeta := srcObj.parseFsMetadataTime(meta, "mtime")
if !modTimeMeta {
modTime = srcObj.modTime
}
if createTimeMeta || modTimeMeta {
info, err = f.createOrUpdate(ctx, remote, createTime, modTime, info.Size, info.MD5)
}
}
if err != nil {
return nil, fmt.Errorf("couldn't move file: %w", err)
}
@@ -1680,6 +1720,12 @@ func (f *Fs) CleanUp(ctx context.Context) error {
return nil
}
// Shutdown shutdown the fs
func (f *Fs) Shutdown(ctx context.Context) error {
f.tokenRenewer.Shutdown()
return nil
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
@@ -1780,6 +1826,20 @@ func (o *Object) readMetaData(ctx context.Context, force bool) (err error) {
return o.setMetaData(info)
}
// parseFsMetadataTime parses a time string from fs.Metadata with key
func (o *Object) parseFsMetadataTime(m fs.Metadata, key string) (t time.Time, ok bool) {
value, ok := m[key]
if ok {
var err error
t, err = time.Parse(time.RFC3339Nano, value) // metadata stores RFC3339Nano timestamps
if err != nil {
fs.Debugf(o, "failed to parse metadata %s: %q: %v", key, value, err)
ok = false
}
}
return t, ok
}
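parseFsMetadataTime reduces the duplicated btime/mtime parsing in Update (and now Copy and Move) to one helper. The same idea in standalone form, assuming metadata timestamps are RFC3339Nano strings as rclone stores them:

package main

import (
	"fmt"
	"time"
)

// parseMetadataTime returns the parsed time and whether the key was present
// and valid; callers fall back to another value when ok is false.
func parseMetadataTime(meta map[string]string, key string) (t time.Time, ok bool) {
	value, ok := meta[key]
	if !ok {
		return time.Time{}, false
	}
	t, err := time.Parse(time.RFC3339Nano, value)
	if err != nil {
		return time.Time{}, false
	}
	return t, true
}

func main() {
	meta := map[string]string{"mtime": "2024-03-01T12:34:56.789Z"}
	modTime, ok := parseMetadataTime(meta, "mtime")
	if !ok {
		modTime = time.Now() // fall back, as the backend falls back to the source times
	}
	fmt.Println(modTime, ok)
}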
// ModTime returns the modification time of the object
//
// It attempts to read the objects mtime and if that isn't present the
@@ -1951,21 +2011,11 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var createdTime string
var modTime string
if meta != nil {
if v, ok := meta["btime"]; ok {
t, err := time.Parse(time.RFC3339Nano, v) // metadata stores RFC3339Nano timestamps
if err != nil {
fs.Debugf(o, "failed to parse metadata btime: %q: %v", v, err)
} else {
createdTime = api.Rfc3339Time(t).String() // jottacloud api wants RFC3339 timestamps
}
if t, ok := o.parseFsMetadataTime(meta, "btime"); ok {
createdTime = api.Rfc3339Time(t).String() // jottacloud api wants RFC3339 timestamps
}
if v, ok := meta["mtime"]; ok {
t, err := time.Parse(time.RFC3339Nano, v)
if err != nil {
fs.Debugf(o, "failed to parse metadata mtime: %q: %v", v, err)
} else {
modTime = api.Rfc3339Time(t).String()
}
if t, ok := o.parseFsMetadataTime(meta, "mtime"); ok {
modTime = api.Rfc3339Time(t).String()
}
}
if modTime == "" { // prefer mtime in meta as Modified time, fallback to source ModTime
@@ -2104,6 +2154,7 @@ var (
_ fs.Abouter = (*Fs)(nil)
_ fs.UserInfoer = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
_ fs.Metadataer = (*Object)(nil)

View File

@@ -67,13 +67,13 @@ func init() {
Sensitive: true,
}, {
Name: "password",
Help: "Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).",
Help: "Your password for rclone generate one at https://app.koofr.net/app/admin/preferences/password.",
Provider: "koofr",
IsPassword: true,
Required: true,
}, {
Name: "password",
Help: "Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).",
Help: "Your password for rclone generate one at https://storage.rcs-rds.ro/app/admin/preferences/password.",
Provider: "digistorage",
IsPassword: true,
Required: true,

View File

@@ -36,7 +36,7 @@ import (
)
const (
maxEntitiesPerPage = 1024
maxEntitiesPerPage = 1000
minSleep = 200 * time.Millisecond
maxSleep = 2 * time.Second
pacerBurst = 1
@@ -219,7 +219,8 @@ type listAllFn func(*entity) bool
// Search is a bit fussy about which characters match
//
// If the name doesn't match this then do an dir list instead
var searchOK = regexp.MustCompile(`^[a-zA-Z0-9_ .]+$`)
// N.B.: Linkbox doesn't support searching by names longer than 50 chars
var searchOK = regexp.MustCompile(`^[a-zA-Z0-9_ -.]{1,50}$`)
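A quick way to see which names the tightened pattern accepts; anything else, including names over 50 characters or containing non-ASCII characters, falls back to the unbounded directory listing.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

var searchOK = regexp.MustCompile(`^[a-zA-Z0-9_ -.]{1,50}$`)

func main() {
	names := []string{
		"report 2024.txt",       // matches: searched by name
		"notes-v2",              // matches
		"résumé.txt",            // no match: non-ASCII, falls back to listing
		strings.Repeat("a", 51), // no match: longer than 50 chars
	}
	for _, name := range names {
		fmt.Printf("%-55q search by name: %v\n", name, searchOK.MatchString(name))
	}
}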
// Lists the directory required calling the user function on each item found
//
@@ -238,6 +239,7 @@ func (f *Fs) listAll(ctx context.Context, dirID string, name string, fn listAllF
// If name isn't good then do an unbounded search
name = ""
}
OUTER:
for numberOfEntities == maxEntitiesPerPage {
pageNumber++
@@ -258,7 +260,6 @@ OUTER:
err = getUnmarshaledResponse(ctx, f, opts, &responseResult)
if err != nil {
return false, fmt.Errorf("getting files failed: %w", err)
}
numberOfEntities = len(responseResult.SearchData.Entities)

View File

@@ -13,5 +13,7 @@ func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestLinkbox:",
NilObject: (*linkbox.Object)(nil),
// Linkbox doesn't support leading dots for files
SkipLeadingDot: true,
})
}

View File

@@ -1,5 +1,4 @@
//go:build darwin || dragonfly || freebsd || linux
// +build darwin dragonfly freebsd linux
package local
@@ -24,9 +23,9 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
}
bs := int64(s.Bsize) // nolint: unconvert
usage := &fs.Usage{
Total: fs.NewUsageValue(bs * int64(s.Blocks)), // quota of bytes that can be used
Used: fs.NewUsageValue(bs * int64(s.Blocks-s.Bfree)), // bytes in use
Free: fs.NewUsageValue(bs * int64(s.Bavail)), // bytes which can be uploaded before reaching the quota
Total: fs.NewUsageValue(bs * int64(s.Blocks)), //nolint: unconvert // quota of bytes that can be used
Used: fs.NewUsageValue(bs * int64(s.Blocks-s.Bfree)), //nolint: unconvert // bytes in use
Free: fs.NewUsageValue(bs * int64(s.Bavail)), //nolint: unconvert // bytes which can be uploaded before reaching the quota
}
return usage, nil
}

View File

@@ -1,5 +1,4 @@
//go:build windows
// +build windows
package local

View File

@@ -1,5 +1,4 @@
//go:build !linux
// +build !linux
package local

View File

@@ -1,5 +1,4 @@
//go:build linux
// +build linux
package local

View File

@@ -1,5 +1,4 @@
//go:build windows || plan9 || js
// +build windows plan9 js
package local

View File

@@ -1,5 +1,4 @@
//go:build !windows && !plan9 && !js
// +build !windows,!plan9,!js
package local

View File

@@ -36,6 +36,27 @@ const devUnset = 0xdeadbeefcafebabe // a d
const linkSuffix = ".rclonelink" // The suffix added to a translated symbolic link
const useReadDir = (runtime.GOOS == "windows" || runtime.GOOS == "plan9") // these OSes read FileInfos directly
// timeType allows the user to choose what exactly ModTime() returns
type timeType = fs.Enum[timeTypeChoices]
const (
mTime timeType = iota
aTime
bTime
cTime
)
type timeTypeChoices struct{}
func (timeTypeChoices) Choices() []string {
return []string{
mTime: "mtime",
aTime: "atime",
bTime: "btime",
cTime: "ctime",
}
}
// Register with Fs
func init() {
fsi := &fs.RegInfo{
@@ -53,6 +74,8 @@ netbsd, macOS and Solaris. It is **not** supported on Windows yet
User metadata is stored as extended attributes (which may not be
supported by all file systems) under the "user.*" prefix.
Metadata is supported on files and directories.
`,
},
Options: []fs.Option{{
@@ -211,6 +234,42 @@ when copying to a CIFS mount owned by another user. If this option is
enabled, rclone will no longer update the modtime after copying a file.`,
Default: false,
Advanced: true,
}, {
Name: "time_type",
Help: `Set what kind of time is returned.
Normally rclone does all operations on the mtime or Modification time.
If you set this flag then rclone will return the Modified time as whatever
you set here. So if you use "rclone lsl --local-time-type ctime" then
you will see ctimes in the listing.
If the OS doesn't support returning the time_type specified then rclone
will silently replace it with the modification time which all OSes support.
- mtime is supported by all OSes
- atime is supported on all OSes except: plan9, js
- btime is only supported on: Windows, macOS, freebsd, netbsd
- ctime is supported on all OSes except: Windows, plan9, js
Note that setting the time will still set the modified time so this is
only useful for reading.
`,
Default: mTime,
Advanced: true,
Examples: []fs.OptionExample{{
Value: mTime.String(),
Help: "The last modification time.",
}, {
Value: aTime.String(),
Help: "The last access time.",
}, {
Value: bTime.String(),
Help: "The creation time.",
}, {
Value: cTime.String(),
Help: "The last status change time.",
}},
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -235,6 +294,7 @@ type Options struct {
NoPreAllocate bool `config:"no_preallocate"`
NoSparse bool `config:"no_sparse"`
NoSetModTime bool `config:"no_set_modtime"`
TimeType timeType `config:"time_type"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -270,6 +330,11 @@ type Object struct {
translatedLink bool // Is this object a translated link
}
// Directory represents a local filesystem directory
type Directory struct {
Object
}
// ------------------------------------------------------------
var (
@@ -301,15 +366,20 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
f.root = cleanRootPath(root, f.opt.NoUNC, f.opt.Enc)
f.features = (&fs.Features{
CaseInsensitive: f.caseInsensitive(),
CanHaveEmptyDirectories: true,
IsLocal: true,
SlowHash: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: xattrSupported, // can only R/W general purpose metadata if xattrs are supported
FilterAware: true,
PartialUploads: true,
CaseInsensitive: f.caseInsensitive(),
CanHaveEmptyDirectories: true,
IsLocal: true,
SlowHash: true,
ReadMetadata: true,
WriteMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: xattrSupported, // can only R/W general purpose metadata if xattrs are supported
DirModTimeUpdatesOnWrite: true,
UserMetadata: xattrSupported, // can only R/W general purpose metadata if xattrs are supported
FilterAware: true,
PartialUploads: true,
}).Fill(ctx, f)
if opt.FollowSymlinks {
f.lstat = os.Stat
@@ -453,6 +523,15 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
}
// Create new directory object from the info passed in
func (f *Fs) newDirectory(dir string, fi os.FileInfo) *Directory {
o := f.newObject(dir)
o.setMetadata(fi)
return &Directory{
Object: *o,
}
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
@@ -563,7 +642,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// Ignore directories which are symlinks. These are junction points under windows which
// are kind of a souped up symlink. Unix doesn't have directories which are symlinks.
if (mode&os.ModeSymlink) == 0 && f.dev == readDevice(fi, f.opt.OneFileSystem) {
d := fs.NewDir(newRemote, fi.ModTime())
d := f.newDirectory(newRemote, fi)
entries = append(entries, d)
}
} else {
@@ -643,6 +722,58 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return nil
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
o := Object{
fs: f,
remote: dir,
path: f.localPath(dir),
}
return o.SetModTime(ctx, modTime)
}
// MkdirMetadata makes the directory passed in as dir.
//
// It shouldn't return an error if it already exists.
//
// If the metadata is not nil it is set.
//
// It returns the directory that was created.
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
// Find and or create the directory
localPath := f.localPath(dir)
fi, err := f.lstat(localPath)
if errors.Is(err, os.ErrNotExist) {
err := f.Mkdir(ctx, dir)
if err != nil {
return nil, fmt.Errorf("mkdir metadata: failed make directory: %w", err)
}
fi, err = f.lstat(localPath)
if err != nil {
return nil, fmt.Errorf("mkdir metadata: failed to read info: %w", err)
}
} else if err != nil {
return nil, err
}
// Create directory object
d := f.newDirectory(dir, fi)
// Set metadata on the directory object if provided
if metadata != nil {
err = d.writeMetadata(metadata)
if err != nil {
return nil, fmt.Errorf("failed to set metadata on directory: %w", err)
}
// Re-read info now we have finished setting stuff
err = d.lstat()
if err != nil {
return nil, fmt.Errorf("mkdir metadata: failed to re-read info: %w", err)
}
}
return d, nil
}
// Rmdir removes the directory
//
// If it isn't empty it will return an error
@@ -720,27 +851,6 @@ func (f *Fs) readPrecision() (precision time.Duration) {
return
}
// Purge deletes all the files in the directory
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context, dir string) error {
dir = f.localPath(dir)
fi, err := f.lstat(dir)
if err != nil {
// already purged
if os.IsNotExist(err) {
return fs.ErrorDirNotFound
}
return err
}
if !fi.Mode().IsDir() {
return fmt.Errorf("can't purge non directory: %q", dir)
}
return os.RemoveAll(dir)
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given.
@@ -780,6 +890,12 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
// Fetch metadata if --metadata is in use
meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
if err != nil {
return nil, fmt.Errorf("move: failed to read metadata: %w", err)
}
// Do the move
err = os.Rename(srcObj.path, dstObj.path)
if os.IsNotExist(err) {
@@ -795,6 +911,12 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fs.ErrorCantMove
}
// Set metadata if --metadata is in use
err = dstObj.writeMetadata(meta)
if err != nil {
return nil, fmt.Errorf("move: failed to set metadata: %w", err)
}
// Update the info
err = dstObj.lstat()
if err != nil {
@@ -1068,7 +1190,7 @@ func (file *localOpenFile) Read(p []byte) (n int, err error) {
if oldsize != fi.Size() {
return 0, fserrors.NoLowLevelRetryError(fmt.Errorf("can't copy - source file is being updated (size changed from %d to %d)", oldsize, fi.Size()))
}
if !oldtime.Equal(fi.ModTime()) {
if !oldtime.Equal(readTime(file.o.fs.opt.TimeType, fi)) {
return 0, fserrors.NoLowLevelRetryError(fmt.Errorf("can't copy - source file is being updated (mod time changed from %v to %v)", oldtime, fi.ModTime()))
}
}
@@ -1364,7 +1486,7 @@ func (o *Object) setMetadata(info os.FileInfo) {
}
o.fs.objectMetaMu.Lock()
o.size = info.Size()
o.modTime = info.ModTime()
o.modTime = readTime(o.fs.opt.TimeType, info)
o.mode = info.Mode()
o.fs.objectMetaMu.Unlock()
// Read the size of the link.
@@ -1433,6 +1555,18 @@ func (o *Object) writeMetadata(metadata fs.Metadata) (err error) {
return err
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
err := o.writeMetadata(metadata)
if err != nil {
return fmt.Errorf("SetMetadata failed on Object: %w", err)
}
// Re-read info now we have finished setting stuff
return o.lstat()
}
func cleanRootPath(s string, noUNC bool, enc encoder.MultiEncoder) string {
if runtime.GOOS != "windows" || !strings.HasPrefix(s, "\\") {
if !filepath.IsAbs(s) {
@@ -1447,6 +1581,10 @@ func cleanRootPath(s string, noUNC bool, enc encoder.MultiEncoder) string {
if runtime.GOOS == "windows" {
s = filepath.ToSlash(s)
vol := filepath.VolumeName(s)
if vol == `\\?` && len(s) >= 6 {
// `\\?\C:`
vol = s[:6]
}
s = vol + enc.FromStandardPath(s[len(vol):])
s = filepath.FromSlash(s)
if !noUNC {
@@ -1459,15 +1597,52 @@ func cleanRootPath(s string, noUNC bool, enc encoder.MultiEncoder) string {
return s
}
// Items returns the count of items in this directory or this
// directory and subdirectories if known, -1 for unknown
func (d *Directory) Items() int64 {
return -1
}
// ID returns the internal ID of this directory if known, or
// "" otherwise
func (d *Directory) ID() string {
return ""
}
// SetMetadata sets metadata for a Directory
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (d *Directory) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
err := d.writeMetadata(metadata)
if err != nil {
return fmt.Errorf("SetMetadata failed on Directory: %w", err)
}
// Re-read info now we have finished setting stuff
return d.lstat()
}
// Hash does nothing on a directory
//
// This method is implemented with the incorrect type signature to
// stop the Directory type asserting to fs.Object or fs.ObjectInfo
func (d *Directory) Hash() {
// Does nothing
}
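The Hash() trick above (also used by the crypt and drive Directory types) works because interface satisfaction in Go requires an exact method signature: declaring Hash() with no arguments shadows the promoted Hash method of the embedded Object, so *Directory no longer asserts to fs.Object. A minimal illustration with hypothetical interfaces:

package main

import "fmt"

type object interface {
	Hash(kind string) (string, error)
	Remote() string
}

type base struct{ remote string }

func (b base) Remote() string                   { return b.remote }
func (b base) Hash(kind string) (string, error) { return "cafebabe", nil }

// dir embeds base but redeclares Hash with a different signature, so the
// promoted Hash no longer satisfies the object interface.
type dir struct{ base }

func (d dir) Hash() {} // does nothing; exists only to block the assertion

func main() {
	entries := []interface{}{base{"file.txt"}, dir{base{"photos"}}}
	for _, e := range entries {
		if o, ok := e.(object); ok {
			fmt.Println("object:", o.Remote())
		} else {
			fmt.Println("not an object (directory)")
		}
	}
}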
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}
_ fs.Purger = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.Mover = &Fs{}
_ fs.DirMover = &Fs{}
_ fs.Commander = &Fs{}
_ fs.OpenWriterAter = &Fs{}
_ fs.Object = &Object{}
_ fs.Metadataer = &Object{}
_ fs.Fs = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.Mover = &Fs{}
_ fs.DirMover = &Fs{}
_ fs.Commander = &Fs{}
_ fs.OpenWriterAter = &Fs{}
_ fs.DirSetModTimer = &Fs{}
_ fs.MkdirMetadataer = &Fs{}
_ fs.Object = &Object{}
_ fs.Metadataer = &Object{}
_ fs.SetMetadataer = &Object{}
_ fs.Directory = &Directory{}
_ fs.SetModTimer = &Directory{}
_ fs.SetMetadataer = &Directory{}
)
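The var block above is the usual compile-time assertion idiom: assigning a value (here &Fs{}, or equally a typed nil) to a blank identifier of the interface type makes the build fail as soon as the type stops implementing that interface. A tiny sketch of the same idiom with standard-library types only:

package main

import (
	"bytes"
	"io"
)

// These blank assignments cost nothing at runtime; they only force the
// compiler to verify that *bytes.Buffer still implements the interfaces.
var (
	_ io.Reader = (*bytes.Buffer)(nil)
	_ io.Writer = (*bytes.Buffer)(nil)
)

func main() {}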

View File

@@ -76,6 +76,24 @@ func TestUpdatingCheck(t *testing.T) {
}
// Test corrupted on transfer
// should error due to size/hash mismatch
func TestVerifyCopy(t *testing.T) {
t.Skip("FIXME this test is unreliable")
r := fstest.NewRun(t)
filePath := "sub dir/local test"
r.WriteFile(filePath, "some content", time.Now())
src, err := r.Flocal.NewObject(context.Background(), filePath)
require.NoError(t, err)
src.(*Object).fs.opt.NoCheckUpdated = true
for i := 0; i < 100; i++ {
go r.WriteFile(src.Remote(), fmt.Sprintf("some new content %d", i), src.ModTime(context.Background()))
}
_, err = operations.Copy(context.Background(), r.Fremote, nil, filePath+"2", src)
assert.Error(t, err)
}
func TestSymlink(t *testing.T) {
ctx := context.Background()
r := fstest.NewRun(t)

View File

@@ -1,16 +1,34 @@
//go:build darwin || freebsd || netbsd
// +build darwin freebsd netbsd
package local
import (
"fmt"
"os"
"syscall"
"time"
"github.com/rclone/rclone/fs"
)
// Read the time specified from the os.FileInfo
func readTime(t timeType, fi os.FileInfo) time.Time {
stat, ok := fi.Sys().(*syscall.Stat_t)
if !ok {
fs.Debugf(nil, "didn't return Stat_t as expected")
return fi.ModTime()
}
switch t {
case aTime:
return time.Unix(stat.Atimespec.Unix())
case bTime:
return time.Unix(stat.Birthtimespec.Unix())
case cTime:
return time.Unix(stat.Ctimespec.Unix())
}
return fi.ModTime()
}
// Read the metadata from the file into metadata where possible
func (o *Object) readMetadataFromFile(m *fs.Metadata) (err error) {
info, err := o.fs.lstat(o.path)

View File

@@ -1,12 +1,13 @@
//go:build linux
// +build linux
package local
import (
"fmt"
"os"
"runtime"
"sync"
"syscall"
"time"
"github.com/rclone/rclone/fs"
@@ -18,6 +19,22 @@ var (
readMetadataFromFileFn func(o *Object, m *fs.Metadata) (err error)
)
// Read the time specified from the os.FileInfo
func readTime(t timeType, fi os.FileInfo) time.Time {
stat, ok := fi.Sys().(*syscall.Stat_t)
if !ok {
fs.Debugf(nil, "didn't return Stat_t as expected")
return fi.ModTime()
}
switch t {
case aTime:
return time.Unix(stat.Atim.Unix())
case cTime:
return time.Unix(stat.Ctim.Unix())
}
return fi.ModTime()
}
// Read the metadata from the file into metadata where possible
func (o *Object) readMetadataFromFile(m *fs.Metadata) (err error) {
statxCheckOnce.Do(func() {

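For context, the Atim/Ctim fields above come from the platform-specific value behind fi.Sys(). A small standalone sketch of the same technique, restricted to Linux and using an arbitrary illustrative path:

//go:build linux

package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

func main() {
	fi, err := os.Lstat("/etc/hostname") // illustrative path; any existing file works
	if err != nil {
		panic(err)
	}
	// fi.Sys() is platform specific; on Linux it is a *syscall.Stat_t.
	if stat, ok := fi.Sys().(*syscall.Stat_t); ok {
		fmt.Println("atime:", time.Unix(stat.Atim.Unix()))
		fmt.Println("ctime:", time.Unix(stat.Ctim.Unix()))
	} else {
		fmt.Println("fallback to mtime:", fi.ModTime())
	}
}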
View File

@@ -1,14 +1,20 @@
//go:build plan9 || js
// +build plan9 js
//go:build dragonfly || plan9 || js
package local
import (
"fmt"
"os"
"time"
"github.com/rclone/rclone/fs"
)
// Read the time specified from the os.FileInfo
func readTime(t timeType, fi os.FileInfo) time.Time {
return fi.ModTime()
}
// Read the metadata from the file into metadata where possible
func (o *Object) readMetadataFromFile(m *fs.Metadata) (err error) {
info, err := o.fs.lstat(o.path)

View File

@@ -1,16 +1,32 @@
//go:build openbsd || solaris
// +build openbsd solaris
package local
import (
"fmt"
"os"
"syscall"
"time"
"github.com/rclone/rclone/fs"
)
// Read the time specified from the os.FileInfo
func readTime(t timeType, fi os.FileInfo) time.Time {
stat, ok := fi.Sys().(*syscall.Stat_t)
if !ok {
fs.Debugf(nil, "didn't return Stat_t as expected")
return fi.ModTime()
}
switch t {
case aTime:
return time.Unix(stat.Atim.Unix())
case cTime:
return time.Unix(stat.Ctim.Unix())
}
return fi.ModTime()
}
// Read the metadata from the file into metadata where possible
func (o *Object) readMetadataFromFile(m *fs.Metadata) (err error) {
info, err := o.fs.lstat(o.path)

View File

@@ -1,16 +1,32 @@
//go:build windows
// +build windows
package local
import (
"fmt"
"os"
"syscall"
"time"
"github.com/rclone/rclone/fs"
)
// Read the time specified from the os.FileInfo
func readTime(t timeType, fi os.FileInfo) time.Time {
stat, ok := fi.Sys().(*syscall.Win32FileAttributeData)
if !ok {
fs.Debugf(nil, "didn't return Win32FileAttributeData as expected")
return fi.ModTime()
}
switch t {
case aTime:
return time.Unix(0, stat.LastAccessTime.Nanoseconds())
case bTime:
return time.Unix(0, stat.CreationTime.Nanoseconds())
}
return fi.ModTime()
}
// Read the metadata from the file into metadata where possible
func (o *Object) readMetadataFromFile(m *fs.Metadata) (err error) {
info, err := o.fs.lstat(o.path)

View File

@@ -1,7 +1,6 @@
// Device reading functions
//go:build !darwin && !dragonfly && !freebsd && !linux && !netbsd && !openbsd && !solaris
// +build !darwin,!dragonfly,!freebsd,!linux,!netbsd,!openbsd,!solaris
package local

View File

@@ -1,7 +1,6 @@
// Device reading functions
//go:build darwin || dragonfly || freebsd || linux || netbsd || openbsd || solaris
// +build darwin dragonfly freebsd linux netbsd openbsd solaris
package local

View File

@@ -1,5 +1,4 @@
//go:build !windows
// +build !windows
package local

View File

@@ -1,18 +1,13 @@
//go:build windows
// +build windows
package local
import (
"os"
"syscall"
"time"
"github.com/rclone/rclone/fs"
)
const (
ERROR_SHARING_VIOLATION syscall.Errno = 32
"golang.org/x/sys/windows"
)
// Removes name, retrying on a sharing violation
@@ -28,7 +23,7 @@ func remove(name string) (err error) {
if !ok {
break
}
if pathErr.Err != ERROR_SHARING_VIOLATION {
if pathErr.Err != windows.ERROR_SHARING_VIOLATION {
break
}
fs.Logf(name, "Remove detected sharing violation - retry %d/%d sleeping %v", i+1, maxTries, sleepTime)

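The check above digs the concrete errno out of the *os.PathError returned by os.Remove before deciding whether to retry. A portable sketch of that unwrapping step (it removes a file that does not exist, so the error path is taken deterministically):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"syscall"
)

func main() {
	err := os.Remove("this-file-does-not-exist")
	var pathErr *fs.PathError
	if errors.As(err, &pathErr) {
		// pathErr.Err holds the raw OS error; compare it to a syscall.Errno
		// (e.g. windows.ERROR_SHARING_VIOLATION in the retry loop above).
		if errno, ok := pathErr.Err.(syscall.Errno); ok {
			fmt.Printf("errno %d: %v\n", uintptr(errno), errno)
		}
	}
}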
View File

@@ -1,5 +1,4 @@
//go:build !windows
// +build !windows
package local

View File

@@ -1,10 +1,8 @@
//go:build windows
// +build windows
package local
import (
"os"
"syscall"
"time"
)
@@ -13,7 +11,13 @@ const haveSetBTime = true
// setBTime sets the birth time of the file passed in
func setBTime(name string, btime time.Time) (err error) {
h, err := syscall.Open(name, os.O_RDWR, 0755)
pathp, err := syscall.UTF16PtrFromString(name)
if err != nil {
return err
}
h, err := syscall.CreateFile(pathp,
syscall.FILE_WRITE_ATTRIBUTES, syscall.FILE_SHARE_WRITE, nil,
syscall.OPEN_EXISTING, syscall.FILE_FLAG_BACKUP_SEMANTICS, 0)
if err != nil {
return err
}

View File

@@ -1,5 +1,4 @@
//go:build !windows && !plan9 && !js
// +build !windows,!plan9,!js
package local

View File

@@ -1,5 +1,4 @@
//go:build windows || plan9 || js
// +build windows plan9 js
package local

View File

@@ -1,5 +1,4 @@
//go:build !openbsd && !plan9
// +build !openbsd,!plan9
package local

View File

@@ -1,7 +1,7 @@
//go:build openbsd || plan9
// +build openbsd plan9
// The pkg/xattr module doesn't compile for openbsd or plan9
//go:build openbsd || plan9
package local
import "github.com/rclone/rclone/fs"

View File

@@ -46,8 +46,8 @@ import (
// Global constants
const (
minSleepPacer = 10 * time.Millisecond
maxSleepPacer = 2 * time.Second
minSleepPacer = 100 * time.Millisecond
maxSleepPacer = 5 * time.Second
decayConstPacer = 2 // bigger for slower decay, exponential
metaExpirySec = 20 * 60 // meta server expiration time
serverExpirySec = 3 * 60 // download server expiration time

View File

@@ -38,8 +38,7 @@ func init() {
}
// Options defines the configuration for this backend
type Options struct {
}
type Options struct{}
// Fs represents a remote memory server
type Fs struct {
@@ -297,7 +296,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
slash := strings.IndexRune(localPath, '/')
if slash >= 0 {
// send a directory if have a slash
dir := directory + localPath[:slash]
dir := strings.TrimPrefix(directory, f.rootDirectory+"/") + localPath[:slash]
if addBucket {
dir = path.Join(bucket, dir)
}
@@ -385,10 +384,22 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
bucket, directory := f.split(dir)
list := walk.NewListRHelper(callback)
entries := fs.DirEntries{}
listR := func(bucket, directory, prefix string, addBucket bool) error {
return f.list(ctx, bucket, directory, prefix, addBucket, true, func(remote string, entry fs.DirEntry, isDirectory bool) error {
return list.Add(entry)
err = f.list(ctx, bucket, directory, prefix, addBucket, true, func(remote string, entry fs.DirEntry, isDirectory bool) error {
entries = append(entries, entry) // can't list.Add here -- could deadlock
return nil
})
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
return nil
}
if bucket == "" {
entries, err := f.listBuckets(ctx)
@@ -482,7 +493,8 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
if od == nil {
return nil, fs.ErrorObjectNotFound
}
buckets.updateObjectData(dstBucket, dstPath, od)
odCopy := *od
buckets.updateObjectData(dstBucket, dstPath, &odCopy)
return f.NewObject(ctx, remote)
}

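The buffering in ListR above exists because the callback handed to list.Add may itself try to take the lock the listing holds. A generic sketch of that collect-then-emit pattern, with a hypothetical store type standing in for the backend:

package main

import (
	"fmt"
	"sync"
)

// store is a hypothetical container guarded by a mutex.
type store struct {
	mu    sync.Mutex
	items []string
}

// forEachBuffered copies the items while holding the lock, then invokes the
// callback only after the lock is released, so the callback may safely call
// methods that take the same lock (the situation ListR above has to avoid).
func (s *store) forEachBuffered(callback func(string) error) error {
	s.mu.Lock()
	buffered := append([]string(nil), s.items...)
	s.mu.Unlock()
	for _, it := range buffered {
		if err := callback(it); err != nil {
			return err
		}
	}
	return nil
}

// remove deletes one item under the lock.
func (s *store) remove(name string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	for i, it := range s.items {
		if it == name {
			s.items = append(s.items[:i], s.items[i+1:]...)
			return
		}
	}
}

func main() {
	s := &store{items: []string{"a", "b", "c"}}
	_ = s.forEachBuffered(func(name string) error {
		s.remove(name) // would deadlock if the lock were still held
		fmt.Println("removed", name)
		return nil
	})
}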
View File

@@ -0,0 +1,40 @@
package memory
import (
"context"
"fmt"
"testing"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/require"
)
var t1 = fstest.Time("2001-02-03T04:05:06.499999999Z")
// InternalTest dispatches all internal tests
func (f *Fs) InternalTest(t *testing.T) {
t.Run("PurgeListDeadlock", func(t *testing.T) {
testPurgeListDeadlock(t)
})
}
// test that Purge fallback does not result in deadlock from concurrently listing and removing
func testPurgeListDeadlock(t *testing.T) {
ctx := context.Background()
r := fstest.NewRunIndividual(t)
r.Mkdir(ctx, r.Fremote)
r.Fremote.Features().Disable("Purge") // force fallback-purge
// make a lot of files to prevent it from finishing too quickly
for i := 0; i < 100; i++ {
dst := "file" + fmt.Sprint(i) + ".txt"
r.WriteObject(ctx, dst, "hello", t1)
}
require.NoError(t, operations.Purge(ctx, r.Fremote, ""))
}
var _ fstests.InternalTester = (*Fs)(nil)

View File

@@ -15,6 +15,7 @@ import (
"math/rand"
"net/http"
"net/url"
"path"
"strconv"
"strings"
"sync"
@@ -260,6 +261,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
case fs.ErrorObjectNotFound:
return f, nil
case fs.ErrorIsFile:
// Correct root if definitely pointing to a file
f.root = path.Dir(f.root)
if f.root == "." || f.root == "/" {
f.root = ""
}
// Fs points to the parent directory
return f, err
default:
@@ -437,7 +443,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
URL := f.url(dir)
files, err := f.netStorageDirRequest(ctx, dir, URL)
files, err := f.netStorageDirRequest(ctx, URL)
if err != nil {
return nil, err
}
@@ -926,7 +932,7 @@ func (f *Fs) netStorageStatRequest(ctx context.Context, URL string, directory bo
}
// netStorageDirRequest performs a NetStorage dir request
func (f *Fs) netStorageDirRequest(ctx context.Context, dir string, URL string) ([]File, error) {
func (f *Fs) netStorageDirRequest(ctx context.Context, URL string) ([]File, error) {
const actionHeader = "version=1&action=dir&format=xml&encoding=utf-8"
statResp := &Stat{}
if _, err := f.callBackend(ctx, URL, "GET", actionHeader, false, statResp, nil); err != nil {

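The fs.ErrorIsFile branch above strips the file name from the configured root with path.Dir and maps the "." and "/" results back to the empty string. A quick sketch of what that normalisation produces for a few illustrative roots:

package main

import (
	"fmt"
	"path"
)

func main() {
	for _, root := range []string{"cpcode/dir/file.txt", "file.txt", "/file.txt"} {
		dir := path.Dir(root)
		if dir == "." || dir == "/" {
			dir = "" // root becomes the parent directory, or empty at the top level
		}
		fmt.Printf("%q -> %q\n", root, dir)
	}
}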
View File

@@ -7,7 +7,7 @@ import (
)
const (
timeFormat = `"` + time.RFC3339 + `"`
timeFormat = `"` + "2006-01-02T15:04:05.999Z" + `"`
// PackageTypeOneNote is the package type value for OneNote files
PackageTypeOneNote = "oneNote"
@@ -40,17 +40,22 @@ var _ error = (*Error)(nil)
// Identity represents an identity of an actor. For example, an actor

// can be a user, device, or application.
type Identity struct {
DisplayName string `json:"displayName"`
ID string `json:"id"`
DisplayName string `json:"displayName,omitempty"`
ID string `json:"id,omitempty"`
Email string `json:"email,omitempty"` // not officially documented, but seems to sometimes exist
LoginName string `json:"loginName,omitempty"` // SharePoint only
}
// IdentitySet is a keyed collection of Identity objects. It is used
// to represent a set of identities associated with various events for
// an item, such as created by or last modified by.
type IdentitySet struct {
User Identity `json:"user"`
Application Identity `json:"application"`
Device Identity `json:"device"`
User Identity `json:"user,omitempty"`
Application Identity `json:"application,omitempty"`
Device Identity `json:"device,omitempty"`
Group Identity `json:"group,omitempty"`
SiteGroup Identity `json:"siteGroup,omitempty"` // The SharePoint group associated with this action. Optional.
SiteUser Identity `json:"siteUser,omitempty"` // The SharePoint user associated with this action. Optional.
}
// Quota groups storage space quota-related information on OneDrive into a single structure.
@@ -150,16 +155,15 @@ type FileFacet struct {
// facet can be used to specify the last modified date or created date
// of the item as it was on the local device.
type FileSystemInfoFacet struct {
CreatedDateTime Timestamp `json:"createdDateTime"` // The UTC date and time the file was created on a client.
LastModifiedDateTime Timestamp `json:"lastModifiedDateTime"` // The UTC date and time the file was last modified on a client.
CreatedDateTime Timestamp `json:"createdDateTime,omitempty"` // The UTC date and time the file was created on a client.
LastModifiedDateTime Timestamp `json:"lastModifiedDateTime,omitempty"` // The UTC date and time the file was last modified on a client.
}
// DeletedFacet indicates that the item on OneDrive has been
// deleted. In this version of the API, the presence (non-null) of the
// facet value indicates that the file was deleted. A null (or
// missing) value indicates that the file is not deleted.
type DeletedFacet struct {
}
type DeletedFacet struct{}
// PackageFacet indicates that a DriveItem is the top level item
// in a "package" or a collection of items that should be treated as a collection instead of individual items.
@@ -168,31 +172,143 @@ type PackageFacet struct {
Type string `json:"type"`
}
// SharedType indicates a DriveItem has been shared with others. The resource includes information about how the item is shared.
// If a DriveItem has a non-null shared facet, the item has been shared.
type SharedType struct {
Owner IdentitySet `json:"owner,omitempty"` // The identity of the owner of the shared item. Read-only.
Scope string `json:"scope,omitempty"` // Indicates the scope of how the item is shared: anonymous, organization, or users. Read-only.
SharedBy IdentitySet `json:"sharedBy,omitempty"` // The identity of the user who shared the item. Read-only.
SharedDateTime Timestamp `json:"sharedDateTime,omitempty"` // The UTC date and time when the item was shared. Read-only.
}
// SharingInvitationType groups invitation-related data items into a single structure.
type SharingInvitationType struct {
Email string `json:"email,omitempty"` // The email address provided for the recipient of the sharing invitation. Read-only.
InvitedBy *IdentitySet `json:"invitedBy,omitempty"` // Provides information about who sent the invitation that created this permission, if that information is available. Read-only.
SignInRequired bool `json:"signInRequired,omitempty"` // If true the recipient of the invitation needs to sign in in order to access the shared item. Read-only.
}
// SharingLinkType groups link-related data items into a single structure.
// If a Permission resource has a non-null sharingLink facet, the permission represents a sharing link (as opposed to permissions granted to a person or group).
type SharingLinkType struct {
Application *Identity `json:"application,omitempty"` // The app the link is associated with.
Type LinkType `json:"type,omitempty"` // The type of the link created.
Scope LinkScope `json:"scope,omitempty"` // The scope of the link represented by this permission. Value anonymous indicates the link is usable by anyone, organization indicates the link is only usable for users signed into the same tenant.
WebHTML string `json:"webHtml,omitempty"` // For embed links, this property contains the HTML code for an <iframe> element that will embed the item in a webpage.
WebURL string `json:"webUrl,omitempty"` // A URL that opens the item in the browser on the OneDrive website.
}
// LinkType represents the type of SharingLinkType created.
type LinkType string
const (
ViewLinkType LinkType = "view" // ViewLinkType (role: read) A view-only sharing link, allowing read-only access.
EditLinkType LinkType = "edit" // EditLinkType (role: write) An edit sharing link, allowing read-write access.
EmbedLinkType LinkType = "embed" // EmbedLinkType (role: read) A view-only sharing link that can be used to embed content into a host webpage. Embed links are not available for OneDrive for Business or SharePoint.
)
// LinkScope represents the scope of the link represented by this permission.
// Value anonymous indicates the link is usable by anyone, organization indicates the link is only usable for users signed into the same tenant.
type LinkScope string
const (
AnonymousScope LinkScope = "anonymous" // AnonymousScope = Anyone with the link has access, without needing to sign in. This may include people outside of your organization.
OrganizationScope LinkScope = "organization" // OrganizationScope = Anyone signed into your organization (tenant) can use the link to get access. Only available in OneDrive for Business and SharePoint.
)
// PermissionsType provides information about a sharing permission granted for a DriveItem resource.
// Sharing permissions have a number of different forms. The Permission resource represents these different forms through facets on the resource.
type PermissionsType struct {
ID string `json:"id"` // The unique identifier of the permission among all permissions on the item. Read-only.
GrantedTo *IdentitySet `json:"grantedTo,omitempty"` // For user type permissions, the details of the users & applications for this permission. Read-only. Deprecated on OneDrive Business only.
GrantedToIdentities []*IdentitySet `json:"grantedToIdentities,omitempty"` // For link type permissions, the details of the users to whom permission was granted. Read-only. Deprecated on OneDrive Business only.
GrantedToV2 *IdentitySet `json:"grantedToV2,omitempty"` // For user type permissions, the details of the users & applications for this permission. Read-only. Not available for OneDrive Personal.
GrantedToIdentitiesV2 []*IdentitySet `json:"grantedToIdentitiesV2,omitempty"` // For link type permissions, the details of the users to whom permission was granted. Read-only. Not available for OneDrive Personal.
Invitation *SharingInvitationType `json:"invitation,omitempty"` // Details of any associated sharing invitation for this permission. Read-only.
InheritedFrom *ItemReference `json:"inheritedFrom,omitempty"` // Provides a reference to the ancestor of the current permission, if it is inherited from an ancestor. Read-only.
Link *SharingLinkType `json:"link,omitempty"` // Provides the link details of the current permission, if it is a link type permissions. Read-only.
Roles []Role `json:"roles,omitempty"` // The type of permission (read, write, owner, member). Read-only.
ShareID string `json:"shareId,omitempty"` // A unique token that can be used to access this shared item via the shares API. Read-only.
}
// Role represents the type of permission (read, write, owner, member)
type Role string
const (
ReadRole Role = "read" // ReadRole provides the ability to read the metadata and contents of the item.
WriteRole Role = "write" // WriteRole provides the ability to read and modify the metadata and contents of the item.
OwnerRole Role = "owner" // OwnerRole represents the owner role for SharePoint and OneDrive for Business.
MemberRole Role = "member" // MemberRole represents the member role for SharePoint and OneDrive for Business.
)
// PermissionsResponse is the response to the list permissions method
type PermissionsResponse struct {
Value []*PermissionsType `json:"value"` // An array of Item objects
}
// AddPermissionsRequest is the request for the add permissions method
type AddPermissionsRequest struct {
Recipients []DriveRecipient `json:"recipients,omitempty"` // A collection of recipients who will receive access and the sharing invitation.
Message string `json:"message,omitempty"` // A plain text formatted message that is included in the sharing invitation. Maximum length 2000 characters.
RequireSignIn bool `json:"requireSignIn,omitempty"` // Specifies whether the recipient of the invitation is required to sign-in to view the shared item.
SendInvitation bool `json:"sendInvitation,omitempty"` // If true, a sharing link is sent to the recipient. Otherwise, a permission is granted directly without sending a notification.
Roles []Role `json:"roles,omitempty"` // Specify the roles that are to be granted to the recipients of the sharing invitation.
RetainInheritedPermissions bool `json:"retainInheritedPermissions,omitempty"` // Optional. If true (default), any existing inherited permissions are retained on the shared item when sharing this item for the first time. If false, all existing permissions are removed when sharing for the first time. OneDrive Business Only.
}
// UpdatePermissionsRequest is the request for the update permissions method
type UpdatePermissionsRequest struct {
Roles []Role `json:"roles,omitempty"` // Specify the roles that are to be granted to the recipients of the sharing invitation.
}
// DriveRecipient represents a person, group, or other recipient to share with using the invite action.
type DriveRecipient struct {
Email string `json:"email,omitempty"` // The email address for the recipient, if the recipient has an associated email address.
Alias string `json:"alias,omitempty"` // The alias of the domain object, for cases where an email address is unavailable (e.g. security groups).
ObjectID string `json:"objectId,omitempty"` // The unique identifier for the recipient in the directory.
}
// Item represents metadata for an item in OneDrive
type Item struct {
ID string `json:"id"` // The unique identifier of the item within the Drive. Read-only.
Name string `json:"name"` // The name of the item (filename and extension). Read-write.
ETag string `json:"eTag"` // eTag for the entire item (metadata + content). Read-only.
CTag string `json:"cTag"` // An eTag for the content of the item. This eTag is not changed if only the metadata is changed. Read-only.
CreatedBy IdentitySet `json:"createdBy"` // Identity of the user, device, and application which created the item. Read-only.
LastModifiedBy IdentitySet `json:"lastModifiedBy"` // Identity of the user, device, and application which last modified the item. Read-only.
CreatedDateTime Timestamp `json:"createdDateTime"` // Date and time of item creation. Read-only.
LastModifiedDateTime Timestamp `json:"lastModifiedDateTime"` // Date and time the item was last modified. Read-only.
Size int64 `json:"size"` // Size of the item in bytes. Read-only.
ParentReference *ItemReference `json:"parentReference"` // Parent information, if the item has a parent. Read-write.
WebURL string `json:"webUrl"` // URL that displays the resource in the browser. Read-only.
Description string `json:"description"` // Provide a user-visible description of the item. Read-write.
Folder *FolderFacet `json:"folder"` // Folder metadata, if the item is a folder. Read-only.
File *FileFacet `json:"file"` // File metadata, if the item is a file. Read-only.
RemoteItem *RemoteItemFacet `json:"remoteItem"` // Remote Item metadata, if the item is a remote shared item. Read-only.
FileSystemInfo *FileSystemInfoFacet `json:"fileSystemInfo"` // File system information on client. Read-write.
ID string `json:"id"` // The unique identifier of the item within the Drive. Read-only.
Name string `json:"name"` // The name of the item (filename and extension). Read-write.
ETag string `json:"eTag"` // eTag for the entire item (metadata + content). Read-only.
CTag string `json:"cTag"` // An eTag for the content of the item. This eTag is not changed if only the metadata is changed. Read-only.
CreatedBy IdentitySet `json:"createdBy"` // Identity of the user, device, and application which created the item. Read-only.
LastModifiedBy IdentitySet `json:"lastModifiedBy"` // Identity of the user, device, and application which last modified the item. Read-only.
CreatedDateTime Timestamp `json:"createdDateTime"` // Date and time of item creation. Read-only.
LastModifiedDateTime Timestamp `json:"lastModifiedDateTime"` // Date and time the item was last modified. Read-only.
Size int64 `json:"size"` // Size of the item in bytes. Read-only.
ParentReference *ItemReference `json:"parentReference"` // Parent information, if the item has a parent. Read-write.
WebURL string `json:"webUrl"` // URL that displays the resource in the browser. Read-only.
Description string `json:"description,omitempty"` // Provides a user-visible description of the item. Read-write. Only on OneDrive Personal. Undocumented limit of 1024 characters.
Folder *FolderFacet `json:"folder"` // Folder metadata, if the item is a folder. Read-only.
File *FileFacet `json:"file"` // File metadata, if the item is a file. Read-only.
RemoteItem *RemoteItemFacet `json:"remoteItem"` // Remote Item metadata, if the item is a remote shared item. Read-only.
FileSystemInfo *FileSystemInfoFacet `json:"fileSystemInfo"` // File system information on client. Read-write.
// Image *ImageFacet `json:"image"` // Image metadata, if the item is an image. Read-only.
// Photo *PhotoFacet `json:"photo"` // Photo metadata, if the item is a photo. Read-only.
// Audio *AudioFacet `json:"audio"` // Audio metadata, if the item is an audio file. Read-only.
// Video *VideoFacet `json:"video"` // Video metadata, if the item is a video. Read-only.
// Location *LocationFacet `json:"location"` // Location metadata, if the item has location data. Read-only.
Package *PackageFacet `json:"package"` // If present, indicates that this item is a package instead of a folder or file. Packages are treated like files in some contexts and folders in others. Read-only.
Deleted *DeletedFacet `json:"deleted"` // Information about the deleted state of the item. Read-only.
Package *PackageFacet `json:"package"` // If present, indicates that this item is a package instead of a folder or file. Packages are treated like files in some contexts and folders in others. Read-only.
Deleted *DeletedFacet `json:"deleted"` // Information about the deleted state of the item. Read-only.
Malware *struct{} `json:"malware,omitempty"` // Malware metadata, if the item was detected to contain malware. Read-only. (Currently has no properties.)
Shared *SharedType `json:"shared,omitempty"` // Indicates that the item has been shared with others and provides information about the shared state of the item. Read-only.
}
// Metadata represents a request to update Metadata.
// It includes only the writeable properties.
// omitempty is intentionally included for all, per https://learn.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_update?view=odsp-graph-online#request-body
type Metadata struct {
Description string `json:"description,omitempty"` // Provides a user-visible description of the item. Read-write. Only on OneDrive Personal. Undocumented limit of 1024 characters.
FileSystemInfo *FileSystemInfoFacet `json:"fileSystemInfo,omitempty"` // File system information on client. Read-write.
}
// IsEmpty returns true if the metadata is empty (there is nothing to set)
func (m Metadata) IsEmpty() bool {
return m.Description == "" && (m.FileSystemInfo == nil || *m.FileSystemInfo == FileSystemInfoFacet{})
}
// DeltaResponse is the response to the view delta method
@@ -216,6 +332,12 @@ type CreateItemRequest struct {
ConflictBehavior string `json:"@name.conflictBehavior"` // Determines what to do if an item with a matching name already exists in this folder. Accepted values are: rename, replace, and fail (the default).
}
// CreateItemWithMetadataRequest is like CreateItemRequest but also allows setting Metadata
type CreateItemWithMetadataRequest struct {
CreateItemRequest
Metadata
}
// SetFileSystemInfo is used to Update an object's FileSystemInfo.
type SetFileSystemInfo struct {
FileSystemInfo FileSystemInfoFacet `json:"fileSystemInfo"` // File system information on client. Read-write.
@@ -223,7 +345,7 @@ type SetFileSystemInfo struct {
// CreateUploadRequest is used by CreateUploadSession to set the dates correctly
type CreateUploadRequest struct {
Item SetFileSystemInfo `json:"item"`
Item Metadata `json:"item"`
}
// CreateUploadResponse is the response from creating an upload session
@@ -419,6 +541,11 @@ func (i *Item) GetParentReference() *ItemReference {
return i.ParentReference
}
// MalwareDetected returns true if OneDrive has detected that this item contains malware.
func (i *Item) MalwareDetected() bool {
return i.Malware != nil
}
// IsRemote checks if item is a remote item
func (i *Item) IsRemote() bool {
return i.RemoteItem != nil
@@ -461,7 +588,7 @@ type DrivesResponse struct {
Drives []DriveResource `json:"value"`
}
// SiteResource is part of the response from from "/sites/root:"
// SiteResource is part of the response from "/sites/root:"
type SiteResource struct {
SiteID string `json:"id"`
SiteName string `json:"displayName"`
@@ -472,3 +599,25 @@ type SiteResource struct {
type SiteResponse struct {
Sites []SiteResource `json:"value"`
}
// GetGrantedTo returns the GrantedTo property.
// This is to get around the odd problem of
// GrantedTo being deprecated on OneDrive Business, while
// GrantedToV2 is unavailable on OneDrive Personal.
func (p *PermissionsType) GetGrantedTo(driveType string) *IdentitySet {
if driveType == "personal" {
return p.GrantedTo
}
return p.GrantedToV2
}
// GetGrantedToIdentities returns the GrantedToIdentities property.
// This is to get around the odd problem of
// GrantedToIdentities being deprecated on OneDrive Business, while
// GrantedToIdentitiesV2 is unavailable on OneDrive Personal.
func (p *PermissionsType) GetGrantedToIdentities(driveType string) []*IdentitySet {
if driveType == "personal" {
return p.GrantedToIdentities
}
return p.GrantedToIdentitiesV2
}

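The timeFormat above replaces time.RFC3339 with a fixed layout in which ".999" prints up to millisecond precision but omits trailing zeros, and the final Z is a literal character. A short sketch of how that layout behaves (times are arbitrary examples):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02T15:04:05.999Z"
	t := time.Date(2024, 6, 24, 9, 30, 59, 123_000_000, time.UTC)
	fmt.Println(t.Format(layout))                       // 2024-06-24T09:30:59.123Z
	fmt.Println(t.Truncate(time.Second).Format(layout)) // 2024-06-24T09:30:59Z (fraction omitted)
	fmt.Println(t.Format(`"` + layout + `"`))           // quoted, as embedded in the JSON timeFormat
}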
View File

@@ -0,0 +1,982 @@
package onedrive
import (
"context"
"encoding/json"
"errors"
"fmt"
"net/http"
"strings"
"time"
"github.com/rclone/rclone/backend/onedrive/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/errcount"
"golang.org/x/exp/slices" // replace with slices after go1.21 is the minimum version
)
const (
dirMimeType = "inode/directory"
timeFormatIn = time.RFC3339
timeFormatOut = "2006-01-02T15:04:05.999Z" // mS for OneDrive Personal, otherwise only S
)
// system metadata keys which this backend owns
var systemMetadataInfo = map[string]fs.MetadataHelp{
"content-type": {
Help: "The MIME type of the file.",
Type: "string",
Example: "text/plain",
ReadOnly: true,
},
"mtime": {
Help: "Time of last modification with S accuracy (mS for OneDrive Personal).",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05Z",
},
"btime": {
Help: "Time of file birth (creation) with S accuracy (mS for OneDrive Personal).",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05Z",
},
"utime": {
Help: "Time of upload with S accuracy (mS for OneDrive Personal).",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05Z",
ReadOnly: true,
},
"created-by-display-name": {
Help: "Display name of the user that created the item.",
Type: "string",
Example: "John Doe",
ReadOnly: true,
},
"created-by-id": {
Help: "ID of the user that created the item.",
Type: "string",
Example: "48d31887-5fad-4d73-a9f5-3c356e68a038",
ReadOnly: true,
},
"description": {
Help: "A short description of the file. Max 1024 characters. Only supported for OneDrive Personal.",
Type: "string",
Example: "Contract for signing",
},
"id": {
Help: "The unique identifier of the item within OneDrive.",
Type: "string",
Example: "01BYE5RZ6QN3ZWBTUFOFD3GSPGOHDJD36K",
ReadOnly: true,
},
"last-modified-by-display-name": {
Help: "Display name of the user that last modified the item.",
Type: "string",
Example: "John Doe",
ReadOnly: true,
},
"last-modified-by-id": {
Help: "ID of the user that last modified the item.",
Type: "string",
Example: "48d31887-5fad-4d73-a9f5-3c356e68a038",
ReadOnly: true,
},
"malware-detected": {
Help: "Whether OneDrive has detected that the item contains malware.",
Type: "boolean",
Example: "true",
ReadOnly: true,
},
"package-type": {
Help: "If present, indicates that this item is a package instead of a folder or file. Packages are treated like files in some contexts and folders in others.",
Type: "string",
Example: "oneNote",
ReadOnly: true,
},
"shared-owner-id": {
Help: "ID of the owner of the shared item (if shared).",
Type: "string",
Example: "48d31887-5fad-4d73-a9f5-3c356e68a038",
ReadOnly: true,
},
"shared-by-id": {
Help: "ID of the user that shared the item (if shared).",
Type: "string",
Example: "48d31887-5fad-4d73-a9f5-3c356e68a038",
ReadOnly: true,
},
"shared-scope": {
Help: "If shared, indicates the scope of how the item is shared: anonymous, organization, or users.",
Type: "string",
Example: "users",
ReadOnly: true,
},
"shared-time": {
Help: "Time when the item was shared, with S accuracy (mS for OneDrive Personal).",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05Z",
ReadOnly: true,
},
"permissions": {
Help: "Permissions in a JSON dump of OneDrive format. Enable with --onedrive-metadata-permissions. Properties: id, grantedTo, grantedToIdentities, invitation, inheritedFrom, link, roles, shareId",
Type: "JSON",
Example: "{}",
},
}
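The keys documented above travel as plain string key/value pairs, and the writeable time keys are parsed with timeFormatIn (time.RFC3339). A hedged sketch of such a key set and of parsing the time values, using a plain map and made-up values rather than rclone's fs.Metadata type:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Illustrative key/value pairs in the shape described above.
	metadata := map[string]string{
		"mtime":       "2006-01-02T15:04:05Z",
		"btime":       "2006-01-02T15:04:05Z",
		"description": "Contract for signing",
	}
	for _, key := range []string{"mtime", "btime"} {
		t, err := time.Parse(time.RFC3339, metadata[key]) // timeFormatIn above is time.RFC3339
		if err != nil {
			panic(err)
		}
		fmt.Println(key, "=", t.UTC())
	}
	fmt.Println("description =", metadata["description"])
}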
// rwChoices type for fs.Bits
type rwChoices struct{}
func (rwChoices) Choices() []fs.BitsChoicesInfo {
return []fs.BitsChoicesInfo{
{Bit: uint64(rwOff), Name: "off"},
{Bit: uint64(rwRead), Name: "read"},
{Bit: uint64(rwWrite), Name: "write"},
{Bit: uint64(rwFailOK), Name: "failok"},
}
}
// rwChoice type alias
type rwChoice = fs.Bits[rwChoices]
const (
rwRead rwChoice = 1 << iota
rwWrite
rwFailOK
rwOff rwChoice = 0
)
// Examples for the options
var rwExamples = fs.OptionExamples{{
Value: rwOff.String(),
Help: "Do not read or write the value",
}, {
Value: rwRead.String(),
Help: "Read the value only",
}, {
Value: rwWrite.String(),
Help: "Write the value only",
}, {
Value: (rwRead | rwWrite).String(),
Help: "Read and Write the value.",
}, {
Value: rwFailOK.String(),
Help: "If writing fails log errors only, don't fail the transfer",
}}
// Metadata describes metadata properties shared by both Objects and Directories
type Metadata struct {
fs *Fs // what this object/dir is part of
remote string // remote, for convenience when obj/dir not in scope
mimeType string // Content-Type of object from server (may not be as uploaded)
description string // Provides a user-visible description of the item. Read-write. Only on OneDrive Personal
mtime time.Time // Time of last modification with S accuracy.
btime time.Time // Time of file birth (creation) with S accuracy.
utime time.Time // Time of upload with S accuracy.
createdBy api.IdentitySet // user that created the item
lastModifiedBy api.IdentitySet // user that last modified the item
malwareDetected bool // Whether OneDrive has detected that the item contains malware.
packageType string // If present, indicates that this item is a package instead of a folder or file.
shared *api.SharedType // information about the shared state of the item, if shared
normalizedID string // the normalized ID of the object or dir
permissions []*api.PermissionsType // The current set of permissions for the item. Note that to save API calls, this is not guaranteed to be cached on the object. Use m.Get() to refresh.
queuedPermissions []*api.PermissionsType // The set of permissions queued to be updated.
permsAddOnly bool // Whether to disable "update" and "remove" (for example, during server-side copy when the dst will have new IDs)
}
// Get retrieves the cached metadata and converts it to fs.Metadata.
// This is most typically used when OneDrive is the source (as opposed to the dest).
// If m.fs.opt.MetadataPermissions includes "read" then this will also include permissions, which requires an API call.
// Get does not use an API call otherwise.
func (m *Metadata) Get(ctx context.Context) (metadata fs.Metadata, err error) {
metadata = make(fs.Metadata, 17)
metadata["content-type"] = m.mimeType
metadata["mtime"] = m.mtime.Format(timeFormatOut)
metadata["btime"] = m.btime.Format(timeFormatOut)
metadata["utime"] = m.utime.Format(timeFormatOut)
metadata["created-by-display-name"] = m.createdBy.User.DisplayName
metadata["created-by-id"] = m.createdBy.User.ID
if m.description != "" {
metadata["description"] = m.description
}
metadata["id"] = m.normalizedID
metadata["last-modified-by-display-name"] = m.lastModifiedBy.User.DisplayName
metadata["last-modified-by-id"] = m.lastModifiedBy.User.ID
metadata["malware-detected"] = fmt.Sprint(m.malwareDetected)
if m.packageType != "" {
metadata["package-type"] = m.packageType
}
if m.shared != nil {
metadata["shared-owner-id"] = m.shared.Owner.User.ID
metadata["shared-by-id"] = m.shared.SharedBy.User.ID
metadata["shared-scope"] = m.shared.Scope
metadata["shared-time"] = time.Time(m.shared.SharedDateTime).Format(timeFormatOut)
}
if m.fs.opt.MetadataPermissions.IsSet(rwRead) {
p, _, err := m.fs.getPermissions(ctx, m.normalizedID)
if err != nil {
return nil, fmt.Errorf("failed to get permissions: %w", err)
}
m.permissions = p
if len(p) > 0 {
fs.PrettyPrint(m.permissions, "perms", fs.LogLevelDebug)
buf, err := json.Marshal(m.permissions)
if err != nil {
return nil, fmt.Errorf("failed to marshal permissions: %w", err)
}
metadata["permissions"] = string(buf)
}
}
return metadata, nil
}
// Set takes fs.Metadata and parses/converts it to cached Metadata.
// This is most typically used when OneDrive is the destination (as opposed to the source).
// It does not actually update the remote (use Write for that.)
// It sets only the writeable metadata properties (i.e. read-only properties are skipped.)
// Permissions are included if m.fs.opt.MetadataPermissions includes "write".
// It returns errors if writeable properties can't be parsed.
// It does not return errors for unsupported properties that may be passed in.
// It returns the number of writeable properties set (if it is 0, we can skip the Write API call.)
func (m *Metadata) Set(ctx context.Context, metadata fs.Metadata) (numSet int, err error) {
numSet = 0
for k, v := range metadata {
k, v := k, v
switch k {
case "mtime":
t, err := time.Parse(timeFormatIn, v)
if err != nil {
return numSet, fmt.Errorf("failed to parse metadata %q = %q: %w", k, v, err)
}
m.mtime = t
numSet++
case "btime":
t, err := time.Parse(timeFormatIn, v)
if err != nil {
return numSet, fmt.Errorf("failed to parse metadata %q = %q: %w", k, v, err)
}
m.btime = t
numSet++
case "description":
if m.fs.driveType != driveTypePersonal {
fs.Debugf(m.remote, "metadata description is only supported for OneDrive Personal -- skipping: %s", v)
continue
}
m.description = v
numSet++
case "permissions":
if !m.fs.opt.MetadataPermissions.IsSet(rwWrite) {
continue
}
var perms []*api.PermissionsType
err := json.Unmarshal([]byte(v), &perms)
if err != nil {
return numSet, fmt.Errorf("failed to unmarshal permissions: %w", err)
}
m.queuedPermissions = perms
numSet++
default:
fs.Debugf(m.remote, "skipping unsupported metadata item: %s: %s", k, v)
}
}
if numSet == 0 {
fs.Infof(m.remote, "no writeable metadata found: %v", metadata)
}
return numSet, nil
}
// toAPIMetadata converts object/dir Metadata to api.Metadata for API calls.
// If btime is missing but mtime is present, mtime is also used as the btime, as otherwise it would get overwritten.
func (m *Metadata) toAPIMetadata() api.Metadata {
update := api.Metadata{
FileSystemInfo: &api.FileSystemInfoFacet{},
}
if m.description != "" && m.fs.driveType == driveTypePersonal {
update.Description = m.description
}
if !m.mtime.IsZero() {
update.FileSystemInfo.LastModifiedDateTime = api.Timestamp(m.mtime)
}
if !m.btime.IsZero() {
update.FileSystemInfo.CreatedDateTime = api.Timestamp(m.btime)
}
if m.btime.IsZero() && !m.mtime.IsZero() { // use mtime as btime if missing
m.btime = m.mtime
update.FileSystemInfo.CreatedDateTime = api.Timestamp(m.btime)
}
return update
}
// Write takes the cached Metadata and sets it on the remote, using API calls.
// If m.fs.opt.MetadataPermissions includes "write" and updatePermissions == true, permissions are also set.
// Calling Write without any writeable metadata will result in an error.
func (m *Metadata) Write(ctx context.Context, updatePermissions bool) (*api.Item, error) {
update := m.toAPIMetadata()
if update.IsEmpty() {
return nil, fmt.Errorf("%v: no writeable metadata found: %v", m.remote, m)
}
opts := m.fs.newOptsCallWithPath(ctx, m.remote, "PATCH", "")
var info *api.Item
err := m.fs.pacer.Call(func() (bool, error) {
resp, err := m.fs.srv.CallJSON(ctx, &opts, &update, &info)
return shouldRetry(ctx, resp, err)
})
if err != nil {
fs.Debugf(m.remote, "errored metadata: %v", m)
return nil, fmt.Errorf("%v: error updating metadata: %v", m.remote, err)
}
if m.fs.opt.MetadataPermissions.IsSet(rwWrite) && updatePermissions {
m.normalizedID = info.GetID()
err = m.WritePermissions(ctx)
if err != nil {
fs.Errorf(m.remote, "error writing permissions: %v", err)
return info, err
}
}
// update the struct since we have fresh info
m.fs.setSystemMetadata(info, m, m.remote, m.mimeType)
return info, err
}
// RefreshPermissions fetches the current permissions from the remote and caches them as Metadata
func (m *Metadata) RefreshPermissions(ctx context.Context) (err error) {
if m.normalizedID == "" {
return errors.New("internal error: normalizedID is missing")
}
p, _, err := m.fs.getPermissions(ctx, m.normalizedID)
if err != nil {
return fmt.Errorf("failed to refresh permissions: %w", err)
}
m.permissions = p
return nil
}
// WritePermissions sets the permissions (and no other metadata) on the remote.
// m.permissions (the existing perms) and m.queuedPermissions (the new perms to be set) must be set correctly before calling this.
// m.permissions == nil will not error, as it is valid to add permissions when there were previously none.
// If successful, m.permissions will be set with the new current permissions and m.queuedPermissions will be nil.
func (m *Metadata) WritePermissions(ctx context.Context) (err error) {
if !m.fs.opt.MetadataPermissions.IsSet(rwWrite) {
return errors.New("can't write permissions without --onedrive-metadata-permissions write")
}
if m.normalizedID == "" {
return errors.New("internal error: normalizedID is missing")
}
if m.fs.opt.MetadataPermissions.IsSet(rwFailOK) {
// If failok is set, allow the permissions setting to fail and only log an ERROR
defer func() {
if err != nil {
fs.Errorf(m.fs, "Ignoring error as failok is set: %v", err)
err = nil
}
}()
}
// compare current to queued and sort into add/update/remove queues
add, update, remove := m.sortPermissions()
fs.Debugf(m.remote, "metadata permissions: to add: %d to update: %d to remove: %d", len(add), len(update), len(remove))
_, err = m.processPermissions(ctx, add, update, remove)
if err != nil {
return fmt.Errorf("failed to process permissions: %w", err)
}
err = m.RefreshPermissions(ctx)
fs.Debugf(m.remote, "updated permissions (now has %d permissions)", len(m.permissions))
if err != nil {
return fmt.Errorf("failed to get permissions: %w", err)
}
m.queuedPermissions = nil
return nil
}
// sortPermissions sorts the permissions (to be written) into add, update, and remove queues
func (m *Metadata) sortPermissions() (add, update, remove []*api.PermissionsType) {
new, old := m.queuedPermissions, m.permissions
if len(old) == 0 || m.permsAddOnly {
return new, nil, nil // they must all be "add"
}
for _, n := range new {
if n == nil {
continue
}
if n.ID != "" {
// sanity check: ensure there's a matching "old" id with a non-matching role
if !slices.ContainsFunc(old, func(o *api.PermissionsType) bool {
return o.ID == n.ID && slices.Compare(o.Roles, n.Roles) != 0 && len(o.Roles) > 0 && len(n.Roles) > 0 && !slices.Contains(o.Roles, api.OwnerRole)
}) {
fs.Debugf(m.remote, "skipping update for invalid roles: %v (perm ID: %v)", n.Roles, n.ID)
continue
}
if m.fs.driveType != driveTypePersonal && n.Link != nil && n.Link.WebURL != "" {
// special case to work around API limitation -- can't update a sharing link perm so need to remove + add instead
// https://learn.microsoft.com/en-us/answers/questions/986279/why-is-update-permission-graph-api-for-files-not-w
// https://github.com/microsoftgraph/msgraph-sdk-dotnet/issues/1135
fs.Debugf(m.remote, "sortPermissions: can't update due to API limitation, will remove + add instead: %v", n.Roles)
remove = append(remove, n)
add = append(add, n)
continue
}
fs.Debugf(m.remote, "sortPermissions: will update role to %v", n.Roles)
update = append(update, n)
} else {
fs.Debugf(m.remote, "sortPermissions: will add permission: %v %v", n, n.Roles)
add = append(add, n)
}
}
for _, o := range old {
if slices.Contains(o.Roles, api.OwnerRole) {
fs.Debugf(m.remote, "skipping remove permission -- can't remove 'owner' role")
continue
}
newHasOld := slices.ContainsFunc(new, func(n *api.PermissionsType) bool {
if n == nil || n.ID == "" {
return false // can't remove perms without an ID
}
return n.ID == o.ID
})
if !newHasOld && o.ID != "" && !slices.Contains(add, o) && !slices.Contains(update, o) {
fs.Debugf(m.remote, "sortPermissions: will remove permission: %v %v (perm ID: %v)", o, o.Roles, o.ID)
remove = append(remove, o)
}
}
return add, update, remove
}
// processPermissions executes the add, update, and remove queues for writing permissions
func (m *Metadata) processPermissions(ctx context.Context, add, update, remove []*api.PermissionsType) (newPermissions []*api.PermissionsType, err error) {
errs := errcount.New()
for _, p := range remove { // remove (need to do these first because of remove + add workaround)
_, err := m.removePermission(ctx, p)
if err != nil {
fs.Errorf(m.remote, "Failed to remove permission: %v", err)
errs.Add(err)
}
}
for _, p := range add { // add
newPs, _, err := m.addPermission(ctx, p)
if err != nil {
fs.Errorf(m.remote, "Failed to add permission: %v", err)
errs.Add(err)
continue
}
newPermissions = append(newPermissions, newPs...)
}
for _, p := range update { // update
newP, _, err := m.updatePermission(ctx, p)
if err != nil {
fs.Errorf(m.remote, "Failed to update permission: %v", err)
errs.Add(err)
continue
}
newPermissions = append(newPermissions, newP)
}
err = errs.Err("failed to set permissions")
if err != nil {
err = fserrors.NoRetryError(err)
}
return newPermissions, err
}
// fillRecipients looks for recipients to add from the permission passed in.
// It looks for an email address in identity.User.Email, ID, and DisplayName, otherwise it uses the identity.User.ID as r.ObjectID.
// It considers both "GrantedTo" and "GrantedToIdentities".
func fillRecipients(p *api.PermissionsType, driveType string) (recipients []api.DriveRecipient) {
if p == nil {
return recipients
}
ids := make(map[string]struct{}, len(p.GetGrantedToIdentities(driveType))+1)
isUnique := func(s string) bool {
_, ok := ids[s]
return !ok && s != ""
}
addRecipient := func(identity *api.IdentitySet) {
r := api.DriveRecipient{}
id := ""
if strings.ContainsRune(identity.User.Email, '@') {
id = identity.User.Email
r.Email = id
} else if strings.ContainsRune(identity.User.ID, '@') {
id = identity.User.ID
r.Email = id
} else if strings.ContainsRune(identity.User.DisplayName, '@') {
id = identity.User.DisplayName
r.Email = id
} else {
id = identity.User.ID
r.ObjectID = id
}
if !isUnique(id) {
return
}
ids[id] = struct{}{}
recipients = append(recipients, r)
}
forIdentitySet := func(iSet *api.IdentitySet) {
if iSet == nil {
return
}
iS := *iSet
forIdentity := func(i api.Identity) {
if i != (api.Identity{}) {
iS.User = i
addRecipient(&iS)
}
}
forIdentity(iS.User)
forIdentity(iS.SiteUser)
forIdentity(iS.Group)
forIdentity(iS.SiteGroup)
forIdentity(iS.Application)
forIdentity(iS.Device)
}
for _, identitySet := range p.GetGrantedToIdentities(driveType) {
forIdentitySet(identitySet)
}
forIdentitySet(p.GetGrantedTo(driveType))
return recipients
}
// addPermission adds new permissions to an object or dir.
// if p.Link.Scope == "anonymous" then it will also create a Public Link.
func (m *Metadata) addPermission(ctx context.Context, p *api.PermissionsType) (newPs []*api.PermissionsType, resp *http.Response, err error) {
opts := m.fs.newOptsCall(m.normalizedID, "POST", "/invite")
req := &api.AddPermissionsRequest{
Recipients: fillRecipients(p, m.fs.driveType),
RequireSignIn: m.fs.driveType != driveTypePersonal, // personal and business have conflicting requirements
Roles: p.Roles,
}
if m.fs.driveType != driveTypePersonal {
req.RetainInheritedPermissions = false // not supported for personal
}
if p.Link != nil && p.Link.Scope == api.AnonymousScope {
link, err := m.fs.PublicLink(ctx, m.remote, fs.DurationOff, false)
if err != nil {
return nil, nil, err
}
p.Link.WebURL = link
newPs = append(newPs, p)
if len(req.Recipients) == 0 {
return newPs, nil, nil
}
}
if len(req.Recipients) == 0 {
fs.Debugf(m.remote, "skipping add permission -- at least one valid recipient is required")
return nil, nil, nil
}
if len(req.Roles) == 0 {
return nil, nil, errors.New("at least one role is required to add a permission (choices: read, write, owner, member)")
}
if slices.Contains(req.Roles, api.OwnerRole) {
fs.Debugf(m.remote, "skipping add permission -- can't invite a user with 'owner' role")
return nil, nil, nil
}
newP := &api.PermissionsResponse{}
err = m.fs.pacer.Call(func() (bool, error) {
resp, err = m.fs.srv.CallJSON(ctx, &opts, &req, &newP)
return shouldRetry(ctx, resp, err)
})
return newP.Value, resp, err
}
// updatePermission updates an existing permission on an object or dir.
// This requires the permission ID and a role to update (which will error if it is the same as the existing role.)
// Role is the only property that can be updated.
func (m *Metadata) updatePermission(ctx context.Context, p *api.PermissionsType) (newP *api.PermissionsType, resp *http.Response, err error) {
opts := m.fs.newOptsCall(m.normalizedID, "PATCH", "/permissions/"+p.ID)
req := api.UpdatePermissionsRequest{Roles: p.Roles} // roles is the only property that can be updated
if len(req.Roles) == 0 {
return nil, nil, errors.New("at least one role is required to update a permission (choices: read, write, owner, member)")
}
newP = &api.PermissionsType{}
err = m.fs.pacer.Call(func() (bool, error) {
resp, err = m.fs.srv.CallJSON(ctx, &opts, &req, &newP)
return shouldRetry(ctx, resp, err)
})
return newP, resp, err
}
// removePermission removes an existing permission on an object or dir.
// This requires the permission ID.
func (m *Metadata) removePermission(ctx context.Context, p *api.PermissionsType) (resp *http.Response, err error) {
opts := m.fs.newOptsCall(m.normalizedID, "DELETE", "/permissions/"+p.ID)
opts.NoResponse = true
err = m.fs.pacer.Call(func() (bool, error) {
resp, err = m.fs.srv.CallJSON(ctx, &opts, nil, nil)
return shouldRetry(ctx, resp, err)
})
return resp, err
}
// getPermissions gets the current permissions for an object or dir, from the API.
func (f *Fs) getPermissions(ctx context.Context, normalizedID string) (p []*api.PermissionsType, resp *http.Response, err error) {
opts := f.newOptsCall(normalizedID, "GET", "/permissions")
permResp := &api.PermissionsResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &permResp)
return shouldRetry(ctx, resp, err)
})
return permResp.Value, resp, err
}
func (f *Fs) newMetadata(remote string) *Metadata {
return &Metadata{fs: f, remote: remote}
}
// returns true if metadata includes a "permissions" key and f.opt.MetadataPermissions includes "write".
func (f *Fs) needsUpdatePermissions(metadata fs.Metadata) bool {
_, ok := metadata["permissions"]
return ok && f.opt.MetadataPermissions.IsSet(rwWrite)
}
// returns a non-zero btime if we have one
// otherwise falls back to mtime
func (o *Object) tryGetBtime(modTime time.Time) time.Time {
if o.meta != nil && !o.meta.btime.IsZero() {
return o.meta.btime
}
return modTime
}
// adds metadata (except permissions) if --metadata is in use
func (o *Object) fetchMetadataForCreate(ctx context.Context, src fs.ObjectInfo, options []fs.OpenOption, modTime time.Time) (createRequest api.CreateUploadRequest, metadata fs.Metadata, err error) {
createRequest = api.CreateUploadRequest{ // we set mtime no matter what
Item: api.Metadata{
FileSystemInfo: &api.FileSystemInfoFacet{
CreatedDateTime: api.Timestamp(o.tryGetBtime(modTime)),
LastModifiedDateTime: api.Timestamp(modTime),
},
},
}
meta, err := fs.GetMetadataOptions(ctx, o.fs, src, options)
if err != nil {
return createRequest, nil, fmt.Errorf("failed to read metadata from source object: %w", err)
}
if meta == nil {
return createRequest, nil, nil // no metadata or --metadata not in use, so just return mtime
}
if o.meta == nil {
o.meta = o.fs.newMetadata(o.Remote())
}
o.meta.mtime = modTime
numSet, err := o.meta.Set(ctx, meta)
if err != nil {
return createRequest, meta, err
}
if numSet == 0 {
return createRequest, meta, nil
}
createRequest.Item = o.meta.toAPIMetadata()
return createRequest, meta, nil
}
// Fetch metadata and update updateInfo if --metadata is in use
// modtime will still be set when there is no metadata to set
func (f *Fs) fetchAndUpdateMetadata(ctx context.Context, src fs.ObjectInfo, options []fs.OpenOption, updateInfo *Object) (info *api.Item, err error) {
meta, err := fs.GetMetadataOptions(ctx, f, src, options)
if err != nil {
return nil, fmt.Errorf("failed to read metadata from source object: %w", err)
}
if meta == nil {
return updateInfo.setModTime(ctx, src.ModTime(ctx)) // no metadata or --metadata not in use, so just set modtime
}
if updateInfo.meta == nil {
updateInfo.meta = f.newMetadata(updateInfo.Remote())
}
newInfo, err := updateInfo.updateMetadata(ctx, meta)
if newInfo == nil {
return info, err
}
return newInfo, err
}
// updateMetadata calls Get, Set, and Write
func (o *Object) updateMetadata(ctx context.Context, meta fs.Metadata) (info *api.Item, err error) {
_, err = o.meta.Get(ctx) // refresh permissions
if err != nil {
return nil, err
}
numSet, err := o.meta.Set(ctx, meta)
if err != nil {
return nil, err
}
if numSet == 0 {
return nil, nil
}
info, err = o.meta.Write(ctx, o.fs.needsUpdatePermissions(meta))
if err != nil {
return info, err
}
err = o.setMetaData(info)
if err != nil {
return info, err
}
// Remove versions if required
if o.fs.opt.NoVersions {
err := o.deleteVersions(ctx)
if err != nil {
return info, fmt.Errorf("%v: Failed to remove versions: %v", o, err)
}
}
return info, nil
}
// MkdirMetadata makes the directory passed in as dir.
//
// It shouldn't return an error if it already exists.
//
// If the metadata is not nil it is set.
//
// It returns the directory that was created.
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
var info *api.Item
var meta *Metadata
dirID, err := f.dirCache.FindDir(ctx, dir, false)
if err == fs.ErrorDirNotFound {
// Directory does not exist so create it
var leaf, parentID string
leaf, parentID, err = f.dirCache.FindPath(ctx, dir, true)
if err != nil {
return nil, err
}
info, meta, err = f.createDir(ctx, parentID, dir, leaf, metadata)
if err != nil {
return nil, err
}
if f.driveType != driveTypePersonal {
// for some reason, OneDrive Business needs this extra step to set modtime, while Personal does not. Seems like a bug...
fs.Debugf(dir, "setting time %v", meta.mtime)
info, err = meta.Write(ctx, false)
}
} else if err == nil {
// Directory exists and needs updating
info, meta, err = f.updateDir(ctx, dirID, dir, metadata)
}
if err != nil {
return nil, err
}
// Convert the info into a directory entry
parent, _ := dircache.SplitPath(dir)
entry, err := f.itemToDirEntry(ctx, parent, info)
if err != nil {
return nil, err
}
directory, ok := entry.(*Directory)
if !ok {
return nil, fmt.Errorf("internal error: expecting %T to be a *Directory", entry)
}
directory.meta = meta
f.setSystemMetadata(info, directory.meta, entry.Remote(), dirMimeType)
dirEntry, ok := entry.(fs.Directory)
if !ok {
return nil, fmt.Errorf("internal error: expecting %T to be an fs.Directory", entry)
}
return dirEntry, nil
}
// createDir makes a directory with pathID as parent and name leaf with optional metadata
func (f *Fs) createDir(ctx context.Context, pathID, dirWithLeaf, leaf string, metadata fs.Metadata) (info *api.Item, meta *Metadata, err error) {
// fs.Debugf(f, "CreateDir(%q, %q)\n", dirID, leaf)
var resp *http.Response
opts := f.newOptsCall(pathID, "POST", "/children")
mkdir := api.CreateItemWithMetadataRequest{
CreateItemRequest: api.CreateItemRequest{
Name: f.opt.Enc.FromStandardName(leaf),
ConflictBehavior: "fail",
},
}
m := f.newMetadata(dirWithLeaf)
m.mimeType = dirMimeType
numSet := 0
if len(metadata) > 0 {
numSet, err = m.Set(ctx, metadata)
if err != nil {
return nil, m, err
}
if numSet > 0 {
mkdir.Metadata = m.toAPIMetadata()
}
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, m, err
}
if f.needsUpdatePermissions(metadata) && numSet > 0 { // permissions must be done as a separate step
m.normalizedID = info.GetID()
err = m.RefreshPermissions(ctx)
if err != nil {
return info, m, err
}
err = m.WritePermissions(ctx)
if err != nil {
fs.Errorf(m.remote, "error writing permissions: %v", err)
return info, m, err
}
}
return info, m, nil
}
// updateDir updates an existing directory with the metadata passed in
func (f *Fs) updateDir(ctx context.Context, dirID, remote string, metadata fs.Metadata) (info *api.Item, meta *Metadata, err error) {
d := f.newDir(dirID, remote)
_, err = d.meta.Set(ctx, metadata)
if err != nil {
return nil, nil, err
}
info, err = d.meta.Write(ctx, f.needsUpdatePermissions(metadata))
return info, d.meta, err
}
func (f *Fs) newDir(dirID, remote string) (d *Directory) {
d = &Directory{
fs: f,
remote: remote,
size: -1,
items: -1,
id: dirID,
meta: f.newMetadata(remote),
}
d.meta.normalizedID = dirID
return d
}
// Metadata returns metadata for a DirEntry
//
// It should return nil if there is no Metadata
func (o *Object) Metadata(ctx context.Context) (metadata fs.Metadata, err error) {
err = o.readMetaData(ctx)
if err != nil {
fs.Logf(o, "Failed to read metadata: %v", err)
return nil, err
}
return o.meta.Get(ctx)
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
dirID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return err
}
d := f.newDir(dirID, dir)
return d.SetModTime(ctx, modTime)
}
// SetModTime sets the metadata on the DirEntry to set the modification date
//
// If there is any other metadata it does not overwrite it.
func (d *Directory) SetModTime(ctx context.Context, t time.Time) error {
btime := t
if d.meta != nil && !d.meta.btime.IsZero() {
btime = d.meta.btime // if we already have a non-zero btime, preserve it
}
d.meta = d.fs.newMetadata(d.remote) // set only the mtime and btime
d.meta.mtime = t
d.meta.btime = btime
_, err := d.meta.Write(ctx, false)
return err
}
// Metadata returns metadata for a DirEntry
//
// It should return nil if there is no Metadata
func (d *Directory) Metadata(ctx context.Context) (metadata fs.Metadata, err error) {
return d.meta.Get(ctx)
}
// SetMetadata sets metadata for a Directory
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (d *Directory) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
_, meta, err := d.fs.updateDir(ctx, d.id, d.remote, metadata)
d.meta = meta
return err
}
// Fs returns read only access to the Fs that this object is part of
func (d *Directory) Fs() fs.Info {
return d.fs
}
// String returns the name
func (d *Directory) String() string {
return d.remote
}
// Remote returns the remote path
func (d *Directory) Remote() string {
return d.remote
}
// ModTime returns the modification date of the file
//
// If one isn't available it returns the configured --default-dir-time
func (d *Directory) ModTime(ctx context.Context) time.Time {
if !d.meta.mtime.IsZero() {
return d.meta.mtime
}
ci := fs.GetConfig(ctx)
return time.Time(ci.DefaultTime)
}
// Size returns the size of the file
func (d *Directory) Size() int64 {
return d.size
}
// Items returns the count of items in this directory or this
// directory and subdirectories if known, -1 for unknown
func (d *Directory) Items() int64 {
return d.items
}
// ID gets the optional ID
func (d *Directory) ID() string {
return d.id
}
// MimeType returns the content type of the Object if
// known, or "" if not
func (d *Directory) MimeType(ctx context.Context) string {
return dirMimeType
}


@@ -0,0 +1,131 @@
OneDrive supports System Metadata (not User Metadata, as of this writing) for
both files and directories. Much of the metadata is read-only, and there are some
differences between OneDrive Personal and Business (see table below for
details).
Permissions are also supported, if `--onedrive-metadata-permissions` is set. The
accepted values for `--onedrive-metadata-permissions` are "`read`", "`write`",
"`read,write`", and "`off`" (the default). "`write`" supports adding new permissions,
updating the "role" of existing permissions, and removing permissions. Updating
and removing require the Permission ID to be known, so it is recommended to use
"`read,write`" instead of "`write`" if you wish to update/remove permissions.
Permissions are read/written in JSON format using the same schema as the
[OneDrive API](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/resources/permission?view=odsp-graph-online),
which differs slightly between OneDrive Personal and Business.
Example for OneDrive Personal:
```json
[
{
"id": "1234567890ABC!123",
"grantedTo": {
"user": {
"id": "ryan@contoso.com"
},
"application": {},
"device": {}
},
"invitation": {
"email": "ryan@contoso.com"
},
"link": {
"webUrl": "https://1drv.ms/t/s!1234567890ABC"
},
"roles": [
"read"
],
"shareId": "s!1234567890ABC"
}
]
```
Example for OneDrive Business:
```json
[
{
"id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
"grantedToIdentities": [
{
"user": {
"displayName": "ryan@contoso.com"
},
"application": {},
"device": {}
}
],
"link": {
"type": "view",
"scope": "users",
"webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
},
"roles": [
"read"
],
"shareId": "u!LKj1lkdlals90j1nlkascl"
},
{
"id": "5D33DD65C6932946",
"grantedTo": {
"user": {
"displayName": "John Doe",
"id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
},
"application": {},
"device": {}
},
"roles": [
"owner"
],
"shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
}
]
```
To write permissions, pass in a "permissions" metadata key using this same
format. The [`--metadata-mapper`](https://rclone.org/docs/#metadata-mapper) tool can
be very helpful for this.
When adding permissions, an email address can be provided in the `User.ID` or
`DisplayName` properties of `grantedTo` or `grantedToIdentities`. Alternatively,
an ObjectID can be provided in `User.ID`. At least one valid recipient must be
provided in order to add a permission for a user. Creating a Public Link is also
supported, if `Link.Scope` is set to `"anonymous"`.
Example request to add a "read" permission with `--metadata-mapper`:
```json
{
"Metadata": {
"permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
}
}
```
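As noted above, a Public Link can be created by adding a permission whose
`Link.Scope` is set to `"anonymous"`. A minimal sketch might look like the
following (the `"view"` link type is an assumption here; other fields follow the
permission schema linked above):
```json
[
  {
    "link": {
      "scope": "anonymous",
      "type": "view"
    }
  }
]
```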
Note that adding a permission can fail if a conflicting permission already
exists for the file/folder.
To update an existing permission, include both the Permission ID and the new
`roles` to be assigned. `roles` is the only property that can be changed.
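For instance, to change the role of the permission from the OneDrive Personal
example above to `write`, a blob along these lines could be passed back in (the
ID is the illustrative one from that example; any other permissions you wish to
keep should be included as well, per the note below):
```json
[
  {
    "id": "1234567890ABC!123",
    "roles": [
      "write"
    ]
  }
]
```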
To remove permissions, pass in a blob containing only the permissions you wish
to keep (which can be empty, to remove all). Note that the `owner` role will be
ignored, as it cannot be removed.
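As a sketch, passing an empty list via the `--metadata-mapper` asks rclone to
remove every removable permission (the `owner` role is kept regardless, as noted
above):
```json
{
  "Metadata": {
    "permissions": "[]"
  }
}
```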
Note that both reading and writing permissions require extra API calls, so if
you don't need to read or write permissions it is recommended to omit
`--onedrive-metadata-permissions`.
Metadata and permissions are supported for Folders (directories) as well as
Files. Note that setting the `mtime` or `btime` on a Folder requires one extra
API call on OneDrive Business only.
OneDrive does not currently support User Metadata. When writing metadata, only
writeable system properties will be written -- any read-only or unrecognized keys
passed in will be ignored.
TIP: to see the metadata and permissions for any file or folder, run:
```
rclone lsjson remote:path --stat -M --onedrive-metadata-permissions read
```


@@ -4,6 +4,7 @@ package onedrive
import (
"context"
_ "embed"
"encoding/base64"
"encoding/hex"
"encoding/json"
@@ -29,6 +30,7 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/atexit"
@@ -93,6 +95,9 @@ var (
// QuickXorHashType is the hash.Type for OneDrive
QuickXorHashType hash.Type
//go:embed metadata.md
metadataHelp string
)
// Register with Fs
@@ -103,6 +108,10 @@ func init() {
Description: "Microsoft OneDrive",
NewFs: NewFs,
Config: Config,
MetadataInfo: &fs.MetadataInfo{
System: systemMetadataInfo,
Help: metadataHelp,
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "region",
Help: "Choose national cloud region for OneDrive.",
@@ -173,7 +182,8 @@ Choose or manually enter a custom space separated list with all scopes, that rcl
Value: "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All offline_access",
Help: "Read and write access to all resources, without the ability to browse SharePoint sites. \nSame as if disable_site_permission was set to true",
},
}}, {
},
}, {
Name: "disable_site_permission",
Help: `Disable the request for Sites.Read.All permission.
@@ -203,9 +213,11 @@ listing, set this option.`,
Allow server-side operations (e.g. copy) to work across different onedrive configs.
This will only work if you are copying between two OneDrive *Personal* drives AND
the files to copy are already shared between them. In other cases, rclone will
fall back to normal copy (which will be slightly slower).`,
This will work if you are copying between two OneDrive *Personal* drives AND the files to
copy are already shared between them. Additionally, it should also work for a user who has
access permissions both between OneDrive for *Business* and *SharePoint* under the *same
tenant*, and between *SharePoint* and another *SharePoint* under the *same tenant*. In other
cases, rclone will fall back to normal copy (which will be slightly slower).`,
Advanced: true,
}, {
Name: "list_chunk",
@@ -229,6 +241,18 @@ modification time and removes all but the last version.
this flag there.
`,
Advanced: true,
}, {
Name: "hard_delete",
Help: `Permanently delete files on removal.
Normally files will get sent to the recycle bin on deletion. Setting
this flag causes them to be permanently deleted. Use with care.
OneDrive personal accounts do not support the permanentDelete API;
it only applies to OneDrive for Business and SharePoint document libraries.
`,
Advanced: true,
Default: false,
}, {
Name: "link_scope",
Default: "anonymous",
@@ -279,7 +303,7 @@ all onedrive types. If an SHA1 hash is desired then set this option
accordingly.
From July 2023 QuickXorHash will be the only available hash for
both OneDrive for Business and OneDriver Personal.
both OneDrive for Business and OneDrive Personal.
This can be set to "none" to not use any hashes.
@@ -330,7 +354,7 @@ file.
Default: false,
Help: strings.ReplaceAll(`If set rclone will use delta listing to implement recursive listings.
If this flag is set the the onedrive backend will advertise |ListR|
If this flag is set the onedrive backend will advertise |ListR|
support for recursive listings.
Setting this flag speeds up these things greatly:
@@ -356,6 +380,16 @@ It is recommended if you are mounting your onedrive at the root
(or near the root when using crypt) and using rclone |rc vfs/refresh|.
`, "|", "`"),
Advanced: true,
}, {
Name: "metadata_permissions",
Help: `Control whether permissions should be read or written in metadata.
Reading permissions metadata from files can be done quickly, but it
isn't always desirable to set the permissions from the metadata.
`,
Advanced: true,
Default: rwOff,
Examples: rwExamples,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -639,7 +673,8 @@ Examples:
opts := rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/drives/" + finalDriveID + "/root"}
Path: "/drives/" + finalDriveID + "/root",
}
var rootItem api.Item
_, err = srv.CallJSON(ctx, &opts, nil, &rootItem)
if err != nil {
@@ -672,6 +707,7 @@ type Options struct {
ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
ListChunk int64 `config:"list_chunk"`
NoVersions bool `config:"no_versions"`
HardDelete bool `config:"hard_delete"`
LinkScope string `config:"link_scope"`
LinkType string `config:"link_type"`
LinkPassword string `config:"link_password"`
@@ -679,6 +715,7 @@ type Options struct {
AVOverride bool `config:"av_override"`
Delta bool `config:"delta"`
Enc encoder.MultiEncoder `config:"encoding"`
MetadataPermissions rwChoice `config:"metadata_permissions"`
}
// Fs represents a remote OneDrive
@@ -711,6 +748,17 @@ type Object struct {
id string // ID of the object
hash string // Hash of the content, usually QuickXorHash but set as hash_type
mimeType string // Content-Type of object from server (may not be as uploaded)
meta *Metadata // metadata properties
}
// Directory describes a OneDrive directory
type Directory struct {
fs *Fs // what this object is part of
remote string // The remote path
size int64 // size of directory and contents or -1 if unknown
items int64 // number of objects or -1 for unknown
id string // dir ID
meta *Metadata // metadata properties
}
// ------------------------------------------------------------
@@ -751,8 +799,10 @@ var retryErrorCodes = []int{
509, // Bandwidth Limit Exceeded
}
var gatewayTimeoutError sync.Once
var errAsyncJobAccessDenied = errors.New("async job failed - access denied")
var (
gatewayTimeoutError sync.Once
errAsyncJobAccessDenied = errors.New("async job failed - access denied")
)
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
@@ -969,10 +1019,19 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
hashType: QuickXorHashType,
}
f.features = (&fs.Features{
CaseInsensitive: true,
ReadMimeType: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
CaseInsensitive: true,
ReadMimeType: true,
WriteMimeType: false,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: false,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: false,
DirModTimeUpdatesOnWrite: false,
}).Fill(ctx, f)
f.srv.SetErrorHandler(errorHandler)
@@ -998,7 +1057,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
})
// Get rootID
var rootID = opt.RootFolderID
rootID := opt.RootFolderID
if rootID == "" {
rootInfo, _, err := f.readMetaDataForPath(ctx, "")
if err != nil {
@@ -1065,6 +1124,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Ite
o := &Object{
fs: f,
remote: remote,
meta: f.newMetadata(remote),
}
var err error
if info != nil {
@@ -1123,11 +1183,11 @@ func (f *Fs) CreateDir(ctx context.Context, dirID, leaf string) (newID string, e
return shouldRetry(ctx, resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
// fmt.Printf("...Error %v\n", err)
return "", err
}
//fmt.Printf("...Id %q\n", *info.Id)
// fmt.Printf("...Id %q\n", *info.Id)
return info.GetID(), nil
}
@@ -1216,8 +1276,9 @@ func (f *Fs) itemToDirEntry(ctx context.Context, dir string, info *api.Item) (en
// cache the directory ID for later lookups
id := info.GetID()
f.dirCache.Put(remote, id)
d := fs.NewDir(remote, time.Time(info.GetLastModifiedDateTime())).SetID(id)
d.SetItems(folder.ChildCount)
d := f.newDir(id, remote)
d.items = folder.ChildCount
f.setSystemMetadata(info, d.meta, remote, dirMimeType)
entry = d
} else {
o, err := f.newObjectWithInfo(ctx, remote, info)
@@ -1378,7 +1439,12 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
}
return list.Flush()
}
// Shutdown shutdown the fs
func (f *Fs) Shutdown(ctx context.Context) error {
f.tokenRenewer.Shutdown()
return nil
}
// Creates from the parameters passed in a half finished Object which
@@ -1426,7 +1492,12 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
// deleteObject removes an object by ID
func (f *Fs) deleteObject(ctx context.Context, id string) error {
opts := f.newOptsCall(id, "DELETE", "")
var opts rest.Opts
if f.opt.HardDelete {
opts = f.newOptsCall(id, "POST", "/permanentDelete")
} else {
opts = f.newOptsCall(id, "DELETE", "")
}
opts.NoResponse = true
return f.pacer.Call(func() (bool, error) {
@@ -1473,6 +1544,9 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
// Precision return the precision of this Fs
func (f *Fs) Precision() time.Duration {
if f.driveType == driveTypePersonal {
return time.Millisecond
}
return time.Second
}
@@ -1537,14 +1611,12 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
if f.driveType != srcObj.fs.driveType {
fs.Debugf(src, "Can't server-side copy - drive types differ")
return nil, fs.ErrorCantCopy
}
// For OneDrive Business, this is only supported within the same drive
if f.driveType != driveTypePersonal && srcObj.fs.driveID != f.driveID {
fs.Debugf(src, "Can't server-side copy - cross-drive but not OneDrive Personal")
if (f.driveType == driveTypePersonal && srcObj.fs.driveType != driveTypePersonal) || (f.driveType != driveTypePersonal && srcObj.fs.driveType == driveTypePersonal) {
fs.Debugf(src, "Can't server-side copy - cross-drive between OneDrive Personal and OneDrive for business (SharePoint)")
return nil, fs.ErrorCantCopy
} else if f.driveType == driveTypeBusiness && srcObj.fs.driveType == driveTypeBusiness && srcObj.fs.driveID != f.driveID {
fs.Debugf(src, "Can't server-side copy - cross-drive between difference OneDrive for business (Not SharePoint)")
return nil, fs.ErrorCantCopy
}
@@ -1612,12 +1684,19 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
// Copy does NOT copy the modTime from the source and there seems to
// be no way to set date before
// This will create TWO versions on OneDrive
err = dstObj.SetModTime(ctx, srcObj.ModTime(ctx))
// Set modtime and adjust metadata if required
_, err = dstObj.Metadata(ctx) // make sure we get the correct new normalizedID
if err != nil {
return nil, err
}
return dstObj, nil
dstObj.meta.permsAddOnly = true // dst will have different IDs from src, so can't update/remove
info, err := f.fetchAndUpdateMetadata(ctx, src, fs.MetadataAsOpenOptions(ctx), dstObj)
if err != nil {
return nil, err
}
err = dstObj.setMetaData(info)
return dstObj, err
}
// Purge deletes all the files in the directory
@@ -1672,12 +1751,12 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
},
// We set the mod time too as it gets reset otherwise
FileSystemInfo: &api.FileSystemInfoFacet{
CreatedDateTime: api.Timestamp(srcObj.modTime),
CreatedDateTime: api.Timestamp(srcObj.tryGetBtime(srcObj.modTime)),
LastModifiedDateTime: api.Timestamp(srcObj.modTime),
},
}
var resp *http.Response
var info api.Item
var info *api.Item
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &move, &info)
return shouldRetry(ctx, resp, err)
@@ -1686,11 +1765,18 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
err = dstObj.setMetaData(&info)
err = dstObj.setMetaData(info)
if err != nil {
return nil, err
}
return dstObj, nil
// Set modtime and adjust metadata if required
info, err = f.fetchAndUpdateMetadata(ctx, src, fs.MetadataAsOpenOptions(ctx), dstObj)
if err != nil {
return nil, err
}
err = dstObj.setMetaData(info)
return dstObj, err
}
// DirMove moves src, srcRemote to this remote at dstRemote
@@ -2026,6 +2112,7 @@ func (o *Object) Size() int64 {
// setMetaData sets the metadata from info
func (o *Object) setMetaData(info *api.Item) (err error) {
if info.GetFolder() != nil {
log.Stack(o, "setMetaData called on dir instead of obj")
return fs.ErrorIsDir
}
o.hasMetaData = true
@@ -2065,9 +2152,40 @@ func (o *Object) setMetaData(info *api.Item) (err error) {
o.modTime = time.Time(info.GetLastModifiedDateTime())
}
o.id = info.GetID()
if o.meta == nil {
o.meta = o.fs.newMetadata(o.Remote())
}
o.fs.setSystemMetadata(info, o.meta, o.remote, o.mimeType)
return nil
}
// sets system metadata shared by both objects and directories
func (f *Fs) setSystemMetadata(info *api.Item, meta *Metadata, remote string, mimeType string) {
meta.fs = f
meta.remote = remote
meta.mimeType = mimeType
if info == nil {
fs.Errorf("setSystemMetadata", "internal error: info is nil")
}
fileSystemInfo := info.GetFileSystemInfo()
if fileSystemInfo != nil {
meta.mtime = time.Time(fileSystemInfo.LastModifiedDateTime)
meta.btime = time.Time(fileSystemInfo.CreatedDateTime)
} else {
meta.mtime = time.Time(info.GetLastModifiedDateTime())
meta.btime = time.Time(info.GetCreatedDateTime())
}
meta.utime = time.Time(info.GetCreatedDateTime())
meta.description = info.Description
meta.packageType = info.GetPackageType()
meta.createdBy = info.GetCreatedBy()
meta.lastModifiedBy = info.GetLastModifiedBy()
meta.malwareDetected = info.MalwareDetected()
meta.shared = info.Shared
meta.normalizedID = info.GetID()
}
// readMetaData gets the metadata if it hasn't already been fetched
//
// it also sets the info
@@ -2105,7 +2223,7 @@ func (o *Object) setModTime(ctx context.Context, modTime time.Time) (*api.Item,
opts := o.fs.newOptsCallWithPath(ctx, o.remote, "PATCH", "")
update := api.SetFileSystemInfo{
FileSystemInfo: api.FileSystemInfoFacet{
CreatedDateTime: api.Timestamp(modTime),
CreatedDateTime: api.Timestamp(o.tryGetBtime(modTime)),
LastModifiedDateTime: api.Timestamp(modTime),
},
}
@@ -2154,9 +2272,23 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
if o.fs.opt.AVOverride {
opts.Parameters = url.Values{"AVOverride": {"1"}}
}
// Make a note of the redirect target as we need to call it without Auth
var redirectReq *http.Request
opts.CheckRedirect = func(req *http.Request, via []*http.Request) error {
if len(via) >= 10 {
return errors.New("stopped after 10 redirects")
}
req.Header.Del("Authorization") // remove Auth header
redirectReq = req
return http.ErrUseLastResponse
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
if redirectReq != nil {
// It is a redirect which we are expecting
err = nil
}
return shouldRetry(ctx, resp, err)
})
if err != nil {
@@ -2167,20 +2299,35 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
return nil, err
}
if redirectReq != nil {
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.unAuth.Do(redirectReq)
return shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
if virus := resp.Header.Get("X-Virus-Infected"); virus != "" {
err = fmt.Errorf("server reports this file is infected with a virus - use --onedrive-av-override to download anyway: %s: %w", virus, err)
}
}
return nil, err
}
}
if resp.StatusCode == http.StatusOK && resp.ContentLength > 0 && resp.Header.Get("Content-Range") == "" {
//Overwrite size with actual size since size readings from Onedrive is unreliable.
// Overwrite size with actual size since size readings from OneDrive are unreliable.
o.size = resp.ContentLength
}
return resp.Body, err
}
// createUploadSession creates an upload session for the object
func (o *Object) createUploadSession(ctx context.Context, modTime time.Time) (response *api.CreateUploadResponse, err error) {
func (o *Object) createUploadSession(ctx context.Context, src fs.ObjectInfo, modTime time.Time) (response *api.CreateUploadResponse, metadata fs.Metadata, err error) {
opts := o.fs.newOptsCallWithPath(ctx, o.remote, "POST", "/createUploadSession")
createRequest := api.CreateUploadRequest{}
createRequest.Item.FileSystemInfo.CreatedDateTime = api.Timestamp(modTime)
createRequest.Item.FileSystemInfo.LastModifiedDateTime = api.Timestamp(modTime)
createRequest, metadata, err := o.fetchMetadataForCreate(ctx, src, opts.Options, modTime)
if err != nil {
return nil, metadata, err
}
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &createRequest, &response)
@@ -2192,7 +2339,7 @@ func (o *Object) createUploadSession(ctx context.Context, modTime time.Time) (re
}
return shouldRetry(ctx, resp, err)
})
return response, err
return response, metadata, err
}
// getPosition gets the current position in a multipart upload
@@ -2231,7 +2378,7 @@ func (o *Object) uploadFragment(ctx context.Context, url string, start int64, to
// var response api.UploadFragmentResponse
var resp *http.Response
var body []byte
var skip = int64(0)
skip := int64(0)
err = o.fs.pacer.Call(func() (bool, error) {
toSend := chunkSize - skip
opts := rest.Opts{
@@ -2298,14 +2445,17 @@ func (o *Object) cancelUploadSession(ctx context.Context, url string) (err error
}
// uploadMultipart uploads a file using multipart upload
func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, size int64, modTime time.Time, options ...fs.OpenOption) (info *api.Item, err error) {
// if there is metadata, it will be set at the same time, except for permissions, which must be set after (if present and enabled).
func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (info *api.Item, err error) {
size := src.Size()
modTime := src.ModTime(ctx)
if size <= 0 {
return nil, errors.New("unknown-sized upload not supported")
}
// Create upload session
fs.Debugf(o, "Starting multipart upload")
session, err := o.createUploadSession(ctx, modTime)
session, metadata, err := o.createUploadSession(ctx, src, modTime)
if err != nil {
return nil, err
}
@@ -2338,12 +2488,25 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, size int64,
position += n
}
return info, nil
err = o.setMetaData(info)
if err != nil {
return info, err
}
if metadata == nil || !o.fs.needsUpdatePermissions(metadata) {
return info, err
}
info, err = o.updateMetadata(ctx, metadata) // for permissions, which can't be set during original upload
if info == nil {
return nil, err
}
return info, o.setMetaData(info)
}
// Update the content of a remote file within 4 MiB size in one single request
// This function will set modtime after uploading, which will create a new version for the remote file
func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, size int64, modTime time.Time, options ...fs.OpenOption) (info *api.Item, err error) {
// (currently only used when size is exactly 0)
// This function will set modtime and metadata after uploading, which will create a new version for the remote file
func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (info *api.Item, err error) {
size := src.Size()
if size < 0 || size > int64(fs.SizeSuffix(4*1024*1024)) {
return nil, errors.New("size passed into uploadSinglepart must be >= 0 and <= 4 MiB")
}
@@ -2374,7 +2537,11 @@ func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, size int64,
return nil, err
}
// Set the mod time now and read metadata
return o.setModTime(ctx, modTime)
info, err = o.fs.fetchAndUpdateMetadata(ctx, src, options, o)
if err != nil {
return nil, fmt.Errorf("failed to fetch and update metadata: %w", err)
}
return info, o.setMetaData(info)
}
// Update the object with the contents of the io.Reader, modTime and size
@@ -2389,17 +2556,17 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
defer o.fs.tokenRenewer.Stop()
size := src.Size()
modTime := src.ModTime(ctx)
var info *api.Item
if size > 0 {
info, err = o.uploadMultipart(ctx, in, size, modTime, options...)
info, err = o.uploadMultipart(ctx, in, src, options...)
} else if size == 0 {
info, err = o.uploadSinglepart(ctx, in, size, modTime, options...)
info, err = o.uploadSinglepart(ctx, in, src, options...)
} else {
return errors.New("unknown-sized upload not supported")
}
if err != nil {
fs.PrettyPrint(info, "info from Update error", fs.LogLevelDebug)
return err
}
@@ -2410,8 +2577,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
fs.Errorf(o, "Failed to remove versions: %v", err)
}
}
return o.setMetaData(info)
return nil
}
// Remove an object
@@ -2759,7 +2925,15 @@ var (
_ fs.PublicLinker = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = &Object{}
_ fs.IDer = &Object{}
_ fs.Metadataer = (*Object)(nil)
_ fs.Metadataer = (*Directory)(nil)
_ fs.SetModTimer = (*Directory)(nil)
_ fs.SetMetadataer = (*Directory)(nil)
_ fs.MimeTyper = &Directory{}
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
)


@@ -0,0 +1,519 @@
package onedrive
import (
"context"
"encoding/json"
"fmt"
"testing"
"time"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/backend/onedrive/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/exp/slices" // replace with slices after go1.21 is the minimum version
)
// go test -timeout 30m -run ^TestIntegration/FsMkdir/FsPutFiles/Internal$ github.com/rclone/rclone/backend/onedrive -remote TestOneDrive:meta -v
// go test -timeout 30m -run ^TestIntegration/FsMkdir/FsPutFiles/Internal$ github.com/rclone/rclone/backend/onedrive -remote TestOneDriveBusiness:meta -v
// go run ./fstest/test_all -remotes TestOneDriveBusiness:meta,TestOneDrive:meta -verbose -maxtries 1
var (
t1 = fstest.Time("2023-08-26T23:13:06.499999999Z")
t2 = fstest.Time("2020-02-29T12:34:56.789Z")
t3 = time.Date(1994, time.December, 24, 9+12, 0, 0, 525600, time.FixedZone("Eastern Standard Time", -5))
ctx = context.Background()
content = "hello"
)
const (
testUserID = "ryan@contoso.com" // demo user from doc examples (can't share files with yourself)
// https://learn.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_invite?view=odsp-graph-online#http-request-1
)
// TestMain drives the tests
func TestMain(m *testing.M) {
fstest.TestMain(m)
}
// TestWritePermissions tests reading and writing permissions
func (f *Fs) TestWritePermissions(t *testing.T, r *fstest.Run) {
// setup
ctx, ci := fs.AddConfig(ctx)
ci.Metadata = true
_ = f.opt.MetadataPermissions.Set("read,write")
file1 := r.WriteFile(randomFilename(), content, t2)
// add a permission with "read" role
permissions := defaultPermissions(f.driveType)
permissions[0].Roles[0] = api.ReadRole
expectedMeta, actualMeta := f.putWithMeta(ctx, t, &file1, permissions)
f.compareMeta(t, expectedMeta, actualMeta, false)
expectedP, actualP := unmarshalPerms(t, expectedMeta["permissions"]), unmarshalPerms(t, actualMeta["permissions"])
found, num := false, 0
foundCount := 0
for i, p := range actualP {
for _, identity := range p.GetGrantedToIdentities(f.driveType) {
if identity.User.DisplayName == testUserID {
// note: expected will always be element 0 here, but actual may be variable based on org settings
assert.Equal(t, expectedP[0].Roles, p.Roles)
found, num = true, i
foundCount++
}
}
if f.driveType == driveTypePersonal {
if p.GetGrantedTo(f.driveType) != nil && p.GetGrantedTo(f.driveType).User != (api.Identity{}) && p.GetGrantedTo(f.driveType).User.ID == testUserID { // shows up in a different place on biz vs. personal
assert.Equal(t, expectedP[0].Roles, p.Roles)
found, num = true, i
foundCount++
}
}
}
assert.True(t, found, fmt.Sprintf("no permission found with expected role (want: \n\n%v \n\ngot: \n\n%v\n\n)", indent(t, expectedMeta["permissions"]), indent(t, actualMeta["permissions"])))
assert.Equal(t, 1, foundCount, "expected to find exactly 1 match")
// update it to "write"
permissions = actualP
permissions[num].Roles[0] = api.WriteRole
expectedMeta, actualMeta = f.putWithMeta(ctx, t, &file1, permissions)
f.compareMeta(t, expectedMeta, actualMeta, false)
if f.driveType != driveTypePersonal {
// zero out some things we expect to be different
expectedP, actualP = unmarshalPerms(t, expectedMeta["permissions"]), unmarshalPerms(t, actualMeta["permissions"])
normalize(expectedP)
normalize(actualP)
expectedMeta.Set("permissions", marshalPerms(t, expectedP))
actualMeta.Set("permissions", marshalPerms(t, actualP))
}
assert.JSONEq(t, expectedMeta["permissions"], actualMeta["permissions"])
// remove it
permissions[num] = nil
_, actualMeta = f.putWithMeta(ctx, t, &file1, permissions)
if f.driveType == driveTypePersonal {
perms, ok := actualMeta["permissions"]
assert.False(t, ok, fmt.Sprintf("permissions metadata key was unexpectedly found: %v", perms))
return
}
_, actualP = unmarshalPerms(t, expectedMeta["permissions"]), unmarshalPerms(t, actualMeta["permissions"])
found = false
var foundP *api.PermissionsType
for _, p := range actualP {
if p.GetGrantedTo(f.driveType) == nil || p.GetGrantedTo(f.driveType).User == (api.Identity{}) || p.GetGrantedTo(f.driveType).User.ID != testUserID {
continue
}
found = true
foundP = p
}
assert.False(t, found, fmt.Sprintf("permission was found but expected to be removed: %v", foundP))
}
// TestUploadSinglePart tests reading/writing permissions using uploadSinglepart()
// This is only used when file size is exactly 0.
func (f *Fs) TestUploadSinglePart(t *testing.T, r *fstest.Run) {
content = ""
f.TestWritePermissions(t, r)
content = "hello"
}
// TestReadPermissions tests that no permissions are written when --onedrive-metadata-permissions has "read" but not "write"
func (f *Fs) TestReadPermissions(t *testing.T, r *fstest.Run) {
// setup
ctx, ci := fs.AddConfig(ctx)
ci.Metadata = true
file1 := r.WriteFile(randomFilename(), "hello", t2)
// try adding a permission without --onedrive-metadata-permissions -- should fail
// test that what we got before vs. after is the same
_ = f.opt.MetadataPermissions.Set("read")
_, expectedMeta := f.putWithMeta(ctx, t, &file1, []*api.PermissionsType{}) // return var intentionally switched here
permissions := defaultPermissions(f.driveType)
_, actualMeta := f.putWithMeta(ctx, t, &file1, permissions)
if f.driveType == driveTypePersonal {
perms, ok := actualMeta["permissions"]
assert.False(t, ok, fmt.Sprintf("permissions metadata key was unexpectedly found: %v", perms))
return
}
assert.JSONEq(t, expectedMeta["permissions"], actualMeta["permissions"])
}
// TestReadMetadata tests that all the read-only system properties are present and non-blank
func (f *Fs) TestReadMetadata(t *testing.T, r *fstest.Run) {
// setup
ctx, ci := fs.AddConfig(ctx)
ci.Metadata = true
file1 := r.WriteFile(randomFilename(), "hello", t2)
permissions := defaultPermissions(f.driveType)
_ = f.opt.MetadataPermissions.Set("read,write")
_, actualMeta := f.putWithMeta(ctx, t, &file1, permissions)
optionals := []string{"package-type", "shared-by-id", "shared-scope", "shared-time", "shared-owner-id"} // not always present
for k := range systemMetadataInfo {
if slices.Contains(optionals, k) {
continue
}
if k == "description" && f.driveType != driveTypePersonal {
continue // not supported
}
gotV, ok := actualMeta[k]
assert.True(t, ok, fmt.Sprintf("property is missing: %v", k))
assert.NotEmpty(t, gotV, fmt.Sprintf("property is blank: %v", k))
}
}
// TestDirectoryMetadata tests reading and writing modtime and other metadata and permissions for directories
func (f *Fs) TestDirectoryMetadata(t *testing.T, r *fstest.Run) {
// setup
ctx, ci := fs.AddConfig(ctx)
ci.Metadata = true
_ = f.opt.MetadataPermissions.Set("read,write")
permissions := defaultPermissions(f.driveType)
permissions[0].Roles[0] = api.ReadRole
expectedMeta := fs.Metadata{
"mtime": t1.Format(timeFormatOut),
"btime": t2.Format(timeFormatOut),
"content-type": dirMimeType,
"description": "that is so meta!",
}
b, err := json.MarshalIndent(permissions, "", "\t")
assert.NoError(t, err)
expectedMeta.Set("permissions", string(b))
compareDirMeta := func(expectedMeta, actualMeta fs.Metadata, ignoreID bool) {
f.compareMeta(t, expectedMeta, actualMeta, ignoreID)
// check that all required system properties are present
optionals := []string{"package-type", "shared-by-id", "shared-scope", "shared-time", "shared-owner-id"} // not always present
for k := range systemMetadataInfo {
if slices.Contains(optionals, k) {
continue
}
if k == "description" && f.driveType != driveTypePersonal {
continue // not supported
}
gotV, ok := actualMeta[k]
assert.True(t, ok, fmt.Sprintf("property is missing: %v", k))
assert.NotEmpty(t, gotV, fmt.Sprintf("property is blank: %v", k))
}
}
newDst, err := operations.MkdirMetadata(ctx, f, "subdir", expectedMeta)
assert.NoError(t, err)
require.NotNil(t, newDst)
assert.Equal(t, "subdir", newDst.Remote())
actualMeta, err := fs.GetMetadata(ctx, newDst)
assert.NoError(t, err)
assert.NotNil(t, actualMeta)
compareDirMeta(expectedMeta, actualMeta, false)
// modtime
assert.Equal(t, t1.Truncate(f.Precision()), newDst.ModTime(ctx))
// try changing it and re-check it
newDst, err = operations.SetDirModTime(ctx, f, newDst, "", t2)
assert.NoError(t, err)
assert.Equal(t, t2.Truncate(f.Precision()), newDst.ModTime(ctx))
// ensure that f.DirSetModTime also works
err = f.DirSetModTime(ctx, "subdir", t3)
assert.NoError(t, err)
entries, err := f.List(ctx, "")
assert.NoError(t, err)
entries.ForDir(func(dir fs.Directory) {
if dir.Remote() == "subdir" {
assert.True(t, t3.Truncate(f.Precision()).Equal(dir.ModTime(ctx)), fmt.Sprintf("got %v", dir.ModTime(ctx)))
}
})
// test updating metadata on existing dir
actualMeta, err = fs.GetMetadata(ctx, newDst) // get fresh info as we've been changing modtimes
assert.NoError(t, err)
expectedMeta = actualMeta
expectedMeta.Set("description", "metadata is fun!")
expectedMeta.Set("btime", t3.Format(timeFormatOut))
expectedMeta.Set("mtime", t1.Format(timeFormatOut))
expectedMeta.Set("content-type", dirMimeType)
perms := unmarshalPerms(t, expectedMeta["permissions"])
perms[0].Roles[0] = api.WriteRole
b, err = json.MarshalIndent(perms, "", "\t")
assert.NoError(t, err)
expectedMeta.Set("permissions", string(b))
newDst, err = operations.MkdirMetadata(ctx, f, "subdir", expectedMeta)
assert.NoError(t, err)
require.NotNil(t, newDst)
assert.Equal(t, "subdir", newDst.Remote())
actualMeta, err = fs.GetMetadata(ctx, newDst)
assert.NoError(t, err)
assert.NotNil(t, actualMeta)
compareDirMeta(expectedMeta, actualMeta, false)
// test copying metadata from one dir to another
copiedDir, err := operations.CopyDirMetadata(ctx, f, nil, "subdir2", newDst)
assert.NoError(t, err)
require.NotNil(t, copiedDir)
assert.Equal(t, "subdir2", copiedDir.Remote())
actualMeta, err = fs.GetMetadata(ctx, copiedDir)
assert.NoError(t, err)
assert.NotNil(t, actualMeta)
compareDirMeta(expectedMeta, actualMeta, true)
// test DirModTimeUpdatesOnWrite
expectedTime := copiedDir.ModTime(ctx)
assert.True(t, !expectedTime.IsZero())
r.WriteObject(ctx, copiedDir.Remote()+"/"+randomFilename(), "hi there", t3)
entries, err = f.List(ctx, "")
assert.NoError(t, err)
entries.ForDir(func(dir fs.Directory) {
if dir.Remote() == copiedDir.Remote() {
assert.True(t, expectedTime.Equal(dir.ModTime(ctx)), fmt.Sprintf("want %v got %v", expectedTime, dir.ModTime(ctx)))
}
})
}
// TestServerSideCopyMove tests server-side Copy and Move
func (f *Fs) TestServerSideCopyMove(t *testing.T, r *fstest.Run) {
// setup
ctx, ci := fs.AddConfig(ctx)
ci.Metadata = true
_ = f.opt.MetadataPermissions.Set("read,write")
file1 := r.WriteFile(randomFilename(), content, t2)
// add a permission with "read" role
permissions := defaultPermissions(f.driveType)
permissions[0].Roles[0] = api.ReadRole
expectedMeta, actualMeta := f.putWithMeta(ctx, t, &file1, permissions)
f.compareMeta(t, expectedMeta, actualMeta, false)
comparePerms := func(expectedMeta, actualMeta fs.Metadata) (newExpectedMeta, newActualMeta fs.Metadata) {
expectedP, actualP := unmarshalPerms(t, expectedMeta["permissions"]), unmarshalPerms(t, actualMeta["permissions"])
normalize(expectedP)
normalize(actualP)
expectedMeta.Set("permissions", marshalPerms(t, expectedP))
actualMeta.Set("permissions", marshalPerms(t, actualP))
assert.JSONEq(t, expectedMeta["permissions"], actualMeta["permissions"])
return expectedMeta, actualMeta
}
// Copy
obj1, err := f.NewObject(ctx, file1.Path)
assert.NoError(t, err)
originalMeta := actualMeta
obj2, err := f.Copy(ctx, obj1, randomFilename())
assert.NoError(t, err)
actualMeta, err = fs.GetMetadata(ctx, obj2)
assert.NoError(t, err)
expectedMeta, actualMeta = comparePerms(originalMeta, actualMeta)
f.compareMeta(t, expectedMeta, actualMeta, true)
// Move
obj3, err := f.Move(ctx, obj1, randomFilename())
assert.NoError(t, err)
actualMeta, err = fs.GetMetadata(ctx, obj3)
assert.NoError(t, err)
expectedMeta, actualMeta = comparePerms(originalMeta, actualMeta)
f.compareMeta(t, expectedMeta, actualMeta, true)
}
// TestMetadataMapper tests adding permissions with the --metadata-mapper
func (f *Fs) TestMetadataMapper(t *testing.T, r *fstest.Run) {
// setup
ctx, ci := fs.AddConfig(ctx)
ci.Metadata = true
_ = f.opt.MetadataPermissions.Set("read,write")
file1 := r.WriteFile(randomFilename(), content, t2)
blob := `{"Metadata":{"permissions":"[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"}}`
if f.driveType != driveTypePersonal {
blob = `{"Metadata":{"permissions":"[{\"grantedToIdentitiesV2\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"}}`
}
// Copy
ci.MetadataMapper = []string{"echo", blob}
require.NoError(t, ci.Dump.Set("mapper"))
obj1, err := r.Flocal.NewObject(ctx, file1.Path)
assert.NoError(t, err)
obj2, err := operations.Copy(ctx, f, nil, randomFilename(), obj1)
assert.NoError(t, err)
actualMeta, err := fs.GetMetadata(ctx, obj2)
assert.NoError(t, err)
actualP := unmarshalPerms(t, actualMeta["permissions"])
found := false
foundCount := 0
for _, p := range actualP {
for _, identity := range p.GetGrantedToIdentities(f.driveType) {
if identity.User.DisplayName == testUserID {
assert.Equal(t, []api.Role{api.ReadRole}, p.Roles)
found = true
foundCount++
}
}
if f.driveType == driveTypePersonal {
if p.GetGrantedTo(f.driveType) != nil && p.GetGrantedTo(f.driveType).User != (api.Identity{}) && p.GetGrantedTo(f.driveType).User.ID == testUserID { // shows up in a different place on biz vs. personal
assert.Equal(t, []api.Role{api.ReadRole}, p.Roles)
found = true
foundCount++
}
}
}
assert.True(t, found, fmt.Sprintf("no permission found with expected role (want: \n\n%v \n\ngot: \n\n%v\n\n)", blob, actualMeta))
assert.Equal(t, 1, foundCount, "expected to find exactly 1 match")
}
// helper function to put an object with metadata and permissions
func (f *Fs) putWithMeta(ctx context.Context, t *testing.T, file *fstest.Item, perms []*api.PermissionsType) (expectedMeta, actualMeta fs.Metadata) {
t.Helper()
expectedMeta = fs.Metadata{
"mtime": t1.Format(timeFormatOut),
"btime": t2.Format(timeFormatOut),
"description": "that is so meta!",
}
expectedMeta.Set("permissions", marshalPerms(t, perms))
obj := fstests.PutTestContentsMetadata(ctx, t, f, file, content, true, "plain/text", expectedMeta)
do, ok := obj.(fs.Metadataer)
require.True(t, ok)
actualMeta, err := do.Metadata(ctx)
require.NoError(t, err)
return expectedMeta, actualMeta
}
func randomFilename() string {
return "some file-" + random.String(8) + ".txt"
}
func (f *Fs) compareMeta(t *testing.T, expectedMeta, actualMeta fs.Metadata, ignoreID bool) {
t.Helper()
for k, v := range expectedMeta {
gotV, ok := actualMeta[k]
switch k {
case "shared-owner-id", "shared-time", "shared-by-id", "shared-scope":
continue
case "permissions":
continue
case "utime":
assert.True(t, ok, fmt.Sprintf("expected metadata key is missing: %v", k))
if f.driveType == driveTypePersonal {
compareTimeStrings(t, k, v, gotV, time.Minute) // read-only upload time, so slight difference expected -- use larger precision
continue
}
compareTimeStrings(t, k, expectedMeta["btime"], gotV, time.Minute) // another bizarre difference between personal and business...
continue
case "id":
if ignoreID {
continue // different id is expected when copying meta from one item to another
}
case "mtime", "btime":
assert.True(t, ok, fmt.Sprintf("expected metadata key is missing: %v", k))
compareTimeStrings(t, k, v, gotV, time.Second)
continue
case "description":
if f.driveType != driveTypePersonal {
continue // not supported
}
}
assert.True(t, ok, fmt.Sprintf("expected metadata key is missing: %v", k))
assert.Equal(t, v, gotV, actualMeta)
}
}
func compareTimeStrings(t *testing.T, remote, want, got string, precision time.Duration) {
wantT, err := time.Parse(timeFormatIn, want)
assert.NoError(t, err)
gotT, err := time.Parse(timeFormatIn, got)
assert.NoError(t, err)
fstest.AssertTimeEqualWithPrecision(t, remote, wantT, gotT, precision)
}
func marshalPerms(t *testing.T, p []*api.PermissionsType) string {
b, err := json.MarshalIndent(p, "", "\t")
assert.NoError(t, err)
return string(b)
}
func unmarshalPerms(t *testing.T, perms string) (p []*api.PermissionsType) {
t.Helper()
err := json.Unmarshal([]byte(perms), &p)
assert.NoError(t, err)
return p
}
func indent(t *testing.T, s string) string {
p := unmarshalPerms(t, s)
return marshalPerms(t, p)
}
func defaultPermissions(driveType string) []*api.PermissionsType {
if driveType == driveTypePersonal {
return []*api.PermissionsType{{
GrantedTo: &api.IdentitySet{User: api.Identity{}},
GrantedToIdentities: []*api.IdentitySet{{User: api.Identity{ID: testUserID}}},
Roles: []api.Role{api.WriteRole},
}}
}
return []*api.PermissionsType{{
GrantedToV2: &api.IdentitySet{User: api.Identity{}},
GrantedToIdentitiesV2: []*api.IdentitySet{{User: api.Identity{ID: testUserID}}},
Roles: []api.Role{api.WriteRole},
}}
}
// zeroes out some things we expect to be different when copying/moving between objects
func normalize(Ps []*api.PermissionsType) {
for _, ep := range Ps {
ep.ID = ""
ep.Link = nil
ep.ShareID = ""
}
}
func (f *Fs) resetTestDefaults(r *fstest.Run) {
ci := fs.GetConfig(ctx)
ci.Metadata = false
_ = f.opt.MetadataPermissions.Set("off")
r.Finalise()
}
// InternalTest dispatches all internal tests
func (f *Fs) InternalTest(t *testing.T) {
newTestF := func() (*Fs, *fstest.Run) {
r := fstest.NewRunIndividual(t)
testF, ok := r.Fremote.(*Fs)
if !ok {
t.FailNow()
}
return testF, r
}
testF, r := newTestF()
t.Run("TestWritePermissions", func(t *testing.T) { testF.TestWritePermissions(t, r) })
testF.resetTestDefaults(r)
testF, r = newTestF()
t.Run("TestUploadSinglePart", func(t *testing.T) { testF.TestUploadSinglePart(t, r) })
testF.resetTestDefaults(r)
testF, r = newTestF()
t.Run("TestReadPermissions", func(t *testing.T) { testF.TestReadPermissions(t, r) })
testF.resetTestDefaults(r)
testF, r = newTestF()
t.Run("TestReadMetadata", func(t *testing.T) { testF.TestReadMetadata(t, r) })
testF.resetTestDefaults(r)
testF, r = newTestF()
t.Run("TestDirectoryMetadata", func(t *testing.T) { testF.TestDirectoryMetadata(t, r) })
testF.resetTestDefaults(r)
testF, r = newTestF()
t.Run("TestServerSideCopyMove", func(t *testing.T) { testF.TestServerSideCopyMove(t, r) })
testF.resetTestDefaults(r)
t.Run("TestMetadataMapper", func(t *testing.T) { testF.TestMetadataMapper(t, r) })
testF.resetTestDefaults(r)
}
var _ fstests.InternalTester = (*Fs)(nil)
