mirror of https://github.com/rclone/rclone.git synced 2025-12-06 00:03:32 +00:00

Compare commits


555 Commits

Author SHA1 Message Date
Nick Craig-Wood
d7f23b6b97 fshttp: increase HTTP/2 buffer size from 16k to 8M to increase throughput #8379 2025-02-13 11:00:32 +00:00
Janne Hellsten
539e96cc1f cmd: fix crash if rclone is invoked without any arguments - Fixes #8378 2025-02-12 21:31:05 +00:00
Anagh Kumar Baranwal
5086aad0b2 build: disable docker builds on PRs & add missing dockerfile changes
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-12 21:29:01 +00:00
ll3006
c1b414e2cf sync: copy dir modtimes even when copyEmptySrcDirs is false - fixes #8317
Before, after a sync, only file modtimes were updated when not using
--copy-empty-src-dirs. This ensures modtimes are updated to match the source
folder, regardless of copyEmptySrcDir. The flag --no-update-dir-modtime
(which previously did nothing) will disable this.
2025-02-12 21:27:15 +00:00
ll3006
2ff8aa1c20 sync: add tests to check dir modtimes are kept when syncing
This adds tests to check dir modtimes are updated from source
when syncing even if they've changed in the destination.
This should work both with and without --copy-empty-src-dirs.
2025-02-12 21:27:15 +00:00
nielash
6d2a72367a fix golangci-lint errors 2025-02-12 21:24:55 +00:00
nielash
9df751d4ec bisync: fix false positive on integration tests
5f70918e2c
introduced a new INFO log when making a directory, which differs depending on
whether the backend supports setting directory metadata. This caused false
positives on the bisync createemptysrcdirs test.

This fixes it by ignoring that log line.
2025-02-12 21:24:55 +00:00
Nick Craig-Wood
e175c863aa s3: split the GCS quirks into -s3-use-x-id and -s3-sign-accept-encoding #8373
Before this we applied both these quirks if provider == "GCS".

Splitting them like this makes them applicable for other providers
such as ActiveScale.
2025-02-12 21:08:16 +00:00
Nick Craig-Wood
64cd8ae0f0 Add Joel K Biju to contributors 2025-02-12 21:08:11 +00:00
Anagh Kumar Baranwal
46b498b86a stats: fix the speed not getting updated after a pause in the processing
This shifts the behavior of the average loop to be a persistent loop
that gets resumed/paused when transfers & checks are started/completed.

Previously, the averageLoop was stopped on completion of
transfers & checks but failed to start again due to the protection of
the sync.Once

Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-11 12:52:55 +00:00
Joel K Biju
b76cd74087 opendrive: added --opendrive-access flag to handle permissions 2025-02-11 12:43:10 +00:00
nielash
3b49fd24d4 bisync: fix listings missing concurrent modifications - fixes #8359
Before this change, there was a bug affecting listing files when:

- a given bisync run had changes in the 2to1 direction
AND
- the run had NO changes in the 1to2 direction
AND
- at least one of the changed files changed AGAIN during the run
(specifically, after the initial march and before the transfers.)

In this situation, the listings on one side would still retain the prior version
of the changed file, potentially causing conflicts or errors.

This change fixes the issue by making sure that if we're updating the listings
on one side, we must also update the other. (We previously tried to skip it for
efficiency, but this failed to account for the possibility that a changed file
could change again during the run.)
2025-02-11 11:21:02 +00:00
Anagh Kumar Baranwal
c0515a51a5 Added parallel docker builds and caching for go build in the container
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-11 10:17:50 +00:00
Jonathan Giannuzzi
dc9c87279b smb: improve connection pooling efficiency
* Lower pacer minSleep to establish new connections faster
* Use Echo requests to check whether connections are working (required an upgrade of go-smb2)
* Only remount shares when needed
* Use context for connection establishment
* When returning a connection to the pool, only check the ones that encountered errors
* Close connections in parallel
2025-02-04 12:35:19 +00:00
Nick Craig-Wood
057fdb3a9d lib/oauthutil: fix redirect URL mismatch errors - fixes #8351
In this commit we introduced support for client credentials flow:

65012beea4 lib/oauthutil: add support for OAuth client credential flow

This involved re-organising the oauth credentials.

Unfortunately a small error was made which used a fixed redirect URL
rather than the one configured for the backend.

This caused the box backend oauth flow not to work properly with
redirect_uri_mismatch errors.

These backends were using the wrong redirect URL and will likely be
affected, though it is possible the backends have workarounds.

- box
- drive
- googlecloudstorage
- googlephotos
- hidrive
- pikpak
- premiumizeme
- sharefile
- yandex
2025-02-03 12:15:54 +00:00
Nick Craig-Wood
3daf62cf3d b2: fix "fatal error: concurrent map writes" - fixes #8355
This was caused by the embryonic metadata support. Since this isn't
actually visible externally, this patch removes it for the time being.
2025-02-03 11:33:21 +00:00
Nick Craig-Wood
0ef495fa76 Add Alexander Minbaev to contributors 2025-02-03 11:33:21 +00:00
Nick Craig-Wood
722c567504 Add Zachary Vorhies to contributors 2025-02-03 11:33:21 +00:00
Nick Craig-Wood
0ebe1c0f81 Add Jess to contributors 2025-02-03 11:33:21 +00:00
Alexander Minbaev
2dc06b2548 s3: add IBM IAM signer - fixes #7617 2025-02-03 11:29:31 +00:00
Zachary Vorhies
b52aabd8fe serve nfs: update docs to note Windows is not supported - fixes #8352 2025-02-01 12:20:09 +00:00
Jess
6494ac037f cmd/config(update remote): introduce --no-output option
Fixes #8190
2025-01-24 17:22:58 +00:00
jkpe
5c3a1bbf30 s3: add DigitalOcean regions SFO2, LON1, TOR1, BLR1 2025-01-24 10:41:48 +00:00
Nick Craig-Wood
c837664653 sync: fix cpu spinning when empty directory finding with leading slashes
Before this change the logic which makes sure we create all
directories could get confused with directories which started with
slashes and get into an infinite loop consuming 100% of the CPU.
2025-01-22 11:56:05 +00:00
Nick Craig-Wood
77429b154e s3: fix handling of objects with // in #5858 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
39b8f17ebb azureblob: fix handling of objects with // in #5858 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
81ecfb0f64 fstest: add integration tests objects with // on bucket based backends #5858 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
656e789c5b fs/list: tweak directory listing assertions after allowing // names 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
fe19184084 lib/bucket: fix tidying of // in object keys #5858
Before this change, bucket.Join would tidy up object keys by removing
repeated / in them. This means we can't access objects with // in them
which is valid for object keys (but not for file system paths).

This could have consequences for users who are relying on rclone to
fix improper paths for them.
2025-01-22 11:56:05 +00:00
Nick Craig-Wood
b4990cd858 lib/bucket: add IsAllSlashes function 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
8e955c6b13 azureblob: remove uncommitted blocks on InvalidBlobOrBlock error
When doing a multipart upload or copy, if an InvalidBlobOrBlock error
is received, it can mean that there are uncommitted blocks from a
previous failed attempt with a different length of ID.

This patch makes rclone attempt to clear the uncommitted blocks and
retry if it receives this error.
2025-01-22 11:56:05 +00:00
Nick Craig-Wood
3a5ddfcd3c azureblob: implement multipart server side copy
This implements multipart server side copy to improve copying from one
azure region to another by orders of magnitude (from 30s for a 100M
file to 10s for a 10G file with --azureblob-upload-concurrency 500).

- Add `--azureblob-copy-cutoff` to control the cutoff from single to multipart copy
- Add `--azureblob-copy-concurrency` to control the copy concurrency
- Add ServerSideAcrossConfigs flag as this now works properly
- Implement multipart copy using put block list API
- Shortcut multipart copy for same storage account
- Override with `--azureblob-use-copy-blob`

Fixes #8249
2025-01-22 11:56:05 +00:00
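
A minimal usage sketch for the new copy flags above (remote names and flag values are illustrative, not from the commit):

    # server side copy between two Azure Blob remotes, tuning the multipart copy
    rclone copy azsrc:container/big.bin azdst:container/ \
        --azureblob-copy-cutoff 256M --azureblob-copy-concurrency 16
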
Nick Craig-Wood
ac3f7a87c3 azureblob: speed up server side copies for small files #8249
This speeds up server side copies for small files which need to check
the copy status by using an exponential ramp up of time to check the
copy status endpoint.
2025-01-22 11:56:05 +00:00
Nick Craig-Wood
4e9b63e141 azureblob: cleanup uncommitted blocks on upload errors
Before this change, if a multipart upload was aborted, then rclone
would leave uncommitted blocks lying around. Azure has a limit of
100,000 uncommitted blocks per storage account, so when you then try
to upload other stuff into that account, or simply the same file
again, you can run into this limit. This causes errors like the
following:

BlockCountExceedsLimit: The uncommitted block count cannot exceed the
maximum limit of 100,000 blocks.

This change removes the uncommitted blocks if a multipart upload is
aborted or fails.

If there was an existing destination file, it takes care not to
overwrite it by recommitting already committed blocks.

This means that the scheme for allocating block IDs had to change to
make them different for each block and each upload.

Fixes #5583
2025-01-22 11:56:05 +00:00
Nick Craig-Wood
7fd7fe3c82 azureblob: factor readMetaData into readMetaDataAlways returning blob properties 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
9dff45563d Add b-wimmer to contributors 2025-01-22 11:56:05 +00:00
b-wimmer
83cf8fb821 azurefiles: add --azurefiles-use-az and --azurefiles-disable-instance-discovery
Adds additional authentication options from azureblob to azurefiles as well

See rclone#8078
2025-01-22 11:11:18 +00:00
Nick Craig-Wood
32e79a5c5c onedrive: mark German (de) region as deprecated
See: https://learn.microsoft.com/en-us/previous-versions/azure/germany/
2025-01-22 11:00:37 +00:00
Nick Craig-Wood
fc44a8114e Add Trevor Starick to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
657172ef77 Add hiddenmarten to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
71eb4199c3 Add Corentin Barreau to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
ac3c21368d Add Bruno Fernandes to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
db71b2bd5f Add Moises Lima to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
8cfe42d09f Add izouxv to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
e673a28a72 Add Robin Schneider to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
59889ce46b Add Tim White to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
62e8a01e7e Add Christoph Berger to contributors 2025-01-22 11:00:37 +00:00
Trevor Starick
87eaf37629 azureblob: add support for x-ms-tags header 2025-01-17 19:37:56 +00:00
hiddenmarten
7c7606a6cf rc: disable the metrics server when running rclone rc
Fixes #8248
2025-01-17 17:46:22 +00:00
Corentin Barreau
dbb21165d4 internetarchive: add --internetarchive-metadata="key=value" for setting item metadata
Added the ability to include an item's metadata on uploads via the
Internet Archive backend using the `--internetarchive-metadata="key=value"`
argument. This is hidden from the configurator as it should only
really be used on the command line.

Before this change, metadata had to be manually added after uploads.
With this new feature, users can specify metadata directly during the
upload process.
2025-01-17 16:00:34 +00:00
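
A hedged example of the command line use described above (item name and metadata value are illustrative):

    rclone copy ./dataset ia:my-item --internetarchive-metadata="title=My Dataset"
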
Dan McArdle
375953cba3 lib/batcher: Deprecate unused option: batch_commit_timeout 2025-01-17 15:56:09 +00:00
Bruno Fernandes
af5385b344 s3: Added new storage class to magalu provider 2025-01-17 15:54:34 +00:00
Moises Lima
347be176af http servers: add --user-from-header to use for authentication
Retrieve the username from a specified HTTP header if no
other authentication methods are configured
(ideal for proxied setups)
2025-01-17 15:53:23 +00:00
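
A minimal sketch of the proxied setup described above, assuming the reverse proxy supplies the username in a header (header name and remote are illustrative):

    rclone serve webdav remote:path --user-from-header X-Forwarded-User
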
Pat Patterson
bf5a4774c6 b2: add SkipDestructive handling to backend commands - fixes #8194 2025-01-17 15:47:01 +00:00
izouxv
0275d3edf2 vfs: close the change notify channel on Shutdown 2025-01-17 15:38:09 +00:00
Robin Schneider
be53ae98f8 Docker image: Add label org.opencontainers.image.source for release notes in Renovate dependency updates 2025-01-17 15:29:36 +00:00
Tim White
0d9fe51632 docs: add OneDrive Impersonate instructions - fixes #5610 2025-01-17 14:30:51 +00:00
Christoph Berger
03bd795221 docs: explain the stringArray flag parameter descriptor 2025-01-17 09:50:22 +01:00
Nick Craig-Wood
5a4026ccb4 iclouddrive: add notes on ADP and Missing PCS cookies - fixes #8310 2025-01-16 10:14:52 +00:00
Dimitri Papadopoulos
b1d4de69c2 docs: fix typos found by codespell in docs and code comments 2025-01-16 10:39:01 +01:00
Nick Craig-Wood
5316acd046 fs: fix confusing "didn't find section in config file" error
This change decorates the error with the section name not found which
will hopefully save user confusion.

Fixes #8170
2025-01-15 16:32:59 +00:00
Nick Craig-Wood
2c72842c10 vfs: fix race detected by race detector
This race would only happen when --dir-cache-time was very small.

This was noticed in the VFS tests when --dir-cache-time was 100 ms so
is unlikely to affect normal users.
2025-01-14 20:46:27 +00:00
Nick Craig-Wood
4a81f12c26 Add Jonathan Giannuzzi to contributors 2025-01-14 20:46:27 +00:00
Nick Craig-Wood
aabda1cda2 Add Spencer McCullough to contributors 2025-01-14 20:46:27 +00:00
Nick Craig-Wood
572fe20f8e Add Matt Ickstadt to contributors 2025-01-14 20:46:27 +00:00
Jonathan Giannuzzi
2fd4c45b34 smb: add support for kerberos authentication
Fixes #7800
2025-01-14 19:24:31 +00:00
Spencer McCullough
ec5489e23f drive: added backend moveid command 2025-01-14 19:21:13 +00:00
Matt Ickstadt
6898375a2d docs: fix reference to serves3 setting disable_multipart_uploads which was renamed 2025-01-14 18:51:19 +01:00
Matt Ickstadt
d413443a6a docs: fix link to Rclone Serve S3 2025-01-14 18:51:19 +01:00
Nick Craig-Wood
5039747f26 serve s3: fix list objects encoding-type
Before this change rclone would always use encoding-type url even if
the client hadn't asked for it.

This confused some clients.

This fixes the problem by leaving the URL encoding to the gofakes3
library which has also been fixed.

Fixes #7836
2025-01-14 16:08:18 +00:00
Nick Craig-Wood
11ba4ac539 build: update gopkg.in/yaml.v2 to v3 2025-01-14 15:25:10 +00:00
Nick Craig-Wood
b4ed7fb7d7 build: update all dependencies 2025-01-14 15:25:10 +00:00
Nick Craig-Wood
719473565e bisync: fix go vet problems with go1.24 2025-01-14 15:25:10 +00:00
Nick Craig-Wood
bd7278d7e9 build: update to go1.24rc1 and make go1.22 the minimum required version 2025-01-14 12:13:14 +00:00
Nick Craig-Wood
45ba81c726 version: add --deps flag to show dependencies and other build info 2025-01-14 12:08:49 +00:00
Nick Craig-Wood
530658e0cc doc: make man page well formed for whatis - fixes #7430 2025-01-13 18:35:27 +00:00
Nick Craig-Wood
b742705d0c Start v1.70.0-DEV development 2025-01-12 16:31:12 +00:00
Nick Craig-Wood
cd3b08d8cf Version v1.69.0 2025-01-12 15:09:13 +00:00
Nick Craig-Wood
009660a489 test_all: disable docker plugin tests
These are not completing on the integration test server. This needs
investigating, but we need the integration tests to run properly.
2025-01-12 14:02:57 +00:00
albertony
4b6c7c6d84 docs: fix typo 2025-01-12 13:49:47 +01:00
Nick Craig-Wood
a7db375f5d accounting: fix race stopping/starting the stats counter
This was picked up by the race detector in the CI.
2025-01-11 20:25:34 +00:00
Nick Craig-Wood
101dcfe157 docs: add github.com/icholy/gomajor to RELEASE for updating major versions 2025-01-11 20:25:34 +00:00
Francesco Frassinelli
aec87b74d3 ftp: fix ls commands returning empty on "Microsoft FTP Service" servers
The problem was in the upstream library jlaffaye/ftp and this updates it.

Fixes #8224
2025-01-11 20:02:16 +00:00
Nick Craig-Wood
91c8f92ccb s3: add docs on data integrity
See: https://forum.rclone.org/t/help-me-figure-out-how-to-verify-backup-accuracy-and-completeness-on-s3/37632/5
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
965bf19065 webdav: make --webdav-auth-redirect to fix 401 unauthorized on redirect
Before this change, if the server returned a 302 redirect message when
opening a file rclone would do the redirect but drop the
Authorization: header. This is a sensible thing to do for security
reasons but breaks some setups.

This patch adds the --webdav-auth-redirect flag which makes it
preserve the auth just for this kind of request.

See: https://forum.rclone.org/t/webdav-401-unauthorized-when-server-redirects-to-another-domain/39292
2025-01-11 18:39:15 +00:00
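
A hedged example of enabling the new flag on a WebDAV remote whose server redirects downloads (remote name is illustrative):

    rclone copy mywebdav:path/file.bin ./local/ --webdav-auth-redirect
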
Nick Craig-Wood
15ef3b90fa rest: make auth preserving redirects an option 2025-01-11 18:39:15 +00:00
Nick Craig-Wood
f6efaf2a63 box: fix panic when decoding corrupted PEM from JWT file
See: https://forum.rclone.org/t/box-jwt-config-erroring-panic/40685/
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
0e7c495395 size: make output compatible with -P
Before this change the output of `rclone size -P` would get corrupted
by the progress printing.

This is fixed by using operations.SyncPrintf instead of fmt.Printf.

Fixes #7912
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
ff0ded8f11 vfs: add remote name to vfs cache log messages - fixes #7952 2025-01-11 18:39:15 +00:00
Nick Craig-Wood
110bf468a4 dropbox: fix return status when full to be fatal error
This will stop the sync, but won't stop a mount.

Fixes #7334
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
d4e86f4d8b rc: add relative to vfs/queue-set-expiry 2025-01-11 18:39:15 +00:00
Nick Craig-Wood
6091a0362b vfs: fix open files disappearing from directory listings
In this commit

aaadb48d48 vfs: keep virtual directory status accurate and reduce deadlock potential

We reworked the virtual directory detection to use an atomic bool so
that we could run part of the cache forgetting only with a read lock.

Unfortunately this had a bug which meant that directories with virtual
items could be forgotten.

This commit changes the boolean into a count of virtual entries which
should be more accurate.

Fixes #8082
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
33d2747829 docker serve: parse all remaining mount and VFS options
Before this change, this code implemented an ad-hoc parser for a
subset of vfs and mount options.

After the config re-organization it can use the same parsing code as
the rest of rclone which simplifies the code and exposes all the VFS
and mount options.
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
c9e5f45d73 smb: fix panic if stat fails
Before this fix the smb backend could panic if a stat call failed.

This fix makes it return an error instead.

It should have the side effect that we do one less stat call on upload
too.

Fixes #8106
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
2f66537514 googlephotos: fix nil pointer crash on upload - fixes #8233 2025-01-11 18:39:15 +00:00
Nick Craig-Wood
a491312c7d iclouddrive: tweak docs 2025-01-11 18:39:15 +00:00
Nick Craig-Wood
45b7690867 serve dlna: sort the directory entries by directories first then alphabetically by name
Some media boxes don't sort the items returned from the DLNA server,
so sort them here, directories first then alphabetically by name.

See: https://forum.rclone.org/t/serve-dlna-files-order-directories-first/47790
2025-01-11 17:11:40 +00:00
Nick Craig-Wood
30ef1ddb23 serve nfs: fix missing inode numbers which was messing up ls -laR
In 6ba3e24853

    serve nfs: fix incorrect user id and group id exported to NFS #7973

We updated the stat function to output uid and gid. However this set
the inode numbers of everything to -1. This causes a problem with
doing `ls -laR` giving "not listing already-listed directory" as it
uses inode numbers to see if it has listed a directory or not.

This patch reads the inode number from the vfs.Node and sets it in the
Stat output.
2025-01-09 18:55:18 +00:00
Nick Craig-Wood
424d8e3123 serve nfs: implement --nfs-cache-type symlink
`--nfs-cache-type symlink` is similar to `--nfs-cache-type disk` in
that it uses an on disk cache, but the cache entries are held as
symlinks. Rclone will use the handle of the underlying file as the NFS
handle which improves performance.
2025-01-09 18:55:18 +00:00
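
A minimal sketch of serving NFS with the new cache type (remote and address are illustrative):

    rclone serve nfs remote:path --addr :2049 --nfs-cache-type symlink
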
Nick Craig-Wood
04dfa6d923 azureblob,oracleobjectstorage,s3: quit multipart uploads if the context is cancelled
Before this change the multipart uploads would continue retrying even
if the context was cancelled.
2025-01-09 18:55:18 +00:00
Oleg Kunitsyn
fdff1a54ee http: fix incorrect URLs with initial slash
* http: trim initial slash building url
* Add a test for http object with leading slash

Fixes #8261
2025-01-09 17:40:00 +00:00
Eng Zer Jun
42240f4b5d build: update github.com/shirou/gopsutil to v4
v4 is the latest version with bug fixes and enhancements. While there
are 4 breaking changes in v4, they do not affect us because we do not
use the impacted functions.

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2025-01-09 17:32:09 +00:00
albertony
7692ef289f Replace Windows-specific NewLazyDLL with NewLazySystemDLL
This will only search Windows System directory for the DLL if name is a base
name (like "advapi32.dll"), which prevents DLL preloading attacks.

To get access to NewLazySystemDLL, imports of syscall need to be swapped with
golang.org/x/sys/windows.
2025-01-08 17:35:00 +01:00
Nick Craig-Wood
bfb7b88371 lib/oauthutil: don't require token to exist for client credentials flow
Before this change when setting up client credentials flow manually,
rclone would fail with this error message on first run despite the
fact that no existing token is needed.

    empty token found - please run "rclone config reconnect remote:"

This fixes the problem by ignoring token loading problems for client
credentials flow.
2025-01-08 12:38:24 +00:00
Nick Craig-Wood
5f70918e2c fs/operations: make log messages consistent for mkdir/rmdir at INFO level
Before this change, creating a new directory would write a DEBUG log
but removing it would write an INFO log.

This change makes both write an INFO log for consistency.
2025-01-08 12:38:24 +00:00
Nick Craig-Wood
abf11271fe Add Francesco Frassinelli to contributors 2025-01-08 12:38:24 +00:00
Francesco Frassinelli
a36e89bb61 smb: Add support for Kerberos authentication.
This updates go-smb2 to a version which supports kerberos.

Fixes #7600
2025-01-08 11:25:23 +00:00
Francesco Frassinelli
35614acf59 docs: smb: link to CloudSoda/go-smb2 fork 2025-01-08 11:18:55 +00:00
yuval-cloudinary
7e4b8e33f5 cloudinary: add cloudinary backend - fixes #7989 2025-01-06 10:54:03 +00:00
yuval-cloudinary
5151a663f0 operations: fix eventual consistency in TestParseSumFile test 2025-01-06 10:54:03 +00:00
Nick Craig-Wood
b85a1b684b Add TAKEI Yuya to contributors 2025-01-06 10:34:03 +00:00
Nick Craig-Wood
8fa8f146fa docs: Remove Backblaze as a Platinum sponsor 2025-01-06 10:33:57 +00:00
Nick Craig-Wood
6cad0a013e docs: add RcloneView as silver sponsor 2025-01-06 10:33:57 +00:00
TAKEI Yuya
aa743cbc60 serve docker: fix incorrect GID assignment 2025-01-05 21:42:58 +01:00
Nick Craig-Wood
a389a2979b serve s3: fix Last-Modified timestamp
This had two problems

1. It was using a single digit for day of month
2. It is supposed to be in UTC

Fixes #8277
2024-12-30 14:16:53 +00:00
Nick Craig-Wood
d6f0d1d349 Add ToM to contributors 2024-12-30 14:16:53 +00:00
Nick Craig-Wood
4ed6960d95 Add Henry Lee to contributors 2024-12-30 14:16:53 +00:00
Nick Craig-Wood
731af0c0ab Add Louis Laureys to contributors 2024-12-30 14:16:53 +00:00
ToM
5499fd3b59 docs: filtering: mention feeding --files-from from standard input 2024-12-28 16:44:44 +01:00
ToM
e0e697ca11 docs: filtering: fix --include-from copypaste error 2024-12-27 11:36:41 +00:00
Henry Lee
05f000b076 s3: rename glacier storage class to flexible retrieval 2024-12-27 11:32:43 +00:00
Louis Laureys
a34c839514 b2: add daysFromStartingToCancelingUnfinishedLargeFiles to backend lifecycle command
See: https://www.backblaze.com/blog/effortlessly-managing-unfinished-large-file-uploads-with-b2-cloud-storage/
See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
2024-12-22 10:16:31 +00:00
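
A hedged example of setting the new rule via the b2 lifecycle backend command (bucket name and day count are illustrative):

    rclone backend lifecycle b2:bucket -o daysFromStartingToCancelingUnfinishedLargeFiles=1
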
Nick Craig-Wood
6a217c7dc1 build: update golang.org/x/net to v0.33.0 to fix CVE-2024-45338
An attacker can craft an input to the Parse functions that would be
processed non-linearly with respect to its length, resulting in
extremely slow parsing. This could cause a denial of service.

This only affects users running rclone servers exposed to untrusted
networks.

See: https://pkg.go.dev/vuln/GO-2024-3333
See: https://github.com/advisories/GHSA-w32m-9786-jp63
2024-12-21 18:43:26 +00:00
Nick Craig-Wood
e1748a3183 azurefiles: fix missing x-ms-file-request-intent header
According to the SDK docs

> FileRequestIntent is required when using TokenCredential for
> authentication. Acceptable value is backup.

This sets the correct option in the SDK. It does it for all types of
authentication but the SDK seems clever enough not to supply it when
it isn't needed.

This fixes the error

> MissingRequiredHeader An HTTP header that's mandatory for this
> request is not specified. x-ms-file-request-intent

Fixes #8241
2024-12-19 17:01:34 +00:00
Nick Craig-Wood
bc08e05a00 Add Thomas ten Cate to contributors 2024-12-19 17:01:34 +00:00
Thomas ten Cate
9218b69afe docs: Document --url and --unix-socket on the rc page
This page started talking about what commands you can send, without
explaining how to actually send them.

Fixes #8252.
2024-12-19 15:50:43 +00:00
Nick Craig-Wood
0ce2e12d9f docs: link to the outstanding vfs symlinks issue 2024-12-16 11:01:03 +00:00
Nick Craig-Wood
7224b76801 Add Yxxx to contributors 2024-12-16 11:01:03 +00:00
Nick Craig-Wood
d2398ccb59 Add hayden.pan to contributors 2024-12-16 11:01:03 +00:00
Yxxx
0988fd9e9f docs: update pcloud doc to avoid puzzling token error when use remote rclone authorize 2024-12-16 10:29:24 +00:00
wiserain
51cde23e82 pikpak: add option to use original file links - fixes #8246 2024-12-16 01:17:58 +09:00
hayden.pan
caac95ff54 rc/job: use mutex for adding listeners thread safety
Fixes a problem where, in extreme cases, a listener added by calling OnFinish() while the job was executing finish() would never be executed.

This change should not cause compatibility issues, as consumers should not make assumptions about whether listeners will be run in a new goroutine.
2024-12-15 13:05:29 +00:00
albertony
19f4580aca docs: mention in serve tls options when value is path to file - fixes #8232 2024-12-14 11:48:38 +00:00
Nick Craig-Wood
27f448d14d build: update all dependencies 2024-12-13 16:07:45 +00:00
Nick Craig-Wood
500698c5be accounting: fix debug printing when debug wasn't set 2024-12-13 15:34:44 +00:00
Nick Craig-Wood
91af6da068 Add Filipe Azevedo to contributors 2024-12-13 15:34:44 +00:00
Nick Craig-Wood
b8835fe7b4 fs: make --links flag global and add new --local-links and --vfs-links flag
Before this change the --links flag when using the VFS overrode the
--links flag for the local backend, which meant the local backend
needed explicit config to use links.

This fixes the problem by making the --links flag global and adding
new --local-links and --vfs-links flags to control the feature
individually if required.
2024-12-13 12:43:20 +00:00
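
A sketch of the split flags described above (paths are illustrative): --links still enables symlink translation everywhere, while the new flags target only the local backend or only the VFS.

    # translate symlinks only on the local backend side
    rclone copy /src remote:dst --local-links
    # translate symlinks only in the VFS layer of a mount
    rclone mount remote:dst /mnt/point --vfs-links
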
Nick Craig-Wood
48d9e88e8f vfs: add docs for -l/--links flag 2024-12-13 12:43:20 +00:00
Nick Craig-Wood
4e7ee9310e nfsmount,serve nfs: introduce symlink support #2975 2024-12-13 12:43:20 +00:00
Filipe Azevedo
d629102fa6 mount2: introduce symlink support #2975 2024-12-13 12:43:20 +00:00
Filipe Azevedo
db1ed69693 mount: introduce symlink support #2975 2024-12-13 12:43:20 +00:00
Filipe Azevedo
06657c49a0 cmount: introduce symlink support #2975 2024-12-13 12:43:20 +00:00
Filipe Azevedo
f1d2f2b2c8 vfstest: make VFS test suite support symlinks 2024-12-13 12:43:20 +00:00
Nick Craig-Wood
a5abe4b8b3 vfs: add symlink support to VFS
This is somewhat limited in that it only resolves symlinks when files
are opened. This will work fine for the intended use in rclone mount,
but is probably inadequate for the other servers.
2024-12-13 12:43:20 +00:00
Nick Craig-Wood
c0339327be vfs: add ELOOP error 2024-12-13 12:43:20 +00:00
Filipe Azevedo
353bc3130e vfs: Add link permissions 2024-12-13 12:43:20 +00:00
Filipe Azevedo
126f00882b vfs: Add VFS --links command line switch
This will be used to enable links support for the various mount engines
in a follow up commit.
2024-12-13 12:43:20 +00:00
Nick Craig-Wood
44c3f5e1e8 vfs: add vfs.WriteFile to match os.WriteFile 2024-12-13 12:43:20 +00:00
Filipe Azevedo
c47c94e485 fs: Move link suffix to fs 2024-12-13 12:43:20 +00:00
Nick Craig-Wood
1f328fbcfd cmount: fix problems noticed by linter 2024-12-13 12:43:20 +00:00
Filipe Azevedo
7f1240516e mount2: Fix missing . and .. entries 2024-12-13 12:43:20 +00:00
Nick Craig-Wood
f9946b37f9 sftp: fix nil check when using auth proxy
An incorrect nil check was spotted while reviewing the code for
CVE-2024-45337.

The nil check failing has never happened as far as we know. The
consequences would be a nil pointer exception.
2024-12-13 12:36:15 +00:00
Nick Craig-Wood
96fe25cf0a Add Martin Hassack to contributors 2024-12-13 12:36:15 +00:00
dependabot[bot]
a176d4cbda serve sftp: resolve CVE-2024-45337
This commit resolves CVE-2024-45337 which is a potential auth
bypass for `rclone serve sftp`.

https://nvd.nist.gov/vuln/detail/CVE-2024-45337

However after review of the code, rclone is **not** affected as it
handles the authentication correctly. Rclone already uses the
Extensions field of the Permissions return value from the various
authentication callbacks to record data associated with the
authentication attempt as suggested in the vulnerability report.

This commit includes the recommended update to golang.org/x/crypto
anyway so that this is visible in the changelog.

Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.29.0 to 0.31.0.
- [Commits](https://github.com/golang/crypto/compare/v0.29.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-13 12:28:08 +00:00
Tony Metzidis
e704e33045 googlecloudstorage: typo fix in docs 2024-12-13 11:49:21 +00:00
Martin Hassack
2f3e90f671 onedrive: add support for OAuth client credential flow - fixes #6197
This adds support for the client credential flow oauth method which
requires some special handling in onedrive:

- Special scopes are required
- The tenant is required
- The tenant needs to be used in the oauth URLs

This also:

- refactors the oauth config creation so it isn't duplicated
- defaults the drive_id to the previous one in the config
- updates the documentation

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2024-12-13 11:34:11 +00:00
Martin Hassack
65012beea4 lib/oauthutil: add support for OAuth client credential flow
This commit reorganises the oauth code to use our own config struct
which has all the info for the normal oauth method and also the client
credentials flow method.

It updates all backends which use lib/oauthutil to use the new config
struct which shouldn't change any functionality.

It also adds code for dealing with the client credential flow config
which doesn't require the use of a browser and doesn't have or need a
refresh token.

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2024-12-13 11:34:11 +00:00
Nick Craig-Wood
704217b698 lib/oauthutil: return error messages from the oauth process better 2024-12-13 11:34:11 +00:00
Nick Craig-Wood
6ade1055d5 bin/test_backend_sizes.py fix compile flags and s3 reporting
This now compiles rclone with CGO_ENABLED=0 which is closer to the
release compile.

It also removes pikpak if testing s3 as the two depend on each
other.
2024-12-13 11:34:11 +00:00
Nick Craig-Wood
6a983d601c test makefiles: add --flat flag for making directories with many entries 2024-12-11 18:21:42 +00:00
Nick Craig-Wood
eaafae95fa Add divinity76 to contributors 2024-12-11 18:21:42 +00:00
Nick Craig-Wood
5ca1436c24 Add Ilias Ozgur Can Leonard to contributors 2024-12-11 18:21:42 +00:00
Nick Craig-Wood
c46e93cc42 Add remygrandin to contributors 2024-12-11 18:21:42 +00:00
Nick Craig-Wood
66943d3d79 Add Michael R. Davis to contributors 2024-12-11 18:21:42 +00:00
divinity76
a78bc093de cmd/mountlib: better snap mount error message
Mounting will always fail when rclone is installed from the snap package manager.
But the error message generated when trying to mount from a snap install was not
very good. Improve the error message.

Fixes #8208
2024-12-06 08:14:09 +00:00
Ilias Ozgur Can Leonard
2446c4928d vfs: with --vfs-used-is-size value is calculated and then thrown away - fixes #8220 2024-12-04 22:57:41 +00:00
albertony
e11e679e90 serve sftp: fix loading of authorized keys file with comment on last line - fixes #8227 2024-12-04 13:42:10 +01:00
Manoj Ghosh
ba8e538173 oracleobjectstorage: make specifying compartmentid optional 2024-12-03 17:54:00 +00:00
Georg Welzel
40111ba5e1 pcloud: fix failing large file uploads - fixes #8147
This changes the OpenWriterAt implementation to make client/fd
handling atomic.

This PR stabilizes the situation of bigger files and multi-threaded
uploads. The root cause boils down to the old "fun" property of
pcloud's fileops API: sessions are bound to TCP connections. This
forces us to use an http client with only a single connection
underneath.

With large files, we reuse the same connection for each chunk. If that
connection is interrupted (e.g. because we are talking over the
internet), all chunks will fail. The probability of the latter
increases with larger files.

As the point of the whole multi-threaded feature was to speed-up large
files in the first place, this change pulls the client creation (and
hence connection handling) into each chunk. This should stabilize the
situation, as each chunk (and retry) gets its own connection.
2024-12-03 17:52:44 +00:00
remygrandin
ab58ae5b03 docs: add docker volume plugin troubleshooting steps
This expands the current docker volume plugin troubleshooting steps to include a state cleanup command and a reminder that an uninstall/reinstall does not clean up those cache files.

Co-authored-by: albertony <12441419+albertony@users.noreply.github.com>
2024-11-26 20:56:10 +01:00
Michael R. Davis
ca8860177e docs: fix missing state parameter in /auth link in instructions 2024-11-22 22:40:07 +00:00
Nick Craig-Wood
d65d1a44b3 build: fix build failure on ubuntu 2024-11-21 12:05:49 +00:00
Sam Harrison
c1763a3f95 docs: upgrade fontawesome to v6
Also update the Filescom icon.
2024-11-21 11:06:38 +00:00
Nick Craig-Wood
964fcd5f59 s3: fix multitenant multipart uploads with CEPH
CEPH uses a special bucket form `tenant:bucket` for multitenant
access using S3 as documented here:

https://docs.ceph.com/en/reef/radosgw/multitenancy/#s3

However when doing multipart uploads, in the reply from
`CreateMultipart` the `tenant:` was missing from the `Bucket` response
rclone was using to build the `UploadPart` request. This caused a 404
failure return. This may be a CEPH bug, but it is easy to work around.

This changes the code to use the `Bucket` and `Key` that we used in
`CreateMultipart` in `UploadPart` rather than the one returned from
`CreateMultipart` which fixes the problem.

See: https://forum.rclone.org/t/rclone-zcat-does-not-work-with-a-multitenant-ceph-backend/48618
2024-11-21 11:04:49 +00:00
Nick Craig-Wood
c6281a1217 Add David Seifert to contributors 2024-11-21 11:04:49 +00:00
Nick Craig-Wood
ff3f8f0b33 Add vintagefuture to contributors 2024-11-21 11:04:49 +00:00
Anthony Metzidis
2d844a26c3 use better docs 2024-11-20 18:05:56 +00:00
Anthony Metzidis
1b68492c85 googlecloudstorage: update docs on service account access tokens 2024-11-20 18:05:56 +00:00
David Seifert
acd5a893e2 test_all: POSIX head/tail invocations
* head -number is not allowed by POSIX.1-2024:
  https://pubs.opengroup.org/onlinepubs/9799919799/utilities/head.html
  https://devmanual.gentoo.org/tools-reference/head-and-tail/index.html
2024-11-20 18:02:07 +00:00
vintagefuture
0214a59a8c icloud: Added note about app specific password not working 2024-11-20 17:43:42 +00:00
Nick Craig-Wood
6079cab090 s3: fix download of compressed files from Cloudflare R2 - fixes #8137
Before this change attempting to download a file with
`Content-Encoding: gzip` from Cloudflare R2 gave this error

    corrupted on transfer: sizes differ src 0 vs dst 999

This was caused by the SDK v2 overriding our attempt to set
`Accept-Encoding: gzip`.

This fixes the problem by disabling the middleware that does that
overriding.
2024-11-20 12:08:23 +00:00
Nick Craig-Wood
bf57087a6e s3: fix testing tiers which don't exist except on AWS 2024-11-20 12:08:23 +00:00
Nick Craig-Wood
d8bc542ffc Changelog updates from Version v1.68.2 2024-11-15 14:51:27 +00:00
Nick Craig-Wood
01ccf204f4 local: fix permission and ownership on symlinks with --links and --metadata
Before this change, if writing to a local backend with --metadata and
--links, if the incoming metadata contained mode or ownership
information then rclone would apply the mode/ownership to the
destination of the link not the link itself.

This fixes the problem by using the link safe syscall variants
lchown/fchmodat when --links and --metadata are in use. Note that Linux
does not support setting permissions on symlinks, so rclone emits a
debug message in this case.

This also fixes setting times on symlinks on Windows which wasn't
implemented for atime, mtime and was incorrectly setting the target of
the symlink for btime.

See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
2024-11-14 16:20:18 +00:00
Nick Craig-Wood
84b64dcdf9 Revert "Merge commit from fork"
This reverts commit 1e2b354456.
2024-11-14 16:20:06 +00:00
Nick Craig-Wood
8cc1020a58 Add Dimitrios Slamaris to contributors 2024-11-14 16:15:49 +00:00
Nick Craig-Wood
1e2b354456 Merge commit from fork
Before this change, if writing to a local backend with --metadata and
--links, if the incoming metadata contained mode or ownership
information then rclone would apply the mode/ownership to the
destination of the link not the link itself.

This fixes the problem by using the link safe syscall variants
lchown/fchmodat when --links and --metadata are in use. Note that Linux
does not support setting permissions on symlinks, so rclone emits a
debug message in this case.

This also fixes setting times on symlinks on Windows which wasn't
implemented for atime, mtime and was incorrectly setting the target of
the symlink for btime.

See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
2024-11-14 16:13:57 +00:00
Nick Craig-Wood
f639cd9c78 onedrive: fix integration tests after precision change
We changed the precision of the onedrive personal backend in
c053429b9c from 1 ms to 1 s.

However the tests did not get updated. This changes the time tests to
use `fstest.AssertTimeEqualWithPrecision` which compares with
precision so hopefully won't break again.
2024-11-12 13:09:15 +00:00
Nick Craig-Wood
e50f995d87 operations: fix TestRemoveExisting on crypt backends by shortening the file name 2024-11-12 13:09:15 +00:00
Dimitrios Slamaris
abe884e744 bisync: fix output capture restoring the wrong output for logrus
Before this change, if rclone is used as a library and logrus is used
after a call to rc `sync/bisync`, logging does not work anymore and
leads to writing to a closed pipe.

This change restores the output correctly.

Fixes #8158
2024-11-12 11:42:54 +00:00
Nick Craig-Wood
173b2ac956 serve sftp: update github.com/pkg/sftp to v1.13.7 and fix deadlock in tests
Before this change, upgrading to v1.13.7 caused a deadlock in the tests.

This was caused by additional locking in the sftp package exposing a
bad choice by the rclone code.

See https://github.com/pkg/sftp/issues/603 and thanks to @puellanivis
for the fix suggestion.
2024-11-11 18:15:00 +00:00
Nick Craig-Wood
1317fdb9b8 build: fix comments after golangci-lint upgrade 2024-11-11 18:03:36 +00:00
Nick Craig-Wood
1072173d58 build: update all dependencies 2024-11-11 18:03:34 +00:00
dependabot[bot]
df19c6f7bf build(deps): bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1
Bumps [github.com/golang-jwt/jwt/v4](https://github.com/golang-jwt/jwt) from 4.5.0 to 4.5.1.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v4.5.0...v4.5.1)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-11 18:01:03 +00:00
Nick Craig-Wood
ee72554fb9 pikpak: fix fatal crash on startup with token that can't be refreshed 2024-11-08 19:34:09 +00:00
Nick Craig-Wood
abb4f77568 yandex: fix server side copying over existing object
This was causing a conflict error. This was fixed by renaming the
existing file first and if the copy was successful deleting it, or
renaming it back.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
ca2b27422f sugarsync: fix server side copying over existing object
This was causing a conflict error. This was fixed by renaming the
existing file first and if the copy was successful deleting it, or
renaming it back.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
740f6b318c putio: fix server side copying over existing object
This was causing a conflict error. This was fixed by checking for the
existing object and deleting it after the file was server side copied.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
f307d929a8 onedrive: fix server side copying over existing object
This was causing a conflict error. This was fixed by renaming the
existing file first and if the copy was successful deleting it, or
renaming it back.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
ceea6753ee dropbox: fix server side copying over existing object
This was causing a conflict error. This was fixed by renaming the
existing file first and if the copy was successful deleting it, or
renaming it back.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
2bafbf3c04 operations: add RemoveExisting to safely remove an existing file
This renames the file first and if the operation is successful then it
deletes the renamed file.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
3e14ba54b8 gofile: fix server side copying over existing object
This was creating a duplicate.
2024-11-08 14:01:51 +00:00
Nick Craig-Wood
2f7a30cf61 test_all: try to fix mailru rate limits in integration tests
The Mailru backend integration tests have been failing due to new rate
limits on the backend.

This patch

- Removes Mailru from the chunker tests
- Adds the flag so we only run one Mailru test at once
2024-11-08 10:02:44 +00:00
Nick Craig-Wood
0ad925278d Add shenpengfeng to contributors 2024-11-08 10:02:44 +00:00
Nick Craig-Wood
e3053350f3 Add Dimitar Ivanov to contributors 2024-11-08 10:02:44 +00:00
shenpengfeng
b9207e5727 docs: fix function name in comment 2024-10-29 09:26:37 +01:00
Dimitar Ivanov
40159e7a16 sftp: allow inline ssh public certificate for sftp
Currently rclone allows us to specify the path to a public ssh
certificate file.

That works great for cases where we can specify key path, like local
envs.

If users are using rclone with [volsync](https://github.com/backube/volsync/tree/main/docs/usage/rclone)
there currently is a limitation that users can specify only the rclone config file.
With this change users can pass the public certificate in the same fashion
as they can with `key_file`.
2024-10-25 10:40:57 +01:00
Nick Craig-Wood
16baa24964 serve s3: fix excess locking which was making serve s3 single threaded
The fix for this was in the upstream library to narrow the locking
window.

See: https://forum.rclone.org/t/can-rclone-serve-s3-handle-more-than-one-client/48329/
2024-10-25 10:36:50 +01:00
Nick Craig-Wood
72f06bcc4b lib/oauthutil: allow the browser opening function to be overridden 2024-10-24 17:56:50 +01:00
Nick Craig-Wood
c527dd8c9c Add Moises Lima to contributors 2024-10-24 17:56:50 +01:00
Moises Lima
29fd894189 lib/http: disable automatic authentication skipping for unix sockets
Disabling the authentication for unix sockets makes it impossible to
use `rclone serve` behind a proxy that communicates with rclone
via a unix socket.

Re-enabling the authentication should not have any effect on most
users of unix sockets as they do not set authentication up with a unix
socket anyway.
2024-10-24 12:39:28 +01:00
Nick Craig-Wood
175aa07cdd onedrive: fix Retry-After handling to look at 503 errors also
According to the Microsoft docs a Retry-After header can be returned
on 429 errors and 503 errors, but before this change we were only
checking for it on 429 errors.

See: https://forum.rclone.org/t/onedrive-503-response-retry-after-not-used/48045
2024-10-23 13:00:32 +01:00
Kaloyan Raev
75257fc9cd s3: Storj provider: fix server-side copy of files bigger than 5GB
Like some other S3-compatible providers, Storj does not currently
implement UploadPartCopy and returns NotImplemented errors for
multi-part server side copies.

This patch works around the problem by raising --s3-copy-cutoff for
Storj to the maximum. This means that rclone will never use
multi-part copies for files in Storj. This includes files larger than
5GB which (according to AWS documentation) must be copied with
multi-part copy. This works fine for Storj.

See https://github.com/storj/roadmap/issues/40
2024-10-22 21:15:04 +01:00
Nick Craig-Wood
53ff3b3b32 s3: add Selectel as a provider 2024-10-22 19:54:33 +01:00
Nick Craig-Wood
8b4b59412d fs: fix Don't know how to set key "chunkSize" on upload errors in tests
Before this testing any backend which implemented the OpenChunkWriter
gave this error:

    ERROR : writer-at-subdir/writer-at-file: Don't know how to set key "chunkSize" on upload

This was due to the ChunkOption incorrectly rendering into HTTP
headers which weren't understood by the backend.
2024-10-22 19:54:33 +01:00
Nick Craig-Wood
264c9fb2c0 drive: implement rclone backend rescue to rescue orphaned files
Fixes #4166
2024-10-21 10:15:01 +01:00
Nick Craig-Wood
1b10cd3732 Add tgfisher to contributors 2024-10-21 10:15:01 +01:00
Nick Craig-Wood
d97492cbc3 Add Diego Monti to contributors 2024-10-21 10:15:01 +01:00
Nick Craig-Wood
82a510e793 Add Randy Bush to contributors 2024-10-21 10:15:01 +01:00
Nick Craig-Wood
9f2c590e13 Add Alexandre Hamez to contributors 2024-10-21 10:15:01 +01:00
Nick Craig-Wood
11a90917ec Add Simon Bos to contributors 2024-10-21 10:15:01 +01:00
tgfisher
8ca7b2af07 docs: mention that inline comments are not supported in a filter-file 2024-10-21 09:10:09 +02:00
Diego Monti
a19ddffe92 s3: add Wasabi eu-south-1 region
Ref. https://docs.wasabi.com/docs/what-are-the-service-urls-for-wasabi-s-different-storage-regions
2024-10-14 14:05:33 +02:00
Randy Bush
3e2c0f8c04 docs: fix forward refs in step 9 of using your own client id 2024-10-14 13:25:25 +02:00
Alexandre Hamez
589458d1fe docs: fix Scaleway Glacier website URL 2024-10-12 12:13:47 +01:00
Simon Bos
69897b97fb dlna: fix loggingResponseWriter disregarding log level 2024-10-08 15:27:05 +01:00
albertony
4db09331c6 build: remove required property on boolean inputs
Since boolean inputs are now properly treated as booleans, and GitHub Web GUI shows
them as checkboxes, setting required does nothing.
2024-10-03 16:31:36 +01:00
albertony
fcd3b88332 build: use inputs context in github workflow
Currently input options are retrieved from the event payload, via github.event.inputs,
and that still works, but boolean values are represented as strings there while in the
dedicated inputs context the boolean types are preserved, which means conditional
expressions can be simplified.
2024-10-03 16:31:36 +01:00
Nick Craig-Wood
1ca3f12672 s3: fix crash when using --s3-download-url after migration to SDKv2
Before this change rclone was crashing when the download URL did not
supply an X-Amz-Storage-Class header.

This change allows the header to be missing.

See: https://forum.rclone.org/t/sigsegv-on-ubuntu-24-04/48047
2024-10-03 14:31:56 +01:00
Nick Craig-Wood
e7a0fd0f70 docs: update overview to show pcloud can set modtime
See 258092f9c6 and #7896
2024-10-03 14:31:56 +01:00
Nick Craig-Wood
c23c59544d Add André Tran to contributors 2024-10-03 14:31:56 +01:00
Nick Craig-Wood
9dec3de990 Add Matthias Gatto to contributors 2024-10-03 14:31:56 +01:00
Nick Craig-Wood
5caa695c79 Add lostb1t to contributors 2024-10-03 14:31:56 +01:00
Nick Craig-Wood
8400809900 Add Noam Ross to contributors 2024-10-03 14:31:11 +01:00
Nick Craig-Wood
e49516d5f4 Add Benjamin Legrand to contributors 2024-10-03 14:31:11 +01:00
Matthias Gatto
9614fc60f2 s3: add Outscale provider
Signed-off-by: matthias.gatto <matthias.gatto@outscale.com>
Co-authored-by: André Tran <andre.tran@outscale.com>
2024-10-02 10:26:41 +01:00
lostb1t
51db76fd47 Add ICloud Drive backend 2024-10-02 10:19:11 +01:00
Noam Ross
17e7ccfad5 drive: add support for markdown format 2024-09-30 17:22:32 +01:00
Benjamin Legrand
8a6fc8535d accounting: fix global error accounting
fs.CountError is called when an error is encountered. The method was
calling GlobalStats().Error(err) which incremented the error at the
global stats level. This led to calls to core/stats with group= filter
returning an error count of 0 even if errors actually occurred.

This change requires the context to be provided when calling
fs.CountError. Doing so, we can retrieve the correct StatsInfo to
increment the errors from.

Fixes #5865
2024-09-30 17:20:42 +01:00
Nick Craig-Wood
c053429b9c onedrive: fix time precision for OneDrive personal
This reduces the precision advertised by the backend from 1ms to 1s
for OneDrive personal accounts.

The precision was set to 1ms as part of:

1473de3f04 onedrive: add metadata support

which was released in v1.66.0.

However it appears not all OneDrive personal accounts support 1ms time
precision and that Microsoft may be migrating accounts away from this
to backends which only support 1s precision.

Fixes #8101
2024-09-30 11:34:06 +01:00
Nick Craig-Wood
18989fbf85 Add RcloneView as a sponsor 2024-09-30 11:34:06 +01:00
Nick Craig-Wood
a7451c6a77 Add Leandro Piccilli to contributors 2024-09-30 11:32:13 +01:00
nielash
5147d1101c cache: skip bisync tests
per ncw: "I don't care about cache as it is deprecated - we should probably stop
it running bisync tests"
https://github.com/rclone/rclone/pull/7795#issuecomment-2163295857
2024-09-29 18:37:52 -04:00
nielash
11ad2a1316 bisync: allow blank hashes on tests
Some backends support hashes but allow them to be blank. In other words, we
can't expect them to be reliably non-blank, and we shouldn't treat a blank hash
as an error.

Before this change, the bisync integration tests errored if a backend said it
supported hashes but in fact sometimes lacked them. After this change, such
errors are ignored.
2024-09-29 18:37:52 -04:00
nielash
3c7ad8d961 box: fix server-side copying a file over existing dst - fixes #3511
Before this change, server-side copying a src file over a dst that already exists
gave `Error "item_name_in_use" (409): Item with the same name already exists`.

This change fixes the error by copying to a temporary name first, then moving it
to the real name.

There might be a more graceful way to overwrite a file during a copy, but I
didn't see one in the API docs.
https://developer.box.com/reference/post-files-id-copy/
In the meantime, this workaround is better than a critical error.

This should (hopefully) fix 8 bisync integration tests.
2024-09-29 18:37:52 -04:00
nielash
a3e8fb584a sync: add tests for copying/moving a file over itself
This should catch issues like this, for example:
https://github.com/rclone/rclone/issues/3511#issuecomment-528332895
2024-09-29 18:37:52 -04:00
nielash
9b4b3033da fs/cache: fix parent not getting pinned when remote is a file
Before this change, when cache.GetFn was called on a file rather than a
directory, two cache entries would be added (the file + its parent) but only one
of them would get pinned if the caller then called Pin(f). This left the other
one exposed to expiration if the ci.FsCacheExpireDuration was reached. This was
problematic because both entries point to the same Fs, and if one entry expires
while the other is pinned, the Shutdown method gets erroneously called on an Fs
that is still in use.

An example of the problem showed up in the Hasher backend, which uses the
Shutdown method to stop the bolt db used to store hashes. If a command was run
on a Hasher file (ex. `rclone md5sum --download hasher:somelargefile.zip`) and
hashing the file took longer than the --fs-cache-expire-duration (5m by default), the
bolt db was stopped before the hashing operation completed, resulting in an
error.

This change fixes the issue by ensuring that:
1. only one entry is added to the cache (the file's parent, not the file).
2. future lookups correctly find the entry regardless of whether they are called
	with the parent name or one of its children.
3. fs.ErrorIsFile is returned when (and only when) fsString points to a file
	(preserving the fix from 8d5bc7f28b).

Note that f.Root() should always point to the parent dir as of c69eb84573
2024-09-28 13:49:56 +01:00
Leandro Piccilli
94997d25d2 gcs: add access token auth with --gcs-access-token 2024-09-27 17:37:07 +01:00
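
A hedged example of the new auth option, assuming the gcloud CLI is available to mint a token (bucket name is illustrative):

    rclone lsd gcs:bucket --gcs-access-token "$(gcloud auth print-access-token)"
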
Nick Craig-Wood
19458e8459 accounting: write the current bwlimit to the log on SIGUSR2 2024-09-26 18:01:18 +01:00
Nick Craig-Wood
7d32da441e accounting: fix wrong message on SIGUSR2 to enable/disable bwlimit
This was caused by the message code only looking at one of the
bandwidth filters, not all of them.

Fixes #8104
2024-09-26 17:53:58 +01:00
Nick Craig-Wood
22e13eea47 gphotos: implement --gphotos-proxy to allow download of full resolution media
This works in conjunction with the gphotosdl tool

https://github.com/rclone/gphotosdl
2024-09-26 12:57:28 +01:00
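
A hedged sketch of pairing the new flag with the gphotosdl proxy (the listen address is an assumption; check the gphotosdl docs for its actual default):

    rclone copy "gphotos:album/My Album" ./photos --gphotos-proxy "http://localhost:8282"
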
Nick Craig-Wood
de9b593f02 googlephotos: remove noisy debugging statements 2024-09-26 12:52:53 +01:00
Nick Craig-Wood
b2b4f8196c docs: add note to CONTRIBUTING that the overview needs editing in 2 places 2024-09-25 17:56:33 +01:00
Nick Craig-Wood
84cebb6872 test_all: add ignoretests parameter for skipping certain tests
Use like this for a `backend:` in `config.yaml`

   ignoretests:
     - "fs/operations"
     - "fs/sync"
2024-09-25 16:03:43 +01:00
Nick Craig-Wood
cb9f4f8461 build: replace "golang.org/x/exp/slices" with "slices" now go1.21 is required 2024-09-25 16:03:43 +01:00
Nick Craig-Wood
498d9cfa85 Changelog updates from Version v1.68.1 2024-09-24 17:26:49 +01:00
Dan McArdle
109e4ed0ed Makefile: Fail when doc recipes create dir named '$HOME'
This commit makes the `commanddocs` and `backenddocs` fail if they
accidentally create a directory named '$HOME'. This is basically a
regression test for issue #8092.

It also makes those recipes rmdir the '$HOME/.config/rclone/'
directories. This will only delete empty directories, so nothing of
value should ever be deleted.
2024-09-24 10:38:25 +01:00
Dan McArdle
353270263a Makefile: Prevent doc recipe from creating dir named '$HOME'
Prior to this commit, running `make doc` had the unwanted side effect of
creating a directory literally named `$HOME` in the source tree.

Fixed #8092
2024-09-24 10:38:25 +01:00
wiserain
f8d782c02d pikpak: fix cid/gcid calculations for fs.OverrideRemote
Previously, cid/gcid (custom hash for pikpak) calculations failed when 
attempting to unwrap object info from `fs.OverrideRemote`. 

This commit introduces a new function that can correctly unwrap 
object info from both regular objects and `fs.OverrideRemote` types, 
ensuring uploads with accurate cid/gcid calculations in all scenarios.
2024-09-21 10:22:31 +09:00
albertony
3dec664a19 bisync: change exit code from 2 to 7 for critically aborted run 2024-09-20 18:51:08 +02:00
albertony
a849fd59f0 cmd: change exit code from 1 to 2 for syntax and usage errors 2024-09-20 18:51:08 +02:00
nielash
462a1cf491 local: fix --copy-links on macOS when cloning
Before this change, --copy-links erroneously behaved like --links when using cloning
on macOS, and cloning was not supported at all when using --links.

After this change, --copy-links does what it's supposed to, and takes advantage of
cloning when possible, by copying the file being linked to instead of the link
itself.

Cloning is now also supported in --links mode for regular files (which benefit
most from cloning). symlinks in --links mode continue to be tossed back to be
handled by rclone's special translation logic.

See https://forum.rclone.org/t/macos-local-to-local-copy-with-copy-links-causes-error/47671/5?u=nielash
2024-09-20 17:43:52 +01:00
Nick Craig-Wood
0b7b3cacdc azureblob: add --azureblob-use-az to force the use of the Azure CLI for auth
Setting this can be useful if you wish to use the az CLI on a host with
a System Managed Identity that you do not want to use.

Fixes #8078
2024-09-20 16:16:09 +01:00
Nick Craig-Wood
976103d50b azureblob: add --azureblob-disable-instance-discovery
If set this skips requesting Microsoft Entra instance metadata

See #8078
2024-09-20 16:16:09 +01:00
Nick Craig-Wood
192524c004 s3: add initial --s3-directory-bucket to support AWS Directory Buckets
This will ensure no Content-Md5 headers are sent and ensure ETags are not
interpreted as MD5 sums. X-Amz-Meta-Md5chksum will be set on all objects
whether single or multipart uploaded.

This also sets "no_check_bucket = true".

This is enough to make the integration tests pass, but there are some
limitations as noted in the docs.

See: https://forum.rclone.org/t/support-s3-directory-bucket/47653/
2024-09-19 12:01:24 +01:00
Nick Craig-Wood
28667f58bf Add Lawrence Murray to contributors 2024-09-19 12:01:24 +01:00
Lawrence Murray
c669f4e218 backend/protondrive: improve performance of Proton Drive backend
This change removes redundant calls to the Proton Drive Bridge when
creating Objects. Specifically, the function List() would get a
directory listing, get a link for each file, construct a remote path
from that link, then get a link for that remote path again by calling
getObjectLink() unnecessarily. This change removes that unnecessary
call, and tidies up a couple of functions around this with unused
parameters.

Related to performance issues reported in #7322 and #7413
2024-09-18 18:15:24 +01:00
Nick Craig-Wood
1a9e6a527d ftp: implement --ftp-no-check-upload to allow upload to write only dirs
Fixes #8079
2024-09-18 12:57:01 +01:00
Nick Craig-Wood
8c48cadd9c docs: document that fusermount3 may be needed when mounting/unmounting
See: https://forum.rclone.org/t/documentation-fusermount-vs-fusermount3/47816/
2024-09-18 12:57:01 +01:00
Nick Craig-Wood
76e1ba8c46 Add rishi.sridhar to contributors 2024-09-18 12:57:01 +01:00
Nick Craig-Wood
232e4cd18f Add quiescens to contributors 2024-09-18 12:57:00 +01:00
buengese
88141928f2 docs/zoho: update options 2024-09-17 20:40:42 +01:00
buengese
a2a0388036 zoho: make upload cutoff configurable 2024-09-17 20:40:42 +01:00
buengese
48543d38e8 zoho: add support for private spaces 2024-09-17 20:40:42 +01:00
buengese
eceb390152 zoho: try to handle rate limits a bit better 2024-09-17 20:40:42 +01:00
buengese
f4deffdc96 zoho: print clear error message when missing oauth scope 2024-09-17 20:40:42 +01:00
buengese
c172742cef zoho: switch to large file upload API for larger files, fix missing URL encoding of filenames for the upload API 2024-09-17 20:40:42 +01:00
buengese
7daed30754 zoho: use download server to accelerate downloads
Co-authored-by: rishi.sridhar <rishi.sridhar@zohocorp.com>
2024-09-17 20:40:42 +01:00
quiescens
b1b4c7f27b opendrive: add about support to backend 2024-09-17 17:20:42 +01:00
wiserain
ed84553dc1 pikpak: fix login issue where token retrieval fails
This addresses the login issue caused by pikpak's recent cancellation 
of existing login methods and requirement for additional verifications. 

To resolve this, we've made the following changes:

1. Similar to lib/oauthutil, we've integrated a mechanism to handle 
captcha tokens.

2. A new pikpakClient has been introduced to wrap the existing 
rest.Client and incorporate the necessary headers including 
x-captcha-token for each request.

3. Several options have been added/removed to support persistent 
user/client identification.

* client_id: No longer configurable.
* client_secret: Deprecated as it's no longer used.
* user_agent: A new option that defaults to PC/Firefox's user agent 
but can be overridden using the --pikpak-user-agent flag.
* device_id: A new option that is randomly generated if invalid. 
It is recommended not to delete or change it frequently.
* captcha_token: A new option that is automatically managed 
by rclone, similar to the OAuth token.

Fixes #7950 #8005
2024-09-18 01:09:21 +09:00
Nick Craig-Wood
c94edbb76b webdav: nextcloud: implement backoff and retry for 423 LOCKED errors
When uploading chunked files to nextcloud, it gives a 423 error while
it is merging files.

This waits for an exponentially increasing amount of time for it to
clear.

If after we have received a 423 error we receive a 404 error then we
assume all is good as this is what appears to happen in practice.

Fixes #7109
2024-09-17 16:46:02 +01:00
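By way of illustration, here is a minimal Go sketch of the backoff-and-retry idea described in the commit above. It is not the actual nextcloud backend code; `probe` is a hypothetical callback returning the HTTP status of the pending merge.

```
package sketch

import (
	"context"
	"net/http"
	"time"
)

// waitForMerge polls until the server stops answering 423 LOCKED,
// waiting exponentially longer between attempts. Any other status
// (including 404) after a 423 is treated as "merge finished".
func waitForMerge(ctx context.Context, probe func() (int, error)) error {
	delay := 100 * time.Millisecond
	for {
		status, err := probe()
		if err != nil {
			return err
		}
		if status != http.StatusLocked {
			return nil
		}
		select {
		case <-time.After(delay):
			delay *= 2 // exponentially increasing wait
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}
```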
Nick Craig-Wood
2dcb327bc0 s3: fix rclone ignoring static credentials when env_auth=true
The SDKv2 conversion introduced a regression to do with setting
credentials with env_auth=true. The rclone documentation explicitly
states that env_auth only applies if secret_access_key and
access_key_id are blank and users had been relying on that.

However after the SDKv2 conversion we were ignoring static credentials
if env_auth=true.

This fixes the problem by ignoring env_auth=true if secret_access_key
and access_key_id are both provided. This brings rclone back into line
with the documentation and users expectations.

Fixes #8067
2024-09-17 16:07:56 +01:00
Nick Craig-Wood
874d66658e fs: fix setting stringArray config values from environment variables
After the config re-organisation, the setting of stringArray config
values (eg `--exclude` set with `RCLONE_EXCLUDE`) was broken and gave
a message like this for `RCLONE_EXCLUDE=*.jpg`:

    Failed to load "filter" default values: failed to initialise "filter" options:
    couldn't parse config item "exclude" = "*.jpg" as []string: parsing "*.jpg" as []string failed:
    invalid character '/' looking for beginning of value

This was caused by the parser trying to parse the input string as a
JSON value.

When the config was re-organised it was thought that the internal
representation of stringArray values was not important as it was never
visible externally, however this turned out not to be true.

A defined representation was chosen - a comma separated string and
this was documented and tests were introduced in this patch.

This potentially introduces a very small backwards incompatibility. In
rclone v1.67.0

    RCLONE_EXCLUDE=a,b

Would be interpreted as

    --exclude "a,b"

Whereas this new code will interpret it as

    --exclude "a" --exclude "b"

The benefit of being able to set multiple values with an environment
variable was deemed to outweigh the very small backwards compatibility
risk.

If a value with a `,` is needed, then use CSV escaping, eg

    RCLONE_EXCLUDE="a,b"

(Note this needs to include the quotes, so at the unix shell that would be

    RCLONE_EXCLUDE='"a,b"'

Fixes #8063
2024-09-13 15:52:51 +01:00
Nick Craig-Wood
3af757e26d rc: fix default value of --metrics-addr
Before this fix it was empty string, which isn't a good default for a
stringArray.
2024-09-13 15:52:51 +01:00
Nick Craig-Wood
fef1b61585 fs: fix --dump filters not always appearing
Before this fix, we initialised the options blocks in a random order.
This meant that there was a 50/50 chance whether --dump filters would
show the filters or not as it was depending on the "main" block having
being read first to set the Dump flags.

This initialises the options blocks in a defined order which is
alphabetically but with main first which fixes the problem.
2024-09-13 15:52:51 +01:00
Nick Craig-Wood
3fca7a60a5 docs: correct notes on docker manual build 2024-09-13 15:52:51 +01:00
Nick Craig-Wood
6b3f41fa0c Add ttionya to contributors 2024-09-13 15:52:51 +01:00
ttionya
3d0ee47aa2 build: fix docker release build - fixes #8062
This updates the action to use `docker/build-push-action` instead of `ilteoood/docker_buildx`
which fixes the build problem in testing.
2024-09-12 17:57:53 +01:00
Pawel Palucha
da70088b11 docs: add section for improving performance for s3 2024-09-12 11:29:35 +01:00
Nick Craig-Wood
1bc9b94cf2 onedrive: fix spurious "Couldn't decode error response: EOF" DEBUG
This DEBUG was being generated on redirects which don't have a JSON
body and is irrelevant.
2024-09-10 16:19:38 +01:00
Nick Craig-Wood
15a026d3be Add Divyam to contributors 2024-09-10 16:19:38 +01:00
Divyam
ad122c6f6f serve docker: add missing vfs-read-chunk-streams option in docker volume driver 2024-09-09 10:07:25 +01:00
Nick Craig-Wood
b155231cdd Start v1.69.0-DEV development 2024-09-08 17:22:19 +01:00
Nick Craig-Wood
49f69196c2 Version v1.68.0 2024-09-08 16:21:56 +01:00
Nick Craig-Wood
3f7651291b gofile: fix failed downloads on newly uploaded objects
The upload routine no longer returns a url to download the object.

This fixes the problem by fetching it if necessary when we attempt to
Open the object.
2024-09-08 14:58:29 +01:00
Nick Craig-Wood
796013dd06 gofile: fix Move a file
For some reason the parent ID got out of date in the Object (exact
reason not known - but the fact that this was OK before suggests a
change in the provider).

However we know the parent ID as it is in the directory cache, so use
that instead.
2024-09-08 14:58:29 +01:00
Nick Craig-Wood
e0da406ca7 test_all: mark linkbox fs/sync test TestSyncOverlapWithFilter as ignore
This gives the error

> Update second step failed: Linkbox error 500: The file name needs to include a suffix, such as xxx.mp4

This is because linkbox can't have files starting with "." and we are trying to save a file called ".ignore".
2024-09-08 14:58:14 +01:00
albertony
9a02c04028 jottacloud: fix setting of metadata on server side move - fixes #7900 2024-09-07 16:00:03 +01:00
albertony
918185273f docs: group the different options affecting lsjson output 2024-09-07 09:41:08 +01:00
Nick Craig-Wood
3f2074901a fichier: fix server side move - fixes #7856
The server side move had a combination of bugs
- Fichier changed the API disallowing a move to the same name
- Rclone was using the wrong object for some operations
2024-09-06 18:20:10 +01:00
Nick Craig-Wood
648afc7df4 fichier: Fix detection of Flood Detected error 2024-09-06 18:20:10 +01:00
Nick Craig-Wood
16e0245a8e rc: add vfs/queue-set-expiry to adjust expiry of items in the VFS queue 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
59acb9dfa9 rc: add vfs/queue to show the status of the upload queue 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
bfec159504 vfs: keep a record of the file size in the writeback queue 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
842396c8a0 build: fix gocritic change missed in merge
The original problem was introduced here

bcdfad3c83 build: update logging statements to make json log work #6038

And this was fixed non-optimally here

f1a84d171e build: fix build after update
2024-09-06 17:33:35 +01:00
Nick Craig-Wood
5f9a201b45 Add Oleg Kunitsyn to contributors 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
22583d0a5f Add fsantagostinobietti to contributors 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
f1466a429c Add Mathieu Moreau to contributors 2024-09-06 17:33:35 +01:00
Florian Klink
e3b09211b8 lib/sd-activation: wrap coreos/go-systemd
It fails to build on plan9, which is part of the rclone CI matrix, and
the PR fixing it upstream doesn't seem to be getting traction.

Stub it on our side; we can still remove this once it gets merged.
2024-09-06 17:21:56 +01:00
Florian Klink
156feff9f2 sftp: support listening on passed FDs 2024-09-06 17:21:56 +01:00
Florian Klink
b29a22095f http: fix addr CLI arg help text
This was missing the fact rclone also supports listening on Unix Domain
Sockets.
2024-09-06 17:21:56 +01:00
Florian Klink
861c01caf5 http: support listening on passed FDs
Instead of the listening addresses specified above, rclone will listen to all
FDs passed by the service manager, if any (and ignore any arguments passed by
`--{{ .Prefix }}addr`).

This allows rclone to be a socket-activated service. It can be configured as described in
https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html

It's possible to test this interactively through `systemd-socket-activate`,
firing off a request in a second terminal:

```
❯ systemd-socket-activate -l 8088 -l 8089 --fdname=foo:bar -- ./rclone serve webdav :local:test/
Listening on [::]:8088 as 3.
Listening on [::]:8089 as 4.
Communication attempt on fd 3.
Execing ./rclone (./rclone serve webdav :local:test/)
2024/04/24 18:14:42 NOTICE: Local file system at /home/flokli/dev/flokli/rclone/test: WebDav Server started on [sd-listen:bar-0/ sd-listen:foo-0/]
```
2024-09-06 17:21:56 +01:00
Nick Craig-Wood
f1a84d171e build: fix build after update
This adds an import missed in

bcdfad3c83 build: update logging statements to make json log work #6038
2024-09-06 17:18:46 +01:00
albertony
bcdfad3c83 build: update logging statements to make json log work - fixes #6038
This changes log statements from log to fs package, which is required for --use-json-log
to properly make log output in JSON format. The recently added custom linting rule,
handled by ruleguard via gocritic via golangci-lint, warns about these and suggests
the alternative. Fixing was therefore basically running "golangci-lint run --fix",
although some manual fixup of mainly imports are necessary following that.
2024-09-06 17:04:18 +01:00
albertony
88b0757288 build: update custom linting rule for log to suggest new non-format functions 2024-09-06 17:04:18 +01:00
albertony
33d6c3f92f fs: add non-format variants of log functions to avoid non-constant format string warnings 2024-09-06 17:04:18 +01:00
albertony
752809309d fs: add log Printf, Fatalf and Panicf 2024-09-06 17:04:18 +01:00
albertony
4a54cc134f fs: refactor base log method name for improved consistency 2024-09-06 17:04:18 +01:00
albertony
dfc2c98bbf fs: refactor log statements to use common helper 2024-09-06 17:04:18 +01:00
albertony
604d6bcb9c build: enable custom linting rules with ruleguard via gocritic 2024-09-06 17:04:18 +01:00
Oleg Kunitsyn
d15704ef9f rcserver: implement prometheus metrics on a dedicated port - fixes #7940 2024-09-06 15:00:36 +01:00
fsantagostinobietti
26bc9826e5 swift: add total/free space info in about command.
With the enhancement in version v2.0.3 of the ncw/swift library, we can now get Total and Free space info from remotes that support this feature (e.g. Blomp storage)
2024-09-06 12:46:51 +01:00
Mathieu Moreau
2a28b0eaf0 docs: filtering: added Byte unit for min/max-size parameters. 2024-09-06 12:28:29 +01:00
Nick Craig-Wood
2d1c2b1f76 config encryption: set, remove and check to manage config file encryption #7859 2024-09-06 10:34:29 +01:00
Nick Craig-Wood
ffb2e2a6de config: use --password-command to set config file password if supplied
Before this change, rclone ignored the --password-command on the
rclone config setting except when decrypting an existing config file.

This change allows for offloading the password storage/generation into
external hardware key or other protected password storage.

Fixes #7859
2024-09-06 10:34:29 +01:00
Nick Craig-Wood
c9c283533c config: factor --password-command code into its own function #7859 2024-09-06 10:34:29 +01:00
Nick Craig-Wood
71799d7efd Add yuval-cloudinary to contributors 2024-09-06 10:34:29 +01:00
Nick Craig-Wood
8f4fdf6cc8 Add nipil to contributors 2024-09-06 10:34:29 +01:00
yuval-cloudinary
91b11f9eac documentation: add cheatsheet for configuration encryption 2024-09-05 17:39:25 +01:00
nipil
b49927fbd0 docs: more secure two-step signature and hash validation 2024-09-05 16:54:26 +01:00
Nick Craig-Wood
1a8b7662e7 serve nfs: unify the nfs library logging with rclone's logging better
Before this we ignored the logging levels and logged everything as
debug. This will obey the rclone logging flags and log at the correct
level.
2024-09-04 10:50:21 +01:00
Nick Craig-Wood
6ba3e24853 serve nfs: fix incorrect user id and group id exported to NFS #7973
Before this change all exports were exported as root and the --uid and
--gid flags of the VFS were ignored.

This fixes the issue by exporting the UID and GID correctly which
default to the current user and group unless set explicitly.
2024-09-04 10:50:21 +01:00
Nick Craig-Wood
802a938bd1 zoho: fix inefficiencies uploading with new API to avoid throttling
Before this fix, rclone queried the uploaded object to find its size
and modtime after upload as the API did not return these items.

Zoho have subsequently modified the API to return these items so
rclone uses them to avoid an API call.

This should help with rclone being throttled by Zoho.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697/20
2024-09-04 10:45:47 +01:00
Nick Craig-Wood
9deb3e8adf Add crystalstall to contributors 2024-09-04 10:45:12 +01:00
crystalstall
296281a6eb docs: fix some function names in comments
Signed-off-by: crystalstall <crystalruby@qq.com>
2024-09-02 18:20:08 +02:00
albertony
711478554e lib/file: use builtin MkdirAll with go1.22 instead of our own custom version for windows
Starting with go1.22 the standard os.MkdirAll has improved its handling of volume names,
and as part of that it now stops recursing into the parent directory if it is a volume name
(see: cd589c8a73).
This is similar to our main change and the reason for creating a custom version. When
building with go1.22 or newer we can therefore stop using our custom version, with the
advantage that we automatically get current and future relevant improvements from golang.
To support building with go1.21 the existing custom version is still kept, and therefore
also our wrapper function file.MkdirAll - but it now just calls os.MkdirAll with go1.22
or newer on Windows.

See #5401, #6420 and acf1e2df84 for details about the
creation of our custom version of MkdirAll.
2024-09-02 18:16:38 +02:00
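A rough sketch of the delegation described above, assuming build-constraint based selection; the real lib/file wrapper differs in detail:

```
//go:build windows && go1.22

package file

import "os"

// With go1.22+ the standard library handles volume names correctly,
// so on Windows the wrapper can simply delegate to os.MkdirAll.
func MkdirAll(path string, perm os.FileMode) error {
	return os.MkdirAll(path, perm)
}
```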
albertony
906aef91fa docs: document that paths using volume guids are supported 2024-09-02 18:01:47 +02:00
Nick Craig-Wood
6b58cd0870 s3: fix accounting for multipart transfers after migration to SDKv2 #4989 2024-08-31 20:11:53 +01:00
Sebastian Bünger
af9f8ced80 yandex: implement custom user agent to help with upload speeds 2024-08-29 18:25:08 +01:00
Georg Welzel
c63f1865f3 operations: copy: generate stable partial suffix 2024-08-28 08:45:38 +02:00
Nick Craig-Wood
1bb89bc818 docs: add missing sftp providers to README and main docs page - fixes #8038 2024-08-28 07:25:49 +01:00
Nick Craig-Wood
a365503750 nfsmount: fix stale handle problem after converting options to new style
This problem was caused by the defaults not being set for the options
after the conversion to the new config system in

28ba4b832d serve nfs: convert options to new style

This makes the nfs serve options globally available so nfsmount can
use them directly.

Fixes #8029
2024-08-28 07:08:23 +01:00
Nick Craig-Wood
3bb6d0a42b docs: mark flags.md as auto generated so contributors don't edit it 2024-08-28 07:08:23 +01:00
Nick Craig-Wood
f65755b3a3 Add Pawel Palucha to contributors 2024-08-28 07:08:21 +01:00
Nick Craig-Wood
33c5f35935 Add John Oxley to contributors 2024-08-28 07:07:34 +01:00
Nick Craig-Wood
4367b999c9 Add Georg Welzel to contributors 2024-08-28 07:04:02 +01:00
Nick Craig-Wood
b57e6213aa Add Péter Bozsó to contributors 2024-08-28 07:04:02 +01:00
Nick Craig-Wood
cd90ba4337 Add Sam Harrison to contributors 2024-08-28 07:04:02 +01:00
Pawel Palucha
0e5eb7a9bb s3: allow restoring from intelligent-tiering storage class 2024-08-25 15:08:06 +02:00
nielash
956c2963fd bisync: don't convert modtime precision in listings - fixes #8025
Before this change, bisync proactively converted modtime precision when greater
than what the destination backend supported.

This dates back to a time before bisync considered the modifyWindow for same-side
comparisons. Back then, it was problematic to save a listing with 12:54:49.7 for
a backend that can't handle that precision, as on the next run the backend would
report the time as 12:54:50 and bisync would think the file had changed. So the
truncation was a workaround to anticipate this and proactively record the time
with the precision we expect to receive next time.

However, this caused problems for backends (such as dropbox) that round instead
of truncating as bisync expected.

After this change, bisync preserves the original precision in the listing
(without conversion), even when greater than what the backend supports, to avoid
rounding error. On the next run, bisync will compare it to the rounded time
reported by the backend, and if it's within the modifyWindow, it will treat them
as equivalent.
2024-08-24 22:32:48 -04:00
John Oxley
146562975b build: rename Unknwon/goconfig to unknwon/goconfig
Before this change we used the repo with an initial uppercase `U`. However it is now canonically spelled with a lower case `u`.

This package is too old to have a go.mod but the README clearly states the desired capitalization.

In 4b0d4b818a the
recommended capitalization was changed to lower case.

Co-authored-by: John Oxley <joxley@meta.com>
2024-08-23 11:03:27 +01:00
Georg Welzel
4c1cb0622e backend: pcloud: Implement OpenWriterAt feature 2024-08-22 23:42:32 +02:00
Georg Welzel
258092f9c6 backend: pcloud: implement SetModTime - Fixes #7896 2024-08-22 23:42:32 +02:00
Sam Harrison
be448c9e13 filescom: don't make an extra fetch call on each item in a list response 2024-08-19 22:31:57 +02:00
albertony
4e708e59f2 local: fix incorrect conversion between integer types 2024-08-18 10:29:36 +02:00
albertony
c8366dfef3 local: fix incorrect conversion between integer types 2024-08-17 17:07:17 +02:00
albertony
1e14523b82 docs: make tardigrade page auto redirect to storj page 2024-08-17 16:00:42 +02:00
albertony
da25305ba0 docs: update backend config samples 2024-08-17 16:00:18 +02:00
albertony
e439121ab2 config: fix size computation for allocation may overflow 2024-08-17 15:03:39 +02:00
albertony
37c12732f9 lib: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
4c488e7517 serve docker: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
7261f47bd2 local: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
1db8b20fbc s3: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
a87d8967fc s3: fix potentially unsafe quoting issue 2024-08-17 15:03:39 +02:00
albertony
4804f1f1e9 dropbox: fix potentially unsafe quoting issue 2024-08-17 15:03:39 +02:00
Eng Zer Jun
d1c84f9115 refactor: replace min/max helpers with built-in min/max
We upgraded our minimum Go version in commit ca24447090. We can now use
the built-in `min` and `max` functions directly.

Reference: https://go.dev/ref/spec#Min_and_max
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2024-08-17 13:09:44 +02:00
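For example, a helper like the following no longer needs hand-rolled min/max functions with Go 1.21+ (illustrative code, not from the rclone source):

```
package sketch

// clamp keeps n within [lo, hi] using the Go 1.21+ built-ins.
func clamp(n, lo, hi int64) int64 {
	return min(max(n, lo), hi)
}
```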
JT Olio
e0b08883cb go.mod: update storj.io/uplink to latest release
this has a couple of bug fixes and small enhancements.

we are working on reducing the size of this library, but this
version bump does not yet have those improvements.
2024-08-16 23:06:45 +02:00
Péter Bozsó
a0af72c27a docs: update ssh tunnel example 2024-08-16 20:27:23 +02:00
Péter Bozsó
28d6985764 docs: update rclone authorize section 2024-08-16 20:27:23 +02:00
Péter Bozsó
f2ce9a9557 docs: fix command highlight 2024-08-16 20:27:23 +02:00
albertony
95151eac82 docs: fix alignment of some of the icons in the storage system dropdown 2024-08-16 11:07:54 +02:00
Sam Harrison
bd9bf4eb1c docs: mark filescom as supporting link sharing 2024-08-15 22:55:45 +01:00
albertony
705c72d293 build: enable gocritic linter
Running with default set of checks, except disabling the following
- appendAssign: append result not assigned to the same slice (diagnostics check, many false positives)
- captLocal: using capitalized names for local variables (style check, too opinionated)
- commentFormatting: not having a space between `//` and comment text (style check, too opinionated)
- exitAfterDefer: log.Fatalln will exit, and `defer func(){...}(...)` will not run (diagnostics check, to be revisited)
- ifElseChain: rewrite if-else to switch statement (style check, many occurrences and a bit opinionated, to be revisited)
- singleCaseSwitch: should rewrite switch statement to if statement (style check, many occurrences and a bit opinionated, to be revisited)
2024-08-15 22:08:34 +01:00
albertony
330c6702eb build: ignore remaining gocritic lint issues 2024-08-15 22:08:34 +01:00
albertony
4d787ae87f build: fix gocritic lint issue unlambda 2024-08-15 22:08:34 +01:00
albertony
86e9a56d73 build: fix gocritic lint issue dupbranchbody 2024-08-15 22:08:34 +01:00
albertony
64e8013c1b build: fix gocritic lint issue sloppylen 2024-08-15 22:08:34 +01:00
albertony
33bff6fe71 build: fix gocritic lint issue wrapperfunc 2024-08-15 22:08:34 +01:00
albertony
e82b5b11af build: fix gocritic lint issue elseif 2024-08-15 22:08:34 +01:00
albertony
4454ed9d3b build: fix gocritic lint issue underef 2024-08-15 22:08:34 +01:00
albertony
bad8207378 build: fix gocritic lint issue valswap 2024-08-15 22:08:34 +01:00
albertony
c6d3714e73 build: fix gocritic lint issue assignop 2024-08-15 22:08:34 +01:00
albertony
59501fcdb6 build: fix gocritic lint issue unslice 2024-08-15 22:08:34 +01:00
Florian Klink
afd199d756 dlna: document external subtitle feature 2024-08-15 22:01:52 +01:00
Florian Klink
00e073df1e dlna: set more correct mime type
The code currently hardcodes `text/srt` for all subtitles.

`text/srt` is wrong; it seems `application/x-subrip` is the official
type coming from the official mime database, at least (and still
works with the Samsung TV I tested with). Also add that one to
`fs/mimetype.go`.

Compared to previous iterations of this PR, I dropped tests ensuring
certain mime types are present - as detection still seems to be fairly
platform-specific.
2024-08-15 22:01:52 +01:00
Florian Klink
2e007f89c7 dlna: don't swallow video.{idx,sub}
.idx and .sub subtitle files only work if both are present, but the code
was overwriting the first-inserted element to subtitlesByName, as it was
keyed by the basename without extension.

Make subtitlesByName point to a slice of nodes instead.
2024-08-15 22:01:52 +01:00
Florian Klink
edd9347694 dlna: add cds_test.go
This tests the mediaWithResources function in various scenarios.
2024-08-15 22:01:52 +01:00
Florian Klink
1fad49ee35 dlna: also look at "Subs" subdirectory
Apparently it seems pretty common for subtitles to be put in a
subdirectory called "Subs", rather than in the same directory as the
media file itself.

This covers that use case by checking whether the returned listing
contains a directory called "Subs".

If it does, its child nodes are added to the list before they're being
passed to mediaWithResources, allowing these subtitles to be discovered
automatically.
2024-08-15 22:01:52 +01:00
Sam Harrison
182b2a6417 chore: add childish-sambino as filescom maintainer 2024-08-15 22:00:26 +01:00
albertony
da9faf1ffe Make filtering rules for help and listremotes more lenient 2024-08-15 18:45:12 +02:00
albertony
303358eeda help: cleanup template syntax (consistent whitespace) 2024-08-15 18:45:12 +02:00
albertony
62233b4993 help: avoid empty additional help topics header 2024-08-15 18:45:12 +02:00
albertony
498abcc062 help: make help command output less distracting 2024-08-15 18:45:12 +02:00
albertony
482bfae8fa docs: consistent newline of first line in command output 2024-08-15 18:26:34 +02:00
Sam Harrison
ae9960a4ed filescom: add Files.com backend 2024-08-15 17:00:39 +01:00
Nick Craig-Wood
089c168fb9 fstests: attempt to fix flaky serve s3 test
Sometimes (particularly on macOS amd64) the serve s3 test fails with
TestIntegration/FsMkdir/FsPutError where it wasn't expecting to get an
object but it did.

This is likely caused by a race between the serve s3 goroutine
deleting the half uploaded file and the fstests code looking for it to
not exist.

This fix treats it like any other eventual consistency problem and
retries the check using the test framework.
2024-08-15 16:30:29 +01:00
albertony
6f515ded8f docs: move the link to global flags page to the main options header 2024-08-15 16:41:45 +02:00
albertony
91c6faff71 docs: make command group options subsections of main options 2024-08-15 16:41:45 +02:00
albertony
874616a73e docs: stop shouting the SEE ALSO header 2024-08-15 16:41:45 +02:00
albertony
458d93ea7e docs: fix the rclone root command header levels 2024-08-15 16:41:45 +02:00
albertony
513653910c docs: make the see also section header consistent and listed in toc of command pages 2024-08-15 16:41:45 +02:00
nielash
bd5199910b local: --local-no-clone flag to disable cloning for server-side copies
This flag allows users to disable the reflink cloning feature and instead force
"deep" copies, for certain use cases where data redundancy is preferable. It is
functionally equivalent to using `--disable Copy` on local.
2024-08-15 15:36:38 +01:00
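As a hedged usage example (paths invented for illustration), the new flag and the existing `--disable Copy` should behave the same on a local-to-local transfer:

    rclone copy /src /dst --local-no-clone
    rclone copy /src /dst --disable Copy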
nielash
f6d836eefd local: support setting custom --metadata during server-side Copy 2024-08-15 15:36:38 +01:00
nielash
87ec26001f local: add server-side copy with xattrs on macOS (part-fix #1710)
Before this change, macOS-specific metadata was not preserved by rclone, even for
local-to-local transfers (it does not use the "user." prefix, nor is Mac metadata
limited to xattrs.) Additionally, rclone did not take advantage of APFS's native
"cloning" functionality for fast and deduplicated transfers.

After this change, local (on macOS only) supports "server-side copy" similarly to
other remotes, and achieves this by using (when possible) macOS's native APFS
"cloning", which is the same underlying mechanism deployed when a user
duplicates a file via the Finder UI. This has several advantages over the
previous behavior:

- It is extremely fast (even large files can be cloned instantly)
- It is very efficient in terms of storage, as it automatically deduplicates when
possible (i.e. so that having two identical files does not consume more storage
than having just one.) (The concept is similar to a "hard link", but subsequent
modifications will not affect the original file.)
- It preserves Mac-specific metadata to the maximum degree, including not only
xattrs but also metadata not easily settable by other methods, including Finder
and Spotlight params.

When server-side "clone" is not available (for example, on non-APFS volumes), it
falls back to server-side "copy" (still preserving metadata but using more disk
storage.) It is only used when both remotes are local (and not wrapped by other
remotes, such as crypt.) The behavior of local on non-mac systems is unchanged.
2024-08-15 15:36:38 +01:00
albertony
3e12612aae docs: add automatic alias redirects for command pages 2024-08-15 16:18:38 +02:00
Florian Klink
aee2480fc4 cmd/rc: add --unix-socket option
This adds an additional flag --unix-socket, and if supplied connects
to the unix socket given.

    rclone rcd --rc-addr unix:///tmp/my.socket
    rclone rc --unix-socket /tmp/my.socket core/stats
2024-08-15 15:14:51 +01:00
Florian Klink
3ffa47ea16 webdav: add --webdav-unix-socket-path to connect to a unix socket
This adds a new optional parameter to the backend, to specify a path
to a unix domain socket to connect to, instead the specified URL.

The URL itself is still used for the rest of the HTTP client, allowing
host and subpath to stay intact.

This allows using rclone with the webdav backend to connect to a WebDAV
server provided at a Unix Domain socket:

    rclone serve webdav --addr unix:///tmp/my.socket remote:path
    rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
2024-08-15 15:14:51 +01:00
Nick Craig-Wood
70e8ad456f serve nfs: implement on disk cache for file handles 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
55b9b3e33a serve nfs: factor caching to its own file 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
ce7dfa075c serve nfs: update github.com/willscott/go-nfs to latest
This fixes various cache invalidation bugs
2024-08-14 21:55:26 +01:00
Nick Craig-Wood
a697d27455 serve nfs: store billy FS in the Handler 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
cae22a7562 serve nfs: mask unimplemented error from chmod 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
877321c2fb serve nfs: add tracing to filesystem calls 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
574378e871 serve nfs: rename types and methods which should be internal 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
50d42babd8 nfsmount: require --vfs-cache-mode writes or above in tests
These tests fail for --vfs-cache-mode minimal on Linux for the same
reason they don't work properly with --vfs-cache-mode off
2024-08-14 21:55:26 +01:00
Nick Craig-Wood
13ea77dd71 nfsmount: allow tests to run on any unix where sudo mount/umount works 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
62b76b631c nfsmount: make the --sudo flag work for umount as well as mount 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
96f92b7364 nfsmount: add tcp option to NFS mount options to fix mounting under Linux 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
7c02a63884 build: install NFS client libraries to allow nfsmount tests to run 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
67d4394a37 vfstest: fix crash if open failed 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
c1a98768bc Implement Gofile backend - fixes #4632 2024-08-14 21:15:37 +01:00
Nick Craig-Wood
bac9abebfb lib/encoder: add Exclamation mark encoding 2024-08-14 21:15:37 +01:00
Nick Craig-Wood
27b281ef69 chunkedreader: add --vfs-read-chunk-streams to parallel read chunks
This converts the ChunkedReader into an interface and provides two
implementations one sequential and one parallel.

This can be used to improve the performance of the VFS on high
bandwidth or high latency links.

Fixes #4760
2024-08-14 21:13:09 +01:00
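To illustrate the parallel implementation idea (this is not the actual rclone chunkedreader code; `openRange` is a hypothetical helper returning a reader for a byte range), a sketch might look like this:

```
package sketch

import (
	"io"
	"sync"
)

// readChunksParallel reads size bytes starting at off in chunkSize pieces,
// fetching up to streams chunks concurrently and reassembling them in order.
func readChunksParallel(openRange func(start, length int64) (io.ReadCloser, error),
	off, size, chunkSize int64, streams int) ([]byte, error) {
	n := (size + chunkSize - 1) / chunkSize
	out := make([]byte, size)
	errs := make(chan error, n)
	sem := make(chan struct{}, streams) // limits the number of concurrent streams
	var wg sync.WaitGroup
	for i := int64(0); i < n; i++ {
		wg.Add(1)
		go func(i int64) {
			defer wg.Done()
			sem <- struct{}{}
			defer func() { <-sem }()
			start := i * chunkSize
			length := min(chunkSize, size-start)
			rc, err := openRange(off+start, length)
			if err != nil {
				errs <- err
				return
			}
			defer rc.Close()
			_, err = io.ReadFull(rc, out[start:start+length])
			errs <- err
		}(i)
	}
	wg.Wait()
	close(errs)
	for err := range errs {
		if err != nil {
			return nil, err
		}
	}
	return out, nil
}
```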
Nick Craig-Wood
10270a4354 accounting: fix race detected by the race detector 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
d08b49d723 pool: Add ability to wait for a write to RW 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
cb2d2d72a0 pool: Make RW thread safe so can read and write at the same time 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
e686e34f89 multipart: make pool buffer size public 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
5f66350331 Add Fornax to contributors 2024-08-14 21:12:56 +01:00
Nick Craig-Wood
e1d935b854 build: use go1.23 for the linter
This reverts commit 485aa90d13.

As the upstream problem is now fixed by golangci-lint v1.60.1
2024-08-14 18:27:13 +01:00
Nick Craig-Wood
61b27cda80 build: fix govet lint errors with golangci-lint v1.60.1
There were a lot of instances of this lint error

    printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)

Which were fixed by re-arranging the arguments and adding "%s".

There were quite a few genuine bugs which were found too.
2024-08-14 18:25:40 +01:00
Nick Craig-Wood
83613634f9 build: bisync: fix govet lint errors with golangci-lint v1.60.1
There were a lot of instances of this lint error

    printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)

Most of these could not easily be fixed so had nolint lines added.

This should probably be done in a neater way perhaps by making
LogColorf/ErrorColorf functions.
2024-08-14 18:21:31 +01:00
Nick Craig-Wood
1c80cbd13a build: fix staticcheck lint errors with golangci-lint v1.60.1 2024-08-14 17:48:24 +01:00
Nick Craig-Wood
9d5315a944 build: fix gosimple lint errors with golangci-lint v1.60.1 2024-08-14 17:46:12 +01:00
Nick Craig-Wood
8d1d096c11 drive: fix copying Google Docs to a backend which only supports SHA1
When copying Google Docs to Backblaze B2 errors like this would happen

    ERROR : test.docx: Failed to calculate src hash: hash type not supported
    ERROR : test.docx: corrupted on transfer: sha1 hashes differ src

This was due to an oversight in

8fd66daab6 drive: add support of SHA-1 and SHA-256 checksum

Which omitted to change the base object (which includes Google Docs) so
that it supported SHA-1 and SHA-256.
2024-08-12 20:27:12 +01:00
Nick Craig-Wood
4b922d86d7 drive: update docs on creating admin service accounts 2024-08-12 20:27:12 +01:00
Fornax
3b3625037c Add pixeldrain backend
This commit adds support for pixeldrain's experimental filesystem API.
2024-08-12 13:35:44 +01:00
kapitainsky
bfa3278f30 docs: add comment how to reduce rclone binary size (#8000)
See #7998
2024-08-10 17:52:32 +01:00
albertony
e334366345 Make listremotes long output backwards compatible - fixes #7995
The format was changed to include the source attribute in #7404, but that is now
reverted and the source information is only shown in json output.
2024-08-09 17:39:00 +01:00
Nick Craig-Wood
642d4082ac test_backend_sizes.py calculates space in the binary each backend uses #7998 2024-08-09 12:13:24 +01:00
albertony
024ff6ed15 listremotes: added options for filtering, ordering and json output 2024-08-08 13:41:31 +01:00
albertony
d6b0743cf4 config: make getting config values more consistent 2024-08-08 13:41:31 +01:00
albertony
e4749cf0d0 config: make listing of remotes more consistent 2024-08-08 13:41:31 +01:00
albertony
8d2907d8f5 config: avoid remote with empty name from environment 2024-08-08 13:41:31 +01:00
albertony
1720d3e11c help: global flags help command extended filtering 2024-08-08 13:41:31 +01:00
albertony
c6352231e4 help: global flags help command now takes glob filter 2024-08-08 13:41:31 +01:00
albertony
731947f3ca filter: add options for glob to regexp without anchors and special path rules 2024-08-08 13:41:31 +01:00
albertony
16d642825d docs: remove old genautocomplete command docs and add as alias from the newer completion command 2024-08-08 13:34:10 +01:00
albertony
50aebcf403 docs: replace references to genautocomplete with the new name completion 2024-08-08 13:34:10 +01:00
Nick Craig-Wood
c8555d1b16 serve s3: update to AWS SDKv2 by updating github.com/rclone/gofakes3
This is the last dependency for the SDKv1 and this commit removes it
from go.mod also.
2024-08-07 16:35:39 +01:00
Nick Craig-Wood
3ec0ff5d8f s3: fix SSE-C after SDKv2 change
The new SDK apparently needs the customer key to be base64 encoded
where the old one did that for you automatically.

See: https://github.com/aws/aws-sdk-go-v2/issues/2736
See: https://forum.rclone.org/t/new-s3-backend-help-testing-needed/47139/3
2024-08-07 12:13:13 +01:00
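A one-line illustration of the difference (not rclone's code): with SDKv2 the key must be base64 encoded by the caller before being set on the request.

```
package sketch

import "encoding/base64"

// encodeSSECKey prepares a raw SSE-C key for the v2 SDK, which expects
// the customer key to arrive already base64 encoded.
func encodeSSECKey(raw []byte) string {
	return base64.StdEncoding.EncodeToString(raw)
}
```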
wiserain
746516511d pikpak: update to using AWS SDK v2 #4989 2024-08-07 12:13:13 +01:00
Nick Craig-Wood
8aef1de695 s3: fix Cloudflare R2 integration tests after SDKv2 update #4989
Cloudflare will normally automatically decompress files with
`Content-Encoding: gzip` when downloaded. This is not what AWS S3 does
and it breaks the integration tests.

This fudges the integration tests to upload the test file with
`Cache-Control: no-transform` on Cloudflare R2 and puts a note in the
docs about this problem.
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
cb611b8330 s3: add --s3-sdk-log-mode to control SDK debugging 2024-08-07 12:13:13 +01:00
Nick Craig-Wood
66ae050a8b s3: fix GCS provider after SDKv2 update #4989
This also adds GCS via S3 to the integration tester.
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
fd9049c83d s3: update to using AWS SDK v2 - fixes #4989
SDK v2 conversion

Changes

  - `--s3-sts-endpoint` is no longer supported
  - `--s3-use-unsigned-payload` to control use of trailer checksums (needed for non AWS)
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
a1f52bcf50 fstest: implement method to skip ChunkedCopy tests 2024-08-06 12:45:07 +01:00
Nick Craig-Wood
0470450583 build: disable wasm/js build due to go bug
Rclone is too big for js/wasm until
https://github.com/golang/go/issues/64856 is fixed
2024-08-04 12:18:34 +01:00
Nick Craig-Wood
1901bae4eb Add @dmcardle as gitannex maintainer 2024-08-01 17:48:39 +01:00
Nick Craig-Wood
9866d1c636 docs: s3: add section on using too much memory #7974 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
c5c7bcdd45 docs: link the workaround for big directory syncs in the FAQ #7974 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
d5c7b55ba5 Add David Seifert to contributors 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
feafbfca52 Add Will Miles to contributors 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
abe01179ae Add Ernie Hershey to contributors 2024-08-01 16:33:09 +01:00
David Seifert
612c717ea0 docs: rc: fix correct _path to _root in on the fly backend docs 2024-07-30 10:19:47 +01:00
Saleh Dindar
f26d2c6ba8 fs/http: reload client certificates on expiry
In corporate environments, client certificates have short life times
for added security, and they get renewed automatically. This means
that client certificate can expire in the middle of long running
command such as `mount`.

This commit attempts to reload the client certificates 30s before they
expire.

This will be active for all backends which use HTTP.
2024-07-24 15:02:32 +01:00
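A simplified Go sketch of the reload-before-expiry idea (not the actual rclone implementation; the 30s threshold comes from the commit above, everything else is an assumption):

```
package sketch

import (
	"crypto/tls"
	"crypto/x509"
	"sync"
	"time"
)

// reloadingCert re-reads the client certificate from disk shortly
// before the current one expires.
type reloadingCert struct {
	mu       sync.Mutex
	certFile string
	keyFile  string
	cert     *tls.Certificate
	expiry   time.Time
}

func (r *reloadingCert) get() (*tls.Certificate, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.cert != nil && time.Until(r.expiry) > 30*time.Second {
		return r.cert, nil // still comfortably valid
	}
	c, err := tls.LoadX509KeyPair(r.certFile, r.keyFile)
	if err != nil {
		return nil, err
	}
	leaf, err := x509.ParseCertificate(c.Certificate[0])
	if err != nil {
		return nil, err
	}
	r.cert, r.expiry = &c, leaf.NotAfter
	return r.cert, nil
}
```

Such a getter would typically be plugged into tls.Config.GetClientCertificate so that every new TLS handshake picks up the freshest certificate.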
Will Miles
dcecb0ede4 docs: clarify hasher operation
Add a line to the "other operations" block to indicate that the hasher overlay will apply auto-size and other checks for all commands.
2024-07-24 11:07:52 +01:00
Ernie Hershey
47588a7fd0 docs: fix typo in batcher docs for dropbox and googlephotos 2024-07-24 10:58:22 +01:00
Nick Craig-Wood
ba381f8721 b2: update versions documentation - fixes #7878 2024-07-24 10:52:05 +01:00
Nick Craig-Wood
8f0ddcca4e s3: document need to set force_path_style for buckets with invalid DNS names
Fixes #6110
2024-07-23 11:34:08 +01:00
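As a hedged illustration (remote name and values invented), the documented workaround looks something like:

    [mys3]
    type = s3
    provider = AWS
    # virtual-hosted-style URLs break for bucket names that are not valid
    # DNS names (e.g. names containing dots), so force path-style requests
    force_path_style = true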
Nick Craig-Wood
404ef80025 ncdu: document that excludes are not shown - fixes #6087 2024-07-23 11:29:07 +01:00
Nick Craig-Wood
13fa583368 sftp: clarify the docs for key_pem - fixes #7921 2024-07-23 10:07:44 +01:00
Nick Craig-Wood
e111ffba9e serve ftp: fix failed startup due to config changes
See: https://forum.rclone.org/t/failed-to-ftp-failed-to-parse-host-port/46959
2024-07-22 14:54:32 +01:00
Nick Craig-Wood
30ba7542ff docs: add Route4Me as a sponsor 2024-07-22 14:48:41 +01:00
wiserain
31fabb3402 pikpak: correct file transfer progress for uploads by hash
Pikpak can accelerate file uploads by leveraging existing content 
in its storage (identified by a custom hash called gcid). 
Previously, file transfer statistics were incorrect for uploads 
without outbound traffic as the input stream remained unchanged.

This commit addresses the issue by:

* Removing unnecessary unwrapping/wrapping of accountings 
before/after gcid calculation, leading to immediate AccountRead() on buffering.
* Correctly tracking file transfer statistics for uploads 
with no incoming/outgoing traffic by marking them as Server Side Copies.

This change ensures correct statistics tracking and improves overall user experience.
2024-07-20 21:50:08 +09:00
Nick Craig-Wood
b3edc9d360 fs: fix --use-json-log and -vv after config reorganization 2024-07-20 12:49:08 +01:00
Nick Craig-Wood
04f35fc3ac Add Tobias Markus to contributors 2024-07-20 12:49:08 +01:00
Tobias Markus
8e5dd79e4d ulozto: fix upload of > 2GB files on 32 bit platforms - fixes #7960 2024-07-20 11:29:34 +01:00
Nick Craig-Wood
b809e71d6f lib/mmap: fix lint error on deprecated reflect.SliceHeader
reflect.SliceHeader is deprecated, however the replacement gives a go
vet warning so this disables the lint warning in one use of
reflect.SliceHeader and replaces it in the other.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
d149d1ec3e lib/http: fix tests after go1.23 update
go1.22 output the Content-Length on a bad Range request on a file but
go1.23 doesn't - adapt the tests accordingly.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
3b51ad24b2 rc: fix tests after go1.23 upgrade
go1.23 adds a doctype to the HTML output when serving file listings.
This adapts the tests for that.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
485aa90d13 build: use go1.22 for the linter to fix excess memory usage
golangci-lint seems to have a bug which uses excess memory under go1.23

See: https://github.com/golangci/golangci-lint/issues/4874
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
8958d06456 build: update all dependencies 2024-07-20 10:54:47 +01:00
Nick Craig-Wood
ca24447090 build: update to go1.23rc1 and make go1.21 the minimum required version 2024-07-20 10:54:47 +01:00
Nick Craig-Wood
d008381e59 Add AThePeanut4 to contributors 2024-07-20 10:54:47 +01:00
AThePeanut4
14629c66f9 systemd: prevent unmount rc command from sending a STOPPING=1 sd-notify message
This prevents an `rclone rcd` server from prematurely going into the
'deactivating' state, which was causing systemd to kill it with a
SIGABRT after the stop timeout.

Fixes #7540
2024-07-19 10:32:34 +01:00
Nick Craig-Wood
4824837eed azureblob: allow anonymous access for public resources
See: https://forum.rclone.org/t/azure-blob-public-resources/46882
2024-07-18 11:13:29 +01:00
Nick Craig-Wood
5287a9b5fa Add Ke Wang to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
f2ce1767f0 Add itsHenry to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
7f048ac901 Add Tomasz Melcer to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
b0d0e0b267 Add Paul Collins to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
f5eef420a4 Add Russ Bubley to contributors 2024-07-18 11:13:29 +01:00
Sawjan Gurung
9de485f949 serve s3: implement --auth-proxy
This implements --auth-proxy for serve s3. In addition it:

* add listbuckets tests with and without authProxy
* use auth proxy test framework
* servetest: implement workaround for #7454
* update github.com/rclone/gofakes3 to fix race condition
2024-07-17 15:14:08 +01:00
Kyle Reynolds
d4b29fef92 fs: Allow semicolons as well as spaces in --bwlimit timetable parsing - fixes #7595 2024-07-17 11:04:01 +01:00
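For example (schedule values invented for illustration), both separators are now accepted in the timetable:

    rclone sync src: dst: --bwlimit "08:00,512k 12:00,10M 23:00,off"
    rclone sync src: dst: --bwlimit "08:00,512k;12:00,10M;23:00,off"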
wiserain
471531eb6a pikpak: optimize upload by pre-fetching gcid from API
This commit optimizes the PikPak upload process by pre-fetching the Global 
Content Identifier (gcid) from the API server before calculating it locally.

Previously, a gcid required for uploads was calculated locally. This process was 
resource-intensive and time-consuming. By first checking for a cached gcid 
on the server, we can potentially avoid the local calculation entirely. 
This significantly improves upload speed especially for large files.
2024-07-17 12:20:09 +09:00
Nick Craig-Wood
afd2663057 rc: add option blocks parameter to options/get and options/info 2024-07-16 15:02:50 +01:00
Ke Wang
97d6a00483 chore(deps): update github.com/rclone/gofakes3 2024-07-16 10:58:02 +01:00
Nick Craig-Wood
5ddedae431 fstest: fix compile after merge
After merging this commit

56caab2033 b2: Include custom upload headers in large file info

The compile failed as a change had been missed. Should have rebased
before merging!
2024-07-15 12:18:14 +01:00
URenko
e1b7bf7701 local: fix encoding of root path
fix #7824
Statements like rclone copy <somewhere> . will spontaneously miss files
if . expands to a path with a Full Width replacement character.
This is due to the incorrect order in which
relative paths and decoding were handled in the original implementation.
2024-07-15 12:10:04 +01:00
URenko
2a615f4681 vfs: fix cache encoding with special characters - #7760
The VFS used the hardcoded OS encoding when creating a temp file,
but decoded it with the encoding for the local filesystem (--local-encoding)
when copying it to the remote.
This caused failures when the filenames contained special characters.
The hardcoded OS encoding is now used uniformly.
2024-07-15 12:10:04 +01:00
URenko
e041796bfe docs: correct description of encoding None and add Raw. 2024-07-15 12:10:04 +01:00
URenko
1b9217bc78 lib/encoder: add EncodeRaw 2024-07-15 12:10:04 +01:00
wiserain
846c1aeed0 pikpak: non-buffered hash calculation for local source files 2024-07-15 11:53:01 +01:00
Pat Patterson
56caab2033 b2: Include custom upload headers in large file info - fixes #7744 2024-07-15 11:51:37 +01:00
itsHenry
495a5759d3 chore(deps): update github.com/rclone/gofakes3 2024-07-15 11:34:28 +01:00
Nick Craig-Wood
d9bd6f35f2 fs/test: fix erratic test 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
532a0818f7 fs: make sure we load the options defaults to start with 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
91558ce6aa fs: fix the defaults overriding the actual config
After re-organising the config it became apparent that there was a bug
in the config system which hadn't manifested until now.

This was the default config overriding the main config and was fixed
by noting when the defaults had actually changed.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
8fbb259091 rc: add options/info call to enumerate options
This also makes some fields in the Options block optional - these are
documented in rc.md
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
4d2bc190cc fs: convert main options to new config system
There are some flags which haven't been converted which could be
converted in the future.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c2bf300dd8 accounting: fix creating of global stats ignoring the config
Before this change the global stats were created before the global
config which meant they ignored the global config completely.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c954c397d9 filter: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
25c6379688 filter: rename Opt to Options for consistency 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
ce1859cd82 rc: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
cf25ae69ad lib/http: convert options to new style
There are still users of the old style options which haven't been
converted yet.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
dce8317042 log: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
eff2497633 serve sftp: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
28ba4b832d serve nfs: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
58da1a165c serve ftp: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
eec95a164d serve dlna: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
44cd2e07ca cmd/mountlib: convert mount options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
a28287e96d vfs: convert vfs options to new style
This also
- move in use options (Opt) from vfsflags to vfscommon
- change os.FileMode to vfscommon.FileMode in parameters
- rework vfscommon.FileMode and add tests
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
fc1d8dafd5 vfs: convert time.Duration option to fs.Duration 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
2c57fe9826 cmd/mountlib: convert time.Duration option to fs.Duration 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
7c51b10d15 configstruct: skip items with config:"-" 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
3280b6b83c configstruct: allow parsing of []string encoded as JSON 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
1a77a2f92b configstruct: make nested config structs work 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c156716d01 configstruct: fix parsing of invalid booleans in the config
Apparently fmt.Sscanln doesn't parse bools properly and this isn't
likely to be fixed by the Go team, who regard sscanf as a mistake.

This change only uses Sscan for integers and uses the correct routine for
everything else.

This also implements parsing time.Duration

See: https://github.com/golang/go/issues/43306
2024-07-15 11:09:54 +01:00
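A small runnable illustration of the distinction (not rclone's code): strconv handles booleans in all their accepted spellings, while the fmt scanners remain fine for plain integers.

```
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// strconv.ParseBool accepts 1, t, T, TRUE, true, True, 0, f, F, FALSE, false, False.
	for _, s := range []string{"true", "1", "F"} {
		v, err := strconv.ParseBool(s)
		fmt.Println(s, "->", v, err)
	}

	// Integers can still be scanned with fmt.
	var n int
	_, err := fmt.Sscan("42", &n)
	fmt.Println(n, err)
}
```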
Nick Craig-Wood
0d9d0eef4c fs: check the names and types of the options blocks are correct 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
2e653f8128 fs: make Flagger and FlaggerNP interfaces public so we can test flags elsewhere 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
e79273f9c9 fs: add Options registry and rework rc to use it 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
8e10fe71f7 fs: allow []string to work in Options 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c6ab37a59f flags: factor AddFlagsFromOptions from cmd
This is in preparation for generalising the backend config
2024-07-15 11:09:53 +01:00
Nick Craig-Wood
671a15f65f fs: add Groups and FieldName to Option 2024-07-15 11:09:53 +01:00
Nick Craig-Wood
8d72698d5a fs: refactor fs.ConfigMap to take a prefix and Options rather than an fs.RegInfo
This is in preparation for generalising the backend config system
2024-07-15 11:09:53 +01:00
Nick Craig-Wood
6e853c82d8 sftp: ignore errors when closing the connection pool
There is no need to report errors when draining the connection pool -
they are useless at this point.

See: https://forum.rclone.org/t/rclone-fails-to-close-unused-tcp-connections-due-to-use-of-closed-network-connection/46735
2024-07-15 10:48:45 +01:00
Tomasz Melcer
27267547b9 sftp: use uint32 for mtime
The SFTP protocol (and the golang sftp package) internally uses uint32 unix
time for expressing mtime. Hence it is a waste of memory to store it as a 24-byte
time.Time data structure in long-lived data structures. So even though the
golang sftp package uses time.Time as its external interface, we can re-encode the
value back to the original format and save memory.

Co-authored-by: Tomasz Melcer <tomasz@melcer.pl>
2024-07-09 10:23:11 +01:00
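A tiny illustration of the trade-off (assuming a 64-bit platform; not the actual backend code):

```
package main

import (
	"fmt"
	"time"
	"unsafe"
)

func main() {
	// A time.Time value occupies 24 bytes; the protocol's native
	// representation is a 4-byte uint32 of unix seconds.
	fmt.Println(unsafe.Sizeof(time.Time{})) // 24

	mtime := uint32(time.Now().Unix())      // store compactly...
	restored := time.Unix(int64(mtime), 0)  // ...and re-expand on demand
	fmt.Println(mtime, restored)
}
```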
wiserain
cdcf0e5cb8 pikpak: optimize file move by removing unnecessary readMetaData() call
Previously, the code relied on calling `readMetaData()` after every file move operation.
This introduced an unnecessary API call and potentially impacted performance.

This change removes the redundant `readMetaData()` call, improving efficiency.
2024-07-08 18:16:00 +09:00
wiserain
6507770014 pikpak: fix error with copyto command
Fixes an issue where copied files could not be renamed when using the
`copyto` command. This occurred because the object ID was empty
before calling `readMetaData`. The fix preemptively calls `readMetaData`
to ensure an object ID is available before attempting the rename operation.
2024-07-08 10:37:42 +09:00
Paul Collins
bd5799c079 swift: add workarounds for bad listings in Ceph RGW
Ceph's Swift API emulation does not fully conform to the API spec.
As a result, it sometimes returns fewer items in a container than
the requested limit, which according to the spec should mean
that there are no more objects left in the container.  (Note that
python-swiftclient always fetches unless the current page is empty.)

This commit adds a pair of new Swift backend settings to handle this.

Set `fetch_until_empty_page` to true to always fetch another
page of the container listing unless there are no items left.

Alternatively, set `partial_page_fetch_threshold` to an integer
percentage.  In this case rclone will fetch a new page only when
the current page is within this percentage of the limit.

Swift API reference: https://docs.openstack.org/swift/latest/api/pagination.html

PR against ncw/swift with research and discussion: https://github.com/ncw/swift/pull/167

Fixes #7924
2024-06-28 11:14:26 +01:00
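A hedged config sketch of the two new settings (remote name and the threshold value are illustrative):

    [ceph-swift]
    type = swift
    # always fetch another page unless the previous one came back empty
    fetch_until_empty_page = true
    # ...or instead, fetch again when a page holds less than 90% of the limit
    # partial_page_fetch_threshold = 90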
Russ Bubley
c834eb7dcb sftp: fix docs on connections not to refer to concurrency 2024-06-28 10:42:52 +01:00
Nick Craig-Wood
754e53dbcc docs: remove warp as silver sponsor 2024-06-24 10:33:18 +01:00
Nick Craig-Wood
5511fa441a onedrive: fix nil pointer error when uploading small files
Before this fix when uploading a single part file, if the
o.fetchAndUpdateMetadata() call failed rclone would call
o.setMetaData() with a nil info which caused a crash.

This fixes the problem by returning the error from
o.fetchAndUpdateMetadata() explicitly.

See: https://forum.rclone.org/t/serve-webdav-is-crashing-fatal-error-sync-unlock-of-unlocked-mutex/46300
2024-06-24 09:30:59 +01:00
Nick Craig-Wood
4ed4483bbc vfs: fix fatal error: sync: unlock of unlocked mutex in panics
Before this change a panic could be overwritten with the message

    fatal error: sync: unlock of unlocked mutex

This was because we temporarily unlocked the mutex, but failed to lock
it again if there was a panic.

This code is never the cause of an error but it masks the
underlying error by overwriting the panic cause.

See: https://forum.rclone.org/t/serve-webdav-is-crashing-fatal-error-sync-unlock-of-unlocked-mutex/46300
2024-06-24 09:30:59 +01:00
Nick Craig-Wood
0e85ba5080 Add Filipe Herculano to contributors 2024-06-24 09:30:59 +01:00
Nick Craig-Wood
e5095a7d7b Add Thearas to contributors 2024-06-24 09:30:59 +01:00
wiserain
300851e8bf pikpak: implement custom hash to replace wrong sha1
This improves PikPak's file integrity verification by implementing a custom 
hash function named gcid and replacing the previously used SHA-1 hash.
2024-06-20 00:57:21 +09:00
wiserain
cbccad9491 pikpak: improves data consistency by ensuring async tasks complete
Similar to uploads implemented in commit ce5024bf33, 
this change ensures most asynchronous file operations (copy, move, delete, 
purge, and cleanup) complete before proceeding with subsequent actions. 
This reduces the risk of data inconsistencies and improves overall reliability.
2024-06-20 00:07:05 +09:00
dependabot[bot]
9f1a7cfa67 build(deps): bump docker/build-push-action from 5 to 6
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-18 14:48:30 +01:00
Filipe Herculano
d84a4c9ac1 s3: fix incorrect region for Magalu provider 2024-06-15 17:40:28 +01:00
Thearas
1c9da8c96a docs: recommend no_check_bucket = true for Alibaba - fixes #7889
Change-Id: Ib6246e416ce67dddc3cb69350de69129a8826ce3
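For illustration, the recommended setting in an rclone.conf stanza
(credentials and endpoint are placeholders):

```
[alibaba]
type = s3
provider = Alibaba
access_key_id = XXX
secret_access_key = XXX
endpoint = oss-cn-hangzhou.aliyuncs.com
no_check_bucket = true
```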
2024-06-15 17:39:05 +01:00
Nick Craig-Wood
af9c5fef93 docs: tidy .gitignore for docs 2024-06-15 13:08:20 +01:00
Nick Craig-Wood
7060777d1d docs: fix hugo warning: found no layout file for "html" for kind "term"
Hugo has been emitting this warning for a while

WARN found no layout file for "html" for kind "term": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.

This turned out to be caused by the addition of the `groups:` keyword to
the command frontmatter. Hugo does something with this keyword, though it
isn't documented in the frontmatter documentation.

The fix was removing the `groups:` keyword from the frontmatter, since
it was never used by Hugo.
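For illustration, a command page's frontmatter with the offending key might
have looked roughly like this before the fix (the field values are
hypothetical; only the `groups:` key is taken from the description above):

```
---
title: "rclone copy"
description: "Copy files from source to dest, skipping identical files."
groups: Copy,Filter,Listing
---
```

With that key removed, Hugo stops looking for a "term" layout for it, which
is what silenced the warning.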
2024-06-15 12:59:49 +01:00
Nick Craig-Wood
0197e7f4e5 docs: remove slug and url from command pages since they are no longer needed 2024-06-15 12:37:43 +01:00
Nick Craig-Wood
c1c9e209f3 docs: fix hugo warning: found no layout file for "html" for kind "section"
Hugo has been emitting this warning for a while

WARN found no layout file for "html" for kind "section": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.

It turned out to be
- the arrangement of the oracle object storage docs and sub page
- the fact that a section template was missing
2024-06-15 12:29:37 +01:00
Nick Craig-Wood
fd182af866 serve dlna: fix panic: invalid argument to Int63n
This updates the upstream github.com/anacrolix/dms to master to fix
the problem.

Fixes #7911
2024-06-15 10:58:57 +01:00
704 changed files with 85424 additions and 52313 deletions

View File

@@ -17,22 +17,21 @@ on:
manual:
description: Manual run (bypass default conditions)
type: boolean
required: true
default: true
jobs:
build:
if: ${{ github.event.inputs.manual == 'true' || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name)) }}
if: inputs.manual || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name))
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.20', 'go1.21']
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.22', 'go1.23']
include:
- job_name: linux
os: ubuntu-latest
go: '>=1.22.0-rc.1'
go: '>=1.24.0-rc.1'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@@ -43,14 +42,14 @@ jobs:
- job_name: linux_386
os: ubuntu-latest
go: '>=1.22.0-rc.1'
go: '>=1.24.0-rc.1'
goarch: 386
gotags: cmount
quicktest: true
- job_name: mac_amd64
os: macos-latest
go: '>=1.22.0-rc.1'
go: '>=1.24.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
@@ -59,14 +58,14 @@ jobs:
- job_name: mac_arm64
os: macos-latest
go: '>=1.22.0-rc.1'
go: '>=1.24.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows
os: windows-latest
go: '>=1.22.0-rc.1'
go: '>=1.24.0-rc.1'
gotags: cmount
cgo: '0'
build_flags: '-include "^windows/"'
@@ -76,20 +75,20 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '>=1.22.0-rc.1'
go: '>=1.24.0-rc.1'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true
deploy: true
- job_name: go1.20
- job_name: go1.22
os: ubuntu-latest
go: '1.20'
go: '1.22'
quicktest: true
racequicktest: true
- job_name: go1.21
- job_name: go1.23
os: ubuntu-latest
go: '1.21'
go: '1.23'
quicktest: true
racequicktest: true
@@ -124,7 +123,8 @@ jobs:
sudo modprobe fuse
sudo chmod 666 /dev/fuse
sudo chown root:$USER /etc/fuse.conf
sudo apt-get install fuse3 libfuse-dev rpm pkg-config git-annex git-annex-remote-rclone
sudo apt-get update
sudo apt-get install -y fuse3 libfuse-dev rpm pkg-config git-annex git-annex-remote-rclone nfs-common
if: matrix.os == 'ubuntu-latest'
- name: Install Libraries on macOS
@@ -217,7 +217,7 @@ jobs:
if: env.RCLONE_CONFIG_PASS != '' && matrix.deploy && github.head_ref == '' && github.repository == 'rclone/rclone'
lint:
if: ${{ github.event.inputs.manual == 'true' || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name)) }}
if: inputs.manual || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name))
timeout-minutes: 30
name: "lint"
runs-on: ubuntu-latest
@@ -237,7 +237,7 @@ jobs:
id: setup-go
uses: actions/setup-go@v5
with:
go-version: '>=1.22.0-rc.1'
go-version: '>=1.23.0-rc.1'
check-latest: true
cache: false
@@ -296,7 +296,7 @@ jobs:
run: govulncheck ./...
android:
if: ${{ github.event.inputs.manual == 'true' || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name)) }}
if: inputs.manual || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name))
timeout-minutes: 30
name: "android-all"
runs-on: ubuntu-latest
@@ -311,7 +311,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '>=1.22.0-rc.1'
go-version: '>=1.24.0-rc.1'
- name: Set global environment variables
shell: bash

View File

@@ -1,77 +0,0 @@
name: Docker beta build
on:
push:
branches:
- master
jobs:
build:
if: github.repository == 'rclone/rclone'
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: ghcr.io/${{ github.repository }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
# This is the user that triggered the Workflow. In this case, it will
# either be the user whom created the Release or manually triggered
# the workflow_dispatch.
username: ${{ github.actor }}
# `secrets.GITHUB_TOKEN` is a secret that's automatically generated by
# GitHub Actions at the start of a workflow run to identify the job.
# This is used to authenticate against GitHub Container Registry.
# See https://docs.github.com/en/actions/security-guides/automatic-token-authentication#about-the-github_token-secret
# for more detailed information.
password: ${{ secrets.GITHUB_TOKEN }}
- name: Show disk usage
shell: bash
run: |
df -h .
- name: Build and publish image
uses: docker/build-push-action@v5
with:
file: Dockerfile
context: .
push: true # push the image to ghcr
tags: |
ghcr.io/rclone/rclone:beta
rclone/rclone:beta
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, mode=max, scope=${{ github.workflow }}
provenance: false
# Eventually cache will need to be cleared if builds more frequent than once a week
# https://github.com/docker/build-push-action/issues/252
- name: Show disk usage
shell: bash
run: |
df -h .

View File

@@ -0,0 +1,294 @@
---
# Github Actions release for rclone
# -*- compile-command: "yamllint -f parsable build_publish_docker_image.yml" -*-
name: Build & Push Docker Images
# Trigger the workflow on push or pull request
on:
push:
branches:
- '**'
tags:
- '**'
workflow_dispatch:
inputs:
manual:
description: Manual run (bypass default conditions)
type: boolean
default: true
jobs:
build-image:
if: inputs.manual || (github.repository == 'rclone/rclone' && github.event_name != 'pull_request')
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
include:
- platform: linux/amd64
runs-on: ubuntu-24.04
- platform: linux/386
runs-on: ubuntu-24.04
- platform: linux/arm64
runs-on: ubuntu-24.04-arm
- platform: linux/arm/v7
runs-on: ubuntu-24.04-arm
- platform: linux/arm/v6
runs-on: ubuntu-24.04-arm
name: Build Docker Image for ${{ matrix.platform }}
runs-on: ${{ matrix.runs-on }}
steps:
- name: Free Space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout Repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set REPO_NAME Variable
run: |
echo "REPO_NAME=`echo ${{github.repository}} | tr '[:upper:]' '[:lower:]'`" >> ${GITHUB_ENV}
- name: Set PLATFORM Variable
run: |
platform=${{ matrix.platform }}
echo "PLATFORM=${platform//\//-}" >> $GITHUB_ENV
- name: Set CACHE_NAME Variable
shell: python
run: |
import os, re
def slugify(input_string, max_length=63):
slug = input_string.lower()
slug = re.sub(r'[^a-z0-9 -]', ' ', slug)
slug = slug.strip()
slug = re.sub(r'\s+', '-', slug)
slug = re.sub(r'-+', '-', slug)
slug = slug[:max_length]
slug = re.sub(r'[-]+$', '', slug)
return slug
ref_name_slug = "cache"
if os.environ.get("GITHUB_REF_NAME") and os.environ['GITHUB_EVENT_NAME'] == "pull_request":
ref_name_slug += "-pr-" + slugify(os.environ['GITHUB_REF_NAME'])
with open(os.environ['GITHUB_ENV'], 'a') as env:
env.write(f"CACHE_NAME={ref_name_slug}\n")
- name: Get ImageOS
# There's no way around this, because "ImageOS" is only available to
# processes, but the setup-go action uses it in its key.
id: imageos
uses: actions/github-script@v7
with:
result-encoding: string
script: |
return process.env.ImageOS
- name: Extract Metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: manifest,manifest-descriptor # Important for digest annotation (used by Github packages)
with:
images: |
ghcr.io/${{ env.REPO_NAME }}
labels: |
org.opencontainers.image.url=https://github.com/rclone/rclone/pkgs/container/rclone
org.opencontainers.image.vendor=${{ github.repository_owner }}
org.opencontainers.image.authors=rclone <https://github.com/rclone>
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
tags: |
type=sha
type=ref,event=pr
type=ref,event=branch
type=semver,pattern={{version}}
type=semver,pattern={{major}}
type=semver,pattern={{major}}.{{minor}}
type=raw,value=beta,enable={{is_default_branch}}
- name: Setup QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Load Go Build Cache for Docker
id: go-cache
uses: actions/cache@v4
with:
key: ${{ runner.os }}-${{ steps.imageos.outputs.result }}-go-${{ env.CACHE_NAME }}-${{ env.PLATFORM }}-${{ hashFiles('**/go.mod') }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-${{ steps.imageos.outputs.result }}-go-${{ env.CACHE_NAME }}-${{ env.PLATFORM }}
# Cache only the go builds, the module download is cached via the docker layer caching
path: |
go-build-cache
- name: Inject Go Build Cache into Docker
uses: reproducible-containers/buildkit-cache-dance@v3
with:
cache-map: |
{
"go-build-cache": "/root/.cache/go-build"
}
skip-extraction: ${{ steps.go-cache.outputs.cache-hit }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
# This is the user that triggered the Workflow. In this case, it will
# either be the user whom created the Release or manually triggered
# the workflow_dispatch.
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and Publish Image Digest
id: build
uses: docker/build-push-action@v6
with:
file: Dockerfile
context: .
provenance: false
# don't specify 'tags' here (error "get can't push tagged ref by digest")
# tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
annotations: ${{ steps.meta.outputs.annotations }}
platforms: ${{ matrix.platform }}
outputs: |
type=image,name=ghcr.io/${{ env.REPO_NAME }},push-by-digest=true,name-canonical=true,push=true
cache-from: |
type=registry,ref=ghcr.io/${{ env.REPO_NAME }}:build-${{ env.CACHE_NAME }}-${{ env.PLATFORM }}
cache-to: |
type=registry,ref=ghcr.io/${{ env.REPO_NAME }}:build-${{ env.CACHE_NAME }}-${{ env.PLATFORM }},image-manifest=true,mode=max,compression=zstd
- name: Export Image Digest
run: |
mkdir -p /tmp/digests
digest="${{ steps.build.outputs.digest }}"
touch "/tmp/digests/${digest#sha256:}"
- name: Upload Image Digest
uses: actions/upload-artifact@v4
with:
name: digests-${{ env.PLATFORM }}
path: /tmp/digests/*
retention-days: 1
if-no-files-found: error
merge-image:
name: Merge & Push Final Docker Image
runs-on: ubuntu-24.04
needs:
- build-image
steps:
- name: Download Image Digests
uses: actions/download-artifact@v4
with:
path: /tmp/digests
pattern: digests-*
merge-multiple: true
- name: Set REPO_NAME Variable
run: |
echo "REPO_NAME=`echo ${{github.repository}} | tr '[:upper:]' '[:lower:]'`" >> ${GITHUB_ENV}
- name: Extract Metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: index
with:
images: |
${{ env.REPO_NAME }}
ghcr.io/${{ env.REPO_NAME }}
labels: |
org.opencontainers.image.url=https://github.com/rclone/rclone/pkgs/container/rclone
org.opencontainers.image.vendor=${{ github.repository_owner }}
org.opencontainers.image.authors=rclone <https://github.com/rclone>
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
tags: |
type=sha
type=ref,event=pr
type=ref,event=branch
type=semver,pattern={{version}}
type=semver,pattern={{major}}
type=semver,pattern={{major}}.{{minor}}
type=raw,value=beta,enable={{is_default_branch}}
- name: Extract Tags
shell: python
run: |
import json, os
metadata_json = os.environ['DOCKER_METADATA_OUTPUT_JSON']
metadata = json.loads(metadata_json)
tags = [f"--tag '{tag}'" for tag in metadata["tags"]]
tags_string = " ".join(tags)
with open(os.environ['GITHUB_ENV'], 'a') as env:
env.write(f"TAGS={tags_string}\n")
- name: Extract Annotations
shell: python
run: |
import json, os
metadata_json = os.environ['DOCKER_METADATA_OUTPUT_JSON']
metadata = json.loads(metadata_json)
annotations = [f"--annotation '{annotation}'" for annotation in metadata["annotations"]]
annotations_string = " ".join(annotations)
with open(os.environ['GITHUB_ENV'], 'a') as env:
env.write(f"ANNOTATIONS={annotations_string}\n")
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
# This is the user that triggered the Workflow. In this case, it will
# either be the user whom created the Release or manually triggered
# the workflow_dispatch.
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create & Push Manifest List
working-directory: /tmp/digests
run: |
docker buildx imagetools create \
${{ env.TAGS }} \
${{ env.ANNOTATIONS }} \
$(printf 'ghcr.io/${{ env.REPO_NAME }}@sha256:%s ' *)
- name: Inspect and Run Multi-Platform Image
run: |
docker buildx imagetools inspect --raw ${{ env.REPO_NAME }}:${{ steps.meta.outputs.version }}
docker buildx imagetools inspect --raw ghcr.io/${{ env.REPO_NAME }}:${{ steps.meta.outputs.version }}
docker run --rm ghcr.io/${{ env.REPO_NAME }}:${{ steps.meta.outputs.version }} version

View File

@@ -0,0 +1,45 @@
---
# Github Actions release for rclone
# -*- compile-command: "yamllint -f parsable build_publish_docker_plugin.yml" -*-
name: Release Build for Docker Plugin
on:
release:
types: [published]
jobs:
build_docker_volume_plugin:
if: github.repository == 'rclone/rclone'
needs: build
runs-on: ubuntu-latest
name: Build docker plugin job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Build and publish docker plugin
shell: bash
run: |
VER=${GITHUB_REF#refs/tags/}
PLUGIN_USER=rclone
docker login --username ${{ secrets.DOCKER_HUB_USER }} \
--password-stdin <<< "${{ secrets.DOCKER_HUB_PASSWORD }}"
for PLUGIN_ARCH in amd64 arm64 arm/v7 arm/v6 ;do
export PLUGIN_USER PLUGIN_ARCH
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}-${VER#v}
done
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=latest
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=${VER#v}

View File

@@ -1,77 +0,0 @@
name: Docker release build
on:
release:
types: [published]
jobs:
build:
if: github.repository == 'rclone/rclone'
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Get actual patch version
id: actual_patch_version
run: echo ::set-output name=ACTUAL_PATCH_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g')
- name: Get actual minor version
id: actual_minor_version
run: echo ::set-output name=ACTUAL_MINOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1,2)
- name: Get actual major version
id: actual_major_version
run: echo ::set-output name=ACTUAL_MAJOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1)
- name: Build and publish image
uses: ilteoood/docker_buildx@1.1.0
with:
tag: latest,${{ steps.actual_patch_version.outputs.ACTUAL_PATCH_VERSION }},${{ steps.actual_minor_version.outputs.ACTUAL_MINOR_VERSION }},${{ steps.actual_major_version.outputs.ACTUAL_MAJOR_VERSION }}
imageName: rclone/rclone
platform: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
publish: true
dockerHubUser: ${{ secrets.DOCKER_HUB_USER }}
dockerHubPassword: ${{ secrets.DOCKER_HUB_PASSWORD }}
build_docker_volume_plugin:
if: github.repository == 'rclone/rclone'
needs: build
runs-on: ubuntu-latest
name: Build docker plugin job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Build and publish docker plugin
shell: bash
run: |
VER=${GITHUB_REF#refs/tags/}
PLUGIN_USER=rclone
docker login --username ${{ secrets.DOCKER_HUB_USER }} \
--password-stdin <<< "${{ secrets.DOCKER_HUB_PASSWORD }}"
for PLUGIN_ARCH in amd64 arm64 arm/v7 arm/v6 ;do
export PLUGIN_USER PLUGIN_ARCH
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}-${VER#v}
done
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=latest
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=${VER#v}

.gitignore (vendored, 5 changed lines)
View File

@@ -3,7 +3,9 @@ _junk/
rclone
rclone.exe
build
docs/public
/docs/public/
/docs/.hugo_build.lock
/docs/static/img/logos/
rclone.iml
.idea
.history
@@ -16,6 +18,5 @@ fuzz-build.zip
Thumbs.db
__pycache__
.DS_Store
/docs/static/img/logos/
resource_windows_*.syso
.devcontainer

View File

@@ -13,6 +13,7 @@ linters:
- stylecheck
- unused
- misspell
- gocritic
#- prealloc
#- maligned
disable-all: true
@@ -98,3 +99,46 @@ linters-settings:
# Only enable the checks performed by the staticcheck stand-alone tool,
# as documented here: https://staticcheck.io/docs/configuration/options/#checks
checks: ["all", "-ST1000", "-ST1003", "-ST1016", "-ST1020", "-ST1021", "-ST1022", "-ST1023"]
gocritic:
# Enable all default checks with some exceptions and some additions (commented).
# Cannot use both enabled-checks and disabled-checks, so must specify all to be used.
disable-all: true
enabled-checks:
#- appendAssign # Enabled by default
- argOrder
- assignOp
- badCall
- badCond
#- captLocal # Enabled by default
- caseOrder
- codegenComment
#- commentFormatting # Enabled by default
- defaultCaseOrder
- deprecatedComment
- dupArg
- dupBranchBody
- dupCase
- dupSubExpr
- elseif
#- exitAfterDefer # Enabled by default
- flagDeref
- flagName
#- ifElseChain # Enabled by default
- mapKey
- newDeref
- offBy1
- regexpMust
- ruleguard # Not enabled by default
#- singleCaseSwitch # Enabled by default
- sloppyLen
- sloppyTypeAssert
- switchTrue
- typeSwitchVar
- underef
- unlambda
- unslice
- valSwap
- wrapperFunc
settings:
ruleguard:
rules: "${configDir}/bin/rules.go"

View File

@@ -209,7 +209,7 @@ altogether with an HTML report and test retries then from the
project root:
go install github.com/rclone/rclone/fstest/test_all
test_all -backend drive
test_all -backends drive
### Full integration testing
@@ -490,7 +490,7 @@ alphabetical order of full name of remote (e.g. `drive` is ordered as
- `docs/content/remote.md` - main docs page (note the backend options are automatically added to this file with `make backenddocs`)
- make sure this has the `autogenerated options` comments in (see your reference backend docs)
- update them in your backend with `bin/make_backend_docs.py remote`
- `docs/content/overview.md` - overview docs
- `docs/content/overview.md` - overview docs - add an entry into the Features table and the Optional Features table.
- `docs/content/docs.md` - list of remotes in config section
- `docs/content/_index.md` - front page of rclone.org
- `docs/layouts/chrome/navbar.html` - add it to the website navigation
@@ -508,7 +508,7 @@ You'll need to modify the following files
- `backend/s3/s3.go`
- Add the provider to `providerOption` at the top of the file
- Add endpoints and other config for your provider gated on the provider in `fs.RegInfo`.
- Exclude your provider from genric config questions (eg `region` and `endpoint).
- Exclude your provider from generic config questions (eg `region` and `endpoint).
- Add the provider to the `setQuirks` function - see the documentation there.
- `docs/content/s3.md`
- Add the provider at the top of the page.

View File

@@ -1,19 +1,47 @@
FROM golang:alpine AS builder
COPY . /go/src/github.com/rclone/rclone/
ARG CGO_ENABLED=0
WORKDIR /go/src/github.com/rclone/rclone/
RUN apk add --no-cache make bash gawk git
RUN \
CGO_ENABLED=0 \
make
RUN ./rclone version
RUN echo "**** Set Go Environment Variables ****" && \
go env -w GOCACHE=/root/.cache/go-build
RUN echo "**** Install Dependencies ****" && \
apk add --no-cache \
make \
bash \
gawk \
git
COPY go.mod .
COPY go.sum .
RUN echo "**** Download Go Dependencies ****" && \
go mod download -x
RUN echo "**** Verify Go Dependencies ****" && \
go mod verify
COPY . .
RUN --mount=type=cache,target=/root/.cache/go-build,sharing=locked \
echo "**** Build Binary ****" && \
make
RUN echo "**** Print Version Binary ****" && \
./rclone version
# Begin final image
FROM alpine:latest
RUN apk --no-cache add ca-certificates fuse3 tzdata && \
echo "user_allow_other" >> /etc/fuse.conf
RUN echo "**** Install Dependencies ****" && \
apk add --no-cache \
ca-certificates \
fuse3 \
tzdata && \
echo "Enable user_allow_other in fuse" && \
echo "user_allow_other" >> /etc/fuse.conf
COPY --from=builder /go/src/github.com/rclone/rclone/rclone /usr/local/bin/

View File

@@ -21,6 +21,8 @@ Current active maintainers of rclone are:
| Chun-Hung Tseng | @henrybear327 | Proton Drive Backend |
| Hideo Aoyama | @boukendesho | snap packaging |
| nielash | @nielash | bisync |
| Dan McArdle | @dmcardle | gitannex |
| Sam Harrison | @childish-sambino | filescom |
**This is a work in progress Draft**

MANUAL.html (generated, 5881 changed lines)

File diff suppressed because it is too large

MANUAL.md (generated, 6370 changed lines)

File diff suppressed because it is too large

MANUAL.txt (generated, 6311 changed lines)

File diff suppressed because it is too large

View File

@@ -144,10 +144,14 @@ MANUAL.txt: MANUAL.md
pandoc -s --from markdown-smart --to plain MANUAL.md -o MANUAL.txt
commanddocs: rclone
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs docs/content/
-@rmdir -p '$$HOME/.config/rclone'
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs --config=/notfound docs/content/
@[ ! -e '$$HOME' ] || (echo 'Error: created unwanted directory named $$HOME' && exit 1)
backenddocs: rclone bin/make_backend_docs.py
-@rmdir -p '$$HOME/.config/rclone'
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" ./bin/make_backend_docs.py
@[ ! -e '$$HOME' ] || (echo 'Error: created unwanted directory named $$HOME' && exit 1)
rcdocs: rclone
bin/make_rc_docs.sh

View File

@@ -55,14 +55,18 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
* Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
* Files.com [:page_facing_up:](https://rclone.org/filescom/)
* FTP [:page_facing_up:](https://rclone.org/ftp/)
* GoFile [:page_facing_up:](https://rclone.org/gofile/)
* Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
* Google Drive [:page_facing_up:](https://rclone.org/drive/)
* Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
* HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
* Hetzner Storage Box [:page_facing_up:](https://rclone.org/sftp/#hetzner-storage-box)
* HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
* HTTP [:page_facing_up:](https://rclone.org/http/)
* Huawei Cloud Object Storage Service(OBS) [:page_facing_up:](https://rclone.org/s3/#huawei-obs)
* iCloud Drive [:page_facing_up:](https://rclone.org/iclouddrive/)
* ImageKit [:page_facing_up:](https://rclone.org/imagekit/)
* Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
* Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
@@ -89,10 +93,12 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
* Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
* Oracle Object Storage [:page_facing_up:](https://rclone.org/oracleobjectstorage/)
* Outscale [:page_facing_up:](https://rclone.org/s3/#outscale)
* ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
* pCloud [:page_facing_up:](https://rclone.org/pcloud/)
* Petabox [:page_facing_up:](https://rclone.org/s3/#petabox)
* PikPak [:page_facing_up:](https://rclone.org/pikpak/)
* Pixeldrain [:page_facing_up:](https://rclone.org/pixeldrain/)
* premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
* put.io [:page_facing_up:](https://rclone.org/putio/)
* Proton Drive [:page_facing_up:](https://rclone.org/protondrive/)
@@ -101,9 +107,11 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Quatrix [:page_facing_up:](https://rclone.org/quatrix/)
* Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
* RackCorp Object Storage [:page_facing_up:](https://rclone.org/s3/#RackCorp)
* rsync.net [:page_facing_up:](https://rclone.org/sftp/#rsync-net)
* Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
* Seafile [:page_facing_up:](https://rclone.org/seafile/)
* SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
* Selectel Object Storage [:page_facing_up:](https://rclone.org/s3/#selectel)
* SFTP [:page_facing_up:](https://rclone.org/sftp/)
* SMB / CIFS [:page_facing_up:](https://rclone.org/smb/)
* StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)

View File

@@ -47,13 +47,20 @@ Early in the next release cycle update the dependencies.
* `git commit -a -v -m "build: update all dependencies"`
If the `make updatedirect` upgrades the version of go in the `go.mod`
then go to manual mode. `go1.20` here is the lowest supported version
go 1.22.0
then go to manual mode. `go1.22` here is the lowest supported version
in the `go.mod`.
If `make updatedirect` added a `toolchain` directive then remove it.
We don't want to force a toolchain on our users. Linux packagers are
often using a version of Go that is a few versions out of date.
```
go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all > /tmp/potential-upgrades
go get -d $(cat /tmp/potential-upgrades)
go mod tidy -go=1.20 -compat=1.20
go mod tidy -go=1.22 -compat=1.22
```
If the `go mod tidy` fails use the output from it to remove the
@@ -86,6 +93,16 @@ build.
Once it compiles locally, push it on a test branch and commit fixes
until the tests pass.
### Major versions
The above procedure will not upgrade major versions, so v2 to v3.
However this tool can show which major versions might need to be
upgraded:
go run github.com/icholy/gomajor@latest list -major
Expect API breakage when updating major versions.
## Tidy beta
At some point after the release run
@@ -168,6 +185,8 @@ docker buildx build -t rclone/rclone:testing --progress=plain --platform linux/a
To make a full build then set the tags correctly and add `--push`
Note that you can't only build one architecture - you need to build them all.
```
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
```

View File

@@ -1 +1 @@
v1.68.0
v1.70.0

View File

@@ -23,8 +23,8 @@ func prepare(t *testing.T, root string) {
configfile.Install()
// Configure the remote
config.FileSet(remoteName, "type", "alias")
config.FileSet(remoteName, "remote", root)
config.FileSetValue(remoteName, "type", "alias")
config.FileSetValue(remoteName, "remote", root)
}
func TestNewFS(t *testing.T) {

View File

@@ -10,6 +10,7 @@ import (
_ "github.com/rclone/rclone/backend/box"
_ "github.com/rclone/rclone/backend/cache"
_ "github.com/rclone/rclone/backend/chunker"
_ "github.com/rclone/rclone/backend/cloudinary"
_ "github.com/rclone/rclone/backend/combine"
_ "github.com/rclone/rclone/backend/compress"
_ "github.com/rclone/rclone/backend/crypt"
@@ -17,13 +18,16 @@ import (
_ "github.com/rclone/rclone/backend/dropbox"
_ "github.com/rclone/rclone/backend/fichier"
_ "github.com/rclone/rclone/backend/filefabric"
_ "github.com/rclone/rclone/backend/filescom"
_ "github.com/rclone/rclone/backend/ftp"
_ "github.com/rclone/rclone/backend/gofile"
_ "github.com/rclone/rclone/backend/googlecloudstorage"
_ "github.com/rclone/rclone/backend/googlephotos"
_ "github.com/rclone/rclone/backend/hasher"
_ "github.com/rclone/rclone/backend/hdfs"
_ "github.com/rclone/rclone/backend/hidrive"
_ "github.com/rclone/rclone/backend/http"
_ "github.com/rclone/rclone/backend/iclouddrive"
_ "github.com/rclone/rclone/backend/imagekit"
_ "github.com/rclone/rclone/backend/internetarchive"
_ "github.com/rclone/rclone/backend/jottacloud"
@@ -39,6 +43,7 @@ import (
_ "github.com/rclone/rclone/backend/oracleobjectstorage"
_ "github.com/rclone/rclone/backend/pcloud"
_ "github.com/rclone/rclone/backend/pikpak"
_ "github.com/rclone/rclone/backend/pixeldrain"
_ "github.com/rclone/rclone/backend/premiumizeme"
_ "github.com/rclone/rclone/backend/protondrive"
_ "github.com/rclone/rclone/backend/putio"

File diff suppressed because it is too large

View File

@@ -3,16 +3,149 @@
package azureblob
import (
"context"
"encoding/base64"
"strings"
"testing"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func (f *Fs) InternalTest(t *testing.T) {
// Check first feature flags are set on this
// remote
func TestBlockIDCreator(t *testing.T) {
// Check creation and random number
bic, err := newBlockIDCreator()
require.NoError(t, err)
bic2, err := newBlockIDCreator()
require.NoError(t, err)
assert.NotEqual(t, bic.random, bic2.random)
assert.NotEqual(t, bic.random, [8]byte{})
// Set random to known value for tests
bic.random = [8]byte{1, 2, 3, 4, 5, 6, 7, 8}
chunkNumber := uint64(0xFEDCBA9876543210)
// Check creation of ID
want := base64.StdEncoding.EncodeToString([]byte{0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10, 1, 2, 3, 4, 5, 6, 7, 8})
assert.Equal(t, "/ty6mHZUMhABAgMEBQYHCA==", want)
got := bic.newBlockID(chunkNumber)
assert.Equal(t, want, got)
assert.Equal(t, "/ty6mHZUMhABAgMEBQYHCA==", got)
// Test checkID is working
assert.NoError(t, bic.checkID(chunkNumber, got))
assert.ErrorContains(t, bic.checkID(chunkNumber, "$"+got), "illegal base64")
assert.ErrorContains(t, bic.checkID(chunkNumber, "AAAA"+got), "bad block ID length")
assert.ErrorContains(t, bic.checkID(chunkNumber+1, got), "expecting decoded")
assert.ErrorContains(t, bic2.checkID(chunkNumber, got), "random bytes")
}
func (f *Fs) testFeatures(t *testing.T) {
// Check first feature flags are set on this remote
enabled := f.Features().SetTier
assert.True(t, enabled)
enabled = f.Features().GetTier
assert.True(t, enabled)
}
type ReadSeekCloser struct {
*strings.Reader
}
func (r *ReadSeekCloser) Close() error {
return nil
}
// Stage a block at remote but don't commit it
func (f *Fs) stageBlockWithoutCommit(ctx context.Context, t *testing.T, remote string) {
var (
containerName, blobPath = f.split(remote)
containerClient = f.cntSVC(containerName)
blobClient = containerClient.NewBlockBlobClient(blobPath)
data = "uncommitted data"
blockID = "1"
blockIDBase64 = base64.StdEncoding.EncodeToString([]byte(blockID))
)
r := &ReadSeekCloser{strings.NewReader(data)}
_, err := blobClient.StageBlock(ctx, blockIDBase64, r, nil)
require.NoError(t, err)
// Verify the block is staged but not committed
blockList, err := blobClient.GetBlockList(ctx, blockblob.BlockListTypeAll, nil)
require.NoError(t, err)
found := false
for _, block := range blockList.UncommittedBlocks {
if *block.Name == blockIDBase64 {
found = true
break
}
}
require.True(t, found, "Block ID not found in uncommitted blocks")
}
// This tests uploading a blob where it has uncommitted blocks with a different ID size.
//
// https://gauravmantri.com/2013/05/18/windows-azure-blob-storage-dealing-with-the-specified-blob-or-block-content-is-invalid-error/
//
// TestIntegration/FsMkdir/FsPutFiles/Internal/WriteUncommittedBlocks
func (f *Fs) testWriteUncommittedBlocks(t *testing.T) {
var (
ctx = context.Background()
remote = "testBlob"
)
// Multipart copy the blob please
oldUseCopyBlob, oldCopyCutoff := f.opt.UseCopyBlob, f.opt.CopyCutoff
f.opt.UseCopyBlob = false
f.opt.CopyCutoff = f.opt.ChunkSize
defer func() {
f.opt.UseCopyBlob, f.opt.CopyCutoff = oldUseCopyBlob, oldCopyCutoff
}()
// Create a blob with uncommitted blocks
f.stageBlockWithoutCommit(ctx, t, remote)
// Now attempt to overwrite the block with a different sized block ID to provoke this error
// Check the object does not exist
_, err := f.NewObject(ctx, remote)
require.Equal(t, fs.ErrorObjectNotFound, err)
// Upload a multipart file over the block with uncommitted chunks of a different ID size
size := 4*int(f.opt.ChunkSize) - 1
contents := random.String(size)
item := fstest.NewItem(remote, contents, fstest.Time("2001-05-06T04:05:06.499Z"))
o := fstests.PutTestContents(ctx, t, f, &item, contents, true)
// Check size
assert.Equal(t, int64(size), o.Size())
// Create a new blob with uncommitted blocks
newRemote := "testBlob2"
f.stageBlockWithoutCommit(ctx, t, newRemote)
// Copy over that block
dst, err := f.Copy(ctx, o, newRemote)
require.NoError(t, err)
// Check basics
assert.Equal(t, int64(size), dst.Size())
assert.Equal(t, newRemote, dst.Remote())
// Check contents
gotContents := fstests.ReadObject(ctx, t, dst, -1)
assert.Equal(t, contents, gotContents)
// Remove the object
require.NoError(t, dst.Remove(ctx))
}
func (f *Fs) InternalTest(t *testing.T) {
t.Run("Features", f.testFeatures)
t.Run("WriteUncommittedBlocks", f.testWriteUncommittedBlocks)
}

View File

@@ -15,13 +15,17 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
name := "TestAzureBlob"
fstests.Run(t, &fstests.Opt{
RemoteName: "TestAzureBlob:",
RemoteName: name + ":",
NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool", "Cold"},
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: defaultChunkSize,
},
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "use_copy_blob", Value: "false"},
},
})
}
@@ -40,6 +44,7 @@ func TestIntegration2(t *testing.T) {
},
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "directory_markers", Value: "true"},
{Name: name, Key: "use_copy_blob", Value: "false"},
},
})
}
@@ -48,8 +53,13 @@ func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetCopyCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setCopyCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetCopyCutoffer = (*Fs)(nil)
)
func TestValidateAccessTier(t *testing.T) {

View File

@@ -237,6 +237,30 @@ msi_client_id, or msi_mi_res_id parameters.`,
Help: "Azure resource ID of the user-assigned MSI to use, if any.\n\nLeave blank if msi_client_id or msi_object_id specified.",
Advanced: true,
Sensitive: true,
}, {
Name: "disable_instance_discovery",
Help: `Skip requesting Microsoft Entra instance metadata
This should be set true only by applications authenticating in
disconnected clouds, or private clouds such as Azure Stack.
It determines whether rclone requests Microsoft Entra instance
metadata from ` + "`https://login.microsoft.com/`" + ` before
authenticating.
Setting this to true will skip this request, making you responsible
for ensuring the configured authority is valid and trustworthy.
`,
Default: false,
Advanced: true,
}, {
Name: "use_az",
Help: `Use Azure CLI tool az for authentication
Set to use the [Azure CLI tool az](https://learn.microsoft.com/en-us/cli/azure/)
as the sole means of authentication.
Setting this can be useful if you wish to use the az CLI on a host with
a System Managed Identity that you do not want to use.
Don't set env_auth at the same time.
`,
Default: false,
Advanced: true,
}, {
Name: "endpoint",
Help: "Endpoint for the service.\n\nLeave blank normally.",
@@ -319,10 +343,12 @@ type Options struct {
Username string `config:"username"`
Password string `config:"password"`
ServicePrincipalFile string `config:"service_principal_file"`
DisableInstanceDiscovery bool `config:"disable_instance_discovery"`
UseMSI bool `config:"use_msi"`
MSIObjectID string `config:"msi_object_id"`
MSIClientID string `config:"msi_client_id"`
MSIResourceID string `config:"msi_mi_res_id"`
UseAZ bool `config:"use_az"`
Endpoint string `config:"endpoint"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
MaxStreamSize fs.SizeSuffix `config:"max_stream_size"`
@@ -393,8 +419,10 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
policyClientOptions := policy.ClientOptions{
Transport: newTransporter(ctx),
}
backup := service.ShareTokenIntentBackup
clientOpt := service.ClientOptions{
ClientOptions: policyClientOptions,
ClientOptions: policyClientOptions,
FileRequestIntent: &backup,
}
// Here we auth by setting one of cred, sharedKeyCred or f.client
@@ -412,7 +440,8 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
}
// Read credentials from the environment
options := azidentity.DefaultAzureCredentialOptions{
ClientOptions: policyClientOptions,
ClientOptions: policyClientOptions,
DisableInstanceDiscovery: opt.DisableInstanceDiscovery,
}
cred, err = azidentity.NewDefaultAzureCredential(&options)
if err != nil {
@@ -423,6 +452,13 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
if err != nil {
return nil, fmt.Errorf("create new shared key credential failed: %w", err)
}
case opt.UseAZ:
var options = azidentity.AzureCLICredentialOptions{}
cred, err = azidentity.NewAzureCLICredential(&options)
fmt.Println(cred)
if err != nil {
return nil, fmt.Errorf("failed to create Azure CLI credentials: %w", err)
}
case opt.SASURL != "":
client, err = service.NewClientWithNoCredential(opt.SASURL, &clientOpt)
if err != nil {
@@ -897,7 +933,7 @@ func (o *Object) getMetadata(ctx context.Context) error {
// Hash returns the MD5 of an object returning a lowercase hex string
//
// May make a network request becaue the [fs.List] method does not
// May make a network request because the [fs.List] method does not
// return MD5 hashes for DirEntry
func (o *Object) Hash(ctx context.Context, ty hash.Type) (string, error) {
if ty != hash.MD5 {
@@ -1035,12 +1071,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if _, createErr := fc.Create(ctx, size, nil); createErr != nil {
return fmt.Errorf("update: unable to create file: %w", createErr)
}
} else {
} else if size != o.Size() {
// Resize the file if needed
if size != o.Size() {
if _, resizeErr := fc.Resize(ctx, size, nil); resizeErr != nil {
return fmt.Errorf("update: unable to resize while trying to update: %w ", resizeErr)
}
if _, resizeErr := fc.Resize(ctx, size, nil); resizeErr != nil {
return fmt.Errorf("update: unable to resize while trying to update: %w ", resizeErr)
}
}

View File

@@ -42,9 +42,10 @@ type Bucket struct {
// LifecycleRule is a single lifecycle rule
type LifecycleRule struct {
DaysFromHidingToDeleting *int `json:"daysFromHidingToDeleting"`
DaysFromUploadingToHiding *int `json:"daysFromUploadingToHiding"`
FileNamePrefix string `json:"fileNamePrefix"`
DaysFromHidingToDeleting *int `json:"daysFromHidingToDeleting"`
DaysFromUploadingToHiding *int `json:"daysFromUploadingToHiding"`
DaysFromStartingToCancelingUnfinishedLargeFiles *int `json:"daysFromStartingToCancelingUnfinishedLargeFiles"`
FileNamePrefix string `json:"fileNamePrefix"`
}
// Timestamp is a UTC time when this file was uploaded. It is a base

View File

@@ -42,11 +42,11 @@ func TestTimestampIsZero(t *testing.T) {
}
func TestTimestampEqual(t *testing.T) {
assert.False(t, emptyT.Equal(emptyT))
assert.False(t, emptyT.Equal(emptyT)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
assert.False(t, t0.Equal(emptyT))
assert.False(t, emptyT.Equal(t0))
assert.False(t, t0.Equal(t1))
assert.False(t, t1.Equal(t0))
assert.True(t, t0.Equal(t0))
assert.True(t, t1.Equal(t1))
assert.True(t, t0.Equal(t0)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
assert.True(t, t1.Equal(t1)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
}

View File

@@ -30,6 +30,7 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/encoder"
@@ -1317,16 +1318,22 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool, deleteHidden b
// Check current version of the file
if deleteHidden && object.Action == "hide" {
fs.Debugf(remote, "Deleting current version (id %q) as it is a hide marker", object.ID)
toBeDeleted <- object
if !operations.SkipDestructive(ctx, object.Name, "remove hide marker") {
toBeDeleted <- object
}
} else if deleteUnfinished && object.Action == "start" && isUnfinishedUploadStale(object.UploadTimestamp) {
fs.Debugf(remote, "Deleting current version (id %q) as it is a start marker (upload started at %s)", object.ID, time.Time(object.UploadTimestamp).Local())
toBeDeleted <- object
if !operations.SkipDestructive(ctx, object.Name, "remove pending upload") {
toBeDeleted <- object
}
} else {
fs.Debugf(remote, "Not deleting current version (id %q) %q dated %v (%v ago)", object.ID, object.Action, time.Time(object.UploadTimestamp).Local(), time.Since(time.Time(object.UploadTimestamp)))
}
} else {
fs.Debugf(remote, "Deleting (id %q)", object.ID)
toBeDeleted <- object
if !operations.SkipDestructive(ctx, object.Name, "delete") {
toBeDeleted <- object
}
}
last = remote
tr.Done(ctx, nil)
@@ -1593,7 +1600,11 @@ func (o *Object) decodeMetaDataRaw(ID, SHA1 string, Size int64, UploadTimestamp
o.size = Size
// Use the UploadTimestamp if can't get file info
o.modTime = time.Time(UploadTimestamp)
return o.parseTimeString(Info[timeKey])
err = o.parseTimeString(Info[timeKey])
if err != nil {
return err
}
return nil
}
// decodeMetaData sets the metadata in the object from an api.File
@@ -1695,6 +1706,16 @@ func timeString(modTime time.Time) string {
return strconv.FormatInt(modTime.UnixNano()/1e6, 10)
}
// parseTimeStringHelper converts a decimal string number of milliseconds
// elapsed since January 1, 1970 UTC into a time.Time
func parseTimeStringHelper(timeString string) (time.Time, error) {
unixMilliseconds, err := strconv.ParseInt(timeString, 10, 64)
if err != nil {
return time.Time{}, err
}
return time.Unix(unixMilliseconds/1e3, (unixMilliseconds%1e3)*1e6).UTC(), nil
}
// parseTimeString converts a decimal string number of milliseconds
// elapsed since January 1, 1970 UTC into a time.Time and stores it in
// the modTime variable.
@@ -1702,12 +1723,12 @@ func (o *Object) parseTimeString(timeString string) (err error) {
if timeString == "" {
return nil
}
unixMilliseconds, err := strconv.ParseInt(timeString, 10, 64)
modTime, err := parseTimeStringHelper(timeString)
if err != nil {
fs.Debugf(o, "Failed to parse mod time string %q: %v", timeString, err)
return nil
}
o.modTime = time.Unix(unixMilliseconds/1e3, (unixMilliseconds%1e3)*1e6).UTC()
o.modTime = modTime
return nil
}
@@ -1861,6 +1882,7 @@ func (o *Object) getOrHead(ctx context.Context, method string, options []fs.Open
ContentType: resp.Header.Get("Content-Type"),
Info: Info,
}
// When reading files from B2 via cloudflare using
// --b2-download-url cloudflare strips the Content-Length
// headers (presumably so it can inject stuff) so use the old
@@ -1958,7 +1980,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err == nil {
fs.Debugf(o, "File is big enough for chunked streaming")
up, err := o.fs.newLargeUpload(ctx, o, in, src, o.fs.opt.ChunkSize, false, nil)
up, err := o.fs.newLargeUpload(ctx, o, in, src, o.fs.opt.ChunkSize, false, nil, options...)
if err != nil {
o.fs.putRW(rw)
return err
@@ -1990,7 +2012,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return o.decodeMetaDataFileInfo(up.info)
}
modTime := src.ModTime(ctx)
modTime, err := o.getModTime(ctx, src, options)
if err != nil {
return err
}
calculatedSha1, _ := src.Hash(ctx, hash.SHA1)
if calculatedSha1 == "" {
@@ -2095,6 +2120,36 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return o.decodeMetaDataFileInfo(&response)
}
// Get modTime from the source; if --metadata is set, fetch the src metadata and get it from there.
// When metadata support is added to b2, this method will need a more generic name
func (o *Object) getModTime(ctx context.Context, src fs.ObjectInfo, options []fs.OpenOption) (time.Time, error) {
modTime := src.ModTime(ctx)
// Fetch metadata if --metadata is in use
meta, err := fs.GetMetadataOptions(ctx, o.fs, src, options)
if err != nil {
return time.Time{}, fmt.Errorf("failed to read metadata from source object: %w", err)
}
// merge metadata into request and user metadata
for k, v := range meta {
k = strings.ToLower(k)
// For now, the only metadata we're concerned with is "mtime"
switch k {
case "mtime":
// mtime in meta overrides source ModTime
metaModTime, err := time.Parse(time.RFC3339Nano, v)
if err != nil {
fs.Debugf(o, "failed to parse metadata %s: %q: %v", k, v, err)
} else {
modTime = metaModTime
}
default:
// Do nothing for now
}
}
return modTime, nil
}
// OpenChunkWriter returns the chunk size and a ChunkWriter
//
// Pass in the remote and the src object
@@ -2126,7 +2181,7 @@ func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectIn
Concurrency: o.fs.opt.UploadConcurrency,
//LeavePartsOnError: o.fs.opt.LeavePartsOnError,
}
up, err := f.newLargeUpload(ctx, o, nil, src, f.opt.ChunkSize, false, nil)
up, err := f.newLargeUpload(ctx, o, nil, src, f.opt.ChunkSize, false, nil, options...)
return info, up, err
}
@@ -2172,6 +2227,7 @@ This will dump something like this showing the lifecycle rules.
{
"daysFromHidingToDeleting": 1,
"daysFromUploadingToHiding": null,
"daysFromStartingToCancelingUnfinishedLargeFiles": null,
"fileNamePrefix": ""
}
]
@@ -2198,8 +2254,9 @@ overwrites will still cause versions to be made.
See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
`,
Opts: map[string]string{
"daysFromHidingToDeleting": "After a file has been hidden for this many days it is deleted. 0 is off.",
"daysFromUploadingToHiding": "This many days after uploading a file is hidden",
"daysFromHidingToDeleting": "After a file has been hidden for this many days it is deleted. 0 is off.",
"daysFromUploadingToHiding": "This many days after uploading a file is hidden",
"daysFromStartingToCancelingUnfinishedLargeFiles": "Cancels any unfinished large file versions after this many days",
},
}
@@ -2219,14 +2276,23 @@ func (f *Fs) lifecycleCommand(ctx context.Context, name string, arg []string, op
}
newRule.DaysFromUploadingToHiding = &days
}
if daysStr := opt["daysFromStartingToCancelingUnfinishedLargeFiles"]; daysStr != "" {
days, err := strconv.Atoi(daysStr)
if err != nil {
return nil, fmt.Errorf("bad daysFromStartingToCancelingUnfinishedLargeFiles: %w", err)
}
newRule.DaysFromStartingToCancelingUnfinishedLargeFiles = &days
}
bucketName, _ := f.split("")
if bucketName == "" {
return nil, errors.New("bucket required")
}
skip := operations.SkipDestructive(ctx, name, "update lifecycle rules")
var bucket *api.Bucket
if newRule.DaysFromHidingToDeleting != nil || newRule.DaysFromUploadingToHiding != nil {
if !skip && (newRule.DaysFromHidingToDeleting != nil || newRule.DaysFromUploadingToHiding != nil || newRule.DaysFromStartingToCancelingUnfinishedLargeFiles != nil) {
bucketID, err := f.getBucketID(ctx, bucketName)
if err != nil {
return nil, err

View File

@@ -5,6 +5,7 @@ import (
"crypto/sha1"
"fmt"
"path"
"sort"
"strings"
"testing"
"time"
@@ -13,6 +14,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/bucket"
@@ -184,57 +186,120 @@ func TestParseTimeString(t *testing.T) {
}
// This is adapted from the s3 equivalent.
func (f *Fs) InternalTestMetadata(t *testing.T) {
ctx := context.Background()
original := random.String(1000)
contents := fstest.Gz(t, original)
mimeType := "text/html"
item := fstest.NewItem("test-metadata", contents, fstest.Time("2001-05-06T04:05:06.499Z"))
btime := time.Now()
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, contents, true, mimeType, nil)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()
o := obj.(*Object)
gotMetadata, err := o.getMetaData(ctx)
require.NoError(t, err)
// We currently have a limited amount of metadata to test with B2
assert.Equal(t, mimeType, gotMetadata.ContentType, "Content-Type")
// Modification time from the x-bz-info-src_last_modified_millis header
var mtime api.Timestamp
err = mtime.UnmarshalJSON([]byte(gotMetadata.Info[timeKey]))
if err != nil {
fs.Debugf(o, "Bad "+timeHeader+" header: %v", err)
// Return a map of the headers in the options with keys stripped of the "x-bz-info-" prefix
func OpenOptionToMetaData(options []fs.OpenOption) map[string]string {
var headers = make(map[string]string)
for _, option := range options {
k, v := option.Header()
k = strings.ToLower(k)
if strings.HasPrefix(k, headerPrefix) {
headers[k[len(headerPrefix):]] = v
}
}
assert.Equal(t, item.ModTime, time.Time(mtime), "Modification time")
// Upload time
gotBtime := time.Time(gotMetadata.UploadTimestamp)
dt := gotBtime.Sub(btime)
assert.True(t, dt < time.Minute && dt > -time.Minute, fmt.Sprintf("btime more than 1 minute out want %v got %v delta %v", btime, gotBtime, dt))
return headers
}
t.Run("GzipEncoding", func(t *testing.T) {
// Test that the gzipped file we uploaded can be
// downloaded
checkDownload := func(wantContents string, wantSize int64, wantHash string) {
gotContents := fstests.ReadObject(ctx, t, o, -1)
assert.Equal(t, wantContents, gotContents)
assert.Equal(t, wantSize, o.Size())
gotHash, err := o.Hash(ctx, hash.SHA1)
func (f *Fs) internalTestMetadata(t *testing.T, size string, uploadCutoff string, chunkSize string) {
what := fmt.Sprintf("Size%s/UploadCutoff%s/ChunkSize%s", size, uploadCutoff, chunkSize)
t.Run(what, func(t *testing.T) {
ctx := context.Background()
ss := fs.SizeSuffix(0)
err := ss.Set(size)
require.NoError(t, err)
original := random.String(int(ss))
contents := fstest.Gz(t, original)
mimeType := "text/html"
if chunkSize != "" {
ss := fs.SizeSuffix(0)
err := ss.Set(chunkSize)
require.NoError(t, err)
_, err = f.SetUploadChunkSize(ss)
require.NoError(t, err)
assert.Equal(t, wantHash, gotHash)
}
t.Run("NoDecompress", func(t *testing.T) {
checkDownload(contents, int64(len(contents)), sha1Sum(t, contents))
if uploadCutoff != "" {
ss := fs.SizeSuffix(0)
err := ss.Set(uploadCutoff)
require.NoError(t, err)
_, err = f.SetUploadCutoff(ss)
require.NoError(t, err)
}
item := fstest.NewItem("test-metadata", contents, fstest.Time("2001-05-06T04:05:06.499Z"))
btime := time.Now()
metadata := fs.Metadata{
// Just mtime for now - limit to milliseconds since x-bz-info-src_last_modified_millis can't support any
"mtime": "2009-05-06T04:05:06.499Z",
}
// Need to specify HTTP options with the header prefix since they are passed as-is
options := []fs.OpenOption{
&fs.HTTPOption{Key: "X-Bz-Info-a", Value: "1"},
&fs.HTTPOption{Key: "X-Bz-Info-b", Value: "2"},
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, mimeType, metadata, options...)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()
o := obj.(*Object)
gotMetadata, err := o.getMetaData(ctx)
require.NoError(t, err)
// X-Bz-Info-a & X-Bz-Info-b
optMetadata := OpenOptionToMetaData(options)
for k, v := range optMetadata {
got := gotMetadata.Info[k]
assert.Equal(t, v, got, k)
}
assert.Equal(t, mimeType, gotMetadata.ContentType, "Content-Type")
// Modification time from the x-bz-info-src_last_modified_millis header
var mtime api.Timestamp
err = mtime.UnmarshalJSON([]byte(gotMetadata.Info[timeKey]))
if err != nil {
fs.Debugf(o, "Bad "+timeHeader+" header: %v", err)
}
assert.Equal(t, item.ModTime, time.Time(mtime), "Modification time")
// Upload time
gotBtime := time.Time(gotMetadata.UploadTimestamp)
dt := gotBtime.Sub(btime)
assert.True(t, dt < time.Minute && dt > -time.Minute, fmt.Sprintf("btime more than 1 minute out want %v got %v delta %v", btime, gotBtime, dt))
t.Run("GzipEncoding", func(t *testing.T) {
// Test that the gzipped file we uploaded can be
// downloaded
checkDownload := func(wantContents string, wantSize int64, wantHash string) {
gotContents := fstests.ReadObject(ctx, t, o, -1)
assert.Equal(t, wantContents, gotContents)
assert.Equal(t, wantSize, o.Size())
gotHash, err := o.Hash(ctx, hash.SHA1)
require.NoError(t, err)
assert.Equal(t, wantHash, gotHash)
}
t.Run("NoDecompress", func(t *testing.T) {
checkDownload(contents, int64(len(contents)), sha1Sum(t, contents))
})
})
})
}
func (f *Fs) InternalTestMetadata(t *testing.T) {
// 1 kB regular file
f.internalTestMetadata(t, "1kiB", "", "")
// 10 MiB large file
f.internalTestMetadata(t, "10MiB", "6MiB", "6MiB")
}
func sha1Sum(t *testing.T, s string) string {
hash := sha1.Sum([]byte(s))
return fmt.Sprintf("%x", hash)
@@ -394,24 +459,161 @@ func (f *Fs) InternalTestVersions(t *testing.T) {
})
t.Run("Cleanup", func(t *testing.T) {
require.NoError(t, f.cleanUp(ctx, true, false, 0))
items := append([]fstest.Item{newItem}, fstests.InternalTestFiles...)
fstest.CheckListing(t, f, items)
// Set --b2-versions for this test
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
fstest.CheckListing(t, f, items)
t.Run("DryRun", func(t *testing.T) {
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
// Listing should be unchanged after dry run
before := listAllFiles(ctx, t, f, dirName)
ctx, ci := fs.AddConfig(ctx)
ci.DryRun = true
require.NoError(t, f.cleanUp(ctx, true, false, 0))
after := listAllFiles(ctx, t, f, dirName)
assert.Equal(t, before, after)
})
t.Run("RealThing", func(t *testing.T) {
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
// Listing should reflect current state after cleanup
require.NoError(t, f.cleanUp(ctx, true, false, 0))
items := append([]fstest.Item{newItem}, fstests.InternalTestFiles...)
fstest.CheckListing(t, f, items)
})
})
// Purge gets tested later
}
func (f *Fs) InternalTestCleanupUnfinished(t *testing.T) {
ctx := context.Background()
// B2CleanupHidden tests cleaning up hidden files
t.Run("CleanupUnfinished", func(t *testing.T) {
dirName := "unfinished"
fileCount := 5
expectedFiles := []string{}
for i := 1; i < fileCount; i++ {
fileName := fmt.Sprintf("%s/unfinished-%d", dirName, i)
expectedFiles = append(expectedFiles, fileName)
obj := &Object{
fs: f,
remote: fileName,
}
objInfo := object.NewStaticObjectInfo(fileName, fstest.Time("2002-02-03T04:05:06.499999999Z"), -1, true, nil, nil)
_, err := f.newLargeUpload(ctx, obj, nil, objInfo, f.opt.ChunkSize, false, nil)
require.NoError(t, err)
}
checkListing(ctx, t, f, dirName, expectedFiles)
t.Run("DryRun", func(t *testing.T) {
// Listing should not change after dry run
ctx, ci := fs.AddConfig(ctx)
ci.DryRun = true
require.NoError(t, f.cleanUp(ctx, false, true, 0))
checkListing(ctx, t, f, dirName, expectedFiles)
})
t.Run("RealThing", func(t *testing.T) {
// Listing should be empty after real cleanup
require.NoError(t, f.cleanUp(ctx, false, true, 0))
checkListing(ctx, t, f, dirName, []string{})
})
})
}
func listAllFiles(ctx context.Context, t *testing.T, f *Fs, dirName string) []string {
bucket, directory := f.split(dirName)
foundFiles := []string{}
require.NoError(t, f.list(ctx, bucket, directory, "", false, true, 0, true, false, func(remote string, object *api.File, isDirectory bool) error {
if !isDirectory {
foundFiles = append(foundFiles, object.Name)
}
return nil
}))
sort.Strings(foundFiles)
return foundFiles
}
func checkListing(ctx context.Context, t *testing.T, f *Fs, dirName string, expectedFiles []string) {
foundFiles := listAllFiles(ctx, t, f, dirName)
sort.Strings(expectedFiles)
assert.Equal(t, expectedFiles, foundFiles)
}
func (f *Fs) InternalTestLifecycleRules(t *testing.T) {
ctx := context.Background()
opt := map[string]string{}
t.Run("InitState", func(t *testing.T) {
// There should be no lifecycle rules at the outset
lifecycleRulesIf, err := f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules := lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 0, len(lifecycleRules))
})
t.Run("DryRun", func(t *testing.T) {
// There should still be no lifecycle rules after each dry run operation
ctx, ci := fs.AddConfig(ctx)
ci.DryRun = true
opt["daysFromHidingToDeleting"] = "30"
lifecycleRulesIf, err := f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules := lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 0, len(lifecycleRules))
delete(opt, "daysFromHidingToDeleting")
opt["daysFromUploadingToHiding"] = "40"
lifecycleRulesIf, err = f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules = lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 0, len(lifecycleRules))
opt["daysFromHidingToDeleting"] = "30"
lifecycleRulesIf, err = f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules = lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 0, len(lifecycleRules))
})
t.Run("RealThing", func(t *testing.T) {
opt["daysFromHidingToDeleting"] = "30"
lifecycleRulesIf, err := f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules := lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 1, len(lifecycleRules))
assert.Equal(t, 30, *lifecycleRules[0].DaysFromHidingToDeleting)
delete(opt, "daysFromHidingToDeleting")
opt["daysFromUploadingToHiding"] = "40"
lifecycleRulesIf, err = f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules = lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 1, len(lifecycleRules))
assert.Equal(t, 40, *lifecycleRules[0].DaysFromUploadingToHiding)
opt["daysFromHidingToDeleting"] = "30"
lifecycleRulesIf, err = f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules = lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 1, len(lifecycleRules))
assert.Equal(t, 30, *lifecycleRules[0].DaysFromHidingToDeleting)
assert.Equal(t, 40, *lifecycleRules[0].DaysFromUploadingToHiding)
})
}
// -run TestIntegration/FsMkdir/FsPutFiles/Internal
func (f *Fs) InternalTest(t *testing.T) {
t.Run("Metadata", f.InternalTestMetadata)
t.Run("Versions", f.InternalTestVersions)
t.Run("CleanupUnfinished", f.InternalTestCleanupUnfinished)
t.Run("LifecycleRules", f.InternalTestLifecycleRules)
}
var _ fstests.InternalTester = (*Fs)(nil)

View File

@@ -91,7 +91,7 @@ type largeUpload struct {
// newLargeUpload starts an upload of object o from in with metadata in src
//
// If newInfo is set then metadata from that will be used instead of reading it from src
func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, defaultChunkSize fs.SizeSuffix, doCopy bool, newInfo *api.File) (up *largeUpload, err error) {
func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, defaultChunkSize fs.SizeSuffix, doCopy bool, newInfo *api.File, options ...fs.OpenOption) (up *largeUpload, err error) {
size := src.Size()
parts := 0
chunkSize := defaultChunkSize
@@ -104,11 +104,6 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
parts++
}
}
opts := rest.Opts{
Method: "POST",
Path: "/b2_start_large_file",
}
bucket, bucketPath := o.split()
bucketID, err := f.getBucketID(ctx, bucket)
if err != nil {
@@ -118,12 +113,27 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
BucketID: bucketID,
Name: f.opt.Enc.FromStandardPath(bucketPath),
}
optionsToSend := make([]fs.OpenOption, 0, len(options))
if newInfo == nil {
modTime := src.ModTime(ctx)
modTime, err := o.getModTime(ctx, src, options)
if err != nil {
return nil, err
}
request.ContentType = fs.MimeType(ctx, src)
request.Info = map[string]string{
timeKey: timeString(modTime),
}
// Custom upload headers - remove header prefix since they are sent in the body
for _, option := range options {
k, v := option.Header()
k = strings.ToLower(k)
if strings.HasPrefix(k, headerPrefix) {
request.Info[k[len(headerPrefix):]] = v
} else {
optionsToSend = append(optionsToSend, option)
}
}
// Set the SHA1 if known
if !o.fs.opt.DisableCheckSum || doCopy {
if calculatedSha1, err := src.Hash(ctx, hash.SHA1); err == nil && calculatedSha1 != "" {
@@ -134,6 +144,11 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
request.ContentType = newInfo.ContentType
request.Info = newInfo.Info
}
opts := rest.Opts{
Method: "POST",
Path: "/b2_start_large_file",
Options: optionsToSend,
}
var response api.StartLargeFileResponse
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
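The hunk above shows newLargeUpload splitting the incoming fs.OpenOptions: anything whose header key starts with the b2 info prefix is moved into request.Info (and so travels in the b2_start_large_file body), while everything else is kept as a real HTTP option. A minimal standalone sketch of that split, assuming headerPrefix is "x-bz-info-" as in the b2 backend (values and header names here are illustrative, not from this change):

package main

import (
	"fmt"
	"strings"
)

// headerPrefix mirrors the constant assumed to be used by the b2 backend.
const headerPrefix = "x-bz-info-"

func main() {
	// Incoming upload options as (header key, value) pairs.
	headers := [][2]string{
		{"X-Bz-Info-a", "1"},
		{"X-Bz-Info-b", "2"},
		{"Cache-Control", "no-cache"},
	}
	info := map[string]string{} // goes into the b2_start_large_file request body
	var passThrough [][2]string // stays as plain HTTP headers
	for _, h := range headers {
		k := strings.ToLower(h[0])
		if strings.HasPrefix(k, headerPrefix) {
			info[k[len(headerPrefix):]] = h[1]
		} else {
			passThrough = append(passThrough, h)
		}
	}
	fmt.Println(info)        // map[a:1 b:2]
	fmt.Println(passThrough) // [[Cache-Control no-cache]]
}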

View File

@@ -43,9 +43,9 @@ import (
"github.com/rclone/rclone/lib/jwtutil"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/rest"
"github.com/youmark/pkcs8"
"golang.org/x/oauth2"
)
const (
@@ -64,12 +64,10 @@ const (
// Globals
var (
// Description of how to auth for this app
oauthConfig = &oauth2.Config{
Scopes: nil,
Endpoint: oauth2.Endpoint{
AuthURL: "https://app.box.com/api/oauth2/authorize",
TokenURL: "https://app.box.com/api/oauth2/token",
},
oauthConfig = &oauthutil.Config{
Scopes: nil,
AuthURL: "https://app.box.com/api/oauth2/authorize",
TokenURL: "https://app.box.com/api/oauth2/token",
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL,
@@ -256,8 +254,10 @@ func getQueryParams(boxConfig *api.ConfigJSON) map[string]string {
}
func getDecryptedPrivateKey(boxConfig *api.ConfigJSON) (key *rsa.PrivateKey, err error) {
block, rest := pem.Decode([]byte(boxConfig.BoxAppSettings.AppAuth.PrivateKey))
if block == nil {
return nil, errors.New("box: failed to PEM decode private key")
}
if len(rest) > 0 {
return nil, fmt.Errorf("box: extra data included in private key: %w", err)
}
@@ -619,7 +619,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
return shouldRetry(ctx, resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
// fmt.Printf("...Error %v\n", err)
return "", err
}
// fmt.Printf("...Id %q\n", *info.Id)
@@ -966,6 +966,26 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
// check if dest already exists
item, err := f.preUploadCheck(ctx, leaf, directoryID, src.Size())
if err != nil {
return nil, err
}
if item != nil { // dest already exists, need to copy to temp name and then move
tempSuffix := "-rclone-copy-" + random.String(8)
fs.Debugf(remote, "dst already exists, copying to temp name %v", remote+tempSuffix)
tempObj, err := f.Copy(ctx, src, remote+tempSuffix)
if err != nil {
return nil, err
}
fs.Debugf(remote+tempSuffix, "moving to real name %v", remote)
err = f.deleteObject(ctx, item.ID)
if err != nil {
return nil, err
}
return f.Move(ctx, tempObj, remote)
}
// Copy the object
opts := rest.Opts{
Method: "POST",

View File

@@ -409,18 +409,16 @@ func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.F
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
} else {
if opt.PlexPassword != "" && opt.PlexUsername != "" {
decPass, err := obscure.Reveal(opt.PlexPassword)
if err != nil {
decPass = opt.PlexPassword
}
f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, opt.PlexInsecure, func(token string) {
m.Set("plex_token", token)
})
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
} else if opt.PlexPassword != "" && opt.PlexUsername != "" {
decPass, err := obscure.Reveal(opt.PlexPassword)
if err != nil {
decPass = opt.PlexPassword
}
f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, opt.PlexInsecure, func(token string) {
m.Set("plex_token", token)
})
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
}
}

View File

@@ -10,7 +10,6 @@ import (
goflag "flag"
"fmt"
"io"
"log"
"math/rand"
"os"
"path"
@@ -33,7 +32,7 @@ import (
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/testy"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/stretchr/testify/require"
)
@@ -93,7 +92,7 @@ func TestMain(m *testing.M) {
goflag.Parse()
var rc int
log.Printf("Running with the following params: \n remote: %v", remoteName)
fs.Logf(nil, "Running with the following params: \n remote: %v", remoteName)
runInstance = newRun()
rc = m.Run()
os.Exit(rc)
@@ -123,10 +122,10 @@ func TestInternalListRootAndInnerRemotes(t *testing.T) {
/* TODO: is this testing something?
func TestInternalVfsCache(t *testing.T) {
vfsflags.Opt.DirCacheTime = time.Second * 30
vfscommon.Opt.DirCacheTime = time.Second * 30
testSize := int64(524288000)
vfsflags.Opt.CacheMode = vfs.CacheModeWrites
vfscommon.Opt.CacheMode = vfs.CacheModeWrites
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"writes": "true", "info_age": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
@@ -338,7 +337,7 @@ func TestInternalCachedUpdatedContentMatches(t *testing.T) {
func TestInternalWrappedWrittenContentMatches(t *testing.T) {
id := fmt.Sprintf("tiwwcm%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
if runInstance.rootIsCrypt {
t.Skip("test skipped with crypt remote")
@@ -368,7 +367,7 @@ func TestInternalWrappedWrittenContentMatches(t *testing.T) {
func TestInternalLargeWrittenContentMatches(t *testing.T) {
id := fmt.Sprintf("tilwcm%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
if runInstance.rootIsCrypt {
t.Skip("test skipped with crypt remote")
@@ -408,7 +407,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
// update in the wrapped fs
originalSize, err := runInstance.size(t, rootFs, "data.bin")
require.NoError(t, err)
log.Printf("original size: %v", originalSize)
fs.Logf(nil, "original size: %v", originalSize)
o, err := cfs.UnWrap().NewObject(context.Background(), runInstance.encryptRemoteIfNeeded(t, "data.bin"))
require.NoError(t, err)
@@ -417,7 +416,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
if runInstance.rootIsCrypt {
data2, err = base64.StdEncoding.DecodeString(cryptedText3Base64)
require.NoError(t, err)
expectedSize = expectedSize + 1 // FIXME newline gets in, likely test data issue
expectedSize++ // FIXME newline gets in, likely test data issue
} else {
data2 = []byte("test content")
}
@@ -425,7 +424,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
err = o.Update(context.Background(), bytes.NewReader(data2), objInfo)
require.NoError(t, err)
require.Equal(t, int64(len(data2)), o.Size())
log.Printf("updated size: %v", len(data2))
fs.Logf(nil, "updated size: %v", len(data2))
// get a new instance from the cache
if runInstance.wrappedIsExternal {
@@ -485,49 +484,49 @@ func TestInternalMoveWithNotify(t *testing.T) {
err = runInstance.retryBlock(func() error {
li, err := runInstance.list(t, rootFs, "test")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 2 {
log.Printf("not expected listing /test: %v", li)
fs.Logf(nil, "not expected listing /test: %v", li)
return fmt.Errorf("not expected listing /test: %v", li)
}
li, err = runInstance.list(t, rootFs, "test/one")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 0 {
log.Printf("not expected listing /test/one: %v", li)
fs.Logf(nil, "not expected listing /test/one: %v", li)
return fmt.Errorf("not expected listing /test/one: %v", li)
}
li, err = runInstance.list(t, rootFs, "test/second")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 1 {
log.Printf("not expected listing /test/second: %v", li)
fs.Logf(nil, "not expected listing /test/second: %v", li)
return fmt.Errorf("not expected listing /test/second: %v", li)
}
if fi, ok := li[0].(os.FileInfo); ok {
if fi.Name() != "data.bin" {
log.Printf("not expected name: %v", fi.Name())
fs.Logf(nil, "not expected name: %v", fi.Name())
return fmt.Errorf("not expected name: %v", fi.Name())
}
} else if di, ok := li[0].(fs.DirEntry); ok {
if di.Remote() != "test/second/data.bin" {
log.Printf("not expected remote: %v", di.Remote())
fs.Logf(nil, "not expected remote: %v", di.Remote())
return fmt.Errorf("not expected remote: %v", di.Remote())
}
} else {
log.Printf("unexpected listing: %v", li)
fs.Logf(nil, "unexpected listing: %v", li)
return fmt.Errorf("unexpected listing: %v", li)
}
log.Printf("complete listing: %v", li)
fs.Logf(nil, "complete listing: %v", li)
return nil
}, 12, time.Second*10)
require.NoError(t, err)
@@ -577,43 +576,43 @@ func TestInternalNotifyCreatesEmptyParts(t *testing.T) {
err = runInstance.retryBlock(func() error {
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test")))
if !found {
log.Printf("not found /test")
fs.Logf(nil, "not found /test")
return fmt.Errorf("not found /test")
}
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one")))
if !found {
log.Printf("not found /test/one")
fs.Logf(nil, "not found /test/one")
return fmt.Errorf("not found /test/one")
}
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one"), runInstance.encryptRemoteIfNeeded(t, "test2")))
if !found {
log.Printf("not found /test/one/test2")
fs.Logf(nil, "not found /test/one/test2")
return fmt.Errorf("not found /test/one/test2")
}
li, err := runInstance.list(t, rootFs, "test/one")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 1 {
log.Printf("not expected listing /test/one: %v", li)
fs.Logf(nil, "not expected listing /test/one: %v", li)
return fmt.Errorf("not expected listing /test/one: %v", li)
}
if fi, ok := li[0].(os.FileInfo); ok {
if fi.Name() != "test2" {
log.Printf("not expected name: %v", fi.Name())
fs.Logf(nil, "not expected name: %v", fi.Name())
return fmt.Errorf("not expected name: %v", fi.Name())
}
} else if di, ok := li[0].(fs.DirEntry); ok {
if di.Remote() != "test/one/test2" {
log.Printf("not expected remote: %v", di.Remote())
fs.Logf(nil, "not expected remote: %v", di.Remote())
return fmt.Errorf("not expected remote: %v", di.Remote())
}
} else {
log.Printf("unexpected listing: %v", li)
fs.Logf(nil, "unexpected listing: %v", li)
return fmt.Errorf("unexpected listing: %v", li)
}
log.Printf("complete listing /test/one/test2")
fs.Logf(nil, "complete listing /test/one/test2")
return nil
}, 12, time.Second*10)
require.NoError(t, err)
@@ -708,7 +707,7 @@ func TestInternalMaxChunkSizeRespected(t *testing.T) {
func TestInternalExpiredEntriesRemoved(t *testing.T) {
id := fmt.Sprintf("tieer%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second * 4 // needs to be lower than the defined
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second * 4) // needs to be lower than the defined
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -743,7 +742,7 @@ func TestInternalExpiredEntriesRemoved(t *testing.T) {
}
func TestInternalBug2117(t *testing.T) {
vfsflags.Opt.DirCacheTime = time.Second * 10
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second * 10)
id := fmt.Sprintf("tib2117%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, map[string]string{"info_age": "72h", "chunk_clean_interval": "15m"})
@@ -771,24 +770,24 @@ func TestInternalBug2117(t *testing.T) {
di, err := runInstance.list(t, rootFs, "test/dir1/dir2")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 1)
time.Sleep(time.Second * 30)
di, err = runInstance.list(t, rootFs, "test/dir1/dir2")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 1)
di, err = runInstance.list(t, rootFs, "test/dir1")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 4)
di, err = runInstance.list(t, rootFs, "test")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 4)
}
@@ -829,7 +828,7 @@ func newRun() *run {
} else {
r.tmpUploadDir = uploadDir
}
log.Printf("Temp Upload Dir: %v", r.tmpUploadDir)
fs.Logf(nil, "Temp Upload Dir: %v", r.tmpUploadDir)
return r
}
@@ -850,8 +849,8 @@ func (r *run) encryptRemoteIfNeeded(t *testing.T, remote string) string {
func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool, flags map[string]string) (fs.Fs, *cache.Persistent) {
fstest.Initialise()
remoteExists := false
for _, s := range config.FileSections() {
if s == remote {
for _, s := range config.GetRemotes() {
if s.Name == remote {
remoteExists = true
}
}
@@ -875,12 +874,12 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
cacheRemote := remote
if !remoteExists {
localRemote := remote + "-local"
config.FileSet(localRemote, "type", "local")
config.FileSet(localRemote, "nounc", "true")
config.FileSetValue(localRemote, "type", "local")
config.FileSetValue(localRemote, "nounc", "true")
m.Set("type", "cache")
m.Set("remote", localRemote+":"+filepath.Join(os.TempDir(), localRemote))
} else {
remoteType := config.FileGet(remote, "type")
remoteType := config.GetValue(remote, "type")
if remoteType == "" {
t.Skipf("skipped due to invalid remote type for %v", remote)
return nil, nil
@@ -891,14 +890,14 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
m.Set("password", cryptPassword1)
m.Set("password2", cryptPassword2)
}
remoteRemote := config.FileGet(remote, "remote")
remoteRemote := config.GetValue(remote, "remote")
if remoteRemote == "" {
t.Skipf("skipped due to invalid remote wrapper for %v", remote)
return nil, nil
}
remoteRemoteParts := strings.Split(remoteRemote, ":")
remoteWrapping := remoteRemoteParts[0]
remoteType := config.FileGet(remoteWrapping, "type")
remoteType := config.GetValue(remoteWrapping, "type")
if remoteType != "cache" {
t.Skipf("skipped due to invalid remote type for %v: '%v'", remoteWrapping, remoteType)
return nil, nil
@@ -1192,7 +1191,7 @@ func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) e
func (r *run) cleanSize(t *testing.T, size int64) int64 {
if r.rootIsCrypt {
denominator := int64(65536 + 16)
size = size - 32
size -= 32
quotient := size / denominator
remainder := size % denominator
return (quotient*65536 + remainder - 16)

View File

@@ -208,7 +208,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
offset := chunkStart % int64(r.cacheFs().opt.ChunkSize)
// we align the start offset of the first chunk to a likely chunk in the storage
chunkStart = chunkStart - offset
chunkStart -= offset
r.queueOffset(chunkStart)
found := false
@@ -327,7 +327,7 @@ func (r *Handle) Seek(offset int64, whence int) (int64, error) {
chunkStart := r.offset - (r.offset % int64(r.cacheFs().opt.ChunkSize))
if chunkStart >= int64(r.cacheFs().opt.ChunkSize) {
chunkStart = chunkStart - int64(r.cacheFs().opt.ChunkSize)
chunkStart -= int64(r.cacheFs().opt.ChunkSize)
}
r.queueOffset(chunkStart)
@@ -415,10 +415,8 @@ func (w *worker) run() {
continue
}
}
} else {
if w.r.storage().HasChunk(w.r.cachedObject, chunkStart) {
continue
}
} else if w.r.storage().HasChunk(w.r.cachedObject, chunkStart) {
continue
}
chunkEnd := chunkStart + int64(w.r.cacheFs().opt.ChunkSize)

View File

@@ -987,7 +987,7 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
}
}
if o.main == nil && (o.chunks == nil || len(o.chunks) == 0) {
if o.main == nil && len(o.chunks) == 0 {
// Scanning hasn't found data chunks with conforming names.
if f.useMeta || quickScan {
// Metadata is required but absent and there are no chunks.
@@ -2480,7 +2480,7 @@ func unmarshalSimpleJSON(ctx context.Context, metaObject fs.Object, data []byte)
if len(data) > maxMetadataSizeWritten {
return nil, false, ErrMetaTooBig
}
if data == nil || len(data) < 2 || data[0] != '{' || data[len(data)-1] != '}' {
if len(data) < 2 || data[0] != '{' || data[len(data)-1] != '}' {
return nil, false, errors.New("invalid json")
}
var metadata metaSimpleJSON

View File

@@ -0,0 +1,48 @@
// Package api has type definitions for cloudinary
package api
import (
"fmt"
)
// CloudinaryEncoder extends the built-in encoder
type CloudinaryEncoder interface {
// FromStandardPath takes a / separated path in Standard encoding
// and converts it to a / separated path in this encoding.
FromStandardPath(string) string
// FromStandardName takes name in Standard encoding and converts
// it in this encoding.
FromStandardName(string) string
// ToStandardPath takes a / separated path in this encoding
// and converts it to a / separated path in Standard encoding.
ToStandardPath(string) string
// ToStandardName takes name in this encoding and converts
// it in Standard encoding.
ToStandardName(string) string
// Encoded root of the remote (as passed into NewFs)
FromStandardFullPath(string) string
}
// UpdateOptions was created to pass options from Update to Put
type UpdateOptions struct {
PublicID string
ResourceType string
DeliveryType string
AssetFolder string
DisplayName string
}
// Header formats the option as a string
func (o *UpdateOptions) Header() (string, string) {
return "UpdateOption", fmt.Sprintf("%s/%s/%s", o.ResourceType, o.DeliveryType, o.PublicID)
}
// Mandatory returns whether the option must be parsed or can be ignored
func (o *UpdateOptions) Mandatory() bool {
return false
}
// String formats the option into human-readable form
func (o *UpdateOptions) String() string {
return fmt.Sprintf("Fully qualified Public ID: %s/%s/%s", o.ResourceType, o.DeliveryType, o.PublicID)
}
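The Header, Mandatory and String methods above are what let *UpdateOptions satisfy rclone's fs.OpenOption interface, so Update can hand the existing asset's identifiers down to Put alongside ordinary open options. A minimal sketch (not part of this change; the field values are hypothetical) of passing the option and recognising it with a type assertion:

package main

import (
	"fmt"

	"github.com/rclone/rclone/backend/cloudinary/api"
	"github.com/rclone/rclone/fs"
)

func main() {
	// Options as they might be passed from Update to Put.
	options := []fs.OpenOption{
		&api.UpdateOptions{
			PublicID:     "abc123",
			ResourceType: "image",
			DeliveryType: "upload",
			AssetFolder:  "rclone/test",
			DisplayName:  "file.txt",
		},
	}
	for _, option := range options {
		// Put can type-assert to find the update option among generic OpenOptions.
		if u, ok := option.(*api.UpdateOptions); ok {
			fmt.Println("overwriting existing asset:", u.String())
		}
	}
}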

View File

@@ -0,0 +1,711 @@
// Package cloudinary provides an interface to the Cloudinary DAM
package cloudinary
import (
"context"
"encoding/hex"
"errors"
"fmt"
"io"
"net/http"
"path"
"strconv"
"strings"
"time"
"github.com/cloudinary/cloudinary-go/v2"
SDKApi "github.com/cloudinary/cloudinary-go/v2/api"
"github.com/cloudinary/cloudinary-go/v2/api/admin"
"github.com/cloudinary/cloudinary-go/v2/api/admin/search"
"github.com/cloudinary/cloudinary-go/v2/api/uploader"
"github.com/rclone/rclone/backend/cloudinary/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
"github.com/zeebo/blake3"
)
// Cloudinary shouldn't have a trailing dot if there is no path
func cldPathDir(somePath string) string {
if somePath == "" || somePath == "." {
return somePath
}
dir := path.Dir(somePath)
if dir == "." {
return ""
}
return dir
}
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "cloudinary",
Description: "Cloudinary",
NewFs: NewFs,
Options: []fs.Option{
{
Name: "cloud_name",
Help: "Cloudinary Environment Name",
Required: true,
Sensitive: true,
},
{
Name: "api_key",
Help: "Cloudinary API Key",
Required: true,
Sensitive: true,
},
{
Name: "api_secret",
Help: "Cloudinary API Secret",
Required: true,
Sensitive: true,
},
{
Name: "upload_prefix",
Help: "Specify the API endpoint for environments out of the US",
},
{
Name: "upload_preset",
Help: "Upload Preset to select asset manipulation on upload",
},
{
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.Base | // Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
encoder.EncodeSlash |
encoder.EncodeLtGt |
encoder.EncodeDoubleQuote |
encoder.EncodeQuestion |
encoder.EncodeAsterisk |
encoder.EncodePipe |
encoder.EncodeHash |
encoder.EncodePercent |
encoder.EncodeBackSlash |
encoder.EncodeDel |
encoder.EncodeCtl |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8 |
encoder.EncodeDot),
},
{
Name: "eventually_consistent_delay",
Default: fs.Duration(0),
Advanced: true,
Help: "Wait N seconds for eventual consistency of the databases that support the backend operation",
},
},
})
}
// Options defines the configuration for this backend
type Options struct {
CloudName string `config:"cloud_name"`
APIKey string `config:"api_key"`
APISecret string `config:"api_secret"`
UploadPrefix string `config:"upload_prefix"`
UploadPreset string `config:"upload_preset"`
Enc encoder.MultiEncoder `config:"encoding"`
EventuallyConsistentDelay fs.Duration `config:"eventually_consistent_delay"`
}
// Fs represents a remote cloudinary server
type Fs struct {
name string
root string
opt Options
features *fs.Features
pacer *fs.Pacer
srv *rest.Client // For downloading assets via the Cloudinary CDN
cld *cloudinary.Cloudinary // API calls are going through the Cloudinary SDK
lastCRUD time.Time
}
// Object describes a cloudinary object
type Object struct {
fs *Fs
remote string
size int64
modTime time.Time
url string
md5sum string
publicID string
resourceType string
deliveryType string
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(ctx context.Context, name string, root string, m configmap.Mapper) (fs.Fs, error) {
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
// Initialize the Cloudinary client
cld, err := cloudinary.NewFromParams(opt.CloudName, opt.APIKey, opt.APISecret)
if err != nil {
return nil, fmt.Errorf("failed to create Cloudinary client: %w", err)
}
cld.Admin.Client = *fshttp.NewClient(ctx)
cld.Upload.Client = *fshttp.NewClient(ctx)
if opt.UploadPrefix != "" {
cld.Config.API.UploadPrefix = opt.UploadPrefix
}
client := fshttp.NewClient(ctx)
f := &Fs{
name: name,
root: root,
opt: *opt,
cld: cld,
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(1000), pacer.MaxSleep(10000), pacer.DecayConstant(2))),
srv: rest.NewClient(client),
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
if root != "" {
// Check to see if the root is actually an existing file
remote := path.Base(root)
f.root = cldPathDir(root)
_, err := f.NewObject(ctx, remote)
if err != nil {
if err == fs.ErrorObjectNotFound || errors.Is(err, fs.ErrorNotAFile) {
// File doesn't exist so return the previous root
f.root = root
return f, nil
}
return nil, err
}
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
return f, nil
}
// ------------------------------------------------------------
// FromStandardPath implementation of the api.CloudinaryEncoder
func (f *Fs) FromStandardPath(s string) string {
return strings.ReplaceAll(f.opt.Enc.FromStandardPath(s), "&", "\uFF06")
}
// FromStandardName implementation of the api.CloudinaryEncoder
func (f *Fs) FromStandardName(s string) string {
return strings.ReplaceAll(f.opt.Enc.FromStandardName(s), "&", "\uFF06")
}
// ToStandardPath implementation of the api.CloudinaryEncoder
func (f *Fs) ToStandardPath(s string) string {
return strings.ReplaceAll(f.opt.Enc.ToStandardPath(s), "\uFF06", "&")
}
// ToStandardName implementation of the api.CloudinaryEncoder
func (f *Fs) ToStandardName(s string) string {
return strings.ReplaceAll(f.opt.Enc.ToStandardName(s), "\uFF06", "&")
}
// FromStandardFullPath encodes a full path to Cloudinary standard
func (f *Fs) FromStandardFullPath(dir string) string {
return path.Join(api.CloudinaryEncoder.FromStandardPath(f, f.root), api.CloudinaryEncoder.FromStandardPath(f, dir))
}
// ToAssetFolderAPI encodes folders as expected by the Cloudinary SDK
func (f *Fs) ToAssetFolderAPI(dir string) string {
return strings.ReplaceAll(dir, "%", "%25")
}
// ToDisplayNameElastic encodes a special case of elasticsearch
func (f *Fs) ToDisplayNameElastic(dir string) string {
return strings.ReplaceAll(dir, "!", "\\!")
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// WaitEventuallyConsistent waits till the FS is eventually consistent
func (f *Fs) WaitEventuallyConsistent() {
if f.opt.EventuallyConsistentDelay == fs.Duration(0) {
return
}
delay := time.Duration(f.opt.EventuallyConsistentDelay)
timeSinceLastCRUD := time.Since(f.lastCRUD)
if timeSinceLastCRUD < delay {
time.Sleep(delay - timeSinceLastCRUD)
}
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("Cloudinary root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// List the objects and directories in dir into entries
func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
remotePrefix := f.FromStandardFullPath(dir)
if remotePrefix != "" && !strings.HasSuffix(remotePrefix, "/") {
remotePrefix += "/"
}
var entries fs.DirEntries
dirs := make(map[string]struct{})
nextCursor := ""
f.WaitEventuallyConsistent()
for {
// use the folders api to list folders.
folderParams := admin.SubFoldersParams{
Folder: f.ToAssetFolderAPI(remotePrefix),
MaxResults: 500,
}
if nextCursor != "" {
folderParams.NextCursor = nextCursor
}
results, err := f.cld.Admin.SubFolders(ctx, folderParams)
if err != nil {
return nil, fmt.Errorf("failed to list sub-folders: %w", err)
}
if results.Error.Message != "" {
if strings.HasPrefix(results.Error.Message, "Can't find folder with path") {
return nil, fs.ErrorDirNotFound
}
return nil, fmt.Errorf("failed to list sub-folders: %s", results.Error.Message)
}
for _, folder := range results.Folders {
relativePath := api.CloudinaryEncoder.ToStandardPath(f, strings.TrimPrefix(folder.Path, remotePrefix))
parts := strings.Split(relativePath, "/")
// It's a directory
dirName := parts[len(parts)-1]
if _, found := dirs[dirName]; !found {
d := fs.NewDir(path.Join(dir, dirName), time.Time{})
entries = append(entries, d)
dirs[dirName] = struct{}{}
}
}
// Break if there are no more results
if results.NextCursor == "" {
break
}
nextCursor = results.NextCursor
}
for {
// Use the assets.AssetsByAssetFolder API to list assets
assetsParams := admin.AssetsByAssetFolderParams{
AssetFolder: remotePrefix,
MaxResults: 500,
}
if nextCursor != "" {
assetsParams.NextCursor = nextCursor
}
results, err := f.cld.Admin.AssetsByAssetFolder(ctx, assetsParams)
if err != nil {
return nil, fmt.Errorf("failed to list assets: %w", err)
}
for _, asset := range results.Assets {
remote := api.CloudinaryEncoder.ToStandardName(f, asset.DisplayName)
if dir != "" {
remote = path.Join(dir, api.CloudinaryEncoder.ToStandardName(f, asset.DisplayName))
}
o := &Object{
fs: f,
remote: remote,
size: int64(asset.Bytes),
modTime: asset.CreatedAt,
url: asset.SecureURL,
publicID: asset.PublicID,
resourceType: asset.AssetType,
deliveryType: asset.Type,
}
entries = append(entries, o)
}
// Break if there are no more results
if results.NextCursor == "" {
break
}
nextCursor = results.NextCursor
}
return entries, nil
}
// NewObject finds the Object at remote. If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
searchParams := search.Query{
Expression: fmt.Sprintf("asset_folder:\"%s\" AND display_name:\"%s\"",
f.FromStandardFullPath(cldPathDir(remote)),
f.ToDisplayNameElastic(api.CloudinaryEncoder.FromStandardName(f, path.Base(remote)))),
SortBy: []search.SortByField{{"uploaded_at": "desc"}},
MaxResults: 2,
}
var results *admin.SearchResult
f.WaitEventuallyConsistent()
err := f.pacer.Call(func() (bool, error) {
var err1 error
results, err1 = f.cld.Admin.Search(ctx, searchParams)
if err1 == nil && results.TotalCount != len(results.Assets) {
err1 = errors.New("partial response so waiting for eventual consistency")
}
return shouldRetry(ctx, nil, err1)
})
if err != nil {
return nil, fs.ErrorObjectNotFound
}
if results.TotalCount == 0 || len(results.Assets) == 0 {
return nil, fs.ErrorObjectNotFound
}
asset := results.Assets[0]
o := &Object{
fs: f,
remote: remote,
size: int64(asset.Bytes),
modTime: asset.UploadedAt,
url: asset.SecureURL,
md5sum: asset.Etag,
publicID: asset.PublicID,
resourceType: asset.ResourceType,
deliveryType: asset.Type,
}
return o, nil
}
func (f *Fs) getSuggestedPublicID(assetFolder string, displayName string, modTime time.Time) string {
payload := []byte(path.Join(assetFolder, displayName))
hash := blake3.Sum256(payload)
return hex.EncodeToString(hash[:])
}
// Put uploads content to Cloudinary
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
if src.Size() == 0 {
return nil, fs.ErrorCantUploadEmptyFiles
}
params := uploader.UploadParams{
UploadPreset: f.opt.UploadPreset,
}
updateObject := false
var modTime time.Time
for _, option := range options {
if updateOptions, ok := option.(*api.UpdateOptions); ok {
if updateOptions.PublicID != "" {
updateObject = true
params.Overwrite = SDKApi.Bool(true)
params.Invalidate = SDKApi.Bool(true)
params.PublicID = updateOptions.PublicID
params.ResourceType = updateOptions.ResourceType
params.Type = SDKApi.DeliveryType(updateOptions.DeliveryType)
params.AssetFolder = updateOptions.AssetFolder
params.DisplayName = updateOptions.DisplayName
modTime = src.ModTime(ctx)
}
}
}
if !updateObject {
params.AssetFolder = f.FromStandardFullPath(cldPathDir(src.Remote()))
params.DisplayName = api.CloudinaryEncoder.FromStandardName(f, path.Base(src.Remote()))
// We want to conform to the unique asset ID of rclone, which is (asset_folder,display_name,last_modified).
// We also want to enable customers to choose their own public_id, in case duplicate names are not a crucial use case.
// Upload_presets that apply randomness to the public ID would not work well with rclone duplicate assets support.
params.FilenameOverride = f.getSuggestedPublicID(params.AssetFolder, params.DisplayName, src.ModTime(ctx))
}
uploadResult, err := f.cld.Upload.Upload(ctx, in, params)
f.lastCRUD = time.Now()
if err != nil {
return nil, fmt.Errorf("failed to upload to Cloudinary: %w", err)
}
if !updateObject {
modTime = uploadResult.CreatedAt
}
if uploadResult.Error.Message != "" {
return nil, errors.New(uploadResult.Error.Message)
}
o := &Object{
fs: f,
remote: src.Remote(),
size: int64(uploadResult.Bytes),
modTime: modTime,
url: uploadResult.SecureURL,
md5sum: uploadResult.Etag,
publicID: uploadResult.PublicID,
resourceType: uploadResult.ResourceType,
deliveryType: uploadResult.Type,
}
return o, nil
}
// Precision of the remote
func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
// Hashes returns the supported hash sets
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
}
// Mkdir creates empty folders
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
params := admin.CreateFolderParams{Folder: f.ToAssetFolderAPI(f.FromStandardFullPath(dir))}
res, err := f.cld.Admin.CreateFolder(ctx, params)
f.lastCRUD = time.Now()
if err != nil {
return err
}
if res.Error.Message != "" {
return errors.New(res.Error.Message)
}
return nil
}
// Rmdir deletes empty folders
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
// Additional test because Cloudinary will delete folders without
// assets, regardless of empty sub-folders
folder := f.ToAssetFolderAPI(f.FromStandardFullPath(dir))
folderParams := admin.SubFoldersParams{
Folder: folder,
MaxResults: 1,
}
results, err := f.cld.Admin.SubFolders(ctx, folderParams)
if err != nil {
return err
}
if results.TotalCount > 0 {
return fs.ErrorDirectoryNotEmpty
}
params := admin.DeleteFolderParams{Folder: folder}
res, err := f.cld.Admin.DeleteFolder(ctx, params)
f.lastCRUD = time.Now()
if err != nil {
return err
}
if res.Error.Message != "" {
if strings.HasPrefix(res.Error.Message, "Can't find folder with path") {
return fs.ErrorDirNotFound
}
return errors.New(res.Error.Message)
}
return nil
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
420, // Too Many Requests (legacy)
429, // Too Many Requests
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if err != nil {
tryAgain := "Try again on "
if idx := strings.Index(err.Error(), tryAgain); idx != -1 {
layout := "2006-01-02 15:04:05 UTC"
dateStr := err.Error()[idx+len(tryAgain) : idx+len(tryAgain)+len(layout)]
timestamp, err2 := time.Parse(layout, dateStr)
if err2 == nil {
return true, fserrors.NewErrorRetryAfter(time.Until(timestamp))
}
}
fs.Debugf(nil, "Retrying API error %v", err)
return true, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// ------------------------------------------------------------
// Hash returns the MD5 of an object
func (o *Object) Hash(ctx context.Context, ty hash.Type) (string, error) {
if ty != hash.MD5 {
return "", hash.ErrUnsupported
}
return o.md5sum, nil
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// ModTime returns the modification time of the object
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// Size of object in bytes
func (o *Object) Size() int64 {
return o.size
}
// Storable returns if this object is storable
func (o *Object) Storable() bool {
return true
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return fs.ErrorCantSetModTime
}
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
var resp *http.Response
opts := rest.Opts{
Method: "GET",
RootURL: o.url,
Options: options,
}
var offset int64
var count int64
var key string
var value string
fs.FixRangeOption(options, o.size)
for _, option := range options {
switch x := option.(type) {
case *fs.RangeOption:
offset, count = x.Decode(o.size)
if count < 0 {
count = o.size - offset
}
key, value = option.Header()
case *fs.SeekOption:
offset = x.Offset
count = o.size - offset
key, value = option.Header()
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
if key != "" && value != "" {
opts.ExtraHeaders = make(map[string]string)
opts.ExtraHeaders[key] = value
}
// Make sure that the asset is fully available
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
if err == nil {
cl, clErr := strconv.Atoi(resp.Header.Get("content-length"))
if clErr == nil && count == int64(cl) {
return false, nil
}
}
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("failed download of \"%s\": %w", o.url, err)
}
return resp.Body, err
}
// Update the object with the contents of the io.Reader
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
options = append(options, &api.UpdateOptions{
PublicID: o.publicID,
ResourceType: o.resourceType,
DeliveryType: o.deliveryType,
DisplayName: api.CloudinaryEncoder.FromStandardName(o.fs, path.Base(o.Remote())),
AssetFolder: o.fs.FromStandardFullPath(cldPathDir(o.Remote())),
})
updatedObj, err := o.fs.Put(ctx, in, src, options...)
if err != nil {
return err
}
if uo, ok := updatedObj.(*Object); ok {
o.size = uo.size
o.modTime = time.Now() // Skipping uo.modTime because the API returns the create time
o.url = uo.url
o.md5sum = uo.md5sum
o.publicID = uo.publicID
o.resourceType = uo.resourceType
o.deliveryType = uo.deliveryType
}
return nil
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
params := uploader.DestroyParams{
PublicID: o.publicID,
ResourceType: o.resourceType,
Type: o.deliveryType,
}
res, dErr := o.fs.cld.Upload.Destroy(ctx, params)
o.fs.lastCRUD = time.Now()
if dErr != nil {
return dErr
}
if res.Error.Message != "" {
return errors.New(res.Error.Message)
}
if res.Result != "ok" {
return errors.New(res.Result)
}
return nil
}

View File

@@ -0,0 +1,23 @@
// Test Cloudinary filesystem interface
package cloudinary_test
import (
"testing"
"github.com/rclone/rclone/backend/cloudinary"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
name := "TestCloudinary"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*cloudinary.Object)(nil),
SkipInvalidUTF8: true,
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "eventually_consistent_delay", Value: "7"},
},
})
}

View File

@@ -38,6 +38,7 @@ import (
const (
initialChunkSize = 262144 // Initial and max sizes of chunks when reading parts of the file. Currently
maxChunkSize = 8388608 // at 256 KiB and 8 MiB.
chunkStreams = 0 // Streams to use for reading
bufferSize = 8388608
heuristicBytes = 1048576
@@ -1362,7 +1363,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.Read
}
}
// Get a chunkedreader for the wrapped object
chunkedReader := chunkedreader.New(ctx, o.Object, initialChunkSize, maxChunkSize)
chunkedReader := chunkedreader.New(ctx, o.Object, initialChunkSize, maxChunkSize, chunkStreams)
// Get file handle
var file io.Reader
if offset != 0 {

View File

@@ -329,7 +329,7 @@ func (c *Cipher) obfuscateSegment(plaintext string) string {
for _, runeValue := range plaintext {
dir += int(runeValue)
}
dir = dir % 256
dir %= 256
// We'll use this number to store in the result filename...
var result bytes.Buffer
@@ -450,7 +450,7 @@ func (c *Cipher) deobfuscateSegment(ciphertext string) (string, error) {
if pos >= 26 {
pos -= 6
}
pos = pos - thisdir
pos -= thisdir
if pos < 0 {
pos += 52
}
@@ -888,7 +888,7 @@ func (fh *decrypter) fillBuffer() (err error) {
fs.Errorf(nil, "crypt: ignoring: %v", ErrorEncryptedBadBlock)
// Zero out the bad block and continue
for i := range (*fh.buf)[:n] {
(*fh.buf)[i] = 0
fh.buf[i] = 0
}
}
fh.bufIndex = 0

View File

@@ -80,9 +80,10 @@ const (
// Globals
var (
// Description of how to auth for this app
driveConfig = &oauth2.Config{
driveConfig = &oauthutil.Config{
Scopes: []string{scopePrefix + "drive"},
Endpoint: google.Endpoint,
AuthURL: google.Endpoint.AuthURL,
TokenURL: google.Endpoint.TokenURL,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL,
@@ -120,6 +121,7 @@ var (
"text/html": ".html",
"text/plain": ".txt",
"text/tab-separated-values": ".tsv",
"text/markdown": ".md",
}
_mimeTypeToExtensionLinks = map[string]string{
"application/x-link-desktop": ".desktop",
@@ -201,7 +203,6 @@ func driveScopesContainsAppFolder(scopes []string) bool {
if scope == scopePrefix+"drive.appfolder" {
return true
}
}
return false
}
@@ -1210,6 +1211,7 @@ func fixMimeType(mimeTypeIn string) string {
}
return mimeTypeOut
}
func fixMimeTypeMap(in map[string][]string) (out map[string][]string) {
out = make(map[string][]string, len(in))
for k, v := range in {
@@ -1220,9 +1222,11 @@ func fixMimeTypeMap(in map[string][]string) (out map[string][]string) {
}
return out
}
func isInternalMimeType(mimeType string) bool {
return strings.HasPrefix(mimeType, "application/vnd.google-apps.")
}
func isLinkMimeType(mimeType string) bool {
return strings.HasPrefix(mimeType, "application/x-link-")
}
@@ -1655,7 +1659,8 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *drive.F
// When the drive.File cannot be represented as an fs.Object it will return (nil, nil).
func (f *Fs) newObjectWithExportInfo(
ctx context.Context, remote string, info *drive.File,
extension, exportName, exportMimeType string, isDocument bool) (o fs.Object, err error) {
extension, exportName, exportMimeType string, isDocument bool,
) (o fs.Object, err error) {
// Note that resolveShortcut will have been called already if
// we are being called from a listing. However the drive.Item
// will have been resolved so this will do nothing.
@@ -1846,6 +1851,7 @@ func linkTemplate(mt string) *template.Template {
})
return _linkTemplates[mt]
}
func (f *Fs) fetchFormats(ctx context.Context) {
fetchFormatsOnce.Do(func() {
var about *drive.About
@@ -1891,7 +1897,8 @@ func (f *Fs) importFormats(ctx context.Context) map[string][]string {
// Look through the exportExtensions and find the first format that can be
// converted. If none found then return ("", "", false)
func (f *Fs) findExportFormatByMimeType(ctx context.Context, itemMimeType string) (
extension, mimeType string, isDocument bool) {
extension, mimeType string, isDocument bool,
) {
exportMimeTypes, isDocument := f.exportFormats(ctx)[itemMimeType]
if isDocument {
for _, _extension := range f.exportExtensions {
@@ -2219,7 +2226,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
case in <- job:
default:
overflow = append(overflow, job)
wg.Add(-1)
wg.Done()
}
}
@@ -2687,7 +2694,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
if shortcutID != "" {
return f.delete(ctx, shortcutID, f.opt.UseTrash)
}
var trashedFiles = false
trashedFiles := false
if check {
found, err := f.list(ctx, []string{directoryID}, "", false, false, f.opt.TrashedOnly, true, func(item *drive.File) bool {
if !item.Trashed {
@@ -2924,7 +2931,6 @@ func (f *Fs) CleanUp(ctx context.Context) error {
err := f.svc.Files.EmptyTrash().Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return err
}
@@ -3185,6 +3191,7 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
}
}()
}
func (f *Fs) changeNotifyStartPageToken(ctx context.Context) (pageToken string, err error) {
var startPageToken *drive.StartPageToken
err = f.pacer.Call(func() (bool, error) {
@@ -3523,14 +3530,14 @@ func (f *Fs) unTrashDir(ctx context.Context, dir string, recurse bool) (r unTras
return f.unTrash(ctx, dir, directoryID, true)
}
// copy file with id to dest
func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
// copy or move file with id to dest
func (f *Fs) copyOrMoveID(ctx context.Context, operation string, id, dest string) (err error) {
info, err := f.getFile(ctx, id, f.getFileFields(ctx))
if err != nil {
return fmt.Errorf("couldn't find id: %w", err)
}
if info.MimeType == driveFolderType {
return fmt.Errorf("can't copy directory use: rclone copy --drive-root-folder-id %s %s %s", id, fs.ConfigString(f), dest)
return fmt.Errorf("can't %s directory use: rclone %s --drive-root-folder-id %s %s %s", operation, operation, id, fs.ConfigString(f), dest)
}
info.Name = f.opt.Enc.ToStandardName(info.Name)
o, err := f.newObjectWithInfo(ctx, info.Name, info)
@@ -3551,14 +3558,21 @@ func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
if err != nil {
return err
}
_, err = operations.Copy(ctx, dstFs, nil, destLeaf, o)
if err != nil {
return fmt.Errorf("copy failed: %w", err)
var opErr error
if operation == "moveid" {
_, opErr = operations.Move(ctx, dstFs, nil, destLeaf, o)
} else {
_, opErr = operations.Copy(ctx, dstFs, nil, destLeaf, o)
}
if opErr != nil {
return fmt.Errorf("%s failed: %w", operation, opErr)
}
return nil
}
func (f *Fs) query(ctx context.Context, query string) (entries []*drive.File, err error) {
// Run the drive query calling fn on each entry found
func (f *Fs) queryFn(ctx context.Context, query string, fn func(*drive.File)) (err error) {
list := f.svc.Files.List()
if query != "" {
list.Q(query)
@@ -3577,10 +3591,7 @@ func (f *Fs) query(ctx context.Context, query string) (entries []*drive.File, er
if f.rootFolderID == "appDataFolder" {
list.Spaces("appDataFolder")
}
fields := fmt.Sprintf("files(%s),nextPageToken,incompleteSearch", f.getFileFields(ctx))
var results []*drive.File
for {
var files *drive.FileList
err = f.pacer.Call(func() (bool, error) {
@@ -3588,20 +3599,66 @@ func (f *Fs) query(ctx context.Context, query string) (entries []*drive.File, er
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("failed to execute query: %w", err)
return fmt.Errorf("failed to execute query: %w", err)
}
if files.IncompleteSearch {
fs.Errorf(f, "search result INCOMPLETE")
}
results = append(results, files.Files...)
for _, item := range files.Files {
fn(item)
}
if files.NextPageToken == "" {
break
}
list.PageToken(files.NextPageToken)
}
return nil
}
// Run the drive query returning the entries found
func (f *Fs) query(ctx context.Context, query string) (entries []*drive.File, err error) {
var results []*drive.File
err = f.queryFn(ctx, query, func(item *drive.File) {
results = append(results, item)
})
if err != nil {
return nil, err
}
return results, nil
}
// Rescue, list or delete orphaned files
func (f *Fs) rescue(ctx context.Context, dirID string, delete bool) (err error) {
return f.queryFn(ctx, "'me' in owners and trashed=false", func(item *drive.File) {
if len(item.Parents) != 0 {
return
}
// Have found an orphaned entry
if delete {
fs.Infof(item.Name, "Deleting orphan %q into trash", item.Id)
err = f.delete(ctx, item.Id, true)
if err != nil {
fs.Errorf(item.Name, "Failed to delete orphan %q: %v", item.Id, err)
}
} else if dirID == "" {
operations.SyncPrintf("%q, %q\n", item.Name, item.Id)
} else {
fs.Infof(item.Name, "Rescuing orphan %q", item.Id)
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Files.Update(item.Id, nil).
AddParents(dirID).
Fields(f.getFileFields(ctx)).
SupportsAllDrives(true).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
fs.Errorf(item.Name, "Failed to rescue orphan %q: %v", item.Id, err)
}
}
})
}
var commandHelp = []fs.CommandHelp{{
Name: "get",
Short: "Get command for fetching the drive config parameters",
@@ -3746,6 +3803,28 @@ attempted if possible.
Use the --interactive/-i or --dry-run flag to see what would be copied before copying.
`,
}, {
Name: "moveid",
Short: "Move files by ID",
Long: `This command moves files by ID
Usage:
rclone backend moveid drive: ID path
rclone backend moveid drive: ID1 path1 ID2 path2
It moves the drive file with ID given to the path (an rclone path which
will be passed internally to rclone moveto).
The path should end with a / to indicate move the file as named to
this directory. If it doesn't end with a / then the last path
component will be used as the file name.
If the destination is a drive backend then server-side moving will be
attempted if possible.
Use the --interactive/-i or --dry-run flag to see what would be moved beforehand.
`,
}, {
Name: "exportformats",
Short: "Dump the export formats for debug purposes",
@@ -3793,6 +3872,37 @@ The result is a JSON array of matches, for example:
"webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
}
]`,
}, {
Name: "rescue",
Short: "Rescue or delete any orphaned files",
Long: `This command rescues or deletes any orphaned files or directories.
Sometimes files can get orphaned in Google Drive. This means that they
are no longer in any folder in Google Drive.
This command finds those files and either rescues them to a directory
you specify or deletes them.
Usage:
This can be used in 3 ways.
First, list all orphaned files
rclone backend rescue drive:
Second rescue all orphaned files to the directory indicated
rclone backend rescue drive: "relative/path/to/rescue/directory"
e.g. To rescue all orphans to a directory called "Orphans" in the top level
rclone backend rescue drive: Orphans
Third delete all orphaned files to the trash
rclone backend rescue drive: -o delete
`,
}}
// Command the backend to run a named command
@@ -3893,16 +4003,16 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
dir = arg[0]
}
return f.unTrashDir(ctx, dir, true)
case "copyid":
case "copyid", "moveid":
if len(arg)%2 != 0 {
return nil, errors.New("need an even number of arguments")
}
for len(arg) > 0 {
id, dest := arg[0], arg[1]
arg = arg[2:]
err = f.copyID(ctx, id, dest)
err = f.copyOrMoveID(ctx, name, id, dest)
if err != nil {
return nil, fmt.Errorf("failed copying %q to %q: %w", id, dest, err)
return nil, fmt.Errorf("failed %s %q to %q: %w", name, id, dest, err)
}
}
return nil, nil
@@ -3913,14 +4023,29 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
case "query":
if len(arg) == 1 {
query := arg[0]
var results, err = f.query(ctx, query)
results, err := f.query(ctx, query)
if err != nil {
return nil, fmt.Errorf("failed to execute query: %q, error: %w", query, err)
}
return results, nil
} else {
return nil, errors.New("need a query argument")
}
return nil, errors.New("need a query argument")
case "rescue":
dirID := ""
_, delete := opt["delete"]
if len(arg) == 0 {
// no arguments - list only
} else if !delete && len(arg) == 1 {
dir := arg[0]
dirID, err = f.dirCache.FindDir(ctx, dir, true)
if err != nil {
return nil, fmt.Errorf("failed to find or create rescue directory %q: %w", dir, err)
}
fs.Infof(f, "Rescuing orphans into %q", dir)
} else {
return nil, errors.New("syntax error: need 0 or 1 args or -o delete")
}
return nil, f.rescue(ctx, dirID, delete)
default:
return nil, fs.ErrorCommandNotFound
}
@@ -3964,8 +4089,9 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
}
return "", hash.ErrUnsupported
}
func (o *baseObject) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 {
if t != hash.MD5 && t != hash.SHA1 && t != hash.SHA256 {
return "", hash.ErrUnsupported
}
return "", nil
@@ -3978,7 +4104,8 @@ func (o *baseObject) Size() int64 {
// getRemoteInfoWithExport returns a drive.File and the export settings for the remote
func (f *Fs) getRemoteInfoWithExport(ctx context.Context, remote string) (
info *drive.File, extension, exportName, exportMimeType string, isDocument bool, err error) {
info *drive.File, extension, exportName, exportMimeType string, isDocument bool, err error,
) {
leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, false)
if err != nil {
if err == fs.ErrorDirNotFound {
@@ -4191,12 +4318,13 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
return o.baseObject.open(ctx, o.url, options...)
}
func (o *documentObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
// Update the size with what we are reading as it can change from
// the HEAD in the listing to this GET. This stops rclone marking
// the transfer as corrupted.
var offset, end int64 = 0, -1
var newOptions = options[:0]
newOptions := options[:0]
for _, o := range options {
// Note that Range requests don't work on Google docs:
// https://developers.google.com/drive/v3/web/manage-downloads#partial_download
@@ -4223,9 +4351,10 @@ func (o *documentObject) Open(ctx context.Context, options ...fs.OpenOption) (in
}
return
}
func (o *linkObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
var offset, limit int64 = 0, -1
var data = o.content
data := o.content
for _, option := range options {
switch x := option.(type) {
case *fs.SeekOption:
@@ -4250,7 +4379,8 @@ func (o *linkObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.
}
func (o *baseObject) update(ctx context.Context, updateInfo *drive.File, uploadMimeType string, in io.Reader,
src fs.ObjectInfo) (info *drive.File, err error) {
src fs.ObjectInfo,
) (info *drive.File, err error) {
// Make the API request to upload metadata and file data.
size := src.Size()
if size >= 0 && size < int64(o.fs.opt.UploadCutoff) {
@@ -4328,6 +4458,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return nil
}
func (o *documentObject) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
srcMimeType := fs.MimeType(ctx, src)
importMimeType := ""
@@ -4423,6 +4554,7 @@ func (o *baseObject) Metadata(ctx context.Context) (metadata fs.Metadata, err er
func (o *documentObject) ext() string {
return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:]
}
func (o *linkObject) ext() string {
return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:]
}

View File

@@ -95,7 +95,7 @@ func TestInternalParseExtensions(t *testing.T) {
wantErr error
}{
{"doc", []string{".doc"}, nil},
{" docx ,XLSX, pptx,svg", []string{".docx", ".xlsx", ".pptx", ".svg"}, nil},
{" docx ,XLSX, pptx,svg,md", []string{".docx", ".xlsx", ".pptx", ".svg", ".md"}, nil},
{"docx,svg,Docx", []string{".docx", ".svg"}, nil},
{"docx,potato,docx", []string{".docx"}, errors.New(`couldn't find MIME type for extension ".potato"`)},
} {
@@ -479,8 +479,8 @@ func (f *Fs) InternalTestUnTrash(t *testing.T) {
require.NoError(t, f.Purge(ctx, "trashDir"))
}
// TestIntegration/FsMkdir/FsPutFiles/Internal/CopyID
func (f *Fs) InternalTestCopyID(t *testing.T) {
// TestIntegration/FsMkdir/FsPutFiles/Internal/CopyOrMoveID
func (f *Fs) InternalTestCopyOrMoveID(t *testing.T) {
ctx := context.Background()
obj, err := f.NewObject(ctx, existingFile)
require.NoError(t, err)
@@ -498,7 +498,7 @@ func (f *Fs) InternalTestCopyID(t *testing.T) {
}
t.Run("BadID", func(t *testing.T) {
err = f.copyID(ctx, "ID-NOT-FOUND", dir+"/")
err = f.copyOrMoveID(ctx, "moveid", "ID-NOT-FOUND", dir+"/")
require.Error(t, err)
assert.Contains(t, err.Error(), "couldn't find id")
})
@@ -506,19 +506,31 @@ func (f *Fs) InternalTestCopyID(t *testing.T) {
t.Run("Directory", func(t *testing.T) {
rootID, err := f.dirCache.RootID(ctx, false)
require.NoError(t, err)
err = f.copyID(ctx, rootID, dir+"/")
err = f.copyOrMoveID(ctx, "moveid", rootID, dir+"/")
require.Error(t, err)
assert.Contains(t, err.Error(), "can't copy directory")
assert.Contains(t, err.Error(), "can't moveid directory")
})
t.Run("WithoutDestName", func(t *testing.T) {
err = f.copyID(ctx, o.id, dir+"/")
t.Run("MoveWithoutDestName", func(t *testing.T) {
err = f.copyOrMoveID(ctx, "moveid", o.id, dir+"/")
require.NoError(t, err)
checkFile(path.Base(existingFile))
})
t.Run("WithDestName", func(t *testing.T) {
err = f.copyID(ctx, o.id, dir+"/potato.txt")
t.Run("CopyWithoutDestName", func(t *testing.T) {
err = f.copyOrMoveID(ctx, "copyid", o.id, dir+"/")
require.NoError(t, err)
checkFile(path.Base(existingFile))
})
t.Run("MoveWithDestName", func(t *testing.T) {
err = f.copyOrMoveID(ctx, "moveid", o.id, dir+"/potato.txt")
require.NoError(t, err)
checkFile("potato.txt")
})
t.Run("CopyWithDestName", func(t *testing.T) {
err = f.copyOrMoveID(ctx, "copyid", o.id, dir+"/potato.txt")
require.NoError(t, err)
checkFile("potato.txt")
})
@@ -566,7 +578,7 @@ func (f *Fs) InternalTestAgeQuery(t *testing.T) {
// Check set up for filtering
assert.True(t, f.Features().FilterAware)
opt := &filter.Opt{}
opt := &filter.Options{}
err := opt.MaxAge.Set("1h")
assert.NoError(t, err)
flt, err := filter.NewFilter(opt)
@@ -647,7 +659,7 @@ func (f *Fs) InternalTest(t *testing.T) {
})
t.Run("Shortcuts", f.InternalTestShortcuts)
t.Run("UnTrash", f.InternalTestUnTrash)
t.Run("CopyID", f.InternalTestCopyID)
t.Run("CopyOrMoveID", f.InternalTestCopyOrMoveID)
t.Run("Query", f.InternalTestQuery)
t.Run("AgeQuery", f.InternalTestAgeQuery)
t.Run("ShouldRetry", f.InternalTestShouldRetry)

View File

@@ -11,7 +11,6 @@ import (
"fmt"
"github.com/dropbox/dropbox-sdk-go-unofficial/v6/dropbox/files"
"github.com/rclone/rclone/fs/fserrors"
)
// finishBatch commits the batch, returning a batch status to poll or maybe complete
@@ -21,14 +20,10 @@ func (f *Fs) finishBatch(ctx context.Context, items []*files.UploadSessionFinish
}
err = f.pacer.Call(func() (bool, error) {
complete, err = f.srv.UploadSessionFinishBatchV2(arg)
// If error is insufficient space then don't retry
if e, ok := err.(files.UploadSessionFinishAPIError); ok {
if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.WriteErrorInsufficientSpace {
err = fserrors.NoRetryError(err)
return false, err
}
if retry, err := shouldRetryExclude(ctx, err); !retry {
return retry, err
}
// after the first chunk is uploaded, we retry everything
// after the first chunk is uploaded, we retry everything except the excluded errors
return err != nil, err
})
if err != nil {

View File

@@ -47,6 +47,7 @@ import (
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/lib/batcher"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
@@ -93,7 +94,7 @@ const (
var (
// Description of how to auth for this app
dropboxConfig = &oauth2.Config{
dropboxConfig = &oauthutil.Config{
Scopes: []string{
"files.metadata.write",
"files.content.write",
@@ -108,7 +109,8 @@ var (
// AuthURL: "https://www.dropbox.com/1/oauth2/authorize",
// TokenURL: "https://api.dropboxapi.com/1/oauth2/token",
// },
Endpoint: dropbox.OAuthEndpoint(""),
AuthURL: dropbox.OAuthEndpoint("").AuthURL,
TokenURL: dropbox.OAuthEndpoint("").TokenURL,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
@@ -133,7 +135,7 @@ var (
)
// Gets an oauth config with the right scopes
func getOauthConfig(m configmap.Mapper) *oauth2.Config {
func getOauthConfig(m configmap.Mapper) *oauthutil.Config {
// If not impersonating, use standard scopes
if impersonate, _ := m.Get("impersonate"); impersonate == "" {
return dropboxConfig
@@ -316,32 +318,46 @@ func (f *Fs) Features() *fs.Features {
return f.features
}
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// Some specific errors which should be excluded from retries
func shouldRetryExclude(ctx context.Context, err error) (bool, error) {
if err == nil {
return false, err
}
errString := err.Error()
if fserrors.ContextError(ctx, &err) {
return false, err
}
// First check for specific errors
//
// These come back from the SDK in a whole host of different
// error types, but there doesn't seem to be a consistent way
// of reading the error cause, so here we just check using the
// error string which isn't perfect but does the job.
errString := err.Error()
if strings.Contains(errString, "insufficient_space") {
return false, fserrors.FatalError(err)
} else if strings.Contains(errString, "malformed_path") {
return false, fserrors.NoRetryError(err)
}
return true, err
}
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, err error) (bool, error) {
if retry, err := shouldRetryExclude(ctx, err); !retry {
return retry, err
}
// Then handle any official Retry-After header from Dropbox's SDK
switch e := err.(type) {
case auth.RateLimitAPIError:
if e.RateLimitError.RetryAfter > 0 {
fs.Logf(errString, "Too many requests or write operations. Trying again in %d seconds.", e.RateLimitError.RetryAfter)
fs.Logf(nil, "Error %v. Too many requests or write operations. Trying again in %d seconds.", err, e.RateLimitError.RetryAfter)
err = pacer.RetryAfterError(err, time.Duration(e.RateLimitError.RetryAfter)*time.Second)
}
return true, err
}
// Keep old behavior for backward compatibility
errString := err.Error()
if strings.Contains(errString, "too_many_write_operations") || strings.Contains(errString, "too_many_requests") || errString == "" {
return true, err
}
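As a rough sketch (the SDK call name is hypothetical), this is the pattern the backend now uses: the chunked-upload finish loops call shouldRetryExclude directly, and every other call goes through shouldRetry, which applies the same exclusions before the rate-limit and string-matching checks:

	err = f.pacer.Call(func() (bool, error) {
		result, err = f.srv.SomeCall(arg) // hypothetical SDK call
		// insufficient_space / malformed_path are turned into fatal / no-retry errors
		return shouldRetry(ctx, err)
	})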
@@ -386,7 +402,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
oldToken = strings.TrimSpace(oldToken)
if ok && oldToken != "" && oldToken[0] != '{' {
fs.Infof(name, "Converting token to new format")
newToken := fmt.Sprintf(`{"access_token":"%s","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken)
newToken := fmt.Sprintf(`{"access_token":%q,"token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken)
err := config.SetValueAndSave(name, config.ConfigToken, newToken)
if err != nil {
return nil, fmt.Errorf("NewFS convert token: %w", err)
@@ -1020,13 +1036,20 @@ func (f *Fs) Precision() time.Duration {
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dst fs.Object, err error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
// Find and remove existing object
cleanup, err := operations.RemoveExisting(ctx, f, remote, "server side copy")
if err != nil {
return nil, err
}
defer cleanup(&err)
// Temporary Object under construction
dstObj := &Object{
fs: f,
@@ -1040,7 +1063,6 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
ToPath: f.opt.Enc.FromStandardPath(dstObj.remotePath()),
},
}
var err error
var result *files.RelocationResult
err = f.pacer.Call(func() (bool, error) {
result, err = f.srv.CopyV2(&arg)
@@ -1692,14 +1714,10 @@ func (o *Object) uploadChunked(ctx context.Context, in0 io.Reader, commitInfo *f
err = o.fs.pacer.Call(func() (bool, error) {
entry, err = o.fs.srv.UploadSessionFinish(args, nil)
// If error is insufficient space then don't retry
if e, ok := err.(files.UploadSessionFinishAPIError); ok {
if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.WriteErrorInsufficientSpace {
err = fserrors.NoRetryError(err)
return false, err
}
if retry, err := shouldRetryExclude(ctx, err); !retry {
return retry, err
}
// after the first chunk is uploaded, we retry everything
// after the first chunk is uploaded, we retry everything except the excluded errors
return err != nil, err
})
if err != nil {

View File

@@ -61,7 +61,7 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
return false, err // No such user
case 186:
return false, err // IP blocked?
case 374:
case 374, 412: // Flood detected seems to be #412 now
fs.Debugf(nil, "Sleeping for 30 seconds due to: %v", err)
time.Sleep(30 * time.Second)
default:

View File

@@ -441,23 +441,28 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
srcFs := srcObj.fs
// Find current directory ID
_, currentDirectoryID, err := f.dirCache.FindPath(ctx, remote, false)
srcLeaf, srcDirectoryID, err := srcFs.dirCache.FindPath(ctx, srcObj.remote, false)
if err != nil {
return nil, err
}
// Create temporary object
dstObj, leaf, directoryID, err := f.createObject(ctx, remote)
dstObj, dstLeaf, dstDirectoryID, err := f.createObject(ctx, remote)
if err != nil {
return nil, err
}
// If it is in the correct directory, just rename it
var url string
if currentDirectoryID == directoryID {
resp, err := f.renameFile(ctx, srcObj.file.URL, leaf)
if srcDirectoryID == dstDirectoryID {
// No rename needed
if srcLeaf == dstLeaf {
return src, nil
}
resp, err := f.renameFile(ctx, srcObj.file.URL, dstLeaf)
if err != nil {
return nil, fmt.Errorf("couldn't rename file: %w", err)
}
@@ -466,11 +471,16 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
url = resp.URLs[0].URL
} else {
folderID, err := strconv.Atoi(directoryID)
dstFolderID, err := strconv.Atoi(dstDirectoryID)
if err != nil {
return nil, err
}
resp, err := f.moveFile(ctx, srcObj.file.URL, folderID, leaf)
rename := dstLeaf
// No rename needed
if srcLeaf == dstLeaf {
rename = ""
}
resp, err := f.moveFile(ctx, srcObj.file.URL, dstFolderID, rename)
if err != nil {
return nil, fmt.Errorf("couldn't move file: %w", err)
}

View File

@@ -0,0 +1,901 @@
// Package filescom provides an interface to the Files.com
// object storage system.
package filescom
import (
"context"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"path"
"strings"
"time"
files_sdk "github.com/Files-com/files-sdk-go/v3"
"github.com/Files-com/files-sdk-go/v3/bundle"
"github.com/Files-com/files-sdk-go/v3/file"
file_migration "github.com/Files-com/files-sdk-go/v3/filemigration"
"github.com/Files-com/files-sdk-go/v3/folder"
"github.com/Files-com/files-sdk-go/v3/session"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
)
/*
Run of rclone info
stringNeedsEscaping = []rune{
'/', '\x00'
}
maxFileLength = 512 // for 1 byte unicode characters
maxFileLength = 512 // for 2 byte unicode characters
maxFileLength = 512 // for 3 byte unicode characters
maxFileLength = 512 // for 4 byte unicode characters
canWriteUnnormalized = true
canReadUnnormalized = true
canReadRenormalized = true
canStream = true
*/
const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
folderNotEmpty = "processing-failure/folder-not-empty"
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "filescom",
Description: "Files.com",
NewFs: NewFs,
Options: []fs.Option{
{
Name: "site",
Help: "Your site subdomain (e.g. mysite) or custom domain (e.g. myfiles.customdomain.com).",
}, {
Name: "username",
Help: "The username used to authenticate with Files.com.",
}, {
Name: "password",
Help: "The password used to authenticate with Files.com.",
IsPassword: true,
}, {
Name: "api_key",
Help: "The API key used to authenticate with Files.com.",
Advanced: true,
Sensitive: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.Display |
encoder.EncodeBackSlash |
encoder.EncodeRightSpace |
encoder.EncodeRightCrLfHtVt |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Site string `config:"site"`
Username string `config:"username"`
Password string `config:"password"`
APIKey string `config:"api_key"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote files.com server
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
fileClient *file.Client // the connection to the file API
folderClient *folder.Client // the connection to the folder API
migrationClient *file_migration.Client // the connection to the file migration API
bundleClient *bundle.Client // the connection to the bundle API
pacer *fs.Pacer // pacer for API calls
}
// Object describes a files object
//
// Will definitely have info but maybe not meta
type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
size int64 // size of the object
crc32 string // CRC32 of the object content
md5 string // MD5 of the object content
mimeType string // Content-Type of the object
modTime time.Time // modification time of the object
}
// ------------------------------------------------------------
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("files root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Encode remote and turn it into an absolute path in the share
func (f *Fs) absPath(remote string) string {
return f.opt.Enc.FromStandardPath(path.Join(f.root, remote))
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
429, // Too Many Requests.
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if apiErr, ok := err.(files_sdk.ResponseError); ok {
for _, e := range retryErrorCodes {
if apiErr.HttpCode == e {
fs.Debugf(nil, "Retrying API error %v", err)
return true, err
}
}
}
return fserrors.ShouldRetry(err), err
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *files_sdk.File, err error) {
params := files_sdk.FileFindParams{
Path: f.absPath(path),
}
var file files_sdk.File
err = f.pacer.Call(func() (bool, error) {
file, err = f.fileClient.Find(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
return &file, nil
}
// NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
root = strings.Trim(root, "/")
config, err := newClientConfig(ctx, opt)
if err != nil {
return nil, err
}
f := &Fs{
name: name,
root: root,
opt: *opt,
fileClient: &file.Client{Config: config},
folderClient: &folder.Client{Config: config},
migrationClient: &file_migration.Client{Config: config},
bundleClient: &bundle.Client{Config: config},
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
CaseInsensitive: true,
CanHaveEmptyDirectories: true,
ReadMimeType: true,
DirModTimeUpdatesOnWrite: true,
}).Fill(ctx, f)
if f.root != "" {
info, err := f.readMetaDataForPath(ctx, "")
if err == nil && !info.IsDir() {
f.root = path.Dir(f.root)
if f.root == "." {
f.root = ""
}
return f, fs.ErrorIsFile
}
}
return f, err
}
func newClientConfig(ctx context.Context, opt *Options) (config files_sdk.Config, err error) {
if opt.Site != "" {
if strings.Contains(opt.Site, ".") {
config.EndpointOverride = opt.Site
} else {
config.Subdomain = opt.Site
}
_, err = url.ParseRequestURI(config.Endpoint())
if err != nil {
err = fmt.Errorf("invalid domain or subdomain: %v", opt.Site)
return
}
}
config = config.Init().SetCustomClient(fshttp.NewClient(ctx))
if opt.APIKey != "" {
config.APIKey = opt.APIKey
return
}
if opt.Username == "" {
err = errors.New("username not found")
return
}
if opt.Password == "" {
err = errors.New("password not found")
return
}
opt.Password, err = obscure.Reveal(opt.Password)
if err != nil {
return
}
sessionClient := session.Client{Config: config}
params := files_sdk.SessionCreateParams{
Username: opt.Username,
Password: opt.Password,
}
thisSession, err := sessionClient.Create(params, files_sdk.WithContext(ctx))
if err != nil {
err = fmt.Errorf("couldn't create session: %w", err)
return
}
config.SessionId = thisSession.Id
return
}
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, file *files_sdk.File) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
}
var err error
if file != nil {
err = o.setMetaData(file)
} else {
err = o.readMetaData(ctx) // reads info and meta, returning an error
}
if err != nil {
return nil, err
}
return o, nil
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(ctx, remote, nil)
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
var it *folder.Iter
params := files_sdk.FolderListForParams{
Path: f.absPath(dir),
}
err = f.pacer.Call(func() (bool, error) {
it, err = f.folderClient.ListFor(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("couldn't list files: %w", err)
}
for it.Next() {
item := ptr(it.File())
remote := f.opt.Enc.ToStandardPath(item.DisplayName)
remote = path.Join(dir, remote)
if remote == dir {
continue
}
if item.IsDir() {
d := fs.NewDir(remote, item.ModTime())
entries = append(entries, d)
} else {
o, err := f.newObjectWithInfo(ctx, remote, item)
if err != nil {
return nil, err
}
entries = append(entries, o)
}
}
err = it.Err()
if files_sdk.IsNotExist(err) {
return nil, fs.ErrorDirNotFound
}
return
}
// Creates from the parameters passed in a half finished Object which
// must have setMetaData called on it
//
// Returns the object and error.
//
// Used to create new objects
func (f *Fs) createObject(ctx context.Context, remote string) (o *Object, err error) {
// Create the directory for the object if it doesn't exist
err = f.mkParentDir(ctx, remote)
if err != nil {
return
}
// Temporary Object under construction
o = &Object{
fs: f,
remote: remote,
}
return o, nil
}
// Put the object
//
// Copy the reader in to the new object which is returned.
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
// Temporary Object under construction
fs := &Object{
fs: f,
remote: src.Remote(),
}
return fs, fs.Update(ctx, in, src, options...)
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(ctx, in, src, options...)
}
func (f *Fs) mkdir(ctx context.Context, path string) error {
if path == "" || path == "." {
return nil
}
params := files_sdk.FolderCreateParams{
Path: path,
MkdirParents: ptr(true),
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.folderClient.Create(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if files_sdk.IsExist(err) {
return nil
}
return err
}
// Make the parent directory of remote
func (f *Fs) mkParentDir(ctx context.Context, remote string) error {
return f.mkdir(ctx, path.Dir(f.absPath(remote)))
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.mkdir(ctx, f.absPath(dir))
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
o := Object{
fs: f,
remote: dir,
}
return o.SetModTime(ctx, modTime)
}
// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
path := f.absPath(dir)
if path == "" || path == "." {
return errors.New("can't purge root directory")
}
params := files_sdk.FileDeleteParams{
Path: path,
Recursive: ptr(!check),
}
err := f.pacer.Call(func() (bool, error) {
err := f.fileClient.Delete(params, files_sdk.WithContext(ctx))
// Allow for eventual consistency deletion of child objects.
if isFolderNotEmpty(err) {
return true, err
}
return shouldRetry(ctx, err)
})
if err != nil {
if files_sdk.IsNotExist(err) {
return fs.ErrorDirNotFound
} else if isFolderNotEmpty(err) {
return fs.ErrorDirectoryNotEmpty
}
return fmt.Errorf("rmdir failed: %w", err)
}
return nil
}
// Rmdir deletes the root folder
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, true)
}
// Precision return the precision of this Fs
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Copy src to this remote using server-side copy operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dstObj fs.Object, err error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
err = srcObj.readMetaData(ctx)
if err != nil {
return
}
srcPath := srcObj.fs.absPath(srcObj.remote)
dstPath := f.absPath(remote)
if strings.EqualFold(srcPath, dstPath) {
return nil, fmt.Errorf("can't copy %q -> %q as are same name when lowercase", srcPath, dstPath)
}
// Create temporary object
dstObj, err = f.createObject(ctx, remote)
if err != nil {
return
}
// Copy the object
params := files_sdk.FileCopyParams{
Path: srcPath,
Destination: dstPath,
Overwrite: ptr(true),
}
var action files_sdk.FileAction
err = f.pacer.Call(func() (bool, error) {
action, err = f.fileClient.Copy(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return
}
err = f.waitForAction(ctx, action, "copy")
if err != nil {
return
}
err = dstObj.SetModTime(ctx, srcObj.modTime)
return
}
// Purge deletes all the files and the container
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// move a file or folder
func (f *Fs) move(ctx context.Context, src *Fs, srcRemote string, dstRemote string) (info *files_sdk.File, err error) {
// Move the object
params := files_sdk.FileMoveParams{
Path: src.absPath(srcRemote),
Destination: f.absPath(dstRemote),
}
var action files_sdk.FileAction
err = f.pacer.Call(func() (bool, error) {
action, err = f.fileClient.Move(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
err = f.waitForAction(ctx, action, "move")
if err != nil {
return nil, err
}
info, err = f.readMetaDataForPath(ctx, dstRemote)
return
}
func (f *Fs) waitForAction(ctx context.Context, action files_sdk.FileAction, operation string) (err error) {
var migration files_sdk.FileMigration
err = f.pacer.Call(func() (bool, error) {
migration, err = f.migrationClient.Wait(action, func(migration files_sdk.FileMigration) {
// noop
}, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err == nil && migration.Status != "completed" {
return fmt.Errorf("%v did not complete successfully: %v", operation, migration.Status)
}
return
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
// Create temporary object
dstObj, err := f.createObject(ctx, remote)
if err != nil {
return nil, err
}
// Do the move
info, err := f.move(ctx, srcObj.fs, srcObj.remote, dstObj.remote)
if err != nil {
return nil, err
}
err = dstObj.setMetaData(info)
if err != nil {
return nil, err
}
return dstObj, nil
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) (err error) {
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
// Check if destination exists
_, err = f.readMetaDataForPath(ctx, dstRemote)
if err == nil {
return fs.ErrorDirExists
}
// Create temporary object
dstObj, err := f.createObject(ctx, dstRemote)
if err != nil {
return
}
// Do the move
_, err = f.move(ctx, srcFs, srcRemote, dstObj.remote)
return
}
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (url string, err error) {
params := files_sdk.BundleCreateParams{
Paths: []string{f.absPath(remote)},
}
if expire < fs.DurationOff {
params.ExpiresAt = ptr(time.Now().Add(time.Duration(expire)))
}
var bundle files_sdk.Bundle
err = f.pacer.Call(func() (bool, error) {
bundle, err = f.bundleClient.Create(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
url = bundle.Url
return
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.NewHashSet(hash.CRC32, hash.MD5)
}
// ------------------------------------------------------------
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Hash returns the MD5 of an object returning a lowercase hex string
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
switch t {
case hash.CRC32:
if o.crc32 == "" {
return "", nil
}
return fmt.Sprintf("%08s", o.crc32), nil
case hash.MD5:
return o.md5, nil
}
return "", hash.ErrUnsupported
}
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
return o.size
}
// setMetaData sets the metadata from info
func (o *Object) setMetaData(file *files_sdk.File) error {
o.modTime = file.ModTime()
if !file.IsDir() {
o.size = file.Size
o.crc32 = file.Crc32
o.md5 = file.Md5
o.mimeType = file.MimeType
}
return nil
}
// readMetaData gets the metadata if it hasn't already been fetched
//
// it also sets the info
func (o *Object) readMetaData(ctx context.Context) (err error) {
file, err := o.fs.readMetaDataForPath(ctx, o.remote)
if err != nil {
if files_sdk.IsNotExist(err) {
return fs.ErrorObjectNotFound
}
return err
}
if file.IsDir() {
return fs.ErrorIsDir
}
return o.setMetaData(file)
}
// ModTime returns the modification time of the object
//
// It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error) {
params := files_sdk.FileUpdateParams{
Path: o.fs.absPath(o.remote),
ProvidedMtime: &modTime,
}
var file files_sdk.File
err = o.fs.pacer.Call(func() (bool, error) {
file, err = o.fs.fileClient.Update(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return err
}
return o.setMetaData(&file)
}
// Storable returns a boolean showing whether this object storable
func (o *Object) Storable() bool {
return true
}
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
// Offset and Count for range download
var offset, count int64
fs.FixRangeOption(options, o.size)
for _, option := range options {
switch x := option.(type) {
case *fs.RangeOption:
offset, count = x.Decode(o.size)
if count < 0 {
count = o.size - offset
}
case *fs.SeekOption:
offset = x.Offset
count = o.size - offset
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
params := files_sdk.FileDownloadParams{
Path: o.fs.absPath(o.remote),
}
headers := &http.Header{}
headers.Set("Range", fmt.Sprintf("bytes=%v-%v", offset, offset+count-1))
err = o.fs.pacer.Call(func() (bool, error) {
_, err = o.fs.fileClient.Download(
params,
files_sdk.WithContext(ctx),
files_sdk.RequestHeadersOption(headers),
files_sdk.ResponseBodyOption(func(closer io.ReadCloser) error {
in = closer
return err
}),
)
return shouldRetry(ctx, err)
})
return
}
// Returns a pointer to t - useful for returning pointers to constants
func ptr[T any](t T) *T {
return &t
}
func isFolderNotEmpty(err error) bool {
var re files_sdk.ResponseError
ok := errors.As(err, &re)
return ok && re.Type == folderNotEmpty
}
// Update the object with the contents of the io.Reader, modTime and size
//
// If existing is set then it updates the object rather than creating a new one.
//
// The new object may have been created if an error is returned.
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
uploadOpts := []file.UploadOption{
file.UploadWithContext(ctx),
file.UploadWithReader(in),
file.UploadWithDestinationPath(o.fs.absPath(o.remote)),
file.UploadWithProvidedMtime(src.ModTime(ctx)),
}
err := o.fs.pacer.Call(func() (bool, error) {
err := o.fs.fileClient.Upload(uploadOpts...)
return shouldRetry(ctx, err)
})
if err != nil {
return err
}
return o.readMetaData(ctx)
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
params := files_sdk.FileDeleteParams{
Path: o.fs.absPath(o.remote),
}
return o.fs.pacer.Call(func() (bool, error) {
err := o.fs.fileClient.Delete(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
}
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType(ctx context.Context) string {
return o.mimeType
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
)

View File

@@ -0,0 +1,17 @@
// Test Files filesystem interface
package filescom_test
import (
"testing"
"github.com/rclone/rclone/backend/filescom"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFilesCom:",
NilObject: (*filescom.Object)(nil),
})
}

View File

@@ -85,7 +85,7 @@ to an encrypted one. Cannot be used in combination with implicit FTPS.`,
Default: false,
}, {
Name: "concurrency",
Help: strings.Replace(`Maximum number of FTP simultaneous connections, 0 for unlimited.
Help: strings.ReplaceAll(`Maximum number of FTP simultaneous connections, 0 for unlimited.
Note that setting this is very likely to cause deadlocks so it should
be used with care.
@@ -99,7 +99,7 @@ maximum of |--checkers| and |--transfers|.
So for |concurrency 3| you'd use |--checkers 2 --transfers 2
--check-first| or |--checkers 1 --transfers 1|.
`, "|", "`", -1),
`, "|", "`"),
Default: 0,
Advanced: true,
}, {
@@ -180,12 +180,28 @@ If this is set and no password is supplied then rclone will ask for a password
Default: "",
Help: `Socks 5 proxy host.
Supports the format user:pass@host:port, user@host:port, host:port.
Supports the format user:pass@host:port, user@host:port, host:port.
Example:
Example:
myUser:myPass@localhost:9005
`,
myUser:myPass@localhost:9005
`,
Advanced: true,
}, {
Name: "no_check_upload",
Default: false,
Help: `Don't check the upload is OK
Normally rclone will try to check the upload exists after it has
uploaded a file to make sure the size and modification time are as
expected.
This flag stops rclone doing these checks. This enables uploading to
folders which are write only.
You will likely need to use the --inplace flag also if uploading to
a write only folder.
`,
Advanced: true,
}, {
Name: config.ConfigEncoding,
@@ -232,6 +248,7 @@ type Options struct {
AskPassword bool `config:"ask_password"`
Enc encoder.MultiEncoder `config:"encoding"`
SocksProxy string `config:"socks_proxy"`
NoCheckUpload bool `config:"no_check_upload"`
}
// Fs represents a remote FTP server
@@ -1303,6 +1320,16 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return fmt.Errorf("update stor: %w", err)
}
o.fs.putFtpConnection(&c, nil)
if o.fs.opt.NoCheckUpload {
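// Skip the post-upload verification and trust the source's size and modtime, so uploads to write-only folders can succeed.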
o.info = &FileInfo{
Name: o.remote,
Size: uint64(src.Size()),
ModTime: src.ModTime(ctx),
precise: true,
IsDir: false,
}
return nil
}
if err = o.SetModTime(ctx, src.ModTime(ctx)); err != nil {
return fmt.Errorf("SetModTime: %w", err)
}

backend/gofile/api/types.go (new file, 311 lines)
View File

@@ -0,0 +1,311 @@
// Package api has type definitions for gofile
//
// Converted from the API docs with help from https://mholt.github.io/json-to-go/
package api
import (
"fmt"
"time"
)
const (
// 2017-05-03T07:26:10-07:00
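// The surrounding quotes are part of the format so the raw JSON bytes (quotes included) can be parsed and emitted directly.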
timeFormat = `"` + time.RFC3339 + `"`
)
// Time represents date and time information for the
// gofile API, by using RFC3339
type Time time.Time
// MarshalJSON turns a Time into JSON (in UTC)
func (t *Time) MarshalJSON() (out []byte, err error) {
timeString := (*time.Time)(t).Format(timeFormat)
return []byte(timeString), nil
}
// UnmarshalJSON turns JSON into a Time
func (t *Time) UnmarshalJSON(data []byte) error {
newT, err := time.Parse(timeFormat, string(data))
if err != nil {
return err
}
*t = Time(newT)
return nil
}
// Error is returned from gofile when things go wrong
type Error struct {
Status string `json:"status"`
}
// Error returns a string for the error and satisfies the error interface
func (e Error) Error() string {
out := fmt.Sprintf("Error %q", e.Status)
return out
}
// IsError returns true if there is an error
func (e Error) IsError() bool {
return e.Status != "ok"
}
// Err returns err if not nil, or e if IsError or nil
func (e Error) Err(err error) error {
if err != nil {
return err
}
if e.IsError() {
return e
}
return nil
}
// Check Error satisfies the error interface
var _ error = (*Error)(nil)
// Types of things in Item
const (
ItemTypeFolder = "folder"
ItemTypeFile = "file"
)
// Item describes a folder or a file as returned by /contents
type Item struct {
ID string `json:"id"`
ParentFolder string `json:"parentFolder"`
Type string `json:"type"`
Name string `json:"name"`
Size int64 `json:"size"`
Code string `json:"code"`
CreateTime int64 `json:"createTime"`
ModTime int64 `json:"modTime"`
Link string `json:"link"`
MD5 string `json:"md5"`
MimeType string `json:"mimetype"`
ChildrenCount int `json:"childrenCount"`
DirectLinks map[string]*DirectLink `json:"directLinks"`
//Public bool `json:"public"`
//ServerSelected string `json:"serverSelected"`
//Thumbnail string `json:"thumbnail"`
//DownloadCount int `json:"downloadCount"`
//TotalDownloadCount int64 `json:"totalDownloadCount"`
//TotalSize int64 `json:"totalSize"`
//ChildrenIDs []string `json:"childrenIds"`
Children map[string]*Item `json:"children"`
}
// ToNativeTime converts a go time to a native time
func ToNativeTime(t time.Time) int64 {
return t.Unix()
}
// FromNativeTime converts native time to a go time
func FromNativeTime(t int64) time.Time {
return time.Unix(t, 0)
}
// DirectLink describes a direct link to a file so it can be
// downloaded by third parties.
type DirectLink struct {
ExpireTime int64 `json:"expireTime"`
SourceIpsAllowed []any `json:"sourceIpsAllowed"`
DomainsAllowed []any `json:"domainsAllowed"`
Auth []any `json:"auth"`
IsReqLink bool `json:"isReqLink"`
DirectLink string `json:"directLink"`
}
// Contents is returned from the /contents call
type Contents struct {
Error
Data struct {
Item
} `json:"data"`
Metadata Metadata `json:"metadata"`
}
// Metadata is returned when paging is in use
type Metadata struct {
TotalCount int `json:"totalCount"`
TotalPages int `json:"totalPages"`
Page int `json:"page"`
PageSize int `json:"pageSize"`
HasNextPage bool `json:"hasNextPage"`
}
// AccountsGetID is the result of /accounts/getid
type AccountsGetID struct {
Error
Data struct {
ID string `json:"id"`
} `json:"data"`
}
// Stats of storage and traffic
type Stats struct {
FolderCount int64 `json:"folderCount"`
FileCount int64 `json:"fileCount"`
Storage int64 `json:"storage"`
TrafficDirectGenerated int64 `json:"trafficDirectGenerated"`
TrafficReqDownloaded int64 `json:"trafficReqDownloaded"`
TrafficWebDownloaded int64 `json:"trafficWebDownloaded"`
}
// AccountsGet is the result of /accounts/{id}
type AccountsGet struct {
Error
Data struct {
ID string `json:"id"`
Email string `json:"email"`
Tier string `json:"tier"`
PremiumType string `json:"premiumType"`
Token string `json:"token"`
RootFolder string `json:"rootFolder"`
SubscriptionProvider string `json:"subscriptionProvider"`
SubscriptionEndDate int `json:"subscriptionEndDate"`
SubscriptionLimitDirectTraffic int64 `json:"subscriptionLimitDirectTraffic"`
SubscriptionLimitStorage int64 `json:"subscriptionLimitStorage"`
StatsCurrent Stats `json:"statsCurrent"`
// StatsHistory map[int]map[int]map[int]Stats `json:"statsHistory"`
} `json:"data"`
}
// CreateFolderRequest is the input to /contents/createFolder
type CreateFolderRequest struct {
ParentFolderID string `json:"parentFolderId"`
FolderName string `json:"folderName"`
ModTime int64 `json:"modTime,omitempty"`
}
// CreateFolderResponse is the output from /contents/createFolder
type CreateFolderResponse struct {
Error
Data Item `json:"data"`
}
// DeleteRequest is the input to DELETE /contents
type DeleteRequest struct {
ContentsID string `json:"contentsId"` // comma separated list of IDs
}
// DeleteResponse is the input to DELETE /contents
type DeleteResponse struct {
Error
Data map[string]Error
}
// Server is an upload server
type Server struct {
Name string `json:"name"`
Zone string `json:"zone"`
}
// String returns a string representation of the Server
func (s *Server) String() string {
return fmt.Sprintf("%s (%s)", s.Name, s.Zone)
}
// Root returns the root URL for the server
func (s *Server) Root() string {
return fmt.Sprintf("https://%s.gofile.io/", s.Name)
}
// URL returns the upload URL for the server
func (s *Server) URL() string {
return fmt.Sprintf("https://%s.gofile.io/contents/uploadfile", s.Name)
}
// ServersResponse is the output from /servers
type ServersResponse struct {
Error
Data struct {
Servers []Server `json:"servers"`
} `json:"data"`
}
// UploadResponse is returned by POST /contents/uploadfile
type UploadResponse struct {
Error
Data Item `json:"data"`
}
// DirectLinksRequest specifies the parameters for the direct link
type DirectLinksRequest struct {
ExpireTime int64 `json:"expireTime,omitempty"`
SourceIpsAllowed []any `json:"sourceIpsAllowed,omitempty"`
DomainsAllowed []any `json:"domainsAllowed,omitempty"`
Auth []any `json:"auth,omitempty"`
}
// DirectLinksResult is returned from POST /contents/{id}/directlinks
type DirectLinksResult struct {
Error
Data struct {
ExpireTime int64 `json:"expireTime"`
SourceIpsAllowed []any `json:"sourceIpsAllowed"`
DomainsAllowed []any `json:"domainsAllowed"`
Auth []any `json:"auth"`
IsReqLink bool `json:"isReqLink"`
ID string `json:"id"`
DirectLink string `json:"directLink"`
} `json:"data"`
}
// UpdateItemRequest describes the updates to be done to an item for PUT /contents/{id}/update
//
// The Value of the attribute to define :
// For Attribute "name" : The name of the content (file or folder)
// For Attribute "description" : The description displayed on the download page (folder only)
// For Attribute "tags" : A comma-separated list of tags (folder only)
// For Attribute "public" : either true or false (folder only)
// For Attribute "expiry" : A unix timestamp of the expiration date (folder only)
// For Attribute "password" : The password to set (folder only)
type UpdateItemRequest struct {
Attribute string `json:"attribute"`
Value any `json:"attributeValue"`
}
// UpdateItemResponse is returned by PUT /contents/{id}/update
type UpdateItemResponse struct {
Error
Data Item `json:"data"`
}
// MoveRequest is the input to /contents/move
type MoveRequest struct {
FolderID string `json:"folderId"`
ContentsID string `json:"contentsId"` // comma separated list of IDs
}
// MoveResponse is returned by POST /contents/move
type MoveResponse struct {
Error
Data map[string]struct {
Error
Item `json:"data"`
} `json:"data"`
}
// CopyRequest is the input to /contents/copy
type CopyRequest struct {
FolderID string `json:"folderId"`
ContentsID string `json:"contentsId"` // comma separated list of IDs
}
// CopyResponse is returned by POST /contents/copy
type CopyResponse struct {
Error
Data map[string]struct {
Error
Item `json:"data"`
} `json:"data"`
}
// UploadServerStatus is returned when fetching the root of an upload server
type UploadServerStatus struct {
Error
Data struct {
Server string `json:"server"`
Test string `json:"test"`
} `json:"data"`
}

backend/gofile/gofile.go (new file, 1659 lines)

File diff suppressed because it is too large

View File

@@ -0,0 +1,17 @@
// Test Gofile filesystem interface
package gofile_test
import (
"testing"
"github.com/rclone/rclone/backend/gofile"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestGoFile:",
NilObject: (*gofile.Object)(nil),
})
}

View File

@@ -62,9 +62,10 @@ const (
var (
// Description of how to auth for this app
storageConfig = &oauth2.Config{
storageConfig = &oauthutil.Config{
Scopes: []string{storage.DevstorageReadWriteScope},
Endpoint: google.Endpoint,
AuthURL: google.Endpoint.AuthURL,
TokenURL: google.Endpoint.TokenURL,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL,
@@ -106,6 +107,12 @@ func init() {
Help: "Service Account Credentials JSON blob.\n\nLeave blank normally.\nNeeded only if you want use SA instead of interactive login.",
Hide: fs.OptionHideBoth,
Sensitive: true,
}, {
Name: "access_token",
Help: "Short-lived access token.\n\nLeave blank normally.\nNeeded only if you want use short-lived access token instead of interactive login.",
Hide: fs.OptionHideConfigurator,
Sensitive: true,
Advanced: true,
}, {
Name: "anonymous",
Help: "Access public buckets and objects without credentials.\n\nSet to 'true' if you just want to download files and don't configure credentials.",
@@ -379,6 +386,7 @@ type Options struct {
Enc encoder.MultiEncoder `config:"encoding"`
EnvAuth bool `config:"env_auth"`
DirectoryMarkers bool `config:"directory_markers"`
AccessToken string `config:"access_token"`
}
// Fs represents a remote storage server
@@ -535,6 +543,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("failed to configure Google Cloud Storage: %w", err)
}
} else if opt.AccessToken != "" {
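// A short-lived access token was supplied, so use a static token source rather than the interactive OAuth flow.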
ts := oauth2.Token{AccessToken: opt.AccessToken}
oAuthClient = oauth2.NewClient(ctx, oauth2.StaticTokenSource(&ts))
} else {
oAuthClient, _, err = oauthutil.NewClient(ctx, name, m, storageConfig)
if err != nil {
@@ -944,7 +955,6 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
return e
}
return f.createDirectoryMarker(ctx, bucket, dir)
}
// mkdirParent creates the parent bucket/directory if it doesn't exist

View File

@@ -28,13 +28,11 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/lib/batcher"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
)
@@ -61,13 +59,14 @@ const (
var (
// Description of how to auth for this app
oauthConfig = &oauth2.Config{
oauthConfig = &oauthutil.Config{
Scopes: []string{
"openid",
"profile",
scopeReadWrite, // this must be at position scopeAccess
},
Endpoint: google.Endpoint,
AuthURL: google.Endpoint.AuthURL,
TokenURL: google.Endpoint.TokenURL,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL,
@@ -160,6 +159,34 @@ listings and transferred.
Without this flag, archived media will not be visible in directory
listings and won't be transferred.`,
Advanced: true,
}, {
Name: "proxy",
Default: "",
Help: strings.ReplaceAll(`Use the gphotosdl proxy for downloading the full resolution images
The Google API will deliver images and video which aren't full
resolution, and/or have EXIF data missing.
However if you use the gphotosdl proxy then you can download original,
unchanged images.
This runs a headless browser in the background.
Download the software from [gphotosdl](https://github.com/rclone/gphotosdl)
First run with
gphotosdl -login
Then once you have logged into google photos close the browser window
and run
gphotosdl
Then supply the parameter |--gphotos-proxy "http://localhost:8282"| to make
rclone use the proxy.
`, "|", "`"),
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -181,6 +208,7 @@ type Options struct {
BatchMode string `config:"batch_mode"`
BatchSize int `config:"batch_size"`
BatchTimeout fs.Duration `config:"batch_timeout"`
Proxy string `config:"proxy"`
}
// Fs represents a remote storage server
@@ -454,7 +482,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Med
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
defer log.Trace(f, "remote=%q", remote)("")
// defer log.Trace(f, "remote=%q", remote)("")
return f.newObjectWithInfo(ctx, remote, nil)
}
@@ -667,7 +695,7 @@ func (f *Fs) listUploads(ctx context.Context, dir string) (entries fs.DirEntries
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
defer log.Trace(f, "dir=%q", dir)("err=%v", &err)
// defer log.Trace(f, "dir=%q", dir)("err=%v", &err)
match, prefix, pattern := patterns.match(f.root, dir, false)
if pattern == nil || pattern.isFile {
return nil, fs.ErrorDirNotFound
@@ -684,7 +712,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
defer log.Trace(f, "src=%+v", src)("")
// defer log.Trace(f, "src=%+v", src)("")
// Temporary Object under construction
o := &Object{
fs: f,
@@ -737,7 +765,7 @@ func (f *Fs) getOrCreateAlbum(ctx context.Context, albumTitle string) (album *ap
// Mkdir creates the album if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
defer log.Trace(f, "dir=%q", dir)("err=%v", &err)
// defer log.Trace(f, "dir=%q", dir)("err=%v", &err)
match, prefix, pattern := patterns.match(f.root, dir, false)
if pattern == nil {
return fs.ErrorDirNotFound
@@ -761,7 +789,7 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
defer log.Trace(f, "dir=%q")("err=%v", &err)
// defer log.Trace(f, "dir=%q")("err=%v", &err)
match, _, pattern := patterns.match(f.root, dir, false)
if pattern == nil {
return fs.ErrorDirNotFound
@@ -834,7 +862,7 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
defer log.Trace(o, "")("")
// defer log.Trace(o, "")("")
if !o.fs.opt.ReadSize || o.bytes >= 0 {
return o.bytes
}
@@ -935,7 +963,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
// It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) time.Time {
defer log.Trace(o, "")("")
// defer log.Trace(o, "")("")
err := o.readMetaData(ctx)
if err != nil {
fs.Debugf(o, "ModTime: Failed to read metadata: %v", err)
@@ -965,16 +993,20 @@ func (o *Object) downloadURL() string {
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
defer log.Trace(o, "")("")
// defer log.Trace(o, "")("")
err = o.readMetaData(ctx)
if err != nil {
fs.Debugf(o, "Open: Failed to read metadata: %v", err)
return nil, err
}
url := o.downloadURL()
if o.fs.opt.Proxy != "" {
url = strings.TrimRight(o.fs.opt.Proxy, "/") + "/id/" + o.id
}
var resp *http.Response
opts := rest.Opts{
Method: "GET",
RootURL: o.downloadURL(),
RootURL: url,
Options: options,
}
err = o.fs.pacer.Call(func() (bool, error) {
@@ -1067,7 +1099,7 @@ func (f *Fs) commitBatch(ctx context.Context, items []uploadedItem, results []*a
//
// The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
defer log.Trace(o, "src=%+v", src)("err=%v", &err)
// defer log.Trace(o, "src=%+v", src)("err=%v", &err)
match, _, pattern := patterns.match(o.fs.root, o.remote, true)
if pattern == nil || !pattern.isFile || !pattern.canUpload {
return errCantUpload
@@ -1136,7 +1168,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
errors := make([]error, 1)
results := make([]*api.MediaItem, 1)
err = o.fs.commitBatch(ctx, []uploadedItem{uploaded}, results, errors)
if err != nil {
if err == nil {
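// The batch commit itself succeeded, so surface the per-item error and result.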
err = errors[0]
info = results[0]
}

View File

@@ -2,6 +2,7 @@ package googlephotos
import (
"context"
"errors"
"fmt"
"io"
"net/http"
@@ -35,7 +36,7 @@ func TestIntegration(t *testing.T) {
*fstest.RemoteName = "TestGooglePhotos:"
}
f, err := fs.NewFs(ctx, *fstest.RemoteName)
if err == fs.ErrorNotFoundInConfigFile {
if errors.Is(err, fs.ErrorNotFoundInConfigFile) {
t.Skipf("Couldn't create google photos backend - skipping tests: %v", err)
}
require.NoError(t, err)

View File

@@ -31,7 +31,6 @@ import (
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
"golang.org/x/oauth2"
)
const (
@@ -48,11 +47,9 @@ const (
// Globals
var (
// Description of how to auth for this app.
oauthConfig = &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: "https://my.hidrive.com/client/authorize",
TokenURL: "https://my.hidrive.com/oauth2/token",
},
oauthConfig = &oauthutil.Config{
AuthURL: "https://my.hidrive.com/client/authorize",
TokenURL: "https://my.hidrive.com/oauth2/token",
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.TitleBarRedirectURL,

View File

@@ -331,12 +331,13 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// Join's the remote onto the base URL
func (f *Fs) url(remote string) string {
trimmedRemote := strings.TrimLeft(remote, "/") // remove leading "/" since we always have it in f.endpointURL
if f.opt.NoEscape {
// Directly concatenate without escaping, no_escape behavior
return f.endpointURL + remote
return f.endpointURL + trimmedRemote
}
// Default behavior
return f.endpointURL + rest.URLPathEscape(remote)
return f.endpointURL + rest.URLPathEscape(trimmedRemote)
}
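Illustrative only, not part of the diff: with the TrimLeft above, a remote passed with a leading slash no longer doubles the separator against the endpoint's trailing "/". A minimal standalone sketch, assuming an example endpoint of "https://example.com/base/":
package main

import (
	"fmt"
	"strings"

	"github.com/rclone/rclone/lib/rest"
)

func main() {
	endpointURL := "https://example.com/base/" // example endpoint, always ends in "/"
	remote := "/four/under four.txt"           // remote supplied with a leading slash
	trimmed := strings.TrimLeft(remote, "/")
	fmt.Println(endpointURL + rest.URLPathEscape(trimmed))
	// prints https://example.com/base/four/under%20four.txt - previously this produced ".../base//four/..."
}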
// Errors returned by parseName

View File

@@ -191,6 +191,33 @@ func TestNewObject(t *testing.T) {
assert.Equal(t, fs.ErrorObjectNotFound, err)
}
func TestNewObjectWithLeadingSlash(t *testing.T) {
f := prepare(t)
o, err := f.NewObject(context.Background(), "/four/under four.txt")
require.NoError(t, err)
assert.Equal(t, "/four/under four.txt", o.Remote())
assert.Equal(t, int64(8+lineEndSize), o.Size())
_, ok := o.(*Object)
assert.True(t, ok)
// Test the time is correct on the object
tObj := o.ModTime(context.Background())
fi, err := os.Stat(filepath.Join(filesPath, "four", "under four.txt"))
require.NoError(t, err)
tFile := fi.ModTime()
fstest.AssertTimeEqualWithPrecision(t, o.Remote(), tFile, tObj, time.Second)
// check object not found
o, err = f.NewObject(context.Background(), "/not found.txt")
assert.Nil(t, o)
assert.Equal(t, fs.ErrorObjectNotFound, err)
}
func TestOpen(t *testing.T) {
m := prepareServer(t)

View File

@@ -0,0 +1,166 @@
// Package api provides functionality for interacting with the iCloud API.
package api
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"net/http"
"strings"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/lib/rest"
)
const (
baseEndpoint = "https://www.icloud.com"
homeEndpoint = "https://www.icloud.com"
setupEndpoint = "https://setup.icloud.com/setup/ws/1"
authEndpoint = "https://idmsa.apple.com/appleauth/auth"
)
type sessionSave func(*Session)
// Client defines the client configuration
type Client struct {
appleID string
password string
srv *rest.Client
Session *Session
sessionSaveCallback sessionSave
drive *DriveService
}
// New creates a new Client instance with the provided Apple ID, password, trust token, cookies, and session save callback.
//
// Parameters:
// - appleID: the Apple ID of the user.
// - password: the password of the user.
// - trustToken: the trust token for the session.
// - clientID: the client id for the session.
// - cookies: the cookies for the session.
// - sessionSaveCallback: the callback function to save the session.
func New(appleID, password, trustToken string, clientID string, cookies []*http.Cookie, sessionSaveCallback sessionSave) (*Client, error) {
icloud := &Client{
appleID: appleID,
password: password,
srv: rest.NewClient(fshttp.NewClient(context.Background())),
Session: NewSession(),
sessionSaveCallback: sessionSaveCallback,
}
icloud.Session.TrustToken = trustToken
icloud.Session.Cookies = cookies
icloud.Session.ClientID = clientID
return icloud, nil
}
// DriveService returns the DriveService instance associated with the Client.
func (c *Client) DriveService() (*DriveService, error) {
var err error
if c.drive == nil {
c.drive, err = NewDriveService(c)
if err != nil {
return nil, err
}
}
return c.drive, nil
}
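A hedged usage sketch of the client above, not part of the diff; the import path, credentials and client ID are placeholders:
package example // illustrative sketch only

import (
	"context"

	"github.com/rclone/rclone/backend/iclouddrive/api" // assumed import path
)

// openDrive creates a client, authenticates and returns the drive service.
func openDrive(ctx context.Context) (*api.DriveService, error) {
	client, err := api.New("user@example.com", "app-password", "", "auth-client-id", nil,
		func(s *api.Session) { /* persist the session (cookies, trust token) for the next run */ })
	if err != nil {
		return nil, err
	}
	if err := client.Authenticate(ctx); err != nil {
		return nil, err
	}
	return client.DriveService()
}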
// Request makes a request and retries it if the session is invalid.
//
// This function is the main entry point for making requests to the iCloud
// API. If the initial request returns a 401 (Unauthorized), it will try to
// reauthenticate and retry the request.
func (c *Client) Request(ctx context.Context, opts rest.Opts, request interface{}, response interface{}) (resp *http.Response, err error) {
resp, err = c.Session.Request(ctx, opts, request, response)
if err != nil && resp != nil {
// try to reauth
if resp.StatusCode == 401 || resp.StatusCode == 421 {
err = c.Authenticate(ctx)
if err != nil {
return nil, err
}
if c.Session.Requires2FA() {
return nil, errors.New("trust token expired, please reauth")
}
return c.RequestNoReAuth(ctx, opts, request, response)
}
}
return resp, err
}
// RequestNoReAuth makes a request without re-authenticating.
//
// This function is useful when you have a session that is already
// authenticated, but you need to make a request without triggering
// a re-authentication.
func (c *Client) RequestNoReAuth(ctx context.Context, opts rest.Opts, request interface{}, response interface{}) (resp *http.Response, err error) {
// Make the request without re-authenticating
resp, err = c.Session.Request(ctx, opts, request, response)
return resp, err
}
// Authenticate authenticates the client with the iCloud API.
func (c *Client) Authenticate(ctx context.Context) error {
if c.Session.Cookies != nil {
if err := c.Session.ValidateSession(ctx); err == nil {
fs.Debugf("icloud", "Valid session, no need to reauth")
return nil
}
c.Session.Cookies = nil
}
fs.Debugf("icloud", "Authenticating as %s\n", c.appleID)
err := c.Session.SignIn(ctx, c.appleID, c.password)
if err == nil {
err = c.Session.AuthWithToken(ctx)
if err == nil && c.sessionSaveCallback != nil {
c.sessionSaveCallback(c.Session)
}
}
return err
}
// SignIn signs in the client using the provided context and credentials.
func (c *Client) SignIn(ctx context.Context) error {
return c.Session.SignIn(ctx, c.appleID, c.password)
}
// IntoReader marshals the provided values into a JSON encoded reader
func IntoReader(values any) (*bytes.Reader, error) {
m, err := json.Marshal(values)
if err != nil {
return nil, err
}
return bytes.NewReader(m), nil
}
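A tiny fragment showing how IntoReader is meant to be used to build a request body (illustrative, assumes it runs inside a function with the rest package imported; the payload values are examples):
// build a JSON request body for rest.Opts
body, _ := IntoReader(map[string]any{"accountName": "user@example.com", "rememberMe": true})
opts := rest.Opts{Method: "POST", Path: "/signin", RootURL: authEndpoint, Body: body}
_ = opts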
// RequestError holds info on a result state; iCloud can return a 200 but the result is still unknown
type RequestError struct {
Status string
Text string
}
// Error satisfy the error interface.
func (e *RequestError) Error() string {
return fmt.Sprintf("%s: %s", e.Text, e.Status)
}
func newRequestError(Status string, Text string) *RequestError {
return &RequestError{
Status: strings.ToLower(Status),
Text: Text,
}
}
// newRequestErrorf makes a new error from sprintf parameters.
func newRequestErrorf(Status string, Text string, Parameters ...interface{}) *RequestError {
return newRequestError(strings.ToLower(Status), fmt.Sprintf(Text, Parameters...))
}

View File

@@ -0,0 +1,913 @@
package api
import (
"bytes"
"context"
"io"
"mime"
"net/http"
"net/url"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/google/uuid"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/rest"
)
const (
defaultZone = "com.apple.CloudDocs"
statusOk = "OK"
statusEtagConflict = "ETAG_CONFLICT"
)
// DriveService represents an iCloud Drive service.
type DriveService struct {
icloud *Client
RootID string
endpoint string
docsEndpoint string
}
// NewDriveService creates a new DriveService instance.
func NewDriveService(icloud *Client) (*DriveService, error) {
return &DriveService{icloud: icloud, RootID: "FOLDER::com.apple.CloudDocs::root", endpoint: icloud.Session.AccountInfo.Webservices["drivews"].URL, docsEndpoint: icloud.Session.AccountInfo.Webservices["docws"].URL}, nil
}
// GetItemByDriveID retrieves a DriveItem by its Drive ID.
func (d *DriveService) GetItemByDriveID(ctx context.Context, id string, includeChildren bool) (*DriveItem, *http.Response, error) {
items, resp, err := d.GetItemsByDriveID(ctx, []string{id}, includeChildren)
if err != nil {
return nil, resp, err
}
return items[0], resp, err
}
// GetItemsByDriveID retrieves DriveItems by their Drive IDs.
func (d *DriveService) GetItemsByDriveID(ctx context.Context, ids []string, includeChildren bool) ([]*DriveItem, *http.Response, error) {
var err error
_items := []map[string]any{}
for _, id := range ids {
_items = append(_items, map[string]any{
"drivewsid": id,
"partialData": false,
"includeHierarchy": false,
})
}
var body *bytes.Reader
var path string
if !includeChildren {
values := []map[string]any{{
"items": _items,
}}
body, err = IntoReader(values)
if err != nil {
return nil, nil, err
}
path = "/retrieveItemDetails"
} else {
values := _items
body, err = IntoReader(values)
if err != nil {
return nil, nil, err
}
path = "/retrieveItemDetailsInFolders"
}
opts := rest.Opts{
Method: "POST",
Path: path,
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.endpoint,
Body: body,
}
var items []*DriveItem
resp, err := d.icloud.Request(ctx, opts, nil, &items)
if err != nil {
return nil, resp, err
}
return items, resp, err
}
// GetDocByPath retrieves a document by its path.
func (d *DriveService) GetDocByPath(ctx context.Context, path string) (*Document, *http.Response, error) {
values := url.Values{}
values.Set("unified_format", "false")
body, err := IntoReader(path)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/ws/" + defaultZone + "/list/lookup_by_path",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
Parameters: values,
Body: body,
}
var item []*Document
resp, err := d.icloud.Request(ctx, opts, nil, &item)
if err != nil {
return nil, resp, err
}
return item[0], resp, err
}
// GetItemByPath retrieves a DriveItem by its path.
func (d *DriveService) GetItemByPath(ctx context.Context, path string) (*DriveItem, *http.Response, error) {
values := url.Values{}
values.Set("unified_format", "true")
body, err := IntoReader(path)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/ws/" + defaultZone + "/list/lookup_by_path",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
Parameters: values,
Body: body,
}
var item []*DriveItem
resp, err := d.icloud.Request(ctx, opts, nil, &item)
if err != nil {
return nil, resp, err
}
return item[0], resp, err
}
// GetDocByItemID retrieves a document by its item ID.
func (d *DriveService) GetDocByItemID(ctx context.Context, id string) (*Document, *http.Response, error) {
values := url.Values{}
values.Set("document_id", id)
values.Set("unified_format", "false") // important
opts := rest.Opts{
Method: "GET",
Path: "/ws/" + defaultZone + "/list/lookup_by_id",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
Parameters: values,
}
var item *Document
resp, err := d.icloud.Request(ctx, opts, nil, &item)
if err != nil {
return nil, resp, err
}
return item, resp, err
}
// GetItemRawByItemID retrieves a DriveItemRaw by its item ID.
func (d *DriveService) GetItemRawByItemID(ctx context.Context, id string) (*DriveItemRaw, *http.Response, error) {
opts := rest.Opts{
Method: "GET",
Path: "/v1/item/" + id,
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
}
var item *DriveItemRaw
resp, err := d.icloud.Request(ctx, opts, nil, &item)
if err != nil {
return nil, resp, err
}
return item, resp, err
}
// GetItemsInFolder retrieves a list of DriveItemRaw objects in a folder with the given ID.
func (d *DriveService) GetItemsInFolder(ctx context.Context, id string, limit int64) ([]*DriveItemRaw, *http.Response, error) {
values := url.Values{}
values.Set("limit", strconv.FormatInt(limit, 10))
opts := rest.Opts{
Method: "GET",
Path: "/v1/enumerate/" + id,
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
Parameters: values,
}
items := struct {
Items []*DriveItemRaw `json:"drive_item"`
}{}
resp, err := d.icloud.Request(ctx, opts, nil, &items)
if err != nil {
return nil, resp, err
}
return items.Items, resp, err
}
// GetDownloadURLByDriveID retrieves the download URL for a file in the DriveService.
func (d *DriveService) GetDownloadURLByDriveID(ctx context.Context, id string) (string, *http.Response, error) {
_, zone, docid := DeconstructDriveID(id)
values := url.Values{}
values.Set("document_id", docid)
if zone == "" {
zone = defaultZone
}
opts := rest.Opts{
Method: "GET",
Path: "/ws/" + zone + "/download/by_id",
Parameters: values,
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
}
var filer *FileRequest
resp, err := d.icloud.Request(ctx, opts, nil, &filer)
if err != nil {
return "", resp, err
}
var url string
if filer.DataToken != nil {
url = filer.DataToken.URL
} else {
url = filer.PackageToken.URL
}
return url, resp, err
}
// DownloadFile downloads a file from the given URL using the provided options.
func (d *DriveService) DownloadFile(ctx context.Context, url string, opt []fs.OpenOption) (*http.Response, error) {
opts := &rest.Opts{
Method: "GET",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: url,
Options: opt,
}
resp, err := d.icloud.srv.Call(ctx, opts)
if err != nil {
// icloud has some weird http codes
if resp.StatusCode == 330 {
loc, err := resp.Location()
if err == nil {
return d.DownloadFile(ctx, loc.String(), opt)
}
}
return resp, err
}
return d.icloud.srv.Call(ctx, opts)
}
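Taken together, the two functions above form the download path. A hedged sketch of how a caller would chain them (not part of the diff; import path assumed, error handling trimmed):
package example // illustrative sketch only

import (
	"context"
	"io"

	"github.com/rclone/rclone/backend/iclouddrive/api" // assumed import path
)

// download resolves the download URL for a drive ID and streams the body.
func download(ctx context.Context, d *api.DriveService, driveID string) (io.ReadCloser, error) {
	url, _, err := d.GetDownloadURLByDriveID(ctx, driveID)
	if err != nil {
		return nil, err
	}
	resp, err := d.DownloadFile(ctx, url, nil)
	if err != nil {
		return nil, err
	}
	return resp.Body, nil
}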
// MoveItemToTrashByItemID moves an item to the trash based on the item ID.
func (d *DriveService) MoveItemToTrashByItemID(ctx context.Context, id, etag string, force bool) (*DriveItem, *http.Response, error) {
doc, resp, err := d.GetDocByItemID(ctx, id)
if err != nil {
return nil, resp, err
}
return d.MoveItemToTrashByID(ctx, doc.DriveID(), etag, force)
}
// MoveItemToTrashByID moves an item to the trash based on the item ID.
func (d *DriveService) MoveItemToTrashByID(ctx context.Context, drivewsid, etag string, force bool) (*DriveItem, *http.Response, error) {
values := map[string]any{
"items": []map[string]any{{
"drivewsid": drivewsid,
"etag": etag,
"clientId": drivewsid,
}}}
body, err := IntoReader(values)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/moveItemsToTrash",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.endpoint,
Body: body,
}
item := struct {
Items []*DriveItem `json:"items"`
}{}
resp, err := d.icloud.Request(ctx, opts, nil, &item)
if err != nil {
return nil, resp, err
}
if item.Items[0].Status != statusOk {
// rerun with latest etag
if force && item.Items[0].Status == "ETAG_CONFLICT" {
return d.MoveItemToTrashByID(ctx, drivewsid, item.Items[0].Etag, false)
}
err = newRequestError(item.Items[0].Status, "unknown request status")
}
return item.Items[0], resp, err
}
// CreateNewFolderByItemID creates a new folder by item ID.
func (d *DriveService) CreateNewFolderByItemID(ctx context.Context, id, name string) (*DriveItem, *http.Response, error) {
doc, resp, err := d.GetDocByItemID(ctx, id)
if err != nil {
return nil, resp, err
}
return d.CreateNewFolderByDriveID(ctx, doc.DriveID(), name)
}
// CreateNewFolderByDriveID creates a new folder by its Drive ID.
func (d *DriveService) CreateNewFolderByDriveID(ctx context.Context, drivewsid, name string) (*DriveItem, *http.Response, error) {
values := map[string]any{
"destinationDrivewsId": drivewsid,
"folders": []map[string]any{{
"clientId": "FOLDER::UNKNOWN_ZONE::TempId-" + uuid.New().String(),
"name": name,
}},
}
body, err := IntoReader(values)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/createFolders",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.endpoint,
Body: body,
}
var fResp *CreateFoldersResponse
resp, err := d.icloud.Request(ctx, opts, nil, &fResp)
if err != nil {
return nil, resp, err
}
status := fResp.Folders[0].Status
if status != statusOk {
err = newRequestError(status, "unknown request status")
}
return fResp.Folders[0], resp, err
}
// RenameItemByItemID renames a DriveItem by its item ID.
func (d *DriveService) RenameItemByItemID(ctx context.Context, id, etag, name string, force bool) (*DriveItem, *http.Response, error) {
doc, resp, err := d.GetDocByItemID(ctx, id)
if err != nil {
return nil, resp, err
}
return d.RenameItemByDriveID(ctx, doc.DriveID(), doc.Etag, name, force)
}
// RenameItemByDriveID renames a DriveItem by its drive ID.
func (d *DriveService) RenameItemByDriveID(ctx context.Context, id, etag, name string, force bool) (*DriveItem, *http.Response, error) {
values := map[string]any{
"items": []map[string]any{{
"drivewsid": id,
"name": name,
"etag": etag,
// "extension": split[1],
}},
}
body, err := IntoReader(values)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/renameItems",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.endpoint,
Body: body,
}
var items *DriveItem
resp, err := d.icloud.Request(ctx, opts, nil, &items)
if err != nil {
return nil, resp, err
}
status := items.Items[0].Status
if status != statusOk {
// rerun with latest etag
if force && status == "ETAG_CONFLICT" {
return d.RenameItemByDriveID(ctx, id, items.Items[0].Etag, name, false)
}
err = newRequestErrorf(status, "unknown inner status for: %s %s", opts.Method, resp.Request.URL)
}
return items.Items[0], resp, err
}
// MoveItemByItemID moves an item by its item ID to a destination item ID.
func (d *DriveService) MoveItemByItemID(ctx context.Context, id, etag, dstID string, force bool) (*DriveItem, *http.Response, error) {
docSrc, resp, err := d.GetDocByItemID(ctx, id)
if err != nil {
return nil, resp, err
}
docDst, resp, err := d.GetDocByItemID(ctx, dstID)
if err != nil {
return nil, resp, err
}
return d.MoveItemByDriveID(ctx, docSrc.DriveID(), docSrc.Etag, docDst.DriveID(), force)
}
// MoveItemByDocID moves an item by its doc ID.
// func (d *DriveService) MoveItemByDocID(ctx context.Context, srcDocID, srcEtag, dstDocID string, force bool) (*DriveItem, *http.Response, error) {
// return d.MoveItemByDriveID(ctx, srcDocID, srcEtag, docDst.DriveID(), force)
// }
// MoveItemByDriveID moves an item by its drive ID.
func (d *DriveService) MoveItemByDriveID(ctx context.Context, id, etag, dstID string, force bool) (*DriveItem, *http.Response, error) {
values := map[string]any{
"destinationDrivewsId": dstID,
"items": []map[string]any{{
"drivewsid": id,
"etag": etag,
"clientId": id,
}},
}
body, err := IntoReader(values)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/moveItems",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.endpoint,
Body: body,
}
var items *DriveItem
resp, err := d.icloud.Request(ctx, opts, nil, &items)
if err != nil {
return nil, resp, err
}
status := items.Items[0].Status
if status != statusOk {
// rerun with latest etag
if force && status == "ETAG_CONFLICT" {
return d.MoveItemByDriveID(ctx, id, items.Items[0].Etag, dstID, false)
}
err = newRequestErrorf(status, "unknown inner status for: %s %s", opts.Method, resp.Request.URL)
}
return items.Items[0], resp, err
}
// CopyDocByItemID copies a document by its item ID.
func (d *DriveService) CopyDocByItemID(ctx context.Context, itemID string) (*DriveItemRaw, *http.Response, error) {
// putting name in info doesn't work; extension does work, so assume this is a bug in the endpoint
values := map[string]any{
"info_to_update": map[string]any{},
}
body, err := IntoReader(values)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/v1/item/copy/" + itemID,
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
Body: body,
}
var info *DriveItemRaw
resp, err := d.icloud.Request(ctx, opts, nil, &info)
if err != nil {
return nil, resp, err
}
return info, resp, err
}
// CreateUpload creates an url for an upload.
func (d *DriveService) CreateUpload(ctx context.Context, size int64, name string) (*UploadResponse, *http.Response, error) {
// first we need to request an upload url
values := map[string]any{
"filename": name,
"type": "FILE",
"size": strconv.FormatInt(size, 10),
"content_type": GetContentTypeForFile(name),
}
body, err := IntoReader(values)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/ws/" + defaultZone + "/upload/web",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
Body: body,
}
var responseInfo []*UploadResponse
resp, err := d.icloud.Request(ctx, opts, nil, &responseInfo)
if err != nil {
return nil, resp, err
}
return responseInfo[0], resp, err
}
// Upload uploads a file to the given url
func (d *DriveService) Upload(ctx context.Context, in io.Reader, size int64, name, uploadURL string) (*SingleFileResponse, *http.Response, error) {
// TODO: implement multipart upload
opts := rest.Opts{
Method: "POST",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: uploadURL,
Body: in,
ContentLength: &size,
ContentType: GetContentTypeForFile(name),
// MultipartContentName: "files",
MultipartFileName: name,
}
var singleFileResponse *SingleFileResponse
resp, err := d.icloud.Request(ctx, opts, nil, &singleFileResponse)
if err != nil {
return nil, resp, err
}
return singleFileResponse, resp, err
}
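The upload side is a two-step protocol: request an upload URL, then POST the content to it. A minimal sketch under that assumption (not part of the diff; import path assumed):
package example // illustrative sketch only

import (
	"context"
	"io"

	"github.com/rclone/rclone/backend/iclouddrive/api" // assumed import path
)

// upload asks for an upload slot, then sends the file body to the returned URL.
func upload(ctx context.Context, d *api.DriveService, in io.Reader, size int64, name string) error {
	up, _, err := d.CreateUpload(ctx, size, name)
	if err != nil {
		return err
	}
	// the SingleFileResponse carries the receipt/signatures that UpdateFile
	// later needs to attach the uploaded content to a document
	_, _, err = d.Upload(ctx, in, size, name, up.URL)
	return err
}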
// UpdateFile updates a file in the DriveService.
//
// ctx: the context.Context object for the request.
// r: a pointer to the UpdateFileInfo struct containing the information for the file update.
// Returns a pointer to the DriveItem struct representing the updated file, the http.Response object, and an error if any.
func (d *DriveService) UpdateFile(ctx context.Context, r *UpdateFileInfo) (*DriveItem, *http.Response, error) {
body, err := IntoReader(r)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/ws/" + defaultZone + "/update/documents",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
Body: body,
}
var responseInfo *DocumentUpdateResponse
resp, err := d.icloud.Request(ctx, opts, nil, &responseInfo)
if err != nil {
return nil, resp, err
}
doc := responseInfo.Results[0].Document
item := DriveItem{
Drivewsid: "FILE::com.apple.CloudDocs::" + doc.DocumentID,
Docwsid: doc.DocumentID,
Itemid: doc.ItemID,
Etag: doc.Etag,
ParentID: doc.ParentID,
DateModified: time.Unix(r.Mtime, 0),
DateCreated: time.Unix(r.Mtime, 0),
Type: doc.Type,
Name: doc.Name,
Size: doc.Size,
}
return &item, resp, err
}
// UpdateFileInfo represents the information for an update to a file in the DriveService.
type UpdateFileInfo struct {
AllowConflict bool `json:"allow_conflict"`
Btime int64 `json:"btime"`
Command string `json:"command"`
CreateShortGUID bool `json:"create_short_guid"`
Data struct {
Receipt string `json:"receipt,omitempty"`
ReferenceSignature string `json:"reference_signature,omitempty"`
Signature string `json:"signature,omitempty"`
Size int64 `json:"size,omitempty"`
WrappingKey string `json:"wrapping_key,omitempty"`
} `json:"data,omitempty"`
DocumentID string `json:"document_id"`
FileFlags FileFlags `json:"file_flags"`
Mtime int64 `json:"mtime"`
Path struct {
Path string `json:"path"`
StartingDocumentID string `json:"starting_document_id"`
} `json:"path"`
}
// FileFlags defines the file flags for a document.
type FileFlags struct {
IsExecutable bool `json:"is_executable"`
IsHidden bool `json:"is_hidden"`
IsWritable bool `json:"is_writable"`
}
// NewUpdateFileInfo creates a new UpdateFileInfo object with default values.
//
// Returns an UpdateFileInfo object.
func NewUpdateFileInfo() UpdateFileInfo {
return UpdateFileInfo{
Command: "add_file",
CreateShortGUID: true,
AllowConflict: true,
FileFlags: FileFlags{
IsExecutable: true,
IsHidden: false,
IsWritable: false,
},
}
}
// DriveItemRaw is a raw drive item.
// not sure what to call this, but there seems to be a "unified" and non-"unified" drive item response. This is the non-unified one.
type DriveItemRaw struct {
ItemID string `json:"item_id"`
ItemInfo *DriveItemRawInfo `json:"item_info"`
}
// SplitName splits the name of a DriveItemRaw into its name and extension.
//
// It returns the name and extension as separate strings. If the name ends with a dot,
// it means there is no extension, so an empty string is returned for the extension.
// If the name does not contain a dot, there is no extension either, so the
// full name and an empty extension are returned.
func (d *DriveItemRaw) SplitName() (string, string) {
name := d.ItemInfo.Name
// ends with a dot, no extension
if strings.HasSuffix(name, ".") {
return name, ""
}
lastInd := strings.LastIndex(name, ".")
if lastInd == -1 {
return name, ""
}
return name[:lastInd], name[lastInd+1:]
}
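Worked examples of the rule above (illustrative):
// "report.tar.gz" -> ("report.tar", "gz")   the split is on the last dot
// "archive."      -> ("archive.", "")       a trailing dot means no extension
// "README"        -> ("README", "")         no dot at all, no extension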
// ModTime returns the modification time of the DriveItemRaw.
//
// It parses the ModifiedAt field of the ItemInfo struct and converts it to a time.Time value.
// If the parsing fails, it returns the zero value of time.Time.
// The returned time.Time value represents the modification time of the DriveItemRaw.
func (d *DriveItemRaw) ModTime() time.Time {
i, err := strconv.ParseInt(d.ItemInfo.ModifiedAt, 10, 64)
if err != nil {
return time.Time{}
}
return time.UnixMilli(i)
}
// CreatedTime returns the creation time of the DriveItemRaw.
//
// It parses the CreatedAt field of the ItemInfo struct and converts it to a time.Time value.
// If the parsing fails, it returns the zero value of time.Time.
// The returned time.Time value represents the creation time of the DriveItemRaw.
func (d *DriveItemRaw) CreatedTime() time.Time {
i, err := strconv.ParseInt(d.ItemInfo.CreatedAt, 10, 64)
if err != nil {
return time.Time{}
}
return time.UnixMilli(i)
}
// DriveItemRawInfo is the raw information about a drive item.
type DriveItemRawInfo struct {
Name string `json:"name"`
// Extension is absolutely borked on endpoints so don't use it.
Extension string `json:"extension"`
Size int64 `json:"size,string"`
Type string `json:"type"`
Version string `json:"version"`
ModifiedAt string `json:"modified_at"`
CreatedAt string `json:"created_at"`
Urls struct {
URLDownload string `json:"url_download"`
} `json:"urls"`
}
// IntoDriveItem converts a DriveItemRaw into a DriveItem.
//
// It takes no parameters.
// It returns a pointer to a DriveItem.
func (d *DriveItemRaw) IntoDriveItem() *DriveItem {
name, extension := d.SplitName()
return &DriveItem{
Itemid: d.ItemID,
Name: name,
Extension: extension,
Type: d.ItemInfo.Type,
Etag: d.ItemInfo.Version,
DateModified: d.ModTime(),
DateCreated: d.CreatedTime(),
Size: d.ItemInfo.Size,
Urls: d.ItemInfo.Urls,
}
}
// DocumentUpdateResponse is the response of a document update request.
type DocumentUpdateResponse struct {
Status struct {
StatusCode int `json:"status_code"`
ErrorMessage string `json:"error_message"`
} `json:"status"`
Results []struct {
Status struct {
StatusCode int `json:"status_code"`
ErrorMessage string `json:"error_message"`
} `json:"status"`
OperationID interface{} `json:"operation_id"`
Document *Document `json:"document"`
} `json:"results"`
}
// Document represents a document on iCloud.
type Document struct {
Status struct {
StatusCode int `json:"status_code"`
ErrorMessage string `json:"error_message"`
} `json:"status"`
DocumentID string `json:"document_id"`
ItemID string `json:"item_id"`
Urls struct {
URLDownload string `json:"url_download"`
} `json:"urls"`
Etag string `json:"etag"`
ParentID string `json:"parent_id"`
Name string `json:"name"`
Type string `json:"type"`
Deleted bool `json:"deleted"`
Mtime int64 `json:"mtime"`
LastEditorName string `json:"last_editor_name"`
Data DocumentData `json:"data"`
Size int64 `json:"size"`
Btime int64 `json:"btime"`
Zone string `json:"zone"`
FileFlags struct {
IsExecutable bool `json:"is_executable"`
IsWritable bool `json:"is_writable"`
IsHidden bool `json:"is_hidden"`
} `json:"file_flags"`
LastOpenedTime int64 `json:"lastOpenedTime"`
RestorePath interface{} `json:"restorePath"`
HasChainedParent bool `json:"hasChainedParent"`
}
// DriveID returns the drive ID of the Document.
func (d *Document) DriveID() string {
if d.Zone == "" {
d.Zone = defaultZone
}
return d.Type + "::" + d.Zone + "::" + d.DocumentID
}
// DocumentData represents the data of a document.
type DocumentData struct {
Signature string `json:"signature"`
Owner string `json:"owner"`
Size int64 `json:"size"`
ReferenceSignature string `json:"reference_signature"`
WrappingKey string `json:"wrapping_key"`
PcsInfo string `json:"pcsInfo"`
}
// SingleFileResponse is the response of a single file request.
type SingleFileResponse struct {
SingleFile *SingleFileInfo `json:"singleFile"`
}
// SingleFileInfo represents the information of a single file.
type SingleFileInfo struct {
ReferenceSignature string `json:"referenceChecksum"`
Size int64 `json:"size"`
Signature string `json:"fileChecksum"`
WrappingKey string `json:"wrappingKey"`
Receipt string `json:"receipt"`
}
// UploadResponse is the response of an upload request.
type UploadResponse struct {
URL string `json:"url"`
DocumentID string `json:"document_id"`
}
// FileRequestToken represents the token of a file request.
type FileRequestToken struct {
URL string `json:"url"`
Token string `json:"token"`
Signature string `json:"signature"`
WrappingKey string `json:"wrapping_key"`
ReferenceSignature string `json:"reference_signature"`
}
// FileRequest represents the request of a file.
type FileRequest struct {
DocumentID string `json:"document_id"`
ItemID string `json:"item_id"`
OwnerDsid int64 `json:"owner_dsid"`
DataToken *FileRequestToken `json:"data_token,omitempty"`
PackageToken *FileRequestToken `json:"package_token,omitempty"`
DoubleEtag string `json:"double_etag"`
}
// CreateFoldersResponse is the response of a create folders request.
type CreateFoldersResponse struct {
Folders []*DriveItem `json:"folders"`
}
// DriveItem represents an item on iCloud.
type DriveItem struct {
DateCreated time.Time `json:"dateCreated"`
Drivewsid string `json:"drivewsid"`
Docwsid string `json:"docwsid"`
Itemid string `json:"item_id"`
Zone string `json:"zone"`
Name string `json:"name"`
ParentID string `json:"parentId"`
Hierarchy []DriveItem `json:"hierarchy"`
Etag string `json:"etag"`
Type string `json:"type"`
AssetQuota int64 `json:"assetQuota"`
FileCount int64 `json:"fileCount"`
ShareCount int64 `json:"shareCount"`
ShareAliasCount int64 `json:"shareAliasCount"`
DirectChildrenCount int64 `json:"directChildrenCount"`
Items []*DriveItem `json:"items"`
NumberOfItems int64 `json:"numberOfItems"`
Status string `json:"status"`
Extension string `json:"extension,omitempty"`
DateModified time.Time `json:"dateModified,omitempty"`
DateChanged time.Time `json:"dateChanged,omitempty"`
Size int64 `json:"size,omitempty"`
LastOpenTime time.Time `json:"lastOpenTime,omitempty"`
Urls struct {
URLDownload string `json:"url_download"`
} `json:"urls"`
}
// IsFolder returns true if the item is a folder.
func (d *DriveItem) IsFolder() bool {
return d.Type == "FOLDER" || d.Type == "APP_CONTAINER" || d.Type == "APP_LIBRARY"
}
// DownloadURL returns the download URL of the item.
func (d *DriveItem) DownloadURL() string {
return d.Urls.URLDownload
}
// FullName returns the full name of the item.
// name + extension
func (d *DriveItem) FullName() string {
if d.Extension != "" {
return d.Name + "." + d.Extension
}
return d.Name
}
// GetDocIDFromDriveID returns the DocumentID from the drive ID.
func GetDocIDFromDriveID(id string) string {
split := strings.Split(id, "::")
return split[len(split)-1]
}
// DeconstructDriveID returns the document type, zone, and document ID from the drive ID.
func DeconstructDriveID(id string) (docType, zone, docid string) {
split := strings.Split(id, "::")
if len(split) < 3 {
return "", "", id
}
return split[0], split[1], split[2]
}
// ConstructDriveID constructs a drive ID from the given components.
func ConstructDriveID(id string, zone string, t string) string {
return strings.Join([]string{t, zone, id}, "::")
}
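Worked example using the root ID that NewDriveService sets above (illustrative):
docType, zone, docid := DeconstructDriveID("FOLDER::com.apple.CloudDocs::root")
// docType == "FOLDER", zone == "com.apple.CloudDocs", docid == "root"
_ = ConstructDriveID(docid, zone, docType) // rebuilds the same "FOLDER::com.apple.CloudDocs::root"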
// GetContentTypeForFile detects content type for given file name.
func GetContentTypeForFile(name string) string {
// detect MIME type by looking at the filename only
mimeType := mime.TypeByExtension(filepath.Ext(name))
if mimeType == "" {
// api requires a mime type passed in
mimeType = "text/plain"
}
return strings.Split(mimeType, ";")[0]
}
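For example (results depend on the local MIME table, so treat these as typical values):
GetContentTypeForFile("photo.jpg") // "image/jpeg" on most systems
GetContentTypeForFile("notes")     // "" from the MIME table, so the "text/plain" fallback is used
GetContentTypeForFile("page.html") // "text/html" - any "; charset=..." suffix is stripped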

View File

@@ -0,0 +1,412 @@
package api
import (
"context"
"fmt"
"net/http"
"net/url"
"slices"
"strings"
"github.com/oracle/oci-go-sdk/v65/common"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/lib/rest"
)
// Session represents an iCloud session
type Session struct {
SessionToken string `json:"session_token"`
Scnt string `json:"scnt"`
SessionID string `json:"session_id"`
AccountCountry string `json:"account_country"`
TrustToken string `json:"trust_token"`
ClientID string `json:"client_id"`
Cookies []*http.Cookie `json:"cookies"`
AccountInfo AccountInfo `json:"account_info"`
srv *rest.Client `json:"-"`
}
// String returns the session as a string
// func (s *Session) String() string {
// jsession, _ := json.Marshal(s)
// return string(jsession)
// }
// Request makes a request
func (s *Session) Request(ctx context.Context, opts rest.Opts, request interface{}, response interface{}) (*http.Response, error) {
resp, err := s.srv.CallJSON(ctx, &opts, &request, &response)
if err != nil {
return resp, err
}
if val := resp.Header.Get("X-Apple-ID-Account-Country"); val != "" {
s.AccountCountry = val
}
if val := resp.Header.Get("X-Apple-ID-Session-Id"); val != "" {
s.SessionID = val
}
if val := resp.Header.Get("X-Apple-Session-Token"); val != "" {
s.SessionToken = val
}
if val := resp.Header.Get("X-Apple-TwoSV-Trust-Token"); val != "" {
s.TrustToken = val
}
if val := resp.Header.Get("scnt"); val != "" {
s.Scnt = val
}
return resp, nil
}
// Requires2FA returns true if the session requires 2FA
func (s *Session) Requires2FA() bool {
return s.AccountInfo.DsInfo.HsaVersion == 2 && s.AccountInfo.HsaChallengeRequired
}
// SignIn signs in the session
func (s *Session) SignIn(ctx context.Context, appleID, password string) error {
trustTokens := []string{}
if s.TrustToken != "" {
trustTokens = []string{s.TrustToken}
}
values := map[string]any{
"accountName": appleID,
"password": password,
"rememberMe": true,
"trustTokens": trustTokens,
}
body, err := IntoReader(values)
if err != nil {
return err
}
opts := rest.Opts{
Method: "POST",
Path: "/signin",
Parameters: url.Values{},
ExtraHeaders: s.GetAuthHeaders(map[string]string{}),
RootURL: authEndpoint,
IgnoreStatus: true, // need to handle 409 for hsa2
NoResponse: true,
Body: body,
}
opts.Parameters.Set("isRememberMeEnabled", "true")
_, err = s.Request(ctx, opts, nil, nil)
return err
}
// AuthWithToken authenticates the session
func (s *Session) AuthWithToken(ctx context.Context) error {
values := map[string]any{
"accountCountryCode": s.AccountCountry,
"dsWebAuthToken": s.SessionToken,
"extended_login": true,
"trustToken": s.TrustToken,
}
body, err := IntoReader(values)
if err != nil {
return err
}
opts := rest.Opts{
Method: "POST",
Path: "/accountLogin",
ExtraHeaders: GetCommonHeaders(map[string]string{}),
RootURL: setupEndpoint,
Body: body,
}
resp, err := s.Request(ctx, opts, nil, &s.AccountInfo)
if err == nil {
s.Cookies = resp.Cookies()
}
return err
}
// Validate2FACode validates the 2FA code
func (s *Session) Validate2FACode(ctx context.Context, code string) error {
values := map[string]interface{}{"securityCode": map[string]string{"code": code}}
body, err := IntoReader(values)
if err != nil {
return err
}
headers := s.GetAuthHeaders(map[string]string{})
headers["scnt"] = s.Scnt
headers["X-Apple-ID-Session-Id"] = s.SessionID
opts := rest.Opts{
Method: "POST",
Path: "/verify/trusteddevice/securitycode",
ExtraHeaders: headers,
RootURL: authEndpoint,
Body: body,
NoResponse: true,
}
_, err = s.Request(ctx, opts, nil, nil)
if err == nil {
if err := s.TrustSession(ctx); err != nil {
return err
}
return nil
}
return fmt.Errorf("validate2FACode failed: %w", err)
}
// TrustSession trusts the session
func (s *Session) TrustSession(ctx context.Context) error {
headers := s.GetAuthHeaders(map[string]string{})
headers["scnt"] = s.Scnt
headers["X-Apple-ID-Session-Id"] = s.SessionID
opts := rest.Opts{
Method: "GET",
Path: "/2sv/trust",
ExtraHeaders: headers,
RootURL: authEndpoint,
NoResponse: true,
ContentLength: common.Int64(0),
}
_, err := s.Request(ctx, opts, nil, nil)
if err != nil {
return fmt.Errorf("trustSession failed: %w", err)
}
return s.AuthWithToken(ctx)
}
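Putting the session calls together, the flow the client drives is roughly: SignIn, then AuthWithToken; if the account still requires 2FA, Validate2FACode (which trusts the device and re-runs AuthWithToken). A hedged sketch, not part of the diff, with the import path and code entry as placeholders:
package example // illustrative sketch only

import (
	"context"

	"github.com/rclone/rclone/backend/iclouddrive/api" // assumed import path
)

// login performs a first-time sign in; code is the 2FA code entered by the user.
func login(ctx context.Context, s *api.Session, appleID, password, code string) error {
	if err := s.SignIn(ctx, appleID, password); err != nil {
		return err
	}
	if err := s.AuthWithToken(ctx); err != nil {
		return err
	}
	if s.Requires2FA() {
		// Validate2FACode trusts the device via TrustSession, which re-runs AuthWithToken
		return s.Validate2FACode(ctx, code)
	}
	return nil
}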
// ValidateSession validates the session
func (s *Session) ValidateSession(ctx context.Context) error {
opts := rest.Opts{
Method: "POST",
Path: "/validate",
ExtraHeaders: s.GetHeaders(map[string]string{}),
RootURL: setupEndpoint,
ContentLength: common.Int64(0),
}
_, err := s.Request(ctx, opts, nil, &s.AccountInfo)
if err != nil {
return fmt.Errorf("validateSession failed: %w", err)
}
return nil
}
// GetAuthHeaders returns the authentication headers for the session.
//
// It takes an `overwrite` map[string]string parameter which allows
// overwriting the default headers. It returns a map[string]string.
func (s *Session) GetAuthHeaders(overwrite map[string]string) map[string]string {
headers := map[string]string{
"Accept": "application/json",
"Content-Type": "application/json",
"X-Apple-OAuth-Client-Id": s.ClientID,
"X-Apple-OAuth-Client-Type": "firstPartyAuth",
"X-Apple-OAuth-Redirect-URI": "https://www.icloud.com",
"X-Apple-OAuth-Require-Grant-Code": "true",
"X-Apple-OAuth-Response-Mode": "web_message",
"X-Apple-OAuth-Response-Type": "code",
"X-Apple-OAuth-State": s.ClientID,
"X-Apple-Widget-Key": s.ClientID,
"Origin": homeEndpoint,
"Referer": fmt.Sprintf("%s/", homeEndpoint),
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:103.0) Gecko/20100101 Firefox/103.0",
}
for k, v := range overwrite {
headers[k] = v
}
return headers
}
// GetHeaders Gets the authentication headers required for a request
func (s *Session) GetHeaders(overwrite map[string]string) map[string]string {
headers := GetCommonHeaders(map[string]string{})
headers["Cookie"] = s.GetCookieString()
for k, v := range overwrite {
headers[k] = v
}
return headers
}
// GetCookieString returns the cookie header string for the session.
func (s *Session) GetCookieString() string {
cookieHeader := ""
// we only care about name and value.
for _, cookie := range s.Cookies {
cookieHeader = cookieHeader + cookie.Name + "=" + cookie.Value + ";"
}
return cookieHeader
}
// GetCommonHeaders generates common HTTP headers with optional overwrite.
func GetCommonHeaders(overwrite map[string]string) map[string]string {
headers := map[string]string{
"Content-Type": "application/json",
"Origin": baseEndpoint,
"Referer": fmt.Sprintf("%s/", baseEndpoint),
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:103.0) Gecko/20100101 Firefox/103.0",
}
for k, v := range overwrite {
headers[k] = v
}
return headers
}
// MergeCookies merges two slices of http.Cookies, ensuring no duplicates are added.
func MergeCookies(left []*http.Cookie, right []*http.Cookie) ([]*http.Cookie, error) {
var hashes []string
for _, cookie := range right {
hashes = append(hashes, cookie.Raw)
}
for _, cookie := range left {
if !slices.Contains(hashes, cookie.Raw) {
right = append(right, cookie)
}
}
return right, nil
}
// GetCookiesForDomain filters the provided cookies based on the domain of the given URL.
func GetCookiesForDomain(url *url.URL, cookies []*http.Cookie) ([]*http.Cookie, error) {
var domainCookies []*http.Cookie
for _, cookie := range cookies {
if strings.HasSuffix(url.Host, cookie.Domain) {
domainCookies = append(domainCookies, cookie)
}
}
return domainCookies, nil
}
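A small illustration of the two helpers above (a fragment; domains and variable names are examples):
u, _ := url.Parse("https://www.icloud.com/")
kept, _ := GetCookiesForDomain(u, loaded) // keeps cookies whose Domain (e.g. ".icloud.com") is a suffix of "www.icloud.com"
merged, _ := MergeCookies(loaded, fresh)  // keeps all of fresh and adds cookies from loaded not already present, matched on Raw
_, _ = kept, merged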
// NewSession creates a new Session instance with default values.
func NewSession() *Session {
session := &Session{}
session.srv = rest.NewClient(fshttp.NewClient(context.Background())).SetRoot(baseEndpoint)
//session.ClientID = "auth-" + uuid.New().String()
return session
}
// AccountInfo represents an account info
type AccountInfo struct {
DsInfo *ValidateDataDsInfo `json:"dsInfo"`
HasMinimumDeviceForPhotosWeb bool `json:"hasMinimumDeviceForPhotosWeb"`
ICDPEnabled bool `json:"iCDPEnabled"`
Webservices map[string]*webService `json:"webservices"`
PcsEnabled bool `json:"pcsEnabled"`
TermsUpdateNeeded bool `json:"termsUpdateNeeded"`
ConfigBag struct {
Urls struct {
AccountCreateUI string `json:"accountCreateUI"`
AccountLoginUI string `json:"accountLoginUI"`
AccountLogin string `json:"accountLogin"`
AccountRepairUI string `json:"accountRepairUI"`
DownloadICloudTerms string `json:"downloadICloudTerms"`
RepairDone string `json:"repairDone"`
AccountAuthorizeUI string `json:"accountAuthorizeUI"`
VettingURLForEmail string `json:"vettingUrlForEmail"`
AccountCreate string `json:"accountCreate"`
GetICloudTerms string `json:"getICloudTerms"`
VettingURLForPhone string `json:"vettingUrlForPhone"`
} `json:"urls"`
AccountCreateEnabled bool `json:"accountCreateEnabled"`
} `json:"configBag"`
HsaTrustedBrowser bool `json:"hsaTrustedBrowser"`
AppsOrder []string `json:"appsOrder"`
Version int `json:"version"`
IsExtendedLogin bool `json:"isExtendedLogin"`
PcsServiceIdentitiesIncluded bool `json:"pcsServiceIdentitiesIncluded"`
IsRepairNeeded bool `json:"isRepairNeeded"`
HsaChallengeRequired bool `json:"hsaChallengeRequired"`
RequestInfo struct {
Country string `json:"country"`
TimeZone string `json:"timeZone"`
Region string `json:"region"`
} `json:"requestInfo"`
PcsDeleted bool `json:"pcsDeleted"`
ICloudInfo struct {
SafariBookmarksHasMigratedToCloudKit bool `json:"SafariBookmarksHasMigratedToCloudKit"`
} `json:"iCloudInfo"`
Apps map[string]*ValidateDataApp `json:"apps"`
}
// ValidateDataDsInfo represents an validation info
type ValidateDataDsInfo struct {
HsaVersion int `json:"hsaVersion"`
LastName string `json:"lastName"`
ICDPEnabled bool `json:"iCDPEnabled"`
TantorMigrated bool `json:"tantorMigrated"`
Dsid string `json:"dsid"`
HsaEnabled bool `json:"hsaEnabled"`
IsHideMyEmailSubscriptionActive bool `json:"isHideMyEmailSubscriptionActive"`
IroncadeMigrated bool `json:"ironcadeMigrated"`
Locale string `json:"locale"`
BrZoneConsolidated bool `json:"brZoneConsolidated"`
ICDRSCapableDeviceList string `json:"ICDRSCapableDeviceList"`
IsManagedAppleID bool `json:"isManagedAppleID"`
IsCustomDomainsFeatureAvailable bool `json:"isCustomDomainsFeatureAvailable"`
IsHideMyEmailFeatureAvailable bool `json:"isHideMyEmailFeatureAvailable"`
ContinueOnDeviceEligibleDeviceInfo []string `json:"ContinueOnDeviceEligibleDeviceInfo"`
Gilligvited bool `json:"gilligvited"`
AppleIDAliases []interface{} `json:"appleIdAliases"`
UbiquityEOLEnabled bool `json:"ubiquityEOLEnabled"`
IsPaidDeveloper bool `json:"isPaidDeveloper"`
CountryCode string `json:"countryCode"`
NotificationID string `json:"notificationId"`
PrimaryEmailVerified bool `json:"primaryEmailVerified"`
ADsID string `json:"aDsID"`
Locked bool `json:"locked"`
ICDRSCapableDeviceCount int `json:"ICDRSCapableDeviceCount"`
HasICloudQualifyingDevice bool `json:"hasICloudQualifyingDevice"`
PrimaryEmail string `json:"primaryEmail"`
AppleIDEntries []struct {
IsPrimary bool `json:"isPrimary"`
Type string `json:"type"`
Value string `json:"value"`
} `json:"appleIdEntries"`
GilliganEnabled bool `json:"gilligan-enabled"`
IsWebAccessAllowed bool `json:"isWebAccessAllowed"`
FullName string `json:"fullName"`
MailFlags struct {
IsThreadingAvailable bool `json:"isThreadingAvailable"`
IsSearchV2Provisioned bool `json:"isSearchV2Provisioned"`
SCKMail bool `json:"sCKMail"`
IsMppSupportedInCurrentCountry bool `json:"isMppSupportedInCurrentCountry"`
} `json:"mailFlags"`
LanguageCode string `json:"languageCode"`
AppleID string `json:"appleId"`
HasUnreleasedOS bool `json:"hasUnreleasedOS"`
AnalyticsOptInStatus bool `json:"analyticsOptInStatus"`
FirstName string `json:"firstName"`
ICloudAppleIDAlias string `json:"iCloudAppleIdAlias"`
NotesMigrated bool `json:"notesMigrated"`
BeneficiaryInfo struct {
IsBeneficiary bool `json:"isBeneficiary"`
} `json:"beneficiaryInfo"`
HasPaymentInfo bool `json:"hasPaymentInfo"`
PcsDelet bool `json:"pcsDelet"`
AppleIDAlias string `json:"appleIdAlias"`
BrMigrated bool `json:"brMigrated"`
StatusCode int `json:"statusCode"`
FamilyEligible bool `json:"familyEligible"`
}
// ValidateDataApp represents an app
type ValidateDataApp struct {
CanLaunchWithOneFactor bool `json:"canLaunchWithOneFactor"`
IsQualifiedForBeta bool `json:"isQualifiedForBeta"`
}
// WebService represents a web service
type webService struct {
PcsRequired bool `json:"pcsRequired"`
URL string `json:"url"`
UploadURL string `json:"uploadUrl"`
Status string `json:"status"`
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,18 @@
//go:build !plan9 && !solaris
package iclouddrive_test
import (
"testing"
"github.com/rclone/rclone/backend/iclouddrive"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestICloudDrive:",
NilObject: (*iclouddrive.Object)(nil),
})
}

View File

@@ -0,0 +1,7 @@
// Build for iclouddrive for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build plan9 || solaris
// Package iclouddrive implements the iCloud Drive backend
package iclouddrive

View File

@@ -75,7 +75,7 @@ type MoveFolderParam struct {
DestinationPath string `validate:"nonzero" json:"destinationPath"`
}
// JobIDResponse respresents response struct with JobID for folder operations
// JobIDResponse represents response struct with JobID for folder operations
type JobIDResponse struct {
JobID string `json:"jobId"`
}

View File

@@ -56,7 +56,7 @@ func (ik *ImageKit) URL(params URLParam) (string, error) {
var expires = strconv.FormatInt(now+params.ExpireSeconds, 10)
var path = strings.Replace(resultURL, endpoint, "", 1)
path = path + expires
path += expires
mac := hmac.New(sha1.New, []byte(ik.PrivateKey))
mac.Write([]byte(path))
signature := hex.EncodeToString(mac.Sum(nil))

View File

@@ -151,6 +151,19 @@ Owner is able to add custom keys. Metadata feature grabs all the keys including
Help: "Host of InternetArchive Frontend.\n\nLeave blank for default value.",
Default: "https://archive.org",
Advanced: true,
}, {
Name: "item_metadata",
Help: `Metadata to be set on the IA item, this is different from file-level metadata that can be set using --metadata-set.
Format is key=value and the 'x-archive-meta-' prefix is automatically added.`,
Default: []string{},
Hide: fs.OptionHideConfigurator,
Advanced: true,
}, {
Name: "item_derive",
Help: `Whether to trigger derive on the IA item or not. If set to false, the item will not be derived by IA upon upload.
The derive process produces a number of secondary files from an upload to make an upload more usable on the web.
Setting this to false is useful for uploading files that are already in a format that IA can display or reduce burden on IA's infrastructure.`,
Default: true,
}, {
Name: "disable_checksum",
Help: `Don't ask the server to test against MD5 checksum calculated by rclone.
@@ -201,6 +214,8 @@ type Options struct {
Endpoint string `config:"endpoint"`
FrontEndpoint string `config:"front_endpoint"`
DisableChecksum bool `config:"disable_checksum"`
ItemMetadata []string `config:"item_metadata"`
ItemDerive bool `config:"item_derive"`
WaitArchive fs.Duration `config:"wait_archive"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -790,17 +805,23 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
"x-amz-filemeta-rclone-update-track": updateTracker,
// we add some more headers for intuitive actions
"x-amz-auto-make-bucket": "1", // create an item if does not exist, do nothing if already
"x-archive-auto-make-bucket": "1", // same as above in IAS3 original way
"x-archive-keep-old-version": "0", // do not keep old versions (a.k.a. trashes in other clouds)
"x-archive-meta-mediatype": "data", // mark media type of the uploading file as "data"
"x-archive-queue-derive": "0", // skip derivation process (e.g. encoding to smaller files, OCR on PDFs)
"x-archive-cascade-delete": "1", // enable "cascate delete" (delete all derived files in addition to the file itself)
"x-amz-auto-make-bucket": "1", // create an item if does not exist, do nothing if already
"x-archive-auto-make-bucket": "1", // same as above in IAS3 original way
"x-archive-keep-old-version": "0", // do not keep old versions (a.k.a. trashes in other clouds)
"x-archive-cascade-delete": "1", // enable "cascate delete" (delete all derived files in addition to the file itself)
}
if size >= 0 {
headers["Content-Length"] = fmt.Sprintf("%d", size)
headers["x-archive-size-hint"] = fmt.Sprintf("%d", size)
}
// This is IA's ITEM metadata, not file metadata
headers, err = o.appendItemMetadataHeaders(headers, o.fs.opt)
if err != nil {
return err
}
var mdata fs.Metadata
mdata, err = fs.GetMetadataOptions(ctx, o.fs, src, options)
if err == nil && mdata != nil {
@@ -863,6 +884,51 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return err
}
func (o *Object) appendItemMetadataHeaders(headers map[string]string, options Options) (newHeaders map[string]string, err error) {
metadataCounter := make(map[string]int)
metadataValues := make(map[string][]string)
// First pass: count occurrences and collect values
for _, v := range options.ItemMetadata {
parts := strings.SplitN(v, "=", 2)
if len(parts) != 2 {
return newHeaders, errors.New("item metadata key=value should be in the form key=value")
}
key, value := parts[0], parts[1]
metadataCounter[key]++
metadataValues[key] = append(metadataValues[key], value)
}
// Second pass: add headers with appropriate prefixes
for key, count := range metadataCounter {
if count == 1 {
// Only one occurrence, use x-archive-meta-
headers[fmt.Sprintf("x-archive-meta-%s", key)] = metadataValues[key][0]
} else {
// Multiple occurrences, use x-archive-meta01-, x-archive-meta02-, etc.
for i, value := range metadataValues[key] {
headers[fmt.Sprintf("x-archive-meta%02d-%s", i+1, key)] = value
}
}
}
if o.fs.opt.ItemDerive {
headers["x-archive-queue-derive"] = "1"
} else {
headers["x-archive-queue-derive"] = "0"
}
fs.Debugf(o, "Setting IA item derive: %t", o.fs.opt.ItemDerive)
for k, v := range headers {
if strings.HasPrefix(k, "x-archive-meta") {
fs.Debugf(o, "Setting IA item metadata: %s=%s", k, v)
}
}
return headers, nil
}
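As a worked example of the expansion above, item_metadata entries of collection=test_collection, subject=tag1 and subject=tag2 would yield (illustrative values):
// "x-archive-meta-collection": "test_collection"   single occurrence -> plain x-archive-meta- prefix
// "x-archive-meta01-subject":  "tag1"              repeated key -> numbered x-archive-metaNN- prefixes
// "x-archive-meta02-subject":  "tag2"
// "x-archive-queue-derive":    "1" or "0"          taken from the item_derive option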
// Remove an object
func (o *Object) Remove(ctx context.Context) (err error) {
bucket, bucketPath := o.split()

View File

@@ -277,11 +277,9 @@ machines.`)
m.Set(configClientID, teliaseCloudClientID)
m.Set(configTokenURL, teliaseCloudTokenURL)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: teliaseCloudAuthURL,
TokenURL: teliaseCloudTokenURL,
},
OAuth2Config: &oauthutil.Config{
AuthURL: teliaseCloudAuthURL,
TokenURL: teliaseCloudTokenURL,
ClientID: teliaseCloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
@@ -292,11 +290,9 @@ machines.`)
m.Set(configClientID, telianoCloudClientID)
m.Set(configTokenURL, telianoCloudTokenURL)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: telianoCloudAuthURL,
TokenURL: telianoCloudTokenURL,
},
OAuth2Config: &oauthutil.Config{
AuthURL: telianoCloudAuthURL,
TokenURL: telianoCloudTokenURL,
ClientID: telianoCloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
@@ -307,11 +303,9 @@ machines.`)
m.Set(configClientID, tele2CloudClientID)
m.Set(configTokenURL, tele2CloudTokenURL)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: tele2CloudAuthURL,
TokenURL: tele2CloudTokenURL,
},
OAuth2Config: &oauthutil.Config{
AuthURL: tele2CloudAuthURL,
TokenURL: tele2CloudTokenURL,
ClientID: tele2CloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
@@ -322,11 +316,9 @@ machines.`)
m.Set(configClientID, onlimeCloudClientID)
m.Set(configTokenURL, onlimeCloudTokenURL)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: onlimeCloudAuthURL,
TokenURL: onlimeCloudTokenURL,
},
OAuth2Config: &oauthutil.Config{
AuthURL: onlimeCloudAuthURL,
TokenURL: onlimeCloudTokenURL,
ClientID: onlimeCloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
@@ -924,19 +916,17 @@ func getOAuthClient(ctx context.Context, name string, m configmap.Mapper) (oAuth
}
baseClient := fshttp.NewClient(ctx)
oauthConfig := &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: defaultTokenURL,
TokenURL: defaultTokenURL,
},
oauthConfig := &oauthutil.Config{
AuthURL: defaultTokenURL,
TokenURL: defaultTokenURL,
}
if ver == configVersion {
oauthConfig.ClientID = defaultClientID
// if custom endpoints are set use them else stick with defaults
if tokenURL, ok := m.Get(configTokenURL); ok {
oauthConfig.Endpoint.TokenURL = tokenURL
oauthConfig.TokenURL = tokenURL
// jottacloud is weird. we need to use the tokenURL as authURL
oauthConfig.Endpoint.AuthURL = tokenURL
oauthConfig.AuthURL = tokenURL
}
} else if ver == legacyConfigVersion {
clientID, ok := m.Get(configClientID)
@@ -950,8 +940,8 @@ func getOAuthClient(ctx context.Context, name string, m configmap.Mapper) (oAuth
oauthConfig.ClientID = clientID
oauthConfig.ClientSecret = obscure.MustReveal(clientSecret)
oauthConfig.Endpoint.TokenURL = legacyTokenURL
oauthConfig.Endpoint.AuthURL = legacyTokenURL
oauthConfig.TokenURL = legacyTokenURL
oauthConfig.AuthURL = legacyTokenURL
// add the request filter to fix token refresh
if do, ok := baseClient.Transport.(interface {
@@ -1555,7 +1545,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
info, err := f.copyOrMove(ctx, "mv", srcObj.filePath(), remote)
if err != nil && meta != nil {
if err == nil && meta != nil {
createTime, createTimeMeta := srcObj.parseFsMetadataTime(meta, "btime")
if !createTimeMeta {
createTime = srcObj.createTime

View File

@@ -59,7 +59,7 @@ func (f *Fs) InternalTestMetadata(t *testing.T) {
//"utime" - read-only
//"content-type" - read-only
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, contents, true, "text/html", metadata)
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, false, contents, true, "text/html", metadata)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()

View File

@@ -5,18 +5,18 @@ package local
import (
"context"
"fmt"
"syscall"
"unsafe"
"github.com/rclone/rclone/fs"
"golang.org/x/sys/windows"
)
var getFreeDiskSpace = syscall.NewLazyDLL("kernel32.dll").NewProc("GetDiskFreeSpaceExW")
var getFreeDiskSpace = windows.NewLazySystemDLL("kernel32.dll").NewProc("GetDiskFreeSpaceExW")
// About gets quota information
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var available, total, free int64
root, e := syscall.UTF16PtrFromString(f.root)
root, e := windows.UTF16PtrFromString(f.root)
if e != nil {
return nil, fmt.Errorf("failed to read disk usage: %w", e)
}
@@ -26,7 +26,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
uintptr(unsafe.Pointer(&total)), // lpTotalNumberOfBytes
uintptr(unsafe.Pointer(&free)), // lpTotalNumberOfFreeBytes
)
if e1 != syscall.Errno(0) {
if e1 != windows.Errno(0) {
return nil, fmt.Errorf("failed to read disk usage: %w", e1)
}
usage := &fs.Usage{

View File

@@ -0,0 +1,93 @@
//go:build darwin && cgo
// Package local provides a filesystem interface
package local
import (
"context"
"fmt"
"path/filepath"
"runtime"
"github.com/go-darwin/apfs"
"github.com/rclone/rclone/fs"
)
// Copy src to this remote using server-side copy operations.
//
// # This is stored with the remote path given
//
// # It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
if runtime.GOOS != "darwin" || f.opt.NoClone {
return nil, fs.ErrorCantCopy
}
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't clone - not same remote type")
return nil, fs.ErrorCantCopy
}
if f.opt.TranslateSymlinks && srcObj.translatedLink { // in --links mode, use cloning only for regular files
return nil, fs.ErrorCantCopy
}
// Fetch metadata if --metadata is in use
meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
if err != nil {
return nil, fmt.Errorf("copy: failed to read metadata: %w", err)
}
// Create destination
dstObj := f.newObject(remote)
err = dstObj.mkdirAll()
if err != nil {
return nil, err
}
srcPath := srcObj.path
if f.opt.FollowSymlinks { // in --copy-links mode, find the real file being pointed to and pass that in instead
srcPath, err = filepath.EvalSymlinks(srcPath)
if err != nil {
return nil, err
}
}
err = Clone(srcPath, f.localPath(remote))
if err != nil {
return nil, err
}
// Set metadata if --metadata is in use
if meta != nil {
err = dstObj.writeMetadata(meta)
if err != nil {
return nil, fmt.Errorf("copy: failed to set metadata: %w", err)
}
}
return f.NewObject(ctx, remote)
}
// Clone uses APFS cloning if possible, otherwise falls back to copying (with full metadata preservation)
// note that this is closely related to unix.Clonefile(src, dst, unix.CLONE_NOFOLLOW) but not 100% identical
// https://opensource.apple.com/source/copyfile/copyfile-173.40.2/copyfile.c.auto.html
func Clone(src, dst string) error {
state := apfs.CopyFileStateAlloc()
defer func() {
if err := apfs.CopyFileStateFree(state); err != nil {
fs.Errorf(dst, "free state error: %v", err)
}
}()
cloned, err := apfs.CopyFile(src, dst, state, apfs.COPYFILE_CLONE)
fs.Debugf(dst, "isCloned: %v, error: %v", cloned, err)
return err
}
// Check the interfaces are satisfied
var (
_ fs.Copier = &Fs{}
)

backend/local/lchmod.go

@@ -0,0 +1,16 @@
//go:build windows || plan9 || js || linux
package local
import "os"
const haveLChmod = false
// lChmod changes the mode of the named file to mode. If the file is a symbolic
// link, it changes the link, not the target. If there is an error,
// it will be of type *PathError.
func lChmod(name string, mode os.FileMode) error {
// Can't do this safely on this OS - chmoding a symlink always
// changes the destination.
return nil
}


@@ -0,0 +1,41 @@
//go:build !windows && !plan9 && !js && !linux
package local
import (
"os"
"syscall"
"golang.org/x/sys/unix"
)
const haveLChmod = true
// syscallMode returns the syscall-specific mode bits from Go's portable mode bits.
//
// Borrowed from the syscall source since it isn't public.
func syscallMode(i os.FileMode) (o uint32) {
o |= uint32(i.Perm())
if i&os.ModeSetuid != 0 {
o |= syscall.S_ISUID
}
if i&os.ModeSetgid != 0 {
o |= syscall.S_ISGID
}
if i&os.ModeSticky != 0 {
o |= syscall.S_ISVTX
}
return o
}
// lChmod changes the mode of the named file to mode. If the file is a symbolic
// link, it changes the link, not the target. If there is an error,
// it will be of type *PathError.
func lChmod(name string, mode os.FileMode) error {
// NB linux does not support AT_SYMLINK_NOFOLLOW as a parameter to fchmodat
// and returns ENOTSUP if you try, so we don't support this on linux
if e := unix.Fchmodat(unix.AT_FDCWD, name, syscallMode(mode), unix.AT_SYMLINK_NOFOLLOW); e != nil {
return &os.PathError{Op: "lChmod", Path: name, Err: e}
}
return nil
}
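For context, a minimal unix-only sketch (not part of the diff) showing what the mode-bit translation used by lChmod above actually produces; the sample mode is arbitrary.

package main

import (
	"fmt"
	"os"
	"syscall"
)

// syscallMode mirrors the helper above: portable Go mode bits -> raw syscall mode bits.
func syscallMode(i os.FileMode) (o uint32) {
	o |= uint32(i.Perm())
	if i&os.ModeSetuid != 0 {
		o |= syscall.S_ISUID
	}
	if i&os.ModeSetgid != 0 {
		o |= syscall.S_ISGID
	}
	if i&os.ModeSticky != 0 {
		o |= syscall.S_ISVTX
	}
	return o
}

func main() {
	// 0755 plus the setuid bit comes out as 04755 in raw syscall form.
	fmt.Printf("%o\n", syscallMode(os.ModeSetuid|0o755)) // prints 4755
}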


@@ -1,4 +1,4 @@
//go:build windows || plan9 || js
//go:build plan9 || js
package local


@@ -0,0 +1,19 @@
//go:build windows
package local
import (
"time"
)
const haveLChtimes = true
// lChtimes changes the access and modification times of the named
// link, similar to the Unix utime() or utimes() functions.
//
// The underlying filesystem may truncate or round the values to a
// less precise time unit.
// If there is an error, it will be of type *PathError.
func lChtimes(name string, atime time.Time, mtime time.Time) error {
return setTimes(name, atime, mtime, time.Time{}, true)
}


@@ -32,9 +32,10 @@ import (
)
// Constants
const devUnset = 0xdeadbeefcafebabe // a device id meaning it is unset
const linkSuffix = ".rclonelink" // The suffix added to a translated symbolic link
const useReadDir = (runtime.GOOS == "windows" || runtime.GOOS == "plan9") // these OSes read FileInfos directly
const (
devUnset = 0xdeadbeefcafebabe // a device id meaning it is unset
useReadDir = (runtime.GOOS == "windows" || runtime.GOOS == "plan9") // these OSes read FileInfos directly
)
// timeType allows the user to choose what exactly ModTime() returns
type timeType = fs.Enum[timeTypeChoices]
@@ -78,41 +79,44 @@ supported by all file systems) under the "user.*" prefix.
Metadata is supported on files and directories.
`,
},
Options: []fs.Option{{
Name: "nounc",
Help: "Disable UNC (long path names) conversion on Windows.",
Default: false,
Advanced: runtime.GOOS != "windows",
Examples: []fs.OptionExample{{
Value: "true",
Help: "Disables long file names.",
}},
}, {
Name: "copy_links",
Help: "Follow symlinks and copy the pointed to item.",
Default: false,
NoPrefix: true,
ShortOpt: "L",
Advanced: true,
}, {
Name: "links",
Help: "Translate symlinks to/from regular files with a '" + linkSuffix + "' extension.",
Default: false,
NoPrefix: true,
ShortOpt: "l",
Advanced: true,
}, {
Name: "skip_links",
Help: `Don't warn about skipped symlinks.
Options: []fs.Option{
{
Name: "nounc",
Help: "Disable UNC (long path names) conversion on Windows.",
Default: false,
Advanced: runtime.GOOS != "windows",
Examples: []fs.OptionExample{{
Value: "true",
Help: "Disables long file names.",
}},
},
{
Name: "copy_links",
Help: "Follow symlinks and copy the pointed to item.",
Default: false,
NoPrefix: true,
ShortOpt: "L",
Advanced: true,
},
{
Name: "links",
Help: "Translate symlinks to/from regular files with a '" + fs.LinkSuffix + "' extension for the local backend.",
Default: false,
Advanced: true,
},
{
Name: "skip_links",
Help: `Don't warn about skipped symlinks.
This flag disables warning messages on skipped symlinks or junction
points, as you explicitly acknowledge that they should be skipped.`,
Default: false,
NoPrefix: true,
Advanced: true,
}, {
Name: "zero_size_links",
Help: `Assume the Stat size of links is zero (and read them instead) (deprecated).
Default: false,
NoPrefix: true,
Advanced: true,
},
{
Name: "zero_size_links",
Help: `Assume the Stat size of links is zero (and read them instead) (deprecated).
Rclone used to use the Stat size of links as the link size, but this fails in quite a few places:
@@ -122,11 +126,12 @@ Rclone used to use the Stat size of links as the link size, but this fails in qu
So rclone now always reads the link.
`,
Default: false,
Advanced: true,
}, {
Name: "unicode_normalization",
Help: `Apply unicode NFC normalization to paths and filenames.
Default: false,
Advanced: true,
},
{
Name: "unicode_normalization",
Help: `Apply unicode NFC normalization to paths and filenames.
This flag can be used to normalize file names into unicode NFC form
that are read from the local filesystem.
@@ -140,11 +145,12 @@ some OSes.
Note that rclone compares filenames with unicode normalization in the sync
routine so this flag shouldn't normally be used.`,
Default: false,
Advanced: true,
}, {
Name: "no_check_updated",
Help: `Don't check to see if the files change during upload.
Default: false,
Advanced: true,
},
{
Name: "no_check_updated",
Help: `Don't check to see if the files change during upload.
Normally rclone checks the size and modification time of files as they
are being uploaded and aborts with a message which starts "can't copy -
@@ -175,68 +181,96 @@ directory listing (where the initial stat value comes from on Windows)
and when stat is called on them directly. Other copy tools always use
the direct stat value and setting this flag will disable that.
`,
Default: false,
Advanced: true,
}, {
Name: "one_file_system",
Help: "Don't cross filesystem boundaries (unix/macOS only).",
Default: false,
NoPrefix: true,
ShortOpt: "x",
Advanced: true,
}, {
Name: "case_sensitive",
Help: `Force the filesystem to report itself as case sensitive.
Default: false,
Advanced: true,
},
{
Name: "one_file_system",
Help: "Don't cross filesystem boundaries (unix/macOS only).",
Default: false,
NoPrefix: true,
ShortOpt: "x",
Advanced: true,
},
{
Name: "case_sensitive",
Help: `Force the filesystem to report itself as case sensitive.
Normally the local backend declares itself as case insensitive on
Windows/macOS and case sensitive for everything else. Use this flag
to override the default choice.`,
Default: false,
Advanced: true,
}, {
Name: "case_insensitive",
Help: `Force the filesystem to report itself as case insensitive.
Default: false,
Advanced: true,
},
{
Name: "case_insensitive",
Help: `Force the filesystem to report itself as case insensitive.
Normally the local backend declares itself as case insensitive on
Windows/macOS and case sensitive for everything else. Use this flag
to override the default choice.`,
Default: false,
Advanced: true,
}, {
Name: "no_preallocate",
Help: `Disable preallocation of disk space for transferred files.
Default: false,
Advanced: true,
},
{
Name: "no_clone",
Help: `Disable reflink cloning for server-side copies.
Normally, for local-to-local transfers, rclone will "clone" the file when
possible, and fall back to "copying" only when cloning is not supported.
Cloning creates a shallow copy (or "reflink") which initially shares blocks with
the original file. Unlike a "hardlink", the two files are independent and
neither will affect the other if subsequently modified.
Cloning is usually preferable to copying, as it is much faster and is
deduplicated by default (i.e. having two identical files does not consume more
storage than having just one.) However, for use cases where data redundancy is
preferable, --local-no-clone can be used to disable cloning and force "deep" copies.
Currently, cloning is only supported when using APFS on macOS (support for other
platforms may be added in the future.)`,
Default: false,
Advanced: true,
},
{
Name: "no_preallocate",
Help: `Disable preallocation of disk space for transferred files.
Preallocation of disk space helps prevent filesystem fragmentation.
However, some virtual filesystem layers (such as Google Drive File
Stream) may incorrectly set the actual file size equal to the
preallocated space, causing checksum and file size checks to fail.
Use this flag to disable preallocation.`,
Default: false,
Advanced: true,
}, {
Name: "no_sparse",
Help: `Disable sparse files for multi-thread downloads.
Default: false,
Advanced: true,
},
{
Name: "no_sparse",
Help: `Disable sparse files for multi-thread downloads.
On Windows platforms rclone will make sparse files when doing
multi-thread downloads. This avoids long pauses on large files where
the OS zeros the file. However sparse files may be undesirable as they
cause disk fragmentation and can be slow to work with.`,
Default: false,
Advanced: true,
}, {
Name: "no_set_modtime",
Help: `Disable setting modtime.
Default: false,
Advanced: true,
},
{
Name: "no_set_modtime",
Help: `Disable setting modtime.
Normally rclone updates modification time of files after they are done
uploading. This can cause permissions issues on Linux platforms when
the user rclone is running as does not own the file uploaded, such as
when copying to a CIFS mount owned by another user. If this option is
enabled, rclone will no longer update the modtime after copying a file.`,
Default: false,
Advanced: true,
}, {
Name: "time_type",
Help: `Set what kind of time is returned.
Default: false,
Advanced: true,
},
{
Name: "time_type",
Help: `Set what kind of time is returned.
Normally rclone does all operations on the mtime or Modification time.
@@ -255,27 +289,29 @@ will silently replace it with the modification time which all OSes support.
Note that setting the time will still set the modified time so this is
only useful for reading.
`,
Default: mTime,
Advanced: true,
Examples: []fs.OptionExample{{
Value: mTime.String(),
Help: "The last modification time.",
}, {
Value: aTime.String(),
Help: "The last access time.",
}, {
Value: bTime.String(),
Help: "The creation time.",
}, {
Value: cTime.String(),
Help: "The last status change time.",
}},
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: encoder.OS,
}},
Default: mTime,
Advanced: true,
Examples: []fs.OptionExample{{
Value: mTime.String(),
Help: "The last modification time.",
}, {
Value: aTime.String(),
Help: "The last access time.",
}, {
Value: bTime.String(),
Help: "The creation time.",
}, {
Value: cTime.String(),
Help: "The last status change time.",
}},
},
{
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: encoder.OS,
},
},
}
fs.Register(fsi)
}
@@ -296,6 +332,7 @@ type Options struct {
NoSetModTime bool `config:"no_set_modtime"`
TimeType timeType `config:"time_type"`
Enc encoder.MultiEncoder `config:"encoding"`
NoClone bool `config:"no_clone"`
}
// Fs represents a local filesystem rooted at root
@@ -339,17 +376,22 @@ type Directory struct {
var (
errLinksAndCopyLinks = errors.New("can't use -l/--links with -L/--copy-links")
errLinksNeedsSuffix = errors.New("need \"" + linkSuffix + "\" suffix to refer to symlink when using -l/--links")
errLinksNeedsSuffix = errors.New("need \"" + fs.LinkSuffix + "\" suffix to refer to symlink when using -l/--links")
)
// NewFs constructs an Fs from the path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
ci := fs.GetConfig(ctx)
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
// Override --local-links with --links if set
if ci.Links {
opt.TranslateSymlinks = true
}
if opt.TranslateSymlinks && opt.FollowSymlinks {
return nil, errLinksAndCopyLinks
}
@@ -384,6 +426,10 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if opt.FollowSymlinks {
f.lstat = os.Stat
}
if opt.NoClone {
// Disable server-side copy when --local-no-clone is set
f.features.Copy = nil
}
// Check to see if this points to a file
fi, err := f.lstat(f.root)
@@ -391,9 +437,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.dev = readDevice(fi, f.opt.OneFileSystem)
}
// Check to see if this is a .rclonelink if not found
hasLinkSuffix := strings.HasSuffix(f.root, linkSuffix)
hasLinkSuffix := strings.HasSuffix(f.root, fs.LinkSuffix)
if hasLinkSuffix && opt.TranslateSymlinks && os.IsNotExist(err) {
fi, err = f.lstat(strings.TrimSuffix(f.root, linkSuffix))
fi, err = f.lstat(strings.TrimSuffix(f.root, fs.LinkSuffix))
}
if err == nil && f.isRegular(fi.Mode()) {
// Handle the odd case, that a symlink was specified by name without the link suffix
@@ -464,8 +510,8 @@ func (f *Fs) caseInsensitive() bool {
//
// for regular files, localPath is returned unchanged
func translateLink(remote, localPath string) (newLocalPath string, isTranslatedLink bool) {
isTranslatedLink = strings.HasSuffix(remote, linkSuffix)
newLocalPath = strings.TrimSuffix(localPath, linkSuffix)
isTranslatedLink = strings.HasSuffix(remote, fs.LinkSuffix)
newLocalPath = strings.TrimSuffix(localPath, fs.LinkSuffix)
return newLocalPath, isTranslatedLink
}
@@ -648,7 +694,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
} else {
// Check whether this link should be translated
if f.opt.TranslateSymlinks && fi.Mode()&os.ModeSymlink != 0 {
newRemote += linkSuffix
newRemote += fs.LinkSuffix
}
// Don't include non directory if not included
// we leave directory filtering to the layer above
@@ -1568,32 +1614,47 @@ func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
}
func cleanRootPath(s string, noUNC bool, enc encoder.MultiEncoder) string {
if runtime.GOOS != "windows" || !strings.HasPrefix(s, "\\") {
if !filepath.IsAbs(s) {
s2, err := filepath.Abs(s)
if err == nil {
s = s2
}
} else {
s = filepath.Clean(s)
}
}
var vol string
if runtime.GOOS == "windows" {
s = filepath.ToSlash(s)
vol := filepath.VolumeName(s)
vol = filepath.VolumeName(s)
if vol == `\\?` && len(s) >= 6 {
// `\\?\C:`
vol = s[:6]
}
s = vol + enc.FromStandardPath(s[len(vol):])
s = filepath.FromSlash(s)
if !noUNC {
// Convert to UNC
s = file.UNCPath(s)
}
return s
s = s[len(vol):]
}
// Don't use FromStandardPath. Make sure Dot (`.`, `..`) as name will not be reencoded
// Take care of the case Standard: .// (the first dot means current directory)
if enc != encoder.Standard {
s = filepath.ToSlash(s)
parts := strings.Split(s, "/")
encoded := make([]string, len(parts))
changed := false
for i, p := range parts {
if (p == ".") || (p == "..") {
encoded[i] = p
continue
}
part := enc.FromStandardName(p)
changed = changed || part != p
encoded[i] = part
}
if changed {
s = strings.Join(encoded, "/")
}
s = filepath.FromSlash(s)
}
if runtime.GOOS == "windows" {
s = vol + s
}
s2, err := filepath.Abs(s)
if err == nil {
s = s2
}
if !noUNC {
// Convert to UNC. It does nothing on non windows platforms.
s = file.UNCPath(s)
}
s = enc.FromStandardPath(s)
return s
}
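The per-segment encoding loop in cleanRootPath above deliberately leaves "." and ".." untouched. A small standalone sketch of that idea follows; encodeSegment is a made-up stand-in for enc.FromStandardName, used only to make the effect visible.

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// encodeSegment stands in for enc.FromStandardName; it just uppercases here.
func encodeSegment(p string) string { return strings.ToUpper(p) }

func main() {
	s := filepath.ToSlash("./tmp/../data/file.txt")
	parts := strings.Split(s, "/")
	for i, p := range parts {
		if p == "." || p == ".." {
			continue // never re-encode the special dot segments
		}
		parts[i] = encodeSegment(p)
	}
	fmt.Println(strings.Join(parts, "/")) // ./TMP/../DATA/FILE.TXT
}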


@@ -73,7 +73,6 @@ func TestUpdatingCheck(t *testing.T) {
r.WriteFile(filePath, "content updated", time.Now())
_, err = in.Read(buf)
require.NoError(t, err)
}
// Test corrupted on transfer
@@ -111,7 +110,7 @@ func TestSymlink(t *testing.T) {
require.NoError(t, lChtimes(symlinkPath, modTime2, modTime2))
// Object viewed as symlink
file2 := fstest.NewItem("symlink.txt"+linkSuffix, "file.txt", modTime2)
file2 := fstest.NewItem("symlink.txt"+fs.LinkSuffix, "file.txt", modTime2)
// Object viewed as destination
file2d := fstest.NewItem("symlink.txt", "hello", modTime1)
@@ -140,7 +139,7 @@ func TestSymlink(t *testing.T) {
// Create a symlink
modTime3 := fstest.Time("2002-03-03T04:05:10.123123123Z")
file3 := r.WriteObjectTo(ctx, r.Flocal, "symlink2.txt"+linkSuffix, "file.txt", modTime3, false)
file3 := r.WriteObjectTo(ctx, r.Flocal, "symlink2.txt"+fs.LinkSuffix, "file.txt", modTime3, false)
fstest.CheckListingWithPrecision(t, r.Flocal, []fstest.Item{file1, file2, file3}, nil, fs.ModTimeNotSupported)
if haveLChtimes {
r.CheckLocalItems(t, file1, file2, file3)
@@ -156,9 +155,9 @@ func TestSymlink(t *testing.T) {
assert.Equal(t, "file.txt", linkText)
// Check that NewObject gets the correct object
o, err := r.Flocal.NewObject(ctx, "symlink2.txt"+linkSuffix)
o, err := r.Flocal.NewObject(ctx, "symlink2.txt"+fs.LinkSuffix)
require.NoError(t, err)
assert.Equal(t, "symlink2.txt"+linkSuffix, o.Remote())
assert.Equal(t, "symlink2.txt"+fs.LinkSuffix, o.Remote())
assert.Equal(t, int64(8), o.Size())
// Check that NewObject doesn't see the non suffixed version
@@ -166,7 +165,7 @@ func TestSymlink(t *testing.T) {
require.Equal(t, fs.ErrorObjectNotFound, err)
// Check that NewFs works with the suffixed version and --links
f2, err := NewFs(ctx, "local", filepath.Join(dir, "symlink2.txt"+linkSuffix), configmap.Simple{
f2, err := NewFs(ctx, "local", filepath.Join(dir, "symlink2.txt"+fs.LinkSuffix), configmap.Simple{
"links": "true",
})
require.Equal(t, fs.ErrorIsFile, err)
@@ -224,7 +223,7 @@ func TestHashOnUpdate(t *testing.T) {
assert.Equal(t, "9a0364b9e99bb480dd25e1f0284c8555", md5)
// Reupload it with different contents but same size and timestamp
var b = bytes.NewBufferString("CONTENT")
b := bytes.NewBufferString("CONTENT")
src := object.NewStaticObjectInfo(filePath, when, int64(b.Len()), true, nil, f)
err = o.Update(ctx, b, src)
require.NoError(t, err)
@@ -269,22 +268,66 @@ func TestMetadata(t *testing.T) {
r := fstest.NewRun(t)
const filePath = "metafile.txt"
when := time.Now()
const dayLength = len("2001-01-01")
whenRFC := when.Format(time.RFC3339Nano)
r.WriteFile(filePath, "metadata file contents", when)
f := r.Flocal.(*Fs)
// Set fs into "-l" / "--links" mode
f.opt.TranslateSymlinks = true
// Write a symlink to the file
symlinkPath := "metafile-link.txt"
osSymlinkPath := filepath.Join(f.root, symlinkPath)
symlinkPath += fs.LinkSuffix
require.NoError(t, os.Symlink(filePath, osSymlinkPath))
symlinkModTime := fstest.Time("2002-02-03T04:05:10.123123123Z")
require.NoError(t, lChtimes(osSymlinkPath, symlinkModTime, symlinkModTime))
// Get the object
obj, err := f.NewObject(ctx, filePath)
require.NoError(t, err)
o := obj.(*Object)
// Get the symlink object
symlinkObj, err := f.NewObject(ctx, symlinkPath)
require.NoError(t, err)
symlinkO := symlinkObj.(*Object)
// Record metadata for o
oMeta, err := o.Metadata(ctx)
require.NoError(t, err)
// Test symlink first to check it doesn't mess up file
t.Run("Symlink", func(t *testing.T) {
testMetadata(t, r, symlinkO, symlinkModTime)
})
// Read it again
oMetaNew, err := o.Metadata(ctx)
require.NoError(t, err)
// Check that operating on the symlink didn't change the file it was pointing to
// See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
assert.Equal(t, oMeta, oMetaNew, "metadata setting on symlink messed up file")
// Now run the same tests on the file
t.Run("File", func(t *testing.T) {
testMetadata(t, r, o, when)
})
}
func testMetadata(t *testing.T, r *fstest.Run, o *Object, when time.Time) {
ctx := context.Background()
whenRFC := when.Format(time.RFC3339Nano)
const dayLength = len("2001-01-01")
f := r.Flocal.(*Fs)
features := f.Features()
var hasXID, hasAtime, hasBtime bool
var hasXID, hasAtime, hasBtime, canSetXattrOnLinks bool
switch runtime.GOOS {
case "darwin", "freebsd", "netbsd", "linux":
hasXID, hasAtime, hasBtime = true, true, true
canSetXattrOnLinks = runtime.GOOS != "linux"
case "openbsd", "solaris":
hasXID, hasAtime = true, true
case "windows":
@@ -307,6 +350,10 @@ func TestMetadata(t *testing.T) {
require.NoError(t, err)
assert.Nil(t, m)
if !canSetXattrOnLinks && o.translatedLink {
t.Skip("Skip remainder of test as can't set xattr on symlinks on this OS")
}
inM := fs.Metadata{
"potato": "chips",
"cabbage": "soup",
@@ -321,18 +368,21 @@ func TestMetadata(t *testing.T) {
})
checkTime := func(m fs.Metadata, key string, when time.Time) {
t.Helper()
mt, ok := o.parseMetadataTime(m, key)
assert.True(t, ok)
dt := mt.Sub(when)
precision := time.Second
assert.True(t, dt >= -precision && dt <= precision, fmt.Sprintf("%s: dt %v outside +/- precision %v", key, dt, precision))
assert.True(t, dt >= -precision && dt <= precision, fmt.Sprintf("%s: dt %v outside +/- precision %v want %v got %v", key, dt, precision, mt, when))
}
checkInt := func(m fs.Metadata, key string, base int) int {
t.Helper()
value, ok := o.parseMetadataInt(m, key, base)
assert.True(t, ok)
return value
}
t.Run("Read", func(t *testing.T) {
m, err := o.Metadata(ctx)
require.NoError(t, err)
@@ -342,13 +392,12 @@ func TestMetadata(t *testing.T) {
checkInt(m, "mode", 8)
checkTime(m, "mtime", when)
assert.Equal(t, len(whenRFC), len(m["mtime"]))
assert.Equal(t, whenRFC[:dayLength], m["mtime"][:dayLength])
if hasAtime {
if hasAtime && !o.translatedLink { // symlinks generally don't record atime
checkTime(m, "atime", when)
}
if hasBtime {
if hasBtime && !o.translatedLink { // symlinks generally don't record btime
checkTime(m, "btime", when)
}
if hasXID {
@@ -372,6 +421,10 @@ func TestMetadata(t *testing.T) {
"mode": "0767",
"potato": "wedges",
}
if !canSetXattrOnLinks && o.translatedLink {
// Don't change xattr if not supported on symlinks
delete(newM, "potato")
}
err := o.writeMetadata(newM)
require.NoError(t, err)
@@ -381,7 +434,11 @@ func TestMetadata(t *testing.T) {
mode := checkInt(m, "mode", 8)
if runtime.GOOS != "windows" {
assert.Equal(t, 0767, mode&0777, fmt.Sprintf("mode wrong - expecting 0767 got 0%o", mode&0777))
expectedMode := 0767
if o.translatedLink && runtime.GOOS == "linux" {
expectedMode = 0777 // perms of symlinks always read as 0777 on linux
}
assert.Equal(t, expectedMode, mode&0777, fmt.Sprintf("mode wrong - expecting 0%o got 0%o", expectedMode, mode&0777))
}
checkTime(m, "mtime", newMtime)
@@ -391,11 +448,10 @@ func TestMetadata(t *testing.T) {
if haveSetBTime {
checkTime(m, "btime", newBtime)
}
if xattrSupported {
if xattrSupported && (canSetXattrOnLinks || !o.translatedLink) {
assert.Equal(t, "wedges", m["potato"])
}
})
}
func TestFilter(t *testing.T) {
@@ -572,4 +628,35 @@ func TestCopySymlink(t *testing.T) {
linkContents, err := os.Readlink(dstPath)
require.NoError(t, err)
assert.Equal(t, "file.txt", linkContents)
// Set fs into "-L/--copy-links" mode
f.opt.FollowSymlinks = true
f.opt.TranslateSymlinks = false
f.lstat = os.Stat
// Create dst
require.NoError(t, f.Mkdir(ctx, "dst2"))
// Do copy from src into dst
src, err = f.NewObject(ctx, "src/link.txt")
require.NoError(t, err)
require.NotNil(t, src)
dst, err = operations.Copy(ctx, f, nil, "dst2/link.txt", src)
require.NoError(t, err)
require.NotNil(t, dst)
// Test that we made a NON-symlink and it has the right contents
dstPath = filepath.Join(r.LocalName, "dst2", "link.txt")
fi, err := os.Lstat(dstPath)
require.NoError(t, err)
assert.True(t, fi.Mode()&os.ModeSymlink == 0)
want := fstest.NewItem("dst2/link.txt", "hello world", when)
fstest.CompareItems(t, []fs.DirEntry{dst}, []fstest.Item{want}, nil, f.precision, "")
// Test that copying a normal file also works
dst, err = operations.Copy(ctx, f, nil, "dst2/file.txt", dst)
require.NoError(t, err)
require.NotNil(t, dst)
want = fstest.NewItem("dst2/file.txt", "hello world", when)
fstest.CompareItems(t, []fs.DirEntry{dst}, []fstest.Item{want}, nil, f.precision, "")
}


@@ -2,6 +2,7 @@ package local
import (
"fmt"
"math"
"os"
"runtime"
"strconv"
@@ -72,12 +73,12 @@ func (o *Object) parseMetadataInt(m fs.Metadata, key string, base int) (result i
value, ok := m[key]
if ok {
var err error
result64, err := strconv.ParseInt(value, base, 64)
parsed, err := strconv.ParseInt(value, base, 0)
if err != nil {
fs.Debugf(o, "failed to parse metadata %s: %q: %v", key, value, err)
ok = false
}
result = int(result64)
result = int(parsed)
}
return result, ok
}
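A standalone sketch (not part of the diff) of how an octal "mode" value parses with base 8 and bitSize 0, as parseMetadataInt above now does; the value "0767" is just an example.

package main

import (
	"fmt"
	"os"
	"strconv"
)

func main() {
	// bitSize 0 asks ParseInt for a result that fits in a plain int.
	v, err := strconv.ParseInt("0767", 8, 0)
	if err != nil {
		panic(err)
	}
	fmt.Println(int(v), os.FileMode(v)) // 503 -rwxrw-rwx
}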
@@ -104,7 +105,11 @@ func (o *Object) writeMetadataToFile(m fs.Metadata) (outErr error) {
}
if haveSetBTime {
if btimeOK {
err = setBTime(o.path, btime)
if o.translatedLink {
err = lsetBTime(o.path, btime)
} else {
err = setBTime(o.path, btime)
}
if err != nil {
outErr = fmt.Errorf("failed to set birth (creation) time: %w", err)
}
@@ -120,7 +125,11 @@ func (o *Object) writeMetadataToFile(m fs.Metadata) (outErr error) {
if runtime.GOOS == "windows" || runtime.GOOS == "plan9" {
fs.Debugf(o, "Ignoring request to set ownership %o.%o on this OS", gid, uid)
} else {
err = os.Chown(o.path, uid, gid)
if o.translatedLink {
err = os.Lchown(o.path, uid, gid)
} else {
err = os.Chown(o.path, uid, gid)
}
if err != nil {
outErr = fmt.Errorf("failed to change ownership: %w", err)
}
@@ -128,9 +137,23 @@ func (o *Object) writeMetadataToFile(m fs.Metadata) (outErr error) {
}
mode, hasMode := o.parseMetadataInt(m, "mode", 8)
if hasMode {
err = os.Chmod(o.path, os.FileMode(mode))
if err != nil {
outErr = fmt.Errorf("failed to change permissions: %w", err)
if mode >= 0 {
umode := uint(mode)
if umode <= math.MaxUint32 {
if o.translatedLink {
if haveLChmod {
err = lChmod(o.path, os.FileMode(umode))
} else {
fs.Debugf(o, "Unable to set mode %v on a symlink on this OS", os.FileMode(umode))
err = nil
}
} else {
err = os.Chmod(o.path, os.FileMode(umode))
}
if err != nil {
outErr = fmt.Errorf("failed to change permissions: %w", err)
}
}
}
}
// FIXME not parsing rdev yet


@@ -13,3 +13,9 @@ func setBTime(name string, btime time.Time) error {
// Does nothing
return nil
}
// lsetBTime changes the birth time of the link passed in
func lsetBTime(name string, btime time.Time) error {
// Does nothing
return nil
}


@@ -9,15 +9,20 @@ import (
const haveSetBTime = true
// setBTime sets the birth time of the file passed in
func setBTime(name string, btime time.Time) (err error) {
// setTimes sets any of atime, mtime or btime
// if link is set it sets a link rather than the target
func setTimes(name string, atime, mtime, btime time.Time, link bool) (err error) {
pathp, err := syscall.UTF16PtrFromString(name)
if err != nil {
return err
}
fileFlag := uint32(syscall.FILE_FLAG_BACKUP_SEMANTICS)
if link {
fileFlag |= syscall.FILE_FLAG_OPEN_REPARSE_POINT
}
h, err := syscall.CreateFile(pathp,
syscall.FILE_WRITE_ATTRIBUTES, syscall.FILE_SHARE_WRITE, nil,
syscall.OPEN_EXISTING, syscall.FILE_FLAG_BACKUP_SEMANTICS, 0)
syscall.OPEN_EXISTING, fileFlag, 0)
if err != nil {
return err
}
@@ -27,6 +32,28 @@ func setBTime(name string, btime time.Time) (err error) {
err = closeErr
}
}()
bFileTime := syscall.NsecToFiletime(btime.UnixNano())
return syscall.SetFileTime(h, &bFileTime, nil, nil)
var patime, pmtime, pbtime *syscall.Filetime
if !atime.IsZero() {
t := syscall.NsecToFiletime(atime.UnixNano())
patime = &t
}
if !mtime.IsZero() {
t := syscall.NsecToFiletime(mtime.UnixNano())
pmtime = &t
}
if !btime.IsZero() {
t := syscall.NsecToFiletime(btime.UnixNano())
pbtime = &t
}
return syscall.SetFileTime(h, pbtime, patime, pmtime)
}
// setBTime sets the birth time of the file passed in
func setBTime(name string, btime time.Time) (err error) {
return setTimes(name, time.Time{}, time.Time{}, btime, false)
}
// lsetBTime changes the birth time of the link passed in
func lsetBTime(name string, btime time.Time) error {
return setTimes(name, time.Time{}, time.Time{}, btime, true)
}
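A tiny Windows-only sketch (not part of the diff) of the time conversion setTimes relies on: syscall.NsecToFiletime rounds to the 100 ns FILETIME resolution, so a value chosen on that grid round-trips exactly.

//go:build windows

package main

import (
	"fmt"
	"syscall"
	"time"
)

func main() {
	t := time.Date(2002, 2, 3, 4, 5, 10, 123123100, time.UTC)
	ft := syscall.NsecToFiletime(t.UnixNano())
	// back to time.Time via the FILETIME's nanosecond count
	fmt.Println(time.Unix(0, ft.Nanoseconds()).UTC()) // 2002-02-03 04:05:10.1231231 +0000 UTC
}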


@@ -68,14 +68,12 @@ var (
)
// Description of how to authorize
var oauthConfig = &oauth2.Config{
var oauthConfig = &oauthutil.Config{
ClientID: api.OAuthClientID,
ClientSecret: "",
Endpoint: oauth2.Endpoint{
AuthURL: api.OAuthURL,
TokenURL: api.OAuthURL,
AuthStyle: oauth2.AuthStyleInParams,
},
AuthURL: api.OAuthURL,
TokenURL: api.OAuthURL,
AuthStyle: oauth2.AuthStyleInParams,
}
// Register with Fs
@@ -438,7 +436,9 @@ func (f *Fs) authorize(ctx context.Context, force bool) (err error) {
if err != nil || !tokenIsValid(t) {
fs.Infof(f, "Valid token not found, authorizing.")
ctx := oauthutil.Context(ctx, f.cli)
t, err = oauthConfig.PasswordCredentialsToken(ctx, f.opt.Username, f.opt.Password)
oauth2Conf := oauthConfig.MakeOauth2Config()
t, err = oauth2Conf.PasswordCredentialsToken(ctx, f.opt.Username, f.opt.Password)
}
if err == nil && !tokenIsValid(t) {
err = errors.New("invalid token")


@@ -923,9 +923,7 @@ func (f *Fs) netStorageStatRequest(ctx context.Context, URL string, directory bo
entrywanted := (directory && files[i].Type == "dir") ||
(!directory && files[i].Type != "dir")
if entrywanted {
filestamp := files[0]
files[0] = files[i]
files[i] = filestamp
files[0], files[i] = files[i], files[0]
}
}
return files, nil


@@ -202,9 +202,14 @@ type SharingLinkType struct {
type LinkType string
const (
ViewLinkType LinkType = "view" // ViewLinkType (role: read) A view-only sharing link, allowing read-only access.
EditLinkType LinkType = "edit" // EditLinkType (role: write) An edit sharing link, allowing read-write access.
EmbedLinkType LinkType = "embed" // EmbedLinkType (role: read) A view-only sharing link that can be used to embed content into a host webpage. Embed links are not available for OneDrive for Business or SharePoint.
// ViewLinkType (role: read) A view-only sharing link, allowing read-only access.
ViewLinkType LinkType = "view"
// EditLinkType (role: write) An edit sharing link, allowing read-write access.
EditLinkType LinkType = "edit"
// EmbedLinkType (role: read) A view-only sharing link that can be used to embed
// content into a host webpage. Embed links are not available for OneDrive for
// Business or SharePoint.
EmbedLinkType LinkType = "embed"
)
// LinkScope represents the scope of the link represented by this permission.
@@ -212,9 +217,12 @@ const (
type LinkScope string
const (
AnonymousScope LinkScope = "anonymous" // AnonymousScope = Anyone with the link has access, without needing to sign in. This may include people outside of your organization.
OrganizationScope LinkScope = "organization" // OrganizationScope = Anyone signed into your organization (tenant) can use the link to get access. Only available in OneDrive for Business and SharePoint.
// AnonymousScope = Anyone with the link has access, without needing to sign in.
// This may include people outside of your organization.
AnonymousScope LinkScope = "anonymous"
// OrganizationScope = Anyone signed into your organization (tenant) can use the
// link to get access. Only available in OneDrive for Business and SharePoint.
OrganizationScope LinkScope = "organization"
)
// PermissionsType provides information about a sharing permission granted for a DriveItem resource.
@@ -236,10 +244,14 @@ type PermissionsType struct {
type Role string
const (
ReadRole Role = "read" // ReadRole provides the ability to read the metadata and contents of the item.
WriteRole Role = "write" // WriteRole provides the ability to read and modify the metadata and contents of the item.
OwnerRole Role = "owner" // OwnerRole represents the owner role for SharePoint and OneDrive for Business.
MemberRole Role = "member" // MemberRole represents the member role for SharePoint and OneDrive for Business.
// ReadRole provides the ability to read the metadata and contents of the item.
ReadRole Role = "read"
// WriteRole provides the ability to read and modify the metadata and contents of the item.
WriteRole Role = "write"
// OwnerRole represents the owner role for SharePoint and OneDrive for Business.
OwnerRole Role = "owner"
// MemberRole represents the member role for SharePoint and OneDrive for Business.
MemberRole Role = "member"
)
// PermissionsResponse is the response to the list permissions method


@@ -6,6 +6,7 @@ import (
"errors"
"fmt"
"net/http"
"slices"
"strings"
"time"
@@ -14,7 +15,6 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/errcount"
"golang.org/x/exp/slices" // replace with slices after go1.21 is the minimum version
)
const (

View File

@@ -40,7 +40,6 @@ import (
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/lib/rest"
"golang.org/x/oauth2"
)
const (
@@ -65,14 +64,21 @@ const (
// Globals
var (
authPath = "/common/oauth2/v2.0/authorize"
tokenPath = "/common/oauth2/v2.0/token"
// Define the paths used for token operations
commonPathPrefix = "/common" // prefix for the paths if tenant isn't known
authPath = "/oauth2/v2.0/authorize"
tokenPath = "/oauth2/v2.0/token"
scopeAccess = fs.SpaceSepList{"Files.Read", "Files.ReadWrite", "Files.Read.All", "Files.ReadWrite.All", "Sites.Read.All", "offline_access"}
scopeAccessWithoutSites = fs.SpaceSepList{"Files.Read", "Files.ReadWrite", "Files.Read.All", "Files.ReadWrite.All", "offline_access"}
// Description of how to auth for this app for a business account
oauthConfig = &oauth2.Config{
// When using client credential OAuth flow, scope of .default is required in order
// to use the permissions configured for the application within the tenant
scopeAccessClientCred = fs.SpaceSepList{".default"}
// Base config for how to auth
oauthConfig = &oauthutil.Config{
Scopes: scopeAccess,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
@@ -125,7 +131,7 @@ func init() {
Help: "Microsoft Cloud for US Government",
}, {
Value: regionDE,
Help: "Microsoft Cloud Germany",
Help: "Microsoft Cloud Germany (deprecated - try " + regionGlobal + " region first).",
}, {
Value: regionCN,
Help: "Azure and Office 365 operated by Vnet Group in China",
@@ -183,6 +189,14 @@ Choose or manually enter a custom space separated list with all scopes, that rcl
Help: "Read and write access to all resources, without the ability to browse SharePoint sites. \nSame as if disable_site_permission was set to true",
},
},
}, {
Name: "tenant",
Help: `ID of the service principal's tenant. Also called its directory ID.
Set this if using
- Client Credential flow
`,
Sensitive: true,
}, {
Name: "disable_site_permission",
Help: `Disable the request for Sites.Read.All permission.
@@ -527,28 +541,54 @@ func chooseDrive(ctx context.Context, name string, m configmap.Mapper, srv *rest
})
}
// Config the backend
func Config(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
region, graphURL := getRegionURL(m)
// Make the oauth config for the backend
func makeOauthConfig(ctx context.Context, opt *Options) (*oauthutil.Config, error) {
// Copy the default oauthConfig
oauthConfig := *oauthConfig
if config.State == "" {
var accessScopes fs.SpaceSepList
accessScopesString, _ := m.Get("access_scopes")
err := accessScopes.Set(accessScopesString)
// Set the scopes
oauthConfig.Scopes = opt.AccessScopes
if opt.DisableSitePermission {
oauthConfig.Scopes = scopeAccessWithoutSites
}
// Construct the auth URLs
prefix := commonPathPrefix
if opt.Tenant != "" {
prefix = "/" + opt.Tenant
}
oauthConfig.TokenURL = authEndpoint[opt.Region] + prefix + tokenPath
oauthConfig.AuthURL = authEndpoint[opt.Region] + prefix + authPath
// Check to see if we are using client credentials flow
if opt.ClientCredentials {
// Override scope to .default
oauthConfig.Scopes = scopeAccessClientCred
if opt.Tenant == "" {
return nil, fmt.Errorf("tenant parameter must be set when using %s", config.ConfigClientCredentials)
}
}
return &oauthConfig, nil
}
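To make the URL assembly in makeOauthConfig above concrete, here is a hedged standalone sketch with a placeholder endpoint; the real value comes from the authEndpoint map for the selected region and is not reproduced here.

package main

import "fmt"

// tokenURL sketches the prefix logic above: /common unless a tenant is set.
func tokenURL(endpoint, tenant string) string {
	prefix := "/common"
	if tenant != "" {
		prefix = "/" + tenant
	}
	return endpoint + prefix + "/oauth2/v2.0/token"
}

func main() {
	fmt.Println(tokenURL("https://login.example.com", ""))         // https://login.example.com/common/oauth2/v2.0/token
	fmt.Println(tokenURL("https://login.example.com", "mytenant")) // https://login.example.com/mytenant/oauth2/v2.0/token
}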
// Config the backend
func Config(ctx context.Context, name string, m configmap.Mapper, conf fs.ConfigIn) (*fs.ConfigOut, error) {
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
_, graphURL := getRegionURL(m)
// Check to see if this is the start of the state machine execution
if conf.State == "" {
conf, err := makeOauthConfig(ctx, opt)
if err != nil {
return nil, fmt.Errorf("failed to parse access_scopes: %w", err)
}
oauthConfig.Scopes = []string(accessScopes)
disableSitePermission, _ := m.Get("disable_site_permission")
if disableSitePermission == "true" {
oauthConfig.Scopes = scopeAccessWithoutSites
}
oauthConfig.Endpoint = oauth2.Endpoint{
AuthURL: authEndpoint[region] + authPath,
TokenURL: authEndpoint[region] + tokenPath,
return nil, err
}
return oauthutil.ConfigOut("choose_type", &oauthutil.Options{
OAuth2Config: oauthConfig,
OAuth2Config: conf,
})
}
@@ -556,9 +596,11 @@ func Config(ctx context.Context, name string, m configmap.Mapper, config fs.Conf
if err != nil {
return nil, fmt.Errorf("failed to configure OneDrive: %w", err)
}
// Create a REST client, build on the OAuth client created above
srv := rest.NewClient(oAuthClient)
switch config.State {
switch conf.State {
case "choose_type":
return fs.ConfigChooseExclusiveFixed("choose_type_done", "config_type", "Type of connection", []fs.OptionExample{{
Value: "onedrive",
@@ -584,7 +626,7 @@ func Config(ctx context.Context, name string, m configmap.Mapper, config fs.Conf
}})
case "choose_type_done":
// Jump to next state according to config chosen
return fs.ConfigGoto(config.Result)
return fs.ConfigGoto(conf.Result)
case "onedrive":
return chooseDrive(ctx, name, m, srv, chooseDriveOpt{
opts: rest.Opts{
@@ -602,16 +644,22 @@ func Config(ctx context.Context, name string, m configmap.Mapper, config fs.Conf
},
})
case "driveid":
return fs.ConfigInput("driveid_end", "config_driveid_fixed", "Drive ID")
out, err := fs.ConfigInput("driveid_end", "config_driveid_fixed", "Drive ID")
if err != nil {
return out, err
}
// Default the drive_id to the previous version in the config
out.Option.Default, _ = m.Get("drive_id")
return out, nil
case "driveid_end":
return chooseDrive(ctx, name, m, srv, chooseDriveOpt{
finalDriveID: config.Result,
finalDriveID: conf.Result,
})
case "siteid":
return fs.ConfigInput("siteid_end", "config_siteid", "Site ID")
case "siteid_end":
return chooseDrive(ctx, name, m, srv, chooseDriveOpt{
siteID: config.Result,
siteID: conf.Result,
})
case "url":
return fs.ConfigInput("url_end", "config_site_url", `Site URL
@@ -622,7 +670,7 @@ Examples:
- "https://XXX.sharepoint.com/teams/ID"
`)
case "url_end":
siteURL := config.Result
siteURL := conf.Result
re := regexp.MustCompile(`https://.*\.sharepoint\.com(/.*)`)
match := re.FindStringSubmatch(siteURL)
if len(match) == 2 {
@@ -637,12 +685,12 @@ Examples:
return fs.ConfigInput("path_end", "config_sharepoint_url", `Server-relative URL`)
case "path_end":
return chooseDrive(ctx, name, m, srv, chooseDriveOpt{
relativePath: config.Result,
relativePath: conf.Result,
})
case "search":
return fs.ConfigInput("search_end", "config_search_term", `Search term`)
case "search_end":
searchTerm := config.Result
searchTerm := conf.Result
opts := rest.Opts{
Method: "GET",
RootURL: graphURL,
@@ -664,10 +712,10 @@ Examples:
})
case "search_sites":
return chooseDrive(ctx, name, m, srv, chooseDriveOpt{
siteID: config.Result,
siteID: conf.Result,
})
case "driveid_final":
finalDriveID := config.Result
finalDriveID := conf.Result
// Test the driveID and get drive type
opts := rest.Opts{
@@ -686,12 +734,12 @@ Examples:
return fs.ConfigConfirm("driveid_final_end", true, "config_drive_ok", fmt.Sprintf("Drive OK?\n\nFound drive %q of type %q\nURL: %s\n", rootItem.Name, rootItem.ParentReference.DriveType, rootItem.WebURL))
case "driveid_final_end":
if config.Result == "true" {
if conf.Result == "true" {
return nil, nil
}
return fs.ConfigGoto("choose_type")
}
return nil, fmt.Errorf("unknown state %q", config.State)
return nil, fmt.Errorf("unknown state %q", conf.State)
}
// Options defines the configuration for this backend
@@ -702,7 +750,9 @@ type Options struct {
DriveType string `config:"drive_type"`
RootFolderID string `config:"root_folder_id"`
DisableSitePermission bool `config:"disable_site_permission"`
ClientCredentials bool `config:"client_credentials"`
AccessScopes fs.SpaceSepList `config:"access_scopes"`
Tenant string `config:"tenant"`
ExposeOneNoteFiles bool `config:"expose_onenote_files"`
ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
ListChunk int64 `config:"list_chunk"`
@@ -827,7 +877,7 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
retry = true
fs.Debugf(nil, "HTTP 401: Unable to initialize RPS. Trying again.")
}
case 429: // Too Many Requests.
case 429, 503: // Too Many Requests, Server Too Busy
// see https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online
if values := resp.Header["Retry-After"]; len(values) == 1 && values[0] != "" {
retryAfter, parseErr := strconv.Atoi(values[0])
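The 429/503 handling above honours a numeric Retry-After header. A minimal standalone sketch of that pattern (not rclone's actual retry helper) could look like this:

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// retryAfter returns the wait hinted by the server, or a fallback if the header is absent or unparseable.
func retryAfter(resp *http.Response, fallback time.Duration) time.Duration {
	if v := resp.Header.Get("Retry-After"); v != "" {
		if secs, err := strconv.Atoi(v); err == nil {
			return time.Duration(secs) * time.Second
		}
	}
	return fallback
}

func main() {
	resp := &http.Response{Header: http.Header{"Retry-After": []string{"30"}}}
	fmt.Println(retryAfter(resp, 5*time.Second)) // 30s
}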
@@ -942,7 +992,8 @@ func errorHandler(resp *http.Response) error {
// Decode error response
errResponse := new(api.Error)
err := rest.DecodeJSON(resp, &errResponse)
if err != nil {
// Redirects have no body so don't report an error
if err != nil && resp.Header.Get("Location") == "" {
fs.Debugf(nil, "Couldn't decode error response: %v", err)
}
if errResponse.ErrorInfo.Code == "" {
@@ -989,13 +1040,10 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
rootURL := graphAPIEndpoint[opt.Region] + "/v1.0" + "/drives/" + opt.DriveID
oauthConfig.Scopes = opt.AccessScopes
if opt.DisableSitePermission {
oauthConfig.Scopes = scopeAccessWithoutSites
}
oauthConfig.Endpoint = oauth2.Endpoint{
AuthURL: authEndpoint[opt.Region] + authPath,
TokenURL: authEndpoint[opt.Region] + tokenPath,
oauthConfig, err := makeOauthConfig(ctx, opt)
if err != nil {
return nil, err
}
client := fshttp.NewClient(ctx)
@@ -1544,9 +1592,12 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
// Precision return the precision of this Fs
func (f *Fs) Precision() time.Duration {
if f.driveType == driveTypePersonal {
return time.Millisecond
}
// While this is true for some OneDrive personal accounts, it
// isn't true for all of them. See #8101 for details
//
// if f.driveType == driveTypePersonal {
// return time.Millisecond
// }
return time.Second
}
@@ -1605,7 +1656,7 @@ func (f *Fs) waitForJob(ctx context.Context, location string, o *Object) error {
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dst fs.Object, err error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
@@ -1620,11 +1671,18 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fs.ErrorCantCopy
}
err := srcObj.readMetaData(ctx)
err = srcObj.readMetaData(ctx)
if err != nil {
return nil, err
}
// Find and remove existing object
cleanup, err := operations.RemoveExisting(ctx, f, remote, "server side copy")
if err != nil {
return nil, err
}
defer cleanup(&err)
// Check we aren't overwriting a file on the same remote
if srcObj.fs == f {
srcPath := srcObj.rootPath()
@@ -1927,7 +1985,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
return shareURL, nil
}
cnvFailMsg := "Don't know how to convert share link to direct link - returning the link as is"
const cnvFailMsg = "Don't know how to convert share link to direct link - returning the link as is"
directURL := ""
segments := strings.Split(shareURL, "/")
switch f.driveType {
@@ -2538,6 +2596,9 @@ func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, src fs.Obje
}
// Set the mod time now and read metadata
info, err = o.fs.fetchAndUpdateMetadata(ctx, src, options, o)
if err != nil {
return nil, fmt.Errorf("failed to fetch and update metadata: %w", err)
}
return info, o.setMetaData(info)
}
@@ -2549,8 +2610,11 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return errors.New("can't upload content to a OneNote file")
}
o.fs.tokenRenewer.Start()
defer o.fs.tokenRenewer.Stop()
// Only start the renewer if we have a valid one
if o.fs.tokenRenewer != nil {
o.fs.tokenRenewer.Start()
defer o.fs.tokenRenewer.Stop()
}
size := src.Size()


@@ -4,6 +4,7 @@ import (
"context"
"encoding/json"
"fmt"
"slices"
"testing"
"time"
@@ -16,7 +17,6 @@ import (
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/exp/slices" // replace with slices after go1.21 is the minimum version
)
// go test -timeout 30m -run ^TestIntegration/FsMkdir/FsPutFiles/Internal$ github.com/rclone/rclone/backend/onedrive -remote TestOneDrive:meta -v
@@ -215,11 +215,11 @@ func (f *Fs) TestDirectoryMetadata(t *testing.T, r *fstest.Run) {
compareDirMeta(expectedMeta, actualMeta, false)
// modtime
assert.Equal(t, t1.Truncate(f.Precision()), newDst.ModTime(ctx))
fstest.AssertTimeEqualWithPrecision(t, newDst.Remote(), t1, newDst.ModTime(ctx), f.Precision())
// try changing it and re-check it
newDst, err = operations.SetDirModTime(ctx, f, newDst, "", t2)
assert.NoError(t, err)
assert.Equal(t, t2.Truncate(f.Precision()), newDst.ModTime(ctx))
fstest.AssertTimeEqualWithPrecision(t, newDst.Remote(), t2, newDst.ModTime(ctx), f.Precision())
// ensure that f.DirSetModTime also works
err = f.DirSetModTime(ctx, "subdir", t3)
assert.NoError(t, err)
@@ -227,7 +227,7 @@ func (f *Fs) TestDirectoryMetadata(t *testing.T, r *fstest.Run) {
assert.NoError(t, err)
entries.ForDir(func(dir fs.Directory) {
if dir.Remote() == "subdir" {
assert.True(t, t3.Truncate(f.Precision()).Equal(dir.ModTime(ctx)), fmt.Sprintf("got %v", dir.ModTime(ctx)))
fstest.AssertTimeEqualWithPrecision(t, dir.Remote(), t3, dir.ModTime(ctx), f.Precision())
}
})
@@ -379,7 +379,7 @@ func (f *Fs) putWithMeta(ctx context.Context, t *testing.T, file *fstest.Item, p
}
expectedMeta.Set("permissions", marshalPerms(t, perms))
obj := fstests.PutTestContentsMetadata(ctx, t, f, file, content, true, "plain/text", expectedMeta)
obj := fstests.PutTestContentsMetadata(ctx, t, f, file, false, content, true, "plain/text", expectedMeta)
do, ok := obj.(fs.Metadataer)
require.True(t, ok)
actualMeta, err := do.Metadata(ctx)


@@ -26,7 +26,10 @@ package quickxorhash
// OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
// PERFORMANCE OF THIS SOFTWARE.
import "hash"
import (
"crypto/subtle"
"hash"
)
const (
// BlockSize is the preferred size for hashing
@@ -48,6 +51,11 @@ func New() hash.Hash {
return &quickXorHash{}
}
// xor dst with src
func xorBytes(dst, src []byte) int {
return subtle.XORBytes(dst, src, dst)
}
// Write (via the embedded io.Writer interface) adds more data to the running hash.
// It never returns an error.
//
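The new xorBytes above delegates to crypto/subtle (Go 1.20+), which is why the two build-tagged fallback files below are deleted. A small standalone check of what subtle.XORBytes does:

package main

import (
	"crypto/subtle"
	"fmt"
)

func main() {
	dst := []byte{0x0f, 0xf0, 0xaa}
	src := []byte{0xff, 0xff}
	// XORBytes XORs src with dst into dst and returns min(len(src), len(dst)).
	n := subtle.XORBytes(dst, src, dst)
	fmt.Println(n, dst) // 2 [240 15 170]
}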


@@ -1,20 +0,0 @@
//go:build !go1.20
package quickxorhash
func xorBytes(dst, src []byte) int {
n := len(dst)
if len(src) < n {
n = len(src)
}
if n == 0 {
return 0
}
dst = dst[:n]
//src = src[:n]
src = src[:len(dst)] // remove bounds check in loop
for i := range dst {
dst[i] ^= src[i]
}
return n
}


@@ -1,9 +0,0 @@
//go:build go1.20
package quickxorhash
import "crypto/subtle"
func xorBytes(dst, src []byte) int {
return subtle.XORBytes(dst, src, dst)
}


@@ -92,6 +92,21 @@ Note that these chunks are buffered in memory so increasing them will
increase memory use.`,
Default: 10 * fs.Mebi,
Advanced: true,
}, {
Name: "access",
Help: "Files and folders will be uploaded with this access permission (default private)",
Default: "private",
Advanced: true,
Examples: []fs.OptionExample{{
Value: "private",
Help: "The file or folder access can be granted in a way that will allow select users to view, read or write what is absolutely essential for them.",
}, {
Value: "public",
Help: "The file or folder can be downloaded by anyone from a web browser. The link can be shared in any way,",
}, {
Value: "hidden",
Help: "The file or folder can be accessed has the same restrictions as Public if the user knows the URL of the file or folder link in order to access the contents",
}},
}},
})
}
@@ -102,6 +117,7 @@ type Options struct {
Password string `config:"password"`
Enc encoder.MultiEncoder `config:"encoding"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Access string `config:"access"`
}
// Fs represents a remote server
@@ -404,6 +420,32 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return dstObj, nil
}
// About gets quota information
func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
var uInfo usersInfoResponse
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
opts := rest.Opts{
Method: "GET",
Path: "/users/info.json/" + f.session.SessionID,
}
resp, err = f.srv.CallJSON(ctx, &opts, nil, &uInfo)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
}
usage = &fs.Usage{
Used: fs.NewUsageValue(uInfo.StorageUsed),
Total: fs.NewUsageValue(uInfo.MaxStorage * 1024 * 1024), // MaxStorage appears to be in MB
Free: fs.NewUsageValue(uInfo.MaxStorage*1024*1024 - uInfo.StorageUsed),
}
return usage, nil
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given.
@@ -709,6 +751,23 @@ func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (b
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// getAccessLevel is a helper function to determine access level integer
func getAccessLevel(access string) int64 {
var accessLevel int64
switch access {
case "private":
accessLevel = 0
case "public":
accessLevel = 1
case "hidden":
accessLevel = 2
default:
accessLevel = 0
fs.Errorf(nil, "Invalid access: %s, defaulting to private", access)
}
return accessLevel
}
// DirCacher methods
// CreateDir makes a directory with pathID as parent and name leaf
@@ -721,7 +780,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
SessionID: f.session.SessionID,
FolderName: f.opt.Enc.FromStandardName(leaf),
FolderSubParent: pathID,
FolderIsPublic: 0,
FolderIsPublic: getAccessLevel(f.opt.Access),
FolderPublicUpl: 0,
FolderPublicDisplay: 0,
FolderPublicDnl: 0,
@@ -1054,7 +1113,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Set permissions
err = o.fs.pacer.Call(func() (bool, error) {
update := permissions{SessionID: o.fs.session.SessionID, FileID: o.id, FileIsPublic: 0}
update := permissions{SessionID: o.fs.session.SessionID, FileID: o.id, FileIsPublic: getAccessLevel(o.fs.opt.Access)}
// fs.Debugf(nil, "Permissions : %#v", update)
opts := rest.Opts{
Method: "POST",
@@ -1147,6 +1206,7 @@ var (
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
_ fs.ParentIDer = (*Object)(nil)


@@ -231,3 +231,10 @@ type permissions struct {
type uploadFileChunkReply struct {
TotalWritten int64 `json:"TotalWritten"`
}
// usersInfoResponse describes OpenDrive users/info.json response
type usersInfoResponse struct {
// This response contains many other values but these are the only ones currently in use
StorageUsed int64 `json:"StorageUsed,string"`
MaxStorage int64 `json:"MaxStorage,string"`
}
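The ,string tags above are what let the quoted numbers in OpenDrive's users/info.json land in int64 fields. A standalone sketch with made-up sample values:

package main

import (
	"encoding/json"
	"fmt"
)

// mirrors usersInfoResponse above; the API returns these numbers as strings
type usersInfo struct {
	StorageUsed int64 `json:"StorageUsed,string"`
	MaxStorage  int64 `json:"MaxStorage,string"`
}

func main() {
	data := []byte(`{"StorageUsed":"1048576","MaxStorage":"5120"}`) // sample values
	var u usersInfo
	if err := json.Unmarshal(data, &u); err != nil {
		panic(err)
	}
	// About() treats MaxStorage as MB, so convert to bytes the same way.
	fmt.Println(u.StorageUsed, u.MaxStorage*1024*1024) // 1048576 5368709120
}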


@@ -58,12 +58,10 @@ func populateSSECustomerKeys(opt *Options) error {
sha256Checksum := base64.StdEncoding.EncodeToString(getSha256(decoded))
if opt.SSECustomerKeySha256 == "" {
opt.SSECustomerKeySha256 = sha256Checksum
} else {
if opt.SSECustomerKeySha256 != sha256Checksum {
return fmt.Errorf("the computed SHA256 checksum "+
"(%v) of the key doesn't match the config entry sse_customer_key_sha256=(%v)",
sha256Checksum, opt.SSECustomerKeySha256)
}
} else if opt.SSECustomerKeySha256 != sha256Checksum {
return fmt.Errorf("the computed SHA256 checksum "+
"(%v) of the key doesn't match the config entry sse_customer_key_sha256=(%v)",
sha256Checksum, opt.SSECustomerKeySha256)
}
if opt.SSECustomerAlgorithm == "" {
opt.SSECustomerAlgorithm = sseDefaultAlgorithm


@@ -22,6 +22,7 @@ import (
"github.com/oracle/oci-go-sdk/v65/objectstorage"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/chunksize"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
)
@@ -148,7 +149,7 @@ func (w *objectChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, rea
}
md5sumBinary := m.Sum([]byte{})
w.addMd5(&md5sumBinary, int64(chunkNumber))
md5sum := base64.StdEncoding.EncodeToString(md5sumBinary[:])
md5sum := base64.StdEncoding.EncodeToString(md5sumBinary)
// Object storage requires 1 <= PartNumber <= 10000
ossPartNumber := chunkNumber + 1
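For reference, a tiny standalone sketch (not part of the diff) of the base64-encoded MD5 that WriteChunk sends per part; "hello" is an arbitrary stand-in for chunk data.

package main

import (
	"crypto/md5"
	"encoding/base64"
	"fmt"
)

func main() {
	sum := md5.Sum([]byte("hello"))                         // per-chunk MD5, as computed above
	fmt.Println(base64.StdEncoding.EncodeToString(sum[:])) // XUFAKrxLKna5cZ2REBfFkg==
}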
@@ -183,6 +184,9 @@ func (w *objectChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, rea
if ossPartNumber <= 8 {
return shouldRetry(ctx, resp.HTTPResponse(), err)
}
if fserrors.ContextError(ctx, &err) {
return false, err
}
// retry all chunks once have done the first few
return true, err
}
@@ -279,7 +283,7 @@ func (w *objectChunkWriter) addMd5(md5binary *[]byte, chunkNumber int64) {
if extend := end - int64(len(w.md5s)); extend > 0 {
w.md5s = append(w.md5s, make([]byte, extend)...)
}
copy(w.md5s[start:end], (*md5binary)[:])
copy(w.md5s[start:end], (*md5binary))
}
func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options []fs.OpenOption) (ui uploadInfo, err error) {


@@ -106,9 +106,9 @@ func newOptions() []fs.Option {
Sensitive: true,
}, {
Name: "compartment",
Help: "Object storage compartment OCID",
Help: "Specify compartment OCID, if you need to list buckets.\n\nList objects works without compartment OCID.",
Provider: "!no_auth",
Required: true,
Required: false,
Sensitive: true,
}, {
Name: "region",


@@ -109,6 +109,37 @@ type Hashes struct {
SHA256 string `json:"sha256"`
}
// FileTruncateResponse is the response from /file_truncate
type FileTruncateResponse struct {
Error
}
// FileCloseResponse is the response from /file_close
type FileCloseResponse struct {
Error
}
// FileOpenResponse is the response from /file_open
type FileOpenResponse struct {
Error
Fileid int64 `json:"fileid"`
FileDescriptor int64 `json:"fd"`
}
// FileChecksumResponse is the response from /file_checksum
type FileChecksumResponse struct {
Error
MD5 string `json:"md5"`
SHA1 string `json:"sha1"`
SHA256 string `json:"sha256"`
}
// FilePWriteResponse is the response from /file_pwrite
type FilePWriteResponse struct {
Error
Bytes int64 `json:"bytes"`
}
// UploadFileResponse is the response from /uploadfile
type UploadFileResponse struct {
Error


@@ -14,6 +14,7 @@ import (
"net/http"
"net/url"
"path"
"strconv"
"strings"
"time"
@@ -47,12 +48,10 @@ const (
// Globals
var (
// Description of how to auth for this app
oauthConfig = &oauth2.Config{
Scopes: nil,
Endpoint: oauth2.Endpoint{
AuthURL: "https://my.pcloud.com/oauth2/authorize",
// TokenURL: "https://api.pcloud.com/oauth2_token", set by updateTokenURL
},
oauthConfig = &oauthutil.Config{
Scopes: nil,
AuthURL: "https://my.pcloud.com/oauth2/authorize",
// TokenURL: "https://api.pcloud.com/oauth2_token", set by updateTokenURL
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
@@ -60,8 +59,8 @@ var (
)
// Update the TokenURL with the actual hostname
func updateTokenURL(oauthConfig *oauth2.Config, hostname string) {
oauthConfig.Endpoint.TokenURL = "https://" + hostname + "/oauth2_token"
func updateTokenURL(oauthConfig *oauthutil.Config, hostname string) {
oauthConfig.TokenURL = "https://" + hostname + "/oauth2_token"
}
// Register with Fs
@@ -78,7 +77,7 @@ func init() {
fs.Errorf(nil, "Failed to read config: %v", err)
}
updateTokenURL(oauthConfig, optc.Hostname)
checkAuth := func(oauthConfig *oauth2.Config, auth *oauthutil.AuthResult) error {
checkAuth := func(oauthConfig *oauthutil.Config, auth *oauthutil.AuthResult) error {
if auth == nil || auth.Form == nil {
return errors.New("form not found in response")
}
@@ -146,7 +145,8 @@ we have to rely on user password authentication for it.`,
Help: "Your pcloud password.",
IsPassword: true,
Advanced: true,
}}...),
},
}...),
})
}
@@ -161,15 +161,16 @@ type Options struct {
// Fs represents a remote pcloud
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the server
cleanupSrv *rest.Client // the connection used for the cleanup method
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
ts *oauthutil.TokenSource // the token source, used to create new clients
srv *rest.Client // the connection to the server
cleanupSrv *rest.Client // the connection used for the cleanup method
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry
}
// Object describes a pcloud object
@@ -317,6 +318,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
name: name,
root: root,
opt: *opt,
ts: ts,
srv: rest.NewClient(oAuthClient).SetRoot("https://" + opt.Hostname),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
@@ -326,6 +328,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.features = (&fs.Features{
CaseInsensitive: false,
CanHaveEmptyDirectories: true,
PartialUploads: true,
}).Fill(ctx, f)
if !canCleanup {
f.features.CleanUp = nil
@@ -333,7 +336,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.srv.SetErrorHandler(errorHandler)
// Renew the token in the background
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
f.tokenRenewer = oauthutil.NewRenew(f.String(), f.ts, func() error {
_, err := f.readMetaDataForPath(ctx, "")
return err
})
@@ -375,6 +378,57 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return f, nil
}
// OpenWriterAt opens with a handle for random access writes
//
// Pass in the remote desired and the size if known.
//
// It truncates any existing object
func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) {
client, err := f.newSingleConnClient(ctx)
if err != nil {
return nil, fmt.Errorf("create client: %w", err)
}
// init an empty file
leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, true)
if err != nil {
return nil, fmt.Errorf("resolve src: %w", err)
}
openResult, err := fileOpenNew(ctx, client, f, directoryID, leaf)
if err != nil {
return nil, fmt.Errorf("open file: %w", err)
}
if _, err := fileClose(ctx, client, f.pacer, openResult.FileDescriptor); err != nil {
return nil, fmt.Errorf("close file: %w", err)
}
writer := &writerAt{
ctx: ctx,
fs: f,
size: size,
remote: remote,
fileID: openResult.Fileid,
}
return writer, nil
}
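A minimal usage sketch of the new feature (not part of the change set); the helper name, remote path and single-call write are illustrative assumptions:
func exampleOpenWriterAt(ctx context.Context, f *Fs, data []byte) error {
	// Open (and truncate) the remote, obtaining a random-access writer.
	w, err := f.OpenWriterAt(ctx, "dir/file.bin", int64(len(data)))
	if err != nil {
		return err
	}
	// Chunks may be written at arbitrary offsets, e.g. by multi-thread copy;
	// ranges whose checksum already matches on the server are skipped.
	if _, err := w.WriteAt(data, 0); err != nil {
		_ = w.Close()
		return err
	}
	// Close verifies the resulting size on a separate connection.
	return w.Close()
}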
// Create a new http client, accepting keep-alive headers, limited to single connection.
// Necessary for pcloud fileops API, as it binds the session to the underlying TCP connection.
// File descriptors are only valid within the same connection and auto-closed when the connection is closed,
// hence we need a separate client (with single connection) for each fd to avoid all sorts of errors and race conditions.
func (f *Fs) newSingleConnClient(ctx context.Context) (*rest.Client, error) {
baseClient := fshttp.NewClient(ctx)
baseClient.Transport = fshttp.NewTransportCustom(ctx, func(t *http.Transport) {
t.MaxConnsPerHost = 1
t.DisableKeepAlives = false
})
// Set our own http client in the context
ctx = oauthutil.Context(ctx, baseClient)
// create a new oauth client, reuse the token source
oAuthClient := oauth2.NewClient(ctx, f.ts)
return rest.NewClient(oAuthClient).SetRoot("https://" + f.opt.Hostname), nil
}
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
@@ -1094,9 +1148,42 @@ func (o *Object) ModTime(ctx context.Context) time.Time {
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
// Pcloud doesn't have a way of doing this so returning this
// error will cause the file to be re-uploaded to set the time.
return fs.ErrorCantSetModTime
filename, directoryID, err := o.fs.dirCache.FindPath(ctx, o.Remote(), true)
if err != nil {
return err
}
fileID := fileIDtoNumber(o.id)
filename = o.fs.opt.Enc.FromStandardName(filename)
opts := rest.Opts{
Method: "PUT",
Path: "/copyfile",
Parameters: url.Values{},
TransferEncoding: []string{"identity"}, // pcloud doesn't like chunked encoding
ExtraHeaders: map[string]string{
"Connection": "keep-alive",
},
}
opts.Parameters.Set("fileid", fileID)
opts.Parameters.Set("folderid", dirIDtoNumber(directoryID))
opts.Parameters.Set("toname", filename)
opts.Parameters.Set("tofolderid", dirIDtoNumber(directoryID))
opts.Parameters.Set("ctime", strconv.FormatInt(modTime.Unix(), 10))
opts.Parameters.Set("mtime", strconv.FormatInt(modTime.Unix(), 10))
result := &api.ItemResult{}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, result)
err = result.Error.Update(err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return fmt.Errorf("update mtime: copyfile: %w", err)
}
if err := o.setMetaData(&result.Metadata); err != nil {
return err
}
return nil
}
// Storable returns a boolean showing whether this object is storable

backend/pcloud/writer_at.go (new file, 261 lines)
View File

@@ -0,0 +1,261 @@
package pcloud
import (
"bytes"
"context"
"crypto/sha1"
"encoding/hex"
"fmt"
"net/url"
"strconv"
"time"
"github.com/rclone/rclone/backend/pcloud/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/rest"
)
// writerAt implements fs.WriterAtCloser, adding the OpenWriterAt feature to pcloud.
type writerAt struct {
ctx context.Context
fs *Fs
size int64
remote string
fileID int64
}
// Close implements WriterAt.Close.
func (c *writerAt) Close() error {
// Avoid race conditions: depending on the TCP connection, there may be
// caching issues when checking the size immediately after a write,
// so we verify the resulting size on a different connection.
if c.size < 0 {
// Without knowing the expected size we cannot verify the upload,
// so fall back to a 1 second sleep and hope the write has settled.
time.Sleep(1 * time.Second)
return nil
}
sizeOk := false
sizeLastSeen := int64(0)
for retry := 0; retry < 5; retry++ {
fs.Debugf(c.remote, "checking file size: try %d/5", retry)
obj, err := c.fs.NewObject(c.ctx, c.remote)
if err != nil {
return fmt.Errorf("get uploaded obj: %w", err)
}
sizeLastSeen = obj.Size()
if obj.Size() == c.size {
sizeOk = true
break
}
time.Sleep(1 * time.Second)
}
if !sizeOk {
return fmt.Errorf("incorrect size after upload: got %d, want %d", sizeLastSeen, c.size)
}
return nil
}
// WriteAt implements fs.WriteAt.
func (c *writerAt) WriteAt(buffer []byte, offset int64) (n int, err error) {
contentLength := len(buffer)
inSHA1Bytes := sha1.Sum(buffer)
inSHA1 := hex.EncodeToString(inSHA1Bytes[:])
client, err := c.fs.newSingleConnClient(c.ctx)
if err != nil {
return 0, fmt.Errorf("create client: %w", err)
}
openResult, err := fileOpen(c.ctx, client, c.fs, c.fileID)
if err != nil {
return 0, fmt.Errorf("open file: %w", err)
}
// get target hash
outChecksum, err := fileChecksum(c.ctx, client, c.fs.pacer, openResult.FileDescriptor, offset, int64(contentLength))
if err != nil {
return 0, err
}
outSHA1 := outChecksum.SHA1
if outSHA1 == "" || inSHA1 == "" {
return 0, fmt.Errorf("expect both hashes to be filled: src: %q, target: %q", inSHA1, outSHA1)
}
// check hash of buffer, skip if fits
if inSHA1 == outSHA1 {
return contentLength, nil
}
// upload buffer with offset if necessary
if _, err := filePWrite(c.ctx, client, c.fs.pacer, openResult.FileDescriptor, offset, buffer); err != nil {
return 0, err
}
// close fd
if _, err := fileClose(c.ctx, client, c.fs.pacer, openResult.FileDescriptor); err != nil {
return contentLength, fmt.Errorf("close fd: %w", err)
}
return contentLength, nil
}
// Call pcloud file_open using folderid and name with O_CREAT and O_WRITE flags, see [API Doc].
// [API Doc]: https://docs.pcloud.com/methods/fileops/file_open.html
func fileOpenNew(ctx context.Context, c *rest.Client, srcFs *Fs, directoryID, filename string) (*api.FileOpenResponse, error) {
opts := rest.Opts{
Method: "PUT",
Path: "/file_open",
Parameters: url.Values{},
TransferEncoding: []string{"identity"}, // pcloud doesn't like chunked encoding
ExtraHeaders: map[string]string{
"Connection": "keep-alive",
},
}
filename = srcFs.opt.Enc.FromStandardName(filename)
opts.Parameters.Set("name", filename)
opts.Parameters.Set("folderid", dirIDtoNumber(directoryID))
opts.Parameters.Set("flags", "0x0042") // O_CREAT, O_WRITE
result := &api.FileOpenResponse{}
err := srcFs.pacer.CallNoRetry(func() (bool, error) {
resp, err := c.CallJSON(ctx, &opts, nil, result)
err = result.Error.Update(err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("open new file descriptor: %w", err)
}
return result, nil
}
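For reference, a sketch of the flag bits used above; the constant names are invented for illustration and the values are assumed from pcloud's file_open documentation:
// Assumed pcloud file_open flag values (illustrative names only):
//	O_WRITE = 0x0002
//	O_CREAT = 0x0040
// so "0x0042" == O_CREAT|O_WRITE: create the file if it does not exist and open
// it for writing, while fileOpen below passes plain O_WRITE ("0x0002").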
// Call pcloud file_open using fileid with O_WRITE flags, see [API Doc].
// [API Doc]: https://docs.pcloud.com/methods/fileops/file_open.html
func fileOpen(ctx context.Context, c *rest.Client, srcFs *Fs, fileID int64) (*api.FileOpenResponse, error) {
opts := rest.Opts{
Method: "PUT",
Path: "/file_open",
Parameters: url.Values{},
TransferEncoding: []string{"identity"}, // pcloud doesn't like chunked encoding
ExtraHeaders: map[string]string{
"Connection": "keep-alive",
},
}
opts.Parameters.Set("fileid", strconv.FormatInt(fileID, 10))
opts.Parameters.Set("flags", "0x0002") // O_WRITE
result := &api.FileOpenResponse{}
err := srcFs.pacer.CallNoRetry(func() (bool, error) {
resp, err := c.CallJSON(ctx, &opts, nil, result)
err = result.Error.Update(err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("open new file descriptor: %w", err)
}
return result, nil
}
// Call pcloud file_checksum, see [API Doc].
// [API Doc]: https://docs.pcloud.com/methods/fileops/file_checksum.html
func fileChecksum(
ctx context.Context,
client *rest.Client,
pacer *fs.Pacer,
fd, offset, count int64,
) (*api.FileChecksumResponse, error) {
opts := rest.Opts{
Method: "PUT",
Path: "/file_checksum",
Parameters: url.Values{},
TransferEncoding: []string{"identity"}, // pcloud doesn't like chunked encoding
ExtraHeaders: map[string]string{
"Connection": "keep-alive",
},
}
opts.Parameters.Set("fd", strconv.FormatInt(fd, 10))
opts.Parameters.Set("offset", strconv.FormatInt(offset, 10))
opts.Parameters.Set("count", strconv.FormatInt(count, 10))
result := &api.FileChecksumResponse{}
err := pacer.CallNoRetry(func() (bool, error) {
resp, err := client.CallJSON(ctx, &opts, nil, result)
err = result.Error.Update(err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("checksum of fd %d with offset %d and size %d: %w", fd, offset, count, err)
}
return result, nil
}
// Call pcloud file_pwrite, see [API Doc].
// [API Doc]: https://docs.pcloud.com/methods/fileops/file_pwrite.html
func filePWrite(
ctx context.Context,
client *rest.Client,
pacer *fs.Pacer,
fd int64,
offset int64,
buf []byte,
) (*api.FilePWriteResponse, error) {
contentLength := int64(len(buf))
opts := rest.Opts{
Method: "PUT",
Path: "/file_pwrite",
Body: bytes.NewReader(buf),
ContentLength: &contentLength,
Parameters: url.Values{},
TransferEncoding: []string{"identity"}, // pcloud doesn't like chunked encoding
Close: false,
ExtraHeaders: map[string]string{
"Connection": "keep-alive",
},
}
opts.Parameters.Set("fd", strconv.FormatInt(fd, 10))
opts.Parameters.Set("offset", strconv.FormatInt(offset, 10))
result := &api.FilePWriteResponse{}
err := pacer.CallNoRetry(func() (bool, error) {
resp, err := client.CallJSON(ctx, &opts, nil, result)
err = result.Error.Update(err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("write %d bytes to fd %d with offset %d: %w", contentLength, fd, offset, err)
}
return result, nil
}
// Call pcloud file_close, see [API Doc].
// [API Doc]: https://docs.pcloud.com/methods/fileops/file_close.html
func fileClose(
ctx context.Context,
client *rest.Client,
pacer *fs.Pacer,
fd int64,
) (*api.FileCloseResponse, error) {
opts := rest.Opts{
Method: "PUT",
Path: "/file_close",
Parameters: url.Values{},
TransferEncoding: []string{"identity"}, // pcloud doesn't like chunked encoding
Close: true,
}
opts.Parameters.Set("fd", strconv.FormatInt(fd, 10))
result := &api.FileCloseResponse{}
err := pacer.CallNoRetry(func() (bool, error) {
resp, err := client.CallJSON(ctx, &opts, nil, result)
err = result.Error.Update(err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("close file descriptor: %w", err)
}
return result, nil
}

View File

@@ -176,7 +176,7 @@ type File struct {
FileCategory string `json:"file_category,omitempty"` // "AUDIO", "VIDEO"
FileExtension string `json:"file_extension,omitempty"`
FolderType string `json:"folder_type,omitempty"`
Hash string `json:"hash,omitempty"` // sha1 but NOT a valid file hash. looks like a torrent hash
Hash string `json:"hash,omitempty"` // custom hash with a form of sha1sum
IconLink string `json:"icon_link,omitempty"`
ID string `json:"id,omitempty"`
Kind string `json:"kind,omitempty"` // "drive#file"
@@ -486,7 +486,7 @@ type RequestNewFile struct {
ParentID string `json:"parent_id"`
FolderType string `json:"folder_type"`
// only when uploading a new file
Hash string `json:"hash,omitempty"` // sha1sum
Hash string `json:"hash,omitempty"` // gcid
Resumable map[string]string `json:"resumable,omitempty"` // {"provider": "PROVIDER_ALIYUN"}
Size int64 `json:"size,omitempty"`
UploadType string `json:"upload_type,omitempty"` // "UPLOAD_TYPE_FORM" or "UPLOAD_TYPE_RESUMABLE"
@@ -513,6 +513,72 @@ type RequestDecompress struct {
DefaultParent bool `json:"default_parent,omitempty"`
}
// ------------------------------------------------------------ authorization
// CaptchaToken is a response to requestCaptchaToken api call
type CaptchaToken struct {
CaptchaToken string `json:"captcha_token"`
ExpiresIn int64 `json:"expires_in"` // currently 300s
// API doesn't provide Expiry field and thus it should be populated from ExpiresIn on retrieval
Expiry time.Time `json:"expiry,omitempty"`
URL string `json:"url,omitempty"` // a link for users to solve captcha
}
// expired reports whether the token is expired.
// t must be non-nil.
func (t *CaptchaToken) expired() bool {
if t.Expiry.IsZero() {
return false
}
expiryDelta := time.Duration(10) * time.Second // same as oauth2's defaultExpiryDelta
return t.Expiry.Round(0).Add(-expiryDelta).Before(time.Now())
}
// Valid reports whether t is non-nil, has a CaptchaToken, and is not expired.
func (t *CaptchaToken) Valid() bool {
return t != nil && t.CaptchaToken != "" && !t.expired()
}
// CaptchaTokenRequest is to request for captcha token
type CaptchaTokenRequest struct {
Action string `json:"action,omitempty"`
CaptchaToken string `json:"captcha_token,omitempty"`
ClientID string `json:"client_id,omitempty"`
DeviceID string `json:"device_id,omitempty"`
Meta *CaptchaTokenMeta `json:"meta,omitempty"`
}
// CaptchaTokenMeta contains meta info for CaptchaTokenRequest
type CaptchaTokenMeta struct {
CaptchaSign string `json:"captcha_sign,omitempty"`
ClientVersion string `json:"client_version,omitempty"`
PackageName string `json:"package_name,omitempty"`
Timestamp string `json:"timestamp,omitempty"`
UserID string `json:"user_id,omitempty"` // webdrive uses this instead of UserName
UserName string `json:"username,omitempty"`
Email string `json:"email,omitempty"`
PhoneNumber string `json:"phone_number,omitempty"`
}
// Token represents oauth2 token used for pikpak which needs to be converted to be compatible with oauth2.Token
type Token struct {
TokenType string `json:"token_type"`
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
ExpiresIn int `json:"expires_in"`
Sub string `json:"sub"`
}
// Expiry returns the expiry time derived from ExpiresIn, so it should be called on retrieval
// e must be non-nil.
func (e *Token) Expiry() (t time.Time) {
if v := e.ExpiresIn; v != 0 {
return time.Now().Add(time.Duration(v) * time.Second)
}
return
}
// ------------------------------------------------------------
// NOT implemented YET

View File

@@ -3,23 +3,32 @@ package pikpak
import (
"bytes"
"context"
"crypto/md5"
"crypto/sha1"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"io"
"math/rand"
"net/http"
"net/url"
"os"
"strconv"
"strings"
"sync"
"time"
"github.com/rclone/rclone/backend/pikpak/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/rest"
)
// Globals
const (
cachePrefix = "rclone-pikpak-sha1sum-"
cachePrefix = "rclone-pikpak-gcid-"
)
// requestDecompress requests decompress of compressed files
@@ -82,19 +91,21 @@ func (f *Fs) getVIPInfo(ctx context.Context) (info *api.VIP, err error) {
// action can be one of batch{Copy,Delete,Trash,Untrash}
func (f *Fs) requestBatchAction(ctx context.Context, action string, req *api.RequestBatch) (err error) {
opts := rest.Opts{
Method: "POST",
Path: "/drive/v1/files:" + action,
NoResponse: true, // Only returns `{"task_id":""}
Method: "POST",
Path: "/drive/v1/files:" + action,
}
info := struct {
TaskID string `json:"task_id"`
}{}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.rst.CallJSON(ctx, &opts, &req, nil)
resp, err = f.rst.CallJSON(ctx, &opts, &req, &info)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return fmt.Errorf("batch action %q failed: %w", action, err)
}
return nil
return f.waitTask(ctx, info.TaskID)
}
// requestNewTask requests a new api.NewTask and returns api.Task
@@ -148,6 +159,9 @@ func (f *Fs) getFile(ctx context.Context, ID string) (info *api.File, err error)
}
return f.shouldRetry(ctx, resp, err)
})
if err == nil {
info.Name = f.opt.Enc.ToStandardName(info.Name)
}
return
}
@@ -179,8 +193,8 @@ func (f *Fs) getTask(ctx context.Context, ID string, checkPhase bool) (info *api
resp, err = f.rst.CallJSON(ctx, &opts, nil, &info)
if checkPhase {
if err == nil && info.Phase != api.PhaseTypeComplete {
// could be pending right after file is created/uploaded.
return true, errors.New(info.Phase)
// could be pending right after the task is created
return true, fmt.Errorf("%s (%s) is still in %s", info.Name, info.Type, info.Phase)
}
}
return f.shouldRetry(ctx, resp, err)
@@ -188,6 +202,18 @@ func (f *Fs) getTask(ctx context.Context, ID string, checkPhase bool) (info *api
return
}
// waitTask waits for async tasks to be completed
func (f *Fs) waitTask(ctx context.Context, ID string) (err error) {
time.Sleep(taskWaitTime)
if info, err := f.getTask(ctx, ID, true); err != nil {
if info == nil {
return fmt.Errorf("can't verify the task is completed: %q", ID)
}
return fmt.Errorf("can't verify the task is completed: %#v", info)
}
return
}
// deleteTask remove a task having the specified ID
func (f *Fs) deleteTask(ctx context.Context, ID string, deleteFiles bool) (err error) {
params := url.Values{}
@@ -235,16 +261,47 @@ func (f *Fs) requestShare(ctx context.Context, req *api.RequestShare) (info *api
return
}
// Read the sha1 of in returning a reader which will read the same contents
// getGcid retrieves Gcid cached in API server
func (f *Fs) getGcid(ctx context.Context, src fs.ObjectInfo) (gcid string, err error) {
cid, err := calcCid(ctx, src)
if err != nil {
return
}
if src.Size() == 0 {
// If src is zero-length, the API will return the error
// "cid and file_size is required" (400).
// In this case we can simply return cid as the gcid.
return cid, nil
}
params := url.Values{}
params.Set("cid", cid)
params.Set("file_size", strconv.FormatInt(src.Size(), 10))
opts := rest.Opts{
Method: "GET",
Path: "/drive/v1/resource/cid",
Parameters: params,
}
info := struct {
Gcid string `json:"gcid,omitempty"`
}{}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.rst.CallJSON(ctx, &opts, nil, &info)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return "", err
}
return info.Gcid, nil
}
// Read the gcid of in returning a reader which will read the same contents
//
// The cleanup function should be called when out is finished with
// regardless of whether this function returned an error or not.
func readSHA1(in io.Reader, size, threshold int64) (sha1sum string, out io.Reader, cleanup func(), err error) {
// we need an SHA1
hash := sha1.New()
// use the teeReader to write to the local file AND calculate the SHA1 while doing so
teeReader := io.TeeReader(in, hash)
func readGcid(in io.Reader, size, threshold int64) (gcid string, out io.Reader, cleanup func(), err error) {
// nothing to clean up by default
cleanup = func() {}
@@ -267,8 +324,11 @@ func readSHA1(in io.Reader, size, threshold int64) (sha1sum string, out io.Reade
_ = os.Remove(tempFile.Name()) // delete the cache file after we are done - may be deleted already
}
// copy the ENTIRE file to disc and calculate the SHA1 in the process
if _, err = io.Copy(tempFile, teeReader); err != nil {
// use the teeReader to write to the local file AND calculate the gcid while doing so
teeReader := io.TeeReader(in, tempFile)
// copy the ENTIRE file to disk and calculate the gcid in the process
if gcid, err = calcGcid(teeReader, size); err != nil {
return
}
// jump to the start of the local file so we can pass it along
@@ -279,15 +339,319 @@ func readSHA1(in io.Reader, size, threshold int64) (sha1sum string, out io.Reade
// replace the already read source with a reader of our cached file
out = tempFile
} else {
// that's a small file, just read it into memory
var inData []byte
inData, err = io.ReadAll(teeReader)
if err != nil {
buf := &bytes.Buffer{}
teeReader := io.TeeReader(in, buf)
if gcid, err = calcGcid(teeReader, size); err != nil {
return
}
// set the reader to our read memory block
out = bytes.NewReader(inData)
out = buf
}
return hex.EncodeToString(hash.Sum(nil)), out, cleanup, nil
return
}
// calcGcid calculates Gcid from reader
//
// Gcid is a custom hash used to index a file's contents
func calcGcid(r io.Reader, size int64) (string, error) {
calcBlockSize := func(j int64) int64 {
var psize int64 = 0x40000
for float64(j)/float64(psize) > 0x200 && psize < 0x200000 {
psize <<= 1
}
return psize
}
totalHash := sha1.New()
blockHash := sha1.New()
readSize := calcBlockSize(size)
for {
blockHash.Reset()
if n, err := io.CopyN(blockHash, r, readSize); err != nil && n == 0 {
if err != io.EOF {
return "", err
}
break
}
totalHash.Write(blockHash.Sum(nil))
}
return hex.EncodeToString(totalHash.Sum(nil)), nil
}
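A worked example of the block-size rule above, plus a hedged helper for hashing a local file (the helper is illustrative and not part of the change):
// With calcBlockSize, a 64 MiB file keeps the initial 256 KiB (0x40000) block
// size (64 MiB / 256 KiB = 256 <= 0x200), while a 1 GiB file doubles up to the
// 2 MiB (0x200000) cap before the per-block SHA-1s are folded into the total.
func gcidOfLocalFile(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	fi, err := f.Stat()
	if err != nil {
		return "", err
	}
	return calcGcid(f, fi.Size())
}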
// unWrapObjectInfo returns the underlying Object unwrapped as far as
// possible, or nil; it also unwraps an OverrideRemote wrapper
func unWrapObjectInfo(oi fs.ObjectInfo) fs.Object {
if o, ok := oi.(fs.Object); ok {
return fs.UnWrapObject(o)
} else if do, ok := oi.(*fs.OverrideRemote); ok {
// Unwrap if it is an operations.OverrideRemote
return do.UnWrap()
}
return nil
}
// calcCid calculates Cid from source
//
// Cid is a simplified version of Gcid
func calcCid(ctx context.Context, src fs.ObjectInfo) (cid string, err error) {
srcObj := unWrapObjectInfo(src)
if srcObj == nil {
return "", fmt.Errorf("failed to unwrap object from src: %s", src)
}
size := src.Size()
hash := sha1.New()
var rc io.ReadCloser
readHash := func(start, length int64) (err error) {
end := start + length - 1
if rc, err = srcObj.Open(ctx, &fs.RangeOption{Start: start, End: end}); err != nil {
return fmt.Errorf("failed to open src with range (%d, %d): %w", start, end, err)
}
defer fs.CheckClose(rc, &err)
_, err = io.Copy(hash, rc)
return err
}
if size <= 0xF000 { // 61440 = 60KB
err = readHash(0, size)
} else { // 20KB from three different parts
for _, start := range []int64{0, size / 3, size - 0x5000} {
err = readHash(start, 0x5000)
if err != nil {
break
}
}
}
if err != nil {
return "", fmt.Errorf("failed to hash: %w", err)
}
cid = strings.ToUpper(hex.EncodeToString(hash.Sum(nil)))
return
}
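As a concrete illustration of the sampling above (sizes are examples only): for a 6 MiB (6,291,456 byte) source, calcCid hashes 20 KiB starting at offsets 0, 2,097,152 (size/3) and 6,270,976 (size-0x5000), whereas anything up to 61,440 bytes (0xF000) is hashed in full.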
// ------------------------------------------------------------ authorization
// randomly generates device id used for request header 'x-device-id'
//
// original javascript implementation
//
// return "xxxxxxxxxxxx4xxxyxxxxxxxxxxxxxxx".replace(/[xy]/g, (e) => {
// const t = (16 * Math.random()) | 0;
// return ("x" == e ? t : (3 & t) | 8).toString(16);
// });
func genDeviceID() string {
base := []byte("xxxxxxxxxxxx4xxxyxxxxxxxxxxxxxxx")
for i, char := range base {
switch char {
case 'x':
base[i] = fmt.Sprintf("%x", rand.Intn(16))[0]
case 'y':
base[i] = fmt.Sprintf("%x", rand.Intn(16)&3|8)[0]
}
}
return string(base)
}
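A usage sketch; the concrete value shown is illustrative only:
// id := genDeviceID()
// // id is a 32-character lowercase hex string shaped like a UUIDv4 without
// // dashes, e.g. "9f86d081884c4d23a1b2c3d4e5f60718", which is why the config
// // code regenerates the device id whenever len(opt.DeviceID) != 32.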
var md5Salt = []string{
"C9qPpZLN8ucRTaTiUMWYS9cQvWOE",
"+r6CQVxjzJV6LCV",
"F",
"pFJRC",
"9WXYIDGrwTCz2OiVlgZa90qpECPD6olt",
"/750aCr4lm/Sly/c",
"RB+DT/gZCrbV",
"",
"CyLsf7hdkIRxRm215hl",
"7xHvLi2tOYP0Y92b",
"ZGTXXxu8E/MIWaEDB+Sm/",
"1UI3",
"E7fP5Pfijd+7K+t6Tg/NhuLq0eEUVChpJSkrKxpO",
"ihtqpG6FMt65+Xk+tWUH2",
"NhXXU9rg4XXdzo7u5o",
}
func md5Sum(text string) string {
hash := md5.Sum([]byte(text))
return hex.EncodeToString(hash[:])
}
func calcCaptchaSign(deviceID string) (timestamp, sign string) {
timestamp = fmt.Sprint(time.Now().UnixMilli())
str := fmt.Sprint(clientID, clientVersion, packageName, deviceID, timestamp)
for _, salt := range md5Salt {
str = md5Sum(str + salt)
}
sign = "1." + str
return
}
func newCaptchaTokenRequest(action, oldToken string, opt *Options) (req *api.CaptchaTokenRequest) {
req = &api.CaptchaTokenRequest{
Action: action,
CaptchaToken: oldToken, // can be empty initially
ClientID: clientID,
DeviceID: opt.DeviceID,
Meta: new(api.CaptchaTokenMeta),
}
switch action {
case "POST:/v1/auth/signin":
req.Meta.UserName = opt.Username
default:
timestamp, captchaSign := calcCaptchaSign(opt.DeviceID)
req.Meta.CaptchaSign = captchaSign
req.Meta.Timestamp = timestamp
req.Meta.ClientVersion = clientVersion
req.Meta.PackageName = packageName
req.Meta.UserID = opt.UserID
}
return
}
// CaptchaTokenSource stores updated captcha tokens in the config file
type CaptchaTokenSource struct {
mu sync.Mutex
m configmap.Mapper
opt *Options
token *api.CaptchaToken
ctx context.Context
rst *pikpakClient
}
// initialize CaptchaTokenSource from rclone.conf if possible
func newCaptchaTokenSource(ctx context.Context, opt *Options, m configmap.Mapper) *CaptchaTokenSource {
token := new(api.CaptchaToken)
tokenString, ok := m.Get("captcha_token")
if !ok || tokenString == "" {
fs.Debugf(nil, "failed to read captcha token out of config file")
} else {
if err := json.Unmarshal([]byte(tokenString), token); err != nil {
fs.Debugf(nil, "failed to parse captcha token out of config file: %v", err)
}
}
return &CaptchaTokenSource{
m: m,
opt: opt,
token: token,
ctx: ctx,
rst: newPikpakClient(getClient(ctx, opt), opt),
}
}
// requestToken retrieves captcha token from API
func (cts *CaptchaTokenSource) requestToken(ctx context.Context, req *api.CaptchaTokenRequest) (err error) {
opts := rest.Opts{
Method: "POST",
RootURL: "https://user.mypikpak.com/v1/shield/captcha/init",
}
var info *api.CaptchaToken
_, err = cts.rst.CallJSON(ctx, &opts, &req, &info)
if err == nil && info.ExpiresIn != 0 {
// populate to Expiry
info.Expiry = time.Now().Add(time.Duration(info.ExpiresIn) * time.Second)
cts.token = info // update with a new one
}
return
}
func (cts *CaptchaTokenSource) refreshToken(opts *rest.Opts) (string, error) {
oldToken := ""
if cts.token != nil {
oldToken = cts.token.CaptchaToken
}
action := "GET:/drive/v1/about"
if opts.RootURL == "" && opts.Path != "" {
action = fmt.Sprintf("%s:%s", opts.Method, opts.Path)
} else if u, err := url.Parse(opts.RootURL); err == nil {
action = fmt.Sprintf("%s:%s", opts.Method, u.Path)
}
req := newCaptchaTokenRequest(action, oldToken, cts.opt)
if err := cts.requestToken(cts.ctx, req); err != nil {
return "", fmt.Errorf("failed to retrieve captcha token from api: %w", err)
}
// put it into rclone.conf
tokenBytes, err := json.Marshal(cts.token)
if err != nil {
return "", fmt.Errorf("failed to marshal captcha token: %w", err)
}
cts.m.Set("captcha_token", string(tokenBytes))
return cts.token.CaptchaToken, nil
}
// Invalidate resets existing captcha token for a forced refresh
func (cts *CaptchaTokenSource) Invalidate() {
cts.mu.Lock()
cts.token.CaptchaToken = ""
cts.mu.Unlock()
}
// Token returns a valid captcha token
func (cts *CaptchaTokenSource) Token(opts *rest.Opts) (string, error) {
cts.mu.Lock()
defer cts.mu.Unlock()
if cts.token.Valid() {
return cts.token.CaptchaToken, nil
}
return cts.refreshToken(opts)
}
// pikpakClient wraps rest.Client with a handle of captcha token
type pikpakClient struct {
opt *Options
client *rest.Client
captcha *CaptchaTokenSource
}
// newPikpakClient takes an (oauth) http.Client and makes a new api instance for pikpak with
// * error handler
// * root url
// * default headers
func newPikpakClient(c *http.Client, opt *Options) *pikpakClient {
client := rest.NewClient(c).SetErrorHandler(errorHandler).SetRoot(rootURL)
for key, val := range map[string]string{
"Referer": "https://mypikpak.com/",
"x-client-id": clientID,
"x-client-version": clientVersion,
"x-device-id": opt.DeviceID,
// "x-device-model": "firefox%2F129.0",
// "x-device-name": "PC-Firefox",
// "x-device-sign": fmt.Sprintf("wdi10.%sxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", opt.DeviceID),
// "x-net-work-type": "NONE",
// "x-os-version": "Win32",
// "x-platform-version": "1",
// "x-protocol-version": "301",
// "x-provider-name": "NONE",
// "x-sdk-version": "8.0.3",
} {
client.SetHeader(key, val)
}
return &pikpakClient{
client: client,
opt: opt,
}
}
// This should be called right after the pikpakClient is initialized
func (c *pikpakClient) SetCaptchaTokener(ctx context.Context, m configmap.Mapper) *pikpakClient {
c.captcha = newCaptchaTokenSource(ctx, c.opt, m)
return c
}
func (c *pikpakClient) CallJSON(ctx context.Context, opts *rest.Opts, request interface{}, response interface{}) (resp *http.Response, err error) {
if c.captcha != nil {
token, err := c.captcha.Token(opts)
if err != nil || token == "" {
return nil, fserrors.FatalError(fmt.Errorf("couldn't get captcha token: %v", err))
}
if opts.ExtraHeaders == nil {
opts.ExtraHeaders = make(map[string]string)
}
opts.ExtraHeaders["x-captcha-token"] = token
}
return c.client.CallJSON(ctx, opts, request, response)
}
func (c *pikpakClient) Call(ctx context.Context, opts *rest.Opts) (resp *http.Response, err error) {
return c.client.Call(ctx, opts)
}

View File

@@ -7,8 +7,6 @@ package pikpak
// md5sum is not always available, sometimes given empty.
// sha1sum used for upload differs from the one with official apps.
// Trashed files are not restored to the original location when using `batchUntrash`
// Can't stream without `--vfs-cache-mode=full`
@@ -25,6 +23,7 @@ package pikpak
import (
"bytes"
"context"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
@@ -39,10 +38,11 @@ import (
"sync"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3/s3manager"
"github.com/aws/aws-sdk-go-v2/aws"
awsconfig "github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/credentials"
"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/rclone/rclone/backend/pikpak/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
@@ -52,6 +52,7 @@ import (
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/dircache"
@@ -65,64 +66,74 @@ import (
// Constants
const (
rcloneClientID = "YNxT9w7GMdWvEOKa"
rcloneEncryptedClientSecret = "aqrmB6M1YJ1DWCBxVxFSjFo7wzWEky494YMmkqgAl1do1WKOe2E"
minSleep = 100 * time.Millisecond
maxSleep = 2 * time.Second
waitTime = 500 * time.Millisecond
decayConstant = 2 // bigger for slower decay, exponential
rootURL = "https://api-drive.mypikpak.com"
minChunkSize = fs.SizeSuffix(s3manager.MinUploadPartSize)
defaultUploadConcurrency = s3manager.DefaultUploadConcurrency
clientID = "YUMx5nI8ZU8Ap8pm"
clientVersion = "2.0.0"
packageName = "mypikpak.com"
defaultUserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0"
minSleep = 100 * time.Millisecond
maxSleep = 2 * time.Second
taskWaitTime = 500 * time.Millisecond
decayConstant = 2 // bigger for slower decay, exponential
rootURL = "https://api-drive.mypikpak.com"
minChunkSize = fs.SizeSuffix(manager.MinUploadPartSize)
defaultUploadConcurrency = manager.DefaultUploadConcurrency
)
// Globals
var (
// Description of how to auth for this app
oauthConfig = &oauth2.Config{
Scopes: nil,
Endpoint: oauth2.Endpoint{
AuthURL: "https://user.mypikpak.com/v1/auth/signin",
TokenURL: "https://user.mypikpak.com/v1/auth/token",
AuthStyle: oauth2.AuthStyleInParams,
},
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL,
oauthConfig = &oauthutil.Config{
Scopes: nil,
AuthURL: "https://user.mypikpak.com/v1/auth/signin",
TokenURL: "https://user.mypikpak.com/v1/auth/token",
AuthStyle: oauth2.AuthStyleInParams,
ClientID: clientID,
RedirectURL: oauthutil.RedirectURL,
}
)
// Returns OAuthOptions modified for pikpak
func pikpakOAuthOptions() []fs.Option {
opts := []fs.Option{}
for _, opt := range oauthutil.SharedOptions {
if opt.Name == config.ConfigClientID {
opt.Advanced = true
} else if opt.Name == config.ConfigClientSecret {
opt.Advanced = true
}
opts = append(opts, opt)
}
return opts
}
// pikpakAutorize retrieves OAuth token using user/pass and save it to rclone.conf
func pikpakAuthorize(ctx context.Context, opt *Options, name string, m configmap.Mapper) error {
// override default client id/secret
if id, ok := m.Get("client_id"); ok && id != "" {
oauthConfig.ClientID = id
}
if secret, ok := m.Get("client_secret"); ok && secret != "" {
oauthConfig.ClientSecret = secret
if opt.Username == "" {
return errors.New("no username")
}
pass, err := obscure.Reveal(opt.Password)
if err != nil {
return fmt.Errorf("failed to decode password - did you obscure it?: %w", err)
}
t, err := oauthConfig.PasswordCredentialsToken(ctx, opt.Username, pass)
// new device id if necessary
if len(opt.DeviceID) != 32 {
opt.DeviceID = genDeviceID()
m.Set("device_id", opt.DeviceID)
fs.Infof(nil, "Using new device id %q", opt.DeviceID)
}
opts := rest.Opts{
Method: "POST",
RootURL: "https://user.mypikpak.com/v1/auth/signin",
}
req := map[string]string{
"username": opt.Username,
"password": pass,
"client_id": clientID,
}
var token api.Token
rst := newPikpakClient(getClient(ctx, opt), opt).SetCaptchaTokener(ctx, m)
_, err = rst.CallJSON(ctx, &opts, req, &token)
if apiErr, ok := err.(*api.Error); ok {
if apiErr.Reason == "captcha_invalid" && apiErr.Code == 4002 {
rst.captcha.Invalidate()
_, err = rst.CallJSON(ctx, &opts, req, &token)
}
}
if err != nil {
return fmt.Errorf("failed to retrieve token using username/password: %w", err)
}
t := &oauth2.Token{
AccessToken: token.AccessToken,
TokenType: token.TokenType,
RefreshToken: token.RefreshToken,
Expiry: token.Expiry(),
}
return oauthutil.PutToken(name, m, t, false)
}
@@ -161,7 +172,7 @@ func init() {
}
return nil, fmt.Errorf("unknown state %q", config.State)
},
Options: append(pikpakOAuthOptions(), []fs.Option{{
Options: []fs.Option{{
Name: "user",
Help: "Pikpak username.",
Required: true,
@@ -171,6 +182,18 @@ func init() {
Help: "Pikpak password.",
Required: true,
IsPassword: true,
}, {
Name: "device_id",
Help: "Device ID used for authorization.",
Advanced: true,
Sensitive: true,
}, {
Name: "user_agent",
Default: defaultUserAgent,
Advanced: true,
Help: fmt.Sprintf(`HTTP user agent for pikpak.
Defaults to "%s" or "--pikpak-user-agent" provided on command line.`, defaultUserAgent),
}, {
Name: "root_folder_id",
Help: `ID of the root folder.
@@ -190,6 +213,11 @@ Fill in for rclone to use a non root folder as its starting point.
Default: false,
Help: "Only show files that are in the trash.\n\nThis will show trashed files in their original directory structure.",
Advanced: true,
}, {
Name: "no_media_link",
Default: false,
Help: "Use original file links instead of media links.\n\nThis avoids issues caused by invalid media links, but may reduce download speeds.",
Advanced: true,
}, {
Name: "hash_memory_limit",
Help: "Files bigger than this will be cached on disk to calculate hash if required.",
@@ -249,7 +277,7 @@ this may help to speed up the transfers.`,
encoder.EncodeRightSpace |
encoder.EncodeRightPeriod |
encoder.EncodeInvalidUtf8),
}}...),
}},
})
}
@@ -257,9 +285,13 @@ this may help to speed up the transfers.`,
type Options struct {
Username string `config:"user"`
Password string `config:"pass"`
UserID string `config:"user_id"` // only available during runtime
DeviceID string `config:"device_id"`
UserAgent string `config:"user_agent"`
RootFolderID string `config:"root_folder_id"`
UseTrash bool `config:"use_trash"`
TrashedOnly bool `config:"trashed_only"`
NoMediaLink bool `config:"no_media_link"`
HashMemoryThreshold fs.SizeSuffix `config:"hash_memory_limit"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
UploadConcurrency int `config:"upload_concurrency"`
@@ -272,7 +304,7 @@ type Fs struct {
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
rst *rest.Client // the connection to the server
rst *pikpakClient // the connection to the server
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer // pacer for API calls
rootFolderID string // the id of the root folder
@@ -291,6 +323,7 @@ type Object struct {
modTime time.Time // modification time of the object
mimeType string // The object MIME type
parent string // ID of the parent directories
gcid string // custom hash of the object
md5sum string // md5sum of the object
link *api.Link // link to download the object
linkMu *sync.Mutex
@@ -428,6 +461,12 @@ func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (b
} else if apiErr.Reason == "file_space_not_enough" {
// "file_space_not_enough" (8): Storage space is not enough
return false, fserrors.FatalError(err)
} else if apiErr.Reason == "captcha_invalid" && apiErr.Code == 9 {
// "captcha_invalid" (9): Verification code is invalid
// This error occurred on the POST:/drive/v1/files endpoint
// when a zero-byte file was uploaded with an invalid captcha token
f.rst.captcha.Invalidate()
return true, err
}
}
@@ -451,13 +490,36 @@ func errorHandler(resp *http.Response) error {
return errResponse
}
// getClient makes an http client according to the options
func getClient(ctx context.Context, opt *Options) *http.Client {
// Override few config settings and create a client
newCtx, ci := fs.AddConfig(ctx)
ci.UserAgent = opt.UserAgent
return fshttp.NewClient(newCtx)
}
// newClientWithPacer sets a new http/rest client with a pacer to Fs
func (f *Fs) newClientWithPacer(ctx context.Context) (err error) {
f.client, _, err = oauthutil.NewClient(ctx, f.name, f.m, oauthConfig)
var ts *oauthutil.TokenSource
f.client, ts, err = oauthutil.NewClientWithBaseClient(ctx, f.name, f.m, oauthConfig, getClient(ctx, &f.opt))
if err != nil {
return fmt.Errorf("failed to create oauth client: %w", err)
}
f.rst = rest.NewClient(f.client).SetRoot(rootURL).SetErrorHandler(errorHandler)
token, err := ts.Token()
if err != nil {
return err
}
// parse user_id from oauth access token for later use
if parts := strings.Split(token.AccessToken, "."); len(parts) > 1 {
jsonStr, _ := base64.URLEncoding.DecodeString(parts[1] + "===")
info := struct {
UserID string `json:"sub,omitempty"`
}{}
if jsonErr := json.Unmarshal(jsonStr, &info); jsonErr == nil {
f.opt.UserID = info.UserID
}
}
f.rst = newPikpakClient(f.client, &f.opt).SetCaptchaTokener(ctx, f.m)
f.pacer = fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant)))
return nil
}
@@ -491,7 +553,18 @@ func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, err
NoMultiThreading: true, // can't have multiple threads downloading
}).Fill(ctx, f)
// new device id if necessary
if len(f.opt.DeviceID) != 32 {
f.opt.DeviceID = genDeviceID()
m.Set("device_id", f.opt.DeviceID)
fs.Infof(nil, "Using new device id %q", f.opt.DeviceID)
}
if err := f.newClientWithPacer(ctx); err != nil {
// re-authorize if necessary
if strings.Contains(err.Error(), "invalid_grant") {
return f, f.reAuthorize(ctx)
}
return nil, err
}
@@ -917,19 +990,21 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
// CleanUp empties the trash
func (f *Fs) CleanUp(ctx context.Context) (err error) {
opts := rest.Opts{
Method: "PATCH",
Path: "/drive/v1/files/trash:empty",
NoResponse: true, // Only returns `{"task_id":""}
Method: "PATCH",
Path: "/drive/v1/files/trash:empty",
}
info := struct {
TaskID string `json:"task_id"`
}{}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.rst.Call(ctx, &opts)
resp, err = f.rst.CallJSON(ctx, &opts, nil, &info)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return fmt.Errorf("couldn't empty trash: %w", err)
}
return nil
return f.waitTask(ctx, info.TaskID)
}
// Move the object
@@ -1015,6 +1090,7 @@ func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time,
o = &Object{
fs: f,
remote: remote,
parent: dirID,
size: size,
modTime: modTime,
linkMu: new(sync.Mutex),
@@ -1047,7 +1123,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
// Create temporary object
// Create temporary object - still missing id, mimeType, gcid, md5sum
dstObj, dstLeaf, dstParentID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size)
if err != nil {
return nil, err
@@ -1059,7 +1135,12 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
}
// Manually update info of moved object to save API calls
dstObj.id = srcObj.id
dstObj.mimeType = srcObj.mimeType
dstObj.gcid = srcObj.gcid
dstObj.md5sum = srcObj.md5sum
dstObj.hasMetaData = true
if srcLeaf != dstLeaf {
// Rename
@@ -1067,16 +1148,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
if err != nil {
return nil, fmt.Errorf("move: couldn't rename moved file: %w", err)
}
err = dstObj.setMetaData(info)
if err != nil {
return nil, err
}
} else {
// Update info
err = dstObj.readMetaData(ctx)
if err != nil {
return nil, fmt.Errorf("move: couldn't locate moved file: %w", err)
}
return dstObj, dstObj.setMetaData(info)
}
return dstObj, nil
}
@@ -1116,7 +1188,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
// Create temporary object
// Create temporary object - still missing id, mimeType, gcid, md5sum
dstObj, dstLeaf, dstParentID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size)
if err != nil {
return nil, err
@@ -1130,6 +1202,12 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
if err := f.copyObjects(ctx, []string{srcObj.id}, dstParentID); err != nil {
return nil, fmt.Errorf("couldn't copy file: %w", err)
}
// Update info of the copied object with new parent but source name
if info, err := dstObj.fs.readMetaDataForPath(ctx, srcObj.remote); err != nil {
return nil, fmt.Errorf("copy: couldn't locate copied file: %w", err)
} else if err = dstObj.setMetaData(info); err != nil {
return nil, err
}
// Can't copy and change name in one step so we have to check if we have
// the correct name after copy
@@ -1144,16 +1222,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
if err != nil {
return nil, fmt.Errorf("copy: couldn't rename copied file: %w", err)
}
err = dstObj.setMetaData(info)
if err != nil {
return nil, err
}
} else {
// Update info
err = dstObj.readMetaData(ctx)
if err != nil {
return nil, fmt.Errorf("copy: couldn't locate copied file: %w", err)
}
return dstObj, dstObj.setMetaData(info)
}
return dstObj, nil
}
@@ -1193,36 +1262,37 @@ func (f *Fs) uploadByForm(ctx context.Context, in io.Reader, name string, size i
func (f *Fs) uploadByResumable(ctx context.Context, in io.Reader, name string, size int64, resumable *api.Resumable) (err error) {
p := resumable.Params
endpoint := strings.Join(strings.Split(p.Endpoint, ".")[1:], ".") // "mypikpak.com"
cfg := &aws.Config{
Credentials: credentials.NewStaticCredentials(p.AccessKeyID, p.AccessKeySecret, p.SecurityToken),
Region: aws.String("pikpak"),
Endpoint: &endpoint,
}
sess, err := session.NewSession(cfg)
// Create a credentials provider
creds := credentials.NewStaticCredentialsProvider(p.AccessKeyID, p.AccessKeySecret, p.SecurityToken)
cfg, err := awsconfig.LoadDefaultConfig(ctx,
awsconfig.WithCredentialsProvider(creds),
awsconfig.WithRegion("pikpak"))
if err != nil {
return
}
partSize := chunksize.Calculator(name, size, s3manager.MaxUploadParts, f.opt.ChunkSize)
// Create an uploader with the session and custom options
uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
client := s3.NewFromConfig(cfg, func(o *s3.Options) {
o.BaseEndpoint = aws.String("https://mypikpak.com/")
})
partSize := chunksize.Calculator(name, size, int(manager.MaxUploadParts), f.opt.ChunkSize)
// Create an uploader with custom options
uploader := manager.NewUploader(client, func(u *manager.Uploader) {
u.PartSize = int64(partSize)
u.Concurrency = f.opt.UploadConcurrency
})
// Upload input parameters
uParams := &s3manager.UploadInput{
// Perform an upload
_, err = uploader.Upload(ctx, &s3.PutObjectInput{
Bucket: &p.Bucket,
Key: &p.Key,
Body: in,
}
// Perform an upload
_, err = uploader.UploadWithContext(ctx, uParams)
})
return
}
func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, sha1Str string, size int64, options ...fs.OpenOption) (info *api.File, err error) {
func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, gcid string, size int64, options ...fs.OpenOption) (info *api.File, err error) {
// determine upload type
uploadType := api.UploadTypeResumable
// if size >= 0 && size < int64(5*fs.Mebi) {
@@ -1237,7 +1307,7 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, sha1Str stri
ParentID: parentIDForRequest(dirID),
FolderType: "NORMAL",
Size: size,
Hash: strings.ToUpper(sha1Str),
Hash: strings.ToUpper(gcid),
UploadType: uploadType,
}
if uploadType == api.UploadTypeResumable {
@@ -1251,6 +1321,12 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, sha1Str stri
return nil, fmt.Errorf("invalid response: %+v", new)
} else if new.File.Phase == api.PhaseTypeComplete {
// early return; in case of zero-byte objects
if acc, ok := in.(*accounting.Account); ok && acc != nil {
// if `in io.Reader` is still in type of `*accounting.Account` (meaning that it is unused)
// it is considered as a server side copy as no incoming/outgoing traffic occur at all
acc.ServerSideTransferStart()
acc.ServerSideCopyEnd(size)
}
return new.File, nil
}
@@ -1262,8 +1338,8 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, sha1Str stri
if cancelErr := f.deleteTask(ctx, new.Task.ID, false); cancelErr != nil {
fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
}
fs.Debugf(leaf, "waiting %v for the cancellation to be effective", waitTime)
time.Sleep(waitTime)
fs.Debugf(leaf, "waiting %v for the cancellation to be effective", taskWaitTime)
time.Sleep(taskWaitTime)
})()
if uploadType == api.UploadTypeForm && new.Form != nil {
@@ -1277,12 +1353,7 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, sha1Str stri
if err != nil {
return nil, fmt.Errorf("failed to upload: %w", err)
}
fs.Debugf(leaf, "sleeping for %v before checking upload status", waitTime)
time.Sleep(waitTime)
if _, err = f.getTask(ctx, new.Task.ID, true); err != nil {
return nil, fmt.Errorf("unable to complete the upload: %w", err)
}
return new.File, nil
return new.File, f.waitTask(ctx, new.Task.ID)
}
// Put the object
@@ -1506,18 +1577,18 @@ func (o *Object) setMetaData(info *api.File) (err error) {
} else {
o.parent = info.ParentID
}
o.gcid = info.Hash
o.md5sum = info.Md5Checksum
if info.Links.ApplicationOctetStream != nil {
o.link = info.Links.ApplicationOctetStream
if fid := parseFileID(o.link.URL); fid != "" {
for mid, media := range info.Medias {
if media.Link == nil {
continue
}
if mfid := parseFileID(media.Link.URL); fid == mfid {
fs.Debugf(o, "Using a media link from Medias[%d]", mid)
o.link = media.Link
break
if !o.fs.opt.NoMediaLink {
if fid := parseFileID(o.link.URL); fid != "" {
for _, media := range info.Medias {
if media.Link != nil && parseFileID(media.Link.URL) == fid {
fs.Debugf(o, "Using a media link")
o.link = media.Link
break
}
}
}
}
@@ -1579,9 +1650,6 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 {
return "", hash.ErrUnsupported
}
if o.md5sum == "" {
return "", nil
}
return strings.ToLower(o.md5sum), nil
}
@@ -1705,25 +1773,34 @@ func (o *Object) upload(ctx context.Context, in io.Reader, src fs.ObjectInfo, wi
return err
}
// Calculate sha1sum; grabbed from package jottacloud
hashStr, err := src.Hash(ctx, hash.SHA1)
if err != nil || hashStr == "" {
// unwrap the accounting from the input, we use wrap to put it
// back on after the buffering
var wrap accounting.WrapFn
in, wrap = accounting.UnWrap(in)
var cleanup func()
hashStr, in, cleanup, err = readSHA1(in, size, int64(o.fs.opt.HashMemoryThreshold))
defer cleanup()
if err != nil {
return fmt.Errorf("failed to calculate SHA1: %w", err)
// Calculate gcid; grabbed from package jottacloud
gcid, err := o.fs.getGcid(ctx, src)
if err != nil || gcid == "" {
fs.Debugf(o, "calculating gcid: %v", err)
if srcObj := unWrapObjectInfo(src); srcObj != nil && srcObj.Fs().Features().IsLocal {
// No buffering; directly calculate gcid from source
rc, err := srcObj.Open(ctx)
if err != nil {
return fmt.Errorf("failed to open src: %w", err)
}
defer fs.CheckClose(rc, &err)
if gcid, err = calcGcid(rc, srcObj.Size()); err != nil {
return fmt.Errorf("failed to calculate gcid: %w", err)
}
} else {
var cleanup func()
gcid, in, cleanup, err = readGcid(in, size, int64(o.fs.opt.HashMemoryThreshold))
defer cleanup()
if err != nil {
return fmt.Errorf("failed to calculate gcid: %w", err)
}
}
// Wrap the accounting back onto the stream
in = wrap(in)
}
fs.Debugf(o, "gcid = %s", gcid)
if !withTemp {
info, err := o.fs.upload(ctx, in, leaf, dirID, hashStr, size, options...)
info, err := o.fs.upload(ctx, in, leaf, dirID, gcid, size, options...)
if err != nil {
return err
}
@@ -1732,7 +1809,7 @@ func (o *Object) upload(ctx context.Context, in io.Reader, src fs.ObjectInfo, wi
// We have to fall back to upload + rename
tempName := "rcloneTemp" + random.String(8)
info, err := o.fs.upload(ctx, in, tempName, dirID, hashStr, size, options...)
info, err := o.fs.upload(ctx, in, tempName, dirID, gcid, size, options...)
if err != nil {
return err
}

View File

@@ -0,0 +1,397 @@
package pixeldrain
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"strings"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/rest"
)
// FilesystemPath is the object which is returned from the pixeldrain API when
// running the stat command on a path. It includes the node information for all
// the members of the path and for all the children of the requested directory.
type FilesystemPath struct {
Path []FilesystemNode `json:"path"`
BaseIndex int `json:"base_index"`
Children []FilesystemNode `json:"children"`
}
// Base returns the base node of the path; this is the node that the path
// points to
func (fsp *FilesystemPath) Base() FilesystemNode {
return fsp.Path[fsp.BaseIndex]
}
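A hedged usage sketch, assuming the stat helper defined later in this file; the path is illustrative:
// fsp, err := f.stat(ctx, "docs/report.txt")
// if err != nil {
//     return err
// }
// base := fsp.Base() // the node for report.txt itself
// for _, child := range fsp.Children {
//     // only populated when the base node is a directory
// }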
// FilesystemNode is a single node in the pixeldrain filesystem. Usually part of
// a Path or Children slice. The Node is also returned as response from update
// commands, if requested
type FilesystemNode struct {
Type string `json:"type"`
Path string `json:"path"`
Name string `json:"name"`
Created time.Time `json:"created"`
Modified time.Time `json:"modified"`
ModeOctal string `json:"mode_octal"`
// File params
FileSize int64 `json:"file_size"`
FileType string `json:"file_type"`
SHA256Sum string `json:"sha256_sum"`
// ID is only filled in when the file/directory is publicly shared
ID string `json:"id,omitempty"`
}
// ChangeLog is a log of changes that happened in a filesystem. Changes returned
// from the API are on chronological order from old to new. A change log can be
// requested for any directory or file, but change logging needs to be enabled
// with the update API before any log entries will be made. Changes are logged
// for 24 hours after logging was enabled. Each time a change log is requested
// the timer is reset to 24 hours.
type ChangeLog []ChangeLogEntry
// ChangeLogEntry is a single entry in a directory's change log. It contains
// the time at which the change occurred, the path relative to the requested
// directory and the action that was performed (update, move or delete). In
// case of a move operation the new path of the file is stored in the path_new
// field
type ChangeLogEntry struct {
Time time.Time `json:"time"`
Path string `json:"path"`
PathNew string `json:"path_new"`
Action string `json:"action"`
Type string `json:"type"`
}
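An illustrative move entry as it might be returned by the API (all values are examples):
// {
//   "time": "2024-05-01T12:34:56Z",
//   "path": "/docs/old-name.txt",
//   "path_new": "/docs/new-name.txt",
//   "action": "move",
//   "type": "file"
// }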
// UserInfo contains information about the logged in user
type UserInfo struct {
Username string `json:"username"`
Subscription SubscriptionType `json:"subscription"`
StorageSpaceUsed int64 `json:"storage_space_used"`
}
// SubscriptionType contains information about a subscription type. It's not
// the active subscription itself, only the properties of the subscription,
// such as the perks and cost
type SubscriptionType struct {
Name string `json:"name"`
StorageSpace int64 `json:"storage_space"`
}
// APIError is the error type returned by the pixeldrain API
type APIError struct {
StatusCode string `json:"value"`
Message string `json:"message"`
}
func (e APIError) Error() string { return e.StatusCode }
// Generalized errors which are caught in our own handlers and translated to
// more specific errors from the fs package.
var (
errNotFound = errors.New("pd api: path not found")
errExists = errors.New("pd api: node already exists")
errAuthenticationFailed = errors.New("pd api: authentication failed")
)
func apiErrorHandler(resp *http.Response) (err error) {
var e APIError
if err = json.NewDecoder(resp.Body).Decode(&e); err != nil {
return fmt.Errorf("failed to parse error json: %w", err)
}
// We close the body here so that the API handlers can be sure that the
// response body is not still open when an error was returned
if err = resp.Body.Close(); err != nil {
return fmt.Errorf("failed to close resp body: %w", err)
}
if e.StatusCode == "path_not_found" {
return errNotFound
} else if e.StatusCode == "directory_not_empty" {
return fs.ErrorDirectoryNotEmpty
} else if e.StatusCode == "node_already_exists" {
return errExists
} else if e.StatusCode == "authentication_failed" {
return errAuthenticationFailed
} else if e.StatusCode == "permission_denied" {
return fs.ErrorPermissionDenied
}
return e
}
var retryErrorCodes = []int{
429, // Too Many Requests.
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
}
// shouldRetry returns a boolean as to whether this resp and err deserve to be
// retried. It returns the err as a convenience so it can be used as the return
// value in the pacer function
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// paramsFromMetadata turns the fs.Metadata into instructions the pixeldrain API
// can understand.
func paramsFromMetadata(meta fs.Metadata) (params url.Values) {
params = make(url.Values)
if modified, ok := meta["mtime"]; ok {
params.Set("modified", modified)
}
if created, ok := meta["btime"]; ok {
params.Set("created", created)
}
if mode, ok := meta["mode"]; ok {
params.Set("mode", mode)
}
if shared, ok := meta["shared"]; ok {
params.Set("shared", shared)
}
if loggingEnabled, ok := meta["logging_enabled"]; ok {
params.Set("logging_enabled", loggingEnabled)
}
return params
}
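// exampleParamsFromMetadata is an illustrative sketch, not part of the
// backend: it shows how rclone metadata keys map onto the query parameter
// names used by the pixeldrain API. The metadata values are made-up examples.
func exampleParamsFromMetadata() url.Values {
	meta := fs.Metadata{
		"mtime": "2006-01-02T15:04:05.999999999Z", // sent as "modified"
		"mode":  "755",                             // sent unchanged as "mode"
	}
	// Encodes to: mode=755&modified=2006-01-02T15%3A04%3A05.999999999Z
	return paramsFromMetadata(meta)
}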
// nodeToObject converts a single FilesystemNode API response to an object. The
// node is usually a single element from a directory listing
func (f *Fs) nodeToObject(node FilesystemNode) (o *Object) {
// Trim the path prefix. The path prefix is hidden from rclone during all
// operations. Saving it here would confuse rclone a lot. So instead we
// strip it here and add it back for every API request we need to perform
node.Path = strings.TrimPrefix(node.Path, f.pathPrefix)
return &Object{fs: f, base: node}
}
func (f *Fs) nodeToDirectory(node FilesystemNode) fs.DirEntry {
return fs.NewDir(strings.TrimPrefix(node.Path, f.pathPrefix), node.Modified).SetID(node.ID)
}
func (f *Fs) escapePath(p string) (out string) {
// Add the path prefix, encode all the parts and combine them together
var parts = strings.Split(f.pathPrefix+p, "/")
for i := range parts {
parts[i] = url.PathEscape(parts[i])
}
return strings.Join(parts, "/")
}
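// exampleEscapePath is an illustrative sketch, not part of the backend:
// assuming f.pathPrefix is "/me/" (the personal filesystem root), a remote
// path containing a space is prefixed and percent-encoded per segment, so
// the call below would return "/me/dir/my%20file.txt".
func exampleEscapePath(f *Fs) string {
	return f.escapePath("dir/my file.txt")
}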
func (f *Fs) put(
ctx context.Context,
path string,
body io.Reader,
meta fs.Metadata,
options []fs.OpenOption,
) (node FilesystemNode, err error) {
var params = paramsFromMetadata(meta)
// Tell the server to automatically create parent directories if they don't
// exist yet
params.Set("make_parents", "true")
return node, f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(
ctx,
&rest.Opts{
Method: "PUT",
Path: f.escapePath(path),
Body: body,
Parameters: params,
Options: options,
},
nil,
&node,
)
return shouldRetry(ctx, resp, err)
})
}
func (f *Fs) read(ctx context.Context, path string, options []fs.OpenOption) (in io.ReadCloser, err error) {
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &rest.Opts{
Method: "GET",
Path: f.escapePath(path),
Options: options,
})
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
}
return resp.Body, err
}
func (f *Fs) stat(ctx context.Context, path string) (fsp FilesystemPath, err error) {
return fsp, f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(
ctx,
&rest.Opts{
Method: "GET",
Path: f.escapePath(path),
// To receive node info from the pixeldrain API you need to add the
// ?stat query. Without it pixeldrain will return the file contents
// if the URL points to a file
Parameters: url.Values{"stat": []string{""}},
},
nil,
&fsp,
)
return shouldRetry(ctx, resp, err)
})
}
func (f *Fs) changeLog(ctx context.Context, start, end time.Time) (changeLog ChangeLog, err error) {
return changeLog, f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(
ctx,
&rest.Opts{
Method: "GET",
Path: f.escapePath(""),
Parameters: url.Values{
"change_log": []string{""},
"start": []string{start.Format(time.RFC3339Nano)},
"end": []string{end.Format(time.RFC3339Nano)},
},
},
nil,
&changeLog,
)
return shouldRetry(ctx, resp, err)
})
}
func (f *Fs) update(ctx context.Context, path string, fields fs.Metadata) (node FilesystemNode, err error) {
var params = paramsFromMetadata(fields)
params.Set("action", "update")
return node, f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(
ctx,
&rest.Opts{
Method: "POST",
Path: f.escapePath(path),
MultipartParams: params,
},
nil,
&node,
)
return shouldRetry(ctx, resp, err)
})
}
func (f *Fs) mkdir(ctx context.Context, dir string) (err error) {
return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(
ctx,
&rest.Opts{
Method: "POST",
Path: f.escapePath(dir),
MultipartParams: url.Values{"action": []string{"mkdirall"}},
NoResponse: true,
},
nil,
nil,
)
return shouldRetry(ctx, resp, err)
})
}
var errIncompatibleSourceFS = errors.New("source filesystem is not the same as target")
// Renames a file on the server side. Can be used for both directories and files
func (f *Fs) rename(ctx context.Context, src fs.Fs, from, to string, meta fs.Metadata) (node FilesystemNode, err error) {
srcFs, ok := src.(*Fs)
if !ok {
// This is not a pixeldrain FS, can't move
return node, errIncompatibleSourceFS
} else if srcFs.opt.RootFolderID != f.opt.RootFolderID {
// Path is not in the same root dir, can't move
return node, errIncompatibleSourceFS
}
var params = paramsFromMetadata(meta)
params.Set("action", "rename")
// The target is always in our own filesystem so here we use our
// own pathPrefix
params.Set("target", f.pathPrefix+to)
// Create parent directories if the parent directory of the file
// does not exist yet
params.Set("make_parents", "true")
return node, f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(
ctx,
&rest.Opts{
Method: "POST",
// Important: We use the source FS path prefix here
Path: srcFs.escapePath(from),
MultipartParams: params,
},
nil,
&node,
)
return shouldRetry(ctx, resp, err)
})
}
func (f *Fs) delete(ctx context.Context, path string, recursive bool) (err error) {
var params url.Values
if recursive {
// Tell the server to recursively delete all child files
params = url.Values{"recursive": []string{"true"}}
}
return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(
ctx,
&rest.Opts{
Method: "DELETE",
Path: f.escapePath(path),
Parameters: params,
NoResponse: true,
},
nil, nil,
)
return shouldRetry(ctx, resp, err)
})
}
func (f *Fs) userInfo(ctx context.Context) (user UserInfo, err error) {
return user, f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(
ctx,
&rest.Opts{
Method: "GET",
// The default RootURL points at the filesystem endpoint. We can't
// use that to request user information. So here we override it to
// the user endpoint
RootURL: f.opt.APIURL + "/user",
},
nil,
&user,
)
return shouldRetry(ctx, resp, err)
})
}


@@ -0,0 +1,567 @@
// Package pixeldrain provides an interface to the Pixeldrain object storage
// system.
package pixeldrain
import (
"context"
"errors"
"fmt"
"io"
"path"
"strconv"
"strings"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
)
const (
timeFormat = time.RFC3339Nano
minSleep = pacer.MinSleep(10 * time.Millisecond)
maxSleep = pacer.MaxSleep(1 * time.Second)
decayConstant = pacer.DecayConstant(2) // bigger for slower decay, exponential
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "pixeldrain",
Description: "Pixeldrain Filesystem",
NewFs: NewFs,
Config: nil,
Options: []fs.Option{{
Name: "api_key",
Help: "API key for your pixeldrain account.\n" +
"Found on https://pixeldrain.com/user/api_keys.",
Sensitive: true,
}, {
Name: "root_folder_id",
Help: "Root of the filesystem to use.\n\n" +
"Set to 'me' to use your personal filesystem. " +
"Set to a shared directory ID to use a shared directory.",
Default: "me",
}, {
Name: "api_url",
Help: "The API endpoint to connect to. In the vast majority of cases it's fine to leave\n" +
"this at default. It is only intended to be changed for testing purposes.",
Default: "https://pixeldrain.com/api",
Advanced: true,
Required: true,
}},
MetadataInfo: &fs.MetadataInfo{
System: map[string]fs.MetadataHelp{
"mode": {
Help: "File mode",
Type: "octal, unix style",
Example: "755",
},
"mtime": {
Help: "Time of last modification",
Type: "RFC 3339",
Example: timeFormat,
},
"btime": {
Help: "Time of file birth (creation)",
Type: "RFC 3339",
Example: timeFormat,
},
},
Help: "Pixeldrain supports file modes and creation times.",
},
})
}
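// An illustrative rclone.conf stanza for this backend (the remote name and
// API key value are made-up placeholders):
//
//	[pixeldrain-example]
//	type = pixeldrain
//	api_key = 12345678-1234-1234-1234-123456789abc
//	root_folder_id = me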
// Options defines the configuration for this backend
type Options struct {
APIKey string `config:"api_key"`
RootFolderID string `config:"root_folder_id"`
APIURL string `config:"api_url"`
}
// Fs represents a remote pixeldrain filesystem
type Fs struct {
name string // name of this remote, as given to NewFS
root string // the path we are working on, as given to NewFS
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the server
pacer *fs.Pacer
loggedIn bool // if the user is authenticated
// pathPrefix is the directory we're working in. The pathPrefix is stripped
// from every API response containing a path. The pathPrefix always begins
// and ends with a slash for concatenation convenience
pathPrefix string
}
// Object describes a pixeldrain file
type Object struct {
fs *Fs // what this object is part of
base FilesystemNode // the node this object references
}
// NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
f := &Fs{
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(fshttp.NewClient(ctx)).SetErrorHandler(apiErrorHandler),
pacer: fs.NewPacer(ctx, pacer.NewDefault(minSleep, maxSleep, decayConstant)),
}
f.features = (&fs.Features{
ReadMimeType: true,
CanHaveEmptyDirectories: true,
ReadMetadata: true,
WriteMetadata: true,
}).Fill(ctx, f)
// Set the path prefix. This is the path to the root directory on the
// server. We add it to each request and strip it from each response because
// rclone does not want to see it
f.pathPrefix = "/" + path.Join(opt.RootFolderID, f.root) + "/"
// The root URL equates to https://pixeldrain.com/api/filesystem during
// normal operation. API handlers need to manually add the pathPrefix to
// each request
f.srv.SetRoot(opt.APIURL + "/filesystem")
// If using an APIKey, set the Authorization header
if len(opt.APIKey) > 0 {
f.srv.SetUserPass("", opt.APIKey)
// Check if credentials are correct
user, err := f.userInfo(ctx)
if err != nil {
return nil, fmt.Errorf("failed to get user data: %w", err)
}
f.loggedIn = true
fs.Infof(f,
"Logged in as '%s', subscription '%s', storage limit %d",
user.Username, user.Subscription.Name, user.Subscription.StorageSpace,
)
}
if !f.loggedIn && opt.RootFolderID == "me" {
return nil, errors.New("authentication required: the 'me' directory can only be accessed while logged in")
}
// Satisfy TestFsIsFile. This test expects that we throw an error if the
// filesystem root is a file
fsp, err := f.stat(ctx, "")
if err != errNotFound && err != nil {
// It doesn't matter if the root directory does not exist, as long as it
// is not a file. This is what the test dictates
return f, err
} else if err == nil && fsp.Base().Type == "file" {
// The filesystem root is a file, rclone wants us to set the root to the
// parent directory
f.root = path.Dir(f.root)
f.pathPrefix = "/" + path.Join(opt.RootFolderID, f.root) + "/"
return f, fs.ErrorIsFile
}
return f, nil
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
fsp, err := f.stat(ctx, dir)
if err == errNotFound {
return nil, fs.ErrorDirNotFound
} else if err != nil {
return nil, err
} else if fsp.Base().Type == "file" {
return nil, fs.ErrorIsFile
}
entries = make(fs.DirEntries, len(fsp.Children))
for i := range fsp.Children {
if fsp.Children[i].Type == "dir" {
entries[i] = f.nodeToDirectory(fsp.Children[i])
} else {
entries[i] = f.nodeToObject(fsp.Children[i])
}
}
return entries, nil
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
fsp, err := f.stat(ctx, remote)
if err == errNotFound {
return nil, fs.ErrorObjectNotFound
} else if err != nil {
return nil, err
} else if fsp.Base().Type == "dir" {
return nil, fs.ErrorIsDir
}
return f.nodeToObject(fsp.Base()), nil
}
// Put the object
//
// Copy the reader in to the new object which is returned.
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
meta, err := fs.GetMetadataOptions(ctx, f, src, options)
if err != nil {
return nil, fmt.Errorf("failed to get object metadata")
}
// Set the mtime from the source object if it was not already set in the metadata
if _, ok := meta["mtime"]; !ok {
if meta == nil {
meta = make(fs.Metadata)
}
meta["mtime"] = src.ModTime(ctx).Format(timeFormat)
}
node, err := f.put(ctx, src.Remote(), in, meta, options)
if err != nil {
return nil, fmt.Errorf("failed to put object: %w", err)
}
return f.nodeToObject(node), nil
}
// Mkdir creates the directory if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
err = f.mkdir(ctx, dir)
if err == errNotFound {
return fs.ErrorDirNotFound
} else if err == errExists {
// Spec says we do not return an error if the directory already exists
return nil
}
return err
}
// Rmdir deletes the directory
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
err = f.delete(ctx, dir, false)
if err == errNotFound {
return fs.ErrorDirNotFound
}
return err
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string { return f.name }
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string { return f.root }
// String converts this Fs to a string
func (f *Fs) String() string { return fmt.Sprintf("pixeldrain root '%s'", f.root) }
// Precision return the precision of this Fs
func (f *Fs) Precision() time.Duration { return time.Millisecond }
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set { return hash.Set(hash.SHA256) }
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features { return f.features }
// Purge all files in the directory specified
//
// Implement this if you have a way of deleting all the files
// quicker than just running Remove() on the result of List()
//
// Return an error if it doesn't exist
func (f *Fs) Purge(ctx context.Context, dir string) (err error) {
err = f.delete(ctx, dir, true)
if err == errNotFound {
return fs.ErrorDirNotFound
}
return err
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
// This is not a pixeldrain object. Can't move
return nil, fs.ErrorCantMove
}
node, err := f.rename(ctx, srcObj.fs, srcObj.base.Path, remote, fs.GetConfig(ctx).MetadataSet)
if err == errIncompatibleSourceFS {
return nil, fs.ErrorCantMove
} else if err == errNotFound {
return nil, fs.ErrorObjectNotFound
}
return f.nodeToObject(node), nil
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) (err error) {
_, err = f.rename(ctx, src, srcRemote, dstRemote, nil)
if err == errIncompatibleSourceFS {
return fs.ErrorCantDirMove
} else if err == errNotFound {
return fs.ErrorDirNotFound
} else if err == errExists {
return fs.ErrorDirExists
}
return err
}
// ChangeNotify calls the passed function with a path
// that has had changes. If the implementation
// uses polling, it should adhere to the given interval.
// At least one value will be written to the channel,
// specifying the initial value and updated values might
// follow. A 0 Duration should pause the polling.
// The ChangeNotify implementation must empty the channel
// regularly. When the channel gets closed, the implementation
// should stop polling and release resources.
func (f *Fs) ChangeNotify(ctx context.Context, notify func(string, fs.EntryType), newInterval <-chan time.Duration) {
// If the root folder is not /me we need to explicitly enable change logging
// for this directory or file
if f.pathPrefix != "/me/" {
_, err := f.update(ctx, "", fs.Metadata{"logging_enabled": "true"})
if err != nil {
fs.Errorf(f, "Failed to set up change logging for path '%s': %s", f.pathPrefix, err)
}
}
go f.changeNotify(ctx, notify, newInterval)
}
func (f *Fs) changeNotify(ctx context.Context, notify func(string, fs.EntryType), newInterval <-chan time.Duration) {
var ticker = time.NewTicker(<-newInterval)
var lastPoll = time.Now()
for {
select {
case dur, ok := <-newInterval:
if !ok {
ticker.Stop()
return
}
fs.Debugf(f, "Polling changes at an interval of %s", dur)
ticker.Reset(dur)
case t := <-ticker.C:
clog, err := f.changeLog(ctx, lastPoll, t)
if err != nil {
fs.Errorf(f, "Failed to get change log for path '%s': %s", f.pathPrefix, err)
continue
}
for i := range clog {
fs.Debugf(f, "Path '%s' (%s) changed (%s) in directory '%s'",
clog[i].Path, clog[i].Type, clog[i].Action, f.pathPrefix)
if clog[i].Type == "dir" {
notify(strings.TrimPrefix(clog[i].Path, "/"), fs.EntryDirectory)
} else if clog[i].Type == "file" {
notify(strings.TrimPrefix(clog[i].Path, "/"), fs.EntryObject)
}
}
lastPoll = t
}
}
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
// Put already supports streaming so we just use that
return f.Put(ctx, in, src, options...)
}
// DirSetModTime sets the mtime metadata on a directory
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) (err error) {
_, err = f.update(ctx, dir, fs.Metadata{"mtime": modTime.Format(timeFormat)})
return err
}
// PublicLink generates a public link to the remote path (usually readable by anyone)
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) {
fsn, err := f.update(ctx, remote, fs.Metadata{"shared": strconv.FormatBool(!unlink)})
if err != nil {
return "", err
}
if fsn.ID != "" {
return strings.Replace(f.opt.APIURL, "/api", "/d/", 1) + fsn.ID, nil
}
return "", nil
}
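// examplePublicLink is an illustrative sketch, not part of the backend: with
// the default api_url of "https://pixeldrain.com/api" and a made-up node ID
// of "abc123", PublicLink above produces a link of the form
// "https://pixeldrain.com/d/abc123".
func examplePublicLink(f *Fs) string {
	return strings.Replace(f.opt.APIURL, "/api", "/d/", 1) + "abc123"
}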
// About gets quota information
func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
user, err := f.userInfo(ctx)
if err != nil {
return nil, fmt.Errorf("failed to read user info: %w", err)
}
usage = &fs.Usage{Used: fs.NewUsageValue(user.StorageSpaceUsed)}
if user.Subscription.StorageSpace > -1 {
usage.Total = fs.NewUsageValue(user.Subscription.StorageSpace)
}
return usage, nil
}
// SetModTime sets the modification time of the object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error) {
_, err = o.fs.update(ctx, o.base.Path, fs.Metadata{"mtime": modTime.Format(timeFormat)})
if err == nil {
o.base.Modified = modTime
}
return err
}
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
return o.fs.read(ctx, o.base.Path, options)
}
// Update the object with the contents of the io.Reader, modTime and size
//
// If existing is set then it updates the object rather than creating a new one.
//
// The new object may have been created if an error is returned.
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
// Copy the parameters and update the object
o.base.Modified = src.ModTime(ctx)
o.base.FileSize = src.Size()
o.base.SHA256Sum, _ = src.Hash(ctx, hash.SHA256)
_, err = o.fs.Put(ctx, in, o, options...)
return err
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
return o.fs.delete(ctx, o.base.Path, false)
}
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Hash returns the SHA-256 of an object returning a lowercase hex string
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.SHA256 {
return "", hash.ErrUnsupported
}
return o.base.SHA256Sum, nil
}
// Storable returns a boolean showing whether this object is storable
func (o *Object) Storable() bool {
return true
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.base.Path
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.base.Path
}
// ModTime returns the modification time of the object
//
// It returns the modification time stored in the file's node metadata
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.base.Modified
}
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
return o.base.FileSize
}
// MimeType returns the content type of the Object if known, or "" if not
func (o *Object) MimeType(ctx context.Context) string {
return o.base.FileType
}
// Metadata returns metadata for an object
//
// It should return nil if there is no Metadata
func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return fs.Metadata{
"mode": o.base.ModeOctal,
"mtime": o.base.Modified.Format(timeFormat),
"btime": o.base.Created.Format(timeFormat),
}, nil
}
// Verify that all the interfaces are implemented correctly
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Info = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.DirEntry = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
_ fs.Metadataer = (*Object)(nil)
)

Some files were not shown because too many files have changed in this diff.