mirror of https://github.com/rclone/rclone.git synced 2026-02-18 10:23:31 +00:00

Compare commits


22 Commits

Author SHA1 Message Date
Nick Craig-Wood
c7dab94e94 Version v1.73.1 2026-02-17 16:55:43 +00:00
Nick Craig-Wood
615a6f57bb build: fix build using go 1.26.0 instead of go 1.25.7
In the actions config use Go ~1.25.7 to pin the go version to 1.25.x,
x >= 7.

Before this it was choosing Go 1.26.0 which isn't what we want.
2026-02-17 16:48:52 +00:00
Nick Craig-Wood
b52fab6868 fs/march: fix runtime: program exceeds 10000-thread limit
Before this change when doing a sync with `--no-traverse` and
`--files-from` we could call `NewObject` a total of `--checkers` *
`--checkers` times simultaneously.

With `--checkers 128` this can exceed the 10,000 thread limit and
fail when run on a local to local transfer, because `NewObject` calls
`lstat`, a syscall which needs an OS thread of its own.

This patch uses a weighted semaphore to limit the number of
simultaneous calls to `NewObject` to `--checkers` instead, which won't
blow the 10,000 thread limit and is a far more sensible use of OS
resources.

Fixes #9073
2026-02-17 16:35:26 +00:00
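As a hedged sketch of that approach (not rclone's actual code — `checkObjects` and the names below are illustrative), a weighted semaphore from golang.org/x/sync/semaphore caps concurrency at `--checkers` no matter how many goroutines fan in:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/semaphore"
)

// checkObjects calls check for each remote with at most `checkers`
// simultaneous calls, however many goroutines the callers spawn.
func checkObjects(ctx context.Context, checkers int64, remotes []string, check func(string)) error {
	sem := semaphore.NewWeighted(checkers)
	for _, remote := range remotes {
		if err := sem.Acquire(ctx, 1); err != nil {
			return err // context cancelled
		}
		go func(remote string) {
			defer sem.Release(1)
			check(remote) // stand-in for NewObject -> lstat, which pins an OS thread
		}(remote)
	}
	// Acquiring the full weight waits for every in-flight call to finish.
	return sem.Acquire(ctx, checkers)
}

func main() {
	remotes := []string{"a.txt", "b.txt", "c.txt", "d.txt"}
	_ = checkObjects(context.Background(), 2, remotes, func(r string) {
		time.Sleep(10 * time.Millisecond) // simulate the syscall
		fmt.Println("checked", r)
	})
}
```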
Nick Craig-Wood
b740e9c299 accounting: fix missing server side stats from core/stats rc
These stats weren't being updated in the global stats read by rc
core/stats:

- transferQueue
- deletesSize
- serverSideCopies
- serverSideCopyBytes
- serverSideMoves
- serverSideMoveBytes
2026-02-17 16:35:26 +00:00
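These fields can be spot-checked from the remote control API while a sync is running; the field names come from the commit above, the values here are illustrative and the output is trimmed:

```console
rclone rc core/stats
{
    "transferQueue": 3,
    "deletesSize": 1048576,
    "serverSideCopies": 12,
    "serverSideCopyBytes": 73400320,
    "serverSideMoves": 4,
    "serverSideMoveBytes": 8388608
}
```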
Nick Craig-Wood
3b63bd280c pacer: re-read the sleep time as it may be stale
Before this change we read sleepTime before acquiring the pacer token
and used that possibly stale value to schedule the token return. When
many goroutines enter while sleepTime is high (e.g., 10s), each
goroutine caches this 10s value. Even if successful calls rapidly
decay the pacer state to 0, the queued goroutines still schedule 10s
token returns, so the queue drains at 1 req/10s for the entire herd.
This can create multi-minute delays even after the pacer has dropped
to 0.

After this change we refresh the sleep time after getting the token.

This problem was introduced by the desire to skip reading the pacer
token entirely when sleepTime is 0 in high performance backends (eg
s3, azure blob).
2026-02-17 16:35:26 +00:00
Nick Craig-Wood
3a902dd1a0 pacer: fix deadlock between pacer token and --max-connections
It was possible, in the presence of --max-connections and recursive
calls to the pacer, to deadlock it, leaving all connections waiting on
either a max connection token or a pacer token.

This fixes the problem by making sure we return the pacer token on
schedule if we take it.

This also short circuits the pacer token if sleepTime is 0.
2026-02-17 16:35:26 +00:00
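A minimal sketch of the begin-call logic after both pacer fixes above, assuming a cut-down pacer with a mutex-guarded sleepTime and a single-slot token channel (illustrative model, not rclone's internals). It shows all three behaviours: the zero-sleep short circuit, re-reading the sleep time after taking the token, and always scheduling the token's return:

```go
package main

import (
	"sync"
	"time"
)

// pacer is an illustrative cut-down model, not rclone's implementation.
type pacer struct {
	mu        sync.Mutex
	sleepTime time.Duration
	token     chan struct{} // capacity 1, starts full
}

func (p *pacer) currentSleep() time.Duration {
	p.mu.Lock()
	defer p.mu.Unlock()
	return p.sleepTime
}

// beginCall gates one call through the pacer.
func (p *pacer) beginCall() {
	// Short circuit: when the pacer is idle, skip the token entirely,
	// so recursive calls cannot deadlock on it.
	if p.currentSleep() == 0 {
		return
	}
	<-p.token
	// The value read before queueing may be stale: other goroutines'
	// successful calls can have decayed sleepTime to 0 while we waited,
	// so re-read it after getting the token.
	sleep := p.currentSleep()
	// Always hand the token back on schedule, using the fresh value,
	// so a drained pacer empties its queue quickly.
	time.AfterFunc(sleep, func() { p.token <- struct{}{} })
}

func main() {
	p := &pacer{sleepTime: 100 * time.Millisecond, token: make(chan struct{}, 1)}
	p.token <- struct{}{} // token starts available
	p.beginCall()
	time.Sleep(200 * time.Millisecond) // let the timer return the token
}
```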
Nick Craig-Wood
1e24958861 build: fix CVE-2025-68121 by updating go to 1.25.7 or later - fixes #9167 2026-02-17 16:35:26 +00:00
Nick Craig-Wood
77e0a760d8 drime: fix files and directories being created in the default workspace
Before this change directories and files were created in the default
workspace, not the workspace specified by --drime-workspace-id.
2026-02-17 16:35:26 +00:00
Nick Craig-Wood
e16ac436f7 docs: update sponsors 2026-02-17 16:35:26 +00:00
Jack Kelly
1f34163857 copyurl: Extend copyurl docs with an example of CSV FILENAMEs starting with a path. 2026-02-17 16:35:26 +00:00
José Zúniga
132184a47f internxt: implement re-login under refresh logic, improve retry logic - fixes #9174 2026-02-17 16:35:26 +00:00
Nick Craig-Wood
6e78bb1c40 docs: add ExchangeRate-API as a sponsor 2026-02-17 16:35:26 +00:00
albertony
d720452656 build: bump github.com/go-chi/chi/v5 from 5.2.3 to 5.2.5 to fix GO-2026-4316 2026-02-17 16:35:26 +00:00
kingston125
a8d374f068 Set list_version to 2 for FileLu S3 configuration 2026-02-17 16:35:26 +00:00
kingston125
53fbeb21c8 filelu: add multipart upload support with configurable cutoff 2026-02-17 16:35:26 +00:00
kingston125
350a9bc389 filelu: add multipart init response type 2026-02-17 16:35:25 +00:00
kingston125
6ba40cc97e filelu: add comment for response body wrapping 2026-02-17 16:35:25 +00:00
kingston125
1d2a159c6a filelu: avoid buffering entire file in memory
Avoid buffering the entire file in memory during download, especially
for large files.
2026-02-17 16:35:25 +00:00
Nick Craig-Wood
1adc3e241d docs: update sponsor logos 2026-02-17 16:35:25 +00:00
Enduriel
d5483e3e93 filen: fix potential panic in case of error during upload 2026-02-17 16:35:25 +00:00
Enduriel
d6bc7a69a1 filen: fix 32 bit targets not being able to list directories - fixes #9142
Before this change 32 bit targets could not list directories, or do
pretty much anything; this was caused by timestamps not being read
into 64 bit integers.
2026-02-17 16:35:25 +00:00
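The failure mode is the classic 32 bit truncation: a millisecond Unix timestamp no longer fits in a 32 bit integer, so reading it into an `int` on a 32 bit target wraps. A hedged illustration (not the filen backend's actual code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ms := time.Date(2026, 2, 17, 0, 0, 0, 0, time.UTC).UnixMilli()
	fmt.Println(ms) // 1771286400000 - needs 41 bits

	// On a 32 bit target `int` is 32 bits, so converting wraps to a
	// nonsense value; int32 here reproduces that on any target.
	fmt.Println(int32(ms)) // wrapped, bogus timestamp

	// Reading into int64 (as the fix does) is safe everywhere.
	fmt.Println(int64(ms)) // 1771286400000
}
```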
Nick Craig-Wood
3311b72407 Start v1.73.1-DEV development 2026-02-17 12:16:33 +00:00
36 changed files with 2299 additions and 1482 deletions


@@ -34,7 +34,7 @@ jobs:
include:
- job_name: linux
os: ubuntu-latest
go: '>=1.25.0-rc.1'
go: '~1.25.7'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@@ -45,14 +45,14 @@ jobs:
- job_name: linux_386
os: ubuntu-latest
go: '>=1.25.0-rc.1'
go: '~1.25.7'
goarch: 386
gotags: cmount
quicktest: true
- job_name: mac_amd64
os: macos-latest
go: '>=1.25.0-rc.1'
go: '~1.25.7'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
@@ -61,14 +61,14 @@ jobs:
- job_name: mac_arm64
os: macos-latest
go: '>=1.25.0-rc.1'
go: '~1.25.7'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows
os: windows-latest
go: '>=1.25.0-rc.1'
go: '~1.25.7'
gotags: cmount
cgo: '0'
build_flags: '-include "^windows/"'
@@ -78,7 +78,7 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '>=1.25.0-rc.1'
go: '~1.25.7'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true
deploy: true
@@ -224,7 +224,7 @@ jobs:
id: setup-go
uses: actions/setup-go@v6
with:
go-version: '>=1.24.0-rc.1'
go-version: '~1.24.0'
check-latest: true
cache: false
@@ -315,7 +315,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v6
with:
go-version: '>=1.25.0-rc.1'
go-version: '~1.25.7'
- name: Set global environment variables
run: |

MANUAL.html generated

File diff suppressed because it is too large

MANUAL.md generated

@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
% Jan 30, 2026
% Feb 17, 2026
# NAME
@@ -5388,12 +5388,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20260130
// Output: stories/The Quick Brown Fox!-20260217
```
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2026-01-30 0825PM
// Output: stories/The Quick Brown Fox!-2026-02-17 0451PM
```
```console
@@ -5841,6 +5841,15 @@ Note that `--stdout` and `--print-filename` are incompatible with `--urls`.
This will do `--transfers` copies in parallel. Note that if `--auto-filename`
is desired for all URLs then a file with only URLs and no filename can be used.
Each FILENAME in the CSV file can start with a relative path which will be appended
to the destination path provided at the command line. For example, running the command
shown above with the following CSV file will write two files to the destination:
`remote:dir/local/path/bar.json` and `remote:dir/another/local/directory/qux.json`
```csv
https://example.org/foo/bar.json,local/path/bar.json
https://example.org/qux/baz.json,another/local/directory/qux.json
```
## Troubleshooting
If you can't get `rclone copyurl` to work then here are some things you can try:
@@ -24776,7 +24785,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.73.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.73.1")
```
@@ -25260,9 +25269,11 @@ Backend-only flags (these can be set in the config file also).
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
--filelu-chunk-size SizeSuffix Chunk size to use for uploading. Used for multipart uploads (default 64Mi)
--filelu-description string Description of the remote
--filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
--filelu-key string Your FileLu Rclone key from My Account
--filelu-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size (default 500Mi)
--filen-api-key string API Key for your Filen account (obscured)
--filen-auth-version string Authentication Version (internal use only)
--filen-base-folder-uuid string UUID of Account Root Directory (internal use only)
@@ -27528,11 +27539,14 @@ The following backends have known issues that need more investigation:
<!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
- `TestDropbox` (`dropbox`)
- [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
- `TestSeafile` (`seafile`)
- [`TestBisyncLocalRemote/volatile`](https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafile-1.txt)
- `TestSeafileV6` (`seafile`)
- [`TestBisyncLocalRemote/volatile`](https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafileV6-1.txt)
- Updated: 2026-01-30-010015
- `TestInternxt` (`internxt`)
- [`TestBisyncLocalRemote/all_changed`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt)
- [`TestBisyncLocalRemote/ext_paths`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt)
- [`TestBisyncLocalRemote/max_delete_path1`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt)
- [`TestBisyncRemoteRemote/basic`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt)
- [`TestBisyncRemoteRemote/concurrent`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt)
- [5 more](https://pub.rclone.org/integration-tests/current/)
- Updated: 2026-02-17-010016
<!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
The following backends either have not been tested recently or have known issues
@@ -44693,6 +44707,28 @@ Properties:
Here are the Advanced options specific to filelu (FileLu Cloud Storage).
#### --filelu-upload-cutoff
Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size.
Properties:
- Config: upload_cutoff
- Env Var: RCLONE_FILELU_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 500Mi
#### --filelu-chunk-size
Chunk size to use for uploading. Used for multipart uploads.
Properties:
- Config: chunk_size
- Env Var: RCLONE_FILELU_CHUNK_SIZE
- Type: SizeSuffix
- Default: 64Mi
#### --filelu-encoding
The encoding for the backend.
@@ -68058,6 +68094,33 @@ Options:
# Changelog
## v1.73.1 - 2026-02-17
[See commits](https://github.com/rclone/rclone/compare/v1.73.0...v1.73.1)
- Bug Fixes
- accounting: Fix missing server side stats from core/stats rc (Nick Craig-Wood)
- build
- Fix CVE-2025-68121 by updating go to 1.25.7 or later (Nick Craig-Wood)
- Bump github.com/go-chi/chi/v5 from 5.2.3 to 5.2.5 to fix GO-2026-4316 (albertony)
- docs: Extend copyurl docs with an example of CSV FILENAMEs starting with a path. (Jack Kelly)
- march: Fix runtime: program exceeds 10000-thread limit (Nick Craig-Wood)
- pacer
- Fix deadlock between pacer token and --max-connections (Nick Craig-Wood)
- Re-read the sleep time as it may be stale (Nick Craig-Wood)
- Drime
- Fix files and directories being created in the default workspace (Nick Craig-Wood)
- Filelu
- Avoid buffering entire file in memory (kingston125)
- Add multipart upload support with configurable cutoff (kingston125)
- Filen
- Fix 32 bit targets not being able to list directories (Enduriel)
- Fix potential panic in case of error during upload (Enduriel)
- Internxt
- Implement re-login under refresh logic, improve retry logic (José Zúniga)
- S3
- Set list_version to 2 for FileLu S3 configuration (kingston125)
## v1.73.0 - 2026-01-30
[See commits](https://github.com/rclone/rclone/compare/v1.72.0...v1.73.0)

MANUAL.txt generated

@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
Jan 30, 2026
Feb 17, 2026
NAME
@@ -4607,10 +4607,10 @@ Examples:
// Output: stories/The Quick Brown Fox!.txt
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20260130
// Output: stories/The Quick Brown Fox!-20260217
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2026-01-30 0825PM
// Output: stories/The Quick Brown Fox!-2026-02-17 0451PM
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
// Output: ababababababab/ababab ababababab ababababab ababab!abababab
@@ -5032,6 +5032,15 @@ incompatible with --urls. This will do --transfers copies in parallel.
Note that if --auto-filename is desired for all URLs then a file with
only URLs and no filename can be used.
Each FILENAME in the CSV file can start with a relative path which will
be appended to the destination path provided at the command line. For
example, running the command shown above with the following CSV file
will write two files to the destination: remote:dir/local/path/bar.json
and remote:dir/another/local/directory/qux.json
https://example.org/foo/bar.json,local/path/bar.json
https://example.org/qux/baz.json,another/local/directory/qux.json
Troubleshooting
If you can't get rclone copyurl to work then here are some things you
@@ -22962,7 +22971,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.73.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.73.1")
Performance
@@ -23416,9 +23425,11 @@ Backend-only flags (these can be set in the config file also).
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
--filelu-chunk-size SizeSuffix Chunk size to use for uploading. Used for multipart uploads (default 64Mi)
--filelu-description string Description of the remote
--filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
--filelu-key string Your FileLu Rclone key from My Account
--filelu-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size (default 500Mi)
--filen-api-key string API Key for your Filen account (obscured)
--filen-auth-version string Authentication Version (internal use only)
--filen-base-folder-uuid string UUID of Account Root Directory (internal use only)
@@ -25626,11 +25637,14 @@ The following backends have known issues that need more investigation:
- TestDropbox (dropbox)
- TestBisyncRemoteRemote/normalization
- TestSeafile (seafile)
- TestBisyncLocalRemote/volatile
- TestSeafileV6 (seafile)
- TestBisyncLocalRemote/volatile
- Updated: 2026-01-30-010015
- TestInternxt (internxt)
- TestBisyncLocalRemote/all_changed
- TestBisyncLocalRemote/ext_paths
- TestBisyncLocalRemote/max_delete_path1
- TestBisyncRemoteRemote/basic
- TestBisyncRemoteRemote/concurrent
- 5 more
- Updated: 2026-02-17-010016
The following backends either have not been tested recently or have
known issues that are deemed unfixable for the time being:
@@ -42261,6 +42275,29 @@ Advanced options
Here are the Advanced options specific to filelu (FileLu Cloud Storage).
--filelu-upload-cutoff
Cutoff for switching to chunked upload. Any files larger than this will
be uploaded in chunks of chunk_size.
Properties:
- Config: upload_cutoff
- Env Var: RCLONE_FILELU_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 500Mi
--filelu-chunk-size
Chunk size to use for uploading. Used for multipart uploads.
Properties:
- Config: chunk_size
- Env Var: RCLONE_FILELU_CHUNK_SIZE
- Type: SizeSuffix
- Default: 64Mi
--filelu-encoding
The encoding for the backend.
@@ -65100,6 +65137,41 @@ Options:
Changelog
v1.73.1 - 2026-02-17
See commits
- Bug Fixes
- accounting: Fix missing server side stats from core/stats rc
(Nick Craig-Wood)
- build
- Fix CVE-2025-68121 by updating go to 1.25.7 or later (Nick
Craig-Wood)
- Bump github.com/go-chi/chi/v5 from 5.2.3 to 5.2.5 to fix
GO-2026-4316 (albertony)
- docs: Extend copyurl docs with an example of CSV FILENAMEs
starting with a path. (Jack Kelly)
- march: Fix runtime: program exceeds 10000-thread limit (Nick
Craig-Wood)
- pacer
- Fix deadlock between pacer token and --max-connections (Nick
Craig-Wood)
- Re-read the sleep time as it may be stale (Nick Craig-Wood)
- Drime
- Fix files and directories being created in the default workspace
(Nick Craig-Wood)
- Filelu
- Avoid buffering entire file in memory (kingston125)
- Add multipart upload support with configurable cutoff
(kingston125)
- Filen
- Fix 32 bit targets not being able to list directories (Enduriel)
- Fix potential panic in case of error during upload (Enduriel)
- Internxt
- Implement re-login under refresh logic, improve retry logic
(José Zúniga)
- S3
- Set list_version to 2 for FileLu S3 configuration (kingston125)
v1.73.0 - 2026-01-30
See commits


@@ -1 +1 @@
v1.73.0
v1.73.1


@@ -173,6 +173,7 @@ type MultiPartCreateRequest struct {
Extension string `json:"extension"`
ParentID json.Number `json:"parent_id"`
RelativePath string `json:"relativePath"`
WorkspaceID string `json:"workspaceId,omitempty"`
}
// MultiPartCreateResponse is returned by POST /s3/multipart/create


@@ -476,8 +476,12 @@ func (f *Fs) createDir(ctx context.Context, pathID, leaf string, modTime time.Ti
var resp *http.Response
var result api.CreateFolderResponse
opts := rest.Opts{
Method: "POST",
Path: "/folders",
Method: "POST",
Path: "/folders",
Parameters: url.Values{},
}
if f.opt.WorkspaceID != "" {
opts.Parameters.Set("workspaceId", f.opt.WorkspaceID)
}
mkdir := api.CreateFolderRequest{
Name: f.opt.Enc.FromStandardName(leaf),
@@ -779,8 +783,12 @@ func (f *Fs) patch(ctx context.Context, id, attribute string, value string) (ite
}
var result api.UpdateItemResponse
opts := rest.Opts{
Method: "PUT",
Path: "/file-entries/" + id,
Method: "PUT",
Path: "/file-entries/" + id,
Parameters: url.Values{},
}
if f.opt.WorkspaceID != "" {
opts.Parameters.Set("workspaceId", f.opt.WorkspaceID)
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &request, &result)
@@ -807,8 +815,12 @@ func (f *Fs) move(ctx context.Context, id, newDirID string) (err error) {
}
var result api.MoveResponse
opts := rest.Opts{
Method: "POST",
Path: "/file-entries/move",
Method: "POST",
Path: "/file-entries/move",
Parameters: url.Values{},
}
if f.opt.WorkspaceID != "" {
opts.Parameters.Set("workspaceId", f.opt.WorkspaceID)
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &request, &result)
@@ -945,8 +957,12 @@ func (f *Fs) copy(ctx context.Context, id, newDirID string) (item *api.Item, err
}
var result api.CopyResponse
opts := rest.Opts{
Method: "POST",
Path: "/file-entries/duplicate",
Method: "POST",
Path: "/file-entries/duplicate",
Parameters: url.Values{},
}
if f.opt.WorkspaceID != "" {
opts.Parameters.Set("workspaceId", f.opt.WorkspaceID)
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &request, &result)
@@ -1114,6 +1130,7 @@ func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectIn
Extension: strings.TrimPrefix(path.Ext(leaf), `.`),
ParentID: json.Number(directoryID),
RelativePath: f.opt.Enc.FromStandardPath(path.Join(f.root, remote)),
WorkspaceID: f.opt.WorkspaceID,
}
var resp api.MultiPartCreateResponse
@@ -1509,6 +1526,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
MultipartParams: url.Values{
"parentId": {directoryID},
"relativePath": {encodedLeaf},
"workspaceId": {o.fs.opt.WorkspaceID},
},
MultipartContentName: "file",
MultipartFileName: encodedLeaf,


@@ -3,6 +3,19 @@ package api
import "encoding/json"
// MultipartInitResponse represents the response from multipart/init.
type MultipartInitResponse struct {
Status int `json:"status"`
Msg string `json:"msg"`
Result struct {
UploadID string `json:"upload_id"`
SessID string `json:"sess_id"`
Server string `json:"server"`
FolderID int64 `json:"folder_id"`
ObjectPath string `json:"object_path"`
} `json:"result"`
}
// CreateFolderResponse represents the response for creating a folder.
type CreateFolderResponse struct {
Status int `json:"status"`


@@ -21,6 +21,11 @@ import (
"github.com/rclone/rclone/lib/rest"
)
const (
defaultUploadCutoff = fs.SizeSuffix(500 * 1024 * 1024)
defaultChunkSize = fs.SizeSuffix(64 * 1024 * 1024)
)
// Register the backend with Rclone
func init() {
fs.Register(&fs.RegInfo{
@@ -33,6 +38,17 @@ func init() {
Required: true,
Sensitive: true,
},
{
Name: "upload_cutoff",
Help: "Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size.",
Default: defaultUploadCutoff,
Advanced: true,
}, {
Name: "chunk_size",
Help: "Chunk size to use for uploading. Used for multipart uploads.",
Default: defaultChunkSize,
Advanced: true,
},
{
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -72,8 +88,10 @@ func init() {
// Options defines the configuration for the FileLu backend
type Options struct {
Key string `config:"key"`
Enc encoder.MultiEncoder `config:"encoding"`
Key string `config:"key"`
Enc encoder.MultiEncoder `config:"encoding"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
}
// Fs represents the FileLu file system
@@ -189,7 +207,6 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.deleteFolder(ctx, fullPath)
}
// List returns a list of files and folders
// List returns a list of files and folders for the given directory
func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
// Compose full path for API call
@@ -250,23 +267,11 @@ func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
// Put uploads a file directly to the destination folder in the FileLu storage system.
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
if src.Size() == 0 {
return nil, fs.ErrorCantUploadEmptyFiles
o := &Object{
fs: f,
remote: src.Remote(),
}
err := f.uploadFile(ctx, in, src.Remote())
if err != nil {
return nil, err
}
newObject := &Object{
fs: f,
remote: src.Remote(),
size: src.Size(),
modTime: src.ModTime(ctx),
}
fs.Infof(f, "Put: Successfully uploaded new file %q", src.Remote())
return newObject, nil
return o, o.Update(ctx, in, src, options...)
}
// Move moves the file to the specified location


@@ -16,6 +16,59 @@ import (
"github.com/rclone/rclone/lib/rest"
)
// multipartInit starts a new multipart upload and returns server details.
func (f *Fs) multipartInit(ctx context.Context, folderPath, filename string) (*api.MultipartInitResponse, error) {
opts := rest.Opts{
Method: "GET",
Path: "/multipart/init",
Parameters: url.Values{
"key": {f.opt.Key},
"filename": {filename},
"folder_path": {folderPath},
},
}
var result api.MultipartInitResponse
err := f.pacer.Call(func() (bool, error) {
_, err := f.srv.CallJSON(ctx, &opts, nil, &result)
return fserrors.ShouldRetry(err), err
})
if err != nil {
return nil, err
}
if result.Status != 200 {
return nil, fmt.Errorf("multipart init error: %s", result.Msg)
}
return &result, nil
}
// completeMultipart finalizes the multipart upload on the file server.
func (f *Fs) completeMultipart(ctx context.Context, server string, uploadID string, sessID string, objectPath string) error {
req, err := http.NewRequestWithContext(ctx, "POST", server, nil)
if err != nil {
return err
}
req.Header.Set("X-RC-Upload-Id", uploadID)
req.Header.Set("X-Sess-ID", sessID)
req.Header.Set("X-Object-Path", objectPath)
resp, err := f.client.Do(req)
if err != nil {
return err
}
defer func() { _ = resp.Body.Close() }()
if resp.StatusCode != 202 {
body, _ := io.ReadAll(resp.Body)
return fmt.Errorf("completeMultipart failed %d: %s", resp.StatusCode, string(body))
}
return nil
}
// createFolder creates a folder at the specified path.
func (f *Fs) createFolder(ctx context.Context, dirPath string) (*api.CreateFolderResponse, error) {
encodedDir := f.fromStandardPath(dirPath)


@@ -1,6 +1,7 @@
package filelu
import (
"bytes"
"context"
"encoding/json"
"errors"
@@ -15,6 +16,105 @@ import (
"github.com/rclone/rclone/fs"
)
// multipartUpload uploads a file in fixed-size chunks using the multipart API.
func (f *Fs) multipartUpload(ctx context.Context, in io.Reader, remote string) error {
dir := path.Dir(remote)
if dir == "." {
dir = ""
}
if dir != "" {
_ = f.Mkdir(ctx, dir)
}
folder := strings.Trim(dir, "/")
if folder != "" {
folder = "/" + folder
}
file := path.Base(remote)
initResp, err := f.multipartInit(ctx, folder, file)
if err != nil {
return fmt.Errorf("multipart init failed: %w", err)
}
uploadID := initResp.Result.UploadID
sessID := initResp.Result.SessID
server := initResp.Result.Server
objectPath := initResp.Result.ObjectPath
chunkSize := int(f.opt.ChunkSize)
buf := make([]byte, 0, chunkSize)
tmp := make([]byte, 1024*1024)
partNo := 1
for {
n, errRead := in.Read(tmp)
if n > 0 {
buf = append(buf, tmp[:n]...)
// If buffer reached chunkSize, upload a full part
if len(buf) >= chunkSize {
err = f.uploadPart(ctx, server, uploadID, sessID, objectPath, partNo, bytes.NewReader(buf))
if err != nil {
return fmt.Errorf("upload part %d failed: %w", partNo, err)
}
partNo++
buf = buf[:0]
}
}
if errRead == io.EOF {
break
}
if errRead != nil {
return fmt.Errorf("read failed: %w", errRead)
}
}
if len(buf) > 0 {
err = f.uploadPart(ctx, server, uploadID, sessID, objectPath, partNo, bytes.NewReader(buf))
if err != nil {
return fmt.Errorf("upload part %d failed: %w", partNo, err)
}
}
err = f.completeMultipart(ctx, server, uploadID, sessID, objectPath)
if err != nil {
return fmt.Errorf("complete multipart failed: %w", err)
}
return nil
}
// uploadPart sends a single multipart chunk to the upload server.
func (f *Fs) uploadPart(ctx context.Context, server, uploadID, sessID, objectPath string, partNo int, r io.Reader) error {
url := fmt.Sprintf("%s?partNumber=%d&uploadId=%s", server, partNo, uploadID)
req, err := http.NewRequestWithContext(ctx, "PUT", url, r)
if err != nil {
return err
}
req.Header.Set("X-RC-Upload-Id", uploadID)
req.Header.Set("X-RC-Part-No", fmt.Sprintf("%d", partNo))
req.Header.Set("X-Sess-ID", sessID)
req.Header.Set("X-Object-Path", objectPath)
resp, err := f.client.Do(req)
if err != nil {
return err
}
defer func() { _ = resp.Body.Close() }()
if resp.StatusCode != 200 {
return fmt.Errorf("uploadPart failed: %s", resp.Status)
}
return nil
}
// uploadFile uploads a file to FileLu
func (f *Fs) uploadFile(ctx context.Context, fileContent io.Reader, fileFullPath string) error {
directory := path.Dir(fileFullPath)


@@ -1,7 +1,6 @@
package filelu
import (
"bytes"
"context"
"encoding/json"
"fmt"
@@ -88,6 +87,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadClo
}
}
// Wrap the response body to handle offset and count
var reader io.ReadCloser
err = o.fs.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "GET", directLink, nil)
@@ -109,22 +109,25 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadClo
return false, fmt.Errorf("failed to download file: HTTP %d", resp.StatusCode)
}
// Wrap the response body to handle offset and count
currentContents, err := io.ReadAll(resp.Body)
if err != nil {
return false, fmt.Errorf("failed to read response body: %w", err)
if offset > 0 {
_, err = io.CopyN(io.Discard, resp.Body, offset)
if err != nil {
_ = resp.Body.Close()
return false, fmt.Errorf("failed to skip offset: %w", err)
}
}
if offset > 0 {
if offset > int64(len(currentContents)) {
return false, fmt.Errorf("offset %d exceeds file size %d", offset, len(currentContents))
if count > 0 {
reader = struct {
io.Reader
io.Closer
}{
Reader: io.LimitReader(resp.Body, count),
Closer: resp.Body,
}
currentContents = currentContents[offset:]
} else {
reader = resp.Body
}
if count > 0 && count < int64(len(currentContents)) {
currentContents = currentContents[:count]
}
reader = io.NopCloser(bytes.NewReader(currentContents))
return false, nil
})
@@ -137,15 +140,23 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadClo
// Update updates the object with new data
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
if src.Size() <= 0 {
return fs.ErrorCantUploadEmptyFiles
size := src.Size()
if size <= int64(o.fs.opt.UploadCutoff) {
err := o.fs.uploadFile(ctx, in, o.remote)
if err != nil {
return err
}
} else {
fullPath := path.Join(o.fs.root, o.remote)
err := o.fs.multipartUpload(ctx, in, fullPath)
if err != nil {
return fmt.Errorf("failed to upload file: %w", err)
}
}
err := o.fs.uploadFile(ctx, in, o.remote)
if err != nil {
return fmt.Errorf("failed to upload file: %w", err)
}
o.size = src.Size()
o.size = size
o.modTime = src.ModTime(ctx)
return nil
}


@@ -355,19 +355,19 @@ type chunkWriter struct {
chunkSize int64
chunksLock sync.Mutex
knownChunks map[int][]byte // known chunks to be hashed
nextChunkToHash int
knownChunks map[int64][]byte // known chunks to be hashed
nextChunkToHash int64
sizeLock sync.Mutex
size int64
}
func (cw *chunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader io.ReadSeeker) (bytesWritten int64, err error) {
realChunkNumber := int(int64(chunkNumber) * (cw.chunkSize) / sdk.ChunkSize)
realChunkNumber := int64(chunkNumber) * (cw.chunkSize) / sdk.ChunkSize
chunk := make([]byte, sdk.ChunkSize, sdk.ChunkSize+cw.EncryptionKey.Cipher.Overhead())
totalWritten := int64(0)
for sliceStart := 0; sliceStart < int(cw.chunkSize); sliceStart += sdk.ChunkSize {
for sliceStart := int64(0); sliceStart < cw.chunkSize; sliceStart += sdk.ChunkSize {
chunk = chunk[:sdk.ChunkSize]
chunkRead := 0
for {
@@ -415,13 +415,13 @@ func (cw *chunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader i
return totalWritten, err
}
resp, err := cw.filen.UploadChunk(ctx, &cw.FileUpload, realChunkNumber, chunkReadSlice)
if err != nil {
return totalWritten, err
}
select { // only care about getting this once
case cw.bucketAndRegion <- *resp:
default:
}
if err != nil {
return totalWritten, err
}
totalWritten += int64(len(chunkReadSlice))
realChunkNumber++
}
@@ -496,7 +496,7 @@ func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectIn
filen: f.filen,
chunkSize: chunkSize,
bucketAndRegion: make(chan client.V3UploadResponse, 1),
knownChunks: make(map[int][]byte),
knownChunks: make(map[int64][]byte),
nextChunkToHash: 0,
size: 0,
}, nil
@@ -1122,8 +1122,8 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
return nil, err
}
total := int64(userInfo.MaxStorage)
used := int64(userInfo.UsedStorage)
total := userInfo.MaxStorage
used := userInfo.UsedStorage
free := total - used
return &fs.Usage{
Total: &total,


@@ -13,8 +13,12 @@ import (
"github.com/golang-jwt/jwt/v5"
internxtauth "github.com/internxt/rclone-adapter/auth"
internxtconfig "github.com/internxt/rclone-adapter/config"
sdkerrors "github.com/internxt/rclone-adapter/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/lib/oauthutil"
"golang.org/x/oauth2"
)
@@ -101,7 +105,6 @@ func jwtToOAuth2Token(jwtString string) (*oauth2.Token, error) {
}
// computeBasicAuthHeader creates the BasicAuthHeader for bucket operations
// Following the pattern from SDK's auth/access.go:96-102
func computeBasicAuthHeader(bridgeUser, userID string) string {
sum := sha256.Sum256([]byte(userID))
hexPass := hex.EncodeToString(sum[:])
@@ -144,3 +147,100 @@ func refreshJWTToken(ctx context.Context, name string, m configmap.Mapper) error
fs.Debugf(name, "Token refreshed successfully, new expiry: %v", token.Expiry)
return nil
}
// reLogin performs a full re-login using stored email+password credentials.
// Returns the AccessResponse on success, or an error if 2FA is required or login fails.
func (f *Fs) reLogin(ctx context.Context) (*internxtauth.AccessResponse, error) {
password, err := obscure.Reveal(f.opt.Pass)
if err != nil {
return nil, fmt.Errorf("couldn't decrypt password: %w", err)
}
cfg := internxtconfig.NewDefaultToken("")
cfg.HTTPClient = fshttp.NewClient(ctx)
loginResp, err := internxtauth.Login(ctx, cfg, f.opt.Email)
if err != nil {
return nil, fmt.Errorf("re-login check failed: %w", err)
}
if loginResp.TFA {
return nil, errors.New("account requires 2FA - please run: rclone config reconnect " + f.name + ":")
}
resp, err := internxtauth.DoLogin(ctx, cfg, f.opt.Email, password, "")
if err != nil {
return nil, fmt.Errorf("re-login failed: %w", err)
}
return resp, nil
}
// refreshOrReLogin tries to refresh the JWT token first; if that fails with 401,
// it falls back to a full re-login using stored credentials.
func (f *Fs) refreshOrReLogin(ctx context.Context) error {
refreshErr := refreshJWTToken(ctx, f.name, f.m)
if refreshErr == nil {
newToken, err := oauthutil.GetToken(f.name, f.m)
if err != nil {
return fmt.Errorf("failed to get refreshed token: %w", err)
}
f.cfg.Token = newToken.AccessToken
f.cfg.BasicAuthHeader = computeBasicAuthHeader(f.bridgeUser, f.userID)
fs.Debugf(f, "Token refresh succeeded")
return nil
}
var httpErr *sdkerrors.HTTPError
if !errors.As(refreshErr, &httpErr) || httpErr.StatusCode() != 401 {
if fserrors.ShouldRetry(refreshErr) {
return refreshErr
}
return refreshErr
}
fs.Debugf(f, "Token refresh returned 401, attempting re-login with stored credentials")
resp, err := f.reLogin(ctx)
if err != nil {
return fmt.Errorf("re-login fallback failed: %w", err)
}
oauthToken, err := jwtToOAuth2Token(resp.NewToken)
if err != nil {
return fmt.Errorf("failed to parse re-login token: %w", err)
}
err = oauthutil.PutToken(f.name, f.m, oauthToken, true)
if err != nil {
return fmt.Errorf("failed to save re-login token: %w", err)
}
f.cfg.Token = oauthToken.AccessToken
f.bridgeUser = resp.User.BridgeUser
f.userID = resp.User.UserID
f.cfg.BasicAuthHeader = computeBasicAuthHeader(f.bridgeUser, f.userID)
f.cfg.Bucket = resp.User.Bucket
f.cfg.RootFolderID = resp.User.RootFolderID
fs.Debugf(f, "Re-login succeeded, new token expiry: %v", oauthToken.Expiry)
return nil
}
// reAuthorize is called after getting 401 from the server.
// It serializes re-auth attempts and uses a circuit-breaker to avoid infinite loops.
func (f *Fs) reAuthorize(ctx context.Context) error {
f.authMu.Lock()
defer f.authMu.Unlock()
if f.authFailed {
return errors.New("re-authorization permanently failed")
}
err := f.refreshOrReLogin(ctx)
if err != nil {
f.authFailed = true
return err
}
return nil
}


@@ -11,6 +11,7 @@ import (
"path"
"path/filepath"
"strings"
"sync"
"time"
"github.com/internxt/rclone-adapter/auth"
@@ -41,16 +42,34 @@ const (
decayConstant = 2 // bigger for slower decay, exponential
)
// shouldRetry determines if an error should be retried
func shouldRetry(ctx context.Context, err error) (bool, error) {
// shouldRetry determines if an error should be retried.
// On 401, it attempts to re-authorize before retrying.
// On 429, it honours the server's rate limit retry delay.
func (f *Fs) shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
var httpErr *sdkerrors.HTTPError
if errors.As(err, &httpErr) && httpErr.StatusCode() == 401 {
return true, err
if errors.As(err, &httpErr) {
switch httpErr.StatusCode() {
case 401:
if !f.authFailed {
authErr := f.reAuthorize(ctx)
if authErr != nil {
fs.Debugf(f, "Re-authorization failed: %v", authErr)
return false, err
}
return true, err
}
return false, err
case 429:
delay := httpErr.RetryAfter()
if delay <= 0 {
delay = time.Second
}
return true, pacer.RetryAfterError(err, delay)
}
}
return fserrors.ShouldRetry(err), err
}
@@ -184,6 +203,7 @@ type Fs struct {
name string
root string
opt Options
m configmap.Mapper
dirCache *dircache.DirCache
cfg *config.Config
features *fs.Features
@@ -191,6 +211,8 @@ type Fs struct {
tokenRenewer *oauthutil.Renew
bridgeUser string
userID string
authMu sync.Mutex
authFailed bool
}
// Object holds the data for a remote file object
@@ -263,45 +285,62 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
cfg.SkipHashValidation = opt.SkipHashValidation
cfg.HTTPClient = fshttp.NewClient(ctx)
userInfo, err := getUserInfo(ctx, &userInfoConfig{Token: cfg.Token})
if err != nil {
return nil, fmt.Errorf("failed to fetch user info: %w", err)
}
cfg.RootFolderID = userInfo.RootFolderID
cfg.Bucket = userInfo.Bucket
cfg.BasicAuthHeader = computeBasicAuthHeader(userInfo.BridgeUser, userInfo.UserID)
f := &Fs{
name: name,
root: strings.Trim(root, "/"),
opt: *opt,
cfg: cfg,
bridgeUser: userInfo.BridgeUser,
userID: userInfo.UserID,
name: name,
root: strings.Trim(root, "/"),
opt: *opt,
m: m,
cfg: cfg,
}
f.pacer = fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant)))
var userInfo *userInfo
const maxRetries = 3
for attempt := 1; attempt <= maxRetries; attempt++ {
userInfo, err = getUserInfo(ctx, &userInfoConfig{Token: f.cfg.Token})
if err == nil {
break
}
var httpErr *sdkerrors.HTTPError
if errors.As(err, &httpErr) && httpErr.StatusCode() == 401 {
fs.Debugf(f, "getUserInfo returned 401, attempting re-auth")
authErr := f.refreshOrReLogin(ctx)
if authErr != nil {
return nil, fmt.Errorf("failed to fetch user info (re-auth failed): %w", err)
}
userInfo, err = getUserInfo(ctx, &userInfoConfig{Token: f.cfg.Token})
if err == nil {
break
}
return nil, fmt.Errorf("failed to fetch user info after re-auth: %w", err)
}
if fserrors.ShouldRetry(err) && attempt < maxRetries {
fs.Debugf(f, "getUserInfo transient error (attempt %d/%d): %v", attempt, maxRetries, err)
time.Sleep(time.Duration(attempt) * time.Second)
continue
}
return nil, fmt.Errorf("failed to fetch user info: %w", err)
}
f.cfg.RootFolderID = userInfo.RootFolderID
f.cfg.Bucket = userInfo.Bucket
f.cfg.BasicAuthHeader = computeBasicAuthHeader(userInfo.BridgeUser, userInfo.UserID)
f.bridgeUser = userInfo.BridgeUser
f.userID = userInfo.UserID
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
if ts != nil {
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
err := refreshJWTToken(ctx, name, m)
if err != nil {
return err
}
newToken, err := oauthutil.GetToken(name, m)
if err != nil {
return fmt.Errorf("failed to get refreshed token: %w", err)
}
f.cfg.Token = newToken.AccessToken
f.cfg.BasicAuthHeader = computeBasicAuthHeader(f.bridgeUser, f.userID)
return nil
f.authMu.Lock()
defer f.authMu.Unlock()
return f.refreshOrReLogin(ctx)
})
f.tokenRenewer.Start()
}
@@ -312,9 +351,19 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
// Assume it might be a file
newRoot, remote := dircache.SplitPath(f.root)
tempF := *f
tempF.dirCache = dircache.New(newRoot, f.cfg.RootFolderID, &tempF)
tempF.root = newRoot
tempF := &Fs{
name: f.name,
root: newRoot,
opt: f.opt,
m: f.m,
cfg: f.cfg,
features: f.features,
pacer: f.pacer,
tokenRenewer: f.tokenRenewer,
bridgeUser: f.bridgeUser,
userID: f.userID,
}
tempF.dirCache = dircache.New(newRoot, f.cfg.RootFolderID, tempF)
err = tempF.dirCache.FindRoot(ctx, false)
if err != nil {
@@ -367,7 +416,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
err = f.pacer.Call(func() (bool, error) {
var err error
childFolders, err = folders.ListAllFolders(ctx, f.cfg, id)
return shouldRetry(ctx, err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return err
@@ -380,7 +429,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
err = f.pacer.Call(func() (bool, error) {
var err error
childFiles, err = folders.ListAllFiles(ctx, f.cfg, id)
return shouldRetry(ctx, err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return err
@@ -395,7 +444,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
if err != nil && strings.Contains(err.Error(), "404") {
return false, fs.ErrorDirNotFound
}
return shouldRetry(ctx, err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return err
@@ -412,7 +461,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (string, bool, e
err := f.pacer.Call(func() (bool, error) {
var err error
entries, err = folders.ListAllFolders(ctx, f.cfg, pathID)
return shouldRetry(ctx, err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return "", false, err
@@ -437,7 +486,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (string, error)
err := f.pacer.CallNoRetry(func() (bool, error) {
var err error
resp, err = folders.CreateFolder(ctx, f.cfg, request)
return shouldRetry(ctx, err)
return f.shouldRetry(ctx, err)
})
if err != nil {
// If folder already exists (409 conflict), try to find it
@@ -525,7 +574,7 @@ func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
err = f.pacer.Call(func() (bool, error) {
var err error
foldersList, err = folders.ListAllFolders(ctx, f.cfg, dirID)
return shouldRetry(ctx, err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -538,7 +587,7 @@ func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
err = f.pacer.Call(func() (bool, error) {
var err error
filesList, err = folders.ListAllFiles(ctx, f.cfg, dirID)
return shouldRetry(ctx, err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -616,7 +665,7 @@ func (f *Fs) Remove(ctx context.Context, remote string) error {
}
err = f.pacer.Call(func() (bool, error) {
err := folders.DeleteFolder(ctx, f.cfg, dirID)
return shouldRetry(ctx, err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return err
@@ -642,7 +691,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
err = f.pacer.Call(func() (bool, error) {
var err error
files, err = folders.ListAllFiles(ctx, f.cfg, dirID)
return shouldRetry(ctx, err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -720,7 +769,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
err := f.pacer.Call(func() (bool, error) {
var err error
internxtLimit, err = users.GetLimit(ctx, f.cfg)
return shouldRetry(ctx, err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -730,7 +779,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
err = f.pacer.Call(func() (bool, error) {
var err error
internxtUsage, err = users.GetUsage(ctx, f.cfg)
return shouldRetry(ctx, err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -776,7 +825,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadClo
err := o.f.pacer.Call(func() (bool, error) {
var err error
stream, err = buckets.DownloadFileStream(ctx, o.f.cfg, o.id, rangeValue)
return shouldRetry(ctx, err)
return o.f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -826,7 +875,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return false, nil
}
}
return shouldRetry(ctx, err)
return o.f.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("failed to rename existing file to backup: %w", err)
@@ -847,7 +896,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
src.Size(),
src.ModTime(ctx),
)
return shouldRetry(ctx, err)
return o.f.shouldRetry(ctx, err)
})
if err != nil && isEmptyFileLimitError(err) {
@@ -885,7 +934,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
}
}
return shouldRetry(ctx, err)
return o.f.shouldRetry(ctx, err)
})
if err != nil {
fs.Errorf(o.f, "Failed to delete backup file %s.%s (UUID: %s): %v. This may leave an orphaned backup file.",
@@ -939,7 +988,7 @@ func (o *Object) recoverFromTimeoutConflict(ctx context.Context, uploadErr error
checkErr := o.f.pacer.Call(func() (bool, error) {
existingFile, err := o.f.preUploadCheck(ctx, encodedName, dirID)
if err != nil {
return shouldRetry(ctx, err)
return o.f.shouldRetry(ctx, err)
}
if existingFile != nil {
name := strings.TrimSuffix(baseName, filepath.Ext(baseName))
@@ -978,7 +1027,7 @@ func (o *Object) restoreBackupFile(ctx context.Context, backupUUID, origName, or
_ = o.f.pacer.Call(func() (bool, error) {
err := files.RenameFile(ctx, o.f.cfg, backupUUID,
o.f.opt.Encoding.FromStandardName(origName), origType)
return shouldRetry(ctx, err)
return o.f.shouldRetry(ctx, err)
})
}
@@ -986,6 +1035,6 @@ func (o *Object) restoreBackupFile(ctx context.Context, backupUUID, origName, or
func (o *Object) Remove(ctx context.Context) error {
return o.f.pacer.Call(func() (bool, error) {
err := files.DeleteFile(ctx, o.f.cfg, o.uuid)
return shouldRetry(ctx, err)
return o.f.shouldRetry(ctx, err)
})
}


@@ -15,7 +15,7 @@ endpoint:
acl: {}
bucket_acl: true
quirks:
list_version: 1
list_version: 2
force_path_style: true
list_url_encode: false
use_multipart_etag: false


@@ -70,6 +70,15 @@ Note that |--stdout| and |--print-filename| are incompatible with |--urls|.
This will do |--transfers| copies in parallel. Note that if |--auto-filename|
is desired for all URLs then a file with only URLs and no filename can be used.
Each FILENAME in the CSV file can start with a relative path which will be appended
to the destination path provided at the command line. For example, running the command
shown above with the following CSV file will write two files to the destination:
|remote:dir/local/path/bar.json| and |remote:dir/another/local/directory/qux.json|
|||csv
https://example.org/foo/bar.json,local/path/bar.json
https://example.org/qux/baz.json,another/local/directory/qux.json
|||
### Troubleshooting
If you can't get |rclone copyurl| to work then here are some things you can try:


@@ -1049,11 +1049,14 @@ The following backends have known issues that need more investigation:
<!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
- `TestDropbox` (`dropbox`)
- [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
- `TestSeafile` (`seafile`)
- [`TestBisyncLocalRemote/volatile`](https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafile-1.txt)
- `TestSeafileV6` (`seafile`)
- [`TestBisyncLocalRemote/volatile`](https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafileV6-1.txt)
- Updated: 2026-01-30-010015
- `TestInternxt` (`internxt`)
- [`TestBisyncLocalRemote/all_changed`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt)
- [`TestBisyncLocalRemote/ext_paths`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt)
- [`TestBisyncLocalRemote/max_delete_path1`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt)
- [`TestBisyncRemoteRemote/basic`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt)
- [`TestBisyncRemoteRemote/concurrent`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt)
- [5 more](https://pub.rclone.org/integration-tests/current/)
- Updated: 2026-02-17-010016
<!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
The following backends either have not been tested recently or have known issues


@@ -6,6 +6,33 @@ description: "Rclone Changelog"
# Changelog
## v1.73.1 - 2026-02-17
[See commits](https://github.com/rclone/rclone/compare/v1.73.0...v1.73.1)
- Bug Fixes
- accounting: Fix missing server side stats from core/stats rc (Nick Craig-Wood)
- build
- Fix CVE-2025-68121 by updating go to 1.25.7 or later (Nick Craig-Wood)
- Bump github.com/go-chi/chi/v5 from 5.2.3 to 5.2.5 to fix GO-2026-4316 (albertony)
- docs: Extend copyurl docs with an example of CSV FILENAMEs starting with a path. (Jack Kelly)
- march: Fix runtime: program exceeds 10000-thread limit (Nick Craig-Wood)
- pacer
- Fix deadlock between pacer token and --max-connections (Nick Craig-Wood)
- Re-read the sleep time as it may be stale (Nick Craig-Wood)
- Drime
- Fix files and directories being created in the default workspace (Nick Craig-Wood)
- Filelu
- Avoid buffering entire file in memory (kingston125)
- Add multipart upload support with configurable cutoff (kingston125)
- Filen
- Fix 32 bit targets not being able to list directories (Enduriel)
- Fix potential panic in case of error during upload (Enduriel)
- Internxt
- Implement re-login under refresh logic, improve retry logic (José Zúniga)
- S3
- Set list_version to 2 for FileLu S3 configuration (kingston125)
## v1.73.0 - 2026-01-30
[See commits](https://github.com/rclone/rclone/compare/v1.72.0...v1.73.0)


@@ -329,9 +329,11 @@ rclone [flags]
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
--filelu-chunk-size SizeSuffix Chunk size to use for uploading. Used for multipart uploads (default 64Mi)
--filelu-description string Description of the remote
--filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
--filelu-key string Your FileLu Rclone key from My Account
--filelu-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size (default 500Mi)
--filen-api-key string API Key for your Filen account (obscured)
--filen-auth-version string Authentication Version (internal use only)
--filen-base-folder-uuid string UUID of Account Root Directory (internal use only)
@@ -1063,7 +1065,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.73.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.73.1")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-auth-redirect Preserve authentication on redirect


@@ -231,12 +231,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20260130
// Output: stories/The Quick Brown Fox!-20260217
```
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2026-01-30 0825PM
// Output: stories/The Quick Brown Fox!-2026-02-17 0451PM
```
```console


@@ -39,6 +39,15 @@ Note that `--stdout` and `--print-filename` are incompatible with `--urls`.
This will do `--transfers` copies in parallel. Note that if `--auto-filename`
is desired for all URLs then a file with only URLs and no filename can be used.
Each FILENAME in the CSV file can start with a relative path which will be appended
to the destination path provided at the command line. For example, running the command
shown above with the following CSV file will write two files to the destination:
`remote:dir/local/path/bar.json` and `remote:dir/another/local/directory/qux.json`
```csv
https://example.org/foo/bar.json,local/path/bar.json
https://example.org/qux/baz.json,another/local/directory/qux.json
```
## Troubleshooting
If you can't get `rclone copyurl` to work then here are some things you can try:


@@ -219,6 +219,28 @@ Properties:
Here are the Advanced options specific to filelu (FileLu Cloud Storage).
#### --filelu-upload-cutoff
Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size.
Properties:
- Config: upload_cutoff
- Env Var: RCLONE_FILELU_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 500Mi
#### --filelu-chunk-size
Chunk size to use for uploading. Used for multipart uploads.
Properties:
- Config: chunk_size
- Env Var: RCLONE_FILELU_CHUNK_SIZE
- Type: SizeSuffix
- Default: 64Mi
#### --filelu-encoding
The encoding for the backend.


@@ -121,7 +121,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.73.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.73.1")
```
@@ -605,9 +605,11 @@ Backend-only flags (these can be set in the config file also).
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
--filelu-chunk-size SizeSuffix Chunk size to use for uploading. Used for multipart uploads (default 64Mi)
--filelu-description string Description of the remote
--filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
--filelu-key string Your FileLu Rclone key from My Account
--filelu-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size (default 500Mi)
--filen-api-key string API Key for your Filen account (obscured)
--filen-auth-version string Authentication Version (internal use only)
--filen-base-folder-uuid string UUID of Account Root Directory (internal use only)


@@ -16,16 +16,18 @@ Thank you to our sponsors:
{{< sponsor src="/img/logos/rabata.svg" width="300" height="200" title="Visit our sponsor Rabata.io" link="https://rabata.io/?utm_source=banner&utm_medium=rclone&utm_content=general">}}
{{< sponsor src="/img/logos/idrive_e2.svg" width="300" height="200" title="Visit our sponsor IDrive e2" link="https://www.idrive.com/e2/?refer=rclone">}}
{{< sponsor src="/img/logos/filescom-enterprise-grade-workflows.png" width="300" height="200" title="Start Your Free Trial Today" link="https://files.com/?utm_source=rclone&utm_medium=referral&utm_campaign=banner&utm_term=rclone">}}
{{< sponsor src="/img/logos/internxt.jpg" width="300" height="200" title="Visit rclone's sponsor Internxt" link="https://internxt.com/specialoffer/rclone">}}
{{< sponsor src="/img/logos/mega-s4.svg" width="300" height="200" title="MEGA S4: New S3 compatible object storage. High scale. Low cost. Free egress." link="https://mega.io/objectstorage?utm_source=rclone&utm_medium=referral&utm_campaign=rclone-mega-s4&mct=rclonepromo">}}
{{< sponsor src="/img/logos/sia.svg" width="200" height="200" title="Visit our sponsor sia" link="https://sia.tech">}}
{{< sponsor src="/img/logos/route4me.svg" width="400" height="200" title="Visit our sponsor Route4Me" link="https://route4me.com/">}}
{{< sponsor src="/img/logos/rcloneview-banner.svg" width="300" height="200" title="Visit our sponsor RcloneView" link="https://rcloneview.com/">}}
{{< sponsor src="/img/logos/rcloneview.svg" width="300" height="200" title="Visit our sponsor RcloneView" link="https://rcloneview.com/">}}
{{< sponsor src="/img/logos/rcloneui.svg" width="300" height="200" title="Visit our sponsor RcloneUI" link="https://github.com/rclone-ui/rclone-ui">}}
{{< sponsor src="/img/logos/shade.svg" width="300" height="200" title="Visit our sponsor Shade" link="https://shade.inc">}}
{{< sponsor src="/img/logos/filelu-rclone.svg" width="300" height="200" title="Visit our sponsor FileLu" link="https://filelu.com/">}}
{{< sponsor src="/img/logos/torbox.png" width="200" height="200" title="Visit our sponsor TORBOX" link="https://www.torbox.app/">}}
{{< sponsor src="/img/logos/spectra-logic.svg" width="300" height="200" title="Visit our sponsor Spectra Logic" link="https://spectralogic.com/">}}
{{< sponsor src="/img/logos/servercore.svg" width="300" height="200" title="Visit our sponsor servercore" link="https://servercore.com/services/object-storage/?utm_source=rclone.org&utm_medium=referral&utm_campaign=cloud-s3_rclone_231025_paid">}}
{{< sponsor src="/img/logos/exchangerate-api.png" width="300" height="200" title="Visit our sponsor ExchangeRate-API" link="https://www.exchangerate-api.com/">}}
<!-- markdownlint-restore -->


@@ -33,7 +33,7 @@
<div class="card">
<div class="card-header">Gold Sponsor</div>
<div class="card-body">
<a href="https://internxt.com/?utm_source=rclone" target="_blank" rel="noopener" title="Visit rclone's sponsor Internxt"><img style="max-width: 100%; height: auto;" src="/img/logos/internxt.jpg"></a><br />
<a href="https://internxt.com/specialoffer/rclone" target="_blank" rel="noopener" title="Visit rclone's sponsor Internxt"><img style="max-width: 100%; height: auto;" src="/img/logos/internxt.jpg"></a><br />
</div>
</div>
@@ -44,12 +44,6 @@
<a href="https://rcloneview.com/?utm_source=rclone" target="_blank" rel="noopener" title="Visit rclone's sponsor RcloneView"><img src="/img/logos/rcloneview.svg"></a><br />
</div>
</div>
<div class="card">
<div class="card-header">Silver Sponsor</div>
<div class="card-body">
<a href="https://rcloneui.com" target="_blank" rel="noopener" title="Visit rclone's sponsor rclone UI"><img src="/img/logos/rcloneui.svg"></a><br />
</div>
</div>
<div class="card">
<div class="card-header">Silver Sponsor</div>
<div class="card-body">


@@ -1 +1 @@
v1.73.0
v1.73.1


@@ -385,12 +385,14 @@ func (sg *statsGroups) sum(ctx context.Context) *StatsInfo {
			sum.checkQueueSize += stats.checkQueueSize
			sum.transfers += stats.transfers
			sum.transferring.merge(stats.transferring)
			sum.transferQueue += stats.transferQueue
			sum.transferQueueSize += stats.transferQueueSize
			sum.listed += stats.listed
			sum.renames += stats.renames
			sum.renameQueue += stats.renameQueue
			sum.renameQueueSize += stats.renameQueueSize
			sum.deletes += stats.deletes
			sum.deletesSize += stats.deletesSize
			sum.deletedDirs += stats.deletedDirs
			sum.inProgress.merge(stats.inProgress)
			sum.startedTransfers = append(sum.startedTransfers, stats.startedTransfers...)
@@ -399,6 +401,10 @@ func (sg *statsGroups) sum(ctx context.Context) *StatsInfo {
			stats.average.mu.Lock()
			sum.average.speed += stats.average.speed
			stats.average.mu.Unlock()
			sum.serverSideCopies += stats.serverSideCopies
			sum.serverSideCopyBytes += stats.serverSideCopyBytes
			sum.serverSideMoves += stats.serverSideMoves
			sum.serverSideMoveBytes += stats.serverSideMoveBytes
		}
		stats.mu.RUnlock()
	}
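
These summed counters are what the `core/stats` remote control command returns, so the fix above makes the queue and server-side fields visible there. Below is a minimal sketch of reading them over the rc HTTP API, assuming rclone is listening on the default `localhost:5572` (e.g. started with `rclone rcd --rc-no-auth`) and that the JSON keys match the field names above:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// core/stats takes a POST with a JSON body (empty here).
	resp, err := http.Post("http://localhost:5572/core/stats",
		"application/json", bytes.NewBufferString("{}"))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var stats map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&stats); err != nil {
		log.Fatal(err)
	}

	// The fields this fix adds to the summed output.
	for _, k := range []string{
		"transferQueue", "deletesSize",
		"serverSideCopies", "serverSideCopyBytes",
		"serverSideMoves", "serverSideMoveBytes",
	} {
		fmt.Println(k, "=", stats[k])
	}
}
```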


@@ -16,6 +16,7 @@ import (
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/transform"
"golang.org/x/sync/semaphore"
"golang.org/x/text/unicode/norm"
)
@@ -41,9 +42,10 @@ type March struct {
	NoCheckDest            bool // transfer all objects regardless without checking dst
	NoUnicodeNormalization bool // don't normalize unicode characters in filenames

	// internal state
	srcListDir listDirFn // function to call to list a directory in the src
	dstListDir listDirFn // function to call to list a directory in the dst
	transforms []matchTransformFn
	srcListDir   listDirFn           // function to call to list a directory in the src
	dstListDir   listDirFn           // function to call to list a directory in the dst
	transforms   []matchTransformFn
	newObjectSem *semaphore.Weighted // make sure we don't call too many NewObjects simultaneously
}
// Marcher is called on each match
@@ -78,6 +80,8 @@ func (m *March) init(ctx context.Context) {
	if m.Fdst.Features().CaseInsensitive || ci.IgnoreCaseSync {
		m.transforms = append(m.transforms, strings.ToLower)
	}

	// Only allow ci.Checkers simultaneous calls to NewObject
	m.newObjectSem = semaphore.NewWeighted(int64(ci.Checkers))
}
// srcOrDstKey turns a directory entry into a sort key using the defined transforms.
@@ -461,7 +465,12 @@ func (m *March) processJob(job listDirJob) ([]listDirJob, error) {
				continue
			}
			leaf := path.Base(t.src.Remote())
			if err := m.newObjectSem.Acquire(m.Ctx, 1); err != nil {
				t.dstMatch <- nil
				continue
			}
			dst, err := m.Fdst.NewObject(m.Ctx, path.Join(job.dstRemote, leaf))
			m.newObjectSem.Release(1)
			if err != nil {
				dst = nil
			}
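
The fix above bounds how many goroutines can be inside `NewObject` (and hence inside an `lstat` syscall, which pins an OS thread) at once. Here is a self-contained sketch of the same `golang.org/x/sync/semaphore` pattern, with `os.Stat` standing in for `NewObject` and an assumed file list:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"sync"

	"golang.org/x/sync/semaphore"
)

func main() {
	ctx := context.Background()
	const checkers = 8 // stand-in for --checkers
	sem := semaphore.NewWeighted(int64(checkers))

	var wg sync.WaitGroup
	for _, name := range []string{"a.txt", "b.txt", "c.txt"} {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			// At most `checkers` goroutines get past Acquire at once,
			// so the number of concurrent stat syscalls (each needing
			// its own OS thread) can never exceed that bound.
			if err := sem.Acquire(ctx, 1); err != nil {
				return // context cancelled
			}
			defer sem.Release(1)
			if _, err := os.Stat(name); err != nil {
				fmt.Println(name, ":", err)
			}
		}(name)
	}
	wg.Wait()
}
```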


@@ -1,4 +1,4 @@
package fs

// VersionTag of rclone
var VersionTag = "v1.73.0"
var VersionTag = "v1.73.1"

go.mod

@@ -11,7 +11,7 @@ require (
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3
github.com/Azure/azure-sdk-for-go/sdk/storage/azfile v1.5.3
github.com/Azure/go-ntlmssp v0.0.2-0.20251110135918-10b7b7e7cd26
github.com/FilenCloudDienste/filen-sdk-go v0.0.35
github.com/FilenCloudDienste/filen-sdk-go v0.0.37
github.com/Files-com/files-sdk-go/v3 v3.2.264
github.com/Max-Sum/base32768 v0.0.0-20230304063302-18e6ce5945fd
github.com/a1ex3/zstd-seekable-format-go/pkg v0.10.0
@@ -39,13 +39,13 @@ require (
github.com/dropbox/dropbox-sdk-go-unofficial/v6 v6.0.5
github.com/gabriel-vasile/mimetype v1.4.11
github.com/gdamore/tcell/v2 v2.9.0
github.com/go-chi/chi/v5 v5.2.3
github.com/go-chi/chi/v5 v5.2.5
github.com/go-darwin/apfs v0.0.0-20211011131704-f84b94dbf348
github.com/go-git/go-billy/v5 v5.6.2
github.com/golang-jwt/jwt/v5 v5.3.0
github.com/google/uuid v1.6.0
github.com/hanwen/go-fuse/v2 v2.9.0
github.com/internxt/rclone-adapter v0.0.0-20260130171252-c3c6ebb49276
github.com/internxt/rclone-adapter v0.0.0-20260213125353-6f59c89fcb7c
github.com/jcmturner/gokrb5/v8 v8.4.4
github.com/jlaffaye/ftp v0.2.1-0.20240918233326-1b970516f5d3
github.com/josephspurrier/goversioninfo v1.5.0

go.sum

@@ -61,8 +61,8 @@ github.com/AzureAD/microsoft-authentication-library-for-go v1.6.0 h1:XRzhVemXdgv
github.com/AzureAD/microsoft-authentication-library-for-go v1.6.0/go.mod h1:HKpQxkWaGLJ+D/5H8QRpyQXA1eKjxkFlOMwck5+33Jk=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/FilenCloudDienste/filen-sdk-go v0.0.35 h1:geuYpD/1ZXSp1H3kdW7si+KRUIrHHqM1kk8lqoA8Y9M=
github.com/FilenCloudDienste/filen-sdk-go v0.0.35/go.mod h1:0cBhKXQg49XbKZZfk5TCDa3sVLP+xMxZTWL+7KY0XR0=
github.com/FilenCloudDienste/filen-sdk-go v0.0.37 h1:W8S9TrAyZ4//3PXsU6+Bi+fe/6uIL986GyS7PVzIDL4=
github.com/FilenCloudDienste/filen-sdk-go v0.0.37/go.mod h1:0cBhKXQg49XbKZZfk5TCDa3sVLP+xMxZTWL+7KY0XR0=
github.com/Files-com/files-sdk-go/v3 v3.2.264 h1:lMHTplAYI9FtmCo/QOcpRxmPA5REVAct1r2riQmDQKw=
github.com/Files-com/files-sdk-go/v3 v3.2.264/go.mod h1:wGqkOzRu/ClJibvDgcfuJNAqI2nLhe8g91tPlDKRCdE=
github.com/IBM/go-sdk-core/v5 v5.18.5 h1:g0JRl3sYXJczB/yuDlrN6x22LJ6jIxhp0Sa4ARNW60c=
@@ -280,8 +280,8 @@ github.com/gin-contrib/sse v1.0.0 h1:y3bT1mUWUxDpW4JLQg/HnTqV4rozuW4tC9eFKTxYI9E
github.com/gin-contrib/sse v1.0.0/go.mod h1:zNuFdwarAygJBht0NTKiSi3jRf6RbqeILZ9Sp6Slhe0=
github.com/gin-gonic/gin v1.10.0 h1:nTuyha1TYqgedzytsKYqna+DfLos46nTv2ygFy86HFU=
github.com/gin-gonic/gin v1.10.0/go.mod h1:4PMNQiOhvDRa013RKVbsiNwoyezlm2rm0uX/T7kzp5Y=
github.com/go-chi/chi/v5 v5.2.3 h1:WQIt9uxdsAbgIYgid+BpYc+liqQZGMHRaUwp0JUcvdE=
github.com/go-chi/chi/v5 v5.2.3/go.mod h1:L2yAIGWB3H+phAw1NxKwWM+7eUH/lU8pOMm5hHcoops=
github.com/go-chi/chi/v5 v5.2.5 h1:Eg4myHZBjyvJmAFjFvWgrqDTXFyOzjj7YIm3L3mu6Ug=
github.com/go-chi/chi/v5 v5.2.5/go.mod h1:X7Gx4mteadT3eDOMTsXzmI4/rwUpOwBHLpAfupzFJP0=
github.com/go-darwin/apfs v0.0.0-20211011131704-f84b94dbf348 h1:JnrjqG5iR07/8k7NqrLNilRsl3s1EPRQEGvbPyOce68=
github.com/go-darwin/apfs v0.0.0-20211011131704-f84b94dbf348/go.mod h1:Czxo/d1g948LtrALAZdL04TL/HnkopquAjxYUuI02bo=
github.com/go-git/go-billy/v5 v5.6.2 h1:6Q86EsPXMa7c3YZ3aLAQsMA0VlWmy43r6FHqa/UNbRM=
@@ -423,8 +423,8 @@ github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyf
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/internxt/rclone-adapter v0.0.0-20260130171252-c3c6ebb49276 h1:PTJPYovznNqc9t/9MjvtqhrgEVC9OiK75ZPL6hqm6gM=
github.com/internxt/rclone-adapter v0.0.0-20260130171252-c3c6ebb49276/go.mod h1:vdPya4AIcDjvng4ViaAzqjegJf0VHYpYHQguFx5xBp0=
github.com/internxt/rclone-adapter v0.0.0-20260213125353-6f59c89fcb7c h1:r+KtxPyrhsYeNbsfeqTfEM8xRdwgV6LuNhLZxpXecb4=
github.com/internxt/rclone-adapter v0.0.0-20260213125353-6f59c89fcb7c/go.mod h1:vdPya4AIcDjvng4ViaAzqjegJf0VHYpYHQguFx5xBp0=
github.com/jcmturner/aescts/v2 v2.0.0 h1:9YKLH6ey7H4eDBXW8khjYslgyqG2xZikXP0EQFKrle8=
github.com/jcmturner/aescts/v2 v2.0.0/go.mod h1:AiaICIRyfYg35RUkr8yESTqvSy7csK90qZ5xfvvsoNs=
github.com/jcmturner/dnsutils/v2 v2.0.0 h1:lltnkeZGL0wILNvrNiVCR6Ro5PGU/SeBvVO/8c/iPbo=


@@ -159,18 +159,30 @@ func (p *Pacer) beginCall(limitConnections bool) {
	// XXX ms later we put another in. We could do this with a
	// Ticker more accurately, but then we'd have to work out how
	// not to run it when it wasn't needed
	<-p.pacer
	p.mu.Lock()
	sleepTime := p.state.SleepTime
	p.mu.Unlock()
	if sleepTime > 0 {
		<-p.pacer
		// Re-read the sleep time as it may be stale
		// after waiting for the pacer token
		p.mu.Lock()
		sleepTime = p.state.SleepTime
		p.mu.Unlock()
		// Restart the timer
		go func(t time.Duration) {
			time.Sleep(t)
			p.pacer <- struct{}{}
		}(sleepTime)
	}
	if limitConnections {
		<-p.connTokens
	}
	p.mu.Lock()
	// Restart the timer
	go func(t time.Duration) {
		time.Sleep(t)
		p.pacer <- struct{}{}
	}(p.state.SleepTime)
	p.mu.Unlock()
}
// endCall implements the pacing algorithm
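
A minimal sketch of the pattern this change fixes, using a toy single-token pacer rather than rclone's actual `Pacer` type: the token is only taken when pacing is active, and the sleep time is re-read after the wait so a herd of queued callers does not drain at the stale (possibly very long) rate:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type pacer struct {
	mu        sync.Mutex
	sleepTime time.Duration
	tokens    chan struct{}
}

func newPacer() *pacer {
	p := &pacer{tokens: make(chan struct{}, 1)}
	p.tokens <- struct{}{} // one token available initially
	return p
}

func (p *pacer) beginCall() {
	p.mu.Lock()
	sleep := p.sleepTime
	p.mu.Unlock()
	if sleep > 0 { // short circuit when no pacing is needed
		<-p.tokens // take the token only when pacing is active
		// Re-read after the (possibly long) wait: other calls may
		// have decayed the sleep time while we were queued.
		p.mu.Lock()
		sleep = p.sleepTime
		p.mu.Unlock()
		// Return the token after the *fresh* delay, not the stale one.
		time.AfterFunc(sleep, func() { p.tokens <- struct{}{} })
	}
}

func main() {
	p := newPacer()
	p.sleepTime = 10 * time.Millisecond
	start := time.Now()
	p.beginCall()
	p.beginCall() // waits ~10ms for the token to come back
	fmt.Println("elapsed:", time.Since(start).Round(time.Millisecond))
}
```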


@@ -367,6 +367,43 @@ func TestCallMaxConnectionsRecursiveDeadlock(t *testing.T) {
	assert.Equal(t, errFoo, err)
}

func TestCallMaxConnectionsRecursiveDeadlock2(t *testing.T) {
	p := New(CalculatorOption(NewDefault(MinSleep(1*time.Millisecond), MaxSleep(2*time.Millisecond))))
	p.SetMaxConnections(1)
	dp := &dummyPaced{retry: false}
	wg := new(sync.WaitGroup)
	// Normal
	for range 100 {
		wg.Add(1)
		go func() {
			defer wg.Done()
			err := p.Call(func() (bool, error) {
				// check we have taken the connection token
				assert.Equal(t, 0, len(p.connTokens))
				return false, nil
			})
			assert.NoError(t, err)
		}()
		// Now attempt a recursive call
		wg.Add(1)
		go func() {
			defer wg.Done()
			err := p.Call(func() (bool, error) {
				// check we have taken the connection token
				assert.Equal(t, 0, len(p.connTokens))
				// Do recursive call
				return false, p.Call(dp.fn)
			})
			assert.Equal(t, errFoo, err)
		}()
	}
	// Tidy up
	wg.Wait()
}

func TestRetryAfterError_NonNilErr(t *testing.T) {
	orig := errors.New("test failure")
	dur := 2 * time.Second


@@ -218,12 +218,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20260130
// Output: stories/The Quick Brown Fox!-20260217
```
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2026-01-30 0852PM
// Output: stories/The Quick Brown Fox!-2026-02-17 0454PM
```
```console

rclone.1 generated

@@ -15,7 +15,7 @@
. ftr VB CB
. ftr VBI CBI
.\}
.TH "rclone" "1" "Jan 30, 2026" "User Manual" ""
.TH "rclone" "1" "Feb 17, 2026" "User Manual" ""
.hy
.SH NAME
.PP
@@ -6292,14 +6292,14 @@ rclone convmv \[dq]stories/The Quick Brown Fox!.txt\[dq] --name-transform \[dq]a
.nf
\f[C]
rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{YYYYMMDD}\[dq]
// Output: stories/The Quick Brown Fox!-20260130
// Output: stories/The Quick Brown Fox!-20260217
\f[R]
.fi
.IP
.nf
\f[C]
rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{macfriendlytime}\[dq]
// Output: stories/The Quick Brown Fox!-2026-01-30 0825PM
// Output: stories/The Quick Brown Fox!-2026-02-17 0451PM
\f[R]
.fi
.IP
@@ -6832,6 +6832,20 @@ incompatible with \f[V]--urls\f[R].
This will do \f[V]--transfers\f[R] copies in parallel.
Note that if \f[V]--auto-filename\f[R] is desired for all URLs then a
file with only URLs and no filename can be used.
.PP
Each FILENAME in the CSV file can start with a relative path which will
be appended to the destination path provided at the command line.
For example, running the command shown above with the following CSV file
will write two files to the destination:
\f[V]remote:dir/local/path/bar.json\f[R] and
\f[V]remote:dir/another/local/directory/qux.json\f[R]
.IP
.nf
\f[C]
https://example.org/foo/bar.json,local/path/bar.json
https://example.org/qux/baz.json,another/local/directory/qux.json
\f[R]
.fi
.SS Troubleshooting
.PP
If you can\[aq]t get \f[V]rclone copyurl\f[R] to work then here are some
@@ -29878,7 +29892,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.73.0\[dq])
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.73.1\[dq])
\f[R]
.fi
.SS Performance
@@ -30362,9 +30376,11 @@ Backend-only flags (these can be set in the config file also).
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
--filelu-chunk-size SizeSuffix Chunk size to use for uploading. Used for multipart uploads (default 64Mi)
--filelu-description string Description of the remote
--filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
--filelu-key string Your FileLu Rclone key from My Account
--filelu-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size (default 500Mi)
--filen-api-key string API Key for your Filen account (obscured)
--filen-auth-version string Authentication Version (internal use only)
--filen-base-folder-uuid string UUID of Account Root Directory (internal use only)
@@ -33145,19 +33161,23 @@ The following backends have known issues that need more investigation:
\f[V]TestBisyncRemoteRemote/normalization\f[R] (https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
.RE
.IP \[bu] 2
\f[V]TestSeafile\f[R] (\f[V]seafile\f[R])
\f[V]TestInternxt\f[R] (\f[V]internxt\f[R])
.RS 2
.IP \[bu] 2
\f[V]TestBisyncLocalRemote/volatile\f[R] (https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafile-1.txt)
\f[V]TestBisyncLocalRemote/all_changed\f[R] (https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt)
.IP \[bu] 2
\f[V]TestBisyncLocalRemote/ext_paths\f[R] (https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt)
.IP \[bu] 2
\f[V]TestBisyncLocalRemote/max_delete_path1\f[R] (https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt)
.IP \[bu] 2
\f[V]TestBisyncRemoteRemote/basic\f[R] (https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt)
.IP \[bu] 2
\f[V]TestBisyncRemoteRemote/concurrent\f[R] (https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt)
.IP \[bu] 2
5 more (https://pub.rclone.org/integration-tests/current/)
.RE
.IP \[bu] 2
\f[V]TestSeafileV6\f[R] (\f[V]seafile\f[R])
.RS 2
.IP \[bu] 2
\f[V]TestBisyncLocalRemote/volatile\f[R] (https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafileV6-1.txt)
.RE
.IP \[bu] 2
Updated: 2026-01-30-010015
Updated: 2026-02-17-010016
.PP
The following backends either have not been tested recently or have
known issues that are deemed unfixable for the time being:
@@ -56514,6 +56534,34 @@ Required: true
.SS Advanced options
.PP
Here are the Advanced options specific to filelu (FileLu Cloud Storage).
.SS --filelu-upload-cutoff
.PP
Cutoff for switching to chunked upload.
Any files larger than this will be uploaded in chunks of chunk_size.
.PP
Properties:
.IP \[bu] 2
Config: upload_cutoff
.IP \[bu] 2
Env Var: RCLONE_FILELU_UPLOAD_CUTOFF
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
Default: 500Mi
.SS --filelu-chunk-size
.PP
Chunk size to use for uploading.
Used for multipart uploads.
.PP
Properties:
.IP \[bu] 2
Config: chunk_size
.IP \[bu] 2
Env Var: RCLONE_FILELU_CHUNK_SIZE
.IP \[bu] 2
Type: SizeSuffix
.IP \[bu] 2
Default: 64Mi
.SS --filelu-encoding
.PP
The encoding for the backend.
@@ -86814,6 +86862,71 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: Return an error based on option value.
.SH Changelog
.SS v1.73.1 - 2026-02-17
.PP
See commits (https://github.com/rclone/rclone/compare/v1.73.0...v1.73.1)
.IP \[bu] 2
Bug Fixes
.RS 2
.IP \[bu] 2
accounting: Fix missing server side stats from core/stats rc (Nick
Craig-Wood)
.IP \[bu] 2
build
.RS 2
.IP \[bu] 2
Fix CVE-2025-68121 by updating go to 1.25.7 or later (Nick Craig-Wood)
.IP \[bu] 2
Bump github.com/go-chi/chi/v5 from 5.2.3 to 5.2.5 to fix GO-2026-4316
(albertony)
.RE
.IP \[bu] 2
docs: Extend copyurl docs with an example of CSV FILENAMEs starting
with a path (Jack Kelly)
.IP \[bu] 2
march: Fix runtime: program exceeds 10000-thread limit (Nick Craig-Wood)
.IP \[bu] 2
pacer
.RS 2
.IP \[bu] 2
Fix deadlock between pacer token and --max-connections (Nick Craig-Wood)
.IP \[bu] 2
Re-read the sleep time as it may be stale (Nick Craig-Wood)
.RE
.RE
.IP \[bu] 2
Drime
.RS 2
.IP \[bu] 2
Fix files and directories being created in the default workspace (Nick
Craig-Wood)
.RE
.IP \[bu] 2
Filelu
.RS 2
.IP \[bu] 2
Avoid buffering entire file in memory (kingston125)
.IP \[bu] 2
Add multipart upload support with configurable cutoff (kingston125)
.RE
.IP \[bu] 2
Filen
.RS 2
.IP \[bu] 2
Fix 32 bit targets not being able to list directories (Enduriel)
.IP \[bu] 2
Fix potential panic in case of error during upload (Enduriel)
.RE
.IP \[bu] 2
Internxt
.RS 2
.IP \[bu] 2
Implement re-login under refresh logic, improve retry logic (José
Zúniga)
.RE
.IP \[bu] 2
S3
.RS 2
.IP \[bu] 2
Set list_version to 2 for FileLu S3 configuration (kingston125)
.RE
.SS v1.73.0 - 2026-01-30
.PP
See commits (https://github.com/rclone/rclone/compare/v1.72.0...v1.73.0)