mirror of https://github.com/gilbertchen/duplicacy synced 2025-12-06 00:03:38 +00:00

Compare commits


31 Commits

Author SHA1 Message Date
Gilbert Chen
504d07bd51 Bump version to 2.3.0 2019-11-25 15:45:41 -05:00
Gilbert Chen
0abb4099f6 Fixed test errors -- parse test flags in one place 2019-11-25 15:44:03 -05:00
Gilbert Chen
694494ea54 Throw an error, instead of a warning, if pre/post script fails 2019-11-24 22:38:29 -05:00
Gilbert Chen
165152493c For the check command, -tabular should imply -all just like -stats 2019-11-24 20:45:05 -05:00
Gilbert Chen
e02041f4ed Increase the number of retries for the b2 backend from 10 to 15
Retrying 10 times means a retry window of about 5 minutes, which might be too
short.  15 corresponds to about 10 minutes.
2019-11-23 15:28:03 -05:00
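The retry limit can still be overridden through the DUPLICACY_B2_RETRIES environment variable, which NewB2Client reads (see the B2 client hunk below); a hedged example, with an arbitrary value:
    export DUPLICACY_B2_RETRIES=30
    duplicacy backup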
Gilbert Chen
a99f059b52 Allow a custom location for the filters file
You can now add a key 'filters' in the preferences file that points to the
path of the filters file.  If this key is not found in the preferences,
the default location '.duplicacy/filters' is used.

There is a new option '-filters' for the set command that sets this key in
the preferences, but you can also edit the preferences file directly.
2019-11-23 15:23:26 -05:00
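A minimal sketch of the new key in the .duplicacy/preferences file; every value other than the 'filters' entry is a placeholder shown only for context:
    [
        {
            "name": "default",
            "id": "my-documents",
            "storage": "b2://my-bucket",
            "filters": "/etc/duplicacy/filters"
        }
    ]
The same key can be written with the new flag described below, e.g. `duplicacy set -filters /etc/duplicacy/filters`.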
Gilbert Chen
f022a6f684 Fixed build errors in tests 2019-11-22 21:17:17 -05:00
Gilbert Chen
791c61eecb Fixed missing format parameters 2019-11-22 20:32:19 -05:00
gilbertchen
6ad27adaea Merge pull request #578 from gboudreau/vss-catalina
Bugfix: allow -vss usage on Mac OS Catalina
2019-11-22 16:46:31 -05:00
Gilbert Chen
9abfbe1ee0 Update pkg/sftp to 1.10.1
The old version has a bug where a connection closed by the server may cause
a deadlock due to a full channel buffer.
2019-11-21 23:36:17 -05:00
Gilbert Chen
b32c3b2cd5 If a symlink is a directory, match it against the patterns as a directory 2019-11-21 23:10:54 -05:00
Gilbert Chen
9baafdafa2 Remove a log message meant for debugging only 2019-11-21 21:23:31 -05:00
Gilbert Chen
ca7d927840 Use joinPath instead of filepath.Join to generate UNC paths
This fix probably isn't necessary since filepath.Join can now produce UNC
paths too with the latest versions of Go.  However, we still want to keep
it for consistency.
2019-11-21 14:56:31 -05:00
Guillaume Boudreau
0ca9cd476e Bugfix: allow -vss usage on Mac OS Catalina
Use `tmutil listlocalsnapshots` to find the snapshot name we need to use; fall back to `com.apple.TimeMachine.SNAPSHOT_DATE` (same as before) if we can't find it.
2019-10-28 11:55:15 -04:00
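A hedged illustration of the lookup (the command output shown here is illustrative and varies by macOS version):
    $ tmutil snapshot
    Created local snapshot with date: 2019-10-28-115515
    $ tmutil listlocalsnapshots .
    com.apple.TimeMachine.2019-10-28-115515.local
The fix matches the listlocalsnapshots line containing the snapshot date, so the `.local` suffix that Catalina appends to snapshot names is picked up automatically, with `com.apple.TimeMachine.SNAPSHOT_DATE` kept as the fallback.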
gilbertchen
abf9a94fc9 Merge pull request #575 from gilbertchen/rsa_encryption
Implement RSA encryption
2019-10-12 11:14:29 -04:00
Gilbert Chen
9a0d60ca84 Store the public key in the config to ensure a one-key policy.
Also make sure that RSA encryption works with the copy command.
2019-09-23 12:53:43 -04:00
Gilbert Chen
90833f9d86 Implement RSA encryption
This is to support public key encryption in the backup operation.  You can use
the -key option to supply the public key to the backup command, and then the
same option to supply the private key when restoring a previous revision.

The storage must be encrypted for this to work.
2019-09-20 14:19:18 -04:00
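A hedged end-to-end sketch of the feature as it appears in the final diff below, where the public key is supplied at init/add time and the private key at restore time; file names and the storage URL are placeholders. The key loader shown further down expects a PKIX 'PUBLIC KEY' PEM block and a PKCS#1 or PKCS#8 private key, which older OpenSSL releases produce with the commands shown:
    # generate a key pair; the passphrase is requested again when the private key is loaded
    openssl genrsa -aes256 -out private.pem 2048
    openssl rsa -in private.pem -pubout -out public.pem

    # the storage must be encrypted (-e) for -key to be accepted
    duplicacy init -e -key public.pem my-documents b2://my-bucket
    duplicacy backup

    # restoring needs the private key (and its passphrase)
    duplicacy restore -r 1 -key private.pem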
Gilbert Chen
58387c0951 Bump version to 2.2.3 2019-06-28 10:06:55 -04:00
gilbertchen
81bb188211 Merge pull request #570 from philband/fix-b2_findbucket_401
Bugfix [B2]: Add BucketName to API call in FindBucket function
2019-06-28 09:53:36 -04:00
Philipp Bandow
5821cad8c5 Add BucketName to API call in FindBucket function 2019-06-28 12:15:45 +02:00
Gilbert Chen
662805fbbd Update ACKNOWLEDGEMENTS.md 2019-06-25 22:59:22 -04:00
Gilbert Chen
fc35ddf7d1 Bump version to 2.2.2 2019-06-20 22:22:41 -04:00
gilbertchen
6efcd37c5c Merge pull request #562 from gilbertchen/azure_retry
Retry on broken pipe in Azure backend
2019-06-20 12:14:48 -04:00
gilbertchen
58558b8a2f Merge pull request #566 from TheBestPessimist/patch-1
Update the issue template
2019-06-20 12:14:08 -04:00
Gilbert Chen
045be3905b Better handling of B2 authorization failures
This commit fixed two issues with Backblaze B2 authorization:
* every thread may call b2_authorize_account at the same time when there
are 401 errors
* if B2 has a login outage, then all threads will call b2_authorize_account
repeatedly without delay

A simple solution is to limit b2_authorize_account calls to once every
30 seconds regardless of how many threads there are.  If the call to
b2_authorize_account is not allowed, the random exponential backoff is
performed instead.
2019-06-13 22:43:07 -04:00
Gilbert Chen
4da7f7b6f9 Check -files may download a chunk multiple times
This commit fixed a bug that caused 'check -files' to download the same chunk
multiple times if it is shared by multiple small files.
2019-06-13 14:47:21 -04:00
Gilbert Chen
41668d4bbd Update dependency github.com/gilbertchen/go.dbus 2019-06-07 15:17:46 -04:00
Gilbert Chen
9d4ac34f4b Don't compare hashes of empty files in the diff command
Empty files may or may not have a hash depending on whether the -hash option is used
during backup.
2019-06-06 12:35:34 -04:00
TheBestPessimist
282fe4edd2 Update the issue template 2019-05-24 21:10:14 +03:00
TheBestPessimist
33c71ca5f8 Update the issue template
Use the new template format and ask people to use the forum **more thoroughly**.
2019-05-24 16:19:59 +03:00
Gilbert Chen
8e9caea201 Retry on broken pipe in Azure backend
Azure sometimes disconnects the connection randomly when uploading files.  The
returned error is 'broken pipe', but this error is wrapped deep inside multiple
levels of errors, so we have to check the error string instead.
2019-05-07 22:35:51 -04:00
23 changed files with 480 additions and 92 deletions

View File

@@ -1,5 +1,17 @@
Please submit an issue for bug reports or feature requests. If you have any questions please post them on https://forum.duplicacy.com.
---
name: Please use the official forum
about: Please use the official forum instead of Github
title: 'Please use the official forum'
labels: ''
assignees: ''
When you're reporting a bug, please specify the OS, version, command line arguments, or any info that you think is helpful for the diagnosis. If Duplicacy reports an error, please post the program output here.
---
Note that this repository hosts the CLI version of Duplicacy only. If you're reporting anything related to the GUI version, please visit https://forum.duplicacy.com.
Please **use the [Duplicacy Forum](https://forum.duplicacy.com/)** when reporting bugs, making feature requests, asking for help or simply praising Duplicacy for its ease of use.
We strongly encourage you to create an account on the forum and use that platform for discussion as there is a higher chance that someone there will talk to you.
There is a handful of people watching the Github Issues and we are in the process of moving **all** of them to the forum as well. Most likely you will not receive an answer here or it will be very slow and you will be pointed to the forum.
We have already created a comprehensive [Guide](https://forum.duplicacy.com/t/duplicacy-user-guide/1197), and a [How-To](https://forum.duplicacy.com/c/how-to) category which stores more wisdom than these issues on Github.

View File

@@ -14,3 +14,4 @@ Duplicacy is based on the following open source projects:
|https://github.com/pcwizz/xattr | BSD-2-Clause |
|https://github.com/minio/blake2b-simd | Apache-2.0 |
|https://github.com/go-ole/go-ole | MIT |
https://github.com/ncw/swift | MIT |

Gopkg.lock generated
View File

@@ -71,7 +71,7 @@
branch = "master"
name = "github.com/gilbertchen/go.dbus"
packages = ["."]
revision = "9e442e6378618c083fd3b85b703ffd202721fb17"
revision = "8591994fa32f1dbe3fa9486bc6f4d4361ac16649"
[[projects]]
branch = "master"
@@ -153,8 +153,8 @@
[[projects]]
name = "github.com/pkg/sftp"
packages = ["."]
revision = "98203f5a8333288eb3163b7c667d4260fe1333e9"
version = "1.0.0"
revision = "3edd153f213d8d4191a0ee4577c61cca19436632"
version = "v1.10.1"
[[projects]]
name = "github.com/satori/go.uuid"
@@ -225,6 +225,6 @@
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
inputs-digest = "eff5ae2d9507f0d62cd2e5bdedebb5c59d64f70f476b087c01c35d4a5e1be72d"
inputs-digest = "8636a9db1eb54be5374f9914687693122efdde511f11c47d10c22f9e245e7f70"
solver-name = "gps-cdcl"
solver-version = 1

View File

@@ -75,7 +75,7 @@
[[constraint]]
name = "github.com/pkg/sftp"
version = "1.0.0"
version = "1.10.1"
[[constraint]]
branch = "master"

View File

@@ -201,13 +201,24 @@ func runScript(context *cli.Context, storageName string, phase string) bool {
}
if err != nil {
duplicacy.LOG_WARN("SCRIPT_ERROR", "Failed to run script: %v", err)
duplicacy.LOG_ERROR("SCRIPT_ERROR", "Failed to run %s script: %v", script, err)
return false
}
return true
}
func loadRSAPrivateKey(keyFile string, preference *duplicacy.Preference, backupManager *duplicacy.BackupManager, resetPasswords bool) {
if keyFile == "" {
return
}
prompt := fmt.Sprintf("Enter the passphrase for %s:", keyFile)
passphrase := duplicacy.GetPassword(*preference, "rsa_passphrase", prompt, false, resetPasswords)
backupManager.LoadRSAPrivateKey(keyFile, passphrase)
duplicacy.SavePassword(*preference, "rsa_passphrase", passphrase)
}
func initRepository(context *cli.Context) {
configRepository(context, true)
}
@@ -319,6 +330,11 @@ func configRepository(context *cli.Context, init bool) {
if preference.Encrypted {
prompt := fmt.Sprintf("Enter storage password for %s:", preference.StorageURL)
storagePassword = duplicacy.GetPassword(preference, "password", prompt, false, true)
} else {
if context.String("key") != "" {
duplicacy.LOG_ERROR("STORAGE_CONFIG", "RSA encryption can't be enabled with an unencrypted storage")
return
}
}
existingConfig, _, err := duplicacy.DownloadConfig(storage, storagePassword)
@@ -434,7 +450,7 @@ func configRepository(context *cli.Context, init bool) {
iterations = duplicacy.CONFIG_DEFAULT_ITERATIONS
}
duplicacy.ConfigStorage(storage, iterations, compressionLevel, averageChunkSize, maximumChunkSize,
minimumChunkSize, storagePassword, otherConfig, bitCopy)
minimumChunkSize, storagePassword, otherConfig, bitCopy, context.String("key"))
}
duplicacy.Preferences = append(duplicacy.Preferences, preference)
@@ -532,7 +548,13 @@ func setPreference(context *cli.Context) {
newPreference.DoNotSavePassword = triBool.IsTrue()
}
newPreference.NobackupFile = context.String("nobackup-file")
if context.String("nobackup-file") != "" {
newPreference.NobackupFile = context.String("nobackup-file")
}
if context.String("filters") != "" {
newPreference.FiltersFile = context.String("filters")
}
key := context.String("key")
value := context.String("value")
@@ -715,7 +737,7 @@ func backupRepository(context *cli.Context) {
uploadRateLimit := context.Int("limit-rate")
enumOnly := context.Bool("enum-only")
storage.SetRateLimits(0, uploadRateLimit)
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, preference.NobackupFile)
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, preference.NobackupFile, preference.FiltersFile)
duplicacy.SavePassword(*preference, "password", password)
backupManager.SetupSnapshotCache(preference.Name)
@@ -783,10 +805,8 @@ func restoreRepository(context *cli.Context) {
}
patterns = append(patterns, pattern)
}
patterns = duplicacy.ProcessFilterLines(patterns, make([]string, 0))
duplicacy.LOG_DEBUG("REGEX_DEBUG", "There are %d compiled regular expressions stored", len(duplicacy.RegexMap))
@@ -794,9 +814,11 @@ func restoreRepository(context *cli.Context) {
duplicacy.LOG_INFO("SNAPSHOT_FILTER", "Loaded %d include/exclude pattern(s)", len(patterns))
storage.SetRateLimits(context.Int("limit-rate"), 0)
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, preference.NobackupFile)
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, preference.NobackupFile, preference.FiltersFile)
duplicacy.SavePassword(*preference, "password", password)
loadRSAPrivateKey(context.String("key"), preference, backupManager, false)
backupManager.SetupSnapshotCache(preference.Name)
backupManager.Restore(repository, revision, true, quickMode, threads, overwrite, deleteMode, setOwner, showStatistics, patterns)
@@ -834,7 +856,7 @@ func listSnapshots(context *cli.Context) {
tag := context.String("t")
revisions := getRevisions(context)
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, preference.NobackupFile)
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, "", "")
duplicacy.SavePassword(*preference, "password", password)
id := preference.SnapshotID
@@ -847,6 +869,9 @@ func listSnapshots(context *cli.Context) {
showFiles := context.Bool("files")
showChunks := context.Bool("chunks")
// list doesn't need to decrypt file chunks; but we need -key here so we can reset the passphrase for the private key
loadRSAPrivateKey(context.String("key"), preference, backupManager, resetPassword)
backupManager.SetupSnapshotCache(preference.Name)
backupManager.SnapshotManager.ListSnapshots(id, revisions, tag, showFiles, showChunks)
@@ -882,9 +907,11 @@ func checkSnapshots(context *cli.Context) {
tag := context.String("t")
revisions := getRevisions(context)
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, preference.NobackupFile)
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, "", "")
duplicacy.SavePassword(*preference, "password", password)
loadRSAPrivateKey(context.String("key"), preference, backupManager, false)
id := preference.SnapshotID
if context.Bool("all") {
id = ""
@@ -937,9 +964,11 @@ func printFile(context *cli.Context) {
snapshotID = context.String("id")
}
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, preference.NobackupFile)
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, "", "")
duplicacy.SavePassword(*preference, "password", password)
loadRSAPrivateKey(context.String("key"), preference, backupManager, false)
backupManager.SetupSnapshotCache(preference.Name)
file := ""
@@ -993,11 +1022,13 @@ func diff(context *cli.Context) {
}
compareByHash := context.Bool("hash")
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, preference.NobackupFile)
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, "", "")
duplicacy.SavePassword(*preference, "password", password)
loadRSAPrivateKey(context.String("key"), preference, backupManager, false)
backupManager.SetupSnapshotCache(preference.Name)
backupManager.SnapshotManager.Diff(repository, snapshotID, revisions, path, compareByHash, preference.NobackupFile)
backupManager.SnapshotManager.Diff(repository, snapshotID, revisions, path, compareByHash, preference.NobackupFile, preference.FiltersFile)
runScript(context, preference.Name, "post")
}
@@ -1036,7 +1067,7 @@ func showHistory(context *cli.Context) {
revisions := getRevisions(context)
showLocalHash := context.Bool("hash")
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, preference.NobackupFile)
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, "", "")
duplicacy.SavePassword(*preference, "password", password)
backupManager.SetupSnapshotCache(preference.Name)
@@ -1099,7 +1130,7 @@ func pruneSnapshots(context *cli.Context) {
os.Exit(ArgumentExitCode)
}
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, preference.NobackupFile)
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password, "", "")
duplicacy.SavePassword(*preference, "password", password)
backupManager.SetupSnapshotCache(preference.Name)
@@ -1139,10 +1170,12 @@ func copySnapshots(context *cli.Context) {
sourcePassword = duplicacy.GetPassword(*source, "password", "Enter source storage password:", false, false)
}
sourceManager := duplicacy.CreateBackupManager(source.SnapshotID, sourceStorage, repository, sourcePassword, source.NobackupFile)
sourceManager := duplicacy.CreateBackupManager(source.SnapshotID, sourceStorage, repository, sourcePassword, "", "")
sourceManager.SetupSnapshotCache(source.Name)
duplicacy.SavePassword(*source, "password", sourcePassword)
loadRSAPrivateKey(context.String("key"), source, sourceManager, false)
_, destination := getRepositoryPreference(context, context.String("to"))
if destination.Name == source.Name {
@@ -1172,7 +1205,7 @@ func copySnapshots(context *cli.Context) {
destinationStorage.SetRateLimits(0, context.Int("upload-limit-rate"))
destinationManager := duplicacy.CreateBackupManager(destination.SnapshotID, destinationStorage, repository,
destinationPassword, destination.NobackupFile)
destinationPassword, "", "")
duplicacy.SavePassword(*destination, "password", destinationPassword)
destinationManager.SetupSnapshotCache(destination.Name)
@@ -1350,6 +1383,11 @@ func main() {
Usage: "initialize a new repository at the specified path rather than the current working directory",
Argument: "<path>",
},
cli.StringFlag{
Name: "key",
Usage: "the RSA public key to encrypt file chunks",
Argument: "<public key>",
},
},
Usage: "Initialize the storage if necessary and the current directory as the repository",
ArgsUsage: "<snapshot id> <storage url>",
@@ -1457,6 +1495,11 @@ func main() {
Usage: "restore from the specified storage instead of the default one",
Argument: "<storage name>",
},
cli.StringFlag{
Name: "key",
Usage: "the RSA private key to decrypt file chunks",
Argument: "<private key>",
},
},
Usage: "Restore the repository to a previously saved snapshot",
ArgsUsage: "[--] [pattern] ...",
@@ -1502,6 +1545,11 @@ func main() {
Usage: "retrieve snapshots from the specified storage",
Argument: "<storage name>",
},
cli.StringFlag{
Name: "key",
Usage: "the RSA private key to decrypt file chunks",
Argument: "<private key>",
},
},
Usage: "List snapshots",
ArgsUsage: " ",
@@ -1554,6 +1602,11 @@ func main() {
Usage: "retrieve snapshots from the specified storage",
Argument: "<storage name>",
},
cli.StringFlag{
Name: "key",
Usage: "the RSA private key to decrypt file chunks",
Argument: "<private key>",
},
},
Usage: "Check the integrity of snapshots",
ArgsUsage: " ",
@@ -1577,6 +1630,11 @@ func main() {
Usage: "retrieve the file from the specified storage",
Argument: "<storage name>",
},
cli.StringFlag{
Name: "key",
Usage: "the RSA private key to decrypt file chunks",
Argument: "<private key>",
},
},
Usage: "Print to stdout the specified file, or the snapshot content if no file is specified",
ArgsUsage: "[<file>]",
@@ -1605,6 +1663,11 @@ func main() {
Usage: "retrieve files from the specified storage",
Argument: "<storage name>",
},
cli.StringFlag{
Name: "key",
Usage: "the RSA private key to decrypt file chunks",
Argument: "<private key>",
},
},
Usage: "Compare two snapshots or two revisions of a file",
ArgsUsage: "[<file>]",
@@ -1769,6 +1832,11 @@ func main() {
Usage: "specify the path of the repository (instead of the current working directory)",
Argument: "<path>",
},
cli.StringFlag{
Name: "key",
Usage: "the RSA public key to encrypt file chunks",
Argument: "<public key>",
},
},
Usage: "Add an additional storage to be used for the existing repository",
ArgsUsage: "<storage name> <snapshot id> <storage url>",
@@ -1821,6 +1889,11 @@ func main() {
Usage: "use the specified storage instead of the default one",
Argument: "<storage name>",
},
cli.StringFlag{
Name: "filters",
Usage: "specify the path of the filters file containing include/exclude patterns",
Argument: "<file path>",
},
},
Usage: "Change the options for the default or specified storage",
ArgsUsage: " ",
@@ -1867,6 +1940,11 @@ func main() {
Usage: "number of uploading threads",
Argument: "<n>",
},
cli.StringFlag{
Name: "key",
Usage: "the RSA private key to decrypt file chunks from the source storage",
Argument: "<public key>",
},
},
Usage: "Copy snapshots between compatible storages",
ArgsUsage: " ",
@@ -1981,7 +2059,7 @@ func main() {
app.Name = "duplicacy"
app.HelpName = "duplicacy"
app.Usage = "A new generation cloud backup tool based on lock-free deduplication"
app.Version = "2.2.1" + " (" + GitCommit + ")"
app.Version = "2.3.0" + " (" + GitCommit + ")"
// If the program is interrupted, call the RunAtError function.
c := make(chan os.Signal, 1)

View File

@@ -166,9 +166,21 @@ func (storage *AzureStorage) DownloadFile(threadIndex int, filePath string, chun
// UploadFile writes 'content' to the file at 'filePath'.
func (storage *AzureStorage) UploadFile(threadIndex int, filePath string, content []byte) (err error) {
reader := CreateRateLimitedReader(content, storage.UploadRateLimit/len(storage.containers))
blob := storage.containers[threadIndex].GetBlobReference(filePath)
return blob.CreateBlockBlobFromReader(reader, nil)
tries := 0
for {
reader := CreateRateLimitedReader(content, storage.UploadRateLimit/len(storage.containers))
blob := storage.containers[threadIndex].GetBlobReference(filePath)
err = blob.CreateBlockBlobFromReader(reader, nil)
if err == nil || !strings.Contains(err.Error(), "write: broken pipe") || tries >= 3 {
return err
}
LOG_INFO("AZURE_RETRY", "Connection unexpectedly terminated: %v; retrying", err)
tries++
}
}

View File

@@ -62,6 +62,8 @@ type B2Client struct {
Threads int
MaximumRetries int
TestMode bool
LastAuthorizationTime int64
}
// URL encode the given path but keep the slashes intact
@@ -83,7 +85,7 @@ func NewB2Client(applicationKeyID string, applicationKey string, storageDir stri
storageDir += "/"
}
maximumRetries := 10
maximumRetries := 15
if value, found := os.LookupEnv("DUPLICACY_B2_RETRIES"); found && value != "" {
maximumRetries, _ = strconv.Atoi(value)
LOG_INFO("B2_RETRIES", "Setting maximum retries for B2 to %d", maximumRetries)
@@ -253,8 +255,12 @@ func (client *B2Client) call(threadIndex int, requestURL string, method string,
if requestURL == B2AuthorizationURL {
return nil, nil, 0, fmt.Errorf("Authorization failure")
}
client.AuthorizeAccount(threadIndex)
continue
// Attempt authorization again. If authorization is actually not done, run the random backoff
_, allowed := client.AuthorizeAccount(threadIndex)
if allowed {
continue
}
} else if response.StatusCode == 403 {
if !client.TestMode {
return nil, nil, 0, fmt.Errorf("B2 cap exceeded")
@@ -291,13 +297,18 @@ type B2AuthorizeAccountOutput struct {
DownloadURL string
}
func (client *B2Client) AuthorizeAccount(threadIndex int) (err error) {
func (client *B2Client) AuthorizeAccount(threadIndex int) (err error, allowed bool) {
client.Lock.Lock()
defer client.Lock.Unlock()
// Don't authorize if the previous one was done less than 30 seconds ago
if client.LastAuthorizationTime != 0 && client.LastAuthorizationTime > time.Now().Unix() - 30 {
return nil, false
}
readCloser, _, _, err := client.call(threadIndex, B2AuthorizationURL, http.MethodPost, nil, make(map[string]string))
if err != nil {
return err
return err, true
}
defer readCloser.Close()
@@ -305,7 +316,7 @@ func (client *B2Client) AuthorizeAccount(threadIndex int) (err error) {
output := &B2AuthorizeAccountOutput{}
if err = json.NewDecoder(readCloser).Decode(&output); err != nil {
return err
return err, true
}
// The account id may be different from the application key id so we're getting the account id from the returned
@@ -317,7 +328,9 @@ func (client *B2Client) AuthorizeAccount(threadIndex int) (err error) {
client.DownloadURL = output.DownloadURL
client.IsAuthorized = true
return nil
client.LastAuthorizationTime = time.Now().Unix()
return nil, true
}
type ListBucketOutput struct {
@@ -331,6 +344,7 @@ func (client *B2Client) FindBucket(bucketName string) (err error) {
input := make(map[string]string)
input["accountId"] = client.AccountID
input["bucketName"] = bucketName
url := client.getAPIURL() + "/b2api/v1/b2_list_buckets"

View File

@@ -50,7 +50,7 @@ func TestB2Client(t *testing.T) {
b2Client.TestMode = true
err := b2Client.AuthorizeAccount(0)
err, _ := b2Client.AuthorizeAccount(0)
if err != nil {
t.Errorf("Failed to authorize the b2 account: %v", err)
return

View File

@@ -19,7 +19,7 @@ func CreateB2Storage(accountID string, applicationKey string, bucket string, sto
client := NewB2Client(accountID, applicationKey, storageDir, threads)
err = client.AuthorizeAccount(0)
err, _ = client.AuthorizeAccount(0)
if err != nil {
return nil, err
}

View File

@@ -35,6 +35,7 @@ type BackupManager struct {
config *Config // contains a number of options
nobackupFile string // don't backup directory when this file name is found
filtersFile string // the path to the filters file
}
func (manager *BackupManager) SetDryRun(dryRun bool) {
@@ -44,7 +45,7 @@ func (manager *BackupManager) SetDryRun(dryRun bool) {
// CreateBackupManager creates a backup manager using the specified 'storage'. 'snapshotID' is a unique id to
// identify snapshots created for this repository. 'top' is the top directory of the repository. 'password' is the
// master key which can be nil if encryption is not enabled.
func CreateBackupManager(snapshotID string, storage Storage, top string, password string, nobackupFile string) *BackupManager {
func CreateBackupManager(snapshotID string, storage Storage, top string, password string, nobackupFile string, filtersFile string) *BackupManager {
config, _, err := DownloadConfig(storage, password)
if err != nil {
@@ -67,6 +68,7 @@ func CreateBackupManager(snapshotID string, storage Storage, top string, passwor
config: config,
nobackupFile: nobackupFile,
filtersFile: filtersFile,
}
if IsDebugging() {
@@ -76,6 +78,11 @@ func CreateBackupManager(snapshotID string, storage Storage, top string, passwor
return backupManager
}
// LoadRSAPrivateKey loads the specified private key file for decrypting file chunks
func (manager *BackupManager) LoadRSAPrivateKey(keyFile string, passphrase string) {
manager.config.loadRSAPrivateKey(keyFile, passphrase)
}
// SetupSnapshotCache creates the snapshot cache, which is merely a local storage under the default .duplicacy
// directory
func (manager *BackupManager) SetupSnapshotCache(storageName string) bool {
@@ -103,6 +110,7 @@ func (manager *BackupManager) SetupSnapshotCache(storageName string) bool {
return true
}
// setEntryContent sets the 4 content pointers for each entry in 'entries'. 'offset' indicates the value
// to be added to the StartChunk and EndChunk points, used when intending to append 'entries' to the
// original unchanged entry list.
@@ -176,6 +184,10 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
LOG_DEBUG("BACKUP_PARAMETERS", "top: %s, quick: %t, tag: %s", top, quickMode, tag)
if manager.config.rsaPublicKey != nil && len(manager.config.FileKey) > 0 {
LOG_INFO("BACKUP_KEY", "RSA encryption is enabled" )
}
remoteSnapshot := manager.SnapshotManager.downloadLatestSnapshot(manager.snapshotID)
if remoteSnapshot == nil {
remoteSnapshot = CreateEmptySnapshot(manager.snapshotID)
@@ -188,7 +200,8 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
defer DeleteShadowCopy()
LOG_INFO("BACKUP_INDEXING", "Indexing %s", top)
localSnapshot, skippedDirectories, skippedFiles, err := CreateSnapshotFromDirectory(manager.snapshotID, shadowTop, manager.nobackupFile)
localSnapshot, skippedDirectories, skippedFiles, err := CreateSnapshotFromDirectory(manager.snapshotID, shadowTop,
manager.nobackupFile, manager.filtersFile)
if err != nil {
LOG_ERROR("SNAPSHOT_LIST", "Failed to list the directory %s: %v", top, err)
return false
@@ -760,7 +773,8 @@ func (manager *BackupManager) Restore(top string, revision int, inPlace bool, qu
remoteSnapshot := manager.SnapshotManager.DownloadSnapshot(manager.snapshotID, revision)
manager.SnapshotManager.DownloadSnapshotContents(remoteSnapshot, patterns, true)
localSnapshot, _, _, err := CreateSnapshotFromDirectory(manager.snapshotID, top, manager.nobackupFile)
localSnapshot, _, _, err := CreateSnapshotFromDirectory(manager.snapshotID, top, manager.nobackupFile,
manager.filtersFile)
if err != nil {
LOG_ERROR("SNAPSHOT_LIST", "Failed to list the repository: %v", err)
return false
@@ -1716,6 +1730,7 @@ func (manager *BackupManager) CopySnapshots(otherManager *BackupManager, snapsho
newChunk := otherManager.config.GetChunk()
newChunk.Reset(true)
newChunk.Write(chunk.GetBytes())
newChunk.encryptionVersion = chunk.encryptionVersion
chunkUploader.StartChunk(newChunk, chunkIndex)
totalCopied++
} else {

View File

@@ -227,11 +227,11 @@ func TestBackupManager(t *testing.T) {
time.Sleep(time.Duration(delay) * time.Second)
if testFixedChunkSize {
if !ConfigStorage(storage, 16384, 100, 64*1024, 64*1024, 64*1024, password, nil, false) {
if !ConfigStorage(storage, 16384, 100, 64*1024, 64*1024, 64*1024, password, nil, false, "") {
t.Errorf("Failed to initialize the storage")
}
} else {
if !ConfigStorage(storage, 16384, 100, 64*1024, 256*1024, 16*1024, password, nil, false) {
if !ConfigStorage(storage, 16384, 100, 64*1024, 256*1024, 16*1024, password, nil, false, "") {
t.Errorf("Failed to initialize the storage")
}
}
@@ -239,7 +239,7 @@ func TestBackupManager(t *testing.T) {
time.Sleep(time.Duration(delay) * time.Second)
SetDuplicacyPreferencePath(testDir + "/repository1/.duplicacy")
backupManager := CreateBackupManager("host1", storage, testDir, password, "")
backupManager := CreateBackupManager("host1", storage, testDir, password, "", "")
backupManager.SetupSnapshotCache("default")
SetDuplicacyPreferencePath(testDir + "/repository1/.duplicacy")

View File

@@ -41,7 +41,7 @@ func benchmarkSplit(reader *bytes.Reader, fileSize int64, chunkSize int, compres
if encryption {
key = "0123456789abcdef0123456789abcdef"
}
err := chunk.Encrypt([]byte(key), "")
err := chunk.Encrypt([]byte(key), "", false)
if err != nil {
LOG_ERROR("BENCHMARK_ENCRYPT", "Failed to encrypt the chunk: %v", err)
}

View File

@@ -8,11 +8,13 @@ import (
"bytes"
"compress/zlib"
"crypto/aes"
"crypto/rsa"
"crypto/cipher"
"crypto/hmac"
"crypto/rand"
"crypto/sha256"
"encoding/hex"
"encoding/binary"
"fmt"
"hash"
"io"
@@ -60,11 +62,15 @@ type Chunk struct {
config *Config // Every chunk is associated with a Config object. Which hashing algorithm to use is determined
// by the config
encryptionVersion byte // The version type in the encryption header
}
// Magic word to identify a duplicacy format encrypted file, plus a version number.
var ENCRYPTION_HEADER = "duplicacy\000"
var ENCRYPTION_VERSION_RSA byte = 2
// CreateChunk creates a new chunk.
func CreateChunk(config *Config, bufferNeeded bool) *Chunk {
@@ -170,7 +176,7 @@ func (chunk *Chunk) VerifyID() {
// Encrypt encrypts the plain data stored in the chunk buffer. If derivationKey is not nil, the actual
// encryption key will be HMAC-SHA256(encryptionKey, derivationKey).
func (chunk *Chunk) Encrypt(encryptionKey []byte, derivationKey string) (err error) {
func (chunk *Chunk) Encrypt(encryptionKey []byte, derivationKey string, isSnapshot bool) (err error) {
var aesBlock cipher.Block
var gcm cipher.AEAD
@@ -186,8 +192,17 @@ func (chunk *Chunk) Encrypt(encryptionKey []byte, derivationKey string) (err err
if len(encryptionKey) > 0 {
key := encryptionKey
if len(derivationKey) > 0 {
usingRSA := false
if chunk.config.rsaPublicKey != nil && (!isSnapshot || chunk.encryptionVersion == ENCRYPTION_VERSION_RSA) {
// If the chunk is not a snapshot chunk, we attempt to encrypt it with the RSA public key if there is one
randomKey := make([]byte, 32)
_, err := rand.Read(randomKey)
if err != nil {
return err
}
key = randomKey
usingRSA = true
} else if len(derivationKey) > 0 {
hasher := chunk.config.NewKeyedHasher([]byte(derivationKey))
hasher.Write(encryptionKey)
key = hasher.Sum(nil)
@@ -204,7 +219,21 @@ func (chunk *Chunk) Encrypt(encryptionKey []byte, derivationKey string) (err err
}
// Start with the magic number and the version number.
encryptedBuffer.Write([]byte(ENCRYPTION_HEADER))
if usingRSA {
// RSA encryption starts with "duplicacy\002"
encryptedBuffer.Write([]byte(ENCRYPTION_HEADER)[:len(ENCRYPTION_HEADER) - 1])
encryptedBuffer.Write([]byte{ENCRYPTION_VERSION_RSA})
// Then the encrypted key
encryptedKey, err := rsa.EncryptOAEP(sha256.New(), rand.Reader, chunk.config.rsaPublicKey, key, nil)
if err != nil {
return err
}
binary.Write(encryptedBuffer, binary.LittleEndian, uint16(len(encryptedKey)))
encryptedBuffer.Write(encryptedKey)
} else {
encryptedBuffer.Write([]byte(ENCRYPTION_HEADER))
}
// Followed by the nonce
nonce = make([]byte, gcm.NonceSize())
@@ -214,7 +243,6 @@ func (chunk *Chunk) Encrypt(encryptionKey []byte, derivationKey string) (err err
}
encryptedBuffer.Write(nonce)
offset = encryptedBuffer.Len()
}
// offset is either 0 or the length of header + nonce
@@ -291,6 +319,9 @@ func (chunk *Chunk) Decrypt(encryptionKey []byte, derivationKey string) (err err
}()
chunk.buffer, encryptedBuffer = encryptedBuffer, chunk.buffer
headerLength := len(ENCRYPTION_HEADER)
chunk.encryptionVersion = 0
if len(encryptionKey) > 0 {
@@ -308,6 +339,41 @@ func (chunk *Chunk) Decrypt(encryptionKey []byte, derivationKey string) (err err
key = hasher.Sum(nil)
}
if len(encryptedBuffer.Bytes()) < headerLength + 12 {
return fmt.Errorf("No enough encrypted data (%d bytes) provided", len(encryptedBuffer.Bytes()))
}
if string(encryptedBuffer.Bytes()[:headerLength-1]) != ENCRYPTION_HEADER[:headerLength-1] {
return fmt.Errorf("The storage doesn't seem to be encrypted")
}
chunk.encryptionVersion = encryptedBuffer.Bytes()[headerLength-1]
if chunk.encryptionVersion != 0 && chunk.encryptionVersion != ENCRYPTION_VERSION_RSA {
return fmt.Errorf("Unsupported encryption version %d", chunk.encryptionVersion)
}
if chunk.encryptionVersion == ENCRYPTION_VERSION_RSA {
if chunk.config.rsaPrivateKey == nil {
LOG_ERROR("CHUNK_DECRYPT", "An RSA private key is required to decrypt the chunk")
return fmt.Errorf("An RSA private key is required to decrypt the chunk")
}
encryptedKeyLength := binary.LittleEndian.Uint16(encryptedBuffer.Bytes()[headerLength:headerLength+2])
if len(encryptedBuffer.Bytes()) < headerLength + 14 + int(encryptedKeyLength) {
return fmt.Errorf("No enough encrypted data (%d bytes) provided", len(encryptedBuffer.Bytes()))
}
encryptedKey := encryptedBuffer.Bytes()[headerLength + 2:headerLength + 2 + int(encryptedKeyLength)]
headerLength += 2 + int(encryptedKeyLength)
decryptedKey, err := rsa.DecryptOAEP(sha256.New(), rand.Reader, chunk.config.rsaPrivateKey, encryptedKey, nil)
if err != nil {
return err
}
key = decryptedKey
}
aesBlock, err := aes.NewCipher(key)
if err != nil {
return err
@@ -318,21 +384,7 @@ func (chunk *Chunk) Decrypt(encryptionKey []byte, derivationKey string) (err err
return err
}
headerLength := len(ENCRYPTION_HEADER)
offset = headerLength + gcm.NonceSize()
if len(encryptedBuffer.Bytes()) < offset {
return fmt.Errorf("No enough encrypted data (%d bytes) provided", len(encryptedBuffer.Bytes()))
}
if string(encryptedBuffer.Bytes()[:headerLength-1]) != ENCRYPTION_HEADER[:headerLength-1] {
return fmt.Errorf("The storage doesn't seem to be encrypted")
}
if encryptedBuffer.Bytes()[headerLength-1] != 0 {
return fmt.Errorf("Unsupported encryption version %d", encryptedBuffer.Bytes()[headerLength-1])
}
nonce := encryptedBuffer.Bytes()[headerLength:offset]
decryptedBytes, err := gcm.Open(encryptedBuffer.Bytes()[:offset], nonce,

View File

@@ -7,6 +7,7 @@ package duplicacy
import (
"bytes"
crypto_rand "crypto/rand"
"crypto/rsa"
"math/rand"
"testing"
)
@@ -22,6 +23,15 @@ func TestChunk(t *testing.T) {
config.CompressionLevel = DEFAULT_COMPRESSION_LEVEL
maxSize := 1000000
if testRSAEncryption {
privateKey, err := rsa.GenerateKey(crypto_rand.Reader, 2048)
if err != nil {
t.Errorf("Failed to generate a random private key: %v", err)
}
config.rsaPrivateKey = privateKey
config.rsaPublicKey = privateKey.Public().(*rsa.PublicKey)
}
remainderLength := -1
for i := 0; i < 500; i++ {
@@ -37,7 +47,7 @@ func TestChunk(t *testing.T) {
hash := chunk.GetHash()
id := chunk.GetID()
err := chunk.Encrypt(key, "")
err := chunk.Encrypt(key, "", false)
if err != nil {
t.Errorf("Failed to encrypt the data: %v", err)
continue

View File

@@ -197,6 +197,16 @@ func (downloader *ChunkDownloader) Reclaim(chunkIndex int) {
downloader.lastChunkIndex = chunkIndex
}
// Return the chunk last downloaded and its hash
func (downloader *ChunkDownloader) GetLastDownloadedChunk() (chunk *Chunk, chunkHash string) {
if downloader.lastChunkIndex >= len(downloader.taskList) {
return nil, ""
}
task := downloader.taskList[downloader.lastChunkIndex]
return task.chunk, task.chunkHash
}
// WaitForChunk waits until the specified chunk is ready
func (downloader *ChunkDownloader) WaitForChunk(chunkIndex int) (chunk *Chunk) {

View File

@@ -128,7 +128,7 @@ func (uploader *ChunkUploader) Upload(threadIndex int, task ChunkUploadTask) boo
}
// Encrypt the chunk only after we know that it must be uploaded.
err = chunk.Encrypt(uploader.config.ChunkKey, chunk.GetHash())
err = chunk.Encrypt(uploader.config.ChunkKey, chunk.GetHash(), uploader.snapshotCache != nil)
if err != nil {
LOG_ERROR("UPLOAD_CHUNK", "Failed to encrypt the chunk %s: %v", chunkID, err)
return false

View File

@@ -9,15 +9,20 @@ import (
"crypto/hmac"
"crypto/rand"
"crypto/sha256"
"crypto/rsa"
"crypto/x509"
"encoding/binary"
"encoding/hex"
"encoding/json"
"encoding/pem"
"fmt"
"hash"
"os"
"runtime"
"runtime/debug"
"sync/atomic"
"io/ioutil"
"reflect"
blake2 "github.com/minio/blake2b-simd"
)
@@ -65,6 +70,10 @@ type Config struct {
// for encrypting a non-chunk file
FileKey []byte `json:"-"`
// for RSA encryption
rsaPrivateKey *rsa.PrivateKey
rsaPublicKey *rsa.PublicKey
chunkPool chan *Chunk
numberOfChunks int32
dryRun bool
@@ -80,10 +89,15 @@ type jsonableConfig struct {
IDKey string `json:"id-key"`
ChunkKey string `json:"chunk-key"`
FileKey string `json:"file-key"`
RSAPublicKey string `json:"rsa-public-key"`
}
func (config *Config) MarshalJSON() ([]byte, error) {
publicKey := []byte {}
if config.rsaPublicKey != nil {
publicKey, _ = x509.MarshalPKIXPublicKey(config.rsaPublicKey)
}
return json.Marshal(&jsonableConfig{
aliasedConfig: (*aliasedConfig)(config),
ChunkSeed: hex.EncodeToString(config.ChunkSeed),
@@ -91,6 +105,7 @@ func (config *Config) MarshalJSON() ([]byte, error) {
IDKey: hex.EncodeToString(config.IDKey),
ChunkKey: hex.EncodeToString(config.ChunkKey),
FileKey: hex.EncodeToString(config.FileKey),
RSAPublicKey: hex.EncodeToString(publicKey),
})
}
@@ -120,6 +135,19 @@ func (config *Config) UnmarshalJSON(description []byte) (err error) {
return fmt.Errorf("Invalid representation of the file key in the config")
}
if publicKey, err := hex.DecodeString(aliased.RSAPublicKey); err != nil {
return fmt.Errorf("Invalid hex encoding of the RSA public key in the config")
} else if len(publicKey) > 0 {
parsedKey, err := x509.ParsePKIXPublicKey(publicKey)
if err != nil {
return fmt.Errorf("Invalid RSA public key in the config: %v", err)
}
config.rsaPublicKey = parsedKey.(*rsa.PublicKey)
if config.rsaPublicKey == nil {
return fmt.Errorf("Unsupported public key type %s in the config", reflect.TypeOf(parsedKey))
}
}
return nil
}
@@ -140,6 +168,29 @@ func (config *Config) Print() {
LOG_INFO("CONFIG_INFO", "Maximum chunk size: %d", config.MaximumChunkSize)
LOG_INFO("CONFIG_INFO", "Minimum chunk size: %d", config.MinimumChunkSize)
LOG_INFO("CONFIG_INFO", "Chunk seed: %x", config.ChunkSeed)
LOG_TRACE("CONFIG_INFO", "Hash key: %x", config.HashKey)
LOG_TRACE("CONFIG_INFO", "ID key: %x", config.IDKey)
if len(config.ChunkKey) >= 0 {
LOG_TRACE("CONFIG_INFO", "File chunks are encrypted")
}
if len(config.FileKey) >= 0 {
LOG_TRACE("CONFIG_INFO", "Metadata chunks are encrypted")
}
if config.rsaPublicKey != nil {
pkisPublicKey, _ := x509.MarshalPKIXPublicKey(config.rsaPublicKey)
publicKey := pem.EncodeToMemory(&pem.Block{
Type: "PUBLIC KEY",
Bytes: pkisPublicKey,
})
LOG_TRACE("CONFIG_INFO", "RSA public key: %s", publicKey)
}
}
func CreateConfigFromParameters(compressionLevel int, averageChunkSize int, maximumChunkSize int, mininumChunkSize int,
@@ -430,7 +481,7 @@ func UploadConfig(storage Storage, config *Config, password string, iterations i
if len(password) > 0 {
// Encrypt the config file with masterKey. If masterKey is nil then no encryption is performed.
err = chunk.Encrypt(masterKey, "")
err = chunk.Encrypt(masterKey, "", true)
if err != nil {
LOG_ERROR("CONFIG_CREATE", "Failed to create the config file: %v", err)
return false
@@ -477,7 +528,7 @@ func UploadConfig(storage Storage, config *Config, password string, iterations i
// it simply creates a file named 'config' that stores various parameters as well as a set of keys if encryption
// is enabled.
func ConfigStorage(storage Storage, iterations int, compressionLevel int, averageChunkSize int, maximumChunkSize int,
minimumChunkSize int, password string, copyFrom *Config, bitCopy bool) bool {
minimumChunkSize int, password string, copyFrom *Config, bitCopy bool, keyFile string) bool {
exist, _, _, err := storage.GetFileInfo(0, "config")
if err != nil {
@@ -496,5 +547,108 @@ func ConfigStorage(storage Storage, iterations int, compressionLevel int, averag
return false
}
if keyFile != "" {
config.loadRSAPublicKey(keyFile)
}
return UploadConfig(storage, config, password, iterations)
}
func (config *Config) loadRSAPublicKey(keyFile string) {
encodedKey, err := ioutil.ReadFile(keyFile)
if err != nil {
LOG_ERROR("BACKUP_KEY", "Failed to read the public key file: %v", err)
return
}
decodedKey, _ := pem.Decode(encodedKey)
if decodedKey == nil {
LOG_ERROR("RSA_PUBLIC", "unrecognized public key in %s", keyFile)
return
}
if decodedKey.Type != "PUBLIC KEY" {
LOG_ERROR("RSA_PUBLIC", "Unsupported public key type %s in %s", decodedKey.Type, keyFile)
return
}
parsedKey, err := x509.ParsePKIXPublicKey(decodedKey.Bytes)
if err != nil {
LOG_ERROR("RSA_PUBLIC", "Failed to parse the public key in %s: %v", keyFile, err)
return
}
key, ok := parsedKey.(*rsa.PublicKey)
if !ok {
LOG_ERROR("RSA_PUBLIC", "Unsupported public key type %s in %s", reflect.TypeOf(parsedKey), keyFile)
return
}
config.rsaPublicKey = key
}
// loadRSAPrivateKey loads the specified private key file for decrypting file chunks
func (config *Config) loadRSAPrivateKey(keyFile string, passphrase string) {
encodedKey, err := ioutil.ReadFile(keyFile)
if err != nil {
LOG_ERROR("RSA_PRIVATE", "Failed to read the private key file: %v", err)
return
}
decodedKey, _ := pem.Decode(encodedKey)
if decodedKey == nil {
LOG_ERROR("RSA_PRIVATE", "unrecognized private key in %s", keyFile)
return
}
if decodedKey.Type != "RSA PRIVATE KEY" {
LOG_ERROR("RSA_PRIVATE", "Unsupported private key type %s in %s", decodedKey.Type, keyFile)
return
}
var decodedKeyBytes []byte
if passphrase != "" {
decodedKeyBytes, err = x509.DecryptPEMBlock(decodedKey, []byte(passphrase))
} else {
decodedKeyBytes = decodedKey.Bytes
}
var parsedKey interface{}
if parsedKey, err = x509.ParsePKCS1PrivateKey(decodedKeyBytes); err != nil {
if parsedKey, err = x509.ParsePKCS8PrivateKey(decodedKeyBytes); err != nil {
LOG_ERROR("RSA_PRIVATE", "Failed to parse the private key in %s: %v", keyFile, err)
return
}
}
key, ok := parsedKey.(*rsa.PrivateKey)
if !ok {
LOG_ERROR("RSA_PRIVATE", "Unsupported private key type %s in %s", reflect.TypeOf(parsedKey), keyFile)
return
}
data := make([]byte, 32)
_, err = rand.Read(data)
if err != nil {
LOG_ERROR("RSA_PRIVATE", "Failed to generate random data for testing the private key: %v", err)
return
}
// Now test if the private key matches the public key
encryptedData, err := rsa.EncryptOAEP(sha256.New(), rand.Reader, config.rsaPublicKey, data, nil)
if err != nil {
LOG_ERROR("RSA_PRIVATE", "Failed to encrypt random data with the public key: %v", err)
return
}
decryptedData, err := rsa.DecryptOAEP(sha256.New(), rand.Reader, key, encryptedData, nil)
if err != nil {
LOG_ERROR("RSA_PRIVATE", "Incorrect private key: %v", err)
return
}
if !bytes.Equal(data, decryptedData) {
LOG_ERROR("RSA_PRIVATE", "Decrypted data do not match the original data")
return
}
config.rsaPrivateKey = key
}

View File

@@ -490,7 +490,7 @@ func ListEntries(top string, path string, fileList *[]*Entry, patterns []string,
}
if entry.IsLink() {
isRegular := false
isRegular, entry.Link, err = Readlink(filepath.Join(top, entry.Path))
isRegular, entry.Link, err = Readlink(joinPath(top, entry.Path))
if err != nil {
LOG_WARN("LIST_LINK", "Failed to read the symlink %s: %v", entry.Path, err)
skippedFiles = append(skippedFiles, entry.Path)
@@ -500,7 +500,7 @@ func ListEntries(top string, path string, fileList *[]*Entry, patterns []string,
if isRegular {
entry.Mode ^= uint32(os.ModeSymlink)
} else if path == "" && (filepath.IsAbs(entry.Link) || filepath.HasPrefix(entry.Link, `\\`)) && !strings.HasPrefix(entry.Link, normalizedTop) {
stat, err := os.Stat(filepath.Join(top, entry.Path))
stat, err := os.Stat(joinPath(top, entry.Path))
if err != nil {
LOG_WARN("LIST_LINK", "Failed to read the symlink: %v", err)
skippedFiles = append(skippedFiles, entry.Path)
@@ -513,6 +513,9 @@ func ListEntries(top string, path string, fileList *[]*Entry, patterns []string,
// path from f.Name(); note that a "/" is append assuming a symbolic link is always a directory
newEntry.Path = filepath.Join(normalizedPath, f.Name()) + "/"
}
if len(patterns) > 0 && !MatchPath(newEntry.Path, patterns) {
continue
}
entry = newEntry
}
}

View File

@@ -25,6 +25,7 @@ type Preference struct {
DoNotSavePassword bool `json:"no_save_password"`
NobackupFile string `json:"nobackup_file"`
Keys map[string]string `json:"keys"`
FiltersFile string `json:"filters"`
}
var preferencePath string

View File

@@ -13,6 +13,7 @@ import (
"io/ioutil"
"os"
"os/exec"
"regexp"
"strings"
"syscall"
"time"
@@ -123,11 +124,11 @@ func CreateShadowCopy(top string, shadowCopy bool, timeoutInSeconds int) (shadow
}
deviceIdRepository, err := GetPathDeviceId(top)
if err != nil {
LOG_ERROR("VSS_INIT", "Unable to get device ID of path: ", top)
LOG_ERROR("VSS_INIT", "Unable to get device ID of path: %s", top)
return top
}
if deviceIdLocal != deviceIdRepository {
LOG_WARN("VSS_PATH", "VSS not supported for non-local repository path: ", top)
LOG_WARN("VSS_PATH", "VSS not supported for non-local repository path: %s", top)
return top
}
@@ -145,22 +146,37 @@ func CreateShadowCopy(top string, shadowCopy bool, timeoutInSeconds int) (shadow
// Use tmutil to create snapshot
tmutilOutput, err := CommandWithTimeout(timeoutInSeconds, "tmutil", "snapshot")
if err != nil {
LOG_ERROR("VSS_CREATE", "Error while calling tmutil: ", err)
LOG_ERROR("VSS_CREATE", "Error while calling tmutil: %v", err)
return top
}
colonPos := strings.IndexByte(tmutilOutput, ':')
if colonPos < 0 {
LOG_ERROR("VSS_CREATE", "Snapshot creation failed: ", tmutilOutput)
LOG_ERROR("VSS_CREATE", "Snapshot creation failed: %s", tmutilOutput)
return top
}
snapshotDate = strings.TrimSpace(tmutilOutput[colonPos+1:])
tmutilOutput, err = CommandWithTimeout(timeoutInSeconds, "tmutil", "listlocalsnapshots", ".")
if err != nil {
LOG_ERROR("VSS_CREATE", "Error while calling 'tmutil listlocalsnapshots': %v", err)
return top
}
snapshotName := "com.apple.TimeMachine." + snapshotDate
r := regexp.MustCompile(`(?m)^(.+` + snapshotDate + `.*)$`)
snapshotNames := r.FindStringSubmatch(tmutilOutput)
if len(snapshotNames) > 0 {
snapshotName = snapshotNames[0]
} else {
LOG_WARN("VSS_CREATE", "Error while using 'tmutil listlocalsnapshots' to find snapshot name. Will fallback to 'com.apple.TimeMachine.SNAPSHOT_DATE'")
}
// Mount snapshot as readonly and hide from GUI i.e. Finder
_, err = CommandWithTimeout(timeoutInSeconds,
"/sbin/mount", "-t", "apfs", "-o", "nobrowse,-r,-s=com.apple.TimeMachine."+snapshotDate, "/", snapshotPath)
"/sbin/mount", "-t", "apfs", "-o", "nobrowse,-r,-s="+snapshotName, "/", snapshotPath)
if err != nil {
LOG_ERROR("VSS_CREATE", "Error while mounting snapshot: ", err)
LOG_ERROR("VSS_CREATE", "Error while mounting snapshot: %v", err)
return top
}

View File

@@ -58,7 +58,7 @@ func CreateEmptySnapshot(id string) (snapshto *Snapshot) {
// CreateSnapshotFromDirectory creates a snapshot from the local directory 'top'. Only 'Files'
// will be constructed, while 'ChunkHashes' and 'ChunkLengths' can only be populated after uploading.
func CreateSnapshotFromDirectory(id string, top string, nobackupFile string) (snapshot *Snapshot, skippedDirectories []string,
func CreateSnapshotFromDirectory(id string, top string, nobackupFile string, filtersFile string) (snapshot *Snapshot, skippedDirectories []string,
skippedFiles []string, err error) {
snapshot = &Snapshot{
@@ -69,7 +69,10 @@ func CreateSnapshotFromDirectory(id string, top string, nobackupFile string) (sn
var patterns []string
patterns = ProcessFilters()
if filtersFile == "" {
filtersFile = joinPath(GetDuplicacyPreferencePath(), "filters")
}
patterns = ProcessFilters(filtersFile)
directories := make([]*Entry, 0, 256)
directories = append(directories, CreateEntry("", 0, 0, 0))
@@ -121,8 +124,8 @@ func AppendPattern(patterns []string, new_pattern string) (new_patterns []string
new_patterns = append(patterns, new_pattern)
return new_patterns
}
func ProcessFilters() (patterns []string) {
patterns = ProcessFilterFile(joinPath(GetDuplicacyPreferencePath(), "filters"), make([]string, 0))
func ProcessFilters(filtersFile string) (patterns []string) {
patterns = ProcessFilterFile(filtersFile, make([]string, 0))
LOG_DEBUG("REGEX_DEBUG", "There are %d compiled regular expressions stored", len(RegexMap))

View File

@@ -759,8 +759,8 @@ func (manager *SnapshotManager) ListSnapshots(snapshotID string, revisionsToList
func (manager *SnapshotManager) CheckSnapshots(snapshotID string, revisionsToCheck []int, tag string, showStatistics bool, showTabular bool,
checkFiles bool, searchFossils bool, resurrect bool) bool {
LOG_DEBUG("LIST_PARAMETERS", "id: %s, revisions: %v, tag: %s, showStatistics: %t, checkFiles: %t, searchFossils: %t, resurrect: %t",
snapshotID, revisionsToCheck, tag, showStatistics, checkFiles, searchFossils, resurrect)
LOG_DEBUG("LIST_PARAMETERS", "id: %s, revisions: %v, tag: %s, showStatistics: %t, showTabular: %t, checkFiles: %t, searchFossils: %t, resurrect: %t",
snapshotID, revisionsToCheck, tag, showStatistics, showTabular, checkFiles, searchFossils, resurrect)
snapshotMap := make(map[string][]*Snapshot)
var err error
@@ -790,7 +790,7 @@ func (manager *SnapshotManager) CheckSnapshots(snapshotID string, revisionsToChe
chunkSizeMap[chunk] = allSizes[i]
}
if snapshotID == "" || showStatistics {
if snapshotID == "" || showStatistics || showTabular {
snapshotIDs, err := manager.ListSnapshotIDs()
if err != nil {
LOG_ERROR("SNAPSHOT_LIST", "Failed to list all snapshots: %v", err)
@@ -810,7 +810,7 @@ func (manager *SnapshotManager) CheckSnapshots(snapshotID string, revisionsToChe
for snapshotID = range snapshotMap {
revisions := revisionsToCheck
if len(revisions) == 0 || showStatistics {
if len(revisions) == 0 || showStatistics || showTabular {
revisions, err = manager.ListSnapshotRevisions(snapshotID)
if err != nil {
LOG_ERROR("SNAPSHOT_LIST", "Failed to list all revisions for snapshot %s: %v", snapshotID, err)
@@ -1194,7 +1194,6 @@ func (manager *SnapshotManager) RetrieveFile(snapshot *Snapshot, file *Entry, ou
}
var chunk *Chunk
currentHash := ""
for i := file.StartChunk; i <= file.EndChunk; i++ {
start := 0
@@ -1207,10 +1206,12 @@ func (manager *SnapshotManager) RetrieveFile(snapshot *Snapshot, file *Entry, ou
}
hash := snapshot.ChunkHashes[i]
if currentHash != hash {
lastChunk, lastChunkHash := manager.chunkDownloader.GetLastDownloadedChunk()
if lastChunkHash != hash {
i := manager.chunkDownloader.AddChunk(hash)
chunk = manager.chunkDownloader.WaitForChunk(i)
currentHash = hash
} else {
chunk = lastChunk
}
output(chunk.GetBytes()[start:end])
@@ -1298,7 +1299,7 @@ func (manager *SnapshotManager) PrintFile(snapshotID string, revision int, path
// Diff compares two snapshots, or two revision of a file if the file argument is given.
func (manager *SnapshotManager) Diff(top string, snapshotID string, revisions []int,
filePath string, compareByHash bool, nobackupFile string) bool {
filePath string, compareByHash bool, nobackupFile string, filtersFile string) bool {
LOG_DEBUG("DIFF_PARAMETERS", "top: %s, id: %s, revision: %v, path: %s, compareByHash: %t",
top, snapshotID, revisions, filePath, compareByHash)
@@ -1311,7 +1312,7 @@ func (manager *SnapshotManager) Diff(top string, snapshotID string, revisions []
if len(revisions) <= 1 {
// Only scan the repository if filePath is not provided
if len(filePath) == 0 {
rightSnapshot, _, _, err = CreateSnapshotFromDirectory(snapshotID, top, nobackupFile)
rightSnapshot, _, _, err = CreateSnapshotFromDirectory(snapshotID, top, nobackupFile, filtersFile)
if err != nil {
LOG_ERROR("SNAPSHOT_LIST", "Failed to list the directory %s: %v", top, err)
return false
@@ -1482,7 +1483,11 @@ func (manager *SnapshotManager) Diff(top string, snapshotID string, revisions []
same = right.IsSameAs(left)
}
} else {
same = left.Hash == right.Hash
if left.Size == 0 && right.Size == 0 {
same = true
} else {
same = left.Hash == right.Hash
}
}
if !same {
@@ -1853,7 +1858,7 @@ func (manager *SnapshotManager) PruneSnapshots(selfID string, snapshotID string,
if _, found := newChunks[chunk]; found {
// The fossil is referenced so it can't be deleted.
if dryRun {
LOG_INFO("FOSSIL_RESURRECT", "Fossil %s would be resurrected: %v", chunk)
LOG_INFO("FOSSIL_RESURRECT", "Fossil %s would be resurrected", chunk)
continue
}
@@ -2461,7 +2466,7 @@ func (manager *SnapshotManager) UploadFile(path string, derivationKey string, co
derivationKey = derivationKey[len(derivationKey)-64:]
}
err := manager.fileChunk.Encrypt(manager.config.FileKey, derivationKey)
err := manager.fileChunk.Encrypt(manager.config.FileKey, derivationKey, true)
if err != nil {
LOG_ERROR("UPLOAD_File", "Failed to encrypt the file %s: %v", path, err)
return false

View File

@@ -27,6 +27,7 @@ var testRateLimit int
var testQuickMode bool
var testThreads int
var testFixedChunkSize bool
var testRSAEncryption bool
func init() {
flag.StringVar(&testStorageName, "storage", "", "the test storage to use")
@@ -34,6 +35,7 @@ func init() {
flag.BoolVar(&testQuickMode, "quick", false, "quick test")
flag.IntVar(&testThreads, "threads", 1, "number of downloading/uploading threads")
flag.BoolVar(&testFixedChunkSize, "fixed-chunk-size", false, "fixed chunk size")
flag.BoolVar(&testRSAEncryption, "rsa", false, "enable RSA encryption")
flag.Parse()
}