mirror of https://github.com/gilbertchen/duplicacy synced 2025-12-06 00:03:38 +00:00

Compare commits


10 Commits

Author SHA1 Message Date
Gilbert Chen
817e36c7a6 Bump version to 2.0.4 2017-06-29 22:22:59 -04:00
Gilbert Chen
b7b54478fc Don't compute file hashes when DUPLICACY_SKIP_FILE_HASH is set; handle vertical backup-style hashes in restore 2017-06-29 22:19:41 -04:00
Gilbert Chen
8d06fa491a Merge branch 'master' of https://github.com/gilbertchen/duplicacy 2017-06-29 13:19:53 -04:00
Gilbert Chen
42a6ab9140 In fixed-size chunking, create a new chunk after returning the old one 2017-06-29 13:11:28 -04:00
gilbertchen
bad990e702 Merge pull request #81 from chbmuc/master
Move error parsing behind status code handling
2017-06-26 11:08:38 -04:00
gilbertchen
d27335ad8d Update README.md 2017-06-23 22:02:15 -04:00
Gilbert Chen
a584828e1b Merge branch 'master' of https://github.com/gilbertchen/duplicacy 2017-06-22 22:53:42 -04:00
Gilbert Chen
d0c376f593 Implement fast resume; refactor GetDuplicacyPreferencePath() 2017-06-22 22:53:33 -04:00
gilbertchen
a54029cf2b Update GUIDE.md 2017-06-22 13:11:04 -04:00
Christian Brunner
6a73a62591 Move error parsing behind status code handling
Otherwise request throttling won't work and you will get errors like this:

PUT https://api.onedrive.com/v1.0/drive/root:/dup/chunks/91xxx08:/content
Failed to upload the chunk 91xxx08: 503 Unexpected response
2017-06-16 14:06:14 +02:00
18 changed files with 462 additions and 164 deletions

View File

@@ -28,7 +28,7 @@ for those commands. This default storage actually has a name, *default*.
After that, it will prepare the current working directory as the repository to be backed up. Under the hood, it will create a directory
named *.duplicacy* in the repository and put a file named *preferences* that stores the snapshot id and encryption and storage options.
The snapshot id is an id used to distinguish different repositories connected to the same storage. Each repository must have a unique snapshot id.
The snapshot id is an id used to distinguish different repositories connected to the same storage. Each repository must have a unique snapshot id. A snapshot id must contain only characters valid in Linux and Windows paths (alphabet, digits, underscore, dash, etc), but cannot include `/`, `\`, or `@`.
The -e option controls whether or not encryption will be enabled for the storage. If encryption is enabled, you will be prompted to enter a storage password.
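
As an aside, the character rule quoted above can be captured in a few lines. The sketch below is an editor's illustration only (Duplicacy's actual validation may differ), and the helper name is hypothetical; it simply rejects the characters the guide explicitly forbids.

```go
package main

import (
	"fmt"
	"strings"
)

// isValidSnapshotID is a hypothetical helper, not Duplicacy's code: it rejects
// empty ids and the characters forbidden above ('/', '\' and '@').
func isValidSnapshotID(id string) bool {
	return id != "" && !strings.ContainsAny(id, `/\@`)
}

func main() {
	for _, id := range []string{"host1-documents", "photos/2017"} {
		fmt.Printf("%-16s valid: %v\n", id, isValidSnapshotID(id))
	}
}
```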

View File

@@ -223,9 +223,9 @@ It is unclear if the lack of cloud backends is due to difficulties in porting th
[not recommended](http://librelist.com/browser//attic/2014/11/11/backing-up-multiple-servers-into-a-single-repository/#e96345aa5a3469a87786675d65da492b) by the developer due to chunk indices being kept in a local cache.
Concurrent access is not only a convenience; it is a necessity for better deduplication. For instance, if multiple machines with the same OS installed can back up their entire drives to the same storage, only one copy of the system files needs to be stored, greatly reducing the storage space regardless of the number of machines. Attic still adopts the traditional approach of using a centralized indexing database to manage chunks, and relies heavily on caching to improve performance. The presence of exclusive locking makes it hard to be adapted for cloud storage APIs and reduces the level of deduplication.
[restic](https://restic.github.io) is a more recent addition. It is worth mentioning here because, like Duplicacy, it is written in Go. It uses a format similar to the git packfile format, but not exactly the same. Multiple clients backing up to the same storage are still guarded by
[locks](https://github.com/restic/restic/blob/master/doc/Design.md#locks).
A command to delete old backups is in the developer's [plan](https://github.com/restic/restic/issues/18). S3 storage is supported, although it is unclear how hard it is to support other cloud storage APIs because of the need for locking. Overall, it still falls in the same category as Attic. Whether it will eventually reach the same level as Attic remains to be seen.
[restic](https://restic.github.io) is a more recent addition. It is worth mentioning here because, like Duplicacy, it is written in Go. It uses a format similar to the git packfile format. Multiple clients backing up to the same storage are still guarded by
[locks](https://github.com/restic/restic/blob/master/doc/Design.md#locks). A prune operation will therefore completely block all other clients connected to the storage from doing their regular backups. Moreover, since most cloud storage services do not provide a locking service, the best effort is to use some basic file operations to simulate a lock, but distributed locking is known to be a hard problem and it is unclear how reliable restic's lock implementation is. A faulty implementation may cause a prune operation to accidentally delete data still in use, resulting in unrecoverable data loss. This is the exact problem that we avoided by taking the lock-free approach.
The following table compares the feature lists of all these backup tools:
@@ -238,7 +238,7 @@ The following table compares the feature lists of all these backup tools:
| Encryption | Yes | Yes | Yes | Yes | Yes | **Yes** |
| Deletion | No | No | Yes | Yes | No | **Yes** |
| Concurrent Access | No | No | Exclusive locking | Not recommended | Exclusive locking | **Lock-free** |
| Cloud Support | Extensive | No | No | No | S3 only | **S3, GCS, Azure, Dropbox, Backblaze, Google Drive, OneDrive, and Hubic**|
| Cloud Support | Extensive | No | No | No | S3, B2, OpenStack | **S3, GCS, Azure, Dropbox, Backblaze B2, Google Drive, OneDrive, and Hubic**|
| Snapshot Migration | No | No | No | No | No | **Yes** |

View File

@@ -13,6 +13,7 @@ import (
"strings"
"strconv"
"os/exec"
"os/signal"
"encoding/json"
"github.com/gilbertchen/cli"
@@ -57,7 +58,7 @@ func getRepositoryPreference(context *cli.Context, storageName string) (reposito
}
duplicacy.LoadPreferences(repository)
preferencePath := duplicacy.GetDuplicacyPreferencePath(repository)
preferencePath := duplicacy.GetDuplicacyPreferencePath()
duplicacy.SetKeyringFile(path.Join(preferencePath, "keyring"))
if storageName == "" {
@@ -138,13 +139,13 @@ func setGlobalOptions(context *cli.Context) {
duplicacy.RunInBackground = context.GlobalBool("background")
}
func runScript(context *cli.Context, repository string, storageName string, phase string) bool {
func runScript(context *cli.Context, storageName string, phase string) bool {
if !ScriptEnabled {
return false
}
preferencePath := duplicacy.GetDuplicacyPreferencePath(repository)
preferencePath := duplicacy.GetDuplicacyPreferencePath()
scriptDir, _ := filepath.Abs(path.Join(preferencePath, "scripts"))
scriptName := phase + "-" + context.Command.Name
@@ -225,7 +226,6 @@ func configRepository(context *cli.Context, init bool) {
preferencePath := context.String("pref-dir")
if preferencePath == "" {
preferencePath = path.Join(repository, duplicacy.DUPLICACY_DIRECTORY) // TOKEEP
}
@@ -252,6 +252,7 @@ func configRepository(context *cli.Context, init bool) {
return
}
}
duplicacy.SetDuplicacyPreferencePath(preferencePath)
duplicacy.SetKeyringFile(path.Join(preferencePath, "keyring"))
} else {
@@ -269,7 +270,7 @@ func configRepository(context *cli.Context, init bool) {
Encrypted: context.Bool("encrypt"),
}
storage := duplicacy.CreateStorage(repository, preference, true, 1)
storage := duplicacy.CreateStorage(preference, true, 1)
storagePassword := ""
if preference.Encrypted {
prompt := fmt.Sprintf("Enter storage password for %s:", preference.StorageURL)
@@ -359,7 +360,7 @@ func configRepository(context *cli.Context, init bool) {
}
otherStorage := duplicacy.CreateStorage(repository, *otherPreference, false, 1)
otherStorage := duplicacy.CreateStorage(*otherPreference, false, 1)
otherPassword := ""
if otherPreference.Encrypted {
@@ -386,7 +387,7 @@ func configRepository(context *cli.Context, init bool) {
duplicacy.Preferences = append(duplicacy.Preferences, preference)
duplicacy.SavePreferences(repository)
duplicacy.SavePreferences()
duplicacy.LOG_INFO("REPOSITORY_INIT", "%s will be backed up to %s with id %s",
repository, preference.StorageURL, preference.SnapshotID)
@@ -507,7 +508,7 @@ func setPreference(context *cli.Context) {
oldPreference.StorageURL)
} else {
*oldPreference = newPreference
duplicacy.SavePreferences(repository)
duplicacy.SavePreferences()
duplicacy.LOG_INFO("STORAGE_SET", "New options for storage %s have been saved", oldPreference.StorageURL)
}
}
@@ -524,9 +525,9 @@ func changePassword(context *cli.Context) {
os.Exit(ArgumentExitCode)
}
repository, preference := getRepositoryPreference(context, "")
_, preference := getRepositoryPreference(context, "")
storage := duplicacy.CreateStorage(repository, *preference, false, 1)
storage := duplicacy.CreateStorage(*preference, false, 1)
if storage == nil {
return
}
@@ -583,7 +584,7 @@ func backupRepository(context *cli.Context) {
return
}
runScript(context, repository, preference.Name, "pre")
runScript(context, preference.Name, "pre")
threads := context.Int("threads")
if threads < 1 {
@@ -591,7 +592,7 @@ func backupRepository(context *cli.Context) {
}
duplicacy.LOG_INFO("STORAGE_SET", "Storage set to %s", preference.StorageURL)
storage := duplicacy.CreateStorage(repository, *preference, false, threads)
storage := duplicacy.CreateStorage(*preference, false, threads)
if storage == nil {
return
}
@@ -615,10 +616,10 @@ func backupRepository(context *cli.Context) {
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password)
duplicacy.SavePassword(*preference, "password", password)
backupManager.SetupSnapshotCache(repository, preference.Name)
backupManager.SetupSnapshotCache(preference.Name)
backupManager.Backup(repository, quickMode, threads, context.String("t"), showStatistics, enableVSS)
runScript(context, repository, preference.Name, "post")
runScript(context, preference.Name, "post")
}
func restoreRepository(context *cli.Context) {
@@ -640,7 +641,7 @@ func restoreRepository(context *cli.Context) {
return
}
runScript(context, repository, preference.Name, "pre")
runScript(context, preference.Name, "pre")
threads := context.Int("threads")
if threads < 1 {
@@ -648,7 +649,7 @@ func restoreRepository(context *cli.Context) {
}
duplicacy.LOG_INFO("STORAGE_SET", "Storage set to %s", preference.StorageURL)
storage := duplicacy.CreateStorage(repository, *preference, false, threads)
storage := duplicacy.CreateStorage(*preference, false, threads)
if storage == nil {
return
}
@@ -690,10 +691,10 @@ func restoreRepository(context *cli.Context) {
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password)
duplicacy.SavePassword(*preference, "password", password)
backupManager.SetupSnapshotCache(repository, preference.Name)
backupManager.SetupSnapshotCache(preference.Name)
backupManager.Restore(repository, revision, true, quickMode, threads, overwrite, deleteMode, showStatistics, patterns)
runScript(context, repository, preference.Name, "post")
runScript(context, preference.Name, "post")
}
func listSnapshots(context *cli.Context) {
@@ -710,10 +711,10 @@ func listSnapshots(context *cli.Context) {
duplicacy.LOG_INFO("STORAGE_SET", "Storage set to %s", preference.StorageURL)
runScript(context, repository, preference.Name, "pre")
runScript(context, preference.Name, "pre")
resetPassword := context.Bool("reset-passwords")
storage := duplicacy.CreateStorage(repository, *preference, resetPassword, 1)
storage := duplicacy.CreateStorage(*preference, resetPassword, 1)
if storage == nil {
return
}
@@ -740,10 +741,10 @@ func listSnapshots(context *cli.Context) {
showFiles := context.Bool("files")
showChunks := context.Bool("chunks")
backupManager.SetupSnapshotCache(repository, preference.Name)
backupManager.SetupSnapshotCache(preference.Name)
backupManager.SnapshotManager.ListSnapshots(id, revisions, tag, showFiles, showChunks)
runScript(context, repository, preference.Name, "post")
runScript(context, preference.Name, "post")
}
func checkSnapshots(context *cli.Context) {
@@ -760,9 +761,9 @@ func checkSnapshots(context *cli.Context) {
duplicacy.LOG_INFO("STORAGE_SET", "Storage set to %s", preference.StorageURL)
runScript(context, repository, preference.Name, "pre")
runScript(context, preference.Name, "pre")
storage := duplicacy.CreateStorage(repository, *preference, false, 1)
storage := duplicacy.CreateStorage(*preference, false, 1)
if storage == nil {
return
}
@@ -790,10 +791,10 @@ func checkSnapshots(context *cli.Context) {
searchFossils := context.Bool("fossils")
resurrect := context.Bool("resurrect")
backupManager.SetupSnapshotCache(repository, preference.Name)
backupManager.SetupSnapshotCache(preference.Name)
backupManager.SnapshotManager.CheckSnapshots(id, revisions, tag, showStatistics, checkFiles, searchFossils, resurrect)
runScript(context, repository, preference.Name, "post")
runScript(context, preference.Name, "post")
}
func printFile(context *cli.Context) {
@@ -808,11 +809,11 @@ func printFile(context *cli.Context) {
repository, preference := getRepositoryPreference(context, "")
runScript(context, repository, preference.Name, "pre")
runScript(context, preference.Name, "pre")
// Do not print out storage for this command
//duplicacy.LOG_INFO("STORAGE_SET", "Storage set to %s", preference.StorageURL)
storage := duplicacy.CreateStorage(repository, *preference, false, 1)
storage := duplicacy.CreateStorage(*preference, false, 1)
if storage == nil {
return
}
@@ -832,7 +833,7 @@ func printFile(context *cli.Context) {
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password)
duplicacy.SavePassword(*preference, "password", password)
backupManager.SetupSnapshotCache(repository, preference.Name)
backupManager.SetupSnapshotCache(preference.Name)
file := ""
if len(context.Args()) > 0 {
@@ -840,7 +841,7 @@ func printFile(context *cli.Context) {
}
backupManager.SnapshotManager.PrintFile(snapshotID, revision, file)
runScript(context, repository, preference.Name, "post")
runScript(context, preference.Name, "post")
}
func diff(context *cli.Context) {
@@ -855,10 +856,10 @@ func diff(context *cli.Context) {
repository, preference := getRepositoryPreference(context, "")
runScript(context, repository, preference.Name, "pre")
runScript(context, preference.Name, "pre")
duplicacy.LOG_INFO("STORAGE_SET", "Storage set to %s", preference.StorageURL)
storage := duplicacy.CreateStorage(repository, *preference, false, 1)
storage := duplicacy.CreateStorage(*preference, false, 1)
if storage == nil {
return
}
@@ -889,10 +890,10 @@ func diff(context *cli.Context) {
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password)
duplicacy.SavePassword(*preference, "password", password)
backupManager.SetupSnapshotCache(repository, preference.Name)
backupManager.SetupSnapshotCache(preference.Name)
backupManager.SnapshotManager.Diff(repository, snapshotID, revisions, path, compareByHash)
runScript(context, repository, preference.Name, "post")
runScript(context, preference.Name, "post")
}
func showHistory(context *cli.Context) {
@@ -907,10 +908,10 @@ func showHistory(context *cli.Context) {
repository, preference := getRepositoryPreference(context, "")
runScript(context, repository, preference.Name, "pre")
runScript(context, preference.Name, "pre")
duplicacy.LOG_INFO("STORAGE_SET", "Storage set to %s", preference.StorageURL)
storage := duplicacy.CreateStorage(repository, *preference, false, 1)
storage := duplicacy.CreateStorage(*preference, false, 1)
if storage == nil {
return
}
@@ -932,10 +933,10 @@ func showHistory(context *cli.Context) {
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password)
duplicacy.SavePassword(*preference, "password", password)
backupManager.SetupSnapshotCache(repository, preference.Name)
backupManager.SetupSnapshotCache(preference.Name)
backupManager.SnapshotManager.ShowHistory(repository, snapshotID, revisions, path, showLocalHash)
runScript(context, repository, preference.Name, "post")
runScript(context, preference.Name, "post")
}
func pruneSnapshots(context *cli.Context) {
@@ -950,10 +951,10 @@ func pruneSnapshots(context *cli.Context) {
repository, preference := getRepositoryPreference(context, "")
runScript(context, repository, preference.Name, "pre")
runScript(context, preference.Name, "pre")
duplicacy.LOG_INFO("STORAGE_SET", "Storage set to %s", preference.StorageURL)
storage := duplicacy.CreateStorage(repository, *preference, false, 1)
storage := duplicacy.CreateStorage(*preference, false, 1)
if storage == nil {
return
}
@@ -990,11 +991,11 @@ func pruneSnapshots(context *cli.Context) {
backupManager := duplicacy.CreateBackupManager(preference.SnapshotID, storage, repository, password)
duplicacy.SavePassword(*preference, "password", password)
backupManager.SetupSnapshotCache(repository, preference.Name)
backupManager.SnapshotManager.PruneSnapshots(repository, selfID, snapshotID, revisions, tags, retentions,
backupManager.SetupSnapshotCache(preference.Name)
backupManager.SnapshotManager.PruneSnapshots(selfID, snapshotID, revisions, tags, retentions,
exhaustive, exclusive, ignoredIDs, dryRun, deleteOnly, collectOnly)
runScript(context, repository, preference.Name, "post")
runScript(context, preference.Name, "post")
}
func copySnapshots(context *cli.Context) {
@@ -1009,10 +1010,10 @@ func copySnapshots(context *cli.Context) {
repository, source := getRepositoryPreference(context, context.String("from"))
runScript(context, repository, source.Name, "pre")
runScript(context, source.Name, "pre")
duplicacy.LOG_INFO("STORAGE_SET", "Source storage set to %s", source.StorageURL)
sourceStorage := duplicacy.CreateStorage(repository, *source, false, 1)
sourceStorage := duplicacy.CreateStorage(*source, false, 1)
if sourceStorage == nil {
return
}
@@ -1023,7 +1024,7 @@ func copySnapshots(context *cli.Context) {
}
sourceManager := duplicacy.CreateBackupManager(source.SnapshotID, sourceStorage, repository, sourcePassword)
sourceManager.SetupSnapshotCache(repository, source.Name)
sourceManager.SetupSnapshotCache(source.Name)
duplicacy.SavePassword(*source, "password", sourcePassword)
@@ -1042,7 +1043,7 @@ func copySnapshots(context *cli.Context) {
duplicacy.LOG_INFO("STORAGE_SET", "Destination storage set to %s", destination.StorageURL)
destinationStorage := duplicacy.CreateStorage(repository, *destination, false, 1)
destinationStorage := duplicacy.CreateStorage(*destination, false, 1)
if destinationStorage == nil {
return
}
@@ -1059,7 +1060,7 @@ func copySnapshots(context *cli.Context) {
destinationManager := duplicacy.CreateBackupManager(destination.SnapshotID, destinationStorage, repository,
destinationPassword)
duplicacy.SavePassword(*destination, "password", destinationPassword)
destinationManager.SetupSnapshotCache(repository, destination.Name)
destinationManager.SetupSnapshotCache(destination.Name)
revisions := getRevisions(context)
snapshotID := ""
@@ -1073,7 +1074,7 @@ func copySnapshots(context *cli.Context) {
}
sourceManager.CopySnapshots(destinationManager, snapshotID, revisions, threads)
runScript(context, repository, source.Name, "post")
runScript(context, source.Name, "post")
}
func infoStorage(context *cli.Context) {
@@ -1088,7 +1089,8 @@ func infoStorage(context *cli.Context) {
repository := context.String("repository")
if repository != "" {
preferencePath := duplicacy.GetDuplicacyPreferencePath(repository)
preferencePath := path.Join(repository, duplicacy.DUPLICACY_DIRECTORY)
duplicacy.SetDuplicacyPreferencePath(preferencePath)
duplicacy.SetKeyringFile(path.Join(preferencePath, "keyring"))
}
@@ -1106,7 +1108,7 @@ func infoStorage(context *cli.Context) {
password = duplicacy.GetPassword(preference, "password", "Enter the storage password:", false, false)
}
storage := duplicacy.CreateStorage("", preference, context.Bool("reset-passwords"), 1)
storage := duplicacy.CreateStorage(preference, context.Bool("reset-passwords"), 1)
config, isStorageEncrypted, err := duplicacy.DownloadConfig(storage, password)
if isStorageEncrypted {
@@ -1681,7 +1683,18 @@ func main() {
app.Name = "duplicacy"
app.HelpName = "duplicacy"
app.Usage = "A new generation cloud backup tool based on lock-free deduplication"
app.Version = "2.0.3"
app.Version = "2.0.4"
// If the program is interrupted, call the RunAtError function.
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt)
go func() {
for _ = range c {
duplicacy.RunAtError()
os.Exit(1)
}
}()
err := app.Run(os.Args)
if err != nil {
os.Exit(2)

integration_tests/fixed_test.sh Executable file
View File

@@ -0,0 +1,18 @@
#!/bin/bash
# Sanity test for the fixed-size chunking algorithm
. ./test_functions.sh
fixture
pushd ${TEST_REPO}
${DUPLICACY} init integration-tests $TEST_STORAGE -c 64 -max 64 -min 64
add_file file3
add_file file4
${DUPLICACY} backup
${DUPLICACY} check --files -stats
popd
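
Setting -min, -c, and -max to the same value, as this test does, turns variable-size chunking into fixed-size chunking. The sketch below is a minimal illustration of that idea only, not Duplicacy's ChunkMaker: it slices a stream into chunks of a fixed byte size, with the final chunk possibly shorter.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// fixedChunks splits a stream into chunks of exactly chunkSize bytes; the
// last chunk may be shorter. A simplified sketch, not Duplicacy's chunker.
func fixedChunks(r io.Reader, chunkSize int) ([][]byte, error) {
	var chunks [][]byte
	for {
		buf := make([]byte, chunkSize)
		n, err := io.ReadFull(r, buf)
		if n > 0 {
			chunks = append(chunks, buf[:n])
		}
		if err == io.EOF || err == io.ErrUnexpectedEOF {
			return chunks, nil
		}
		if err != nil {
			return nil, err
		}
	}
}

func main() {
	data := bytes.Repeat([]byte("x"), 150)
	chunks, _ := fixedChunks(bytes.NewReader(data), 64)
	for i, c := range chunks {
		fmt.Printf("chunk %d: %d bytes\n", i, len(c)) // 64, 64, 22
	}
}
```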

View File

@@ -0,0 +1,42 @@
#!/bin/bash
. ./test_functions.sh
fixture
pushd ${TEST_REPO}
${DUPLICACY} init integration-tests $TEST_STORAGE -c 4k
# Create 10 20k files
add_file file1 20000
add_file file2 20000
add_file file3 20000
add_file file4 20000
add_file file5 20000
add_file file6 20000
add_file file7 20000
add_file file8 20000
add_file file9 20000
add_file file10 20000
# Limit the rate to 10k/s so the backup will take about 10 seconds
${DUPLICACY} backup -limit-rate 10 -threads 4 &
# Kill the backup after 3 seconds
DUPLICACY_PID=$!
sleep 3
kill -2 ${DUPLICACY_PID}
# Try it again to test the multiple-resume case
${DUPLICACY} backup -limit-rate 10 -threads 4&
DUPLICACY_PID=$!
sleep 3
kill -2 ${DUPLICACY_PID}
# Fail the backup before uploading the snapshot
env DUPLICACY_FAIL_SNAPSHOT=true ${DUPLICACY} backup
# Now complete the backup
${DUPLICACY} backup
${DUPLICACY} check --files
popd

View File

@@ -91,8 +91,9 @@ function init_repo_pref_dir()
function add_file()
{
FILE_NAME=$1
FILE_SIZE=${2:-20000000}
pushd ${TEST_REPO}
dd if=/dev/urandom of=${FILE_NAME} bs=1000 count=20000
dd if=/dev/urandom of=${FILE_NAME} bs=1 count=$(($RANDOM % ${FILE_SIZE})) &> /dev/null
popd
}

View File

@@ -13,6 +13,7 @@ import (
"path"
"time"
"sort"
"sync"
"sync/atomic"
"strings"
"strconv"
@@ -70,9 +71,9 @@ func CreateBackupManager(snapshotID string, storage Storage, top string, passwor
// SetupSnapshotCache creates the snapshot cache, which is merely a local storage under the default .duplicacy
// directory
func (manager *BackupManager) SetupSnapshotCache(top string, storageName string) bool {
func (manager *BackupManager) SetupSnapshotCache(storageName string) bool {
preferencePath := GetDuplicacyPreferencePath(top)
preferencePath := GetDuplicacyPreferencePath()
cacheDir := path.Join(preferencePath, "cache", storageName)
storage, err := CreateFileStorage(cacheDir, 1)
@@ -94,11 +95,19 @@ func (manager *BackupManager) SetupSnapshotCache(top string, storageName string)
return true
}
// setEntryContent sets the 4 content pointers for each entry in 'entries'. 'offset' indicates the value
// to be added to the StartChunk and EndChunk pointers, used when intending to append 'entries' to the
// original unchanged entry list.
//
// This function assumes the Size field of each entry is equal to the length of the chunk content that belongs
// to the file.
func setEntryContent(entries[] *Entry, chunkLengths[]int, offset int) {
if len(entries) == 0 {
return
}
// The following code works by iterating over 'entries' and 'chunkLengths' and keeping track of the
// accumulated total file size and the accumulated total chunk size.
i := 0
totalChunkSize := int64(0)
totalFileSize := entries[i].Size
@@ -115,6 +124,8 @@ func setEntryContent(entries[] *Entry, chunkLengths[]int, offset int) {
break
}
// If the current file ends at the end of the current chunk, the next file will
// start at the next chunk
if totalChunkSize + int64(length) == totalFileSize {
entries[i].StartChunk = j + 1 + offset
entries[i].StartOffset = 0
@@ -126,6 +137,9 @@ func setEntryContent(entries[] *Entry, chunkLengths[]int, offset int) {
totalFileSize += entries[i].Size
}
if i >= len(entries) {
break
}
totalChunkSize += int64(length)
}
}
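
To make the pointer arithmetic above concrete, here is a small worked example. It is the editor's own simplification rather than Duplicacy's setEntryContent (in particular, whether the real End pointers are inclusive may differ): three files are laid over three chunks and each file's start and end are translated into (chunk index, offset) pairs.

```go
package main

import "fmt"

// locate translates an absolute byte position in the concatenated chunk
// stream into a (chunk index, offset within chunk) pair.
func locate(pos int64, chunkLengths []int) (chunk int, offset int) {
	var start int64
	for i, length := range chunkLengths {
		if pos < start+int64(length) {
			return i, int(pos - start)
		}
		start += int64(length)
	}
	// pos is at or beyond the end of the last chunk
	last := len(chunkLengths) - 1
	return last, chunkLengths[last]
}

func main() {
	fileSizes := []int64{100, 50, 200}   // three files, 350 bytes in total
	chunkLengths := []int{120, 130, 100} // three chunks, 350 bytes in total
	offset := 0                          // index shift used when appending to an existing chunk list

	var fileStart int64
	for i, size := range fileSizes {
		startChunk, startOffset := locate(fileStart, chunkLengths)
		endChunk, endOffset := locate(fileStart+size-1, chunkLengths) // last byte of the file
		fmt.Printf("file %d: StartChunk=%d StartOffset=%d EndChunk=%d EndOffset=%d\n",
			i, startChunk+offset, startOffset, endChunk+offset, endOffset)
		fileStart += size
	}
}
```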
@@ -150,7 +164,6 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
remoteSnapshot := manager.SnapshotManager.downloadLatestSnapshot(manager.snapshotID)
if remoteSnapshot == nil {
quickMode = false
remoteSnapshot = CreateEmptySnapshot(manager.snapshotID)
LOG_INFO("BACKUP_START", "No previous backup found")
} else {
@@ -171,28 +184,72 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
// UploadChunk.
chunkCache := make(map[string]bool)
var incompleteSnapshot *Snapshot
// A revision number of 0 means this is the initial backup
if remoteSnapshot.Revision > 0 {
// Add all chunks in the last snapshot to the
// Add all chunks in the last snapshot to the cache
for _, chunkID := range manager.SnapshotManager.GetSnapshotChunks(remoteSnapshot) {
chunkCache[chunkID] = true
}
} else if manager.storage.IsFastListing() {
// If the listing operation is fast, list all chunks and put them in the cache.
LOG_INFO("BACKUP_LIST", "Listing all chunks")
allChunks, _ := manager.SnapshotManager.ListAllFiles(manager.storage, "chunks/")
for _, chunk := range allChunks {
if len(chunk) == 0 || chunk[len(chunk) - 1] == '/' {
continue
}
if strings.HasSuffix(chunk, ".fsl") {
continue
}
chunk = strings.Replace(chunk, "/", "", -1)
chunkCache[chunk] = true
} else {
// In quick mode, attempt to load the incomplete snapshot from the last incomplete backup if there is one.
if quickMode {
incompleteSnapshot = LoadIncompleteSnapshot()
}
// If the listing operation is fast or there is an incomplete snapshot, list all chunks and
// put them in the cache.
if manager.storage.IsFastListing() || incompleteSnapshot != nil {
LOG_INFO("BACKUP_LIST", "Listing all chunks")
allChunks, _ := manager.SnapshotManager.ListAllFiles(manager.storage, "chunks/")
for _, chunk := range allChunks {
if len(chunk) == 0 || chunk[len(chunk) - 1] == '/' {
continue
}
if strings.HasSuffix(chunk, ".fsl") {
continue
}
chunk = strings.Replace(chunk, "/", "", -1)
chunkCache[chunk] = true
}
}
if incompleteSnapshot != nil {
// This is the last chunk from the incomplete snapshot that can be found in the cache
lastCompleteChunk := -1
for i, chunkHash := range incompleteSnapshot.ChunkHashes {
chunkID := manager.config.GetChunkIDFromHash(chunkHash)
if _, ok := chunkCache[chunkID]; ok {
lastCompleteChunk = i
} else {
break
}
}
// Only keep those files whose chunks exist in the cache
var files []*Entry
for _, file := range incompleteSnapshot.Files {
if file.StartChunk <= lastCompleteChunk && file.EndChunk <= lastCompleteChunk {
files = append(files, file)
} else {
break
}
}
incompleteSnapshot.Files = files
// Remove incomplete chunks (they may not have been uploaded)
incompleteSnapshot.ChunkHashes = incompleteSnapshot.ChunkHashes[:lastCompleteChunk + 1]
incompleteSnapshot.ChunkLengths = incompleteSnapshot.ChunkLengths[:lastCompleteChunk + 1]
remoteSnapshot = incompleteSnapshot
}
}
var numberOfNewFileChunks int // number of new file chunks
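
The resume path above trims the incomplete snapshot to the longest prefix of chunks that actually exist on the storage and keeps only the files fully covered by that prefix. A stripped-down sketch of that trimming step follows; it is an editor's illustration with hypothetical types, not the actual code.

```go
package main

import "fmt"

// toyFile stands in for Duplicacy's Entry; only the chunk pointers matter here.
type toyFile struct {
	Name       string
	StartChunk int
	EndChunk   int
}

// trim finds the last chunk index whose ID is present on the storage and
// drops every file that is not fully covered by that prefix.
func trim(chunkIDs []string, uploaded map[string]bool, files []toyFile) (int, []toyFile) {
	lastComplete := -1
	for i, id := range chunkIDs {
		if !uploaded[id] {
			break
		}
		lastComplete = i
	}
	var kept []toyFile
	for _, f := range files {
		if f.StartChunk <= lastComplete && f.EndChunk <= lastComplete {
			kept = append(kept, f)
		} else {
			break
		}
	}
	return lastComplete, kept
}

func main() {
	chunkIDs := []string{"c0", "c1", "c2", "c3"}
	uploaded := map[string]bool{"c0": true, "c1": true} // c2 was never uploaded
	files := []toyFile{{"a", 0, 0}, {"b", 1, 2}, {"c", 3, 3}}

	last, kept := trim(chunkIDs, uploaded, files)
	fmt.Println("last complete chunk:", last) // 1
	for _, f := range kept {
		fmt.Println("kept:", f.Name) // only "a"; "b" spans the missing chunk
	}
}
```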
@@ -211,10 +268,11 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
var modifiedEntries [] *Entry // Files that have been modified or newly created
var preservedEntries [] *Entry // Files unchanged
// If the quick mode is enabled, we simply treat all files as if they were new, and break them into chunks.
// If the quick mode is disabled and there isn't an incomplete snapshot from the last (failed) backup,
// we simply treat all files as if they were new, and break them into chunks.
// Otherwise, we need to find those that are new or recently modified
if !quickMode {
if remoteSnapshot.Revision == 0 && incompleteSnapshot == nil {
modifiedEntries = localSnapshot.Files
for _, entry := range modifiedEntries {
totalModifiedFileSize += entry.Size
@@ -268,7 +326,7 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
var preservedChunkHashes []string
var preservedChunkLengths []int
// For each preserved file, adjust the indices StartChunk and EndChunk. This is done by finding gaps
// For each preserved file, adjust the StartChunk and EndChunk pointers. This is done by finding gaps
// between these indices and subtracting the number of deleted chunks.
last := -1
deletedChunks := 0
@@ -295,6 +353,7 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
var uploadedEntries [] *Entry
var uploadedChunkHashes []string
var uploadedChunkLengths []int
var uploadedChunkLock = &sync.Mutex{}
// the file reader implements the Reader interface. When an EOF is encountered, it opens the next file unless it
// is the last file.
@@ -318,6 +377,37 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
chunkMaker := CreateChunkMaker(manager.config, false)
chunkUploader := CreateChunkUploader(manager.config, manager.storage, nil, threads, nil)
localSnapshotReady := false
var once sync.Once
if remoteSnapshot.Revision == 0 {
// In case an error occurs during the initial backup, save the incomplete snapshot
RunAtError = func() {
once.Do(
func() {
if !localSnapshotReady {
// Lock it to gain exclusive access to uploadedChunkHashes and uploadedChunkLengths
uploadedChunkLock.Lock()
for _, entry := range uploadedEntries {
entry.EndChunk = -1
}
setEntryContent(uploadedEntries, uploadedChunkLengths, len(preservedChunkHashes))
if len(preservedChunkHashes) > 0 {
localSnapshot.ChunkHashes = preservedChunkHashes
localSnapshot.ChunkHashes = append(localSnapshot.ChunkHashes, uploadedChunkHashes...)
localSnapshot.ChunkLengths = preservedChunkLengths
localSnapshot.ChunkLengths = append(localSnapshot.ChunkLengths, uploadedChunkLengths...)
} else {
localSnapshot.ChunkHashes = uploadedChunkHashes
localSnapshot.ChunkLengths = uploadedChunkLengths
}
uploadedChunkLock.Unlock()
}
SaveIncompleteSnapshot(localSnapshot)
})
}
}
if fileReader.CurrentFile != nil {
LOG_TRACE("PACK_START", "Packing %s", fileReader.CurrentEntry.Path)
@@ -398,8 +488,11 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
chunkUploader.StartChunk(chunk, chunkIndex)
}
// Must lock it because the RunAtError function called by other threads may access these two slices
uploadedChunkLock.Lock()
uploadedChunkHashes = append(uploadedChunkHashes, hash)
uploadedChunkLengths = append(uploadedChunkLengths, chunkSize)
uploadedChunkLock.Unlock()
},
func (fileSize int64, hash string) (io.Reader, bool) {
@@ -445,6 +538,8 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
localSnapshot.ChunkLengths = uploadedChunkLengths
}
localSnapshotReady = true
localSnapshot.EndTime = time.Now().Unix()
err = manager.SnapshotManager.CheckSnapshot(localSnapshot)
@@ -455,10 +550,15 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
localSnapshot.Tag = tag
localSnapshot.Options = ""
if !quickMode {
if !quickMode || remoteSnapshot.Revision == 0 {
localSnapshot.Options = "-hash"
}
if _, found := os.LookupEnv("DUPLICACY_FAIL_SNAPSHOT"); found {
LOG_ERROR("SNAPSHOT_FAIL", "Artificially fail the backup for testing purposes")
return false
}
if shadowCopy {
if localSnapshot.Options == "" {
localSnapshot.Options = "-vss"
@@ -505,6 +605,8 @@ func (manager *BackupManager) Backup(top string, quickMode bool, threads int, ta
manager.SnapshotManager.CleanSnapshotCache(localSnapshot, nil)
LOG_INFO("BACKUP_END", "Backup for %s at revision %d completed", top, localSnapshot.Revision)
RunAtError = func() {}
RemoveIncompleteSnapshot()
totalSnapshotChunks := len(localSnapshot.FileSequence) + len(localSnapshot.ChunkSequence) +
len(localSnapshot.LengthSequence)
@@ -981,7 +1083,7 @@ func (manager *BackupManager) RestoreFile(chunkDownloader *ChunkDownloader, chun
var existingFile, newFile *os.File
var err error
preferencePath := GetDuplicacyPreferencePath(top)
preferencePath := GetDuplicacyPreferencePath()
temporaryPath := path.Join(preferencePath, "temporary")
fullPath := joinPath(top, entry.Path)
@@ -1030,7 +1132,7 @@ func (manager *BackupManager) RestoreFile(chunkDownloader *ChunkDownloader, chun
fileHash = hash
return nil, false
})
if fileHash == entry.Hash {
if fileHash == entry.Hash && fileHash != "" {
LOG_TRACE("DOWNLOAD_SKIP", "File %s unchanged (by hash)", entry.Path)
return false
}
@@ -1131,7 +1233,7 @@ func (manager *BackupManager) RestoreFile(chunkDownloader *ChunkDownloader, chun
// Verify the download by hash
hash := hex.EncodeToString(hasher.Sum(nil))
if hash != entry.Hash {
if hash != entry.Hash && hash != "" && entry.Hash != "" && !strings.HasPrefix(entry.Hash, "#") {
LOG_ERROR("DOWNLOAD_HASH", "File %s has a mismatched hash: %s instead of %s (in-place)",
fullPath, "", entry.Hash)
return false
@@ -1204,7 +1306,7 @@ func (manager *BackupManager) RestoreFile(chunkDownloader *ChunkDownloader, chun
}
hash := hex.EncodeToString(hasher.Sum(nil))
if hash != entry.Hash {
if hash != entry.Hash && hash != "" && entry.Hash != "" && !strings.HasPrefix(entry.Hash, "#") {
LOG_ERROR("DOWNLOAD_HASH", "File %s has a mismatched hash: %s instead of %s",
entry.Path, hash, entry.Hash)
return false

View File

@@ -215,8 +215,9 @@ func TestBackupManager(t *testing.T) {
time.Sleep(time.Duration(delay) * time.Second)
SetDuplicacyPreferencePath(testDir + "/repository1")
backupManager := CreateBackupManager("host1", storage, testDir, password)
backupManager.SetupSnapshotCache(testDir + "/repository1", "default")
backupManager.SetupSnapshotCache("default")
backupManager.Backup(testDir + "/repository1", /*quickMode=*/true, threads, "first", false, false)
time.Sleep(time.Duration(delay) * time.Second)

View File

@@ -146,7 +146,6 @@ func (maker *ChunkMaker) ForEachChunk(reader io.Reader, endOfChunk func(chunk *C
}
for {
startNewChunk()
maker.bufferStart = 0
for maker.bufferStart < maker.minimumChunkSize && !isEOF {
count, err := reader.Read(maker.buffer[maker.bufferStart : maker.minimumChunkSize])
@@ -174,6 +173,7 @@ func (maker *ChunkMaker) ForEachChunk(reader io.Reader, endOfChunk func(chunk *C
return
} else {
endOfChunk(chunk, false)
startNewChunk()
fileSize = 0
fileHasher = maker.config.NewFileHasher()
isEOF = false

View File

@@ -225,8 +225,41 @@ func (config *Config) NewKeyedHasher(key []byte) hash.Hash {
}
}
var SkipFileHash = false
func init() {
if value, found := os.LookupEnv("DUPLICACY_SKIP_FILE_HASH"); found && value != "" && value != "0" {
SkipFileHash = true
}
}
// Implement a dummy hasher to be used when SkipFileHash is true.
type DummyHasher struct {
}
func (hasher *DummyHasher) Write(p []byte) (int, error) {
return len(p), nil
}
func (hasher *DummyHasher) Sum(b []byte) []byte {
return []byte("")
}
func (hasher *DummyHasher) Reset() {
}
func (hasher *DummyHasher) Size() int {
return 0
}
func (hasher *DummyHasher) BlockSize() int {
return 0
}
func (config *Config) NewFileHasher() hash.Hash {
if config.CompressionLevel == DEFAULT_COMPRESSION_LEVEL {
if SkipFileHash {
return &DummyHasher {}
} else if config.CompressionLevel == DEFAULT_COMPRESSION_LEVEL {
hasher, _ := blake2.New(&blake2.Config{ Size: 32 })
return hasher
} else {
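
Because the dummy hasher always produces an empty digest, file hashes recorded while DUPLICACY_SKIP_FILE_HASH is set come out as empty strings, which is why the restore code in this change also guards its hash comparisons against empty values. A tiny demonstration, using a copy of the DummyHasher added above:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// DummyHasher is copied from the diff above: it accepts writes and always
// returns an empty digest.
type DummyHasher struct{}

func (h *DummyHasher) Write(p []byte) (int, error) { return len(p), nil }
func (h *DummyHasher) Sum(b []byte) []byte         { return []byte("") }
func (h *DummyHasher) Reset()                      {}
func (h *DummyHasher) Size() int                   { return 0 }
func (h *DummyHasher) BlockSize() int              { return 0 }

func main() {
	h := &DummyHasher{}
	h.Write([]byte("some file content"))
	fmt.Printf("digest: %q\n", hex.EncodeToString(h.Sum(nil))) // digest: ""
}
```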

View File

@@ -160,6 +160,9 @@ const (
otherExitCode = 101
)
// This is the function to be called before exiting when an error occurs.
var RunAtError func() = func() {}
func CatchLogException() {
if r := recover(); r != nil {
switch e := r.(type) {
@@ -167,10 +170,12 @@ func CatchLogException() {
if printStackTrace {
debug.PrintStack()
}
RunAtError()
os.Exit(duplicacyExitCode)
default:
fmt.Fprintf(os.Stderr, "%v\n", e)
debug.PrintStack()
RunAtError()
os.Exit(otherExitCode)
}
}

View File

@@ -128,12 +128,6 @@ func (client *OneDriveClient) call(url string, method string, input interface{},
Error: OneDriveError { Status: response.StatusCode },
}
if err := json.NewDecoder(response.Body).Decode(errorResponse); err != nil {
return nil, 0, OneDriveError { Status: response.StatusCode, Message: fmt.Sprintf("Unexpected response"), }
}
errorResponse.Error.Status = response.StatusCode
if response.StatusCode == 401 {
if url == OneDriveRefreshTokenURL {
@@ -152,6 +146,12 @@ func (client *OneDriveClient) call(url string, method string, input interface{},
backoff *= 2
continue
} else {
if err := json.NewDecoder(response.Body).Decode(errorResponse); err != nil {
return nil, 0, OneDriveError { Status: response.StatusCode, Message: fmt.Sprintf("Unexpected response"), }
}
errorResponse.Error.Status = response.StatusCode
return nil, 0, errorResponse.Error
}
}
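
The point of this change is ordering: decide whether to retry from the status code first, and only decode the JSON error body when the request will not be retried, since a throttling response (such as the 503 in the commit message) may not carry a parseable body. The sketch below shows the pattern in isolation; it is not the OneDriveClient code, and the URL, error shape, and retry conditions are illustrative assumptions.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// apiError is an illustrative error shape, not OneDrive's actual schema.
type apiError struct {
	Status  int    `json:"status"`
	Message string `json:"message"`
}

// callWithBackoff retries throttled or transient failures with exponential
// backoff and parses the error body only for permanent failures.
func callWithBackoff(url string, maxRetries int) error {
	backoff := time.Second
	for attempt := 0; attempt < maxRetries; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		if resp.StatusCode < 400 {
			resp.Body.Close()
			return nil // success; a real client would read the body here
		}
		if resp.StatusCode == 429 || resp.StatusCode >= 500 {
			// Throttled or transient server error: back off and retry without
			// touching the body, which may not even be valid JSON.
			resp.Body.Close()
			time.Sleep(backoff)
			backoff *= 2
			continue
		}
		// Permanent failure: now it is worth decoding the error body.
		var e apiError
		decodeErr := json.NewDecoder(resp.Body).Decode(&e)
		resp.Body.Close()
		if decodeErr != nil {
			return fmt.Errorf("%d unexpected response", resp.StatusCode)
		}
		e.Status = resp.StatusCode
		return fmt.Errorf("%d %s", e.Status, e.Message)
	}
	return fmt.Errorf("giving up after %d attempts", maxRetries)
}

func main() {
	// Placeholder invocation; the URL is illustrative only.
	if err := callWithBackoff("https://example.com/", 3); err != nil {
		fmt.Println(err)
	}
}
```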

View File

@@ -24,53 +24,38 @@ type Preference struct {
Keys map[string]string `json:"keys"`
}
var preferencePath string
var Preferences [] Preference
// Compute .duplicacy directory path name:
// - if .duplicacy is a directory -> compute absolute path name and return it
// - if .duplicacy is a file -> assume this file contains the real path name of .duplicacy
// - if the pointed-to directory does not exist -> return an error
func GetDuplicacyPreferencePath( repository string) (preferencePath string){
func LoadPreferences(repository string) bool {
preferencePath = path.Join(repository, DUPLICACY_DIRECTORY) //TOKEEP
preferencePath = path.Join(repository, DUPLICACY_DIRECTORY)
stat, err := os.Stat(preferencePath)
if err != nil && !os.IsNotExist(err) {
LOG_ERROR("DOT_DUPLICACY_PATH", "Failed to retrieve the information about the directory %s: %v",
repository, err)
return ""
if err != nil {
LOG_ERROR("PREFERENCE_PATH", "Failed to retrieve the information about the directory %s: %v", repository, err)
return false
}
if stat != nil && stat.IsDir() {
// $repository/.duplicacy exists and is a directory --> we found the .duplicacy directory
return path.Clean(preferencePath)
}
if stat != nil && stat.Mode().IsRegular() {
b, err := ioutil.ReadFile(preferencePath) // just pass the file name
if !stat.IsDir() {
content, err := ioutil.ReadFile(preferencePath)
if err != nil {
LOG_ERROR("DOT_DUPLICACY_PATH", "Failed to read file %s: %v",
preferencePath, err)
return ""
LOG_ERROR("DOT_DUPLICACY_PATH", "Failed to locate the preference path: %v", err)
return false
}
dotDuplicacyContent := string(b) // convert content to a 'string'
stat, err := os.Stat(dotDuplicacyContent)
if err != nil && !os.IsNotExist(err) {
LOG_ERROR("DOT_DUPLICACY_PATH", "Failed to retrieve the information about the directory %s: %v",
repository, err)
return ""
realPreferencePath := string(content)
stat, err := os.Stat(realPreferencePath)
if err != nil {
LOG_ERROR("PREFERENCE_PATH", "Failed to retrieve the information about the directory %s: %v", content, err)
return false
}
if stat != nil && stat.IsDir() {
// If expression read from .duplicacy file is a directory --> we found the .duplicacy directory
return path.Clean(dotDuplicacyContent)
if !stat.IsDir() {
LOG_ERROR("PREFERENCE_PATH", "The preference path %s is not a directory", realPreferencePath)
}
}
return ""
}
func LoadPreferences(repository string) (bool) {
preferencePath = realPreferencePath
}
preferencePath := GetDuplicacyPreferencePath(repository)
description, err := ioutil.ReadFile(path.Join(preferencePath, "preferences"))
if err != nil {
LOG_ERROR("PREFERENCE_OPEN", "Failed to read the preference file from repository %s: %v", repository, err)
@@ -91,14 +76,27 @@ func LoadPreferences(repository string) (bool) {
return true
}
func SavePreferences(repository string) (bool) {
func GetDuplicacyPreferencePath() string {
if preferencePath == "" {
LOG_ERROR("PREFERENCE_PATH", "The preference path has not been set")
return ""
}
return preferencePath
}
// Normally 'preferencePath' is set in LoadPreferences; however, if LoadPreferences is not called, this function
// provides another chance to set 'preferencePath'
func SetDuplicacyPreferencePath(p string) {
preferencePath = p
}
func SavePreferences() (bool) {
description, err := json.MarshalIndent(Preferences, "", " ")
if err != nil {
LOG_ERROR("PREFERENCE_MARSHAL", "Failed to marshal the repository preferences: %v", err)
return false
}
preferencePath := GetDuplicacyPreferencePath(repository)
preferenceFile := path.Join(preferencePath, "/preferences")
preferenceFile := path.Join(GetDuplicacyPreferencePath(), "preferences")
err = ioutil.WriteFile(preferenceFile, description, 0644)
if err != nil {

View File

@@ -509,7 +509,7 @@ func CreateShadowCopy(top string, shadowCopy bool) (shadowTop string) {
snapshotPath := uint16ArrayToString(properties.SnapshotDeviceObject)
preferencePath := GetDuplicacyPreferencePath(top)
preferencePath := GetDuplicacyPreferencePath()
shadowLink = preferencePath + "\\shadow"
os.Remove(shadowLink)
err = os.Symlink(snapshotPath + "\\", shadowLink)

View File

@@ -68,8 +68,7 @@ func CreateSnapshotFromDirectory(id string, top string) (snapshot *Snapshot, ski
var patterns []string
preferencePath := GetDuplicacyPreferencePath(top)
patternFile, err := ioutil.ReadFile(path.Join(preferencePath, "filters"))
patternFile, err := ioutil.ReadFile(path.Join(GetDuplicacyPreferencePath(), "filters"))
if err == nil {
for _, pattern := range strings.Split(string(patternFile), "\n") {
pattern = strings.TrimSpace(pattern)
@@ -138,6 +137,96 @@ func CreateSnapshotFromDirectory(id string, top string) (snapshot *Snapshot, ski
return snapshot, skippedDirectories, skippedFiles, nil
}
// This is the struct used to save/load incomplete snapshots
type IncompleteSnapshot struct {
Files [] *Entry
ChunkHashes []string
ChunkLengths [] int
}
// LoadIncompleteSnapshot loads the incomplete snapshot if it exists
func LoadIncompleteSnapshot() (snapshot *Snapshot) {
snapshotFile := path.Join(GetDuplicacyPreferencePath(), "incomplete")
description, err := ioutil.ReadFile(snapshotFile)
if err != nil {
return nil
}
var incompleteSnapshot IncompleteSnapshot
err = json.Unmarshal(description, &incompleteSnapshot)
if err != nil {
return nil
}
var chunkHashes []string
for _, chunkHash := range incompleteSnapshot.ChunkHashes {
hash, err := hex.DecodeString(chunkHash)
if err != nil {
return nil
}
chunkHashes = append(chunkHashes, string(hash))
}
snapshot = &Snapshot {
Files: incompleteSnapshot.Files,
ChunkHashes: chunkHashes,
ChunkLengths: incompleteSnapshot.ChunkLengths,
}
LOG_INFO("INCOMPLETE_LOAD", "Incomplete snpashot loaded from %s", snapshotFile)
return snapshot
}
// SaveIncompleteSnapshot saves the incomplete snapshot under the preference directory
func SaveIncompleteSnapshot(snapshot *Snapshot) {
var files []*Entry
for _, file := range snapshot.Files {
if file.EndChunk >= 0 {
file.Attributes = nil
files = append(files, file)
} else {
break
}
}
var chunkHashes []string
for _, chunkHash := range snapshot.ChunkHashes {
chunkHashes = append(chunkHashes, hex.EncodeToString([]byte(chunkHash)))
}
incompleteSnapshot := IncompleteSnapshot {
Files: files,
ChunkHashes: chunkHashes,
ChunkLengths: snapshot.ChunkLengths,
}
description, err := json.Marshal(incompleteSnapshot)
if err != nil {
LOG_WARN("INCOMPLETE_ENCODE", "Failed to encode the incomplete snapshot: %v", err)
return
}
snapshotFile := path.Join(GetDuplicacyPreferencePath(), "incomplete")
err = ioutil.WriteFile(snapshotFile, description, 0644)
if err != nil {
LOG_WARN("INCOMPLETE_WRITE", "Failed to save the incomplete snapshot: %v", err)
return
}
LOG_INFO("INCOMPLETE_SAVE", "Incomplete snapshot saved to %s", snapshotFile)
}
func RemoveIncompleteSnapshot() {
snapshotFile := path.Join(GetDuplicacyPreferencePath(), "incomplete")
if stat, err := os.Stat(snapshotFile); err == nil && !stat.IsDir() {
err = os.Remove(snapshotFile)
if err != nil {
LOG_INFO("INCOMPLETE_SAVE", "Failed to remove ncomplete snapshot: %v", err)
} else {
LOG_INFO("INCOMPLETE_SAVE", "Removed incomplete snapshot %s", snapshotFile)
}
}
}
// CreateSnapshotFromDescription creates a snapshot from a JSON description.
func CreateSnapshotFromDescription(description []byte) (snapshot *Snapshot, err error) {

View File

@@ -1084,7 +1084,7 @@ func (manager *SnapshotManager) RetrieveFile(snapshot *Snapshot, file *Entry, ou
if alternateHash {
fileHash = "#" + fileHash
}
if strings.ToLower(fileHash) != strings.ToLower(file.Hash) {
if strings.ToLower(fileHash) != strings.ToLower(file.Hash) && !SkipFileHash {
LOG_WARN("SNAPSHOT_HASH", "File %s has mismatched hashes: %s vs %s", file.Path, file.Hash, fileHash)
return false
}
@@ -1496,7 +1496,7 @@ func (manager *SnapshotManager) resurrectChunk(fossilPath string, chunkID string
// Note that a snapshot being created when step 2 is in progress may reference a fossil. To avoid this
problem, never remove the latest revision (unless exclusive is true), and only cache chunks referenced
by the latest revision.
func (manager *SnapshotManager) PruneSnapshots(top string, selfID string, snapshotID string, revisionsToBeDeleted []int,
func (manager *SnapshotManager) PruneSnapshots(selfID string, snapshotID string, revisionsToBeDeleted []int,
tags []string, retentions []string,
exhaustive bool, exclusive bool, ignoredIDs []string,
dryRun bool, deleteOnly bool, collectOnly bool) bool {
@@ -1511,7 +1511,7 @@ func (manager *SnapshotManager) PruneSnapshots(top string, selfID string, snapsh
LOG_WARN("DELETE_OPTIONS", "Tags or retention policy will be ignored if at least one revision is specified")
}
preferencePath := GetDuplicacyPreferencePath(top)
preferencePath := GetDuplicacyPreferencePath()
logDir := path.Join(preferencePath, "logs")
os.Mkdir(logDir, 0700)
logFileName := path.Join(logDir, time.Now().Format("prune-log-20060102-150405"))

View File

@@ -248,11 +248,11 @@ func TestSingleRepositoryPrune(t *testing.T) {
checkTestSnapshots(snapshotManager, 3, 0)
t.Logf("Removing snapshot repository1 revision 1 with --exclusive")
snapshotManager.PruneSnapshots(testDir, "repository1", "repository1", []int{1}, []string{}, []string{}, false, true, []string{}, false, false, false)
snapshotManager.PruneSnapshots("repository1", "repository1", []int{1}, []string{}, []string{}, false, true, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 2, 0)
t.Logf("Removing snapshot repository1 revision 2 without --exclusive")
snapshotManager.PruneSnapshots(testDir, "repository1", "repository1", []int{2}, []string{}, []string{}, false, false, []string{}, false, false, false)
snapshotManager.PruneSnapshots("repository1", "repository1", []int{2}, []string{}, []string{}, false, false, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 1, 2)
t.Logf("Creating 1 snapshot")
@@ -261,7 +261,7 @@ func TestSingleRepositoryPrune(t *testing.T) {
checkTestSnapshots(snapshotManager, 2, 2)
t.Logf("Prune without removing any snapshots -- fossils will be deleted")
snapshotManager.PruneSnapshots(testDir, "repository1", "repository1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
snapshotManager.PruneSnapshots("repository1", "repository1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 2, 0)
}
@@ -288,11 +288,11 @@ func TestSingleHostPrune(t *testing.T) {
checkTestSnapshots(snapshotManager, 3, 0)
t.Logf("Removing snapshot vm1@host1 revision 1 without --exclusive")
snapshotManager.PruneSnapshots(testDir, "vm1@host1", "vm1@host1", []int{1}, []string{}, []string{}, false, false, []string{}, false, false, false)
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{1}, []string{}, []string{}, false, false, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 2, 2)
t.Logf("Prune without removing any snapshots -- no fossils will be deleted")
snapshotManager.PruneSnapshots(testDir, "vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 2, 2)
t.Logf("Creating 1 snapshot")
@@ -301,7 +301,7 @@ func TestSingleHostPrune(t *testing.T) {
checkTestSnapshots(snapshotManager, 3, 2)
t.Logf("Prune without removing any snapshots -- fossils will be deleted")
snapshotManager.PruneSnapshots(testDir, "vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 3, 0)
}
@@ -329,11 +329,11 @@ func TestMultipleHostPrune(t *testing.T) {
checkTestSnapshots(snapshotManager, 3, 0)
t.Logf("Removing snapshot vm1@host1 revision 1 without --exclusive")
snapshotManager.PruneSnapshots(testDir, "vm1@host1", "vm1@host1", []int{1}, []string{}, []string{}, false, false, []string{}, false, false, false)
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{1}, []string{}, []string{}, false, false, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 2, 2)
t.Logf("Prune without removing any snapshots -- no fossils will be deleted")
snapshotManager.PruneSnapshots(testDir, "vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 2, 2)
t.Logf("Creating 1 snapshot")
@@ -342,7 +342,7 @@ func TestMultipleHostPrune(t *testing.T) {
checkTestSnapshots(snapshotManager, 3, 2)
t.Logf("Prune without removing any snapshots -- no fossils will be deleted")
snapshotManager.PruneSnapshots(testDir, "vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 3, 2)
t.Logf("Creating 1 snapshot")
@@ -351,7 +351,7 @@ func TestMultipleHostPrune(t *testing.T) {
checkTestSnapshots(snapshotManager, 4, 2)
t.Logf("Prune without removing any snapshots -- fossils will be deleted")
snapshotManager.PruneSnapshots(testDir, "vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 4, 0)
}
@@ -376,7 +376,7 @@ func TestPruneAndResurrect(t *testing.T) {
checkTestSnapshots(snapshotManager, 2, 0)
t.Logf("Removing snapshot vm1@host1 revision 1 without --exclusive")
snapshotManager.PruneSnapshots(testDir, "vm1@host1", "vm1@host1", []int{1}, []string{}, []string{}, false, false, []string{}, false, false, false)
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{1}, []string{}, []string{}, false, false, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 1, 2)
t.Logf("Creating 1 snapshot")
@@ -385,7 +385,7 @@ func TestPruneAndResurrect(t *testing.T) {
checkTestSnapshots(snapshotManager, 2, 2)
t.Logf("Prune without removing any snapshots -- one fossil will be resurrected")
snapshotManager.PruneSnapshots(testDir, "vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 2, 0)
}
@@ -413,11 +413,11 @@ func TestInactiveHostPrune(t *testing.T) {
checkTestSnapshots(snapshotManager, 3, 0)
t.Logf("Removing snapshot vm1@host1 revision 1")
snapshotManager.PruneSnapshots(testDir, "vm1@host1", "vm1@host1", []int{1}, []string{}, []string{}, false, false, []string{}, false, false, false)
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{1}, []string{}, []string{}, false, false, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 2, 2)
t.Logf("Prune without removing any snapshots -- no fossils will be deleted")
snapshotManager.PruneSnapshots(testDir, "vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 2, 2)
t.Logf("Creating 1 snapshot")
@@ -426,7 +426,7 @@ func TestInactiveHostPrune(t *testing.T) {
checkTestSnapshots(snapshotManager, 3, 2)
t.Logf("Prune without removing any snapshots -- fossils will be deleted")
snapshotManager.PruneSnapshots(testDir, "vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{}, []string{}, []string{}, false, false, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 3, 0)
}
@@ -454,14 +454,14 @@ func TestRetentionPolicy(t *testing.T) {
checkTestSnapshots(snapshotManager, 30, 0)
t.Logf("Removing snapshot vm1@host1 0:20 with --exclusive")
snapshotManager.PruneSnapshots(testDir, "vm1@host1", "vm1@host1", []int{}, []string{}, []string{"0:20"}, false, true, []string{}, false, false, false)
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{}, []string{}, []string{"0:20"}, false, true, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 19, 0)
t.Logf("Removing snapshot vm1@host1 -k 0:20 with --exclusive")
snapshotManager.PruneSnapshots(testDir, "vm1@host1", "vm1@host1", []int{}, []string{}, []string{"0:20"}, false, true, []string{}, false, false, false)
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{}, []string{}, []string{"0:20"}, false, true, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 19, 0)
t.Logf("Removing snapshot vm1@host1 -k 3:14 -k 2:7 with --exclusive")
snapshotManager.PruneSnapshots(testDir, "vm1@host1", "vm1@host1", []int{}, []string{}, []string{"3:14", "2:7"}, false, true, []string{}, false, false, false)
snapshotManager.PruneSnapshots("vm1@host1", "vm1@host1", []int{}, []string{}, []string{"3:14", "2:7"}, false, true, []string{}, false, false, false)
checkTestSnapshots(snapshotManager, 12, 0)
}

View File

@@ -75,13 +75,9 @@ func (storage *RateLimitedStorage) SetRateLimits(downloadRateLimit int, uploadRa
storage.UploadRateLimit = uploadRateLimit
}
func checkHostKey(repository string, hostname string, remote net.Addr, key ssh.PublicKey) error {
if len(repository) == 0 {
return nil
}
func checkHostKey(hostname string, remote net.Addr, key ssh.PublicKey) error {
preferencePath := GetDuplicacyPreferencePath(repository)
preferencePath := GetDuplicacyPreferencePath()
hostFile := path.Join(preferencePath, "known_hosts")
file, err := os.OpenFile(hostFile, os.O_RDWR | os.O_CREATE, 0600)
if err != nil {
@@ -126,7 +122,7 @@ func checkHostKey(repository string, hostname string, remote net.Addr, key ssh.P
}
// CreateStorage creates a storage object based on the provided storage URL.
func CreateStorage(repository string, preference Preference, resetPassword bool, threads int) (storage Storage) {
func CreateStorage(preference Preference, resetPassword bool, threads int) (storage Storage) {
storageURL := preference.StorageURL
@@ -282,7 +278,7 @@ func CreateStorage(repository string, preference Preference, resetPassword bool,
}
hostKeyChecker := func(hostname string, remote net.Addr, key ssh.PublicKey) error {
return checkHostKey(repository, hostname, remote, key)
return checkHostKey(hostname, remote, key)
}
sftpStorage, err := CreateSFTPStorage(server, port, username, storageDir, authMethods, hostKeyChecker, threads)