Re: [Duplicity-talk] merge full and incremental on backup side
From: edgar . soldin
Subject: Re: [Duplicity-talk] merge full and incremental on backup side
Date: Wed, 20 Apr 2011 22:25:17 +0200
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.15) Gecko/20110303 Thunderbird/3.1.9
On 20.04.2011 18:19, Tim Riemenschneider wrote:
> On 20.04.2011 15:39, Erno Kovacs wrote:
>> Okay, then I need a helper script, which I could run on the
>> backup side for doing the merge.
>>
> A point to keep in mind: running a merge (however it is done) on the
> backup-system would require decrypting the volumes (on that backup system).
> IMO that would be inappropriate... Encrypting the backups is the main
> reason I use duplicity ;-)
>
i agree .. what good is a secure backup on a non-trusted storage if you place
scripts there that can decrypt and reassemble your backup?
if that is really what you want, you can find other solutions that do not
involve encryption. it is also kind of pointless: for security reasons the
merge would have to run on the machine being backed up, which means downloading
everything and re-uploading the result. at that point you could much more
easily just run a new full, without the doubled data transfer from/to the backend.
btw. it is suggested to do a full every once in a while, e.g. weekly or
monthly, to keep the chains short (performance) and to minimize the probability
of one corrupted volume affecting a lot of backups.
ede/duply.net
PS: @ken, the rsync idea is kind of cute... but are you sure that rsyncing a
flattened chain would actually transfer less data than a new full? i would
argue that, because of the final encryption stage, the files would always
differ as a whole and not only partially, as rsync's delta optimization requires.
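to illustrate the PS: GPG encrypts each volume under a fresh random session
key, so re-encrypting even an unchanged file yields a completely unrelated
ciphertext, and rsync's rolling checksum finds nothing to match. a toy sketch
of that effect (the chained-hash "cipher" below is a hypothetical stand-in for
illustration only, NOT real cryptography):

```python
import hashlib
import os

BLOCK = 16  # bytes per comparison block

def toy_encrypt(data: bytes) -> bytes:
    """Toy chained cipher (illustration only, NOT real crypto).

    Like GPG, it draws a fresh random session key for every run, so even
    identical plaintexts produce completely unrelated ciphertexts.
    """
    session_key = os.urandom(16)   # new key each invocation, as GPG does
    out = bytearray()
    prev = session_key
    for i in range(0, len(data), BLOCK):
        chunk = data[i:i + BLOCK].ljust(BLOCK, b"\0")
        # each output block depends on the key and all previous input
        prev = hashlib.sha256(prev + chunk).digest()[:BLOCK]
        out += prev
    return bytes(out)

def matching_blocks(a: bytes, b: bytes) -> int:
    """Count equal same-offset blocks -- a crude stand-in for the
    redundancy rsync's delta algorithm could exploit."""
    return sum(a[i:i + BLOCK] == b[i:i + BLOCK]
               for i in range(0, min(len(a), len(b)), BLOCK))
```

encrypting the very same plaintext twice leaves zero matching blocks, while
the plaintexts of course match fully -- which is why rsyncing encrypted
volumes saves nothing over uploading a new full.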