
Re: [Duplicity-talk] long term incrementals and scalability question


From: Elvar
Subject: Re: [Duplicity-talk] long term incrementals and scalability question
Date: Tue, 16 Apr 2013 14:14:28 -0500
User-agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:17.0) Gecko/20130328 Thunderbird/17.0.5


On 4/16/2013 1:32 PM, address@hidden wrote:
On 16.04.2013 20:22, Elvar wrote:
On 4/16/2013 12:16 PM, address@hidden wrote:
On 16.04.2013 18:57, Elvar wrote:
On 4/16/2013 11:24 AM, address@hidden wrote:
On 16.04.2013 18:05, Elvar wrote:
I am currently using Duplicity to make backups of a fast-growing email archive
solution. I have Duplicity backing the data up via FTP to an offsite server. I
performed the initial full backup and have been doing incrementals since. I'm
using 250M volumes to try to cut down on the number of files on the remote
server. My question is: is the method I'm using viable long term? Performing
semi-routine full backups is not an option due to how long they take and the
amount of data that has to be transferred.
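For concreteness, a setup like the one described might be driven roughly as in the
sketch below; the source path, host, and user are placeholders, not details from
the thread. duplicity's --volsize option takes the volume size in megabytes, and
its FTP backend reads the password from the FTP_PASSWORD environment variable.

    import os
    import subprocess

    # Sketch of the setup described above: incremental duplicity backups over
    # FTP with 250 MB volumes. Source path, host, and user are hypothetical.
    os.environ["FTP_PASSWORD"] = "secret"  # duplicity's FTP backend reads this

    subprocess.run(
        [
            "duplicity", "incremental",
            "--volsize", "250",  # bigger volumes => fewer files on the backend
            "/var/mail/archive",
            "ftp://backupuser@offsite.example.com/mailarchive",
        ],
        check=True,
    )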

No. Currently, when one signature/volume becomes corrupt, all following backups
become unusable as well. So you either

1. have to do a full backup on a regular schedule,
or
2. do new backups against an old full by moving the incrementals manually somewhere
else on the backend (and back, if you want to restore a backup contained in them).
NOTE: this is a hack and not advised, but currently the only way to "rebase"
incrementals.

Also, with #2 you'd be assuming that your full will never get corrupted, which is
probably not very clever either.

..ede/duply.net
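To illustrate option 2, here is a minimal sketch of the file shuffling described
above, assuming a local backend directory and duplicity's usual duplicity-inc.* /
duplicity-new-signatures.* file naming; the paths are hypothetical, and over FTP
you would move the same files with your FTP client instead.

    import glob
    import os
    import shutil

    # "Rebase" hack from option 2: park the incremental volumes, manifests and
    # signatures elsewhere so the next incremental runs against the old full
    # alone. Paths are hypothetical.
    backend = "/srv/backups/mailarchive"
    parked = os.path.join(backend, "parked-incrementals")
    os.makedirs(parked, exist_ok=True)

    # Incremental volumes and manifests start with "duplicity-inc.", their
    # signatures with "duplicity-new-signatures.".
    for pattern in ("duplicity-inc.*", "duplicity-new-signatures.*"):
        for path in glob.glob(os.path.join(backend, pattern)):
            shutil.move(path, parked)

    # To restore a backup contained in the parked incrementals, move them back
    # first -- and remember the warning above: this is a hack.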


Would I be better off doing a straight rsync of the archive then, in your opinion?

If you have no need of encryption, yes.

How much data are we talking about? Altogether, and change per week/month?

..ede/duply.net

The current trend shows about 1G of growth per day, 7G/week, etc. Encryption is not
necessary, as the data will be encrypted over the wire via IPsec or some other
mechanism. The total data is around 30G currently.

Thanks,

If you trust your backend, then there is no need for duplicity; simply rsync your
archives, provided you rsync the originals and not compressed copies.

Beware: by default, rsync compares only file size and timestamps. The more costly
checksum comparison needs to be enabled explicitly; see '-c' in the manpage. Also
make sure files do not change during the backup.

..ede/duply.net
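A sketch of that rsync alternative with the checksum comparison enabled; source
path and destination host are placeholders. '-a' is archive mode, and '-c' switches
the comparison from the default size/timestamp check to full-file checksums.

    import subprocess

    # rsync alternative: archive mode plus '-c' to force the costlier checksum
    # comparison instead of the default size/timestamp check. Source path and
    # destination host are hypothetical.
    subprocess.run(
        [
            "rsync", "-a", "-c",
            "/var/mail/archive/",
            "backupuser@offsite.example.com:/srv/mailarchive/",
        ],
        check=True,
    )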

That's the unfortunate thing: this archiving product stores everything in Maildir
format, so data that hasn't been rotated yet is constantly changing and being added
to. I appreciate the tips.

Thanks!


