Re: [Duplicity-talk] long term incrementals and scalability question


From: edgar.soldin
Subject: Re: [Duplicity-talk] long term incrementals and scalability question
Date: Tue, 16 Apr 2013 19:16:32 +0200
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:17.0) Gecko/20130328 Thunderbird/17.0.5

On 16.04.2013 18:57, Elvar wrote:
> 
> On 4/16/2013 11:24 AM, address@hidden wrote:
>> On 16.04.2013 18:05, Elvar wrote:
>>> I am currently using Duplicity to back up a fast-growing email archive 
>>> solution. I have Duplicity backing the data up via FTP to an offsite 
>>> server. I performed the initial full backup and have been doing 
>>> incrementals since. I'm using 250M volumes to try to cut down on the 
>>> number of files on the remote server. My question is: is this a viable 
>>> long-term method? Performing semi-routine full backups is not an option 
>>> due to how long they take and the amount of data that has to be 
>>> transferred.
>>>
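
For reference, the kind of commands behind the setup described above (250 MB
volumes, an FTP target, one full plus ongoing incrementals) might look roughly
like this; host, paths and credentials are placeholders, not taken from the
thread:

    # initial full backup, 250 MB volumes, pushed over FTP
    export FTP_PASSWORD='secret'
    export PASSPHRASE='backup-passphrase'   # GPG passphrase, or use --no-encryption
    duplicity full --volsize 250 /var/mail-archive \
        ftp://backupuser@backup.example.com/mail-archive

    # later runs only upload what changed since the previous backup
    duplicity incremental --volsize 250 /var/mail-archive \
        ftp://backupuser@backup.example.com/mail-archive
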
>> no. currently, when one signature/volume becomes corrupt, all following 
>> backups become unusable as well. so you either
>>
>> 1. have to do a full backup on a regular schedule
>> or
>> 2. do new backups against an old full by moving incrementals manually 
>> somewhere else on the backend (and back if you want to restore a backup 
>> contained in them). NOTE: this is a hack and not advised, but currently the 
>> only way to "rebase" incrementals.
>>
>> also, with #2 you'd be assuming that your full will never get corrupted, 
>> which is probably not very clever either.
>>
>> ..ede/duply.net
>>
>>
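
To illustrate option 1 from the quoted reply above, here is a minimal sketch
of a scheduled (e.g. cron) run that starts a fresh full chain periodically via
duplicity's --full-if-older-than option; the interval, paths and target are
placeholders:

    # start a new full chain once the newest full is older than one month,
    # otherwise perform an incremental against the current chain
    export FTP_PASSWORD='secret'
    duplicity --full-if-older-than 1M --volsize 250 /var/mail-archive \
        ftp://backupuser@backup.example.com/mail-archive

    # optionally keep only the two most recent full chains on the backend
    duplicity remove-all-but-n-full 2 --force \
        ftp://backupuser@backup.example.com/mail-archive

Note that remove-all-but-n-full only deletes data when --force is given, so
old chains stay on the backend until they are pruned explicitly.
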
> 
> Would I be better off doing a straight rsync of the archive then, in your 
> opinion?
> 

if you have no need for encryption, yes.
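
for illustration, a plain rsync mirror of the archive (no encryption, no
version history) could be as simple as the following; host and paths are
placeholders:

    # one-way mirror over ssh; --delete removes files on the target that were
    # deleted on the source, so only the latest state is kept
    rsync -az --delete /var/mail-archive/ \
        backupuser@backup.example.com:/backups/mail-archive/

unlike duplicity's backup chains, this gives you only the current state, not
point-in-time restores.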

how much data are we talking about? altogether, and how much change per week/month?

..ede/duply.net


