
Re: [Duplicity-talk] duplicity verification and corrupt backups


From: edgar.soldin
Subject: Re: [Duplicity-talk] duplicity verification and corrupt backups
Date: Mon, 22 Aug 2011 13:14:38 +0200
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:6.0) Gecko/20110812 Thunderbird/6.0

On 22.08.2011 13:00, Rob Verduijn wrote:
> 
> Ed: Do you verify your backups from time to time?
> Are you telling me there are people who don't do that after each backup run? ;-)
> 

This was targeted at Ed, with the 10-day duplicity runs. ;)

> 
> Is there a way to make duplicity verify each volume after sending, and upload it again if it fails?

No, you can only verify complete backups, e.g. the incremental from 3 days ago. The status command gives you information about the existing backups.
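
For illustration, a minimal sketch with plain duplicity (duply wraps these as "duply <profile> verify" and "duply <profile> status"); the sftp URL and paths below are placeholders:

    # compare the latest backup chain against the live filesystem
    duplicity verify sftp://user@backuphost/backups /home/me

    # list the full/incremental chains and volumes on the backend
    duplicity collection-status sftp://user@backuphost/backups

Mind that verify downloads and decrypts the volumes to compare them, so over a slim line it is costly too.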


ede/duply.net

> 
> Regards
> Rob Verduijn
> 
> 
> 
> 2011/8/22 <address@hidden>
> 
>     On 22.08.2011 04:25, Ed Blackman wrote:
>     > On Wed, Aug 17, 2011 at 06:15:26PM +0200, address@hidden wrote:
>     >> Good to know. But seriously. A slim line also limits the throughput, and therefore the amount of data you can push through it over a given timeframe. Doing a full backup over a timeframe of more than a day is challenging at best. I would not advise it.
>     >>
>     >> Rather
>     >>
>     >> A) split the backup into small parts that are not backed up that often
>     >> or
>     >> B) do what lots of people with slow upload channels do: run the duplicity backup against a local file:// target and rsync (or upload with the software of your preference) the result to the remote site. (A sketch follows after this list.)
>     >
>     > or
>     > C) take filesystem snapshots (I use LVM on Linux), then back up from the snapshots.
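
A minimal sketch of option B; the staging path, remote host, and rsync flags are assumptions for illustration, not a prescribed setup:

    # back up to a fast local target first, so duplicity never runs for days
    duplicity /home/me file:///var/backups/staging

    # then push the finished volumes with a resumable transfer tool
    rsync -av --partial /var/backups/staging/ me@remotehost:/backups/me/
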
>     >
>     > The advantage of snapshots over option B is that the snapshots are 
> created in a matter of seconds, and so represent a much more consistent view 
> of the system than even a quick backup to a local file:// target.
>     >
>     > The disadvantage is that there's significant scripting overhead: not only setting up and tearing down the snapshots, but also just interacting with duplicity.  "--rename $snapshotroot /" gets you most of the way (the approach wouldn't be viable without it), but you also have to change all the --includes and --excludes (including filelists) to be relative to the root of the snapshot.
>     >
>     > But in the end, it works.  Some of my full backups take 10 days to "trickle" up to Amazon S3, but my script creates the snapshot for them and blocks all the incrementals until the full backup completes.
>     >
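
A rough sketch of that snapshot workflow, following Ed's description above; it assumes an LVM volume group vg0 with a logical volume home, and the snapshot size, mount points, and S3 bucket are placeholders:

    # create a point-in-time snapshot (takes seconds)
    lvcreate --snapshot --size 2G --name home_snap /dev/vg0/home
    mount -o ro /dev/vg0/home_snap /mnt/snap

    # back up from the snapshot, recording paths as if rooted at /;
    # note the includes/excludes are relative to the snapshot root
    duplicity --rename /mnt/snap / \
        --include /mnt/snap/me --exclude '**' \
        /mnt/snap s3+http://mybucket/home-backup

    # tear the snapshot down again
    umount /mnt/snap
    lvremove -f /dev/vg0/home_snap
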
> 
>     Still, the probability of a line reset or something else interrupting the duplicity upload significantly raises the chance of a resume going wrong, or of corrupt files in general on the backend. I definitely would not advise having duplicity run that long.
> 
>     Ed: Do you verify your backups from time to time?
> 
>     ede/duply.net
> 
> 
> _______________________________________________
> Duplicity-talk mailing list
> address@hidden
> https://lists.nongnu.org/mailman/listinfo/duplicity-talk


