From: edgar . soldin
Subject: Re: [Duplicity-talk] Create full backup from incremental
Date: Sun, 19 Apr 2015 18:55:15 +0200
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Thunderbird/31.6.0

I still like the diff (changes since last full) approach better. It's very 
transparent about what it does, and it can be "easily" added as a new backup 
type in addition to full and incr.

E.g.:
Chain1: full10 inc11 inc12 inc13 inc14 inc15
Chain2: full10 diff20 inc21 inc22 inc23 ...

This will obviously destroy both chains if the full gets corrupted, but on the 
other hand it saves backend space and transfer roughly equal to the full's 
size. It is best suited to a large amount of unchanging base data, such as 
photo archives.
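
To make the trade-off concrete, here is a small sketch (plain Python, 
illustrative lists only, not duplicity internals) comparing the three ways a 
backup could continue after Chain1:

  # three ways to continue after an existing chain (all names illustrative)
  chain1   = ["full10", "inc11", "inc12", "inc13", "inc14", "inc15"]

  keep_inc = chain1 + ["inc16"]             # restore chain keeps growing
  new_full = ["full20", "inc21"]            # short chain, but re-uploads everything
  new_diff = ["full10", "diff20", "inc21"]  # short chain, re-uses full10 remotely
                                            # (shared full10: corruption hits both)

  for name, chain in (("incr", keep_inc), ("full", new_full), ("diff", new_diff)):
      print(name, "-> restore touches", len(chain), "volumes")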

We could also easily tag the file names accordingly (e.g. 
duplicity-diff.<time>.vol<num>.difftar.gpg), so the backup type would be 
visible by file name on the backend.
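
Recognizing such names could then be as simple as the following sketch (the 
pattern just encodes the proposal above; nothing like it exists in duplicity 
today):

  import re

  # proposed name, e.g. duplicity-diff.20150419T185515Z.vol1.difftar.gpg
  DIFF_NAME = re.compile(
      r"duplicity-diff\.(?P<time>\d{8}T\d{6}Z)\.vol(?P<num>\d+)\.difftar\.gpg")

  m = DIFF_NAME.match("duplicity-diff.20150419T185515Z.vol1.difftar.gpg")
  print(m.group("time"), m.group("num"))  # 20150419T185515Z 1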

..ede/duply.net

On 19.04.2015 17:17, Kenneth Loafman wrote:
> There is another option that we might consider: starting a new incremental 
> chain by date.  It might look like:
> 
> Chain1: full inc11 inc12 inc13 inc14 inc15
> Chain2: full inc11 inc12 inc21 inc22 inc23 inc24
> 
> So that Chain2 is based on [full inc11 inc12] of Chain1.
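> 
> In code, the shared prefix might look like this rough sketch (plain lists 
> for illustration, not duplicity's actual chain model):
> 
>     chain1 = ["full", "inc11", "inc12", "inc13", "inc14", "inc15"]
>     # Chain2 re-uses the first three volumes of Chain1 and branches there
>     chain2 = chain1[:3] + ["inc21", "inc22", "inc23", "inc24"]
>     assert chain2[:3] == ["full", "inc11", "inc12"]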
> 
> Lots of complications, but a possible approach that would solve the scenario 
> where you forgot to back up a large and mostly unchanging directory.  Could 
> it be extended to chains of chains?  Yes, if written correctly.  I still 
> think chain coalescing should be done, but that will require maximum network 
> and storage bandwidth.
> 
> ...Ken
> 
> 
> On Sun, Apr 19, 2015 at 8:34 AM, <address@hidden> wrote:
> 
>     Eric,
> 
>     On 17.04.2015 18:42, Eric O'Connor wrote:
>     > On 04/17/2015 06:40 AM, Scott Hannahs wrote:
>     >> I am still not clear how this scheme could be implemented without
>     >> the remote machine having all the files and lengths etc.  But this
>     >> meta data is not supposed to be in the clear on the remote machine
>     >> ever. Thus if it is local then all the incremental files would need
>     >> to be transferred back to the local machine for combining with the
>     >> full. Not saving bandwidth which I believe is the original intent.
>     >
>     > The remote machine (say, S3) doesn't have any use for files and lengths
>     > -- it's just a dumb bucket of bits. Anyway, Duplicity already stores a
>     > bunch of metadata locally, such as a rolling checksum for every file
>     > that's backed up. Unless that local metadata became corrupted or lost,
>     > why would it need to be repeatedly transferred back?
> 
>     Obviously a misunderstanding. He means recreating a synthetic full from
>     an existing remote chain (full + incrementals). To do that without using
>     the local data, you would have to locally recreate the latest state,
>     which is essentially a local restore, which in turn means transferring
>     the complete chain (volumes are not cached locally). Be aware that the
>     metadata is not sufficient to recreate data; it exists so that it does
>     not have to be downloaded/decrypted for every backup, and it mainly
>     describes the latest state so incrementals can decide what is new.
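> 
>     As a toy model (made-up dicts, not librsync deltas): building a
>     synthetic full means replaying every delta on top of the full, so
>     every volume in the chain has to be fetched and decrypted anyway:
> 
>         full = {"a": 1, "b": 1, "c": 1}        # contents of the full
>         incs = [{"a": 2}, {"b": 2}, {"d": 1}]  # one dict per incremental
> 
>         state = dict(full)
>         for delta in incs:                     # every volume needed here
>             state.update(delta)
>         print(state)  # {'a': 2, 'b': 2, 'c': 1, 'd': 1} == a full restore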
> 
>     Taking that into account, it is much easier to do a new full locally.
> 
>     > Anyway, it sounds like this isn't wanted, so I'll be on my way. Cheers 
> :)
> 
>     Maybe to you. To me it sounds like an idea that developed after
>     revisiting the initial requirement to shorten the chain.
> 
>     ..ede/duply.net
> 


