
Re: [Duplicity-talk] Much Larger Duplicity backups when compared to Source

From: william pink
Subject: Re: [Duplicity-talk] Much Larger Duplicity backups when compared to Source
Date: Wed, 10 Mar 2010 12:36:08 +0000

On Wed, Mar 10, 2010 at 12:26 PM, Kenneth Loafman <address@hidden> wrote:
william pink wrote:
> On Mon, Mar 8, 2010 at 10:09 PM, Jacob Godserv <address@hidden> wrote:
>     On Mon, Mar 8, 2010 at 06:53, william pink <address@hidden> wrote:
>     > Any help most appreciated,
>     > Will
>     I can't claim any professional experience with MySQL, but I can try to
>     help regardless. I need some more information, first. Do you back up
>     the entire /var/lib/mysql/ (or wherever the raw databases are stored)
>     or do you back up a dump? What does 'duplicity collection-status' say?
> Hi Jacob,
> Sorry for the delayed response - I run mysqldump each day and compress
> the dumps with tar and gzip. I then use duplicity to take an initial
> full backup, followed by incrementals.

This mechanism will guarantee that the incremental ends up being about
the same size as the original.  Duplicity saves space by only sending
the changed parts of the file as part of the incremental.  By using gzip
compression, you defeat the comparison process, so each file is totally
changed as far as duplicity is concerned, thus the large backups.

To make best use of duplicity, do the mysqldump straight to text files
and make that the input to duplicity.  It will compress and tar them
before sending to the remote system and you will see much smaller
incremental backups.
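A nightly script along these lines would do it (a hedged sketch, not from the thread: the dump directory, remote host, and backup schedule are placeholder assumptions):

```shell
#!/bin/sh
# Sketch: dump to plain .sql files and let duplicity handle compression
# and delta encoding. Paths and the sftp target are placeholders.
DUMP_DIR=/var/backups/mysql
mkdir -p "$DUMP_DIR"

# Dump to a stable file name so successive runs overwrite the same file
# and duplicity can diff it against the previous night's version.
mysqldump --all-databases > "$DUMP_DIR/all-databases.sql"

# Full backup if the last full is older than a month, incremental
# otherwise; duplicity gzips its volumes itself before uploading.
duplicity --full-if-older-than 1M "$DUMP_DIR" \
    sftp://backup@example.com//backups/mysql
```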


Ahh awesome, that just means keeping large dumps on the server, but I can get around that somehow.

Thanks for the help!

