
Re: [Duplicity-talk] duplicity collection status slowness


From: Scott Hannahs
Subject: Re: [Duplicity-talk] duplicity collection status slowness
Date: Mon, 25 Aug 2014 12:54:17 -0400

20MB is small, but how many files?  Is it calling gpg routines a *lot*?

Is it spending the time searching deep directory structures?

It sounds like the encryption routines would account for the CPU usage.  Check 
the process table to see which commands are using most of the CPU.
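
For instance, while a backup is running, something along these lines (generic 
Linux tools, nothing duplicity-specific) should show whether the python 
process or the gpg children are doing the work:

  ps aux --sort=-%cpu | head -n 10
  pgrep -c gpg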

-Scott

On Aug 25, 2014, at 12:19, Erik Romijn <address@hidden> wrote:

> Hello all,
> 
> Could anyone provide any insights into this issue? I'm happy to dig into the 
> code myself, but I don't really know what the best place to start would be.
> 
> cheers,
> Erik
> 
> On 20 Jul 2014, at 19:39, Erik Romijn <address@hidden> wrote:
> 
>> Hello all,
>> 
>> I'm using duplicity to run a few backups for my servers, and have generally 
>> found it to work very well. However, although my data set is tiny, 
>> duplicity has become incredibly slow, which I think I've narrowed down to 
>> the collection status process.
>> 
>> My source file size is only 20MB, but running this backup takes about 7 
>> minutes and is almost completely CPU bound. Running collection status alone 
>> takes nearly the same amount of time, so it would seem that this is where 
>> the slowness comes from.
>> 
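>> (For reference, I time that step on its own with something along the lines 
>> of: time duplicity collection-status sftp://address@hidden/[...]/backups/log)
>> 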
>> I make incremental backups every 15 minutes, with a full backup after 23 
>> hours, so 92 sets per chain. According to collection status, I currently 
>> have 19 backup chains, with no orphaned or incomplete sets. The source is 
>> about 20MB, and the destination volume now totals 154MB. Running verify 
>> confirms that the backups are correct.
>> 
>> These numbers are for the backup of my /var/log, but I have another backup 
>> of an unrelated directory of about 300MB on the same backup schedule, which 
>> shows similar numbers for collection status.
>> 
>> One workaround would be for me to move files away from the duplicity 
>> destination, so that the total collection appears smaller. But that leaves 
>> me wondering: why does collection status take so much time, particularly 
>> considering it's CPU bound?
>> 
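>> (If it comes to that, I could presumably also prune older chains with 
>> something like:
>> duplicity remove-all-but-n-full 3 --force sftp://address@hidden/[...]/backups/log
>> though I'd rather understand where the time is going first.)
>> 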
>> I'm running duplicity 0.6.23 with python 2.7.6 on an Ubuntu 14.04 VPS.
>> 
>> The full duplicity command line I use is:
>> /usr/bin/duplicity --full-if-older-than 23h --encrypt-sign-key [...] 
>> --verbosity info --ssh-options=-oIdentityFile=/root/.ssh/backup_rsa 
>> --exclude-globbing-filelist /root/duplicity_log_exclude_filelist.txt 
>> /var/log sftp://address@hidden/[...]/backups/log
>> 
>> Can anyone here provide insights into what might be the issue, and what 
>> would be the best approach to tackle this?
>> 
>> cheers,
>> Erik
>> 
> 
> 
> _______________________________________________
> Duplicity-talk mailing list
> address@hidden
> https://lists.nongnu.org/mailman/listinfo/duplicity-talk



