[Duplicity-talk] duplicity collection status slowness


From: Erik Romijn
Subject: [Duplicity-talk] duplicity collection status slowness
Date: Sun, 20 Jul 2014 19:39:05 +0200

Hello all,

I'm using duplicity to run a few backups for my servers, and have generally 
found it to work very well. However, even though my data set is tiny, duplicity 
has become incredibly slow, which I think I've narrowed down to the collection 
status process.

My source file size is only 20MB, but running this backup takes about 7 
minutes and is almost completely CPU bound. Running the collection status alone 
takes nearly the same amount of time, so that seems to be where the slowness 
comes from.

I make incremental backups every 15 minutes, with a new full backup after 23 
hours, so 92 sets per day. I currently have 19 backup chains, according to 
collection status, and there are no orphaned or incomplete sets. In total the 
destination volume is now 154MB. Running verify confirms that the backups are 
correct.
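For reference, a back-of-the-envelope sketch of the set counts implied by that 
schedule (all values taken from the figures above; the per-chain/per-day split 
is my own arithmetic, not duplicity output):

```python
# Rough check of the backup-set counts described above.
incremental_interval_min = 15   # incremental backup every 15 minutes
full_after_hours = 23           # new full backup (new chain) after 23 hours

# Sets per chain: the incrementals made before the next full starts.
sets_per_chain = full_after_hours * 60 // incremental_interval_min
print(sets_per_chain)           # 92

# With 19 chains retained, collection status has to account for roughly:
chains = 19
total_sets = chains * sets_per_chain
print(total_sets)               # 1748
```

So even though the data is small, the collection itself holds on the order of 
1,700+ sets, which may be relevant to the slowness.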

These numbers are for the backups of my /var/log, but I have another backup of 
an unrelated directory of about 300MB on the same backup schema, which shows 
similar numbers for collection status.

One workaround would be to move older files away from the duplicity 
destination, so that the total collection appears smaller. But that still 
leaves me wondering: why does collection status take so much time, particularly 
considering it's CPU bound?

I'm running duplicity 0.6.23 with python 2.7.6 on an Ubuntu 14.04 VPS.

The full duplicity command line I use is:
/usr/bin/duplicity --full-if-older-than 23h --encrypt-sign-key [...] 
--verbosity info --ssh-options=-oIdentityFile=/root/.ssh/backup_rsa 
--exclude-globbing-filelist /root/duplicity_log_exclude_filelist.txt /var/log 
sftp://address@hidden/[...]/backups/log

Can anyone here provide insights into what might be the issue, and what would 
be the best approach to tackle this?

cheers,
Erik
