
[Duplicity-talk] duplicity cpu increases over time on large (20GB) files


From: Tom Pepper
Subject: [Duplicity-talk] duplicity cpu increases over time on large (20GB) files?
Date: Wed, 18 Mar 2009 10:38:42 -0700

Greetings:

I've been attempting to use duplicity to back up ~500GB of VMware ESX virtual disk images, sourced from SnapMirror snapshots on a NetApp filer over NFS. I've tried local storage, S3, and scp as targets, both with and without gpg in the loop.

I'm noticing that as duplicity reads through these large .vmdk files (8-50GB), NFS read speeds drop from ~15MB/sec at the beginning of the read to <100kB/sec as the read position approaches the 1GB mark and beyond. Throughout, the CPU is pegged in the duplicity pid, not in gpg, and there's plenty of RAM free as well.

Running dd reads at a 16k block size shows >90MB/sec read throughput (if=<same vmdk files>, of=/dev/null), so I'm pretty sure it isn't network-related. Modifying volsize to smaller or larger values doesn't seem to help, and the NetApp's CPU load is negligible during the slowdown.
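
To pin down where the dropoff starts, here's a quick Python harness along the same lines as the dd test; it reads a file sequentially in 16k blocks and prints throughput for each GB read. This is just a sketch I'd run to gather numbers; the path argument is whatever .vmdk you point it at:

    import sys, time

    CHUNK = 16 * 1024     # match the 16k dd block size
    REPORT = 1024 ** 3    # print one sample per GB read

    def scan(path):
        done = 0
        mark = time.time()
        f = open(path, 'rb')
        while True:
            buf = f.read(CHUNK)
            if not buf:
                break
            done += len(buf)
            if done % REPORT == 0:   # CHUNK divides REPORT evenly
                now = time.time()
                mbs = (REPORT / float(1024 * 1024)) / (now - mark)
                print('%4d GB read: %.1f MB/sec' % (done // REPORT, mbs))
                mark = now
        f.close()

    if __name__ == '__main__':
        scan(sys.argv[1])

If plain sequential reads hold steady past 1GB (as the dd numbers suggest they will), that points the finger at duplicity's processing rather than the read path.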

Using duplicity to back up smaller (<100MB) files maintains pretty good speed throughout: ~5-15MB/sec.

Is there a design aspect to duplicity that causes it to run more slowly on extremely large files? I'd love to be able to store incremental diffs of these files in S3, but it's taking in excess of an hour to read 100MB once duplicity gets deep into a file.
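
For context, my (possibly wrong) mental model is that duplicity hands the delta computation to librsync, which makes an rsync-style signature pass over the file in fixed-size blocks. If that's right, the work per block should be constant regardless of how deep into the file it is, roughly like this illustration (not duplicity's actual code; the 512-byte block size is made up):

    BLOCK = 512   # illustrative; librsync chooses its own block size

    def weak_sum(block):
        # rsync-style weak checksum (Adler-32 variant); cost depends
        # only on the block length, never on the block's file offset
        a = b = 0
        n = len(block)
        for i, x in enumerate(block):
            a = (a + x) & 0xffff
            b = (b + (n - i) * x) & 0xffff
        return (b << 16) | a

    def signature(path):
        # one weak sum per block: the millionth block costs the
        # same as the first
        sigs = []
        f = open(path, 'rb')
        while True:
            block = bytearray(f.read(BLOCK))
            if not block:
                break
            sigs.append(weak_sum(block))
        f.close()
        return sigs

If that model holds, a CPU climb that tracks the file offset would suggest something accumulating per block inside duplicity, rather than the checksumming itself getting slower.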

duplicity 0.5.11 under Ubuntu 8.04.2 (kernel 2.6.24-23-server)

TIA,
-t




