Subject: Re: [Duplicity-talk] Very long backup times, maybe needs for a distributed backup system
From: Rubin Abdi
Date: Wed, 18 Jun 2014 21:43:43 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:32.0) Gecko/20100101 Thunderbird/32.0a2
I'm new to duplicity and am using duply to make things a little less
painful. From my limited experience so far I can tentatively say the
following...
* The first backup takes forever. Backing up something around 800GB to
1.5TB from my laptop to a home server running a RAID6 over gigabit
ethernet and SSH took about 40 hours and a lot of cussing.
* If you know you'll only ever do backups and restores locally over a
fast network, raising --volsize helps. I pushed mine to 250MB a pop.
However, it sucks horribly when you need to restore something tiny (like
when I needed to unbork /var/lib/apt/extended_states last week) over the
internet and your house has crappy DSL.
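For reference, with duply the volume size goes in the profile conf; calling duplicity directly it's a flag. A rough sketch (the paths and the rsync target below are made up for illustration):

```shell
# Bigger volumes mean fewer difftar files and less per-volume overhead
# on a fast LAN; duplicity's --volsize is given in megabytes.
duplicity --volsize 250 /home/me rsync://backuphost//srv/backups/laptop

# The trade-off: restoring one small file still pulls down whole 250MB
# volumes, which is painful over a slow link.
duplicity restore --file-to-restore var/lib/apt/extended_states \
    rsync://backuphost//srv/backups/laptop /tmp/extended_states
```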
* Do you need encryption for each of your backups? If not, turn it off;
it will save you a lot of cycles. The biggest bottleneck for me was
building the encrypted difftars; sending them over the network and
writing them to a consumer-grade RAID6 was a drop in the bucket in
comparison.
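If you decide a particular backup can skip encryption, duplicity has a flag for it (in a duply profile the equivalent is disabling GPG_KEY in the conf; check your duply version's docs). The target URL here is again a placeholder:

```shell
# --no-encryption skips gpg entirely, so no CPU is burned on crypto;
# volumes are still compressed unless you also pass --no-compression.
duplicity --no-encryption /home/me rsync://backuphost//srv/backups/laptop
```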
* The I/O and crypto are resource-heavy, to a certain degree. Running
backups when nothing else is utilizing the machine works better.
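One generic way to keep a backup run from fighting whatever else the machine is doing, assuming a Linux box with util-linux installed (the duply profile name "laptop" is made up):

```shell
# Lowest CPU priority (nice 19) plus the "idle" I/O scheduling class
# (ionice -c 3), so the backup only takes cycles and disk bandwidth
# that nothing else wants.
nice -n 19 ionice -c 3 duply laptop backup
```

The same line works in a cron entry if you'd rather run backups overnight.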
With all that being said, I too am curious whether there's a way to
distribute or parallelize gnupg, or any of this really, across several
cores?
--
Rubin
address@hidden