

From: Bill Wraith
Subject: Re: [Duplicity-talk] Problem getting the first large backup to work w/scp
Date: Mon, 04 Dec 2006 11:11:50 -0500
User-agent: Microsoft-Entourage/11.2.5.060620

>> -- Provide some way to recover from an aborted first backup, or provide
>>    a way to do this first backup in stages. I have a 50GB filesystem I
>>    need to backup and 50GB of space on another continent where I'd like
>>    this data to land. I can't use duplicity for that. There is simply no
>>    way to do the first backup without it being interrupted by something
>>    -- a network glitch, usually. As it stands now, I simply cannot
>>    backup that data.
>> 
I've had a similar issue with the first backup failing due to "network
glitches", and I have had success with the following approach.

1) I used a local server as my duplicity "remote" backup site. I have
several servers that all send backups to one duplicity remote site, so I
made one of my servers on the LAN receive all those backups into one common
directory.
    a) I still had one or two glitches when backing up a 5GB folder with
about 20,000 photos in it, and those kept the first backup from completing,
even on my LAN.
    b) I then made small changes to the backends.py script, as described in
my previous posting "Duplicity, scp, backend exception problem", so that it
tries the scp transfer or sftp listing one more time if the first attempt
returns an error. After this change I've had no problems, and I have created
about 20GB of full backups on a local server.
    c) For very large backups it may well make sense to create a local copy
first regardless of the network-glitch issue, just to have the option of
sending the backups to the remote site when it suits your bandwidth and
timing, rather than while duplicity is creating the compressed, encrypted
volume files.
2) I then used rsync to copy the entire duplicity repository to a remote
site (see the sketch just after this list). rsync is very good at handling
incremental transfers over an unreliable connection. It ran in one try for
all 20GB to two different remote servers, taking about 30 hours each time.
3) Incremental backups then work easily, first to the local copy and then
via rsync to the remote copy, and they are very quick: seconds for each
local duplicity run, and seconds for the rsync to the remote server (if only
a few files have changed).
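
To make that concrete, here is a rough sketch of the local-backup-then-rsync
flow from steps 1-3. The paths, hostname, and options are made up for
illustration (they are not from my setup), and duplicity will still ask for
the GnuPG passphrase unless PASSPHRASE is set in the environment, so treat
this as a sketch rather than a tested script:

    # Rough sketch of the local-then-rsync workflow (steps 1-3 above).
    # All paths and the hostname are hypothetical; adjust for your setup.
    import subprocess

    SOURCE = "/home/data"                    # directory to back up (example)
    LOCAL_REPO = "/backups/duplicity/data"   # duplicity target on the LAN server
    REMOTE = "user@offsite.example.com:/backups/duplicity/data"  # remote mirror

    # Steps 1 and 3: run duplicity against a local (LAN) target, which is far
    # less likely to be interrupted than a long-haul scp session. The first
    # run is a full backup; later runs are incremental.
    subprocess.check_call(["duplicity", SOURCE, "file://" + LOCAL_REPO])

    # Step 2: mirror the finished duplicity volumes to the remote site with
    # rsync. --partial keeps partially transferred files, so an interrupted
    # run over an unreliable connection can be resumed cheaply.
    subprocess.check_call(["rsync", "-av", "--partial", LOCAL_REPO + "/", REMOTE])

The same two commands could just as well be run by hand or from cron; the
point is only that duplicity never has to talk to the far end directly.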

So far this works for me without any problems. A more complete solution for
the scp backend, one that retries periodically until it succeeds, would
probably be much better, especially over a connection less reliable than a
LAN; but I'm not familiar with Python, so I haven't tried to program that,
given that my very simple single retry is working fine. There was a previous
posting regarding retries in the ftp backend that could be used as a model
for the retry rules and parameters. Also, I put those changes in the
scpBackend class, not up in the main section: I copied the run_command and
popen methods into the scpBackend class and then modified them in the same
way as in my previous posting, so as to isolate my changes to just the scp
backend subclass. I have attached my backends.py for reference, but please
understand it has not been tested at all outside my very limited situation.
Sorry if this is all redundant and the developers have already added
something like this to the next version. If so, I will very much look
forward to going back to just using the standard build from your download.
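
For anyone who wants the idea without opening the attachment, here is a
minimal, self-contained sketch of the "try one more time" logic. It is not
the actual code from backends.py (the real change sits inside the scpBackend
class's run_command and popen methods, whose exception class and command
handling may differ), so the names below are illustrative only:

    import os
    import time

    class BackendException(Exception):
        """Stand-in for duplicity's backend error class (illustrative only)."""
        pass

    def run_command_with_retry(commandline, retries=1, delay=30):
        """Run a shell command (an scp transfer or sftp listing, say) and
        retry up to 'retries' extra times if it exits with an error.

        retries=1 mirrors the single extra attempt described above; raising
        'retries' and 'delay' would give the more persistent behaviour that
        a less reliable connection probably needs.
        """
        for attempt in range(retries + 1):
            if os.system(commandline) == 0:
                return                    # command succeeded
            if attempt < retries:
                time.sleep(delay)         # let a transient glitch clear up
        raise BackendException("Error running '%s'" % commandline)

A fuller solution would presumably take its retry counts and delays from
whatever the ftp retry posting proposed, rather than hard-coding them here.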

Bill Wraith

Attachment: backends.py
Description: Binary data

