
From: Adrian Klaver
Subject: Re: [rdiff-backup-users] initial rdiff-backup for large repository over Internet (and connection lost)
Date: Wed, 24 Feb 2016 16:27:56 -0800
User-agent: Mozilla/5.0 (X11; Linux i686; rv:38.0) Gecko/20100101 Thunderbird/38.6.0

On 02/24/2016 01:57 PM, Nicolas wrote:
Hi all,

I've been a happy rdiff-backup user for many years.

Years ago I set up an rdiff-backup job from a WWW server to a local server.
The Internet connection between the two servers is limited to 2 Mbps.
As /var/www was initially empty, rdiff-backup has done its backup job
every day without problems, with many sites added to /var/www over time.

Now /var/www is 14 GB and I need to restart the backup from scratch
(the destination directory has been deleted).

The problem is that trying to do the initial sync of source and
destination via the command:
nice -n 19 rdiff-backup --force $SOURCE $DEST::$DESTDIR
always ends with network (Internet) errors like these:
Found interrupted initial backup. Removing...
Write failed: Broken pipe
Fatal Error: Lost connection to the remote system

What would be the proper way to do the initial rdiff-backup sync?
Would a prior rsync help, so that only metadata would need to be written
by rdiff-backup?

Take a look here:



I would use the --include-filelist option and an associated file to have rdiff-backup back up /var/www one subdirectory at a time. In other words, add a subdirectory to the file and run rdiff-backup. After that completes, add another subdirectory and rerun, and so on. Basically, build up the destination directory incrementally.
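A rough sketch of what that could look like (paths, hostname, and destination directory here are made up for illustration; adjust to your setup). The --exclude '**' at the end keeps everything not listed in the filelist out of each pass:

```shell
# Pass 1: back up only the first site; everything else is excluded.
printf '%s\n' /var/www/site1 > /tmp/include.txt
nice -n 19 rdiff-backup --include-filelist /tmp/include.txt \
    --exclude '**' /var/www backuphost::/backup/www

# Pass 2: append another subdirectory to the list and rerun.
# Already-transferred data is not resent; the new subdirectory is added.
printf '%s\n' /var/www/site2 >> /tmp/include.txt
nice -n 19 rdiff-backup --include-filelist /tmp/include.txt \
    --exclude '**' /var/www backuphost::/backup/www

# Repeat until the filelist covers all of /var/www, then drop the
# filelist/exclude options for the regular daily runs.
```

Each pass transfers a small enough chunk that a dropped connection costs you only the current subdirectory rather than the whole 14 GB.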

Thanks in advance for your thoughts.

rdiff-backup-users mailing list at address@hidden
Adrian Klaver
