
Re: [rdiff-backup-users] Staged backups


From: Michael Crider
Subject: Re: [rdiff-backup-users] Staged backups
Date: Mon, 11 Aug 2008 09:42:15 -0500
User-agent: Thunderbird 2.0.0.12 (X11/20080213)

I may not be the most qualified person to answer this, but since nobody
else has, I'll take a stab at it. There are (at least) two approaches
you should look at, each with advantages and disadvantages.

The first is to set up a separate backup job for each directory. With
this approach you can stagger the backup times and even run several
backups at once, although bandwidth, disk speed, and CPU speed will all
be limiting factors. It also gives you a separate rdiff-backup-data
directory for each job, for better or worse.
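
For example, something like this in cron on the remote machine would
stagger two jobs (the backup host and paths here are made-up
placeholders, not anything from Ian's setup):

  # back up /dir/a at 1 am and /dir/b at 3 am, each to its own target
  0 1 * * * rdiff-backup /dir/a backuphost::/backups/site1/a
  0 3 * * * rdiff-backup /dir/b backuphost::/backups/site1/b

Each target then has its own rdiff-backup-data directory, so restores
and --remove-older-than cleanups happen per job.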
The second way is to make a single job that points at /dir, then use
--include statements to pull in /dir/a, /dir/b, etc., with --exclude '**'
after those to knock out everything else. I ran several backup jobs this
way (for several servers on a LAN): the first run had a single include
statement, and I added another include on each subsequent run until I
had everything I wanted. From what I understand of the way rdiff-backup
works, when a new include statement shows up, those files are copied
just as on the first run. Anything covered by a previously processed
include gets a normal backup: librsync compares checksums of the files
on both machines and transfers only those that have changed, at which
point rdiff-backup stores the new file and creates a reverse diff
against the old one.
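
Roughly, the staged runs would look like this (again, the host and
paths are placeholders):

  # first session: transfer only /dir/a
  rdiff-backup --include /dir/a --exclude '**' \
      /dir backuphost::/backups/site1

  # later session: add /dir/b; /dir/a just gets a normal
  # incremental pass
  rdiff-backup --include /dir/a --include /dir/b --exclude '**' \
      /dir backuphost::/backups/site1

The include/exclude options are evaluated in the order given, so the
catch-all --exclude '**' has to come last.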
While I have the floor, I'd like to say that rdiff-backup is working
very well here, backing up 20 different servers and workstations to a
single backup server, from which we use rsync with --delete-during to
push a copy to an offsite location. Before rdiff-backup we used plain
rsync with no deletes. That protected us against deleted files (until
the backup server's drive filled up and we had to do a run with
deletes), but gave no versioning. Now we keep 30 days of versions and
deleted files, with only 2-3% more disk usage than on the original
servers, and very little manual maintenance.
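
The offsite push and the pruning are both one-liners; roughly (the
offsite host and paths are placeholders for our actual setup):

  # mirror the backup server offsite, deleting removed files as each
  # directory is transferred rather than all at once
  rsync -a --delete-during /backups/ offsite.example.com:/backups/

  # on the backup server, drop increments older than 30 days
  # (--force is needed when more than one increment would be removed)
  rdiff-backup --force --remove-older-than 30D /backups/site1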


Ian Jones wrote:
Hello. I have about 20 GB of data to back up from a remote site. Clearly, it's not practical to do this over the Internet in one go, so I would like to stage the backup over several sessions. So, my question is: what is the best way to do it? If I back up, say, /dir/a, then subsequently /dir/a and /dir/b, will /dir/a get copied a second time?

An alternative approach would be to make a preliminary backup on DVDs and copy the files to the backup machine. If I then use rdiff-backup to do incremental backups, how do I ensure that the files that are already there are not copied again, i.e., how do I add them to the archive?

Thanks,
Ian.




--
Michael Crider
Howell-Oregon Electric Cooperative
West Plains MO
http://www.hoecoop.org

