
Re: [rdiff-backup-users] State of the rdiff-backup project

From: Janne Peltonen
Subject: Re: [rdiff-backup-users] State of the rdiff-backup project
Date: Fri, 14 Aug 2015 22:42:05 +0300
User-agent: Mutt/1.5.21 (2010-09-15)


On Thu, Aug 13, 2015 at 08:43:53PM +0200, Claus-Justus Heine wrote:
> - - still, performance sucks: if a previous backup failed, then the
> "regression" regularly takes ages (I am not talking about hours, but
> several days for large backup sets)

How large are your backup sets? Mine (on ext4 on LUKS on an mdadm RAID-1 mirror
on a pair of USB 3 disks) recover from a failed backup in well under an hour (it
actually took a lot less than an hour even when I had them on ext3 on an mdadm
RAID-1 mirror on a pair of USB 2 disks). But they are small by modern standards;
the current mirrors are less than 100 GB each.

Would it be possible simply to split the extremely large directory trees into
their subtree components? Or is the problem that your backup sets just have
horribly large directories?
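For what it's worth, the per-subtree approach could look roughly like this; the paths, the `backup_subtrees` helper, and the `BACKUP_CMD` override are just illustrative placeholders, not anything from rdiff-backup itself:

```shell
#!/bin/sh
# Illustrative sketch only: back up each top-level subtree as its own
# rdiff-backup repository, so that a failed run only forces regression
# of one smaller repository instead of the whole tree.
# BACKUP_CMD exists purely so the command can be swapped out for testing;
# by default it is rdiff-backup.
backup_subtrees() {
    src=$1
    dest=$2
    for d in "$src"/*/; do
        name=$(basename "$d")
        "${BACKUP_CMD:-rdiff-backup}" "$d" "$dest/$name"
    done
}

# Example: backup_subtrees /home /srv/backup
```

Each subtree then becomes its own repository, so the regression work after a failure is bounded by the size of one subtree.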

Another point: when a backup succeeds, my scripts rsync all the backups to
another file system. If the main backup then fails, I can just rsync everything
back from the "metabackup" (since the failed backup never triggered a new sync,
the metabackup still holds a consistent previous version). That may be somewhat
faster than rdiff-backup's own failure recovery. Alternatively, I can just swap
the roles of backup and metabackup (by remounting the block devices
differently). Disk space is cheap. :)
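A rough sketch of that success-then-mirror logic (the paths and the wrapper function here are illustrative placeholders, not my actual scripts):

```shell
#!/bin/sh
# Sketch of the "metabackup" idea; paths are placeholder assumptions.
BACKUP=${BACKUP:-/srv/backup}
METABACKUP=${METABACKUP:-/srv/metabackup}

# Run the backup command passed as arguments; mirror the repository on
# success, restore the last consistent copy on failure.
run_backup_cycle() {
    if "$@"; then
        # Success: refresh the metabackup with the new consistent state.
        rsync -a --delete "$BACKUP/" "$METABACKUP/"
    else
        # Failure: pull the last consistent state back from the metabackup
        # instead of waiting for rdiff-backup's own regression pass.
        rsync -a --delete "$METABACKUP/" "$BACKUP/"
    fi
}

# Example: run_backup_cycle rdiff-backup /home "$BACKUP"
```

The point of `--delete` is that the restored backup directory exactly mirrors the last consistent state, with any half-written files from the failed run removed.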

(I'm not actually answering your questions, just offering my 2 cents... ;) )

Janne Peltonen <address@hidden> PGP Key ID: 0x9CFAC88B
