
Re: [rdiff-backup-users] Maintenance.


From: Edward Ned Harvey (rdiff-backup)
Subject: Re: [rdiff-backup-users] Maintenance.
Date: Fri, 6 Dec 2013 04:04:02 +0000

> From: rdiff-backup-users-bounces+rdiff-
> address@hidden [mailto:rdiff-backup-users-
> address@hidden On Behalf Of Alvin
> Starr
> 
> Clearly there are hundreds of better ways to back up a sparse file.
> 
> The point is that I sort of expected rdiff-backup to be as smart as tar
> and rsync in that perspective.

I certainly haven't had any good experiences backing up (or even copying) 
sparse files with tar.  Yes, I've done it, but sparse handling is off by 
default (you have to add the --sparse switch), and even with that switch I 
wouldn't call it a good experience.  No matter how you cut it, you have to 
read the entire sparse file, empty space included; the only question is 
whether sparseness is preserved on the destination.  Unfortunately there is 
no flag or attribute you can check on a file to see whether it's sparse; your 
only choice is to read every file and, optionally, apply sparseness to the 
destination.  And since you have no good way to know whether the source is 
sparse, you just unconditionally make every file on the destination sparse.
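To make that concrete, here's a minimal sketch (not from any particular backup tool) of what "unconditionally make the destination sparse" looks like: read every block of the source, and whenever a block is all zeros, seek past it in the destination instead of writing it, leaving a hole.

```python
import os

def copy_sparse(src, dst, block_size=65536):
    """Copy src to dst, seeking over all-zero blocks so the destination
    ends up sparse, whether or not the source actually was.

    (Hypothetical helper for illustration; real tools like tar --sparse
    or rsync --sparse do something similar internally.)"""
    zero_block = b"\0" * block_size
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(block_size)
            if not chunk:
                break
            if chunk == zero_block[:len(chunk)]:
                # Don't write the zeros: seek forward, leaving a hole.
                fout.seek(len(chunk), os.SEEK_CUR)
            else:
                fout.write(chunk)
        # If the file ends in a hole, the seek alone doesn't extend the
        # file; truncate() fixes the final length.
        fout.truncate()
```

Note that this still reads every byte of the source, holes included, which is exactly the cost described above.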

For large sparse files, as suggested, it's much better to back up with a tool 
that recognizes the internal contents of the file: something that can read 
the structure and copy out only the useful parts.  Not to mention, if it's a 
database file, it's important to ensure data integrity.  You don't want to be 
reading byte #178,343,543,344, with 877,344,563,233 to go, when some other 
process writes to the file and invalidates all the work you've done so far.

Or use compression.  Cuz guess what: a long run of zeros is highly 
compressible.   ;-)
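As a quick sanity check of that claim, here's a small sketch using Python's standard zlib module: compressing 10 MiB of zeros (the "empty space" in a sparse file) collapses it to a tiny fraction of its original size.

```python
import zlib

# 10 MiB of zeros, standing in for the holes in a sparse file.
data = b"\0" * (10 * 1024 * 1024)

compressed = zlib.compress(data)

# DEFLATE reduces a run of zeros by roughly three orders of magnitude,
# so even a naive byte-for-byte backup piped through compression stores
# almost nothing for the empty regions.
ratio = len(compressed) / len(data)
```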


