Subject: Re: [rdiff-backup-users] 25gig files
Date: Tue, 03 May 2005 10:03:59 -0500
User-agent: Mozilla Thunderbird 1.0.2 (X11/20050324)
May I suggest that you might want to try some other technique for
a 25 gig file?
Maybe using LVM and "snapshot" techniques. I cannot say that I've ever
implemented them myself, but it would seem to be a better way of being
able to get a consistent backup of a file that could change before you
even finish backing it up.
You might check out the good folks over at the
Enterprise Volume Management System project.
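A minimal sketch of the snapshot idea, assuming an LVM2 setup; the volume group, logical volume, snapshot size, and mount point (vg0/data, /mnt/snap) are hypothetical placeholders, and each command is echoed rather than executed so the sketch is safe to paste as-is:

```shell
#!/bin/sh
# Sketch: consistent backup of a large, changing file via an LVM snapshot.
# Hypothetical names -- adjust for your system. Drop the 'echo' prefixes
# to run for real (requires root and free extents in the volume group).
set -eu
VG=vg0
LV=data
SNAP=data-snap

echo lvcreate --size 1G --snapshot --name "$SNAP" "/dev/$VG/$LV"
echo mount -o ro "/dev/$VG/$SNAP" /mnt/snap
echo rdiff-backup /mnt/snap /backup/dest   # back up the frozen view
echo umount /mnt/snap
echo lvremove -f "/dev/$VG/$SNAP"
```

The point is that the snapshot freezes a point-in-time view of the volume, so the 25 GB file cannot change underneath the backup while it runs.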
My curiosity is piqued wondering what a 25 gigabyte file would contain.
NEL Frequency Controls, Inc.
Clint Silvester wrote:
> dean gaudet wrote:
>> is this with librsync 0.9.6 or librsync 0.9.7? 0.9.6 has known
>> problems with files >= 4GiB. also if it's linux you need to ensure
>> that everything is built with -D_FILE_OFFSET_BITS=64 (which it
>> typically is on recent enough distros).
>>
>> -dean
>>
>> On Fri, 29 Apr 2005, Clint Silvester wrote:
>>> Has anyone looked at the problem when working on files larger than
>>> 25 GB? Previous messages about this problem were titled "error 107"
>>> and "librsync error 107 while in patch cycle"; it looks like a
>>> problem with librsync. I get this error:
>>>
>>>   python: ERROR: (rs_job_iter) internal error: job made no progress
>>>   [orig_in=129238, orig_out=65536, final_in=129238, final_out=65536]
>>>   UpdateError TWDOCS2001.adm librsync error 107 while in patch cycle
>>>
>>> The orig_out and final_out look like something isn't keeping enough
>>> precision (only 16 bits?) to store the needed number of
>>> segments/blocks/whatever that represents. Anybody have an idea about
>>> this?
>>>
>>> Clint Silvester
>>> _______________________________________________
>>> rdiff-backup-users mailing list at address@hidden
>>> http://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
>>> Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki
>
> Sorry, I sent this off-list, Dean. I'm resending to the list now, too.
>
> This is with 0.9.7, but I don't see that flag in any of the gcc
> commands when it's compiling. The configure script says:
>
>   checking for special C compiler options needed for large files... no
>   checking for _FILE_OFFSET_BITS value needed for large files... 64
>   checking for _LARGE_FILES value needed for large files... no
>
> ... and further down:
>
>   checking for _LARGEFILE_SOURCE value needed for large files... no
>
> Does that look right? I can successfully back up files smaller than
> 25 GB; 6 GB, 10 GB, and 15 GB all work fine. Something just goes wrong
> at ~25 GB. I've just recompiled, making sure the whole thing used
> -D_FILE_OFFSET_BITS=64, by running:
>
>   CC="gcc -D_FILE_OFFSET_BITS=64" ./configure --prefix=/usr etc.
>
> and still, after testing this, I got the error:
>
>   python: ERROR: (rs_job_iter) internal error: job made no progress
>   [orig_in=84948, orig_out=65536, final_in=84948, final_out=65536]
>   UpdateError TWDOCS2002.adm librsync error 107 while in patch cycle
>
> Has anyone tried doing 25 GB+ files with this? The initial backup
> works fine, but trying to run it again to get the differences fails.
> Maybe I'll take my vacation soon and I can look at the code then.
> I've tried understanding what librsync is doing, but I am a novice
> programmer at best, so it will probably take me some time.
>
> Clint Silvester