
Re: [rdiff-backup-users] "AF_UNIX path too long" error


From: Andrew K. Bressen
Subject: Re: [rdiff-backup-users] "AF_UNIX path too long" error
Date: Tue, 10 Jun 2003 23:31:42 -0400
User-agent: Emacs Gnus


Alan <address@hidden> writes:
> Well, both server and client are up to .11.4 now, but I'm getting the
> following error:
>
> Executing rdiff
> Warning: Metadata file not found.
> Metadata will be read from filesystem.


I got that warning as well after upgrading, but it went away after
the first run, so I assumed that the new version was creating the
metadata that the previous version didn't store.
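
If you want to double-check that the new version really is writing
metadata now, the files should be visible under rdiff-backup-data on
the target; at least on my 0.11.x install they show up as
mirror_metadata snapshots/diffs, so something like this (with
/backup/target standing in for your destination directory) should
list them:

    # list the metadata files rdiff-backup has written since the upgrade
    ls -l /backup/target/rdiff-backup-data/mirror_metadata*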

However, rdiff-backup did not crash out during the run; it complained
and continued on, leaving me with a successful backup (I think...). 

From your mail, I'm unclear on whether those errors terminated the
run for you. I don't have a python traceback in my logs of stderr,
just the complaints quoted above.

While I haven't yet tried a restore of any data, some quick checks
on current changed (i.e., since the upgrade) and unchanged (since
before I went from 0.10 to 0.11.4) versions of files in my backup
directories seem to imply that all is well.
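
If anyone wants a stronger check than eyeballing directory listings,
restoring a single file to a scratch location and diffing it against
the live copy is quick. The paths below are just placeholders, and -r
is shorthand for --restore-as-of, which I believe is in 0.11.4 (check
your man page to be sure):

    # pull one file back out of the backup as of the latest session
    rdiff-backup -r now /backup/target/etc/fstab /tmp/fstab.restored
    # compare it with the live copy
    diff /tmp/fstab.restored /etc/fstab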

> I don't see a way to not regress and just start a fresh backup, or
> delete just that one last failed backup.

When you say fresh, do you mean from scratch?
I.e., you want rdiff-backup to nuke the existing data and start over?
(If so, see the sketch at the end of this message.)

Or do you mean you want rdiff-backup to salvage what it can from
the current target directories and move forward from there? 

If it did that, it would have to warn on a restore that there was a
discontinuity in its history, and that anything from before the
start-over point was suspect. That would be an interesting
fault-tolerance feature for dealing with partially corrupted backups.
Myself, though, if my backups were partly corrupt I'd be more likely
to start over with a new set than to try to reuse chunks of an old
one; I suppose that could depend on just how much data one has, and
how fast a link.
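
Anyway, if starting completely over is what you're after, the
brute-force route is to move (or delete) the old target out of the
way and kick off a new backup; rdiff-backup will then treat the
destination as empty and build a fresh mirror and history. The paths
and hostname below are only placeholders:

    # park the old backup set and start a brand-new one
    mv /backup/target /backup/target.old
    rdiff-backup alan@client.example.com::/home /backup/target

As for dropping just the one failed session, I believe the 0.11.x
series has a --check-destination-dir switch that regresses the target
back to the last good state, but I haven't used it myself, so check
the man page for your version.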









