Re: [rdiff-backup-users] State of the rdiff-backup project

From: Frank Crawford
Subject: Re: [rdiff-backup-users] State of the rdiff-backup project
Date: Sat, 15 Aug 2015 20:17:22 +1000


I don't think it is a major issue for you unless you are regularly linking and unlinking files.  While I haven't studied the code, what I believe the comment is talking about is that when an archive is first made, rdiff-backup will reproduce hard links if they exist in the source, unless --no-hard-links is specified.

However, during a regression, if the change being undone was the deletion of one of the linked files, it will not relink it, but will recreate the file as a separate copy.  This is not surprising, as finding the related linked files is a very hard problem that would involve searching the entire archive each time.

However, these days, hard links are not that common, as most people prefer symbolic links.

Also, ultimately, when you finally expire old archives, both copies of the file will be removed, so you will not lose the space long-term.
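If you did want to reclaim the duplicated space by hand sooner, one approach is to compare file contents and replace an identical copy with a hard link.  A minimal sketch, demonstrated on a throwaway directory (the file names here are made up for illustration; do not run anything like this against a real rdiff-backup repository without verifying each pair first, since identical content does not prove the files were ever linked):

```shell
# Set up a throwaway directory with a duplicated file to stand in
# for a copy that a regression left un-linked.
tmp=$(mktemp -d)
printf 'same data\n' > "$tmp/one"
cp "$tmp/one" "$tmp/two"          # duplicate content, separate inode

# Only relink when the contents really are byte-identical:
if cmp -s "$tmp/one" "$tmp/two"; then
    ln -f "$tmp/one" "$tmp/two"   # replace the copy with a hard link
fi

# The link count on the surviving inode is now 2 (GNU stat syntax):
links=$(stat -c %h "$tmp/one")
echo "link count: $links"
rm -r "$tmp"
```

The `cmp -s` guard is the important part: `ln -f` silently discards the destination, so you want to be certain both names hold the same data before collapsing them onto one inode.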

If you do want to see how common links are on your computer you can run something like:

find / -type f -links +1 -ls

and then try it on the areas where you usually have changes, e.g. /home.
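To put a rough number on how much data sits behind multiply-linked files, you can sum the sizes once per inode (two names for one inode store the data only once).  A sketch using GNU find's -printf, demonstrated on a throwaway directory; to measure for real, point the find at your repository instead:

```shell
# Throwaway directory with one hard-linked pair for demonstration:
tmp=$(mktemp -d)
printf 'hello\n' > "$tmp/a"       # 6 bytes of data
ln "$tmp/a" "$tmp/b"              # same inode, link count 2

# Print "inode size" for each multiply-linked file, deduplicate by
# inode so shared data is counted once, then sum the sizes:
dup_bytes=$(find "$tmp" -type f -links +1 -printf '%i %s\n' \
            | sort -u \
            | awk '{total += $2} END {print total}')
echo "$dup_bytes bytes stored once but reachable via multiple names"
rm -r "$tmp"
```

Without the `sort -u` step, the two directory entries for the same inode would be counted twice and the total would overstate the real disk usage.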


On Sat, 2015-08-15 at 10:30 +0200, Yves Martin wrote:

I just read "regress.py" and I am concerned by this comment:
Currently this does not recover hard links.  This may make the
regressed directory take up more disk space, but hard links can still
be recovered.

As I use rdiff-backup for my own laptop backups, it "often" fails and
regresses for many reasons (no more disk space, going into sleep state...)

So do I understand correctly that there is a potential disk-space leak in
my repository after multiple regressions leaving "non-recovered hard
links"?

If so, how should I evaluate this disk-space loss and possibly recover
it "by hand"?

