From: Caleb Ristvedt
Subject: Re: guix gc takes long in "deleting unused links ..."
Date: Wed, 06 Feb 2019 15:32:51 -0600
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/26.1 (gnu/linux)

Ludovic Courtès <address@hidden> writes:

> Note that the database would need to contain hashes of individual files,
> not just store items (it already contains hashes of store item nars).

Indeed! Although last night I thought of a way in which it would only
need to contain hashes of *unique* individual files; see below.

> This issue was discussed a while back at
> <https://issues.guix.info/issue/24937>.  Back then we couldn’t agree on
> a solution, but it’d be good to have your opinion with your fresh mind!

The main thing that comes to mind is making the amount of time
required for deleting links scale with the number of things being
deleted rather than with the total number of "things": O(m) instead of
O(n), so to speak. I actually hadn't even considered things like disk
access patterns.

In my mind, the ideal situation is this: we get rid of .links and
instead keep a mapping from hashes to inodes in the database.
Deduplication would then involve just creating a hardlink to the
corresponding inode, and the link-deleting phase would become entirely
unnecessary: when the last hardlink is deleted, the refcount drops to
0 automatically. Unfortunately, this isn't possible, because AFAIK
there is no way to create a hardlink to an inode directly; it always
has to go through another hardlink. Presumably the necessary system
call doesn't exist because there would be permissions / validation
issues (if anyone happens to know of a function that does something
like this, I'd love to hear about it!).
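
For contrast, here is the shape deduplication has to take today:
link(2) wants a *path* to an existing directory entry as its source,
so everything goes through a name under .links. A minimal C sketch,
with made-up paths and the usual link-then-rename dance (not the
daemon's actual code):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main (void)
{
  /* Hypothetical names: a link-farm entry named after the file's
     content hash, and a store file found to be a duplicate of it.  */
  const char *link_path  = "/gnu/store/.links/1abc...";
  const char *store_file = "/gnu/store/xyz-foo/bin/foo";
  const char *temp_name  = "/gnu/store/xyz-foo/bin/.foo.tmp";

  /* link(2) resolves its source through the directory tree; there is
     no variant that takes a raw inode number, which is what the
     hash->inode scheme above would need.  */
  if (link (link_path, temp_name) == -1)
    {
      perror ("link");
      return EXIT_FAILURE;
    }

  /* Atomically replace the duplicate with the hardlink.  */
  if (rename (temp_name, store_file) == -1)
    {
      perror ("rename");
      return EXIT_FAILURE;
    }
  return EXIT_SUCCESS;
}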

So the second-best approach, to me, would be to keep a mapping from
inodes to hashes in the database. Then, when garbage collection
determines that a file is to be deleted and the refcount for its inode
is 2, we can obtain the file's inode from stat(), look up the hash by
that inode, and from the hash derive the corresponding link filename
and remove it. After that, the inode->hash association can be removed
from the database.
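
Concretely, the delete path might look roughly like the sketch below,
assuming a hypothetical two-column table, InodeHashes(ino INTEGER
PRIMARY KEY, hash TEXT), holding the same textual hash spelling that
.links file names use. None of these names exist in the daemon today,
and error handling is mostly elided:

#include <limits.h>
#include <sqlite3.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Delete FILE from the store.  If its only other hardlink is the
   .links entry, remove that entry too and forget the mapping.  */
static int
delete_and_collect_link (sqlite3 *db, const char *file)
{
  struct stat st;
  if (lstat (file, &st) == -1)
    return -1;

  if (S_ISREG (st.st_mode) && st.st_nlink == 2)
    {
      sqlite3_stmt *stmt;

      /* Look up the content hash by inode number.  */
      sqlite3_prepare_v2 (db,
                          "SELECT hash FROM InodeHashes WHERE ino = ?;",
                          -1, &stmt, NULL);
      sqlite3_bind_int64 (stmt, 1, (sqlite3_int64) st.st_ino);
      if (sqlite3_step (stmt) == SQLITE_ROW)
        {
          char link_path[PATH_MAX];
          snprintf (link_path, sizeof link_path,
                    "/gnu/store/.links/%s",
                    (const char *) sqlite3_column_text (stmt, 0));
          unlink (link_path);       /* drop the .links entry */
        }
      sqlite3_finalize (stmt);

      /* The inode is about to disappear, so drop its row.  */
      sqlite3_prepare_v2 (db,
                          "DELETE FROM InodeHashes WHERE ino = ?;",
                          -1, &stmt, NULL);
      sqlite3_bind_int64 (stmt, 1, (sqlite3_int64) st.st_ino);
      sqlite3_step (stmt);
      sqlite3_finalize (stmt);
    }

  return unlink (file);
}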

I think this is a reasonable approach, as such a table in the database
shouldn't take up much more disk space than .links does: 8 bytes for an
inode, and 32 bytes for the hash (or 52 if we keep the hash in text
form), for a total of 40 or 60 bytes per entry. Based on the numbers
from the linked discussion (10M entries), that's around 400MB or 600MB,
plus whatever extra space sqlite uses, kept on the disk. If that's
considered too high, we could store only the hashes of relatively large
files in the database and fall back to hashing at delete-time for the
others.

The main limitation is the lack of portability of inodes. That is,
when copying a store across filesystems, this table would need to be
updated. It also requires that everything in the store live on the
same filesystem, though that could be fixed by keying the hash lookup
on (inode, device number) pairs instead of just the inode, as sketched
below. In that case copying across filesystems would work, though I
think it still wouldn't work for copying across systems.
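
The composite key would just widen the lookup; the field names below
come from stat(2), while the struct itself is made up:

#include <sys/stat.h>
#include <sys/types.h>

/* st_dev and st_ino together identify a file uniquely on one running
   system, but neither number survives a copy to another filesystem or
   another machine.  */
struct link_key
{
  dev_t dev;   /* device (filesystem) number, from stat.st_dev */
  ino_t ino;   /* inode number within that filesystem          */
};

static struct link_key
link_key_of (const struct stat *st)
{
  struct link_key key = { st->st_dev, st->st_ino };
  return key;
}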

How does that sound?


> The guix-daemon child that handles the session would immediately get
> SIGHUP and terminate (I think), but that’s fine: it’s just that files
> that could have been removed from .links will still be there.

Turns out it's SIGPOLL, actually, but yep. There's a checkInterrupt()
that gets run before each attempt to delete a link, and that triggers
the exit.
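
For reference, the pattern is roughly the following (the daemon's real
checkInterrupt() is C++ and throws an exception; this is just a C
illustration of the idea):

#include <signal.h>

static volatile sig_atomic_t interrupted;

static void
on_signal (int sig)
{
  (void) sig;
  interrupted = 1;   /* e.g. SIGPOLL when the client connection dies */
}

/* Polled before each unlink in the link-deletion loop; the loop stops
   as soon as this returns nonzero, leaving the remaining links for
   the next GC run.  */
static int
check_interrupt (void)
{
  return interrupted;
}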

- reepca


