From: Christian Schoenebeck
Subject: Re: [PATCH 1/1] 9pfs: avoid iterator invalidation in v9fs_mark_fids_unreclaim
Date: Wed, 28 Sep 2022 19:24:03 +0200
On Tuesday, 27 September 2022 21:47:02 CEST Greg Kurz wrote:
> On Tue, 27 Sep 2022 19:14:33 +0200
>
> Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> > On Tuesday, 27 September 2022 15:05:13 CEST Linus Heckemann wrote:
> > > One more thing has occurred to me. I think the reclaiming/reopening
> > > logic will misbehave in the following sequence of events:
> > >
> > > 1. QEMU reclaims an open fid, losing the file handle
> > > 2. The file referred to by the fid is replaced with a different file
> > >    (e.g. via rename or symlink) outside QEMU
> > > 3. The file is accessed again by the guest, causing QEMU to reopen a
> > >    _different file_ from before without the guest having performed any
> > >    operations that should cause this to happen.
> > >
> > > This is neither introduced nor resolved by my changes. Am I overlooking
> > > something that avoids this (be it documentation that directories exposed
> > > via 9p should not be touched by the host), or is this a real issue? I'm
> > > thinking one could at least detect it by saving inode numbers in
> > > V9fsFidState and comparing them when reopening, but recovering from such
> > > a situation seems difficult.
> >
> > Well, in that specific scenario, when a rename/move happens outside of
> > QEMU, then yes, this might unfortunately happen. The point of this
> > "reclaim fid" stuff is to deal with the fact that systems impose an
> > upper limit on the max. number of open file descriptors a process may
> > hold at a time. And on some systems like macOS I think that limit is
> > quite low by default (like 100?).
> >
> > There is also another pending issue that affects pure inner-guest
> > behaviour: the infamous use-after-unlink() usage pattern:
> > https://wiki.qemu.org/Documentation/9p#Implementation_Plans
> > https://gitlab.com/qemu-project/qemu/-/issues/103
> >
> > It would make sense to look at how other file servers deal with the
> > limit on the max. number of open file descriptors before starting to
> > just fight the symptoms. This whole reclaim fid business is a PITA in
> > general.
>
> Yes, this reclaim code is just a best-effort attempt to not
> starve file descriptors. But since its implementation is path
> based, it comes with the by-design limitation that nothing should
> modify the backing fs outside of the current 9p session.
Sure.
> Note: just like the infamous use-after-unlink() pattern (I love
> the wording), you can get this with "pure inner-guest behaviour"
> using two devices with overlapping backends (shoot-in-the-foot
> setup) :-)
True.
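For readers unfamiliar with it, the pattern boils down to this (a trivial
standalone illustration, nothing 9p-specific):

/* The use-after-unlink() pattern in a nutshell: a file remains usable
 * through an already-open fd after its last directory entry is gone. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("scratch.tmp", O_RDWR | O_CREAT | O_EXCL, 0600);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    unlink("scratch.tmp");           /* no path refers to the file anymore */
    write(fd, "still works\n", 12);  /* ...but the open fd still does      */
    close(fd);                       /* only now is the inode released     */
    return 0;
}

A path-based reclaim scheme obviously cannot reopen such a file once its
descriptor has been given up, which is exactly why this pattern clashes
with the reclaim logic.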
> Recovering from lost state is impossible but the server should
> at least try to detect that and return EIO to the client, pretty
> much like any storage device is expected to do if possible.
Yeah, I agree.
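For illustration, a detection along the lines Linus suggested might look
roughly like this (a minimal sketch; the struct and function names here are
hypothetical, not the actual V9fsFidState layout):

/* Hypothetical sketch: detect that a path-based reopen resolved to a
 * different file than the one originally opened, and report EIO. */
#include <sys/stat.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

struct fid_identity {
    dev_t dev;   /* device the file lived on when first opened */
    ino_t ino;   /* inode number recorded at first open */
};

/* Reopen 'path' after reclaim; fail with -EIO if it no longer names
 * the same underlying file (e.g. replaced via rename or symlink). */
static int reopen_checked(const char *path, const struct fid_identity *id,
                          int flags)
{
    struct stat st;
    int fd = open(path, flags);

    if (fd < 0) {
        return -errno;
    }
    /* fstat() the fd we actually opened, to avoid a TOCTOU race
     * between the open and the identity check. */
    if (fstat(fd, &st) < 0) {
        int err = -errno;
        close(fd);
        return err;
    }
    if (st.st_dev != id->dev || st.st_ino != id->ino) {
        close(fd);
        return -EIO;  /* identity changed: surface an I/O error */
    }
    return fd;
}

Note that inode numbers can be recycled after a delete, so a check like
this narrows the window rather than closing it completely.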
Nevertheless, I just had a glance at how this is handled in Samba, and one
important thing they do is try to increase the (hard & soft) limits:
https://github.com/samba-team/samba/blob/master/source3/lib/util.c#L1320
Which makes sense; I now remember commonly doing that on macOS as well, due
to Apple's very low default limit there.
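In essence that boils down to something like the following (a simplified
sketch of the idea, not Samba's actual code):

#include <sys/resource.h>

/* Raise the soft RLIMIT_NOFILE towards 'want', bumping the hard limit
 * too if we have the privileges for it. */
static int raise_fd_limit(rlim_t want)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) < 0) {
        return -1;
    }
    if (want > rl.rlim_max) {
        /* Raising the hard limit needs root/CAP_SYS_RESOURCE. */
        struct rlimit desired = { .rlim_cur = want, .rlim_max = want };
        if (setrlimit(RLIMIT_NOFILE, &desired) == 0) {
            return 0;
        }
    }
    /* Fall back: raise the soft limit as far as the hard limit allows.
     * (Beware that on macOS the soft limit additionally cannot exceed
     * OPEN_MAX, even when rlim_max reads as RLIM_INFINITY.) */
    rl.rlim_cur = (want < rl.rlim_max) ? want : rl.rlim_max;
    return setrlimit(RLIMIT_NOFILE, &rl);
}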
Samba's anticipated default limit is a max. of 10k open files BTW, which is
quite a good basis for not getting into these waters in the first place.
Again, not that I would ignore that problem space.
Best regards,
Christian Schoenebeck