From: Eric Blake
Subject: Re: [PATCH v3 3/3] nbd/server: Allow MULTI_CONN for shared writable exports
Date: Mon, 2 May 2022 16:12:37 -0500
User-agent: NeoMutt/20220415-26-c08bba

On Fri, Apr 29, 2022 at 02:49:35PM +0200, Kevin Wolf wrote:
...
> > Or a multi-pathed connection to network storage, where one QEMU
> > process accesses the network device, but those accesses may
> > round-robin which server they reach, and where any caching at an
> > individual server may be inconsistent with what is seen on another
> > server unless flushing is used to force the round-robin access to
> > synchronize between the multi-path views.
> 
> I don't think this is a realistic scenario. It would mean that you
> successfully write data to the storage, and when you then read the same
> location, you get different data back. This would be inconsistent even
> with a single client. So I'd call this broken storage that should be
> replaced as soon as possible.
> 
> I could imagine problems of this kind with two separate connections to
> the network storage, but here all the NBD clients share a single
> BlockBackend, so for the storage they are a single connection.

I like that chain of reasoning.

> 
> > > In fact, I don't think we even need the flush restriction from the NBD
> > > spec. All clients see the same state (that of the NBD server
> > > BlockBackend) even without anyone issuing any flush. The flush is only
> > > needed to make sure that cached data is written to the backing storage
> > > when writeback caches are involved.
> > > 
> > > Please correct me if I'm misunderstanding something here.
> > 
> > Likewise me, if I'm being overly cautious.
> > 
> > I can certainly write a simpler v4 that just always advertises
> > MULTI_CONN if we allow more than one client, without any knob to
> > override it; it's just that it is harder to write a commit message
> > justifying why I think it is safe to do so.
> 
> Having an explicit option doesn't hurt, but it's the reasoning in the
> commit message that feels wrong to me.
> 
> We could consider changing "auto" to advertise MULTI_CONN even for
> writable exports. There might still be a good reason not to do this by
> default, though, because of the NBD clients. I'm quite sure that the
> backend won't make any trouble, but client might if someone else is
> writing to the same image (this is why we require an explicit
> share-rw=on for guest devices in the same case).

If your worry is about a client trying to determine if writing to an
NBD server is going to race with some external process writing to the
direct image, I don't see how not advertising MULTI_CONN will make
things safer - the NBD client to qemu-nbd will still be going through
a single backend, and that race is present even if there is only one
NBD client.  The point of MULTI_CONN is informing the client that it
can open multiple sockets and see a consistent view across all of
them.  In your scenario of the server competing with some external
process over the underlying data file, that competition is not
controlled by how many NBD clients connect to the server, but by the
external process having access at the same time the server has access
through its single BlockBackend; the risk would be just the same if
MULTI_CONN were not advertised and the client limited itself to one
NBD connection.
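
Purely as an illustration (not part of this series): a client using
libnbd could act on the bit roughly as below; the URI and the
connection count are made up for the example.

/* Sketch: open one connection, check whether the server advertised
 * NBD_FLAG_CAN_MULTI_CONN, and only plan parallel sockets if it did.
 * Build with: cc sketch.c $(pkg-config --cflags --libs libnbd) */
#include <stdio.h>
#include <stdlib.h>
#include <libnbd.h>

int main(void)
{
    struct nbd_handle *nbd = nbd_create();

    if (!nbd || nbd_connect_uri(nbd, "nbd://localhost/export") == -1) {
        fprintf(stderr, "%s\n", nbd_get_error());
        exit(EXIT_FAILURE);
    }

    /* 1 if NBD_FLAG_CAN_MULTI_CONN was advertised, 0 if not, -1 error */
    int multi = nbd_can_multi_conn(nbd);
    int nr_conns = multi > 0 ? 4 : 1;

    printf("MULTI_CONN advertised: %s; using %d connection(s)\n",
           multi > 0 ? "yes" : "no", nr_conns);

    nbd_shutdown(nbd, 0);
    nbd_close(nbd);
    return 0;
}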

If we can argue that our single BlockBackend point of access is safe
enough to default to advertising MULTI_CONN for writable connections
(when we support parallel clients), then exposing an OnOffAuto knob is
overkill.  I'm not even sure I can envision a case where needing to
not advertise the bit would matter to a client (clients are supposed
to ignore unknown feature bits).
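
To make "auto" concrete, what I have in mind boils down to something
like the sketch below; the function and parameter names are mine, not
what the patch actually adds:

/* Hypothetical helper, assuming QEMU's OnOffAuto from
 * qapi/qapi-types-common.h.  "max_connections == 1" stands in for
 * "only a single client may connect". */
#include <stdbool.h>
#include <stdint.h>
#include "qapi/qapi-types-common.h"

static bool nbd_export_advertise_multi_conn(OnOffAuto multi_conn,
                                            bool writable,
                                            uint32_t max_connections)
{
    switch (multi_conn) {
    case ON_OFF_AUTO_ON:
        return true;
    case ON_OFF_AUTO_OFF:
        return false;
    case ON_OFF_AUTO_AUTO:
    default:
        /* Read-only exports have always been safe to advertise;
         * writable exports are safe because every client goes through
         * the one BlockBackend, so only hold back when parallel
         * clients are not allowed in the first place. */
        return !writable || max_connections != 1;
    }
}

That is, read-only exports keep advertising the bit as before, and
writable exports advertise it whenever more than one client may
connect.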

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org



