Re: [Qemu-devel] [RFC PATCH 0/2] port network layer onto glib


From: mdroth
Subject: Re: [Qemu-devel] [RFC PATCH 0/2] port network layer onto glib
Date: Wed, 13 Mar 2013 12:06:06 -0500
User-agent: Mutt/1.5.21 (2010-09-15)

On Wed, Mar 13, 2013 at 05:21:02PM +0100, Paolo Bonzini wrote:
> > On 13/03/2013 13:34, Anthony Liguori wrote:
> > Paolo Bonzini <address@hidden> writes:
> > 
> >> On 13/03/2013 06:59, Liu Ping Fan wrote:
> >>> This series aims to port the network backend onto glib, and to
> >>> prepare for making the network layer multi-threaded.
> >>> The overall aim and plan are briefly documented at
> >>> http://wiki.qemu.org/Features/network_reentrant
> >>>
> >>> In this series, each NetClientState is attached to a GSource.
> >>> At first I used AioContext instead of GSource, but after discussion
> >>> I think that with GSource we can integrate with glib more closely.
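
(For concreteness, attaching a backend fd to a glib context generally means a
custom GSource along these lines. This is only a sketch of the glib mechanism,
not the actual patches; NetSourceState and the net_source_* names are made up
for illustration.)

  #include <glib.h>

  /* Sketch only: NetSourceState and the net_source_* names are
   * invented here, not taken from the patches. */
  typedef struct NetSourceState {
      GSource source;   /* must be the first member */
      GPollFD fd;       /* the backend fd, e.g. a tap fd */
  } NetSourceState;

  static gboolean net_source_prepare(GSource *src, gint *timeout)
  {
      *timeout = -1;    /* no timeout of our own; wait for fd readiness */
      return FALSE;
  }

  static gboolean net_source_check(GSource *src)
  {
      NetSourceState *s = (NetSourceState *)src;
      return (s->fd.revents & s->fd.events) != 0;
  }

  static gboolean net_source_dispatch(GSource *src, GSourceFunc cb,
                                      gpointer opaque)
  {
      return cb ? cb(opaque) : TRUE;   /* TRUE keeps the source alive */
  }

  static GSourceFuncs net_source_funcs = {
      net_source_prepare, net_source_check, net_source_dispatch, NULL,
  };

  GSource *net_source_new(int fd, GMainContext *ctx)
  {
      NetSourceState *s =
          (NetSourceState *)g_source_new(&net_source_funcs, sizeof(*s));

      s->fd.fd = fd;
      s->fd.events = G_IO_IN | G_IO_HUP | G_IO_ERR;
      g_source_add_poll(&s->source, &s->fd);
      g_source_attach(&s->source, ctx);   /* NULL means the default context */
      return &s->source;
  }
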
> >>
> >> Integrating with glib by itself is pointless.  What is the *benefit*?
> >>
> >> We have a pretty good idea of how to make multithreaded device models
> >> using AioContext, since we are using it for the block layer and
> >> virtio-blk dataplane.  Doing the same work twice, on two different
> >> frameworks, doesn't seem like a very good idea.
> > 
> > Hrm, I had thought on previous threads there was clear agreement that we
> > did not want to use AioContext outside of the block layer.
> > 
> > I think we certainly all agree that moving to a thread aware event loop
> > is a necessary step toward multi-threading.  I think the only question
> > is whether to use AioContext or glib.
> > 
> > AioContext is necessary for the block layer because the block layer
> > still has synchronous I/O.  I think we should aim to replace all sync
> > I/O in the long term with coroutine-based I/O.  That lets us eliminate
> > AioContexts entirely, which is nice, as the semantics are subtle.
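
(To make the coroutine argument concrete, the usual shape is roughly the
following; submit_io_async(), complete_request() and struct Request are
placeholders rather than existing QEMU functions, and the coroutine call
signatures are those of the current API, which may of course change.)

  #include "block/coroutine.h"   /* header location as of this writing */

  struct Request;                               /* placeholder type */
  void submit_io_async(struct Request *req,     /* placeholder async API */
                       void (*cb)(void *, int), void *opaque);
  void complete_request(struct Request *req);   /* placeholder */

  static void io_done(void *opaque, int ret)
  {
      Coroutine *co = opaque;
      qemu_coroutine_enter(co, NULL);   /* resume the yielded coroutine */
  }

  /* The synchronous-looking path becomes a coroutine that yields while
   * the request is in flight, so nothing ever blocks the event loop
   * waiting for completion. */
  static void coroutine_fn do_request(void *opaque)
  {
      struct Request *req = opaque;

      submit_io_async(req, io_done, qemu_coroutine_self());
      qemu_coroutine_yield();           /* reentered from io_done() */
      complete_request(req);
  }

  void start_request(struct Request *req)
  {
      Coroutine *co = qemu_coroutine_create(do_request);
      qemu_coroutine_enter(co, req);
  }
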
> > 
> > I think that's a solid argument for glib over AioContext.  The former is
> > well understood, documented, and makes unit testing easier.
> 
> I don't see anything particularly subtle in AioContext, except
> qemu_bh_schedule_idle and the flush callback.  The flush callback really
> only has a naming problem; it is a relic of qemu_aio_flush().
> qemu_bh_schedule_idle could disappear if we converted the floppy disk
> drive to AIO; patches existed for that but then the poster disappeared.
> 
> glib's main loop has its share of subtleties (GMainLoop vs.
> GMainContext, anyone?), and AioContext's code is vastly simpler than
> GMainLoop's.  AioContext is also documented and unit tested, with tests
> for both standalone and GSource operation.  Unit tests for AioContext
> users are trivial to write, we have one in test-thread-pool.
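
(For anyone following along, the two modes being exercised look roughly like
this; exact signatures and header paths aside, the point is that the same
AioContext can be driven standalone or as a GSource, and that what glib
actually iterates is a GMainContext, with GMainLoop being only a thin wrapper
around it.)

  #include <glib.h>
  #include <stdbool.h>
  #include "block/aio.h"   /* QEMU header; path as of early 2013 */

  /* Mode 1: standalone, as the dataplane thread does. */
  static void drive_standalone(AioContext *ctx)
  {
      /* blocks until at least one handler or bottom half has run */
      while (aio_poll(ctx, true)) {
          /* keep looping while progress is being made */
      }
  }

  /* Mode 2: the same AioContext attached to a glib GMainContext, as
   * the QEMU main loop does. */
  static void drive_via_glib(AioContext *ctx)
  {
      GMainContext *gctx = g_main_context_new();
      GSource *src = aio_get_g_source(ctx);

      g_source_attach(src, gctx);
      g_source_unref(src);   /* the context holds its own reference */

      for (;;) {             /* until told to stop */
          g_main_context_iteration(gctx, TRUE);
      }
  }
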
> 
> > Did you have a specific concern with using glib vs. AioContext?  Is it
> > about reusing code in the block layer where AioContext is required?
> 
> In the short term yes, code duplication is a concern.  We already have
> two implementations of virtio.  I would like the dataplane virtio code to
> grow everything else that needs to be in all dataplane-style devices
> (for example, things such as setting up the guest<->host notifiers), and
> the hw/virtio.c API implemented on top of it (or dead altogether).
> Usage of AioContext is pretty much forced by the block layer.
> 
> However, I'm more worried by overhead.  GMainLoop is great because
> everybody plugs into it.  It enabled the GTK+ front-end, it let us reuse
> GIOChannel for chardev flow control, and one can similarly think of
> integrating Avahi for example.  However, I think it's mostly useful for
> simple-minded non-performance-critical code.  QEMU has worked great in
> almost all scenarios with only one non-VCPU thread, and if we are going
> to move stuff to other threads we should only do that because we want
> performance and control.  I'm not at all confident that GMainLoop can
> provide them.

But isn't there also an effort to make virtio-blk/virtio-net a model for
threaded devices/subsystems in general, as opposed to "accelerators" for
specific use-cases like tap-based backends? I think this is the main
question, because most of the planning seems contingent on that. And it
seems to me that if the answer is no, then we need to consider the fact
that vhost-net seems to serve this purpose already.

If the answer is yes, don't we also need to look at things like interaction
between slirp and a threaded network device? Based on comments in the
other thread, I thought it was agreed that slirp was a particular example
of something that should be rolled into a GMainContext loop as opposed
to an AioContext-based one?

To me this suggests that some event loops will ultimately drive
GMainContext handlers in addition to AioContexts (with the latter perhaps
being driven at a higher priority, with PI mutexes and whatever else that
entails). This is already a requirement for the QEMU main loop, so perhaps
that event loop can be moved to common code to lessen the subtleties
between running in a dataplane thread as opposed to the iothread.

What would be nice is if the difference between the iothread's event
loop and a dataplane (or QMP/VNC/etc) thread's event loop were simply the
set of AioContexts/GMainContexts that it drives. We could do that purely
with AioContexts as well, but that rules out a large class of
backends that offloaded event loops can interact with, such as Chardevs,
so I think modelling how to handle both will provide a threading model
that scales better with other devices/subsystems.
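
(A rough sketch of what I mean, using plain glib priorities; the PI-mutex side
of it is out of scope here, and event_thread is just a made-up name.)

  #include <glib.h>
  #include "block/aio.h"   /* QEMU header; path as of early 2013 */

  /* Hypothetical per-thread loop: the thread's AioContext is attached
   * as a high-priority GSource, and other backends (slirp, chardevs,
   * ...) attach their ordinary GSources to the same GMainContext. */
  static gpointer event_thread(gpointer opaque)
  {
      AioContext *aio_ctx = opaque;
      GMainContext *gctx = g_main_context_new();
      GSource *aio_src = aio_get_g_source(aio_ctx);

      g_source_set_priority(aio_src, G_PRIORITY_HIGH);
      g_source_attach(aio_src, gctx);
      g_source_unref(aio_src);

      /* slirp, chardev and other backend GSources get attached to gctx
       * here, at default priority */

      for (;;) {
          g_main_context_iteration(gctx, TRUE);
      }
      return NULL;
  }
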

> 
> On the contrary, AioContext really does two things (g_poll and bottom
> halves) and does them fast.  For really high-performance scenarios, such
> as the ones virtio-blk-dataplane was written for, I'd be surprised if
> glib's main loop had the same performance as AioContext.  Also,
> AioContext could easily be converted to use epoll, whereas we don't have
> the same level of control over glib's main loop.
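
(Purely as an illustration of what "convert to epoll" would buy: the
per-iteration pollfd rebuild and g_poll() call could become a persistent epoll
set, something like the hypothetical helpers below; none of this is existing
QEMU code, and the aio_epoll_* names are invented.)

  #include <sys/epoll.h>

  int aio_epoll_create(void)
  {
      return epoll_create1(EPOLL_CLOEXEC);
  }

  /* register an fd once instead of rebuilding a pollfd array on every
   * loop iteration */
  int aio_epoll_add(int epfd, int fd, void *handler)
  {
      struct epoll_event ev = {
          .events   = EPOLLIN,
          .data.ptr = handler,   /* dispatch target for this fd */
      };
      return epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
  }

  int aio_epoll_wait(int epfd, int timeout_ms)
  {
      struct epoll_event events[64];
      int i, n = epoll_wait(epfd, events, 64, timeout_ms);

      for (i = 0; i < n; i++) {
          /* dispatch events[i].data.ptr, i.e. the registered handler */
      }
      return n;
  }
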
> 
> Of course I will easily change my mind if I see patches that show the
> contrary. :)
> 
> Paolo
> 