qemu-devel

Re: [Qemu-devel] [RFC PATCH v2 0/4] port network layer onto glib


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [RFC PATCH v2 0/4] port network layer onto glib
Date: Thu, 11 Apr 2013 11:19:22 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Tue, Apr 09, 2013 at 01:10:29PM +0800, liu ping fan wrote:
> On Mon, Apr 8, 2013 at 7:46 PM, Stefan Hajnoczi <address@hidden> wrote:
> > On Tue, Apr 02, 2013 at 05:49:57PM +0800, liu ping fan wrote:
> >> On Thu, Mar 28, 2013 at 9:40 PM, Stefan Hajnoczi <address@hidden> wrote:
> >> > On Thu, Mar 28, 2013 at 09:42:47AM +0100, Paolo Bonzini wrote:
> >> >> On 28/03/2013 08:55, Liu Ping Fan wrote:
> >> >> >    3rd. block layer's AioContext will block other AioContexts on the 
> >> >> > same thread.
> >> >>
> >> >> I cannot understand this.
> >> >
> >> > The plan is for BlockDriverState to be bound to an AioContext.  That
> >> > means each thread is set up with one AioContext.  BlockDriverStates that
> >> > are used in that thread will first be bound to its AioContext.
> >> >
> >> > It's not very useful to have multiple AioContexts in the same thread.
> >> >
> >> But it can be the case that we detach and re-attach different
> >> devices (AioContexts) to the same thread.  I think io_flush is
> >> designed for synchronous waiting, but for NetClientState we need
> >> something else.  So if we use AioContext, is it appropriate to extend
> >> qemu_aio_set_fd_handler() with a readable/writeable interface?
> >
> > Devices don't have AioContexts, threads do.  When you bind a device to
> > an AioContext the AioContext already exists independent of the device.
> >
> Oh, yes.  Let me put it this way: we switch devices among different
> threads.  If a NetClientState happens to live on the same thread as a
> BlockDriverState, it will not be responsive until the BlockDriverState
> has finished its in-flight work.

It's partially true that devices sharing an event loop may be less
responsive.  That's why we have the option of a 1:1 device-to-thread
mapping.

But remember that QEMU code is (meant to be) designed for event loops.
Therefore, it must not block and should return to the event loop as
quickly as possible.  So a block and net device in the same event loop
shouldn't inconvenience each other dramatically if the device-to-thread
mappings are reasonable given the host machine, workload, etc.
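
To make the 1:1 mapping concrete: each device thread just runs its own
event loop, and all of that device's fd watches and timers are attached
to that thread's context.  In plain glib terms it looks roughly like
this (an illustrative sketch, not QEMU code; device_thread() and the
attached sources are made up):

#include <glib.h>

/* One event loop per thread: sources attached to this thread's context
 * can only delay each other, never devices running in other threads. */
static gpointer device_thread(gpointer opaque)
{
    GMainContext *ctx = g_main_context_new();
    GMainLoop *loop = g_main_loop_new(ctx, FALSE);

    g_main_context_push_thread_default(ctx);

    /* ... attach this device's fd watches and timers to ctx here ... */

    g_main_loop_run(loop);          /* runs until g_main_loop_quit() */

    g_main_context_pop_thread_default(ctx);
    g_main_loop_unref(loop);
    g_main_context_unref(ctx);
    return NULL;
}

int main(void)
{
    GThread *t = g_thread_new("device-loop", device_thread, NULL);
    g_thread_join(t);
    return 0;
}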

> > Unfortunately I don't understand your question about io_flush and
> > readable/writeable qemu_aio_set_fd_handler().
> >
> As for readable/writable, I mean something like IOCanReadHandler.  If a
> NetClientState is not readable, it simply stops polling for the G_IO_IN
> event; it does not block.  io_flush, on the other hand, blocks for
> pending AIO operations.  These behaviors are different, so I suggest
> extending qemu_aio_set_fd_handler() with readable/writeable handlers.

I see, thanks for explaining.
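
Right.  In plain glib the "readable" behaviour you describe would look
roughly like the sketch below: the G_IO_IN watch is only armed while the
receiver can take more data, so nothing ever blocks.  (NetPeer,
net_can_receive() and net_peer_resume() are made-up names for
illustration, not QEMU APIs.)

#include <glib.h>
#include <unistd.h>

typedef struct {
    GIOChannel *chan;
    guint watch_id;     /* 0 while the G_IO_IN watch is disarmed */
    gsize queued;       /* pretend backlog owned by the device */
} NetPeer;

/* Stand-in for IOCanReadHandler: can the device take more data? */
static gboolean net_can_receive(NetPeer *peer)
{
    return peer->queued < 64 * 1024;
}

static gboolean fd_readable(GIOChannel *chan, GIOCondition cond, gpointer opaque)
{
    NetPeer *peer = opaque;
    char buf[4096];
    ssize_t len = read(g_io_channel_unix_get_fd(chan), buf, sizeof(buf));

    if (len > 0) {
        peer->queued += len;        /* hand data to the device, never block */
    }

    if (!net_can_receive(peer)) {
        peer->watch_id = 0;         /* stop polling G_IO_IN instead of blocking */
        return FALSE;
    }
    return TRUE;                    /* keep watching G_IO_IN */
}

/* Called once the device drains its queue and can take data again. */
static void net_peer_resume(NetPeer *peer)
{
    if (peer->watch_id == 0 && net_can_receive(peer)) {
        peer->watch_id = g_io_add_watch(peer->chan, G_IO_IN, fd_readable, peer);
    }
}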

In another thread Kevin suggested a solution:

Basically, io_flush() and qemu_aio_wait() should be removed.  Instead
we'll push the synchronous wait into the block layer, which is the only
user.

We can do that by introducing a .bdrv_drain() callback which is similar
to io_flush().  Then bdrv_drain_all(), which currently uses
qemu_aio_wait(), can change to calling .bdrv_drain() and then executing
event loop iterations until the outstanding requests have completed.

In other words, the event loop shouldn't know about io_flush().
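
Roughly like this inside block.c (just a sketch of the idea, not the
actual patches; here the hypothetical .bdrv_drain() returns true while
the driver still has requests in flight, similar in spirit to what
io_flush() reports today):

void bdrv_drain_all(void)
{
    BlockDriverState *bs;
    bool busy;

    do {
        busy = false;

        /* The synchronous wait now lives in the block layer: ask each
         * driver whether it still has requests in flight. */
        QTAILQ_FOREACH(bs, &bdrv_states, list) {
            if (bs->drv && bs->drv->bdrv_drain && bs->drv->bdrv_drain(bs)) {
                busy = true;
            }
        }

        /* Run an event loop iteration; the event loop itself no longer
         * knows anything about io_flush(). */
        if (busy) {
            aio_poll(qemu_get_aio_context(), true);
        }
    } while (busy);
}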

I will try to send patches for this today.

Stefan


