Re: [Qemu-devel] [RFC PATCH 00/17] Support for multiple "AIO contexts"


From: Paolo Bonzini
Subject: Re: [Qemu-devel] [RFC PATCH 00/17] Support for multiple "AIO contexts"
Date: Thu, 27 Sep 2012 09:43:49 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:15.0) Gecko/20120911 Thunderbird/15.0.1

On 27/09/2012 09:11, Kevin Wolf wrote:
> On 26.09.2012 17:48, Paolo Bonzini wrote:
>> On 26/09/2012 16:31, Kevin Wolf wrote:
>>
>>>>> In fact, after removing io_flush, I don't really see what makes AIO
>>>>> fd handlers special any more.
>>>>
>>>> Note that while the handlers indeed aren't that special, there is still
>>>> some magic because qemu_aio_wait() processes bottom halves.
>>>
>>> Do you mean the qemu_bh_poll() call? But the normal main loop does the
>>> same, so I don't see what would be special about it.
>>
>> That's an abstraction leak, IMHO.  After this series the normal main
>> loop no longer needs to call bottom halves.
> 
> This is something that I find hard to believe. Bottom halves aren't an
> invention of the block layer

Actually they are; they were introduced by commit 83f6409 (async file
I/O API, 2006-08-01).

> but are used throughout qemu.

>> (Most usage of bottom halves in hw/* is pointless and also falls under
>> the category of leaked abstractions.  The other uses could also in
>> principle be called at the wrong time inside monitor commands.  Many
>> would be served better by a thread pool if it wasn't for our beloved big
>> lock).
> 
> Possibly, but with the current infrastructure, I'm almost sure that most
> of them are needed and you can't directly call them. Nobody uses BHs
> just for fun.

Most of them are for hw/ptimer.c and are useless wrappers for a
(callback, opaque) pair.  The others are useful, and indeed not used
just for fun.
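
Roughly what I mean by a "wrapper for a (callback, opaque) pair" (just a
sketch with made-up names, not the actual hw/ptimer.c code):

    #include "main-loop.h"   /* qemu_bh_new()/qemu_bh_schedule(); the header
                                name may differ depending on the tree */

    typedef struct MyTimerState {        /* made-up device state */
        QEMUBH *bh;
        int ticks;
    } MyTimerState;

    static void my_timer_tick(void *opaque)
    {
        MyTimerState *s = opaque;
        s->ticks++;                      /* the actual work */
    }

    static void my_timer_init(MyTimerState *s)
    {
        /* the BH carries nothing but the (callback, opaque) pair */
        s->bh = qemu_bh_new(my_timer_tick, s);
    }

    static void my_timer_fire(MyTimerState *s)
    {
        /* scheduling it is just a deferred my_timer_tick(s) */
        qemu_bh_schedule(s->bh);
    }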

But here is how they typically behave: the VCPU triggers a bottom half,
which wakes up the iothread, which waits until the VCPU frees the global
mutex.  So they are really a shortcut for "do this as soon as we are
done with this subsystem".  If locking were more fine-grained, you might
as well wrap the bottom half handler with a lock/unlock pair, and move
the bottom halves to a thread pool.  It would allow multiple subsystems
to process their bottom halves in parallel, for example.
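
If it helps, this is the kind of thing I have in mind (only a sketch;
MySubsystem and its per-subsystem lock are made up, since today
everything runs under the big lock):

    #include "qemu-thread.h"   /* QemuMutex, qemu_mutex_lock()/unlock() */

    typedef struct MySubsystem {         /* made-up example state */
        QemuMutex lock;                  /* fine-grained, per-subsystem */
        void (*cb)(void *opaque);        /* the real bottom half body */
        void *opaque;
    } MySubsystem;

    /* A thread-pool worker could run this instead of the iothread: the
     * handler is bracketed by its subsystem's lock, so two subsystems
     * can process their bottom halves at the same time. */
    static void locked_bh_handler(void *arg)
    {
        MySubsystem *s = arg;

        qemu_mutex_lock(&s->lock);
        s->cb(s->opaque);
        qemu_mutex_unlock(&s->lock);
    }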

Bottom halves are more fundamental for AIO; see, for example, how they
extend the lifetime of AIOCBs.
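
The pattern looks more or less like this (simplified, with approximate
names; MyAIOCB stands in for a concrete AIOCB type):

    #include "qemu-aio.h"   /* BlockDriverAIOCB, QEMUBH, qemu_aio_release();
                               header names are approximate in this sketch */

    typedef struct MyAIOCB {
        BlockDriverAIOCB common;         /* carries the caller's cb/opaque */
        QEMUBH *bh;
        int ret;
    } MyAIOCB;

    static void my_aio_bh_cb(void *opaque)
    {
        MyAIOCB *acb = opaque;

        /* The AIOCB has to stay alive until this bottom half runs: the
         * BH is what delivers the completion and finally releases it. */
        acb->common.cb(acb->common.opaque, acb->ret);
        qemu_bh_delete(acb->bh);
        qemu_aio_release(acb);
    }

    static void my_aio_complete(MyAIOCB *acb, int ret)
    {
        /* defer the completion to a bottom half instead of calling back
         * directly from deep inside the driver */
        acb->ret = ret;
        acb->bh = qemu_bh_new(my_aio_bh_cb, acb);
        qemu_bh_schedule(acb->bh);
    }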

>> It feels like it's the monitor.  But I think in general it is better if
>> as little QEMU infrastructure as possible is used by the block layer,
>> because you end up with impossibly-knotted dependencies.  Using things
>> such as GSource to mediate between the block layer and everything else
>> is also better with an eye to libqblock.
> 
> I guess my expectation was that if GSource is an improvement for AIO fd
> handlers, it would also be an improvement for the rest of the fd handlers.

It would, but you would need a separate GSource, because of the
different Windows implementations for the two.

> It's well known that qemu as a whole suffers from the NIH syndrome, but
> should we really start introducing another NIH wall between the block
> layer and the rest of qemu?

I don't see it as a NIH wall.  By replacing
qemu_bh_update_timeout()+qemu_bh_poll() with a GSource, you use glib for
interoperability instead of ad hoc code.  Basing libqblock AIO support
on GSources would be quite the opposite of NIH, indeed.
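
For what it's worth, the glue is not much more than the usual
prepare/check/dispatch triple.  A very rough sketch (bh_scheduled() is a
made-up helper standing in for "is any bottom half ready to run?", and
the attach function is only illustrative):

    #include <glib.h>

    extern gboolean bh_scheduled(void);  /* made-up helper */
    extern int qemu_bh_poll(void);       /* the existing QEMU function */

    static gboolean bh_source_prepare(GSource *source, gint *timeout)
    {
        /* This is where qemu_bh_update_timeout() would fit, shortening
         * the poll timeout when an idle BH is pending.  The sketch just
         * lets glib block until a file descriptor fires. */
        *timeout = -1;
        return bh_scheduled();
    }

    static gboolean bh_source_check(GSource *source)
    {
        return bh_scheduled();
    }

    static gboolean bh_source_dispatch(GSource *source, GSourceFunc cb,
                                       gpointer user_data)
    {
        qemu_bh_poll();          /* run the scheduled bottom halves */
        return TRUE;             /* keep the source attached */
    }

    static GSourceFuncs bh_source_funcs = {
        bh_source_prepare,
        bh_source_check,
        bh_source_dispatch,
        NULL,
    };

    static void bh_source_attach(void)
    {
        GSource *src = g_source_new(&bh_source_funcs, sizeof(GSource));
        g_source_attach(src, NULL);      /* default GMainContext */
    }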

>> Also, consider that under Windows there's a big difference: after this
>> series, qemu_aio_wait() only works with EventNotifiers, while
>> qemu_set_fd_handler2 only works with sockets.  Networked block drivers
>> are disabled for Windows by these patches; there's really no way to move
>> forward without sacrificing them.
> 
> Is it really only networked block drivers that you lose this way?

Yes, nothing else calls qemu_aio_set_fd_handler on sockets.  qemu-nbd
uses qemu_set_fd_handler2, so it should work.

Paolo



