
Re: [Qemu-devel] [RFC v2 0/8] monitor: allow per-monitor thread


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [RFC v2 0/8] monitor: allow per-monitor thread
Date: Thu, 7 Sep 2017 18:35:36 +0100

On Thu, Sep 7, 2017 at 6:14 PM, Dr. David Alan Gilbert
<address@hidden> wrote:
> * Stefan Hajnoczi (address@hidden) wrote:
>> On Thu, Sep 7, 2017 at 1:02 PM, Peter Xu <address@hidden> wrote:
>> > On Thu, Sep 07, 2017 at 11:09:29AM +0100, Stefan Hajnoczi wrote:
>> >> On Thu, Sep 7, 2017 at 10:35 AM, Dr. David Alan Gilbert
>> >> <address@hidden> wrote:
>> >> > * Stefan Hajnoczi (address@hidden) wrote:
>> >> >> On Wed, Sep 6, 2017 at 4:14 PM, Dr. David Alan Gilbert
>> >> >> <address@hidden> wrote:
>> >> >> > * Stefan Hajnoczi (address@hidden) wrote:
>> >> >> >> On Wed, Aug 23, 2017 at 02:51:03PM +0800, Peter Xu wrote:
>> >> >> >> > The root problem is that monitor commands are all handled in the
>> >> >> >> > main loop thread now, no matter how many monitors we specify.  And
>> >> >> >> > if the main loop thread hangs for some reason, all monitors will
>> >> >> >> > be stuck.
>> >> >> >>
>> >> >> >> I see a larger issue with postcopy: existing QEMU code assumes that
>> >> >> >> guest memory access is instantaneous.
>> >> >> >>
>> >> >> >> Postcopy breaks this assumption and introduces blocking points
>> >> >> >> that can now take unbounded time.
>> >> >> >>
>> >> >> >> This problem isn't specific to the monitor.  It can also happen
>> >> >> >> to other components in QEMU like the gdbstub.
>> >> >> >>
>> >> >> >> Do we need an asynchronous memory API?  Synchronous memory access
>> >> >> >> should only be allowed in vcpu threads.
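
A minimal sketch of what such an asynchronous memory API could look
like, purely for illustration (the function and callback names below
are hypothetical, not an existing QEMU interface):

    /* Hypothetical async counterpart to cpu_physical_memory_read().
     * Instead of blocking until the guest page is resident (which can
     * take unbounded time under postcopy), the caller passes a
     * completion callback that runs once the data has been copied. */
    typedef void MemAccessCompleteFunc(void *opaque, int ret);

    void cpu_physical_memory_read_async(hwaddr addr, void *buf,
                                        hwaddr len,
                                        MemAccessCompleteFunc *cb,
                                        void *opaque);

vcpu threads could keep using the existing synchronous accessors,
while the main loop and IOThreads would have to go through the async
variant.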
>> >> >> >
>> >> >> > It would probably be useful for the gdbstub, where the overhead of
>> >> >> > async doesn't matter; but doing that for all I/O emulation is hard.
>> >> >>
>> >> >> Why is it hard?
>> >> >>
>> >> >> Memory access can be synchronous in the vcpu thread.  That eliminates
>> >> >> a lot of code straight away.
>> >> >>
>> >> >> Anything using dma-helpers.c is already async.  They just don't know
>> >> >> that the memory access part is being made async too :).
>> >> >
>> >> > Can you point me to some info on that?
>> >>
>> >> IDE and SCSI use dma-helpers.c to perform I/O:
hw/ide/core.c:892:        s->bus->dma->aiocb = dma_blk_io(blk_get_aio_context(s->blk),
>> >> hw/ide/macio.c:189:        s->bus->dma->aiocb = dma_blk_io(blk_get_aio_context(s->blk), &s->sg,
>> >> hw/scsi/scsi-disk.c:348:        r->req.aiocb = dma_blk_io(blk_get_aio_context(s->qdev.conf.blk),
>> >> hw/scsi/scsi-disk.c:551:        r->req.aiocb = dma_blk_io(blk_get_aio_context(s->qdev.conf.blk),
>> >>
>> >> They pass a scatter-gather list of guest RAM addresses to
>> >> dma-helpers.c.  They receive a callback when I/O has finished.
>> >>
>> >> Try following the code path.  Request submission may be from a vcpu
>> >> thread or IOThread.  Completion occurs in the main loop or an
>> >> IOThread.
>> >>
>> >> The main point is that this API is already asynchronous.  If any
>> >> changes are needed for async guest memory access (not sure, I haven't
>> >> checked), then at least the dma-helpers.c users do not need to be
>> >> modified.
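
For reference, the pattern those call sites follow is roughly the
following (a simplified sketch: my_dma_complete, guest_paddr and the
surrounding request state are placeholders, error handling omitted):

    /* Completion callback: runs in the main loop or an IOThread once
     * the transfer has finished. */
    static void my_dma_complete(void *opaque, int ret)
    {
        /* ret < 0 on error; on success guest memory holds the data */
    }

    ...
    /* Build a scatter-gather list of guest RAM addresses. */
    QEMUSGList sg;
    qemu_sglist_init(&sg, DEVICE(dev), 1, &address_space_memory);
    qemu_sglist_add(&sg, guest_paddr, len);

    /* Returns immediately; dma-helpers.c maps and touches guest
     * memory internally between submission and the callback. */
    dma_blk_read(blk, &sg, offset, BDRV_SECTOR_SIZE,
                 my_dma_complete, req);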
>> >>
>> >> >> The remaining cases are virtio and some other devices.
>> >> >>
>> >> >> If you are worried about performance, the first rule is that async
>> >> >> memory access is only needed on the destination side when postcopy is
>> >> >> active.  Maybe use setjmp to return from the signal handler and queue
>> >> >> a callback for when the page has been loaded.
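
A very rough sketch of that idea (purely illustrative: the postcopy_*
helpers are hypothetical, and real code would have to be careful about
async-signal safety and about which accesses are safe to unwind):

    static sigjmp_buf access_env;

    /* Fault handler: if the faulting address is a not-yet-migrated
     * postcopy page, jump back out of the access instead of blocking
     * until the page arrives. */
    static void fault_handler(int sig, siginfo_t *si, void *ctx)
    {
        if (postcopy_page_missing(si->si_addr)) {  /* hypothetical */
            siglongjmp(access_env, 1);
        }
    }

    ...
    if (sigsetjmp(access_env, 1) == 0) {
        memcpy(buf, guest_ptr, len);               /* may fault */
    } else {
        /* Page not resident: queue a callback to redo the access
         * once the page has been loaded (hypothetical helper). */
        postcopy_notify_when_ready(guest_ptr, retry_cb, opaque);
    }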
>> >> >
>> >> > I'm not sure it's worth trying to be too clever at avoiding this;
>> >> > I see the fact that we're doing I/O with the BQL held as a more
>> >> > fundamental problem.
>> >>
>> >> QEMU should be doing I/O syscalls asynchronously or via thread-pool
>> >> workers (no BQL), so the BQL is not an issue.  Anything else could
>> >> cause unbounded waits even without postcopy.
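
To illustrate the thread-pool variant (a sketch: do_pread, io_done and
the request struct are placeholders; thread_pool_submit_aio() itself
is the existing util/thread-pool.c interface):

    /* Runs in a worker thread, without the BQL, so a blocking
     * syscall here does not stall the main loop. */
    static int do_pread(void *opaque)
    {
        IORequest *req = opaque;           /* hypothetical struct */
        return pread(req->fd, req->buf, req->len, req->offset);
    }

    /* io_done(opaque, ret) is called back in the submitting
     * AioContext once do_pread() returns. */
    thread_pool_submit_aio(pool, do_pread, req, io_done, req);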
>> >
>> > E.g. when a vcpu thread takes a page fault with the BQL held, the
>> > main thread needs the BQL to dispatch anything, including monitor
>> > commands.
>> >
>> > So I think it's a two-part problem - we need to solve both (1) the
>> > main thread accessing guest memory, for which a solution is still
>> > missing, and (2) BQL deadlocks between vcpu threads and the main
>> > thread.
>>
>> I think we need a single solution and cannot treat these as separate.
>> This is because the same virtio device emulation code may run in 3
>> contexts:
>> 1. vcpu thread (ioeventfd=off)
>> 2. main loop thread (ioeventfd=on)
>> 3. IOThread (ioeventfd=on, iothread=<id>)
>>
>> If you try to solve them separately then the code won't work in all 3
>> contexts anymore.
>
> I think you can also get main loop thread hangs on things like
> network packet reception.

That is case #2.  The QEMU net subsystem reads received packets into a
temporary buffer (it's not zero-copy) and invokes the virtio-net
receive handler function from the main loop.
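
Roughly, the path is (simplified):

    tap_send()                    /* main loop reads the packet  */
      -> qemu_send_packet_async()
           -> nc->info->receive() /* virtio_net_receive() copies
                                     into the guest's virtqueue
                                     buffers; a postcopy fault here
                                     would block the main loop    */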

Stefan


