Re: [Qemu-devel] [RFC 0/8] arm AioContext with its own timer stuff


From: Paolo Bonzini
Subject: Re: [Qemu-devel] [RFC 0/8] arm AioContext with its own timer stuff
Date: Mon, 29 Jul 2013 12:45:44 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130625 Thunderbird/17.0.7

On 29/07/2013 10:58, Kevin Wolf wrote:
> On 26.07.2013 at 10:43, Stefan Hajnoczi wrote:
>> On Thu, Jul 25, 2013 at 07:53:33PM +0100, Alex Bligh wrote:
>>>
>>>
>>> --On 25 July 2013 14:32:59 +0200 Jan Kiszka <address@hidden> wrote:
>>>
>>>>> I would happily add a QEMUClock of each type to AioContext. They are,
>>>>> after all, pretty lightweight.
>>>>
>>>> What's the point of adding tons of QEMUClock instances? Considering
>>>> proper abstraction, how are they different for each AioContext? Will
>>>> they run against different clock sources, start/stop at different times?
>>>> If the answer is "they have different timer lists", then fix this
>>>> incorrect abstraction.
>>>
>>> Even if I fix the abstraction, there is a question of whether it is
>>> necessary to have more than one timer list per AioContext, because
>>> the timer list is fundamentally per clock-source. I am currently
>>> just using QEMU_CLOCK_REALTIME as that's what the block drivers normally
>>> want. Will block drivers ever want timers from a different clock source?
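
To make the per-clock-source point concrete, one possible shape (a rough
sketch only; the type and field names below are illustrative, not
existing QEMU API) is a small group of timer lists per AioContext, one
per clock source, so the clock itself stays global while the waiters
become per-context:

    /* Illustrative sketch, not actual QEMU code. */
    typedef struct QEMUTimerList {
        QEMUClockType clock;        /* which source this list runs against */
        QEMUTimer *active_timers;   /* timers waiting on that source */
    } QEMUTimerList;

    struct AioContext {
        /* ... existing fields ... */
        QEMUTimerList timer_lists[QEMU_CLOCK_MAX]; /* realtime, vm, host */
    };

With a split like that, adding a QEMUClock per AioContext becomes
unnecessary: each context only needs its own lists, all running against
the shared clock sources.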
>>
>> block.c and block/qed.c use vm_clock because block drivers should not do
>> guest I/O while the vm is stopped.  This is especially true during live
>> migration where it's important to hand off the image file from the
>> source host to the destination host with good cache consistency.  The
>> source host is not allowed to modify the image file anymore once the
>> destination host has resumed the guest.
>>
>> Block jobs use rt_clock because they aren't considered guest I/O.
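
For concreteness, the split looks roughly like this in the current timer
API (a paraphrase, not verbatim QEMU source; the timeout constant is
made up):

    /* Guest I/O path, e.g. block/qed.c: the consistency-check timer
     * runs on vm_clock, so it stops ticking whenever the VM is
     * stopped: */
    s->need_check_timer = qemu_new_timer_ns(vm_clock,
                                            qed_need_check_timer_cb, s);
    qemu_mod_timer(s->need_check_timer,
                   qemu_get_clock_ns(vm_clock) + QED_CHECK_DELAY_NS);

    /* Block job path, e.g. block/mirror.c: throttling sleeps use
     * rt_clock, because a job is management work rather than guest
     * I/O, and so keeps running even while the VM is paused: */
    block_job_sleep_ns(&s->common, rt_clock, delay_ns);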
> 
> But considering your first paragraph, why is it safe to leave block jobs
> running while we're migrating? Do we really do that? It sounds unsafe to
> me.

I think we should cancel them (synchronously) before the final
bdrv_drain_all().
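
Something like this, before the final drain (a sketch against the
current block layer; error handling omitted, and the exact call site in
the migration completion path is left open):

    BlockDriverState *bs = NULL;

    /* Cancel every running block job; block_job_cancel_sync() does not
     * return until the job has actually finished. */
    while ((bs = bdrv_next(bs))) {
        if (bs->job) {
            block_job_cancel_sync(bs->job);
        }
    }

    /* Only now is it safe to flush pending requests and hand off. */
    bdrv_drain_all();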

Paolo



