qemu-devel

Re: [RFC] adding a generic QAPI event for failed device hotunplug


From: Markus Armbruster
Subject: Re: [RFC] adding a generic QAPI event for failed device hotunplug
Date: Fri, 19 Mar 2021 08:55:38 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)

Markus Armbruster <armbru@redhat.com> writes:

> David Gibson <david@gibson.dropbear.id.au> writes:
>
>> On Thu, Mar 11, 2021 at 05:50:42PM -0300, Daniel Henrique Barboza wrote:
>>> 
>>> 
>>> On 3/9/21 3:22 AM, Markus Armbruster wrote:
>>> > Cc: Paolo and Julia in addition to Igor, because the thread is wandering
>>> > towards DeviceState member pending_deleted_event.
>>> > 
>>> > Cc: Laine for libvirt expertise.  Laine, if you're not the right person,
>>> > please loop in the right person.
>>> > 
>>> > David Gibson <david@gibson.dropbear.id.au> writes:
>>> > 
>>> > > On Mon, Mar 08, 2021 at 03:01:53PM -0300, Daniel Henrique Barboza wrote:
>>> > > > 
>>> > > > 
>>> > > > On 3/8/21 2:04 PM, Markus Armbruster wrote:
>>> > > > > Daniel Henrique Barboza <danielhb413@gmail.com> writes:
>>> > > > > 
>>> > > > > > On 3/6/21 3:57 AM, Markus Armbruster wrote:
>>> > [...]
>>> > > > > > > We should document the event's reliability.  Are there
>>> > > > > > > unplug operations where we can't detect failure?  Are there
>>> > > > > > > unplug operations where we could, but haven't implemented
>>> > > > > > > the event?
>>> > > > > > 
>>> > > > > > The current version of the PowerPC spec that the pseries
>>> > > > > > machine implements (LOPAR) does not provide a way for the
>>> > > > > > virtual machine to report a hotunplug error back to the
>>> > > > > > hypervisor.  If something fails, the hypervisor is left in
>>> > > > > > the dark.
>>> > > > > > 
>>> > > > > > What happened in the 6.0.0 dev cycle is that we faced a
>>> > > > > > reliable way of
>>> > > > > 
>>> > > > > Do you mean "unreliable way"?
>>> > > > 
>>> > > > I guess a better word would be 'reproducible', as in we
>>> > > > discovered a reproducible way of getting the guest kernel to
>>> > > > refuse the CPU hotunplug.
>>> > > 
>>> > > Right.  It's worth noting here that in the PAPR model, there are no
>>> > > "forced" unplugs.  Unplugs always consist of a request to the guest,
>>> > > which is then responsible for offlining the device and signalling back
>>> > > to the hypervisor that it's done with it.
>>> > > 
>>> > > > > > making CPU hotunplug fail in the guest (trying to hotunplug
>>> > > > > > the last online CPU) and the pseries machine was unprepared
>>> > > > > > for dealing with it.  We ended up implementing a hotunplug
>>> > > > > > timeout and, if the timeout kicks in, we're assuming that the
>>> > > > > > CPU hotunplug failed in the guest.  This is the first scenario
>>> > > > > > we have today where we want to send a QAPI event reporting the
>>> > > > > > CPU hotunplug error, but this event does not exist in QEMU ATM.
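
For illustration only (the event discussed here does not exist in QEMU at
this point, so the name and fields below are made up), such an event could
look something like this on the QMP wire:

  <- {"event": "DEVICE_UNPLUG_ERROR",
      "data": {"device": "cpu-core1",
               "path": "/machine/peripheral/cpu-core1"},
      "timestamp": {"seconds": 1615219200, "microseconds": 0}}

(the device ID, QOM path, and timestamp are invented for the example)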
>>> > > > > 
>>> > > > > When the timeout kicks in, how can you know the operation
>>> > > > > failed?  You better be sure when you send an error event.  In
>>> > > > > other words: what prevents the scenario where the operation is
>>> > > > > much slower than you expect, the timeout expires, the error
>>> > > > > event is sent, and then the operation completes successfully?
>>> > > > 
>>> > > > A CPU hotunplug in a pseries guest takes no more than a couple
>>> > > > of seconds, even if the guest is under heavy load.  The timeout
>>> > > > is set to 15 seconds.
>>> > > 
>>> > > Right.  We're well aware that a timeout is an ugly hack, since it's
>>> > > not really possible to distinguish it from a guest that's just being
>>> > > really slow.
>>> > 
>>> > As long as unplug failure cannot be detected reliably, we need a timeout
>>> > *somewhere*.  I vaguely recall libvirt has one.  Laine?
>>> 
>>> Yeah, Libvirt has a timeout for hotunplug operations.  I agree that
>>> QEMU doing the timeout makes more sense since it has more information
>>> about the conditions/mechanics involved.
>>
>> Right.  In particular, I can't really see how libvirt can fully
>> implement that timeout.  AFAIK qemu has no way of listing or
>> cancelling "in flight" unplug requests, so it's entirely possible that
>> the unplug could still complete after libvirt has "timed out".
>
> QEMU doesn't really keep track of "in flight" unplug requests, and as
> long as that's the case, its timeout event will have the same issue.

If we change QEMU to keep track of "in flight" unplug requests, then we
likely want QMP commands to query and cancel them.

Instead of inventing ad hoc commands, we should look into using the job
framework: query-jobs, job-cancel, ...  See qapi/job.json.
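
A sketch, assuming unplug requests were modeled as jobs (no such job type
exists today; the job ID and type below are invented): a client could then
list and cancel them with the existing commands, e.g.

  -> {"execute": "query-jobs"}
  <- {"return": [{"id": "unplug-cpu-core1", "type": "device-unplug",
                  "status": "running",
                  "current-progress": 0, "total-progress": 1}]}

  -> {"execute": "job-cancel", "arguments": {"id": "unplug-cpu-core1"}}
  <- {"return": {}}

Whether such a job could actually be cancelled once the request has been
handed to the guest is of course a separate question.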

Bonus: we don't need new events, existing JOB_STATUS_CHANGE can do the
job (pardon the pun).
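
For the same hypothetical job as above, a guest-rejected unplug could then
surface as ordinary job state transitions, e.g.

  <- {"event": "JOB_STATUS_CHANGE",
      "data": {"id": "unplug-cpu-core1", "status": "aborting"},
      "timestamp": {"seconds": 1616140538, "microseconds": 0}}
  <- {"event": "JOB_STATUS_CHANGE",
      "data": {"id": "unplug-cpu-core1", "status": "concluded"},
      "timestamp": {"seconds": 1616140538, "microseconds": 0}}

with query-jobs reporting the job's "error" once it has concluded.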

[...]



