From: Aravinda Prasad
Subject: Re: [Qemu-ppc] [PATCH v8 4/6] target/ppc: Build rtas error log upon an MCE
Date: Thu, 16 May 2019 10:24:55 +0530
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.6.0


On Thursday 16 May 2019 07:17 AM, David Gibson wrote:
> On Tue, May 14, 2019 at 10:36:17AM +0530, Aravinda Prasad wrote:
>>
>>
>> On Tuesday 14 May 2019 10:10 AM, David Gibson wrote:
>>> On Tue, May 14, 2019 at 09:56:41AM +0530, Aravinda Prasad wrote:
>>>>
>>>>
>>>> On Tuesday 14 May 2019 05:38 AM, David Gibson wrote:
>>>>> On Mon, May 13, 2019 at 01:30:53PM +0200, Greg Kurz wrote:
>>>>>> On Mon, 22 Apr 2019 12:33:26 +0530
>>>>>> Aravinda Prasad <address@hidden> wrote:
>>>>>>
>>>>>>> Upon a machine check exception (MCE) in a guest address space,
>>>>>>> KVM causes a guest exit to enable QEMU to build and pass the
>>>>>>> error to the guest in the PAPR defined rtas error log format.
>>>>>>>
>>>>>>> This patch builds the rtas error log, copies it to the rtas_addr
>>>>>>> and then invokes the guest registered machine check handler. The
>>>>>>> handler in the guest takes suitable action(s) depending on the type
>>>>>>> and criticality of the error. For example, if an error is
>>>>>>> unrecoverable memory corruption in an application inside the
>>>>>>> guest, then the guest kernel sends a SIGBUS to the application.
>>>>>>> For recoverable errors, the guest performs recovery actions and
>>>>>>> logs the error.
>>>>>>>
>>>>>>> Signed-off-by: Aravinda Prasad <address@hidden>
>>>>>>> ---
>>>>>>>  hw/ppc/spapr.c         |    4 +
>>>>>>>  hw/ppc/spapr_events.c  |  245 ++++++++++++++++++++++++++++++++++++++++++++++++
>>>>>>>  include/hw/ppc/spapr.h |    4 +
>>>>>>>  3 files changed, 253 insertions(+)
>>>>>>>
>>>>>>> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
>>>>>>> index 2779efe..ffd1715 100644
>>>>>>> --- a/hw/ppc/spapr.c
>>>>>>> +++ b/hw/ppc/spapr.c
>>>>>>> @@ -2918,6 +2918,10 @@ static void spapr_machine_init(MachineState *machine)
>>>>>>>          error_report("Could not get size of LPAR rtas '%s'", filename);
>>>>>>>          exit(1);
>>>>>>>      }
>>>>>>> +
>>>>>>> +    /* Resize blob to accommodate error log. */
>>>>>>> +    spapr->rtas_size = spapr_get_rtas_size(spapr->rtas_size);
>>>>>>> +
>>>>>>
>>>>>> This is the only user for spapr_get_rtas_size(), which is trivial.
>>>>>> I suggest you simply open-code it here.
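
(For reference, open-coding it would amount to something like the sketch
below, directly in spapr_machine_init() after the RTAS image size check.
RTAS_ERROR_LOG_MAX stands in here for however much extra room the error
log needs; the exact constant and arithmetic in the series may differ.)

    /* Sketch only: grow the blob so the FWNMI error log can be placed
     * behind the RTAS image, instead of calling spapr_get_rtas_size(). */
    spapr->rtas_size += RTAS_ERROR_LOG_MAX;
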
>>>>>
>>>>> I agree.
>>>>
>>>> Sure.
>>>>
>>>>>
>>>>>> But also, spapr->rtas_size is a guest visible thing, "rtas-size" prop
>>>>>> in the DT. Since existing machine types don't do that, I guess we
>>>>>> should only use the new size if cap-fwnmi-mce=on for the sake of
>>>>>> compatibility.
>>>>>
>>>>> Yes, that's a good idea.  Changing this is very unlikely to break a
>>>>> guest, but it's easy to be safe here so let's do it.
>>>>
>>>> I did it like that because the rtas_blob is allocated based on rtas_size
>>>> in spapr_machine_init(). During spapr_machine_init() it is not known if
>>>> the guest calls "ibm,nmi-register". So if we want to use the new size
>>>> only when cap_fwnmi=on, then we have to realloc the blob in
>>>> "ibm,nmi-register".
>>>
>>> What?  Just always allocate the necessary space in
>>> spapr_machine_init() if cap_fwnmi=on, it'll be wasted if
>>> ibm,nmi-register is never called, but it's not that much space so we
>>> don't really care.
>>
>> Yes, not that much space, and ibm,nmi-register is called when the Linux
>> kernel boots. I guess even though other OSes might not call
>> ibm,nmi-register, they do not constitute a significant share of QEMU on
>> Power users.
>>
>> So I think, I will keep the code as is.
> 
> No, that's not right.  It's impractical to change the allocation
> depending on whether fwnmi is currently active.  But you *can* (and
> should) base the allocation on whether fwnmi is *possible* - that is,
> the value of the spapr cap.

Sure..
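
(For illustration, a minimal sketch of that in spapr_machine_init(),
assuming the cap is read with spapr_get_cap(), SPAPR_CAP_FWNMI_MCE is the
cap introduced by this series, and RTAS_ERROR_LOG_MAX is the extra room
needed; the final names and placement may differ.)

    /* Size the blob once, based on whether FWNMI is possible at all
     * (the machine cap), not on whether the guest ever calls
     * ibm,nmi-register.  Sketch only. */
    if (spapr_get_cap(spapr, SPAPR_CAP_FWNMI_MCE) == SPAPR_CAP_ON) {
        spapr->rtas_size += RTAS_ERROR_LOG_MAX;
    }
    spapr->rtas_blob = g_malloc(spapr->rtas_size);
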

> 

-- 
Regards,
Aravinda



