From: Cédric Le Goater
Subject: Re: [Qemu-devel] [PATCH v5 13/17] ppc/xics: add a xics_get_cpu_index_by_pir helper
Date: Thu, 27 Oct 2016 20:05:02 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

On 10/27/2016 05:12 AM, David Gibson wrote:
> On Tue, Oct 25, 2016 at 12:58:11PM +0200, Cédric Le Goater wrote:
>> On 10/25/2016 07:36 AM, David Gibson wrote:
>>> On Sat, Oct 22, 2016 at 11:46:46AM +0200, Cédric Le Goater wrote:
>>>> We will need this helper to translate the server number of the XIVE
>>>> (which is a PIR) into an ICPState index number (which is a cpu index).
>>>>
>>>> Signed-off-by: Cédric Le Goater <address@hidden>
>>>
>>> Looks correct as far as it goes, but I wonder if this would be more
>>> generally useful as a machine level function that searches the cpu
>>> objects by PIR, returning a pointer.  From that to the cpu_index is
>>> then trivial.
>>
>> Well, I guess so. The XICSState argument reduces the scope in the
>> multichip case, but as this routine is used to initialize the XIVE
>> registers, it does not need to be fast.
> 
> Ahh.. I was thinking of the top-level xics object as global, rather
> than per-chip.

Well, the ICP MMIO region address is linked to the chip, but that is all
for the moment.
 
> Except.. does having it per-chip work anyway? The server numbers are 
> globally unique, aren't they?  

Yes.
 
> What happens if you put a server number from one chip as the target 
> for an ICE on another chip?

We have the chip number, so we could route? I haven't gone that far
in the modeling, though. It might be overly complex for the purpose.

> The xics object is a bit weird, in that it doesn't represent a real
> device in a sense, but is rather something to co-ordinate global
> addressing between ICS and ICP units.  Well, I suppose in that sense
> it represents the interrupt propagation fabric.

Yes. See my other email. I think we can get rid of it and simply use
a XICSState which links together the ICPs and the ICS of the system.
But let's keep it at the chip level for the moment, since it is correct,
and see if we need to move it upwards when we work on multichip.

Thanks,

C.
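
PS: For concreteness, here is a minimal sketch of the machine-level
helper discussed below. It reuses the PIR lookup from the patch
(matching on the SPR_PIR default value) but walks all vCPUs with
CPU_FOREACH, so it needs no XICSState argument. The name
ppc_get_vcpu_by_pir() and its exact placement are only a proposal at
this point:

    /* Sketch only: scan every vCPU and match on the PIR SPR default
     * value, as the patch below does per XICS server. */
    PowerPCCPU *ppc_get_vcpu_by_pir(int pir)
    {
        CPUState *cs;

        CPU_FOREACH(cs) {
            PowerPCCPU *cpu = POWERPC_CPU(cs);
            CPUPPCState *env = &cpu->env;

            if (env->spr_cb[SPR_PIR].default_value == pir) {
                return cpu;
            }
        }

        return NULL; /* no vCPU with this PIR */
    }

From the returned pointer, the cpu index is then just
CPU(cpu)->cpu_index.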
 
>> So you would rather have something like:
>>
>>      PowerPCCPU *ppc_get_vcpu_by_pir(int pir);
>>
>> similar to:
>>
>>      PowerPCCPU *ppc_get_vcpu_by_dt_id(int cpu_dt_id);
>>
>>
>> Thanks,
>>
>> C. 
>>
>>>> ---
>>>>  hw/intc/xics_native.c | 19 +++++++++++++++++++
>>>>  include/hw/ppc/xics.h |  1 +
>>>>  2 files changed, 20 insertions(+)
>>>>
>>>> diff --git a/hw/intc/xics_native.c b/hw/intc/xics_native.c
>>>> index bbdd786aeb50..6318862f53fc 100644
>>>> --- a/hw/intc/xics_native.c
>>>> +++ b/hw/intc/xics_native.c
>>>> @@ -33,6 +33,25 @@
>>>>  
>>>>  #include <libfdt.h>
>>>>  
>>>> +int xics_get_cpu_index_by_pir(XICSState *xics, int pir)
>>>> +{
>>>> +    int i;
>>>> +
>>>> +    for (i = 0; i < xics->nr_servers; i++) {
>>>> +        ICPState *icp = &xics->ss[i];
>>>> +        if (icp->cs) {
>>>> +            PowerPCCPU *cpu = POWERPC_CPU(icp->cs);
>>>> +            CPUPPCState *env = &cpu->env;
>>>> +
>>>> +            if (env->spr_cb[SPR_PIR].default_value == pir) {
>>>> +                return i;
>>>> +            }
>>>> +        }
>>>> +    }
>>>> +
>>>> +    return -1;
>>>> +}
>>>> +
>>>>  static void xics_native_reset(void *opaque)
>>>>  {
>>>>      device_reset(DEVICE(opaque));
>>>> diff --git a/include/hw/ppc/xics.h b/include/hw/ppc/xics.h
>>>> index 911cdd5e549f..beb232e616c5 100644
>>>> --- a/include/hw/ppc/xics.h
>>>> +++ b/include/hw/ppc/xics.h
>>>> @@ -214,6 +214,7 @@ void xics_set_nr_servers(XICSState *xics, uint32_t nr_servers,
>>>>  
>>>>  /* Internal XICS interfaces */
>>>>  int xics_get_cpu_index_by_dt_id(int cpu_dt_id);
>>>> +int xics_get_cpu_index_by_pir(XICSState *xics, int pir);
>>>>  
>>>>  void icp_set_cppr(ICPState *icp, uint8_t cppr);
>>>>  void icp_set_mfrr(ICPState *icp, uint8_t mfrr);
>>>
>>
> 



