From: Nicholas Piggin
Subject: Re: [RFC PATCH 3/3] spapr: implement nested-hv support for the TCG virtual hypervisor
Date: Tue, 15 Feb 2022 12:57:09 +1000

Excerpts from Nicholas Piggin's message of February 15, 2022 9:28 am:
> Excerpts from Cédric Le Goater's message of February 15, 2022 4:31 am:
>> On 2/10/22 07:53, Nicholas Piggin wrote:
>>> +void spapr_enter_nested(PowerPCCPU *cpu)
>>> +{
>>> +    SpaprMachineState *spapr = SPAPR_MACHINE(qdev_get_machine());
>>> +    PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(cpu);
>>> +    CPUState *cs = CPU(cpu);
>>> +    CPUPPCState *env = &cpu->env;
>>> +    target_ulong hv_ptr = env->gpr[4];
>>> +    target_ulong regs_ptr = env->gpr[5];
>>> +    target_ulong hdec, now = cpu_ppc_load_tbl(env);
>>> +    struct kvmppc_hv_guest_state *hvstate;
>>> +    struct kvmppc_hv_guest_state hv_state;
>>> +    struct kvmppc_pt_regs *regs;
>>> +    hwaddr len;
>>> +    uint32_t cr;
>>> +    int i;
>>> +
>>> +    if (cpu->in_spapr_nested) {
>>> +        env->gpr[3] = H_FUNCTION;
>>> +        return;
>>> +    }
>>> +    if (spapr->nested_ptcr == 0) {
>>> +        env->gpr[3] = H_NOT_AVAILABLE;
>>> +        return;
>>> +    }
>>> +
>>> +    len = sizeof(*hvstate);
>>> +    hvstate = cpu_physical_memory_map(hv_ptr, &len, 
>> 
>> Are you writing to the state? address_space_map() is a better practice.
> 
> Yes, in exit_nested it gets written. I'll take a look at 
> address_space_map().

Hmm, the address_space_map() documentation says to use it only for reads
OR writes. Some of these mappings are doing both.

Why is it better practice to use address_space_map()? I could split the
operations out into a read, then a write if necessary. For now I will
resubmit the series with cpu_physical_memory_map(), because there have
been a lot of changes and cleanups, including all of your and Fabiano's
suggestions except this one, so that should make it easier to review.
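(For illustration only, not from the posted series: a minimal sketch of
what splitting the mapping into a read-only map on entry and a write-only
map on exit could look like with address_space_map(). The helper names
here are hypothetical, and the include providing struct
kvmppc_hv_guest_state is assumed to come from the patch.)

#include "qemu/osdep.h"
#include "exec/memory.h"    /* address_space_map()/address_space_unmap() */
#include "hw/core/cpu.h"    /* CPUState */
#include "hw/ppc/spapr.h"   /* assumed home of struct kvmppc_hv_guest_state */

/* Entry side: the guest-supplied state is only read here, so the mapping
 * can be single-direction (is_write = false). */
static bool nested_read_hv_state(CPUState *cs, target_ulong hv_ptr,
                                 struct kvmppc_hv_guest_state *hv_state)
{
    hwaddr len = sizeof(*hv_state);
    void *map = address_space_map(cs->as, hv_ptr, &len, false,
                                  MEMTXATTRS_UNSPECIFIED);

    if (!map || len != sizeof(*hv_state)) {
        return false;
    }
    memcpy(hv_state, map, sizeof(*hv_state));
    /* Nothing was written through this mapping, so access_len is 0. */
    address_space_unmap(cs->as, map, len, false, 0);
    return true;
}

/* Exit side: the state is only written back, so a separate write-only
 * mapping (is_write = true) is used. */
static bool nested_write_hv_state(CPUState *cs, target_ulong hv_ptr,
                                  const struct kvmppc_hv_guest_state *hv_state)
{
    hwaddr len = sizeof(*hv_state);
    void *map = address_space_map(cs->as, hv_ptr, &len, true,
                                  MEMTXATTRS_UNSPECIFIED);

    if (!map || len != sizeof(*hv_state)) {
        return false;
    }
    memcpy(map, hv_state, sizeof(*hv_state));
    /* The whole struct was written; pass it as access_len so the pages
     * are marked dirty on unmap. */
    address_space_unmap(cs->as, map, len, true, sizeof(*hv_state));
    return true;
}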

Thanks,
Nick


