Re: [PATCH 2/3] intc/gicv3_kvm: use kvm_gicc_access to get ICC_CTLR_EL1


From: Peter Maydell
Subject: Re: [PATCH 2/3] intc/gicv3_kvm: use kvm_gicc_access to get ICC_CTLR_EL1
Date: Fri, 17 Apr 2020 12:09:38 +0100

On Mon, 13 Apr 2020 at 10:18, Keqian Zhu <address@hidden> wrote:
>
> Replace kvm_device_access with kvm_gicc_access to simplify
> code.
>
> Signed-off-by: Keqian Zhu <address@hidden>
> ---
>  hw/intc/arm_gicv3_kvm.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/hw/intc/arm_gicv3_kvm.c b/hw/intc/arm_gicv3_kvm.c
> index ca43bf87ca..85f6420498 100644
> --- a/hw/intc/arm_gicv3_kvm.c
> +++ b/hw/intc/arm_gicv3_kvm.c
> @@ -678,9 +678,8 @@ static void arm_gicv3_icc_reset(CPUARMState *env, const ARMCPRegInfo *ri)
>      }
>
>      /* Initialize to actual HW supported configuration */
> -    kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS,
> -                      KVM_VGIC_ATTR(ICC_CTLR_EL1, c->gicr_typer),
> -                      &c->icc_ctlr_el1[GICV3_NS], false, &error_abort);
> +    kvm_gicc_access(s, ICC_CTLR_EL1, c->cpu->cpu_index,
> +                    &c->icc_ctlr_el1[GICV3_NS], false);

This works at the moment, but I'd rather we avoided looking into
cpu->cpu_index inside the GIC code. The cpu_index is the overall
index of the CPU among all CPUs in the system, which is not in
theory the same as "index of this CPU for this GIC". The two
currently match up because arm_gicv3_common_realize() populates
its s->cpu[i].cpu by calling qemu_get_cpu(i), but in the future
we might change that code (e.g. so that the board code has to
explicitly wire up the CPUs to the GIC object by passing
pointers to the CPUs to the GIC via link properties). So I'd
rather not have the internals of the GIC code bake in the
assumption that 'global CPU index is the same as the index
of the CPU for this GIC object'.
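
For context, the wiring in arm_gicv3_common_realize() is roughly the
following (a simplified sketch of the current hookup, not the exact
code; only the CPU wiring is shown):

    for (i = 0; i < s->num_cpu; i++) {
        /* look up the system CPU with global index i */
        CPUState *cpu = qemu_get_cpu(i);

        /* per-GIC slot i ends up pointing at global CPU i, which is
         * why the two indexes happen to coincide today
         */
        s->cpu[i].cpu = cpu;
        s->cpu[i].gic = s;
    }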

(All the other places that call kvm_gicc_access() are doing it
as part of a loop from 0 to s->num_cpu, so they don't have this
issue; see the sketch below.)
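
For example, the existing get/put paths follow roughly this pattern
(a simplified sketch, not the exact code):

    for (ncpu = 0; ncpu < s->num_cpu; ncpu++) {
        GICv3CPUState *c = &s->cpu[ncpu];

        /* here ncpu is both the per-GIC CPU index and the global CPU
         * index, because of how realize wires them up
         */
        kvm_gicc_access(s, ICC_CTLR_EL1, ncpu,
                        &c->icc_ctlr_el1[GICV3_NS], false);
    }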

thanks
-- PMM


