
From: Alexey Kardashevskiy
Subject: Re: [Qemu-ppc] [PATCH 2/7] spapr: Move handling of special NVLink numa node from reset to init
Date: Wed, 11 Sep 2019 17:41:25 +1000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.0


On 11/09/2019 14:04, David Gibson wrote:
> The number of NUMA nodes in the system is fixed from the command line.
> Therefore, there's no need to recalculate it at reset time, and we can
> determine the special gpu_numa_id value used for NVLink2 devices at init
> time.
> 
> This simplifies the reset path a bit, which will make further improvements
> easier.
> 
> Signed-off-by: David Gibson <address@hidden>


Tested-by: Alexey Kardashevskiy <address@hidden>
Reviewed-by: Alexey Kardashevskiy <address@hidden>


> ---
>  hw/ppc/spapr.c | 21 +++++++++++----------
>  1 file changed, 11 insertions(+), 10 deletions(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index c551001f86..e03e874d94 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -1737,16 +1737,6 @@ static void spapr_machine_reset(MachineState *machine)
>          spapr_setup_hpt_and_vrma(spapr);
>      }
>  
> -    /*
> -     * NVLink2-connected GPU RAM needs to be placed on a separate NUMA node.
> -     * We assign a new numa ID per GPU in spapr_pci_collect_nvgpu() which is
> -     * called from vPHB reset handler so we initialize the counter here.
> -     * If no NUMA is configured from the QEMU side, we start from 1 as GPU RAM
> -     * must be equally distant from any other node.
> -     * The final value of spapr->gpu_numa_id is going to be written to
> -     * max-associativity-domains in spapr_build_fdt().
> -     */
> -    spapr->gpu_numa_id = MAX(1, machine->numa_state->num_nodes);
>      qemu_devices_reset();
>  
>      /*
> @@ -2885,6 +2875,17 @@ static void spapr_machine_init(MachineState *machine)
>  
>      }
>  
> +    /*
> +     * NVLink2-connected GPU RAM needs to be placed on a separate NUMA node.
> +     * We assign a new numa ID per GPU in spapr_pci_collect_nvgpu() which is
> +     * called from vPHB reset handler so we initialize the counter here.
> +     * If no NUMA is configured from the QEMU side, we start from 1 as GPU RAM
> +     * must be equally distant from any other node.
> +     * The final value of spapr->gpu_numa_id is going to be written to
> +     * max-associativity-domains in spapr_build_fdt().
> +     */
> +    spapr->gpu_numa_id = MAX(1, machine->numa_state->num_nodes);
> +
>      if ((!kvm_enabled() || kvmppc_has_cap_mmu_radix()) &&
>          ppc_type_check_compat(machine->cpu_type, CPU_POWERPC_LOGICAL_3_00, 0,
>                                spapr->max_compat_pvr)) {
> 
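
For anyone following along, here is a minimal standalone sketch of the
counter pattern the moved comment describes. The struct and helper names
below (spapr_state_sketch, init_gpu_numa_id, assign_gpu_numa_node) are
simplified stand-ins, not the actual QEMU code; only the
MAX(1, num_nodes) seeding and the idea of handing out one fresh node ID
per GPU in spapr_pci_collect_nvgpu() come from the patch itself.

#include <stdio.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

struct spapr_state_sketch {
    int gpu_numa_id;   /* next NUMA node ID to hand out to a GPU */
};

/* Seeded once at machine init now, instead of on every reset. */
static void init_gpu_numa_id(struct spapr_state_sketch *spapr, int num_nodes)
{
    /*
     * With no -numa options num_nodes is 0, so start from 1: GPU RAM
     * must be equally distant from any other node.
     */
    spapr->gpu_numa_id = MAX(1, num_nodes);
}

/* Stand-in for the per-GPU ID assignment in spapr_pci_collect_nvgpu(). */
static int assign_gpu_numa_node(struct spapr_state_sketch *spapr)
{
    return spapr->gpu_numa_id++;   /* each GPU gets its own node */
}

int main(void)
{
    struct spapr_state_sketch spapr;

    init_gpu_numa_id(&spapr, 0);   /* no -numa on the command line */
    printf("gpu0 -> node %d\n", assign_gpu_numa_node(&spapr));   /* node 1 */
    printf("gpu1 -> node %d\n", assign_gpu_numa_node(&spapr));   /* node 2 */
    return 0;
}

So with no -numa options the first GPU lands on node 1, and with, say,
two nodes configured on the command line the counter would start at 2.
Since the number of NUMA nodes is fixed from the command line, the seed
value is the same at init time as it was on every reset.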

-- 
Alexey


