[PULL 25/33] ppc: introducing spapr_numa.c NUMA code helper
From: David Gibson
Subject: [PULL 25/33] ppc: introducing spapr_numa.c NUMA code helper
Date: Tue, 8 Sep 2020 15:19:45 +1000
From: Daniel Henrique Barboza <danielhb413@gmail.com>
We're going to make changes to how spapr handles all
ibm,associativity*-related properties in order to enhance our current
NUMA support.

At the moment, associativity code is scattered across the spapr_*
files, with hardcoded values and array sizes. This makes it harder to
change any NUMA-specific parameters in the future. Having everything
in the same place allows not only easier tuning but also easier
understanding, since all NUMA-related code lives in a single file.
This patch introduces a new file, spapr_numa.c, to gather all
NUMA/associativity handling code in spapr. To get things started,
let's move the ibm,associativity-reference-points and
ibm,max-associativity-domains code from spapr_dt_rtas() into a new
helper called spapr_numa_write_rtas_dt(). This decouples
spapr_dt_rtas() from the NUMA changes that are going to happen to
those two properties.
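For reference, both properties are plain big-endian cell lists under the
/rtas node, so a consumer can read them back with stock libfdt calls. The
sketch below is illustrative only and not part of this patch; the function
name dump_rtas_numa_props() is made up, everything else is standard libfdt.

#include <stdio.h>
#include <libfdt.h>

/*
 * Illustrative sketch: read back the two NUMA properties that
 * spapr_numa_write_rtas_dt() writes under the /rtas node.
 */
static void dump_rtas_numa_props(const void *fdt)
{
    int rtas = fdt_path_offset(fdt, "/rtas");
    const fdt32_t *prop;
    int len, i;

    if (rtas < 0) {
        return;
    }

    /* Associativity levels that are significant for NUMA distance */
    prop = fdt_getprop(fdt, rtas, "ibm,associativity-reference-points", &len);
    for (i = 0; prop && i < len / 4; i++) {
        printf("refpoint[%d] = %u\n", i, fdt32_to_cpu(prop[i]));
    }

    /* Maximum number of associativity domains at each level */
    prop = fdt_getprop(fdt, rtas, "ibm,max-associativity-domains", &len);
    for (i = 0; prop && i < len / 4; i++) {
        printf("maxdomain[%d] = %u\n", i, fdt32_to_cpu(prop[i]));
    }
}

Calling dump_rtas_numa_props() on a flattened device tree blob that already
contains the rtas node would print one line per cell of each property.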
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20200901125645.118026-2-danielhb413@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
hw/ppc/meson.build | 3 ++-
hw/ppc/spapr.c | 26 ++-----------------
hw/ppc/spapr_numa.c | 50 +++++++++++++++++++++++++++++++++++++
include/hw/ppc/spapr_numa.h | 20 +++++++++++++++
4 files changed, 74 insertions(+), 25 deletions(-)
create mode 100644 hw/ppc/spapr_numa.c
create mode 100644 include/hw/ppc/spapr_numa.h
diff --git a/hw/ppc/meson.build b/hw/ppc/meson.build
index 918969b320..ffa2ec37fa 100644
--- a/hw/ppc/meson.build
+++ b/hw/ppc/meson.build
@@ -25,7 +25,8 @@ ppc_ss.add(when: 'CONFIG_PSERIES', if_true: files(
'spapr_irq.c',
'spapr_tpm_proxy.c',
'spapr_nvdimm.c',
- 'spapr_rtas_ddw.c'
+ 'spapr_rtas_ddw.c',
+ 'spapr_numa.c',
))
ppc_ss.add(when: 'CONFIG_SPAPR_RNG', if_true: files('spapr_rng.c'))
ppc_ss.add(when: ['CONFIG_PSERIES', 'CONFIG_LINUX'], if_true: files(
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index b0a04443fb..a45912acac 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -81,6 +81,7 @@
#include "hw/mem/memory-device.h"
#include "hw/ppc/spapr_tpm_proxy.h"
#include "hw/ppc/spapr_nvdimm.h"
+#include "hw/ppc/spapr_numa.h"
#include "monitor/monitor.h"
@@ -891,16 +892,9 @@ static int spapr_dt_rng(void *fdt)
static void spapr_dt_rtas(SpaprMachineState *spapr, void *fdt)
{
MachineState *ms = MACHINE(spapr);
- SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(ms);
int rtas;
GString *hypertas = g_string_sized_new(256);
GString *qemu_hypertas = g_string_sized_new(256);
- uint32_t refpoints[] = {
- cpu_to_be32(0x4),
- cpu_to_be32(0x4),
- cpu_to_be32(0x2),
- };
- uint32_t nr_refpoints = ARRAY_SIZE(refpoints);
uint64_t max_device_addr = MACHINE(spapr)->device_memory->base +
memory_region_size(&MACHINE(spapr)->device_memory->mr);
uint32_t lrdr_capacity[] = {
@@ -910,14 +904,6 @@ static void spapr_dt_rtas(SpaprMachineState *spapr, void *fdt)
cpu_to_be32(SPAPR_MEMORY_BLOCK_SIZE & 0xffffffff),
cpu_to_be32(ms->smp.max_cpus / ms->smp.threads),
};
- uint32_t maxdomain = cpu_to_be32(spapr->gpu_numa_id > 1 ? 1 : 0);
- uint32_t maxdomains[] = {
- cpu_to_be32(4),
- maxdomain,
- maxdomain,
- maxdomain,
- cpu_to_be32(spapr->gpu_numa_id),
- };
_FDT(rtas = fdt_add_subnode(fdt, 0, "rtas"));
@@ -953,15 +939,7 @@ static void spapr_dt_rtas(SpaprMachineState *spapr, void *fdt)
qemu_hypertas->str, qemu_hypertas->len));
g_string_free(qemu_hypertas, TRUE);
- if (smc->pre_5_1_assoc_refpoints) {
- nr_refpoints = 2;
- }
-
- _FDT(fdt_setprop(fdt, rtas, "ibm,associativity-reference-points",
- refpoints, nr_refpoints * sizeof(refpoints[0])));
-
- _FDT(fdt_setprop(fdt, rtas, "ibm,max-associativity-domains",
- maxdomains, sizeof(maxdomains)));
+ spapr_numa_write_rtas_dt(spapr, fdt, rtas);
/*
* FWNMI reserves RTAS_ERROR_LOG_MAX for the machine check error log,
diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
new file mode 100644
index 0000000000..cdf3288cbd
--- /dev/null
+++ b/hw/ppc/spapr_numa.c
@@ -0,0 +1,50 @@
+/*
+ * QEMU PowerPC pSeries Logical Partition NUMA associativity handling
+ *
+ * Copyright IBM Corp. 2020
+ *
+ * Authors:
+ * Daniel Henrique Barboza <danielhb413@gmail.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu-common.h"
+#include "hw/ppc/spapr_numa.h"
+#include "hw/ppc/fdt.h"
+
+/*
+ * Helper that writes ibm,associativity-reference-points and
+ * ibm,max-associativity-domains in the RTAS node pointed to
+ * by @rtas in the device tree @fdt.
+ */
+void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
+{
+ SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
+ uint32_t refpoints[] = {
+ cpu_to_be32(0x4),
+ cpu_to_be32(0x4),
+ cpu_to_be32(0x2),
+ };
+ uint32_t nr_refpoints = ARRAY_SIZE(refpoints);
+ uint32_t maxdomain = cpu_to_be32(spapr->gpu_numa_id > 1 ? 1 : 0);
+ uint32_t maxdomains[] = {
+ cpu_to_be32(4),
+ maxdomain,
+ maxdomain,
+ maxdomain,
+ cpu_to_be32(spapr->gpu_numa_id),
+ };
+
+ if (smc->pre_5_1_assoc_refpoints) {
+ nr_refpoints = 2;
+ }
+
+ _FDT(fdt_setprop(fdt, rtas, "ibm,associativity-reference-points",
+ refpoints, nr_refpoints * sizeof(refpoints[0])));
+
+ _FDT(fdt_setprop(fdt, rtas, "ibm,max-associativity-domains",
+ maxdomains, sizeof(maxdomains)));
+}
diff --git a/include/hw/ppc/spapr_numa.h b/include/hw/ppc/spapr_numa.h
new file mode 100644
index 0000000000..7a370a8768
--- /dev/null
+++ b/include/hw/ppc/spapr_numa.h
@@ -0,0 +1,20 @@
+/*
+ * QEMU PowerPC pSeries Logical Partition NUMA associativity handling
+ *
+ * Copyright IBM Corp. 2020
+ *
+ * Authors:
+ * Daniel Henrique Barboza <danielhb413@gmail.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef HW_SPAPR_NUMA_H
+#define HW_SPAPR_NUMA_H
+
+#include "hw/ppc/spapr.h"
+
+void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas);
+
+#endif /* HW_SPAPR_NUMA_H */
--
2.26.2