[Qemu-devel] [PATCH] target/ppc: migrate VPA related state


From: Greg Kurz
Subject: [Qemu-devel] [PATCH] target/ppc: migrate VPA related state
Date: Wed, 09 May 2018 13:16:56 +0200
User-agent: StGit/0.17.1-46-g6855-dirty

QEMU implements the "Shared Processor LPAR" (SPLPAR) option, which allows
the hypervisor to time-slice a physical processor into multiple virtual
processors. The intent is to allow more guests to run and to optimize
processor utilization.

The guest OS can cede idle VCPUs with the H_CEDE hcall, so that their
processing capacity may be used by other VCPUs. It can also optimize
spinlocks with the H_CONFER hcall, by conferring the time-slice of a
spinning VCPU to the spinlock holder if the latter is currently not
running.
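For illustration, here is a minimal guest-side sketch of how these two
hcalls are typically invoked. The hcall() wrapper is a hypothetical
stand-in for the guest kernel's hypercall entry point and is not part of
this patch.

/* Hypothetical guest-side sketch: using H_CEDE and H_CONFER. */
#include <stdint.h>

#define H_CEDE   0x1E0          /* PAPR hcall numbers */
#define H_CONFER 0x1E4

/* Assumed wrapper around the hypervisor call instruction ("sc 1"). */
extern long hcall(unsigned long opcode, ...);

/* Idle loop: give this VCPU's remaining time-slice back to the hypervisor. */
static void cpu_idle_cede(void)
{
    hcall(H_CEDE);
}

/* Spinlock slow path: confer our time-slice to the preempted lock holder.
 * The holder's CPU number and dispatch count would normally be read from
 * its VPA. */
static void spin_yield_to_holder(int holder_cpu, uint32_t dispatch_count)
{
    hcall(H_CONFER, (unsigned long)holder_cpu, (unsigned long)dispatch_count);
}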

Both hcalls require a "Virtual Processor Area" (VPA) to be registered by
the guest OS, generally during early boot. Other per-VCPU areas can be
registered: the "SLB Shadow Buffer", which allows a more efficient
dispatching of VCPUs, and the "Dispatch Trace Log Buffer" (DTL), which is
used to compute the time stolen by the hypervisor. Both the DTL and SLB
Shadow areas depend on the VPA being registered.
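As a rough illustration of that dependency (a sketch only, with
placeholder types and helpers; this is not QEMU's actual H_REGISTER_VPA
handler), the hypervisor-side registration rules boil down to:

#define H_SUCCESS   0
#define H_RESOURCE  (-16)       /* PAPR return codes */

/* Placeholder for the per-VCPU fields used by the patch below. */
typedef struct {
    unsigned long vpa_addr;
    unsigned long slb_shadow_addr, slb_shadow_size;
    unsigned long dtl_addr, dtl_size;
} VcpuAreas;

static long register_slb_shadow(VcpuAreas *a, unsigned long addr,
                                unsigned long size)
{
    if (!a->vpa_addr) {
        return H_RESOURCE;      /* SLB Shadow requires a registered VPA */
    }
    a->slb_shadow_addr = addr;
    a->slb_shadow_size = size;
    return H_SUCCESS;
}

static long register_dtl(VcpuAreas *a, unsigned long addr, unsigned long size)
{
    if (!a->vpa_addr) {
        return H_RESOURCE;      /* DTL likewise requires a registered VPA */
    }
    a->dtl_addr = addr;
    a->dtl_size = size;
    return H_SUCCESS;
}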

The VPA, SLB Shadow and DTL are state that QEMU should migrate, but this
doesn't happen, for no apparent reason other than it was never coded. This
causes the features listed above to stop working after migration, and it
breaks the logic of the H_REGISTER_VPA hcall on the destination.

This patch fixes it for newer machine types (i.e., versions > 2.12) by
adding a "cpu/vpa" subsection to the CPU migration stream. Gating the
subsection on the machine version keeps backward migration to existing
QEMU versions working.
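The pre_2_13_migration flag tested in vpa_needed() is expected to be wired
up as a CPU property that machine compat code turns on for pre-2.13
machine types. The sketch below is illustrative only; the property name
and the exact compat wiring are assumptions, not part of this diff:

static Property ppc_cpu_properties[] = {
    /* Default off: new machine types emit the "cpu/vpa" subsection. */
    DEFINE_PROP_BOOL("pre-2.13-migration", PowerPCCPU,
                     pre_2_13_migration, false),
    DEFINE_PROP_END_OF_LIST(),
};

/* A machine compat entry for pseries-2.12 and older would then set this
 * property to "on", so vpa_needed() returns false and the subsection is
 * never emitted for those machine types. */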

Since DTL and SLB Shadow are optional and both depend on VPA, they get
their own subsections "cpu/vpa/slb_shadow" and "cpu/vpa/dtl" hanging from
the "cpu/vpa" subsection.

Signed-off-by: Greg Kurz <address@hidden>
---
 target/ppc/machine.c |   62 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)

diff --git a/target/ppc/machine.c b/target/ppc/machine.c
index ba1b9e531f97..b0d4040b37f9 100644
--- a/target/ppc/machine.c
+++ b/target/ppc/machine.c
@@ -677,6 +677,67 @@ static const VMStateDescription vmstate_compat = {
     }
 };
 
+static bool slb_shadow_needed(void *opaque)
+{
+    PowerPCCPU *cpu = opaque;
+
+    return cpu->env.slb_shadow_addr != 0;
+}
+
+static const VMStateDescription vmstate_slb_shadow = {
+    .name = "cpu/vpa/slb_shadow",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = slb_shadow_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT64(env.slb_shadow_addr, PowerPCCPU),
+        VMSTATE_UINT64(env.slb_shadow_size, PowerPCCPU),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static bool dtl_needed(void *opaque)
+{
+    PowerPCCPU *cpu = opaque;
+
+    return cpu->env.dtl_addr != 0;
+}
+
+static const VMStateDescription vmstate_dtl = {
+    .name = "cpu/vpa/dtl",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = dtl_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT64(env.dtl_addr, PowerPCCPU),
+        VMSTATE_UINT64(env.dtl_size, PowerPCCPU),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static bool vpa_needed(void *opaque)
+{
+    PowerPCCPU *cpu = opaque;
+
+    return !cpu->pre_2_13_migration && cpu->env.vpa_addr != 0;
+}
+
+static const VMStateDescription vmstate_vpa = {
+    .name = "cpu/vpa",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .needed = vpa_needed,
+    .fields = (VMStateField[]) {
+        VMSTATE_UINT64(env.vpa_addr, PowerPCCPU),
+        VMSTATE_END_OF_LIST()
+    },
+    .subsections = (const VMStateDescription * []) {
+        &vmstate_slb_shadow,
+        &vmstate_dtl,
+        NULL
+    }
+};
+
 const VMStateDescription vmstate_ppc_cpu = {
     .name = "cpu",
     .version_id = 5,
@@ -731,6 +792,7 @@ const VMStateDescription vmstate_ppc_cpu = {
         &vmstate_tlbemb,
         &vmstate_tlbmas,
         &vmstate_compat,
+        &vmstate_vpa,
         NULL
     }
 };



