From: David Gibson
Subject: [Qemu-devel] [PULL 3/4] hw/ppc/spapr: Fix the selection of the processor features
Date: Thu, 13 Oct 2016 16:15:33 +1100

From: Thomas Huth <address@hidden>

The current code uses pa_features_206 for POWERPC_MMU_2_06 and
pa_features_207 for everything else. This is wrong in some cases:
there is also a "degraded" MMU version of ISA 2.06, called
POWERPC_MMU_2_06a, which should of course use the 2.06 flags as
well. The user might also run the pseries machine with a POWER5+
or even a 970 processor; in that case we certainly do not want to
set the 2.07 flags, but should simply skip setting the pa-features
property instead.

Signed-off-by: Thomas Huth <address@hidden>
Reviewed-by: Cédric Le Goater <address@hidden>
Signed-off-by: David Gibson <address@hidden>
(cherry picked from commit 4cbec30d769a73853b60dc7f275e6e7da9ab5162)
---
 hw/ppc/spapr.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 36d9077..9f0d99b 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -607,12 +607,19 @@ static void spapr_populate_pa_features(CPUPPCState *env, void *fdt, int offset)
     uint8_t *pa_features;
     size_t pa_size;
 
-    if (env->mmu_model == POWERPC_MMU_2_06) {
+    switch (env->mmu_model) {
+    case POWERPC_MMU_2_06:
+    case POWERPC_MMU_2_06a:
         pa_features = pa_features_206;
         pa_size = sizeof(pa_features_206);
-    } else { /* env->mmu_model == POWERPC_MMU_2_07 */
+        break;
+    case POWERPC_MMU_2_07:
+    case POWERPC_MMU_2_07a:
         pa_features = pa_features_207;
         pa_size = sizeof(pa_features_207);
+        break;
+    default:
+        return;
     }
 
     if (env->ci_large_pages) {
-- 
2.7.4
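
For context, a minimal sketch of how spapr_populate_pa_features() reads
once this hunk is applied. The pa_features byte values, the
ci_large_pages handling and the trailing _FDT(fdt_setprop(...)) call are
assumptions based on the surrounding spapr.c code, not part of this patch.

static void spapr_populate_pa_features(CPUPPCState *env, void *fdt, int offset)
{
    /* Placeholder contents; the real byte arrays are defined in spapr.c. */
    static uint8_t pa_features_206[] = { 0x06, 0x00 };
    static uint8_t pa_features_207[] = { 0x18, 0x00 };
    uint8_t *pa_features;
    size_t pa_size;

    switch (env->mmu_model) {
    case POWERPC_MMU_2_06:
    case POWERPC_MMU_2_06a:      /* "degraded" 2.06 gets the 2.06 flags too */
        pa_features = pa_features_206;
        pa_size = sizeof(pa_features_206);
        break;
    case POWERPC_MMU_2_07:
    case POWERPC_MMU_2_07a:
        pa_features = pa_features_207;
        pa_size = sizeof(pa_features_207);
        break;
    default:
        /* e.g. POWER5+ or 970: leave the pa-features property out */
        return;
    }

    if (env->ci_large_pages) {
        /* adjustment of the CI large-page bit elided */
    }

    /* Property name and _FDT() wrapper assumed from the surrounding code. */
    _FDT(fdt_setprop(fdt, offset, "ibm,pa-features", pa_features, pa_size));
}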