From: Alexander Boettcher
Subject: [PATCH] tcg/svm: use host cr4 during NPT page table walk
Date: Mon, 29 Jun 2020 15:25:03 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.8.0

Hello,

during a nested page table walk with TCG+SVM, get_hphys() in
target/i386/excp_helper.c checks the PSE bit against the guest's cr4
register instead of the hypervisor's. In our test case the guest has not
(yet) enabled PSE, so the page table walk resolves a wrong host physical
address and the guest reads wrong content.
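
The reason the fix works: helper_vmrun() still executes in the host
(hypervisor) context, so env->cr[4] at that point holds the host's cr4.
Snapshotting its PSE bit into env->nested_pg_mode preserves the host
paging mode for the later NPT walk. Condensed from the patch below, the
intended flow is:

    /* helper_vmrun(): runs in host context, env->cr[4] is the host cr4 */
    if (env->cr[4] & CR4_PSE_MASK) {
        env->nested_pg_mode |= SVM_NPT_PSE;
    }

    /* get_hphys(): the NPT walk must honour the host's PSE setting,
     * not whatever the guest currently has in its cr4 */
    if ((pde & PG_PSE_MASK) && (env->nested_pg_mode & SVM_NPT_PSE)) {
        page_size = 4096 * 1024;    /* 4 MiB page */
    }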

The attached patch is against 4.2.1, but also applies to 3.1.0. It fixes
the issue for our automated test case, a 32-bit hypervisor without PAE
support running a guest VM under TCG+SVM.
The test passed up to QEMU 2.12 and started to fail with QEMU 3.0 and
later; the commit that added TCG/SVM NPT support appears to have
introduced the regression.

In case someone wants to reproduce it, the ISO is at [0], the good-case
log is [1], and the failing-case log is [2]. The command line used is:

qemu-system-i386 -no-kvm -nographic -cpu phenom -m 512 -machine q35 \
    -cdrom seoul-vmm-test.iso

[0] https://depot.genode.org/alex-ab/images/seoul-vmm-test.iso
[1] https://depot.genode.org/alex-ab/images/seoul-vmm-good.txt
[2] https://depot.genode.org/alex-ab/images/seoul-vmm-bad.txt

-- 
Alexander Boettcher
Genode Labs

https://www.genode-labs.com - https://www.genode.org

Genode Labs GmbH - Amtsgericht Dresden - HRB 28424 - Sitz Dresden
Geschäftsführer: Dr.-Ing. Norman Feske, Christian Helmuth




Signed-off-by: Alexander Boettcher <alexander.boettcher@genode-labs.com>
---
 target/i386/excp_helper.c | 4 ++--
 target/i386/svm.h         | 1 +
 target/i386/svm_helper.c  | 7 ++++++-
 3 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/target/i386/excp_helper.c b/target/i386/excp_helper.c
index 1447bda7a9..b10c7ecbcc 100644
--- a/target/i386/excp_helper.c
+++ b/target/i386/excp_helper.c
@@ -262,8 +262,8 @@ static hwaddr get_hphys(CPUState *cs, hwaddr gphys, MMUAccessType access_type,
         }
         ptep = pde | PG_NX_MASK;
 
-        /* if PSE bit is set, then we use a 4MB page */
-        if ((pde & PG_PSE_MASK) && (env->cr[4] & CR4_PSE_MASK)) {
+        /* if host cr4 PSE bit is set, then we use a 4MB page */
+        if ((pde & PG_PSE_MASK) && (env->nested_pg_mode & SVM_NPT_PSE)) {
             page_size = 4096 * 1024;
             pte_addr = pde_addr;
 
diff --git a/target/i386/svm.h b/target/i386/svm.h
index 23a3a040b8..ae30fc6f79 100644
--- a/target/i386/svm.h
+++ b/target/i386/svm.h
@@ -135,6 +135,7 @@
 #define SVM_NPT_PAE         (1 << 0)
 #define SVM_NPT_LMA         (1 << 1)
 #define SVM_NPT_NXE         (1 << 2)
+#define SVM_NPT_PSE         (1 << 3)
 
 #define SVM_NPTEXIT_P       (1ULL << 0)
 #define SVM_NPTEXIT_RW      (1ULL << 1)
diff --git a/target/i386/svm_helper.c b/target/i386/svm_helper.c
index 7b8105a1c3..6224387eab 100644
--- a/target/i386/svm_helper.c
+++ b/target/i386/svm_helper.c
@@ -209,16 +209,21 @@ void helper_vmrun(CPUX86State *env, int aflag, int next_eip_addend)
 
     nested_ctl = x86_ldq_phys(cs, env->vm_vmcb + offsetof(struct vmcb,
                                                           control.nested_ctl));
+
+    env->nested_pg_mode = 0;
+
     if (nested_ctl & SVM_NPT_ENABLED) {
         env->nested_cr3 = x86_ldq_phys(cs,
                                 env->vm_vmcb + offsetof(struct vmcb,
                                                         control.nested_cr3));
         env->hflags2 |= HF2_NPT_MASK;
 
-        env->nested_pg_mode = 0;
         if (env->cr[4] & CR4_PAE_MASK) {
             env->nested_pg_mode |= SVM_NPT_PAE;
         }
+        if (env->cr[4] & CR4_PSE_MASK) {
+            env->nested_pg_mode |= SVM_NPT_PSE;
+        }
         if (env->hflags & HF_LMA_MASK) {
             env->nested_pg_mode |= SVM_NPT_LMA;
         }
-- 
2.17.1


