qemu-devel


From: Weiwei Li
Subject: Re: [PATCH 6/6] accel/tcg: Remain TLB_INVALID_MASK in the address when TLB is re-filled
Date: Tue, 18 Apr 2023 08:48:43 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.10.0


On 2023/4/18 00:25, Daniel Henrique Barboza wrote:


On 4/13/23 06:01, Weiwei Li wrote:
When a PMP entry overlaps part of the page, we'll set the tlb_size to 1, and
this will set TLB_INVALID_MASK in the address to make the page un-cached.
However, if we clear TLB_INVALID_MASK when the TLB is re-filled, then the TLB
host address will be cached, and the following instructions can use this host
address directly, which may bypass the PMP-related checks.

Signed-off-by: Weiwei Li <liweiwei@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqiang@iscas.ac.cn>
---

For this commit I believe it's worth mentioning that it's partially reverting
commit c3c8bf579b431b6b ("accel/tcg: Suppress auto-invalidate in
probe_access_internal") that was made to handle a particularity/quirk that was
present in s390x code.

At first glance this patch seems benign but we must make sure that no other assumptions were made with this particular change in probe_access_internal().

I think this change introduces no externally visible functional change, except
that we will always walk the page table (tlb_fill) for memory accesses to that
page. And this is needed for pages that are partially overlapped by a PMP
region.

Regards,

Weiwei Li




Thanks,

Daniel

  accel/tcg/cputlb.c | 7 -------
  1 file changed, 7 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index e984a98dc4..d0bf996405 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1563,13 +1563,6 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
              /* TLB resize via tlb_fill may have moved the entry.  */
              index = tlb_index(env, mmu_idx, addr);
              entry = tlb_entry(env, mmu_idx, addr);
-
-            /*
-             * With PAGE_WRITE_INV, we set TLB_INVALID_MASK immediately,
-             * to force the next access through tlb_fill. We've just
-             * called tlb_fill, so we know that this entry *is* valid.
-             */
-            flags &= ~TLB_INVALID_MASK;
          }
          tlb_addr = tlb_read_ofs(entry, elt_ofs);
      }



