[PULL 01/16] exec: Use TARGET_PAGE_BITS_MIN for TLB flags
From: Richard Henderson
Subject: [PULL 01/16] exec: Use TARGET_PAGE_BITS_MIN for TLB flags
Date: Wed, 25 Sep 2019 11:45:33 -0700
These bits do not need to vary with the actual page size
used by the guest.
Reviewed-by: Alex Bennée <address@hidden>
Reviewed-by: David Hildenbrand <address@hidden>
Reviewed-by: Paolo Bonzini <address@hidden>
Signed-off-by: Richard Henderson <address@hidden>
---
include/exec/cpu-all.h | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index d2d443c4f9..e0c8dc540c 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -317,20 +317,24 @@ CPUArchState *cpu_copy(CPUArchState *env);
#if !defined(CONFIG_USER_ONLY)
-/* Flags stored in the low bits of the TLB virtual address. These are
- * defined so that fast path ram access is all zeros.
+/*
+ * Flags stored in the low bits of the TLB virtual address.
+ * These are defined so that fast path ram access is all zeros.
* The flags all must be between TARGET_PAGE_BITS and
* maximum address alignment bit.
+ *
+ * Use TARGET_PAGE_BITS_MIN so that these bits are constant
+ * when TARGET_PAGE_BITS_VARY is in effect.
*/
/* Zero if TLB entry is valid. */
-#define TLB_INVALID_MASK (1 << (TARGET_PAGE_BITS - 1))
+#define TLB_INVALID_MASK (1 << (TARGET_PAGE_BITS_MIN - 1))
/* Set if TLB entry references a clean RAM page. The iotlb entry will
contain the page physical address. */
-#define TLB_NOTDIRTY (1 << (TARGET_PAGE_BITS - 2))
+#define TLB_NOTDIRTY (1 << (TARGET_PAGE_BITS_MIN - 2))
/* Set if TLB entry is an IO callback. */
-#define TLB_MMIO (1 << (TARGET_PAGE_BITS - 3))
+#define TLB_MMIO (1 << (TARGET_PAGE_BITS_MIN - 3))
/* Set if TLB entry contains a watchpoint. */
-#define TLB_WATCHPOINT (1 << (TARGET_PAGE_BITS - 4))
+#define TLB_WATCHPOINT (1 << (TARGET_PAGE_BITS_MIN - 4))
/* Use this mask to check interception with an alignment mask
* in a TCG backend.
--
2.17.1