Re: [Qemu-devel] [PATCH] Improve the alignment check infrastructure
From: Sergey Sorokin
Subject: Re: [Qemu-devel] [PATCH] Improve the alignment check infrastructure
Date: Mon, 20 Jun 2016 20:33:45 +0300
20.06.2016, 18:45, "Richard Henderson" <address@hidden>:
On 06/20/2016 06:56 AM, Sergey Sorokin wrote:
  /* Flags stored in the low bits of the TLB virtual address.  These are
 -   defined so that fast path ram access is all zeros. */
 +   defined so that fast path ram access is all zeros.
 +   They start after address alignment bits. */
 +#define TLB_FLAGS_START_BIT 6
  /* Zero if TLB entry is valid.  */
 -#define TLB_INVALID_MASK (1 << 3)
 +#define TLB_INVALID_MASK (1 << (TLB_FLAGS_START_BIT + 0))
  /* Set if TLB entry references a clean RAM page.  The iotlb entry will
     contain the page physical address. */
 -#define TLB_NOTDIRTY (1 << 4)
 +#define TLB_NOTDIRTY (1 << (TLB_FLAGS_START_BIT + 1))
  /* Set if TLB entry is an IO callback.  */
 -#define TLB_MMIO (1 << 5)
 +#define TLB_MMIO (1 << (TLB_FLAGS_START_BIT + 2))
I think we may need to assert that TLB_FLAGS_START_BIT + 3 < TARGET_PAGE_BITS.
Or perhaps start from TARGET_PAGE_BITS-1 and subtract?
I thought about this but left it unchecked, as it was before. Anyway, the idea
is good.
I'm thinking of the AVR target currently under review which requires
TARGET_PAGE_BITS == 8 in order to support the memory device layout.
What is the maximum alignment size for AVR? I think it's possible to
implement some checks against this maximum.
 @@ -1195,8 +1195,8 @@ static inline void tcg_out_tlb_load(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
      TCGType ttype = TCG_TYPE_I32;
      TCGType tlbtype = TCG_TYPE_I32;
      int trexw = 0, hrexw = 0, tlbrexw = 0;
 -    int s_mask = (1 << (opc & MO_SIZE)) - 1;
 -    bool aligned = (opc & MO_AMASK) == MO_ALIGN || s_mask == 0;
 +    int a_bits = get_alignment_bits(opc);
 +    uint64_t tlb_mask;
tlb_mask should be target_ulong.
 @@ -1099,9 +1109,15 @@ void tcg_dump_ops(TCGContext *s)
                              qemu_log(",$0x%x,%u", op, ix);
                          } else {
                              const char *s_al = "", *s_op;
 +                            int a_bits;
                              if (op & MO_AMASK) {
 -                                if ((op & MO_AMASK) == MO_ALIGN) {
 -                                    s_al = "al+";
 +                                a_bits = get_alignment_bits(op);
 +                                if (a_bits >= 0) {
 +                                    if ((op & MO_SIZE) == a_bits) {
 +                                        s_al = "al+";
 +                                    } else {
 +                                        s_al = alignment_name[a_bits];
 +                                    }
I think perhaps we should be more explicit about what's actually encoded
here. E.g. only print "al+" if (op & MO_AMASK) == MO_ALIGN. So if an
explicit al4 is encoded for size 4, print that. Which does simplify all
this code to
+static const char * const alignment_name[(MO_AMASK >> MO_ASHIFT) + 1] = {
+    [MO_UNALN >> MO_ASHIFT]    = "un+",
+    [MO_ALIGN >> MO_ASHIFT]    = "al+",
+    [MO_ALIGN_2 >> MO_ASHIFT]  = "al2+",
+    [MO_ALIGN_4 >> MO_ASHIFT]  = "al4+",
+    [MO_ALIGN_8 >> MO_ASHIFT]  = "al8+",
+    [MO_ALIGN_16 >> MO_ASHIFT] = "al16+",
+    [MO_ALIGN_32 >> MO_ASHIFT] = "al32+",
+    [MO_ALIGN_64 >> MO_ASHIFT] = "al64+",
+};

     s_al = alignment_name[(op & MO_AMASK) >> MO_ASHIFT];
 +    /* A specific alignment size.  It must be equal to or greater
 +     * than the access size.
 +     */
 +    a >>= MO_ASHIFT;
 +    assert(a >= s);
 +    return a;
tcg_debug_assert.
r~