
Re: [Qemu-arm] [PATCH v2] Improve the alignment check infrastructure


From: Sergey Sorokin
Subject: Re: [Qemu-arm] [PATCH v2] Improve the alignment check infrastructure
Date: Wed, 22 Jun 2016 19:30:20 +0300

22.06.2016, 18:50, "Richard Henderson" <address@hidden>:

On 06/22/2016 05:37 AM, Sergey Sorokin wrote:

 +/* Use this mask to check interception with an alignment mask
 + * in a TCG backend.
 + */
 +#define TLB_FLAGS_MASK (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO)


I think we ought to check this in tcg-op.c, rather than wait until generating
code in the backend.
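
For illustration, a front-end check along those lines might look like
this (a minimal sketch, not the actual patch; the helper name is made
up, and QEMU's internal definitions of TCGMemOp, get_alignment_bits()
and TLB_FLAGS_MASK are assumed to be in scope):

    /* Reject over-large alignments once, at MemOp construction time in
     * tcg-op.c, instead of repeating the check in every backend. */
    static void check_alignment_mask(TCGMemOp op)
    {
        unsigned a_bits = get_alignment_bits(op);

        /* If the alignment mask overlapped TLB_FLAGS_MASK, a TLB entry
         * flagged invalid/notdirty/MMIO could pass the fast-path
         * compare despite needing the slow path. */
        tcg_debug_assert((((1 << a_bits) - 1) & TLB_FLAGS_MASK) == 0);
    }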


 --- a/tcg/aarch64/tcg-target.inc.c
 +++ b/tcg/aarch64/tcg-target.inc.c
 @@ -1071,19 +1071,21 @@ static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,
       int tlb_offset = is_read ?
           offsetof(CPUArchState, tlb_table[mem_index][0].addr_read)
           : offsetof(CPUArchState, tlb_table[mem_index][0].addr_write);
 -    int s_mask = (1 << (opc & MO_SIZE)) - 1;
 +    int a_bits = get_alignment_bits(opc);
      TCGReg base = TCG_AREG0, x3;
 -    uint64_t tlb_mask;
 +    target_ulong tlb_mask;

Hum. I had been talking about i386 specifically when changing the type of
tlb_mask.

For aarch64, a quirk in the code generation logic requires that a 32-bit
tlb_mask be sign-extended to 64-bit. The effect of the actual instruction will
be zero-extension, however.

See is_limm, tcg_out_logicali, and a related comment in tcg_out_movi for
details. We should probably add a comment here in tlb_read for the next person
that comes along...
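
For concreteness, the quirk can be illustrated like this (a sketch with
assumed values, TARGET_PAGE_BITS == 12, a 32-bit guest and a_bits == 3;
not the actual backend code):

    /* With a_bits == 3 the mask is 0xfffff007: a run of ones that
     * wraps around in 32 bits.  Zero-extended (0x00000000fffff007) the
     * run no longer wraps, so is_limm() rejects it; sign-extended
     * (0xfffffffffffff007) it wraps within 64 bits and is encodable as
     * a logical immediate.  The 32-bit ANDI that is finally emitted
     * zero-extends the result anyway. */
    uint64_t tlb_mask = (uint64_t)(int32_t)(TARGET_PAGE_MASK
                                            | ((1 << a_bits) - 1));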

Thank you for the comment.

 diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c
 index da10052..3dc38fa 100644
 --- a/tcg/ppc/tcg-target.inc.c
 +++ b/tcg/ppc/tcg-target.inc.c
 @@ -1399,6 +1399,7 @@ static TCGReg tcg_out_tlb_read(TCGContext *s, TCGMemOp opc,
      int add_off = offsetof(CPUArchState, tlb_table[mem_index][0].addend);
      TCGReg base = TCG_AREG0;
      TCGMemOp s_bits = opc & MO_SIZE;
 +    int a_bits = get_alignment_bits(opc);

      /* Extract the page index, shifted into place for tlb index. */
      if (TCG_TARGET_REG_BITS == 64) {
 @@ -1456,14 +1457,21 @@ static TCGReg tcg_out_tlb_read(TCGContext *s, TCGMemOp opc,
           * the bottom bits and thus trigger a comparison failure on
           * unaligned accesses
           */
 +        if (a_bits > 0) {
 +            tcg_debug_assert((((1 << a_bits) - 1) & TLB_FLAGS_MASK) == 0);
 +        } else {
 +            a_bits = s_bits;
 +        }
          tcg_out_rlw(s, RLWINM, TCG_REG_R0, addrlo, 0,
 +                    (32 - a_bits) & 31, 31 - TARGET_PAGE_BITS);


ppc32 can certainly support over-alignment, just like every other target. It's
just that there are some 32-bit parts that don't support unaligned accesses.

 
I don't understand your point here.
As the comment says, this case preserves the bottom bits so that any
unaligned access goes to the slow path, regardless of whether an
alignment check was requested.
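
As a rough C model of what that rlwinm computes (illustrative only;
addrlo here stands for the guest address value rather than the TCG
register, and cmp is a made-up name):

    /* Keep the page-number bits plus the low a_bits bits of the
     * address.  Any unaligned access leaves nonzero low bits in the
     * value compared against the (page-aligned) TLB entry, so the
     * compare fails and execution falls through to the slow path. */
    uint32_t cmp = addrlo & ((0xffffffffu << TARGET_PAGE_BITS)
                             | ((1u << a_bits) - 1));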
 
 
 
Also, I forgot about softmmu_template.h. This patch is not complete.
