From: Richard Henderson
Subject: Re: [PATCH v4 1/3] target/arm: Add support for FEAT_TLBIRANGE
Date: Tue, 16 Mar 2021 12:09:45 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.7.1

On 3/16/21 9:49 AM, Rebecca Cran wrote:
+    for (page = addr; page < (addr + length); page += TARGET_PAGE_SIZE) {

This test means that it's impossible to flush the last page of the address space: when the range ends exactly at the top, addr + length wraps to 0, so the comparison is never true. I think it's better to do:

  for (l = 0; l < length; l += TARGET_PAGE_SIZE) {
      page = addr + l;
      ...
  }
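For concreteness, a tiny standalone illustration of the wraparound
(hypothetical values, with a 32-bit target_ulong stand-in; not part of
the patch):

  #include <stdint.h>
  #include <stdio.h>

  typedef uint32_t target_ulong;        /* stand-in for a 32-bit target */
  #define TARGET_PAGE_SIZE 0x1000u

  int main(void)
  {
      target_ulong addr = 0xfffff000u, length = 0x1000u, page, l;

      /* Original form: addr + length wraps to 0, so the body never
       * runs and the last page is never flushed. */
      for (page = addr; page < addr + length; page += TARGET_PAGE_SIZE) {
          printf("original would flush %08x\n", page);  /* unreachable */
      }

      /* Suggested form: the offset cannot wrap. */
      for (l = 0; l < length; l += TARGET_PAGE_SIZE) {
          printf("suggested flushes %08x\n", addr + l); /* fffff000 */
      }
      return 0;
  }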

+        for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
+            if ((idxmap >> mmu_idx) & 1) {
+                tlb_flush_page_bits_locked(env, mmu_idx, page, bits);

Hmm, I'm not keen on this. You're not able to notice the special cases inside tlb_flush_page_bits_locked() where we flush the entire TLB -- after which there is no need to keep iterating pages for that mmuidx.

+                tb_flush_jmp_cache(cpu, page);

This does not need to be in the mmuidx loop. But since the above means the mmuidx loop should become the outer loop, this would go in a separate page loop by itself.
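Concretely, something along these lines (a sketch only; it assumes
tlb_flush_page_bits_locked() is changed to return true when it flushed
the entire TLB for this mmuidx, which it does not currently do):

  target_ulong l;
  int mmu_idx;

  for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
      if (!((idxmap >> mmu_idx) & 1)) {
          continue;
      }
      for (l = 0; l < length; l += TARGET_PAGE_SIZE) {
          if (tlb_flush_page_bits_locked(env, mmu_idx, addr + l, bits)) {
              /* Whole TLB flushed; this mmuidx is done. */
              break;
          }
      }
  }

  /* The jump cache is not per-mmuidx: flush it in its own loop. */
  for (l = 0; l < length; l += TARGET_PAGE_SIZE) {
      tb_flush_jmp_cache(cpu, addr + l);
  }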

+void tlb_flush_page_range_bits_by_mmuidx(CPUState *cpu,
+                                         target_ulong addr,
+                                         target_ulong length,
+                                         uint16_t idxmap,
+                                         unsigned bits)
+{
+    TLBFlushPageRangeBitsByMMUIdxData d;
+    TLBFlushPageRangeBitsByMMUIdxData *p;
+
+    /* This should already be page aligned */
+    addr &= TARGET_PAGE_BITS;
+
+    /* If all bits are significant, this devolves to tlb_flush_page. */
+    if (bits >= TARGET_LONG_BITS) {
+        tlb_flush_page_by_mmuidx(cpu, addr, idxmap);
+        return;
+    }

This case is incorrect: we were handed a range, but this devolves to flushing only the single page at addr and ignores length.
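A sketch of a correct devolution -- the range still has to be covered:

  if (bits >= TARGET_LONG_BITS) {
      target_ulong l;
      /* All bits significant: still a range flush, so flush every
       * page in [addr, addr + length), not just the page at addr. */
      for (l = 0; l < length; l += TARGET_PAGE_SIZE) {
          tlb_flush_page_by_mmuidx(cpu, addr + l, idxmap);
      }
      return;
  }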

The cputlb changes should have remained a separate patch.

@@ -4759,6 +4759,241 @@ static void tlbi_aa64_vae3is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                                                    ARMMMUIdxBit_SE3, bits);
  }
+#ifdef TARGET_AARCH64
+static uint64_t tlbi_aa64_range_get_length(CPUARMState *env,
+                                           uint64_t value)
+{
+    unsigned int page_size;
+    unsigned int page_size_granule;
+    uint64_t num;
+    uint64_t scale;
+    uint64_t exponent;
+    uint64_t length;
+
+    num = extract64(value, 39, 4);
+    scale = extract64(value, 44, 2);
+    page_size_granule = extract64(value, 46, 2);
+
+    switch (page_size_granule) {
+    case 1:
+      page_size = 4096;

Indentation is off?

+      break;
+    case 2:
+      page_size = 16384;
+      break;
+    case 3:
+      page_size = 65536;

You might as well have this as page_shift = {12,14,16}, or perhaps page_shift = page_size_granule * 2 + 10 instead of the full switch.

+    exponent = (5 * scale) + 1;
+    length = ((num + 1) << exponent) * page_size;

  length = (num + 1) << (exponent + page_shift);
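Folded together, the whole computation could be (a sketch; the helper
name is mine, and the field extraction mirrors the quoted code):

  /* TG: 1 -> 4K, 2 -> 16K, 3 -> 64K, i.e. page_shift = 12, 14 or 16. */
  static uint64_t tlbi_aa64_range_length(uint64_t num, uint64_t scale,
                                         unsigned page_size_granule)
  {
      unsigned page_shift = page_size_granule * 2 + 10;
      uint64_t exponent = (5 * scale) + 1;

      return (num + 1) << (exponent + page_shift);
  }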

+    mask = vae1_tlbmask(env);
+    if (regime_has_2_ranges(mask)) {

You can't pass in mask: regime_has_2_ranges() takes a single ARMMMUIdx, not a bitmask.

All of the mmuidx will have the same form, so ctz32(mask) would pick out the mmuidx for the first bit.
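I.e., something like this sketch (assuming ARM_MMU_IDX_A is the right
tag to turn the core index back into an ARMMMUIdx for these regimes):

  /* All indexes in the mask share the same regime form, so it is
   * enough to inspect the first set bit. */
  ARMMMUIdx one_idx = ARM_MMU_IDX_A | ctz32(mask);

  if (regime_has_2_ranges(one_idx)) {
      ...
  }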

+    if (regime_has_2_ranges(secure ? ARMMMUIdxBit_SE2 : ARMMMUIdxBit_E2)) {

Again, only this time we know that E2 & SE2 have one range. Only (S)EL1&0 and (S)EL2&0 have two ranges.

+    if (regime_has_2_ranges(ARMMMUIdxBit_SE3)) {

Likewise, E3 has only one range.


r~


