From: Weiwei Li
Subject: Re: [PATCH 6/6] accel/tcg: Remain TLB_INVALID_MASK in the address when TLB is re-filled
Date: Tue, 18 Apr 2023 16:18:53 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.10.0
On 2023/4/18 15:36, Richard Henderson wrote:
> On 4/18/23 09:18, Richard Henderson wrote:
>> -    /*
>> -     * With PAGE_WRITE_INV, we set TLB_INVALID_MASK immediately,
>> -     * to force the next access through tlb_fill.  We've just
>> -     * called tlb_fill, so we know that this entry *is* valid.
>> -     */
>> -    flags &= ~TLB_INVALID_MASK;
>
> I missed the original patch, but this is definitely wrong.
>
> Clearing this bit locally (!) is correct because we want to inform the
> caller of probe_access_* that the access is valid.  We know that it is
> valid because we have just queried tlb_fill (and thus for riscv, PMP).
>
> Clearing the bit locally does *not* cause the tlb entry to be cached --
> the INVALID bit is still set within the tlb entry.  The next access will
> again go through tlb_fill.
>
> What is the original problem you are seeing?  The commit message does
> not say.
>
> From
> https://lore.kernel.org/qemu-devel/3ace9e9e-91cf-36e6-a18f-494fd44dffab@iscas.ac.cn/
> I see that it is a problem with execution.
Yeah. I found this problem in the PMP check for instruction fetch.
> By eye, it appears that get_page_addr_code_hostp needs adjustment, e.g.
>
>      (void)probe_access_internal(env, addr, 1, MMU_INST_FETCH,
>                                  cpu_mmu_index(env, true), false,
>                                  &p, &full, 0);
>      if (p == NULL) {
>          return -1;
>      }
> +    if (full->lg_page_size < TARGET_PAGE_BITS) {
> +        return -1;
> +    }
>      if (hostp) {
>          *hostp = p;
>      }
>
> It seems like we could do slightly better than this, perhaps by
> single-stepping through such a page, but surely this edge case is so
> uncommon as to not make it worthwhile to consider.
OK. I'll update and test it later.

Regards,
Weiwei Li
> r~