From: Mark Cave-Ayland
Subject: Re: [PATCH for 8.2] accel/tcg/cputlb: Fix iotlb page alignment check
Date: Fri, 8 Dec 2023 19:39:46 +0000
User-agent: Mozilla Thunderbird
On 08/12/2023 17:42, Richard Henderson wrote:
> On 12/7/23 18:06, LIU Zhiwei wrote:
>> For a RAM memory region the iotlb (which will be filled into the
>> xlat_section of CPUTLBEntryFull) is calculated as:
>>
>>     iotlb = memory_region_get_ram_addr(section->mr) + xlat;
>>
>> 1) xlat here is the offset_within_region of a MemoryRegionSection,
>> which may not be TARGET_PAGE_BITS aligned.
>
> The reason we require these bits to be zero is:
>
> (A) The lowest bits must be zero so that we may test alignment.
> (B) The "middle" bits, above alignment and below TARGET_PAGE_BITS,
>     are used for TLB_FLAGS_MASK. If iotlb has these bits non-zero,
>     the softmmu comparison will not work correctly.
>
>> 2) The ram_addr_t returned by memory_region_get_ram_addr is always
>> HOST PAGE ALIGNED.
>
> RAM blocks can have larger alignment than the host page. See
> QEMU_VMALLOC_ALIGN and qemu_ram_mmap.
>
> But I can see a path by which it *is* only host page aligned, which
> could fail if the guest has larger alignment than the host.
> Fortunately, this is rare -- only alpha, cris, openrisc and sparc64
> have 8k pages, and tricore has 16k pages, while supporting system
> mode. Hexagon has 64k pages but does not yet support system mode.
>
> We should fix that, but I don't think it's urgent.
FWIW I did check the MMU configuration with the m68k test case, and I can see that the ROM configures the MMU to use 8k pages.
>> So we can't assert that the sum of them is TARGET_PAGE_BITS aligned.
>> A failing case has been given at this link:
>> https://lore.kernel.org/all/b68ab7d3-d3d3-9f81-569d-454ae9c11b16@linaro.org/T/
>
> I think the macfb device is at fault here, not your assert.
The memory region is set up using a standard memory_region_init_rom() and memory_region_add_subregion() combination (see https://gitlab.com/qemu-project/qemu/-/blob/master/hw/nubus/nubus-device.c?ref_type=heads#L79), so is the requirement here that the ROM memory region must be aligned to TARGET_PAGE_MASK?
This feels odd because I don't believe there are any other alignment restrictions for the memory API, and if there are, shouldn't the assert() fire during memory_region_add_subregion() rather than during the first memory access at runtime?
ATB, Mark.