From: Laurent Vivier
Subject: Re: [Qemu-devel] [PULL 23/32] tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE
Date: Thu, 28 Jun 2018 21:23:56 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.8.0

On 28/06/2018 at 15:23, Peter Maydell wrote:
> On 28 June 2018 at 14:03, Laurent Vivier <address@hidden> wrote:
>> On 26/06/2018 at 18:56, Peter Maydell wrote:
>>> Add support for MMU protection regions that are smaller than
>>> TARGET_PAGE_SIZE. We do this by marking the TLB entry for those
>>> pages with a flag TLB_RECHECK. This flag causes us to always
>>> take the slow-path for accesses. In the slow path we can then
>>> special case them to always call tlb_fill() again, so we have
>>> the correct information for the exact address being accessed.
>>>
>>> This change allows us to handle reading and writing from small
>>> regions; we cannot deal with execution from the small region.
>>>
>>> Signed-off-by: Peter Maydell <address@hidden>
>>> Reviewed-by: Richard Henderson <address@hidden>
>>> Message-id: address@hidden
>>> ---
>>>  accel/tcg/softmmu_template.h |  24 ++++---
>>>  include/exec/cpu-all.h       |   5 +-
>>>  accel/tcg/cputlb.c           | 131 +++++++++++++++++++++++++++++------
>>>  3 files changed, 130 insertions(+), 30 deletions(-)
>>
>> This patch breaks Quadra 800 emulation, any idea why?
>>
>> ABCFGHIJK
>> qemu: fatal: Unable to handle guest executing from RAM within a small
>> MPU region at 0x0014cb5a
> 
> Hmm, that shouldn't happen unless your target code was
> incorrectly returning a too-small page size. (I say
> "incorrectly" because before this patchseries that was
> unsupported and would have had weird effects depending on
> exactly what the order of guest accesses to the page was.)
> 
> You could look at whether the m68k code is calling tlb_set_page()
> with a wrong page_size value and why that happens. You can
> get back the old behaviour by having your code do
>    if (page_size < TARGET_PAGE_SIZE) {
>        page_size = TARGET_PAGE_SIZE;
>    }
> 
> but that is definitely a bit of a hack.
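For context, here is a minimal, self-contained toy model of the
TLB_RECHECK idea described in the quoted commit message. It is
illustrative only; the real logic lives in accel/tcg/cputlb.c and
accel/tcg/softmmu_template.h, and all names below are invented:

    /* Toy model of the TLB_RECHECK flag; illustrative only, not QEMU code.
     * Entries covering less than a full target page are flagged so that
     * every access takes the slow path and re-resolves the exact address. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SIZE   4096u
    #define TLB_RECHECK 0x1u

    typedef struct {
        uint32_t vaddr_page;   /* page-aligned virtual address */
        uint32_t flags;        /* TLB_RECHECK if the region < PAGE_SIZE */
    } TLBEntry;

    /* Stand-in for tlb_fill(): redo the page-table walk for this access. */
    static bool slow_path_resolve(uint32_t addr)
    {
        printf("slow path: re-walking tables for 0x%08x\n", addr);
        return true;   /* pretend the exact address turned out readable */
    }

    static bool load_ok(const TLBEntry *e, uint32_t addr)
    {
        if ((addr & ~(PAGE_SIZE - 1)) == e->vaddr_page
            && !(e->flags & TLB_RECHECK)) {
            return true;                /* fast path: normal full-page entry */
        }
        return slow_path_resolve(addr); /* sub-page region: always recheck */
    }

    int main(void)
    {
        TLBEntry small = { .vaddr_page = 0x0014c000, .flags = TLB_RECHECK };
        load_ok(&small, 0x0014cb5a);    /* every access takes the slow path */
        return 0;
    }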

Thank you for having a look at this.

I've added traces, and tlb_set_page() is always called with page_size ==
TARGET_PAGE_SIZE.

The m68k Linux kernel always uses 4 kB pages, which is the value of
TARGET_PAGE_SIZE. The 68040 MMU can also use 8 kB pages, but in our case
it doesn't (and of course 8 kB > TARGET_PAGE_SIZE).

> Does the m68k MMU let you specify permissions and mappings
> for sub-page sizes ?

I'm not aware of sub-page regions in the m68k MMU, but we do have TLB
entries that are separate for code and data: does that change anything in
your code? Could accessing an address first as a data access and then as
an instruction access show up as a TLB_RECHECK?
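For reference, the fatal error above is raised on the instruction-fetch
path. A sketch from my reading of the patch (approximate names, not
verbatim from accel/tcg/cputlb.c):

    /* Sketch of the patch's get_page_addr_code() handling (approximate):
     * an instruction fetch that hits a TLB_RECHECK entry re-runs tlb_fill(),
     * and if the entry still covers less than a page, QEMU gives up with
     * exactly the fatal error quoted above. */
    if (unlikely(entry->addr_code & TLB_RECHECK)) {
        tlb_fill(cs, addr, 0, MMU_INST_FETCH, mmu_idx, 0);
        if (entry->addr_code & TLB_RECHECK) {
            /* We can't execute code from a sub-page MPU region. */
            cpu_abort(cs, "Unable to handle guest executing from RAM within "
                      "a small MPU region at 0x" TARGET_FMT_lx, addr);
        }
    }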

> I do notice an oddity:
> in m68k_cpu_handle_mmu_fault() we call get_physical_address()
> but then ignore the page_size it returns when we call tlb_set_page()
> and instead use TARGET_PAGE_SIZE. But in the ptest helper function
> we use the page_size from get_physical_address() directly.
> Are these bits of code deliberately different?

I remember I had problems making this work. But I think you're right: it
should be page_size everywhere. I guess it's not the cause of my problem,
though (I tried :) )...
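For concreteness, the "page_size everywhere" change under discussion
would look roughly like this (a hypothetical hunk; the real code in
target/m68k/helper.c may differ):

    --- a/target/m68k/helper.c
    +++ b/target/m68k/helper.c
    @@ sketch: m68k_cpu_handle_mmu_fault() @@
             tlb_set_page(cs, address & TARGET_PAGE_MASK,
                          physical & TARGET_PAGE_MASK, prot,
    -                     mmu_idx, TARGET_PAGE_SIZE);
    +                     mmu_idx, page_size);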

> In fact it's not clear to me at all that PTEST should be
> updating the QEMU TLB: it only needs to update the MMU
> status registers. (The 68030 manual I have says that in
> hardware PTEST doesn't update the ATC, which is the h/w
> equivalent to doing a TLB update.)

In QEMU, we currently emulate the 68040 MMU, and PTEST on the 68040 is
not defined the same way as on the 68030.

For the 68040, the manual says:

"A matching entry in the address translation cache (data or instruction)
specified by the function code will be flushed by PTEST. Completion of
PTEST results in the creation of a new address translation cache entry"
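In QEMU terms that maps naturally onto a flush-then-refill sequence,
roughly like this (a sketch with approximate names, not the actual
PTEST helper):

    /* Sketch of 68040 PTEST semantics (approximate names, not real code):
     * flush any matching ATC (TLB) entry, re-walk the translation tables,
     * then install a fresh entry; hence the tlb_set_page() call. */
    tlb_flush_page(cs, addr);         /* "matching entry ... will be flushed" */
    ret = get_physical_address(env, &physical, &prot, addr,
                               access_type, &page_size);
    if (ret == 0) {
        tlb_set_page(cs, addr & TARGET_PAGE_MASK,
                     physical & TARGET_PAGE_MASK, prot,
                     mmu_idx, page_size); /* "creation of a new ... entry" */
    }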

Thanks,
Laurent


