qemu-riscv

Re: [PATCH] target/riscv: reduce overhead of MSTATUS_SUM change


From: Wu, Fei
Subject: Re: [PATCH] target/riscv: reduce overhead of MSTATUS_SUM change
Date: Wed, 22 Mar 2023 11:36:01 +0800
User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Thunderbird/102.9.0

On 3/22/2023 11:31 AM, Richard Henderson wrote:
> On 3/21/23 19:47, Wu, Fei wrote:
>>>> You should be making use of different softmmu indexes, similar to how
>>>> ARM uses a separate index for PAN (privileged access never) mode.  If
>>>> I read the manual properly, PAN == !SUM.
>>>>
>>>> When you do this, you need no additional flushing.
>>>
>>> Hi Fei,
>>>
>>> Let's follow Richard's advice.
>>> Yes, I'm thinking about how to do it, and thank Richard for the advice.
>>
>> My question is:
>> * If we ensure this separate index (S+SUM) has no overlapping TLB
>> entries with the S-mode index (ignoring M-mode for now), then while
>> SUM=1 we would have to look in both the (S+SUM) and S indexes for
>> kernel address translation, which is undesirable.
> 
> This is an incorrect assumption.  S+SUM may very well have overlapping
> tlb entries with S.
> With SUM=1, you *only* look in S+SUM index; with SUM=0, you *only* look
> in S index.
> 
> The only difference is a check in get_physical_address is no longer
> against MSTATUS_SUM directly, but against the mmu_index.
> 
>> * If all the TLB operations go against (S+SUM) while SUM=1, then
>> (S+SUM) could contain duplicates of kernel-address TLB entries already
>> in the S index; the duplication means extra TLB lookups and fills.
> 
> Yes, if the same address is probed via S and S+SUM, there is a
> duplicated lookup.  But this is harmless.
> 
> 
>> Also, if we want to flush the TLB entry for a specific address addr0,
>> we would have to flush both indexes.
> 
> Yes, this is also true.  But so far target/riscv makes no use of
> per-mmuidx flushing.  At the moment you're *only* using tlb_flush(cpu),
> which flushes every mmuidx.  Nor are you making use of per-page flushing.
> 
> So, really, no change required at all there.
> 
Got it, let me try this method.
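Roughly, I imagine the SUM check in get_physical_address() moving from MSTATUS_SUM to a check on the mmu index, along these lines. This is only a sketch; the index names and numbering below are illustrative, not the actual QEMU target/riscv definitions:

```c
#include <stdbool.h>

/* Hypothetical softmmu index layout: supervisor mode gets two indexes,
 * one for SUM=0 and one for SUM=1, so a SUM toggle needs no TLB flush.
 * These names and values are illustrative only. */
enum {
    MMUIdx_U     = 0,  /* user mode */
    MMUIdx_S     = 1,  /* supervisor mode, SUM=0 */
    MMUIdx_S_SUM = 2,  /* supervisor mode, SUM=1 */
    MMUIdx_M     = 3,  /* machine mode */
};

/* Select the mmu index from the current privilege level and MSTATUS.SUM.
 * This would run when tb flags are computed, not on every access. */
static int riscv_mmu_index(int priv, bool sum)
{
    if (priv == 1) {  /* PRV_S */
        return sum ? MMUIdx_S_SUM : MMUIdx_S;
    }
    return priv == 0 ? MMUIdx_U : MMUIdx_M;
}

/* get_physical_address() then derives the SUM permission from the
 * mmu index instead of reading MSTATUS_SUM directly. */
static bool mmuidx_sum(int mmu_idx)
{
    return mmu_idx == MMUIdx_S_SUM;
}
```

With this split, the S and (S+SUM) indexes can freely hold overlapping entries, as Richard notes, since each lookup consults exactly one index.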

Thanks,
Fei.

> 
> r~



