Re: [Qemu-devel] [RFC/PATCH] i386: Atomically update PTEs with mttcg


From: Richard Henderson
Subject: Re: [Qemu-devel] [RFC/PATCH] i386: Atomically update PTEs with mttcg
Date: Thu, 29 Nov 2018 16:12:12 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.3.0

On 11/29/18 2:54 PM, Benjamin Herrenschmidt wrote:
>>     pdpe_addr = (pml4e & PG_ADDRESS_MASK) +
>>                 (((gphys >> 30) & 0x1ff) << 3);
>>     pdpe = x86_ldq_phys(cs, pdpe_addr);
>>     do {
>>         if (!(pdpe & PG_PRESENT_MASK)) {
>>             goto do_fault;
>>         }
>>         if (pdpe & rsvd_mask) {
>>             goto do_fault_rsvd;
>>         }
>>         if (pdpe & PG_ACCESSED_MASK) {
>>             break;
>>         }
>>     } while (!update_entry(cs, pdpe_addr, &pdpe, PG_ACCESSED_MASK));
>>     ptep &= pdpe ^ PG_NX_MASK;
>>
>> ....
> 
> Hrm.. I see. So we would not re-do the full walk. Not sure it's really worth
> it, though; how often do we expect to hit the failing case?

It is probably rare-ish, I admit.

I suppose we could also signal "success" from update_entry when the cmpxchg
fails but the reloaded value differs only in PG_ACCESSED_MASK | PG_DIRTY_MASK,
as long as 'bits' itself was set.
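
Roughly, something like this sketch; the loop contract is the one quoted
above, and x86_cmpxchgq_phys is a made-up placeholder for whatever atomic
cmpxchg on guest physical memory the patch actually provides:

/* Sketch only: try to set 'bits' in the PTE at 'addr' with a cmpxchg.
 * Returns true on success.  A failing cmpxchg is still treated as
 * success when the value found in memory differs from the expected one
 * only in PG_ACCESSED_MASK | PG_DIRTY_MASK and already has the
 * requested bits set, so the caller does not need to redo its checks.
 * x86_cmpxchgq_phys() is hypothetical, standing in for the real
 * atomic physical-memory primitive.
 */
static bool update_entry(CPUState *cs, hwaddr addr,
                         uint64_t *entry, uint64_t bits)
{
    uint64_t old = *entry;
    uint64_t new = old | bits;
    uint64_t cur;

    cur = x86_cmpxchgq_phys(cs, addr, old, new);
    if (cur == old) {
        *entry = new;
        return true;
    }

    /* Raced with another walker: accept the reloaded value if it only
     * gained A/D bits and already contains the bits we wanted. */
    *entry = cur;
    if ((cur & bits) == bits &&
        ((cur ^ old) & ~(uint64_t)(PG_ACCESSED_MASK | PG_DIRTY_MASK)) == 0) {
        return true;
    }
    return false;   /* caller re-runs its checks on the reloaded value */
}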

>> Although I think it would be really great if we could figure out something
>> that allows us to promote this whole load/cmpxchg loop into a primitive
>> that avoids multiple translations of the address.
>>
>> No, I don't know what that primitive would look like.  :-)
> 
> You mean translating once for the load and once for the update? Do you
> expect that translation to have such a significant cost, considering
> that all it needs should be in L1 at that point?

I guess if the update is rare-ish, the re-translating isn't a big deal.  And I
suppose we'd have to retain the RCU lock to hold on to the translation, which
probably isn't the best idea.

Nevermind on this.


r~


