qemu-devel

From: Vitaly Kuznetsov
Subject: Re: [PATCH RFC] memory: pause all vCPUs for the duration of memory transactions
Date: Tue, 03 Nov 2020 14:07:09 +0100

Peter Xu <peterx@redhat.com> writes:

> Vitaly,
>
> On Mon, Oct 26, 2020 at 09:49:16AM +0100, Vitaly Kuznetsov wrote:
>> Currently, KVM doesn't provide an API to make atomic updates to memmap when
>> the change touches more than one memory slot, e.g. in case we'd like to
>> punch a hole in an existing slot.
>> 
>> Reports are that multi-CPU Q35 VMs booted with OVMF sometimes print something
>> like
>> 
>> !!!! X64 Exception Type - 0E(#PF - Page-Fault)  CPU Apic ID - 00000003 !!!!
>> ExceptionData - 0000000000000010  I:1 R:0 U:0 W:0 P:0 PK:0 SS:0 SGX:0
>> RIP  - 000000007E35FAB6, CS  - 0000000000000038, RFLAGS - 0000000000010006
>> RAX  - 0000000000000000, RCX - 000000007E3598F2, RDX - 00000000078BFBFF
>> ...
>> 
>> The problem seems to be that TSEG manipulations on one vCPU are not atomic
>> from the other vCPUs' point of view. In particular, here's the strace:
>> 
>> Initial creation of the 'problematic' slot:
>> 
>> 10085 ioctl(13, KVM_SET_USER_MEMORY_REGION, {slot=6, flags=0, guest_phys_addr=0x100000,
>>    memory_size=2146435072, userspace_addr=0x7fb89bf00000}) = 0
>> 
>> ... and then the update (caused by e.g. mch_update_smram()) later:
>> 
>> 10090 ioctl(13, KVM_SET_USER_MEMORY_REGION, {slot=6, flags=0, guest_phys_addr=0x100000,
>>    memory_size=0, userspace_addr=0x7fb89bf00000}) = 0
>> 10090 ioctl(13, KVM_SET_USER_MEMORY_REGION, {slot=6, flags=0, guest_phys_addr=0x100000,
>>    memory_size=2129657856, userspace_addr=0x7fb89bf00000}) = 0
>> 
>> If KVM has to handle any event on a different vCPU in between these
>> two calls, the #PF gets triggered.
>
> A pure question: Why a #PF?  Is it injected into the guest?
>

Yes, we see a #PF injected in the guest during OVMF boot.

> My understanding (which could be wrong) is that the whole thing should start
> with a vcpu page fault onto the removed range; then, when kvm finds that the
> memory accessed is not within a valid memslot (since we're re-adding it but
> haven't yet), it'll become a user exit back to QEMU, assuming it's an MMIO
> access.  Or am I wrong somewhere?

If it is a normal access from the guest, yes, but AFAIR here the
guest's CR3 is pointing to non-existent memory, and when KVM detects that
it injects the #PF by itself, without a round trip through userspace.
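
For illustration, a minimal user-space sketch of the non-atomic update
pattern from the strace above (a hypothetical standalone program, not the
QEMU code path): the slot is first deleted and then re-created with the new
size, leaving a window during which the guest range is not backed by any
memslot. Slot number, guest physical address and sizes are taken from the
strace; everything else is made up for the sketch.

#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);
    int vm  = ioctl(kvm, KVM_CREATE_VM, 0);
    if (kvm < 0 || vm < 0) {
        perror("kvm");
        return 1;
    }

    /* Sizes and guest_phys_addr as seen in the strace above. */
    size_t old_size = 2146435072;
    size_t new_size = 2129657856;
    void *mem = mmap(NULL, old_size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct kvm_userspace_memory_region r = {
        .slot            = 6,
        .guest_phys_addr = 0x100000,
        .memory_size     = old_size,
        .userspace_addr  = (unsigned long)mem,
    };
    ioctl(vm, KVM_SET_USER_MEMORY_REGION, &r);  /* initial creation */

    r.memory_size = 0;
    ioctl(vm, KVM_SET_USER_MEMORY_REGION, &r);  /* slot deleted ... */

    /* ... window: a vCPU access to [0x100000, 0x100000 + new_size)
     * here finds no memslot ... */

    r.memory_size = new_size;
    ioctl(vm, KVM_SET_USER_MEMORY_REGION, &r);  /* ... slot re-created */

    return 0;
}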

-- 
Vitaly