


From: Kirti Wankhede
Subject: Re: [Qemu-devel] [RFC PATCH] migration: discard RAMBlocks of type ram_device
Date: Fri, 13 Apr 2018 11:30:09 +0530


On 4/12/2018 9:53 PM, Alex Williamson wrote:
> On Thu, 12 Apr 2018 15:59:24 +0000
> "Zhang, Yulei" <address@hidden> wrote:
> 
>>> -----Original Message-----
>>> From: Alex Williamson [mailto:address@hidden
>>> Sent: Thursday, April 12, 2018 1:55 AM
>>> To: Cédric Le Goater <address@hidden>
>>> Cc: address@hidden; Juan Quintela <address@hidden>; Dr. David
>>> Alan Gilbert <address@hidden>; David Gibson
>>> <address@hidden>; Zhang, Yulei <address@hidden>; Tian,
>>> Kevin <address@hidden>; address@hidden;
>>> address@hidden; address@hidden; Wang, Zhi A
>>> <address@hidden>
>>> Subject: Re: [Qemu-devel] [RFC PATCH] migration: discard RAMBlocks of type
>>> ram_device
>>>
>>> [cc +folks working on vfio-mdev migration]
>>>
>>> On Wed, 11 Apr 2018 19:20:14 +0200
>>> Cédric Le Goater <address@hidden> wrote:
>>>   
>>>> Here is some context for this strange change request.
>>>>
>>>> On the POWER9 processor, the XIVE interrupt controller can control
>>>> interrupt sources using MMIO to trigger events, to EOI or to turn off
>>>> the sources. Priority management and interrupt acknowledgment is also
>>>> controlled by MMIO in the presenter subengine.
>>>>
>>>> These MMIO regions are exposed to guests in QEMU with a set of 'ram
>>>> device' memory mappings, similarly to VFIO, and the VMAs are populated
>>>> dynamically with the appropriate pages using a fault handler.
>>>>
>>>> But, these regions are an issue for migration. We need to discard the
>>>> associated RAMBlocks from the RAM state on the source VM and let the
>>>> destination VM rebuild the memory mappings on the new host in the
>>>> post_load() operation just before resuming the system.
>>>>
>>>> This is the goal of the following proposal. Does it make sense? It
>>>> seems to work well enough to migrate a running guest, but there might
>>>> be a better, more subtle approach.
>>>
>>> Yulei, is this something you've run into with GVT-g migration?  I don't see
>>> how we can read from or write to ram_device regions in a useful way during
>>> migration anyway, so the change initially looks correct to me.
>>> Thanks,
>>>
>>> Alex
>>>   
>>
>> I didn't run into such an issue before. I think the change will be fine
>> if the vendor driver handles the reconstruction well on the target side.
>> And I agree with Dave's suggestion: how about the vendor driver reports
>> a flag for the mapped region to indicate whether it can be registered as
>> a migratable memory block or not.
> 
> "migration" and "memory blocks" are not vfio concepts, you'd need to
> come up with flags that actually conveys the device level property of
> the region that you're trying to indicate.  I don't see why we'd do
> this though, the application of such a flag seems too narrow and it
> tarnishes the concept that a vendor driver provides a region, through
> which *all* device state is saved and restored.  Thanks,
> 

I don't think the vendor driver needs to report a region as migratable or
not explicitly.

I hit this issue when an MMIO region is mmap'ed by VFIO, for example BAR1
of a vGPU; that region is marked as a ram device region:
vfio_region_mmap() -> memory_region_init_ram_device_ptr() -> sets
mr->ram_device = true
The vendor driver specifies which regions are to be mmap'ed, and based on
that QEMU can decide to skip those regions during migration, as in this
RFC. The vendor driver then takes care of restoring its MMIO region state
after migration.

Thanks,
Kirti


