qemu-devel

Re: Reducing vdpa migration downtime because of memory pin / maps


From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
Subject: Re: Reducing vdpa migration downtime because of memory pin / maps
Date: Mon, 10 Apr 2023 11:16:53 +0800
User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Thunderbird/91.7.0



On 2023/4/10 10:14, Jason Wang wrote:
On Wed, Apr 5, 2023 at 7:38 PM Eugenio Perez Martin <eperezma@redhat.com> wrote:

Hi!

As mentioned in the last upstream virtio-networking meeting, one of
the factors that adds more downtime to migration is the handling of
the guest memory (pin, map, etc.). At the moment this handling is
bound to the virtio life cycle (DRIVER_OK, RESET). As a consequence,
the destination device waits until all the guest memory / state has
been migrated before it starts pinning all the memory.

The proposal is to bind it to the char device life cycle (open vs
close) instead, so all the guest memory can be pinned for the whole
guest / QEMU lifecycle.

This has two main problems:
* At the moment the reset semantics force the vdpa device to unmap
all the memory, so this change needs a vhost-vdpa feature flag.

Is this true? I didn't find any code that unmaps the memory in
vhost_vdpa_set_status().


It could depend on the vendor driver; for example, vdpasim does something like that:

vhost_vdpa_set_status->vdpa_reset->vdpasim_reset->vdpasim_do_reset->vhost_iotlb_reset

Thanks
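To make the implication of that call chain concrete, here is a
simplified kernel-style sketch (not the actual vdpa_sim code): the
vendor reset ends up in vhost_iotlb_reset(), which drops every mapping,
so memory pinned at char-device open time would be lost on every virtio
reset unless a new, here hypothetical, backend feature bit lets the
driver keep the IOTLB across reset.

#include <linux/vhost_iotlb.h>

struct sim_dev {                       /* stand-in for struct vdpasim */
    struct vhost_iotlb *iommu;
    bool keep_maps_across_reset;       /* hypothetical; would be negotiated
                                        * via a new VHOST_BACKEND_F_* bit */
};

static void sim_do_reset(struct sim_dev *dev)
{
    /* ... reset vqs, features, device status ... */
    if (!dev->keep_maps_across_reset)
        vhost_iotlb_reset(dev->iommu); /* unmaps (and unpins) everything */
}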

* This may increase the initialization time. Maybe we can delay it if
QEMU is not the destination of a live migration. Anyway, I think this
should be done as an optimization on top.

Any ideas or comments in this regard?

Thanks!




