
Re: Reducing vdpa migration downtime because of memory pin / maps


From: Eugenio Perez Martin
Subject: Re: Reducing vdpa migration downtime because of memory pin / maps
Date: Tue, 11 Apr 2023 14:33:46 +0200

On Wed, Apr 5, 2023 at 1:37 PM Eugenio Perez Martin <eperezma@redhat.com> wrote:
>
> Hi!
>
> As mentioned in the last upstream virtio-networking meeting, one of
> the factors that add the most downtime to migration is the handling
> of guest memory (pin, map, etc.). At the moment this handling is
> bound to the virtio life cycle (DRIVER_OK, RESET), so the destination
> device waits until all the guest memory / state has been migrated
> before it starts pinning all the memory.
>
> The proposal is to bind it to the char device life cycle (open vs
> close) instead, so all the guest memory can stay pinned for the whole
> guest / qemu lifecycle.
>
> This has two main problems:
> * At the moment the reset semantics force the vdpa device to unmap
> all the memory, so this change needs a vhost-vdpa feature flag.
> * This may increase the initialization time. Maybe we can delay the
> pinning if qemu is not the destination of a live migration. Anyway, I
> think this should be done as an optimization on top.
>
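
On the feature flag part, the negotiation could reuse the existing
VHOST_GET_BACKEND_FEATURES / VHOST_SET_BACKEND_FEATURES ioctls. This is
only a rough sketch of the idea; the VHOST_BACKEND_F_MAP_PERSIST name
and bit number below are made up to illustrate a "keep maps across
reset" capability:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Hypothetical bit number: "device keeps maps/pins across virtio reset" */
#define VHOST_BACKEND_F_MAP_PERSIST 0x8

static int vhost_vdpa_ack_map_persist(int vdpa_fd)
{
    uint64_t features;

    /* Existing UAPI: read the backend feature bits the device offers */
    if (ioctl(vdpa_fd, VHOST_GET_BACKEND_FEATURES, &features) < 0)
        return -1;

    if (!(features & (1ULL << VHOST_BACKEND_F_MAP_PERSIST)))
        return 0; /* old kernel: reset still unmaps all the memory */

    /*
     * Ack only the bit this sketch cares about; real code would also
     * keep the other backend feature bits it already negotiates.
     */
    features &= (1ULL << VHOST_BACKEND_F_MAP_PERSIST);
    if (ioctl(vdpa_fd, VHOST_SET_BACKEND_FEATURES, &features) < 0)
        return -1;

    return 1; /* safe to pin / map at char device open time */
}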

Expanding on the proposal above, we could reduce the pinning even more
now that the vring supports VA [1] with the emulated CVQ.

Something like:
- Add a VHOST_VRING_GROUP_CAN_USE_VA ioctl to check whether a given VQ
group has this capability. Passthrough devices with an emulated CVQ
would return false for the dataplane groups and true for the control
vq group.
- If that is true, qemu does not need to map and translate addresses
for the CVQ but can provide VAs for the buffers directly. This avoids
pinning, translations, etc. in this case. A rough sketch follows below.
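
To make that concrete, here is a sketch of how qemu could use such an
ioctl. VHOST_VRING_GROUP_CAN_USE_VA does not exist yet, so the request
number and the argument layout below are just assumptions:

#include <stdbool.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/*
 * Hypothetical UAPI: "group index in, 0/1 answer out". Only the intent
 * matters here, not the exact encoding.
 */
#ifndef VHOST_VRING_GROUP_CAN_USE_VA
#define VHOST_VRING_GROUP_CAN_USE_VA _IOWR(VHOST_VIRTIO, 0x80, uint32_t)
#endif

static bool vq_group_can_use_va(int vdpa_fd, uint32_t group)
{
    uint32_t arg = group;

    if (ioctl(vdpa_fd, VHOST_VRING_GROUP_CAN_USE_VA, &arg) < 0)
        return false;   /* older kernel: keep the map + pin path */

    return arg != 0;    /* kernel writes back 0 or 1 for this group */
}

With a passthrough net device plus an emulated CVQ, qemu would get
false for the dataplane group(s) and keep the current map / pin path
there, and true for the CVQ group, where it can hand the shadow CVQ
buffers to the kernel by VA and skip pinning them entirely.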

Thanks!

[1] https://lore.kernel.org/virtualization/20230404131326.44403-2-sgarzare@redhat.com/



