Re: Reducing vdpa migration downtime because of memory pin / maps


From: Jason Wang
Subject: Re: Reducing vdpa migration downtime because of memory pin / maps
Date: Mon, 10 Apr 2023 11:21:46 +0800

On Mon, Apr 10, 2023 at 11:17 AM Longpeng (Mike, Cloud Infrastructure
Service Product Dept.) <longpeng2@huawei.com> wrote:
>
>
>
> On 2023/4/10 10:14, Jason Wang wrote:
> > On Wed, Apr 5, 2023 at 7:38 PM Eugenio Perez Martin <eperezma@redhat.com> wrote:
> >>
> >> Hi!
> >>
> >> As mentioned in the last upstream virtio-networking meeting, one of
> >> the factors that add more downtime to migration is the handling of
> >> the guest memory (pin, map, etc.). At the moment this handling is
> >> bound to the virtio lifecycle (DRIVER_OK, RESET). As a result, the
> >> destination device waits until all the guest memory / state has been
> >> migrated before it starts pinning all the memory.
> >>
> >> The proposal is to bind it to the char device lifecycle (open vs
> >> close) instead, so the guest memory can stay pinned for the whole
> >> guest / QEMU lifecycle.
> >>
> >> This has two main problems:
> >> * At this moment the reset semantics force the vdpa device to unmap
> >> all the memory, so this change needs a vhost-vdpa feature flag (see
> >> the sketch below).
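
A minimal sketch of the proposed open/close binding, assuming a
vhost-vdpa-like driver: vhost_vdpa_open() / vhost_vdpa_release() are the
real char device entry points in drivers/vhost/vdpa.c, but the struct
layout is simplified and the teardown helper is a hypothetical stand-in
for the driver's unpin/unmap path. Mappings installed by userspace would
live from open() to release() and survive any virtio RESET in between.

    static int vhost_vdpa_open(struct inode *inode, struct file *filep)
    {
            struct vhost_vdpa *v = container_of(inode->i_cdev,
                                                struct vhost_vdpa, cdev);

            /* One IOTLB per open(); later VHOST_IOTLB_UPDATE messages
             * pin and map guest memory into it. */
            v->iotlb = vhost_iotlb_alloc(0, 0);
            if (!v->iotlb)
                    return -ENOMEM;

            filep->private_data = v;
            return 0;
    }

    static int vhost_vdpa_release(struct inode *inode, struct file *filep)
    {
            struct vhost_vdpa *v = filep->private_data;

            /* The only place mappings are torn down: close(), not
             * virtio reset, ends the pin/map lifetime. */
            vhost_vdpa_iotlb_free(v);      /* hypothetical helper */
            return 0;
    }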
> >
> > Is this true? I didn't find any code that unmaps the memory in
> > vhost_vdpa_set_status().
> >
>
> It could depend on the vendor driver; vdpasim, for example, does
> something like that:
>
> vhost_vdpa_set_status->vdpa_reset->vdpasim_reset->vdpasim_do_reset->vhost_iotlb_reset
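
Condensed from that chain (paraphrasing drivers/vdpa/vdpa_sim/vdpa_sim.c;
the exact field and function layout varies by kernel version), a status
write of 0 bottoms out in vhost_iotlb_reset(), which drops every mapping
the simulator holds:

    static void vdpasim_do_reset(struct vdpasim *vdpasim)
    {
            /* ... virtqueue and config state reset elided ... */
            vhost_iotlb_reset(vdpasim->iommu);      /* all maps dropped */
    }

    static int vdpasim_reset(struct vdpa_device *vdpa)
    {
            struct vdpasim *vdpasim = vdpa_to_sim(vdpa);

            vdpasim_do_reset(vdpasim);
            return 0;
    }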

This looks like a bug. Or I wonder if any userspace depends on this
behaviour; if so, we really need a new flag (a possible shape is
sketched below).
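
A hedged sketch of the kind of flag this suggests; the name
VHOST_BACKEND_F_IOTLB_PERSIST, its bit value, and the backend_features
field are illustrative, not a quote of any final uapi. A backend
offering the bit would keep its IOTLB across virtio reset once userspace
acks it, so the vhost_iotlb_reset() call from the sketch above moves out
of vdpasim_do_reset() and becomes conditional:

    #define VHOST_BACKEND_F_IOTLB_PERSIST 0x7     /* illustrative value */

    static int vdpasim_reset(struct vdpa_device *vdpa)
    {
            struct vdpasim *vdpasim = vdpa_to_sim(vdpa);

            vdpasim_do_reset(vdpasim);   /* no longer touches the IOTLB */

            /* Keep the old unmap-on-reset behaviour only for userspace
             * that did not ack the new flag. */
            if (!(vdpasim->backend_features &
                  BIT_ULL(VHOST_BACKEND_F_IOTLB_PERSIST)))
                    vhost_iotlb_reset(vdpasim->iommu);
            return 0;
    }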

Thanks

>
> > Thanks
> >
> >> * This may increase the initialization time. Maybe we can delay it
> >> if QEMU is not the destination of a live migration. Anyway, I think
> >> this should be done as an optimization on top.
> >>
> >> Any ideas or comments in this regard?
> >>
> >> Thanks!
> >>



