Re: Reducing vdpa migration downtime because of memory pin / maps


From: Eugenio Perez Martin
Subject: Re: Reducing vdpa migration downtime because of memory pin / maps
Date: Tue, 11 Apr 2023 08:28:34 +0200

On Tue, Apr 11, 2023 at 4:26 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Mon, Apr 10, 2023 at 5:05 PM Eugenio Perez Martin
> <eperezma@redhat.com> wrote:
> >
> > On Mon, Apr 10, 2023 at 5:22 AM Jason Wang <jasowang@redhat.com> wrote:
> > >
> > > On Mon, Apr 10, 2023 at 11:17 AM Longpeng (Mike, Cloud Infrastructure
> > > Service Product Dept.) <longpeng2@huawei.com> wrote:
> > > >
> > > >
> > > >
> > > > On 2023/4/10 10:14, Jason Wang wrote:
> > > > > On Wed, Apr 5, 2023 at 7:38 PM Eugenio Perez Martin 
> > > > > <eperezma@redhat.com> wrote:
> > > > >>
> > > > >> Hi!
> > > > >>
> > > > >> As mentioned in the last upstream virtio-networking meeting, one of
> > > > >> the factors that adds more downtime to migration is the handling of
> > > > >> the guest memory (pin, map, etc). At this moment this handling is
> > > > >> bound to the virtio life cycle (DRIVER_OK, RESET). In that sense, the
> > > > >> destination device waits until all the guest memory / state is
> > > > >> migrated to start pinning all the memory.
> > > > >>
> > > > >> The proposal is to bind it to the char device life cycle (open vs
> > > > >> close), so all the guest memory can be pinned for all the guest /
> > > > >> qemu lifecycle.
> > > > >>
> > > > >> This has two main problems:
> > > > >> * At this moment the reset semantics forces the vdpa device to unmap
> > > > >> all the memory. So this change needs a vhost vdpa feature flag.
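
To make the pin / map cost above concrete: QEMU's vhost-vdpa backend
pushes each memory region to the kernel as a VHOST_IOTLB_UPDATE message
written to the char device fd, and the pages are pinned while the kernel
handles that message. A minimal userspace sketch (error handling trimmed,
the helper name is mine, not QEMU code):

#include <stdint.h>
#include <unistd.h>
#include <linux/vhost.h>
#include <linux/vhost_types.h>

/* Sketch: push one IOVA -> userspace VA mapping to a vhost-vdpa char
 * device fd. The kernel pins and maps the pages while it handles this
 * message, which is where most of the downtime cost lives. */
static int vdpa_dma_map(int vdpa_fd, uint64_t iova, uint64_t size,
                        void *uaddr)
{
        struct vhost_msg_v2 msg = {
                .type = VHOST_IOTLB_MSG_V2,
                .iotlb = {
                        .iova  = iova,
                        .size  = size,
                        .uaddr = (uint64_t)(uintptr_t)uaddr,
                        .perm  = VHOST_ACCESS_RW,
                        .type  = VHOST_IOTLB_UPDATE,
                },
        };

        /* Under the proposal this write could be issued right after
         * open(), so pinning is already done before DRIVER_OK. */
        return write(vdpa_fd, &msg, sizeof(msg)) == sizeof(msg) ? 0 : -1;
}
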
> > > > >
> > > > > Is this true? I didn't find any codes to unmap the memory in
> > > > > vhost_vdpa_set_status().
> > > > >
> > > >
> > > > It could depend on the vendor driver, for example, the vdpasim would do
> > > > something like that.
> > > >
> > > > vhost_vdpa_set_status->vdpa_reset->vdpasim_reset->vdpasim_do_reset->vhost_iotlb_reset
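
Roughly, that path boils down to something like this (simplified sketch,
not verbatim vdpa_sim code; the exact fields differ):

static void vdpasim_do_reset_sketch(struct vdpasim *vdpasim)
{
        /* ... stop the virtqueues, clear features and config ... */

        /* Drop every IOVA -> VA translation the device holds, so a
         * virtio reset also throws away all the maps (and the pins). */
        vhost_iotlb_reset(vdpasim->iommu);

        vdpasim->status = 0;
}
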
> > >
> > > This looks like a bug. Or I wonder if any user space depends on this
> > > behaviour; if yes, we really need a new flag then.
> > >
> >
> > My understanding was that we depend on this for cases like qemu
> > crashes. We don't do an unmap(-1ULL) or anything like that to make
> > sure the device is clean when we bind a second qemu to the same
> > device. That's why I think that close() should clean them.
>
> In vhost_vdpa_release() we do:
>
> vhost_vdpa_release()
>     vhost_vdpa_cleanup()
>         for_each_as()
>             vhost_vdpa_remove_as()
>                 vhost_vdpa_iotlb_unmap(0ULL, 0ULL - 1)
>         vhost_vdpa_free_domain()
>
> Anything wrong here?
>

No, I think we were just relying on the "semantics" of different
pre-existing cleanup points.

> Conceptually, the address mapping is not a part of the abstraction for
> a virtio device now. So resetting the memory mapping during virtio
> device reset seems wrong.
>

I agree. So then the only change needed in the kernel would be to stop
doing this cleanup on device reset. I guess we should document that in
ops->reset just in case?
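
Something like the following next to the reset op in include/linux/vdpa.h
is what I mean (sketch wording only, not merged documentation):

/*
 * @reset: Reset the device: stop the datapath and clear the virtio
 *         status / feature state, but do NOT tear down DMA / IOTLB
 *         mappings. Mapping lifetime is bound to the vhost-vdpa char
 *         device (they are cleaned up in release()), not to virtio
 *         reset, so userspace can keep guest memory pinned across it.
 */
int (*reset)(struct vdpa_device *vdev);
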

Thanks!



