Re: Reducing vdpa migration downtime because of memory pin / maps


From: Eugenio Perez Martin
Subject: Re: Reducing vdpa migration downtime because of memory pin / maps
Date: Mon, 10 Apr 2023 11:04:20 +0200

On Mon, Apr 10, 2023 at 5:22 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Mon, Apr 10, 2023 at 11:17 AM Longpeng (Mike, Cloud Infrastructure
> Service Product Dept.) <longpeng2@huawei.com> wrote:
> >
> >
> >
> > On 2023/4/10 10:14, Jason Wang wrote:
> > > On Wed, Apr 5, 2023 at 7:38 PM Eugenio Perez Martin <eperezma@redhat.com> 
> > > wrote:
> > >>
> > >> Hi!
> > >>
> > >> As mentioned in the last upstream virtio-networking meeting, one of
> > >> the factors that adds more downtime to migration is the handling of
> > >> the guest memory (pin, map, etc.). At this moment this handling is
> > >> bound to the virtio life cycle (DRIVER_OK, RESET). In that sense, the
> > >> destination device waits until all the guest memory / state has been
> > >> migrated before it starts pinning all the memory.
> > >>
> > >> The proposal is to bind it to the char device life cycle (open vs
> > >> close), so all the guest memory can stay pinned for the whole guest /
> > >> qemu lifecycle.
> > >>
> > >> This has two main problems:
> > >> * At this moment the reset semantics force the vdpa device to unmap
> > >> all the memory, so this change needs a vhost vdpa feature flag.
> > >
> > > Is this true? I didn't find any code that unmaps the memory in
> > > vhost_vdpa_set_status().
> > >
> >
> > It could depend on the vendor driver; for example, vdpasim does
> > something like that:
> >
> > vhost_vdpa_set_status->vdpa_reset->vdpasim_reset->vdpasim_do_reset->vhost_iotlb_reset
>
> This looks like a bug. Or I wonder if any user space depends on this
> behaviour; if it does, we really need a new flag then.
>

My understanding was that we depend on this for cases like qemu
crashes: we don't do an unmap(-1ULL) or anything like that to make
sure the device is clean when we bind a second qemu to the same
device. That's why I think that close() should clean the mappings,
or maybe even open() should.
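
For reference, a minimal sketch of what such an explicit "unmap
everything" could look like from userspace through the vhost-vdpa
iotlb interface. This is illustrative only: no batching, no error
handling, it assumes VHOST_BACKEND_F_IOTLB_MSG_V2 was negotiated, and
a real implementation would more likely invalidate per mapped region:

    #include <string.h>
    #include <unistd.h>
    #include <linux/vhost.h>
    #include <linux/vhost_types.h>

    /* Ask the device to drop every iotlb mapping: iova 0, size -1ULL. */
    static void unmap_all(int vdpa_fd)
    {
        struct vhost_msg_v2 msg;

        memset(&msg, 0, sizeof(msg));
        msg.type = VHOST_IOTLB_MSG_V2;
        msg.iotlb.iova = 0;
        msg.iotlb.size = -1ULL;                /* the unmap(-1ULL) above */
        msg.iotlb.type = VHOST_IOTLB_INVALIDATE;

        write(vdpa_fd, &msg, sizeof(msg));
    }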

The only other option I see is to remove the whole vhost-vdpa device
every time, or am I missing something?
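
And if the conclusion is that we need the new flag Jason mentions,
userspace could probe it through the existing backend features path
before relying on maps surviving reset. A rough sketch; the bit name
and number below are purely hypothetical placeholders, only
VHOST_GET_BACKEND_FEATURES itself is an existing ioctl:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    /* Hypothetical bit meaning "iotlb mappings survive device reset and
     * are only dropped on close()". Placeholder for illustration.
     */
    #define EXAMPLE_BACKEND_F_IOTLB_PERSIST 63

    static int maps_survive_reset(int vdpa_fd)
    {
        uint64_t features = 0;

        if (ioctl(vdpa_fd, VHOST_GET_BACKEND_FEATURES, &features) < 0)
            return 0;               /* assume today's behaviour */

        return !!(features & (1ULL << EXAMPLE_BACKEND_F_IOTLB_PERSIST));
    }

(plus the corresponding VHOST_SET_BACKEND_FEATURES ack, as with the
other backend feature bits).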

Thanks!



