Re: Reducing vdpa migration downtime because of memory pin / maps


From: Jason Wang
Subject: Re: Reducing vdpa migration downtime because of memory pin / maps
Date: Wed, 12 Apr 2023 14:18:54 +0800

On Wed, Apr 12, 2023 at 1:56 PM Jason Wang <jasowang@redhat.com> wrote:
>
> On Tue, Apr 11, 2023 at 8:34 PM Eugenio Perez Martin
> <eperezma@redhat.com> wrote:
> >
> > On Wed, Apr 5, 2023 at 1:37 PM Eugenio Perez Martin <eperezma@redhat.com> wrote:
> > >
> > > Hi!
> > >
> > > As mentioned in the last upstream virtio-networking meeting, one of
> > > the factors that add the most downtime to migration is the handling
> > > of guest memory (pin, map, etc). At this moment this handling is
> > > bound to the virtio life cycle (DRIVER_OK, RESET). In that sense,
> > > the destination device waits until all the guest memory / state has
> > > been migrated before it starts pinning all the memory.
> > >
> > > The proposal is to bind it to the char device life cycle (open vs
> > > close) instead, so all the guest memory can stay pinned for the
> > > whole guest / qemu lifecycle.
> > >
> > > This has two main problems:
> > > * At this moment the reset semantics force the vdpa device to unmap
> > > all the memory, so this change needs a vhost vdpa feature flag (see
> > > the sketch below).
> > > * This may increase the initialization time. Maybe we can delay it
> > > if qemu is not the destination of a live migration. Anyway, I think
> > > this should be done as an optimization on top.
> > >
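
A minimal sketch of how such a feature flag could be negotiated from
userspace, assuming a new backend feature bit. The name
VHOST_BACKEND_F_IOTLB_PERSIST and its bit number are invented here for
illustration; only VHOST_GET_BACKEND_FEATURES and
VHOST_SET_BACKEND_FEATURES are existing ioctls:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Hypothetical bit: "IOTLB mappings survive virtio reset". */
#ifndef VHOST_BACKEND_F_IOTLB_PERSIST
#define VHOST_BACKEND_F_IOTLB_PERSIST 0x7
#endif

static int negotiate_persistent_iotlb(int vdpa_fd)
{
    uint64_t features;

    if (ioctl(vdpa_fd, VHOST_GET_BACKEND_FEATURES, &features) < 0)
        return -1;

    if (!(features & (1ULL << VHOST_BACKEND_F_IOTLB_PERSIST)))
        return 0; /* device still unmaps on reset: keep old behavior */

    /* Ack everything offered, including the persist bit. */
    if (ioctl(vdpa_fd, VHOST_SET_BACKEND_FEATURES, &features) < 0)
        return -1;

    return 1; /* mappings may now outlive DRIVER_OK / RESET */
}
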
> >
> > Expanding on this, we could reduce the pinning even more now that
> > the vring supports VA [1] with the emulated CVQ.
>
> Note that VA for hardware means the device needs to support page
> faults through either PRI or a vendor-specific interface.
>
> >
> > Something like:
> > - Add a VHOST_VRING_GROUP_CAN_USE_VA ioctl to check whether a given
> > VQ group has that capability. Passthrough devices with an emulated
> > CVQ would return false for the dataplane and true for the control vq
> > group.
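
For illustration, the proposed query could look like this from qemu's
side. This ioctl does not exist in the vhost UAPI, so the request
number and argument struct below are invented; 0xAF is the existing
VHOST_VIRTIO ioctl magic:

#include <stdint.h>
#include <sys/ioctl.h>

struct vhost_vring_group_state {  /* hypothetical argument */
    uint32_t group;               /* in: VQ group index */
    uint32_t can_use_va;          /* out: 0 or 1 */
};

#define VHOST_VRING_GROUP_CAN_USE_VA \
    _IOWR(0xAF, 0x80, struct vhost_vring_group_state) /* made-up nr */

static int group_can_use_va(int vdpa_fd, uint32_t group)
{
    struct vhost_vring_group_state s = { .group = group };

    if (ioctl(vdpa_fd, VHOST_VRING_GROUP_CAN_USE_VA, &s) < 0)
        return -1;
    return s.can_use_va; /* false for dataplane, true for CVQ group */
}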

We don't actually need this, since the pinning is not visible to
userspace; userspace can only see the IOTLB abstraction.

We can introduce a group->use_va flag; then, when we attach an AS to a
group that can use VA, we can avoid the pinning. A sketch follows.
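
Self-contained model of that idea (every name here is illustrative;
none of it is actual vhost-vdpa code):

#include <stdbool.h>

struct iotlb { int dummy; };   /* stand-in for the IOTLB abstraction */

struct vq_group {
    bool use_va;               /* parent driver: group consumes VAs */
};

static int pin_and_map(struct iotlb *tlb)
{
    /* placeholder for today's path: GUP + DMA map every IOTLB entry */
    (void)tlb;
    return 0;
}

/* Attaching an AS to a group pins only when the group cannot work on
 * bare VAs. */
static int attach_as(struct vq_group *g, struct iotlb *tlb)
{
    if (g->use_va)
        return 0;              /* no pinning needed */
    return pin_and_map(tlb);
}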

Thanks

> > - If that is true, qemu does not need to map and translate addresses
> > for CVQ; it can provide the buffers' VAs directly. This avoids
> > pinning, translations, etc. in this case (see the sketch below).
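
Sketch of the two descriptor-filling paths qemu would choose between
(illustrative only, not QEMU code; iova_translate() stands in for the
existing pin + translate machinery):

#include <stdint.h>

static uint64_t iova_translate(void *hva)
{
    /* placeholder for today's path: pin the page, return its IOVA */
    return (uint64_t)(uintptr_t)hva;
}

static uint64_t cvq_buf_addr(void *buf, int group_uses_va)
{
    if (group_uses_va)
        return (uint64_t)(uintptr_t)buf; /* VA directly: no pin, no map */
    return iova_translate(buf);          /* pin + translate as today */
}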
>
> For CVQ, yes, but then we only avoid the pinning for CVQ, not for the
> other virtqueues.
>
> Thanks
>
> >
> > Thanks!
> >
> > [1] 
> > https://lore.kernel.org/virtualization/20230404131326.44403-2-sgarzare@redhat.com/
> >
