qemu-devel

From: Jason Wang
Subject: Re: [PATCH v8 00/12] NIC vhost-vdpa state restore via Shadow CVQ
Date: Fri, 19 Aug 2022 12:35:51 +0800

On Thu, Aug 11, 2022 at 2:57 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Tue, Aug 9, 2022 at 7:43 PM Eugenio Pérez <eperezma@redhat.com> wrote:
> >
> > CVQ of net vhost-vdpa devices can be intercepted since the addition of
> > x-svq. The virtio-net device model is updated accordingly. Migration was
> > blocked because, although the state could be migrated between VMMs, it was
> > not possible to restore it on the destination NIC.
> >
> > This series adds support for SVQ to inject external messages without the
> > guest's knowledge, so that all guest-visible state is restored before the
> > guest is resumed. This is done using standard CVQ messages, so the
> > vhost-vdpa device does not need to learn how to restore it: as long as the
> > device offers the feature, it knows how to handle the command.
> >
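[For illustration, a minimal sketch of the mechanism described above, not code
from the series: the restore is just a standard virtio-net control command,
here VIRTIO_NET_CTRL_MAC_ADDR_SET, composed by QEMU and pushed through the
shadow CVQ before the guest runs. shadow_cvq_send() below is a hypothetical
stand-in for the series' actual path (vhost_vdpa_net_cvq_add() invoked from
the load/start callback).]

/*
 * Minimal sketch, not the series' code: shows the shape of a standard
 * virtio-net CVQ command (set MAC address) that QEMU could inject through
 * the shadow CVQ before the guest resumes.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define VIRTIO_NET_CTRL_MAC           1
#define VIRTIO_NET_CTRL_MAC_ADDR_SET  1
#define VIRTIO_NET_OK                 0

/* Control command header as defined by the virtio-net spec. */
struct virtio_net_ctrl_hdr {
    uint8_t class;
    uint8_t cmd;
} __attribute__((packed));

/*
 * Hypothetical transport: submit an out buffer on the shadow CVQ and read a
 * one-byte ack from the device.  Stubbed so the sketch is self-contained; a
 * real implementation would add the buffers to the SVQ and kick the device.
 */
static int shadow_cvq_send(const void *out, size_t out_len, uint8_t *ack)
{
    printf("would send %zu-byte CVQ command to the device\n", out_len);
    *ack = VIRTIO_NET_OK;   /* pretend the device accepted the command */
    return 0;
}

/* Restore the guest-visible MAC address on the destination device. */
static int restore_mac(const uint8_t mac[6])
{
    struct virtio_net_ctrl_hdr hdr = {
        .class = VIRTIO_NET_CTRL_MAC,
        .cmd   = VIRTIO_NET_CTRL_MAC_ADDR_SET,
    };
    uint8_t out[sizeof(hdr) + 6];
    uint8_t ack = 0xff;

    memcpy(out, &hdr, sizeof(hdr));
    memcpy(out + sizeof(hdr), mac, 6);

    if (shadow_cvq_send(out, sizeof(out), &ack) < 0 || ack != VIRTIO_NET_OK) {
        return -1;
    }
    return 0;
}

int main(void)
{
    const uint8_t mac[6] = { 0x52, 0x54, 0x00, 0x12, 0x34, 0x56 };
    return restore_mac(mac) ? 1 : 0;
}

Because the injected command is a standard CVQ message, any device that
negotiates the corresponding feature already knows how to apply it; no
vendor-specific restore path is needed.
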
> > This series needs fix [1] to be applied to achieve full live
> > migration.
> >
> > Thanks!
> >
> > [1] https://lists.nongnu.org/archive/html/qemu-devel/2022-08/msg00325.html
> >
> > v8:
> > - Rename NetClientInfo load to start, so it is symmetrical with stop()
> > - Delete the copy of the device's in buffer at vhost_vdpa_net_load
> >
> > v7:
> > - Remove accidental double free.
> >
> > v6:
> > - Move map and unmap of the buffers to the start and stop of the device.
> >   This implies more callbacks on NetClientInfo, but simplifies the SVQ CVQ
> >   code.
> > - Do not assume that the in buffer is sizeof(virtio_net_ctrl_ack) in
> >   vhost_vdpa_net_cvq_add
> > - Reduce the number of changes from previous versions
> > - Delete unused memory barrier
> >
> > v5:
> > - Rename s/start/load/
> > - Use an independent NetClientInfo to add the load callback only on CVQ.
> > - Accept out sg instead of dev_buffers[] at vhost_vdpa_net_cvq_map_elem
> > - Use only the out size instead of the iovec dev_buffers to know whether
> >   the descriptor is effectively available, allowing the artificial !NULL
> >   VirtQueueElement on the vhost_svq_add call to be deleted.
> >
> > v4:
> > - Actually use NetClientInfo callback.
> >
> > v3:
> > - Route vhost-vdpa start code through NetClientInfo callback.
> > - Delete extra vhost_net_stop_one() call.
> >
> > v2:
> > - Fix SIGSEGV dereferencing SVQ when not in svq mode
> >
> > v1 from RFC:
> > - Do not reorder DRIVER_OK & enable patches.
> > - Delete leftovers
> >
> > Eugenio Pérez (12):
> >   vhost: stop transfer elem ownership in vhost_handle_guest_kick
> >   vhost: use SVQ element ndescs instead of opaque data for desc
> >     validation
> >   vhost: Delete useless read memory barrier
> >   vhost: Do not depend on !NULL VirtQueueElement on vhost_svq_flush
> >   vhost_net: Add NetClientInfo prepare callback
> >   vhost_net: Add NetClientInfo stop callback
> >   vdpa: add net_vhost_vdpa_cvq_info NetClientInfo
> >   vdpa: Move command buffers map to start of net device
> >   vdpa: extract vhost_vdpa_net_cvq_add from
> >     vhost_vdpa_net_handle_ctrl_avail
> >   vhost_net: add NetClientState->load() callback
> >   vdpa: Add virtio-net mac address via CVQ at start
> >   vdpa: Delete CVQ migration blocker
> >
> >  include/hw/virtio/vhost-vdpa.h     |   1 -
> >  include/net/net.h                  |   6 +
> >  hw/net/vhost_net.c                 |  17 +++
> >  hw/virtio/vhost-shadow-virtqueue.c |  27 ++--
> >  hw/virtio/vhost-vdpa.c             |  14 --
> >  net/vhost-vdpa.c                   | 225 ++++++++++++++++++-----------
> >  6 files changed, 178 insertions(+), 112 deletions(-)
> >
> > --
> > 2.31.1
> >
> >
> >
>
> Hi Jason,
>
> Should I send a new version of this series with the changes you
> proposed, or can they be done at pull time? (Mostly changes in patch
> messages).

A new series please.


> Can you confirm to me that there is no other action I need
> to perform?

No other from my side.

Thanks

>
> Thanks!
>



