From: Jason Wang
Subject: Re: [PATCH for 9.0 07/12] vdpa: set backend capabilities at vhost_vdpa_init
Date: Thu, 21 Dec 2023 11:39:54 +0800

On Wed, Dec 20, 2023 at 3:08 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Wed, Dec 20, 2023 at 5:34 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Sat, Dec 16, 2023 at 1:28 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> > >
> > > The backend does not reset the backend capabilities until the vdpa file
> > > descriptor is closed, so there is no harm in setting them only once.
> > >
> > > This allows the destination of a live migration to premap memory in
> > > batches, using VHOST_BACKEND_F_IOTLB_BATCH.
> > >
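For context on the batching the commit message refers to: once
VHOST_BACKEND_F_IOTLB_BATCH has been negotiated, a burst of VHOST_IOTLB_UPDATE
maps can be bracketed between BATCH_BEGIN and BATCH_END messages written to
the vdpa device fd, and the kernel commits them to the device in one go. Below
is a minimal sketch of the begin message using only the kernel vhost UAPI; the
device_fd and backend_cap parameters are illustrative stand-ins for the state
QEMU keeps in vhost-vdpa, not names from this patch:

/* Illustrative sketch, not QEMU code: open an IOTLB batch if supported. */
#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <linux/vhost_types.h>

static int iotlb_batch_begin(int device_fd, uint64_t backend_cap)
{
    struct vhost_msg_v2 msg;

    if (!(backend_cap & (0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH))) {
        return 0; /* no batching: each map is sent on its own */
    }

    memset(&msg, 0, sizeof(msg));
    msg.type = VHOST_IOTLB_MSG_V2;
    msg.iotlb.type = VHOST_IOTLB_BATCH_BEGIN;

    /* vhost-vdpa consumes IOTLB messages written to the device fd */
    if (write(device_fd, &msg, sizeof(msg)) != (ssize_t)sizeof(msg)) {
        return -errno;
    }
    return 0;
}

A matching VHOST_IOTLB_BATCH_END message closes the batch, which is what lets
the destination of a live migration premap guest memory in batches instead of
one update at a time.
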
> > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > ---
> > >  hw/virtio/vhost-vdpa.c | 50 ++++++++++++++++--------------------------
> > >  1 file changed, 19 insertions(+), 31 deletions(-)
> > >
> > > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > > index 449c3794b2..43f7c382b1 100644
> > > --- a/hw/virtio/vhost-vdpa.c
> > > +++ b/hw/virtio/vhost-vdpa.c
> > > @@ -587,11 +587,25 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
> > >      struct vhost_vdpa *v = opaque;
> > >      assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
> > >      trace_vhost_vdpa_init(dev, v->shared, opaque);
> > > +    uint64_t backend_features;
> > > +    uint64_t qemu_backend_features = 0x1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 |
> > > +                                     0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH |
> > > +                                     0x1ULL << VHOST_BACKEND_F_IOTLB_ASID |
> > > +                                     0x1ULL << VHOST_BACKEND_F_SUSPEND;
> > >      int ret;
> > >
> > >      v->dev = dev;
> > >      dev->opaque =  opaque ;
> > >      v->shared->listener = vhost_vdpa_memory_listener;
> > > +
> > > +    if (vhost_vdpa_call(dev, VHOST_GET_BACKEND_FEATURES, &backend_features)) {
> > > +        return -EFAULT;
> > > +    }
> > > +
> > > +    backend_features &= qemu_backend_features;
> > > +
> > > +    dev->backend_cap = backend_features;
> > > +    v->shared->backend_cap = backend_features;
> > >      vhost_vdpa_init_svq(dev, v);
> > >
> > >      error_propagate(&dev->migration_blocker, v->migration_blocker);
> > > @@ -599,6 +613,11 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
> > >          return 0;
> > >      }
> > >
> > > +    ret = vhost_vdpa_call(dev, VHOST_SET_BACKEND_FEATURES, &backend_features);
> > > +    if (ret) {
> > > +        return -EFAULT;
> > > +    }
> > > +
> > >      /*
> > >       * If dev->shadow_vqs_enabled at initialization that means the device has
> > >       * been started with x-svq=on, so don't block migration
> > > @@ -829,36 +848,6 @@ static int vhost_vdpa_set_features(struct vhost_dev *dev,
> > >      return vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_FEATURES_OK);
> > >  }
> > >
> > > -static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
> >
> > How about keeping this function but just calling it in vhost_vdpa_init()?
> >
>
> Sure, that is possible. I need to remove the VhostOps
> vhost_set_backend_cap = vhost_vdpa_set_backend_cap anyway; is that ok
> for you?

Fine with me.

Thanks

>
> Thanks!
>

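A rough sketch of the direction agreed on here, keeping
vhost_vdpa_set_backend_cap() but calling it from vhost_vdpa_init() and
dropping the vhost_set_backend_cap = vhost_vdpa_set_backend_cap entry from the
VhostOps table, could look like the following. This is only an illustration
pieced together from the diff above, not the follow-up patch, and it leaves
out the early-return handling shown in the second hunk:

static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
{
    struct vhost_vdpa *v = dev->opaque;
    uint64_t features;
    uint64_t qemu_backend_features = 0x1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 |
                                     0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH |
                                     0x1ULL << VHOST_BACKEND_F_IOTLB_ASID |
                                     0x1ULL << VHOST_BACKEND_F_SUSPEND;

    /* Ask the backend what it supports and keep only what QEMU handles */
    if (vhost_vdpa_call(dev, VHOST_GET_BACKEND_FEATURES, &features)) {
        return -EFAULT;
    }
    features &= qemu_backend_features;

    /* Acked once at init; the backend keeps them until the vdpa fd is closed */
    if (vhost_vdpa_call(dev, VHOST_SET_BACKEND_FEATURES, &features)) {
        return -EFAULT;
    }

    dev->backend_cap = features;
    v->shared->backend_cap = features;
    return 0;
}

vhost_vdpa_init() would then call it early on:

    ret = vhost_vdpa_set_backend_cap(dev);
    if (ret) {
        return ret;
    }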


