qemu-devel

Re: [PATCH] vhost: accept VIRTIO_F_ORDER_PLATFORM as a valid SVQ feature


From: Eugenio Perez Martin
Subject: Re: [PATCH] vhost: accept VIRTIO_F_ORDER_PLATFORM as a valid SVQ feature
Date: Tue, 14 Feb 2023 09:36:01 +0100

On Tue, Feb 14, 2023 at 8:51 AM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Tue, Feb 14, 2023 at 08:02:08AM +0100, Eugenio Perez Martin wrote:
> > On Tue, Feb 14, 2023 at 7:31 AM Jason Wang <jasowang@redhat.com> wrote:
> > >
> > > On Tue, Feb 14, 2023 at 3:19 AM Eugenio Pérez <eperezma@redhat.com> wrote:
> > > >
> > > > VIRTIO_F_ORDER_PLATFORM indicates that memory accesses by the driver and
> > > > the device are ordered in a way described by the platform.  Since vDPA
> > > > devices may be backed by hardware devices, let's allow
> > > > VIRTIO_F_ORDER_PLATFORM.
> > > >
> > > > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > > > ---
> > > >  hw/virtio/vhost-shadow-virtqueue.c | 1 +
> > > >  1 file changed, 1 insertion(+)
> > > >
> > > > diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> > > > index 4307296358..6bb1998f12 100644
> > > > --- a/hw/virtio/vhost-shadow-virtqueue.c
> > > > +++ b/hw/virtio/vhost-shadow-virtqueue.c
> > > > @@ -34,6 +34,7 @@ bool vhost_svq_valid_features(uint64_t features, Error **errp)
> > > >          switch (b) {
> > > >          case VIRTIO_F_ANY_LAYOUT:
> > > >          case VIRTIO_RING_F_EVENT_IDX:
> > > > +        case VIRTIO_F_ORDER_PLATFORM:
> > >
> > > Do we need to add this bit to vdpa_feature_bits[] as well?
> > >
> >
> > If we want to pass it to the guest, yes we should. I'll send another
> > patch for it.
> >
> > But I think that should be done on top / in parallel actually.
> >
> > Open question: Should all vdpa hardware devices offer it? Or this is
> > only needed on specific archs?
> >
> > Thanks!
>
> I don't get what this is doing at all frankly. vdpa has to
> pass VIRTIO_F_ORDER_PLATFORM to guest at all times - unless
> - it's a x86 host where it kind of works anyway
> - it's vduse which frankly is so slow we can do VIRTIO_F_ORDER_PLATFORM 
> anyway.

That was my understanding, adding vdpasim to the list of exceptions
(please correct me if I'm wrong).

> In short VIRTIO_F_ORDER_PLATFORM has nothing to do with the device
> and everything with qemu itself.
>

I have little experience outside of x86, so I may be wrong here. My
understanding is that this feature lets the guest optimize the
barriers around memory operations:
* If VIRTIO_F_ORDER_PLATFORM is not negotiated, the driver can use
weaker memory barriers that only protect ordering between processors.
* If VIRTIO_F_ORDER_PLATFORM is negotiated, stronger barriers are
needed, ones that also order transport (e.g. PCI) accesses.

This is backed up by comments in the standard:
"This implies that the driver needs to use memory barriers suitable for
devices described by the platform; e.g. for the PCI transport in the
case of hardware PCI devices."

And by comments in the virtio drivers:
"For virtio_pci on SMP, we don't need to order with respect to MMIO
accesses through relaxed memory I/O windows, so virt_mb() et al are
sufficient. For using virtio to talk to real devices (eg. other
heterogeneous CPUs) we do need real barriers."

So isn't the sentence "VIRTIO_F_ORDER_PLATFORM has nothing to do with
the device and everything with qemu itself" actually the reverse: it
has everything to do with the device?

> Yea we can allow VIRTIO_F_ORDER_PLATFORM from kernel but given
> we never did at this point it will need a protocol feature bit.
> I don't think it's worth it ..
>

With "from kernel", do you mean in vhost-kernel or in the virtio ring
driver? The virtio ring driver already supports it.

I'm ok with leaving this for the future, but that means hw devices on
non-x86 platforms may not work correctly, right?

Thanks!



