From: Jason Wang
Subject: Re: [PATCH for 8.1 v2 5/6] vdpa: move CVQ isolation check to net_init_vhost_vdpa
Date: Mon, 3 Apr 2023 13:32:20 +0800

On Fri, Mar 31, 2023 at 6:12 PM Eugenio Perez Martin
<eperezma@redhat.com> wrote:
>
> On Fri, Mar 31, 2023 at 10:00 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> >
> > On 2023/3/30 18:42, Eugenio Perez Martin wrote:
> > > On Thu, Mar 30, 2023 at 8:23 AM Jason Wang <jasowang@redhat.com> wrote:
> > >> On Thu, Mar 30, 2023 at 2:20 PM Jason Wang <jasowang@redhat.com> wrote:
> > >>> On Fri, Mar 24, 2023 at 3:54 AM Eugenio Pérez <eperezma@redhat.com> 
> > >>> wrote:
> > >>>> Evaluating it at start time instead of initialization time may make the
> > >>>> guest capable of dynamically adding or removing migration blockers.
> > >>>>
> > >>>> Also, moving to initialization reduces the number of ioctls in the
> > >>>> migration, reducing failure possibilities.
> > >>>>
> > >>>> As a drawback we need to check for CVQ isolation twice: once with MQ
> > >>>> not negotiated and once with MQ acked, as long as the device supports
> > >>>> it.  This is because vring ASID / group management is based on vq
> > >>>> indexes, but we don't know the index of CVQ before negotiating MQ.
> > >>> We need to fail if we see a device that can isolate cvq without MQ but
> > >>> not with MQ.
> > >>>
> > >>>> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > >>>> ---
> > >>>> v2: Take out the reset of the device from vhost_vdpa_cvq_is_isolated
> > >>>> ---
> > >>>>   net/vhost-vdpa.c | 194 ++++++++++++++++++++++++++++++++++++-----------
> > >>>>   1 file changed, 151 insertions(+), 43 deletions(-)
> > >>>>
> > >>>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > >>>> index 4397c0d4b3..db2c9afcb3 100644
> > >>>> --- a/net/vhost-vdpa.c
> > >>>> +++ b/net/vhost-vdpa.c
> > >>>> @@ -43,6 +43,13 @@ typedef struct VhostVDPAState {
> > >>>>
> > >>>>       /* The device always have SVQ enabled */
> > >>>>       bool always_svq;
> > >>>> +
> > >>>> +    /* The device can isolate CVQ in its own ASID if MQ is negotiated */
> > >>>> +    bool cvq_isolated_mq;
> > >>>> +
> > >>>> +    /* The device can isolate CVQ in its own ASID if MQ is not negotiated */
> > >>>> +    bool cvq_isolated;
> > >>> As stated above, if we meet a device where cvq_isolated_mq ^
> > >>> cvq_isolated == true, we need to fail. This may reduce the complexity
> > >>> of the code?
> > >>>
> > >>> Thanks
> > >> Since we are the mediation layer, Qemu can always choose to negotiate
> > >> MQ regardless of whether or not it is supported by the guest. In this
> > >> way, we can have a stable virtqueue index for cvq.
> > >>
> > > I think it is a great idea and it simplifies this patch somehow.
> > > However, we need something like the queue mapping [1] to do so :).
> > >
> > > To double confirm:
> > > * If the device supports MQ, only probe MQ. If not, only probe !MQ.
> > > * Only store cvq_isolated in VhostVDPAState.
> > >
> > > Now, if the device does not negotiate MQ but the device supports MQ:
> >
> >
> > I'm not sure I understand here: if the device supports MQ it should
> > accept MQ, or we can fail the initialization here.
> >
>
> My fault, I wanted to say "if the device offers MQ but the driver does
> not ack it".
>
> >
> > > * All the requests to queue 3 must be redirected to the last queue in
> > > the device. That includes set_vq_address, notifiers regions, etc.
> >
> >
> > This also means we will only mediate the case:
> >
> > 1) Qemu emulated virtio-net has 1 queue but the device supports multiple queues
> >
> > but not
> >
> > 2) Qemu emulated virtio-net has M queues but the device supports N queues (N > M)
> >
>
> Right.
>
> >
> > >
> > > I'm totally ok to go this route but it's not immediate.
> >
> >
> > Yes but I mean, we can start by failing the device if
> > cvq_isolated_mq ^ cvq_isolated == true
> >
>
> So probe the two cases but set VhostVDPAState->cvq_isolated =
> cvq_isolated && cvq_isolated_mq then? No map involved that way, and
> all parents should behave that way.
>
> > (or I wonder if we can meet this condition for any existing parents).
>
> I don't think so, but I think we need to probe the two anyway.
> Otherwise we may change the dataplane asid too.

Just to make sure we are on the same page, I meant we could fail the
initialization of vhost-vDPA if the device:

1) can isolate cvq in the case of single queue but not multiqueue

or

2) can isolate cvq in the case of multiqueue but not single queue

Because I don't think there are any parents that have such a buggy
implementation.
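The fail-on-disagreement check described above could look roughly like this. This is a minimal illustration only; the struct and function names are hypothetical and not the actual QEMU API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical probe result: whether the device can isolate CVQ in its
 * own ASID, probed once with MQ not negotiated and once with MQ acked. */
struct cvq_probe {
    bool isolated_no_mq;   /* probed without MQ */
    bool isolated_mq;      /* probed with MQ acked */
};

/* Fail initialization when the two probes disagree (isolated_no_mq ^
 * isolated_mq): no known parent driver behaves that way, so it is
 * treated as a buggy implementation.  Otherwise collapse the result
 * into a single cvq_isolated flag. */
static int net_vhost_vdpa_check_cvq(const struct cvq_probe *p,
                                    bool *cvq_isolated)
{
    if (p->isolated_no_mq != p->isolated_mq) {
        fprintf(stderr,
                "vdpa: inconsistent CVQ isolation across MQ negotiation\n");
        return -1;
    }
    *cvq_isolated = p->isolated_no_mq;
    return 0;
}
```

With this shape, VhostVDPAState only needs to store the single cvq_isolated flag rather than two.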

Thanks

>
> Thanks!
>
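The index-remapping mediation discussed upthread (QEMU always negotiating MQ with the device so the CVQ index is stable, redirecting the guest's CVQ accesses to the device's last vq) could be sketched as below. This is illustrative only, assuming the usual virtio-net layout (data vqs first, CVQ last); the function name and parameters are not QEMU code:

```c
#include <assert.h>
#include <stdbool.h>

/* Map a guest vq index to a device vq index.  If the guest did not ack
 * MQ, its CVQ sits at index 2 (third vq), while the device, which always
 * has MQ acked by QEMU, keeps its CVQ at 2 * dev_num_qps.  Requests to
 * the guest CVQ (set_vq_address, notifier regions, etc.) must therefore
 * be redirected to the device's last vq; data vqs map 1:1. */
static unsigned remap_vq_index(unsigned guest_vq, bool guest_acked_mq,
                               unsigned dev_num_qps)
{
    unsigned guest_cvq = guest_acked_mq ? 2 * dev_num_qps : 2;

    if (guest_vq == guest_cvq) {
        return 2 * dev_num_qps;   /* device-side CVQ is the last vq */
    }
    return guest_vq;
}
```

This covers only the case where the QEMU-emulated virtio-net has one queue pair while the device supports several, matching the scope agreed on in the thread.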



