From: Jason Wang
Subject: Re: [PATCH V5 1/3] net/filter: Optimize transfer protocol for filter-mirror/redirector
Date: Fri, 5 Nov 2021 12:03:07 +0800

On Fri, Nov 5, 2021 at 11:27 AM Zhang, Chen <chen.zhang@intel.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jason Wang <jasowang@redhat.com>
> > Sent: Friday, November 5, 2021 11:17 AM
> > To: Zhang, Chen <chen.zhang@intel.com>; Markus Armbruster
> > <armbru@redhat.com>
> > Cc: qemu-dev <qemu-devel@nongnu.org>; Li Zhijian
> > <lizhijian@cn.fujitsu.com>
> > Subject: Re: [PATCH V5 1/3] net/filter: Optimize transfer protocol for filter-mirror/redirector
> >
> >
> > On 2021/11/4 1:37 PM, Zhang, Chen wrote:
> > >>>>>
> > >>>>> I wonder if we need to introduce a new parameter, e.g. force_vnet_hdr,
> > >>>>> here; then we can always send vnet_hdr when it is enabled.
> > >>>>>
> > >>>>> Otherwise the "vnet_hdr_support" seems meaningless.
> > >>>> Yes. Currently "vnet_hdr_support" is enabled by default, and vnet_hdr_len
> > >>>> is already forced from the attached nf->netdev.
> > >>>> Maybe we can introduce a new parameter "force_no_vnet_hdr" here to
> > >>>> make vnet_hdr_len always stay 0.
> > >>>> If you think that's OK, I will update it in the next version.
> > >>> Let me explain, if I'm not wrong:
> > >>>
> > >>> "vnet_hdr_support" means whether or not to send vnet header length.
> > >>> If vnet_hdr_support=false, we won't send the vnet header. This looks
> > >>> the same as your "force_no_vnet_hdr" above.
> > >> Yes, it was. But this series changes that.
> > >> Now "vnet_hdr_support" no longer decides whether the vnet header length
> > >> is sent; we always send it, even when it is 0.
> > >> That avoids sender/receiver transfer protocol parse issues, e.g. when the
> > >> sender sends data with the vnet header length but the receiver does not
> > >> enable "vnet_hdr_support".
> > >> Filters will auto-set vnet_hdr_len from the local nf->netdev and detect
> > >> the issue when they get a different vnet_hdr_len from other filters.
> > >>
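(For context: a minimal, illustrative sketch of the framing described above, assuming the series' behavior of always sending the 4-byte vnet header length field, even when it is 0. The function name and buffer handling here are hypothetical, not taken from net/filter-mirror.c.)

/* Illustrative only: pack one packet for the filter's chardev socket.
 * The vnet_hdr_len field is always written, even when it is 0, so the
 * receiver can detect a vnet_hdr_support mismatch instead of silently
 * misparsing the stream. */
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

static size_t pack_mirror_frame(uint8_t *buf, const uint8_t *pkt,
                                uint32_t pkt_len, uint32_t vnet_hdr_len)
{
    uint32_t len = htonl(pkt_len);
    uint32_t vhdr = htonl(vnet_hdr_len);    /* sent even when 0 */
    size_t off = 0;

    memcpy(buf + off, &len, sizeof(len));   off += sizeof(len);
    memcpy(buf + off, &vhdr, sizeof(vhdr)); off += sizeof(vhdr);
    memcpy(buf + off, pkt, pkt_len);        off += pkt_len;
    return off;
}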
> > >>> And my "force_vnet_hdr" seems duplicated with vnet_hdr_support=true.
> > >>> So it looks to me we can leave the mirror code as is and just change
> > >>> the compare? (depends on the mgmt to set a correct vnet_hdr_support)
> > >> OK, I will keep filter-mirror/filter-redirector/filter-rewriter the same
> > >> as in this version.
> > >> The colo-compare module will take the primary node's filter data's
> > >> vnet_hdr_len as the local value and compare it with the secondary node's,
> > >> because colo-compare is not attached to any nf->netdev.
> > >> So the compare module's "vnet_hdr_support" is effectively auto-configured
> > >> from the filter transfer protocol.
> > >> Does "force_vnet_hdr" mean hard-coding compare's local vnet_hdr_len
> > >> rather than taking it from the input filter's data?
> > >>
> > >> Thanks
> > >> Chen
> > >>
> > > Hi Jason/Markus,
> > >
> > > Rethinking about it, how about keeping the original "vnet_hdr_support"
> > > functionality and adding a new optional parameter "auto_vnet_hdr" for the
> > > filters/compare?
> >
> >
> > It's one way, but rethinking the whole thing, I wonder: what if we just enable
> > "vnet_hdr_support" by default for the filters and colo-compare?
>
> It works by default when the user uses -device virtio-net-pci or e1000...
> But it doesn't address this series' motivation: how to check/fix user
> configuration issues.
> For example, a user enables "vnet_hdr_support" on filter-mirror, disables
> "vnet_hdr_support" on filter-redirector,
> and connects the two filter modules by a chardev socket.
> In this case the guest will get a wrong network workload and the filters won't
> perceive any abnormalities,
> but in fact the whole system is no longer working.
> This series will report an error and try to correct it.
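(As a concrete illustration of that mismatch, using the usual filter-mirror/filter-redirector options; the IDs, port and netdev names below are hypothetical:)

  -chardev socket,id=mirror0,host=127.0.0.1,port=9003,server=on,wait=off
  -object filter-mirror,id=m0,netdev=hn0,queue=tx,outdev=mirror0,vnet_hdr_support=on
  -chardev socket,id=redir0,host=127.0.0.1,port=9003
  -object filter-redirector,id=r0,netdev=hn1,queue=rx,indev=redir0
  # vnet_hdr_support is off on the redirector, so the two ends disagree
  # about whether the stream carries a vnet header length field.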

The problem is how "auto_vnet_hdr" helps in this case. It's a new
parameter, which may lead to even more misconfiguration?

Thanks

>
> Thanks
> Chen
>
> >
> > Thanks
> >
> >
> > >
> > > Thanks
> > > Chen
> > >
> > >
> > >>> Thanks
> > >>>
> > >>>> Thanks
> > >>>> Chen
> > >>>>
> > >>>>> Thanks
> > >>>>>
> > >>>>>
>



