From: Liang, Cunming
Subject: Re: [Qemu-devel] [RFC 0/2] vhost-vfio: introduce mdev based HW vhost backend
Date: Wed, 7 Nov 2018 15:08:36 +0000


> -----Original Message-----
> From: Jason Wang [mailto:address@hidden]
> Sent: Wednesday, November 7, 2018 2:38 PM
> To: Liang, Cunming <address@hidden>; Wang, Xiao W
> <address@hidden>; address@hidden; address@hidden
> Cc: address@hidden; Bie, Tiwei <address@hidden>; Ye, Xiaolong
> <address@hidden>; Wang, Zhihong <address@hidden>; Daly, Dan
> <address@hidden>
> Subject: Re: [RFC 0/2] vhost-vfio: introduce mdev based HW vhost backend
> 
> 
> On 2018/11/7 8:26 PM, Liang, Cunming wrote:
> >
> >> -----Original Message-----
> >> From: Jason Wang [mailto:address@hidden]
> >> Sent: Tuesday, November 6, 2018 4:18 AM
> >> To: Wang, Xiao W <address@hidden>; address@hidden;
> >> address@hidden
> >> Cc: address@hidden; Bie, Tiwei <address@hidden>; Liang,
> >> Cunming <address@hidden>; Ye, Xiaolong
> >> <address@hidden>; Wang, Zhihong <address@hidden>;
> >> Daly, Dan <address@hidden>
> >> Subject: Re: [RFC 0/2] vhost-vfio: introduce mdev based HW vhost
> >> backend
> >>
> >>
> >> On 2018/10/16 9:23 PM, Xiao Wang wrote:
> >>> What's this
> >>> ===========
> >>> Following the patch (vhost: introduce mdev based hardware vhost
> >>> backend) https://lwn.net/Articles/750770/, which defines a generic
> >>> mdev device for vhost data path acceleration (aliased as vDPA mdev
> >>> below), this patch set introduces a new net client type: vhost-vfio.
> >>
> >> Thanks a lot for such an interesting series. Some generic questions:
> >>
> >>
> >> If we consider using a software backend (e.g. vhost-kernel or a relay of
> >> virtio-vhost-user or other cases) as well in the future, maybe vhost-mdev is
> >> a better name, which means it is not tied to VFIO anyway.
> > [LC] The initial thought behind the '-vfio' term was that the VFIO UAPI is
> > used as the interface, VFIO being the only available mdev bus driver. That
> > leads to the term 'vhost-vfio' in QEMU, while 'vhost-mdev' refers to the
> > kernel helper that handles vhost messages via mdev.
> >
> >>
> >>> Currently we have 2 types of vhost backends in QEMU: vhost-kernel
> >>> (tap) and vhost-user (e.g. DPDK vhost). In order to have a kernel-space
> >>> HW vhost acceleration framework, the vDPA mdev device works as
> >>> a generic configuration channel.
> >>
> >> Does "generic" configuring channel means dpdk will also go for this way?
> >> E.g it will have a vhost mdev pmd?
> > [LC] We don't plan to have a vhost-mdev PMD; instead we're thinking of keeping
> > a consistent virtio PMD running on top of vhost-mdev. The virtio PMD supports
> > the PCI bus and the vdev bus (via virtio-user) today. Vhost-mdev would most
> > likely be introduced as another bus (mdev bus) provider.
> 
> 
> This seems like it could be eliminated if you keep using the vhost-kernel ioctl
> API. Then you can use virtio-user.
[LC] That's true.

> 
> 
> > mdev bus support in DPDK is in the backlog.
> >
> >>
> >>>    It exposes to user space a non-vendor-specific configuration
> >>> interface for setting up a vhost HW accelerator,
> >>
> >> Or even a software translation layer on top of existing hardware.
> >>
> >>
> >>> based on this, this patch
> >>> set introduces a third vhost backend called vhost-vfio.
> >>>
> >>> How does it work
> >>> ================
> >>> The vDPA mdev defines 2 BAR regions, BAR0 and BAR1. BAR0 is the main
> >>> device interface; vhost messages can be written to or read from this
> >>> region following the format below. All the regular vhost messages about
> >>> vring addresses, negotiated features, etc., are written to this region
> >>> directly.
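
(To make the BAR0 flow concrete, here is a minimal userspace sketch of pushing a
vhost message through the mdev's BAR0 region via the VFIO device fd. The
struct vdpa_vhost_msg layout and the offset-0 message area are illustrative
assumptions for this mail, not the format defined by the RFC patches.)

#include <linux/vfio.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>

struct vdpa_vhost_msg {            /* illustrative only, not the RFC's layout */
    uint32_t request;              /* e.g. a vring address update */
    uint32_t flags;
    uint8_t  payload[64];          /* request-specific body */
};

static int write_vhost_msg(int device_fd, const struct vdpa_vhost_msg *msg)
{
    struct vfio_region_info info = { .argsz = sizeof(info),
                                     .index = VFIO_PCI_BAR0_REGION_INDEX };

    /* Find where BAR0 sits in the device fd's offset space. */
    if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &info) < 0)
        return -1;

    /* Push the message into the BAR0 message area (offset 0 assumed here). */
    if (pwrite(device_fd, msg, sizeof(*msg), info.offset) != (ssize_t)sizeof(*msg))
        return -1;

    return 0;
}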
> >>
> >> If I understand this correctly, the mdev is not passed through to the guest
> >> directly. So what's the reason for inventing a PCI-like device here? I'm
> >> asking since:
> > [LC] mdev uses the mandatory 'device_api' attribute to identify the layout.
> > We picked one of the available ones from pci, platform, amba and ccw. It would
> > also work to define a new one for this transport.
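
(For reference, in the in-tree mdev sample drivers the 'device_api' type
attribute is just a sysfs show function returning one of the existing strings.
A sketch along the lines of the 4.x-era samples, assuming the vfio-pci layout is
the one reported, looks like this; details may differ in a real vDPA parent
driver.)

#include <linux/device.h>
#include <linux/mdev.h>
#include <linux/vfio.h>

/* Report which layout/UAPI this mdev type speaks; today the choices are the
 * existing strings (vfio-pci, vfio-platform, vfio-amba, vfio-ccw). */
static ssize_t device_api_show(struct kobject *kobj, struct device *dev,
                               char *buf)
{
    return sprintf(buf, "%s\n", VFIO_DEVICE_API_PCI_STRING);
}
MDEV_TYPE_ATTR_RO(device_api);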
> >
> >> - The vhost protocol is transport independent; we should consider supporting
> >> transports other than PCI. I know we can even do it with the existing design,
> >> but it looks rather odd if we do e.g. a ccw device with a PCI-like mediated
> >> device.
> >>
> >> - Can we try to reuse the vhost-kernel ioctls? Fewer APIs mean fewer bugs
> >> and more code reuse. E.g. virtio-user can benefit from the vhost-kernel ioctl
> >> API with almost no changes, I believe.
> > [LC] Agreed, so it reuses the commands defined by the vhost-kernel ioctls. But
> > VFIO provides device-specific things (e.g. DMAR, INTR, etc.), which are the
> > extra APIs introduced by this transport.
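
(To make the "reuse vhost-kernel ioctl" point concrete, the command set in
question is the existing <linux/vhost.h> sequence, roughly as in the sketch
below. The device path is just the existing /dev/vhost-net example; a
vhost-mdev provider reusing these ioctls would mainly change which node gets
opened, with DMA and interrupt setup coming from VFIO on top.)

#include <fcntl.h>
#include <linux/vhost.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Standard vhost-kernel setup sequence, as virtio-user already issues today. */
static int vhost_setup(const char *dev_path, struct vhost_vring_addr *addr)
{
    int fd = open(dev_path, O_RDWR);   /* e.g. "/dev/vhost-net" */
    if (fd < 0)
        return -1;

    struct vhost_vring_state num = { .index = 0, .num = 256 };

    if (ioctl(fd, VHOST_SET_OWNER) < 0 ||
        ioctl(fd, VHOST_SET_VRING_NUM, &num) < 0 ||
        ioctl(fd, VHOST_SET_VRING_ADDR, addr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}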
> 
> 
> I'm not quite sure I understand here. I think having vhost-kernel compatible
> ioctls does not conflict with using VFIO ioctls like DMA or INTR?
> 
> Btw, the VFIO DMA ioctl is not even a must from my point of view; vhost-mdev
> can forward the mem table information to the device driver and let it call the
> DMA API to map/unmap pages.
[LC] If vhost-mdev is not regarded as a device, then forwarding the mem table
won't be a concern.
If we introduce a new mdev bus driver (vhost-mdev) which allows an mdev instance
to be a new type of provider for vhost-kernel, it becomes a pretty good
alternative that fully leverages the vhost-kernel ioctls.
I'm not sure that's the same view as yours when you say reusing the vhost-kernel
ioctls.
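
(For context, the two options being compared look roughly like this from
userspace: either map guest memory through the VFIO container's type1 IOMMU
ioctl as sketched below, where container_fd is assumed to be an already-opened
and configured VFIO_TYPE1_IOMMU container, or hand the same regions to the
kernel via VHOST_SET_MEM_TABLE and let the parent device driver call the DMA
API itself.)

#include <linux/vfio.h>
#include <stdint.h>
#include <sys/ioctl.h>

/* Option 1: userspace programs the IOVA mappings through the VFIO container,
 * so the device can DMA using guest physical addresses as IOVAs, the way
 * existing vhost backends expect. */
static int map_guest_region(int container_fd, void *hva, uint64_t gpa,
                            uint64_t size)
{
    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        .vaddr = (uintptr_t)hva,   /* host virtual address of the region */
        .iova  = gpa,              /* use the GPA as the IOVA */
        .size  = size,
    };

    return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}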

> 
> Thanks

