
RE: [RFC] vhost-vdpa-net: add vhost-vdpa-net host device support


From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
Subject: RE: [RFC] vhost-vdpa-net: add vhost-vdpa-net host device support
Date: Tue, 14 Dec 2021 01:44:46 +0000


> -----Original Message-----
> From: Qemu-devel [mailto:qemu-devel-bounces+longpeng2=huawei.com@nongnu.org]
> On Behalf Of Stefan Hajnoczi
> Sent: Monday, December 13, 2021 11:16 PM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> <longpeng2@huawei.com>
> Cc: mst@redhat.com; jasowang@redhat.com; qemu-devel@nongnu.org; Yechuan
> <yechuan@huawei.com>; xieyongji@bytedance.com; Gonglei (Arei)
> <arei.gonglei@huawei.com>; parav@nvidia.com; Stefano Garzarella
> <sgarzare@redhat.com>
> Subject: Re: [RFC] vhost-vdpa-net: add vhost-vdpa-net host device support
> 
> On Sat, Dec 11, 2021 at 04:11:04AM +0000, Longpeng (Mike, Cloud Infrastructure
> Service Product Dept.) wrote:
> >
> >
> > > -----Original Message-----
> > > From: Stefano Garzarella [mailto:sgarzare@redhat.com]
> > > Sent: Thursday, December 9, 2021 11:55 PM
> > > To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > > <longpeng2@huawei.com>
> > > Cc: Stefan Hajnoczi <stefanha@redhat.com>; jasowang@redhat.com;
> mst@redhat.com;
> > > parav@nvidia.com; xieyongji@bytedance.com; Yechuan <yechuan@huawei.com>;
> > > Gonglei (Arei) <arei.gonglei@huawei.com>; qemu-devel@nongnu.org
> > > Subject: Re: [RFC] vhost-vdpa-net: add vhost-vdpa-net host device support
> > >
> > > On Thu, Dec 09, 2021 at 09:16:58AM +0000, Stefan Hajnoczi wrote:
> > > >On Wed, Dec 08, 2021 at 01:20:10PM +0800, Longpeng(Mike) wrote:
> > > >> From: Longpeng <longpeng2@huawei.com>
> > > >>
> > > >> Hi guys,
> > > >>
> > > >> This patch introduces vhost-vdpa-net device, which is inspired
> > > >> by vhost-user-blk and the proposal of vhost-vdpa-blk device [1].
> > > >>
> > > >> I've tested this patch on Huawei's offload card:
> > > >> ./x86_64-softmmu/qemu-system-x86_64 \
> > > >>     -device vhost-vdpa-net-pci,vdpa-dev=/dev/vhost-vdpa-0
> > > >>
> > > >> For virtio hardware offloading, the most important requirement for us
> > > >> is to support live migration between offloading cards from different
> > > >> vendors. The combination of netdev and virtio-net seems too heavy; we
> > > >> prefer a lightweight way.
> > > >>
> > > >> Maybe we could support both in the future? For example:
> > > >>
> > > >> * Lightweight
> > > >>  Net: vhost-vdpa-net
> > > >>  Storage: vhost-vdpa-blk
> > > >>
> > > >> * Heavy but more powerful
> > > >>  Net: netdev + virtio-net + vhost-vdpa
> > > >>  Storage: bdrv + virtio-blk + vhost-vdpa
> > > >>
> > > >> [1] https://www.mail-archive.com/qemu-devel@nongnu.org/msg797569.html
> > > >
> > > >Stefano presented a plan for vdpa-blk at KVM Forum 2021:
> > > >https://kvmforum2021.sched.com/event/ke3a/vdpa-blk-unified-hardware-and-software-offload-for-virtio-blk-stefano-garzarella-red-hat
> > > >
> > > >It's closer to today's virtio-net + vhost-net approach than the
> > > >vhost-vdpa-blk device you have mentioned. The idea is to treat vDPA as
> > > >an offload feature rather than a completely separate code path that
> > > >needs to be maintained and tested. That way QEMU's block layer features
> > > >and live migration work with vDPA devices and re-use the virtio-blk
> > > >code. The key functionality that has not been implemented yet is a "fast
> > > >path" mechanism that allows the QEMU virtio-blk device's virtqueue to be
> > > >offloaded to vDPA.
> > > >
> > > >The unified vdpa-blk architecture should deliver the same performance
> > > >as the vhost-vdpa-blk device you mentioned but with more features, so I
> > > >wonder what aspects of the vhost-vdpa-blk idea are important to you?
> > > >
> > > >QEMU already has vhost-user-blk, which takes a similar approach as the
> > > >vhost-vdpa-blk device you are proposing. I'm not against the
> > > >vhost-vdpa-blk approach in principle, but would like to understand your
> > > >requirements and see if there is a way to collaborate on one vdpa-blk
> > > >implementation instead of dividing our efforts between two.
> > >
> > > While waiting for the aspects that Stefan asked about, let me add some
> > > details about the plan for vdpa-blk.
> > >
> > > Currently I'm working on the in-kernel software device. In the next
> > > months I hope to start working on the QEMU part. Anyway that part could
> > > go in parallel with the in-kernel device, so if you are interested we
> > > can collaborate.
> > >
> >
> > Does the work on the QEMU part mean supporting vdpa in BlockDriver and
> > virtio-blk?
> >
> > In fact, I wanted to support vdpa in the QEMU block layer before I sent this
> > RFC, because having the net part use netdev + virtio-net while the storage
> > part uses vhost-vdpa-blk (from Yongji) looks like a strange combination.
> >
> > But I found that enabling vdpa in the QEMU block layer would take more time,
> > and some features (e.g. snapshot, IO throttling) from the QEMU block layer
> > are not needed in our hardware offloading case, so I turned to developing
> > "vhost-vdpa-net"; maybe the combination of vhost-vdpa-net and vhost-vdpa-blk
> > is more consistent.
> >
> > > Having only the unified vdpa-blk architecture would allow us to simplify
> > > the management layers and avoid duplicate code, but it takes more time
> > > to develop compared to vhost-vdpa-blk. So if vdpa-blk support in QEMU is
> > > urgent, I could understand the need to add vhost-vdpa-blk now.
> > >
> >
> > I prefer a way that can support vdpa devices (not only net and storage, but
> > also other device types) quickly in the hardware offloading case. It may be
> > less universal, but it could be an alternative for some users.
> 
> If QEMU already had --blockdev vdpa-blk, would you use that with
> --device virtio-blk-pci or still want to implement a separate --device
> vhost-vdpa-blk-pci device?
> 

vhost-vdpa-blk/net seem unnecessary now, but a generic vdpa device may still be
needed.

We are still in the research stage, so I cannot yet decide whether to use
vdpa-blk or the generic device for the storage devices.

If we need to migrate legacy non-offloading instances to offloading instances,
then we have no choice but to use vdpa-blk. However, migrating from
non-offloading to offloading is a complex project: it requires support not only
from the virtualization layer but also from other layers, so it's hard to say
whether it is feasible in practice.

So maybe a good choice for us is:
  Net: -netdev type=vhost-vdpa
  Storage: -blockdev vdpa-blk
  Others (e.g. fs, crypto): generic vdpa device
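
For reference, a rough sketch of what those invocations might look like (only
the net syntax exists in QEMU today; the vdpa-blk blockdev and the generic
vdpa device are proposals, so their option and device names below are
hypothetical):

  Net:
    -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0 \
    -device virtio-net-pci,netdev=vdpa0

  Storage (hypothetical):
    -blockdev vdpa-blk,node-name=drive0,path=/dev/vhost-vdpa-1 \
    -device virtio-blk-pci,drive=drive0

  Others (hypothetical generic vdpa device):
    -device vhost-vdpa-device-pci,vhostdev=/dev/vhost-vdpa-2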


> Stefan


