qemu-devel

Re: [RFC] vhost-vdpa-net: add vhost-vdpa-net host device support


From: Stefano Garzarella
Subject: Re: [RFC] vhost-vdpa-net: add vhost-vdpa-net host device support
Date: Thu, 9 Dec 2021 16:55:22 +0100

On Thu, Dec 09, 2021 at 09:16:58AM +0000, Stefan Hajnoczi wrote:
On Wed, Dec 08, 2021 at 01:20:10PM +0800, Longpeng(Mike) wrote:
From: Longpeng <longpeng2@huawei.com>

Hi guys,

This patch introduces vhost-vdpa-net device, which is inspired
by vhost-user-blk and the proposal of vhost-vdpa-blk device [1].

I've tested this patch on Huawei's offload card:
./x86_64-softmmu/qemu-system-x86_64 \
    -device vhost-vdpa-net-pci,vdpa-dev=/dev/vhost-vdpa-0

For virtio hardware offloading, the most important requirement for us
is to support live migration between offloading cards from different
vendors. The combination of netdev and virtio-net seems too heavy; we
prefer a lightweight way.

Maybe we could support both in the future? For example:

* Lightweight
 Net: vhost-vdpa-net
 Storage: vhost-vdpa-blk

* Heavy but more powerful
 Net: netdev + virtio-net + vhost-vdpa
 Storage: bdrv + virtio-blk + vhost-vdpa
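
For comparison, the "heavy" net combination above would typically be wired up along these lines (a hedged sketch: the device node /dev/vhost-vdpa-0 and the "vdpa0" id are placeholders, and QEMU's vhost-vdpa netdev backend, available since QEMU 5.1, is assumed):

```shell
# Sketch of the netdev + virtio-net + vhost-vdpa path; the vDPA device
# node and the "vdpa0" id are example values, not fixed names.
qemu-system-x86_64 \
    -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0 \
    -device virtio-net-pci,netdev=vdpa0
```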

[1] https://www.mail-archive.com/qemu-devel@nongnu.org/msg797569.html

Stefano presented a plan for vdpa-blk at KVM Forum 2021:
https://kvmforum2021.sched.com/event/ke3a/vdpa-blk-unified-hardware-and-software-offload-for-virtio-blk-stefano-garzarella-red-hat

It's closer to today's virtio-net + vhost-net approach than the
vhost-vdpa-blk device you have mentioned. The idea is to treat vDPA as
an offload feature rather than a completely separate code path that
needs to be maintained and tested. That way QEMU's block layer features
and live migration work with vDPA devices and re-use the virtio-blk
code. The key functionality that has not been implemented yet is a "fast
path" mechanism that allows the QEMU virtio-blk device's virtqueue to be
offloaded to vDPA.

The unified vdpa-blk architecture should deliver the same performance
as the vhost-vdpa-blk device you mentioned but with more features, so I
wonder what aspects of the vhost-vdpa-blk idea are important to you?

QEMU already has vhost-user-blk, which takes a similar approach as the
vhost-vdpa-blk device you are proposing. I'm not against the
vhost-vdpa-blk approach in principle, but would like to understand your
requirements and see if there is a way to collaborate on one vdpa-blk
implementation instead of dividing our efforts between two.

While waiting for the details Stefan asked about, let me add some information on the plan for vdpa-blk.

Currently I'm working on the in-kernel software device. In the next months I hope to start working on the QEMU part. Anyway, that part could proceed in parallel with the in-kernel device, so if you are interested we can collaborate.

Having only the unified vdpa-blk architecture would allow us to simplify the management layers and avoid duplicate code, but it takes more time to develop compared to vhost-vdpa-blk. So if vdpa-blk support in QEMU is urgent, I could understand the need to add vhost-vdpa-blk now.

Let me know if you want more details about the unified vdpa-blk architecture.

Thanks,
Stefano
