On Wed, Dec 08, 2021 at 01:20:10PM +0800, Longpeng(Mike) wrote:
> From: Longpeng <longpeng2@huawei.com>
>
> Hi guys,
>
> This patch introduces the vhost-vdpa-net device, which is inspired
> by vhost-user-blk and the proposal of the vhost-vdpa-blk device [1].
>
> I've tested this patch on Huawei's offload card:
>
>   ./x86_64-softmmu/qemu-system-x86_64 \
>       -device vhost-vdpa-net-pci,vdpa-dev=/dev/vhost-vdpa-0
>
> For virtio hardware offloading, the most important requirement for us
> is to support live migration between offloading cards from different
> vendors; the combination of netdev and virtio-net seems too heavy, so we
> prefer a lightweight way.
>
> Maybe we could support both in the future? Such as:
>
> * Lightweight
>     Net:     vhost-vdpa-net
>     Storage: vhost-vdpa-blk
>
> * Heavy but more powerful
>     Net:     netdev + virtio-net + vhost-vdpa
>     Storage: bdrv + virtio-blk + vhost-vdpa
>
> [1] https://www.mail-archive.com/qemu-devel@nongnu.org/msg797569.html

Stefano presented a plan for vdpa-blk at KVM Forum 2021:
https://kvmforum2021.sched.com/event/ke3a/vdpa-blk-unified-hardware-and-software-offload-for-virtio-blk-stefano-garzarella-red-hat

It's closer to today's virtio-net + vhost-net approach than the
vhost-vdpa-blk device you mentioned. The idea is to treat vDPA as
an offload feature rather than a completely separate code path that
needs to be maintained and tested. That way QEMU's block layer features
and live migration work with vDPA devices, and the existing virtio-blk
code is reused. The key functionality that has not been implemented yet
is a "fast path" mechanism that allows the QEMU virtio-blk device's
virtqueue to be offloaded to vDPA.

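For comparison, the net side already supports this offload-style wiring
today: the virtio-net device model stays in QEMU while the datapath is
handed to the vDPA device. A rough invocation sketch, reusing the
/dev/vhost-vdpa-0 path from your test above (the "vdpa0" id is just a
placeholder):

  qemu-system-x86_64 \
      -netdev vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0 \
      -device virtio-net-pci,netdev=vdpa0

Because the device model remains in QEMU here, feature negotiation and
migration go through the normal virtio-net code paths; only the data
plane is offloaded.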
The unified vdpa-blk architecture should deliver the same performance
as the vhost-vdpa-blk device you mentioned but with more features, so I
wonder what aspects of the vhost-vdpa-blk idea are important to you?

QEMU already has vhost-user-blk, which takes a similar approach to the
vhost-vdpa-blk device you are proposing. I'm not against the
vhost-vdpa-blk approach in principle, but I would like to understand your
requirements and see if there is a way to collaborate on one vdpa-blk
implementation instead of dividing our efforts between two.