From: Stefan Hajnoczi
Subject: [Qemu-devel] [RFC 0/2] virtio-vhost-user: add virtio-vhost-user device
Date: Fri, 19 Jan 2018 13:06:51 +0000

These patches implement the virtio-vhost-user device design that I have
described here:
https://stefanha.github.io/virtio/vhost-user-slave.html#x1-2830007

The goal is to let the guest act as the vhost device backend for other guests.
This allows virtual networking and storage appliances to run inside guests.
This device is particularly interesting for poll mode drivers where exitless
VM-to-VM communication is possible, completely bypassing the hypervisor in the
data path.

The DPDK driver is here:
https://github.com/stefanha/dpdk/tree/virtio-vhost-user

For more information, see
https://wiki.qemu.org/Features/VirtioVhostUser.
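To give an idea of the intended usage (the option names below are only
indicative; see the wiki page above for the exact command lines), the
slave guest is started with the device bound to a vhost-user socket
chardev:

  $ qemu-system-x86_64 ... \
      -chardev socket,id=chardev0,path=vhost-user.sock \
      -device virtio-vhost-user-pci,chardev=chardev0

Other guests then connect to vhost-user.sock as ordinary vhost-user
masters (for example with a vhost-user netdev), just as they would
connect to an AF_UNIX slave running on the host.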

virtio-vhost-user is inspired by Wei Wang and Zhiyong Yang's vhost-pci work.
It differs from vhost-pci in that it has:
1. Vhost-user protocol message tunneling, allowing existing vhost-user
   slave software to be reused inside the guest.
2. Support for all vhost device types.
3. Disconnected operation and reconnection support.
4. Asynchronous vhost-user socket implementation that avoids blocking.

I have written this code to demonstrate how the virtio-vhost-user approach
works and why it is more maintainable than vhost-pci: vhost-user slave
software can use either AF_UNIX or virtio-vhost-user without significant
code changes to the vhost device backends.

One of the main concerns about virtio-vhost-user was that the QEMU
virtio-vhost-user device implementation could be complex because it needs to
parse all messages.  I hope this patch series shows that it's actually very
simple because most messages are passed through.
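As a concrete illustration of why the device stays simple, here is a
rough sketch of the dispatch logic (simplified types and hypothetical
helper names, not the actual hw/virtio/virtio-vhost-user.c code; the
authoritative list of intercepted messages is in the spec draft linked
above).  Only the handful of messages that carry file descriptors or
describe memory layout need to be translated into PCI resources; the
rest are tunneled to the guest unmodified:

  /* Sketch only -- simplified types and hypothetical helpers. */
  #include <stdbool.h>
  #include <stdint.h>

  /* Request codes from the vhost-user protocol specification */
  enum {
      VHOST_USER_SET_MEM_TABLE    = 5,
      VHOST_USER_SET_LOG_FD       = 7,
      VHOST_USER_SET_VRING_KICK   = 12,
      VHOST_USER_SET_VRING_CALL   = 13,
      VHOST_USER_SET_VRING_ERR    = 14,
      VHOST_USER_SET_SLAVE_REQ_FD = 21,
  };

  typedef struct {
      uint32_t request;
      uint32_t flags;
      uint32_t size;
      uint8_t  payload[256];   /* simplified; the real VhostUserMsg differs */
  } VhostUserMsgSketch;

  /* Messages carrying fds or memory layout are translated into PCI
   * resources (shared memory BAR, doorbells, notifications). */
  static bool vvu_needs_translation(uint32_t request)
  {
      switch (request) {
      case VHOST_USER_SET_MEM_TABLE:
      case VHOST_USER_SET_LOG_FD:
      case VHOST_USER_SET_VRING_KICK:
      case VHOST_USER_SET_VRING_CALL:
      case VHOST_USER_SET_VRING_ERR:
      case VHOST_USER_SET_SLAVE_REQ_FD:
          return true;
      default:
          return false;   /* tunnel the message unmodified */
      }
  }

  /* Hypothetical stand-in for queueing the message to the guest driver */
  static void vvu_forward_to_guest(const VhostUserMsgSketch *msg)
  {
      (void)msg;   /* in the real device this goes out on the rxq */
  }

  static void vvu_handle_master_msg(VhostUserMsgSketch *msg)
  {
      if (vvu_needs_translation(msg->request)) {
          /* map master memory / hook up doorbells, then forward */
      }
      vvu_forward_to_guest(msg);
  }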

After this patch series has been reviewed, we need to decide whether to follow
the original vhost-pci approach or to use this one.  Either way, both patch
series still require improvements before they can be merged.  Here are my todos
for this series:

 * Implement "Additional Device Resources over PCI" for shared memory,
   doorbells, and notifications instead of hardcoding a BAR with magic
   offsets into virtio-vhost-user:
   https://stefanha.github.io/virtio/vhost-user-slave.html#x1-2920007
 * Implement the VRING_KICK eventfd - currently vhost-user slaves must be poll
   mode drivers.
 * Optimize VRING_CALL doorbell with ioeventfd to avoid a QEMU exit (a
   rough sketch follows this list).
 * vhost-user log feature
 * UUID config field for stable device identification regardless of PCI
   bus addresses.
 * vhost-user IOMMU and SLAVE_REQ_FD feature
 * VhostUserMsg little-endian conversion for cross-endian support
 * Handle chardev disconnect using qemu_chr_fe_set_watch(), since CHR_CLOSED
   is only emitted while a read callback is registered and we don't keep a
   read callback registered all the time.
 * Drain txq on reconnection to discard stale messages.
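For the VRING_CALL doorbell item above, the optimization would roughly
take the following shape (sketch only, using QEMU's existing memory API;
the function name, offset, and access size are illustrative):

  #include "qemu/osdep.h"
  #include "exec/memory.h"
  #include "qemu/event_notifier.h"

  /* Let a guest write to the doorbell offset signal the callfd received
   * via VHOST_USER_SET_VRING_CALL directly in the kernel, without
   * bouncing through QEMU's MMIO write handler. */
  static void vvu_doorbell_set_ioeventfd(MemoryRegion *doorbell_mr,
                                         hwaddr offset,
                                         EventNotifier *call_notifier)
  {
      /* call_notifier wraps the callfd, e.g. via event_notifier_init_fd().
       * 2-byte writes of any value at `offset` now signal it directly. */
      memory_region_add_eventfd(doorbell_mr, offset, 2, false, 0,
                                call_notifier);
  }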

Stefan Hajnoczi (1):
  virtio-vhost-user: add virtio-vhost-user device

Wei Wang (1):
  vhost-user: share the vhost-user protocol related structures

 configure                                   |   18 +
 hw/virtio/Makefile.objs                     |    1 +
 hw/virtio/virtio-pci.h                      |   21 +
 include/hw/pci/pci.h                        |    1 +
 include/hw/virtio/vhost-user.h              |  106 +++
 include/hw/virtio/virtio-vhost-user.h       |   88 +++
 include/standard-headers/linux/virtio_ids.h |    1 +
 hw/virtio/vhost-user.c                      |  100 +--
 hw/virtio/virtio-pci.c                      |   61 ++
 hw/virtio/virtio-vhost-user.c               | 1047 +++++++++++++++++++++++++++
 hw/virtio/trace-events                      |   22 +
 11 files changed, 1367 insertions(+), 99 deletions(-)
 create mode 100644 include/hw/virtio/vhost-user.h
 create mode 100644 include/hw/virtio/virtio-vhost-user.h
 create mode 100644 hw/virtio/virtio-vhost-user.c

-- 
2.14.3



