Re: Outline for VHOST_USER_PROTOCOL_F_VDPA


From: Michael S. Tsirkin
Subject: Re: Outline for VHOST_USER_PROTOCOL_F_VDPA
Date: Wed, 30 Sep 2020 04:07:59 -0400

On Tue, Sep 29, 2020 at 07:38:24PM +0100, Stefan Hajnoczi wrote:
> On Tue, Sep 29, 2020 at 06:04:34AM -0400, Michael S. Tsirkin wrote:
> > On Tue, Sep 29, 2020 at 09:57:51AM +0100, Stefan Hajnoczi wrote:
> > > On Tue, Sep 29, 2020 at 02:09:55AM -0400, Michael S. Tsirkin wrote:
> > > > On Mon, Sep 28, 2020 at 10:25:37AM +0100, Stefan Hajnoczi wrote:
> > > > > Why extend vhost-user with vDPA?
> > > > > ================================
> > > > > Reusing VIRTIO emulation code for vhost-user backends
> > > > > -----------------------------------------------------
> > > > > It is a common misconception that a vhost device is a VIRTIO device.
> > > > > VIRTIO devices are defined in the VIRTIO specification and consist of a
> > > > > configuration space, virtqueues, and a device lifecycle that includes
> > > > > feature negotiation. A vhost device is a subset of the corresponding
> > > > > VIRTIO device. The exact subset depends on the device type, and some
> > > > > vhost devices are closer to the full functionality of their
> > > > > corresponding VIRTIO device than others. The best-known example is
> > > > > that vhost-net devices have rx/tx virtqueues but lack the virtio-net
> > > > > control virtqueue. Also, the configuration space and device lifecycle
> > > > > are only partially available to vhost devices.
> > > > > 
> > > > > This difference makes it impossible to use a VIRTIO device as a
> > > > > vhost-user device and vice versa. There is an impedance mismatch and
> > > > > missing functionality. That's a shame because existing VIRTIO device
> > > > > emulation code is mature and duplicating it to provide vhost-user
> > > > > backends creates additional work.
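As a rough illustration of that subset: for vhost-net the control virtqueue
stays under VMM emulation, so the feature set programmed into the backend is
not the full set negotiated with the guest. A minimal, hypothetical sketch of
that masking (not the actual QEMU code):

    #include <stdint.h>
    #include <linux/virtio_net.h>

    /* Hypothetical helper: compute the features to program into the vhost
     * backend -- the guest-negotiated set minus what the VMM keeps under
     * its own emulation (here the control virtqueue). */
    static uint64_t backend_feature_subset(uint64_t guest_acked_features)
    {
        return guest_acked_features & ~(1ULL << VIRTIO_NET_F_CTRL_VQ);
    }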
> > > > 
> > > > 
The biggest issue facing vhost-user, and absent in vDPA, is
backend disconnect handling. This is the reason the control path
is kept under QEMU control: we do not need any logic to
restore control path data, and we can verify that a new backend
is consistent with the old one.
> > > 
> > > I don't think using vhost-user with vDPA changes that. The VMM still
> > > needs to emulate a virtio-pci/ccw/mmio device that the guest interfaces
> > > with. If the device backend goes offline it's possible to restore that
> > > state upon reconnection. What have I missed?
> > 
> > The need to maintain the state in a way that is robust
> > against backend disconnects and can be restored.
> 
> QEMU is only bypassed for virtqueue accesses. Everything else still
> goes through the virtio-pci emulation in QEMU (VIRTIO configuration
> space, status register). vDPA doesn't change this.
> 
> Existing vhost-user messages can be kept if they are useful (e.g.
> virtqueue state tracking). So I think the situation is no different than
> with the existing vhost-user protocol.
> 
> > > Regarding reconnection in general, it currently seems like a partially
> > > solved problem in vhost-user. There is the "Inflight I/O tracking"
> > > mechanism in the spec and some wording about reconnecting the socket,
> > > but in practice I wouldn't expect all device types, VMMs, or device
> > > backends to actually support reconnection. This is an area where a
> > > uniform solution would be very welcome too.
> > 
> > I'm not aware of big issues. What are they?
> 
> I think "Inflight I/O tracking" can only be used when request processing
> is idempotent? In other words, it can only be used when submitting the
> same request multiple times is safe.


Not inherently; it just does not attempt to address this problem.


Inflight tracking only tries to address issues on the guest side,
that is, making sure the same buffer is used exactly once.
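For reference, a deliberately simplified sketch of that guest-side tracking;
the real shared-memory layout negotiated via VHOST_USER_GET/SET_INFLIGHT_FD
is more involved than this:

    #include <stdint.h>

    #define QUEUE_SIZE 256              /* example queue size */

    /* Lives in memory shared between VMM and backend, so it survives a
     * backend crash. Layout is illustrative only. */
    struct inflight_region {
        uint8_t inflight[QUEUE_SIZE];   /* 1 = fetched but not yet used */
    };

    static void on_fetch(struct inflight_region *r, uint16_t head)
    {
        r->inflight[head] = 1;          /* mark before processing starts */
    }

    static void on_complete(struct inflight_region *r, uint16_t head)
    {
        r->inflight[head] = 0;          /* clear after pushing to the used ring */
    }

    /* A restarted backend walks inflight[] and resubmits the descriptors
     * still marked -- which is exactly why the request itself must be safe
     * to execute more than once, as discussed below. */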

> A silly example where this recovery mechanism cannot be used is if a
> device has a persistent counter that is incremented by the request. The
> guest can't be sure that the counter will be incremented exactly once.
> 
> Another example: devices that support requests with compare-and-swap
> semantics cannot use this mechanism. During recovery the compare will
> fail if the request was just completing when the backend crashed.
> 
> Do I understand the limitations of this mechanism correctly? It doesn't
> seem general and I doubt it can be applied to all existing device types.

Devices with any kind of atomicity guarantee will
have to use some internal mechanism (e.g. a log?) to ensure
internal consistency; that is out of scope for inflight tracking.
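A bare-bones sketch of the kind of internal mechanism meant here (all names
invented): the backend logs completed request IDs in storage that survives a
restart and skips re-execution on replay.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define MAX_INFLIGHT 256            /* example bound */

    /* Hypothetical completion log kept alongside the device state. */
    struct completion_log {
        uint64_t ids[MAX_INFLIGHT];
        size_t n;
    };

    static bool already_completed(const struct completion_log *log, uint64_t id)
    {
        for (size_t i = 0; i < log->n; i++) {
            if (log->ids[i] == id) {
                return true;
            }
        }
        return false;
    }

    /* On replay of an inflight request: complete the descriptor without
     * repeating the side effect if the log says it already ran. */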



> > > There was discussion about recovering state in muser. The original idea
> > > was for the muser kernel module to host state that persists across
> > > device backend restart. That way the device backend can go away
> > > temporarily and resume without guest intervention.
> > > 
> > > Then when the vfio-user discussion started the idea morphed into simply
> > > keeping a tmpfs file for each device instance (no special muser.ko
> > > support needed anymore). This allows the device backend to resume
> > > without losing state. In practice a programming framework is needed to
> > > make this easy and safe to use but it boils down to a tmpfs mmap.
> > > 
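The tmpfs idea boils down to something like the following sketch (path,
function name and error handling are illustrative only):

    #include <stddef.h>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Map per-instance device state from a tmpfs-backed file so that the
     * contents outlive the device backend process. Hypothetical sketch. */
    static void *map_device_state(const char *path, size_t size)
    {
        int fd = open(path, O_RDWR | O_CREAT, 0600);  /* e.g. under /dev/shm */
        ftruncate(fd, size);
        void *state = mmap(NULL, size, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        close(fd);      /* the shared mapping keeps the data reachable */
        return state;   /* a restarted backend re-maps the same file */
    }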
> > > > > If there was a way to reuse existing VIRTIO device emulation code it
> > > > > would be easier to move to a multi-process architecture in QEMU. Want
> > > > > to run --netdev user,id=netdev0 --device virtio-net-pci,netdev=netdev0
> > > > > in a separate, sandboxed process? Easy, run it as a vhost-user-net
> > > > > device instead of as virtio-net.
> > > > 
> > > > Given vhost-user is using a socket, and given there's an elaborate
> > > > protocol due to the need for backwards compatibility, it seems safer to
> > > > have the vhost-user interface in a separate process too.
> > > 
> > > Right, with vhost-user only the virtqueue processing is done in the
> > > device backend. The VMM still has to do the virtio transport emulation
> > > (pci, ccw, mmio) and vhost-user connection lifecycle, which is complex.
> > 
> > IIUC all vfio-user does is add another protocol in the VMM
> > and move code out of the VMM to the backend.
> > 
> > Architecturally I don't see why it's safer.
> 
> It eliminates one layer of device emulation (virtio-pci). Fewer
> registers to emulate means a smaller attack surface.

Well, it does not eliminate it as such; it moves it to the backend,
which in a variety of setups is actually a more sensitive
place, as the backend can do things like access host
storage/networking that the VMM can be prevented from doing.

> It's possible to take things further, maybe with the proposed ioregionfd
> mechanism, where the VMM's KVM_RUN loop no longer handles MMIO/PIO
> exits. A separate process can handle them. Maybe some platform devices
> need CPU state access though.
> 
> BTW I think the goal of removing as much emulation from the VMM as
> possible is interesting.
> 
> Did you have some other approach in mind to remove the PCI and
> virtio-pci device from the VMM?

Architecturally, I think we can have 3 processes:


VMM -- guest device emulation -- host backend


To me this looks like strengthening our defence in depth,
as opposed to just shifting things around ...




> > Something like multi-process patches seems like a way to
> > add defence in depth by having a process in the middle,
> > outside both VMM and backend.
> 
> There is no third process in mpqemu. The VMM uses a UNIX domain socket
> to communicate directly with the device backend. There is a PCI "proxy"
> device in the VMM that does this communication when the guest accesses
> registers. The device backend has a PCI "remote" host controller that a
> PCIDevice instance is plugged into and the UNIX domain socket protocol
> commands are translated into PCIDevice operations.
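In other words, a guest register access on the proxy side becomes a message on
the socket, roughly along these lines (message format and names invented for
illustration, not the actual mpqemu protocol):

    #include <stdint.h>
    #include <sys/socket.h>

    enum { CMD_BAR_READ, CMD_BAR_WRITE };   /* invented command codes */

    struct proxy_msg {
        uint8_t  cmd;
        uint8_t  bar;
        uint32_t size;
        uint64_t addr;
        uint64_t val;
    };

    /* VMM side: the proxy PCIDevice forwards a guest MMIO read over the
     * UNIX domain socket; the remote end turns it into a PCIDevice
     * memory-region access and sends the value back. */
    static uint64_t proxy_bar_read(int sock, uint8_t bar, uint64_t addr,
                                   uint32_t size)
    {
        struct proxy_msg msg = {
            .cmd = CMD_BAR_READ, .bar = bar, .addr = addr, .size = size,
        };
        send(sock, &msg, sizeof(msg), 0);
        recv(sock, &msg, sizeof(msg), MSG_WAITALL);
        return msg.val;
    }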

Yes, but does anything prevent us from further splitting the backend
up into an emulation part and a host-side part?


> This is exactly the same as vfio-user. The only difference is that
> vfio-user uses an existing set of commands, whereas mpqemu defines a new
> protocol that will eventually need to provide equivalent functionality.
>
> > > Going back to Marc-André's point, why don't we focus on vfio-user so
> > > the entire device can be moved out of the VMM?
> > > 
> > > Stefan
> > 
> > The fact that vfio-user adds a kernel component is one issue.
> 
> vfio-user only needs a UNIX domain socket. The muser.ko kernel module
> that was discussed after last KVM Forum is not used by vfio-user.
> 
> Stefan

Sorry, I will need to go and read the doc, which I haven't done yet; sorry
about that.

-- 
MST



