
Re: Out-of-Process Device Emulation session at KVM Forum 2020


From: Jason Wang
Subject: Re: Out-of-Process Device Emulation session at KVM Forum 2020
Date: Mon, 2 Nov 2020 10:51:18 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.10.0


On 2020/10/30 9:15 PM, Stefan Hajnoczi wrote:
On Fri, Oct 30, 2020 at 12:08 PM Jason Wang <jasowang@redhat.com> wrote:
On 2020/10/30 7:13 PM, Stefan Hajnoczi wrote:
On Fri, Oct 30, 2020 at 9:46 AM Jason Wang <jasowang@redhat.com> wrote:
On 2020/10/30 2:21 PM, Stefan Hajnoczi wrote:
On Fri, Oct 30, 2020 at 3:04 AM Alex Williamson
<alex.williamson@redhat.com> wrote:
It's great to revisit ideas, but proclaiming a uAPI is bad solely
because the data transfer is opaque, without defining why that's bad,
evaluating the feasibility and implementation of defining a well
specified data format rather than protocol, including cross-vendor
support, or proposing any sort of alternative is not so helpful imo.
The migration approaches in VFIO and vDPA/vhost were designed for
different requirements and I think this is why there are different
perspectives on this. Here is a comparison and how VFIO could be
extended in the future. I see 3 levels of device state compatibility:

1. The device cannot save/load state blobs, instead userspace fetches
and restores specific values of the device's runtime state (e.g. last
processed ring index). This is the vhost approach.

2. The device can save/load state in a standard format. This is
similar to #1 except that there is a single read/write blob interface
instead of fine-grained get_FOO()/set_FOO() interfaces. This approach
pushes the migration state parsing into the device so that userspace
doesn't need knowledge of every device type. With this approach it is
possible for a device from vendor A to migrate to a device from vendor
B, as long as they both implement the same standard migration format.
The limitation of this approach is that vendor-specific state cannot
be transferred.

3. The device can save/load opaque blobs. This is the initial VFIO
approach.
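
For illustration, here is a minimal sketch of the interface difference between levels 1 and 2. Everything below is hypothetical C, not actual vhost or VFIO uAPI:

/* Level 1 (vhost-style): userspace reads and writes individual,
 * well-known pieces of device state through fine-grained accessors,
 * so it needs knowledge of every field of every device type. */

#include <stdint.h>
#include <sys/types.h>

struct ring_state {
    uint16_t last_avail_idx;   /* last ring index processed by the device */
};

int get_vring_state(int dev_fd, unsigned vq, struct ring_state *out);
int set_vring_state(int dev_fd, unsigned vq, const struct ring_state *in);

/* Level 2: a single save/load blob in a standard, versioned format.
 * The VMM only moves bytes; the devices on both ends parse them, so
 * any two devices implementing the same format can interoperate. */

struct state_header {
    uint32_t magic;            /* identifies the standard format */
    uint32_t version;          /* format revision for compatibility checks */
    uint64_t size;             /* bytes of device state that follow */
};

ssize_t save_state(int dev_fd, void *buf, size_t len);
ssize_t load_state(int dev_fd, const void *buf, size_t len);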
I still don't get why it must be opaque.
If the device state format needs to be in the VMM then each device
needs explicit enablement in each VMM (QEMU, cloud-hypervisor, etc).

Let's invert the question: why does the VMM need to understand the
device state of a _passthrough_ device?

For better manageability, compatibility and debuggability. If we depend
on an opaque structure, do we encourage each device to implement its own
migration protocol? That would be very challenging.

For VFIO in the kernel, I suspect a uAPI where opaque data is read or
written on behalf of the guest violates the Linux uAPI principles. It
will be very hard, or even impossible, to maintain the uABI. It looks to
me like VFIO is the first subsystem trying to do this.
I think our concepts of uAPI are different. The uAPI of read(2) and
write(2) does not define the structure of the data buffers. VFIO
device regions are exactly the same: the structure of the data is not
defined by the kernel uAPI.
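
As a concrete example of that analogy, a VMM save loop over a migration region might look like the sketch below. The header fields loosely follow the v1 vfio_device_migration_info proposal under discussion at the time; treat the exact layout and semantics as assumptions:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Header at the start of the migration region (assumed layout). */
struct migration_info {
    uint32_t device_state;
    uint32_t reserved;
    uint64_t pending_bytes;   /* state left to transfer */
    uint64_t data_offset;     /* where the data window starts */
    uint64_t data_size;       /* bytes currently valid in the window */
};

static int save_device_state(int dev_fd, off_t region, FILE *stream)
{
    uint64_t pending, off, size;
    char buf[65536];

    for (;;) {
        /* Reading the header fields is well-defined uAPI. */
        if (pread(dev_fd, &pending, sizeof(pending),
                  region + offsetof(struct migration_info, pending_bytes)) < 0)
            return -1;
        if (pending == 0)
            break;            /* device has no more state to report */

        pread(dev_fd, &off, sizeof(off),
              region + offsetof(struct migration_info, data_offset));
        pread(dev_fd, &size, sizeof(size),
              region + offsetof(struct migration_info, data_size));

        /* The bytes in the data window are opaque to the VMM: it just
         * copies them into the migration stream without parsing them. */
        while (size > 0) {
            size_t n = size < sizeof(buf) ? (size_t)size : sizeof(buf);
            ssize_t r = pread(dev_fd, buf, n, region + (off_t)off);
            if (r <= 0)
                return -1;
            fwrite(buf, 1, (size_t)r, stream);
            off += (uint64_t)r;
            size -= (uint64_t)r;
        }
    }
    return 0;
}

Only the header layout is defined by the kernel here; the copied bytes remain an opaque vendor blob, which is exactly the point under debate.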


I think we're talking about different things. It's not about the data structure, it's about whether the data that is read from the kernel can be understood by userspace.



Maybe microcode and firmware loading is an example we agree on?


I think not. They are bytecodes that:

1) have strict ABI definitions
2) are understood by userspace



A device from vendor A cannot migrate to a device from vendor
vendor B because the format is incompatible. This approach works well
when devices have unique guest-visible hardware interfaces so the
guest wouldn't be able to handle migrating a device from vendor A to a
device from vendor B anyway.
For VFIO I guess cross-vendor live migration can't succeed unless we
cheat with the device/vendor IDs.
Yes. I haven't looked into the details of PCI (Sub-)Device/Vendor IDs
and how best to enable migration, but I hope that can be solved. The
simplest approach is to override the IDs and make them part of the
guest configuration.

That would be very tricky (or require a whitelist). E.g. the opaque data
of the src may match the opaque data of the dst by chance.
Luckily identifying things based on magic constants has been solved
many times in the past.

A central identifier registry prevents all collisions but is a pain to
manage. Or use a 128-bit UUID and self-allocate the identifier with an
extremely low chance of collision:
https://en.wikipedia.org/wiki/Universally_unique_identifier#Collisions
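
As a hypothetical sketch (none of this is existing uAPI): the device state blob could start with a self-allocated 128-bit format UUID that the destination checks before loading anything:

#include <stdint.h>
#include <string.h>

/* Assumed blob header: whoever defines a migration format generates a
 * random UUID for it once; no central registry is needed because 128
 * random bits make an accidental collision effectively impossible. */
struct state_header {
    uint8_t  format_uuid[16];
    uint32_t version;          /* revision within that format */
};

/* The destination advertises the format UUID(s) it can load and
 * refuses state it does not recognize. */
static int can_load(const struct state_header *hdr,
                    const uint8_t supported_uuid[16],
                    uint32_t max_version)
{
    return memcmp(hdr->format_uuid, supported_uuid, 16) == 0 &&
           hdr->version <= max_version;
}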


I may be missing something. I think we're talking about cross-vendor live migration.

Would you want the src and dst to have the same UUID or not?

If they have different UUIDs, how could we know we can live-migrate between them?

If they have the same UUID, what rule forces the vendors to choose the same UUID (a spec)?

Thanks



For virtio at least, they will still go with virtio/vDPA. The advantages
are:

1) virtio/vDPA can serve kernel subsystems, which VFIO can't; this is
very important for containers
I'm not sure I understand this. If the kernel wants to use the device
then it doesn't use VFIO; it runs the kernel driver instead.

The current spec is not suitable for all types of devices. We've received
a lot of feedback that virtio(pci) might not work very well. Another
point is that there could be vendors that don't want to go with the
virtio control path. The Mellanox mlx5 vDPA driver is one example. Yes,
they could use mlx5_en, but there are vendors that want to build a
vendor-specific control path from scratch.
Okay, I think I understand what you mean now. This is the reason why vDPA exists.
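
As a rough illustration of that split, modeled loosely on the kernel's vdpa_config_ops (the exact names and signatures here are illustrative): the vDPA core always sees standard, fine-grained virtio state, while how the vendor driver implements it stays private.

#include <stdint.h>

/* Virtio-defined virtqueue state the core can always get and set, so
 * migration never depends on an opaque vendor blob. */
struct vq_state {
    uint16_t avail_index;      /* last available ring index */
};

struct vdpa_ops {
    int (*get_vq_state)(void *dev, uint16_t qid, struct vq_state *state);
    int (*set_vq_state)(void *dev, uint16_t qid,
                        const struct vq_state *state);
    /* How these are implemented (firmware commands, PCI registers,
     * mailboxes, ...) is the vendor's private control path. */
};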

Stefan




