On Fri, Oct 30, 2020 at 3:04 AM Alex Williamson
<alex.williamson@redhat.com> wrote:
> It's great to revisit ideas, but proclaiming a uAPI is bad solely
> because the data transfer is opaque, without defining why that's bad,
> evaluating the feasibility and implementation of defining a well
> specified data format rather than protocol, including cross-vendor
> support, or proposing any sort of alternative is not so helpful imo.

The migration approaches in VFIO and vDPA/vhost were designed for
different requirements and I think this is why there are different
perspectives on this. Here is a comparison, and a sketch of how VFIO
could be extended in the future. I see 3 levels of device state
compatibility:
1. The device cannot save/load state blobs, instead userspace fetches
and restores specific values of the device's runtime state (e.g. last
processed ring index). This is the vhost approach.
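To illustrate the fine-grained style, here is a minimal userspace sketch. The struct and function names are invented for this example (the real vhost uAPI works in this spirit, e.g. VHOST_GET_VRING_BASE returns the last processed ring index):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-ring runtime state, illustrative only. */
struct ring_state {
    uint16_t last_avail_idx;  /* last descriptor index processed */
    uint16_t last_used_idx;   /* last index written back to the guest */
};

/* Source side: userspace fetches each value individually. */
static void get_ring_state(const struct ring_state *dev,
                           struct ring_state *out)
{
    out->last_avail_idx = dev->last_avail_idx;
    out->last_used_idx = dev->last_used_idx;
}

/* Destination side: userspace restores each value individually. */
static void set_ring_state(struct ring_state *dev,
                           const struct ring_state *in)
{
    dev->last_avail_idx = in->last_avail_idx;
    dev->last_used_idx = in->last_used_idx;
}
```

The cost of this level is that userspace must know every field of every device type it migrates.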
2. The device can save/load state in a standard format. This is
similar to #1 except that there is a single read/write blob interface
instead of fine-grained get_FOO()/set_FOO() interfaces. This approach
pushes the migration state parsing into the device so that userspace
doesn't need knowledge of every device type. With this approach it is
possible for a device from vendor A to migrate to a device from vendor
B, as long as they both implement the same standard migration format.
The limitation of this approach is that vendor-specific state cannot
be transferred.
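A sketch of the single-blob interface, assuming a hypothetical fixed-layout standard format; the point is that parsing lives behind dev_save()/dev_load() rather than in per-field userspace accessors:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* One opaque-to-userspace buffer holding the whole device state. */
struct blob { uint8_t data[64]; size_t len; };

/* Hypothetical standard state layout (invented for illustration). */
struct dev_state { uint32_t ring_index; uint32_t features; };

/* Device side: emit the whole state as one blob in the standard format. */
static void dev_save(const struct dev_state *s, struct blob *b)
{
    memcpy(b->data, s, sizeof(*s));
    b->len = sizeof(*s);
}

/* Device side: parse the blob; userspace never looks inside it. */
static int dev_load(struct dev_state *s, const struct blob *b)
{
    if (b->len != sizeof(*s))
        return -1;  /* not a state blob this device understands */
    memcpy(s, b->data, b->len);
    return 0;
}
```

Because both vendors implement the same save/load format, the blob produced by vendor A's dev_save() is accepted by vendor B's dev_load().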
3. The device can save/load opaque blobs. This is the initial VFIO
approach.
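With opaque blobs, userspace is just a pipe: it shuttles bytes from source to destination without interpreting them. A userspace-only sketch of that loop (the chunked read/write functions stand in for accesses to the VFIO migration interface; none of these names are the real uAPI):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Fake "device" whose migration state is an opaque byte buffer. */
struct fake_dev { unsigned char state[256]; size_t state_len; };

/* Stand-in for reading a chunk of migration data from the source
 * device; returns the number of bytes produced (0 when done). */
static size_t dev_read_state(struct fake_dev *dev, size_t off,
                             void *buf, size_t len)
{
    size_t n = dev->state_len - off;
    if (n > len)
        n = len;
    memcpy(buf, dev->state + off, n);
    return n;
}

/* Stand-in for writing a chunk of migration data to the destination. */
static void dev_write_state(struct fake_dev *dev, size_t off,
                            const void *buf, size_t len)
{
    memcpy(dev->state + off, buf, len);
    dev->state_len = off + len;
}

/* Userspace migration loop: copy opaque chunks, never parse them. */
static void migrate(struct fake_dev *src, struct fake_dev *dst)
{
    unsigned char chunk[32];
    size_t off = 0, n;

    while ((n = dev_read_state(src, off, chunk, sizeof(chunk))) > 0) {
        dev_write_state(dst, off, chunk, n);
        off += n;
    }
}
```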
Support for standard formats (#2) could be added to VFIO as follows:
1. The VFIO migration blob starts with a unique format identifier such
as a UUID. This way the destination device can identify standard
device state formats and parse them.
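One way to picture step 1, with a hypothetical header layout (the field names and the UUID value are invented for illustration, not taken from any spec):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical migration blob header: the leading UUID identifies the
 * device state format so the destination can decide how to parse it. */
struct blob_header {
    uint8_t format_uuid[16];  /* unique format identifier */
    uint32_t version;         /* format revision */
    uint64_t payload_len;     /* bytes of device state that follow */
};

/* Example "standard format" UUID -- an invented value. */
static const uint8_t STD_FORMAT_UUID[16] = {
    0xde, 0xad, 0xbe, 0xef, 0x00, 0x11, 0x22, 0x33,
    0x44, 0x55, 0x66, 0x77, 0x88, 0x99, 0xaa, 0xbb,
};

/* Destination-side check: is this a format we know how to parse?
 * An unrecognized UUID is still valid -- it's a vendor-specific
 * opaque blob that only the matching device can load. */
static int is_standard_format(const struct blob_header *h)
{
    return memcmp(h->format_uuid, STD_FORMAT_UUID, 16) == 0;
}
```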
2. The VFIO device state ioctl is extended so userspace can enumerate
and select device state formats. This way it's possible to check
available formats on the source and destination devices before
migration and to configure the source device to produce device state
in a common format.
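Step 2 amounts to intersecting the format lists each device reports and programming the source with a common one. A userspace-only sketch of that negotiation (the real extension would be new VFIO ioctls; every name and ID here is hypothetical):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical format IDs a device might enumerate. */
enum mig_format {
    MIG_FMT_VENDOR_A = 1,   /* vendor A's opaque format */
    MIG_FMT_VENDOR_B = 2,   /* vendor B's opaque format */
    MIG_FMT_STD_NVME = 100, /* an imagined standard NVMe state format */
};

/* Stand-in for the result of a "list supported formats" query. */
struct format_list { const enum mig_format *fmts; size_t count; };

/* Pick the first format supported by both source and destination,
 * or 0 if they share none (migration must then be refused). */
static enum mig_format pick_common_format(struct format_list src,
                                          struct format_list dst)
{
    for (size_t i = 0; i < src.count; i++)
        for (size_t j = 0; j < dst.count; j++)
            if (src.fmts[i] == dst.fmts[j])
                return src.fmts[i];
    return 0;
}
```

Management software would run this check before starting migration, then select the common format on the source device.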
To me it seems #3 makes sense as an initial approach for VFIO since
guest-visible hardware interfaces are often not compatible between PCI
devices. #2 can be added in the future, especially when VFIO drivers
from different vendors become available that present the same
guest-visible hardware interface (NVMe, VIRTIO, etc.).