

From: Alex Williamson
Subject: Re: [Qemu-devel] [RFC PATCH v4 0/3] Add Mediated device support [was: Add vGPU support]
Date: Sat, 28 May 2016 08:56:30 -0600

On Fri, 27 May 2016 22:43:54 +0000
"Tian, Kevin" <address@hidden> wrote:

> > From: Alex Williamson [mailto:address@hidden
> > Sent: Friday, May 27, 2016 10:55 PM
> > 
> > On Fri, 27 May 2016 11:02:46 +0000
> > "Tian, Kevin" <address@hidden> wrote:
> >   
> > > > From: Alex Williamson [mailto:address@hidden
> > > > Sent: Wednesday, May 25, 2016 9:44 PM
> > > >
> > > > On Wed, 25 May 2016 07:13:58 +0000
> > > > "Tian, Kevin" <address@hidden> wrote:
> > > >  
> > > > > > From: Kirti Wankhede [mailto:address@hidden
> > > > > > Sent: Wednesday, May 25, 2016 3:58 AM
> > > > > >
> > > > > > This series adds Mediated device support to the v4.6 Linux host
> > > > > > kernel. The purpose of this series is to provide a common interface
> > > > > > for mediated device management that can be used by different
> > > > > > devices. This series introduces an Mdev core module that creates
> > > > > > and manages mediated devices, a VFIO-based driver for the mediated
> > > > > > PCI devices created by the Mdev core module, and updates to the
> > > > > > VFIO type1 IOMMU module to support mediated devices.  
> > > > >
> > > > > Thanks. "Mediated device" is more generic than the previous one. :-)
> > > > >  
> > > > > >
> > > > > > What's new in v4?
> > > > > > - Renamed the 'vgpu' module to the 'mdev' module to reflect the
> > > > > >   generic term 'Mediated device'.
> > > > > > - Moved the mdev directory into the drivers/vfio directory, as this
> > > > > >   is an extension of the VFIO APIs for mediated devices.
> > > > > > - Updated the mdev driver so that multiple types of drivers can
> > > > > >   register with the mdev_bus_type bus.
> > > > > > - Updated the mdev core driver with mdev_put_device() and
> > > > > >   mdev_get_device() for mediated devices.
> > > > > >
> > > > > >  
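
A rough sketch of what registration on the mdev bus and the get/put reference
helpers named above could look like follows; only mdev_bus_type,
mdev_get_device() and mdev_put_device() come from the cover letter, everything
else is illustrative and the real patches may differ:

    /* Sketch only: a hypothetical vendor driver binding to the mdev bus.
     * mdev_bus_type is exported by the mdev core module in this series;
     * all other identifiers are illustrative. */
    #include <linux/device.h>
    #include <linux/module.h>

    extern struct bus_type mdev_bus_type;     /* from the mdev core module */

    static int my_vendor_probe(struct device *dev)
    {
            /* dev is a mediated device created by the mdev core; take a
             * reference so it cannot vanish while we drive it.  In the
             * series, mdev_get_device()/mdev_put_device() would wrap
             * get_device()/put_device() for this purpose. */
            get_device(dev);
            return 0;
    }

    static int my_vendor_remove(struct device *dev)
    {
            put_device(dev);
            return 0;
    }

    static struct device_driver my_vendor_driver = {
            .name   = "my-vendor-mdev",
            .bus    = &mdev_bus_type,
            .owner  = THIS_MODULE,
            .probe  = my_vendor_probe,
            .remove = my_vendor_remove,
    };

    static int __init my_vendor_init(void)
    {
            return driver_register(&my_vendor_driver);
    }

    static void __exit my_vendor_exit(void)
    {
            driver_unregister(&my_vendor_driver);
    }

    module_init(my_vendor_init);
    module_exit(my_vendor_exit);
    MODULE_LICENSE("GPL");
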
> > > > >
> > > > > Just curious. In this version you move the whole mdev core under
> > > > > VFIO. Sorry if I missed any agreement on this change. IIRC Alex
> > > > > didn't want VFIO to manage the mdev life-cycle directly; instead,
> > > > > VFIO would just be an mdev driver on top of the created mediated
> > > > > devices....  
> > > >
> > > > I did originally suggest keeping them separate, but as we've progressed
> > > > through the implementation, it's become more clear that the mediated
> > > > device interface is very much tied to the vfio interface, acting mostly
> > > > as a passthrough.  So I thought it made sense to pull them together.
> > > > Still open to discussion of course.  Thanks,
> > > >  
> > >
> > > The main benefit of maintaining a separate mdev framework, IMHO, is
> > > to allow better support of both KVM and Xen. Xen doesn't work with VFIO
> > > today, because other VMs' memory is not allocated from Dom0, which
> > > means VFIO within Dom0 doesn't have the view/permission to control
> > > isolation for other VMs.  
> > 
> > Isn't this just a matter of the vfio iommu model selected?  There could
> > be a vfio-iommu-xen that knows how to do the grant calls.
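
Concretely, vfio IOMMU backends plug in through vfio_register_iommu_driver(),
so a vfio-iommu-xen would mostly mean supplying another set of ops whose
map/unmap paths issue grant-table/IOMMU hypercalls instead of calling the host
IOMMU API. A skeleton sketch, with the Xen side only indicated in comments
(nothing below is from the series itself):

    /* Hypothetical vfio-iommu-xen backend skeleton; the ops structure and
     * registration call are existing vfio interfaces, the Xen logic is
     * only sketched in comments. */
    #include <linux/module.h>
    #include <linux/slab.h>
    #include <linux/iommu.h>
    #include <linux/vfio.h>

    struct vfio_iommu_xen {
            struct list_head dma_list;   /* per-container bookkeeping */
    };

    static void *vfio_iommu_xen_open(unsigned long arg)
    {
            /* allocate per-container state; arg selects the iommu type */
            return kzalloc(sizeof(struct vfio_iommu_xen), GFP_KERNEL);
    }

    static void vfio_iommu_xen_release(void *iommu_data)
    {
            kfree(iommu_data);
    }

    static long vfio_iommu_xen_ioctl(void *iommu_data,
                                     unsigned int cmd, unsigned long arg)
    {
            /* VFIO_IOMMU_MAP_DMA / VFIO_IOMMU_UNMAP_DMA would be turned
             * into grant/IOMMU hypercalls here rather than host
             * iommu_map() calls. */
            return -ENOTTY;
    }

    static int vfio_iommu_xen_attach_group(void *iommu_data,
                                           struct iommu_group *group)
    {
            return 0;
    }

    static void vfio_iommu_xen_detach_group(void *iommu_data,
                                            struct iommu_group *group)
    {
    }

    static const struct vfio_iommu_driver_ops vfio_iommu_xen_ops = {
            .name         = "vfio-iommu-xen",
            .owner        = THIS_MODULE,
            .open         = vfio_iommu_xen_open,
            .release      = vfio_iommu_xen_release,
            .ioctl        = vfio_iommu_xen_ioctl,
            .attach_group = vfio_iommu_xen_attach_group,
            .detach_group = vfio_iommu_xen_detach_group,
    };

    static int __init vfio_iommu_xen_init(void)
    {
            return vfio_register_iommu_driver(&vfio_iommu_xen_ops);
    }
    module_init(vfio_iommu_xen_init);
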
> >   
> > > However, after some thinking, I think it might not be a big problem to
> > > combine VFIO and mdev, if we extend Xen to just use VFIO for
> > > resource enumeration. In such a model, VFIO still behaves as a single
> > > kernel portal to enumerate mediated devices to user space, but gives up
> > > permission control to Qemu, which will request a secure agent - the Xen
> > > hypervisor - to ensure isolation of VM usage of the mediated device
> > > (including EPT/IOMMU configuration).  
> > 
> > The whole point here is to use the vfio user api, and we seem to be
> > progressing towards using vfio-core as a conduit where the mediated
> > driver api is also fairly vfio-ish.  So it seems we're really headed
> > towards a vfio-mediated device rather than some sort of generic mediated
> > driver interface.  I would object to leaving permission control to
> > QEMU; QEMU is just a vfio user, and there are others like DPDK.  The
> > kernel needs to be in charge of protecting itself and users from each
> > other.  QEMU can't do this, which is part of the reason that KVM has
> > moved to vfio rather than the pci-sysfs resource interface.
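
The vfio user api referred to here is the documented container/group/device
flow, which any user (QEMU, DPDK, or otherwise) goes through and which the
kernel validates at every step. Roughly, with error handling omitted and the
group number and device address as examples only:

    /* Minimal userspace vfio flow (see Documentation/vfio.txt);
     * group 26 and device 0000:06:0d.0 are illustrative. */
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/vfio.h>

    int main(void)
    {
            int container = open("/dev/vfio/vfio", O_RDWR);
            int group     = open("/dev/vfio/26", O_RDWR);

            ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
            ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

            /* The kernel validates, pins and maps this range; userspace
             * never programs the IOMMU directly. */
            struct vfio_iommu_type1_dma_map map = {
                    .argsz = sizeof(map),
                    .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
                    .vaddr = (__u64)(unsigned long)mmap(NULL, 1 << 20,
                                    PROT_READ | PROT_WRITE,
                                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0),
                    .iova  = 0,
                    .size  = 1 << 20,
            };
            ioctl(container, VFIO_IOMMU_MAP_DMA, &map);

            int device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD,
                               "0000:06:0d.0");
            return device < 0;
    }
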
> >   
> > > I'm not sure whether VFIO can support this usage today. It is somewhat
> > > similar to channel I/O passthrough on s390, where we also rely on Qemu
> > > to mediate ccw commands to ensure isolation. Maybe just some slight
> > > extension is required (e.g. not assuming that some API must be
> > > invoked). Of course the Qemu-side vfio code also needs some changes.
> > > If this can work, at least we can first use it as the enumeration
> > > interface for mediated devices in Xen. In the future it may be extended
> > > to cover normal Xen PCI assignment as well, instead of reading PCI
> > > resources through sysfs as is done today.  
> > 
> > The channel I/O proposal doesn't rely on QEMU for security either; the
> > mediation occurs in the host kernel, parsing the ccw command program
> > and doing translations to replace the guest physical addresses with
> > verified and pinned host physical addresses before submitting the
> > program to be run.  A mediated device is policed by the mediated
> > vendor driver in the host kernel; QEMU is untrusted, just like any
> > other user.
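
That translate-and-pin pattern, reduced to a sketch; struct cmd_desc,
gpa_to_pinned_hpa() and submit_to_hw() are invented placeholders, not
interfaces from this series:

    /* Sketch of host-side mediation: guest-supplied addresses are
     * verified, translated and pinned by the host kernel before anything
     * reaches hardware.  All identifiers below are placeholders. */
    static int mediate_and_submit(struct cmd_desc *cmds, int ncmds)
    {
            int i, ret;

            for (i = 0; i < ncmds; i++) {
                    u64 hpa;

                    /* reject addresses outside what this VM may touch,
                     * then pin the backing pages so they cannot move
                     * under DMA */
                    ret = gpa_to_pinned_hpa(cmds[i].guest_addr, &hpa);
                    if (ret)
                            return ret;       /* untrusted input: fail hard */

                    cmds[i].dma_addr = hpa;   /* rewrite before submission */
            }

            return submit_to_hw(cmds, ncmds); /* only verified addresses */
    }
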
> > 
> > If Xen is currently using pci-sysfs for mapping device resources, then
> > vfio should be directly usable, which leaves the IOMMU interfaces, such
> > as pinning and mapping user memory and making use of the IOMMU API.
> > That part of vfio is fairly modular, though IOMMU groups are a fairly
> > fundamental concept within the core.  Thanks,
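
For comparison, the resource-mapping step is nearly the same amount of
userspace code either way; vfio simply describes regions through an ioctl on
the device fd instead of sysfs files. A small sketch (the sysfs path and BDF
are examples):

    /* Map BAR0 through a vfio device fd; the pci-sysfs equivalent would
     * be mmap() on /sys/bus/pci/devices/0000:06:0d.0/resource0. */
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/vfio.h>

    static void *map_bar0(int device_fd)
    {
            struct vfio_region_info reg = {
                    .argsz = sizeof(reg),
                    .index = VFIO_PCI_BAR0_REGION_INDEX,
            };

            if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &reg) < 0)
                    return MAP_FAILED;

            return mmap(NULL, reg.size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, device_fd, reg.offset);
    }
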
> >   
> 
> My impression was that you don't like hypervisor-specific things in VFIO,
> which makes it a bit tricky to accomplish those tasks in the kernel. If we 
> can add Xen-specific logic directly in VFIO (like the vfio-iommu-xen you 
> mentioned), the whole thing would be easier.

If vfio is hosted in dom0, then Xen is the platform and we need to
interact with the hypervisor to manage the iommu.  That said, there are
aspects of vfio that do not seem to map well to a hypervisor-managed
iommu or a Xen-like hypervisor.  For instance, how does dom0 manage
iommu groups, and what's the distinction between using vfio to manage a
userspace driver in dom0 and managing a device for another domain?
In the case of kvm, vfio has no dependency on kvm; there is some minor
interaction, but we're not running on kvm, and it's not appropriate to
use vfio as a gateway to interact with a hypervisor that may or may not
exist.  Thanks,

Alex


