qemu-devel

From: Tian, Kevin
Subject: Re: [Qemu-devel] [RFC PATCH v1 1/1] vGPU core driver : to provide common interface for vGPU.
Date: Tue, 16 Feb 2016 08:10:42 +0000

> From: Neo Jia [mailto:address@hidden]
> Sent: Tuesday, February 16, 2016 3:53 PM
> 
> On Tue, Feb 16, 2016 at 07:40:47AM +0000, Tian, Kevin wrote:
> > > From: Neo Jia [mailto:address@hidden]
> > > Sent: Tuesday, February 16, 2016 3:37 PM
> > >
> > > On Tue, Feb 16, 2016 at 07:27:09AM +0000, Tian, Kevin wrote:
> > > > > From: Neo Jia [mailto:address@hidden]
> > > > > Sent: Tuesday, February 16, 2016 3:13 PM
> > > > >
> > > > > On Tue, Feb 16, 2016 at 06:49:30AM +0000, Tian, Kevin wrote:
> > > > > > > From: Alex Williamson [mailto:address@hidden]
> > > > > > > Sent: Thursday, February 04, 2016 3:33 AM
> > > > > > >
> > > > > > > On Wed, 2016-02-03 at 09:28 +0100, Gerd Hoffmann wrote:
> > > > > > > >   Hi,
> > > > > > > >
> > > > > > > > > Actually I have long been puzzled in this area. libvirt will
> > > > > > > > > definitely use a UUID to mark a VM, and obviously the UUID is
> > > > > > > > > not recorded within KVM. Then how does libvirt talk to KVM
> > > > > > > > > based on the UUID? It could be a good reference for this
> > > > > > > > > design.
> > > > > > > >
> > > > > > > > libvirt keeps track of which qemu instance belongs to which vm.
> > > > > > > > qemu also gets started with "-uuid ...", so one can query qemu
> > > > > > > > via the monitor ("info uuid") to figure out what the uuid is.
> > > > > > > > It is also in the smbios tables, so the guest can see it in the
> > > > > > > > system information table.
> > > > > > > >
> > > > > > > > The uuid is not visible to the kernel though; the kvm kernel
> > > > > > > > driver doesn't know what the uuid is (and neither does vfio).
> > > > > > > > qemu uses file handles to talk to both kvm and vfio.  qemu
> > > > > > > > notifies both kvm and vfio about any relevant events (guest
> > > > > > > > address space changes etc.) and connects file descriptors
> > > > > > > > (eventfd -> irqfd).
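A small sketch of the monitor query described above: "info uuid" is the human monitor (HMP) command, and QMP exposes the same information as `query-uuid`. The helpers below only build and parse the QMP messages, no running qemu is assumed, and the reply UUID in the docstring is made up:

```python
import json
import uuid

def qmp_query_uuid_cmd():
    """Build the QMP request equivalent to the HMP command 'info uuid'."""
    return json.dumps({"execute": "query-uuid"})

def parse_qmp_uuid_reply(line):
    """Parse a QMP reply such as
    {"return": {"UUID": "550e8400-e29b-41d4-a716-446655440000"}}
    into a uuid.UUID object."""
    return uuid.UUID(json.loads(line)["return"]["UUID"])
```

Over a QMP socket (qemu started with `-qmp unix:/path,server,nowait`), the reply's `UUID` field reports the value that was passed to `-uuid`.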
> > > > > > >
> > > > > > > I think the original link to using a VM UUID for the vGPU comes 
> > > > > > > from
> > > > > > > NVIDIA having a userspace component which might get launched from 
> > > > > > > a udev
> > > > > > > event as the vGPU is created or the set of vGPUs within that UUID 
> > > > > > > is
> > > > > > > started.  Using the VM UUID then gives them a way to associate 
> > > > > > > that
> > > > > > > userspace process with a VM instance.  Maybe it could register 
> > > > > > > with
> > > > > > > libvirt for some sort of service provided for the VM, I don't 
> > > > > > > know.
> > > > > >
> > > > > > Intel doesn't have this requirement. It should be enough as long as
> > > > > > libvirt maintains which sysfs vgpu node is associated with a VM UUID.
> > > > > >
> > > > > > >
> > > > > > > > qemu needs a sysfs node as a handle to the vfio device, something
> > > > > > > > like /sys/devices/virtual/vgpu/<name>.  <name> can be a uuid if
> > > > > > > > you want it that way, but it could be pretty much anything.  The
> > > > > > > > sysfs node will probably show up as-is in the libvirt xml when
> > > > > > > > assigning a vgpu to a vm.  So the name should be something stable
> > > > > > > > (i.e. when using a uuid as the name, you had better not generate
> > > > > > > > a new one on each boot).
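The stability caveat above can be met with a name-based (RFC 4122 version 5) UUID rather than a freshly generated random one; the namespace seed and VM name below are purely illustrative:

```python
import uuid

# Hypothetical fixed namespace for vgpu node names; any constant UUID will do.
VGPU_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "vgpu.example.org")

def stable_vgpu_name(vm_name, index):
    """Same (vm_name, index) yields the same UUID on every boot,
    unlike uuid.uuid4(), which would rename the sysfs node each time."""
    return uuid.uuid5(VGPU_NAMESPACE, f"{vm_name}/vgpu{index}")

# e.g. a sysfs node name that survives reboots (path is hypothetical):
node = f"/sys/devices/virtual/vgpu/{stable_vgpu_name('testvm', 0)}"
```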
> > > > > > >
> > > > > > > Actually I don't think there's really a persistent naming issue;
> > > > > > > that's probably where we diverge from the SR-IOV model.  SR-IOV
> > > > > > > cannot dynamically add a new VF; it needs to reset the number of
> > > > > > > VFs to zero, then re-allocate all of them up to the new desired
> > > > > > > count.  That has some obvious implications.  I think with both
> > > > > > > vendors here we can dynamically allocate new vGPUs, so I would
> > > > > > > expect that libvirt would create each vGPU instance as it's
> > > > > > > needed.  None would be created by default without user
> > > > > > > interaction.
> > > > > > >
> > > > > > > Personally I think using a UUID makes sense, but it needs to be
> > > > > > > userspace policy whether that UUID has any implicit meaning like
> > > > > > > matching the VM UUID.  Having an index within a UUID bothers me a 
> > > > > > > bit,
> > > > > > > but it doesn't seem like too much of a concession to enable the 
> > > > > > > use case
> > > > > > > that NVIDIA is trying to achieve.  Thanks,
> > > > > > >
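The contrast drawn above can be sketched with two toy classes (not a real driver interface): in the SR-IOV model the VF count is only ever set wholesale, so existing VFs must be freed first, while the vGPU model can add instances one at a time.

```python
class SriovPf:
    """Toy model of a physical function with SR-IOV-style VF allocation."""
    def __init__(self):
        self.vfs = []

    def set_numvfs(self, n):
        # Mirrors the sysfs sriov_numvfs semantics: a nonzero count cannot
        # be changed directly; write 0 first, then the new total.
        if self.vfs and n != 0:
            raise ValueError("existing VFs must be freed first (write 0)")
        self.vfs = [f"vf{i}" for i in range(n)]

class VgpuParent:
    """Toy model of the proposed vGPU scheme: instances created on demand."""
    def __init__(self):
        self.vgpus = []

    def create(self, name):
        self.vgpus.append(name)   # existing instances are left untouched
        return name
```

Growing an SriovPf from 2 to 3 VFs replaces all of them; VgpuParent.create leaves the instances already handed to running VMs alone.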
> > > > > >
> > > > > > I would prefer making UUID an optional parameter, while not tying
> > > > > > sysfs vgpu naming to UUID. This would be more flexible for different
> > > > > > scenarios where UUID might not be required.
> > > > >
> > > > > Hi Kevin,
> > > > >
> > > > > Happy Chinese New Year!
> > > > >
> > > > > I think having the UUID as the vgpu device name will allow us to have
> > > > > a gpu-vendor-agnostic solution for the upper layer software stack,
> > > > > such as QEMU, which is supposed to open the device.
> > > > >
> > > >
> > > > Qemu can use whatever sysfs path is provided to open the device,
> > > > regardless of whether there is a UUID within the path...
> > > >
> > >
> > > Hi Kevin,
> > >
> > > Then it will provide even more benefit to use a UUID, as libvirt can be
> > > implemented in a gpu-vendor-agnostic way, right? :-)
> > >
> > > The UUID can be the VM UUID or a vGPU group object UUID, which really
> > > depends on the high level software stack; again, the benefit is being
> > > gpu vendor agnostic.
> > >
> >
> > There are cases where libvirt is not used and another mgmt. stack doesn't
> > use UUIDs, e.g. in some Xen scenarios. So it's not about being GPU vendor
> > agnostic; it's about being high-level mgmt. stack agnostic. That's why we
> > need to make UUID optional in this vGPU-core framework.
> 
> Hi Kevin,
> 
> As long as you have to create an object to represent a vGPU or vGPU group,
> you will have a UUID, no matter which management stack you are going to use.
> 
> UUID is the most agnostic way to represent an object, I think.
> 
> (a bit off topic, since we are supposed to focus on VFIO on KVM)
> 
> Since you are now talking about Xen, I am very happy to discuss that with
> you. You can check how Xen manages its objects via UUID in xapi.
> 

Well, I'm not the expert in this area. IMHO the UUID is just a user-level
attribute, which can be associated with any sysfs node and managed by the
mgmt. stack itself; the sysfs path can then be opened as the bridge between
user and kernel. I don't understand the necessity of binding the UUID
internally within the vGPU core framework here. Alex gave one example
involving udev, but I didn't quite catch why only a UUID can work there.
Maybe you can elaborate on that requirement.
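A minimal sketch of the arrangement described above, where the UUID remains purely a management-stack attribute and the kernel side is addressed by sysfs path alone (both the UUID and the node path here are hypothetical):

```python
class VgpuRegistry:
    """Userspace-only bookkeeping: UUID -> sysfs node, no kernel involvement.
    The kernel never sees the UUID; only the sysfs path is ever opened."""
    def __init__(self):
        self._by_uuid = {}

    def associate(self, vm_uuid, sysfs_path):
        self._by_uuid[vm_uuid] = sysfs_path

    def node_for(self, vm_uuid):
        # The returned path is what actually gets opened and handed to qemu.
        return self._by_uuid[vm_uuid]

reg = VgpuRegistry()
reg.associate("f81d4fae-7dec-11d0-a765-00a0c91e6bf6",   # hypothetical VM UUID
              "/sys/devices/virtual/vgpu/vgpu0")        # hypothetical node
```

A stack that doesn't use UUIDs at all (e.g. plain xl on Xen) could key the same table by any other identifier, which is the flexibility being argued for.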

P.S. Taking my daily Xen development experience as an example, I just use
xl without needing to bother with managing UUIDs (the Xen hypervisor only
uses a VMID instead of a UUID). I don't want to eliminate such flexibility
in this design. :-)

Thanks
Kevin


