From: Tian, Kevin
Subject: Re: [Qemu-devel] [RFC PATCH v3 1/3] vGPU Core driver
Date: Wed, 4 May 2016 02:58:57 +0000

> From: Alex Williamson
> Sent: Wednesday, May 04, 2016 6:44 AM
> > +/**
> > + * struct gpu_device_ops - Structure to be registered for each physical GPU to
> > + * register the device to vgpu module.
> > + *
> > + * @owner:                 The module owner.
> > + * @dev_attr_groups:               Default attributes of the physical device.
> > + * @vgpu_attr_groups:              Default attributes of the vGPU device.
> > + * @vgpu_supported_config: Called to get information about supported vgpu types.
> > + *                         @dev : pci device structure of physical GPU.
> > + *                         @config: should return string listing supported config
> > + *                         Returns integer: success (0) or error (< 0)
> > + * @vgpu_create:           Called to allocate basic resouces in graphics

It's redundant to have 'vgpu' prefixed to every op here. The same comment
applies to the later sysfs nodes.
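
For illustration, a minimal sketch of how the ops table might read with the
prefix dropped; the member signatures below are only guesses based on the
quoted kernel-doc, not the actual RFC code:

#include <linux/device.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/uuid.h>

/*
 * Hypothetical rename with the vgpu_ prefix dropped from each callback;
 * the struct name already carries the context.  Signatures are guessed
 * from the quoted kernel-doc, kept only to make the naming point concrete.
 */
struct gpu_device_ops {
        struct module *owner;
        const struct attribute_group **dev_attr_groups;
        const struct attribute_group **vgpu_attr_groups;

        int (*supported_config)(struct pci_dev *dev, char *config);
        int (*create)(struct pci_dev *dev, uuid_le uuid,
                      uint32_t instance, char *vgpu_params);
        int (*destroy)(struct pci_dev *dev, uuid_le uuid,
                       uint32_t instance);
        int (*start)(uuid_le uuid);
        int (*shutdown)(uuid_le uuid);
        /* read/write/set_irqs/bar_info would be renamed the same way. */
};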

> > + *                         driver for a particular vgpu.
> > + *                         @dev: physical pci device structure on which vgpu
> > + *                               should be created
> > + *                         @uuid: VM's uuid for which VM it is intended to
> > + *                         @instance: vgpu instance in that VM

I didn't quite get @instance here. Is it whatever vendor-specific format is
used to indicate a vgpu?

> > + *                         @vgpu_params: extra parameters required by GPU driver.
> > + *                         Returns integer: success (0) or error (< 0)
> > + * @vgpu_destroy:          Called to free resources in graphics driver for
> > + *                         a vgpu instance of that VM.
> > + *                         @dev: physical pci device structure to which
> > + *                         this vgpu points to.
> > + *                         @uuid: VM's uuid for which the vgpu belongs to.
> > + *                         @instance: vgpu instance in that VM
> > + *                         Returns integer: success (0) or error (< 0)
> > + *                         If VM is running and vgpu_destroy is called that
> > + *                         means the vGPU is being hotunpluged. Return error
> > + *                         if VM is running and graphics driver doesn't
> > + *                         support vgpu hotplug.
> > + * @vgpu_start:                    Called to do initiate vGPU initialization
> > + *                         process in graphics driver when VM boots before
> > + *                         qemu starts.
> > + *                         @uuid: VM's UUID which is booting.
> > + *                         Returns integer: success (0) or error (< 0)
> > + * @vgpu_shutdown:         Called to teardown vGPU related resources for
> > + *                         the VM
> > + *                         @uuid: VM's UUID which is shutting down .
> > + *                         Returns integer: success (0) or error (< 0)

Can you give a specific example of the difference between destroy and
shutdown? I want to map it correctly onto our side, e.g. whether we need to
implement both or just one of them.

Another optional op would be 'stop', allowing the physical device to stop
scheduling the vGPU, including waiting for in-flight DMA to complete. It
would be useful for supporting VM live migration with a vGPU assigned.
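
To make that suggestion concrete, a possible shape for such an op; the name
and signature here are only my assumption, not part of the RFC:

#include <linux/types.h>
#include <linux/uuid.h>

/*
 * Hypothetical optional callback: the vendor driver stops scheduling the
 * given vGPU instance and does not return until in-flight DMA has drained,
 * so the source side of a live migration can save a consistent device
 * state.  Returns 0 on success or a negative errno.
 */
struct gpu_device_migration_ops {
        int (*stop)(uuid_le uuid, uint32_t instance);
};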

> > + * @read:                  Read emulation callback
> > + *                         @vdev: vgpu device structure
> > + *                         @buf: read buffer
> > + *                         @count: number bytes to read
> > + *                         @address_space: specifies for which address space
> > + *                         the request is: pci_config_space, IO register
> > + *                         space or MMIO space.
> > + *                         @pos: offset from base address.
> > + *                         Retuns number on bytes read on success or error.
> > + * @write:                 Write emulation callback
> > + *                         @vdev: vgpu device structure
> > + *                         @buf: write buffer
> > + *                         @count: number bytes to be written
> > + *                         @address_space: specifies for which address space
> > + *                         the request is: pci_config_space, IO register
> > + *                         space or MMIO space.
> > + *                         @pos: offset from base address.
> > + *                         Retuns number on bytes written on success or error.
> 
> How do these support multiple MMIO spaces or IO port spaces?  GPUs, and
> therefore I assume vGPUs, often have more than one MMIO space, how
> does the enum above tell us which one?  We could simply make this be a
> region index.
> 
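
Purely as an illustration of the region-index alternative mentioned above;
these names and signatures are assumptions, not something proposed in the
patch:

#include <linux/types.h>

struct vgpu_device;     /* opaque vgpu core type from the RFC */

/*
 * Hypothetical access callbacks keyed by a VFIO-style region index instead
 * of an address-space enum: indexes can cover BAR0..BAR5, the ROM and PCI
 * config space, so a device with several MMIO or I/O port regions needs no
 * extra enum values.
 */
struct vgpu_region_access_ops {
        ssize_t (*read)(struct vgpu_device *vdev, char *buf, size_t count,
                        uint32_t region_index, loff_t pos);
        ssize_t (*write)(struct vgpu_device *vdev, char *buf, size_t count,
                         uint32_t region_index, loff_t pos);
};
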
> > + * @vgpu_set_irqs:         Called to send about interrupts configuration
> > + *                         information that qemu set.
> > + *                         @vdev: vgpu device structure
> > + *                         @flags, index, start, count and *data : same as
> > + *                         that of struct vfio_irq_set of
> > + *                         VFIO_DEVICE_SET_IRQS API.
> 
> How do we learn about the supported interrupt types?  Should this be
> called vgpu_vfio_set_irqs if it's following the vfio API?
> 
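
One way to answer the "supported interrupt types" question, sketched here
purely as an assumption (neither callback name comes from the patch), is to
pair set_irqs with a query mirroring VFIO_DEVICE_GET_IRQ_INFO:

#include <linux/types.h>
#include <linux/vfio.h>

struct vgpu_device;     /* opaque vgpu core type from the RFC */

/*
 * Hypothetical pairing: get_irq_info reports count/flags for a given
 * interrupt index (INTx, MSI, MSI-X) just like VFIO_DEVICE_GET_IRQ_INFO,
 * while set_irqs takes the same flags/index/start/count/data arguments as
 * struct vfio_irq_set used by VFIO_DEVICE_SET_IRQS.
 */
struct vgpu_irq_ops {
        int (*get_irq_info)(struct vgpu_device *vdev, uint32_t index,
                            struct vfio_irq_info *info);
        int (*set_irqs)(struct vgpu_device *vdev, uint32_t flags,
                        uint32_t index, uint32_t start, uint32_t count,
                        void *data);
};
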
> > + * @vgpu_bar_info:         Called to get BAR size and flags of vGPU device.
> > + *                         @vdev: vgpu device structure
> > + *                         @bar_index: BAR index
> > + *                         @bar_info: output, returns size and flags of
> > + *                         requested BAR
> > + *                         Returns integer: success (0) or error (< 0)
> 
> This is called bar_info, but the bar_index is actually the vfio region
> index and things like the config region info is being overloaded
> through it.  We already have a structure defined for getting a generic
> region index, why not use it?  Maybe this should just be
> vgpu_vfio_get_region_info.

Or would it be more extensible to allow reporting the whole vconfig space?
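
For reference, a sketch of what reusing the generic region description could
look like; the callback name and signature are assumptions, not taken from
the patch:

#include <linux/types.h>
#include <linux/vfio.h>

struct vgpu_device;     /* opaque vgpu core type from the RFC */

/*
 * Hypothetical callback that fills the standard struct vfio_region_info
 * (size, flags, offset) for any region index, including the PCI config
 * region, instead of a BAR-specific bar_info structure.
 */
struct vgpu_region_info_ops {
        int (*get_region_info)(struct vgpu_device *vdev,
                               uint32_t region_index,
                               struct vfio_region_info *info);
};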

Thanks
Kevin


