From: Jike Song
Subject: Re: [Qemu-devel] VFIO based vGPU(was Re: [Announcement] 2015-Q3 release of XenGT - a Mediated ...)
Date: Thu, 28 Jan 2016 14:00:09 +0800
User-agent: Mozilla/5.0 (X11; Linux i686 on x86_64; rv:17.0) Gecko/20130801 Thunderbird/17.0.8

On 01/28/2016 12:19 AM, Alex Williamson wrote:
> On Wed, 2016-01-27 at 13:43 +0800, Jike Song wrote:
{snip}

>> I had a look at eventfd; I would say yes, technically we are able to
>> achieve the goal: introduce an fd with fop->{read|write} defined in
>> KVM that calls into the vgpu device-model, plus an iodev registered
>> for an MMIO GPA range to invoke the fop->{read|write}.  I just didn't
>> understand why userspace can't register an iodev via an API directly.
> 
> Please elaborate on how it would work via iodev.
>

QEMU forwards the BAR0 write to the bus driver; in the bus driver, if we
find that the MEM bit is enabled, we register an iodev with KVM, with an
ops like:

        const struct kvm_io_device_ops trap_mmio_ops = {
                .read   = kvmgt_guest_mmio_read,
                .write  = kvmgt_guest_mmio_write,
        };

I may not be able to illustrate it clearly in words, but this should not
be a problem; thanks to your explanation, I can understand it and adopt
it for KVMGT.
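
Roughly, I imagine the KVMGT-side glue would look something like the
sketch below. This is only an illustration: the kvmgt_mmio_dev wrapper
and the registration helper are names I made up, and kvm_iodevice_init()
/ kvm_io_bus_register_dev() are kvm-internal interfaces, which ties into
the question below about calling into kvm.ko.

        #include <linux/kvm_host.h>
        #include <kvm/iodev.h>

        struct kvmgt_mmio_dev {
                struct kvm_io_device dev;
                /* backpointer to the vgpu instance, etc. */
        };

        static int kvmgt_guest_mmio_read(struct kvm_vcpu *vcpu,
                                         struct kvm_io_device *this,
                                         gpa_t addr, int len, void *val)
        {
                /* the real implementation would forward to the vgpu
                 * device-model's MMIO read emulation */
                return 0;
        }

        static int kvmgt_guest_mmio_write(struct kvm_vcpu *vcpu,
                                          struct kvm_io_device *this,
                                          gpa_t addr, int len,
                                          const void *val)
        {
                /* the real implementation would forward to the vgpu
                 * device-model's MMIO write emulation */
                return 0;
        }

        /* the ops shown above */
        static const struct kvm_io_device_ops trap_mmio_ops = {
                .read   = kvmgt_guest_mmio_read,
                .write  = kvmgt_guest_mmio_write,
        };

        /* called by the bus driver when the guest enables the MEM bit */
        static int kvmgt_register_bar0_iodev(struct kvm *kvm,
                                             struct kvmgt_mmio_dev *d,
                                             gpa_t bar0_gpa, int bar0_len)
        {
                int ret;

                kvm_iodevice_init(&d->dev, &trap_mmio_ops);

                mutex_lock(&kvm->slots_lock);
                ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS,
                                              bar0_gpa, bar0_len, &d->dev);
                mutex_unlock(&kvm->slots_lock);

                return ret;
        }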


>> Besides, this doesn't necessarily require another thread, right?
>> I guess it can be within the VCPU thread? 
> 
> I would think so too, the vcpu is blocked on the MMIO access, we should
> be able to service it in that context.  I hope.
> 

Thanks for confirmation.

>> And this brings up another question: apart from the vfio bus driver
>> and iommu backend (and the page_track utility used for guest memory
>> write-protection), is KVMGT allowed to call into (or modify) kvm.ko?
>> Though we are becoming less and less willing to do that with VFIO,
>> it's still better to know that before going wrong.
> 
> kvm and vfio are separate modules, for the most part, they know nothing
> about each other and have no hard dependencies between them.  We do have
> various accelerations we can use to avoid paths through userspace, but
> these are all via APIs that are agnostic of the party on the other end.
> For example, vfio signals interrupts through eventfds and has no concept
> of whether that eventfd terminates in userspace or into an irqfd in KVM.
> vfio supports direct access to device MMIO regions via mmaps, but vfio
> has no idea if that mmap gets directly mapped into a VM address space.
> Even with posted interrupts, we've introduced an irq bypass manager
> allowing interrupt producers and consumers to register independently to
> form a connection without directly knowing anything about the other
> module.  That sort of proper software layering needs to continue.  It
> would be wrong for a vfio bus driver to assume KVM is the user and
> directly call into KVM interfaces.  Thanks,
> 

I understand and agree with your point: it's bad if the bus driver
assumes KVM is the user and/or calls into KVM interfaces.
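
Your eventfd example makes the layering concrete: userspace can hand the
same eventfd to vfio via VFIO_DEVICE_SET_IRQS and to KVM via KVM_IRQFD,
and neither module knows who sits at the other end. A rough userspace
sketch of that wiring (just an illustration, error handling mostly
omitted; device_fd, vm_fd and gsi are assumed to be set up elsewhere):

        #include <linux/kvm.h>
        #include <linux/vfio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/eventfd.h>
        #include <sys/ioctl.h>

        /*
         * Wire MSI vector 0 of a vfio device to a guest GSI through one
         * eventfd.  vfio only knows "signal this fd"; kvm only knows
         * "inject this gsi when the fd is signalled".
         */
        static int wire_msi_to_irqfd(int device_fd, int vm_fd,
                                     unsigned int gsi)
        {
                struct vfio_irq_set *set;
                struct kvm_irqfd irqfd;
                size_t sz = sizeof(*set) + sizeof(int);
                int efd = eventfd(0, EFD_CLOEXEC);

                if (efd < 0)
                        return -1;

                /* vfio side: trigger the eventfd when the MSI fires */
                set = calloc(1, sz);
                set->argsz = sz;
                set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
                             VFIO_IRQ_SET_ACTION_TRIGGER;
                set->index = VFIO_PCI_MSI_IRQ_INDEX;
                set->start = 0;
                set->count = 1;
                memcpy(set->data, &efd, sizeof(int));
                if (ioctl(device_fd, VFIO_DEVICE_SET_IRQS, set)) {
                        free(set);
                        return -1;
                }
                free(set);

                /* kvm side: inject 'gsi' into the guest on eventfd signal */
                memset(&irqfd, 0, sizeof(irqfd));
                irqfd.fd  = efd;
                irqfd.gsi = gsi;
                if (ioctl(vm_fd, KVM_IRQFD, &irqfd))
                        return -1;

                return efd;
        }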

However, the vgpu device-model, which in Intel's case is also part of the
i915 driver, will always need to call some hypervisor-specific interfaces.
For example, when a guest gfx driver submits GPU commands, the device-model
may want to scan them for security or other purposes:

        - get a GPA (from the GPU page tables)
        - want to read 16 bytes from that GPA
        - call a hypervisor-specific read_gpa() method
                - for Xen, the GPA belongs to a foreign domain, so it must
                  find a way to map & read it - beyond our scope here;
                - for KVM, the GPA can be converted to an HVA and read with
                  copy_from_user (if called from the vcpu thread) or
                  access_remote_vm (if called from other threads); see the
                  sketch below.

Please note that this call is not from the vfio bus driver, but from the
vgpu device-model; also, this is not a DMA address from the GPU page
tables, but a real GPA.
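
For the KVM flavour, I imagine read_gpa() along these lines (only a
sketch; kvmgt_read_gpa is a name I made up, and it assumes the 16-byte
read does not cross a page boundary):

        #include <linux/kvm_host.h>
        #include <linux/mm.h>
        #include <linux/sched.h>
        #include <linux/uaccess.h>

        static int kvmgt_read_gpa(struct kvm *kvm, gpa_t gpa,
                                  void *buf, int len)
        {
                unsigned long hva;

                /* translate GPA -> HVA via the memslots */
                hva = gfn_to_hva(kvm, gpa_to_gfn(gpa));
                if (kvm_is_error_hva(hva))
                        return -EFAULT;
                hva += offset_in_page(gpa);

                if (likely(current->mm == kvm->mm)) {
                        /* vcpu thread: the HVA lives in our own mm */
                        if (copy_from_user(buf, (void __user *)hva, len))
                                return -EFAULT;
                        return 0;
                }

                /* other threads: go through the remote-vm accessor */
                if (access_remote_vm(kvm->mm, hva, buf, len, 0) != len)
                        return -EFAULT;

                return 0;
        }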


> Alex
> 

--
Thanks,
Jike



