Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups


From: Alex Williamson
Subject: Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
Date: Wed, 10 Oct 2012 13:31:52 -0600

On Tue, 2012-10-09 at 09:09 +0200, Jan Kiszka wrote:
> On 2012-10-08 23:11, Alex Williamson wrote:
> > On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
> >> On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
> >>> On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
> >>>> On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
> >>>>> Michael, Jan,
> >>>>>
> >>>>> Any comments on these?  I'd like to make the PCI changes before I update
> >>>>> vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
> >>>>> anyone officially listed as maintainer of pci-assign since it's been
> >>>>> moved to qemu.  I could include the pci-assign patches in my tree if you
> >>>>> prefer.  Thanks,
> >>>>>
> >>>>> Alex
> >>>>
> >>>> Patches themselves look fine, but I'd like to
> >>>> better understand why we want the INTx fallback.
> >>>> Isn't it easier to add intx routing support?
> >>>
> >>> vfio-pci can work with or without intx routing support.  Its presence is
> >>> just one requirement to enable kvm-accelerated intx support.  Regardless
> >>> of whether it's easy or hard to implement intx routing in a given
> >>> chipset, I currently can't probe for it and make useful decisions about
> >>> whether or not to enable kvm support without potentially hitting an
> >>> assert.  It's arguable how important intx acceleration is for specific
> >>> applications, so while I'd like all chipsets to implement it, I don't
> >>> know that it should be a gating factor to chipset integration.  Thanks,
> >>>
> >>> Alex
> >>
> >> Yes, but there's nothing kvm-specific in the routing API,
> >> and IIRC it actually works fine without kvm.
> > 
> > Correct, but intx routing isn't very useful without kvm.
> 
> Right now: yes. Long-term: no. The concept in general is also required
> for decoupling I/O paths lock-wise from our main thread. We need to
> explore the IRQ path and cache it in order to avoid taking lots of locks
> on each delivery, possibly even the BQL. But we will likely need
> something smarter at that point, i.e. something PCI-independent.

That sounds great long term, but in the interim I think this trivial
extension to the API is more than justified.  I hope it can go in soon
so that vfio-pci's kvm intx acceleration can land before the freeze
deadlines get much closer.  Thanks,

Alex
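
For reference, here is a minimal sketch of the probe-and-fallback
pattern discussed above, written against QEMU's PCIINTxRoute API
(pci_device_route_intx_to_irq(), pci_intx_route_changed(),
pci_device_set_intx_routing_notifier()).  It assumes the routing query
degrades gracefully to PCI_INTX_DISABLED rather than asserting when a
chipset lacks routing support, which is the behavior the thread argues
for.  The vfio_* helpers and the single static cached route are
hypothetical placeholders, not the actual vfio-pci code.

#include "hw/pci/pci.h"   /* "hw/pci.h" in 2012-era trees */

/* Hypothetical helpers standing in for the real irqfd wiring. */
static void vfio_enable_intx_kvm(PCIDevice *pdev, int irq);
static void vfio_disable_intx_kvm(PCIDevice *pdev);

/* Illustrative only; real code would keep this per device. */
static PCIINTxRoute cached_route;

static void vfio_update_intx_route(PCIDevice *pdev)
{
    /* Pin 0 == INTA; a real device would use its configured pin. */
    PCIINTxRoute route = pci_device_route_intx_to_irq(pdev, 0);

    /* Skip the costly teardown/setup when the route is unchanged. */
    if (!pci_intx_route_changed(&cached_route, &route)) {
        return;
    }
    cached_route = route;

    if (route.mode != PCI_INTX_ENABLED) {
        /*
         * No usable route: fall back to the slow, fully emulated
         * INTx path instead of hitting an assert.
         */
        vfio_disable_intx_kvm(pdev);
        return;
    }

    /*
     * A stable route is known, so a resampling irqfd can be handed
     * to KVM and INTx injected without exiting to userspace.
     */
    vfio_enable_intx_kvm(pdev, route.irq);
}

/*
 * Resolve the route once, then refresh it only when the chipset
 * reroutes: essentially the route caching Jan alludes to above.
 */
static void vfio_setup_intx(PCIDevice *pdev)
{
    pci_device_set_intx_routing_notifier(pdev, vfio_update_intx_route);
    vfio_update_intx_route(pdev);
}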





reply via email to

[Prev in Thread] Current Thread [Next in Thread]