
Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups


From: Jan Kiszka
Subject: Re: [Qemu-devel] [PATCH 0/6] Misc PCI cleanups
Date: Tue, 09 Oct 2012 09:09:14 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666

On 2012-10-08 23:11, Alex Williamson wrote:
> On Mon, 2012-10-08 at 23:40 +0200, Michael S. Tsirkin wrote:
>> On Mon, Oct 08, 2012 at 01:27:33PM -0600, Alex Williamson wrote:
>>> On Mon, 2012-10-08 at 22:15 +0200, Michael S. Tsirkin wrote:
>>>> On Mon, Oct 08, 2012 at 09:58:32AM -0600, Alex Williamson wrote:
>>>>> Michael, Jan,
>>>>>
>>>>> Any comments on these?  I'd like to make the PCI changes before I update
>>>>> vfio-pci to make use of the new resampling irqfd in kvm.  We don't have
>>>>> anyone officially listed as maintainer of pci-assign since it's been
>>>>> moved to qemu.  I could include the pci-assign patches in my tree if you
>>>>> prefer.  Thanks,
>>>>>
>>>>> Alex
>>>>
>>>> The patches themselves look fine, but I'd like to
>>>> better understand why we want the INTx fallback.
>>>> Isn't it easier to add intx routing support?
>>>
>>> vfio-pci can work with or without intx routing support.  Its presence is
>>> just one requirement to enable kvm accelerated intx support.  Regardless
>>> of whether it's easy or hard to implement intx routing in a given
>>> chipset, I currently can't probe for it and make useful decisions about
>>> whether or not to enable kvm support without potentially hitting an
>>> assert.  It's arguable how important intx acceleration is for specific
>>> applications, so while I'd like all chipsets to implement it, I don't
>>> know that it should be a gating factor to chipset integration.  Thanks,
>>>
>>> Alex
>>
>> Yes but there's nothing kvm specific in the routing API,
>> and IIRC it actually works fine without kvm.
> 
> Correct, but intx routing isn't very useful without kvm.

Right now: yes. Long-term: no. The concept is also required, more
generally, for decoupling the locking of I/O paths from our main
thread. We need to resolve the IRQ path and cache it in order to avoid
taking lots of locks on each delivery, possibly even the BQL. But we
will likely need something smarter at that point, i.e. something
PCI-independent.

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
Corporate Competence Center Embedded Linux


