Re: [Qemu-devel] [PATCH 05/13] pci: Add pci_device_route_intx_to_irq


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH 05/13] pci: Add pci_device_route_intx_to_irq
Date: Sun, 10 Jun 2012 13:53:15 +0300

On Sun, Jun 10, 2012 at 12:49:11PM +0200, Jan Kiszka wrote:
> On 2012-06-10 12:41, Michael S. Tsirkin wrote:
> > On Sun, Jun 10, 2012 at 12:08:23PM +0200, Jan Kiszka wrote:
> >> On 2012-06-10 11:55, Michael S. Tsirkin wrote:
> >>> On Thu, Jun 07, 2012 at 06:46:38PM +0200, Jan Kiszka wrote:
> >>>> On 2012-06-07 18:28, Michael S. Tsirkin wrote:
> >>>>> On Thu, Jun 07, 2012 at 05:10:17PM +0200, Jan Kiszka wrote:
> >>>>>> On 2012-06-07 16:32, Michael S. Tsirkin wrote:
> >>>>>>> On Mon, Jun 04, 2012 at 10:52:13AM +0200, Jan Kiszka wrote:
> >>>>>>>> @@ -1089,6 +1093,14 @@ static void pci_set_irq(void *opaque, int irq_num, int level)
> >>>>>>>>      pci_change_irq_level(pci_dev, irq_num, change);
> >>>>>>>>  }
> >>>>>>>>  
> >>>>>>>> +PCIINTxRoute pci_device_route_intx_to_irq(PCIDevice *dev, int pin)
> >>>>>>>> +{
> >>>>>>>> +    PCIBus *bus = dev->host_bus;
> >>>>>>>> +
> >>>>>>>> +    assert(bus->route_intx_to_irq);
> >>>>>>>> +    return bus->route_intx_to_irq(bus->irq_opaque, dev->host_intx_pin[pin]);
> >>>>>>>> +}
> >>>>>>>> +
> >>>>>>>>  /***********************************************************/
> >>>>>>>>  /* monitor info on PCI */
> >>>>>>>>  
> >>>>>>>
> >>>>>>> Just an idea: can devices cache this result, bypassing the
> >>>>>>> intx to irq lookup on data path?
> >>>>>>
> >>>>>> That lookup is part of set_irq which we don't bypass so far and where
> >>>>>> this is generally trivial. If we want to cache the effects of set_irq as
> >>>>>> well, I guess things would become pretty complex (e.g. due to vmstate
> >>>>>> compatibility), and I'm unsure if it would buy us much.
> >>>>>
> >>>>> This is less about performance and more about making
> >>>>> everyone use the same infrastructure rather than
> >>>>> assigned devices being the weird case.
> >>>>
> >>>> Device assignment is weird. It bypasses all state updates as it does not
> >>>> have to bother about migratability.
> >>>>
> >>>> Well, of course we could cache the host bridge routing result as well,
> >>>> for every device. It would have to be in addition to host_intx_pin. But
> >>>> the result would look pretty strange to me.
> >>>>
> >>>> In any case, I would prefer to do this, if at all, on top of this
> >>>> series, specifically as it will require touching all host bridges.
> >>>
> >>> I'd like to ponder this a bit more then.
> >>>
> >>> If the claim is that device assignment is only needed for
> >>> piix anyway, then why not make it depend on piix *explicitly*?
> >>> Yes ugly but this will make it very easy to find and
> >>> address any missing pieces.
> >>
> >> Because it is conceptually independent of the PIIX, we will need it for
> >> successors of that x86 chipset as well, and I won't add qemu-kvm's ugly
> >> hack upstream.
> > 
> > So you look at an API and see it requires a route
> > callback. And you ask "why doesn't chipset X implement it"?
> > And the answer is "because it's only used by device assignment".
> > Which you will only know if you read this thread. So it's
> > a hack. And I'd rather have the hacks in device-assignment.c
> > than in pci.c even if the former are nastier.
> 
> I don't share your view on this. It is surely _not_ a hack, specifically
> when compared to what we have so far and what could be done otherwise,
> e.g. hacking device-assignment and PIIX to make them cooperate (sorry, I
> would vote against such an attempt). This is just a partially used
> generic API. Any chipset not providing the required routing function
> will cause an assert once someone tries to make use of it.
> 
> So, what can I do to make this API more acceptable for you?
> 
> Jan
> 

The idea that I suggested above would address this concern.
Or wait a bit, maybe I'll come around when I ponder
the alternatives.
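
To make the caching idea above concrete, here is a minimal sketch of the
device side, assuming the API from this patch. AssignedDevice,
assigned_dev_update_intx_route() and the .irq member of PCIINTxRoute are
illustrative names only, not code from this series:

typedef struct AssignedDevice {
    PCIDevice dev;
    PCIINTxRoute intx_route;    /* cached result of the last lookup */
    int intx_pin;               /* INTx pin used by the device (0..3) */
} AssignedDevice;

/* Re-query the host bridge and re-wire the host IRQ only on change. */
static void assigned_dev_update_intx_route(AssignedDevice *adev)
{
    PCIINTxRoute route =
        pci_device_route_intx_to_irq(&adev->dev, adev->intx_pin);

    if (route.irq == adev->intx_route.irq) {
        return;                 /* routing unchanged, nothing to do */
    }
    adev->intx_route = route;
    /* detach the host IRQ from the old GSI and attach it to route.irq */
}

The point being that the lookup would then only happen when routing may
have changed, not on every set_irq call.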

-- 
MST


