
Re: [Qemu-devel] [PATCH] vfio-pci: Add KVM INTx acceleration


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH] vfio-pci: Add KVM INTx acceleration
Date: Tue, 16 Oct 2012 17:08:06 +0200

On Tue, Oct 16, 2012 at 08:48:04AM -0600, Alex Williamson wrote:
> On Tue, 2012-10-16 at 16:14 +0200, Michael S. Tsirkin wrote:
> > On Tue, Oct 16, 2012 at 07:51:43AM -0600, Alex Williamson wrote:
> > > On Tue, 2012-10-16 at 08:39 +0200, Michael S. Tsirkin wrote:
> > > > On Mon, Oct 15, 2012 at 02:28:15PM -0600, Alex Williamson wrote:
> > > > > This makes use of the new level irqfd support enabling bypass of
> > > > > qemu userspace both on INTx injection and unmask.  This significantly
> > > > > boosts the performance of devices making use of legacy interrupts.
> > > > > 
> > > > > Signed-off-by: Alex Williamson <address@hidden>
> > > > > ---
> > > > > 
> > > > > My INTx routing workaround below will probably raise some eyebrows,
> > > > > but I don't feel it's worth subjecting users to core dumps if they
> > > > > want to try vfio-pci on new platforms.  INTx routing is part of some
> > > > > larger plan, but until that plan materializes we have to try to avoid
> > > > > the API unless we think there's a good chance it might be there.
> > > > > I'll accept the maintenance of updating a whitelist in the interim.
> > > > > Thanks,
> > > > > 
> > > > > Alex
> > > > > 
> > > > >  hw/vfio_pci.c |  224 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > > >  1 file changed, 224 insertions(+)
> > > > > 
> > > > > diff --git a/hw/vfio_pci.c b/hw/vfio_pci.c
> > > > > index 639371e..777a5f8 100644
> > > > > --- a/hw/vfio_pci.c
> > > > > +++ b/hw/vfio_pci.c
> > > > > @@ -154,6 +154,53 @@ static uint32_t vfio_pci_read_config(PCIDevice *pdev, uint32_t addr, int len);
> > > > >  static void vfio_mmap_set_enabled(VFIODevice *vdev, bool enabled);
> > > > >  
> > > > >  /*
> > > > > + * PCI code refuses to make it possible to probe whether the chipset
> > > > > + * supports pci_device_route_intx_to_irq() and booby traps the call
> > > > > + * to assert if it doesn't.  For us, this is just an optimization, so
> > > > > + * only enable it when we know it's present.  Unfortunately PCIBus is
> > > > > + * private, so we can't just look at the function pointer.
> > > > > + */
> > > > > +static bool vfio_pci_bus_has_intx_route(PCIDevice *pdev)
> > > > > +{
> > > > > +#ifdef CONFIG_KVM
> > > > > +    BusState *bus = qdev_get_parent_bus(&pdev->qdev);
> > > > > +    DeviceState *dev;
> > > > > +
> > > > > +    if (!kvm_irqchip_in_kernel() ||
> > > > > +        !kvm_check_extension(kvm_state, KVM_CAP_IRQFD_RESAMPLE)) {
> > > > > +        return false;
> > > > > +    }
> > > > 
> > > > 
> > > > Shouldn't we update linux-headers/ to get KVM_CAP_IRQFD_RESAMPLE?
> > > > Also for KVM_IRQFD_FLAG_RESAMPLE.
> > > 
> > > I posted the patch for that separately yesterday.  I'll only request a
> > > pull once that's in.
> > 
> > OK, missed that. In the future, it might be a good idea to note
> > dependencies in the patchset, or to repost them as part of the patchset
> > with appropriate tagging.
> > 
> > > > > +
> > > > > +    for (; bus->parent; bus = qdev_get_parent_bus(dev)) {
> > > > > +
> > > > > +        dev = bus->parent;
> > > > > +
> > > > > +        if (!strncmp("i440FX-pcihost", object_get_typename(OBJECT(dev)), 14)) {
> > > > > +            return true;
> > > > > +        }
> > > > > +    }
> > > > > +
> > > > > +    error_report("vfio-pci: VM chipset does not support INTx routing, "
> > > > > +                 "using slow INTx mode\n");
> > > > 
> > > > When does this code trigger? It seems irqchip implies piix ATM -
> > > > is this just dead code?
> > > 
> > > Unused, but not unnecessary.  Another chipset is under development,
> > > which means very quickly irqchip will not imply piix.
> > 
> > So this is for the purposes of out-of-tree stuff; let's keep these
> > hacks out of tree too. My guess is q35 can just be added with
> > pci_device_route_intx straight away.
> > 
> > >  Likewise irqfd
> > > support is being added to other architectures, so I don't know how long
> > > the kvm specific tests will hold up.  Testing for a specific chipset
> > > could of course be avoided if we were willing to support:
> > > 
> > > bool pci_device_intx_route_supported(PCIDevice *pdev)
> > > 
> > > or the NOROUTE option I posted previously.
> > 
> > This is just moving the pain to users, who will get bad performance
> > and will have to debug it. Injecting through userspace is slow; new
> > architectures should simply add irqfd straight away instead of adding
> > workarounds in userspace.
> > 
> > This is exactly why we have the assert in pci core.
> 
> Let's compare user experience:
> 
> As coded here:
> 
> - Error message, only runs slower if the driver actually uses INTx.
> Result: file bug, continue using vfio-pci, maybe do something useful,
> maybe find new bugs to file.
> 
> Blindly calling PCI code w/ assert:
> 
> - Assert.  Result: file bug, full stop.
> 
> Maybe I do too much kernel programming, but I don't take asserts lightly
> and prefer they be saved for "something is really broken and I can't
> safely continue".  This is not such a case.

There's no chance we ship e.g. q35 by mistake without this API, since
there is no way this specific assert can be missed in even basic
testing.

So I see it differently:

As coded here:
        chipset authors get lazy and do not implement API.
        bad performance for all users.

With assert:
        chipset authors implement necessary API.
        good performance for all users.



-- 
MST


