qemu-devel

Re: [Qemu-devel] [RFC PATCH 0/4] MSI affinity for assigned devices


From: Alex Williamson
Subject: Re: [Qemu-devel] [RFC PATCH 0/4] MSI affinity for assigned devices
Date: Mon, 07 Jan 2013 13:52:01 -0700

On Mon, 2013-01-07 at 20:14 +0000, Krishna J wrote:
> Hi Alex,
> > MSI routing updates aren't currently handled by pci-assign or
> > vfio-pci (when using KVM acceleration), which means that trying to
> > set interrupt SMP affinity in the guest has no effect unless MSI is
> > completely disabled and re-enabled.  This series fixes this for both
> > device assignment backends using similar schemes.  We store the last
> > MSIMessage programmed to KVM and do updates to the MSI route when it
> > changes.  pci-assign takes a little bit of refactoring to make this
> > happen cleanly.  Thanks,
> 
> I am using the MSI affinity for assigned devices patches 1 to 4. I have
> set up the guest so that VCPU0 is pinned to PCPU1, VCPU1 to PCPU2,
> VCPU2 to PCPU3, and VCPU3 to PCPU4. I do this with taskset after the
> guest boots. I then start generating interrupts affined to VCPU3, and
> I see all the interrupts delivered directly to VCPU3. Now I run the
> same test with the interrupt affined to VCPU2. Although the interrupts
> are delivered to VCPU2, there are a lot of "Rescheduling interrupts"
> on VCPU3. I have checked smp_affinity and it is updated to VCPU2.
> I wanted your feedback on this use case and what the impact might be.
>            CPU0       CPU1       CPU2       CPU3
>   0:        211          0          0          0   IO-APIC-edge      timer
>   4:      60940          0          0          0   IO-APIC-edge      serial
>   8:         65          0          0          0   IO-APIC-edge      rtc0
>   9:          0          0          0          0   IO-APIC-fasteoi   acpi
>  40:          0          0          0          0   PCI-MSI-edge      virtio1-config
>  41:       1910          0          0          0   PCI-MSI-edge      virtio1-requests
>  42:          0          0          0          0   PCI-MSI-edge      virtio0-config
>  43:        127          0          0          0   PCI-MSI-edge      virtio0-input
>  44:          1          0          0          0   PCI-MSI-edge      virtio0-output
>  45:          1          0       3377      11194   PCI-MSI-edge      FPGA_DEV
> NMI:          0          0          0          0   Non-maskable interrupts
> LOC:     225880     231572     223670     223612   Local timer interrupts
> SPU:          0          0          0          0   Spurious interrupts
> PMI:          0          0          0          0   Performance monitoring interrupts
> IWI:          0          0          0          0   IRQ work interrupts
> RTR:          0          0          0          0   APIC ICR read retries
> RES:         14         20         21       3398   Rescheduling interrupts  <------ many RES interrupts!
> CAL:          0         14         14         16   Function call interrupts
> TLB:          0          0          0          0   TLB shootdowns
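[For reference, the pinning and affinity steps described in the quoted
message correspond to commands like the following. The PIDs and the IRQ
number are hypothetical placeholders, and smp_affinity takes a CPU
bitmask rather than a CPU index:]

```shell
# Pin each VCPU thread to a host CPU after the guest boots
# (thread PIDs are hypothetical; find them under /proc/<qemu-pid>/task):
#   taskset -pc 1 <vcpu0-tid>   # VCPU0 -> PCPU1
#   taskset -pc 2 <vcpu1-tid>   # VCPU1 -> PCPU2
#   taskset -pc 3 <vcpu2-tid>   # VCPU2 -> PCPU3
#   taskset -pc 4 <vcpu3-tid>   # VCPU3 -> PCPU4

# In the guest, smp_affinity is a hex CPU bitmask: VCPU2 -> bit 2 -> 0x4
printf '%x\n' $((1 << 2))
#   echo 4 > /proc/irq/45/smp_affinity   # steer IRQ 45 (FPGA_DEV) to VCPU2
```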

I don't know, but I'll fix the line wrap for anyone else that wants to
have a look.  The count looks roughly similar to the number of
interrupts to VCPU2.  Is your application somehow tied to VCPU3?
Thanks,

Alex



