qemu-devel



From: Shenming Lu
Subject: Re: [PATCH v25 03/17] vfio: Add save and load functions for VFIO PCI devices
Date: Tue, 3 Nov 2020 18:40:26 +0800
User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Thunderbird/78.2.2

On 6/23/2020 1:58 AM, Alex Williamson wrote:
>> +    } else if (interrupt_type == VFIO_INT_MSIX) {
>> +        uint16_t offset;
>> +
>> +        offset = pci_default_read_config(pdev,
>> +                                         pdev->msix_cap + PCI_MSIX_FLAGS + 1, 2);
>> +        /* load enable bit and maskall bit */
>> +        vfio_pci_write_config(pdev, pdev->msix_cap + PCI_MSIX_FLAGS + 1,
>> +                              offset, 2);
>> +        msix_load(pdev, f);
>
> Isn't this ordering backwards, or at least less efficient?  The config
> write will cause us to enable MSI-X; presumably we'd have nothing in
> the vector table though.  Then msix_load() will write the vector
> and pba tables and trigger a use notifier for each vector.  It seems
> like that would trigger a bunch of SET_IRQS ioctls as if the guest
> wrote individual unmasked vectors to the vector table, whereas if we
> setup the vector table and then enable MSI-X, we do it with one ioctl.
>

As you said, it's better to call msix_load() first (restoring only the vector
and pba tables) and then enable MSI-X, which would trigger the use notifier for
all unmasked vectors and make only one ioctl (in msix_set_vector_notifiers()?).
But what I see is that msix_set_vector_notifiers() still do_use()s these vectors
one by one, which triggers a bunch of SET_IRQS ioctls anyway...
Not sure if I have missed something.

Thanks,
Shenming


