[PATCH v3 9/9] vfio: defer to commit kvm irq routing when enable msi/msix

From: Longpeng(Mike)
Subject: [PATCH v3 9/9] vfio: defer to commit kvm irq routing when enable msi/msix
Date: Tue, 21 Sep 2021 07:02:02 +0800
In the migration resume phase, all unmasked MSI-X vectors need to be
set up when the VF state is loaded. However, the setup takes longer
if the VM has more VFs and each VF has more unmasked vectors.

The hot spot is kvm_irqchip_commit_routes: on each invocation it scans
and updates all irqfds that are already assigned, so more vectors mean
more time to process them.
vfio_pci_load_config
  vfio_msix_enable
    msix_set_vector_notifiers
      for (vector = 0; vector < dev->msix_entries_nr; vector++) {
        vfio_msix_vector_do_use
          vfio_add_kvm_msi_virq
            kvm_irqchip_commit_routes <-- expensive
      }
We can reduce the cost by committing the routes only once, outside the
loop. The routes are cached in kvm_state, so we commit them first and
then bind an irqfd for each vector.
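To illustrate why the single deferred commit helps, here is a minimal standalone sketch (not QEMU code; the function names are made up) counting how many irqfd entries a rescan-on-every-commit strategy touches versus committing once after the loop:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Commit inside the per-vector loop: the i-th commit rescans all
 * i irqfds assigned so far, as kvm_irqchip_commit_routes does.
 */
size_t scans_commit_per_vector(size_t nvectors)
{
    size_t scans = 0;
    for (size_t i = 1; i <= nvectors; i++) {
        scans += i;            /* one full rescan per commit */
    }
    return scans;              /* n * (n + 1) / 2 */
}

/* Deferred: cache all routes, commit once, then bind each irqfd. */
size_t scans_commit_once(size_t nvectors)
{
    return nvectors;           /* a single commit scans each irqfd once */
}
```

For one VF with 65 vectors this is 2145 scanned entries versus 65, which matches the quadratic-vs-linear behavior the measurements below show.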
The test VM has 128 vCPUs and 8 VFs (each with 65 vectors). We
measured the cost of vfio_msix_enable for each VF, and over 90% of
the cost is eliminated.
    VF      Count of irqfds[*]    Original (ms)    With this patch (ms)
    1st             65                  8                   2
    2nd            130                 15                   2
    3rd            195                 22                   2
    4th            260                 24                   3
    5th            325                 36                   2
    6th            390                 44                   3
    7th            455                 51                   3
    8th            520                 58                   4
    Total                             258                  21
[*] Count of irqfds: the number of irqfds already assigned that must
be processed in this round.
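The irqfd counts grow linearly because each enabled VF leaves its 65 vectors assigned for the next VF's enable. A hypothetical helper (not from the patch) makes the arithmetic explicit:

```c
#include <assert.h>

/*
 * Hypothetical helper: when enabling the k-th VF (1-based), every
 * vector of VFs 1..k has already been assigned an irqfd, so each
 * commit in that round must process k * vectors_per_vf entries.
 */
int irqfds_in_round(int kth_vf, int vectors_per_vf)
{
    return kth_vf * vectors_per_vf;
}
```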
The optimization can be applied to MSI as well.
Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
---
hw/vfio/pci.c | 36 ++++++++++++++++++++++++++++--------
1 file changed, 28 insertions(+), 8 deletions(-)
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 2de1cc5425..b26129bddf 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -513,11 +513,13 @@ static int vfio_msix_vector_do_use(PCIDevice *pdev, unsigned int nr,
      * increase them as needed.
      */
     if (vdev->nr_vectors < nr + 1) {
-        vfio_disable_irqindex(&vdev->vbasedev, VFIO_PCI_MSIX_IRQ_INDEX);
         vdev->nr_vectors = nr + 1;
-        ret = vfio_enable_vectors(vdev, true);
-        if (ret) {
-            error_report("vfio: failed to enable vectors, %d", ret);
+        if (!vdev->defer_kvm_irq_routing) {
+            vfio_disable_irqindex(&vdev->vbasedev, VFIO_PCI_MSIX_IRQ_INDEX);
+            ret = vfio_enable_vectors(vdev, true);
+            if (ret) {
+                error_report("vfio: failed to enable vectors, %d", ret);
+            }
         }
     } else {
         Error *err = NULL;
@@ -579,8 +581,7 @@ static void vfio_msix_vector_release(PCIDevice *pdev, unsigned int nr)
     }
 }
 
-/* TODO: invoked when enclabe msi/msix vectors */
-static __attribute__((unused)) void vfio_commit_kvm_msi_virq(VFIOPCIDevice *vdev)
+static void vfio_commit_kvm_msi_virq(VFIOPCIDevice *vdev)
 {
     int i;
     VFIOMSIVector *vector;
@@ -610,6 +611,9 @@ static __attribute__((unused)) void vfio_commit_kvm_msi_virq(VFIOPCIDevice *vdev
 
 static void vfio_msix_enable(VFIOPCIDevice *vdev)
 {
+    PCIDevice *pdev = &vdev->pdev;
+    int ret;
+
     vfio_disable_interrupts(vdev);
 
     vdev->msi_vectors = g_new0(VFIOMSIVector, vdev->msix->entries);
@@ -632,11 +636,22 @@ static void vfio_msix_enable(VFIOPCIDevice *vdev)
     vfio_msix_vector_do_use(&vdev->pdev, 0, NULL, NULL);
     vfio_msix_vector_release(&vdev->pdev, 0);
 
-    if (msix_set_vector_notifiers(&vdev->pdev, vfio_msix_vector_use,
-                                  vfio_msix_vector_release, NULL)) {
+    vdev->defer_kvm_irq_routing = true;
+
+    ret = msix_set_vector_notifiers(&vdev->pdev, vfio_msix_vector_use,
+                                    vfio_msix_vector_release, NULL);
+    if (ret < 0) {
         error_report("vfio: msix_set_vector_notifiers failed");
+    } else if (!pdev->msix_function_masked) {
+        vfio_commit_kvm_msi_virq(vdev);
+        vfio_disable_irqindex(&vdev->vbasedev, VFIO_PCI_MSIX_IRQ_INDEX);
+        ret = vfio_enable_vectors(vdev, true);
+        if (ret) {
+            error_report("vfio: failed to enable vectors, %d", ret);
+        }
     }
 
+    vdev->defer_kvm_irq_routing = false;
     trace_vfio_msix_enable(vdev->vbasedev.name);
 }
@@ -645,6 +660,7 @@ static void vfio_msi_enable(VFIOPCIDevice *vdev)
     int ret, i;
 
     vfio_disable_interrupts(vdev);
+    vdev->defer_kvm_irq_routing = true;
 
     vdev->nr_vectors = msi_nr_vectors_allocated(&vdev->pdev);
 retry:
@@ -671,6 +687,8 @@ retry:
         vfio_add_kvm_msi_virq(vdev, vector, i, false);
     }
 
+    vfio_commit_kvm_msi_virq(vdev);
+
     /* Set interrupt type prior to possible interrupts */
     vdev->interrupt = VFIO_INT_MSI;
@@ -697,9 +715,11 @@ retry:
          */
         error_report("vfio: Error: Failed to enable MSI");
 
+        vdev->defer_kvm_irq_routing = false;
         return;
     }
 
+    vdev->defer_kvm_irq_routing = false;
     trace_vfio_msi_enable(vdev->vbasedev.name, vdev->nr_vectors);
 }
--
2.23.0