On Aug 25 19:16, Jinhao Fan wrote:
> On 8/25/2022 5:33 PM, Klaus Jensen wrote:
> > I'm still a bit perplexed by this issue, so I just tried moving
> > nvme_init_irq_notifier() to the end of nvme_init_cq() and removing this
> > first_io_cqe thing. I did not observe any particular issues?
> > What bad behavior did you encounter? It seems to work fine to me.
>
> The kernel boot gets stuck, waiting for interrupts. The request then
> times out and is retried three times. Finally, the driver seems to decide
> that the drive is down and the kernel continues to boot.
>
> I added some prints while debugging and found that the MSI-X message
> registered in KVM via kvm_irqchip_add_msi_route() is not the same as the
> one actually used in msix_notify().
>
> Are you sure you are using KVM's irqfd?
Pretty sure? Using "ioeventfd=on,irq-eventfd=on" on the controller, and
the following patch.
diff --git i/hw/nvme/ctrl.c w/hw/nvme/ctrl.c
index 30bbda7bb5ae..b2e41d3bd745 100644
--- i/hw/nvme/ctrl.c
+++ w/hw/nvme/ctrl.c
@@ -1490,21 +1490,6 @@ static void nvme_post_cqes(void *opaque)
if (!pending) {
n->cq_pending++;
}
-
- if (unlikely(cq->first_io_cqe)) {
- /*
- * Initilize event notifier when first cqe is posted. For irqfd
- * support we need to register the MSI message in KVM. We
- * can not do this registration at CQ creation time because
- * Linux's NVMe driver changes the MSI message after CQ creation.
- */
- cq->first_io_cqe = false;
-
- if (n->params.irq_eventfd) {
- nvme_init_irq_notifier(n, cq);
- }
- }
-
}
nvme_irq_assert(n, cq);
@@ -4914,11 +4899,14 @@ static void nvme_init_cq(NvmeCQueue *cq, NvmeCtrl *n, uint64_t dma_addr,
}
n->cq[cqid] = cq;
cq->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, nvme_post_cqes, cq);
+
/*
* Only enable irqfd for IO queues since we always emulate admin queue
* in main loop thread
*/
- cq->first_io_cqe = cqid != 0;
+ if (cqid && n->params.irq_eventfd) {
+ nvme_init_irq_notifier(n, cq);
+ }
}