On 03.01.24 14:35, Paolo Bonzini wrote:
On 1/3/24 12:40, Fiona Ebner wrote:
I'm happy to report that I cannot reproduce the CPU-usage-spike issue
with the patch, but I did run into an assertion failure when trying to
verify that it fixes my original stuck-guest-IO issue. See below for
the backtrace [0]. Hanna wrote in
https://issues.redhat.com/browse/RHEL-3934
I think it’s sufficient to simply call virtio_queue_notify_vq(vq)
after the virtio_queue_aio_attach_host_notifier(vq, ctx) call, because
both virtio-scsi’s and virtio-blk’s .handle_output() implementations
acquire the device’s context, so this should be directly callable from
any context.
I guess this is not true anymore now that the AioContext locking was
removed?
Good point and, in fact, even before that change it was much safer to
use virtio_queue_notify() instead. Not only does it use the event
notifier handler, but it also calls it in the right thread/AioContext
just by doing event_notifier_set().
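To see why the event-notifier path is safe to trigger from any thread, here is
a small standalone sketch (plain eventfd plus pthreads, not QEMU code; the
names are invented for illustration). The "kick" is nothing more than a
write() on an eventfd, and the actual handling happens in whichever thread
runs the poll loop, which stands in for the AioContext here:

#include <poll.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <unistd.h>

static int efd;

static void *loop_thread(void *arg)
{
    struct pollfd pfd = { .fd = efd, .events = POLLIN };
    uint64_t cnt;

    (void)arg;
    poll(&pfd, 1, -1);                /* wait for the "kick" */
    if (read(efd, &cnt, sizeof(cnt)) == (ssize_t)sizeof(cnt)) {
        /* This is where the queue handler would run in QEMU,
         * in the thread that owns the event loop. */
        printf("handled kick in loop thread\n");
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    uint64_t one = 1;

    efd = eventfd(0, EFD_CLOEXEC);
    pthread_create(&t, NULL, loop_thread, NULL);

    /* The analogue of event_notifier_set(): just a write(), callable
     * from any thread without holding any per-device lock. */
    if (write(efd, &one, sizeof(one)) == (ssize_t)sizeof(one)) {
        printf("kick sent from main thread\n");
    }

    pthread_join(t, NULL);
    return 0;
}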
But with virtio_queue_notify() using the event notifier, the
CPU-usage-spike issue is present:
Back to the CPU-usage-spike issue: I experimented a bit and it doesn't
seem to matter whether I notify the virtqueue before or after attaching
the notifiers. But there is another functional difference. My patch
called virtio_queue_notify(), which contains this block:
    if (vq->host_notifier_enabled) {
        event_notifier_set(&vq->host_notifier);
    } else if (vq->handle_output) {
        vq->handle_output(vdev, vq);
    }
In my testing, the first branch was taken, calling
event_notifier_set(). Hanna's patch uses virtio_queue_notify_vq() and
there, vq->handle_output() will be called. That seems to be the
relevant difference regarding the CPU-usage-spike issue.
I should mention that this is with a VirtIO SCSI disk. I also attempted
to reproduce the CPU-usage-spike issue with a VirtIO block disk, but
haven't managed to yet.
What I noticed is that in virtio_queue_host_notifier_aio_poll(), one of
the queues (but only one) always shows up as nonempty. As a result,
run_poll_handlers_once() always detects progress, which explains the
CPU usage.
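As a toy model of that effect (standalone C, not QEMU code; the handler names
are invented), a single poll handler that always reports progress is enough to
keep the polling loop from ever falling back to blocking:

#include <stdbool.h>
#include <stdio.h>

typedef bool (*poll_fn)(void);

/* One genuinely idle queue and one that always looks nonempty,
 * mirroring the counts below. */
static bool idle_queue_poll(void)  { return false; }
static bool stuck_queue_poll(void) { return true;  }

/* Simplified stand-in for run_poll_handlers_once(): report progress if
 * any handler reported progress. */
static bool run_poll_handlers_once_model(poll_fn *handlers, int n)
{
    bool progress = false;
    for (int i = 0; i < n; i++) {
        progress |= handlers[i]();
    }
    return progress;
}

int main(void)
{
    poll_fn handlers[] = { idle_queue_poll, stuck_queue_poll };
    long spins = 0;

    /* Bounded here for demonstration; as long as progress keeps being
     * reported, the loop never blocks and burns a CPU instead. */
    while (spins < 1000000 && run_poll_handlers_once_model(handlers, 2)) {
        spins++;
    }
    printf("polled %ld times without ever blocking\n", spins);
    return 0;
}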
The following shows:
1. the vq address
2. the number of times the vq was passed to
   virtio_queue_host_notifier_aio_poll()
3. the number of times the result of virtio_queue_host_notifier_aio_poll()
   was true for the vq
0x555fd93f9c80 17162000 0
0x555fd93f9e48 17162000 6
0x555fd93f9ee0 17162000 0
0x555fd93f9d18 17162000 17162000
0x555fd93f9db0 17162000 0
0x555fd93f9f78 17162000 0
And for the problematic one, the reason it is seen as nonempty is:
0x555fd93f9d18 shadow_avail_idx 8 last_avail_idx 0
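For completeness, here is a minimal standalone illustration (not the actual
QEMU helper) of why those two values make the queue look nonempty, assuming
the split-ring emptiness check boils down to comparing the cached avail index
with the last index the device has processed:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* The queue counts as having work whenever the cached avail index
 * differs from the last index the device consumed. */
static bool vq_looks_nonempty(uint16_t shadow_avail_idx,
                              uint16_t last_avail_idx)
{
    return shadow_avail_idx != last_avail_idx;
}

int main(void)
{
    /* Values of the problematic queue from above. */
    printf("%d\n", vq_looks_nonempty(8, 0));   /* prints 1: seen as nonempty */
    return 0;
}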