Re: [PATCH v2 11/13] virtio-scsi: add iothread-vq-mapping parameter
From: Kevin Wolf
Subject: Re: [PATCH v2 11/13] virtio-scsi: add iothread-vq-mapping parameter
Date: Tue, 11 Mar 2025 13:06:11 +0100
On 11.03.2025 at 11:11, Stefan Hajnoczi wrote:
> Allow virtio-scsi virtqueues to be assigned to different IOThreads. This
> makes it possible to take advantage of host multi-queue block layer
> scalability by assigning virtqueues that have affinity with vCPUs to
> different IOThreads that have affinity with host CPUs. The same feature
> was introduced for virtio-blk in the past:
> https://developers.redhat.com/articles/2024/09/05/scaling-virtio-blk-disk-io-iothread-virtqueue-mapping
>
> Here are fio randread 4k iodepth=64 results from a 4 vCPU guest with an
> Intel P4800X SSD:
> iothreads  IOPS
> ---------  ------
> 1          189576
> 2          312698
> 4          346744
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> @@ -1218,14 +1224,16 @@ static void virtio_scsi_hotplug(HotplugHandler *hotplug_dev, DeviceState *dev,
> {
> VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
> VirtIOSCSI *s = VIRTIO_SCSI(vdev);
> + AioContext *ctx = s->vq_aio_context[0];
At the end of the series, this is always qemu_aio_context...
> SCSIDevice *sd = SCSI_DEVICE(dev);
> - int ret;
>
> - if (s->ctx && !s->dataplane_fenced) {
> - ret = blk_set_aio_context(sd->conf.blk, s->ctx, errp);
> - if (ret < 0) {
> - return;
> - }
> + if (ctx != qemu_get_aio_context() && !s->dataplane_fenced) {
> + /*
> + * Try to make the BlockBackend's AioContext match ours. Ignore failure
> + * because I/O will still work although block jobs and other users
> + * might be slower when multiple AioContexts use a BlockBackend.
> + */
> + blk_set_aio_context(sd->conf.blk, ctx, errp);
> }
...so this becomes dead code. With multiple AioContexts it's not clear
which one should be used. virtio-blk just takes the first one; the
equivalent here would be to use the AioContext of the first command
queue.
> if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
> @@ -1260,7 +1268,7 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
>
> qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
>
> - if (s->ctx) {
> + if (s->vq_aio_context[0] != qemu_get_aio_context()) {
Same problem here.
> /* If other users keep the BlockBackend in the iothread, that's ok */
> blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL);
> }
As you wanted to avoid squashing patches anyway, I think this can be
fixed on top of this series.
Kevin