Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq


From: Mike Christie
Subject: Re: [PATCH 00/10] vhost/qemu: thread per IO SCSI vq
Date: Thu, 19 Nov 2020 09:49:28 -0600
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0) Gecko/20100101 Thunderbird/78.4.3

On 11/18/20 10:35 PM, Jason Wang wrote:
>> it's just extra code. This patch:
>> https://www.spinics.net/lists/linux-scsi/msg150151.html
>> would work without the ENABLE ioctl, I mean.


> That seems to pre-allocate all workers. If we don't care about the resource consumption (127 workers), it could be fine.

It only creates the workers the user requested via num_queues.

That patch will:
1. For the default case of num_queues=1, use the single worker created by the SET_OWNER ioctl.
2. If num_queues > 1, create a worker thread per IO vq (see the sketch below).
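To make that rule concrete, here is a minimal user-space sketch of the allocation logic (this is not the patch itself; the function names and the pthread stand-ins for vhost workers are purely illustrative):

#include <pthread.h>
#include <stdio.h>

#define MAX_IO_VQS 128

/* Stand-in for a vhost worker: the real code would poll its vq and
 * queue SCSI commands from this thread. */
static void *io_vq_worker(void *arg)
{
	printf("worker for IO vq %ld started\n", (long)arg);
	return NULL;
}

/* The rule described above: num_queues == 1 reuses the single worker
 * created at SET_OWNER time, num_queues > 1 gets a worker per IO vq. */
static int setup_workers(int num_queues, pthread_t *workers)
{
	if (num_queues <= 1)
		return 0;

	for (long i = 0; i < num_queues; i++)
		if (pthread_create(&workers[i], NULL, io_vq_worker, (void *)i))
			return -1;
	return num_queues;
}

int main(void)
{
	pthread_t workers[MAX_IO_VQS];
	int n = setup_workers(8, workers);

	printf("extra workers created: %d\n", n);
	for (int i = 0; i < n; i++)
		pthread_join(workers[i], NULL);
	return 0;
}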

>> And if you guys want to do the completely new interface, then none of this matters I guess :)
>>
>> For disable see below.

>>>> My issue/concern is that in general these calls seem useful, but we don't really need them for scsi because vhost-scsi is already stuck creating vqs the way it does due to existing users. If we do the ifcvf_vdpa_set_vq_ready type of design where we just set some bit, then the new ioctl does not give us a lot. It's just an extra check and extra code.
>>>>
>>>> And for the mlx5_vdpa_set_vq_ready type of design, it doesn't seem like it's going to happen a lot where the admin is going to want to remove vqs from a running device.


>>> In this case, qemu may just disable the queues of vhost-scsi via SET_VRING_ENABLE and then we can free resources?
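For context, a hypothetical sketch of what such a per-vq enable/disable call from userspace could look like, assuming a new VHOST_SET_VRING_ENABLE-style ioctl that reuses the existing struct vhost_vring_state (index + num). This ioctl does not exist in the vhost uapi today; it is only what is being discussed here, and the request number below is made up:

#include <sys/ioctl.h>
#include <linux/vhost.h>

/* Made-up request number purely for illustration; the real one, if this
 * interface is ever merged, would be assigned in linux/vhost.h. */
#ifndef VHOST_SET_VRING_ENABLE
#define VHOST_SET_VRING_ENABLE _IOW(VHOST_VIRTIO, 0x7f, struct vhost_vring_state)
#endif

/* Enable (1) or disable (0) one vring on an open vhost-scsi fd. */
static int vring_set_enable(int vhost_fd, unsigned int index, int enable)
{
	struct vhost_vring_state state = {
		.index = index,
		.num   = enable ? 1 : 0,
	};

	return ioctl(vhost_fd, VHOST_SET_VRING_ENABLE, &state);
}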


>> Some SCSI background in case it doesn't work like net:
>> -------
>> When the user sets up mq for vhost-scsi/virtio-scsi, for max performance, and if they do not care about memory use, they would normally set num_queues based on the number of vCPUs and MSI-x vectors. I think the default in qemu now is to try to detect that value.
>>
>> When the virtio_scsi driver is loaded into the guest kernel, it takes the num_queues value and tells the scsi/block mq layer to create num_queues multiqueue hw queues.
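As a rough illustration of that last step, a SCSI LLD communicates its queue count to blk-mq through the Scsi_Host it registers; a pared-down, kernel-style fragment (not the literal virtio_scsi code) would look roughly like:

#include <scsi/scsi_host.h>

/* Simplified probe-time fragment: blk-mq creates one hardware queue
 * per nr_hw_queues when the host is added. */
static int example_scsi_probe(struct Scsi_Host *shost, unsigned int num_queues)
{
	shost->nr_hw_queues = num_queues;
	return scsi_add_host(shost, NULL);
}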


> If I read the code correctly, for a modern device the guest will set queue_enable for the queues that it wants to use. So in this ideal case, qemu can forward that to VRING_ENABLE and reset VRING_ENABLE during device reset.
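For reference, the queue_enable field lives in the virtio-pci common configuration (struct virtio_pci_common_cfg in the uapi headers); a minimal, illustrative fragment of how a modern (virtio 1.x) guest driver enables one queue, with the ring programming elided, might be:

#include <linux/types.h>
#include <linux/io.h>
#include <linux/virtio_pci.h>	/* struct virtio_pci_common_cfg */

/* Select a queue in the common config window, then enable it. */
static void example_enable_vq(struct virtio_pci_common_cfg __iomem *cfg,
			      u16 index)
{
	iowrite16(index, &cfg->queue_select);
	/* ... queue size and ring addresses would be programmed here ... */
	iowrite16(1, &cfg->queue_enable);
}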

I was thinking more that you want an event for when a device/LUN is added to or removed from a host. Instead of kicking off a device scan, you could call the block helper to remap queues. It would then not be too invasive to running IO. I'll look into reset some more.



> But it would be complicated to support legacy devices and qemu.



>> ------

>> I was trying to say in the previous email that if all we do is set some bits to indicate the queue is disabled, free its resources, stop polling/queueing in the scsi/target layer, flush, etc., it does not seem useful. I was trying to ask: when would a user want only this behavior?


> I think it's device reset; the semantic is that unless the queue is enabled, we should treat it as disabled.


Ah ok. I'll look into that some more. A funny thing is that I was trying to test that a while ago, but it wasn't helpful. I'm guessing it didn't work because it didn't implement what you wanted for disable right now :)
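A purely illustrative sketch of the disable steps mentioned above (mark the vq disabled, stop polling, flush in-flight commands, free per-vq resources). None of these helpers exist in vhost-scsi; the names only label the steps under discussion:

#include <stdbool.h>

/* Hypothetical per-vq state; real vhost-scsi keeps this in its own structs. */
struct example_vq {
	bool enabled;
};

/* Stubs standing in for the real work described in the thread. */
static void example_stop_polling(struct example_vq *vq) { (void)vq; }
static void example_flush_inflight(struct example_vq *vq) { (void)vq; }
static void example_free_vq_resources(struct example_vq *vq) { (void)vq; }

/* Disable path: after this the vq is treated as disabled until it is
 * explicitly enabled again (e.g. after a device reset). */
static void example_vq_disable(struct example_vq *vq)
{
	vq->enabled = false;
	example_stop_polling(vq);
	example_flush_inflight(vq);
	example_free_vq_resources(vq);
}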


