qemu-devel
From: Jason Wang
Subject: Re: Emulating device configuration / max_virtqueue_pairs in vhost-vdpa and vhost-user
Date: Thu, 2 Feb 2023 11:44:57 +0800
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0) Gecko/20100101 Thunderbird/102.6.1


On 2023/2/1 19:48, Eugenio Perez Martin wrote:
On Wed, Feb 1, 2023 at 12:20 PM Michael S. Tsirkin <mst@redhat.com> wrote:
On Wed, Feb 01, 2023 at 12:14:18PM +0100, Maxime Coquelin wrote:
Thanks Eugenio for working on this.

On 1/31/23 20:10, Eugenio Perez Martin wrote:
Hi,

The current approach of offering an emulated CVQ to the guest and
mapping the commands to vhost-user does not scale well:
* Some devices already offer it, so the transformation is redundant.
* There is no support for commands with variable length (RSS?)

We can solve both of these by offering it through vhost-user the same
way vhost-vdpa does. With this approach qemu needs to track the
commands, for a similar reason as with vhost-vdpa: qemu needs to track
the device status for live migration. vhost-user should use the same
SVQ code for this, so we avoid duplication.

One of the challenges here is to know which virtqueue to shadow /
isolate. The vhost-user device may not have the same number of queues
as the device frontend:
* The first depends on the actual vhost-user device; qemu currently
fetches it with VHOST_USER_GET_QUEUE_NUM.
* The qemu device frontend's count is set by the netdev queues= cmdline parameter in qemu

For the device, the CVQ is the last one it offers, but for the guest
it is the last one offered in config space.

Creating a new vhost-user command to decrease that maximum number of
queues could be an option. But we can do it without adding more
commands, by remapping the CVQ index at virtqueue setup time. I think
it should be doable using (struct vhost_dev).vq_index and maybe a few
adjustments here and there.

Thoughts?
I am fine with both proposals.
I think index remapping will require a bit more rework in the DPDK
Vhost-user library, but nothing insurmountable.

I am currently working on a PoC adding support for VDUSE in the DPDK
Vhost library, and recently added control queue support. We can reuse it
if we want to prototype your proposal.

Maxime

Thanks!


technically the backend knows how many vqs there are, the last one is the cvq...
not sure we need full-blown remapping ...

The number of queues may not be the same between the cmdline and the device.

If the vhost-user device cmdline wants more queues than the device
offers, qemu will print an error. But the reverse (offering the same
number of queues as the device, or fewer) is valid at the moment.

If we add the cvq with this scheme, the cvq index will not be the same
between the guest and the device.

vhost-vdpa totally ignores the queues= parameter, so we're losing the
opportunity to offer a consistent config space in the event of a
migration. I suggest we act the same way as I'm proposing here for
vhost-user, so that:
* QEMU can block the migration in case the destination cannot
offer the same number of queues.
* The guest will not see the config space change under its feet.


As we discussed in the past, it would be easier to fail the device initialization in this case.

Thanks



Now there are other fields in the config space to consider too (mtu,
rss size, etc), but I think the most complex case is the number of
queues, because of the cvq.

Is that clearer?

Thanks!




