From: Michael S. Tsirkin
Subject: Re: [PATCH v2] virtio-iommu: Use qemu_real_host_page_mask as default page_size_mask
Date: Tue, 13 Feb 2024 04:43:55 -0500
On Wed, Jan 17, 2024 at 02:20:39PM +0100, Eric Auger wrote:
> We used to set default page_size_mask to qemu_target_page_mask() but
> with VFIO assignment it makes more sense to use the actual host page mask
> instead.
>
> So from now on qemu_real_host_page_mask() will be used as a default.
> To keep migration from older code working, we increase the vmstate
> version_id to 3, and if an older incoming v2 stream is detected we
> restore the previous default value.
>
> The new default is well adapted to configs where host and guest have
> the same page size. This fixes hotplugging VFIO devices into a
> 64kB guest on a 64kB host. This test case used to fail and even
> crash qemu with hw_error("vfio: DMA mapping failed, unable to
> continue") in VFIO common code. Indeed the hot-attached VFIO
> device would call memory_region_iommu_set_page_size_mask() with a 64kB
> mask after the granule had already been frozen to 4kB on machine init
> done. Now this works. However the new default prevents a 4kB guest on
> a 64kB host, because the granule will be set to 64kB, which is
> larger than the guest page size. In that situation, the virtio-iommu
> driver fails in viommu_domain_finalise() with
> "granule 0x10000 larger than system page size 0x1000".
>
> The current limitation of a single global granule in the virtio-iommu
> should be removed and turned into a per-domain granule. But
> until we get that upgrade, this new default is probably
> better, because I don't think anyone is currently interested in
> running a 4kB page size guest with virtio-iommu on a 64kB host,
> whereas supporting a 64kB guest on a 64kB host with virtio-iommu and
> VFIO looks like a more important feature.
>
> Signed-off-by: Eric Auger <eric.auger@redhat.com>
> Tested-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
What about migration compatibility? In particular, cross-version one?
Don't we need compat machinery for this?
> ---
>
> v1 -> v2:
> - fixed 2 typos in the commit msg and added Jean's R-b and T-b
> ---
> hw/virtio/virtio-iommu.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> index 8a4bd933c6..ec2ba11d1d 100644
> --- a/hw/virtio/virtio-iommu.c
> +++ b/hw/virtio/virtio-iommu.c
> @@ -1313,7 +1313,7 @@ static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
>       * in vfio realize
>       */
>      s->config.bypass = s->boot_bypass;
> -    s->config.page_size_mask = qemu_target_page_mask();
> +    s->config.page_size_mask = qemu_real_host_page_mask();
>      s->config.input_range.end = UINT64_MAX;
>      s->config.domain_range.end = UINT32_MAX;
>      s->config.probe_size = VIOMMU_PROBE_SIZE;
> @@ -1491,13 +1491,16 @@ static int iommu_post_load(void *opaque, int version_id)
>       * still correct.
>       */
>      virtio_iommu_switch_address_space_all(s);
> +    if (version_id <= 2) {
> +        s->config.page_size_mask = qemu_target_page_mask();
> +    }
>      return 0;
>  }
>  
>  static const VMStateDescription vmstate_virtio_iommu_device = {
>      .name = "virtio-iommu-device",
>      .minimum_version_id = 2,
> -    .version_id = 2,
> +    .version_id = 3,
>      .post_load = iommu_post_load,
>      .fields = (const VMStateField[]) {
>          VMSTATE_GTREE_DIRECT_KEY_V(domains, VirtIOIOMMU, 2,
> --
> 2.41.0