Re: [PATCH] block/nvme: Fix VFIO_MAP_DMA failed: No space left on device


From: Michal Prívozník
Subject: Re: [PATCH] block/nvme: Fix VFIO_MAP_DMA failed: No space left on device
Date: Mon, 14 Jun 2021 17:25:20 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.11.0

On 6/11/21 1:46 PM, Philippe Mathieu-Daudé wrote:
> When the NVMe block driver was introduced (see commit bdd6a90a9e5,
> January 2018), Linux VFIO_IOMMU_MAP_DMA ioctl was only returning
> -ENOMEM in case of error. The driver was correctly handling the
> error path to recycle its volatile IOVA mappings.
> 
> To fix CVE-2019-3882, Linux commit 492855939bdb ("vfio/type1: Limit
> DMA mappings per container", April 2019) added the -ENOSPC error to
> signal the user exhausted the DMA mappings available for a container.
> 
> The block driver started to misbehave:
> 
>   qemu-system-x86_64: VFIO_MAP_DMA failed: No space left on device
>   (qemu)
>   (qemu) info status
>   VM status: paused (io-error)
>   (qemu) c
>   VFIO_MAP_DMA failed: No space left on device
>   qemu-system-x86_64: block/block-backend.c:1968: blk_get_aio_context: Assertion `ctx == blk->ctx' failed.
> 
> Fix by handling the -ENOSPC error when DMA mappings are exhausted;
> other errors (such as -ENOMEM) are still handled later in the same
> function.
> 
> An easy way to reproduce this bug is to restrict the DMA mapping
> limit (65535 by default) when loading the VFIO IOMMU module:
> 
>   # modprobe vfio_iommu_type1 dma_entry_limit=666
> 
> Cc: qemu-stable@nongnu.org
> Reported-by: Michal Prívozník <mprivozn@redhat.com>
> Fixes: bdd6a90a9e5 ("block: Add VFIO based NVMe driver")
> Buglink: https://bugs.launchpad.net/qemu/+bug/1863333
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/65
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
> Michal, is it still possible for you to test this (old bug)?
> 
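
To make the error-handling idea above concrete, here is a minimal, self-contained C sketch (not the actual QEMU patch; every name below is an illustrative stub): treat -ENOSPC from the DMA-map call the same as -ENOMEM, so the existing "recycle temporary IOVA mappings and retry" path runs instead of failing the request outright.

  /*
   * Illustrative stubs only -- not QEMU code.
   */
  #include <errno.h>
  #include <stdio.h>

  /* Stand-in for the VFIO_IOMMU_MAP_DMA path: pretend the container has
   * hit its dma_entry_limit on the first attempt. */
  static int stub_map_dma(int attempt)
  {
      return attempt == 0 ? -ENOSPC : 0;
  }

  /* Stand-in for releasing the temporary (recyclable) IOVA mappings. */
  static void stub_recycle_temporary_mappings(void)
  {
      printf("recycling temporary IOVA mappings\n");
  }

  static int map_with_retry(void)
  {
      for (int attempt = 0; attempt < 2; attempt++) {
          int r = stub_map_dma(attempt);
          if (r == -ENOSPC) {
              /*
               * Since Linux commit 492855939bdb the ioctl can also fail
               * with -ENOSPC when the per-container mapping limit is
               * reached; fold it into the -ENOMEM path so the retry
               * logic below still applies.
               */
              r = -ENOMEM;
          }
          if (r == -ENOMEM && attempt == 0) {
              stub_recycle_temporary_mappings();
              continue;
          }
          return r;
      }
      return -ENOMEM;
  }

  int main(void)
  {
      printf("map_with_retry() = %d\n", map_with_retry());
      return 0;
  }

Built with a plain C compiler, the first attempt hits the stubbed -ENOSPC, the temporary mappings are recycled, and the retry succeeds.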

Unfortunately I no longer have access to the machine. But IIRC it was
fairly easy to reproduce - just pass through any NVMe disk using the
NVMe block driver backend (-blockdev '{"driver":"nvme", ...).
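
For reference, a complete invocation for the userspace NVMe block driver might look like the following (the node name and PCI address are placeholders; the controller must be bound to vfio-pci beforehand):

  -blockdev '{"driver":"nvme","node-name":"nvme0","device":"0000:01:00.0","namespace":1}' \
  -device virtio-blk-pci,drive=nvme0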

Sorry,
Michal
