Re: [PATCH] block/nvme: Fix VFIO_MAP_DMA failed: No space left on device


From: Maxim Levitsky
Subject: Re: [PATCH] block/nvme: Fix VFIO_MAP_DMA failed: No space left on device
Date: Thu, 17 Jun 2021 15:40:16 +0300
User-agent: Evolution 3.36.5 (3.36.5-2.fc32)

On Mon, 2021-06-14 at 18:03 +0200, Philippe Mathieu-Daudé wrote:
> On 6/11/21 1:46 PM, Philippe Mathieu-Daudé wrote:
> > When the NVMe block driver was introduced (see commit bdd6a90a9e5,
> > January 2018), Linux VFIO_IOMMU_MAP_DMA ioctl was only returning
> > -ENOMEM in case of error. The driver was correctly handling the
> > error path to recycle its volatile IOVA mappings.
> > 
> > To fix CVE-2019-3882, Linux commit 492855939bdb ("vfio/type1: Limit
> > DMA mappings per container", April 2019) added the -ENOSPC error to
> > signal that the user has exhausted the DMA mappings available for a
> > container.
> 
> Hmm, this commit was added before v5.1-rc4.
> 
> So while this fixes the behavior on v5.1-rc4+ kernels, QEMU built
> with this fix but running on older kernels will still hit the same
> problem...


Hi!

I wonder: why not check for both -ENOMEM and -ENOSPC,
and recycle the mappings in either case?

I think that would work on both old and new kernels.

What do you think?

Best regards,
        Maxim Levitsky

> 
> Should I check uname(2)'s utsname.release[]? Is it reliable?
> 
> > The block driver started to mis-behave:
> > 
> >   qemu-system-x86_64: VFIO_MAP_DMA failed: No space left on device
> >   (qemu)
> >   (qemu) info status
> >   VM status: paused (io-error)
> >   (qemu) c
> >   VFIO_MAP_DMA failed: No space left on device
> >   qemu-system-x86_64: block/block-backend.c:1968: blk_get_aio_context: 
> > Assertion `ctx == blk->ctx' failed.
> > 
> > Fix by handling the -ENOSPC error when DMA mappings are exhausted;
> > other errors (such as -ENOMEM) are still handled later in the same
> > function.
> > 
> > An easy way to reproduce this bug is to restrict the DMA mapping
> > limit (65535 by default) when loading the VFIO IOMMU module:
> > 
> >   # modprobe vfio_iommu_type1 dma_entry_limit=666
> > 
> > Cc: qemu-stable@nongnu.org
> > Reported-by: Michal Prívozník <mprivozn@redhat.com>
> > Fixes: bdd6a90a9e5 ("block: Add VFIO based NVMe driver")
> > Buglink: https://bugs.launchpad.net/qemu/+bug/1863333
> > Resolves: https://gitlab.com/qemu-project/qemu/-/issues/65
> > Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> > ---
> > Michal, is it still possible for you to test this (old bug)?
> > 
> > A functional test using viommu & nested VM is planned (suggested by
> > Stefan and Maxim).
> > ---
> >  block/nvme.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/block/nvme.c b/block/nvme.c
> > index 2b5421e7aa6..12f9dd5cce3 100644
> > --- a/block/nvme.c
> > +++ b/block/nvme.c
> > @@ -1030,7 +1030,7 @@ try_map:
> >          r = qemu_vfio_dma_map(s->vfio,
> >                                qiov->iov[i].iov_base,
> >                                len, true, &iova);
> > -        if (r == -ENOMEM && retry) {
> > +        if (r == -ENOSPC && retry) {
> >              retry = false;
> >              trace_nvme_dma_flush_queue_wait(s);
> >              if (s->dma_map_count) {
> > 




