From: Alex Williamson
Subject: Re: [PATCH v10 10/10] vfio: Don't issue full 2^64 unmap
Date: Mon, 2 Nov 2020 10:37:23 -0700

On Fri, 30 Oct 2020 19:19:14 +0100
Paolo Bonzini <pbonzini@redhat.com> wrote:

> On 30/10/20 18:26, Alex Williamson wrote:
> >>  
> >>      if (try_unmap) {
> >> +        if (llsize == int128_2_64()) {
> >> +            /* The unmap ioctl doesn't accept a full 64-bit span. */
> >> +            llsize = int128_rshift(llsize, 1);
> >> +            ret = vfio_dma_unmap(container, iova, int128_get64(llsize));
> >> +            if (ret) {
> >> +                error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
> >> +                             "0x%"HWADDR_PRIx") = %d (%m)",
> >> +                             container, iova, int128_get64(llsize), ret);
> >> +            }
> >> +            iova += int128_get64(llsize);
> >> +        }
> >>          ret = vfio_dma_unmap(container, iova, int128_get64(llsize));
> >>          if (ret) {
> >>              error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "    
> > We're still susceptible to the case where splitting the range in two
> > results in unmap calls that attempt to bisect a mapping spanning both
> > halves.  Both unmap calls would fail in that case.  I think we could
> > solve this more completely with a high water mark, but this is
> > probably good enough for now.
> > 
> > Acked-by: Alex Williamson <alex.williamson@redhat.com>    
> 
> Could it also be fixed by passing an Int128 to vfio_dma_unmap?

I think we still have the issue at the vfio ioctl, which takes __u64 iova
and size parameters in bytes.  Therefore we cannot unmap an entire
64-bit address space with a single ioctl call.  We'd need a flag that
modifies the ioctl behavior to work in terms of some page size to
achieve this; for example, if iova and size were in units of 4K pages,
we wouldn't have this issue.  Thanks,

Alex
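
For illustration, a minimal sketch (not the QEMU code) of the limitation
described above: VFIO_IOMMU_UNMAP_DMA takes a struct
vfio_iommu_type1_dma_unmap whose iova and size fields are __u64 byte
values, so a size of 2^64 is not representable and a whole-address-space
unmap has to be issued as two 2^63-byte halves.  The container_fd below
is a hypothetical, already-open VFIO container file descriptor.

/*
 * Sketch only: shows why size (a __u64 byte count) cannot encode 2^64
 * and how the range is split into two halves instead.
 */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int unmap_range(int container_fd, uint64_t iova, uint64_t size)
{
    struct vfio_iommu_type1_dma_unmap unmap;

    memset(&unmap, 0, sizeof(unmap));
    unmap.argsz = sizeof(unmap);
    unmap.iova = iova;
    unmap.size = size;          /* byte count; 2^64 is not representable */

    return ioctl(container_fd, VFIO_IOMMU_UNMap_DMA, &unmap);
}

static int unmap_all(int container_fd)
{
    /*
     * Two 2^63-byte halves cover the full 64-bit IOVA space.  As noted
     * in the thread, a mapping that straddles the 2^63 boundary would
     * still make both calls fail.
     */
    uint64_t half = UINT64_C(1) << 63;
    int ret;

    ret = unmap_range(container_fd, 0, half);
    if (ret) {
        return ret;
    }
    return unmap_range(container_fd, half, half);
}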
