From: Vishal Annapurve
Subject: Re: [PATCH v8 1/8] mm/memfd: Introduce userspace inaccessible memfd
Date: Thu, 3 Nov 2022 21:57:11 +0530

On Mon, Oct 24, 2022 at 8:30 PM Kirill A. Shutemov
<kirill.shutemov@linux.intel.com> wrote:
>
> On Fri, Oct 21, 2022 at 04:18:14PM +0000, Sean Christopherson wrote:
> > On Fri, Oct 21, 2022, Chao Peng wrote:
> > > >
> > > > In the context of userspace inaccessible memfd, what would be a
> > > > suggested way to enforce NUMA memory policy for physical memory
> > > > allocation? mbind[1] won't work here in the absence of a virtual
> > > > address range.
> > >
> > > How about set_mempolicy():
> > > https://www.man7.org/linux/man-pages/man2/set_mempolicy.2.html
> >
> > Andy Lutomirski brought this up in an off-list discussion way back when
> > the whole private-fd thing was first being proposed.
> >
> >   : The current Linux NUMA APIs (mbind, move_pages) work on virtual
> >   : addresses.  If we want to support them for TDX private memory, we
> >   : either need TDX private memory to have an HVA or we need file-based
> >   : equivalents. Arguably we should add fmove_pages and fbind syscalls
> >   : anyway, since the current API is quite awkward even for tools like
> >   : numactl.
>
> Yeah, we definitely have gaps in the API wrt NUMA, but I don't think it has
> to be addressed in the initial submission.
>
> BTW, it is not a regression compared to the old KVM slots if the memory is
> backed by a memfd or another file:
>
> MBIND(2)
>        The specified policy will be ignored for any MAP_SHARED mappings in
>        the specified memory range.  Rather the pages will be allocated
>        according to the memory policy of the thread that caused the page to
>        be allocated.
>        Again, this may not be the thread that called mbind().
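
Right, so in practice the only knob that applies today is the policy of
whichever thread ends up allocating the page. A minimal sketch of the
set_mempolicy() approach suggested above, assuming the backing pages of
the inaccessible memfd get populated via fallocate() (node number and
error handling are placeholders; link with -lnuma):

#define _GNU_SOURCE
#include <fcntl.h>      /* fallocate() */
#include <numaif.h>     /* set_mempolicy(), MPOL_BIND, MPOL_DEFAULT */
#include <stdio.h>

/*
 * Bind this thread to 'node', populate [offset, offset + len) of the
 * memfd, then restore the default policy.  Per the MBIND(2) excerpt
 * above, the pages are allocated according to this thread's policy at
 * allocation time.
 */
static int populate_on_node(int memfd, off_t offset, off_t len, int node)
{
        unsigned long nodemask = 1UL << node;
        int ret;

        if (set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8) < 0) {
                perror("set_mempolicy");
                return -1;
        }

        /* Assumption: fallocate() is what instantiates the backing pages. */
        ret = fallocate(memfd, 0, offset, len);
        if (ret < 0)
                perror("fallocate");

        set_mempolicy(MPOL_DEFAULT, NULL, 0);
        return ret;
}
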
>
> It is not clear how to define fbind(2) semantics, considering that multiple
> processes may compete for the same region of page cache.
>
> Should it be per-inode or per-fd? Or maybe per-range in inode/fd?
>

David's analysis of mempolicy with shmem seems to be right: set_policy
on a virtual address range does seem to change the shared policy of the
inode irrespective of the mapping type.

Maybe having a way to set NUMA policy per-range in the inode would be
on par with what we can do today via mbind on virtual address ranges.
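
Purely as a strawman (nothing like this exists today), such a per-range
interface could mirror mbind(2) but be keyed by (fd, offset, len) and
record the policy in the inode's shared policy, the way shmem's
set_policy already does, so it would apply regardless of which process
or thread ends up causing the allocation:

#include <numaif.h>     /* MPOL_BIND */
#include <sys/types.h>  /* off_t */

/* Hypothetical syscall, for discussion only -- it does not exist. */
long fbind(int fd, off_t offset, off_t len, int mode,
           const unsigned long *nodemask, unsigned long maxnode,
           unsigned int flags);

/* Usage would then look roughly like mbind(), e.g. binding the first
 * 1G of the inaccessible memfd to node 1: */
static void bind_first_gig(int memfd)
{
        unsigned long nodemask = 1UL << 1;

        fbind(memfd, 0, 1UL << 30, MPOL_BIND,
              &nodemask, sizeof(nodemask) * 8, 0);
}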



> fmove_pages(2) should be relatively straightforward, since it is
> best-effort and does not guarantee that the page will not be moved
> somewhere else just after return from the syscall.
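
Agreed. For symmetry, the fd-based counterpart of move_pages(2) might
look something like the below -- again purely hypothetical, taking file
offsets instead of user virtual addresses and staying best-effort:

#include <sys/types.h>  /* off_t */

/* Hypothetical counterpart of move_pages(2), for discussion only. */
long fmove_pages(int fd, unsigned long count, const off_t *offsets,
                 const int *nodes, int *status, int flags);
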
>
> --
>   Kiryl Shutsemau / Kirill A. Shutemov


