qemu-devel

From: Wang, Wei W
Subject: RE: [PATCH v8 1/8] mm/memfd: Introduce userspace inaccessible memfd
Date: Thu, 22 Sep 2022 13:23:42 +0000

On Thursday, September 22, 2022 5:11 AM, Andy Lutomirski wrote:
> To: Christopherson,, Sean <seanjc@google.com>; David Hildenbrand
> <david@redhat.com>
> Cc: Chao Peng <chao.p.peng@linux.intel.com>; kvm list
> <kvm@vger.kernel.org>; Linux Kernel Mailing List
> <linux-kernel@vger.kernel.org>; linux-mm@kvack.org;
> linux-fsdevel@vger.kernel.org; Linux API <linux-api@vger.kernel.org>;
> linux-doc@vger.kernel.org; qemu-devel@nongnu.org; Paolo Bonzini
> <pbonzini@redhat.com>; Jonathan Corbet <corbet@lwn.net>; Vitaly
> Kuznetsov <vkuznets@redhat.com>; Wanpeng Li <wanpengli@tencent.com>;
> Jim Mattson <jmattson@google.com>; Joerg Roedel <joro@8bytes.org>;
> Thomas Gleixner <tglx@linutronix.de>; Ingo Molnar <mingo@redhat.com>;
> Borislav Petkov <bp@alien8.de>; the arch/x86 maintainers <x86@kernel.org>;
> H. Peter Anvin <hpa@zytor.com>; Hugh Dickins <hughd@google.com>; Jeff
> Layton <jlayton@kernel.org>; J . Bruce Fields <bfields@fieldses.org>; Andrew
> Morton <akpm@linux-foundation.org>; Shuah Khan <shuah@kernel.org>;
> Mike Rapoport <rppt@kernel.org>; Steven Price <steven.price@arm.com>;
> Maciej S . Szmigiero <mail@maciej.szmigiero.name>; Vlastimil Babka
> <vbabka@suse.cz>; Vishal Annapurve <vannapurve@google.com>; Yu Zhang
> <yu.c.zhang@linux.intel.com>; Kirill A. Shutemov
> <kirill.shutemov@linux.intel.com>; Nakajima, Jun <jun.nakajima@intel.com>;
> Hansen, Dave <dave.hansen@intel.com>; Andi Kleen <ak@linux.intel.com>;
> aarcange@redhat.com; ddutile@redhat.com; dhildenb@redhat.com; Quentin
> Perret <qperret@google.com>; Michael Roth <michael.roth@amd.com>;
> Hocko, Michal <mhocko@suse.com>; Muchun Song
> <songmuchun@bytedance.com>; Wang, Wei W <wei.w.wang@intel.com>;
> Will Deacon <will@kernel.org>; Marc Zyngier <maz@kernel.org>; Fuad Tabba
> <tabba@google.com>
> Subject: Re: [PATCH v8 1/8] mm/memfd: Introduce userspace inaccessible
> memfd
> 
> (Please excuse any formatting disasters.  My internet went out as I was
> composing this, and I did my best to rescue it.)
> 
> On Mon, Sep 19, 2022, at 12:10 PM, Sean Christopherson wrote:
> > +Will, Marc and Fuad (apologies if I missed other pKVM folks)
> >
> > On Mon, Sep 19, 2022, David Hildenbrand wrote:
> >> On 15.09.22 16:29, Chao Peng wrote:
> >> > From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
> >> >
> >> > KVM can use memfd-provided memory for guest memory. For normal
> >> > userspace accessible memory, KVM userspace (e.g. QEMU) mmaps the
> >> > memfd into its virtual address space and then tells KVM to use the
> >> > virtual address to set up the mapping in the secondary page table
> >> > (e.g. EPT).
> >> >
> >> > With confidential computing technologies like Intel TDX, the
> >> > memfd-provided memory may be encrypted with special key for special
> >> > software domain (e.g. KVM guest) and is not expected to be directly
> >> > accessed by userspace. More precisely, userspace access to such
> >> > encrypted memory may lead to a host crash, so it should be prevented.
> >>
> >> Initially my thought was that this whole inaccessible thing is TDX
> >> specific and there is no need to force that on other mechanisms.
> >> That's why I suggested to not expose this to user space but handle
> >> the notifier requirements internally.
> >>
> >> IIUC now, protected KVM has similar demands: either kind of access
> >> (read or write) to guest RAM would result in a fault and possibly
> >> crash the hypervisor (though at least not the whole machine, IIUC).
> >
> > Yep.  The missing piece for pKVM is the ability to convert from shared
> > to private while preserving the contents, e.g. to hand off a large
> > buffer (hundreds of MiB) for processing in the protected VM.  Thoughts
> > on this at the bottom.
> >
> >> > This patch introduces userspace inaccessible memfd (created with
> >> > MFD_INACCESSIBLE). Its memory is inaccessible from userspace
> >> > through ordinary MMU access (e.g. read/write/mmap) but can be
> >> > accessed via in-kernel interface so KVM can directly interact with
> >> > core-mm without the need to map the memory into KVM userspace.
> >>
> >> With secretmem we decided to not add such "concept switch" flags and
> >> instead use a dedicated syscall.
> >>
> >
> > I have no personal preference whatsoever between a flag and a
> > dedicated syscall, but a dedicated syscall does seem like it would
> > give the kernel a bit more flexibility.
> 
> The third option is a device node, e.g. /dev/kvm_secretmem or
> /dev/kvm_tdxmem or similar.  But if we need flags or other details in the
> future, maybe this isn't ideal.
> 
> >
> >> What about memfd_inaccessible()? Especially since sealing and hugetlb
> >> are not even supported, and it might take a while to support either.
> >
> > Don't know about sealing, but hugetlb support for "inaccessible"
> > memory needs to come sooner rather than later.  "inaccessible" in
> > quotes because we might want to choose a less binary name, e.g.
> > "restricted"?
> >
> > Regarding pKVM's use case, with the shim approach I believe this can
> > be done by allowing userspace to mmap() the "hidden" memfd, but with a
> > ton of restrictions piled on top.
> >
> > My first thought was to make the uAPI a set of KVM ioctls so that KVM
> > could tightly control usage without taking on too much complexity in
> > the kernel, but working through things, routing the behavior through
> > the shim itself might not be all that horrific.
> >
> > IIRC, we discarded the idea of allowing userspace to map the "private"
> > fd because things got too complex, but with the shim it doesn't seem
> > _that_ bad.
> 
> What's the exact use case?  Is it just to pre-populate the memory?

Add one more use case here. For TDX live migration support, on the
destination side, we map the private fd during migration to store the
encrypted private memory data sent from the source; at the end of
migration, we unmap it and make it inaccessible before resuming the TD.


