qemu-devel

From: Paolo Bonzini
Subject: Re: [PATCH v4 22/33] hostmem-epc: Add the reset interface for EPC backend reset
Date: Fri, 10 Sep 2021 21:51:41 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.11.0

On 10/09/21 19:34, Sean Christopherson wrote:
On Fri, Sep 10, 2021, Paolo Bonzini wrote:
On 10/09/21 17:34, Sean Christopherson wrote:
The only other option that comes to mind is a dedicated ioctl().

If it is not too restrictive to do it for the whole mapping at once,
that would be fine.
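
For illustration, a whole-mapping reset could look something like the sketch
below; the ioctl name, request number and "returns how many pages could not
be removed" convention are placeholders, not an agreed-upon uAPI:

    /* Placeholder uAPI sketch, not a real interface. */
    #include <sys/ioctl.h>
    #include <linux/ioctl.h>

    #define SGX_MAGIC                0xA4
    /*
     * EREMOVE every page of the vEPC mapping backed by this fd.  Assumed
     * to return the number of pages that could not be removed (e.g. SECS
     * pages that still have children), 0 once the section is empty.
     */
    #define SGX_IOC_VEPC_REMOVE_ALL  _IO(SGX_MAGIC, 0x04)

    static int sgx_vepc_reset(int vepc_fd)
    {
        return ioctl(vepc_fd, SGX_IOC_VEPC_REMOVE_ALL);
    }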

Oooh, rats.  That reminds me of a complication.  If QEMU creates multiple EPC
sections, e.g. for a vNUMA setup, resetting each section individually will fail
if the guest did an unclean RESET and a given enclave has EPC pages from
multiple sections.  E.g. an SECS in vEPC[X] can have children in vEPC[0..N-1],
and all those children need to be removed before the SECS can be removed.
Yay SGX!

There are two options: 1) QEMU has to handle "failure", or 2) the kernel
provides an ioctl() that takes multiple vEPC fds and handles the SECS
dependencies.  #1 is probably the least awful option.  For #2, in addition to
the kernel having to deal with multiple fds, it would also need a second
list_head object in each page so that it could track which pages failed to be
removed.  Using the existing list_head would work for now, but it won't work
if/when an EPC cgroup is added.

Note, for #1, QEMU would have to potentially do three passes.

   1. Remove child pages for a given vEPC.
   2. Remove SECS for a given vEPC that were pinned by children in the same
      vEPC.
   3. Remove SECS for all vEPC that were pinned by children in different vEPC.
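
As a sketch (reusing the placeholder ioctl and sgx_vepc_reset() helper from
above), those three passes in QEMU would amount to something like:

    /* Sketch of the three passes over every vEPC section. */
    static void sgx_epc_reset_all(int *vepc_fds, int nr_vepc)
    {
        int i;

        for (i = 0; i < nr_vepc; i++) {
            /* Pass 1: removes every child page in this vEPC, plus any
             * SECS that has no children left. */
            sgx_vepc_reset(vepc_fds[i]);
            /* Pass 2: SECS pages whose children were in this same vEPC
             * became removable during pass 1. */
            sgx_vepc_reset(vepc_fds[i]);
        }

        /* Pass 3: SECS pages whose children were in *other* vEPCs only
         * become removable once every section has been processed, so go
         * over all of them once more. */
        for (i = 0; i < nr_vepc; i++) {
            sgx_vepc_reset(vepc_fds[i]);
        }
    }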

It's also possible that QEMU handles failure but the kernel does two passes internally; then QEMU can just do two passes of its own. The kernel will do four passes overall, but:

1) the second (SECS pinned by children in the same vEPC) would be cheaper than a full second pass, since it only has to revisit the pages that failed the first time

2) the fourth would actually do nothing, because there would be no pages failing the EREMOV'al.
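
On the kernel side the two internal passes could be shaped roughly like this
(only a sketch: the struct layout and the sgx_vepc_free_page() helper are made
up, and it reuses the existing list_head, which is fine for now but not once
an EPC cgroup enters the picture):

    /* Kernel-side sketch; on success sgx_vepc_free_page() is assumed to
     * EREMOVE the page and drop it from whatever list it is on. */
    static long sgx_vepc_remove_all(struct sgx_vepc *vepc)
    {
        struct sgx_epc_page *page, *tmp;
        LIST_HEAD(secs_pages);
        long failures = 0;

        /* Pass 1: EREMOVE everything; an SECS that still has children
         * fails with SGX_CHILD_PRESENT and is set aside. */
        list_for_each_entry_safe(page, tmp, &vepc->pages, list) {
            if (sgx_vepc_free_page(page))
                list_move_tail(&page->list, &secs_pages);
        }

        /* Pass 2: retry the SECS pages; those whose children lived in
         * this same vEPC are gone now.  Anything still pinned (children
         * in another vEPC) goes back on the main list and is reported,
         * so userspace knows another round is needed. */
        list_for_each_entry_safe(page, tmp, &secs_pages, list) {
            if (sgx_vepc_free_page(page)) {
                list_move_tail(&page->list, &vepc->pages);
                failures++;
            }
        }

        return failures;
    }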

A hypothetical other SGX client that only uses one vEPC will do the right thing with a single pass.

Paolo



