qemu-devel

From: Dov Murik
Subject: Re: [PATCH] docs: Add measurement calculation details to amd-memory-encryption.txt
Date: Thu, 16 Dec 2021 23:41:27 +0200
User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101 Thunderbird/91.4.0


On 16/12/2021 18:09, Daniel P. Berrangé wrote:
> On Thu, Dec 16, 2021 at 12:38:34PM +0200, Dov Murik wrote:
>>
>>
>> On 14/12/2021 20:39, Daniel P. Berrangé wrote:
>>> On Tue, Dec 14, 2021 at 01:59:10PM +0000, Dov Murik wrote:
>>>> Add a section explaining how the Guest Owner should calculate the
>>>> expected guest launch measurement for SEV and SEV-ES.
>>>>
>>>> Also update the name and link to the SEV API Spec document.
>>>>
>>>> Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
>>>> Suggested-by: Daniel P. Berrangé <berrange@redhat.com>
>>>> ---
>>>>  docs/amd-memory-encryption.txt | 50 +++++++++++++++++++++++++++++++---
>>>>  1 file changed, 46 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/docs/amd-memory-encryption.txt b/docs/amd-memory-encryption.txt
>>>> index ffca382b5f..f97727482f 100644
>>>> --- a/docs/amd-memory-encryption.txt
>>>> +++ b/docs/amd-memory-encryption.txt
>>>> @@ -43,7 +43,7 @@ The guest policy is passed as plaintext. A hypervisor may choose to read it,
>>>>  but should not modify it (any modification of the policy bits will result
>>>>  in bad measurement). The guest policy is a 4-byte data structure containing
>>>>  several flags that restricts what can be done on a running SEV guest.
>>>> -See KM Spec section 3 and 6.2 for more details.
>>>> +See SEV API Spec [1] section 3 and 6.2 for more details.
>>>>  
>>>>  The guest policy can be provided via the 'policy' property (see below)
>>>>  
>>>> @@ -88,7 +88,7 @@ expects.
>>>>  LAUNCH_FINISH finalizes the guest launch and destroys the cryptographic
>>>>  context.
>>>>  
>>>> -See SEV KM API Spec [1] 'Launching a guest' usage flow (Appendix A) for the
>>>> +See SEV API Spec [1] 'Launching a guest' usage flow (Appendix A) for the
>>>>  complete flow chart.
>>>>  
>>>>  To launch a SEV guest
>>>> @@ -113,6 +113,45 @@ a SEV-ES guest:
>>>>   - Requires in-kernel irqchip - the burden is placed on the hypervisor to
>>>>     manage booting APs.
>>>>  
>>>> +Calculating expected guest launch measurement
>>>> +---------------------------------------------
>>>> +In order to verify the guest launch measurement, The Guest Owner must compute
>>>> +it in the exact same way as it is calculated by the AMD-SP.  SEV API Spec [1]
>>>> +section 6.5.1 describes the AMD-SP operations:
>>>> +
>>>> +    GCTX.LD is finalized, producing the hash digest of all plaintext data
>>>> +    imported into the guest.
>>>> +
>>>> +    The launch measurement is calculated as:
>>>> +
>>>> +    HMAC(0x04 || API_MAJOR || API_MINOR || BUILD || GCTX.POLICY || GCTX.LD || MNONCE; GCTX.TIK)
>>>> +
>>>> +    where "||" represents concatenation.
>>>> +
>>>> +The values of API_MAJOR, API_MINOR, BUILD, and GCTX.POLICY can be obtained
>>>> +from the 'query-sev' qmp command.
>>>> +
>>>> +The value of MNONCE is part of the response of 'query-sev-launch-measure': it
>>>> +is the last 16 bytes of the base64-decoded data field (see SEV API Spec [1]
>>>> +section 6.5.2 Table 52: LAUNCH_MEASURE Measurement Buffer).
>>>> +
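
(For illustration only: on the Guest Owner side this could be computed along
the lines of the following minimal Python sketch. The helper name and the
little-endian packing of GCTX.POLICY are assumptions of mine, and gctx_ld is
the SHA-256 digest described just below.)

import base64, hashlib, hmac, struct

def expected_measurement(api_major, api_minor, build, policy,
                         gctx_ld, tik, launch_measure_b64):
    # 'query-sev-launch-measure' returns base64(measurement || MNONCE);
    # MNONCE is the last 16 bytes of the decoded buffer.
    blob = base64.b64decode(launch_measure_b64)
    mnonce = blob[-16:]
    msg = (bytes([0x04, api_major, api_minor, build]) +  # values from 'query-sev'
           struct.pack('<I', policy) +                   # GCTX.POLICY, 4 bytes, assumed LE
           gctx_ld +                                     # 32-byte SHA-256 digest (see below)
           mnonce)
    # The launch measurement is an HMAC-SHA-256 keyed with GCTX.TIK
    return hmac.new(tik, msg, hashlib.sha256).digest()
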
>>>> +The value of GCTX.LD is SHA256(firmware_blob || kernel_hashes_blob || vmsas_blob),
>>>> +where:
>>>> +
>>>> +* firmware_blob is the content of the entire firmware flash file (for example,
>>>> +  OVMF.fd).
>>>
>>> Let's add a caveat that the firmware flash should be built to be stateless,
>>> i.e. that it is not secure to attempt to measure a guest where the firmware
>>> uses an NVRAM store.
>>>
>>
>> * firmware_blob is the content of the entire firmware flash file (for
>>   example, OVMF.fd).  Note that you must build a stateless firmware file
>>   which doesn't use an NVRAM store, because the NVRAM area is not
>>   measured, and therefore it is not secure to use a firmware which uses
>>   state from an NVRAM store.
> 
> Looks good to me.
> 
>>>> +* if kernel is used, and kernel-hashes=on, then kernel_hashes_blob is the
>>>> +  content of PaddedSevHashTable (including the zero padding), which itself
>>>> +  includes the hashes of kernel, initrd, and cmdline that are passed to the
>>>> +  guest.  The PaddedSevHashTable struct is defined in target/i386/sev.c .
>>>> +* if SEV-ES is enabled (policy & 0x4 != 0), vmsas_blob is the concatenation of
>>>> +  all VMSAs of the guest vcpus.  Each VMSA is 4096 bytes long; its content is
>>>> +  defined inside Linux kernel code as struct vmcb_save_area, or in AMD APM
>>>> +  Volume 2 [2] Table B-2: VMCB Layout, State Save Area.
>>>
>>> Is there any practical guidance we can give apps on the way the VMSAs
>>> can be expected to be initialized? e.g. can they assume essentially
>>> all fields in vmcb_save_area are 0-initialized except for certain
>>> ones? Is initialization likely to vary at all across KVM or EDK2
>>> versions or something?
>>
>> From my own experience, the VMSA of vcpu0 doesn't change; it is basically
>> what QEMU sets up in x86_cpu_reset() (which is mostly zeros but not all).
>> I don't know if it may change in newer QEMU (machine types?) or kvm.  As
>> for vcpu1+, in SEV-ES the CS:EIP for the APs is taken from a GUIDed table
>> at the end of the OVMF image, and it actually changed a few months ago
>> when the memory layout changed to support both TDX and SEV.
> 
> That is an unpleasantly large number of moving parts that could
> potentially impact the expected state :-(  I think we need to
> be careful to avoid gratuitous changes, to avoid creating a
> combinatorial expansion in the number of possibly valid VMSA
> blocks.
> 
> It makes me wonder if we need to think about defining some
> standard approach for distro vendors (and/or cloud vendors)
> to publish the expected contents for various combinations
> of their software pieces.
> 
>>
>>
>> Here are the VMSAs for my 2-vcpu SEV-ES VM:
>>
>>
>> $ hd vmsa/vmsa_cpu0.bin
> 
> ...snipp...
> 
> Was there a nice approach / tool you used to capture
> this initial state?
> 

I wouldn't qualify this as nice: I ended up modifying my
host kernel's kvm (see patch below).  Later I wrote a
script to parse that hex dump from the kernel log into
proper 4096-byte binary VMSA files.
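
Roughly like this (a reconstructed sketch, not the exact script; it assumes
the DEBUG_VMSA markers added by the patch below and the default
print_hex_dump line format):

import re
import sys

# Collect the DEBUG_VMSA hex dump from the kernel log (e.g. 'dmesg' output)
# and write one 4096-byte vmsa_cpuN.bin file per vcpu.
start_re = re.compile(r'DEBUG_VMSA - cpu (\d+) START')
hex_re = re.compile(r'DEBUG_VMSA[0-9a-f]{8}: ((?:[0-9a-f]{2} ){15}[0-9a-f]{2})')

cpu, buf = None, bytearray()
for line in open(sys.argv[1]):
    m = start_re.search(line)
    if m:
        cpu, buf = int(m.group(1)), bytearray()
        continue
    m = hex_re.search(line)
    if m and cpu is not None:
        buf += bytes.fromhex(m.group(1))      # 16 bytes per dump line
        if len(buf) == 4096:                  # PAGE_SIZE
            with open('vmsa_cpu%d.bin' % cpu, 'wb') as f:
                f.write(buf)
            cpu = None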



diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 7fbce342eec4..4e45fe37b93d 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -624,6 +624,12 @@ static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
                 */
                clflush_cache_range(svm->vmsa, PAGE_SIZE);

+                /* dubek */
+                pr_info("DEBUG_VMSA - cpu %d START ---------------\n", i);
+                print_hex_dump(KERN_INFO, "DEBUG_VMSA", DUMP_PREFIX_OFFSET, 16, 1, svm->vmsa, PAGE_SIZE, true);
+                pr_info("DEBUG_VMSA - cpu %d END ---------------\n", i);
+                /* ----- */
+
                vmsa.handle = sev->handle;
                vmsa.address = __sme_pa(svm->vmsa);
                vmsa.len = PAGE_SIZE;
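

Once the vmsa_cpu*.bin files are extracted, GCTX.LD from the documentation
text above can be computed by hashing the concatenation of the blobs, for
example (only a sketch; the file names here are just examples):

import hashlib

# GCTX.LD = SHA256(firmware_blob || kernel_hashes_blob || vmsas_blob);
# the hashes table blob is only present when kernel-hashes=on, and the
# VMSA blobs only for SEV-ES guests.
h = hashlib.sha256()
for path in ['OVMF.fd', 'sev_hashes_table.bin',
             'vmsa_cpu0.bin', 'vmsa_cpu1.bin']:
    with open(path, 'rb') as f:
        h.update(f.read())
gctx_ld = h.digest()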




