Re: SEV guest attestation

From: Daniel P. Berrangé
Subject: Re: SEV guest attestation
Date: Thu, 25 Nov 2021 13:56:47 +0000
User-agent: Mutt/2.1.3 (2021-09-10)

On Thu, Nov 25, 2021 at 03:50:46PM +0200, Dov Murik wrote:
> On 25/11/2021 15:27, Daniel P. Berrangé wrote:
> > On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert wrote:
> >> * Daniel P. Berrangé (berrange@redhat.com) wrote:
> >>> On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
> >>>> Hi,
> >>>>
> >>>> We recently discussed a way for remote SEV guest attestation
> >>>> through QEMU. My initial approach was to get the data needed for
> >>>> attestation through different QMP commands (all of which are
> >>>> already available, so no changes required there), deriving hashes
> >>>> and certificate data, and collecting all of this into a new QMP
> >>>> struct (SevLaunchStart, which would include the VM's policy,
> >>>> secret, and GPA) which would need to be upstreamed into QEMU. Once
> >>>> this is provided, QEMU would then need to have support for
> >>>> attestation before a VM is started. Upon speaking to Dave about
> >>>> this proposal, he mentioned that this may not be the best approach,
> >>>> as some situations would render the attestation unavailable, such
> >>>> as the instance where a VM is running in a cloud and a guest owner
> >>>> would like to perform attestation via QMP (a likely scenario), yet
> >>>> a cloud provider cannot simply let anyone pass arbitrary QMP
> >>>> commands.
> >>>
> >>> As a general point, QMP is a low level QEMU implementation detail,
> >>> which is generally expected to be consumed exclusively on the host
> >>> by a privileged mgmt layer, which will in turn expose its own higher
> >>> level APIs to users or other apps. I would not expect to see QMP
> >>> exposed to anything outside of the privileged host layer.
> >>>
> >>> We also use the QAPI protocol for QEMU guest agent communication,
> >>> however, that is a distinct service from QMP on the host. It shares
> >>> most infra with QMP but has a completely different command set. On
> >>> the host it is not consumed inside QEMU, but instead consumed by a
> >>> mgmt app like libvirt.
> >>>
> >>>> So I ask, does anyone involved in QEMU's SEV implementation have
> >>>> any input on a quality way to perform guest attestation? If so,
> >>>> I'd be interested.
> >>>
> >>> I think what's missing is some clearer illustrations of how this
> >>> feature is expected to be consumed in some real world application
> >>> and the use cases we're trying to solve.
> >>>
> >>> I'd like to understand how it should fit in with common libvirt
> >>> applications across the different virtualization management
> >>> scenarios - e.g. virsh (command line), virt-manager (local desktop
> >>> GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
> >>> And of course any non-traditional virt use cases that might be
> >>> relevant such as Kata.
> >>
> >> That's still not that clear; I know Alice and Sergio have some
> >> ideas (cc'd).
> >> There are also some standardisation efforts (e.g.
> >> https://www.potaroo.net/ietf/html/ids-wg-rats.html and
> >> https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html)
> >> - that I can't claim to fully understand.
> >> However, there are some themes that are emerging:
> >>
> >>   a) One use is to only allow a VM to access some private data once we
> >> prove it's the VM we expect running in a secure/confidential system
> >>   b) (a) normally involves requesting some proof from the VM and then
> >> providing it some confidential data/a key if it's OK
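The (a)/(b) themes above amount to a verify-then-release handshake on the Guest Owner's side. A minimal sketch of that pattern, with entirely hypothetical function and key names (the real SEV flow uses the PSP-negotiated integrity key and the LAUNCH_MEASURE result, not these placeholders):

```python
import hashlib
import hmac
import os

def verify_measurement(tik: bytes, measurement: bytes, context: bytes) -> bool:
    """Step (a): check that the reported launch measurement matches what we
    expect for our known firmware/kernel image (context), keyed with an
    integrity key (tik) shared only with the PSP."""
    expected = hmac.new(tik, context, hashlib.sha256).digest()
    return hmac.compare_digest(expected, measurement)

def release_secret(tik: bytes, measurement: bytes, context: bytes,
                   secret: bytes) -> bytes:
    """Step (b): only hand out the confidential data if step (a) passed.
    In the real protocol the secret would then be encrypted to the PSP
    and injected via LAUNCH_SECRET, not returned in the clear."""
    if not verify_measurement(tik, measurement, context):
        raise RuntimeError("attestation failed; refusing to release secret")
    return secret

# Demo with made-up values:
tik = os.urandom(32)
context = b"expected OVMF + kernel hashes"
good = hmac.new(tik, context, hashlib.sha256).digest()
assert release_secret(tik, good, context, b"disk passphrase") == b"disk passphrase"
```

The point of the sketch is only the ordering: no secret leaves the Guest Owner until the measurement check succeeds.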
> > 
> > I guess I'm wondering what threat we're protecting against,
> > and/or which pieces of the stack we can trust?
> > 
> > e.g. if the host has 2 VMs running, we verify the 1st and provide
> > its confidential data back to the host, what stops the host giving
> > that data to the 2nd non-verified VM?
> The host can't read the injected secret: it is encrypted with a key
> that is available only to the PSP.  The PSP receives it and writes it
> into guest-encrypted memory (which the host also cannot read; for the
> guest it's a simple memory access with C-bit=1).  So it's a
> per-VM-invocation secret.
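For illustration, once the PSP has written the secret into C-bit=1 memory, the guest simply reads it back at the agreed GPA. A toy parser for a GUIDed secret table follows; the layout here is my own simplification, loosely modelled on the secret area OVMF and the Linux efi_secret driver use, so treat the exact offsets as assumptions rather than the real format:

```python
import struct
import uuid

def parse_secret_table(blob: bytes) -> dict:
    """Toy layout: a 16-byte table GUID + u32 total length header,
    followed by entries of 16-byte GUID + u32 entry length (the length
    covers the 20-byte entry header plus payload)."""
    (total_len,) = struct.unpack_from("<I", blob, 16)
    secrets, off = {}, 20
    while off < total_len:
        guid = uuid.UUID(bytes_le=blob[off:off + 16])
        (elen,) = struct.unpack_from("<I", blob, off + 16)
        secrets[str(guid)] = blob[off + 20:off + elen]
        off += elen
    return secrets

# Build a synthetic one-entry table and read it back:
g_tab, g_sec = uuid.uuid4(), uuid.uuid4()
payload = b"disk passphrase"
entry = g_sec.bytes_le + struct.pack("<I", 20 + len(payload)) + payload
blob = g_tab.bytes_le + struct.pack("<I", 20 + len(entry)) + entry
assert parse_secret_table(blob)[str(g_sec)] == payload
```

Nothing here involves the host: by the time the guest parses this, the plaintext only ever existed inside PSP- and guest-encrypted memory.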

Is there some way the PSP verifies which VM is supposed to receive
the injected data? i.e. the host can't read it, but could it tell the
PSP to inject it into VM B instead of VM A?

> > Presumably the data has to be encrypted with a key that is uniquely
> > tied to this specific boot attempt of the verified VM, and not
> > accessible to any other VM, or to future boots of this VM ?
> Yes, the launch blob, which (if I recall correctly) the Guest Owner
> should generate and give to the Cloud Provider so it can start a VM
> with it (this is one of the options on the sev-guest object).
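For reference, the sev-guest object options being referred to look roughly like this on a recent QEMU command line. Paths and the policy value are placeholders, and cbitpos/reduced-phys-bits depend on the host CPU generation; session-file carries the base64 launch blob and dh-cert-file the Guest Owner's DH certificate:

```shell
# Illustrative fragment only - values below are placeholders.
qemu-system-x86_64 \
  -machine q35,confidential-guest-support=sev0 \
  -object sev-guest,id=sev0,policy=0x1,cbitpos=47,reduced-phys-bits=1,\
session-file=/path/to/launch_blob.b64,dh-cert-file=/path/to/godh.b64
```

With these options the PSP binds the launch to the session the Guest Owner generated, which is what the questions below are probing at.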

Does something stop the host from booting a 2nd VM on the side with
the same launch blob, and thus also being able to tell the PSP to
inject the secret data into this 2nd VM later too?

|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
