Re: SEV guest attestation

From: Brijesh Singh
Subject: Re: SEV guest attestation
Date: Mon, 29 Nov 2021 08:49:13 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.13.0

On 11/29/21 8:29 AM, Brijesh Singh wrote:

On 11/25/21 7:59 AM, Dov Murik wrote:
[+cc Tom, Brijesh]

On 25/11/2021 15:42, Daniel P. Berrangé wrote:
On Thu, Nov 25, 2021 at 02:44:51PM +0200, Dov Murik wrote:
[+cc jejb, tobin, jim, hubertus]

On 25/11/2021 9:14, Sergio Lopez wrote:
On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert wrote:
* Daniel P. Berrangé (berrange@redhat.com) wrote:
On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:

We recently discussed a way for remote SEV guest attestation through QEMU. My initial approach was to get the data needed for attestation through different QMP commands (all of which are already available, so no changes are required there), deriving hashes and certificate data, and collecting all of this into a new QMP struct (SevLaunchStart, which would include the VM's policy, secret, and GPA) which would need to be upstreamed into QEMU. Once this is provided, QEMU would then need to support attestation before a VM is started.

Upon speaking to Dave about this proposal, he mentioned that this may not be the best approach, as some situations would render the attestation unavailable, such as the instance where a VM is running in a cloud and a guest owner would like to perform attestation via QMP (a likely scenario), yet a cloud provider cannot simply let anyone pass arbitrary QMP commands, as this could be an issue.
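For reference, the QMP commands mentioned above do already exist in QEMU (query-sev, query-sev-capabilities, query-sev-launch-measure); a rough sketch of collecting their results into one aggregate follows. The field names of the SevLaunchStart-like dict are illustrative only, and the wiring of `send` to the monitor socket is omitted:

```python
import json

def qmp_cmd(name, **args):
    """Serialize a QMP command frame."""
    msg = {"execute": name}
    if args:
        msg["arguments"] = args
    return json.dumps(msg)

def collect_launch_start(send):
    """Gather the pieces of a hypothetical SevLaunchStart aggregate.

    `send` is a callable that transmits one QMP command frame and returns
    the command's 'return' payload; how it reaches the monitor socket is
    out of scope here.
    """
    sev = send(qmp_cmd("query-sev"))                  # policy, state, ...
    caps = send(qmp_cmd("query-sev-capabilities"))    # PDH / cert chain
    meas = send(qmp_cmd("query-sev-launch-measure"))  # launch measurement
    return {
        "policy": sev["policy"],
        "pdh": caps["pdh"],
        "cert-chain": caps["cert-chain"],
        "measurement": meas["data"],
    }
```

The point being that the raw material is all fetchable today; the open question is only who is allowed to issue these commands and over what channel.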

As a general point, QMP is a low level QEMU implementation detail,
which is generally expected to be consumed exclusively on the host
by a privileged mgmt layer, which will in turn expose its own higher
level APIs to users or other apps. I would not expect to see QMP
exposed to anything outside of the privileged host layer.

We also use the QAPI protocol for QEMU guest agent communication;
however, that is a distinct service from QMP on the host. It shares
most infra with QMP but has a completely different command set. On the
host it is not consumed inside QEMU, but instead consumed by a
mgmt app like libvirt.

So I ask, does anyone involved in QEMU's SEV implementation have any input on a quality way to perform guest attestation? If so, I'd be interested.

I think what's missing is some clearer illustrations of how this
feature is expected to be consumed in some real world application
and the use cases we're trying to solve.

I'd like to understand how it should fit in with common libvirt
applications across the different virtualization management
scenarios - eg virsh (command line),  virt-manger (local desktop
GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
And of course any non-traditional virt use cases that might be
relevant such as Kata.

That's still not that clear; I know Alice and Sergio have some ideas
There are also some standardisation efforts (e.g. https://www.potaroo.net/ietf/html/ids-wg-rats.html and https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html
) - that I can't claim to fully understand.
However, there are some themes that are emerging:

   a) One use is to only allow a VM to access some private data once we
prove it's the VM we expect running in a secure/confidential system
   b) (a) normally involves requesting some proof from the VM and then
providing it some confidential data/a key if it's OK
   c) RATS splits the problem up:
https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html#name-architectural-overview
   I don't fully understand the split yet, but in principle there are
at least a few different things:

   d) The comms layer
   e) Something that validates the attestation message (i.e. the
signatures are valid, the hashes all add up etc)
   f) Something that knows what hashes to expect (i.e. oh that's a RHEL
8.4 kernel, or that's a valid kernel command line)
   g) Something that holds some secrets that can be handed out if e & f
are happy.

   There have also been proposals (e.g. Intel HTTPA) for an attestable connection after a VM is running; that's probably quite different from
(g) but still involves (e) & (f).

In the simpler setups d,e,f,g probably live in one place; but it's not
clear where they live - for example one scenario says that your cloud
management layer holds some of them, another says you don't trust your
cloud management layer and you keep them separate.

So I think all we're actually interested in at the moment, is (d) and
(e) and the way for (g) to get the secret back to the guest.

Unfortunately the comms and the contents of them varies heavily with
technology; in some you're talking to the qemu/hypervisor (SEV/SEV-ES) while in some you're talking to the guest after boot (SEV-SNP/TDX maybe
SEV-ES in some cases).

SEV-ES has pre-launch measurement and secret injection, just like SEV
(except that the measurement includes the initial states of all vcpus,
that is, their VMSAs.  BTW that means that in order to calculate the
measurement the Attestation Server must know exactly how many vcpus are
in the VM).
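For concreteness, the launch measurement QEMU reports is an HMAC keyed with the transport integrity key (TIK), which the Attestation Server recomputes over values it must know in advance. A sketch following the layout in the AMD SEV API spec (all sample inputs below are made up; GCTX.LD is the launch digest the firmware accumulates):

```python
import hmac
import hashlib
import struct

def expected_measurement(tik, api_major, api_minor, build, policy,
                         launch_digest, mnonce):
    """Recompute the SEV LAUNCH_MEASURE value on the verifier side.

    Per the AMD SEV API spec:
      MEASURE = HMAC-SHA256(0x04 || API_MAJOR || API_MINOR || BUILD ||
                            GCTX.POLICY || GCTX.LD || MNONCE; key = TIK)
    For plain SEV, GCTX.LD covers the launched memory; for SEV-ES it also
    folds in every vCPU's initial VMSA -- which is why the verifier must
    know the exact vCPU count (and VMSA contents) up front.
    """
    # API_MAJOR/MINOR/BUILD are single bytes, POLICY is 4 bytes LE.
    msg = struct.pack("<BBBBI", 0x04, api_major, api_minor, build, policy)
    msg += launch_digest + mnonce
    return hmac.new(tik, msg, hashlib.sha256).digest()
```

The guest owner compares this against the value returned by query-sev-launch-measure before releasing any secret.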

Does that work with CPU hotplug ? ie cold boot with -smp 4,maxcpus=8
and some time later try to enable the extra 4 cpus at runtime ?

AFAIK all generations of SEV don't support CPU hotplug. Tom, Brijesh -
is that indeed the case?

I think we may be able to do a CPU hotplug on SEV, but hotplug will not work for SEV-ES and SEV-SNP. This is mainly because the register state needs to be measured before the boot.

Tom just pointed out to me that, theoretically, we could do a hotplug of CPUs under SEV-SNP, but I will need to check with the security team just to be sure that we are good from the attestation flow. I can update you guys on it.


I don't know about TDX.


So my expectation at the moment is libvirt needs to provide a transport
layer for the comms, to enable an external validator to retrieve the
measurements from the guest/hypervisor and provide data back if
necessary.  Once this shakes out a bit, we might want libvirt to be
able to invoke the validator; however I expect (f) and (g) to be much
more complex things that don't feel like they belong in libvirt.

We experimented with the attestation flow quite a bit while working on
SEV-ES support for libkrun-tee. One important aspect we noticed quite
early is that there's more data that needs to be exchanged on top
of the attestation itself.

For instance, even before you start the VM, the management layer in
charge of coordinating the confidential VM launch needs to obtain the
virtualization TEE capabilities of the Host (SEV-ES vs. SEV-SNP
vs. TDX) and the platform version, to know which features are
available and whether that host is a candidate for running the VM at all.

With that information, the mgmt layer can build a guest policy (this
is SEV's terminology, but I guess we'll have something similar in
TDX) and feed it to the component launching the VMM (libvirt, in this case).

For SEV-SNP, this is pretty much the end of the story, because the
attestation exchange is driven by an agent inside the guest. Well,
there's also the need to have in the VM a well-known vNIC bridged to a
network that's routed to the Attestation Server, which everyone seems
to consider a given, but to me, from a CSP perspective, it looks like
quite a headache. In fact, I'd go as far as to suggest this
communication should happen through an alternative channel, such as
vsock, with a proxy on the Host, but I guess that depends on the CSP.

If we have an alternative channel (vsock?) and a proxy on the host,
maybe we can share parts of the solution between SEV and SNP.
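A host-side proxy along those lines only needs to splice bytes between a guest-facing vsock listener and a TCP connection to the Attestation Server. A minimal sketch, where the server address and vsock port are placeholders and AF_VSOCK availability depends on the host kernel having vhost-vsock:

```python
import socket
import threading

ATTESTATION_SERVER = ("attestation.example", 8443)  # placeholder address
VSOCK_PORT = 4050                                   # arbitrary choice

def pump(src, dst):
    """Copy bytes from src to dst until src closes, then half-close dst."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass

def splice(a, b):
    """Bidirectionally forward between two connected sockets."""
    t = threading.Thread(target=pump, args=(b, a), daemon=True)
    t.start()
    pump(a, b)
    t.join()

def serve():
    """Accept guest connections on vsock and relay each one upstream."""
    lsock = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    lsock.bind((socket.VMADDR_CID_ANY, VSOCK_PORT))
    lsock.listen()
    while True:
        guest, _ = lsock.accept()
        upstream = socket.create_connection(ATTESTATION_SERVER)
        threading.Thread(target=splice, args=(guest, upstream),
                         daemon=True).start()
```

Nothing here is SEV-specific, which is exactly why it could be shared between the SEV (VMM-driven) and SNP (guest-driven) cases.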

For SEV/SEV-ES, as the attestation happens at the VMM level, there's
still the need to have some interactions with it. As Tyler pointed
out, we basically need to retrieve the measurement and, if valid,
inject the secret. If the measurement isn't valid, the VM must be shut
down immediately.
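In QMP terms, that decision point is small. A sketch where `expected_b64` comes from the guest owner's own computation, the command names match QEMU's existing sev-inject-launch-secret and quit commands, and the building of the TEK-encrypted packet itself is elided:

```python
import hmac
import json

def launch_decision(reported_b64, expected_b64, packet_hdr_b64, secret_b64):
    """Return the QMP command frame to send after comparing measurements.

    If the reported measurement matches the expected one, inject the
    (TEK-encrypted) secret; otherwise tear the VM down immediately.
    """
    # Constant-time comparison of the base64-encoded measurements.
    if hmac.compare_digest(reported_b64, expected_b64):
        return json.dumps({
            "execute": "sev-inject-launch-secret",
            "arguments": {"packet-header": packet_hdr_b64,
                          "secret": secret_b64},
        })
    return json.dumps({"execute": "quit"})
```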

In libkrun-tee, this operation is driven by the VMM in libkrun, which
contacts the Attestation Server with the measurement and receives the
secret in exchange. I guess for QEMU/libvirt we expect this to be
driven by the upper management layer through a delegated component in
the Host, such as NOVA. In this case, NOVA would need to:

  - Based on the upper management layer info and the Host properties,
    generate a guest policy and use it while generating the compute
    instance XML.

  - Ask libvirt to launch the VM.

  - Launch the VM with -S (suspended, so it doesn't actually begin
    running guest instructions).

  - Wait for the VM to be in SEV_STATE_LAUNCH_SECRET state *.

  - Retrieve the measurement *.
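The two starred steps could be sketched against the QMP interface roughly like this; `send` stands in for whatever transport the delegated component ends up using (directly, or via a libvirt API wrapping the same calls):

```python
import json
import time

def wait_and_measure(send, timeout=30.0, poll=0.5):
    """Poll query-sev until the guest reaches the launch-secret state,
    then fetch the launch measurement.

    `send` takes a QMP command frame and returns the 'return' payload.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = send(json.dumps({"execute": "query-sev"}))["state"]
        if state == "launch-secret":  # i.e. SEV_STATE_LAUNCH_SECRET
            meas = send(json.dumps({"execute": "query-sev-launch-measure"}))
            return meas["data"]
        time.sleep(poll)
    raise TimeoutError("guest never reached launch-secret state")
```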

Note that libvirt holds the QMP socket to QEMU.  So whoever fetches the
measurement needs either (a) to ask libvirt to do it; or (b) to connect to
another QMP listening socket for getting the measurement and injecting
the secret.

Libvirt would not be particularly happy with allowing (b) because it
enables 3rd parties to change the VM state behind libvirt's back
in ways that can ultimately confuse its understanding of the state
of the VM. If there's some task that needs interaction with a QEMU
managed by libvirt, we need to expose suitable APIs in libvirt (if
they don't already exist).

