Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM


From: Stefan Berger
Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
Date: Thu, 28 Jan 2016 09:51:47 -0500

"Daniel P. Berrange" <address@hidden> wrote on 01/28/2016 08:15:21 AM:


>
> On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > "Daniel P. Berrange" <address@hidden> wrote on 01/20/2016 10:00:41
> > AM:
> >
> > > Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
> > > > The CUSE TPM and associated tools can be found here:
> > > >
> > > > https://github.com/stefanberger/swtpm
> > > >
> > > > (please use the latest version)
> > > >
> > > > To use the external CUSE TPM, the CUSE TPM should be started as follows:
> > > >
> > > > # terminate previously started CUSE TPM
> > > > /usr/bin/swtpm_ioctl -s /dev/vtpm-test
> > > >
> > > > # start CUSE TPM
> > > > /usr/bin/swtpm_cuse -n vtpm-test
> > >
> > > IIUC, there needs to be one swtpm_cuse process running per QEMU
> > > TPM device?  This makes me wonder why we need this separate
> >
> > Correct. See reason in answer to previous email.
> >
> > > process at all - it would make sense if there was a single
> > > swtpm_cuse shared across all QEMU's, but if there's one per
> > > QEMU device, it feels like it'd be much simpler to just have
> > > the functionality linked in QEMU.  That avoids the problem
> >
> > I tried having it linked in QEMU before. It was basically rejected.
> >
> > > of having to manage all these extra processes alongside QEMU
> > > which can add a fair bit of mgmt overhead.
> >
> > For libvirt, yes, there is mgmt. overhead, but it's quite transparent.
> > Libvirt is involved in creating the directory for the vTPMs, building the
> > command line for the external process, and starting that process, but
> > otherwise it's not a big issue (anymore). I have patches that do just that
> > for an older libvirt version, along with setting up SELinux labels,
> > cgroups etc. for each VM that wants an attached vTPM.
>
> A question that just occurred is how this will work with live migration.
> If we live migrate a VM we need the file that backs the guest's vTPM
> device to either be on shared storage, or it needs to be copied. With


The vTPM implements commands over the control channel to get the vTPM's state blobs upon migration (suspend) and to set them back into the vTPM at the end of migration (resume). The code is here:

http://lists.nongnu.org/archive/html/qemu-devel/2016-01/msg00088.html

This function implements the retrieval of the state.

+int tpm_util_cuse_get_state_blobs(int tpm_fd,
+                                  bool decrypted_blobs,
+                                  TPMBlobBuffers *tpm_blobs)
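
As a rough illustration of what such a retrieval could look like from QEMU's side, here is a minimal sketch of fetching one state blob over the CUSE control channel. The ioctl request number, the vtpm_getstate layout and the blob-type constant are placeholders made up for this sketch; the real definitions live in swtpm's control-channel header and in the patch linked above.

/*
 * Sketch only: the constants and struct below are placeholders for the
 * real definitions in swtpm's control-channel header, so the names and
 * layout here are illustrative, not the actual interface.
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>

struct vtpm_getstate {              /* placeholder layout                 */
    uint32_t type;                  /* which blob to fetch                */
    uint32_t result;                /* result code from the vTPM          */
    uint32_t length;                /* number of bytes returned in data[] */
    uint8_t  data[4096];            /* one chunk of the state blob        */
};

#define VTPM_BLOB_TYPE_PERMANENT 1                                /* placeholder */
#define VTPM_GET_STATEBLOB _IOWR('P', 10, struct vtpm_getstate)   /* placeholder */

/*
 * Fetch one state blob from the CUSE TPM character device (tpm_fd is an
 * open fd on e.g. /dev/vtpm-test).  The caller frees *buf.  A real
 * implementation loops while the device reports that more data follows.
 */
static int get_state_blob(int tpm_fd, uint32_t type,
                          uint8_t **buf, size_t *len)
{
    struct vtpm_getstate gs;

    memset(&gs, 0, sizeof(gs));
    gs.type = type;

    if (ioctl(tpm_fd, VTPM_GET_STATEBLOB, &gs) < 0 || gs.result != 0) {
        return -1;
    }

    *buf = malloc(gs.length);
    if (!*buf) {
        return -1;
    }
    memcpy(*buf, gs.data, gs.length);
    *len = gs.length;
    return 0;
}

On suspend, tpm_util_cuse_get_state_blobs presumably does something along these lines for each of the vTPM's state blobs and collects the results in the TPMBlobBuffers passed in.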



> modern QEMU we are using drive-mirror to copy all block backends over
> an NBD connection. If the file backing the vTPM is invisible to QEMU
> hidden behind the swtpm_cuse ioctl(), then there's no way for us to
> leverage QEMU's block mirror to copy across the TPM state file AFAICT.


The vTPM's state is treated like any other device's state: it is serialized upon machine suspend (alongside all the other VM devices' state) and de-serialized upon machine resume. The addition is that on resume the state is pushed back into the external vTPM device over the control channel, and there are control channel commands to resume the vTPM with that state.
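
The push in the other direction can be pictured the same way; again this is only a sketch with placeholder names rather than the actual control-channel commands (those are in the patch linked above):

/*
 * Sketch only, counterpart to the retrieval sketch above: placeholder
 * ioctl and struct, not the real control-channel definitions.
 */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

struct vtpm_setstate {              /* placeholder layout                 */
    uint32_t type;                  /* which blob to restore              */
    uint32_t result;                /* result code from the vTPM          */
    uint32_t length;                /* number of bytes in data[]          */
    uint8_t  data[4096];            /* one chunk of the state blob        */
};

#define VTPM_SET_STATEBLOB _IOWR('P', 11, struct vtpm_setstate)   /* placeholder */

/*
 * Push one previously saved blob back into the CUSE TPM on resume,
 * before telling the vTPM to continue with that state.  A real
 * implementation chunks blobs larger than one transfer buffer.
 */
static int set_state_blob(int tpm_fd, uint32_t type,
                          const uint8_t *buf, size_t len)
{
    struct vtpm_setstate ss;

    if (len > sizeof(ss.data)) {
        return -1;                  /* sketch: no chunking here           */
    }

    memset(&ss, 0, sizeof(ss));
    ss.type = type;
    ss.length = (uint32_t)len;
    memcpy(ss.data, buf, len);

    if (ioctl(tpm_fd, VTPM_SET_STATEBLOB, &ss) < 0 || ss.result != 0) {
        return -1;
    }
    return 0;
}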

It is correct that, outside of migration, the vTPM writes its state into a plain file. This vTPM state needs to travel alongside the VM's image for all TPM-related applications to run seamlessly under all circumstances (I can go into more detail here, but I don't want to confuse things). There's currently one problem related to running snapshots and snapshots being 'volatile', which I mentioned here (volatile = the state of the VM's filesystem is discarded upon shutdown of the VM running a snapshot):

 https://lists.gnu.org/archive/html/qemu-devel/2016-01/msg04047.html

I haven't gotten around to trying to run a snapshot and migrate it to another machine. Say one creates a new file /root/XYZ while running that snapshot, and the snapshot is then shut down on the migration destination: will that file /root/XYZ appear in the filesystem upon restart of that VM? The 'normal' behavior when not migrating is that a file /root/XYZ created while running a snapshot will not appear when restarting that snapshot (of course!) or when starting the machine 'normally'. So VM image state is 'volatile' if a snapshot is run and then shut down. The state of the vTPM would have to be treated as equally volatile or non-volatile.

Does this explanation clarify things?

Regards,
Stefan

>
> Regards,
> Daniel
> --
> |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org              -o-             http://virt-manager.org :|
> |: http://autobuild.org      -o-          http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org      -o-        http://live.gnome.org/gtk-vnc :|
>

