Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
Date: Thu, 16 Jun 2016 16:22:22 +0100
User-agent: Mutt/1.6.1 (2016-04-27)

* Stefan Berger (address@hidden) wrote:
> On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
> > * Stefan Berger (address@hidden) wrote:
> > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > <snip>
> > 
> > > > So what was the multi-instance vTPM proxy driver patch set about?
> > > That's for containers.
> > Why have the two mechanisms? Can you explain how the multi-instance
> > proxy works? My brief reading when I saw your patch series seemed
> > to suggest it could be used instead of CUSE for the non-container case.
> 
> The multi-instance vtpm proxy driver works through an ioctl() on
> /dev/vtpmx that spawns a new front-end/back-end device pair. The
> front-end is a new /dev/tpm%d device that can then be moved into the
> container (mknod + device cgroup setup). The back-end is an anonymous
> file descriptor that is passed to a TPM emulator; the emulator reads
> the TPM requests coming in from that /dev/tpm%d on this descriptor and
> writes its responses back to it. Since it is implemented as a kernel
> driver, we can hook it into the Linux Integrity Measurement
> Architecture (IMA) and have IMA use it in place of a hardware TPM
> driver. There is ongoing work on namespacing support for IMA, giving
> each container an independent IMA instance, so that this can be used.
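> 
> As a minimal sketch, creating such a pair from a management process
> could look like the following (this assumes the VTPM_PROXY_IOC_NEW_DEV
> ioctl and struct vtpm_proxy_new_dev from the proxy driver patches):
> 
>     #include <stdio.h>
>     #include <fcntl.h>
>     #include <unistd.h>
>     #include <sys/ioctl.h>
>     #include <linux/vtpm_proxy.h>
> 
>     int main(void)
>     {
>         struct vtpm_proxy_new_dev new_dev = { .flags = 0 };
>         int vtpmx = open("/dev/vtpmx", O_RDWR);
> 
>         if (vtpmx < 0 || ioctl(vtpmx, VTPM_PROXY_IOC_NEW_DEV, &new_dev) < 0) {
>             perror("vtpmx");
>             return 1;
>         }
>         /* new_dev.tpm_num names the front-end /dev/tpm%d to move into
>          * the container; new_dev.fd is the anonymous back-end file
>          * descriptor to hand to the TPM emulator. */
>         printf("front-end: /dev/tpm%u, back-end fd: %d\n",
>                new_dev.tpm_num, (int)new_dev.fd);
>         close(vtpmx);
>         return 0;
>     }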
> 
> A TPM has not only a data channel (/dev/tpm%d) but also a control
> channel, which is primarily implemented in its hardware interface and
> is typically not fully accessible to user space. The vtpm proxy driver
> supports _only_ the data channel, through which it relays TPM commands
> and responses between user space and the TPM emulator. The control
> channel is provided by the software emulator through an additional TCP
> or UnixIO socket, or, in the case of CUSE, through ioctls. The control
> channel allows one to reset the TPM when the container/VM is reset, to
> set the locality of a command, to retrieve the state of the vTPM (for
> suspend) and to set the state of the vTPM (for resume), among several
> other things. The commands for the control channel are defined here:
> 
> https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
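> 
> As an illustration, in the CUSE case a control operation such as a TPM
> reset is a plain ioctl on the emulator's character device; PTM_INIT and
> ptm_init below come from that tpm_ioctl.h header, while the device path
> is only an example:
> 
>     #include <fcntl.h>
>     #include <stdio.h>
>     #include <unistd.h>
>     #include <sys/ioctl.h>
>     #include "tpm_ioctl.h"   /* from the swtpm sources */
> 
>     int main(void)
>     {
>         ptm_init init = { .u.req.init_flags = 0 };
>         int fd = open("/dev/vtpm-mytpm", O_RDWR);  /* example CUSE device */
> 
>         if (fd < 0 || ioctl(fd, PTM_INIT, &init) < 0) {
>             perror("PTM_INIT");
>             return 1;
>         }
>         /* init.u.resp.tpm_result carries the TPM result code; 0 is success */
>         printf("tpm_result: %u\n", init.u.resp.tpm_result);
>         close(fd);
>         return 0;
>     }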
> 
> For a container, we would require that its management stack initialize
> and reset the vTPM when the container is rebooted. (On a hardware TPM
> these operations are typically done through pulses on the motherboard.)
> 
> In the case of QEMU we would need more access to the control channel,
> including initialization and reset of the vTPM, getting and setting its
> state for suspend/resume/migration, setting the locality of commands,
> etc., so that all low-level functionality is accessible to the emulator
> (QEMU). The proxy driver does not help with this; instead we should use
> the swtpm implementation, which either offers the CUSE interface with
> its control channel (through ioctls) or provides UnixIO and TCP sockets
> for the control channel.
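> 
> Over a UnixIO socket the same control commands are serialized instead
> of issued as ioctls. A rough sketch of sending CMD_INIT that way is
> below; the big-endian framing and the command value follow the
> tpm_ioctl.h linked above (verify against the version in use), and the
> socket path is only an example:
> 
>     #include <arpa/inet.h>
>     #include <stdint.h>
>     #include <string.h>
>     #include <sys/socket.h>
>     #include <sys/un.h>
>     #include <unistd.h>
> 
>     #define CMD_INIT 2   /* assumed value; see tpm_ioctl.h above */
> 
>     /* Returns 0 if the emulator reports TPM_SUCCESS, -1 otherwise. */
>     static int vtpm_ctrl_init(const char *path)
>     {
>         struct sockaddr_un addr = { .sun_family = AF_UNIX };
>         /* command code followed by init_flags, both big-endian */
>         uint32_t req[2] = { htonl(CMD_INIT), htonl(0) };
>         uint32_t res;
>         int fd = socket(AF_UNIX, SOCK_STREAM, 0);
> 
>         if (fd < 0)
>             return -1;
>         strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
>         if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
>             write(fd, req, sizeof(req)) != sizeof(req) ||
>             read(fd, &res, sizeof(res)) != sizeof(res)) {
>             close(fd);
>             return -1;
>         }
>         close(fd);
>         return ntohl(res) == 0 ? 0 : -1;
>     }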

OK, that makes sense; does the control interface need to be handled by QEMU
or by libvirt or both?
Either way, I think you're saying that with your kernel interface + a UnixIO
socket you can avoid the CUSE stuff?

Dave

>     Stefan
> 
> > 
> > Dave
> > P.S. I've removed Jeff from the cc because I got a bounce from
> > his AT&T address saying 'restricted/not authorized'
> > 
> > >      Stefan
> > > 
> > --
> > Dr. David Alan Gilbert / address@hidden / Manchester, UK
> > 
> 
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


