Separate trusted computing designs

From: Christian Stüble
Subject: Separate trusted computing designs
Date: Wed, 16 Aug 2006 19:09:15 +0200
User-agent: KMail/1.9.1

Hi Marcus, hi all,

thanks for the questions. As I said in my last message, I would prefer to
discuss only OS-related questions on this list.

To prevent misunderstandings: I don't want to promote TC, nor do I like its 
technical instantiation completely. IMO there are a lot of technical and 
social issues to be corrected; that's the reason why I am working on this 
topic. Nevertheless, a lot of intelligent researchers have worked on it, and 
therefore it makes IMO sense to analyse what can be done with this 
technology. In fact, who else should do this?

You are asking a lot of questions that I cannot answer, because they are
the well-known "open issues". The challenge is to become able to answer them.

A last note: You asked for use cases that may require security properties as
provided by TC, but that could be of interest to users of the Hurd. In fact,
these are more or less the use cases I would be interested in. If there were
two comparable open operating systems - one providing these features and one
that does not - I would select the one that does. I do not want to discuss the
opinion of the government or the industry. And I don't want to discuss
whether people are intelligent enough to use privacy-protecting features or
not. If other people do not want to use them, they don't have to. My
requirement is that they have the chance to decide (explicitly, or by
defining, or using a predefined, privacy policy enforced by the system).

> I assume you are familiar with
> http://lists.gnu.org/archive/html/l4-hurd/2006-05/msg00184.html
> http://lists.gnu.org/archive/html/l4-hurd/2006-05/msg00324.html
Not fully. I read it quickly yesterday evening, but I have to find more time
to read it more carefully. Sorry if I use other terms for now.

> Christian Stüble <address@hidden> wrote:
> > General: Since I am not aware of a multi-server system designs that
> > fulfills today's requirements, our group has to design and implement a
> > lot of services from scratch - wasting a lot of time, since our main
> > focus is security. Therefore, we would like to collaborate with further
> > projects like hurd and coyotos, to share design ideas, use cases and
> > implementations. Unfortunately, this seems to be impossible due to
> > conflicting requirements (at least with hurd): We are using TC technology
> > and we are even developing DRM-like applications (whatever this means).
> It is only impossible if the aspects of "trusted computing" that I
> find unacceptable are inseparable from the rest of the system
> architecture.  However, _if_ they are inseparable, then that, IMO,
> points at a defect of the system architecture, because the user in
> such an architecture perpetually alienates his rights to a major part
> of his computing infrastructure (as explained in the emails referenced
> above).
> So, the question to you is: Can you clearly separate the aspects of
> your system design that require "trusted computing" from those aspects
> of your system design that don't?
From a high-level view, definitely yes. The main concept we are using
TC for is to enable what we call a "trusted channel": a secure channel between
(remote) compartments that allows the involved parties (sender, receiver)
to get information about the 'properties' of the communication partner. A
property could be the information whether the user can access the state of
the process or not, but it could also be a list of hash values (e.g., IMA).
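To illustrate the idea (this is my own minimal sketch, not our actual implementation; the names `Endpoint`, `establish_trusted_channel`, and the policy callback are all hypothetical), a trusted channel exchanges each endpoint's properties - here an IMA-style list of measurement hashes - and lets each side's policy decide before any application data flows:

```python
# Hypothetical sketch of a trusted-channel handshake. Before data flows,
# each endpoint reports its properties (a list of measurement hashes,
# IMA-style) and the peer's policy decides whether to proceed.
import hashlib

class Endpoint:
    def __init__(self, name, components):
        self.name = name
        # Property: hash values of the loaded components.
        self.measurements = [hashlib.sha256(c).hexdigest() for c in components]

    def properties(self):
        return {"name": self.name, "measurements": self.measurements}

def establish_trusted_channel(sender, receiver, accept):
    # Both parties' properties are checked against the policy 'accept';
    # the tuple stands in for a real authenticated, encrypted channel.
    if accept(sender.properties()) and accept(receiver.properties()):
        return (sender, receiver)
    raise PermissionError("peer properties rejected by policy")

known_good = hashlib.sha256(b"compartment-v1").hexdigest()
policy = lambda props: known_good in props["measurements"]

a = Endpoint("A", [b"compartment-v1"])
b = Endpoint("B", [b"compartment-v1"])
channel = establish_trusted_channel(a, b, policy)
```

The point is only that the *decision* is made on reported properties, not on identities; a channel to a tampered compartment is refused by the same mechanism.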

In our design, we try to abstract away the functionality offered by, e.g., a
TPM and to use more generic concepts. Example: a service providing persistent
storage for applications offers different kinds of "secure storage": bound
to a user, bound to the TCB, bound to the application behavior (including the
TCB), whatever. If some properties are missing (in our design this will
depend on a user-defined policy, in your design maybe a compile-time flag), then
applications cannot use them (and applications that require them will not run).
We have not yet finished deriving low-level requirements from our high-level
ones, but maybe the difference between "your" design and "our" design is
only a configuration option of the microkernel, or a command-line option, or
only an entry in the system-wide security policy. Would that be acceptable?

> Examples where this may be a problem for you are: Window management
> (does the user have exclusive control over the window content?),
> device drivers, debugging, virtualization, etc.
This is (apart from the elementary security properties provided by the
underlying virtualization layer, e.g., a microkernel) an implementation
detail of the appropriate service. There may be implementations enforcing
strong isolation between compartments and others that do not. That's the basic
idea behind our high-level design for providing multilateral security: the
system enforces the user-defined security policy, with one exception:
applications can decide themselves whether they want to continue execution
based on the (integrity) information they get (e.g., whether the GUI enforces
isolation or not). But this requires that users cannot access the
application's internal state.
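The exception can be sketched in a few lines (purely illustrative; the property name `isolates_compartments` is made up): the system never forbids the application from running, but the application itself may refuse if the integrity information it received does not meet its own requirements:

```python
# Sketch of the "one exception" above: the application inspects the
# integrity information it received (e.g., over a trusted channel) and
# decides for itself whether to continue execution.
def application_main(gui_properties, require_isolation=True):
    if require_isolation and not gui_properties.get("isolates_compartments", False):
        return "refused: GUI does not enforce isolation"
    return "running"

status = application_main({"isolates_compartments": True})
```

This decision is only meaningful if the user cannot simply read or rewrite the application's internal state, which is why the two requirements appear together above.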

Of course, this is a very rough description of the intended system behavior,
and it may open more questions than it answers. But it should suffice to
explain our basic concepts.

