
From: Marcus Brinkmann
Subject: Re: Alternative network stack design (was: Re: Potential use case for opaque space bank: domain factored network stack
Date: Mon, 08 Jan 2007 07:11:59 +0100
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.7 (Sanjō) APEL/10.6 Emacs/21.4 (i486-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Mon, 8 Jan 2007 05:43:09 +0100,
Pierre THIERRY <address@hidden> wrote:
> Be clear on the issues that we are trying to resolve. So I'll ask the
> question again, as clearly as possible:

Thanks, that helped.

> The mechanism of opaque memory has been used in the scientific
> literature (could someone tell me whether it has already been used in
> deployed systems?) to achieve POLA at a very high level, even in
> system components, while retaining very good performance by avoiding
> copies across protection boundaries, and to enable more secure
> designs by naturally adding resource accountability and rendering
> services resistant to classical denial of service attacks.


> You claimed that it could be used to implement policies you consider
> to be security threats. What harmful policies can be achieved
> through this mechanism that cannot easily be achieved without it?

The canonical example is the encapsulated constructor: the developer
of a program gives a capability naming a constructor to a user
(confined or not confined; it doesn't really matter).  The user passes
a space bank capability when invoking the constructor.  At startup,
the program verifies that the space bank capability is genuine
(otherwise it terminates immediately).  Then it allocates opaque
memory on which it executes.
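
The scenario above can be sketched as follows.  This is a toy model,
not a real EROS/Coyotos API; all names (`SpaceBank`, `genuine`,
`constructor_start`, the page counts) are invented for illustration.

```python
class SpaceBank:
    """A storage-allocator capability.  `opaque=True` means the pages
    are paid for by the client but cannot be read by it."""
    def __init__(self, genuine=True):
        self.genuine = genuine

    def allocate(self, pages, opaque=False):
        return {"pages": pages, "opaque": opaque}

def constructor_start(bank):
    # Step 1: verify the space bank is genuine, i.e. that it really
    # comes from the trusted system allocator and is not a
    # client-supplied fake that could spy on the program's memory.
    if not bank.genuine:
        raise SystemExit("untrusted space bank: terminating")
    # Step 2: allocate *opaque* memory and execute on it; the client
    # who supplied the bank pays for the storage but cannot inspect it.
    return bank.allocate(pages=16, opaque=True)

mem = constructor_start(SpaceBank(genuine=True))
assert mem["opaque"]  # the invoking user cannot look inside
```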

This is the base scenario, required for the implementation of
black-box algorithms and DRM, for example (usually not confined, so
that billing can occur).  Clearly it cannot be implemented without
opaque memory.  I trust that I have made sufficiently clear earlier
why I consider this application to be harmful.

However, I have yet to explain how this can occur at all if the user
does not willingly engage in such contracts.

To see this, consider application code which we download from the net
and which we do not trust.  How can we ensure that it cannot engage
in a contract like the one above?

* We could inspect it statically or dynamically.  Infeasible, except
  under very narrow assumptions.

* We could try to prevent it from getting any constructor capability
  such as the one above (more generally: any capability that would
  allow it to engage in harmful contracts).  This may be feasible in
  some cases, but only if the user shell is very careful about which
  capabilities it delegates to the application.  In practice this
  means that only a small static set of well-known "good" capabilities
  can be delegated, which excludes all interesting capabilities like
  arbitrary data objects (which may come from an untrusted source).

* We could refrain from giving it any capability that allows untrusted
  agents to allocate opaque storage.  This is the approach I proposed.
The difference is that in the second method above, the shell gives the
application access privileges and then tries to prevent it from
exercising them.  In the third method, which I prefer, the principle
of least privilege is adhered to and the program only gets the
privileges the shell wants it to have.
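
The third method can be sketched as follows; every name here
(`launch`, `shell_policy`, the capability names) is invented for
illustration.  The point is that the shell delegates only the
intersection of what the application asks for and what the shell's
policy grants, so no policing of already-granted privileges is needed
afterwards.

```python
def launch(app, requested, shell_policy):
    """Start `app` with only the capabilities the policy grants.
    Capabilities absent from the granted set simply do not exist
    from the application's point of view."""
    granted = {name: cap for name, cap in requested.items()
               if shell_policy.get(name) == "grant"}
    return app(granted)

def untrusted_app(caps):
    # A downloaded application can only enumerate and use what it
    # actually received.
    return sorted(caps)

policy = {"read_document": "grant", "opaque_bank": "deny"}
requested = {"read_document": object(), "opaque_bank": object()}
print(launch(untrusted_app, requested, policy))  # ['read_document']
```

Contrast this with the second method, where the application would
receive the opaque-bank capability and the shell would then have to
watch for, and veto, its use.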

> How does your proposal protect users from these threats?

Very simply: by default, no allocation of opaque storage is possible
by untrusted agents, which means that none occurs.  The policy
governing which agents are trusted in this respect lies within the
user's shell, which makes it possible to customize it flexibly (by
the tagging method, for example).

I would instruct my shell to only allow allocation of opaque storage
by trusted system and user agents (for example, the user's GnuPG server).
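
As a toy sketch of that default-deny policy (the agent names and the
function are invented, not part of any real system):

```python
# Agents the user has explicitly marked as trusted for opaque
# allocation, e.g. system services and the user's GnuPG server.
TRUSTED_OPAQUE_AGENTS = {"system", "gnupg-server"}

def may_allocate_opaque(agent):
    # Safe default: deny.  Opaque allocation is possible only for
    # agents on the explicit trust list.
    return agent in TRUSTED_OPAQUE_AGENTS

assert may_allocate_opaque("gnupg-server")
assert not may_allocate_opaque("downloaded-app")
```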

Other people (maybe you?) might be happy to rely on the safe defaults,
but engage with other users on the system in exploring uses of opaque
storage for special purposes, after willful negotiation and explicit
authorisation through the shell.

I hesitate to think that there would be people who voluntarily
configure their shell to allow anybody opaque allocation without
discrimination.  But they are bound to exist, so there you go.

Note that the mechanism I proposed is, at an abstract level (we have
to be careful because we did not discuss implementation issues),
strictly more powerful than what my original system structure
provides, where only transparent allocation was possible.  It is also
strictly more powerful than what EROS provides, where only opaque
allocation is possible.
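
In toy form (invented names again), the point is simply that an
allocator offering both modes can express either earlier design as a
special case:

```python
class FlexibleBank:
    """A space bank that supports both allocation modes."""
    def __init__(self):
        self.log = []

    def allocate(self, pages, opaque):
        # A transparent-only system (my original structure) always
        # passes opaque=False; an EROS-style system always passes
        # opaque=True.  Offering both per request subsumes either.
        self.log.append(("opaque" if opaque else "transparent", pages))
        return {"pages": pages, "opaque": opaque}

bank = FlexibleBank()
bank.allocate(4, opaque=False)  # all my original design allows
bank.allocate(4, opaque=True)   # all EROS allows
print(bank.log)  # [('transparent', 4), ('opaque', 4)]
```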

As for implementation details, that requires careful analysis.  On a
hunch, complexity is hardly an issue, as the mechanisms needed to
implement something like this are already provided for (resource
containers and branding).  I cannot bring myself to believe that the
extra setup costs for tagging are significant, and the whole mechanism
may in fact be subsumed completely by a more general and sophisticated
resource container infrastructure (to be checked).  I don't see any
reason to be pessimistic about it.

