Re: Alternative network stack design (was: Re: Potential use case for opaque space bank: domain factored network stack)

From: Marcus Brinkmann
Subject: Re: Alternative network stack design (was: Re: Potential use case for opaque space bank: domain factored network stack)
Date: Mon, 08 Jan 2007 09:31:30 +0100
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.7 (Sanjō) APEL/10.6 Emacs/21.4 (i486-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Mon, 8 Jan 2007 07:42:04 +0100,
Pierre THIERRY <address@hidden> wrote:
> Scribit Marcus Brinkmann dies 08/01/2007 hora 07:11:
> > > You claimed that it could be used to implement policies you consider
> > > to be security threats.  What harmful policies can be achieved
> > > through this mechanism that cannot easily be achieved without it?
> > The canonical example is the encapsulated constructor: The developer
> > of a program gives a capability naming a constructor to a user
> > (confined or not confined, doesn't really matter).
> I'm tempted to tell you you're speaking about mechanisms where it's
> about policy. ;-)
> IIUC, opaque memory only makes it easy to add resource accountability
> and DoS resistance to processes hiding their code and data. But the
> policy in itself (executing a program without enabling the user to
> inspect it) is already achievable without opaque memory.

Ah, very clever :) You are indeed right, and now I finally understand
what you said all along.  Let me support your objection, and then I
will have to see how I wriggle myself out of that trap.

A couple of years ago, web applications were considered not
sufficiently protected by the GPL, because their primary use does not
involve distribution of the software to its users, thus the GPL fails
to ensure user freedom of derivative works.  This was at the time
considered a major motivation to write a new version of the GPL that
required web applications to provide a download button so users could
download a copy of the source code.  This was perceived to be urgent
because it was expected that many applications would move from the
local machine to a remote server.  Eventually this was resolved by
adding an extra clause to some programs, I believe (I would have to
check if there are any examples where this was actually done).

In the end, the major drift of applications from the desktop to
servers didn't happen (this was pre-Google), and free software thrived
within the old model of distribution.  Now the hot topic is DRM, and
that is urgent enough that the GPL v3 actually happens.

What this shows is that mechanisms can influence policy decisions,
changing or even obsoleting them.  New threat scenarios require a
response and a redefinition of the existing policies, hopefully in
agreement with some higher level goals or principles.

Thanks for pointing this out; it has been very fruitful.  It makes
clear that aside from mechanisms and policies there is yet another
layer, the goals and principles that are cast into the policy with
respect to a certain environment.  At some point I will have to sit
down and formulate some of these goals and principles in a better way
than my previous attempts on that matter.
> > This is the base scenario, required for implementation of blackbox
> > algorithms and DRM, for example (usually not confined so billing can
> > occur).
> I don't know about effective DRM, but blackbox algorithms don't require
> opaque memory.

Can you elaborate?
> > How can we ensure that it can not engage into a contract like the one
> > above?
> > 
> > * We could inspect it statically or dynamically.  Unfeasible, except
> > under very narrow assumptions.
> > 
> > * We could try to prevent it from getting any constructor capability
> > such as above (more generally: any capability that would allow it to
> > engage into harmful contracts). [...]
> > 
> > * We could not give it any capability that allows untrusted agents
> > allocation of opaque storage.  This is what I proposed for sake of
> > discussion.
> > 
> > The difference is that in the second method above, the shell gives the
> > application access privileges and then tries to prevent it from
> > exercising them.
> Why should the shell in the system with opaque memory give more
> authority? I don't see why POLA wouldn't be enforced as tightly as
> possible when there is opaque memory.
> As I said before, if you give to a process A, that you can inspect, a
> capability to another process B, that you can inspect,

A better word here is "proxy".

> and B asks A for
> opaque memory, you still have authority to inspect that memory, because
> you can inspect B.
> So it all boils down to avoiding giving a process you can inspect a
> capability to a process you can't inspect.

Uhm, but then it can't use any service requiring opaque allocation of
user-provided memory resources.  Wasn't that the whole point of the
exercise?

The scenario above is that B delegates the capability to yet another
process C, which can not be inspected by either A or B.  C verifies
that the capability is a genuine space bank (so that proxying is
detected and denied) before making opaque allocations.  These
allocations can be inspected by neither A nor B.
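The verification step C performs can be sketched as a toy model.  This
is plain Python, not the actual EROS/Coyotos space bank API; every
name below (SpaceBank, is_genuine_bank, allocate_opaque, and so on) is
illustrative, standing in for the kernel-level brand check that lets a
client distinguish a genuine space bank from an interposed proxy:

```python
# Toy model of the A/B/C scenario: C only allocates opaquely from a
# capability it can verify is a *genuine* space bank, so a proxy
# interposed by A or B is detected and refused.
# All names are illustrative; this is not the EROS/Coyotos API.

class SpaceBank:
    """A genuine space bank; registration below models its 'brand'."""
    def __init__(self):
        self.opaque_pages = []

    def allocate_opaque(self, n):
        self.opaque_pages.append(n)
        return n

GENUINE_BANKS = set()   # stands in for the kernel's brand check

def make_bank():
    bank = SpaceBank()
    GENUINE_BANKS.add(id(bank))
    return bank

def is_genuine_bank(cap):
    # Models the "verify" operation: only unproxied banks pass.
    return id(cap) in GENUINE_BANKS

class ProxyBank:
    """A would-be transparent proxy interposed by A or B."""
    def __init__(self, real):
        self.real = real

    def allocate_opaque(self, n):
        # This is exactly the inspection point C must deny A and B.
        return self.real.allocate_opaque(n)

def process_C(bank_cap):
    # C refuses any bank it cannot verify, defeating interposition.
    if not is_genuine_bank(bank_cap):
        raise PermissionError("not a genuine space bank; refusing")
    return bank_cap.allocate_opaque(4096)

bank = make_bank()
assert process_C(bank) == 4096      # direct, genuine bank: allowed
try:
    process_C(ProxyBank(bank))      # interposed bank: detected, denied
    assert False, "proxy should have been rejected"
except PermissionError:
    pass
```

The point of the sketch is only that verification happens *before*
allocation: once C has confirmed the capability is unproxied, whatever
it allocates through that bank is invisible to A and B alike.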

> But one of the very purposes of a capability-based OS is precisely to let
> you use programs while giving them the least authority they need to
> operate. So I don't see any conceptual difference between EROS and your
> proposal (WRT this specific point). If you want your processes to be
> inspectable, don't give them authority to use memory you can't inspect.

Please reread the example carefully, accounting for the fact that
there is a third process C involved.  If there is no C, then none of
this makes sense, as you have rightly pointed out before (when you
corrected my bogus browser example).

