
Re: Distributed Capabilities


From: Marcus Brinkmann
Subject: Re: Distributed Capabilities
Date: Tue, 28 Mar 2006 21:26:46 +0200
User-agent: Wanderlust/2.14.0 (Africa) SEMI/1.14.6 (Maruoka) FLIM/1.14.7 (Sanjō) APEL/10.6 Emacs/21.4 (i486-pc-linux-gnu) MULE/5.0 (SAKAKI)

Hi,

At Tue, 28 Mar 2006 13:38:33 -0500,
"Jonathan S. Shapiro" <address@hidden> wrote:
> 
> On Tue, 2006-03-28 at 20:15 +0200, Marcus Brinkmann wrote:
> > It's not clear to me (even after reading your response) why selective
> > assembly of people by means of computational systems would include
> > establishing the identity of any of the involved platforms to anybody
> > except the person using that platform to participate in the assembly.
> 
> That is because I did not mean "selective assembly of people". I meant
> "selective assembly of the elements of a trustworthy collaboration".
> Some of those elements are human. Some are computational. A
> computational collaboration cannot occur without both.

Actually, this is what I know under the name "multi-party computing",
and it is one of the applications of "Trusted Computing" or DRM (the
difference between DRM and multi-party computing lies only in the
balance of power between the involved parties, and its consequences).

I see no reason why this should be a fundamental freedom (and you
skipped over my request for an explanation in your reply).  In fact,
I find every application that I have learned about so far, and that
has _any_ impact on society at all, morally objectionable.

> > > Even if I trust you, Marcus, personally, and even though you personally
> > > are quite expert, you are simply not capable of giving me any *credible*
> > > assurance of what is running on your system. The reason is that in
> > > practice you do not actually *know* what is running on your system.
> > > Upgrades make it impossible to track this in practice.
> > 
> > That is an argument to make it possible for me to attest, locally,
> > what software I am running.  It is not an argument for other
> > participants in the assembly to get any assurance about that beyond my
> > word for it.
> 
> This is why you are free *not* to attest. You are always free to say
> "trust me". I am always free to say "that is not good enough".

What do you mean by "free"?  For me, the freedom to make a decision
means that I can make it independently of any other decision (within
a certain "decision space", of course).  In this case, if you
say "[my refusal to attest] is not good enough", then the decision to
attest and the decision to participate in the assembly are _not_
independent.  Thus, assuming that my actual goal is to participate in
the assembly, I am _not_ free not to attest.  I am, in this scenario,
"free" to "attest and participate" or "not to attest and not to
participate".  Claiming that I would be free not to attest under these
circumstances is just deceptive.
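
To make this concrete, here is a small sketch (in Python, purely
illustrative; all names in it are made up) of how the constraint
collapses the decision space:

    from itertools import product

    # Each decision is a pair (attest, participate).  If the two
    # decisions were independent, the space would be the full product:
    full_space = set(product([True, False], repeat=2))

    # Under "no attestation => no participation", only these remain:
    feasible = {(a, p) for (a, p) in full_space if a or not p}

    print(sorted(feasible))
    # [(False, False), (True, False), (True, True)]
    # The pair (False, True) -- participate without attesting -- is
    # gone, so the decision to attest is not independent of the
    # decision to participate.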

I have heard this argument quite often now, and always in the same
context: in support of TC and DRM.  And it annoys me, because it is
really Orwellian Newspeak.  In the good old days, if you had to do A
to achieve B, we said that "A is a requirement for B".  We didn't say
that "you are free not to do A".

> The problem with your argument is that you are imposing a policy.

I don't think there is a policy-neutral interrelationship between
humans.  Imposing policy is not a problem in my opinion.  It certainly
is not exclusive to my strategies: the multiparty computing protocols
are likewise imposing lots and lots of policies.  The mere fact that
policy is imposed is not objectionable to me.  What can be
objectionable is what type of policy is imposed, and who enforces it
over whom.

> You are saying that, as a matter of policy, *I* should not be free to say
> "trust me because I am able to present a highly credible reason to do
> so." In short, you are saying that the ability to verify is an intrinsic
> violation of freedom.
> Fundamentally, you are arguing that everyone should have an intrinsic
> right to behave deceptively.

That is not at all what I am saying.  Please let me represent myself.

> I can see some good arguments for this in
> certain circumstances, and I can also see some compelling arguments
> *against* it. At best, it is a difficult question. Unfortunately, we
> simply do not know how to technically accomplish both goals (deceit and
> credibility) at the same time. The best we know how to do technically is
> to choose between
> 
>   1) Zero credibility for everyone, or
>   2) Choose between credibility and not speaking at all.
> 
> If *I* (personally) must choose between giving up deceit and giving up
> verifiability, I will give up deceit. As a society, we can learn to live
> successfully in a world where overt deceit is impossible, but there is a
> great deal that we cannot accomplish if verification is impossible.

That sounds like a very bold claim.  What's the evidence?  Actually, I
am not even sure what the claim means :)
 
> > Maybe you can elaborate on what these "valid circumstances" would be.
> 
> For example: I trust you, but we both want to avoid disclosing
> something. The value is high enough that neither of us wishes to accept
> liability for error. In this situation, I need to be able to *verify*
> that what you say is true within reason. It is an issue of acceptable
> standards of diligence.

This is not a specific example rooted in the real world.  It is just a
rephrasing of the abstract case, which is in need of specialisation,
in a different vocabulary.  So I can't really respond.
 
> > > So: any robust mechanism for selective assembly must answer two
> > > independent questions:
> > > 
> > >   1. Do I trust the remote administrator/user?
> > >   2. Do I have credible reason to believe that the remote
> > >      administrator/user is, in fact, in control of their system?
> > 
> > No mechanism can tell you if you "trust" the remote administrator or
> > user.  I assume you actually meant that the mechanism should establish
> > the identity of the remote administrator/user.  With this provision, I
> > already conceded 1.
> 
> Excuse me. I should have said "any robust mechanism for selective
> [computational] assembly is *predicated on answering* two independent
> questions."

Ok.

> > Number 2 is a question that no mechanism can answer.  The remote user
> > may sit at the computer next to a bad guy with a gun.  In this case,
> > the remote user is not in control of their system.  I actually think
> > that remote attestation does not give you _any_ information on this
> > question.
> 
> Actually, that is completely false. Remote attestation means that I know
> the behavior of the remote software. In consequence, I know that certain
> things **cannot be done at all** by the remote user, even with a gun to
> their head.

I didn't understand that in your definition of "assembly" you would
actually rely on computations made on a remote platform.  Now that I
realize it is just multi-party computing, it's clearer.
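
For concreteness, the verifier side of such an attestation check looks
roughly like the following sketch (illustrative Python only; this is
not any real TPM interface, and every name in it is made up for this
example):

    import hashlib
    import hmac
    import os

    # Hashes of software stacks the verifier is willing to trust.
    # Knowing the measurement amounts to knowing a fingerprint of the
    # remote software's behavior -- which is the claim made above.
    expected_measurements = {
        hashlib.sha256(b"approved OS + application stack").digest(),
    }

    def make_challenge() -> bytes:
        """A fresh nonce, so that a recorded quote cannot be replayed."""
        return os.urandom(32)

    def verify_quote(measurement: bytes, signature: bytes,
                     nonce: bytes, platform_key: bytes) -> bool:
        """Accept only if the platform signed (measurement, nonce) and
        the measurement is on the whitelist.  A real system would use
        an asymmetric signature rooted in hardware; an HMAC stands in
        for that here."""
        expected = hmac.new(platform_key, measurement + nonce,
                            hashlib.sha256).digest()
        return (hmac.compare_digest(expected, signature)
                and measurement in expected_measurements)

Note that nothing in this check tells the verifier anything about who
is sitting at the remote keyboard, which is exactly the limitation I
pointed out above.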

Thanks,
Marcus
