Re: Distributed Capabilities

From: Jonathan S. Shapiro
Subject: Re: Distributed Capabilities
Date: Tue, 28 Mar 2006 13:38:33 -0500

On Tue, 2006-03-28 at 20:15 +0200, Marcus Brinkmann wrote:
> It's not clear to me (even after reading your response) why selective
> assembly of people by means of computational systems would include
> establishing the identity of any of the involved platforms to anybody
> except the person using that platform to participate in the assembly.

That is because I did not mean "selective assembly of people". I meant
"selective assembly of the elements of a trustworthy collaboration".
Some of those elements are human. Some are computational. A
computational collaboration cannot occur without both.

> > Even if I trust you, Marcus, personally, and even though you personally
> > are quite expert, you are simply not capable of giving me any *credible*
> > assurance of what is running on your system. The reason is that in
> > practice you do not actually *know* what is running on your system.
> > Upgrades make it impossible to track this in practice.
> That is an argument to make it possible for me to attest, locally,
> what software I am running.  It is not an argument for other
> participants in the assembly to get any assurance about that beyond my
> word for it.

This is why you are free *not* to attest. You are always free to say
"trust me". I am always free to say "that is not good enough".

The problem with your argument is that you are imposing a policy. You
are saying that, as a matter of policy, *I* should not be free to say
"trust me because I am able to present a highly credible reason to do
so." In short, you are saying that the ability to verify is an intrinsic
violation of freedom.

Fundamentally, you are arguing that everyone should have an intrinsic
right to behave deceptively. I can see some good arguments for this in
certain circumstances, and I can also see some compelling arguments
*against* it. At best, it is a difficult question. Unfortunately, we
simply do not know how to accomplish both goals (deceit and
credibility) at the same time. The best we know how to do technically
is to choose between

  1) Zero credibility for everyone, or
  2) Forcing everyone to choose between credibility and not speaking
     at all.

If *I* (personally) must choose between giving up deceit and giving up
verifiability, I will give up deceit. As a society, we can learn to live
successfully in a world where overt deceit is impossible, but there is a
great deal that we cannot accomplish if verification is impossible.

> Maybe you can elaborate on what these "valid circumstances" would be.

For example: I trust you, but we both want to avoid disclosing
something. The value is high enough that neither of us wishes to accept
liability for error. In this situation, I need to be able to *verify*
that what you say is true, within reason. It is an issue of acceptable
standards of diligence.

> > So: any robust mechanism for selective assembly must answer two
> > independent questions:
> > 
> >   1. Do I trust the remote administrator/user?
> >   2. Do I have credible reason to believe that the remote
> >      administrator/user is, in fact, in control of their system?
> No mechanism can tell you if you "trust" the remote administrator or
> user.  I assume you actually meant that the mechanism should establish
> the identity of the remote administrator/user.  With this provision, I
> already conceded 1.

Excuse me. I should have said "any robust mechanism for selective
[computational] assembly is *predicated on answering* two independent
questions".
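
To show that the two questions really are independent, here is a
rough sketch in the same spirit. The toy hash stands in for a real
signature scheme; every name here is illustrative, not a real API:

  import hashlib

  def _h(*parts: str) -> str:
      # Toy stand-in for a real signature check (e.g., Ed25519).
      return hashlib.sha256("|".join(parts).encode()).hexdigest()

  TRUSTED_USERS = {"marcus-public-key"}      # question 1: whom I trust
  KNOWN_GOOD_STACKS = {"audited-stack-v1"}   # question 2: what I trust

  def may_assemble(user_key: str, user_sig: str,
                   challenge: str, quote: tuple) -> bool:
      # Question 1: establish *identity*. Whether I then trust that
      # identity is my own policy; no mechanism decides trust for me.
      if user_sig != _h(user_key, challenge):
          return False
      if user_key not in TRUSTED_USERS:
          return False
      # Question 2, answered independently: credible evidence about
      # which software stack is actually in control of the machine.
      measurement, quote_sig = quote
      if quote_sig != _h(measurement, "platform-key"):
          return False
      return measurement in KNOWN_GOOD_STACKS

  # Both predicates must hold; neither one implies the other.
  sig = _h("marcus-public-key", "nonce-42")
  quote = ("audited-stack-v1", _h("audited-stack-v1", "platform-key"))
  assert may_assemble("marcus-public-key", sig, "nonce-42", quote)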

> Number 2 is a question that no mechanism can answer.  The remote user
> may sit at the computer next to a bad guy with a gun.  In this case,
> the remote user is not in control of their system.  I actually think
> that remote attestation does not give you _any_ information on this
> question.

Actually, that is completely false. Remote attestation means that I know
the behavior of the remote software. In consequence, I know that certain
things *cannot be done at all* by the remote user, even with a gun to
their head.
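
Mechanically, the point looks like this: once a quote verifies, the
measurement identifies the software, and I can look up what that
software is incapable of doing no matter who is holding the gun. A
sketch, with a table and property names that are purely my own
invention:

  # Illustrative table: measurement -> properties the identified
  # software enforces unconditionally, regardless of any input the
  # coerced user can produce at the keyboard.
  GUARANTEES = {
      "audited-stack-v1": {
          "cannot-export-session-keys",
          "cannot-suppress-audit-log",
      },
  }

  def guaranteed(measurement: str) -> set:
      # An unrecognized stack guarantees nothing.
      return GUARANTEES.get(measurement, set())

  # Release the document only if the attested stack is known to be
  # unable to leak it, whatever the user is forced to attempt.
  if "cannot-export-session-keys" in guaranteed("audited-stack-v1"):
      print("safe to release under this policy")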

