Re: Broken dream of mine :(

From: William Leslie
Subject: Re: Broken dream of mine :(
Date: Tue, 6 Oct 2009 14:14:37 +1100

2009/10/6 Jonathan S. Shapiro <address@hidden>:
> On Mon, Oct 5, 2009 at 12:12 AM, William Leslie
> <address@hidden> wrote:
>> 2009/10/5 Jonathan S. Shapiro <address@hidden>:
>>> But with safe languages gaining acceptance, I think we now
>>> would need to re-examine that.
>> 0. Language level object models tend to be finer grained than those
>> exposed by the operating system.
> Yes. This can be a blessing or a curse. Finer grain lets you pack
> concepts more tightly, but there is such a thing as an object that is
> too small to adequately do the job. We don't teach OO programming very
> well in school, so this tends to be a problem in real programmers.
> Another way to say this is that factoring is only good when the
> concepts involved actually factor. When they don't, you just end up
> making things more complicated.

This is an interesting problem, and one that doesn't have a
straightforward answer in most of today's languages.  A similar
problem is determining who pays for which resources at this level.
I have a vague idea of a solution, which I might save for a
future blog entry (there are a couple of prerequisites to this).

>> 1. Safe languages introduce new opportunities for optimisation.
> Not in practice. Current safe languages turn out to have a bunch of
> fairly low-level design issues that make optimization quite difficult.
> In principle what you say should be true, but in practice we fumbled
> the early implementations.
> Trivial example 1: The "readonly" keyword in C# is (correctly and
> necessarily) ignored by most C# compilers. Exercise for the reader:
> explain why.
> Trivial example 2: Most pointers in Java/C# can be null, which carries
> an enormous optimization penalty.
> JIT doesn't help either of these.
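
On the second example, a minimal Java sketch (class names are mine, purely
illustrative) of where the null penalty comes from: because almost any
reference may be null, the compiled code must preserve a possible
NullPointerException at each dereference point, which constrains hoisting,
reordering and speculation around every access.

```java
// Hedged sketch: the cost of pervasive nullability. Names illustrative only.
final class Node {
    final int value;
    final Node next;
    Node(int value, Node next) { this.value = value; this.next = next; }
}

public class NullPenalty {
    // `head` may legally be null, and so may every `next`. The compiled
    // code must raise NullPointerException at exactly the faulting
    // dereference, so each access carries an explicit check or relies on
    // a trap -- and cannot be freely reordered with surrounding effects.
    static int first(Node head) {
        return head.value; // must throw here if head == null
    }

    static int sum(Node head) {
        int total = 0;
        for (Node n = head; n != null; n = n.next) {
            total += n.value;
        }
        return total;
    }

    public static void main(String[] args) {
        Node list = new Node(1, new Node(2, new Node(3, null)));
        System.out.println(sum(list)); // 6
        try {
            first(null);
        } catch (NullPointerException e) {
            System.out.println("NPE preserved");
        }
    }
}
```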

Any discussion of optimisation needs to keep in mind that an
optimisation is only applicable if it preserves the semantics of
the language.  Any attempt to take advantage of readonly would need to
show that, over the domain of interest, no path modifies the region of
interest and no memory barriers intervene; effect analysis of
this depth is very expensive if all you get out of it is showing
that a readonly field is loop-invariant, and indeed, if you are doing
that kind of analysis, the readonly annotation is redundant.  What safe
languages bring to the table is that they are easier to analyse in
this way, because there are more assumptions you can make.
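
To make the aliasing point concrete, here is a minimal Java sketch (names
are mine, and a plain mutable field stands in for a "readonly" region):
the loop bound looks invariant, but it cannot be hoisted without an effect
analysis proving that nothing in the loop body reaches the same object
through another reference.

```java
// Hedged sketch: why a readonly-style marker alone does not license
// hoisting. `bound.limit` is reachable through an alias, so a call in
// the loop body may change it; only effect analysis could prove otherwise.
final class Box { int limit; Box(int limit) { this.limit = limit; } }

public class AliasDemo {
    static int countUpTo(Box bound, Runnable callback) {
        int count = 0;
        // The compiler may NOT hoist bound.limit out of the loop:
        // callback may hold an alias to the same Box and mutate it.
        for (int i = 0; i < bound.limit; i++) {
            count++;
            callback.run();
        }
        return count;
    }

    public static void main(String[] args) {
        Box shared = new Box(10);
        int[] seen = {0};
        // The callback shrinks the bound through the alias on iteration 3,
        // so hoisting the original limit of 10 would change the result.
        int n = countUpTo(shared, () -> { if (++seen[0] == 3) shared.limit = 3; });
        System.out.println(n); // 3
    }
}
```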

> With apologies to Jeff Irving, this insight was known back in the
> Cedar/Mesa days.
>> 2. Safe languages provide security benefits that go beyond confinement.
>> Even when a stack overflow exploit in a network server can't do any
>> damage to the filesystem directly, it can forge communications, which
>> could be just as bad.  Safe languages don't eliminate all possible
>> bugs, but they sure make a difference, and depending on the intended
>> target audience that could be a serious positive.
> Yes. They eliminate between 50% and 60% of current vulnerabilities.
> But be careful. You need to test and calibrate the runtime cost of this...

>> -1. There is a large amount of legacy code that is not just going to go away.
> Perhaps, but the beauty of legacy code is that 20% of it dies of old
> age every 5 years. Most of it's junk. Unfortunately, the code that
> survives many culls is important and complex code.

Most of it is junk, but replacing it is still a lot of work.  Who has
the time and energy to rewrite Unix kernel functionality, GUI toolkit
libraries, databases, safe language runtimes, and so on, in a safe
language?  Ultimately it would be good, but throwing everything away
to start from scratch does not seem viable.

> JIT code is bad because we don't know how to assure anything as
> complex as a JIT compiler.

Any transformation a JIT compiler makes must preserve the semantics of
the original program, otherwise it would not be useful.  Since the
program must already have been shown to be safe in order to be compiled
in the first place, the problem reduces to proving that the compiler
preserves the semantics of the language over the domain of all valid
programs and correctly typed data.

This is not exactly trivial; the kinds of bugs common to JIT
compilation are very obscure.  But proving that the semantics are
preserved seems like a reasonable task, especially given a model that
is easy to reason about.
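
To make the proof obligation concrete, here is a hedged source-level
sketch (names are mine, not from any real JIT) of one such transformation,
bounds-check elimination: dropping the per-access check is sound only when
the loop guard already proves the index is in range, so no observable
behaviour, including the possibility of an exception, can change.

```java
// Source-level model of the reasoning a verified JIT must carry out for
// bounds-check elimination. Names illustrative only.
public class BceSketch {
    // Every a[i] conceptually performs: if (i < 0 || i >= a.length) throw.
    // Inside this loop the guard 0 <= i < a.length already holds, so the
    // check may be dropped: no execution can observe the difference,
    // which is exactly the semantics-preservation obligation.
    static long sum(int[] a) {
        long total = 0;
        for (int i = 0; i < a.length; i++) {
            total += a[i];
        }
        return total;
    }

    // Contrast: here the check may NOT be removed. When i == a.length the
    // access must throw ArrayIndexOutOfBoundsException; eliminating it
    // would silently change the program's meaning.
    static long sumOffByOne(int[] a) {
        long total = 0;
        for (int i = 0; i <= a.length; i++) {
            total += a[i]; // throws on the final iteration
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4};
        System.out.println(sum(data)); // 10
        try {
            sumOffByOne(data);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("check must be preserved");
        }
    }
}
```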

William Leslie
