
Re: Vulnerabilities in Synchronous IPC Designs

From: Jonathan S. Shapiro
Subject: Re: Vulnerabilities in Synchronous IPC Designs
Date: 04 Aug 2003 08:40:38 -0400

Apologies for the delay in responding. I appreciate Espen's comments, and
I wanted to clarify one or two things.

On Mon, 2003-06-02 at 07:29, Espen Skoglund wrote:
> [Jean-Charles Salzeber]
> > Hi,
> > As stand in
> > http://www.eros-os.org/papers/IPC-Assurance.ps
> > the L4 IPC system might be vulnerable to DoS attacks.  What are your
> > opinions about this?
> Just had a quick glance at the paper.  Here are some initial thoughts:
>   o Jonathan is talking about an 8 year old L4 API.  While many
>     weaknesses have been identified and fixed in the new API, some of
>     the issues he addresses have not been dealt with yet.

I was aware that changes had happened to the API and that further
changes were contemplated. I did look for an updated API document and
could not locate one. EROS, I must acknowledge, is in very much the same
state. That being said, I'm not aware that timeouts have been dropped.
If there were significant errors arising from changes in the API, I
would like to know about them and I would be happy to put a followup
note on the EROS web site.

The most important changes in L4 for my purposes in writing that paper
were the thread mapping/indirection proposals. At the time I prepared
the paper, I asked Gernot about this, and I was told that selection of a
thread mapping or IPC indirection proposal had not yet converged.

Under the circumstances, the best I felt I could do was go with the
public document. I hoped, in part, that the paper might provide you
folks with an opportunity to write some future paper talking about how
these issues had been addressed.

Just to be clear, I have every confidence that these issues can and will
be addressed. The only issue in the L4 architecture that I don't know
how to address straightforwardly is the design of the mapping mechanism.
This may reflect my ignorance rather than any fault of the mapping
mechanism itself.

>   o He mentions that segment registers are not reloaded on context
>     switches for the kernel with which we did performance
>     measurements.  This is wrong.  Segment registers must always be
>     reloaded when doing context switches on a kernel with small
>     spaces.  (L4Ka::Pistachio currently does not implement this.)

In the versions of the L4 kernel that I have seen, only CS/SS/DS/ES were
reloaded. Changes to FS/GS were not preserved -- these segment registers
were always reloaded with the NULL selector.

The covert channel proceeds as follows:

        Application loads any valid selector into FS/GS
        Application periodically checks FS/GS to see if it
          has been nullified.

This reveals the arrival of an interrupt or context switch in the
particular implementations of L4 that I have examined.

This is a very minor bug -- it is of course trivial to fix. The problem
is that the segment reloads are expensive (which is why they were
omitted) and without them one gets a skewed view of the cost of secure
context switch.

Has this been fixed in later implementations?

>   o His claim about transparent interposition in the alternative IPC
>     redirection model being difficult is debatable.

I would be interested to understand this statement more fully. Can you
elaborate?
>   o An L4 server would typically never use timeouts (i.e., it will use
>     zero-timeouts) for message transfer, and the claim that timeouts
>     pose a denial-of-service threat for servers is therefore dubious.

I think that if you believe this, you have not fully considered the
problem of message payloads whose size is not statically knowable. The
choices available in the L4 API appear to be timeout or map. Timeout has
problems raised in the paper. Mapping raises concerns of durability, as
the server and client now have a sharing relationship that must be
managed. This extends the temporal scope of the transaction considerably
beyond the bounds of the IPC operation.

>   o I don't buy the argument about reproducibility.  If you want
>     reproducibility of a program you must ensure that the whole system
>     state (including hardware) can be set to some initial state and
>     that the re-run of the program is handled exactly in the same
>     way at the hardware level.  Given that this is not doable in
>     current systems, there is no way that exact reproducibility can be
>     guaranteed.  There will always be some timing implications.  It
>     might be that he talks about reproducibility at a different level
>     here, though.

I agree that perfect reproducibility requires deterministic execution. I
also agree that this is impractical. This wasn't the source of my concern.

It was my experience during 10 years of delivering production quality
systems code that timeouts were a recurring source of untestable flaws.
During many years in which these sorts of problems were known to be
hard, nobody EVER came up with an adequate way to test for them. I have
talked about this issue with many experienced system builders over the
years, and none ever had a viable testing method.

Further, I observe as a system designer that if there is a really stupid
way to do something that is attractively simple, programmers can be
relied on to program using it. It is not impossible in EROS to implement
IPC timeouts. It is convoluted enough to induce the programmer to look
for cleaner solutions.

Ultimately, my real objection is empirical. In the real world, timeouts
are a fatal impediment to quality or security assurance.

>   o The Karlsruhe group has worked on another IPC model which
>     handles complete subsystem isolation, while still enabling
>     transparent interposition and having minimal performance overhead.
>     Unfortunately, the paper describing the model was not accepted
>     for publication.  Should probably get around to polish it a bit
>     more and make a TR out of it one of these days.

This is WONDERFUL news! Please let me know if the reactions of an
outside reader would be helpful -- I would be very interested to see
this work published.

