
Re: C++

From: Jonathan S. Shapiro
Subject: Re: C++
Date: Mon, 9 Nov 2009 08:08:10 -0800

On Mon, Nov 9, 2009 at 5:45 AM, Bas Wijnen <address@hidden> wrote:
> ...the linker cannot choose the proper function depending on the type: it
> doesn't know about types.

So first, this is a weakness of current linkers, not an essential
restriction. And some current link phases are smarter than this.

But second, if the template system were properly typed this would not
be true. It turns out that what the linker needs to do to implement
link-time expansion is slightly fancier relocations. Basically, you
drive the high-level types down to low-level types that deal with
things at the size and alignment level, and that's something that the
linker already knows about.

The real hidden issue in all this is inlining, not typing. If you
don't let the linker re-arrange the code substantially, then you can't
inline things like an object-specific less-than operator whose
resolution you don't know at (static) compile time. Ultimately, it is
the inlining issue that drives code replication.

The reason for the current behavioral division between linkers and
compilers is an artifact of legacy. It made sense in a day when
machine performance was measured in KIPs and a large data center disk
drive was 30 megabytes. In today's terms -- and especially with main
memories going into the 6-12 gigabyte range this year -- we have
plenty of resource to change the arrangement:

  - front end validates the code, generates high-level IL, does
high-level optimization.
  - linker accepts IL, does optimizing code generation.

The interesting challenge in this approach is dealing with library
interface boundaries. And as you point out, it's not really clear at
this point that the compile system should treat those as a boundary at
all.

> The reason languages like Python and
> JavaScript can do this is that they are interpreted languages, so that
> compile-time is the same as run-time (Python-"compiling" is only
> optimizing).

Yes and no. What you say is basically true. All I want to add is that
the decision about when to compile isn't "all or nothing". Java and C#
split the load between static compile time and run time, and they get
some advantage out of that.

> > This means that two compilation units that use minimum<int> will each
> > have their own instance of it.
> But there is a solution to that, which is implemented in C++: make
> each instance a weak symbol, which may be multiply defined.  If it is,
> only one function is actually included at link time.

Yes. And most compilers actually implement this today.

But see my point about inlining above. Basically, you can have (for
example) one copy of the B-tree implementation, or you can have
multiple copies where the comparison operator is inlined.
Unfortunately, there is no obvious answer for which way that judgment
should go in all cases, and in the separate compilation scenario, you
may not have enough information on hand to inline at all.


