
Re: [Tinycc-devel] inline assembly and optimization passes


From: Jared Maddox
Subject: Re: [Tinycc-devel] inline assembly and optimization passes
Date: Fri, 27 Sep 2013 02:02:54 -0500

> Date: Thu, 26 Sep 2013 11:33:38 +0200
> From: Sylvain BERTRAND <address@hidden>
> To: address@hidden
> Subject: Re: [Tinycc-devel] inline assembly and optimization passes
> Message-ID: <address@hidden>
> Content-Type: text/plain; charset=us-ascii
>

>> If I was doing that then I'd be looking at an application
>> language that would easily integrate with a systems language,
>> hence high-end features along with C compatibility.
>>
>> If I was going for a new systems language then I'd just take C,
>> modify the syntax for pointers & declarations, and maybe modify
>> the standard library. Presumably a smaller job than a
>> from-scratch language.
>
> I do not agree. I would go C- for system and application. I don't
> like my software stack to depend on tons of different languages
> (because at the end, it's what we get).
>

A purpose for everything, and for everything a purpose. C is a
difficult language to write big programs in, most of all because you
have to do all memory management yourself (non-cyclic dependencies
make this easy, cyclic dependencies make it hard, and data beyond
basic foundations such as trees usually involves cyclic
dependencies... unlike the cycles you can deliberately introduce into
basic data structures, these are not simple to manage). It's also an
awkward language due to its lack of higher-end constructs such as
classes, closures (well, lambda functions, really), casting operators,
and language-enforced RAII.

The applications language would be C-based; a closure would look
something like the following:

_int( _int ) _closure abs =
    @< _int ( _int arg ) { _return( arg >= 0 ? arg : -arg ); };

Basically, a rationalized form of C (e.g., pointers are now part of
the TYPE portion of the declaration, NOT the VARIABLE part; a keyword
is used instead of a re-purposed operator; and the pointer operators
have their own character sequences instead of reusing others'), and
THEN adapted to the application-programming realm. Part of the idea is
that while C has the right basic syntax, when you get out of the
bare-metal realm you want some richer (and, more importantly, more
convenient) features to use. Thus, a language that takes most of the
good from C, tries to fix the bad (declaration and pointer syntax
goofs, namespace collisions, multiple types in a single declaration),
and adds some general convenience features, such as GC (because doing
it correctly is hard), and some things that GC suddenly makes
practical (like closures).
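
(The declaration goof in question is the standard C one, where the *
binds to the variable rather than to the type:

    int* a, b;     /* looks like two pointers, but only a is one;  */
                   /* b is a plain int                             */
    int *c, *d;    /* what you actually have to write              */

Moving the pointer into the TYPE portion of the declaration removes
that trap.)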

The output of the compiler would be C, intended to be linked with a
provided runtime library (it wouldn't be that big, as the "standard
library" would be something separate: the runtime provides the bits
inherently required by the language, the standard library provides the
stuff you normally use, and thus can be replaced with what the
programmer wants). This would certainly be made easier by the firm
resemblance to C (everything will either be lifted from C, implemented
with a C library, or built on top of such capabilities).
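
To make that concrete, here is roughly what the abs closure above
might get lowered to. This is only an illustration under assumed
names: the struct, abs_body, and the use of malloc in place of a GC
allocation are all invented for the sketch, not part of any real
runtime.

    #include <stdio.h>
    #include <stdlib.h>

    /* A closure becomes a record holding a function pointer plus its
       captured variables (abs captures nothing, so only the pointer
       remains). */
    typedef struct closure_int_int {
        int (*fn)(struct closure_int_int *self, int arg);
    } closure_int_int;

    static int abs_body(closure_int_int *self, int arg)
    {
        (void)self;                   /* nothing captured to look at */
        return arg >= 0 ? arg : -arg;
    }

    int main(void)
    {
        /* the real runtime would hand this allocation to the GC;
           malloc/free stand in for it here */
        closure_int_int *abs_c = malloc(sizeof *abs_c);
        abs_c->fn = abs_body;

        printf("%d\n", abs_c->fn(abs_c, -7));   /* prints 7 */
        free(abs_c);
        return 0;
    }

The runtime library's contribution here is basically the allocator and
the collector behind it; the standard library never enters into it.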

Frankly, if I were running a software company, I would keep most of my
employees away from languages like C-- while on company time. The
moment you get into memory management, everything becomes more
complex, and the simple reference counting scheme that seems to
commonly be used isn't up to the job of correctly managing
reference-loops. It's a task best left to a dedicated library.
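
The classic failure case, written out in plain C with the counts
managed by hand, looks something like this:

    #include <stdlib.h>

    struct node {
        int refcount;
        struct node *other;
    };

    int main(void)
    {
        struct node *a = malloc(sizeof *a);
        struct node *b = malloc(sizeof *b);
        a->refcount = 1;  a->other = NULL;
        b->refcount = 1;  b->other = NULL;

        a->other = b;  b->refcount++;   /* a holds b */
        b->other = a;  a->refcount++;   /* b holds a: loop closed */

        /* drop the external references */
        a->refcount--;
        b->refcount--;

        /* both counts are still 1, so a naive scheme never frees
           either node; it takes a tracing collector (or an explicit
           cycle detector) to reclaim them */
        return 0;
    }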

I don't think the designers of C-- would suggest it for applications
programming either, or at least for some kinds of systems programming.
C-- (unless you're talking about the Sphinx one, which I don't
remember anything about) was designed as a language for COMPILERS to
target. And indeed, you could design a fairly good compiler-target
language if you lifted syntax from C. Unfortunately, C-- is (at least
to the best of my knowledge) tied rather directly to the x86
architecture. Furthermore, as best I recall, the only real advantage
it had was a swap operator, and some more-efficient versions of e.g.
if. This is not really what you want in a compiler-target language.
What you really want is a way to describe functions and data
structures in such a way that you both convey the details that you
require, and allow the "C--" compiler to reinterpret your information
into an improved form. Annotations are certainly useful for this, but
you're probably not quite looking at C, because I'm pretty certain
that you would really want compile-time code to be run during that
phase, which C in no way provides for.

>>>> So you want to hook into the TCC parser itself?
>>>
>>> I said, if this has no obvious blockers, we could use fake targets
>>> that would be optimization passes. They would output C code.
>>
>> Yeah, I mostly paid attention to the fake-target bit, since
>> outputting IL/IC from that seemed like the easiest way into the
>> "standard route".
>
> I don't know what "this standard" route is, all I know is outputting
> C code with fake targets to handle some optimization passes seems to
> be a good tradeoff to avoid a lot of kludge and to minimize impact on tcc
> internals.
>

The "standard route" to optimizations is that your C parse outputs
something that represents the C input, but in a more convenient form,
which is then read in by your optimizer and assembler. As I understand
it, this is normally some version of Polish notation (either normal,
or reverse) due to the greater ease in parsing it.
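
For example, the expression

    a + b * c

comes out in reverse Polish order as

    a b c * +

and the optimizer or code generator can walk that with a plain stack,
with no precedence or parenthesis handling left to do.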

C is sometimes used as intermediate code, but in those instances it is
used as if it were assembly code, being handed to an assembler, rather
than intermediate code being handed to an optimizer. You should be
able to come up with a C-derived syntax that would work well as a true
IC, but I wouldn't try to do it with C itself, just with something
based on it.

Honestly, using C as an intermediate code for optimizers IS a kludge:
if it were genuinely a good idea, it would have become common. C aims
to be a systems language, and while you could PROBABLY make your idea
work, I wouldn't want to bet on it working any better than attempting
to write an OS in Java, if even that well.

Besides which, all of the important bits of optimization are in the
syntax tree. C might be chosen by someone as an output for an
optimization stage, but x86 assembly would do roughly as well, or JVM
bytecode, or CIL, or whatever else. The output is a distraction; you
should erase it from your mind.

>>> Regarding the unused code elimination across compilation units, it
>>> probably involves the linker. Then the "trick" of the fake target
>>> may not be as easy.
>>
>> That depends. Eliminating unused functions & variables works like
>> that, but it only requires the ability to detect when you're in
>> the file-scope instead of a scope contained within a file, and
>> take that as a signal to place any additional variables or
>> functions into a new section. That and info on what those
>> sections need to import are all that you need, and all of that
>> should (at least presumably, it's been a while since I poked at
>> object file formats) be supported by your ordinary object file
>> format. Or, at least, your ordinary library file format.
>>
>> Once you have that, it's a matter of creating a linker mode that
>> will assign two bits (one for "needed", one for "supplied") in a
>> memory block to each of the sections, and starting a search from
>> your "root section" (probably the one containing the "main" or
>> equivalent function, but possibly a file declaring exports too).
>> Every time that you find a dependency, you ensure that it has
>> either its "needed" or "supplied" bit set. Once you've finished
>> checking through every section that you ALREADY knew to check,
>> you output it, make a new list of sections to check (they'll be
>> the ones marked "needed" instead of "supplied"), switch all of
>> the "needed" sections to be only "supplied" instead, and start
>> the dependencies search again. You only stop once you run out of
>> sections that are "needed".
>>
>>> We may have to "annotate" the generated C code for the real target
>>> to insert the proper information in the object file for the
>>> linker. I bet that optimization pass would be kind of the last
>>> one.
>>
>> Unused function removal works as I stated above: you find a
>> starting point, find all of its dependencies, write out the
>> starting point, and recursively check dependencies for new
>> dependencies, and write old dependencies out.
>>
>> Other forms of unused code removal either never leave the
>> compiler (e.g. removing if( 0 ) blocks), or should be left for a
>> later date (some things are more foundational than others).
>>
>>>  - A compilation unit scoped dead/unused code removal fake target
>>
>> Let's worry about unused function removal first, since that
>> should be the fastest to implement, okay? Depending on details
>> that Grischka would know but I don't, we might need to build a
>> parse tree before we can eliminate unused code INSIDE of
>> functions, in which case the target will be a "parse tree" output
>> of SOME kind, regardless of whether it's assembly-ish, C-ish, or
>> something-else-ish.
>>
>> Also, by virtue of some of Grischka's comments below, I don't
>> think that just sticking the optimizations entirely inside a fake
>> target will be enough: we'll need to build a parse tree for
>> anything other than the minor stuff, in which case we might as
>> well have the fake target be the parse tree instead of making it
>> be the actual optimization.
>>
>>>  - A C code annotation target which creates a dependency tree of
>>>    machine code sections for the linker to optimize out or not.
>>
>> Allowing individual sections to have their own dependencies will
>> do this perfectly fine.
>
> Alright, then a fake target to annotate C code, whose annotations
> will add extra info in the generated ELF object, seems a good path
> for dead code/unused code removal optimization passes, at file
> scope.
> Compilation-unit-scope passes can output C code without the need
> of annotations for the linker; we could start from here.
>

No, no, no. A real target will output an alternate form of object file
where each function and variable is in a different file section, or an
optimization target will completely remove code associated with
always-false conditionals. Genuine C code doesn't get annotations in
any implementation that we'll even theoretically see anywhere in the
near future, because if you can add the annotations, then you can do
the work that the annotations call for.

Go back and read my last few posts again: I've been talking about a
total of four different versions of dead code elimination, one of
which involves object files and might be quick to implement, one of
which works entirely inside the compiler and might be quick to
implement, and two of which are distinguished by requiring PARSE
TREES, not annotations. Stop mixing multiple optimizations into one;
it causes problems.
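
For the object-file variant, the one with the "needed"/"supplied" bits
per section that I described earlier, the core of the linker pass is
roughly the following. The structure and field names are invented for
the sketch; this is not tcc's actual representation of sections.

    #include <stdbool.h>
    #include <stddef.h>

    struct section {
        struct section **deps;   /* sections this one references  */
        size_t ndeps;
        bool needed;             /* discovered, not yet scanned   */
        bool supplied;           /* scanned and written out       */
    };

    /* Mark everything reachable from the root section; whatever ends
       up with neither bit set is unused and gets dropped from the
       link. */
    static void mark_reachable(struct section **all, size_t count,
                               struct section *root)
    {
        root->needed = true;
        bool again = true;
        while (again) {
            again = false;
            for (size_t i = 0; i < count; i++) {
                struct section *s = all[i];
                if (!s->needed)
                    continue;
                for (size_t d = 0; d < s->ndeps; d++) {
                    struct section *dep = s->deps[d];
                    if (!dep->needed && !dep->supplied) {
                        dep->needed = true;   /* scan on a later pass */
                        again = true;
                    }
                }
                s->needed = false;            /* this one is done:    */
                s->supplied = true;           /* emit, don't rescan   */
            }
        }
    }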

> But, I'm more concerned with "variable aliasing" optimization
> passes.  As I said before, my general "ground" experience tells
> me that a lot of C code uses many variables to get to the same data.
> (I use aliasing a lot to make my code more readable).
> And I was wondering how much of the registers/stack space dance
> we would be able to avoid with such passes. I feel it will give
> a significant performance boost. I may be wrong though.
>

Get a parse-tree builder first. Worry about specific uses afterwards.
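
(For reference, the aliasing pattern being described is roughly the
following; whether a pass can fold the aliases away comes down to
proving that they never diverge from the originals:

    struct packet { int len; unsigned char buf[256]; };

    void clear_payload(struct packet *p)
    {
        unsigned char *data = p->buf;  /* alias kept for readability  */
        int n = p->len;                /* local copy of the same field */

        for (int i = 0; i < n; i++)
            data[i] = 0;

        /* if the compiler can prove that data and n never diverge from
           p->buf and p->len, it can drop the extra stack slots or
           registers they would otherwise occupy in unoptimized output */
    }

But again: parse tree first.)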


