Re: FOSDEM and beyond (next stable release of base)


From: Nicola Pero
Subject: Re: FOSDEM and beyond (next stable release of base)
Date: Tue, 9 Feb 2010 18:32:37 +0000


> It would also be nice if GNUstep Make could easily support a precompiled header without needing custom rules. Ideally, just setting whether you wanted strict OS X compatibility, and what you were building, would select the correct header to precompile and would use -include to pre-include it in all Objective-C source files.

Sounds like an interesting idea. We already support precompiled (prefix) headers. They're quite easy to use, but they only make sense if you are using lots of headers, e.g. both gnustep-base and gnustep-gui. Renaissance uses them; check it for an example.

We could automate some of that. I spent some time experimenting with it today.

I'm a bit disappointed with the results. As usual, if you're only using gnustep-base, precompiled headers might make no difference, or might even slow down your build. That is particularly true if you have few files to compile, or if you don't include Foundation/Foundation.h but instead have polished source files that only include the headers they need (e.g. only NSArray.h, NSDictionary.h and whatever else is used) - which is quite common in our source.

I guess we could still detect the case where gnustep-gui (AppKit) is being used and enough Objective-C files are being compiled, and in that case create a default precompiled header that simply includes Cocoa/Cocoa.h. I did that, but when I tried it in practice, many libraries that don't use gnustep-gui turn out not to have bothered setting NEEDS_GUI = no, so preincluding Cocoa/Cocoa.h might not work for them, or might even break them. Not a good idea to do it automatically for all projects :-(

We could automatically precompile and preinclude Foundation/Foundation.h, but I have no evidence that this would speed up anything except in some special cases!

--

To be honest, if anyone needs more build speed, they should:

* Get a recent processor, then use 'make -j 2' or 'make -j 4' or whatever number works well on their machine. That will dramatically shorten build times. The truth is, compiling files parallelizes amazingly well: on servers I use 'make -j 8', and large Objective-C projects compile extremely fast - we're talking about almost an order of magnitude improvement over a plain 'make'. As the trend is towards more parallelism, in a few years we'll presumably reach a 'make -j 32' point where an entire project is built in a single, large parallel run of 32 instances of GCC; you could then build the entire core of gnustep-base in a couple of seconds. (Note how creating a precompiled header would double compile times in that situation, since it adds an initial, non-parallelizable step in which the precompiled header is created!) (At that point, most of the time would be spent running configure, linking, and executing all the messy serialized custom rules; to shrink build times even further we would really need to rewrite autoconf so that it can run its tests in parallel, and rewrite our custom-written rules to parallelize them.)
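As a concrete starting point (this is a sketch, assuming GNU coreutils' nproc is available; on other systems 'getconf _NPROCESSORS_ONLN' is a common alternative), one can let the machine pick the -j value instead of hard-coding it:

```shell
#!/bin/sh
# Sketch: run one make job per online CPU.
# 'nproc' prints the number of processing units available.
jobs=$(nproc)
echo "building with make -j ${jobs}"
# make -j "${jobs}"   # commented out: illustrative only
```

On an 8-core server this picks 'make -j 8' automatically, matching the numbers discussed above.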

* Each subproject (or aggregate project) is built separately, so if you have 4 files in one subproject and 4 files in another subproject, going over 'make -j 4' won't help. gnustep-make 2.2.1 supports having source files in subdirectories without using subprojects; the build is then efficiently parallelized across the subdirectories. E.g., if you have 4 files in one subdirectory and 4 files in another, driven by a single project (with no subprojects), you can go up to 'make -j 8' and build them all in a single pass. In summary: switch to gnustep-make 2.2.1 once it is released, remove subprojects, and get more build scalability (which you probably won't be able to use yet unless you have good hardware - but if you do, it may be worth it!). ;-)
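As a sketch of what that could look like (the tool name and file layout here are made up for illustration; the variables are standard gnustep-make ones), a single tool can list sources from several subdirectories directly instead of wrapping each subdirectory in a subproject:

```make
# Hypothetical GNUmakefile: one tool, sources spread over subdirectories,
# no subprojects - all eight files can then be compiled in one parallel pass.
include $(GNUSTEP_MAKEFILES)/common.make

TOOL_NAME = mytool

mytool_OBJC_FILES = \
  Parser/Lexer.m Parser/Grammar.m Parser/AST.m Parser/Emit.m \
  Runtime/Heap.m Runtime/Stack.m Runtime/Eval.m Runtime/Main.m

include $(GNUSTEP_MAKEFILES)/tool.make
```

With a layout like this, 'make -j 8' can keep eight compiler instances busy at once.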

* Precompiled headers are worth it (sometimes), but you really need to spend a bit of time setting them up for your project for them to be of any visible benefit. Expect a reduction in build times from 10% to 50% depending on the project. (PS: just going from 'make' to 'make -j 2' on a modern processor might speed up your build more - and much more easily!)

Btw, the more you parallelize, the smaller the speedup, because creating the precompiled header is an initial, serial step that can't be parallelized. E.g., if you have 16 files and can run 'make -j 4', you build everything in 4 passes; adding a precompiled-header generation pass means you now need 5 passes to compile, which takes about 25% more time. It then remains to be seen how much the precompilation speeds up the other 4 passes ... but from what I've seen, the speedup might be of a comparable order of magnitude, making it uncertain whether you're gaining any speed at all (and with 'make -j 8' you may well be losing speed by precompiling your header!).

Anyway, if anyone still wants to use a precompiled header, it's all supported, and here's how to do it: create a header that includes all the main headers you want to include in every file in your project. Let's say it's MyFile.h. For example, it could contain

 #import <Foundation/Foundation.h>
 #import <AppKit/AppKit.h>
 #import <MyFramework/A.h>
 #import <MyFramework/B.h>
 ...

Then set

  xxx_OBJC_PRECOMPILED_HEADERS = MyFile.h

This will cause the header to be automatically precompiled before the build (but only if the compiler supports precompiled headers). If you want it automatically preincluded whenever a file is compiled, also add

 ADDITIONAL_OBJC_FLAGS += -include MyFile.h -Winvalid-pch

and that will do it. Try building again and see how your build times change. They should go down a bit if the precompiled header is big enough and you were previously including those headers without precompiling them. (Check Renaissance for an example.)
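Putting the two settings above together, a minimal GNUmakefile for a hypothetical application (the project name and file list are made up; the variables are the ones described above) might look like this:

```make
# Hypothetical GNUmakefile using a precompiled prefix header.
include $(GNUSTEP_MAKEFILES)/common.make

APP_NAME = MyApp
MyApp_OBJC_FILES = main.m Controller.m Document.m

# MyFile.h is the prefix header shown above (Foundation, AppKit, ...).
MyApp_OBJC_PRECOMPILED_HEADERS = MyFile.h

# Preinclude it in every Objective-C file; -Winvalid-pch warns if the
# precompiled version can't actually be used.
ADDITIONAL_OBJC_FLAGS += -include MyFile.h -Winvalid-pch

include $(GNUSTEP_MAKEFILES)/application.make
```

Note that the xxx_ prefix on OBJC_PRECOMPILED_HEADERS is the target name (here MyApp), as with the other per-target variables.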

--

Summarizing: since the future is in multiple processors/cores, and 'make -j N' scales really well as you add processors/cores, while precompiled headers don't scale at all - in fact they scale "negatively", in the sense that the more you parallelize, the less efficient they become (until they slow you down instead of speeding you up) - I suggest we don't invest any more in precompiled header support, but instead think about how to parallelize things more. :-)

A common case, for example, is building many tools from the same GNUmakefile, each of them created by compiling a file or two. We don't parallelize building the tools themselves at the moment.
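For concreteness, the kind of GNUmakefile meant here (tool names and files are made up for illustration) looks something like:

```make
# Hypothetical GNUmakefile building several small tools. Currently the
# tools are built one after another; only the files within a single tool
# are compiled in parallel.
include $(GNUSTEP_MAKEFILES)/common.make

TOOL_NAME = frobnicate defrag report

frobnicate_OBJC_FILES = frobnicate.m
defrag_OBJC_FILES     = defrag.m
report_OBJC_FILES     = report.m helpers.m

include $(GNUSTEP_MAKEFILES)/tool.make
```

With only one or two files per tool, 'make -j 8' has little to chew on unless the tools themselves are built in parallel.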

Thanks



