From: suzuki toshiya
Subject: [ft-devel] Autoconf's checking of size of xxx should not be used with multiarch-at-once compiler (Re: a couple of warnings from 2.5.4 with mingw/darwinx)
Date: Fri, 30 Jan 2015 14:47:14 +0900
User-agent: Mozilla-Thunderbird 2.0.0.24 (X11/20100329)
Dear Werner,
While tracing Hin-Tak's report, I found that the
sizes of the types checked by Autoconf are not
reliable when building in a multiarch-at-once manner.
Maybe I should discuss with the Autoconf maintainers
whether it is a good idea to make these macros fail
forcibly in such a case. Please let me explain the
background.
1) What is "multiarch-at-once"?
I'm not sure whether this goes back to the NeXTSTEP
era, but during the migration of Mac OS X from
PowerPC to Intel, Apple's fork of GCC was extended
to accept multiple "-arch" options. When multiple
"-arch" options are given, gcc works as a wrapper
command: it invokes the real compiler once per
architecture (ppc, ppc64, i386, x86_64, ... and arm?)
and combines the per-architecture objects into a
single fat binary (with the "lipo" command). If 4
"-arch" options are given, the compiler is invoked 4
times. If something goes wrong for one architecture,
the wrapper aborts with an error, even if the other
architectures work well.
Although astute developers would not use this technique
for Autoconf'ed software because of the inconsistency
of the Carbon APIs, many developers do use it because
(they expect that) the pitfalls caused by the
inconsistency are not so frequent.
2) Why does Autoconf's checking of size of xxx not
work well?
Because Autoconf's checking of the size of xxx must
be ready for cross compilation, it cannot check the
size of a type by overflow checking at run time.
Glancing at the code compiled during configure, it
seems to ask the compiler to declare an array whose
size is computed from "sizeof (xxx)" and the assumed
size: if the assumed size is too small, the compiler
cannot declare the array because its size becomes
negative. Note that assuming a size that is too large
does not cause an error.
How does this work in the "multiarch-at-once"
situation? In my understanding, the checking settles
on the largest of the sizes among the architectures,
because only an assumption large enough for all of
them compiles everywhere. For example, when we check
the size of long, the checking process would be...
a) assume 32-bit -> ppc=ok, ppc64=fail, i386=ok, amd64=fail -> failure
b) assume 64-bit -> ppc=ok, ppc64=ok, i386=ok, amd64=ok -> ok
As a result, the configured result looks as if the
platform were LP64. But we should not expect that a
long on ppc or i386 can store a 64-bit integer :-(
3) How should "--enable-biarch-config" be modified?
Currently, for backward compatibility, I keep the
int/long size checking by the Autoconf macros.
Afterwards, I insert a size computation of int/long
by the C preprocessor and compare its results with
the results of the Autoconf macros. If
"--enable-biarch-config" is given, the size
computation by the C preprocessor is prioritized, so
I think there is no need to change the behaviour.
However, "broken but use" is a confusing message. How
should I change it? There is a possibility that the
size computations by the C preprocessor are incorrect,
so I want to detect the "multiarch-at-once" situation
before the size computation by the Autoconf macros and
issue some warning about the reliability (and not
issue "broken" for the size computations by the C
preprocessor). What do you think?
Regards,
mpsuzuki