
Re: HPUX: PARALLEL=4 make -P


From: Ralf Wildenhues
Subject: Re: HPUX: PARALLEL=4 make -P
Date: Wed, 14 Feb 2007 19:21:40 +0100
User-agent: Mutt/1.5.13 (2006-08-11)

Oh, boy.  The simple summary is: for parallel builds, use GNU make,
avoid HP-UX make.

Here's a longer one: I have been able to reproduce the issue on the HP
testdrive host "HP-UX 11i v2 on Integrity rx1620", which is a two-way
system.  Here's a reduced self-contained example:

# tr rewrites each "T" below into a literal tab (recipe lines must begin with one)
tr T \\t >Makefile <<'EOF'
all: one two
two: two.o
T$(CC) -o $@ two.o
one: one.o
T$(CC) -o $@ one.o
two.o: two.c dirstamp
T$(CC) -c -o two.o two.c
one.o: one.c dirstamp
T$(CC) -c -o one.o one.c
dirstamp:
T: > dirstamp
clean:
Trm -f *.o one two
cleaner: clean
Trm -f dirstamp
EOF
echo 'int main() { return 0; }' > one.c
cp one.c two.c
PARALLEL=4            # HP-UX make takes the job count from this variable
export PARALLEL
make -P               # -P requests a parallel build

| Making target "dirstamp"
|         : > dirstamp
| Making target "two"
|         cc -o two two.o
| cc: warning 1913: `two.o' does not exist or cannot be read
| Making target "one.o"
|         cc -c one.c
| Making target "two.o"
|         cc -c two.c
| ld: I/O error, file "two.o": No such file or directory
| Fatal error.
| two: *** Error exit code 1      [/house/rwild/t/Makefile]
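
For contrast, here is a sketch of the same reproduction driven by GNU
make, as the summary above recommends (assumptions on my part: GNU
make is installed as "gmake" on this host, and its -j flag replaces
the PARALLEL variable as the way to request four parallel jobs):

  gmake cleaner   # start again from a pristine tree
  gmake -j4       # each link step waits for its object file to exist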

In other words, this parallel make looks buggy and unusable to me,
and there is really nothing Automake can do about it.
Am I missing anything?

Now, I figured for a minute that maybe we're just misunderstanding
things.  The manual at <http://docs.hp.com/en/B2355-60105/make.1.html>
states that .MUTEX can be used to keep selected targets from being
updated in parallel.  And indeed, if I add
  .MUTEX: dirstamp two.o

then things seem to work.  But with three object targets instead of
two, I'd have to list yet another one, and once everything that might
race is on that line I've defeated the purpose of a parallel build
anyway.
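
To see why this doesn't scale, picture a hypothetical third program
(three.c is my invention, not part of the test case above): every
object that can race against dirstamp has to be added to the .MUTEX
line by hand, and every name on that line is one more step that no
longer runs in parallel.

  # hypothetical: each additional program costs another entry here
  .MUTEX: dirstamp two.o three.o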

If you (or someone else who has read this far) have a support contract
with HP, please report this as a bug, and post back the tracker URL if
it is publicly available.  Thanks.

Cheers,
Ralf



