strange optimisation behaviour
From: Gav Wood
Subject: strange optimisation behaviour
Date: Fri, 30 Apr 2004 17:57:56 +0100
User-agent: KNode/0.7.7
hi,
i have a program that relies on destructors to do some useful work (like
notifying a parent object that it's done with a resource, so the parent can
free it); a simple reference-counting mechanism lets me use pass-by-value
semantics, and it works flawlessly when compiled with -O0.
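to illustrate, here's a stripped-down sketch of the mechanism (the names
Parent, Child, hasChild etc. are made up for the example, not my real code):

```cpp
#include <cassert>

// minimal stand-in for the real parent: it just records whether a
// child currently holds the resource.
struct Parent {
    bool hasChild;
    Parent() : hasChild(false) {}
    void acquired() { assert(!hasChild); hasChild = true; }
    void released() { hasChild = false; }
};

// Child copies share a reference count; the last copy to be destroyed
// notifies the parent (the "end-of-life action" in my real code).
class Child {
public:
    explicit Child(Parent& p) : parent_(&p), refs_(new int(1)) {
        parent_->acquired();
    }
    Child(const Child& o) : parent_(o.parent_), refs_(o.refs_) { ++*refs_; }
    ~Child() {
        if (--*refs_ == 0) {   // last copy: run the end-of-life action
            delete refs_;
            parent_->released();
        }
    }
private:
    Child& operator=(const Child&); // assignment omitted in this sketch
    Parent* parent_;
    int*    refs_;
};
```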
the code is like this:
while(1)
{
    Child c(parent, someEndOfLifeAction);
    // do something with child
}
each iteration around the loop, 'c' gets destroyed and its destructor runs,
carrying out some possible EndOfLifeAction on its parent.
the problem is that i require this action to be done (i.e. the destructor
must be called) before a new Child is created, as it is a strict
requirement that the parent only ever has at most one child.
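in other words, with the original loop a conforming compiler has to destroy
each child before the next one is constructed; i can even assert the
invariant (again with made-up stand-in names, not my real code):

```cpp
#include <cassert>

// stand-in state: true while a child is alive. the assert in the
// constructor makes any overlap of two children fail loudly instead
// of silently corrupting the parent's state.
static bool parentHasChild = false;

struct Child {
    Child()  { assert(!parentHasChild); parentHasChild = true; }
    ~Child() { parentHasChild = false; }   // the end-of-life action
};

void runLoop(int iterations) {
    for (int i = 0; i < iterations; ++i) {
        Child c;
        // do something with child
    }   // c's destructor must run here, before the next iteration's Child
}
```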
compiling this without any optimisations, the code works fine. with
optimisations (-O2), gcc seems to compile the code as if it were written
like this instead:
Child c;
while(1)
{
    c = Child(parent, someEndOfLifeAction);
    // do something with the child
}
this would be fine, except that two children now coexist at the same time,
breaking my system.
from looking around on google, i see that a likely suspect is
-fstrength-reduce.
is there any way i can disable gcc's optimisation purely for my Child/parent
system (or at least for some particular loops) without turning the
offending optimisation off altogether?
if i were to turn it off globally for my system (and any extensions), would
it likely cause a significant speed hit (given that the software is mostly
mathematical, with potentially a few loops)?
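one workaround i'm considering, in case it's relevant: splitting the
Child/parent code into its own translation unit and compiling just that
file without optimisation, e.g. with a make rule like this (the file name
"child.cpp" is hypothetical):

```make
# everything else builds at -O2; only the file holding the Child/parent
# code builds at -O0, so its destructor timing is left untouched.
CXXFLAGS = -O2

child.o: child.cpp
	$(CXX) -O0 -c -o $@ $<
```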
thanks,
gav
--
Gav Wood,
University of York, UK.