Do all platforms make the same distinction between asynchronous
(signal) and synchronous contexts?
I like Johan's suggestion of an alternate unw_init_local interface
(unw_init_from_signal_context_local) that takes the signal context as
the starting point. From what you are saying, an ia64 implementation of
this would have to ignore the signal context, call getcontext, and
unwind until it found the signal frame. But is that true for all
platforms?
David Mosberger wrote:
On Fri, 26 Mar 2004 08:41:09 -0800, "Young, Mark" <address@hidden> said:
Mark> I second the motion. Using a signal context to start unwinding
Mark> would be useful in our application.
That will never be possible. It would always require non-trivial
extra work (namely storing the "preserved" state) and that of course
would slow down _all_ signal delivery.
Mark> In addition to the above sorts of uses, we use libunwind to
Mark> implement a time-sampled call-stack profiler. On each interval
Mark> timer signal, we unwind and capture the call stack. The
Mark> interrupts are frequent and anything done to improve
Mark> performance would be helpful.
Oh, I'm 100% with you there. But there is no magic trick. If you
want to unwind, you _must_ have "preserved" state; there are no
two ways about it. BTW: my colleague, Hans Boehm, has a tool called
qprof which also can do time-sampled call-stack profiling. In
addition, he has a garbage collector which can be used as a
leak-detector and in that mode, it can be desirable to unwind the
stack for each allocation. Once I actually manage to find some time
to do real work, I'll use this as an important benchmark for
libunwind.
If you watch the bk tree, you'll have seen that most of my recent work
has been on performance tuning libunwind. My goal is to have it beat
anything else in existence and to get as close as possible to the
performance of a frame-chain based stack-tracer.
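
For reference, the "frame-chain based stack-tracer" used as the yardstick here is the trivially cheap walk available when every frame keeps a frame pointer: each frame begins with {saved frame pointer, return address}, so a trace is a couple of loads per frame with no unwind tables at all. This is a hedged sketch assuming an x86-style frame layout and compilation with frame pointers enabled (e.g. -fno-omit-frame-pointer); it is not part of libunwind:

```c
#include <stddef.h>

/* Assumed layout of a frame-pointer-linked stack frame. */
struct frame {
    struct frame *next;   /* saved frame pointer of the caller */
    void         *ret;    /* return address into the caller */
};

/* Walk the frame chain starting from our own frame; return the
 * number of return addresses stored into buf. */
static int trace_frame_chain(void **buf, int max)
{
    struct frame *fp = __builtin_frame_address(0);
    int n = 0;

    while (fp != NULL && n < max && fp->ret != NULL) {
        buf[n++] = fp->ret;
        if (fp->next <= fp)   /* stacks grow down; guard against cycles */
            break;
        fp = fp->next;
    }
    return n;
}
```

The appeal is obvious: no state lookup, no table decoding, just pointer chasing. The drawback is equally obvious, since it fails on code compiled without frame pointers, which is precisely the gap that table-driven unwinders like libunwind fill.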