
From: David Mosberger
Subject: RE: [libunwind] libunwind segv with gcc 2.96 programs run on Redhat EL3 with GLIBC 2.3.2
Date: Wed, 18 Feb 2004 21:47:57 -0800

>>>>> On Tue, 17 Feb 2004 17:10:37 -0500, "Harrow, Jerry" <address@hidden> said:

  Jerry> The patch works wonders.


  Jerry> Thanks for looking into it.  I really appreciate it.

You're very welcome.

It turns out there are a couple of other bugs related to NaT-bit
handling.  I'm adding test-cases and fixing them as they get
discovered.  This testing is long overdue, so I'm glad that your bug
report reminded me that this is an area in need of work.  (None of
the bugs found so far should cause segfaults, however, and for pure
backtracing, you won't care about NaT bits most of the time.)

  >>> Jerry, just to be clear: even so, libunwind doesn't (and really
  >>> cannot) guarantee that local unwinding with bad unwind info
  >>> won't cause a crash (remote unwinding doesn't have this issue).
  >>> So if you want to be super-safe, you may want to install a
  >>> SIGSEGV handler.

  Jerry> I understand.  I don't think we have any other methods to
  Jerry> provide call stacks though.  The callstack collection can be
  Jerry> turned off by a user of our tool, so we will just release
  Jerry> note the problem and the workarounds (turn off callstack
  Jerry> collection or re-compile with a newer compiler).

OK, sounds reasonable.

I suspect you understand this already, but just to be clear: the
problem is really a fundamental one and not a limitation of
libunwind itself.  There is no thread-safe and efficient way to check
whether a memory access is safe, so the best way to handle untrusted
unwinding would be to install a SIGSEGV handler in your program for
the duration of an unwind (of course, depending on how your tool
works, that may or may not be feasible).  Having said that, SEGVs are
quite rare in my experience, because even with bad unwind data, the
sanity checks in libunwind will usually catch the errors long before
they cause "damage".

  Jerry> Coincidentally, right after I got past the failures with
  Jerry> libunwind, I discovered we are also getting a SEGV in
  Jerry> pthread_exit() which apparently has problems unwinding gcc
  Jerry> 2.96-compiled application stacks also.

  Jerry>        Program received signal SIGSEGV, Segmentation fault.
  Jerry> [Switching to Thread 2305843009230485712 (zombie)]
  Jerry> 0x20000000008a1a41 in _Unwind_GetBSP () from
  Jerry> /lib/

Argh.  Not sure where that's coming from.

  Jerry> Likewise, recompiling with a newer gcc fixes this problem
  Jerry> also.  Obviously, upgrading to a newer gcc is strongly
  Jerry> "suggested" for all :-).

Yes, 2.96 is really very old for ia64-purposes.

Anyhow, let me know if you see any other unexpected behavior in libunwind.


