
Re: [Libunwind-devel] 10% lost unwind traces on x86-64?


From: Lassi Tuura
Subject: Re: [Libunwind-devel] 10% lost unwind traces on x86-64?
Date: Tue, 9 Mar 2010 18:09:32 +0100

Hi,

I've debugged a number of the failures I saw. This is still ongoing as it's
pretty laborious, but so far everything has fallen into one of the following
clear categories:

1) Trace stops because of missing unwind info: PLT entries and
__do_global_ctors_aux / _init.

2) Failure to trace at function entry; .eh_frame information exists and is
correct. I suspect fetch_proc_info() should use "ip", not "--ip", to locate the
FDE. In all cases I examined there is no adjacent preceding FDE, so the lookup
by ip-1 comes up empty.
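
To make the boundary condition concrete, here is a minimal, self-contained
sketch; it is not libunwind code (the struct and the names are made up), it
just shows the kind of range check I have in mind:

  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical FDE covering [start, start + len). */
  struct fde { uint64_t start, len; };

  static int fde_covers (const struct fde *f, uint64_t ip)
  {
    return ip >= f->start && ip < f->start + f->len;
  }

  int main (void)
  {
    struct fde callee = { 0x400500, 0x80 };  /* FDE of the sampled function */
    uint64_t ip = 0x400500;                  /* sample hit the entry point */

    /* Lookup by ip finds the FDE; lookup by ip - 1 falls just before it,
       and with no adjacent preceding FDE it comes up empty.  */
    printf ("lookup(ip)   -> %d\n", fde_covers (&callee, ip));      /* 1 */
    printf ("lookup(ip-1) -> %d\n", fde_covers (&callee, ip - 1));  /* 0 */
    return 0;
  }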

3) Failure to trace at a CFA transition boundary; .eh_frame information exists
and is correct, but libunwind either stops or reports corrupt addresses as if
it were looking at the wrong location on the stack. I suspect the
run_cfi_program() loop should run while curr_ip <= ip, not while curr_ip < ip.
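
Again purely as an illustration (toy types, not run_cfi_program() itself), the
sketch below shows why the row computed for a sample that lands exactly on a
CFA transition differs between the two loop conditions:

  #include <stdint.h>
  #include <stdio.h>

  enum op { ADVANCE_LOC, DEF_CFA_OFFSET };
  struct insn { enum op op; uint64_t arg; };

  /* Toy CFI interpreter: returns the CFA offset in effect at 'ip'. */
  static uint64_t cfa_offset_at (const struct insn *prog, int n,
                                 uint64_t start, uint64_t ip, int inclusive)
  {
    uint64_t curr_ip = start, cfa_offset = 16;  /* pretend CIE initial rule */
    int i = 0;

    while (i < n && (inclusive ? curr_ip <= ip : curr_ip < ip))
      {
        if (prog[i].op == ADVANCE_LOC)
          curr_ip += prog[i].arg;
        else
          cfa_offset = prog[i].arg;
        i++;
      }
    return cfa_offset;
  }

  int main (void)
  {
    /* Function starts at 0x1000; at 0x1010 (say, after a push) the CFA
       offset changes to 24.  The sample lands exactly on 0x1010.  */
    const struct insn prog[] = {
      { ADVANCE_LOC,    0x10 },
      { DEF_CFA_OFFSET, 24   },
    };
    uint64_t ip = 0x1010;

    printf ("curr_ip <  ip: offset %d\n",
            (int) cfa_offset_at (prog, 2, 0x1000, ip, 0));  /* 16, stale row */
    printf ("curr_ip <= ip: offset %d\n",
            (int) cfa_offset_at (prog, 2, 0x1000, ip, 1));  /* 24 */
    return 0;
  }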

4) Failure to trace within a function epilogue. The code moves the stack
pointer (e.g. adds to %rsp or pops registers off the stack), but these CFA
changes are not recorded in .eh_frame, and libunwind appears to read the wrong
location on the stack. AFAICT GCC 4.4 (or even 4.5) is required to get correct
unwind information in epilogues. It is possible -fasynchronous-unwind-tables is
also needed.
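
A quick way to check what a given compiler emits for 4 is to build a small
test file with the flag and dump the frame info; the function below is just an
arbitrary example with a non-trivial epilogue:

  /* epi.c -- compile with e.g.
   *   gcc -O2 -fasynchronous-unwind-tables -c epi.c
   * then dump the frame info with
   *   readelf --debug-dump=frames epi.o
   * If the epilogue is covered, the FDE has additional rows after the
   * register pops / %rsp adjustment; if not, a sample taken inside the
   * epilogue gets a stale CFA rule, which is exactly category 4.  */
  extern int f (int);

  int work (int n)
  {
    int sum = 0, i;
    for (i = 0; i < n; i++)
      sum += f (i);   /* the call forces a real prologue and epilogue */
    return sum;
  }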

5) unw_is_signal_frame() needs c->validate = 1; because of the above bugs it
otherwise tends to crash.

I'll see if I can patch at least 2 and 3, and maybe the PLT part of 1, and
will check whether GCC 4.4+ and possibly -fasynchronous-unwind-tables help with
the missing unwind info for global ctors in 1 and with 4. I'm still
investigating whether we need to rebuild everything with
-fasynchronous-unwind-tables.

A patch for 5 was already circulated, although in my version I save and
restore c->validate around the dwarf_get() calls.
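
For reference, the shape of what I do is sketched below; the cursor type and
the read function are stand-ins (the real code touches c->validate and calls
dwarf_get()), only the save/set/restore pattern matters:

  #include <stdint.h>
  #include <string.h>
  #include <stdio.h>

  /* Stand-in cursor with just the flag we care about. */
  struct toy_cursor { int validate; };

  /* Stand-in for a dwarf_get()-style read; a real one would do a checked
     access when c->validate is set instead of dereferencing blindly. */
  static int toy_read (struct toy_cursor *c, const void *addr, uint64_t *val)
  {
    (void) c;
    memcpy (val, addr, sizeof *val);
    return 0;
  }

  /* Peek at two code words at 'ip' with validation forced on, restoring
     the caller's setting before returning. */
  static int peek_code_words (struct toy_cursor *c, const unsigned char *ip,
                              uint64_t *w0, uint64_t *w1)
  {
    int saved_validate = c->validate;
    int ret;

    c->validate = 1;
    ret = toy_read (c, ip, w0);
    if (ret >= 0)
      ret = toy_read (c, ip + 8, w1);
    c->validate = saved_validate;
    return ret;
  }

  int main (void)
  {
    static const unsigned char code[16];
    struct toy_cursor c = { 0 };
    uint64_t w0, w1;

    peek_code_words (&c, code, &w0, &w1);
    printf ("validate restored to %d\n", c.validate);  /* prints 0 */
    return 0;
  }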

Any wisdom on why the current code uses --ip for 2, and why it uses < rather
than <= for 3, would be very welcome.

Regards,
Lassi




