
gdb 5.0 "ia64-unknown-linux" segv error


From: gjertsen
Subject: gdb 5.0 "ia64-unknown-linux" segv error
Date: Fri, 8 Dec 2000 09:42:59 -0500


Summary
------------
I have run into a bug with gdb 5.0 configured as
"ia64-unknown-linux" in the IA64 TurboLinux
environment that results in a segmentation violation inside gdb
itself (in the gdb source file gdb/dwarf2read.c).

Environment
----------------

gdb-001121-1 (GNU gdb 5.0, "ia64-unknown-linux")
IA64 Big-Sur (2 proc) using TurboLinux
kernel-2.4.0test10-55
glibc-2.2-001117
gnupro-2.96-001117p2

New toolchain and glibc 2.2 for the beta TurboLinux release on 11-23

Description
---------------
gdb hits a segmentation violation when I try to set a breakpoint
in my traced program right after loading the program into gdb,
but before executing/tracing the program itself. I originally
noticed the problem while tracing the program: the traced program
hit a segv of its own, and gdb then died when I tried to get a
stack backtrace on the halted program.

I can't reproduce this with a toy program, and I can run gdb on
itself without problems.

Debugging Info
-------------------

Using a copy of gdb compiled with -g, I get the following:
(Note: the core dump was not as useful, so we examine this at runtime.)

address@hidden mmfsd]$ /usr/gnu/bin/gdb /usr/gnu/bin/gdb
(gdb) set args ./mmfsd -s ./mmfsd.map
(gdb) run
Starting program: /usr/gnu/bin/gdb ./mmfsd -s ./mmfsd.map
(gdb) break mainBody

Program received signal SIGSEGV, Segmentation fault.
read_typedef (die=Cannot access memory at address 0xffffffffffffffe0
) at dwarf2read.c:2793
2793    {

(gdb) bt
#0  read_typedef (die=Cannot access memory at address 0xffffffffffffffe0)
    at dwarf2read.c:2793
#1  0x40000000003a4890 in read_type_die (die=0x6000000000321c50,
    objfile=0x6000000000094970, cu_header=0x9fffffffffffe810)
    at dwarf2read.c:4576
#2  0x40000000003a40e0 in tag_type_to_type (die=0x6000000000321c50,
    objfile=0x6000000000094970, cu_header=0x9fffffffffffe810)
    at dwarf2read.c:4526
#3  0x40000000003a3b30 in die_type (die=0x6000000000321c50,
    objfile=0x6000000000094970, cu_header=0x9fffffffffffe810)
    at dwarf2read.c:4456
#4  0x4000000000392eb0 in read_typedef (die=0x6000000000321c50,
    objfile=0x6000000000094970, cu_header=0x9fffffffffffe810)
    at dwarf2read.c:2801
#5  0x40000000003a4890 in read_type_die (die=0x6000000000321c50,
    objfile=0x6000000000094970, cu_header=0x9fffffffffffe810)
    at dwarf2read.c:4576
#6  0x40000000003a40e0 in tag_type_to_type (die=0x6000000000321c50,
    objfile=0x6000000000094970, cu_header=0x9fffffffffffe810)
    at dwarf2read.c:4526
#7  0x40000000003a3b30 in die_type (die=0x6000000000321c50,
    objfile=0x6000000000094970, cu_header=0x9fffffffffffe810)
    at dwarf2read.c:4456
...

(gdb) info registers
r0             0x0      0
r1             0x6000000000020be8       6917529027641215976
r2             0x9fffffffff9cb830       -6917529027647588304
r3             0x9fffffffffffe660       -6917529027641088416
r4             0x43ae0  277216
r5             0x3ffd6d40       1073573184
r6             0x58f    1423
r7             0x3f53e120       1062461728
r8             0x6000000000321c50       6917529027644365904
r9             0x2000000000328208       2305843009217004040
r10            0x0      0
r11            0x9ffffffffffffcd0       -6917529027641082672
r12            0x9fffffffff9cb7f0       -6917529027647588368
r13            0xe00000003f1d0000       -2305843008154828800

At first glance it looks like we are stuck in a recursive cycle
that eventually overflows the stack. This kind of cycling through
the type-reading routines also shows up in another toy program
I've traced, but there it terminates; here there are at least
100K stack frames in the cycle (and I'm still counting).
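Some additional speculation on my part: the same die pointer
(0x6000000000321c50) appears in every frame of the backtrace, which
would fit a DW_TAG_typedef DIE whose DW_AT_type reference leads back
to itself. Below is a minimal, self-contained sketch -- NOT the
actual gdb source; all names, fields, and the memoization detail are
simplified stand-ins -- of how such a self-referential typedef could
drive the read_typedef / die_type / tag_type_to_type / read_type_die
chain into unbounded recursion if the type is only memoized after the
recursive call returns:

#include <stddef.h>

struct die_info
  {
    int tag;                    /* e.g. DW_TAG_typedef */
    struct die_info *type_ref;  /* DIE referenced by DW_AT_type */
    void *type;                 /* built type; NULL until constructed */
  };

enum { DW_TAG_typedef = 0x16 };

static void *read_type_die (struct die_info *die);

/* Mirror of frames #2/#6: hand the referenced DIE to the reader.  */
static void *
tag_type_to_type (struct die_info *die)
{
  return read_type_die (die);
}

/* Mirror of frames #3/#7: follow the DW_AT_type reference.  */
static void *
die_type (struct die_info *die)
{
  return tag_type_to_type (die->type_ref);
}

/* Mirror of frames #0/#4: a typedef's type is whatever its
   DW_AT_type refers to -- computed before anything is memoized.  */
static void *
read_typedef (struct die_info *die)
{
  return die_type (die);
}

/* Mirror of frames #1/#5: dispatch on the DIE's tag.  The memo
   check never fires below, because die->type is only set after the
   recursive call returns.  */
static void *
read_type_die (struct die_info *die)
{
  if (die->type == NULL && die->tag == DW_TAG_typedef)
    die->type = read_typedef (die);
  return die->type;
}

int
main (void)
{
  /* A self-referential typedef DIE, as the identical die pointer
     in every backtrace frame above suggests.  */
  struct die_info die = { DW_TAG_typedef, &die, NULL };
  read_type_die (&die);         /* unbounded recursion -> SIGSEGV */
  return 0;
}

If that is what is happening, marking the typedef's type as
in-progress (or memoizing a stub) before recursing into die_type
would break the cycle -- but I have not verified this against the
actual DWARF output from the new toolchain.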




